Try it:

```shell
npx skills add https://github.com/livekit/agent-skills --skill livekit-agents
```

This skill provides opinionated guidance for building voice AI agents with LiveKit Cloud. It assumes you are using LiveKit Cloud (the recommended path) and encodes how to approach agent development, not API specifics. All factual information about APIs, methods, and configurations must come from live documentation.
This skill is for LiveKit Cloud developers. If you're self-hosting LiveKit, some recommendations (particularly around LiveKit Inference) won't apply directly.
Before writing ANY code, complete this checklist:
- LIVEKIT_URL, LIVEKIT_API_KEY, and LIVEKIT_API_SECRET from your Cloud project

This checklist applies regardless of whether MCP is available. MCP provides documentation access but does NOT replace the guidance in this skill.
LiveKit Cloud is the fastest way to get a voice agent running. It provides:
Sign up at cloud.livekit.io if you haven't already
Create a project (or use an existing one)
Get your credentials from the project settings:
- LIVEKIT_URL - Your project's WebSocket URL (e.g., wss://your-project.livekit.cloud)
- LIVEKIT_API_KEY - API key for authentication
- LIVEKIT_API_SECRET - API secret for authentication

Set these as environment variables (typically in .env.local):

```shell
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your-api-key
LIVEKIT_API_SECRET=your-api-secret
```
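Before the agent worker connects, it is worth failing fast if any of these variables are unset. A minimal, stdlib-only sketch; the helper name and structure are illustrative, not part of the LiveKit SDK:

```python
import os

# The three credentials a LiveKit Cloud project provides (see above).
REQUIRED_VARS = ("LIVEKIT_URL", "LIVEKIT_API_KEY", "LIVEKIT_API_SECRET")

def missing_credentials(env=None):
    """Return the names of any required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: an incomplete environment is reported before the worker starts.
missing = missing_credentials({"LIVEKIT_URL": "wss://your-project.livekit.cloud"})
```

At startup, call `missing_credentials()` with no argument (so it reads `os.environ`) and exit with an error naming the missing variables instead of failing later with an opaque connection error.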
The LiveKit CLI can automate credential setup. Consult the CLI documentation for current commands.
LiveKit Inference is the recommended way to use AI models with LiveKit Cloud. It provides access to leading AI model providers—all through your LiveKit credentials with no separate API keys needed.
Benefits of LiveKit Inference:
Consult the documentation for available models, supported providers, and current usage patterns. The documentation always has the most up-to-date information.
LiveKit Agents is a fast-evolving SDK. Model training data is outdated the moment it's created. When working with LiveKit:
This rule applies even when confident about an API. Verify anyway.
Before writing any LiveKit code, ensure access to the LiveKit documentation MCP server. This provides current, verified API information and prevents reliance on stale model knowledge.
Look for livekit-docs MCP tools. If available, use them for all documentation lookups:
If the LiveKit MCP server is not configured, inform the user and recommend installation. Installation instructions for all supported platforms are available at:
https://docs.livekit.io/intro/mcp-server/
Fetch the installation instructions appropriate for the user's coding agent from that page.
If MCP cannot be installed in the current session:
```python
# UNVERIFIED: Please check docs.livekit.io for current API
```

Voice AI agents have fundamentally different requirements than text-based agents or traditional software. Internalize these principles:
Voice conversations are real-time. Users expect responses within hundreds of milliseconds, not seconds. Every architectural decision should consider latency impact:
Large system prompts and extensive tool lists directly increase latency. A voice agent with 50 tools and a 10,000-token system prompt will feel sluggish regardless of model speed.
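One way to keep this cost visible is a rough budget check on the fixed context an agent carries. The 4-characters-per-token ratio below is a common heuristic, not a LiveKit metric, and all names here are illustrative:

```python
def fixed_context_tokens(instructions, tool_descriptions):
    """Rough token estimate (~4 characters per token) for an agent's fixed context:
    the system prompt plus every tool description sent on each turn."""
    chars = len(instructions) + sum(len(d) for d in tool_descriptions)
    return chars // 4

# A lean agent: short instructions, a handful of tools.
lean = fixed_context_tokens("You book appointments.", ["check availability", "book slot"])

# A bloated agent: a ~10,000-token prompt alone strains any latency budget.
bloated = fixed_context_tokens("x" * 40_000, [])
```

Running a check like this in CI makes context growth a reviewable number rather than a slow drift.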
Design agents with minimal viable context:
Voice interface constraints differ from text:
Complex voice agents should not be monolithic. LiveKit Agents supports structured workflows that maintain low latency while handling sophisticated use cases.
A single agent handling an entire conversation flow accumulates:
This creates latency and reduces reliability.
Handoffs allow one agent to transfer control to another. Use handoffs to:
Design handoffs around natural conversation boundaries where context can be summarized rather than transferred wholesale.
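The handoff pattern can be sketched framework-agnostically; consult the documentation for the actual LiveKit handoff API. Everything below (class, field, and function names) is illustrative:

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    """Illustrative stand-in for an agent definition."""
    name: str
    instructions: str

intake = AgentSpec("intake", "Greet the caller and identify what they need.")
billing = AgentSpec("billing", "Resolve billing questions for an identified caller.")

def hand_off(target, summary):
    """Transfer control at a conversation boundary, passing a summary
    of the context rather than the full conversation history."""
    return {"next_agent": target.name, "context": summary}

transfer = hand_off(billing, "Caller wants a refund for last month's invoice.")
```

The key design choice is the `summary` argument: the specialist receives a compact, purpose-built context instead of inheriting the whole transcript.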
Tasks are tightly-scoped prompts designed to achieve a specific outcome. Use tasks for:
Consult the documentation for implementation details on handoffs and tasks.
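Until then, a task can be modeled as a tightly-scoped prompt plus a completion check. Again a framework-agnostic sketch with illustrative names, not the LiveKit task API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """A tightly-scoped prompt designed to achieve one specific outcome."""
    prompt: str
    is_done: Callable[[dict], bool]

collect_phone = Task(
    prompt="Collect the caller's phone number and read it back to confirm.",
    is_done=lambda state: bool(state.get("phone_confirmed")),
)

# The task completes only when its single outcome is achieved.
before = collect_phone.is_done({})
after = collect_phone.is_done({"phone_confirmed": True})
```

Scoping the prompt to one outcome is what keeps tasks fast: the model sees only what it needs to finish this step.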
Voice agent behavior is code. Every agent implementation MUST include tests. Shipping an agent without tests is shipping untested code.
When building or modifying a LiveKit agent:
- Create a tests/ directory if one doesn't exist

When modifying agent behavior (instructions, tool descriptions, workflows), begin by writing tests for the desired behavior:
This approach prevents shipping agents that "seem to work" but fail in production.
At minimum, write tests for:
Focus tests on:
Use LiveKit's testing framework. Consult the testing documentation via MCP for current patterns:
search: "livekit agents testing"
The framework supports:
Agents that "seem to work" in manual testing frequently fail in production:
Tests catch these issues before users do.
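To make this concrete: a behavior test pins the decisions you rely on against a deterministic piece of the agent's logic. The LiveKit testing framework provides richer helpers (see the docs); the stub and names below are illustrative:

```python
def route_intent(transcript):
    """Toy stand-in for the behavior under test: map a caller
    utterance to the agent that should handle it."""
    text = transcript.lower()
    if "refund" in text or "invoice" in text:
        return "billing"
    if "cancel" in text:
        return "retention"
    return "intake"

# pytest-style behavior tests: pin the routing decisions you rely on.
def test_billing_routing():
    assert route_intent("I need a refund on my invoice") == "billing"

def test_unknown_goes_to_intake():
    assert route_intent("Hi there") == "intake"
```

Even this trivial suite catches the regression class described above: an instruction tweak that silently reroutes billing calls fails the build instead of failing a caller.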
If a user explicitly requests no tests, proceed without them but inform them:
"I've built the agent without tests as requested. I strongly recommend adding tests before deploying to production. Voice agents are difficult to verify manually and tests prevent silent regressions."
A common pitfall is starting with one agent that "does everything" and adding tools/instructions over time. Instead, design workflow structure upfront, even if initial implementation is simple.
Latency issues compound. An agent that feels "a bit slow" in development becomes unusable in production with real network conditions. Measure and optimize latency continuously.
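Measuring is cheap to wire in: wrap each pipeline stage and record wall-clock time. A stdlib-only sketch; the budget value and the stand-in stage are illustrative:

```python
import time

def timed(fn, *args, **kwargs):
    """Run one pipeline stage, returning its result and elapsed milliseconds."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

# Example: time a stand-in for an LLM call and check it against a budget.
reply, ms = timed(lambda prompt: prompt.upper(), "hello caller")
BUDGET_MS = 500  # illustrative per-stage budget for a sub-second response
over_budget = ms > BUDGET_MS
```

Logging per-stage timings from day one makes "a bit slow" a number you can track before real network conditions make it unusable.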
Examples in documentation demonstrate specific patterns. Copying code without understanding its purpose leads to bloated, poorly-structured agents. Understand what each component does before including it.
Agent behavior is code. Prompt changes affect behavior as much as code changes. Test agent behavior with the same rigor as traditional software. Never deliver an agent implementation without at least one test file.
Reiterating the critical rule: never trust model memory for LiveKit APIs. The SDK evolves faster than model training cycles. Verify everything.
Always consult documentation for:
This skill provides guidance on:
The distinction matters: this skill tells you how to think about building voice agents. The documentation tells you how to implement specific features.
When using LiveKit documentation via MCP, note any gaps, outdated information, or confusing content. Reporting documentation issues helps improve the ecosystem for all developers.
Building effective voice agents with LiveKit Cloud requires:
These principles remain valid regardless of SDK version or API changes. For all implementation specifics, consult the LiveKit documentation via MCP.