voice-assistant

Verified · Scanned 2/17/2026

Provides a real-time voice assistant that streams browser mic audio to configurable STT/TTS providers and an OpenClaw gateway. It reads env vars such as DEEPGRAM_API_KEY and ELEVENLABS_API_KEY, connects to wss://api.deepgram.com and https://api.elevenlabs.io, and its Quick Start runs uv run scripts/server.py.

from clawhub.ai · vc128cb3 · 42.1 KB · 0 installs
Scanned from 0.1.0 at c128cb3 · Transparency log
$ vett add clawhub.ai/charantejmandali18/voice-assistant

Voice Assistant

Real-time voice interface for your OpenClaw agent. Talk to your agent and hear it respond — with configurable STT and TTS providers, full streaming at every stage, and sub-2-second time-to-first-audio.

Architecture

Browser Mic → WebSocket → STT (Deepgram / ElevenLabs) → Text
  → OpenClaw Gateway (/v1/chat/completions, streaming) → Response Text
  → TTS (Deepgram Aura / ElevenLabs) → Audio chunks → Browser Speaker

The voice interface connects to your running OpenClaw gateway's OpenAI-compatible endpoint. It's the same agent with all its context, tools, and memory — just with a voice.
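As a sketch of that gateway hookup: the response arrives as OpenAI-style Server-Sent Events, and the helper below extracts the text deltas from the `data:` lines. The `iter_sse_text` function and the sample payload are illustrative (they follow the standard chat-completions SSE shape, not code from this package):

```python
import json
from typing import Iterable, Iterator

def iter_sse_text(lines: Iterable[str]) -> Iterator[str]:
    """Yield content deltas from OpenAI-style SSE lines (illustrative helper)."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # the gateway signals end-of-stream with [DONE]
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Example: two streamed tokens followed by the terminator
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_sse_text(sample)))  # Hello
```

In the real server these lines would come from a streaming HTTP response to `OPENCLAW_GATEWAY_URL`/chat/completions rather than a list.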

Quick Start

cd {baseDir}
cp .env.example .env
# Fill in your API keys and gateway URL
uv run scripts/server.py
# Open http://localhost:7860 and click the mic

Supported Providers

STT (Speech-to-Text)

Provider     Model                Latency      Notes
Deepgram     nova-2 (streaming)   ~200-300ms   WebSocket streaming, best accuracy/speed
ElevenLabs   Scribe v1            ~300-500ms   REST-based, good multilingual

TTS (Text-to-Speech)

Provider     Model        Latency   Notes
Deepgram     aura-2       ~200ms    WebSocket streaming, low cost
ElevenLabs   Turbo v2.5   ~300ms    Best voice quality, streaming

Configuration

All configuration is via environment variables in .env:

# === Required ===
OPENCLAW_GATEWAY_URL=http://localhost:4141/v1    # Your OpenClaw gateway
OPENCLAW_MODEL=claude-sonnet-4-5-20250929        # Model your gateway routes to

# === STT Provider (pick one) ===
VOICE_STT_PROVIDER=deepgram                      # "deepgram" or "elevenlabs"
DEEPGRAM_API_KEY=your-key-here                   # Required if STT=deepgram
ELEVENLABS_API_KEY=your-key-here                 # Required if STT=elevenlabs

# === TTS Provider (pick one) ===
VOICE_TTS_PROVIDER=elevenlabs                    # "deepgram" or "elevenlabs"
# Uses the same API keys as above

# === Optional Tuning ===
VOICE_TTS_VOICE=rachel                           # ElevenLabs voice name/ID
VOICE_TTS_VOICE_DG=aura-2-theia-en              # Deepgram Aura voice
VOICE_VAD_SILENCE_MS=400                         # Silence before end-of-turn (ms)
VOICE_SAMPLE_RATE=16000                          # Audio sample rate
VOICE_SERVER_PORT=7860                           # Server port
VOICE_SYSTEM_PROMPT=""                           # Optional system prompt override
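A minimal sketch of how the server might resolve these variables, including the "pick one provider, supply its key" rule. The `VoiceConfig` dataclass and `load_config` helper are illustrative assumptions, not the actual code in scripts/server.py:

```python
import os
from dataclasses import dataclass

@dataclass
class VoiceConfig:
    gateway_url: str
    model: str
    stt_provider: str
    tts_provider: str
    vad_silence_ms: int
    sample_rate: int
    port: int

def load_config(env=os.environ) -> VoiceConfig:
    """Read voice-assistant settings from the environment with defaults."""
    stt = env.get("VOICE_STT_PROVIDER", "deepgram")
    tts = env.get("VOICE_TTS_PROVIDER", "elevenlabs")
    # Each selected provider needs its own API key; fail fast if one is missing.
    key_var = {"deepgram": "DEEPGRAM_API_KEY", "elevenlabs": "ELEVENLABS_API_KEY"}
    for provider in {stt, tts}:
        if provider not in key_var:
            raise ValueError(f"unknown provider: {provider}")
        if not env.get(key_var[provider]):
            raise ValueError(f"{key_var[provider]} is required for {provider}")
    return VoiceConfig(
        gateway_url=env.get("OPENCLAW_GATEWAY_URL", "http://localhost:4141/v1"),
        model=env.get("OPENCLAW_MODEL", ""),
        stt_provider=stt,
        tts_provider=tts,
        vad_silence_ms=int(env.get("VOICE_VAD_SILENCE_MS", "400")),
        sample_rate=int(env.get("VOICE_SAMPLE_RATE", "16000")),
        port=int(env.get("VOICE_SERVER_PORT", "7860")),
    )
```

Validating at startup (rather than on first request) means a missing key surfaces before the browser ever connects.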

Provider Combinations

Setup                             Best For
Deepgram STT + ElevenLabs TTS     Best quality voice output
Deepgram STT + Deepgram TTS       Lowest latency, single vendor
ElevenLabs STT + ElevenLabs TTS   Best multilingual support

How It Works

  1. Browser captures mic audio via Web Audio API and streams raw PCM over a WebSocket
  2. Server receives audio and pipes it to the configured STT provider's streaming endpoint
  3. STT returns partial transcripts in real-time; on end-of-utterance the full text is sent to the OpenClaw gateway
  4. OpenClaw gateway streams the LLM response token-by-token via SSE (Server-Sent Events)
  5. Tokens are accumulated into sentence-sized chunks and streamed to the TTS provider
  6. TTS returns audio chunks that are immediately forwarded to the browser over the same WebSocket
  7. Browser plays audio using the Web Audio API with a jitter buffer for smooth playback
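Step 5 above — accumulating streamed tokens into sentence-sized chunks before handing them to TTS — can be sketched as a small generator. The boundary characters and minimum length are illustrative tuning choices, not values taken from this package:

```python
from typing import Iterable, Iterator

SENTENCE_ENDS = ".!?\n"  # flush the buffer at these boundaries

def sentence_chunks(tokens: Iterable[str], min_len: int = 12) -> Iterator[str]:
    """Group streamed LLM tokens into sentence-sized strings for TTS."""
    buf = ""
    for tok in tokens:
        buf += tok
        # Flush once we see a sentence boundary and have enough text to speak
        if len(buf) >= min_len and buf.rstrip()[-1:] in SENTENCE_ENDS:
            yield buf.strip()
            buf = ""
    if buf.strip():  # flush any trailing partial sentence
        yield buf.strip()

tokens = ["Hello", " there", ".", " How", " are", " you", "?"]
print(list(sentence_chunks(tokens)))  # ['Hello there.', 'How are you?']
```

Chunking at sentence boundaries lets TTS start speaking while the LLM is still generating, which is where most of the time-to-first-audio saving comes from.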

Interruption Handling (Barge-In)

When the user starts speaking while the agent is still talking:

  • Current TTS audio is immediately cancelled
  • The agent stops its current response
  • New STT session begins capturing the user's interruption
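A minimal asyncio sketch of that barge-in flow, assuming playback runs as a task that checks a shared stop signal between audio chunks (the `speak` coroutine and its simulated chunk loop are illustrative):

```python
import asyncio

async def speak(stop: asyncio.Event) -> str:
    """Simulated TTS playback; checks for barge-in between chunks."""
    for i in range(100):
        if stop.is_set():
            return "cancelled"  # drop remaining audio immediately
        await asyncio.sleep(0.01)  # stand-in for sending one audio chunk
    return "finished"

async def main() -> str:
    stop = asyncio.Event()
    playback = asyncio.create_task(speak(stop))
    await asyncio.sleep(0.03)  # user starts talking mid-response
    stop.set()                 # barge-in: cancel the current TTS audio
    return await playback

print(asyncio.run(main()))  # cancelled
```

Checking the flag per chunk (rather than per response) is what makes the cutoff feel instant to the user.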

Usage Examples

User: "Hey, set up my voice assistant"
→ OpenClaw runs: cd {baseDir} && cp .env.example .env
→ Opens .env for the user to fill in API keys
→ Runs: uv run scripts/server.py

User: "Start a voice chat"
→ Opens http://localhost:7860 in the browser

User: "Switch TTS to Deepgram"
→ Updates VOICE_TTS_PROVIDER=deepgram in .env
→ Restarts the server

Troubleshooting

  • No audio output? Check that your TTS API key is valid and the provider is set correctly
  • High latency? Use Deepgram for both STT and TTS; ensure your gateway is on the same network
  • Cuts off speech? Increase VOICE_VAD_SILENCE_MS to 600-800ms
  • Echo/feedback? Use headphones, or enable the built-in echo cancellation in the browser UI

Latency Budget

Stage                      Target    Actual (typical)
Audio capture + VAD        <200ms    ~100-150ms
STT transcription          <400ms    ~200-400ms
OpenClaw LLM first token   <1500ms   ~500-1500ms
TTS first audio chunk      <400ms    ~200-400ms
Total first audio          <2.5s     ~1.0-2.5s
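Since the stages run mostly in sequence, the total row is roughly the sum of the per-stage figures. A quick arithmetic check with the typical values from the table above:

```python
stages = {  # stage: (typical_min_ms, typical_max_ms), copied from the table
    "capture+vad": (100, 150),
    "stt": (200, 400),
    "llm_first_token": (500, 1500),
    "tts_first_chunk": (200, 400),
}
lo = sum(a for a, _ in stages.values())
hi = sum(b for _, b in stages.values())
print(f"total first audio: ~{lo / 1000:.1f}-{hi / 1000:.1f}s")
```

This sums to roughly 1.0-2.5s, matching the total row; the LLM's first token dominates the budget.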