llm-supervisor
Provides graceful fallback from cloud LLMs to a local Ollama model and requires confirmation before generating code locally. The skill uses the ollama CLI (echo "your prompt" | ollama run qwen2.5:7b) and points agents at http://127.0.0.1:11434.
$ vett add clawhub.ai/dhardie/llm-supervisor
LLM Supervisor
Automatically switches OpenClaw between cloud and local LLMs when rate limits occur — without silently degrading code quality.
What this skill does
- Monitors cloud LLM errors (e.g. Anthropic rate limits)
- Automatically switches the main agent to a local Ollama model (see the quick check after this list)
- Ensures new agents inherit the active LLM mode
- Requires explicit user confirmation before running code tasks on local LLMs
- Switches back to cloud when manually requested (/llm switch cloud)
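The fallback path relies on the Ollama CLI and its local API described above. A quick way to confirm the default fallback model is installed and the local server is reachable (standard Ollama commands; the model name is this skill's documented default):
# Pull the default fallback model and check that it answers a prompt
$ ollama pull qwen2.5:7b
$ echo "Reply with one short sentence" | ollama run qwen2.5:7b
# Ollama serves its API on 127.0.0.1:11434 by default; list installed models
$ curl -s http://127.0.0.1:11434/api/tags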
Why this exists
Cloud LLMs are powerful but rate-limited.
Local LLMs are reliable but weaker for code.
This skill provides:
- Reliability without dead bots
- Transparency when switching models
- Safety for production codebases
Default behavior
- Enabled by default
- Auto-switches to local on cloud rate limit
- Chat and planning are always allowed
- Code tasks on local require the confirmation phrase (sketched after this list):
CONFIRM LOCAL CODE
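A rough sketch of how the confirmation gate plays out in a session. The agent wording below is illustrative; only the CONFIRM LOCAL CODE phrase and the qwen2.5:7b default come from this page:
agent> Cloud LLM is rate-limited; this code task would run locally on qwen2.5:7b.
agent> Reply with the confirmation phrase to proceed, or wait for the cooldown.
user>  CONFIRM LOCAL CODE
agent> Confirmed - running the code task on the local model.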
Commands
- /llm status - show the current mode
- /llm switch local - force local mode
- /llm switch cloud - return to cloud mode
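A typical exchange using the commands above, once a rate limit has triggered local mode (the status output format is illustrative, not the skill's exact wording):
/llm status
> mode: local (qwen2.5:7b) - switched after a cloud rate limit
/llm switch cloud
> mode: cloud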
Configuration
The following options are configurable:
- localModel (default: qwen2.5:7b)
- cooldownMinutes
- requireConfirmationForLocalCode
- confirmationPhrase
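A minimal sketch of how these options might look once set. The file name, its location, the JSON layout, and the 10-minute cooldown are assumptions; only the option names, the qwen2.5:7b default, and the CONFIRM LOCAL CODE phrase come from this page:
# Hypothetical config file - name, path, and schema are assumptions, not documented by the skill
$ cat > llm-supervisor.config.json <<'EOF'
{
  "localModel": "qwen2.5:7b",
  "cooldownMinutes": 10,
  "requireConfirmationForLocalCode": true,
  "confirmationPhrase": "CONFIRM LOCAL CODE"
}
EOF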
Safety
This skill will never:
- Silently downgrade code generation quality
- Burn cloud credits unexpectedly
- Modify existing agents mid-task
License
MIT