llm-supervisor
Provides graceful fallback from cloud LLMs to a local Ollama model, requiring user confirmation before any local code generation. The skill documents `ollama` CLI usage (`echo "your prompt" | ollama run qwen2.5:7b`) and points agents at the local Ollama API endpoint `http://127.0.0.1:11434`.
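The fallback-with-confirmation flow can be sketched as follows. Only the endpoint `http://127.0.0.1:11434` and the `qwen2.5:7b` model come from the description above; the function names, the `cloud_fn` callable, and the confirmation prompt are illustrative assumptions, not the skill's actual implementation.

```python
import json
import urllib.request

# Local endpoint the skill configures agents to use (assumption: /api/generate route).
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def ollama_generate(prompt, model="qwen2.5:7b"):
    # Roughly equivalent to: echo "your prompt" | ollama run qwen2.5:7b
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def generate_with_fallback(prompt, cloud_fn, local_fn=ollama_generate, confirm=input):
    """Try the cloud LLM first; on failure, ask before falling back to local Ollama."""
    try:
        return cloud_fn(prompt)
    except Exception:
        answer = confirm("Cloud LLM unavailable. Generate locally with Ollama? [y/N] ")
        if answer.strip().lower() == "y":
            return local_fn(prompt)
        raise
```

The callables are injected so the confirmation step and the local/cloud split can be tested without a running Ollama server; a real supervisor would wire `cloud_fn` to its cloud client and keep `ollama_generate` as the local path.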