✓Verified·Scanned 2/17/2026
ModelReady starts a local or Hugging Face model as an OpenAI-compatible server and lets users chat with it from the conversation. The skill executes python3 -m vllm.entrypoints.openai.api_server, writes "$RUN_DIR/defaults.env" (e.g. ~/.model2skill/defaults.env), and makes HTTP requests to http://${client_host}:${PORT}/v1.
from clawhub.ai·v3e244e6·8.1 KB·0 installs
Scanned from 1.0.0 at 3e244e6
$ vett add clawhub.ai/carol-gutianle/modelready
# ModelReady
ModelReady lets you start using a local or Hugging Face model immediately, without leaving clawdbot.
It turns a model into a running, OpenAI-compatible endpoint and allows you to chat with it directly from a conversation.
## When to use

Use this skill when you want to:
- Quickly start using a local or Hugging Face model
- Chat with a locally running model
- Test or interact with a model directly from chat
## Commands

### Start a model server

```
/modelready start repo=<path-or-hf-repo> port=<port> [tp=<n>] [dtype=<dtype>]
```

Examples:

```
/modelready start repo=Qwen/Qwen2.5-7B-Instruct port=19001
/modelready start repo=/home/user/models/Qwen-2.5 port=8010 tp=4 dtype=bfloat16
```
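Under the hood, `start` launches vLLM's OpenAI-compatible server (the entrypoint named in the scan summary). A minimal sketch of the roughly equivalent command, assuming the skill maps `tp` to `--tensor-parallel-size` and `dtype` to `--dtype` (the exact flag mapping is an assumption):

```shell
# Approximate equivalent of: /modelready start repo=/home/user/models/Qwen-2.5 port=8010 tp=4 dtype=bfloat16
# The flag mapping is assumed; these are standard options of vLLM's api_server entrypoint.
python3 -m vllm.entrypoints.openai.api_server \
  --model /home/user/models/Qwen-2.5 \
  --port 8010 \
  --tensor-parallel-size 4 \
  --dtype bfloat16
```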
### Chat with a running model

```
/modelready chat port=<port> text="<message>"
```

Example:

```
/modelready chat port=8010 text="hello"
```
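Because the endpoint follows the OpenAI API format, the chat command presumably reduces to a POST against the chat-completions route. A sketch of that call; the host, port default, and model id are assumptions:

```shell
# Rough HTTP equivalent of: /modelready chat port=8010 text="hello"
# Host and model id are assumptions; the route is the standard OpenAI chat-completions path.
curl -s http://127.0.0.1:8010/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen2.5-7B-Instruct",
        "messages": [{"role": "user", "content": "hello"}]
      }'
```

Any OpenAI-compatible client can target the same base URL (`http://<host>:<port>/v1`), so the skill's chat command is not the only way to talk to a started model.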
### Check status or stop the server

```
/modelready status port=<port>
/modelready stop port=<port>
```
### Set default host or port

```
/modelready set_ip ip=<host>
/modelready set_port port=<port>
```
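These defaults are presumably persisted in "$RUN_DIR/defaults.env" (e.g. ~/.model2skill/defaults.env), the file named in the scan summary. A hypothetical sketch of that file after both commands have run; the variable names are invented for illustration and are not confirmed by the source:

```shell
# Hypothetical contents of ~/.model2skill/defaults.env; variable names are assumptions.
MODELREADY_IP=127.0.0.1
MODELREADY_PORT=8010
```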
## Notes
- The model is served locally using vLLM.
- The exposed endpoint follows the OpenAI API format.
- The server must be started before sending chat requests.
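Since chat requests fail until the server is running, it can help to probe the endpoint before chatting. A minimal liveness check against the standard OpenAI-style model-listing route (host and port are assumptions):

```shell
# Assumed host/port; a running server answers with a JSON listing of served models.
curl -s http://127.0.0.1:8010/v1/models
```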