# ollama-memory-embeddings
This skill installs and configures OpenClaw to use Ollama for embeddings, updating `~/.openclaw/openclaw.json` and managing local model files under `~/.node-llama-cpp/models`. It runs shell commands during install (e.g., `bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh`) and sets `remote.apiKey` (default `ollama`) in the config.
An installable OpenClaw skill that uses Ollama as the embeddings server for
memory search (OpenAI-compatible `/v1/embeddings`).
Embeddings only — chat/completions routing is not affected.
This skill is available on GitHub under the MIT license.
## Features

- Interactive embedding model selection:
  - `embeddinggemma` (default — closest to OpenClaw built-in)
  - `nomic-embed-text` (strong quality, efficient)
  - `all-minilm` (smallest/fastest)
  - `mxbai-embed-large` (highest quality, larger)
- Optional import of a local embedding GGUF into Ollama (`ollama create`)
  - Detects: embeddinggemma, nomic-embed, all-minilm, mxbai-embed GGUFs
- Model name normalization (handles the `:latest` tag automatically)
- Surgical OpenClaw config update (`agents.defaults.memorySearch`)
- Post-write config sanity check
- Smart gateway restart (detects the available restart method)
- Two-step verification: model existence + endpoint response
- Non-interactive mode for automation (GGUF import is opt-in)
- Optional memory reindex during install (`--reindex-memory auto|yes|no`)
- Idempotent drift enforcement (`enforce.sh`)
- Optional auto-heal watchdog (`watchdog.sh`, launchd on macOS)
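For orientation, the surgical config update ends up writing a block roughly like the following into `~/.openclaw/openclaw.json`. Treat this as a sketch: `agents.defaults.memorySearch` and `remote.apiKey` come from this README, but the other field names and the base URL (Ollama's default port, 11434) are assumptions rather than the installer's exact output.

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "provider": "openai-compatible",
        "model": "embeddinggemma",
        "remote": {
          "baseUrl": "http://localhost:11434/v1",
          "apiKey": "ollama"
        }
      }
    }
  }
}
```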
## Install

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh
```
Bulletproof install (enforce + watchdog):

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto \
  --install-watchdog \
  --watchdog-interval 60
```
From the repo:

```bash
bash skills/ollama-memory-embeddings/install.sh
```
## Non-interactive example

```bash
bash ~/.openclaw/skills/ollama-memory-embeddings/install.sh \
  --non-interactive \
  --model embeddinggemma \
  --reindex-memory auto \
  --import-local-gguf yes   # explicit opt-in; "auto" = "no" in non-interactive mode
```
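The opt-in rule can be pictured with a tiny resolver. This is illustrative only: the function and the `prompt` outcome are assumptions about the installer's internals, not its actual code.

```shell
#!/bin/sh
# Sketch: effective value of --import-local-gguf.
#   $1 = flag value (auto|yes|no), $2 = non-interactive flag (yes|no)
resolve_import_gguf() {
  case "$1" in
    yes) echo "yes" ;;
    no)  echo "no" ;;
    auto)
      if [ "$2" = "yes" ]; then
        echo "no"      # non-interactive: "auto" collapses to "no"
      else
        echo "prompt"  # interactive: ask the user
      fi
      ;;
  esac
}

resolve_import_gguf auto yes   # prints "no": import stays off unless requested
resolve_import_gguf yes yes    # prints "yes": explicit opt-in wins
```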
## Verify

```bash
~/.openclaw/skills/ollama-memory-embeddings/verify.sh
~/.openclaw/skills/ollama-memory-embeddings/verify.sh --verbose   # dump raw response on failure
```
## Drift guard and self-heal

One-time check/heal:

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh --once --model embeddinggemma
```

Manual enforce (idempotent):

```bash
~/.openclaw/skills/ollama-memory-embeddings/enforce.sh --model embeddinggemma
```

Install the launchd watchdog (macOS):

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh \
  --install-launchd \
  --model embeddinggemma \
  --interval-sec 60
```
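For reference, a launchd job with a 60-second interval is typically described by a property list along these lines. This is a hand-written sketch: the label and paths are placeholders, and `watchdog.sh` generates its own plist.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Label and script path are illustrative placeholders -->
  <key>Label</key>
  <string>local.ollama-memory-embeddings.watchdog</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/Users/USERNAME/.openclaw/skills/ollama-memory-embeddings/watchdog.sh</string>
    <string>--once</string>
    <string>--model</string>
    <string>embeddinggemma</string>
  </array>
  <key>StartInterval</key>
  <integer>60</integer>
</dict>
</plist>
```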
Remove the launchd watchdog:

```bash
~/.openclaw/skills/ollama-memory-embeddings/watchdog.sh --uninstall-launchd
```
## Important: re-embed when changing models

If you switch embedding models, existing vectors may be incompatible with the new model's vector space. Rebuild/re-embed your memory index after a model change to avoid retrieval-quality regressions.
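As a toy illustration of the incompatibility (made-up numbers, not real embeddings): vectors from different models are not comparable, and may not even share a dimensionality.

```python
import math

def cosine(a, b):
    """Cosine similarity; only defined for vectors in the same space."""
    if len(a) != len(b):
        raise ValueError(f"dimension mismatch: {len(a)} vs {len(b)}")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

indexed = [0.1, 0.9, 0.3, 0.2]  # stored vector from the old model (4-dim, made up)
query = [0.7, 0.1, 0.5]         # query vector from the new model (3-dim, made up)

try:
    cosine(indexed, query)
except ValueError as err:
    print(f"stale index: {err}")  # retrieval against old vectors cannot work
```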
Installer behavior:

- `--reindex-memory auto` (default): reindex only when the embedding fingerprint changed (provider, model, baseUrl, apiKey presence).
- `--reindex-memory yes`: always run `openclaw memory index --force --verbose`.
- `--reindex-memory no`: never reindex automatically.
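One way to picture the `auto` decision (a sketch, not the installer's actual code; the real fingerprint format is internal): hash the embedding-relevant fields, recording only whether an apiKey is present, and reindex when the hash changes.

```shell
#!/bin/sh
# Sketch: fingerprint the embedding config; a changed fingerprint means reindex.
fingerprint() {
  # $1 provider, $2 model, $3 baseUrl, $4 apiKey (only its presence matters)
  if [ -n "$4" ]; then key_present="yes"; else key_present="no"; fi
  printf '%s|%s|%s|%s' "$1" "$2" "$3" "$key_present" | cksum | awk '{print $1}'
}

old=$(fingerprint ollama embeddinggemma http://localhost:11434/v1 ollama)
new=$(fingerprint ollama nomic-embed-text http://localhost:11434/v1 ollama)
[ "$old" = "$new" ] || echo "fingerprint changed -> reindex"
```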
Notes:

- `enforce.sh --check-only` treats apiKey drift as a missing apiKey (empty), not strict equality to `"ollama"`.
- Backups are created only when config changes are actually written.
- Legacy config fallback is supported: if the canonical `agents.defaults.memorySearch` is missing, the scripts read known legacy paths and mirror updates there for compatibility.
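The fallback read can be sketched like this (the top-level `memorySearch` legacy key is a hypothetical stand-in; the README does not list the actual legacy paths):

```python
def read_memory_search(config):
    """Prefer the canonical path; otherwise fall back to a legacy location."""
    canonical = config.get("agents", {}).get("defaults", {}).get("memorySearch")
    if canonical is not None:
        return canonical
    # Hypothetical legacy location -- the real paths live in the scripts
    return config.get("memorySearch")

legacy_cfg = {"memorySearch": {"model": "embeddinggemma"}}
print(read_memory_search(legacy_cfg))  # read from the legacy location
```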