llm-shield
LLM Shield is an OpenClaw skill that validates incoming messages against Glitchward's API to detect prompt injection. It sends message content to `https://glitchward.com/api/shield/validate`, authenticating with the `GLITCHWARD_SHIELD_TOKEN` environment variable; both the network access and the token handling are aligned with the skill's stated purpose and pose a low security risk.
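As a rough illustration of how such a validation call could be wired up, here is a minimal Python sketch. Only the endpoint URL and the `GLITCHWARD_SHIELD_TOKEN` variable come from the description above; the request payload shape (`{"content": ...}`), the bearer-token auth scheme, and the response field (`"flagged"`) are assumptions, not Glitchward's documented API.

```python
import json
import os
import urllib.request

VALIDATE_URL = "https://glitchward.com/api/shield/validate"


def build_validate_request(message: str, token: str) -> urllib.request.Request:
    """Build the POST request for the validate endpoint.

    The JSON body shape and Bearer auth scheme are assumptions for
    illustration; consult Glitchward's API docs for the real contract.
    """
    body = json.dumps({"content": message}).encode("utf-8")
    return urllib.request.Request(
        VALIDATE_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # assumed auth scheme
        },
        method="POST",
    )


def is_safe(message: str) -> bool:
    """Send a message for validation and report whether it passed.

    Assumes the API responds with JSON containing a boolean "flagged" field.
    """
    token = os.environ["GLITCHWARD_SHIELD_TOKEN"]
    req = build_validate_request(message, token)
    with urllib.request.urlopen(req, timeout=5) as resp:
        result = json.load(resp)
    return not result.get("flagged", False)
```

Separating request construction from the network call keeps the token handling and payload shape easy to audit, which matters for a skill whose whole job is security screening.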