
MoodCast

Transform any text into emotionally expressive audio with ambient soundscapes.

MoodCast is a Moltbot skill that uses ElevenLabs' most advanced features to create compelling audio content. It analyzes your text, adds emotional expression using Eleven v3 audio tags, and can layer ambient soundscapes for immersive experiences.


Features

| Feature | Description |
|---|---|
| Emotion Detection | Automatically analyzes text and inserts v3 audio tags (`[excited]`, `[whispers]`, `[laughs]`, etc.) |
| Ambient Soundscapes | Generates matching background sounds using the Sound Effects API |
| Multiple Moods | Pre-configured moods: dramatic, calm, excited, scary, news, story |
| Smart Text Processing | Auto-splits long text, handles multiple speakers |

Demo

Input:

```
Breaking news! Scientists have discovered something incredible.
This could change everything we know about the universe...
I can't believe it.
```

MoodCast Output:

```
[excited] Breaking news! Scientists have discovered something incredible.
[pause] This could change everything we know about the universe...
[gasps] [whispers] I can't believe it.
```

The AI voice delivers this with genuine excitement, dramatic pauses, and a whispered ending.


Quick Start

1. Install the Skill

```bash
# Option 1: Clone to your Moltbot skills directory
git clone https://github.com/ashutosh887/moodcast ~/.clawdbot/skills/moodcast

# Option 2: Install via MoltHub (recommended)
npx molthub@latest install moodcast

# Option 3: Install to workspace (for per-agent skills)
# After installing, move to your workspace or use the git clone method
```

2. Set Your API Key

```bash
export ELEVENLABS_API_KEY="your-api-key-here"
```

Or add to ~/.clawdbot/moltbot.json:

```json
{
  "skills": {
    "entries": {
      "moodcast": {
        "enabled": true,
        "apiKey": "your-api-key-here",
        "env": {
          "ELEVENLABS_API_KEY": "your-api-key-here"
        }
      }
    }
  }
}
```

Note: apiKey automatically maps to ELEVENLABS_API_KEY when the skill declares primaryEnv.
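To confirm the key is actually visible to the skill's process, a small guard like the following can help. This is a generic sketch; `require_api_key` is a hypothetical helper, not part of MoodCast:

```python
import os

def require_api_key(name: str = "ELEVENLABS_API_KEY") -> str:
    """Return the API key from the environment, or fail with a clear message."""
    key = os.environ.get(name, "").strip()
    if not key:
        raise RuntimeError(f"{name} is not set; export it or add it to ~/.clawdbot/moltbot.json")
    return key
```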

3. Use It!

Via Moltbot (WhatsApp/Telegram/Discord/iMessage):

Hey Molty, moodcast this: "It was a dark and stormy night..."

Or use the slash command:

/moodcast "It was a dark and stormy night..."

Via Command Line:

```bash
python3 ~/.clawdbot/skills/moodcast/scripts/moodcast.py --text "Hello world!"
```

Usage Examples

Basic Usage

```bash
python3 moodcast.py --text "This is amazing news!"
```

With Mood Preset

```bash
python3 moodcast.py --text "The door creaked open slowly..." --mood scary
```

With Ambient Sound

```bash
python3 moodcast.py --text "Welcome to my café" --ambient "coffee shop busy morning"
```

Save to File

```bash
python3 moodcast.py --text "Your story here" --output narration.mp3
```

Show Enhanced Text

```bash
python3 moodcast.py --text "Wow this is great!" --show-enhanced
# Output: [excited] Wow this is great!
```

Custom Configuration

```bash
# Custom voice, model, and output format
python3 moodcast.py --text "Hello" --voice VOICE_ID --model eleven_v3 --output-format mp3_44100_128

# Override the mood's default voice
python3 moodcast.py --text "Dramatic scene" --mood dramatic --voice CUSTOM_VOICE_ID

# Skip emotion enhancement
python3 moodcast.py --text "Plain text" --no-enhance
```

Supported Audio Tags (Eleven v3)

MoodCast automatically detects emotions and inserts these tags:

Emotions

| Tag | Triggers |
|---|---|
| `[excited]` | amazing, incredible, wow, !!! |
| `[happy]` | happy, delighted, thrilled |
| `[nervous]` | scared, afraid, terrified |
| `[angry]` | angry, furious, hate |
| `[sorrowful]` | sad, sorry, tragic |
| `[calm]` | peaceful, gentle, quiet |

Delivery

| Tag | Effect |
|---|---|
| `[whispers]` | Soft, secretive tone |
| `[shouts]` | Loud, emphatic delivery |
| `[slows down]` | Deliberate pacing |
| `[rushed]` | Fast, urgent speech |

Reactions

| Tag | Effect |
|---|---|
| `[laughs]` | Natural laughter |
| `[sighs]` | Weary exhale |
| `[gasps]` | Surprise intake |
| `[giggles]` | Light laughter |
| `[pause]` | Dramatic beat |
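The trigger tables above can be pictured as a simple keyword matcher. The following is an illustrative sketch, not MoodCast's actual detection logic, which may be more sophisticated:

```python
# Hypothetical keyword-based tag insertion, mirroring the emotion trigger table.
TRIGGERS = {
    "[excited]": ["amazing", "incredible", "wow"],
    "[happy]": ["happy", "delighted", "thrilled"],
    "[nervous]": ["scared", "afraid", "terrified"],
    "[angry]": ["angry", "furious", "hate"],
    "[sorrowful]": ["sad", "sorry", "tragic"],
    "[calm]": ["peaceful", "gentle", "quiet"],
}

def enhance(sentence: str) -> str:
    """Prefix a sentence with the first emotion tag whose trigger word appears."""
    lowered = sentence.lower()
    for tag, words in TRIGGERS.items():
        if any(word in lowered for word in words):
            return f"{tag} {sentence}"
    return sentence
```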

Mood Presets

| Mood | Voice | Style | Best For |
|---|---|---|---|
| dramatic | Roger | Theatrical, expressive | Stories, scripts |
| calm | Lily | Soothing, peaceful | Meditation, ASMR |
| excited | Liam | Energetic, upbeat | News, announcements |
| scary | Roger (deep) | Tense, ominous | Horror, thrillers |
| news | Lily | Professional, clear | Briefings, reports |
| story | Rachel | Warm, engaging | Audiobooks, tales |
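A preset can be modeled as a small lookup table, with `--voice` taking priority over the mood's default. This is a sketch; the voice IDs and preset shape are placeholders, not MoodCast's real values:

```python
from typing import Optional

# Hypothetical mood-preset table; voice IDs are placeholders, not real ElevenLabs IDs.
MOODS = {
    "dramatic": {"voice": "ROGER_ID", "style": "theatrical"},
    "calm": {"voice": "LILY_ID", "style": "soothing"},
    "excited": {"voice": "LIAM_ID", "style": "energetic"},
}

def pick_voice(mood: Optional[str], voice_override: Optional[str], default: str) -> str:
    """--voice beats the mood's default voice, which beats the global default."""
    if voice_override:
        return voice_override
    if mood and mood in MOODS:
        return MOODS[mood]["voice"]
    return default
```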

Configuration

Command Line Arguments

| Argument | Short | Description |
|---|---|---|
| `--text` | `-t` | Text to convert to speech (required) |
| `--mood` | `-m` | Mood preset: dramatic, calm, excited, scary, news, story |
| `--voice` | `-v` | Voice ID (overrides the mood's default voice) |
| `--model` | | Model ID (default: `eleven_v3`) |
| `--output-format` | | Output format (default: `mp3_44100_128`) |
| `--ambient` | `-a` | Generate an ambient sound effect from a prompt |
| `--ambient-duration` | | Ambient duration in seconds (max 30, default: 10) |
| `--output` | `-o` | Save audio to a file instead of playing it |
| `--no-enhance` | | Skip automatic emotion enhancement |
| `--show-enhanced` | | Print the enhanced text before generating |
| `--list-voices` | | List available voices |
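In `argparse` terms, a subset of this interface might look like the following. This is a reconstruction from the table, not the script's actual parser:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Subset of the MoodCast CLI, reconstructed from the argument table."""
    p = argparse.ArgumentParser(prog="moodcast")
    p.add_argument("--text", "-t", required=True, help="Text to convert to speech")
    p.add_argument("--mood", "-m",
                   choices=["dramatic", "calm", "excited", "scary", "news", "story"])
    p.add_argument("--voice", "-v", help="Voice ID (overrides mood default)")
    p.add_argument("--model", default="eleven_v3")
    p.add_argument("--output-format", default="mp3_44100_128")
    p.add_argument("--ambient", "-a", help="Ambient sound prompt")
    p.add_argument("--ambient-duration", type=float, default=10,
                   help="Seconds, max 30")
    p.add_argument("--output", "-o", help="Save audio to file")
    p.add_argument("--no-enhance", action="store_true")
    p.add_argument("--show-enhanced", action="store_true")
    return p
```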

Environment Variables

| Variable | Required | Description | Default |
|---|---|---|---|
| `ELEVENLABS_API_KEY` | Yes | Your ElevenLabs API key | - |
| `MOODCAST_DEFAULT_VOICE` | No | Default voice ID (overridden by `--voice` or `--mood`) | `CwhRBWXzGAHq8TQ4Fs17` |
| `MOODCAST_MODEL` | No | Default model ID (overridden by `--model`) | `eleven_v3` |
| `MOODCAST_OUTPUT_FORMAT` | No | Default output format (overridden by `--output-format`) | `mp3_44100_128` |
| `MOODCAST_AUTO_AMBIENT` | No | Auto-generate ambient sounds when using `--mood` | - |

Priority order: CLI arguments > Environment variables > Hardcoded defaults
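That priority chain reduces to a one-line resolver. A minimal sketch (the helper name is hypothetical):

```python
import os

def resolve(cli_value, env_name, default):
    """CLI argument > environment variable > hardcoded default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_name) or default
```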

Moltbot Config (~/.clawdbot/moltbot.json)

```json
{
  "skills": {
    "entries": {
      "moodcast": {
        "enabled": true,
        "apiKey": "xi-xxxxxxxxxxxx",
        "env": {
          "ELEVENLABS_API_KEY": "xi-xxxxxxxxxxxx",
          "MOODCAST_AUTO_AMBIENT": "true"
        }
      }
    }
  }
}
```

Note: apiKey is a convenience field that maps to ELEVENLABS_API_KEY when primaryEnv is set in the skill metadata.


ElevenLabs APIs Used

This skill demonstrates deep integration with multiple ElevenLabs APIs:

1. Text-to-Speech (Eleven v3)

  • Model: eleven_v3 for audio tag support
  • Format: mp3_44100_128
  • Features: Full audio tag expression system
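A text-to-speech request with these settings can be sketched as below. The parameter-building function is hypothetical, and the commented-out SDK call is an untested sketch whose exact signature may vary across `elevenlabs` SDK versions:

```python
def build_tts_kwargs(text, voice_id, model_id="eleven_v3",
                     output_format="mp3_44100_128"):
    """Assemble keyword arguments for a text-to-speech request."""
    return {
        "text": text,
        "voice_id": voice_id,
        "model_id": model_id,
        "output_format": output_format,
    }

# With the elevenlabs package installed and ELEVENLABS_API_KEY set, the call
# would look roughly like this (sketch only; check your SDK version's docs):
#
#   from elevenlabs.client import ElevenLabs
#   client = ElevenLabs()
#   audio = client.text_to_speech.convert(**build_tts_kwargs("Hello", "VOICE_ID"))
```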

2. Sound Effects API

  • Generates ambient soundscapes from text prompts
  • Up to 30 seconds per generation
  • Seamless looping support

3. Voices API

  • Lists available voices
  • Supports custom voice selection
  • Mood-based voice matching

Project Structure

```
moodcast/
├── SKILL.md           # Moltbot skill definition (AgentSkills format)
├── README.md          # Project documentation
├── requirements.txt   # Python dependencies
├── .gitignore         # Git ignore rules
├── scripts/
│   └── moodcast.py    # Main Python script
└── examples/
    ├── news.txt       # News article example
    ├── scary.txt      # Horror story example
    ├── dramatic.txt   # Dramatic narrative example
    ├── calm.txt       # Peaceful scene example
    └── story.txt      # Adventure story example
```

Skill Installation Locations

Moltbot loads skills from three locations (in precedence order):

  1. Workspace skills: <workspace>/skills/moodcast (per-agent, highest precedence)
  2. Managed skills: ~/.clawdbot/skills/moodcast (shared across agents)
  3. Bundled skills: Shipped with Moltbot install (lowest precedence)

Use `npx molthub@latest install moodcast` to install to the managed directory, or clone directly to your workspace for a per-agent installation.
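The precedence order above amounts to a first-match lookup across the three roots. A minimal sketch (the function and its parameters are illustrative, not Moltbot's loader):

```python
from pathlib import Path
from typing import Optional

def find_skill(name: str, workspace: str, managed: str, bundled: str) -> Optional[Path]:
    """Return the first directory containing the skill, workspace root first."""
    for root in (workspace, managed, bundled):
        candidate = Path(root) / "skills" / name
        if candidate.is_dir():
            return candidate
    return None
```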


Technical Details

API Integration

| Criteria | Implementation |
|---|---|
| ElevenLabs API usage | Eleven v3 audio tags (the deepest TTS feature), Sound Effects API, Voices API |
| Practical use cases | Content creators, writers, podcasters, anyone who wants expressive audio |
| Demo approach | A single clear hook: "Text that feels emotion", with a live demonstration |

License

MIT License - feel free to use, modify, and share!


Acknowledgments

Built for the #ClawdEleven Hackathon (ElevenLabs × Moltbot)