# Chatterbox Local AI Pipeline
This example demonstrates how to build a fully local, open-source voice agent stack with Piopiy that requires zero cloud API keys.
It runs entirely on your local machine using:
- Whisper (Local) for Speech-to-Text
- Ollama for Local Large Language Models (LLaMA 3, Mistral, etc.)
- Chatterbox TTS for Text-to-Speech
## Requirements

```bash
pip install "piopiy-ai[whisper,silero]"
```

You must also install and run Ollama locally:

```bash
ollama serve
ollama run llama3
```
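Before starting the agent, it can be useful to confirm that the Ollama daemon is actually reachable. The helper below is a hypothetical convenience (not part of the Piopiy SDK) that probes Ollama's default HTTP endpoint; the URL and `/api/tags` route are Ollama's documented defaults:

```python
import json
import urllib.request
from urllib.error import URLError

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def ollama_available(base_url: str = OLLAMA_URL, timeout: float = 2.0) -> bool:
    """Return True if a local Ollama daemon answers on base_url."""
    try:
        # /api/tags lists the locally pulled models
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return "models" in data
    except (URLError, OSError, ValueError):
        # Connection refused, timeout, or a non-JSON response
        return False
```

If this returns `False`, start the daemon with `ollama serve` before launching the agent.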
## Environment Setup

Since this runs entirely locally, you do not need OpenAI or Deepgram keys. Only your Piopiy dashboard credentials are required:

```bash
# Piopiy Dashboard
AGENT_ID="your_agent_id"
AGENT_TOKEN="your_agent_token"
```
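Because the script reads these values with `os.getenv`, a typo in the `.env` file only surfaces later as a confusing connection failure. A small fail-fast check (hypothetical, not part of the Piopiy SDK) can catch missing credentials at startup:

```python
import os


def require_env(*names: str) -> list[str]:
    """Return the values of the given environment variables, raising if any are unset."""
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return [os.environ[n] for n in names]


# Usage (after load_dotenv()):
#   agent_id, agent_token = require_env("AGENT_ID", "AGENT_TOKEN")
```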
## How to Run

Save the script below as `chatterbox_agent.py` and run it:

```bash
python chatterbox_agent.py
```
- Log in to the Piopiy Dashboard.
- Ensure you have purchased a Piopiy phone number and mapped it to your new AI Agent. (See Dashboard Setup Guide for help).
- Dial that phone number from your personal phone to interact with your local agent!
The call will be processed 100% locally on your machine with zero cloud AI API usage!
## Full Script
```python
import asyncio
import os

from dotenv import load_dotenv
from piopiy.agent import Agent
from piopiy.voice_agent import VoiceAgent
from piopiy.services.whisper.stt import WhisperLocalSTTService
from piopiy.services.ollama.llm import OllamaLLMService
from piopiy.services.chatterbox.tts import ChatterboxTTSService

load_dotenv()


async def create_session(agent_id, call_id, from_number, to_number, metadata=None):
    # 1. VoiceAgent configuration
    voice_agent = VoiceAgent(
        instructions="You are a brilliant, strictly local open-source AI assistant running on LLaMA 3.",
        greeting="System initialized. Local open source pipeline active. How can I assist?",
    )

    # 2. Local Speech-to-Text setup
    # Auto-downloads the ONNX Whisper model on first run
    stt = WhisperLocalSTTService(
        model_size="base",
        language="en",
    )

    # 3. Local language model (Ollama) setup
    # Connects to your local Ollama daemon (default: http://localhost:11434)
    llm = OllamaLLMService(
        model="llama3",
    )

    # 4. Local Text-to-Speech setup
    tts = ChatterboxTTSService(
        voice="en-us-linda",  # Built-in local voice
    )

    # 5. Start the voice agent pipeline
    await voice_agent.Action(
        stt=stt,
        llm=llm,
        tts=tts,
        vad=True,
    )
    await voice_agent.start()


async def main():
    agent = Agent(
        agent_id=os.getenv("AGENT_ID"),
        agent_token=os.getenv("AGENT_TOKEN"),
        create_session=create_session,
    )
    print("🚀 Fully Local Open Source Agent starting...")
    print("   Whisper -> Ollama -> Chatterbox")
    await agent.connect()


if __name__ == "__main__":
    asyncio.run(main())
```