Together AI
Overview
The TogetherLLMService provides access to more than 100 open-source models (Llama, Mixtral, Qwen, and others) through Together AI's high-speed inference engine. Because it follows the OpenAI-compatible interface, it integrates easily with existing OpenAI-style tooling.
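Because the interface is OpenAI-compatible, a chat request body uses the standard OpenAI shape. The helper below is an illustrative sketch (the `build_chat_request` function is hypothetical, not part of the library):

```python
import json


def build_chat_request(model: str, user_text: str) -> dict:
    """Build an OpenAI-style chat-completion payload, the shape accepted by
    Together AI's OpenAI-compatible /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }


# The payload serializes to ordinary JSON for the request body.
body = json.dumps(build_chat_request(
    "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo", "Hello!"
))
```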
Installation
To use Together AI, install the required dependencies:
pip install "piopiy-ai[together]"
Prerequisites
- A Together AI account and API key.
- Set your API key in your environment:
export TOGETHER_API_KEY="your_api_key_here"
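Reading the key from the environment at startup lets you fail fast with a clear message when it is missing. A minimal sketch (the helper name is hypothetical):

```python
import os


def get_together_api_key() -> str:
    """Read the Together AI key from the environment; fail fast if absent."""
    key = os.getenv("TOGETHER_API_KEY")
    if not key:
        raise RuntimeError("TOGETHER_API_KEY is not set")
    return key
```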
Configuration
TogetherLLMService Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | Required | Your Together AI API key. |
| model | str | "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo" | Model identifier. |
| base_url | str | "https://api.together.xyz/v1" | API endpoint. |
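The defaults in the table can be summarized as a plain config object. The `TogetherConfig` class below is a hypothetical illustration of those defaults, not part of the library:

```python
from dataclasses import dataclass


@dataclass
class TogetherConfig:
    """Mirrors the TogetherLLMService parameters and their defaults."""
    api_key: str  # required, no default
    model: str = "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"
    base_url: str = "https://api.together.xyz/v1"
```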
Usage
Basic Setup
import os
from piopiy.services.together.llm import TogetherLLMService

# Read the key from the environment and pick an open-weight model.
llm = TogetherLLMService(
    api_key=os.getenv("TOGETHER_API_KEY"),
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)
Notes
- Low Latency: Together AI's "Turbo" models are specifically optimized for sub-second responses, ideal for voice agents.
- Model Variety: Supports a wide range of open-weight models that can be swapped by changing the model string.
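Since swapping models is only a string change, one lightweight pattern is to map named profiles to model identifiers. A sketch (the model ids are examples; verify current availability in Together's model catalog before relying on them):

```python
# Example open-weight model identifiers on Together AI.
MODELS = {
    "fast": "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    "mixtral": "mistralai/Mixtral-8x7B-Instruct-v0.1",
}


def pick_model(profile: str) -> str:
    """Return a model identifier for the given profile, defaulting to 'fast'."""
    return MODELS.get(profile, MODELS["fast"])
```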