Fireworks AI
Overview
The FireworksLLMService provides access to Fireworks AI's low-latency inference for open-weight models, making it well suited for high-throughput, real-time applications.
Installation
pip install piopiy-ai
Prerequisites
- A Fireworks AI API key (available from your Fireworks AI account dashboard).
- Set your API key in your environment:
export FIREWORKS_API_KEY="your_api_key_here"
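Since the service reads the key at construction time, it can help to fail fast when the variable is missing rather than getting an authentication error later. A minimal sketch (the `require_api_key` helper is ours, not part of the library):

```python
import os

def require_api_key(env_var: str = "FIREWORKS_API_KEY") -> str:
    """Return the API key from the environment, failing fast if it is missing."""
    key = os.getenv(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before starting the app.")
    return key
```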
Configuration
FireworksLLMService Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | Required | Your Fireworks AI API key. |
| model | str | "accounts/fireworks/models/firefunction-v2" | Model identifier to use for inference. |
| base_url | str | "https://api.fireworks.ai/inference/v1" | API endpoint base URL. |
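Note that Fireworks model identifiers follow the `accounts/<account>/models/<model>` pattern seen in the defaults above. A small validation helper can catch typos before a request is ever sent (the helper name and check are illustrative, not part of the library):

```python
def is_fireworks_model_id(model: str) -> bool:
    """Loosely check that a string matches the accounts/<account>/models/<model> pattern."""
    parts = model.split("/")
    return (
        len(parts) == 4
        and parts[0] == "accounts"
        and parts[2] == "models"
        and all(parts)  # no empty segments
    )
```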
Usage
Basic Setup
```python
import os

from piopiy.services.fireworks.llm import FireworksLLMService

llm = FireworksLLMService(
    api_key=os.getenv("FIREWORKS_API_KEY"),
    model="accounts/fireworks/models/llama-v3p1-70b-instruct",
)
```
Notes
- Speed: Fireworks AI is optimized for ultra-low latency, making it a top choice for real-time AI agents.
- Function Calling: The firefunction-v2 model is specifically tuned for reliable tool use.
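When using tool calls with firefunction-v2, tools are typically declared in the OpenAI-compatible function schema. A sketch of one such definition (the `get_weather` tool and its fields are purely illustrative):

```python
# An OpenAI-compatible tool definition; the tool itself is a made-up example.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, e.g. 'Austin'",
                },
            },
            "required": ["city"],
        },
    },
}
```

A list of such definitions is what gets passed alongside the conversation so the model can decide when to call a tool.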