OpenAI LLM
The OpenAILLMService provides conversational reasoning and response generation using OpenAI's GPT models. It supports streaming completions, tool usage (function calling), and vision capabilities.
Installation
To use OpenAI, install the required dependencies:
```shell
pip install "piopiy-ai[openai]"
```
Prerequisites
- An OpenAI account and API key.
- Set your API key in your environment:
```shell
export OPENAI_API_KEY="your_api_key_here"
```
Configuration
OpenAILLMService Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | `"gpt-4o"` | The OpenAI model name to use. |
| `api_key` | `str` | `None` | OpenAI API key (defaults to the `OPENAI_API_KEY` environment variable). |
| `base_url` | `str` | `None` | Custom base URL for the API. |
| `params` | `InputParams` | `InputParams()` | Advanced sampling and token settings. |
InputParams
| Parameter | Type | Default | Description |
|---|---|---|---|
| `temperature` | `float` | `1.0` | Sampling temperature (0.0 to 2.0). |
| `max_tokens` | `int` | `None` | Maximum number of tokens in the response. |
| `top_p` | `float` | `1.0` | Nucleus sampling parameter. |
| `presence_penalty` | `float` | `0.0` | Penalty for introducing new topics (-2.0 to 2.0). |
| `frequency_penalty` | `float` | `0.0` | Penalty for repeating tokens (-2.0 to 2.0). |
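These fields match the standard sampling parameters of the OpenAI Chat Completions API. As an illustration of how such settings end up in a request (the field names follow OpenAI's API; how piopiy forwards `InputParams` internally is an assumption, and `build_request_body` is a hypothetical helper, not part of the library):

```python
# Sketch: mapping InputParams-style settings onto an OpenAI
# Chat Completions request body. Field names follow the OpenAI API;
# the internal wiring in piopiy is an assumption.

def build_request_body(messages, model="gpt-4o", temperature=1.0,
                       max_tokens=None, top_p=1.0,
                       presence_penalty=0.0, frequency_penalty=0.0):
    body = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "top_p": top_p,
        "presence_penalty": presence_penalty,
        "frequency_penalty": frequency_penalty,
        "stream": True,  # the service streams tokens as they are generated
    }
    if max_tokens is not None:  # only sent when explicitly set
        body["max_tokens"] = max_tokens
    return body

body = build_request_body(
    [{"role": "user", "content": "Hello"}],
    temperature=0.7,
    max_tokens=256,
)
```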
Usage
Basic Setup
```python
import os

from piopiy.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o",
)
```
With Tool Usage
```python
import os

from piopiy.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o",
)

# Tools are typically registered on the VoiceAgent
# voice_agent.register_tool("get_weather", weather_handler)
```
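A tool needs two pieces: a schema the model sees and a handler the agent invokes. Below is a hypothetical `get_weather` pairing; the schema follows OpenAI's function-calling format, while the handler's signature and return shape are assumptions about what a piopiy tool handler might look like, with the weather lookup stubbed out:

```python
import asyncio

# Schema in OpenAI's function-calling format, describing the tool
# to the model.
GET_WEATHER_SCHEMA = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

async def weather_handler(city: str) -> dict:
    # Hypothetical handler; a real agent would call a weather API here.
    return {"city": city, "condition": "sunny", "temperature_c": 24}

result = asyncio.run(weather_handler("Chennai"))
```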
Notes
- Model Choice: `gpt-4o` is recommended for the best balance of speed and intelligence.
- Streaming: Tokens are streamed to the TTS service as they are generated, minimizing response latency.
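The streaming behavior follows the usual delta-accumulation pattern: partial chunks arrive one at a time and are forwarded downstream as they land. A minimal sketch with a stubbed stream (real chunks are OpenAI SDK objects rather than plain dicts, and the downstream consumer would be the TTS service rather than a list):

```python
# Stubbed stand-in for an OpenAI streamed completion: each chunk
# carries a small "delta" of the reply, mirroring the shape of the
# OpenAI streaming API.
def stream_chunks():
    for delta in ["Hel", "lo, ", "world", "!"]:
        yield {"choices": [{"delta": {"content": delta}}]}

def consume(stream):
    text = []
    for chunk in stream:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            text.append(delta)  # in piopiy this would feed the TTS service
    return "".join(text)

reply = consume(stream_chunks())
```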