OpenAI LLM

The OpenAILLMService provides conversational reasoning and response generation using OpenAI's GPT models. It supports streaming completions, tool usage (function calling), and vision capabilities.

Installation

To use OpenAI, install the required dependencies:

pip install "piopiy-ai[openai]"

Prerequisites

  • An OpenAI account and API key (available from the OpenAI platform dashboard).
  • Set your API key in your environment:
    export OPENAI_API_KEY="your_api_key_here"

Configuration

OpenAILLMService Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `str` | `"gpt-4o"` | The OpenAI model name to use. |
| `api_key` | `str` | `None` | OpenAI API key (defaults to the `OPENAI_API_KEY` environment variable). |
| `base_url` | `str` | `None` | Custom base URL for the API. |
| `params` | `InputParams` | `InputParams()` | Advanced sampling and token settings. |

InputParams

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `temperature` | `float` | `1.0` | Sampling temperature (0.0 to 2.0); lower values are more deterministic. |
| `max_tokens` | `int` | `None` | Maximum number of tokens in the response. |
| `top_p` | `float` | `1.0` | Nucleus sampling parameter. |
| `presence_penalty` | `float` | `0.0` | Penalizes tokens that have already appeared, encouraging new topics (-2.0 to 2.0). |
| `frequency_penalty` | `float` | `0.0` | Penalizes tokens in proportion to how often they have appeared, reducing repetition (-2.0 to 2.0). |
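These settings correspond one-to-one to fields in OpenAI's Chat Completions request body. The sketch below shows where each one ends up in a raw request payload; the values are illustrative (not the defaults above), and the `payload` dict and `clamp_temperature` helper are for explanation only, not part of the piopiy API.

```python
# Illustrative raw Chat Completions payload showing where each
# InputParams field maps. Values are examples, not library defaults.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,        # lower = more deterministic (0.0 to 2.0)
    "max_tokens": 256,         # cap on tokens generated in the response
    "top_p": 1.0,              # nucleus sampling; usually tune this OR temperature
    "presence_penalty": 0.0,   # > 0 nudges the model toward new topics
    "frequency_penalty": 0.0,  # > 0 reduces verbatim repetition
}

def clamp_temperature(t: float) -> float:
    """Clamp a temperature into OpenAI's accepted 0.0-2.0 range."""
    return max(0.0, min(2.0, t))
```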

Usage

Basic Setup

import os
from piopiy.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o",
)

With Tool Usage

import os

from piopiy.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o",
)

# Tools are typically registered on the VoiceAgent
# voice_agent.register_tool("get_weather", weather_handler)
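A handler for a tool like the `get_weather` example above might look as follows. This is a sketch: the handler name, its signature, and the choice to describe the tool in OpenAI's function-calling JSON Schema format are assumptions for illustration, not part of the piopiy API.

```python
import json

# Hypothetical handler; a real one would call a weather API.
# Returning a JSON string keeps the result easy to feed back to the model.
def weather_handler(location: str) -> str:
    return json.dumps({"location": location, "temperature_c": 21})

# Tool description in OpenAI's function-calling schema format.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
            },
            "required": ["location"],
        },
    },
}
```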

Notes

  • Model Choice: gpt-4o is recommended for the best balance of speed and intelligence.
  • Streaming: Tokens are streamed to the TTS service as they are generated, minimizing response latency.
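The latency benefit of streaming can be illustrated with a small, self-contained sketch (conceptual only, not piopiy's internals): buffering tokens up to sentence boundaries lets a TTS service start speaking the first sentence while later tokens are still arriving.

```python
from typing import Iterable, Iterator

def forward_sentences(token_stream: Iterable[str]) -> Iterator[str]:
    """Yield complete sentences as soon as they form, instead of
    waiting for the full completion -- the source of the latency win."""
    buffer = ""
    for token in token_stream:
        buffer += token
        if buffer.rstrip().endswith((".", "!", "?")):
            yield buffer.strip()
            buffer = ""
    if buffer.strip():  # flush any trailing partial sentence
        yield buffer.strip()

# Simulated token stream; real tokens would arrive over the network.
tokens = ["Hello", " there", ".", " How", " can", " I", " help", "?"]
sentences = list(forward_sentences(tokens))
# sentences == ["Hello there.", "How can I help?"]
```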