OpenPipe LLM
The `OpenPipeLLMService` extends the standard OpenAI service to integrate with OpenPipe's fine-tuning and monitoring platform. It logs requests and applies metadata tags, making it easy to collect datasets and evaluate models from live traffic.
Installation
To use OpenPipe, install the required dependencies:
```shell
pip install "piopiy-ai[openpipe]"
```
Prerequisites
- An OpenPipe API key (created in the OpenPipe dashboard).
- Set your keys in your environment:
```shell
export OPENPIPE_API_KEY="your_openpipe_key_here"
export OPENAI_API_KEY="your_openai_key_here"
```
Configuration
OpenPipeLLMService Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | `"gpt-4.1"` | Model name (an OpenAI model or an OpenPipe fine-tuned model). |
| `api_key` | `str` | `None` | OpenAI API key (falls back to `OPENAI_API_KEY`). |
| `openpipe_api_key` | `str` | `None` | OpenPipe API key (falls back to `OPENPIPE_API_KEY`). |
| `openpipe_base_url` | `str` | OpenPipe default | OpenPipe API endpoint URL. |
| `tags` | `dict` | `None` | Metadata tags attached to logged requests. |
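The environment-variable fallback noted in the table can be sketched as follows. This is an illustrative helper, not part of the library; the service resolves keys in a similar way at construction time:

```python
import os


def resolve_api_key(explicit_key, env_var):
    # Prefer an explicitly passed key; otherwise fall back to the
    # named environment variable (sketch of the documented fallback).
    if explicit_key is not None:
        return explicit_key
    return os.environ.get(env_var)


# Example: with no explicit key, the environment variable is used.
os.environ["OPENPIPE_API_KEY"] = "demo-key"
print(resolve_api_key(None, "OPENPIPE_API_KEY"))       # demo-key
print(resolve_api_key("explicit", "OPENPIPE_API_KEY"))  # explicit
```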
Usage
Basic Setup with Tagging
```python
import os

from piopiy.services.openpipe.llm import OpenPipeLLMService

llm = OpenPipeLLMService(
    model="gpt-4.1",
    openpipe_api_key=os.getenv("OPENPIPE_API_KEY"),
    tags={
        "environment": "production",
        "app_version": "1.2.0",
        "experiment_id": "voice-v1",
    },
)
```
Notes
- Automatic Logging: All requests made through this service are automatically logged to the OpenPipe dashboard, making it easy to create fine-tuning datasets from production traffic.
- OpenAI Compatible: Since it inherits from `OpenAILLMService`, you can use it as a drop-in replacement for standard GPT models while gaining monitoring benefits.
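To illustrate why tags are useful for dataset collection, the sketch below filters logged requests by tag values, roughly what you would do when assembling a fine-tuning dataset from production traffic. The data and helper function are hypothetical, not part of the OpenPipe API:

```python
def filter_by_tags(logged_requests, **required_tags):
    """Return logged requests whose tags include all required key/value pairs."""
    return [
        r for r in logged_requests
        if all(r.get("tags", {}).get(k) == v for k, v in required_tags.items())
    ]


# Hypothetical log entries, shaped like the tags passed at service creation.
logs = [
    {"id": 1, "tags": {"environment": "production", "experiment_id": "voice-v1"}},
    {"id": 2, "tags": {"environment": "staging", "experiment_id": "voice-v1"}},
]

print(filter_by_tags(logs, environment="production"))  # only request 1
```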