Google (Gemini)
The GoogleLLMService integrates Google AI's Gemini models for powerful conversational reasoning. It supports the standard LLM features along with Gemini-specific capabilities like "Thinking" mode for deep reasoning.
Installation
To use Google Gemini, install the required dependencies:
pip install "piopiy-ai[google]"
Prerequisites
- A Google AI Studio API key (available from Google AI Studio).
- Set your API key in your environment:
export GOOGLE_API_KEY="your_api_key_here"
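As a minimal sketch (not part of piopiy itself), a script can verify the key is present before constructing the service, so a missing key fails fast instead of erroring mid-call:

```python
import os

# Read the key set via `export GOOGLE_API_KEY=...`.
api_key = os.getenv("GOOGLE_API_KEY")
if api_key is None:
    print("GOOGLE_API_KEY is not set; export it before starting the service.")
```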
Configuration
GoogleLLMService Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | Required | Your Google AI API key. |
| model | str | "gemini-2.5-flash" | Gemini model ID. |
| params | InputParams | InputParams() | Model generation settings. |
| system_instruction | str | None | System prompt for the model. |
InputParams
| Parameter | Type | Default | Description |
|---|---|---|---|
| max_tokens | int | 4096 | Maximum generation length. |
| temperature | float | None | Sampling temperature. |
| thinking | ThinkingConfig | None | Configuration for "Thinking" models. |
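For illustration only (the real InputParams lives inside piopiy and its definition may differ), the defaults in the table above can be pictured as a dataclass, assuming ThinkingConfig carries the thinking_budget field used later in this page:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThinkingConfig:
    # Hypothetical shape: token budget for the model's reasoning phase.
    thinking_budget: int = 1024

@dataclass
class InputParams:
    # Defaults mirror the table above.
    max_tokens: int = 4096
    temperature: Optional[float] = None
    thinking: Optional[ThinkingConfig] = None

# Unset fields keep their defaults; only overrides need to be passed.
params = InputParams(temperature=0.7, thinking=ThinkingConfig(thinking_budget=2048))
```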
Usage
Basic Setup
import os
from piopiy.services.google.llm import GoogleLLMService

llm = GoogleLLMService(
    api_key=os.getenv("GOOGLE_API_KEY"),
    model="gemini-2.5-flash",
)
Using Thinking Mode
import os
from piopiy.services.google.llm import GoogleLLMService

llm = GoogleLLMService(
    api_key=os.getenv("GOOGLE_API_KEY"),
    model="gemini-2.0-pro-exp",
    params=GoogleLLMService.InputParams(
        thinking=GoogleLLMService.ThinkingConfig(
            thinking_budget=1024,
        )
    ),
)
Notes
- Thinking Budget: For high-reasoning models (like Gemini 2.5 Pro), you can set a thinking_budget (in tokens) to control the depth of the reasoning process.
- Multimodal: Gemini models natively support images and audio in the context.