
Google (Gemini)

The GoogleLLMService integrates Google AI's Gemini models for conversational reasoning. It supports standard LLM features along with Gemini-specific capabilities such as "Thinking" mode for deeper reasoning.

Installation

To use Google Gemini, install the required dependencies:

pip install "piopiy-ai[google]"

Prerequisites

  • A Google AI Studio API key (available from Google AI Studio).
  • Set your API key in your environment:
    export GOOGLE_API_KEY="your_api_key_here"
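Before constructing the service, it can help to fail fast if the environment variable is missing. A minimal sketch (the `require_api_key` helper is illustrative, not part of the library):

```python
import os

def require_api_key(var: str = "GOOGLE_API_KEY") -> str:
    """Return the API key from the environment, raising early if it is unset."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before starting the service")
    return key
```

Calling `require_api_key()` at startup surfaces a clear error instead of a failed API call later.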

Configuration

GoogleLLMService Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str` | Required | Your Google AI API key. |
| `model` | `str` | `"gemini-2.5-flash"` | Gemini model ID. |
| `params` | `InputParams` | `InputParams()` | Model generation settings. |
| `system_instruction` | `str` | `None` | System prompt for the model. |

InputParams

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `max_tokens` | `int` | `4096` | Maximum generation length. |
| `temperature` | `float` | `None` | Sampling temperature. |
| `thinking` | `ThinkingConfig` | `None` | Configuration for "Thinking" models. |

Usage

Basic Setup

import os
from piopiy.services.google.llm import GoogleLLMService

llm = GoogleLLMService(
    api_key=os.getenv("GOOGLE_API_KEY"),
    model="gemini-2.5-flash",
)

Using Thinking Mode

import os
from piopiy.services.google.llm import GoogleLLMService

llm = GoogleLLMService(
    api_key=os.getenv("GOOGLE_API_KEY"),
    model="gemini-2.0-pro-exp",
    params=GoogleLLMService.InputParams(
        thinking=GoogleLLMService.ThinkingConfig(
            thinking_budget=1024
        )
    ),
)

Notes

  • Thinking Budget: For high-reasoning models (like Gemini 2.5 Pro), you can set a thinking_budget (in tokens) to control the depth of the reasoning process.
  • Multimodal: Gemini models natively support images and audio in the context.
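Since different models accept different thinking budgets, one option is to clamp a requested budget to a per-model range before building the ThinkingConfig. The ranges below are assumptions for illustration only; check Google's current model documentation for the real limits:

```python
# Assumed, illustrative budget ranges (min_tokens, max_tokens) per model.
ASSUMED_BUDGET_RANGES = {
    "gemini-2.5-flash": (0, 24576),
    "gemini-2.5-pro": (128, 32768),
}

def clamp_thinking_budget(model: str, requested: int) -> int:
    """Clamp a requested thinking budget to the model's assumed range."""
    lo, hi = ASSUMED_BUDGET_RANGES.get(model, (0, requested))
    return max(lo, min(hi, requested))
```

The clamped value would then be passed as `thinking_budget` when constructing the ThinkingConfig.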