Ollama (Local)
Overview
The OLLamaLLMService connects your application to large language models running locally via Ollama. This is ideal for privacy-conscious applications, or for development environments where an internet connection to external LLM providers is not desired.
Installation
- Install Ollama from ollama.com.
- Pull a model, e.g. `ollama pull llama3`.
- Install the SDK: `pip install piopiy-ai`
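Before wiring a model name into the SDK, it can be useful to confirm that the model has actually been pulled. Ollama exposes a native `/api/tags` endpoint that lists local models (note this is separate from the OpenAI-compatible `/v1` path the service uses). The helper below is an illustrative sketch, not part of the piopiy SDK:

```python
# Sketch: confirm a model has been pulled before using it with the SDK.
# Queries Ollama's native /api/tags endpoint on the default port.
import json
import urllib.request

def parse_model_names(payload: dict) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]

def list_local_models(host: str = "http://localhost:11434") -> list[str]:
    """Return the names of all models pulled into the local Ollama instance."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        return parse_model_names(json.load(resp))

if __name__ == "__main__":
    # Requires a running Ollama instance; prints e.g. ['llama3:latest']
    print(list_local_models())
```

If the model you configured is missing from the list, run `ollama pull <model>` first.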
Configuration
OLLamaLLMService Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | str | `"llama2"` | Name of the model pulled in Ollama. |
| `base_url` | str | `"http://localhost:11434/v1"` | Local OpenAI-compatible API endpoint. |
Usage
Basic Setup
from piopiy.services.ollama.llm import OLLamaLLMService
llm = OLLamaLLMService(
model="llama3",
base_url="http://localhost:11434/v1"
)
Notes
- Compatibility: The service uses Ollama's OpenAI-compatible API layer.
- Performance: Local performance depends heavily on your hardware (CPU/GPU).
- Privacy: No data leaves your local machine when using this service.
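Because the service sits on Ollama's OpenAI-compatible layer, you can exercise the same endpoint directly with nothing but the standard library, which is a handy sanity check when debugging connectivity. The helper below is a sketch assuming the standard `/chat/completions` route of that layer; it is not part of the piopiy SDK:

```python
# Sketch: call Ollama's OpenAI-compatible chat endpoint directly,
# bypassing the SDK, to verify the server and model are reachable.
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for the OpenAI-compatible chat completions route."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_chat_request("http://localhost:11434/v1", "llama3", "Say hello in one word.")
    with urllib.request.urlopen(req) as resp:  # requires a running Ollama instance
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```

If this request succeeds, OLLamaLLMService configured with the same `base_url` and `model` should work as well.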