# Google AI (Gemini) Provider

The Google AI provider enables PentAGI to use Google's Gemini language models through the Generative AI API. It supports function calling, streaming responses, and competitive pricing.
## Configuration

| Variable | Default | Description |
|---|---|---|
| `GEMINI_API_KEY` | - | Your Google AI API key (required) |
| `GEMINI_SERVER_URL` | `https://generativelanguage.googleapis.com` | Google AI API base URL |
The provider is enabled by setting the `GEMINI_API_KEY` environment variable.

## Supported Models

| Model | Context Window | Max Output | Input Price* | Output Price* | Best For |
|---|---|---|---|---|---|
| gemini-2.5-flash | 1M tokens | 65K tokens | $0.15 | $0.60 | General tasks, fast responses |
| gemini-2.5-pro | 1M tokens | 65K tokens | $2.50 | $10.00 | Complex reasoning, analysis |
| gemini-2.0-flash | 1M tokens | 8K tokens | $0.15 | $0.60 | High-frequency tasks |
| gemini-1.5-flash | 1M tokens | 8K tokens | $0.075 | $0.30 | Legacy model (deprecated) |
| gemini-1.5-pro | 2M tokens | 8K tokens | $1.25 | $5.00 | Legacy model (deprecated) |
\*Prices per 1M tokens (USD)
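The per-request cost follows directly from the table: `(input_tokens / 1M) × input_price + (output_tokens / 1M) × output_price`. A minimal sketch for `gemini-2.5-flash`, using illustrative token counts:

```shell
# Estimate the cost of one gemini-2.5-flash call from the pricing table:
# 10,000 input tokens and 2,000 output tokens (illustrative values).
awk 'BEGIN {
  in_tokens = 10000; out_tokens = 2000
  in_price = 0.15; out_price = 0.60      # USD per 1M tokens
  printf "$%.4f\n", in_tokens/1e6*in_price + out_tokens/1e6*out_price
}'
# prints $0.0027
```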
Each agent type is optimized with specific parameters for Google AI models.

```bash
# Set environment variables
export GEMINI_API_KEY="your_api_key_here"
export GEMINI_SERVER_URL="https://generativelanguage.googleapis.com"

# Test the provider
docker run --rm \
  -v $(pwd)/.env:/opt/pentagi/.env \
  vxcontrol/pentagi /opt/pentagi/bin/ctester -type gemini
```
```yaml
# gemini-custom.yml
simple:
  model: "gemini-2.5-pro"
  temperature: 0.3
  top_p: 0.4
  max_tokens: 8000
  price:
    input: 2.50
    output: 10.00

coder:
  model: "gemini-2.5-flash"
  temperature: 0.05
  top_p: 0.1
  max_tokens: 16000
  price:
    input: 0.15
    output: 0.60
```
```bash
# Using pre-configured Gemini provider
docker run --rm \
  -v $(pwd)/.env:/opt/pentagi/.env \
  vxcontrol/pentagi /opt/pentagi/bin/ctester \
  -config /opt/pentagi/conf/gemini.provider.yml

# Using custom configuration
docker run --rm \
  -v $(pwd)/.env:/opt/pentagi/.env \
  -v $(pwd)/gemini-custom.yml:/opt/pentagi/gemini-custom.yml \
  vxcontrol/pentagi /opt/pentagi/bin/ctester \
  -type gemini \
  -config /opt/pentagi/gemini-custom.yml
```
```bash
# Google AI Configuration
GEMINI_API_KEY=your_api_key_here
GEMINI_SERVER_URL=https://generativelanguage.googleapis.com

# Optional: Proxy settings
PROXY_URL=http://your-proxy:port
```
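As a quick sanity check before launching containers, a small shell helper (hypothetical, not part of PentAGI) can confirm the required variable is non-empty:

```shell
# check_gemini_env: succeed only when GEMINI_API_KEY is set and non-empty.
# Hypothetical helper, not shipped with PentAGI.
check_gemini_env() {
  [ -n "${GEMINI_API_KEY:-}" ]
}

# Example usage:
export GEMINI_API_KEY="your_api_key_here"
check_gemini_env && echo "GEMINI_API_KEY is set"
```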
The Google AI provider is automatically available when `GEMINI_API_KEY` is set.

When tuning a custom configuration:

- Adjust `max_tokens` limits based on your use case
- Use lower `temperature` values for deterministic tasks
- Tune `top_p` to balance creativity and consistency

## Troubleshooting

**API Key Issues**
```
Error: failed to create gemini provider: invalid API key
```

**Model Not Found**

```
Error: model "gemini-x.x-xxx" not found
```

**Rate Limiting**

```
Error: quota exceeded
```

**Network Issues**

```
Error: connection timeout
```
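For connection timeouts, a basic reachability probe can help isolate network problems from provider problems. This is a sketch that assumes `curl` is available on the host and honors the optional `PROXY_URL` setting:

```shell
# Probe the Gemini endpoint, honoring an optional proxy.
# Any HTTP status code in the output proves basic reachability;
# "unreachable" points to a network or proxy problem.
url="${GEMINI_SERVER_URL:-https://generativelanguage.googleapis.com}"
proxy_args=""
[ -n "${PROXY_URL:-}" ] && proxy_args="-x $PROXY_URL"
curl -sS $proxy_args --max-time 10 -o /dev/null -w "%{http_code}\n" "$url" \
  || echo "unreachable"
```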
```bash
# Test basic functionality
docker run --rm \
  -v $(pwd)/.env:/opt/pentagi/.env \
  vxcontrol/pentagi /opt/pentagi/bin/ctester \
  -type gemini \
  -agent simple \
  -prompt "Hello, world!"

# Test JSON functionality
docker run --rm \
  -v $(pwd)/.env:/opt/pentagi/.env \
  vxcontrol/pentagi /opt/pentagi/bin/ctester \
  -type gemini \
  -agent simple_json \
  -prompt "Generate a JSON object with name and age fields"

# Test all agents
docker run --rm \
  -v $(pwd)/.env:/opt/pentagi/.env \
  vxcontrol/pentagi /opt/pentagi/bin/ctester \
  -type gemini
```
When reporting provider-specific issues, include the provider type (`gemini`) along with the relevant error output.