# gollem + LiteLLM: Go Agent Framework
A working example showing how to use gollem, a production-grade Go agent framework, with LiteLLM as a proxy gateway. This lets Go developers access 100+ LLM providers through a single proxy while keeping compile-time type safety for tools and structured output.
## Quick Start

Start the LiteLLM proxy:

```bash
# Simple start with a single model
litellm --model gpt-4o

# Or with the example config for multi-provider access
litellm --config proxy_config.yaml
```
Then install dependencies and run the examples:

```bash
# Install Go dependencies
go mod tidy

# Basic agent
go run ./basic

# Agent with type-safe tools
go run ./tools

# Streaming responses
go run ./streaming
```
## Proxy Configuration

The included `proxy_config.yaml` sets up three providers through LiteLLM (abridged here to the model names):

```yaml
model_list:
  - model_name: gpt-4o        # OpenAI
  - model_name: claude-sonnet # Anthropic
  - model_name: gemini-pro    # Google Vertex AI
```
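For reference, a full LiteLLM `model_list` entry pairs each `model_name` with `litellm_params`. The sketch below uses LiteLLM's `os.environ/` syntax for API keys; the exact provider model IDs and the GCP project are placeholders, so substitute your own:

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514  # pick the Claude model you want
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: gemini-pro
    litellm_params:
      model: vertex_ai/gemini-1.5-pro
      vertex_project: my-gcp-project   # placeholder project ID
      vertex_location: us-central1
```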
Switch providers in Go by changing a single string; no code changes needed:

```go
model := openai.NewLiteLLM("http://localhost:4000",
    openai.WithModel("gpt-4o"), // OpenAI
    // openai.WithModel("claude-sonnet"), // Anthropic
    // openai.WithModel("gemini-pro"), // Google
)
```
## Examples

- **`basic/`** (Basic Agent): Connects gollem to LiteLLM and runs a simple prompt. Demonstrates the `NewLiteLLM` constructor and basic agent creation.
- **`tools/`** (Type-Safe Tools): Shows gollem's compile-time type-safe tool framework working through LiteLLM's tool-use passthrough. Tool parameters are Go structs with JSON tags, and the JSON schema is derived from the struct definitions automatically (see the sketch after this list).
- **`streaming/`** (Streaming Responses): Real-time token streaming using Go 1.23+ range-over-function iterators, proxied through LiteLLM's SSE passthrough.
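To make the struct-tag pattern concrete, here is a minimal sketch. `WeatherArgs` and `getWeather` are hypothetical and not part of gollem's API; they only illustrate why typed parameters beat a raw `map[string]any`:

```go
package main

import (
	"context"
	"fmt"
)

// WeatherArgs is a hypothetical tool-parameter struct. The JSON tags
// drive the schema sent to the model, while the Go compiler checks
// every field access, so a typo in a handler fails at build time.
type WeatherArgs struct {
	City string `json:"city"`           // required: city name
	Unit string `json:"unit,omitempty"` // optional: "celsius" or "fahrenheit"
}

// The handler receives an already-decoded struct instead of a raw
// map, which is where the compile-time safety comes from.
func getWeather(ctx context.Context, args WeatherArgs) (string, error) {
	return fmt.Sprintf("sunny in %s", args.City), nil
}

func main() {
	out, _ := getWeather(context.Background(), WeatherArgs{City: "Tokyo"})
	fmt.Println(out)
}
```

And the range-over-function shape that the streaming example relies on looks like this; `fakeStream` is a stand-in for whatever token source the agent exposes:

```go
package main

import (
	"fmt"
	"iter"
)

// fakeStream yields tokens as an iter.Seq[string], the Go 1.23
// range-over-func iterator shape used by the streaming example.
func fakeStream() iter.Seq[string] {
	return func(yield func(string) bool) {
		for _, tok := range []string{"Hello", ", ", "world"} {
			if !yield(tok) { // stop if the consumer breaks out
				return
			}
		}
	}
}

func main() {
	// Consuming the stream is a plain range loop over the iterator.
	for token := range fakeStream() {
		fmt.Print(token)
	}
}
```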
## How It Works

Gollem's `openai.NewLiteLLM()` constructor creates an OpenAI-compatible provider pointed at your LiteLLM proxy. Since LiteLLM speaks the OpenAI API protocol, everything works out of the box:

```
Go App (gollem) → LiteLLM Proxy → OpenAI / Anthropic / Google / ...
```
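Put together, a minimal `basic/`-style program looks roughly like the sketch below. Only `openai.NewLiteLLM` and `openai.WithModel` appear in this README's own snippets; the import paths, the `gollem.New` agent constructor, and the `Prompt` call are assumptions about gollem's API, so check them against the package docs:

```go
package main

import (
	"context"
	"log"

	// Import paths are assumptions; verify against gollem's module docs.
	"github.com/m-mizutani/gollem"
	"github.com/m-mizutani/gollem/llm/openai"
)

func main() {
	ctx := context.Background()

	// Point the OpenAI-compatible client at the local LiteLLM proxy,
	// using the constructor shown above.
	model := openai.NewLiteLLM("http://localhost:4000",
		openai.WithModel("gpt-4o"),
	)

	// gollem.New and Prompt are assumed here; the real agent API may differ.
	agent := gollem.New(model)
	if err := agent.Prompt(ctx, "Say hello from Go!"); err != nil {
		log.Fatal(err)
	}
}
```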
Because the client is plain Go, `go build` produces a single binary; no pip, venv, or Docker needed.

## Environment Variables

```bash
# Required for providers you want to use (set in LiteLLM config or env)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

# Optional: point to a non-default LiteLLM proxy
export LITELLM_PROXY_URL="http://localhost:4000"
```
## Troubleshooting

- **Connection errors?** Make sure the proxy is running (`litellm --model gpt-4o`) and reachable at the expected URL (default `http://localhost:4000`).
- **Model not found?** Run `curl http://localhost:4000/models` to see the available models, and check that the name matches an entry in `proxy_config.yaml`.
- **Tool calls not working?** Confirm the selected model actually supports tool use; LiteLLM only passes tool calls through for providers that implement them.
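For example, a quick reachability check against the default port (`-i` prints the HTTP status alongside the model list):

```bash
# HTTP 200 plus a JSON model list confirms the proxy is up
curl -i http://localhost:4000/models
```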