# Release Notes Generation Instructions

_Source: `cookbook/misc/RELEASE_NOTES_GENERATION_INSTRUCTIONS.md`_
This document provides comprehensive instructions for AI agents to generate release notes for LiteLLM following the established format and style.
The GitHub release page (e.g. https://github.com/BerriAI/litellm/releases/tag/v1.83.3-stable) does not list the real changelog directly. The "What's Changed" section contains staging PRs that each bundle many individual commits/PRs. For example:
- Litellm oss staging 03 14 2026 by @RheagalFire in #23686
- Litellm ryan march 16 by @ryan-crabbe in #23822

To get the real changelog, you MUST click into each staging PR (e.g. #23686, #23822), open its Commits tab, and extract every underlying commit/PR (look for the `(#NNNNN)` suffix on commit titles). Those underlying PRs — not the staging PRs — are what get categorized in the release notes. Never treat a staging PR title as a single changelog entry.
IMPORTANT — staging PRs are not the complete source. Some PRs land on the release branch before the staging PRs and are therefore not reachable via gh api /pulls/<staging>/commits. GitHub's auto-generated "What's Changed" on the release page also misses these. To catch every PR in the release, you MUST additionally walk the full git log range between the previous release's commit and this release's commit:
```bash
git fetch origin --tags
git log <prev_release_commit>..<this_release_commit> --oneline | grep -oE '#[0-9]+' | sort -u
```
Union the PR set from the staging-PR walk with the PR set from git log. Any PR in git log but missing from your staging-expanded set is almost certainly a content PR that merged directly to the release branch — fetch its title/body with gh pr view <N> and categorize it. Do not trust the GH release body or the staging PRs alone as the authoritative list.
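The union-and-flag step above can be sketched in Python. The function name and inputs are illustrative; the two sets are assumed to be PR numbers collected from the staging-PR walk and the `git log` scan respectively:

```python
# Sketch of the PR-set union described above. Inputs are PR numbers from the
# two sources; names here are illustrative, not part of any LiteLLM tooling.

def union_pr_sets(staging_expanded: set[int], gitlog_prs: set[int]) -> dict:
    """Union both PR sets and flag PRs that only appear in git log."""
    all_prs = staging_expanded | gitlog_prs
    # PRs in git log but absent from the staging expansion most likely merged
    # directly to the release branch; fetch each with `gh pr view <N>` and
    # categorize it manually.
    direct_merges = gitlog_prs - staging_expanded
    return {"all": sorted(all_prs), "needs_manual_fetch": sorted(direct_merges)}

result = union_pr_sets({23690, 23701}, {23690, 23701, 23755})
print(result["needs_manual_fetch"])  # [23755] -> fetch with gh pr view
```

Everything in `needs_manual_fetch` then goes through the same categorization pipeline as the staging-expanded PRs.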
Sanity check for new contributors. The GH release body's "New Contributors" list is a floor, not authoritative. For every PR author who appears in the release (including underlying PRs from staging and PRs found only via git log), verify whether they are a first-time contributor by running:
```bash
gh api "search/issues?q=is:pr+author:<login>+repo:BerriAI/litellm+is:merged&sort=created&order=asc" --jq '.items[0] | {n:.number, merged:.closed_at}'
```
If the author's earliest merged PR number matches a PR in this release window, they are a new contributor. If their earliest merged PR predates the previous release tag, they are not. Do not copy the GH release body's list blindly — it can both miss contributors (PRs that merged via an older dev branch) and falsely include contributors whose "first" PR in this window was not actually their first ever.
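The decision rule above reduces to a set-membership check. A minimal sketch, assuming `earliest_merged` maps each release author to their earliest merged PR number (from the `gh` search query) and `release_prs` is the unioned PR set for this release; all data below is illustrative:

```python
# Sketch of the new-contributor check described above. An author is a new
# contributor iff their earliest merged PR number falls in this release.

def new_contributors(earliest_merged: dict[str, int], release_prs: set[int]) -> list[str]:
    """Return authors whose earliest merged PR is inside this release window."""
    return sorted(login for login, pr in earliest_merged.items() if pr in release_prs)

authors = {"first-timer": 23755, "long-time-dev": 19001}
print(new_contributors(authors, {23690, 23701, 23755}))  # ['first-timer']
```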
```bash
# Check git diff for model pricing changes
git diff <previous_commit_hash> HEAD -- model_prices_and_context_window.json
```
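Rather than reading the raw diff, you can load the old and new snapshots of `model_prices_and_context_window.json` and take the key difference. A sketch with tiny illustrative snapshots (not real entries):

```python
# Sketch: newly added models are the top-level keys present only in the new
# snapshot of model_prices_and_context_window.json. Snapshots are illustrative.
import json

old_snapshot = json.loads('{"gpt-4.1": {"input_cost_per_token": 2e-06}}')
new_snapshot = json.loads(
    '{"gpt-4.1": {"input_cost_per_token": 2e-06},'
    ' "openrouter/openai/gpt-4.1": {"input_cost_per_token": 2e-06}}'
)

# Keys only in the new snapshot are the models added this release.
added_models = sorted(set(new_snapshot) - set(old_snapshot))
print(added_models)  # ['openrouter/openai/gpt-4.1']
```

In practice you would obtain the two snapshots with `git show <previous_commit_hash>:model_prices_and_context_window.json` and the working-tree copy.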
Key Analysis Points:
Follow this exact structure based on recent stable releases (v1.76.3-stable, v1.77.2-stable, v1.77.5-stable):
```markdown
---
title: "v1.77.X-stable - [Key Theme]"
slug: "v1-77-X"
date: YYYY-MM-DDTHH:mm:ss
authors: [standard author block]
hide_table_of_contents: false
---

## Deploy this version

[Docker and pip installation tabs]

## Key Highlights

[3-5 bullet points of major features - prioritize MCP OAuth 2.0, scheduled key rotations, and major model updates]

## New Providers and Endpoints

### New Providers

[Table with Provider, Supported Endpoints, Description columns]

### New LLM API Endpoints

[Optional table for new endpoint additions with Endpoint, Method, Description, Documentation columns]

## New Models / Updated Models

#### New Model Support

[Model pricing table]

#### Features

[Provider-specific features organized by provider]

### Bug Fixes

[Provider-specific bug fixes organized by provider]

## LLM API Endpoints

#### Features

[API-specific features organized by API type]

#### Bugs

[General bug fixes]

## Management Endpoints / UI

#### Features

[UI and management features - group by functionality like Proxy CLI Auth, Virtual Keys, Models + Endpoints]

#### Bugs

[Management-related bug fixes]

## AI Integrations

### Logging

[Logging integrations organized by provider with proper doc links, includes General subsection]

### Guardrails

[Guardrail-specific features and fixes]

### Prompt Management

[Prompt management integrations like BitBucket]

### Secret Managers

[Secret manager integrations - AWS, HashiCorp Vault, CyberArk, etc.]

## Spend Tracking, Budgets and Rate Limiting

[Cost tracking, service tier pricing, rate limiting improvements]

## MCP Gateway

[MCP-specific features, OAuth 2.0, configuration improvements]

## Performance / Loadbalancing / Reliability improvements

[Infrastructure improvements, memory fixes, performance optimizations]

## Documentation Updates

[Documentation improvements, guides, corrections - separate section for visibility]

## New Contributors

[List of first-time contributors]

## Full Changelog

[Link to GitHub comparison]
```
Performance Improvements:

New Models/Updated Models:

- `#### New Model Support` - pricing table
- `#### Features` - organized by provider with documentation links
- `### Bug Fixes` - provider-specific bug fixes
- `#### New Provider Support` - major new provider integrations: **[Provider Name](../../docs/providers/[provider])**

LLM API Endpoints:

- `#### Features` - organized by API type (Responses API, Batch API, etc.)
- `#### Bugs` - general bug fixes under General category

UI/Management:

AI Integrations:

- `### Logging` - organized by integration provider with proper doc links, includes General subsection
- `### Guardrails` - guardrail-specific features and fixes
- `### Prompt Management` - prompt management integrations
- `### Secret Managers` - secret manager integrations

Link to docs when:

Link format: `../../docs/[category]/[specific_doc]`

Common doc paths:

- `../../docs/providers/[provider]` - Provider-specific docs
- `../../docs/image_generation` - Image generation
- `../../docs/video_generation` - Video generation (if exists)
- `../../docs/response_api` - Responses API
- `../../docs/proxy/logging` - Logging integrations
- `../../docs/proxy/guardrails` - Guardrails
- `../../docs/pass_through/[provider]` - Passthrough endpoints

From git diff analysis, create tables like:
| Provider | Model | Context Window | Input ($/1M tokens) | Output ($/1M tokens) | Features |
| -------- | ----- | -------------- | ------------------- | -------------------- | -------- |
| OpenRouter | `openrouter/openai/gpt-4.1` | 1M | $2.00 | $8.00 | Chat completions with vision |
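A row like the one above can be generated mechanically from a model's JSON entry, following the "Extract from JSON" mapping. A sketch; the helper function and the sample entry are illustrative:

```python
# Sketch: build one markdown pricing-table row from a
# model_prices_and_context_window.json entry. Field names follow the
# "Extract from JSON" mapping; the sample entry is illustrative.

def pricing_row(provider: str, model: str, entry: dict) -> str:
    ctx = entry["max_input_tokens"]
    inp = entry["input_cost_per_token"] * 1_000_000   # -> $ per 1M input tokens
    out = entry["output_cost_per_token"] * 1_000_000  # -> $ per 1M output tokens
    features = ", ".join(k.removeprefix("supports_")
                         for k, v in sorted(entry.items())
                         if k.startswith("supports_") and v)
    return f"| {provider} | `{model}` | {ctx:,} | ${inp:.2f} | ${out:.2f} | {features} |"

row = pricing_row("OpenRouter", "openrouter/openai/gpt-4.1", {
    "max_input_tokens": 1_047_576,
    "input_cost_per_token": 2e-06,
    "output_cost_per_token": 8e-06,
    "supports_vision": True,
})
print(row)
```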
Extract from JSON:

- `max_input_tokens` → Context Window
- `input_cost_per_token` × 1,000,000 → Input cost
- `output_cost_per_token` × 1,000,000 → Output cost
- `supports_*` fields → Features

By Keywords in PR Title:

- `[Perf]`, Performance, RPS → Performance Improvements
- `[Bug]`, `[Bug Fix]`, Fix → Bug Fixes section
- `[Feat]`, `[Feature]`, Add support → Features section
- `[Docs]` → Documentation Updates section
- MCP, oauth, Model Context Protocol → MCP Gateway
- service_tier, priority, cost tracking → Spend Tracking, Budgets and Rate Limiting

By PR Content Analysis:
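The title-keyword rules in "By Keywords in PR Title" can be sketched as a first-pass classifier. The rule order encodes a precedence that is an assumption of this sketch; anything unmatched falls through to PR content analysis:

```python
# First-pass PR categorization from title keywords, per the rules above.
# Rule order (precedence) is an assumption of this sketch.
RULES = [
    (("[perf]", "performance", "rps"), "Performance Improvements"),
    (("mcp", "oauth", "model context protocol"), "MCP Gateway"),
    (("service_tier", "priority", "cost tracking"),
     "Spend Tracking, Budgets and Rate Limiting"),
    (("[docs]",), "Documentation Updates"),
    (("[bug]", "[bug fix]", "fix"), "Bug Fixes"),
    (("[feat]", "[feature]", "add support"), "Features"),
]

def categorize(title: str) -> str:
    t = title.lower()
    for keywords, section in RULES:
        if any(k in t for k in keywords):
            return section
    return "Uncategorized"  # fall back to PR content analysis

print(categorize("[Feat] Add GPT-4.1 on OpenRouter"))       # Features
print(categorize("Fix streaming usage in Responses API"))   # Bug Fixes
```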
Special Categorization Rules:
Tone:
Formatting:
[PR #XXXXX](https://github.com/BerriAI/litellm/pull/XXXXX)

Warnings/Notes:
Before finalizing:
```markdown
## MM/DD/YYYY

* New Models / Updated Models: XX
* LLM API Endpoints: XX
* Management Endpoints / UI: XX
* Logging / Guardrail / Prompt Management Integrations: XX
* Spend Tracking, Budgets and Rate Limiting: XX
* MCP Gateway: XX
* Performance / Loadbalancing / Reliability improvements: XX
* Documentation Updates: XX
```
Performance Changes:
```markdown
- **+400 RPS Performance Boost** - Description - [PR #XXXXX](link)
```
New Models: Always include pricing table and feature highlights
Breaking Changes:
```markdown
:::warning
This release has a known issue...
:::
```
Provider Features (New Models / Updated Models section):
```markdown
#### Features

- **[Provider Name](../../docs/providers/provider)**
  - Feature description - [PR #XXXXX](link)
  - Another feature description - [PR #YYYYY](link)
```
API Features (LLM API Endpoints section):
```markdown
#### Features

- **[API Name](../../docs/api_path)**
  - Feature description - [PR #XXXXX](link)
  - Another feature - [PR #YYYYY](link)
- **General**
  - Miscellaneous improvements - [PR #ZZZZZ](link)
```
Integration Features (Logging / Guardrail Integrations section):
```markdown
#### Features

- **[Integration Name](../../docs/proxy/logging#integration)**
  - Feature description - [PR #XXXXX](link)
  - Bug fix description - [PR #YYYYY](link)
```
Bug Fixes Pattern:
```markdown
### Bug Fixes

- **[Provider/Component Name](../../docs/providers/provider)**
  - Bug fix description - [PR #XXXXX](link)
```
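The entry patterns above can be emitted mechanically once PRs are categorized. A minimal sketch that renders one grouped block; the group name, doc link, and entries are illustrative:

```python
# Sketch: render one "**[Name](link)**" group with its PR bullet list,
# matching the feature/bug-fix patterns above. Inputs are illustrative.

def render_group(name: str, doc_link: str, entries: list[tuple[str, int]]) -> str:
    """Render a group header bullet plus one sub-bullet per (description, PR#)."""
    lines = [f"- **[{name}]({doc_link})**"]
    for description, pr in entries:
        lines.append(
            f"  - {description} - [PR #{pr}](https://github.com/BerriAI/litellm/pull/{pr})"
        )
    return "\n".join(lines)

block = render_group("OpenRouter", "../../docs/providers/openrouter",
                     [("Add GPT-4.1 model family", 23701)])
print(block)
```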
Review for missing docs:
Flag for documentation needs:
MCP Gateway Section:
Spend Tracking, Budgets and Rate Limiting Section:
Documentation Updates Section:
Management Endpoints / UI Grouping:
AI Integrations Section Expansion:
New Providers and Endpoints Section:
`provider_endpoints_support.json` in the repository root (see Section 13)

Always include counts in section headers for:

- `### New Providers (X new providers)`
- `### New LLM API Endpoints (X new endpoints)`
- `#### New Model Support (X new models)`

Format:
```markdown
### New Providers (4 new providers)

| Provider | Supported LiteLLM Endpoints | Description |
| -------- | --------------------------- | ----------- |
...

### New LLM API Endpoints (2 new endpoints)

| Endpoint | Method | Description | Documentation |
| -------- | ------ | ----------- | ------------- |
...

#### New Model Support (32 new models)

| Provider | Model | Context Window | Input ($/1M tokens) | Output ($/1M tokens) | Features |
| -------- | ----- | -------------- | ------------------- | -------------------- | -------- |
...
```
Counting Rules:
When adding new providers or endpoints, you MUST also update `provider_endpoints_support.json` in the repository root.
This file tracks which endpoints are supported by each LiteLLM provider and is used to generate documentation.
Required Steps:

- Update `provider_endpoints_support.json`

Provider Entry Format:
```json
"provider_slug": {
  "display_name": "Provider Name (`provider_slug`)",
  "url": "https://docs.litellm.ai/docs/providers/provider_slug",
  "endpoints": {
    "chat_completions": true,
    "messages": true,
    "responses": true,
    "embeddings": false,
    "image_generations": false,
    "audio_transcriptions": false,
    "audio_speech": false,
    "moderations": false,
    "batches": false,
    "rerank": false,
    "a2a": true
  }
}
```
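Before committing, an entry in this format can be sanity-checked against the known endpoint types listed below. A minimal sketch; the validator itself is hypothetical, not part of the repo:

```python
# Known endpoint keys, per the "Available Endpoint Types" list below.
KNOWN_ENDPOINTS = {
    "chat_completions", "messages", "responses", "embeddings",
    "image_generations", "audio_transcriptions", "audio_speech",
    "moderations", "batches", "rerank", "ocr", "search",
    "vector_stores", "a2a",
}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry looks well-formed."""
    problems = []
    for field in ("display_name", "url", "endpoints"):
        if field not in entry:
            problems.append(f"missing field: {field}")
    for key, value in entry.get("endpoints", {}).items():
        if key not in KNOWN_ENDPOINTS:
            problems.append(f"unknown endpoint type: {key}")
        if not isinstance(value, bool):
            problems.append(f"endpoint {key} must be true/false")
    return problems

entry = {"display_name": "Provider Name (`provider_slug`)",
         "url": "https://docs.litellm.ai/docs/providers/provider_slug",
         "endpoints": {"chat_completions": True, "rerank": False}}
print(validate_entry(entry))  # [] -> entry looks well-formed
```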
Available Endpoint Types:
- `chat_completions` - `/chat/completions` endpoint
- `messages` - `/messages` endpoint (Anthropic format)
- `responses` - `/responses` endpoint (OpenAI/Anthropic unified)
- `embeddings` - `/embeddings` endpoint
- `image_generations` - `/image/generations` endpoint
- `audio_transcriptions` - `/audio/transcriptions` endpoint
- `audio_speech` - `/audio/speech` endpoint
- `moderations` - `/moderations` endpoint
- `batches` - `/batches` endpoint
- `rerank` - `/rerank` endpoint
- `ocr` - `/ocr` endpoint
- `search` - `/search` endpoint
- `vector_stores` - `/vector_stores` endpoint
- `a2a` - `/a2a/{agent}/message/send` endpoint (A2A Protocol)

Checklist:
- `provider_endpoints_support.json`

```bash
# 1. Get model changes
git diff <commit> HEAD -- model_prices_and_context_window.json

# 2. Analyze PR list for categorization
# 3. Create release notes following template
# 4. Link to appropriate documentation
# 5. Review for missing documentation needs
```
This process ensures consistent, comprehensive release notes that help users understand changes and upgrade smoothly.