Works with: Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.
LIVE DEMO - https://litellm.ai/playground
Uses Together AI's CodeLlama to answer coding questions, with GPT-4 + Claude-2 as backups (you can easily switch this to any model from Huggingface, Replicate, Cohere, AI21, Azure, OpenAI, etc.)
Sets a default system prompt for guardrails: `system_prompt = "Only respond to questions about code. Say 'I don't know' to anything outside of that."`
Integrates with Promptlayer for model + prompt tracking
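A minimal sketch of how these pieces can fit together with litellm's Python `completion()` (the model string follows the example response below, though your litellm version may expect a provider prefix; the PromptLayer callback and message construction are assumptions based on litellm's callback docs):

```python
import litellm
from litellm import completion

# Log successful completions to PromptLayer (assumes PROMPTLAYER_API_KEY is set)
litellm.success_callback = ["promptlayer"]

# Default guardrail system prompt
system_prompt = "Only respond to questions about code. Say 'I don't know' to anything outside of that."

def answer_coding_question(question: str):
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]
    # Primary model; fallbacks to GPT-4 / Claude-2 are sketched under the feature list below
    return completion(model="togethercomputer/CodeLlama-34b-Instruct", messages=messages)
```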
Example output
Consistent Input/Output Format - Call all models using the OpenAI format: `completion(model, messages)`. Text responses are always available at `['choices'][0]['message']['content']`; streamed responses at `['choices'][0]['delta']['content']`
Error Handling - Uses model fallbacks (if CodeLlama fails, try GPT-4) with cooldowns and retries; a rough client-side sketch follows this feature list
Prompt Logging - Log successful completions to promptlayer for testing + iterating on your prompts in production! (Learn more: https://litellm.readthedocs.io/en/latest/advanced/)
Example: Logs sent to PromptLayer
Token Usage & Spend - Track Input + Completion tokens used + Spend/model - https://docs.litellm.ai/docs/token_usage
Caching - Provides in-memory cache + GPT-Cache integration for more advanced usage - https://docs.litellm.ai/docs/caching/gpt_cache
Streaming & Async Support - Return generators to stream text responses - TEST IT 👉 https://litellm.ai/
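A rough sketch of what the fallback and streaming behavior can look like on the client side with `completion()` (the model list, error handling, and streaming loop are illustrative, not this server's exact implementation; cooldowns and retries are omitted):

```python
from litellm import completion

# Try CodeLlama first; fall back to GPT-4, then Claude-2, if a call fails
fallback_models = ["togethercomputer/CodeLlama-34b-Instruct", "gpt-4", "claude-2"]

def completion_with_fallbacks(messages):
    last_error = None
    for model in fallback_models:
        try:
            return completion(model=model, messages=messages)
        except Exception as err:
            last_error = err
    raise last_error

# Non-streaming: text is at ['choices'][0]['message']['content']
response = completion_with_fallbacks([{"role": "user", "content": "write me a function to print hello world"}])
print(response['choices'][0]['message']['content'])

# Streaming: pass stream=True and read each chunk at ['choices'][0]['delta']['content']
for chunk in completion(model="gpt-4", messages=[{"role": "user", "content": "explain list comprehensions"}], stream=True):
    delta = chunk['choices'][0]['delta']
    if 'content' in delta and delta['content']:
        print(delta['content'], end="")
```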
/chat/completions (POST)
This endpoint is used to generate chat completions for 50+ supported LLM API models. Use llama2, GPT-4, Claude-2, etc.
This API endpoint accepts all inputs as raw JSON and expects the following:
prompt (string, required): The user's coding-related question
Optional parameters: temperature, functions, function_call, top_p, n, stream. See the full list of supported inputs here: https://litellm.readthedocs.io/en/latest/input/
Example JSON body (for claude-2):
```json
{
  "prompt": "write me a function to print hello world"
}
```
```python
import requests
import json

url = "http://localhost:4000/chat/completions"

payload = json.dumps({
  "prompt": "write me a function to print hello world"
})
headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
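The optional parameters listed above go in the same JSON body; an illustrative variant (assuming the server forwards them to the underlying model, per the input docs linked above):

```python
import requests
import json

url = "http://localhost:4000/chat/completions"

# Same endpoint, with a couple of the optional parameters from the list above
payload = json.dumps({
    "prompt": "write me a function to print hello world",
    "temperature": 0.3,
    "top_p": 0.9
})
headers = {'Content-Type': 'application/json'}

response = requests.post(url, headers=headers, data=payload)
print(response.text)
```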
All responses from the server are returned in the following format (for all LLM models). More info on output here: https://litellm.readthedocs.io/en/latest/output/
```json
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": ".\n\n```\ndef print_hello_world():\n print(\"hello world\")\n",
        "role": "assistant"
      }
    }
  ],
  "created": 1693279694.6474009,
  "model": "togethercomputer/CodeLlama-34b-Instruct",
  "usage": {
    "completion_tokens": 14,
    "prompt_tokens": 28,
    "total_tokens": 42
  }
}
```
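To pull the completion text and token counts out of that response in Python (the field names follow the example above):

```python
data = response.json()  # response from the requests call above

# The completion text lives at the same path for every model
answer = data["choices"][0]["message"]["content"]

# Token usage, useful for the spend tracking described above
usage = data["usage"]
print(f"{data['model']}: {usage['prompt_tokens']} prompt + {usage['completion_tokens']} completion = {usage['total_tokens']} total tokens")
print(answer)
```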
1. Clone the repo: `git clone https://github.com/BerriAI/litellm-CodeLlama-server`
2. Install dependencies: `pip install -r requirements.txt`
3. Set your LLM API key: `os.environ['OPENAI_API_KEY'] = "YOUR_API_KEY"`, or set OPENAI_API_KEY in your .env file
4. Run the server: `python main.py`
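Since this server calls Together AI's CodeLlama with GPT-4 and Claude-2 as backups, you will likely need keys for all three providers, not just OpenAI. One way to set them (the exact variable names here are assumptions; check litellm's provider docs for your version):

```python
import os

# Assumed environment variable names for each provider used by this server
os.environ["TOGETHERAI_API_KEY"] = "your-together-ai-key"   # CodeLlama via Together AI
os.environ["OPENAI_API_KEY"] = "your-openai-key"            # GPT-4 fallback
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-key"      # Claude-2 fallback
```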
Quick Start: Deploy on Railway
GCP, AWS, Azure
This project includes a Dockerfile, allowing you to build and deploy a Docker image to the cloud provider of your choice.