---
title: OpenRouter
---
To use Open Interpreter with a model from OpenRouter, set the `model` flag to begin with `openrouter/`:
<CodeGroup>

```bash Terminal
interpreter --model openrouter/openai/gpt-3.5-turbo
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "openrouter/openai/gpt-3.5-turbo"
interpreter.chat()
```

</CodeGroup>
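If you want a single scripted exchange rather than the interactive session that `interpreter.chat()` opens when called with no arguments, you can pass a message string instead. A minimal sketch (the prompt text here is purely illustrative):

```python
from interpreter import interpreter

# Prefixing the model ID with "openrouter/" routes the request
# through OpenRouter.
interpreter.llm.model = "openrouter/openai/gpt-3.5-turbo"

# Passing a string runs one exchange instead of opening the
# interactive prompt.
interpreter.chat("What operating system am I on?")
```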
We support any model on [OpenRouter's models page](https://openrouter.ai/models):
<CodeGroup>

```bash Terminal
interpreter --model openrouter/openai/gpt-3.5-turbo
interpreter --model openrouter/openai/gpt-3.5-turbo-16k
interpreter --model openrouter/openai/gpt-4
interpreter --model openrouter/openai/gpt-4-32k
interpreter --model openrouter/anthropic/claude-2
interpreter --model openrouter/anthropic/claude-instant-v1
interpreter --model openrouter/google/palm-2-chat-bison
interpreter --model openrouter/google/palm-2-codechat-bison
interpreter --model openrouter/meta-llama/llama-2-13b-chat
interpreter --model openrouter/meta-llama/llama-2-70b-chat
```

```python Python
interpreter.llm.model = "openrouter/openai/gpt-3.5-turbo"
interpreter.llm.model = "openrouter/openai/gpt-3.5-turbo-16k"
interpreter.llm.model = "openrouter/openai/gpt-4"
interpreter.llm.model = "openrouter/openai/gpt-4-32k"
interpreter.llm.model = "openrouter/anthropic/claude-2"
interpreter.llm.model = "openrouter/anthropic/claude-instant-v1"
interpreter.llm.model = "openrouter/google/palm-2-chat-bison"
interpreter.llm.model = "openrouter/google/palm-2-codechat-bison"
interpreter.llm.model = "openrouter/meta-llama/llama-2-13b-chat"
interpreter.llm.model = "openrouter/meta-llama/llama-2-70b-chat"
```

</CodeGroup>
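Because the model is a plain attribute, you can also switch between the models above at runtime. A small sketch, assuming `interpreter.llm.model` can be reassigned between calls and that `interpreter.messages` holds the conversation history (as in Open Interpreter's Python API):

```python
from interpreter import interpreter

# Try the same prompt against two of the models listed above.
for model in [
    "openrouter/openai/gpt-3.5-turbo",
    "openrouter/anthropic/claude-2",
]:
    interpreter.messages = []  # clear history so each model starts fresh
    interpreter.llm.model = model  # the next request uses this model
    interpreter.chat("Reply with one short sentence: what model are you?")
```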
Set the following environment variables, which LiteLLM reads, to use these models (see LiteLLM's documentation to learn how):
| Environment Variable | Description | Where to Find |
| --- | --- | --- |
| `OPENROUTER_API_KEY` | The API key for authenticating to OpenRouter's services. | OpenRouter Account Page |
| `OR_SITE_URL` | An optional site URL for tracking usage, such as `https://github.com/openinterpreter/open-interpreter/`. | Your choice |
| `OR_APP_NAME` | An optional app name for tracking usage, such as `"Open Interpreter"`. | Your choice |
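You can also set these from Python before making the first request. A minimal sketch: only `OPENROUTER_API_KEY` is required, the other two are optional usage-tracking fields, and every value shown is a placeholder you would replace with your own:

```python
import os

from interpreter import interpreter

# Required: authenticates requests to OpenRouter.
# "sk-or-..." is a placeholder; paste your real key here.
os.environ["OPENROUTER_API_KEY"] = "sk-or-..."

# Optional: identify your app to OpenRouter for usage tracking.
os.environ["OR_SITE_URL"] = "https://github.com/openinterpreter/open-interpreter/"
os.environ["OR_APP_NAME"] = "Open Interpreter"

interpreter.llm.model = "openrouter/openai/gpt-3.5-turbo"
interpreter.chat()
```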