docs/language-models/hosted-models/openai.mdx
To use Open Interpreter with a model from OpenAI, simply run:
<CodeGroup>

```bash Terminal
interpreter
```

```python Python
from interpreter import interpreter

interpreter.chat()
```

</CodeGroup>
This will default to `gpt-4-turbo`, which is the most capable publicly available model for code interpretation (Open Interpreter was designed to be used with `gpt-4`).
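If you want to verify which model is configured, you can read the same `interpreter.llm.model` attribute used in the examples below. A minimal sketch, assuming a fresh session with no model flag set:

```python Python
from interpreter import interpreter

# Inspect the currently configured model; by default this is
# gpt-4-turbo, as described above.
print(interpreter.llm.model)
```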
To run a specific model from OpenAI, set the `model` flag:
<CodeGroup>

```bash Terminal
interpreter --model gpt-3.5-turbo
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "gpt-3.5-turbo"
interpreter.chat()
```

</CodeGroup>
We support any model on [OpenAI's models page](https://platform.openai.com/docs/models):
<CodeGroup>

```bash Terminal
interpreter --model gpt-4o
```

```python Python
interpreter.llm.model = "gpt-4o"
```

</CodeGroup>
Set the following environment variable to use these models.
| Environment Variable | Description | Where to Find |
| --- | --- | --- |
| `OPENAI_API_KEY` | The API key for authenticating to OpenAI's services. | [OpenAI Account Page](https://platform.openai.com/account/api-keys) |
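For example, you can export the key in your shell before launching Open Interpreter, or set it from Python via `os.environ`. This is a minimal sketch; the `sk-...` value is a placeholder for your own key:

<CodeGroup>

```bash Terminal
# sk-... is a placeholder; substitute your real OpenAI API key.
export OPENAI_API_KEY="sk-..."
interpreter
```

```python Python
import os

# sk-... is a placeholder; substitute your real OpenAI API key.
os.environ["OPENAI_API_KEY"] = "sk-..."

from interpreter import interpreter

interpreter.chat()
```

</CodeGroup>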