
# OpenAI

To use Open Interpreter with a model from OpenAI, simply run:

<CodeGroup>

```bash Terminal
interpreter
```

```python Python
from interpreter import interpreter

interpreter.chat()
```

</CodeGroup>

This will default to gpt-4-turbo, which is the most capable publicly available model for code interpretation (Open Interpreter was designed to be used with gpt-4).
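
If you want to confirm which model is configured before starting a chat, the same `interpreter.llm.model` attribute used below can be read directly. A minimal sketch (the exact default string may vary between Open Interpreter versions):

```python
from interpreter import interpreter

# Inspect the currently configured model before chatting.
# With the default noted above this prints "gpt-4-turbo",
# though the default may differ in other versions.
print(interpreter.llm.model)
```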

To run a specific model from OpenAI, set the model flag:

<CodeGroup>

```bash Terminal
interpreter --model gpt-3.5-turbo
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "gpt-3.5-turbo"
interpreter.chat()
```

</CodeGroup>

## Supported Models

We support any model on OpenAI's models page:

<CodeGroup>

```bash Terminal
interpreter --model gpt-4o
```

```python Python
from interpreter import interpreter

interpreter.llm.model = "gpt-4o"
```

</CodeGroup>

## Required Environment Variables

Set the following environment variables to use these models.

| Environment Variable | Description | Where to Find |
| --- | --- | --- |
| `OPENAI_API_KEY` | The API key for authenticating to OpenAI's services. | OpenAI Account Page |
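
If you prefer to set the key from a script instead of your shell profile, one option is to export it into the process environment before chatting. A minimal sketch (the key value is a placeholder; in practice, load it from a secrets manager or your environment rather than hard-coding it):

```python
import os

from interpreter import interpreter

# Placeholder value; never commit a real key to source control.
os.environ["OPENAI_API_KEY"] = "sk-..."

interpreter.chat()
```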