# All Settings
<CardGroup cols={3}>

<Card title="Language Model Settings" icon="microchip" href="#language-model">
  Set your `model`, `api_key`, `temperature`, etc.
</Card>

<Card title="Interpreter Settings" icon="circle" iconType="solid" href="#interpreter">
  Change your `system_message`, set your interpreter to run offline, etc.
</Card>

<Card title="Code Execution Settings" icon="code" iconType="solid" href="#computer">
  Modify the `interpreter.computer`, which handles code execution.
</Card>

</CardGroup>

## Language Model

### Model Selection

Specifies which language model to use. Check out the models section for a list of available models. Open Interpreter uses LiteLLM under the hood, which supports more than 100 models.

<CodeGroup>

```bash Terminal
interpreter --model "gpt-3.5-turbo"
```

```python Python
interpreter.llm.model = "gpt-3.5-turbo"
```

```yaml Profile
llm:
  model: gpt-3.5-turbo
```

</CodeGroup>
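
Because requests are routed through LiteLLM, provider-prefixed model names generally work as well. A hedged sketch (the identifiers below are illustrative; check LiteLLM's provider docs for the exact names your provider expects):

```python
from interpreter import interpreter

# Provider-prefixed names are passed straight through to LiteLLM for routing.
interpreter.llm.model = "anthropic/claude-3-5-sonnet-20240620"  # hosted provider
# interpreter.llm.model = "ollama/llama3"                       # local Ollama model
```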

### Temperature

Sets the randomness level of the model's output. The default temperature is 0; you can set it to any value between 0 and 1. The higher the temperature, the more random and creative the output will be.

<CodeGroup>

```bash Terminal
interpreter --temperature 0.7
```

```python Python
interpreter.llm.temperature = 0.7
```

```yaml Profile
llm:
  temperature: 0.7
```

</CodeGroup>

### Context Window

Manually set the context window size in tokens for the model. For local models, a smaller context window uses less RAM, which is more suitable for most devices.

<CodeGroup>

```bash Terminal
interpreter --context_window 16000
```

```python Python
interpreter.llm.context_window = 16000
```

```yaml Profile
llm:
  context_window: 16000
```

</CodeGroup>

### Max Tokens

Sets the maximum number of tokens that the model can generate in a single response.

<CodeGroup>

```bash Terminal
interpreter --max_tokens 100
```

```python Python
interpreter.llm.max_tokens = 100
```

```yaml Profile
llm:
  max_tokens: 100
```

</CodeGroup>

### Max Output

Set the maximum number of characters for code outputs.

<CodeGroup>

```bash Terminal
interpreter --max_output 1000
```

```python Python
interpreter.llm.max_output = 1000
```

```yaml Profile
llm:
  max_output: 1000
```

</CodeGroup>

### API Base

If you are using a custom API, specify its base URL with this argument.

<CodeGroup>

```bash Terminal
interpreter --api_base "https://api.example.com"
```

```python Python
interpreter.llm.api_base = "https://api.example.com"
```

```yaml Profile
llm:
  api_base: https://api.example.com
```

</CodeGroup>

### API Key

Set your API key for authentication when making API calls. For OpenAI models, you can get your API key here.

<CodeGroup>

```bash Terminal
interpreter --api_key "your_api_key_here"
```

```python Python
interpreter.llm.api_key = "your_api_key_here"
```

```yaml Profile
llm:
  api_key: your_api_key_here
```

</CodeGroup>
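
If you prefer not to hard-code the key, it can be read from the environment instead. A minimal sketch, assuming the key is stored in OpenAI's conventional `OPENAI_API_KEY` variable (other providers use their own variable names):

```python
import os

from interpreter import interpreter

# Read the key from the environment rather than embedding it in code.
interpreter.llm.api_key = os.environ["OPENAI_API_KEY"]
```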

### API Version

Optionally set the API version to use with your selected model. (This will override environment variables.)

<CodeGroup>

```bash Terminal
interpreter --api_version 2.0.2
```

```python Python
interpreter.llm.api_version = '2.0.2'
```

```yaml Profile
llm:
  api_version: 2.0.2
```

</CodeGroup>

### LLM Supports Functions

Inform Open Interpreter that the language model you're using supports function calling.

<CodeGroup>

```bash Terminal
interpreter --llm_supports_functions
```

```python Python
interpreter.llm.supports_functions = True
```

```yaml Profile
llm:
  supports_functions: true
```

</CodeGroup>

### LLM Does Not Support Functions

Inform Open Interpreter that the language model you're using does not support function calling.

<CodeGroup>

```bash Terminal
interpreter --no-llm_supports_functions
```

```python Python
interpreter.llm.supports_functions = False
```

```yaml Profile
llm:
  supports_functions: false
```

</CodeGroup>

### Execution Instructions

If `llm.supports_functions` is `False`, this value will be added to the system message. This parameter tells language models how to execute code. It can be set to an empty string or to `False` if you don't want to tell the LLM how to do this.

<CodeGroup>

```python Python
interpreter.llm.execution_instructions = "To execute code on the user's machine, write a markdown code block. Specify the language after the ```. You will receive the output. Use any programming language."
```

```python Python
interpreter.llm.execution_instructions = False
```

</CodeGroup>

### LLM Supports Vision

Inform Open Interpreter that the language model you're using supports vision. Defaults to `False`.

<CodeGroup>

```bash Terminal
interpreter --llm_supports_vision
```

```python Python
interpreter.llm.supports_vision = True
```

```yaml Profile
llm:
  supports_vision: true
```

</CodeGroup>

## Interpreter

### Vision Mode

Enables vision mode, which adds some special instructions to the prompt and switches to `gpt-4o`.

<CodeGroup>

```bash Terminal
interpreter --vision
```

```python Python
interpreter.llm.model = "gpt-4o"  # Any vision supporting model
interpreter.llm.supports_vision = True
interpreter.llm.supports_functions = True

interpreter.custom_instructions = """The user will show you an image of the code you write. You can view images directly.
For HTML: This will be run STATELESSLY. You may NEVER write '<!-- previous code here... --!>' or `<!-- header will go here -->` or anything like that. It is CRITICAL TO NEVER WRITE PLACEHOLDERS. Placeholders will BREAK it. You must write the FULL HTML CODE EVERY TIME. Therefore you cannot write HTML piecemeal—write all the HTML, CSS, and possibly Javascript **in one step, in one code block**. The user will help you review it visually.
If the user submits a filepath, you will also see the image. The filepath and user image will both be in the user's message.
If you use `plt.show()`, the resulting image will be sent to you. However, if you use `PIL.Image.show()`, the resulting image will NOT be sent to you."""
```

```yaml Profile
loop: True

llm:
  model: "gpt-4o"
  temperature: 0
  supports_vision: True
  supports_functions: True
  context_window: 110000
  max_tokens: 4096
  custom_instructions: >
    The user will show you an image of the code you write. You can view images directly.
    For HTML: This will be run STATELESSLY. You may NEVER write '<!-- previous code here... --!>' or `<!-- header will go here -->` or anything like that. It is CRITICAL TO NEVER WRITE PLACEHOLDERS. Placeholders will BREAK it. You must write the FULL HTML CODE EVERY TIME. Therefore you cannot write HTML piecemeal—write all the HTML, CSS, and possibly Javascript **in one step, in one code block**. The user will help you review it visually.
    If the user submits a filepath, you will also see the image. The filepath and user image will both be in the user's message.
    If you use `plt.show()`, the resulting image will be sent to you. However, if you use `PIL.Image.show()`, the resulting image will NOT be sent to you.
```

</CodeGroup>

### OS Mode

Enables OS mode for multimodal models. Currently not available in Python. Check out more information on OS mode here.

<CodeGroup>

```bash Terminal
interpreter --os
```

```yaml Profile
os: true
```

</CodeGroup>

### Version

Get the current installed version number of Open Interpreter.

<CodeGroup>

```bash Terminal
interpreter --version
```

</CodeGroup>

### Open Local Models Directory

Opens the models directory. All downloaded Llamafiles are saved here.

<CodeGroup>

```bash Terminal
interpreter --local_models
```

</CodeGroup>

### Open Profiles Directory

Opens the profiles directory. New yaml profile files can be added to this directory.

<CodeGroup>

```bash Terminal
interpreter --profiles
```

</CodeGroup>

### Select Profile

Select a profile to use. If no profile is specified, the default profile will be used.

<CodeGroup>

```bash Terminal
interpreter --profile local.yaml
```

</CodeGroup>

### Help

Display all available terminal arguments.

<CodeGroup>

```bash Terminal
interpreter --help
```

</CodeGroup>

### Loop (Force Task Completion)

Runs Open Interpreter in a loop, requiring it to admit to completing or failing every task.

<CodeGroup>

```bash Terminal
interpreter --loop
```

```python Python
interpreter.loop = True
```

```yaml Profile
loop: true
```

</CodeGroup>

### Verbose

Run the interpreter in verbose mode. Debug information will be printed at each step to help diagnose issues.

<CodeGroup>

```bash Terminal
interpreter --verbose
```

```python Python
interpreter.verbose = True
```

```yaml Profile
verbose: true
```

</CodeGroup>

### Safe Mode

Enable or disable experimental safety mechanisms like code scanning. Valid options are `off`, `ask`, and `auto`.

<CodeGroup>

```bash Terminal
interpreter --safe_mode ask
```

```python Python
interpreter.safe_mode = 'ask'
```

```yaml Profile
safe_mode: ask
```

</CodeGroup>

### Auto Run

Automatically run the interpreter without requiring user confirmation.

<CodeGroup>

```bash Terminal
interpreter --auto_run
```

```python Python
interpreter.auto_run = True
```

```yaml Profile
auto_run: true
```

</CodeGroup>

### Max Budget

Sets the maximum budget limit for the session in USD.

<CodeGroup>

```bash Terminal
interpreter --max_budget 0.01
```

```python Python
interpreter.max_budget = 0.01
```

```yaml Profile
max_budget: 0.01
```

</CodeGroup>

### Local Mode

Run the model locally. Check the models page for more information.

<CodeGroup>

```bash Terminal
interpreter --local
```

```python Python
from interpreter import interpreter

interpreter.offline = True  # Disables online features like Open Procedures
interpreter.llm.model = "openai/x"  # Tells OI to send messages in OpenAI's format
interpreter.llm.api_key = "fake_key"  # LiteLLM, which we use to talk to local models, requires this
interpreter.llm.api_base = "http://localhost:1234/v1"  # Point this at any OpenAI compatible server

interpreter.chat()
```

```yaml Profile
local: true
```

</CodeGroup>

### Fast Mode

Sets the model to `gpt-3.5-turbo` and encourages it to only write code without confirmation.

<CodeGroup>

```bash Terminal
interpreter --fast
```

```yaml Profile
fast: true
```

</CodeGroup>

### Custom Instructions

Appends custom instructions to the system message. This is useful for adding information about your system, preferred languages, etc.

<CodeGroup>

```bash Terminal
interpreter --custom_instructions "This is a custom instruction."
```

```python Python
interpreter.custom_instructions = "This is a custom instruction."
```

```yaml Profile
custom_instructions: "This is a custom instruction."
```

</CodeGroup>

### System Message

We don't recommend modifying the system message, as doing so opts you out of future updates to the core system message. Use `--custom_instructions` instead to add relevant information to the system message. If you must modify the system message, you can do so by using this argument, or by changing a profile file.

<CodeGroup>

```bash Terminal
interpreter --system_message "You are Open Interpreter..."
```

```python Python
interpreter.system_message = "You are Open Interpreter..."
```

```yaml Profile
system_message: "You are Open Interpreter..."
```

</CodeGroup>
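
If you must modify it, appending in Python preserves the core system message rather than discarding it. A minimal sketch (the appended instruction is just an example):

```python
from interpreter import interpreter

# Append to the default system message instead of overwriting it.
interpreter.system_message += "\nAlways confirm before deleting files."
```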

### Disable Telemetry

Opt out of telemetry.

<CodeGroup>

```bash Terminal
interpreter --disable_telemetry
```

```python Python
interpreter.anonymized_telemetry = False
```

```yaml Profile
disable_telemetry: true
```

</CodeGroup>

### Offline

This boolean flag determines whether to run Open Interpreter offline, which disables online features like Open Procedures. Use it in conjunction with the `model` parameter to set your language model.

<CodeGroup>

```python Python
interpreter.offline = True
```

```bash Terminal
interpreter --offline true
```

```yaml Profile
offline: true
```

</CodeGroup>

### Messages

This property holds a list of messages between the user and the interpreter.

You can use it to restore a conversation:

```python
interpreter.chat("Hi! Can you print hello world?")

print(interpreter.messages)

# This would output:

# [
#    {
#       "role": "user",
#       "message": "Hi! Can you print hello world?"
#    },
#    {
#       "role": "assistant",
#       "message": "Sure!"
#    },
#    {
#       "role": "assistant",
#       "language": "python",
#       "code": "print('Hello, World!')",
#       "output": "Hello, World!"
#    }
# ]

# You can use this to restore `interpreter` to a previous conversation.
interpreter.messages = messages  # A list that resembles the one above
```
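
Since `interpreter.messages` is a plain list of dictionaries, one way to persist a conversation between sessions is to serialize it to disk. A minimal sketch, assuming an arbitrary `messages.json` path:

```python
import json

from interpreter import interpreter

# Save the current conversation.
with open("messages.json", "w") as f:
    json.dump(interpreter.messages, f)

# Later, in a fresh session: restore it before chatting again.
with open("messages.json") as f:
    interpreter.messages = json.load(f)

interpreter.chat("Where were we?")
```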

### User Message Template

A template applied to the user's message. `{content}` will be replaced with the user's message, then sent to the language model.

<CodeGroup>

```python Python
interpreter.user_message_template = "{content} Please send me some code that would be able to answer my question, in the form of ```python\n... the code ...\n``` or ```shell\n... the code ...\n```"
```

```python Python
interpreter.user_message_template = "{content}. Be concise, don't include anything unnecessary. Don't use placeholders, I can't edit code."
```

</CodeGroup>

### Always Apply User Message Template

The boolean flag for whether the User Message Template will be applied to every user message. The default is `False`, which means the template is only applied to the last user message.

<CodeGroup>

```python Python
interpreter.always_apply_user_message_template = False
```

</CodeGroup>

### Code Message Template

A template applied to the computer's output after running code. `{content}` will be replaced with the computer's output, then sent to the language model.

<CodeGroup>

```python Python
interpreter.code_output_template = "Code output: {content}\nWhat does this output mean / what's next (if anything, or are we done)?"
```

```python Python
interpreter.code_output_template = "Code output: {content}\nWhat code needs to be run next?"
```

</CodeGroup>

### Empty Code Message Template

If the computer does not output anything after code execution, this value will be sent to the language model.

<CodeGroup>

```python Python
interpreter.empty_code_output_template = "The code above was executed on my machine. It produced no text output. what's next (if anything, or are we done?)"
```

```python Python
interpreter.empty_code_output_template = "The code above was executed on my machine. It produced no text output. what's next?"
```

</CodeGroup>

### Code Output Sender

This field determines whether the computer / code output messages are sent as the `assistant` or as the `user`. The default is `user`.

<CodeGroup>

```python Python
interpreter.code_output_sender = "user"
```

```python Python
interpreter.code_output_sender = "assistant"
```

</CodeGroup>

## Computer

The `computer` object in `interpreter.computer` is a virtual computer that the AI controls. Its primary interface/function is to execute code and return the output in real-time.
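
You can also drive it directly. A minimal sketch using `computer.run`, which executes code in a given language and returns the resulting output messages (the exact return structure may vary by version):

```python
from interpreter import interpreter

# Execute a snippet on the virtual computer and capture its output.
output = interpreter.computer.run("python", "print('Hello from the computer!')")
print(output)
```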

### Offline

Running the computer in offline mode will disable some online features, like the hosted Computer API. Inherits from `interpreter.offline`.

<CodeGroup>

```python Python
interpreter.computer.offline = True
```

```yaml Profile
computer.offline: True
```

</CodeGroup>

### Verbose

This is primarily used for debugging `interpreter.computer`. Inherits from `interpreter.verbose`.

<CodeGroup>

```python Python
interpreter.computer.verbose = True
```

```yaml Profile
computer.verbose: True
```

</CodeGroup>

### Emit Images

The `emit_images` attribute in `interpreter.computer` controls whether the computer should emit images or not. This is inherited from `interpreter.llm.supports_vision`.

This is used for multimodal vs. text-only models. Running `computer.display.view()` will return an actual screenshot for multimodal models if `emit_images` is `True`. If it's `False`, `computer.display.view()` will return all the text on the screen.

Many other functions of the computer can produce image/text outputs, and this parameter controls that.

<CodeGroup>

```python Python
interpreter.computer.emit_images = True
```

```yaml Profile
computer.emit_images: True
```

</CodeGroup>
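
A short usage sketch of the behavior described above (assuming a machine with a display; `computer.display.view()` is the call this section refers to):

```python
from interpreter import interpreter

# With emit_images on, view() returns an actual screenshot (for multimodal models).
interpreter.computer.emit_images = True
screenshot = interpreter.computer.display.view()

# With emit_images off, view() returns the text on the screen instead.
interpreter.computer.emit_images = False
screen_text = interpreter.computer.display.view()
```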

### Import Computer API

Include the computer API in the system message. The default is `False`, which won't import the computer API automatically.

<CodeGroup>

```python Python
interpreter.computer.import_computer_api = True
```

```yaml Profile
computer.import_computer_api: True
```

</CodeGroup>