<Card
  title="New: Streaming responses in Python"
  icon="arrow-up-right"
  href="/usage/python/streaming-response"
>
  Learn how to build Open Interpreter into your application.
</Card>
## messages

This property holds a list of messages between the user and the interpreter.

You can use it to restore a conversation:

```python
interpreter.chat("Hi! Can you print hello world?")

print(interpreter.messages)
```

This would output:

```json
[
  {
    "role": "user",
    "message": "Hi! Can you print hello world?"
  },
  {
    "role": "assistant",
    "message": "Sure!"
  },
  {
    "role": "assistant",
    "language": "python",
    "code": "print('Hello, World!')",
    "output": "Hello, World!"
  }
]
```
You can use this to restore `interpreter` to a previous conversation:

```python
interpreter.messages = messages # A list that resembles the one above
```
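Because `messages` is plain, JSON-serializable data, one way to persist a conversation between runs is to write it to disk and load it back. A minimal sketch, using a literal message list as a stand-in for a live `interpreter.messages`:

```python
import json
import os
import tempfile

# Stand-in for interpreter.messages; a live session would populate this
messages = [
    {"role": "user", "message": "Hi! Can you print hello world?"},
    {"role": "assistant", "message": "Sure!"},
]

# Save the conversation to disk...
path = os.path.join(tempfile.gettempdir(), "conversation.json")
with open(path, "w") as f:
    json.dump(messages, f)

# ...and load it back later (then: interpreter.messages = restored)
with open(path) as f:
    restored = json.load(f)

print(restored == messages)  # → True
```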
## offline

<Info>This replaced `interpreter.local` in the New Computer Update (0.2.0).</Info>

This boolean flag determines whether to enable or disable online features like Open Procedures.

```python
interpreter.offline = True  # Don't check for updates, don't use procedures
interpreter.offline = False # Check for updates, use procedures (default)
```

Use this in conjunction with the `model` parameter to set your language model.
## auto_run

Setting this flag to `True` allows Open Interpreter to automatically run generated code without user confirmation.

```python
interpreter.auto_run = True  # Don't require user confirmation
interpreter.auto_run = False # Require user confirmation (default)
```
## verbose

Use this boolean flag to toggle verbose mode on or off. Verbose mode will print information at every step to help diagnose problems.

```python
interpreter.verbose = True  # Turns on verbose mode
interpreter.verbose = False # Turns off verbose mode
```
## max_output

This property sets the maximum number of tokens for the output response.

```python
interpreter.max_output = 2000
```
## conversation_history

A boolean flag to indicate whether the conversation history should be stored or not.

```python
interpreter.conversation_history = True  # To store history
interpreter.conversation_history = False # To not store history
```
## conversation_filename

This property sets the filename where the conversation history will be stored.

```python
interpreter.conversation_filename = "my_conversation.json"
```
## conversation_history_path

You can set the path where the conversation history will be stored.

```python
import os
interpreter.conversation_history_path = os.path.join("my_folder", "conversations")
```
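If the target folder may not exist yet, it is worth creating it before pointing the interpreter at it. A small sketch, using a temp directory as a hypothetical location:

```python
import os
import tempfile

# Hypothetical location for stored conversations
path = os.path.join(tempfile.gettempdir(), "my_folder", "conversations")

# Create the folder tree if it is missing
os.makedirs(path, exist_ok=True)

print(os.path.isdir(path))  # → True

# Then point the interpreter at it:
# interpreter.conversation_history_path = path
```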
## model

Specifies the language model to be used.

```python
interpreter.llm.model = "gpt-3.5-turbo"
```
## temperature

Sets the randomness level of the model's output.

```python
interpreter.llm.temperature = 0.7
```
## system_message

This stores the model's system message as a string. Explore or modify it:

```python
interpreter.system_message += "\nRun all shell commands with -y."
```
## context_window

This manually sets the context window size in tokens.

We try to guess the right context window size for your model, but you can override it with this parameter.

```python
interpreter.llm.context_window = 16000
```
## max_tokens

Sets the maximum number of tokens the model can generate in a single response.

```python
interpreter.llm.max_tokens = 100
```
## api_base

If you are using a custom API, you can specify its base URL here.

```python
interpreter.llm.api_base = "https://api.example.com"
```
## api_key

Set your API key for authentication.

```python
interpreter.llm.api_key = "your_api_key_here"
```
## max_budget

This property sets the maximum budget limit for the session in USD.

```python
interpreter.max_budget = 0.01 # 1 cent
```
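Taken together, these properties are typically set once, before the first `chat` call. A configuration sketch, assuming the `open-interpreter` package is installed and a valid API key is available (the key string here is a placeholder):

```python
from interpreter import interpreter

# Session behavior
interpreter.auto_run = False   # keep confirmation prompts on
interpreter.max_budget = 0.01  # stop after ~1 cent of spend

# Language model settings
interpreter.llm.model = "gpt-3.5-turbo"
interpreter.llm.temperature = 0.7
interpreter.llm.api_key = "your_api_key_here"

interpreter.chat("Hi! Can you print hello world?")
```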