# Configuring the SDK
This page covers SDK-wide defaults that you usually set once during application startup, such as the default OpenAI key or client, the default OpenAI API shape, tracing export defaults, and logging behavior.
These defaults still apply to sandbox-based workflows, but sandbox workspaces, sandbox clients, and session reuse are configured separately.
If you need to configure a specific agent or run instead, start with:
- `Agent` and `RunConfig`, sessions, and conversation-state options.
- `SandboxRunConfig`, manifests, capabilities, and sandbox-client-specific workspace setup.

## API keys and clients

By default, the SDK uses the `OPENAI_API_KEY` environment variable for LLM requests and tracing. The key is resolved when the SDK first creates an OpenAI client (lazy initialization), so set the environment variable before your first model call. If you are unable to set that environment variable before your app starts, you can use the [set_default_openai_key()][agents.set_default_openai_key] function to set the key.
```python
from agents import set_default_openai_key

set_default_openai_key("sk-...")
```
Alternatively, you can configure the OpenAI client to be used. By default, the SDK creates an `AsyncOpenAI` instance, using the API key from the environment variable or the default key set above. You can change this with the [set_default_openai_client()][agents.set_default_openai_client] function.
```python
from openai import AsyncOpenAI
from agents import set_default_openai_client

custom_client = AsyncOpenAI(base_url="...", api_key="...")
set_default_openai_client(custom_client)
```
If you prefer environment-based endpoint configuration, the default OpenAI provider also reads `OPENAI_BASE_URL`. When you enable the Responses websocket transport, it also reads `OPENAI_WEBSOCKET_BASE_URL` for the websocket `/responses` endpoint.
```bash
export OPENAI_BASE_URL="https://your-openai-compatible-endpoint.example/v1"
export OPENAI_WEBSOCKET_BASE_URL="wss://your-openai-compatible-endpoint.example/v1"
```
Finally, you can also customize the OpenAI API that is used. By default, we use the OpenAI Responses API. You can override this to use the Chat Completions API by using the [set_default_openai_api()][agents.set_default_openai_api] function.
```python
from agents import set_default_openai_api

set_default_openai_api("chat_completions")
```
## Tracing

Tracing is enabled by default. It uses the same OpenAI API key as your model requests (that is, the environment variable or the default key you set above). To use a different API key specifically for tracing, call the [set_tracing_export_api_key][agents.set_tracing_export_api_key] function.
```python
from agents import set_tracing_export_api_key

set_tracing_export_api_key("sk-...")
```
If your model traffic uses one key or client but tracing should use a different OpenAI key, pass `use_for_tracing=False` when setting the default client, then configure tracing separately. The same pattern works with [set_default_openai_key()][agents.set_default_openai_key] if you are not using a custom client.
```python
from openai import AsyncOpenAI
from agents import (
    set_default_openai_client,
    set_tracing_export_api_key,
)

custom_client = AsyncOpenAI(
    base_url="https://your-openai-compatible-endpoint.example/v1",
    api_key="provider-key",
)
set_default_openai_client(custom_client, use_for_tracing=False)
set_tracing_export_api_key("sk-tracing")
```
If you need to attribute traces to a specific organization or project when using the default exporter, set these environment variables before your app starts:
```bash
export OPENAI_ORG_ID="org_..."
export OPENAI_PROJECT_ID="proj_..."
```
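If you prefer to set these from Python (for example, in your application's entry point), a minimal sketch is to export them via `os.environ` before your first model call; the placeholder values mirror the shell example above:

```python
import os

# Set attribution env vars before the SDK makes its first request.
# The values here are placeholders, matching the shell example above;
# setdefault() avoids clobbering values already set in the environment.
os.environ.setdefault("OPENAI_ORG_ID", "org_...")
os.environ.setdefault("OPENAI_PROJECT_ID", "proj_...")
```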
You can also set a tracing API key per run without changing the global exporter.
```python
from agents import Runner, RunConfig

await Runner.run(
    agent,
    input="Hello",
    run_config=RunConfig(tracing={"api_key": "sk-tracing-123"}),
)
```
You can also disable tracing entirely by using the [set_tracing_disabled()][agents.set_tracing_disabled] function.
```python
from agents import set_tracing_disabled

set_tracing_disabled(True)
```
If you want to keep tracing enabled but exclude potentially sensitive inputs/outputs from trace payloads, set [RunConfig.trace_include_sensitive_data][agents.run.RunConfig.trace_include_sensitive_data] to `False`:
```python
from agents import Runner, RunConfig

await Runner.run(
    agent,
    input="Hello",
    run_config=RunConfig(trace_include_sensitive_data=False),
)
```
You can also change the default without code by setting this environment variable before your app starts:
```bash
export OPENAI_AGENTS_TRACE_INCLUDE_SENSITIVE_DATA=0
```
For full tracing controls, see the tracing guide.
## Debug logging

The SDK defines two Python loggers (`openai.agents` and `openai.agents.tracing`) and does not attach handlers by default, so logs follow your application's Python logging configuration.
To enable verbose logging, use the [enable_verbose_stdout_logging()][agents.enable_verbose_stdout_logging] function.
```python
from agents import enable_verbose_stdout_logging

enable_verbose_stdout_logging()
```
Alternatively, you can customize the logs by adding handlers, filters, formatters, etc. You can read more in the Python logging guide.
```python
import logging

logger = logging.getLogger("openai.agents")  # or openai.agents.tracing for the Tracing logger

# To make all logs show up
logger.setLevel(logging.DEBUG)
# To make info and above show up
logger.setLevel(logging.INFO)
# To make warning and above show up
logger.setLevel(logging.WARNING)
# etc

# You can customize this as needed, but this will output to `stderr` by default
logger.addHandler(logging.StreamHandler())
```
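As one example of a filter, a custom `logging.Filter` subclass can block records before any handler sees them. The sketch below is our own illustration, not an SDK API (the `RedactKeyFilter` name is made up): it drops any record whose rendered message looks like it contains an OpenAI-style API key.

```python
import logging


class RedactKeyFilter(logging.Filter):
    """Illustrative filter (not part of the SDK): drop records whose
    rendered message appears to contain an OpenAI-style API key."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Returning False drops the record before handlers see it.
        return "sk-" not in record.getMessage()


logger = logging.getLogger("openai.agents")
logger.addFilter(RedactKeyFilter())
```

Because filters attached to a logger run before its handlers are called, this also applies to any handler added later, including the one installed by `enable_verbose_stdout_logging()`.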
### Sensitive data in logs

Certain logs may contain sensitive data (for example, user data). By default, the SDK does not log LLM inputs/outputs or tool inputs/outputs. These protections are controlled by:

- `OPENAI_AGENTS_DONT_LOG_MODEL_DATA=1`
- `OPENAI_AGENTS_DONT_LOG_TOOL_DATA=1`
If you need to include this data temporarily for debugging, set either variable to `0` (or `false`) before your app starts:

```bash
export OPENAI_AGENTS_DONT_LOG_MODEL_DATA=0
export OPENAI_AGENTS_DONT_LOG_TOOL_DATA=0
```