AI Assistant requires an API key from a supported AI provider. This page covers the administrator setup process.
AI Assistant can be enabled or disabled instance-wide with the `AI_ENABLED` environment variable. Alternatively, you can use an OpenAI-compatible provider like Ollama or LM Studio for self-hosted models.
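As a hedged example (the exact boolean format is an assumption; check the Directus configuration reference for the authoritative syntax), turning the feature off in your environment file might look like:

```
AI_ENABLED=false
```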
::callout{icon="material-symbols:info" color="info"} Note that all users of AI Assistant share a single API key from your configured provider, so usage limits and costs are shared across all users. See your provider's dashboard to monitor usage and costs. ::
You'll need an API key from at least one provider. Choose based on which models you want to use.
::accordion{type="single"}
:::accordion-item{label="OpenAI" icon="i-simple-icons-openai"}
OpenAI provides GPT-5 models (Nano, Mini, Standard).
::callout{icon="material-symbols:info" color="info"} OpenAI requires a payment method and has usage-based pricing. Set spending limits in Settings → Limits to control costs. ::
:::
:::accordion-item{label="Anthropic" icon="i-simple-icons-anthropic"}
Anthropic provides Claude models (Haiku 4.5, Sonnet 4.5, Opus 4.5).
::callout{icon="material-symbols:info" color="info"} Anthropic requires a payment method and has usage-based pricing. Monitor usage in the Console dashboard. ::
:::
:::accordion-item{label="Google AI" icon="i-simple-icons-googlegemini"}
Google provides Gemini models (2.5 Flash, 2.5 Pro, 3 Flash Preview, 3 Pro Preview).
::callout{icon="material-symbols:info" color="info"} Google AI Studio offers a free tier with rate limits. For production use, consider enabling billing in Google Cloud Console to increase quotas. ::
:::
::
::steps{level="3"}
### Configure API Keys

Go to Settings → AI in the Directus admin panel and add your API key for one or more providers.

::callout{icon="material-symbols:security" color="info"} API keys are encrypted at rest in the database and masked in the UI. ::

### Restrict Available Models

For each provider, you can restrict which models are available to users. Use the Allowed Models dropdown next to each API key field to select the models users can choose from. This is useful for controlling costs and limiting users to models you've approved.

### Save

Click Save to apply your changes. AI Assistant is now available to all users with App access.
::
In addition to the built-in providers, Directus supports any OpenAI-compatible API endpoint. This allows you to use self-hosted models, alternative providers, or private deployments.
::callout{icon="material-symbols:warning" color="warning"} For best results, use built-in cloud providers. Local models vary significantly in their tool-calling capabilities and may produce inconsistent results. If using OpenAI-compatible providers, we recommend cloud-hosted frontier models over locally-run models on personal hardware. ::
::callout{icon="material-symbols:info" color="info"} File attachments are not supported with OpenAI-compatible providers. File uploads require a built-in provider (OpenAI, Anthropic, or Google). The file attachment buttons are hidden when an OpenAI-compatible model is selected. ::
In Settings → AI, scroll to the OpenAI-Compatible section and configure:
| Field | Description |
|---|---|
| Provider Name | Display name shown in the model selector (e.g., "Ollama", "LM Studio") |
| Base URL | The API endpoint URL (required). Must be OpenAI-compatible. |
| API Key | Authentication key if required by your provider |
| Custom Headers | Additional HTTP headers for authentication or configuration |
| Models | List of models available from this provider |
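To sketch how the Base URL is used: an OpenAI-style client appends the standard endpoint paths to whatever base you configure, so the URL should point at the versioned API root. This helper is illustrative only, not Directus internals:

```python
# Illustrative sketch (not Directus source code): an OpenAI-compatible
# client joins your configured Base URL with standard endpoint paths.
def endpoint(base_url: str, path: str) -> str:
    """Join a configured base URL with an OpenAI-style path."""
    return base_url.rstrip("/") + path

# With Ollama's default base URL from this guide:
print(endpoint("http://localhost:11434/v1", "/chat/completions"))
# → http://localhost:11434/v1/chat/completions
```

A quick sanity check for any candidate provider: if `GET <base URL>/models` returns a model list, the endpoint is likely OpenAI-compatible.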
For each model, you can specify:
| Field | Description |
|---|---|
| Model ID | The model identifier used in API requests |
| Display Name | Human-readable name shown in the UI |
| Context Window | Maximum input tokens (default: 128,000) |
| Max Output | Maximum output tokens (default: 16,000) |
| Supports Attachments | Whether the model can process images/files |
| Supports Reasoning | Whether the model has chain-of-thought capabilities |
| Provider Options | JSON object for model-specific parameters |
::callout{icon="material-symbols:info" color="info"} The Provider Options field allows you to pass provider-specific parameters to the AI SDK. This is useful for enabling features like extended thinking or custom sampling parameters. See the Vercel AI SDK documentation for details. ::
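As a hedged illustration, a Provider Options object might pass sampling or reasoning parameters. The exact keys below are provider-specific assumptions, not Directus requirements; consult your provider's page in the Vercel AI SDK documentation for the supported options:

```json
{
  "temperature": 0.2,
  "reasoningEffort": "low"
}
```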
::accordion{type="single"}
:::accordion-item{label="Ollama" icon="i-simple-icons-ollama"}
Ollama lets you run open-source models locally.
Install Ollama and pull a model, for example `ollama pull gpt-oss:20b`. Ollama listens at `http://localhost:11434` by default.

Directus Configuration:

- **Provider Name**: Ollama
- **Base URL**: `http://localhost:11434/v1`
- **API Key**: `ollama` (required by the OpenAI SDK but ignored by Ollama)
- **Models**: the models you've pulled (e.g., `gpt-oss:20b`, `gpt-oss:120b`, `qwen3:8b`)

::callout{icon="material-symbols:info" color="info"}
You can copy an existing model to an OpenAI-compatible name if needed: ollama cp gpt-oss:20b gpt-4
::
See Ollama OpenAI compatibility docs for supported endpoints and features.
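Before pointing Directus at a local model, you can verify it accepts OpenAI-style requests by sending the standard chat-completions payload yourself. This is a manual smoke test, not Directus code; the model name and URL are the defaults assumed above:

```python
import json

# Manual smoke test (not Directus code): the OpenAI-style request body
# sent to POST http://localhost:11434/v1/chat/completions.
def chat_body(model: str, prompt: str) -> str:
    """Serialize a minimal OpenAI-compatible chat-completions request."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

print(chat_body("gpt-oss:20b", "Say hello"))
```

Send the printed body with `curl` (or any HTTP client); a JSON response with a `choices` array indicates the endpoint is behaving as an OpenAI-compatible server.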
:::
:::accordion-item{label="Azure OpenAI" icon="i-simple-icons-microsoftazure"}
Azure OpenAI Service provides OpenAI models through Microsoft Azure.
Directus Configuration:
- **Provider Name**: Azure OpenAI
- **Base URL**: `https://YOUR-RESOURCE.openai.azure.com/openai/v1`

::callout{icon="material-symbols:info" color="info"}
The v1 API (August 2025+) no longer requires an `api-version` header. If using an older API version, add `api-version` as a custom header (e.g., `2024-10-21`).
::
See Azure OpenAI documentation for setup details.
:::
:::accordion-item{label="Mistral AI" icon="simple-icons:mistralai"}
Mistral AI provides high-performance open and commercial models.
Directus Configuration:
- **Provider Name**: Mistral
- **Base URL**: `https://api.mistral.ai/v1`
- **Models**: e.g., `mistral-large-latest`, `mistral-small-latest`, `codestral-latest`

See Mistral AI documentation for available models and pricing.
:::
::
Optionally customize how the AI assistant behaves in Settings → AI → Custom System Prompt.
The default system prompt provides the AI with helpful instructions on how to interact with Directus and is tuned to provide good results.
If you choose to customize the system prompt, it's recommended to use the following template as a starting point:
::accordion{type="single"}
:::accordion-item{label="View Default System Prompt" icon="material-symbols:code"}
<behavior_instructions>
You are **Directus Assistant**, a Directus CMS expert with access to a Directus instance through specialized tools
## Communication Style
- **Be concise**: Users prefer short, direct responses. One-line confirmations: "Created collection 'products'"
- **Match the audience**: Technical for developers, plain language for content editors
- **NEVER guess**: If you are not at least 99% certain about field values or user intent, ask for clarification
## Tool Usage Patterns
### Discovery First
1. Understand the user's task and what they need to achieve.
2. Discover schema if needed for task - **schema()** with no params → lightweight collection list or **schema({ keys: ["products", "categories"] })** → full field/relation details
3. Use other tools as needed to achieve the user's task.
### Content Items
- Use `fields` whenever possible to fetch only the exact fields you need
- Use `filter` and `limit` to control the number of fetched items unless you absolutely need larger datasets
- When presenting repeated structured data with 4+ items, use markdown tables for better readability
### Schema & Data Changes
- **Confirm before modifying any schema**: Collections, fields, relations always need approval from the user.
- **Check namespace conflicts**: Collection folders and regular collections share namespace. Collection folders are distinct from file folders.
### Safety Rules
- **Deletions require confirmation**: ALWAYS ask before deleting anything
- **Warn on bulk operations**: Alert when affecting many items ("This updates 500 items")
- **Avoid duplicates**: Never create duplicates if you can't modify existing items
- **Use semantic HTML**: No classes, IDs, or inline styles in content fields (unless explicitly asked for by the user)
- **Permission errors**: Report immediately, don't retry
### Behavior Rules
- Call tools immediately without explanatory text
- Use parallel tool calls when possible
- If you don't have access to a certain tool, ask the user to grant you access to the tool from the chat settings.
- If there are unused tools in context but task is simple, suggest disabling unused tools (once per conversation)
## Error Handling
- Auto-retry once for clear errors ("field X required")
- Stop after 2 failures, consult user
- If tool unavailable, suggest enabling in chat settings
</behavior_instructions>
:::
::
Leave blank to use the default behavior.
Enable reusable prompts in AI Assistant by configuring a prompts collection.
::callout{icon="material-symbols:info" color="info"} This is the same collection used by the MCP Server. Prompts created here are available in both AI Assistant and external MCP clients. This also requires MCP to be enabled. ::
For details on creating prompts with variables, see MCP Prompts.
::callout{icon="material-symbols:warning" color="warning"} AI Assistant uses your own AI provider API keys. Every message and tool call costs money. Be mindful of usage, especially with larger models. You are responsible for the costs of your usage. ::
Tips for controlling costs:

- Use the Allowed Models setting to restrict users to smaller, cheaper models.
- Set spending limits and monitor usage in your provider's dashboard.
::card-group
:::card{title="User Guide" icon="material-symbols:chat" to="/guides/ai/assistant/usage"} Learn how users interact with AI Assistant. :::
:::card{title="Available Tools" icon="material-symbols:construction" to="/guides/ai/assistant/tools"} See what actions the AI can perform. :::
:::card{title="Tips & Best Practices" icon="material-symbols:lightbulb" to="/guides/ai/assistant/tips"} Get the most out of AI Assistant. :::
:::card{title="Security" icon="material-symbols:security" to="/guides/ai/assistant/security"} Access control and data protection. :::
::