
Every Cube deployment ships with an agent that powers AI features such as Analytics Chat. You can customize the agent's behavior with rules, certified queries, accessible views, model selection, and memory configuration.

## Agent configuration

Agent configuration lives in your Cube data model repository, alongside your cubes and views. Configuration is defined as YAML and Markdown files under an `agents/` directory in your project:

```
your-cube-project/
├── model/
│   └── cubes/
│       └── orders.yml
├── agents/
│   ├── config.yml                 # Agent configuration
│   ├── rules/                     # Rules as Markdown
│   │   └── fiscal-year.md
│   └── certified_queries/         # Certified queries as Markdown
│       └── quarterly-revenue.md
└── cube.py
```

Storing configuration as code in your repository enables version control, code review, and consistent behavior across environments.

<Info> Cube version 1.6.5 or above is required. Agent configuration is enabled by setting `CUBE_CLOUD_AGENTS_CONFIG_ENABLED=true`. The directory path defaults to `agents` and can be overridden with `CUBE_CLOUD_AGENTS_CONFIG_PATH`. </Info>
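The environment variables above can be set wherever your deployment reads its environment. A minimal sketch, assuming a local shell or `.env`-style setup (the override path `config/agents` is a hypothetical example):

```shell
# Enable loading of agent configuration (requires Cube 1.6.5+)
export CUBE_CLOUD_AGENTS_CONFIG_ENABLED=true

# Optional: read agent configuration from a non-default directory
# ("config/agents" is a hypothetical path; the default is "agents")
export CUBE_CLOUD_AGENTS_CONFIG_PATH=config/agents
```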

## Configure the agent

Place agent properties at the root of `agents/config.yml`:

```yaml
# agents/config.yml
llm: claude_4_sonnet
runtime: plain
accessible_views:
  - orders_view
  - customers_view
memory_mode: user
```

### Properties

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `llm` | string or object | `auto` | LLM provider: `auto`, a predefined model name, or a BYOM reference. |
| `embedding_llm` | string or object | `text-embedding-3-large` | Embedding model: a predefined name or a BYOM reference. |
| `runtime` | string | `plain` | Runtime mode: `plain` or `reasoning`. |
| `accessible_views` | array | all views | List of view names the agent is allowed to query. If omitted or empty, the agent has access to all views. |
| `memory_mode` | string | `space` | Memory isolation mode: `space`, `user`, or `disabled`. |

Rules and certified queries have their own dedicated pages:

- [Rules](/admin/ai/rules): instructions that guide the agent's behavior
- [Certified queries](/admin/ai/certified-queries): a library of trusted SQL examples for the agent
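To make this concrete: a rule is plain Markdown. The `fiscal-year.md` filename comes from the directory listing above, while the body below is a hypothetical sketch of what such a rule might contain:

```md
<!-- agents/rules/fiscal-year.md (hypothetical example content) -->
Our fiscal year starts on February 1. When a user asks about "Q1"
or "this quarter", interpret quarters relative to the fiscal
calendar, not the calendar year.
```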

### LLM

The default value is `auto`: Cube picks a recommended model on your behalf and may change it as better models become available. Use `auto` unless you have a reason to pin a specific model.

To pin a specific model, set the `llm` property to one of the predefined models:

Anthropic Claude:

- `claude_3_5_sonnetv2`
- `claude_3_7_sonnet`
- `claude_3_7_sonnet_thinking`
- `claude_4_sonnet`
- `claude_4_5_sonnet`
- `claude_4_5_haiku`
- `claude_4_5_opus`
- `claude_4_6_sonnet`
- `claude_4_6_opus`
- `claude_4_7_opus`

OpenAI GPT:

- `gpt_4o`
- `gpt_4_1`
- `gpt_4_1_mini`
- `gpt_5`
- `gpt_5_mini`
- `gpt_5_3`
- `gpt_5_4`
- `o3`
- `o4_mini`
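For example, pinning one of the predefined models above is a one-line change in `agents/config.yml`:

```yaml
# agents/config.yml
llm: gpt_5
```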

To use your own model, see [Bring your own model](/admin/ai/bring-your-own-model).

### Embedding models

Predefined embedding models for the `embedding_llm` property:

- `text-embedding-3-large`
- `text-embedding-3-small`

BYOM is also supported for embedding models.
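A minimal `agents/config.yml` fragment pinning a predefined embedding model (a sketch; the default noted earlier is `text-embedding-3-large`):

```yaml
# agents/config.yml
embedding_llm: text-embedding-3-small
```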

### Runtime

The `runtime` property controls how the agent processes requests:

| Mode | Description |
| --- | --- |
| `plain` | Default. Optimized for speed and cost. Recommended for most use cases. |
| `reasoning` | Enables extended thinking for complex analysis. |
<Warning> `reasoning` is **experimental**. It may be unstable and is not yet feature-complete. Stick with `plain` unless you have a specific need for extended reasoning. </Warning>

### Memory

The `memory_mode` property controls how the agent persists context across conversations:

| Mode | Description |
| --- | --- |
| `space` | Default. Memories are shared across all users; useful when the agent serves a single team. |
| `user` | Memories are isolated per user; useful when each user has private context. |
| `disabled` | The agent does not persist memory between conversations. |

See [Memories](/admin/ai/memory-isolation) for details on how memories are stored and used.
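For example, to turn off persistence entirely (a sketch using the `memory_mode` values listed above):

```yaml
# agents/config.yml
memory_mode: disabled
```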

## Customize the agent

<CardGroup cols={2}>
  <Card title="Rules" icon="list-check" href="/admin/ai/rules">
    Define instructions that guide how the agent responds and analyzes data.
  </Card>
  <Card title="Certified queries" icon="certificate" href="/admin/ai/certified-queries">
    Provide a library of trusted SQL queries for the agent to reference.
  </Card>
  <Card title="Bring your own model" icon="brain" href="/admin/ai/bring-your-own-model">
    Configure the agent to use your own LLM provider or model.
  </Card>
  <Card title="Memories" icon="database" href="/admin/ai/memory-isolation">
    Control how the agent persists context across conversations and users.
  </Card>
</CardGroup>