x-pack/platform/plugins/shared/agent_builder/CONTRIBUTOR_GUIDE.md
This document is intended for platform contributors to the Agent Builder framework. It explains the base concepts and how to register "platform" tools and agents.
(Please also check the README.md for more general information about Agent Builder.)
Platform and user-created tools and agents share the same concepts and API, but have some notable differences:
- Platform tools and agents are read-only, and cannot be modified or deleted by the user.
- Platform tools and agents are registered under protected namespaces (e.g. platform.core.*).
- Platform tools can use the internal builtin tool type, allowing them to register tools executing arbitrary code from the Kibana server, whereas user-created tools can only use the other (serializable) tool types.
Registering tools can be done using the tools.register API of the agentBuilder plugin's setup contract.
class MyPlugin {
setup(core: CoreSetup, { agentBuilder }: { agentBuilder: AgentBuilderPluginSetup }) {
agentBuilder.tools.register(myToolDefinition);
}
}
To let the Agent Builder owners control which tools are added to the framework, we maintain a hardcoded list of all internally registered tools. The sole intent is to trigger a code review from the team whenever a tool is added.
To add a tool to the allow list, simply add the tool's id to the AGENT_BUILDER_BUILTIN_TOOLS array,
in x-pack/platform/packages/shared/agent-builder/agent-builder-server/allow_lists.ts
(Kibana will fail to start otherwise, with an explicit error message explaining what to do)
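As an illustrative sketch, the startup check amounts to something like the following. Only the array name comes from this guide; the check itself is a simplified stand-in for what Kibana actually does in allow_lists.ts.

```typescript
// Simplified sketch of the startup allow-list check; the real implementation
// lives in allow_lists.ts and differs in detail.
const AGENT_BUILDER_BUILTIN_TOOLS: string[] = [
  'platform.examples.add_42', // <- append your new tool id here
];

function assertToolAllowed(toolId: string): void {
  if (!AGENT_BUILDER_BUILTIN_TOOLS.includes(toolId)) {
    // Mirrors the "Kibana will fail to start" behavior described above
    throw new Error(
      `Tool "${toolId}" is not in AGENT_BUILDER_BUILTIN_TOOLS; add it to allow_lists.ts`
    );
  }
}
```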
Platform tools should all be namespaced under protected namespaces, to avoid id collisions with user-created tools.
When introducing a new protected namespace (e.g. when adding a new category of tools), it must be added
to the protectedNamespaces array in x-pack/platform/packages/shared/agent-builder/agent-builder-common/base/namespaces.ts
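A minimal sketch of what namespace protection amounts to. The array name comes from the guide; the entries and the matching logic here are illustrative, not the real source in namespaces.ts.

```typescript
// Illustrative only: the real array and matching logic live in namespaces.ts.
const protectedNamespaces: string[] = ['platform.core', 'platform.examples'];

function isProtectedId(id: string): boolean {
  // An id is protected when it sits under one of the protected namespaces
  return protectedNamespaces.some((ns) => id === ns || id.startsWith(`${ns}.`));
}
```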
A simple example, with a tool just doing some math:
agentBuilder.tools.register({
id: 'platform.examples.add_42',
type: ToolType.builtin,
description: 'Returns the sum of the input number and 42.',
tags: ['example'],
schema: z.object({
someNumber: z.number().describe('The number to add 42 to.'),
}),
handler: async ({ someNumber }) => {
return {
results: [
{
type: ToolResultType.other,
data: { value: 42 + someNumber },
},
],
};
},
});
To let tools use services scoped to the current user during execution, we expose a set of pre-scoped services on the context object, passed as the second parameter of the tool's handler. In addition to the request object, this context exposes services such as:
agentBuilder.tools.register({
id: 'platform.examples.scoped_services',
type: ToolType.builtin,
description: 'Some example',
tags: ['example'],
schema: z.object({
indexPattern: z.string().describe('Index pattern to filter on'),
}),
handler: async ({ indexPattern }, { request, modelProvider, esClient }) => {
const indices = await esClient.asCurrentUser.cat.indices({ index: indexPattern });
const model = await modelProvider.getDefaultModel();
const response = await model.inferenceClient.chatComplete(somethingWith(indices));
const myCustomScopedService = await getMyCustomScopedService(request);
myCustomScopedService.doSomething(response);
return {
results: [{ type: ToolResultType.other, data: response }],
};
},
});
Refer to ToolHandlerContext in x-pack/platform/packages/shared/agent-builder/agent-builder-server/tools/handler.ts for the full list of services available from the handler context.
Agentic tool execution (performing LLM calls) can take some time.
To let the user know what the tool is currently doing, we expose a progress reporting API via the events service of the handler context.
Progress updates reported this way are displayed in the UI (inside the thinking panel), improving the user experience by being transparent about what is happening under the hood.
agentBuilder.tools.register({
id: 'platform.examples.progress_report',
type: ToolType.builtin,
description: 'Some example',
tags: ['example'],
schema: z.object({}),
handler: async ({}, { events }) => {
events.reportProgress('Doing something');
const response = doSomething();
events.reportProgress('Doing something else');
const data = await doSomethingElse(response);
return {
results: [{ type: ToolResultType.other, data }],
};
},
});
For our framework to understand what kind of data is being returned by a tool, all tools must return a list of results following a specific format.
This lets the framework perform specific processing on the results. For example,
this is how we perform visualization rendering for the esql_results type: the framework recognizes that
a tool returned a result which can be rendered as a visualization.
This is also how we render specific types of results differently in the UI, e.g. we inline query results
in the thinking panel.
agentBuilder.tools.register({
id: 'platform.examples.result_types',
type: ToolType.builtin,
description: 'Some example',
tags: ['example'],
schema: z.object({
indexPattern: z.string().describe('Index pattern to filter on'),
}),
handler: async ({ indexPattern }, { events, esClient }) => {
const esqlQuery = await generateSomeQuery(indexPattern);
const data = await executeEsql(esqlQuery, esClient);
return {
results: [
{ type: ToolResultType.query, data: { esql: esqlQuery } },
{ type: ToolResultType.esqlResults, data },
],
};
},
});
See the ToolResultType and corresponding types in x-pack/platform/packages/shared/agent-builder/agent-builder-common/tools/tool_result.ts
Platform contributors aren't limited to the builtin tool type. They are free to leverage the other
existing tool types and create static instances of them.
E.g. registering a built-in index_search tool:
agentBuilderSetup.tools.register({
id: 'platform.core.some_knowledge_base',
type: ToolType.index_search,
description: 'Use this tool to retrieve documentation from our knowledge base',
configuration: {
pattern: '.my_knowledge_base',
},
});
Registering agents can be done using the agents.register API of the agentBuilder plugin's setup contract.
class MyPlugin {
setup(core: CoreSetup, { agentBuilder }: { agentBuilder: AgentBuilderPluginSetup }) {
agentBuilder.agents.register(myAgentDefinition);
}
}
Similar to tools, we keep a hardcoded list of registered agents to trigger a code review from the team when agents are added.
To add an agent to the allow list, simply add the agent's id to the AGENT_BUILDER_BUILTIN_AGENTS array,
in x-pack/platform/packages/shared/agent-builder/agent-builder-server/allow_lists.ts
(Kibana will fail to start otherwise, with an explicit error message explaining what to do)
Platform agents should all be namespaced under protected namespaces, to avoid id collisions with user-created agents.
When introducing a new protected namespace (e.g. when adding a new category of agents), it must be added
to the protectedNamespaces array in x-pack/platform/packages/shared/agent-builder/agent-builder-common/base/namespaces.ts
Registering a basic agent looks like this:
agentBuilder.agents.register({
id: 'platform.core.dashboard',
name: 'Dashboard agent',
description: 'Agent specialized in dashboard related tasks',
avatar_icon: 'dashboardApp',
configuration: {
instructions: 'You are a dashboard specialist [...]',
tools: [
{
tool_ids: [
'platform.dashboard.create_dashboard',
'platform.dashboard.edit_dashboard',
'[...]',
],
},
],
},
});
It is possible to specify separate research and answer instructions for an agent, to avoid mixing instructions, which can sometimes confuse the agent. It also allows specifying different instructions for each step of the agent's flow.
agentBuilder.agents.register({
id: 'platform.core.dashboard',
name: 'Dashboard agent',
description: 'Agent specialized in dashboard related tasks',
avatar_icon: 'dashboardApp',
configuration: {
research: {
instructions:
'You are a dashboard builder specialist assistant. Always use the XXX tool when the user wants to YYY...',
},
answer: {
instructions:
'When answering, if a dashboard configuration is present in the results, always render it using [...]',
},
tools: [
{
tool_ids: [someListOfToolIds],
},
],
},
});
Refer to AgentConfiguration
for the full list of available configuration options.
Attachments are used to provide additional context when conversing with an agent.
It is possible to register custom attachment types, to have control over how the data is exposed to the agent, and how it is rendered in the UI.
You can register an attachment type by using the attachments.registerType API of the agentBuilder plugin's setup contract.
class MyPlugin {
setup(core: CoreSetup, { agentBuilder }: { agentBuilder: AgentBuilderPluginSetup }) {
agentBuilder.attachments.registerType(myAttachmentDefinition);
}
}
Attachments are created in two ways; both use the same AttachmentTypeDefinition (there is no separate inline / reference discriminator on the definition):
- By value: the caller provides inline data. The server runs validate and stores that payload. origin stays unset unless you later call updateOrigin (see below).
- By reference: the caller provides an origin string (for example a saved object ID). If the type implements the optional resolve hook, the framework calls it once at add time, persists the returned content as data, and records origin plus origin_snapshot_at. Optional isStale detects when the live source changed so the UI can offer a resync. See "By-reference attachments with resolve" and "Detecting stale attachments with isStale" below.

Example of an attachment type definition (by-value only, no resolve):
const textDataSchema = z.object({
content: z.string(),
});
const textAttachmentType: AttachmentTypeDefinition = {
// unique id of the attachment type
id: AttachmentType.text,
// validate and parse the input when received from the client
validate: (input) => {
const parseResult = textDataSchema.safeParse(input);
if (parseResult.success) {
return { valid: true, data: parseResult.data };
} else {
return { valid: false, error: parseResult.error.message };
}
},
// format the data to be exposed to the LLM
format: (attachment) => {
return { type: 'text', value: attachment.data.content };
},
};
Refer to AttachmentTypeDefinition
for the full list of available configuration options.
getAgentDescription — describing inline rendering to the agent

When your attachment type supports inline rendering, getAgentDescription should tell the agent what it looks like when rendered inline. This description is injected into the ATTACHMENT TYPES prompt block whenever an attachment of your type is present in the conversation.
Keep the description focused on the user-visible outcome of rendering — not on when or why:
const myAttachmentType: AttachmentTypeDefinition = {
id: 'image',
validate: ...,
format: ...,
getAgentDescription: () =>
'Represents an image attachment. Rendering this attachment inline displays the image inside the conversation UI.',
};
Do not include guidance on when to render inline — that is the responsibility of the skill that owns the relevant task. See Inline rendering guidance in skills.
Register a UI definition for your attachment type using the attachments.addAttachmentType API from the agentBuilder plugin's start contract:
class MyPlugin {
start(core: CoreStart, { agentBuilder }: { agentBuilder: AgentBuilderPluginStart }) {
agentBuilder.attachments.addAttachmentType('my_type', myAttachmentDefinition);
}
}
import React from 'react';
import { i18n } from '@kbn/i18n';
import { EuiCodeBlock } from '@elastic/eui';
import {
ActionButtonType,
type AttachmentUIDefinition,
} from '@kbn/agent-builder-browser/attachments';
import type { Attachment } from '@kbn/agent-builder-common/attachments';
type MyAttachment = Attachment<'my_type'>;
export const myAttachmentDefinition: AttachmentUIDefinition<MyAttachment> = {
getLabel: () => 'My attachment',
getIcon: () => 'document',
// Compact view rendered inline in the conversation
renderInlineContent: ({ attachment, isSidebar }) => {
if (isSidebar) {
// For example: render a condensed view in the sidebar only
}
return (
<EuiCodeBlock fontSize="s">{attachment.data.content}</EuiCodeBlock>
);
},
// Optional: preferred width of the canvas flyout in full-screen context.
// Accepts any valid CSS width value (e.g. '600px', '40vw').
// Defaults to '50vw' when not specified. Has no effect in sidebar context
// or on narrow viewports (where the canvas always fills available width).
canvasWidth: '600px',
// Expanded view rendered in the canvas flyout
renderCanvasContent: ({ attachment }) => (
<EuiCodeBlock fontSize="m" lineNumbers isCopyable>
{attachment.data.content}
</EuiCodeBlock>
),
// Customize buttons based on viewport context
getActionButtons: ({ attachment, isCanvas, isSidebar, openCanvas, setPreviewBadgeState, openSidebarConversation }) => {
const buttons = [];
if (isSidebar) {
// add sidebar only buttons
}
if (isCanvas) {
// add canvas only buttons
}
buttons.push({
label: 'Copy',
icon: 'copy',
type: ActionButtonType.SECONDARY,
handler: async () => navigator.clipboard.writeText(attachment.data.content),
});
// openCanvas is {undefined} when already in canvas mode
if (openCanvas) {
buttons.push({
label: 'Open Canvas',
icon: 'play',
type: ActionButtonType.PRIMARY,
handler: openCanvas,
});
}
// openSidebarConversation is {undefined} when already in the sidebar
if (openSidebarConversation) {
buttons.push({
label: 'Continue in sidebar',
icon: 'discuss',
type: ActionButtonType.SECONDARY,
handler: openSidebarConversation,
});
}
// Optional: if preview happens outside canvas, keep inline badge state in sync
buttons.push({
label: 'Preview',
icon: 'eye',
type: ActionButtonType.SECONDARY,
handler: () => {
setPreviewBadgeState?.('previewing');
},
});
return buttons;
},
};
The getActionButtons params include flags to customize behavior per viewport:
- isSidebar - true when rendered in the sidebar (constrained width)
- isCanvas - true when rendered in the canvas flyout (expanded view)
- openCanvas - Callback to open canvas mode; undefined when already in canvas
- openSidebarConversation - Callback to open the agent builder sidebar with the current conversation loaded; undefined when already in the sidebar

By default the canvas flyout opens at 50vw in full-screen context. You can override this per attachment type using the optional canvasWidth property on AttachmentUIDefinition:
export const myAttachmentDefinition: AttachmentUIDefinition<MyAttachment> = {
// ...
canvasWidth: '600px', // any valid CSS width value
};
Accepted values are any valid CSS width, e.g. '600px', '40vw', '80%'. Below the l EUI breakpoint (~992px), the canvas switches to overlay mode and fills the available width regardless of this setting.

When an attachment is rendered inline in the full-screen Agent Builder experience, you can use openSidebarConversation to open the conversation in the sidebar on demand. This is useful when an action button navigates the user away from the full-screen experience (e.g., navigating to Discover or Dashboards). By calling openSidebarConversation after navigation, the user can continue the conversation in the sidebar while viewing the destination page.
getActionButtons: ({ attachment, openSidebarConversation }) => {
const buttons = [];
buttons.push({
label: 'Open in Discover',
icon: 'discoverApp',
type: ActionButtonType.PRIMARY,
handler: async () => {
// Navigate to Discover (this leaves the full-screen Agent Builder)
await discoverLocator.navigate({ query: { esql: attachment.data.query } });
// Open the sidebar so the conversation remains accessible
openSidebarConversation?.();
},
});
return buttons;
},
The callback handles setting the correct conversation context in localStorage before opening the sidebar, ensuring the sidebar loads the same conversation. It is undefined when already in the sidebar context.
- setPreviewBadgeState - Optional callback to control the inline preview badge state when preview is driven outside the canvas

setPreviewBadgeState accepts:
- none - regular inline state
- preview_available - show the "Preview Only" badge
- previewing - show the "You're previewing this" badge and hide inline action buttons

For canvas content that needs to register buttons dynamically (e.g., a "Save" button that depends on runtime state like an API being available), use the registerActionButtons callback passed as the second argument to renderCanvasContent.
The getActionButtons function provides static buttons. The registerActionButtons callback allows canvas content to add dynamic buttons that are merged with the static ones.
The callbacks object also exposes closeCanvas, which allows canvas content to close the flyout from within attachment UI actions (for example after an "Edit in app" navigation).
import React, { useEffect, useState } from 'react';
import {
ActionButtonType,
type ActionButton,
type AttachmentRenderProps,
type CanvasRenderCallbacks,
} from '@kbn/agent-builder-browser/attachments';
interface MyCanvasContentProps extends AttachmentRenderProps<MyAttachment> {
callbacks: CanvasRenderCallbacks;
}
const MyCanvasContent: React.FC<MyCanvasContentProps> = ({
attachment,
callbacks: { registerActionButtons, updateOrigin, closeCanvas },
}) => {
const [api, setApi] = useState<MyApi | undefined>();
// Register buttons once the API is available
useEffect(() => {
if (!registerActionButtons || !api) {
return;
}
registerActionButtons([
{
label: 'Save',
icon: 'save',
type: ActionButtonType.PRIMARY,
handler: async () => {
const savedObjectId = await api.save();
// Link the attachment to the saved object
await updateOrigin(savedObjectId);
},
},
]);
}, [api, registerActionButtons, updateOrigin]);
return (
<MyEditor onApiReady={setApi} />
);
};
// In the attachment definition:
export const myAttachmentDefinition: AttachmentUIDefinition<MyAttachment> = {
// ...
renderCanvasContent: (props, callbacks) => (
<MyCanvasContent {...props} callbacks={callbacks} />
),
};
Use closeCanvas when an action inside renderCanvasContent should dismiss the flyout.
renderCanvasContent: (props, { closeCanvas }) => (
<EuiButton
onClick={async () => {
await locator.navigate({ /* ... */ });
closeCanvas();
}}
>
Edit in app
</EuiButton>
);
The updateOrigin callback allows you to link a by-value attachment to its persistent storage location (e.g., a saved object) after it has been saved.
This callback is available in two places:
- getActionButtons params - for static action buttons defined at registration time
- renderCanvasContent callbacks - for dynamic buttons registered at runtime (see "Dynamic canvas buttons with registerActionButtons" above)

Use updateOrigin after the attachment's content has been saved to persistent storage, to record the reference back to the attachment.
Example: Save button that links to a saved object
getActionButtons: ({ attachment, updateOrigin, isCanvas }) => {
const buttons = [];
// Only show save button if not already linked to a saved object
if (!attachment.origin && isCanvas) {
buttons.push({
label: 'Save to library',
icon: 'save',
type: ActionButtonType.PRIMARY,
handler: async () => {
// 1. Save to your persistent storage (e.g., saved objects)
const savedObjectId = await myApi.saveToLibrary(attachment.data);
// 2. Link the attachment to the saved object
await updateOrigin(savedObjectId);
},
});
}
// Show "Open in App" if already linked (`origin` is a string, e.g. saved object id)
if (attachment.origin) {
buttons.push({
label: 'Open in App',
icon: 'popout',
type: ActionButtonType.SECONDARY,
handler: () => {
window.open(`/app/myApp/${attachment.origin}`, '_blank');
},
});
}
return buttons;
},
origin is a string:
On the wire and in Attachment, origin is always a string (for example a saved object ID). The same string is passed to your type’s resolve hook when the attachment is added or resynced. updateOrigin and updateAttachmentOrigin also take that string — not an object.
If you need to update an attachment's origin from outside the getActionButtons context (e.g., from a different plugin or component that has the conversation and attachment IDs), you can use the updateAttachmentOrigin API from the agentBuilder plugin's start contract:
// In your plugin
class MyPlugin {
start(core: CoreStart, { agentBuilder }: { agentBuilder: AgentBuilderPluginStart }) {
// Update an attachment's origin directly
await agentBuilder.updateAttachmentOrigin(conversationId, attachmentId, savedObjectId);
}
}
This is useful when the save operation happens outside the attachment's UI, such as when a separate "Save to library" workflow completes asynchronously. It is your responsibility to pass the conversationId and attachmentId to your plugin when navigating away from the chat - how you do this is up to you (e.g., URL parameters, local storage, or other mechanisms).
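For example, one way (among those suggested above) to carry the ids across a navigation is URL query parameters; both helper names below are hypothetical:

```typescript
// Hypothetical helpers: encode/decode the ids in the destination URL so the
// target page can later call agentBuilder.updateAttachmentOrigin(...).
function buildReturnUrl(basePath: string, conversationId: string, attachmentId: string): string {
  const params = new URLSearchParams({ conversationId, attachmentId });
  return `${basePath}?${params.toString()}`;
}

function readIds(search: string): { conversationId: string | null; attachmentId: string | null } {
  const params = new URLSearchParams(search);
  return {
    conversationId: params.get('conversationId'),
    attachmentId: params.get('attachmentId'),
  };
}
```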
By-reference attachments with resolve

The optional resolve hook in AttachmentTypeDefinition enables by-reference attachment creation: instead of providing inline data, the caller provides an origin string (e.g. a saved object ID), and the framework calls resolve once at add time to fetch and store the content.
const myAttachmentType: AttachmentTypeDefinition<'my_type', MyContent> = {
id: 'my_type',
validate: (input) => { /* ... */ },
format: (attachment) => { /* ... */ },
/**
* Called once when an attachment is added with an `origin`.
* Returns the current content for that origin, or undefined if not found.
*/
resolve: async (origin, context) => {
const savedObject = await context.savedObjectsClient?.get('my_type', origin);
if (!savedObject) return undefined;
return { content: savedObject.attributes.content };
},
};
- origin — the reference string passed by the caller (typically a saved object ID)
- context.savedObjectsClient — scoped to the current user; use it to fetch saved objects
- context.request / context.spaceId — available for other service lookups
- Return undefined if the origin cannot be resolved (the add operation will fail with an error)
- The resolved content is persisted as data in the attachment version, and an origin_snapshot_at timestamp is recorded

Refer to AttachmentTypeDefinition for the full type signature.
Detecting stale attachments with isStale

When an attachment is linked to a persistent origin (e.g. a dashboard saved object), the underlying data can change after the attachment was created. The optional isStale hook lets your attachment type detect this so the UI can prompt the user to refresh.
const myAttachmentType: AttachmentTypeDefinition<'my_type', MyContent> = {
id: 'my_type',
validate: (input) => { /* ... */ },
format: (attachment) => { /* ... */ },
resolve: async (origin, context) => { /* ... */ },
/**
* Called to check whether the stored attachment data is behind the current state
* of the referenced origin. Return true if the attachment is stale.
*
* Only invoked for attachments that have a populated `origin`.
* No automatic fallback — staleness detection is opt-in per type.
*/
isStale: async (attachment, context) => {
const savedObject = await context.savedObjectsClient?.get('my_type', attachment.origin);
if (!savedObject) return false;
// Compare the saved object's last-modified time against when the attachment was snapshotted
return (
Boolean(savedObject.updated_at) &&
Boolean(attachment.origin_snapshot_at) &&
new Date(savedObject.updated_at) > new Date(attachment.origin_snapshot_at)
);
},
};
- attachment.origin_snapshot_at — ISO timestamp of when resolve last ran; use it to compare against the origin's current version
- context — same AttachmentResolveContext as resolve (includes savedObjectsClient, request, spaceId)
- Return true if the stored data is outdated; the framework will call resolve again to fetch fresh content and surface a resync prompt in the UI
- Only invoked for attachments that have a populated origin; inline-only types that never set origin will never have isStale called

How the resync flow works end-to-end:
1. The client calls GET /{conversationId}/attachments/stale
2. The framework invokes isStale for each active attachment that has an origin
3. For stale attachments, resolve is called again to fetch fresh content

Refer to AttachmentStaleCheckResult for the result types returned by the stale check API.
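On the client side, consuming the stale-check endpoint might look like the following sketch. Only the path comes from this guide; the response item shape is an assumption (check AttachmentStaleCheckResult for the real one), and the fetcher is injected so the logic stays testable without a running server.

```typescript
// Assumed response item shape; see AttachmentStaleCheckResult for the real type.
interface StaleCheckItem {
  attachmentId: string;
  isStale: boolean;
}

// doFetch is injected so this sketch can be exercised without Kibana running.
async function listStaleAttachmentIds(
  conversationId: string,
  doFetch: (url: string) => Promise<StaleCheckItem[]>
): Promise<string[]> {
  const results = await doFetch(`/${conversationId}/attachments/stale`);
  return results.filter((item) => item.isStale).map((item) => item.attachmentId);
}
```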
Plugins can integrate with the active chat surface (the embeddable sidebar and the full-page routed chat) through the agentBuilder start contract.
This is useful when the surrounding application wants to attach page context only under specific conditions, and react when the active chat binds to a new or existing conversation.
A pending attachment is a client-only attachment on the active conversation that has not yet been persisted to a round; it lives in the chat UI until the user submits the next message, at which point it is sent with that round and persisted.
setChatConfig(...)

Scope: sidebar only.
setChatConfig(...) configures the next sidebar open, or updates the active sidebar if it is already open.
It supports the regular embeddable conversation props, including:
- newConversation - force the sidebar to start a fresh conversation instead of restoring the persisted one
- attachments - pre-populate the pending attachment list for the active sidebar conversation

Use clearChatConfig() to remove that runtime configuration.
newConversation

Set newConversation: true when the sidebar must always bind to a fresh conversation:
agentBuilder.setChatConfig({
newConversation: true,
});
attachments

Set attachments when you want the sidebar to open with one or more pending attachments already present:
agentBuilder.setChatConfig({
attachments: [
{
id: 'my-context',
type: 'my_type',
data: { ... },
},
],
});
addAttachment(...)

Scope: sidebar only. If no sidebar is open, the call is silently ignored.
addAttachment(...) adds or updates a pending attachment in the active sidebar conversation.
agentBuilder.addAttachment({
id: 'my-pending-context',
type: 'my_type',
data: { ... },
origin: 'saved-object-id',
});
Pending attachments added through agentBuilder.addAttachment(...) can include an origin string, just like other attachment inputs sent to the Agent Builder APIs. Use this when your pending attachment already corresponds to a persistent resource (for example, a saved object-backed dashboard or visualization), and your attachment type expects origin to be present.
The agentBuilder start contract exposes observables on the events.ui namespace that let plugins react to the chat surface lifecycle (currently the active conversation binding).
If you need to know whether the Agent Builder sidebar is currently open, subscribe to the core chrome sidebar primitive and match on the agentBuilder app id:
useEffect(() => {
const sub = chrome.sidebar.getCurrentAppId$().subscribe((appId) => {
const isOpen = appId === 'agentBuilder';
// react to the {isOpen} value
});
return () => sub.unsubscribe();
}, [chrome.sidebar]);
events.ui.activeConversation$

Use events.ui.activeConversation$ when you need to react to the conversation currently bound to the active chat surface.
The non-null payload is:
- id?: string - the currently bound conversation id, or undefined when the chat is currently bound to a new conversation
- conversation?: Conversation - the fully loaded conversation when it has been successfully fetched (undefined for new conversations, while loading, or on fetch errors)

class MyPlugin {
private conversationSubscription?: Subscription;
start(core: CoreStart, { agentBuilder }: { agentBuilder: AgentBuilderPluginStart }) {
this.conversationSubscription = agentBuilder.events.ui.activeConversation$.subscribe((change) => {
if (!change) {
// No chat surface currently bound — tear down local state.
return;
}
const { id, conversation } = change;
if (!id) {
agentBuilder.addAttachment({
id: 'my-pending-context',
type: 'my_type',
data: { ... },
});
return;
}
const hasMyAttachment = conversation?.attachments?.some(
(attachment) => attachment.id === 'my-pending-context'
);
if (!hasMyAttachment) {
// Handle the switch away from the pending attachment in your plugin state.
}
});
}
stop() {
this.conversationSubscription?.unsubscribe();
}
}
Note: Skills are currently an experimental feature. You need to enable the agentBuilder:experimentalFeatures uiSetting to use them.
Skills for Agent Builder are very close to the concept of the same name used in Cursor or Claude, for example. They are markdown files the agent can access via the filestore, providing specific instructions to complete a task. Skills can also expose tools, similar to how this works for attachments: when the agent reads the skill from the filestore, the tools attached to it are automatically enabled.
You can register a skill by using the skills.register API of the agentBuilder plugin's setup contract.
class MyPlugin {
setup(core: CoreSetup, { agentBuilder }: { agentBuilder: AgentBuilderPluginSetup }) {
agentBuilder.skills.register(mySkillDefinition);
}
}
agentBuilder.skills.register({
// unique identifier of the skill
id: 'my-skill',
// represents the name, which will be used as the filepath inside the skill directory
name: 'my-skill',
// the directory where the skill will be stored on the filesystem
basePath: 'skills/platform',
// short description of the skill, which will be exposed to the LLM for skill selection
description: 'Just an example of skill',
// full text content of the skill, which can be accessed via the filesystem
content: 'full text content of the skill, in markdown format',
// list of tools (from the tool registry) which will be enabled when the skill is read
getRegistryTools: () => ['platform.core.generate_esql'],
// list of inline tools which will be enabled when the skill is read
getInlineTools: () => [myInlineToolDefinition],
});
Base paths are enforced to a specific list of values using the DirectoryPath type.
To create new base paths for your skills, you need to add them to the SkillsDirectoryStructure.
You can define sub-content for the skill, using the referencedContent property of the skill definition.
Those files will be exposed on the filesystem in the skill's directory, in the specified subfolder.
agentBuilder.skills.register({
id: 'bake-me-something',
name: 'bake-me-something',
basePath: 'skills/platform',
description: 'Pick and bake a tasty dessert',
content: `
1. select a recipe from the available list of recipes. Recipes can be found in the [recipes folder](./recipes).
2. follow the instructions in the recipe to bake the dessert.
3. enjoy your dessert!`,
referencedContent: [
{ name: 'pie-recipe', relativePath: './recipes', content: '[some pie recipe]' },
{ name: 'brownie-recipe', relativePath: './recipes', content: '[some brownie recipe]' },
],
});
Whether and when the agent should render an attachment inline depends on the task it is performing. Skills that create or modify attachments should therefore include explicit guidance on this in their instructions.
Rule of thumb: tell the agent exactly which attachment to render and at what point in the task.
Examples:
A skill that creates a single visualization:
"Once you have created the visualization, render it inline so the user can see it."
A skill that builds a dashboard (composed of multiple visualizations):
"Render the dashboard attachment inline once you have finished building it. Do NOT render each individual visualization inline — only the final dashboard."
This per-skill guidance is what controls inline rendering behaviour across different tasks:
the attachment type definition (via getAgentDescription) tells the agent what rendering
does; the skill tells it when to do it.
Individual built-in skills can be flagged as experimental by setting experimental: true on their definition.
Experimental skills are only visible and usable when the agentBuilder:experimentalFeatures uiSetting is enabled.
Example:
agentBuilder.skills.register({
id: 'my-experimental-skill',
name: 'my-experimental-skill',
basePath: 'skills/platform',
description: 'An experimental skill only visible when experimental features are on',
experimental: true,
content: 'Skill instructions...',
});
The Semantic Metadata Layer is an indexing and search subsystem inside Agent Builder. It allows solutions to expose their Kibana assets (visualizations, dashboards, saved searches, …) so the AI agent can find and attach them to a conversation.
┌──────────────────────────────────────────────────────────────┐
│ Solution plugin (e.g. agent_builder_platform) │
│ ┌────────────────────────────┐ │
│ │ SmlTypeDefinition │ ← you provide this │
│ │ • id │ │
│ │ • list() │ │
│ │ • getSmlData() │ │
│ │ • toAttachment() │ │
│ └────────────────────────────┘ │
└──────────────────────────────────────────────────────────────┘
│
│ agentBuilder.sml.registerType(...)
▼
┌──────────────────────────────────────────────────────────────┐
│ agent_builder plugin (server) │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌───────────────┐ │
│ │ Type Registry │───▶│ Crawler │───▶│ ES Indices │ │
│ └──────────────┘ │ (Task Mgr) │ │ .chat-sml-* │ │
│ └──────────────┘ └───────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ ┌──────────────────────────────────┐ │
│ │ sml_search │◀───│ SmlService.search() │ │
│ │ sml_attach │ │ (space + permission filtering) │ │
│ └──────────────┘ └──────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────┘
| Concept | Description |
|---|---|
| SML Type | A category of content you expose (e.g. visualization, dashboard). You implement SmlTypeDefinition. |
| Crawler | A Task Manager background task that periodically calls your list() and getSmlData() hooks, indexing content into system indices. Uses mark-and-sweep with last_crawled_at timestamps for efficient change detection. |
| SML Document | A single indexed chunk stored in the .chat-sml-data system index, containing title, content, permissions, and space information. |
| sml_search tool | A built-in Agent Builder tool the AI uses to keyword-search SML documents. Results are filtered by the requesting user's space and permissions. |
| sml_attach tool | A built-in Agent Builder tool the AI uses to convert SML search hits into conversation attachments. It accepts chunk_ids from sml_search; the chunk_id format is attachment_type:origin_id:uuid. |
| Origin ID | The unique identifier for the source asset (typically a saved object ID). Used to link SML documents back to their source. |
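To make the chunk_id format concrete, here is a tiny parser. It is purely illustrative (not part of the Agent Builder API) and assumes that none of the three segments contains a `:` itself:

```typescript
// Illustrative only (not part of the Agent Builder API): splits a chunk_id
// of the form `attachment_type:origin_id:uuid` into its three segments,
// assuming none of the segments contains a ':' itself.
export interface ParsedChunkId {
  attachmentType: string;
  originId: string;
  uuid: string;
}

export function parseChunkId(chunkId: string): ParsedChunkId {
  const parts = chunkId.split(':');
  if (parts.length !== 3) {
    throw new Error(`unexpected chunk_id format: ${chunkId}`);
  }
  const [attachmentType, originId, uuid] = parts;
  return { attachmentType, originId, uuid };
}
```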
The crawler periodically calls `list()` to enumerate items, detects changes via timestamps, and calls `getSmlData()` for new/updated items, indexing the resulting chunks into the `.chat-sml-data` system index. Crawler state (which items have been seen) is stored in a separate `.chat-sml-crawler-state` index.

When the agent calls `sml_search`, the SML service queries the data index, filtering by the user's current space and checking Kibana privileges against each result's `permissions` array. When the agent calls `sml_attach` with chunk_ids, the service loads each chunk, resolves the saved object via your `toAttachment()` hook, and adds the result as a conversation attachment (with origin when applicable).

The crawler runs with internal credentials (`asInternalUser`) — it indexes content from all spaces. Access control is applied at search time, via the `permissions` array you set in `getSmlData()`.

### `SmlTypeDefinition`

Create a file in your plugin (e.g. `server/sml_types/my_asset.ts`). You need to implement four things:
import type { SmlTypeDefinition } from '@kbn/agent-builder-plugin/server';
export const myAssetSmlType: SmlTypeDefinition = {
// Unique identifier — lowercase, alphanumeric, hyphens, underscores.
// Must match /^[a-z][a-z0-9_-]*$/
id: 'my-asset',
// Optional: how often the crawler re-indexes this type.
// Defaults to '10m' if omitted.
fetchFrequency: () => '30m',
// Yield pages of items to consider for indexing.
// Called by the crawler with internal credentials.
async *list(context) {
// Use createPointInTimeFinder for efficient pagination
const finder = context.savedObjectsClient.createPointInTimeFinder({
type: 'my-saved-object-type',
perPage: 1000,
namespaces: ['*'], // all spaces
fields: ['title'], // only fetch fields needed for the list
});
try {
for await (const response of finder.find()) {
yield response.saved_objects.map((so) => ({
id: so.id,
updatedAt: so.updated_at ?? new Date().toISOString(),
spaces: so.namespaces ?? [],
}));
}
} finally {
await finder.close();
}
},
// Fetch the full data for a single item to index.
// Return undefined to skip the item (e.g. if it was deleted).
getSmlData: async (originId, context) => {
try {
const so = await context.savedObjectsClient.get('my-saved-object-type', originId);
const attrs = so.attributes as { title?: string; description?: string };
return {
chunks: [
{
type: 'my-asset',
title: attrs.title ?? originId,
content: [attrs.title, attrs.description].filter(Boolean).join('\n'),
// Kibana feature privileges required to access this item.
// Users without these privileges won't see the item in search results.
permissions: ['saved_object:my-saved-object-type/get'],
},
],
};
} catch {
return undefined;
}
},
// Convert an SML document back into a conversation attachment.
// Called when the AI agent wants to "attach" a search result.
toAttachment: async (item, context) => {
const resolveResult = await context.savedObjectsClient.resolve(
'my-saved-object-type',
item.origin_id
);
if ((resolveResult.saved_object as { error?: unknown }).error) {
return undefined;
}
return {
type: 'my-asset',
data: {
title: resolveResult.saved_object.attributes.title,
// ... whatever data the attachment renderer needs
},
};
},
};
In your plugin's setup method:
import { myAssetSmlType } from './sml_types/my_asset';
export class MyPlugin implements Plugin {
setup(core: CoreSetup, { agentBuilder }: { agentBuilder: AgentBuilderPluginSetup }) {
agentBuilder.sml.registerType(myAssetSmlType);
}
}
That's it. The Agent Builder crawler will automatically pick up your type and start indexing on the configured interval.
### `list()` — Use AsyncIterable for memory safety

The `list` hook must return an `AsyncIterable<SmlListItem[]>`. Each yielded array is one "page" of items. The crawler processes pages with O(page_size) memory, so even types with millions of items won't cause OOM.

Use `createPointInTimeFinder` with `namespaces: ['*']` to enumerate across all spaces. The crawler indexes everything; access control happens at query time.
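The paging contract can be demonstrated without any Kibana dependencies. This standalone sketch (the `listPages` generator and item values are invented for illustration; the item shape mirrors the `list()` example above) yields fixed-size pages the way the crawler expects, holding only one page in memory at a time:

```typescript
// Standalone sketch of the AsyncIterable paging contract: yields arrays
// ("pages") of SmlListItem-like objects. The item shape mirrors the list()
// example above; listPages itself is invented for illustration.
interface ListItem {
  id: string;
  updatedAt: string;
  spaces: string[];
}

export async function* listPages(
  total: number,
  pageSize: number
): AsyncIterable<ListItem[]> {
  for (let offset = 0; offset < total; offset += pageSize) {
    const page: ListItem[] = [];
    const end = Math.min(offset + pageSize, total);
    for (let i = offset; i < end; i++) {
      page.push({
        id: `item-${i}`,
        updatedAt: new Date().toISOString(),
        spaces: ['default'],
      });
    }
    // Only the current page is ever held in memory.
    yield page;
  }
}
```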
### `getSmlData()` — Chunks and permissions

You can return multiple chunks per item (e.g. if a dashboard has multiple panels). Each chunk gets its own document in the SML index.

The `permissions` array should list the Kibana saved object privileges required to access the underlying asset. Common patterns:

- `['saved_object:lens/get']` for Lens visualizations
- `['saved_object:dashboard/get']` for dashboards
- `['saved_object:search/get']` for saved searches

Users without the listed privileges won't see the item in `sml_search` results.
### `toAttachment()` — Resolving saved objects

Use `savedObjectsClient.resolve()` instead of `get()` when possible — it handles saved object aliasing (e.g. after a space migration).

Return `undefined` if the item can no longer be resolved. The `sml_attach` tool will report a per-item error to the AI agent without failing the entire call.
You may include an optional description string on the object returned from
toAttachment. It is stored on the conversation
attachment and shown in the Agent Builder UI (for example, the “Attachment
added: …” line). If you omit it, a default label is derived from the SML
document’s type and title.
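As a sketch, a `toAttachment` return value carrying the optional `description` could look like the following; only the `type`, `data`, and `description` keys come from this guide, and the title and description strings are invented placeholders:

```typescript
// Sketch of a toAttachment() return value that includes the optional
// `description` field shown in the Agent Builder UI. The title and
// description strings are invented placeholders.
export const attachmentWithDescription = {
  type: 'my-asset',
  description: 'Sales overview (Lens visualization)',
  data: {
    title: 'Sales overview',
    // ... whatever data the attachment renderer needs
  },
};
```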
### `fetchFrequency` — Choose an appropriate interval

As a rule of thumb: use `5m`–`10m` for content that changes frequently, `30m`–`1h` for content that changes occasionally, and `1h`–`4h` for content that rarely changes. The default is `10m` if you don't specify `fetchFrequency`.
The visualization SML type is registered in
`x-pack/platform/plugins/shared/agent_builder_platform/server/sml_types/visualization.ts`. It:

- lists `lens` saved objects across all spaces
- uses `permissions: ['saved_object:lens/get']`

// Registration (in agent_builder_platform plugin setup):
setupDeps.agentBuilder.sml.registerType(visualizationSmlType);

The full implementation is ~130 lines and serves as the reference for new types.