docs/guides/agent/agent_component_reference/agent.mdx
A component equipped with reasoning, tool usage, and multi-agent collaboration capabilities.
An Agent component configures the LLM and sets its prompts. From v0.20.5 onwards, an Agent component can work either independently or as a planner coordinating tools and sub-Agents, with the following capabilities:
An Agent component is essential when you need the LLM to assist with summarizing, translating, or coordinating various tasks.
The corresponding configuration panel appears to the right of the canvas. Use this panel to define and fine-tune the Agent component's behavior.
Click Model, and select a chat model from the dropdown menu.
:::tip NOTE If no model appears, check whether you have added a chat model on the Model providers page. :::
The system prompt typically defines your model's role. You can either keep the system prompt as is or customize it to override the default.
The user prompt typically defines your model's task. You will find the sys.query variable auto-populated. Type / or click (x) to view or add variables.
If your Agent component is used standalone (without tools or sub-Agents beneath it), you may also need to specify retrieved chunks using the formalized_content variable:
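For illustration, a user prompt along these lines passes both the user query and the retrieved chunks to the LLM. The exact variable references inserted via / or (x) may look different in your workflow; `sys.query` and `formalized_content` are the variables named above, and the placeholder syntax here is illustrative:

```
Answer the user's question using only the context below.

Question: {sys.query}
Context: {Retrieval@formalized_content}
```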
The + Add tools and + Add agent sections are used only when you configure your Agent component as a planner (with tools or sub-Agents beneath it). In this quickstart, we assume your Agent component is used standalone.
When necessary, click the + button on the Agent component to choose the next component in the workflow from the dropdown list.
:::danger IMPORTANT In this section, we assume your Agent will be configured as a planner, with a Tavily tool beneath it. :::
Update your MCP server's name, URL (including the API key), server type, and other necessary settings. When configured correctly, the available tools will be displayed.
Click MCP to show the available MCP servers.
Select your MCP server:
The target MCP server appears below your Agent component, and your Agent will autonomously decide when to invoke the available tools it offers.
To ensure reliable tool calls, you may specify within the system prompt which tasks should trigger each tool call.
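For example, a system prompt along these lines (the wording is illustrative, not a required format) tells the Agent when to invoke the Tavily tool:

```
You are a research assistant.
When the user asks about recent news or anything time-sensitive,
call the Tavily web search tool before answering.
For all other questions, answer directly without calling tools.
```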
On the canvas, click the newly-populated Tavily server to view and select its available tools:
Click the dropdown menu of Model to show the model configuration window.
:::tip NOTE
Typically, you use the system prompt to describe the task for the LLM, specify how it should respond, and outline other miscellaneous requirements. We will not elaborate on this topic here, as it is as broad as prompt engineering itself. However, please be aware that the system prompt is often used in conjunction with keys (variables), which serve as various data inputs for the LLM.
An Agent component relies on keys (variables) to specify its data inputs. Its immediate upstream component is not necessarily its data input; the arrows in the workflow indicate only the processing sequence. Keys in an Agent component are used in conjunction with the system prompt to specify data inputs for the LLM. Type a forward slash / or click the (x) button to show the available keys.
From v0.20.5 onwards, four framework-level prompt blocks are available in the System prompt field, enabling you to customize and override prompts at the framework level. Type / or click (x) to view them; they appear under the Framework entry in the dropdown menu.
task_analysis prompt block

- agent_prompt: The system prompt.
- task: The user prompt for either a lead Agent or a sub-Agent. The lead Agent's user prompt is defined by the user, while a sub-Agent's user prompt is defined by the lead Agent when delegating tasks.
- tool_desc: A description of the tools and sub-Agents that can be called.
- context: The operational context, which stores interactions between the Agent, tools, and sub-Agents; initially empty.

plan_generation prompt block

- task_analysis: The analysis result of the current task.
- desc: A description of the tools or sub-Agents currently being called.
- today: Today's date.

reflection prompt block

- goal: The goal of the current task. It is the user prompt for either a lead Agent or a sub-Agent. The lead Agent's user prompt is defined by the user, while a sub-Agent's user prompt is defined by the lead Agent.
- tool_calls: The history of tool calls.
  - call.name: The name of the tool called.
  - call.result: The result of the tool call.

citation_guidelines prompt block
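To make the reflection variables concrete, here is a hypothetical sketch of how a reflection prompt might assemble its inputs. The variable names (`goal`, `tool_calls`, `call.name`, `call.result`) mirror the list above, but the template text is an assumption for illustration, not RAGFlow's actual framework prompt:

```python
# Hypothetical sketch: assembling a reflection prompt from the `goal` and
# `tool_calls` variables described above. The wording is illustrative only.
def build_reflection_prompt(goal, tool_calls):
    lines = [f"Goal: {goal}", "Tool calls so far:"]
    for call in tool_calls:
        # Each call exposes its name and result, as in call.name / call.result.
        lines.append(f"- {call['name']}: {call['result']}")
    lines.append("Reflect: does the accumulated evidence achieve the goal?")
    return "\n".join(lines)
```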
The screenshots below show the framework prompt blocks available to an Agent component, both as a standalone and as a planner (with a Tavily tool below):
The user-defined prompt. Defaults to sys.query, the user query. As a general rule, when using the Agent component as a standalone module (not as a planner), you usually need to specify the corresponding Retrieval component’s output variable (formalized_content) here as part of the input to the LLM.
You can use an Agent component as a collaborator that reasons and reflects with the aid of other tools; for instance, Retrieval can serve as one such tool for an Agent.
You can use an Agent component as a collaborator that reasons and reflects with the aid of sub-Agents or other tools, forming a multi-agent system.
An integer specifying the number of previous dialogue rounds to input into the LLM. For example, if it is set to 12, the tokens from the last 12 dialogue rounds will be fed to the LLM. This feature consumes additional tokens.
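The windowing behavior can be sketched as follows. This is a minimal illustration of the setting, not RAGFlow's implementation:

```python
# Minimal sketch: keep only the most recent N dialogue rounds before
# sending the history to the LLM. Older rounds are dropped.
def window_history(rounds, max_rounds=12):
    """Return only the last `max_rounds` (user, assistant) pairs."""
    return rounds[-max_rounds:]

history = [(f"user turn {i}", f"assistant turn {i}") for i in range(20)]
recent = window_history(history, max_rounds=12)
# Only rounds 8-19 remain; rounds 0-7 are not fed to the LLM.
```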
:::tip IMPORTANT This feature is used for multi-turn dialogue only. :::
Defines the maximum number of attempts the agent will make to retry a failed task or operation before stopping or reporting failure.
The waiting period in seconds that the agent observes before retrying a failed task, helping to prevent immediate repeated attempts and allowing system conditions to improve. Defaults to 1 second.
Defines the maximum number of reflection rounds for the selected chat model. Defaults to 1 round.
:::tip NOTE Increasing this value will significantly extend your agent's response time. :::
The global variable name for the output of the Agent component, which can be referenced by other components in the workflow.
See here for details.