AstrBot provides built-in support for multiple Large Language Model (LLM) providers and offers a unified interface, making it convenient for plugin developers to access various LLM services.
You can use the LLM / Agent interfaces provided by AstrBot to implement your own intelligent agents.
Starting from version v4.5.7, we've made significant improvements to the way LLM providers are invoked. We recommend using the new approach, which is more concise and supports additional features. The legacy invocation method remains documented in the previous Chinese-only guide.
> [!TIP]
> Added in v4.5.7

First, get the ID of the chat provider currently selected for the session:

```python
umo = event.unified_msg_origin
provider_id = await self.context.get_current_chat_provider_id(umo=umo)
```
> [!TIP]
> Added in v4.5.7

Then call `llm_generate()` to send a request to the model:

```python
llm_resp = await self.context.llm_generate(
    chat_provider_id=provider_id,  # Chat model ID
    prompt="Hello, world!",
)
# print(llm_resp.completion_text)  # Get the returned text
```
Tools enable large language models to invoke external capabilities.
```python
from pydantic import Field
from pydantic.dataclasses import dataclass

from astrbot.core.agent.run_context import ContextWrapper
from astrbot.core.agent.tool import FunctionTool, ToolExecResult
from astrbot.core.astr_agent_context import AstrAgentContext


@dataclass
class BilibiliTool(FunctionTool[AstrAgentContext]):
    name: str = "bilibili_videos"  # Tool name
    description: str = "A tool to fetch Bilibili videos."  # Tool description
    parameters: dict = Field(
        default_factory=lambda: {
            "type": "object",
            "properties": {
                "keywords": {
                    "type": "string",
                    "description": "Keywords to search for Bilibili videos.",
                },
            },
            "required": ["keywords"],
        }
    )

    async def call(
        self, context: ContextWrapper[AstrAgentContext], **kwargs
    ) -> ToolExecResult:
        return "1. Video Title: How to Use AstrBot\nVideo Link: xxxxxx"
```
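Under the hood, when the model emits a tool call, the runtime looks the tool up by `name` and passes the JSON-decoded arguments to `call()` as keyword arguments. A minimal self-contained dispatch sketch (with hypothetical names, not AstrBot's internals):

```python
import asyncio
import json

# Registered tools keyed by name; plain async callables here for brevity.
async def bilibili_videos(keywords: str) -> str:
    return f"Results for {keywords!r}"

TOOLS = {"bilibili_videos": bilibili_videos}

# What a model's tool-call message typically looks like
# (the arguments arrive as a JSON-encoded string).
tool_call = {"name": "bilibili_videos", "arguments": '{"keywords": "AstrBot"}'}

async def dispatch(call: dict) -> str:
    fn = TOOLS[call["name"]]              # match the tool by name
    kwargs = json.loads(call["arguments"])  # JSON object -> **kwargs
    return await fn(**kwargs)

print(asyncio.run(dispatch(tool_call)))  # Results for 'AstrBot'
```

This is why the keys in your `parameters` schema must match the keyword arguments your `call()` implementation reads from `**kwargs`.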
> [!TIP]
> Added in v4.5.7

An Agent can be defined as a combination of `system_prompt` + `tools` + `llm`, enabling more sophisticated intelligent behavior.

After defining the Tool above, you can invoke an Agent as follows:

```python
llm_resp = await self.context.tool_loop_agent(
    event=event,
    chat_provider_id=prov_id,
    prompt="Search for videos related to AstrBot on Bilibili.",
    tools=ToolSet([BilibiliTool()]),
    max_steps=30,  # Maximum agent execution steps
    tool_call_timeout=120,  # Tool invocation timeout
)
# print(llm_resp.completion_text)  # Get the returned text
```
The `tool_loop_agent()` method automatically handles the loop of tool invocations and LLM requests until the model stops calling tools or the maximum number of steps is reached.
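Conceptually, that loop can be sketched as follows (a simplified toy version with stand-in classes, not AstrBot's actual implementation):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class FakeResponse:
    text: str = ""
    tool_calls: list = field(default_factory=list)

class FakeLLM:
    """Toy model: requests one tool call, then answers with what it saw."""
    def __init__(self):
        self.step = 0

    async def chat(self, messages, tools):
        self.step += 1
        if self.step == 1:
            return FakeResponse(tool_calls=[("weather", {"city": "Beijing"})])
        return FakeResponse(text=f"Done after seeing: {messages[-1]['content']}")

async def tool_loop(llm, prompt, tools, max_steps=5):
    """Ask the model, execute any requested tools, feed the results back,
    and stop once the model answers without calling a tool."""
    messages = [{"role": "user", "content": prompt}]
    resp = None
    for _ in range(max_steps):
        resp = await llm.chat(messages, tools)
        if not resp.tool_calls:  # model answered directly -> loop ends
            break
        for name, args in resp.tool_calls:
            result = await tools[name](**args)
            messages.append({"role": "tool", "content": result})
    return resp

async def weather(city: str) -> str:
    return f"{city}: sunny"

resp = asyncio.run(tool_loop(FakeLLM(), "What's the weather?", {"weather": weather}))
print(resp.text)  # Done after seeing: Beijing: sunny
```

`max_steps` bounds the number of model round trips, which is what prevents a model that keeps requesting tools from looping forever.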
> [!TIP]
> Added in v4.5.7
Multi-Agent systems decompose complex applications into multiple specialized agents that collaborate to solve problems. Unlike relying on a single agent to handle every step, multi-agent architectures allow smaller, more focused agents to be composed into coordinated workflows. We implement multi-agent systems using the agent-as-tool pattern.
In the example below, we define a Main Agent responsible for delegating tasks to different Sub-Agents based on user queries. Each Sub-Agent focuses on specific tasks, such as retrieving weather information.
Define the tools:

```python
# Import paths follow the examples above; they may vary slightly
# between AstrBot versions.
from dataclasses import field

from mcp.types import CallToolResult
from pydantic.dataclasses import dataclass

from astrbot.api import logger
from astrbot.core.agent.run_context import ContextWrapper
from astrbot.core.agent.tool import FunctionTool, ToolSet
from astrbot.core.astr_agent_context import AstrAgentContext


@dataclass
class AssignAgentTool(FunctionTool[AstrAgentContext]):
    """Main agent uses this tool to decide which sub-agent to delegate a task to."""

    name: str = "assign_agent"
    description: str = "Assign an agent to a task based on the given query"
    parameters: dict = field(
        default_factory=lambda: {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The query to call the sub-agent with.",
                },
            },
            "required": ["query"],
        }
    )

    async def call(
        self, context: ContextWrapper[AstrAgentContext], **kwargs
    ) -> str | CallToolResult:
        # Here you would implement the actual agent assignment logic.
        # For demonstration purposes, we'll return a dummy response.
        return "Based on the query, you should assign agent 1."


@dataclass
class WeatherTool(FunctionTool[AstrAgentContext]):
    """In this example, sub-agent 1 uses this tool to get weather information."""

    name: str = "weather"
    description: str = "Get weather information for a location"
    parameters: dict = field(
        default_factory=lambda: {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "The city to get weather information for.",
                },
            },
            "required": ["city"],
        }
    )

    async def call(
        self, context: ContextWrapper[AstrAgentContext], **kwargs
    ) -> str | CallToolResult:
        city = kwargs["city"]
        # Here you would implement the actual weather fetching logic.
        # For demonstration purposes, we'll return a dummy response.
        return f"The current weather in {city} is sunny with a temperature of 25°C."


@dataclass
class SubAgent1(FunctionTool[AstrAgentContext]):
    """Define a sub-agent as a function tool."""

    name: str = "subagent1_name"
    description: str = "subagent1_description"
    parameters: dict = field(
        default_factory=lambda: {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The query to call the sub-agent with.",
                },
            },
            "required": ["query"],
        }
    )

    async def call(
        self, context: ContextWrapper[AstrAgentContext], **kwargs
    ) -> str | CallToolResult:
        ctx = context.context.context
        event = context.context.event
        logger.info(f"the llm context messages: {context.messages}")
        llm_resp = await ctx.tool_loop_agent(
            event=event,
            chat_provider_id=await ctx.get_current_chat_provider_id(
                event.unified_msg_origin
            ),
            prompt=kwargs["query"],
            tools=ToolSet([WeatherTool()]),
            max_steps=30,
        )
        return llm_resp.completion_text


@dataclass
class SubAgent2(FunctionTool[AstrAgentContext]):
    """Define a sub-agent as a function tool."""

    name: str = "subagent2_name"
    description: str = "subagent2_description"
    parameters: dict = field(
        default_factory=lambda: {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The query to call the sub-agent with.",
                },
            },
            "required": ["query"],
        }
    )

    async def call(
        self, context: ContextWrapper[AstrAgentContext], **kwargs
    ) -> str | CallToolResult:
        return "I am useless :(, you shouldn't call me :("
```
Then, similarly, invoke the Agent using the `tool_loop_agent()` method:

```python
from astrbot.api.event import AstrMessageEvent, filter


@filter.command("test")
async def test(self, event: AstrMessageEvent):
    umo = event.unified_msg_origin
    prov_id = await self.context.get_current_chat_provider_id(umo)
    llm_resp = await self.context.tool_loop_agent(
        event=event,
        chat_provider_id=prov_id,
        prompt="Test calling sub-agent for Beijing's weather information.",
        system_prompt=(
            "You are the main agent. Your task is to delegate tasks to sub-agents based on user queries. "
            "Before delegating, use the 'assign_agent' tool to determine which sub-agent is best suited for the task."
        ),
        tools=ToolSet([SubAgent1(), SubAgent2(), AssignAgentTool()]),
        max_steps=30,
    )
    yield event.plain_result(llm_resp.completion_text)
```
```python
from astrbot.core.conversation_mgr import Conversation

uid = event.unified_msg_origin
conv_mgr = self.context.conversation_manager
curr_cid = await conv_mgr.get_curr_conversation_id(uid)
conversation = await conv_mgr.get_conversation(uid, curr_cid)  # Conversation
```
::: details Conversation type definition
```python
@dataclass
class Conversation:
    """The conversation entity representing a chat session."""

    platform_id: str
    """The platform ID in AstrBot"""
    user_id: str
    """The user ID associated with the conversation."""
    cid: str
    """The conversation ID, in UUID format."""
    history: str = ""
    """The conversation history as a string."""
    title: str | None = ""
    """The title of the conversation. For now, it's only used in WebChat."""
    persona_id: str | None = ""
    """The persona ID associated with the conversation."""
    created_at: int = 0
    """The timestamp when the conversation was created."""
    updated_at: int = 0
    """The timestamp when the conversation was last updated."""
```
:::
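Note that `Conversation.history` is a plain string. In practice it typically holds a JSON-encoded message list (treat the exact shape as an assumption and verify against your AstrBot version). A minimal parsing sketch:

```python
import json

# Example of what `conversation.history` might contain
# (assumed shape: a JSON array of role/content message dicts).
raw_history = '[{"role": "user", "content": "hi"}, {"role": "assistant", "content": "hello"}]'

messages = json.loads(raw_history) if raw_history else []
print(len(messages))         # 2
print(messages[-1]["role"])  # assistant
```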
**`new_conversation`**

Parameters:

- `unified_msg_origin: str` – In the format `platform_name:message_type:session_id`
- `platform_id: str | None` – Platform identifier, defaults to parsing from `unified_msg_origin`
- `content: list[dict] | None` – Initial message history
- `title: str | None` – Conversation title
- `persona_id: str | None` – Associated persona ID

Returns: `str` – Newly generated UUID conversation ID

**`switch_conversation`**

Parameters:

- `unified_msg_origin: str`
- `conversation_id: str`

Returns: `None`

**`delete_conversation`**

When `conversation_id` is `None`, deletes the current conversation.

Parameters:

- `unified_msg_origin: str`
- `conversation_id: str | None`

Returns: `None`

**`get_curr_conversation_id`**

Parameters:

- `unified_msg_origin: str`

Returns: `str | None` – Current conversation ID, returns `None` if it doesn't exist

**`get_conversation`**

Creates the conversation if it doesn't exist when `create_if_not_exists=True`.

Parameters:

- `unified_msg_origin: str`
- `conversation_id: str`
- `create_if_not_exists: bool = False`

Returns: `Conversation | None`

**`get_conversations`**

Parameters:

- `unified_msg_origin: str | None` – When `None`, does not filter by user
- `platform_id: str | None`

Returns: `List[Conversation]`

**`update_conversation`**

Parameters:

- `unified_msg_origin: str`
- `conversation_id: str | None` – Uses the current conversation when `None`
- `history: list[dict] | None`
- `title: str | None`
- `persona_id: str | None`

Returns: `None`

`PersonaManager` is responsible for unified loading, caching, and providing CRUD interfaces for all Personas, while maintaining compatibility with the legacy persona format (v3) from before AstrBot 4.x.
During initialization, it automatically reads all personas from the database and generates v3-compatible data for seamless use with legacy code.
```python
persona_mgr = self.context.persona_manager
```
**`get_persona`**

Parameters:

- `persona_id: str` – Persona ID

Returns: `Persona` – Persona data

Raises: `ValueError` – Raised when the persona doesn't exist

**`get_all_personas`**

Returns: `list[Persona]` – Persona list, may be empty

**`create_persona`**

Parameters:

- `persona_id: str` – New persona ID (unique)
- `system_prompt: str` – System prompt
- `begin_dialogs: list[str]` – Optional, opening dialogs (even number of entries, alternating user/assistant)
- `tools: list[str]` – Optional, list of allowed tools; `None` = all tools, `[]` = disable all

Returns: `Persona` – Newly created persona object

Raises: `ValueError` – If `persona_id` already exists

**`update_persona`**

Parameters:

- `persona_id: str` – Persona ID to update
- `system_prompt: str` – Optional, new system prompt
- `begin_dialogs: list[str]` – Optional, new opening dialogs
- `tools: list[str]` – Optional, new tool list; semantics same as `create_persona`

Returns: `Persona` – Updated persona object

Raises: `ValueError` – If `persona_id` doesn't exist

**`delete_persona`**

Parameters:

- `persona_id: str` – Persona ID to delete

Raises: `ValueError` – If `persona_id` doesn't exist

**`get_default_persona_v3`**

Falls back to `DEFAULT_PERSONALITY` if the configuration doesn't specify one or the specified persona doesn't exist.

Parameters:

- `umo: str | MessageSession | None` – Session identifier, used to read user-level configuration

Returns: `Personality` – Default persona object in v3 format

::: details Persona / Personality type definitions
```python
from datetime import datetime, timezone
from typing import Optional, TypedDict

from sqlalchemy import Text, UniqueConstraint
from sqlmodel import JSON, Field, SQLModel


class Persona(SQLModel, table=True):
    """Persona is a set of instructions for LLMs to follow.

    It can be used to customize the behavior of LLMs.
    """

    __tablename__ = "personas"

    id: int = Field(primary_key=True, sa_column_kwargs={"autoincrement": True})
    persona_id: str = Field(max_length=255, nullable=False)
    system_prompt: str = Field(sa_type=Text, nullable=False)
    begin_dialogs: Optional[list] = Field(default=None, sa_type=JSON)
    """a list of strings, each representing a dialog to start with"""
    tools: Optional[list] = Field(default=None, sa_type=JSON)
    """None means use ALL tools for default, empty list means no tools, otherwise a list of tool names."""
    created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    updated_at: datetime = Field(
        default_factory=lambda: datetime.now(timezone.utc),
        sa_column_kwargs={"onupdate": datetime.now(timezone.utc)},
    )

    __table_args__ = (
        UniqueConstraint(
            "persona_id",
            name="uix_persona_id",
        ),
    )


class Personality(TypedDict):
    """LLM Persona class.

    Starting from v4.0.0, it's recommended to use the Persona class above. Additionally, the mood_imitation_dialogs field has been deprecated.
    """

    prompt: str
    name: str
    begin_dialogs: list[str]
    mood_imitation_dialogs: list[str]
    """Mood imitation dialog preset. Deprecated since v4.0.0."""
    tools: list[str] | None
    """Tool list. None means use all tools, empty list means don't use any tools"""
```
:::