# Architecture
Tarko is designed with a clean three-layer architecture that separates concerns and enables flexible agent development.
```mermaid
graph TB
  subgraph "Engineering Layer"
    CLI["Agent CLI"]
    Server["Agent Server"]
    UI["Agent UI"]
  end
  subgraph "Application Layer"
    AgentTARS["Agent TARS"]
    OmniAgent["Omni Agent"]
    GithubAgent["Github Agent"]
    CustomAgent["Custom Agent"]
  end
  subgraph "Kernel Layer"
    ContextEng["Context Engineering"]
    ToolCall["Tool Call Engine"]
    EventStream["Event Stream"]
    AgentProtocol["Agent Protocol"]
    ModelProvider["Model Provider"]
    AgentHooks["Agent Hooks"]
  end
  CLI --> AgentTARS
  Server --> OmniAgent
  UI --> GithubAgent
  AgentTARS --> ContextEng
  OmniAgent --> ToolCall
  GithubAgent --> EventStream
  CustomAgent --> AgentProtocol
```
## Engineering Layer

The engineering layer provides production-ready solutions for deploying Tarko-based agents.
### Agent CLI (`@tarko/agent-cli`)

**Purpose**: One-click agent development and deployment

**Use Cases**: `tarko run [agent]` for local development

**Example**:

```bash
# Development
tarko run my-agent.ts

# Production deployment
tarko deploy my-agent.ts --platform tars
```
### Agent Server (`@tarko/agent-server`)

**Purpose**: Node.js API for custom server integrations

**Use Cases**:

**Example**:

```typescript
import { AgentServer } from '@tarko/agent-server';

const server = new AgentServer({
  agent: myAgent,
  auth: customAuthProvider,
  storage: customStorageProvider,
});

server.listen(3000);
```
### Agent UI (`@tarko/agent-ui`)

**Purpose**: Official web UI for the Tarko Agent Protocol

**Customization Levels**:
## Application Layer

The application layer contains Tarko-based agent implementations for specific use cases.
### Agent TARS

**Purpose**: Open-source general-purpose multimodal agent

**Capabilities**: Browser automation, file system, command execution, search, MCP
### Omni Agent

**Purpose**: UI-TARS-2 specialized multimodal agent

**Capabilities**: Same as Agent TARS, but optimized for Seed Agent integration
### Github Agent

**Purpose**: Git workflow and coding agent

**Capabilities**: Github workflow, code search, code generation, command execution
### Custom Agents

Developers can build custom agents using the Tarko kernel while maintaining compatibility with the engineering layer.
## Kernel Layer

The kernel layer solves core agent runtime challenges.
### Context Engineering

**Problem**: Building agents capable of long-running operations

**Solution**: Advanced context management with automatic optimization

**Features**:
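To illustrate the kind of optimization a context engine performs, here is a minimal, hypothetical sketch (not the actual Tarko API) that trims a conversation to a token budget while always preserving the system prompt and the most recent turns:

```typescript
// Hypothetical sketch for illustration; Tarko's real context engine
// and its configuration surface may differ.
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Rough heuristic: ~4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep the system prompt, then add messages from newest to oldest
// until the token budget is exhausted.
function trimContext(messages: Message[], budget: number): Message[] {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  let used = system.reduce((n, m) => n + estimateTokens(m.content), 0);
  const kept: Message[] = [];
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (used + cost > budget) break;
    used += cost;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}
```

Dropping the oldest turns first is the simplest strategy; a production engine would also consider summarization or selective retrieval.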
### Tool Call Engine

**Problem**: Different LLM providers have varying Tool Call support

**Solution**: Unified interface following the OpenAI Function Call protocol

**Supported Engines**:
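For reference, a tool definition in the OpenAI Function Call shape looks like the following; the `web_search` tool here is an illustrative placeholder, not a built-in Tarko tool:

```typescript
// A tool definition in the OpenAI Function Call shape. A unified tool
// call engine can translate this one schema for each provider.
const searchTool = {
  type: 'function' as const,
  function: {
    name: 'web_search', // hypothetical tool name
    description: 'Search the web and return the top results',
    parameters: {
      type: 'object',
      properties: {
        query: { type: 'string', description: 'Search query' },
        limit: { type: 'number', description: 'Maximum number of results' },
      },
      required: ['query'],
    },
  },
};
```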
### Event Stream

**Problem**: Agent components need a standard way to communicate

**Solution**: Unified event stream protocol

**Benefits**:
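The idea can be sketched as a small typed publish/subscribe stream; the event names and the `EventStream` class below are illustrative, not Tarko's actual protocol types:

```typescript
// Minimal sketch of a unified event stream. Tarko's real protocol
// defines its own event types and subscription API.
type AgentEvent =
  | { type: 'tool_call'; name: string }
  | { type: 'assistant_message'; content: string };

type Listener = (event: AgentEvent) => void;

class EventStream {
  private listeners: Listener[] = [];

  // Subscribe a listener; returns an unsubscribe function.
  subscribe(fn: Listener): () => void {
    this.listeners.push(fn);
    return () => {
      this.listeners = this.listeners.filter((l) => l !== fn);
    };
  }

  // Deliver an event to every current subscriber.
  emit(event: AgentEvent): void {
    for (const fn of this.listeners) fn(event);
  }
}
```

Because every component speaks the same event vocabulary, a UI, a logger, or a test harness can all observe the same stream without special-casing each agent.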
### Agent Protocol

**Problem**: Inconsistent agent interfaces

**Solution**: Standard protocol definitions

**Components**:
### Model Provider

**Design**: OpenAI-compatible protocol

**Supported Providers**: Volcengine, OpenAI, Anthropic, Gemini

**Benefits**:
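An OpenAI-compatible design means every provider is addressed through the same chat-completion request shape. The sketch below shows that shape; the model id is a placeholder, not Tarko configuration:

```typescript
// Shape of an OpenAI-compatible chat completion request. Providers that
// follow this protocol can be swapped by changing only the model/endpoint.
const request = {
  model: 'gpt-4o', // placeholder model id; provider-specific
  messages: [
    { role: 'system', content: 'You are a helpful agent.' },
    { role: 'user', content: 'Summarize this repository.' },
  ],
  tools: [], // Function Call tool definitions, if any
  stream: true, // stream tokens as they are generated
};
```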
### Agent Hooks

**Purpose**: Extensible customization points

**Use Cases**: Custom logging, monitoring, behavior modification
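As an illustration of the logging use case, a hooks object might look like the following; the hook names `onToolCallStart`/`onToolCallEnd` are hypothetical and not taken from the Tarko API:

```typescript
// Hypothetical hook shape for illustration only; consult the Tarko API
// reference for the actual hook names and signatures.
interface AgentHooks {
  onToolCallStart?: (name: string) => void;
  onToolCallEnd?: (name: string, durationMs: number) => void;
}

// Collect log lines so the hook behavior is observable.
const logs: string[] = [];

const loggingHooks: AgentHooks = {
  onToolCallStart: (name) => logs.push(`${name} started`),
  onToolCallEnd: (name, ms) => logs.push(`${name} finished in ${ms}ms`),
};

// Simulate the agent invoking the hooks around a tool call.
loggingHooks.onToolCallStart?.('web_search');
loggingHooks.onToolCallEnd?.('web_search', 120);
// logs now holds: ['web_search started', 'web_search finished in 120ms']
```

Because hooks are optional callbacks, cross-cutting concerns like logging stay outside the kernel and require no core modifications.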
## Design Principles

- **Layered separation**: Each layer has clear responsibilities and minimal dependencies on other layers.
- **Protocol-driven**: Standardized protocols enable interoperability and a tooling ecosystem.
- **Standards-based**: Leveraging existing standards reduces the learning curve and increases compatibility.
- **Extensible**: Hooks and protocols allow customization without core modifications.
## Example

```typescript
import { Agent } from '@tarko/agent';

// Application Layer
const myAgent = new Agent({
  // Kernel Layer integration
  contextEngineering: { /* config */ },
  toolCallEngine: { /* config */ },
  hooks: { /* custom hooks */ },
});

// Engineering Layer consumption
export default myAgent;
```

```bash
# Engineering Layer handles deployment
tarko run my-agent.ts --production
```