# CLI Commands
Complete reference for all Tarko Agent CLI commands and their options.
## `tarko` / `tarko run`

Launches the interactive Web UI for real-time conversation and file browsing.
```bash
# Start interactive Web UI (default)
tarko

# Equivalent explicit command
tarko run

# Run with a specific agent
tarko run ./my-agent.js

# Run with built-in agents
tarko run agent-tars
tarko run omni-tars
tarko run mcp-agent

# Custom port and auto-open browser
tarko run --port 8888 --open

# Development mode with hot reload
tarko run --dev

# Debug mode with verbose logging
tarko run --debug

# Custom configuration file
tarko run --config ./custom-config.ts

# Custom workspace
tarko run --workspace ./my-workspace
```
### Headless Mode

Headless mode runs silently and writes results to stdout, which makes it ideal for scripting:
```bash
# Direct input with text output (default)
tarko run --headless --input "Analyze current directory structure"

# Pipeline input
echo "Summarize this code" | tarko run --headless

# JSON output for programmatic use
tarko run --headless --input "Analyze files" --format json

# Include debug logs in output
tarko run --headless --input "Analyze files" --include-logs

# Disable cache for fresh execution
tarko run --headless --input "Analyze files" --use-cache false

# Combine with built-in agents
tarko run agent-tars --headless --input "List directory contents"
```
| Option | Description | Default |
|---|---|---|
| `--input <text>` | Direct input text | - |
| `--format <type>` | Output format: `text` or `json` | `text` |
| `--include-logs` | Include debug logs in output | `false` |
| `--use-cache <bool>` | Enable or disable caching | `true` |
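These options compose with ordinary shell error handling. A minimal sketch follows; `run_agent` is a hypothetical wrapper whose `echo` stands in for the real `tarko run --headless --input ... --format json` call, so the pattern runs even without tarko installed:

```bash
#!/bin/sh
# Sketch: call headless mode from a script and fail fast on errors.
# run_agent is a stand-in wrapper; in real use, replace the echo with:
#   tarko run --headless --input "$1" --format json
run_agent() {
  echo "{\"input\":\"$1\"}"
}

# Capture output, bail out if the command failed, then persist the result.
result=$(run_agent "Analyze files") || exit 1
printf '%s\n' "$result" > result.json
echo "wrote result.json"
```

The same shape works with `--format text`; JSON output is simply easier to post-process with other tools.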
## `tarko serve`

Starts a headless API server for system integration and production deployment.
```bash
# Start headless server
tarko serve

# Start server with a specific agent
tarko serve ./my-agent

# Start server with a built-in agent
tarko serve omni-tars

# Custom port (default: 3000)
tarko serve --port 8888

# Custom host (default: localhost)
tarko serve --host 0.0.0.0

# Custom port and configuration
tarko serve --port 8888 --config ./production.config.ts

# Debug mode
tarko serve --debug

# Custom configuration
tarko serve --config ./server.config.yaml

# Custom workspace
tarko serve --workspace ./server-workspace
```
When running `tarko serve`, the following endpoints are available:

- `GET /api/v1/health` - Health check
- `GET /api/v1/status` - Detailed status
- `POST /api/v1/chat` - Chat with agent
- `GET /api/v1/events` - Event stream (WebSocket)
- `GET /metrics` - Prometheus metrics (if enabled)

## `tarko request`

Direct LLM requests for debugging and testing purposes.
```bash
# Basic request
tarko request --provider openai --model gpt-4 --body '{"messages":[{"role":"user","content":"Hello"}]}'

# Load request from file
tarko request --provider openai --model gpt-4 --body ./request.json

# Custom API configuration
tarko request --provider openai --model gpt-4 \
  --apiKey sk-xxx \
  --baseURL https://api.openai.com/v1 \
  --body request.json

# Streaming mode
tarko request --provider openai --model gpt-4 --body request.json --stream

# Reasoning mode (for supported models like o1)
tarko request --provider openai --model o1-preview --body request.json --thinking

# Semantic output format
tarko request --provider openai --model gpt-4 --body request.json --format semantic
```
Supported providers:

- `openai` - OpenAI GPT models
- `anthropic` - Anthropic Claude models
- `azure` - Azure OpenAI Service
- `ollama` - Local Ollama models
- `gemini` - Google Gemini models

Example request body:

```json
{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000
}
```
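Since `--body` also accepts a file path, the body can be generated from a script. The sketch below simply writes the example body above to `request.json`:

```bash
# Write the example request body to request.json, then pass it with:
#   tarko request --provider openai --model gpt-4 --body ./request.json
cat > request.json <<'EOF'
{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000
}
EOF
echo "wrote request.json"
```

Keeping the body in a file avoids shell-quoting pitfalls when messages contain quotes or newlines.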
## `tarko workspace`

Workspace management utilities for organizing agent projects.
```bash
# Initialize new workspace in current directory
tarko workspace --init

# Open workspace in VSCode
tarko workspace --open

# Enable global workspace
tarko workspace --enable

# Disable global workspace
tarko workspace --disable

# Show workspace status and configuration
tarko workspace --status
```
Initializing a workspace creates:
```
my-workspace/
├── tarko.config.ts    # Main configuration
├── agents/            # Custom agents
├── tools/             # Custom tools
├── data/              # Agent data and cache
├── logs/              # Execution logs
└── .tarko/            # Internal workspace data
```
## Global Options

These options work with all commands:
```bash
# Model configuration
tarko --model.provider openai --model.id gpt-4 --model.apiKey sk-xxx

# Custom configuration file
tarko --config ./custom.config.ts

# Custom workspace
tarko --workspace ./my-workspace

# Enable debug logging
tarko --debug

# Verbose output
tarko --verbose

# Dry run (show what would be executed)
tarko --dry-run

# Show configuration and exit
tarko --show-config

# Include specific tools
tarko --tool.include "file_*,web_*"

# Exclude specific tools
tarko --tool.exclude "dangerous_*"

# Include specific MCP servers
tarko --mcpServer.include "filesystem,browser"

# Exclude specific MCP servers
tarko --mcpServer.exclude "experimental_*"
```
## Environment Variables

Environment variables offer an alternative to CLI options:
```bash
# Model configuration
export OPENAI_API_KEY=your-api-key
export ANTHROPIC_API_KEY=your-api-key

# Server configuration
export TARKO_PORT=3000
export TARKO_HOST=0.0.0.0

# Debug settings
export DEBUG=tarko:*
export TARKO_LOG_LEVEL=debug

# Workspace
export TARKO_WORKSPACE=./my-workspace
```
## Exit Codes

| Code | Description |
|---|---|
| 0 | Success |
| 1 | General error |
| 2 | Configuration error |
| 3 | Network error |
| 4 | Authentication error |
| 5 | Agent execution error |
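Wrapper scripts can branch on these codes. A minimal sketch follows; only the numeric mapping comes from the table above, the message wording is illustrative:

```bash
# Map tarko exit codes (table above) to human-readable messages.
describe_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "general error" ;;
    2) echo "configuration error" ;;
    3) echo "network error" ;;
    4) echo "authentication error" ;;
    5) echo "agent execution error" ;;
    *) echo "unknown exit code: $1" ;;
  esac
}

# Typical use after a run:
#   tarko run --headless --input "..."; describe_exit $?
describe_exit 2
```

Distinguishing configuration and authentication errors (codes 2 and 4) from transient network errors (code 3) is useful for deciding whether a CI job should retry.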
## Common Workflows

A typical development workflow, from initialization to deployment:

```bash
# 1. Initialize workspace
tarko workspace --init --name my-project

# 2. Start development with UI
tarko run --dev --open

# 3. Test with headless mode
tarko run --headless --input "Test my agent"

# 4. Deploy to production
tarko serve --port 3000 --config production.config.ts
```
For CI or scripted testing:

```bash
# Test agent functionality
tarko run agent-tars --headless --input "Run tests" --format json > results.json

# Validate configuration
tarko --dry-run --show-config

# Health check
curl -f http://localhost:3000/api/v1/health || exit 1
```
For troubleshooting:

```bash
# Debug with verbose logging
DEBUG=tarko:* tarko run --debug --verbose

# Test direct LLM requests
tarko request --provider openai --model gpt-4 --body '{"messages":[{"role":"user","content":"Hello"}]}' --debug

# Inspect configuration
tarko --show-config --debug
```