docs/devguide/ai/dynamic-workflows.md
Conductor supports three levels of agent dynamism, from simple tool use to fully self-generating agents.
The defining pattern of an autonomous agent is the loop: call an LLM, execute a tool, observe the result, decide whether to continue. Conductor models this with DO_WHILE:
```json
{
  "name": "autonomous_agent",
  "description": "Agent that loops until the task is complete",
  "version": 1,
  "schemaVersion": 2,
  "tasks": [
    {
      "name": "agent_loop",
      "taskReferenceName": "loop",
      "type": "DO_WHILE",
      "loopCondition": "if ($.think['result']['done'] == true) { false; } else { true; }",
      "loopOver": [
        {
          "name": "think",
          "taskReferenceName": "think",
          "type": "LLM_CHAT_COMPLETE",
          "inputParameters": {
            "llmProvider": "anthropic",
            "model": "claude-sonnet-4-20250514",
            "messages": [
              {
                "role": "system",
                "message": "You are an agent. Available tools: ${workflow.input.tools}. Previous results: ${loop.output.results}. Respond with JSON: {\"action\": \"tool_name\", \"arguments\": {}, \"done\": false} or {\"answer\": \"...\", \"done\": true}"
              },
              {
                "role": "user",
                "message": "${workflow.input.task}"
              }
            ],
            "temperature": 0.1
          }
        },
        {
          "name": "act",
          "taskReferenceName": "act",
          "type": "SWITCH",
          "evaluatorType": "javascript",
          "inputParameters": {
            "done": "${think.output.result.done}"
          },
          "expression": "$.done ? 'done' : 'call_tool'",
          "decisionCases": {
            "call_tool": [
              {
                "name": "execute_tool",
                "taskReferenceName": "tool_call",
                "type": "CALL_MCP_TOOL",
                "inputParameters": {
                  "mcpServer": "${workflow.input.mcpServerUrl}",
                  "method": "${think.output.result.action}",
                  "arguments": "${think.output.result.arguments}"
                }
              }
            ]
          },
          "defaultCase": []
        }
      ]
    }
  ],
  "outputParameters": {
    "answer": "${loop.output.think.output.result.answer}",
    "iterations": "${loop.output.iteration}"
  }
}
```
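The contract between the think step and the act step is the JSON shape named in the system prompt. A minimal Python sketch of the dispatch logic that the SWITCH and CALL_MCP_TOOL tasks implement declaratively (the function name is illustrative, not part of Conductor):

```python
import json

def next_step(llm_reply: str) -> dict:
    """Parse the agent's JSON reply (the contract from the system prompt)
    and decide what the loop should do next."""
    decision = json.loads(llm_reply)
    if decision.get("done"):
        # Terminal turn: the loop condition sees done == true and exits.
        return {"continue": False, "answer": decision.get("answer")}
    # Non-terminal turn: the SWITCH routes to CALL_MCP_TOOL with these inputs.
    return {
        "continue": True,
        "method": decision["action"],
        "arguments": decision.get("arguments", {}),
    }
```

In the workflow, this branching is data, not code: the SWITCH expression and the CALL_MCP_TOOL input parameters carry the same logic, which is what lets Conductor checkpoint every turn.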
What makes this durable:
- Every LLM call and tool call in the loop is a persisted task, so a crash mid-loop resumes from the last completed step rather than from scratch.
- Failed tool calls follow the task's retry policy instead of losing the whole run.
- Each iteration is recorded, so the agent's full reasoning trace is visible in the execution history.
Conductor supports dynamic workflow execution where the complete workflow definition is provided at start time, without pre-registration. This is the most powerful form of agent dynamism — the LLM generates the entire execution plan as JSON, and Conductor runs it immediately.
The planner workflow definition, which can itself be supplied inline in a StartWorkflowRequest:
```json
{
  "name": "dynamic_agent_planner",
  "version": 1,
  "schemaVersion": 2,
  "tasks": [
    {
      "name": "generate_plan",
      "taskReferenceName": "planner",
      "type": "LLM_CHAT_COMPLETE",
      "inputParameters": {
        "llmProvider": "anthropic",
        "model": "claude-sonnet-4-20250514",
        "messages": [
          {
            "role": "system",
            "message": "You are a workflow planner. Given a user task, generate a Conductor workflow definition as JSON. Available task types: LLM_CHAT_COMPLETE, CALL_MCP_TOOL, LIST_MCP_TOOLS, HTTP, HUMAN, LLM_SEARCH_INDEX. The workflow must include a 'name', 'tasks' array, and 'outputParameters'."
          },
          {
            "role": "user",
            "message": "${workflow.input.task}"
          }
        ],
        "temperature": 0.2
      }
    },
    {
      "name": "review_plan",
      "taskReferenceName": "approval",
      "type": "HUMAN",
      "inputParameters": {
        "generatedWorkflow": "${planner.output.result}"
      }
    },
    {
      "name": "execute_plan",
      "taskReferenceName": "execution",
      "type": "START_WORKFLOW",
      "inputParameters": {
        "startWorkflow": {
          "workflowDefinition": "${planner.output.result}",
          "input": "${workflow.input.taskInput}"
        }
      }
    }
  ],
  "outputParameters": {
    "generatedPlan": "${planner.output.result}",
    "executionId": "${execution.output.workflowId}"
  }
}
```
What happens:
1. planner — LLM_CHAT_COMPLETE generates an entire workflow definition as JSON based on the user's task description.
2. approval — HUMAN task pauses the workflow so a reviewer can inspect the generated plan before it runs. This is critical — you don't want an LLM-generated workflow executing unsupervised.
3. execution — START_WORKFLOW launches the generated workflow definition directly. Conductor validates it, persists it, and executes it with full durability. No pre-registration needed.

The generated child workflow gets all the same guarantees as any Conductor workflow: persisted state, retry policies, failure handling, full observability. The fact that it was generated by an LLM 30 seconds ago doesn't matter — it runs on the same durable execution engine.
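Before the HUMAN reviewer ever sees the plan, a client can pre-check that the generated JSON meets the requirements stated in the planner prompt (a `name`, a `tasks` array, `outputParameters`, and only the allowed task types). A sketch of such a validator; the function name and error strings are illustrative:

```python
def validate_plan(plan: dict) -> list[str]:
    """Check an LLM-generated workflow definition for the fields the
    planner prompt requires. Returns a list of problems (empty = OK)."""
    errors = []
    if not isinstance(plan.get("name"), str) or not plan.get("name"):
        errors.append("missing or empty 'name'")
    tasks = plan.get("tasks")
    if not isinstance(tasks, list) or not tasks:
        errors.append("'tasks' must be a non-empty array")
    else:
        # The allow-list mirrors the task types named in the system prompt.
        allowed = {"LLM_CHAT_COMPLETE", "CALL_MCP_TOOL", "LIST_MCP_TOOLS",
                   "HTTP", "HUMAN", "LLM_SEARCH_INDEX"}
        for i, task in enumerate(tasks):
            if task.get("type") not in allowed:
                errors.append(f"task {i}: disallowed type {task.get('type')!r}")
    if "outputParameters" not in plan:
        errors.append("missing 'outputParameters'")
    return errors
```

A cheap structural check like this does not replace human review; it just filters out malformed plans before they reach the approval queue.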
Combined with DYNAMIC tasks (where the task type is resolved at runtime based on input) and DYNAMIC_FORK (where the number and type of parallel tasks is determined at runtime), this enables agents that create, modify, and execute their own plans.
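The same inline-definition start is available to any external client, not just to a START_WORKFLOW task. A hedged Python sketch of building that request body — the field names follow Conductor's StartWorkflowRequest, but the helper name is illustrative and the endpoint (e.g. POST /api/workflow) should be checked against your deployment:

```python
def start_request(workflow_def: dict, task_input: dict) -> dict:
    """Build a StartWorkflowRequest body that embeds the generated
    definition inline, so no pre-registration is needed."""
    return {
        "name": workflow_def["name"],
        "version": workflow_def.get("version", 1),
        # Inline definition: Conductor validates and persists it at start time.
        "workflowDef": workflow_def,
        "input": task_input,
    }

# The resulting dict would be sent as JSON to the Conductor server,
# e.g. POST http://localhost:8080/api/workflow (URL is an assumption).
```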
A more focused example — an agent that discovers tools, plans, gets approval, and executes. Every step uses a built-in system task.
```json
{
  "name": "mcp_agent_with_approval",
  "description": "Discover tools, plan, execute with approval, summarize",
  "version": 1,
  "schemaVersion": 2,
  "tasks": [
    {
      "name": "list_available_tools",
      "taskReferenceName": "discover_tools",
      "type": "LIST_MCP_TOOLS",
      "inputParameters": {
        "mcpServer": "${workflow.input.mcpServerUrl}"
      }
    },
    {
      "name": "decide_which_tools_to_use",
      "taskReferenceName": "plan",
      "type": "LLM_CHAT_COMPLETE",
      "inputParameters": {
        "llmProvider": "anthropic",
        "model": "claude-sonnet-4-20250514",
        "messages": [
          {
            "role": "system",
            "message": "You are an AI agent. Available tools: ${discover_tools.output.tools}. User wants to: ${workflow.input.task}"
          },
          {
            "role": "user",
            "message": "Which tool should I use and what parameters? Respond with JSON: {\"method\": \"string\", \"arguments\": {}}"
          }
        ],
        "temperature": 0.1,
        "maxTokens": 500
      }
    },
    {
      "name": "human_review",
      "taskReferenceName": "approval",
      "type": "HUMAN",
      "inputParameters": {
        "plannedAction": "${plan.output.result}"
      }
    },
    {
      "name": "execute_tool",
      "taskReferenceName": "execute",
      "type": "CALL_MCP_TOOL",
      "inputParameters": {
        "mcpServer": "${workflow.input.mcpServerUrl}",
        "method": "${plan.output.result.method}",
        "arguments": "${plan.output.result.arguments}"
      }
    },
    {
      "name": "summarize_result",
      "taskReferenceName": "summarize",
      "type": "LLM_CHAT_COMPLETE",
      "inputParameters": {
        "llmProvider": "anthropic",
        "model": "claude-sonnet-4-20250514",
        "messages": [
          {
            "role": "user",
            "message": "The user asked: ${workflow.input.task}\n\nTool result: ${execute.output.content}\n\nSummarize this result for the user."
          }
        ],
        "maxTokens": 500
      }
    }
  ],
  "outputParameters": {
    "plan": "${plan.output.result}",
    "toolResult": "${execute.output.content}",
    "summary": "${summarize.output.result}",
    "approvedBy": "${approval.output.reviewer}"
  }
}
```
Every task type here — LIST_MCP_TOOLS, LLM_CHAT_COMPLETE, CALL_MCP_TOOL, HUMAN — is a native Conductor system task. No custom workers, no external frameworks.
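Because the HUMAN task simply parks the workflow, the approval itself arrives as a task update from outside. A sketch of building that update, assuming Conductor's TaskResult shape for the task update endpoint (the helper name is illustrative, and field names should be verified against your server version):

```python
def approval_result(workflow_id: str, task_id: str,
                    reviewer: str, approved: bool) -> dict:
    """Build the TaskResult body that releases the waiting HUMAN task.
    The workflow resumes as soon as the task is marked COMPLETED."""
    return {
        "workflowInstanceId": workflow_id,
        "taskId": task_id,
        "status": "COMPLETED" if approved else "FAILED",
        # outputData is what downstream references see, e.g.
        # ${approval.output.reviewer} in the workflow's outputParameters.
        "outputData": {"reviewer": reviewer, "approved": approved},
    }
```

Rejecting with FAILED lets the workflow's normal failure handling (retries, failureWorkflow, alerts) take over, which is usually preferable to silently cancelling.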
See the full set of examples in the ai/examples/ directory.