docs/devguide/faq.md
Yes. Conductor is a fully open source workflow engine, released under the Apache 2.0 license. You can self-host it on your own infrastructure — there is no vendor lock-in, no proprietary runtime, and no cloud dependency. The self-hosted workflow engine supports 8+ persistence backends, 6 message brokers, and runs anywhere Docker or a JVM runs.
Yes. Conductor OSS is the continuation of the original Netflix Conductor repository after Netflix handed over stewardship of the project to the community.
No. The original Netflix repository has transitioned to Conductor OSS, which is the new home for the project. Active development and maintenance continues here.
Yes. Orkes is the primary maintainer of this repository and offers an enterprise SaaS platform for Conductor across all major cloud providers.
100% compatible. Orkes Conductor is built on top of Conductor OSS, ensuring full compatibility between the open-source version and the enterprise offering.
No. While Conductor excels at asynchronous orchestration, it also supports synchronous workflow execution when immediate results are required.
Not at all. Conductor is language and framework agnostic. Use your preferred language and framework — SDKs provide native integration for Java, Python, JavaScript, Go, C#, and more.
No. Conductor is designed for developers who write code. While workflows can be defined in JSON, the power comes from building workers and tasks in your preferred programming language.
Yes. Conductor supports advanced patterns including nested loops, dynamic branching, sub-workflows, and workflows with thousands of tasks.
Conductor combines durable execution, 14+ native LLM providers, JSON-native workflow definitions, 7+ language SDKs, and battle-tested scale (Netflix, Tesla, LinkedIn, JP Morgan). It's the only open source workflow engine with native AI/LLM task types, MCP integration, and built-in vector database support.
No — JSON makes workflows more capable, not less. A JSON workflow definition is pure orchestration: it describes what runs, in what order, with what inputs. It cannot open connections, mutate state, or produce side effects. This means every execution is deterministic by construction — given the same inputs, the same task graph executes every time. That is why replay, restart, and retry work unconditionally.
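To make this concrete, here is a minimal sketch of a workflow definition as pure data. The workflow and task names (`order_processing`, `charge_payment`, `ship_order`) are hypothetical; the field names follow the standard Conductor workflow-definition schema:

```python
import json

# Hypothetical workflow definition: pure orchestration data, no executable logic.
workflow_def = {
    "name": "order_processing",  # hypothetical workflow name
    "version": 1,
    "schemaVersion": 2,
    "tasks": [
        {
            "name": "charge_payment",  # hypothetical worker task
            "taskReferenceName": "charge_ref",
            "type": "SIMPLE",
            "inputParameters": {"orderId": "${workflow.input.orderId}"},
        },
        {
            "name": "ship_order",
            "taskReferenceName": "ship_ref",
            "type": "SIMPLE",
            "inputParameters": {"orderId": "${workflow.input.orderId}"},
        },
    ],
}

# The definition round-trips through JSON unchanged: it cannot open
# connections, mutate state, or read clocks, so the same inputs always
# produce the same task graph.
assert json.loads(json.dumps(workflow_def)) == workflow_def
```

Because the definition is data, replaying an execution is just re-walking the same graph with recorded task outputs.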
Code-based workflow engines embed orchestration logic alongside business logic, which means your workflow code can introduce non-determinism (system clocks, random values, uncontrolled I/O). These engines must impose restrictions on what your code is allowed to do — and bugs from violating those restrictions are subtle and hard to debug.
Conductor's dynamic primitives — DYNAMIC tasks, DYNAMIC_FORK, and dynamic sub-workflows — provide more runtime flexibility than code-based definitions. An LLM can generate a complete workflow definition as JSON and Conductor executes it immediately, with full durability and observability. No code generation, no compilation, no deployment. See JSON + Code Native for the full picture.
Both are durable execution engines, but with fundamentally different approaches. Conductor's JSON-native definitions separate orchestration from implementation, making workflows deterministic by construction — no side-effect restrictions to remember, no non-determinism bugs to debug. Temporal embeds orchestration in code, which requires developers to avoid non-deterministic operations (system clocks, random values, uncontrolled I/O) or risk subtle replay failures.
Conductor is fully open source (Apache 2.0) with no proprietary server components. It provides native LLM orchestration for 14+ providers, MCP tool calling, and vector database support out of the box — capabilities Temporal does not offer. Conductor's JSON definitions can be generated and modified at runtime by LLMs or APIs without a compile/deploy cycle.
Step Functions is a proprietary, cloud-locked service. Conductor is an open source, self-hosted workflow engine you can run on any infrastructure. Conductor supports 7+ language SDKs, 8+ persistence backends, and provides native AI agent orchestration — none of which Step Functions offers. If you need an open source Step Functions alternative with no cloud lock-in, Conductor is a strong fit.
Airflow is a DAG-based batch scheduler designed for data pipelines. Conductor is a real-time workflow orchestration engine designed for microservice orchestration, event-driven workflows, and AI agent orchestration. Conductor provides durable execution with sub-second task scheduling, while Airflow is optimized for scheduled batch jobs. If you need a real-time workflow engine rather than a job scheduler, Conductor is the better choice.
Yes. Conductor is a developer-first workflow automation platform — not a low-code drag-and-drop tool, but a code-first workflow engine where you define workflows as code or JSON and implement task workers in any language. It is well suited for automating business processes, data pipelines, and multi-service workflows that need durable execution and full observability.
Yes. Conductor provides native AI agent orchestration with LLM tasks (chat completion, text completion), MCP tool calling and function calling (LIST_MCP_TOOLS, CALL_MCP_TOOL), human-in-the-loop approval (HUMAN task), and dynamic workflows that agents can generate at runtime. Every agent built on Conductor is a durable agent — LLM orchestration runs with the same durable execution guarantees as any other workflow, so agents survive crashes, retries, and infrastructure failures without losing progress.
Yes. LIST_MCP_TOOLS discovers available tools from any MCP server, and CALL_MCP_TOOL executes them. Workflows can also be exposed as MCP tools via the MCP Gateway.
14+ providers natively: Anthropic (Claude), OpenAI (GPT), Azure OpenAI, Google Gemini, AWS Bedrock, Mistral, Cohere, HuggingFace, Ollama, Perplexity, Grok, StabilityAI, and more. All accessible as workflow system tasks with built-in function calling and tool use via MCP integration.
Yes. Built-in support for Pinecone, pgvector, and MongoDB Atlas Vector Search. System tasks handle embedding generation, storage, indexing, and semantic search — enabling RAG pipelines as standard workflows.
Yes. Every workflow execution is persisted at each step. If a task fails, it's retried with configurable backoff. If a worker crashes, the task is rescheduled. If the server restarts, execution resumes exactly where it left off. See Durable Execution.
Yes. Originally built at Netflix to handle massive scale, Conductor scales horizontally across multiple server instances. Workers scale independently, and the server supports millions of concurrent workflow executions across multiple persistence backends. This horizontal scaling architecture makes Conductor suitable for production workflow deployments at any scale.
Yes. Configure a failureWorkflow that runs compensation logic when the main workflow fails. Combined with task-level retries and timeout policies, Conductor provides full saga pattern support for distributed transactions. See Handling Errors.
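A sketch of how the failureWorkflow field ties a main workflow to its compensation logic. The workflow names (`book_trip`, `cancel_trip`) are made up for illustration; `failureWorkflow` is the standard workflow-definition field:

```python
# Hypothetical saga: if book_trip fails at any step, Conductor starts
# cancel_trip to run the compensation logic.
main_workflow = {
    "name": "book_trip",             # hypothetical workflow name
    "version": 1,
    "schemaVersion": 2,
    "failureWorkflow": "cancel_trip",  # compensation workflow, run on failure
    "tasks": [
        {"name": "reserve_flight", "taskReferenceName": "flight_ref", "type": "SIMPLE"},
        {"name": "reserve_hotel", "taskReferenceName": "hotel_ref", "type": "SIMPLE"},
    ],
}
```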
Yes. Workflow definitions are JSON and can be created, modified, and started dynamically via the API or SDKs. LLMs can generate workflow definitions that Conductor executes immediately without pre-registration.
Yes. The HUMAN task type pauses workflow execution until an external signal (approval, rejection, or data input) is received via API. The pause survives server restarts and deploys.
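A workflow fragment using the HUMAN task type might look like the following sketch (the task name is hypothetical):

```python
# Hypothetical HUMAN task: execution pauses here until an external API call
# (e.g. a manager's approval) signals the task to complete.
approval_task = {
    "name": "manager_approval",       # hypothetical task name
    "taskReferenceName": "approval_ref",
    "type": "HUMAN",
}
```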
Redis+Dynomite, PostgreSQL, MySQL, Cassandra, SQLite, Elasticsearch, and OpenSearch. Choose based on your scale and operational requirements.
Kafka, NATS, NATS Streaming, AMQP (RabbitMQ), SQS, and Conductor's internal queue. Use them for event-driven workflows and external system integration.
After polling for the task, update its status to IN_PROGRESS and set the callbackAfterSeconds value to the desired delay. The task remains invisible in the queue until that many seconds have elapsed, after which workers polling for it will receive it again.
If a timeout is set for the task and callbackAfterSeconds exceeds the timeout value, the task will be marked TIMED_OUT.
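The postponing update described above can be sketched as a task-update payload. The IDs are placeholders; the field names follow the Conductor TaskResult structure:

```python
# Build a task-update payload that keeps the task open but hides it from
# pollers for delay_seconds. IDs here are placeholders.
def build_callback_update(task_id: str, workflow_id: str, delay_seconds: int) -> dict:
    return {
        "taskId": task_id,
        "workflowInstanceId": workflow_id,
        "status": "IN_PROGRESS",               # task stays open ...
        "callbackAfterSeconds": delay_seconds,  # ... and invisible until then
    }

update = build_callback_update("task-123", "wf-456", 300)  # postpone 5 minutes
```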
Yes. As long as the task timeouts are configured to accommodate long-running work, the workflow will stay in the RUNNING state.
Ensure all the tasks are registered via the /metadata/taskdefs API. Add any missing task definitions (as reported in the error) and try again.
Conductor does not run the workers. When a task is scheduled, it is placed into a queue maintained by Conductor. Workers must poll for tasks using the /tasks/poll API at periodic intervals, execute the business logic for the task, and report the results back with a POST {{ api_prefix }}/tasks API call.
Conductor will, however, run system tasks on the Conductor server itself.
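The poll/execute/update cycle can be sketched with nothing but the standard library. The base URL, task type, and worker ID are assumptions for illustration; adjust them for your deployment:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080/api"  # assumed Conductor server address

def execute(task: dict) -> dict:
    """Business logic for the task; here a trivial pass-through."""
    return {"echo": task.get("inputData", {})}

def poll_once(task_type: str, worker_id: str) -> None:
    # GET /tasks/poll/{taskType} returns one queued task, or an empty body.
    req = urllib.request.Request(
        f"{BASE_URL}/tasks/poll/{task_type}?workerid={worker_id}"
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
    if not body:
        return  # nothing queued right now
    task = json.loads(body)
    result = {
        "taskId": task["taskId"],
        "workflowInstanceId": task["workflowInstanceId"],
        "status": "COMPLETED",
        "outputData": execute(task),
    }
    # POST /tasks reports the result back to the server.
    update = urllib.request.Request(
        f"{BASE_URL}/tasks",
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(update).close()

# In a real worker process, call poll_once("my_task", "worker-1") in a loop.
```

The SDKs wrap exactly this cycle with batching, backoff, and threading, so in practice you only implement the `execute` part.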
Conductor itself does not provide a scheduling mechanism, but there is a community project, Schedule Conductor Workflows, which provides workflow scheduling as a pluggable module and as a workflow server. Alternatively, use any existing scheduling system to make REST calls to Conductor to start a workflow, or publish a message to a supported eventing system such as SQS to trigger one. See the eventing documentation for more details.
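Triggering a workflow from any external scheduler (cron, a Kubernetes CronJob, etc.) comes down to one POST request. A minimal sketch, assuming a local server and a hypothetical workflow name:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080/api"  # assumed Conductor server address

def build_start_request(workflow_name: str, workflow_input: dict) -> urllib.request.Request:
    # POST /workflow/{name} starts an execution; the JSON body becomes the
    # workflow input, and the response body is the new workflow's ID.
    return urllib.request.Request(
        f"{BASE_URL}/workflow/{workflow_name}",
        data=json.dumps(workflow_input).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical workflow name and input; urllib.request.urlopen(req) would
# start the workflow and return its ID.
req = build_start_request("nightly_report", {"date": "2024-01-01"})
assert req.get_method() == "POST"
```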
Yes. Workers can be written in any language, as long as they can poll for and update task results via HTTP endpoints. Conductor provides official and community SDKs for many languages, including Java, Python, JavaScript, Go, and C#.
Make sure that the worker is actively polling for this task. Navigate to the Task Queues tab on the Conductor UI and select your task name in the search box. Ensure that Last Poll Time for this task is current.
In Conductor 3.x, conductor.redis.availabilityZone defaults to us-east-1c. Ensure that this matches where your workers are, and that it also matches conductor.redis.hosts.
When a workflow fails, you can configure a "failure workflow" to run using the failureWorkflow parameter. By default, three parameters (the failed workflow's ID, the failure reason, and the failure status) are passed as input.
You can also use the Workflow Status Listener:
Refer to this documentation to extend Conductor to send out events/notifications upon workflow completion/failure.
To shut down a worker gracefully when the process is terminated, call the shutdown() method on the TaskRunnerConfigurer instance you have created from a PreDestroy block (or an equivalent shutdown hook) in your application.
Set the status to FAILED_WITH_TERMINAL_ERROR in the TaskResult object within your worker. This marks the task as FAILED and fails the workflow without retrying the task, acting as a fail-fast mechanism.
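A sketch of the fail-fast update payload. The IDs are placeholders; the field names follow the Conductor TaskResult structure:

```python
# FAILED_WITH_TERMINAL_ERROR skips any remaining retries and fails the
# workflow immediately. IDs here are placeholders.
def terminal_failure(task_id: str, workflow_id: str, reason: str) -> dict:
    return {
        "taskId": task_id,
        "workflowInstanceId": workflow_id,
        "status": "FAILED_WITH_TERMINAL_ERROR",  # no retries will be attempted
        "reasonForIncompletion": reason,
    }

payload = terminal_failure("task-123", "wf-456", "invalid input: cannot recover")
```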