
# HITL Overview

`showcase/shell-docs/src/content/docs/human-in-the-loop/index.mdx`


## What is this?

Human-in-the-loop (HITL) lets an agent pause mid-run to collect input, confirmation, or a choice from the user, then resume with that answer folded back into its reasoning. It's what turns an autonomous workflow into a collaborative one: the agent keeps its context, the user keeps the steering wheel.

<video src="https://cdn.copilotkit.ai/docs/copilotkit/images/coagents/human-in-the-loop-example.mp4" className="rounded-lg shadow-xl" loop playsInline controls autoPlay muted />

## When should I use this?

Use HITL when you need:

  • Quality control — a human gate at high-stakes decision points
  • Edge cases — graceful fallbacks when the agent's confidence is low
  • Expert input — lean on the user for domain knowledge the model lacks
  • Reliability — a more robust loop for real-world, production traffic

## Two patterns for HITL in CopilotKit

CopilotKit ships two complementary ways to pause an agent turn and ask the human something. They look similar from the outside (the chat pauses, a custom component appears, the user answers, the run resumes) but they're wired differently on the backend, and each has its own niche.

| Pattern | Who decides to pause? | Backend surface |
| --- | --- | --- |
| `useHumanInTheLoop` | The LLM, by calling a registered client-side tool | A frontend-only tool description (Zod schema + render) |
| `useInterrupt` | The graph, by calling `interrupt(...)` during a node | A server-side `interrupt()` call in your LangGraph agent |

Pick `useHumanInTheLoop` when the pause is an agent-initiated decision — the model chose to ask the user — and you want the picker UI inlined into the normal tool-call flow.

Pick `useInterrupt` when the pause is a graph-enforced checkpoint — the code path deterministically requires a human answer — and you want `langgraph.interrupt()` as the server-side contract.

## Pattern 1 — `useHumanInTheLoop` (tool-based)

The agent registers a HITL tool on the client with `useHumanInTheLoop`. When the LLM calls that tool, CopilotKit routes the call through your render function, which shows a custom component and calls `respond` with the user's answer. The agent sees the answer as the tool result and continues from there.

<Snippet region="hitl-hook" title="frontend/src/app/page.tsx — useHumanInTheLoop" />
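Under the hood the pause boils down to a promise handshake: the tool call doesn't resolve until the picker's `respond` is invoked. Here is a plain-TypeScript sketch of that idea — an illustration only, not CopilotKit's actual implementation; `makeHitlTool` and `demo` are hypothetical names:

```typescript
// Sketch: a HITL tool call is a promise that only the UI can resolve.
function makeHitlTool() {
  let respond!: (answer: string) => void;
  // The tool "result" stays pending until respond() is called.
  const result = new Promise<string>((resolve) => {
    respond = resolve;
  });
  // In the real hook, render() would receive { respond } and wire it
  // to the picker component's click handler.
  return { result, respond };
}

async function demo(): Promise<string> {
  const tool = makeHitlTool();
  // Simulate the user clicking a slot in the rendered picker:
  tool.respond("Mon 10:00");
  // The agent resumes with the answer as the tool result.
  return tool.result;
}
```

Clicking a slot in the real UI plays the role of the `tool.respond(...)` call here; until that happens, the agent turn simply waits.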

The picker UI is fed a static list of candidate slots — this is just data the demo page owns, so you can swap in real availability, a calendar API, or anything else:

<Snippet region="time-slots" title="frontend/src/app/page.tsx — candidate slots" /> <InlineDemo demo="hitl-in-chat" />
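The demo's exact data lives in the snippet above; as a hedged illustration of the kind of list the picker consumes (the `TimeSlot` type, field names, and values below are made up, not the demo's actual code):

```typescript
// Hypothetical shape for candidate slots — swap in real availability
// from a calendar API in a production app.
type TimeSlot = { id: string; label: string; iso: string };

const timeSlots: TimeSlot[] = [
  { id: "slot-1", label: "Mon 10:00", iso: "2025-01-06T10:00:00Z" },
  { id: "slot-2", label: "Tue 14:30", iso: "2025-01-07T14:30:00Z" },
  { id: "slot-3", label: "Wed 09:15", iso: "2025-01-08T09:15:00Z" },
];

// When the user picks a slot, respond() should receive a serializable
// payload — the agent sees exactly this string as the tool result.
function toToolResult(slot: TimeSlot): string {
  return JSON.stringify({ selectedSlot: slot.iso });
}
```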

## Pattern 2 — `useInterrupt` (graph-paused)

With LangGraph's `interrupt()` the pause is enforced by the graph itself: a node calls `interrupt({...})`, the run suspends, the client receives the payload, renders a UI, and resumes the run with the user's answer. CopilotKit's `useInterrupt` hook is the render contract.
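Conceptually, the suspend/resume cycle looks like the simulation below. This is a deliberately simplified plain-TypeScript sketch, not the real LangGraph API (which persists graph state in a checkpointer and resumes via its own command mechanism):

```typescript
// Sketch: an interrupt suspends the run and carries a payload out.
class GraphInterrupt extends Error {
  constructor(public payload: unknown) {
    super("interrupt");
  }
}

type NodeFn = (state: { answer?: string }) => string;

const askHuman: NodeFn = (state) => {
  if (state.answer === undefined) {
    // First pass: no answer yet — suspend and surface the payload.
    throw new GraphInterrupt({ question: "Which slot works for you?" });
  }
  // Resumed pass: the user's answer is folded back into the node.
  return `Booked: ${state.answer}`;
};

function run(
  node: NodeFn,
  answer?: string
): { status: "interrupted"; payload: unknown } | { status: "done"; result: string } {
  try {
    return { status: "done", result: node({ answer }) };
  } catch (e) {
    if (e instanceof GraphInterrupt) return { status: "interrupted", payload: e.payload };
    throw e;
  }
}
```

The first `run(askHuman)` surfaces the interrupt payload to the client; calling `run(askHuman, answer)` again stands in for resuming the suspended graph with the user's answer.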

See the `useInterrupt` deep dive for the full walkthrough, including the backend tool and render-prop wiring.

<InlineDemo demo="gen-ui-interrupt" />

## Going headless

Both patterns above ship with a render prop — CopilotKit handles the "when to show the picker" logic for you. If you want to drive interrupt resolution from a custom UI that lives anywhere in the tree (not necessarily inside a chat), see the headless interrupts guide — it shows how to compose `useAgent`, `agent.subscribe`, and `copilotkit.runAgent` to build your own `useInterrupt` equivalent.
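As a rough sketch of what that composition amounts to — the `FakeAgent` class and event shape below are stand-ins, not the actual `useAgent` / `agent.subscribe` API:

```typescript
// Sketch: subscribe to agent events, stash the pending interrupt,
// and resolve it from any UI component in the tree.
type InterruptEvent = { type: "interrupt"; payload: unknown };
type Listener = (e: InterruptEvent) => void;

class FakeAgent {
  private listeners: Listener[] = [];
  resumedWith: unknown = null;
  subscribe(fn: Listener): () => void {
    this.listeners.push(fn);
    return () => {}; // unsubscribe omitted in this sketch
  }
  emitInterrupt(payload: unknown): void {
    this.listeners.forEach((l) => l({ type: "interrupt", payload }));
  }
  resume(answer: unknown): void {
    this.resumedWith = answer;
  }
}

// A headless "useInterrupt equivalent": capture the payload, expose a resolver.
function wireHeadless(agent: FakeAgent) {
  let pending: unknown = null;
  agent.subscribe((e) => {
    if (e.type === "interrupt") pending = e.payload;
  });
  return {
    getPending: () => pending,
    resolve: (answer: unknown) => {
      agent.resume(answer); // in CopilotKit this would re-run the agent
      pending = null;
    },
  };
}
```

The design point is that the interrupt state lives outside any chat component: whichever part of your tree holds the `resolve` handle can answer the agent.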

<IntegrationGrid path="human-in-the-loop" />