docs/features/langflow-assistant.md
Generated on: 2026-01-21 · Updated on: 2026-03-30 · Status: Draft · Owner: Engineering Team
The Langflow Assistant is an AI-powered chat interface that helps users generate custom Langflow components through natural language prompts. It provides real-time streaming feedback during component generation, automatic code validation with retry logic, and seamless integration with the Langflow canvas.
Building custom components in Langflow requires knowledge of the component architecture, Python programming, and understanding of inputs/outputs. The Langflow Assistant removes this barrier by allowing users to describe what they want in natural language, and the AI generates validated, ready-to-use component code that can be added directly to their flow.
Context: Agentic - AI-assisted development capabilities within Langflow
This context owns:
| Context | Relationship | Description |
|---|---|---|
| Flow | Customer-Supplier | Assistant generates components that integrate with flows; Flow context supplies flow IDs and component APIs |
| Model Providers | Conformist | Assistant conforms to configured model providers (OpenAI, Anthropic, etc.) for LLM capabilities |
| Variables | Customer-Supplier | Variables context supplies API keys; Assistant uses them for model authentication |
| Custom Components | Customer-Supplier | Custom Components context supplies validation APIs; Assistant uses them to validate generated code |
| Term | Definition | Code Reference |
|---|---|---|
| Assistant | AI-powered chat interface that generates Langflow components from natural language | AssistantPanel, AssistantService |
| AssistantMessage | A single message in the chat, either from user or assistant | AssistantMessage interface |
| ComponentCode | Python code that defines a Langflow component with inputs, outputs, and processing logic | component_code field, extract_component_code() |
| IntentClassification | LLM-based detection of whether user wants to generate a component, ask a question, or is off-topic | classify_intent(), IntentResult |
| ProgressStep | A discrete stage in the component generation pipeline (generating, validating, etc.) | StepType, AgenticStepType |
| SSE | Server-Sent Events - Protocol for streaming real-time progress updates from server to client | StreamingResponse, postAssistStream() |
| TokenEvent | Real-time streaming of LLM output tokens for Q&A responses | AgenticTokenEvent, format_token_event() |
| Validation | Two-phase process: static AST analysis (validate_component_code()) followed by runtime instantiation (validate_component_runtime()) | validate_component_code(), validate_component_runtime(), ValidationResult |
| ValidationRetry | Automatic re-generation attempt when validation fails, including error context | VALIDATION_RETRY_TEMPLATE, max_retries |
| FloatingPanel | The assistant panel displayed as a floating overlay centered on the canvas | AssistantPanel |
| ModelProvider | External LLM service (OpenAI, Anthropic, etc.) used for generation | provider, PREFERRED_PROVIDERS |
| EnabledProvider | A model provider that has been configured with valid API credentials | get_enabled_providers_for_user() |
| FlowExecutor | Service that runs Langflow flows programmatically for assistant operations | FlowExecutor, execute_flow_file() |
| TranslationFlow | Pre-built flow that translates user input and classifies intent | TranslationFlow.json, TRANSLATION_FLOW |
| LangflowAssistantFlow | Pre-built flow containing the main assistant prompt and component generation logic | LangflowAssistant.json, LANGFLOW_ASSISTANT_FLOW |
| ReasoningUI | Animated typing display showing "thinking" messages during component generation | AssistantLoadingState |
| ApproveAction | User action to add a validated component to the canvas | handleApprove(), addComponent() |
| OffTopic | Intent classification for questions unrelated to Langflow (other tools, general knowledge) | "off_topic", OFF_TOPIC_REFUSAL_MESSAGE |
| RuntimeValidation | Second-phase validation that instantiates the component class to catch import/runtime errors | validate_component_runtime(), build_custom_component_template() |
| AgenticSessionPrefix | agentic_ prefix on session IDs to isolate Assistant sessions from Playground | AGENTIC_SESSION_PREFIX |
The core aggregate managing a user's interaction session with the assistant.
- AssistantSession (implicit, managed via session_id)
  - AssistantMessage - Individual messages in the conversation
  - AgenticProgressState - Current step in generation pipeline
  - AssistantModel - Selected provider/model combination
  - AgenticResult - Final generation result with validation status
  - ValidationResult - Outcome of code validation
  - IntentResult - Translation and intent classification

Invariants:
- A session is bound to a flow_id to generate components
- Only one message can be in streaming status at a time
- A session_id is generated once per session on the frontend and reused across all requests in the same session
- A new session_id is only generated when the user explicitly clicks "New session"
- The session_id must be passed with every request to maintain conversation memory
- The TranslationFlow does not receive a session_id (it is stateless)

Represents a single component generation attempt with validation.
- Attempts are tracked per request (validation_attempts)
- ComponentCode - Extracted Python code
- ValidationResult - Compilation and instantiation result
- Attempts never exceed max_retries

Configuration for available LLM providers.
- EnabledProvider - Provider with valid API key
- ProviderModel - Available model for a provider

The frontend implements automatic model selection to ensure a valid model is always sent to the backend (the chosen model is persisted in localStorage under langflow-assistant-selected-model).

| Event | Trigger | Payload | Consumers |
|---|---|---|---|
| ProgressUpdate | Each pipeline stage transition | {step, attempt, max_attempts, message?, error?} | Frontend UI (SSE) |
| TokenGenerated | Each LLM output token (Q&A only) | {chunk: string} | Frontend UI (SSE) |
| GenerationComplete | Pipeline finished successfully | {result, validated, class_name?, component_code?} | Frontend UI (SSE) |
| GenerationError | Unrecoverable error occurred | {message: string} | Frontend UI (SSE) |
| GenerationCancelled | User cancelled or disconnected | {message?: string} | Frontend UI (SSE) |
| ValidationSucceeded | Code compiled and instantiated | {class_name, code} | Assistant Service |
| ValidationFailed | Code failed to compile/instantiate | {error, code, class_name?} | Assistant Service (triggers retry) |
| ComponentApproved | User clicked "Add to Canvas" | {component_code, class_name} | Canvas (adds node) |
As a Langflow user, I want to generate custom components using natural language, so that I can build flows without writing Python code manually.
- result.validated (e.g., format mismatch)
- completedSteps containing component generation steps
- session_id should be generated

Status: Accepted
The assistant needs to provide real-time feedback during component generation, which can take 10-60 seconds. Users need to see progress updates and token streaming to understand the system is working.
Use Server-Sent Events (SSE) for streaming progress updates and tokens from backend to frontend, instead of WebSockets or polling.
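As a sketch of the approach (names and payloads here are illustrative, not the actual backend code), an SSE stream reduces to an async generator yielding `event:`/`data:` frames, which FastAPI can serve via StreamingResponse with media type text/event-stream:

```python
import asyncio
import json


def format_sse(event: str, data: dict) -> str:
    """Format one SSE frame: an event name line, a JSON data line, a blank line."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"


async def progress_stream():
    """Yield SSE frames for a toy generation pipeline. In FastAPI this generator
    would be wrapped in StreamingResponse(..., media_type="text/event-stream")."""
    yield format_sse("progress", {"step": "generating", "attempt": 0, "max_attempts": 3})
    await asyncio.sleep(0)  # stand-in for real LLM / validation work
    yield format_sse("complete", {"data": {"result": "done", "validated": True}})


async def collect():
    return [frame async for frame in progress_stream()]


frames = asyncio.run(collect())
print(frames[0], end="")
```

Because each frame is plain text on a normal HTTP response, the frontend can consume it with fetch and a ReadableStream rather than a WebSocket client.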
Benefits:
- Consumable on the client with plain fetch with ReadableStream

Trade-offs:
Impact on Product:
Status: Accepted
The assistant needs to distinguish between component generation requests and general questions. Additionally, users may write prompts in any language.
Use a dedicated LLM-based TranslationFlow to classify intent and translate input to English before processing.
Benefits:
Trade-offs:
Impact on Product:
Status: Accepted
LLMs sometimes generate code with syntax errors or missing imports. Manual retry is frustrating for users.
Automatically validate generated code by instantiating the component class. On failure, retry with error context included in the prompt.
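A minimal sketch of that retry loop (the generate/validate callables and the template text are stand-ins, not the real implementation):

```python
# Hypothetical retry-with-error-context loop; generate() and validate() are stand-ins.
VALIDATION_RETRY_TEMPLATE = (
    "The previous code failed validation with:\n{error}\n"
    "Fix the error and regenerate the full component."
)


def generate_with_retry(prompt, generate, validate, max_retries=3):
    """Try up to max_retries generations, feeding each failure back into the prompt."""
    error = None
    for attempt in range(1, max_retries + 1):
        full_prompt = prompt if error is None else (
            prompt + "\n\n" + VALIDATION_RETRY_TEMPLATE.format(error=error)
        )
        code = generate(full_prompt)
        ok, error = validate(code)
        if ok:
            return {"validated": True, "component_code": code, "validation_attempts": attempt}
    return {"validated": False, "component_code": code,
            "validation_attempts": max_retries, "error": error}


# Toy demo: the first attempt "fails" validation, the second succeeds.
attempts = iter(["bad code", "class Demo: pass"])
result = generate_with_retry(
    "make a component",
    generate=lambda p: next(attempts),
    validate=lambda c: (c.startswith("class"),
                        None if c.startswith("class") else "SyntaxError: invalid syntax"),
)
print(result)
```

The key point is that the error text travels back into the next prompt, so the model sees exactly what to fix.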
Benefits:
Trade-offs:
Impact on Product:
Status: Accepted (supersedes previous floating+sidebar decision)
The initial design supported both floating and sidebar view modes. However, the floating panel with dynamic open/close and size expansion worked well as a standalone solution — it stays out of the way, doesn't conflict with other areas of Langflow (sidebar, playground, canvas), and the open/close/resize behavior feels natural. The sidebar mode added complexity (spacer divs, negative margins, conditional styling) for a view that wasn't needed.
Remove the sidebar view mode entirely. The assistant always uses the floating panel. Removed: view mode toggle, AssistantViewMode type, useAssistantViewMode hook, sidebar spacer div, and all sidebar-conditional CSS from FlowPage.
Benefits:
Trade-offs:
Impact on Product:
Status: Accepted
The assistant had no conversation memory — every message was treated as a new session because the frontend never sent a session_id. The backend generated a new UUID per request (request.session_id or str(uuid.uuid4())), so the Agent's memory component never found previous messages.
The frontend generates a session_id once (via useRef) when the useAssistantChat hook initializes, and includes it in every postAssistStream request. A new session_id is only generated when the user clicks "New session" (handleClearHistory).
Benefits:
Trade-offs:
Key Files:
- src/frontend/.../hooks/use-assistant-chat.ts — sessionIdRef stores the ID, passed in every request
- src/backend/.../agentic/api/router.py — falls back to uuid.uuid4() only if no session_id is sent

Status: Accepted
The TranslationFlow (intent classification) and LangflowAssistant flow shared the same session_id. This caused cross-flow contamination: the TranslationFlow's JSON intent responses were stored alongside the assistant's messages. On subsequent requests, the TranslationFlow's LLM saw messages from both flows in its history, causing intent classification to fail and default to "question".
- Pass session_id=None when calling classify_intent — the TranslationFlow is stateless and does not need conversation memory.
- Set should_store_message=False on both ChatInput and ChatOutput in the TranslationFlow — it should never persist messages.

Benefits:
Trade-offs:
Key Files:
- src/backend/.../agentic/services/assistant_service.py — session_id=None in classify_intent call
- src/backend/.../agentic/flows/translation_flow.py — should_store_message=False

Status: Accepted
The original ADR-007 introduced intent-independent code extraction: all responses were scanned for component code regardless of intent. This caused a critical bug: when users asked questions like "how do I create a component?", the LLM's example code in the answer was extracted, validated, and displayed as a component card instead of the text answer.
Additionally, the TranslationFlow only classified two intents (generate_component and question), allowing questions about unrelated tools (n8n, Docker, etc.) to pass through as "question" and receive full LLM responses.
Three changes:
Q&A path isolation — When intent is "question", the backend returns the response immediately as plain text without code extraction/validation. Code extraction only runs for "generate_component" intent.
Off-topic intent — Added "off_topic" as a third intent classification. Questions about other tools, platforms, or unrelated topics are blocked before calling the main LLM, saving API cost and enforcing scope.
Frontend fallback scoping — The frontend only shows a component card for Q&A responses if message.completedSteps contains component generation steps (indicating the backend intended to generate a component). This prevents example code in explanatory answers from being misinterpreted.
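Taken together, the three changes amount to a dispatch like the following sketch (the handler callables are stubs; only the branching mirrors this ADR):

```python
OFF_TOPIC_REFUSAL_MESSAGE = "I can only help with Langflow-related questions and components."


def handle_request(intent: str, run_flow, extract_and_validate):
    """Route by classified intent: refuse off-topic prompts, answer questions as
    plain text, and run code extraction/validation only for component requests."""
    if intent == "off_topic":
        # Blocked before the main LLM call: no API cost for out-of-scope prompts.
        return {"result": OFF_TOPIC_REFUSAL_MESSAGE, "validated": False}
    response = run_flow()
    if intent == "question":
        # Q&A path isolation: return plain text, never extract example code.
        return {"result": response, "validated": False}
    return extract_and_validate(response)


answer = handle_request(
    "question",
    run_flow=lambda: "Use the Component base class...",
    extract_and_validate=None,
)
print(answer)
```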
Benefits:
Trade-offs:
Key Files:
- src/backend/.../agentic/services/assistant_service.py — if not is_component_request: yield complete; return
- src/backend/.../agentic/flows/translation_flow.py — three intents: generate_component, question, off_topic
- src/frontend/.../components/assistant-message.tsx — fallback scoped by completedSteps

Status: Accepted
The zoom percentage in the canvas controls bar (e.g., "65%", "150%", "200%") caused the entire controls bar to shift width when the zoom changed between values with different character counts. This created a visually distracting layout jump.
Apply a fixed width (w-11, 44px) with text-center to the zoom percentage display. Reduce the button's outer padding (px-0.5) to remove dead space between the redo icon and the percentage, and add gap-0.5 between the percentage text and the chevron icon.
- src/frontend/.../canvasControlsComponent/CanvasControlsDropdown.tsx — fixed-width zoom display

Status: Accepted
Opening the assistant panel felt sluggish when there were previous chat messages. The root cause was transition-all duration-300 on the panel container, which forced the browser to transition every CSS property (including height, width, border, shadow) across the entire message DOM on every open/close.
- Replace transition-all with transition-[opacity,transform] — only animate the two properties needed for the fade+slide effect.
- Reduce duration-300 to duration-200 for a snappier feel.
- Add will-change-[opacity,transform] to hint the browser to GPU-accelerate these properties, avoiding expensive repaints on the message list.

Benefits:
Trade-offs:
Key Files:
- src/frontend/.../assistantPanel/assistant-panel.tsx — containerClasses transition properties

Status: Accepted
The original validation (validate_component_code) only performed static AST analysis — syntax, class name extraction, overlapping I/O names, return statements. Code with valid syntax but wrong imports (e.g., from lfx.base import Component instead of from lfx.custom import Component) passed validation, was marked as validated: true, and showed "Add to Canvas". Clicking it failed silently because the /api/v1/custom_component endpoint performed real instantiation.
Add a second validation phase (validate_component_runtime) that attempts to instantiate the component using Component(_code=code) + build_custom_component_template(). If runtime validation fails, the error is fed back into the retry loop.
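Stripped of the lfx specifics (Component(_code=code) and build_custom_component_template() are not reproduced here), runtime validation amounts to executing the module and instantiating the discovered class, as in this generic sketch:

```python
import ast


def validate_component_runtime(code: str):
    """Second-phase check: exec the module and instantiate the first class found.
    Catches bad imports and constructor failures that static AST checks miss."""
    try:
        tree = ast.parse(code)
        class_names = [n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]
        if not class_names:
            return False, "Could not extract class name from code"
        namespace: dict = {}
        exec(compile(code, "<component>", "exec"), namespace)  # surfaces import errors
        namespace[class_names[0]]()  # surfaces constructor/runtime errors
        return True, None
    except Exception as exc:  # error text is fed back into the retry loop
        return False, f"{type(exc).__name__}: {exc}"


ok, err = validate_component_runtime(
    "from nonexistent_module import x\nclass Broken:\n    pass\n"
)
print(ok, err)
```

This is exactly the class of failure the original AST-only validation let through: the code above parses cleanly but cannot be imported.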
Benefits:
- A component marked validated: true can always be added to the canvas

Trade-offs:
Key Files:
- src/backend/.../agentic/helpers/validation.py — validate_component_runtime()
- src/backend/.../agentic/services/assistant_service.py — calls runtime validation after AST passes

Status: Accepted
Assistant sessions appeared in the Playground's session list because both used the same MessageTable with the same flow_id. The Assistant's ChatOutput component stored messages with should_store_message=True, and the Playground queried SELECT DISTINCT session_id FROM message WHERE flow_id = ?.
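A minimal sqlite sketch of the prefix-based exclusion this ADR introduces (table shape simplified to the relevant columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE message (session_id TEXT, flow_id TEXT, text TEXT)")
conn.executemany(
    "INSERT INTO message VALUES (?, ?, ?)",
    [
        ("playground-session-1", "flow-1", "hi"),
        ("agentic_9b2f", "flow-1", "generate a component"),  # prefixed assistant session
    ],
)

# Playground session list: exclude assistant sessions by prefix.
# (Note: in SQL LIKE, `_` is a single-character wildcard, so 'agentic_%'
# also matches e.g. 'agenticX...'; that is harmless here.)
sessions = [
    row[0]
    for row in conn.execute(
        "SELECT DISTINCT session_id FROM message "
        "WHERE flow_id = ? AND session_id NOT LIKE 'agentic_%'",
        ("flow-1",),
    )
]
print(sessions)
```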
- Prefix assistant session IDs with agentic_ on the frontend.
- Exclude agentic_-prefixed sessions in the Playground's session query.

Benefits:
Key Files:
- src/frontend/.../hooks/use-assistant-chat.ts — AGENTIC_SESSION_PREFIX
- src/backend/.../api/v1/monitor.py — WHERE session_id NOT LIKE 'agentic_%'

Status: Accepted
The "A" key shortcut to open the assistant was hardcoded in FlowPage/index.tsx, making it impossible for users to remap or disable via Settings > Shortcuts.
Register the shortcut in the existing customDefaultShortcuts system with name "AI Assistant" and default key "a". The FlowPage reads the shortcut from useShortcutsStore instead of using a hardcoded string.
Key Files:
- src/frontend/.../customization/constants.ts — "AI Assistant" entry
- src/frontend/.../stores/shortcuts.ts — aiAssistant: "a"
- src/frontend/.../pages/FlowPage/index.tsx — reads from store

Status: Accepted
Models like IBM granite return non-JSON responses from the TranslationFlow, causing all requests to default to "question" intent. This prevented component generation from ever triggering with these models.
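The progressive fallbacks this ADR adds can be sketched as follows; the regex names mirror _MARKDOWN_JSON_RE and _EMBEDDED_JSON_RE from the key files, but the exact patterns here are assumptions:

```python
import json
import re

# Assumed patterns modeled on the names in intent_classification.py.
_MARKDOWN_JSON_RE = re.compile(r"```(?:json)?\s*(\{.*?\})\s*```", re.DOTALL)
_EMBEDDED_JSON_RE = re.compile(r"\{.*\}", re.DOTALL)


def parse_intent(raw: str) -> dict:
    """Parse an LLM intent response with progressive fallbacks for non-JSON output."""
    try:
        return json.loads(raw)  # 1. plain JSON
    except json.JSONDecodeError:
        pass
    if m := _MARKDOWN_JSON_RE.search(raw):  # 2. JSON inside a markdown fence
        try:
            return json.loads(m.group(1))
        except json.JSONDecodeError:
            pass
    if m := _EMBEDDED_JSON_RE.search(raw):  # 3. JSON object embedded in prose
        try:
            return json.loads(m.group(0))
        except json.JSONDecodeError:
            pass
    for intent in ("generate_component", "off_topic"):  # 4. keyword fallback
        if intent in raw:
            return {"intent": intent}
    return {"intent": "question"}


print(parse_intent('Sure! ```json\n{"intent": "generate_component"}\n```'))
```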
Add three progressive fallbacks when json.loads() fails:
1. Strip a markdown code fence (```json ... ```) and re-parse.
2. Extract an embedded JSON object from the surrounding text and re-parse.
3. Fall back to keyword matching on the raw text ("generate_component", "off_topic").

Key Files:
- src/backend/.../services/helpers/intent_classification.py — _MARKDOWN_JSON_RE, _EMBEDDED_JSON_RE, keyword fallback

Status: Accepted
IBM WatsonX and Ollama require additional parameters beyond API key and model name (WatsonX: URL + project ID; Ollama: base URL). The inject_model_into_flow function only injected the model value into Agent nodes, leaving provider-specific fields empty. This caused authentication failures for WatsonX in the Assistant.
Thread provider_vars (resolved from database) through flow_executor → flow_loader → inject_model_into_flow. The injection function now sets api_key, base_url_ibm_watsonx, project_id (WatsonX) and base_url_ollama (Ollama) on Agent node templates.
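A simplified sketch of the injection step (the flow-JSON shape and the PROVIDER_FIELDS mapping are illustrative; real flows carry far more structure):

```python
# Illustrative mapping of provider-specific template fields; field names follow this ADR.
PROVIDER_FIELDS = {
    "ibm_watsonx": ["api_key", "base_url_ibm_watsonx", "project_id"],
    "ollama": ["api_key", "base_url_ollama"],
}


def inject_model_into_flow(flow: dict, provider: str, model_name: str,
                           provider_vars: dict) -> dict:
    """Set the model plus provider-specific fields on every Agent node template."""
    for node in flow["data"]["nodes"]:
        if node["data"].get("type") != "Agent":
            continue
        template = node["data"]["node"]["template"]
        template["model_name"]["value"] = model_name
        for field in PROVIDER_FIELDS.get(provider, ["api_key"]):
            if field in provider_vars:
                template.setdefault(field, {})["value"] = provider_vars[field]
    return flow


flow = {"data": {"nodes": [
    {"data": {"type": "Agent", "node": {"template": {"model_name": {"value": ""}}}}},
]}}
provider_vars = {
    "api_key": "wx-key",
    "base_url_ibm_watsonx": "https://us-south.ml.cloud.ibm.com",
    "project_id": "proj-1",
}
out = inject_model_into_flow(flow, "ibm_watsonx", "granite-3-8b", provider_vars)
print(out["data"]["nodes"][0]["data"]["node"]["template"]["project_id"]["value"])
```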
Key Files:
- src/backend/.../services/flow_preparation.py — provider_fields injection
- src/backend/.../services/flow_executor.py — passes global_variables as provider_vars

Status: Accepted
Users expect assistant session history to persist. A decision was needed on whether to store sessions in the database (like the Playground) or in browser localStorage.
Session history is stored in browser localStorage (key: langflow-assistant-sessions), limited to 10 sessions. Sessions are serialized/deserialized with progress state stripped and in-flight messages marked as "cancelled".
Benefits:
Trade-offs:
Important: This is a known limitation. If users report lost sessions, the answer is that assistant sessions are browser-local only. Database persistence can be added as a future enhancement if needed.
Key Files:
- src/frontend/.../hooks/use-session-history.ts — saveCurrentSession(), switchSession(), deleteSession()
- src/frontend/.../helpers/session-storage.ts — serialization/deserialization
- src/frontend/.../assistant-panel.constants.ts — ASSISTANT_SESSIONS_STORAGE_KEY, ASSISTANT_MAX_SESSIONS

| Type | Name | Purpose |
|---|---|---|
| Service | FlowExecutor | Executes pre-built assistant flows (.py or .json, with .py taking priority) |
| Service | ProviderService | Detects configured model providers and retrieves API keys |
| Service | VariableService | Retrieves user's stored API keys from encrypted storage |
| Service | ValidationService | Compiles and instantiates component code for validation |
| External API | LLM Provider APIs | OpenAI, Anthropic, Azure, Google, IBM WatsonX, Ollama, Groq - for text generation |
| Library | lfx.run | Flow execution engine |
| Library | lfx.custom.validate | Component class creation and validation |
| Frontend | use-stick-to-bottom | Auto-scroll behavior in chat |
| Frontend | @xyflow/react | Canvas integration for component placement |
Purpose: Generate component or answer question with streaming progress updates
Request:
```json
{
  "flow_id": "string - Required. UUID of the current flow",
  "input_value": "string - The user's message/prompt",
  "provider": "string - Optional. Model provider (openai, anthropic, etc.)",
  "model_name": "string - Optional. Specific model name (gpt-4o, claude-3-opus, etc.)",
  "max_retries": "integer - Optional. Total validation attempts (default: 3)",
  "session_id": "string - Required for conversation memory. Prefixed with 'agentic_' to isolate from Playground. Generated once per session by the frontend, reused across all requests. New ID on 'New session' only. Backend falls back to uuid4() if omitted."
}
```
Response (SSE Stream):
Event: progress
```json
{
  "event": "progress",
  "step": "generating_component | generating | extracting_code | validating | validated | validation_failed | retrying",
  "attempt": 0,
  "max_attempts": 3,
  "message": "string - Human-readable status message",
  "error": "string - Optional. Error message for validation_failed",
  "class_name": "string - Optional. Component class name",
  "component_code": "string - Optional. Generated code for validation_failed"
}
```
Event: token (Q&A only)
```json
{
  "event": "token",
  "chunk": "string - Token text"
}
```
Event: complete
```json
{
  "event": "complete",
  "data": {
    "result": "string - Full response text",
    "validated": true,
    "class_name": "UppercaseComponent",
    "component_code": "class UppercaseComponent(Component):...",
    "validation_attempts": 1
  }
}
```
Event: error
```json
{
  "event": "error",
  "message": "string - Friendly error message"
}
```
Event: cancelled
```json
{
  "event": "cancelled",
  "message": "string - Optional cancellation reason"
}
```
Purpose: Check if assistant is properly configured and return available providers
Request: None (uses authenticated user context)
Response (Success):
```json
{
  "configured": true,
  "configured_providers": ["openai", "anthropic"],
  "providers": [
    {
      "name": "openai",
      "configured": true,
      "default_model": "gpt-4o",
      "models": [
        {"name": "gpt-4o", "display_name": "GPT-4o"},
        {"name": "gpt-4-turbo", "display_name": "GPT-4 Turbo"}
      ]
    }
  ],
  "default_provider": "openai",
  "default_model": "gpt-4o"
}
```
Purpose: Non-streaming version of assist (prefer streaming for better UX)
Request: Same as /assist/stream
Response (Success):
```json
{
  "result": "string - Full response",
  "validated": true,
  "class_name": "MyComponent",
  "component_code": "string - Python code",
  "validation_attempts": 1
}
```
| Error Code | Condition | User Message | Recovery Action |
|---|---|---|---|
| 400 | No provider configured | "No model provider is configured. Please configure at least one model provider in Settings." | Navigate to Settings > Model Providers |
| 400 | Provider not available | "Provider 'X' is not configured. Available providers: [list]" | Select a different provider or configure the requested one |
| 400 | Missing API key | "OPENAI_API_KEY is required for the Langflow Assistant with openai. Please configure it in Settings > Model Providers." | Add API key in Settings |
| 400 | Unknown provider | "Unknown provider: X" | Use a supported provider |
| 404 | Flow file not found | "Flow file 'X.json' not found" | Ensure agentic flows are deployed |
| 500 | Flow execution error | Friendly error extracted from the actual error (e.g., "Rate limit exceeded. Please wait a moment and try again.") | Retry request; check server logs |
| ValidationError | Code syntax error | Includes SyntaxError: ... | System auto-retries with error context |
| ValidationError | Import error | Includes ModuleNotFoundError: ... | System auto-retries with error context |
| ValidationError | Missing Component base | "Could not extract class name from code" | System auto-retries with hint |
| NetworkError | Client disconnected | "Request cancelled" | User can retry |
| Metric | Type | Description | Alert Threshold |
|---|---|---|---|
| assistant_requests_total | Counter | Total number of assistant requests | N/A (baseline) |
| assistant_requests_by_intent | Counter | Requests segmented by intent (generate_component, question, off_topic) | N/A |
| assistant_generation_duration_seconds | Histogram | Time from request to completion | P95 > 60s |
| assistant_validation_attempts | Histogram | Number of validation attempts per request | P95 > 2 |
| assistant_validation_success_rate | Gauge | Percentage of validations succeeding on first attempt | < 70% |
| assistant_provider_usage | Counter | Requests by provider (openai, anthropic, etc.) | N/A |
| assistant_errors_total | Counter | Total errors by type | > 10/min |
| assistant_cancellations_total | Counter | User-initiated cancellations | > 20% of requests |
| Log Level | Event | Fields | When |
|---|---|---|---|
| INFO | assistant.request.started | user_id, flow_id, provider, model_name, intent | Request received |
| INFO | assistant.generation.attempt | attempt, max_retries | Each generation attempt |
| INFO | assistant.validation.success | class_name, attempts | Component validated successfully |
| WARNING | assistant.validation.failed | error, attempt, class_name | Validation failed, will retry |
| ERROR | assistant.validation.exhausted | error, attempts, code_snippet | Max retries reached |
| INFO | assistant.request.completed | duration_ms, validated, attempts | Request finished |
| INFO | assistant.request.cancelled | reason, duration_ms | User cancelled |
| ERROR | assistant.flow.error | error_type, error_message, flow_name | Flow execution failed |
Assistant Usage Dashboard:
Assistant Health Dashboard:
No dedicated feature flags are currently implemented. The assistant is always enabled when the agentic backend is available. Feature flags may be added in the future for granular control.
- Provider credentials are read from the variables table (API keys)
- Messages are stored (keyed by session_id) for Agent conversation memory within a session
- Assistant session IDs are prefixed with agentic_, and generated per hook instance (reset on "New session")
- Session history lives in localStorage only — not in the database. Clearing browser data deletes all assistant session history. This is by design (see ADR-015)
- The Playground excludes agentic_-prefixed sessions to avoid cross-contamination (see ADR-011)
- There is currently no way to turn the feature off by setting an assistant_enabled feature flag to off (no such flag exists; see the feature-flag note above)

```mermaid
C4Context
    title System Context diagram for Langflow Assistant
    Person(user, "Langflow User", "Builds AI workflows using visual canvas")
    System(assistant, "Langflow Assistant", "AI-powered component generation through natural language")
    System_Ext(llm_providers, "LLM Providers", "OpenAI, Anthropic, Azure, Google - text generation")
    System_Ext(langflow_core, "Langflow Core", "Flow execution, component validation, canvas")
    Rel(user, assistant, "Sends prompts, receives components")
    Rel(assistant, llm_providers, "Generates text via API")
    Rel(assistant, langflow_core, "Validates code, adds to canvas")
```
```mermaid
C4Container
    title Container diagram for Langflow Assistant
    Person(user, "User", "Langflow user")
    Container_Boundary(frontend, "Frontend") {
        Container(assistant_panel, "AssistantPanel", "React", "Chat UI with progress indicators")
        Container(assistant_hooks, "Assistant Hooks", "React Hooks", "State management and API calls")
        Container(sse_client, "SSE Client", "TypeScript", "Parses streaming events")
    }
    Container_Boundary(backend, "Backend") {
        Container(agentic_api, "Agentic API", "FastAPI", "HTTP endpoints for assistant")
        Container(assistant_service, "AssistantService", "Python", "Orchestrates generation with retry")
        Container(flow_executor, "FlowExecutor", "Python", "Runs assistant flows")
        Container(validation_service, "ValidationService", "Python", "Validates component code")
    }
    Container_Ext(flows, "Assistant Flows", "JSON/Python", "LangflowAssistant.json, translation_flow.py")
    System_Ext(llm, "LLM Provider", "External API")
    Rel(user, assistant_panel, "Enters prompts")
    Rel(assistant_panel, assistant_hooks, "Uses")
    Rel(assistant_hooks, sse_client, "Processes stream")
    Rel(sse_client, agentic_api, "POST /assist/stream", "SSE")
    Rel(agentic_api, assistant_service, "Delegates")
    Rel(assistant_service, flow_executor, "Executes flows")
    Rel(assistant_service, validation_service, "Validates code")
    Rel(flow_executor, flows, "Loads")
    Rel(flow_executor, llm, "Calls API")
```
```mermaid
flowchart TD
    A[User Input] --> B{"Intent Classification<br/>(TranslationFlow, stateless)"}
    B -->|off_topic| Z["Return Refusal Message<br/>(no LLM call)"]
    B -->|generate_component| C[Execute LangflowAssistant Flow]
    B -->|question| D["Execute LangflowAssistant Flow<br/>with token streaming"]
    D --> F["Complete Response<br/>(plain text / Q&A)"]
    C --> G[Extract Component Code]
    G --> H{Code Found?}
    H -->|No| F
    H -->|Yes| I["Static Validation<br/>(AST parsing)"]
    I --> I2{AST Valid?}
    I2 -->|No| L{Retries Left?}
    I2 -->|Yes| I3["Runtime Validation<br/>(instantiate component)"]
    I3 --> J{Runtime Valid?}
    J -->|Yes| K["Return Validated Component<br/>(component card with Add to Canvas)"]
    J -->|No| L
    L -->|Yes| M[Retry with Error Context]
    M --> C
    L -->|No| N["Return Friendly Error<br/>(collapsible details + Try Again)"]
    K --> O[User Clicks Add to Canvas]
    O --> P[Component API Validation]
    P --> Q[Add to Canvas]
```
```mermaid
stateDiagram-v2
    [*] --> intent_classification: Start
    intent_classification --> refusal_message: off_topic
    intent_classification --> generating: question
    intent_classification --> generating_component: generate_component
    generating --> complete_plain_text
    generating_component --> generation_complete
    generation_complete --> extracting_code
    extracting_code --> validating: AST + Runtime
    validating --> validated: valid
    validating --> validation_failed: invalid
    validated --> complete_validated
    validation_failed --> retrying
    retrying --> validating: retries left
    retrying --> complete_not_validated: max attempts (friendly error)
```