# Agentics
import Icon from "@site/src/components/icon"; import PartialParams from '@site/docs/_partial-hidden-params.mdx';
The Agentics component bundle uses LLMs to transform tabular data. Add or fill columns row-by-row with the aMap component, collapse many rows into one with the aReduce component, or generate synthetic rows with the aGenerate component.
Watch it in action with a sentiment analysis example that transforms customer reviews into structured insights:
<div style={{position: 'relative', paddingBottom: '56.25%', height: 0, overflow: 'hidden', maxWidth: '800px', marginBottom: '1rem'}}>
  <iframe style={{position: 'absolute', top: 0, left: 0, width: '100%', height: '100%', borderRadius: '8px'}} src="https://www.youtube.com/embed/bugrV2pX-9Q" title="Agentics Tutorial Video" frameBorder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerPolicy="strict-origin-when-cross-origin" allowFullScreen></iframe>
</div>

Install the Agentics package in Langflow's virtual environment:
```bash
uv pip install agentics-py
```
Restart Langflow so the Agentics components are available:
```bash
uv run langflow run
```
Agentics components require an LLM. Configure your LLM provider API keys as global variables or environment variables. Supported providers include OpenAI, Anthropic, Google Generative AI, IBM WatsonX, and Ollama.
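For example, you can expose a provider key through an environment variable before starting Langflow. `OPENAI_API_KEY` is OpenAI's standard variable name; other providers use their own names, such as `ANTHROPIC_API_KEY`:

```shell
# Set the provider API key in the shell that starts Langflow.
# Replace the placeholder with your real key.
export OPENAI_API_KEY="sk-..."
uv run langflow run
```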
## aGenerate

aGenerate generates synthetic data from a schema or from an example DataFrame. Use it for test data, augmentation, or documentation examples.
For example, this schema definition creates the following DataFrame output:
Schema definition:
- `customer_name` (str): Full name
- `email` (str): Email address
- `age` (int): Age between 18–80
- `purchase_categories` (str, As List): List of product categories purchased

Output DataFrame (example, 10 rows generated):
| customer_name | email | age | purchase_categories |
|---|---|---|---|
| Sarah Johnson | [email protected] | 34 | Electronics, Books, Home & Garden |
| Michael Chen | [email protected] | 28 | Sports, Clothing, Electronics |
| ... | ... | ... | ... |
aGenerate parameters:

| Name | Type | Description |
|---|---|---|
| Language Model | Dropdown | Select the LLM provider and model. Use guided experience. |
| Input DataFrame | Table | Optional. Example DataFrame to learn from; only first 50 rows used. If not provided, Schema is used. |
| Schema | Table | Define columns to generate when no Input DataFrame is provided. See the component's schema definition. |
| Instructions | String | Optional instructions for generation. |
| Number of Rows to Generate | Integer | How many synthetic rows to create. Default: 10. |
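Conceptually, each generated row is a record that conforms to the declared schema. The following sketch is illustrative only (it hard-codes two rows rather than calling the component or an LLM), showing the structure aGenerate produces for the schema above:

```python
# Illustrative only: the shape of rows aGenerate returns for the example
# schema. The component fills these values with the connected LLM.
rows = [
    {
        "customer_name": "Sarah Johnson",
        "email": "sarah.j@email.com",
        "age": 34,
        "purchase_categories": ["Electronics", "Books", "Home & Garden"],
    },
    {
        "customer_name": "Michael Chen",
        "email": "m.chen@email.com",
        "age": 28,
        "purchase_categories": ["Sports", "Clothing", "Electronics"],
    },
]

# Every row matches the declared types: str, str, int, list of str.
for row in rows:
    assert isinstance(row["age"], int) and 18 <= row["age"] <= 80
    assert isinstance(row["purchase_categories"], list)
```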
## aMap

aMap transforms each row of input data using natural language instructions and a defined output schema (one row in, one row out). Use aMap for enriching data with LLM-generated columns such as sentiment, categories, and entity extraction.
Rows are processed concurrently; default batch size is 10. Token usage scales with number of rows.
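The concurrency model can be sketched with `asyncio`: rows are dispatched in parallel with at most the batch size in flight at once. This is a minimal sketch of the pattern, not the component's implementation; `transform_row` is a hypothetical stand-in for the per-row LLM call:

```python
import asyncio

async def transform_row(row: dict) -> dict:
    # Hypothetical stand-in for one LLM call; aMap would send the row
    # plus the instructions and schema to the connected model.
    await asyncio.sleep(0)
    return {**row, "sentiment": "positive"}

async def amap_like(rows: list[dict], batch_size: int = 10) -> list[dict]:
    # At most `batch_size` rows in flight at once, mirroring the
    # default batch size of 10. gather() preserves input order.
    sem = asyncio.Semaphore(batch_size)

    async def bounded(row: dict) -> dict:
        async with sem:
            return await transform_row(row)

    return await asyncio.gather(*(bounded(r) for r in rows))

results = asyncio.run(amap_like([{"review_id": i} for i in range(25)]))
```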
For example, aMap keeps each input row and fills in sentiment, confidence, and key_topics with the connected LLM.
Input DataFrame:
| review_id | text |
|---|---|
| 1 | Great product, fast shipping! |
| 2 | Terrible quality, broke after one use |
Schema definition:
- `sentiment` (str): "positive", "negative", or "neutral"
- `confidence` (float): Confidence score 0–1
- `key_topics` (str, As List): Main topics mentioned

Output DataFrame:
| review_id | text | sentiment | confidence | key_topics |
|---|---|---|---|---|
| 1 | Great product, fast shipping! | positive | 0.95 | product quality, shipping |
| 2 | Terrible quality, broke after one use | negative | 0.92 | quality, durability |
aMap parameters:

| Name | Type | Description |
|---|---|---|
| Language Model | Dropdown | Select the LLM provider and model. Use guided experience. |
| Input DataFrame | Table | Input DataFrame (list of dicts or DataFrame). Each row is processed independently. |
| Schema | Table | Define the structure and types for generated columns. See the component's schema definition. |
| Instructions | String | Natural language instructions for transforming each row into the output schema. |
| As List | Boolean | If true, generate multiple instances of the schema per row and concatenate. |
| Keep Source Columns | Boolean | If true, append new columns to original data; if false, return only generated columns. Ignored if As List is true. Default: true. |
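The Keep Source Columns behavior amounts to a per-row merge. This sketch (an assumption about the merge semantics, not the component's code) shows the two output shapes:

```python
# Sketch of the "Keep Source Columns" option: either merge the
# generated columns into the source row, or return only the
# generated columns.
def merge_row(source: dict, generated: dict, keep_source_columns: bool = True) -> dict:
    if keep_source_columns:
        return {**source, **generated}
    return dict(generated)

row = {"review_id": 1, "text": "Great product, fast shipping!"}
gen = {"sentiment": "positive", "confidence": 0.95}
```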
## aReduce

aReduce aggregates all rows in the input DataFrame into a single row following the output schema.
Use aReduce for summaries, reports, or consolidated insights.
To aggregate rows into a list, set As List to true in the component.
All rows are sent in one request; token usage can be high for large DataFrames. Consider filtering or sampling first.
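One simple way to bound token usage is to sample before reducing. This helper is a sketch (with pandas, `df.sample(n=n)` does the same thing):

```python
import random

def sample_rows(rows: list[dict], n: int, seed: int = 0) -> list[dict]:
    # Cap the number of rows sent to the LLM in the single aReduce
    # request. Seeded for reproducible sampling.
    if len(rows) <= n:
        return rows
    rng = random.Random(seed)
    return rng.sample(rows, n)
```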
For example, aReduce takes all input rows and produces a single row with LLM-generated aggregates.
It sums revenue into total_revenue, identifies the best-selling product in best_selling_product, and writes a short summary of the sales.
Input DataFrame:
| date | product | revenue | units |
|---|---|---|---|
| 2024-01-01 | Widget A | 1200 | 50 |
| 2024-01-02 | Widget B | 800 | 30 |
| 2024-01-03 | Widget A | 1500 | 60 |
Schema definition:
- `total_revenue` (float): Sum of all revenue
- `best_selling_product` (str): Product with highest units
- `summary` (str): Natural language summary

Output DataFrame:
| total_revenue | best_selling_product | summary |
|---|---|---|
| 3500 | Widget A | Over 3 days, Widget A was the best seller with 110 units, generating $2700 in revenue. |
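Note that the numeric fields in this example are deterministic; only the `summary` column actually needs the LLM. A plain-Python check of the example numbers:

```python
# Verify the example aggregates directly from the input rows.
rows = [
    {"date": "2024-01-01", "product": "Widget A", "revenue": 1200, "units": 50},
    {"date": "2024-01-02", "product": "Widget B", "revenue": 800, "units": 30},
    {"date": "2024-01-03", "product": "Widget A", "revenue": 1500, "units": 60},
]

total_revenue = sum(r["revenue"] for r in rows)

# Tally units per product, then pick the product with the most units.
units_by_product: dict[str, int] = {}
for r in rows:
    units_by_product[r["product"]] = units_by_product.get(r["product"], 0) + r["units"]
best_selling_product = max(units_by_product, key=units_by_product.get)
```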
aReduce parameters:

| Name | Type | Description |
|---|---|---|
| Language Model | Dropdown | Select the LLM provider and model. |
| Input DataFrame | Table | Input DataFrame (list of dicts or DataFrame). Required. |
| Schema | Table | Define the structure and types for the aggregated output. See the component's schema definition. |
| As List | Boolean | If true, output is a list of instances of the schema. |
| Instructions | String | Optional instructions for aggregation. If omitted, the LLM infers from field descriptions. |
## Troubleshoot

- If the components don't load, reinstall with `uv pip install agentics-py==0.3.1` and restart Langflow.
- For API or key errors, set the provider's API key as a global variable or environment variable.
- For DataFrame errors, ensure the input is a list of dicts, or connect the output of a DataFrame component.