The starting component in a workflow.
The Begin component sets an opening greeting or accepts inputs from the user. It is automatically added to the canvas when you create an agent, whether from a template or from scratch, and a workflow can contain only one Begin component.
A Begin component is essential in all cases: every agent includes one, and it cannot be deleted.
Click the component to display its Configuration window. Here, you can set an opening greeting and the input parameters (global variables) for the agent.
Mode defines how the workflow is triggered.
The supported HTTP methods. Available only when Webhook is selected as Mode.
The authentication method to use. Available only when Webhook is selected as Mode. Options include:
The schema defines the data structure of HTTP requests received by the system in Webhook mode. Its configuration includes:
- `application/json`
- `multipart/form-data`
- `application/x-www-form-urlencoded`
- `text/plain`
- `application/octet-stream`

Available only when Webhook is selected as Mode.
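To illustrate the difference between two of the content types above, the sketch below encodes the same input fields as `application/json` and as `application/x-www-form-urlencoded`. The field names are hypothetical examples, not part of RAGFlow's webhook contract:

```python
import json
from urllib.parse import urlencode

# Hypothetical input fields a webhook client might send.
fields = {"query": "What is RAGFlow?", "lang": "en"}

# application/json: serialize the fields as a JSON object.
json_body = json.dumps(fields)

# application/x-www-form-urlencoded: percent-encode key=value pairs.
form_body = urlencode(fields)

print(json_body)  # {"query": "What is RAGFlow?", "lang": "en"}
print(form_body)  # query=What+is+RAGFlow%3F&lang=en
```

Whichever encoding the client uses, the `Content-Type` request header must match one of the types configured in the schema.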
The response mode of the workflow, i.e., how the workflow responds to external HTTP requests. Supported options:
Successful responses return HTTP status codes in the 200–399 range.
An agent in conversational mode begins with an opening greeting: the agent's first message to the user, which can be a welcoming remark or an instruction to guide the user forward.
You can define global variables within the Begin component, which can be either mandatory or optional. Once set, users must provide values for these variables when engaging with the agent. Click + Add variable to add a global variable, each with the following attributes:
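As a rough sketch of the mandatory/optional distinction, the snippet below checks user input against a set of declared variables. The variable names and the helper function are hypothetical illustrations, not part of RAGFlow's API:

```python
# Hypothetical declaration of the Begin component's global variables:
# each name maps to True if the variable is mandatory.
declared = {"customer_name": True, "coupon_code": False}

def missing_mandatory(declared: dict, provided: dict) -> list:
    """Return the names of mandatory variables the user has not supplied."""
    return [name for name, mandatory in declared.items()
            if mandatory and name not in provided]

print(missing_mandatory(declared, {"coupon_code": "SAVE10"}))  # ['customer_name']
print(missing_mandatory(declared, {"customer_name": "Ada"}))   # []
```

In the actual product this validation happens in the agent UI: a session cannot proceed until every mandatory variable has a value.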
:::tip NOTE
To pass in parameters from a client, call the agent's completion endpoint.
:::
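A client-side sketch might look like the following. The endpoint path and the payload field names are assumptions based on typical agent-completion APIs; consult RAGFlow's HTTP API reference for the exact contract:

```python
import json
from urllib import request

# Hypothetical endpoint and payload shape -- verify against RAGFlow's API docs.
url = "http://127.0.0.1/api/v1/agents/AGENT_ID/completions"
payload = {
    "question": "Hello",
    # Values for the Begin component's global variables (names are examples).
    "customer_name": "Ada",
}
req = request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    method="POST",
)
# request.urlopen(req) would send the request; omitted here since it
# requires a running RAGFlow server.
print(req.get_method(), req.get_full_url())
```

The key point is that values for the Begin component's global variables travel in the request body alongside the user's question.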
:::danger IMPORTANT If you set the key type as file, ensure the token count of the uploaded file does not exceed your model provider's maximum token limit; otherwise, the plain text in your file will be truncated and incomplete. :::
:::note
You can tune document parsing and embedding efficiency by setting the environment variables DOC_BULK_SIZE and EMBEDDING_BATCH_SIZE.
:::
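For example, in `docker/.env` (the values below are illustrative, not tuned recommendations):

```shell
# Number of documents processed per parsing batch (illustrative value).
DOC_BULK_SIZE=4
# Number of text chunks embedded per request to the embedding model (illustrative value).
EMBEDDING_BATCH_SIZE=16
```

Larger batches can improve throughput at the cost of memory and per-request latency; restart the containers for changes to take effect.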
No. Files uploaded to an agent as input are not stored in a dataset and hence will not be processed by RAGFlow's built-in OCR, DLR, or TSR models, or chunked using RAGFlow's built-in chunking methods.
There is no specific size limit for a file uploaded to an agent. However, model providers typically have a default or explicit maximum token limit, which can range from 8,192 to 128k tokens. The plain text of the uploaded file is passed in as the key's value; if its token count exceeds this limit, the string will be truncated and incomplete.
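To stay under a provider's limit, you can estimate a file's token count before uploading. The heuristic below (roughly 4 characters per token for English text) is a common rule of thumb, not an exact tokenizer; real counts vary by model:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return int(len(text) / chars_per_token)

doc = "word " * 2000  # 10,000 characters of sample text
tokens = estimate_tokens(doc)
print(tokens)            # 2500
print(tokens <= 8192)    # True: fits within an 8k-token context
```

For an accurate count, use the tokenizer that matches your model provider.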
:::tip NOTE
The variables MAX_CONTENT_LENGTH in /docker/.env and client_max_body_size in /docker/nginx/nginx.conf set the file size limit for each upload to a dataset or RAGFlow's File system. These settings DO NOT apply in this scenario.
:::