docs/admin-guide/guides/manage-concurrency.mdx
Concurrency limits control how many flow runs can execute simultaneously within a project. When a project reaches its limit, new runs are queued and retried automatically with exponential backoff until a slot becomes available.
This is useful for keeping execution predictable under load and for preventing a single burst of triggers from consuming all available capacity.
When the limit field is left empty, the project falls back to the default concurrency limit defined by your platform plan.
When a project hits its concurrency limit, new runs are not rejected. They are queued and retried automatically with exponential backoff until a slot becomes available. This means flows will always eventually execute, but they may experience delays while the project is at capacity.
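The queue-and-retry behavior can be pictured with a small sketch. The function below is illustrative only (the actual scheduler internals are not documented here); it assumes a base delay of 1 second and a 60-second cap, both of which are placeholder values:

```typescript
// Illustrative sketch of exponential backoff: the delay doubles on each
// retry attempt and is capped at a maximum. Base and cap are assumptions.
function backoffDelayMs(attempt: number, baseMs = 1_000, maxMs = 60_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// First few retry delays: 1s, 2s, 4s, 8s, ... capped at 60s.
const delays = [0, 1, 2, 3].map((n) => backoffDelayMs(n));
```

Because the delay is capped, a queued run never waits longer than the cap between consecutive retry attempts, even after many failures to acquire a slot.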
If you are embedding Activepieces, you can manage concurrency limits programmatically through the JWT used to provision users. This lets you group multiple projects into a shared concurrency pool so they draw from the same limit.
See Provision Users for the `concurrencyPoolKey` and `concurrencyPoolLimit` JWT claims.
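As a rough sketch, a provisioning token carrying these claims might be built like this. The claim names `concurrencyPoolKey` and `concurrencyPoolLimit` come from the Provision Users docs; everything else here (the `externalUserId` field, the secret, the HS256 algorithm, and the helper itself) is an assumption for illustration, not the official embedding SDK:

```typescript
import { createHmac } from "node:crypto";

// Minimal HS256 JWT signer for illustration only; in production use your
// embedding SDK or a maintained JWT library instead of hand-rolling this.
function base64url(input: string): string {
  return Buffer.from(input).toString("base64url");
}

function signJwt(payload: Record<string, unknown>, secret: string): string {
  const header = base64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = base64url(JSON.stringify(payload));
  const signature = createHmac("sha256", secret)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${signature}`;
}

const token = signJwt(
  {
    externalUserId: "user-123",       // placeholder identifier
    concurrencyPoolKey: "team-alpha", // projects sharing this key share one pool
    concurrencyPoolLimit: 10,         // max simultaneous runs across the pool
  },
  "signing-secret" // placeholder; use your real embedding signing key
);
```

Every project provisioned with the same `concurrencyPoolKey` then counts against the single shared `concurrencyPoolLimit` rather than each project having its own.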