{{< warning >}} If you have a complex production environment that makes deploying Vector as an agent difficult, consider starting with the aggregator architecture, or combining the two in the unified architecture. {{< /warning >}}
This agent architecture deploys Vector as an agent on each node for local data collection and processing.
Data can be collected directly by Vector, indirectly through another agent, or both simultaneously. Data processing can happen locally on the node or remotely in an aggregator.
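To make this concrete, a minimal agent configuration might look like the sketch below: one local source, optional on-node processing, and a sink that forwards to a remote aggregator. The file paths, aggregator address, and component names here are illustrative assumptions, not values from this document.

```yaml
# Hypothetical Vector agent config: collect locally, process locally,
# forward to a remote aggregator.
sources:
  system_logs:
    type: file
    include:
      - /var/log/**/*.log   # assumed path; adjust per node

transforms:
  parse:
    type: remap
    inputs: [system_logs]
    source: |
      # Parse JSON-formatted log lines; fail loudly on malformed input.
      . = parse_json!(string!(.message))

sinks:
  aggregator:
    type: vector
    inputs: [parse]
    address: "vector-aggregator:6000"  # assumed aggregator endpoint
```

If you process remotely instead, drop the `transforms` section and point the sink's `inputs` directly at the source.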
We recommend this architecture for:
If your use case falls outside these recommendations, consider the aggregator or unified architectures.
{{< info >}} See the architecting document for more detail. {{< /info >}}
{{< info >}} See the high availability document for more detail. {{< /info >}}
{{< info >}} See the hardening recommendations for more detail. {{< /info >}}
{{< info >}} See the sizing, scaling, and capacity planning document for more detail. {{< /info >}}
{{< info >}} See the rolling out document for more detail. {{< /info >}}
We recommend deploying Vector alongside other agents only when those agents integrate with specific systems and produce unique data. Otherwise, Vector should replace them. See the collecting data section for more detail.
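When Vector runs alongside another agent, it typically receives that agent's output over a local protocol. As a sketch, a Syslog-speaking agent could forward into Vector like this; the listen address and port are assumptions for illustration:

```yaml
# Hypothetical source: accept data forwarded by a co-located
# Syslog-compatible agent over TCP.
sources:
  from_other_agent:
    type: syslog
    mode: tcp
    address: "0.0.0.0:5140"  # assumed listen address/port
```

Vector also ships sources for other common agent protocols (e.g., Fluent forward), so the existing agent's output format usually dictates which source type to use.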
As a general rule of thumb, agents should not hold on to data: processing and delivery should be fast and stream-based. If you need to perform complex processing or long-lived batching, use the aggregator architecture.
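In configuration terms, keeping an agent stream-like means small in-memory buffers and short batch timeouts on its sinks. The values below are illustrative assumptions, not recommendations from this document:

```yaml
# Hypothetical sink tuning that keeps the agent streaming rather
# than accumulating data on the node.
sinks:
  aggregator:
    type: vector
    inputs: [system_logs]
    address: "vector-aggregator:6000"  # assumed aggregator endpoint
    buffer:
      type: memory       # no on-disk accumulation at the edge
      max_events: 500    # small, bounded buffer
      when_full: block   # apply backpressure instead of buffering more
    batch:
      timeout_secs: 1    # flush quickly; avoid long-lived batches
```

Durable buffering and heavyweight batching, if needed, belong on the aggregator tier, which is provisioned for it.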
For help setting up and maintaining this architecture, consider Vector's community discussions or chat. These are free, best-effort channels. For enterprise needs, consider Datadog Observability Pipelines, which comes with enterprise-level support.