# RFC 7469 - 2021-05-17 - Load balancing Kubernetes aggregators
This RFC describes the need for a tooling- and environment-agnostic load balancing solution to be bundled with the Vector aggregator Helm chart.
Load balancing will be a concern for any sink or source supported by Vector: some can use a general solution (ex: load balancing for our HTTP-based sinks) and some require a component-specific one (ex: the Kafka source/sink). Due to the breadth of the topic, this RFC focuses on three specific cases for load balancing to Vector aggregators in Kubernetes, while also giving consideration to future adoption for other components.
Today, scaling Vector horizontally (by increasing replicas) is a manual process when deployed as an aggregator. This limits Vector aggregators in both reliability and performance, causing adoption concerns for users. A single aggregator will be limited in performance by the resources that can be dedicated to it, presumably with some (currently) unknown upper bounds. Vector aims to be vendor neutral, and as such we should provide the capacity to scale and load balance across Vector aggregators regardless of environment or upstream event collectors.
Include a configuration for a dedicated reverse proxy that will be deployed as part of the vector-aggregator Helm chart. We should provide basic, but functional, configurations out of the box to enable users to "one click" install Vector as an aggregator. The proxy should dynamically resolve downstream Vector instances and allow users to update the balance config to provide more consistent targets in situations that require it (aggregation transforms). I propose our initially supported proxy should be HAProxy, with NGINX or Envoy as the next candidates. HAProxy, compared to NGINX, provides more metrics (exposed as JSON or in Prometheus format) and has native service discovery to dynamically populate its configuration. Lua can be used with NGINX to provide service discovery, as the nginx-ingress-controller does, for example.
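As a sketch of what "one click" could look like, the chart might expose a toggle in `values.yaml` along these lines. The `haproxy` key and every field under it are illustrative assumptions for this RFC, not a finalized chart schema:

```yaml
# Hypothetical values.yaml excerpt for the vector-aggregator chart.
# All keys below are assumptions for discussion, not a confirmed API.
haproxy:
  enabled: true          # deploy the bundled HAProxy in front of aggregators
  replicaCount: 2        # run multiple proxy replicas for availability
  image:
    repository: haproxy
    tag: "2.3"
```

Defaulting `enabled` to true would deliver the out-of-the-box experience, while still letting users who run their own load balancer or service mesh switch it off.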
HAProxy intentionally has little support for proxying UDP. As of 2.3 there is support for forwarding syslog traffic, however it doesn't allow for dynamic backend configuration, greatly limiting its usability for us.
Below is a basic HAProxy configuration that leverages DNS-based service discovery in a Kubernetes cluster:
```
resolvers coredns
    nameserver dns1 kube-dns.kube-system.svc.cluster.local:53
    hold timeout 600s
    hold refused 600s

frontend vector
    bind *:9000
    default_backend vector_template

backend vector_template
    balance roundrobin
    option tcp-check
    server-template srv 10 _vector._tcp.vector-aggregator-headless.vector.svc.cluster.local resolvers coredns check
```
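With the proxy in place, agents address the proxy Service rather than individual aggregator pods. A minimal sketch of a Vector agent sink pointed at the frontend above, assuming the proxy Service is named `vector-aggregator` in the `vector` namespace and the source name `my_logs` is a placeholder:

```toml
# Hypothetical agent-side config; the Service name and namespace are
# assumptions matching the HAProxy example above.
[sinks.aggregator]
type = "vector"
inputs = ["my_logs"]
address = "vector-aggregator.vector.svc.cluster.local:9000"
```

The agent only ever sees one stable address; scaling aggregator replicas up or down is absorbed by HAProxy's service discovery.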
Bundling a proxy with built-in service discovery lets us support all of our existing sources with the smallest amount of engineering effort.

The Vector aggregator can currently function as a single instance and be scaled vertically rather than horizontally. While this reduces complexity, it makes Vector a single point of failure and introduces an upper limit on throughput.
The library powering the v2 Vector sink/source does provide the capability to do client-side load balancing, however that only covers a single sink-to-source pairing. For certain clients like Beats and Logstash we could implement an Elasticsearch-compatible API and allow those clients to use their native load balancing and integrations; this would generally be per source and not available for all sources.
Users already leveraging a service mesh could offload the load balancing to the mesh, however, requiring a service mesh to run and scale Vector aggregators horizontally is a large barrier to adoption.
Projects like Thanos and Loki have used hashrings to enable multi-tenancy; we could likely do something similar to ensure events are forwarded to the correct aggregator. I don't think anyone wants to turn Vector into a distributed system though.
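For context on what that alternative would entail, the core of a hashring is small. A minimal consistent-hashing sketch (the class, key choices, and node names are illustrative, not Vector or Thanos/Loki internals): each aggregator is placed on a ring at many virtual points, and an event key such as a tenant ID routes to the next point clockwise.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring sketch: maps an event key (e.g. a
    tenant ID) to a stable aggregator, with limited remapping when
    aggregators join or leave."""

    def __init__(self, nodes, vnodes=64):
        # Place each node on the ring at `vnodes` pseudo-random points
        # so load spreads evenly across nodes.
        self._ring = []
        for node in nodes:
            for i in range(vnodes):
                bisect.insort(self._ring, (self._hash(f"{node}-{i}"), node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Route the key to the first ring point at or after its hash,
        # wrapping around the end of the ring.
        idx = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]
```

Because removing a node only removes that node's points, keys previously routed to the surviving aggregators keep their assignment, which is the property that makes hashrings attractive for stateful aggregation. The operational cost is what the RFC notes: membership, rebalancing, and failure handling turn Vector into a distributed system.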
- Which balance algorithm should be the default? Likely `roundrobin`, with documentation around setting `source` as an alternative.
- Each source needs its own unique port; what defaults and/or templating do we provide to the load balancer?
- Out-of-the-box configurations for Datadog agents and Vector agents.
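For pipelines with aggregation transforms that need a given client to land on a consistent aggregator, the backend can switch from `roundrobin` to source hashing. A sketch, reusing the backend from the example configuration (the directives are standard HAProxy; the backend name and server-template line mirror that example):

```
backend vector_template
    balance source        # hash on the client source IP for sticky routing
    hash-type consistent  # minimize remapping when servers come and go
    option tcp-check
    server-template srv 10 _vector._tcp.vector-aggregator-headless.vector.svc.cluster.local resolvers coredns check
```

Documenting this switch would cover the aggregation-transform case without changing the default behavior.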