docs/sources/setup/install/helm/reference.md
This is the generated reference for the Loki Helm Chart values.
Because the Loki Helm chart exposes a large number of configuration options, this reference is intentionally exhaustive and can be quite long.
Configuration keys are grouped by prefix. For example:
- `adminApi.*` — configuration for the admin API component
- `backend.*` — configuration for backend pods
- `backend.persistence.*` — storage configuration for backend pods
- `backend.autoscaling.*` — autoscaling configuration for backend pods

To navigate it more easily:
- Use your browser's search function (Ctrl+F) to locate specific configuration keys.
- Search for a key prefix (such as `backend.` or `ingester.`) to jump between related settings.

<!-- vale Grafana.Spelling = NO -->
<!-- Override default values table from helm-docs. See https://github.com/norwoodj/helm-docs/tree/master#advanced-table-rendering -->

Note: This reference is for the Loki Helm chart version 3.0 or greater. If you are using the `grafana/loki-stack` Helm chart from the community repo, refer to the `values.yaml` of the respective GitHub repository grafana/helm-charts.
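Each dotted key in the table that follows maps to a nested path in your `values.yaml`. As a brief sketch (the keys shown are real chart values, but the specific sizes, counts, and the storage class name are arbitrary illustrations, not recommendations), overriding a few grouped `backend.*` settings looks like this:

```yaml
# values.yaml -- example overrides; values are illustrative, not recommendations
backend:
  replicas: 3              # table key: backend.replicas
  persistence:
    size: 20Gi             # table key: backend.persistence.size
    storageClass: fast-ssd # hypothetical class name; use one defined in your cluster
  autoscaling:
    enabled: true          # table key: backend.autoscaling.enabled
    minReplicas: 3         # table key: backend.autoscaling.minReplicas
    maxReplicas: 6         # table key: backend.autoscaling.maxReplicas
```

Apply the file with `helm upgrade --install loki grafana/loki -f values.yaml`, or set an individual key directly, for example `--set backend.replicas=3`.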
{{< responsive-table >}}
<table> <thead> <th>Key</th> <th>Type</th> <th>Description</th> <th>Default</th> </thead> <tbody> <tr> <td>adminApi</td> <td>object</td> <td>Configuration for the `admin-api` target</td> <td><pre lang="json"> { "affinity": {}, "annotations": {}, "containerSecurityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": [ "ALL" ] }, "readOnlyRootFilesystem": true }, "dnsConfig": {}, "env": [], "extraArgs": {}, "extraContainers": [], "extraEnv": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostAliases": [], "hostUsers": "nil", "initContainers": [], "labels": {}, "livenessProbe": {}, "nodeSelector": {}, "podSecurityContext": { "runAsGroup": 10001, "runAsNonRoot": true, "runAsUser": 10001 }, "readinessProbe": { "httpGet": { "path": "/ready", "port": "http-metrics" }, "initialDelaySeconds": 45 }, "replicas": 1, "resources": {}, "service": { "annotations": {}, "labels": {} }, "startupProbe": {}, "strategy": { "type": "RollingUpdate" }, "terminationGracePeriodSeconds": 60, "tolerations": [], "topologySpreadConstraints": [] } </pre> </td> </tr> <tr> <td>adminApi.affinity</td> <td>object</td> <td>Affinity for admin-api Pods The value will be passed through tpl.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>adminApi.annotations</td> <td>object</td> <td>Additional annotations for the `admin-api` Deployment</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>adminApi.dnsConfig</td> <td>object</td> <td>DNSConfig for `admin-api` pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>adminApi.env</td> <td>list</td> <td>Configure optional environment variables</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>adminApi.extraArgs</td> <td>object</td> <td>Additional CLI arguments for the `admin-api` target</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>adminApi.extraContainers</td> <td>list</td> <td>Configure optional extraContainers</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> 
<td>adminApi.extraEnv</td> <td>list</td> <td>Environment variables to add to the admin-api pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>adminApi.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the admin-api pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>adminApi.extraVolumeMounts</td> <td>list</td> <td>Additional volume mounts for Pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>adminApi.extraVolumes</td> <td>list</td> <td>Additional volumes for Pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>adminApi.hostAliases</td> <td>list</td> <td>hostAliases to add</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>adminApi.hostUsers</td> <td>string</td> <td>Use the host's user namespace in admin-api pods</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>adminApi.initContainers</td> <td>list</td> <td>Configure optional initContainers</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>adminApi.labels</td> <td>object</td> <td>Additional labels for the `admin-api` Deployment</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>adminApi.livenessProbe</td> <td>object</td> <td>Liveness probe</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>adminApi.nodeSelector</td> <td>object</td> <td>Node selector for admin-api Pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>adminApi.podSecurityContext</td> <td>object</td> <td>Run container as user `enterprise-logs(uid=10001)` `fsGroup` must not be specified, because these security options are applied on container level not on Pod level.</td> <td><pre lang="json"> { "runAsGroup": 10001, "runAsNonRoot": true, "runAsUser": 10001 } </pre> </td> </tr> <tr> <td>adminApi.readinessProbe</td> <td>object</td> <td>Readiness probe</td> <td><pre lang="json"> { "httpGet": { "path": "/ready", "port": "http-metrics" }, "initialDelaySeconds": 45 } </pre> </td> </tr> <tr> <td>adminApi.replicas</td> 
<td>int</td> <td>Define the amount of instances</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>adminApi.resources</td> <td>object</td> <td>Values are defined in small.yaml and large.yaml</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>adminApi.service</td> <td>object</td> <td>Additional labels and annotations for the `admin-api` Service</td> <td><pre lang="json"> { "annotations": {}, "labels": {} } </pre> </td> </tr> <tr> <td>adminApi.startupProbe</td> <td>object</td> <td>Startup probe</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>adminApi.strategy</td> <td>object</td> <td>Update strategy</td> <td><pre lang="json"> { "type": "RollingUpdate" } </pre> </td> </tr> <tr> <td>adminApi.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the admin-api to shutdown before it is killed</td> <td><pre lang="json"> 60 </pre> </td> </tr> <tr> <td>adminApi.tolerations</td> <td>list</td> <td>Tolerations for admin-api Pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>adminApi.topologySpreadConstraints</td> <td>list</td> <td>Topology Spread Constraints for admin-api pods The value will be passed through tpl.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>backend</td> <td>object</td> <td>Configuration for the backend pod(s)</td> <td><pre lang="json"> { "affinity": { "podAntiAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": [ { "labelSelector": { "matchLabels": { "app.kubernetes.io/component": "backend", "app.kubernetes.io/instance": "{{ .Release.Name }}", "app.kubernetes.io/name": "{{ include \"loki.name\" . 
}}" } }, "topologyKey": "kubernetes.io/hostname" } ] } }, "annotations": {}, "autoscaling": { "behavior": {}, "enabled": false, "maxReplicas": 6, "minReplicas": 3, "targetCPUUtilizationPercentage": 60, "targetMemoryUtilizationPercentage": null }, "dnsConfig": {}, "extraArgs": [], "extraContainers": [], "extraEnv": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostUsers": "nil", "image": { "registry": null, "repository": null, "tag": null }, "initContainers": [], "maxUnavailable": 1, "nodeSelector": {}, "persistence": { "accessModes": [ "ReadWriteOnce" ], "annotations": {}, "dataVolumeParameters": { "emptyDir": {} }, "enableStatefulSetAutoDeletePVC": true, "labels": {}, "selector": null, "size": "10Gi", "storageClass": null, "volumeAttributesClassName": null, "volumeClaimsEnabled": true }, "podAnnotations": {}, "podLabels": {}, "podManagementPolicy": "Parallel", "priorityClassName": null, "replicas": 3, "resources": {}, "selectorLabels": {}, "service": { "annotations": {}, "labels": {}, "trafficDistribution": "", "type": "ClusterIP" }, "targetModule": "backend", "terminationGracePeriodSeconds": 300, "tolerations": [], "topologySpreadConstraints": [] } </pre> </td> </tr> <tr> <td>backend.affinity</td> <td>object</td> <td>Affinity for backend pods. 
The value will be passed through tpl.</td> <td><pre lang=""> Hard node anti-affinity </pre> </td> </tr> <tr> <td>backend.annotations</td> <td>object</td> <td>Annotations for backend StatefulSet</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.autoscaling.behavior</td> <td>object</td> <td>Behavior policies while scaling.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.autoscaling.enabled</td> <td>bool</td> <td>Enable autoscaling for the backend.</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>backend.autoscaling.maxReplicas</td> <td>int</td> <td>Maximum autoscaling replicas for the backend.</td> <td><pre lang="json"> 6 </pre> </td> </tr> <tr> <td>backend.autoscaling.minReplicas</td> <td>int</td> <td>Minimum autoscaling replicas for the backend.</td> <td><pre lang="json"> 3 </pre> </td> </tr> <tr> <td>backend.autoscaling.targetCPUUtilizationPercentage</td> <td>int</td> <td>Target CPU utilization percentage for the backend.</td> <td><pre lang="json"> 60 </pre> </td> </tr> <tr> <td>backend.autoscaling.targetMemoryUtilizationPercentage</td> <td>string</td> <td>Target memory utilization percentage for the backend.</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>backend.dnsConfig</td> <td>object</td> <td>DNS config for backend pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.extraArgs</td> <td>list</td> <td>Additional CLI args for the backend</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>backend.extraContainers</td> <td>list</td> <td>Containers to add to the backend pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>backend.extraEnv</td> <td>list</td> <td>Environment variables to add to the backend pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>backend.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the backend pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>backend.extraVolumeMounts</td> 
<td>list</td> <td>Volume mounts to add to the backend pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>backend.extraVolumes</td> <td>list</td> <td>Volumes to add to the backend pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>backend.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the backend pods.</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>backend.image.registry</td> <td>string</td> <td>The Docker registry for the backend image. Overrides `loki.image.registry`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>backend.image.repository</td> <td>string</td> <td>Docker image repository for the backend image. Overrides `loki.image.repository`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>backend.image.tag</td> <td>string</td> <td>Docker image tag for the backend image. Overrides `loki.image.tag`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>backend.initContainers</td> <td>list</td> <td>Init containers to add to the backend pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>backend.maxUnavailable</td> <td>int</td> <td>Pod Disruption Budget maxUnavailable</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>backend.nodeSelector</td> <td>object</td> <td>Node selector for backend pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.persistence.accessModes</td> <td>list</td> <td>Set access modes on the PersistentVolumeClaim</td> <td><pre lang="json"> [ "ReadWriteOnce" ] </pre> </td> </tr> <tr> <td>backend.persistence.annotations</td> <td>object</td> <td>Annotations for volume claim</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.persistence.dataVolumeParameters</td> <td>object</td> <td>Parameters used for the `data` volume when `volumeClaimsEnabled` is false</td> <td><pre lang="json"> { "emptyDir": {} } </pre> </td> </tr> <tr> <td>backend.persistence.enableStatefulSetAutoDeletePVC</td> <td>bool</td> <td>Enable
StatefulSetAutoDeletePVC feature</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>backend.persistence.labels</td> <td>object</td> <td>Labels for volume claim</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.persistence.selector</td> <td>string</td> <td>Selector for persistent disk</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>backend.persistence.size</td> <td>string</td> <td>Size of persistent disk</td> <td><pre lang="json"> "10Gi" </pre> </td> </tr> <tr> <td>backend.persistence.storageClass</td> <td>string</td> <td>Storage class to be used. If defined, storageClassName: <storageClass>. If set to "-", storageClassName: "", which disables dynamic provisioning. If empty or set to null, no storageClassName spec is set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>backend.persistence.volumeAttributesClassName</td> <td>string</td> <td>Volume attributes class name to be used. If empty or set to null, no volumeAttributesClassName spec is set. 
Requires Kubernetes 1.31</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>backend.persistence.volumeClaimsEnabled</td> <td>bool</td> <td>Enable volume claims in pod spec</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>backend.podAnnotations</td> <td>object</td> <td>Annotations for backend pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.podLabels</td> <td>object</td> <td>Additional labels for each `backend` pod</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.podManagementPolicy</td> <td>string</td> <td>The default is to deploy all pods in parallel.</td> <td><pre lang="json"> "Parallel" </pre> </td> </tr> <tr> <td>backend.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for backend pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>backend.replicas</td> <td>int</td> <td>Number of replicas for the backend</td> <td><pre lang="json"> 3 </pre> </td> </tr> <tr> <td>backend.resources</td> <td>object</td> <td>Resource requests and limits for the backend</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.selectorLabels</td> <td>object</td> <td>Additional selector labels for each `backend` pod</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.service.annotations</td> <td>object</td> <td>Annotations for backend Service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.service.labels</td> <td>object</td> <td>Additional labels for backend Service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>backend.service.trafficDistribution</td> <td>string</td> <td>trafficDistribution for backend Service</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>backend.service.type</td> <td>string</td> <td>Service type for backend Service</td> <td><pre lang="json"> "ClusterIP" </pre> </td> </tr> <tr> <td>backend.targetModule</td> <td>string</td> <td>Comma-separated list of Loki modules to load for the backend</td> <td><pre 
lang="json"> "backend" </pre> </td> </tr> <tr> <td>backend.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the backend to shutdown before it is killed. Especially for the ingester, this must be increased. It must be long enough so backends can be gracefully shutdown flushing/transferring all data and to successfully leave the member ring on shutdown.</td> <td><pre lang="json"> 300 </pre> </td> </tr> <tr> <td>backend.tolerations</td> <td>list</td> <td>Tolerations for backend pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>backend.topologySpreadConstraints</td> <td>list</td> <td>Topology Spread Constraints for backend pods The value will be passed through tpl.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder</td> <td>object</td> <td>Configuration for the bloom-builder</td> <td><pre lang="json"> { "affinity": { "podAntiAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": [ { "labelSelector": { "matchLabels": { "app.kubernetes.io/component": "bloom-builder", "app.kubernetes.io/instance": "{{ .Release.Name }}", "app.kubernetes.io/name": "{{ include \"loki.name\" . 
}}" } }, "topologyKey": "kubernetes.io/hostname" } ] } }, "appProtocol": { "grpc": "" }, "autoscaling": { "behavior": { "enabled": false, "scaleDown": {}, "scaleUp": {} }, "customMetrics": [], "enabled": false, "maxReplicas": 3, "minReplicas": 1, "targetCPUUtilizationPercentage": 60, "targetMemoryUtilizationPercentage": null }, "command": null, "dnsConfig": {}, "extraArgs": [], "extraContainers": [], "extraEnv": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostAliases": [], "hostUsers": "nil", "image": { "registry": null, "repository": null, "tag": null }, "initContainers": [], "maxUnavailable": null, "nodeSelector": {}, "podAnnotations": {}, "podLabels": {}, "priorityClassName": null, "replicas": 0, "resources": {}, "serviceAnnotations": {}, "serviceLabels": {}, "terminationGracePeriodSeconds": 30, "tolerations": [] } </pre> </td> </tr> <tr> <td>bloomBuilder.affinity</td> <td>object</td> <td>Affinity for bloom-builder pods. The value will be passed through tpl.</td> <td><pre lang=""> Hard node anti-affinity </pre> </td> </tr> <tr> <td>bloomBuilder.appProtocol</td> <td>object</td> <td>Adds the appProtocol field to the queryFrontend service. This allows bloomBuilder to work with istio protocol selection.</td> <td><pre lang="json"> { "grpc": "" } </pre> </td> </tr> <tr> <td>bloomBuilder.appProtocol.grpc</td> <td>string</td> <td>Set the optional grpc service protocol. 
Ex: "grpc", "http2" or "https"</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>bloomBuilder.autoscaling.behavior.enabled</td> <td>bool</td> <td>Enable autoscaling behaviors</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>bloomBuilder.autoscaling.behavior.scaleDown</td> <td>object</td> <td>Define scale down policies, must conform to HPAScalingRules</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomBuilder.autoscaling.behavior.scaleUp</td> <td>object</td> <td>Define scale up policies, must conform to HPAScalingRules</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomBuilder.autoscaling.customMetrics</td> <td>list</td> <td>Allows one to define custom metrics using the HPA/v2 schema (for example, Pods, Object or External metrics)</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder.autoscaling.enabled</td> <td>bool</td> <td>Enable autoscaling for the bloom-builder</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>bloomBuilder.autoscaling.maxReplicas</td> <td>int</td> <td>Maximum autoscaling replicas for the bloom-builder</td> <td><pre lang="json"> 3 </pre> </td> </tr> <tr> <td>bloomBuilder.autoscaling.minReplicas</td> <td>int</td> <td>Minimum autoscaling replicas for the bloom-builder</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>bloomBuilder.autoscaling.targetCPUUtilizationPercentage</td> <td>int</td> <td>Target CPU utilization percentage for the bloom-builder</td> <td><pre lang="json"> 60 </pre> </td> </tr> <tr> <td>bloomBuilder.autoscaling.targetMemoryUtilizationPercentage</td> <td>string</td> <td>Target memory utilization percentage for the bloom-builder</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomBuilder.command</td> <td>string</td> <td>Command to execute instead of defined in Docker image</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomBuilder.dnsConfig</td> <td>object</td> <td>DNSConfig for bloom-builder pods</td> <td><pre lang="json"> 
{} </pre> </td> </tr> <tr> <td>bloomBuilder.extraArgs</td> <td>list</td> <td>Additional CLI args for the bloom-builder</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder.extraContainers</td> <td>list</td> <td>Containers to add to the bloom-builder pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder.extraEnv</td> <td>list</td> <td>Environment variables to add to the bloom-builder pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the bloom-builder pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder.extraVolumeMounts</td> <td>list</td> <td>Volume mounts to add to the bloom-builder pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder.extraVolumes</td> <td>list</td> <td>Volumes to add to the bloom-builder pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder.hostAliases</td> <td>list</td> <td>hostAliases to add</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the bloom-builder</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>bloomBuilder.image.registry</td> <td>string</td> <td>The Docker registry for the bloom-builder image. Overrides `loki.image.registry`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomBuilder.image.repository</td> <td>string</td> <td>Docker image repository for the bloom-builder image. Overrides `loki.image.repository`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomBuilder.image.tag</td> <td>string</td> <td>Docker image tag for the bloom-builder image. 
Overrides `loki.image.tag`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomBuilder.initContainers</td> <td>list</td> <td>Init containers to add to the bloom-builder pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomBuilder.maxUnavailable</td> <td>string</td> <td>Pod Disruption Budget maxUnavailable</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomBuilder.nodeSelector</td> <td>object</td> <td>Node selector for bloom-builder pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomBuilder.podAnnotations</td> <td>object</td> <td>Annotations for bloom-builder pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomBuilder.podLabels</td> <td>object</td> <td>Labels for bloom-builder pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomBuilder.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for bloom-builder pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomBuilder.replicas</td> <td>int</td> <td>Number of replicas for the bloom-builder</td> <td><pre lang="json"> 0 </pre> </td> </tr> <tr> <td>bloomBuilder.resources</td> <td>object</td> <td>Resource requests and limits for the bloom-builder</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomBuilder.serviceAnnotations</td> <td>object</td> <td>Annotations for bloom-builder service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomBuilder.serviceLabels</td> <td>object</td> <td>Labels for bloom-builder service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomBuilder.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the bloom-builder to shutdown before it is killed</td> <td><pre lang="json"> 30 </pre> </td> </tr> <tr> <td>bloomBuilder.tolerations</td> <td>list</td> <td>Tolerations for bloom-builder pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway</td> <td>object</td> <td>Configuration for the 
bloom-gateway</td> <td><pre lang="json"> { "affinity": { "podAntiAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": [ { "labelSelector": { "matchLabels": { "app.kubernetes.io/component": "bloom-gateway", "app.kubernetes.io/instance": "{{ .Release.Name }}", "app.kubernetes.io/name": "{{ include \"loki.name\" . }}" } }, "topologyKey": "kubernetes.io/hostname" } ] } }, "appProtocol": { "grpc": "" }, "command": null, "dnsConfig": {}, "extraArgs": [], "extraContainers": [], "extraEnv": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostAliases": [], "hostUsers": "nil", "image": { "registry": null, "repository": null, "tag": null }, "initContainers": [], "livenessProbe": {}, "nodeSelector": {}, "persistence": { "annotations": {}, "claims": [ { "accessModes": [ "ReadWriteOnce" ], "name": "data", "size": "10Gi", "storageClass": null, "volumeAttributesClassName": null } ], "enableStatefulSetAutoDeletePVC": false, "enabled": false, "labels": {}, "whenDeleted": "Retain", "whenScaled": "Retain" }, "podAnnotations": {}, "podLabels": {}, "priorityClassName": null, "readinessProbe": {}, "replicas": 0, "resources": {}, "serviceAccount": { "annotations": {}, "automountServiceAccountToken": true, "create": false, "imagePullSecrets": [], "name": null }, "serviceAnnotations": {}, "serviceLabels": {}, "startupProbe": {}, "terminationGracePeriodSeconds": 30, "tolerations": [] } </pre> </td> </tr> <tr> <td>bloomGateway.affinity</td> <td>object</td> <td>Affinity for bloom-gateway pods. The value will be passed through tpl.</td> <td><pre lang=""> Hard node anti-affinity </pre> </td> </tr> <tr> <td>bloomGateway.appProtocol</td> <td>object</td> <td>Set the optional grpc service protocol. 
Ex: "grpc", "http2" or "https"</td> <td><pre lang="json"> { "grpc": "" } </pre> </td> </tr> <tr> <td>bloomGateway.command</td> <td>string</td> <td>Command to execute instead of defined in Docker image</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomGateway.dnsConfig</td> <td>object</td> <td>DNSConfig for bloom-gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.extraArgs</td> <td>list</td> <td>Additional CLI args for the bloom-gateway</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway.extraContainers</td> <td>list</td> <td>Containers to add to the bloom-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway.extraEnv</td> <td>list</td> <td>Environment variables to add to the bloom-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the bloom-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway.extraVolumeMounts</td> <td>list</td> <td>Volume mounts to add to the bloom-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway.extraVolumes</td> <td>list</td> <td>Volumes to add to the bloom-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway.hostAliases</td> <td>list</td> <td>hostAliases to add</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the bloom-gateway</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>bloomGateway.image.registry</td> <td>string</td> <td>The Docker registry for the bloom-gateway image. Overrides `loki.image.registry`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomGateway.image.repository</td> <td>string</td> <td>Docker image repository for the bloom-gateway image. 
Overrides `loki.image.repository`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomGateway.image.tag</td> <td>string</td> <td>Docker image tag for the bloom-gateway image. Overrides `loki.image.tag`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomGateway.initContainers</td> <td>list</td> <td>Init containers to add to the bloom-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway.livenessProbe</td> <td>object</td> <td>liveness probe settings for bloom-gateway pods. If empty use `loki.livenessProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.nodeSelector</td> <td>object</td> <td>Node selector for bloom-gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.persistence.annotations</td> <td>object</td> <td>Annotations for bloom-gateway PVCs</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.persistence.claims</td> <td>list</td> <td>List of the bloom-gateway PVCs</td> <td><pre lang="list"> </pre> </td> </tr> <tr> <td>bloomGateway.persistence.claims[0].accessModes</td> <td>list</td> <td>Set access modes on the PersistentVolumeClaim</td> <td><pre lang="json"> [ "ReadWriteOnce" ] </pre> </td> </tr> <tr> <td>bloomGateway.persistence.claims[0].size</td> <td>string</td> <td>Size of persistent disk</td> <td><pre lang="json"> "10Gi" </pre> </td> </tr> <tr> <td>bloomGateway.persistence.claims[0].volumeAttributesClassName</td> <td>string</td> <td>Volume attributes class name to be used. If empty or set to null, no volumeAttributesClassName spec is set. 
Requires Kubernetes 1.31</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomGateway.persistence.enableStatefulSetAutoDeletePVC</td> <td>bool</td> <td>Enable StatefulSetAutoDeletePVC feature</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>bloomGateway.persistence.enabled</td> <td>bool</td> <td>Enable creating PVCs for the bloom-gateway</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>bloomGateway.persistence.labels</td> <td>object</td> <td>Labels for bloom gateway PVCs</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.podAnnotations</td> <td>object</td> <td>Annotations for bloom-gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.podLabels</td> <td>object</td> <td>Labels for bloom-gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for bloom-gateway pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomGateway.readinessProbe</td> <td>object</td> <td>readiness probe settings for bloom-gateway pods. 
If empty, use `loki.readinessProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.replicas</td> <td>int</td> <td>Number of replicas for the bloom-gateway</td> <td><pre lang="json"> 0 </pre> </td> </tr> <tr> <td>bloomGateway.resources</td> <td>object</td> <td>Resource requests and limits for the bloom-gateway</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.serviceAccount.annotations</td> <td>object</td> <td>Annotations for the bloom-gateway service account</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.serviceAccount.automountServiceAccountToken</td> <td>bool</td> <td>Set this toggle to false to opt out of automounting API credentials for the service account</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>bloomGateway.serviceAccount.imagePullSecrets</td> <td>list</td> <td>Image pull secrets for the bloom-gateway service account</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomGateway.serviceAccount.name</td> <td>string</td> <td>The name of the ServiceAccount to use for the bloom-gateway. If not set and create is true, a name is generated by appending "-bloom-gateway" to the common ServiceAccount.</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomGateway.serviceAnnotations</td> <td>object</td> <td>Annotations for bloom-gateway service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.serviceLabels</td> <td>object</td> <td>Labels for bloom-gateway service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.startupProbe</td> <td>object</td> <td>startup probe settings for bloom-gateway pods. 
If empty, use `loki.startupProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomGateway.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the bloom-gateway to shutdown before it is killed</td> <td><pre lang="json"> 30 </pre> </td> </tr> <tr> <td>bloomGateway.tolerations</td> <td>list</td> <td>Tolerations for bloom-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner</td> <td>object</td> <td>Configuration for the bloom-planner</td> <td><pre lang="json"> { "affinity": { "podAntiAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": [ { "labelSelector": { "matchLabels": { "app.kubernetes.io/component": "bloom-planner", "app.kubernetes.io/instance": "{{ .Release.Name }}", "app.kubernetes.io/name": "{{ include \"loki.name\" . }}" } }, "topologyKey": "kubernetes.io/hostname" } ] } }, "appProtocol": { "grpc": "" }, "command": null, "dnsConfig": {}, "extraArgs": [], "extraContainers": [], "extraEnv": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostAliases": [], "hostUsers": "nil", "image": { "registry": null, "repository": null, "tag": null }, "initContainers": [], "livenessProbe": {}, "nodeSelector": {}, "persistence": { "claims": [ { "accessModes": [ "ReadWriteOnce" ], "annotations": {}, "labels": {}, "name": "data", "size": "10Gi", "storageClass": null, "volumeAttributesClassName": null } ], "enableStatefulSetAutoDeletePVC": false, "enabled": false, "whenDeleted": "Retain", "whenScaled": "Retain" }, "podAnnotations": {}, "podLabels": {}, "priorityClassName": null, "readinessProbe": {}, "replicas": 0, "resources": {}, "serviceAccount": { "annotations": {}, "automountServiceAccountToken": true, "create": false, "imagePullSecrets": [], "name": null }, "serviceAnnotations": {}, "serviceLabels": {}, "startupProbe": {}, "terminationGracePeriodSeconds": 30, "tolerations": [] } </pre> </td> </tr> <tr> <td>bloomPlanner.affinity</td> <td>object</td> <td>Affinity for 
bloom-planner pods. The value will be passed through tpl.</td> <td><pre lang=""> Hard node anti-affinity </pre> </td> </tr> <tr> <td>bloomPlanner.appProtocol</td> <td>object</td> <td>Set the optional grpc service protocol. Ex: "grpc", "http2" or "https"</td> <td><pre lang="json"> { "grpc": "" } </pre> </td> </tr> <tr> <td>bloomPlanner.command</td> <td>string</td> <td>Command to execute instead of defined in Docker image</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomPlanner.dnsConfig</td> <td>object</td> <td>DNSConfig for bloom-planner pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.extraArgs</td> <td>list</td> <td>Additional CLI args for the bloom-planner</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner.extraContainers</td> <td>list</td> <td>Containers to add to the bloom-planner pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner.extraEnv</td> <td>list</td> <td>Environment variables to add to the bloom-planner pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the bloom-planner pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner.extraVolumeMounts</td> <td>list</td> <td>Volume mounts to add to the bloom-planner pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner.extraVolumes</td> <td>list</td> <td>Volumes to add to the bloom-planner pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner.hostAliases</td> <td>list</td> <td>hostAliases to add</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the bloom-planner</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>bloomPlanner.image.registry</td> <td>string</td> <td>The Docker registry for the bloom-planner image. 
Overrides `loki.image.registry`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomPlanner.image.repository</td> <td>string</td> <td>Docker image repository for the bloom-planner image. Overrides `loki.image.repository`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomPlanner.image.tag</td> <td>string</td> <td>Docker image tag for the bloom-planner image. Overrides `loki.image.tag`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomPlanner.initContainers</td> <td>list</td> <td>Init containers to add to the bloom-planner pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner.livenessProbe</td> <td>object</td> <td>liveness probe settings for bloom-planner pods. If empty use `loki.livenessProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.nodeSelector</td> <td>object</td> <td>Node selector for bloom-planner pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.persistence.claims</td> <td>list</td> <td>List of the bloom-planner PVCs</td> <td><pre lang="list"> </pre> </td> </tr> <tr> <td>bloomPlanner.persistence.claims[0].accessModes</td> <td>list</td> <td>Set access modes on the PersistentVolumeClaim</td> <td><pre lang="json"> [ "ReadWriteOnce" ] </pre> </td> </tr> <tr> <td>bloomPlanner.persistence.claims[0].annotations</td> <td>object</td> <td>Annotations for bloom-planner PVCs</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.persistence.claims[0].labels</td> <td>object</td> <td>Labels for bloom planner PVCs</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.persistence.claims[0].size</td> <td>string</td> <td>Size of persistent disk</td> <td><pre lang="json"> "10Gi" </pre> </td> </tr> <tr> <td>bloomPlanner.persistence.claims[0].volumeAttributesClassName</td> <td>string</td> <td>Volume attributes class name to be used. If empty or set to null, no volumeAttributesClassName spec is set. 
Requires Kubernetes 1.31</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomPlanner.persistence.enableStatefulSetAutoDeletePVC</td> <td>bool</td> <td>Enable StatefulSetAutoDeletePVC feature</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>bloomPlanner.persistence.enabled</td> <td>bool</td> <td>Enable creating PVCs for the bloom-planner</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>bloomPlanner.podAnnotations</td> <td>object</td> <td>Annotations for bloom-planner pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.podLabels</td> <td>object</td> <td>Labels for bloom-planner pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for bloom-planner pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomPlanner.readinessProbe</td> <td>object</td> <td>readiness probe settings for bloom-planner pods. If empty, use `loki.readinessProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.replicas</td> <td>int</td> <td>Number of replicas for the bloom-planner</td> <td><pre lang="json"> 0 </pre> </td> </tr> <tr> <td>bloomPlanner.resources</td> <td>object</td> <td>Resource requests and limits for the bloom-planner</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.serviceAccount.annotations</td> <td>object</td> <td>Annotations for the bloom-planner service account</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.serviceAccount.automountServiceAccountToken</td> <td>bool</td> <td>Set this toggle to false to opt out of automounting API credentials for the service account</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>bloomPlanner.serviceAccount.imagePullSecrets</td> <td>list</td> <td>Image pull secrets for the bloom-planner service account</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>bloomPlanner.serviceAccount.name</td> 
<td>string</td> <td>The name of the ServiceAccount to use for the bloom-planner. If not set and create is true, a name is generated by appending "-bloom-planner" to the common ServiceAccount.</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>bloomPlanner.serviceAnnotations</td> <td>object</td> <td>Annotations for bloom-planner service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.serviceLabels</td> <td>object</td> <td>Labels for bloom-planner service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.startupProbe</td> <td>object</td> <td>startup probe settings for bloom-planner pods. If empty use `loki.startupProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>bloomPlanner.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the bloom-planner to shutdown before it is killed</td> <td><pre lang="json"> 30 </pre> </td> </tr> <tr> <td>bloomPlanner.tolerations</td> <td>list</td> <td>Tolerations for bloom-planner pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.addresses</td> <td>string</td> <td>Comma separated addresses list in DNS Service Discovery format</td> <td><pre lang="json"> "dnssrvnoa+_memcached-client._tcp.{{ include \"loki.resourceName\" (dict \"ctx\" $ \"component\" \"chunks-cache\" \"suffix\" $.Values.chunksCache.suffix ) }}.{{ include \"loki.namespace\" $ }}.svc.{{ .Values.global.clusterDomain }}" </pre> </td> </tr> <tr> <td>chunksCache.affinity</td> <td>object</td> <td>Affinity for chunks-cache pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.allocatedCPU</td> <td>string</td> <td>Amount of cpu allocated to chunks-cache for object storage (in integer or millicores).</td> <td><pre lang="json"> "500m" </pre> </td> </tr> <tr> <td>chunksCache.allocatedMemory</td> <td>int</td> <td>Amount of memory allocated to chunks-cache for object storage (in MB).</td> <td><pre lang="json"> 8192 </pre> </td> </tr> <tr> 
<td>chunksCache.annotations</td> <td>object</td> <td>Annotations for the chunks-cache pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.batchSize</td> <td>int</td> <td>Batch size for sending and receiving chunks from chunks cache</td> <td><pre lang="json"> 4 </pre> </td> </tr> <tr> <td>chunksCache.connectionLimit</td> <td>int</td> <td>Maximum number of connections allowed</td> <td><pre lang="json"> 16384 </pre> </td> </tr> <tr> <td>chunksCache.defaultValidity</td> <td>string</td> <td>Specify how long cached chunks should be stored in the chunks-cache before being expired</td> <td><pre lang="json"> "0s" </pre> </td> </tr> <tr> <td>chunksCache.dnsConfig</td> <td>object</td> <td>DNSConfig for chunks-cache</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.enabled</td> <td>bool</td> <td>Specifies whether memcached based chunks-cache should be enabled</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>chunksCache.extraArgs</td> <td>object</td> <td>Additional CLI args for chunks-cache</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.extraContainers</td> <td>list</td> <td>Additional containers to be added to the chunks-cache pod.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.extraExtendedOptions</td> <td>string</td> <td>Add extended options for chunks-cache memcached container. The format is the same as for the memcached -o/--extend flag. Example: extraExtendedOptions: 'tls,no_hashexpand'</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>chunksCache.extraVolumeMounts</td> <td>list</td> <td>Additional volume mounts to be added to the chunks-cache pod (applies to both memcached and exporter containers). 
Example: extraVolumeMounts: - name: extra-volume mountPath: /etc/extra-volume readOnly: true</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.extraVolumes</td> <td>list</td> <td>Additional volumes to be added to the chunks-cache pod (applies to both memcached and exporter containers). Example: extraVolumes: - name: extra-volume secret: secretName: extra-volume-secret</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.hostUsers</td> <td>string</td> <td>Use the host's user namespace in chunks-cache pods</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>chunksCache.initContainers</td> <td>list</td> <td>Extra init containers for chunks-cache pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.l2</td> <td>object</td> <td>l2 memcache configuration</td> <td><pre lang="json"> { "addresses": "dnssrvnoa+_memcached-client._tcp.{{ include \"loki.resourceName\" (dict \"ctx\" $ \"component\" \"chunks-cache\" \"suffix\" $.Values.chunksCache.l2.suffix ) }}.{{ include \"loki.namespace\" $ }}.svc.{{ .Values.global.clusterDomain }}", "affinity": {}, "allocatedCPU": "500m", "allocatedMemory": 8192, "annotations": {}, "batchSize": 4, "connectionLimit": 16384, "defaultValidity": "0s", "dnsConfig": {}, "enabled": false, "extraArgs": {}, "extraContainers": [], "extraExtendedOptions": "", "extraVolumeMounts": [], "extraVolumes": [], "hostUsers": "nil", "initContainers": [], "l2ChunkCacheHandoff": "345600s", "maxItemMemory": 5, "maxUnavailable": 1, "nodeSelector": {}, "parallelism": 5, "persistence": { "enabled": false, "labels": {}, "mountPath": "/data", "storageClass": null, "storageSize": "10G", "volumeAttributesClassName": null }, "podAnnotations": {}, "podLabels": {}, "podManagementPolicy": "Parallel", "port": 11211, "priorityClassName": null, "replicas": 1, "resources": null, "service": { "annotations": {}, "labels": {} }, "statefulStrategy": { "type": "RollingUpdate" }, "suffix": "l2", 
"terminationGracePeriodSeconds": 60, "timeout": "2000ms", "tolerations": [], "topologySpreadConstraints": [], "writebackBuffer": 500000, "writebackParallelism": 1, "writebackSizeLimit": "500MB" } </pre> </td> </tr> <tr> <td>chunksCache.l2.addresses</td> <td>string</td> <td>Comma separated addresses list in DNS Service Discovery format</td> <td><pre lang="json"> "dnssrvnoa+_memcached-client._tcp.{{ include \"loki.resourceName\" (dict \"ctx\" $ \"component\" \"chunks-cache\" \"suffix\" $.Values.chunksCache.l2.suffix ) }}.{{ include \"loki.namespace\" $ }}.svc.{{ .Values.global.clusterDomain }}" </pre> </td> </tr> <tr> <td>chunksCache.l2.affinity</td> <td>object</td> <td>Affinity for chunks-cache-l2 pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.l2.allocatedCPU</td> <td>string</td> <td>Amount of cpu allocated to chunks-cache-l2 for object storage (in integer or millicores).</td> <td><pre lang="json"> "500m" </pre> </td> </tr> <tr> <td>chunksCache.l2.allocatedMemory</td> <td>int</td> <td>Amount of memory allocated to chunks-cache-l2 for object storage (in MB).</td> <td><pre lang="json"> 8192 </pre> </td> </tr> <tr> <td>chunksCache.l2.annotations</td> <td>object</td> <td>Annotations for the chunks-cache-l2 pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.l2.batchSize</td> <td>int</td> <td>Batch size for sending and receiving chunks from chunks cache</td> <td><pre lang="json"> 4 </pre> </td> </tr> <tr> <td>chunksCache.l2.connectionLimit</td> <td>int</td> <td>Maximum number of connections allowed</td> <td><pre lang="json"> 16384 </pre> </td> </tr> <tr> <td>chunksCache.l2.defaultValidity</td> <td>string</td> <td>Specify how long cached chunks should be stored in the chunks-cache-l2 before being expired</td> <td><pre lang="json"> "0s" </pre> </td> </tr> <tr> <td>chunksCache.l2.dnsConfig</td> <td>object</td> <td>DNSConfig for chunks-cache-l2</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> 
<td>chunksCache.l2.enabled</td> <td>bool</td> <td>Specifies whether memcached based chunks-cache-l2 should be enabled</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>chunksCache.l2.extraArgs</td> <td>object</td> <td>Additional CLI args for chunks-cache-l2</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.l2.extraContainers</td> <td>list</td> <td>Additional containers to be added to the chunks-cache-l2 pod.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.l2.extraExtendedOptions</td> <td>string</td> <td>Add extended options for chunks-cache-l2 memcached container. The format is the same as for the memcached -o/--extend flag. Example: extraExtendedOptions: 'tls,no_hashexpand'</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>chunksCache.l2.extraVolumeMounts</td> <td>list</td> <td>Additional volume mounts to be added to the chunks-cache-l2 pod (applies to both memcached and exporter containers). Example: extraVolumeMounts: - name: extra-volume mountPath: /etc/extra-volume readOnly: true</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.l2.extraVolumes</td> <td>list</td> <td>Additional volumes to be added to the chunks-cache-l2 pod (applies to both memcached and exporter containers). 
Example: extraVolumes: - name: extra-volume secret: secretName: extra-volume-secret</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.l2.hostUsers</td> <td>string</td> <td>Use the host's user namespace in chunks-cache-l2 pods</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>chunksCache.l2.initContainers</td> <td>list</td> <td>Extra init containers for chunks-cache-l2 pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.l2.l2ChunkCacheHandoff</td> <td>string</td> <td>The age at which chunks should be transferred from the l1 cache to the l2 cache. The default of 345600s is 4 days.</td> <td><pre lang="json"> "345600s" </pre> </td> </tr> <tr> <td>chunksCache.l2.maxItemMemory</td> <td>int</td> <td>Maximum item memory for chunks-cache-l2 (in MB).</td> <td><pre lang="json"> 5 </pre> </td> </tr> <tr> <td>chunksCache.l2.maxUnavailable</td> <td>int</td> <td>Pod Disruption Budget maxUnavailable</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>chunksCache.l2.nodeSelector</td> <td>object</td> <td>Node selector for chunks-cache-l2 pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.l2.parallelism</td> <td>int</td> <td>Parallel threads for sending and receiving chunks from chunks cache</td> <td><pre lang="json"> 5 </pre> </td> </tr> <tr> <td>chunksCache.l2.persistence</td> <td>object</td> <td>Persistence settings for the chunks-cache-l2</td> <td><pre lang="json"> { "enabled": false, "labels": {}, "mountPath": "/data", "storageClass": null, "storageSize": "10G", "volumeAttributesClassName": null } </pre> </td> </tr> <tr> <td>chunksCache.l2.persistence.enabled</td> <td>bool</td> <td>Enable creating PVCs for the chunks-cache-l2</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>chunksCache.l2.persistence.mountPath</td> <td>string</td> <td>Volume mount path</td> <td><pre lang="json"> "/data" </pre> </td> </tr> <tr> <td>chunksCache.l2.persistence.storageClass</td> <td>string</td> <td>Storage class to be used. 
If defined, storageClassName: <storageClass>. If set to "-", storageClassName: "", which disables dynamic provisioning. If empty or set to null, no storageClassName spec is set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>chunksCache.l2.persistence.storageSize</td> <td>string</td> <td>Size of persistent disk, must be in G or Gi</td> <td><pre lang="json"> "10G" </pre> </td> </tr> <tr> <td>chunksCache.l2.persistence.volumeAttributesClassName</td> <td>string</td> <td>Volume attributes class name to be used. If empty or set to null, no volumeAttributesClassName spec is set. Requires Kubernetes 1.31</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>chunksCache.l2.podAnnotations</td> <td>object</td> <td>Annotations for chunks-cache-l2 pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.l2.podLabels</td> <td>object</td> <td>Labels for chunks-cache-l2 pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.l2.podManagementPolicy</td> <td>string</td> <td>Management policy for chunks-cache-l2 pods</td> <td><pre lang="json"> "Parallel" </pre> </td> </tr> <tr> <td>chunksCache.l2.port</td> <td>int</td> <td>Port of the chunks-cache-l2 service</td> <td><pre lang="json"> 11211 </pre> </td> </tr> <tr> <td>chunksCache.l2.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for chunks-cache-l2 pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>chunksCache.l2.replicas</td> <td>int</td> <td>Number of replicas for the chunks-cache-l2</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>chunksCache.l2.resources</td> <td>string</td> <td>Resource requests and limits for the chunks-cache-l2. By default, a safe memory limit will be requested based on the allocatedMemory value (floor (* 1.2 allocatedMemory)).</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> 
<td>chunksCache.l2.service</td> <td>object</td> <td>Service annotations and labels</td> <td><pre lang="json"> { "annotations": {}, "labels": {} } </pre> </td> </tr> <tr> <td>chunksCache.l2.statefulStrategy</td> <td>object</td> <td>Stateful chunks-cache strategy</td> <td><pre lang="json"> { "type": "RollingUpdate" } </pre> </td> </tr> <tr> <td>chunksCache.l2.suffix</td> <td>string</td> <td>Append to the name of the resources to make names different for l1 and l2</td> <td><pre lang="json"> "l2" </pre> </td> </tr> <tr> <td>chunksCache.l2.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the chunks-cache-l2 to shut down before it is killed</td> <td><pre lang="json"> 60 </pre> </td> </tr> <tr> <td>chunksCache.l2.timeout</td> <td>string</td> <td>Memcached operation timeout</td> <td><pre lang="json"> "2000ms" </pre> </td> </tr> <tr> <td>chunksCache.l2.tolerations</td> <td>list</td> <td>Tolerations for chunks-cache-l2 pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.l2.topologySpreadConstraints</td> <td>list</td> <td>topologySpreadConstraints allows customizing the default topologySpreadConstraints. This can be either a single dict as shown below or a slice of topologySpreadConstraints. 
labelSelector is taken from the constraint itself (if it exists) or is generated by the chart using the same selectors as for services.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.l2.writebackBuffer</td> <td>int</td> <td>Max number of objects to use for cache write back</td> <td><pre lang="json"> 500000 </pre> </td> </tr> <tr> <td>chunksCache.l2.writebackParallelism</td> <td>int</td> <td>Number of parallel threads for cache write back</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>chunksCache.l2.writebackSizeLimit</td> <td>string</td> <td>Max memory to use for cache write back</td> <td><pre lang="json"> "500MB" </pre> </td> </tr> <tr> <td>chunksCache.maxItemMemory</td> <td>int</td> <td>Maximum item memory for chunks-cache (in MB).</td> <td><pre lang="json"> 5 </pre> </td> </tr> <tr> <td>chunksCache.maxUnavailable</td> <td>int</td> <td>Pod Disruption Budget maxUnavailable</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>chunksCache.nodeSelector</td> <td>object</td> <td>Node selector for chunks-cache pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.parallelism</td> <td>int</td> <td>Parallel threads for sending and receiving chunks from chunks cache</td> <td><pre lang="json"> 5 </pre> </td> </tr> <tr> <td>chunksCache.persistence</td> <td>object</td> <td>Persistence settings for the chunks-cache</td> <td><pre lang="json"> { "enabled": false, "labels": {}, "mountPath": "/data", "storageClass": null, "storageSize": "10G", "volumeAttributesClassName": null } </pre> </td> </tr> <tr> <td>chunksCache.persistence.enabled</td> <td>bool</td> <td>Enable creating PVCs for the chunks-cache</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>chunksCache.persistence.mountPath</td> <td>string</td> <td>Volume mount path</td> <td><pre lang="json"> "/data" </pre> </td> </tr> <tr> <td>chunksCache.persistence.storageClass</td> <td>string</td> <td>Storage class to be used. 
If defined, storageClassName: <storageClass>. If set to "-", storageClassName: "", which disables dynamic provisioning. If empty or set to null, no storageClassName spec is set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>chunksCache.persistence.storageSize</td> <td>string</td> <td>Size of persistent disk, must be in G or Gi</td> <td><pre lang="json"> "10G" </pre> </td> </tr> <tr> <td>chunksCache.persistence.volumeAttributesClassName</td> <td>string</td> <td>Volume attributes class name to be used. If empty or set to null, no volumeAttributesClassName spec is set. Requires Kubernetes 1.31</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>chunksCache.podAnnotations</td> <td>object</td> <td>Annotations for chunks-cache pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.podLabels</td> <td>object</td> <td>Labels for chunks-cache pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>chunksCache.podManagementPolicy</td> <td>string</td> <td>Management policy for chunks-cache pods</td> <td><pre lang="json"> "Parallel" </pre> </td> </tr> <tr> <td>chunksCache.port</td> <td>int</td> <td>Port of the chunks-cache service</td> <td><pre lang="json"> 11211 </pre> </td> </tr> <tr> <td>chunksCache.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for chunks-cache pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>chunksCache.replicas</td> <td>int</td> <td>Number of replicas for the chunks-cache</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>chunksCache.resources</td> <td>string</td> <td>Resource requests and limits for the chunks-cache. By default, a safe memory limit will be requested based on the allocatedMemory value (floor (* 1.2 allocatedMemory)).</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>chunksCache.service</td> <td>object</td> 
<td>Service annotations and labels</td> <td><pre lang="json"> { "annotations": {}, "labels": {} } </pre> </td> </tr> <tr> <td>chunksCache.statefulStrategy</td> <td>object</td> <td>Stateful chunks-cache strategy</td> <td><pre lang="json"> { "type": "RollingUpdate" } </pre> </td> </tr> <tr> <td>chunksCache.suffix</td> <td>string</td> <td>Append to the name of the resources to make names different for l1 and l2</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>chunksCache.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the chunks-cache to shut down before it is killed</td> <td><pre lang="json"> 60 </pre> </td> </tr> <tr> <td>chunksCache.timeout</td> <td>string</td> <td>Memcached operation timeout</td> <td><pre lang="json"> "2000ms" </pre> </td> </tr> <tr> <td>chunksCache.tolerations</td> <td>list</td> <td>Tolerations for chunks-cache pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.topologySpreadConstraints</td> <td>list</td> <td>topologySpreadConstraints allows customizing the default topologySpreadConstraints. This can be either a single dict as shown below or a slice of topologySpreadConstraints. 
labelSelector is taken from the constraint itself (if it exists) or is generated by the chart using the same selectors as for services.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>chunksCache.writebackBuffer</td> <td>int</td> <td>Max number of objects to use for cache write back</td> <td><pre lang="json"> 500000 </pre> </td> </tr> <tr> <td>chunksCache.writebackParallelism</td> <td>int</td> <td>Number of parallel threads for cache write back</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>chunksCache.writebackSizeLimit</td> <td>string</td> <td>Max memory to use for cache write back</td> <td><pre lang="json"> "500MB" </pre> </td> </tr> <tr> <td>clusterLabelOverride</td> <td>string</td> <td>Overrides the chart's cluster label</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>commonLabels</td> <td>object</td> <td>Labels to be added to resources</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor</td> <td>object</td> <td>Configuration for the compactor</td> <td><pre lang="json"> { "affinity": { "podAntiAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": [ { "labelSelector": { "matchLabels": { "app.kubernetes.io/component": "compactor", "app.kubernetes.io/instance": "{{ .Release.Name }}", "app.kubernetes.io/name": "{{ include \"loki.name\" . 
}}" } }, "topologyKey": "kubernetes.io/hostname" } ] } }, "appProtocol": { "grpc": "" }, "command": null, "dnsConfig": {}, "extraArgs": [], "extraContainers": [], "extraEnv": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostAliases": [], "hostUsers": "nil", "image": { "registry": null, "repository": null, "tag": null }, "initContainers": [], "livenessProbe": {}, "nodeSelector": {}, "persistence": { "claims": [ { "accessModes": [ "ReadWriteOnce" ], "annotations": {}, "labels": {}, "name": "data", "size": "10Gi", "storageClass": null, "volumeAttributesClassName": null } ], "enableStatefulSetAutoDeletePVC": false, "enabled": false, "whenDeleted": "Retain", "whenScaled": "Retain" }, "podAnnotations": {}, "podLabels": {}, "priorityClassName": null, "readinessProbe": {}, "replicas": 0, "resources": {}, "serviceAccount": { "annotations": {}, "automountServiceAccountToken": true, "create": false, "imagePullSecrets": [], "name": null }, "serviceAnnotations": {}, "serviceLabels": {}, "serviceType": "ClusterIP", "startupProbe": {}, "terminationGracePeriodSeconds": 30, "tolerations": [] } </pre> </td> </tr> <tr> <td>compactor.affinity</td> <td>object</td> <td>Affinity for compactor pods. The value will be passed through tpl.</td> <td><pre lang=""> Hard node anti-affinity </pre> </td> </tr> <tr> <td>compactor.appProtocol</td> <td>object</td> <td>Set the optional grpc service protocol. 
Ex: "grpc", "http2" or "https"</td> <td><pre lang="json"> { "grpc": "" } </pre> </td> </tr> <tr> <td>compactor.command</td> <td>string</td> <td>Command to execute instead of defined in Docker image</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>compactor.dnsConfig</td> <td>object</td> <td>DNSConfig for compactor pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.extraArgs</td> <td>list</td> <td>Additional CLI args for the compactor</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>compactor.extraContainers</td> <td>list</td> <td>Containers to add to the compactor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>compactor.extraEnv</td> <td>list</td> <td>Environment variables to add to the compactor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>compactor.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the compactor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>compactor.extraVolumeMounts</td> <td>list</td> <td>Volume mounts to add to the compactor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>compactor.extraVolumes</td> <td>list</td> <td>Volumes to add to the compactor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>compactor.hostAliases</td> <td>list</td> <td>hostAliases to add</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>compactor.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the compactor</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>compactor.image.registry</td> <td>string</td> <td>The Docker registry for the compactor image. Overrides `loki.image.registry`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>compactor.image.repository</td> <td>string</td> <td>Docker image repository for the compactor image. 
Overrides `loki.image.repository`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>compactor.image.tag</td> <td>string</td> <td>Docker image tag for the compactor image. Overrides `loki.image.tag`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>compactor.initContainers</td> <td>list</td> <td>Init containers to add to the compactor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>compactor.livenessProbe</td> <td>object</td> <td>liveness probe settings for compactor pods. If empty use `loki.livenessProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.nodeSelector</td> <td>object</td> <td>Node selector for compactor pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.persistence.claims</td> <td>list</td> <td>List of the compactor PVCs</td> <td><pre lang="list"> </pre> </td> </tr> <tr> <td>compactor.persistence.claims[0].accessModes</td> <td>list</td> <td>Set access modes on the PersistentVolumeClaim</td> <td><pre lang="json"> [ "ReadWriteOnce" ] </pre> </td> </tr> <tr> <td>compactor.persistence.claims[0].annotations</td> <td>object</td> <td>Annotations for compactor PVCs</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.persistence.claims[0].labels</td> <td>object</td> <td>Labels for compactor PVCs</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.persistence.claims[0].volumeAttributesClassName</td> <td>string</td> <td>Volume attributes class name to be used. If empty or set to null, no volumeAttributesClassName spec is set. 
Requires Kubernetes 1.31</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>compactor.persistence.enableStatefulSetAutoDeletePVC</td> <td>bool</td> <td>Enable StatefulSetAutoDeletePVC feature</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>compactor.persistence.enabled</td> <td>bool</td> <td>Enable creating PVCs for the compactor</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>compactor.podAnnotations</td> <td>object</td> <td>Annotations for compactor pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.podLabels</td> <td>object</td> <td>Labels for compactor pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for compactor pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>compactor.readinessProbe</td> <td>object</td> <td>readiness probe settings for compactor pods. If empty, use `loki.readinessProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.replicas</td> <td>int</td> <td>Number of replicas for the compactor</td> <td><pre lang="json"> 0 </pre> </td> </tr> <tr> <td>compactor.resources</td> <td>object</td> <td>Resource requests and limits for the compactor</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.serviceAccount.annotations</td> <td>object</td> <td>Annotations for the compactor service account</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.serviceAccount.automountServiceAccountToken</td> <td>bool</td> <td>Set this toggle to false to opt out of automounting API credentials for the service account</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>compactor.serviceAccount.imagePullSecrets</td> <td>list</td> <td>Image pull secrets for the compactor service account</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>compactor.serviceAccount.name</td> <td>string</td> <td>The name of the ServiceAccount to use for the 
compactor. If not set and create is true, a name is generated by appending "-compactor" to the common ServiceAccount.</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>compactor.serviceAnnotations</td> <td>object</td> <td>Annotations for compactor service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.serviceLabels</td> <td>object</td> <td>Labels for compactor service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.serviceType</td> <td>string</td> <td>Service type for compactor service</td> <td><pre lang="json"> "ClusterIP" </pre> </td> </tr> <tr> <td>compactor.startupProbe</td> <td>object</td> <td>Startup probe settings for compactor pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>compactor.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the compactor to shut down before it is killed</td> <td><pre lang="json"> 30 </pre> </td> </tr> <tr> <td>compactor.tolerations</td> <td>list</td> <td>Tolerations for compactor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>deploymentMode</td> <td>string</td> <td>Deployment mode lets you specify how to deploy Loki. There are 3 options: - SingleBinary: Loki is deployed as a single binary, useful for small installs typically without HA, up to a few tens of GB/day. - SimpleScalable: Loki is deployed as 3 targets: read, write, and backend. Useful for medium installs, easier to manage than Distributed, up to about 1TB/day. - Distributed: Loki is deployed as individual microservices. The most complicated but most capable, useful for large installs, typically over 1TB/day. 
There are also 2 additional modes used for migrating between deployment modes: - SingleBinary<->SimpleScalable: Migrate from SingleBinary to SimpleScalable (or vice versa) - SimpleScalable<->Distributed: Migrate from SimpleScalable to Distributed (or vice versa) Note: SimpleScalable and Distributed REQUIRE the use of object storage.</td> <td><pre lang="json"> "SimpleScalable" </pre> </td> </tr> <tr> <td>distributor</td> <td>object</td> <td>Configuration for the distributor</td> <td><pre lang="json"> { "affinity": { "podAntiAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": [ { "labelSelector": { "matchLabels": { "app.kubernetes.io/component": "distributor", "app.kubernetes.io/instance": "{{ .Release.Name }}", "app.kubernetes.io/name": "{{ include \"loki.name\" . }}" } }, "topologyKey": "kubernetes.io/hostname" } ] } }, "appProtocol": { "grpc": "" }, "autoscaling": { "behavior": { "enabled": false, "scaleDown": {}, "scaleUp": {} }, "customMetrics": [], "enabled": false, "maxReplicas": 3, "minReplicas": 1, "targetCPUUtilizationPercentage": 60, "targetMemoryUtilizationPercentage": null }, "command": null, "dnsConfig": {}, "extraArgs": [], "extraContainers": [], "extraEnv": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostAliases": [], "hostUsers": "nil", "image": { "registry": null, "repository": null, "tag": null }, "initContainers": [], "maxSurge": 0, "maxUnavailable": null, "nodeSelector": {}, "podAnnotations": {}, "podLabels": {}, "priorityClassName": null, "replicas": 0, "resources": {}, "serviceAnnotations": {}, "serviceLabels": {}, "serviceType": "ClusterIP", "terminationGracePeriodSeconds": 30, "tolerations": [], "topologySpreadConstraints": [], "trafficDistribution": "" } </pre> </td> </tr> <tr> <td>distributor.affinity</td> <td>object</td> <td>Affinity for distributor pods. 
The value will be passed through tpl.</td> <td><pre lang=""> Hard node anti-affinity </pre> </td> </tr> <tr> <td>distributor.appProtocol</td> <td>object</td> <td>Adds the appProtocol field to the distributor service. This allows distributor to work with istio protocol selection.</td> <td><pre lang="json"> { "grpc": "" } </pre> </td> </tr> <tr> <td>distributor.appProtocol.grpc</td> <td>string</td> <td>Set the optional grpc service protocol. Ex: "grpc", "http2" or "https"</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>distributor.autoscaling.behavior.enabled</td> <td>bool</td> <td>Enable autoscaling behaviours</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>distributor.autoscaling.behavior.scaleDown</td> <td>object</td> <td>define scale down policies, must conform to HPAScalingRules</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>distributor.autoscaling.behavior.scaleUp</td> <td>object</td> <td>define scale up policies, must conform to HPAScalingRules</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>distributor.autoscaling.customMetrics</td> <td>list</td> <td>Allows one to define custom metrics using the HPA/v2 schema (for example, Pods, Object or External metrics)</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.autoscaling.enabled</td> <td>bool</td> <td>Enable autoscaling for the distributor</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>distributor.autoscaling.maxReplicas</td> <td>int</td> <td>Maximum autoscaling replicas for the distributor</td> <td><pre lang="json"> 3 </pre> </td> </tr> <tr> <td>distributor.autoscaling.minReplicas</td> <td>int</td> <td>Minimum autoscaling replicas for the distributor</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>distributor.autoscaling.targetCPUUtilizationPercentage</td> <td>int</td> <td>Target CPU utilisation percentage for the distributor</td> <td><pre lang="json"> 60 </pre> </td> </tr> <tr> 
<td>distributor.autoscaling.targetMemoryUtilizationPercentage</td> <td>string</td> <td>Target memory utilisation percentage for the distributor</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>distributor.command</td> <td>string</td> <td>Command to execute instead of defined in Docker image</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>distributor.dnsConfig</td> <td>object</td> <td>DNSConfig for distributor pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>distributor.extraArgs</td> <td>list</td> <td>Additional CLI args for the distributor</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.extraContainers</td> <td>list</td> <td>Containers to add to the distributor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.extraEnv</td> <td>list</td> <td>Environment variables to add to the distributor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the distributor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.extraVolumeMounts</td> <td>list</td> <td>Volume mounts to add to the distributor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.extraVolumes</td> <td>list</td> <td>Volumes to add to the distributor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.hostAliases</td> <td>list</td> <td>hostAliases to add</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the distributor</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>distributor.image.registry</td> <td>string</td> <td>The Docker registry for the distributor image. 
Overrides `loki.image.registry`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>distributor.image.repository</td> <td>string</td> <td>Docker image repository for the distributor image. Overrides `loki.image.repository`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>distributor.image.tag</td> <td>string</td> <td>Docker image tag for the distributor image. Overrides `loki.image.tag`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>distributor.initContainers</td> <td>list</td> <td>Init containers to add to the distributor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.maxSurge</td> <td>int</td> <td>Max Surge for distributor pods</td> <td><pre lang="json"> 0 </pre> </td> </tr> <tr> <td>distributor.maxUnavailable</td> <td>string</td> <td>Pod Disruption Budget maxUnavailable</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>distributor.nodeSelector</td> <td>object</td> <td>Node selector for distributor pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>distributor.podAnnotations</td> <td>object</td> <td>Annotations for distributor pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>distributor.podLabels</td> <td>object</td> <td>Labels for distributor pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>distributor.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for distributor pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>distributor.replicas</td> <td>int</td> <td>Number of replicas for the distributor</td> <td><pre lang="json"> 0 </pre> </td> </tr> <tr> <td>distributor.resources</td> <td>object</td> <td>Resource requests and limits for the distributor</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>distributor.serviceAnnotations</td> <td>object</td> <td>Annotations for distributor service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>distributor.serviceLabels</td> <td>object</td> <td>Labels for 
distributor service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>distributor.serviceType</td> <td>string</td> <td>Service type for distributor service</td> <td><pre lang="json"> "ClusterIP" </pre> </td> </tr> <tr> <td>distributor.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the distributor to shut down before it is killed</td> <td><pre lang="json"> 30 </pre> </td> </tr> <tr> <td>distributor.tolerations</td> <td>list</td> <td>Tolerations for distributor pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.topologySpreadConstraints</td> <td>list</td> <td>Topology Spread Constraints for distributor pods. The value will be passed through tpl.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>distributor.trafficDistribution</td> <td>string</td> <td>trafficDistribution for distributor service</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>enterprise</td> <td>object</td> <td>Configuration for running Enterprise Loki</td> <td><pre lang="json"> { "adminApi": { "enabled": true }, "adminToken": { "secret": null }, "canarySecret": null, "cluster_name": null, "config": "{{- if .Values.enterprise.adminApi.enabled }}\nadmin_client:\n {{ include \"enterprise-logs.adminAPIStorageConfig\" . | nindent 2 }}\n{{ end }}\nauth:\n type: {{ .Values.enterprise.adminApi.enabled | ternary \"enterprise\" \"trust\" }}\nauth_enabled: {{ .Values.loki.auth_enabled }}\ncluster_name: {{ include \"loki.clusterName\" . }}\nlicense:\n path: /etc/loki/license/license.jwt\n", "enabled": false, "externalConfigName": "", "externalLicenseName": null, "gelGateway": true, "image": { "digest": null, "pullPolicy": "IfNotPresent", "registry": "docker.io", "repository": "grafana/enterprise-logs", "tag": "3.6.7" }, "license": { "contents": "NOTAVALIDLICENSE" }, "provisioner": { "additionalTenants": [], "affinity": {}, "annotations": {}, "apiUrl": "{{ include \"loki.address\" . 
}}", "enabled": true, "env": [], "extraVolumeMounts": [], "extraVolumes": [], "hookType": "post-install", "hostUsers": "nil", "image": { "digest": null, "pullPolicy": "IfNotPresent", "registry": "us-docker.pkg.dev", "repository": "grafanalabs-global/docker-enterprise-provisioner-prod/enterprise-provisioner", "tag": "latest" }, "labels": {}, "nodeSelector": {}, "priorityClassName": null, "provisionedSecretPrefix": null, "securityContext": { "fsGroup": 10001, "runAsGroup": 10001, "runAsNonRoot": true, "runAsUser": 10001 }, "tolerations": [] }, "useExternalLicense": false, "version": "3.6.5" } </pre> </td> </tr> <tr> <td>enterprise.adminApi</td> <td>object</td> <td>If enabled, the correct admin_client storage will be configured. If disabled while running enterprise, make sure auth is set to `type: trust`, or that `auth_enabled` is set to `false`.</td> <td><pre lang="json"> { "enabled": true } </pre> </td> </tr> <tr> <td>enterprise.adminToken.secret</td> <td>string</td> <td>Name of external secret containing the admin token for enterprise provisioner This secret must exist before deploying and must contain a key named 'token'</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>enterprise.canarySecret</td> <td>string</td> <td>Alternative name of the secret to store token for the canary</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>enterprise.cluster_name</td> <td>string</td> <td>Optional name of the GEL cluster, otherwise will use .Release.Name The cluster name must match what is in your GEL license</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>enterprise.externalConfigName</td> <td>string</td> <td>Name of the external config secret to use</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>enterprise.externalLicenseName</td> <td>string</td> <td>Name of external license secret to use</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>enterprise.gelGateway</td> <td>bool</td> <td>Use GEL gateway, if false will use the 
default nginx gateway</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>enterprise.image.digest</td> <td>string</td> <td>Overrides the image tag with an image digest</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>enterprise.image.pullPolicy</td> <td>string</td> <td>Docker image pull policy</td> <td><pre lang="json"> "IfNotPresent" </pre> </td> </tr> <tr> <td>enterprise.image.registry</td> <td>string</td> <td>The Docker registry</td> <td><pre lang="json"> "docker.io" </pre> </td> </tr> <tr> <td>enterprise.image.repository</td> <td>string</td> <td>Docker image repository</td> <td><pre lang="json"> "grafana/enterprise-logs" </pre> </td> </tr> <tr> <td>enterprise.image.tag</td> <td>string</td> <td>Docker image tag</td> <td><pre lang="json"> "3.6.7" </pre> </td> </tr> <tr> <td>enterprise.license</td> <td>object</td> <td>Grafana Enterprise Logs license. In order to use Grafana Enterprise Logs features, you will need to provide the contents of your Grafana Enterprise Logs license, either by providing the contents of the license.jwt, or the name of the Kubernetes Secret that contains your license.jwt. To set the license contents, use the flag `--set-file 'enterprise.license.contents=./license.jwt'`</td> <td><pre lang="json"> { "contents": "NOTAVALIDLICENSE" } </pre> </td> </tr> <tr> <td>enterprise.provisioner</td> <td>object</td> <td>Configuration for the `provisioner` target. Note: Uses the enterprise.adminToken.secret value to mount the admin token used to call the admin API.</td> <td><pre lang="json"> { "additionalTenants": [], "affinity": {}, "annotations": {}, "apiUrl": "{{ include \"loki.address\" . 
}}", "enabled": true, "env": [], "extraVolumeMounts": [], "extraVolumes": [], "hookType": "post-install", "hostUsers": "nil", "image": { "digest": null, "pullPolicy": "IfNotPresent", "registry": "us-docker.pkg.dev", "repository": "grafanalabs-global/docker-enterprise-provisioner-prod/enterprise-provisioner", "tag": "latest" }, "labels": {}, "nodeSelector": {}, "priorityClassName": null, "provisionedSecretPrefix": null, "securityContext": { "fsGroup": 10001, "runAsGroup": 10001, "runAsNonRoot": true, "runAsUser": 10001 }, "tolerations": [] } </pre> </td> </tr> <tr> <td>enterprise.provisioner.additionalTenants</td> <td>list</td> <td>Additional tenants to be created. Each tenant will get a read and write policy and associated token. Tenant must have a name and a namespace for the secret containting the token to be created in. For example additionalTenants: - name: loki secretNamespace: grafana</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterprise.provisioner.affinity</td> <td>object</td> <td>Affinity for provisioner Pods The value will be passed through tpl.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterprise.provisioner.annotations</td> <td>object</td> <td>Additional annotations for the `provisioner` Job</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterprise.provisioner.apiUrl</td> <td>string</td> <td>url of the admin api to use for the provisioner</td> <td><pre lang="json"> "{{ include \"loki.address\" . 
}}" </pre> </td> </tr> <tr> <td>enterprise.provisioner.enabled</td> <td>bool</td> <td>Whether the job should be part of the deployment</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>enterprise.provisioner.env</td> <td>list</td> <td>Additional Kubernetes environment</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterprise.provisioner.extraVolumeMounts</td> <td>list</td> <td>Volume mounts to add to the provisioner pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterprise.provisioner.extraVolumes</td> <td>list</td> <td>Additional volumes for Pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterprise.provisioner.hookType</td> <td>string</td> <td>Hook type(s) to customize when the job runs. defaults to post-install</td> <td><pre lang="json"> "post-install" </pre> </td> </tr> <tr> <td>enterprise.provisioner.hostUsers</td> <td>string</td> <td>Use the host's user namespace in provisioner pods</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>enterprise.provisioner.image</td> <td>object</td> <td>Provisioner image to Utilize</td> <td><pre lang="json"> { "digest": null, "pullPolicy": "IfNotPresent", "registry": "us-docker.pkg.dev", "repository": "grafanalabs-global/docker-enterprise-provisioner-prod/enterprise-provisioner", "tag": "latest" } </pre> </td> </tr> <tr> <td>enterprise.provisioner.image.digest</td> <td>string</td> <td>Overrides the image tag with an image digest</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>enterprise.provisioner.image.pullPolicy</td> <td>string</td> <td>Docker image pull policy</td> <td><pre lang="json"> "IfNotPresent" </pre> </td> </tr> <tr> <td>enterprise.provisioner.image.registry</td> <td>string</td> <td>The Docker registry</td> <td><pre lang="json"> "us-docker.pkg.dev" </pre> </td> </tr> <tr> <td>enterprise.provisioner.image.repository</td> <td>string</td> <td>Docker image repository</td> <td><pre lang="json"> 
"grafanalabs-global/docker-enterprise-provisioner-prod/enterprise-provisioner" </pre> </td> </tr> <tr> <td>enterprise.provisioner.image.tag</td> <td>string</td> <td>Overrides the image tag whose default is the chart's appVersion</td> <td><pre lang="json"> "latest" </pre> </td> </tr> <tr> <td>enterprise.provisioner.labels</td> <td>object</td> <td>Additional labels for the `provisioner` Job</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterprise.provisioner.nodeSelector</td> <td>object</td> <td>Node selector for provisioner Pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterprise.provisioner.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for provisioner Job</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>enterprise.provisioner.provisionedSecretPrefix</td> <td>string</td> <td>Name of the secret to store provisioned tokens in</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>enterprise.provisioner.securityContext</td> <td>object</td> <td>Run containers as user `enterprise-logs(uid=10001)`</td> <td><pre lang="json"> { "fsGroup": 10001, "runAsGroup": 10001, "runAsNonRoot": true, "runAsUser": 10001 } </pre> </td> </tr> <tr> <td>enterprise.provisioner.tolerations</td> <td>list</td> <td>Tolerations for provisioner Pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterprise.useExternalLicense</td> <td>bool</td> <td>Set to true when providing an external license</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>enterpriseGateway</td> <td>object</td> <td>If running enterprise and using the default enterprise gateway, configs go here.</td> <td><pre lang="json"> { "affinity": {}, "annotations": {}, "containerSecurityContext": { "allowPrivilegeEscalation": false, "capabilities": { "drop": [ "ALL" ] }, "readOnlyRootFilesystem": true }, "env": [], "extraArgs": {}, "extraContainers": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostAliases": [], 
"hostUsers": "nil", "initContainers": [], "labels": {}, "livenessProbe": {}, "nodeSelector": {}, "podSecurityContext": { "fsGroup": 10001, "runAsGroup": 10001, "runAsNonRoot": true, "runAsUser": 10001 }, "readinessProbe": { "httpGet": { "path": "/ready", "port": "http-metrics" }, "initialDelaySeconds": 45 }, "replicas": 1, "resources": {}, "service": { "annotations": {}, "labels": {}, "type": "ClusterIP" }, "startupProbe": {}, "strategy": { "type": "RollingUpdate" }, "terminationGracePeriodSeconds": 60, "tolerations": [], "topologySpreadConstraints": [], "useDefaultProxyURLs": true } </pre> </td> </tr> <tr> <td>enterpriseGateway.affinity</td> <td>object</td> <td>Affinity for gateway Pods The value will be passed through tpl.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterpriseGateway.annotations</td> <td>object</td> <td>Additional annotations for the `gateway` Pod</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterpriseGateway.env</td> <td>list</td> <td>Configure optional environment variables</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterpriseGateway.extraArgs</td> <td>object</td> <td>Additional CLI arguments for the `gateway` target</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterpriseGateway.extraContainers</td> <td>list</td> <td>Conifgure optional extraContainers</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterpriseGateway.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the enterprise gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterpriseGateway.extraVolumeMounts</td> <td>list</td> <td>Additional volume mounts for Pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterpriseGateway.extraVolumes</td> <td>list</td> <td>Additional volumes for Pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterpriseGateway.hostAliases</td> <td>list</td> <td>hostAliases to add</td> <td><pre lang="json"> [] 
</pre> </td> </tr> <tr> <td>enterpriseGateway.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the `gateway` pod</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>enterpriseGateway.initContainers</td> <td>list</td> <td>Configure optional initContainers</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterpriseGateway.labels</td> <td>object</td> <td>Additional labels for the `gateway` Pod</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterpriseGateway.livenessProbe</td> <td>object</td> <td>Liveness probe</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterpriseGateway.nodeSelector</td> <td>object</td> <td>Node selector for gateway Pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterpriseGateway.podSecurityContext</td> <td>object</td> <td>Run container as user `enterprise-logs(uid=10001)`</td> <td><pre lang="json"> { "fsGroup": 10001, "runAsGroup": 10001, "runAsNonRoot": true, "runAsUser": 10001 } </pre> </td> </tr> <tr> <td>enterpriseGateway.readinessProbe</td> <td>object</td> <td>Readiness probe</td> <td><pre lang="json"> { "httpGet": { "path": "/ready", "port": "http-metrics" }, "initialDelaySeconds": 45 } </pre> </td> </tr> <tr> <td>enterpriseGateway.replicas</td> <td>int</td> <td>Define the number of instances</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>enterpriseGateway.resources</td> <td>object</td> <td>Values are defined in small.yaml and large.yaml</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterpriseGateway.service</td> <td>object</td> <td>Service overriding service type</td> <td><pre lang="json"> { "annotations": {}, "labels": {}, "type": "ClusterIP" } </pre> </td> </tr> <tr> <td>enterpriseGateway.startupProbe</td> <td>object</td> <td>Startup probe</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>enterpriseGateway.strategy</td> <td>object</td> <td>Update strategy</td> <td><pre lang="json"> { "type": "RollingUpdate" } </pre> </td> </tr> <tr> 
<td>enterpriseGateway.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the gateway to shut down before it is killed</td> <td><pre lang="json"> 60 </pre> </td> </tr> <tr> <td>enterpriseGateway.tolerations</td> <td>list</td> <td>Tolerations for gateway Pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterpriseGateway.topologySpreadConstraints</td> <td>list</td> <td>Topology Spread Constraints for enterprise-gateway pods. The value will be passed through tpl.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>enterpriseGateway.useDefaultProxyURLs</td> <td>bool</td> <td>If you want to use your own proxy URLs, set this to false.</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>extraObjects</td> <td>string</td> <td></td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>fullnameOverride</td> <td>string</td> <td>Overrides the chart's computed fullname</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.affinity</td> <td>object</td> <td>Affinity for gateway pods. 
The value will be passed through tpl.</td> <td><pre lang=""> Hard node anti-affinity </pre> </td> </tr> <tr> <td>gateway.annotations</td> <td>object</td> <td>Annotations for gateway deployment</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.autoscaling.behavior</td> <td>object</td> <td>Behavior policies while scaling.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.autoscaling.enabled</td> <td>bool</td> <td>Enable autoscaling for the gateway</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>gateway.autoscaling.maxReplicas</td> <td>int</td> <td>Maximum autoscaling replicas for the gateway</td> <td><pre lang="json"> 3 </pre> </td> </tr> <tr> <td>gateway.autoscaling.minReplicas</td> <td>int</td> <td>Minimum autoscaling replicas for the gateway</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>gateway.autoscaling.targetCPUUtilizationPercentage</td> <td>int</td> <td>Target CPU utilisation percentage for the gateway</td> <td><pre lang="json"> 60 </pre> </td> </tr> <tr> <td>gateway.autoscaling.targetMemoryUtilizationPercentage</td> <td>string</td> <td>Target memory utilisation percentage for the gateway</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.basicAuth.enabled</td> <td>bool</td> <td>Enables basic authentication for the gateway</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>gateway.basicAuth.existingSecret</td> <td>string</td> <td>Existing basic auth secret to use. Must contain '.htpasswd'</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.basicAuth.htpasswd</td> <td>string</td> <td>Uses the specified users from the `loki.tenants` list to create the htpasswd file. If `loki.tenants` is not set, the `gateway.basicAuth.username` and `gateway.basicAuth.password` are used. The value is templated using `tpl`. Override this to use a custom htpasswd, e.g. 
in case the default causes high CPU load.</td> <td><pre lang=""> Either `loki.tenants` or `gateway.basicAuth.username` and `gateway.basicAuth.password`. </pre> </td> </tr> <tr> <td>gateway.basicAuth.password</td> <td>string</td> <td>The basic auth password for the gateway</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.basicAuth.username</td> <td>string</td> <td>The basic auth username for the gateway</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.containerPort</td> <td>int</td> <td>Default container port</td> <td><pre lang="json"> 8080 </pre> </td> </tr> <tr> <td>gateway.containerSecurityContext</td> <td>object</td> <td>The SecurityContext for gateway containers</td> <td><pre lang="json"> { "allowPrivilegeEscalation": false, "capabilities": { "drop": [ "ALL" ] }, "readOnlyRootFilesystem": true } </pre> </td> </tr> <tr> <td>gateway.deploymentStrategy.type</td> <td>string</td> <td></td> <td><pre lang="json"> "RollingUpdate" </pre> </td> </tr> <tr> <td>gateway.dnsConfig</td> <td>object</td> <td>DNS config for gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.enabled</td> <td>bool</td> <td>Specifies whether the gateway should be enabled</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>gateway.extraArgs</td> <td>list</td> <td>Additional CLI args for the gateway</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>gateway.extraContainers</td> <td>list</td> <td>Containers to add to the gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>gateway.extraEnv</td> <td>list</td> <td>Environment variables to add to the gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>gateway.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>gateway.extraVolumeMounts</td> <td>list</td> <td>Volume mounts to add to the gateway pods</td> <td><pre 
lang="json"> [] </pre> </td> </tr> <tr> <td>gateway.extraVolumes</td> <td>list</td> <td>Volumes to add to the gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>gateway.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the gateway</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>gateway.image.digest</td> <td>string</td> <td>Overrides the gateway image tag with an image digest</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.image.pullPolicy</td> <td>string</td> <td>The gateway image pull policy</td> <td><pre lang="json"> "IfNotPresent" </pre> </td> </tr> <tr> <td>gateway.image.registry</td> <td>string</td> <td>The Docker registry for the gateway image</td> <td><pre lang="json"> "docker.io" </pre> </td> </tr> <tr> <td>gateway.image.repository</td> <td>string</td> <td>The gateway image repository</td> <td><pre lang="json"> "nginxinc/nginx-unprivileged" </pre> </td> </tr> <tr> <td>gateway.image.tag</td> <td>string</td> <td>The gateway image tag</td> <td><pre lang="json"> "1.29-alpine" </pre> </td> </tr> <tr> <td>gateway.ingress.annotations</td> <td>object</td> <td>Annotations for the gateway ingress</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.ingress.enabled</td> <td>bool</td> <td>Specifies whether an ingress for the gateway should be created</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>gateway.ingress.hosts</td> <td>list</td> <td>Hosts configuration for the gateway ingress, passed through the `tpl` function to allow templating</td> <td><pre lang="json"> [ { "host": "gateway.loki.example.com", "paths": [ { "path": "/" } ] } ] </pre> </td> </tr> <tr> <td>gateway.ingress.ingressClassName</td> <td>string</td> <td>Ingress Class Name. 
MAY be required for Kubernetes versions >= 1.18</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>gateway.ingress.labels</td> <td>object</td> <td>Labels for the gateway ingress</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.ingress.tls</td> <td>list</td> <td>TLS configuration for the gateway ingress. Hosts passed through the `tpl` function to allow templating</td> <td><pre lang="json"> [ { "hosts": [ "gateway.loki.example.com" ], "secretName": "loki-gateway-tls" } ] </pre> </td> </tr> <tr> <td>gateway.lifecycle</td> <td>object</td> <td>Lifecycle for the gateway container</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.livenessProbe</td> <td>object</td> <td>liveness probe for the nginx container in the gateway pods.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.nginxConfig.clientMaxBodySize</td> <td>string</td> <td>Allows customizing the `client_max_body_size` directive</td> <td><pre lang="json"> "4M" </pre> </td> </tr> <tr> <td>gateway.nginxConfig.customBackendUrl</td> <td>string</td> <td>Override Backend URL</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.nginxConfig.customReadUrl</td> <td>string</td> <td>Override Read URL</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.nginxConfig.customWriteUrl</td> <td>string</td> <td>Override Write URL</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.nginxConfig.enableIPv6</td> <td>bool</td> <td>Enable listener for IPv6, disable on IPv4-only systems</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>gateway.nginxConfig.file</td> <td>string</td> <td>Config file contents for Nginx. 
Passed through the `tpl` function to allow templating</td> <td><pre lang=""> See values.yaml </pre> </td> </tr> <tr> <td>gateway.nginxConfig.httpSnippet</td> <td>string</td> <td>Allows appending custom configuration to the http block, passed through the `tpl` function to allow templating</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>gateway.nginxConfig.locationSnippet</td> <td>string</td> <td>Allows appending custom configuration inside every location block, useful for authentication or setting headers that are not inherited from the server block, passed through the `tpl` function to allow templating.</td> <td><pre lang="json"> "{{ if .Values.loki.tenants }}proxy_set_header X-Scope-OrgID $remote_user;{{ end }}" </pre> </td> </tr> <tr> <td>gateway.nginxConfig.logFormat</td> <td>string</td> <td>NGINX log format</td> <td><pre lang="json"> "main '$remote_addr - $remote_user [$time_local] $status '\n '\"$request\" $body_bytes_sent \"$http_referer\" '\n '\"$http_user_agent\" \"$http_x_forwarded_for\"';" </pre> </td> </tr> <tr> <td>gateway.nginxConfig.resolver</td> <td>string</td> <td>Allows overriding the DNS resolver address nginx will use.</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>gateway.nginxConfig.schema</td> <td>string</td> <td>Which schema to use when building URLs.
Can be 'http' or 'https'.</td> <td><pre lang="json"> "http" </pre> </td> </tr> <tr> <td>gateway.nginxConfig.serverSnippet</td> <td>string</td> <td>Allows appending custom configuration to the server block</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>gateway.nginxConfig.ssl</td> <td>bool</td> <td>Whether ssl should be appended to the listen directive of the server block or not.</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>gateway.nodeSelector</td> <td>object</td> <td>Node selector for gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.podAnnotations</td> <td>object</td> <td>Annotations for gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.podLabels</td> <td>object</td> <td>Additional labels for gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.podSecurityContext</td> <td>object</td> <td>The SecurityContext for gateway containers</td> <td><pre lang="json"> { "fsGroup": 101, "runAsGroup": 101, "runAsNonRoot": true, "runAsUser": 101 } </pre> </td> </tr> <tr> <td>gateway.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for gateway pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.readinessProbe.httpGet.path</td> <td>string</td> <td></td> <td><pre lang="json"> "/" </pre> </td> </tr> <tr> <td>gateway.readinessProbe.httpGet.port</td> <td>string</td> <td></td> <td><pre lang="json"> "http-metrics" </pre> </td> </tr> <tr> <td>gateway.readinessProbe.initialDelaySeconds</td> <td>int</td> <td></td> <td><pre lang="json"> 15 </pre> </td> </tr> <tr> <td>gateway.readinessProbe.timeoutSeconds</td> <td>int</td> <td></td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>gateway.replicas</td> <td>int</td> <td>Number of replicas for the gateway</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>gateway.resources</td> <td>object</td> <td>Resource requests and limits for the gateway</td> <td><pre lang="json"> {} </pre> 
</td> </tr> <tr> <td>gateway.service.annotations</td> <td>object</td> <td>Annotations for the gateway service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.service.clusterIP</td> <td>string</td> <td>ClusterIP of the gateway service</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.service.labels</td> <td>object</td> <td>Labels for gateway service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.service.loadBalancerIP</td> <td>string</td> <td>Load balancer IP address if service type is LoadBalancer</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.service.nodePort</td> <td>int</td> <td>Node port if service type is NodePort</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>gateway.service.port</td> <td>int</td> <td>Port of the gateway service</td> <td><pre lang="json"> 80 </pre> </td> </tr> <tr> <td>gateway.service.trafficDistribution</td> <td>string</td> <td>trafficDistribution for gateway service</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>gateway.service.type</td> <td>string</td> <td>Type of the gateway service</td> <td><pre lang="json"> "ClusterIP" </pre> </td> </tr> <tr> <td>gateway.startupProbe</td> <td>object</td> <td>startup probe for the nginx container in the gateway pods.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>gateway.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the gateway to shut down before it is killed</td> <td><pre lang="json"> 30 </pre> </td> </tr> <tr> <td>gateway.tolerations</td> <td>list</td> <td>Tolerations for gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>gateway.topologySpreadConstraints</td> <td>list</td> <td>Topology Spread Constraints for gateway pods. The value will be passed through tpl.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>gateway.verboseLogging</td> <td>bool</td> <td>Enable logging of 2xx and 3xx HTTP requests</td> <td><pre lang="json"> true </pre>
</td> </tr> <tr> <td>global.clusterDomain</td> <td>string</td> <td>configures cluster domain ("cluster.local" by default)</td> <td><pre lang="json"> "cluster.local" </pre> </td> </tr> <tr> <td>global.dnsNamespace</td> <td>string</td> <td>configures DNS service namespace</td> <td><pre lang="json"> "kube-system" </pre> </td> </tr> <tr> <td>global.dnsService</td> <td>string</td> <td>configures DNS service name</td> <td><pre lang="json"> "kube-dns" </pre> </td> </tr> <tr> <td>global.extraArgs</td> <td>list</td> <td>Common additional CLI arguments for all jobs (for example, -log.level debug, -config.expand-env=true or -log-config-reverse-order) scope: admin-api, backend, bloom-builder, bloom-gateway, bloom-planner, compactor, distributor, index-gateway, ingester, overrides-exporter, pattern-ingester, querier, query-frontend, query-scheduler, read, ruler, write.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>global.extraEnv</td> <td>list</td> <td>Common environment variables to add to all pods directly managed by this chart. scope: admin-api, backend, bloom-builder, bloom-gateway, bloom-planner, compactor, distributor, index-gateway, ingester, overrides-exporter, pattern-ingester, querier, query-frontend, query-scheduler, read, ruler, write.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>global.extraEnvFrom</td> <td>list</td> <td>Common source of environment injections to add to all pods directly managed by this chart. scope: admin-api, backend, bloom-builder, bloom-gateway, bloom-planner, compactor, distributor, index-gateway, ingester, overrides-exporter, pattern-ingester, querier, query-frontend, query-scheduler, read, ruler, write. For example, to inject values from a Secret, use: extraEnvFrom: - secretRef: name: mysecret</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>global.extraVolumeMounts</td> <td>list</td> <td>Common mount points to add to all pods directly managed by this chart.
scope: admin-api, backend, bloom-builder, bloom-gateway, bloom-planner, compactor, distributor, index-gateway, ingester, overrides-exporter, pattern-ingester, querier, query-frontend, query-scheduler, read, ruler, write.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>global.extraVolumes</td> <td>list</td> <td>Common volumes to add to all pods directly managed by this chart. scope: admin-api, backend, bloom-builder, bloom-gateway, bloom-planner, compactor, distributor, index-gateway, ingester, overrides-exporter, pattern-ingester, querier, query-frontend, query-scheduler, read, ruler, write.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>global.image.registry</td> <td>string</td> <td>Overrides the Docker registry globally for all images (deprecated, use global.imageRegistry)</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>global.imageRegistry</td> <td>string</td> <td>Overrides the Docker registry globally for all images (standard format)</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>global.priorityClassName</td> <td>string</td> <td>Overrides the priorityClassName for all pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>imagePullSecrets</td> <td>list</td> <td>Image pull secrets for Docker images</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway</td> <td>object</td> <td>Configuration for the index-gateway</td> <td><pre lang="json"> { "affinity": { "podAntiAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": [ { "labelSelector": { "matchLabels": { "app.kubernetes.io/component": "index-gateway", "app.kubernetes.io/instance": "{{ .Release.Name }}", "app.kubernetes.io/name": "{{ include \"loki.name\" . 
}}" } }, "topologyKey": "kubernetes.io/hostname" } ] } }, "appProtocol": { "grpc": "" }, "dnsConfig": {}, "extraArgs": [], "extraContainers": [], "extraEnv": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostAliases": [], "hostUsers": "nil", "image": { "registry": null, "repository": null, "tag": null }, "initContainers": [], "joinMemberlist": true, "lifecycle": {}, "maxUnavailable": null, "nodeSelector": {}, "persistence": { "accessModes": [ "ReadWriteOnce" ], "annotations": {}, "enableStatefulSetAutoDeletePVC": false, "enabled": false, "inMemory": false, "labels": {}, "size": "10Gi", "storageClass": null, "volumeAttributesClassName": null, "whenDeleted": "Retain", "whenScaled": "Retain" }, "podAnnotations": {}, "podLabels": {}, "priorityClassName": null, "replicas": 0, "resources": {}, "serviceAnnotations": {}, "serviceLabels": {}, "serviceType": "ClusterIP", "terminationGracePeriodSeconds": 300, "tolerations": [], "topologySpreadConstraints": [], "trafficDistribution": "", "updateStrategy": { "type": "RollingUpdate" } } </pre> </td> </tr> <tr> <td>indexGateway.affinity</td> <td>object</td> <td>Affinity for index-gateway pods. The value will be passed through tpl.</td> <td><pre lang=""> Hard node anti-affinity </pre> </td> </tr> <tr> <td>indexGateway.appProtocol</td> <td>object</td> <td>Set the optional grpc service protocol. 
Ex: "grpc", "http2" or "https"</td> <td><pre lang="json"> { "grpc": "" } </pre> </td> </tr> <tr> <td>indexGateway.dnsConfig</td> <td>object</td> <td>DNSConfig for index-gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.extraArgs</td> <td>list</td> <td>Additional CLI args for the index-gateway</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.extraContainers</td> <td>list</td> <td>Containers to add to the index-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.extraEnv</td> <td>list</td> <td>Environment variables to add to the index-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the index-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.extraVolumeMounts</td> <td>list</td> <td>Volume mounts to add to the index-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.extraVolumes</td> <td>list</td> <td>Volumes to add to the index-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.hostAliases</td> <td>list</td> <td>hostAliases to add</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the index-gateway</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>indexGateway.image.registry</td> <td>string</td> <td>The Docker registry for the index-gateway image. Overrides `loki.image.registry`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>indexGateway.image.repository</td> <td>string</td> <td>Docker image repository for the index-gateway image. Overrides `loki.image.repository`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>indexGateway.image.tag</td> <td>string</td> <td>Docker image tag for the index-gateway image. 
Overrides `loki.image.tag`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>indexGateway.initContainers</td> <td>list</td> <td>Init containers to add to the index-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.joinMemberlist</td> <td>bool</td> <td>Whether the index gateway should join the memberlist hashring</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>indexGateway.lifecycle</td> <td>object</td> <td>Lifecycle for the index-gateway container</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.maxUnavailable</td> <td>string</td> <td>Pod Disruption Budget maxUnavailable</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>indexGateway.nodeSelector</td> <td>object</td> <td>Node selector for index-gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.persistence.accessModes</td> <td>list</td> <td>Set access modes on the PersistentVolumeClaim</td> <td><pre lang="json"> [ "ReadWriteOnce" ] </pre> </td> </tr> <tr> <td>indexGateway.persistence.annotations</td> <td>object</td> <td>Annotations for index gateway PVCs</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.persistence.enableStatefulSetAutoDeletePVC</td> <td>bool</td> <td>Enable StatefulSetAutoDeletePVC feature</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>indexGateway.persistence.enabled</td> <td>bool</td> <td>Enable creating PVCs which is required when using boltdb-shipper</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>indexGateway.persistence.inMemory</td> <td>bool</td> <td>Use emptyDir with ramdisk for storage. 
**Please note that all data in indexGateway will be lost on pod restart**</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>indexGateway.persistence.labels</td> <td>object</td> <td>Labels for index gateway PVCs</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.persistence.size</td> <td>string</td> <td>Size of persistent or memory disk</td> <td><pre lang="json"> "10Gi" </pre> </td> </tr> <tr> <td>indexGateway.persistence.storageClass</td> <td>string</td> <td>Storage class to be used. If defined, storageClassName: <storageClass>. If set to "-", storageClassName: "", which disables dynamic provisioning. If empty or set to null, no storageClassName spec is set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>indexGateway.persistence.volumeAttributesClassName</td> <td>string</td> <td>Volume attributes class name to be used. If empty or set to null, no volumeAttributesClassName spec is set. 
Requires Kubernetes 1.31</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>indexGateway.podAnnotations</td> <td>object</td> <td>Annotations for index-gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.podLabels</td> <td>object</td> <td>Labels for index-gateway pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.priorityClassName</td> <td>string</td> <td>The name of the PriorityClass for index-gateway pods</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>indexGateway.replicas</td> <td>int</td> <td>Number of replicas for the index-gateway</td> <td><pre lang="json"> 0 </pre> </td> </tr> <tr> <td>indexGateway.resources</td> <td>object</td> <td>Resource requests and limits for the index-gateway</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.serviceAnnotations</td> <td>object</td> <td>Annotations for index-gateway service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.serviceLabels</td> <td>object</td> <td>Labels for index-gateway service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>indexGateway.serviceType</td> <td>string</td> <td>Service type for index-gateway service</td> <td><pre lang="json"> "ClusterIP" </pre> </td> </tr> <tr> <td>indexGateway.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the index-gateway to shut down before it is killed.</td> <td><pre lang="json"> 300 </pre> </td> </tr> <tr> <td>indexGateway.tolerations</td> <td>list</td> <td>Tolerations for index-gateway pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.topologySpreadConstraints</td> <td>list</td> <td>Topology Spread Constraints for index-gateway pods. The value will be passed through tpl.</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>indexGateway.trafficDistribution</td> <td>string</td> <td>trafficDistribution for index-gateway service</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> 
<td>indexGateway.updateStrategy</td> <td>object</td> <td>UpdateStrategy for the indexGateway StatefulSet.</td> <td><pre lang="json"> { "type": "RollingUpdate" } </pre> </td> </tr> <tr> <td>indexGateway.updateStrategy.type</td> <td>string</td> <td>One of 'OnDelete' or 'RollingUpdate'</td> <td><pre lang="json"> "RollingUpdate" </pre> </td> </tr> <tr> <td>ingester</td> <td>object</td> <td>Configuration for the ingester</td> <td><pre lang="json"> { "addIngesterNamePrefix": false, "affinity": { "podAntiAffinity": { "requiredDuringSchedulingIgnoredDuringExecution": [ { "labelSelector": { "matchLabels": { "app.kubernetes.io/component": "ingester", "app.kubernetes.io/instance": "{{ .Release.Name }}", "app.kubernetes.io/name": "{{ include \"loki.name\" . }}" } }, "topologyKey": "kubernetes.io/hostname" } ] } }, "appProtocol": { "grpc": "" }, "autoscaling": { "behavior": { "enabled": false, "scaleDown": {}, "scaleUp": {} }, "customMetrics": [], "enabled": false, "maxReplicas": 3, "minReplicas": 1, "targetCPUUtilizationPercentage": 60, "targetMemoryUtilizationPercentage": null }, "command": null, "dnsConfig": {}, "extraArgs": [], "extraContainers": [], "extraEnv": [], "extraEnvFrom": [], "extraVolumeMounts": [], "extraVolumes": [], "hostAliases": [], "hostUsers": "nil", "image": { "registry": null, "repository": null, "tag": null }, "initContainers": [], "labels": {}, "lifecycle": {}, "livenessProbe": {}, "maxUnavailable": 1, "nodeSelector": {}, "persistence": { "claims": [ { "accessModes": [ "ReadWriteOnce" ], "name": "data", "size": "10Gi", "storageClass": null, "volumeAttributesClassName": null } ], "enableStatefulSetAutoDeletePVC": false, "enabled": false, "inMemory": false, "whenDeleted": "Retain", "whenScaled": "Retain" }, "podAnnotations": {}, "podLabels": {}, "priorityClassName": null, "readinessProbe": {}, "replicas": 0, "resources": {}, "rolloutGroupPrefix": null, "serviceAnnotations": {}, "serviceLabels": {}, "serviceType": "ClusterIP", "startupProbe": {}, 
"terminationGracePeriodSeconds": 300, "tolerations": [], "topologySpreadConstraints": [ { "labelSelector": { "matchLabels": { "app.kubernetes.io/component": "ingester", "app.kubernetes.io/instance": "{{ .Release.Name }}", "app.kubernetes.io/name": "{{ include \"loki.name\" . }}" } }, "maxSkew": 1, "topologyKey": "kubernetes.io/hostname", "whenUnsatisfiable": "ScheduleAnyway" } ], "trafficDistribution": "", "updateStrategy": { "type": "RollingUpdate" }, "zoneAwareReplication": { "enabled": true, "maxUnavailablePct": 33, "migration": { "enabled": false, "excludeDefaultZone": false, "readPath": false, "writePath": false }, "zoneA": { "annotations": {}, "extraAffinity": {}, "nodeSelector": null, "podAnnotations": {} }, "zoneB": { "annotations": {}, "extraAffinity": {}, "nodeSelector": null, "podAnnotations": {} }, "zoneC": { "annotations": {}, "extraAffinity": {}, "nodeSelector": null, "podAnnotations": {} } } } </pre> </td> </tr> <tr> <td>ingester.affinity</td> <td>object</td> <td>Affinity for ingester pods. Ignored if zoneAwareReplication is enabled. The value will be passed through tpl.</td> <td><pre lang=""> Hard node anti-affinity </pre> </td> </tr> <tr> <td>ingester.appProtocol</td> <td>object</td> <td>Adds the appProtocol field to the ingester service. This allows ingester to work with istio protocol selection.</td> <td><pre lang="json"> { "grpc": "" } </pre> </td> </tr> <tr> <td>ingester.appProtocol.grpc</td> <td>string</td> <td>Set the optional grpc service protocol. 
Ex: "grpc", "http2" or "https"</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>ingester.autoscaling.behavior.enabled</td> <td>bool</td> <td>Enable autoscaling behaviours</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>ingester.autoscaling.behavior.scaleDown</td> <td>object</td> <td>define scale down policies, must conform to HPAScalingRules</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.autoscaling.behavior.scaleUp</td> <td>object</td> <td>define scale up policies, must conform to HPAScalingRules</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.autoscaling.customMetrics</td> <td>list</td> <td>Allows one to define custom metrics using the HPA/v2 schema (for example, Pods, Object or External metrics)</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.autoscaling.enabled</td> <td>bool</td> <td>Enable autoscaling for the ingester</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>ingester.autoscaling.maxReplicas</td> <td>int</td> <td>Maximum autoscaling replicas for the ingester</td> <td><pre lang="json"> 3 </pre> </td> </tr> <tr> <td>ingester.autoscaling.minReplicas</td> <td>int</td> <td>Minimum autoscaling replicas for the ingester</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>ingester.autoscaling.targetCPUUtilizationPercentage</td> <td>int</td> <td>Target CPU utilisation percentage for the ingester</td> <td><pre lang="json"> 60 </pre> </td> </tr> <tr> <td>ingester.autoscaling.targetMemoryUtilizationPercentage</td> <td>string</td> <td>Target memory utilisation percentage for the ingester</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>ingester.command</td> <td>string</td> <td>Command to execute instead of defined in Docker image</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>ingester.dnsConfig</td> <td>object</td> <td>DNSConfig for ingester pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.extraArgs</td> <td>list</td> 
<td>Additional CLI args for the ingester</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.extraContainers</td> <td>list</td> <td>Containers to add to the ingester pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.extraEnv</td> <td>list</td> <td>Environment variables to add to the ingester pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.extraEnvFrom</td> <td>list</td> <td>Environment variables from secrets or configmaps to add to the ingester pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.extraVolumeMounts</td> <td>list</td> <td>Volume mounts to add to the ingester pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.extraVolumes</td> <td>list</td> <td>Volumes to add to the ingester pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.hostAliases</td> <td>list</td> <td>hostAliases to add</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.hostUsers</td> <td>string</td> <td>Use the host's user namespace in the ingester</td> <td><pre lang="json"> "nil" </pre> </td> </tr> <tr> <td>ingester.image.registry</td> <td>string</td> <td>The Docker registry for the ingester image. Overrides `loki.image.registry`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>ingester.image.repository</td> <td>string</td> <td>Docker image repository for the ingester image. Overrides `loki.image.repository`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>ingester.image.tag</td> <td>string</td> <td>Docker image tag for the ingester image. 
Overrides `loki.image.tag`</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>ingester.initContainers</td> <td>list</td> <td>Init containers to add to the ingester pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.lifecycle</td> <td>object</td> <td>Lifecycle for the ingester container</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.livenessProbe</td> <td>object</td> <td>liveness probe settings for ingester pods. If empty, use `loki.livenessProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.maxUnavailable</td> <td>int</td> <td>Pod Disruption Budget maxUnavailable</td> <td><pre lang="json"> 1 </pre> </td> </tr> <tr> <td>ingester.nodeSelector</td> <td>object</td> <td>Node selector for ingester pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.persistence.claims</td> <td>list</td> <td>List of the ingester PVCs</td> <td><pre lang="list"> </pre> </td> </tr> <tr> <td>ingester.persistence.claims[0].accessModes</td> <td>list</td> <td>Set access modes on the PersistentVolumeClaim</td> <td><pre lang="json"> [ "ReadWriteOnce" ] </pre> </td> </tr> <tr> <td>ingester.persistence.claims[0].volumeAttributesClassName</td> <td>string</td> <td>Volume attributes class name to be used. If empty or set to null, no volumeAttributesClassName spec is set. Requires Kubernetes 1.31</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>ingester.persistence.enableStatefulSetAutoDeletePVC</td> <td>bool</td> <td>Enable StatefulSetAutoDeletePVC feature</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>ingester.persistence.enabled</td> <td>bool</td> <td>Enable creating PVCs which is required when using boltdb-shipper</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>ingester.persistence.inMemory</td> <td>bool</td> <td>Use emptyDir with ramdisk for storage. 
**Please note that all data in ingester will be lost on pod restart**</td> <td><pre lang="json"> false </pre> </td> </tr> <tr> <td>ingester.podAnnotations</td> <td>object</td> <td>Annotations for ingester pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.podLabels</td> <td>object</td> <td>Labels for ingester pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.readinessProbe</td> <td>object</td> <td>readiness probe settings for ingester pods. If empty, use `loki.readinessProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.replicas</td> <td>int</td> <td>Number of replicas for the ingester. When zoneAwareReplication.enabled is true, the total number of replicas will match this value, with each zone having 1/3rd of the total replicas.</td> <td><pre lang="json"> 0 </pre> </td> </tr> <tr> <td>ingester.resources</td> <td>object</td> <td>Resource requests and limits for the ingester</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.serviceAnnotations</td> <td>object</td> <td>Annotations for ingester service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.serviceLabels</td> <td>object</td> <td>Labels for ingester service</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.serviceType</td> <td>string</td> <td>Service type for ingester service</td> <td><pre lang="json"> "ClusterIP" </pre> </td> </tr> <tr> <td>ingester.startupProbe</td> <td>object</td> <td>startup probe settings for ingester pods. If empty, use `loki.startupProbe`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.terminationGracePeriodSeconds</td> <td>int</td> <td>Grace period to allow the ingester to shut down before it is killed. Especially for the ingester, this must be increased. 
It must be long enough for ingesters to shut down gracefully, flushing/transferring all data and successfully leaving the member ring on shutdown.</td> <td><pre lang="json"> 300 </pre> </td> </tr> <tr> <td>ingester.tolerations</td> <td>list</td> <td>Tolerations for ingester pods</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>ingester.topologySpreadConstraints</td> <td>list</td> <td>topologySpread for ingester pods. The value will be passed through tpl.</td> <td><pre lang=""> Defaults to allow skew no more than 1 node </pre> </td> </tr> <tr> <td>ingester.trafficDistribution</td> <td>string</td> <td>trafficDistribution for ingester service</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>ingester.updateStrategy</td> <td>object</td> <td>UpdateStrategy for the ingester StatefulSets.</td> <td><pre lang="json"> { "type": "RollingUpdate" } </pre> </td> </tr> <tr> <td>ingester.updateStrategy.type</td> <td>string</td> <td>One of 'OnDelete' or 'RollingUpdate'</td> <td><pre lang="json"> "RollingUpdate" </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication</td> <td>object</td> <td>Enabling zone awareness on ingesters will create 3 StatefulSets where all writes will send a replica to each zone. This is primarily intended to accelerate rollout operations by allowing for multiple ingesters within a single zone to be shut down and restarted simultaneously (the remaining 2 zones will be guaranteed to have at least one copy of the data). Note: This can be used to run Loki over multiple cloud provider availability zones; however, this is not currently recommended, as Loki is not optimized for this and cross-zone network traffic costs can become extremely high very quickly. 
Even with zone awareness enabled, it is recommended to run Loki in a single availability zone.</td> <td><pre lang="json"> { "enabled": true, "maxUnavailablePct": 33, "migration": { "enabled": false, "excludeDefaultZone": false, "readPath": false, "writePath": false }, "zoneA": { "annotations": {}, "extraAffinity": {}, "nodeSelector": null, "podAnnotations": {} }, "zoneB": { "annotations": {}, "extraAffinity": {}, "nodeSelector": null, "podAnnotations": {} }, "zoneC": { "annotations": {}, "extraAffinity": {}, "nodeSelector": null, "podAnnotations": {} } } </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.enabled</td> <td>bool</td> <td>Enable zone awareness.</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.maxUnavailablePct</td> <td>int</td> <td>The percentage of replicas in each zone that will be restarted at once, as a value of 0-100.</td> <td><pre lang="json"> 33 </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.migration</td> <td>object</td> <td>The migration block allows migrating non-zone-aware ingesters to zone-aware ingesters.</td> <td><pre lang="json"> { "enabled": false, "excludeDefaultZone": false, "readPath": false, "writePath": false } </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneA</td> <td>object</td> <td>zoneA configuration</td> <td><pre lang="json"> { "annotations": {}, "extraAffinity": {}, "nodeSelector": null, "podAnnotations": {} } </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneA.annotations</td> <td>object</td> <td>Specific annotations to add to zone A statefulset</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneA.extraAffinity</td> <td>object</td> <td>optionally define extra affinity rules, by default different zones are not allowed to schedule on the same host. The value will be passed through tpl.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneA.nodeSelector</td> 
<td>string</td> <td>optionally define a node selector for this zone</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneA.podAnnotations</td> <td>object</td> <td>Specific annotations to add to zone A pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneB.annotations</td> <td>object</td> <td>Specific annotations to add to zone B statefulset</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneB.extraAffinity</td> <td>object</td> <td>optionally define extra affinity rules, by default different zones are not allowed to schedule on the same host. The value will be passed through tpl.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneB.nodeSelector</td> <td>string</td> <td>optionally define a node selector for this zone</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneB.podAnnotations</td> <td>object</td> <td>Specific annotations to add to zone B pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneC.annotations</td> <td>object</td> <td>Specific annotations to add to zone C statefulset</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneC.extraAffinity</td> <td>object</td> <td>optionally define extra affinity rules, by default different zones are not allowed to schedule on the same host. The value will be passed through tpl.</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneC.nodeSelector</td> <td>string</td> <td>optionally define a node selector for this zone</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>ingester.zoneAwareReplication.zoneC.podAnnotations</td> <td>object</td> <td>Specific annotations to add to zone C pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>ingress</td> <td>object</td> <td>Ingress 
configuration. Use either this ingress or the gateway, but not both at once. If you enable this, make sure to disable the gateway. You'll need to supply authentication configuration for your ingress controller.</td> <td><pre lang="json"> { "annotations": {}, "enabled": false, "hosts": [ "loki.example.com" ], "ingressClassName": "", "labels": {}, "paths": { "compactor": [ "/loki/api/v1/delete" ], "distributor": [ "/api/prom/push", "/loki/api/v1/push", "/otlp/v1/logs", "/ui" ], "queryFrontend": [ "/api/prom/query", "/api/prom/label", "/api/prom/series", "/api/prom/tail", "/loki/api/v1/query", "/loki/api/v1/query_range", "/loki/api/v1/tail", "/loki/api/v1/label", "/loki/api/v1/labels", "/loki/api/v1/series", "/loki/api/v1/index/stats", "/loki/api/v1/index/volume", "/loki/api/v1/index/volume_range", "/loki/api/v1/format_query", "/loki/api/v1/detected_field", "/loki/api/v1/detected_fields", "/loki/api/v1/detected_labels", "/loki/api/v1/patterns" ], "ruler": [ "/api/prom/rules", "/api/prom/api/v1/rules", "/api/prom/api/v1/alerts", "/loki/api/v1/rules", "/prometheus/api/v1/rules", "/prometheus/api/v1/alerts" ] }, "tls": [] } </pre> </td> </tr> <tr> <td>ingress.hosts</td> <td>list</td> <td>Hosts configuration for the ingress, passed through the `tpl` function to allow templating</td> <td><pre lang="json"> [ "loki.example.com" ] </pre> </td> </tr> <tr> <td>ingress.paths.compactor</td> <td>list</td> <td>Paths that are exposed by Loki Compactor. If deployment mode is Distributed, the requests are forwarded to the service: `{{"loki.compactorFullname"}}`. If deployment mode is SimpleScalable, the requests are forwarded to the k8s service: `{{"loki.backendFullname"}}`. If deployment mode is SingleBinary, the requests are forwarded to the central/single k8s service: `{{"loki.singleBinaryFullname"}}`</td> <td><pre lang="json"> [ "/loki/api/v1/delete" ] </pre> </td> </tr> <tr> <td>ingress.paths.distributor</td> <td>list</td> <td>Paths that are exposed by Loki Distributor.
If deployment mode is Distributed, the requests are forwarded to the service: `{{"loki.distributorFullname"}}`. If deployment mode is SimpleScalable, the requests are forwarded to the write k8s service: `{{"loki.writeFullname"}}`. If deployment mode is SingleBinary, the requests are forwarded to the central/single k8s service: `{{"loki.singleBinaryFullname"}}`</td> <td><pre lang="json"> [ "/api/prom/push", "/loki/api/v1/push", "/otlp/v1/logs", "/ui" ] </pre> </td> </tr> <tr> <td>ingress.paths.queryFrontend</td> <td>list</td> <td>Paths that are exposed by Loki Query Frontend. If deployment mode is Distributed, the requests are forwarded to the service: `{{"loki.queryFrontendFullname"}}`. If deployment mode is SimpleScalable, the requests are forwarded to the read k8s service: `{{"loki.readFullname"}}`. If deployment mode is SingleBinary, the requests are forwarded to the central/single k8s service: `{{"loki.singleBinaryFullname"}}`</td> <td><pre lang="json"> [ "/api/prom/query", "/api/prom/label", "/api/prom/series", "/api/prom/tail", "/loki/api/v1/query", "/loki/api/v1/query_range", "/loki/api/v1/tail", "/loki/api/v1/label", "/loki/api/v1/labels", "/loki/api/v1/series", "/loki/api/v1/index/stats", "/loki/api/v1/index/volume", "/loki/api/v1/index/volume_range", "/loki/api/v1/format_query", "/loki/api/v1/detected_field", "/loki/api/v1/detected_fields", "/loki/api/v1/detected_labels", "/loki/api/v1/patterns" ] </pre> </td> </tr> <tr> <td>ingress.paths.ruler</td> <td>list</td> <td>Paths that are exposed by Loki Ruler. If deployment mode is Distributed, the requests are forwarded to the service: `{{"loki.rulerFullname"}}`. If deployment mode is SimpleScalable, the requests are forwarded to the k8s service: `{{"loki.backendFullname"}}`. If deployment mode is SimpleScalable but `read.legacyReadTarget` is `true`, the requests are forwarded to the k8s service: `{{"loki.readFullname"}}`.
If deployment mode is SingleBinary, the requests are forwarded to the central/single k8s service: `{{"loki.singleBinaryFullname"}}`</td> <td><pre lang="json"> [ "/api/prom/rules", "/api/prom/api/v1/rules", "/api/prom/api/v1/alerts", "/loki/api/v1/rules", "/prometheus/api/v1/rules", "/prometheus/api/v1/alerts" ] </pre> </td> </tr> <tr> <td>ingress.tls</td> <td>list</td> <td>TLS configuration for the ingress. Hosts are passed through the `tpl` function to allow templating</td> <td><pre lang="json"> [] </pre> </td> </tr> <tr> <td>kubeVersionOverride</td> <td>string</td> <td>Overrides the version used to determine compatibility of resources with the target Kubernetes cluster. This is useful when using `helm template`, because then helm will use the client version of kubectl as the Kubernetes version, which may or may not match your cluster's server version. Example: 'v1.24.4'. Set to null to use the version that Helm determines.</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>loki</td> <td>object</td> <td>Configuration for running Loki</td> <td><pre lang=""> See values.yaml </pre> </td> </tr> <tr> <td>loki.analytics</td> <td>object</td> <td>Optional analytics configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.annotations</td> <td>object</td> <td>Common annotations for all deployments/StatefulSets</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.block_builder</td> <td>object</td> <td>Optional block builder configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.commonConfig</td> <td>object</td> <td>Check https://grafana.com/docs/loki/latest/configuration/#common_config for more info on how to provide a common configuration</td> <td><pre lang="json"> { "compactor_grpc_address": "{{ include \"loki.compactorAddress\" . }}", "path_prefix": "/var/loki", "replication_factor": 3 } </pre> </td> </tr> <tr> <td>loki.commonConfig.compactor_grpc_address</td> <td>string</td> <td>The gRPC address of the compactor.
The use of compactor_grpc_address is preferred over compactor_address. If a customized compactor_address is set, compactor_grpc_address should be set to an empty string.</td> <td><pre lang="json"> "{{ include \"loki.compactorAddress\" . }}" </pre> </td> </tr> <tr> <td>loki.compactor</td> <td>object</td> <td>Optional compactor configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.compactor_grpc_client</td> <td>object</td> <td>Optional compactor gRPC client configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.config</td> <td>string</td> <td>Config file contents for Loki</td> <td><pre lang=""> See values.yaml </pre> </td> </tr> <tr> <td>loki.configObjectName</td> <td>string</td> <td>The name of the object which Loki will mount as a volume containing the config. If the configStorageType is Secret, this will be the name of the Secret; if it is ConfigMap, this will be the name of the ConfigMap. The value will be passed through tpl.</td> <td><pre lang="json"> "{{ include \"loki.name\" . }}" </pre> </td> </tr> <tr> <td>loki.configStorageType</td> <td>string</td> <td>Defines what kind of object stores the configuration, a ConfigMap or a Secret. In order to move sensitive information (such as credentials) from the ConfigMap/Secret to a more secure location (e.g. vault), it is possible to use [environment variables in the configuration](https://grafana.com/docs/loki/latest/configuration/#use-environment-variables-in-the-configuration). Such environment variables can then be stored in a separate Secret and injected via the global.extraEnvFrom value.
For details about environment injection from a Secret, please see [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/#use-case-as-container-environment-variables).</td> <td><pre lang="json"> "ConfigMap" </pre> </td> </tr> <tr> <td>loki.containerSecurityContext</td> <td>object</td> <td>The SecurityContext for Loki containers</td> <td><pre lang="json"> { "allowPrivilegeEscalation": false, "capabilities": { "drop": [ "ALL" ] }, "readOnlyRootFilesystem": true } </pre> </td> </tr> <tr> <td>loki.distributor</td> <td>object</td> <td>Optional distributor configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.dnsConfig</td> <td>object</td> <td>DNS config for Loki pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.enableServiceLinks</td> <td>bool</td> <td>Whether enableServiceLinks should be enabled. Defaults to true</td> <td><pre lang="json"> true </pre> </td> </tr> <tr> <td>loki.extraMemberlistConfig</td> <td>object</td> <td>Extra memberlist configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.generatedConfigObjectName</td> <td>string</td> <td>The name of the Secret or ConfigMap that will be created by this chart. If empty, no ConfigMap or Secret will be created. The value will be passed through tpl.</td> <td><pre lang="json"> "{{ include \"loki.name\" . 
}}" </pre> </td> </tr> <tr> <td>loki.image.digest</td> <td>string</td> <td>Overrides the image tag with an image digest</td> <td><pre lang="json"> null </pre> </td> </tr> <tr> <td>loki.image.pullPolicy</td> <td>string</td> <td>Docker image pull policy</td> <td><pre lang="json"> "IfNotPresent" </pre> </td> </tr> <tr> <td>loki.image.registry</td> <td>string</td> <td>The Docker registry</td> <td><pre lang="json"> "docker.io" </pre> </td> </tr> <tr> <td>loki.image.repository</td> <td>string</td> <td>Docker image repository</td> <td><pre lang="json"> "grafana/loki" </pre> </td> </tr> <tr> <td>loki.image.tag</td> <td>string</td> <td>Overrides the image tag whose default is the chart's appVersion</td> <td><pre lang="json"> "3.6.7" </pre> </td> </tr> <tr> <td>loki.index_gateway</td> <td>object</td> <td>Optional index gateway configuration</td> <td><pre lang="json"> { "mode": "simple" } </pre> </td> </tr> <tr> <td>loki.ingester</td> <td>object</td> <td>Optional ingester configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.ingester_client</td> <td>object</td> <td>Optional ingester client configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.limits_config</td> <td>object</td> <td>Limits config</td> <td><pre lang="json"> { "max_cache_freshness_per_query": "10m", "query_timeout": "300s", "reject_old_samples": true, "reject_old_samples_max_age": "168h", "split_queries_by_interval": "15m", "volume_enabled": true } </pre> </td> </tr> <tr> <td>loki.memberlistConfig</td> <td>object</td> <td>memberlist configuration (overrides embedded default)</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.operational_config</td> <td>object</td> <td>Optional operational configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.pattern_ingester</td> <td>object</td> <td>Optional pattern ingester configuration</td> <td><pre lang="json"> { "enabled": false } </pre> </td> </tr> <tr> <td>loki.podAnnotations</td> 
<td>object</td> <td>Common annotations for all pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.podLabels</td> <td>object</td> <td>Common labels for all pods</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.podSecurityContext</td> <td>object</td> <td>The SecurityContext for Loki pods</td> <td><pre lang="json"> { "fsGroup": 10001, "fsGroupChangePolicy": "OnRootMismatch", "runAsGroup": 10001, "runAsNonRoot": true, "runAsUser": 10001 } </pre> </td> </tr> <tr> <td>loki.querier</td> <td>object</td> <td>Optional querier configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.query_range</td> <td>object</td> <td>Optional query range configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.query_scheduler</td> <td>object</td> <td>Additional query scheduler config</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.revisionHistoryLimit</td> <td>int</td> <td>The number of old ReplicaSets to retain to allow rollback</td> <td><pre lang="json"> 10 </pre> </td> </tr> <tr> <td>loki.rulerConfig</td> <td>object</td> <td>Check https://grafana.com/docs/loki/latest/configuration/#ruler for more info on configuring ruler</td> <td><pre lang="json"> { "wal": { "dir": "/var/loki/ruler-wal" } } </pre> </td> </tr> <tr> <td>loki.runtimeConfig</td> <td>object</td> <td>Provides a reloadable runtime configuration file for some specific configuration</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.schemaConfig</td> <td>object</td> <td>Check https://grafana.com/docs/loki/latest/configuration/#schema_config for more info on how to configure schemas</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.server</td> <td>object</td> <td>Check https://grafana.com/docs/loki/latest/configuration/#server for more info on the server configuration.</td> <td><pre lang="json"> { "grpc_listen_port": 9095, "http_listen_port": 3100, "http_server_read_timeout": "600s", "http_server_write_timeout": "600s" } 
</pre> </td> </tr> <tr> <td>loki.service.trafficDistribution</td> <td>string</td> <td>trafficDistribution for services. Ref: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-distribution</td> <td><pre lang="json"> "" </pre> </td> </tr> <tr> <td>loki.serviceAnnotations</td> <td>object</td> <td>Common annotations for all services</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.serviceLabels</td> <td>object</td> <td>Common labels for all services</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.storage</td> <td>object</td> <td>When using Thanos-based object storage, enable use_thanos_objstore and configure the storage inside the object_store section.</td> <td><pre lang="json"> { "azure": { "accountKey": null, "accountName": null, "chunkDelimiter": null, "connectionString": null, "endpointSuffix": null, "requestTimeout": null, "useFederatedToken": false, "useManagedIdentity": false, "userAssignedId": null }, "filesystem": { "chunks_directory": "/var/loki/chunks", "rules_directory": "/var/loki/rules" }, "gcs": { "chunkBufferSize": 0, "enableHttp2": true, "requestTimeout": "0s" }, "object_store": { "azure": { "account_key": null, "account_name": null }, "gcs": { "bucket_name": null, "service_account": null }, "s3": { "access_key_id": null, "endpoint": null, "http": {}, "insecure": false, "region": null, "secret_access_key": null, "sse": {} }, "storage_prefix": null, "type": "s3" }, "s3": { "accessKeyId": null, "backoff_config": {}, "disable_dualstack": false, "endpoint": null, "http_config": {}, "insecure": false, "region": null, "s3": null, "s3ForcePathStyle": false, "secretAccessKey": null, "signatureVersion": null }, "swift": { "auth_url": null, "auth_version": null, "connect_timeout": null, "container_name": null, "domain_id": null, "domain_name": null, "internal": null, "max_retries": null, "password": null, "project_domain_id": null, "project_domain_name": null, "project_id": null, "project_name": null, 
"region_name": null, "request_timeout": null, "user_domain_id": null, "user_domain_name": null, "user_id": null, "username": null }, "type": "s3", "use_thanos_objstore": false } </pre> </td> </tr> <tr> <td>loki.storage.s3.backoff_config</td> <td>object</td> <td>Check https://grafana.com/docs/loki/latest/configure/#s3_storage_config for more info on how to provide a backoff_config</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.storage_config</td> <td>object</td> <td>Additional storage config</td> <td><pre lang="json"> { "bloom_shipper": { "working_directory": "/var/loki/data/bloomshipper" }, "boltdb_shipper": { "index_gateway_client": { "server_address": "{{ include \"loki.indexGatewayAddress\" . }}" } }, "hedging": { "at": "250ms", "max_per_second": 20, "up_to": 3 }, "tsdb_shipper": { "index_gateway_client": { "server_address": "{{ include \"loki.indexGatewayAddress\" . }}" } } } </pre> </td> </tr> <tr> <td>loki.structuredConfig</td> <td>object</td> <td>Structured Loki configuration, takes precedence over `loki.config`, `loki.schemaConfig`, `loki.storageConfig`</td> <td><pre lang="json"> {} </pre> </td> </tr> <tr> <td>loki.tenants</td> <td>list</td> <td>Tenants list to be created on the nginx htpasswd file, with `name` and `password` or `passwordHash` keys. A `passwordHash` can be generated with, for example: <pre> htpasswd -nbBC 10 test-user-2 test-password-2 </pre></td>
<td><pre lang="json">
[]
</pre>