# Forwarding Logs to LokiStack Gateway
This document describes how to send application, infrastructure, audit, and network logs to the LokiStack Gateway as different tenants using Promtail or Fluentd. The built-in gateway provides secure access to the distributor (and query-frontend) by consulting an OAuth/OIDC endpoint for the request subject.
Please read the hacking guide before proceeding with the following instructions.
Note: While this document only gives instructions for two methods of forwarding logs to the gateway, the examples in the Promtail and Fluentd sections can be adapted to other log forwarders.
## OpenShift Logging

OpenShift Logging supports forwarding logs to an external Loki instance. This can also be used to forward logs to the LokiStack Gateway.
Deploy the Loki Operator and a LokiStack instance with the gateway flag enabled.
Deploy the OpenShift Logging Operator from the Operator Hub or using the following command locally:
```shell
make deploy-image deploy-catalog install
```
Create a Cluster Logging instance in the openshift-logging namespace with only collection defined.
```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: fluentd
      fluentd: {}
```
The LokiStack Gateway requires a bearer token for communication with Fluentd. Therefore, create a secret containing the service account token and the service CA bundle:
```shell
kubectl -n openshift-logging create secret generic lokistack-gateway-bearer-token \
  --from-literal=token="$(kubectl -n openshift-logging get secret logcollector-token --template='{{.data.token | base64decode}}')" \
  --from-literal=ca-bundle.crt="$(kubectl -n openshift-logging get configmap openshift-service-ca.crt --template='{{index .data "service-ca.crt"}}')"
```
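As a sanity check (with a hypothetical token value), the `base64decode` template function used above corresponds to piping the raw `.data` value through `base64 -d`, since Kubernetes stores secret data base64-encoded:

```shell
# Hypothetical token value; in the cluster this comes from the
# logcollector service account's secret.
raw_token="sa-token-example-12345"

# Kubernetes stores secret data base64-encoded ...
encoded=$(printf '%s' "$raw_token" | base64)

# ... so `--template='{{.data.token | base64decode}}'` is equivalent to:
decoded=$(printf '%s' "$encoded" | base64 -d)

echo "$decoded"
```

If the decoded value is empty, verify that the `logcollector-token` secret exists and contains a `token` key.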
Create the following ClusterRole and ClusterRoleBinding which will allow the cluster to authenticate the user(s) submitting the logs:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lokistack-dev-tenant-logs
rules:
- apiGroups:
  - 'loki.grafana.com'
  resources:
  - application
  - infrastructure
  - audit
  resourceNames:
  - logs
  verbs:
  - 'create'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: lokistack-dev-tenant-logs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lokistack-dev-tenant-logs
subjects:
- kind: ServiceAccount
  name: logcollector
  namespace: openshift-logging
```
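With the binding in place, you can verify (against a live cluster) that the `logcollector` service account is allowed to push logs; the resource/name pair below mirrors the rule above:

```shell
ns="openshift-logging"
sa="logcollector"
subject="system:serviceaccount:${ns}:${sa}"

# Cluster-only check: does the binding grant `create` on the `logs`
# resource name of the `application` resource in loki.grafana.com?
if command -v kubectl >/dev/null 2>&1; then
  kubectl auth can-i create application.loki.grafana.com/logs --as="$subject" || true
fi

echo "$subject"
```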
Now create a ClusterLogForwarder CR to forward logs to LokiStack:
```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: loki-app
    type: loki
    url: https://lokistack-dev-gateway-http.openshift-logging.svc:8080/api/logs/v1/application
    secret:
      name: lokistack-gateway-bearer-token
  - name: loki-infra
    type: loki
    url: https://lokistack-dev-gateway-http.openshift-logging.svc:8080/api/logs/v1/infrastructure
    secret:
      name: lokistack-gateway-bearer-token
  - name: loki-audit
    type: loki
    url: https://lokistack-dev-gateway-http.openshift-logging.svc:8080/api/logs/v1/audit
    secret:
      name: lokistack-gateway-bearer-token
  pipelines:
  - name: send-app-logs
    inputRefs:
    - application
    outputRefs:
    - loki-app
  - name: send-infra-logs
    inputRefs:
    - infrastructure
    outputRefs:
    - loki-infra
  - name: send-audit-logs
    inputRefs:
    - audit
    outputRefs:
    - loki-audit
```
Note: You can add or remove pipelines from the ClusterLogForwarder spec if you want to limit which logs are sent.
## Network Observability

Network Observability also requires an external Loki instance and is compatible with the LokiStack Gateway. You must use a separate instance from the openshift-logging one.

The Network Observability Operator can automatically install and configure its dependent operators. However, if you need to configure these manually, follow the steps below.
Create a `lokistack` instance for Network Observability:

- Namespace: `network-observability`
- Name: `lokistack-network`
- Object Storage -> Secret: check the object storage documentation
- Tenants Configuration -> Mode: set to `openshift-network`

Create the following ClusterRole and ClusterRoleBinding, which allow the `flowlogs-pipeline` and `network-observability-plugin` service accounts to read and write the network logs:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lokistack-network-tenant-logs
rules:
- apiGroups:
  - 'loki.grafana.com'
  resources:
  - network
  resourceNames:
  - logs
  verbs:
  - 'get'
  - 'create'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: lokistack-network-tenant-logs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lokistack-network-tenant-logs
subjects:
- kind: ServiceAccount
  name: flowlogs-pipeline
  namespace: network-observability
- kind: ServiceAccount
  name: network-observability-plugin
  namespace: network-observability
```
Deploy the Network Observability Operator following the Getting Started documentation, either from the Operator Hub or using the commands available in the repository.
Apply the following configuration in the FlowCollector for the network tenant of `lokistack-network`:
```yaml
loki:
  tenantID: network
  sendAuthToken: true
  url: 'https://lokistack-network-gateway-http.network-observability.svc.cluster.local:8080/api/logs/v1/network/'
```
Check the config samples for all available options.
## Forwarding Clients

In order to enable communication between the client(s) and the gateway, follow these steps:
Deploy the Loki Operator and a LokiStack instance with the gateway flag enabled.
Create a ServiceAccount to generate the Secret which will be used to authorize the forwarder.
```shell
kubectl -n openshift-logging create serviceaccount <SERVICE_ACCOUNT_NAME>
```
Configure the forwarder and deploy it to the openshift-logging namespace.
Create the following ClusterRole and ClusterRoleBinding which will allow the cluster to authenticate the user(s) submitting the logs:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lokistack-dev-tenant-logs-role
rules:
- apiGroups:
  - 'loki.grafana.com'
  resources:
  - application
  - infrastructure
  - audit
  resourceNames:
  - logs
  verbs:
  - 'get'
  - 'create'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: lokistack-dev-tenant-logs-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lokistack-dev-tenant-logs-role
subjects:
- kind: ServiceAccount
  name: "<SERVICE_ACCOUNT_NAME>"
  namespace: openshift-logging
```
### Promtail

Promtail is an agent maintained by Grafana which forwards logs to a Loki instance. Consult the Grafana documentation for configuring and deploying a Promtail instance in a Kubernetes cluster.

To configure Promtail to send application, audit, and infrastructure logs, add the following clients to the Promtail configuration:
```yaml
clients:
  - # ...
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      ca_file: /run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    url: https://lokistack-dev-gateway-http.openshift-logging.svc:8080/api/logs/v1/audit/loki/api/v1/push
  - # ...
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      ca_file: /run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    url: https://lokistack-dev-gateway-http.openshift-logging.svc:8080/api/logs/v1/application/loki/api/v1/push
  - # ...
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      ca_file: /run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    url: https://lokistack-dev-gateway-http.openshift-logging.svc:8080/api/logs/v1/infrastructure/loki/api/v1/push
```
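The three client entries differ only in the tenant segment of the push URL. As a sketch, the full set of endpoints follows the pattern `<gateway>/api/logs/v1/<tenant>/loki/api/v1/push` (gateway address taken from the examples above):

```shell
# Gateway service address from the examples above.
gateway="https://lokistack-dev-gateway-http.openshift-logging.svc:8080"

# One push endpoint per tenant:
for tenant in application infrastructure audit; do
  echo "${gateway}/api/logs/v1/${tenant}/loki/api/v1/push"
done
```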
The rest of the configuration can be adjusted to the developer's needs.
### Fluentd

Loki can receive logs from Fluentd via the Grafana plugin.

The Fluentd configuration can be overridden to target the application endpoint and send those log types:
```
<match **>
  @type loki
  # ...
  bearer_token_file /var/run/secrets/kubernetes.io/serviceaccount/token
  ca_cert /run/secrets/kubernetes.io/serviceaccount/service-ca.crt
  url https://lokistack-dev-gateway-http.openshift-logging.svc:8080/api/logs/v1/application
</match>
```
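Note that the Fluentd plugin appends the `/loki/api/v1/push` suffix itself, so the `url` stops at the tenant. As a hedged sketch of what the plugin ultimately sends, the Loki push payload can also be built and POSTed by hand (the curl step requires a cluster and is commented out; the stream label and log line are hypothetical):

```shell
# Loki's push API expects nanosecond-precision timestamps as strings.
ts="$(date +%s)000000000"

# Minimal push payload: one hypothetical stream with one log line.
payload=$(printf '{"streams":[{"stream":{"log_type":"application"},"values":[["%s","test log line"]]}]}' "$ts")
echo "$payload"

# Cluster-only: push the payload through the gateway with the pod's token.
# curl -XPOST -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
#      --cacert /run/secrets/kubernetes.io/serviceaccount/service-ca.crt \
#      -H "Content-Type: application/json" -d "$payload" \
#      https://lokistack-dev-gateway-http.openshift-logging.svc:8080/api/logs/v1/application/loki/api/v1/push
```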
## Troubleshooting

If the forwarder is configured to send too much data in a short span of time, Loki will back-pressure the forwarder and respond to the POST requests with 429 errors. To alleviate this, a few changes can be made to the `lokistack` spec:
- Increase the size of the `lokistack` instance:

  ```shell
  kubectl -n openshift-logging edit lokistack
  ```

  ```yaml
  size: 1x.medium
  ```

- Increase the ingestion limits of the affected tenant in the `lokistack`:

  ```shell
  kubectl -n openshift-logging edit lokistack
  ```

  ```yaml
  limits:
    tenants:
      <TENANT_NAME>:
        IngestionLimits:
          IngestionRate: 15
  ```

  where `<TENANT_NAME>` can be `application`, `audit`, or `infrastructure`.
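To gauge whether a forwarder is likely to trip this limit, compare its sustained throughput with the configured rate. The arithmetic below assumes `IngestionRate` is interpreted in MB per second and uses a hypothetical workload:

```shell
# Hypothetical workload: bytes pushed over a time window.
bytes_sent=45000000   # ~45 MB
window_secs=2
limit_mb=15           # the IngestionRate value above

rate_mb=$(( bytes_sent / window_secs / 1000000 ))
echo "observed rate: ${rate_mb} MB/s (limit: ${limit_mb} MB/s)"

if [ "$rate_mb" -gt "$limit_mb" ]; then
  echo "over limit: expect 429 responses until the rate drops or the limit is raised"
fi
```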