docs/sources/get-started/quick-start/tutorial.md
This quickstart guide will walk you through deploying Loki in single binary mode (also known as monolithic mode) using Docker Compose. Grafana Loki is only one component of the Grafana observability stack for logs. In this tutorial we will refer to this stack as the Loki stack.
{{< figure max-width="100%" src="/media/docs/loki/getting-started-loki-stack-3.png" caption="Loki stack" alt="Loki stack" >}}
The Loki stack consists of the following components:

- **Alloy**: Grafana Alloy collects logs from the Docker containers and forwards them to Loki.
- **Loki**: Grafana Loki stores the logs.
- **Grafana**: Grafana queries and visualizes the logs stored in Loki.
Before you start, you need to have the following installed on your local system:

- Docker
- Docker Compose
{{< admonition type="tip" >}} Alternatively, you can try out this example in our interactive learning environment: Loki Quickstart Sandbox.
It's a fully configured environment with all the dependencies already installed.
Provide feedback, report bugs, and raise issues in the Grafana Killercoda repository. {{< /admonition >}}
{{< admonition type="note" >}} This quickstart assumes you are running Linux or macOS. Windows users can follow the same steps using Windows Subsystem for Linux. {{< /admonition >}}
To deploy the Loki stack locally, follow these steps:
Clone the Loki fundamentals repository and check out the getting-started branch:
git clone https://github.com/grafana/loki-fundamentals.git -b getting-started
Change to the loki-fundamentals directory:
cd loki-fundamentals
With loki-fundamentals as the current working directory, deploy Loki, Alloy, and Grafana using Docker Compose:
docker compose up -d
After running the command, you should see a similar output:
✔ Container loki-fundamentals-grafana-1 Started 0.3s
✔ Container loki-fundamentals-loki-1 Started 0.3s
✔ Container loki-fundamentals-alloy-1 Started 0.4s
With the Loki stack running, you can now verify each component is up and running:

- **Alloy**: Open http://localhost:12345 to view the Alloy UI.
- **Grafana**: Open http://localhost:3000 to view the Grafana home page.
- **Loki**: Open http://localhost:3100/metrics to view the Loki metrics page.
Since Grafana Alloy is configured to tail logs from all Docker containers, Loki should already be receiving logs. The best place to verify log collection is using the Grafana Logs Drilldown feature. To do this, navigate to http://localhost:3000/drilldown. Select Logs. You should see the Grafana Logs Drilldown page.
{{< figure max-width="100%" src="/media/docs/loki/get-started-drill-down.png" caption="Grafana Logs Drilldown" alt="Grafana Logs Drilldown" >}}
If you have only the getting started demo deployed in your Docker environment, you should see three containers and their logs: loki-fundamentals-alloy-1, loki-fundamentals-grafana-1, and loki-fundamentals-loki-1. In the loki-fundamentals-loki-1 container, click Show Logs to drill down into the logs for that container.
{{< figure max-width="100%" src="/media/docs/loki/get-started-drill-down-container.png" caption="Grafana Drilldown Service View" alt="Grafana Drilldown Service View" >}}
We will not cover the rest of the Grafana Logs Drilldown features in this quickstart guide. For more information on how to use the Grafana Logs Drilldown feature, refer to Get started with Grafana Logs Drilldown.
Currently, the Loki stack is collecting logs about itself. To provide a more realistic example, you can deploy a sample application that generates logs. The sample application is called the Carnivorous Greenhouse, a microservices application that lets users log in and simulate a greenhouse with carnivorous plants to monitor. The application consists of seven services:

- Main App (greenhouse-main_app-1)
- User Service (greenhouse-user_service-1)
- Plant Service (greenhouse-plant_service-1)
- Simulation Service (greenhouse-simulation_service-1)
- Websocket Service (greenhouse-websocket_service-1)
- Bug Service (greenhouse-bug_service-1)
- Database (greenhouse-db-1)
The architecture of the application is shown below:
{{< figure max-width="100%" src="/media/docs/loki/get-started-architecture.png" caption="Sample Microservice Architecture" alt="Sample Microservice Architecture" >}}
To deploy the sample application, follow these steps:
With loki-fundamentals as the current working directory, deploy the sample application using Docker Compose:
docker compose -f greenhouse/docker-compose-micro.yml up -d --build
{{< admonition type="note" >}} This may take a few minutes to complete since the images for the sample application need to be built. Go grab a coffee and come back. {{< /admonition >}}
Once the command completes, you should see a similar output:
✔ Container greenhouse-websocket_service-1 Started 0.7s
✔ Container greenhouse-db-1 Started 0.7s
✔ Container greenhouse-user_service-1 Started 0.8s
✔ Container greenhouse-bug_service-1 Started 0.8s
✔ Container greenhouse-plant_service-1 Started 0.8s
✔ Container greenhouse-simulation_service-1 Started 0.7s
✔ Container greenhouse-main_app-1 Started 0.7s
To verify the sample application is running, open a browser and navigate to http://localhost:5005. You should see the login page for the Carnivorous Greenhouse application.
Now that the sample application is running, run some actions in the application to generate logs. For example, create a user account, log in, and add a few plants to your greenhouse to monitor.
Your greenhouse should look something like this:
{{< figure max-width="100%" src="/media/docs/loki/get-started-greenhouse.png" caption="Greenhouse Dashboard" alt="Greenhouse Dashboard" >}}
Now that you have generated some logs, you can return to the Grafana Logs Drilldown page at http://localhost:3000/drilldown. You should see seven new services, such as greenhouse-main_app-1, greenhouse-plant_service-1, and greenhouse-user_service-1.
At this point, you have viewed logs using the Grafana Logs Drilldown feature. In many cases this will provide you with all the information you need. However, we can also manually query Loki to ask more advanced questions about the logs. This can be done via Grafana Explore.
Open a browser and navigate to http://localhost:3000 to open Grafana.
From the Grafana main menu, click the Explore icon (1) to open the Explore tab.
To learn more about Explore, refer to the Explore documentation.
{{< figure src="/media/docs/loki/grafana-query-builder-v2.png" caption="Grafana Explore" alt="Grafana Explore" >}}
From the menu in the dashboard header, select the Loki data source (2).
This displays the Loki query editor.
In the query editor you use the Loki query language, LogQL, to query your logs. To learn more about the query editor, refer to the query editor documentation.
The Loki query editor has two modes (3):

- **Builder mode**: a visual query designer.
- **Code mode**: a text editor for writing LogQL queries.
Next we’ll walk through a few queries using the code view.
Click Code (3) to work in Code mode in the query editor.
Here are some sample queries to get you started using LogQL. After copying any of these queries into the query editor, click Run Query (4) to execute the query.
View all the log lines which have the container label value greenhouse-main_app-1:
{container="greenhouse-main_app-1"}
In Loki, this is a log stream.
Loki uses labels as metadata to describe log streams.
Loki queries always start with a label selector.
In the previous query, the label selector is {container="greenhouse-main_app-1"}.
Find all the log lines in the {container="greenhouse-main_app-1"} stream that contain the string POST:
{container="greenhouse-main_app-1"} |= "POST"
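Conceptually, the query above works in two passes: the label selector picks matching streams, then the line filter keeps only lines containing the substring. The following Python toy model (not Loki's implementation; stream contents are hypothetical) sketches that behavior:

```python
# Toy model of LogQL stream selection and line filtering.
# A stream's identity is its label set; values are the stored log lines.
streams = {
    frozenset({("container", "greenhouse-main_app-1")}): [
        'msg="GET /login HTTP/1.1" 200',
        'msg="POST /login HTTP/1.1" 302',
    ],
    frozenset({("container", "greenhouse-loki-1")}): [
        'msg="POST /loki/api/v1/push" 204',
    ],
}

def query(selector, substring=None):
    """Return lines from streams whose labels match `selector`,
    optionally keeping only lines that contain `substring` (|=)."""
    out = []
    for labels, lines in streams.items():
        if set(selector.items()) <= labels:  # label selector matches this stream
            out.extend(l for l in lines if substring is None or substring in l)
    return out

query({"container": "greenhouse-main_app-1"}, "POST")
```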
Loki by design does not force log lines into a specific schema format. Whether you are using JSON, key-value pairs, plain text, Logfmt, or any other format, Loki ingests these log lines as a stream of characters. The sample application we are using stores logs in Logfmt format:
ts=2025-02-21 16:09:42,176 level=INFO line=97 msg="192.168.65.1 - - [21/Feb/2025 16:09:42] "GET /static/style.css HTTP/1.1" 304 -"
To break this down:
- ts=2025-02-21 16:09:42,176 is the timestamp of the log line.
- level=INFO is the log level.
- line=97 is the line number in the code.
- msg="192.168.65.1 - - [21/Feb/2025 16:09:42] "GET /static/style.css HTTP/1.1" 304 -" is the log message.

When querying Loki, you can pipe the result of the label selector through a parser. This extracts attributes from the log line for further processing. For example, let's pipe {container="greenhouse-main_app-1"} through the logfmt parser to extract the level and line attributes:
{container="greenhouse-main_app-1"} | logfmt
When you now expand a log line in the query result, you will see the extracted attributes.
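To make the parsing step concrete, here is a minimal Python sketch of a logfmt-style parser and a label filter. This is an approximation, not Loki's parser, and the sample log line below is a simplified, hypothetical one in the style of the application's logs:

```python
import re

# Simplified assumption: values are either bare tokens or double-quoted strings.
PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line):
    """Extract key=value pairs from a logfmt-style log line."""
    return {key: value.strip('"') for key, value in PAIR.findall(line)}

def matches(fields, **wanted):
    """Mimic a LogQL label filter such as: | level="ERROR", line="58"."""
    return all(fields.get(k) == v for k, v in wanted.items())

# A simplified, hypothetical log line:
fields = parse_logfmt('ts=2025-02-21T16:09:42 level=ERROR line=58 msg="watering failed"')
```

After parsing, `fields["level"]` and `fields["line"]` are available as attributes, which is what lets the label filter expressions in the next section work.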
{{< admonition type="tip" >}}
Before we move on to the next section, let's generate some error logs. To do this, enable the bug service by setting Toggle Error Mode to On in the Carnivorous Greenhouse application. The bug service will then randomly cause other services to fail.
{{< /admonition >}}
With Error Mode enabled, the bug service will start causing services to fail. In the next few LogQL examples, we will track down some of these errors. Let's start by parsing the logs to extract the level attribute and then filtering for logs with a level of ERROR:
{container="greenhouse-plant_service-1"} | logfmt | level="ERROR"
This query will return all the logs from the greenhouse-plant_service-1 container that have a level attribute of ERROR. You can further refine this query by filtering for a specific code line:
{container="greenhouse-plant_service-1"} | logfmt | level="ERROR", line="58"
This query will return all the logs from the greenhouse-plant_service-1 container that have a level attribute of ERROR and a line attribute of 58.
LogQL also supports metrics queries. Metric queries are useful for abstracting raw log data by aggregating attributes into numeric values. This lets you use more of Grafana's visualization options as well as generate alerts on your logs.
For example, you can use a metric query to count the number of logs per second that have a specific attribute:
sum(rate({container="greenhouse-plant_service-1"} | logfmt | level="ERROR" [$__auto]))
It's worth changing the visualization from lines to bars to better see the error rate over time, since the error count is quite low.
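To build intuition for what this metric query computes, here is a toy illustration (not Loki internals): each stream's `rate` over a range is roughly "matching log lines in the window divided by the window length in seconds", and `sum()` adds the per-stream rates together. The counts below are hypothetical:

```python
WINDOW_SECONDS = 300  # a 5m range window

# Hypothetical counts of matching ERROR lines per stream in the window:
error_lines_per_stream = {
    "greenhouse-plant_service-1": 15,
    "greenhouse-plant_service-2": 30,
}

# rate(...) per stream: lines in window / window length in seconds
per_stream_rate = {s: n / WINDOW_SECONDS for s, n in error_lines_per_stream.items()}

# sum(rate(...)): add the per-stream rates; 45 lines over 300 s is 0.15 lines/s
total_rate = sum(per_stream_rate.values())
```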
Another example is to get the top 10 services producing the highest rate of errors:
topk(10,sum(rate({level="error"} | logfmt [5m])) by (service_name))
{{< admonition type="note" >}}
service_name is a label created by Loki when no service name is provided in the log line. It will use the container name as the service name. A list of all automatically generated labels can be found in Labels.
{{< /admonition >}}
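The grouping in that query can be sketched in a few lines of Python: group per-stream rates by the service_name label, sum within each group, then keep the largest groups. The streams and rates below are hypothetical:

```python
# Hypothetical per-stream error rates, each with its label set:
stream_rates = [
    ({"service_name": "greenhouse-plant_service", "container": "greenhouse-plant_service-1"}, 0.20),
    ({"service_name": "greenhouse-plant_service", "container": "greenhouse-plant_service-2"}, 0.05),
    ({"service_name": "greenhouse-bug_service", "container": "greenhouse-bug_service-1"}, 0.40),
    ({"service_name": "greenhouse-user_service", "container": "greenhouse-user_service-1"}, 0.10),
]

# sum(...) by (service_name): collapse streams that share the label value
by_service = {}
for labels, rate in stream_rates:
    key = labels["service_name"]
    by_service[key] = by_service.get(key, 0.0) + rate

# topk(10, ...): keep the 10 largest groups
top = sorted(by_service.items(), key=lambda kv: kv[1], reverse=True)[:10]
```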
Finally, let's take a look at the total log throughput of each container in our production environment:
sum by (service_name) (rate({env="production"} | logfmt [$__auto]))
This is made possible by the service_name label and the env label that we have added to our log lines. Note that env is a static label that we added to all log lines as they are processed by Alloy.
At this point you will have a running Loki stack and a sample application generating logs. You have also queried Loki using Grafana Logs Drilldown and Grafana Explore. In this next section we will take a look under the hood to understand how the Loki stack has been configured to collect logs, the Loki configuration file, and how the Loki data source has been configured in Grafana.
Grafana Alloy is collecting logs from all the Docker containers and forwarding them to Loki.
It needs a configuration file to know which logs to collect and where to forward them. Within the loki-fundamentals directory, you will find a file called config.alloy:
// This component is responsible for discovering new containers within the Docker environment
discovery.docker "getting_started" {
  host             = "unix:///var/run/docker.sock"
  refresh_interval = "5s"
}

// This component is responsible for relabeling the discovered containers
discovery.relabel "getting_started" {
  targets = []

  rule {
    source_labels = ["__meta_docker_container_name"]
    regex         = "/(.*)"
    target_label  = "container"
  }
}

// This component is responsible for collecting logs from the discovered containers
loki.source.docker "getting_started" {
  host             = "unix:///var/run/docker.sock"
  targets          = discovery.docker.getting_started.targets
  forward_to       = [loki.process.getting_started.receiver]
  relabel_rules    = discovery.relabel.getting_started.rules
  refresh_interval = "5s"
}

// This component is responsible for processing the logs (in this case, adding static labels)
loki.process "getting_started" {
  stage.static_labels {
    values = {
      env = "production",
    }
  }
  forward_to = [loki.write.getting_started.receiver]
}

// This component is responsible for writing the logs to Loki
loki.write "getting_started" {
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}

// Enables the ability to view logs in the Alloy UI in real time
livedebugging {
  enabled = true
}
This configuration file can be viewed visually via the Alloy UI at http://localhost:12345/graph.
{{< figure max-width="100%" src="/media/docs/loki/getting-started-alloy-ui.png" caption="Alloy UI" alt="Alloy UI" >}}
In this view you can see the components of the Alloy configuration file and how they are connected:
- discovery.docker: Discovers new containers within the Docker environment.
- discovery.relabel: Relabels the internal __meta_docker_container_name label into a Loki label (container).
- loki.source.docker: Collects logs from the containers found by the discovery.docker component and applies the relabeling rules from the discovery.relabel component.
- loki.process: Adds the static label env=production to all logs.
- loki.write: Writes the logs to Loki at http://loki:3100/loki/api/v1/push.

Grafana Alloy provides a built-in real-time log viewer. This allows you to view current log entries and how they are being transformed by specific components of the pipeline. To view live debugging mode, open a browser tab and navigate to http://localhost:12345/debug/loki.process.getting_started.
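To see what the discovery.relabel rule in config.alloy actually does, note that the Docker API reports container names with a leading slash (for example "/loki-fundamentals-alloy-1"). The rule's anchored regex "/(.*)" captures everything after the slash into the container label. A small Python sketch of that transformation:

```python
import re

def relabel(meta_docker_container_name):
    """Sketch of the relabel rule: strip the leading slash from the
    Docker container name and store the result in the `container` label.
    Relabel regexes are anchored, hence fullmatch."""
    match = re.fullmatch(r"/(.*)", meta_docker_container_name)
    return {"container": match.group(1)} if match else {}

relabel("/loki-fundamentals-alloy-1")  # {"container": "loki-fundamentals-alloy-1"}
```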
Grafana Loki requires a configuration file to define how it should run. Within the loki-fundamentals directory, you will find a file called loki-config.yaml:
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: info
  grpc_server_max_concurrent_streams: 1000

common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

limits_config:
  metric_aggregation_enabled: true
  allow_structured_metadata: true
  volume_enabled: true
  retention_period: 24h

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

pattern_ingester:
  enabled: true
  metric_aggregation:
    loki_address: localhost:3100

ruler:
  enable_alertmanager_discovery: true
  enable_api: true

frontend:
  encoding: protobuf

compactor:
  working_directory: /tmp/loki/retention
  delete_request_store: filesystem
  retention_enabled: true
To summarize the configuration file:
false, meaning Loki does not need a tenant ID for ingest or query. Note that this is not recommended for production environments. When deploying the Loki Helm chart, this is set to true by default.protobuf.The above configuration file is a basic configuration file for Loki. For more advanced configuration options, refer to the Loki Configuration documentation.
The final piece of the puzzle is the Grafana Loki data source. This is used by Grafana to connect to Loki and query the logs. Grafana has multiple ways to define a data source: manually through the UI, via the HTTP API, or via provisioning files.
In this case we are using the provisioning method. Instead of mounting the Grafana configuration directory, we have defined the data source in this portion of the docker-compose.yml file:
grafana:
  image: grafana/grafana:latest
  environment:
    - GF_FEATURE_TOGGLES_ENABLE=grafanaManagedRecordingRules
    - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    - GF_AUTH_ANONYMOUS_ENABLED=true
    - GF_AUTH_BASIC_ENABLED=false
  ports:
    - 3000:3000/tcp
  entrypoint:
    - sh
    - -euc
    - |
      mkdir -p /etc/grafana/provisioning/datasources
      cat <<EOF > /etc/grafana/provisioning/datasources/ds.yaml
      apiVersion: 1
      datasources:
        - name: Loki
          type: loki
          access: proxy
          orgId: 1
          url: 'http://loki:3100'
          basicAuth: false
          isDefault: true
          version: 1
          editable: true
      EOF
      /run.sh
  networks:
    - loki
Within the entrypoint section of the docker-compose.yml file, the container creates the data source configuration file ds.yaml in the Grafana provisioning directory on startup, and then starts Grafana by calling /run.sh.
This file defines the Loki data source and tells Grafana to use it. Since Loki is running in the same Docker network as Grafana, we can use the service name loki as the URL.
{{< docs/ignore >}}
Head back to where you started from to continue with the Loki documentation: Loki documentation. {{< /docs/ignore >}}
You have completed the Loki Quickstart demo. So where to go next? Here are a few suggestions:
If you would like to run a demonstration environment that includes Mimir, Loki, Tempo, and Grafana, you can use Introduction to Metrics, Logs, Traces, and Profiling in Grafana. It's a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana.
The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. You can also push the data from the environment to Grafana Cloud.