doc/user/group/value_stream_analytics/_index.md
{{< details >}}
{{< /details >}}
Value stream analytics calculates the duration of every stage of your software development process. You can measure how much time it takes to go from an idea to production by tracking merge request or issue events.
Use value stream analytics to identify:
Value stream analytics helps businesses:
For a click-through demo, see the Value Stream Management product tour.
Value stream analytics has a hierarchical structure:
A value stream is the entire work process that delivers value to customers. Value streams are container objects for stages. You can have multiple value streams per group, to focus on different aspects of the DevOps lifecycle.
A stage represents an event pair (start and end events) with additional metadata, such as the name of the stage. You can use value stream analytics with the built-in default stages, which you can reorder and hide. You can also create and add custom stages that align with your specific development workflows.
{{< history >}}
{{< /history >}}
Events are the building blocks that define when stages start and end. Each stage is bounded by a start event and an end event:
GitLab calculates stage duration from the start and end event times, using this formula: `Stage duration = end event time - start event time`.
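As an illustration, the formula can be sketched in Ruby (a hypothetical standalone calculation, not GitLab code; the event timestamps are made up):

```ruby
require 'time'

# Stage duration = end event time - start event time.
# Hypothetical event timestamps for a single item in one stage.
start_event_time = Time.parse('2024-03-01 09:00:00 UTC')
end_event_time   = Time.parse('2024-03-01 14:30:00 UTC')

stage_duration_hours = (end_event_time - start_event_time) / 3600.0
puts "Stage duration: #{stage_duration_hours} hours" # 5.5 hours
```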
Value stream analytics supports the following events:
You can share your ideas or feedback about stage events in issue 520962.
{{< details >}}
{{< /details >}}
{{< history >}}
{{< /history >}}
Value stream analytics uses a backend process to collect and aggregate stage-level data, which ensures it can scale for large groups with a high number of issues and merge requests. Due to this process, there may be a slight delay between when an action is taken (for example, closing an issue) and when the data displays on the value stream analytics page.
It may take up to 10 minutes to process the data and display results. Data collection may take longer than 10 minutes in the following cases:
To check when the data was most recently updated, in the upper-right corner next to **Edit**, hover over the **Last updated** badge.
Value stream analytics measures each stage from its start event to its end event. Only items that have reached their end event are included in the stage time calculation.
By default, blocked issues are not included in the lifecycle overview. However, you can use custom labels (for example, `workflow::blocked`) to track them.
You can customize stages in value stream analytics based on pre-defined events. To help you with the configuration, GitLab provides a pre-defined list of stages that you can use as a template. For example, you can define a stage that starts when you add a label to an issue, and ends when you add another label.
The following table gives an overview of the pre-defined stages in value stream analytics.
| Stage | Measurement method |
|---|---|
| Issue | The median time between creating an issue and taking action to solve it, by either labeling it or adding it to a milestone, whichever comes first. The label is tracked only if it already has an issue board list created for it. |
| Plan | The median time between the action you took for the previous stage, and pushing the first commit to the branch. The first commit on the branch triggers the separation between Plan and Code. At least one of the commits in the branch must contain the related issue number (for example, #42). If none of the commits in the branch mention the related issue number, it is not considered in the measurement time of the stage. |
| Code | The median time between pushing a first commit (previous stage) and creating a merge request (MR) related to that commit. The key to keep the process tracked is to include the issue closing pattern in the description of the merge request. For example, Closes #xxx, where xxx is the number of the issue related to this merge request. If the closing pattern is not present, then the calculation uses the creation time of the first commit in the merge request as the start time. |
| Test | The median time to run the entire pipeline for that project. It's related to the time GitLab CI/CD takes to run every job for the commits pushed to that merge request. Effectively, it is the start-to-finish time for all pipelines. |
| Review | The median time taken to review a merge request that has a closing issue pattern, between its creation and until it's merged. |
| Staging | The median time between merging a merge request that has a closing issue pattern until the very first deployment to a production environment. If there isn't a production environment, this is not tracked. |
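For illustration, a simplified check for such a closing pattern could look like this (a sketch only; GitLab's actual default closing pattern is broader and configurable):

```ruby
# Simplified closing-pattern matcher: recognizes forms like "Closes #42".
# The real GitLab default pattern accepts more verbs and reference formats.
CLOSING_PATTERN = /\b(?:clos(?:e[sd]?|ing)|fix(?:es|ed|ing)?|resolv(?:e[sd]?|ing))\s+#(\d+)/i

def referenced_issue(text)
  match = CLOSING_PATTERN.match(text)
  match && match[1].to_i
end

puts referenced_issue('Closes #42')            # 42
puts referenced_issue('Refactor only').inspect # nil
```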
{{< alert type="note" >}}

Value stream analytics works on timestamp data and aggregates only the final start and stop events of the stage. For items that move back and forth between stages multiple times, the stage time is calculated solely from the final events' timestamps.

{{< /alert >}}
This example shows a workflow through all seven stages in one day.
If a stage does not include a start and a stop time, its data is not included in the median time. In this example, milestones have been created and CI/CD for testing and setting environments is configured.
The pipeline runs, as configured in the `.gitlab-ci.yml` file. When the first deployment to the production environment finishes, the Staging stage stops.

Value stream analytics records the following times for each stage:
Keep in mind the following observations related to this example:
{{< history >}}
- Introduced with flags named `enable_vsa_cumulative_label_duration_calculation` and `vsa_duration_from_db`. Disabled by default.
- Feature flag `vsa_duration_from_db` removed.
- Feature flag `enable_vsa_cumulative_label_duration_calculation` removed in GitLab 17.0.

{{< /history >}}
With this feature, value stream analytics measures the duration of repetitive events for label-based stages. You should configure label removal or addition events for both start and end events.
For example, a stage tracks when the `in progress` label is added and removed, with the following times:
With the original calculation method, the duration is five hours (from 9:00 to 14:00). With cumulative label event duration calculation enabled, the duration is three hours (9:00 to 10:00 and 12:00 to 14:00).
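The difference between the two methods can be sketched as follows (hypothetical timestamps matching the example above):

```ruby
require 'time'

# Label added 9:00-10:00 and again 12:00-14:00 (hypothetical example).
intervals = [
  ['2024-03-01 09:00 UTC', '2024-03-01 10:00 UTC'],
  ['2024-03-01 12:00 UTC', '2024-03-01 14:00 UTC'],
]

# Original method: final end event minus first start event.
original_hours = (Time.parse(intervals.last[1]) - Time.parse(intervals.first[0])) / 3600

# Cumulative method: sum of each added-to-removed interval.
cumulative_hours = intervals.sum { |added, removed| (Time.parse(removed) - Time.parse(added)) / 3600 }

puts "original: #{original_hours} hours"     # 5.0
puts "cumulative: #{cumulative_hours} hours" # 3.0
```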
{{< alert type="note" >}}

When you upgrade to GitLab 16.10 or later, existing label-based value stream analytics stages are automatically reaggregated by the background aggregation process.

{{< /alert >}}
{{< details >}}
{{< /details >}}
On large instances, when you upgrade GitLab, especially if several minor versions are skipped, the background aggregation process might take longer. This delay can result in outdated data on the value stream analytics page. To speed up aggregation and avoid outdated data, you can run the following synchronous aggregation snippet for a given group in the Rails console:
```ruby
group = Group.find(-1) # put your group id here
group_to_aggregate = group.root_ancestor

# Aggregate issue data, resuming from the cursor until all batches are processed.
cursor = {}
loop do
  context = Analytics::CycleAnalytics::AggregationContext.new(cursor: cursor)
  service_response = Analytics::CycleAnalytics::DataLoaderService.new(group: group_to_aggregate, model: Issue, context: context).execute

  if service_response.success? && service_response.payload[:reason] == :limit_reached
    cursor = service_response.payload[:context].cursor
  elsif service_response.success?
    puts "finished"
    break
  else
    puts "failed"
    break
  end
end

# Aggregate merge request data the same way.
cursor = {}
loop do
  context = Analytics::CycleAnalytics::AggregationContext.new(cursor: cursor)
  service_response = Analytics::CycleAnalytics::DataLoaderService.new(group: group_to_aggregate, model: MergeRequest, context: context).execute

  if service_response.success? && service_response.payload[:reason] == :limit_reached
    cursor = service_response.payload[:context].cursor
  elsif service_response.success?
    puts "finished"
    break
  else
    puts "failed"
    break
  end
end
```
Value stream analytics identifies production environments by looking for project environments with a name matching any of these patterns:
- `prod` or `prod/*`
- `production` or `production/*`

These patterns are not case-sensitive.
You can change the name of a project environment in your GitLab CI/CD configuration.
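The matching logic can be sketched with a regular expression (a hypothetical illustration, not the GitLab implementation):

```ruby
# Case-insensitive match for prod, prod/*, production, production/*.
PRODUCTION_ENV_PATTERN = %r{\A(?:prod|production)(?:/.*)?\z}i

def production_environment?(name)
  PRODUCTION_ENV_PATTERN.match?(name)
end

puts production_environment?('Production/eu-west') # true
puts production_environment?('staging')            # false
```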
{{< history >}}
- Introduced with a flag named `vsa_predefined_date_ranges`. Disabled by default.
- Feature flag `vsa_predefined_date_ranges` removed.

{{< /history >}}
Prerequisites:
To view value stream analytics for your group or project:
1. Select the **Filter results** text box.
1. Select a parameter.
1. Select a value or enter text to refine the results.
To view metrics in a particular date range, from the dropdown list select a predefined date range or the Custom option. With the Custom option selected:
The charts and list display workflow items created during the date range.
A badge next to the workflow items table header shows the number of workflow items that completed during the selected stage.
The table shows a list of related workflow items for the selected stage. Based on the stage you select, this can be:
{{< alert type="note" >}}

The end date for each predefined date range is the current day, and is included in the number of days selected. For example, the start date for **Last 30 days** is 29 days prior to the current day, for a total of 30 days.

{{< /alert >}}
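The inclusive day count works like this (a hypothetical illustration using a fixed date):

```ruby
require 'date'

# "Last 30 days": the end date is the current day and counts as one of the
# 30 days, so the start date is 29 days earlier.
end_date = Date.new(2024, 3, 30) # stand-in for the current day
start_date = end_date - 29

puts start_date                   # 2024-03-01
puts (start_date..end_date).count # 30
```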
You can filter value stream analytics to view data that matches specific criteria. The following filters are supported:
The Overview page in value stream analytics displays key metrics of the DevSecOps lifecycle performance for projects and groups.
Value stream analytics includes the following lifecycle metrics:
Commit messages must include `#` followed by the issue number (for example, `#123`); otherwise, no data is displayed. Cycle time is typically shorter than lead time because the merge request is created after the first commit.

{{< details >}}
{{< /details >}}
{{< history >}}
{{< /history >}}
Value stream analytics includes the following DORA metrics:
DORA metrics are calculated based on data from the DORA API.
If you have a GitLab Premium or Ultimate subscription:
Prerequisites:
To view lifecycle metrics:
To view the Value Streams Dashboard and DORA metrics:
Append `/analytics/dashboards/value_streams_dashboard` to the group URL (for example, `https://gitlab.com/groups/gitlab-org/-/analytics/dashboards/value_streams_dashboard`).

Value stream analytics shows the median time spent by issues or merge requests in each development stage.
To view the median time spent in each stage by a group:
{{< alert type="note" >}}

The date range selector filters items by the event time. The event time is when the selected stage finished for the given item.

{{< /alert >}}
{{< details >}}
{{< /details >}}
The Tasks by type chart displays the cumulative number of completed tasks (closed issues and merged merge requests) per day for your group.
The chart uses the global page filters to display data based on the selected group and time frame.
To view tasks by type:
{{< details >}}
{{< /details >}}
{{< history >}}
- Introduced with a flag named `vsa_standalone_settings_page`. Disabled by default.
- Feature flag `vsa_standalone_settings_page` removed.

{{< /history >}}
To create a value stream with default stages:
{{< alert type="note" >}}

If you have recently upgraded to GitLab Premium, it can take up to 30 minutes for data to collect and display.

{{< /alert >}}
To create a value stream with custom stages:
<i class="fa-youtube-play" aria-hidden="true"></i> For a video explanation, see Optimizing merge request review process with Value Stream Analytics.
<!-- Video published on 2024-07-29 -->

To measure complex workflows, you can use scoped labels. For example, to measure deployment time from a staging environment to production, you could use the following labels:
- Start event: the `workflow::staging` label is added to the merge request.
- End event: the `workflow::production` label is added to the merge request.

You can automatically add labels by using GitLab webhook events, so that a label is applied to merge requests or issues when a specific event occurs. Then, you can add label-based stages to track your workflow. To learn more about the implementation, see the blog post Applying GitLab Labels Automatically.
In the previous example, two independent value streams are set up for two teams that are using different development workflows in the Test Group (top-level namespace).
The first value stream uses standard timestamp-based events for defining the stages. The second value stream uses label events.
{{< details >}}
{{< /details >}}
{{< history >}}
- Introduced with a flag named `vsa_standalone_settings_page`. Disabled by default.
- Feature flag `vsa_standalone_settings_page` removed.

{{< /history >}}
After you create a value stream, you can customize it to suit your purposes. To edit a value stream:
{{< details >}}
{{< /details >}}
To delete a custom value stream:
{{< details >}}
{{< /details >}}
The Total time chart shows the average number of days it takes for development cycles to complete. The chart shows data for the last 500 workflow items.
Access permissions for value stream analytics depend on the project type.
| Project type | Permissions |
|---|---|
| Public | Anyone can access. |
| Internal | Any authenticated user can access. |
| Private | Any user with the Reporter, Developer, Maintainer, or Owner role can access. |
{{< details >}}
{{< /details >}}
{{< history >}}
{{< /history >}}
With the VSA GraphQL API, you can request metrics from your configured value streams and value stream stages. This can be useful if you want to export VSA data to an external system or for a report.
The following metrics are available:
Prerequisites:
First, you must determine which value stream you want to use in the reporting.
To request the configured value streams for a group, run:
```graphql
query {
  group(fullPath: "your-group-path") {
    valueStreams {
      nodes {
        id
        name
      }
    }
  }
}
```
Similarly, to request metrics for a project, run:
```graphql
query {
  project(fullPath: "your-project-path") {
    valueStreams {
      nodes {
        id
        name
      }
    }
  }
}
```
To request metrics for stages of a value stream, run:
```graphql
query {
  group(fullPath: "your-group-path") {
    valueStreams(id: "your-value-stream-id") {
      nodes {
        stages {
          id
          name
        }
      }
    }
  }
}
```
Depending how you want to consume the data, you can request metrics for one specific stage or all stages in your value stream.
{{< alert type="note" >}}

Requesting metrics for all stages might be too slow for some installations. The recommended approach is to request metrics stage by stage.

{{< /alert >}}
To request metrics for a specific stage, run:
```graphql
query {
  group(fullPath: "your-group-path") {
    valueStreams(id: "your-value-stream-id") {
      nodes {
        stages(id: "your-stage-id") {
          id
          name
          metrics(timeframe: { start: "2024-03-01", end: "2024-03-31" }) {
            average {
              value
              unit
            }
            median {
              value
              unit
            }
            count {
              value
              unit
            }
          }
        }
      }
    }
  }
}
```
{{< alert type="note" >}}

You should always request metrics with a given time frame. The longest supported time frame is 180 days.

{{< /alert >}}
The metrics node supports additional filtering options:
Example request with filters:
```graphql
query {
  group(fullPath: "your-group-path") {
    valueStreams(id: "your-value-stream-id") {
      nodes {
        stages(id: "your-stage-id") {
          id
          name
          metrics(
            labelNames: ["backend"]
            milestoneTitle: "17.0"
            timeframe: { start: "2024-03-01", end: "2024-03-31" }
          ) {
            average {
              value
              unit
            }
            median {
              value
              unit
            }
            count {
              value
              unit
            }
          }
        }
      }
    }
  }
}
```
Value stream analytics offers different features at the project and group level for FOSS and licensed versions.
| Feature | Group level (licensed) | Project level (licensed) | Project level (FOSS) |
|---|---|---|---|
| Create custom value streams | Yes | Yes | No, only one value stream (default) is present with the default stages |
| Create custom stages | Yes | Yes | No |
| Filtering (for example, by author, label, milestone) | Yes | Yes | Yes |
| Stage time chart | Yes | Yes | No |
| Total time chart | Yes | Yes | No |
| Task by type chart | Yes | No | No |
| DORA Metrics | Yes | Yes | No |
| Cycle time and lead time summary (Lifecycle metrics) | Yes | Yes | No |
| New issues, commits, and deploys (Lifecycle metrics) | Yes, excluding commits | Yes | Yes |
| Uses aggregated backend | Yes | Yes | No |
| Date filter behavior | Filters items finished in the date range. | Filters items by creation date. | Filters items by creation date. |
| Authorization | At least the Reporter role | At least the Reporter role | Can be public |
Value stream analytics background jobs (in the `cronjob:analytics_cycle_analytics` queue) can strongly impact performance by monopolizing CPU resources.
To recover from this situation:
Disable the feature for all projects in the Rails console, and remove existing jobs:
```ruby
Project.find_each do |p|
  p.analytics_access_level = 'disabled'
  p.save!
end

Analytics::CycleAnalytics::GroupStage.delete_all
Analytics::CycleAnalytics::Aggregation.delete_all
```
Configure Sidekiq routing rules, for example with a single `feature_category=value_stream_management` queue and multiple `feature_category!=value_stream_management` queues. Find other relevant queue metadata in the Enterprise Edition list.
Enable value stream analytics for one project at a time. You might need to tweak the Sidekiq routing further according to your performance requirements.
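On a Linux package installation, the routing rules described above might look like this in `/etc/gitlab/gitlab.rb` (a sketch under assumed queue names; adapt to your setup):

```ruby
# Route value stream management jobs to a dedicated queue and everything
# else to the default queue. Queue names here are assumptions.
sidekiq['routing_rules'] = [
  ['feature_category=value_stream_management', 'value_stream_management'],
  ['feature_category!=value_stream_management', 'default'],
]
```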