As you create machine learning models, you likely experiment with different parameters, configurations, and feature engineering to improve the model's performance. To replicate your experiments later, you need to effectively track the metadata and artifacts. Use GitLab model experiments to track and log parameters, metrics, and artifacts directly into GitLab.
In a project, an experiment is a collection of comparable model runs. Experiments can be long-lived (for example, when they represent a use case) or short-lived (for example, the results of hyperparameter tuning triggered by a merge request), but they usually hold model runs that share a similar set of parameters and are measured by the same metrics.
A model run is a variation of the training of a machine learning model that can eventually be promoted to a version of the model.
The goal of a data scientist is to find the model run whose parameter values lead to the best model performance, as indicated by the given metrics.
Some example parameters:

- Learning rate
- Batch size
- Number of training epochs
Experiments and runs can be tracked only through MLflow client compatibility. See MLflow client compatibility for more information on how to use GitLab as a backend for the MLflow client.
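The following sketch shows one way to point the MLflow client at a GitLab project, assuming the MLflow-compatible API endpoint under `/api/v4/projects/<project_id>/ml/mlflow`; the host, project ID, and token are placeholders for your own values:

```python
import os
import mlflow

# Placeholder values: substitute your GitLab host, project ID, and a
# token with API access (for example, a project access token).
os.environ["MLFLOW_TRACKING_URI"] = (
    "https://gitlab.example.com/api/v4/projects/<project_id>/ml/mlflow"
)
os.environ["MLFLOW_TRACKING_TOKEN"] = "<access_token>"

# Create or reuse an experiment, then log a run with a parameter and a metric.
mlflow.set_experiment("demo-experiment")
with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("accuracy", 0.92)
```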
To list the current active experiments:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Analyze > Model experiments**.
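You can also list experiments from code. A minimal sketch with the standard MLflow client API, assuming the tracking URI and token are configured as in the previous example and that the backend supports experiment search:

```python
import mlflow

# Prints the ID and name of each experiment visible to the client.
for experiment in mlflow.search_experiments():
    print(experiment.experiment_id, experiment.name)
```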
Run artifacts are saved as packages. After an artifact is logged for a run, all artifacts logged for that run are listed in the package registry. The package name for a run is `ml_experiment_<experiment_id>`, where the version is the run IID. The artifacts can also be accessed from the experiment runs list or the run detail page.
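For example, a minimal sketch of logging a file as a run artifact with the MLflow client; the file name and contents are illustrative:

```python
import mlflow

with mlflow.start_run():
    # Write a local file, then upload it as a run artifact. Logged
    # artifacts appear in the package registry under the
    # ml_experiment_<experiment_id> package for this run.
    with open("model_summary.txt", "w") as f:
        f.write("validation accuracy: 0.92\n")
    mlflow.log_artifact("model_summary.txt")
```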
You can associate runs with the CI/CD job that created them, allowing quick links to the merge request, pipeline, and user that triggered the pipeline:
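One approach, sketched here on the assumption that GitLab recognizes a `gitlab.CI_JOB_ID` tag on the run, is to tag the run with the job ID from the predefined `CI_JOB_ID` CI/CD variable:

```python
import os
import mlflow

with mlflow.start_run():
    # ... training code ...

    # CI_JOB_ID is a predefined GitLab CI/CD variable available inside
    # a job. Tagging the run with it lets GitLab link the run to the
    # job, and from there to the pipeline and merge request.
    mlflow.set_tag("gitlab.CI_JOB_ID", os.getenv("CI_JOB_ID"))
```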
When you run an experiment, GitLab logs certain related data, including its metrics, parameters, and metadata. You can view the metrics in a chart for analysis.
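Metrics logged with a step value can be charted as a series over training progress. A minimal sketch; the metric name and values are illustrative:

```python
import mlflow

with mlflow.start_run():
    # Log the same metric at successive steps so it plots as a series.
    for step, loss in enumerate([0.9, 0.6, 0.4, 0.3]):
        mlflow.log_metric("loss", loss, step=step)
```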
To view logged metrics:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Analyze > Model experiments**.
1. Select the experiment, then the run whose metrics you want to view.