Validation is a critical step in the machine learning pipeline, allowing you to assess the quality of your trained models. Val mode in Ultralytics YOLO26 provides a robust suite of tools and metrics for evaluating the performance of your object detection models. This guide serves as a complete resource for understanding how to effectively use the Val mode to ensure that your models are both accurate and reliable.
<p align="center"> <iframe loading="lazy" width="720" height="405" src="https://www.youtube.com/embed/j8uQc0qB91s?start=47" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen> </iframe><strong>Watch:</strong> Ultralytics Modes Tutorial: Validation
</p>

Here's why using YOLO26's Val mode is advantageous:

* **Precision:** Get accurate metrics such as mAP50, mAP75, and mAP50-95 to evaluate your model thoroughly.
* **Convenience:** Models remember their training settings, so validation requires minimal configuration.
* **Flexibility:** Validate against the original dataset or a custom one, at any image size.
* **Hyperparameter tuning:** Use validation metrics to guide model fine-tuning.

These are the notable functionalities offered by YOLO26's Val mode:

* **Automated settings:** Models retain their training configuration for straightforward validation.
* **Multi-metric support:** Evaluate performance across a range of accuracy metrics.
* **CLI and Python API:** Run validation from the command line or from Python, whichever you prefer.
* **Data compatibility:** Works with the dataset used during training as well as custom datasets.
!!! tip

    YOLO26 models automatically remember their training settings, so you can validate a model at the same image size and on the original dataset with just `yolo val model=yolo26n.pt` or `YOLO("yolo26n.pt").val()`.
Validate the accuracy of a trained YOLO26n model on the COCO8 dataset. No arguments are needed, as the model retains its training data and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
!!! warning "Windows Multi-Processing Error"

    On Windows, you may receive a `RuntimeError` when launching validation as a script. Add an `if __name__ == "__main__":` block before your validation code to resolve it.
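A minimal sketch of that guard (the `run_validation` helper below is a hypothetical placeholder; swap its body for your actual validation call):

```python
from multiprocessing import freeze_support


def run_validation():
    # Replace this body with your real validation, e.g.:
    #   from ultralytics import YOLO
    #   return YOLO("yolo26n.pt").val()
    return "metrics"


if __name__ == "__main__":
    freeze_support()  # harmless elsewhere; needed for frozen Windows executables
    metrics = run_validation()
```

Keeping the validation call under the `__main__` guard prevents worker processes spawned by the dataloader from re-executing the script's top level.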
!!! example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo26n.pt")  # load an official model
        model = YOLO("path/to/best.pt")  # load a custom model

        # Validate the model
        metrics = model.val()  # no arguments needed, dataset and settings remembered
        metrics.box.map  # mAP50-95
        metrics.box.map50  # mAP50
        metrics.box.map75  # mAP75
        metrics.box.maps  # a list containing mAP50-95 for each category
        metrics.box.image_metrics  # per-image metrics dictionary with precision, recall, F1, TP, FP, and FN
        ```

    === "CLI"

        ```bash
        yolo detect val model=yolo26n.pt  # val official model
        yolo detect val model=path/to/best.pt  # val custom model
        ```
When validating YOLO models, several arguments can be fine-tuned to optimize the evaluation process. These arguments control aspects such as input image size, batch processing, and performance thresholds. Below is a detailed breakdown of each argument to help you customize your validation settings effectively.
{% include "macros/validation-args.md" %}
Each of these settings plays a vital role in the validation process, allowing for a customizable and efficient evaluation of YOLO models. Adjusting these parameters according to your specific needs and resources can help achieve the best balance between accuracy and performance.
<p align="center">
    <strong>Watch:</strong> How to Export Model Validation Results in CSV, JSON, SQL, Polars DataFrame & More
</p>

For a hands-on walkthrough, see the [export notebook](https://github.com/ultralytics/notebooks/blob/main/notebooks/how-to-export-the-validation-results-into-dataframe-csv-sql-and-other-formats.ipynb).
The examples below showcase YOLO model validation with custom arguments in Python and the CLI.
!!! example

    === "Python"

        ```python
        from ultralytics import YOLO

        # Load a model
        model = YOLO("yolo26n.pt")

        # Customize validation settings
        metrics = model.val(data="coco8.yaml", imgsz=640, batch=16, conf=0.25, iou=0.7, device="0")
        ```

    === "CLI"

        ```bash
        yolo val model=yolo26n.pt data=coco8.yaml imgsz=640 batch=16 conf=0.25 iou=0.7 device=0
        ```
!!! tip "Export ConfusionMatrix"

    You can also save the ConfusionMatrix results in different formats using the provided code.

    ```python
    from ultralytics import YOLO

    model = YOLO("yolo26n.pt")
    results = model.val(data="coco8.yaml", plots=True)
    print(results.confusion_matrix.to_df())
    ```
!!! tip "Per-Image Precision, Recall, and F1"

    Validation stores per-image precision, recall, F1, TP, FP, and FN metrics (at IoU threshold 0.5) for all tasks
    except classification. Access them through `results.box.image_metrics` for detection and OBB, `results.seg.image_metrics`
    for segmentation, and `results.pose.image_metrics` for pose after validation completes.

    ```python
    from ultralytics import YOLO

    # Load a model
    model = YOLO("yolo26n.pt")

    # Validate and access per-image metrics
    results = model.val(data="coco8.yaml")

    # image_metrics is a dictionary with image filenames as keys
    print(results.box.image_metrics)
    # Output: {'image1.jpg': {'precision': 0.85, 'recall': 0.94, 'f1': 0.89, 'tp': 17, 'fp': 3, 'fn': 1}, ...}

    # Access metrics for a specific image
    results.box.image_metrics["image1.jpg"]  # {'precision': 0.85, 'recall': 0.94, 'f1': 0.89, 'tp': 17, 'fp': 3, 'fn': 1}
    ```
Each entry in `image_metrics` contains the following keys:
| Key | Description |
|-------------|---------------------------------------------------|
| `precision` | Precision score for the image (`tp / (tp + fp)`). |
| `recall` | Recall score for the image (`tp / (tp + fn)`). |
| `f1` | Harmonic mean of precision and recall. |
| `tp` | Number of true positives for the image. |
| `fp` | Number of false positives for the image. |
| `fn` | Number of false negatives for the image. |
This feature is available for detection, segmentation, pose, and OBB tasks.
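As a sanity check on those definitions, here is a small dependency-free sketch (the `image_prf` helper is not part of the Ultralytics API) that recomputes precision, recall, and F1 from raw TP/FP/FN counts using the formulas in the table above:

```python
def image_prf(tp: int, fp: int, fn: int) -> dict:
    """Recompute per-image precision, recall, and F1 from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": round(precision, 2), "recall": round(recall, 2), "f1": round(f1, 2)}


# Example counts: 17 true positives, 3 false positives, 1 false negative
print(image_prf(17, 3, 1))  # {'precision': 0.85, 'recall': 0.94, 'f1': 0.89}
```

Note the zero-count guards: an image with no predictions and no labels yields all-zero metrics rather than a division error.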
| Method      | Return Type            | Description                                                                |
|-------------|------------------------|----------------------------------------------------------------------------|
| `summary()` | `List[Dict[str, Any]]` | Converts validation results to a summarized dictionary.                    |
| `to_df()`   | `DataFrame`            | Returns the validation results as a structured Polars DataFrame.           |
| `to_csv()`  | `str`                  | Exports the validation results in CSV format and returns the CSV string.   |
| `to_json()` | `str`                  | Exports the validation results in JSON format and returns the JSON string. |
For more details, see the `DataExportMixin` class documentation.
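To illustrate the kind of output `to_csv()` and `to_json()` produce, here is a minimal dependency-free sketch over a made-up row of metrics (the real methods live on the validation results object; the `rows_to_*` helpers and the metric values below are hypothetical):

```python
import csv
import io
import json

# Hypothetical summarized rows, shaped like a summary() result
rows = [{"class": "person", "mAP50": 0.9, "mAP50-95": 0.7}]


def rows_to_json(rows):
    """Serialize the rows to a JSON string, analogous to to_json()."""
    return json.dumps(rows, indent=2)


def rows_to_csv(rows):
    """Serialize the rows to a CSV string, analogous to to_csv()."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()


print(rows_to_json(rows))
print(rows_to_csv(rows))
```

Both exporters return strings, so the results can be written to disk or piped into other tools directly.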
To validate your YOLO26 model, you can use the Val mode provided by Ultralytics. For example, using the Python API, you can load a model and run validation with:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo26n.pt")

# Validate the model
metrics = model.val()
print(metrics.box.map)  # mAP50-95
```

Alternatively, you can use the command-line interface (CLI):

```bash
yolo val model=yolo26n.pt
```
For further customization, you can adjust various arguments like `imgsz`, `batch`, and `conf` in both Python and CLI modes. Check the Arguments for YOLO Model Validation section for the full list of parameters.
YOLO26 model validation provides several key metrics to assess model performance, including mAP50, mAP75, mAP50-95, per-category mAP values, and per-image precision, recall, and F1.
Using the Python API, you can access these metrics as follows:
```python
metrics = model.val()  # assumes `model` has been loaded

print(metrics.box.map)  # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75
print(metrics.box.maps)  # list of mAP50-95 for each category
print(metrics.box.image_metrics)  # per-image metrics dictionary with precision, recall, F1, TP, FP, and FN
```
For a complete performance evaluation, it's crucial to review all these metrics. For more details, refer to the Key Features of Val Mode.
Using Ultralytics YOLO for validation provides several advantages: training settings are remembered automatically, a broad range of metrics is computed out of the box, and the same workflow is available from both the CLI and the Python API. These benefits ensure that your models are evaluated thoroughly and can be optimized for superior results. Learn more about these advantages in the Why Validate with Ultralytics YOLO section.
Yes, you can validate your YOLO26 model using a custom dataset. Specify the `data` argument with the path to your dataset configuration file. This file should include the path to the validation data.
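As an illustration, a minimal dataset configuration might look like the following (all paths and class names below are hypothetical placeholders, not part of any shipped dataset):

```yaml
# custom_dataset.yaml (hypothetical example)
path: datasets/custom # dataset root directory
train: images/train # training images, relative to path
val: images/val # validation images, relative to path
names:
    0: person
    1: car
```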
!!! note
    Validation is performed using the model's own class names, which you can view with `model.names`, and which may differ from those specified in the dataset configuration file.
Example in Python:
```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo26n.pt")

# Validate with a custom dataset
metrics = model.val(data="path/to/your/custom_dataset.yaml")
print(metrics.box.map)  # mAP50-95
```
Example using CLI:
```bash
yolo val model=yolo26n.pt data=path/to/your/custom_dataset.yaml
```
For more customizable options during validation, see the Example Validation with Arguments section.
To save the validation results to a JSON file, set the `save_json` argument to `True` when running validation. This can be done in both the Python API and CLI.
Example in Python:
```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo26n.pt")

# Save validation results to JSON
metrics = model.val(save_json=True)
```
Example using CLI:
```bash
yolo val model=yolo26n.pt save_json=True
```
This functionality is particularly useful for further analysis or integration with other tools. Check the Arguments for YOLO Model Validation section for more details.