Tutorial

classify/tutorial.ipynb

This Ultralytics YOLOv5 Classification Colab Notebook is the easiest way to get started with YOLO models—no installation needed. Built by Ultralytics, the creators of YOLO, this notebook walks you through running state-of-the-art models directly in your browser.

Ultralytics models are constantly updated for performance and flexibility. They're fast, accurate, and easy to use, and they excel at object detection, tracking, instance segmentation, image classification, and pose estimation.

Find detailed documentation in the Ultralytics Docs. Get support via GitHub Issues. Join discussions on Discord, Reddit, and the Ultralytics Community Forums!

Request an Enterprise License for commercial use at Ultralytics Licensing.

<div> <a href="https://www.youtube.com/watch?v=ZN3nRZT7b24" target="_blank"> </a> <p style="font-size: 16px; font-family: Arial, sans-serif; color: #555;"> <strong>Watch: </strong> How to Train <a href="https://github.com/ultralytics/ultralytics">Ultralytics</a> <a href="https://docs.ultralytics.com/models/yolo11/">YOLO11</a> Model on Custom Dataset using Google Colab Notebook 🚀 </p> </div>

Setup

Clone the GitHub repository, install the dependencies, and check PyTorch and GPU availability.

```python
!git clone https://github.com/ultralytics/yolov5  # clone
%cd yolov5
%pip install -qr requirements.txt  # install

import torch
import utils

display = utils.notebook_init()  # checks
```

1. Predict

classify/predict.py runs YOLOv5 Classification inference on a variety of sources, downloading models automatically from the latest YOLOv5 release, and saving results to runs/predict-cls. Example inference sources are:

```shell
python classify/predict.py --source 0  # webcam
                                    img.jpg  # image
                                    vid.mp4  # video
                                    screen  # screenshot
                                    path/  # directory
                                    'path/*.jpg'  # glob
                                    'https://youtu.be/LNwODJXcvt4'  # YouTube
                                    'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
```

```python
!python classify/predict.py --weights yolov5s-cls.pt --img 224 --source data/images
# display.Image(filename='runs/predict-cls/exp/zidane.jpg', width=600)
```
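Each prediction run is saved to a new incrementing directory (runs/predict-cls/exp, exp2, exp3, ...). The most recent run can be located with a short stdlib sketch — `latest_run` is a hypothetical helper for convenience, not part of the repository:

```python
from pathlib import Path


def latest_run(base="runs/predict-cls", prefix="exp"):
    """Return the newest exp* run directory under base, or None if none exist."""
    runs = []
    for d in Path(base).glob(f"{prefix}*"):
        suffix = d.name[len(prefix):]  # "" for exp, "2" for exp2, ...
        if d.is_dir() and (suffix == "" or suffix.isdigit()):
            runs.append((int(suffix or 1), d))
    return max(runs)[1] if runs else None
```

For example, `display.Image(filename=latest_run() / "zidane.jpg")` would show the most recent saved prediction for that image, assuming it was in the source.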


2. Validate

Validate a model's accuracy on the ImageNet dataset's val or test splits. Models are downloaded automatically from the latest YOLOv5 release. To show results by class, use the --verbose flag.

```python
# Download ImageNet val (6.3 GB, 50,000 images)
!bash data/scripts/get_imagenet.sh --val
```

```python
# Validate YOLOv5s-cls on ImageNet val
!python classify/val.py --weights yolov5s-cls.pt --data ../datasets/imagenet --img 224 --half
```
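The val script reports top-1 and top-5 accuracy. To illustrate what those metrics measure, here is a small NumPy sketch — an independent illustration, not the repository's implementation:

```python
import numpy as np


def topk_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest scores."""
    topk = np.argsort(scores, axis=1)[:, -k:]     # indices of the k best classes
    hits = (topk == labels[:, None]).any(axis=1)  # is the true label in the top k?
    return hits.mean()


scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.2, 0.6]])
labels = np.array([1, 2, 2])
top1 = topk_accuracy(scores, labels, k=1)  # 2 of 3 predictions correct
```

A model's top-5 accuracy is therefore always at least its top-1 accuracy, since the top-1 prediction is contained in the top-5 set.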

3. Train


Train a YOLOv5s Classification model on the Imagenette dataset with --data imagenette160, starting from the pretrained --pretrained yolov5s-cls.pt weights.

  • Pretrained Models are downloaded automatically from the latest YOLOv5 release
  • Training Results are saved to runs/train-cls/ with incrementing run directories, i.e. runs/train-cls/exp2, runs/train-cls/exp3 etc.

A Mosaic Dataloader is used for training which combines 4 images into 1 mosaic.
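The idea of the mosaic can be sketched in a few lines of NumPy — a simplified illustration that only tiles four equally-sized images, without the random scaling, cropping, and label handling the real dataloader applies:

```python
import numpy as np


def make_mosaic(imgs):
    """Tile four equally-sized HxWxC images into one 2Hx2WxC mosaic."""
    a, b, c, d = imgs
    top = np.concatenate([a, b], axis=1)     # left | right
    bottom = np.concatenate([c, d], axis=1)
    return np.concatenate([top, bottom], axis=0)


imgs = [np.full((160, 160, 3), i, dtype=np.uint8) for i in range(4)]
mosaic = make_mosaic(imgs)  # shape (320, 320, 3)
```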

```python
# @title Select YOLOv5 🚀 logger {run: 'auto'}
logger = "Comet"  # @param ['Comet', 'ClearML', 'TensorBoard']

if logger == "Comet":
    %pip install -q comet_ml
    import comet_ml

    comet_ml.init()
elif logger == "ClearML":
    %pip install -q clearml
    import clearml

    clearml.browser_login()
elif logger == "TensorBoard":
    %load_ext tensorboard
    %tensorboard --logdir runs/train
```
```python
# Train YOLOv5s Classification on Imagenette160 for 5 epochs
!python classify/train.py --model yolov5s-cls.pt --data imagenette160 --epochs 5 --img 224 --cache
```

4. Visualize

Comet Logging and Visualization 🌟 NEW

Comet is now fully integrated with YOLOv5. Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with Comet Custom Panels! Comet makes sure you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes!

Getting started is easy:

```shell
pip install comet_ml  # 1. install
export COMET_API_KEY=<Your API Key>  # 2. paste API key
python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt  # 3. train
```
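In a notebook, the API key can also be set from Python rather than the shell (COMET_API_KEY matches the variable name in the export line above; the value below is a placeholder to replace with your own key):

```python
import os

# Placeholder value; replace with your real Comet API key before training.
os.environ["COMET_API_KEY"] = "<Your API Key>"
```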

To learn more about all of the supported Comet features for this integration, check out the Comet Tutorial. If you'd like to learn more about Comet, head over to our documentation. Get started by trying out the Comet Colab Notebook:

<a href="https://bit.ly/yolov5-readme-comet2">Comet Colab Notebook</a>

ClearML Logging and Automation 🌟 NEW

ClearML is completely integrated into YOLOv5 to track your experimentation, manage dataset versions, and even remotely execute training runs. To enable ClearML, run the logger-selection cell above.

You'll get all the features expected from an experiment manager: live updates, model upload, experiment comparison, and so on. ClearML also tracks, for example, uncommitted changes and installed packages, which makes ClearML Tasks (which is what we call experiments) reproducible on different machines! With only one extra line, we can schedule a YOLOv5 training task on a queue to be executed by any number of ClearML Agents (workers).

You can use ClearML Data to version your dataset and then pass it to YOLOv5 simply using its unique ID. This will help you keep track of your data without adding extra hassle. Explore the ClearML Tutorial for details!

<a href="https://cutt.ly/yolov5-notebook-clearml">ClearML Notebook</a>

Local Logging

Training results are automatically logged with TensorBoard and CSV loggers to runs/train, with a new experiment directory created for each new training as runs/train/exp2, runs/train/exp3, etc.

This directory contains train and val statistics, mosaics, labels, predictions, and augmented mosaics, as well as metrics and charts including precision-recall (PR) curves and confusion matrices.
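The per-epoch metrics are also written to a results.csv inside each run directory, so they can be inspected with the standard library alone. A sketch, assuming only that the file is comma-separated with a header row (column names vary by task, so they are read from the header rather than hard-coded):

```python
import csv


def read_results(csv_path):
    """Parse a results.csv into {column_name: [float, ...]}."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    cols = {}
    for row in rows:
        for key, val in row.items():
            cols.setdefault(key.strip(), []).append(float(val))
    return cols
```

For example, `read_results("runs/train-cls/exp/results.csv")` returns each logged column as a list of floats, ready for plotting or comparison across runs.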

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

  • Notebooks with free GPU: <a href="https://bit.ly/yolov5-paperspace-notebook">Paperspace Gradient</a>, <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb">Google Colab</a>, <a href="https://www.kaggle.com/models/ultralytics/yolov5">Kaggle</a>
  • Google Cloud Deep Learning VM. See GCP Quickstart Guide
  • Amazon Deep Learning AMI. See AWS Quickstart Guide
  • Docker Image. See Docker Quickstart Guide (<a href="https://hub.docker.com/r/ultralytics/yolov5">Docker Hub</a>)

Status

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Appendix

Additional content below.

```python
# YOLOv5 PyTorch HUB Inference (DetectionModels only)
import torch

model = torch.hub.load(
    "ultralytics/yolov5", "yolov5s", force_reload=True, trust_repo=True
)  # or yolov5n - yolov5x6 or custom
im = "https://ultralytics.com/images/zidane.jpg"  # file, Path, PIL.Image, OpenCV, nparray, list
results = model(im)  # inference
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
```