
This Ultralytics Colab Notebook is the easiest way to get started with YOLO models—no installation needed. Built by Ultralytics, the creators of YOLO, this notebook walks you through running state-of-the-art models directly in your browser.

Ultralytics models are constantly updated for performance and flexibility. They're fast, accurate, and easy to use, and they excel at object detection, tracking, instance segmentation, image classification, and pose estimation.

Find detailed documentation in the Ultralytics Docs. Get support via GitHub Issues. Join discussions on Discord, Reddit, and the Ultralytics Community Forums!

Request an Enterprise License for commercial use at Ultralytics Licensing.

Setup

Install the ultralytics package and its dependencies with pip, then run checks to verify the software and hardware environment.

!uv pip install ultralytics
import ultralytics
ultralytics.checks()

1. Predict

YOLO26 may be used directly in the Command Line Interface (CLI) with a yolo command for a variety of tasks and modes, and accepts additional arguments, e.g. imgsz=640. See a full list of available yolo arguments and other details in the YOLO26 Predict Docs.

# Run inference on an image with YOLO26n
!yolo predict model=yolo26n.pt source='https://ultralytics.com/images/zidane.jpg'
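CLI arguments are plain key=value tokens appended after the mode. As a purely illustrative helper (not part of the ultralytics API), the command above could be assembled from keyword arguments like this:

```python
def yolo_cli(mode, **overrides):
    """Build a 'yolo' CLI command string from keyword overrides.

    Illustrative sketch only: each keyword becomes a key=value token,
    mirroring the CLI syntax shown above.
    """
    tokens = ['yolo', mode] + [f'{k}={v}' for k, v in overrides.items()]
    return ' '.join(tokens)

cmd = yolo_cli('predict', model='yolo26n.pt',
               source='https://ultralytics.com/images/zidane.jpg')
print(cmd)  # -> yolo predict model=yolo26n.pt source=https://ultralytics.com/images/zidane.jpg
```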

        

2. Val

Validate a model's accuracy on a dataset's val or test split. The latest YOLO26 models are downloaded automatically the first time they are used. The cell below downloads the full COCO val images, then validates YOLO26n on the small COCO8 dataset for a quick check. See YOLO26 Val Docs for more information.

# Download COCO val
from ultralytics.utils.downloads import download

download('https://ultralytics.com/assets/coco2017val.zip', unzip=True, dir='datasets') # download (780MB - 5000 images)
# Validate YOLO26n on COCO8 val
!yolo val model=yolo26n.pt data=coco8.yaml
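Validation metrics such as mAP are built on intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal, self-contained sketch of IoU for (x1, y1, x2, y2) boxes, for illustration only:

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(box_iou((0, 0, 10, 10), (0, 0, 10, 10)))   # identical boxes -> 1.0
print(box_iou((0, 0, 10, 10), (20, 20, 30, 30))) # disjoint boxes -> 0.0
```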

3. Train

<p align=""><a href="https://ultralytics.com/hub"></a></p>

Train YOLO26 on Detect, Segment, Classify and Pose datasets. See YOLO26 Train Docs for more information.

#@title Select YOLO26 🚀 logger {run: 'auto'}
logger = 'TensorBoard' #@param ['TensorBoard', 'Weights & Biases']

if logger == 'TensorBoard':
  !yolo settings tensorboard=True
  %load_ext tensorboard
  %tensorboard --logdir .
elif logger == 'Weights & Biases':
  !yolo settings wandb=True
# Train YOLO26n on COCO8 for 3 epochs
!yolo train model=yolo26n.pt data=coco8.yaml epochs=3 imgsz=640
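Training runs ramp the learning rate up over an initial warmup phase before settling into the main schedule; in ultralytics this is governed by hyperparameters such as lr0 and warmup_epochs. A generic sketch of linear warmup, for illustration only (not the exact ultralytics schedule):

```python
def warmup_lr(step, warmup_steps, base_lr):
    """Linear learning-rate warmup: ramp from a fraction of base_lr
    to the full value over warmup_steps, then hold at base_lr.

    Illustrative sketch only, not the exact ultralytics scheduler.
    """
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

for step in (0, 4, 10):
    print(step, warmup_lr(step, warmup_steps=10, base_lr=1.0))
```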

4. Export

Export a YOLO model to any supported format below with the format argument, e.g. format=onnx. See Export Docs for more information.

  • 💡 ProTip: Export to ONNX or OpenVINO for up to 3x CPU speedup.
  • 💡 ProTip: Export to TensorRT for up to 5x GPU speedup.
| Format | format Argument | Model | Arguments |
|---|---|---|---|
| PyTorch | - | yolo26n.pt | - |
| TorchScript | torchscript | yolo26n.torchscript | imgsz, batch, dynamic, optimize, half, nms, device |
| ONNX | onnx | yolo26n.onnx | imgsz, batch, dynamic, half, opset, simplify, nms, device |
| OpenVINO | openvino | yolo26n_openvino_model/ | imgsz, batch, data, fraction, dynamic, half, int8, nms, device |
| TensorRT | engine | yolo26n.engine | imgsz, batch, data, fraction, dynamic, half, int8, simplify, nms, device, workspace |
| CoreML | coreml | yolo26n.mlpackage | imgsz, batch, half, int8, nms, device |
| TF SavedModel | saved_model | yolo26n_saved_model/ | imgsz, batch, data, fraction, int8, keras, nms, device |
| TF GraphDef | pb | yolo26n.pb | imgsz, batch, device |
| TF Lite | tflite | yolo26n.tflite | imgsz, batch, data, fraction, half, int8, nms, device |
| TF Edge TPU | edgetpu | yolo26n_edgetpu.tflite | imgsz, int8, data, fraction, device |
| TF.js | tfjs | yolo26n_web_model/ | imgsz, batch, data, fraction, half, int8, nms, device |
| PaddlePaddle | paddle | yolo26n_paddle_model/ | imgsz, batch, device |
| MNN | mnn | yolo26n.mnn | imgsz, batch, half, int8, device |
| NCNN | ncnn | yolo26n_ncnn_model/ | imgsz, batch, half, device |
| IMX500 | imx | yolo26n_imx_model/ | imgsz, int8, data, fraction, device |
| RKNN | rknn | yolo26n_rknn_model/ | imgsz, batch, name, device |
| ExecuTorch | executorch | yolo26n_executorch_model/ | imgsz, device |
| Axelera AI | axelera | yolo26n_axelera_model/ | imgsz, int8, data, fraction, device |
!yolo export model=yolo26n.pt format=torchscript
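Each format in the table above produces a distinct output artifact. As a purely illustrative sketch (a plain dictionary, not an ultralytics API), a few of those pairings:

```python
# Illustrative mapping from a few export formats to their output
# artifacts, mirroring the table above. Not part of the ultralytics API.
EXPORT_ARTIFACTS = {
    'torchscript': 'yolo26n.torchscript',
    'onnx': 'yolo26n.onnx',
    'engine': 'yolo26n.engine',
    'openvino': 'yolo26n_openvino_model/',
}

def artifact_for(fmt):
    """Look up the exported file or directory for a format argument."""
    return EXPORT_ARTIFACTS[fmt]

print(artifact_for('onnx'))  # -> yolo26n.onnx
```

Exported models can generally be loaded back for inference the same way as .pt weights, e.g. YOLO('yolo26n.torchscript').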

5. Python Usage

YOLO26 was reimagined using Python-first principles for the most seamless Python YOLO experience yet. YOLO26 models can be loaded from a trained checkpoint or created from scratch. Then methods are used to train, val, predict, and export the model. See detailed Python usage examples in the YOLO26 Python Docs.

from ultralytics import YOLO

# Load a model
model = YOLO('yolo26n.yaml')  # build a new model from scratch
model = YOLO('yolo26n.pt')  # load a pretrained model (recommended for training)

# Use the model
results = model.train(data='coco8.yaml', epochs=3)  # train the model
results = model.val()  # evaluate model performance on the validation set
results = model('https://ultralytics.com/images/bus.jpg')  # predict on an image
results = model.export(format='onnx')  # export the model to ONNX format
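Each prediction call returns Results objects whose boxes attribute exposes xyxy coordinates, conf scores, and cls ids. A minimal sketch of confidence-threshold filtering over that layout, using plain Python lists in place of tensors:

```python
def filter_detections(boxes, scores, threshold=0.5):
    """Keep only detections whose confidence meets the threshold.

    boxes: list of (x1, y1, x2, y2) tuples, matching the layout of
    results[0].boxes.xyxy; scores: matching confidence values.
    """
    return [(b, s) for b, s in zip(boxes, scores) if s >= threshold]

# Example with synthetic detections
boxes = [(0, 0, 100, 200), (50, 50, 80, 90)]
scores = [0.9, 0.3]
print(filter_detections(boxes, scores))  # only the first box survives
```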

6. Tasks

YOLO26 can train, val, predict and export models for the most common tasks in vision AI: Detect, Segment, Classify, Pose and OBB. See YOLO26 Tasks Docs for more information.

1. Detection

YOLO26 detection models have no suffix and are the default YOLO26 models, e.g. yolo26n.pt, and are pretrained on COCO. See Detection Docs for full details.

# Load YOLO26n, train it on COCO8 for 3 epochs and predict an image with it
from ultralytics import YOLO

model = YOLO('yolo26n.pt')  # load a pretrained YOLO detection model
model.train(data='coco8.yaml', epochs=3)  # train the model
model('https://ultralytics.com/images/bus.jpg')  # predict on an image

2. Segmentation

YOLO26 segmentation models use the -seg suffix, e.g. yolo26n-seg.pt, and are pretrained on COCO. See Segmentation Docs for full details.

# Load YOLO26n-seg, train it on COCO8-seg for 3 epochs and predict an image with it
from ultralytics import YOLO

model = YOLO('yolo26n-seg.pt')  # load a pretrained YOLO segmentation model
model.train(data='coco8-seg.yaml', epochs=3)  # train the model
model('https://ultralytics.com/images/bus.jpg')  # predict on an image
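Segmentation results carry per-object masks in results[0].masks. A minimal sketch of the kind of post-processing applied to such masks, using a plain nested list of 0/1 values in place of a tensor:

```python
def mask_area(mask):
    """Pixel area of a binary segmentation mask (nested list of 0/1 rows).

    Illustrative sketch of post-processing for binary masks like those
    exposed by results[0].masks.data.
    """
    return sum(sum(row) for row in mask)

mask = [
    [0, 1, 1],
    [0, 1, 0],
    [0, 0, 0],
]
print(mask_area(mask))  # -> 3
```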

3. Classification

YOLO26 classification models use the -cls suffix, e.g. yolo26n-cls.pt, and are pretrained on ImageNet. See Classification Docs for full details.

# Load YOLO26n-cls, train it on mnist160 for 3 epochs and predict an image with it
from ultralytics import YOLO

model = YOLO('yolo26n-cls.pt')  # load a pretrained YOLO classification model
model.train(data='mnist160', epochs=3)  # train the model
model('https://ultralytics.com/images/bus.jpg')  # predict on an image
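Classification results expose class probabilities via results[0].probs, including top-1 and top-5 summaries. A minimal sketch of top-k selection over a plain probability list, for illustration:

```python
def top_k(probs, names, k=2):
    """Return the k highest-probability (name, prob) pairs.

    Illustrative sketch of what results[0].probs.top5 summarizes
    for classification models.
    """
    ranked = sorted(zip(names, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

print(top_k([0.1, 0.7, 0.2], ['cat', 'dog', 'bird'], k=2))  # -> [('dog', 0.7), ('bird', 0.2)]
```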

4. Pose

YOLO26 pose models use the -pose suffix, e.g. yolo26n-pose.pt, and are pretrained on COCO Keypoints. See Pose Docs for full details.

# Load YOLO26n-pose, train it on COCO8-pose for 3 epochs and predict an image with it
from ultralytics import YOLO

model = YOLO('yolo26n-pose.pt')  # load a pretrained YOLO pose model
model.train(data='coco8-pose.yaml', epochs=3)  # train the model
model('https://ultralytics.com/images/bus.jpg')  # predict on an image
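Pose results expose per-person keypoints via results[0].keypoints (COCO pose models predict 17 keypoints per person). A minimal sketch of measuring the distance between two keypoints given as (x, y) pairs:

```python
import math

def keypoint_distance(kpts, i, j):
    """Euclidean distance between keypoints i and j, given as (x, y) pairs.

    Illustrative sketch of post-processing for coordinates like those
    exposed by results[0].keypoints.xy.
    """
    (x1, y1), (x2, y2) = kpts[i], kpts[j]
    return math.hypot(x2 - x1, y2 - y1)

kpts = [(0.0, 0.0), (3.0, 4.0)]
print(keypoint_distance(kpts, 0, 1))  # -> 5.0
```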

5. Oriented Bounding Boxes (OBB)

YOLO26 OBB models use the -obb suffix, e.g. yolo26n-obb.pt, and are pretrained on the DOTA dataset. See OBB Docs for full details.

# Load YOLO26n-obb, train it on DOTA8 for 3 epochs and predict an image with it
from ultralytics import YOLO

model = YOLO('yolo26n-obb.pt')  # load a pretrained YOLO OBB model
model.train(data='dota8.yaml', epochs=3)  # train the model
model('https://ultralytics.com/images/boats.jpg')  # predict on an image
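Oriented boxes are described by a center, width, height, and rotation, matching the xywhr layout exposed by results[0].obb. A minimal sketch of recovering the four corner points from that parameterization:

```python
import math

def obb_corners(cx, cy, w, h, angle):
    """Corner points of an oriented box given (cx, cy, w, h, rotation in radians).

    Illustrative sketch matching the xywhr layout exposed by
    results[0].obb for OBB models.
    """
    c, s = math.cos(angle), math.sin(angle)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)):
        # Rotate each half-extent offset and translate by the center
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners

# With zero rotation this reduces to an axis-aligned box
print(obb_corners(0.0, 0.0, 2.0, 2.0, 0.0))  # -> [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
```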

Appendix

Additional content below.

# Pip install from source
!uv pip install git+https://github.com/ultralytics/ultralytics@main
# Git clone and run tests on 'main' branch
!git clone https://github.com/ultralytics/ultralytics -b main
!uv pip install -qe ultralytics
# Run tests (Git clone only)
!pytest ultralytics/tests
# Validate multiple models
for x in 'nsmlx':
  !yolo val model=yolo26{x}.pt data=coco.yaml