examples/tutorial.ipynb
中文 | 한국어 | 日本語 | Русский | Deutsch | Français | Español | Português | Türkçe | Tiếng Việt | العربية
</div>

This Ultralytics Colab Notebook is the easiest way to get started with YOLO models—no installation needed. Built by Ultralytics, the creators of YOLO, this notebook walks you through running state-of-the-art models directly in your browser.
Ultralytics models are constantly updated for performance and flexibility. They're fast, accurate, and easy to use, and they excel at object detection, tracking, instance segmentation, image classification, and pose estimation.
Find detailed documentation in the Ultralytics Docs. Get support via GitHub Issues. Join discussions on Discord, Reddit, and the Ultralytics Community Forums!
Request an Enterprise License for commercial use at Ultralytics Licensing.
Pip install ultralytics and its dependencies, then check software and hardware.
!uv pip install ultralytics
import ultralytics
ultralytics.checks()
YOLO26 can be used directly from the Command Line Interface (CLI) with the yolo command for a variety of tasks and modes, and accepts additional arguments, e.g. imgsz=640. See a full list of available yolo arguments and other details in the YOLO26 Predict Docs.
# Run inference on an image with YOLO26n
!yolo predict model=yolo26n.pt source='https://ultralytics.com/images/zidane.jpg'
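As an aside, the key=value argument style used by the yolo command can be mimicked in a few lines of standard Python. The sketch below illustrates the convention only; it is not the actual ultralytics CLI parser:

```python
# Sketch of yolo-style key=value argument parsing (illustrative only --
# not the actual ultralytics CLI parser).
def parse_cli_args(args):
    """Return a dict of overrides from a list of 'key=value' strings."""
    overrides = {}
    for arg in args:
        key, sep, value = arg.partition("=")
        if not sep:
            raise ValueError(f"expected key=value, got {arg!r}")
        # Coerce obvious numeric values; leave everything else as strings.
        try:
            value = int(value)
        except ValueError:
            try:
                value = float(value)
            except ValueError:
                pass
        overrides[key] = value
    return overrides

print(parse_cli_args(["model=yolo26n.pt", "imgsz=640"]))  # {'model': 'yolo26n.pt', 'imgsz': 640}
```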
Validate a model's accuracy on the COCO dataset's val or test splits. The latest YOLO26 models are downloaded automatically the first time they are used. See YOLO26 Val Docs for more information.
# Download COCO val
from ultralytics.utils.downloads import download
download('https://ultralytics.com/assets/coco2017val.zip', unzip=True, dir='datasets') # download (780MB - 5000 images)
# Validate YOLO26n on COCO8 val
!yolo val model=yolo26n.pt data=coco8.yaml
Train YOLO26 on Detect, Segment, Classify and Pose datasets. See YOLO26 Train Docs for more information.
#@title Select YOLO26 🚀 logger {run: 'auto'}
logger = 'TensorBoard'  #@param ['TensorBoard', 'Weights & Biases']

if logger == 'TensorBoard':
    !yolo settings tensorboard=True
    %load_ext tensorboard
    %tensorboard --logdir .
elif logger == 'Weights & Biases':
    !yolo settings wandb=True
# Train YOLO26n on COCO8 for 3 epochs
!yolo train model=yolo26n.pt data=coco8.yaml epochs=3 imgsz=640
Export a YOLO model to any supported format below with the format argument, e.g. format=onnx. See Export Docs for more information.
| Format | format Argument | Model | Metadata | Arguments |
|---|---|---|---|---|
| PyTorch | - | yolo26n.pt | ✅ | - |
| TorchScript | torchscript | yolo26n.torchscript | ✅ | imgsz, batch, dynamic, optimize, half, nms, device |
| ONNX | onnx | yolo26n.onnx | ✅ | imgsz, batch, dynamic, half, opset, simplify, nms, device |
| OpenVINO | openvino | yolo26n_openvino_model/ | ✅ | imgsz, batch, data, fraction, dynamic, half, int8, nms, device |
| TensorRT | engine | yolo26n.engine | ✅ | imgsz, batch, data, fraction, dynamic, half, int8, simplify, nms, device, workspace |
| CoreML | coreml | yolo26n.mlpackage | ✅ | imgsz, batch, half, int8, nms, device |
| TF SavedModel | saved_model | yolo26n_saved_model/ | ✅ | imgsz, batch, data, fraction, int8, keras, nms, device |
| TF GraphDef | pb | yolo26n.pb | ❌ | imgsz, batch, device |
| TF Lite | tflite | yolo26n.tflite | ✅ | imgsz, batch, data, fraction, half, int8, nms, device |
| TF Edge TPU | edgetpu | yolo26n_edgetpu.tflite | ✅ | imgsz, int8, data, fraction, device |
| TF.js | tfjs | yolo26n_web_model/ | ✅ | imgsz, batch, data, fraction, half, int8, nms, device |
| PaddlePaddle | paddle | yolo26n_paddle_model/ | ✅ | imgsz, batch, device |
| MNN | mnn | yolo26n.mnn | ✅ | imgsz, batch, half, int8, device |
| NCNN | ncnn | yolo26n_ncnn_model/ | ✅ | imgsz, batch, half, device |
| IMX500 | imx | yolo26n_imx_model/ | ✅ | imgsz, int8, data, fraction, device |
| RKNN | rknn | yolo26n_rknn_model/ | ✅ | imgsz, batch, name, device |
| ExecuTorch | executorch | yolo26n_executorch_model/ | ✅ | imgsz, device |
| Axelera AI | axelera | yolo26n_axelera_model/ | ✅ | imgsz, int8, data, fraction, device |
!yolo export model=yolo26n.pt format=torchscript
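The artifact names in the table follow a simple pattern: the checkpoint stem plus a format-specific suffix or directory name. As a hedged illustration of that naming convention (a sketch derived from the table above, not an ultralytics API):

```python
# Expected export artifact for a given model stem and `format` argument,
# following the naming pattern in the table above (illustrative sketch only).
EXPORT_SUFFIXES = {
    "torchscript": "{stem}.torchscript",
    "onnx": "{stem}.onnx",
    "openvino": "{stem}_openvino_model/",
    "engine": "{stem}.engine",
    "coreml": "{stem}.mlpackage",
    "saved_model": "{stem}_saved_model/",
    "pb": "{stem}.pb",
    "tflite": "{stem}.tflite",
    "edgetpu": "{stem}_edgetpu.tflite",
    "tfjs": "{stem}_web_model/",
    "paddle": "{stem}_paddle_model/",
    "mnn": "{stem}.mnn",
    "ncnn": "{stem}_ncnn_model/",
    "imx": "{stem}_imx_model/",
    "rknn": "{stem}_rknn_model/",
    "executorch": "{stem}_executorch_model/",
    "axelera": "{stem}_axelera_model/",
}

def export_artifact(model="yolo26n.pt", fmt="onnx"):
    """Return the expected export artifact name for a model checkpoint."""
    stem = model.rsplit(".", 1)[0]
    return EXPORT_SUFFIXES[fmt].format(stem=stem)

print(export_artifact("yolo26n.pt", "engine"))  # yolo26n.engine
```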
YOLO26 was reimagined using Python-first principles for the most seamless Python YOLO experience yet. YOLO26 models can be loaded from a trained checkpoint or created from scratch. Then methods are used to train, val, predict, and export the model. See detailed Python usage examples in the YOLO26 Python Docs.
from ultralytics import YOLO
# Load a model
model = YOLO('yolo26n.yaml') # build a new model from scratch
model = YOLO('yolo26n.pt') # load a pretrained model (recommended for training)
# Use the model
results = model.train(data='coco8.yaml', epochs=3) # train the model
results = model.val() # evaluate model performance on the validation set
results = model('https://ultralytics.com/images/bus.jpg') # predict on an image
results = model.export(format='onnx') # export the model to ONNX format
YOLO26 can train, val, predict and export models for the most common tasks in vision AI: Detect, Segment, Classify, Pose and OBB. See YOLO26 Tasks Docs for more information.
YOLO26 detection models have no suffix and are the default YOLO26 models, e.g. yolo26n.pt, and are pretrained on COCO. See Detection Docs for full details.
# Load YOLO26n, train it on COCO8 for 3 epochs and predict an image with it
from ultralytics import YOLO
model = YOLO('yolo26n.pt') # load a pretrained YOLO detection model
model.train(data='coco8.yaml', epochs=3) # train the model
model('https://ultralytics.com/images/bus.jpg') # predict on an image
YOLO26 segmentation models use the -seg suffix, e.g. yolo26n-seg.pt, and are pretrained on COCO. See Segmentation Docs for full details.
# Load YOLO26n-seg, train it on COCO8-seg for 3 epochs and predict an image with it
from ultralytics import YOLO
model = YOLO('yolo26n-seg.pt') # load a pretrained YOLO segmentation model
model.train(data='coco8-seg.yaml', epochs=3) # train the model
model('https://ultralytics.com/images/bus.jpg') # predict on an image
YOLO26 classification models use the -cls suffix, e.g. yolo26n-cls.pt, and are pretrained on ImageNet. See Classification Docs for full details.
# Load YOLO26n-cls, train it on mnist160 for 3 epochs and predict an image with it
from ultralytics import YOLO
model = YOLO('yolo26n-cls.pt') # load a pretrained YOLO classification model
model.train(data='mnist160', epochs=3) # train the model
model('https://ultralytics.com/images/bus.jpg') # predict on an image
YOLO26 pose models use the -pose suffix, e.g. yolo26n-pose.pt, and are pretrained on COCO Keypoints. See Pose Docs for full details.
# Load YOLO26n-pose, train it on COCO8-pose for 3 epochs and predict an image with it
from ultralytics import YOLO
model = YOLO('yolo26n-pose.pt') # load a pretrained YOLO pose model
model.train(data='coco8-pose.yaml', epochs=3) # train the model
model('https://ultralytics.com/images/bus.jpg') # predict on an image
YOLO26 OBB models use the -obb suffix, e.g. yolo26n-obb.pt, and are pretrained on the DOTA dataset. See OBB Docs for full details.
# Load YOLO26n-obb, train it on DOTA8 for 3 epochs and predict an image with it
from ultralytics import YOLO
model = YOLO('yolo26n-obb.pt') # load a pretrained YOLO OBB model
model.train(data='dota8.yaml', epochs=3) # train the model
model('https://ultralytics.com/images/boats.jpg') # predict on an image
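The checkpoint naming convention across the task sections above can be summarized in a few lines. This is a sketch of the pattern described in this tutorial (scale letters n/s/m/l/x are assumed from the models named here), not an ultralytics helper:

```python
# Checkpoint naming convention for YOLO26 tasks, per the sections above:
# detection has no suffix; other tasks append a task suffix before ".pt".
TASK_SUFFIX = {
    "detect": "",
    "segment": "-seg",
    "classify": "-cls",
    "pose": "-pose",
    "obb": "-obb",
}

def checkpoint_name(scale="n", task="detect"):
    """Build a pretrained checkpoint name, e.g. ('n', 'pose') -> 'yolo26n-pose.pt'."""
    return f"yolo26{scale}{TASK_SUFFIX[task]}.pt"

print(checkpoint_name("n", "pose"))  # yolo26n-pose.pt
```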
Additional content below.
# Pip install from source
!uv pip install git+https://github.com/ultralytics/ultralytics@main
# Git clone and run tests on 'main' branch
!git clone https://github.com/ultralytics/ultralytics -b main
!uv pip install -qe ultralytics
# Run tests (Git clone only)
!pytest ultralytics/tests
# Validate multiple models
for x in 'nsmlx':
    !yolo val model=yolo26{x}.pt data=coco.yaml
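For clarity, the loop above expands to one validation command per YOLO26 model scale (n, s, m, l, x); the same expansion in plain Python:

```python
# Expand the validation loop into its five CLI commands,
# one per YOLO26 model scale (n, s, m, l, x).
commands = [f"yolo val model=yolo26{x}.pt data=coco.yaml" for x in "nsmlx"]
for cmd in commands:
    print(cmd)
```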