docs/en/index.md
<a href="https://platform.ultralytics.com/ultralytics/yolo26?utm_source=docs&utm_medium=referral&utm_campaign=platform_launch&utm_content=banner&utm_term=ultralytics_docs" target="_blank"></a>
<a href="https://docs.ultralytics.com/zh/">中文</a> · <a href="https://docs.ultralytics.com/ko/">한국어</a> · <a href="https://docs.ultralytics.com/ja/">日本語</a> · <a href="https://docs.ultralytics.com/ru/">Русский</a> · <a href="https://docs.ultralytics.com/de/">Deutsch</a> · <a href="https://docs.ultralytics.com/fr/">Français</a> · <a href="https://docs.ultralytics.com/es/">Español</a> · <a href="https://docs.ultralytics.com/pt/">Português</a> · <a href="https://docs.ultralytics.com/tr/">Türkçe</a> · <a href="https://docs.ultralytics.com/vi/">Tiếng Việt</a> · <a href="https://docs.ultralytics.com/ar/">العربية</a>
<a href="https://github.com/ultralytics/ultralytics/actions/workflows/ci.yml"></a>
<a href="https://clickpy.clickhouse.com/dashboard/ultralytics"></a>
<a href="https://zenodo.org/badge/latestdoi/264818686"></a>
<a href="https://discord.com/invite/ultralytics"></a>
<a href="https://community.ultralytics.com/"></a>
<a href="https://www.reddit.com/r/ultralytics/"></a>
<a href="https://console.paperspace.com/github/ultralytics/ultralytics"></a>
<a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb"></a>
<a href="https://www.kaggle.com/models/ultralytics/yolo26"></a>
<a href="https://mybinder.org/v2/gh/ultralytics/ultralytics/HEAD?labpath=examples%2Ftutorial.ipynb"></a>
Introducing Ultralytics YOLO26, the latest version of the acclaimed real-time object detection and image segmentation model. YOLO26 is built on deep learning and computer vision advancements, featuring end-to-end NMS-free inference and optimized edge deployment. Its streamlined design makes it suitable for various applications and easily adaptable to different hardware platforms, from edge devices to cloud APIs. For stable production workloads, both YOLO26 and YOLO11 are recommended.
Explore the Ultralytics Docs, a comprehensive resource designed to help you understand and use YOLO's features and capabilities. Whether you are a seasoned machine learning practitioner or new to the field, this hub aims to help you get the most out of YOLO in your projects.
Request an Enterprise License for commercial use at Ultralytics Licensing.
<div align="center"><a href="https://github.com/ultralytics"></a>
<a href="https://www.linkedin.com/company/ultralytics/"></a>
<a href="https://twitter.com/ultralytics"></a>
<a href="https://www.youtube.com/ultralytics?sub_confirmation=1"></a>
<a href="https://www.tiktok.com/@ultralytics"></a>
<a href="https://ultralytics.com/bilibili"></a>
<a href="https://discord.com/invite/ultralytics"></a>
:material-clock-fast:{ .lg .middle } Getting Started
Install ultralytics with pip and get up and running in minutes to train a YOLO model
:material-image:{ .lg .middle } Predict
Predict on new images, videos and streams with YOLO
:fontawesome-solid-brain:{ .lg .middle } Train a Model
Train a new YOLO model on your own custom dataset from scratch, or fine-tune a pretrained model
:material-magnify-expand:{ .lg .middle } Explore Computer Vision Tasks
Discover YOLO tasks like detect, segment, classify, pose, OBB and track
:rocket:{ .lg .middle } Explore YOLO26 🚀 NEW
Discover Ultralytics' latest YOLO26 models with NMS-free inference and edge optimization
:material-select-all:{ .lg .middle } SAM 3: Segment Anything with Concepts 🚀 NEW
Meta's latest SAM 3 with Promptable Concept Segmentation - segment all instances using text or image exemplars
:material-scale-balance:{ .lg .middle } Open Source, AGPL-3.0
Ultralytics offers two YOLO licenses: AGPL-3.0 and Enterprise. Explore YOLO on GitHub.
<strong>Watch:</strong> How to Train a YOLO26 model on Your Custom Dataset in <a href="https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb" target="_blank">Google Colab</a>.
YOLO (You Only Look Once), a popular object detection and image segmentation model, was developed by Joseph Redmon and Ali Farhadi at the University of Washington. Launched in 2015, YOLO gained popularity for its high speed and accuracy.
Ultralytics offers two licensing options to accommodate diverse use cases: the AGPL-3.0 License for open-source work and an Enterprise License for commercial products and services.
Our licensing strategy is designed to ensure that any improvements to our open-source projects are returned to the community. We believe in open source, and our mission is to ensure that our contributions can be used and expanded in ways that benefit everyone.
Object detection has evolved significantly over the years, from traditional computer vision techniques to advanced deep learning models. The YOLO family of models has been at the forefront of this evolution, consistently pushing the boundaries of what's possible in real-time object detection.
YOLO's unique approach treats object detection as a single regression problem, predicting bounding boxes and class probabilities directly from full images in one evaluation. This revolutionary method has made YOLO models significantly faster than previous two-stage detectors while maintaining high accuracy.
With each new version, YOLO has introduced architectural improvements and innovative techniques that have enhanced performance across various metrics. YOLO26 continues this tradition by incorporating the latest advancements in computer vision research, featuring end-to-end NMS-free inference and optimized edge deployment for real-world applications.
Ultralytics YOLO is the acclaimed YOLO (You Only Look Once) series for real-time object detection and image segmentation. The latest model, YOLO26, builds on previous versions by introducing end-to-end NMS-free inference and optimized edge deployment. YOLO supports various vision AI tasks such as detection, segmentation, pose estimation, tracking, and classification. Its efficient architecture ensures excellent speed and accuracy, making it suitable for diverse applications, including edge devices and cloud APIs.
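To illustrate how a single interface covers these tasks, here is a minimal sketch: the same `YOLO` class loads task-specific weights, and the weights file determines whether the model detects, segments, or estimates pose. The segmentation and pose weight names below follow Ultralytics' usual suffix convention and are assumptions, not confirmed YOLO26 file names.

```python
from ultralytics import YOLO

# The weights file determines the task; the Python interface stays the same.
# The -seg and -pose names follow the usual Ultralytics suffix convention and
# are assumptions here, not confirmed YOLO26 release file names.
detector = YOLO("yolo26n.pt")  # object detection
segmenter = YOLO("yolo26n-seg.pt")  # instance segmentation (assumed name)
pose_model = YOLO("yolo26n-pose.pt")  # pose estimation (assumed name)

# Each model is called the same way and returns a list of Results objects
results = detector("https://ultralytics.com/images/bus.jpg")
print(results[0].boxes.xyxy)  # bounding boxes in xyxy pixel coordinates
```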
Getting started with YOLO is quick and straightforward. You can install the Ultralytics package using pip and get up and running in minutes. Here's a basic installation command:
!!! example "Installation using pip"
=== "CLI"
```bash
pip install -U ultralytics
```
For a comprehensive step-by-step guide, visit our Quickstart page. This resource will help you with installation instructions, initial setup, and running your first model.
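As a quick check that the installation works end to end, a minimal first prediction can look like the sketch below, assuming the yolo26n.pt weights used in the examples that follow and a publicly hosted sample image.

```python
from ultralytics import YOLO

# Load the pretrained nano detection model used throughout these examples
model = YOLO("yolo26n.pt")

# Run a single prediction on a sample image to verify the installation
results = model.predict(source="https://ultralytics.com/images/bus.jpg")

# Show the annotated result in a window (or use results[0].save() in headless environments)
results[0].show()
```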
Training a custom YOLO model on your dataset involves a few key steps:

1. Prepare your annotated dataset in the YOLO format.
2. Create a dataset YAML file pointing to your training and validation data.
3. Use the yolo TASK train command to start training. (Each TASK has its own arguments.)

Here's example code for the Object Detection Task:
!!! example "Train Example for Object Detection Task"
=== "Python"
```python
from ultralytics import YOLO
# Load a pretrained YOLO model (you can choose n, s, m, l, or x versions)
model = YOLO("yolo26n.pt")
# Start training on your custom dataset
model.train(data="path/to/dataset.yaml", epochs=100, imgsz=640)
```
=== "CLI"
```bash
# Train a YOLO model from the command line
yolo detect train data=path/to/dataset.yaml epochs=100 imgsz=640
```
For a detailed walkthrough, check out our Train a Model guide, which includes examples and tips for optimizing your training process.
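Once training completes, a common next step is to validate the best checkpoint and export it for deployment. The sketch below assumes the default Ultralytics output path runs/detect/train/weights/best.pt; adjust it to wherever your run saved its weights.

```python
from ultralytics import YOLO

# Load the best checkpoint from a finished training run
# (runs/detect/train/weights/best.pt is the default output location; adjust as needed)
model = YOLO("runs/detect/train/weights/best.pt")

# Evaluate on the validation split defined in your dataset YAML
metrics = model.val()
print(metrics.box.map)  # mAP50-95 on the validation set

# Export the model for deployment, e.g. to ONNX
model.export(format="onnx")
```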
Ultralytics offers two licensing options for YOLO: the AGPL-3.0 License for open-source use and an Enterprise License for commercial applications.
For more details, visit our Licensing page.
Ultralytics YOLO supports efficient and customizable multi-object tracking. To utilize tracking capabilities, you can use the yolo track command, as shown below:
!!! example "Example for Object Tracking on a Video"
=== "Python"
```python
from ultralytics import YOLO
# Load a pretrained YOLO model
model = YOLO("yolo26n.pt")
# Start tracking objects in a video
# You can also use live video streams or webcam input
model.track(source="path/to/video.mp4")
```
=== "CLI"
```bash
# Perform object tracking on a video from the command line
# You can specify different sources like webcam (0) or RTSP streams
yolo track source=path/to/video.mp4
```
For a detailed guide on setting up and running object tracking, check our Track Mode documentation, which explains the configuration and practical applications in real-time scenarios.
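For longer videos or live streams, a common pattern is to iterate over results frame by frame and keep track IDs persistent across frames. The sketch below is one way to do this, using the bytetrack.yaml tracker configuration that ships with Ultralytics; the video path is a placeholder.

```python
from ultralytics import YOLO

# Load a pretrained detection model to use for tracking
model = YOLO("yolo26n.pt")

# stream=True yields results frame by frame, persist=True keeps IDs between frames,
# and tracker selects the tracker config (bytetrack.yaml or botsort.yaml ship with Ultralytics)
for result in model.track(source="path/to/video.mp4", stream=True, persist=True, tracker="bytetrack.yaml"):
    if result.boxes.id is not None:
        print(result.boxes.id.int().tolist())  # track IDs for the current frame
```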