docs/en/platform/train/models.md
Ultralytics Platform provides comprehensive model management for training, analyzing, and deploying YOLO models. Upload pretrained models or train new ones directly on the platform.
Upload existing model weights to the platform:
Drag and drop .pt files onto the project page or the models sidebar. Multiple files can be uploaded simultaneously (up to 3 concurrent uploads).
Supported model formats:
| Format | Extension | Description |
|---|---|---|
| PyTorch | .pt | Native Ultralytics format |
After upload, the platform automatically parses the model's metadata.
Train a new model directly on the platform:
See Cloud Training for detailed instructions.
```mermaid
graph LR
    A[Upload .pt] --> B[Overview]
    C[Train] --> B
    B --> D[Predict]
    B --> E[Export]
    B --> F[Deploy]
    E --> G[17+ Formats]
    F --> H[Endpoint]

    style A fill:#4CAF50,color:#fff
    style C fill:#FF9800,color:#fff
    style E fill:#2196F3,color:#fff
    style F fill:#9C27B0,color:#fff
```
Each model page has the following tabs:
| Tab | Content |
|---|---|
| Overview | Model metadata, key metrics, dataset link |
| Train | Training charts, console output, system stats |
| Predict | Interactive browser inference |
| Export | Format conversion with GPU selection |
| Deploy | Endpoint creation and management |
The Overview tab displays model metadata and key metrics.
The Train tab has three subtabs:
Interactive training metric charts showing loss curves and performance metrics over epochs:
| Chart Group | Metrics |
|---|---|
| Metrics | mAP50, mAP50-95, precision, recall |
| Train Loss | train/box_loss, train/cls_loss, train/dfl_loss |
| Val Loss | val/box_loss, val/cls_loss, val/dfl_loss |
| Learning Rate | lr/pg0, lr/pg1, lr/pg2 |
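The per-component losses charted above are combined into the total training objective as a weighted sum. A minimal sketch, using the default loss gains from Ultralytics' `default.yaml` (`box=7.5`, `cls=0.5`, `dfl=1.5`); this illustrates the weighting, not the platform's internal implementation:

```python
def total_loss(box_loss, cls_loss, dfl_loss, gains=(7.5, 0.5, 1.5)):
    # Weighted sum of the three loss components charted in the Train tab.
    # Default gains match Ultralytics hyperparameters: box, cls, dfl.
    g_box, g_cls, g_dfl = gains
    return g_box * box_loss + g_cls * cls_loss + g_dfl * dfl_loss
```

Lower values across all three components generally track the mAP curves in the Metrics chart group.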
Live console output from the training process:
GPU and system metrics during training:
| Metric | Description |
|---|---|
| GPU Util | GPU utilization percentage |
| GPU Memory | GPU memory usage |
| GPU Temp | GPU temperature |
| CPU Usage | CPU utilization |
| RAM | System memory usage |
| Disk | Disk usage |
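The GPU rows come from the training node's drivers, but the CPU and Disk rows can be approximated locally with the Python standard library alone. A rough sketch (GPU metrics are omitted here because they require vendor tools such as `nvidia-smi`):

```python
import os
import shutil

def system_snapshot(path="/"):
    # Local approximation of the CPU and Disk rows above.
    usage = shutil.disk_usage(path)
    return {
        "cpu_count": os.cpu_count() or 1,
        "disk_used_pct": round(usage.used / usage.total * 100, 1),
    }
```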
Run interactive inference directly in the browser:
!!! tip "Quick Testing"
The Predict tab runs inference on Ultralytics Cloud, so you don't need a local GPU. Results are displayed with interactive overlays matching the model's task type.
Export your model to 17+ deployment formats. See Export Model below and the core Export mode guide for full details.
Create and manage dedicated inference endpoints. See Deployments for details.
After training completes, view detailed validation analysis:
An interactive confusion matrix heatmap shows prediction accuracy per class.
Performance curves at different confidence thresholds:
| Curve | Description |
|---|---|
| Precision-Recall | Trade-off between precision and recall |
| F1-Confidence | F1 score at different confidence levels |
| Precision-Confidence | Precision at different confidence levels |
| Recall-Confidence | Recall at different confidence levels |
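Each point on the F1-Confidence curve is the harmonic mean of the precision and recall measured at that confidence threshold. A minimal sketch of the per-point computation:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall at one
    # confidence threshold; defined as 0 when both are 0.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

The confidence value where this curve peaks is often a sensible default threshold for inference.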
```mermaid
graph LR
    A[Select Format] --> B[Configure Args]
    B --> C[Export]
    C --> D{GPU Required?}
    D -->|Yes| E[Cloud GPU Export]
    D -->|No| F[CPU Export]
    E --> G[Download]
    F --> G

    style A fill:#2196F3,color:#fff
    style C fill:#FF9800,color:#fff
    style G fill:#4CAF50,color:#fff
```
Export your model to any of 17+ deployment formats: ONNX, TorchScript, OpenVINO, TensorRT, CoreML, TF SavedModel, TF GraphDef, TF Lite, TF Edge TPU, TF.js, PaddlePaddle, NCNN, MNN, RKNN, IMX500, Axelera, and ExecuTorch.
| Target | Recommended Format | Notes |
|---|---|---|
| NVIDIA GPUs | TensorRT | Maximum inference speed |
| Intel Hardware | OpenVINO | CPUs, GPUs, and VPUs |
| Apple Devices | CoreML | iOS, macOS, Apple Silicon |
| Android | TF Lite or NCNN | Best mobile performance |
| Web Browsers | TF.js or ONNX | ONNX via ONNX Runtime Web |
| Edge Devices | TF Edge TPU or RKNN | Coral and Rockchip (see supported chips) |
| General | ONNX | Works with most runtimes |
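The table above can be condensed into a simple lookup. In this sketch the values are the corresponding Ultralytics `format=` argument names, while the target keys are invented for illustration:

```python
# Deployment target -> Ultralytics export format argument.
# Target keys here are illustrative, not a Platform API.
RECOMMENDED_FORMAT = {
    "nvidia-gpu": "engine",    # TensorRT
    "intel": "openvino",
    "apple": "coreml",
    "android": "tflite",
    "web": "tfjs",
}

def export_format(target):
    # ONNX works with most runtimes, so it is the safe fallback.
    return RECOMMENDED_FORMAT.get(target, "onnx")
```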
When exporting to RKNN format, select your target Rockchip device:
| Chip | Description |
|---|---|
| RK3588 | High-end edge SoC |
| RK3576 | Mid-range edge SoC |
| RK3568 | Mid-range edge SoC |
| RK3566 | Mid-range edge SoC |
| RK3562 | Entry-level edge SoC |
| RV1103 | Vision processor |
| RV1106 | Vision processor |
| RV1103B | Vision processor |
| RV1106B | Vision processor |
| RK2118 | AI processor |
| RV1126B | Vision processor |
Export jobs progress through the following statuses:
| Status | Description |
|---|---|
| Queued | Export job is waiting to start |
| Starting | Export job is initializing |
| Running | Export is in progress |
| Completed | Export finished — download available |
| Failed | Export failed (see error message) |
| Cancelled | Export was cancelled by the user |
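Queued, Starting, and Running are in-flight states; Completed, Failed, and Cancelled are terminal. A sketch of how a polling client might classify them (the function name is illustrative, not a Platform API):

```python
# Terminal statuses: the job will not change state again.
TERMINAL_STATUSES = {"Completed", "Failed", "Cancelled"}

def is_finished(status):
    # Anything else (Queued, Starting, Running) is still in flight.
    return status in TERMINAL_STATUSES
```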
!!! tip "Export Time"
Export time varies by format. TensorRT exports may take several minutes due to engine optimization. GPU-required formats (TensorRT) run on Ultralytics Cloud GPUs — the default export GPU is RTX 5090.
Click Export All to start export jobs for all CPU-based formats with default settings, or Delete All to remove all exports for the model.

Some export formats have architecture or task restrictions:
| Format | Restriction |
|---|---|
| IMX500 | Available only for YOLOv8n and YOLO11n |
| Axelera | Detect models only |
| PaddlePaddle | Not available for YOLO26 detect/segment/pose/OBB models |
!!! note "Additional Export Rules"
- Classification exports do not include NMS.
- CoreML exports with batch sizes greater than `1` use `dynamic=true`.
- Unsupported format/model combinations are disabled in the export dialog before you launch.
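The restriction table can be read as a set of predicates over (architecture, task). A hypothetical checker mirroring it; the format keys and function signature are illustrative, not a Platform API:

```python
# Format -> predicate over (architecture, task); mirrors the table above.
RESTRICTIONS = {
    "imx": lambda arch, task: arch in {"yolov8n", "yolo11n"},
    "axelera": lambda arch, task: task == "detect",
    "paddle": lambda arch, task: not (
        arch.startswith("yolo26") and task in {"detect", "segment", "pose", "obb"}
    ),
}

def export_allowed(fmt, arch, task):
    # Formats absent from the table carry no restriction.
    rule = RESTRICTIONS.get(fmt)
    return True if rule is None else rule(arch, task)
```

This is the kind of check the export dialog applies when it disables unsupported combinations.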
Clone a model to a different project:
The model and its weights are copied to the target project.
Download your model weights:
The .pt file downloads automatically. Exported formats can be downloaded from the Export tab after each export completes.
Models can be linked to their source dataset:
When training with Platform datasets using the ul:// URI format, linking is automatic.
!!! example "Dataset URI Format"
```bash
# Train with a Platform dataset — linking is automatic
yolo train model=yolo26n.pt data=ul://username/datasets/my-dataset epochs=100
```
The `ul://` scheme resolves to your Platform dataset. The trained model's Overview tab will show a link back to this dataset (see [Using Platform Datasets](../api/index.md#using-platform-datasets)).
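Because `ul://` URIs follow standard URL syntax, they can be split with `urllib.parse`. An illustrative parser, not the Platform's actual resolver:

```python
from urllib.parse import urlparse

def parse_ul_uri(uri):
    # Split ul://username/datasets/name into its parts.
    # Illustrative only; the Platform performs its own resolution.
    parts = urlparse(uri)
    if parts.scheme != "ul":
        raise ValueError(f"not a Platform URI: {uri}")
    kind, name = parts.path.strip("/").split("/", 1)
    return {"username": parts.netloc, "kind": kind, "name": name}
```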
Control who can see your model:
| Setting | Description |
|---|---|
| Private | Only you can access |
| Public | Anyone can view on Explore page |
To change visibility, click the visibility badge (e.g., private or public) on the model page. Switching to private takes effect immediately. Switching to public shows a confirmation dialog before applying.
Remove a model you no longer need:
!!! note "Trash and Restore"
Deleted models go to Trash for 30 days. Restore from [Settings > Trash](../account/trash.md).
Ultralytics Platform fully supports all YOLO architectures with dedicated projects:
All architectures support 5 task types: detect, segment, pose, OBB, and classify.
Yes, download your model weights from the model page:
The .pt file downloads automatically.

Currently, model comparison works within a single project. To compare models across projects, first clone them into the same project.
There's no strict limit, but very large models (>2GB) may have longer upload and processing times.
Yes! You can use any of the official YOLO26 models as a base, or select one of your own completed models from the model selector in the training dialog. The Platform supports fine-tuning from any uploaded checkpoint.