This model was released on 2026-03-11 and added to Hugging Face Transformers on 2026-03-11.
The Canopy Height Maps v2 (CHMv2) model was proposed in *CHMv2: Improvements in Global Canopy Height Mapping using DINOv3*. Building on the original high-resolution canopy height maps released in 2024, CHMv2 delivers substantial improvements in accuracy, detail, and global consistency by leveraging DINOv3, Meta's self-supervised vision model.
You can find more information here, and the original code here.
The abstract from the paper is the following:
Accurate canopy height information is essential for quantifying forest carbon, monitoring restoration and degradation, and assessing habitat structure, yet high-fidelity measurements from airborne laser scanning (ALS) remain unevenly available globally. Here we present CHMv2, a global, meter-resolution canopy height map derived from high-resolution optical satellite imagery using a depth-estimation model built on DINOv3 and trained against ALS canopy height models. Compared to existing products, CHMv2 substantially improves accuracy, reduces bias in tall forests, and better preserves fine-scale structure such as canopy edges and gaps. These gains are enabled by a large expansion of geographically diverse training data, automated data curation and registration, and a loss formulation and data sampling strategy tailored to canopy height distributions. We validate CHMv2 against independent ALS test sets and against tens of millions of GEDI and ICESat-2 observations, demonstrating consistent performance across major forest biomes.
Run inference on an image with the following code:
```py
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vitl16-chmv2-dpt-head")
model = AutoModelForDepthEstimation.from_pretrained("facebook/dinov3-vitl16-chmv2-dpt-head", device_map="auto")

image = Image.open("image.tif")
inputs = processor(images=image, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)

# Rescale the prediction back to the original image resolution
depth = processor.post_process_depth_estimation(
    outputs, target_sizes=[(image.height, image.width)]
)[0]["predicted_depth"]
```
[[autodoc]] CHMv2Config
[[autodoc]] CHMv2ImageProcessor
    - preprocess
    - post_process_depth_estimation
[[autodoc]] CHMv2ForDepthEstimation
    - forward