> [!IMPORTANT]
> **⚠️ Not for Benchmarking! ⚠️**
>
> These examples are designed solely to demonstrate CUTLASS functionality and are NOT optimized for performance benchmarking.
> For accurate performance measurements, use the CUTLASS Profiler (recommended); if a kernel is not available via the profiler, manually auto-tune the example.
- Launches a basic GEMM with single-precision inputs and outputs
- Demonstrates CUTLASS utilities for allocating and initializing tensors
- Debugging utilities for printing register and shared memory contents
- Utility for visualizing all layout functions in CUTLASS
- Example demonstrating an iterator over tiles in memory
- Example demonstrating CUTLASS's batched strided GEMM operation
- Example demonstrating CUTLASS's Split-K parallel reduction kernel
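The Split-K idea behind the reduction example above can be sketched in a few lines. This is an illustrative pure-Python sketch, not CUTLASS code; the function names (`gemm`, `split_k_gemm`) are hypothetical:

```python
# Illustrative sketch (not CUTLASS code) of the Split-K decomposition:
# the K dimension of C = A @ B is partitioned across "workers"; each worker
# computes a partial GEMM over its K-slice, and a final parallel reduction
# sums the partial results element-wise.

def gemm(A, B):
    """Reference matrix multiply on nested lists."""
    m, k, n = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def split_k_gemm(A, B, splits):
    k = len(B)
    # Each worker handles a contiguous K-slice.
    bounds = [(s * k) // splits for s in range(splits + 1)]
    partials = []
    for s in range(splits):
        lo, hi = bounds[s], bounds[s + 1]
        A_slice = [row[lo:hi] for row in A]
        B_slice = B[lo:hi]
        partials.append(gemm(A_slice, B_slice))
    # Reduction step: sum the partial products element-wise.
    m, n = len(A), len(B[0])
    return [[sum(p[i][j] for p in partials) for j in range(n)] for i in range(m)]

A = [[1, 2, 3, 4], [5, 6, 7, 8]]
B = [[1, 0], [0, 1], [2, 2], [1, 3]]
assert split_k_gemm(A, B, splits=2) == gemm(A, B)
```

Splitting K trades extra reduction work for more parallelism, which helps when M and N are too small to fill the GPU.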
- Example demonstrating mixed-precision GEMM using Volta Tensor Cores
- Example demonstrating integer GEMM using Turing Tensor Cores
- **09_turing_tensorop_conv2dfprop**: Example demonstrating integer implicit GEMM convolution (forward propagation) using Turing Tensor Cores
- Example demonstrating planar complex GEMM kernels
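In a planar complex layout, the real and imaginary parts live in separate real-valued planes, so a complex GEMM decomposes into four real GEMMs. A minimal pure-Python sketch of that decomposition (not CUTLASS code; `gemm`, `mat_add`, and `planar_complex_gemm` are hypothetical names):

```python
# Illustrative sketch (not CUTLASS code) of planar complex GEMM:
#   C_re = A_re*B_re - A_im*B_im,   C_im = A_re*B_im + A_im*B_re

def gemm(A, B):
    m, k, n = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def mat_add(X, Y, sign=1):
    return [[x + sign * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def planar_complex_gemm(A_re, A_im, B_re, B_im):
    C_re = mat_add(gemm(A_re, B_re), gemm(A_im, B_im), sign=-1)
    C_im = mat_add(gemm(A_re, B_im), gemm(A_im, B_re), sign=+1)
    return C_re, C_im

# Check against Python's built-in complex arithmetic.
A_re, A_im = [[1, 2]], [[3, 4]]
B_re, B_im = [[5], [6]], [[7], [8]]
C_re, C_im = planar_complex_gemm(A_re, A_im, B_re, B_im)
ref = (1 + 3j) * (5 + 7j) + (2 + 4j) * (6 + 8j)
assert C_re[0][0] == ref.real and C_im[0][0] == ref.imag
```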
- Example demonstrating planar complex kernels with batch-specific problem sizes
- Example demonstrating GEMM fused with bias and ReLU
- Example demonstrating two GEMMs or convolutions fused in one kernel
- Example demonstrating FP32 GEMM with implicit TF32 conversion
- **15_ampere_sparse_tensorop_gemm**: Example demonstrating usage of Sparse Tensor Cores
- **16_ampere_tensorop_conv2dfprop**: Example demonstrating forward convolution on tensors of layout NHWC
- Example demonstrating convolution fused with per-channel bias and ReLU
- **18_ampere_fp64_tensorop_affine2_gemm**: Example demonstrating Affine-2 GEMM
- Canonical GEMM using Tensor Cores
- Canonical GEMM using SIMT
- Example demonstrating quaternion GEMM computations
- Example demonstrating quaternion convolution
- **23_ampere_gemm_operand_reduction_fusion**: Example demonstrating how to reduce one of the GEMM operands along the k-dimension while computing the GEMM
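The operand-reduction fusion above exploits the fact that the GEMM mainloop already walks the k-dimension, so an operand can be reduced "for free" as it is loaded. An illustrative pure-Python sketch (not CUTLASS code; `gemm_with_a_reduction` is a hypothetical name):

```python
# Illustrative sketch (not CUTLASS code) of operand-reduction fusion:
# while accumulating C = A @ B, each load of A[i][p] also feeds a
# reduction of A along k (here: row sums), reusing loads the GEMM
# performs anyway instead of a separate reduction pass.

def gemm_with_a_reduction(A, B):
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    A_rowsum = [0] * m                  # reduction of A along k
    for i in range(m):
        for p in range(k):              # the "mainloop" over k
            a = A[i][p]                 # single load of A[i][p] ...
            A_rowsum[i] += a            # ... feeds the reduction
            for j in range(n):
                C[i][j] += a * B[p][j]  # ... and the accumulation
    return C, A_rowsum

C, A_rowsum = gemm_with_a_reduction([[1, 2, 3], [4, 5, 6]], [[1], [1], [1]])
assert A_rowsum == [6, 15]
assert C == [[6], [15]]
```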
- Example demonstrating a batch of GEMM operations with distinct problem sizes
- **25_ampere_fprop_mainloop_fusion**: Example demonstrating fusing the activation's per-channel scale+bias+ReLU into the fprop mainloop
- **26_ampere_wgrad_mainloop_fusion**: Example demonstrating fusing the activation's per-channel scale+bias+ReLU into the wgrad mainloop
- **27_ampere_3xtf32_fast_accurate_tensorop_gemm**: Example demonstrating emulation of a fast, accurate SGEMM with TF32 operations
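The "3x" in the 3xTF32 examples refers to splitting each FP32 value into a TF32-representable high part plus a residual low part, then forming three products (`hi*hi + hi*lo + lo*hi`) so that only the tiny `lo*lo` term is dropped. A pure-Python sketch of the idea (not CUTLASS code; TF32's ~11-bit mantissa is simulated with `math.frexp`/`math.ldexp`, and the partial products are accumulated here in full Python floats, standing in for FP32 accumulation):

```python
# Illustrative sketch (not CUTLASS code) of 3xTF32 emulation of an
# FP32 multiply out of TF32-precision products.

import math

def tf32(x):
    """Round x to roughly TF32 precision (11 significant mantissa bits)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e, 0.5 <= |m| < 1
    return math.ldexp(round(m * 2**11) / 2**11, e)

def mul_1x(x, y):
    """Naive emulated multiply: one TF32 product."""
    return tf32(x) * tf32(y)

def mul_3x(x, y):
    """3xTF32 multiply: split each input and form three products."""
    x_hi = tf32(x); x_lo = tf32(x - x_hi)
    y_hi = tf32(y); y_lo = tf32(y - y_hi)
    return x_hi * y_hi + x_hi * y_lo + x_lo * y_hi

x, y = math.pi, math.e
exact = x * y
# The 3-product scheme is far closer to FP32 than a single TF32 product.
assert abs(mul_3x(x, y) - exact) < abs(mul_1x(x, y) - exact)
```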
- **28_ampere_3xtf32_fast_accurate_tensorop_fprop**: Example demonstrating emulation of a fast, accurate FP32 convolution with TF32 operations
- **29_ampere_3xtf32_fast_accurate_tensorop_complex_gemm**: Example demonstrating emulation of a fast, accurate CGEMM with TF32 operations
- Example demonstrating how to compute the conv2d gradient with respect to the weights (wgrad) together with Split-K
- Example demonstrating symmetric rank-k (SYRK) update
- Example demonstrating triangular matrix-matrix multiplication (TRMM)
- **33_ampere_3xtf32_tensorop_symm**: Example demonstrating symmetric matrix-matrix multiplication (SYMM) with FP32 emulation
- Example demonstrating how to compute 2-D transposed convolution, also known as deconvolution, using CUTLASS conv2d Dgrad kernels
- Example demonstrating GEMM fused with Softmax in mixed precision using Ampere Tensor Cores
- Example demonstrating how to fuse a gather before the GEMM and a scatter after the GEMM into the same kernel
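The gather/scatter fusion above avoids materializing gathered inputs and scattered outputs in separate passes by applying the row indirection inside the GEMM itself. A pure-Python sketch of the access pattern (not CUTLASS code; the function and index names are hypothetical):

```python
# Illustrative sketch (not CUTLASS code) of gather+GEMM+scatter fusion:
# reads of A go through a gather index and writes of D through a scatter
# index, all in one pass over the problem.

def fused_gather_gemm_scatter(A, B, gather_idx, scatter_idx, out_rows):
    k, n = len(B), len(B[0])
    D = [[0] * n for _ in range(out_rows)]
    for out_i, in_i in zip(scatter_idx, gather_idx):
        for j in range(n):
            # Gather on load, scatter on store.
            D[out_i][j] = sum(A[in_i][p] * B[p][j] for p in range(k))
    return D

A = [[1, 1], [2, 2], [3, 3]]
B = [[1, 0], [0, 1]]
# Hypothetical indices: read rows 2 and 0 of A, write them to rows 0 and 1 of D.
D = fused_gather_gemm_scatter(A, B, gather_idx=[2, 0], scatter_idx=[0, 1], out_rows=2)
assert D == [[3, 3], [1, 1]]
```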
- Example demonstrating how to fuse GEMM -> layernorm -> GEMM into one kernel
- Example demonstrating a batch of SYR2K operations with distinct problem sizes
- Example demonstrating batched GEMM operations with output results permuted as reshaped tensors
- Example demonstrating CUTLASS with the Python interface
- Example demonstrating attention with variable sequence-length inputs
- Example demonstrating how to run group convolution kernels using functions and data structures provided by CUTLASS, using Tensor Cores
- Example demonstrating a Block-Ell sparse GEMM
- Example demonstrating fused multi-head attention (fixed and variable sequence length) using shared memory
- Example demonstrating how to fuse two GEMMs sharing the same left input matrix into one kernel
- Example demonstrating depthwise 2-D convolution kernels using functions and data structures provided by CUTLASS, using SIMT instructions
- **47_ampere_gemm_universal_streamk**: Example contrasting the Stream-K parallel decomposition for GEMM threadblocks with the "classic data-parallel" and "Split-K" decompositions
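The contrast between the two decompositions can be sketched in pure Python (not CUTLASS code; the scheduler function names are hypothetical). Data-parallel scheduling assigns whole output tiles to workers, which load-balances poorly when the tile count is not a multiple of the worker count; Stream-K instead divides the total number of mainloop iterations evenly, so a worker may finish one tile's k-range mid-tile and continue into the next (the fix-up reduction for partially covered tiles is omitted here):

```python
# Illustrative sketch (not CUTLASS code) of data-parallel vs Stream-K
# work assignment for a tiled GEMM.

def data_parallel_tiles(num_tiles, num_workers):
    """Whole tiles per worker, round-robin: worker -> list of tile ids."""
    return [list(range(w, num_tiles, num_workers)) for w in range(num_workers)]

def stream_k_ranges(num_tiles, iters_per_tile, num_workers):
    """Evenly split the flat iteration space: worker -> (start, end) iters."""
    total = num_tiles * iters_per_tile
    bounds = [(w * total) // num_workers for w in range(num_workers + 1)]
    return [(bounds[w], bounds[w + 1]) for w in range(num_workers)]

# 5 tiles of 8 k-iterations on 4 workers: data-parallel gives workers
# 2, 1, 1, 1 tiles (imbalanced, ~2x runtime of the ideal); Stream-K gives
# every worker exactly 10 iterations.
assert [len(t) for t in data_parallel_tiles(5, 4)] == [2, 1, 1, 1]
ranges = stream_k_ranges(num_tiles=5, iters_per_tile=8, num_workers=4)
assert ranges == [(0, 10), (10, 20), (20, 30), (30, 40)]
```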
- **48_hopper_warp_specialized_gemm**: Simple Tensor Core GEMM example using CUTLASS 3.0 APIs targeting the NVIDIA Hopper architecture
- **49_hopper_gemm_schedules_with_collective_builder**: Hopper GEMM example leveraging collective operation builders to showcase the builder API and the various kernel schedules supported in CUTLASS 3.0, such as warp-specialized persistent mainloops
- **50_hopper_gemm_with_epilogue_swizzle**: Hopper GEMM example that creates a GEMM kernel with a custom collective mainloop and a custom vectorized epilogue
- Hopper GETT example illustrating the ease with which GETTs can be run due to CUTLASS 3.0's unified micro-kernels and CuTe's hierarchical layouts
- **52_hopper_gather_scatter_fusion**: Hopper example that fuses a gather before the GEMM and a scatter after the GEMM into the same kernel
- Hopper example demonstrating the fusion of tensor permutation operations with a GEMM kernel
- **54_hopper_fp8_warp_specialized_gemm**: Hopper example of instantiating and running an FP8 GEMM kernel
- Hopper GEMM example with different A and B data types using CUTLASS 3.x APIs for DL kernels with fused dequantization
- **56_hopper_ptr_array_batched_gemm**: Hopper Ptr-Array Batched GEMM example using the CUTLASS 3.x API
- Hopper Grouped GEMM using the CUTLASS 3.x API
- Ada GEMM kernel targeting Ada FP8 Tensor Cores via the CUTLASS 2.x API
- CuTe- and CUTLASS 3.x-based Ampere convolution fprop kernel capable of operating on both affine and gather/scatter tensors, showing how kernel authors can reuse CUTLASS 3.x collectives in their custom kernels
- **61_hopper_gemm_with_topk_and_softmax**: Hopper GEMM kernel with Top-K and softmax epilogue fusion
- Simple dense GEMM example targeting the NVIDIA Blackwell SM100 Tensor Core MMA using CUTLASS 3.x APIs
- **71_blackwell_gemm_with_collective_builder**: Blackwell SM100 GEMM example demonstrating compatible mainloop+epilogue builder schedules and epilogue visitor tree (EVT) construction
- **72_blackwell_narrow_precision_gemm**: Block-scaled dense GEMM example targeting the NVIDIA Blackwell SM100 Tensor Core MMA using CUTLASS 3.x APIs
- **73_blackwell_gemm_preferred_cluster**: Blackwell SM100 GEMM kernel with the preferred-cluster feature
- Blackwell SM100 GEMM kernel using the Stream-K scheduler
- Blackwell SM100 Grouped GEMM kernel
- Simple convolution (fprop/dgrad/wgrad) example targeting the NVIDIA Blackwell SM100 Tensor Core MMA using CUTLASS 3.x APIs
- Blackwell SM100 FMHA kernel
- **78_blackwell_emulated_bf16x9_gemm**: Blackwell SM100 FastFP32 (using BF16 to emulate SGEMM) kernel
- Blackwell SM120 MMA kernel targeting GeForce RTX 50 series CUDA Cores
- **80_blackwell_geforce_sparse_gemm**: Blackwell SM120 sparse MMA kernel targeting GeForce RTX 50 series CUDA Cores
- Blackwell SM100 Sparse GEMM kernel
- **84_blackwell_narrow_precision_sparse_gemm**: Blackwell Block-Scaled SM100 Sparse GEMM kernel
Examples that do not rely on CUTLASS and directly showcase the features of CuTe are located in `cutlass/examples/cute`.
Additionally, CuTe's core layout and layout algebra have their own test cases within `cutlass/test/unit/cute/core/` that users might find useful as examples of CuTe.
Examples leveraging CUTLASS's Python interface are located in `cutlass/examples/python`.
Copyright (c) 2017 - 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved. SPDX-License-Identifier: BSD-3-Clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.