CUTLASS: CUDA Templates for Linear Algebra Subroutines and Solvers
shared_load_iterator.h File Reference
Epilogue for threadblock-scoped GEMMs using Tensor Ops.
#include "cutlass/cutlass.h"
#include "cutlass/numeric_types.h"
#include "cutlass/array.h"
#include "cutlass/layout/matrix.h"
#include "cutlass/matrix_shape.h"
#include "cutlass/tensor_ref.h"
#include "cutlass/epilogue/threadblock/output_tile_thread_map.h"
Include dependency graph for shared_load_iterator.h: this graph shows which files directly or indirectly include this file. (Graph image not included.)
[Go to the source code of this file.](shared_load_iterator_8h_source.html)
Classes

- class `cutlass::epilogue::threadblock::SharedLoadIterator< ThreadMap_, Element_, MaxAlignment >`

Namespaces

- `cutlass`
- `cutlass::epilogue`
- `cutlass::epilogue::threadblock`
The epilogue rearranges the result of a matrix product through shared memory to match canonical tensor layouts in global memory. Epilogues support conversion and reduction operations.