Most ML courses teach you to use frameworks. TinyTorch teaches you to build them.
The Vision · 20 Modules · Share Feedback
🚧 Preview Release: TinyTorch is functional but evolving. We're sharing early to shape the direction with community input rather than building in isolation.
Classroom Ready: Summer/Fall 2026 · Right Now: We want your feedback
Everyone wants to be an astronaut 🧑‍🚀. Very few want to be the rocket scientist 🚀.
In machine learning, we see the same pattern. Everyone wants to train models, run inference, deploy AI. Very few want to understand how the frameworks actually work. Even fewer want to build one.
The world is full of users. We do not have enough builders.
TinyTorch teaches you the AI bricks: the stable engineering foundations you can use to build any AI system.
A Harvard University course that transforms you from framework user to systems engineer, giving you the deep understanding needed to optimize, debug, and innovate at the foundation of AI.
A complete ML framework capable of:
North Star Achievement: Train CNNs for image classification
Additional Capabilities:
No dependencies on PyTorch or TensorFlow - everything is YOUR code!
We're sharing TinyTorch early because we'd rather shape the direction with community input than build in isolation. Before diving into code, we want to hear from you:
If you're a student: What hands-on labs or projects would help you learn ML systems?
If you teach: What would make TinyTorch easy to bring into a course?
If you're a practitioner: What real-world systems tasks should we simulate?
For everyone: What natural extensions belong in this "AI bricks" model?
Share your thoughts in the discussion →
Want to explore the code? Browse the repository structure to see how modules are organized.
Adventurous early adopter? Local installation works, but expect rough edges. See the setup guide.
Build your framework through four progressive parts:
<table> <thead> <tr> <th width="20%">Part</th> <th width="15%">Modules</th> <th width="65%">What You Build</th> </tr> </thead> <tbody> <tr> <td align="center"><b>I. Foundations</b></td> <td align="center">01-08</td> <td>Tensors, activations, layers, losses, dataloader, autograd, optimizers, training</td> </tr> <tr> <td align="center"><b>II. Vision</b></td> <td align="center">09</td> <td>Conv2d, CNNs for image classification</td> </tr> <tr> <td align="center"><b>III. Language</b></td> <td align="center">10-13</td> <td>Tokenization, embeddings, attention, transformers</td> </tr> <tr> <td align="center"><b>IV. Optimization</b></td> <td align="center">14-20</td> <td>Profiling, quantization, compression, acceleration, benchmarking, capstone</td> </tr> </tbody> </table>Each module asks: "Can I build this capability from scratch?"
Full curriculum and module details →
As you progress, unlock recreations of landmark ML achievements:
<table> <thead> <tr> <th width="15%">Year</th> <th width="35%">Milestone</th> <th width="50%">Your Achievement</th> </tr> </thead> <tbody> <tr> <td align="center"><b>1958</b></td> <td>Perceptron</td> <td>Binary classification with gradient descent</td> </tr> <tr> <td align="center"><b>1969</b></td> <td>XOR Crisis</td> <td>Multi-layer networks solve non-linear problems</td> </tr> <tr> <td align="center"><b>1986</b></td> <td>Backpropagation</td> <td>Multi-layer network training</td> </tr> <tr> <td align="center"><b>1998</b></td> <td>CNN Revolution</td> <td><b>Image classification with convolutions</b></td> </tr> <tr> <td align="center"><b>2017</b></td> <td>Transformer Era</td> <td>Language generation with self-attention</td> </tr> <tr> <td align="center"><b>2018+</b></td> <td>MLPerf</td> <td>Production-ready optimization</td> </tr> </tbody> </table>These aren't toy demos - they're historically significant ML achievements rebuilt with YOUR framework!
```python
# Traditional course:
import torch
model.fit(X, y)  # Magic happens

# TinyTorch:
# You implement every component
# You measure memory usage
# You optimize performance
# You understand the systems
```
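As a sketch of what "implementing every component" means in practice, here is gradient descent for linear regression written out by hand in NumPy, with the gradient derived on paper instead of conjured by a framework. All names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regression problem with a known answer
X = rng.normal(size=(64, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

w = np.zeros(3)
lr = 0.1
for _ in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(X)  # d(MSE)/dw, derived by hand
    w -= lr * grad                        # the SGD step you implemented

# w converges toward true_w
```

Nothing is hidden: you can inspect the gradient, count the FLOPs, and reason about memory, which is exactly the habit the course tries to build.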
Why Build Your Own Framework?
When you build the framework yourself, you know exactly what loss.backward() does.

```
TinyTorch/
├── src/                      # Python source files (developers/contributors edit here)
│   ├── 01_tensor/            # Module 01: Tensor operations from scratch
│   │   ├── 01_tensor.py      # Python source (version controlled)
│   │   └── ABOUT.md          # Conceptual overview & learning objectives
│   ├── 02_activations/       # Module 02: ReLU, Softmax activations
│   ├── 03_layers/            # Module 03: Linear layers, Module system
│   ├── 04_losses/            # Module 04: MSE, CrossEntropy losses
│   ├── 05_dataloader/        # Module 05: Efficient data pipelines
│   ├── 06_autograd/          # Module 06: Automatic differentiation
│   ├── 07_optimizers/        # Module 07: SGD, Adam optimizers
│   ├── 08_training/          # Module 08: Complete training loops
│   ├── 09_convolutions/      # Module 09: Conv2d, MaxPool2d, CNNs
│   ├── 10_tokenization/      # Module 10: Text processing
│   ├── 11_embeddings/        # Module 11: Token & positional embeddings
│   ├── 12_attention/         # Module 12: Multi-head attention
│   ├── 13_transformers/      # Module 13: Complete transformer blocks
│   ├── 14_profiling/         # Module 14: Performance analysis
│   ├── 15_quantization/      # Module 15: Model compression (precision reduction)
│   ├── 16_compression/       # Module 16: Pruning & distillation
│   ├── 17_acceleration/      # Module 17: Hardware optimization
│   ├── 18_memoization/       # Module 18: KV-cache/memoization
│   ├── 19_benchmarking/      # Module 19: Performance measurement
│   └── 20_capstone/          # Module 20: Complete ML systems
│
├── modules/                  # Generated notebooks (learners work here)
│   ├── 01_tensor/            # Auto-generated from src/
│   │   ├── tensor.ipynb      # Jupyter notebook for learning
│   │   ├── README.md         # Practical implementation guide
│   │   └── tensor.py         # Your implementation
│   └── ...                   # (20 module directories)
│
├── site/                     # Course website & documentation (Jupyter Book)
│   ├── intro.md              # Landing page
│   ├── _toc.yml              # Site navigation (links to modules)
│   ├── _config.yml           # HTML website configuration
│   ├── chapters/             # Course content chapters
│   └── modules/              # Module documentation
│
├── milestones/               # Historical ML evolution - prove what you built!
│   ├── 01_1958_perceptron/   # Rosenblatt's first trainable network
│   ├── 02_1969_xor/          # Minsky's challenge & multi-layer solution
│   ├── 03_1986_mlp/          # Backpropagation & MNIST digits
│   ├── 04_1998_cnn/          # LeCun's CNNs & CIFAR-10
│   ├── 05_2017_transformer/  # Attention mechanisms & language
│   └── 06_2018_mlperf/       # Modern optimization & profiling
│
├── tito/                     # CLI tool for streamlined workflows
│   ├── main.py               # Entry point
│   ├── commands/             # 23 command modules
│   └── core/                 # Core utilities
│
├── tinytorch/                # Generated package (import from here)
│   ├── core/                 # Core ML components
│   └── ...                   # Your built framework!
│
└── tests/                    # Comprehensive test suite (600+ tests)
```
Key workflow: src/*.py → modules/*.ipynb → tinytorch/*.py
TinyTorch is part of the ML Systems Book ecosystem. We're building an open community of learners and educators passionate about ML systems.
Ways to get involved:
See CONTRIBUTING.md for guidelines.
"TinyTorch" is a popular name for educational ML frameworks. We acknowledge excellent projects with similar names:
Our TinyTorch distinguishes itself through its 20-module curriculum, NBGrader integration, ML systems focus, and connection to the ML Systems Book ecosystem.
Thanks to these wonderful people who helped improve TinyTorch!
Legend: 🪲 Bug Hunter · ⚡ Code Warrior · 📖 Documentation Hero · 🎨 Design Artist · 🧠 Idea Generator · 👀 Code Reviewer · 🧪 Test Engineer · 🛠️ Tool Builder
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section --> <!-- prettier-ignore-start --> <!-- markdownlint-disable --> <table> <tbody> <tr> <td align="center" valign="top" width="14.28%"><a href="https://github.com/profvjreddi"> <sub><b>Vijay Janapa Reddi</b></sub></a> πͺ² π§βπ» π¨ βοΈ π§ π π§ͺ π οΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/kai4avaya"> <sub><b>kai</b></sub></a> πͺ² π§βπ» π¨ βοΈ π§ͺ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/minhdang26403"> <sub><b>Dang Truong</b></sub></a> πͺ² π§βπ» βοΈ π§ͺ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/didier-durand"> <sub><b>Didier Durand</b></sub></a> πͺ² π§βπ» βοΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/Pratham-ja"> <sub><b>Pratham Chaudhary</b></sub></a> πͺ² π§βπ» βοΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/karthikdani"> <sub><b>Karthik Dani</b></sub></a> πͺ² π§βπ»</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/avikde"> <sub><b>Avik De</b></sub></a> πͺ² π§ͺ</td> </tr> <tr> <td align="center" valign="top" width="14.28%"><a href="https://github.com/Takosaga"> <sub><b>Takosaga</b></sub></a> πͺ² βοΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/rnjema"> <sub><b>rnjema</b></sub></a> π§βπ» π οΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/joeswagson"> <sub><b>joeswagson</b></sub></a> π§βπ» π οΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/AndreaMattiaGaravagno"> <sub><b>AndreaMattiaGaravagno</b></sub></a> π§βπ» βοΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/Roldao-Neto"> <sub><b>Rolds</b></sub></a> πͺ² π§βπ»</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/AmirAlasady"> <sub><b>Amir Alasady</b></sub></a> πͺ²</td> 
<td align="center" valign="top" width="14.28%"><a href="https://github.com/jettythek"> <sub><b>jettythek</b></sub></a> π§βπ»</td> </tr> <tr> <td align="center" valign="top" width="14.28%"><a href="https://github.com/wz1114841863"> <sub><b>wzz</b></sub></a> πͺ²</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/ngbolin"> <sub><b>Ng Bo Lin</b></sub></a> βοΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/keo-dara"> <sub><b>keo-dara</b></sub></a> πͺ²</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/Kobra299"> <sub><b>Wayne Norman</b></sub></a> πͺ²</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/lalalostcode"> <sub><b>Ilham Rafiqin</b></sub></a> πͺ²</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/oscarf189"> <sub><b>Oscar Flores</b></sub></a> βοΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/harishb00a"> <sub><b>harishb00a</b></sub></a> βοΈ</td> </tr> <tr> <td align="center" valign="top" width="14.28%"><a href="https://github.com/sotoblanco"> <sub><b>Pastor Soto</b></sub></a> βοΈ</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/salmanmkc"> <sub><b>Salman Chishti</b></sub></a> π§βπ»</td> <td align="center" valign="top" width="14.28%"><a href="https://github.com/adityamulik"> <sub><b>Aditya Mulik</b></sub></a> βοΈ</td> </tr> </tbody> </table> <!-- markdownlint-restore --> <!-- prettier-ignore-end --> <!-- ALL-CONTRIBUTORS-LIST:END -->Recognize a contributor: Comment on any issue or PR:
@all-contributors please add @username for bug, code, doc, or ideas
Created by Prof. Vijay Janapa Reddi at Harvard University.
MIT License - see LICENSE for details.
<b><a href="https://mlsysbook.ai/tinytorch">Full Documentation</a></b> γ» <b><a href="https://github.com/harvard-edge/cs249r_book/discussions">Discussions</a></b> γ» <b><a href="https://mlsysbook.ai">ML Systems Book</a></b>
<b>Start Small. Go Deep. Build ML Systems.</b>