tinygrad: For something between PyTorch and karpathy/micrograd. Maintained by tiny corp.
Homepage | Documentation | Discord

tinygrad is an end-to-end deep learning stack.
It’s inspired by PyTorch (ergonomics), JAX (functional transforms and IR-based AD), and TVM (scheduling and codegen), but stays intentionally tiny and hackable.
What it takes from each:

- **PyTorch**: the Tensor API, autograd, optim, basic datasets and layers.
- **JAX**: a JIT (`TinyJit`) that captures and replays kernels; see the sketch below. No `vmap`/`pmap` yet, but far easier to read.
- **TVM**: scheduling and codegen across its backends.
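Since `TinyJit` is the closest thing to a functional transform today, here is a minimal sketch of how it is typically used (the shape and the doubling op are just illustrative):

```python
from tinygrad import Tensor, TinyJit

@TinyJit
def step(x: Tensor) -> Tensor:
  # early calls trace and compile the kernels; later calls replay them directly
  return (x * 2).sum().realize()

for _ in range(4):
  print(step(Tensor.rand(8, 8)).item())
```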
Try a matmul. See how, despite the style, it is fused into one kernel with the power of laziness.
```sh
DEBUG=3 python3 -c "from tinygrad import Tensor;
N = 1024; a, b = Tensor.empty(N, N), Tensor.empty(N, N);
(a.reshape(N, 1, N) * b.T.reshape(1, N, N)).sum(axis=2).realize()"
```
And we can change `DEBUG` to `4` to see the generated code.
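The same laziness is visible from plain Python: ops only build a graph, and nothing executes until you ask for a result. A minimal sketch:

```python
from tinygrad import Tensor

a, b = Tensor.rand(4, 4), Tensor.rand(4, 4)
c = a @ b         # nothing has executed yet; c is just a lazy expression graph
print(c.numpy())  # realization happens here: the kernel is compiled and run
```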
As it turns out, 90% of what you need for neural networks is a decent autograd/tensor library. Throw in an optimizer, a data loader, and some compute, and you have all you need.
```python
from tinygrad import Tensor, nn

class LinearNet:
  def __init__(self):
    self.l1 = Tensor.kaiming_uniform(784, 128)
    self.l2 = Tensor.kaiming_uniform(128, 10)
  def __call__(self, x:Tensor) -> Tensor:
    return x.flatten(1).dot(self.l1).relu().dot(self.l2)

model = LinearNet()
optim = nn.optim.Adam([model.l1, model.l2], lr=0.001)

x, y = Tensor.rand(4, 1, 28, 28), Tensor([2,4,3,7])  # replace with real mnist dataloader

with Tensor.train():
  for i in range(10):
    optim.zero_grad()
    loss = model(x).sparse_categorical_crossentropy(y).backward()
    optim.step()
    print(i, loss.item())
```
See `examples/beautiful_mnist.py` for the full version, which gets 98% accuracy in ~5 seconds.
tinygrad already supports numerous accelerators, including CPU (Clang/LLVM), GPU (OpenCL), METAL, CUDA, AMD, NV, and WEBGPU.

And it is easy to add more! Your accelerator of choice only needs to support a total of ~25 low-level ops.
To check the default accelerator, run:

```sh
python3 -c "from tinygrad import Device; print(Device.DEFAULT)"
```
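You can also place a tensor on an explicit device; a small sketch (the printed device name depends on your machine):

```python
from tinygrad import Tensor, Device

print(Device.DEFAULT)  # e.g. METAL, CUDA, or GPU, depending on your machine
t = Tensor([1.0, 2.0, 3.0], device=Device.DEFAULT)  # explicit placement (the default here)
print(t.device)
```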
The current recommended way to install tinygrad is from source:

```sh
git clone https://github.com/tinygrad/tinygrad.git
cd tinygrad
python3 -m pip install -e .
```

Alternatively, you can install the latest master directly with pip:

```sh
python3 -m pip install git+https://github.com/tinygrad/tinygrad.git
```
Documentation, along with a quick start guide, can be found on the docs website, which is built from the `docs/` directory.
A quick example comparing autograd in tinygrad and PyTorch:

```python
from tinygrad import Tensor

x = Tensor.eye(3, requires_grad=True)
y = Tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.tolist())  # dz/dx
print(y.grad.tolist())  # dz/dy
```
The same thing but in PyTorch:
```python
import torch

x = torch.eye(3, requires_grad=True)
y = torch.tensor([[2.0,0,-2.0]], requires_grad=True)
z = y.matmul(x).sum()
z.backward()

print(x.grad.tolist())  # dz/dx
print(y.grad.tolist())  # dz/dy
```
There has been a lot of interest in tinygrad lately. Following these guidelines will help your PR get accepted.
We'll start with what will get your PR closed with a pointer to this section:
- Code golf. While a low line count is a guiding light of this project, the true goal is reducing complexity and increasing readability, and deleting `\n`s does nothing to help with that.
- Changes to code outside the core `tinygrad/` folder. That code is not well tested, so unless the current code there is broken, you shouldn't be changing it.

Now, what we want:

- Tests that demonstrate bugs. Adding a failing test marked `@unittest.expectedFailure` is great (see the sketch below). This is how we make progress.
- Dead code removal from the core `tinygrad/` folder. We don't care about the code in `extra`, but removing dead code from the core library is great. Less for new people to read and be confused by.

You should install the pre-commit hooks with `pre-commit install`. This will run the linter, mypy, and a subset of the tests on every commit.
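As a sketch of that expected-failure pattern (the class name and the op here are hypothetical; in a real PR the assertion states the correct result that the bug currently breaks):

```python
import unittest
from tinygrad import Tensor

class TestHypotheticalBug(unittest.TestCase):
  @unittest.expectedFailure  # remove once the (hypothetical) bug is fixed
  def test_sum_is_correct(self):
    # in a real PR this assertion would fail because of the reported bug;
    # here sum() is fine, so this only illustrates the shape of such a test
    self.assertAlmostEqual(Tensor([1.0, 2.0, 3.0]).sum().item(), 6.0)

if __name__ == "__main__":
  unittest.main()
```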
For more examples of how to run the full test suite, please refer to the CI workflow.
Some examples of running tests locally:
```sh
python3 -m pip install -e '.[testing]'  # install extra deps for testing
python3 test/backend/test_ops.py        # just the ops tests
python3 -m pytest test/                 # whole test suite
```
Process replay compares your PR's generated kernels against master. If your PR is a refactor or speedup without any expected behavior change, it should include `[pr]` in the pull request title.