# tinygrad

> tinygrad is an open-source deep learning framework written in Python that focuses on simplicity and hackability, supporting a wide range of hardware accelerators.

tinygrad is a minimalist, open-source deep learning framework written in Python, designed to be simple enough to understand in its entirety while still being powerful enough to train and run modern neural networks. It supports a wide variety of hardware backends, including NVIDIA, AMD, and Apple Metal, making it highly portable. The codebase is intentionally kept small and readable, making it an excellent tool for researchers, students, and engineers who want to understand how deep learning frameworks work under the hood.

- **Minimalist design**: tinygrad keeps the core codebase extremely small, making it easy to read, understand, and modify the entire framework.
- **Multi-backend support**: Runs on NVIDIA (CUDA), AMD (ROCm), Apple Metal, CPU, and other accelerators via a unified lazy evaluation engine.
- **Lazy evaluation**: Operations are lazily evaluated and fused, enabling efficient kernel generation and execution across backends.
- **Neural network training**: Supports forward and backward passes, automatic differentiation, and common optimizers for training models from scratch.
- **MNIST and beyond**: Get started quickly with example scripts such as MNIST digit classification; simply clone the repo and run them.
- **JIT compilation**: Includes a JIT compiler that caches and replays GPU kernels for fast repeated execution.
- **Tensor operations**: Provides a NumPy-like tensor API covering arithmetic, reductions, reshaping, and more.
- **Open source**: Licensed under MIT; the full source is available on GitHub and contributions are welcome.
- **Hardware support**: Targets consumer and datacenter GPUs, enabling use cases from research prototyping to running LLMs locally.
- **Python-first**: Pure Python implementation with optional C/C++ extensions for performance-critical paths.

## Features

- Minimalist codebase
- Multi-backend hardware support (NVIDIA, AMD, Apple Metal, CPU)
- Lazy tensor evaluation and kernel fusion
- Automatic differentiation
- JIT compilation
- NumPy-like tensor API
- Neural network training and inference
- MIT open-source license
- Example scripts (MNIST, LLMs)
- Python-first implementation

## Integrations

CUDA (NVIDIA), ROCm (AMD), Apple Metal, OpenCL, CPU, NumPy

## Platforms

WEB, API, DEVELOPER_SDK, CLI

## Pricing

Open Source

## Links

- Website: https://tinygrad.org/
- Documentation: https://docs.tinygrad.org/
- Repository: https://github.com/tinygrad/tinygrad
- EveryDev.ai: https://www.everydev.ai/tools/tinygrad