# Tinker

> Tinker is an API for efficient LoRA fine-tuning of large language models—you write simple Python scripts with your data and training logic, and Tinker handles distributed GPU training.

Tinker, from Thinking Machines, is a training API that lets researchers and developers focus on data and algorithms while it handles the complexity of distributed training. You write a simple loop that runs on your local machine—including your data, environment, and loss function—and Tinker runs the computation efficiently across GPU clusters. Switching base models is a single string change in your code.

- **Clean abstraction, full control** — Tinker shields you from distributed training complexity while preserving control over your training loop, loss functions, and algorithmic details. It's not a black box—it's a powerful abstraction.
- **API-driven training primitives** — Use forward_backward(), optim_step(), sample(), and save_state() to control training loops programmatically from simple Python scripts.
- **Large model support** — Fine-tune models from the Llama (1B–70B), Qwen (4B–235B, including MoE), DeepSeek-V3.1, GPT-OSS, and Kimi-K2 series. VLM support for image understanding with Qwen3-VL models.
- **LoRA fine-tuning** — Uses parameter-efficient LoRA adaptation, which matches full fine-tuning performance for many use cases while requiring less compute.
- **Fault-tolerant distributed training** — Hardware failures are handled transparently; training runs reliably on distributed GPU infrastructure.
- **Model export** — Download trained weights to use with your inference provider of choice.

To get started, read the Tinker Cookbook, run the simple Python examples, and adapt the provided recipes for supervised learning or RL workflows to your dataset.
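To make the shape of that training loop concrete, here is a minimal runnable sketch. The `MockTrainingClient` below is a hypothetical local stand-in, not the real Tinker client: it "trains" a single scalar weight so the forward_backward()/optim_step()/save_state() control flow can execute end to end. Only those primitive names come from the description above; everything else (constructor arguments, return values, the batch format) is an assumption for illustration.

```python
class MockTrainingClient:
    """Hypothetical stand-in mimicking the shape of a Tinker-style client.

    With the real API, these calls would be executed remotely on GPU
    clusters; here they fit a 1-D least-squares model locally.
    """

    def __init__(self, lr=0.05):
        self.weight = 0.0  # stand-in for trainable (LoRA adapter) weights
        self.grad = 0.0    # gradient accumulated across forward_backward calls
        self.lr = lr

    def forward_backward(self, batch):
        """Compute the loss for one batch and accumulate its gradient."""
        x, y = batch
        pred = self.weight * x
        loss = (pred - y) ** 2
        self.grad += 2 * (pred - y) * x  # d(loss)/d(weight)
        return loss

    def optim_step(self):
        """Apply the accumulated gradient, then reset it."""
        self.weight -= self.lr * self.grad
        self.grad = 0.0

    def save_state(self):
        """Return a checkpoint of the current training state."""
        return {"weight": self.weight}


# The loop you write and run on your local machine.
client = MockTrainingClient(lr=0.05)
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2x, so weight should reach 2.0

for epoch in range(200):
    for batch in data:
        client.forward_backward(batch)
    client.optim_step()

checkpoint = client.save_state()  # checkpoint["weight"] converges to ~2.0
```

The point of the sketch is the division of labor: your script owns the loop, the data, and (via forward_backward) the loss, while the client owns execution — which is what lets the real service move that execution onto distributed hardware without changing your code.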
## Features

- LoRA fine-tuning (parameter-efficient; matches full fine-tuning performance for many use cases)
- Distributed, fault-tolerant training for large models (Llama 70B, Qwen 235B)
- Vision-language model (VLM) support for image understanding tasks
- API primitives: forward_backward(), optim_step(), sample(), save_state()
- Download trained model weights for external inference
- Supports supervised learning and RL workflows (RLHF, DPO)
- Usage-based pricing starting at $0.09 per million tokens

## Integrations

Python, external inference providers, custom RL environments, vision/image inputs (VLMs)

## Platforms

Linux, Web, API, Developer SDK

## Pricing

Freemium — free tier available with paid upgrades

## Links

- Website: https://thinkingmachines.ai/tinker/
- Documentation: https://tinker-docs.thinkingmachines.ai/
- EveryDev.ai: https://www.everydev.ai/tools/tinker