# Unsloth Studio

> Unsloth Studio is a local, no-code UI for fine-tuning and running LLMs with up to 2x faster training and up to 60% less VRAM usage.

Unsloth Studio is a free, locally run graphical interface built on top of the open-source Unsloth library, enabling users to fine-tune and run large language models without writing code. It delivers up to 2x faster training and up to 60% less VRAM usage compared to standard approaches, making LLM fine-tuning practical on consumer hardware. Unsloth Studio is part of the broader Unsloth ecosystem, which also offers Pro and Enterprise tiers for teams needing greater performance and multi-GPU support.

- **Local execution** — *runs entirely on your own machine, keeping data private and eliminating cloud costs*
- **No-code fine-tuning UI** — *provides a visual interface to configure, launch, and monitor LLM training jobs without writing Python*
- **2x faster training** — *uses custom Triton kernels and optimized backpropagation to dramatically cut training time*
- **60% VRAM reduction** — *intelligent memory management allows fine-tuning large models on GPUs with limited VRAM*
- **Wide model support** — *compatible with Mistral, Gemma, LLaMA 1/2/3, and other popular open-weight models*
- **4-bit and 16-bit LoRA** — *supports both quantization levels for flexible trade-offs between speed, memory, and accuracy*
- **Open-source foundation** — *built on the unslothai/unsloth GitHub repository with 62,000+ stars, ensuring community-driven development*
- **Docker support** — *can be deployed via Docker for reproducible environments and easier setup*
- **Hugging Face integration** — *works seamlessly with Hugging Face model hubs for downloading and uploading models*
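The VRAM-reduction and quantization bullets above can be made concrete with a rough back-of-the-envelope estimate. The sketch below counts model weights only and ignores activations, gradients, optimizer state, and LoRA adapters, so real VRAM usage is higher; the figures are illustrative arithmetic, not benchmarks from the Unsloth docs:

```python
# Rough estimate of weight memory for a 7B-parameter model at
# different precisions. Weights only -- activations, gradients,
# optimizer state, and LoRA adapters all add to real VRAM usage.
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Memory for the weights alone, converted to gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

params_7b = 7e9
print(weight_memory_gb(params_7b, 16))  # 16-bit: 14.0 GB
print(weight_memory_gb(params_7b, 4))   #  4-bit:  3.5 GB
```

This is why 4-bit LoRA fits a 7B model comfortably on a consumer GPU, while 16-bit LoRA on the same model already approaches the limit of a 16 GB card.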

To get started, install Unsloth via the GitHub repository or Docker, then launch Unsloth Studio to access the visual fine-tuning interface. Select a base model, configure your LoRA parameters, load your dataset, and start training — all from within the UI.
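As a sketch of what the UI's LoRA settings correspond to, the fields map onto hyperparameters used by the underlying unsloth/PEFT libraries. The parameter names below follow PEFT's `LoraConfig` conventions; the specific values are illustrative defaults for this example, not recommendations from the Unsloth documentation:

```python
# Illustrative LoRA settings mirroring the kind of fields exposed in a
# fine-tuning UI. Names follow PEFT LoraConfig conventions; values here
# are example choices, not official Unsloth defaults.
lora_settings = {
    "r": 16,               # LoRA rank: higher = more trainable params
    "lora_alpha": 16,      # scaling factor applied to the LoRA update
    "lora_dropout": 0.0,   # dropout on the LoRA layers
    "target_modules": [    # attention projections to adapt
        "q_proj", "k_proj", "v_proj", "o_proj",
    ],
    "load_in_4bit": True,  # 4-bit quantized base model (QLoRA-style)
}

# Trainable-parameter count scales linearly with rank r, so the rank
# slider is the main speed/memory vs. capacity trade-off in the UI.
print(lora_settings["r"])
```

Lowering `r` or enabling `load_in_4bit` reduces memory at some cost in adapter capacity or accuracy, which is the trade-off the 4-bit/16-bit option exposes.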

## Features
- Local LLM fine-tuning UI
- No-code interface
- 2x faster training
- 60% VRAM reduction
- 4-bit and 16-bit LoRA support
- Supports Mistral, Gemma, LLaMA 1/2/3
- Docker support
- Hugging Face model integration
- Open-source

## Integrations
Hugging Face, Docker, PyTorch

## Platforms
Windows, macOS, Linux

## Pricing
Open source; free tier available

## Links
- Website: https://unsloth.ai/docs/new/studio
- Documentation: https://unsloth.ai/docs
- Repository: https://github.com/unslothai/unsloth
- EveryDev.ai: https://www.everydev.ai/tools/unsloth-studio
