Axolotl
Open-source tool for fine-tuning LLMs faster and at scale, supporting multi-GPU training, LoRA, FSDP, and a wide range of model architectures.
Listed Mar 2026
About Axolotl
Axolotl is a free, open-source fine-tuning framework that makes customizing large language models faster, more accessible, and scalable. It supports a wide range of model architectures via Hugging Face Transformers and integrates cutting-edge techniques like LoRA, FSDP+qLoRA, and Multipack to deliver training speeds 3–5x faster than alternatives. Axolotl is used by researchers, model builders, and enterprise AI platforms who need full control over their data and infrastructure.
- Broad LLM Support — Fine-tune popular models including LLaMA, Mistral, Falcon, Gemma, Qwen, Cerebras, Microsoft Phi, RWKV, and more out of the box.
- Multi-GPU Training — Integrates with xformers and DeepSpeed for efficient distributed training across multiple GPUs.
- YAML-Based Configuration — Get started quickly using pre-built YAML configs in the OSS repo without needing to write custom training code.
- Advanced Fine-Tuning Techniques — Supports FSDP+qLoRA, LoRA+, Multipack, PEFT/LoRA, GRPO, and Process Reward Models for state-of-the-art results.
- Bring Your Own Data (BYOD) — Keep your data on-premises or in your own cloud — no need to upload to third-party services, ensuring compliance and data governance.
- Flexible Deployment — Run anywhere: your own cloud, Docker, Kubernetes, or partner platforms like Runpod, Modal, Lambda Labs, and Jarvislabs.
- Active Open-Source Community — Over 170 contributors and 500+ active Discord members provide support, recipes, and cookbooks to help you get started fast.
- Axolotl Cookbooks — Pre-made recipes and notebooks (e.g., fine-tuning GPT-OSS 120B, talk-like-a-pirate examples) make experimentation easy.
- Enterprise-Ready — Supports custom Docker and k8s setups, full data control, and is trusted by research companies and Gen AI platforms at scale.
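The YAML-driven workflow described above can be sketched with a minimal LoRA config. The keys below (base_model, adapter, datasets, lora_r, and so on) follow the general shape of Axolotl configs, but the model ID and dataset path are placeholders, and exact key names and defaults should be checked against the example configs shipped in the OSS repo, since they change between releases.

```yaml
# Minimal LoRA fine-tuning sketch (not an official recipe; verify keys
# against the example YAMLs in the Axolotl repository).
base_model: NousResearch/Meta-Llama-3-8B   # any Hugging Face Transformers model ID
load_in_8bit: true
adapter: lora                              # PEFT/LoRA instead of a full fine-tune

datasets:
  - path: ./data/my_dataset.jsonl          # BYOD: a local path or an HF dataset name
    type: alpaca
output_dir: ./outputs/lora-llama3

sequence_len: 2048
sample_packing: true                       # Multipack-style packing of short samples

lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true

micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 2e-4
```

A run like this would typically be launched with the project's CLI (e.g. `axolotl train config.yml`); consult the README for the current entry point and for multi-GPU launch options via DeepSpeed or FSDP.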
Pricing
Open Source
Fully free and open-source fine-tuning framework available on GitHub.
- Fine-tune LLMs including LLaMA, Mistral, Falcon, Gemma, and more
- Multi-GPU training with DeepSpeed and xformers
- LoRA, PEFT, FSDP+qLoRA, Multipack support
- YAML-based configuration
- Bring Your Own Data (BYOD)
Capabilities
Key Features
- LLM fine-tuning
- Multi-GPU training
- LoRA and PEFT support
- FSDP+qLoRA
- Multipack training
- GRPO support
- YAML-based configuration
- Bring Your Own Data (BYOD)
- Docker and Kubernetes deployment
- Hugging Face Transformers integration
- Process Reward Models
- Pre-built cookbooks and recipes
