Axolotl AI
Make fine-tuning large language models (LLMs) accessible, scalable, and efficient through open-source tools and managed infrastructure.
At a Glance
- AI Researchers
- Enterprise ML Teams
- Open Source Developers
- Startups
AI Tools by Axolotl AI
Axolotl
Open Source LLM Fine-Tuning
Latest News
Reducing inference costs for fine-tuned models with Red Hat AI
Enabling Long Context Training with Sequence Parallelism in Axolotl
Stretching Phi-3 context windows to 128K in Axolotl
Validation and support for H100 GPUs and bitsandbytes optimization
Products & Services
- A powerful open-source tool for fine-tuning large language models (LLMs), supporting multiple architectures including Llama, Mistral, and Phi.
- A collection of pre-made recipes and notebooks for common fine-tuning tasks and specific model architectures.
- A managed platform (currently signup-based) for scaling LLM training and fine-tuning in the cloud.
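The open-source framework is driven by a declarative YAML config passed to a CLI entry point. A hedged sketch of what a minimal LoRA fine-tuning config might look like (key names follow common Axolotl examples but may differ between versions; the model and dataset paths are placeholders, not recommendations):

```yaml
# Illustrative Axolotl-style config sketch; exact keys vary by version
base_model: meta-llama/Meta-Llama-3-8B
adapter: lora
lora_r: 16
lora_alpha: 32
datasets:
  - path: tatsu-lab/alpaca
    type: alpaca
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
```

The same config file is reused across single-GPU and multi-node runs, which is what makes the framework portable across hardware setups.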
Market Position
Leading open-source alternative to proprietary fine-tuning platforms, valued for its flexibility and performance across hardware vendors.
Leadership
Founders
Wing Lian
Founder of Axolotl AI. Previously Platform Lead at SoundCloud, Principal Enterprise Architect at UnitedMasters, Director of Server Engineering at Dwell Media, Chief Architect at Babyscripts, and CTO at ConsumerBell. Extensive background in platform architecture and server engineering.
Executive Team
Wing Lian
Founder
Expert in platform architecture and AI infrastructure. Former lead roles at SoundCloud and UnitedMasters.
Founding Story
Axolotl started as an open-source project in 2023, created by Wing Lian to simplify the complex process of fine-tuning LLMs; it evolved into a company in 2024 to support the growing ecosystem and enterprise needs.
Business Model
Revenue Model
Open Core / Managed Cloud SaaS / Sponsorships
Pricing Tiers
- Free to use, community-supported framework.
- Sponsorship opportunities for the OSS project (e.g., Modal).
- Managed platform for enterprise-scale training.
Target Markets
- AI Researchers
- Enterprise ML Teams
- Open Source Developers
- Startups
Use Cases
- Domain-specific model fine-tuning
- LLM post-training (alignment, RLHF)
- Scaling training on multi-node clusters
- Efficient low-rank adaptation for edge devices
Notable Users
- Nous Research
- Replicate
- OpenPipe
- Teknium
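The low-rank adaptation use case above can be sketched numerically: instead of updating a full d×d weight matrix, LoRA freezes the pretrained weight and learns two small factors B (d×r) and A (r×d), shrinking the trainable parameter count by orders of magnitude. A minimal NumPy illustration (shapes and names here are illustrative, not Axolotl's API):

```python
import numpy as np

d, r = 4096, 16          # hidden size and LoRA rank (illustrative values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # zero-initialized so W' == W at start

W_adapted = W + B @ A                    # effective weight after adaptation

full_params = d * d                      # parameters updated by full fine-tuning
lora_params = d * r + r * d              # parameters updated by LoRA
print(f"trainable params: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.2f}% of full fine-tuning)")
```

Because only B and A are trained, the adapter can be stored and shipped separately from the base model, which is why this approach suits edge and multi-tenant deployments.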