# Ludwig

> Ludwig is a low-code, declarative deep learning framework for building custom AI models, including LLMs and neural networks, using YAML configuration files.

Ludwig is a **low-code**, declarative deep learning framework built for scale and efficiency. It enables researchers and engineers to train custom AI models, including LLMs and other deep neural networks, using simple YAML configuration files. Ludwig abstracts away machine learning boilerplate while retaining expert-level control over model architecture, training, and deployment. Hosted by the LF AI & Data Foundation, it supports multi-modal and multi-task learning out of the box.

- **Declarative YAML configuration** — *define your entire model, preprocessing, training loop, and hyperparameter search in a single config file without writing boilerplate code.*
- **LLM fine-tuning** — *fine-tune pretrained large language models (e.g., Llama-3.1-8B) with support for 4-bit quantization (QLoRA), LoRA adapters, and instruction tuning.*
- **Distributed training** — *scale from a single GPU to multi-GPU, multi-node clusters using DDP and DeepSpeed, with native Ray and Kubernetes support.*
- **Parameter-efficient fine-tuning (PEFT)** — *reduce compute and memory requirements using adapter-based methods like LoRA.*
- **AutoML** — *automatically train models by providing just a dataset, target column, and time budget via Ludwig AutoML.*
- **Multi-modal, multi-task learning** — *combine tabular data, text, images, and audio in a single model configuration without writing code.*
- **Hyperparameter optimization** — *built-in hyperopt support for automated search over model and training parameters.*
- **Rich integrations** — *track experiments with TensorBoard, Comet ML, Weights & Biases, MLflow, and Aim Stack.*
- **Production-ready export** — *export models to TorchScript and Triton, upload to HuggingFace with one command, and serve via a built-in REST API.*
- **Extensible architecture** — *add custom encoders, decoders, combiners, feature types, metrics, and tokenizers through a modular developer API.*
- **HuggingFace Transformers integration** — *use any pretrained PyTorch model from HuggingFace without writing code.*
- **CLI and Python API** — *interact with Ludwig via the command-line interface or the `LudwigModel` Python API.*

## Features

- Declarative YAML-based model configuration
- LLM fine-tuning with LoRA and QLoRA
- 4-bit quantization support
- Distributed training with DDP and DeepSpeed
- Parameter-efficient fine-tuning (PEFT)
- AutoML with time budget
- Multi-modal learning (text, image, audio, tabular)
- Multi-task learning
- Hyperparameter optimization
- Experiment tracking integrations
- Model export to TorchScript and Triton
- HuggingFace model upload
- Built-in REST API serving
- Ray and Kubernetes support
- Python API and CLI
- Prebuilt Docker containers
- Rich metric visualizations
- Dataset Zoo with built-in datasets

## Integrations

PyTorch, HuggingFace Transformers, Ray, Kubernetes, DeepSpeed, TensorBoard, Weights & Biases, MLflow, Comet ML, Aim Stack, Triton Inference Server, Docker, TorchScript, Pydantic

## Platforms

Linux, API, Developer SDK, CLI

## Pricing

Open Source

## Version

0.11.2

## Links

- Website: https://ludwig.ai
- Documentation: https://ludwig.ai/latest/
- Repository: https://github.com/ludwig-ai/ludwig
- EveryDev.ai: https://www.everydev.ai/tools/ludwig
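To illustrate the declarative approach described above, here is a minimal sketch of a Ludwig config for a text classifier. The feature names (`review`, `sentiment`) and the dataset are hypothetical; Ludwig fills in sensible defaults for anything not specified:

```yaml
# config.yaml — minimal sketch; column names are illustrative
input_features:
  - name: review          # text column in the training dataset (hypothetical)
    type: text
    encoder:
      type: parallel_cnn  # one of Ludwig's built-in text encoders
output_features:
  - name: sentiment       # target column to predict (hypothetical)
    type: category
trainer:
  epochs: 10
  learning_rate: 0.001
```

Training then reduces to a single command such as `ludwig train --config config.yaml --dataset reviews.csv`, or the equivalent `LudwigModel(config).train(dataset=...)` call through the Python API.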
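LLM fine-tuning uses the same declarative style. A sketch combining 4-bit quantization with a LoRA adapter (i.e., QLoRA), assuming the HuggingFace model id `meta-llama/Meta-Llama-3.1-8B` and hypothetical `instruction`/`output` dataset columns:

```yaml
# llm_finetune.yaml — sketch; model id and column names are assumptions
model_type: llm
base_model: meta-llama/Meta-Llama-3.1-8B  # assumed HuggingFace model id
quantization:
  bits: 4              # 4-bit quantization; with the LoRA adapter below, this is QLoRA
adapter:
  type: lora           # parameter-efficient fine-tuning (PEFT)
input_features:
  - name: instruction  # hypothetical dataset column
    type: text
output_features:
  - name: output       # hypothetical dataset column
    type: text
trainer:
  type: finetune
  epochs: 3
  learning_rate: 0.0001
```

Because only the small LoRA adapter weights are trained and the quantized base model stays frozen, configs like this can fine-tune multi-billion-parameter models on a single commodity GPU.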
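Hyperparameter optimization is also expressed declaratively, as a `hyperopt` section added to the same config file. A sketch that randomly searches over the learning rate (the target feature name, bounds, and sample count are illustrative):

```yaml
# hyperopt section sketch — parameter bounds and names are illustrative
hyperopt:
  goal: minimize
  metric: loss
  output_feature: sentiment    # hypothetical target feature
  parameters:
    trainer.learning_rate:
      space: loguniform
      lower: 0.00001
      upper: 0.01
  search_alg:
    type: random
  executor:
    type: ray                  # runs trials in parallel via Ray
    num_samples: 10
```

Running `ludwig hyperopt --config config.yaml --dataset reviews.csv` would then execute the search, with the Ray executor distributing trials across available workers.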