# Liquid AI

> Liquid AI builds ultra-efficient multimodal foundation models (LFMs) optimized for on-device deployment across CPUs, GPUs, and NPUs for privacy- and latency-critical applications.

Liquid AI is a foundation model company spun out of MIT that builds high-performance, efficient AI systems purpose-built for real-world environments. Their Liquid Foundation Models (LFMs) are architected from the ground up for on-device intelligence, delivering advanced capabilities directly to smartphones, laptops, vehicles, and embedded systems where compute is limited. Taking a first-principles approach rooted in dynamical systems, signal processing, and numerical linear algebra, Liquid designs models with unmatched structural control and efficiency. The LFM2 family supports text, audio, vision-language, and multimodal data across a range of parameter sizes.

- **Liquid Foundation Models (LFMs)**: *Purpose-built for efficiency and speed, LFMs run on GPUs, CPUs, and NPUs across wearables, robotics, phones, laptops, and cars — download via Hugging Face or LEAP.*
- **LEAP Platform**: *A developer platform for building, customizing, and deploying on-device AI in a single workflow — get started at leap.liquid.ai.*
- **Apollo App**: *A free, private on-device AI chat app available on iOS and Android, letting users vibe-check small language models directly on their phone.*
- **Multimodal Support**: *The LFM2 family covers text, audio (LFM2-Audio), vision-language (LFM2-VL), and retrieval (LFM2-ColBERT) modalities.*
- **Enterprise Solutions**: *Full-scale custom AI solutions tailored to business needs, hardware, and data — including white-glove support for designing and deploying end-to-end intelligence solutions.*
- **Startup Program**: *Selected startups gain access to the full tech stack and direct guidance from Liquid's engineering and product teams.*
- **Open Source Access**: *Core LFMs are free to use self-service via Hugging Face and LEAP, with commercial use available for companies under $10M in annual revenue.*
- **Edge & Hybrid Deployment**: *Models optimized for on-device, cloud, or hybrid deployment — enabling privacy-, latency-, and security-critical applications everywhere.*

## Features

- Ultra-efficient multimodal foundation models (LFMs)
- On-device AI deployment for CPUs, GPUs, and NPUs
- LEAP developer platform for model customization and deployment
- Apollo private on-device AI chat app
- LFM2 model family: text, audio, vision-language, and retrieval
- Open-source model access via Hugging Face
- Enterprise custom AI solutions with white-glove support
- Startup program with direct engineering guidance
- Hybrid cloud and edge deployment
- Mixture-of-Experts architecture (LFM2-8B-A1B)
- Reasoning models (LFM2.5-1.2B-Thinking)
- Playground for interactive model testing

## Integrations

Hugging Face, Amazon Bedrock, AMD Ryzen, Qualcomm, ExecuTorch, Ollama, Shopify, Capgemini, Robotec.ai

## Platforms

WEB, API, ANDROID, IOS, DEVELOPER_SDK

## Pricing

Open Source, Free tier available

## Version

LFM2.5

## Links

- Website: https://www.liquid.ai
- Documentation: https://docs.liquid.ai/
- Repository: https://huggingface.co/LiquidAI
- EveryDev.ai: https://www.everydev.ai/tools/liquid-ai