# LM Studio

> Run local LLMs, chat with documents, and power apps using a local AI server.

LM Studio is a cross-platform toolkit that lets you download and run open-source LLMs (such as Llama, Qwen, DeepSeek, and Gemma) directly on your laptop or desktop. You can use it for chatting, retrieving answers from local documents (RAG), or hosting a local LLM server to integrate into your own applications. It includes a discovery UI for finding Hugging Face models in GGUF or MLX format, a chat interface, a document RAG system, and a local server mode for API access. Developers can use the LM Studio SDK (via npm) to build local AI apps without managing model dependencies. LM Studio is free for both personal and commercial use; paid plans are only needed for enterprise features such as private sharing, SSO, and model access control.

## Features

- Run open-source LLMs locally on Mac, Windows, or Linux
- Chat interface with support for document RAG
- Host a local AI server with a REST API
- Discover and download GGUF and MLX models
- Cross-platform support for llama.cpp and MLX formats
- Free SDK for building local AI apps
- Team and enterprise collaboration via LM Hub (upcoming)

## Integrations

Hugging Face, llama.cpp, MLX

## Platforms

macOS, Windows, Linux, API

## Pricing

Freemium: free tier available with paid upgrades

## Version

0.3.20

## Links

- Website: https://lmstudio.ai/
- Documentation: https://lmstudio.ai/docs/app
- Repository: https://github.com/lmstudio-ai
- EveryDev.ai: https://www.everydev.ai/tools/lm-studio
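
The local server mode mentioned above exposes an OpenAI-compatible REST API, so any standard HTTP client can talk to it. Below is a minimal Python sketch of a chat-completion call; it assumes the server's default address (`http://localhost:1234`), and the `model` name is a placeholder you would replace with whichever model you have loaded in LM Studio.

```python
import json
import urllib.request


def build_chat_request(prompt, model="local-model",
                       base_url="http://localhost:1234/v1"):
    """Build the URL and JSON payload for an OpenAI-compatible
    chat-completion request against a local LM Studio server.
    The default base_url and model name are assumptions."""
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return url, payload


def chat(prompt):
    """Send the request to a running LM Studio server.
    Requires the local server to be started in the LM Studio app."""
    url, payload = build_chat_request(prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses put the reply text here:
    return body["choices"][0]["message"]["content"]
```

Because the endpoint follows the OpenAI wire format, existing OpenAI client libraries can also be pointed at the local server by overriding their base URL.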