# Ollama

> Run Llama 3.3, DeepSeek-R1, Phi-4, Mistral, Gemma 3, and other models locally on your device

Ollama is a lightweight, cross-platform application that lets developers and AI enthusiasts run large language models (LLMs) entirely on their local hardware. With a focus on simplicity and accessibility, Ollama makes it easy to download, run, and customize state-of-the-art open-source LLMs without requiring cloud resources or specialized knowledge.

The application supports a wide range of popular models, including Llama 3.3, DeepSeek-R1, Phi-4, Mistral, and Gemma 3, through a straightforward command-line interface and REST API. Users can run these models with minimal setup (see the quickstart sketch in the Examples section at the end of this page).

One of Ollama's key strengths is efficient resource utilization: the smallest models can run on systems with just 8GB of RAM, while larger models scale with available hardware.

For developers, Ollama provides a comprehensive REST API for integrating local LLM capabilities into applications, along with official Python and JavaScript libraries for seamless interaction (API sketches appear under Examples below). This has led to a thriving ecosystem of community integrations spanning web interfaces, terminal applications, IDE extensions, and mobile apps.

The tool supports model customization through a simple Modelfile format, allowing users to adjust system prompts, import models from various formats, and create specialized versions for specific use cases, all without extensive machine learning knowledge (a Modelfile sketch also appears under Examples below).

As an open-source project with an active community, Ollama continues to evolve rapidly, with regular updates improving performance, adding new features, and supporting the latest models. By bringing powerful LLM capabilities to local hardware, Ollama represents a significant step in democratizing access to advanced AI while preserving user privacy and reducing dependence on cloud services.

## Features

- Run state-of-the-art LLMs locally on your device
- Support for models like Llama 3.3, DeepSeek-R1, Phi-4, Mistral, Gemma 3, and more
- Simple command-line interface for model management
- Comprehensive REST API for application integration
- Official Python and JavaScript libraries
- Model customization through Modelfile format
- Cross-platform support for macOS, Windows, and Linux
- Docker image available for containerized deployments
- Minimal resource requirements for smaller models
- Vibrant community ecosystem of integrations and extensions

## Integrations

Python, JavaScript, Docker, Visual Studio Code, JetBrains IDEs, terminal applications, web interfaces, mobile applications, database systems, observability tools

## Platforms

Windows, macOS, Linux

## Pricing

Open Source

## Version

0.1.27

## Links

- Website: https://ollama.com
- Documentation: https://github.com/ollama/ollama/tree/main/docs
- Repository: https://github.com/ollama/ollama
- EveryDev.ai: https://www.everydev.ai/tools/ollama
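## Examples

A minimal quickstart sketch of the command-line interface described above. The model name is only an example; substitute any model from the Ollama library:

```shell
# Download a model from the Ollama library
ollama pull llama3.3

# Run it interactively, or pass a one-off prompt
ollama run llama3.3 "Why is the sky blue?"

# List the models installed locally
ollama list
```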
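A sketch of calling the local REST API, which listens on port 11434 by default. This assumes the Ollama server is running and the example model has already been pulled; only the Python standard library is used:

```python
import json
import urllib.request

# Build a non-streaming generate request ("stream": False returns a
# single JSON object instead of a stream of partial responses).
payload = json.dumps({
    "model": "llama3.3",  # example model; use any locally installed model
    "prompt": "Explain what a Modelfile is in one sentence.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # the generated text
```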
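The same interaction through the official Python library (installed with `pip install ollama`), again assuming the server is running and the example model is available locally:

```python
import ollama

# Chat-style request; messages follow the familiar role/content shape.
reply = ollama.chat(
    model="llama3.3",  # example model name
    messages=[{"role": "user", "content": "Summarize Ollama in one line."}],
)

print(reply["message"]["content"])
```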
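Finally, a sketch of the Modelfile customization mentioned above: a base model plus an adjusted parameter and a baked-in system prompt. The base model and values are illustrative:

```
# Base model to customize (any locally pulled model works)
FROM llama3.3

# Sampling temperature (higher = more varied output)
PARAMETER temperature 0.7

# System prompt embedded in the custom model
SYSTEM """
You are a concise technical assistant. Answer in at most three sentences.
"""
```

Saving this as `Modelfile`, you would build and run the customized model with `ollama create my-assistant -f Modelfile` followed by `ollama run my-assistant`.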