
LLM CLI

LLM is a lightweight command-line tool and Python library for working with large language models. It runs prompts and chat sessions, streams output, manages system prompts and reusable templates, and logs everything to SQLite for later search and analysis. It supports structured outputs via JSON schemas, multimodal inputs (images, audio, video) through attachments, and can grant models controlled access to tools. Embeddings are first-class: you can generate, store, and run similarity search against vectors in SQLite. The plugin system adds providers and local runtimes, including OpenAI, Anthropic (Claude), Google Gemini, Mistral, Ollama, llama.cpp, GPT4All and more, so you can mix remote APIs with models that run on your own machine. It installs with pip, pipx, Homebrew, or uv, and is usable as a Python library as well as a CLI.
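A few representative commands give a feel for the workflow (the model ID shown is illustrative; which models are available depends on your installed plugins and configured keys):

```shell
# Install the tool and set an API key (OpenAI support is built in)
pip install llm
llm keys set openai

# Run a one-off prompt, optionally with a system prompt
llm 'Ten names for a pet pelican'
llm -s 'Answer like a pirate' 'What is the capital of France?'

# Start an interactive chat session with a specific model
llm chat -m gpt-4o-mini

# Review recent logged prompts and responses (stored in SQLite)
llm logs -n 3
```

Every prompt and response is logged automatically, so `llm logs` works as a searchable history of everything you have run.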



Developer

Open-source developer and creator of Datasette and LLM; builds Python and SQLite-centric tools for data journalism and developer workflows.

Pricing and Plans

Plan: Open Source
Price: Free
Features:
  • Apache 2.0 license
  • Full CLI and Python API
  • Plugin ecosystem
  • Community support

System Requirements

Operating System
Windows, macOS, Linux
Memory (RAM)
4GB+ recommended
Processor
Modern multi-core processor
Disk Space
Core package ~60KB; additional space for embeddings DB and any local model files

AI Capabilities

CLI prompts and chat with streaming
System prompts and reusable templates
Fragments for assembling long context
Schema-validated structured outputs (JSON)
Embeddings generation and vector search
Multimodal inputs (image/audio/video) via attachments
Tool/function calling
SQLite logging and analytics
Plugin-based provider and model support
Local model runtimes (Ollama, llama.cpp, GPT4All)
Python API for sync/async usage and conversations
Token usage tracking
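The Python API mirrors the CLI's prompt and conversation features. A minimal sketch (the model ID is illustrative and requires the matching API key or plugin to be configured):

```python
import llm

# Look up a model by ID; raises an error if no installed plugin provides it
model = llm.get_model("gpt-4o-mini")

# Run a single prompt with an optional system prompt;
# response.text() blocks until the full response is available
response = model.prompt(
    "Describe SQLite in one sentence",
    system="Be concise",
)
print(response.text())

# Conversations carry context across multiple turns
conversation = model.conversation()
print(conversation.prompt("Name a dog breed").text())
print(conversation.prompt("Why is that breed popular?").text())
```

The same API supports async usage, and responses expose token usage details alongside the text.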