
LiteLLM

LiteLLM is an open-source LLM gateway (proxy server) and Python SDK that lets teams call 100+ model providers through the OpenAI API format. It adds platform features—load balancing and fallbacks, per-key/user/team spend tracking, budgets and RPM/TPM limits, virtual keys, and an admin UI. It integrates with observability stacks (Langfuse, LangSmith, OpenTelemetry, Prometheus) and supports logging to S3/GCS. Enterprise options layer on SSO/JWT auth, audit logs, and fine-grained per-project guardrails.
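Because the SDK normalizes every provider to the OpenAI chat-completions shape, switching providers is a one-line change. A minimal sketch (the model names are illustrative, and the relevant provider API keys are assumed to be set as environment variables):

```python
# Minimal sketch of LiteLLM's unified interface; model names are illustrative
# and provider keys (OPENAI_API_KEY, ANTHROPIC_API_KEY) are assumed to be
# set in the environment.
from litellm import completion

messages = [{"role": "user", "content": "Say hello in one sentence."}]

# The same call shape works across providers; only the model string changes.
openai_resp = completion(model="openai/gpt-4o-mini", messages=messages)
claude_resp = completion(model="anthropic/claude-3-haiku-20240307", messages=messages)

# Both responses come back in the OpenAI response format.
print(openai_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)
```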

Developer

BerriAI is the team behind LiteLLM, an open-source LLM gateway and SDK that unifies access to 100+ providers with cost tracking and security features.

Pricing and Plans

Open Source: Free
  • 100+ provider integrations
  • OpenAI-compatible endpoints
  • Virtual keys, teams, budgets
  • Rate limits, load balancing, guardrails
  • Observability integrations (Langfuse, LangSmith, OTEL, Prometheus)
Enterprise (Cloud or Self-Hosted): Contact us
  • Everything in OSS
  • Enterprise support & custom SLAs
  • JWT/SSO, audit logs, advanced guardrails
  • Usage-based pricing; contact sales
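To illustrate the open-source tier's OpenAI-compatible endpoints and virtual keys, here is a rough sketch of calling a locally running proxy with the stock OpenAI client; the base URL assumes the proxy's default port (4000), and the key is a placeholder for a virtual key issued by the proxy:

```python
# Hypothetical sketch: point the standard OpenAI client at a LiteLLM proxy.
# The base URL assumes a local proxy on its default port; the key is a
# placeholder for a virtual key with its own budget and rate limits.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # LiteLLM proxy instead of api.openai.com
    api_key="sk-my-virtual-key",       # virtual key issued by the proxy admin
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the proxy routes this to the configured provider
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```

Spend for the call is then attributed to the virtual key, so budgets and RPM/TPM limits apply per key rather than per upstream provider account.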

System Requirements

Operating System: Windows, macOS, Linux
Memory (RAM): 2 GB minimum (4 GB+ recommended for proxy + logs)
Processor: Modern 64-bit CPU (x86_64 or ARM64)
Disk Space: 200 MB+ for proxy binaries/config; additional space for logs and Docker images

AI Capabilities

Unified OpenAI-format API for multiple providers
Provider routing and automatic fallbacks
Streaming responses
Spend tracking and usage attribution
Budgets, rate limiting, and quotas
Guardrails and moderation hooks
Observability and metrics export
Batching, caching, and prompt formatting
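Several of these capabilities come together in the SDK's Router. The following is a sketch under stated assumptions, not a definitive configuration: the deployment names, model choices, and fallback mapping are invented for the example. It shows routing with an automatic fallback plus a streamed response:

```python
# Sketch of provider routing, automatic fallbacks, and streaming.
# Deployment names, model choices, and the fallback mapping are illustrative.
from litellm import Router

router = Router(
    model_list=[
        {   # primary deployment
            "model_name": "chat-default",
            "litellm_params": {"model": "openai/gpt-4o-mini"},
        },
        {   # alternate deployment used as a fallback target
            "model_name": "chat-fallback",
            "litellm_params": {"model": "anthropic/claude-3-haiku-20240307"},
        },
    ],
    # If "chat-default" fails, retry the request on "chat-fallback".
    fallbacks=[{"chat-default": ["chat-fallback"]}],
)

# Streaming: chunks arrive incrementally in the OpenAI delta format.
stream = router.completion(
    model="chat-default",
    messages=[{"role": "user", "content": "Write a short haiku."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
```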