# LiteLLM

> Open-source LLM gateway and Python SDK that unifies 100+ providers behind an OpenAI-compatible API with cost tracking, budgets, rate limits, and fallbacks.

LiteLLM is an open-source LLM gateway (proxy server) and Python SDK that lets teams call 100+ model providers through the OpenAI API format. It adds platform features: load balancing and fallbacks, spend tracking per key/user/team, budgets and RPM/TPM limits, virtual keys, and an admin UI. It integrates with observability stacks (Langfuse, LangSmith, OpenTelemetry, Prometheus) and supports logging requests and costs to S3/GCS. Enterprise options layer on SSO/JWT auth and audit logs, plus fine-grained guardrails per project.

## Features

- OpenAI-compatible API across 100+ LLM providers
- Proxy (LLM Gateway) with routing, load balancing, and fallbacks
- Python SDK for direct calls and streaming
- Cost tracking and spend attribution per key/user/team/org
- Budgets and rate limiting (RPM/TPM)
- Virtual keys, teams, and role/permission controls
- Admin UI for keys, models, teams, and budgets
- Observability: Langfuse, LangSmith, OpenTelemetry, Prometheus
- Audit logs and SSO/JWT auth (Enterprise)
- Guardrails and moderation integrations; per-project guardrails
- Batch API, caching, and prompt formatting for Hugging Face models
- S3/GCS logging of requests and costs

## Integrations

OpenAI, Azure OpenAI, Anthropic, Google AI Studio, Vertex AI, AWS Bedrock, Mistral, Cohere, Groq, xAI, Ollama, vLLM, LM Studio, Hugging Face, Databricks, NVIDIA NIM, Fireworks AI, Perplexity, Deepgram, Arize, Aporia, Langfuse, LangSmith, OpenTelemetry, Prometheus, S3, GCS

## Platforms

WEB, API, DEVELOPER_SDK

## Pricing

Open Source, Free tier available

## Links

- Website: https://www.litellm.ai
- Documentation: https://docs.litellm.ai/docs/
- Repository: https://github.com/BerriAI/litellm
- EveryDev.ai: https://www.everydev.ai/tools/litellm
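## Example

The routing, fallback, and virtual-key features above are driven by the proxy's `config.yaml`. The following is a minimal sketch, not a definitive configuration: the model aliases (`gpt-4o`, `claude-sonnet`), the specific Anthropic model ID, and the environment-variable names are illustrative assumptions; check the LiteLLM docs for the exact fallback syntax supported by your version.

```yaml
# config.yaml -- minimal sketch of a LiteLLM proxy config (names are illustrative)
model_list:
  - model_name: gpt-4o                 # alias that clients request
    litellm_params:
      model: openai/gpt-4o             # provider/model routed to
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY

litellm_settings:
  # If a request to gpt-4o fails, retry it against claude-sonnet.
  fallbacks:
    - gpt-4o: ["claude-sonnet"]
```

Starting the gateway with `litellm --config config.yaml` exposes an OpenAI-compatible endpoint, so existing OpenAI clients can point at it and address either alias by `model` name.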