# Portkey

> Production stack for GenAI builders with AI Gateway, Observability, Guardrails, Governance, and Prompt Management in one platform.

Portkey is a comprehensive production stack for GenAI teams, providing everything needed to deploy and manage AI applications at scale. The platform offers a unified interface for accessing over 1,600 LLMs via a single API, along with built-in observability, guardrails, governance controls, and prompt management capabilities. Portkey processes billions of tokens daily and serves thousands of GenAI teams worldwide.

- **AI Gateway** provides a universal API to access 1,600+ LLMs from various providers, with automatic fallbacks, load balancing, retries, and request timeouts to ensure reliability and resilience in production environments.
- **Observability Dashboard** enables real-time monitoring of LLM behavior, tracking logs, traces, feedback, and custom metadata to catch anomalies early and manage usage proactively.
- **Guardrails** keep AI outputs in check with deterministic and LLM-based guardrails, including PII redaction to automatically remove sensitive data before requests reach the LLM.
- **Prompt Management** offers a prompt engineering studio with templates, a playground, API endpoints, versioning, and variable management for collaborative prompt development.
- **AI Governance** provides role-based access control (RBAC), team management, SSO integration, budget limits, and granular rate controls to manage resources across multiple teams securely.
- **MCP Gateway** centralizes authentication, access, and observability of Model Context Protocol servers, enabling teams to deploy and govern MCP servers efficiently.
- **Caching** includes both simple and semantic caching options to reduce costs and improve response times by avoiding redundant API calls.
- **Security & Compliance** features SOC2 Type 2, ISO27001, GDPR, and HIPAA compliance, with options for VPC hosting, private cloud deployment, and data isolation.
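The gateway's fallback and load-balancing behavior is driven by a routing config. A minimal sketch of what such a config object can look like, expressed as Python dicts (the exact schema is defined in Portkey's documentation; the provider names, key placeholders, and weights here are illustrative assumptions):

```python
# Illustrative fallback config: try the primary provider first,
# then fall back to a secondary one if the request fails.
# Schema sketched from Portkey's gateway-config docs; verify against
# the current reference before use.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "api_key": "OPENAI_API_KEY"},
        {"provider": "anthropic", "api_key": "ANTHROPIC_API_KEY"},
    ],
}

# Illustrative load-balancing config: split traffic across two
# providers by weight instead of using a strict priority order.
loadbalance_config = {
    "strategy": {"mode": "loadbalance"},
    "targets": [
        {"provider": "openai", "api_key": "OPENAI_API_KEY", "weight": 0.75},
        {"provider": "azure-openai", "api_key": "AZURE_API_KEY", "weight": 0.25},
    ],
}

# Sanity check: weights should account for all traffic.
assert sum(t["weight"] for t in loadbalance_config["targets"]) == 1.0
```

In practice a config like this is attached to the client or to an individual request, so routing policy lives in one place rather than being scattered through application code.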
To get started, integrate Portkey in just three lines of code using the Python or Node.js SDK. Sign up for a free account, obtain your API key, and configure your preferred LLM provider. The platform adds minimal latency (20-40 ms) while providing comprehensive monitoring, cost optimization, and reliability features for production AI applications.

## Features

- Universal API for 1,600+ LLMs
- AI Gateway with fallbacks and load balancing
- Real-time observability dashboard
- Logs, traces, and custom metadata tracking
- Deterministic and LLM guardrails
- PII redaction
- Prompt engineering studio
- Prompt versioning and variable management
- Role-based access control (RBAC)
- Team management
- SSO integration
- Budget and rate limits
- Simple and semantic caching
- MCP Gateway for MCP servers
- Virtual keys and key management
- Automatic retries and timeouts
- Config management
- Privacy mode
- Alerts and notifications
- SOC2, ISO27001, GDPR, HIPAA compliance

## Integrations

OpenAI, Azure OpenAI, AWS Bedrock, Google Cloud AI, MongoDB, GitHub, Docker, Auth0, Cloudflare, Figma, Microsoft Azure

## Platforms

WEB, API, DEVELOPER_SDK

## Pricing

Open Source, Free tier available

## Links

- Website: https://portkey.ai
- Documentation: https://portkey.ai/docs
- Repository: https://github.com/Portkey-AI/gateway
- EveryDev.ai: https://www.everydev.ai/tools/portkey
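The three-line quickstart mentioned above goes through Portkey's SDKs, which talk to an OpenAI-compatible REST endpoint. As a stdlib-only sketch of what such a request looks like (the endpoint URL and `x-portkey-*` header names follow Portkey's public docs, but treat the specifics here as illustrative assumptions; the model name and prompt are examples):

```python
import json
import os
import urllib.request

# Illustrative request to the gateway's OpenAI-compatible chat endpoint.
url = "https://api.portkey.ai/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    # Portkey credentials and routing are passed via headers.
    "x-portkey-api-key": os.environ.get("PORTKEY_API_KEY", ""),
    "x-portkey-provider": "openai",  # which upstream LLM provider to route to
}
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}],
}

req = urllib.request.Request(
    url, data=json.dumps(payload).encode(), headers=headers, method="POST"
)

# Only send the request when a key is actually configured.
if os.environ.get("PORTKEY_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the wire format stays OpenAI-compatible, switching providers or adding fallbacks is a matter of changing headers or the attached config, not rewriting application code.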