# Martian

> LLM router and unified API that selects the best model per request to improve performance and reduce cost.

Martian provides a model router and gateway that forwards each request to the most suitable LLM based on performance, cost, and reliability. It’s a drop-in, OpenAI-compatible endpoint with controls for maximum cost and willingness-to-pay, automatic failover across providers, benchmarking reports, and access to a large catalog of supported models. A free developer tier includes 2,500 requests; paid usage is metered, and enterprises can deploy custom routers with SLA and VPC options.

## Features

- Real-time routing across multiple LLM providers
- OpenAI-compatible API (`base_url` swap) for easy integration
- Cost/performance controls (e.g., `max_cost`, `willingness_to_pay`)
- Automatic failover and outage rerouting for higher uptime
- Benchmarking reports to validate router performance
- Unified access to hundreds of models via adapters
- Single Martian key or bring-your-own provider keys
- Enterprise options: SLA, VPC deployment, and custom-built routers
- SDK examples for Python, Node, and LangChain
- Leaderboards and datasets to compare model cost, latency, and performance

## Integrations

OpenAI SDK (Python/Node), Anthropic, LangChain (Python/JS), DeepInfra, Cerebras

## Platforms

Web, API, Developer SDK

## Pricing

Open source; free tier available

## Links

- Website: https://www.withmartian.com
- Documentation: https://docs.withmartian.com/martian-model-router
- Repository: https://github.com/withmartian/llm-adapters
- EveryDev.ai: https://www.everydev.ai/tools/martian
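Because the gateway is OpenAI-compatible, integration is typically just pointing an OpenAI-style request at Martian's base URL and adding the routing controls. A minimal stdlib sketch of such a request body — the base URL shown is a placeholder, and the exact names and semantics of `model`, `max_cost`, and `willingness_to_pay` should be confirmed against Martian's documentation:

```python
import json

# Placeholder endpoint; substitute the real base URL from Martian's docs.
BASE_URL = "https://withmartian.com/api/openai/v1"

def build_chat_request(prompt: str, max_cost: float) -> dict:
    """Build an OpenAI-style chat completion payload with a per-request cost cap.

    The `max_cost` field is Martian's routing extension (name assumed here);
    standard OpenAI fields (`model`, `messages`) are unchanged.
    """
    return {
        "model": "router",  # ask Martian's router to pick the model
        "messages": [{"role": "user", "content": prompt}],
        "max_cost": max_cost,  # cap spend for this request
    }

payload = build_chat_request("Summarize this ticket.", max_cost=0.01)
body = json.dumps(payload)  # POST this to f"{BASE_URL}/chat/completions"
```

With the official OpenAI SDKs, the same effect is the `base_url` swap listed under Features: construct the client with Martian's endpoint and your Martian key, and pass the routing extras alongside the usual parameters.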