Martian
LLM model router and unified API that selects the best model per request to improve performance and reduce cost.
At a Glance
Pricing
Get started with Martian for free: 2,500 requests included, with API access and unlimited runs.
About Martian
Martian provides a model router and gateway that forwards each request to the most suitable LLM based on performance, cost, and reliability. It’s a drop-in, OpenAI-compatible endpoint with controls for max cost and willingness-to-pay, automatic failover across providers, benchmarking reports, and access to a large catalog of supported models. A free developer tier includes 2,500 requests; paid usage is metered, and enterprises can deploy custom routers with SLA and VPC options.
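The drop-in, OpenAI-compatible integration described above can be sketched without any SDK by building a standard `/chat/completions` request against a swapped base URL. The base URL, model alias, and the exact wire name of the `max_cost` field are placeholders here, not Martian's documented values:

```python
import json

# Hypothetical gateway URL: the real endpoint is whatever Martian documents.
MARTIAN_BASE_URL = "https://gateway.example.invalid/v1"

def build_chat_request(messages, model="router", max_cost=None):
    """Build an OpenAI-style /chat/completions payload.

    `max_cost` is the per-request cost ceiling mentioned in the text;
    its exact field name on the wire is an assumption.
    """
    payload = {"model": model, "messages": messages}
    if max_cost is not None:
        payload["max_cost"] = max_cost
    url = MARTIAN_BASE_URL + "/chat/completions"
    return url, json.dumps(payload)

url, body = build_chat_request(
    [{"role": "user", "content": "hi"}], max_cost=0.01
)
```

Because the endpoint mirrors the OpenAI API shape, existing OpenAI client code would only need its base URL (and key) changed to point at the gateway.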
Pricing
Free Plan Available
Get started with Martian for free: the free plan includes 2,500 requests, API access, and unlimited runs.
- 2,500 requests free
- API access, unlimited runs
- Access to full model list
- Performance vs. cost optimization
Developer (usage-based)
A usage-based Developer plan: $20 per additional 5,000 requests after the free tier.
- $20 per additional 5,000 requests after free tier
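The usage-based pricing above implies a simple cost formula, sketched here. Whether partial bundles are rounded up to a full $20 bundle is an assumption, not something the listing states:

```python
import math

def monthly_cost(requests, free=2500, bundle=5000, bundle_price=20):
    """Cost in USD: requests beyond the 2,500 free tier are billed in
    $20 bundles of 5,000 (partial bundles rounded up -- an assumption)."""
    extra = max(0, requests - free)
    return math.ceil(extra / bundle) * bundle_price

monthly_cost(2500)   # 0  (fully covered by the free tier)
monthly_cost(7500)   # 20 (one paid bundle of 5,000)
```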
Enterprise (Custom Router)
Enterprise-grade solution: a custom-built router tuned to your data and tasks, with SLA, VPC deployment, and dedicated support.
- Custom-built router tuned to your data and tasks
- SLA and VPC deployment
- API access, unlimited runs
- Performance vs. cost optimization
- Access to complete model list
Capabilities
Key Features
- Real-time routing across multiple LLM providers
- OpenAI-compatible API (base_url swap) for easy integration
- Cost/performance controls (e.g., max_cost, willingness_to_pay)
- Automatic failover and outage rerouting for higher uptime
- Benchmarking reports to validate router performance
- Unified access to hundreds of models via adapters
- Single Martian key or bring-your-own provider keys
- Enterprise options: SLA, VPC deployment, and custom-built routers
- Supported SDK examples for Python, Node, and LangChain
- Leaderboards and datasets to compare model cost/latency/performance
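The automatic failover listed above can be illustrated with a toy routing loop: try providers in priority order and fall through to the next on an outage. The provider callables here are stand-ins, not Martian's actual routing logic:

```python
def route_with_failover(prompt, providers):
    """Try each (name, callable) provider in order; on failure fall
    through to the next -- a toy version of outage rerouting."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

def flaky_provider(prompt):
    # Simulates a provider outage.
    raise TimeoutError("provider outage")

def backup_provider(prompt):
    # Stand-in for a real LLM call.
    return f"echo: {prompt}"

providers = [("primary", flaky_provider), ("backup", backup_provider)]
name, reply = route_with_failover("hello", providers)
# name == "backup", reply == "echo: hello"
```

A production router would add the cost/performance scoring described above when choosing the provider order, rather than using a fixed list.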