
    Torrix

    Observability Platforms
    Featured

    Self-hosted AI observability platform that tracks every LLM API call with real-time cost, token, and latency visibility across 300+ models and providers.


    At a Glance

    Pricing
    Free tier available

    Self-hosted, free forever. Everything you need to observe your LLMs with your data on your server.

    Pro (Founding Member): $19/mo
    Enterprise: Custom/contact


    Available On

    Web
    API
    CLI
    Browser
    SDK

    Resources

Website · Docs · GitHub · llms.txt

    Topics

Observability Platforms · LLM Evaluations · Monitoring Tools

    Alternatives

Traceloop · OpenObserve · Arize AI
    Developer
Torrix · Düsseldorf, Germany · Est. 2026

    Listed May 2026

    About Torrix

    Torrix is a self-hosted AI observability tool built for developers who want full visibility into their LLM API traffic without sending data to a third-party cloud. It captures every token, dollar, and millisecond from API calls and browser conversations in real time, and deploys on your own infrastructure in about 60 seconds via Docker.

    What It Is

    Torrix sits between your application and any LLM provider — OpenAI, Anthropic, Google Gemini, Groq, Mistral, Azure OpenAI, NVIDIA NIM, DeepSeek, Ollama, and more — and logs every request as a structured trace. The dashboard surfaces cost, token usage, latency percentiles, error rates, and full prompt/response bodies. Because it is self-hosted by default, prompts and responses never leave your server.
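The latency percentiles the dashboard surfaces (p50/p95/p99) can be derived directly from per-request trace latencies. A minimal stdlib sketch with made-up numbers follows; Torrix's actual aggregation pipeline is not documented here, so this only illustrates the idea:

```python
import statistics

# Hypothetical per-request latencies in milliseconds, as recorded on traces.
latencies = [112, 98, 430, 105, 120, 2150, 101, 99, 115, 108]

# statistics.quantiles with n=100 returns the 99 cut points p1..p99;
# method="inclusive" interpolates within the sample without extrapolating.
cuts = statistics.quantiles(sorted(latencies), n=100, method="inclusive")
p50, p95, p99 = cuts[49], cuts[94], cuts[98]
print(f"p50={p50:.0f}ms p95={p95:.0f}ms p99={p99:.0f}ms")
```

Note how a single slow outlier (the 2,150 ms call) barely moves p50 but dominates the tail percentiles, which is why dashboards report all three.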

    How It Integrates With Your Stack

    Torrix offers multiple integration paths with minimal code changes:

    • Python and Node.js SDKs with auto-instrumentation: call torrix.init() once and every LLM call in your codebase is captured automatically, including streaming responses.
    • Go, C#/.NET, and Java SDKs for manual ingest with zero external dependencies.
    • HTTP proxy for any tool that speaks HTTP — curl, n8n, Make, GitHub Copilot, SAP AI Core, or any OpenAI-compatible endpoint.
    • Browser extension that intercepts ChatGPT, Claude, Gemini, Perplexity, Grok, and Mistral conversations directly, with no proxy or code changes.
    • OpenTelemetry receiver at /v1/traces for applications already instrumented with the OTel SDK.
    • MCP server built in at /mcp, compatible with Claude Code, Cursor, Windsurf, and n8n.
    • n8n community node (@torrix-ai/n8n-nodes-torrix) for native workflow integration.
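Of these paths, the HTTP proxy is the least invasive: an OpenAI-compatible request is simply pointed at the Torrix endpoint instead of the provider. The sketch below builds such a request with the standard library; the host, port, and bearer token are placeholders for illustration, while the `x-torrix-trace` grouping header follows the convention described on this page:

```python
import json
import urllib.request

# Hypothetical local proxy endpoint; the real host/port come from your deployment.
TORRIX_PROXY = "http://localhost:8788/v1/chat/completions"

payload = {"model": "gpt-4o-mini",
           "messages": [{"role": "user", "content": "Hello"}]}

req = urllib.request.Request(
    TORRIX_PROXY,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer sk-...",           # provider key passes through
        "Content-Type": "application/json",
        "x-torrix-trace": "checkout-agent-run-42",  # groups multi-step agent calls
    },
)
# urllib.request.urlopen(req) would send it; the proxy forwards the call upstream
# and records cost, tokens, and latency on the way through.
```

Because only the base URL changes, the same swap works for any client that speaks the OpenAI wire format.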

    Key Observability Capabilities

    Torrix goes beyond basic logging with a range of analysis and control features:

    • Real-time cost tracking with per-call dollar amounts and projected month-end spend based on daily velocity.
    • Full prompt and response traces including finish reason, tool calls, reasoning steps (OpenAI o1/o3/o4, DeepSeek R1, Claude extended thinking, Gemini 2.5, Ollama Qwen3), and multimodal image inputs.
    • Agent trace grouping via x-torrix-trace header, rendering multi-step agent runs as a collapsible parent-child tree.
    • Budget controls: soft alert webhooks and hard caps that block proxy requests once a daily limit is reached.
    • PII detection and masking for emails, phone numbers, credit cards, and IP addresses before storage.
    • 300+ model cost comparison showing what the same prompt would cost across alternative models, live-priced.
    • Regression testing and evals: mark golden runs, replay against any model, batch-test datasets with LLM judge auto-scoring, and track pass rates.
    • Grafana/Prometheus export via a /metrics scrape endpoint.
    • SQL query interface for direct SELECT queries against the Torrix database.
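The month-end spend projection above amounts to linear extrapolation from daily velocity; whether Torrix uses exactly this formula is an assumption, but the idea fits in a few lines:

```python
import calendar
from datetime import date

def projected_month_end_spend(spent_so_far: float, today: date) -> float:
    """Extrapolate month-to-date spend from average daily velocity."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spent_so_far / today.day * days_in_month

# $42 spent by June 14th of a 30-day month projects to $90 at month end.
print(projected_month_end_spend(42.0, date(2026, 6, 14)))
```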

    Deployment Model and Privacy

Torrix is self-hosted by default. The homepage states that prompts and responses never touch a third-party cloud, and capture overhead is described as under 1 millisecond with async logging. Docker deployment is the primary path. The community edition is free with no credit card required, covering the 10,000 most recent runs with 7-day data retention. The Pro edition adds unlimited runs, 30-day retention, team management with per-project roles, model routing rules, an audit log, online evals, and scheduled cost reports.

    Why It Got Attention

    The Torrix homepage explicitly positions the tool as a self-hosted drop-in replacement for Helicone, noting that Helicone raised its entry plan price in 2026 and was acquired by Mintlify. Torrix claims compatibility with the same proxy model and header conventions, and provides a migration guide in its GitHub docs. This positioning targets developers who want cost-controlled, privacy-preserving observability without a managed SaaS dependency.



    Pricing

    FREE

    Community

    Self-hosted, free forever. Everything you need to observe your LLMs with your data on your server.

    • 1 user
    • 7-day data retention
    • 10,000 most recent runs
    • Budget controls (soft alert + hard cap)
    • Evals & regression testing

    Pro (Founding Member)

    Unlimited runs, 30-day retention, and full team management. Founding member price locks in forever.

    $19
    per month
    • Everything in Community
    • 30-day data retention
    • Unlimited runs
    • Unlimited golden runs for evals
    • Up to 10 users
    • Scheduled cost reports (weekly digest)
    • Model routing rules
    • Audit log
    • Online evals (auto-score every production run)
    • Priority email support
    • GitHub Actions eval CLI (coming soon)

    Enterprise

    For regulated industries that need compliance, SSO, and dedicated support.

    Custom
    contact sales
    • Unlimited users
    • 90-day retention
    • SSO (SAML / Okta)
    • Helm chart (Kubernetes)
    • Dedicated support

    Capabilities

    Key Features

    • Real-time cost tracking per API call
    • Full prompt and response trace logging
    • Token usage and latency analytics (p50/p95/p99)
    • 300+ model cost comparison
    • Budget controls with soft alerts and hard caps
    • PII detection and masking
    • Agent trace grouping and tree view
    • Regression testing with golden run replay
    • Batch eval datasets with LLM judge auto-scoring
    • Grafana/Prometheus metrics export
    • OpenTelemetry OTLP/HTTP receiver
    • MCP server built-in
    • Browser extension for ChatGPT, Claude, Gemini, and more
    • Streaming response instrumentation
    • Thinking and reasoning capture (o1, DeepSeek R1, Claude extended thinking)
    • Multi-project namespaces
    • Per-user cost attribution
    • Model routing rules
    • Audit log
    • SQL query interface
    • CSV and JSON export
    • Prompt management and versioning
    • Outbound webhooks for Slack and PagerDuty
    • Cost forecasting
    • Repeated prompt detection
    • Multimodal trace support
    • Shareable run links
    • Custom run tags
    • Run scoring and LLM judge
    • Weekly cost digest
    • API key management with scoped permissions
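The PII masking listed above can be pictured as a regex substitution pass applied before storage. The patterns below are deliberately simplified illustrations covering a subset of the classes Torrix is said to detect, not its actual rules:

```python
import re

# Simplified detectors for a subset of the PII classes (emails, cards, IPs);
# phone-number formats vary too widely for a one-line illustrative pattern.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ip":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def mask(text: str) -> str:
    """Replace each detected PII span with a <label> placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(mask("Contact alice@example.com from 10.0.0.7"))
```

Masking before storage (rather than at display time) matters for the self-hosted privacy story: the sensitive span never lands on disk.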

    Integrations

    OpenAI
    Anthropic
    Google Gemini
    Azure OpenAI
    Groq
    Mistral
    NVIDIA NIM
    DeepSeek
    Ollama
    Fireworks
    Together AI
    n8n
    Make
    Perplexity
    OpenRouter
    SAP AI Core
    GitHub Copilot
    Claude Code
    Cursor
    Windsurf
    Grafana
    Prometheus
    Slack
    PagerDuty
    Spring AI
    LangChain4j
    API Available
    View Docs


    Developer

    Torrix Team

    Torrix builds a self-hosted AI observability platform that gives developers full visibility into every LLM API call — tracking cost, tokens, latency, and full prompt/response traces in real time. The tool deploys on your own infrastructure via Docker in about 60 seconds, ensuring prompts and responses never leave your server. Torrix supports 300+ models across all major providers and integrates via Python, Node.js, Go, C#, Java SDKs, an HTTP proxy, a browser extension, and OpenTelemetry. It is built for developers who prioritize privacy and cost control over managed SaaS convenience.

    Founded 2026
    Düsseldorf, Germany
1 employee

    Used by

    Targeted at Enterprise users; names not…
Website · GitHub
    1 tool in directory

    Similar Tools


    Traceloop

    LLM reliability platform that turns evals and monitors into a continuous feedback loop for faster, more reliable AI app releases.


    OpenObserve

    Open source, petabyte-scale observability platform unifying logs, metrics, and traces with 140x lower storage costs than Elasticsearch.


    Arize AI

    Arize AI is an enterprise AI and agent engineering platform for development, observability, and evaluation of LLM applications, AI agents, and ML models in production.


    Related Topics

    Observability Platforms

    Comprehensive platforms that combine metrics, logs, and traces with AI-powered analytics to provide deep insights into complex distributed systems and application behavior.

    74 tools

    LLM Evaluations

    Platforms and frameworks for evaluating, testing, and benchmarking LLM systems and AI applications. These tools provide evaluators and evaluation models to score AI outputs, measure hallucinations, assess RAG quality, detect failures, and optimize model performance. Features include automated testing with LLM-as-a-judge metrics, component-level evaluation with tracing, regression testing in CI/CD pipelines, custom evaluator creation, dataset curation, and real-time monitoring of production systems. Teams use these solutions to validate prompt effectiveness, compare models side-by-side, ensure answer correctness and relevance, identify bias and toxicity, prevent PII leakage, and continuously improve AI product quality through experiments, benchmarks, and performance analytics.

    70 tools

    Monitoring Tools

    AI-enhanced monitoring solutions that provide real-time visibility into system performance, anomaly detection, and predictive alerting for proactive issue resolution.

    66 tools
    With AI, Everyone is a Dev. EveryDev.ai © 2026