
Helicone

Observability Platforms

Helicone provides observability and analytics for large language model (LLM) usage, capturing telemetry, metrics, and logs from LLM calls and surfacing them through a web dashboard and API.

At a Glance

Pricing

Free tier available

Kickstart your AI project with free requests and dashboard access.

Pro: $20/mo
Team: $200/mo
Enterprise: custom pricing (contact sales)


Available On

Web
API
SDK

About Helicone

Helicone provides observability and analytics for large language model (LLM) usage, capturing telemetry, token usage, latency, and request metadata through a proxy or SDK and exposing the results in a web dashboard and API. It helps teams monitor LLM performance, troubleshoot failures, and understand which usage patterns drive cost. Helicone can be self-hosted from its open-source components or used as a hosted service for collecting LLM metrics and logs.

  • Telemetry capture — Use Helicone as a proxy or instrument the SDK to capture requests, responses, tokens, and latency for LLM calls (see the sketch after this list).
  • Dashboard and API analytics — View aggregated metrics, token consumption, and latency trends in the web dashboard or query metrics programmatically via the API.
  • Request logging and replay — Store request/response pairs and metadata for debugging and post-hoc analysis.
  • Integrations and SDKs — Install language- or framework-specific SDKs or route traffic through the Helicone proxy to start collecting data.
  • Self-host or hosted — Deploy the open-source components to your infrastructure for local data control or use the hosted offering for managed telemetry.
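
A minimal sketch of the proxy route, in Python with the official openai client: the base URL is swapped for Helicone's OpenAI-compatible proxy endpoint and a Helicone-Auth header carries the Helicone API key. The endpoint and header name follow Helicone's documented OpenAI integration; verify both against the current docs.

    # Minimal sketch: route OpenAI traffic through the Helicone proxy so requests,
    # responses, token counts, and latency are logged automatically.
    # Assumption: the proxy base URL and Helicone-Auth header match Helicone's docs.
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["OPENAI_API_KEY"],
        base_url="https://oai.helicone.ai/v1",  # Helicone proxy instead of api.openai.com
        default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
    )

    # The call itself is unchanged; Helicone records it and forwards it to OpenAI.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello from behind the Helicone proxy"}],
    )
    print(response.choices[0].message.content)

Because the change is only a base URL and a header, it can be removed again without touching application logic.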

To get started, create an account or deploy the open-source components, configure your LLM client to use Helicone's proxy or SDK, and open the dashboard to inspect metrics and logs. Use the API for automated metric queries and export.
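
For the automated-query path, a rough sketch along the following lines could export recent request logs. The endpoint path, payload shape, and response fields shown here are illustrative assumptions and should be checked against Helicone's API reference.

    # Rough sketch of an automated metrics/log export against Helicone's REST API.
    # Assumptions: the endpoint path, request payload, and response fields are
    # illustrative placeholders; consult the API reference for the real interface.
    import os
    import requests

    resp = requests.post(
        "https://api.helicone.ai/v1/request/query",  # assumed request-query endpoint
        headers={"Authorization": f"Bearer {os.environ['HELICONE_API_KEY']}"},
        json={"filter": "all", "limit": 100},  # assumed payload: last 100 logged requests
        timeout=30,
    )
    resp.raise_for_status()
    for record in resp.json():  # assumed shape: a list of request records
        print(record.get("model"), record.get("latency_ms"), record.get("total_tokens"))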

Demo Video

A Helicone demo video is available on YouTube.


Pricing

Free (Hobby)

Kickstart your AI project with free requests and dashboard access.

  • 10,000 free requests
  • Requests and Dashboard
  • Free, truly

Pro: $20 per month (Popular)

Starter plan for teams. Usage-based pricing applies. Includes a 7-day free trial.

  • Everything in Hobby
  • Scale beyond 10k requests
  • Core observability features
  • Standard support

Team: $200 per month

For growing companies. Includes a 7-day free trial.

  • Everything in Pro
  • Unlimited seats
  • Prompt Management
  • SOC-2 & HIPAA compliance
  • Dedicated Slack channel

Enterprise: custom pricing (contact sales)

Custom-built packages. Contact sales for pricing.

  • Everything in Team
  • Custom MSA
  • SAML SSO
  • On-prem deployment
  • Bulk cloud discounts

View official pricing

Capabilities

Key Features

  • LLM observability and analytics
  • API proxy and SDK instrumentation
  • Token usage and cost analytics (see the sketch after this list)
  • Latency and performance metrics
  • Request/response logging for debugging
  • Web dashboard and metrics API
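
To illustrate the cost-analytics features above, requests can be tagged so token and cost metrics are segmented per feature or environment in the dashboard. This sketch assumes Helicone's custom-property request headers (Helicone-Property-<name>); confirm the header names in the docs.

    # Sketch: tag requests with custom properties so token/cost analytics can be
    # broken down by feature and environment in the dashboard.
    # Assumption: Helicone-Property-<name> headers are the documented mechanism.
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["OPENAI_API_KEY"],
        base_url="https://oai.helicone.ai/v1",
        default_headers={
            "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
            "Helicone-Property-Feature": "summarizer",   # custom dimension: feature
            "Helicone-Property-Environment": "staging",  # custom dimension: environment
        },
    )

    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarize: Helicone logs LLM traffic."}],
    )
    print(summary.choices[0].message.content)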

Integrations

OpenAI
Anthropic
Hugging Face
Custom HTTP-based LLM endpoints
API Available
View Docs