EveryDev.ai

Agenta

Prompt Management

Open-source LLMOps platform for prompt management, evaluation, and observability for developer and product teams.


At a Glance

Pricing

Open Source
Free tier available

Get started with Agenta at no cost, with unlimited prompts and 2 seats included.

Pro: $49/mo
Business: $399/mo
Enterprise: custom pricing (contact sales)

Available On

Web
API
SDK

Resources

Website · Docs · GitHub · llms.txt

Topics

Prompt Management · LLM Evaluations · Observability Platforms

Updated Feb 2026

About Agenta

Agenta is an open-source LLMOps platform that helps developers and product teams build reliable LLM applications. It covers the LLM development lifecycle with tools for prompt management, evaluation, and observability. Agenta provides a web UI, APIs, and SDKs so teams can collaborate, run systematic evaluations, and monitor production behavior.

  • Prompt management: Organize, version, and collaborate on prompts so subject-matter experts can edit prompts outside the codebase; use the playground to iterate and run side-by-side comparisons.
  • Evaluation: Run automatic evaluations at scale, human annotation workflows, and online evaluation against production traffic to validate changes before and after deployment.
  • Observability & tracing: Capture traces and user feedback via API, debug agent execution flows, and track cost and failure cases over time to add to test sets.
  • Playground and test sets: Experiment with prompts and test sets inside the UI to reproduce and fix edge cases, then promote validated prompts to production.
  • Integrations and SDKs: Integrate with major LLM providers and frameworks via API and SDKs; Agenta is OpenTelemetry-compliant for tracing and supports adding custom or self-hosted models.

To get started, sign up for the web product or self-host the MIT-licensed project, add your model providers and prompts, create test sets, and run evaluations from the UI or via the API/SDK.
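As a rough illustration of the tracing side, here is a stdlib-only Python sketch of capturing a per-call trace record of the kind an observability backend ingests. The field names and the in-memory `TRACES` sink are illustrative placeholders, not Agenta's actual SDK or schema; consult the official docs for the real integration.

```python
# Sketch: record one trace per LLM call. Field names are illustrative,
# not Agenta's documented schema.
import functools
import time
import uuid

TRACES = []  # stand-in for exporting to an observability backend


def traced(model: str):
    """Decorator that records prompt, response, and latency per call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(prompt: str) -> str:
            start = time.perf_counter()
            response = fn(prompt)
            TRACES.append({
                "trace_id": uuid.uuid4().hex,
                "model": model,
                "prompt": prompt,
                "response": response,
                "latency_s": round(time.perf_counter() - start, 4),
            })
            return response
        return inner
    return wrap


@traced(model="example-model")
def call_llm(prompt: str) -> str:
    return "stubbed response"  # replace with a real provider call


call_llm("Summarize this ticket")
```

In a real setup the append would instead export via the API or an OpenTelemetry span, but the shape of the captured data (inputs, outputs, latency, and later cost) is the same idea.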

Pricing

Free

Get started with Agenta at no cost.

  • Unlimited prompts
  • 2 seats included
  • 20 evaluations per month included
  • 5k traces per month included
30-day retention period

Pro

Popular

Includes additional seats, higher trace allowances, in-app support, and longer retention.

$49 per month
  • 3 seats included
  • Up to 10 seats
  • Unlimited evaluations
10k traces per month included, then $5 per additional 10k
  • In-app support
90-day retention period

Business

Everything from Pro plus enterprise security, compliance, and extended retention.

$399 per month
  • Unlimited seats
  • Unlimited evaluations
1M traces per month included, then $5 per additional 10k
  • Role-based access control
  • SOC 2 reports
  • Private Slack channel
365-day retention period
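To make the overage math concrete, here is a small sketch estimating a monthly bill from the trace allowances above. It assumes overage is billed in whole 10k-trace increments, which may not match Agenta's actual proration.

```python
# Sketch: estimate monthly cost on the Pro and Business tiers from the
# listed rates ($5 per additional 10k traces). Billing granularity is an
# assumption here, not a documented detail.
import math

PLANS = {
    "pro":      {"base": 49,  "included_traces": 10_000},
    "business": {"base": 399, "included_traces": 1_000_000},
}
OVERAGE_PER_10K = 5


def monthly_cost(plan: str, traces: int) -> int:
    p = PLANS[plan]
    extra = max(0, traces - p["included_traces"])
    return p["base"] + OVERAGE_PER_10K * math.ceil(extra / 10_000)


monthly_cost("pro", 35_000)  # 49 + 5 * 3 = 64
```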

Enterprise

Personalized service, enterprise security, and custom terms for large organizations.

Custom (contact sales)
  • Everything from Business
  • Volume pricing
  • Audit logs
  • Custom retention periods
  • Bring Your Own Cloud
  • Dedicated support and self-hosted deployment options
Security reviews, custom SLA, and DPA
View official pricing

Capabilities

Key Features

  • Prompt management and versioning
  • Playground for prompt experimentation
  • Automatic evaluation at scale
  • Human evaluation workflows
  • Online evaluation for production
  • Tracing and observability (OpenTelemetry-compatible)
  • Test sets and A/B testing
  • Cost tracking and retention controls
  • Role-based access control and enterprise features
  • Self-hostable MIT-licensed deployment
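As a toy illustration of what "automatic evaluation at scale" means in practice, the sketch below runs a test set through a stand-in app and scores outputs with exact match. Real evaluators (LLM-as-a-judge, similarity metrics) are far more involved, and none of these names come from Agenta's API.

```python
# Sketch: the evaluation loop a platform like Agenta automates -- run every
# test case through the app, score each output, aggregate a metric.
test_set = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]


def app(prompt: str) -> str:
    # Stand-in for the LLM application under test.
    answers = {"2 + 2": "4", "capital of France": "Paris"}
    return answers.get(prompt, "")


def evaluate(cases) -> float:
    """Exact-match accuracy over a test set."""
    results = [app(c["input"]) == c["expected"] for c in cases]
    return sum(results) / len(results)


score = evaluate(test_set)  # 1.0 for this toy app
```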

Integrations

OpenAI
Anthropic
Cohere
OpenRouter
Anyscale
Perplexity AI
TogetherAI
DeepInfra
Aleph Alpha
Groq
Gemini
Mistral
Ollama
AWS Bedrock
Azure OpenAI
Vertex AI
LangChain
LlamaIndex
OpenTelemetry
API Available

Demo Video

Agenta demo video (watch on YouTube)


Developer

Agentatech UG

Agenta builds an open-source LLMOps platform that provides prompt management, evaluation, and observability for developer and product teams. The team ships tooling and SDKs that integrate with major LLM providers and frameworks and focuses on collaboration between engineers and subject-matter experts. Agenta maintains an MIT license and supports self-hosting, cloud-hosted tiers, and OpenTelemetry-compatible tracing.

Read more about Agentatech UG
Website · GitHub · X / Twitter
1 tool in directory

Similar Tools


Lunary

Open-source platform to monitor, improve, and secure AI chatbots with observability, prompt management, evaluations, and analytics.


Klu

Design, deploy, and optimize LLM apps with collaborative prompt design, evaluation workflows, and observability tools.


HoneyHive

AI observability and evaluation platform to monitor, evaluate, and govern AI agents and applications across any model, framework, or agent runtime.


Related Topics

Prompt Management

Tools for organizing, versioning, and managing AI prompts.

22 tools

LLM Evaluations

Platforms and frameworks for evaluating, testing, and benchmarking LLM systems and AI applications. These tools provide evaluators and evaluation models to score AI outputs, measure hallucinations, assess RAG quality, detect failures, and optimize model performance. Features include automated testing with LLM-as-a-judge metrics, component-level evaluation with tracing, regression testing in CI/CD pipelines, custom evaluator creation, dataset curation, and real-time monitoring of production systems. Teams use these solutions to validate prompt effectiveness, compare models side-by-side, ensure answer correctness and relevance, identify bias and toxicity, prevent PII leakage, and continuously improve AI product quality through experiments, benchmarks, and performance analytics.

35 tools

Observability Platforms

Comprehensive platforms that combine metrics, logs, and traces with AI-powered analytics to provide deep insights into complex distributed systems and application behavior.

37 tools
With AI, Everyone is a Dev. EveryDev.ai © 2026