# LM Arena

> Web platform for comparing, running, and deploying large language models with hosted inference and API access.

LM Arena provides a web-based environment for running, comparing, and deploying large language models. It focuses on making model evaluation, hosted inference, and simple deployment workflows accessible from a browser and via an API. The platform supports uploading or connecting models, running evaluation workloads, and exposing inference endpoints for applications.

- **Model comparison**: Run side-by-side evaluations and benchmarks across models.
- **Hosted inference**: Deploy models to managed endpoints for production use.
- **API access**: Programmatically invoke models and integrate them into applications.
- **Custom model uploads**: Bring your own model artifacts for testing and deployment.
- **Usage monitoring**: Track usage metrics and the performance of deployed endpoints.

To get started, sign up on the web app, upload or connect a model, run a comparison job, and create an inference endpoint; use the provided API keys to integrate inference into your applications (see the example sketch at the end of this page).

## Features

- Model comparison
- Hosted inference endpoints
- API access
- Custom model uploads
- Usage monitoring and metrics

## Integrations

Hugging Face, OpenAI, Docker

## Platforms

Web, API

## Pricing

Open Source

## Links

- Website: https://lmarena.ai
- Documentation: https://help.lmarena.ai/
- Repository: https://github.com/lmarena/lmarena.github.io
- EveryDev.ai: https://www.everydev.ai/tools/lm-arena
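
## Example

As a rough illustration of the API-integration step described above, the sketch below sends a prompt to a deployed inference endpoint over HTTPS. The base URL, route, payload fields (`model`, `prompt`, `output`), and the `LMARENA_API_KEY` environment variable are assumptions made for illustration; the actual request and response format is defined in the LM Arena documentation (https://help.lmarena.ai/).

```python
import os

import requests

# Assumed base URL and route -- check the official docs for the real API surface.
API_BASE = "https://lmarena.ai/api/v1"
# API key issued in the web app, read from the environment (hypothetical variable name).
API_KEY = os.environ["LMARENA_API_KEY"]


def run_inference(model: str, prompt: str) -> str:
    """Send a prompt to a deployed endpoint and return the completion text."""
    response = requests.post(
        f"{API_BASE}/inference",  # assumed route
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "prompt": prompt},  # assumed payload shape
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"output": "<completion text>"}
    return response.json()["output"]


if __name__ == "__main__":
    print(run_inference("my-uploaded-model", "Summarize the benefits of side-by-side model comparison."))
```

The same pattern applies to any HTTP client: authenticate with the key from your account, pick the endpoint created for your model, and post prompts to it from your application.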