EveryDev.ai

LocalScore

Local Inference

An open benchmark tool that helps you understand how well your computer can handle local AI tasks.

Visit Website

At a Glance

Pricing

Open Source

Completely free and open source benchmarking tool


Available On

Web
Windows
macOS
Linux

Resources

Website · Docs · GitHub · llms.txt

Topics

Local Inference · Performance Metrics · AI Infrastructure

About LocalScore

LocalScore is an open-source benchmarking tool designed to measure and compare how effectively different hardware configurations can run local AI workloads. It provides standardized performance metrics for running large language models locally, helping users understand their system's capabilities for AI inference tasks. The tool is part of the Mozilla Builders program and offers both a web interface for viewing results and a CLI for running benchmarks.

Key Features:

  • Open Benchmark System - Provides transparent, community-driven benchmarking for local AI performance across various hardware configurations including NVIDIA GPUs, Apple Silicon, and other accelerators.

  • Comprehensive Performance Metrics - Measures key indicators including prompt processing speed (tokens/s), generation speed (tokens/s), time to first token (TTFT), and an overall LocalScore rating for easy comparison.

  • Hardware Comparison Database - Maintains an extensive database of benchmark results across different GPUs and accelerators, from consumer cards like RTX 4090 to enterprise hardware like NVIDIA H100 and A100.

  • Model-Specific Testing - Benchmarks performance across popular AI models including Llama 3.2, Meta Llama 3.1, and Qwen2.5 in various quantization formats.

  • CLI Tool Integration - Offers a command-line interface built on llamafile for running benchmarks on your own hardware and contributing results to the community database.

  • Cross-Platform Support - Works across different operating systems and hardware platforms, supporting both GPU and CPU-based inference testing.
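The three headline metrics above are simple ratios over wall-clock timings. The sketch below shows how they fall out of raw benchmark numbers; the `BenchmarkRun` field names are illustrative, not LocalScore's actual result schema.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkRun:
    """Raw timings from one local-inference pass (illustrative fields)."""
    prompt_tokens: int         # tokens in the input prompt
    prompt_seconds: float      # wall time spent processing the prompt
    generated_tokens: int      # tokens produced by the model
    generation_seconds: float  # wall time spent generating tokens
    ttft_seconds: float        # time from request to first generated token

def prompt_speed(run: BenchmarkRun) -> float:
    """Prompt processing speed in tokens/s."""
    return run.prompt_tokens / run.prompt_seconds

def generation_speed(run: BenchmarkRun) -> float:
    """Generation speed in tokens/s."""
    return run.generated_tokens / run.generation_seconds

run = BenchmarkRun(prompt_tokens=1024, prompt_seconds=2.0,
                   generated_tokens=256, generation_seconds=8.0,
                   ttft_seconds=0.35)
print(prompt_speed(run))      # → 512.0 tokens/s
print(generation_speed(run))  # → 32.0 tokens/s
print(run.ttft_seconds)       # → 0.35 s (TTFT is reported directly)
```

Higher tokens/s is better for both speed metrics, while a lower TTFT means a more responsive first reply.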

To get started, visit the LocalScore website to browse existing benchmark results and compare hardware performance. Download the CLI tool from the GitHub repository to run benchmarks on your own system. Results can be submitted to the community database to help others make informed decisions about hardware for local AI workloads. The tool is particularly useful for developers and enthusiasts looking to optimize their local AI inference setup or evaluate hardware purchases for AI applications.
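The overall LocalScore rating folds the individual metrics into a single number for comparison. The official formula is not documented here, so the sketch below uses a geometric mean purely to illustrate how two throughput metrics and one latency metric can be combined (TTFT is inverted so that lower latency raises the score); treat it as a hypothetical stand-in, not LocalScore's actual computation.

```python
from math import prod

def composite_score(pp_tps: float, tg_tps: float, ttft_s: float) -> float:
    """Illustrative composite rating: geometric mean of prompt speed,
    generation speed, and inverse TTFT. NOT the official LocalScore formula."""
    terms = [pp_tps, tg_tps, 1.0 / ttft_s]
    return prod(terms) ** (1.0 / len(terms))

# With all three terms equal to 8, the geometric mean is 8.
print(round(composite_score(8.0, 8.0, 0.125), 6))  # → 8.0

# A faster machine scores higher; a slower TTFT drags the score down.
print(composite_score(512.0, 32.0, 0.35) > composite_score(512.0, 32.0, 0.70))
```

A geometric mean is a common choice for composite benchmarks because it rewards balanced performance and is not dominated by one very large term.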



Pricing


Open Source

Completely free and open source benchmarking tool

  • Access to benchmark results database
  • Hardware comparison tools
  • CLI tool for running benchmarks
  • Community result submissions
  • Open source access
View official pricing

Capabilities

Key Features

  • Open benchmark system for local AI performance
  • Prompt processing speed measurement (tokens/s)
  • Generation speed measurement (tokens/s)
  • Time to first token (TTFT) tracking
  • Overall LocalScore rating
  • Hardware comparison database
  • Model-specific benchmarking
  • CLI tool for running benchmarks
  • Community-contributed results
  • Support for multiple GPU and accelerator types

Integrations

llamafile
Mozilla Builders


Developer

Mozilla Builders

LocalScore is developed as part of the Mozilla Builders program, which supports open-source projects that advance the health of the internet. The project provides open benchmarking tools for local AI inference, helping users understand and compare hardware performance for running AI models locally. The CLI component is built in collaboration with the llamafile project.

Read more about Mozilla Builders
Website · GitHub

Similar Tools


Arcee AI

US-based open intelligence lab building open-weight foundation models that run anywhere: on edge, on-prem, or in the cloud.


PaddlePaddle

An open-source deep learning platform developed by Baidu for industrial-grade AI development and deployment.


Open WebUI

Self-hosted AI interface that connects to any model, extends with Python, and lets you run AI on your own terms.


Related Topics

Local Inference

Tools and platforms for running AI inference locally without cloud dependence.

39 tools

Performance Metrics

Specialized tools for measuring, evaluating, and optimizing AI model performance across accuracy, speed, resource utilization, and other critical parameters.

26 tools

AI Infrastructure

Infrastructure designed for deploying and running AI models.

116 tools
With AI, Everyone is a Dev. EveryDev.ai © 2026