# llm-citeops

> A CLI tool that audits web content for AEO and GEO readiness, scoring pages on answer engine optimization and generative engine optimization using deterministic heuristics.

`llm-citeops` is a CLI tool that audits web content to determine whether pages are ready to be crawled, summarized, or cited by AI answer engines and generative search systems. It runs a deterministic AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) rubric over parsed HTML and text, computes weighted scores, and produces actionable reports with evidence and suggested fixes. The tool accepts URLs, local files, folders, or sitemaps as input and outputs HTML, JSON, or CSV reports.

- **AEO Checks** — *Evaluates FAQ/HowTo schema, direct answers in the first paragraph, Q&A density, readability, named entities, and author bylines to assess answer engine readiness.*
- **GEO Checks** — *Assesses topical depth, trust signals, content freshness, external citations, comparison content, and citation likelihood for generative search visibility.*
- **Multiple Input Sources** — *Accepts a live URL (`--url`), a local Markdown or HTML file (`--file`), a folder of content files (`--dir`), or a sitemap/sitemap index (`--sitemap`).*
- **Flexible Output Formats** — *Generates HTML reports for human review, JSON for automation pipelines, and CSV for batch runs across entire sites.*
- **CI Integration** — *Use `--ci` with `--threshold` to fail builds when composite scores drop below a defined level, with structured exit codes for success, CI failure, crawl errors, and invalid input.*
- **Configurable Weights** — *Customize AEO/GEO weight contributions and per-check weights via a `.citeops.json` config file, supporting fine-tuned prioritization for your content strategy.*
- **Score Bands** — *Results are classified as `poor`, `needs-improvement`, `good`, or `excellent`, with AEO and GEO each contributing 50% to the composite score by default.*
- **No LLM Required** — *All checks are deterministic heuristics over parsed content — no external AI API calls are needed to run an audit.*
- **Quick Start** — *Run immediately without installing globally using `npx llm-citeops overview`, or install with `npm install -g llm-citeops` and audit any page with `llm-citeops audit --url "https://example.com/page" --output html`.*

## Features

- AEO rubric with 6 checks (FAQ/HowTo schema, direct answer, Q&A density, readability, named entities, author byline)
- GEO rubric with 6 checks (topical depth, trust signals, content freshness, external citations, comparison content, citation likelihood)
- Audit by URL, local file, folder, or sitemap
- HTML, JSON, and CSV output formats
- CI mode with configurable score threshold and exit codes
- Configurable AEO/GEO weights and per-check weights via `.citeops.json`
- Composite score with poor/needs-improvement/good/excellent bands
- Deterministic heuristics — no LLM API calls required
- Semantic-release automated versioning from conventional commits
- npx support for zero-install usage

## Integrations

npm, Node.js, GitHub Actions (CI), sitemap.xml

## Platforms

Windows, macOS, Linux, Web, CLI

## Pricing

Open Source

## Links

- Website: https://llm-citeops.vercel.app/
- Documentation: https://github.com/rakeshcheekatimala/llm-citeops#readme
- Repository: https://github.com/rakeshcheekatimala/llm-citeops
- EveryDev.ai: https://www.everydev.ai/tools/llm-citeops
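This README does not show the `.citeops.json` schema, so the field names below (`weights`, `checks`, and the per-check keys) are guesses meant only to illustrate the idea of overriding AEO/GEO and per-check weights; consult the repository documentation for the real shape.

```json
{
  "weights": { "aeo": 0.6, "geo": 0.4 },
  "checks": {
    "directAnswer": 1.5,
    "citationLikelihood": 2.0
  }
}
```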
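The `--ci` and `--threshold` flags described above could be wired into a GitHub Actions workflow along these lines. The audit flags and `npx` usage come from this README; the workflow layout, the Node version, and the threshold value of 70 are illustrative assumptions.

```yaml
# Hypothetical workflow: fail the build if the composite score drops below 70.
name: content-audit
on: [push]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npx llm-citeops audit --url "https://example.com/page" --ci --threshold 70
```

Because the tool exits with structured codes for success, CI failure, crawl errors, and invalid input, the `run` step fails the job automatically on any non-zero exit.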
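To make the scoring model above concrete, here is a minimal TypeScript sketch of a 50/50 composite score with band classification. The equal default weighting and the four band names come from this README; the cutoff values (50/70/85) are illustrative assumptions, not the tool's actual thresholds.

```typescript
// Sketch of composite scoring: AEO and GEO each contribute 50% by default.
// Band cutoffs below are assumed for illustration, not taken from llm-citeops.
type Band = "poor" | "needs-improvement" | "good" | "excellent";

function compositeScore(aeo: number, geo: number, aeoWeight = 0.5): number {
  // aeoWeight is configurable; GEO receives the remaining share.
  return aeo * aeoWeight + geo * (1 - aeoWeight);
}

function band(score: number): Band {
  if (score < 50) return "poor";
  if (score < 70) return "needs-improvement";
  if (score < 85) return "good";
  return "excellent";
}
```

With the default weights, an AEO score of 80 and a GEO score of 60 combine to a composite of 70, which these assumed cutoffs would classify as `good`.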