How citation scoring works

What the score means, how we calculate it, and what you should actually care about versus what you can safely ignore.

§01 What “cited” means

A citation is counted when an AI model’s response to a query explicitly names your brand or domain in a positive or neutral recommendation context. We don’t count:

  • Mentions in a negative comparison (“unlike X, avoid Y”)
  • Generic category mentions without a specific brand name
  • Responses that name your brand only as context for recommending a competitor
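
In code terms, the counting rule above reduces to an explicit-name check plus a context check. A minimal sketch; the enum values and function name are illustrative assumptions, not our actual classifier:

# hypothetical sketch of the counting rule, not our real classifier
from enum import Enum

class MentionContext(Enum):
    POSITIVE = "positive"
    NEUTRAL = "neutral"
    NEGATIVE = "negative"                  # "unlike X, avoid Y"
    COMPETITOR_SETUP = "competitor_setup"  # named only to recommend a rival

def counts_as_citation(brand_named: bool, context: MentionContext) -> bool:
    # a mention counts only when the brand is explicitly named in a
    # positive or neutral recommendation context; generic category
    # mentions fail the brand_named check
    return brand_named and context in (MentionContext.POSITIVE,
                                       MentionContext.NEUTRAL)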

Each query is run three times per model to account for stochasticity. A citation is counted as “consistent” if it appears in at least 2 of 3 runs. This filters out one-off hallucinations and reflects what a real user would reliably see.
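
The consistency rule is a simple majority vote per query-model pair. A minimal sketch, with hypothetical names:

# consistency filter: a citation must appear in at least 2 of 3 runs
RUNS_PER_MODEL = 3
REQUIRED_HITS = 2

def is_consistent_citation(run_results: list[bool]) -> bool:
    # run_results: one boolean per run, True if that response
    # counted as a citation under the rules above
    assert len(run_results) == RUNS_PER_MODEL
    return sum(run_results) >= REQUIRED_HITS

# cited in runs 1 and 3, missed in run 2: still consistent
is_consistent_citation([True, False, True])  # True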

§02 The score calculation

Your overall citation score is the percentage of query-model pairs in which you were cited, across all queries and models:

# citation score formula
score = (citations / total_query_model_pairs) * 100

# example: 12 queries × 4 models = 48 pairs
# if you were cited in 18 of 48 pairs:
score = (18 / 48) * 100  # 37.5
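
The same calculation as a function over per-pair results. The (query, model, cited) record shape is an assumption for illustration, not our actual schema:

# hypothetical record shape: one (query, model, cited) tuple per
# query-model pair, where cited is the consistent-citation boolean from §01
def overall_score(results: list[tuple[str, str, bool]]) -> float:
    cited = sum(c for _query, _model, c in results)
    return cited / len(results) * 100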

The score is intentionally simple — it’s a percentage. The complex part is the query set. We run your domain against buyer-intent queries in your category, not vanity queries like “what is [your brand name].” Getting cited on branded queries is easy and doesn’t tell you much.

▸ important nuance

A 40% score on hard buyer-intent queries beats an 80% score on easy branded queries. The queries matter as much as the score.

§03 Per-model breakdown

Beyond the overall score, we show you your citation rate broken down by model. This matters because the models have meaningfully different citation behaviors (see Understanding the 4 AI models).

A brand that scores 80% on Perplexity but 15% on Claude should interpret that very differently than a brand that scores 40% across all four. Perplexity cites broadly; Claude cites selectively. Being cited by Claude at all puts you in a small group.

We also show which specific queries you were and weren’t cited on. The uncited queries are your highest-leverage improvement targets — they represent buyer-intent moments where you’re invisible.
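
Both breakdowns fall out of the same per-pair records sketched in §02. Again, the names and record shape are illustrative assumptions:

from collections import defaultdict

def per_model_rates(results):
    # citation rate per model, as a percentage
    cited, total = defaultdict(int), defaultdict(int)
    for _query, model, was_cited in results:
        total[model] += 1
        cited[model] += was_cited
    return {m: cited[m] / total[m] * 100 for m in total}

def uncited_queries(results):
    # queries where no model cited the brand: the
    # highest-leverage improvement targets
    cited_anywhere = defaultdict(bool)
    for query, _model, was_cited in results:
        cited_anywhere[query] |= bool(was_cited)
    return [q for q, hit in cited_anywhere.items() if not hit]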

§04 What to optimize for

Don’t optimize for your overall score in the abstract. Optimize for specific uncited queries first.

If you’re not cited on “best [category] tool for [use case]” — that’s a content gap. The fix is usually building the page that most directly answers that query: a comparison page, a use-case landing page, or a structured explanation of your positioning in the category.

The overall score is a lagging indicator. The query-level breakdown is where you find the levers.

§05 Score benchmarks

citation score range    what it signals
80–100%                 Citation monopoly: you are the default answer in your category
50–80%                  Strong presence: consistently mentioned, competitive with leaders
25–50%                  Visible but inconsistent: cited on some queries, absent on others
10–25%                  Weak presence: showing up only on easy branded queries
0–10%                   Effectively invisible: the AI conversation is happening without you

Median score across the B2B SaaS brands we’ve checked: 31%. Most brands are visible but inconsistent. The gap between them and category leaders is almost always content clarity, not brand authority.
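
If you want to bucket scores programmatically, the table maps to a straightforward threshold check. One assumption in this sketch: boundary scores fall into the higher band.

def score_band(score: float) -> str:
    # map a 0-100 citation score to the benchmark bands above;
    # boundary scores (exactly 80, 50, 25, 10) are assumed to
    # round into the higher band
    if score >= 80:
        return "citation monopoly"
    if score >= 50:
        return "strong presence"
    if score >= 25:
        return "visible but inconsistent"
    if score >= 10:
        return "weak presence"
    return "effectively invisible"

score_band(31)  # "visible but inconsistent", the B2B SaaS median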

See how your brand actually scores.

Run a free check on your domain. 60 seconds, no signup required.

Run a free check →