The Alignment Score measures how well the most relevant content on your page covers what AI engines actually tell users about your topics. It’s one of zued’s two primary KPIs — the core signal for whether your content matches what users are hearing from AI. A high Alignment Score alone isn’t enough: if AI crawlers can’t access your pages, the score is irrelevant. That’s why zued always pairs alignment analysis with the Technical Score.

Documentation Index
Fetch the complete documentation index at: https://docs.zued.ai/llms.txt
Use this file to discover all available pages before exploring further.
How it’s calculated
zued compares each AI engine response against the most relevant content chunk on your page across four dimensions:

| Dimension | Weight | What it measures |
|---|---|---|
| Coverage | Highest | Does your content address the same topics the AI response covers? |
| Depth | High | Where topics overlap, does your content provide equal or greater depth? |
| Currency | Lower | Is your content as current as what the AI is presenting? |
| Structure | Lower | Is your content organised in a way AI systems can extract cleanly? |
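As an illustration, combining the four dimension scores might look like the sketch below. The numeric weights are assumptions chosen to reflect the qualitative table above (Coverage highest, Depth high, Currency and Structure lower); zued does not publish exact weights.

```python
# Illustrative sketch only — the weights are assumptions, not zued's
# published values. Each dimension score is on a 0-100 scale.
WEIGHTS = {
    "coverage": 0.4,    # highest weight
    "depth": 0.3,       # high weight
    "currency": 0.15,   # lower weight
    "structure": 0.15,  # lower weight
}

def alignment_score(dimensions: dict[str, float]) -> float:
    """Combine per-dimension scores (0-100 each) into one 0-100 score."""
    return sum(WEIGHTS[name] * dimensions[name] for name in WEIGHTS)

score = alignment_score(
    {"coverage": 60, "depth": 50, "currency": 80, "structure": 70}
)
# 0.4*60 + 0.3*50 + 0.15*80 + 0.15*70 ≈ 61.5
```

Under these assumed weights, a page strong on Currency and Structure but weak on Coverage still lands in the Partial band, which matches the intent of weighting Coverage highest.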
Score levels
| Score | Level | What it means |
|---|---|---|
| > 70 | Strong | Your content covers what AI engines tell users about this topic |
| 40–70 | Partial | Some gaps — AI engines discuss topics your content doesn’t fully address |
| < 40 | Poor | Significant gaps — AI engines present information your content doesn’t cover |
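The thresholds in the table translate directly into a small classifier; this is a sketch of the banding logic, not zued's implementation (boundary handling at exactly 40 and 70 follows the "40–70 Partial" row).

```python
def score_level(score: float) -> str:
    """Map an Alignment Score (0-100) to its level, per the table above."""
    if score > 70:
        return "Strong"
    if score >= 40:
        return "Partial"
    return "Poor"
```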
The four dimensions in detail
Coverage
The most-weighted dimension. zued identifies the distinct sub-topics in the AI response (e.g. “interest rate”, “minimum deposit”, “switching process”) and checks whether each one is also addressed on your page. A page that addresses three of five sub-topics gets a Coverage score around 60. Coverage gaps are surfaced individually on each prompt so you know which sub-topic is missing, not just that “something” is.

Depth
Where your page and the AI response overlap on a sub-topic, Depth asks whether your content goes as far. AI responses often include comparisons, numeric specifics, or worked examples; if your page covers the same sub-topic only at headline level, Depth is penalised. A page can have full Coverage but weak Depth, and that’s a common pattern for short product pages whose AI answers are richer than the page itself.

Currency
Currency penalises pages that look out of date relative to what the AI is presenting. Common triggers include outdated year references, deprecated product names, prices that no longer match, and specifications that have moved on. Currency does not penalise evergreen content — only content that conflicts with newer information the AI engine is using.

Structure
Structure asks whether the content can be lifted cleanly into an AI answer. Pages that hide their key points inside long paragraphs score lower than pages with clear headings, lists, definitions, and direct answers in the first sentence under each heading. Format mismatch (e.g. AI answers in a comparison table while your page is prose) shows up here.

What influences your score
Several signals make alignment higher or lower beyond raw content quality:

- Match confidence. If the prompt loosely matches a chunk on your URL (see Prompts), the alignment score reflects the comparison against that loose chunk — which is usually weak. A loose match doesn’t always mean your content is bad; sometimes it means the prompt belongs on a different URL. The Topic Health view helps you spot that pattern.
- Per-engine variation. ChatGPT, Gemini, Perplexity, Copilot, AI Mode, and Grok retrieve and summarise content differently. The same page can score Strong against one engine and Partial against another for the same prompt. Per-engine scores are shown alongside the average so you can see where the divergence is.
- Prompt context level. Prompts run at three context levels (minimal, moderate, rich). Richer prompts surface deeper sub-topics, which raises the Depth bar. A page that scores Strong on minimal-context prompts may show gaps on rich-context ones — that’s a signal to add depth, not a measurement error.
- Fact-check accuracy. Fact-check prompts run on a parallel pipeline (see fact-checking) and don’t feed alignment directly. But when fact-check shows AI engines stating something different from your page, that often shows up as a Currency or Coverage gap as well.
Reading score changes
Alignment is recomputed on every snapshot. When a score moves week over week, the most common causes are:

- Score went down: AI engines added a sub-topic that your page doesn’t cover (Coverage), engines started citing a competitor with deeper content (Depth), the AI started using a newer figure your page hasn’t been updated with (Currency), or the prompt set was changed and now includes prompts your page doesn’t fit (loose match).
- Score went up: you added the missing sub-topic, you deepened an existing section, you fixed an outdated reference, or AI engines naturally aligned with your existing content as the topic matured.
- One engine diverges sharply: that engine has different retrieval behaviour for this query class. Check the per-engine breakdown — sometimes a single engine has gone off-source for a prompt and your alignment isn’t actually wrong.
What you see per prompt
Each prompt in your project shows a breakdown:

- Content gaps — sub-topics the AI covers that your page doesn’t address
- Content strengths — where your content aligns well
- Format mismatch — when AI responds in a format your content doesn’t support
- Quick Win — the single highest-impact, lowest-effort change for this prompt
A low Alignment Score doesn’t always mean low visibility. It means your content has gaps compared to what AI engines tell users. Closing these gaps ensures your content matches what users are actually hearing.
Per-engine breakdown
Scores are calculated individually per AI engine and then averaged. This means you can have a strong score with Gemini and a poor score with ChatGPT for the same page — each engine retrieves and interprets content differently. The more engines you run, the richer the per-engine breakdown.
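In sketch form, the overall figure is a plain average of the per-engine scores; the function and the sample scores below are illustrative assumptions.

```python
def average_alignment(per_engine: dict[str, float]) -> float:
    """Average per-engine Alignment Scores into one overall score (sketch)."""
    return sum(per_engine.values()) / len(per_engine)

# Example: one weak engine can pull the average into the Partial band
# even when another engine scores Strong.
scores = {"ChatGPT": 35, "Gemini": 78, "Perplexity": 55}
overall = average_alignment(scores)  # (35 + 78 + 55) / 3 = 56.0
```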