How it’s calculated
zued compares each AI engine response against the most relevant content chunk on your page across three dimensions:

| Dimension | What it measures |
|---|---|
| Angle | Does your content approach the topic from the same angle as the AI’s answer? |
| Information | Is your content’s depth and accuracy on par with what AI engines say? |
| Structure | Is your content organized in a way AI systems can extract cleanly? |
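The three dimensions above can be pictured as sub-scores that roll up into one Alignment Score. This is an illustrative sketch only: zued's actual scoring method is not documented here, and the unweighted average is an assumption.

```python
from dataclasses import dataclass

@dataclass
class DimensionScores:
    angle: float        # 0-100: does the content take the same angle as the AI's answer?
    information: float  # 0-100: depth and accuracy parity with what AI engines say
    structure: float    # 0-100: how cleanly AI systems can extract the content

def alignment_score(d: DimensionScores) -> float:
    """Combine the three dimension sub-scores into one 0-100 score.

    Assumes an unweighted mean; the real weighting may differ.
    """
    return (d.angle + d.information + d.structure) / 3

print(alignment_score(DimensionScores(angle=80, information=65, structure=50)))  # 65.0
```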
Score levels
| Score | Level | What it means |
|---|---|---|
| > 70 | Strong | AI engines represent your content accurately |
| 40–70 | Partial | Some gaps — AI covers the topic but misses key points or uses your content incompletely |
| < 40 | Poor | Significant misalignment — AI answers without using your content, or misrepresents it |
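The thresholds in the table map directly to a level label. A minimal sketch of that mapping:

```python
def score_level(score: float) -> str:
    """Map a 0-100 Alignment Score to its level, per the thresholds above."""
    if score > 70:
        return "Strong"   # AI engines represent your content accurately
    if score >= 40:
        return "Partial"  # some gaps; content used incompletely
    return "Poor"         # significant misalignment

print(score_level(75))  # Strong
print(score_level(55))  # Partial
print(score_level(30))  # Poor
```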
What you see per prompt
Each prompt in your project shows a breakdown:

- Content gaps — topics the AI covers that your page doesn’t address
- Content strengths — where your content aligns well
- Format mismatch — when AI responds in a format your content doesn’t support
- Quick Win — the single highest-impact, lowest-effort change for this prompt
A low Alignment Score doesn’t always mean low visibility. It means the visibility you have may be inaccurate or incomplete. Fixing alignment improves both the quality and quantity of citations.
Per-engine breakdown
Scores are calculated individually per AI engine and then averaged. Each engine retrieves and interprets content differently, so the same page can score strongly with Gemini and poorly with ChatGPT. The more engines you run, the richer the per-engine breakdown.
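The per-engine averaging described above can be sketched as follows. The engine names and the unweighted mean are illustrative assumptions, not zued's confirmed formula.

```python
def overall_score(per_engine: dict[str, float]) -> float:
    """Average per-engine Alignment Scores into one page-level score.

    Assumes each engine contributes equally (unweighted mean).
    """
    return sum(per_engine.values()) / len(per_engine)

# Hypothetical per-engine results for one page:
scores = {"ChatGPT": 35, "Gemini": 78, "Perplexity": 55}
print(overall_score(scores))  # 56.0 — Partial overall, despite a strong Gemini score
```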