Every recommendation zued generates is scored automatically using a consistent formula. The goal is to surface the changes most worth doing first — not just the most impactful ones, but the ones with the best impact-to-effort ratio.
## How it works
The priority score combines three factors:
### Impact
How much the fix is expected to improve AI visibility for this URL:
| Level | What to expect |
|---|---|
| High | Significant visibility improvement — likely to change whether AI engines cite you |
| Medium | Measurable improvement — closes gaps that currently weaken your position |
| Low | Incremental improvement — worth doing after higher-impact changes |
### Effort
How long the fix realistically takes:
| Level | Timeframe |
|---|---|
| Low | Under 2 hours — one person, no dependencies |
| Medium | 2 hours to 2 days — may require a developer or designer |
| High | Over 2 days — team coordination required |
### Engine scope
Issues that affect multiple AI engines rank higher than engine-specific ones. A robots.txt problem that blocks every engine is more urgent than a formatting issue that affects only one.
## Examples
| Issue | Impact | Effort | Engine scope | Priority |
|---|---|---|---|---|
| robots.txt blocking GPTBot | High | Low | All engines | 100 |
| Missing statistic AI engines cite | High | Low | All engines | 100 |
| Site-wide JavaScript → SSR refactor | High | High | All engines | 100 (Strategic) |
| Add FAQ schema to FAQ page | Medium | Medium | Most engines | Medium |
A Priority 100 score doesn't always mean quick to fix; it means the issue is too important to defer. A site-wide JS refactor scores 100 but belongs in strategic planning, not this week's to-do list.
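The exact weights behind the score aren't published, but the examples above pin down its shape: high-impact issues affecting all engines hit 100 regardless of effort, and everything else lands somewhere below. Here is a minimal sketch of that logic, with illustrative (assumed) weights — the function name and values are hypothetical, not zued's actual formula:

```python
# Illustrative sketch of a priority formula. All weights are assumptions
# chosen to reproduce the behavior described in the examples table above.

IMPACT = {"low": 1, "medium": 2, "high": 3}
EFFORT_BONUS = {"high": 0, "medium": 1, "low": 2}  # cheaper fixes rank higher
SCOPE = {"one engine": 1, "most engines": 2, "all engines": 3}

def priority(impact: str, effort: str, scope: str) -> int:
    """Blend impact, effort, and engine scope into a 0-100 score."""
    # High-impact issues affecting all engines pin to 100 regardless of
    # effort: they are too important to defer, even when the fix is large.
    if impact == "high" and scope == "all engines":
        return 100
    # Otherwise, weight impact most heavily, then scope, then effort.
    raw = IMPACT[impact] * 2 + SCOPE[scope] + EFFORT_BONUS[effort]
    return round(100 * raw / 10)  # 10 = max raw among non-pinned combos
```

Under this sketch, the FAQ-schema example (Medium impact, Medium effort, most engines) scores 70 — a mid-band priority — while both the quick robots.txt fix and the long SSR refactor pin at 100, matching the table.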