zued runs a multi-step audit pipeline for every URL in your project. Each step exists for a specific reason — together, they produce the alignment and technical scores, content gap analysis, and prioritized recommendations you see in your dashboard.

The pipeline

1. Crawl your pages

zued fetches each URL using a real browser — the same way AI crawlers access your site. This captures the full rendered page, including JavaScript-dependent content, and measures Technical Score signals: bot accessibility, Core Web Vitals, structured data, and meta tags.
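
The technical signals captured at this step can be sketched in miniature. Below is a hedged, illustrative example, assuming only that structured data ships as JSON-LD and that meta tags appear in the rendered HTML; the function name and return shape are ours, not zued's API:

```python
import json
import re

def technical_signals(html: str) -> dict:
    """Scan rendered HTML for two Technical Score signals: JSON-LD
    structured data and a meta description tag."""
    ld_blocks = re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        html, flags=re.DOTALL | re.IGNORECASE)
    valid_jsonld = 0
    for block in ld_blocks:
        try:
            json.loads(block)
            valid_jsonld += 1
        except json.JSONDecodeError:
            pass  # malformed JSON-LD is itself a signal worth flagging
    has_description = bool(
        re.search(r'<meta[^>]*name="description"', html, re.IGNORECASE))
    return {"jsonld_count": valid_jsonld,
            "has_meta_description": has_description}

rendered = '''<head>
<meta name="description" content="Example page">
<script type="application/ld+json">{"@type": "Product", "name": "Widget"}</script>
</head>'''
print(technical_signals(rendered))
# → {'jsonld_count': 1, 'has_meta_description': True}
```

The point of the real-browser step is that this scan runs over the rendered DOM, so schema injected by JavaScript is counted rather than missed.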

2. Chunk your content

Page content is split into meaningful segments that preserve the page structure. These chunks are the unit of comparison — zued doesn’t compare the entire page at once; it matches each prompt to the most relevant chunk.
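
As a rough illustration of structure-preserving chunking, here is a minimal sketch that starts a new chunk at each heading so every chunk keeps its section context. zued's actual chunker is internal; the heading-based rule is an assumption:

```python
def chunk_by_headings(lines):
    """Group lines into chunks, starting a new chunk at each '#' heading."""
    chunks, current = [], []
    for line in lines:
        if line.startswith("#") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

page = [
    "# Pricing",
    "Plans start at $29/month.",
    "# Features",
    "Weekly audits across six AI engines.",
]
for chunk in chunk_by_headings(page):
    print(chunk.splitlines()[0])
# → # Pricing
# → # Features
```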

3. Generate ICPs and prompts

From your crawled content, zued extracts Ideal Customer Profiles — who your pages are written for. These ICPs shape the audit prompts generated for each URL. You review and approve all prompts before they’re dispatched — editing, adding, or removing any that don’t fit your audit goals.

4. Dispatch to AI engines

Prompts are sent to AI engines through their actual web interfaces — Gemini, ChatGPT, Google AI Mode, Microsoft Copilot, Perplexity, and Grok. Each response is collected in full: the answer text, cited sources, and brand mentions. This is how real users experience these engines.

5. Compare: your content vs. the AI consensus

For each prompt, zued maps the AI response to the most relevant content chunk on your page and compares them across three dimensions:
  • Angle — does your content approach the topic the same way?
  • Information depth — does it cover the topic with the same detail?
  • Structure — is it organized so AI can extract and cite it?
This comparison produces the Alignment Score and surfaces specific content gaps. For prompts that span multiple URLs, alignment runs per URL — each page gets its own score and gaps.
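
To make the scoring step concrete, here is a hedged sketch of collapsing the three dimensions into one Alignment Score. The 0–1 per-dimension values and the equal weighting are illustrative assumptions, not zued's actual model:

```python
def alignment_score(angle: float, depth: float, structure: float) -> float:
    """Average three 0-1 dimension scores into a 0-100 alignment score."""
    for value in (angle, depth, structure):
        if not 0.0 <= value <= 1.0:
            raise ValueError("dimension scores must be in [0, 1]")
    return round((angle + depth + structure) / 3 * 100, 1)

# A page that matches the consensus angle but is shallow and hard
# for AI to extract from:
print(alignment_score(angle=0.9, depth=0.4, structure=0.5))  # → 60.0
```

The useful property is that the score decomposes: a low total points you at the specific weak dimension, which is where the surfaced content gaps come from.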

6. Cross-URL analysis

When a prompt is relevant to multiple URLs in your project, zued runs cross-URL analysis: it aggregates per-URL alignment data, identifies internal linking gaps between pages, and recommends how your pages can work together as an ensemble — so AI engines are more likely to cite them.
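
One of these checks, internal linking gaps, can be sketched as a pairwise scan over the URLs relevant to a prompt. The link map below is a simplified stand-in for crawled link data; the function is illustrative, not zued's implementation:

```python
def linking_gaps(relevant_urls, links):
    """Return ordered (src, dst) pairs where src does not link to dst."""
    gaps = []
    for src in relevant_urls:
        for dst in relevant_urls:
            if src != dst and dst not in links.get(src, set()):
                gaps.append((src, dst))
    return gaps

urls = ["/pricing", "/features"]
links = {"/pricing": {"/features"}, "/features": set()}
print(linking_gaps(urls, links))
# → [('/features', '/pricing')]
```

Here /features never links back to /pricing, so the two pages can't act as an ensemble for prompts that span both.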

7. Verify factual accuracy

For pages with verifiable claims — prices, specifications, features, guarantees — zued checks whether AI engines reproduce them accurately. Each claim is tested across all engines, producing per-claim verdicts that show which facts AI gets right and which it distorts. This feeds directly into page-level recommendations so you can prioritize fixing claims that AI gets wrong.
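
A per-claim verdict can be pictured as comparing the true value on your page against what each engine answered. The engine names match the pipeline above, but the matching rule here (exact substring containment) is a deliberate simplification:

```python
def claim_verdicts(truth: str, engine_answers: dict) -> dict:
    """Per-engine verdicts: did the answer reproduce the true value?"""
    return {
        engine: ("accurate" if truth in answer else "distorted")
        for engine, answer in engine_answers.items()
    }

answers = {
    "ChatGPT": "The Pro plan costs $49/month.",
    "Perplexity": "Pricing starts around $59/month.",
}
print(claim_verdicts("$49/month", answers))
# → {'ChatGPT': 'accurate', 'Perplexity': 'distorted'}
```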

8. Identify information gain

Where your content goes beyond the AI consensus — original data, unique expertise, deeper analysis — zued identifies this as information gain. This is what makes AI engines more likely to choose your page over others.
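
At its simplest, information gain is a set difference: facts your page states that the AI consensus does not. Treating facts as short strings is a simplification of whatever representation zued actually uses:

```python
def information_gain(page_facts: set, consensus_facts: set) -> set:
    """Facts your page contributes beyond the AI consensus."""
    return page_facts - consensus_facts

ours = {"survey of 2,000 users", "median setup time: 11 min", "free tier"}
consensus = {"free tier", "integrates with Slack"}
print(sorted(information_gain(ours, consensus)))
# → ['median setup time: 11 min', 'survey of 2,000 users']
```

Everything the consensus already covers drops out; what remains is the original material that gives an engine a reason to cite you.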

9. Generate recommendations

Every gap and technical issue becomes a specific, scored recommendation. The prioritization system ranks them by impact, effort, and how many engines are affected — so you always know what to fix first.
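
The ranking step can be sketched as a score over those three factors. The formula below (impact times engine reach, divided by effort) and the field names are assumptions for illustration; zued's real weights are internal:

```python
def prioritize(recommendations):
    """Sort recommendations, highest priority first."""
    def priority(rec):
        return rec["impact"] * rec["engines_affected"] / rec["effort"]
    return sorted(recommendations, key=priority, reverse=True)

recs = [
    {"fix": "add FAQ schema", "impact": 3, "effort": 1, "engines_affected": 6},
    {"fix": "rewrite pricing page", "impact": 5, "effort": 4, "engines_affected": 3},
]
for rec in prioritize(recs):
    print(rec["fix"])
# → add FAQ schema
# → rewrite pricing page
```

Note how a cheap fix that helps all six engines outranks a bigger rewrite that helps three: that is the "what to fix first" ordering in the dashboard.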

Why this approach works

Most AEO/GEO tools track outputs — whether your brand shows up in AI responses. zued works at the input level: what your content contains, whether AI crawlers can reach it, how it compares to the AI consensus, and what specific changes would improve both alignment and technical access.

This is why zued audits weekly rather than daily. The AI consensus on a topic doesn’t shift day-to-day. What you can change — your content and technical setup — takes time to update and deploy. Weekly snapshots give you time to implement recommendations and measure their impact against a consistent baseline.

What makes the comparison meaningful

zued doesn’t use APIs to query AI engines — it uses the same web interfaces real users see. This matters because:
  • API responses and web responses can differ significantly
  • Web responses include search results, cited sources, and current information
  • The comparison reflects what your actual audience experiences
The same prompts run every week against fresh content and fresh AI responses. This is what makes trend tracking reliable — you’re measuring the same thing each time, not chasing moving targets.