Documentation Index

Fetch the complete documentation index at: https://docs.zued.ai/llms.txt

Use this file to discover all available pages before exploring further.

zued dispatches audit prompts to AI engines through their actual web interfaces — not through APIs. This means the responses reflect what real users see, including web search results, cited sources, and current information.

Supported engines

Engine              Demo    Pilot   Growth  Enterprise
Gemini
ChatGPT
Google AI Mode
Microsoft Copilot
Perplexity
Grok

What zued collects

From each engine response, zued extracts:
  • Response text — the full answer the engine produced
  • Cited sources — URLs the engine referenced
  • Brand mentions — your brand name and competitor brands, normalized for consistency (e.g. “VW” → “Volkswagen”)
This data feeds directly into your Alignment Score, Share of Voice, and Share of Search.
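The mention-normalization step described above can be sketched like this. The alias table, function name, and counting logic are illustrative assumptions, not zued's actual implementation:

```python
# Sketch: count brand mentions in an engine response, normalized to
# canonical names (e.g. "VW" -> "Volkswagen"). Alias table is hypothetical.
import re

BRAND_ALIASES = {
    "vw": "Volkswagen",
    "volkswagen": "Volkswagen",
    "chatgpt": "ChatGPT",
}

def extract_mentions(response_text: str, aliases: dict[str, str]) -> dict[str, int]:
    """Count mentions per canonical brand, folding aliases together."""
    counts: dict[str, int] = {}
    for alias, canonical in aliases.items():
        # Word-boundary match so "vw" doesn't fire inside unrelated words.
        hits = re.findall(rf"\b{re.escape(alias)}\b", response_text, flags=re.IGNORECASE)
        if hits:
            counts[canonical] = counts.get(canonical, 0) + len(hits)
    return counts

mentions = extract_mentions("Users often compare VW and Volkswagen models.", BRAND_ALIASES)
# "VW" and "Volkswagen" both roll up to the single canonical brand.
```

Folding aliases into one canonical key is what makes downstream metrics like Share of Voice comparable across engines that phrase brand names differently.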
Running more engines gives a more complete picture. A content gap invisible to one engine may be clearly flagged by another. Your page might align well with Gemini’s consensus but drift from what ChatGPT tells users about the same topic.

Should I optimize for a specific engine?

No. The architectural signals that make content citable by AI engines compound across all of them. Structure, factual depth, clarity, verifiable claims, topical authority — every engine rewards these. The differences between engines are matters of degree, not kind. One engine might lean slightly more on cited sources, another might prefer more structured formats, but these are variations on the same underlying preferences. Optimizing for a single engine means targeting a small fraction of what actually matters. Cross-engine improvements — better content structure, stronger claims, clearer entity definitions — deliver significantly more impact because they compound across every engine simultaneously.
Which specific signals matter most is vertical-specific, not engine-specific. Kevin Indig's March 2026 cross-vertical analysis of 98,217 ChatGPT citations found that aggregate signals like word count, list density, and named entity counts are flat or negative predictors, while the signals that do work differ significantly between Finance, Healthcare, Crypto, HR Tech, and other verticals. The practical order: cover the architectural fundamentals first (where zued focuses), then tune to your vertical.
Running prompts across multiple engines also produces better recommendations. AI engines frequently disagree — what one engine emphasizes, another may skip entirely. When multiple engines independently converge on the same gap or the same missing information, that's a strong signal that something genuinely matters. zued's recommendations are built on this consensus: rather than producing six conflicting playbooks, they prioritize changes validated across engines, so you fix what actually moves the needle.

The Engine Performance grid still shows per-engine scores. Use it to diagnose where issues surface — but the fix is almost always cross-engine. If your Alignment Score is low on ChatGPT but high on Gemini for the same prompt, the root cause is rarely "ChatGPT needs different content." It's more likely a gap in how clearly your content communicates a specific point that one engine happens to weigh more heavily.
The only truly engine-specific issues are technical: a robots.txt rule blocking one crawler, or rendering differences that prevent an engine from seeing your content. These are surfaced separately in the Technical Analysis and require targeted fixes — but they’re infrastructure problems, not content problems.
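A minimal check for the robots.txt case might look like the following, using Python's standard urllib.robotparser. The crawler user-agent strings are common public examples, not necessarily the list zued checks:

```python
# Sketch: find which AI crawlers a robots.txt file blocks for a given URL.
# Crawler names are common examples (assumption), not an exhaustive list.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "Google-Extended", "PerplexityBot"]

def blocked_crawlers(robots_txt: str, url: str) -> list[str]:
    """Return the AI crawlers that robots.txt disallows for this URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [ua for ua in AI_CRAWLERS if not parser.can_fetch(ua, url)]

robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
# Only GPTBot matches the Disallow group; the rest fall through to the
# wildcard rule and remain allowed.
print(blocked_crawlers(robots, "https://example.com/page"))
```

A rule like this is invisible in the content itself, which is why such issues surface in the Technical Analysis rather than in content recommendations.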