Why do audit prompts look nothing like real user queries?
Audit prompts are designed to be repeatable and neutral, not to replicate any individual user’s phrasing. Real users have chat history, memory, and personalisation that produce unique, non-reproducible responses. Even if you had the exact text of a real user’s prompt, running it in a neutral session wouldn’t produce the same answer. zued’s prompts run in consistent sessions so the same probe produces a comparable result every week — that consistency is what makes trend tracking reliable.
Why are there only 5–6 ICPs per project?
ICPs are extracted from your page content — not from your CRM or external data. zued clusters persona signals from all your URLs into 5–6 profiles that represent how your existing pages position themselves. If your content addresses a broader range of audiences, you can edit the generated ICPs manually to better reflect them.
Why does my score change week to week even when I haven't updated anything?
AI responses are non-deterministic — the same prompt can produce different answers across runs. This is a fundamental property of how AI engines work, not a measurement issue. zued accounts for this by running the same prompts weekly and tracking directional trends rather than treating any single data point as exact. Look at movement over multiple snapshots.
Does blocking one AI crawler affect the others?
No — each engine uses its own crawler with its own user agent. Blocking GPTBot in robots.txt has no effect on PerplexityBot or Bingbot. But blocking any single one means zero visibility for that engine's users. zued's bot accessibility check shows exactly which crawlers are blocked, per URL.
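As a sketch, a robots.txt that blocks only OpenAI's crawler while leaving all others untouched could look like this (GPTBot, PerplexityBot, and Bingbot are the publicly documented user-agent strings; adapt the rules to your own policy):

```txt
# Each crawler obeys only the group matching its own User-agent,
# so this blocks GPTBot without affecting PerplexityBot or Bingbot.
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```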
What happens to my data if I cancel?
All URL data, analysis results, and snapshot history are preserved — nothing is deleted. Weekly snapshots stop, but all existing URLs and their history remain visible.
Why does the Alignment Score differ between engines for the same URL?
Each AI engine retrieves and interprets content differently. The same page can be well-represented by Gemini and poorly represented by ChatGPT for the same query. zued scores each engine separately and averages them, so you can identify which engines are the source of misalignment.
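A hypothetical sketch of that per-engine scoring: each engine gets its own score, the overall score is their mean, and engines below the mean are flagged as the likely source of misalignment. The engine names, score values, and function are illustrative assumptions, not zued's actual data model.

```python
# Illustrative only: score each engine separately, average for the
# overall Alignment Score, and surface the engines dragging it down.

def overall_alignment(engine_scores):
    """Return (mean score rounded to 1 decimal, engines below the mean)."""
    mean = sum(engine_scores.values()) / len(engine_scores)
    laggards = [engine for engine, score in engine_scores.items()
                if score < mean]
    return round(mean, 1), laggards

scores = {"chatgpt": 48, "gemini": 82, "perplexity": 76}
print(overall_alignment(scores))
```

Here the averaged score alone would hide that one engine represents the page poorly; the per-engine breakdown is what makes the misalignment actionable.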
How is zued different from prompt tracking tools?
Prompt tracking tools check if your brand appears in AI responses — they answer “are you mentioned?” zued goes deeper: it compares what AI engines say against what your pages actually contain, identifies specific content gaps, and tells you what to change so your content aligns with the AI consensus and adds information gain. It’s the difference between monitoring whether you show up and knowing what to fix so you show up accurately.
Why does zued audit weekly instead of daily?
The AI consensus on a topic doesn’t shift day-to-day. What changes meaningfully between weeks is whether your content supports that consensus — and whether your fixes moved the needle. Weekly cadence gives you time to implement recommendations and measure their impact without noise from daily fluctuations.