Documentation Index
Fetch the complete documentation index at: https://docs.zued.ai/llms.txt
Use this file to discover all available pages before exploring further.
Prompts are the audit probes zued sends to AI engines on your behalf. They are generated from your crawled page content and confirmed ICPs, then reviewed by you before dispatch. Their goal: to produce AI responses comparable to what your customers would see when asking about topics you cover — so you can measure how well your content matches what users are actually hearing.
How prompts are generated
Once your ICPs are confirmed, zued generates prompts per URL, combining those ICPs with the content of each page. Every prompt is ICP-driven — it reflects the situation, needs, and language style of the personas linked to that URL.
Since audit prompts run in neutral sessions — without chat history, memory, or personalisation — zued enriches them with ICP context to compensate. This makes AI engine responses comparable to what a real user with that background would receive.
Prompts are distributed across three context levels:
| Level | What it looks like |
|---|---|
| Minimal | A short question with one situational hint |
| Moderate | Several context details drawn from the ICP |
| Rich | Full ICP context with constraints, situation, and needs |
The mix covers the full range — from users just starting a conversation to those with significant context built up.
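To make the three levels concrete, here is a minimal sketch of how a prompt might be enriched with ICP context at each level. The ICP fields mirror the table above; the assembly logic and field names are assumptions for illustration, not zued's actual implementation.

```python
# Hypothetical example ICP — fields follow the "situation, constraints,
# needs" vocabulary used in this document.
icp = {
    "situation": "renovating a small apartment kitchen",
    "constraints": "a budget under 300 EUR and limited counter space",
    "needs": "an appliance that replaces both oven and fryer",
}

def build_prompt(question: str, icp: dict, level: str) -> str:
    """Assemble an audit prompt at one of the three context levels."""
    if level == "minimal":
        # A short question with one situational hint
        return f"I'm {icp['situation']}. {question}"
    if level == "moderate":
        # Several context details drawn from the ICP
        return f"I'm {icp['situation']} with {icp['constraints']}. {question}"
    if level == "rich":
        # Full ICP context: situation, constraints, and needs
        return (f"I'm {icp['situation']}. I have {icp['constraints']}. "
                f"I need {icp['needs']}. {question}")
    raise ValueError(f"unknown context level: {level}")

print(build_prompt("What should I look for in an air fryer?", icp, "rich"))
```

The same question grows progressively richer context, which is what makes the neutral-session responses comparable to what a user with that background would see.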
Intent goals
Each prompt is tagged with an intent goal that reflects the type of question a real user would ask:
| Goal | What it tests |
|---|---|
| Exploratory | Open research questions — “What should I consider when choosing X?” |
| Evaluative | Comparison and decision questions — “Which X is better for Y?” |
| Specific | Concrete questions seeking a direct answer — “What are the best X under Z budget?” |
This distribution ensures your audit covers the full range of how users interact with AI engines about your topics.
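As an illustration, a batch of prompts tagged with intent goals might look like the records below. The field names are hypothetical; the example only shows how a batch can be checked for coverage of all three goals.

```python
from collections import Counter

GOALS = {"exploratory", "evaluative", "specific"}

# Hypothetical prompt records — text and goal per the table above.
prompts = [
    {"text": "What should I consider when choosing an air fryer?",
     "goal": "exploratory"},
    {"text": "Which air fryer is better for a family of four?",
     "goal": "evaluative"},
    {"text": "What are the best air fryers under 150 EUR?",
     "goal": "specific"},
]

counts = Counter(p["goal"] for p in prompts)
# Every intent goal is represented, so the batch spans the full range
# of how users interact with AI engines about this topic.
assert set(counts) == GOALS
```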
Topic clusters
When your project contains multiple URLs covering the same topic, zued groups them into topic clusters. Prompts are generated per cluster using context from all related URLs, then each prompt is assigned to the single most relevant URL.
Topic names are normalised across languages and extraction runs. If you add URLs over time, new topics are matched against existing clusters — so “air fryers” and “Heissluftfritteusen” end up in the same cluster rather than creating duplicates.
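The deduplication described above can be sketched as follows. In practice this step would use translation or multilingual embeddings; here a stub lookup table stands in for the normalisation, and all names and URLs are hypothetical.

```python
# Stand-in for a real cross-language normalisation step (assumption).
CANONICAL = {
    "air fryers": "air fryers",
    "heissluftfritteusen": "air fryers",
}

clusters: dict[str, list[str]] = {}

def assign(url: str, raw_topic: str) -> str:
    """Match a URL's topic against existing clusters, creating one if needed."""
    topic = CANONICAL.get(raw_topic.strip().lower(), raw_topic.strip().lower())
    clusters.setdefault(topic, []).append(url)
    return topic

assign("https://example.com/en/air-fryers", "Air Fryers")
assign("https://example.com/de/heissluftfritteusen", "Heissluftfritteusen")

# Both URLs land in one cluster instead of creating duplicates.
assert len(clusters) == 1
```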
Prompt-to-page fit
Every prompt is bound to one URL. When zued matches a prompt to your content, it picks the most relevant section of that page and rates its confidence in the match. This confidence appears as a label on each prompt and on the matched content card.
| Label | What it means | What to do |
|---|---|---|
| Strong match | The page clearly covers what the prompt is asking about. | Nothing. The audit is grounded. |
| Medium match | The topic is covered, but it isn’t the page’s main focus. | Optional: add a section that addresses the prompt directly, or accept that this URL covers the topic in passing. |
| Loose match | No section of this URL closely matches the prompt. | Move the prompt to a URL that fits better, or add the missing topic to this page. |
A loose match doesn’t mean the prompt is wrong. It usually means one of two things: the prompt would be better placed on a different URL in your project, or the page is missing a section about the topic. Both are actionable signals — they tell you where your content has a gap that AI engines will also see.
When you see many loose matches on one URL, treat it as a hint that the page is too narrow for the topics it has been assigned. Move some prompts elsewhere or expand the page.
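One plausible way to derive the three labels is to threshold a prompt-to-section similarity score, as sketched below. The thresholds and the scoring approach are invented for illustration; zued's actual matching logic is not described in this document.

```python
def match_label(best_section_score: float) -> str:
    """Map the best section's similarity score to a match label.

    Thresholds are assumptions chosen for illustration only.
    """
    if best_section_score >= 0.75:
        return "strong"   # the page clearly covers the prompt's topic
    if best_section_score >= 0.45:
        return "medium"   # covered, but not the page's main focus
    return "loose"        # no section closely matches the prompt
```

Under this reading, many loose labels on one URL mean no section of that page ever scores well against its assigned prompts, which is exactly the "page too narrow for its topics" signal described above.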
Prompt count per URL
Prompt count scales with page length and topic complexity. Longer pages with more distinct subtopics receive more prompts to ensure adequate coverage. Shorter, single-topic pages receive fewer.
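A toy formula can make the scaling concrete. The base count, per-subtopic increment, and cap below are all assumptions for illustration, not zued's real parameters.

```python
def prompt_count(word_count: int, subtopics: int,
                 base: int = 3, cap: int = 15) -> int:
    """Toy scaling rule: more words and more distinct subtopics
    yield more prompts, up to a cap. Coefficients are invented."""
    return min(cap, base + subtopics + word_count // 500)

prompt_count(400, 1)    # short, single-topic page -> few prompts
prompt_count(3000, 6)   # long, multi-topic page -> hits the cap
```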
Brand exclusion
zued intentionally keeps your brand name out of generated prompts. The audit tests whether AI engines surface your content for a topic — not whether they already know your brand. This produces a neutral, repeatable baseline. Brand-specific prompts are only included when the page is explicitly about your brand (e.g. a comparison or “why us” page).
Reviewing and approving prompts
After generation, all prompts enter a review stage before dispatch. The review panel is organised by topic cluster, then by URL, so you can see at a glance which topics and pages are covered. You can:
- Edit prompt text, intent goal, or topic cluster
- Remove prompts that don’t fit your audit goals
- Add custom prompts to test specific queries you care about
Once you approve, zued matches each prompt to your content and begins the AI dispatch. This review step ensures your audit covers exactly what matters to you.
Custom prompts are dispatched and scored like generated ones. Keep generated prompts as the foundation — they’re systematically tied to your ICPs and content structure.
Why prompts include ICP context
Audit prompts run in neutral sessions — no chat history, no logged-in state, no personalisation. Without context, AI engines give generic responses that don’t match what your actual customers see.
ICP context compensates for this. By injecting situation details, constraints, and needs into the prompt, zued produces responses comparable to what a real user with that background would receive. This is what makes the coverage measurement meaningful: you’re comparing your content against the same answers your customers get.
Prompts and weekly snapshots
Prompts are generated once and then reused in every subsequent snapshot. They are never regenerated automatically. This is intentional: running the same prompts week after week is what makes trend tracking meaningful — you're measuring the same thing each time, against fresh content and fresh AI responses.
Regenerating prompts resets your week-over-week comparison baseline. Only do this if your content or target audience has fundamentally changed.