Prompts are the test prompts zued sends to AI engines on your behalf. They are generated from your crawled page content and confirmed ICPs, then reviewed by you before dispatch. Their goal is to produce AI responses comparable to what your customers would see when they ask about the topics you cover.

How prompts are generated

Once your ICPs are confirmed, zued generates prompts per URL using your confirmed ICPs and the content of each page. Every prompt is ICP-driven — it reflects the situation, needs, and language style of the personas linked to that URL. Since audit prompts run in neutral sessions — without chat history, memory, or personalisation — zued enriches them with ICP context to compensate. This makes AI engine responses comparable to what a real user with that background would receive. Prompts are distributed across three context levels:
  • Minimal: a short question with one situational hint
  • Moderate: several context details drawn from the ICP
  • Rich / hybrid: full ICP context plus a request for recommendations with links
The mix covers the full range — from users just starting a conversation to those with significant context built up. Hybrid prompts can trigger citations from both informational content and product/service pages in a single response.
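The enrichment idea can be sketched in a few lines. This is a hypothetical illustration, not zued's actual implementation: the ICP fields (`role`, `situation`, `need`) and the level names are assumptions made for the example.

```python
# Hypothetical sketch: enriching a base question with ICP context
# according to its context level. Field names and phrasing are
# illustrative assumptions, not zued's real generation logic.

def build_prompt(question: str, icp: dict, level: str) -> str:
    """Return a prompt enriched with ICP context for the given level."""
    if level == "minimal":
        # Minimal: one situational hint only.
        return f"{icp['situation']} {question}"
    if level == "moderate":
        # Moderate: several context details drawn from the ICP.
        return f"I'm a {icp['role']}. {icp['situation']} {icp['need']} {question}"
    # Rich / hybrid: full ICP context plus a request for links.
    return (
        f"I'm a {icp['role']}. {icp['situation']} {icp['need']} "
        f"{question} Please recommend specific resources with links."
    )

icp = {
    "role": "marketing lead at a mid-size SaaS company",
    "situation": "We are evaluating AI visibility tools.",
    "need": "I need weekly reporting my team can act on.",
}
prompt = build_prompt("How do AI engines decide which pages to cite?", icp, "rich")
```

The richer the level, the more of the persona's situation travels with the question, which is what compensates for the missing chat history in a neutral session.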

Multi-target prompts

When your project contains multiple URLs that cover the same topic cluster, zued generates multi-target prompts — queries designed to cite several of your pages simultaneously rather than just one. For example, a guide page and a related product page in the same topic cluster can share a single prompt that naturally asks for both explanation and a recommendation. This increases the likelihood that multiple pages appear together in an AI response, which is stronger than any single citation. Single-URL topics that don’t cluster with other pages receive individually tailored prompts instead.
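The clustering decision can be sketched as follows. The URLs, cluster labels, and prompt templates are illustrative assumptions; this only shows the branching logic described above, not zued's actual generator.

```python
# Hypothetical sketch of the multi-target decision: URLs that share a
# topic cluster get one combined prompt; lone URLs get individual
# prompts. All data below is made up for illustration.
from collections import defaultdict

urls = [
    ("/guides/ai-visibility", "ai-visibility"),
    ("/products/audit-tool", "ai-visibility"),
    ("/blog/changelog", "changelog"),
]

clusters = defaultdict(list)
for url, topic in urls:
    clusters[topic].append(url)

prompts = []
for topic, members in clusters.items():
    if len(members) > 1:
        # Multi-target: ask for both explanation and recommendation, so
        # informational and product pages can be cited together.
        prompts.append((f"Explain {topic} and recommend a tool for it.", members))
    else:
        prompts.append((f"What should I know about {topic}?", members))
```

Here the guide and product page share one prompt targeting both, while the lone changelog URL gets its own tailored prompt.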

Prompt count per URL

Prompt count scales with page length and topic complexity. Longer pages with more distinct subtopics receive more prompts to ensure adequate coverage. Shorter, single-topic pages receive fewer.
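A scaling rule of this shape could look like the following. The thresholds and counts are invented for illustration; zued does not publish its exact formula.

```python
# Hypothetical heuristic: more words and more distinct subtopics yield
# more prompts. All thresholds below are illustrative assumptions.
def prompt_count(word_count: int, subtopics: int) -> int:
    base = 2 if word_count < 800 else 3 if word_count < 2000 else 5
    return base + max(0, subtopics - 1)

prompt_count(500, 1)    # short, single-topic page -> 2
prompt_count(2500, 4)   # long page with four subtopics -> 8
```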

Brand exclusion

zued intentionally keeps your brand name out of generated prompts. The audit tests whether AI engines retrieve your content for a topic — not whether they already know your brand. This produces a neutral, repeatable baseline. Brand-specific prompts are only included when the page is explicitly about your brand (e.g. a comparison or “why us” page).
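The exclusion rule reduces to a single check, sketched here with made-up prompts and the brand name as an example input; this is an illustration of the stated policy, not zued's code.

```python
# Hypothetical sketch of the brand-exclusion rule: drop prompts that
# mention the brand, unless the source page is itself brand-focused
# (e.g. a comparison or "why us" page).
def keep_prompt(prompt: str, brand: str, page_is_brand_focused: bool) -> bool:
    mentions_brand = brand.lower() in prompt.lower()
    return page_is_brand_focused or not mentions_brand

keep_prompt("Best AI visibility tools?", "zued", False)  # True: neutral probe
keep_prompt("Is zued worth it?", "zued", False)          # False: brand leaks in
keep_prompt("zued vs alternatives", "zued", True)        # True: brand page
```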

Reviewing and approving prompts

After generation, all prompts enter a review stage before dispatch. You can:
  • Edit prompt text, intent goal, or topic cluster
  • Remove prompts that don’t fit your audit goals
  • Add custom prompts to test specific queries you care about
Once you approve, zued matches each prompt to your content and begins the AI dispatch. This review step ensures your audit covers exactly what matters to you.
Custom prompts are dispatched and scored like generated ones. Keep generated prompts as the foundation — they’re systematically tied to your ICPs and content structure.

Why prompts don’t look like real user queries

Audit prompts are designed to be repeatable and neutral, not to replicate how any individual user phrases a question. Real users have chat history, memory, and personalisation that produce unique, non-reproducible responses. zued’s prompts run in neutral sessions so the same probe produces a comparable result every week. Real users ask much longer, more specific questions than traditional search — the average AI prompt is roughly 5x longer than a classic search keyword. zued’s prompts reflect this: they carry enough context to produce a meaningful, testable AI response, without being tied to any individual’s session history.

Prompts and weekly snapshots

Prompts are generated once and then reused in every subsequent snapshot. They are never regenerated automatically. This is intentional: the same prompts running week after week is what makes trend tracking meaningful — you’re measuring the same thing each time, against fresh content and fresh AI responses.
Regenerating prompts resets your week-over-week comparison baseline. Only do this if your content or target audience has fundamentally changed.