TL;DR
- A single AI search result means almost nothing. You need consistent measurement over time.
- Track three core metrics: Mention Rate, Citation Rate, and Answer Position.
- Run 15 to 25 prompts weekly across ChatGPT, Perplexity, Gemini, Claude, and Copilot.
- Your own trend and competitor gaps are your most reliable benchmarks right now.
- AI visibility is a leading indicator. It influences customers before they ever visit your site.
The Problem with Checking Your AI Ranking Once
You check your AI search ranking on Monday morning. Your brand appears third. Tuesday, fifth. Wednesday, you are not there at all. Thursday, you are back at second.
This is not a bug. This is how AI works.
AI language models are non-deterministic. Ask ChatGPT the same question twice and you will likely get two different answers. Across 100 identical queries, there is less than a 1 in 100 chance it returns the same list of brands every time.
So when a marketing manager checks "do we appear when someone asks for the best accountants in Perth" and the answer is yes, that tells them almost nothing. The brand may not appear in the next ten responses.
The measurement instinct is right. The method is wrong. You cannot take a snapshot. You have to take an average. One reading gives you noise. Fifty readings over a month starts to give you a signal.
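To see why the average beats the snapshot, here is a minimal simulation, assuming a hypothetical brand with a 60% underlying chance of being mentioned in any single response (that probability, and the whole setup, are illustrative, not real platform data):

```python
import random

random.seed(0)  # reproducible demo

# Hypothetical underlying probability that the brand is mentioned.
# In reality you never see this number; you estimate it by sampling.
TRUE_MENTION_RATE = 0.6

def check_once():
    """Simulate one AI response: brand mentioned (1) or not (0)."""
    return 1 if random.random() < TRUE_MENTION_RATE else 0

snapshot = check_once()  # a single reading: pure noise, 0 or 1
readings = [check_once() for _ in range(50)]
weekly_average = sum(readings) / len(readings)

print(f"One snapshot: {snapshot}")
print(f"Average of 50 readings: {weekly_average:.0%}")
```

The single snapshot can only ever say 0% or 100%, which is why Monday's "third place" and Wednesday's "absent" are both misleading. The 50-reading average lands near the underlying rate, and it is that number, tracked week over week, that moves meaningfully.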
The Three Metrics That Actually Matter
Mention Rate tracks how frequently your brand appears across multiple AI responses to the same query set. If you run 20 prompts and your brand appears in 14 responses, your Mention Rate is 70%. This is your baseline visibility number.
Citation Rate measures whether AI attributes a source URL to your content when mentioning your brand. A mention without a URL builds brand familiarity. A citation drives traffic. Log Citation Rate from the responses themselves, and cross-check it downstream against GA4 referral traffic from AI platforms.
Answer Position tracks where your brand appears within AI responses. A first mention converts differently to a fifth mention. When AI leads with your brand, that reflects how heavily the model weights you relative to competitors.
How to Run the Measurement Framework
Pick 15 to 25 prompts that represent the questions your customers are actually typing into AI. Not brand queries, but category and problem queries: "Best accountant for a small construction business in Wellington" or "Who should I use for employment law advice?"
Run those prompts weekly across ChatGPT, Perplexity, Gemini, Claude, and Copilot. For each response, log three things: did your brand appear, did the response link to your website, and if you appeared, where in the answer did you show up?
A simple spreadsheet handles all of this. The value is not in any single week. It is in watching those numbers move over time.
What the gap tells you: If Citation Rate stays flat while Mention Rate grows, AI knows who you are but is not pointing people to your site. That signals a content problem, not a visibility problem.
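As a sketch of how the three metrics fall out of that weekly log, the following assumes a hypothetical spreadsheet export where each row is one prompt-and-platform check (the prompts, platforms, and values are all illustrative):

```python
from statistics import mean

# Hypothetical weekly log: one row per prompt per platform.
# 'position' is where the brand appeared in the answer (None if absent).
log = [
    {"prompt": "best accountant for construction, Wellington", "platform": "ChatGPT",
     "mentioned": True,  "cited": True,  "position": 2},
    {"prompt": "best accountant for construction, Wellington", "platform": "Perplexity",
     "mentioned": True,  "cited": False, "position": 4},
    {"prompt": "employment law advice",                        "platform": "Gemini",
     "mentioned": False, "cited": False, "position": None},
    {"prompt": "employment law advice",                        "platform": "Copilot",
     "mentioned": True,  "cited": False, "position": 1},
]

mention_rate = sum(r["mentioned"] for r in log) / len(log)
citation_rate = sum(r["cited"] for r in log) / len(log)
positions = [r["position"] for r in log if r["position"] is not None]
avg_position = mean(positions)

print(f"Mention Rate:  {mention_rate:.0%}")
print(f"Citation Rate: {citation_rate:.0%}")
print(f"Avg Position:  {avg_position:.1f}")

# The mention/citation gap is the content-problem signal described above.
gap = mention_rate - citation_rate
```

In this toy week the brand is mentioned in 75% of responses but cited in only 25%, a 50-point gap: AI knows the brand but is not sending anyone to the site.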
Closing the Gap Between Mention Rate and Citation Rate
AI systems pull from trade publications, review sites, industry forums, and third-party directories. If your brand appears across those sources but your own website content is thin or outdated, AI has enough confidence to name you but not enough to cite you.
The fix is two-pronged. First, give AI something worth citing on your own site. The page AI will not cite says: "We are a passionate team of experts delivering innovative solutions tailored to your unique needs." There is nothing there an AI can extract as a fact.
The page AI will cite has specifics: "We work with New Zealand businesses turning over between two and ten million dollars, primarily in construction and professional services, and our clients typically reduce their compliance time by around a third in the first year." A sector. A size range. An outcome. A timeframe.
Second, check whether your third-party presence is doing the heavy lifting your own site should be sharing. If AI is citing a yahoo.com article from 2023 or a single directory listing, that is fragile. You want AI drawing from multiple owned and earned sources simultaneously. That cross-web consistency is what builds citation confidence.
Start Measuring Now, Not When It Feels Ready
There is no industry-wide benchmark for a good Mention Rate yet. Focus on two things instead: your own trend over time and the gap between you and your main competitor on the same prompt set.
The single biggest implementation mistake is doing this once and drawing conclusions. Marketing managers run the prompts, see they appear in 60% of responses, put it in a slide, and move on. Three months later there is nothing to compare it against.
Build this into a weekly or monthly reporting rhythm before you have anything interesting to report. The brands with the most defensible AI visibility data in two years are the ones that started logging consistently in 2025 and 2026, when it felt too early.
If you are not tracking AI visibility, you are not absent from AI search. You just do not know how you are performing in it.
If you want to talk through how this measurement framework applies to your category and competitive set, the team at ADMATIC works through this with clients across Australia and New Zealand.