Here's a question that comes up constantly: "How do I even know if AI is talking about my business?"
AI brand monitoring has become essential as more users turn to LLMs for product recommendations. According to a 2025 Botify study, 62% of brands are invisible to at least one major AI platform. Unlike Google, where you can check rankings with any SEO tool, AI recommendations don't have a standard tracking system. There's no Search Console for ChatGPT. No rank tracker for Perplexity. You're mostly in the dark.
But you're not completely in the dark. There are ways to check, both manual and automated. Here's how to actually do it, step by step.
Traditional Brand Monitoring vs AI Monitoring
Before getting into the how-to, it helps to understand what we're talking about and what we're not.
Traditional brand monitoring tools have existed for years. They track mentions of your brand across social media, news sites, blogs, forums, and the broader web. They answer the question: "Who is talking about us?"
AI monitoring is a different category entirely. It tracks what large language models say about you when users ask questions. It answers: "Is AI recommending us?"
The key differences:
- Traditional monitoring watches public conversations about your brand. AI monitoring watches what AI recommends when users ask questions.
- Traditional monitoring is reactive: someone mentioned you, and you get an alert. AI monitoring is proactive: you find out whether you're being recommended before a customer ever tells you they found you through ChatGPT.
- Traditional tools track what humans say. AI monitoring tracks what machines say about you to humans.
Many brands need both. But AI monitoring captures a channel that traditional tools miss entirely.
Here's a quick overview of the main traditional monitoring tools and where they fit:
- Meltwater GenAI Lens: an enterprise media monitoring platform that recently added AI visibility tracking to its offering. Comprehensive but expensive, best suited for large brands with big monitoring budgets.
- Brand24: an affordable social listening tool starting at $79/month. Good for tracking social media mentions and web mentions, but limited when it comes to AI search tracking.
- Brandwatch: enterprise-grade social intelligence. Strong on sentiment analysis and consumer research. Not built for AI visibility tracking.
- Talkwalker: enterprise media monitoring with AI-powered analytics. Good for large-scale media listening across traditional channels.
- Mention.com: a mid-range real-time monitoring tool. Easy to set up and use for tracking web and social mentions.
- Awario: a budget-friendly alternative starting at $29/month. Covers web and social monitoring at a lower price point.
These tools are valuable for what they do. But none of them were designed to answer the question that matters here: "When someone asks ChatGPT for a recommendation in my category, do I show up?" That requires a different approach, which is what the rest of this guide covers.
Step 1: Identify the right prompts
Before you start checking, you need to know what to check. The prompts that matter are relevant queries, the questions someone asks when they're looking for a solution like yours.
Think about how a potential customer would ask AI for help. Not "What is project management?" but "What's the best project management tool for a freelance team of 5?" Not "Define content marketing" but "Can you recommend a content marketing agency for SaaS startups?"
Write down 10-15 prompts that a real buyer in your market would use. Be specific. Include variations: different phrasing, different angles, different levels of specificity.
Some examples to get you started:
- "What's the best [your category] for [your target audience]?"
- "Can you recommend a [your service] that specializes in [your niche]?"
- "I need help with [problem you solve]. What are my options?"
- "Compare the top [your category] tools/services"
- "[Competitor] alternative for [specific use case]"
These are the prompts you'll test against. Quality matters more than quantity here. Ten well-chosen prompts tell you more than fifty generic ones.
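If you want to generate variations systematically rather than brainstorming each one, the templates above lend themselves to simple expansion. A minimal sketch; the category and audiences below are placeholders, swap in your own:

```python
from itertools import product

# Hypothetical examples -- replace with your own category and audiences.
CATEGORY = "project management tool"
AUDIENCES = ["freelance teams", "SaaS startups", "agencies"]

TEMPLATES = [
    "What's the best {category} for {audience}?",
    "Can you recommend a {category} that specializes in {audience}?",
    "Compare the top {category} options for {audience}",
]

def build_prompts(category, audiences, templates):
    """Expand each template with every audience to build a test set."""
    return [t.format(category=category, audience=a)
            for t, a in product(templates, audiences)]

prompts = build_prompts(CATEGORY, AUDIENCES, TEMPLATES)
# 3 templates x 3 audiences = 9 prompts
```

Prune the output by hand afterward; a generated prompt that no real buyer would type is worse than no prompt at all.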
Step 2: Check ChatGPT manually
Open ChatGPT and start a new conversation. This is important, because existing conversations carry context that can skew results.
Type in your first prompt exactly as a buyer would phrase it. Don't include your brand name in the prompt. You want to see if AI recommends you organically, not whether it knows your brand exists.
Read the response carefully. Look for three things:
Are you mentioned? Does your brand name appear anywhere in the response?
Where are you positioned? Are you the first recommendation, buried in a list, or mentioned as an "also consider" afterthought?
How are you described? What does AI say about you? Is it accurate? Is the positioning favorable?
Record the results: prompt, whether you were mentioned, position in the response, and what was said. Do this for all your prompts.
One important caveat: ChatGPT can give different answers to the same prompt at different times. Run the same prompt two or three times to see if results are consistent. If you show up once but not twice, your visibility is unstable.
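Logging those three signals (mentioned, position, description) by hand gets error-prone fast. Here's a hedged sketch of a checker you could run against a saved response; the brand matching is naive substring matching, and a real version would handle aliases and word boundaries:

```python
def analyze_response(response_text: str, brand: str, competitors: list[str]) -> dict:
    """Check whether the brand is mentioned and where it sits relative to competitors."""
    text = response_text.lower()
    mentioned = brand.lower() in text

    # Position: order of first appearance among all brands that show up at all.
    positions = {b: text.find(b.lower()) for b in [brand] + competitors
                 if b.lower() in text}
    ranked = sorted(positions, key=positions.get)
    rank = ranked.index(brand) + 1 if mentioned else None

    return {"mentioned": mentioned, "rank": rank, "brands_seen": ranked}

# Example run against a hypothetical response (brand and competitor names made up)
result = analyze_response(
    "Top picks: Asana for teams, Acme PM for freelancers, and Trello for boards.",
    brand="Acme PM",
    competitors=["Asana", "Trello"],
)
# result["mentioned"] is True, result["rank"] is 2
```

Run the same prompt two or three times and compare the `rank` values; if they swing between runs, that's the instability the caveat above describes.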
Step 3: Repeat across platforms
ChatGPT is only one of five major AI platforms that people use for recommendations. You need to check the others too.
Perplexity (perplexity.ai): Uses real-time web search, so results can differ significantly from ChatGPT. Run your same prompts here.
Claude (claude.ai): Anthropic's AI. Tends to be more cautious with recommendations and often adds qualifiers. Still worth checking.
Google Gemini (gemini.google.com): Leverages Google's search data, so its recommendations can reflect a different information set.
Grok (available via X/Twitter): Draws on X data in ways others don't. Worth checking especially if your audience is active on X.
This means running your 10-15 prompts across 5 platforms. That's 50-75 individual checks. It takes time, but it gives you a comprehensive baseline.
Step 4: Check your competitors
Knowing whether AI mentions you is only half the picture. You also need to know who AI recommends instead of you.
Go back through your results and note every brand that appears in the responses. These are your AI competitors, and they might not be the same as your Google competitors. Sometimes a brand you've never considered as competition is dominating AI recommendations in your space.
Identify the top 2-3 brands that appear most frequently. These are the ones you're competing against for AI attention.
Step 5: Record your baseline
Create a simple spreadsheet. Columns: Prompt, ChatGPT result, Perplexity result, Claude result, Gemini result, Grok result, Top competitor mentioned, Notes.
Fill it in with your findings. This is your baseline. Everything you do from here will be measured against these initial results.
Calculate your overall visibility rate: number of times you were mentioned divided by total checks (prompts x platforms). This gives you a single number to track over time.
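The visibility-rate calculation is simple enough to script once your spreadsheet fills up. A sketch, assuming you record each check as a boolean per prompt and platform (the numbers below are made up):

```python
# Hypothetical baseline: one boolean per prompt, True = mentioned.
results = {
    "ChatGPT":    [True, False, True, False],
    "Perplexity": [True, True, False, False],
    "Claude":     [False, False, True, False],
    "Gemini":     [True, False, False, False],
    "Grok":       [False, False, False, False],
}

def visibility_rate(results: dict[str, list[bool]]) -> float:
    """Mentions divided by total checks (prompts x platforms)."""
    checks = [hit for platform in results.values() for hit in platform]
    return sum(checks) / len(checks)

rate = visibility_rate(results)
# 6 mentions out of 20 checks -> 0.30
```

The per-platform breakdown is worth keeping too: a 30% overall rate hides the fact that, in this example, Grok never mentions you at all.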
Step 6: Set a monitoring schedule
This is where most people fail. They do the initial check, get the data, and then never check again.
In competitive categories, AI recommendations can shift every two to three weeks, which makes one-time checks unreliable. What ChatGPT says this week might differ next week. A competitor publishes new content, gets a review, or updates their positioning, and suddenly the recommendations shift.
Minimum viable monitoring: re-run your key prompts every two weeks. Monthly at the absolute least. Weekly if AI visibility is a strategic priority for your business.
Update your spreadsheet each time. Track the trend. Are you gaining visibility? Losing it? Holding steady? This trend data is far more valuable than any single snapshot.
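Comparing snapshots over time is where the spreadsheet earns its keep. A minimal sketch of a per-platform trend check; the snapshot data is hypothetical:

```python
def platform_trend(old: dict[str, int], new: dict[str, int]) -> dict[str, str]:
    """Classify each platform's mention count as gaining, losing, or steady."""
    trend = {}
    for platform in old:
        delta = new.get(platform, 0) - old[platform]
        trend[platform] = "gaining" if delta > 0 else "losing" if delta < 0 else "steady"
    return trend

# Hypothetical mention counts from two monitoring cycles
two_weeks_ago = {"ChatGPT": 3, "Perplexity": 5, "Claude": 2}
this_week     = {"ChatGPT": 5, "Perplexity": 4, "Claude": 2}

trend = platform_trend(two_weeks_ago, this_week)
# {"ChatGPT": "gaining", "Perplexity": "losing", "Claude": "steady"}
```

Even this coarse three-way classification answers the question that matters: is the overall direction up, down, or flat?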
The limitations of manual tracking
Let's be honest about what manual checking can and can't do.
It can give you a clear baseline, show you who your AI competitors are, and reveal how your positioning looks through AI's eyes.
It can't scale. The average enterprise brand needs to track 25-50 prompts across 5 platforms, totaling 125-250 data points per monitoring cycle. Running 75 checks every two weeks takes a couple of hours. And you'll probably start skipping it after a month or two because it's tedious. The data you're collecting is also limited: you can't easily track sentiment trends, visibility changes between checks, or patterns across hundreds of prompts.
It's also inconsistent. ChatGPT gives different responses to the same prompt at different times. Without running each prompt multiple times, you might get a result that isn't representative. Manual checking introduces sampling error that's hard to eliminate.
When to consider automated tracking
For a broader understanding of how AI tracking fits into overall optimization, read our AI search engine optimization guide.
If any of these apply to you, manual tracking starts to break down:
You want to track more than 15-20 prompts. You want data more frequently than monthly. You want to compare results across time periods systematically. You want alerts when your visibility changes. You're managing multiple projects or clients.
Automated tools like Mentionable run your prompts across all five major LLMs on a regular schedule, track changes over time, and alert you when something shifts. The setup takes about ten minutes: enter your URL, pick the prompts to track, and the system handles the rest.
The trade-off is cost vs time. Manual checking is free but labor-intensive and inconsistent. Automated tracking costs money but gives you reliable, continuous data. Companies that monitor AI mentions consistently see 30-40% faster response to visibility drops compared to those doing quarterly audits.
Turning your data into action
Whether you track manually or with a tool, the point isn't the data itself. It's what you do with it.
If you're invisible on most prompts, the fix is usually positioning. Is your website clear about what you do and who you serve? Can AI easily extract a one-sentence summary of your value?
If you're visible on some prompts but not others, look for the pattern. What do the "hit" prompts have in common? What's different about the "miss" prompts? Usually the gap is content, third-party validation, or specificity of positioning.
If competitors consistently outrank you, study why. What does their web presence look like? Where are they mentioned that you're not? What can you learn from their positioning?
The brands that win AI visibility aren't guessing. They're measuring, identifying gaps, and systematically filling them. The first step in that process is knowing where you stand.
Real-world example
A European fintech company discovered through systematic AI monitoring that ChatGPT consistently recommended three competitors but never mentioned them. Analysis revealed their product pages used internal jargon instead of industry-standard terms. Customers searched for "expense management software" while the company's site talked about "financial flow optimization suite."
After rewriting key pages to match how customers actually describe their needs, and adding third-party validation through industry reviews and case studies, they appeared in ChatGPT responses for 6 of their 15 tracked prompts within 6 weeks. Perplexity followed a few weeks later, likely because the rewritten pages also performed better in traditional search, which Perplexity draws from.
The takeaway: AI visibility gaps are often language gaps. If you describe your product differently from how buyers ask about it, AI will recommend whoever speaks the buyer's language.
Start checking. Today. Even a quick manual run through your top 5 prompts on ChatGPT gives you more insight than you had an hour ago.
Related articles
- What are AI mentions? -- a clear definition of what counts as an AI mention and why it matters.
- Multi-LLM tracking -- why checking a single AI platform gives you an incomplete picture.
- AI traffic vs Google traffic -- how AI-referred visitors behave differently from search visitors.
- Profound alternative -- how Mentionable compares to other AI monitoring tools.
