Josh Pigford, the founder behind Baremetrics and a string of SaaS products, launched Rumored.ai today. The product does one thing: it queries AI models about your brand, compares what they say to reality, and tells you what they are getting wrong.
The pitch is simple because the problem is simple. Ask ChatGPT about a business and it will confidently state hours, products, pricing, founding year, and competitive positioning. Some of that information will be wrong. Not occasionally wrong. Pigford's claim, based on the audits he has been running: "AI hallucinates business facts for quite literally every brand."
That is not a research curiosity. It is a distribution problem. When a growing share of product research starts in an AI chatbot instead of a search engine, incorrect facts about your business travel at the speed of a confident paragraph.
What the audit actually covers
Rumored.ai generates a report across 12 sections. The core ones: an executive summary of how AI models currently represent your brand, an active threats section listing factual errors that could directly cost you customers, a competitive analysis showing how models position you against named competitors, and a schema audit checking whether your structured data gives models the right signals.
Each section comes with a prioritized action plan. The output is not a dashboard you stare at — it is a set of copy-paste fix prompts and structured data corrections ranked by severity.
The launch pricing is unusual: $25 for the first buyer, and the price rises by $25 with each purchase. That is a forcing function for early adoption, not a long-term pricing model, but it signals that Pigford is treating this as a finished product, not a waitlist experiment.
AEO is not SEO with a new acronym
The industry has started calling this Answer Engine Optimization (AEO) or Generative Engine Optimization (GEO), and both terms are slightly misleading. SEO is about ranking — appearing higher in a list of links. AEO is about accuracy — making sure the answer an AI model constructs about your business is factually correct.
The distinction matters because the failure modes are different. A bad SEO outcome means you appear on page two. A bad AEO outcome means ChatGPT tells a potential customer that you closed in 2023, that you do not offer the service they are looking for, or that your competitor has a feature you actually pioneered. The customer never visits your site. They never see a result to click. They got an answer, and the answer was wrong.
Most businesses have not encountered this problem yet because most businesses have not asked an AI model what it thinks about them. Rumored.ai is betting that once they do, the results will be alarming enough to drive immediate action.
Where the hallucinations come from
AI models assemble brand information from training data, which is a snapshot of the internet at some cutoff date, filtered through whatever deduplication and quality heuristics the model provider used. Unless the model performs live web retrieval, there is no canonical source of truth it consults, and no API call to your Google Business Profile happening behind the scenes.
This means every piece of outdated, inaccurate, or contradictory information about your business that ever appeared online is a candidate for inclusion in a model's response. A blog post from 2019 listing your old pricing. A competitor's comparison page misrepresenting your features. A Yelp review mentioning hours that changed during COVID. The model cannot distinguish between these sources and your actual website. It blends them into a single confident answer.
Structured data — schema markup, Knowledge Graph entries, consistent NAP (name, address, phone) across directories — gives models stronger signals. But most businesses have gaps in their structured data, and even businesses that get it right find that models sometimes ignore it in favor of older, more frequently cited information.
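To make the structured-data signal concrete, here is a minimal sketch of publishing business facts as schema.org JSON-LD. The business details are invented for illustration; the field names (`telephone`, `foundingDate`, and so on) are standard schema.org properties for a `LocalBusiness`.

```python
import json

# Hypothetical business facts -- the kind of "ground truth" record
# a brand would want models to pick up. Values are illustrative.
business = {
    "name": "Acme Coffee Roasters",
    "address": "123 Main St, Springfield",
    "phone": "+1-555-0100",
    "url": "https://example.com",
    "founded": "2015",
}

# Build a schema.org LocalBusiness block in JSON-LD form. Embedding
# this in a <script type="application/ld+json"> tag on the site gives
# crawlers (and, indirectly, training pipelines) an unambiguous signal.
json_ld = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": business["name"],
    "address": business["address"],
    "telephone": business["phone"],
    "url": business["url"],
    "foundingDate": business["founded"],
}

print(json.dumps(json_ld, indent=2))
```

Keeping every value in this block identical to what appears on the site, in directories, and in Knowledge Graph entries is the "consistent NAP" discipline the paragraph above describes.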
The gap in the current tool landscape
SEO tools have spent years building crawlers, rank trackers, and backlink analyzers. None of them systematically test what AI models say about a specific business. Some monitor whether a brand appears in AI-generated answers (visibility tracking), but visibility and accuracy are different problems. Appearing in an AI answer is worthless if the answer contains wrong information — it might actually be worse than not appearing at all.
Rumored.ai is narrowly focused on the accuracy side. It does not track rankings or citations. It asks models about your brand, compares the output to ground truth, and generates fix instructions. That narrow scope is probably the right first product. The accuracy problem is concrete, immediately actionable, and emotionally compelling in a way that visibility metrics are not. Seeing ChatGPT confidently state the wrong founding year for your company produces a different reaction than seeing a rank drop from position 3 to position 7.
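The "compare the output to ground truth" step can be sketched in a few lines. This is not Rumored.ai's actual pipeline (which is not public); the model's claims here are a hard-coded stand-in for facts parsed out of a real LLM response, and the field names are invented.

```python
# Ground truth: what is actually true about the business (illustrative).
ground_truth = {"founded": "2015", "pricing": "$29/mo", "status": "open"}

# Stand-in for structured facts extracted from a model's answer.
model_claims = {"founded": "2013", "pricing": "$29/mo", "status": "closed"}

def find_hallucinations(truth: dict, claims: dict) -> list[tuple[str, str, str]]:
    """Return (field, claimed, actual) for every fact the model got wrong."""
    return [
        (field, claims[field], actual)
        for field, actual in truth.items()
        if claims.get(field) != actual
    ]

# Each mismatch is a candidate entry for an "active threats" list.
for field, claimed, actual in find_hallucinations(ground_truth, model_claims):
    print(f"ACTIVE THREAT: model says {field}={claimed!r}, reality is {actual!r}")
```

The hard parts in practice are upstream of this loop: phrasing the prompts, extracting comparable facts from free-form model text, and maintaining an accurate ground-truth record to diff against.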
Structured data corrections are the most actionable output
Of the 12 sections in the Rumored.ai report, the schema audit is likely the one that produces the fastest results. Structured data is the most direct channel for communicating facts to models and search engines alike. If your JSON-LD markup is missing, incomplete, or inconsistent with your actual business details, fixing it is a concrete technical task with a clear before-and-after.
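A schema audit in this spirit reduces to two checks per field: is it present in the published markup, and does it match reality? The sketch below assumes invented business details; the field names follow schema.org conventions.

```python
import json

# What is actually true about the business (illustrative values).
actual = {
    "name": "Acme Coffee Roasters",
    "telephone": "+1-555-0100",
    "foundingDate": "2015",
}

# JSON-LD as currently published on the site: one field missing,
# one field stale.
published = json.loads("""
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Acme Coffee Roasters",
  "foundingDate": "2013"
}
""")

issues = []
for field, expected in actual.items():
    if field not in published:
        issues.append(f"missing: {field} (should be {expected!r})")
    elif published[field] != expected:
        issues.append(
            f"inconsistent: {field} is {published[field]!r}, "
            f"should be {expected!r}"
        )

for issue in issues:
    print(issue)
```

Each issue maps to a concrete edit in the site's markup, which is why this section has the clearest before-and-after of the report.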
The active threats section is the most emotionally resonant. Seeing a list of specific false claims that AI models are making about your business — wrong products, wrong locations, wrong competitive positioning — creates urgency in a way that abstract metrics do not.
The competitive analysis section is interesting for a different reason. It shows how AI models compare you to competitors, which reveals positioning assumptions you did not create and may not agree with. If ChatGPT consistently describes your competitor as the "enterprise" option and your product as the "budget" alternative, that framing is shaping purchase decisions whether you like it or not.
Accuracy auditing as a recurring need
A one-time audit has value, but the underlying problem is ongoing. Models get retrained. Training data gets updated. New content about your business appears online. A competitor publishes a comparison page that misrepresents your features, and six months later that comparison is baked into a model's response patterns.
The open question for Rumored.ai is whether the product evolves into a monitoring service — continuous checks, alerts when model outputs change, automated re-audits — or stays as a point-in-time report. The current launch is a single audit, but the problem demands ongoing attention.
Pigford has built and scaled subscription products before. The one-time pricing structure at launch does not necessarily indicate the long-term model.
The audit nobody thought to run
Rumored.ai is occupying a category that barely existed twelve months ago. The idea that you need to audit what AI says about your business, not just what Google shows when someone searches for you, is new enough that most businesses have not considered it. Pigford is betting that the gap between "have not considered it" and "this is obviously critical" is exactly one audit report wide. Given how confidently AI models state wrong facts about virtually every business, that bet looks reasonable.
