Can a Fake Brand Win in AI Search? What a Month-Long Experiment Reveals About How AI Engines Choose Sources

12 min read · April 30, 2026

AI search citation systems have a trust problem, and a new experiment published by Search Engine Land just measured exactly how wide the gap is.

Bogdan Babiak at SEL ran a controlled AI search citation experiment spanning ChatGPT, Google AI Overviews, Google AI Mode, Perplexity, and Gemini. The test subject was a fictional brand with zero real-world existence: no customers, no revenue, no history. The team published structured content about this invented entity starting in March 2026 and tracked how all five AI systems responded over one month.

The results should unsettle every brand that has spent years building real authority.

Across different query types, 825 prompts generated 15,835 AI answers during the first month. The fake brand appeared in citations across all five platforms. Not gradually. Not barely. Consistently enough that the experimenters called the pattern "predictable, testable, and open to strategic influence."

This is not the first time researchers have stress-tested AI citation systems. Ahrefs ran a similar experiment in late 2025, creating a fictional brand called Xarumei and tricking eight AI search engines over two months. But Babiak's work at SEL is the first published, peer-visible controlled experiment that isolates the specific signals driving citation behavior across the current generation of AI platforms.

The takeaway is not that AI search is broken. The takeaway is that AI citation mechanics reward content structure and entity consistency more than factual verification. For legitimate brands, this creates both a vulnerability and an opportunity that most SEO teams have not yet internalized.

What the experiment actually found

The SEL team created a fictional brand, published supporting content across the web, and then systematically tested whether five AI platforms would cite it. They ran 825 distinct prompts, generating 15,835 individual AI responses in the first month alone.

The core finding: the fake brand earned citations on all five platforms. The fictional entity was recommended, described, and surfaced as a legitimate option in AI-generated answers.

The most revealing statistic was not the raw citation count. It was the distribution of where those citations appeared. Fully 96% of the fake brand's AI visibility came from branded searches: queries that included the brand name itself. Generic category queries produced far fewer citations.

This tells us something specific about how AI engines process new entities. When a query includes a named entity, AI systems appear to prioritize finding and surfacing information about that entity, even if the entity was invented weeks ago. The verification step, the one where an AI system asks "is this brand real?", either does not exist or gets overridden by the system's imperative to provide a substantive answer.

The Ahrefs Xarumei experiment from December 2025 reinforced this pattern from a different angle. Ahrefs demonstrated that fabricated information about a nonexistent brand could be injected into AI answers across eight platforms simply by publishing enough structured content in the right places. The Ahrefs experiment focused on misinformation risk. The SEL experiment focuses on citation mechanics. Together, they paint a coherent picture: AI citation systems are optimized for information retrieval, not truth verification.

The four signals that now define AI visibility

Published the same day as the fake brand experiment, Wasim Kagzi's companion piece at Search Engine Land identifies four signals that drive AI visibility. These signals explain why the fake brand succeeded and what legitimate brands need to understand about how they are being evaluated.

Brand mention presence. AI systems need to find your brand name in crawlable content. This is the baseline. If your brand does not appear in the text corpora that AI models index, you simply do not exist in their answer space. The fake brand passed this test by publishing enough mentions across enough sources that all five platforms encountered the name during retrieval.

Recommendation weight. Not all mentions are equal. Being recommended carries more weight than being mentioned in a general list. When an AI system decides which brands to surface in response to a category query, it appears to weight explicit recommendations ("X is a top choice for Y") higher than passive mentions ("X, along with others, offers Y"). The fake brand's content strategy included explicit recommendation framing.

Sentiment and context. Kagzi's research found that sentiment and context determine whether mentions drive action. An AI system that describes your brand as "premium" versus "budget-friendly" shapes user behavior. The fake brand was described consistently in positive, authoritative terms. There was no counter-narrative because no real users existed to contradict the framing.

Structural authority signals. This is where the trust gap becomes most visible. AI systems appear to evaluate source quality through structural heuristics: content organization, schema markup, entity consistency across pages, and the density of supporting information. A well-structured page about a fake brand can score higher on these heuristics than a poorly structured page about a real brand with decades of history.

These four signals interact. A brand that has mentions but negative sentiment gets cited differently than one with positive sentiment. A brand with structural authority but no branded search volume gets different treatment than one with both. The fake brand experiment worked because it addressed all four signals simultaneously with no contradictory real-world data to undermine the signal set.
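To make the interaction concrete, here is a toy scoring sketch. The weights, thresholds, and the multiplicative form are all invented for illustration; SEL publishes no scoring formula, and real AI platforms almost certainly combine these signals differently.

```python
def visibility_score(mentions, recommended, sentiment, structure):
    """Toy model of the four signals: mention presence, recommendation
    weight, sentiment, and structural authority. Weights are invented
    for illustration only."""
    if mentions == 0:
        return 0.0                        # no mention presence: invisible
    presence = min(mentions / 10, 1.0)    # mention count, saturating at 10
    rec_weight = 1.5 if recommended else 1.0
    sent = 0.5 + 0.5 * sentiment          # map sentiment [-1, 1] -> [0, 1]
    return round(presence * rec_weight * sent * structure, 3)

# A brand with few mentions but explicit recommendations and clean structure
# can outscore one with many mentions but weak structure and flat sentiment.
print(visibility_score(mentions=4, recommended=True, sentiment=0.8, structure=0.9))
print(visibility_score(mentions=40, recommended=False, sentiment=0.2, structure=0.3))
```

The point of the sketch is the multiplicative interaction: zeroing out any one signal (no mentions, hostile sentiment, no structure) collapses the whole score, which matches the experiment's observation that the fake brand worked by addressing all four at once.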

Want to know how your brand scores across these same signals? Run an AI visibility audit to see where you stand and where the gaps are.

Why this matters for legitimate brands

The uncomfortable reality: a fictional brand with four weeks of content beat real brands with years of authority on a structural level. That does not mean the fake brand is more valuable. It means the measurement system AI engines use is incomplete.

Consider the broader data landscape. According to Position Digital's AI SEO statistics roundup, roughly 75% of sites that actively block AI bots still appear in AI citations. Blocking GPTBot or Google-Extended in robots.txt does not prevent citation. AI platforms pull from secondary sources, knowledge graphs, and training data that extends far beyond direct crawling.
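Auditing what your own robots.txt actually blocks is straightforward with Python's standard-library parser. The sketch below uses a hypothetical robots.txt; the bot names are the commonly published crawler tokens, and which ones a given platform honors is an assumption on your part, not a guarantee.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks two AI crawlers but allows everything else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

def blocked_ai_bots(robots_txt, bots=("GPTBot", "Google-Extended", "PerplexityBot")):
    """Return which of the named AI crawler tokens are disallowed from '/'."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [bot for bot in bots if not rp.can_fetch(bot, "/")]

print(blocked_ai_bots(ROBOTS_TXT))  # GPTBot and Google-Extended are blocked
```

As the Position Digital figure suggests, a non-empty result here only tells you direct crawling is disallowed; it says nothing about whether the platforms still surface your brand via secondary sources.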

SE Ranking's research on Google AI Mode found that the average AI Mode answer contains 12.6 linked sources, drawn from a pool of 122,617 links across 9,734 triggered responses. That is a lot of citation real estate, and the selection criteria favor structural signals over historical authority.

Meanwhile, Ahrefs' analysis of 4 million AI Overview URLs found that only 38% of pages cited in AI Overviews also rank in the organic top 10. Traditional ranking position is no longer a reliable predictor of AI citation. The overlap between organic SERP visibility and AI citation is weakening, which means brands that optimized purely for traditional SEO are losing ground in AI search even when their rankings hold steady.

The fake brand experiment reveals that AI citation systems reward structure over verification

For brands that have invested years in building real authority through legitimate PR, customer reviews, thought leadership, and organic growth, this creates a specific frustration. The signals that matter in AI search are not the signals that rewarded patience and authenticity in traditional SEO. They reward structure, consistency, and entity density.

The fake brand did not win because it was better. It won because it was optimized for the current citation mechanics while real brands were optimizing for a different game.

The trust gap, quantified

Here is the core problem: AI citation systems conflate "well-documented" with "trustworthy."

When an AI model encounters a brand name in multiple structured sources, with consistent entity descriptions, schema markup, and supporting content across several domains, it treats that entity as established. The model has no mechanism to distinguish between "this brand has existed for 20 years and serves 50,000 customers" and "this brand was created last month and its supporting content was published by the same entity running the experiment."

The SEL experiment exposes this gap directly. The 96% branded search figure is particularly telling. It means AI systems are highly responsive to branded queries even when the brand has no external validation: no independent reviews, no third-party coverage, no user-generated content, no regulatory filings, no business registrations. The only signal is the content itself.

Compare this to how traditional search worked. Google's original PageRank algorithm used links as a proxy for trust. A page with many inbound links from diverse, authoritative sources was assumed to be more trustworthy than one without. The system was gameable (link farms, paid links), but the core assumption created a high bar for new entrants.

AI citation systems have no equivalent mechanism. There is no "citation rank" that weights sources based on independent verification. The models evaluate text quality, entity consistency, and structural signals, but they do not ask whether the entity actually exists in the physical world.

This is the trust gap. It is not a bug. It is an architectural feature of how current AI retrieval systems work. The models are designed to find and synthesize information, not to verify the real-world existence of the entities they describe.

What brands should do differently

The response to this experiment is not panic. It is strategic recalibration.

First, compete on structure because you have no choice. The fake brand experiment proves that content architecture is table stakes. If your brand's web presence is fragmented, inconsistently described, or structurally weak, you are losing citations to competitors (and fictional brands) that invested in generative engine optimization. This means consistent entity descriptions, schema markup, structured data, and a coherent content architecture that AI systems can parse.

Second, build signals that cannot be faked. This is where the real opportunity lies. The fake brand experiment succeeded because there were no contradictory signals. Real brands have customer reviews, independent media coverage, social proof, user-generated content, regulatory filings, and years of search behavior data. These signals are harder to fabricate and, over time, AI systems will likely weight them more heavily as the platforms mature.

The brands that invest now in building verifiable, independent authority signals will be positioned well as AI citation systems evolve. The fake brand works in a low-trust environment. Real authority wins as trust mechanisms get more sophisticated.

Third, monitor your AI citation profile actively. Citation volatility is real. Searchless benchmark data shows that AI citation positions have a 50% decay rate within 13 weeks. A citation you hold today may be gone in three months. This means AI visibility is not a one-time optimization; it requires ongoing monitoring and adjustment.
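If you model that 50%-in-13-weeks figure as simple exponential decay (an assumption; Searchless does not publish the decay curve's shape), you can estimate how much of today's citation footprint survives any monitoring interval:

```python
def expected_retention(weeks, half_life_weeks=13.0):
    """Fraction of today's AI citations expected to survive after `weeks`,
    assuming the reported 50%-in-13-weeks decay is exponential."""
    return 0.5 ** (weeks / half_life_weeks)

# Under this assumption, a quarterly check-in cadence lets roughly half of
# your citations lapse between audits; a monthly cadence catches decay earlier.
print(round(expected_retention(4), 2))   # ~4 weeks out
print(round(expected_retention(13), 2))  # 0.5 by construction
print(round(expected_retention(26), 2))  # 0.25 after two half-lives
```

The practical read: under this model, monitoring cadence should be shorter than the half-life, or you are mostly measuring citations you have already lost.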

If you have not checked whether AI systems are citing your brand, or citing competitors instead, our AI visibility audit maps exactly where you appear, where you are missing, and what signals need adjustment.

Fourth, differentiate between being mentioned and being recommended. The SEL companion piece on the four signals makes clear that recommendation weight and sentiment context matter more than raw mention count. Track not just whether your brand appears in AI answers, but how it appears. Are you described as a leader or an also-ran? Are you recommended with conviction or listed as one of many options?
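A first-pass way to track the mention-versus-recommendation distinction is a keyword heuristic over the AI answers you collect. The cue phrases below are illustrative guesses, not SEL's taxonomy, and a production system would want something more robust than regex matching:

```python
import re

# Illustrative recommendation cues; not an official or exhaustive list.
RECOMMENDATION_CUES = [r"\btop choice\b", r"\bwe recommend\b", r"\bbest\b", r"\bleading\b"]

def classify_presence(answer_text, brand):
    """Classify how a brand appears in one AI answer:
    'absent', 'mentioned' (passive), or 'recommended' (explicit endorsement)."""
    if brand.lower() not in answer_text.lower():
        return "absent"
    # Only sentences that actually name the brand count toward its framing.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer_text)
                 if brand.lower() in s.lower()]
    for sentence in sentences:
        if any(re.search(cue, sentence, re.IGNORECASE) for cue in RECOMMENDATION_CUES):
            return "recommended"
    return "mentioned"

print(classify_presence("Acme is a top choice for compliance audits.", "Acme"))
print(classify_presence("Vendors include Acme, Beta, and Gamma.", "Acme"))
print(classify_presence("Beta and Gamma dominate this category.", "Acme"))
```

Run over a log of collected answers, this gives you a ratio of recommendations to passive mentions per platform, which is the trend line worth watching.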

For a deeper tactical breakdown of how to earn AI citations, our practical playbook for getting cited by AI covers the operational side. And for understanding how a single AI platform handles source selection, our analysis of how Perplexity chooses sources provides engine-specific insight.

The longer arc

AI citation systems are in their early innings. The trust gap the SEL experiment exposed will narrow. Platforms will add verification layers. Citation algorithms will incorporate more independent signals. The question is not whether this will happen, but how fast.

Right now, the gap is wide enough that a fictional brand can compete with real ones. That creates a window where structural optimization delivers outsized returns. But it also creates a window where brands that invest in authentic, hard-to-fake authority signals will build durable advantages that survive the next generation of citation algorithms.

The brands that treat this as a wake-up call rather than a curiosity will be the ones that dominate AI search visibility over the next 18 months. The ones that dismiss it as an academic exercise will find themselves outranked by competitors (and, occasionally, fictional entities) that took the signal mechanics seriously.

The experiment is still running. The fake brand is still earning citations. The question for your brand is whether you are watching.

---

Sources

1. Bogdan Babiak, "Can a fake brand win in AI search? New experiment says yes," Search Engine Land, April 29, 2026.

2. Wasim Kagzi, "4 signals that now define visibility in AI search," Search Engine Land, April 29, 2026.

3. Position Digital, "150+ AI SEO Statistics for 2026," April 2026.

4. Search Engine Journal, "Most Major News Publishers Block AI Training & Retrieval Bots," January 2026.

5. Ahrefs, analysis of 4M AI Overview URLs (cited in multiple 2026 roundups).

6. SE Ranking, "AI Mode Research: Sources, Volatility, & Differences between AIO and Organic Search," August 2025.

7. Searchless benchmark: Citation volatility data, internal analysis 2026.

---

FAQ

Can a fake brand actually sustain AI citations over time?

The SEL experiment is ongoing, but the first-month data shows consistent citation across all five platforms. The Ahrefs Xarumei experiment from December 2025 showed similar sustained results over two months. Long-term durability without real-world signals is the open question.

What should I check first to see if my brand has an AI visibility problem?

Run your brand name and top category keywords through ChatGPT, Perplexity, and Google AI Mode. Note whether you appear, how you are described, and whether competitors get recommended instead. For a systematic assessment, the Searchless AI visibility audit covers 100+ queries across multiple AI platforms.

Is blocking AI bots in robots.txt effective for preventing citations?

No. Research from Position Digital and SEJ shows roughly 75% of sites blocking AI bots still appear in AI citations. AI platforms pull from multiple data sources beyond direct crawling.

How is this different from traditional SEO?

Traditional SEO rewards authority built over time through links, user signals, and domain history. AI citation mechanics weight content structure, entity consistency, and recommendation framing. The overlap between the two is weakening: only 38% of AI Overview citations come from organic top-10 results.

---

Ready to see where your brand stands in AI search? Check our pricing plans to find the right audit and monitoring package for your team.

How Visible Is Your Brand to AI?

88% of brands are invisible to ChatGPT, Perplexity, and Gemini. Find out where you stand in 60 seconds.

Check Your AI Visibility Score Free