Self-Promotional Listicles Aren't Going Away: Peec.AI Study
Key Takeaways:
- A new Peec AI study found that roughly 11% of citations across major AI platforms came from self-promotional listicles.
- Peec found no evidence of algorithmic correction over the 12-week study period, with self-promo citation rates staying broadly stable throughout.
- ChatGPT had the lowest self-promotional listicle rate at 3.6%, compared with 10.3% for Google AI Mode and 10.4% for Perplexity, pointing to real differences in how platforms filter sources.
- The study focused specifically on software and software reviews, which means the findings are vertical-specific rather than universal across every category.
TL;DR: According to new research from Peec AI, AI search platforms still haven’t cracked the self-promotion problem. After analyzing 232,000 citations across 13,000 listicles over 12 weeks, Peec found that self-promotional “best of” content is still getting cited at meaningful rates, even though platform behavior differs sharply. That makes listicles a content format still worth considering in an AI search optimization strategy, though their expiration date may be on the horizon.
Self-promotional listicles have been controversial in SEO for years, and Peec’s latest data suggests they’re still showing up in AI search results.
In the analysis, major AI platforms were still citing “best X” pages where brands ranked themselves first and listed competitors below.
| Finding | Peec AI result |
| --- | --- |
| Total citations analyzed | 232,000 |
| Unique listicles analyzed | 13,000 |
| Study period | 12 weeks |
| Average self-promotional citation rate | ~11% |
| ChatGPT self-promo rate | 3.6% |
| Google AI Mode self-promo rate | 10.3% |
| Perplexity self-promo rate | 10.4% |
What did Peec AI find about self-promotional listicles in AI search?
The headline numbers are clear, but the more interesting question is what’s driving them. Why are some platforms citing self-promotional content at nearly three times the rate of others, and why hasn’t that changed over time? The study doesn’t offer a definitive answer, but the patterns it surfaces point to some meaningful differences in how these platforms are built. Here are the biggest findings.
1. Self-promotional listicles are still a meaningful citation source
The persistence of self-promo citations suggests these pages have genuine signals that AI platforms are responding to, including things like domain authority, inbound links, and topical relevance. It’s not that platforms are blind to the problem; it’s that the content is often technically strong enough to clear the bar anyway. Fixing that likely requires more than a simple filter.
2. There was no clear sign of platform-wide correction
The stability across 12 weeks is arguably more telling than the raw percentages. Platforms had ample time to adjust, and none of them showed a meaningful downward trend. That either means they’re not prioritizing this issue, or it’s harder to solve algorithmically than it looks from the outside.
3. ChatGPT stood out from the rest
The gap between ChatGPT and the rest is nearly threefold. That points to something structural in how ChatGPT handles source selection and retrieval, and it’s worth watching alongside how paid visibility is starting to evolve inside ChatGPT.
4. The findings are specific to software reviews
Software is a category with unusually high incentives for self-promotional content. Vendors have strong commercial motivation to publish comparison pages and the resources to make them look authoritative. That context matters when extrapolating these numbers. Other verticals may show very different patterns depending on how much competitive comparison content exists.
What else should marketers pay attention to in the Peec study?
The headline number stands out, but the rest of the study adds useful context. A few details make the findings more nuanced.
1. Peec tracked a fixed prompt set across six platforms: The study ran the same non-branded software review prompts across ChatGPT, Google AI Mode, Perplexity, Microsoft Copilot, Google Gemini, and Google AI Overviews from December 2025 through February 2026.
2. The platform differences held throughout the study: December and January showed the highest self-promo rates overall. February brought a slight improvement for Google AI Mode, Perplexity stayed consistently high, and ChatGPT remained low throughout.
3. This doesn’t fully contradict other recent listicle research: For example, Seer Interactive reported that overall listicle citations in ChatGPT declined month over month. Peec’s study narrows the focus to self-promotional listicles specifically, and those citations appear to be holding steady.
4. Peec does not endorse the tactic long term: The study frames self-promotional listicles as a risky shortcut, not a sustainable play:
- They can create a reputational downside: Buyers may see the content as self-serving rather than trustworthy.
- They carry algorithmic risk: Performance may not hold up as search and AI systems evolve.
- Better long-term options exist: Peec said it does not actively recommend the tactic given safer, more durable alternatives.
Ready to improve your AI Overview visibility?
ZeroClick Labs helps brands improve their chances of being surfaced across answer engines, AI Overviews, and AI-powered search experiences.
With deep expertise in digital marketing, search analytics, and AI visibility strategy, our team helps companies stay discoverable as search keeps changing. Whether you’re trying to understand AI Overview citation behavior, evaluate answer engine shifts, or build a stronger presence through our AI SEO services, we focus on helping your brand show up where it matters most.
“Our agency had no idea how to approach AI visibility. ZeroClick only does this one thing so they actually know what works. Worth every penny just to not waste time figuring it out ourselves.” – Jay