Is the AI Hype Bubble About to Pop? Why We’re Buying Illusion, Not Innovation
If you're like me, you're probably seeing "AI" slapped onto everything these days, from your coffee maker to, well, serious government projects. It feels like if you don't use those two magical letters, you won't get funding, you won't attract investment, and you won't even be taken seriously. But here's the thing: while genuine breakthroughs are happening, and I truly believe in the long-term potential of this tech, we need to talk about the massive, glittery bubble that's inflating right now. It's a phenomenon where, as one professor notes, a new buzzword like "AI" replaces older ones like "creative economy" or "New Deal" as the golden ticket for government budgets and startup valuations. This overwhelming focus on the label, rather than the core technology, is exactly what we need to unpack, because it's obscuring a critical reality: 99% of what we see might just be pure fluff.
What's particularly concerning is how many companies are aggressively marketing themselves as AI-powered tech giants when their actual technological substance is debatable. You scroll through the news and constantly see companies announcing massive investments, often citing high MAUs (monthly active users) or impressive efficiency boosts, but when you ask for the technical underpinning (the research papers, the architecture, the actual benchmarks), you find almost nothing. In my experience, these are often great UI/UX service companies that simply decided to slap an "AI" sticker on their product to chase funding, meaning their entire operation could collapse if a major provider like OpenAI or Google changes its API policy. When even seasoned VCs, who should be conducting deep technical diligence, simply follow the lead of other famous investors without truly understanding the technology, it signals a deeper, structural problem in how value is being assessed in this bubble.
This leads us to a crucial distinction: are we investing in a product, or just a person's reputation? The second common type of "AI" startup isn't selling a cutting-edge product; they're selling the resumes of their founders, often leveraging impressive backgrounds at large corporations or famous universities. While experience is valuable, I've noticed that news headlines often focus more on the size of the latest funding round than on what the technology actually does or how it works. While top-tier US startups generally launch their actual products publicly, these local counterparts often inflate their value based solely on flashy visions, press releases about their high-profile talent, and what looks dangerously like an intricate network of personal connections, a kind of "clique cartel." The harsh, counterintuitive reality is that many of these highly touted companies, despite bringing in famous professors, still have practically zero global technological competitiveness, the kind of risk even Google DeepMind co-founder Demis Hassabis has warned about when discussing the broader AI bubble. It really makes you wonder who is left holding the bag when the inevitable correction happens.
Why Are Sophisticated AI Models Getting Beaten by 20-Year-Old Code?
It might surprise you to learn that the actual state of AI technology, especially in foundational areas like search, is far more humble than the headlines suggest. Even titans like Google are being remarkably honest about their limitations, which is genuinely refreshing amidst all the noise. You see, while many of us now instinctively turn to tools like ChatGPT or Gemini when we have a complex question (I do too; if I have a health query, I'm asking a chatbot), the reality is that LLMs aren't replacing search; they are essentially highly sophisticated extensions of it. Under the hood there are two types of search: old-school keyword-based search, which matches the literal words in your query, and newer, AI-driven embedding-based search, which understands the conceptual meaning of your query instead.
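To make that distinction concrete, here's a minimal Python sketch of both mechanisms. The toy vectors are invented for illustration (a real system would get them from a sentence-embedding model), and real keyword engines use much smarter scoring than raw word overlap.

```python
import numpy as np

# Keyword search: count literal word overlap between query and document.
def keyword_score(query: str, doc: str) -> int:
    return len(set(query.lower().split()) & set(doc.lower().split()))

# Embedding search: compare meaning via vector similarity. In a real system
# these vectors come from a sentence-embedding model; the numbers below are
# invented purely to illustrate the mechanism.
toy_vectors = {
    "a quiet place to rest near gangnam": np.array([0.9, 0.1, 0.8]),
    "seoul peaceful hotel":               np.array([0.85, 0.15, 0.75]),
    "gangnam nightclub guide":            np.array([0.1, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "a quiet place to rest near gangnam"
for doc in ("seoul peaceful hotel", "gangnam nightclub guide"):
    print(f"{doc!r}: keyword={keyword_score(query, doc)}, "
          f"embedding={cosine(toy_vectors[query], toy_vectors[doc]):.2f}")
```

Notice the inversion: the keyword scorer actually prefers the wrong document because it matches on "gangnam," while the embedding scorer finds the semantically right one despite zero shared words.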
source: https://arxiv.org/pdf/2508.21038
Embedding-based search, which powers pretty much every modern AI tool you use, from Perplexity to Google's own AI features, has been heralded as the future. It excels when you ask for something like "a quiet place to rest near Gangnam" instead of just "Seoul peaceful hotel," because it understands the semantic relationship between the terms. But here's the most shocking insight from a recent Google DeepMind paper: this advanced embedding-based search faces a structural limitation that completely breaks our expectations. You would logically assume that the more data you feed an AI, the better its performance gets, right? Nope. When DeepMind scaled their experiments up to billions of records, embedding-based search performance absolutely plummeted.
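The effect is easy to reproduce in miniature. The toy experiment below (my own illustration, not the paper's setup; the dimensions and noise level are arbitrary choices) packs more and more random documents into a fixed-size embedding space and counts how many of them outscore the one document a query is actually about.

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 32  # a fixed embedding size: the "closet" that never gets bigger

for n_docs in (100, 10_000, 1_000_000):
    # Random unit vectors stand in for document embeddings.
    docs = rng.standard_normal((n_docs, dim), dtype=np.float32)
    docs /= np.linalg.norm(docs, axis=1, keepdims=True)

    # Average over a few queries, each a noisy view of one relevant document.
    buried = []
    for _ in range(5):
        target = rng.integers(n_docs)
        query = docs[target] + 0.3 * rng.standard_normal(dim, dtype=np.float32)
        sims = docs @ query
        buried.append(int((sims > sims[target]).sum()))
    print(f"{n_docs:>9,} docs -> ~{np.mean(buried):.0f} lookalikes outrank the true match")
```

The exact counts depend on the random seed, but the direction doesn't: hold the number of dimensions fixed, keep adding documents, and the lookalikes eventually crowd out the thing you were actually searching for.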
This technological regression is truly astounding. The paper showed that a traditional, twenty-year-old keyword search algorithm utterly dominated the latest deep learning models, posting a staggering 97.8% accuracy, while sophisticated, massive models from Google and Snowflake struggled to break 20%. Think about it: billions of dollars and years of AI research were spectacularly defeated by legacy code that's old enough to drive. I like to use this analogy: if your closet gets infinitely bigger, it doesn't become easier to find your shirt; it becomes impossible unless you have clear, organized compartments for seasons, colors, and styles. The same applies to AI: simply expanding the database without structural improvements turns a massive, expensive model into a less capable system than its ancient predecessors.
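If you're curious what a "twenty-year-old keyword algorithm" actually looks like, here's the textbook Okapi BM25 scoring function, the classic lexical baseline in retrieval papers of this kind (that it's the specific baseline in the DeepMind paper is my assumption). It's about twenty lines, and no GPUs are required.

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score every document in `corpus` against `query` (both tokenized)
    using the classic Okapi BM25 formula."""
    N = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / N
    # Number of documents containing each query term
    df = {t: sum(1 for doc in corpus if t in doc) for t in set(query)}
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        score = 0.0
        for t in query:
            if tf[t] == 0:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            norm = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores

corpus = [doc.lower().split() for doc in (
    "cheap hotel near Gangnam station",
    "quiet guesthouse in Seoul",
    "Gangnam restaurant reviews",
)]
print(bm25_scores("gangnam hotel".split(), corpus))  # first doc scores highest
```

The whole thing is term counting plus a length penalty. That this can embarrass billion-parameter models at scale is the paper's uncomfortable punchline.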
Are We Mistaking Statistical Guesswork for True Understanding?
We have to face the fact that what we currently call "AI" is nowhere near the AGI (artificial general intelligence) we imagine. If AGI is a perfect score of 100, we are currently sitting somewhere around 8 to 10; we still have a long, long way to go, and the industry's hype is wildly overblown for where we are today. Even experts who are deeply committed to the future of this technology, myself included, acknowledge this severe short-term over-exaggeration. What's often overlooked by the general public is that the latest models, whether it's Gemini 3.0 or GPT-5.1, are fundamentally still probability engines: they are brilliant at statistically guessing the next most plausible token in a sequence, but they don't possess any actual understanding of what they are saying.
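If "probability engine" sounds abstract, here's the entire trick at the output end, in a few lines of Python. The vocabulary and logits are invented stand-ins for what a real transformer would produce after reading "The capital of France is"; only the softmax-and-sample mechanism is the faithful part.

```python
import numpy as np

# Toy vocabulary and invented logits for the context "The capital of France is"
vocab = ["Paris", "London", "a", "banana"]
logits = np.array([6.0, 2.5, 1.0, -2.0])

# Softmax: convert raw scores into a probability distribution
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# The model doesn't "know" Paris is correct; it has simply learned, from
# text statistics, to assign it the highest probability in this context.
rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

Scale that loop up to trillions of parameters and you have today's chatbots: dazzling statistics, zero comprehension.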
This distinction is massive, and it's where the bubble really starts to inflate. If you ask Gemini 3.0 whether it understands its responses or is just predicting tokens, it will candidly admit it's the latter. Yet I'd bet that if you stopped 100 people on the street, 99 wouldn't realize that their favorite chatbot is essentially an advanced statistical machine, not a conscious intellect. This widespread misunderstanding lets companies market their products with inflated claims, particularly to institutions that lack the internal expertise to vet AI effectiveness. I've read reports detailing how one company claimed their app's accuracy was between 76% and 83%, but when researchers independently audited the product, the actual performance was just 63%, only slightly better than flipping a coin.
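That gap is exactly the kind of thing a buyer can check. Below is a minimal sketch of such an independent spot check; the function, the 200-case sample, and the numbers are hypothetical, invented to mirror the shape of that report, not taken from it.

```python
import math

def audit_accuracy(predictions: list, labels: list, claimed: float) -> None:
    """Compare a vendor's claimed accuracy against your own labeled sample."""
    n = len(labels)
    correct = sum(p == y for p, y in zip(predictions, labels))
    measured = correct / n
    # Rough 95% confidence interval via the normal approximation
    half_width = 1.96 * math.sqrt(measured * (1 - measured) / n)
    print(f"measured {measured:.1%} (±{half_width:.1%} at n={n}) vs claimed {claimed:.0%}")
    if claimed > measured + half_width:
        print("-> the claim sits outside the confidence interval; treat it as marketing")

# Hypothetical spot check: 200 cases labeled by hand, 126 answered correctly
preds = ["right"] * 126 + ["wrong"] * 74
truth = ["right"] * 200
audit_accuracy(preds, truth, claimed=0.80)
```

A couple of hundred hand-labeled cases is enough to tell a 63% product from an 80% claim; the interval math is there precisely so a lucky (or unlucky) sample doesn't fool you.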
It's clear that we're often buying a fantastical vision of AI rather than the grounded reality, which is exactly why books like The AI Bubble Is Coming emphasize that most corporate AI claims are fundamentally flawed or exaggerated. But hey, this isn't all doom and gloom; it's about being smart consumers and investors. We need to apply common sense to AI claims, just as we would to a new food product: if a company claims 99% accuracy, spend a few minutes testing it yourself (the sketch above is all it takes). While many companies out there are certainly wrapped up in the AI bubble, there are real innovations too, the ones that make you genuinely feel that this is the future. So let's focus on those companies, the 1% that are the real deal, and keep a sharp eye on the tech, not just the terrific marketing budget.