
Generative AI could have written this introduction, but there’s a good chance it would have started hallucinating.
Hallucination, which Google failed to mention in its AI-filled 2025 keynote, was behind many, many of the year's AI fails. But it wasn't the only factor. Below, please enjoy our picks for the biggest AI fails of 2025.
Hallucinations hit academia, government, and the law
AI has been making stuff up for some time; "hallucinate" was the word of the year in 2023 for good reason.
But in 2025, the problem got a lot worse. Google AI Overviews may no longer be telling you to put glue on pizza, but they can still claim the latest Call of Duty doesn't exist.
And it's not like academics are immune. A study from Deakin University found that ChatGPT fabricated about one in five of its academic citations, while half of its citations contained other errors characteristic of generative AI hallucination.
None of this evidence has stopped politicians, publications, or lawyers. Robert F. Kennedy Jr.'s Health and Human Services Department used AI to cite studies that don't exist. The Chicago Sun-Times published a summer reading list in May full of real authors along with hallucinated book titles.
Meanwhile, lawyers and litigants in 635 cases have used AI hallucinations in their arguments.
The Friend wearable failed fast
The Friend is a wearable device that looks like a large necklace pendant. It records the audio around the wearer, sends it to a companion phone app, and uses that data to chat with the user via real-time texts.
How incredibly odd, you might think. Could such a device deepen our epidemic of isolation and loneliness, one that tech companies are already exploiting?
That didn't stop Friend from spending more than $1 million on advertisements across the New York City subway system. The ads covered more than 11,000 rail cars, 1,000 platform posters, and 130 urban panels in one of the largest marketing campaigns in NYC subway history.
The result? Commuters immediately vandalized the ads. Criticism was so widespread that the posters themselves became Halloween costumes. No wonder reviews of the Friend came with headlines noting "everybody hates it."
Most corporate AI pilots crashed
Across the business world, companies are being told they simply have to start using AI. The problem: they’re just not very good at it.
According to "The State of AI in Business 2025," a report from MIT's Media Lab, 95 percent of corporate AI initiatives fail, despite an estimated $30 billion to $40 billion in corporate investment.
“Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment,” the report explains.
"But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise-grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations."
Here’s hoping 2026 will hold fewer AI fails.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.