I've heard the stat in at least a dozen strategy meetings this year. 95% of AI projects fail.
A number doing the thinking so nobody has to.
The percentages shift depending on who's citing them. Sometimes 80, sometimes 90. The narrative holds: AI is expensive, complex, and likely to disappoint.
ERP systems had similar failure rates in the '90s. By 1999, roughly 60% of Fortune 500 companies had them anyway. Most went over budget, over schedule, and underdelivered. Today, ERP is foundational infrastructure. The failure narrative didn't prevent adoption. It just meant expectations eventually met reality instead of vendor promises.
AI is in that awkward middle stage. Past the hype, not yet normalized. Which is exactly when the failure metrics get loudest.
I watched this play out at a company I was selling into last year. Eight months on a customer service AI project. The board called it a failure because it didn't hit the 30% cost reduction target. What it actually did was surface that their ticket routing was broken, their knowledge base was stale, and their best agents were spending 40% of their time on workarounds nobody had documented.
The AI didn't fail. It did what a good diagnostic does. Showed them the actual problem.
What counts as failure matters. A project that launched but missed its original ROI targets. A system that works but cost more than planned. A team that learned enough to pivot. The research rarely distinguishes between these. Most studies measure against initial projections, and initial projections for new technology are almost always optimistic. Not because people are dishonest. Because the unknowns are unknown.
The stat creates a psychological anchor. Permission to wait. Let someone else figure it out first. But that assumes every organization fails for the same reasons. A Fortune 500 manufacturer isn't a mid-market SaaS firm. The conditions that create failure in one context are irrelevant in another.
What actually correlates with success is boring. A narrow problem definition. Existing data infrastructure, or a realistic budget to build it. Leadership that can tolerate ambiguity. Teams that understand this is iteration, not installation.
The variable that matters most isn't talent or budget. It's clarity about the actual problem.
I've sat in enough of these rooms to recognize the pattern. Someone presents the 95% stat. The room gets cautious. The scope narrows to something so safe it's meaningless, or so ambitious it's doomed. Both responses treat the stat as a verdict instead of a question.
The company that "failed" at customer service AI ended up fixing their routing, updating their knowledge base, documenting the workarounds. Then they relaunched. It worked. The first attempt wasn't failure. It was expensive discovery.
The stat that should matter isn't "95% of AI projects fail." It's "95% of organizations don't know what they're trying to fix."
The AI is just the exposing agent.