
SAN JOSE, CALIFORNIA – MARCH 18: Nvidia CEO Jensen Huang delivers a keynote address during the Nvidia GTC Artificial Intelligence Conference at SAP Center on March 18, 2024 in San Jose, California. The developer conference is expected to highlight new chip, software, and AI processor technology. (Photo by Justin Sullivan/Getty Images)
Organizations are spending more on AI than ever before, and getting less back than they expected.
Gartner research finds that only one in 50 AI investments delivers transformational value, and only one in five delivers any measurable return on investment. MIT Media Lab’s Project NANDA, which analyzed more than 300 publicly disclosed AI deployments and surveyed 153 senior leaders, found that 95% of enterprise AI pilots delivered no measurable P&L impact — with just 5% of integrated systems creating significant value, according to its GenAI Divide: State of AI in Business 2025 report. Meanwhile, companies are preparing to more than double their AI spending in 2026, from an average of 0.8% of revenue to 1.7%, according to BCG.
The gap between what leaders invest and what they get back has a name in Gartner's Hype Cycle: the Trough of Disillusionment. But calling it disillusionment implies the problem is emotional. The evidence points to something more structural — a leadership accountability gap that most organizations have yet to name, let alone fix.
Who’s Making the Decisions
Three-quarters of CEOs now serve as their organization’s primary AI decision-maker, according to BCG research published in January 2026. The Conference Board’s 2026 C-Suite Outlook Survey confirms that AI has moved rapidly from the margins of corporate strategy to the center — simultaneously identified by CEOs as a top investment priority, a leading external risk, and a governance concern.
That concentration of authority at the top would be appropriate if CEOs had the frameworks to evaluate what they’re approving. Most do not. A Gartner survey found that fewer than 30% of CEOs were satisfied with returns on their AI investments, yet spending continues to accelerate. When a leader cannot define what success looks like before a project launches or identify failure when it arrives, accountability has no place to land.
The result is a pattern that repeats across industries: AI initiatives are approved, deployed, and then quietly shelved when they underperform — without a clear owner, a post-mortem, or adjustments to the criteria for the next investment.
The Governance Gap
The issue isn’t that AI doesn’t work. In many contexts, it works extremely well. The issue is that most organizations have treated AI adoption as a technology decision when it is fundamentally a leadership and governance decision.
Harvard’s Division of Continuing Education identifies accountability as a foundational requirement of responsible AI governance: every significant AI decision must have a designated business owner who can explain, adjust, and answer for its outcomes.
The Conference Board’s survey found that CEOs identify AI governance as a concern even as they identify AI as their top investment priority — a contradiction that reveals how few have resolved the tension between moving fast and maintaining oversight. Decisions about the pace and scope of AI implementation are being made in silos, without cross-functional accountability or defined success metrics.
Kyndryl’s 2025 Readiness Report, which surveyed 3,700 senior leaders across 21 countries, found that 62% of organizations have not advanced their AI projects beyond the pilot stage — investing heavily without ever reaching operational scale.
What Accountability Actually Requires
The organizations pulling measurable returns from AI share a common discipline: they define what they are measuring before they deploy, not after.
In one of the most cited early examples of benchmarked AI ROI, Ally Financial reported in 2023 that marketers reduced campaign production time by an average of 34% — a figure made possible because the organization established a productivity baseline and measured against it. That kind of specificity is the exception. Most companies, according to MIT’s Project NANDA, never reach the point of measuring financial return on their AI investments, which leaves them with no feedback signal to navigate by: they cannot distinguish what is working from what is not.
Effective AI governance at the leadership level requires four things that most organizations currently lack:
- Defined success criteria before deployment. Every AI initiative should have a measurable outcome identified in advance — not a vague directive to “improve efficiency,” but a specific, time-bound delta that can be evaluated within the same budget cycle as the spend.
- A designated business owner for every significant AI decision. Technology teams can build and deploy. But accountability for whether AI advances organizational goals belongs to a business leader, not a platform.
- A structured evaluation cadence. AI initiatives that survive the first six months without a formal review tend to drift — continuing to consume resources while delivering diminishing returns. Quarterly reviews with clear go/no-go criteria are not bureaucratic friction. They are how leaders stay in control of what they have authorized.
- A culture where reporting failure is expected. Gartner’s research suggests that one reason AI ROI data is so sparse is that organizations suppress it. Leaders who signal that an honest accounting of AI performance — including failures — is valued will receive better information and make better decisions than those who do not.
The Human Variable
There is a deeper leadership question underneath the ROI data.
The organizations where AI is delivering measurable value have something beyond good governance frameworks. They have leaders who understand that AI adoption is a human change management challenge as much as a technology one. Deploying AI into a workforce that doesn’t understand what it is for, doesn’t trust how it makes decisions, and hasn’t been trained to work alongside it will produce exactly the failure rates the data describes.
Deloitte’s 2024 Human Capital Trends Report found that organizations where leaders actively build trust around AI realize greater benefits and more balanced integration outcomes. Trust is not a soft variable here. It is a performance variable.
The CEOs in BCG’s survey who are doubling their AI budgets in 2026 are not wrong to invest. AI’s potential is real, the competitive pressure is real, and the organizations that figure out how to deploy it effectively will have genuine advantages. But the investments that deliver are the ones led by executives who define what winning looks like, assign accountability for getting there, and create the conditions for honest reporting when they don’t.
Spending more on AI is not a strategy. Knowing what you are buying, who owns it, and how you will know if it worked — that is.




