In the past year, the boardrooms I advise have begun to sound uncannily alike. The vocabulary varies—pilots, proofs of concept, responsible AI, agents, risk dashboards—but the substance is monotonously consistent. Everyone, it seems, is doing something with AI. Yet almost no one can point to a serious, material improvement in performance. Of course, confront the average CEO or board chair with this reality and you will be met with vehement disagreement, and understandably so; after all, billions have been poured into the AI revolution. For the average organisation, however, the outcomes have been tepid at best.
Opinions, on the other hand, are boiling hot. But as Protagoras (c. 490 BC – c. 420 BC) put it some two and a half thousand years ago:
Of all things the measure is Man, of the things that are, that they are, and of the things that are not, that they are not.
In much of today's corporate AI discourse that is exactly what is happening: people with positional authority are dressing opinion up as inevitability.
The data, by contrast, are sobering: '88 percent report regular AI use in at least one business function', yet 'most organisations are still in the experimentation or piloting phase', and 'nearly two-thirds of respondents say their organisations have not yet begun scaling AI across the enterprise.' Perhaps most damning of all is that 'just 39 percent report EBITDA impact at the enterprise level.'
In every presentation I've given this year, those numbers have landed like a dull thud on well-polished board tables. Impact remains the outlier, not the norm.
The organisations achieving meaningful returns are not tinkering with ever more varied tools or stitching together dashboard curiosities. They are rebuilding workflows, not tweaking them, and creating new products, not merely automating existing ones. They treat AI as a structural shift rather than an upgrade that can save on headcount. The rest are still caught in what has been termed the 'tame house of habit'—incrementalism masquerading as innovation.
AI is everywhere; impact is not.
And in what must be seen as a widening gap between the haves and the have-nots sits the defining strategic question of the decade: does AI make our organisations more or less human?
The False Binary: Technology or People?
The usual explanation for this shortfall is that companies have not found the right use cases. A more sophisticated version of this line of thinking seeks to blame fragmented data estates, technical debt, or insufficient model governance. All true, in part. But these are not the decisive variables.
The BCG Henderson Institute, working with Columbia Business School, identified something much more elemental: companies that put employees at the centre of their AI strategy are seven times more likely to be AI-mature. Their findings are remarkably consistent across markets:
- Employees in people-centric AI companies are 70% more likely to feel enthusiastic and optimistic about AI adoption.
- They are 92% more likely to feel well-informed about the organisation's AI strategy and tools.
- They are 57% more likely to see their company as a faster adopter of technology than competitors.
These are not cosmetic differences in sentiment. They are antecedents to behaviour—specifically, the behaviours required to redesign work. As I wrote in Trust in the Age of the Machine, tools alone do not transform organisations. Judgment does. And judgment sits with people, not models—much less software.
The Emerging Operational Frontier: Agency and Autonomy
Compared with a year ago, the emerging data reinforce a familiar pattern: the spread of AI continues, but genuine scale remains elusive—and with it EBITDA impact. Many organisations have deployed tools, yet few have converted them into productised use cases, re-engineered their workflows for agentic systems, or built the institutional scaffolding—platforms, standards, and guardrails—required to operate at scale.
In my work with firms, I find that the largest players unsurprisingly move faster; they possess the capital, data, and organisational muscle to push further along the adoption curve. Those reporting material EBITDA impact are typically the ones that have already crossed from experimentation into disciplined scaling. Most executives insist they are pursuing efficiency. The more interesting ones recognise that efficiency is merely table stakes: the real gains appear when technology is used not only to streamline work, but to reshape it.
Yet outside the heady circles of C-suite and board, the mood oscillates between excitement and unease. The freighted question—sometimes whispered, sometimes blunt—is whether agents will take jobs.
History is instructive. In every major technological transition, junior roles absorb the first shock. AI appears set to follow the same pattern: over the coming year, roughly a third of respondents to a recent survey anticipate a reduction in headcount, just under half expect no material change, and a small minority foresee an increase. For boards and executives, this raises uncomfortable questions about talent pipelines, succession planning, and the future shape of the organisation.
Yet the better question—the strategic one—is not which roles go, but how much agency remains with those who stay.
The good news, for people with real capabilities and credentials, is that firms scoring highly on employee-centricity—those that invest early in communication, training, and shared understanding—are seven times more likely to be AI-mature than those that do not. What this suggests is not a technical advantage but an institutional one. Where employees are informed, engaged, and brought into the design of change, organisations absorb new capabilities like AI more readily and move from experimentation to scale with far less friction. The goal therefore is not substitution but super-agency: people empowered to do more, faster, with better judgment.
This mirrors a point made in the governance literature. A recent briefing from the AICD to directors warned that boards should avoid treating AI as a purely technical domain requiring specialist oversight. Instead, the challenge is to ensure that all directors 'have a level of AI literacy that enables them to challenge management on the nature and quality of these AI inputs, even if not actively using AI themselves.' In other words, AI literacy is becoming as critical for directors as financial literacy.
Why "tool-centric" AI Strategies Underperform
I have often written about the critical distinction between authority (grounded in reasoned elaboration, expertise, and mutual respect) and managerialism (grounded in power, procedure, and coercive control). This distinction is particularly germane in the realm of AI, with many programmes falling into the latter category: deploying tools at people rather than with them. The result is predictable: superficial compliance, limited learning, and minimal impact.
A tool-centric AI programme suffers from five recurrent pathologies:
- Lack of meaning: Employees cannot connect the technology to the organisation's purpose or strategy.
- Lack of capability: Training is episodic, not continuous; basic fluency never matures into genuine mastery.
- Lack of psychological safety: People avoid experimentation because the risk of failure is social, not technical.
- Lack of guardrails: Governance focuses on model risk rather than decision-making risk.
- Lack of behavioural incentives: Organisations reward outputs, not the upstream behaviours that generate them.
The outcome is organisational theatre: institutions that act without deliberation, mistaking motion for progress.

The Real Work: Rebuilding the Social Contract of Work
If the early productivity illusions of AI have taught boards and management teams anything, it is that technology cannot compensate for a hollow social contract. Employees who fear replacement rarely experiment. Teams that lack psychological safety rarely redesign workflows. Organisations that punish failure suffocate innovation before it begins. The result is Governance in Name Only (GINO).
This explains why employee sentiment is such a strong predictor of AI maturity. AI succeeds where trust already exists. It fails where the cultural substrate is brittle.
Boards are beginning to grasp that they are not merely responsible for AI risk oversight; they are responsible for the conditions under which AI can create value. This includes:
- Workforce transition planning: not just headcount planning.
- Ethical clarity: articulating what decisions must have a human origin.
- Transparent communication: especially around role redesign and workforce planning.
- Investment in capability building: sustained and organisation-wide.
- Reward systems: recognising the behaviours that shape AI maturity.
The central insight is simple but disruptive: a human-centred AI strategy is, above all, a leadership strategy.
Where the Next Serious Companies Will Appear
In every period of technological upheaval, a cohort of leaders emerge that see the world differently—and by that, I mean better. They do not merely adapt to the new technology; they reorganise around it. They treat the shift not as a cost-reduction exercise but as a moment to reimagine the work, the product, and the organisation.
The next serious companies will appear in the gap between pilots and profit. They will distinguish themselves not through AI models, compute metrics, or vendor contracts, but through judgment, governance, and the way they treat their people.
This is the hinge between Part I and Part II of this series.
Part I has outlined the strategic context: the stalling of impact, the centrality of people, the organisational pathologies that hold companies back, and the governance implications for leaders and boards.
But the practical question remains unanswered:
What should leaders actually do to build a human-centred AI strategy that works?
How do they design the workflows, incentives, guardrails, training, structures, and governance mechanisms that convert AI from a pilot factory into an engine of enterprise performance?
That is the task of Part II, available exclusively to members next week.
Good night, and good luck.
Polyphonic Architecture (1930) by Paul Klee (1879–1940) is in the public domain.
