
An Eight-Part Playbook for Human-Centred AI

A practical playbook for human-centred AI: redesigning workflows, building capability, governing judgment, and sustaining talent pipelines so AI amplifies human agency rather than hollowing out the organisation.

Geometric abstraction by El Lissitzky (_Proun 19D_, 1920) showing overlapping planes and lines in dynamic spatial relation, evoking structured systems and layered design.
El Lissitzky’s Proun 19D explores how structure, orientation, and movement emerge from deliberate design rather than decoration. It is emblematic of the argument here: value from AI comes not from adding tools, but from re-engineering the underlying systems through which work is organised and decisions are made.

In Part I, I showed that most organisations now do AI in the same way they once pursued total quality management or digital transformation: enthusiastically, broadly, and with limited impact. The difference today is the scale of the stakes. AI touches the essence of work—judgment, autonomy, responsibility—and thereby exposes the limits of managerialism. The companies producing real value are not the ones adopting the most tools, but the ones creating the conditions under which people can use those tools to do better work.

🗒️
The Scale of the AI Surge

The Manhattan Project cost roughly $30B in today’s dollars; the Apollo program about $298B.

Since 2013, investors have poured $1.6T into AI—2025 alone is forecast to add another $375B, eclipsing both historic programs combined.

In the first quarter of 2025, AI startups drew $70B, nearly 60% of all global venture funding.

Part II turns from diagnosis to practice. What follows is a structured playbook, grounded in the evidence from McKinsey, the BCG Henderson Institute and Columbia Business School, and the governance research now emerging in director circles—most notably from the AICD and Human Technology Institute. It reflects the oldest insight in organisational life: "We become just by doing just acts". Capabilities emerge from practice, not policy.

This eight-part playbook shows managers how to create the practices—and the conditions—that allow AI to compound human capability rather than displace or diminish it.

1. Start with the Work, Not the Tools

Most AI programs begin with a catalogue of technologies or use cases. This is an understandable reflex, reinforced by vendor incentives and the cultural prestige of the technical. But competitive advantage originates in outcomes—what people achieve—not in general-purpose technologies.

A human-centred AI program therefore begins with a simple sequence:

  1. Map the work: Identify the decisions, judgments, and routine tasks within each workflow.
  2. Classify the tasks: Sort them by required human expertise, frequency, error tolerance, and data availability.
  3. Isolate the value areas: Where does AI amplify human judgment? Where does it automate low-value work? Where does it create new forms of work altogether? (A minimal sketch of this triage follows the list.)
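
To make the sequence concrete, here is a minimal sketch in Python. The names, categories, and thresholds (WorkItem, AIRole, the "low/medium/high" labels) are illustrative assumptions of mine, not a standard taxonomy or anything prescribed in the reports cited above; the point is only that the classification can be made explicit and debatable before any tool is chosen.

```python
# Hypothetical sketch: triaging work items before choosing AI tools.
# All names, labels, and rules are illustrative assumptions, not a standard.
from dataclasses import dataclass
from enum import Enum


class AIRole(Enum):
    AMPLIFY_JUDGMENT = "amplify human judgment"
    AUTOMATE = "automate low-value work"
    REDESIGN = "candidate for new forms of work"
    LEAVE_ALONE = "leave with humans for now"


@dataclass
class WorkItem:
    name: str
    expertise: str           # "low" | "medium" | "high" judgment required
    frequency_per_week: int
    error_tolerance: str     # "low" | "high" cost of a mistake
    data_available: bool


def triage(item: WorkItem) -> AIRole:
    """Rough first-pass classification of a single task or decision."""
    if not item.data_available:
        return AIRole.LEAVE_ALONE
    if item.expertise == "low" and item.error_tolerance == "high":
        return AIRole.AUTOMATE            # routine, forgiving, data-rich
    if item.expertise == "high":
        return AIRole.AMPLIFY_JUDGMENT    # humans decide, AI assists
    return AIRole.REDESIGN                # mid-ground: rethink the workflow


if __name__ == "__main__":
    backlog = [
        WorkItem("invoice matching", "low", 400, "high", True),
        WorkItem("credit approval", "high", 50, "low", True),
        WorkItem("customer triage notes", "medium", 200, "high", True),
    ]
    for item in backlog:
        print(f"{item.name}: {triage(item).value}")
```

Even a rule set this crude forces the conversation the playbook asks for: which judgments stay with people, which routines are safe to automate, and which workflows deserve redesign rather than a bolt-on tool.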

The firms that realise real value from AI do something beyond tactical automation. A common theme across the 2025 state-of-AI reports is that the small subset of organisations achieving measurable returns does so by reshaping and reinventing workflows, not merely trimming cost lines by automating people's jobs. These organisations integrate AI deeply into business processes, align management, governance, and data foundations, and engage in strategic workforce planning, rather than treating the technology itself as the answer to a business problem.


While AI use accelerates on the ground, governance often lags behind. Governing Digital with Courage and Clarity offers a practical framework for boards seeking to govern AI with coherence, courage, and credibility; it is available to subscribers.