The em-dash has fallen under suspicion—treated as a tell-tale sign of artificial writing rather than what it has always been: a mark of care, rhythm, and thought in motion. It should return to good standing so we can recover linguistic standards we seem oddly eager to abandon.
A 2025 review of authority, trust, coherence and attention—plus a strategic outlook for 2026 on AI governance, provenance, regulation and decision quality. What to prioritise, what to ignore, and why clarity beats theatre.
A practical playbook for human-centred AI: redesigning workflows, building capability, governing judgment, and sustaining talent pipelines so AI amplifies human agency rather than hollowing out the organisation.
Most companies use AI, but few achieve real impact. In Part I of this series I explore why people—not tools—determine AI maturity, and why the next serious organisations will dominate the space between pilots and profit.
In an age when machines can mimic thought, the real question is who stands behind the words. A reflection on authorship, judgement, and the human presence that gives writing its authority.
AI models often mirror our beliefs, rewarding us with agreeable but shallow answers. This sycophancy flatters rather than challenges, eroding judgement and candour. To gain true value, leaders must set incentives that favour truth over comfort, design prompts that demand trade-offs, and treat AI as a critic rather than a cheerleader.
Clicks are dying, and with them provenance, nuance, and the economics of ideas. AI digests and social updates reward skim over substance, shrinking attention to ringtone length while starving original work. I argue for friction, attribution, and long-form habits, treating summaries as aperitifs, not the meal.
In their fear of missing the AI bandwagon, many boards are blindly investing in tech they barely understand—driven more by hype than strategy. Mimicry, not discernment, has become the default. The result? Strategic incoherence, wasted billions, and millions of people thrown unnecessarily out of work.
Advances in LLMs show promise, but token bias undermines their logical reliability. Small input shifts can distort outputs, posing risks in fields like medicine, law, and policy. Their dependence on pattern recognition over true reasoning demands closer scrutiny and better design.