AI models often mirror our beliefs, rewarding us with agreeable but shallow answers. This sycophancy flatters rather than challenges, eroding judgment and candour. To gain true value, leaders must set incentives that favour truth over comfort, design prompts that demand trade-offs, and treat AI as a sparring partner rather than a mirror.
Clicks are dying, and with them provenance, nuance, and the economics of ideas. AI digests and social updates reward skim over substance, shrinking attention to ringtone length while starving original work. I argue for friction, attribution, and long-form habits, treating summaries as aperitifs, not main courses.
In their fear of missing the AI bandwagon, many boards are blindly investing in tech they barely understand, driven more by hype than strategy. Mimicry, not discernment, has become the default. The result? Strategic incoherence, wasted billions, and millions of people thrown unnecessarily out of work.
Recent advancements in Large Language Models (LLMs) show promise, but they struggle with logical reasoning due to token bias, where small input changes can lead to significantly different outputs.
By employing leaders capable of creating an AI framework — leaders alert to the unintended effects of AI on social well-being, data integrity and privacy, diversity, and governance — organisations seeking to become AI-first are well positioned to engage in trustworthy AI.
There is much hope, for to have gotten this far is to be forewarned, and thus forearmed. We do well to employ scepticism even when listening to a human interlocutor, because even the best of us fill in the blanks in our memory.
Ultimately, I do not think the problem is that we are building machines so ‘smart’ that their output is indistinguishable from human compositions. The problem is that we are educating humans whose output is so illiterate, and so devoid of experience, that it is indistinguishable from computer-generated text.