At the start of the board meeting, the CEO did something unusual. Rather than opening with a presentation, she shared a one-page decision record. It set out the decision required, three credible paths forward, the assumptions each relied on, and a brief pre-mortem outlining how the preferred option could fail. The writing was plain—almost austere. But the reasoning was explicit. When a director pressed on a weak point, the CEO did not argue. She explained how the challenge had already been considered, what evidence had shifted her view, and what would need to change for the decision to be revisited.
That is the standard to aim for: AI not as a ghost-writer, but as a disciplined adversary and amplifier—an extension of cognition that forces clarity rather than flattering vagueness.
The intellectual framework for this is older than ChatGPT. Clark and Chalmers famously argued that cognition can extend beyond the skull into tools and environments—when those tools are reliably available and functionally integrated into the person's thinking loop. LLMs can become such tools. But an "extended mind" is not the same as an "outsourced mind". Extension requires endorsement: the human must understand what the tool is doing, test it, and remain responsible and accountable for the judgement.
The practical challenge, then, is to design a thinking partnership with AI that increases cognitive work where it matters, and offloads only what is safe to mechanise.
Design Your AI Use around Cognition, not Convenience
The most useful rule I've heard about LLMs does not come from academia; it comes from practice. In a talk on AI in journalism, Zach Seward described effective AI workflows as "humans first and humans last, with a little bit of powerful, generative AI in the middle to make the difference". If you want AI to improve your thinking rather than replace it, translate that aphorism into three operational constraints.
First, Humans Must Frame the Question
Framing is the executive act—executive as the adjective, not the organisational rank. It means deciding what the problem is, what counts as evidence, and what "good" means. If you delegate that to the model, you will receive a coherent answer to a poorly formed question—a service that is technically impressive but managerially ruinous. This is why journals force authors to define terms: the definition silently determines the argument's destination. Sound high-level judgement depends on our ability to use "noble" language—language that clarifies, offers reasons, and supports autonomous decision-making. AI can help you refine definitions, much as the dictionary gathering dust on the shelf once did, but it should not choose the terms for you.
Second, the Model Must Be Used to Create Friction, not Remove It
The danger of AI is not merely error; it is cognitive ease. Studies have found that LLM support can reduce mental effort—precisely what busy executives crave—but with potential compromises in reasoning quality. So treat ease as a warning light. If the model makes the task feel instantly simple, you should assume you are skipping a cognitive step that normally protects judgement.
A useful countermeasure is to use AI in ways that increase effort at the right points: forcing you to specify assumptions, choose between competing framings, and respond to counterarguments. There is encouraging evidence that, in educational contexts, structured AI use can influence thinking skills. For example, an experimental study in Computers and Education: Artificial Intelligence found that students who used ChatGPT in course tasks showed discernible changes across critical, reflective, and creative thinking measures compared with a control group using traditional search tools. The implication for leaders is not that a tool like ChatGPT makes people wise by osmosis—much less that it puts "PhD-level" abilities in your pocket. It is that tools can shape cognition when they are embedded in deliberate learning designs.
Third, Humans Must Verify, Decide, and Record
Automation bias research exists for a reason: people over-trust machine output, especially when it is plausible, well-presented, and delivered under pressure. The antidote is not vague claims of "human oversight"; it is specific verification steps and decision records. AI should help you produce a better audit trail of reasoning, not a more persuasive paragraph for the next board pack.
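To make "specific verification steps and decision records" concrete, here is a minimal sketch of what such a record might capture, modelled on the one-page record in the opening anecdote. The structure, field names, and example content are illustrative assumptions, not a prescribed template; the same information can live in a memo or a standard form if that suits your organisation better.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DecisionRecord:
    """A one-page decision record: what is being decided, and on what reasoning."""
    decision_required: str             # the single decision being asked for
    options: List[str]                 # credible paths forward
    assumptions: Dict[str, List[str]]  # option -> the assumptions it relies on
    pre_mortem: str                    # how the preferred option could fail
    verification: List[str] = field(default_factory=list)  # what a human checked, against which source
    revisit_trigger: str = ""          # what would have to change to reopen the decision

# Illustrative example (hypothetical content)
record = DecisionRecord(
    decision_required="Approve entry into market X this financial year?",
    options=["Enter now", "Run a six-month pilot", "Defer"],
    assumptions={
        "Enter now": ["Current demand estimates hold"],
        "Run a six-month pilot": ["A credible pilot partner exists"],
        "Defer": ["Competitors will not lock up distribution"],
    },
    pre_mortem="If demand is 30% below forecast, payback slips beyond three years.",
    verification=["Demand figures traced to the source report, not the model's summary"],
    revisit_trigger="Two consecutive quarters of demand below forecast.",
)
```

The value is not the code; it is that fields such as verification and revisit_trigger force exactly the human steps that automation bias tends to erode.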
A Practical Approach for AI-assisted Thinking
The following routine is designed for executive and board contexts, where the point is not to generate content but to reach defensible decisions faster without degrading judgement. It is deliberately repetitive: the repetition is what turns AI into a thinking gym rather than a drafting service. It can also be used at any level of the organisation where knowledge work is done.
Do not enter confidential or sensitive information into AI tools unless they are explicitly approved by your organisation.
Public or unapproved AI services may store, process, or reuse inputs in ways that breach confidentiality, regulatory obligations, or contractual duties. That includes, but is not limited to, board materials, personally identifiable information (PII), commercially sensitive strategy, financial data, and unpublished analysis.
Treat AI prompts as disclosures. Use only company-approved services, with clear data-handling assurances, and assume anything entered could be exposed beyond your control.