
The Chat Trap: When AI Makes Your Thinking Softer

AI makes language effortless, but thinking is done best when it is effortful. Beware the "chat trap": the casual use of generative AI that quietly softens judgement by replacing framing, definition, and trade-offs with fluent prose.

Narcissus gazes at his own reflection, absorbed and immobilised. The image is about self-reinforcing attention—a perfect metaphor for conversational AI, which reflects our language back to us with seductive fluency but no critical distance.

Board packs tend to land in one's inbox with the quiet menace of inevitability. Hundreds of pages. Polished prose. Crisp headings. Crisper graphs. Confidence everywhere—except in the boardroom the next morning.

A director flips through the pages and asks the question no manager wants to hear: "Where is the thinking?" The CEO, determined not to exhibit the slightest sign that anything is not going according to her plan, responds curtly: "If you would turn to appendix E you'll see..."

"No," responds the board member, "I don't mean the numbers. I mean the thinking. The chain of reasons that connects claims to evidence and evidence to a decision. The pack reads smoothly, but it does not show the company's true position. It offers conclusions without exposing premises, options without trade-offs, reassurance without an argument."

That, in miniature, is the problem with today's most common use of AI. It makes language cheap and plentiful, which tempts leaders to treat language as the output of thinking rather than the medium in which thinking is done. The model will happily draft "a strategy" or "a risk assessment" from a handful of prompts. It will generate graphs that make it look as though cognition has occurred. And if you are tired, rushed, or under pressure, you will feel the unmistakable relief of progress on that report you wished you didn't have to write.

I want to name that relief: it is the chat trap.

By "chatting" with AI I don't mean using it frequently, or using it conversationally in the Socratic sense. I mean using it casually: asking loosely framed questions, accepting the apparently fluent responses, and mistaking coherence for comprehension. This is the kind of interaction the interface encourages—quick turns, agreeable tone, rapid closure. It is excellent for customer service and basic drafting. It is a poor apprenticeship for executive or board level judgement.

The danger is not that the model will occasionally hallucinate. The deeper danger is that it will be helpful in the wrong way. Cognitive psychologists describe "cognitive offloading": shifting mental work to external aids to reduce effort. Offloading is not inherently bad; done strategically, it frees attention for more difficult work. But when offloading becomes habitual—when the tool starts doing the foundational moves—you stop practising the skill you cannot afford to lose.

🗒️
Cognitive offloading is the use of external actions or tools to reduce mental effort by shifting part of a task out of the head and into the world—such as writing notes, setting reminders, using GPS, or relying on digital systems to store, retrieve, or process information.

Offloading can improve performance in the moment, but it also reshapes how we think: when used habitually or prematurely, it can reduce engagement with the underlying cognitive work, weaken memory and reasoning over time, and distort our confidence in what we actually know. The key issue is not whether we offload, but which cognitive work we choose to keep practising.

And thinking, particularly executive thinking, is a perishable skill. Like physical conditioning, it declines when it is not exercised. It weakens when leaders stop drafting their own problem statements, stop defining their terms, stop forcing themselves to write the uncomfortable paragraph that admits uncertainty. The mind does not usually collapse in a dramatic failure; it softens. Judgement becomes more imitative, more reliant on templates, more allergic to ambiguity, more susceptible to rhetorical confidence or positional authority.

This is why "chatting" with AI will not keep your mind sharp. Yet, it reduces friction and removes the grunt work. But friction—of the right kind—is a shapening agent. If an AI session feels like a warm bath, you are not training your judgement; you are soaking it.

None of this requires you to reject AI. The point is more demanding. AI can genuinely sharpen thinking—but only when it is used to increase cognitive work at the right moments: to force clear definitions, expose assumptions, generate counterarguments, and make trade-offs explicit. Used that way, AI becomes less a conversational companion and more a sparring partner.

In this first article I explain the "chat trap" and why it makes our thinking softer. I do this in two steps. First, by clarifying what thinking is (and why writing remains its most reliable diagnostic). Second, by explaining why chat-based AI use creates cognitive ease that can quietly erode reasoning quality, even as the prose improves. In Part 2, I'll offer a practical method for escaping the trap: an implementation-ready workflow that turns AI into a discipline for judgement rather than a substitute for it.

If you've ever read an immaculate AI-generated memo and felt less confident afterwards, you already know the feeling this series is addressing. The question is whether you will let that feeling become business as usual.

Thinking Is a Practice, Not a Possession

Most professionals will tell you they "think all day". Many do. But much of what passes for thinking is retrieval, reaction, or repetition—useful, but not sufficient. Thinking, properly speaking, is the active manipulation of ideas and situations—often through language—until they become a coherent account of reality and a defensible basis for decision making. Philosophers noticed the connection between thought and language long before cognitive science made it fashionable. A contemporary synthesis comes from research on inner speech—the silent speech directed to the self—which treats internal dialogue as a cognitive tool that can improve performance on challenging tasks.

This matters for leaders because their role is unusually linguistic, and the language work grows more pronounced and more critical the higher one climbs the organisational ladder. Executive work is essentially declarative: executives are not primarily doers but people who talk, listen, think, and decide, and they therefore require linguistic sophistication. Put bluntly: leadership is often a contest of framing, explanation, persuasion, and commitment. When the language decays, judgement decays with it.

Skill decay is not a metaphor; it is a measured phenomenon. A classic meta-analysis on skill decay showed that performance declines after periods of nonuse, and that decay varies with task characteristics and retention interval. Cognitive tasks—especially those requiring complex processing rather than rote repetition—are not immune. If anything, they are vulnerable because they depend on maintained habits of attention, reasoning, and self-correction.

This is why writing has long been recommended as a tool for thinking. Not because writing is inherently noble, but because it forces us to externalise what we believe we know, putting it somewhere it can be inspected from every angle. The research base is more modest than the rhetoric, but it is real: a meta-analysis of school-based "writing-to-learn" interventions found a small positive impact on achievement, with stronger effects when writing tasks included metacognitive prompts and ran for longer. The mechanism is plausible for leaders too: the act of writing can expose weak definitions, hidden leaps, and missing premises.

A useful classical warning against false precision comes from Aristotle, who remarked that an educated person seeks only the degree of accuracy a subject permits. Management decisions are often probabilistic, political, and morally weighted; they do not become scientific because the prose sounds statistical or has a data sheet attached. Writing helps leaders see what kind of accuracy is possible—and what kind is pretending to be accurate.

There is also a practical distinction worth recovering: writing is not the same as publishing. Many executives avoid writing because they imagine it creates an obligation to share, defend, or circulate. In fact, the private memo, the scratch-pad argument, and the decision journal are thinking tools precisely because they are provisional. Publishing is performance; writing can be private.

AI complicates this in a predictable way. If you let the model draft your first articulation of a problem, you may never notice the boundary of your own understanding because the model has supplied plausible—and usually invisible—boundary-markers for you. In this scenario, you are a little like a player in a computer game, constrained by the limitations imposed by the architect.

The AI Offloading Trap

The cognitive temptation of AI is not that it is smarter than you. It is that it is effortless. The human brain budgets its effort and, like any honest budget, responds to incentives. When a tool makes a cognitive step cheap, we will do more of it. When a tool makes a cognitive step free, we will stop noticing it as a step at all.

Cognitive science has been tracking versions of this problem for years. Research on cognitive offloading examines how people use external aids—notes, devices, reminders—to reduce mental effort. When offloading is strategic, it can free capacity for higher-order work. When it becomes habitual, it can reshape what is remembered, what is practised, and what is trusted. A related effect appears in the "Google effect": when people expect information to be externally available, they are less likely to retain it internally. In other words, easy access changes the allocation of memory and attention.

The key challenge that AI presents is that it pushes offloading beyond memory into reasoning and expression. That is not automatically bad—but it is risky in leadership contexts because reasoning and expression are not separable from judgement. A recent experimental study in Computers in Human Behavior captured the trade-off succinctly: LLM assistance reduced mental effort and improved perceived writing efficiency, but participants' own reasoning quality could suffer—an ease that arrives with a cost.

The organisational version of this problem is well known in human factors research as automation bias: people over-rely on automated advice, especially under time pressure, fatigue, or cognitive load. A 2025 systematic review in AI & Society synthesised evidence that automation bias occurs across domains and is shaped by factors such as task complexity, trust, and design features. In management settings, this bias can look like reasonable deference: accepting an AI-generated analysis because it is well-structured, or because dissent would require slow work the project timeline does not permit.

What makes AI distinctive is that it mimics rhetorical confidence. Earlier decision aids were often opaque; AI is linguistically persuasive, explaining its argument and its "thinking" as it generates arguments, counterarguments, and synthesis at speed: exactly the outputs that usually signal a subject-matter expert has done the thinking. But language can simulate thought without embodying it. The model does not own a judgement; it arranges patterns of prior language. That can help you think, but it can also help you avoid thinking.

Here Michael Porter's old distinction between operational effectiveness and strategy becomes relevant once more. Operational effectiveness improves how well you perform activities; strategy involves choosing a distinctive set of activities and accepting trade-offs. AI is superb at operational effectiveness for language: drafting, summarising, reformatting, translating. It is far less reliable at the strategic core: deciding what not to do, what risk is tolerable, what values should govern, what evidence would change your mind. If leaders let operational convenience substitute for strategic judgement, they should not be surprised when their thinking converges into an artificial dialect. Porter warned that benchmarking can make rivals look alike; AI drafting can do the same to our reasoning—everyone starts to sound as though they attended the same seminar.

None of this means "don't use AI". It means something more uncomfortable: the order of operations matters. If AI supplies your first draft of a thought, you are relegated to editing—often cosmetic editing. If you begin with your own articulation—however clumsy—AI can become a true sparring partner. It is the difference between training and transport. One builds capacity; the other gets you there without using your legs.

What's Next?

When used well, AI can help with thinking, but not by doing your thinking for you. Thinking is a perishable capability, maintained through deliberate practice in language, analysis, and decision discipline. Used lazily, AI is the cognitive equivalent of a moving walkway: efficient, pleasant, and quietly corrosive of fitness. Used deliberately, it can become a demanding partner that exposes weak premises, forces clarity, and expands the range of alternatives you seriously consider.

In Part 2, I will lay out a practical method for doing exactly that: using AI as a thinking partner that strengthens rather than replaces judgement—complete with an implementation-ready workflow and prompts, including an OODA-style routine for policy and disclosure assessment.

If this argument is close to your lived experience—if you've read a beautiful AI-written document and felt your confidence drop rather than rise—subscribe so you don't miss Part 2.

Good night, and good luck.


Narcissus by Caravaggio (1571–1610) is in the public domain.

Dr Robert N. Winter

Dr Winter examines the tensions between leadership and management, the structures that hold organisations together, and the ideas that shape organisational life. His work sits where governance, culture, and strategy converge.
