
Could AI Have Written This?

This week I look at how AI is improving the quality of writing in business reports, but also the way in which it poses a challenge for evaluating a report's content and logic. This is because business decisions are often based on the form (aka proxies) rather than the detailed substance of a report, meaning AI may exacerbate the approval of poor recommendations by producing persuasive but flawed arguments. Ultimately, decision makers should be more informed, critical, and aware of the potential limitations and biases of AI.

I will confess to enjoying work a little more these days. No, this is not because my Pointy-haired Boss has become more of a pleasure to work for, nor because the organisation has slipped its earthly bonds and become a transcendent environment. It is because I am increasingly reading reports that are well written. By this I do not mean there has been a remarkable and unprecedented uplift in general literacy with Burke, Nietzsche, Voltaire, and company quoted in the corridors of power. Rather, that an increasing number of people are using AI to write their reports. It is proving something of a literary Anthropocene for the corporate world.

While I for one welcome the change (if I am going to read poorly argued thinking, I would rather it be well written), many colleagues and associates are not so happy about it. The reason for this is that in the genre of business literature, form is substance. In other words, so long as a report ticks all the right boxes for presentation and language, it is highly likely to find favour. For the average report, form reigns supreme.

Don't believe me? It is easy to test: simply put up two papers. The first with a fundamentally sound business case, but ticking every box for what not to do when writing a paper (long narrative sentences, no bullet points, financials presented in a mix of formats, use of terms outside the organisational lexicon). Then present a second paper which is a failure of strategy and planning, but marvellously well written. If you are not up for putting your brand on the line with such an experiment, then simply look back over the papers that sailed through committee and yet, with hindsight, proved to be strategic and planning failures. They are all a triumph of form (aka proxies) over substance (aka reasoned deliberation).

It is important to note I am not arguing that form should not matter; quite the contrary. I am only underscoring that form in an organisational report is more than a literary flourish. Because of this, many managers are becoming increasingly worried as the old proxies are no longer the litmus test they once were. So, what do we do now?

Did AI Write This?

First up, for those seeking to critically assess the abilities of their staff or colleagues, asking a system whether it wrote a given piece is unlikely to be very helpful. An associate of mine tried this recently when they became suspicious of a report which seemed 'very well written'.

My associate had leapt to a conclusion and was getting ready to put the staff member up against a wall and thrash them with a wet lettuce, so I waded into the debate and used my own literary output as a warning from content.

Taking my article The Shape Of Things That Are, I asked ChatGPT and Copilot 'Could AI have written the following article?' In ChatGPT's case, it responded:

It's possible that an AI could generate a similar article, especially considering its structure and argumentative style. The article reflects on contemporary issues regarding academic freedom, the role of data in discourse, and the historical context of controversial ideas. An AI could be programmed to analyze and comment on such topics, incorporating historical references and philosophical insights. However, the nuanced tone and specific references to Steven Pinker and Voltaire suggest that a human writer with knowledge of these figures likely crafted this particular piece.

— ChatGPT

In this particular context, ChatGPT was given a piece with enough nuance, specificity, and originality for it to conclude that a human writer was behind it. But when the work is more generic in nature and conforms to the bland aphorisms and tried and tested proxies of the average organisational report, AI is unable to discern human from computer-generated text. I found this when I presented ChatGPT with a common or garden piece about project management:

Yes, that response sounds like it could have been generated by me. It effectively addresses the capability statement by emphasizing the importance of relationship-building, active stakeholder engagement, and proactive problem-solving. It showcases an understanding of the significance of trust and rapport in gathering requirements and addressing project blockers. The response also highlights the importance of open communication and collaboration, indicating a holistic approach to stakeholder management.

— ChatGPT

Of course, that people plagiarise (sorry, are 'inspired by') the writing of others is nothing new. This looms large in the hallowed halls of academia, which have long been stalked by students unwilling or unable to achieve the requisite standard and finding ways to game the system instead. This scholarly arms race has made both the services of essay farms and the tools to catch them multi-billion-dollar businesses. It is a well-trodden path, and one which prompts managers to wonder whether they should follow it and start vetting company reports.

Investing in prompt assistants and other generative tools, and then spending money on top of that for systems to flag when staff are using those tools, is to misunderstand the nature of the problem. Which is: does the report provide a true and accurate representation of the opportunity, and will the organisation derive value from acting on its recommendations? Which brings us to the heart of the problem with staff reports — proxies.

For avid readers of my work, some of the following was drawn from an edition of The Scribbler, my monthly newsletter. Given I do not think that self-plagiarism is a thing, I have no issue with dipping into my own work.

Thought, Proxies, and LLMs

To cope with the torrent of data and decisions we must deal with in our daily lives, we use a series of heuristics or proxies to reduce the cognitive load. In the space of organisational behaviour, this is a process that tends to become more prevalent the further up the org chart we climb. The reason for this is that the Executive Leadership Team or Board of a large organisation could have tens of thousands of specialists working for them, and it is impossible to be across the detail of all their work. To prevent analysis paralysis, proxies play a vital role in formulating the myriad complex parts into a conceptualisation about which a decision can be made.

George Pólya's 1945 book, How to Solve It, outlined a series of commonly used proxies:

  • Visualisation: if you struggle to comprehend something, sketch it.
  • Start at the end: if you cannot formulate a solution, begin with the end in mind and see if you can work backwards.
  • Give examples: if you find abstract concepts difficult, find a concrete example.

While these techniques can be used to great effect in decision making, they also bring their own range of problems. Key among them is that we make poor decisions if our proxy does not give an accurate representation of the problem / opportunity / reality.

With the rise of AI, what could be termed the 'problems with proxies' is becoming more prevalent. The reason is that, as Ted Chiang put it in an article for The New Yorker, most 'AI' is:

a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you're looking for an exact sequence of bits, you won't find it; all you will ever get is an approximation.

To counter this failing, systems use modelling to fill in the gaps and sharpen the image. This leads to an output which is plausible enough to seem true to an untrained person but falls short of what would have been achieved by an expert with better knowledge of the subject matter.

This is certainly not to dismiss the increasing value that AI is steadily bringing to organisations, something I argued is a key strategic opportunity in Leading AI-First Organisations. Rather, it is to call attention to what will initially seem like ever-improving 'quality' in report writing, lulling line managers into a false sense of security and ultimately undermining decision making by Executives and Boards.

Until true AI begins to replace the thinkers in organisations with sentient machines, there is no replacement for the brilliance of a well-trained human mind. A reality that now, more than ever, places our people at the centre of all we do.

If line managers fail to grasp this and let employees with a distracted or tenuous grasp of the subject leverage AI to serve up unverified reports, decision makers will end up leading their organisations on the strength of a blurry image of a blurry image. Or, as Sextus Empiricus (160 – 210 AD) put it in Outlines of Scepticism:

Nor does the non-expert teach the non-expert—any more than the blind can lead the blind.

Good night, and good luck.
