In his 1872 novel Erewhon, Samuel Butler explored the notion of self-replicating machines that had rudimentary consciousness but steadily evolved to become the dominant species on the planet. This essentially dystopian conceptualisation of thinking machines is a common Luddite fear, stemming from “what does this mean for me?”, and it often manifests as what Isaac Asimov termed the ‘Frankenstein complex’. That movies have propagated these fears in the shape of HAL 9000 or the Terminator has only further ingrained an almost visceral concern about the ends of artificial intelligence.
In the wake of the global phenomenon that is ChatGPT, I mused about some of these possibilities in Ghost Writers — Rise of the Machines, but that piece dealt only cursorily with the immediate ends of the technology in academia and did not address the strategic means by which an increasing number of companies are seeking to capitalise on it.
The business case for AI is a tantalising one that offers senior leaders the holy grail of:
- Faster data-driven decision making.
- Improved ways of working.
- Enhanced talent pipelines.
- Better strategic thinking.
The ‘gotcha’ for businesses looking to surf this wave to the beach is the same one plaguing students who increasingly turn to ChatGPT to write their homework: all systems, including biological ones, are fundamentally GIGO environments. That is, if Garbage goes In, then Garbage will come Out. This means that the implementation of AI is as much a human process as it is a technological one, and leaders who fail to comprehend this will end up with very expensive trash recycling units. We may laugh at this metaphor, but the reality is that there will be enormous business consequences and legal risks if organisations roll out their AI programs unchecked.
Applying AI
Surveys show that AI investment is on the up and up, with adoption more than doubling since 2017. Interestingly, this growth has plateaued recently: 50% of companies polled report adoption, down from the 2019 high of 59%. However, with companies like Salesforce doubling their investment in AI startups and pushing their AI positioning and strategy, I think it safe to say this is a lull before the storm.
AI interest in the practice of management is increasingly settling on areas where human judgement has historically been challenged. Examples include leveraging AI to match behavioural traits to roles, marketing open job listings to the right candidates, and attempting to eliminate bias in the screening process. There has even been development of tools that predict employee flight risk, enabling managers to get ahead of the sentiment and to understand and allay the reasons driving departures.
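To make this concrete, here is a minimal sketch of what such a flight-risk model might look like. Everything in it is an assumption for illustration: the feature names, the synthetic data, and the choice of a simple logistic regression. A real deployment would draw on an organisation’s HRIS data and, as discussed later, demand careful bias auditing.

```python
# A minimal, illustrative sketch of an employee flight-risk model.
# All features and data below are hypothetical, not a production design.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Hypothetical HR features: tenure, engagement survey score (1-5),
# months since last promotion, and weekly overtime hours.
df = pd.DataFrame({
    "tenure_years": rng.gamma(2.0, 2.0, n),
    "engagement_score": rng.uniform(1, 5, n),
    "months_since_promotion": rng.integers(0, 60, n),
    "overtime_hours": rng.normal(4, 3, n).clip(0),
})

# Synthetic label: low engagement and career stagnation raise departure risk.
logit = (-1.5 - 0.8 * df["engagement_score"]
         + 0.04 * df["months_since_promotion"]
         + 0.1 * df["overtime_hours"])
df["departed"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="departed"), df["departed"], test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point of the sketch is that the model only ranks risk; the human work of understanding and allaying the reasons for departure remains with the manager.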
A particularly interesting area is organisational structure. If the technological promise bears fruit, gone will be the days of endlessly redrawing Org Charts as senior leaders play another round of musical chairs. Instead, human resource data can be used to help business and HR leaders find the right structure, address suboptimal spans of control, and remove needless duplication of transactional work by redistributing these activities across roles, as sketched below.
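The kernel of this analysis is unglamorous: counting direct reports from a reporting-line table and flagging outliers. The roster below and the 4–8 “healthy span” band are illustrative assumptions only; a real analysis would work from the organisation’s HRIS export and its own span benchmarks.

```python
# A minimal sketch of mining HR data for suboptimal spans of control.
# The roster and the 4-8 band are illustrative assumptions.
import pandas as pd

roster = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5, 6, 7, 8, 9],
    "manager_id":  [None, 1, 1, 1, 1, 1, 1, 1, 2],  # None = top of the chart
})

# Count each manager's direct reports.
spans = (roster.dropna(subset=["manager_id"])
               .groupby("manager_id").size()
               .rename("direct_reports"))

# Flag managers outside an assumed healthy band of 4-8 direct reports.
print("Narrow spans (candidates for layer removal):\n", spans[spans < 4])
print("Wide spans (candidates for redistribution):\n", spans[spans > 8])
```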
Of course, it is not all upside. On the contrary: not only is AI hard to build, it is often harder to implement. The first reason for this is that:
it’s often the leaders who champion an AI-first perspective to scale, grow, and govern their businesses who then find themselves at odds—both culturally and in terms of investment priorities—with the executives who can turn it into reality. Those are the leaders in charge of portfolio and business operations—management groups that, in turn, also have divergent priorities and perspectives from each other. The result can be a triangular discord that makes scaling, not a cakewalk in the best of times, even more perilous.
(Jules 2022, 12)
This puts change management at the heart of applying AI and underlines why it is as much a human as a technological challenge. It is also a convenient segue to the importance of Minding The Capability Gap, as a lack of the right people — in data science, software development, and customer insights — will unhinge the technology and make the governance work largely redundant.
Finally, trust and investment are often hard to win when the Board or Executive are yet to be sold on the long-term viability and value of the initiative. In Ghost Writers — Rise of the Machines, I covered some of the pitfalls in academia when it comes to generative AI output. Organisational surveys also show that “55 percent of executives experienced an incident in which active AI (for example, in use in an application) produced outputs that were biased, incorrect, or did not reflect the organization’s values.” Thus, it is unsurprising that employee confidence in AI often mimics the fears surfaced in works of science fiction, and that Boards are unwilling to match the investment of AI-first organisations, which spend “20 percent of their enterprise-wide revenue on digital technologies.”
Conclusion
AI itself is a very apt metaphor for the type of organisational culture needed for a business to be AI-first: one of perpetual learning. Without this approach at all levels of an organisation, widespread acceptance of, and returns from, AI are unlikely. To put it simply, AI cannot be a side hustle of the IT or Data team. This means that organisations need to encourage staff to pull all the L&D levers of certification programs, self-directed courses, and experiential learning. These programs, I must hasten to add, need to be provided for non-technical as much as for technical employees.
For this reason, it is not enough to simply hire a Chief Data Officer, provide them with a team of analysts and scientists, and sit back to wait for the magic to happen. Instead, organisations need to appoint senior leaders who understand the technology and who have the business and organisational behaviour acumen to leverage the organisational ecosystems that are value and growth centres. These leaders also need a deep understanding that conduct counts. That is, they foresee the ethical and data hygiene risks that come as features rather than bugs in applying AI. Accordingly, a framework for the adoption and implementation of AI in an organisation is a must. Some essential elements to address in such a framework are:
- AI Code of Ethics: this must be legally enforceable and accommodate social conventions and cultural values.
- Code for the AI Code of Ethics: AI, much like biological thought, has inherent bias. It is impossible to “code out” bias, simply because one person’s bias is another person’s appropriate conduct. Therefore, decisions need to be made about the training and testing of AI systems to ensure they align with organisational values (see the bias-check sketch after this list).
- Ensure Human Autonomy: akin to Asimov’s Three Laws of Robotics, as AI increasingly comes into use in the organisational behaviour space, it is critical that it neither infringes an employee’s freedom and decision making nor enables employees to delegate responsibility for their work and actions to AI.
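What might testing an AI system against organisational values look like in practice? One common starting point, sketched below, is a demographic-parity check on a screening model’s outputs. The group labels, the pass/fail data, and the 0.8 “four-fifths” threshold are illustrative assumptions, not a complete fairness audit.

```python
# A minimal sketch of one bias check for a screening model's outputs:
# comparing selection rates across groups (demographic parity).
# The data, group labels, and 0.8 threshold are illustrative assumptions.
import pandas as pd

# Hypothetical screening outcomes: the model's pass/fail decision per candidate.
results = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "passed": [1, 1, 1, 0, 1, 0, 0, 0],
})

rates = results.groupby("group")["passed"].mean()
ratio = rates.min() / rates.max()

# Four-fifths rule of thumb: flag if the least-selected group's rate
# falls below 80% of the most-selected group's rate.
print(f"Selection rates:\n{rates}\nParity ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: review training data and features.")
```

A check like this is a tripwire, not a verdict: a failed ratio triggers the human decisions the framework exists to govern.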
By employing leaders capable of creating such a framework — because they are alert to the unintended effects of AI on social well-being, data integrity and privacy, diversity, and governance — organisations seeking to transform into being AI-first are well positioned to engage in trustworthy and responsible AI use, thereby standing the best chance of avoiding the Frankenstein complex.
Good night, and good luck.
Further Reading
Dordevic, M. (2022) How Artificial Intelligence Can Improve Organizational Decision Making. Forbes. Retrieved August 4, 2023, from https://www.forbes.com/sites/forbestechcouncil/2022/08/23/how-artificial-intelligence-can-improve-organizational-decision-making/
Jules, C. (2022) Building Better Organizations: How to Fuel Growth and Lead in a Digital Era. Oakland, California: Berrett-Koehler Publishers, Inc.
Kim-Schmid, J., and Raveendhran, R. (2022, October 13) Where AI Can—and Can’t—Help Talent Management. Harvard Business Review. Retrieved from https://hbr.org/2022/10/where-ai-can-and-cant-help-talent-management