London UK 2025

Dates and Venue

23 - 24 April 2025 | ExCeL London

Why AI makes Leadership more important than ever

Wednesday 24 April 2024

Jill Shepherd

As Daniel Susskind explored at the 2024 Learning Technologies conference, AI is placing the world in flux. The how and the when are unknown and still to play for. It is that playing for that we worry about, and are intrigued by, in different measures.

AI is likely to be the biggest ever change project at work. Should it even be called a project? Do you fancy managing the cost-quality-time project management triangle of this ultimate emerging technology? Probably not, because AI brings with it a new Trust Shift and many governance questions, including whether we are asking the right questions of the right people.

 

THE QUESTION OF WHO WE CAN TRUST IS ONE OF SOCIETY’S GREATEST CHALLENGES

Is there magic in AI that we cannot trust? Large Language Models (LLMs), such as ChatGPT, have, surprisingly, been produced more through art than science. They are possible more because language has turned out to be less complicated than we thought, rather than because the technology behind them is cleverer than anything we thought we could currently create.

ChatGPT, and the like, work by predicting the next word in a sequence of words. After adding that new word, the model predicts the next one, and on it goes. All of this follows a prompt, the text or code you use to ask the AI to do something for you. In principle it could do this live, trawling the internet’s code and text to see what most often comes next in the context of your prompt, but that would be too big a task.
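
For the technically minded, here is a toy sketch of that loop in Python. Nothing here is ChatGPT’s actual machinery: predict_next_word is a hypothetical stand-in that simply counts which word most often follows the last one in a tiny made-up corpus.

    # A toy autoregressive loop: predict one word, append it, repeat.
    from collections import Counter

    CORPUS = "the cat sat on the mat the cat sat on the chair".split()

    def predict_next_word(context):
        """Return the word that most often follows the last context word."""
        last = context[-1]
        followers = Counter(
            CORPUS[i + 1] for i in range(len(CORPUS) - 1) if CORPUS[i] == last
        )
        return followers.most_common(1)[0][0]

    prompt = ["the", "cat"]
    for _ in range(4):               # generate four more words
        prompt.append(predict_next_word(prompt))
    print(" ".join(prompt))          # -> "the cat sat on the cat"

A real LLM scores candidate words with a neural network over the whole context, rather than counting a single preceding word, but the predict-append-repeat shape of the loop is the same.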

So, Generative AI uses a model: an LLM. Not in the way a mathematical model works. Instead, more like how our brains think, using connections across a ‘neural net’. So less like a logical 2 + 2 = 4 and more like a contemplative chain: I am here today. The coffee shops I could walk to are here. The reviews of them tell me this. So I shall try a coffee here. AI is not exactly a brain like yours or mine, but it does, like yours or mine, need training. We give it the training data.

 

WHY CHATGPT IS FREE

We do not tell it which coffee shop to choose, just as we cannot tell a child, once they have seen a cat, whether the next animal they see is a cat or a dog, although we correct them if they get it wrong. Over time a child learns, yet cannot easily tell you how they discern whether a cat is a cat or a dog, or a cat dressed as a dog. Which is why ChatGPT was given to us for free and we were asked to rate its responses.
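
A hypothetical shape for one unit of that feedback, sketched in Python. This is an illustration only, not OpenAI’s actual schema.

    from dataclasses import dataclass

    @dataclass
    class FeedbackRecord:
        """One imagined unit of the feedback users gave for free."""
        prompt: str
        response: str
        rating: int  # e.g. thumbs up = 1, thumbs down = 0

    # Records like these can later train a reward model that nudges the LLM
    # towards responses people prefer (the broad idea behind RLHF).
    feedback = [
        FeedbackRecord("Recommend a coffee shop", "Try the cafe on the corner.", 1),
        FeedbackRecord("Recommend a coffee shop", "A cat is not a dog.", 0),
    ]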

Bear in mind too that you can change the so-called ‘temperature’ of your generative AI, so that it does not always give you the most probable next word but sometimes a less probable one. Turn the temperature up too high and you get nonsensical randomness. Turn it down too low and the response might be too flat, the same safe answer every time. In the middle, it might not recommend the same café twice given the same prompt.
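
A minimal Python sketch of how temperature reshapes that choice. The word scores are invented; a real model would supply them, but the scale-then-sample step is the standard softmax idea.

    import math, random

    def sample_with_temperature(word_scores, temperature):
        """Scale scores by temperature, softmax, then sample one word.
        Low temperature -> almost always the top word (flat, repetitive);
        high temperature -> unlikely words start to appear."""
        words = list(word_scores)
        scaled = [word_scores[w] / temperature for w in words]
        top = max(scaled)                               # for numerical stability
        weights = [math.exp(s - top) for s in scaled]
        return random.choices(words, weights=weights)[0]

    scores = {"espresso": 2.0, "latte": 1.0, "gravel": -3.0}  # invented scores
    print(sample_with_temperature(scores, 0.2))  # nearly always "espresso"
    print(sample_with_temperature(scores, 2.0))  # occasionally surprises you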

Apologies if you understand ChatGPT better than I do and could improve on this explanation (even correct it), but is that possibility not itself illustrative of the challenge we have with AI? At least I am trying to become a more informed sort of AI adopter. I used the authoritative work of Stephen Wolfram, so I tried hard – really hard. Stephen is rather clever.

 

CLOSING THE TRUST GAP

It is because of this trust issue, and our limited ability to understand AI and its utility to us all, that leaders and the public will be more involved in AI implementation than they were in, say, the cloud or containerisation. AI is not a technology business project that simply requires an infrastructure and tech stack and the closing of a digital skills gap, with durable softer skills added to produce a holistic skill set.

As we are all involved in AI, we need to feel in control of it. Accenture’s report (Generative AI Future of Work Talent Transformation | Accenture) talks avidly of opportunities and of how people need to feel ‘Net Better Off’ as a ‘clear path to closing the trust gap and getting people ready to be working with gen AI’. Plus, most use cases involve mixing Human Intelligence with AI.

Is any path involving change ever that clear? With AI we need to work with a diverse set of AI adoption and non-adoption personas, each with a range of emotional and evolving reactions to it. Otherwise, implementation and adoption will be limited and even morally questionable.
 

DIGITAL TRANSFORMATION

Are the capabilities needed to gain value from AI adoption different from those required in general digital transformation (Dx)? No, in that AI requires everything Dx does. Yes, in that it also requires responsibility around AI. The concern is that organisations will manoeuvre around this moral challenge by claiming you cannot foresee what impact a technology will have. Microwaves were meant to cook food quickly, but we did not forecast how they would disrupt family meals and advance the consumption of processed food.

Responsible and admirable AI implementation requires change management that is not a step-wise approach, because multiple concurrent AI initiatives make fixed steps too simplistic and hinder agility and iteration. AI change cannot be about change agents and individuals pursuing change through some method they follow, nor project management within the simple triangle, nor a string of agile MVPs that morally fail. It needs a model of digital transformation that treats AI as a particular kind of transformative technology, namely:
 

  • Non-technical staff need to be involved earlier, and in greater numbers, than the smaller pools of technical and senior management staff required in, say, cloud migration.

  • The technical skills needed to embrace AI deal with probabilistic outputs rather than the pure and perfect processing power of old-style computing.

  • In the past, adding intelligence to work meant adding more people power. Now it means mixing AI and humans, often in ways organisations are not aware of, let alone in control of.

  • Emotions around the impact of AI are heightened compared with general Dx, and they include trust.

  • Diverse AI adoption personas need to lead AI initiatives to increase the probability of responsible AI adoption.

  • AI is a step up in the consumption of energy and water.

  • Irresponsible AI adoption will be difficult to predict and might well have (and arguably should have) bigger consequences for reputational damage than people’s previous general lack of care about the ownership of their data.

Management scholars Spinosa, Hancocks and Tsoukas (2024) suggest modern times would benefit from looking at moral risk-taking through a human lens. AI involves moral risks – we risk harm or even evil.

The scholars talk of the need to: a) manage moods (and we will all get moody about AI, whether as employees or customers); b) build trust (which, given the probabilistic modelling inherent in AI, will not be easy); c) listen for difference (respecting all the voices within the different AI personas); and d) speak truth to power (whether about the unforeseeable impact of AI, or about whether the signing of international agreements creates better or worse places to do AI business and to be a socially responsible, sustainable citizen).
 

MORAL RISK IS NOT CENTRAL TO EITHER CHANGE OR PROJECT MANAGEMENT

Can moral risk be used by AI leaders to create values, break rules and challenge norms? Historically, great leaders have challenged norms. AI now does that for us. Will great leaders of AI use moral risk to challenge the new norms of AI successfully, creating norms that we all trust and that are economically valuable, socially progressive and sustainable? Only leaders who involve others will win this AI moral-risk gameplay and help us flourish towards being Net Better Off, as unclear and complex as the path might be.

AI is renewing the importance of leadership. That is a certainty within the fluid world of AI.

 

Dr. Jill Shepherd

Practice Director at QA Limited
