
Dates and Venue

29 - 30 April 2026 | Excel London

AI for Learning Design: Moving Beyond Automation to Augmentation

  • Event: Learning Technologies UK 25
  • Date: 23 April 2025
  • Speaker: Dr Philippa Hardman, ASU+GSV Woman in AI
  • Chair: Derek Bruce, Leadership & Culture Manager, CEO, Tesco
  • Estimated read time: 9 minutes

 

Quick read summary

This session explored how AI is currently being used in L&D and why most organisations are heading down a path of automation rather than improvement. It examined the risks of scaling existing learning models faster and cheaper without addressing quality or impact.

The discussion matters now because AI adoption in L&D has accelerated rapidly, often driven by efficiency targets rather than learning outcomes. Decisions made in the next phase will shape whether AI devalues learning roles or strengthens them.

Readers will gain a practical way to think about AI as an augmentation tool, understand where current approaches fall short, and see what a more effective operating model could look like in practice.

 

The core problem AI is being asked to solve

Across L&D, AI is most often introduced to address speed and cost. Training takes too long to design and deliver, and it is expensive to produce. These pressures are real and widely recognised.

However, focusing solely on efficiency masks a deeper issue. Even when training is delivered on time and within budget, its impact is often unclear. Measuring whether learning changes behaviour or improves performance remains difficult, and evidence of transfer is limited.

The risk is not that AI will disrupt learning. The risk is that it will accelerate an approach that already struggles to deliver measurable outcomes.

 

Two diverging AI strategies in L&D

Current practice broadly falls into two strategic approaches.

The dominant approach is AI for automation. Here, AI replaces or accelerates tasks already performed by humans, such as generating content outlines, converting documents into courses, producing videos, or speeding up evaluation surveys. The goal is increased throughput and reduced cost.

A smaller but growing group of organisations is pursuing AI for augmentation. In this model, AI is used to help learning professionals make better decisions. Rather than automating outputs, it supports deeper analysis, stronger design choices, and more robust evaluation.

The difference is not the technology itself, but the question being asked. Automation asks how to do the same work faster. Augmentation asks how to improve the system before scaling it.

 

Why scaling faster creates new risks

When AI is applied primarily to content production, volume increases quickly. More courses, more videos, more materials become available.

This creates three observable risks.

At a business level, over-reliance on automation can reduce innovation and increase churn. In other sectors, aggressive automation has already led to reversals when quality and customer experience suffered.

At a professional level, instructional designers report their role narrowing to feeding content into tools. The craft of learning design risks being reduced to production management.

At a learner level, increased output does not resolve overload. Learners experience more content without clearer relevance, reinforcing disengagement rather than improving capability.

These effects suggest that efficiency gains alone are an insufficient success measure.

 

What AI for augmentation looks like in practice

Augmentation strategies use AI differently across the learning lifecycle.

In analysis, AI is used to synthesise multiple data sources, including performance data, business KPIs, and external trends. This supports clearer problem definition and reduces reliance on assumptions or dominant voices.

In design, specialised tools can connect learning designers to established research and evidence, helping them make informed choices about activities, sequencing, and methods rather than defaulting to templates.

In development, AI can act as a co-pilot rather than a replacement, supporting problem solving and filling knowledge gaps in context.

In evaluation, AI can support more sophisticated approaches to linking learning interventions with behaviour and performance signals, moving beyond completion metrics.

Across these stages, the aim is not to remove humans from the process, but to strengthen judgement and increase the likelihood of impact.

 

Practical application for L&D leaders

Questions leaders should be asking

  • What problem are we trying to solve before choosing an AI tool?
  • Are we optimising for speed, or for learning impact and business value?
  • Which decisions in our learning process would benefit most from better evidence?

Signals to watch in the organisation

  • Rising content volume without improved application
  • Instructional design roles becoming more standardised and narrow
  • Learner feedback pointing to overload rather than usefulness

Common pitfalls

  • Treating AI as a content factory
  • Switching on AI features in existing platforms without changing success measures
  • Assuming efficiency gains equate to effectiveness

What good looks like in practice

Effective organisations create space for controlled experimentation. Small teams run tests, compare outcomes, and build evidence before scaling. L&D begins to operate more like R&D, balancing short term delivery with longer term improvement.

 

Key takeaways

  • AI adoption in L&D is accelerating, but most use cases focus on automation rather than improvement
  • Speed and cost matter, but quality and impact remain unresolved problems
  • Scaling existing learning models faster risks amplifying their weaknesses
  • Augmentation strategies use AI to strengthen analysis, design, and evaluation
  • The future impact of AI in L&D depends on human decisions, not technology alone

 

Quote of the session

“The biggest risk of AI in L&D right now is that it enables us to do what we already do faster and cheaper.”

Philippa Hardman, Co-Founder, ASU+GSV Woman in AI

 

Final thoughts

AI presents a genuine opportunity for L&D, but only if it is used deliberately. The choice is not between adoption and resistance, but between scaling existing practice or rethinking it.

Leaders who focus on evidence, capability, and outcomes can use AI to build learning systems that work better, not just faster. Those decisions will shape whether AI diminishes learning functions or elevates them.

 


 

Speakers

Philippa Hardman, Co-Founder, ASU+GSV Woman in AI. Researcher focused on AI, instructional design, and the impact of emerging technologies on L&D practice.

Derek Bruce, Leadership & Culture Manager, CEO, Tesco. Works across leadership, culture, and talent, with a focus on HR and learning technology.

 


 

