AI agents: the great workplace upheaval - sink, swim, or swarm?
- Event: Learning Technologies UK 25
- Date: 23 April 2025
- Speakers
- Josh Cavalier, Founder, JoshCavalier.ai
- Trish Uhl, Product Manager, Generative AI Enterprise Solutions, Owl's Ledge LLC
- Chair: Celine Mullins, CEO, Adaptas Training
- Estimated read time: 10 minutes
Quick read summary
This session explored how AI agents are moving work beyond isolated tools and into coordinated systems that can plan, execute, and adapt.
It matters now because adoption is accelerating faster than organisational change, creating a widening gap between technical capability and human readiness.
Readers will gain a practical lens for understanding where agents fit, how work is being redesigned, and what role learning and development must play to keep people and organisations effective.
From tools to teammates
For decades, human-computer interaction has been about issuing commands through keyboards, mice, and screens. Trish Uhl traced this lineage back to early computing, where the aim was not automation for its own sake, but the augmentation of human intellect.
What is changing is not the goal, but the interface. Large language models have shifted interaction from commands to conversation. AI agents extend this further by adding memory, reasoning, feedback loops, and access to tools. The result is not a smarter tool, but a semi-independent collaborator.
This marks a move from human-computer interaction towards human-AI collaboration, where work is no longer just assisted by technology, but increasingly shared with it.
Why the pace of change matters
Uhl highlighted the speed at which generative AI has been adopted, noting that weekly active use of ChatGPT doubled within weeks in early 2025. The implication is not simply scale, but compression. Organisations are being asked to adapt faster than their structures, processes, and people are designed to allow.
This creates a familiar problem in a new form. Technology advances quickly, while organisational capability lags. That gap is not primarily technical. It is human.
Learning functions sit directly in this tension. The challenge is no longer just helping people use new tools, but helping organisations redesign work so that humans and AI can operate effectively together.
What AI agents actually change
An AI agent is not just a chatbot. As Uhl explained, agents combine a language model with tools, memory, and planning capability. They can break down goals, sequence actions, and adapt based on feedback.
In practice, this means work can move from task execution to goal delegation. In one example shared in the session, an agent was given a goal related to workforce skills analysis. It independently conducted research, produced a report, created learning modules, and generated assessment materials.
The significance is not the artefact quality, but the workflow shift. Work that once required multiple human handoffs can now be orchestrated by a single agent, with humans deciding where oversight and judgement remain essential.
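The plan, act, and adapt loop described above can be illustrated with a short sketch. This is a toy model only: the `run_agent` function, the planner, and the tool registry are illustrative assumptions for this article, not the API of any real agent framework.

```python
# Minimal sketch of an agent's plan-act-adapt loop: break a goal into
# steps, execute each step through a tool, record the result in memory,
# and replan based on what has been done so far. All names are illustrative.

def run_agent(goal, tools, planner, max_steps=10):
    memory = []                         # record of (step, result) pairs
    plan = planner(goal, memory)        # break the goal into ordered steps
    for _ in range(max_steps):
        if not plan:                    # nothing left to do: goal reached
            break
        tool_name, arg = plan.pop(0)
        result = tools[tool_name](arg)  # act through an external tool
        memory.append((tool_name, result))
        plan = planner(goal, memory)    # adapt the plan based on feedback
    return memory

# Toy example mirroring the session's workforce skills scenario:
# a goal that decomposes into "research" then "report" steps.
def toy_planner(goal, memory):
    done = {name for name, _ in memory}
    steps = [("research", goal), ("report", goal)]
    return [(n, a) for n, a in steps if n not in done]

tools = {
    "research": lambda g: f"notes on {g}",
    "report": lambda g: f"report on {g}",
}

memory = run_agent("workforce skills analysis", tools, toy_planner)
```

The point of the sketch is the workflow shift itself: the human supplies a goal and oversight, while sequencing and execution are handled inside the loop.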
Redesigning work, not just automating tasks
A recurring theme was that most organisations still operate on a model designed around filing cabinets and sequential human effort, even when the files are now digital. AI agents disrupt this model by accessing information directly and acting on it.
This enables outcomes-driven services, where value is delivered through results rather than activity. Uhl pointed to examples where organisations are embedding agents into platforms so that outcomes, not effort, become the unit of value.
For learning leaders, this reframes long-standing challenges. Measuring impact, personalising learning, and responding at the speed of business are no longer theoretical ambitions. They become operational expectations.
The Human-AI Task Scale
Josh Cavalier introduced the Human-AI Task Scale as a practical framework for navigating this transition. Rather than treating AI adoption as a binary switch, the scale maps a progression from fully manual work through to autonomous agent ecosystems.
At early stages, humans remain in control, using AI for support or automation. As confidence, governance, and risk management improve, work can move towards joint collaboration, then supervised agents, and eventually fully autonomous execution with human oversight at a systems level.
The value of the scale is not in prescribing an end state, but in enabling informed conversations. Different roles, functions, and tasks will sit at different points on the scale at the same time.
Practical application: what this means for L&D
Questions leaders should be asking
- Which parts of our work are suited to goal delegation rather than task execution?
- Where does human judgement add the most value, and where does it slow outcomes?
- What risks genuinely require human oversight, and which are cultural habits?
Signals to watch in the organisation
- Teams experimenting with agents outside formal governance
- Growing gaps between technology capability and role clarity
- Increased anxiety or resistance linked to role ambiguity
Common pitfalls
- Treating agents as cost-cutting tools rather than capability enablers
- Deploying technology faster than people can adapt
- Focusing on content volume instead of performance outcomes
What good looks like in practice
Learning functions act as performance partners: helping leaders redesign work, define human and machine responsibilities, and build confidence through experimentation rather than one-off training programmes.
Key takeaways
- AI agents represent a shift from tool use to shared work execution
- The main constraint on value is human readiness, not technology
- Work needs redesigning before it can be automated effectively
- L&D has a central role in enabling human-AI collaboration
- Frameworks like the Human-AI Task Scale support better decision making
Quote of the session
“This is the opportunity for L&D to step into redesigning work, not just supporting it.”
Trish Uhl, Product Manager, Generative AI Enterprise Solutions, Owl's Ledge LLC
Final thoughts
The question is not whether organisations will adopt AI agents, but how deliberately they will do so. Sinking means doing nothing. Swimming means incremental experimentation. Swarming means actively shaping how humans and AI work together.
For learning leaders, the choice defines the future of the function. The opportunity is to move from content delivery to performance orchestration, ensuring that as machines become more capable, people remain confident, capable, and essential.
Speakers
Trish Uhl, Product Manager, Generative AI Enterprise Solutions, Owl's Ledge LLC. Advises organisations on building AI-fluent workforces and designing enterprise-scale AI solutions.
Josh Cavalier, Founder, JoshCavalier.ai. Works with organisations to apply generative AI to learning, performance, and workforce capability.