Learning Evaluation - Investigate Performance Impact Like an L&D Detective
- Event: Learning Technologies UK 25
- Date: 23 April 2025
- Speaker: Kevin M. Yates, L&D Detective
- Chair: Liz Drury, Voiceover Artist, Liz Drury Voiceovers
- Estimated read time: 8 minutes
Quick read summary
This session explored why most learning evaluation efforts fail to demonstrate real business impact and what to do differently. Rather than focusing on courses, completions or satisfaction scores, it challenged L&D leaders to investigate performance in the same way a detective investigates evidence.
The discussion matters now because learning teams are under increasing pressure to prove value in commercially meaningful terms. Leaders want to see contribution to business goals, not activity metrics.
Readers will gain a practical way to reframe impact measurement, grounded in performance, collective contribution and disciplined questioning that changes how learning decisions are made.
Why learning impact is not a training problem
A persistent myth in L&D is that impact can be isolated and attributed directly to training. This assumption drives many evaluation models and reporting dashboards, yet it rarely reflects how performance actually works.
The session argued that training, learning and talent development never operate in isolation. Business results emerge from a workplace performance ecosystem where multiple contributors interact. These include systems, processes, leadership, incentives, job design and capability.
When learning teams attempt to prove impact by focusing only on what they control, such as courses delivered or feedback scores, they disconnect their work from business reality. The result is reporting that satisfies internal processes but fails to influence senior decision making.
The alternative is to shift the question from “did the training work?” to “how did learning contribute to performance outcomes alongside other factors?”
Redefining impact as collective contribution
A central idea in the session was a clear definition of impact. Impact was framed as collective contributions in the workplace performance ecosystem that drive business goals.
This definition deliberately moves attention away from learning activity and towards outcomes that matter to the organisation. It also recognises that learning is one contributor among many, not the sole driver of change.
Workplace performance was described as the combination of business performance and human performance. Business performance includes the metrics leaders already track. Human performance includes the behaviours, skills, decisions and conditions that influence those metrics.
This framing changes the role of L&D. Instead of acting as order takers responding to training requests, learning teams become investigators of performance problems and advisors on where learning can realistically help.
“Training, learning and talent development will never drive business goals on their own.” Kevin M. Yates, L&D Detective.
Starting with the right performance question
The session illustrated how learning impact becomes clearer when evaluation begins with a business metric rather than a learning output.
In the case example used, the starting point was not a request to train a certain number of people. It was a performance metric already visible to leaders: customer service quality, with a defined target level.
By anchoring the investigation to a recognised business measure, learning evaluation stayed relevant to decision makers. It also created a clear signal for whether learning was contributing to maintaining or improving performance.
Importantly, the session highlighted that learning can support both change and stability. Maintaining performance during periods of disruption can be just as valuable as driving improvement, yet it is often overlooked in evaluation conversations.
The discipline of performance investigation
Rather than assuming training is the answer, the session introduced a structured approach to investigating performance. This involved a set of diagnostic questions designed to uncover what is really affecting results.
These questions explored areas such as:
- The true business goal and how it is measured
- What good performance looks like in practice
- Which systems, tools or processes shape behaviour
- Where capability gaps exist and where they do not
This investigation requires conversations across the organisation, not just within L&D. It may involve analytics teams, HR, IT, managers and subject matter experts.
The value of this approach is not speed, but clarity. By gathering facts, evidence and signals upfront, learning teams avoid designing solutions that address the wrong problem or measuring outcomes that no one cares about.
Why LMS and survey data are not impact measures
The session was explicit about the limits of traditional learning data. Learning management system metrics and survey responses can be useful, but only in the right context.
Completion rates, attendance and learner reactions describe participation and experience. They do not show whether performance has changed or whether business goals are being supported.
This does not mean such data should be abandoned. It means it should not be confused with evidence of impact. When reporting to senior leaders, business performance metrics provide the most credible signal of value.
Learning evaluation becomes more powerful when it tells a simple, evidence-based story about contribution to outcomes leaders already recognise.
Practical application: how to apply this thinking
Questions leaders should be asking
- What business metric tells us whether this problem is improving or getting worse?
- What else, beyond learning, influences this metric today?
- How will we know if learning has helped to move or maintain performance?
Signals to watch in the organisation
- Training requests framed as solutions rather than performance problems
- Evaluation plans created after delivery rather than before design
- Over-reliance on satisfaction scores to justify learning investment
Common pitfalls
- Trying to isolate learning impact in complex systems
- Measuring everything instead of agreeing on one or two meaningful signals
- Reporting activity when leaders are asking about outcomes
What good looks like in practice
- Learning initiatives aligned to recognised business performance measures
- Clear agreement on how impact will be interpreted before delivery begins
- Evaluation that reflects collective contribution, not ownership claims
Key takeaways
- Learning impact is about contribution to business goals, not training activity
- Performance emerges from multiple interacting factors, not learning alone
- Starting with a business metric keeps evaluation relevant and credible
- Upfront investigation prevents wasted effort and weak reporting
- L&D adds more value when it acts as a performance consultant
Quote of the session
“When we stop chasing isolated impact and start showing collective contribution, the value of L&D becomes real.”
Kevin M. Yates, L&D Detective
Final thoughts
This session challenged a comfortable but unhelpful approach to learning evaluation. By treating impact as a detective exercise rather than a reporting task, L&D teams can reposition themselves as partners in performance, not just providers of training.
The shift requires discipline, confidence and a willingness to move beyond familiar metrics. For organisations serious about workforce readiness and learning impact, it is a shift worth making.
Speakers
Kevin M. Yates, L&D Detective. Kevin Yates is a learning and performance professional focused on helping organisations investigate and demonstrate the business impact of learning.
Liz Drury, Voiceover Artist, Liz Drury Voiceovers. Liz Drury is a professional voiceover artist specialising in e-learning narration and event hosting.