Five Gentle Reminders about Measurement for L&D Professionals
As an L&D professional, the pressure to PROVE that your team’s training was effective and valuable can feel, well—overwhelming. We all know that measurement is important, and most of us have signed off on the concept of measurement in theory. So why is it so difficult in practice?
As a lover of data, I’ve always made measurement a priority. That said, thorough measurement is a step even I skip when I’m pressed for time and resources are stretched thin. After all, if our people like the training, do I really need to measure the results?
The short answer is yes! It’s absolutely a fight worth fighting and time well spent.
The long answer is “Yes, and…”
These are five gentle reminders about measurement from one L&D professional to another.
We collect data because the stories we tell ourselves aren’t always true.
I wish perception and reality matched 100% of the time, but the truth is, bias clouds our vision—tremendously. Sometimes when I’m leading a session, and I see a sea of bobbing heads, I think, “Yeah, they’re definitely getting this.”
Measurement helps us gauge the truth of that sentiment. What question could I ask my learners to be 100% sure this content has landed? What behavior could I track to be 100% sure I’ve closed the knowing-doing gap? And was this behavior as closely intertwined with our business’s bottom line as we thought?
Measurement allows us to answer those questions with more than a best guess.
Numbers lie too.
As important as measurement is, even numbers can lie. For this reason, it’s imperative to think deeply about how data was collected and aggregated and whether there’s inherent bias in your results. Say, for example, you send a survey after training your team on a piece of technology. The survey asks if they use the software more this month than they did prior to your training.
This question is begging to be answered dishonestly. No one wants to admit they’re not meeting well-communicated expectations, and some might genuinely believe they are using the tool more.
The point here is that it’s crucial to think critically about the numbers you’re using to measure your learning initiatives. Ask yourself, “What other ways could this number be interpreted? How was this data collected? What does this really mean?”
The simple act of measuring shapes behavior.
Numbers have the potential to lie, but they also have the potential to shape behavior. Take, for instance, the school that wants to ensure that more students are passing their classes. When teachers know that their pass rate is under scrutiny, they modify their behavior by failing fewer students.
Does this increase pass rates? Sure. Is this a good thing? Probably not…
At the end of the day, teachers have made it easier to pass their class, but their students are no more college- or career-ready.
In the same way, when associates know you’re measuring a behavior, their focus will shift to making that number favorable. And this may or may not have the big-picture impact you wanted.
For this reason, I suggest using data to inform, rather than to punish or reward a specific behavior.
It’s possible to measure the wrong thing.
In our results-driven companies, we’re often tempted to find any numbers to show value to our stakeholders. Often, I see eLearning completion rates or the number of associates who passed a knowledge-based test shared in AARs (After Action Reviews). Remember that just because an associate can answer a true/false question about a new product or service doesn’t mean they can effectively communicate the benefits of said service to a prospective client.
To avoid measuring the wrong thing, I encourage all L&D professionals to write a multi-level measurement plan that pulls data from a variety of sources.
Just because you can’t measure it, doesn’t mean it’s not true, valid or important.
Yes, measurement is important.
Yes, multi-level measurement is important.
Yes, careful, intentional measurement is important.
And…
Some things are impossible to quantify.
The inability to find a direct metric proving that having a leader kick off your training created a 6% decrease in employee turnover doesn’t mean that having a leader kick off your training was a bad idea.
Intuition and data don’t have to compete with one another, and both should be used to create people-centered learning that taps into your team’s fullest potential.
In summary, keep measuring! And… scrutinize your data and measurement strategies. No statistic can replace the mental lift of thoughtful interpretation. When people know they’re being measured, they behave differently.
Metrics are not black and white. With measurement (and most things), it’s important to leave room for the discomfort and curiosity that allows us to sit in the “grey areas.”