The odds are good your learner surveys aren’t so good. A harsh pronouncement, but probably true, unless you’re already following the advice of Will Thalheimer, founder of Work-Learning Research. He literally wrote the book on learner surveys: Performance-Focused Learner Surveys: Using Distinctive Questioning to Get Actionable Data and Guide Learning Effectiveness.
At the 2024 Learning Business Summit, Will shared tips for improving learning surveys. If you didn’t attend, here’s some of his advice for designing performance-focused learner surveys that deliver the data you need to improve the impact of your education programs.
In Will’s research, 65% of learning teams aren’t happy with their learning measurement tools and want to make modest or substantial improvements. In his poll of session attendees, 45% said their learner survey data was somewhat useful in improving programs and 35% said their data was not very useful.
Why is this happening? When you measure learning at the end of a program, you’re mostly measuring comprehension. Learners haven’t had enough time to forget anything yet. In the session chat, attendees confirmed this, saying their evaluations measure reaction more than learning. Association leaders would rather know whether learners liked the program than whether they learned anything.
Researchers say traditional program evaluations (aka smile sheets) show no correlation with learning. High marks could mean anything. Learners are also overconfident about how much information they’ll retain, and that bias skews their evaluation responses.
Will told a story illustrating another learner bias. In sales training, tough instructors received low evaluation scores but produced the most high-performing salespeople. This finding reminded me of the learning science research presented by Brian McGowan in his Summit session: learners don’t always know what kind of instruction is good for them. They prefer easy, ineffective instructional methods over challenging, effective methods.
Traditional evaluation methods, like the Likert scale—with its “strongly agree,” “agree,” etc. options—don’t provide useful data. If you get an average rating of 3.8, what does that really mean?
Learner surveys must ask unbiased questions focused on how people learn. Here’s an example from Will of a good learner survey question:
Question: HOW ABLE ARE YOU to put what you’ve learned into practice in your work? CHOOSE THE ONE OPTION that best describes your current readiness.
What did you notice?
The options aren’t what the learner expects, so they attract more attention.
They elicit more valuable information. They give the impression you’re taking the learner’s experience more seriously, especially since the options range from negative to positive impact.
The question is about the learner, not the program, instructor, or venue. The impact on the learner is what matters.
The options have more granularity. Will said he adds an over-the-top choice, like option F, to slow the learner down so they’ll more carefully consider all options.
Notice the lack of jargon, like “learning objectives.” Learner-friendly language helps learners make the right choice, and the uppercase text helps them home in on the gist of each option.
The resulting data is more useful. You can see the percentage of learners who chose each option and decide if those results are acceptable. If 30% of learners need more guidance before using what they learned (option C), your program is failing them. If 10% are still unclear about what to do (option B), what’s going on there?
Good surveys focus less on learner satisfaction and course reputation, which aren’t correlated to learning impact. Instead, you need to find out if learners:
Will suggests adding nudging questions to your surveys. The example below nudges the learner to follow through with what they’ve learned.
Question: After the course, when you begin to apply your new knowledge at work, which of the following supports are likely to be in place for you? (Select as many items as are likely to be true.)
You can see how these options would prompt your team to think about the support you could offer learners, and prompt learners to think about what they can do to make sure they don’t forget what they’ve learned.
Here’s another example of a nudging question.
Question: Compared to most webinars, how well did the session keep YOUR attention? Select one choice.
You can ask questions to nudge a positive perception of your brand, like these questions about accessibility, belonging, and barriers:
Will suggests adding these open-ended questions to the end of your survey to elicit valuable insight.
Notice the use of “we” and “us,” which makes these questions sound like a real person talking.
You need more useful learner data to improve your programs and rise above the competition. What’s the point of using the same old learner evaluations if they’re not eliciting the most essential information: did the program have the intended positive impact?