6 Mistakes To Avoid When Creating Training Scenarios


Measure What Matters!

Measuring how knowledge is applied in a situation is always a more effective way to test learning than asking whether employees remember a fact from a couple of slides ago in an eLearning course. This is why assessing decision-making competence (applying pieces of knowledge in context) should be used in learning design more often. However, it is not only about what to do. We can also make crucial mistakes while creating scenarios that undermine the effectiveness of our efforts. This article is an invitation to reflect on, challenge, and refine our approach to scenario design when creating training scenarios. While the types of potential issues are endless, an article needs to have a limit.

6 Common Issues To Avoid When Designing Training Scenarios

1. Making Scenarios Too Easy

Example
A customer service training scenario where every customer is polite and has a straightforward request.

Problem
It doesn’t challenge the participants to reflect on real-world complexities. While you may use this “blue-sky scenario” early on in the process for absolute beginners, you need to match the desired skill level with the challenge level later on. When scenarios are too easy, participants are bored. Bored minds don’t learn. When a scenario is too difficult (without any tools for hints or guidance), participants can get frustrated.

Instead
Make it relevant to their job and expected skill level. Find the challenge “right above their heads,” and you’ll get engagement. The challenge is to know where their skill level is!

2. Overloading With (Irrelevant) Information

Example
A medical training scenario filled with pages of background on a patient’s history.

Problem
Cognitive overload. Participants can’t discern what’s essential. While experts and learning designers “know” what’s important, for participants every piece of information is new and potentially important for the scenario. Identifying the relevant and important parts of a scenario based on the question is actually a separate skill!

Instead
Be concise. Provide only the information necessary for the decision at hand. That does not mean spelling out the answer! Always design an assessment item for what it is actually meant to assess! Unless reading skills are part of the authentic assessment, don’t make figuring out the scenario harder than actually making the decision.

3. Leading Answers

Example
Offering very obvious right or wrong choices. This often includes perfectly written marketing statements or long legal definitions.

Problem
It doesn’t encourage critical thinking, or even thinking at all. It also creates what I call the “illusion of learning,” which is reassuring for everyone (the SME, the learning design team, and the participant) because “everyone knows” the answer. It can lead to good scores with no impact on the job.

Instead
Craft choices that are nuanced and require reflection. Make sure you design the assessment item for the right level: are you assessing recognition of a term, recall of a term, recitation of a term, or application of the term?

4. Ignoring Emotional Realities

Example
A scenario about delivering bad news without addressing the emotional weight.

Problem
It feels inauthentic. Reading a scenario (even with a name) about an employee who’s not going to get the promised promotion can simply come down to picking out the “right” answer. That is very different from telling someone who has been excited about the promotion that, once again, it didn’t go through, even though they were assured it would and have already looked at a new car for the baby on the way.

Instead
Recognize and incorporate the emotional dimensions of real-life situations. Show, don’t tell! As in movie scriptwriting, you show emotions through actions, not by spelling them out or saying them out loud. The affective domain deserves a whole article of its own, because it is so often missing from scenario design.

5. Using Stereotypes

Example
A scenario about office dynamics, where the manager is always male and the assistant female.

Problem
Not only does it perpetuate biases, but it also turns the scenario into an abstraction that creates distance for the participant. It is almost like asking about the general rules of the world. We often know what the right thing to do is. Yet, we do not always do it.

Instead
Challenge stereotypes. Create diverse and inclusive scenarios specific to the situation, rather than based on generic characters. Movie scripts are generally either plot-driven or character-driven. Plot-driven scenarios are defined by decisions and actions, while character-driven scenarios are based on the peculiarity of a unique character.

For training scenarios, we mostly use the plot-driven approach, where actions take place to set the stage for a decision. Just make sure that the actions the character is expected to take make sense for the character you describe in the scenario! Do not put words in a character’s mouth, or actions in their path, that are inconsistent with that character.

  • Note
    Experimenting with character-driven scenarios takes some practice, but it can be effective when you approach scenarios as a series or a campaign. You’ll need multiple scenarios to establish the character, but once the character is known, you can use it in other channels such as marketing or comms.

6. Neglecting Feedback

Example
“You’re correct!”

Problem
There are two types of mistakes here that I often see. The first one is the classic: right or wrong. An SME once told me they didn’t want to put anything important in the feedback because people ignore it. Well, maybe…or maybe they ignore it because we taught them that there’s nothing important in there?

Feedback is one of the most underestimated aspects of learning design. In fact, that is always my first question for EdTech companies wanting to showcase their product!

How do you support personalized, timely, and actionable feedback? Show me how the system uses the insights it gains from the feedback and the learner’s reflection on it.

This often baffles them, which usually means the product is just another platform for delivering content. We have too much content already.

The second problem with feedback is relying on the authoring tool’s defaults. When you have a multiple-choice or multiple-select assessment question, you need to decide how to give feedback. Do you give feedback only on correct or incorrect? On partially correct? Generic feedback, or feedback based on what they selected? The lazy approach built into many tools is to provide a single statement for all incorrect answers.

This means a learning designer composes a sentence that generically explains why the user’s selection is wrong. However, it is so generic that it makes no sense to the user. For example, let’s say the scenario is about a specific learning design activity and the question is about engagement. The user selects two of the four options (missing one correct one).

Feedback: “This is incorrect! Remember that engagement is not only about UI interactions, it has three domains. Try it again!”

This won’t tell the user how to reflect on their choices based on what they selected. No matter what the user selects, it will repeat the same message, like a parrot. Over time, interactions like this teach users to just ignore feedback.

Instead
Provide constructive, relevant, customized, and actionable feedback. Explain why a choice was right or wrong. Show the consequences of the participant’s actions. Feedback is not a chance for you, the designer, to lecture! It is a chance for the participant to reflect.
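For readers who build or script their own question logic, here is a minimal sketch of what choice-specific feedback could look like, using the engagement example above. It assumes a hypothetical question with invented option texts and a made-up buildFeedback helper; it is not the API of any particular authoring tool, just one way to map what the learner actually selected to a tailored explanation instead of a single generic "incorrect" message.

```typescript
// Hypothetical sketch: per-choice feedback for a multiple-select question.
// Option ids, texts, and explanations are invented for illustration only.

interface Option {
  id: string;
  text: string;
  correct: boolean;
  whySelectedWrong?: string;  // shown if an incorrect option was selected
  whyMissedCorrect?: string;  // shown if a correct option was not selected
}

const options: Option[] = [
  { id: "a", text: "Add more clickable hotspots", correct: false,
    whySelectedWrong: "Clicks alone are behavioral engagement; they don't guarantee cognitive or emotional engagement." },
  { id: "b", text: "Ask learners to predict the outcome before revealing it", correct: true,
    whyMissedCorrect: "Prediction invites cognitive engagement: learners commit to a choice and compare it with the result." },
  { id: "c", text: "Use a story the audience recognizes from their own work", correct: true,
    whyMissedCorrect: "Relevance to their own work supports emotional engagement, not just interaction." },
  { id: "d", text: "Shorten the module to five minutes", correct: false,
    whySelectedWrong: "Length by itself does not address any of the three engagement domains." },
];

// Build feedback from what the learner actually selected,
// instead of one generic statement for every wrong combination.
function buildFeedback(selectedIds: string[]): string {
  const lines: string[] = [];
  for (const opt of options) {
    const selected = selectedIds.includes(opt.id);
    if (selected && !opt.correct && opt.whySelectedWrong) {
      lines.push(`You selected "${opt.text}": ${opt.whySelectedWrong}`);
    }
    if (!selected && opt.correct && opt.whyMissedCorrect) {
      lines.push(`You did not select "${opt.text}": ${opt.whyMissedCorrect}`);
    }
  }
  return lines.length === 0
    ? "Your choices address all three engagement domains. Well reasoned!"
    : lines.join("\n");
}

// Example: the learner picked one correct and one incorrect option.
console.log(buildFeedback(["b", "d"]));
```

The point of the sketch is the design choice, not the code: each option carries its own explanation, so the feedback the learner sees is assembled from their actual selections and gives them something specific to reflect on.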

Conclusion

Write meaningful, authentic, and relevant scenarios that challenge participants, make them reflect on their choices, help them understand the consequences of those choices, and guide them towards behavior change (which is still a long way to go).