The Pre-brief
Biases are part of human nature. They allow us to make split-second decisions and helped us survive as a species. However, they can get in the way of giving a fair evaluation, and that is when they become a problem. So what are some of the biases we need to watch out for?
Halo effect: When our judgment of someone’s performance is affected by how much we like them (or worse, dislike them).
For example, when Jamal (the visiting resident you don’t know) shows up late to the shift, you note his tardiness on your evaluation – showing up late to a shift is inexcusable! However, when Michael (who is on your basketball team) shows up late, you don’t note it in his evaluation because you are sure he had a valid excuse – he is such a good person!
Solution: Take a step back and ask yourself: "What am I evaluating exactly: their performance, or their likeability?"
Contamination effect: When we judge someone’s performance by an irrelevant fact that we feel strongly about.
For example, you comment that Sarah "did not behave in a professional manner" during a family conflict on your shift. However, when you reflect on what specifically deserved this statement, you realize it's not something she said or did, but the fact that she has numerous tattoos, which you personally consider "not professional" (a personal opinion, not a behavior).
Solution: Similar to the prior bias, take a step back and ask yourself “what am I evaluating exactly?”
Contrast effect: When you think that someone is better or worse than they actually are because you are comparing them to another person rather than an actual fixed standard.
For example, Vikram is an average 2nd year resident who is able to competently care for most of his patients with minimal supervision, as is expected at his level. However, when you are evaluating him, you unconsciously compare him to the struggling visiting intern you worked with last night during a particularly busy shift (even though they are at very different training levels). In comparison, Vikram was "above average" and "a star", and that is what you end up writing in his evaluation.
Solution: Avoid vague statements such as "above average" unless the evaluation form provides an explicit, detailed definition of what "average" means. Stick to describing observable performance.
Advanced Solution: If the multiple-choice answers on your evaluation form anchor ratings to "average", change the prompts or add a detailed rubric. A good example of a detailed rubric is the EM Milestones: a description of each "level" is provided, rather than assuming the middle number is "average" and leaving the rest to guesswork.
Confirmation effect: Looking for evidence in someone’s performance to confirm your opinion of them, rather than letting your observation of their performance dictate your opinion.
For example, you heard that Fatimah, the new student, is always preoccupied with arranging childcare for her three kids. During a very busy shift, you see her talking on her cellphone for fifteen minutes and assume it's related to family matters. You comment on her disengagement and lack of professionalism in her evaluation. The next day, you overhear your senior resident singing her praises: Fatimah managed to track down a complex patient's next-of-kin and convinced them to come to the hospital to discuss end-of-life care.
Solution: Ask yourself if your evaluation is truly based on your observations alone or on other people’s observations.
There are many other biases that affect our judgment on a daily basis, and bringing them to the forefront of our consciousness while evaluating learners makes them much easier to overcome. In fact, many of the biases discussed here are avoided if you BOOST your feedback, specifically focusing on the Objective & Observed aspects.
The Debrief
- Think of the following biases before you hit submit (or sit down for that conversation): halo, contamination, contrast, and confirmation.
- Take a moment and ask yourself:
- Is my evaluation based on what I saw, or on other things (likeability, unrelated facts, preformed opinions, etc.)?
- Am I comparing my learner to someone else instead of a “standard”?