by Gavin Cassar
Subjectively evaluating people can have long-lasting effects.
Imagine yourself speed dating.
The first person sits across from you and the attraction is instant. You begin chatting and find the words come easily. In fact, they pour out. As the minutes melt away, you realise you’ve probably never felt this deeply connected to anyone before. But then, time is up, and you are suddenly staring into the face of a new stranger.
How much of a fair chance does this new person have to make a good impression on you? Alternatively, how would your perception of this second date change had your first interaction been a total dud? The quality of that first interaction influences the way we judge future, similar experiences.
At one time or another, most of us have had colleagues who are tough acts to follow. Maybe we've felt judged unfavourably not because of our intrinsic qualities, but because of those of our peer group. Individuals with an uncanny talent for their job can cast a long shadow over those around them. But how strong are these comparison effects, and how long do they last?
Employee evaluations have been found to be highly subjective, especially when they are not tied to clearly defined criteria. Research has shown that when evaluating or rating, we generally use comparative information to form an opinion.
Our judgement, therefore, is based on relative experiences even when we are instructed to evaluate someone on an absolute scale.
Such biases can enter the workplace if, for instance, a manager with several direct reports carries out performance evaluations based on subjective measures such as “effectiveness”. Even with a clear definition of the measure, the manager’s ratings may still carry substantial bias.
Subjectivity endures
In our working paper, “Peer Effects in Subjective Performance Evaluation”, PhD candidate Taeho Ko and I were interested in finding out what happens when people rate several individuals over time and to what extent previous evaluations influence the ones that follow.
We sifted through seven years of accumulated data on students’ evaluations of their business school professors to see how studying with a highly rated professor affects other professors’ ratings.
The students in our study were asked to complete questionnaires about different facets of a professor’s preparation and performance. One criterion, “effectiveness”, served as the benchmark for the professor’s overall performance. Our study covered 64,886 ratings of 95 professors from 6,741 students. Students evaluated several professors at the end of each six-week teaching period, then encountered and rated a different set of professors in each subsequent six-week interval, scoring effectiveness on a scale of 1 to 5. We considered a professor to have a star rating if they scored 4.9 or higher on that 5-point scale.
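To make the design concrete, the sketch below shows the kind of fixed-effects regression one might run on such panel data to estimate peer effects. It is a minimal illustration on synthetic data: the variable names, the simulated numbers and the exact specification are our assumptions for exposition, not the paper’s actual estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Purely illustrative synthetic panel: one row per (student, professor,
# period) rating. All names and numbers here are hypothetical stand-ins.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "student": rng.integers(0, 500, n),
    "professor": rng.integers(0, 95, n),
    "period": rng.integers(0, 8, n),
})
prof_quality = rng.uniform(3.0, 5.0, 95)       # latent quality of each professor
df["peer_quality"] = rng.uniform(3.0, 5.0, n)  # avg quality of concurrent peers
df["star_peer"] = (df["peer_quality"] > 4.9).astype(int)

# Simulate a contrast effect: better (and star-rated) concurrent peers
# pull the focal professor's rating down.
df["rating"] = (
    prof_quality[df["professor"]]
    - 0.2 * df["peer_quality"]
    - 0.16 * df["star_peer"]
    + rng.normal(0, 0.3, n)
).clip(1, 5)

# Regress ratings on peer quality, with professor and period fixed effects
# absorbing each professor's own quality and common time shocks.
model = smf.ols(
    "rating ~ peer_quality + star_peer + C(professor) + C(period)",
    data=df,
).fit()
print(model.params[["peer_quality", "star_peer"]])  # recovers ~ -0.2 and ~ -0.16
```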
Hard act to follow
We found three interesting results:
First, when a student begins their MBA experience with a professor who has a star rating, all of that student’s subsequent ratings are set against a very high benchmark. As a result, when MBA students experience professors of higher quality (say, one point higher on the 1-to-5 scale), ratings of the other professors teaching in the same period drop by 0.2 points on average. Consistent with what is already known about subjective evaluations, this confirmed our theory that an employee’s ratings are negatively associated with the quality of the employee’s concurrent peers. If one of those peers had a star rating, teaching-effectiveness ratings fell by a further 0.16 points over and above the average peer effect.
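To put these magnitudes in concrete terms, here is a back-of-envelope illustration. The baseline rating of 4.0 is hypothetical; the 0.2 and 0.16 coefficients are the estimates reported above.

```python
# Back-of-envelope illustration of the reported magnitudes. The baseline
# rating is hypothetical; 0.2 and 0.16 are the study's estimates.
def expected_rating(baseline, peer_quality_gap, star_peer=0):
    """Rating after contrast effects from concurrent peers."""
    return baseline - 0.2 * peer_quality_gap - 0.16 * star_peer

# A professor who would otherwise earn 4.0, rated alongside peers one point
# better than average, one of whom holds a star rating:
print(expected_rating(4.0, peer_quality_gap=1.0, star_peer=1))
# 4.0 - 0.2 - 0.16 = 3.64
```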
Second, we found that these negative effects extend to professors evaluated in the months following a student’s exposure to other professors, including a “superstar”. In a more general sense, a great employee can pull down the ratings not only of those working at the same time, but also of those evaluated months later. We found this ripple effect lasted up to eight months: as long as eight months after rating an excellent professor, a student retained that performance as an ideal and used it as a contrast benchmark when evaluating other professors.
Third, we found this contrast effect was even stronger when professors teaching courses with similar names – for example, Organisational Behaviour I and Organisational Behaviour II – were evaluated for effectiveness. An overlap in course names with one of these highly rated professors meant a further drag on the ratings.
Given these findings and the underlying cognitive mechanisms at play, one may wonder whether other similarities lead to stronger contrast effects. For instance, a highly rated female colleague is likely to weigh more heavily on the evaluations of other female employees than on those of their male counterparts.
The effectiveness question
By being aware of the presence and magnitude of peer effects, organisations can take steps to diminish such biases. However, the effectiveness of these remedies may be limited in many organisational settings. Managers could look for other ways to evaluate employee performance that may dampen these contrast effects. For example, can objective measures be used? Do longer-term outcome measures exist? More broadly, in our setting, should a professor’s effectiveness be judged by the “effectiveness” of their students – by whether most students in the class land impressive jobs at the end of their degree, or whether most of them earn As?
Source: INSEAD