Purpose: Social judgment research suggests that rater unreliability in performance assessments arises from raters' differing inferences about the performer and the underlying reasons for the performance observed. These varying social judgments are not entirely idiosyncratic but, rather, tend to partition into a finite number of distinct subgroups, suggesting some "signal" in the "noise" of interrater variability. The authors investigated the proportion of variance in Mini-CEX ratings attributable to such partitions of raters' social judgments about residents.
Method: In 2012 and 2013, physicians reviewed video-recorded patient encounters for seven residents, completed a Mini-CEX, and described their social judgments of the residents. Additional participants sorted these descriptions, which were analyzed using latent partition analysis (LPA). The best-fitting set of partitions for each resident served as an independent variable in a one-way ANOVA to determine the proportion of variance explained in Mini-CEX ratings.
Results: Forty-eight physicians rated at least one resident (34 assessed all seven). The seven sets of social judgments were sorted by 14 participants. Across residents, 2 to 5 partitions (mode: 4) provided a good LPA fit, indicating that subgroups of raters made similar social judgments, although the causal explanations offered for each resident's performance differed across subgroups. The partitions accounted for 9% to 57% of the variance in Mini-CEX ratings across residents (mean = 32%).
Conclusions: These findings suggest that multiple "signals" do exist within the "noise" of interrater variability in performance-based assessment. It may be valuable to understand and exploit these multiple signals rather than try to eliminate them.