Inter-rater reliability in clinical assessments: Do examiner pairings influence candidate ratings?

Research output: Contribution to journal › Article › peer-review

24 Citations (Scopus)

Abstract

Background: The reliability of clinical assessments is known to vary considerably, with inter-rater reliability a key contributor. Many of the mechanisms that contribute to inter-rater reliability, however, remain poorly understood. While research in other fields suggests that rater personality can influence ratings, few studies have examined personality factors in clinical assessments. Many schools pair examiners in clinical assessments and ask them to agree a single score, yet little is known about what occurs when paired examiners interact to generate that score. Could personality factors have an impact?

Methods: A fully crossed, quasi-experimental design was employed, with every participating examiner observing and scoring. Candidates' observed scores in a mock clinical assessment served as the dependent variable; the independent variables were examiner numbers, demographics, and personality, with data collected by questionnaire. A purposeful sample of doctors who examine in the Final Medical examination at our institution was recruited.

Results: Variability between scores given by examiner pairs (N = 6) was less than the variability between individual examiners (N = 12). Seventy-five percent of examiners (N = 9) scored below average for neuroticism, and 75% also scored high or very high for extroversion; two-thirds scored high or very high for conscientiousness. The higher an examiner's extroversion score, the smaller the change in his/her score when paired with a co-examiner, possibly reflecting a more dominant role in the process of reaching a consensus score.

Conclusions: The reliability of clinical assessments using paired examiners is comparable to that of assessments with single examiners. Personality factors, such as extroversion, may influence the magnitude of score change an individual examiner agrees to when paired with another examiner. Further studies on personality factors and examiner behaviour are needed to test these associations and determine whether personality testing has a role in reducing examiner variability.
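The central comparison in the Results — whether consensus scores from examiner pairs spread less than scores from individual examiners — can be illustrated with a minimal sketch. The numbers below are invented for illustration only (the study's actual data and analysis are not reproduced here); the sketch simply contrasts the standard deviation of 12 hypothetical individual scores with that of 6 hypothetical paired consensus scores for one candidate.

```python
# Illustrative sketch only: hypothetical scores for a single candidate,
# not data from the study.
from statistics import pstdev

# 12 individual examiners' scores (invented numbers)
individual_scores = [52, 58, 61, 55, 49, 63, 57, 60, 54, 59, 62, 50]

# 6 consensus scores agreed by the same examiners working in pairs (invented)
paired_scores = [55, 58, 56, 59, 57, 56]

# A lower standard deviation for the paired scores mirrors the study's
# finding that paired-examiner scores varied less than individual scores.
sd_individual = pstdev(individual_scores)
sd_paired = pstdev(paired_scores)
print(f"individual SD = {sd_individual:.2f}, paired SD = {sd_paired:.2f}")
```

In practice, inter-rater reliability in such studies is usually quantified with an intraclass correlation or generalisability analysis rather than a raw standard deviation; this sketch shows only the direction of the comparison.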

Original language: English
Article number: 147
Pages (from-to): 147
Journal: BMC Medical Education
Volume: 20
Issue number: 1
DOIs
Publication status: Published - 11 May 2020

Keywords

  • Clinical Competence
  • Education, Medical, Undergraduate
  • Educational Measurement/standards
  • Faculty, Medical/standards
  • Female
  • Humans
  • Male
  • Observer Variation
  • Personality
  • Reproducibility of Results
  • Surveys and Questionnaires
