Reliability of Observational Assessment Methods for Outcome-based Assessment of Surgical Skill: Systematic Review and Meta-analyses

Marleen Groenier*, Leonie Brummer, Brendan Bunting, Anthony Gallagher

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer review

Abstract

BACKGROUND: Reliable performance assessment is a necessary prerequisite for outcome-based assessment of surgical technical skill. Numerous observational instruments for technical skill assessment have been developed in recent years, but methodological shortcomings in the reported studies may distort the interpretation of inter-rater reliability. OBJECTIVE: To synthesize the evidence on the inter-rater reliability of observational instruments for technical skill assessment used for high-stakes decisions. DESIGN: A systematic review and meta-analysis were performed. We searched Scopus (including MEDLINE) and PubMed, as well as key publications, through December 2016, for original studies that evaluated the reliability of instruments for the observational assessment of technical skills. Two reviewers independently extracted information on the primary outcome (the reliability statistic), secondary outcomes, and general study information. Where appropriate, we calculated pooled estimates using multilevel random-effects meta-analyses. RESULTS: A total of 247 documents met our inclusion criteria and provided 491 inter-rater reliability estimates. Inappropriate inter-rater reliability indices were reported for 40% of the checklist estimates, 50% of the rating-scale estimates, and 41% of the estimates from other types of assessment instruments. Only 14 documents provided sufficient information to be included in the meta-analyses. The pooled Cohen's kappa was 0.78 (95% CI 0.69-0.89, p < 0.001) and the pooled proportion agreement was 0.84 (95% CI 0.71-0.96, p < 0.001). A moderator analysis was performed to explore the type of assessment instrument as a possible source of heterogeneity. CONCLUSIONS AND RELEVANCE: For high-stakes decisions, the available information was often insufficient to support firm conclusions.
The use of suboptimal statistical methods and incomplete reporting of reliability estimates do not support the use of observational assessment instruments for technical skill in high-stakes decisions. Interpretation of inter-rater reliability should take into account both the reliability index and the assessment instrument used. Reporting of inter-rater reliability needs to be improved through detailed descriptions of the assessment process.
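The abstract reports both a pooled Cohen's kappa and a pooled proportion agreement; the difference between the two indices can be illustrated with a minimal sketch. The rater labels below are hypothetical examples, not data from the review: proportion agreement simply counts matching ratings, while kappa corrects that figure for agreement expected by chance from each rater's marginal label frequencies.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label proportions,
    # summed over the labels used.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail ratings of six performances by two raters.
a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]

p_o = sum(x == y for x, y in zip(a, b)) / len(a)
print(round(p_o, 2))              # proportion agreement → 0.83
print(round(cohens_kappa(a, b), 2))  # Cohen's kappa → 0.67
```

The same data yield a proportion agreement of 0.83 but a kappa of only 0.67, which is why the review treats the choice of reliability index as central to interpreting reported estimates.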

Original language: English
Pages (from-to): 189-201
Number of pages: 13
Journal: Journal of Surgical Education
Volume: 77
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2020

Keywords

  • outcome-based assessment
  • surgical skill
  • inter-rater reliability
  • reporting guidelines
  • Patient Care
  • Medical Knowledge