Research in Clinical Assessment grant
To support medical assessment research, the MCC offers research grants to interested faculty members, staff members or graduate students of Canadian medical faculties.

Recipient: 2016-2017 – Walter Tavares

“Why am I doing what I’m doing?”: An exploration into how raters adapt to formative and summative purposes of assessment

Investigator

Walter Tavares, BSc, MSc, PhD

Co-investigators

C. St-Onge
M. Young
G. Gauthier

Abstract

Introduction:

Assessment in health professions education (HPE) has undergone many changes in recent years. Given the range and nature of tasks to be assessed within competency-based education (CBE) contexts, there has been increased reliance on rater-based assessment (RBA). Calls have also been made to consider all assessment as a means to support learning (i.e., foregrounding the formative purposes of assessment). In a very practical sense, formative and summative assessments serve very different purposes, yet they often rely on the same assessors. With increasing reliance on RBA, it is critical to understand how the purpose of assessment (summative versus formative) affects rater behaviour.

Study objective:

The objective of this study is to explore the impact of different assessment purposes (formative vs. summative) on rater performance and behaviour, as reflected in both numerical scores and narrative comments.

Methods:

This two-group, mixed-methods experimental study will require participants to observe and rate three pre-recorded performances under one of two conditions: for the purposes of (1) formative assessment or (2) summative assessment. Raters will score each performance using a quantitative scale and provide narrative comments. Between-group analyses will explore consistencies and differences in the assessment data generated (both quantitative and qualitative) across the formative and summative conditions. Data collection will be facilitated by a custom, in-house, web-based platform.

Implications:

Understanding how raters adapt (or do not adapt) their assessments (quantitative and qualitative assessment data) across the different purposes of assessment (formative versus summative) is of critical importance within a CBE context. With increasing reliance on rater-based assessment and the complexity of assuring competence, we must understand how raters generate the data on which high-stakes decisions are made. Findings from this work are of theoretical and practical importance, as they are likely to inform broadening perspectives of competency assessment, assessment design, scale development and rater training strategies aimed at improving rater-based clinical performance assessments.