Clinical assessment research grant
The Medical Council of Canada (MCC) awards grants for research in the field of medical assessment. Faculty members, staff, and graduate students at Canadian faculties of medicine are eligible for these grants.

2016-2017 – Valérie Dory

Advancing longitudinal work-based assessment systems: Assessing progress in clerkship

Researcher

Valérie Dory, MD, PhD, MMedEd

Co-investigators

C. Gomez-Garibello
M. Young
B. Cummings
S. Cruess
R. Cruess

Summary

Competency-based medical education is underpinned by the notion that each learner has an individual trajectory towards competence that can be measured, documented, and used to facilitate development. During clerkship, medical students develop both discipline-specific and cross-disciplinary competencies, the latter being primarily related to the ‘intrinsic’ roles (e.g. collaborator, professional, communicator). Assessing a learner’s trajectory in the acquisition of cross-disciplinary competencies is a significant challenge. While work-based data collection tools are available and can generate the data from which trajectories can be monitored, psychometric models to meaningfully combine and interpret data collected at multiple time points are lacking. Further, assessor behaviour may support or threaten the validity of data collected regarding learner progress.

The aims of this project are to 1) develop analytical methods to support the combination and interpretation of assessment data collected in a longitudinal work-based assessment system, and 2) shed light on the impact of assessors on the validity of the assessment data collected.

A mixed methods design will be used. Data collected in the first implementation of a longitudinal work-based assessment system using the Professionalism Mini-Evaluation Exercise (P-MEX) across a 48-week year-3 clerkship will be analysed (anticipated n = 12 observations × 185 students = 2,220 observations). Using these data, we will investigate our ability to identify sources of measurement error and different student trajectories, and the predictive ability of scores (relying on generalizability theory and multilevel analyses to take account of performance improvement, growth curve analyses, and various models to examine the predictive power of 4, 6, and 8 ratings). To complement these quantitative and measurement approaches, we will conduct focus groups with purposively sampled assessors (frequently selected assessors in the system, from different professional backgrounds) to explore their perceptions of how they rate student performance, and specifically how the period of the clerkship year (i.e. how early or late in that educational period) may factor into their judgements. Transcripts will be analysed using inductive thematic coding and constant comparison.
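To illustrate the kind of growth-curve modelling described above, the sketch below fits a random-intercept, random-slope multilevel model to simulated clerkship scores and derives a rough reliability coefficient in the spirit of generalizability theory. It is a minimal sketch only: the simulated data, the variable names (student, week, score), and the model specification are illustrative assumptions, not the project's actual analysis plan or the P-MEX scoring scale.

```python
# Minimal sketch (not the study's analysis code) of growth-curve modelling for
# longitudinal work-based assessment scores. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate roughly 12 observations per student across a 48-week clerkship
n_students, n_obs = 185, 12
students = np.repeat(np.arange(n_students), n_obs)
weeks = np.tile(np.linspace(1, 48, n_obs), n_students)

# Each student gets an individual baseline (intercept) and rate of growth (slope)
intercepts = rng.normal(3.0, 0.4, n_students)
slopes = rng.normal(0.01, 0.005, n_students)
scores = (intercepts[students]
          + slopes[students] * weeks
          + rng.normal(0, 0.3, n_students * n_obs))

df = pd.DataFrame({"student": students, "week": weeks, "score": scores})

# Random-intercept, random-slope model: the fixed effect of week captures the
# average trajectory; the random effects capture between-student differences
model = smf.mixedlm("score ~ week", df, groups=df["student"], re_formula="~week")
result = model.fit()
print(result.summary())

# Crude variance decomposition in the spirit of generalizability theory:
# between-student (universe score) variance vs. residual (error) variance
var_student = result.cov_re.iloc[0, 0]   # random-intercept variance
var_resid = result.scale                 # residual variance
g_like = var_student / (var_student + var_resid / n_obs)
print(f"Reliability-like coefficient for a {n_obs}-observation composite: {g_like:.2f}")
```

In a fuller analysis, additional facets (e.g. assessor, rotation) would enter the variance decomposition, and the number of observations in the composite could be varied (e.g. 4, 6, or 8 ratings) to examine how reliability and predictive power change.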

The findings from this work will provide an analytical framework to support the interpretation of numerical assessment data collected longitudinally. This framework could be applied to different types of data, e.g. from work-based assessments, progress OSCEs, or progress testing. It will also further current research on assessor cognition and lay the groundwork for interventions to improve the data collection process, be it directly through improving the assessment tools (e.g. aligning them to assessors’ mental models of performance) or through supports such as faculty development. As such, our findings will contribute to the creation of more robust longitudinal assessment systems, which are a key component of competency-based medical education.