The broadening complexity of clinical practice, the limitations of workplace-based assessments, and the affordances of simulation have led educators to adopt simulation widely for assessment. However, inherent technological and other limitations (e.g., space constraints, difficulty creating certain stimuli) can threaten the utility and validity of physical simulation-based assessment. Immersive virtual environments (IVEs) broaden some possibilities and have been proposed as solutions, but validity evidence for their use remains extremely limited. The risk is that the adoption of modern computing for performance-based assessment of clinical competence may stagnate, or alternatively outpace the evidence supporting its use.
This study proposes to explore validity evidence associated with the use of immersive virtual environments for formative and summative assessments of clinical competence. We ask: when compared to physical simulation, in what ways does trainee/candidate and/or rater performance change (i.e., become a source of error) when an IVE is the assessment modality?
We propose a two-phase exploratory within-subjects mixed-methods comparative validity study. Phase 1 involves participants responding to clinical events in a physical simulation followed by the same case in an IVE, or vice versa. Outcomes include differences in non-technical dimensions of performance and in behavior pathways. Phase 2 involves raters scoring and providing feedback on low, intermediate, and strong performances drawn from both physical simulation and the IVE. Outcome measures include differences in scores as well as indicators of rating and feedback quality.
This study proposes to advance:
(1) Outcome measurement processes, by exploring the outcomes generated by IVEs.
(2) Assessment along the continuum, by exploring a modality that spans practitioner levels and competencies.
(3) Feedback from assessment results, by including as an outcome the impact of IVEs on feedback type and quality.
(4) The role and use of technology in and for assessment, by exploring the validity of IVEs in medical education.