Scoring

Overview
How are OSCE stations scored?
How are total scores and sub-scores calculated on the MCCQE Part II?
How is the pass score established?
How is a pass/fail decision made?
What will appear on the Statement of Results (SOR) and on the Supplemental Feedback Report (SFR)?
Enhancements to the SFR implemented in 2016
Sample OSCE cases and score sheets

Overview

The Medical Council of Canada Qualifying Examination (MCCQE) Part II is scored independently of the MCCQE Part I. To be awarded the Licentiate of the Medical Council of Canada (LMCC), a candidate must pass both the MCCQE Part I and Part II.

The MCCQE Part II consists of 13 objective structured clinical examination (OSCE) stations. Of the 13 stations, 12 are scored and one is a non-scored station that is being pretested and evaluated for future use. The non-scored station does not count toward the final score, but it is not identified as such during the exam. Candidates are therefore encouraged to do as well as they can on every station.

The MCCQE Part II, like all exams administered by the Medical Council of Canada (MCC), is a criterion-referenced exam. This means that candidates who meet or exceed the standard will pass the exam regardless of how well other candidates perform on it.

The MCCQE Part II stations are evaluated by physician examiners based on the candidate’s performance at each station.

How are OSCE stations scored?

Objectivity is achieved through the use of standardized guidelines, the training of physician examiners and of standardized patients, and the use of predetermined station-specific scoring instruments for OSCE stations.

OSCE stations are scored by the physician examiner using a checklist of tasks (see the sample OSCE cases and the patient-encounter probe and physical examination station sections below). In almost all stations, the physician examiner also scores selected rating-scale items related to the candidate’s interactions with the patient. For example, the physician examiner may rate the candidate on “Questioning Skills”, “Organization of Physical Examination” and/or “Rapport with Person”.
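
As a rough illustration of how checklist marks and rating-scale items might be combined into a single station score, here is a minimal Python sketch. The function name, the equal weighting of the two components and the five-point rating scale are assumptions made for demonstration only; the MCC’s actual scoring weights are not described here.

```python
# Hypothetical illustration only: the MCC does not publish the exact weighting
# of checklist and rating-scale items, so the equal-weight combination below
# is an assumption for demonstration purposes.

def station_score(checklist_marks, rating_items, rating_max=5):
    """Combine binary checklist marks and rating-scale items into a station percentage."""
    checklist_pct = sum(checklist_marks) / len(checklist_marks) * 100
    rating_pct = sum(rating_items) / (len(rating_items) * rating_max) * 100
    return (checklist_pct + rating_pct) / 2  # assumed equal weighting

# Example: 14 of 18 checklist items performed; three rating items scored 4, 3 and 5 out of 5.
print(round(station_score([1] * 14 + [0] * 4, [4, 3, 5]), 1))  # 78.9
```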

Couplet stations consist of a patient encounter scored by a physician examiner, paired with a task such as reviewing patient-related materials (for example, a patient chart), interpreting X-rays or ECGs, or answering written questions related to the encounter. The candidate may be asked to complete these tasks before or after the patient encounter. Written questions are scored by physician examiners following the exam. For examples of the patient-encounter probe questions and the correct answers, view the patient-encounter probe for the history-taking station or the physical examination station.

Candidates are expected to demonstrate appropriate, ethical and professional behaviour. When filling out the physician examiner checklist, physician examiners are asked whether the candidate’s performance was unsatisfactory and to specify the reason(s) for the unsatisfactory performance. Physician examiners must also answer the question: Did this candidate demonstrate a lapse in professional behaviour? If answering “yes”, the physician examiners must then elaborate on the observed lapse.

How are total scores and sub-scores calculated on the MCCQE Part II?

Each station is worth the same as every other station, and a candidate’s total score is the average of his or her counting station scores. The total score is adjusted through a statistical method called “linking” to reflect the level of difficulty of the stations experienced by candidates on a given exam date. The MCC reports scores on a standard-score scale ranging from 50 to 950.
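
The averaging step can be sketched as follows. The subsequent “linking” adjustment is a psychometric procedure whose details are not given here, so it is omitted, and the station values below are invented for illustration.

```python
# Minimal sketch of the equal-weight averaging of counting station scores.
# The "linking" adjustment for station difficulty is not modelled here.

def raw_total(counting_station_scores):
    """Equally weighted average of the 12 counting station scores."""
    assert len(counting_station_scores) == 12
    return sum(counting_station_scores) / len(counting_station_scores)

# Example with twelve invented station percentages.
stations = [78.9, 65.0, 81.2, 70.4, 59.5, 88.0, 73.3, 66.7, 80.0, 75.5, 62.1, 84.6]
print(round(raw_total(stations), 1))  # 73.8
```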

Sub-scores are calculated by converting the items associated with each domain (i.e., checklist, rating-scale, and written or oral question items) to a percentage across the 12 counting stations. There are four domain sub-scores reported on the MCCQE Part II: Data Acquisition, Problem Solving and Decision Making, Patient/Physician Interaction, and C²LEO (Considerations of the Cultural-Communication, Legal, Ethical, and Organizational aspects of the practice of medicine).
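
A hedged sketch of the sub-score arithmetic: points earned on the items tagged to each domain are summed across the 12 counting stations and converted to a percentage. The item-to-domain point totals below are invented for illustration.

```python
# Sketch only: domain sub-scores as percentages of points earned on the items
# associated with each domain across the 12 counting stations.

def domain_subscores(items):
    """items: iterable of (domain, points_earned, points_possible) tuples."""
    totals = {}
    for domain, earned, possible in items:
        e, p = totals.get(domain, (0, 0))
        totals[domain] = (e + earned, p + possible)
    return {d: round(e / p * 100, 1) for d, (e, p) in totals.items()}

example = [
    ("Data Acquisition", 40, 50),
    ("Problem Solving and Decision Making", 22, 30),
    ("Patient/Physician Interaction", 18, 20),
    ("C2LEO", 7, 10),
]
print(domain_subscores(example))
# {'Data Acquisition': 80.0, 'Problem Solving and Decision Making': 73.3,
#  'Patient/Physician Interaction': 90.0, 'C2LEO': 70.0}
```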

How is the pass score established?

The MCC conducts a standard setting exercise every three to five years to ensure that the standard and the pass score remain appropriate. Standard setting is a process used to define an acceptable level of performance and to establish a pass score.

In spring 2015, the MCC completed a rigorous standard setting exercise based on expert judgments from a panel of 20 physicians from across the country, representing different faculties of medicine, specialties, and years of experience supervising students and residents. The Borderline Group Method used by the MCC has been successfully employed and defended in a number of large-scale exam programs around the world. Following the standard setting exercise, the recommended pass score was reviewed and approved by the Central Examination Committee (CEC) in June 2015. The technical report on the standard setting exercise for the MCCQE Part II provides additional information.
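
In general terms, the Borderline Group Method sets the cut score at the mean score of the performances that examiners judge to be borderline. The sketch below shows that general logic with invented data; it is not a reproduction of the MCC’s exact procedure.

```python
# General Borderline Group logic (not the MCC's exact procedure): examiners give
# each performance a global rating, and the recommended cut score is the mean
# score of the performances rated "borderline".

def borderline_group_cut_score(rated_performances):
    """rated_performances: list of (global_rating, score) pairs."""
    borderline = [score for rating, score in rated_performances if rating == "borderline"]
    return sum(borderline) / len(borderline)

data = [("pass", 82), ("borderline", 61), ("fail", 44),
        ("borderline", 58), ("pass", 77), ("borderline", 64)]
print(round(borderline_group_cut_score(data), 1))  # 61.0
```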

Using the spring 2015 results of all MCCQE Part II candidates, the new 50 to 950 scale was established to have a mean of 500 and a standard deviation of 100. Results from the spring 2015 and subsequent exam sessions are reported on this new scale, allowing candidate performance to be compared across sessions beginning with spring 2015. For example, if a candidate’s score is 600, the candidate’s ability level is one standard deviation above the spring 2015 group mean. On this new scale, the pass score recommended by the standard setting panel and approved by the CEC is 509. This pass score will remain in place until the next scheduled standard setting exercise in 2018.
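
The reporting scale described above behaves like a linear standard-score transformation anchored to the spring 2015 candidate distribution. The sketch below illustrates that relationship with invented reference values; the MCC’s actual score conversion is not reproduced here.

```python
# Illustrative standard-score conversion, assuming a simple linear scaling anchored
# to the spring 2015 reference distribution (mean 500, SD 100 on the reported scale).
# The reference mean and SD used in the example are invented.

def to_standard_scale(raw, ref_mean, ref_sd, scale_mean=500, scale_sd=100, lo=50, hi=950):
    z = (raw - ref_mean) / ref_sd          # distance from the 2015 mean in SD units
    return max(lo, min(hi, round(scale_mean + scale_sd * z)))

# A raw score one reference standard deviation above the 2015 mean maps to 600.
print(to_standard_scale(raw=80.0, ref_mean=72.0, ref_sd=8.0))  # 600
```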

Prior to 2015, the pass score for the MCCQE Part II was 475 on the old 50 to 950 scale. The new pass score (509) translates to 494 on the old scale.

How is a pass/fail decision made?

A candidate’s final result (i.e., pass or fail) is determined by his or her total score and where it falls in relation to the examination pass score. A total score equal to or greater than the pass score is a pass, and a total score less than the pass score is a fail. The candidate’s performance is judged in relation to the examination pass score, not on how well other candidates performed.
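
The decision rule itself reduces to a single comparison against the published pass score (509 on the current scale), as in this minimal sketch.

```python
# Pass/fail rule: compare the total score with the examination pass score.
PASS_SCORE = 509  # current pass score on the 50-950 scale

def final_result(total_score):
    return "PASS" if total_score >= PASS_SCORE else "FAIL"

print(final_result(509), final_result(508))  # PASS FAIL
```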

What will appear on the Statement of Results (SOR) and on the Supplemental Feedback Report (SFR)?

The SOR includes the candidate’s final result and total score, as well as the pass score. Additional information about the total score and sub-scores is provided on the SFR. The total score is reported on a standard-score scale ranging from 50 to 950. In contrast, the score profile in Figure 1 of the SFR displays a candidate’s domain sub-scores, indicating his or her relative strengths and weaknesses in the four domains. Because the two are reported on different scales, total scores cannot be compared directly to the score profile in the SFR. A sample of the SOR and a sample of the SFR, containing mocked-up, random data, depict how information is presented to exam candidates. Both the SOR and the SFR are made available through the candidate’s physiciansapply.ca account.

Enhancements to the SFR implemented in 2016

Beginning in 2016, the MCC implemented various enhancements to the Supplemental Feedback Report (SFR). The SFRs, which the MCC uses to provide feedback to exam candidates, are being standardized across all MCC examinations.

The change to the SFR involves combining the display of sub-scores and quintiles into one figure and replacing the quintiles with the mean score of first-time test takers who passed. The change is intended to enable candidates to better understand and gauge their performance on the specific test form of the examination they have taken.

Sample OSCE cases and score sheets