

Score interpretation guidelines

We have prepared these comprehensive guidelines to help interpret results from each Medical Council of Canada (MCC) examination. General guidelines are provided first, followed by specific guidelines for each exam.

Please keep the following in mind when using and interpreting scores for all MCC exams.

  • When using MCC exam results, adhere to the exam’s stated purpose, the level of knowledge and skills it assesses, and its intended use. Secondary uses of exam results should be approached with caution and remain in line with the purpose for which the exam is designed. No assessment tool is designed for all purposes.
  • All MCC exams are criterion-referenced exams for which pass/fail is determined by comparing an individual candidate’s score to a standard (as reflected by the pass score) regardless of the performance of other candidates. Passing means the candidate has achieved the level of knowledge, skills, and attitudes targeted by the exam.
  • For each exam, candidates may see a different set of questions or cases than other candidates taking the same exam during the same session. These different versions, or “forms”, of the exam are offered for quality assurance purposes. Great care is taken when assembling these forms to ensure they meet all of the test specifications and are as comparable to one another as possible.
  • Additionally, psychometric analyses are performed post-exam to adjust for slight difficulty differences across forms. This is known as “linking”, which allows the comparison of scores over time across forms and sessions.
  • Each exam has a score range and a mean and standard deviation based on a “reference group”, a cohort of candidates who are representative of the candidate population. Score comparisons across time are best made by looking at how far a score is from the mean of the reference group, expressed in standard deviation units. This also applies when comparing scores across changes to the score scale over time.
  • As an example, a Medical Council of Canada Qualifying Examination (MCCQE) Part I score of 310 on the 100 to 400 scale, in place as of 2018, with a mean of 250 and standard deviation of 30 is two standard deviations above the group mean from April 2018. An MCCQE Part I score of 700 on the former 50 to 950 scale with a mean of 500 and a standard deviation of 100 is two standard deviations above the group mean from spring 2015. These two scores represent similar performance (see the calculation sketch following this list).
  • Subscores for the Medical Council of Canada Evaluating Examination (MCCEE) and the MCCQE Part I are reported on the same scale as the total and so are comparable across exam forms and sessions. However, for the National Assessment Collaboration (NAC) Examination and MCCQE Part II, subscores are reported on a different scale than the total score and are thus not comparable across examination forms or sessions.
  • Use total scores rather than subscores. Based on significantly less data, subscores are not as reliable as total scores and should not be used to compare candidate performances. Subscores are provided to candidates as formative feedback on their relative strengths and weaknesses in various competency areas.
  • The MCC cautions against comparing candidates based on small total score differences. The total score is designed to be most precise around the pass score to support a reliable pass/fail decision. Total scores are not designed to provide score precision along a wide range of the score scale. Small score differences should not be over-interpreted as they may fall within the range of values that might reasonably arise as a result of measurement error.
  • Selection decisions should not be based on a single assessment tool. We recommend that MCC exam results are used in conjunction with other sources of information (for example, results from another MCC exam, medical school transcripts, reference letter, other credentials, etc.) to obtain a comprehensive view of a candidate’s qualifications.
  • It is also appropriate to consider MCCQE Part II results (if available and if required) before NAC Examination results as the former targets a higher level of clinical skills.
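
As a minimal illustration of the comparison described in the bullets above, the sketch below standardizes the two MCCQE Part I scores from the example (310 on the 100 to 400 scale and 700 on the former 50 to 950 scale) against the reference-group means and standard deviations quoted in these guidelines. The code is purely illustrative and is not an official MCC calculation.

```python
# Minimal sketch: comparing scores reported on different scales by expressing
# each as the number of standard deviations above its reference-group mean.
# Scale parameters are the ones quoted in these guidelines.

def standardized_score(score: float, ref_mean: float, ref_sd: float) -> float:
    """How many standard deviations a score sits above the reference-group mean."""
    return (score - ref_mean) / ref_sd

# 100 to 400 scale, in place as of 2018 (reference group: spring 2018 cohort)
z_new = standardized_score(310, ref_mean=250, ref_sd=30)

# Former 50 to 950 scale (reference group: spring 2015 cohort)
z_old = standardized_score(700, ref_mean=500, ref_sd=100)

print(z_new, z_old)  # 2.0 and 2.0 -> the two scores represent similar performance
```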

Medical Council of Canada Evaluating Examination (MCCEE)

The MCCEE is a screening examination that assesses the basic medical knowledge and problem solving of a candidate at a level comparable to a minimally competent medical student completing his or her education in Canada and about to enter supervised practice. It is a four-hour computer-based examination that consists of 180 multiple-choice questions (150 scored questions and 30 non-scored pilot questions).

The MCCEE is a criterion-referenced exam for which pass/fail is determined by comparing an individual candidate’s score to a standard (as reflected by the pass score) regardless of the performance of other candidates. It is a prerequisite for international medical graduates (IMGs) to challenge the MCCQE Part I, and has been the minimum requirement for an IMG’s entry into postgraduate medical education in Canada. However, as of 2019, IMGs will be able to challenge the MCCQE Part I directly, without first having to pass the MCCEE.

Each candidate who challenges the MCCEE receives two score reports – a Statement of Results (SOR) and a Supplemental Feedback Report (SFR). The SOR includes a total score and a final result (e.g. pass, fail). The total score is reported on a scale ranging from 50 to 500 with a mean of 271 and standard deviation of 50. As of May 2017, the pass score is 261. It was established by a panel of physician experts from across the country following a rigorous standard-setting exercise in November 2016. Prior to May 2017, the pass score was 250 on the reporting scale of 50 to 500. For candidates who took the MCCEE prior to May 2017, their final result remains valid.

Additional information about a candidate’s performance profile in various competency domains is provided on the SFR. Please note that the subscores in the SFR should not be used to compare candidate performances. They are provided to candidates, in graphical format, as formative feedback on their relative strengths and weaknesses in various competency areas. The MCCEE subscores (though expressed on the same scale as the total score) are based on significantly less data and thus do not have the same level of precision as the total score.

The MCCEE total scores are linked across examination forms and sessions using statistical procedures. This allows the comparison of scores over time across forms and sessions.

The MCCEE score distribution spans a wider range of the score scale than that of clinical skills examinations such as the NAC Examination. However, as with the NAC Examination, the MCCEE is designed to be most precise for total scores near the pass score to support a reliable pass/fail decision. Small score differences should not be over-interpreted as they may fall within the range of values that might reasonably arise as a result of measurement error.
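
To illustrate why a small gap between two total scores may simply reflect measurement error, the sketch below compares two hypothetical MCCEE total scores using an assumed standard error of measurement (SEM). The SEM value is a made-up assumption for illustration only (the MCC does not publish one here), and the calculation shows the general reasoning rather than any official MCC procedure.

```python
# Illustrative sketch only: the SEM below is hypothetical, not an MCC figure.
# Two observed scores are hard to distinguish when their gap is smaller than
# the standard error of the difference between two scores.
import math

HYPOTHETICAL_SEM = 12  # assumed standard error of measurement, in score points

def gap_exceeds_measurement_error(score_a: float, score_b: float,
                                  sem: float = HYPOTHETICAL_SEM,
                                  z: float = 1.96) -> bool:
    """True if the score gap exceeds a 95% band for the difference of two scores."""
    se_difference = sem * math.sqrt(2)  # SE of the difference between two scores
    return abs(score_a - score_b) > z * se_difference

print(gap_exceeds_measurement_error(271, 265))  # False: a 6-point gap may be noise
print(gap_exceeds_measurement_error(271, 230))  # True: a 41-point gap likely is not
```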

The Medical Council of Canada (MCC) cautions against comparing candidates based on small score differences and discourages using exam scores as the sole basis for selection decisions. When comparing candidates for program selection, it is generally appropriate to consider the MCCEE results in conjunction with the results from the NAC Examination as well as other relevant information (e.g., medical school transcripts, application letter, reference letter, etc.) to obtain a comprehensive view of a candidate’s qualifications.

Additionally, please note that MCCEE scores before and after 2008 should not be compared as the test design, test length, scoring method, delivery mode (computer vs. paper-and-pencil), and reporting scale are different.


National Assessment Collaboration (NAC) Examination

The NAC Examination assesses the readiness of international medical graduates (IMGs) for entry into a Canadian residency program, regardless of where they received their undergraduate training. It consists of 12 objective structured clinical examination (OSCE) stations (ten scored stations and two non-scored pilot stations) designed to assess the knowledge, skills and attitudes, as outlined by the MCC Objectives, at the level expected of a recent Canadian medical graduate entering supervised clinical practice in a postgraduate training program.

The NAC Examination is a criterion-referenced exam for which pass/fail is determined by comparing an individual candidate’s score to a standard (as reflected by the pass score) regardless of the performance of other candidates. Passing means the candidate has demonstrated the knowledge, skills and attitudes expected of a Canadian medical graduate entering supervised practice in Canada, regardless of where they received their undergraduate training.

Each candidate who challenges the NAC Examination receives two score reports – a Statement of Results (SOR) and a Supplemental Feedback Report (SFR). The SOR includes a total score and pass/fail final result. The total score is reported on a scale ranging from 0 to 100 with a mean of 70 and standard deviation of 8 based on all candidates who took the exam in March 2013. The current pass score is 65 and was established by a panel of physician experts from across the country following a rigorous standard-setting exercise in March 2013.

Additional information about a candidate’s performance profile in various competency domains is provided on the SFR. Please note that the subscores reported in the SFR should not be used to compare candidate performances. They are provided to candidates, in graphical format, as formative feedback on their relative strengths and weaknesses in various competency areas. The subscores for the NAC Examination are based on significantly less data and thus do not have the same level of precision as the total score. Furthermore, they are on a different metric than the total score and, unlike the total score, are not comparable across examination forms and sessions.

NAC Examination total scores are adjusted using statistical procedures to account for differences in difficulty across examination forms and sessions. As a result, total scores are placed on the same scale to enable score comparison across forms, over time, and application of the same pass score to candidates who took different forms. The NAC Examination is designed to be most precise for total scores near the pass score to support a reliable pass/fail decision. It is not designed to provide score precision along a wide range of the score scale. Small score differences should not be over-interpreted as they may fall within the range of values that might reasonably arise as a result of measurement error.

The Medical Council of Canada (MCC) cautions against comparing candidates based on small score differences and discourages using exam scores as the sole basis for selection decisions. When comparing candidates for residency selection, it is generally appropriate to consider the NAC Examination results in conjunction with the results from the Medical Council of Canada’s Evaluating Examination (MCCEE) as well as other relevant information (e.g., medical school transcripts, application letter, reference letter) to obtain a comprehensive view of a candidate’s qualifications.

 


Medical Council of Canada Qualifying Examination (MCCQE) Part I

The MCCQE Part I is a summative examination that assesses the critical medical knowledge and clinical decision-making ability of a candidate at a level expected of a medical student who is completing his or her medical degree in Canada. The examination is based on the MCC Objectives which are organized under the CanMEDS roles. Candidates graduating and completing the MCCQE Part I normally enter supervised practice.

The MCCQE Part I is a criterion-referenced exam for which pass/fail is determined by comparing an individual candidate’s score to a standard (as reflected by the pass score) regardless of the performance of other candidates. Passing means the candidate has demonstrated the knowledge, skills, and attitudes necessary as part of a requirement for medical licensure in Canada for entering supervised clinical practice.

Each candidate who challenges the MCCQE Part I receives two score reports – a Statement of Results (SOR) and a Supplemental Information Report (SIR).

The SOR includes a total score, the pass score and final result (e.g. pass, fail). The total score is reported on a scale ranging from 100 to 400 with a mean of 250 and a standard deviation of 30 based on all spring 2018 results. The current pass score is 226 and was established by a panel of physician experts from across the country following a rigorous standard-setting exercise in June 2018.

Prior to 2018, there was a different blueprint for the MCCQE Part I. The scale ranged from 50 to 950 with a mean of 500 and a standard deviation of 100. The mean and standard deviation were set using all results from the spring 2015 session.

Because the exams are based on different blueprints, scores from before 2018 cannot be directly compared with those from 2018 onward. What you can do, however, is compare a candidate’s performance to the mean and standard deviation in place for the exam session in question.

As an example, an MCCQE Part I score of 310 on the 100 to 400 scale, in place as of 2018, with a mean of 250 and standard deviation of 30 is two standard deviations above the group mean from April 2018. An MCCQE Part I score of 700 on the former 50 to 950 scale with a mean of 500 and a standard deviation of 100 is two standard deviations above the group mean from spring 2015. These two scores represent similar performance.

Additional information about a candidate’s performance in the new Blueprint domains is provided on the SIR. Please note that the subscores as reported in the SIR should not be used to compare candidate performances. They are provided to candidates, in graphical format, as formative feedback on their relative strengths and weaknesses in various competency areas. The MCCQE Part I subscores (though expressed on the same scale as the total score) are based on significantly less data and thus do not have the same level of precision as the total score.

The MCCQE Part I total scores are equated across examination forms and sessions using statistical procedures. As a result, total scores are placed on the same scale to enable score comparison across forms, over time (i.e., April 2018 onward), and application of the same pass score to candidates who took different forms.
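
These guidelines do not specify which equating method the MCC uses. As a rough illustration of what placing forms on a common scale involves, the sketch below applies a simple mean-sigma linear equating to hypothetical raw scores from two forms of differing difficulty; the data, the method and the function names are illustrative assumptions only.

```python
# Illustrative sketch of linear (mean-sigma) equating: map scores from a new form
# onto the scale of a reference form so the same pass score applies to both.
# Hypothetical data; not the MCC's actual procedure or scores.
from statistics import mean, stdev

def linear_equate(new_form_scores, ref_form_scores):
    """Place new-form scores on the reference form's scale."""
    slope = stdev(ref_form_scores) / stdev(new_form_scores)
    intercept = mean(ref_form_scores) - slope * mean(new_form_scores)
    return [round(slope * s + intercept, 1) for s in new_form_scores]

reference_form = [232, 250, 268, 241, 259]   # scores on an earlier, reference form
harder_new_form = [222, 240, 258, 231, 249]  # same candidates on a harder form: ~10 points lower

print(linear_equate(harder_new_form, reference_form))
# [232.0, 250.0, 268.0, 241.0, 259.0] -> adjusted back onto the reference scale
```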

The MCCQE Part I score distribution spans a wider range of the score scale than that of clinical skills examinations such as the MCCQE Part II. However, as with the MCCQE Part II, it is designed to be most precise for total scores near the pass score to support a reliable pass/fail decision. Small score differences should not be over-interpreted as they may fall within the range of values that might reasonably arise as a result of measurement error.

The MCC cautions against comparing candidates based on small score differences and discourages using exam scores as the sole basis for selection or other decisions.


Medical Council of Canada Qualifying Examination (MCCQE) Part II

The MCCQE Part II assesses the knowledge, skills and attitudes required for medical licensure in Canada prior to entry into independent clinical practice as outlined by the MCC Objectives. The MCCQE Part II consists of 13 objective structured clinical examination (OSCE) stations (12 scored stations and 1-2 non-scored pilot stations).

The MCCQE Part II is a criterion-referenced exam for which pass/fail is determined by comparing an individual candidate’s score to a standard (as reflected by the pass score) regardless of the performance of other candidates. Passing means the candidate has demonstrated the knowledge, skills and attitudes deemed essential as part of medical licensure in Canada for entering independent clinical practice.

Each candidate who challenges the MCCQE Part II receives two score reports – a Statement of Results (SOR) and a Supplemental Feedback Report (SFR). The SOR includes a total score and a pass/fail final result. The total score is reported on a scale ranging from 50 to 950 with a mean of 500 and standard deviation of 100 based on all candidates who took the test administered in spring 2015. The current pass score is 509 and was established by a panel of physician experts from across the country following a rigorous standard-setting exercise in spring 2015.

Additional information about the total and subscores is provided on the SFR. The score profile in Figure 1 of the SFR displays a candidate’s domain subscores, which provide a measure of a candidate’s relative strengths and weaknesses in four areas. Please also note that the subscores as reported in the SFR should not be used to compare candidate performances. They are provided to candidates, in graphical format, as formative feedback on their relative strengths and weaknesses in various competency areas. The subscores for the MCCQE Part II are based on significantly less data and thus do not have the same level of precision as the total score. Furthermore, they are on a different metric than the total score and are not comparable across examination forms and sessions.

MCCQE Part II total scores are adjusted using statistical procedures to account for differences in difficulty across examination forms and sessions. As a result, total scores are placed on the same scale to enable score comparison across forms, over time, and application of the same pass score to candidates who took different forms. The MCCQE Part II is designed to be most precise for total scores near the pass score to support a reliable pass/fail decision. It is not designed to provide score precision along a wide range of the score scale. Small score differences should not be over-interpreted as they may fall within the range of values that might reasonably arise as a result of measurement error.

The Medical Council of Canada (MCC) cautions against comparing candidates based on small score differences and discourages using exam scores as the sole basis for selection decisions.
