The National Assessment Collaboration (NAC) examination uses a series of objective structured clinical examination (OSCE) stations to assess international medical graduates’ (IMGs’) readiness to enter a Canadian residency program. Physician examiners evaluate candidates’ performance at each station.

The NAC examination consists of 12 OSCE stations. Of the 12, ten are scored stations and two are non-scored stations being pre-tested and evaluated for future use. On exam day, stations are not identified as scored or non-scored, and candidates are encouraged to do as well as they can on every station.

The NAC examination is a criterion-referenced exam. This means that candidates who meet or exceed the standard will pass the exam regardless of how well other candidates perform on it.

How the OSCE stations are scored

Objectivity is achieved through the use of standardized guidelines for exam administration, the training of physician examiners and standardized patients, and the use of predetermined scoring instruments for OSCE stations.

OSCE stations are scored by physician examiners using a rating scale. Physician examiners observe candidates interacting with standardized patients and provide ratings on up to nine competencies relevant to the presented problem and clinical task.

The nine competencies are:

  • History taking
  • Physical examination
  • Organization skills
  • Communication skills
  • Language fluency
  • Diagnosis
  • Data interpretation
  • Investigations
  • Therapeutics and management

Each station score is the average of the competency ratings that you received within that station.
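The station-score calculation described above can be sketched in a few lines. This is an illustrative example only: the competency names are taken from the list above, but the rating scale and the values are hypothetical.

```python
# Minimal sketch of station scoring: a station score is the average of the
# competency ratings awarded within that station. Rating values here are
# invented for illustration.

def station_score(ratings):
    """Average the competency ratings received within one station."""
    return sum(ratings.values()) / len(ratings)

ratings = {
    "History taking": 4.0,
    "Communication skills": 3.5,
    "Diagnosis": 3.0,
}
print(round(station_score(ratings), 2))  # → 3.5
```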

How the NAC Examination total scores and sub-scores are calculated

Each station is worth the same as every other station. Your total score is the average of your station scores.
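Because every station carries equal weight, the pre-linking total is simply the mean of the ten station scores. A sketch with hypothetical station scores (the subsequent linking adjustment is a statistical process not reproduced here):

```python
# Hypothetical scores for the ten scored stations; each station carries
# equal weight, so the (unlinked) total is their simple mean.
station_scores = [3.5, 4.0, 3.2, 3.8, 3.6, 4.1, 3.9, 3.4, 3.7, 3.3]
total = sum(station_scores) / len(station_scores)
print(round(total, 2))  # → 3.65
```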

The total score is adjusted via a statistical process called “linking” to reflect the level of difficulty of the stations experienced by candidates on a given exam date. Your linked total score is reported on a scale ranging from 0 to 100.

Sub-scores are calculated by converting each competency rating to a percentage and then, for each competency, averaging those percentages across all the stations. Please note that sub-scores, as reported in the Supplemental Feedback Report (SFR), cannot be directly compared to the total score because they are calculated differently and reported on different scales.
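The sub-score calculation can be sketched as follows. The maximum rating of 5 is an assumption made purely for illustration (the actual rating scale is not specified here), and the station ratings are invented.

```python
# Hypothetical competency ratings from two stations. MAX_RATING = 5 is an
# assumed scale maximum used only to convert ratings to percentages.
MAX_RATING = 5

stations = [
    {"History taking": 4, "Diagnosis": 3},
    {"History taking": 5, "Diagnosis": 4},
]

def sub_score(competency):
    """Convert each rating to a percentage, then average across stations."""
    pcts = [s[competency] / MAX_RATING * 100
            for s in stations if competency in s]
    return sum(pcts) / len(pcts)

print(sub_score("History taking"))  # → 90.0
print(sub_score("Diagnosis"))       # → 70.0
```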

How the NAC Examination pass score is established

Every few years, the Medical Council of Canada (MCC) brings together a panel of Canadian physicians to define an acceptable level of performance and establish the pass score for the NAC Examination through a standard-setting exercise. The panel then recommends its pass score to the NAC Examination Committee for approval. (The NAC Examination Committee comprises physicians and medical educators from across the country and reports to the National Assessment Central Coordinating Committee.)

The most recent standard-setting exercise was conducted in spring 2013. The panel recommended a pass score of 65, which was approved by the NAC Examination Committee and has remained in place since the spring 2013 session. The pass score will stay in effect until the next standard-setting exercise occurs.

How the NAC Examination pass/fail decision is made

Your final NAC Examination result (i.e., pass or fail) is based solely on where your total score falls in relation to the pass score. A total score equal to or greater than the pass score is a pass; a total score less than the pass score is a fail. This means all candidates who meet or exceed the pass score will pass the NAC Examination regardless of how well other candidates perform.

How your NAC Examination score can be used to assess relative performance

Using the spring 2013 NAC Examination results, a scale of 0 to 100 was established to have a mean of 70 and a standard deviation of 8. Results from the spring 2013 session and subsequent exam sessions are reported using this scale, allowing us to compare candidate performance across sessions beginning with the spring 2013 session. This means that a score of 78 is one standard deviation above the mean (70 + 8), a score of 86 is two standard deviations above the mean (70 + 8 + 8), and so on.
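The relationship between a reported score and the scale mean described above is a simple standard-deviation calculation, sketched below using the stated scale parameters:

```python
# Scale parameters fixed at the spring 2013 session: mean 70, SD 8.
MEAN, SD = 70, 8

def sds_from_mean(score):
    """How many standard deviations a linked total score sits from the mean."""
    return (score - MEAN) / SD

print(sds_from_mean(78))  # → 1.0
print(sds_from_mean(86))  # → 2.0
```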

NAC Examination scores can be compared easily across sessions (e.g., to compare candidates who took the exam within a year or between years). However, it is important to note that the NAC Examination was scored differently before 2013, meaning that scores from before that year should not be compared to those obtained since then.

How NAC Examination results are presented

The Statement of Results (SOR) includes the candidate’s final result and total score, as well as the examination pass score. Additional information about a candidate’s sub-scores and comparative performance is provided on the SFR in graphic form. The total score is reported on a standard-score scale ranging from 0 to 100. In contrast, the score profile in the SFR displays a candidate’s sub-scores, which indicate relative strengths and weaknesses across the nine competencies on a percent-correct scale. As a result, total scores cannot be compared to the score profile in the SFR because each is reported on a different scale. A sample SOR and a sample SFR, containing mocked-up, random data, depict how information is presented to exam candidates. Both the SOR and the SFR are made available through the candidate’s account.

Enhancements to the SFR implemented in 2016

Effective in 2016, the Supplemental Feedback Report (SFR), which the MCC uses to provide feedback to exam candidates, is being standardized across all examinations administered by the MCC.

The 2016 enhancements to the SFR include combining the display of sub-scores and quintiles into one figure and replacing quintiles with the mean sub-scores of first-time test takers who passed the exam. These changes are intended to help candidates better understand and gauge their performance on the specific test form of the examination they took.