The MCC’s internal researchers often work on developing methods, tools and applications to make our exam processes more efficient.
Automated item generation
The MCC has been working with Drs. Mark Gierl and Hollis Lai from the University of Alberta to develop a method of using cognitive maps to more efficiently generate examination content, or Automated Item Generation (AIG).
“Many tests of medical knowledge, from the undergraduate level to the level of certification and licensure, contain multiple-choice items. Although these are efficient in measuring examinees’ knowledge and skills across diverse content areas, multiple-choice items are time-consuming and expensive to create. Changes in student assessment brought about by new forms of computer-based testing have created the demand for large numbers of multiple-choice items. Our current approaches to item development cannot meet this demand.”
Source: Gierl MJ, Lai H, Turner SR. “Using automatic item generation to create multiple-choice test items”. Medical Education. 2012;46:757–765.
How does AIG work?
AIG is a three-stage process:
1. Content specialists (physicians) create a cognitive map that identifies the necessary content for test items.
2. The specialists then develop item models using the cognitive maps. Item models identify a structure for the content of test items.
3. Finally, computer technology systematically combines the elements of the content to generate new test items.
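The three stages above can be illustrated with a toy sketch. This is not the MCC's actual software; the item model, content elements, and diagnosis stems below are all hypothetical, invented only to show how stage three systematically combines the elements identified in stages one and two:

```python
from itertools import product

# Stage 2 output (hypothetical): an item model -- a question stem
# with variable slots whose allowed values come from the cognitive map.
ITEM_MODEL = (
    "A {age}-year-old patient presents with {symptom}. "
    "What is the most likely diagnosis?"
)

# Stage 1 output (hypothetical): content elements a physician might
# identify in a cognitive map for this item model.
ELEMENTS = {
    "age": [45, 60, 75],
    "symptom": [
        "chest pain radiating to the left arm",
        "sudden shortness of breath",
    ],
}

def generate_items(model, elements):
    """Stage 3: systematically combine elements to produce new items."""
    keys = list(elements)
    items = []
    for values in product(*(elements[k] for k in keys)):
        items.append(model.format(**dict(zip(keys, values))))
    return items

items = generate_items(ITEM_MODEL, ELEMENTS)
print(len(items))  # 3 ages x 2 symptoms = 6 generated stems
```

Even this toy model shows why a single cognitive map can yield many items: the item count grows multiplicatively with each content element added.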
What does AIG mean for test committees?
The MCC's content database will require a great deal of new examination content in the near future. AIG will help test committees generate more items more rapidly, and all items generated using AIG will be vetted by test committees.
We therefore anticipate that AIG will shift how test committees perform their work by focusing more on quality assurance and somewhat less on traditional item development activities.
The MCC is also considering creating an AIG Test Committee to specialize solely in cognitive mapping, which is a unique skill. During the research project phase, physicians volunteered to learn about cognitive mapping and test it for item generation.
What is the status of AIG at the MCC?
The use of AIG is now under evaluation at the MCC. It is expected to be handed off to MCC operations personnel in 2015. AIG has the potential to become a valuable complement to the work of test committees by increasing the production of new items.
The learning curve for research participants was initially steep. Now that the physicians who volunteered are trained and experienced in cognitive mapping, they find they can generate hundreds of questions from a single cognitive map.
If you are interested in understanding the research behind AIG in our context, take a look at this brief article: “Assessment: Using automatic item generation to create multiple choice test items”.
The Aggregator
The MCC has developed a computer application, the Aggregator, to make the scoring of clinical decision making (CDM) write-in questions more efficient. By aggregating identical write-in answers, the Aggregator enables scorers to mark many answers at once and to spot obvious outliers. Aggregating responses also opens up the possibility of marking these questions using automated processes.
Why the Aggregator?
The Aggregator dramatically simplifies the process of scoring “write-in” examination responses.
Many CDM questions require short “write-in” answers. For example, take a cohort of 500 candidates. For a given question, 400 of them each write “myocardial infarction” as the answer. In the past, test scorers would see that same word written out 400 times. With the Aggregator, they now see that answer just once and can score all 400 of those responses at the same time.
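The core idea can be sketched in a few lines. This is a minimal illustration, not the MCC's implementation: it assumes only that identical short answers (after light normalization such as trimming and lowercasing) are grouped so a scorer makes one decision per distinct answer rather than one per candidate:

```python
from collections import Counter

# Hypothetical candidate responses to one CDM write-in question,
# mirroring the 500-candidate example above.
responses = (
    ["myocardial infarction"] * 400
    + ["MI"] * 60
    + ["pulmonary embolism"] * 40
)

def aggregate(responses):
    """Group identical answers (after light normalization) so a scorer
    marks each distinct answer once instead of once per candidate."""
    counts = Counter(r.strip().lower() for r in responses)
    # Most frequent answers first; rare answers surface as outliers.
    return counts.most_common()

for answer, n in aggregate(responses):
    print(f"{n:4d}  {answer}")
```

In this sketch, 500 individual responses collapse to three rows, and a single scoring decision on the first row covers all 400 candidates who wrote “myocardial infarction”. Note that a real system would also need to decide whether variants such as “MI” should be merged with the full term, which is a scoring judgment rather than a normalization step.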
What are the limitations?
The Aggregator is not more efficient than traditional marking for “long-answer” write-ins, which do not aggregate well; those answers will still be scored individually. For short write-in questions, however, it greatly simplifies scoring and reduces the time required for marking. In fact, the MCC anticipates that the Aggregator will roughly halve the time required to complete CDM marking.
What is the status of the Aggregator?
The research, development, and initial testing of the Aggregator are now nearing completion. The Aggregator was used successfully for marking the short write-in questions of the fall 2014 MCCQE Part I.