Multiple-choice questions don’t belong in college. They’re often ineffective as a teaching tool, they’re easy for students to cheat on, and they can exacerbate test anxiety. Yet more professors seem to be turning to the format these days, as teaching loads and class sizes grow, since multiple-choice quizzes and tests can be easily graded by machines.
That’s the case being made by two instructional designers at different colleges who are encouraging professors to try alternative assessment methods. The pair, Flower Darby, from Northern Arizona University, and Heather Garcia, from Foothill College, presented an eye-catching poster at the Educause Learning Initiative conference this year with the title, “Multiple-choice quizzes don’t work.”
One solution, says Garcia, is for professors to give “more authentic” assignments, like project-based work and other things that students would be more likely to see in a professional environment. After all, she notes, “you’re never going to encounter multiple-choice quizzes on the job somewhere.”
The pitch can be a tough sell to busy professors, though, at least at first. “When faculty hear this, they oftentimes feel overwhelmed by the prospect of doing all that grading,” says Garcia.
But she and her colleague argue that there is a way to assign project-based or other rich assessments without spending late nights holding a red pen.
One approach they recommend is called “specifications grading,” where professors set a clear rubric for what students need to achieve to complete the assignment, and then score each submission as either meeting those specifications or not. “It allows faculty to really streamline their grading time,” says Darby, of Northern Arizona. “You can use an LMS rubric tool, and click, click, click, you have a grade.” While it takes professors just moments to check each one, the assignments require a “much higher level of work” from students, because they are spending time on a project rather than hunting for or guessing at a few correct answers.
The pair credit this idea to Linda B. Nilson, who wrote an entire book about the approach and regularly gives workshops on it. The book’s subtitle lays out the approach’s promise: “Restoring Rigor, Motivating Students, and Saving Faculty Time.”
For Nilson, who is director emerita of the Office of Teaching Effectiveness and Innovation at Clemson University, the problem isn’t multiple-choice questions, per se, but the broader issue of grade inflation.
“Our entire education system top to bottom has gotten very sloppy,” she said in a phone interview this week. “We have not been clear about our standards, and the standards we have put out there haven’t been properly enforced.”
One key to her approach is to set objective measures for each possible grade a student can get on the assignment (or it can be done pass/fail). Work that meets a given set of criteria, or specs, gets a passing grade, while work that doesn’t meet the criteria fails.
“The specs may be as simple as ‘completeness’: for instance, all the questions are answered, all the problems attempted in good faith or all the directions followed (that is, the work satisfies the assignment), plus the work meets a required length,” she wrote in a 2016 essay in Inside Higher Ed. “Or the specs may be more complex: for instance, the work fulfills the criteria you set out for a good literature review, research proposal or substantial reflection.”
That way the work comes just once, in creating the rubric, rather than in marking up each paper. “If you want to write comments, knock yourself out, but you don’t have to write the kind of comments you did before to justify the grade,” she says. “Most of what we ask undergraduates to do follows a template, so what we have to do is lay out that template.”
Defending Multiple-Choice
To be fair, not everyone is so down on multiple choice. In fact, two scholars wrote a book a few years ago about the format’s benefits, called “Learning and Assessing with Multiple-Choice Questions in College Classrooms.”
“There is a lot of bad multiple-choice testing out there, but it doesn’t mean that multiple choice is bad,” says Jay Parkes, one of the book’s coauthors and a professor of educational psychology at the University of New Mexico at Albuquerque.
He says he’s also seen an increase in the use of multiple choice by professors who do it to save time in grading, and he’s trying to spread the word about how to make the format more effective. “Just because you’ve selected multiple choice doesn’t mean you’ve given up on driving student learning,” he says.
There are rules to writing good multiple-choice questions. For one thing, be careful not to give away the right answer with grammatical cues, like making the correct answer the only one that fits the structure of the sentence. Parkes also tells professors to craft their wrong answers carefully, so that they can get a sense of where students are in their learning from which answer they chose. A carefully chosen wrong answer, called a “distractor,” is one that has a plausible rationale behind it but isn’t correct.
For instance, in a math problem involving adding large numbers, a professor could make one of the choices the number that the student would get if they forgot to carry (say, 122 instead of 132 for 47 + 85). If professors notice that several students mark that answer, it may be time to go over that concept again. “Even if I’ve got a class of 275, I can learn a lot about what they know and don’t know, and let that guide what I do the next day,” he says.
He also suggests telling students clearly in advance what will be covered on the test, so they can focus their preparation. With a well-designed multiple-choice test, the test itself can serve a teaching function as students puzzle over the choices.
“There’s that sense that testing is what happens when the learning’s done, but our view is that the learning is something that happens even when the testing happens,” he says. “We treat it like a scientific measurement rather than an extension of our teaching. That change in mindset opens up a whole realm of possibility to use them in a rich way.”