GENERATING EXAM QUESTIONS: DOES IT IMPROVE STUDENTS’ EXAM PERFORMANCE?
Generating questions on reading material has been shown to improve students’ comprehension of that material. Does generating possible exam questions on a lecture similarly improve undergraduate students’ exam performance?
The purpose of our research was to examine whether generating exam questions during tutorials enhances exam performance. In tutorials, small groups of students meet with an advanced student to recapitulate the content of a lecture in any form. Because of the small group sizes, tutorials are costly and require organizational effort; evaluating their effectiveness is therefore advisable. In our case, one “supertutor” organized six tutors, each of whom worked with 11 to 18 students for 90 minutes per week over one semester (11 weeks). The central didactic means of the tutorials was that participating students used the content of the lecture to self-generate possible exam questions in multiple-choice format. Afterwards, the participants and the advanced student went through those questions together and discussed the correct and incorrect answer options. We expected that doing so would enhance students’ in-depth understanding of the content of the lecture: Students not only had to know and precisely formulate the correct answer, they also needed to a) scan the overall content of the lecture for possible questions, b) cluster the content into separate topics and questions, c) prioritize and judge the likelihood that any one topic or question was exam-worthy, d) consider what they thought the lecturer found central, and e) generate false answer options (“distractors”). We also expected that discussing the content and quality of each question and its answer options during tutorials would correct false assumptions, clarify misunderstandings, and thus deepen understanding of the content.
Of the 383 psychology students who took intermediate exams, 62 participated in an optional (facultative) tutorial on social psychology in which advanced students helped them generate multiple-choice questions. Multiple choice was the format used in the intermediate exam on social psychology at the end of the semester. The exam questions were generated exclusively for the exam and were not given to students beforehand. Our data show that students who participated in the tutorial achieved significantly better results in the social psychology exam, but not significantly better overall results in their intermediate exams. This indicates that generating questions during tutorials specifically prepared them for the exam in the subject in which they were tutored.
Self-report data from 33 participants describe students’ motives for participation, their expectations beforehand, their degree of satisfaction afterwards, and their assumptions about which didactic means and processes most influenced exam performance. We will discuss our assumption that the content of the lecture was understood more deeply, and why we think that self-selection of the students participating in the tutorials did not systematically influence these results. We will also propose how to further pursue research on generating questions.
© 2008 Victor Karandashev