Third International Conference on the Teaching of Psychology
ICTP-2008
Kevin J. Payne,
Andrew T. Johnson, &
Brian J. Cowley,
Park University,
Parkville, USA




ARTIFICIALLY INTELLIGENT ASSESSMENT OF SOCIAL PSYCHOLOGY ESSAYS

In 2006-07, our BS in Social Psychology underwent its largest curriculum revision since 1972. The revised curriculum emphasizes technical skills in social psychology and introduces a series of comprehensive exit examinations. However, our social psychology program currently serves 3,285 majors at 43 campus centers across the United States and online, and it is difficult to allocate enough faculty to attend to this volume of highly distributed students as they pass through the comprehensive examination system. We therefore decided to employ an artificially intelligent expert system at two crucial points in student assessment: (1) to provide formative evaluation of student research proposals in their final required methodology course, and (2) to grade the essay portion of their comprehensive exit examinations.

As part of the SO308 Principles of Social Research course, students must write a detailed proposal for a real research project (students often subsequently conduct this research as Senior Projects, workplace projects, or the basis for Master’s Theses). Critiquing research proposals, however, is a time-consuming and specialized process, and grading 400 sets of essay answers each year for the comprehensive exit examination is likewise labor-intensive. We also wanted to apply absolute and consistent standards in assessing these crucial student products. We therefore decided to become the first major adopter of a new expert system designed to evaluate and provide detailed feedback on natural-language texts (SAGrader from IdeaWorks). Unlike other essay-grading software, SAGrader works by comparing student writing to semantic network maps created by content experts. It detects whether students present concepts in the proper logical relations, whether they have included the necessary related concepts and their correct connections, and whether they have included material that does not apply to their topic.

In the methodology course, students have access to SAGrader for the entire term and are encouraged to submit multiple drafts of their proposals for instant feedback and improvement. Students who use this system earn average scores 1.5 to 2 letter grades higher than those who do not (the final proposal is submitted to the course instructor for grading). In the comprehensive exit examination system, SAGrader allows us to construct exams on the fly from pre-established pools of essay questions covering basic psychological, sociological, and technical concepts and their applications. SAGrader also includes a “challenge” process through which students may question its assessment of their work. A challenged submission is flagged, and a human content expert evaluates the claim. If the claim is upheld, the expert adjusts the relevant concept map, all outstanding essays are automatically re-assessed, and new feedback is generated; each challenge thus improves the expert system’s subsequent assessments. Incorporating this system into our assessment scheme has improved our ability to provide consistent, detailed, and effective feedback to students while reducing workload, so that faculty can focus more on interacting with students.
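To make the concept-map idea concrete, the sketch below shows one naive way an automated grader could check an essay against an expert semantic network of (concept, relation, concept) triples. This is only an illustration of the general approach described above: SAGrader’s actual matching is proprietary and far more sophisticated, and all names, example data, and the keyword-based detection here are assumptions made for illustration.

```python
"""Illustrative sketch only: scoring an essay against an expert concept map,
i.e., a semantic network of (concept, relation, concept) triples.  This is
NOT SAGrader's implementation; the data and the naive keyword matching are
assumptions used to illustrate the general idea."""

from dataclasses import dataclass

# Hypothetical expert concept map for one essay question: each triple says
# the essay should connect two concepts with a particular relation.
EXPERT_MAP = {
    ("independent variable", "is manipulated by", "researcher"),
    ("dependent variable", "is measured as", "outcome"),
    ("random assignment", "controls for", "confounding variables"),
}

# Wordings a grader might accept for each canonical concept (assumed data).
SYNONYMS = {
    "independent variable": {"independent variable"},
    "dependent variable": {"dependent variable"},
    "researcher": {"researcher", "experimenter"},
    "outcome": {"outcome", "result"},
    "random assignment": {"random assignment", "randomization"},
    "confounding variables": {"confounding variables", "confounds"},
}


@dataclass
class Feedback:
    present_concepts: set
    missing_concepts: set
    satisfied_relations: set
    missing_relations: set


def concepts_in(essay: str) -> set:
    """Very naive detection: a concept counts as present if any accepted
    wording appears in the (lower-cased) essay text."""
    text = essay.lower()
    return {
        concept
        for concept, forms in SYNONYMS.items()
        if any(form in text for form in forms)
    }


def assess(essay: str) -> Feedback:
    """Compare detected concepts against the expert map.  A relation is
    (crudely) treated as satisfied when both of its concepts are present."""
    found = concepts_in(essay)
    required = {c for (a, _, b) in EXPERT_MAP for c in (a, b)}
    satisfied = {t for t in EXPERT_MAP if t[0] in found and t[2] in found}
    return Feedback(
        present_concepts=found,
        missing_concepts=required - found,
        satisfied_relations=satisfied,
        missing_relations=EXPERT_MAP - satisfied,
    )


if __name__ == "__main__":
    sample = ("The experimenter manipulates the independent variable and "
              "measures the dependent variable as the outcome.")
    report = assess(sample)
    for a, rel, b in report.missing_relations:
        print(f"Feedback: explain how '{a}' {rel} '{b}'.")
```

Running the example flags the missing discussion of random assignment and confounding variables, which mirrors the kind of concept-and-relation feedback described above; a real system would also flag inapplicable material and allow experts to revise the map after a successful student challenge.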


