Sunday, October 28, 2012

When Technology Doesn't Keep Up with Pedagogy

I spent my Saturday hand-grading six scantron questions from my recent midterm exam. These were questions that asked students to mark all correct answers, and in five of the six, there were multiple correct answers. From my reading about assessment, and from my own experience with a Coursera mythology course, I had learned that such questions are highly effective at producing an accurate picture of what students actually know. They largely eliminate the advantage of being a "good guesser" with only partial knowledge of the course material, and they reward deep knowledge. Pedagogically, they are a great question type.

For a class of 400 with limited TA support, however, they are a huge burden to grade. Each question has to be scored by hand, with partial credit awarded for each bubble, and then the points have to be totaled. Those points then have to be added to whatever the student scored on the regular scantron. In our case, that means creating a new Excel document with the combined scantron score, which can then be distributed to the 2 grad TAs and 2 student TAs who are grading the exams. In other words, hours and hours of extra work, all because our scantron machine (apparently) can't be programmed to read multiple correct answers (and because the person running it had no idea whether there was a workaround).
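For what it's worth, the bubble-by-bubble arithmetic is mechanical enough that a short script could do the totaling once the marks are typed in. Here is a minimal sketch in Python, assuming an invented per-bubble partial-credit rule and made-up file names, column names, and answer key; a real scantron export would differ.

```python
import csv

# Hypothetical answer key: question id -> set of correct option letters.
# (Five of the six questions have multiple correct answers, as on the exam.)
KEY = {
    "Q1": {"A", "C"},
    "Q2": {"B", "D", "E"},
    "Q3": {"A"},
    "Q4": {"B", "C"},
    "Q5": {"A", "D"},
    "Q6": {"C", "E"},
}
OPTIONS = "ABCDE"

def partial_credit(marked, correct, points=2.0):
    """Score one mark-all-that-apply question bubble by bubble.

    Each bubble is worth points/5: credit for marking a correct
    option, or for leaving an incorrect one blank.
    """
    per_bubble = points / len(OPTIONS)
    right = sum(1 for opt in OPTIONS if (opt in marked) == (opt in correct))
    return per_bubble * right

# A student who marked A and D against a key of {A, C} gets credit for
# A (correctly marked) and for B and E (correctly left blank): 3/5 of 2 points.
print(partial_credit({"A", "D"}, {"A", "C"}))  # 1.2

# Total the hand-scored questions, add the machine-scored portion, and
# write out the combined file for the TAs. Column names are invented.
with open("scantron.csv", newline="") as f, open("totals.csv", "w", newline="") as out:
    reader = csv.DictReader(f)  # assumed columns: student_id, machine_score, Q1..Q6
    writer = csv.writer(out)
    writer.writerow(["student_id", "total"])
    for row in reader:
        hand = sum(partial_credit(set(row[q]), KEY[q]) for q in KEY)
        writer.writerow([row["student_id"], float(row["machine_score"]) + hand])
```

Typing the marks in is still hand work, of course; a script like this only saves the totaling and the extra Excel step.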

I am beyond frustrated. I plan to keep using this question type, but I will move a few of these questions to the short-answer part of the exam. That will add some extra grading for the TAs, but at least it keeps the hand-grading manageable. Of course, I now realize yet again why students looking for an easy class flock to very large courses: they figure it's impossible for us to make them keep up with the material or to test their depth of knowledge. They know we don't have the TA support, and they count on the fact that many of the faculty teaching these large courses are underpaid adjuncts and lecturers who have made their peace with the limitations imposed on their teaching. It is mind-boggling to me that, in this day and age, a scantron machine can't perform such a simple task, one that would let instructors ask machine-gradeable questions that come much closer to measuring a student's real knowledge.

I'd love to hear any solutions people have found, and I'd love to hear about other types of MC questions one can ask. I do ask the traditional "EXCEPT" questions, which are related to these, but of course those are all-or-nothing. What I liked about the "mark all of the above" format is that students could earn partial credit. I suppose I could spread the answers over five blanks and have students mark them in order, with "none" as one of the options? But it seems like that would be really confusing, even with a careful explanation of how to answer the question.
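To make that worry concrete, here is a toy sketch; the layout, the filler letter, and the scores are all invented. If each of the five blanks is scored as an ordinary single-answer line, a student who knows exactly the right set of answers but bubbles them out of order still loses points.

```python
# The "five blanks, marked in order" idea: a single-answer machine can
# only compare each blank against the key line by line, so the order of
# the marks gets graded along with the knowledge behind them.
NONE_FILLER = "E"  # assumed convention: E on these lines means "no more answers"

def machine_score(blanks, key_blanks):
    """Line-by-line comparison, which is all a single-answer machine can do."""
    return sum(1 for b, k in zip(blanks, key_blanks) if b == k)

key = ["A", "C", NONE_FILLER, NONE_FILLER, NONE_FILLER]
print(machine_score(["A", "C", "E", "E", "E"], key))  # 5 -- perfect score
print(machine_score(["C", "A", "E", "E", "E"], key))  # 3 -- same answers, out of order
```

So the machine could read that encoding, but only by penalizing the order of the marks rather than what the student knows, which is probably exactly why it feels confusing.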
