Thursday, June 6, 2013

The Myth of the Super Professor

Yesterday I wrote about the myth of the bad professor, a myth that has been a central feature of the narrative of a crisis in higher education.   The myth has been a driving force behind the daily calls for the reform of higher education--preferably by Silicon Valley VCs who are entirely outside of the system (and who, on the whole, seem to understand few of the complexities of higher education in the US).  The topic evidently hit a nerve with many because, in about 24 hours, the post had almost 500 hits.  Among other things, I think that points to the fact that so many of us are fed up with the constant attacks on our abilities and efforts.  It is enough of a challenge to inspire often undermotivated students to do the hard work of learning; the last thing we need is to defend ourselves from the politically motivated attacks of those who have never walked in our shoes and who have no idea what is actually happening on the ground.

Yet, to an extent, we have left ourselves vulnerable to these attacks because, in all the years that college teaching has been professionalized, we've never really developed a coherent and consistent system for measuring student learning in our classes.  In many respects, we ourselves are guilty of perpetuating the myth of the bad professor because we have not adequately challenged the myth of the great professor.  If the bad professor is phoning it in, the great professor is the charismatic sage whose students hang on his every word (it's usually a he).  When faculty are evaluated for annual raises, tenure, and promotion, student evaluations are consulted; teaching awards are often the result of student nominations and recommendations.  At no point in this process is any significant attention paid to student learning.  The best professor is the one whose students love him and want to impress him (and, to be fair, are hopefully inspired to work hard in the course because of this love and desire to impress).  But shouldn't the best professor be the one who produces the highest learning outcomes in a cohort of students?  Shouldn't we be evaluating the quality of instruction not on the performance but on the results of the performance?

Most faculty will agree with the research that repeatedly demonstrates that teaching evaluations are not a very effective measure of instructor quality or student learning.  There are a range of age and gender biases at play.  One recent study found, unsurprisingly, a strong correlation between the attractiveness of the professor, the grade the students thought they were getting, and how positively students evaluated the professor and course.  We all know these things are true, yet we continue to play along.  One of my favorite stories about gaming course surveys goes as follows: during the semester, slip commentary on each of the survey questions into class discussion, telling the students exactly how you are doing whatever the question asks (e.g., repeatedly emphasizing your accessibility).  Do this over and over.  By the end of the semester, the students will have internalized the narrative and will rate you and your course highly.  In my own case, I know that if I want to have outstanding course evaluations, I just have to let everyone think they are getting an A up until the time they submit the evaluation--and then use a high-stakes final exam to sort them out.  I've never actually done this, but I know plenty of people who do.

So what do we need to do to shift the focus from the personality of the professor--and the cult of personality more generally--to evidence-based arguments about instructional quality and student learning?  First, we need to assess our courses and students much more deliberately.  We need to be able to demonstrate in some terms the value added by our courses.  Yes, I know this plays into the rhetoric of the "outcomes" crowd.  It will inevitably undervalue all sorts of difficult-to-evaluate skills like critical thinking; and it assumes that a course's value is fully known at the end of the semester (though this second issue could be addressed with follow-up surveys).  Many of us--myself included--feel more comfortable with the current system, even while acknowledging its imperfections.  At the same time, by essentially allowing ourselves to be evaluated based largely on features of our personality, we are laying the foundation for our demise (because there's always someone else who is even more accessible, entertaining, etc.).  MOOCs perpetuate the notion that good teaching is equivalent to skilled public performance rather than demonstrable learning outcomes.  An important first step in responding to the pedagogical claims of MOOCs is to insist that courses and instructors demonstrate student learning outcomes.

As part of the redesign of my Intro to Rome class this past year, I instituted a thorough system of assessments of the course.  Students had multiple opportunities to comment on all parts of the course.  This was interesting and instructive for a range of reasons, not least because we also had data that allowed for a comparison between their self-reports and direct assessment of various behaviors.  It quickly became clear that there was a significant gap.  One place where this gap was enlightening to all of us was in the assessment of the implementation of an "ethics flag."  Three large-enrollment classes implemented the flag in Fall 2012.  One class was lecture-based, with 220 students.  The second was a mix of lecture and discussion sections, with 300 students.  Mine was flipped, with discussion as the primary feature, with 400 students.  When the students were assessed, they reported the most satisfaction and learning in the lecture-based class, then the mixed class, with my class in last place.  When the flag implementation committee did a direct assessment of their work, the results were exactly the reverse.  They may have "liked" my class the least, but they learned the material much better--not surprising, since I was requiring them to engage with it actively rather than passively.  This spring, the discussion section course and my course were offered again and the results were the same, but with a slightly smaller gap and enough student comments to be pretty sure that perceived workload was inversely correlated with student satisfaction. Alas.

When we saw these results, they made sense--but were also an important reminder that student self-reports often reflect their sense of comfort.  Many of them are still far more comfortable learning passively via lecture than in more active forms.  Indeed, as my campus has worked to "blend" a number of gateway courses through the Course Transformation Program, a consistent feature of the blended courses has been lower student evaluations.  There is a clear and indisputable inverse correlation between level of student engagement required and student satisfaction with the course.  What we are learning is that the techniques that produce the greatest learning gains and best prepare students to progress through degree programs and graduate on time aren't necessarily the courses (and instructors) that they love the most.  Often, this is because such courses require significant and consistent engagement.  At the same time, these instructors are clearly doing their job of producing significant learning gains in their students.

The time has come to abandon the myth of the great professor--the lecturer who keeps the audience rapt in their seats, scribbling down his every word, chuckling at his witty jokes, in awe of his brilliance.  Certainly, many professors are skilled performers and many of the students in their classes learn well. But there are plenty of boring, bad performers who are very good at designing courses and whose students learn at very high levels--even if those students didn't necessarily love the professor.   Teaching is ultimately about student learning.  If we continue to insist on defining our best professors as those who are the most gifted performers, but without any way to quantify what this means, we make it difficult to defend ourselves from the accusations of bad teaching.  If we shift to defining good teaching through demonstrable student learning, we make it very difficult for these outside (and sometimes inside) attacks to carry much weight.  Of course, this shift has to start at the highest levels of the university, by creating a system that evaluates and rewards student learning more than instructor likeability.  Likeability matters.  It matters that students enjoy a class.  But, at the end of the day, what matters most is that they learned the course content.

[disclaimer: my students generally like me and give me high course evaluations.  I'm not writing this because of sour grapes.  Rather, after a decade+ of teaching, I've seen repeatedly that the system is set up to incentivize a kind of teaching that does not always serve the students' best interests.  And, at the moment, this system has left faculty very vulnerable to charges that they aren't "good", with no clear way to rebut such charges.]


  1. Some very astute comments. I run an active class in astronomy, and student resistance to active learning is something that all instructors need to be aware of and develop strategies to address. I am curious about the "ethics flag" component you mentioned, and how the flag implementation committee assessed the students' work. Could you elaborate on that in a future post?

  2. Thanks so much for your comment! At some point in the next few weeks, I'll do a detailed post on the ethics flag and the way we did different kinds of direct and self-report assessment. In a nutshell: the flag committee created a grading rubric: I think it was a scale of 0-3 points, and they were looking at 4-5 different skills related to ethics. They then took random samples of student work (essays on final exams) from each of the instructors piloting the flag in a large-enrollment class (typically, the flag is taught in small, discussion-based seminars).

  3. As I understand it, all regional accreditors (UT has SACS) now require their colleges and universities to work on improving student learning. The accreditors are in turn authorized by the U.S. Dept. of Education, and they are being pressured to have colleges demonstrate their quality. See . For what seems to be happening at UT in this regard, see -- note "quality of student learning." I'm in a different region, and we're required to have "direct measures" of student learning by our accreditor, which is pretty much what you're getting at (I don't know if SACS requires this). Alas, these efforts don't always get down to the department level.

    Finally, great to hear about your efforts.

  4. Yes, we have SACS. There is definitely a move towards direct assessment of student learning but it's slow and nothing is actually in place. I think we'll eventually get there, but it's going to take longer than it should. That said, many of the redesigned freshman-sophomore level courses, like mine, have incorporated direct assessment into the course. I do understand that it's really tricky and there are reasons to be wary of those pushing "outcomes based learning." At the same time, it's crazy that faculty are evaluated without any real attention to their role in improving student learning (there's just an assumption that, if we are teaching, we *must* be improving student learning). Thanks for the encouragement!

  5. Just curious about the "redesigned freshman-sophomore level courses" -- is that some sort of initiative at UT, like the "Top 25 Project" at Miami of Ohio? That might be of interest here at Penn State.

    Good luck getting recognized for actually improving student learning!

  6. Bill,

    UT's Course Transformation Program looks very much like the "Top 25 Project": it targets large-enrollment, multiple-section "gateway" courses (mine is an exception to this, a mini CTP). The basic idea is to get as many frosh into well-designed large courses early on. We also have an initiative through the College of Undergraduate Studies, where most students take at least one small seminar early on. CTP has been a good program, with very strong results coming from the first wave of courses and several more that look very promising. The Vice Provost who oversaw the program is leaving to become Dean of Arts and Sciences at Cornell, unfortunately; but it seems that UT is committed to continuing the program at least in the short run. It's a really smart use of resources, in my view.

  7. Thanks -- very interesting. I hope that "strong results" translates to more learning by students (I didn't see this on the website). It was nice to see Penne Restad mentioned -- I use TBL and she's known for that.

    FWIW, I understand that Cornell's accreditor dinged them for not directly measuring learning and "closing the loop." I wonder if her work on CTP is part of her move?

    I hope that CTP continues -- it sure sounds like a great program.

  8. Your posts are very interesting to me as someone "on the other side," a current undergrad at Notre Dame who often has strong feelings about my professors and the education I'm receiving.

    I had a number of reactions to different things you said but wanted to respond, in particular, to this:
    "There is a clear and indisputable inverse correlation between level of student engagement required and student satisfaction with the course. What we are learning is that the techniques that produce the greatest learning gains and best prepare students to progress through degree programs and graduate on time aren't necessarily the courses (and instructors) that they love the most."

    I have absolutely seen what you are talking about here and think that you make an excellent point about students' satisfaction being related to how much they are pushed outside of their comfort zone by, say, being "forced" to participate more actively or do more work. Not enough students really embrace learning, which requires leaving your comfort zone, and even those that do still complain and resent large volumes of work or difficult assignments pretty regularly. Unfortunately, I think that this is part of a certain culture among students.

    However, I do think that a type of "super professor" exists. I understand the myth you are debunking and propose a different concept of "super professor": one who pushes you but also finds some way to make it enjoyable (even if not until towards the end of the course, when you can appreciate what you've learned). I have had about 2 or 3 of these professors in two years of college taking 5 classes/semester. They have the type of personality of your debunked super professor -- engaging, sometimes funny, energetic -- but they also push their students, hard. Their classes required many, many hours of work per week, and I learned and retained more in each of those classes than I have in any other.

    Perhaps I'm a bit of an exception as a student since I usually enjoy learning, don't mind hard work, and generally look to be pushed, but I'm fairly sure that both of these professors regularly receive very high student evaluations and often talked with my classmates about how awesome they were.

  9. Dear Maribeth, Thanks so much for taking the time to write such a thoughtful response. I absolutely agree with you that there really are "super professors"--those truly excellent professors who seem to inspire everyone to learn, and do so by challenging them and pushing them out of their comfort zone. My own experience is that these are rare beings; and that too often "beloved professors" don't use their wit and ability to entertain to push students to learn. But I certainly don't want to suggest that there don't exist examples of exceptional faculty who can be entertaining, witty, and truly great facilitators of learning. I can imagine that you've encountered several at Notre Dame (a friend of mine teaches there and is certainly one such example). Thanks again for your great comment!
