Sunday, March 24, 2013

Transforming Classes, Transforming Faculty

One of the most interesting facets of the transformation movement in higher ed right now is the way that, in the process of transforming individual courses, the instructors (and members of the teaching team) are also being transformed.  This is even true of MOOC instructors.  I was excited to read about the case of Dr. M. Ronen Plesser, a physics professor at Duke.  Said Dr. Plesser, "I found that producing video lectures spurred me to hone pedagogical presentation to a far higher level than I had in 10 years of teaching the class on campus."  I had exactly the same experience when I recorded the lectures for my Intro to Rome class in Summer 2012. 

So what, exactly, is happening?  At least a couple of things: first, changing the platform and audience of a course encourages the instructor to think hard about how the course actually works and how students learn the content.  Second, whether collecting data from a massive audience or simply getting increased and useful feedback via surveys, these courses and the students in them are being studied.  Instead of the fairly useless end-of-semester surveys, students are completing surveys and other forms of feedback throughout the semester; and the questions are far more targeted to the specific learning objectives and strategies of the individual course.  We are getting a far more instructive picture of how, exactly, our students are learning (or not learning), and what they need to improve their learning.  In basic terms, faculty are finally getting an opportunity to learn how to teach and assess, usually with help from learning and assessment specialists; and they are getting useful feedback on the efficacy of their pedagogy.

It's not just courses that are improving, in other words, it's the instructors who are teaching them.  The value of this second part is enormous, because it suggests that these "transformed" instructors will apply their newly acquired knowledge of pedagogy and assessment to their other courses.  This is a very strong argument for ensuring that faculty remain deeply involved in any course transformation project--that they are treated as partners and not just content specialists who "perform".  It is also a strong argument for instituting serious incentives for faculty to take part in course transformation projects.  It is an opportunity for colleges and universities to train faculty in the latest pedagogical techniques, get them up to speed, and improve their teaching in a range of courses and over a stretch of time.

The course transformation projects also provide younger academics with important training in the newest pedagogy.  Graduate student teaching assistants have a rare opportunity to be a part of the teaching teams and to see how the process works from the inside.  Just to give a single example: one of my teaching assistants commented to me the other day that, at the start of the semester, he was pretty skeptical about the idea that a 400-person, intro humanities course could ever include useful discussion.  At about week three, though, he started to see the way that i>clicker polls, peer discussion, and other techniques could be used to do just that.  It wasn't just that he saw those tools being used in a large auditorium full of students--it's that he saw them being implemented in a way that worked.  Having had this experience as a teaching assistant, he is much more likely to use them himself, and to know "best practices" for using them.  Over the year, in fact, more and more graduate students have become interested in working with me and learning some of the techniques for flipping a large lecture course.

A Pet Peeve: Misinformed articles about teaching loads

My brain hurts.  I've spent the better part of the past three days prepping my Intro to Rome class: making worksheets for ethics cases; coming up with discussion thread topics that will produce a deep and engaged conversation; creating PPTs for class; writing practice quiz questions; writing a quiz; putting together a list of all potential short answers for the upcoming midterm; talking to my teaching team; meeting with students.  I devote 12-14 hours on Saturday to the Rome class.  Oh, and I am teaching two other classes this semester, including a graduate seminar.  I easily spend 25-30 hours/week on the Rome class; and another 15 or so hours on my other two courses.  I also supervise our graduate students who teach Latin; and serve on several other departmental and university committees.  Time in class is by far the smallest time commitment of my week.

Imagine my irritation at seeing yet another study claiming that it's my fault (or, rather, the fault of professors) that college costs have risen.  Students are being shortchanged, the article claims, by faculty who aren't teaching enough to justify their supposedly astronomical salaries.  Just to be clear: the conclusions of this study are so generalizing as to be meaningless.  Without drilling more deeply into their data--and separating it by field--it doesn't tell us much of anything.  In fact, teaching loads vary tremendously across universities and even within departments.  In the natural sciences, engineering, and other grant-driven fields, for instance, it's not uncommon for faculty to do no teaching at all or, maximally, to teach one course/semester.  Most teaching is done by lower paid lecturers, many of whom teach three courses/semester.  In my college, liberal arts, most of us teach two courses/semester.  Some teach one, some none (if they have a grant that lets them "buy out" their time).  Lecturers usually teach three courses/semester.

It's probably true that the teaching load of tenure-track faculty has decreased over time, but that's due to the elimination of tenured assistant professors who taught increased loads to compensate for their lack of publications.  The other factor that renders conversations about teaching loads meaningless: student-instructor ratios.  In my large lecture class, for instance, I am now teaching twice as many students.  On paper, however, my teaching load has remained a 2-2.  Similarly, the amount of time one devotes to teaching in any given semester can vary widely, depending on the particular courses, enrollment numbers, level, etc.  My two courses are rarely the equivalent of a colleague's two courses; and we all teach a range of different types of courses with the (often untrue) understanding that, over the year, teaching time will even out among faculty members.

Articles such as the one in the Chronicle of Higher Education are not just spreading misinformation, however.  They are potentially damaging to ongoing efforts to transform curricula and deepen learning, particularly in introductory courses.  The single common feature of all efforts to transform lecture courses is the amount of work they take from the instructor as well as consultants and the classroom teaching team.  In some cases, instructors are given a reduced teaching load; but oftentimes, such reductions just aren't feasible because shrinking faculty numbers leave too few instructors to staff courses.  I have done my transformation project while teaching a full load in the fall and an overload this spring.  This was nuts and, in retrospect, I'd never do it again.  I'd insist on a course release.

This past year has been incredibly labor intensive, for me but also for the rest of the "team."  I expect that the Rome class will get a bit less labor intensive in future iterations; but, with 400 students and 4 TAs to manage as well as many moving parts (including a discussion board), it is always going to take up a big chunk of my time during the semester.  To me, the results I am seeing in my students' learning make this investment of time worth it.  Still, those who are calling for education reform cannot also be calling for increased teaching loads.  This is particularly true at a time when faculty are taking on more and more of the responsibilities that were traditionally assigned to staff (who have been let go/not replaced in order to save money).

I agree that students should get a quality product for their money.  If they are paying higher tuition, it is my job to make sure that they are learning in my class, that they have something to show for that investment of money.  It is not my job to turn out a cheap product that will break the first time they use it--which is what will happen if I am asked to teach 3 or even 4 classes/semester.  To someone outside the academy, teaching two classes might seem like nothing.  Sometimes, depending on the course (and one's experience in teaching that course), enrollment, and level, it isn't a big time commitment.  Other times, though, it is more than a full-time job, with most of the work done behind the scenes and outside of class time.  I get that most people don't understand this, just as I don't understand the specifics of an investment banker's day-to-day work.  At the same time, it is important that we faculty resist the narrative being imposed on us, as lazy and entitled and only interested in research (which we expect our students to fund through increased tuition).

Wednesday, March 20, 2013

Why I flipped my class (and lost a year of my research life)

I've been teaching "lecture" classes for a decade, at a university that specializes in them.  These courses range in size from c. 75-500 students.  Most (but not all) departments try to restrict them to their lower division courses.  We faculty know that it is not a very effective way of teaching but most of us, over time, come to see them as an inevitable part of a job at a large, cash-strapped, public institution.  I felt ok about the job I could do with 75-100 students in a Pagans and Christians class.  I knew that a lot of students were cramming for exams and not really learning, but attendance was fine and there was always about 25% of the class who were pretty engaged.  I never just lectured; instead, I discussed assigned readings and tried to push the class to think deeply about the complexities of the content.  The biggest challenges were my own pedagogical limitations (I had no training in teaching this sort of class or in assessment and was very much learning on the job); and not having enough classroom support to test students in ways that required them to move beyond the regurgitation of facts.

In Fall 2010 I began to teach one of my department's cornerstone offerings: Introduction to Ancient Rome.  It was generally capped at c. 225 students and, at least when I taught it, every seat was taken.  Most of those seats were taken by non-liberal arts students who needed the class to meet a graduation requirement.  About half of them were upper division students in the natural sciences, business, and engineering; often, they viewed it as their "easy" class and expected to put in little effort while still receiving an A.  I soon found that I liked the experience of teaching a larger class and even enjoyed lecturing.  But I hated the feeling that I was in cahoots with my students--I'd make the class entertaining and not too demanding and they'd humor me by cramming a bunch of facts (from a study guide I handed out) and then purging them on the midterms.  I knew they weren't really learning, but didn't know what else to do.  I also realized that I was going to become bored very quickly with giving the same lectures every fall.

When the course ended up getting scheduled in a room with lecture capture technology (Echo360), I started to think about ways that I could work with that technology to improve the course and increase student learning.  After many conversations, this led me to the project of a deep redesign of the course.  My impetus was to increase student engagement, depth of learning, retention, and self-sufficiency while also getting the students to understand that learning about ancient Rome could be fun--even for an engineer or a business student.  I figured it would take three semesters to get it right; it seems to have taken two.  In the fall, when I told the students that they were in a "flipped" class and tried to make them partners in creating the learning environment, I encountered stiff resistance from a small but vocal enough minority.  After working through the survey feedback from that course, I made a number of changes, none more significant than bringing back a small amount of lecture and not telling the students that they were taking a "flipped" class. 

This "stealth flip" seems to be working and, at the halfway point of the semester, I am seeing some big payoffs.  First, many of the students are engaged with the course content and thinking hard about it.  I see it in their discussion board posts, where they connect something from class to something they already know.  This morning, I woke up to find that a student had observed that the O Fortuna poem in the Carmina Burana was addressed to the goddess Fortune--the same goddess that we had discussed in class the previous day in relation to Sulla.  Seeing this process of connection-building, this active thinking, on display reminded me of why I have been putting in such long hours.

One of the primary reasons that I opted to flip the class was because, semester after semester, students performed terribly on exams covering the period from the Punic Wars to the reign of Augustus.  Exam scores dropped by at least 10% every semester.  This wouldn't be so worrisome if this weren't some of the most important material in the course.  But it's tough going: lots of names, terms, and events to keep straight.  It is very complex and not suited to a "cram for the exam" approach to learning.  It is also the kind of material that benefits from constant practice, application, and discussion.  In an ideal world, when the course reached this point, we would break into small, specialist-led discussion groups for about 6 weeks.  But it's not an ideal world and I have a limited number of TAs and no discussion sections to work with.  The flipped classroom seemed like a good model for helping students better master this part of the course.

In the straight flip in Fall 2012, test scores plummeted and, as was the case in the lecture-based class, were on average 10% lower than the scores for the first midterm.  The flipped class had no effect at all.  Because I had not also incorporated regular assessments to motivate students to stay on top of the material, I had no way to "force" them away from the "cram for the exam" approach.  I repeatedly warned them that it would not be effective, but they didn't want to hear me.  For three weeks, I felt like I was watching a car accident unfold in slow motion. 

This semester, in my stealth-flipped class, the students take weekly quizzes.  They took one yesterday, the first that really highlighted the challenge of keeping straight a lot of easily-confused content.  It didn't help matters that they were coming off of Spring Break.  It was, however, a perfect "teaching moment."  They took the quiz during the first ten minutes of class.  I then took about 10 minutes to talk about how I felt the class was going and to praise all the good things they had been doing.  I also did my usual bit about how the content was going to be quite challenging for the next several weeks and that the key would be to stay on top of the readings, come to class, engage, practice.  I could see them nodding their heads.  I suspect that my message was heard.  Why?  Because they had just experienced exactly what I was describing.  In effect, I wasn't telling them anything they hadn't just figured out--I was only confirming it and giving them some advice on how to deal with the challenge. 

I expect scores on this quiz to be lower than normal; but I also expect that this quiz, and the one next week, will function as warning shots across the bow.  They will help the students to calibrate their effort, to identify areas that need more attention, and give them a very good sense of what is going to be challenging about the second midterm in two weeks. They will understand the benefits of spending class time practicing the recall of content (something the fall class, as a whole, never really got).   I expect that many of them will make the necessary adjustments and that, for the first time since I started teaching this class, the midterm scores on this next exam will be close to the scores on the first exam. 

Tuesday, March 19, 2013

Show me the $$?: Licensing MOOCs

A lot of things keep me up at night (apart from the kittens playing hide and seek in the bed covers as soon as the lights go out, my toes be damned!)  As sleep eludes me, I find my mind wandering back to the ongoing conversations about US--and global--higher education, and to the role that MOOCs are, can, and almost certainly will play in the mission of post-secondary education. Lately, I have been thinking a lot about what it will mean to monetize a MOOC and, specifically, what it will mean for a faculty-created course to be licensed and sold to other institutions.  Who profits?  Who controls distribution and use?  UT won't allow their logo to be used on, for instance, Classics Department t-shirts that they deem "racy."  Will individual professors similarly be able to control the distribution and use of their courses?  Will they collect royalties, as they currently do from published books?  What financial cut will course designers, assessment specialists, and producers get (if any)?

In the academy, there is the odd but persistent notion that faculty don't and shouldn't care about money.  While it is certainly true that many of us are not compensated all that well for our time or level of education and accomplishment, we also have mortgages (or rent) to pay and all the other costs associated with living a middle-class life.  The halls of a university remain one of the few places where money should be discussed only in hushed voices and with at least slight scorn.  Like the monks of old, we pretend that we have sworn off worldly indulgences and revel in the joy of asceticism.  Slowly, as younger generations of scholars from middle-class and even lower-class backgrounds opt for an academic career, this ridiculous attitude is changing.

Still, there is no surer way to tar and feather a colleague than to accuse him/her of greed.  I was the victim of just such an accusation when, in the midst of negotiating the terms of a pre-emptive offer that barely headed off an actual offer, I dared to ask for an additional $3K to be added to the $4K in salary that was part of the offer--the audacity!  the greed!  Initially, a small group of co-workers suggested that I had somehow invented the potential job offer (an utter impossibility in the small discipline of Classics) and demanded that the university administration investigate.  They did (all without my knowledge until a few years post factum) and found that I had acted ethically.  Despite being told that their accusations were baseless, they revived them when I came up for tenure, attempting to suggest that such a greedy and unethical character was surely undeserving (too bad I had stellar letters of support as well as an excellent teaching and service record).  One co-worker continued to try to smear my reputation with my colleagues by asserting, in essence, that I was greedy and depriving them of money.  It is a strategy that seems ludicrous to anyone in the "real world"; but I suspect that other faculty can understand why money is such a delicate subject, particularly at a time when there are fewer resources to go around (UT, for instance, has essentially stopped giving across-the-board raises and insists that departments reward only a percentage of the faculty in each department--a strategy that helps weak departments and punishes strong ones).

With the creation of MOOC Incs., however, we have corporate interests setting up shop in the middle of the campus quad.  Yet many of the creators of the product--chief among them, faculty--have no sense of "how to do business"; and, in fact, have spent their careers giving away their time for free.  Academic journals and presses depend on very cheap (essentially free) labor from the faculty who serve as reviewers.  During the tenure process, faculty review the work and write letters of support (or not) for their peers at other institutions without any compensation.  We review books in exchange for a copy of the book.  Many of us work long hours teaching and mentoring graduate and undergraduate students off the books (e.g. reading drafts of dissertation chapters during the summer or winter break; doing practice job interviews; supervising their teaching).  Without the large amounts of uncompensated labor of faculty, much that happens at a university and in the scholarly publishing world simply wouldn't be possible.  But it could be argued that the time has come for faculty to take an active interest in ensuring that they are being fairly compensated for their work, particularly if others are going to profit from it.

But this is where things get a bit messy.  As I understand it, universities (not individual faculty members or the MOOC Incs) retain the rights to the particular courses offered on the MOOC platforms.  At present, this is a reasonable arrangement: the university pays the salary of the faculty member who creates a course and then controls access to that course.  No harm done.  When the MOOC Inc founders start to talk about licensing courses, however, the situation gets substantially more complicated.  First of all, this means that once a course is created and handed over to a university, the university can distribute that course however it wants to and controls all profits made from any licensing deals.  Even the oldest faculty dinosaur understands the insanity of writing a textbook, giving it away, and then allowing his home institution to sell copies of it and keep the profits.  So why are we doing this with MOOCs?  Why are faculty not proceeding with caution to ensure that deals are in place that give them control of how their courses are distributed and outline their financial stake if a course is licensed?

The move to licensing courses, particularly those of the "best professors at the best institutions" (aka the elite, private institutions in the MOOC stables), seems inevitable.  Likewise, it seems inevitable that those courses will be "curated" at other campuses by non-specialist faculty--probably low-salaried lecturers and grad students--in order to cut costs.  I know I'm not the only person who worries about the consequences of such a move (the faculty union at UC Santa Cruz is worried).  I have been very grateful that the ITS department in my own College of Liberal Arts was very prescient on this issue and made sure that all material created using their lecture capture technology--Echo360--remains the property of the instructor.  They distribute it for us but we always retain control of our content.  Sure, I am creating that content in my role as a professor at the University of Texas; and the Texas logo is prominently displayed.  But it is mine.  There is no reason that a similar deal could not be put in place for faculty (and teams of learning specialists) who are providing the content for MOOCs.

At present, this conversation is a strange mix of venture capitalists who want to get a return on their investment; Ivy League and other institutions claiming that they are motivated by altruism and a desire to make the world a better place; and faculty who are excited by the opportunity to experiment in a new medium.  Still, this is not a neighborly barn-raising; and faculty need to think long and hard about their willingness to give away their intellectual product.  It is not greedy to ask to be compensated, in the short and long-term, for one's work.  It's good common sense.

Addendum (3/27/13): "Half the professoriate will kill the other half for free", about the economics of MOOCs:  "Who benefits when the professor and the teaching assistants all work for free? The MOOC provider, of course. It’s digital sharecropping at its exploitive best."

"Are the costs of 'free' too high in online education,"  a thoughtful article by Michael Cusumano, a professor at the Sloan School of Management at MIT.  Writes Cusumano, "My fear is that we’re plunging forward with these massively free online education resources and we’re not thinking much about the economics."  (and the New York Times coverage of the article)

5/31/2013: A leaked copy of Coursera's contract with University of Michigan, including possible revenue models

6/13/2013: Colleen Flaherty, "It's My Business" (why professors should protect their Intellectual Property more zealously)

Friday, March 15, 2013

Education at Scale?

I was chatting the other day with a colleague who works in our College of Undergraduate Studies.  We got on the topic of MOOCs and the idea that they are claiming to provide free education to the world.  As those trained in Latin are wont to do, we discussed the etymology of the word education: educere, to lead out or raise up.  The Romans used the verb to talk about raising their children.  When we educate our students, we truly are acting in loco parentis.  But how do we effectively nurture tens of thousands of children at once?  Can this job be reduced to a series of algorithms that require little human interaction?  Can we rely on the older kids to step in and raise their younger siblings because we parents are too busy making sure there's food on the table and the laundry is done?

The phrase "education at scale" has been attached to MOOCs--initially by Coursera, Inc. co-founders Daphne Koller and Andrew Ng in their talk "The Online Revolution: Education at Scale"--with seemingly little thought about what it actually implies.  In fact, the phrase itself highlights a paradox at the root of Coursera, a paradox that Koller has alluded to when she rehearses the impetus for founding Coursera.  On the one hand, Koller was interested in improving pedagogy in her own classes by using online tools like pre-recorded, interactive videos to deliver content outside of class, thereby freeing up class time for higher quality interactions with her students.  In other words, she was interested in blending (or perhaps flipping) her campus-based classes to improve the learning experience of her students.  On the other hand, Ng was interested in globalizing his classroom (and massively increasing the student:instructor ratio).  While Koller was driven by improving student education, Ng seems to have been driven by a desire to scale up his audience.  In their home discipline of computer science, these two impulses are somewhat less contradictory than they are in, say, English, History, or Classics.

I don't mean to say that it is not possible to teach more students with better learning outcomes by using technology more effectively--it obviously is, and it is what many K-12 and university instructors are trying to figure out how to do.  But it is important to recognize that "education at scale" is not what most MOOCs currently do.  Rather, MOOCs deliver content at scale.  It is commonplace to point out that printed books have been performing this same role for centuries; but it is worth observing that MOOCs can go to places where books in large numbers still cannot (particularly third world countries).  As well, for a generation raised on images rather than the printed word, I can imagine that it is easier to learn from the embodied rather than the written "textbook" (even if it is true for some/many, as Mary Beard argues, that content is easier to digest in written form).

Putting content online--even if it has embedded quizzes and assignments--is not the same as educating.  I can understand how two extremely intelligent and creative computer science researchers might not recognize this distinction, however.  In a problem-based field where instruction is highly structured and can easily be packaged in terms of mastery of a series of concepts of increasing complexity, and where assignments can be machine-graded, the claim that a MOOC student who completes all the work and uses the discussion forums as well as peer study groups is receiving something approximating the classroom education of a paying Stanford student is more believable.  I can't imagine that all MOOC students come away with a strong grasp of the conceptual underpinnings of the course content, but obviously this is a small cost by comparison to the benefits for students who otherwise would have no access to a particular course.  I suspect that statistics or differential equations courses work in fairly similar ways.  I also imagine that, as we move further down the spectrum, towards disciplines that are less problem-based, that don't have a single correct answer to a question, we move further away from anything vaguely resembling education.  Taking a MOOCified version of my Intro to Rome class won't hurt anyone; but I could not say with a straight face that those students were getting 90% of the learning experience that my UT students were getting.  If they were, then my UT students would be right to think I was an overpaid dinosaur.

This would all be low-stakes, disciplinary quibbling if it weren't for the fact that legislators and even university administrators salivate at the phrase "education at scale."  They imagine that they will be able to teach tens of thousands of students with a small stable of "the best professors" (whatever that means).  Heck, maybe they will just license courses from other institutions and hire low-paid adjuncts to run the course on campus and call it education.  This is the point where a public-lecture friendly platitude can actually become incredibly dangerous to the mission of higher education, particularly at public universities and in disciplines that do not fit the current MOOC platforms all that well (i.e. liberal arts).

States and the federal government have cut appropriations for education year after year, for decades.  We are at something of a crisis point, to be sure, but it is a crisis that is man-made and is not a crisis in higher education per se.  It is a crisis created by governments' refusal to fund education properly and by administrators' inclination to use adjuncts and lecturers instead of tenure-track faculty (thus giving themselves more control over annual budgets, since they have effectively lowered the fixed costs of paying tenure-track faculty salaries).  Fewer instructors means that fewer courses are offered and fewer seats are available, particularly in high-demand and lower division courses.  This problem could be solved in a number of different ways: states could appropriate reasonable funding for their public institutions; philanthropists, especially those wealthy Silicon Valley folks who are now so interested in higher education, could endow professorships across disciplines.  Tuition could be increased, effectively acknowledging the fact that states are no longer sponsoring so-called state universities.

Instead, as many others have noted, we have venture capitalists and others with a horse in the race declaring that there is a crisis in higher education; and that only technology can solve the problem.  Oh, and to adopt that technology will cost a massive investment of $$ by institutions who are too impoverished to hire tenure-track faculty.  The regents of my own institution recently created an Institute for Transformational Learning and gave its director somewhere around $50 million to allocate.  In the meantime, my department is not permitted to replace retiring and departing colleagues, to the point that most of our core courses will be taught by grad students, adjuncts, and lecturers in the coming year.  State lawmakers, regents, and even administrators want to believe in the myth of education at scale because it seems to provide a solution to their budgetary woes that does not require a long-term investment in faculty.

Until MOOCs make serious advances in their pedagogy, however, it is false advertising to say that the vast majority of them are providing education at scale.  They provide many other benefits, including allowing for much higher quality student-instructor interaction by shifting content delivery to outside of class; opening the gates of the university to anyone who is interested (and perhaps persuading more Americans that professors are not all pipe-smoking, latte-drinking, leftist slackers); and encouraging Americans to be more intellectually engaged in general.  These are all good things, but they are not interchangeable with a quality university classroom experience--not yet and perhaps not ever.

But let's not throw the baby out with the bathwater.  It *is* possible to teach at scales larger than we currently do, and to do it more effectively than ever before, by making better use of education technology.  I have done this in my own large lecture class.  Enrollment has doubled from 200 to 400 students; but those 400 students are learning more and better.  I know this because we are doing careful studies of their learning and comparing it to previous cohorts.  To accomplish these improved learning outcomes at a larger scale has required a large and sustained investment of time and energy from me and a tremendous amount of classroom support (a team of 4 ace teaching assistants and 2 undergraduate graders).  I have worked with an excellent team of learning and assessment specialists from around campus and have consulted with several other faculty who are doing similar things with their courses.  I have spent a year developing content for the course and will continue to do so over the summer.  Effective teaching requires enormous energy and engagement--and this doesn't get easier or cheaper over time (at least not until a computer can be programmed to respond with a certain kind of intervention to a certain kind of comment on a discussion board, for example).  There *are* advantages to teaching 400 students instead of 200, but they are related to issues like time to degree and better use of campus resources.

When we talk about education at scale, we can't assume that this is what a MOOC does, at least not without evidence of learning outcomes for MOOC students.  In the meantime, though, we need to be experimenting, figuring out what the limits of scale are for our classes and our disciplines.  What is the largest number that we can teach and continue to see improved learning outcomes?  At what point does that curve turn downward?   The limits of scale will surely vary quite a lot between disciplines and individual courses.  The more theoretical and "subjective" (in a good way) the content, the more difficult it will be to increase size without serious sacrifices to learning outcomes.  Education at scale does not simply happen by making course content available and interactive (though I am willing to concede that it can sometimes happen in a computer science or calculus course with a highly motivated and self-sufficient student).  Moreover, in most disciplines, education at a scale of tens of thousands, much less hundreds of thousands, of students will never happen.

Addendum 3/28/13: A critique of the problem of scale even in a math course on the MOOC platform.

Addendum 4/1/2013: Steve Krause on Duke's English Comp Course and the problems of teaching writing at scale (scroll down the page for comments)

Wednesday, March 13, 2013

Another Reason I Love Lecture Capture

Among the many challenges of teaching my Intro to Rome class is the fact that, well, I have to be the one to teach it.  On the one hand, because I have recordings of all of the content archived, this isn't really a problem.  If I am sick, I can post a recording and know that the students have at least been able to listen to me lecture on the content.  In some real way, though, this misses the point of the class, which is to provide the students the opportunity to practice and apply the content.  Delivery of content is a fairly small part of what I do in any class session.  My PPTs are a mix of i>clicker, peer discussion, group discussion, and content slides.  I ask the students to consider questions from previous lectures as well as from the content I have just delivered. I review main points of their Piazza discussions.  It is not really possible to ask a colleague who has never taught this way to step in and take over for me.

Unfortunately, thanks to a dental emergency just before spring break, I had to do just that.  Fortunately, I had already created the PPT presentation.  I had a recording of me lecturing on the content slides.  If possible, I wanted class to meet as usual; so I asked one of my teaching assistants to step in for me.  This had several advantages, not least of which was the fact that he was used to the large audience and the method of teaching.  He could take my PPT, listen to my lecture and take notes, and then try to approximate that in class in my absence.

Typically, graduate teaching assistants struggle with the lecture format.  It's not an intuitive way to teach and requires very good time management skills as well as an ability to be clear, focused, but also entertaining.  Too often, grad students spend too much time on early material and never get to the final third of their planned lecture.  In this case, my TA delivered the lecture perfectly.  He got through all of the material; he was lively but got all the key content across to the students; and he injected his own observations from time to time.  It struck me that this mode of teacher training is far more effective than our more usual "here's a topic; create a lecture and deliver it" mode.  In this instance, the TA could see the things that I thought were important to emphasize but he could also add his own twist.  It was ultimately his lecture, not mine; yet it covered all the necessary content and kept the class on track.

Many professors I know are taking MOOCs for much the same reason: we want to see how someone else teaches the same material we teach.  This is going to be a valuable contribution of the MOOC.  Too often, once we get jobs, we get little useful feedback on our teaching and rarely have conversations with others who teach our same courses (typically because we are the only person at our institution to teach that course).  The internet has changed this to an extent, in that we can now look at syllabuses from colleagues' courses; but being able to audit them adds an entirely new dimension.  In the same way that (I think) my TA was able to learn from "watching" me and then doing it himself, I have learned a lot from watching my colleagues at other institutions teach courses on topics like Greek mythology.

Quizzes as a Tool for Calibrating Student Expectations

With every passing semester, I believe ever more firmly that one of the fundamental keys to a successful class (like any good relationship) is the clear communication of expectations.  More recently, as I move to a much more student-centered classroom and model of instruction, I have come to see that, for me to communicate my expectations clearly, I need to have a good understanding of what my students' baseline expectations are.  This is particularly true when it comes to grades; and particularly true in a class that is 75% underclassmen at a public university in Texas.  These students gained admission because they were in the top 10% (or 8% or 9%) of their graduating high school class.  This probably means that they have rarely if ever received a grade lower than an A since middle school--if then.

This spring, working together with a colleague in UT's Center for Teaching and Learning, I administered a backgrounds, goals, and expectations survey to the students during the first week of the semester.  Nearly the entire class completed the survey (there was a big incentive attached).  I recently received an overview of the results.  In many respects, I was surprised: fewer of them work than I expected (only 25%); more of them were in the class because they were interested in the topic than I would have guessed (only about 25/335 registered for purely pragmatic reasons, e.g., "it fit my schedule").  The biggest shock was the grade that they expected to get in the course: 96% expected an A of some kind.  Only 0.6% expected a grade lower than a B+.  Historically, somewhere between 35% and 40% earn some kind of A in the course.  Another 30% earn some kind of B and about 20% earn a C.  I fail very few students, because those who are failing drop the course--sometimes even after it is over--or withdraw.  This particular cohort seems to be very good (I wrote about their performance on the first midterm recently).  They are performing at a high level on quizzes and midterms.  This means that more like 50% are in the A range, perhaps 55%.  I am fairly sure, though, that 96% of the class will not be earning some kind of A.

I'd be curious to know how this same cohort would answer that question now, at the midway point of the semester.  They have now sat for five quizzes and a midterm exam and have completed half of their graded discussion posts.  With the quizzes, they get weekly feedback on their performance and have the opportunity to calibrate their effort but also their expectations.  Despite the hassles of administering scantron quizzes (to limit cheating) to a class of nearly 400 students, I am completely sold on their many benefits--and these benefits extend well beyond motivating regular and consistent study.  I would imagine that, with each quiz, the student can see how they are doing.  I post the questions with the correct answers and also review questions that were missed by more than 25% of the class (and frequently include those questions on future quizzes).  I also post a summary of the class performance, so that students have some sense of where they stand relative to the rest of the class.

When I first started teaching a decade ago, I was reluctant to ever post data that showed student performance relative to their classmates.  I wanted students to focus on themselves and not compare themselves to others or become hypercompetitive.  Since I don't grade on a curve, it doesn't matter how someone else did.  Over the years, though, I've realized that this comparative data has an important role to play in helping my students calibrate their expectations, face reality, and grasp that they are no longer in high school.  They may well be a small fish in a very big pond full of much larger fish.  It cuts back on complaints when they get a glimpse of just how many big fish are swimming around with them.

The other advantage of frequent graded assessments: they force students to confront reality.  When I used a 3 midterm system for my lecture class, I regularly had students operating in extreme denial.  Even at the end of the semester, they believed that they were going to get a much better grade than they actually received (I once calculated this from the course evaluations).  They persuade themselves that they will do better on the next exam, regardless of how they have done on previous exams.  I am stunned at the degree of denial that I see on a regular basis.  My sense is that these weekly quizzes are going a long way towards dispelling that denial and forcing students to fish or cut bait. They are also giving them very specific information about their learning behaviors and reminding them that, if they don't keep up, it will be very bad for their grade.

We will do an end-of-semester survey that asks them to reflect on their performance in the course.  I am now very curious to read those surveys, and especially to see whether, over the course of the semester, the frequent graded feedback actually helps them to adjust their expectations and to realize that, if they want an A, they are going to have to work very hard for it.  I would not be surprised if this cohort ends up earning significantly higher grades than I usually give--but it will be because they worked hard for them.  They knew how hard they would need to work because, each week, they got feedback on the success of their learning strategies.

Tuesday, March 12, 2013

Andrew Ng Comes to Austin: What MOOCs Can and Can't Do

As part of his global lecture tour touting the miracle of MOOCs, and Coursera in particular, Dr. Andrew Ng made a stop at UT Austin.  The presentation itself did not offer anything new to anyone who has been following the MOOC conversation over the past year.  It rehearsed the birth of Coursera at Stanford and offered a show and tell of some of the platform's functionality.  It was also packed with vague platitudes about learning, education, and pedagogy.  Dr. Ng is clearly an intelligent guy; yet, if this had been a presentation for one of my classes, it would have earned a solid C with exhortations to be specific and argue from evidence.  I was especially frustrated with the absence of a clear "so what".  That is, what, exactly, is the rationale for Coursera?  What does it do and how does it do it?  I've taken a few Coursera courses and, for the most part, enjoyed them (while recognizing the extreme limitations of the mode of instruction).  What about student motivation (and the high dropout rate)?

On the one hand, Dr. Ng tells us, education is a human right; and his goal is to educate everyone, regardless of social origins or financial status.  Everyone, he thinks, is entitled to the kind of first-rate education that his institution, Stanford, has to offer (I'll leave my thoughts on that particular issue for another post).  Never mind that "an education" at Stanford is far more than the 4 years of courses taken by its students.  Let's assume for now that the "best professors" do in fact teach at the most prestigious (i.e. highly ranked) universities (a remarkably bad assumption).  I am sympathetic to the goal of making knowledge open access, particularly in places where students can't afford textbooks and don't have easy access to libraries.  I can also see how, for certain disciplines like computer science, a MOOC might work pretty well.  Dr. Ng repeatedly stated that a MOOC could certify someone's learning and allow them to get a better job.  Again, this makes sense to me in a field like computer science--a field which relies a lot on certifications of various kinds.  A smart kid in India can take a MOOC, learn the content, get a certificate from Stanford, and then use that to get a better job.  This strikes me as an enormous social good.

The problem is, it only makes sense for highly technical fields which rely on certifications.  That is, for fields which might easily be taught in technical schools rather than at universities.  It makes no sense whatsoever for most disciplines currently taught at universities.  Sure, disciplines like political science or classics can package a course as a MOOC--but it is basically just a group of lectures with multiple choice quizzes.  Now, to be fair, this is often the same product that is being delivered on campus to smaller audiences.  Yet, because the learning outcomes aren't easily quantifiable and measurable using machines, it is impossible to test deep learning or certify much of anything.  Certainly, getting a certificate in Greek Mythology is not going to position anyone for a better job.

Dr. Ng and Coursera are very proud of their apparent solution to teaching course content that requires assessments that can't be machine graded (e.g. analytical essays): peer grading.  Suddenly, the talk is all about grading rather than learning.  Whereas the discussion of hard science/math disciplines focused on certifying skills (aka learning), now it's about grades.  One study has shown that peers grade roughly the same as professors.  All well and good, if the point of assigning an essay was to get a grade.  It is at this point that Dr. Ng's (and Coursera's) lack of familiarity with and understanding of humanities disciplines becomes a serious impediment.  They seem totally unaware that written work is generally assigned as a way to assess student progress and help them improve.  300 word essays are not going to do any of that; as well, despite lauding peer grading, the fact is that it has yet to be a real success or do much of anything apart from assigning an arbitrary letter grade to a piece of writing.  Serious humanities scholars ought to be outraged and scared by this aspect of Coursera's self-presentation.  It is very clear that the platform cannot support the delivery of a serious humanities course, one that involves real and demonstrable learning.  In fact, Coursera seems to not take humanities courses very seriously.  They want to have some of them on their menu, of course; but, as is often the case in bricks and mortar universities, these courses are treated as second-class citizens, where it is grades rather than learning that matter.  I don't mean to single out Coursera for this attitude; unfortunately, all other MOOC platforms share this approach and nothing is going to change unless humanities scholars get serious about figuring out how to deliver our content in a pedagogically sound way to large audiences.

So, to sum up: certain classes have the capacity to perform a real social good.  These are classes where the content is highly structured and relatively easy to transfer and assess if student motivation is high.  Dr. Ng's class is a perfect example of one such class.  Coursera's platform is not well-designed for a serious humanities class.  If the aim is to broadcast a series of lectures and check retention with short quizzes, fine.  But if the aim is to do serious instruction, well, Coursera isn't there yet nor does it seem to really care.  In fact, it is convincing itself that peer grading is the ideal solution.

The real value of Coursera isn't really for the supposed "target audience" but for the paying customers at universities.  In fact, when one registers for a Coursera course, one is volunteering to be a research subject in a great experiment on learning.  As Dr. Ng admits, they are amassing an incredible amount of highly specific data that is giving important insights into student learning--and, because of the scale, is making visible patterns that would otherwise be invisible to instructors.  As someone who has experienced this phenomenon on a smaller scale, I know how much my teaching has changed thanks to data about student learning behavior.

But there's an important point to be made here: the real benefit of these MOOCs isn't for the 40,000 students taking them--even if a few students will, in fact, benefit.  It is for the 40 students who are taking that course on campus.  Thanks to the data gathered from the MOOC audience, an instructor can improve content delivery; identify and address trouble spots; and make better use of in class time by having students watch content outside of class.  I am quite certain that, in this regard, MOOCs will pay great dividends for paying customers in the form of much better classes and more meaningful engagement with the instructor.  But it is important not to confuse this benefit with the claim for a larger, more global benefit of making a Stanford (or Princeton or Harvard) education available to the masses.  MOOCs are not doing that.  They are simply giving the masses the opportunity to sit at the feet of a living textbook for a few weeks.  That is not teaching and is not a model that is particularly supportive of learning.  Indeed, Dr. Ng tacitly concedes this point when he points out the benefits for the paying students, that is, the students who will actually benefit from evidence-based pedagogy that supports learning.

I am a big fan of universities doing a better job of sharing their resources with the outside world; and I do think MOOCs have an important role to play in education, especially continuing education.  The data about learning behavior that they are collecting is going to be an incredible resource as we instructors continue to learn how to better teach our students.  But I also feel strongly that we all need to acknowledge what MOOCs can't do (deliver courses where learning can't be machine-graded); and companies like Coursera need to be more honest about what they actually are doing: providing the resources to improve dramatically the quality of on campus education for paying customers; offering universities the chance to advertise their wares (and professors the opportunity to sell vast numbers of their own books); and perhaps appealing even more directly to their alumni (just as alumni cruises and lecture series do).  These are worthy goals; but they should not be confused with delivering high quality, online instruction.  Even more, they shouldn't distract university administrators from investing in platforms and the development of courses which can, in fact, deliver the serious online learning that MOOCs promise but fail to deliver.

Midterm #1: The Results

My "stealth-flipped" Intro to Ancient Rome class sat for their first midterm a few weeks ago.  Honestly, I was dreading this exam and the aftermath.  In the fall, everything was going extremely well until after the first midterm.  After the midterm, students suddenly slacked off, started to complain about having to attend class, and ranted on Facebook about the flipped class model.  In a matter of weeks, it went from a class that I looked forward to teaching to one that I dreaded and resented.  I spent several months talking to learning specialists, reading evaluations, and trying to figure out what went wrong and, more importantly, how to prevent the same thing from happening again.  One clear answer (besides weekly quizzes and less heavily weighted midterms): a much more challenging first midterm.  To this end, I added a fair chunk of course content (on archaeology) and expanded the chronological range of the exam.  I made sure that the questions were challenging.  My aim was to have the best students score in the mid-low 90s, with the rest of the class scattered in the Bs and Cs.  If a student studied, they wouldn't fail the exam; but I expected a pretty good cluster of Bs and Cs.  I wanted the exam to be a wake-up call of sorts, a reminder that they had to stay engaged and working hard if they hoped to earn a high grade.

I was stunned when the results came back.  Despite my most concerted efforts--and a substantially more difficult first exam--this spring class outscored their two previous cohorts by a wide margin.  Some rough data for comparison:

Fall 2011 (c. 220 students, traditional lecture): Avg. 80; Median 85
Fall 2012 (400 students, traditional flipped): Avg. 81.87; Median 87
Spring 2013 (400 students, stealth flipped): Avg. 86.92; Median 92

In Fall 2012: A (159); B (109); C (56); D (24); F (33) [numbers approximate]
In Spring 2013: A (215); B (93); C (34); D (17); F (22) [numbers approximate]
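Since the cohorts are the same size, the raw counts can be compared directly; but percentages make the shift easier to see.  A quick sketch using the (approximate) counts above:

```python
# Approximate midterm #1 grade counts for the two flipped cohorts (from the
# lists above; the counts are themselves approximate, so the percentages are too).
fall_2012 = {"A": 159, "B": 109, "C": 56, "D": 24, "F": 33}
spring_2013 = {"A": 215, "B": 93, "C": 34, "D": 17, "F": 22}

def percentages(counts):
    """Convert raw grade counts to rounded percentages of the cohort."""
    total = sum(counts.values())
    return {grade: round(100 * n / total, 1) for grade, n in counts.items()}

for label, counts in [("Fall 2012", fall_2012), ("Spring 2013", spring_2013)]:
    print(label, percentages(counts))
```

Run as-is, this shows the A share jumping from roughly 41.7% of the fall cohort to roughly 56.4% of the spring cohort, with the C-and-below share shrinking correspondingly--on a harder exam.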

I am a tough grader and when I want to write a tough exam, it's a tough exam.  In the comparison of Fall 2011 and Fall 2012, the flipped class scored only a few points higher than the regular lecture course--but the exam in Fall 2012 was substantially more challenging.  The grades can't really be compared because, thanks to the flipped method, I was able to write a more difficult exam that tested deeper learning.  I upped the ante once again in the spring; yet, somehow, this spring cohort aced the exam and significantly outpaced the fall cohort.  In part, this may reflect the fact that it is the second semester for freshmen and they are better able to manage the challenges of college--but freshmen are only about 25-30% of my students.  So why such a dramatic difference--and off the charts high grades on a very difficult exam?

First of all, weekly quizzes.  The spring students had taken three weekly quizzes leading up to the midterm, covering all but the final week of course material.  In addition, they had access to practice quizzes for all the content that would be covered on the exam.  On Blackboard, we can see who tries the quizzes (I set it so that we don't see how they perform).  Over 90% of the class tried all available practice quizzes.  That is, they had a lot of practice answering multiple choice and "mark all correct" questions.

In addition to multiple choice and "mark all correct" questions, the midterm had 8 short answer questions.  In the fall, these short answer questions tripped up a number of students, primarily because they did not answer them completely, directly, or with enough detail.  We received a number of complaints from them and my "grade czar" had to deal with many dissatisfied students who did not understand why they had lost partial credit for vague and incomplete answers.  This time, I made a short Echo video reviewing my expectations for the short answer questions and giving them sample questions and answers, with explanations.  My head TA also did a lot of work with practicing the short answer questions during his weekly Supplementary Instruction sessions.  Finally, we made available to the students a very long list of potential short answer questions.  It was nothing that wasn't also available last semester--but this time, I said explicitly that all questions would be drawn from the (exhaustive) list, albeit perhaps in recombined form.  Once I told them that work on this document would be rewarded, they focused their attention on answering the long list of questions rather than on speculation.  In Fall 2012, I told them that the questions would be drawn from in class questions and review questions embedded in the lectures; but, for some reason, this was never enough to motivate them to assemble a list and learn the material.  By assembling the list, distributing it to the class, and encouraging them to create a Google doc and pool knowledge, we were able to get them to focus their energy on study rather than fretting.

In the days leading up to the exam, I also observed some notable changes in learning behavior.  First, the class discussion board--Piazza--was very quiet.  In the past, this was a place where students would pose questions whose answers could easily be found with a bit of effort.  This had stopped (perhaps in part because I had warned them against doing so in the syllabus).  From what I gather, Facebook was also fairly quiet. Instead of the usual pre-midterm anxiety venting that was a hallmark of the fall class, this group seemed to be studying.  Their anxiety levels were noticeably lower (the teaching team was not inundated with emails) and they seemed to have a clear grasp of what to expect.  The efforts I made to convey expectations as transparently as possible, as well as the practice they had with the quizzes, seemed to pay off.

Typically, in the 48 hours before an exam, there is a big spike in Echo views of recorded lectures.  We waited and waited for the spike to come, but it never did.  What we did see, instead, was students going back into the recorded class sessions to check details or refresh their memories.  I was delighted by this behavior--it was exactly what I wanted.  In the fall, I pleaded with students to go back to the recorded class sessions, to recognize that exam questions were coming from class, and to prepare by reviewing the recordings.  Few of them ever did this.  Without a word from me, this spring cohort is using the recorded class sessions exactly as they were intended (and making it worth the expense and effort to record class meetings).

I am learning a lot from my current cohort of students.  Two things stand out: the value of frequent, low-stakes assessments for supporting learning (and, more generally, a positive experience) in a large-enrollment class; and the importance of lecture--not so much as a means of knowledge transfer as a tool for orienting students, letting them get to know me, feel a connection to me, and experience my enthusiasm for ancient Rome.

We are now heading swiftly towards the second midterm.  This is a tough exam--it covers the part of the course that led me to flip the class in the first place and inspired me to toughen up the first section of the course.  Thus far, quiz grades have not dropped off at all, and the students themselves have observed that the material is getting more difficult and requiring more work.  In the past, I tried to warn classes about the increased difficulty level to no avail--midterm grades always dropped by 10 points.  I have a feeling that this cohort will finally break that pattern, and I will not have said a word to them about the increased difficulty.  They have figured it out themselves and adjusted their effort accordingly.