In my column last month, I wrote about some of the ethical issues with student evaluations of teaching (SET), using Quinn’s competing values model. Among my top concerns are serious and documented validity issues with SET instruments, as well as how external stakeholders may use SET data as a big stick to weed out faculty who offer unpopular or controversial viewpoints, or who may be judged as ineffective based on this single (mostly invalid) instrument. In this column, I want to talk more about the “now what?” aspect of SETs.
This is probably not a newsflash: SETs are not going away. Despite validity issues, despite being used acontextually, and despite being used summatively instead of developmentally, efforts are underway to expand SET usage, not rein it in as a sort of failed experiment. And the sooner we engage with this inevitability, the more voice we will have in what SETs will come to look like, and how they will be used.
Staying at the table
I think there are lots of analogies between resisting SETs (that story is in my prior column) and resisting other serious external forays into our academic enterprise. The most analogous example to me is how we handled assurance of learning (AOL) documentation efforts. My former institution was one of the first AACSB-accredited schools to go through re-accreditation under AOL standards adopted in 2003, requiring documented connections between learning objectives and student learning outcomes. While I have written about philosophical concerns with such a documentation-based approach to learning, the fear of this change caused some of our colleagues to adopt an outright dismissive approach. I heard, for example, at one of our prominent conference venues: “Assurance of learning? They can’t make us do anything we don’t want to do!” Uh, well, yes they did. And I see a very similar trajectory with resistance to expanding SET administration and data usage. We can’t give up our seats at this table. We can’t bury our heads in the sand, pretending that “they” can’t make SET usage more integral to our teaching practice, or that “they” can’t find ways of using those data that run counter to our best interests as professional educators. Stakeholders both inside and outside of our institutions deserve to know how we’re doing as instructors, and we must be there to shape how that evaluation process will go.
I think we are in a liminal space with respect to SET administration. A great deal of attention is being given to normative recommendations for how SETs should work. The Gates Foundation, for example, is funding research into creating valid, credible, and useful SETs at the post-secondary level, modeled after its sweeping K-12 evaluation research projects. Those projects recommend a multi-pronged evaluation approach including classroom observation, student achievement scores, and yes, some kind of SET instrument. As with most evaluations, triangulation is a good thing, and faculty need to advocate for expanding evaluation data points at institutions where SETs alone reign supreme.
The ethics of the “now what?”
The ethics of SETs include responsibilities on multiple ‘sides’ of the instruments. The almost wholesale resistance to sharing what happens in a classroom, citing academic freedom, has resulted in a pendulum swing over which we have less control than before. Professors have a responsibility to take student input seriously, even as we understand the sometimes significant limitations of student-derived comments and suggestions. Students may, for example, evaluate a professor’s teaching harshly and negatively, only to find later in life that the lessons imparted are life-changing and valuable. This is the nature of our work, but not every student comment is born of this kind of experience. Some of their feedback is accurate, and we need to engage with it. I administer both a midterm and an end-of-term evaluation instrument, and I find student suggestions and comments invaluable. What helps a lot is priming students for these instruments, and coaching them as to what kinds of comments and suggestions are helpful. Every semester for the last 16 years has yielded creative nuggets for improvement that I never would have thought of myself. It has been my experience that when I signal and model how their input matters, I get suggestions that energize my teaching practice.
Students have a responsibility to offer comments and suggestions in the spirit of fairness and goodwill. SETs have a bad name in part because of the sometimes breathtakingly painful and personal criticisms that students write. We need to coach them on how to give developmental feedback in ways professors can engage with and learn from. Much as in our manuscript peer-review process, where some of our reviewer colleagues need to learn how to offer feedback in responsible and developmental ways, we should not assume students know how to offer candid SET data without veering into unhelpful personal critiques.
Administrators have a responsibility to offer instructors a mechanism by which to reflect on SET information, such as an annual evaluation process, an annually renewed statement of teaching philosophy, or a peer coaching group. They should resist the urge to use such data punitively without also offering improvement coaching and support. Even in the last 15 years, I have observed sea changes in teaching practice, from the lecture-based ‘sage on the stage’ to the ‘guide on the side,’ complete with learning management systems, flipped classrooms, and increasingly daring experiential learning opportunities such as community-engaged partnerships. Faculty can’t just pick these new practices up, craft learning outcomes with confidence, and carry out affectively charged learning experiences as easily as we change classrooms. We need protected professional development funding and support to actualize the possible in new teaching frontiers. And when we try something and it falls flat, resulting in gothic SETs that semester, we need help picking through the pieces of that experience and learning what we could do differently. I was heartened to hear my Dean, Darrin Good (who also weighed in on last month’s column), say, “There is only a small percentage of faculty that do not want to improve, and get better at their teaching. If there is an instrument people trust, with valid feedback, and a committee truly helping the faculty member improve in a non-threatening and informal community-based way, teaching will improve.”
External stakeholders hold responsibility, too. It is true that academe has resisted calls for reform, for transparency, and for some kind of accountability for the quality of academic instruction. But revolution does not happen overnight, and it should not happen top-down, in ways that inspire fear and risk aversion. MOOCs, for example, are brimming with the promise of unprecedented learning access, but in domestic bailiwicks the conversation centers more on the fear that MOOCs will be used as a cost-saving lancet, reducing academic positions. While SETs can capture some interesting data, they do a poor job of capturing how professors help students become their best selves through learning that resists spreadsheets and test scores. Perhaps you’ve helped Sherry overcome her crippling fear of speaking in front of a group, or helped Sid gain the confidence he needs to apply for law school years after he graduates with his baccalaureate degree. Life-changing, yet probably not on your SETs, and external folks must honor the messy and longer-term character of learning outcomes.
Some questions to consider:
1. If you could craft your own SET questions, what would they be? Why?
2. In what ways can we support each other to improve our teaching practice, in responsible and collegial ways?
3. What are the avenues by which we can advocate for ourselves and for how SETs are crafted, administered, and used?
I welcome your experiences and responses.