Web-Based Course Evaluations: A Message from Dr. Joan F. Lorden

To: Deans

From: Joan F. Lorden

Re: Electronic course evaluations

Date: January 2, 2012

CC: Dr. Michael Green, Faculty President

Dear Colleagues,

The recent vote in the Faculty Council approved the use of electronic course evaluations by the slimmest of margins. What I found surprising was that this division of opinion about the value and validity of electronic evaluations came at the end of a well-planned and executed study, a long period of discussion, and a commitment to a gradual phase-in that will allow additional monitoring. A significant portion of the faculty view student course evaluations as a high-stakes exercise, used by department chairs and deans to inform faculty salary, reappointment, promotion, and tenure decisions. Given that we are going forward with the adoption of electronic evaluations, I have been asked to give some thought to how we might ameliorate faculty concerns.

The study commissioned by FITSAC and the Faculty Council and conducted by the Center for Educational Measurement and Evaluation and the Center for Teaching and Learning raised two principal areas of concern: low response rates and small decreases in ratings compared to paper-based evaluations (Hartshorne et al., 2011). The low response rate has sparked fears that only students at the extreme ends of the satisfaction range are likely to take the trouble to participate. The finding that numerical ratings were slightly lower than those seen with paper-and-pencil forms may have reinforced fears that unhappy students are more likely to participate than those satisfied with a course. For some faculty, electronic evaluations also raised flags from a procedural standpoint, because evaluations would move from the classroom to an uncontrolled environment. Faculty voiced fears that, particularly for contract faculty, for whom there are few measures of performance other than course evaluations, electronic evaluations might result in unfairly negative assessments.

A report by Ravenscroft and Enyeart (2009) for the Education Advisory Board, based on a survey of ten research universities, suggests that the issue of low response rates can be addressed. The institutions surveyed use a variety of strategies to boost participation, ranging from a series of email communications, to making the evaluation a small part of the course grade, to publishing some of the results. Although the ratings in the Hartshorne et al. study were slightly lower online than on paper, the authors emphasize that the effect was small and would not have been sufficient to alter a judgment about a faculty member’s effectiveness. Both reports also indicate that online evaluations tend to increase the quantity of open-ended responses, suggesting that students spend more time on course evaluations in an online format than in a paper-based classroom administration; this makes the evaluations more rather than less valuable. The Center for Teaching and Learning has been charged with helping participating colleges find effective means of ensuring participation. Over time and with continued study, comfort with online evaluations may increase. The question that needs to be addressed now is how evaluations, whether online or paper-based, are used.

We have policies about requirements for teaching evaluations, but I would like to suggest that we revise the statement in the Academic Personnel Handbook to be clearer about why we evaluate teaching, how we do it, and how we use the information obtained. Our current statement is based on Administrative Memorandum 338, issued by the Office of the President (now General Administration) in 1993. It is the product of a time when the instructional staff consisted almost exclusively of tenured and tenure-track faculty and graduate assistants, but it emphasizes, particularly for new and non-tenured faculty and graduate assistants, regular evaluation through student surveys and peer observation, prompt feedback, rewards for excellence, and opportunities for professional development. These are still good ideas, but this may be a good time to restate our commitment and to recognize the growth of the non-tenure-track faculty.

In the interim, as we begin to roll out electronic course evaluations, I offer the following thoughts:

  • UNC Charlotte engages in a program to monitor course quality and provide professional development for instructors. The components of this program are student course evaluations, peer review, reflections on teaching, recognition for excellence, mentoring, and a range of workshops and short courses to improve teaching, restructure courses, and employ new technology. The goal of the program is quality assurance, not punishment.
  • We should reinforce the fact that course evaluations by students are only one facet of how we evaluate teaching. Any meaningful evaluation should take into account multiple measures of performance.
  • The end-of-course questionnaires, whether paper or electronic, should be reviewed regularly by colleges and departments to ensure that they reflect the factors that the units consider most important. At a minimum, the questionnaires should allow for a balanced appraisal of student perceptions of an instructor’s preparation, mastery of the material, and delivery. All evaluations should include an opportunity for open-ended responses by students.
  • We should encourage faculty to use informal mid-term evaluations to determine whether changes are needed to improve student learning and satisfaction.
  • In assessing the performance of an instructor, department chairs, deans, and review committees typically review multiple samples of student course evaluations. We should emphasize the fact that these reviews focus on trends rather than single measures.
  • Variations in student evaluation ratings may occur for instructors switching from paper to electronic evaluations. These changes should be monitored and examined by departmental and college leadership, but not over-interpreted on the basis of a single measure.
  • Peer observation and feedback are required for all new faculty, pre-tenure faculty, and teaching assistants and are important adjuncts to student evaluations. A well-designed program of peer observation and timely feedback can help new faculty adjust to the expectations of the department and college and assist experienced faculty in improving delivery. Each college should evaluate whether its peer review program is meeting these goals and consider ways to use peer reviews to strengthen overall curricular goals.
  • Excellent teachers generally take time to develop. Through self-reflection, experimentation, and feedback from students and colleagues, we can all improve. We should encourage our faculty, including our non-tenure-track faculty, to start early in building portfolios that discuss their approaches to teaching, the innovative practices they employ, and the steps they have taken to strengthen their instruction, both to document these efforts and to provide information for evaluation.
  • As we encourage our faculty to try new pedagogical techniques and to employ new technology, we need to recognize that not all these experiments will be successful and not all will be warmly received by students. As long as faculty are learning from these experiences and responding to them, we should be willing to support their efforts at innovation and put evaluations in context.
  • We should recognize that not all teaching takes place in the classroom, and we should seek ways to consider the impact of our instructors outside the formal classroom setting. Nominations for teaching awards, post-graduation surveys, and other kinds of non-class-related feedback can provide longer-term evidence of the impact of faculty on students.
  • All instructional faculty should be encouraged to make use of the Center for Teaching and Learning.
  • Finally, we should look for opportunities to learn from the many outstanding instructors we have on our faculty. The awards given by the university and colleges for excellence in teaching offer many opportunities to recognize and reinforce the behaviors that make for great teachers.

I look forward to hearing your thoughts on improvements to our current statements on teaching evaluation and on how we can reassure our faculty that our evaluation methods will be fairly applied as we begin the transition to electronic student evaluations.