The CATS reports should be a resource for students

Neil MacDonald

Besides the warmer weather, colorful shorts and insane workloads, another sure sign that the semester is ending is the administration of CATS reports. The Course and Teacher Survey (C.A.T.S., get it?) is administered to every course section towards the end of the year, usually on that class’s last meeting. The report is an opportunity to turn the tables and finally grade the professor who has been grading you for the past 15 weeks.

The current CATS form includes general demographic information (college, year, major, etc.), a list of “agree-disagree” bubble questions and space for handwritten comments. The results of this survey are mostly kept private, however, and the questions can be vague. When students need to make decisions about classes, they turn to professor rating websites that usually offer unhelpful, outdated reviews. It’s time that the CATS became a resource for faculty, administration and students by revamping the survey’s questions and publishing all of the results.

The survey’s instructions state the CATS are “intended to help instructors improve their own teaching, and they are also used by the University in decisions regarding reappointment, promotions and salary.” Each general question takes the form of a statement with which the respondent indicates, on a scale of one to five, his or her level of agreement. Some examples include, “The instructor for this course treats students in a respectful manner,” “explains course material clearly” and “interacts effectively with the students.” There is also a short section assessing the student’s own effort in the course, asking about attendance, assigned work and the prevalence of cheating.

Currently, a small group of professors have opted to allow the results of these surveys to be posted on MyNova. However, a very large number of faculty opt out, and the website explains that some “faculty members object to publishing this material electronically (where it could be communicated to people outside the university). Others feel that the data is a part of a teacher’s confidential personnel records and should not be made public. Other faculty members have reservations about the validity of student ratings in general, or about the usefulness of ratings data in certain kinds of courses.” In other words, ratings are private, invalid or just not useful.

It is true that in an ideal world, the instructor of a particular course should not be a concern of students considering taking it. But in reality, we have all had less than ideal professors, and we all do a little digging into a class’s instructor before registering for that class. Because CATS are mostly private, students turn elsewhere. Professor-rating websites and password-protected review sites passed down from year to year are very popular around registration time. But these sources tend to have only a few reviews, and many of them are from years ago. Why not take an established practice, the CATS, and revamp it into something that can benefit not only the faculty and administration, but students as well?

The first step is recognizing that not all courses are created equally. One-size-fits-all surveys may misrepresent the goals or style of a particular course. Bloomfield College, in New Jersey, has more than 20 different variations of course evaluation surveys available to its faculty. Some are for lecture-based classes, others are for writing intensive classes, and so on. The questions are tailored to accurately assess many of the specific course styles present on a college campus.

Just down the road at Haverford College, a very open-ended course evaluation method is employed. No bubble sheets can be found, just questions like “Would you recommend this course to a friend? Why?” and “The best and worst things about this course were…” They even ask students to give the course a letter grade just like the professor grades his or her students.

While it may not be feasible for the University to make two dozen CATS variations or read thousands of open responses, it is possible for the CATS to get an upgrade that takes students’ interests into account. Questions such as “Is the class lecture-based or discussion-based?,” “Is participation graded?,” “Is the textbook used?,” “Is the course writing-intensive?” and “Is the grade mostly from exams or quizzes?” could all be added to the survey to help students make registration selections. In addition, information regarding office hours availability, email response time, sticking to the syllabus schedule, assignment turnaround time, group projects and ability to engage and keep students interested would all be valuable on a new CATS.

Some faculty members even distribute a type of informal survey at midterm time so that they can adjust their methods before the semester is over. Introducing a smaller, midterm CATS (kittens, maybe?) could let faculty see what can be changed while there is still time to change.

While it is great that the University cares what its students have to say about faculty performance, the CATS can be improved to benefit students as much as administrators. Adding questions that students would ask their friends about a class, and making all results public, would end the misinformation spread on ratings sites and through the grapevine. A permanent restructure that keeps both student and University interests in mind is needed. Because teacher reputation is a factor in class selection, the University should provide data on its employees just like other service providers are required to disclose information about their products. Doing so will lead to better-informed choices and better classroom environments for faculty and students alike.