Not
too long ago CBS MoneyWatch published a list
titled “25 Schools with the Worst Professors,” using data that we at the
Center for College Affordability and Productivity (CCAP) had gathered from
evaluations published on ratemyprofessor.com (RMP). We strongly believe that
this list of 25 schools is a complete misrepresentation of our work. While it
is true that we use RMP data in the college rankings we develop for Forbes, we do not represent, and never have represented, RMP data as a
measure of teaching quality; indeed, we have always characterized RMP data as a
measure of “student satisfaction” or “consumer preferences” (see our methodology)
and as a way to answer the question,
“How well do students like their courses?” Therefore, using our RMP data to
construct a list of schools with the “worst professors” is wholly
inappropriate. Furthermore, our RMP data are restricted to a very narrow sample
of 650 institutions (there are more than 4700
degree-granting institutions in the United States), so it is not possible,
using only our data, to determine if the schools in our sample are indeed the “worst”
or “best” in teaching quality.
The
distinction between “teaching quality” and “student satisfaction,” though
subtle, is an important one. The extant scholarly literature on RMP data–like
the voluminous scholarly literature on student evaluations of teaching–supports
the claim that there is a positive correlation between course easiness (or at
least easiness as perceived by the students) and student evaluations. That is
why we only consider RMP to be a measure of “student satisfaction,” since this
relationship between course easiness and overall rating may indicate that
student evaluations reflect something other than true teaching quality.
Nevertheless, there is sufficient support in the RMP literature (see our
methodology for a brief discussion of this) that RMP ratings generally
correspond with the ratings students give in the official evaluations of
teaching administered by the colleges themselves. We therefore believe it is
justifiable to treat RMP data as a measure of student satisfaction.
I noted when the MoneyWatch report came out that the top (bottom?) of the list contained some well-known and respected technical institutes and universities. As a professor at such a school, I can attest that the challenge of mathematics, science, engineering, and computing courses can frustrate students and leave some of them bitter. That frustration is often reflected in the RMP data.
I know dedicated teachers at several of the “worst” schools. While they’ll go out of their way to help students, at the end of the day some students are simply unable to flourish in a technical program. This isn’t a reflection on the program, the faculty, or the student; STEM disciplines are unapologetically demanding and not everyone’s cup of tea. Those of us who teach these subjects do no favors, either to students or to our science and engineering professions, if we do not honestly assess how well students have mastered the material.
Do you want a bridge designed by someone who got a “gentleman’s C” in statics? How about a physician who was “passed along” in chemistry and biology? Compare that with the potential damage done by someone who received an “A” in Greek Philosophy without understanding Plato’s Allegory of the Cave.