As reported here and here, the Regents of the University of Colorado have voted to commission a survey of the political climate on the Boulder campus. I spoke at the meeting, and the discussion was less contentious than one might expect given the history of liberal-bias controversies at Colorado and elsewhere over the last ten years. The Regents behind the proposal, Jim Geddes and Sue Sharkey, were only calling for an inquiry into the matter (their proposal to insert “political discrimination” into the school’s anti-discrimination policies was withdrawn). Who could object?
Now comes the difficult part. How will the inquiry be conducted?
If the research firm that conducts the investigation relies on a questionnaire alone, it will have to address typical problems of question design, confidentiality, sampling, response conditions, and the attitude of respondents toward the instrument. In this case, those problems look especially complicated.
One question may ask whether respondents have seen or suffered any form of political discrimination on campus. “Yes” replies will require a follow-up explanation: “What happened? Why do you think it happened? Who knew about it?”
Because the topic is a loaded one in Boulder, respondents will have to complete the questionnaire confident that whatever they say will not be used against them. Moreover, the research firm conducting the survey will have to reckon with the faculty’s current skepticism, even hostility, toward its efforts. No professor, liberal or conservative, likes to be interrogated about political attitudes and conduct, and professors who know the survey results will make headlines will compose their answers tactically. Liberal professors may understate their politics, while conservative and libertarian professors may overstate the liberal professors’ politics. How do researchers control for that?
These are but a few of the stumbling blocks that come up when charges of liberal bias arise and broad empirical data are lacking. Yes, the vast majority of professors are Democrats, and they self-identify as “liberal” at much higher rates than they do as “conservative.” But that doesn’t mean they are biased or that they conduct their classes in a partisan or intimidating fashion. So what if the Boulder survey finds that the campus leans heavily to the left and that some people complain of political discrimination? I’m not sure what policies could be implemented to address those cases, and we know how difficult it is to change a culture (altering rules and shifting resources usually fails).
There is another way to get at the bias problem, however, and it skirts the personnel complications. It’s a curriculum review, and it works like this. Investigators proceed by:
- requesting from non-STEM departments the syllabi of courses taught in the last three years;
- tabulating the assigned readings;
- determining the topics, themes, ideas, events, and figures that dominate the assignments and presentations;
- identifying any ideological slant in those materials.
The last step isn’t as fuzzy as it may seem. For instance, if courses on U.S. history highlight the negative of our nation (racism, sexism, slavery, imperialism, poverty, inequality, homophobia) and overlook the positive (individual rights, property rights, free markets, local control, civic associations, class mobility, geographical mobility, prosperity), we have a biased curriculum. In courses on literature and the arts, if the majority of the material falls into identity concerns, breaking people up by skin color, sexuality, religion, etc., while underappreciating common human traits, we have evidence of bias. Do we find Karl Marx, Michel Foucault, and various postmodernists assigned much more often than Edmund Burke, Friedrich Hayek, and neoconservatives? Other gauges that divide liberal from conservative approaches may be easy to apply as well.
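For a sense of how the tabulation step might work in practice, here is a minimal sketch in Python that counts how often a handful of authors appear across a folder of plain-text syllabi. The folder name, the file format, and the author groupings are illustrative assumptions on my part, not part of any actual review protocol; a real review would work from full citation records rather than raw name matches.

```python
# Minimal sketch: tally mentions of illustrative authors across
# plain-text syllabi. Folder layout and author groupings are
# assumptions for demonstration, not an actual review protocol.
from collections import Counter
from pathlib import Path

# Hypothetical groupings, for illustration only.
LEFT_AUTHORS = {"Marx", "Foucault"}
RIGHT_AUTHORS = {"Burke", "Hayek"}

def tabulate(syllabi_dir: str) -> Counter:
    """Count mentions of each tracked author across all .txt syllabi."""
    counts = Counter()
    for path in Path(syllabi_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for author in LEFT_AUTHORS | RIGHT_AUTHORS:
            counts[author] += text.count(author)
    return counts

if __name__ == "__main__":
    counts = tabulate("syllabi")  # assumed folder of plain-text syllabi
    left = sum(counts[a] for a in LEFT_AUTHORS)
    right = sum(counts[a] for a in RIGHT_AUTHORS)
    print(f"Left-leaning mentions: {left}; right-leaning mentions: {right}")
    for author, n in counts.most_common():
        print(f"{author}: {n}")
```

Even a crude count like this puts numbers behind the Marx-versus-Burke question, and the same approach extends to topics, events, and figures.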
A model for this approach is the study of the Texas history curriculum conducted by the National Association of Scholars last year. The advantage of such a curriculum review is that it examines books and ideas, not people. There is no need to ask professors to explain themselves, and hazy discussions of hiring decisions (which always go nowhere) can be avoided.
A curriculum review also singles out the most important element of higher education: not the make-up of the faculty, but the nature of the instruction. The examination of course content better reflects the charge of the Regents, too, whose first duty is to ensure the quality of a Colorado education. A tendentious curriculum is an inferior one; that’s the premise by which the results should be interpreted. It’s not a call for “balance,” but a return to the mission of the university, which is to acquaint youths with the full range (the “universe”) of intelligent thought and creativity. If conservative thought is missing from course offerings, if the curriculum lets progressive critique overshadow conservative tradition, it has veered into partisanship.
This makes the Regents’ selection of a research organization a delicate matter. If the group has no experience with campus protocols and faculty dynamics, if the investigators have no familiarity with tenure decisions and general education requirements, then the survey will fail. If the investigators have no knowledge of the softer disciplines, and if they have no understanding of progressive and conservative backgrounds (who Tocqueville was, where John Dewey stands, the Canon Wars of the early 1990s, etc.), the survey will fail.
In other words, we need academic experts to join the poll takers to complete the campus climate survey. Without a curriculum review, we will find unsurprising evidence of liberal leanings and a few acts of discrimination, but nothing that approaches actionable intelligence.