Cross-posted from National Association of Scholars.
Fall 2011 has seen some major milestones for the SAT/ACT optional movement. DePaul University, for instance, initiated its first admission cycle sans test requirement. Clark University announced last month that it will offer test-optional admissions for the incoming class of 2013.
In SAT Wars, his new book released this fall, sociologist Joseph A. Soares of Wake Forest University hails the success of test-optional admission policies. Wake Forest was the first of the top 30 U.S. News schools to go test-optional and, through its blog Rethinking Admissions, is one of the movement's most vocal cheerleaders. According to Soares, adopting policies that allow applicants to opt out of reporting their scores has successfully diversified these campuses by race, gender, ethnicity, and class (groups he claims are unfairly excluded for underperforming on standardized tests) without compromising overall academic quality.
By all appearances, the standardized testing requirement in higher ed admissions is on the long and ragged road out the door. To date, nearly 850 colleges and universities (40% of all accredited, bachelor-degree-granting schools in the country) have already bid farewell to the test requirement in some form or another. Fifty-three of these institutions are currently listed in the top tier of the “Best Liberal Arts Colleges” list published by U.S. News and World Report, including Bowdoin, Smith, Bates, Holy Cross, and Mount Holyoke. Even some of U.S. News’ high-ranking national universities, such as Wake Forest University, Worcester Polytechnic Institute, and American University, are categorized as test-optional. It now seems likely that this trend will only gain in popularity and momentum in the coming years.
Yet even after my elementary school pep talk on the nature of scaled grading, I always had this lingering feeling that standardized test scores were somehow an unfair representation of what I could do. Perhaps I simply fell into the category of being a “poor” test taker, easily muddled by my own bubble-filling perfectionism and the time constraints imposed by these acronymic tests. Or maybe it was because I could never wrangle up enough motivation to spend my free time studying methods for optimizing my score. And most of all, like any “free-thinking” member of my generation educated by the New Jersey public school curriculum of the 90s, it may have been because I was contentedly assured that I was so much more than a number.
Given these facts, one would think that I would be all for the enforced disappearance of the SAT in favor of the new “holistic” entrance requirements offered by test-optional schools. But like a wised-up adult now grateful that her mom made her eat vegetables as a child, I find myself in the curious position of lending support to this once-bemoaned exam.
My reason for this change of heart is simple. We need basic universal testing methods to separate out the prepared prospective students from the unprepared.
In his 2011 work, Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies, Howard Wainer uses the available statistical data to conclude that institutions considering SAT-optional policies should proceed with caution.
Making the SAT optional seems to guarantee that it will be the lower scoring students who withhold scores. And these lower scoring students will also perform more poorly, on average, in their first-year college courses, even though the admissions office has found other evidence on which to offer them a spot.
For example, Wainer found that at Bowdoin College, a school at the forefront of test-optional admissions, students in the entering class of 1999 who chose not to report their SAT scores tested 120 points lower, on average, than those students who submitted scores with their applications. The gap sounds large at first glance, but for students who typically have combined scores of 1250 and above in the traditional math and verbal categories, does a 100- to 120-point spread really matter when deciding whether a student is college-ready?
Clearly, admissions administrators at schools like Bowdoin and Wake Forest don’t consider it to be a problem. And they might be somewhat justified in this assessment, even if, as Wainer found, the non-reporting students tend to have lower college GPAs than their score-submitting peers. Not everyone should be getting As in college, and there are plenty of middling students in solid programs who can still benefit from a college education.
But would these higher-ranked institutions really want to admit students who score 200 or 300 points below the institutions’ averages? Likely not, as the continued penchant of test-optional schools for purchasing the names of high scorers indicates. The test-optional philosophy of admissions might sound warm and fuzzy on the surface, but for many of these schools this still appears to be a numbers game, one that perpetuates the value of high scorers and high rankings, now precariously balanced with the goal of attaining the oh-so-necessary badges of inclusion and diversity (yet more statistics to tout).
Most of the students profiled by these SAT-optional schools to prove the success of their new admissions policies are ones who were already at the top of their high school classes and who would have been accepted to any number of decent schools, even with their horrifyingly “low” test scores. Often colleges are willing to overlook mediocre scores if an applicant is salutatorian, captain of the volleyball team, or editor of the newspaper: achievements indicative of a certain level of discipline and focus. And if what these test-optional schools claim is true, that there are students out there who are great fits for their campuses and who have everything in their applications except for a specific score range, the schools should have had the courage to admit (and maybe even recruit) them anyway, bad scores included.
It takes courage to admit low-scoring applicants because doing so all but guarantees lowering the SAT averages of these institutions and thereby risks knocking them down a few pegs on many of the popular college ranking lists that use the test scores of incoming freshmen as a major factor in their rank calculations. Now, with these new non-reporting admissions options, some schools do not consider themselves obligated to factor in the scores of their test-optional applicants, thus allowing their middle-50% SAT range to represent only test-reporting students (presumably the best of their enrollment pool). Just look at what the oft-recurring footnote No. 9 on the U.S. News “Best Colleges List” has to say:
SAT and/or ACT may not be required by school for some or all applicants, and in some cases, data may not have been submitted in form requested by U.S. News. SAT and/or ACT information displayed is for fewer than 67 percent of enrolled freshmen.
If these schools truly believe that the tests are biased or inaccurate representations of student preparedness, then why should they care how their test medians rank or if they recruit the highest scorers for their incoming classes?
Apparent hypocrisy aside, my suspicion is that the schools profiled most frequently on this issue, and the debates surrounding their choice to step away from standardized tests, cover up the true harm the test-optional movement does to academe as a whole. For it seems to pose the most danger not to its leaders, many of whom still selectively accept students above the 80th percentile, but to the large number of other schools that are, in reality, following suit to lower their admissions standards and raise enrollment to make ends meet. A 100-point spread might not mean all that much to students with scores of 1250+, but it can make a world of difference at schools whose score averages are already well below that threshold. The hard truth is that at some point being a well-rounded person ceases to compensate for lacking quantifiable verbal and math skills.
And contrary to what Soares and his cohort claim, I think most would agree that high school GPA does not ensure the same universality of assessment offered by tests such as the SAT, because high school curricula are not created equal. Although I grew up in a school district where we started learning how to write research papers in the third grade, some of my college classmates never had to write more than a single double-spaced page at a time, and some were never required to read a book cover to cover in the course of their entire K-12 educations.
As for the larger trend, we are not talking about straight-A students at challenging high schools who happened to have the flu on test day, who can’t afford to take test prep classes, or who don’t work well under pressure, however much test-optional proponents want us to believe that is the case. For the majority of those nearly 850 accredited institutions, this movement is about admitting students who are not prepared for, and quite possibly not capable of benefiting from, a college-level education.
Accepting students to college when they are not ready for college-level course work is irresponsible and inexcusable. It is time to get beyond the top schools in this discussion and consider the havoc test-optional policies may wreak on the vast majority of higher ed institutions. What seems like only a minor performance disparity outweighed by the benefits of “diversity” at schools like Wake Forest could spell the end of professional academic standards at lower-ranking but still respectable institutions.
It also might be time for the proponents of test-optional admissions to stop and consider that maybe it really isn’t the test’s fault after all. Low-scoring but worthy students ready to tackle college coursework are probably the exception rather than the rule. Admissions officers should use individual discernment and admit such students, when warranted, with full knowledge of how they scored. This is exactly why we have people, not mathematical algorithms, make admissions decisions in the first place.
More broadly, if certain groups are genuinely disadvantaged by these tests and underperform, as researchers such as Soares and organizations like the National Center for Fair and Open Testing claim, we should continue to place emphasis on innovative solutions for K-12 reform instead of dispensing with standardized testing altogether. The chances are that the most notable demographic gaps in test results reflect deficiencies in education quality or testing support, both areas we can improve over time through reform, more than any inherent flaw in the objective test itself. Not to mention that the analysis of standardized test scores is one of the primary methods used, including by the test skeptics cited above, to identify policy weaknesses and demographic disparities. Without any form of universal achievement testing, we risk missing demographic weaknesses altogether and could neglect the urgency of finding solutions where legitimate problems exist.
The tests will never be perfect or comprehensive, but they continue to offer the most reliable universal assessment of college preparedness, especially when considered alongside the many other factors traditionally used in admissions decisions. To say that it is the test’s fault is both a juvenile and a nearsighted excuse. We do need to rethink college admissions, but implementing policies that let in more, not fewer, unprepared students is heading in the wrong direction, one that has no future in mind.