The 2022 ACT: It’s Not All Bad News

Soon after this year’s ACT scores were released, a number of articles appeared commenting on the half-point drop in the overall average composite score. The Wall Street Journal correctly pointed out that this is the fifth consecutive annual decline and the first time the average composite score has been below 20 since 1991. To put this number in context, prior to 1990, the average composite score (corrected for the differences between the pre- and post-1990 tests) was 20.7. Between 1990 and 2021, it was 20.9. So the composite-score fluctuations have been small and represent an average difference of as few as three incorrect answers (out of 215 questions) on the test.

Although these yearly changes draw a lot of attention, they are not, in and of themselves, diagnostic. As I’ve argued previously, the test results are more nuanced and interesting than the overall average scores reveal.

The Test

In 2022, approximately 1.35 million students took the ACT. This is roughly the same number that took the test in 2007 and a 4% increase over last year, but it is only 65% of the largest cohort, which took the test in 2016.

The ACT has four sections—English, math, reading, and science—and an optional writing component. The number of correct multiple-choice answers for each section is converted to a scaled score ranging from 1–36. The composite score is the average of the four scaled scores, rounded to the nearest whole number. Thirty-six is a perfect score.
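To make the scoring arithmetic concrete, here is a minimal sketch of the composite calculation just described. The four section scores are hypothetical, and I am assuming, per ACT’s published scoring description, that an average ending in one-half rounds up.

```python
import math

# Hypothetical scaled section scores (illustrative only).
sections = {"English": 21, "math": 24, "reading": 22, "science": 23}

# The composite is the average of the four scaled scores, rounded to the
# nearest whole number. ACT is described as rounding an average ending in
# .5 up, so we use floor(x + 0.5) rather than Python's round(), which
# rounds halves to the nearest even number.
average = sum(sections.values()) / 4      # 22.5
composite = math.floor(average + 0.5)     # 23

print(f"Average of the four sections: {average}")
print(f"Composite score: {composite}")
```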

As a predictor of what it calls “College and Career Readiness,” the ACT uses an empirically derived set of “College Readiness Benchmark Scores,” which were established in 2005 and updated in 2013. These are the minimum ACT section scores that predict a 50% chance of earning a B or better, or a 75% chance of earning a C or better, in a corresponding credit-bearing college course. The only changes from the original benchmarks are that reading was raised by one point (to 22), science was lowered by one point (to 23), and the STEM (the average of the math and science scores) and ELA (the average of the English, reading, and writing scores) benchmarks were added (26 and 20, respectively). English (18) and math (22) have remained the same.
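As a rough illustration of how the benchmarks are applied, the sketch below checks a hypothetical student’s section scores against the current benchmark values listed above (English 18, math 22, reading 22, science 23, STEM 26). The student scores are invented, and the ELA benchmark is omitted because it requires the optional writing score.

```python
# Current College Readiness Benchmarks described above.
BENCHMARKS = {"English": 18, "math": 22, "reading": 22, "science": 23}
STEM_BENCHMARK = 26  # applies to the average of the math and science scores

# Hypothetical student scores (illustrative only).
scores = {"English": 20, "math": 25, "reading": 23, "science": 26}

# Check each subject benchmark and whether all four are met.
met = {subject: scores[subject] >= cutoff for subject, cutoff in BENCHMARKS.items()}
met_all_four = all(met.values())

# The STEM benchmark is judged against the math/science average.
stem_score = (scores["math"] + scores["science"]) / 2

for subject, ok in met.items():
    print(f"{subject:8s} benchmark met: {ok}")
print(f"All four benchmarks met: {met_all_four}")
print(f"STEM average {stem_score} meets STEM benchmark: {stem_score >= STEM_BENCHMARK}")
```

In this invented example the student clears all four subject benchmarks but, with a math/science average of 25.5, falls just short of the higher STEM cutoff.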

The ACT also uses a preparedness metric called “Core or More,” meaning that a student has taken four or more years of English and three or more years each of math, social studies, and natural science. A decade ago, in 2012, 76% of the ACT cohort had taken Core or More. That percentage dropped to 69% in 2016, and to 47% in 2022.

Overall Trends

Over the last five tests, the average composite score has dropped from 20.8 to 19.8, which reflects drops in section scores of 1.2 (English), 1.2 (math), 0.9 (reading), and 0.8 (science) points. Although a full-point drop in composite score may seem dramatic, it represents a difference in section-score totals of just three points, which, in turn, reflects a difference of as few as three out of the test’s 215 multiple-choice questions. More diagnostic than composite scores is the degree to which students meet—or fail to meet—the ACT College Readiness Benchmarks.
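A quick numerical check shows why such a small raw difference can move the composite: because the composite is the rounded average of the four section scores, totals that sit near a rounding boundary can produce a full-point composite change from a three-point shift. The totals below are hypothetical, chosen only to illustrate the boundary.

```python
import math

def composite(section_total: int) -> int:
    """Composite = average of the four section scores, with halves rounded up."""
    return math.floor(section_total / 4 + 0.5)

# Hypothetical section-score totals near a rounding boundary.
before, after = 83, 80   # e.g., 21+21+21+20 versus 20+20+20+20

print(composite(before))   # 21
print(composite(after))    # 20
print(f"A {before - after}-point drop in the section-score total "
      f"lowered the composite by {composite(before) - composite(after)} point.")
```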

Historically, a higher percentage of students have met the benchmarks for English and reading than those for math and science. For instance, in 2008, 68% and 53% met these two benchmarks, respectively. In 2022, the percentages dropped to 53% and 41%. During the same period, the percentage of students who met the math and science benchmarks peaked about a decade ago at 46% and 37%, respectively. Now, they are 31% and 28%. Similarly, the percentage of students who met all four benchmarks peaked at 27% in 2018. Only 22% met all four in 2022. There has been a similar decline in the percent of students who met the STEM benchmark, from 26% in 2016 to 16% in 2022.

Unfortunately, there has been a similar decline in reading comprehension. In 2016, the ACT reported that 23% of students were “Below Proficient in Understanding Complex Texts.” In 2022, 57% were Below Proficient.

Although these overall scores seem dire, some students are actually doing well.

[Related: “Go Ahead and Kill the LSAT”]

Racial and Ethnic Groups

Ranked by self-reported race, Asian (24.7), White (21.3), and those identifying as Two or More Races (20.1) had the highest average composite scores in 2022. The next three groups were Hispanic/Latino, Prefer Not to Respond/No Response, and Native Hawaiian/Pacific Islander (17.7, 17.6, and 17.7, respectively). The two lowest-performing groups were American Indian/Alaska Native and Black/African-American (16.4 and 16.1, respectively).

This ranking has remained virtually unchanged for a decade. However, since 2012, the Asian group’s composite score has increased by 1.1 points, while all other groups’ scores have declined by an average of 1.8 points. Although the Asian group’s composite score declined both over the last five years and over the last year, those declines (0.8% and 0.6%, respectively) were smaller, by a factor of two or more, than those of any other group. In terms of composite scores, the Asian group has been the most stable.

Although all eight racial groups had lower composite scores this year compared to 2021, the largest decline was in the Prefer Not to Respond group (1.6 versus 0.1–0.6 points for the others). This group’s composite score also dropped 11% over the last five tests, approximately twice that of the group with the next largest decline (Hispanic/Latino, 5.9%).

As with composite scores, the ordinal ranking of Race/Ethnicity groups based on the Readiness Benchmarks has remained virtually unchanged since the benchmarks were introduced. For instance, in 2006, the Asian-American/Pacific Islander and White categories had the largest percentages of students who met the benchmark scores for the individual subject tests, and 29% and 26%, respectively, met all four benchmarks. The remaining three groups, American Indian/Alaska Native, Hispanic/Latino, and Black/African-American, fared worse on each of the Subject Tests, and only 10%, 9%, and 3%, respectively, met all four benchmarks.

In 2012, Asian-American and Native Hawaiian/Pacific Islander were separated into two groups, and the category Two or More Races was added. Once again, Asian and White students outperformed the other groups, followed by the Two or More Races category; 42%, 32%, and 26% of the students in these three groups, respectively, met all four benchmarks. Only 5%–17% of the other groups met all four.

Separating Asian-Americans from Pacific Islanders created two very different performance groups. Because of that separation, in 2012, the percentage of Asian-American students who met all four benchmarks rose 13 percentage points (to 42%) relative to the combined 2006 group, while the Pacific Islander group declined 12 percentage points (to 17%). The three lowest-scoring groups were the same in 2012 as in 2006, with only 11%, 13%, and 5%, respectively, meeting all four benchmarks.

A decade later, in 2022, the group rankings remained fundamentally unchanged: 51%, 29%, and 22% of the same three highest-performing groups (Asian, White, and Two or More Races, respectively) met all four benchmarks. Only 12%, 10%, 6%, and 5% of the four lowest-performing groups (Hispanic/Latino, Native Hawaiian/Pacific Islander, American Indian/Alaska Native, and Black/African-American, respectively) met all four.

There are several important points in these data. First, the percentage of Asian students meeting benchmarks in English, reading, science, and all four increased by an average of 14% (to 77%, 65%, 60%, and 51%, respectively) between 2012 (when the group was established) and 2022. The largest change (28%) was in the percentage who met the science benchmark. Although the percentage of Asian students who met the math benchmark declined from 72% to 64% over the same period, the latter was still 24 percentage points higher than the next highest group (White). Clearly, this group has fared the best.

Between 2006 and 2012, the percentage of students in the second-best-performing category (White) who met the benchmarks in math, reading, science, and all four increased by an average of 14% (to 54%, 62%, 38%, and 32%, respectively), and the percentage who met the English benchmark stayed the same (77%). This trend reversed after 2012. Since then, there has been an average decrease of 17% in the percentage of students who have met the English, math, reading, and all-four benchmarks (to 65%, 40%, 51%, and 29%, respectively), but an increase in those who have met the science benchmark (from 38% to 42%). Overall, students in the White group have not fared as well as those in the Asian group over the last decade.

[Related: “Why I’m Leaving the University”]

The three largest of the racial groups that have fared the poorest since 2006 (Hispanic/Latino, American Indian/Alaska Native, and Black/African-American) have seen average decreases of 31% in the percentage of students meeting the English, math, and reading benchmarks. In 2022, the average percentages of students in each group who met each of those benchmarks were 28%, 19%, and 18%, respectively. In addition, the percentage of American Indian/Alaska Native students who have met the science benchmark has declined since 2006, to just 12%. Although the percentage of Hispanic/Latino and Black/African-American students who have met the science benchmark has increased since 2006, the 2022 figures are still quite low, at 19% and 10%, respectively.

Courses Matter

This year, students who took Core or More earned higher scores on each of the Subject Tests (by 2.4 to 3 points) and higher composite scores (by 2.6 points) than the Less Than Core group. Not only did composite scores for the Core or More group remain relatively stable over the last five years (declining by only 0.2 points, to 22), but its science scores remained the same (compared to a 0.8-point decline for the cohort), its reading scores increased by 0.1 points (compared to a 0.9-point cohort decline), and its English and math scores declined by only 0.2 and 0.5 points, respectively (compared to 1.2-point cohort declines).

Although 4% fewer students (16%) met the STEM benchmark compared to five years ago, those who did saw their average math score remain the same and their science score increase by 0.3 points.

Similarly, the 2022 average reading score for students rated Above Proficient in Understanding Complex Texts is the same as it was in 2018 (31). In contrast, the average score for the Below Proficient group dropped by 0.7 points (to 15.6).

Understandably, the degree to which taking Core or More increases composite scores is inversely correlated with the overall performance of each Racial/Ethnic group. In other words, taking Core or More is most helpful for the lowest-performing students. In 2022, across seven of the eight racial groups, Core or More students scored higher than their Racial/Ethnic group as a whole by 5% (Asian) to 13% (Hispanic/Latino). Once again, the outlier was the Prefer Not to Respond/No Response group, for which Core or More students scored a remarkable 38% higher than the group as a whole.

Gender

Since 1995, females’ composite scores have not differed from males’ by more than one half point. In 2013 and 2016, females’ composite scores equaled males’, and females have outscored males every year since then, including 2022 (20 versus 19.7). In 2022, 3% of the cohort did not identify their gender or identified as a gender other than male or female; their composite score was 18.3.

A higher percentage of female students took Core or More this year than did males (51% versus 45%), but their composite scores were 0.4 points lower (21.8 versus 22.2). The 17% of Core or More students who did not identify as male or female scored 23.4. Most importantly, all three of these Core or More groups had composite scores that were 2.3 to 3.1 points higher than those of students who took Less Than Core.

Historically, female students have consistently outscored males in English and reading. Further, over the last two decades the difference between male and female math scores has steadily declined, and the difference between their science scores has all but disappeared. In 2022, female students outscored males in English (19.6 versus 18.5) and reading (20.9 versus 20), scored virtually the same in science (19.9 versus 20), and scored just 0.6 points lower in math. Students who did not identify as male or female (3%) scored the lowest in each of the Subject Tests (17.5 to 19.2).

Educational Aspirations

The effect of Educational Aspirations on student performance is nothing less than remarkable. Beginning in 2006, the ACT analyzed scores by Racial/Ethnic Groups and Postsecondary Educational Aspirations. It divided the latter designation into seven categories that have remained unchanged: Voc-Tech (vocational-technical), Two-Year College Degree, Bachelor’s Degree, Graduate Study, Professional-Level Degree, and two others, Other and No Response, which will not be considered here.

Arguably, student aspirations are the most robust predictors of ACT performance within and between student groups. For instance, in the 2006, 2012, and 2022 cohorts, composite scores were 31%, 41%, and 51% higher, respectively, for those aspiring to a professional-level degree than for those planning to pursue vocational-technical training. Similarly, composite scores for those aspiring to a graduate degree were 12%, 17%, and 17% higher, respectively, than for those aspiring to a bachelor’s degree. Clearly, the effect of aspirations, even if they differ only slightly, is substantial.

[Related: “‘Test-Blind’ Is Another Tool for Discrimination”]

Educational Aspirations have always had a robust influence within Racial/Ethnic Groups. In 2006, the composite score for each racial group was an average of 32% higher for those aspiring to a professional-level degree than for those pursuing vocational-technical training. Likewise, scores for those aspiring to a graduate degree were an average of 13% higher than for those pursuing a bachelor’s degree. However, even more remarkable is the fact that students in the groups whose aggregate performance was the poorest overall (Black/African-American, American Indian/Alaska Native, and Hispanic/Latino) and who aspired to a graduate or professional-level degree actually outscored students in the highest-performing groups (White and Asian-American/Pacific Islander) who aspired to vocational-technical training or a two-year college degree.

Compared to 2006, the effects of Educational Aspiration were larger in 2012, and larger still in 2022. In the latter year, the average difference in composite scores within each racial group between those who aspired to a professional-level degree and those pursuing vocational-technical training was a remarkable 47%. The largest differences (53% and 61%) were in the Native Hawaiian/Pacific Islander and Prefer Not to Respond/No Response groups, respectively. The smallest difference, 35%, was in the American Indian/Alaska Native group.

In 2022, students in the four lowest-performing groups (Black/African-American, American Indian/Alaska Native, Hispanic/Latino, Native Hawaiian/Pacific Islander) who aspired to graduate study or a professional-level degree outscored students in the highest-performing groups (White, Asian, Prefer Not to Respond/No Response, and Two or More Races) who aspired to vocational-technical training or a two-year college degree by as much as 6.5 points (39%).

Educational Aspirations have consistently trumped Racial/Ethnic differences in ACT performance for at least the last 15 years.

Lessons Learned

As a biological psychologist, I have spent most of the last several decades teaching in university biology and psychology departments. However, I also spent twelve years teaching high-school science, math, and ACT prep courses for a large, nonprofit tutoring center that drew students from about a dozen area high schools. To stay abreast of the test, and to understand it from my students’ perspectives, I took at least 20 ACTs myself, and taught from many more. Over those years and since, I have watched high school curricula and student performance falter, but the ACT test itself has remained stalwart. I still believe that the ACT is a valid measure of the degree to which students have learned what was (or should have been) taught in high school.

The reasons that students perform as they do on the ACT are complicated and are not revealed by a simple comparison of yearly changes in overall composite scores. Performance is related in complex ways to each of the groups and subgroups by which the ACT data are reported. Further, yearly changes in overall performance cannot be adequately explained by discrete events in any specific year; they reflect decades-long patterns and trends that differ between student groups, however those groups are defined. The challenge is to understand the nuances.

In the most general terms, it is no surprise that composite scores have declined, given that the percentage of students taking Core or More has dropped by almost 30 percentage points over the last decade and that less than half of this year’s cohort was proficient in reading complex material. That said, some students are doing much better than others, even excelling. Students who take a more rigorous high school curriculum not only perform better on the ACT but are also less vulnerable to whatever factors hinder the lower-performing students. It is also obvious that the high school courses that students choose to take are related to their educational aspirations, and their aspirations are strong predictors of their ACT performance.

This suggests that enhancing students’ educational aspirations—irrespective of their particular demographics—will enhance their academic performance. One way to do so is to teach them that they can (and will) succeed if they work hard rather than telling them that they are at the mercy of uncontrollable external circumstances. The ACT itself reveals that students from historically lower-performing groups who have high educational aspirations can outperform students from higher-performing groups who have lower aspirations. This information needs to be made clear to students.

Finally, viewing ACT results through the lens of outmoded racial and ethnic categories is illogical and misleading. About 16% of the 2022 cohort fell into the Two or More Races or Prefer Not to Respond categories, self-reported multiracial status is inherently ambiguous, and the Prefer Not to Respond scores have fluctuated significantly. Together, these facts call into question the external validity of all the Racial/Ethnic categories.

Categories such as Asian, White, Black/African-American, Hispanic/Latino, and so on would have external validity only if each group were monolithic, culturally and genetically homogeneous, and distinct from all the other groups. However, none of those assumptions is true. Further, there is no way of knowing which students end up in the Two or More Races or Prefer Not to Respond groups. Consequently, it is nearly impossible to make meaningful comparisons between any of the ACT’s Racial/Ethnic categories.

In fact, the arbitrariness of the categories was laid bare a decade ago when the Asian-American/Pacific Islander group was separated, revealing two academically distinct subgroups. I am sure that the same would happen if any of the Racial/Ethnic categories were subdivided or replaced by, for instance, family culture, country of origin, state, family demographics, school system, or ZIP Code. Any of the latter distinctions would be more informative, and more useful in creating effective educational policy, than today’s crude, biologically based categories.


Image: Adobe Stock

Author

  • Frederick Prete

    Frederick Prete, PhD, is a biological psychologist in the Biology Department at Northeastern Illinois University. He writes about education and the use and misuse of biology in forming public policy. Additional essays can be found on his Substack, “Everything Is Biology,” and he can be reached at [email protected].


4 thoughts on “The 2022 ACT: It’s Not All Bad News”

  1. Thank you for the thoughtful analysis. Your nuanced approach to the results has given me a more accurate understanding of what the ACT data tell us.

  2. The problem I have is that — knowing that there was a significant drop in student learning because of the Covid shutdowns — it isn’t reflected in the ACT score.

    It’s like if you know there was a severe drought in a certain area for a couple of years, you would expect to see it in tree ring growths of wood from that area.

    Everything I have seen on the K-12 level is that students lost upwards of a year’s worth of anticipated academic progress. In some cases, parental resources and involvement helped abate this loss — and this includes the prep courses taught by Dr. Prete. And the ACT may not include the entire college-bound cohort as IHEs increasingly aren’t requiring it or the SAT.

    But if the ACT is reflective of the cohort, there ought to be a significant drop this year.
