A Widening Gap
The release of ACT’s class of 2016 report confirms that ACT is now beating College Board where it matters most — everywhere. ACT easily surpassed the 2 million test takers mark. Its gain of 166,000 test takers is the largest in 50 years. The ACT was required of all public high school students in a record 18 states. Twenty-three percent more 2016 graduates took the ACT than took the SAT.
College Board has not yet released its class of 2016 figures, but the SAT is not expected to show more than minimal growth. We’ll use 2015 SAT data as a proxy in this post when making comparisons to the ACT.
If there is a weakness in ACT’s growth engine it is that the gains have been fueled by state-mandated testing. To ensure college readiness for all students and to satisfy federal testing requirements, states are increasingly opting for mandated college admission testing. ACT has had a tremendous head start on College Board, because it could more convincingly make the case that its test was academically aligned.
For the class of 2016, ACT had 18 states where 100% of students were tested: Alabama, Colorado, Illinois, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, Missouri, Montana, Nevada, North Carolina, North Dakota, South Carolina, Tennessee, Utah, Wisconsin, and Wyoming. That compares to 13 states for the class of 2015.
ACT’s 2016 growth was geographically concentrated. Only 6 states had gains of more than 3% of high school graduates, and 5 of those were new mandate states.
[Chart: ACT gains by state, as a % of high school graduates]
For the class of 2017, ACT lost both Michigan and Illinois testing to College Board. Those two states had approximately 270,000 ACT testers in 2016.
The Story of Score Distribution
Compass’ primary interest, though, is not in the business competition between ACT and College Board but in how the numbers impact students, counselors, and colleges. A trend that has accelerated in recent years is the appearance of top scorers on the ACT. Historically, students shooting for the most competitive colleges were wary of the ACT — even well after colleges had made clear that the tests were viewed on equal footing. Students feared an unspoken bias. Ironically, the competitiveness of admission at top colleges that once created hesitation is now driving students to the ACT. Students and parents are loath to leave any stone unturned when presenting a positive testing portfolio. Hitting 25 home runs from the right side of the plate is great until you realize that you are a lefty who can hit 40 home runs from the other side.
While many students perform similarly across the admission tests, some find a marked advantage in one test over the other. That advantage can be a higher score, of course, or the advantage could be in the form of easier preparation and reduced anxiety from material that feels more natural.
Perhaps the most remarkable change in recent years is the rise of the 36 Composite. In 2001, only 89 students received a perfect score on the ACT. That represented 1 student out of every 12,000 tested! The score barely existed. In 2016, there were a record 2,235 perfect Composite scorers or 1 of every 935 testers. Even just looking at the last 4 years, the number of perfect scores has nearly tripled. And these were students who tested prior to the debut of a mysterious new SAT. It’s likely that avoidance of the new SAT in the class of 2017 will bump the number of perfect ACT scores even higher this year.
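As a quick sanity check on those rates (treating the reported figures as approximate), the ratios above can be recomputed directly:

```python
# Sanity check of the perfect-score rates cited above (figures are approximate).
# 2001: 89 perfect Composites at roughly 1 in 12,000 testers.
rate_2001 = 1 / 12_000
# 2016: 2,235 perfect Composites, or roughly 1 in 935 testers.
rate_2016 = 1 / 935

# Implied tester pools behind those ratios.
testers_2001 = 89 * 12_000    # ~1.07 million testers
testers_2016 = 2_235 * 935    # ~2.09 million testers

print(f"A perfect score became ~{rate_2016 / rate_2001:.1f}x more common")
```

The implied pool sizes line up with ACT’s reported totals, which is why the per-tester rate, not just the raw count, is the meaningful comparison.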
Why was the number so low, and why has it risen so high?
The change was not manufactured by the ACT itself — the test difficulty has not changed over the last 15 years. (We are regularly disappointed at how many of our colleagues in the test prep industry argue the opposite side of fundamental questions such as this. They are simply mistaken, and we have tried to present the facts plainly and clearly in the remainder of this post.) The difference is in the pool of students choosing to take the ACT. Even in states where most students favored the ACT, top scoring students 10-15 years ago opted for what they considered the safety of the SAT. What changed is that a lot more students decided to at least try hitting from the left side of the plate.
When discussing these sorts of changes, some common questions arise:
Is the ACT getting easier?
Changing a test’s difficulty — meaning how hard it is to get a particular score — requires changing how a test is scaled. This is done infrequently (ACT last made a change in 1989) and with great notice (ACT and SAT typically start educating students and counselors 2-3 years before a change is made and are open about how old and new scores should be compared). Scores that change behind the scenes would devalue the test. A college would have no way of accurately comparing scores. States and school districts would have no way of tracking student performance over time. Consistency and comparability are what give the ACT and SAT power. If the test were getting easier, we would see significant score changes in states and districts that already had a high penetration of students taking the exam. That has not been the case.
Isn't the ACT making the test harder to account for the fact that more top students are taking the test?
This is the inverse of the first question, and most of the answer still applies. ACT neither benefits nor suffers from having one thousand 36s versus three thousand 36s. If it suddenly had ten thousand 36s, then something would have gone wrong with the test, and it would clearly be losing its ability to discriminate among top students. What is actually happening is that ACT and SAT are more in balance than they have ever been. At the higher end of the score ranges in 2016, the testing populations have become similar.
Doesn’t the curve change when more high-scoring students take the ACT?
No, that’s not how a curve is established on the ACT. The SAT and ACT use a fixed reference group to norm scores when the initial scale is created. After that, every new test is equated back through an unbroken chain that leads to this reference group. The ACT is, after all, simply an academic measuring stick. A measuring tape doesn’t need to get longer or shorter based on whether I am measuring the height of third-graders or NBA players.
Is the content of the ACT changing?
This question deserves — and will receive — a post of its own. The material covered by the ACT evolves with the academic standards used by states (in recent years, the Common Core State Standards). In some cases, the material added is more advanced. This does not necessarily lead to harder questions. The difficulty of a question is highly dependent on the clues an item writer provides and on the ways in which the correct answer is disguised. Slightly more difficult questions can be offset with slightly easier questions. ACT has no interest in “competing” with students in trying to stump them; the increase in high scores is the most obvious evidence of that. The test allows for slow and steady content evolution while keeping the measuring stick intact.
Is a top score on one test worth more than a top score on the other?
Does hitting a left-handed home run score more runs than hitting a right-handed home run? Taking a standardized test is not like trying to stand out by playing the tuba or winning a national writing contest. Top colleges will have thousands or tens of thousands of ACT and SAT scores to compare and have a number of ways of doing that in a consistent fashion. Receiving more or fewer ACT scores does not change the relative value of a top score. It should also be pointed out that most top colleges still see more SAT scores than ACT scores. The gap is narrowing quickly, but the change is far from dramatic enough to impact their decision making.
Concordance tables are used to compare ACT, old SAT, and new SAT scores and are not dependent on whether you choose to take the ACT or the SAT. Students should take the test that suits them best. In many cases, the score differences are trivial. Students need not take ACT or College Board administered tests to identify a preferred exam. Taking previously released exams in a proctored environment can replicate the experience without putting a score on the student’s record. The PreACT, Aspire, and PSAT can also serve as reference points. Switch based on how you will prepare and perform, not on a test’s popularity.
Change at All Levels
Although the change is most pronounced at the top score, the trend is present throughout the high score range. The number of students at the lowest scores also grew much faster than the overall average. The cause at the low end is quite directly the increase in state-funded testing. By testing all students, states have included students who might not ordinarily be ready for 4-year colleges and would not have taken the ACT on their own. In the graph below, showing concorded SAT scores that fall on the same 1-36 scale as ACT scores, it becomes obvious that SAT is losing the growth game, particularly at the extremes.
While the growth at the low end is easy to pin down, the increase in high scoring students is more multifaceted.
- State-funded testing that levels the playing field between the ACT and SAT among students applying to the most competitive colleges.
- Increased ACT testing in states that traditionally produce a disproportionate share of top scorers.
- Heightened attention to ACT test preparation and repeat testing.
- A shift toward “dual testing” for students looking at competitive colleges.
Based on our analysis of the numbers and our understanding of the landscape, we believe that the last point has had the biggest impact. Despite the increase in top ACT scores, the number of top scoring SAT testers did not decline. So where did the students come from? The first three factors don’t do enough to explain the strength and rapidity of the change.
State-funded testing clearly played a large role, but it does not explain the hyper-growth at the high end. Universal testing has little impact on the absolute number of top scoring students in a state, since those students are already taking admission tests. What it did do was generate equal consideration of the SAT and ACT among students and catalyze the trend toward dual testing.
If ACT were simply taking market share away from the SAT, we would expect to see decreases in SAT testing (the growth in college attendance is moderate). Yet at each score band, we see the SAT gaining students over the last decade.
[Table: SAT takers by score range, 2006 vs. 2016]
We presented the chart of ACT’s percentage gains earlier in this post. The table below shows how those gains are reflected in student numbers.
[Table: ACT takers by score range, 2006 vs. 2016]
For scores that would have been approximately 80th percentile and above in 2006, there were 240,000 SAT and ACT takers. If we look at equivalent scores from 2016, 404,000 students achieved those scores. That’s a 68% increase over a period when the number of high school graduates grew by less than 5%. Returning to the baseball metaphor, students batted left-handed and right-handed in order to find their best swing. These dual testers now regularly sample both exams. Nationwide, we estimate dual testing at 35-45% among high scoring students. At the most competitive independent and public schools, we see dual testing rates closer to 65-75%. For the class of 2017, those numbers have been tempered by many students’ outright avoidance of the new SAT. The long-term trend, though, is likely to stay. Only a handful of colleges require students to submit all SAT and ACT scores taken, so families feel empowered to experiment. Score Choice and superscoring policies generate further enthusiasm for dual testing.
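The 68% figure is straightforward arithmetic on the counts above:

```python
# High scorers (roughly 80th percentile and above) across SAT and ACT combined.
high_scorers_2006 = 240_000
high_scorers_2016 = 404_000

growth = (high_scorers_2016 - high_scorers_2006) / high_scorers_2006
print(f"{growth:.0%}")   # 68%
```

Against a graduate population that grew less than 5%, a 68% jump in high scorers can only come from the same students showing up in both testing pools.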
Over-testing, which dissipates a student’s energy across multiple exams, is a concern that we communicate to families. Choosing the most appropriate exam is important, but game day situations can be replicated without official testing. Taking released exams under proctored conditions can give an accurate read of a student’s strengths. The PreACT, Aspire, and PSAT provide additional data points. Still, many families cannot resist trying both tests officially.
Laggard to Leader
Both the ACT and SAT have made gains, but the scale of the gains is far from equal. Among students scoring 33-36 on the ACT (comparable to 2140-2400 on the old SAT), for example, ACT went from a large disadvantage to a modest advantage.
How similar are SAT and ACT takers?
A tempting and common mistake students make is mixing and matching percentiles across exams in order to judge results. PSAT percentiles are not the same as SAT percentiles. SAT percentiles are not the same as ACT percentiles. Nothing is the same as Subject Test percentiles. These alignments fail because the tests are taken by different populations. And even within the same exam, shifting populations make comparisons risky over time. For example, as recently as 2008, a 32 Composite on the ACT was the 99th percentile. A student must now get a 34 to reach the 99th percentile. This is a concept that is hard to accept: the meaning of a 34 didn’t change, only the group of testers did.
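A toy example (with made-up scores, not real ACT data) shows why a fixed score’s percentile falls when stronger testers join the pool, even though the test itself is unchanged:

```python
def percentile(pool, score):
    """Percent of testers in `pool` scoring below `score`."""
    return 100 * sum(s < score for s in pool) / len(pool)

# Hypothetical pools: the test and the score of 32 never change.
pool_then = [20] * 90 + [32] * 10               # few top scorers
pool_now  = [20] * 90 + [32] * 10 + [34] * 20   # strong testers join the pool

print(percentile(pool_then, 32))  # 90.0 -- a 32 once beat 90% of testers
print(percentile(pool_now, 32))   # 75.0 -- same score, lower percentile
```

The score’s meaning as a measurement is constant; only its rank within the current pool moves.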
An interesting offshoot of ACT’s gains is that percentiles for above average students are closer to comparable SAT percentiles than they were in the past. Below is a table showing how ACT test taker figures stacked up against SAT figures for 2006 and 2016.
[Table: ACT takers as a % of SAT takers, by score range, 2006 and 2016]
The differences in 2006 were highly skewed. While the tests had parity at scores 20 or below (600-1440 old SAT), ACT had far fewer testers in the upper ranges. By 2016, two things had happened: 1) ACT led in student numbers at each range, and 2) the differences between ranges had greatly narrowed. This temporarily makes SAT and ACT percentiles roughly comparable for above average scores. Unfortunately, the reordering of testing patterns with the new SAT will likely make this quick comparison risky again. We still recommend the use of concordance tables when comparing scores (see http://www.compassprep.com/comparing-act-and-new-sat-scores/).
Percentiles can also be misleading because they don’t reflect a student’s fellow applicants at Grinnell or San Diego State or UCLA or Brown. Comparing one’s absolute scores to the 25th-75th range for first year students is a better method for assessing how “good” one’s scores are. [An even truer measure is how one stacks up in the range of applicants and range of admitted students, but most colleges only report these figures for students who actually enroll.]
In prior years, Compass would have advocated against directly comparing percentiles for the two exams. Concordance tables are still the most reliable way of making comparisons, but the percentile shortcut is now far more accurate than it was even a few years ago. The test taking pools are almost identical at the top of the score range.