When your testing brand is built on staying solid and unchanged, how do you reinvent yourself? That question has weighed heavily on ACT executives in recent years. On July 15th, CEO Janet Godwin announced the first major overhaul of the ACT since 1989. “Innovation” was mentioned 5 times in 10 paragraphs, but it is difficult to read the changes as anything but a direct response to the digital SAT. The ACT needed to get shorter and friendlier. Fast.
In a staggered rollout spanning much of 2025 and into 2026, the “core” ACT will be trimmed to just over two hours by reducing the length of the English, reading, and math sections and by bumping science to “optional.” The number of questions in the core sections will be reduced by 25% to give students more time per question. Compass covers the essential changes and timeline objectively in another post here, which will be kept updated as more information emerges. This piece offers a more subjective viewpoint.
[Compass will refer to the shorter test as the Core ACT to distinguish it from the Classic exam. The composite score (1-36) of the new test’s 3 sections will be referred to as the Core Composite to distinguish it from the Classic Composite of all 4 subjects. Some of the information here is triangulated speculation from ACT’s announcement, ACT’s June pilot, and test design precedents.]
College Board has — albeit not always willingly and with varying levels of success — dramatically overhauled the SAT on multiple occasions. ACT has been hemmed in by reputation and economics. It lacks the scale of College Board. Its test taker numbers have been in decline, and it is increasingly dependent on state contracts.
Weird Science
The science test is a net negative for ACT in the current landscape. It would be the perennial winner of any Student Choice Award for “Least Favorite Section.” It adds 35 minutes to a test that is already longer than the SAT. And there is scant evidence that colleges are particularly hungry for the information that it provides. The SAT has, after all, survived for more than 100 years without a science test (well, there were the Achievement/SAT II/SAT Subject Tests).
So why not just ax science entirely? In short: state contracts, reputation, and revenue. Many states use the ACT as an exit exam or summative assessment, and ACT has long touted its alignment with state standards. Contracts also have lengthy sales cycles. Eliminating science would jeopardize ACT’s ability to fulfill current contracts and would potentially undermine its sales pitches for new ones.
We’ve seen this hesitancy play out before where the stakes were even lower: ACT Writing. Writing has been abandoned by every college in the country but one (Martin Luther College remains the ride or die). Yet ACT continues to offer it as an optional part of both school day and national testing to hundreds of thousands of students. In fact, when students register for the ACT, Writing is included by default. ACT has been reluctant to forgo the added income.
Are choice and flexibility always good?
Eliminating science is even thornier, because it has been woven into every ACT Composite score for 35 years. ACT is hoping that it can transition from Classic to Core by keeping science around as an option. The argument is that students will have added flexibility (mentioned 4 times in 10 paragraphs). In college admission testing, flexibility and choice often come with their evil twins, confusion and anxiety. Should I take the new ACT or just opt for the SAT? Should I take science? If I did poorly on science the first time, should I skip it the second time? Will colleges allow me to superscore across Core and Core+Science administrations? Across Classic and Core? Will they allow me to suppress my science score? What if I am applying to a STEM program? What does the ACT superscore report even include now? What materials should I use? What practice tests should I take?
A decision about taking the ACT is rarely made in a vacuum. There is often a comparison to the SAT. “On which test will I do better?” The problem now is how “better” is even measured.
From a dean of admission’s perspective, there is a similar cluster of concerns. If my models are based on the Classic Composite, how am I supposed to use the Core Composite? If I require science, am I undermining our test optional philosophy? Am I eliminating potential applicants? Should I require it for some majors and not others? Do I favor students who do well on an optional section? Do I stipulate that we will ignore science across the board? Is ACT going to provide a concordance of Classic to Core? Are ACT and SAT going to work together to develop a new concordance?
Compass’s Crystal Ball
If we look 7 to 10 years ahead, we see an ACT science test that bumps along on the strength of some state contracts and a smattering of colleges that still find it valuable. Natural selection tends to weed out optional items that simply add complexity and cost. ACT will also find itself needing to develop new science tests for a shrinking user base.
The question, though, is “What happens in the next 1 to 3 years?” Unfortunately, colleges don’t have a required deadline to establish new policies. Three thousand colleges don’t work in unison. Some schools will make a pronouncement on the new exam. Some will update their policies without accounting for all of the permutations. Some will shrug. Some will fall back on the dreaded “applicants are encouraged to submit the scores that best reflect their abilities and achievements.”
Given the slow rollout, Compass strongly recommends that students in the class of 2026 continue to take science if they are taking the ACT and continue to evaluate the ACT versus SAT decision with science in mind. Students in the class of 2027 should have considerably more information by the time decisions need to be made.
Getting Specific
Initial announcements of test overhauls tend to be long on sweeping generalizations and short on specifics and mechanics. Janet Godwin’s press release from July 15th follows that pattern. The necessary follow-up is an actual test specification. Beyond its skeleton, what does the new test look like? How much shorter are the passages? How many are there? Do they change in substance or content? Are math topics being trimmed? The Assessment Framework for the Digital SAT Suite runs to almost 200 pages. The Technical Manual for the ACT — a slightly different document for the current ACT — is just as long. And a test specification is only the beginning, as ACT also needs to address the many implementation and usage details. It also needs to address reliability and predictive validity. [Validity can be thought of as how well a test does what it sets out to do (e.g., measure skills or predict college GPA). Reliability is how consistently the test performs. Reliability is a necessary but not sufficient condition for validity.]
It is hard not to view ACT’s announcement as a sales and marketing decision rather than a psychometric one. The SAT had pulled ahead of the ACT even before the digital SAT provided an attractively pared-down option for students. ACT could not continue to match a 2-hour exam with a 3-hour exam, and it could not avoid the criticism that it was asking students to do too many questions in too little time. Science was the odd test out, but the Core ACT also required trimming in other areas. How will that be accomplished and with what trade-offs?
Linear versus Adaptive
One way of preserving reliability on a shorter exam is to make it adaptive. College Board took this approach with the digital SAT and created a section adaptive test. If a student does well on an initial SAT Math stage, for example, the second stage will be harder. Students’ ability levels can be targeted more quickly by giving them the most relevant questions.
The current ACT, by contrast, is a linear test. Students work from the first problem to the last without any changes based on performance. ACT has not said definitively that the online Core ACT will be linear, but all signs point in that direction. ACT previously indicated that the June pilot was linear. It has also unofficially confirmed that the paper-and-pencil exam will move to the shorter Core format by the September 2025 national administration. Since offering an adaptive paper-and-pencil exam is not feasible, ACT clearly feels comfortable sticking with linear testing.
So how will ACT retain reliability with the new Core test?
One possibility is that it won’t. It may choose to sacrifice a small amount of reliability in order to make a shorter exam. “We’ve got reliability to spare!” The change may not be significant enough to matter.
An alternate possibility is that it has figured out how to optimize the amount of information gleaned from each question. Extraneous information from passages can be trimmed. Item types that can be processed more quickly can be emphasized. Items that prove redundant can be eliminated. We saw some of this with the digital SAT. Gone were the grand 2014 statements about students engaging deeply with passages from great thinkers. Instead, the digital SAT employs a get-it-done approach. ACT will need to apply a similar level of ruthlessness to pare its length.
No amount of wizardry, though, allows a test writer to test as many topics with the same level of thoroughness on a shorter, linear exam.
ACT has long prided itself on being academically aligned. It will presumably need to reduce the number of standards that it tests or provide less differentiation between students on each standard. For example, can the test make do with two trigonometry questions rather than three? Three is better than two, but since the ACT does not depend on producing a trig-specific score, it may not be essential. Moving from 60 math items to 45 will require these sorts of trade-offs. How open will ACT be about calling out these trade-offs?
Speededness versus Power
When an exam is designed to evaluate a student’s ability to complete a task quickly, it is considered a speeded test. When an exam is entirely about knowledge, it is a power test. Most standardized tests fall somewhere in between. College Board and ACT have long produced research to show that their tests are not significantly speeded. Students have long produced anguished sighs showing that the tests are. For example, ACT reports in its technical manual that 95% to 97% of students finish a section (defined as answering the last 5 questions). But the ACT has no guessing penalty. Students learn to fill in an answer even if they have not read an item. Completion rate is a poor measure of speededness. It’s not surprising that ACT has seen lower completion rates on its online exam (where you can’t just fill in bubbles when the proctor isn’t looking).
If nothing else, College Board and ACT have felt pressure to address the perception of speededness. Often, that perception revolves around the time per question. More time per item does not automatically make a test less speeded (unless the items themselves are unchanged). Readers old enough to have taken an SAT with antonyms or analogies know how many questions could be answered quickly. College Board, though, has regularly marketed to students the added time per question on the digital SAT. ACT could no longer afford to ignore that metric, so it is providing more time per question on the Core ACT.
Test Time and Anchors
Test writers use what are known as anchor items to test out new problems and to link separate test forms. In some cases, these items are sprinkled among scored questions. An alternative solution is to have an additional section where the anchor items are placed. Test designers label these internal and external anchors, respectively.
College Board and ACT long expected students to take an additional “experimental” section, which lengthened the testing day. College Board switched to internal anchors with the digital SAT. ACT will apparently do the same with the online Core ACT. Every additional internal anchor, however, reduces the number of scored items. The number of math items is expected to go from 60 to 45, for example. If 5 were to be used as anchors, then only 40 questions would contribute to a student’s score.
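The linking role that anchors play can be sketched with a toy example of linear linking through common items. This is a generic psychometric technique, not necessarily what ACT uses, and every number below is invented for illustration:

```python
import statistics as st

# Toy sketch of linking two test forms through common (anchor) items.
# Because both groups answered the same anchor questions, differences in
# anchor performance let us place Form B scores on Form A's scale.
anchor_a = [6, 7, 8, 8, 9, 10, 11]   # anchor scores, Form A group (invented)
anchor_b = [5, 6, 7, 7, 8, 9, 10]    # anchor scores, Form B group (invented)

mu_a, sd_a = st.mean(anchor_a), st.pstdev(anchor_a)
mu_b, sd_b = st.mean(anchor_b), st.pstdev(anchor_b)

def link_b_to_a(score: float) -> float:
    """Place a Form B score on Form A's scale via the anchor statistics."""
    return mu_a + (sd_a / sd_b) * (score - mu_b)

# Here Group B scored one point lower on every anchor item, so the link
# simply shifts Form B scores up by one.
print(round(link_b_to_a(7.0), 6))    # prints 8.0
```

Real operational equating (Tucker, chained, or IRT-based methods) is considerably more involved, but the principle is the same: the shared anchor items are what make scores from different forms comparable.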
There is a misconception that external anchors are necessarily weaker because students are unmotivated. That can be true, but psychometricians have bags of tricks to deal with it. For one, they can randomly assign students to either new sections or known sections. They also have tools to identify unmotivated students through uncharacteristic performance. Switching to internal anchors doesn’t mean that ACT problems will get better. In the long run, though, it makes for a less ungainly structure and experience.
Will the new test be easier or harder?
This is one of the first questions Compass receives when a test change is announced. If the test designers at ACT accomplish their task, the answer is “neither” (despite early prognostications that the new test will produce inflated scores). An exam may feel less speeded. The passages may move more quickly. Ultimately, though, the exams should be equated to produce consistent section scores across Classic and Core. We saw College Board manage the same concerns with the digital SAT transition. A key difference is that the SAT maintained consistency for the 400-1600 total score. The Classic Composite will not be identical to the Core Composite. The shift will produce winners and losers. Will ACT provide research to show what groups may be impacted and how?
Will it be just as reliable? Just as valid?
A shorter test is always at a disadvantage when it comes to reliability. The question really becomes how well ACT is able to manage that disadvantage. Can it take advantage of research improvements? Are there content changes that will help? Does providing more time per question alleviate some concerns? For example, the median ACT Math score is only 17. The more difficult questions on the ACT do little to help accurately score a student with a 17. Could reducing the overload of 60 questions actually help students focus on the ones that matter? Those are interesting research questions that ACT may have addressed. Where a trade-off is unavoidable is at the top of the scale. With fewer questions to identify a student’s “true score,” chance plays a larger role. Getting a 34 Math may be equally hard on the Core ACT (i.e., approximately the same number of students will achieve that score), but it will be less consistent. The Compass crystal ball that sees science disappearing in the future also sees adaptive testing come to the ACT sooner or later. Paper-and-pencil testing will become increasingly anachronistic in the 2030s.
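The reliability cost of shortening a test can be approximated with the Spearman-Brown prophecy formula, a standard psychometric result. The starting reliability of 0.90 below is a hypothetical value for illustration, not an ACT figure:

```python
# Spearman-Brown prophecy formula: predicted reliability when a test's
# length is multiplied by a factor k. The 0.90 starting reliability is
# assumed for illustration only; ACT's actual figures may differ.

def shortened_reliability(rho: float, k: float) -> float:
    """Reliability after multiplying test length by k (Spearman-Brown)."""
    return k * rho / (1 + (k - 1) * rho)

classic_items, core_items = 60, 45
k = core_items / classic_items          # 0.75 of the original length

rho_classic = 0.90                      # hypothetical 60-item reliability
rho_core = shortened_reliability(rho_classic, k)
print(round(rho_core, 3))               # prints 0.871
```

In this sketch, cutting a quarter of the items costs only a few hundredths of reliability, which suggests why ACT might judge the trade acceptable. The formula assumes the dropped items are interchangeable with those that remain, which is exactly the kind of content decision ACT’s test designers will have to defend.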
There are many measures of testing validity, but predictive validity is one of the most important for an admission test. Will the Core ACT predict college performance as well as the Classic ACT? If we assume — and I do — that ACT will do a good job in maintaining consistency between Classic English, reading, and math and Core English, reading, and math, we are still faced with the problem that the Core Composite has only three components rather than four.
Predictive validity is usually improved by adding reliable measures, but the gains diminish. Colleges may find that English, reading, and math are sufficient. For example, a 2011 study by Stanford and University of Chicago researchers showed that Reading and Science added little to predictive validity. The study was limited, however, to Ohio students choosing to attend colleges within Ohio. There will be a lot of late nights in institutional research offices at colleges that want to examine their own history with Composite scores versus section scores.
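The diminishing-returns point can be illustrated with a small simulation on synthetic data (everything here is invented; none of it comes from ACT or any college study): when three section scores already capture a shared ability factor, a correlated fourth section adds little to the variance explained in a simulated GPA.

```python
import numpy as np

# Synthetic illustration of diminishing incremental validity. All data
# are simulated; nothing here comes from ACT or any real study.
rng = np.random.default_rng(0)
n = 5000

ability = rng.normal(size=n)                     # shared latent factor
noise = lambda: rng.normal(scale=0.6, size=n)    # section-specific noise
english, reading, math_s, science = (ability + noise() for _ in range(4))
gpa = ability + rng.normal(scale=1.0, size=n)    # noisy outcome

def r_squared(predictors, y):
    """In-sample R^2 of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_three = r_squared([english, reading, math_s], gpa)
r2_four = r_squared([english, reading, math_s, science], gpa)
print(round(r2_four - r2_three, 3))   # a small increment
```

Because all four sections lean on the same underlying ability, the fourth predictor contributes only a sliver of additional explained variance. A college whose own data look like this would have a hard time justifying a science requirement on predictive grounds alone.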
It will be tempting for colleges to take the conservative path and require Core+Science. That would allow them to treat scores as largely interchangeable (ACT should still endeavor to prove this via a pilot study). The good news is that ACT researchers and colleges already have score data that can be used to determine how essential the science score is. Although much of the ACT literature reports on composite scores, the individual components are almost always available. This is research that we hope is forthcoming from ACT.
This is another place, though, where the not in/not out status of science makes things rocky for ACT. If the test were being eliminated entirely, then the research could focus on how well the Core ACT functions on its own. “See, the Core Composite is just as good (almost?) as the Classic Composite!” If that statement is true, then why should the science test be kept around? Summative testing. Revenue. Choice. None of those is a convincing argument for a college, unless the science test shows added predictive validity within its student population. And if science is a help to a significant number of colleges, is ACT doing a disservice to students by making it seem optional?
Rollout and Confusion
The staggered rollout of the Core ACT across national online, national paper-and-pencil, school day online, and school day paper-and-pencil is going to leave students and administrators frazzled and colleges perplexed. Rather than establish a single switchover date for the Core ACT, the changes are going to hit different testing paths at different times. Online test takers should be able to take Core in April 2025. Unless they are taking a school day exam, in which case they won’t encounter it until spring 2026 (ACT defines the February school day window as “spring”). Paper test takers will see the shorter ACT arrive in September 2025. Unless they are school day testers, in which case they will need to wait until spring 2026.
Depending on circumstances (district testing) or choice (online or paper-and-pencil), students could end up taking one of 12 different test variations during October 2025!
- Paper Classic
- Paper Classic+Writing
- Online Classic
- Online Classic+Writing
- Online Core
- Online Core+Science
- Online Core+Writing
- Online Core+Science+Writing
- Paper Core
- Paper Core+Science
- Paper Core+Writing
- Paper Core+Science+Writing
Administration and Cost
College Board and ACT have both wrestled with how to maintain sufficient access to testing on national dates. ACT is currently at a disadvantage in the rollout of computer-based testing. Its solution depends on software running on a local server and on school-controlled computers. The digital SAT allows students to bring their own devices. The online ACT, in contrast, is limited by how many computers a national site is willing to make available to test takers. The headaches for national sites are going to increase and potentially push even more to close their doors.
A shorter test should cost less to develop and administer. Is ACT prepared to reduce the cost of the Core test? Is Core+Science going to be a stealth price increase for students? Has shrinkflation made its way to standardized testing? Do ACT’s private equity investors have a say in the matter?
Score Comparison
The value of an admission test score is in its comparability. Admission decisions can be informed from prior class years. Students can be compared no matter where or when they took a test. ACT and SAT takers can be evaluated fairly. ACT has to act carefully, then, to preserve comparability and confidence. It has a lot to accomplish over the coming months.
It’s incorrect to assume that a Core Composite is the same as the Classic Composite. They are different measures. Students who score well in English, reading, and math are not necessarily the same ones who score well in science. What kind of guidance will colleges receive? Some state colleges, for example, use Composite scores to determine eligibility for admission or for scholarships. Will those schools make Classic Composite and Core Composite interchangeable? Will those schools require everyone to take Core+Science? If they switch to Core Composite, which groups of students does it benefit and which will it harm? Unfortunately, neutrality is not an option when eliminating an entire subject.
When students are choosing whether to send ACT scores or SAT scores, what standard should they be using? Plugging in Core Composite to the current ACT to SAT concordance is psychometrically unsound, but if colleges are doing the same thing, then it might be sufficient. But how is a student to know what colleges are doing? And will there be concordances for both Core Composite and Core+Science scores?
Students in the 2025–2026 maelstrom are likely going to have differing Composite scores on their transcripts. They may have tests with and without science. One option would be for colleges to remove science from the equation entirely. When talking about concordance, most people mean taking an SAT Total score and linking it to an ACT Composite score or vice versa. But other concordances already exist. For example, an ACT Math score can be linked to an SAT Math score via the tables that were produced during the 2018 concordance study. The combined ACT English and Reading scores can be linked to an SAT ERW score. Would ACT recommend such a change? Would colleges listen? It is a significant conceptual shift, but at least the data are available.
PreACT
ACT has not yet made any announcement about the PreACT, which is vertically scaled with the ACT. College Board often uses the fall PSAT as a trial run of changes. The PreACT schedule won’t provide ACT the same opportunity. The upcoming testing window is March-April 2025, so it is likely that ACT will stick with Classic until 2026. Unfortunately, the 8th, 9th, and 10th graders who take the Classic PreACT in spring 2025 will end up taking the Core ACT by the time they are preparing for college. To be determined is whether or not the PreACT in 2026 will continue to incorporate science.
Delivering for Students
ACT should be marshaling new practice test materials for students. It currently has the defensible position that students can’t even sign up for the Core ACT (registration is only available through February 2025). But if a test rollout is only 9 months away, the responsible solution is to have information and practice tests available for students in the coming months.
ACT’s handling of the June pilot was not encouraging. It intentionally withheld specific information from test takers (yes, it did provide a general caveat). Was it the culmination of a well-designed research study or a rushed premiere?
Throughout 2025, some students will be taking the Core ACT and some will be taking Classic. It’s an unprecedented overlap. Will ACT provide materials for both tests? Will students know what material is relevant and what isn’t? Will a 2025-2026 edition of The Official ACT Prep Guide handle Classic or Core? Will its test designers at least revise prior released form codes to show how the passage shortening and question trimming will really look? What online resources will be made available?
A related issue is whether or not ACT will commit to Test Information Release (TIR) with the new exam. College Board used the excuse of the new digital SAT to eliminate the Question-and-Answer Service (QAS). A digital test doesn’t leave the same physical trail as a paper-and-pencil exam, so it is more common for digital items to be reused.
ACT has a lot to deliver to students, high schools, and colleges. Unfortunately, Godwin’s initial press release did not set out a delivery timeline. Let’s hope that ACT quickly fills the information vacuum.