
ACT Abandons Writing Scale

ACT Essay scaled score range now 2-12 again, no longer 1-36

Just nine months ago, ACT debuted a radically overhauled and ambitious new essay assignment and scoring methodology that moved the reported score range from 2-12 to 1-36. This incautious decision kicked off the most tumultuous year in memory for ACT and its rival, College Board. Six ACT test dates and more than one million essay-takers later, ACT has finally acknowledged that by stretching and scaling the Writing to 1-36 they have a “perceptual problem” and have “created confusion.” [We give ACT a 36 for understatement.] The 1-36 range implied a precision and relevance to the score that was never supported by statistics or the anticipated use by colleges. Students and their advisers understandably were baffled by seemingly discrepant essay scores much lower than scores on other parts of the test. ACT’s solution for this perceptual problem is a switch back to a 2-12 range, effective as of the next test date in September.

ACT’s responsiveness to criticism is welcome, but the underlying details of the response are problematic. Note that ACT has not reverted to the 2-12 scoring scheme of June 2015 and earlier, in which two readers would each simply give the essay one score from 1-6. The much more complex scheme in place since September 2015 is still intact: Each essay receives four domain scores on a scale of 2-12 (1-6 from two readers), and those four domain scores add up to a range of 8-48. But instead of placing that raw sum of 8-48 on a scaled range of 1-36, ACT will substitute averaging and rounding for scaling. Writing scores will now be the raw sum of domain scores, divided by four and rounded up or down as needed to derive the student’s 2-12 score.
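For the numerically inclined, here is a minimal sketch of the new computation. The function name, the example scores, and the tie-breaking rule for averages ending in .5 are our own assumptions; ACT describes the rounding only as "up or down as needed."

```python
# Minimal sketch of the post-September Writing score computation described above.
# Assumption (not ACT's published spec): averages ending in .5 round up.
import math

def writing_score(reader1, reader2):
    """reader1/reader2: dicts mapping the four domains to each reader's 1-6 rating.

    Domains: Ideas and Analysis, Development and Support,
    Organization, Language Use and Conventions.
    """
    # Each domain score is the sum of the two readers' 1-6 ratings -> 2-12.
    domain_scores = [reader1[d] + reader2[d] for d in reader1]
    raw_sum = sum(domain_scores)            # range 8-48
    return math.floor(raw_sum / 4 + 0.5)    # average and round -> 2-12

# Hypothetical example: readers rate the four domains 4,4,5,3 and 5,4,4,3.
r1 = {"ideas": 4, "development": 4, "organization": 5, "language": 3}
r2 = {"ideas": 5, "development": 4, "organization": 4, "language": 3}
print(writing_score(r1, r2))  # domain scores 9, 8, 9, 6 -> raw sum 32 -> 8
```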

There are many technical and transitional problems with this approach. See a companion post here that includes the relevant percentiles and concordances and provides a lengthy FAQ on the challenges involved in comparing scores from before/during/after the brief reign of the 1-36 scale.

But the more fundamental question is why ACT seems to think their problem is only one of perception. ACT has “solved” this aspect of the problem by hiding it – decreasing the likelihood that it is noticed. We would argue that the issues facing the essay are existential in nature and that there seems to be a growing consensus that the essay in fact should not survive. Its considerable overhead simply isn’t justified against the low value it provides in the evaluation of college applicants.

ACT’s researchers are the first to admit that the Writing test has low reliability compared to the other parts of the ACT. We were reminded of this in a research paper released in January, and we’re being reminded again today. Left unanswered is the question of why a testing instrument with such imprecise measuring capability continues to exist. Reliability in this context is a statistical term describing the consistency of the results of a test, which can be described by its Standard Error of Measurement (SEM). Ideally the SEM is as low as possible, because the SEM is defined as the range above or below an attained score within which there is a 2/3 likelihood that the student’s “true” score would fall. In simpler terms, the SEM tells us how good a job a test does of consistently pegging a student’s ability within a reasonable range of precision. ACT’s research paper revealed the essay’s SEM on the 1-36 scale to be 4. (For comparison, note that the SEM for the ACT Composite score is 1.)
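For readers who want the statistical nuts and bolts: classical test theory ties the SEM directly to reliability via SEM = SD × √(1 − reliability). The sketch below uses purely illustrative standard deviations and reliability coefficients (not ACT-published figures), chosen only so the results land near the SEMs cited here, to show how lower reliability translates into a wider error band.

```python
# Classical test theory relation: SEM = SD * sqrt(1 - reliability).
# The SD and reliability values below are illustrative only, picked so the
# outputs land near the SEMs cited in this post (4 for Writing, 1 for Composite).
import math

def sem(sd, reliability):
    return sd * math.sqrt(1 - reliability)

print(round(sem(8.0, 0.75), 1))  # ~4.0 -- a low-reliability, essay-style score
print(round(sem(5.0, 0.96), 1))  # ~1.0 -- a high-reliability, composite-style score
```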

An SEM of 4 on a 36-point scale is not good. For a mid-range ACT essay score of 20, an SEM of 4 means that all you can fairly conclude is that the student’s “true” score falls somewhere between 16 and 24 (20 +/- 4). Translating to percentiles tells us that this student has a 2/3 chance of falling between the 34th and 86th percentile in the pool of testers! It’s like your English teacher telling you your score on the final exam is somewhere between a D- and a B+. If you’re wondering how such a broad range could be useful and meaningful to colleges, well, you’re not alone. ACT’s official explanation boils down to, hey, the Writing has always had this degree of variability in scoring; you just weren’t noticing. This statement is essentially accurate, if not particularly helpful. Keep in mind that nothing has been done to make the Writing test more reliable; on a scale with fewer increments its inconsistencies are simply less noticeable.
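Here is the same arithmetic as a short sketch. The band is the approximate 2/3-confidence range described above, and the two percentile figures are the ones quoted in this paragraph, not a full ACT norms table.

```python
# Turn a reported score and SEM into the ~2/3-confidence band described above,
# then map the band's endpoints to percentiles. The percentile entries are the
# two figures quoted in this post, not ACT's full norms table.

def score_band(score, sem):
    return score - sem, score + sem

PERCENTILES = {16: 34, 24: 86}  # from the example above

low, high = score_band(20, 4)
print(f"Score band: {low}-{high}")                                # Score band: 16-24
print(f"Percentile band: {PERCENTILES[low]}th-{PERCENTILES[high]}th")
# Percentile band: 34th-86th
```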

ACT’s public statements seem carefully worded to try to stave off the inevitable. At Compass we believe that the ACT essay (and the SAT essay, to be fair) is fundamentally ill-suited to be a useful factor in the college admission process. Indeed, the ACT and SAT essays are the best examples in college admission testing of the friction between face validity (testing a skill that seems worthy) and predictive validity (testing something that correlates well with, say, GPA). Yes, ACT and College Board seem to need a rated, on-demand writing task for their institutional clients – the states and large public school districts buying the test en masse for a diverse range of assessment needs. But the ACT/SAT essay experiment as an admission requirement, which dates back to 2005, has simply failed, and it’s time to give up.

A movement to do the right thing and get rid of these essays as requirements will need to be led by the colleges still mandating them. While that list totals less than 15% of all colleges, it includes some of the most prominent, including Harvard, Yale, Stanford, the UCs, and Michigan. We hope that counselors and other advocates for students will continue to call on the leaders of these institutions to, at minimum, state more clearly their understanding of the limitations of the ACT/SAT essays, the relevance of the essays in their review of applications, and their rationale for persisting in requiring the essays. The number of colleges still requiring the essays is shrinking rapidly; it can’t dwindle fast enough and reach zero soon enough.

Adam Ingersoll

Adam began his career in test prep in 1993 while at the University of Southern California, where he was a student-athlete on the basketball team, worked in the admission office, and graduated magna cum laude. Over the last three decades he has guided thousands of families to successful experiences with standardized tests and has mentored hundreds of the industry's most sought-after tutors. Adam is known nationally as a leading expert on college admission testing and is a frequent presenter at higher ed conferences, faculty development workshops, and school seminars.
