
The Trembling State of College Admission Tests: Part II — Are They Getting ‘Harder’?

By Bruce Reed | November 5, 2014 (updated November 10th, 2014) | ACT, SAT

 

Part I left off pondering the idea that college admission tests can be quietly made harder for various reasons. ACT’s rising popularity has made that test in particular a target of such accusations. Here I address those myths and explain where these unfounded suspicions originate.

Has the ACT gotten harder?

This has been a bold claim in some test prep circles, and it’s a big miss. It’s understandable when students rely on what feel like finely tuned testing palates and extrapolate their own experiences to 1.85 million ACT takers. It’s troubling when experts do the same and then wrongly report it. Doing so mistakes evolutionary changes in the test for changes in difficulty, when in fact the former are required to avoid the latter. Test content has to shift, on a very gradual schedule, with the times and in response to external pressures. It’s hard to separate the difficulty one senses on certain problems from the overall test difficulty encountered by all high schoolers. That’s where psychometrics comes in.

Difficult is a loaded word. Just about anything can be described, emotionally and subjectively, as difficult. We know what students mean when they use the word, but that doesn’t cut it for test-makers. When psychometricians use the word, they are not describing an item; they are measuring it. If too few or too many students miss a question, it’s useless. If this were a footrace, we wouldn’t be measuring speed (whether with a tailwind or into a headwind) but rather how many racers each runner beat. Golf is a better analogy: lots of individuals competing on a common course (the test) whose slope (difficulty rating) has been set by previous golfers (the reference group), with everyone then rank-ordered on the leaderboard. But golfers aren’t then scaled, so that’s where the comparison ends. With standardized tests, there are scales, and every scale is a bit different from exam to exam. We all know this, and then we all sort of forget it.
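
For the technically curious, here is a minimal sketch, in Python and with entirely invented response data, of what “measuring” an item means in classical test theory: an item’s p-value is simply the proportion of test takers who answer it correctly, and items that nearly everyone gets right or nearly everyone misses do very little to separate students.

```python
# Illustrative only: hypothetical 0/1 response data, not actual ACT results.
# In classical test theory, an item's "difficulty" (its p-value) is just the
# proportion of examinees who answered it correctly.

responses = {
    "item_01": [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],  # almost everyone right
    "item_17": [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],  # mid-range: most informative
    "item_59": [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],  # almost everyone wrong
}

for item, answers in responses.items():
    p_value = sum(answers) / len(answers)
    print(f"{item}: p = {p_value:.2f}")

# Items with p near 1.0 or 0.0 barely separate test takers, which is why
# questions that are "too easy" or "too hard" are both useless on a normed test.
```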

Over the last 15 years, there has not been much variation in the raw-to-scaled conversions, especially when we look where we should be looking: where the kids are. The meaty 16-28 range is a far better indicator of how the majority of college-bound students are doing. Even in non-ideal situations where a single error dropped the scaled score to 34 (which indicates an “easy” raw test for the top 1%), the scales stabilize quickly. You don’t see multiple gaps happening (the SAT is always going to display more gapping because of its 61-point scale, but the result is the same).
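
To picture that stability, here is a toy illustration; the conversion numbers below are invented, not actual ACT tables. Each form gets its own raw-to-scaled lookup, the very top of the scale can wobble by a point or two depending on how forgiving the form is, and the meaty middle barely moves.

```python
# Hypothetical raw-to-scaled snippets for two ACT Math forms (invented numbers;
# real conversion tables come from the equating process). Only a handful of
# raw scores from each table are shown.

form_a = {60: 36, 59: 35, 58: 34, 40: 25, 39: 24, 38: 24, 37: 23}
form_b = {60: 36, 59: 34, 58: 33, 40: 25, 39: 25, 38: 24, 37: 23}  # one miss costs two points at the top

def scaled(form, raw):
    """Look up the scaled score for a raw score on a given form."""
    return form[raw]

# At the top of the scale, a single error can behave very differently...
print(scaled(form_a, 59), scaled(form_b, 59))   # 35 vs. 34
# ...while in the meaty middle the two forms are nearly identical.
print(scaled(form_a, 39), scaled(form_b, 39))   # 24 vs. 25
```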

Test prep experts tend to score at the top themselves and often limit their analysis (and their recollection of the test) to that narrow band. I regularly mention the weakness of these tests at the extreme ends, and how a test loses resolution at its tails. The makeup of the test, as students experience its difficulty, has not changed in at least the last several years. The national average has been stable. The p-value for the average student has been about where it should be on a normed test, hitting the sweet spot. It is easy to remember the really tricky problems, the ones strong test takers belabor and remain haunted by. They are terribly skewed in how much time they require for one added point. What we are all bad at doing, though, is remembering the ones we pick up between questions 1-30, let alone taking inventory of any slight changes there. A high-scoring student quickly conquers those, which is exactly why she has 2-3 minutes to burn on that polynomial division problem that pops up late.

Could the items underneath be shifting? Is there a creeping Common Core?

There is some paranoia in just about any testing discussion these days. Fears feel validated everywhere we look, even when we know we shouldn’t be dwelling on them. Yes, Common Core is playing an important role in the redesign of the SAT, although note that College Board President David Coleman has handpicked from among a range of standards in Common Core, Texas, and Virginia. That’s a different path from ACT’s, which touts that its test aligns with 100% of the Common Core standards. The College Board neither can nor wants to say that. It has intentionally given a cold shoulder, for example, to geometry. ACT, meanwhile, was already maintaining full compliance with Common Core by 2010. But the last major test overhaul was 1989. Huh? Well, ACT had a hand in writing the Common Core standards, and there’s been some squinting-until-you-see-it happening both at ACT and on the ACT ever since. Everything they do now is seen through the lens of the New SAT and Common Core.

The difficulty claim is problematic. One article recently proclaimed that an old 36 is like a new 31 (which would be like saying a 690 on the SAT is the new 800). I was further bewildered when that claim was linked to a spike in smart, well-prepped kids on the coasts discovering the ACT. That is absurdly inaccurate. It ignores that ACT scales were set based on the performance of a sample of seniors in October 1988. Californians and Iowans are all equated to that same gang (ACT did do some additional equating work in 2005 when the calculator option was added, but its testing showed that it really didn’t make a difference). If more strong test-takers take the test, we get more 34’s, 35’s, and 36’s (and indeed we’ve seen that; it’s just that a fairly stable mean disguises the outliers). What we don’t get is countervailing pressure to push the scale back down again. Why would ACT even want to do that? To preserve some sense of cohort difficulty while completely abandoning its core mission of a commonly accepted baseline? And if such a radical change were happening on the ACT, wouldn’t it totally scramble the relationship between the SAT and ACT?
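
Here is a rough sketch of why a stronger pool doesn’t drag the scale down; the model and numbers are invented for illustration and are not ACT’s actual equating procedure. Because the raw-to-scaled mapping is anchored to a fixed reference group, a stronger cohort simply earns more high scaled scores, and the mapping itself never reacts to who shows up.

```python
# Toy model of equating to a fixed reference group (all numbers invented).
# The raw-to-scaled mapping is anchored to the reference cohort and does NOT
# change when a stronger group of students sits for the test.

def to_scaled(raw, ref_mean=38.0, ref_sd=8.0):
    """Map a raw score onto a 1-36 scale anchored to a fixed reference group."""
    scaled = 21 + 6 * (raw - ref_mean) / ref_sd   # hypothetical linear anchor
    return max(1, min(36, round(scaled)))

cohort_1988_like = [30, 35, 38, 40, 42, 45, 48, 50]   # reference-like mix of raw scores
cohort_stronger  = [38, 42, 45, 48, 50, 54, 56, 58]   # more strong test takers

for name, cohort in [("reference-like", cohort_1988_like), ("stronger", cohort_stronger)]:
    scores = [to_scaled(raw) for raw in cohort]
    share_top = sum(s >= 34 for s in scores) / len(scores)
    print(name, scores, "share scoring 34+:", share_top)

# The stronger cohort produces more 34-36 results, but the mapping itself,
# and therefore the meaning of any given score, stays put.
```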

That said, the ACT has always cycled in some oddballs. They want to claim to test the law of sines or the double-angle formulas or matrix addition, but there just isn’t enough room, literally or psychometrically, on any one test. Most of any ACT is same-old-same-old. But outside of those retreads, you will see the occasional oddity, and maybe that’s happening more given the trembling state of testing these days. The former head of R&D at the College Board is now at ACT (and vice versa). These are interesting times.

There is the supposition that we are seeing more problems in context. I would contend that this is mostly paranoia. The ACT has always prided itself on putting things in context, even to the point of hilarity. Swimming pools, oil drills, hot air balloon ropes, blimp tethers, lighthouses, and car rides are just a few of the situations ACT has created for trigonometry problems. Basically, if you can make a right triangle with it, ACT will try to work it into an item at some point. But that’s all these questions are: here is a right triangle that we’ve spent too much time describing; solve for one of the missing pieces of information using SOHCAHTOA. This is not really what the standards writers were hoping for when they talked about students proving and modeling math using real-world examples. This is just an item writer’s need for variety. No matter how convoluted the background material, the first thing every good student does is redraw a simple right triangle. Actually, the diagram usually provides a nice right triangle. “The context” is gravy.
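
To see how little the dressing matters, here is a hypothetical lighthouse item (the numbers are made up) boiled down to the single line of trigonometry a good student actually does:

```python
import math

# Hypothetical "context" item: an observer stands 120 feet from the base of a
# lighthouse and measures a 35-degree angle of elevation to the top. However
# the scenario is dressed up, the work is one application of SOHCAHTOA:
#   tan(angle) = opposite / adjacent  ->  height = distance * tan(angle)

distance_ft = 120
angle_deg = 35

height_ft = distance_ft * math.tan(math.radians(angle_deg))
print(f"Lighthouse height: {height_ft:.1f} feet")   # about 84 feet
```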

We’ve heard other recent conspiracy claims, for which there is always a reasonable explanation. Maybe it’s a geometry problem with more steps, none of which is that hard. That is bread-and-butter for the ACT, because they simply do not jump into difficult math. So they put rhombuses inside of squares inside of circles inside of isosceles triangles. These are times tables with tricky triangles: annoying, sometimes time-consuming, and not without their slip-ups, but not dramatically more difficult. Higher-order polynomials appear every few ACTs. So does inverse trigonometry. Functions that require care with the order of operations are classic ACT. They love confusing functions (they also love easy functions, which is where all of the sub-32 students are getting their points). Difficult probability problems? If anything, ACT has made probability way too easy, not too hard. Common Core actually calls for much more here.
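
As a made-up example of the “confusing but not hard” function item: nothing in it is advanced, and the only trap is doing the composition in the wrong order.

```python
# Hypothetical nested-function item in the classic ACT style (invented example):
# If f(x) = 3x - 2 and g(x) = x^2 + 1, what is f(g(2))?
# Nothing here is advanced; the only trap is doing the steps out of order.

def f(x):
    return 3 * x - 2

def g(x):
    return x ** 2 + 1

print(f(g(2)))   # correct order: g(2) = 5, then f(5) = 13
print(g(f(2)))   # the tempting wrong order: f(2) = 4, then g(4) = 17
```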

So maybe ACT has just been mixing things up, or maybe they’ve self-identified a weakness in light of Common Core. But strictly speaking, no, the ACT has not suddenly gotten harder (and psst: the redesigned SAT won’t either, so don’t believe the hype!).

Bruce Reed

Bruce graduated from Colby College and has served in leadership roles in education for more than 25 years. Bruce founded our Northern California office in 2004 where he continues to serve as its hands-on leader while also guiding our national team in his broader role as Compass’ Executive Director. Bruce is recognized throughout the Bay Area and beyond as a visionary and passionate voice in the realm of teaching, testing, and educational development. His extensive experience in one-on-one test preparation, college admissions, and professional development makes him a trusted resource for parents and counselors.

5 Comments

  • Mara Patti says:

    I really like how you compared the test to a golf game; clever analogy. I also feel that with change comes a certain level of anxiety about the unknown. So I look forward to seeing the new SAT. Thanks for sharing.

  • Jon W says:

    Bruce, interesting thoughts. How many times have you taken the ACT in your career? Which ACT test administrations have you taken in the last 3 years? When I took the ACT earlier this year, I did find certain aspects of the test more difficult than the tests in the practice book provided by the ACT test creators. But you do raise valid points. Any one person’s ACT experience is certainly only anecdotal from a research perspective.

  • Bruce,

    Let me begin by saying that I read your posts with interest and admiration.

    Here, I think you’re slightly off base. The scale hasn’t changed. The test has. The simplest way to see this is to have a new student do a red book (Official Guide) test and then take a recent test. After you’ve done this for a few dozen students, you’ll have a pretty good sample size. We have done this. Also, between us, my colleagues and I take about three actual ACTs every year. We have also done the red book tests. There are new Math topics on recent tests, and the ACT Science section now requires outside knowledge on 2-3 questions on each test. Time yourself on a red book Science section (if you haven’t done it recently) and then on a 2013-15 released Science section. I think you’ll see a difference.

    Nitin
    Principal, Tutoring and Test Preparation
    Marks Education


  • Jenn says:

    I do think there are some qualitative differences between much older ACT math sections and those administered recently. They’ve added more oddball questions in recent exams that are very hard to prepare for. It’s just a way to make sure those hardest questions really mean something. If the same concepts were always tested, they’d no longer qualify as the hardest questions, because more students would know how to tackle them! I also notice some weirdness in much older sample tests that test the same concepts in back-to-back questions. I don’t think that at all reflects real tests, and I suspect they were never really administered to students. But seeing those outside the bigger context could certainly lead you to think the new tests are harder!

    Further, some tests ARE relatively harder, but the norming process ensures that those students aren’t penalized for having a more challenging test. See a few of those, and you’ll wonder whether they’re getting harder overall!
