Occasionally a student’s SAT or ACT scores will stay the same or, in rarer cases, decrease from one test to the next. Why? And how can we avoid flat or lower scores?
In some cases, low-quality prep is to blame for a lack of score increase. For our students, however, who see an average increase of 180 points on the SAT and 6 points on the ACT and work with exceptionally well-trained tutors, we can rule that explanation out. Here are the other reasons a flat or lower score can occur:
A bad day
We all know that athletes have good days and bad days. Even performance in very straightforward sports, free from the influence of direct competition, varies from one day to the next. One might presume, for example, that runners would post nearly identical times every race (allowing for gradual improvement over time) when racing conditions are held constant. Yet even top runners vary roughly 2% in performance from one race to the next; on the ACT, a 2% variance equates to almost a full composite point on each test. Male, younger, and slower runners vary the most in performance from one race to the next.¹
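For the curious, the composite-point figure works out like this. This is a quick back-of-the-envelope sketch; the only assumption beyond the text above is the ACT's 36-point composite scale.

```python
# Back-of-the-envelope: what a 2% day-to-day performance swing
# (the variance observed even in top runners) would mean on the
# ACT's 36-point composite scale.
ACT_COMPOSITE_MAX = 36   # top composite score on the ACT
day_to_day_swing = 0.02  # ~2% race-to-race variance in top runners

swing_in_points = ACT_COMPOSITE_MAX * day_to_day_swing
print(f"{swing_in_points:.2f} composite points")  # prints: 0.72 composite points
```

In other words, a swing of about 0.72 points, which rounds to "almost a full composite point" on any given sitting.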
We also know from personal experience that some days we are simply mentally sharper than others. The same is true for students on standardized tests: some days they will do better than others (and some days will be unusually good days on which a student outperforms their norm).
A bad test
By a “bad” test, I simply mean a test that does not fit a student’s knowledge base as well. Unlike running, where every track is effectively the same, the SAT and ACT vary in content from one test to the next (if they didn’t, each test would be so predictable that students could prep for it easily and everyone would earn a perfect score). To avoid complete predictability, the Math section of the ACT alone draws on 140 different topics. Some topics, such as exponents and percentages, are tested on every exam; but once students have learned the frequently tested topics, they must learn increasingly infrequent (and difficult) topics to keep raising their score: first topics tested on 1 in 2 tests, then those tested on 1 in 4, then those on 1 in 10, and so on. Because of this variance in content from one test to the next, some tests will have content that suits a student significantly better or worse than others, leading to unexpectedly large increases or, sometimes, none at all.
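The content-luck effect can be sketched with a toy simulation. Everything here except the appearance rates (every test, 1 in 2, 1 in 10) is an assumption for illustration: the topic counts and the student's mastery are hypothetical, not real ACT data.

```python
import random

random.seed(0)

# Illustrative sketch (not real ACT data): even with fixed knowledge,
# a student's raw result moves from test to test purely because of
# which topics happen to appear.
topic_pool = (
    [("core", 1.0)] * 30    # topics on every test; student has mastered these
    + [("half", 0.5)] * 10  # topics on ~1 in 2 tests; student knows these too
    + [("rare", 0.1)] * 20  # topics on ~1 in 10 tests; student has NOT learned these
)
known = {"core", "half"}

def questions_answerable(pool, known_topics):
    """Count questions the student can answer on one randomly drawn test."""
    return sum(1 for topic, p in pool
               if random.random() < p and topic in known_topics)

scores = [questions_answerable(topic_pool, known) for _ in range(5)]
print(scores)  # five simulated sittings; the count drifts between 30 and 40
```

The student's knowledge never changes across the five simulated sittings, yet the number of answerable questions drifts, which is exactly the "bad test" (or lucky test) effect described above.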
Every SAT and ACT is “equated” to ensure that a harder test has a more generous curve and vice versa, so that students’ scores do not fluctuate simply because they got lucky or unlucky with the difficulty of the test they took. However, the equating process is not perfect. Krista Mattern, Senior Director of Research at ACT, states in an interview with Tests and the Rest, “The tests are very reliable, but they’re not perfectly reliable. That means that there’s some measurement error, so, every time you take a section, your score may be a little higher than your true ability level or may be a little lower.” For example, historically, the September ACT has had one of the most generous curves of the year, closely followed by the October test. This year (2019), however, the October test was delayed and went through an extra equating process; it ended up with an abysmal curve (almost every one of my personal students went down in score from September to October, but saw a jump on the December test and kept increasing thereafter). As the test makers themselves admit, the tests are very reliable but not perfectly reliable. Students can get lucky or unlucky with a better or worse curve, so their score could go down through no fault of their own.
Taking breaks

As we wrote about here, the detriment of taking breaks is surprisingly large: students who do not take breaks achieve 50% higher score increases (6 points rather than 4 on the ACT) than those who do. Breaks between tests can therefore cause students to yo-yo: they improve, then regress to where they started, improve again, then regress again, which makes it look as though they were not improving at all.
The solution

Most problems in life have the same solution: hard work, and the SAT and ACT are no exception. The more students practice, the less their scores vary. In foot races, experienced runners post more predictable times; the same holds on the SAT and ACT, where experienced test takers are less likely to panic and more likely, through practice, to execute consistently. And the more topics a student learns, the less their scores vary, because they know more and more of even the infrequently tested topics. Finally, students should plan to take the tests multiple times: if they have a bad day, a test that does not fit their content knowledge, or an unlucky curve, they will have other opportunities to catch a good day, a well-fitting test, and a generous curve.
The key to success in life is hard work, and the same is true on standardized tests. With practice and quality prep, even a student who gets an unlucky test will succeed in the long run, as hard workers always do.
A final factor: air pollution

There is quite compelling data suggesting that air pollution, specifically the particulate matter concentration on a given day, can significantly affect cognitive ability and test performance. That probably sounds too crazy to be true, but the evidence is strong enough that the Freakonomics podcast recently devoted an entire episode to it. If the data is to be believed, air pollution shifted percentile performance by, on average, half a standard deviation. In plain English, applied to the SAT and ACT: the random chance of particulate matter concentration on a given day could alone account for about +/- 1 point on the ACT and about +/- 100 points on the SAT.
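The SAT side of that translation can be checked with one line of arithmetic. The standard deviation here is an assumption on my part: recent College Board summary reports put the total-score SD at roughly 200 points.

```python
# Rough translation of "half a standard deviation" into SAT points.
# The SD value is an assumption (roughly in line with College Board
# summary data), not a figure from the study itself.
SAT_TOTAL_SD = 200   # assumed, approximate total-score SD in points
effect_in_sds = 0.5  # effect size cited in the discussion above

effect_in_points = effect_in_sds * SAT_TOTAL_SD
print(f"~{effect_in_points:.0f} SAT points")  # prints: ~100 SAT points
```

Under that assumption, half a standard deviation lands right at the roughly 100-point swing mentioned above.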