When I talk with families about what score a student can achieve within a given timeframe, I always talk in terms of probabilities. “Tom is likely to be able to improve 6 points on the ACT.” “I expect that Sarah will improve on this next test.” “I would say it’s a coin flip whether Bob’s score increases on the February test.”
Why talk in terms of probability about SAT and ACT scores? Because test-taking involves some variables that are beyond anyone’s control. To give more insight into those variables, this post describes why there is variability in test-taking performance and scores, the average probability that our students will increase in score on a given test, and how students can increase the probability of increasing their scores.
Why do scores vary from test to test?
There are three components that influence the variability in a student’s score on a given day:
- Personal performance: we all know that we have good days and bad days when it comes to performance. Even top runners who have run the same race countless times still see a 2% variation in their times, and less experienced, younger runners see even more variance than that. These are three-hour exams, longer than it takes top runners to finish a half marathon. And, annoyingly, our personal performance can also be affected by others: if there is no air conditioning in the school during the July test, so the windows are open and a basketball game is taking place outside; if the student next to you has a cold and is sniffling for literal hours on end; and so on.
- Content on that test: The SAT and ACT cycle through a bank of topics. Some topics (like percentages) are tested on every exam. Some topics (like inverse functions) might only be tested once every ten tests. Because the content that is tested varies, just by random chance some tests will fit a student better or worse than other tests, and thus scores will naturally vary from test to test.
- Imperfect scoring: It is impossible to make every SAT and ACT the exact same difficulty. So what do the test makers do to ensure that the scoring is fair? They use an equating process that makes the curve steeper when the test is easy (to bring students’ scores down so that they don’t unfairly get high scores just because their test was easier than average) and flatter when the test is hard (so that each question does not hurt a student’s score as much, since the test was simply harder than average). The equating process is very good, but it is not perfect. What is the margin of error on the tests? About +/- 40 points on the SAT and +/- 1 point on the ACT.
Here’s how the imperfect equating can play out. Let’s say that a student’s “true” ability level on the ACT is a 30 (meaning that if they took 10 tests without doing any preparation in between, the average of those 10 scores would be a 30). The student takes the September ACT and scores a 31 (1 point above their true ability level) because that test date had an advantageous curve. That is entirely expected, since roughly 1/3 of tests have an advantageous curve that helps students score above their “true” ability level. The student then preps for the October ACT and improves their true ability level to a 31 by the October test date. But, as luck would have it, the student gets one of the roughly 1/3 of tests with an unfairly steep curve that, on average, brings students’ scores below their true ability level. So the student scores a 30 (1 point below their true ability level of 31).
- True ability: 30
- Actual score: 31
- True ability: 31
- Actual score: 30
The student is likely going to be very upset and feel that their hard work was for naught: their actual score went down, even though their true ability level increased from a 30 to a 31.
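The scenario above can be sketched with a toy simulation. This is not the ACT’s actual equating model; it just assumes curve luck adds -1, 0, or +1 points to a student’s true ability with equal probability, matching the +/- 1 point margin of error described earlier.

```python
import random

# Toy model (an assumption, not the real equating process):
# observed score = true ability + curve noise of -1, 0, or +1 points.
def observed_score(true_ability: int, rng: random.Random) -> int:
    return true_ability + rng.choice([-1, 0, 1])

rng = random.Random(42)
trials = 100_000

# True ability improves from 30 (first test) to 31 (second test).
# Count how often the observed score still goes DOWN between the tests.
drops = sum(
    1
    for _ in range(trials)
    if observed_score(30, rng) > observed_score(31, rng)
)

print(f"observed score dropped in {drops / trials:.0%} of trials")
```

Under this model the observed score drops about 11% of the time (it requires a +1 curve on the first test and a -1 curve on the second, a 1-in-9 chance) even though the student genuinely improved by a full point.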
Covid gave us the best experimental cases for this: the testing calendar became extremely chaotic, and school-day tests often happened just a week apart from the additional national test dates that were added, so we had multiple students who took the ACT twice in the span of two weeks. Rarely did those students get the same composite score, and never the same score in every section on both test dates, so 100% of the time there was at least some variation in the section scores. Sometimes they got the higher score on the first test of those two weeks, sometimes on the second.
This variability is also why prep companies that give a score “guarantee” never give an actual guarantee — that is just a marketing gimmick to get people to sign up (and there are always so many hoops to jump through that no one ever qualifies for a refund). Every tutor and company knows that there is inherent variability to test scores — some of which is outside anyone’s control. One of the solutions, then, is to make sure that a student has the opportunity to take the real test multiple times just in case the first test or two do not go as planned. (You can read more here on why more test-taking chances lead to higher scores.)
Probability of increasing in score
The probability that our students will increase in knowledge and skill is 100% — every instructor we have on our team is an expert on these exams, so there is no possible way they will not impart at least some knowledge and skills, even to students who should do their homework but don’t, even to students who do not want any part of test prep and resist learning, etc.
But, increasing in knowledge and skill is not a guarantee of an increase in score on a given test.
Over 10 years, the average probability that our students will increase in score on any given test has remained quite steady:
- 80% chance a student will increase in score on a given test. Most of our students are increasing in score on most of their tests, but that also leaves a 20% chance — or 1 in 5 chance — that they will get the same score or decrease in score on a given test. That means too that if a student takes the ACT five times, we will expect that on one of those tests the student will get the same or lower score than they did previously. We’ll get to why later.
- 96% chance that a student will increase in score over the course of two tests.
- 99% chance that a student will increase in score over the course of three tests.
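Those three bullets are just the arithmetic of repeated chances. If each test independently carries an 80% chance of an increase (independence is a simplifying assumption), the chance of at least one increase across n tests is 1 minus the chance of missing every time:

```python
# Chance of at least one score increase across n tests, assuming each
# test independently has probability p_single of an increase.
def p_at_least_one_increase(p_single: float, n_tests: int) -> float:
    return 1 - (1 - p_single) ** n_tests

print(round(p_at_least_one_increase(0.80, 1), 2))  # 0.8
print(round(p_at_least_one_increase(0.80, 2), 2))  # 0.96
print(round(p_at_least_one_increase(0.80, 3), 2))  # 0.99
```

The three printed values reproduce the 80%, 96%, and 99% figures above (the last is really 99.2%, rounded down).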
So increasing in score is eventually virtually guaranteed (on average, our students increase about 1.5 points on each real test, with typically larger increases between tests at the beginning of prep and smaller increases between tests later in their prep when they have already significantly increased their score).
That still leaves 1% of our students who do not see an increase in score even though they are getting prep over the course of taking the real test three times. At that point, for that very small minority of students, we simply give them free, unlimited prep until they reach the score we would have predicted from the sessions they have already paid for. It’s very rare, but it always has a happy ending.
Note: There is considerable variance in the likelihood of increasing in score depending on a student’s specific circumstances. For instance, if a student is around a 500 on the SAT Math, then their chances of going up on the SAT Math on the next test (assuming they are our average student who does their homework, etc) is near 100%. But, if the student has a 700+ Verbal score, then their chances of going up on the SAT Verbal on the next test is around 60% (still far better than with no prep — in which case their probability would likely be much closer to 0%). The SAT Verbal section — specifically the SAT Reading subsection — is very hard to increase on and has significant variability in the scores. In general, too, the higher a student’s score, the lower the probability that they will increase on the next test (especially on the SAT because there is so little margin for error on the SAT at the high end of the scoring scale).
We also treat taking real tests as mock tests, which makes a massive difference too. For instance, when students have done prep with us for 6 or more months before taking their first real test, 100% of them improved from their initial score. It might sound, then, like all students should just do a lot of prep before taking their first real test. However, though 100% of our students who waited that long improved, they likely didn’t score as high on that first real test as they would have if it were their third real test. Test-taking, and being able to execute under pressure over a long period of time, is a skill that can be developed and honed, so those students missed that practice in real test-taking. So, yes, they are guaranteed to see an increase, but it’s optimal if students instead view the real tests as mock tests and work on developing their knowledge and skills and improving their performance over time. If we wanted to, we could have 100% of our students go up on each test, but we would be spacing out their tests more: our students would get larger increases between each real test, yet take fewer tests overall, so their eventual score still wouldn’t be as high.
How to increase the probability of a score increase
Some students don’t do their homework, see prep as a waste of time, and thus fight the process instead of seeing the tutoring as an opportunity to learn fundamental knowledge and skills that improve their college readiness, take breaks, etc. In those instances, the remedy is straightforward: do the homework and take mock tests, realize that the knowledge and content being tested are very helpful for school and life, and keep learning and progressing so that they don’t regress.
But, for students who are doing what they should, how can they increase the probability that they will go up on a given test?
Focus on knowledge and skills, not on the score
If students really focus on learning the necessary content and skills, they will, unsurprisingly, more quickly master that content and see their scores increase faster and with more consistency. The score is just a good but imperfect proxy for their knowledge and skill. If, for instance, the student above had increased their true ability from a 30 to a 33 between the September and October ACT test dates, then even though they scored a 31 on their September test, in the worst-case scenario they would likely still score a 32 on the October test, because they had moved their true ability level up so substantially that a disadvantageous curve still couldn’t hold them back from a higher score.
Focus on test-taking, not on the score
Especially the first time a student takes a test, they might experience the “fog of test day”: students lose clarity on how they need to execute and then take the real tests differently than they take practice tests. When students take practice tests, they don’t really care if they get a particular question right: they are just doing their best, and, if they don’t know an answer, they move on. That’s how they need to take the real tests, because that is how they have been practicing. But, so often, they don’t. In short, in practice, they focus on the questions and on the test. On test day, their focus is on the score they want to hit, so they are more likely to take the test differently than they did in practice, double-check their answers (which throws off their timing), etc. (Recent update: This Joe Rogan podcast episode does a phenomenal job of getting into the psychology of what happens when people focus on a target instead of on what they need to do to make the perfect shot, and why people break down under pressure. Note: I am very ambivalently linking to it because I don’t endorse the profanity and much of the episode, but its psychological analysis is fantastic on how we need to focus on the mechanics of an action, not the goal of that action, if we want to execute well.)
This ability to focus on the task itself (not on the goal which actually distracts from optimally performing the task) is not only a helpful skill to learn for test taking but also for life.
The test prep industry has a pretty bad reputation. Most of it is deserved. And most of that bad reputation comes down to lying to families that students can substantially increase their SAT and ACT scores from “tips” and “tricks.” They can’t. It has never happened. Sure, there are some tips and tricks to the test, but no one has ever significantly increased their scores from tips and tricks alone. It’s like saying someone could go from the 50th percentile of soccer players to the 90th percentile just from tips/tricks — it’s not possible. Instead, the only way to significantly increase in score is to… learn knowledge and master skills. Which, to be honest, should be obvious — that is how we get substantially better at everything else in life. But, it’s also understandable that people don’t see test prep that way because there has been so much disinformation about it. Prep that is about “tips/tricks” is the same as a “get rich quick” scheme — if it sounds too good to be true, it likely is.
So, why do I say study? Because the public largely believes the lie, told by the makers of the SAT and ACT as well, that the tests measure something that can’t be prepared for. That is also not true. You can substantially increase your score, because the tests are not measuring anything inherent. But tips/tricks are not going to get you that substantial increase; only significantly increasing your knowledge and skills will. That means the SAT and ACT should be viewed like a very extensive final exam: you can study and prepare for them, but you have to study and prepare a lot if you’re going to increase your score a lot. In the week leading up to a real test, then, students should absolutely be studying and reviewing everything they have learned from their instructor. If you just get instruction, you are likely to increase your score. If you get instruction and intensely study everything the week before the test, you are much more likely on test day to recall what you learned and are even more likely to increase your score. The SAT and ACT are regular tests, just really large tests that cover reading, mathematics, written communication, and data analysis.
If this seems unfair that there is variability in scores, that’s because it is unfair. The SAT and ACT are remarkably well-designed tests, but nothing is perfect. And, we need some common yardstick so that students who attend high schools with massive grade inflation do not get an advantage over students who have more academic achievement but who attend high schools with less grade inflation and thus have lower grades.
On its admissions website, Harvard gives a very good explanation for why standardized test scores are helpful in college admissions, especially to selective and highly selective schools:
Again, nothing is perfect, but on average SAT and ACT scores improve predictions of how well a student will do in college.
It also means that no one should assume that scores are precise enough to accurately differentiate between students with similar scores. Is there a difference in knowledge and skill between a student who gets a 20 on the ACT (the average score in the U.S.) and a student who gets a 34 (which puts them in the top 1% of test takers)? Almost definitely yes. Is there a difference in knowledge and skill, though, between a student with a 33 and another with a 34? Probably not. And, even if there is, the difference is so insignificant that college decisions should never hinge on such small differences in scores.
That said, because these tests evaluate students on fundamental knowledge and skills that can be learned and developed, students can significantly increase their scores. This was the score journey of one of my former students:
- Mock ACT in June after sophomore year: 23
- October of junior year ACT: 26
- December ACT: 28 (yay! doing well!)
- February ACT: 27 (ok, it happens)
- April ACT: 27 (well, this is very unfortunate and very painful)
- September of senior year ACT: 31
Imagine the pain between December of junior year and September before they got their results back: 10 months of prep and they had only seen their score go down in that time. Gut-wrenching for all of us. And yet, it was worth it: this student not only increased a fantastic 8 points on the ACT (and went from the 69th percentile of test takers to the 96th percentile) but also became a significantly better student in school as well. (Side note: he got into the undergraduate business program he wanted, graduated from college last year, and is now working at a dream job.)
But, sometimes it goes just like everyone would hope too. Here was the journey of another of my students:
- Mock ACT in June after sophomore year: 25
- September of junior year ACT: 27
- October ACT: 29
- December ACT: 30
- February ACT: 32
- April ACT: 32 (but superscore improved to a 33)
- June ACT: 34
That’s pretty much as good as it gets. The first student had a much rougher time than most; the second, a somewhat better time than most.
The moral of the story: students can significantly improve their scores by significantly improving their knowledge and skills, but it’s crucially important to understand the variability in scores so that we have appropriate expectations and do not get discouraged.
“I never lose. I either win or learn.” – Nelson Mandela