**New:** Watch Wes’s video breakdown of this post, or keep reading for the full analysis.
Here’s a question that matters more than most families realize: Should your kid prep for the SAT or the ACT?
It doesn’t matter to the colleges: there are concordance tables that convert one score to the other, and colleges focus on the better score. So your student’s final result is effectively the max of the two.
Which means every hour spent prepping for the wrong test is an hour wasted.
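Since colleges look at whichever score is better, the "effective result" logic above can be sketched in a few lines. This is a toy illustration only: the concordance values below are made up for the example, not the official College Board/ACT table.

```python
# Toy sketch: a student's effective result is the max of their SAT score
# and their ACT score converted onto the SAT scale via a concordance table.
# NOTE: these concordance values are ILLUSTRATIVE, not the official table.
ILLUSTRATIVE_CONCORDANCE = {30: 1370, 32: 1430, 34: 1500, 36: 1590}

def effective_sat_score(sat_score: int, act_score: int) -> int:
    """Return the student's effective result, expressed on the SAT scale."""
    act_as_sat = ILLUSTRATIVE_CONCORDANCE[act_score]
    return max(sat_score, act_as_sat)

# A student with SAT 1400 and ACT 34 (~1500 here) is judged on the 1500.
print(effective_sat_score(1400, 34))
```

The takeaway in code form: only the better test matters, which is exactly why prep hours spent on the weaker test are wasted.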
Two extremes (both wrong)
Extreme #1: Split your time 50/50.
Prep equally for both tests. The upside? Your student is guaranteed to spend half their effort on whichever test they’re better at. The downside? They’re also guaranteed to waste half their effort. Their counterpart—the kid just like yours who picked the right test from the start—will outscore them.
Extreme #2: Pick one and go all-in.
Just choose and start. The upside? If the choice is right, all the effort counts. The downside? If the choice is wrong, your kid burns time catching up—and will still end up behind where they could have been on the other test.
The real insight: picking the right test early is valuable, but only if done accurately and without wasting time.
The usual methods (& why they fall short)
Method 1: Rely on heuristics.
“Fast readers do better on the ACT.” “Math kids prefer the SAT.” You’ve heard these. They’re easy. But they’re also often wrong.
Here’s the thing: heuristics describe populations, not individuals. Yes, faster readers tend to do better on the ACT in aggregate. But the advice is only accurate within a certain range: a reader who’s a bit faster than normal may have an edge, and yet that doesn’t mean a super-fast reader will crush it. Different score bands follow different rules, and the heuristics can’t tell the difference.
Method 2: Take one of each as a practice test.
This is better. Real data. Direct comparison. But it costs eight hours up front. And you have to assume mood, sleep, and test-day randomness don’t skew the results. You also have to use official tests—homegrown versions from Kaplan or Princeton Review drift from reality, especially at the top of the score curve.
Method 3: Take a hybrid diagnostic.
Some providers (including us) have built 3–4 hour tests combining SAT and ACT sections into one Saturday morning instead of two. Same night’s sleep, same energy level, less noise in the data. This is arguably the best traditional option.
But here’s the problem all three methods share:
They measure where a student is today. Not where they’ll end up after prep.
The real question
What if your student does better on the ACT today because they read a bit faster—but their score ceiling on the SAT is actually higher because of their analytical skills? Maybe under pressure your student reads faster, but shallowly. Maybe that’s easier to train out of on the SAT, where time pressure is less intense.
What we actually need is a test that predicts which test offers the higher score ceiling at the end of prep.
Not where your student is. Where they’re headed.
Enter MindPrint
MindPrint helps us do something no other method can: predict final score ceilings.
The argument is straightforward. MindPrint has a ton of data correlating pre-test cognitive profiles with post-test final scores. Unlike heuristics, it measures the thing we actually care about and know how to leverage: the underlying cognitive factors that determine how far prep can take your student, not current performance as a proxy for final results.
One hour. No proctoring required. And in addition to test selection, the MindPrint report makes us better coaches from day one. If we see a kid with weak abstract reasoning but strong working memory, we know going in that one of our big projects will be getting them to work out loud—so we can see the fixable mistakes that have been happening invisibly for years.
Learn about how MindPrint predicts SAT/ACT score ceilings
Is it perfect? No. We don’t have a way to independently verify MindPrint’s accuracy without running our own longitudinal study. It’s a black box. That’s a real limitation.
Our anecdotal evidence, collected over years with students we know well, suggests both that MindPrint delivers what it claims and that we are good at leveraging edge-case results for better outcomes. It’s a limitation we can live with, given everything it offers.
The Bottom Line
Heuristics are fine for populations. But they’re too often misleading for an individual kid—and this is way too important an arena for a high-performing kid to be misled.
Practice tests provide data on where a student is, but not where they could go. And they waste precious time and motivation.
We want to index on score ceiling, not current scores. We also want coaching intel that makes us more effective from the jump.
That’s why MindPrint is our starting point.
Explore vs. exploit. Early exploration—one hour to get the lay of the land—pays off across the entire prep process.
That’s it. That’s the move.