Missouri Growth 101: Why Missouri’s Growth Model is the Best in the Country
By Collin Hitt, Ph.D.
Published On: January 24, 2025
The PRiME Center’s Executive Director, Dr. Collin Hitt, breaks down the Missouri Growth Model and answers frequently asked questions about Growth Scores.
The Starting Point: MAP Tests
Students in Missouri public schools take MAP tests: standardized tests developed through the Missouri Assessment Program. The tests are designed to align with state learning standards and are administered in grades 3–8.
A single test score is a snapshot: it tells us how much a student has learned up to that exact point in time. An eighth grade MAP test in math, for example, captures how much math a student has learned from birth to grade 8.
Test scores contain important information. But a single test score tells us next to nothing about how much a student learned over a single school year, because a single test score tells us nothing about where a student started the school year.
That’s where the idea of growth comes in. By testing students year after year, MAP testing makes it possible to examine student learning growth over time. When looking at a student’s 8th grade math scores, we should also look at their 7th grade scores to understand how much distance they covered. We should then compare that distance to the distance that other, similar students covered in the same grade and year.
Growth, in the simplest sense, could be calculated by subtracting last year’s test scores from this year’s test scores—the result of which would be the amount that students grew this year. But things aren’t that simple. Missouri MAP tests are not built to be compared to (or subtracted from) one another to calculate growth. Few tests are. And even if this were possible, we would still need additional information about whether a student’s year-to-year changes in test scores were high, average, or low.
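To make that intuition (and its limits) concrete, here is a minimal Python sketch using hypothetical scores. The subtraction it performs is exactly what the Missouri Growth Model does not do, for the reasons above:

```python
# Naive "growth" as simple subtraction, using hypothetical scores.
# Missouri MAP scales are not built to be subtracted across years,
# so this illustrates the intuition, not the state's method.
scores_last_year = {"student_a": 310, "student_b": 355}  # hypothetical 7th-grade scores
scores_this_year = {"student_a": 334, "student_b": 360}  # hypothetical 8th-grade scores

for student in scores_last_year:
    naive_growth = scores_this_year[student] - scores_last_year[student]
    print(student, naive_growth)

# Even if the scales did line up, raw differences of +24 and +5 tell us
# nothing about whether those changes are high, average, or low.
```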
The Missouri Growth Model
The Missouri Growth Model has to wrestle with the following questions:
How do we measure student learning this year, taking into account what their own test scores were last year?
How did student performance this year compare to other students who started the year in a similar position?
How much of a student’s performance was attributable to the school they attended?
How do schools and districts compare to one another on student growth?
To answer these questions, a growth model must address a number of challenges. In this brief, we walk through those challenges and detail how the Missouri Growth Model addresses each one. The result is a growth model that, on paper, can appear complicated. But complexity is necessary when dealing with a complex set of problems.
Some states have chosen growth models that favor simplicity. And they achieve simplicity by ignoring one or more of the problems below. Not Missouri.
The Missouri Growth Model—a two-step value-added model developed over years by economists at the University of Missouri—is the only model that can adequately address all of the following problems at once. And that is why, in our view, it is the best growth model in the country.
The Big Questions for Growth Models
#1. How much distance did a student cover this year?
This is the basic idea behind “growth.” How much distance did each kid cover this year?
When a student earns a standardized test score—take 8th grade math—that tells us how much math they’ve learned up to that point, from birth to the 8th grade.
But when we’re talking about this year—in our example, the 8th grade—we want to know how much they learned in the 8th grade alone. This means we need to compare a student’s 8th grade scores to their 7th grade scores.
The Missouri Growth Model does this.
#2. How do we compare tests from year to year if the tests are different?
A common sense definition of growth is simply “this year’s test score, minus last year’s test score.” Unfortunately, test scores aren’t that simple. Missouri’s test scores aren’t directly comparable year-to-year in a way that allows for this simple math—few tests are actually built that way.
This is where “regression modeling” comes in. While MAP scores are not exactly comparable from year to year, there is a strong correlation between them. Past years’ scores can be used to predict this year’s scores, and student growth can be calculated as a student’s actual end-of-year score versus their predicted score.
Forgetting about the Missouri MAP tests for a moment, imagine that a teenager took the ACT one year and the SAT the next. While you can’t subtract an ACT score from an SAT score—they are scaled differently and cover content in different ways—you can reasonably expect that a student’s ACT score would be predictive of their SAT score, albeit imperfectly. The same logic applies to Missouri MAP scores: previous scores are predictive of later scores.
The Missouri Growth Model first finds the statistical relationship between this year’s test scores and previous years’ test scores. On balance, students’ previous test scores are predictive of this year’s test scores.
Yet some students outperform what their past performance would have led us to predict, statistically speaking. Others perform lower than predicted. In regression models like those used in the Missouri Growth Model’s calculations, each student is given a model score (an “expected” or “predicted” score) based on how students with similar past scores performed this year. How a student performed relative to that statistical prediction is their “growth.”
And so we’re back to the basic intuition. A student’s MAP score this year, minus their statistically-predicted score, is their individual growth.
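A minimal sketch of this logic, on synthetic data rather than real MAP scores, might look like the following. The one-predictor regression and the variable names are illustrative; the state’s actual model is more elaborate:

```python
# A single-predictor regression: predict this year's score from last
# year's, then treat the residual as individual growth. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
last_year = rng.normal(300, 25, size=500)                       # hypothetical prior-year scores
this_year = 50 + 0.9 * last_year + rng.normal(0, 10, size=500)  # hypothetical current scores

# Fit a simple linear regression of this year's scores on last year's.
slope, intercept = np.polyfit(last_year, this_year, deg=1)

# Each student's model ("predicted") score, given their prior score.
predicted = intercept + slope * last_year

# Individual growth: actual score minus statistically predicted score.
growth = this_year - predicted
```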
#3. How do you take into account more than just last year’s test score?
Wouldn’t it be nice to take into account more than just a single, past test score when talking about growth? The Missouri Growth Model uses “multiple regression analyses,” which can take into account multiple past predictors of this year’s performance. So, for example, when looking at a student’s performance in math for this year, the model not only accounts for a student’s past performance in math, but also language arts, as well as other community-level factors.
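Here is how that extension might look in a sketch, again with synthetic data and illustrative predictors (prior math and prior language arts scores), not the model’s actual specification:

```python
# Multiple regression: predict this year's math score from several
# prior predictors at once. Synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 500
prior_math = rng.normal(300, 25, size=n)
prior_ela = rng.normal(300, 25, size=n)
math_now = 40 + 0.7 * prior_math + 0.2 * prior_ela + rng.normal(0, 10, size=n)

# Design matrix: an intercept column plus both prior-score predictors.
X = np.column_stack([np.ones(n), prior_math, prior_ela])
coefs, *_ = np.linalg.lstsq(X, math_now, rcond=None)

predicted = X @ coefs
growth = math_now - predicted  # residuals, as before
```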
#4. How did a student’s year-to-year change in performance compare to other students?
While we would expect a student’s testable skills to improve over the course of a whole school year, how does their improvement (or lack thereof) compare to other students?
Some students outperform statistical expectations. Some do not. A student’s performance relative to their model-estimated score is what Missouri refers to as “growth.” In statistics, this piece of information is called a “residual”—it is literally the extent to which a student’s test score at the end of the year defied statistical expectations.
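To put hypothetical numbers on it: if the model predicted a student would score 340 on this year’s MAP test and the student actually scored 352, their residual (and thus their growth) is +12. A student with the same prediction who scored 330 would have growth of -10.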
#5. What is the average student growth at a given school? And how confident are we in these numbers?
At the school level, how did students perform this year compared to students in similar situations with similar past scores? To answer this, student residuals (see point 4 above) are averaged using a second regression analysis. This second step not only calculates average student growth for each school, but also provides “confidence intervals”: a plausible range of scores representing the school’s true impact on student achievement in a given year.
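The sketch below illustrates this second step with synthetic residuals. For simplicity it uses group means and a normal-approximation confidence interval; the actual model runs a second regression, though a regression on school indicators alone produces the same point estimates as these group averages:

```python
# Step two, sketched: average student residuals by school and attach a
# confidence interval. Synthetic data; not the state's actual model.
import numpy as np

rng = np.random.default_rng(2)
schools = rng.choice(["school_a", "school_b", "school_c"], size=600)
residuals = rng.normal(0, 12, size=600)  # student growth from step one

for school in np.unique(schools):
    r = residuals[schools == school]
    mean = r.mean()
    se = r.std(ddof=1) / np.sqrt(len(r))            # standard error of the mean
    low, high = mean - 1.96 * se, mean + 1.96 * se  # approx. 95% interval
    print(f"{school}: growth = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```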
#6. How do we measure school performance in a way that is fair to schools that serve high numbers of disadvantaged students?
Schools that serve high percentages of high-need students face special challenges. They should not be penalized for taking on these challenges, and their performance should be compared to similar schools with similar students.
Some schools serve many students who, for reasons well beyond the school’s control, enter the year testing far below state averages; even if these students make large gains over the year, they may only test in the middle of the pack at year’s end. Conversely, other schools primarily serve students who enter the year scoring so highly that even if they learn very little over the course of the school year, they will still test above average come spring.
Rather than looking at where a student simply ends the year—which is what happens when looking at single-point-in-time test scores—growth looks at how much distance each student covers. This is the only way to make a fair comparison across students, and across schools.
Focusing on a single test score as a common goal for all students—which is literally what the state does when looking at the percentage of students scoring “proficient”—in effect sets a different task for every student. Some students start the year far below that score, some just shy of it, and some already past it.
Instead, growth says, “how much distance did you cover, regardless of where you started the year?” The Missouri Growth Model controls for past performance, as well as the starting performance of students around you. The model essentially compares each student to other students whose past performance was similar to their own.
The Missouri Growth Model controls for past performance. Because of this, schools that serve high concentrations of low-income students are just as likely to show high growth as schools from wealthier communities. The University of Missouri authors of the model have shown that there is no relationship between school growth scores and the economic circumstances of their students.
The Growth Model sets schools on equal footing, which is the final reason why it is the best growth model in the country, and far superior to focusing solely on “proficiency.”
Discussion
Missouri’s Growth Model is the best and only way to address all of these questions at once. Each of these questions may seem simple. But in combination, they create a complicated puzzle.
Some states solve the puzzle by ignoring certain pieces. Growth models such as “Student Growth Percentiles” and “Growth to Proficiency” simply ignore some of the challenges above. Not Missouri—the state’s “two-step value-added model” is the only model in use that can address all of the challenges above.
The state’s Growth Model is a statistical masterwork—“a two-step value-added model” developed, refined and run by econometricians at the University of Missouri. It should be applauded for addressing the issues above head on.