A Simple Choice Of Words Can Help Avoid Confusion About New Test Results
In 1998, the National Institutes of Health (NIH) lowered the threshold at which people are classified as "overweight." Literally overnight, about 25 million Americans previously considered to be at a healthy weight became overweight. If, the next day, you had seen a newspaper headline saying "number of overweight Americans increases," you would probably have found it a little misleading. America's "overweight" population didn't really increase; the definition changed.
Fast forward to November 2012, when Kentucky became the first state to release results from new assessments aligned with the Common Core Standards (CCS). This led to headlines such as "Scores Drop on Kentucky's Common Core-Aligned Tests" and "Challenges Seen as Kentucky's Test Scores Drop As Expected." Yet these descriptions unintentionally misrepresent what happened. It is not quite accurate, or is at least highly imprecise, to say that test scores "dropped," just as it would have been wrong to say that the number of overweight Americans increased overnight in 1998 (in fact, the figures in question aren't even scores; they're proficiency rates). Rather, the state adopted different tests, with different content, a different design, and different standards by which students are deemed "proficient."
Over the next two to three years, a large group of states will also release results from their new CCS-aligned tests. It is important for parents, teachers, administrators, and other stakeholders to understand what these results mean. Most of them will rely on newspapers and blogs for that understanding, so one exceedingly simple step that might help is some polite, constructive language-policing.
States use standards to sort students’ test scores into “proficient” and other “NCLB-style” performance categories. The long and short of it is that, in most states, the bar will be set higher once the new CCS-aligned tests are adopted. Thus, in most cases, as in Kentucky, the proficiency rates in the first year will be lower than they were the previous year, when the old tests/standards were in place.
By themselves, these changes in proficiency and other rates can't tell you much about the year-to-year performance of students, since it's extremely difficult to separate "real" change in performance from change due to the new tests and standards (actually, it's very tough even when the tests/standards don't change between years, but that's a different story).
What they suggest, very roughly, is how a given state's previous standards compare with the CCS. For example, if the new standards/tests are a lot more difficult, rates will probably be much lower than before. Conversely, more modest differences will likely lead to smaller changes in proficiency. In this sense, the new test results are an opportunity to illustrate how sensitive proficiency rates (and changes in those rates) are to where one sets the bar.
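To make that sensitivity concrete, here is a minimal sketch using entirely hypothetical scores and cut points (not Kentucky's actual data or standards). It shows how the exact same group of student scores can produce very different "proficiency rates" depending on where the bar is set:

```python
# Minimal illustration: a proficiency rate depends on where the bar is set.
# All scores and cut points below are hypothetical, for illustration only.

scores = [48, 52, 55, 58, 61, 63, 66, 70, 74, 79]  # one cohort's test scores

def proficiency_rate(scores, cut_score):
    """Percent of students scoring at or above the cut score."""
    proficient = sum(1 for s in scores if s >= cut_score)
    return 100 * proficient / len(scores)

old_bar = 55  # lenient threshold (stand-in for a state's old standard)
new_bar = 65  # tougher threshold (stand-in for a CCS-aligned standard)

print(f"Old standard: {proficiency_rate(scores, old_bar):.0f}% proficient")
print(f"New standard: {proficiency_rate(scores, new_bar):.0f}% proficient")
# Same students, same performance -- the rate falls from 80% to 40%
# only because the bar moved, not because anyone learned less.
```

In this toy example, the rate "drops" 40 percentage points without any change whatsoever in how students performed, which is precisely why the word "drop" misleads.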
Now, here's the suggestion. There will inevitably be confusion among parents and the public about what the shifts mean. However, calling them “increases/decreases” or "jumps/drops" will almost certainly exacerbate the misunderstanding. If editors, reporters and commentators wish to focus on year-to-year rate changes, they should consider characterizing them as “differences," or similar terms that don’t imply a shift over time in student performance. They should also, of course, note explicitly that the new tests and standards are most likely the primary reason for the changes.
Alternatively, and perhaps preferably, discussion of the data might downplay the year-to-year changes, and instead focus on the raw state test results as they are most properly interpreted: a snapshot of tested students' performance in a given year.
- Matt Di Carlo