Making (Up) The Grade In Ohio

In a post last week over at Flypaper, the Fordham Institute’s Terry Ryan took a “frank look” at the ratings of the handful of Ohio charter schools that Fordham’s Ohio branch manages. He noted that the Fordham schools didn’t make a particularly strong showing, ranking 24th among the state’s 47 charter authorizers in terms of the aggregate “performance index” of the schools each authorizes. Mr. Ryan took the opportunity to offer a few valid explanations for why Fordham landed in the middle of the charter authorizer pack, such as the fact that the state’s “dropout recovery schools,” which accept especially hard-to-serve students who left regular public schools, aren’t included in the rankings (excluding them would likely bump up Fordham’s relative position).

Mr. Ryan doth protest too little. His primary argument, which he touches on but does not flesh out, should be that Ohio’s performance index is more a measure of student characteristics than of any defensible concept of school effectiveness. By itself, it reveals relatively little about the “quality” of schools operated by Ohio’s charter authorizers.

But the limitations of measures like the performance index, which are discussed below (and in the post linked above), have implications far beyond Ohio’s charter authorizers. The primary means by which Ohio assesses school/district performance is the state’s overall “report card grades,” which are composite ratings composed of multiple test-based measures, including the performance index. Unfortunately, these ratings are also not a particularly useful measure of school effectiveness. Not only are the grades unstable between years, but they also rely too heavily on test-based measures, including the index, that fail to account for student characteristics. While any attempt to measure school performance using testing data is subject to imprecision, Ohio’s effort falls short.

The Stability Of Ohio's School Value-Added Ratings And Why It Matters

I have discussed before how most testing data released to the public are cross-sectional, and how comparing them between years entails the comparison of two different groups of students. One way to address these issues is to calculate and release school- and district-level value-added scores.

Value-added estimates are not only longitudinal (i.e., they follow students over time), but the models go a long way toward accounting for differences in the characteristics of students between schools and districts. Put simply, these models calculate “expectations” for student test score gains based on student (and sometimes school) characteristics, which are then used to gauge whether schools’ students did better or worse than expected.
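To make the "better or worse than expected" idea concrete, here is a minimal sketch of the general logic, using entirely synthetic data. This is not Ohio's actual model (the state's system is far more elaborate); it simply illustrates the two-step structure described above: predict each student's score from prior scores and characteristics, then average the prediction errors by school.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for 300 students in 3 hypothetical schools.
n = 300
school = rng.integers(0, 3, n)             # school assignment
prior = rng.normal(50, 10, n)              # prior-year test score
frl = rng.integers(0, 2, n)                # a student characteristic (e.g., lunch status)
true_effect = np.array([2.0, 0.0, -2.0])   # built-in school effects to recover
current = 5 + 0.9 * prior - 3 * frl + true_effect[school] + rng.normal(0, 5, n)

# Step 1: fit "expected" current-year scores from prior scores and characteristics.
X = np.column_stack([np.ones(n), prior, frl])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
expected = X @ beta

# Step 2: a school's value-added is the average of (actual - expected)
# across its students; positive means its students beat expectations.
residual = current - expected
value_added = [residual[school == s].mean() for s in range(3)]
print(value_added)
```

With the school effects built into the synthetic data (+2, 0, -2), the school-level residual means recover roughly that ordering, which is the sense in which value-added scores separate school contributions from student characteristics.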

Ohio is among the few states that release school- and district-level value-added estimates (though this number will probably increase very soon). These results are also used in high-stakes decisions, as they are a major component of Ohio’s “report card” grades for schools, which can be used to close or sanction specific schools. So, I thought it might be useful to take a look at these data and their stability over the past two years. In other words, what proportion of the schools that receive a given rating in one year will get that same rating the next year?
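The stability question posed above reduces to a simple calculation: for schools rated in both years, what share received the identical rating twice? A toy illustration (the rating labels below follow Ohio's report card categories of that era, but the schools and ratings themselves are invented for the example):

```python
# Hypothetical ratings for the same five schools in two consecutive years.
year1 = ["Excellent", "Effective", "Continuous Improvement", "Effective", "Academic Watch"]
year2 = ["Effective", "Effective", "Academic Watch", "Excellent", "Academic Watch"]

# Proportion of schools whose rating did not change between years.
same = sum(a == b for a, b in zip(year1, year2))
stability = same / len(year1)
print(f"{stability:.0%} of schools kept the same rating")  # 40% in this toy example
```

A fuller analysis would tabulate the entire year-to-year transition matrix (how many "Excellent" schools stayed "Excellent," dropped one category, and so on), but the simple match rate above is the headline figure.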

Learning Versus Punishment And Accountability

Our guest author today is Jeffrey Pfeffer, Thomas D. Dee II Professor of Organizational Behavior at the Stanford University Graduate School of Business. We find his argument intriguing, given the current obsession with “accountability” in education reform. It is reprinted with permission from Dr. Pfeffer’s blog, Rational Rants, found at http://www.jeffreypfeffer.com.

People seem to love to exact retribution on those who screw up—it satisfies some primitive sense of justice. For instance, research in experimental economics shows that people will voluntarily give up resources to punish others who have acted unfairly or inappropriately, even though doing so is costly to them, and even in circumstances where there is no future interaction that the punishment’s signal could influence. In other words, people will mete out retribution even when such behavior is economically irrational.

The Cost Of Success In Education

Many are skeptical of the current push to improve our education system by means of test-based “accountability”: hiring, firing, and paying teachers and administrators, as well as closing and retaining schools, based largely on test scores. They say it won’t work. I share their skepticism, because I think it will.

There is a simple logic to this approach: when you control the supply of teachers, leaders, and schools based on their ability to increase test scores, then this attribute will become increasingly common among these individuals and institutions. It is called “selecting on the dependent variable,” and it is, given the talent of the people overseeing this process and the money behind it, a decent bet to work in the long run.

Now, we all know the arguments about the limitations of test scores. We all know they’re largely true. Some people take them too far, others are too casual in their disregard. The question is not whether test scores provide a comprehensive measure of learning or subject mastery (of course they don’t). The better question is the extent to which teachers (and schools) who increase test scores a great deal are imparting and/or reinforcing the skills and traits that students will need after their K-12 education, relative to teachers who produce smaller gains. And this question remains largely unanswered.

This is dangerous, because if there is an unreliable relationship between teaching essential skills and the boosting of test scores, then success is no longer success. And by selecting teachers and schools based on those scores, we will have deliberately engineered our public education system to fail in spite of success.

It may be only then that we truly realize what we have done.

Accountability For Us, No Way; We're The Washington Post

In his August 4th testimony before the Senate’s Committee on Health, Education, Labor and Pensions, Government Accountability Office (GAO) official Gregory D. Kutz offered an earful of scandalous stories about how for-profit, post-secondary institutions use misrepresentation, fraud, and generally unethical practices to tap the federal loan and grant-making trough. One of these companies, so says the Washington Post itself, is Kaplan Inc., a profit-making college that contributes a whopping amount to the paper’s bottom line (67 percent of the Washington Post Company’s $92 million in second quarter earnings, according to the Washington Examiner; 62 percent according to the Post’s Ombudsman Andrew Alexander).

One might assume that the Post's deep financial involvement in Kaplan Inc. would prompt its editorial board to recuse itself from comment on new proposed federal regulations designed to correct the problems. Instead of offering "point-counterpoint" op-eds on this issue, this bastion of journalistic integrity has launched a veritable campaign in support of its corporate education interests, and offered up its op-ed page to education business allies. It is a sad and disappointing chapter in the history of this once-great institution.