Making Sense Of Florida's School And Teacher Performance Ratings

Last week, Florida Senate President Don Gaetz (R-Niceville) expressed his skepticism about the recently released results of the state’s new teacher evaluation system. The senator was particularly troubled by what he saw when comparing the ratings with schools’ “A-F” grades. He noted, “If you have a C school, 90 percent of the teachers in a C school can’t be highly effective. That doesn’t make sense.”

There’s an important discussion to be had about the results of both the school and teacher evaluation systems, and the distributions of the ratings can definitely be part of that discussion (even if this issue is sometimes approached in a superficial manner). However, arguing that we can validate Florida’s teacher evaluations using its school grades, or vice-versa, suggests little understanding of either. Actually, given the design of both systems, finding a modest or even weak association between them would make pretty good sense.

In order to understand why, one needs to consider two facts.
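As a rough illustration of the general point, here is a minimal simulation (with entirely invented numbers, not Florida data). It assumes, purely for the sake of argument, that school grades lean heavily on absolute proficiency, which tends to track student background, while teacher ratings are based on growth that is only weakly related to where a school's students start out.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, teachers_per_school = 500, 30

# Hypothetical school-level proficiency, standing in for absolute performance.
proficiency = rng.normal(size=n_schools)

# Hypothetical teacher growth scores, only weakly related to school proficiency.
growth = 0.1 * proficiency[:, None] + rng.normal(size=(n_schools, teachers_per_school))

# Grade schools A-F by proficiency quintile; call the top quarter of growth "highly effective."
grade = np.digitize(proficiency, np.quantile(proficiency, [0.2, 0.4, 0.6, 0.8]))  # 0=F ... 4=A
highly_effective = growth > np.quantile(growth, 0.75)

for g, label in zip(range(5), "FDCBA"):
    share = highly_effective[grade == g].mean()
    print(f"{label} schools: {share:.0%} of teachers rated highly effective")
```

In this toy setup, every grade band ends up with roughly 20 to 30 percent of its teachers above the "highly effective" cutoff, despite wide variation in school grades, simply because the two rating systems are built from different ingredients.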

The Test-Based Evidence On The "Florida Formula"


Former Florida Governor Jeb Bush has become one of the more influential education advocates in the country. He travels the nation armed with a set of core policy prescriptions, sometimes called the “Florida formula,” as well as “proof” that they work. The evidence that he and his supporters present consists largely of changes in average statewide test scores – NAEP and the state exam (FCAT) – since the reforms started going into place. The basic idea is that increases in testing results are the direct result of these policies.

Governor Bush is no doubt sincere in his effort to improve U.S. education, and, as we'll see, a few of the policies comprising the “Florida formula” have some test-based track record. However, his primary empirical argument on their behalf – the coincidence of these policies’ implementation with changes in scores and proficiency rates – though common among both “sides” of the education debate, is simply not valid. We’ve discussed why this is the case many times (see here, here and here), as have countless others, in the Florida context as well as more generally.*

There is no need to repeat those points, except to say that they embody the most basic principles of data interpretation and causal inference. It would be wonderful if the evaluation of education policies – or of school systems’ performance more generally – were as easy as looking at raw, cross-sectional testing data. But it is not.
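To make the point concrete, here is a deliberately simple, entirely hypothetical illustration (the numbers are invented, not actual NAEP or FCAT results): a naive before/after comparison credits a reform with a gain that is indistinguishable from the trend that was already underway.

```python
# Hypothetical statewide average scores: a steady upward trend that neither
# speeds up nor slows down when a reform takes effect in 2000. Invented numbers.
scores = {1994: 208, 1996: 210, 1998: 212, 2000: 214, 2002: 216, 2004: 218}
reform_year = 2000

naive_gain = scores[2004] - scores[reform_year]   # +4 points: "the reform worked!"
prior_trend = scores[reform_year] - scores[1996]  # +4 points over the same span, pre-reform
print(naive_gain, prior_trend)                    # the post-reform gain matches the pre-existing trend
```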

Luckily, one need not rely on these crude methods. We can instead take a look at some of the rigorous research that has specifically evaluated the core reforms comprising the “Florida formula.” As usual, it is a far more nuanced picture than supporters (and critics) would have you believe.

The Year In Research On Market-Based Education Reform: 2012 Edition


2012 was another busy year for market-based education reform. The rapid proliferation of charter schools continued, while states and districts went about the hard work of designing and implementing new teacher evaluations that incorporate student testing data, and, in many cases, performance pay programs to go along with them.

As in previous years (see our 2010 and 2011 reviews), much of the research on these three “core areas” – merit pay, charter schools, and the use of value-added and other growth models in teacher evaluations – appeared rather responsive to the direction of policy making, but could not always keep up with its breakneck pace.*

Some lag time is inevitable, not only because good research takes time, but also because there's a degree to which you have to try things before you can see how they work. Nevertheless, what we don't know about these policies far exceeds what we know, and, given the sheer scope and rapid pace of reforms over the past few years, one cannot help but get the occasional “flying blind" feeling. Moreover, as is often the case, the only unsupportable position is certainty.

The Sensitive Task Of Sorting Value-Added Scores

The New Teacher Project’s (TNTP) recent report on teacher retention, called “The Irreplaceables,” garnered quite a bit of media attention. In a discussion of this report, I argued, among other things, that the label “irreplaceable” is a highly exaggerated way of describing the teachers who meet TNTP’s definitions, which, by the way, varied across the five districts included in the analysis. In general, those definitions are better described as “probably above average in at least one subject” (and this distinction matters for how one interprets the results).

I’d like to elaborate a bit on this issue – that is, how to categorize teachers’ growth model estimates, which one might do, for example, when incorporating them into a final evaluation score. This choice, which receives virtually no discussion in TNTP’s report, is always a judgment call to some degree, but it’s an important one for accountability policies. Many states and districts are drawing those very lines between teachers (and schools), and attaching consequences and rewards to the outcomes.

Let's take a very quick look, using the publicly released 2010 “teacher data reports” from New York City (there are details about the data in the first footnote*). Keep in mind that these are just value-added estimates, and are thus, at best, incomplete measures of teacher performance (importantly, however, the discussion below is not specific to growth models; it can apply to many different types of performance measures).
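To see why the choice of cutoffs matters, here is a small simulation. The estimates and standard errors below are generated at random for illustration; they are not the NYC reports themselves. The sketch compares two plausible ways of labeling teachers "above average": a simple cutoff on the point estimate versus a requirement that the estimate be statistically distinguishable from average.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Simulated value-added point estimates and standard errors (illustrative only).
true_effect = rng.normal(0, 0.10, n)          # "true" teacher effects
std_err = rng.uniform(0.05, 0.20, n)          # per-teacher standard errors
estimate = true_effect + rng.normal(0, std_err)

# Rule 1: simple cutoff on the point estimate (e.g., top 25 percent).
rule_point = estimate >= np.quantile(estimate, 0.75)

# Rule 2: require the estimate to be statistically distinguishable from average.
rule_ci = estimate - 1.96 * std_err > 0

print(rule_point.mean(), rule_ci.mean())   # the two rules label very different shares of teachers
print((rule_point & ~rule_ci).mean())      # "top quartile" teachers who are not clearly above average
```

With these made-up numbers, the two rules identify noticeably different groups of teachers, which is exactly the kind of judgment call that gets buried when a label like "irreplaceable" is attached to the result.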

Cheating, Honestly

Whatever one thinks of the heavy reliance on standardized tests in U.S. public education, one of the things on which there is wide agreement is that cheating must be prevented, and investigated when there’s evidence it might have occurred.

For anyone familiar with test-based accountability, recent cheating scandals in Atlanta, Washington, D.C., Philadelphia and elsewhere are unlikely to have been surprising. There has always been cheating, and it can take many forms, ranging from explicit answer-changing to subtle coaching on test day. One cannot say with any certainty how widespread cheating is, but there is every reason to believe that high-stakes testing increases the likelihood that it will happen. The first step toward addressing that problem is to recognize it.
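One common screening technique in several of these investigations was erasure analysis: flagging classrooms with unusually high counts of wrong-to-right answer changes for follow-up. A bare-bones sketch of that idea, using invented numbers, might look like this.

```python
import statistics

# Hypothetical wrong-to-right erasure counts per classroom (invented numbers, one outlier).
erasures = {"Room 1": 3, "Room 2": 5, "Room 3": 4, "Room 4": 2, "Room 5": 3,
            "Room 6": 4, "Room 7": 5, "Room 8": 3, "Room 9": 4, "Room 10": 30}

mean = statistics.mean(erasures.values())
sd = statistics.stdev(erasures.values())

# Flag classrooms far above the typical count: a screen for follow-up, not proof of cheating.
flagged = {room: count for room, count in erasures.items() if (count - mean) / sd > 2}
print(flagged)  # {'Room 10': 30}
```

A flag of this kind is evidence worth investigating, not proof of wrongdoing, which is precisely why the willingness to investigate matters.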

A district, state or nation that is unable or unwilling to acknowledge the possibility of cheating, do everything possible to prevent it, and face up to it when evidence suggests it has occurred, is ill-equipped to rely on test-based accountability policies. 

Selective Schools In New Orleans

Charter schools in New Orleans, LA (NOLA) receive a great deal of attention, in no small part because they serve a larger proportion of public school students than do charters in any other major U.S. city. Less discussed, however, is the prevalence of NOLA’s “selective schools” (elsewhere, they are sometimes called “exam schools”). These schools maintain criteria for admission and/or retention, based on academic and other qualifications (often grades and/or standardized test scores).

At least six of NOLA’s almost 90 public schools are selective – one high school, four (P)K-8 schools and one serving grades K-12. When you add up their total enrollment, around one in eight NOLA students attends one of these schools.*
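The "one in eight" figure is just the selective schools' combined enrollment divided by total district enrollment. The numbers below are placeholders for illustration, not the actual NOLA counts.

```python
# Placeholder enrollment figures (illustrative only, not actual NOLA data).
selective_enrollment = [1000, 700, 650, 600, 550, 1750]   # the six selective schools
total_enrollment = 42_000                                  # all NOLA public school students

share = sum(selective_enrollment) / total_enrollment
print(f"{share:.1%}, or about 1 in {round(1 / share)}")    # 12.5%, or about 1 in 8
```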

Although I couldn’t find recent summary data on the prevalence of selective schools in urban districts around the U.S., this is almost certainly an extremely high proportion (for instance, selective schools in New York City and Chicago, which are mostly secondary schools, serve only a tiny fraction of students in those cities).

Creating A Valid Process For Using Teacher Value-Added Measures


Our guest author today is Douglas N. Harris, associate professor of economics and University Endowed Chair in Public Education at Tulane University in New Orleans. His latest book, Value-Added Measures in Education, provides an excellent, accessible review of the technical and practical issues surrounding these models. 

Now that the election is over, the Obama Administration and policymakers nationally can return to governing. Of all the education-related decisions that have to be made, the future of teacher evaluation has to be front and center.

In particular, how should “value-added” measures be used in teacher evaluation? President Obama’s Race to the Top initiative expanded the use of these measures, which attempt to identify how much each teacher contributes to student test scores. In doing so, the initiative embraced and expanded the controversial reliance on standardized tests that started under President Bush’s No Child Left Behind.

In many respects, Race to the Top was well designed. It addressed an important problem – the vast majority of teachers report receiving little high-quality feedback on their instruction. As a competitive grant program, it was voluntary for states to participate (though involuntary for many districts within those states). The Administration also smartly embraced the idea of multiple measures of teacher performance.

But they also made one decision that I think was a mistake. They encouraged – or required, depending on your vantage point – states to lump value-added or other growth model estimates together with other measures. The raging debate since then has been over what percentage of teachers’ final ratings should be based on value-added versus the other measures. I believe there is a better way to approach this issue, one that focuses on teacher evaluation not as a measure, but rather as a process.
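To make the "lumping" concrete, here is a sketch of the kind of weighted composite at issue. The weights, scores, and cutoff points are invented for illustration, not taken from any actual state system.

```python
def final_rating(value_added, observation, vam_weight):
    """Weighted composite of a value-added score and an observation score (both on a 1-4 scale)."""
    composite = vam_weight * value_added + (1 - vam_weight) * observation
    if composite >= 3.5:
        return "highly effective"
    if composite >= 2.5:
        return "effective"
    return "needs improvement"

# The same (hypothetical) teacher, rated under different value-added weights.
for weight in (0.2, 0.35, 0.5):
    print(weight, final_rating(value_added=2.0, observation=4.0, vam_weight=weight))
```

The same scores can yield different labels depending solely on the weight assigned to value-added, which is one reason the debate over percentages has been so heated.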

Describing, Explaining And Affecting Teacher Retention In D.C.

The New Teacher Project (TNTP) has released a new report on teacher retention in D.C. Public Schools (DCPS). It is a spinoff of their “The Irreplaceables” report, which was released a few months ago, and which is discussed in this post. The four (unnamed) districts from that report are also used in this one, and their results are compared with those from DCPS.

I want to look quickly at this new supplemental analysis, not to rehash the issues I raised about “The Irreplaceables,” but rather because of DCPS’s potential importance as a field test site for a host of policy reform ideas – indeed, the majority of core market-based reform policies have been in place in D.C. for several years, including teacher evaluations in which test-based measures are the dominant component, automatic dismissals based on those ratings, large performance bonuses, mutual consent for excessed teachers, and a huge charter sector. There are many people itching to render a sweeping verdict, positive or negative, on these reforms, most often based on pre-existing beliefs rather than solid evidence.

Although I will take issue with a couple of the conclusions offered in this report, I'm not going to review it systematically. I think research on retention is important, and it’s difficult to produce reports with original analysis, while very easy to pick them apart. Instead, I’m going to list a couple of findings in the report that I think are worth examining, mostly because they speak to larger issues.

Annual Measurable Objections

As states continue to finalize their applications for ESEA/NCLB “flexibility” (or “waivers”), controversy has arisen in some places over how these plans set proficiency goals, both overall and for demographic subgroups (see our previous post about the situation in Virginia).

One of the underlying rationales for allowing states to establish new targets (called “annual measurable objectives," or AMOs) is that the “100 percent” proficiency goals of NCLB were unrealistic. Accordingly, some (but not all) of the new plans have set 2017-18 absolute proficiency goals that are considerably below 100 percent, and/or lower for some subgroups relative to others. This shift has generated pushback from advocates, most recently in Florida, who believe that lowering state targets is tantamount to encouraging or accepting failure.
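To see how these targets are typically generated, consider one commonly used waiver approach: closing half the gap between a baseline proficiency rate and 100 percent in equal annual steps over six years. The sketch below uses invented baselines and is not any particular state's plan.

```python
def amo_targets(baseline_pct, years=6):
    """Annual targets that close half the gap to 100 percent proficiency in equal steps."""
    end_goal = baseline_pct + (100 - baseline_pct) / 2
    step = (end_goal - baseline_pct) / years
    return [round(baseline_pct + step * (y + 1), 1) for y in range(years)]

# Two hypothetical subgroups with different starting points get different 2017-18 targets.
print(amo_targets(baseline_pct=40))  # [45.0, 50.0, 55.0, 60.0, 65.0, 70.0]
print(amo_targets(baseline_pct=70))  # [72.5, 75.0, 77.5, 80.0, 82.5, 85.0]
```

Because the trajectory starts from each group's baseline, lower-performing subgroups mechanically end up with lower 2017-18 targets, which is the feature critics have objected to.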

I acknowledge the central role of goals in any accountability system, but I would like to humbly suggest that this controversy, over where and how states set proficiency targets for 2017-18, may be misguided. There are four reasons why I think this is the case (and one silver lining if it is).

NCLB And The Institutionalization Of Data Interpretation

It is a gross understatement to say that the No Child Left Behind (NCLB) law is, was – and will continue to be – a controversial piece of legislation. Although opinion tends toward the negative, there are certain features, such as a focus on student subgroup data, that many people support. And it’s difficult to make generalizations about whether the law’s impact on U.S. public education was “good” or “bad” by some absolute standard.

The one thing I would say about NCLB is that it has helped to institutionalize the improper interpretation of testing data.

Most of the attention to the methodological shortcomings of the law focuses on “adequate yearly progress” (AYP) – the crude requirement that all schools must make “adequate progress” toward the goal of 100 percent proficiency by 2014. And AYP is indeed an inept measure. But the problems are actually much deeper than AYP.

Rather, it’s the underlying methods and assumptions of NCLB (including AYP) that have had a persistent, negative impact on the way we interpret testing data.
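One example of the kind of interpretation at issue: a status-based target judges schools by the proficiency rate of whichever students they enroll in a given year, not by how much those students improve. The sketch below is deliberately simplified (the real AYP rules also involved subgroups, participation rates, and "safe harbor") and uses invented numbers.

```python
# A deliberately simplified status-style check; all figures are invented.
annual_target = 75  # percent proficient required this year

schools = {
    "School A": {"last_year": 80, "this_year": 80},  # high-scoring, no improvement
    "School B": {"last_year": 45, "this_year": 65},  # low-scoring, large gains
}

for name, pct in schools.items():
    status = "makes the target" if pct["this_year"] >= annual_target else "misses the target"
    print(f"{name}: {status} (gain of {pct['this_year'] - pct['last_year']} points)")
```

Under a pure status rule, the school making large gains from a low starting point misses the target, while the school that did not improve at all makes it.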