The Year In Research On Market-Based Education Reform: 2011 Edition
** Also posted here on "Valerie Strauss' Answer Sheet" in the Washington Post
If 2010 was the year of the bombshell in research in the three “major areas” of market-based education reform – charter schools, performance pay, and value-added in evaluations – then 2011 was the year of the slow, sustained march.
Last year, the landmark Race to the Top program was accompanied by a set of extremely consequential research reports, ranging from the policy-related importance of the first experimental study of teacher-level performance pay (the POINT program in Nashville) and the preliminary report of the $45 million Measures of Effective Teaching project, to the political controversy of the Los Angeles Times’ release of teachers’ scores from their commissioned analysis of Los Angeles testing data.
In 2011, on the other hand, as new schools opened and states and districts went about the hard work of designing and implementing new evaluation and compensation systems, the research almost seemed to adapt to the situation. There were few (if any) "milestones," but rather a steady flow of papers and reports focused on the finer-grained details of actual policy.*
Nevertheless, a review of this year's research shows that one thing remained constant: Despite all the lofty rhetoric, what we don’t know about these interventions outweighs what we do know by an order of magnitude.
The Uncertain Future Of Charter School Proliferation
This is the third in a series of three posts about charter schools. Here are the first and second parts.
As discussed in prior posts, high-quality analyses of charter school effects show that there is wide variation in the test-based effects of these schools but that, overall, charter students do no better than their comparable regular public school counterparts. The existing evidence, though very tentative, suggests that the few schools achieving large gains tend to be well-funded, offer massive amounts of additional time, provide extensive tutoring services and maintain strict, often high-stakes discipline policies.
There will always be a few high-flying chains dispersed throughout the nation that get results, and we should learn from them. But there’s also the issue of whether a bunch of charter schools with different operators using diverse approaches can expand within a single location and produce consistent results.
Charter supporters typically argue that state and local policies can be leveraged to “close the bad charters and replicate the good ones." Opponents, on the other hand, contend that successful charters can’t expand beyond a certain point because they rely on the selection of the strongest students into these schools (so-called “cream skimming”), as well as the exclusion of high-needs students.
Given the current push to increase the number of charter schools, these are critical issues, and there is, once again, some very tentative evidence that might provide insights.
Explaining The Consistently Inconsistent Results of Charter Schools
This is the second in a series of three posts about charter schools. Here is the first part, and here is the third.
As discussed in a previous post, there is a fairly well-developed body of evidence showing that charter and regular public schools vary widely in their impacts on achievement growth. On the whole, this research finds little difference between the two types of schools, and the differences that do show up tend to be very modest. In other words, there is nothing about "charterness" that leads to strong results.
It is, however, the exceptions that are often most instructive to policy. By taking a look at the handful of schools that are successful, we might finally start moving past the “horse race” incarnation of the charter debate, and start figuring out which specific policies and conditions are associated with success, at least in terms of test score improvement (which is the focus of this post).
Unfortunately, this question is also extremely difficult to answer – policies and conditions are not randomly assigned to schools, and it’s very tough to disentangle all the factors (many unmeasurable) that might affect achievement. But the available evidence at this point is sufficient to start drawing a few highly tentative conclusions about “what works."
The Evidence On Charter Schools
** Also posted here on "Valerie Strauss' Answer Sheet" in the Washington Post and here on the Huffington Post
This is the first in a series of three posts about charter schools. Here are the second and third parts.
In our fruitless, deadlocked debate over whether charter schools “work," charter opponents frequently cite the so-called CREDO study (discussed here), a 2009 analysis of charter school performance in 16 states. The results indicated that overall charter effects on student achievement were negative and statistically significant in both math and reading, but both effect sizes were tiny. Given the scope of the study, it’s perhaps more appropriate to say that it found wide variation in charter performance within and between states – some charters did better, others did worse and most were no different. On the whole, the size of the aggregate effects, both positive and negative, tended to be rather small.
Recently, charter opponents’ tendency to cite this paper has been called “cherrypicking." Steve Brill sometimes levels this accusation, as do others. It is supposed to imply that CREDO is an exception – that most of the evidence out there finds positive effects of charter schools relative to comparable regular public schools.
CREDO, while generally well-done given its unprecedented scope, is a bit overused in our public debate – one analysis, no matter how large or good, cannot prove or disprove anything. But anyone who makes the “cherrypicking” claim is clearly unfamiliar with the research. CREDO is only one among a number of well-done, multi- and single-state studies that have reached similar conclusions about overall test-based impacts.
This is important because the endless back-and-forth about whether charter schools “work” – whether there is something about "charterness" that usually leads to fantastic results – has become a massive distraction in our education debates. The evidence makes it abundantly clear that this is not the case, and the goal at this point should be to look at the schools of both types that do well, figure out why, and use that information to improve all schools.
What The "No Excuses" Model Really Teaches Us About Education Reform
** Also posted here on “Valerie Strauss’ Answer Sheet” in the Washington Post
In a previous post, I discussed “Apollo 20," a Houston pilot program in which a group of low-performing regular public schools are implementing the so-called “no excuses” education model common among high-profile charter schools such as KIPP. In the Houston implementation, “no excuses” consists of five basic policies: a longer day and year, resulting in 21 percent more school time; different human capital policies, including performance bonuses and firing and selectively rehiring all principals and half of teachers (the latter is one of the "turnaround models" being pushed by the Obama Administration); extensive 2-on-1 tutoring; regular assessments and data analysis; and “high expectations” for behavior and achievement, including parental contracts.
A couple of weeks ago, Harvard professor Roland Fryer, the lead project researcher, released the results of the pilot’s first year. I haven’t seen much national coverage of the report, but I’ve seen a few people characterize the results as evidence that “’No excuses’ works in regular public schools." Now, it’s true that there were effects – strong in math – and that the results appear to be persistent across different model specifications.
But, when it comes to the question of whether “no excuses” works, the reality is a bit more complicated. There are four main things to keep in mind when interpreting the results of this paper, a couple of which bear on the larger debate about "no excuses" charter schools and education reform in general.
The Real Charter School Experiment
The New York Times reports that there is a pilot program in Houston, called the "Apollo 20 Program," in which some of the district’s regular public schools are "mimicking" the practices of high-performing charter schools. According to the Times article, the pilot schools seek to replicate five of the practices commonly used by high-flying charters: extended school time; extensive tutoring; more selective hiring of principals and teachers; “data-driven” instruction, including frequent diagnostic quizzing; and a “no excuses” culture of high expectations.
In theory, this pilot program is a good idea, since a primary mission of charter schools should be as a testing ground for new policies and practices that could help to improve all schools. More than a decade of evidence has made it very clear that there’s nothing about "charterness" that makes a school successful – and indeed, only a handful get excellent results. So instead of arguing along the tired old pro-/anti-charter lines, we should, like Houston, be asking why these schools excel and working to see if we can use this information productively.
I’ll be watching to see how the pilot schools end up doing. I’m also hoping that the analysis (the program is being overseen by Harvard’s EdLabs) includes some effort to separate out the effects of each of the five replicated practices. If so, I’m guessing that we will find that the difference between high- and low-performing urban schools depends more than anything else on two factors: time and money.
In Ohio, Charter School Expansion By Income, Not Performance
For over a decade, Ohio law has dictated where charter schools can open. Expansion was unlimited in Lucas County (the “pilot district” for charters) and in the “Ohio 8” urban districts (Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo, and Youngstown). But, in any given year, charters could open up in any other district that was classified as a “challenged district," as measured by whether the district received a state “report card” rating of “academic watch” or “academic emergency." This is a performance-based standard.
Under this system, there was of course very rapid charter proliferation in Lucas County and the “Ohio 8” districts. Only a small number of other districts (around 20-30 per year) “met” the performance-based standard. In other words, the state’s current charter law was supposed to “open up” districts to charter schools when those districts were not doing well.
Starting next year, the state is adding a fourth criterion: Any district with a “performance index” in the bottom five percent for the state will also be open for charter expansion. Although this may seem like a logical addition, in reality, the change offends basic principles of both fairness and educational measurement.
Charter And Regular Public School Performance In "Ohio 8" Districts, 2010-11
Every year, the state of Ohio releases an enormous amount of district- and school-level performance data. Since Ohio has among the largest charter school populations in the nation, the data provide an opportunity to examine performance differences between charters and regular public schools in the state.
Ohio’s charters are concentrated largely in the urban “Ohio 8” districts (sometimes called the “Big 8”): Akron; Canton; Cincinnati; Cleveland; Columbus; Dayton; Toledo; and Youngstown. Charter coverage varies considerably among the “Ohio 8” districts, but it is, on average, about 20 percent, compared with roughly five percent across the whole state. I will therefore limit my quick analysis to these districts.
Let’s start with the measure that gets the most attention in the state: Overall “report card grades." Schools (and districts) can receive one of six possible ratings: Academic emergency; academic watch; continuous improvement; effective; excellent; and excellent with distinction.
These ratings represent a weighted combination of four measures. Two of them measure performance “growth," while the other two measure “absolute” performance levels. The growth measures are AYP (yes or no), and value-added (whether schools meet, exceed, or come in below the growth expectations set by the state’s value-added model). The first “absolute” performance measure is the state’s “performance index," which is calculated based on the percentage of a school’s students who fall into the four NCLB categories of advanced, proficient, basic and below basic. The second is the number of “state standards” that schools meet as a percentage of the number of standards for which they are “eligible." For example, the state requires 75 percent proficiency in all the grade/subject tests that a given school administers, and schools are “awarded” a “standard met” for each grade/subject in which three-quarters of their students score above the proficiency cutoff (state standards also include targets for attendance and a couple of other non-test outcomes).
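To make the two “absolute” measures concrete, here is a minimal sketch in Python. The category weights and the example figures are hypothetical stand-ins for illustration, not Ohio’s actual published numbers; only the 75 percent proficiency threshold comes from the description above.

```python
# A minimal sketch of the two "absolute" performance measures described
# above. The category weights are illustrative assumptions, not Ohio's
# actual published weights; the 75% threshold is from the post.

def performance_index(pct_by_category, weights=None):
    """Weighted sum of the share of students in each achievement category."""
    if weights is None:
        # Hypothetical weights: higher achievement categories count for more.
        weights = {"advanced": 1.2, "proficient": 1.0,
                   "basic": 0.6, "below_basic": 0.3}
    return sum(weights[cat] * pct for cat, pct in pct_by_category.items())

def pct_standards_met(proficiency_rates, threshold=0.75):
    """Share of eligible grade/subject tests with at least 75% proficient."""
    met = sum(1 for rate in proficiency_rates if rate >= threshold)
    return met / len(proficiency_rates)

# A school with 20% advanced, 45% proficient, 25% basic, 10% below basic:
school = {"advanced": 0.20, "proficient": 0.45, "basic": 0.25, "below_basic": 0.10}
print(performance_index(school))              # ~0.87 on this toy scale
print(pct_standards_met([0.80, 0.72, 0.91]))  # 2 of 3 standards met, ~0.67
```

The real index is scaled differently, but the logic is the same: it is a weighted average that rewards moving students into higher achievement categories, whereas the “standards met” measure is all-or-nothing at each grade/subject threshold.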
The graph below presents the raw breakdown in report card ratings for charter and regular public schools.
Comparing Teacher Turnover In Charter And Regular Public Schools
** Also posted here on “Valerie Strauss’ Answer Sheet” in the Washington Post
A couple of weeks ago, a new working paper on teacher turnover in Los Angeles got a lot of attention, and for good reason. Teacher turnover, which tends to be alarmingly high in lower-income schools and districts, has been identified as a major impediment to improvements in student achievement.
Unfortunately, some of the media coverage of this paper has tended to miss the mark. Mostly, we have seen horserace stories focusing on the fact that many charter schools have very high teacher turnover rates, much higher than those of most regular public schools in LA. The problem is that, as a group, charter school teachers differ significantly from their public school peers. For instance, they tend to be younger and/or less experienced than public school teachers overall, and younger, less experienced teachers tend to exhibit higher turnover across all types of schools. So, if there is more overall churn in charter schools, this may simply be a result of the demographics of the teaching force or other factors, rather than any direct effect of charter schools per se (e.g., more difficult working conditions).
But the important results in this paper aren’t about the amount of turnover in charters versus regular public schools, which can be measured very easily, but rather the likelihood that similar teachers in these schools will exit.
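To see why that distinction matters, here is a toy simulation. It is not the paper’s actual model or data; everything here is fabricated for illustration, with charter status having no direct effect on exit but charter teachers skewing less experienced.

```python
# Toy simulation: raw turnover gaps vs. exit probabilities for
# *similar* teachers. All data are simulated; this is not the LA
# paper's model or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
charter = rng.integers(0, 2, n)
# Assumption built into the toy data: charter teachers are less experienced.
experience = rng.poisson(4 + 6 * (1 - charter))
# Exit risk depends only on experience -- no direct "charter effect."
p_exit = 1 / (1 + np.exp(-(0.5 - 0.15 * experience)))
df = pd.DataFrame({
    "exited": rng.binomial(1, p_exit),
    "charter": charter,
    "experience": experience,
})

# Raw comparison: charters look far "churnier"...
print(df.groupby("charter")["exited"].mean())

# ...but once we compare similar teachers (conditioning on experience),
# the estimated charter gap shrinks toward zero.
print(smf.logit("exited ~ charter", df).fit(disp=0).params)
print(smf.logit("exited ~ charter + experience", df).fit(disp=0).params)
```

In the simulated data, the raw turnover gap is large while the conditional charter coefficient is near zero; like-for-like comparisons of this kind are what separate the paper’s findings from the horserace coverage.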
Underestimating Context (But Selectively)
Imagine that for some reason you were lifted out of your usual place in society and dropped into somebody else’s spot — the place of someone whose behavior you have never understood. For example, you are an anarchist who suddenly becomes a top cabinet member. Or you are an environmentalist, critical of big business, who suddenly becomes responsible for developing environmental policy for ExxonMobil or BP.
As systems thinker Donella Meadows points out in her book Thinking in Systems, in any given position, "you experience the information flows, the incentives and disincentives, the goals and discrepancies, the pressure […] that goes with that position." It’s possible, but highly unlikely, that you will remember how things looked from where you stood before. If you become a manager, you’ll probably see labor less as a deserving partner, and more as a cost to be minimized. If you become a labor leader, every questionable business decision will start to seem like a deliberate attack on your members.
How do we know?
The best psychological experiments ask questions about human nature. What makes a person strong? Or evil? Are good and evil hardwired, dispositional traits, permanent once unleashed? Or is there something about the situations in which people find themselves that influences their behavior?