The Relatively Unexplored Frontier Of Charter School Finance

Do charter schools do more – get better results – with less? If you ask this question, you’ll probably get very strong answers, ranging from the affirmative to the negative, often depending on the person’s overall view of charter schools. The reality, however, is that we really don’t know.

Actually, despite coverage that often outruns the thin evidence, researchers don’t even have a good handle on how much charter schools spend, to say nothing of whether how and how much they spend leads to better outcomes. Reporting of charter financial data is incomplete, imprecise and inconsistent. It is difficult to disentangle the financial relationships between charter management organizations (CMOs) and the schools they run, as well as those between charter schools and their "host" districts.

A new report published by the National Education Policy Center, with support from the Shanker Institute and the Great Lakes Center for Education Research and Practice, examines spending between 2008 and 2010 among charter schools run by major CMOs in three states – New York, Texas and Ohio. The results suggest that relative charter spending in these states, like test-based charter performance overall, varies widely. In addition, and perhaps more importantly, the findings make it clear that significant barriers remain to accurate spending comparisons between charter and regular public schools, which severely hinder rigorous efforts to examine the cost-effectiveness of these schools.

Teachers And Their Unions: A Conceptual Border Dispute

One of the segments from “Waiting for Superman” that stuck in my head is the following statement by Newsweek reporter Jonathan Alter:

It’s very, very important to hold two contradictory ideas in your head at the same time. Teachers are great, a national treasure. Teachers’ unions are, generally speaking, a menace and an impediment to reform.

The distinction between teachers and their unions (as well as those of other workers) has been a matter of political and conceptual contention for a long time. On one “side," the common viewpoint, as characterized by Alter's slightly hyperbolic line, is “love teachers, don’t like their unions." On the other “side," criticism of teachers’ unions is often called “teacher bashing."

So, is there any distinction between teachers and teachers’ unions? Of course there is.

The Test-Based Evidence On New Orleans Charter Schools

Charter schools in New Orleans (NOLA) now serve over four out of five students in the city – the largest market share of any big city in the nation. As of the 2011-12 school year, most of the city’s schools (around 80 percent), charter and regular public, are overseen by the Recovery School District (RSD), a statewide agency created in 2003 to take over low-performing schools, which assumed control of most NOLA schools in Katrina’s aftermath.

Around three-quarters of these RSD schools (50 out of 66) are charters. The remainder of NOLA’s schools are overseen either by the Orleans Parish School Board (which is responsible for 11 charters and six regular public schools, and holds taxing authority for all parish schools) or by the Louisiana Board of Elementary and Secondary Education (which is directly responsible for three charters, and also supervises the RSD).

New Orleans is often held up as a model for the rapid expansion of charter schools in other urban districts, based on the argument that charter proliferation since 2005-06 has generated rapid improvements in student outcomes. There are two separate claims potentially embedded in this argument. The first is that the city’s schools perform better than they did pre-Katrina. The second is that NOLA’s charters have outperformed the city’s dwindling supply of traditional public schools since the hurricane.

Although I tend strongly toward the viewpoint that whether charter schools "work" is far less important than why – e.g., specific policies and practices – it might nevertheless be useful to quickly address both of the claims above, given all the attention paid to charters in New Orleans.

Value-Added Versus Observations, Part Two: Validity

In a previous post, I compared value-added (VA) and classroom observations in terms of reliability – the degree to which they are free of error and stable over repeated measurements. But even the most reliable measures aren’t useful unless they are valid – that is, unless they’re measuring what we want them to measure.

Arguments over the validity of teacher performance measures, especially value-added, dominate our discourse on evaluations. There are, in my view, three interrelated issues to keep in mind when discussing the validity of VA and observations. The first is definitional – in a research context, validity is less about a measure itself than the inferences one draws from it. The second point might follow from the first: The validity of VA and observations should be assessed in the context of how they’re being used.

Third and finally, given the difficulties in determining whether either measure is valid in and of itself, as well as the fact that so many states and districts are already moving ahead with new systems, the best approach at this point may be to judge validity in terms of whether the evaluations are improving outcomes. And, unfortunately, there is little indication that this is happening in most places.

Becoming A 21st Century Learner

Think about something you have always wanted to learn or accomplish but never did, such as speaking a foreign language or learning how to play an instrument. Now think about what stopped you. There are probably a variety of factors, but chances are those factors have little to do with technology.

Electronic devices are becoming cheaper, easier to use, and more intuitive. Much of the world’s knowledge is literally at our fingertips, accessible from any networked gadget. Yet, sustained learning does not always follow. It is often noted that developing digital skills/literacy is fundamental to 21st century learning, but is that all that’s missing? I suspect not. In this post, I take a look at university courses available to anyone with an internet connection (a.k.a. massive open online courses, or MOOCs) and ask: What attributes or skills make some people (but not others) better equipped to take advantage of this and similar educational opportunities brought about by advances in technology?

In the last few months, Stanford University’s version of MOOCs has attracted considerable attention (also here and here), leading some to question the U.S. higher education model as we know it – and even envision its demise. But what is really novel about the Stanford MOOCs? Why did 160,000 students from 190 countries sign up for the course “Introduction to Artificial Intelligence”?

Value-Added Versus Observations, Part One: Reliability

Although most new teacher evaluations are still in various phases of pre-implementation, it’s safe to say that classroom observations and/or value-added (VA) scores will be the most heavily weighted components of teachers’ final scores, depending on whether teachers are in tested grades and subjects. One gets the general sense that many – perhaps most – teachers strongly prefer the former (observations, especially peer observations) over the latter (VA).

One of the most common arguments against VA is that the scores are error-prone and unstable over time - i.e., that they are unreliable. And it's true that the scores fluctuate between years (also see here), with much of this instability due to measurement error, rather than “real” performance changes. On a related note, different model specifications and different tests can yield very different results for the same teacher/class.
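
To see how measurement error alone can produce this kind of instability, here is a minimal simulation sketch – the teacher counts and variance figures are purely illustrative assumptions, not drawn from any of the studies above – in which every teacher’s “true” performance is identical across two years:

```python
import numpy as np

rng = np.random.default_rng(42)

n_teachers = 1000
true_sd = 1.0   # spread of "real" teacher effects (assumed)
error_sd = 1.5  # spread of measurement error (assumed)

# Stable underlying performance, plus independent noise each year
true_effect = rng.normal(0, true_sd, n_teachers)
year1 = true_effect + rng.normal(0, error_sd, n_teachers)
year2 = true_effect + rng.normal(0, error_sd, n_teachers)

# Reliability: the share of observed variance reflecting real differences
reliability = true_sd**2 / (true_sd**2 + error_sd**2)
print(f"theoretical reliability: {reliability:.2f}")

# The year-to-year correlation of the scores is pulled down toward the
# reliability, even though no teacher's underlying performance changed
print(f"year-to-year correlation: {np.corrcoef(year1, year2)[0, 1]:.2f}")
```

In this stylized setup, noisier measurement (a larger error term relative to real differences) mechanically lowers the year-to-year correlation, which is one reason fluctuating scores alone cannot tell us how much “real” performance is changing.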

These findings are very important, and often too casually dismissed by VA supporters, but the issue of reliability is, to varying degrees, endemic to all performance measurement. Actually, many of the standard reliability-based criticisms of value-added could also be leveled against observations. Since we cannot observe “true” teacher performance, it’s tough to say which is “better” or “worse," despite the certainty with which both “sides” often present their respective cases. And, the fact that both entail some level of measurement error doesn't by itself speak to whether they should be part of evaluations.*

Nevertheless, many states and districts have already made the choice to use both measures, and in these places, the existence of imprecision is less important than how to deal with it. Viewed from this perspective, VA and observations are in many respects more alike than different.

There's No One Correct Way To Rate Schools

Education Week reports on the growth of websites that attempt to provide parents with help in choosing schools, including rating schools according to testing results. The most prominent of these sites is GreatSchools.org. Its test-based school ratings could not be more simplistic – they are essentially just percentile rankings of schools’ proficiency rates as compared to all other schools in their states (the site also provides warnings about the data, along with a bunch of non-testing information).
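
To get a sense of just how simple this kind of indicator is, here is a sketch of the calculation under the assumption that the rating is nothing more than a within-state percentile rank of proficiency rates (the function and sample data are hypothetical, not GreatSchools’ actual code):

```python
import numpy as np

def percentile_rank(rates):
    """Percentile rank of each school's proficiency rate within its state:
    the percentage of schools in the state with a strictly lower rate."""
    rates = np.asarray(rates, dtype=float)
    return np.array([100.0 * np.mean(rates < r) for r in rates])

# Hypothetical statewide proficiency rates (%) for five schools
proficiency = [42.0, 55.5, 71.0, 88.5, 63.0]
print(percentile_rank(proficiency))  # [ 0. 20. 60. 80. 40.]
```

Note that a rank like this reflects where a school’s students end up, not how much they grow while enrolled – a key part of why it is a poor measure, as discussed next.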

This is the kind of indicator that I have criticized when reviewing states’ school/district “grading systems." And it is indeed a poor measure, albeit one that is widely available and easy to understand. But it’s worth quickly discussing the fact that such criticism is conditional on how the ratings are employed – there is a difference between using testing data to rate schools for parents and using them for high-stakes accountability purposes.

In other words, the utility and proper interpretation of data vary by context, and there's no one "correct way" to rate schools. The optimal design might differ depending on the purpose for which the ratings will be used. In fact, the reasons why a measure is problematic in one context might very well be a source of strength in another.

The Challenges Of Pre-K Assessment

In the United States, nearly 1.3 million children attend publicly funded preschool. As enrollment continues to grow, states are under pressure to prove that these programs increase school readiness. Thus, the task of figuring out how best to measure preschoolers’ learning outcomes has become a major policy focus.

First, it should be noted that researchers are almost unanimous in their caution about this subject. There are inherent difficulties in the accurate assessment of very young children’s learning in the fields of language, cognition, socio-emotional development, and even physical development. Young children’s attention spans tend to be short and there are wide, natural variations in children’s performance in any given domain and on any given day. Thus, great care is advised for both the design and implementation of such assessments (see here, here, and here for examples). The question of whether and how to use these student assessments to determine program or staff effectiveness is even more difficult and controversial (for instance, here and here). Nevertheless, many states are already using various forms of assessment to oversee their preschool investments.

It is difficult to react to this (unsurprising) paradox. Sadly, in education, there is often a disconnect between what we know (i.e., research) and what we do (i.e., policy). But, since our general desire for accountability seems to be here to stay, a case can be made that states should, at a minimum, expand what they measure to reflect learning as accurately and broadly as possible.

So, what types of assessments are better for capturing what a four- or five-year-old knows? How might these assessments be improved?

Still In Residence: Arts Education In U.S. Public Schools

There is a somewhat common argument in education circles that the focus on math and reading tests in No Child Left Behind has had the unintended consequence of generating a concurrent deemphasis on other subjects. This includes science and history, of course, but among the most frequently mentioned presumed victims of this trend are art and music.

A new report by the National Center for Education Statistics (NCES) presents some basic data on the availability of arts instruction in U.S. public schools between 1999 and 2010.

The results provide, at best, mixed support for the hypothesis that these programs are less available now than they were prior to the implementation of NCLB.

Measuring Journalist Quality

Journalists play an essential role in our society. They are charged with informing the public, a vital function in a representative democracy. Yet, year after year, large pockets of the electorate remain poorly informed on both foreign and domestic affairs. For a long time, commentators have blamed any number of different culprits for this problem, including poverty, education, increasing work hours and the rapid proliferation of entertainment media.

There is no doubt that these and other factors matter a great deal. Recently, however, there has been growing evidence that the factors shaping the degree to which people are informed about current events include not only social and economic conditions, but journalist quality as well. Put simply, better journalists produce better stories, which in turn attract more readers. On the whole, the U.S. journalist community is world class. But there is, as always, a tremendous amount of underlying variation. It’s likely that improving the overall quality of reporters would not only result in higher-quality information, but would also bring in more readers. Both outcomes would contribute to a better-informed, more active electorate.

We at the Shanker Institute feel that it is time to start a public conversation about this issue. We have requested and received datasets documenting the story-by-story readership of the websites of U.S. newspapers, large and small. We are using these data in statistical models that we call “Readers-Added Models," or “RAMs."
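
The post does not spell out the model, but a bare-bones sketch of what a “readers-added” regression might look like follows, in direct analogy to teacher value-added – every variable, name, and number here is invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical story-level dataset: each row is one story's readership,
# plus factors arguably outside the journalist's control
n = 300
df = pd.DataFrame({
    "journalist": rng.choice(["A", "B", "C", "D", "E"], n),
    "topic": rng.choice(["politics", "sports", "weather"], n),
    "words": rng.integers(300, 1500, n),
})
skill = {"A": 0.0, "B": 0.2, "C": -0.1, "D": 0.5, "E": -0.3}
pull = {"politics": 0.4, "sports": 0.1, "weather": -0.2}
df["log_readers"] = (
    df["journalist"].map(skill)   # the "true" journalist effect
    + df["topic"].map(pull)       # topic-driven demand
    + 0.0002 * df["words"]        # longer stories draw more readers (assumed)
    + rng.normal(0, 0.5, n)       # luck: news cycle, page placement, etc.
)

# A bare-bones "RAM": journalist fixed effects, net of topic and length.
# Each C(journalist) coefficient is a reporter's estimated readers-added.
ram = smf.ols("log_readers ~ C(journalist) + C(topic) + words", data=df).fit()
print(ram.params.filter(like="journalist"))
```

As with teacher value-added, everything turns on whether the controls really capture the factors outside the journalist’s control – a question the luck term in this toy example conveniently assumes away.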