• Charter Schools, Special Education Students, And Test-Based Accountability

    Opponents often argue that charter schools tend to serve a disproportionately small share of special education students. And, while there are exceptions and certainly a great deal of variation, that argument is essentially accurate. Regardless of why this is the case (and there is plenty of contentious debate about that), some charter school supporters have acknowledged that it may be a problem insofar as charters are viewed as a large-scale alternative to regular public schools.

    For example, Robin Lake, writing for the Center for Reinventing Public Education, takes issue with her fellow charter supporters who assert that “we cannot expect every school to be all things to every child.” She argues instead that schools, regardless of their governance structures, should never “send the soft message that kids with significant differences are not welcome,” or treat them as if “they are somebody else’s problem.” Rather, Ms. Lake calls upon charter school operators to take up the banner of serving the most vulnerable and challenging students and “work for systemic special education solutions.”

    These are, needless to say, noble thoughts, with which many charter opponents and supporters can agree. Still, there is a somewhat more technocratic but perhaps more actionable issue lurking beneath the surface here. Put simply, until test-based accountability systems in the U.S. are redesigned so that they judge schools by their effectiveness in serving their students, rather than by which students they serve, there will be a rather strong disincentive for charters to focus aggressively on serving special education students. Moreover, whatever accountability disadvantage is faced by regular public schools serving higher proportions of special education students pales in comparison with that faced by all schools, charter and regular public alike, located in higher-poverty areas. In this sense, then, addressing this problem is something that charter supporters and opponents should be doing together.
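    To make the disincentive concrete, here is a minimal sketch, with entirely invented numbers and a deliberately simplified metric, of how a rating based on proficiency rates can diverge from actual effectiveness:

```python
# Toy illustration (all numbers invented): two equally effective schools,
# rated by the percent of students clearing a proficiency cutoff.

CUTOFF = 250

def percent_proficient(scores):
    return 100.0 * sum(s >= CUTOFF for s in scores) / len(scores)

# Both schools add the same 20 points to every student (identical
# effectiveness), but School B enrolls more students who start the
# year further below the cutoff.
school_a_entering = [245, 250, 255, 260, 265]
school_b_entering = [215, 220, 235, 250, 260]

school_a_exit = [s + 20 for s in school_a_entering]
school_b_exit = [s + 20 for s in school_b_entering]

print("School A:", percent_proficient(school_a_exit), "percent proficient")  # 100.0
print("School B:", percent_proficient(school_b_exit), "percent proficient")  # 60.0
```

    Identical effectiveness, very different ratings: a system built on measures like this one rewards schools for who walks in the door.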

  • The Education Policy Glossary

    Like most policy fields, education is full of jargon. There are countless acronyms, terms and phrases that may hold little meaning for the average citizen, but are used routinely in education circles. Moreover, there are just as many words and phrases that carry a different meaning in education than they do in regular conversation.

    We at the Shanker Institute have started a new project to help people, inside and outside the field, understand the language of education policy. Accordingly, we have assembled the first installment of an education policy glossary that indicates what people in education typically mean, intentionally or unintentionally, when they use certain words and phrases.

    We hope that this will encourage more people to engage in the public discourse, and that it will improve understanding and consistency among those of us who are already participating. The glossary is below.

  • Lessons And Directions From The CREDO Urban Charter School Study

    Last week, CREDO, a Stanford University research organization that focuses mostly on charter schools, released an analysis of the test-based effectiveness of charter schools in “urban areas” – that is, charters located in cities within 42 urban areas across 22 states. The math and reading testing data used in the analysis are from the 2006-07 to 2010-11 school years.

    In short, the researchers find that, across all areas included, charters’ estimated impacts on test scores, vis-à-vis the regular public schools to which they are compared, are positive and statistically discernible. The magnitude of the overall estimated effect is somewhat modest in reading, and larger in math. In both cases, as always, results vary substantially by location, with very large and positive impacts in some places and negative impacts in others.

    These “horse race” charter school studies are certainly worthwhile, and their findings have useful policy implications. In another sense, however, the public’s relentless focus on the “bottom line” of these analyses is tantamount to continually asking a question ("do charter schools boost test scores?") to which we already know the answer (some do, some do not). This approach is somewhat inconsistent with the whole idea of charter schools, and with realizing what may be their largest potential contribution to U.S. public education. But there are also a few more specific issues and findings in this report that merit further discussion, and we’ll start with those.

  • Recent Trends In The Sources Of Public Education Revenue

    Every year, the U.S. Census Bureau issues a report on the overall state of public education finances in the U.S. There is usually a lag of roughly 2-3 years in the data – for example, the latest report applies to the 2011-12 fiscal year – but the report and accompanying data are a good way to keep an eye on the general education finance situation, both in individual states and nationwide, particularly for those of us who are somewhat casual followers (though it bears keeping in mind that these data do not include many charter schools).

    One of the more interesting trends in recent years is the breakdown of total revenue by source. As most people know, U.S. public school systems are funded by a combination of federal, state and local revenue. Today, although states vary considerably in the configuration of these three sources, on the whole, most funding comes from state and local revenue, with a smaller but still significant contribution from federal government sources (total revenue in 2011-12 was about $595 billion).

    But there has been some volatility in these relative contributions over the past few years (at least the past few years for which data are available). The graph below presents the percent of total elementary/secondary education revenue from federal, state and local sources between 1989-90 and 2011-12.
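    As a side note, the shares in a graph like this are simple percentages of the total. A minimal sketch of the calculation, using hypothetical dollar figures (only the roughly $595 billion total appears in the post above), might look like this:

```python
# Minimal sketch of the share calculation. The dollar figures are
# hypothetical placeholders, not the actual Census Bureau numbers;
# only the ~$595 billion total is mentioned in the post.

revenue_billions = {
    "federal": 60.0,   # hypothetical
    "state": 270.0,    # hypothetical
    "local": 265.0,    # hypothetical
}

total = sum(revenue_billions.values())  # ~595 in this illustration

for source, amount in revenue_billions.items():
    share = 100.0 * amount / total
    print(f"{source:>7}: ${amount:.0f}B ({share:.1f} percent of total)")
```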

  • The Big Story About Gender Gaps In Test Scores

    The OECD recently published a report about differences in test scores between boys and girls on the Programme for International Student Assessment (PISA), a test of 15-year-olds conducted every three years in multiple subjects. The main summary finding is that, in most nations, girls are significantly less likely than boys to score below the “proficient” threshold in all three subjects (math, reading and science). The report also includes survey items and other outcomes.

    First, it is interesting to me how discussions of these gender gaps differ from those about gaps between income or ethnicity groups. Specifically, when we talk about gender gaps, we interpret them properly – as gaps in measured performance between groups of students. Discussions of gaps between groups defined in terms of income or ethnicity, on the other hand, are almost always framed in terms of school performance.

    This is partially because schools in the U.S. are segregated by income and ethnicity, but not really by gender. It is also because some folks tend to overestimate the degree to which income- and ethnicity-based achievement gaps stem from systematic variation in schooling inputs, when in reality they are more a function of non-school factors (though, of course, schools matter, and differences in school quality reinforce the non-school-based impact). That said, returning to the findings of this report, I was slightly concerned with how, in some cases, they were reported in the media.

  • Teacher Quality - Still Plenty Of Room For Debate

    On March 3, the New York Times published one of its “Room for Debate” features, in which panelists were asked "How To Ensure and Improve Teacher Quality?" When I read through the various perspectives, my first reaction was: "Is that it?"

    It's not that I don't think there is value in many of the ideas presented -- I actually do. The problem is that there are important aspects of teacher quality that continue to be ignored in policy discussions, despite compelling evidence suggesting that they matter in the quality equation. In other words, I wasn’t disappointed with what was said but, rather, with what wasn’t. Let’s take a look at the panelists’ responses after making a couple of observations on the actual question and issue at hand.

    The first thing that jumped out at me is that teacher quality is presented in a somewhat decontextualized manner. Teachers don't work in a vacuum; quality is produced in specific settings. Placing the quality question in context can help to broaden the conversation to include: 1) the role of the organization in shaping educator learning and effectiveness; and 2) the intersection between teachers and schools, including the vital issue of employee-organization "fit."

    Second, the manner in which teacher quality is typically framed -- including in the Times question -- suggests that effectiveness is a (fixed) individual attribute (i.e., human capital) that teachers carry with them across contexts (i.e., it's portable). In reality, however, it is context-dependent and can be (and indeed is) developed among individuals -- as a result of their networks, their professional interactions, and their shared norms and trust (i.e., social capital). In sum, it's not just what teachers know, but who they know and where they work -- as well as the interaction of these three.

  • The Smoke And The Fire From Evaluations Of Teach For America

    A recent study by the always reliable research organization Mathematica takes a look at the characteristics and test-based effectiveness of Teach For America (TFA) teachers who were recruited as part of a $50 million federal “Investing in Innovation” grant, which is supporting a substantial scale-up of TFA’s presence in U.S. public schools.

    The results of this study pertain to a small group of recruits (and comparison non-TFA teachers) from the first two years of the program – i.e., a sample of 156 PK-5 teachers (66 TFA and 90 non-TFA) in 36 schools spread throughout 10 states. What distinguishes the analysis methodologically is that it exploits the random assignment of students to teachers in these schools, which ensures that any measured differences between TFA and comparison teachers are not due to unobserved differences in the students they are assigned to teach.
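    To see intuitively why random assignment matters, consider a small simulation -- entirely hypothetical, and in no way Mathematica's code or data -- in which a naive comparison of average scores is badly biased when students are sorted to teachers non-randomly, but recovers the (assumed) true teacher effect when assignment is random:

```python
# Hypothetical simulation of why random assignment matters.
# This is not Mathematica's code or data -- just an illustration,
# with an assumed "true" TFA effect of 0.05 standard deviations.
import random

random.seed(42)

def naive_gap(n_students=10000, sorted_assignment=False):
    """Return the naive TFA-minus-comparison difference in average scores."""
    tfa_scores, comp_scores = [], []
    for _ in range(n_students):
        prior = random.gauss(0, 1)  # student's incoming achievement
        if sorted_assignment:
            # Non-random: lower-achieving students steered to TFA classrooms
            is_tfa = prior < 0
        else:
            # Random assignment: a coin flip, unrelated to prior achievement
            is_tfa = random.random() < 0.5
        teacher_effect = 0.05 if is_tfa else 0.0  # assumed true effect
        score = prior + teacher_effect + random.gauss(0, 0.5)
        (tfa_scores if is_tfa else comp_scores).append(score)
    return sum(tfa_scores) / len(tfa_scores) - sum(comp_scores) / len(comp_scores)

print("gap when students are sorted: ", round(naive_gap(sorted_assignment=True), 2))
print("gap under random assignment:  ", round(naive_gap(sorted_assignment=False), 2))
# The first gap is dominated by who was assigned to whom; the second
# is close to the assumed 0.05 teacher effect.
```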

    The Mathematica researchers found, in short, that the estimated differences in the impact of TFA and comparison teachers on math and reading scores across all grades were modest in magnitude and not statistically discernible at any conventional level. There were, however, meaningful positive estimated differences in the earliest grades (PK-2), though they were statistically significant only in reading, while the reading coefficient for grades 3-5 was negative (and not significant). Let’s take a quick look at these and other findings from this report and what they might mean.

  • How Not To Improve New Teacher Evaluation Systems

    One of the more interesting recurring education stories over the past couple of years has been the release of results from several states’ and districts’ new teacher evaluation systems, including those from New York, Indiana, Minneapolis, Michigan and Florida. In most of these instances, the primary focus has been on the distribution of teachers across ratings categories. Specifically, there seems to be a pattern emerging, in which the vast majority of teachers receive one of the higher ratings, whereas very few receive the lowest ratings.

    This has prompted some advocates, and even some high-level officials, essentially to deem the new systems failures, since their results suggest that the vast majority of teachers are “effective” or better. As I have written before, this issue cuts both ways. On the one hand, the results coming out of some states and districts seem problematic, and these systems may need adjustment. On the other hand, there is a danger here: States may respond by making rash, ill-advised changes in order to achieve “differentiation for the sake of differentiation,” changes that may end up undermining the credibility and threatening the validity of the systems on which these states have spent so much time and money.

    Granted, whether and how to alter new evaluations are difficult decisions, and there is no tried-and-true playbook. That said, New York Governor Andrew Cuomo’s proposals provide a stunning example of how not to approach these changes. To see why, let’s look at some sound general principles for improving teacher evaluation systems based on the first rounds of results, and how they compare with the New York approach.*

  • The Status Fallacy: New York State Edition

    A recent New York Times story directly addresses New York Governor Andrew Cuomo’s suggestion, in his annual “State of the State” speech, that New York schools are in a state of crisis and "need dramatic reform." The article’s general conclusion is that the “data suggest otherwise.”

    There are a bunch of important points raised in the article, but most of the piece is really just discussing student rather than school performance. Simple statistics about how highly students score on tests – i.e., “status measures” – tell you virtually nothing about the effectiveness of the schools those students attend, since, among other reasons, they don’t account for the fact that many students enter the system at low levels. How much students in a school know in a given year is very different from how much they learned over the course of that year.
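    A small hypothetical, with numbers invented purely for illustration, makes the distinction concrete:

```python
# Hypothetical illustration of status versus growth (invented numbers).
# School A serves students who enter at high levels; School B's students
# enter far lower but gain more over the course of the year.

schools = {
    # name: (average score at start of year, average score at end of year)
    "School A": (270, 280),
    "School B": (220, 245),
}

for name, (fall, spring) in schools.items():
    status = spring          # what a "status measure" captures
    growth = spring - fall   # a crude proxy for what students learned
    print(f"{name}: status = {status}, growth = {growth}")

# School A "wins" on status (280 vs. 245), but School B's students
# gained more (25 points vs. 10) -- the status fallacy in miniature.
```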

    I (and many others) have written about this “status fallacy” dozens of times (see our resources page), not because I enjoy repeating myself (I don’t), but rather because I am continually amazed just how insidious it is, and how much of an impact it has on education policy and debate in the U.S. And it feels like every time I see signs that things might be changing for the better, there is an incident, such as Governor Cuomo’s speech, that makes me question how much progress there really has been at the highest levels.

  • Turning Conflict Into Trust Improves Schools And Student Learning

    Our guest author today is Greg Anrig, vice president of policy and programs at The Century Foundation and author of Beyond the Education Wars: Evidence That Collaboration Builds Effective Schools.

    In recent years, a number of studies (discussed below; also see here and here) have shown that effective public schools are built on strong collaborative relationships, including those between administrators and teachers. These findings have helped to accelerate a movement toward constructing such partnerships in public schools across the U.S. However, the growing research and expanding innovations aimed at nurturing collaboration have largely been neglected by both mainstream media and the policy community.

    Studies that explore the question of what makes successful schools work never find a silver bullet, but they do consistently pinpoint commonalities in how those schools operate. The University of Chicago's Consortium on Chicago School Research produced the most compelling research of this type, published in a book called Organizing Schools for Improvement. The consortium gathered demographic and test data, and conducted extensive surveys of stakeholders, in more than 400 Chicago elementary schools from 1990 to 2005. That treasure trove of information enabled the consortium to identify with a high degree of confidence the organizational characteristics and practices associated with schools that produced above-average improvement in student outcomes.

    The most crucial finding was that the most effective schools, based on test score improvement over time after controlling for demographic factors, had developed an unusually high degree of "relational trust" among their administrators, teachers, and parents.