Teacher Appreciation: The Center for Research on Expanding Educational Opportunity (CREEO) Connects Equity and Justice to Education Policy and Practice

Our guest author is Melika Jalili, program manager at the Center for Research on Expanding Educational Opportunity (CREEO), UC Berkeley.

Whether it is a focus on the teacher shortage, a discussion of our public schools, or Teacher Appreciation Week, it seems everyone agrees that teachers deserve more respect and recognition. Making that recognition meaningful, by supporting educators to be the teachers they have always dreamed they could be, should be a priority for all of us.

Enter Dr. Travis J. Bristol, Associate Professor at the UC Berkeley School of Education, who announced the exciting launch of the Center for Research on Expanding Educational Opportunity (CREEO) at UC Berkeley last month.

Reading Science: Staying the Course Amidst the Noise

Critical perspectives on the Science of Reading (SoR) have always been present and are justifiably part of the ongoing discourse. At the Shanker Institute, we have been constructively critical, maintaining that reading reforms are not a silver bullet and that aspects of SoR, such as the role of knowledge-building and of infrastructure in reading improvement, need to be better understood and integrated into our discourse, policies, and practices. These contributions can strengthen the movement, bringing us closer to better teaching and learning. However, I worry that other forms of criticism may ultimately divert us from these goals and lead us astray.

At the annual conference of the American Educational Research Association (AERA), the largest research conference in the field of education, I witnessed the spread of serious misinformation about reading research and related reforms. In this post, I aim to address four particularly troubling ideas I encountered. For each, I will not only provide factual corrections but also contextual clarifications, highlighting any bits of truth or valid criticisms that may exist within these misconceptions.

The Offline Implications Of The Research About Online Charter Schools

It’s rare to find an educational intervention with as unambiguous a research track record as online charter schools. Now, to be clear, it’s not a large body of research by any stretch, its conclusions may change in time, and the online charter sub-sector remains relatively small and concentrated in a few states. For now, though, the results seem incredibly bad (Zimmer et al. 2009; Woodworth et al. 2015). In virtually every state where these schools have been studied, across virtually all student subgroups, and in both reading and math, the estimated impact of online charter schools on student testing performance is negative and large in magnitude.

Predictably, and not without justification, those who oppose charter schools in general are particularly vehement when it comes to online charter schools – they should, according to many of these folks, be closed down, even outlawed. Charter school supporters, on the other hand, tend to acknowledge the negative results (to their credit) but make less drastic suggestions, such as greater oversight, including selective closure, and stricter authorizing practices.

Regardless of your opinion on what to do about online charter schools’ poor (test-based) results, they are truly an interesting phenomenon for a few reasons.

We Can't Graph Our Way Out Of The Research On Education Spending

The graph below was recently posted by U.S. Education Department (USED) Secretary Betsy DeVos, as part of her response to the newly released scores on the 2017 National Assessment of Educational Progress (NAEP), administered every two years and often called the “nation’s report card.” It seems to show a massive increase in per-pupil education spending, along with a concurrent flat trend in scores on the fourth grade reading version of NAEP. The intended message is that spending more money won’t improve testing outcomes. Or, in the more common phrasing these days, "we can't spend our way out of this problem."

Some of us call it “The Graph.” Versions of it have been used before. And it’s the kind of graph that doesn’t need to be discredited, because it discredits itself. So, why am I bothering to write about it? The short answer is that I might be unspeakably naïve. But we’ll get back to that in a minute.

First, let’s very quickly run through the graph. As a presentation of data, it is horrible practice. The double y-axes, with spending on the left and NAEP scores on the right, are a textbook example of what you might call motivated scaling (and that's being polite). The NAEP scores plotted range from a minimum of 213 in 2000 to a maximum of 222 in 2017, but the score axis inexplicably extends all the way up to 275. In contrast, the spending scale extends from just below the minimum observation ($6,000) to just above the maximum ($12,000). In other words, the graph is deliberately scaled to produce the desired visual effect (increasing spending, flat scores). One could very easily rescale the graph to produce the opposite impression.
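To make the scaling point concrete, here is a minimal matplotlib sketch using only the endpoints cited above (the intermediate years and values are hypothetical). It plots the same two series twice: once with axis limits in the spirit of "The Graph," and once with each axis fit to its own observed range. The first panel makes scores look flat while spending soars; the second makes both series rise.

```python
import matplotlib.pyplot as plt

# Endpoints taken from the post; intermediate years and values are hypothetical.
years = [2000, 2005, 2010, 2017]
naep = [213, 217, 220, 222]             # 4th grade NAEP reading scale scores
spending = [6000, 8000, 10000, 12000]   # per-pupil spending in dollars

fig, (ax_a, ax_b) = plt.subplots(1, 2, figsize=(10, 4))

# Panel A: limits in the spirit of "The Graph." The score axis runs far above
# anything observed, so scores look flat while spending appears to soar.
ax_a.plot(years, spending, color="tab:blue")
ax_a.set_ylim(5500, 12500)
twin_a = ax_a.twinx()
twin_a.plot(years, naep, color="tab:red")
twin_a.set_ylim(210, 275)
ax_a.set_title("Score axis stretched to 275")

# Panel B: each axis fit to its own observed range; both series now rise.
ax_b.plot(years, spending, color="tab:blue")
ax_b.set_ylim(6000, 12000)
twin_b = ax_b.twinx()
twin_b.plot(years, naep, color="tab:red")
twin_b.set_ylim(213, 222)
ax_b.set_title("Each axis fit to its own data")

plt.tight_layout()
plt.show()
```

Neither panel is "right"; that is the point. A dual-axis chart has no natural common scale, so the visual story is a choice made by whoever draws it.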

What Are "Segregated Schools?"

The conventional wisdom in education circles is that U.S. schools are “resegregating” (see here and here for examples). The basis for these claims is usually some form of the following empirical statement: An increasing proportion of schools serve predominantly minority student populations (e.g., GAO 2016). In other words, there are more “segregated schools.”

Underlying the characterization of this finding as “resegregation” is one of the longstanding methodological debates in education and other fields today: How to measure segregation (Massey and Denton 1988). And, as is often the case with these debates, it’s not just about methodology, but also about larger conceptual issues. We might very casually address these important issues by posing a framing question: Is a school that serves 90-95 percent minority students necessarily a “segregated school?”

Most people would answer yes. And, putting aside the semantic distinction that it is students rather than schools that are segregated, they would be correct. But there is a lot of nuance here that is actually important.
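One way to see that nuance concretely is with the dissimilarity index, a common evenness measure in the Massey and Denton framework. The sketch below uses entirely hypothetical enrollment counts: in a district that is itself overwhelmingly minority, a 90-95 percent minority school simply mirrors the district, so measured segregation can be near zero even though every school is "predominantly minority."

```python
# Dissimilarity index: half the sum over schools of the absolute gap between
# the share of the district's minority students in a school and the share of
# its non-minority students there. 0 = perfectly even; 1 = complete separation.
def dissimilarity(schools):
    """schools: list of (minority_count, non_minority_count) tuples."""
    total_min = sum(m for m, _ in schools)
    total_non = sum(n for _, n in schools)
    return 0.5 * sum(abs(m / total_min - n / total_non) for m, n in schools)

# Hypothetical district that is 92 percent minority overall. Every school
# mirrors the district, so D = 0 even though every school is "high minority."
mirrored = [(460, 40), (460, 40), (460, 40)]
print(dissimilarity(mirrored))    # 0.0

# Same district totals, but students sorted across schools: D rises sharply.
sorted_out = [(500, 0), (500, 0), (380, 120)]
print(dissimilarity(sorted_out))  # roughly 0.72
```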

New Research Report: Are U.S. Schools Inefficient?

At one point or another we’ve all heard some version of the following talking points: 1) “Spending on U.S. education has doubled or tripled over the past few decades, but performance has remained basically flat”; or 2) “The U.S. spends more on education than virtually any other nation and yet still gets worse results.” If you pay attention, you will hear one or both of these statements frequently, coming from everyone from corporate CEOs to presidential candidates.

The purpose of both of these statements is to argue that U.S. education is inefficient (that is, it gets very little bang for the buck) and that spending more money will not help.

Now, granted, these sorts of pseudo-empirical talking points almost always omit important nuances, yet in some cases they can still provide useful information. But, putting aside the actual relative efficiency of U.S. schools, these particular statements about U.S. education spending and performance are so rife with oversimplification that they fail to provide much, if any, useful insight into U.S. educational efficiency or the policies that affect it. Our new report, written by Rutgers University Professor Bruce D. Baker and Rutgers Ph.D. student Mark Weber, explains why and how this is the case. Baker and Weber’s approach is first to discuss why the typical presentations of spending and outcome data, particularly those comparing nations, are wholly unsuitable for the purpose of evaluating U.S. educational efficiency vis-à-vis that of other nations. They then go on to present a more refined analysis of the data by adjusting for student characteristics, inputs such as class size, and other factors. Their conclusions will most likely be unsatisfying for all “sides” of the education debate.
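For readers who want a feel for what "adjusting" means in this context, the sketch below illustrates the general idea with entirely made-up data: compare the spending coefficient from a naive regression of outcomes on spending alone to one that controls for student poverty and class size. This is only an illustration of the concept, not Baker and Weber's actual model or data.

```python
import numpy as np

# Entirely hypothetical data, meant only to illustrate the concept of an
# "adjusted" spending/outcome comparison; it is not the report's model.
rng = np.random.default_rng(0)
n = 500
poverty = rng.uniform(0, 1, n)                               # share of low-income students
spending = 8_000 + 4_000 * poverty + rng.normal(0, 500, n)   # poorer districts spend more here
class_size = rng.normal(24, 4, n)

# Simulated outcome: spending genuinely helps; poverty and large classes hurt.
outcome = 0.002 * spending - 20 * poverty - 0.5 * class_size + rng.normal(0, 2, n)

# Naive comparison: outcome regressed on spending alone.
X_naive = np.column_stack([np.ones(n), spending])
b_naive, *_ = np.linalg.lstsq(X_naive, outcome, rcond=None)

# Adjusted comparison: control for poverty and class size before interpreting
# the spending coefficient.
X_adj = np.column_stack([np.ones(n), spending, poverty, class_size])
b_adj, *_ = np.linalg.lstsq(X_adj, outcome, rcond=None)

print("naive spending coefficient:   ", b_naive[1])   # biased by the omitted poverty variable
print("adjusted spending coefficient:", b_adj[1])     # close to the true 0.002
```

In this made-up world the naive comparison badly misstates the effect of spending because higher-spending districts also serve poorer students; controlling for that recovers the relationship that was actually built into the data.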

Are U.S. Schools Resegregating?

Last week, the U.S. Government Accountability Office (GAO) issued a report, part of which presented an analysis of access to educational opportunities among the nation’s increasingly low-income and minority public school student population. The results, most generally, suggest that the proportion of the nation's schools with high percentages of lower-income (i.e., subsidized lunch eligible) and Black and Hispanic students increased between 2000 and 2013.

The GAO also reports that these schools, compared to those serving fewer lower-income and minority students, tend to offer fewer math, science, and college prep courses, and also to suspend, expel, and hold back ninth graders at higher rates.

These are, of course, important and useful findings. Yet the vast majority of the news coverage of the report focused on the interpretation of these results as showing that U.S. schools are “resegregating.” That is, the news stories portrayed the finding that a larger proportion of schools serve more than 75 percent Black and Hispanic students as evidence that schools became increasingly segregated between the 2000-01 and 2013-14 school years. This is an incomplete, somewhat misleading interpretation of the GAO findings. In order to understand why, it is helpful to discuss briefly how segregation is measured.
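A small sketch (with hypothetical enrollment counts) makes the measurement point concrete: if the overall student population becomes more heavily minority, the share of schools above a 75 percent minority threshold can jump even when students are distributed across schools exactly as evenly as before. Threshold counts alone, in other words, cannot establish "resegregation."

```python
# Two hypothetical years for a ten-school district. Students are spread across
# schools perfectly evenly in both years; only the district's overall
# composition changes (70 percent minority -> 80 percent minority).

def share_above_threshold(schools, threshold=0.75):
    """Fraction of schools whose minority share exceeds the threshold."""
    return sum(1 for m, n in schools if m / (m + n) > threshold) / len(schools)

def dissimilarity(schools):
    """Evenness measure (dissimilarity index): 0 = even, 1 = fully separated."""
    total_min = sum(m for m, _ in schools)
    total_non = sum(n for _, n in schools)
    return 0.5 * sum(abs(m / total_min - n / total_non) for m, n in schools)

schools_2000 = [(350, 150)] * 10   # every school is 70 percent minority
schools_2013 = [(400, 100)] * 10   # every school is 80 percent minority

for year, schools in [("2000-01", schools_2000), ("2013-14", schools_2013)]:
    print(year,
          "share of >75% minority schools:", share_above_threshold(schools),
          "| dissimilarity:", dissimilarity(schools))
# The threshold count jumps from 0.0 to 1.0, while the evenness measure stays 0.0.
```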

New Report: Does Money Matter in Education? Second Edition

In 2012, we released a report entitled “Does Money Matter in Education?,” written by Rutgers Professor Bruce Baker. The report presented a thorough, balanced review of the rather sizable body of research on the relationship between K-12 education spending and outcomes. The motivation for this report was to address the highly contentious yet often painfully oversimplified tribal arguments regarding the impact of education spending and finance reforms, as well as provide an evidence-based guide for policymakers during a time of severe budgetary hardship. It remains our most viewed resource ever, by far.

Now, almost four years later, education spending in most states and localities is still in trouble. For example, state funding of education is lower in 2016 than it was in 2008 (prior to the recession) in 31 states (Leachman et al. 2016). Moreover, during this time, there has been a continuing effort to convince the public that how much we spend on schools doesn’t matter for outcomes, and that these spending cuts will do no harm.

As is almost always the case, the evidence on spending in education is far more nuanced and complex than our debates about it (on both “sides” of the issue). And this evidence has been building for decades, with significant advances since the release of our first “Does Money Matter?” report. For this reason, we have today released the second edition, updated by the author. The report is available here.

Teacher To Teacher: Classroom Reform Starts With “The Talk”

Our guest author today is Melissa Halpern, a high school English teacher and Ed.M. candidate at the Harvard Graduate School of Education. For the past nine years, she's been dedicated to making schooling a happier, more engaging experience for a diverse range of students in Palm Beach County, FL.

We teachers often complain, justifiably, that policy makers and even school administrators are too disconnected from the classroom to understand how students learn best. Research is one thing, we claim, but experience is another. As the only adults in the school setting who have ongoing, sustained experience with students, we’re in the best position to understand them—but do we really? Do we understand our students’ educational priorities, turn-ons, anxieties, and bones-to-pick in our classrooms and in the school at large?

The truth is that no amount of research or experience makes us experts on the experiences and perspectives of the unique individuals who inhabit our classrooms. If we want to know what’s going on in their minds, we have to ask. We have to have “the school talk.”

What have students learned that is important to them, and what do they wish they could learn? What makes them feel happy and empowered at school? What makes them feel bored, stressed, or dehumanized?

The Debate And Evidence On The Impact Of NCLB

There is currently a flurry of debate focused on the question of whether “NCLB worked.” This question, which surfaces regularly in the education field, is particularly salient in recent weeks, as Congress holds hearings on reauthorizing the law.

Any time there is a spell of “did NCLB work?” activity, one can hear and read numerous attempts to use simple NAEP changes in order to assess its impact. Individuals and organizations, including both supporters and detractors of the law, attempt to make their cases by presenting trends in scores, parsing subgroup estimates, and so on. These efforts, though typically well-intentioned, do not, of course, tell us much of anything about the law’s impact. One can use simple, unadjusted NAEP changes to prove or disprove any policy argument. And the reason is that they are not valid evidence of an intervention's effects. There’s more to policy analysis than subtraction.
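A toy example (with hypothetical NAEP-style scores) shows why subtraction is not an impact estimate: the same unadjusted trend can be made to support either side depending on which years you subtract, and none of the differences says anything about what scores would have done without the law.

```python
# Hypothetical NAEP-style scale scores; the point is the arithmetic, not the
# numbers. NCLB took effect in 2002.
scores = {1996: 215, 2000: 217, 2003: 218, 2007: 221, 2011: 221, 2015: 223}

gain_after = scores[2007] - scores[2003]        # +3: "NCLB worked"
gain_after_long = scores[2011] - scores[2003]   # +3 over eight years: "progress stalled"
gain_before = scores[2003] - scores[1996]       # +3 before NCLB: scores were already rising

print(gain_after, gain_after_long, gain_before)
# All three are correct subtractions, and each can be marshaled for a different
# argument. None of them is a counterfactual, i.e., an estimate of what scores
# would have been without the law, which is what an impact claim requires.
```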

But it’s not just the inappropriate use of evidence that makes these “did NCLB work?” debates frustrating and, often, unproductive. It is also the fact that NCLB really cannot be judged in simple, binary terms. It is a complex, national policy with considerable inter-state variation in design and implementation, and with various types of effects, intended and unintended. This is not a situation that lends itself to clear-cut yes/no answers to the “did it work?” question.