On Focus Groups, Elections, and Predictions

Focus groups, a method in which small groups of subjects are questioned by researchers, are widely used in politics, marketing, and other areas. In education policy, focus groups, particularly those composed of teachers or administrators, are often used to design or shape policy. And, of course, they are particularly widespread during national election cycles; some television networks even broadcast focus groups as a way to gauge the public’s reaction to debates or other events.

There are good reasons for using focus groups. Analyzing surveys can provide information about self-reported behaviors and issue rankings at a given point in time, as well as correlations between those responses and demographic and social variables of interest. Focus groups, on the other hand, can help map out the issues important to voters (which can inform survey question design), as well as investigate the reactions that certain presentations (verbal or symbolic) evoke (which can, for example, help frame messages in political or informational campaigns).

Both polling/surveys and focus groups provide insights that the other method alone could not. Neither of them, however, can answer questions about why certain patterns occur or how likely they are to occur in the future. That said, having heard some of the commentary about focus groups, and particularly having seen them being broadcast live and discussed on cable news stations, I feel strongly compelled to comment, as I do whenever data are used improperly or methodologies are misinterpreted.

A Myth Grows In The Garden State

New Jersey Governor Chris Christie recently announced a new “fairness funding” plan to provide every school district in his state roughly the same amount of per-pupil state funding. This would represent a huge change from the current system, in which more state funds are allocated to districts that serve a larger proportion of economically disadvantaged students. Thus, the Christie proposal would result in an increase in state funding for middle class and affluent districts, and a substantial decrease for poorer districts. According to the Governor, the change would reduce the property tax burden in many districts by replacing some of their local revenue with state money.

This is a very bad idea. For one thing, NJ state funding of education is already about 7-8 percent lower than it was in 2008 (Leachman et al. 2015). And this plan would, most likely, cut revenue in the state’s poorest districts by dramatic amounts, absent an implausible increase in property tax rates. It is perfectly reasonable to have a discussion about how education money is spent and allocated, and/or about tax structure. But it is difficult to grasp how serious people could actually conceive of this particular idea. And it’s actually a perfect example of how dangerous it is when huge complicated bodies of empirical evidence are boiled down to talking points (and this happens on all “sides” of the education debate).

Put simply, Governor Christie believes that “money doesn’t matter” in education. He and his advisors have been told that how much you spend on schools has little real impact on results. This is also a talking point that, in many respects, coincides with an ideological framework of skepticism toward government and government spending, which Christie shares.

New Research Report: Are U.S. Schools Inefficient?

At one point or another we’ve all heard some version of the following talking points: 1) “Spending on U.S. education has doubled or tripled over the past few decades, but performance has remained basically flat”; or 2) “The U.S. spends more on education than virtually any other nation and yet still gets worse results.” If you pay attention, you will hear one or both of these statements frequently, coming from everyone from corporate CEOs to presidential candidates.

The purpose of both statements is to argue that U.S. education is inefficient (that is, it gets very little bang for the buck), and that spending more money will not help.

Now, granted, these sorts of pseudo-empirical talking points almost always omit important nuances; yet, in some cases, they can still convey useful information. But, putting aside the actual relative efficiency of U.S. schools, these particular statements about U.S. education spending and performance are so rife with oversimplification that they fail to provide much, if any, useful insight into U.S. educational efficiency or the policies that affect it. Our new report, written by Rutgers University Professor Bruce D. Baker and Rutgers Ph.D. student Mark Weber, explains why and how this is the case. Baker and Weber’s approach is first to discuss why the typical presentations of spending and outcome data, particularly those comparing nations, are wholly unsuitable for evaluating U.S. educational efficiency vis-à-vis that of other nations. They then go on to present a more refined analysis of the data, adjusting for student characteristics, inputs such as class size, and other factors. Their conclusions will most likely be unsatisfying for all “sides” of the education debate.
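To make the general logic of such an adjustment concrete, here is a minimal sketch using simulated, entirely hypothetical data (nothing from the report itself): places that serve needier students or face higher labor costs tend to spend more, so raw spending-outcome comparisons conflate efficiency with student characteristics and input prices, and conditioning on those factors can change the picture substantially.

```python
# Minimal sketch of an "adjusted" spending/outcome comparison, with simulated
# data and hypothetical variable names; this is NOT Baker and Weber's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical districts (or country-level observations)

poverty = rng.uniform(0, 0.6, n)          # share of low-income students
labor_cost = rng.normal(1.0, 0.1, n)      # regional wage/price index
# Needier, higher-cost places spend more...
spending = 8000 + 6000 * poverty + 3000 * (labor_cost - 1) + rng.normal(0, 500, n)
# ...and poverty depresses measured outcomes independently of spending.
scores = 520 - 80 * poverty + 0.004 * spending + rng.normal(0, 3, n)

df = pd.DataFrame({"scores": scores, "spending": spending,
                   "poverty": poverty, "labor_cost": labor_cost})

# Naive comparison: the spending coefficient is badly biased (here, negative),
# because spending is tangled up with student need and input costs.
print(smf.ols("scores ~ spending", df).fit().params["spending"])

# Adjusted comparison: conditioning on need and costs yields an estimate close
# to the true (simulated) spending-outcome relationship of roughly 0.004.
print(smf.ols("scores ~ spending + poverty + labor_cost", df).fit().params["spending"])
```

The report’s actual models are, of course, far more careful than this toy example, but the basic intuition, that unadjusted comparisons tell us little about efficiency, is the same.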

Are U.S. Schools Resegregating?

Last week, the U.S. Government Accountability Office (GAO) issued a report, part of which presented an analysis of access to educational opportunities among the nation’s increasingly low income and minority public school student population. The results, most generally, suggest that the proportion of the nation's schools with high percentages of lower income (i.e., subsidized lunch eligible) and Black and Hispanic students increased between 2000 and 2013.

The GAO also reports that these schools, compared to those serving fewer lower income and minority students, tend to offer fewer math, science, and college prep courses, and also to suspend, expel, and hold back ninth graders at higher rates.

These are, of course, important and useful findings. Yet the vast majority of the news coverage of the report focused on the interpretation of these results as showing that U.S. schools are “resegregating.” That is, the news stories portrayed the finding that a larger proportion of schools serve more than 75 percent Black and Hispanic students as evidence that schools became increasingly segregated between the 2000-01 and 2013-14 school years. This is an incomplete, somewhat misleading interpretation of the GAO findings. In order to understand why, it is helpful to discuss briefly how segregation is measured.
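One reason the threshold-based interpretation can mislead is that standard segregation measures are defined in terms of how evenly groups are distributed across schools, not in terms of how many schools exceed a fixed percentage cutoff. As a rough illustration, here is a sketch of one common measure, the dissimilarity index, using made-up figures rather than the GAO’s data:

```python
# Illustrative sketch with made-up numbers (not the GAO's data). The
# dissimilarity index measures how unevenly two groups are spread across
# schools: D = 0.5 * sum_i |b_i/B - w_i/W|, where b_i and w_i are the group
# counts in school i and B and W are the district-wide totals.

def dissimilarity_index(group_a, group_b):
    """0 = groups identically distributed across schools; 1 = complete separation."""
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / total_a - b / total_b) for a, b in zip(group_a, group_b))

# Hypothetical district, four schools, two "years". The districtwide minority
# share rises from year 1 to year 2, so more schools cross a 75 percent
# threshold, but the groups are spread about as evenly as before.
year_1_minority, year_1_white = [50, 100, 140, 170], [150, 100, 60, 30]
year_2_minority, year_2_white = [80, 130, 160, 185], [120, 70, 40, 15]

for label, minority, white in [("year 1", year_1_minority, year_1_white),
                               ("year 2", year_2_minority, year_2_white)]:
    shares = [m / (m + w) for m, w in zip(minority, white)]
    print(label,
          "| schools >= 75% minority:", sum(s >= 0.75 for s in shares),
          "| dissimilarity:", round(dissimilarity_index(minority, white), 2))
```

In this hypothetical district, the number of schools above 75 percent minority rises between the two years simply because the overall student population has become more heavily minority; the two groups are spread across schools about as evenly as before, and the index barely moves.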

New Research Brief: Teacher Segregation In Los Angeles And New York City

The current attention being given to the state of teacher diversity, including ASI’s recent report on the subject, is based on the idea that teacher diversity is a resource that benefits everyone, and that policymakers and administrators should try to increase this resource. We agree.

There is already a fair amount of research indicating the significance and potential implications of teacher diversity (e.g., Dee 2004; Gershenson et al. 2015; Mueller et al. 1999). It’s important to bear in mind, however, that the benefits of diversity, like those of any resource, depend not just on how much is available, but also on how it is distributed across schools and districts.

Unfortunately, research on the distribution of teacher diversity, or teacher segregation, has thus far been virtually non-existent. A new ASI research brief begins to help fill this void. The brief, written with my colleagues Matt Di Carlo and Esther Quintero, presents a descriptive analysis of teacher segregation within the two largest school districts in the nation – Los Angeles and New York City. We find that teachers in these two districts, while quite diverse overall relative to the U.S. teacher workforce as a whole, are rather segregated across schools by race and ethnicity, according to multiple measures of segregation. In other words, teachers tend to work in schools with disproportionate numbers of colleagues of their own race and/or ethnicity.
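For readers unfamiliar with these measures, one of the simplest is the isolation (or own-group exposure) index, which captures the idea in that last sentence directly: the share of same-group colleagues experienced by the average member of a group, compared with that group’s share of the workforce overall. Here is a minimal sketch, with made-up figures rather than the brief’s actual data:

```python
# Illustrative sketch with made-up data (not the brief's actual figures). The
# isolation index asks, for the average teacher in a group, what share of his
# or her school colleagues belong to the same group. If that share exceeds the
# group's share of the district workforce, the group is clustered in schools.

def isolation_index(group_counts, total_counts):
    """Average same-group colleague share experienced by members of the group."""
    group_total = sum(group_counts)
    return sum((g / group_total) * (g / t)
               for g, t in zip(group_counts, total_counts))

# Hypothetical district: Black teachers are 25 percent of the workforce overall
# but concentrated in two of the five schools.
black_teachers = [30, 25, 5, 5, 5]
all_teachers = [50, 50, 60, 60, 60]

print("district share:", sum(black_teachers) / sum(all_teachers))   # 0.25
print("isolation index:", round(isolation_index(black_teachers, all_teachers), 2))
```

When the index exceeds the group’s districtwide share, as in this toy example, members of the group are clustered in particular schools rather than spread evenly across them.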

Charter Schools And Longer Term Student Outcomes

An important article in the Journal of Policy Analysis and Management presents results from one of the first published analyses to examine the long-term impact of attending charter schools.

The authors, Kevin Booker, Tim Sass, Brian Gill, and Ron Zimmer, replicate part of their earlier analysis of charter schools in Florida and Chicago (Booker et al. 2011), which found that students attending charter high schools had a substantially higher chance of graduating and enrolling in college (relative to students who attended charter middle schools but regular public high schools). For this more recent paper, they extend the previous analysis by adding two very important longer-term outcomes: college persistence and labor market earnings.

The limitations of test scores, the current coin of the realm, are well known; similarly, outcomes such as graduation may fail to capture meaningful skills. This paper is among the first to extend the charter school effects literature, which has long relied almost exclusively on test scores, into the longer-term realms of postsecondary education and even adulthood, representing a huge step forward for this body of evidence. It is a development that is likely to become more and more common as longitudinal data hopefully become available from other locations. And this particular paper, in addition to its obvious importance for the charter school literature, also carries some implications regarding the use of test-based outcomes in education policy evaluation.

Evaluating The Results Of New Teacher Evaluation Systems

A new working paper by researchers Matthew Kraft and Allison Gilmour presents a useful summary of teacher evaluation results in 19 states, all of which designed and implemented new evaluation systems at some point over the past five years. As with previous evaluation results, the headline result of this paper is that only a small proportion of teachers (2-5 percent) were given the low, “below proficiency” ratings under the new systems, and the vast majority of teachers continue to be rated as satisfactory or better.

Kraft and Gilmour present their results in the context of the “Widget Effect,” a well-known 2009 report by the New Teacher Project showing that the overwhelming majority of teachers in the 12 districts for which they had data received “satisfactory” ratings. The more recent results from Kraft and Gilmour indicate that this hasn’t changed much due to the adoption of new evaluation systems, or, at least, not enough to satisfy some policymakers and commentators who read the paper.

The paper also presents a set of findings from surveys of and interviews with observers (e.g., principals). These are, in many respects, the more interesting and important results from a research and policy perspective, but let’s nevertheless focus for a moment on the findings regarding the distribution of teachers across rating categories, as they caused a bit of a stir. I have several comments to make about them, but will concentrate on three in particular (all of which, by the way, pertain not to the paper’s discussion, which is cautious and thorough, but rather to some of the reaction to it in our education policy discourse).

Improving Teaching Through Collaboration

Our guest author today is Matthew Ronfeldt, Assistant Professor at the University of Michigan School of Education. Ronfeldt seeks to understand how to improve teaching quality, particularly in schools and districts that serve historically marginalized student populations. His research sits at the intersection of educational practice and policy and focuses on teacher preparation, teacher retention, teacher induction, and the assessment of teachers and preparation programs.

Learning to teach is an ongoing process. To be successful, then, schools must promote not only student learning but also teacher learning across teachers’ careers. Embracing this notion, policymakers have called for the creation of school-based professional learning communities, including organizational structures that promote regular opportunities for teachers to collaborate with teams of colleagues. As the use of instructional teams becomes increasingly common, it is important to examine whether and how collaboration actually improves teaching and learning. The growing evidence, summarized below, suggests that it does.

For many decades, educational scholars have conducted qualitative case studies documenting the nature of collaboration among particular groups of teachers working together in departmental teams, reading groups, and other types of instructional teams. This body of work has demonstrated that the kinds and content of collaboration vary substantially across contexts, has shed light on the norms and structures that promote more promising collaboration, and has set the stage for today’s policy focus on “professional learning communities.” However, these studies rarely connected collaboration to teachers’ classroom performance. Thus, they provided little information on whether teachers actually got better at teaching as a result of their participation in collaboration.

Student Sorting And Teacher Classroom Observations

Although value added and other growth models tend to be the focus of debates surrounding new teacher evaluation systems, the widely known but frequently unacknowledged reality is that most teachers don’t teach in the tested grades and subjects, and won’t even receive these test-based scores. The quality and impact of the new systems therefore will depend heavily upon the quality and impact of other measures, primarily classroom observations.

These systems have been in use for decades, and yet, until recently, relatively little was known about their properties, such as their association with student and teacher characteristics, and there are, as yet, only a handful of studies of their impact on teachers’ performance (e.g., Taylor and Tyler 2012). The Measures of Effective Teaching (MET) Project, conducted a few years ago, was a huge step forward in this area, though at the time it was perhaps underappreciated that MET’s contribution lay not just in the (very important) reports it produced, but also in its having collected an extensive dataset for researchers to use going forward. A new paper, just published in Educational Evaluation and Policy Analysis, is among the many analyses that have used, and will continue to use, MET data to address important questions surrounding teacher evaluation.

The authors, Rachel Garrett and Matthew Steinberg, look at classroom observation scores, specifically those from Charlotte Danielson’s widely employed Framework for Teaching (FFT) protocol. Their results are yet another example of how observation scores are subject to many of the same widely cited (statistical) criticisms as value added scores, most notably sensitivity to which students are assigned to teachers.
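To see the basic mechanics of this criticism, consider a hedged simulation (not the authors’ data or model): if the practices an observer rates are somewhat easier to exhibit in front of higher-achieving classes, and classes are not randomly assigned, then raw observation scores will partly reflect classroom composition rather than teacher skill.

```python
# Hedged simulation of observation-score sensitivity to student assignment;
# purely illustrative, not Garrett and Steinberg's data or model.
import numpy as np

rng = np.random.default_rng(42)
n_teachers = 1000

true_skill = rng.normal(0, 1, n_teachers)
# Nonrandom assignment: more skilled teachers tend to get higher-achieving classes.
class_prior_achievement = 0.5 * true_skill + rng.normal(0, 1, n_teachers)
# Observed rating: mostly skill, but partly class composition, plus rater noise.
observed_rating = true_skill + 0.4 * class_prior_achievement + rng.normal(0, 0.5, n_teachers)

# The part of the rating NOT explained by skill still tracks who is in the room,
# so two equally skilled teachers with different classes get different scores.
composition_effect = observed_rating - true_skill
print(np.corrcoef(class_prior_achievement, composition_effect)[0, 1])
print(np.corrcoef(class_prior_achievement, observed_rating)[0, 1])
```

This is why, as with test-based measures, comparisons of raw observation scores across teachers who face very different classrooms deserve caution.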

The IMPACT Of Teacher Turnover In DCPS

Teacher turnover has long been a flashpoint in education policy, yet these debates are rife with complications. For example, it is often implied that turnover is a “bad thing,” even though some turnover, as when low-performing teachers leave, can be beneficial, whereas some retention, as when low-performing teachers stay, can be harmful. The impact of turnover also depends heavily on other factors, such as the pool of candidates available to serve as replacements, and how disruptive turnover is to the teachers who are retained.

The recent widespread reform of teacher evaluation systems has made the turnover issue, never far below the surface, even more salient in recent years. Critics contend that the new evaluations, particularly their use of test-based productivity measures, will cause teachers to flee the profession. Supporters, on the other hand, are in a sense hoping for this outcome, as they anticipate that, under the new systems, voluntary and involuntary separations will serve to improve the quality of the teacher workforce.

A new working paper takes a close look at the impact of teacher turnover under what is perhaps the most controversial teacher evaluation system in the nation – that used in the District of Columbia Public Schools (DCPS). It's a very strong analysis that speaks directly to policy in a manner that does not fit well into the tribal structure of education debates today.