The Story Behind The Story: Social Capital And The Vista Unified School District

Our guest author today is Devin Vodicka, superintendent of Vista Unified, a California school district serving over 22,000 students that was recently accepted into the League of Innovative Schools. Dr. Vodicka participates in numerous state and national leadership groups, including the Superintendents Technical Working Group of the U.S. Education Department.

Transforming a school district is challenging and complex work, often requiring shifts in paradigms, an appreciation of historical perspective, and a commitment to maintaining or improving performance. Here, I’d like to share how we approached change at Vista Unified School District (VUSD) and to describe the significant transformation we’ve been undergoing: one driven by data, focused on relationships, and grounded in deep partnerships. Although Vista has been hard at work over many years, this particular chapter starts in July of 2012, when I was hired.

When I became superintendent, the district was facing numerous challenges: Declining enrollment, financial difficulties, strained labor relations, significant turnover in the management ranks, and unresolved lawsuits were all areas in need of attention. The school board charged me and my team with transforming the district, which serves large numbers of linguistically, culturally, and economically diverse students. While there is still significant room for improvement, much has changed in the past three years, generally trending in a positive direction. Below is the story of how we did it.

Teacher To Teacher: Classroom Reform Starts With “The Talk”

Our guest author today is Melissa Halpern, a high school English teacher and Ed.M. candidate at the Harvard Graduate School of Education. For the past nine years, she's been dedicated to making schooling a happier, more engaging experience for a diverse range of students in Palm Beach County, FL.

We teachers often complain, justifiably, that policy makers and even school administrators are too disconnected from the classroom to understand how students learn best. Research is one thing, we claim, but experience is another. As the only adults in the school setting who have ongoing, sustained experience with students, we’re in the best position to understand them—but do we really? Do we understand our students’ educational priorities, turn-ons, anxieties, and bones to pick in our classrooms and in the school at large?

The truth is that no amount of research or experience makes us experts on the experiences and perspectives of the unique individuals who inhabit our classrooms. If we want to know what’s going on in their minds, we have to ask. We have to have “the school talk.”

What have students learned that is important to them, and what do they wish they could learn? What makes them feel happy and empowered at school? What makes them feel bored, stressed, or dehumanized?

In Defense Of The Public Square

A robust and vibrant public square is an essential foundation of democracy. It is the place where the important public issues of the day are subject to free and open debate, and where our ideas of what is in the public interest take shape. It is the ground upon which communities and associations are organized to advocate for policies that promote that public interest. It is the site for the provision of essential public goods, from education and healthcare to safety and mass transportation. It is the terrain upon which the centralizing and homogenizing power of both the state and the market are checked and balanced. It is the economic arena with the means to control the market’s tendencies toward polarizing economic inequality and cycles of boom and bust. It is the site of economic opportunity for historically excluded groups such as African-Americans and Latinos.

And yet in America today, the public square is under extraordinary attack. A flood of unregulated, unaccountable money in our politics and media threatens to drown public debate and ravage our civic life, overwhelming authentic conceptions of the public interest. Decades of growing economic inequality menace the very public institutions with the capacity to promote greater economic and social equality. Unprecedented efforts to privatize essential public goods and public services are underway. Teachers, nurses and other public servants who deliver those public goods are the objects of vilification from the political right, and their rights in the workplace are in danger. Legislative and judicial efforts designed to eviscerate public sector unions are ongoing.

In response to these developments, a consortium of seven organizations—the Albert Shanker Institute; the American Federation of State, County and Municipal Employees; the American Federation of Teachers; the American Prospect; Dissent; Georgetown University’s Kalmanovitz Initiative for Labor and the Working Poor; and the Service Employees International Union—has organized a conference to bring together prominent elected officials, public intellectuals, and union, business and civil rights leaders “in defense of the public square.”

PISA And TIMSS: A Distinction Without A Difference?

Our guest author today is William Schmidt, a University Distinguished Professor and co-director of the Education Policy Center at Michigan State University. He is also a member of the Shanker Institute board of directors.

Every year or two, the mass media is full of stories on the latest iterations of one of the two major international large-scale assessments, the Trends in International Mathematics and Science Study (TIMSS) and the Program for International Student Assessment (PISA). What perplexes many is that the results of these two tests -- both well-established and run by respectable, experienced organizations -- suggest different conclusions about the state of U.S. mathematics education. Generally speaking, U.S. students do better on the TIMSS and worse on the PISA, relative to their peers in other nations. Depending on their personal preferences, policy advocates can simply choose whichever test result is convenient to press their argument, leaving the general public without clear guidance.

Now, in one sense, the differences between the tests are more apparent than real. One reason why the U.S. ranks better on the TIMSS than the PISA is that the two tests sample students from different sets of countries. The PISA has many more wealthy countries, whose students tend to do better – hence, the U.S.’s lower ranking. It turns out that, when looking only at the countries that participated in both the TIMSS and the PISA, we find similar country rankings. There are also some differences in statistical sampling, but these are fairly minor.

A Descriptive Analysis Of The 2014 D.C. Charter School Ratings

The District of Columbia Public Charter School Board (PCSB) recently released the 2014 results of its “Performance Management Framework” (PMF), which is the rating system that the PCSB uses for its schools.

Very quick background: This system sorts schools into one of three “tiers,” with Tier 1 being the highest-performing, as measured by the system, and Tier 3 being the lowest. The ratings are based on a weighted combination of four types of factors -- progress, achievement, gateway, and leading -- which are described in detail in the first footnote.* As discussed in a previous post, the PCSB system, in my opinion, is better than many others out there, since growth measures play a fairly prominent role in the ratings, and, as a result, the final scores are only moderately correlated with key student characteristics such as subsidized lunch eligibility.** In addition, the PCSB is quite diligent about making the PMF results accessible to parents and other stakeholders, and, for the record, I have found the staff very open to sharing data and answering questions.
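
To make the scoring mechanics a bit more concrete, here is a minimal sketch of how a weighted composite of the four factor types might be mapped to tiers. To be clear, the weights, score scales, and cutoffs below are purely hypothetical placeholders for illustration; the actual PMF formula is the one laid out in the footnote, not this one.

```python
# Hypothetical sketch only: weights, score scales (0-100), and tier cutoffs
# are illustrative placeholders, not the PCSB's actual PMF parameters.
def pmf_tier(progress, achievement, gateway, leading,
             weights=(0.40, 0.30, 0.15, 0.15)):
    """Combine four factor scores into a weighted composite and map it to a tier."""
    factors = (progress, achievement, gateway, leading)
    composite = sum(w * s for w, s in zip(weights, factors))
    if composite >= 65:    # illustrative Tier 1 cutoff
        return 1
    if composite >= 35:    # illustrative Tier 2 cutoff
        return 2
    return 3               # Tier 3: lowest-performing

# Example: a school strong on growth ("progress") but weaker elsewhere
print(pmf_tier(progress=80, achievement=55, gateway=60, leading=50))  # composite 65.0 -> Tier 1
```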

That said, the PCSB's big message this year was that schools’ ratings are improving over time, and that, as a result, a substantially larger proportion of DC charter students are attending top-rated schools. This was reported uncritically by several media outlets, including this story in the Washington Post. It is also based on a somewhat questionable use of the data. Let’s take a very simple look at the PMF dataset, first to examine this claim and then, more importantly, to see what we can learn about the PMF and DC charter schools in 2013 and 2014.

Feeling Socially Connected Fuels Intrinsic Motivation And Engagement

Our "social side of education reform" series has emphasized that teaching is a cooperative endeavor, and as such is deeply influenced by the quality of a school's social environment -- i.e., trusting relationships, teamwork and cooperation. But what about learning? To what extent are dispositions such as motivation, persistence and engagement mediated by relationships and the social-relational context?

This is, of course, a very complex question, which can't be addressed comprehensively here. But I would like to discuss three papers that provide some important answers. In terms of our "social side" theme, the studies I will highlight suggest that efforts to improve learning should include and leverage social-relational processes, such as how learners perceive (and relate to) -- how they think they fit into -- their social contexts. Finally, this research, particularly the last paper, suggests that translating this knowledge into policy may be less about top-down, prescriptive regulations and more about what Stanford psychologist Gregory M. Walton has called "wise interventions" -- i.e., small but precise strategies that target recursive processes (more below).

The first paper, by Lucas P. Butler and Gregory M. Walton (2013), describes the results of two experiments testing whether the perceived collaborative nature of an activity that was done individually would cause greater enjoyment of and persistence on that activity among preschoolers.

Rethinking The Use Of Simple Achievement Gap Measures In School Accountability Systems

So-called achievement gaps – the differences in average test performance among student subgroups, usually defined in terms of ethnicity or income – are important measures. They demonstrate persistent inequality of educational outcomes and economic opportunities between different members of our society.

So long as these gaps remain, historically lower-performing subgroups (e.g., low-income students or ethnic minorities) will be less likely to gain access to higher education, good jobs, and political voice. We should monitor these gaps; try to identify all the factors that affect them, for good and for ill; and endeavor to narrow them using every appropriate policy lever – both inside and outside of the educational system.

Achievement gaps have also, however, taken on a very different role over the past 10 or so years. The sizes of gaps, and the extent of “gap closing,” are routinely used by reporters and advocates to judge the performance of schools, school districts, and states. In addition, gaps and gap trends are employed directly in formal accountability systems (e.g., states’ school grading systems), in which they are conceptualized as performance measures.

Although simple measures of the magnitude of or changes in achievement gaps are potentially very useful in several different contexts, they are poor gauges of school performance, and shouldn’t be the basis for high-stakes rewards and punishments in any accountability system.

Multiple Measures And Singular Conclusions In A Twin City

A few weeks ago, the Minneapolis Star Tribune published teacher evaluation results for the district’s public school teachers in 2013-14. This decision generated a fair amount of controversy, but it’s worth noting that the Tribune, unlike the Los Angeles Times and New York City newspapers a few years ago, did not publish scores for individual teachers, only totals by school.

The data once again provide an opportunity to take a look at how results vary by student characteristics. This was indeed the focus of the Tribune’s story, which included the following headline: “Minneapolis’ worst teachers are in the poorest schools, data show.” These types of conclusions, which simply take the results of new evaluations at face value, have characterized the discussion since the first new systems came online. Though understandable, they are also frustrating and a potential impediment to the policy process. At this early point, “the city’s teachers with the lowest evaluation ratings” is not the same thing as “the city’s worst teachers.” Actually, as discussed in a previous post, the systematic variation in evaluation results by student characteristics, which the Tribune uses to draw conclusions about the distribution of the city’s “worst teachers,” could just as easily be viewed as one of the many ways that one might assess the properties and even the validity of those results.

So, while there are no clear-cut "right" or "wrong" answers here, let’s take a quick look at the data and what they might tell us.

The Bewildering Arguments Underlying Florida's Fight Over ELL Test Scores

The State of Florida is currently engaged in a policy tussle of sorts with the U.S. Department of Education (USED) over Florida’s accountability system. To make a long story short, last spring, Florida passed a law saying that the test scores of English language learners (ELLs) would only count toward schools’ accountability grades (and teacher evaluations) once the ELL students had been in the system for at least two years. This runs up against federal law, which requires that ELLs’ scores be counted after only one year, and USED has indicated that it’s not willing to budge on this requirement. In response, Florida is considering legal action.

This conflict might seem incredibly inane (unless you’re in one of the affected schools, of course). Beneath the surface, though, this is actually kind of an amazing story.

Put simply, Florida’s argument against USED's policy of counting ELL scores after just one year is a perfect example of why most of the state's core accountability measures (not to mention those of NCLB as a whole) are so inappropriate: they judge schools’ performance based largely on where their students’ scores end up, without paying any attention to where they start out.