Staff Matters: Social Resilience In Schools

In the world of education, particularly in the United States, educational fads, policy agendas, and funding priorities tend to change rapidly. The attention of education research fluctuates accordingly. And, as David Cohen persuasively argues in Teaching and Its Predicaments, the nation has little coherent educational infrastructure to fall back upon. As a result, teachers’ work is almost always surrounded by substantial uncertainty (e.g., the lack of a common curriculum) and variation. In such a context, it is no surprise that collaboration and collegiality figure prominently in teachers’ world (and work) views.

After all, difficulties can be dealt with more effectively when individuals are situated in supportive and close-knit social networks from which to draw strength and resources. In other words, in the absence of other forms of stability, the ability of a group – a group of teachers in this case – to work together becomes indispensable for coping with challenges and change.

The idea that teachers’ jobs are surrounded by uncertainty made me think of problems often encountered in the field of security. In this sector, because threats are increasingly complex and unpredictable, much of the focus has shifted away from heightened protection and toward increased resilience. Resilience is often understood as the ability of communities to survive and thrive after disasters or emergencies.

Higher Education: Soaring Rhetoric, Skyrocketing Costs

Over the past several years, the mantra of “college for all” has become ubiquitous, with Americans told that a college education is no longer a luxury, but a necessity, for any individual who aspires to a middle-class life in the 21st century economy. And indeed, many studies tend to confirm that people with a post-secondary education enjoy lower unemployment rates and higher wages over time.

Simultaneously – sometimes in the same articles – we learn that soaring tuition rates have put college out of the reach of many, if not most, families. In fact, for the past few decades, college costs have been rising faster than health care costs. In the last year or so, the news is that students who tried to borrow their way around this seemingly intractable problem only dug themselves a deeper hole. Outstanding student college loans have reached – or soon will reach – the $1 trillion mark.

The average student graduates college with a debt burden of nearly $25,000; others, especially those with professional degrees, are buckling under a debt load in the six figures. Since bankruptcy forgiveness does not apply to student debt, even unemployed and underemployed graduates can expect to carry this debt with them for years, perhaps decades, to come. With a slow economy exacerbating the problem, it’s no surprise to find that the national student loan default rate for 2009 (the last year for which data are available) was 8.8 percent and rising. At for-profit schools, the rate was 15 percent.

The Relatively Unexplored Frontier Of Charter School Finance

Do charter schools do more – get better results – with less? If you ask this question, you’ll probably get very strong answers, ranging from the affirmative to the negative, often depending on the person’s overall view of charter schools. The reality, however, is that we really don’t know.

Actually, despite coverage that often outpaces the available evidence, researchers don’t even have a good handle on how much charter schools spend, to say nothing of whether how and how much they spend leads to better outcomes. Reporting of charter financial data is incomplete, imprecise and inconsistent. It is difficult to disentangle the financial relationships between charter management organizations (CMOs) and the schools they run, as well as those between charter schools and their “host” districts.

A new report published by the National Education Policy Center, with support from the Shanker Institute and the Great Lakes Center for Education Research and Practice, examines spending between 2008 and 2010 among charter schools run by major CMOs in three states – New York, Texas and Ohio. The results suggest that relative charter spending in these states, like test-based charter performance overall, varies widely. In addition, perhaps more importantly, the findings make it clear that there remain significant barriers to accurate spending comparisons between charter and regular public schools, which severely hinder rigorous efforts to examine the cost-effectiveness of these schools.

Teachers And Their Unions: A Conceptual Border Dispute

One of the segments from “Waiting for Superman” that stuck in my head is the following statement by Newsweek reporter Jonathan Alter:

It’s very, very important to hold two contradictory ideas in your head at the same time. Teachers are great, a national treasure. Teachers’ unions are, generally speaking, a menace and an impediment to reform.

The distinction between teachers and their unions (as well as those of other workers) has been a matter of political and conceptual contention for a long time. On one “side,” the common viewpoint, as characterized by Alter’s slightly hyperbolic line, is “love teachers, don’t like their unions.” On the other “side,” criticism of teachers’ unions is often called “teacher bashing.”

So, is there any distinction between teachers and teachers’ unions? Of course there is.

The Test-Based Evidence On New Orleans Charter Schools

Charter schools in New Orleans (NOLA) now serve over four out of five students in the city – the largest market share of any big city in the nation. As of the 2011-12 school year, most of the city’s schools (around 80 percent), charter and regular public, are overseen by the Recovery School District (RSD), a statewide agency created in 2003 to take over low-performing schools; the RSD assumed control of most NOLA schools in Katrina’s aftermath.

Around three-quarters of these RSD schools (50 out of 66) are charters. The remainder of NOLA’s schools are overseen either by the Orleans Parish School Board (which is responsible for 11 charters and six regular public schools, and holds taxing authority for all parish schools) or by the Louisiana Board of Elementary and Secondary Education (which is directly responsible for three charters, and also supervises the RSD).

New Orleans is often held up as a model for the rapid expansion of charter schools in other urban districts, based on the argument that charter proliferation since 2005-06 has generated rapid improvements in student outcomes. There are two separate claims potentially embedded in this argument. The first is that the city’s schools perform better than they did pre-Katrina. The second is that NOLA’s charters have outperformed the city’s dwindling supply of traditional public schools since the hurricane.

Although I tend strongly toward the viewpoint that whether charter schools “work” is far less important than why – e.g., which specific policies and practices make a difference – it might nevertheless be useful to quickly address both of the claims above, given all the attention paid to charters in New Orleans.

Value-Added Versus Observations, Part Two: Validity

In a previous post, I compared value-added (VA) and classroom observations in terms of reliability – the degree to which they are free of error and stable over repeated measurements. But even the most reliable measures aren’t useful unless they are valid – that is, unless they’re measuring what we want them to measure.

Arguments over the validity of teacher performance measures, especially value-added, dominate our discourse on evaluations. There are, in my view, three interrelated issues to keep in mind when discussing the validity of VA and observations. The first is definitional – in a research context, validity is less about a measure itself than the inferences one draws from it. The second point might follow from the first: The validity of VA and observations should be assessed in the context of how they’re being used.

Third and finally, given the difficulties in determining whether either measure is valid in and of itself, as well as the fact that so many states and districts are already moving ahead with new systems, the best approach at this point may be to judge validity in terms of whether the evaluations are improving outcomes. And, unfortunately, there is little indication that this is happening in most places.

Becoming A 21st Century Learner

Think about something you have always wanted to learn or accomplish but never did, such as speaking a foreign language or playing an instrument. Now think about what stopped you. There are probably a variety of factors, but chances are they have little to do with technology.

Electronic devices are becoming cheaper, easier to use, and more intuitive. Much of the world’s knowledge is literally at our fingertips, accessible from any networked gadget. Yet sustained learning does not always follow. It is often noted that developing digital skills/literacy is fundamental to 21st century learning, but is that all that’s missing? I suspect not. In this post I take a look at university courses available to anyone with an internet connection (a.k.a. massive open online courses, or MOOCs) and ask: What attributes or skills make some people (but not others) better equipped to take advantage of these and similar educational opportunities brought about by advances in technology?

In the last few months, Stanford University’s version of MOOCs has attracted considerable attention (also here and here), leading some to question the U.S. higher education model as we know it – and even to envision its demise. But what is really novel about the Stanford MOOCs? Why did 160,000 students from 190 countries sign up for the course “Introduction to Artificial Intelligence”?

Value-Added Versus Observations, Part One: Reliability

Although most new teacher evaluations are still in various phases of pre-implementation, it’s safe to say that classroom observations and/or value-added (VA) scores will be the most heavily weighted components of teachers’ final scores, depending on whether teachers are in tested grades and subjects. One gets the general sense that many – perhaps most – teachers strongly prefer the former (observations, especially peer observations) over the latter (VA).

One of the most common arguments against VA is that the scores are error-prone and unstable over time - i.e., that they are unreliable. And it's true that the scores fluctuate between years (also see here), with much of this instability due to measurement error, rather than “real” performance changes. On a related note, different model specifications and different tests can yield very different results for the same teacher/class.
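
To see how measurement error alone can produce year-to-year instability, here is a minimal, self-contained simulation. It is a toy illustration, not a real VA model: the assumption that true effects and error have equal variance is mine, chosen only to make the point vivid.

```python
import random

# Toy illustration (not a real VA model): each "teacher" has a stable
# true effect, and each year's observed score adds independent noise.
random.seed(0)
n_teachers = 1000
true_eff = [random.gauss(0, 1) for _ in range(n_teachers)]

# Observed scores in two consecutive years: true effect + measurement error.
year1 = [t + random.gauss(0, 1) for t in true_eff]
year2 = [t + random.gauss(0, 1) for t in true_eff]

def corr(x, y):
    """Pearson correlation, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

# Every teacher's "true" effect is identical across years, yet the
# year-to-year correlation of observed scores is pulled well below 1
# by noise alone (near 0.5 when error variance equals true variance).
print(round(corr(year1, year2), 2))
```

In other words, even a measure built on perfectly stable underlying performance will look unstable once noise enters, which is why instability by itself cannot distinguish a bad measure from a noisy one.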

These findings are very important, and often too casually dismissed by VA supporters, but the issue of reliability is, to varying degrees, endemic to all performance measurement. Actually, many of the standard reliability-based criticisms of value-added could also be leveled against observations. Since we cannot observe “true” teacher performance, it’s tough to say which is “better” or “worse,” despite the certainty with which both “sides” often present their respective cases. And the fact that both entail some level of measurement error doesn’t by itself speak to whether they should be part of evaluations.*

Nevertheless, many states and districts have already made the choice to use both measures, and in these places, the existence of imprecision is less important than how to deal with it. Viewed from this perspective, VA and observations are in many respects more alike than different.

There's No One Correct Way To Rate Schools

Education Week reports on the growth of websites that attempt to provide parents with help in choosing schools, including rating schools according to testing results. The most prominent of these sites is GreatSchools.org. Its test-based school ratings could not be more simplistic – they are essentially just percentile rankings of schools’ proficiency rates as compared to all other schools in their states (the site also provides warnings about the data, along with a bunch of non-testing information).
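
To make the simplicity of this kind of rating concrete, here is a hypothetical sketch of a percentile ranking of proficiency rates. The school names and rates are invented for illustration; this is the spirit of the approach, not GreatSchools’ actual code.

```python
# Hypothetical sketch: a school's rating is just the percent of other
# schools in the state with a lower proficiency rate.

def percentile_rank(rates, school):
    """Percent of other schools with a proficiency rate below this one's."""
    below = sum(1 for name, r in rates.items()
                if name != school and r < rates[school])
    return 100.0 * below / (len(rates) - 1)

# Invented data: share of students scoring proficient at each school.
state_rates = {
    "School A": 0.42,
    "School B": 0.65,
    "School C": 0.81,
    "School D": 0.58,
    "School E": 0.73,
}

for name in sorted(state_rates):
    print(f"{name}: {percentile_rank(state_rates, name):.0f}th percentile")
```

A real rating system would have to handle ties, small schools, and missing data; the point here is only that a ranking of proficiency rates says nothing about student growth or the populations schools serve.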

This is the kind of indicator that I have criticized when reviewing states’ school/district “grading systems.” And it is indeed a poor measure, albeit one that is widely available and easy to understand. But it’s worth quickly discussing the fact that such criticism is conditional on how the ratings are employed – there is a difference between using testing data to rate schools for parents versus for high-stakes accountability purposes.

In other words, the utility and proper interpretation of data vary by context, and there's no one "correct way" to rate schools. The optimal design might differ depending on the purpose for which the ratings will be used. In fact, the reasons why a measure is problematic in one context might very well be a source of strength in another.

The Challenges Of Pre-K Assessment

In the United States, nearly 1.3 million children attend publicly-funded preschool. As enrollment continues to grow, states are under pressure to prove these programs serve to increase school readiness. Thus, the task of figuring out how best to measure preschoolers’ learning outcomes has become a major policy focus.

First, it should be noted that researchers are almost unanimous in their caution about this subject. There are inherent difficulties in the accurate assessment of very young children’s learning in the fields of language, cognition, socio-emotional development, and even physical development. Young children’s attention spans tend to be short and there are wide, natural variations in children’s performance in any given domain and on any given day. Thus, great care is advised for both the design and implementation of such assessments (see here, here, and here for examples). The question of whether and how to use these student assessments to determine program or staff effectiveness is even more difficult and controversial (for instance, here and here). Nevertheless, many states are already using various forms of assessment to oversee their preschool investments.

It is difficult to react to this (unsurprising) paradox. Sadly, in education, there is often a disconnect between what we know (i.e., research) and what we do (i.e., policy). But, since our general desire for accountability seems to be here to stay, a case can be made that states should, at a minimum, expand what they measure to reflect learning as accurately and broadly as possible.

So, what types of assessments are better for capturing what a four- or five-year-old knows? How might these assessments be improved?