Who Has Confidence In U.S. Schools?

For many years, national survey and polling data have shown that Americans tend to like their own local schools, but are considerably less sanguine about the nation’s education system as a whole. This somewhat paradoxical finding – in which most people seem to think the problem is with “other people’s schools” – is difficult to interpret, especially since it seems to vary a bit when people are given basic information about schools, such as funding levels.

In any case, I couldn’t resist taking a very quick, superficial look at how people’s views of education vary by important characteristics, such as age and education. I used the General Social Survey (pooled 2006-2010), which queries respondents about their confidence in education, asking them to specify whether they have “hardly any,” “only some” or “a great deal” of confidence in the system.*

This question doesn’t differentiate explicitly between respondents’ local schools and the system as a whole, and respondents may consider different factors when assessing their confidence, but I think it’s a decent measure of their disposition toward the education system.

The Data-Driven Education Movement

** Also reprinted here in the Washington Post

In the education community, many proclaim themselves to be "completely data-driven." Data-Driven Decision Making (DDDM) has been a buzz phrase for a while now, and continues to be a badge many wear with pride. And yet, every time I hear it, I cringe.

Let me explain. During my first year in graduate school, I was taught that excessive attention to quantitative data impedes – rather than aids – in-depth understanding of social phenomena. In other words, explanations cannot simply be cranked out of statistical analyses without some kind of precursor theory; the attempt to do so – a.k.a. “variable sociology” – constitutes a major obstacle to the advancement of knowledge.

I am no longer in graduate school, so part of me says: Okay, I know what data-driven means in education. But then, at times, I still think: No, really, what does “data-driven” mean even in this context?

Which State Has "The Best Schools?"

** Reprinted here in the Washington Post

I’ve written many times about how absolute performance levels – how highly students score – are not by themselves valid indicators of school quality, since, most basically, they don’t account for the fact that students enter the schooling system at different levels. One of the most blatant (and common) manifestations of this mistake is when people use NAEP results to determine the quality of a state's schools.

For instance, you’ll often hear that Massachusetts has the “best” schools in the U.S. and Mississippi the “worst,” with both claims based solely on average scores on the NAEP (though, technically, Massachusetts public school students' scores are statistically tied with at least one other state on two of the four main NAEP exams, while Mississippi's rankings vary a bit by grade/subject, and its scores are also not statistically different from several other states').

But we all know that these two states are very different in terms of basic characteristics such as income, parental education, etc. Any assessment of educational quality, whether at the state or local level, is necessarily complicated, and ignoring differences between students precludes any meaningful comparisons of school effectiveness. Schooling quality is important, but it cannot be assessed by sorting and ranking raw test scores in a spreadsheet.
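To make the point concrete, here is a minimal sketch of the difference between ranking raw averages and comparing background-adjusted averages. Every number below is invented for illustration – these are not actual NAEP scores or poverty rates, and the adjustment factor is a made-up assumption, not an estimate:

```python
# Hypothetical illustration: raw average scores vs. a crude
# background-adjusted comparison. All numbers are invented.

# Two fictional states: raw average score and share of low-income
# students (a rough proxy for out-of-school factors).
states = {
    "State A": {"raw_score": 290, "pct_low_income": 0.25},
    "State B": {"raw_score": 270, "pct_low_income": 0.70},
}

# Assume, purely for illustration, that each additional percentage
# point of low-income enrollment is associated with a 0.5-point lower
# average score, regardless of school quality.
SCORE_GAP_PER_POINT = 0.5

adjusted_scores = {}
for name, s in states.items():
    # "Adjust" each state's score to what we would expect if it served
    # a student population with a 50 percent low-income share.
    adjustment = (s["pct_low_income"] * 100 - 50) * SCORE_GAP_PER_POINT
    adjusted_scores[name] = s["raw_score"] + adjustment
    print(f"{name}: raw = {s['raw_score']}, adjusted = {adjusted_scores[name]:.1f}")
```

Under these invented assumptions, State A leads by 20 points in raw scores, but State B comes out slightly ahead once student background is (crudely) taken into account. Real adjustments are far more involved, but the direction of the problem is the same: raw averages conflate what schools do with whom they serve.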

College Attainment In The U.S. And Around The World

A common talking point in education circles is that college attainment in the U.S. used to be among the highest in the world, but is now ranked middling-to-low (the ranking cited is typically around 15th) among OECD nations. As is the case when people cite rankings on the PISA assessment, this is often meant to imply that the U.S. education system is failing and getting worse.*

The latter arguments are of course oversimplifications, given that college attendance and completion are complex phenomena that entail many factors, school and non-school. A full discussion of these issues is beyond the scope of this post - obviously, the causes and “value” of a postsecondary education vary within and between nations, and are subject to all the usual limitations inherent in international comparisons.

That said, let's just take a very quick, surface-level look at the latest OECD figures for college attainment (“tertiary education,” meaning an associate-level, bachelor's or advanced degree), which have recently been released for 2010.

Looking Backwards Into The Future

This is an adaptation of a recent message to AFT staff and leadership from Eugenia Kemble, on the occasion of her departure as the Albert Shanker Institute’s founding executive director, a position she held from March 1998 through September 2012.

I hope you will accept a few reflections from an old-timer as I leave the Albert Shanker Institute, which was launched with the support of the American Federation of Teachers in 1998, a year after Al’s death.

I started in 1967 as a cub reporter for New York’s Local 2 and have worked for the AFT, the AFL-CIO, and the Albert Shanker Institute since 1975, so I have been on duty for a while. I was particularly grateful for the decision to create the Shanker Institute. It has become a very special kind of forum – directed by an autonomous board of directors to ensure its independence – where, together with a broad spectrum of colleagues from both inside and outside the union, core ideas, positions, and practices could be discussed, examined, modeled, and debated. Its inquisitive nature and program attempt to capture a key feature of Al Shanker’s contribution to union leadership. As a result, the Institute’s work has helped many, including me, to reach a clearer understanding of the essential character of the AFT, unionism, public education, and democracy itself, as well as what about them we hope will endure.

Attempted Assassination In Pakistan

What would drive armed gunmen to open fire on a bus full of schoolgirls, with the express aim of assassinating one talented young teenager? That’s the question on the minds of many people this week, following Tuesday’s attempted assassination of 14-year-old Malala Yousafzai in northwestern Pakistan. A refugee fleeing Taliban violence and oppression in Pakistan’s Swat Valley, Malala had already won a following for her precocious and courageous blog posts, written when she was really just a child, arguing that young women have a right to an education, and indeed, to a life free from discrimination and fear.

She is also a hero to many Pakistanis. In 2011, the Pakistani government awarded her a national peace prize and 1 million rupees (US$10,500). In 2012, she was a finalist for the International Children’s Peace Prize, awarded by a Dutch organization, in recognition of her courage in defying the Taliban by advocating for girls’ education.  

The Stability And Fairness Of New York City's School Ratings

New York City has just released the new round of results from its school rating system (the ratings are called “progress reports”). The system relies considerably more on student growth (60 out of 100 points) than on absolute performance (25 points), and there are efforts to partially adjust most of the measures via peer group comparisons.*

All of this indicates that, compared with many other systems around the U.S., the city's ratings focus more on the test-based performance of schools than on that of their students.

The ratings are high-stakes. Schools receiving low grades – a D or F in any given year, or a C for three consecutive years – enter a review process by which they might be closed. The number of schools meeting these criteria jumped considerably this year.

There is plenty of controversy to go around about the NYC ratings, much of it pertaining to two important features of the system. They’re worth discussing briefly, as they are also applicable to systems in other states.

Our Not-So-College-Ready Annual Discussion Of SAT Results

Every year, around this time, the College Board publicizes its SAT results, and hundreds of newspapers, blogs, and television stations run stories suggesting that trends in the aggregate scores are, by themselves, a meaningful indicator of U.S. school quality. They’re not.

Everyone knows that the vast majority of the students who take the SAT in a given year didn’t take the test the previous year – i.e., the data are cross-sectional. Everyone also knows that participation is voluntary (as is participation in the ACT test), and that the number of students taking the test has been increasing for many years and current test-takers have different measurable characteristics from their predecessors. That means we cannot use the raw results to draw strong conclusions about changes in the performance of the typical student, and certainly not about the effectiveness of schools, whether nationally or in a given state or district. This is common sense.
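A toy calculation shows why changing participation alone can move the aggregate average. The numbers below are invented for illustration – they are not actual SAT figures – but the arithmetic is the whole point: every group of test-takers improves, yet the overall average falls, simply because participation expands among a historically lower-scoring group.

```python
# Hypothetical composition effect: the aggregate average can fall
# even when every group of test-takers improves. Numbers invented.

def mean_score(groups):
    """Weighted average score across (n_students, avg_score) groups."""
    total_n = sum(n for n, _ in groups)
    return sum(n * score for n, score in groups) / total_n

# Year 1: mostly "traditional" test-takers, few "new" test-takers.
year1 = [(900_000, 1050), (100_000, 900)]

# Year 2: BOTH groups score higher, but participation among the
# historically lower-scoring group expands fivefold.
year2 = [(900_000, 1060), (500_000, 910)]

print(round(mean_score(year1)))  # -> 1035
print(round(mean_score(year2)))  # -> 1006
```

Each group gained 10 points, yet the overall average dropped by roughly 29 – headlines about "declining SAT scores" built on figures like these would say nothing about whether any actual student, or school, got worse.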

Unfortunately, the College Board plays a role in stoking the apparent confusion – or, at least, it could do much more to prevent it. Consider the headline of this year’s press release:

That's Not Teacher-Like

I’ve been reading Albert Shanker’s “The Power of Ideas: Al In His Own Words,” the American Educator’s compendium of Al’s speeches and columns, published posthumously in 1997. What an enjoyable, witty and informative collection of essays.

Two columns especially caught my attention: “That’s Very Unprofessional, Mr. Shanker!” and “Does Pavarotti Need to File an Aria Plan?” – where Al discusses expectations for (and treatment of) teachers. They made me reflect, yet again, on whether perceptions of teacher professionalism might be gendered. In other words, when society thinks of the attributes of a professional teacher, might we unconsciously be thinking of women teachers? And, if so, why might this be important?

In “That’s Very Unprofessional, Mr. Shanker!” Al writes:

Does It Matter How We Measure Schools' Test-Based Performance?

In education policy debates, we like the "big picture." We love to say things like “hold schools accountable” and “set high expectations.” Much less frequent are substantive discussions about the details of accountability systems, but it’s these details that make or break policy. The technical specs just aren’t that sexy. But even the best ideas with the sexiest catchphrases won’t improve things a bit unless they’re designed and executed well.

In this vein, I want to recommend a very interesting CALDER working paper by Mark Ehlert, Cory Koedel, Eric Parsons and Michael Podgursky. The paper takes a quick look at one of these extremely important, yet frequently under-discussed details in school (and teacher) accountability systems: The choice of growth model.

When value-added or other growth models come up in our debates, they’re usually discussed en masse, as if they’re all the same. They’re not. It's well-known (though perhaps overstated) that different models can, in many cases, lead to different conclusions for the same school or teacher. This paper, which focuses on school-level models but might easily be extended to teacher evaluations as well, helps illustrate this point in a policy-relevant manner.
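Here is a bare-bones sketch of how two simple growth measures can rank the same schools differently. The data are invented school-level averages, and the two "models" – a raw gain score and a residual from regressing current on prior scores – are deliberately crude stand-ins for the far more elaborate models the paper compares:

```python
# Two simple school "growth" measures applied to the same invented data.
import numpy as np

schools = ["A", "B", "C", "D"]
prior   = np.array([40.0, 60.0, 80.0, 100.0])   # prior-year average scores
current = np.array([65.0, 75.0, 91.0, 113.0])   # current-year average scores

# Model 1: simple gain score (current minus prior).
gains = current - prior

# Model 2: residual from regressing current on prior scores -- a crude
# stand-in for a regression-based growth model, which asks whether each
# school beat the score predicted by its students' starting point.
slope, intercept = np.polyfit(prior, current, 1)
residuals = current - (intercept + slope * prior)

for name, g, r in zip(schools, gains, residuals):
    print(f"School {name}: gain = {g:+.0f}, regression residual = {r:+.1f}")
```

With these numbers, School B posts a larger raw gain than School D (+15 vs. +13), but the regression flips that ordering: D beat its predicted score while B fell short. Neither measure is "wrong" – they answer different questions, which is exactly why the choice of model is a policy decision, not a technicality.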