Common Core Opens The Second Front In The Reading Wars

Our guest author today is Kathleen Porter-Magee, Bernard Lee Schwartz policy fellow and editor of the Common Core Watch blog at the Thomas B. Fordham Institute. Previously, Ms. Porter-Magee served as a middle and high school teacher and as the curriculum and professional development director for a network of public charter schools.

Up until now, the Common Core ELA standards were considered path-breaking mostly because of their reach. This isn't the first time a group has attempted to write “common” standards, but it is the first time the effort has gained such widespread traction.

Yet the Common Core standards are revolutionary for another, less talked about, reason: they define rigor in reading and literature classrooms more clearly and explicitly than nearly any of the state ELA standards they’ve replaced. Now, as the full impact of these expectations is starting to take hold, the decision to define rigor—and the way the CCSS define it—is fanning the flames of a debate that threatens to open up a whole new front in America’s long-running “Reading Wars.”

The first and most divisive front in the reading wars was the debate over the importance of phonics to early reading instruction. Thanks to the 2000 recommendations of the National Reading Panel and the 2001 “Reading First” portion of No Child Left Behind, the phonics camp has largely won the day in this battle. Now, while there remain curricula that may marginalize phonics and phonemic awareness, there are none that ignore them completely.

Literacy For Life: The Role Of Career And Technical Education In Reading Proficiency

It is well established that a student’s reading proficiency level in elementary school is a good predictor of high school graduation success. The lower the reading level, the more likely it is that the student will not graduate on time. Against this background, it is sobering that many U.S. students reach high school without the reading and comprehension skills they need. According to NAEP data, in 2011, about a third (33 percent) of 4th-graders were reading at a below basic level; among 8th- and 12th-grade students, the percentage of students who were stuck at the below basic reading level had dropped, but only to about 25 percent. Many of these students drop out; many go on to earn a diploma, but enter the work world singularly unprepared to earn a living.

What is to be done? Certainly, intensive remediation is part of the answer, but so are practice and motivation and interest. The challenge for struggling readers at the high school level is hard to overstate; by the time they enter high school, they often display a negative and despairing attitude toward school that has been hardened by years of failure. Furthermore, most high school teachers are not trained in literacy instruction, a specialized skill which is theoretically the purview of early elementary school. Indeed, for many urban teachers, motivating kids just to come to school is the major challenge.

How do we motivate these kids, who sometimes exhibit stubborn resistance to reading or to any other kind of schoolwork?  One effective strategy is to make the purpose of reading as interesting and obvious as possible. For many youngsters, that means access to high-quality Career and Technical Education (CTE).

The Louisiana Voucher Accountability Sweepstakes

The situation with vouchers in Louisiana is obviously quite complicated, and there are strong opinions on both sides of the issue, but I’d like to comment quickly on the new “accountability” provision. It's a great example of how, too often, people focus on the concept of accountability and ignore how it is actually implemented in policy.

Quick and dirty background: Louisiana will be allowing students to receive vouchers (tuition to attend private schools) if their public schools are sufficiently low-performing, according to their "school performance score" (SPS). As discussed here, the SPS is based primarily on how highly students score, rather than whether they’re making progress, and thus tells you relatively little about the actual effectiveness of schools per se. For instance, the vouchers will be awarded mostly to schools serving larger proportions of disadvantaged students, even if many of those schools are producing large gains (though such progress cannot be assessed adequately using year-to-year changes in the SPS, which, due in part to its reliance on cross-sectional proficiency rates, are extremely volatile).

Now, here's where things get really messy: In an attempt to demonstrate that they are holding the voucher-accepting private schools accountable, Louisiana officials have decided that they will make these private schools ineligible for the program if their performance is too low (after at least two years of participation in the program). That might be a good idea if the state measured school performance in a defensible manner. It doesn't.

Schools Aren't The Only Reason Test Scores Change

In all my many posts about the interpretation of state testing data, it seems that I may have failed to articulate one major implication, which is almost always ignored in the news coverage of the release of annual testing data. That is: raw, unadjusted changes in student test scores are not by themselves very good measures of schools' test-based effectiveness.

In other words, schools can have a substantial impact on performance, but student test scores also increase, decrease or remain flat for reasons that have little or nothing to do with schools. The first, most basic reason is error. There is measurement error in all test scores - for various reasons, students taking the same test twice will get different scores, even if their "knowledge" remains constant. Also, as I've discussed many times, there is extra imprecision when using cross-sectional data. Often, any changes in scores or rates, especially when they’re small in magnitude and/or based on smaller samples (e.g., individual schools), do not represent actual progress (see here and here). Finally, even when changes are "real," other factors that influence test score changes include a variety of non-schooling inputs, such as parental education levels, family's economic circumstances, parental involvement, etc. These factors don't just influence how highly students score; they are also associated with progress (that's why value-added models exist).
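
To make the volatility point concrete, here is a minimal, purely hypothetical simulation (all numbers are invented for illustration, and the setup is far simpler than any real testing program): a school whose contribution to scores never changes still posts cross-sectional proficiency rates that bounce around from year to year, because each year tests a different cohort with a fresh dose of measurement error.

```python
import random

# Hypothetical illustration (made-up numbers, not real testing data):
# simulate a school whose "true" effectiveness never changes, and watch
# how much its cross-sectional proficiency rate moves from year to year.
random.seed(1)

TRUE_SCHOOL_EFFECT = 5      # constant contribution of the school, every year
PROFICIENCY_CUTOFF = 50     # arbitrary cut score
COHORT_SIZE = 60            # a small school, so rates are noisier

def cohort_proficiency_rate():
    """Proficiency rate for one year's incoming cohort.

    score = incoming ability (who happens to enroll that year)
            + constant school effect
            + measurement error on test day
    """
    proficient = 0
    for _ in range(COHORT_SIZE):
        incoming_ability = random.gauss(45, 10)
        measurement_error = random.gauss(0, 5)
        score = incoming_ability + TRUE_SCHOOL_EFFECT + measurement_error
        if score >= PROFICIENCY_CUTOFF:
            proficient += 1
    return 100.0 * proficient / COHORT_SIZE

rates = [cohort_proficiency_rate() for _ in range(5)]
changes = [rates[i + 1] - rates[i] for i in range(len(rates) - 1)]
print("Yearly rates:  ", [f"{r:.1f}" for r in rates])
print("Yearly changes:", [f"{c:+.1f}" for c in changes])
# The rate can swing several points up or down even though the school's
# contribution never changed: the movement is cohort composition plus error.
```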

Thus, to the degree that test scores are a valid measure of student performance, and changes in those scores a valid measure of student learning, schools aren’t the only suitors at the dance. We should stop judging school or district performance by comparing unadjusted scores or rates between years.

Investing In Children = Supporting Their Families

Although some parents are better positioned than others to meet their families’ child care needs, very few parents are immune to the challenges of balancing work and family. Adding further stress to families is the fact that single-parent households are at a record high in the U.S., with more than 40 percent of births happening outside of marriage. Paid parental leave and quality early childhood education (ECE) are two important policies that can assist parents in this regard. In the United States, however, both are less comprehensive and less equally distributed than in most other developed nations.

As a recent (and excellent) Forbes piece points out, we have two alternatives: hope that difficult family circumstances reverse themselves, or support policies such as paid parental leave and universal early childhood education and care — policies which would make it much easier for all parents to raise children, be it as a couple or on their own. So, what’s it going to be?

In 2010, a global survey on paid leave and other workplace benefits directed by Dr. Jody Heymann (McGill University) and Dr. Alison Earle (Northeastern University) found that the U.S. is one of four* countries in the world without a national law guaranteeing paid leave for parents.** The other three nations are Liberia, Papua New Guinea, and Swaziland. Some might see this as evidence of American “exceptionalism," but what a 2011 Human Rights Watch report finds exceptional is the degree to which the nation is "Failing Its Families." In fact, according to a survey of registered voters cited in the report, 76 percent of Americans said they would endorse laws that provide paid leave for family care and childbirth. Yet, it is still the case in the U.S. that parental leave, when available at all, is usually brief and unpaid.

The Irreconcilables

** Also posted here on “Valerie Strauss’ Answer Sheet” in the Washington Post

The New Teacher Project (TNTP) has a new, highly publicized report about what it calls “irreplaceables," a catchy term that is supposed to describe those teachers who are “so successful they are nearly impossible to replace." The report’s primary conclusion is that these “irreplaceable” teachers often leave the profession voluntarily, and TNTP offers several recommendations for how to improve their retention.

I’m not going to discuss this report fully. It shines a light on teacher retention, which is a good thing. Its primary purpose is to promulgate the conceptual argument that not all teacher turnover is created equal – i.e., that it depends on whether “good” or “bad” teachers are leaving (see here for a strong analysis on this topic). The report’s recommendations are standard fare – improve working conditions, tailor pay to “performance” (see here for a review of evidence on incentives and retention), etc. Many are widely-supported, while others are more controversial. All of them merit discussion.

I just want to make one quick (and, in many respects, semantic) point about the manner in which TNTP identifies high-performing teachers, as I think it illustrates larger issues. In my view, the term “irreplaceable” doesn't apply, and I think it would have been a better analysis without it.

The Real “Trouble” With Technology, Online Education And Learning

It’s probably too early to say whether Massive Open Online Courses (MOOCs) are a "tsunami" or a "seismic shift," but, continuing with the natural disaster theme, the last few months have seen a massive “avalanche” of press commentary about them, especially within the last few days.

Also getting lots of press attention (though not as much right now) is Adaptive/Personalized Learning. Both innovations seem to fascinate us, but probably for different reasons, since they are so fundamentally different at their cores. Personalized Learning, like more traditional concepts of education, places the individual at the center. With MOOCs, groups and social interaction take center stage and learning becomes a collective enterprise.

This post elaborates on this distinction, but also points to a recent blurring of the lines between the two – a development that could be troubling.

But, first things first: What is Personalized/Adaptive Learning, what are MOOCs, and why are they different?

The Unfortunate Truth About This Year's NYC Charter School Test Results

There have now been several stories in the New York news media about New York City’s charter schools’ “gains” on this year’s state tests (see here, here, here, here and here). All of them trumpeted the 3-7 percentage point increase in proficiency among the city’s charter students, compared with the 2-3 point increase among their counterparts in regular public schools. The consensus: Charters performed fantastically well this year.

In fact, the NY Daily News asserted that the "clear lesson" from the data is that "public school administrators must gain the flexibility enjoyed by charter leaders," and "adopt [their] single-minded focus on achievement." For his part, Mayor Michael Bloomberg claimed that the scores are evidence that the city should expand its charter sector.

All of this reflects a fundamental misunderstanding of how to interpret testing data, one that is frankly a little frightening to find among experienced reporters and elected officials.

What Florida's School Grades Measure, And What They Don't

A while back, I argued that Florida's school grading system, due mostly to its choice of measures, does a poor job of gauging school performance per se. The short version is that the ratings are, to a degree unsurpassed by most other states' systems, driven by absolute performance measures (how highly students score), rather than growth (whether students make progress). Since more advantaged students tend to score more highly on tests when they enter the school system, schools are largely being judged not on the quality of instruction they provide, but rather on the characteristics of the students they serve.
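
As a rough sketch of that distinction, consider two invented schools: one with a more advantaged intake and essentially flat scores, and one with a less advantaged intake and large gains. A status-based measure and a growth-based measure rank them in opposite orders (the numbers below are made up purely for illustration).

```python
# Two hypothetical schools (made-up numbers) contrasting a "status" measure
# (how high scores are) with a "growth" measure (how much the same students
# improved over the prior year).
schools = {
    "School A (more advantaged intake)": (85, 86),   # high scores, little growth
    "School B (less advantaged intake)": (55, 63),   # lower scores, large growth
}

for name, (prior_avg, current_avg) in schools.items():
    status = current_avg                # what a level-based grade rewards
    growth = current_avg - prior_avg    # what a growth-based grade rewards
    print(f"{name}: status = {status}, growth = {growth:+d}")

# A grade weighted toward status ranks School A higher even though School B
# moved its students further; that is, the rating tracks who enrolls more
# than what the school adds.
```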

New results were released a couple of weeks ago. This was highly anticipated, as the state had made controversial changes to the system, most notably the inclusion of non-native English speakers and special education students, changes that officials said were intended to raise standards and expectations. In a limited sense, that's true: grades were, on average, lower this year. The problem is that the system uses the same measures as before (including a growth component that is largely redundant with proficiency). All that has changed is the students who are included in them. Thus, to whatever degree the system now reflects higher expectations, it is still for outcomes that schools mostly cannot control.

I fully acknowledge the political and methodological difficulties in designing these systems, and I do think Florida's grades, though exceedingly crude, might be useful for some purposes. But they should not, in my view, be used for high-stakes decisions such as closure, and the public should understand that they don't tell you much about the actual effectiveness of schools. Let’s take a very quick look at the new round of ratings, this time using schools instead of districts (I looked at the latter in my previous post about last year's results).

How Often Do Proficiency Rates And Average Scores Move In Different Directions?

New York State is set to release its annual testing data today. Throughout the state, and especially in New York City, we will hear a lot about changes in school and district proficiency rates. The rates themselves have advantages – they are easy to understand, comparable across grades and reflect a standards-based goal. But they also suffer severe weaknesses, such as their sensitivity to where the bar is set and the fact that proficiency rates and the actual scores upon which they’re based can paint very different pictures of student performance, both in a given year as well as over time. I’ve discussed this latter issue before in the NYC context (and elsewhere), but I’d like to revisit it quickly.

Proficiency rates can only tell you how many students scored above a certain line; they are completely uninformative as to how far above or below that line the scores might be. Consider a hypothetical example: A student who is rated as proficient in year one might make large gains in his or her score in year two, but this would not be reflected in the proficiency rate for his or her school – in both years, the student would just be coded as “proficient” (the same goes for large decreases that do not “cross the line”). As a result, across a group of students, the average score could go up or down while proficiency rates remained flat or moved in the opposite direction. Things are even messier when data are cross-sectional (as public data almost always are), since you’re comparing two different groups of students (see this very recent NYC IBO report).
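
Here is that hypothetical in miniature (the scores are invented purely for illustration): every student below the line improves substantially, the average rises, and the proficiency rate does not move at all.

```python
# Invented scores, chosen only to illustrate the point above: the average
# can rise while the proficiency rate stays flat, because the rate only
# "sees" movement across the cut score.
PROFICIENCY_CUTOFF = 65

year1_scores = [40, 55, 66, 70, 90]   # three of five at or above the cutoff
year2_scores = [52, 64, 66, 80, 95]   # gains below the line; still three of five

def summarize(scores):
    average = sum(scores) / len(scores)
    rate = 100.0 * sum(s >= PROFICIENCY_CUTOFF for s in scores) / len(scores)
    return average, rate

for label, scores in (("Year 1", year1_scores), ("Year 2", year2_scores)):
    average, rate = summarize(scores)
    print(f"{label}: average score = {average:.1f}, proficiency rate = {rate:.0f}%")
# Year 1: average score = 64.2, proficiency rate = 60%
# Year 2: average score = 71.4, proficiency rate = 60%
```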

Let’s take a rough look at how frequently rates and scores diverge in New York City.