Examining Principal Turnover

Our guest author today is Ed Fuller, Associate Professor in the Education Leadership Department at Penn State University. He is also the Director of the Center for Evaluation and Education Policy Analysis as well as the Associate Director for Policy of the University Council for Educational Administration.

“No one knows who I am,” exclaimed a senior at a high-poverty, predominantly minority, low-performing high school in the Austin area. She explained, “I have been at this school four years and have had four principals and six Algebra I teachers.”

Elsewhere in Texas, the first school to be closed by the state for low performance was Johnston High School, which was led by 13 principals in the 11 years preceding its closure. The school also had a teacher turnover rate greater than 25 percent in almost every one of those years, and greater than 30 percent in seven of them.

While the above examples are rather extreme cases, they do underscore two interconnected issues – teacher and principal turnover – that often plague low-performing schools and, in the case of principal turnover, afflict a wide range of schools regardless of performance or school demographics.

Cheating In Online Courses

Our guest author today is Dan Ariely, James B Duke Professor of Psychology and Behavioral Economics at Duke University, and author of the book The Honest Truth About Dishonesty (published by Harper Collins in June 2012).

A recent article in The Chronicle of Higher Education suggests that students cheat more in online than in face-to-face classes. The article tells the story of Bob Smith (not his real name, obviously), who was a student in an online science course.  Bob logged in once a week for half an hour in order to take a quiz. He didn’t read a word of his textbook, didn’t participate in discussions, and still he got an A. Bob pulled this off, he explained, with the help of a collaborative cheating effort. Interestingly, Bob is enrolled at a public university in the U.S., and claims to work diligently in all his other (classroom) courses. He doesn’t cheat in those courses, he explains, but with a busy work and school schedule, the easy A is too tempting to pass up.

Bob’s online cheating methods deserve some attention. He is representative of a population of students who have kept pace with their instructors’ efforts to prevent cheating online. The tests were designed to make cheating more difficult: students had a limited time to complete each quiz, and questions were drawn at random from a large test bank (so that no two students took exactly the same test).
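The countermeasures described above – a time limit plus questions drawn at random from a large bank – can be sketched in a few lines. This is a hypothetical illustration; the article does not describe the course’s actual quiz software, and the function and bank below are invented for the example:

```python
import random

def build_quiz(test_bank, num_questions, seed=None):
    """Draw a randomized quiz from a large test bank, so that no two
    students are likely to receive exactly the same set of questions."""
    rng = random.Random(seed)
    questions = rng.sample(test_bank, num_questions)  # sample without replacement
    rng.shuffle(questions)  # randomize question order per student as well
    return questions

# Hypothetical bank of 500 questions; each student receives 10 of them.
bank = [f"question {i}" for i in range(500)]
quiz_a = build_quiz(bank, 10, seed=1)  # one student's quiz
quiz_b = build_quiz(bank, 10, seed=2)  # another student's quiz
print(quiz_a == quiz_b)
```

With a bank this large, the overlap between any two students’ quizzes is small, which is precisely what makes the collaborative workaround the article describes (pooling the bank’s questions and answers) the natural countermove.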

Low-Income Students In The CREDO Charter School Study

A recent Economist article on charter schools, though slightly more nuanced than most mainstream media treatments of the charter evidence, contains a very common, somewhat misleading argument that I’d like to address quickly. It’s about the findings of the so-called “CREDO study,” the important (albeit over-cited) 2009 national comparison of student achievement in charter and regular public schools in 16 states.

Specifically, the article asserts that the CREDO analysis, which finds a statistically discernible but very small negative impact of charters overall (with wide underlying variation), also finds a significant positive effect among low-income students. This leads the Economist to conclude that the entire CREDO study “has been misinterpreted,” because its real value is in showing that “the children who most need charters have been served well.”

Whether or not an intervention affects outcomes among subgroups of students is obviously important (though one has hardly "misinterpreted" a study by focusing on its overall results). And CREDO does indeed find a statistically significant, positive test-based impact of charters on low-income students, vis-à-vis their counterparts in regular public schools. However, as discussed here (and in countless textbooks and methods courses), statistical significance only means we can be confident that the difference is non-zero (it cannot be chalked up to random fluctuation). Significant differences are often not large enough to be practically meaningful.
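The distinction can be seen in a toy simulation (invented numbers, not CREDO’s data): with a large enough sample, a difference of one-hundredth of a standard deviation – far too small to matter in practice – still clears the conventional significance threshold:

```python
import math
import random

random.seed(0)

# Two simulated groups of student scores whose true means differ by only
# 0.01 standard deviations (an invented, trivially small effect).
n = 1_000_000
group_a = [random.gauss(0.00, 1.0) for _ in range(n)]
group_b = [random.gauss(0.01, 1.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Two-sample z test for the difference in means.
diff = mean(group_b) - mean(group_a)
se = math.sqrt(variance(group_a) / n + variance(group_b) / n)
z = diff / se  # with n this large, z comfortably exceeds 1.96

print(f"difference: {diff:.4f} SD, z = {z:.2f}")
```

The difference is “significant” in the statistical sense, yet no one would call a hundredth of a standard deviation a meaningful gap between schools.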

And this is certainly the case with CREDO and low-income students.

The Data Are In: Experiments In Policy Are Worth It

Our guest author today is David Dunning, professor of psychology at Cornell University, and a fellow of both the American Psychological Society and the American Psychological Association. 

When I was a younger academic, I often taught a class on research methods in the behavioral sciences. On the first day of that class, I took as my mission to teach students only one thing—that conducting research in the behavioral sciences ages a person. I meant that in two ways. First, conducting research is humbling and frustrating. I cannot count the number of pet ideas I have had through the years, all of them beloved, that have gone to die in the laboratory at the hands of data unwilling to verify them.

But, second, there is another, more positive way in which research ages a person. At times, data come back and verify a cherished idea, or even reveal a more provocative or valuable one that no one had ever expected. It is a heady experience in those moments for the researcher to know something that perhaps no one else knows, to be wiser – more aged, if you will – in a small corner of the human experience that he or she cares about deeply.

Share My Lesson: The Imperative Of Our Profession

Leo Casey, UFT vice president for academic high schools, will succeed Eugenia Kemble as executive director of the Albert Shanker Institute, effective this fall.

"You want me to teach this stuff, but I don't have the stuff to teach." So opens "Lost at Sea: New Teachers' Experiences with Curriculum and Assessment," a 2002 paper by Harvard University researchers about the plight of new teachers trying to learn the craft of teaching in the face of insubstantial curriculum frameworks and inadequate instructional materials.

David Kauffman, Susan Moore Johnson and colleagues interviewed a diverse collection of first- and second-year teachers in Massachusetts who reported that, despite state academic standards widely acknowledged to be some of the best in the country, they received “little or no guidance about what to teach or how to teach it. Left to their own devices they struggled day to day to prepare content and materials. The standards and accountability environment created a sense of urgency for these teachers but did not provide them with the support they needed.”

I found myself thinking about this recently when I realized that, with the advent of the Common Core State Standards, new teachers won’t be the only ones in this boat. Much of the country is on a fast-track toward implementation, but with little thought about how to provide teachers with the “stuff” – aligned professional development, curriculum frameworks, model lesson plans, quality student materials, formative assessments, and so on – that they will need to implement the standards well.

When Push Comes To Pull In The Parent Trigger Debate

The so-called “parent trigger,” the policy by which a majority of a school’s parents can decide to convert it to a charter school, seems to be getting a lot of attention lately.

Advocates describe the trigger as “parent empowerment," a means by which parents of students stuck in “failing schools” can take direct action to improve the lives of their kids. Opponents, on the other hand, see it as antithetical to the principle of schools as a public good – parents don’t own schools, the public does. And important decisions such as charter conversion, which will have a lasting impact on the community as a whole (including parents of future students), should not be made by a subgroup of voters.

These are both potentially appealing arguments. In many cases, however, attitudes toward the parent trigger seem more than a little dependent upon attitudes toward charter schools in general. If you strongly support charters, you’ll tend to be pro-trigger, since there’s nothing to lose and everything to gain. If you oppose charter schools, on the other hand, the opposite is likely to be the case. There’s a degree to which it’s not the trigger itself but rather what’s being triggered – opening more charter schools – that’s driving the debate.

The Busy Intersection Of Test-Based Accountability And Public Perception

Last year, the New York City Department of Education (NYCDOE) rolled out its annual testing results for the city’s students in a rather misleading manner. The press release touted the “significant progress” between 2010 and 2011 among city students, while, at a press conference, Mayor Michael Bloomberg called the results “dramatic.” In reality, however, the increase in proficiency rates (1-3 percentage points) was very modest, and, more importantly, the focus on the rates hid the fact that actual scale scores were either flat or decreased in most grades. In contrast, one year earlier, when the city’s proficiency rates dropped due to the state raising the cut scores, Mayor Bloomberg told reporters (correctly) that it was the actual scores that “really matter.”

Most recently, in announcing its 2011 graduation rates, the city did it again. The headline of the NYCDOE press release proclaims that “a record number of students graduated from high school in 2011.” This may be technically true, but the actual increase in the rate (rather than the number of graduates) was 0.4 percentage points, which is basically flat (as several reporters correctly noted). In addition, the city’s “college readiness rate” was similarly stagnant, falling slightly from 21.4 percent to 20.7 percent, while the graduation rate increase was higher both statewide and in New York State’s four other large districts (the city makes these comparisons when they are favorable).*

We've all become accustomed to this selective, exaggerated presentation of testing data, which is of course not at all limited to NYC. And it illustrates the obvious fact that test-based accountability plays out in multiple arenas, formal and informal, including the court of public opinion.

Colorado's Questionable Use Of The Colorado Growth Model

I have been writing critically about states’ school rating systems (e.g., Ohio, Florida, Louisiana), and I thought I would find one that is, at least in my (admittedly value-laden) opinion, more defensibly designed. It didn’t quite turn out as I had hoped.

One big starting point in my assessment is how heavily the systems weight absolute performance (how highly students score) versus growth (how quickly students improve). As I’ve argued many times, the former (absolute level) is a poor measure of school performance in a high-stakes accountability system. It does not address the fact that some schools, particularly those in more affluent areas, serve students who, on average, enter the system at a higher performance level. This amounts to holding schools accountable for outcomes they largely cannot control (see Doug Harris’ excellent book for more on this in the teacher context). Thus, to whatever degree testing results can be used to judge actual school effectiveness, growth measures, while themselves highly imperfect, are to be preferred in a high-stakes context.

A few states assign more weight to growth than to absolute performance (see this prior post on New York City’s system). One of them is Colorado, whose system uses the well-known “Colorado Growth Model” (CGM).*

In my view, putting aside the inferential issues with the CGM (see the first footnote), the focus on growth in Colorado’s system is, in theory, a good idea. But looking at the data and documentation reveals a somewhat unsettling fact: there is a double standard of sorts, by which two schools with the same growth score can receive different ratings, and it is mostly their absolute performance levels that determine which rating they receive.

Do Charter Schools Serve Fewer Special Education Students?

A new report from the U.S. Government Accountability Office (GAO) provides one of the first large-scale comparisons of special education enrollment between charter and regular public schools. The report’s primary finding, which, predictably, received a fair amount of attention, is that roughly 11 percent of students enrolled in regular public schools were on special education plans in 2009-10, compared with just 8 percent of charter school students.

The GAO report’s authors are very careful to note that their findings merely describe what you might call the “service gap” – i.e., the proportion of special education students served by charters versus regular public schools – but that they do not indicate the reasons for this disparity.

This is an important point, but I would take the warning a step further: the national- and state-level gaps themselves should be interpreted with extreme caution.

Louisiana's "School Performance Score" Doesn't Measure School Performance

Louisiana’s “School Performance Score” (SPS) is the state’s primary accountability measure, and it determines whether schools are subject to high-stakes decisions, most notably state takeover. For elementary and middle schools, 90 percent of the SPS is based on testing outcomes. For secondary schools, testing counts for 70 percent, with the remaining 30 percent based on graduation rates.*

The SPS is largely calculated using absolute performance measures – specifically, the proportion of students falling into the state’s cutpoint-based categories (e.g., advanced, mastery, basic, etc.). This means that it is mostly measuring student performance, rather than school performance. That is, insofar as the SPS only tells you how high students score on the test, rather than how much they have improved, schools serving more advantaged populations will tend to do better (since their students tend to perform well when they enter the school), while those in impoverished neighborhoods will tend to do worse (even those whose students have made the largest testing gains).

One rough way to assess this bias is to check the association between SPS and student characteristics, such as poverty. So let’s take a quick look.
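The kind of check I have in mind looks something like this: compute the correlation between each school’s rating and its poverty rate. The sketch below uses simulated schools, not Louisiana’s actual data, and the strength of the poverty-score relationship is invented purely for illustration:

```python
import random

random.seed(42)

# Simulated schools: an absolute-performance score that is partly driven
# by the school's poverty rate (hypothetical data, invented relationship).
n_schools = 500
poverty = [random.uniform(0.0, 1.0) for _ in range(n_schools)]
score = [100 - 40 * p + random.gauss(0, 10) for p in poverty]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(poverty, score)
print(f"correlation between poverty and absolute score: r = {r:.2f}")
```

A strongly negative correlation between an accountability rating and poverty is exactly the signature of a measure that is picking up student background rather than school effectiveness.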