• A New Focus On Social Capital In School Reform Efforts

    ** Reprinted here in the Washington Post

    Our guest authors today are Carrie R. Leana, George H. Love Professor of Organizations and Management, Professor of Business Administration, Medicine, and Public and International Affairs, and Director of the Center for Health and Care Work at the University of Pittsburgh, and Frits K. Pil, Professor of Business Administration at the Katz Graduate School of Business and research scientist at the Learning Research and Development Center, University of Pittsburgh. This column is part of The Social Side of Reform Shanker Blog series.

    Most current models of school reform focus on teacher accountability for student performance measured via standardized tests, “improved” curricula, and what economists label “human capital” – factors such as teacher experience, subject knowledge, and pedagogical skills. But our research over many years in several large school districts suggests that if students are to show real and sustained learning, schools must also foster what sociologists label “social capital” – the value embedded in relations among teachers, and between teachers and school administrators. Social capital is the glue that holds a school together. It complements teacher skill, it enhances teachers’ individual classroom efforts, and it enables collective commitment to bring about school-wide change.

    We are professors at a leading business school who have conducted research in a broad array of settings, ranging from steel mills and auto plants to insurance offices, banks, and even nursing homes. We examine how formal and informal work practices enhance organizational learning and performance. What we have found over and over again is that, regardless of context, organizational success rarely stems from the latest technology or a few exemplary individuals.

  • Redesigning Florida's School Report Cards

    The Foundation for Excellence in Education, an organization that advocates for education reform in Florida (in particular, the set of policies sometimes called the “Florida Formula”), recently announced a competition to redesign the “appearance, presentation and usability” of the state’s school report cards. Winners of the competition will share prize money totaling $35,000.

    The contest seems like a great idea. Improving the manner in which education data are presented is, of course, a laudable goal, and an open competition could potentially attract a diverse group of talented people. As regular readers of this blog know, I am not opposed to sensibly designed test-based accountability policies, but my concerns about school rating systems focus mostly on the quality and interpretation of the measures used therein. So, while I support the idea of a competition for improving the design of the report cards, I am hoping that the end result won't just be a very attractive, clever instrument devoted to the misinterpretation of testing data.

    In this spirit, I would like to submit four simple graphs that illustrate, as clearly as possible and using the latest data from 2014, what Florida’s school grades are actually telling us. Since the scoring and measures vary somewhat across different types of schools, let’s focus on elementary schools.

  • Attitudes Toward Education And Hard Work In Post-Communist Poland

    The following is written by Kinga Wysieńska-Di Carlo and Matthew Di Carlo. Wysieńska-Di Carlo is an Assistant Professor of Sociology in the Institute of Philosophy and Sociology at the Polish Academy of Sciences.

    Economic returns to education -- that is, the value of investment in education, principally in terms of better jobs, earnings, etc. -- rightly receive a great deal of attention in the U.S., as well as in other nations. But it is also useful to examine what people believe about the value and importance of education, as these perceptions influence, among other outcomes, individuals’ decisions to pursue additional schooling.

    When it comes to beliefs regarding whether education and other factors contribute to success, economic or otherwise, Poland is a particularly interesting nation. Poland underwent a dramatic economic transformation during and after the collapse of Communism (you can read about Al Shanker’s role here). An aggressive program of reform, sometimes described as “shock therapy,” dismantled the planned socialist economy and built a market economy in its place. Needless to say, actual conditions in a nation can influence and reflect attitudes about those conditions (see, for example, Kunovich and Słomczyński 2007 for a cross-national analysis of pro-meritocratic beliefs).

    This transition in Poland fundamentally reshaped the relationships between education, employment and material success. In addition, it is likely to have influenced Poles’ perception of these dynamics. Let’s take a look at Polish survey data since the transformation, focusing first on Poles’ perceptions of the importance of education for one’s success.

  • Building And Sustaining Research-Practice Partnerships

    Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. Bill is co-Principal Investigator of the Research+Practice Collaboratory (funded by the National Science Foundation) and of a study about research use in research-practice partnerships (supported by the William T. Grant Foundation). This is the second of two posts on research-practice partnerships - read part one here; both posts are part of The Social Side of Reform Shanker Blog series.

    In my first post on research-practice partnerships, I highlighted the need for partnerships and pointed to some potential benefits of long-term collaborations between researchers and practitioners. But how do you know when an arrangement between researchers and practitioners is a research-practice partnership? Where can people go to learn about how to form and sustain research-practice partnerships? Who funds this work?

    In this post I answer these questions and point to some resources researchers and practitioners can use to develop and sustain partnerships.

  • What You Need To Know About Misleading Education Graphs, In Two Graphs

    There’s no reason why insisting on proper causal inference can’t be fun.

    A few weeks ago, ASCD published a policy brief (thanks to Chad Aldeman for flagging it), the purpose of which is to argue that it is “grossly misleading” to make a “direct connection” between nations’ test scores and their economic strength.

    On the one hand, it’s implausible to assert that better-educated nations aren’t stronger economically. On the other hand, I can certainly respect the argument that test scores are an imperfect, incomplete measure, and that the doomsday rhetoric can sometimes get out of control.

    In any case, though, the primary piece of evidence put forth in the brief was the eye-catching graph below, which presented trends in NAEP scores versus those in U.S. GDP and productivity.

  • The Common Core And Failing Schools

    In observing the recent controversy surrounding the Common Core State Standards (CCSS), I have noticed a frequent criticism from one of the anti-CCSS camps, particularly since the first rounds of results from CCSS-aligned tests began to be released: the standards will be used to label more schools as “failing,” thus ramping up the test-based accountability regime in U.S. public education.

    As someone who is very receptive to a sensible, well-designed dose of test-based accountability, but sees so little of it in current policy, I am more than sympathetic to concerns about the proliferation and misuse of high-stakes testing. On the other hand, anti-CCSS arguments that focus on testing or testing results are not really arguments against the standards per se. They also strike me as ironic, as they are based on the same flawed assumptions that critics of high-stakes testing should be opposing.

    Standards themselves are about students. They dictate what students should know at different points in their progression through the K-12 system. Testing whether students meet those standards makes sense, but how we use those test results is not dictated by the standards. Nor do standards require us to set bars for “proficient,” “advanced,” etc., using the tests.

  • Regular Public And Charter Schools: Is A Different Conversation Possible?

    Uplifting Leadership, Andrew Hargreaves' new book with coauthors Alan Boyle and Alma Harris, is based on a seven-year international study and illustrates how leaders from diverse organizations were able to lift up their teams by harnessing and balancing qualities that we often view as opposites, such as dreaming and action, creativity and discipline, and measurement and meaningfulness.

    Chapter three, “Collaboration With Competition,” was particularly interesting to me and relevant to our series, “The Social Side of Reform.” In that series, we've been highlighting research that emphasizes the value of collaboration and considers extreme competition to be counterproductive. But is that always the case? Can collaboration and competition live under the same roof and, in combination, promote systemic improvement? Could, for example, different types of schools serving (or competing for) the same students work in cooperative ways for the greater good of their communities?

    Hargreaves and colleagues believe that establishing this environment is difficult but possible, and that it has already happened in some places. In fact, Al Shanker was one of the first proponents of a model that bears some similarity. In this post, I highlight some ideas and illustrations from Uplifting Leadership and tie them to Shanker's own vision of how charter schools, conceived as idea incubators and, eventually, as innovations within the public school system, could potentially lift all students and the entire system, from the bottom up, one group of teachers at a time.

  • The Superintendent Factor

    One of the more visible manifestations of what I have called “informal test-based accountability” -- that is, how testing results play out in the media and public discourse -- is the phenomenon of superintendents, particularly big city superintendents, making their reputations based on the results during their administrations.

    In general, big city superintendents are expected to promise large testing increases, and their success or failure is to no small extent judged on whether those promises are fulfilled. Several superintendents almost seem to have built entire careers on a few (misinterpreted) points in proficiency rates or NAEP scale scores. This particular phenomenon, in my view, is rather curious. For one thing, any district leader will tell you that many of their core duties, such as improving administrative efficiency, communicating with parents and the community, strengthening the district’s financial situation, etc., might have little or no impact on short-term testing gains. In addition, even those policies that do have such an impact often take many years to show up in aggregate results.

    In short, judging superintendents based largely on the testing results during their tenures seems misguided. A recent report issued by the Brown Center at Brookings, and written by Matt Chingos, Grover Whitehurst and Katharine Lindquist, adds a little bit of empirical insight to this viewpoint.

  • The Fatal Flaw Of Education Reform

    In the most simplistic portrayal of the education policy landscape, one of the “sides” is a group of people who are referred to as “reformers.” Though far from monolithic, these people tend to advocate for test-based accountability, charters/choice, overhauling teacher personnel rules, and other related policies, with a particular focus on high expectations, competition and measurement. They also frequently see themselves as in opposition to teachers’ unions.

    Most of the “reformers” I have met and spoken with are not quite so easy to categorize. They are also thoughtful and open to dialogue, even when we disagree. And, at least in my experience, there is far more common ground than one might expect.

    Nevertheless, I believe that this “movement” (to whatever degree you can characterize it in those terms) may be doomed to stall out in the long run, not because reformers’ ideas are all bad, and certainly not because they lack the political skills and resources to get their policies enacted. Rather, they risk failure for a simple reason: They too often make promises that they cannot keep.

  • The Great Teacher Evaluation Evaluation: New York Edition

    A couple of weeks ago, the New York State Education Department (NYSED) released data from the first year of the state's new teacher and principal evaluation system (called the “Annual Professional Performance Review,” or APPR). In what has become a familiar pattern, this prompted a wave of criticism from advocates, much of it focused on the proportion of teachers in the state receiving the lowest ratings.

    To be clear, evaluation systems that produce non-credible results should be examined and improved, and that includes those that put implausible proportions of teachers in the highest and lowest categories. Much of the commentary surrounding this and other issues has been thoughtful and measured. As usual, though, there have been some oversimplified reactions, as exemplified by this piece on the APPR results from Students First NY (SFNY).

    SFNY notes what it considers to be the low proportion of teachers rated “ineffective,” and points out that there was more differentiation across rating categories for the state growth measure (worth 20 percent of teachers’ final scores), compared with the local “student learning” measure (20 percent) and the classroom observation components (60 percent). Based on this, they conclude that New York’s “state test is the only reliable measure of teacher performance” (they are actually talking about validity, not reliability, but we’ll let that go). Again, this argument is not representative of the commentary surrounding the APPR results, but let’s use it as a springboard for making a few points, most of which are not particularly original. (UPDATE: After publication of this post, SFNY changed the headline of their piece from “the only reliable measure of teacher performance” to “the most reliable measure of teacher performance.”)
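
    To make the arithmetic of those weights concrete, here is a minimal sketch in Python of how a 20/20/60 composite might be assembled and mapped to a rating category. The point scales, function names, and cutoffs below are illustrative assumptions, not the official APPR/NYSED scoring rules.

    # Illustrative sketch of a 20/20/60 composite evaluation score.
    # Point scales and rating cutoffs are assumptions for illustration,
    # not the official APPR/NYSED rules.

    def composite_score(growth, local_measure, observation):
        """Sum subscores on 20-, 20-, and 60-point scales into a 0-100 composite."""
        assert 0 <= growth <= 20 and 0 <= local_measure <= 20 and 0 <= observation <= 60
        return growth + local_measure + observation

    def rating(score):
        """Map a 0-100 composite to a rating category (illustrative cutoffs)."""
        if score >= 91:
            return "Highly Effective"
        if score >= 75:
            return "Effective"
        if score >= 65:
            return "Developing"
        return "Ineffective"

    # Example: 14/20 growth + 15/20 local + 53/60 observation = 82 points,
    # which falls in the "Effective" band under these cutoffs.
    print(rating(composite_score(14, 15, 53)))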