K-12 Education

  • To Seek Common Ground On Life's Big Questions, We Need Science Literacy

    Our guest author today is Jonathan Garlick, Director of the Division of Cancer Biology and Tissue Engineering at the School of Dental Medicine at Tufts University. This article was originally published on The Conversation.

    Science isn’t important only to scientists or those who profess an interest in it. Whether you find every newly reported discovery fascinating or you stopped taking science classes as soon as you could, a basic understanding of science is crucial for modern citizens; it grounds their engagement in the national conversation about science-related issues.

    We need to look no further than the Ebola crisis to appreciate the importance of science literacy. A recently elected senator has linked sealing the US-Mexican border with keeping Ebola out of the US, even though the disease is nonexistent in Mexico. Four out of 10 Americans believe there will be a large-scale Ebola epidemic here, even though there have been just four cases in the US and only one fatality. Flu, on the other hand, which killed over 100 children here last winter, barely registers in the public consciousness.

    Increasingly, we must grapple with highly charged and politicized science-based issues ranging from infectious diseases and human cloning to reproductive choices and climate change. Yet many – perhaps even the majority – of Americans aren’t sufficiently scientifically literate to make sense of these complicated issues. For instance, in one recent survey of public attitudes toward and understanding of science and technology, Americans barely got a passing grade, answering an average of only 5.8 out of 9 factual knowledge questions correctly.

  • The Accessibility Conundrum In Accountability Systems

    One of the major considerations in designing accountability policy, whether in education or other fields, is what you might call accessibility. That is, both the indicators used to construct measures and how they are calculated should be reasonably easy for stakeholders to understand, particularly if the measures are used in high-stakes decisions.

    This important consideration also generates great tension. For example, complaints that Florida’s school rating system is “too complicated” have prompted legislators to make changes over the years. Similarly, other tools – such as procedures for scoring and establishing cut points for standardized tests, and particularly the use of value-added models – are routinely criticized as too complex for educators and other stakeholders to understand. There is an implicit argument underlying these complaints: If people can’t understand a measure, it should not be used to hold them accountable for their work. Supporters of using these complex accountability measures, on the other hand, contend that it’s more important for the measures to be “accurate” than easy to understand.

    I personally am a bit torn. Given the extreme importance of accountability systems’ credibility among those subject to them, not to mention the fact that performance evaluations must transmit accessible and useful information in order to generate improvements, there is no doubt that overly complex measures can pose a serious problem for accountability systems. It might be difficult for practitioners to adjust their practice based on a measure if they don't understand that measure, and/or if they are unconvinced that the measure is transmitting meaningful information. And yet, the fact remains that measuring the performance of schools and individuals is extremely difficult, and simplistic measures are, more often than not, inadequate for these purposes.

  • New Research On School Discipline

    School discipline was one of the most prominent education issues this year. A major theme within the discipline conversation has been the large discipline disparities by race/ethnicity and gender, which are exhibited as early as pre-K. These disparities drew attention to the important issue of implicit bias – i.e., the idea that we all harbor unconscious attitudes that tend to favor individuals from some groups (whites, males), while putting others (people of color, women) at a disadvantage. This research, which the Kirwan Institute has reviewed in great depth, strongly suggests that a double standard exists – one that is more lenient toward white students and girls – when assessing and addressing challenging student behaviors.

    A second area of focus has been the shortcomings of policies such as "zero tolerance," which have been shown to be ineffective at establishing order and injurious to suspended or expelled students – who, as a result, are more likely to fall behind academically, drop out of school, and/or become disconnected from the educational system. Nevertheless, many still believe that harsh policies are sometimes necessary to keep the majority of students safe, maintain order and establish a positive school climate. So, do suspensions and expulsions really help create an environment conducive to learning for all students?

    A new paper by Brea L. Perry and Edward W. Morris, published in the most recent issue of American Sociological Review, suggests that harsh discipline practices actually aren't good for anyone, including non-suspended students.

  • Constitution For Effective School Governance

    Our guest author today is Kenneth Frank, professor in Measurement and Quantitative Methods at the Department of Counseling, Educational Psychology and Special Education at Michigan State University.

    Maybe it’s because I grew up in Michigan, but when I think of how to improve schools, I think about the “Magic Johnson effect." During his time at Michigan State, Earvin “Magic” Johnson scored an average of 17 points per game. Good, but many others have had higher averages. Yet, I would want Magic Johnson on my team because he made everyone around him better. Similarly, the best teachers may be those who make everyone around them better. This way of thinking is not the focus of many current educational reforms, which draw on individual competition and market metaphors.

    So how can we leverage the Magic Johnson effect to make schools better? We have to think of ways that teachers can work together. This might be in terms of co-teaching, sharing materials, or taking the time to engage one another in honest professional dialogues. There is considerable evidence that teachers who can draw on the expertise of colleagues are better able to implement new practices. There is also evidence that, when there is an atmosphere of trust, teachers can engage in honest dialogues that improve teaching practices and student achievement (e.g., Bryk and Schneider, 2002).

  • Is Teaching More Like Baseball Or Basketball?

    ** Republished here in the Washington Post

    Earlier this year, a paper by Roderick I. Swaab and colleagues received considerable media attention (e.g., see here, here, and here). The research questioned the widely shared belief that bringing together the most talented individuals always produces the best result. The authors looked at various types of sports data (e.g., player characteristics and behavior, team performance, etc.), and were able to demonstrate that there is such a thing as “too much talent," and that having too many superstars can hurt overall team performance, at least when the sport requires cooperation among team members.

    My immediate questions after reading the paper were: Do these findings generalize outside the world of sports and, if so, what might be the implications for education? To my surprise, I did not find much commentary or analysis addressing them. I am sure not everybody saw the paper, but I also wonder if this absence might have something to do with how teaching is generally viewed: More like baseball (i.e., a more individualistic team sport) than, say, like basketball. But in our social side of education reform series, we have been discussing a wealth of compelling research suggesting that teaching is not individualistic at all, and that schools thrive on trusting relationships and cooperation, rather than competition and individual prowess.

    So, if teaching is indeed more like basketball than like baseball, what are the implications of this study for strategies and policies aimed at identifying, developing and supporting teaching quality?

  • PISA And TIMSS: A Distinction Without A Difference?

    Our guest author today is William Schmidt, a University Distinguished Professor and co-director of the Education Policy Center at Michigan State University. He is also a member of the Shanker Institute board of directors.

    Every year or two, the mass media is full of stories on the latest iterations of one of the two major international large-scale assessments, the Trends in International Mathematics and Science Study (TIMSS) and the Program for International Student Assessment (PISA). What perplexes many is that the results of these two tests -- both well-established and run by respectable, experienced organizations -- suggest different conclusions about the state of U.S. mathematics education. Generally speaking, U.S. students do better on the TIMSS and worse on the PISA, relative to their peers in other nations. Depending on their personal preferences, policy advocates can simply choose whichever test result is convenient to press their argument, leaving the general public without clear guidance.

    Now, in one sense, the differences between the tests are more apparent than real. One reason why the U.S. ranks better on the TIMSS than the PISA is that the two tests sample students from different sets of countries. The PISA has many more wealthy countries, whose students tend to do better – hence, the U.S.’s lower ranking. It turns out that when looking at only the countries that participated in both the TIMSS and the PISA we find similar country rankings. There are also some differences in statistical sampling, but these are fairly minor.

  • A Descriptive Analysis Of The 2014 D.C. Charter School Ratings

    The District of Columbia Public Charter School Board (PCSB) recently released the 2014 results of their “Performance Management Framework” (PMF), which is the rating system that the PCSB uses for its schools.

    Very quick background: This system sorts schools into one of three “tiers," with Tier 1 being the highest-performing, as measured by the system, and Tier 3 being the lowest. The ratings are based on a weighted combination of four types of factors -- progress, achievement, gateway, and leading -- which are described in detail in the first footnote.* As discussed in a previous post, the PCSB system, in my opinion, is better than many others out there, since growth measures play a fairly prominent role in the ratings, and, as a result, the final scores are only moderately correlated with key student characteristics such as subsidized lunch eligibility.** In addition, the PCSB is quite diligent about making the PMF results accessible to parents and other stakeholders, and, for the record, I have found the staff very open to sharing data and answering questions.

    That said, PCSB's big message this year was that schools’ ratings are improving over time, and that, as a result, a substantially larger proportion of DC charter students are attending top-rated schools. This was reported uncritically by several media outlets, including this story in the Washington Post. It is also based on a somewhat questionable use of the data. Let’s take a very simple look at the PMF dataset, first to examine this claim and then, more importantly, to see what we can learn about the PMF and DC charter schools in 2013 and 2014.

  • Feeling Socially Connected Fuels Intrinsic Motivation And Engagement

    Our "social side of education reform" series has emphasized that teaching is a cooperative endeavor, and as such is deeply influenced by the quality of a school's social environment -- i.e., trusting relationships, teamwork and cooperation. But what about learning? To what extent are dispositions such as motivation, persistence and engagement mediated by relationships and the social-relational context?

    This is, of course, a very complex question, which can't be addressed comprehensively here. But I would like to discuss three papers that provide some important answers. In terms of our "social side" theme, the studies I will highlight suggest that efforts to improve learning should include and leverage social-relational processes, such as how learners perceive (and relate to) -- how they think they fit into -- their social contexts. Finally, this research, particularly the last paper, suggests that translating this knowledge into policy may be less about top down, prescriptive regulations and more about what Stanford psychologist Gregory M. Walton has called "wise interventions" -- i.e., small but precise strategies that target recursive processes (more below).

    The first paper, by Lucas P. Butler and Gregory M. Walton (2013), describes the results of two experiments testing whether the perceived collaborative nature of an activity that was done individually would cause greater enjoyment of and persistence on that activity among preschoolers.

  • Rethinking The Use Of Simple Achievement Gap Measures In School Accountability Systems

    So-called achievement gaps – the differences in average test performance among student subgroups, usually defined in terms of ethnicity or income – are important measures. They demonstrate persistent inequality of educational outcomes and economic opportunities among different groups in our society.

    So long as these gaps remain, it means that historically lower-performing subgroups (e.g., low-income students or ethnic minorities) are less likely to gain access to higher education, good jobs, and political voice. We should monitor these gaps; try to identify all the factors that affect them, for good and for ill; and endeavor to narrow them using every appropriate policy lever – both inside and outside of the educational system.

    Achievement gaps have also, however, taken on a very different role over the past 10 or so years. The sizes of gaps, and extent of “gap closing," are routinely used by reporters and advocates to judge the performance of schools, school districts, and states. In addition, gaps and gap trends are employed directly in formal accountability systems (e.g., states’ school grading systems), in which they are conceptualized as performance measures.

    Although simple measures of the magnitude of or changes in achievement gaps are potentially very useful in several different contexts, they are poor gauges of school performance, and shouldn’t be the basis for high-stakes rewards and punishments in any accountability system.

  • Multiple Measures And Singular Conclusions In A Twin City

    A few weeks ago, the Minneapolis Star Tribune published teacher evaluation results for the district’s public school teachers in 2013-14. This decision generated a fair amount of controversy, but it’s worth noting that the Tribune, unlike the Los Angeles Times and New York City newspapers a few years ago, did not publish scores for individual teachers, only totals by school.

    The data once again provide an opportunity to take a look at how results vary by student characteristics. This was indeed the focus of the Tribune’s story, which included the following headline: “Minneapolis’ worst teachers are in the poorest schools, data show." These types of conclusions, which simply take the results of new evaluations at face value, have characterized the discussion since the first new systems came online. Though understandable, they are also frustrating and a potential impediment to the policy process. At this early point, “the city’s teachers with the lowest evaluation ratings” is not the same thing as “the city’s worst teachers." Actually, as discussed in a previous post, the systematic variation in evaluation results by student characteristics, which the Tribune uses to draw conclusions about the distribution of the city’s “worst teachers," could just as easily be viewed as one of the many ways that one might assess the properties and even the validity of those results.

    So, while there are no clear-cut "right" or "wrong" answers here, let’s take a quick look at the data and what they might tell us.