• Why Teachers And Researchers Should Work Together For Improvement

    Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. This is the first of two posts on research-practice partnerships; both are part of The Social Side of Education Reform series.

    Policymakers are asking a lot of public school teachers these days, especially when it comes to the shifts in teaching and assessment required to implement new, ambitious standards for student learning. Teachers want and need more time and support to make these shifts. A big question is: What kinds of support and guidance can educational research and researchers provide?

    Unfortunately, that question is not easy to answer. Most educational researchers spend much of their time answering questions that are of more interest to other researchers than to practitioners. Even if researchers did focus on questions of interest to practitioners, teachers and teacher leaders would still need answers more quickly than researchers can provide them. And when researchers and practitioners do try to work together on problems of practice, it takes a while for them to get on the same page about what those problems are and how to solve them. It’s almost as if researchers and practitioners occupy two different cultural worlds.

  • Research And Policy On Paying Teachers For Advanced Degrees

    There are three general factors that determine most public school teachers’ base salaries (which are usually laid out in a table called a salary schedule). The first is where they teach; districts vary widely in how much they pay. The second factor is experience. Salary schedules normally grant teachers “step raises” or “increments” each year they remain in the district, though these raises end at some point (when teachers reach the “top step”).

    The third typical factor is the teacher’s level of education. Usually, teachers receive a permanent raise for completing additional education beyond their bachelor’s degree. Most commonly, this means a master’s degree, which roughly half of teachers have earned (though most districts also award raises for accumulating a certain number of credits toward a master’s and/or a Ph.D., and for earning the Ph.D. itself). The size of the master’s raise varies, but, to give a rough idea, it averages about 10 percent over the base salary of bachelor’s-only teachers.
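    To make this structure concrete, here is a minimal sketch of a “steps and lanes” salary schedule. All of the dollar figures and percentages below are hypothetical placeholders, not drawn from any actual district’s schedule.

```python
# A minimal sketch of a teacher salary schedule ("steps and lanes").
# All numbers are hypothetical, for illustration only.

BASE_SALARY = 45_000        # hypothetical base: bachelor's degree, step 0
STEP_RAISE = 0.02           # hypothetical 2% raise per year of experience
TOP_STEP = 15               # step raises stop at the "top step"
EDUCATION_LANES = {         # permanent raises over the BA-only base
    "BA": 0.00,
    "BA+30": 0.05,          # credits accumulated toward a master's
    "MA": 0.10,             # roughly the 10% average cited above
    "PhD": 0.15,
}

def base_salary(years_experience: int, education: str) -> float:
    """Base pay as a function of experience (steps) and education (lanes)."""
    step = min(years_experience, TOP_STEP)   # no raises past the top step
    return BASE_SALARY * (1 + STEP_RAISE * step) * (1 + EDUCATION_LANES[education])

print(round(base_salary(5, "BA")))   # 49500
print(round(base_salary(5, "MA")))   # 54450 -- about 10 percent more
```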

    This practice of awarding raises for teachers who earn master’s degrees has come under tremendous fire in recent years. The basic argument is that these raises are expensive, but that having a master’s degree is not associated with test-based effectiveness (i.e., is not correlated with scores from value-added models of teachers’ estimated impact on their students’ testing performance). Many advocates argue that states and districts should simply cease giving teachers raises for advanced degrees, since, they say, it makes no sense to pay teachers for a credential that is not associated with higher performance. North Carolina, in fact, passed a law last year ending these raises, and there is talk of doing the same elsewhere.

  • A Quick Look At The ASA Statement On Value-Added

    Several months ago, the American Statistical Association (ASA) released a statement on the use of value-added models in education policy. I’m a little late getting to this (and might be repeating points that others made at the time), but I wanted to comment on the statement, not only because I think it’s useful to have the ASA add its perspective to the debate on this issue, but also because the statement seems to have become one of the staple citations for those who oppose the use of these models in teacher evaluations and other policies.

    Some of these folks claimed that the ASA supported their viewpoint – i.e., that value-added models should play no role in accountability policy. I don’t agree with this interpretation. To be sure, the ASA authors described the limitations of these estimates and urged caution, but I think the statement rather explicitly reaches a more nuanced conclusion: that value-added estimates might play a useful role in education policy, as one among several measures used in formal accountability systems, but that this must be done carefully and appropriately.*

    Much of the statement puts forth the standard, albeit important, points about value-added (e.g., moderate stability between years/models, potential for bias, etc.). But there are, from my reading, three important takeaways that bear on the public debate about the use of these measures and that are not always so widely acknowledged.
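    For readers unfamiliar with the mechanics, the toy sketch below illustrates the basic logic of a value-added estimate in its simplest form: students’ current scores are predicted from their prior scores, and a teacher’s “effect” is the average amount by which his or her students beat that prediction. The data here are simulated, and real models (including those the ASA statement addresses) use multiple years of scores, student covariates, and statistical shrinkage.

```python
# A deliberately stripped-down illustration of a value-added estimate.
# Data are simulated; real models are far more elaborate.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_teachers = 300, 10

teacher = rng.integers(0, n_teachers, n_students)    # teacher assignment
prior = rng.normal(0, 1, n_students)                 # last year's score
true_effect = rng.normal(0, 0.2, n_teachers)         # unobserved teacher impact
current = 0.7 * prior + true_effect[teacher] + rng.normal(0, 0.5, n_students)

# Step 1: predict current scores from prior scores (simple regression).
slope, intercept = np.polyfit(prior, current, 1)
predicted = intercept + slope * prior

# Step 2: a teacher's "value-added" is the average residual -- how much
# better or worse their students did than prior scores alone would predict.
residual = current - predicted
value_added = [residual[teacher == j].mean() for j in range(n_teachers)]
print(np.round(value_added, 2))
```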

  • The Thrill Of Success, The Agony Of Measurement

    ** Reprinted here in the Washington Post

    The recent release of the latest New York State testing results handed a small public relations coup to the controversial Success Academies charter chain, which operates more than 20 schools in New York City and is seeking to expand.

    Shortly after the release of the data, the New York Post published a laudatory article noting that seven of the Success Academies had overall proficiency rates that were among the highest in the state, and arguing that the schools “live up to their name.” The Daily News followed up with an op-ed that compared the Success Academies’ combined 94 percent math proficiency rate to the overall city rate of 35 percent, and used that comparison to argue that the chain should be allowed to expand because its students “aced the test” (this is not really what high proficiency rates mean, but fair enough).

    On the one hand, this is great news, and a wonderfully impressive showing by these students. On the other, decidedly less sensational hand, it's also another example of the use of absolute performance indicators (e.g., proficiency rates) as measures of school rather than student performance, despite the fact that they are not particularly useful for the former purpose since, among other reasons, they do not account for where students start out upon entry to the school. I personally don't care whether Success Academy gets good or bad press. I do, however, believe that how one gauges effectiveness, test-based or otherwise, is important, even if one reaches the same conclusion using different measures.
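    A toy calculation (all numbers invented) makes the point: two schools can produce identical growth and still post wildly different proficiency rates, simply because their students start in different places.

```python
# Hypothetical illustration: identical growth, very different proficiency rates.
PROFICIENCY_CUTOFF = 300

school_a_entry = [280, 290, 295, 305, 310]  # students enter near the cutoff
school_b_entry = [230, 240, 245, 255, 260]  # students enter far below it
GROWTH = 20                                 # both schools add the same 20 points

def proficiency_rate(entry_scores, growth=GROWTH):
    """Share of students at or above the cutoff after one year of growth."""
    exit_scores = [score + growth for score in entry_scores]
    return sum(score >= PROFICIENCY_CUTOFF for score in exit_scores) / len(exit_scores)

print(proficiency_rate(school_a_entry))  # 1.0 -> "100 percent proficient"
print(proficiency_rate(school_b_entry))  # 0.0 -> "0 percent proficient"
```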

  • No Teacher Is An Island: The Role Of Social Relations In Teacher Evaluation

    Our guest authors today are Alan J. Daly, Professor and Chair of Education Studies at the University of California San Diego, and Kara S. Finnigan, Associate Professor at the Warner School of Education at the University of Rochester. Daly and Finnigan recently co-edited Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill (Springer, 2014).

    Teacher evaluation is a hotly contested topic, with vigorous debate around issues of testing, measurement, and what is considered ‘important’ in terms of student learning, not to mention the potentially high-stakes decisions that may be made as a result of these assessments. At its best, this discussion has reinvigorated a national dialogue around teaching practice and research; at its worst, it has polarized and entrenched stakeholder groups into rigid camps. How can we avoid the calcification of opinion and continue a constructive dialogue around this important and complex issue?

    One way, as we suggest here, is to continue to discuss alternatives around teacher evaluation, and to be thoughtful about the role of social interactions in student outcomes, particularly as they relate to the current conversation around value-added models. It is in this spirit that we ask: Is there a 'social side' to a teacher's ability to add value to their students' growth and, if so, what are the implications for current teacher evaluation models?

  • The Semantics of Test Scores

    Our guest author today is Jennifer Borgioli, a Senior Consultant with Learner-Centered Initiatives, Ltd., where she supports schools with designing performance-based assessments, data analysis, and curriculum design.

    The chart below was taken from the 2014 report on student performance on the Grades 3-8 tests administered by the New York State Education Department.

    Based on this chart, which of the following statements is the most accurate?

    A. “64 percent of 8th grade students failed the ELA test”

    B. “36 percent of 8th graders are at grade level in reading and writing”

    C. “36 percent of students meet or exceed the proficiency standard (Level 3 or 4) on the Grade 8 CCLS-aligned ELA test”

  • Differences In DC Teacher Evaluation Ratings By School Poverty

    In a previous post, I discussed simple data from the District of Columbia Public Schools (DCPS) on teacher turnover in high- versus lower-poverty schools. That same report, issued by the D.C. Auditor, included, among other things, descriptive analyses by the excellent researchers from Mathematica, as well as another very interesting table showing the evaluation ratings of DC teachers in 2010-11 by school poverty. (Indeed, DC officials deserve credit for making these kinds of data available to the public, as this is not the case in many other states.)

    DCPS’ well-known evaluation system (called IMPACT) differs for teachers in tested versus non-tested grades, but the final ratings are a weighted average of several components, including: the teaching and learning framework (classroom observations); commitment to the school community (attendance at meetings, mentoring, PD, etc.); schoolwide value-added; teacher-assessed student achievement data (local assessments); core professionalism (absences, etc.); and individual value-added (tested teachers only).
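    As a rough sketch of how such a composite works, the snippet below combines component scores into a single weighted average. The weights are hypothetical placeholders chosen for illustration, not DCPS’ actual IMPACT weights, which differ for tested and non-tested teachers.

```python
# A minimal sketch of a multi-component rating combined via weighted average.
# These weights are hypothetical, NOT DCPS' actual IMPACT weights.
HYPOTHETICAL_WEIGHTS = {
    "classroom_observations":  0.40,
    "individual_value_added":  0.35,
    "schoolwide_value_added":  0.05,
    "commitment_to_community": 0.10,
    "core_professionalism":    0.10,
}

def final_rating(scores: dict) -> float:
    """Weighted average of component scores (all on the same scale)."""
    assert abs(sum(HYPOTHETICAL_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(w * scores[k] for k, w in HYPOTHETICAL_WEIGHTS.items())

example = {
    "classroom_observations":  3.2,
    "individual_value_added":  2.8,
    "schoolwide_value_added":  3.0,
    "commitment_to_community": 3.5,
    "core_professionalism":    4.0,
}
print(round(final_rating(example), 2))  # 3.16
```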

    The table I want to discuss is on page 43 of the Auditor’s report; it shows average IMPACT scores, for each component and overall, for teachers in high-poverty schools (80-100 percent free/reduced-price lunch), medium-poverty schools (60-80 percent), and low-poverty schools (less than 60 percent). It is pasted below.

  • How Boston Public Schools Can Recruit and Retain Black Male Teachers

    Our guest author today is Travis J. Bristol, a former high school English teacher in New York City public schools and a teacher educator with the Boston Teacher Residency program, who is currently a research and policy fellow at the Stanford Center for Opportunity Policy in Education (SCOPE) at Stanford University.

    The challenges faced by Black male teachers in schools may serve as the canary in the coal mine that begins to explain the debilitating conditions faced by Black boys in schools. Black males represent 1.9 percent of all public school teachers, yet they have one of the highest rates of turnover. Attempts to increase the number of Black male teachers are based on research suggesting that these new recruits can improve Black students’ schooling outcomes.

    Below, I discuss my study of the school-based experiences of 27 Black male teachers in Boston Public Schools (BPS), who represent approximately 10 percent of all Black male teachers in the district. This study, which I recently discussed on Boston’s NPR news station, is one of the largest studies conducted exclusively on Black male teachers, and it has implications for policymakers as well as school administrators looking to recruit and retain Black male educators.

    Here is a summary of the key findings.

  • Social Capital Matters As Much As Human Capital – A Message To Skeptics

    In recent posts (here and here), we have been arguing that social capital -- social relations and the resources that can be accessed through them (e.g., support, knowledge) -- is an enormously important component of educational improvement. In fact, I have suggested that understanding and promoting social capital in schools may be as promising as focusing on personnel (or human capital) policies such as teacher evaluation, compensation and so on. 

    My sense is that many teachers and principals support this argument, but I am also very interested in making the case to those who may disagree. I doubt very many people would disagree with the idea that relationships matter, but perhaps there are more than a few skeptics when it comes to how much they matter, and especially to whether or not social capital can be as powerful and practical a policy lever as human capital.

    In other words, there are, most likely, those who view social capital as something that cannot really be leveraged cost-effectively by policy intervention to any significant effect, in no small part because doing so means promoting things that already happen and/or that cannot be mandated. For example, teachers already spend time together, and they cannot (and perhaps should not) be required to do so more often, at least not to an extent that would make a difference for student outcomes (although this could be said of almost any policy).

  • Lost In Citation

    The so-called Vergara trial in California, in which the state’s tenure and layoff statutes were deemed unconstitutional, already has its first “spin-off,” this time in New York, where a newly formed organization, the Partnership for Educational Justice (PEJ), is among the groups spearheading the effort.

    Upon first visiting PEJ’s new website, I was immediately (and predictably) drawn to the “Research” tab. It contains five statements (which, I guess, PEJ would characterize as “facts”). Each argument is presented in the most accessible form possible, typically accompanied by one citation (or two at most). I assume that the presentation of evidence in the actual trial will be far more thorough than what is offered on this webpage, which seems geared toward the public rather than toward the more extensive evidentiary requirements of the courtroom (also see Bruce Baker’s comments on many of these same issues surrounding the New York situation).

    That said, I thought it might be useful to review the basic arguments and evidence PEJ presents, not really in the context of whether they will “work” in the lawsuit (a judgment I am unqualified to make), but rather because they are very common. It has also been my observation that advocates on both “sides” of the education debate tend to be fairly good at using data and research to describe problems and/or situations, yet sometimes fall a bit short when it comes to evidence-based discussions of what to do about them (including the essential task of acknowledging when the evidence is still undeveloped). PEJ’s five bullet points, discussed below, are pretty good examples of what I mean.