Weaning Educational Research Off Of Steroids
Our guest authors today are Hunter Gehlbach and Carly D. Robinson. Gehlbach is an associate professor of education and associate dean at the University of California, Santa Barbara’s Gevirtz Graduate School of Education, as well as Director of Research at Panorama Education. Robinson is a doctoral candidate at Harvard’s Graduate School of Education.
Few people confuse academics with elite athletes. As a species, academics are rarely noted for their blinding speed, raw power, or outrageously low resting heart rates. Nobody wants to see a calendar of scantily clad professors. Unfortunately, recent years have revealed one commonality between these two groups, a commonality no academic will embrace, and one with huge implications for educational policymakers’ and practitioners’ professional lives.
In the same way that a 37-year-old Barry Bonds did not really break the single-season home run record (he relied on performance-enhancing drugs), a substantial amount of educational research has undergone similar “performance enhancements” that make the results too good to be true.
To understand the crux of the issue, we invite readers to wade into the weeds (only a little!) to see what research “on steroids” looks like and why it matters. By doing so, we hope to reveal possibilities for how educational practitioners and policymakers can collaborate with researchers to correct the problem and avoid making practice and policy decisions based on flawed research.
To understand the problem of research on steroids, let us review the “study” that caused a colossal earthquake in psychology and is beginning to cause tremors in education. The authors of this study conducted a real experiment with the goal of generating a clearly fake result. Specifically, they showed how listening to the Beatles’ When I’m Sixty-Four made study participants younger (note the causal language). The goal of the authors was not to convince anybody that John, Paul, George, and Ringo could lead them to the fountain of youth, but rather to illustrate how well-meaning researchers (and journal editors) can inadvertently produce (and publish) false findings.
How did they pull this off? In designing their study, they used techniques commonly employed by diligent educational researchers. For example, they tried out a couple of different treatment conditions (while some participants listened to the Beatles, others heard “Kalimba” or “Hot Potato”), collected several dependent variables (including how old participants felt and how much they would enjoy eating at a diner), and examined a host of different covariates (e.g., father’s age).
After completing their data collection, the study authors then thoroughly analyzed their data. In other words, they looked at their data in numerous ways in an effort to find any statistically significant results that may have emerged. As it turned out, one becomes significantly younger by listening to the Beatles only after controlling for paternal age—listening to other music, testing other dependent measures, and using other covariates presumably did not show effects. A major point to emphasize here is that these techniques—collecting a great deal of data and thoroughly analyzing one’s data—are exactly what good researchers are trained to do.
Two major problems result from these sensible practices. First, just by chance alone, researchers who run enough analyses are bound to find at least one statistically significant result. Second, because humans are incorrigible storytellers, researchers will likely find a way in which their results “make sense” or “align with prior theory.” Exacerbating the problem, many of these performance-enhanced findings trickle down to educational practitioners and policymakers, who then use the study results to inform their instructional and policy decisions.
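To see how quickly “by chance alone” adds up, consider a rough, hypothetical simulation (our own illustration, not drawn from the Beatles study): if a researcher examines twenty outcome-and-covariate combinations on data where the treatment truly does nothing, the odds of stumbling onto at least one statistically significant result are roughly 1 - 0.95^20, or about 64 percent. Here is a minimal Python sketch, assuming independent tests on made-up data:

```python
# Hypothetical illustration (not the original study's data or code): how often does a
# "study" with zero true effect turn up at least one significant result if the
# researcher looks at the data 20 different ways?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_participants = 40          # small sample per group, as in many lab studies
n_analyses = 20              # e.g., several outcomes crossed with several covariate choices
n_simulated_studies = 2000

false_positive_studies = 0
for _ in range(n_simulated_studies):
    found_significant = False
    for _ in range(n_analyses):
        # The treatment has zero true effect: both groups come from the same distribution.
        treatment = rng.normal(size=n_participants)
        control = rng.normal(size=n_participants)
        _, p_value = stats.ttest_ind(treatment, control)
        if p_value < 0.05:
            found_significant = True
            break
    false_positive_studies += found_significant

print(f"Studies finding at least one 'significant' effect: "
      f"{false_positive_studies / n_simulated_studies:.0%}")
# With 20 independent looks, roughly 1 - 0.95**20, or about 64%, of these null
# studies will produce something that looks publishable.
```

The exact numbers are not the point; the point is that diligent, exhaustive analysis of data with no real effect in it will reliably produce something that looks publishable.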
First scholars in medicine, then psychologists, and now even journalists writing for mainstream news outlets have realized this problem and called attention to the fact that many research findings are on steroids. In doing so, they have pressured these disciplines to change the norms around conducting research. Yet the field of education has barely started discussing the problem (and not because research on steroids doesn’t exist!). So what should be done to address this challenging situation?
Giving up on educational research cannot be an option. If we throw in the towel on data, we are left with gut intuitions to guide decision-making. Historically, people’s gut intuitions have done little to close achievement gaps, boost teacher retention, stem violence in our schools, or otherwise address education’s challenging issues. Research can provide answers if it is done right.
Fortunately, some important progress is being made. A small but growing group of researchers has begun to pre-register their studies. In this approach, researchers publicly post exactly what their hypotheses are and exactly how they will evaluate each one (e.g., they specify the equation they will use to test each hypothesis). Most important, they post this information before they have examined their data for the first time. By doing so, researchers help their audience distinguish the analyses that are confirmatory, and in which we can really have faith, from those that are exploratory, which we should view as interesting hypotheses to test in a future study.
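To make the idea concrete, here is a hypothetical sketch (our own illustration; the hypothesis, variable names, and decision rules are invented, not taken from any actual pre-registration) of the kind of commitments a pre-registration pins down before anyone peeks at the data, written as a small Python structure:

```python
# Hypothetical sketch of a pre-registered analysis plan: the hypothesis, the exact
# model, and the decision rules are fixed (and posted publicly) before the data
# are ever examined.
from dataclasses import dataclass, field

@dataclass
class PreRegisteredPlan:
    hypothesis: str
    model_formula: str                 # the exact equation to be estimated
    primary_outcome: str
    covariates: list = field(default_factory=list)
    alpha: float = 0.05
    outlier_rule: str = "none"         # decided up front, not after peeking at results

plan = PreRegisteredPlan(
    hypothesis="Students who receive weekly tutoring score higher on the spring math assessment.",
    model_formula="math_score ~ tutoring + prior_score + grade_level",
    primary_outcome="math_score",
    covariates=["prior_score", "grade_level"],
    outlier_rule="exclude scores more than 3 SDs from the sample mean, specified in advance",
)

print(plan)
```

The code itself is incidental; what matters is the commitment. Any analysis not written into the plan before the data are examined is, by definition, exploratory.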
In an effort to promote this particular practice within the educational research community, we recently wrote an article discussing the promise of pre-registration in education. Through this piece, we tried to identify some of the ways in which very reasonable characteristics of studies (such as small sample sizes, outliers eliminated in idiosyncratic ways, or interventions that appear to work only for particular subgroups of students), in the aggregate, might signal research on steroids. Academics can sometimes sniff out questionable studies. Our hope was that identifying these characteristics would help everyone else more readily identify problematic research.
Next, we offered some specific suggestions for how educational researchers might pre-register their studies, including how they might address some of the problems that consistently arise when conducting field research in schools.
However, one can imagine how effective it is for a small group of academics to gently suggest to their colleagues that their tried-and-true approach to research, which has served them well for so many years, should be replaced by a new method. This message is typically about as well-received as news of an extra round of budget cuts is among school administrators. Given that most practicing academics work very hard on their comprehensive data collections and thorough analyses, the news that research norms need to change to include an additional step is especially hard to hear.
Changing this deeply entrenched habit of how research is conducted and introducing pre-registration as a norm will require help from practitioners and policymakers. Specifically, three forms of tough love could put critical pressure on educational researchers. First, district administrators can embrace their roles as gatekeepers. As they read research, they can flag studies that appear dubious and tell their colleagues why they may want to be skeptical of the findings. In doing so, administrators will help shrink the audience for research on steroids.
Second, when interacting with researchers who are promoting their latest findings, ask them whether their study was pre-registered. If it was not, find out why. There may be a good reason. But then there is probably a corresponding good reason to view the findings as exploratory and in need of future testing. Exploratory studies introduce important new ideas, but they probably are not a foundation for sound educational policy.
Finally, when setting up new research partnerships, ask for pre-registration as a requirement of the partnership. This is the moment when practitioners and policymakers typically have the most leverage in the relationship. So it makes sense to take advantage of that moment to make the research as useful to all parties involved as possible.
With enough upward pressure on educational researchers from practitioners and policymakers, we can move away from educational research that is on performance-enhancing drugs. Then, hopefully, we can get back to the work of enhancing the performance of students and teachers.