Ted Miguel in conversation on "Transparent and Reproducible Social Science Research: How to Do Open Science"

Ph.D. student Isabelle Cohen in conversation with Oxfam Professor in Environmental and Resource Economics Ted Miguel.

Ted Miguel and Isabelle Cohen

Edward Miguel wants researchers in the social sciences to join the Open Science Movement. 

In his new book, Transparent and Reproducible Social Science Research: How to Do Open Science, Miguel and co-authors Jeremy Freese (Stanford University) and Garret Christensen (U.S. Census Bureau) give readers the tools to do so. “The research so far has been more focused on documenting the problems,” says Professor Miguel. “The Open Science movement imagines the major changes in policy and practice that could become solutions.” 

The book chronicles some of the most common problems in experimental design and suggests preemptive solutions. Research, says Miguel, should be judged on the importance of the question, the quality of the data, and the strength of the research design, rather than on whether or not the desired conclusion was reached. “I think that's the way forward.” 

The push for transparent, high-quality research has become more prominent in the past decade, spearheaded by the Berkeley Initiative for Transparency in the Social Sciences (BITSS), where Miguel is faculty director. 

Isabelle Cohen, a PhD candidate in economics at UC Berkeley, interviewed Professor Miguel to discuss his new book.

Isabelle Cohen: How did this book come about?

Edward Miguel: In 2012, I became involved with colleagues in psychology and political science in creating the first initiative on transparency, which later became the Berkeley Initiative for Transparency in the Social Sciences (BITSS). We shared a common goal of advancing transparency and reproducibility in the social sciences, and we wanted to come together to discuss how to achieve it. 

At BITSS, we asked ourselves: How do we train the next generation? We created an online course that was followed by a graduate course, with Garret [Christensen], who later co-authored this book, as the GSR. In that class, we had PhD students from different fields, including economics, psychology, and biostatistics. When we were ready to turn this into a book, we were especially excited to collaborate with Jeremy Freese, who is a leading scholar in his field. His training as a sociologist gave us an additional perspective that helped us write this book for a wider audience than economics researchers. 

I.C: What do you hope to accomplish with this book?

E.M: There are two goals here. The more obvious one is giving researchers (whether they’re students or practitioners) tools and practices that lead to better, more transparent research, such as pre-analysis plans and reproducible coding, so that they can improve their research from a technical standpoint. 

On a more fundamental level, we wanted to give readers a new mindset about how to do research. I want them to understand that being a researcher isn't just about learning statistical techniques; it's about a set of values and principles and a way of looking at the world in a very even-handed way. When I look back at my own training, I remember research felt like a competition you wanted to win, and not having statistically significant results meant you had failed in that competition. A faculty member told me once in grad school, "Before I do any data analysis, I know exactly what the results are going to be." That really stuck in my mind as the worst of current empirical research practice. And I don't think that faculty member was alone in thinking along those lines. That mentality is the antithesis of this book and the social norm we're hoping to change. 

I.C: Your target audience is students and researchers who will use these tools for their research. Why should the average person be concerned about the social sciences being transparent and reproducible?

E.M: The average person should care because economic and health policy is shaped by research. If that research isn't reliable, then we're going to reach the wrong conclusions about whether a certain tax program or public health intervention, for example, is effective. We know some groups seem bent on discrediting research simply because they think there's an agenda behind it, and some are blatantly anti-science. If those groups believed this research was transparent and credible, then maybe they would trust the results. 

I.C: Clearly, this change in research methodology has been a work in progress for many years. Who were some pioneers in the field that helped improve transparency and credibility in research?

E.M: This book would not even be in the realm of possibility if it hadn't been for a couple of decades of progress in econometrics and in applied fields. You can see a trend starting with our colleague David Card in labor economics, who, with Orley Ashenfelter, Alan Krueger, and others in the 1980s, brought in approaches that stem from medical research. Development economists and labor economists started a credibility revolution that called for more explicit experimental and research designs. All of this pushed us towards more scientific rigor, which set the stage for this book.

I.C: Was there pushback to this shift?

E.M: There was intense resistance, particularly to experiments, but also to sharing research with null results. Again, David Card and Alan Krueger are a great example of that. They broke the mold when they showed in the early 1990s that increasing hourly minimum wages won't necessarily reduce employment. This was backed by many research findings that followed; in fact, some of the more recent papers show that when you raise the minimum wage, sometimes employment goes up. But their discovery went against neoclassical economic theory. It exploded the brains of theoretically minded economists, who said, "No, this can't be right. There has to be a problem." But those who were more scientific said, "Well, maybe we have to modify our theory to reflect the world, rather than rejecting empirical results because we don't like them."

I.C: There’s a quantitative push in more and more areas. Do you think the methods you discuss in the book can translate into more areas that haven’t been as quantitative in the past? 

E.M: That's such a great question. I think certain ideas translate very well, such as open sharing of data, materials, and research processes, and disclosure of financial agreements and personal relationships. Last year, we ran a large-scale survey of scholars in economics, political science, psychology, and sociology to measure adoption of Open Science practices. We were glad to observe that across disciplines, experimentalists have massively adopted practices such as pre-registration and pre-analysis plans, although there were variations in which tools each field opted to use more frequently.  

I.C: In the book, you argue that research should be purely motivated by the desire for knowledge and discovery. However, do you think researchers have an ethical responsibility to share their results, even if it might be used to promote policies that may cause harm? How can researchers balance these two forces?

E.M: I think that the scientific ethos demands that once you've asked a question, you have a responsibility to the community to share your results and data in an even-handed way, even if you fear the consequences of sharing them. We don't want to go back to a world where only the research that conforms with your views gets published.

But that doesn't mean we have to lack values or principles; rather, those can shape the sort of questions we gravitate towards. Your point is very well taken, though. Over the years, I've heard from colleagues and scholars who hesitated to publish findings whose conclusions they didn't like, and that's a problem, because it leads to selective publication.

I.C: You mention the File Drawer Problem in the book. Tell me more about that and how it translates to everyday research.

E.M: This is a classic phenomenon and a known obstacle to research transparency. Research results can affect whether the research gets published or instead, figuratively or literally, ends up in a drawer. A famous paper by Sterling in 1959 showed that over 97% of papers published in leading psychology journals in the 1950s had statistically significant results, which meant that a lot of null results were just disappearing. The issue is that if we're not seeing a large body of evidence, then our understanding of the world is really incomplete, and in fact, we might be mainly publishing false-positive results. If we're only seeing a slice of the evidence, then it's not clear how credible our bodies of research are. 
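To see why selective publication can leave the literature dominated by false positives, here is a minimal simulation sketch (my illustration, not from the book): assume only 10% of tested hypotheses are true, give each study modest statistical power, and "publish" only the statistically significant results. All parameters are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical literature: only 10% of tested hypotheses are true,
# each study has modest power, and only p < 0.05 results get "published".
n_studies, n_per_arm = 10_000, 50
true_share, effect_size = 0.10, 0.3

published = published_true = 0
for is_true in rng.random(n_studies) < true_share:
    mu = effect_size if is_true else 0.0
    treated = rng.normal(mu, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:  # the file drawer: null results never see print
        published += 1
        published_true += is_true

# With these parameters, well over half of "published" findings are false.
print(f"False-positive share among published: {1 - published_true / published:.2f}")
```

The point of the exercise is that the 5% false-positive rate applies to the many false hypotheses tested, so once null results vanish into the drawer, false positives can easily outnumber true findings in print.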

I.C: There’s even a prior step in which researchers might not even try to publish results that they think aren't likely to get published.

E.M: That’s right. It's really insidious. The file drawer problem logically precedes journal and referee biases. The goal of the Open Science movement is to change the incentives and practices to move us to a better equilibrium. 

I.C: That brings us to registries, a development of roughly the past seven years and an interesting solution. What is a registry and why should researchers register their projects?

E.M: A study registry is an attempt to make the universe of studies that have been conducted more visible to researchers, so that fewer of them disappear. This is the product of deliberate action by the US government and the National Institutes of Health (NIH), who, 20 years ago, established a registry and encouraged, and eventually required, grantees to register their trials. That was motivated by various scandals in the eighties and nineties, in which pharmaceutical companies funded trials for drugs and then only selectively published the results they liked. In the book, we talk about how applying this practice in the social sciences can help make null results more prevalent. The norm is clearly shifting.
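To make the idea concrete, a registry entry essentially records, in public and in advance, who is studying what and how success will be measured. A minimal sketch of what such an entry might contain (the field names and values are hypothetical, not drawn from any particular registry):

```python
# A minimal, hypothetical sketch of what a study registry entry records.
# Field names and values are illustrative, not from any specific registry.
registry_entry = {
    "title": "Cash transfers and household consumption in District X",
    "investigators": ["A. Researcher", "B. Collaborator"],
    "registered_on": "2020-01-15",            # posted before any results exist
    "status": "in_progress",
    "intervention": "unconditional cash transfer",
    "primary_outcomes": ["log household consumption"],
    "sample": {"unit": "household", "planned_n": 2000},
    "pre_analysis_plan_url": "https://example.org/pap.pdf",  # placeholder
}

# Because the entry exists whether or not results are ever published,
# anyone can later ask: what happened to this study?
print(registry_entry["title"], "-", registry_entry["status"])
```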

I.C: The idea of pre-analysis plans seems to complement this effort to avoid "cherry-picking" which studies are published.

E.M: Yeah, that's right. So before we've analyzed the data, we post what our main hypotheses are, what our statistical models will be, and what data we want to use. That has this great benefit of increasing accountability for scholars, who are then more likely to report the results of those pre-specified analyses. It constrains researchers in some way, to make sure they are showing the analyses they said they would show.
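One way to make a pre-analysis plan concrete is to commit the primary analysis to code before the outcome data arrive. Here is a minimal sketch of that practice (my illustration, not from the book); the study, variable names, and file path are all hypothetical:

```python
# pre_analysis_plan.py -- written and posted publicly BEFORE outcome data arrive.
# Hypothetical study: effect of a cash transfer on household consumption.
import pandas as pd
import statsmodels.formula.api as smf

# Pre-registered primary hypothesis:
#   H1: treatment raises log household consumption (two-sided, alpha = 0.05).
PRIMARY_MODEL = "log_consumption ~ treated + baseline_log_consumption + C(stratum)"

def run_primary_analysis(path: str) -> None:
    df = pd.read_csv(path)
    # Standard errors clustered at the village level, as pre-specified.
    result = smf.ols(PRIMARY_MODEL, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["village_id"]}
    )
    print(result.summary().tables[1])

if __name__ == "__main__":
    run_primary_analysis("endline_survey.csv")  # hypothetical data file
```

Posting a script like this alongside the registered hypotheses leaves no ambiguity later about which specification was the pre-specified one.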

I.C: Isn't there a risk that this type of planning might stifle creativity and innovation?

E.M: Yes, this has been a topic of debate. Our answer is that pre-analysis plans don't prevent researchers from performing additional analyses; sometimes you include one after the fact. A good example, which we discuss in the book, is a paper by Katherine Casey, Rachel Glennerster, and me on Community-Driven Development in Sierra Leone. There were some basic hypotheses that we neglected to include in the plan. In the paper, we acknowledged this oversight and included those findings, because we thought they were integral to the rest of the paper, even though they weren't in the plan. I think humility is important, and as long as the pre-analysis plan lays out your key hypotheses clearly and is posted on a public registry, that's a huge win relative to the world we were in before.

I.C: One of my favorite quotes from the book is that "science prizes originality." And yet, you call for replication. Tell me why we should be replicating.

E.M: Scientific results that don't replicate aren't really scientific results. Replication means that a result would be the same if the experiment were to be conducted again. It increases our certainty and the validity of the result. 

The replication crisis has become really pronounced in lab experiments in psychology and social psychology. A recent study showed that two-thirds of those published high-profile studies did not replicate. There's been a similar effort in experimental economics, led by Colin Camerer, that found that two-thirds of the experimental economics papers do replicate, which was a little more encouraging. It's really a glass half full, half empty: a third don't replicate. Is that good or bad?

I.C: If there was one tool you would want researchers who read this book to take from it, what would it be?

E.M: It's hard to choose, but I'd say that maintaining reproducible coding practices is a valuable tool. It pays huge dividends, whether it's for reconstructing your own data and analysis later or for sharing them with others. 
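What "reproducible coding" means in practice varies, but a minimal sketch might look like the following (assuming a Python workflow; the file names are hypothetical): a single scripted entry point that rebuilds results from raw data, with fixed random seeds and recorded software versions.

```python
# run_all.py -- a single entry point that rebuilds results from raw data.
# Practices illustrated: fixed random seed, recorded software versions,
# and every cleaning step scripted rather than done by hand.
import random
import sys

import numpy as np
import pandas as pd

SEED = 20210101  # fix all randomness so reruns give identical output
random.seed(SEED)
np.random.seed(SEED)

def log_environment() -> None:
    """Record the software versions the results depend on."""
    print(f"python {sys.version.split()[0]}, numpy {np.__version__}, "
          f"pandas {pd.__version__}")

def main() -> None:
    log_environment()
    raw = pd.read_csv("data/raw_survey.csv")   # hypothetical raw input
    clean = raw.dropna(subset=["outcome"])     # cleaning is scripted, not manual
    clean.to_csv("data/analysis_sample.csv", index=False)
    print(f"Kept {len(clean)} of {len(raw)} observations.")

if __name__ == "__main__":
    main()
```

The design choice that matters most is that anyone with the raw data can rerun one script and get the same analysis sample, rather than retracing undocumented manual steps.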

But there is a larger point here, which is that we hope to change attitudes towards what constitutes research quality. When people ask whether someone is a good student, or look at the quality of their work, the criterion shouldn't be, "Oh, they've got two stars." It should be, "They tackled this really important question around state capacity for tax collection in Uganda, which is a central issue in development. Without tax revenue, you can't invest in social programs. So they tackled this really important question, with a great research design and a large sample and a clever experiment." This puts the emphasis on the method, which in itself is reproducible, rather than on the specific result.

It’s a cultural shift, and our hope is that teaching people these tools will also change their orientation towards research. In the book we say that "A good musician doesn't just play the notes." A good jazz musician has a certain style, an openness, and we want to bring that mindset into our research culture.


Recently, social science has had numerous episodes of influential research that was found invalid when placed under rigorous scrutiny. The growing sense that many published results are potentially erroneous has made those conducting social science research more determined to ensure the underlying research is sound. Transparent and Reproducible Social Science Research is the first book to summarize and synthesize new approaches to combat false positives and non-reproducible findings in social science research, document the underlying problems in research practices, and teach a new generation of students and scholars how to overcome them. Understanding that social science research has real consequences for individuals when used by professionals in public policy, health, law enforcement, and other fields, the book crystallizes new insights, practices, and methods that help ensure greater research transparency, openness, and reproducibility. Readers are guided through well-known problems and are encouraged to work through new solutions and practices to improve the openness of their research. Created with both experienced and novice researchers in mind, Transparent and Reproducible Social Science Research serves as an indispensable resource for the production of high-quality social science research.