By Barbara A. Spellman, Professor of Law and Professor of Psychology, University of Virginia School of Law
Journals and scientists should be BFFs. But currently they are frenemies. Or, in adult-speak:
Journals play an important role in ensuring that the scientific enterprise is sound. Their most obvious function is to publish science—good science, science that has been peer-reviewed by experts and is of interest to a journal’s readership. But in fulfilling that mission, journals may provide incentives to scientists that undermine the quality of published science and distort the scientific record.
Journal policies certainly contributed to the replication crisis. As businesses, publishers (appropriately) want to make money; to do so they need people to buy, read, and cite their journals. To make that happen, editors seek articles that are novel, that confirm some new hypothesis, and that have clear results. Scientists know that editors want articles with these qualities. Accordingly, scientists may (knowingly or not) bias the scientific process to produce that type of result.
Slinking Around the Scientific Method
Consider the steps in the abstract version of the scientific method that we all learned back in high school. Of course, what we learned is the blue circle with the approved steps in black. What we didn’t learn back then is how, in an attempt to create publishable results, scientists can go wrong. (See Figure 1.)
First, researchers generate and specify hypotheses. What we didn’t learn in high school is that researchers will try to generate novel, surprising hypotheses because otherwise their articles won’t get published (at least not in “good” journals).
It is difficult to get the beautiful results needed to support novel and surprising hypotheses, so researchers might find ways to improve their own luck. For example, in their experimental designs, they might choose to include many conditions or measures and be willing to ditch those that don’t “work”. During data collection, they might choose to take risks with small sample sizes, thus increasing the chances of false positives. They might also continually “peek” at the data and choose to stop data collection when the analyses reach statistical significance. During data analysis, they might choose to eliminate outliers for reasons they hadn’t specified in advance. And if the data turn out not to support the initial hypothesis, researchers might choose to change the hypothesis to fit the data. This procedure, of presenting post hoc hypotheses as if they were a priori hypotheses, is called “HARKing”—hypothesizing after results are known, a term coined by Norbert L. Kerr in 1998. Note the many places above where researchers can “choose” what to do (i.e., “researcher degrees of freedom”).
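The damage done by “peeking” is easy to demonstrate with a small simulation (a sketch of the general statistical point, not anything from the article itself; the function names and parameters here are illustrative). Both groups are drawn from the same distribution, so every “significant” result is a false positive. Testing once at the planned sample size keeps false positives near the nominal 5%; testing after every batch and stopping at the first p < .05 inflates that rate several-fold.

```python
# Sketch: how optional stopping ("peeking") inflates false positives.
# Both groups come from the SAME distribution, so any "significant"
# result is a false positive. Nominal alpha is .05.
import math
import random
import statistics

def two_sample_p(a, b):
    """Approximate two-sided two-sample test p-value (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    if se == 0:
        return 1.0
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def run_study(peek, n_max=100, step=10, alpha=0.05):
    """Simulate one null study; return True if it 'finds' an effect."""
    a, b = [], []
    while len(a) < n_max:
        a += [random.gauss(0, 1) for _ in range(step)]
        b += [random.gauss(0, 1) for _ in range(step)]
        if peek and two_sample_p(a, b) < alpha:
            return True                      # stop early and "publish"
    return two_sample_p(a, b) < alpha        # single test at the planned n

random.seed(1)
sims = 2000
fixed = sum(run_study(peek=False) for _ in range(sims)) / sims
peeked = sum(run_study(peek=True) for _ in range(sims)) / sims
print(f"False-positive rate, one test at n=100:   {fixed:.3f}")
print(f"False-positive rate, peeking every 10 obs: {peeked:.3f}")
```

The fixed-n design stays close to the advertised 5% error rate, while the peeking design typically lands in the 15–20% range in runs like this one. Pre-registered analysis plans, discussed below, exist precisely to rule out this kind of undisclosed flexibility.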
Those choices enable scientists to take a messy project and create a beautiful article that meets the publishers’ (and peer reviewers’) desiderata. And none of that is, strictly speaking, fraud. Of course, results created in such a way are not likely to be replicable.
Figure 1: Threats to reproducible science 1
The Transparency and Openness Promotion (TOP) Guidelines
In November 2014, academics from various social sciences, journal editors, publishers, leaders of professional societies, and funders met at the Center for Open Science. They wanted to figure out what journals could do to promote better science. What emerged were the Transparency and Openness Promotion (TOP) Guidelines.
These guidelines describe eight kinds of publication policies that journals might consider adopting. The policies are designed to address different factors that affect the quality of published science and the completeness of the scientific record. Among the most important features of the guidelines are:
- Each policy can be adopted, or not, independent of the others.
- The policies can be applied in their own way to different disciplines.
- The policies can be adopted in small steps (as researchers become accepting of new norms).
Figure 2 (Summary Table from the TOP Guidelines) 2
Data, code, and materials transparency
Consider the policies of data, code, and materials transparency. A journal might choose not to adopt any policy regarding those research components. If a journal adopts Level 1 policies (see Figure 2, above), articles must state whether the components are available and, if so, how someone could access them. At Level 2, the components must be posted to a trusted repository (unless an exception is granted). At Level 3, the reported analyses would also be reproduced independently before publication.
The information would thus be verified and open to others to use. Mistakes might be found, direct replications could be made more exact, and researchers could more easily build on what others have done, creating a cumulative record. Why might scientists use these procedures even if journals have not taken steps to implement them? Some journals give “badges” to articles that meet requirements for Open Data and Open Materials. Such badges not only signal good science, but also the authors’ leadership in reform. Why might journals agree to implement these procedures? In addition to demonstrating the journals’ values, they might want to show that they are keeping up with high-prestige journals (like Science and Nature) that have begun implementing the guidelines. (About 5,000 journals are currently TOP signatories; however, not all have begun implementing them.)
Replications and registered reports
The TOP guidelines are also concerned with two not-yet-standard types of articles: Replications and Registered Reports. Regarding replications: journals may choose not to publish them (Level 0), may encourage them, or may use a process that pairs replications with pre-registration (Level 3). Pre-registration means creating a time-stamped version of a research plan before the study is run. Researchers can choose to pre-register hypotheses, thus eliminating HARKing. They can choose to pre-register analysis plans, thus avoiding conflating confirmatory and exploratory analyses. And journals might choose to implement registered reports for replications, new studies, or both.
In a registered report, researchers submit a description of the study (design, hypotheses, methods, and analysis plan) to a journal before beginning data collection. The proposal then undergoes peer review. Reviewers judge whether the study asks an important question and provides suitable techniques to address it. If so, the study is granted “in-principle acceptance”: a guaranteed publication if the authors follow the approved protocol, regardless of how the data turn out. This process should seem similar to that of clinicaltrials.gov, or to how one goes about proposing and defending a dissertation. Over 100 journals now use registered reports. Registered reports address multiple publication problems, from HARKing to the lack of publication of studies that “didn’t work”.
The comments above are most relevant to traditional print journals. Previously, such journals didn’t have the “space” to print full methods, data, or code. But technology has solved those problems. Newer online-only journals are springing up, many of which have adopted some or all of TOP Level 3 policies. Of course, there is now a plethora of “predatory journals,” which seem to have invented Level negative 3.
Adopting the guidelines above may create some initial difficulties for journals and authors. Ultimately, though, the goal is to make what is good for individual scientists’ careers also be what is good for science itself. Journals and scientists with their incentives aligned. Thus, BFFs.
1 Figure 1
From Munafò, M.R., et al. A Manifesto for Reproducible Science. Nat. Hum. Behav. 1, 0021 (2017).
Figure 1 is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0.
2 Figure 2
From the Center for Open Science.
Figure 2 is licensed under a Creative Commons Attribution-NoDerivatives License, which permits use and sharing. However, if you remix, transform or build upon the material, you may not distribute the modified material. To view a copy of this license, visit https://creativecommons.org/licenses/by-nd/4.0.