Journal of Memory and Language | 2019

New initiatives to promote open science at the Journal of Memory and Language


Abstract


Over the last several years, psychologists have engaged in rigorous discussions and debates about ways to enhance the robustness of research findings and their replicability. These discussions have led to growing enthusiasm for a range of initiatives designed to make our research more open and transparent. These initiatives apply across the research process, from deeper consideration and specification of study design and analysis through preregistration, to adopting more stringent data analysis protocols, to making the outcomes of research openly accessible. Such practices improve the quality of our work and, thus, its long-term impact. We are proud of the reputation that the Journal of Memory and Language has built for publishing robust findings that can be replicated, and for the Journal’s leadership in establishing sound practices in the statistical analysis of data (e.g., Baayen, Davidson, & Bates, 2008; Barr, Levy, Scheepers, & Tily, 2013).

It is in this spirit that we are announcing new policies and processes to promote open science in the Journal. Our new initiatives are focused on making research materials and data openly accessible. Several recent articles have outlined the benefits that accrue when researchers make their data and materials available to the scientific community (e.g., Crocker & Light, 2018; Houtkoop et al., 2018; LeBel, Campbell, & Loving, 2017; Lindsay, 2017; Mellor, Vazire, & Lindsay, 2018). These and other articles suggest that making data and materials openly accessible permits more rigorous peer review, enhances transparency and reproducibility, and increases the potential impact of the research by permitting data synthesis (e.g., meta-analyses) or the development of novel hypotheses.

Our experience as editors at the Journal of Memory and Language has also prompted us to appreciate the potential value of open practices. Quite often, reviewers’ evaluations of papers include speculations about what authors may or may not have done. For example, reviewers often wonder to what extent particular example stimuli are representative of the full set of stimuli. Similarly, reviewers frequently voice concerns about submissions’ lack of clarity with respect to data analysis. These concerns have become increasingly common in the era of linear mixed-effects modeling. Finally, we have noticed that reviewers are increasingly requesting the underlying data as a condition of peer review, leading to different requirements across our authors.

Beginning on January 1, 2019, we will be asking authors to make various aspects of the research process available to the scientific community. Specifically, at the point of submission, we will ask authors to certify that they will make their de-identified data available to the research community should their manuscript be accepted for publication. We also expect authors to provide materials, analysis scripts (where there is relevant statistical code), and computational models, so that analyses and models can be rerun or reproduced. Editors at other journals have indicated that there may be compelling reasons why authors are unable to accede to such requests: for example, “I encourage authors to make the de-identified data underlying their reported analyses available to reviewers whenever doing so is ethically permitted and practically feasible” (Lindsay, 2017, p. 699). Thus, if there are compelling reasons why it would be unethical or impractical to share research data, the Editor may waive this requirement.
Although we will require data sharing at the point of acceptance, we encourage authors to engage in transparent practices at the point of submission, and we anticipate that this will become an expectation in the research community over the next few years.

Several options are available to authors for making their data accessible. Elsevier (JML’s publisher) owns Mendeley Data, and authors can use this resource to deposit the relevant information while retaining copyright (see https://data.mendeley.com/faq). Authors are also welcome to use other public archives such as the Open Science Framework (https://osf.io/), or to provide pointers in their submissions to their university data repositories; however, at the point of acceptance, they must offer assurance that these sites will remain active and accessible. Finally, authors may submit the information as supplementary material, to be hosted on the Journal website.

We suggest that authors will most often provide trial-level data. Such data are likely to be most useful in allowing readers to reproduce the analyses, and may also promote secondary analyses and the creation of larger-scale data resources (e.g., meta-analyses). However, we also acknowledge that standards for data reporting are likely to evolve considerably as the research community gains experience with transparent practices.

Finally, while we have described some of the benefits of sharing research materials and data openly, we also recognize that this constitutes a substantial change in our scientific culture and practice. Making the materials and data from JML papers openly available is an indication of the confidence that we have in the quality of the work that we are publishing, but it also exposes authors to a level of scrutiny that has not characterized our discipline in the past. We hope that all colleagues in the JML community will join us on this journey, as the Journal continues its leadership in advancing the highest-quality research on the mechanisms underpinning memory and language.
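To make the notion of a shareable, trial-level dataset and accompanying analysis script concrete, the following is a minimal, purely illustrative sketch; it is not prescribed by the Journal, and all file, column, and variable names are invented for illustration. It assumes Python with pandas and statsmodels, and shows a trial-level table (one row per subject, item, and trial) analyzed with a linear mixed-effects model of the kind discussed in the statistical-practice references above.

# Hypothetical sketch of a depositable analysis script (names are illustrative only).
import pandas as pd
import statsmodels.formula.api as smf

# Trial-level data: one row per subject x item x trial, with the experimental
# condition and the measured response (here, a reaction time in milliseconds).
trials = pd.DataFrame({
    "subject":   ["s01"] * 4 + ["s02"] * 4 + ["s03"] * 4 + ["s04"] * 4,
    "item":      ["i1", "i2", "i3", "i4"] * 4,
    "condition": ["related", "unrelated", "related", "unrelated"] * 4,
    "rt_ms":     [512, 540, 498, 533, 520, 561, 505, 547,
                  530, 572, 515, 549, 508, 544, 522, 566],
})

# In an actual deposit, the table would instead be read from the shared file, e.g.:
# trials = pd.read_csv("experiment1_trial_level.csv")

# Linear mixed-effects model with by-subject random intercepts. (statsmodels
# supports a single grouping factor; crossed subject and item random effects
# would typically be fit with other tools.)
model = smf.mixedlm("rt_ms ~ condition", data=trials, groups="subject")
result = model.fit()
print(result.summary())

Depositing a script of this kind alongside the raw trial-level file lets readers rerun the reported analysis exactly and reuse the data for secondary analyses such as meta-analyses.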

Volume 104
Pages 126-127
DOI 10.1016/J.JML.2018.10.004
Language English
Journal Journal of Memory and Language
