Improving pedagogy through Registered Reports

Phil McAleer1 and Helena M. Paterson1

1 School of Psychology and Neuroscience, University of Glasgow, Scotland

Corresponding Authors:
Dr Phil McAleer, 62 Hillhead Street, Glasgow G12 8QB
Email: Philip.McAleer@glasgow.ac.uk
Dr Helena Paterson, 62 Hillhead Street, Glasgow G12 8QB
Email: Helena.Paterson@glasgow.ac.uk

 

Abstract

An emphasis on research-led teaching across educational settings has meant that more and more implementations for improving student learning are based on published research. However, the utility of those implementations is only as strong as the research they are based on. The recent replication crisis, witnessed across various fields of science, including pedagogical research, has called the published research record into question. Along with addressing issues in research practices, changes to publication practices are seen as an important step in ensuring that the evidence-based teaching practices we adopt in our classrooms are fit for purpose. Here we highlight a number of issues within the pedagogical literature, including a lack of replication studies, a positive publication bias, and common questionable research practices, that need to be addressed to ensure credible science as a basis for educational interventions. We propose the adoption of Registered Reports as a means of counteracting these issues. By ensuring that the literature we base policy and practice upon is published on the basis of its scientific rigour, and not merely its outcome, we believe the field will have a stronger basis for deciding which implementations and approaches to adopt within our classrooms.

Keywords

pedagogy, scholarship, publication, registered reports, replication

In light of recent developments in the ethos and practice of open scholarship, we propose that the standard route to publishing research, and bringing knowledge into the public record, is flawed and needs updating. For many years the typical process has been to run a study, write up the outcome, and hope to find a home for the work in one of the myriad publishing outlets. However, taking our own field of psychology as an example, and as highlighted in the fallout of the reproducibility crisis (Soderberg et al., 2021), this standard route can lead to an increase in the quantity of publications but not necessarily in their quality, with approximately two-thirds of replicated studies failing to show the same effect as the original study (Open Science Collaboration, 2015). This would not be such an important issue were it not for the fact that approaches to teaching are built on these findings; Hansford (2021), for example, highlights a variety of measures implemented in pedagogical settings. Both schools and universities make use of research, from fields such as psychology and education, to help improve the learning of their students. However, the implementations that educational bodies adopt can only ever be as good as the research they are based on. As such, it is incumbent on us, as producers, disseminators, and users of knowledge, to continue to improve and maintain the standard of the underlying research evidence and, in turn, of the interventions and implementations we adopt in our teaching. One step towards this is to reflect on how our publishing and editorial processes work, and to implement policies and opportunities that allow more reliable research to be published.

As an illustrative example of the problem, an established way to improve evidence is to normalise replications of studies, collate their findings, and base new pedagogical approaches upon the outcomes. Alister, Vickers-Jones, Sewell, and Ballard (2021), in a survey of practices that give researchers more confidence in published research, found that a direct replication of a study, using the same methods as the original, resulted in the greatest increase in confidence in a finding. However, somewhat counterintuitively, published replications are the exception and not the norm. For instance, Makel, Plucker, and Hegarty (2012), based on a review of the top one hundred psychology journals, estimated that replications make up only approximately 1.6% of all published articles. Educational research has an even more limited publication record of replications, with Makel and Plucker (2014) finding that only 0.13% of studies published in the top educational research journals over a five-year period were replications; a corresponding figure of 0.5% was reported for special education research two years later (Makel et al., 2016). Given the perception of replications as the gold standard in science, it is perhaps surprising that they constitute such a vanishingly small proportion of the published record.

Furthermore, in addition to the general lack of replication studies, Morrison (2019) highlighted that replications are far more likely to obtain the same effect as the original study when conducted by the original research team; the success rate falls significantly when a new team attempts the study. This is itself a problem: across the 164,589 published articles they reviewed, Makel and Plucker (2014) found only 54 papers demonstrating replications by a new research team. Even within replications, then, there is an issue of transparency of methods that may affect the outcome of a study and, as such, the research we base pedagogy on exhibits the same challenges to scientific credibility as the other fields highlighted in the replication crisis (Soderberg et al., 2021; Munafò et al., 2017), at least from the hypothesis-driven quantitative perspective that makes up the majority of the studied literature. If the solution is not solely replication, then the field as a whole would benefit from approaches that allow researchers to adopt norms of methodological transparency, and to participate in practices that improve the quality, as well as the quantity, of the evidence we base our teaching approaches on (Vazire, 2018).

Beyond replications, and again focusing primarily on hypothesis-driven quantitative studies, Munafò et al. (2017) highlight at least three weaknesses in the publication process that potentially lead to less reliable research. The first is publication bias, whereby journals predominantly publish positive findings (i.e., studies where the researchers find the effect they set out to find; the alternative hypothesis). This in turn leads to the file-drawer problem, whereby researchers do not write up research with a null finding (i.e., no effect) because they know the study will not receive a favourable peer review or, worse, will be desk-rejected by the journal without explanation. By illustration, Fanelli (2010), in a review of over 2000 articles across an array of science subjects, reported that on average 84% of articles stated a finding in accordance with the stated alternative hypothesis; for psychology-related journals the rate was 91.5%. More recently, Scheel, Schijen, and Lakens (2021) found that approximately 96% of the 152 psychology articles they sampled stated a result consistent with the hypothesis. From these findings, it is clear that the current approach to publication is biased towards novel and exciting positive findings, regardless of their robustness.
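
To make the mechanics of this bias concrete, the short simulation below sketches a literature in which only significant results are published. It is a minimal illustration of the argument, not a model of any cited dataset; the proportion of true hypotheses, the statistical power, and the alpha level are all assumed values chosen purely for demonstration.

```python
import random

random.seed(1)

# Illustrative assumptions, not estimates from the cited studies:
# 30% of tested hypotheses are true, studies have 50% power,
# and the nominal false-positive rate (alpha) is 5%.
P_TRUE, POWER, ALPHA, N_STUDIES = 0.30, 0.50, 0.05, 100_000

published = false_positives = 0
for _ in range(N_STUDIES):
    effect_is_real = random.random() < P_TRUE
    # A study comes out 'significant' with probability POWER if the
    # effect is real, and with probability ALPHA if it is not.
    significant = random.random() < (POWER if effect_is_real else ALPHA)
    if significant:  # the journal filter: only positive results appear
        published += 1
        if not effect_is_real:
            false_positives += 1

print(f"published: {published} of {N_STUDIES} studies run")
print(f"false positives among published findings: "
      f"{false_positives / published:.1%}")
# Under these assumptions roughly one in five published 'effects' is
# spurious, yet the published record itself looks uniformly positive.
```

The point is not the specific numbers but the structure: once publication is conditioned on the outcome, the literature overstates both the positivity and the reliability of the underlying research.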

This publication positivity bias leads to a second issue that Munafò et al. (2017) highlight as being responsible for low-quality publications: Hypothesising After the Results are Known. Shortened to HARKing, this is where researchers run and analyse their study, and then write the hypothesis and introduction around the finding as though it were what they had originally predicted. The process became so normalised that it was even encouraged in an article (Bem, 1987) commonly taught in higher education research methods classes (Chambers & Tzavella, 2020). Because journals are more likely to publish positive findings, researchers, when HARKing, present their work in that fashion, without consideration of whether the outcome was a spurious finding or not. Given the huge variety of researcher degrees of freedom in a study (i.e., the number of decisions a researcher can make when analysing their data), and given how small changes can influence an outcome (Parsons, 2020), it is clear how HARKing, combined with a publication bias towards positive findings, can produce published articles that are less likely to replicate: the decision to publish them is based on their outcome and not on the quality of the research.
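
The leverage these degrees of freedom give a researcher can be shown with a toy example of our own (it is not an analysis from Parsons, 2020). The sketch below generates data with no true group difference, then tries six defensible analysis paths, three outlier rules crossed with two outcome transforms, and counts how often at least one path dips below p < .05.

```python
import math
import random
from statistics import NormalDist, mean, stdev

random.seed(42)

def p_value(a, b):
    """Two-sided p from a two-sample z-test (normal approximation)."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def drop_outliers(xs, cutoff):
    """Remove values more than `cutoff` SDs from the mean (None keeps all)."""
    if cutoff is None:
        return xs
    m, s = mean(xs), stdev(xs)
    return [x for x in xs if abs(x - m) <= cutoff * s]

def analysis_paths(a, b):
    """Yield a p-value for each defensible outlier rule x transform."""
    for cutoff in (None, 2.5, 2.0):
        for transform in (lambda x: x, math.log):
            yield p_value([transform(x) for x in drop_outliers(a, cutoff)],
                          [transform(x) for x in drop_outliers(b, cutoff)])

N_SIMS, hits = 2000, 0
for _ in range(N_SIMS):
    # Two groups drawn from the SAME distribution: no true effect exists.
    a = [random.gauss(100, 15) for _ in range(30)]
    b = [random.gauss(100, 15) for _ in range(30)]
    if min(analysis_paths(a, b)) < 0.05:  # report the 'best' path
        hits += 1

print(f"chance of a significant result somewhere: {hits / N_SIMS:.1%}")
# Noticeably above the ~5% that any single pre-specified analysis gives.
```

A HARKed write-up then presents whichever path 'worked' as though it were the planned analysis all along.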

The flipside of HARKing, and the final issue from Munafò et al. (2017) we raise here, is Critiquing After the Results are Known (CARKing). This is the all-too-common occurrence whereby a reviewer asks a researcher to change aspects of the write-up of their study to make 'the story' more consistent with the hypothesis. Similarly, the reviewer may suggest additional analyses that were outwith the scope, both theoretical and methodological, of the original paper. CARKing again highlights that published articles can be a product of the review process just as much as of the researcher's decision-making tree (Parsons, 2020), and that decisions about publication are based not on research quality but purely on the outcome. In more serious scenarios, CARKing may simply result in the rejection of a paper based on nothing other than the biases a review team brings to the process and their beliefs about the links between theory and evidence. Dienes (2020), for example, highlights how seeing the results of a study before linking methodology and theory can, through personal biases and CARKing, ultimately compromise the evidence for a given theory and the inferences that are drawn from the data.

Taking these issues together, coupled with incredibly high rejection rates across journals (a snapshot from psychology, for example, suggests an average rejection rate of approximately 70% across all journals; American Psychological Association, 2020) and an emphasis on publication metrics for academic jobs and promotion (Cline et al., 2020; Herz et al., 2020), it is clear how the current path to publication is not fit for purpose, how it can be gamified, and how it can ultimately lead to the publication of less credible research that becomes the basis of policy and decision making. Moreover, these issues do not yet take into consideration additional Questionable Research Practices (QRPs), such as omitting variables or p-hacking (adjusting samples or analyses until a significant result is obtained), nor the general lack of transparency in the reporting of methods and analyses (John, Loewenstein, & Prelec, 2012; Makel, Hodges, Cook, & Plucker, 2021), all of which likewise lead to published studies that do not hold up to scrutiny. For illustration, Makel et al. (2021) surveyed approximately 1400 researchers who had published educational research within the previous 10 years and found that questionable research practices were prevalent throughout the field. That said, Gehlbach and Robinson (2021) postulated that within educational psychology there is an uneven distribution of knowledge about the practices that lead to less credible research; it is not the case that all researchers are deliberately creating less reliable research, many are simply unaware of the issue. And so, if replications, considered the 'gold standard' in science and the approach that instils most confidence in research, are heavily under-used, potentially for reasons relating to their lack of novelty or the sheer difficulty of replicating a study, and the current publication system is ripe for questionable research, it would be pertinent to normalise an alternative approach to publishing research that counters these issues.

Registered Reports (Chambers & Tzavella, 2020; Munafò et al., 2017) are one such approach and, with widespread adoption, would help improve the standard of research that schools and universities base interventions and implementations upon. Both Dienes (2020) and Kiyonaga and Scimeca (2019) offer a detailed review of the process but, in summary, the ethos of Registered Reports is that the decision to publish research should be based on the research question and methods alone, and never the results. This in turn leads to fully transparent reporting of methods and analyses and hence greater confidence in the conclusions. The current format, initially proposed in 2012 (Chambers & Tzavella, 2020), follows a two-stage peer review process. In the first stage (Stage 1), researchers submit the introduction (background, research question, hypothesis, and reasoning), methodology, and proposed analysis plan for peer review, and a decision is then made as to whether the research should be carried out as stated or revised. Once the peer review team decide that the research should be carried out, the study is granted "in principle acceptance", meaning that if the research team follow their own plan the study will be published regardless of outcome. The research team then carry out the study, complete the write-up (results and discussion), and submit the paper for the second stage of review (Stage 2), where the peer review team check that the study has been carried out as planned and that the results and discussion are consistent with the plan. If so, the paper is accepted and published. Deviations from the plan are allowed as long as the original plan is included and the deviations are documented and reasoned. At no point, however, is the decision to publish based on the findings, only on the approach. Further to this, in agreement with the research team, the journal can also undertake to publish the Stage 1 submission alone if unforeseen circumstances prevent data being collected.
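
For illustration only, the fragment below encodes the Stage 2 decision rule in code; the names are our own and this is a deliberately simplified sketch of the process described above, not an implementation used by any journal. Its one telling feature is that the study's results never appear as an input to the publication decision.

```python
from enum import Enum, auto

class Decision(Enum):
    PUBLISH = auto()
    REVISE = auto()

def stage2_decision(followed_stage1_plan: bool,
                    deviations_documented_and_reasoned: bool) -> Decision:
    """Stage 2 review of a Registered Report, simplified: the only inputs
    are adherence to the accepted Stage 1 plan, or documented and reasoned
    deviations from it. Deliberately, there is no 'results' parameter."""
    if followed_stage1_plan or deviations_documented_and_reasoned:
        return Decision.PUBLISH
    return Decision.REVISE

# A study that deviated from its plan, but documented and reasoned those
# deviations, is still published, whatever it found.
print(stage2_decision(False, True))  # Decision.PUBLISH
```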

Taking the aforementioned flaws with the standard publication route in turn, we can see how Registered Reports counteract them. Registered Reports remove the publication bias towards positive findings because the decision to publish is made before anyone (research team or reviewers) knows the outcome. As evidence of impact, Scheel et al. (2021) found that taking a Registered Report approach reduced the proportion of positive findings by more than half: comparing 152 standard-route papers with 71 Registered Report papers in psychology, only 44% of papers published via the Registered Report route reported a finding consistent with the hypothesised effect, compared to 96% of papers published via the standard route. This suggests that Registered Reports lead to more null findings being published, which is in itself an important improvement, as knowing what does not work when teaching is just as important as knowing what does! The same finding points to a reduction in HARKing; research teams cannot adjust, and have no need to adjust, their introduction and hypothesis to fit an outcome, as again the decision to publish has already been made before the results are known. Likewise, CARKing is prevented because the review team have agreed at Stage 1 that the method and analysis plan are acceptable and, assuming the plan is followed, cannot reasonably ask the research team to make changes at the second stage. In fact, given that the majority of questionable research practices are carried out to obtain a publication (Chambers, Feredoes, Muthukumaraswamy, & Etchells, 2014), taking a Registered Report approach reduces the need, desire, and opportunity for such practices whilst at the same time improving the methodological transparency of the study. It is our argument that people are more willing to accept amendments to their project at the design phase than after completion, creating a more collegiate and discursive approach to peer review, which can only benefit science through the publication of more transparent methods and reliable findings. Furthermore, Gehlbach and Robinson (2021) point out that other transparent practices, such as pre-registering hypotheses and analyses, are not necessarily sufficient to ensure better research practices; we therefore believe that an outside review of the theory, methods, and analytical plan of a study before data collection better demonstrates transparency and increases confidence in the conclusions.
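
As a back-of-the-envelope check on how decisive that gap is, the snippet below runs a standard two-proportion z-test on the percentages and sample sizes reported above. This is our own calculation for illustration, not an analysis from Scheel et al. (2021).

```python
import math

# Figures reported by Scheel et al. (2021): 96% of 152 standard-route
# papers versus 44% of 71 Registered Reports reported positive results.
n_std, p_std = 152, 0.96
n_rr, p_rr = 71, 0.44

# Pooled two-proportion z-test.
pooled = (n_std * p_std + n_rr * p_rr) / (n_std + n_rr)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_std + 1 / n_rr))
z = (p_std - p_rr) / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p, precise for large z

print(f"difference: {p_std - p_rr:.0%} points, z = {z:.1f}, p = {p:.1e}")
# z comes out around 9: far too large a gap to be sampling noise.
```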

Whilst not novel in concept, Registered Reports are still rare in use but, at the time of writing, more than 280 journals offer them as a route to publication (Chambers & Tzavella, 2020), with seven teaching-focused journals currently registered within that collective on the Open Science Framework. It will be a few years before we truly understand whether Registered Reports do lead to more reliable research, as the number of papers published in this fashion is still low. However, the findings from Scheel et al. (2021) suggest that Registered Reports will improve the research we base our decisions on, as they remove the issues associated with the standard approach. Beyond this, Alister et al. (2021) found that Registered Reports improved perceived confidence in research carried out in this fashion by over 60%. This is below the 90+% increase in confidence for direct replications, but we must keep in mind that replication is not always a viable option. In educational research, for example, you may have only one academic year to test your novel implementation, say a statistics workbook to improve understanding, and must decide at the end of that year whether to continue with that approach or remove it. In the long run your implementation could be replicated at another school and the results compared, but in the meantime you have to decide whether to continue using the implementation and whether to tell others about it. A Registered Report is an excellent option in this scenario: you, as both researcher and teacher, benefit from the Stage 1 review process, knowing that you are going into your academic year with a solid methodological and analytical approach, agreed on and improved in collaboration with the review team, and that the findings will be published regardless of outcome. Everybody, researcher, teacher, and future implementer alike, learns from this process.

That said, Registered Reports are not without their critics, and may not be suitable for all types of study and research (see Chambers et al., 2014, and Chambers and Tzavella, 2020, for full discussion). Primarily, Registered Reports require a shift in mentality: from simply reporting the work at the end of the study to transparently recording and publishing research decisions across its duration. Reich (2021) points out some important considerations when applying Registered Reports in pedagogical settings. For instance, fieldwork, by necessity, cannot accommodate a long, iterative review of the Stage 1 manuscript. However, two solutions present themselves. First, through discussion with editors, authors can agree a timeframe for submitting the Stage 1 manuscript and a deadline by which reviewers agree to return their reviews, meaning that the review process can fit within a limited period. This approach would also greatly benefit early career researchers and PhD students, for whom time is limited and the desire for published outcomes, for future career prospects, is highest. Second, in a case study described by Reich that merges ideas from Registered Reports and pre-registration (time-stamping aspects of a study in advance, without peer review), an author whose calendar constraints meant they could not wait to gather data submitted their Stage 1 manuscript, collected the data, and locked it away unanalysed until the Stage 1 review had been returned. While this is risky for new or exploratory research, where reviewers are more likely to find methodological flaws, we believe that, with the agreement of an editor, the approach can work well for confirmatory or replication studies where the methodology is well established. In such cases the emphasis would be on scrutinising the analysis and, as the risk of rejection due to questionable research practices would be the same as in any other approach, researchers would still need to be fully transparent about their methods, with failure to do so being tantamount to scientific fraud and grounds for retraction. What this case study highlights most, however, is that some flexibility is needed to allow people to incorporate high-quality research practices whilst still mitigating the real-world constraints often encountered in teaching. We believe that such issues can be accommodated and that, where suitable, adopting Registered Reports would lead to an overall improvement in published pedagogical research.

One elephant in the room for pedagogical research and open science practice, however, is that most of the interventions developed for increasing transparency and decreasing bias sit under a quantitative, hypothesis-testing umbrella. In contrast, by the nature of what is studied (teacher and student interactions in a given time and place), pedagogical research favours qualitative and exploratory enquiry, where the prevalence of the issues caused by a malformed peer review process and/or questionable research practices is harder to quantify. What is known, though, is that qualitative research is not devoid of publication bias; for instance, Petticrew et al. (2008) estimated that only about 44% of qualitative research presented at medical conferences ended up being published. More recently, Toews et al. (2016) showed that, along with a publication bias, questionable research practices such as reporting only some of the results persist in qualitative research. Additionally, the authors suggested that one reason knowledge from qualitative enquiry falls out of the peer-reviewed knowledgebase is a lack of robust methodology. And whilst these findings relate to a specific field, namely health, we believe those engaged in pedagogical research need to be cognisant of them in order to increase the knowledgebase on which our interventions are based. Pre-registration, and a commitment by publishers to publish Stage 1 reports, is potentially a powerful tool for increasing the volume and value of qualitative evidence, in much the same way as for hypothesis-driven enquiry. Haven and Van Grootel (2019) go further, stating that pre-registration, and by extension Registered Reports, has the potential to address a number of misunderstandings surrounding the subjective nature of qualitative research: promoting transparency would encourage researchers to declare a priori their philosophical positioning and the lens through which they will interpret their data. In short, just as exploratory quantitative pedagogical research allows greater scope in what is carried out, yet still benefits from scrutiny by peers through pre-registration and Registered Reports, so too might qualitative pedagogical research.

Ultimately, whatever the scientific approach behind their interventions, be it quantitative or qualitative, exploratory or confirmatory, pedagogical and educational settings face a continual trade-off between finding the most effective intervention and what their available facilities can sustain. A recent meta-analysis blog by Hansford (2021), showing the variety of interventions in teaching and their relative effect sizes (i.e., their impact on learning), gives just a snapshot of the somewhat overwhelming options, assuming the underlying research is robust, which, for the reasons stated above, is perhaps not such a valid assumption. Moreover, any implementation is costly, in both time and finance, even just considering the workload hours to put an intervention in place. It is therefore imperative, particularly in research related to teaching and learning, that the field has a good handle on what does and does not work, and on the effectiveness of different approaches. Registered Reports, even in the absence of replications, allow far greater confidence than the traditional publication route in knowing what knowledge is credible, which interventions work and which do not, and how effective they will be.

Regardless of the benefits, however, any change to practice will often be met with a degree of scepticism, and potentially resistance, particularly in an academic climate where job security and promotion criteria have for many years been explicitly linked to high-impact publications (Cline et al., 2020; Herz, Dan, Censor, & Bar-Haim, 2020). Such resistance has already been shown towards Registered Reports in other fields, and has led to a number of papers that dispel myths and rumours about the approach, chiefly that Registered Reports would spell the end of exploratory research and can only benefit hypothesis-driven confirmatory research (see Chambers et al., 2014, for example). Most people, however, would agree that transparency in research is, conceptually, to the benefit of science, and the question then becomes one of adopting policies that lead to the uptake of good research practices such as Registered Reports. As above, with the recent push towards greater research integrity, there has been a steady shift in policy at the academic level, with a small number of institutions adopting criteria that emphasise research transparency and rigour in their hiring and promotion criteria as well as in their research ethos statements (e.g. University of Glasgow, 2021). Whilst these are steps in the right direction, we must be careful not to rest on our laurels, and instead aim to build a field that actively promotes the practices it values.

Focusing on Registered Reports in pedagogy, and thinking how they could be actively encouraged in a high-pressure, time-constrained environment, it would perhaps not be too speculative to imagine a scenario where grants or incentives are offered for writing Stage 1 reports and having those reports stored in a repository. Additional incentives or grants could be offered for completing the study at a later date, perhaps even by a different research team, or as a call for data to be gathered at multiple sites (see the Psychological Science Accelerator for an example; Moshontz et al., 2018). Alternatively, as a field, we should perhaps be more open to a longer gap between Stage 1 acceptance and Stage 2 completion than would be seen in research-intensive fields; again, a repository of accepted or published Stage 1 reports would be of benefit here. In addition, and really the motivation of this article, pedagogically focused journals could aid the field simply by offering Registered Reports as a route to publication alongside more traditional approaches. Increasing the visibility of Registered Reports would in turn lead to internal learning and teaching committees openly and actively discussing how to implement such an approach in their practice, with the carrot of any additional incentives from their school, institute, or funding body where possible. The key point is that the first step to adopting a practice is making it accessible and visible; incentives then prompt people to try the practice, and achievement develops a sustained sense of motivation and increases longevity. How we as a field make Registered Reports accessible, and how we motivate researchers to adopt them, are questions open to debate, and ones on which journals and steering committees will have differing views, perhaps using the above suggestions as springboards. The question of visibility, however, of simply having the option, is one that all journals can address. When we know we are on the right path, we must not get stuck in the past for fear of not quite knowing what is around the corner.

In conclusion, whilst Registered Reports are no panacea for all the pitfalls of the current approach to carrying out and publishing research, it is our belief that they are a large step in the right direction towards more transparent and credible pedagogical research. In turn, the teaching implementations and interventions that come out of that research could be seen to have been tested to a higher standard of evidence. As such, we propose that those invested in pedagogical research should, where possible, adopt and normalise Registered Reports as a means of disseminating their work, so that we all might benefit from knowing, with an increased degree of confidence, which research to base our course and curriculum design upon, to the benefit of our students and the development of their graduate skills and knowledge.

References

Alister, M., Vickers-Jones, R., Sewell, D. K., & Ballard, T. (2021). How do we choose our giants? Perceptions of replicability in psychological science. Advances in Methods and Practices in Psychological Science, 4(2). https://doi.org/10.1177/25152459211018199

American Psychological Association (2020). Summary report of journal operations, 2019. The American Psychologist, 75(5), 723-724. https://doi.org/10.1037/amp0000680

Bem, D.J. (1987) Writing the empirical journal article. In: M.P. Zanna, & J.M. Darley (Eds.), The complete academic: A practical guide for the beginning social scientist (pp. 171–201). Hillsdale, NJ: Erlbaum.

Chambers, C.D., Feredoes, E., Muthukumaraswamy, S.D., & Etchells, P.J. (2014). Instead of "playing the game" it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1(1), 4-17. https://doi.org/10.3934/Neuroscience.2014.1.4

Chambers, C.D., & Tzavella, L. (2020, February 10). The past, present, and future of Registered Reports. Nature Human Behaviour, 1-14 https://doi.org/10.31222/osf.io/43298

Cline, H., Coolen, L., de Vries, S., Hyman, S., Segal, R., & Steward, O. (2020). Recognizing team science contributions in academic hiring, promotion, and tenure. The Journal of Neuroscience, 40(35), 6662-6663. https://doi.org/10.1523/JNEUROSCI.1139-20.2020

Dienes, Z. (2020, October 28). The inner workings of Registered Reports. https://doi.org/10.31234/osf.io/yhp2a

Fanelli, D. (2010). "Positive" results increase down the hierarchy of the sciences. PLOS ONE, 5(4), e10068. https://doi.org/10.1371/journal.pone.0010068

Gehlbach, H., & Robinson, C.D. (2021). From old school to open science: The implications of new research norms for educational psychology and beyond. Educational Psychologist, 56(2), 79-89. https://doi.org/10.1080/00461520.2021.1898961

Hansford, N. (2021, February 6). Common teaching interventions ranked by meta-analysis effect size. Pedagogy Non Grata. https://nathanielhansford.wixsite.com/website/blank-page

Haven, T.L., & Van Grootel, D.L. (2019). Preregistering qualitative research. Accountability in Research, 26(3), 229-244. https://doi.org/10.1080/08989621.2019.1580147

Herz, N., Dan, O., Censor, N., & Bar-Haim, Y. (2020). Authors overestimate their contribution to scientific work, demonstrating a strong bias. Proceedings of the National Academy of Sciences, 117(12), 6282. https://doi.org/10.1073/pnas.2003500117

John, L.K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532. https://doi.org/10.1177/0956797611430953

Kiyonaga, A., & Scimeca, J.M. (2019). Practical considerations for navigating registered reports. Trends in Neurosciences (Regular Ed.), 42(9), 568-572. https://doi.org/10.1016/j.tins.2019.07.003

Makel, M.C., Hodges, J., Cook, B.G., & Plucker, J.A. (2021). Both questionable and open research practices are prevalent in education research. Educational Researcher. https://doi.org/10.3102/0013189X211001356

Makel, M.C., & Plucker, J.A. (2014). Facts are more important than novelty: Replication in the education sciences. Educational Researcher, 43(6), 304-316. https://doi.org/10.3102/0013189X14545513

Makel, M.C., Plucker, J.A., Freeman, J., Lombardi, A., Simonsen, B., & Coyne, M. (2016). Replication of special education research: Necessary but far too rare. Remedial and Special Education, 37(4), 205-212. https://doi.org/10.1177/0741932516646083

Makel, M.C., Plucker, J.A., & Hegarty, B. (2012). Replications in psychology research: How often do they really occur? Perspectives on Psychological Science, 7(6), 537-542. https://doi.org/10.1177/1745691612460688

Morrison, K. (2019). Realizing the promises of replication studies in education. Educational Research and Evaluation, 25(7-8), 412-441. https://doi.org/10.1080/13803611.2020.1838300

Moshontz, H., Campbell, L., Ebersole, C.R., IJzerman, H., Urry, H.L., Forscher, P.S., … Isager, P.M. (2018). The psychological science accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501-515. https://doi.org/10.1177/2515245918797607

Munafò, M.R., Nosek, B.A., Bishop, D.V.M., Button, K.S., Chambers, C.D., Percie Du Sert, N., … Ioannidis, J.P.A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 0021. https://doi.org/10.1038/s41562-016-0021

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Parsons, S. (2020, June 26). Exploring reliability heterogeneity with multiverse analyses: Data processing decisions unpredictably influence measurement reliability. https://doi.org/10.31234/osf.io/y6tcz

Petticrew, M., Egan, M., Thomson, H., Hamilton, V., Kunkler, R., & Roberts, H. (2008). Publication bias in qualitative research: What becomes of qualitative research presented at conferences? Journal of Epidemiology and Community Health, 62(6), 552. https://doi.org/10.1136/jech.2006.059394

Reich, J. (2021). Preregistration and registered reports. Educational Psychologist, 56(2), 101-109. https://doi.org/10.1080/00461520.2021.1900851

Scheel, A.M., Schijen, M.R.M.J., & Lakens, D. (2021). An excess of positive results: Comparing the standard psychology literature with registered reports. Advances in Methods and Practices in Psychological Science, 4(2), 1-12. https://doi.org/10.1177/25152459211007467

Soderberg, C.K., Errington, T.M., Schiavone, S.R., Bottesini, J., Thorn, F.S., Vazire, S., … Nosek, B.A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5(8), 990-997. https://doi.org/10.1038/s41562-021-01142-4

Toews, I., Glenton, C., Lewin, S., Berg, R.C., Noyes, J., Booth, A., … Meerpohl, J.J. (2016). Extent, awareness and perception of dissemination bias in qualitative research: An explorative survey. PLOS ONE, 11(8), e0159290. https://doi.org/10.1371/journal.pone.0159290

University of Glasgow (2021, September 17). UofG promotes open research practices. University of Glasgow. Retrieved from https://www.gla.ac.uk/myglasgow/news/headline_811562_en.html

Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411-417. https://doi.org/10.1177/1745691617751884