Grant Lottery
Informed by insights from the metascience and research reform movement, the Fetzer Franklin Fund is also experimenting with how science is funded. In collaboration with Brian Nosek of the Center for Open Science, the Fetzer Franklin Fund carried out an experiment in allocating research funds through a lottery system. The experiment combined several distinctive features: (i) blinding the identity of the researchers so that evaluation was based solely on the quality of the proposed work, (ii) openness to proposals from students and other early-career researchers, (iii) an emphasis on the innovative nature of the proposed work rather than merely on the assessed likelihood of success, and (iv) a lottery to select among all proposals that met the threshold for funding eligibility. The funding process itself was guided by an external evaluation process. The key outcome measures of this experiment concern its impact on the incentives and culture of scientific practice. If successful, this funding process will encourage the exploration of novel ideas and reduce non-productive elements of the funding application process. For more information, see also the work of Carl Bergstrom, which inspired this novel funding approach, including the partial lottery.
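As a rough illustration of the partial-lottery mechanism described above, the following sketch (in Python, with hypothetical proposal scores, an assumed eligibility threshold, and an assumed number of awards) shows how winners could be drawn at random from the pool of proposals that pass a quality threshold; the actual selection procedure used by the fund may have differed in its details.

import random

# Hypothetical sketch of a partial lottery: proposals are first screened
# against a quality threshold, then winners are drawn at random from the
# eligible pool. All scores, the threshold, and the award count are
# illustrative assumptions, not the fund's actual values.
proposals = [
    {"id": "P1", "score": 8.2},
    {"id": "P2", "score": 5.9},
    {"id": "P3", "score": 7.4},
    {"id": "P4", "score": 9.1},
    {"id": "P5", "score": 6.8},
]
THRESHOLD = 7.0  # assumed minimum blinded-review score for eligibility
N_AWARDS = 2     # assumed number of grants to award

# Keep only proposals that meet the funding-eligibility threshold.
eligible = [p for p in proposals if p["score"] >= THRESHOLD]

# Draw the winners uniformly at random from the eligible pool.
winners = random.sample(eligible, k=min(N_AWARDS, len(eligible)))
print([p["id"] for p in winners])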
The winning proposals:
Aurélien Allard (UC Davis): Getting a broader picture of open science reforms
Olavo B. Amaral (University of Rio de Janeiro): Impact of data attributes and visual elements on graph interpretation by biomedical scientists: an eye-tracking study
Julia G. Bottesini (University of California, Davis): Science Journalists’ Views of Research Practices
Katherine S. Corker (Grand Valley State University): Increasing the Geographical Diversity of Metascience at SIPS
Nicholas DeVito (University of Oxford): Examining Institutional Barriers and Best Practice to Trial Reporting at UK Universities
Timothy M. Errington (Center for Open Science): The Effect of Preregistration on Credibility in Research Findings
James Evans (for Robert Danziger) (University of Illinois at Chicago): The complex networks underlying certainty and enthusiasm in stem cell biology
Michael C. Frank (Stanford University): MetaLab: From Meta-Analysis to Mega-Analysis
Matthew Goodwin (Northeastern University): Errordetection.tools: a platform for free multi-functional metascientific error detection at scale
Sean Grant (Indiana University Richard M. Fairbanks School of Public Health): Further Increasing Take-Up of TOP by Social Intervention Research Journals
William Gunn (Elsevier): A Platform For Crowd-Sourcing Metascientific Data On Plagiarism / Textual Overlap
Daniel Hamilton (University of Melbourne, Australia): Prevalence of reproducible research practices in the oncology literature
Tobias Heycke (GESIS – Leibniz-Institute for the Social Sciences): Improving Peer-Review: Testing the Effect of Video Recordings in Peer-Review
Shoshana Jarvis (University of California, Berkeley): Influences of conferences: Exploring the social and scientific effects of Metascience 2019
Zoltan Kekecs (ELTE (Eotvos Lorand University), Budapest, Hungary): STASH – Stimulating direct archiving and sharing of research data
Lanu Kim (Stanford University): Search engines' impact on how scientists cite
Oskar Sebastian Lundmark (University of Gothenburg, Sweden, Department of Communication): Can a checklist with critical questions help scientists and the public avoid fallacious and circular arguments?
Maya Mathur (Stanford University): Sensitivity analysis for publication bias in meta-analyses: development of novel statistical methods and results in meta-analyses across disciplines
Nicholas Otis (UC Berkeley): Experimental Tests of Policy Innovation and Selection
Tim Parker (Biology Department, Whitman College): To what extent do methodological differences drive heterogeneity in results in ecology?
Thomas Pfeiffer/L Chen (Harvard University): Optimal human-machine integration for predicting the replicability of scientific studies
Maia Salholz-Hillel (Universitätsmedizin Berlin; Berlin Institute of Health (BIH) Center for Transforming Biomedical Research; QUEST – Quality | Ethics | Open Science | Translation): Meta-metascience: A competition to understand and incentivize researchers’ participation in Metascience Surveys
Felix Singleton Thorn (University of Melbourne): Automated assessment of statistical reporting practices and effect sizes
Ye Sun (University of Utah): How Has Publication Bias Concerned Meta-analysts in Communication: A Two-part Meta-assessment
William Hedley Thompson (Karolinska Institute): Dataset decay and scientific intuition
Leonid Tiokhin (Eindhoven University of Technology): Honest signaling in academic publishing
Leonardo Tozzi (Stanford University): Meta-connectomes for neuroscience research
Charles Twardy (KeyW, a Jacobs Company): PNFRS: Prizes for nominating and forecasting retractions and successes
Olmo van den Akker (Tilburg University, the Netherlands): Metascience Grant Application for Part 2 of the project “The AE-algorithm: Author name disambiguation for large Web of Science datasets”
Simine Vazire (University of California, Davis): Validating a rubric for open peer review
Kathleen Vohs (University of Minnesota): Replication Outcomes: Does Experimenter Behavior Play a Role?
Jelte M. Wicherts (Tilburg University): Preregistering your meta-analysis: A template and tutorial