Large, coordinated replication projects should be treated as a routine, auditable metric of a field's reliability. Regularly reporting field-level replication rates and typical effect‑size decay would give funders, journals, and the public a concrete signal about how much confidence to place in new findings.
— Making replication rates public would reorient incentives in science (publishing, hiring, funding) and sharpen public understanding of what scientific claims are well‑established versus provisional.
2026.03.05
90% relevant
Jussim anchors his ~75% estimate in replication failure rates (citing a ~50% rate of unreplicable findings) and then adds concrete channels (misquotation, ignoring contrary evidence, fabrication, censorship) that convert replication gaps into a broader proportion of false claims. This is exactly the argument that replication statistics should be treated as a core measure of how much to trust a scientific field.
2015.03.05
100% relevant
Open Science Collaboration's 2015 Science paper reported 100 attempted replications in psychology, of which only ~36% produced statistically significant results, with replication effect sizes roughly half the originals on average.