Analyzing 487,996 statistical tests from 35,515 papers (1975–2017), the study finds substantial publication bias, p‑hacking, and persistently low statistical power, yet estimates that only about 17.7% of reported significant results are false under its stated assumptions. Power improved only slightly over the four decades and reaches the conventional 80% benchmark only for large effects.
— This tempers replication‑crisis nihilism while underscoring the need for power, preregistration, and bias controls, shaping how media, funders, and policymakers treat psychology evidence.
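For intuition on how such a share can arise, here is a minimal sketch of the standard false-discovery arithmetic. The base rate of true effects, the significance threshold, and the average power used below are hypothetical illustrative values, not the paper's inputs or its actual estimation method.

```python
def false_discovery_share(prior_true: float, alpha: float, power: float) -> float:
    """Share of significant results that are false positives, given the
    fraction of tested hypotheses that are true (prior_true), the
    significance threshold (alpha), and average statistical power."""
    false_pos = alpha * (1 - prior_true)   # true nulls that still reach significance
    true_pos = power * prior_true          # real effects detected at significance
    return false_pos / (false_pos + true_pos)

# Hypothetical inputs chosen only to show the mechanics: if ~35% of tested
# effects are real, alpha = 0.05, and average power is ~0.44, the false
# discovery share comes out near 0.17, in the ballpark of the reported 17.7%.
print(round(false_discovery_share(prior_true=0.35, alpha=0.05, power=0.44), 3))  # 0.174
```

The point of the sketch is simply that a modest false-discovery share is compatible with low average power when a reasonable fraction of tested hypotheses are true; the actual 17.7% figure rests on the study's own assumptions and corpus.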
2025.10.07
100% relevant
Estimate: 'The share of false discoveries among all significant results was 17.7%' from a corpus of 35,515 psychology papers (1975–2017).
2015.10.07
78% relevant
The Open Science Collaboration’s 2015 mass replication (only ~36% of replications were statistically significant, with effect sizes roughly halved) paved the way for later meta‑audits that estimate psychology’s false‑positive share (~17.7%). The OSC paper is the empirical foundation that triggered these field‑level quantifications.