Analyzing 487,996 statistical tests from 35,515 papers (1975–2017), the study finds substantial publication bias, p‑hacking, and persistently low statistical power, yet estimates that only about 17.7% of reported significant results are false under its stated assumptions. Power improved only slightly over the four decades and reaches the conventional 80% threshold only for large effects.
This tempers replication‑crisis nihilism while underscoring the need for adequate power, preregistration, and publication‑bias controls, and it shapes how media, funders, and policymakers treat psychology evidence.
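For intuition, a figure like 17.7% can be related to the textbook false‑discovery arithmetic: given an assumed base rate of true hypotheses, average power, and a significance threshold, the share of significant results that are false positives falls out directly. A minimal Python sketch of that arithmetic (the paper's own model differs; all parameter values here are illustrative assumptions, not figures from the study):

```python
# Textbook false-discovery arithmetic, not the paper's exact model.
# All parameter values below are illustrative assumptions.

def false_discovery_share(prior_true: float, power: float, alpha: float) -> float:
    """Share of significant results that are false positives.

    prior_true: assumed fraction of tested hypotheses that are true
    power:      assumed probability of detecting a true effect
    alpha:      significance threshold (false-positive rate per null test)
    """
    true_hits = prior_true * power          # true effects that reach significance
    false_hits = (1 - prior_true) * alpha   # nulls that reach significance anyway
    return false_hits / (true_hits + false_hits)

# Example: even with modest power, a reasonably high base rate of true
# hypotheses keeps the false-discovery share fairly low.
print(false_discovery_share(prior_true=0.5, power=0.35, alpha=0.05))  # ~0.125
```

Under these toy inputs the share is about 12.5%; the study's 17.7% reflects its own empirically estimated inputs, but the qualitative point is the same: low power alone does not force a majority of significant results to be false.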
Arnold Kling
2026.01.12
88% relevant
The article cites Josh Zlatkus and Rob Kurzban on replication failures and explicitly frames psychology as operating near the boundary of 'science and baloney sandwich,' directly matching the registered idea that psychology shows systemic publication bias, low power, and a substantial false‑discovery problem.
Lee Jussim
2026.01.10
75% relevant
The discussion of how badly academics and laypeople predict meta‑analytic outcomes, and of how the IAT is routinely misinterpreted, ties directly to the broader replication‑quality concerns summarized by the false‑discovery work, showing why credibility metrics for psychology and social science are central to public debates.
Josh Zlatkus
2026.01.07
92% relevant
The article argues that many psychological findings are not true science but artifacts of poor methods. That diagnosis matches the existing calculation that a substantial share of reported significant results in psychology is likely false positives, connecting directly to claims about low replication and publication bias.
Steve Stewart-Williams
2025.11.29
70% relevant
The article relies on large syntheses and meta‑analytic results (e.g., Zell & Lesick on conscientiousness, Ferretti et al. comparing the Big Five with the MBTI), offering an example of psychology producing robust, replicable findings rather than one‑off claims. This connects to the existing idea about psychology's replication and false‑discovery profile by showing where effects are reliable and policy‑relevant.
2025.10.07
100% relevant
Estimate: 'The share of false discoveries among all significant results was 17.7%' from a corpus of 35,515 psychology papers (1975–2017).
2015.10.07
78% relevant
The Open Science Collaboration's 2015 mass replication (~36% significant replications, with effect sizes roughly halved) paved the way for later meta‑audits that estimate psychology's false‑positive share at about 17.7%. The OSC paper provided the empirical foundation for these field‑level quantifications.
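One way to see why halved effect sizes by themselves depress replication rates: a faithful, same‑sized replication of a true effect has far less power when the true effect is half the originally reported one. A hedged sketch using statsmodels (sample size and effect sizes are illustrative assumptions, not OSC figures):

```python
# Illustrative only: how a halved true effect size cuts replication power.
# The sample size and effect sizes are assumptions, not OSC figures.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

original_d = 0.5          # effect size reported by a hypothetical original study
true_d = original_d / 2   # OSC-style scenario: true effect about half as large
n_per_group = 64          # n giving ~80% power for d = 0.5 at alpha = .05

for label, d in [("assumed original d", original_d), ("halved true d", true_d)]:
    p = analysis.power(effect_size=d, nobs1=n_per_group, alpha=0.05)
    print(f"{label} = {d:.2f}: power ~= {p:.2f}")

# Power drops from roughly 0.80 to roughly 0.29, so many faithful
# replications of true-but-overestimated effects will still "fail".
```

Under these assumptions, a replication rate near the OSC's ~36% is consistent with a field full of real but overestimated effects, not only with a field full of false positives.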