Analyzing nearly half a million test statistics across 35,515 psychology papers (1975–2017), the author finds low power and clear publication selection bias, yet estimates, under a range of reasonable assumptions, that only about 18% of published significant results are false positives: a substantial majority likely reflect real effects rather than pure statistical artifacts. The paper emphasizes that these conclusions depend on untestable assumptions (e.g., the prior proportion of true hypotheses and the mechanisms generating excess borderline p‑values) and presents alternative scenarios to bracket the uncertainty.
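The dependence on the prior proportion of true hypotheses follows the standard false-discovery identity relating that prior, average power, and the significance threshold. A minimal sketch of that arithmetic (the input numbers below are illustrative assumptions, not the paper's fitted estimates):

```python
def false_discovery_rate(prior_true: float, power: float, alpha: float = 0.05) -> float:
    """Share of significant results that are false positives, given the
    prior proportion of true hypotheses, average power, and alpha."""
    false_pos = alpha * (1 - prior_true)   # null hypotheses that reach significance
    true_pos = power * prior_true          # real effects that reach significance
    return false_pos / (false_pos + true_pos)

# Illustrative scenario: half of tested hypotheses true, 35% average power.
print(round(false_discovery_rate(prior_true=0.5, power=0.35), 3))  # → 0.125
```

Varying `prior_true` and `power` shows why the headline estimate must be bracketed by alternative scenarios rather than stated as a single number.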
— If most significant findings are substantive despite misconduct and low power, reforms should target specific practices (selective reporting, small‑sample studies) rather than wholesale mistrust of published science; policymakers and journalists should calibrate skepticism accordingly.
2026.05.04
100% relevant
Estimate of 17.7% false discoveries, derived from 487,996 test values across 35,515 psychology articles, with modeled adjustments for p‑hacking and publication bias.