A coordinated, large‑scale replication effort (100 psychology experiments) found low reproducibility and shrunken effect sizes, pointing to systemic problems in research practices and publication incentives rather than isolated errors. This suggests reproducibility projects can serve as diagnostic tools for structural weaknesses (p‑hacking, selective reporting, low statistical power) whose consequences ripple into policy, clinical practice, and public trust.
— If many high‑profile behavioral findings are unreliable, then policy and clinical recommendations built on them may be misinformed, and public trust in science can erode; reforming incentives and standards is therefore a matter of public interest.
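The link between low power, selective reporting, and inflated effect sizes can be illustrated with a minimal simulation (a sketch with assumed parameters, not a model of the actual replication project): if underpowered studies are only "published" when they reach significance, the published estimates systematically exceed the true effect, which is one reason replications tend to find roughly halved effects.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2   # assumed true standardized effect, for illustration
n = 30              # per-study sample size (underpowered for d = 0.2)
n_studies = 10_000

# Each study estimates the effect as the mean of n draws from N(true_effect, 1).
estimates = rng.normal(true_effect, 1.0, size=(n_studies, n)).mean(axis=1)
se = 1.0 / np.sqrt(n)
significant = estimates / se > 1.96  # one-sided z-test at alpha = .05

power = significant.mean()                       # fraction of studies that "publish"
published_mean = estimates[significant].mean()   # mean effect among published studies

print(f"power:          {power:.2f}")
print(f"true effect:    {true_effect:.2f}")
print(f"published mean: {published_mean:.2f}")  # inflated by selection on significance
```

With these assumed numbers, power is well under 50% and the mean published effect is roughly double the true effect, so an exact replication would be expected to find a much smaller effect than the published one.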
2015.05.04
Open Science Collaboration (Science, 2015): 100 replications; ~36% produced statistically significant results; median replication effect size roughly half the original.