Publishers, funders, and professional societies should maintain public dashboards that aggregate reported test statistics and p‑value distributions across a discipline, tracking statistical power, selection‑bias signals (e.g., p‑curve anomalies), and estimated false discovery rates in near real time. These dashboards would rely on standardized, machine‑readable submissions or on automated extraction from published articles, and would transparently display trends to guide policy, preregistration enforcement, and funding priorities.
— A continuous, public metric would give policymakers, journals, and funders an evidence base to calibrate reproducibility interventions and to hold institutions accountable for improving research reliability.
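As a rough illustration of the metrics such a dashboard could compute from aggregated p‑values, here is a minimal Python sketch. The function name, thresholds, and the use of a caliper test and Storey's π₀ estimator are illustrative assumptions, not methods taken from the article:

```python
import random

def dashboard_metrics(p_values, alpha=0.05, lam=0.5):
    """Illustrative summary statistics a reproducibility dashboard might track.

    Hypothetical sketch: names and thresholds are assumptions for illustration.
    """
    n = len(p_values)
    # Share of reported tests significant at the alpha threshold.
    sig_share = sum(p < alpha for p in p_values) / n

    # Caliper test for selection bias: a surplus of p-values just below
    # alpha relative to just above it is one p-curve anomaly signal.
    just_below = sum(alpha - 0.01 <= p < alpha for p in p_values)
    just_above = sum(alpha < p <= alpha + 0.01 for p in p_values)

    # Storey-style estimate of pi0 (share of true nulls): under the null,
    # p-values are uniform, so the density above `lam` estimates pi0.
    pi0 = min(1.0, sum(p > lam for p in p_values) / (n * (1 - lam)))
    # Estimated false discovery rate among tests significant at alpha.
    fdr = min(1.0, pi0 * alpha / sig_share) if sig_share > 0 else 0.0

    return {"sig_share": sig_share,
            "caliper_ratio": just_below / max(just_above, 1),
            "pi0": pi0,
            "fdr": fdr}

# Toy example: 80% uniform "null" p-values mixed with 20% small
# "real effect" p-values, standing in for a discipline-wide corpus.
random.seed(0)
p = [random.random() for _ in range(8000)] + \
    [random.random() * 0.01 for _ in range(2000)]
m = dashboard_metrics(p)
```

In this toy mixture, the π₀ estimate should land near the true null share of 0.8, and the estimated FDR at α = 0.05 is correspondingly modest; a dashboard would recompute such metrics as new submissions arrive and plot their trends over time.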
2026.01.04
100% relevant
The article’s large‑scale extraction (487,996 tests from 35,515 papers) and its sensitivity analyses demonstrate the feasibility and value of discipline‑wide aggregation and of tracking FDR and power trends over time.