Highly cited papers can still be wrong or misleading, especially in fast‑moving, high‑salience topics. Treat citations and awards as attention metrics, not validity signals, and anchor policy in replicated, preregistered evidence with sufficient statistical power.
Separating attention from reliability would improve how media, funders, and governments weigh evidence before making rules.
Tom Chivers
2025.10.07
55% relevant
By spotlighting fabricated or suspect data in famous dishonesty studies (Francesca Gino, Dan Ariely), the piece underscores that prominence and publication do not guarantee validity, reinforcing the need to privilege replicated, well‑measured evidence over prestige.
Tyler Cowen
2025.08.30
78% relevant
The paper reports that, across 17 American Economic Review papers, on average only 51% of robustness tests remain significant and t/z statistics fall to 70% of their original values, showing that even highly cited, prestigious publications can be fragile: exactly the warning that status signals shouldn't substitute for replicated, preregistered evidence.
Paul Bloom
2025.08.19
78% relevant
Bloom notes that tens of thousands of citations to fraudulent or weak work can evaporate without altering real knowledge, reinforcing the claim that attention metrics don’t equal validity and that robust findings rest on converging evidence rather than citation counts.
Lee Jussim
2025.06.27
100% relevant
The article notes that Moss‑Racusin (2012) drew 4,505 citations and elite endorsements, yet a stronger replication found the opposite effect.
2025.05.25
70% relevant
Francesca Gino was a highly cited, celebrated scholar whose work is now alleged to include manipulated data; Harvard's decision to act against her underscores that prestige and citation counts are not proxies for validity.
2025.01.07
82% relevant
Huebner’s 2005 paper is highly cited (≈152 citations), yet this replication shows its "innovation decline" result depends on a single history book’s bias; using broader databases of notable figures reverses the trend, illustrating that attention and citations can anchor policy narratives on fragile evidence.