Economist John Cochrane tested the startup tool 'Refine' and Claude (an LLM) on a draft booklet and received critique comments comparable to those of top human referees, plus runnable Matlab code to update his graphs. That anecdote foregrounds a near‑term capability: generative tools can reliably perform peer‑review style critique and some reproducible research tasks.
If AI reliably produces referee‑quality reviews and reproducible code, then academic publishing, tenure, and research funding norms will need to be rethought: who counts as an expert, how credit is assigned, and which startups are worth backing.
Michael Inzlicht
2026.03.04
70% relevant
The article reports a presenter delivering a talk using slides that were 100% AI‑generated, a direct example of AI moving from a backend tool into the visible apparatus of academic judgment and presentation. This connects to debates about AI's role in evaluating, synthesizing, and representing research in scholarly venues.
Arnold Kling
2026.02.25
100% relevant
Cochrane's on‑record trial of Refine and Claude Opus 4.6 produced organized referee comments and Matlab code; he and the toolmakers (López and Golub) are the concrete actors cited.