Compute‑Productivity Scaling Law

Updated: 2026.01.13
Measure and model how increases in LLM training compute map to real‑world professional productivity (e.g., percent reduction in task time) using preregistered, role‑specific experiments. Early evidence suggests roughly an 8% reduction in task time per year of model progress, with compute accounting for a majority of the measurable gains and agentic/tooled workflows lagging behind. If robust, a compute→productivity scaling law would anchor macro forecasts, labor policy, and industrial strategy, turning abstract model progress into quantifiable economic expectations and regulatory triggers.
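As a concrete illustration of what "measure and model" could mean here, the sketch below fits a simple power-law relationship between training compute and measured task time. The file name, column names, and log-log functional form are assumptions chosen for illustration, not details taken from the source.

```python
import numpy as np
import pandas as pd

# Hypothetical dataset: one row per preregistered task observation, with the
# training compute (FLOP) of the model used and the measured completion time.
df = pd.read_csv("task_time_experiments.csv")  # assumed file and columns

log_compute = np.log10(df["train_compute_flop"].to_numpy())
log_time = np.log(df["task_minutes"].to_numpy())

# Fit log(task_time) = a + b * log10(compute); b < 0 means more compute
# shortens tasks. The power-law form is an assumption, not the study's model.
b, a = np.polyfit(log_compute, log_time, 1)

print(f"slope b (per 10x compute): {b:.3f}")
print(f"implied task-time multiplier per 10x compute: {np.exp(b):.2f}")
```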

Sources

Claims about AI productivity improvements
Tyler Cowen, 2026.01.13
A preregistered experiment by Yale’s Ali Merali with 500+ consultants and data analysts found an 8% per‑year reduction in task time tied to model progress, decomposed the gains into 56% compute vs. 44% algorithmic progress, and projected roughly a 20% U.S. productivity upside over a decade.
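For orientation, a minimal sketch of the compounding arithmetic behind these figures, using only the numbers quoted above; it does not attempt to reproduce the ~20% economy-wide projection, which also depends on how much work is actually exposed to the gains.

```python
# Illustrative arithmetic check of the cited figures (not from the source).
annual_reduction = 0.08   # 8% task-time reduction per year of model progress
compute_share = 0.56      # share of the gain attributed to compute
algo_share = 0.44         # share attributed to algorithmic progress
years = 10

# Compounded over a decade, exposed tasks would take ~43% of their original time.
time_remaining = (1 - annual_reduction) ** years
print(f"task time remaining after {years} years: {time_remaining:.1%}")

# Per-year reduction split by attributed source.
print(f"compute-attributed: {annual_reduction * compute_share:.1%} per year")
print(f"algorithmic-attributed: {annual_reduction * algo_share:.1%} per year")
```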