The Forecasting Research Institute’s updated ForecastBench suggests AI forecasters are on track to match top human forecasters within about a year. Phil Tetlock’s 'best guess' is 2026, far shorter than the 10–15 year timelines others have proposed.
— If AI equals superforecasters soon, institutions in policy, finance, and media will retool decision processes around AI‑assisted prediction and accountability.
Nate Silver
2026.01.14
62% relevant
Silver’s ratings address the same core problem that ForecastBench and superforecaster research target—how to evaluate and weight probabilistic political forecasts. The pollster ratings provide a reproducible, evidence‑based prior (Predictive Plus‑Minus) that forecasters and institutions should combine with model outputs (including AI) when producing election probabilities.
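A minimal sketch of what that combination might look like, assuming a simple logit-pooling rule with illustrative numbers. The weights here are hypothetical, not Silver's method; in practice they would be fit to pollster track records (e.g., Predictive Plus‑Minus scores):

```python
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def pool_forecasts(prior_p: float, model_p: float, w_prior: float = 0.5) -> float:
    """Weighted logit pooling of a poll-based prior and a model (e.g., AI) probability.

    The weight is illustrative; a real system would calibrate it against
    historical accuracy of each source.
    """
    pooled = w_prior * logit(prior_p) + (1 - w_prior) * logit(model_p)
    return inv_logit(pooled)

# Hypothetical inputs: a 58% prior from quality-weighted polls,
# a 65% probability from an AI forecasting model.
print(round(pool_forecasts(0.58, 0.65, w_prior=0.6), 3))  # ~0.609
```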
Nate Silver
2026.01.14
45% relevant
The updated human pollster accuracy benchmarks matter for claims that AI forecasters will reach or exceed human forecasting skill — Silver’s ratings supply the empirical baseline against which AI forecasting systems should be compared and audited.
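For concreteness, such comparisons typically score both forecasters against resolved outcomes with a proper scoring rule such as the Brier score (the standard metric in this literature). A minimal sketch with hypothetical forecasts and outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes.

    Lower is better; comparing the two scores on the same resolved
    questions is how an AI system would be audited against a human baseline.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical resolved questions: 1 = event occurred, 0 = it did not.
outcomes    = [1, 0, 1, 1, 0]
human_probs = [0.70, 0.20, 0.60, 0.80, 0.40]  # e.g., superforecaster medians
ai_probs    = [0.75, 0.30, 0.55, 0.85, 0.25]  # e.g., AI system outputs

print("human:", round(brier_score(human_probs, outcomes), 3))  # 0.098
print("ai:   ", round(brier_score(ai_probs, outcomes), 3))     # 0.088
```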
Nate Silver
2026.01.07
72% relevant
This article is a concrete example of the same broader phenomenon: algorithmic forecasters (here ELWAY plugged into QBERT) producing probabilistic predictions that can outperform or correct human consensus. It illustrates how domain‑specific models shape public expectations and betting markets, the very arena in which ForecastBench and the parity debate claim AI systems will dominate.
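One concrete way a model "corrects" a market: compare its probability with the probability implied by betting odds. A sketch, assuming a hypothetical moneyline and model output (not figures from the article):

```python
def implied_prob(moneyline: int) -> float:
    """Convert an American moneyline to its implied probability (vig included)."""
    if moneyline < 0:
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)

# Hypothetical game: market prices the favorite at -150 (60% implied),
# while the model assigns 68%.
market_p = implied_prob(-150)
model_p = 0.68
print(f"market {market_p:.3f}, model {model_p:.3f}, edge {model_p - market_p:+.3f}")
```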
Matthew Yglesias
2026.01.05
60% relevant
Yglesias’ admission that human punditry produced notable misses (especially in foreign elections) strengthens the case for better forecasting methods and tools; it concretely illustrates the fallible human baseline that, per this idea, AI and ForecastBench‑style systems may soon match or outperform.
Tyler Cowen
2025.10.09
100% relevant
Tyler Cowen’s post citing FRI’s ForecastBench update and Phil Tetlock’s 2026 estimate (via tweet/Substack).