A three‑year AI risk window

Updated: 2026.01.16 · 18 sources
Yoshua Bengio argues policymakers should plan for catastrophic AI risk on a three‑year horizon, even if full‑blown systems might be 5–10 years away. He says the release race between vendors is the main obstacle to safety work and calls even a 1% extinction risk unacceptable. This compresses AI governance urgency into a near‑term planning window that could reshape regulation, standards, and investment timelines.

Sources

Friday assorted links
Tyler Cowen 2026.01.16 70% relevant
The post aggregates recent AI signals: an Anthropic economic index, notes on 'AI progress in the last fifteen days,' and frontier auditing, all of which reinforce the near‑term urgency Bengio advocates; Cowen's links point readers to the same compressed‑timeline governance problem that the 'three‑year' idea frames.
2025 in AI predictions
jessicata 2026.01.16 87% relevant
The LessWrong post compiles and evaluates concrete near‑term AGI and capability predictions (e.g., from Musk, Taelin, and Marcus) and finds systematic overestimation for 2025; that empirical pattern bears directly on the plausibility and policy salience of claims like a three‑year catastrophic risk window, and therefore reframes the urgency case for regulators.
Proving (literally) that ChatGPT isn't conscious
Erik Hoel 2026.01.15 72% relevant
Erik Hoel’s arXiv paper claims a proof that no non‑trivial falsifiable theory could ascribe consciousness to LLMs. That undercuts one axis (phenomenal consciousness and moral personhood) often folded into near‑term extinction or moral‑status arguments, and so challenges part of the risk framing that proponents like Yoshua Bengio urge on short time horizons.
Do AI models reason or regurgitate?
Louis Rosenberg 2026.01.14 68% relevant
The article’s claim that frontier systems already exhibit internal modelling and out‑of‑distribution reasoning strengthens near‑term urgency arguments, reinforcing the policy posture behind a compressed risk timeline such as Bengio’s three‑year warning.
Nvidia CEO Jensen Huang Says AI Doomerism Has 'Done a Lot of Damage'
EditorDavid 2026.01.11 66% relevant
The article records a prominent CEO pushing back on near‑term catastrophic framings (e.g., 'end of the world narrative'), which speaks to the public debate over compressed risk horizons; Huang’s stance is a counterpoint to calls for ultra‑urgent regulatory action on the immediate timeline.
Measuring no CoT math time horizon (single forward pass)
ryan_greenblatt 2026.01.09 65% relevant
The author reports an empirical short‑term trend (no‑CoT reliability doubling every ~9 months, with a current 50% horizon of about 3.5 minutes) that concretely tightens capability‑growth estimates and feeds the same near‑term governance urgency as a multi‑year risk window.
How AI is making us think short-term
Eric Markowitz 2026.01.08 78% relevant
Dan Wang’s argument that AI discussion collapses into near‑term, apocalyptic timelines maps onto the idea of compressed, urgent AI governance planning on a near‑term risk horizon. Both emphasize how vendor race dynamics and rapid release incentives shorten political and institutional timeframes; Wang’s contrast with China’s longer industrial embedding of AI connects directly to the policy urgency captured by the three‑year window framing.
Claude Code and What Comes Next
Ethan Mollick 2026.01.07 72% relevant
The author cites METR’s task‑length metric and reports recent exponential gains in autonomous task capability over weeks and months, empirical signals that compress timelines for impactful deployments and risks and support the argument that policy urgency has shifted into a near‑term window.
Stratechery Pushes Back on AI Capital Dystopia Predictions
msmash 2026.01.06 75% relevant
Thompson engages the same risk/time tradeoffs central to the three‑year urgency framing: while he disputes the inevitability of a capital‑hoarding outcome, he explicitly flags uncontrollability (existential) scenarios as more plausible — a near‑term governance risk signal that echoes the urgent planning this idea demands.
We’re Getting Frog-Boiled by AI (with Kelsey Piper)
Jerusalem Demsas 2026.01.06 86% relevant
The podcast argues that urgency and the race dynamic compress political will and slow rulemaking, the same practical compression Yoshua Bengio frames as a short, high‑leverage window for catastrophic planning; the episode is an applied case of that timing argument.
What the superforecasters are predicting in 2026
James Newport 2026.01.06 72% relevant
The superforecasters’ focus on near‑term capability jumps and benchmark thresholds (e.g., models exceeding key intelligence metrics in 2026) echoes arguments that catastrophic or transformative AI outcomes are plausibly near‑term and that policy should treat the next few years as critical.
The wisdom of Garett Jones
Tyler Cowen 2026.01.05 45% relevant
The article engages the AGI thought‑experiment strand (how radically capable AI could reconfigure economic primitives); it connects to the urgency framing of near‑term AGI scenarios even though Cowen presents it as a reductio rather than a prediction.
Dawn of the Silicon Gods: The Complete Quantified Case
Uncorrelated 2026.01.02 90% relevant
The article's central quantitative claims (METR task horizons doubling every ~4 months, rapid benchmark gains, and steep adoption curves) directly support the premise that risk timelines are much shorter than many expect, and therefore align with calls to plan for catastrophic or high‑impact AI scenarios on a near, multi‑year horizon.
2025: A Reckoning
Paul Bloom 2025.12.31 80% relevant
The episode explicitly discusses catastrophic AI risk ('The "Black Swan": When AI starts killing people' at ~52:46) and near‑term urgency around vendor races, mirroring the compressed governance timeline argued in the three‑year risk idea; the podcast functions as elite transmission of that compressed‑horizon framing.
Turning 20 in the probable pre-apocalypse
Parv Mahajan 2025.12.31 75% relevant
The writer conveys compressed time horizons and an acute sense that existential or catastrophic change could arrive imminently ('if there are a few years left'), mirroring and humanizing the near‑term catastrophic‑risk urgency that the three‑year window asks policymakers to plan for.
OpenAI Declares 'Code Red' As Google Catches Up In AI Race
BeauHD 2025.12.03 60% relevant
Both the memo’s urgency (a 'daily call', transfers, paused products) and the press framing compress timelines for competitive escalation and operational risk; this mirrors the idea that AI governance and contingency planning must accelerate into a near‑term window.
Two Former US Congressmen Announce Fundraising for Candidates Supporting AI Regulation
EditorDavid 2025.12.01 60% relevant
The bipartisan launch of sizeable super‑PAC funding to back pro‑regulation candidates concretely operationalizes the near‑term governance urgency that figures like Yoshua Bengio have urged: political investment now to shape regulation before capability lock‑in occurs.
A 'Godfather of AI' Remains Concerned as Ever About Human Extinction
msmash 2025.10.01 100% relevant
Bengio told the Wall Street Journal to "treat three years as the relevant timeframe" and warned that weekly version races block adequate safety work.