Large‑model syntheses (e.g., GPT‑5.2) can rapidly compress the scholarship on contentious issues like low‑skilled immigration into an easily shareable, nuanced verdict (national welfare ≈ neutral/weakly positive; localised losers exist). That lowers the friction for evidence‑based framing, but it also concentrates epistemic authority in model outputs unless provenance and robustness checks are demanded.
— If policymakers and journalists begin citing AI syntheses as standalone evidence, public discourse will shift toward model‑mediated summaries, creating opportunities for faster, better‑informed debate but also risks from unvetted or decontextualised model outputs.
Tyler Cowen
2026.01.11
100% relevant
Tyler Cowen asked GPT‑5.2 Pro for a welfare synthesis of low‑skilled immigration to the UK and posted the model’s balanced summary (national effects modest/near‑zero; distributional and local harms depend on spillovers).