New CS evidence finds that LLM persuasion performance saturates as pretraining scales, undercutting claims that more data alone yields mass-manipulation capability.
— If ‘superpersuasion’ doesn’t scale with compute/data, AI governance and election-integrity debates must recalibrate their risk models, shifting attention from raw scaling to deployment context, targeting, and platform design.
David Pinsof
2025.08.19
100% relevant
The article cites a paper showing that LLM persuasive ability plateaus as training text increases, challenging doomer narratives in which scaling laws enable takeover via persuasion.
Alexander Kruel
2025.07.22
80% relevant
The cited arXiv paper reports persuasion rising with model scale (+1.6 percentage points per order of magnitude) and even more with post‑training (+3.5pp), while factual accuracy drops. This directly informs the debate over whether persuasion saturates, and highlights a policy‑salient persuasion–truth tradeoff for elections and moderation.
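As a rough illustration (an assumption on my part; the source summary does not state the paper's functional form): if the reported +1.6pp-per-OOM trend is treated as log-linear in model scale, the implied persuasion gain from scaling by a factor $k$ is

$$\Delta P \approx 1.6\,\text{pp} \times \log_{10} k$$

On that reading, a 100× scale-up (two orders of magnitude) would add roughly 3.2pp, still less than the +3.5pp the summary attributes to post-training alone, which is why the two summaries above can disagree on "saturation" while citing the same numbers.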