LLM Persuasion Hits a Plateau

Updated: 2025.08.19
A new computer science paper reportedly finds that as large language models are trained on more text, their ability to persuade does not keep rising but levels off. This challenges claims that sheer scale will produce 'superpersuasion' capable of mass manipulation. If persuasion does not scale with data, AI-doomer narratives and regulatory priorities focused on manipulative LLMs may need recalibration toward concrete, bounded risks.

Sources

Bullshit Links - August 2025
David Pinsof 2025.08.19 100% relevant
Pinsof cites 'a new computer science paper' showing that persuasive ability plateaus as LLMs are fed more text.
Links for 2025-07-22
Alexander Kruel 2025.07.22 80% relevant
The roundup links an arXiv paper reporting that persuasiveness rises by about +1.6 percentage points per order of magnitude of model scale (with a further +3.5 pp from post-training), and that factual accuracy falls as persuasion rises. This directly contradicts the 'plateau' claim above and adds a new safety-relevant tradeoff.