Americans’ acceptance of AI depends on what it’s used for: people are likely to react differently to AI in political speeches than in entertainment like songs. This suggests disclosure carries a context‑dependent trust penalty that institutions will have to manage.
— If trust drops more for civic content than for entertainment, then labeling rules and the policies of campaigns, governments, and newsrooms must adapt to domain‑specific expectations.
Tyler Cowen
2025.10.16
63% relevant
Cowen critiques blanket AI‑labeling mandates (e.g., California’s) and highlights the practical and constitutional risks of forcing disclosure as AI permeates all content. This complements the existing idea that the trust penalty from disclosure depends on context and that policy must be domain‑sensitive.
msmash
2025.10.09
66% relevant
Jim Lee said audiences 'recoil from what feels fake' and that DC will not use AI for storytelling or art, betting that entertainment consumers penalize AI‑made content and reward authenticity. This aligns with evidence that trust penalties depend on context.
msmash
2025.09.18
88% relevant
The Pew findings show Americans are open to AI for data‑heavy analysis (weather, medicines) but not for personal matters (religion, matchmaking), directly echoing the domain‑contingent acceptance pattern in this idea.
msmash
2025.09.17
84% relevant
Business Insider’s reported policy of letting staff use AI for first drafts without notifying readers bears directly on the domain‑specific 'disclosure penalty': if news audiences penalize AI use more than entertainment audiences, journalism has an incentive not to disclose.
Reem Nadeem
2025.09.17
100% relevant
Pew’s appendix framing, 'From political speeches to songs, how would Americans react if they found out AI was involved?', indicates that reactions were measured separately by content type.
Reem Nadeem
2025.09.17
95% relevant
The Pew piece explicitly asks how Americans would react if AI were involved in different contexts—'from political speeches to songs'—directly supporting the claim that acceptance of AI is domain‑dependent and that disclosure carries different trust penalties by use case.
Reem Nadeem
2025.09.17
95% relevant
Pew Research Center’s new survey explicitly asks how Americans would react upon learning AI was involved in different content types (political speeches versus songs), providing empirical backing for the claim that disclosure carries a context‑dependent trust penalty.
Reem Nadeem
2025.09.17
92% relevant
Pew Research Center examines how Americans would respond upon learning AI was used in domains ranging from political speeches to music, directly testing the claim that acceptance of AI is context‑dependent and that disclosure carries different trust penalties in civic versus entertainment settings.
Reem Nadeem
2025.09.17
90% relevant
The article explicitly contrasts reactions to AI in political speeches versus songs, directly aligning with the idea that public acceptance of AI depends on context and carries different trust penalties across domains.