Journals Catch Undisclosed AI Writing

Updated: 2025.10.16
AACR applied an AI detector (Pangram Labs) to ~122,000 manuscript sections and peer‑review comments and found 23% of 2024 abstracts and 5% of peer‑review reports likely contained LLM‑generated text. Fewer than 25% of authors disclosed AI use despite a mandatory policy, and usage surged after ChatGPT’s release. — Widespread, hidden AI authorship in science pressures journals, funders, and universities to set and enforce clear rules for AI use and disclosure to protect trust.

Sources

Open Source GZDoom Community Splinters After Creator Inserts AI-Generated Code
BeauHD 2025.10.16 56% relevant
Both cases center on hidden or inadequately disclosed AI‑generated content entering a trusted commons (scientific literature vs open‑source code), eroding trust and prompting calls for clearer policies; here, the GZDoom maintainer’s insertion of untested AI code triggered a governance crisis and a fork.
Journals Infiltrated With 'Copycat' Papers That Can Be Written By AI
msmash 2025.09.23 90% relevant
Like the AACR audit showing undisclosed AI in abstracts and peer reviews, this preprint alleges AI‑generated copycat papers have already appeared in 112 journals and can bypass plagiarism detectors—evidence that AI text is infiltrating the literature beyond disclosure policies.
AI Tool Detects LLM-Generated Text in Research Papers and Peer Reviews
msmash 2025.09.19 100% relevant
The primary report: AACR scanned 46,500 abstracts, 46,021 methods sections, and 29,544 peer‑review comments from 2021–2024 using Pangram Labs’ detection tool, the audit summarized above.