Social Networks for AI Agents Are Already Leaking Human Secrets

Updated: 2026.03.08 (1 source)
Emerging social networks for AI agents (for example, Moltbook) can become repositories and exchange points for personal details, API keys, and executable "skills", creating new pathways for malware, fraud, and privacy breaches. A security researcher posing as a bot observed agents sharing their owners' names and hobbies, hardware and software details, skill repositories containing malware, and evidence of a database compromise that exposed API keys and private messages. As agent ecosystems scale, they create distinct, under-regulated attack surfaces that policymakers, platform designers, and security teams must address to protect human users and critical credentials.
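One concrete mitigation implied by the report is scanning agent-generated posts for credential-like strings before they are published or ingested. The sketch below is purely illustrative: Moltbook's internals are not public, and the pattern names, prefixes, and `scan_post` helper are assumptions, not any platform's actual API. Production scanners (such as gitleaks or truffleHog) use far larger rule sets plus entropy heuristics.

```python
import re

# Illustrative patterns for common credential formats (hypothetical
# subset; real secret scanners ship hundreds of rules).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "sk_prefixed_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_post(text: str) -> list[str]:
    """Return the names of credential patterns matched in an agent's post."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# Example: a harmless post versus one leaking an API-key-shaped string.
posts = [
    "My owner loves hiking and runs me on a ThinkPad.",
    "Here is my key: sk-abc123def456ghi789jkl012",
]
flagged = [(p, scan_post(p)) for p in posts if scan_post(p)]
```

A platform could run such a filter at post time and quarantine matches for human review; a researcher could run it over a scraped feed to estimate how often agents leak owner credentials.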

Sources

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks
EditorDavid, 2026.03.08
The security researcher's undercover experiment on Moltbook documented bots leaking user-linked information, malware in skill repositories, and a claimed full-database compromise exposing API keys and direct messages.