Frontier AIs now produce sophisticated results from vague prompts with little or no visible reasoning, shifting users from collaborators to auditors. In tests, GPT‑5 Pro not only critiqued methods but executed new analyses and found a subtle error in a published paper, while tools like NotebookLM generated fact‑accurate video summaries without exposing their selection process.
— If AI outputs are powerful yet opaque, institutions need verification workflows, provenance standards, and responsibility rules for AI‑authored analysis.
Arnold Kling
2025.09.20
90% relevant
The piece quotes Ethan Mollick's observation that AI now delivers sophisticated outputs from vague prompts through opaque processes, shifting users from collaborators to ‘supplicants who receive the output,’ and gives a Replit example that directly echoes the ‘wizard’ framing.
Ethan Mollick
2025.09.11
100% relevant
Mollick gave GPT‑5 Pro his job‑market paper; in roughly 10 minutes it ran code and Monte Carlo checks and uncovered a small cross‑table error. NotebookLM produced an accurate video overview from his book and posts without explaining how it chose what to include.