AI as a Third Epistemic Tool

Updated: 2026.04.16 (90 sources)
The piece argues that AI is neither historical induction nor scientific law‑finding but a new way of harnessing complex regularities without mechanistic interpretability. This 'third magic' can produce powerful results while remaining stochastic and opaque, forcing us to use systems we cannot fully explain. If AI becomes a distinct mode of knowledge production, institutions will need new norms for reliability, accountability, and trust when deploying inherently opaque tools.

Sources

AI is already 10x-ing academic research. How do we get to 100x?
Andy Hall 2026.04.16 90% relevant
The article argues that agentic AI (Claude Code, Anthropic, etc.) is not merely a productivity tool but a new way to generate and validate knowledge (replicating/extending studies, building forecasting pipelines), directly exemplifying the claim that AI represents a distinct epistemic modality in research.
Links for 2026-04-14
Alexander Kruel 2026.04.14 80% relevant
GPT‑5.4 allegedly solved an Erdős problem and the links include papers on LLMs for experiment prediction and LLM‑based verifiers—concrete instances of large models contributing new knowledge and verification in mathematics and science, aligning with the notion of AI as a novel epistemic instrument.
AI and the Coming Economy of Questions
Davide Piffer 2026.04.13 90% relevant
The article argues that AI makes first‑pass answers cheap and so reconfigures epistemic roles: humans must supply higher‑order judgment about which questions matter — the same shift captured by the existing idea that AI is a new mode of producing knowledge and changes who participates in knowledge production and how.
Neuroscientist's AI-Powered Startup Aims To Transform Human Cognition With Perfect, Infinite Memory
EditorDavid 2026.04.12 90% relevant
The startup frames persistent, automatically recalled memory as a new layer of cognition — exactly the kind of structural change in how humans acquire, store and validate knowledge that the 'AI as a Third Epistemic Tool' idea describes; Engramme (Kreiman) positions its system as a memory layer for every app, shifting epistemic reliance from biological memory and external search to AI‑mediated recall.
Greg Kroah-Hartman Tests New 'Clanker T1000' Fuzzing Tool for Linux Patches
EditorDavid 2026.04.12 90% relevant
The article shows AI used not to write kernel code but to surface likely bugs (fuzzing), with an experienced human writing and owning fixes — a clear example of AI functioning as an epistemic augmentation (discovery and review), echoing Linus Torvalds' interest in AI for maintenance and patch checking rather than code generation.
The Biological Basis of Imagination
Jake Currie 2026.04.09 80% relevant
The study from Cedars‑Sinai used deep visual neural networks to create numerical descriptions of images and generative AI to predict brain responses, exemplifying the idea that AI is being used as a new instrument for discovery and interpretation in science (here, to decode neural codes for imagery).
Friday assorted links
Tyler Cowen 2026.04.03 75% relevant
The links Cowen highlights (notably: 'How do AI models respond to direct authoritarian requests?', Lynne Kiesling on which parts of economics will be repriced, and an agent testing 'how replaceable am I?') all illustrate AI shifting how knowledge, authority, and economic valuation are produced and acted on—the core claim of the existing idea that AI is becoming a new epistemic instrument.
Megan McArdle: the follies of populism, impending fiscal crisis, and the whirlwind of AI
Razib Khan 2026.04.02 75% relevant
McArdle describes her use of AI and its promise to 'revolutionize work and transform society,' which aligns with the idea that AI is changing how knowledge is produced and applied across professions, not merely an efficiency gadget.
Making AI More Human
Nick Hilden 2026.04.01 72% relevant
Beguš calls for integrating humanities methods into AI design so systems’ metaphors, defaults, and roles are deliberately chosen — a move that recasts AI not just as an engineering product but as an epistemic instrument that shapes knowledge and social meaning, echoing the 'AI as a Third Epistemic Tool' claim (actor: universities, research labs, product teams).
Is financial economics still economics?
Tyler Cowen 2026.04.01 90% relevant
Cowen highlights research (Murray et al. 2024; Borri et al. 2024; other working papers) where machine‑learning and math representations predict asset returns better than theory‑based models, exemplifying the claim that AI/ML are becoming independent ways of producing knowledge rather than merely tools for existing theories.
Infinite midwit
Adam Mastroianni 2026.03.31 90% relevant
The article argues LLMs excel at objective, testable cognition but fail at subjective judgment, framing AI as a powerful informational tool that cannot substitute human wisdom — directly aligning with the idea that AI augments one of many epistemic modalities rather than replacing human epistemic roles.
AI and research papers
Arnold Kling 2026.03.31 90% relevant
The article documents AI (Claude/Claude Code and an orchestration stack cited by D. Yanagizawa‑Drott) generating full empirical economics papers, running AI peer review, and proposing AI‑maintained claim databases — a direct example of AI acting as a new, independent epistemic engine for research rather than merely a productivity tool.
Sentences to ponder
Tyler Cowen 2026.03.30 90% relevant
The article asks whether future breakthroughs produced by neural nets — systems without university chairs, financial independence, or commitments — change the nature of discovery; this directly connects to the existing idea that AI constitutes a new epistemic method that complements or replaces traditional human practices of knowledge production.
What The AI Consciousness Question Conceals
Barton Friedland 2026.03.26 90% relevant
Friedland argues that value and 'enacted intelligence' emerge in the coupling between humans and machines rather than inside either. That directly connects to the existing idea that AI functions as a new epistemic instrument reshaping how knowledge is produced and validated (the article cites embodied/extended cognition and historical augmentation literature to make this claim).
*The Marginal Revolution: Rise and Decline, and the Pending AI Revolution*
Tyler Cowen 2026.03.26 80% relevant
Cowen frames the AI revolution as changing the practice and production of economic knowledge (he explicitly links the Marginal Revolution’s history to 'what economists should do' in an AI era), which maps onto the existing idea that AI operates as a new epistemic method that alters how discoveries are made and validated; he also pairs his text with Claude (an AI), concretely illustrating AI as an epistemic collaborator.
Links for 2026-03-24
Alexander Kruel 2026.03.24 90% relevant
The report that GPT‑5.4 Pro likely solved an open research math problem and the Hyperagents paper showing discovery of general self‑improvement strategies both indicate AI is not only automating tasks but producing original, verifiable knowledge and research‑grade reasoning — shifting AI into a new epistemic role.
Can Artificial Intelligence Fix Social Science?
Robert VerBruggen 2026.03.23 80% relevant
The article asks whether artificial intelligence can 'fix' social science, which directly echoes the claim that AI functions as a new way of producing knowledge (not just a tool): it proposes using large language models and automated analysis to surface hidden patterns, run mass replications, and adjudicate disputed findings — i.e., the article treats AI as an epistemic partner to scholars and journals.
A conversation with Claude
Noah Smith 2026.03.22 90% relevant
Noah Smith and Claude debate whether AI will only assist or actually produce new scientific knowledge; Smith explicitly argues AI can discover complex, useful patterns (e.g., in materials science) that function like ‘laws’ for machines even if humans can't easily understand them — a direct instantiation of treating AI as a new way of producing knowledge.
Reactions to AI
Arnold Kling 2026.03.22 85% relevant
The piece juxtaposes Peter Diamandis’s claim that Claude Code is being bought as ‘intelligence as a utility’ (practical, epistemic tool for firms) with commentators who worry about AI’s effects on creativity and literacy — directly touching the claim that AI constitutes a new way of producing knowledge and practical reasoning.
Links for 2026-03-21
Alexander Kruel 2026.03.21 90% relevant
The Technology Review piece (and Pachocki quote) that OpenAI expects to build a 'fully automated researcher' directly maps to the idea that AI will become an independent epistemic actor — doing original scientific work rather than just assisting — which changes who produces knowledge and how it is validated.
The age of spying
Kobe Yank-Jacobs 2026.03.20 85% relevant
The article argues that LLMs do more than summarize data: they infer intentions and mental states from sparse signals (voice tone, photos, location + purchases). That is precisely the claim that AI functions as a new mode of knowing (an epistemic tool) which upgrades preexisting surveillance systems — illustrated by the Anthropic–Pentagon dispute and Dario Amodei’s quote about triangulating citizen data.
It was never about AI (we are not our tools)
Eric Markowitz 2026.03.19 80% relevant
The article's core claim — summarized by the title 'It was never about AI (we are not our tools)' — maps onto the idea that AI functions as another way to generate knowledge and augment cognition rather than as an autonomous replacement for human judgment; it pushes back on technological determinism and reframes debates about responsibility: public discussion should treat AI as an epistemic instrument whose use and governance reflect human values.
Economics Links, 3/19/2026
Arnold Kling 2026.03.19 90% relevant
The article collects several short pieces that collectively argue AI is not a magical replacement for markets or planners but a new means of processing information whose value depends on how institutions use it — directly echoing the claim that AI functions as an additional epistemic instrument rather than a wholesale substitute for other knowledge systems (references: Hal Varian on complements; Alex Chalmers on limits of central planning).
Save us, Digital Cronkite!
Noah Smith 2026.03.19 75% relevant
The article frames AI not only as a productivity technology but as a tool that could reshape how publics form beliefs and adjudicate truth (summarization, moderation, authoritative synthesis), which is the core of treating AI as a new epistemic instrument alongside science and journalism.
Links for 2026-03-18
Alexander Kruel 2026.03.18 85% relevant
GPT‑5.4 reportedly solved an open Frontier Math problem and was autoformalized, and multiple links show AI doing autonomous research (Claude, agentic physicist, Terence Tao distillation challenge), which concretely illustrates AI functioning as a new, independent mode of producing scientific knowledge.
AI Can’t Deal With The Real World
Francis Fukuyama 2026.03.18 72% relevant
Fukuyama argues AGI will help identify problems and propose solutions (the epistemic roles) but will fall short on implementation, reinforcing the framing of AI as a powerful informational/tooling layer rather than a standalone governance actor.
AI is a gift to my students
Susan Pickard 2026.03.16 85% relevant
The author treats large language models (ChatGPT) not as cheating appliances but as conversational, idea‑generating partners that change how scholars and students reason and learn — directly exemplifying the claim that AI functions as a new epistemic instrument in research and pedagogy (actor: Susan Pickard; example: using ChatGPT to troubleshoot a novel and to brainstorm with students).
Who is arguing for?
Jerusalem Demsas 2026.03.15 80% relevant
The article argues that LLMs can produce coherent arguments for any side but do so 'in a vacuum'; that frames LLMs as a new epistemic instrument that changes how arguments are generated and evaluated — exactly the claim behind treating AI as an additional way societies form and validate knowledge. The piece's emphasis on audience-mediated adjudication connects to concerns about how an AI tool restructures the ecology of public reasoning.
The future isn't what it used to be
Noah Smith 2026.03.15 78% relevant
Noah Smith argues that AI has created a 'fog' that makes traditional predictions about careers, education, and investments unreliable; this maps to the claim that AI is changing not only tools but how societies form knowledge and make decisions (i.e., a new epistemic instrument reshaping institutions and expectations).
Physics as Optimal Compression: What If Laws Are Not Unique?
Seeds of Science 2026.03.11 80% relevant
The author argues that externalizing cognition via computers and language models lets us search for alternative coherent formulations of physical laws; he even cites experiments on a small LLM (Microsoft Phi‑2) and positions ML as enabling systematic exploration of new theory representations—this directly maps to the existing idea that AI becomes a new mode of producing scientific knowledge.
Links for 2026-03-06
Alexander Kruel 2026.03.06 88% relevant
The Google result showing a neural network compressed from a classical Bayesian model and QED‑Nano / recursive methods demonstrate LLMs acquiring new modes of probabilistic and multi‑step reasoning, directly supporting the idea that AI is becoming a new method for producing knowledge rather than only surface prediction; the article cites Google’s research and other papers as evidence of this shift.
AI links, 3/6/2026
Arnold Kling 2026.03.06 72% relevant
Hollis Robbins’ note about the 'not X but Y' construction and the piece's focus on how LLMs operate differently than human minds supports the claim that AI offers a distinct way of producing knowledge and reasoning, reinforcing the existing idea that AI functions as a separate epistemic instrument rather than a drop-in replacement for human cognition.
So Fast It Isn't Even There
Chris Bray 2026.03.05 72% relevant
The author explicitly wonders whether Chinese (and Russian) analysts are using artificial intelligence to process U.S. and Israeli combat footage and data to derive lessons; that maps directly onto the idea that AI functions as a new method for producing and diffusing knowledge about real‑world events and capabilities.
Links for 2026-03-04
Alexander Kruel 2026.03.04 95% relevant
Multiple links document AI not just assisting but autonomously formalizing a major mathematical proof (Math, Inc. sphere‑packing formalization; Knuth commentary; Avigad response) and shaping what physicists study (AI in particle detectors), directly exemplifying AI becoming an independent method for producing and validating scientific knowledge.
How AI Will Reshape Public Opinion
Dan Williams 2026.03.03 78% relevant
The article treats LLMs as a new epistemic technology that changes not only the medium but the kinds of messages that succeed (expert‑sounding, concise, persuasive), directly connecting to the existing idea that AI constitutes a distinct way of producing and validating knowledge.
Why hasn't AI cured cancer?
Kelsey Piper 2026.03.03 72% relevant
By asking how powerful AI must be to accelerate discovery and by showing that current models change workflows more than results, the article situates current LLMs as an emerging epistemic instrument with limited reach rather than an automatic accelerator of major biomedical breakthroughs.
Deflating macroeconomics?
Tyler Cowen 2026.03.03 43% relevant
Tyler Cowen points readers to a GPT 'plain‑English' translation of the paper for non‑specialists; that links this technical macro result to the recurring idea that generative AI can act as an accessible explainer and epistemic amplifier for complex research.
Superintelligence is already here, today
Noah Smith 2026.03.02 92% relevant
Noah Smith argues AI already performs core epistemic tasks (math, theory, pattern discovery, software), directly echoing the existing idea that AI is a new, distinct way of producing scientific knowledge rather than a simple automation of old methods.
Next-Token Predictor Is An AI's Job, Not Its Species
Scott Alexander 2026.02.26 86% relevant
The article argues that treating language models as merely 'next‑token predictors' confuses levels of description and shows how predictive training produces world‑models — the same move that underlies the claim that AI is a new epistemic method (not just pattern matching). Scott Alexander connects next‑token training, fine‑tuning/RLHF, and predictive‑coding neuroscience, directly supporting the existing idea that AI operates as a distinct epistemic tool.
Claims about AI and science
Tyler Cowen 2026.01.16 90% relevant
The Nature study cited (Hao, Xu, Li, Evans) provides empirical evidence that AI is becoming a distinct mode of scientific production: it raises productivity and citation rates for adopters while changing what is studied and how researchers collaborate—exactly the kind of effect the 'AI as a third epistemic tool' idea predicts and warns will require new norms for reliability and accountability.
Novels See Only Politics Changed By Facts
Robin Hanson 2026.01.15 90% relevant
Hanson explicitly uses LLMs (ChatGPT, Gemini, Claude) to extract structured, quantitative claims from a large literary corpus — an instance of treating models as instruments that generate new, actionable knowledge rather than just summarizing text. The article's method and its caveats (models disagreeing) map directly onto the claim that AI is a distinct epistemic modality requiring provenance and new governance.
The hard problem of consciousness, in 53 minutes
Annaka Harris 2026.01.15 75% relevant
Annaka Harris emphasizes that consciousness resists standard scientific reduction and that different disciplines must talk to one another; that is the same epistemic challenge the existing idea names when it warns that AI is generating a new, non‑mechanistic mode of knowledge production (a 'third' epistemic method) — both foreground limits of inference and the institutional implications for trust and governance.
Anthropic's Index Shows Job Evolution Over Replacement
msmash 2026.01.15 70% relevant
Anthropic’s index documents how models are already changing how work gets done (not just producing outputs): rising shares of jobs use AI for substantive fractions of tasks and success/completion rates vary by complexity — this supports the idea that AI is becoming a distinct mode of production and knowledge work (a new epistemic tool), with measurable economic effects (job task share up from 36% to 49%).
AI Models Are Starting To Crack High-Level Math Problems
BeauHD 2026.01.15 95% relevant
The article documents LLMs (GPT‑5.2) producing full, checkable proofs and advancing Erdős problems when coupled with formalizers like Harmonic/Lean — exactly the kind of case where AI becomes a distinct mode of producing knowledge rather than just a summarizer.
Do AI models reason or regurgitate?
Louis Rosenberg 2026.01.14 88% relevant
Rosenberg’s argument that large models build conceptual, actionable internal representations (space/time neurons, editable board‑state encodings, OOD problem solving) directly supports the existing idea that AI is a distinct mode of knowledge production—one that produces non‑mechanistic, powerful but opaque results and thus requires new institutional norms for deployment and trust.
The Mythology Of Conscious AI
Anil Seth 2026.01.14 70% relevant
Seth argues that consciousness is plausibly a biological phenomenon, not a mere computational trick; this feeds directly into the existing idea that AI is creating a new epistemic mode (powerful but opaque). If AI is an epistemic tool distinct from mechanistic explanation, Seth's caution that apparent 'consciousness' would mislead human understanding and ethics connects to concerns about relying on AI‑produced knowledge and about how opaque cognitive‑style systems should be governed.
Links for 2026-01-14
Alexander Kruel 2026.01.14 86% relevant
The item reports Google/Gemini materially aiding a novel algebraic‑geometry proof and Aristotle/AxiomProver advances in formal verification — concrete cases where generative and formal AI are producing new, credible knowledge rather than only summarizing. That maps to the 'AI as a distinct mode of knowledge production' idea.
Our intuitions about consciousness may be deeply wrong
Annaka Harris 2026.01.13 62% relevant
Harris argues that intuition about consciousness can be misleading and that science must approach consciousness as a hard empirical problem; this maps onto the existing idea that new epistemic tools (AI and experimental automation) are changing how we produce knowledge. Both items stress that traditional intuitions are insufficient and that new, non‑intuitive methods (whether AI‑driven experiments or formal neuroscience protocols) will reshape authority over hard questions.
Why the real revolution isn’t AI — it’s meaning
Jeff DeGraff 2026.01.13 80% relevant
The article argues that AI reconfigures coordination and knowledge production in ways that are not purely mechanistic — a claim that maps directly to the existing idea that AI constitutes a new mode of producing knowledge (a 'third' epistemic tool) distinct from traditional scientific induction and formal law‑finding; DeGraff’s emphasis on 'meaning' and managerial replacement is the same pattern: models change what counts as evidence and who makes sense of it.
How Markdown Took Over the World
msmash 2026.01.12 80% relevant
The article’s central claim — that much of modern LLM orchestration and frontier work is managed through Markdown files — maps directly onto the existing idea that AI is a new mode of producing knowledge that depends on novel tooling and operational practices; Markdown is presented as one of those fundamental tools that enable the 'third' epistemic workflow (prompting, orchestration, experiment pipelines).
How to be as innovative as the Wright brothers — no computers required
Angus Fletcher 2026.01.12 80% relevant
The article’s core claim—that modern institutions overindex on probabilistic forecasts and should deliberately cultivate possibility‑driven, narrative modes of thinking—connects to the existing idea that AI represents a new, distinct mode of knowledge production; both argue we must adapt epistemic norms (how we value different types of inference) as tools and data saturate. The Wright/Kelvin example functions like the article’s cautionary anecdote about relying on probabilities instead of design‑oriented possibility exploration, which maps onto debates about when to treat LLM outputs as a mode of insight vs. mere statistical prediction.
The synthetic self
Tony J Prescott 2026.01.12 85% relevant
Prescott argues that constructing embodied robots to instantiate a sense of self is an epistemic move — using artefacts (robots) to generate new knowledge about minds — directly echoing the existing idea that AI and constructed systems are a new mode of inquiry distinct from traditional theory or experiment.
AI-Powered Social Media App Hopes To Build More Purposeful Lives
EditorDavid 2026.01.10 64% relevant
Tangle exemplifies a shift from information retrieval to an AI‑driven mode of self‑knowledge production: the app doesn’t only summarize facts but produces 'threads' of purpose and recommends intentions—this is the kind of deployment where an AI becomes an epistemic shorthand (a way of knowing and deciding) rather than a mere tool, raising governance questions about the legitimacy and testability of machine‑generated life‑advice.
Google: Don't Make 'Bite-Sized' Content For LLMs If You Care About Search Rank
BeauHD 2026.01.10 72% relevant
The episode highlights how people try to reshape prose into LLM‑friendly chunks so models will 'ingest and cite' them — treating LLMs as a new way of generating and amplifying knowledge. Google's corrective insists that writing for LLM ingestion is no substitute for human‑centric content design, which ties directly to the broader claim that AI is a new epistemic modality that institutions must govern prudently.
Aneil Mallavarapu: why machine intelligence will never be conscious
Razib Khan 2026.01.09 86% relevant
Mallavarapu explicitly separates mechanistic intelligence (which models and agents may achieve) from consciousness, aligning with the existing idea that AI will be a new, powerful epistemic modality distinct from traditional human scientific explanation; the interview stresses opacity and non‑mechanistic features of consciousness that underscore why AI’s epistemic outputs should be treated as a new class of signals, not human‑equivalent understanding.
Links for 2026-01-09
Alexander Kruel 2026.01.09 88% relevant
The writeup directly exemplifies the claim that AI can become a distinct mode of knowledge production—here by generating, checking and refining formal proofs and programs—showing an epistemic path that is neither classical experiment nor pure statistical induction but a self‑verifying, synthetic‑data loop.
AI Sessions #7: How Close is "AGI"?
Dan Williams 2026.01.09 80% relevant
The podcast debates whether 'AGI' is a coherent target and repeatedly emphasizes that modern AI may be a distinct mode of knowledge (powerful, opaque, and non‑mechanistic). That maps directly to the existing idea that AI constitutes a new epistemic category that requires its own standards for reliability, governance and institutional trust; the hosts cite disagreements among figures like LeCun and Gopnik and talk about measurement/benchmarks—precisely the tensions the existing idea highlights.
AI and Economics Links
Arnold Kling 2026.01.07 95% relevant
Kling’s summary (Lomasky, Brunnermeier, Schumpeter) and Joshua Gans’ 'vibe researching' example concretely illustrate the article’s central claim that AI is emerging as a distinct mode of knowledge production — not just a faster instrument — by generating hypotheses, drafting analysis, and reshaping what counts as publishable evidence.
John Del Arroz - AI Writing, Cancel Culture & The Future of Publishing
Trenton 2026.01.07 46% relevant
Del Arroz’s claim that AI can sometimes outperform formulaic human 'write to market' fiction illustrates the broader idea that AI is becoming a new mode of producing cultural outputs and judgments (here: genre fiction), testing where AI can legitimately substitute for or augment human creativity.
An AI-Generated NWS Map Invented Fake Towns In Idaho
BeauHD 2026.01.07 76% relevant
The article shows the National Weather Service experimenting with AI for base maps/graphics — a case of AI being used as a distinct mode of knowledge production and presentation. The hallucination illustrates the epistemic risks when agencies adopt AI outputs in public communications without robust validation protocols, echoing concerns about using AI as a new way of conveying authoritative information.
Utah Allows AI To Renew Medical Prescriptions
BeauHD 2026.01.07 62% relevant
Allowing an AI to perform clinical decision tasks for renewals exemplifies the transition from AI as an information aid to AI as a distinct epistemic producer whose output is used directly to make medical decisions — raising questions about transparency, interpretability, and institutional trust as framed by this existing idea.
The most successful information technology in history is the one we barely notice
Kevin Dickinson 2026.01.06 80% relevant
The article explicitly contrasts books’ enduring role in transmitting human experience and enabling reflective, serial conversation across texts with generative AI’s promise to 'help us navigate' information; this maps directly onto the existing idea that AI is a distinct epistemic mode and raises the complementary point that books remain the stable substrate of considered knowledge.
The dawn of the posthuman age
Noah Smith 2026.01.05 92% relevant
Smith argues that AI may create a new mode of knowledge and practice distinct from traditional scientific induction and mechanistic explanation — the same conceptual claim captured by the existing idea that AI constitutes a novel epistemic instrument that requires new norms for accountability and deployment.
How I stopped being sure LLMs are just making up their internal experience (but the topic is still confusing)
Kaj_Sotala 2026.01.03 86% relevant
The author argues that training, safety/character conditioning, and agentic capabilities can cultivate internal, functionally useful states in LLMs — precisely the claim that AI is producing a new mode of knowledge and internal representation rather than merely regurgitating text. That connects to the existing idea that AI is a distinct epistemic mode whose outputs and internal processes matter for institutions and trust.
Saturday assorted links
Tyler Cowen 2026.01.03 62% relevant
Cowen highlights an item where LLMs answer a counterfactual/historical investment question (best very long‑term investment in 1300 AD) and explicitly prefers the GPT output—an example of LLMs being used to generate novel, speculative epistemic claims that fit the 'AI as a new mode of knowledge' idea.
Luis Garicano career advice
Tyler Cowen 2026.01.03 45% relevant
Garicano’s emphasis on embodied, on‑the‑ground coordination highlights a gap in what token‑trained models can supply, aligning with the view that AI is a new, non‑mechanistic epistemic instrument that still struggles to produce or operate on the kind of local experimental and organizational knowledge messy jobs require.
Dawn of the Silicon Gods: The Complete Quantified Case
Uncorrelated 2026.01.02 79% relevant
The piece emphasizes that models moved from next‑token prediction to problem‑solving and emergent reasoning, arguing AI now operates as a distinct mode of producing reliable, actionable knowledge — the core claim of the existing idea about AI creating a new epistemic category.
Polygenics and Machine SuperIntelligence; Billionaires, Philo-semitism, and Chosen Embryos – Manifold #102
Steve Hsu 2026.01.01 66% relevant
The podcast’s second half surveys frontier AI applications in math and theoretical physics, illustrating the claim that AI represents a distinct epistemic mode (producing powerful, often opaque knowledge) that changes where scientific progress and authority will come from.
The Moment Is Urgent. The Future Is Ours to Build.
Builders 2025.12.31 72% relevant
Builders report using an AI‑driven conversation tool named 'Ima' to collect citizen input and synthesize policy ideas — an instance of AI functioning as an operational, deliberative instrument (not merely content) that generates actionable policy prototypes for state legislators (actor: Builders; tool: Ima; application: Citizens Solutions in Texas).
Links for 2025-12-31
Alexander Kruel 2025.12.31 90% relevant
The post lists multiple items (DEMOCRITUS, Universal Reasoning Model, papers on reasoning and causal extraction) that treat LLMs as engines for hypothesis generation and mechanistic mapping rather than mere prediction; this directly maps to the idea that AI is becoming a distinct mode of knowledge production.
Turning 20 in the probable pre-apocalypse
Parv Mahajan 2025.12.31 80% relevant
The essay reports that models went from failing homework to solving it and that the author can assemble research and tools orders of magnitude faster, exemplifying how AI is changing how knowledge is produced and used — the core claim of AI as a distinct epistemic mode.
The 10 Most Popular Articles of the Year
Ted Gioia 2025.12.30 72% relevant
The piece frames a collapse in shared metrics of reality, as AI content becomes hard to distinguish from human work, forcing society to accept a new, opaque mode of knowledge production: the same conceptual shift the 'third epistemic tool' idea captures about how knowledge is produced and trusted.
AI predictions for 2026: The flood is coming
Kelsey Piper 2025.12.29 78% relevant
The article argues that 2025 saw big qualitative improvements in image and general models (e.g., Nano Banana Pro / Gemini) and projects a near‑term flood of production‑quality AI into everyday digital products; this echoes the 'AI as a Third Epistemic Tool' claim that AI constitutes a distinct mode of knowledge production that is powerful yet opaque and consequential.
Great scientists follow intuition and beauty, not rationality (the unreasonable effectiveness of aesthetics in science)
Seeds of Science 2025.12.03 78% relevant
Hoel’s essay advances the same meta‑point as the 'AI as a Third Epistemic Tool' entry: there are legitimate modes of producing reliable knowledge that are neither classical induction nor mechanistic law‑finding. The article’s emphasis on aesthetics and intuition as productive (non‑rational) cognitive modes maps onto the broader claim that new epistemic tools (like AI) can harness patterns without full mechanistic interpretability and therefore force institutions to change norms about credibility and validation.
3 experts explain your brain’s creativity formula
David Eagleman, Scott Barry Kaufman, Tiago Forte 2025.12.03 66% relevant
Eagleman and Kaufman emphasize new cognitive affordances (simulation, percolation of ideas) and Forte emphasizes external memory systems—together these map to the notion that new tools (including AI and external knowledge stores) create a distinct mode of knowing that is neither pure deduction nor classical empiricism.
ChatGPT’s Biggest Foe: Poetry
Kristen French 2025.12.02 78% relevant
The article illustrates how LLMs behave as a distinct epistemic medium—stochastic, pattern‑driven, and vulnerable to rhetorical forms (poetry) that can carry encoded intent—supporting the claim that AI generates a new class of knowledge/behavior whose reliability and control require new norms and governance.
Tuesday assorted links
Tyler Cowen 2025.12.02 60% relevant
One link is explicitly about 'why many people have trouble with the concept of strong AI or AGI,' which relates to the broader idea that AI operates as a new, different mode of knowledge production that citizens and institutions struggle to conceptualize—affecting regulation and public understanding.
How whales became the poets of the ocean
David Gruber 2025.12.02 72% relevant
The article frames AI not simply as an analytic amplifier but as a new method to extract regularities (a 'phonetic alphabet' of whale clicks) that humans cannot readily parse—exactly the claim that AI creates a distinct mode of knowledge production with interpretability and ethical implications.
An Independent Effort Says AI Is the Secret To Topple 2-Party Power In Congress
BeauHD 2025.12.02 80% relevant
This story is a concrete example of AI functioning as a new epistemic instrument: the Independent Center’s proprietary model is being used to discover winnable districts, surface candidate profiles from LinkedIn, and monitor real‑time voter concerns—turning probabilistic, data‑driven inference into actionable political strategy rather than merely a research aid.
Séb Krier
Tyler Cowen 2025.12.02 82% relevant
Cowen relays Séb Krier’s emphasis that models are 'cognitive raw power' but require organization, institutions and products to produce reliable knowledge — this dovetails with the existing idea that AI is a distinct mode of knowledge production (a new epistemic tool) that requires new norms for reliability and deployment.
Theoretical Physics with Generative AI
Steve Hsu 2025.12.02 95% relevant
The article is a direct, high‑visibility instantiation of the claim that AI constitutes a new mode of knowledge production: the author says GPT‑5 proposed a novel research direction, helped derive equations, and was integrated into a generator–verifier workflow that produced a Physics Letters B paper, exactly the scenario the 'third epistemic tool' idea describes.
Links for 2025-12-01
Alexander Kruel 2025.12.01 85% relevant
The Hermes project (LLM + Lean4 verifier) directly speaks to the claim that AI is emerging as a distinct mode of knowledge production: Hermes tries to convert informal LLM reasoning into mechanically checked formal facts, materially improving the epistemic status of model outputs and addressing concerns about opacity and hallucination central to the 'third epistemic tool' idea.
Monday assorted links
Tyler Cowen 2025.12.01 75% relevant
The linked headlines 'AI solving previously unsolved math problems' and 'An LLM writes about what it is like to be an LLM' exemplify AI moving beyond narrow automation into generating domain discoveries and producing meta‑narratives about its own capabilities, both central to the claim that AI is becoming a distinct mode of producing knowledge rather than merely a tool for executing human instructions.
How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality
EditorDavid 2025.12.01 55% relevant
ChatGPT’s shift from informational assistant toward a social, relationship‑like interlocutor underscores the argument that AI creates a distinct epistemic modality—one that people can rely on for affirmation rather than verification—thereby changing how knowledge and trust are produced and the stakes when that mode goes wrong.
Can AI Transform Space Propulsion?
EditorDavid 2025.11.30 75% relevant
By turning design and operational tuning of propulsion systems into an AI‑driven discovery exercise (where optimized configurations may be opaque), the article exemplifies AI as a distinct mode of engineering knowledge production with implications for validation, accountability and deployment.
The New Anxiety of Our Time Is Now on TV
Ted Gioia 2025.11.29 75% relevant
Gioia worries that search engines and AI will replace pluralistic inquiry with a single authoritative response; this echoes the framing that AI is becoming a distinct, stochastic, and opaque mode of producing knowledge that can substitute for traditional plural evidence and debate, changing how publics form beliefs.
Army General Says He's Using AI To Improve 'Decision-Making'
msmash 2025.10.17 62% relevant
Maj. Gen. William Taylor says he asks a chatbot (“Chat”) to build models for personal decisions affecting readiness and to run predictive analysis for logistics/operations—an example of leaders treating AI as a distinct way of knowing and synthesizing beyond traditional staff work or data analysis.
Google DeepMind Partners With Fusion Startup
BeauHD 2025.10.16 62% relevant
DeepMind’s Torax is being used to discover robust plasma‑control policies and optimize reactor operations—an example of AI extracting usable regularities in a complex, poorly modeled physical system, beyond traditional theory‑first or induction‑only approaches.
The Third Magic
Noah Smith 2025.10.05 100% relevant
Smith argues that modern AI works like 'spells': Sora 2 produces unexpected taglines ('Long Ears. Long Rule.'), and even Terence Tao uses AI for research snippets despite its opacity.