Proof‑assistant‑verified LLMs

Updated: 2025.12.01
Wrap large language models with proof assistants (e.g., Lean 4) so that model-proposed reasoning steps are autoformalized and mechanically proved before being accepted. Verified steps accumulate into a retrievable database of grounded facts, while failed proofs feed back to the model for revision, creating an iterative loop between probabilistic generation and symbolic certainty. If deployed, this approach could change how we trust AI in mathematics, the formal sciences, safety-critical design, and regulatory submissions by converting fuzzy model claims into machine-checked propositions.
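The loop can be made concrete with a short sketch. The Python outline below is illustrative only, not the Hermes implementation from the cited paper: `propose_step` and `autoformalize` are hypothetical placeholders for the LLM and autoformalizer calls, and verification simply shells out to the `lean` CLI, assuming a Lean 4 toolchain is on PATH (a real project would typically run `lake env lean` so files can import mathlib).

```python
# Minimal sketch of the generate -> autoformalize -> verify -> remember loop.
# Placeholder model calls; only the Lean check is concrete.
import subprocess
import tempfile
from pathlib import Path

verified_facts: list[str] = []               # retrievable database of machine-checked steps
failed_attempts: list[tuple[str, str]] = []  # (informal step, Lean error log) fed back to the model


def propose_step(problem: str, context: list[str], feedback: list[tuple[str, str]]) -> str:
    """Hypothetical LLM call: propose the next informal reasoning step,
    conditioned on previously verified facts and prior proof failures."""
    raise NotImplementedError("wire up an LLM client here")


def autoformalize(informal_step: str, context: list[str]) -> str:
    """Hypothetical autoformalizer: translate the informal step into a
    self-contained Lean 4 theorem statement plus candidate proof."""
    raise NotImplementedError("wire up an autoformalization model here")


def lean_check(lean_source: str) -> tuple[bool, str]:
    """Mechanically check a Lean 4 file; accept the step only if `lean`
    exits cleanly (assumes a Lean 4 toolchain on PATH)."""
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "step.lean"
        path.write_text(lean_source)
        result = subprocess.run(["lean", str(path)], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def solve(problem: str, max_iterations: int = 10) -> list[str]:
    """Iterate: probabilistic generation proposes, symbolic checking decides."""
    for _ in range(max_iterations):
        step = propose_step(problem, verified_facts, failed_attempts)
        lean_source = autoformalize(step, verified_facts)
        ok, log = lean_check(lean_source)
        if ok:
            verified_facts.append(lean_source)   # grounded, retrievable fact
        else:
            failed_attempts.append((step, log))  # feedback for revision
    return verified_facts
```

The key design point the sketch tries to capture is that acceptance is gated on the proof checker, not on model confidence: only steps that Lean verifies enter the memory, and everything else is returned to the model as error feedback.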

Sources

Links for 2025-12-01
Alexander Kruel, 2025.12.01
The Hermes architecture described in the post: LLM → autoformalizer → Lean 4 prover → memory of proved steps (arXiv:2511.18760v1).