Readings shared September 12, 2025. jaalonso.github.io/vestigium/po... #AI #Autoformalization #ITP #LeanProver #Math #Rocq
The readings shared in Bluesky on 12 September 2025 are
Finiteness of symbolic derivatives in Lean. ~ Ekaterina Zhuchko, Hendrik Maarand, Margus Veanes, Gabriel Ebner. #ITP #LeanProver #Math
September 13, 2025 at 7:34 AM
mathematics). To mitigate the inefficiency of manual formalization, we introduce a novel human-in-the-loop autoformalization pipeline that integrates: (1) specialized large language models (LLMs) for statement autoformalization, (2) multi-LLM semantic [3/7 of https://arxiv.org/abs/2505.02735v1]
May 6, 2025 at 6:02 AM
ATLAS generated ~117k theorem statements and fine-tuned Llama 3.1-8B-Instruct with LoRA adapters, yielding statistically significant gains (p < 0.05). https://getnews.me/atlas-framework-advances-ai-theorem-autoformalization-with-large-dataset/ #atlas #llama31 #neurips2025
October 3, 2025 at 7:12 AM
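LoRA, as used above to fine-tune Llama 3.1-8B-Instruct, freezes the base weights and learns only a low-rank update ΔW = (α/r)·BA. A minimal pure-Python sketch of that idea (matrix shapes, rank, and scaling are illustrative values, not taken from the ATLAS paper):

```python
import random

random.seed(0)

d, k = 8, 8      # shape of the frozen weight matrix (illustrative)
r, alpha = 2, 4  # LoRA rank and scaling factor (illustrative)

def matmul(X, Y):
    """Plain matrix product of two nested-list matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

W = [[random.gauss(0, 1) for _ in range(k)] for _ in range(d)]     # frozen
A = [[random.gauss(0, 0.01) for _ in range(k)] for _ in range(r)]  # trainable
B = [[0.0] * r for _ in range(d)]                                  # trainable, zero-init

# Effective weights: W + (alpha/r) * (B @ A). Only A and B are trained,
# so the trainable parameter count drops from d*k to r*(d+k).
delta = matmul(B, A)
W_eff = [[w + (alpha / r) * dw for w, dw in zip(wr, dr)]
         for wr, dr in zip(W, delta)]

# Because B is zero-initialized, the adapted model starts out
# identical to the base model.
assert all(abs(a - b) < 1e-12
           for ra, rb in zip(W_eff, W) for a, b in zip(ra, rb))
print("trainable:", r * (d + k), "vs frozen:", d * k)
```

The payoff is the parameter count: here 32 trainable values stand in for a 64-entry weight matrix, and the gap widens rapidly at real model dimensions.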
MASA: LLM-driven multi-agent systems for autoformalization. ~ Lan Zhang, Marco Valentino, André Freitas. arxiv.org/abs/2510.089... #Autoformalization #LLM #ITP #LeanProver #Math
Autoformalization serves a crucial role in connecting natural language and formal reasoning. This paper presents MASA, a novel framework for building multi-agent systems for autoformalization driven b...
October 17, 2025 at 10:14 AM
Process-driven autoformalization in Lean 4. ~ Jianqiao Lu et al. arxiv.org/abs/2406.01940 #Autoformalization #LLMs #ITP #Lean4
Autoformalization, the conversion of natural language mathematics into formal languages, offers significant potential for advancing mathematical reasoning. However, existing efforts are limited to...
June 8, 2024 at 9:56 AM
unique challenges. In this survey, we provide a comprehensive overview of recent advances in autoformalization from both mathematical and LLM-centric perspectives. We examine how autoformalization is applied across various mathematical domains and [3/5 of https://arxiv.org/abs/2505.23486v1]
May 30, 2025 at 5:58 AM
September 13, 2025 at 1:50 AM
With Lean-FIRE, we achieved the first end-to-end autoformalization of 13 Putnam problems, but also showed that conjecturing remains a massive reasoning challenge.
October 24, 2025 at 11:05 AM
Document-level autoformalization. ~ Antoine Bosselut, Viktor Kunčak, Maryna Viazovska. www.renaissancephilanthropy.org/document-lev... #AI #Math #ITP #LeanProver #Autoformalization
September 17, 2025 at 6:00 PM
FormalAlign: Automated alignment evaluation for autoformalization. ~ Jianqiao Lu, Yingjia Wan, Yinya Huang, Jing Xiong, Zhengying Liu, Zhijiang Guo. arxiv.org/abs/2410.10135 #Autoformalization #ITP #Lean4
Autoformalization aims to convert informal mathematical proofs into machine-verifiable formats, bridging the gap between natural and formal languages. However, ensuring semantic alignment between the ...
October 18, 2024 at 6:49 AM
Next up: Real-world autoformalization by Siddhartha Gadgil. This is gonna be lit, very much looking forward to this.
Welcome to researchseminars.org, a list of research seminars and conferences!
January 15, 2025 at 1:01 PM
Why is this a worthwhile project?
1) It will create a hard dataset for autoformalization AIs;
2) It will force us to formalize the definitions of mathematical objects which are being used today in the top journals, thus making Lean's mathematics library more relevant to modern math researchers.
I am advertising for 4 post-docs to come to Imperial and formalize, in Lean, *statements* of theorems from recent issues of the top generalist pure mathematics journals.
www.imperial.ac.uk/jobs/search-...
Positions are for 2 years, start date 1st Oct this year. Deadline 15th August.
July 28, 2025 at 12:12 PM
Readings shared February 14, 2025. jaalonso.github.io/vestigium/po... #Autoformalization #FunctionalProgramming #Haskell #ITP #IsabelleHOL #LLMs
The readings shared in Bluesky on 14 February 2025 are
Language models for verifiable mathematical automation (Interaction, integration, and autoformalization). ~ Qiaochu Jiang. #ITP #IsabelleHOL #LL
February 15, 2025 at 10:25 AM
ProofBridge: Auto-formalization of natural language proofs in Lean via joint embeddings. ~ Prithwish Jana et al. arxiv.org/abs/2510.15681 #ITP #LeanProver #Autoformalization #LLMs #Math
Translating human-written mathematical theorems and proofs from natural language (NL) into formal languages (FLs) like Lean 4 has long been a significant challenge for AI. Most state-of-the-art method...
October 20, 2025 at 11:23 AM
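The joint-embedding approach behind ProofBridge maps informal (NL) and formal (FL) statements into a shared vector space so that matching pairs land close together. The core retrieval step can be sketched as nearest-neighbor search under cosine similarity; the vectors and theorem names below are toy values, not real embeddings from the paper:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy joint-embedding vectors (illustrative; a real system would obtain
# these from trained NL and FL encoders sharing one embedding space).
nl_theorem = [0.9, 0.1, 0.3]
fl_candidates = {
    "thm_add_comm": [0.8, 0.2, 0.3],
    "thm_mul_assoc": [0.1, 0.9, 0.2],
}

# Retrieve the formal statement whose embedding lies closest to the
# informal one.
best = max(fl_candidates, key=lambda n: cosine(nl_theorem, fl_candidates[n]))
print(best)  # thm_add_comm
```

Training pushes embeddings of aligned NL/FL pairs together and mismatched pairs apart, so this one similarity score can drive both retrieval and alignment checking.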
ProofNet: A benchmark for autoformalizing and formally proving undergraduate-level mathematics problems. ~ Zhangir A Azerbayev, Bartosz Piotrowski, Jeremy Avigad. mathai2022.github.io/papers/20.pdf #Autoformalization #ITP #LeanProver #Math
November 1, 2023 at 9:53 AM
[2025-08-27] 📚 Updates in #AIMat
(1) FormaRL: Enhancing Autoformalization with no Labeled Data (researchtrend.ai/papers/2508.18914)
🔍 More at researchtrend.ai/communities/AIMat
August 27, 2025 at 3:07 AM
Autoformalization performance of LLMs as measured by standard benchmarks such as ProofNet. Crucially, our approach outperforms pretrained models using a minimal number of tokens. We also show, through strategic prompting and [6/8 of https://arxiv.org/abs/2502.15795v1]
February 25, 2025 at 5:53 AM
Merlin Carl
Improving the Diproche CNL through Autoformalization via Large Language Models
https://arxiv.org/abs/2303.17513
April 3, 2024 at 3:13 PM
Everybody can reply
Research shows data alignment, not size, significantly influences LLM performance, especially in Autoformalization. There is a strong negative correlation between alignment and perplexity, indicating a need to adjust LLM training methodologies. https://arxiv.org/abs/2501.08496
Quantifying the Importance of Data Alignment in Downstream Model Performance
July 4, 2025 at 7:30 AM
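The "strong negative correlation between alignment and perplexity" reported above is a Pearson correlation. A minimal sketch of how such a correlation is computed, on toy numbers that merely illustrate the claimed trend (these are not the paper's data):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy values (illustrative only): as the alignment score between
# pretraining data and the downstream task rises, downstream
# perplexity falls.
alignment = [0.1, 0.3, 0.5, 0.7, 0.9]
perplexity = [42.0, 30.5, 21.0, 14.2, 9.8]

r = pearson(alignment, perplexity)
print(round(r, 3))  # strongly negative, close to -1
```

A coefficient near -1 is what "strong negative correlation" means here: better-aligned training data predicts lower perplexity on the target formal language.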
arXiv:2505.23486v1 Announce Type: new
Abstract: Autoformalization, the process of transforming informal mathematical propositions into verifiable formal representations, is a foundational task in automated theorem proving, offering a new perspective [1/5 of https://arxiv.org/abs/2505.23486v1]
May 30, 2025 at 5:58 AM
Readings shared February 21, 2025. jaalonso.github.io/vestigium/po... #AI #Autoformalization #Coq #FunctionalProgramming #ITP #IsabelleHOL #LLMs #LeanProver #Math #OCaml #Rocq
The readings shared in Bluesky on 21 February 2025 are
Machine-assisted proofs (February 19, 2025). ~ Terence Tao. #ITP #LeanProver #AI #Math
Formalisation of combinatorial optimisation in Isabelle/H
February 22, 2025 at 11:14 AM
Nilay Patel, Jeffrey Flanigan, Rahul Saha
A New Approach Towards Autoformalization. (arXiv:2310.07957v1 [cs.CL])
http://arxiv.org/abs/2310.07957
October 13, 2023 at 2:03 AM
@rohanpaul_ai https://x.com/rohanpaul_ai/status/1963195517980291255 #x-rohanpaul_ai
a simple reinforcement approach that learns formal math from unlabeled text and boosts accuracy.
4x to 6x pass@1 gains using only 859 unlabeled problems.
Autoformalization means converting textbook...
September 3, 2025 at 11:15 AM
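The pass@1 metric behind the "4x to 6x gains" above is conventionally estimated with the unbiased pass@k formula: generate n samples per problem, count the c that the checker verifies, and compute the probability that at least one of k draws is correct. A small sketch (the sample counts below are illustrative, not taken from the post):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k
    samples drawn without replacement from n generations, of which
    c are correct, is correct."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws: guaranteed hit
    return 1.0 - comb(n - c, k) / comb(n, k)

# Toy numbers: 20 generations per problem, 5 verified correct
# by the proof checker.
print(pass_at_k(20, 5, 1))   # 0.25
print(pass_at_k(20, 5, 10))  # much higher with 10 attempts
```

For k = 1 the estimator reduces to the plain success rate c/n, which is why pass@1 is the strictest and most commonly reported variant.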
We need so much more work on autoformalization (well that and practical code writing, like where the AI has to write a whole project, since I think they share many of the same difficulties like writing APIs, using libraries, and breaking down problems into sub-problems).
December 23, 2024 at 1:52 AM
Chan, Souliman, Nordhagen, Miranda, Obbad, Koyejo: Lean-ing on Quality: How High-Quality Data Beats Diverse Multilingual Data in AutoFormalization https://arxiv.org/abs/2502.15795 https://arxiv.org/pdf/2502.15795 https://arxiv.org/html/2502.15795
February 25, 2025 at 5:53 AM