Jason Rute
jasonrute.bsky.social
AI Researcher @ Mistral AI | Formerly IBM Research | Former Mathematician/Logician/Data Scientist | Building AI for math and reasoning
(A) Is the least hand holding, but it could misformalize. (B) Makes sure the formalization fits your model’s constraints, but leaves you open to criticism. (C) Joseph has thought more about IMO Lean formalization than anyone I know, but his may not fit your model’s requirements.
July 16, 2025 at 11:49 AM
Oh, you want a tool which annotates papers (maybe on the fly) where if you hover over say “inaccessible cardinal” it gives you the definition needed for the paper (or a link)?
February 2, 2025 at 5:21 PM
PDFs? You mean if the Lean code is in a paper you can’t mouse over it? If that is what you mean, what do PDFs have to do with LLMs, or are those two separate examples of things which don’t have access to language server information?
February 2, 2025 at 8:29 AM
I have no idea if LLMs are any good as math collaborators for bouncing around ideas and brainstorming, but if those who do get this to work keep it a secret, we just won’t know. We should be rewarding those who use them wisely for their research.
January 14, 2025 at 4:00 PM
If we aren’t willing to talk about when tools are helpful, it will lead to an asymmetry where those in the know will use hidden methods that they keep secret out of either embarrassment or competition. IMHO, this isn’t in the spirit of mathematics research.
January 14, 2025 at 4:00 PM
Mathematics isn’t chess. Using tools isn’t cheating. It is research and knowing what tools to use is important to research. But also as these tools start to feel more human, it does bring up tricky questions of attribution and citation.
January 14, 2025 at 4:00 PM
I last attended JMM in Seattle in 2017. It was also the year I left mathematics. Like many math postdocs, I couldn’t find a suitable job and went into industry, first data science and later AI research. I’m so excited to be back talking about AI for Math!
January 9, 2025 at 12:01 PM
The talk is Friday 8am (bright and early!) in the session “AI for the working mathematician”: meetings.ams.org/math/jmm2025...
Applications of AI to formal theorem proving
As with many other scientific and professional areas, artificial intelligence i...
meetings.ams.org
January 9, 2025 at 12:01 PM
I think the long-term goal of Lean4Lean is (1) a full implementation of Lean’s kernel in Lean, (2) a formalization of Lean’s type theory LeanTT as in Mario’s thesis, (3) a proof that the kernel faithfully implements LeanTT, and (4) a formalization of Mario’s proof that LeanTT is consistent/sound.
January 5, 2025 at 7:24 PM
I’m not sure exactly what you mean here, but would Metamath Zero (if finished) be what you are looking for? It would be a checker verified up to x86 byte code that could check a trace from another theorem prover (if the kernel rules were given to MM0 and the trace contained all the proof steps).
January 5, 2025 at 7:04 PM
If you are just worried about Lean’s kernel, I think Lean4Lean could help verify that it is both correct and sound. If you are worried about the full Lean code base, that indeed gets a bit complicated, since one could do weird stuff which bypasses the kernel. I’m not sure of the full correct solution.
January 5, 2025 at 6:56 PM
But just to be clear, I have no current plans to write any proof checking language (except maybe some toy external type checkers for some existing languages like Lean or MetaMath).
January 4, 2025 at 4:23 PM
Or better yet, meta proof checking languages like Metamath Zero, so that I know already the kernel was correct and all I had to do was get the logic specification right (and maybe that language would also have good libraries and tools for proving that specification matches a desired semantics).
January 4, 2025 at 4:18 PM
If I wrote a real proof checker now that I expected to be performant, I think I would use Rust. But in the next 100 years I would hope to have performant languages whose correctness we could also prove, so that I could prove my checker is correct.
January 4, 2025 at 4:08 PM
I don’t understand this question. The language the checker is expected to be implemented in, or the name of the expected yet-to-be-developed proof checking language?
January 4, 2025 at 4:06 PM
When I compare o1 pro to AlphaProof, I lean towards thinking o1 pro is better (much faster to the right ideas), but it obviously didn’t sufficiently prove stuff. I think something like o1/o3 plus access to Lean would be really powerful. It now seems to me that AlphaProof is too constrained w/o CoT.
December 24, 2024 at 12:17 AM
Is it different in other functional languages? But I can see how it is quite confusing if you aren’t prepared for this.
December 23, 2024 at 11:59 PM
Are you friends with any of the xAI folks?
December 23, 2024 at 1:42 PM
Caveat 3: I’m talking about really cutting-edge theorem proving here. There is still great value in ATPs for day-to-day formalization. We don’t need o3-like models for that. We need solvers at the right level of speed/cost/usability (tactics, hammers, chatbots, copilots, etc.).
December 23, 2024 at 1:42 PM
Caveat 2: Verification (including formalization) should be incorporated in the middle of reasoning models, and not just at the end. An AI should be checking its results as it goes, and adjusting its informal thinking process to account for them. (And for code too, e.g. unit tests!)
December 23, 2024 at 1:42 PM
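The "check as you go" idea in Caveat 2 can be sketched as a generate–verify–revise loop. This is a hypothetical toy illustration, not any real system: `propose_step` stands in for a model call (here a stub that "fixes" its answer on the second attempt), and the verifier runs unit tests on candidate code, feeding failures back into the next round.

```python
# Hypothetical sketch of interleaving verification inside a reasoning
# loop, rather than only checking the final answer. All names here
# (propose_step, check, solve) are made up for illustration.

def propose_step(problem, feedback):
    """Stub for a model proposing a candidate solution.
    A real system would call an LLM with the problem and any
    verifier feedback; here we hard-code a buggy first attempt
    and a corrected second attempt."""
    if feedback is None:
        return "def add(a, b): return a - b"  # first (buggy) attempt
    return "def add(a, b): return a + b"      # revised after feedback

def check(candidate_src):
    """Verifier: run a unit test against the candidate code
    (analogous to a kernel check in a proof assistant, but for code).
    Returns (passed, feedback_message)."""
    ns = {}
    exec(candidate_src, ns)
    if ns["add"](2, 3) == 5:
        return True, None
    return False, "unit test failed: add(2, 3) != 5"

def solve(problem, max_rounds=3):
    """Loop: propose, verify, and revise until the checks pass."""
    feedback = None
    for _ in range(max_rounds):
        candidate = propose_step(problem, feedback)
        ok, feedback = check(candidate)
        if ok:
            return candidate
    return None
```

The point of the design is that the verifier's feedback is part of the loop, so the (stubbed) "reasoning" adjusts mid-stream instead of only being graded at the end.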