Jason Rute
@jasonrute.bsky.social
100 followers 21 following 50 posts
AI Researcher @ Mistral AI | Formerly IBM Research | Former Mathematician/Logician/Data Scientist | Building AI for math and reasoning
(A) is the least hand-holding, but it could misformalize. (B) makes sure the formalization fits your model’s constraints, but leaves you open to criticism. (C) Joseph has thought more about IMO Lean formalization than anyone I know, but his may not fit your model’s requirements.
For those using @leanprover AI models to “compete” in #imo2025 (@ProjectNumina, @KaiyuYang4), what Lean formalizations will you use? (A) autoformalized statements, (B) manually formalized statements, or (C) Joseph Myers’s IMOLean formalizations (github.com/jsm28/IMOLean)?
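(For readers who haven’t seen one: here is a rough sketch of what a manually formalized IMO statement can look like in Lean 4 with Mathlib, using IMO 2019 Problem 1 as the example. The theorem name and exact phrasing are my own illustration and do not follow the IMOLean repo’s conventions precisely; the “determine all f” part is expressed as an iff against the known answer, and the proof is elided.)

```lean
import Mathlib

/-- Illustrative sketch of a formalized IMO statement (IMO 2019 Problem 1):
    determine all `f : ℤ → ℤ` with `f (2a) + 2 f b = f (f (a + b))`.
    The answer is baked into the statement as an iff; the proof is elided
    with `sorry` since only the statement shape matters here. -/
theorem imo2019_q1_sketch (f : ℤ → ℤ) :
    (∀ a b : ℤ, f (2 * a) + 2 * f b = f (f (a + b))) ↔
      (∀ z : ℤ, f z = 0) ∨ (∃ c : ℤ, ∀ z : ℤ, f z = 2 * z + c) := by
  sorry
```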
Oh, you want a tool which annotates papers (maybe on the fly), where if you hover over, say, “inaccessible cardinal” it gives you the definition needed for the paper (or a link)?
PDFs? You mean that if the Lean code is in a paper you can’t mouse over it? If that is what you mean, what do PDFs have to do with LLMs, or are those two separate examples of things which don’t have access to language server information?
I have no idea if LLMs are any good as math collaborators for bouncing around ideas and brainstorming, but if those who do get this to work keep it a secret, we just won’t know. We should be rewarding those who use them wisely for their research.
If we aren’t willing to talk about when tools are helpful, it will lead to an asymmetry where those in the know will use hidden methods that they keep secret out of either embarrassment or competition. IMHO, this isn’t in the spirit of mathematics research.
Mathematics isn’t chess. Using tools isn’t cheating; it is research, and knowing which tools to use is an important part of research. But as these tools start to feel more human, they do raise tricky questions of attribution and citation.
At #JMM2025 last week, a mathematician told me he used ChatGPT (forgot which version) to get a major insight into solving his math problem. But, he is reluctant to share publicly (e.g. in his paper) that this happened since it could make him look less bright. I find this sad. 🧵
I last attended JMM in Seattle in 2017. It was also the year I left mathematics. Like many math postdocs, I couldn’t find a suitable job and went into industry, first data science and later AI research. I’m so excited to be back talking about AI for Math!
I’m really excited to announce that I’m going to be at the Joint Math Meetings #JMM2025 to give a talk on AI for Formal Theorem Proving on Friday. Reach out if you want to talk about AI for Math or for all my math friends, if you just want to catch up!
Applications of AI to formal theorem proving (meetings.ams.org)
I think the long-term goal of Lean4Lean is (1) a full implementation of Lean’s kernel in Lean, (2) a formalization of Lean’s type theory LeanTT as in Mario’s thesis, (3) a proof that the kernel faithfully implements LeanTT, and (4) a formalization of Mario’s proof that LeanTT is consistent/sound.
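(To make the shape of goals (1)–(3) concrete at a miniature scale, here is a toy sketch of my own in Lean; it is not Lean4Lean’s actual code and the names are made up. A tiny expression language stands in for Lean’s terms, an executable checker stands in for the kernel, a declarative typing judgment stands in for LeanTT, and the final theorem is the faithfulness statement, with the proof elided.)

```lean
-- Toy illustration only, not Lean4Lean's code: a miniature analogue of
-- goals (1)-(3) for a tiny expression language instead of Lean's type theory.

inductive Ty where
  | nat
  | bool

inductive Expr where
  | litN (n : Nat)
  | litB (b : Bool)
  | add  (a b : Expr)

-- (1) The "kernel": an executable checker written in Lean itself.
def check : Expr → Option Ty
  | .litN _ => some .nat
  | .litB _ => some .bool
  | .add a b =>
    match check a, check b with
    | some .nat, some .nat => some .nat
    | _, _ => none

-- (2) The "type theory": a declarative typing judgment (the analogue of LeanTT).
inductive HasTy : Expr → Ty → Prop where
  | litN (n : Nat) : HasTy (.litN n) .nat
  | litB (b : Bool) : HasTy (.litB b) .bool
  | add {a b : Expr} : HasTy a .nat → HasTy b .nat → HasTy (.add a b) .nat

-- (3) Faithfulness: whatever the kernel accepts is derivable in the theory.
-- (Proof elided; stating the theorem is the point of the sketch.)
theorem check_sound (e : Expr) (t : Ty) (h : check e = some t) : HasTy e t := by
  sorry
```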
I’m not sure exactly what you mean here, but would Metamath Zero (if finished) be what you are looking for? It would be a checker verified down to x86 machine code that could check a trace from another theorem prover (if the kernel rules were given to MM0 and the trace contained all the proof steps).
If you are just worried about Lean’s kernel, I think Lean4Lean could help verify that it is both correct and sound. If you are worried about the full Lean code base, that indeed gets a bit complicated, since one could do weird stuff which bypasses the kernel. I’m not sure what the full correct solution is.
But just to be clear, I have no current plans to write any proof checking language (except maybe some toy external type checkers for some existing languages like Lean or MetaMath).
Or better yet, meta proof checking languages like Metamath Zero, so that I already know the kernel is correct and all I have to do is get the logic specification right (and maybe that language would also have good libraries and tools for proving that the specification matches a desired semantics).
If I wrote a real proof checker now that I expected to be performant, I think I would use Rust. But over the next 100 years I would hope for performant languages whose programs we can also prove correct, so that I could prove my checker is correct.
I don’t understand this question. Do you mean the language the checker is expected to be implemented in, or the name of the expected yet-to-be-developed proof checking language?
When I compare o1 pro to AlphaProof, I lean towards thinking o1 pro is better (much faster to the right ideas), but it obviously didn’t sufficiently prove stuff. I think something like o1/o3 plus access to Lean would be really powerful. It now seems to me that AlphaProof is too constrained w/o CoT.
Is it different in other functional languages? But I can see how it is quite confusing if you aren’t prepared for this.
Are you friends with any of the xAI folks?
Caveat 3: I’m talking about really cutting-edge theorem proving here. There is still great value in ATPs for day-to-day formalization. We don’t need o3-like models for that. We need solvers at the right level of speed/cost/usability (tactics, hammers, chatbots, copilots, etc.).
Caveat 2: Verification (including formalization) should be incorporated in the middle of reasoning models, and not just at the end. An AI should be checking its results as it goes, and adjusting its informal thinking process to account for what it finds. (And for code too, e.g. unit tests!)