Sebastien Bubeck
sbubeck.bsky.social
I work on AI at OpenAI. Former VP AI and Distinguished Scientist at Microsoft.
This was a really fun conversation with @billgates.bsky.social and Peter Lee, and I hope you enjoy it too!

(roughly we talk about our respective first contacts with genAI/LLM/sparks and what we see as missing steps/future steps for it to truly transform healthcare)

youtu.be/N0w9uO7kH80?...
How AI is reshaping the future of healthcare and medical research
YouTube video by Microsoft Research
Reposted by Sebastien Bubeck
Impressive ‘GTA VI’ Trailer Features Characters Claiming They’re Sentient, Begging For Release From Digital Prison
theonion.com/impressive-g...
Overall I think o3-mini will be a very useful model for the academic community.

Learn more here: openai.com/index/openai...
OpenAI o3-mini
Pushing the frontier of cost-effective reasoning.
Interestingly, though, the reference it gives is not quite the correct one, but it is very closely related! In general I have found that its references are "fuzzily correct": they mix up authors, journals, and titles, but are surprisingly still useful!
o3-mini is a remarkable model. Somehow it has *grokked arxiv* in a way that no other model on the planet has, turning it into a valuable research partner!

Below is a deceptively simple question that confuses *all* other models, but where o3-mini gives an extremely useful answer!
and if you're the type to be worried about contamination (American Mathematics Competition from last month, after phi-4 was trained):
They made a pretty cool poster for the event too 😄
Tomorrow morning at the Simons Institute: Sparks versus embers, can LLMs solve major open mathematical conjectures?

(FWIW I agree with everything in the Embers paper, so I guess the debate will be about the conclusions to draw from current evidence!)

simons.berkeley.edu/talks/sebast...
Debate: Sparks versus embers
Debaters: Sebastien Bubeck (OpenAI), Tom McCoy (Yale) Discussants: Pavel Izmailov (Anthropic), Ankur Moitra (MIT) Moderator: Anil Ananthaswamy
Yes, I was a bit hyperbolic :-). One of the cleanest open problems that is still open and doable, I think, is to prove an n^{3/2} sqrt(T) lower bound for BCO (bandit convex optimization). I'd love to see that.
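For readers unfamiliar with the shorthand: BCO is bandit convex optimization, where a learner repeatedly picks a point and observes only the scalar loss at that point. A hedged sketch of the conjecture, where the precise formalization (adversary model, constraint set K) is my own gloss rather than spelled out in the post:

```latex
% Setting (assumed): over T rounds, the learner picks x_t in a convex
% body K \subset R^n, the adversary picks a convex loss f_t, and the
% learner observes only the scalar value f_t(x_t). Regret is
\[
  R_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in K} \sum_{t=1}^{T} f_t(x).
\]
% The open problem as stated in the post: show that every algorithm
% suffers, in the worst case over convex loss sequences,
\[
  \mathbb{E}\,[R_T] \;=\; \Omega\!\left( n^{3/2} \sqrt{T} \right),
\]
% i.e., a lower bound scaling as n^{3/2} in the dimension and
% sqrt(T) in the horizon.
```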
It's looking like all the open problems I have thought about in the last 10 years are now solved (or in some cases on the verge of being solved)? Latest case in point, this beautiful new paper: arxiv.org/abs/2411.18614 . I'm glad we (humans) got all these results just in time!
Optimal root recovery for uniform attachment trees and $d$-regular growing trees
We consider root-finding algorithms for random rooted trees grown by uniform attachment. Given an unlabeled copy of the tree and a target accuracy $\varepsilon > 0$, such an algorithm outputs a set of...
Was there a paper hinting at that 18 months ago? Hmmm 🤔🤣🤣
It is funny how much of the dominant discussion about LLMs from eighteen months ago (can AI pass the Turing test? can it do tasks that are not explicitly in the training data? is it a stochastic parrot?) has faded. Lots of questions remain (if/when it can reason, etc.), but there has been a big, quiet shift in assumptions.