Gilles Louppe
glouppe.bsky.social

AI for Science, deep generative models, inverse problems. Professor of AI and deep learning @universitedeliege.bsky.social. Previously @CERN, @nyuniversity. https://glouppe.github.io

Pinned
<proud advisor>
Hot off the arXiv! 🦬 "Appa: Bending Weather Dynamics with Latent Diffusion Models for Global Data Assimilation" 🌍 Appa is our novel 1.5B-parameter probabilistic weather model that unifies reanalysis, filtering, and forecasting in a single framework. A thread 🧵

Reposted by Gilles Louppe

Yesterday I attended two AI conferences in Paris; very different vibes.
AdoptAI, a business show: pretty videos, promises about AI, €2.30 coffee.
NeurIPS@Paris, a research conference: equations, free coffee.

The AI business wouldn't exist without research. Let's not forget it, and keep investing in research.

The prices for the leading AI conference. Not counting the flight to San Diego, the week at the hotel, or the expenses on site...

Finally, at the #CCAI workshop, Thomas will show how, without retraining, GenCast can be embedded in a particle filter for data assimilation. That is, no initial state x0 is required anymore; observations are sufficient to start generating realistic weather trajectories! arxiv.org/abs/2509.18811
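The particle-filter idea can be sketched with a toy bootstrap filter. Everything here is illustrative: `step_model` is a stand-in for one stochastic rollout step of a generative forecast model (the actual GenCast embedding in the paper is far more involved), and the 1-D state and Gaussian observation likelihood are assumptions made for the demo.

```python
import numpy as np

def step_model(particles, rng):
    # Stand-in for one stochastic forecast step of a generative weather
    # model (sampling the next state); here just a toy random walk.
    return particles + rng.normal(0.0, 0.1, size=particles.shape)

def bootstrap_filter(observations, n_particles=500, obs_noise=0.2, seed=0):
    """Bootstrap particle filter: no explicit initial state is needed,
    only a diffuse prior ensemble that observations then sharpen."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, size=n_particles)  # broad prior
    means = []
    for y in observations:
        particles = step_model(particles, rng)          # propose with the model
        # Weight each particle by the observation likelihood p(y | x)
        logw = -0.5 * ((y - particles) / obs_noise) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Resample to concentrate particles on plausible states
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
        means.append(particles.mean())
    return np.array(means)

# Noisy observations of a constant true state 0.7
rng = np.random.default_rng(1)
obs = 0.7 + rng.normal(0.0, 0.2, size=30)
est = bootstrap_filter(obs)
print(est[-1])  # posterior mean, should end up close to 0.7
```

Starting from the diffuse prior, the weight-and-resample loop alone pulls the ensemble onto the observed trajectory, which is the sense in which "observations are sufficient".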

At the same workshop, @orochman.bsky.social will discuss how neural solvers only approximately satisfy physical constraints (even if they are supposedly trained for that). Fortunately, simple post-hoc projection steps can help improve physical consistency significantly. arxiv.org/abs/2511.17258
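A minimal sketch of what a post-hoc projection step can look like, assuming a linear constraint A x = b (the paper's constraints and method may differ): the closest point satisfying the constraint exactly has a closed form.

```python
import numpy as np

def project_onto_constraint(x, A, b):
    """Orthogonal projection of x onto the affine set {x : A x = b}.

    x: predicted state (n,), A: constraint matrix (m, n), b: targets (m,).
    Returns the closest point (Euclidean norm) satisfying A x = b exactly:
        x' = x - A^T (A A^T)^{-1} (A x - b)
    """
    residual = A @ x - b
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return x - correction

# Example: enforce global mass conservation on a predicted field.
rng = np.random.default_rng(0)
x_pred = rng.normal(1.0, 0.1, size=100)  # hypothetical neural-solver output
A = np.ones((1, 100))                    # sum over all cells
b = np.array([100.0])                    # known conserved total
x_proj = project_onto_constraint(x_pred, A, b)
print(A @ x_proj)  # [100.] up to floating-point error
```

The projection is cheap, needs no retraining, and only nudges the prediction by the minimum amount required to restore the constraint.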

At the #ML4PS workshop, @gandry.bsky.social and @sachalewin.bsky.social will present Appa, our large weather model for global data assimilation. Appa differs from other weather models in that it can do reanalysis, nowcasting and forecasting within the same framework. arxiv.org/abs/2504.18720

@francois-rozet.bsky.social (attending @euripsconf.bsky.social) will present the work he did at @polymathicai.bsky.social as an intern. In a nutshell, we find that emulating physics in latent space leads to better results than trying to generate in pixel-space directly. arxiv.org/abs/2507.02608

Our Montefiore Science with AI lab will be at #NeurIPS2025 presenting 1 paper at the main conference and 3 papers at workshops. If you are attending, feel free to reach out to the crew to discuss science, AI, or just to say hi! (I won't attend this year unfortunately 🌱)

Reposted by Gilles Louppe

As we go into the Thanksgiving holiday, I wanted to express my thanks to my collaborators @johannbrehmer.bsky.social @glouppe.bsky.social, Juan Pavez, @smsharma.bsky.social. Recently, I was awarded the Pritzker Prize for AI in Science for work on SBI. That would never have happened without them.

Reposted by Gilles Louppe

Lots of interesting LLM releases last week. My fav was actually Olmo 3 (I love the Olmo series for being fully open source and transparent).
If you are interested in reading through the architecture details, I coded it from scratch here: github.com/rasbt/LLMs-f...

For some puzzling reason my (existing) Scholar profile cannot be found using Scholar itself, it's been like this for years 🙃 (search engines like Bing do find it however)

... and here I thought the new Scholar Labs would finally be able to search and find my Scholar profile 😥
Man, everything is so bleak, anyone got a fun fact or little bit of trivia they want to share

Reposted by Gilles Louppe

What happens when you combine 10 years of brain data with one of the world’s fastest supercomputers?

A virtual mouse cortex simulation, thanks to a global collaboration.

🧠📈 https://alleninstitute.org/news/one-of-worlds-most-detailed-virtual-brain-simulations-is-changing-how-we-study-the-brain/

Reposted by Gilles Louppe

GDM WeatherNext 2

8× faster than v1, it can simulate extreme situations and game out scenarios in one minute flat on a single TPU (as opposed to hours of supercomputer time for traditional algorithms)

will be available in all of Google’s weather apps

blog.google/technology/g...
WeatherNext 2: Our most advanced weather forecasting model
The new AI model delivers more efficient, more accurate and higher-resolution global weather predictions.
blog.google
I am super happy to share that our project on training biophysical models with Jaxley is now published in Nature Methods: www.nature.com/articles/s41...
Jaxley: differentiable simulation enables large-scale training of detailed biophysical models of neural dynamics - Nature Methods
Jaxley is a versatile platform for biophysical modeling in neuroscience. It allows efficiently simulating large-scale biophysical models on CPUs, GPUs and TPUs. Model parameters can be optimized with ...
www.nature.com

Yes, that was indeed me! I did think you rang a bell :-) Looking forward to crossing paths again

Reposted by Gilles Louppe

It's striking how AI-generated images have gone from remarkable and visually attractive to repulsive AI slop in just a few years.

Reposted by Gilles Louppe

I am incredibly honored to have received the inaugural AI in Science Research Excellence Prize from the Margot and Tom Pritzker Foundation
dsi.wisc.edu/2025/11/10/d...

Once again, an interdisciplinary research project gets poor, even condescending this time, evaluations because reviewers make little effort to understand the maths. Science is not only about killing rat models and seeing if some random drug worked. Tired of this game.

Reposted by Gilles Louppe

"stole Rosalind Franklin's work" has become the new orthodoxy. While she was certainly the victim of sexism from Watson, I think her colleague Wilkins was the real villain. Events 1951-53 well covered in Nature in 2023 www.nature.com/articles/d41...
What Rosalind Franklin truly contributed to the discovery of DNA’s structure
Franklin was no victim in how the DNA double helix was solved. An overlooked letter and an unpublished news article, both written in 1953, reveal that she was an equal player.
www.nature.com

Reposted by Gilles Louppe

New paper, with @rkhashmani.me @marielpettee.bsky.social @garrettmerz.bsky.social Hellen Qu. We introduce a framework for generating realistic, highly multimodal datasets with explicitly calculable mutual information. This is helpful for studying self-supervised learning.
arxiv.org/abs/2510.21686

Reposted by Gilles Louppe

"The Principles of Diffusion Models" by Chieh-Hsin Lai, Yang Song, Dongjun Kim, Yuki Mitsufuji, Stefano Ermon. arxiv.org/abs/2510.21890
It might not be the easiest intro to diffusion models, but this monograph is an amazing deep dive into the math behind them and all the nuances
The Principles of Diffusion Models
This monograph presents the core principles that have guided the development of diffusion models, tracing their origins and showing how diverse formulations arise from shared mathematical ideas. Diffu...
arxiv.org

It is only useful when the training data is noisy or incomplete. See e.g. arxiv.org/abs/2405.13712, where we train diffusion models from sparse images only.
Learning Diffusion Priors from Observations by Expectation Maximization
Diffusion models recently proved to be remarkable priors for Bayesian inverse problems. However, training these models typically requires access to large amounts of clean data, which could prove diffi...
arxiv.org

EM algorithm: 1977 vintage, 2025 relevant. New lecture notes on a classic that refuses to age. From fitting a GMM on the Old Faithful data to training modern diffusion models in incomplete data settings, the same simple math applies. 👉 glouppe.github.io/dats0001-fou...
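For a flavor of that "same simple math", here is a minimal EM fit of a two-component 1-D Gaussian mixture on synthetic bimodal data standing in for the Old Faithful durations (an illustrative sketch; the lecture notes' actual setup may differ).

```python
import numpy as np

def em_gmm_1d(x, n_iter=50, seed=0):
    """Fit a two-component 1D Gaussian mixture with EM."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=2, replace=False)   # init means at two data points
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r_ik = p(z_i = k | x_i), computed in log space
        log_p = (
            np.log(pi)
            - 0.5 * np.log(2 * np.pi * sigma**2)
            - 0.5 * ((x[:, None] - mu) / sigma) ** 2
        )
        r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates of pi, mu, sigma
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Toy bimodal data: two modes near 2.0 and 4.5 (minutes, say)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(2.0, 0.3, 200), rng.normal(4.5, 0.4, 300)])
pi, mu, sigma = em_gmm_1d(x)
print(np.sort(mu))  # means recovered near 2.0 and 4.5
```

The same E-step/M-step alternation, with the E-step replaced by posterior sampling or approximate inference, is what lets EM train diffusion priors from incomplete observations.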

Reposted by Gilles Louppe

Fisher meets Feynman! 🤝

We use score matching and a trick from quantum field theory to make a product-of-experts family both expressive and efficient for variational inference.

To appear as a spotlight @ NeurIPS 2025.
#NeurIPS2025 (link below)
What if we did a single run and declared victory

Reposted by Gilles Louppe

Excited to share SamudrACE, the first 3D AI ocean–atm–sea-ice #climate emulator! 🚀 Simulates 800 years in 1 day on 1 GPU, ~100× faster than traditional models, straight from your laptop 👩‍💻 Collaboration with @ai2.bsky.social and GFDL, advancing #AIforScience with #DeepLearning.
tinyurl.com/Samudrace
SamudrACE: A fast, accurate, efficient 3D coupled climate AI emulator
A fast digital twin of a state-of-the-art coupled climate model, simulating 800 years in 1 day with 1 GPU. SamudrACE combines two leading…
medium.com

Reposted by Gilles Louppe

🕳️🐇 Into the Rabbit Hull – Part I (Part II tomorrow)

An interpretability deep dive into DINOv2, one of vision's most important foundation models.

And today is Part I; buckle up, we're exploring some of its most charming features. :)
Thrilled to have two years of work out, in a pair of papers led by @gradientrider.bsky.social and @maxecharles.bsky.social.

We've built a data-driven calibration of the James Webb Interferometer to near its fundamental limits for high-res imaging - explainer at @aunz.theconversation.com!
How we sharpened the James Webb telescope’s vision from a million kilometres away
The only Australian hardware on board the legendary telescope is starting to fulfil its duties.
theconversation.com

Reposted by Gilles Louppe

In this interview with JASRAC (the Japanese equivalent of SACEM), Nobuo Uematsu gave his opinion on AI-generated music. In his own way, he echoes this new adage: if no one bothered to write it, I won't bother to listen to it.

📃 www.jasrac.or.jp/magazine/int...