Gilles Louppe
glouppe.bsky.social
AI for Science, deep generative models, inverse problems. Professor of AI and deep learning @universitedeliege.bsky.social. Previously @CERN, @nyuniversity. https://glouppe.github.io
The registration fees for the leading AI conference. Not counting the flight to San Diego, the week at the hotel, or the expenses on site...
November 27, 2025 at 3:19 PM
Finally, at the #CCAI workshop, Thomas will show how, without retraining, GenCast can be embedded in a particle filter for data assimilation. That is, no initial state x0 is required anymore, observations are sufficient to start generating realistic weather trajectories! arxiv.org/abs/2509.18811
November 27, 2025 at 1:37 PM
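The idea of starting from observations alone can be illustrated with a textbook bootstrap particle filter. The sketch below is purely illustrative and uses a toy AR(1) process in place of a generative weather model: `step`, `likelihood`, the wide prior, and all noise levels are my own inventions, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    """Toy stochastic dynamics standing in for a generative forecast model."""
    return 0.9 * x + rng.normal(0.0, 0.5, size=x.shape)

def likelihood(y, x, sigma=0.4):
    """Gaussian observation likelihood p(y | x)."""
    return np.exp(-0.5 * ((y - x) / sigma) ** 2)

# Bootstrap particle filter: no known initial state x0, so start
# from a deliberately wide prior and let observations do the work.
n = 1000
particles = rng.normal(0.0, 5.0, size=n)
true_x = 3.0
for t in range(20):
    true_x = 0.9 * true_x + rng.normal(0.0, 0.5)   # hidden truth
    y = true_x + rng.normal(0.0, 0.4)              # noisy observation
    particles = step(particles)                    # propagate with the model
    w = likelihood(y, particles)
    w /= w.sum()                                   # normalize importance weights
    idx = rng.choice(n, size=n, p=w)               # multinomial resampling
    particles = particles[idx]

estimate = particles.mean()
```

After a handful of assimilation cycles, the particle cloud concentrates around the true state even though it was initialized blindly, which is the point of observation-only initialization.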
At the same workshop, @orochman.bsky.social will discuss how neural solvers only approximately satisfy physical constraints, even when they are explicitly trained to enforce them. Fortunately, simple post-hoc projection steps can significantly improve physical consistency. arxiv.org/abs/2511.17258
November 27, 2025 at 1:37 PM
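As a generic illustration of the post-hoc projection idea (a sketch of one standard choice, not necessarily the paper's method): for a linear constraint C x = b, the closest constraint-satisfying state to a prediction u is the orthogonal projection u − Cᵀ(C Cᵀ)⁻¹(C u − b). All names and the toy "mass conservation" constraint below are hypothetical.

```python
import numpy as np

def project_onto_linear_constraint(u, C, b):
    """Orthogonal projection of u onto the affine set {x : C x = b}."""
    residual = C @ u - b
    correction = C.T @ np.linalg.solve(C @ C.T, residual)
    return u - correction

# Toy example: enforce that the field's total mass is exactly preserved.
rng = np.random.default_rng(0)
u_pred = rng.normal(size=8)          # hypothetical neural-solver output
C = np.ones((1, 8))                  # constraint: sum of all entries
b = np.array([1.0])                  # target total mass
u_proj = project_onto_linear_constraint(u_pred, C, b)
print(np.allclose(C @ u_proj, b))    # → True: constraint satisfied exactly
```

The projection is cheap (one small linear solve) and touches the prediction as little as possible in the Euclidean sense, which is why such steps can be bolted on after training.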
At the #ML4PS workshop, @gandry.bsky.social and @sachalewin.bsky.social will present Appa, our large weather model for global data assimilation. Appa differs from other weather models in that it can do reanalysis, nowcasting and forecasting within the same framework. arxiv.org/abs/2504.18720
November 27, 2025 at 1:37 PM
@francois-rozet.bsky.social (attending @euripsconf.bsky.social) will present the work he did at @polymathicai.bsky.social as an intern. In a nutshell, we find that emulating physics in latent space leads to better results than trying to generate in pixel-space directly. arxiv.org/abs/2507.02608
November 27, 2025 at 1:37 PM
... and here I thought the new Scholar Labs would finally be able to search and find my Scholar profile 😥
November 22, 2025 at 11:04 AM
EM algorithm: 1977 vintage, 2025 relevant. New lecture notes on a classic that refuses to age. From fitting a GMM on the Old Faithful data to training modern diffusion models in incomplete data settings, the same simple math applies. 👉 glouppe.github.io/dats0001-fou...
October 27, 2025 at 2:53 PM
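For the curious, the "same simple math" fits in a few lines. Below is a minimal EM loop for a two-component 1D Gaussian mixture, run on synthetic bimodal data loosely mimicking Old Faithful eruption durations; the function, initialization, and all settings are my own sketch, not taken from the lecture notes.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1D Gaussian mixture (minimal sketch)."""
    # Crude initialization from the data range.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibilities r[n, k] = p(z=k | x_n).
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2) .sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var

# Synthetic bimodal data: short and long "eruptions".
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 0.3, 200), rng.normal(4.3, 0.4, 300)])
pi, mu, var = em_gmm_1d(x)
```

The same E-step/M-step alternation, with the responsibilities replaced by a learned posterior, is what reappears in modern incomplete-data training schemes.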
Flying ✈️ to Stockholm for 2 days of AI for Science at the Royal Swedish Academy of Sciences scienceacademyswe.bsky.social Anyone around?
September 2, 2025 at 4:11 PM
Very proud to see the #ML4PS2025 workshop thrive! ... but where will we put so many posters? 👀 (subtweet)
September 1, 2025 at 4:58 AM
After threatening reviewers and co-authors, the PCs are now micro-managing ACs. @neuripsconf.bsky.social please review the tone of your emails. We do not need to be treated like children. We are all adults here, and we are all trying to do our best, free of charge.
July 7, 2025 at 4:27 PM
Operating at both 0.25-degree spatial resolution and 1-hour temporal resolution (vs traditional 6- or 12-hour intervals), Appa delivers more than promising results: strong skill in observation assimilation and competitive forecasting with detailed atmospheric dynamics.
April 29, 2025 at 4:48 AM
Appa handles multiple tasks traditionally requiring separate systems, all within the same probabilistic framework and without retraining:
- Reanalysis (filling gaps in historical data)
- Filtering (estimating current states)
- Forecasting (predicting future states)
April 29, 2025 at 4:48 AM
How? Under the hood, Appa scales up @francois-rozet.bsky.social's visionary work on Score-based Data Assimilation (SDA) [https://arxiv.org/abs/2306.10574] to global atmospheric states, taking SDA from regional to planetary scale.
April 29, 2025 at 4:48 AM
<proud advisor>
Hot off the arXiv! 🦬 "Appa: Bending Weather Dynamics with Latent Diffusion Models for Global Data Assimilation" 🌍 Appa is our novel 1.5B-parameter probabilistic weather model that unifies reanalysis, filtering, and forecasting in a single framework. A thread 🧵
April 29, 2025 at 4:48 AM
What are the biggest risks of AI? #AISummit
February 7, 2025 at 2:04 PM
Old is new. It's funny how so many ideas from what some call "good old-fashioned AI" keep resurfacing! My advice: revisit Russell and Norvig's book through the lens of 2025 deep learning.
February 6, 2025 at 5:08 PM
I am at the AI Action Summit today and tomorrow! Ping me if you are around! @polytechniqueparis.bsky.social
February 6, 2025 at 9:41 AM
Brrrrr! Going down... 📉
December 3, 2024 at 2:39 PM
NeuralMPM performs better than or on par with existing baselines (GNS, DMCF) while reducing training times. It also generalizes to systems with more particles than seen during training and to larger grid sizes.
November 18, 2024 at 7:28 AM
NeuralMPM is inspired by the Material Point Method and combines Lagrangian particles with a discretized or continuous Eulerian grid to emulate fluid dynamics simulations at a fraction of the cost.
November 18, 2024 at 7:28 AM
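The particle-grid round trip at the heart of MPM-style methods can be sketched in a few lines. The nearest-cell transfer below is a deliberately crude stand-in for NeuralMPM's learned components; the function names and all settings are hypothetical.

```python
import numpy as np

def particles_to_grid(pos, vel, n_cells, domain=1.0):
    """Scatter particle velocities onto a regular 1D grid (nearest-cell average)."""
    idx = np.clip((pos / domain * n_cells).astype(int), 0, n_cells - 1)
    momentum = np.bincount(idx, weights=vel, minlength=n_cells)
    count = np.bincount(idx, minlength=n_cells)
    return momentum / np.maximum(count, 1)

def grid_to_particles(pos, grid_vel, n_cells, domain=1.0):
    """Gather updated velocities from the grid back to the particles."""
    idx = np.clip((pos / domain * n_cells).astype(int), 0, n_cells - 1)
    return grid_vel[idx]

# Lagrangian particles carrying a smooth velocity field.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, 500)
vel = np.sin(2 * np.pi * pos)
# Eulerian side: scatter to the grid (where a learned update would act),
# then gather back to the particles.
grid_vel = particles_to_grid(pos, vel, n_cells=16)
vel_back = grid_to_particles(pos, grid_vel, n_cells=16)
```

Doing the heavy lifting on a fixed grid rather than on particle neighborhoods is what makes this cheaper than fully graph-based emulators.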
Sharing this again, to cheer up my students after rough #ICLR2025 reviews. We present NeuralMPM, a neural emulation framework for particle-based simulations. Full paper at arxiv.org/abs/2408.15753
November 18, 2024 at 7:28 AM
Tuning the learning rate often brings significant improvements, but little did I realize that modern optimizers would bring so much more to the table. Here AdamW vs. SOAP, everything else being equal. 😮
November 10, 2024 at 1:37 PM