Gilles Louppe
glouppe.bsky.social
AI for Science, deep generative models, inverse problems. Professor of AI and deep learning @universitedeliege.bsky.social. Previously @CERN, @nyuniversity. https://glouppe.github.io
The registration prices for the leading AI conference. Not counting the flight to San Diego, the week at the hotel, or on-site expenses...
November 27, 2025 at 3:19 PM
Finally, at the #CCAI workshop, Thomas will show how, without retraining, GenCast can be embedded in a particle filter for data assimilation. That is, no initial state x0 is required anymore, observations are sufficient to start generating realistic weather trajectories! arxiv.org/abs/2509.18811
November 27, 2025 at 1:37 PM
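The idea of embedding a pretrained generative forecast model in a particle filter can be sketched with a bootstrap filter on a toy system. The `forecast` function below is a hypothetical stand-in for a sampler like GenCast, not the real model; everything else (observation noise, particle count) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast(x):
    # Toy stochastic one-step "generative forecast" standing in for GenCast:
    # each call samples one plausible next state given the current one.
    return 0.9 * x + 0.1 * np.sin(x) + 0.1 * rng.standard_normal(x.shape)

def bootstrap_filter(observations, n_particles=500, obs_std=0.5):
    # Start from a broad prior: no known initial state x0 is required.
    particles = rng.standard_normal(n_particles)
    for y in observations:
        particles = forecast(particles)                  # propagate ensemble
        log_w = -0.5 * ((y - particles) / obs_std) ** 2  # observation likelihood
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        particles = particles[idx]
    return particles

obs = [1.0, 0.8, 0.9, 0.7]
posterior = bootstrap_filter(obs)
print(posterior.mean())  # ensemble mean consistent with the observations
```

Notice that the forecast model is only sampled, never retrained: all the assimilation happens through weighting and resampling.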
At the same workshop, @orochman.bsky.social will discuss how neural solvers only approximately satisfy physical constraints (even if they are supposedly trained for that). Fortunately, simple post-hoc projection steps can help improve physical consistency significantly. arxiv.org/abs/2511.17258
November 27, 2025 at 1:37 PM
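A minimal sketch of such a post-hoc projection step, assuming a single linear conservation constraint (the field must sum to a known total): the closest field in the L2 sense that satisfies the constraint is an orthogonal projection, obtained in closed form. The prediction values here are made up for illustration.

```python
import numpy as np

def project_to_sum(x, total):
    # Orthogonal projection of x onto the affine set {x : sum(x) = total},
    # i.e. the closest field (in L2) that exactly conserves the total.
    return x + (total - x.sum()) / x.size

pred = np.array([1.2, 0.8, 1.1, 0.9])  # hypothetical neural-solver output
corrected = project_to_sum(pred, total=4.2)
print(corrected.sum())  # restored to the prescribed total
```

The same recipe extends to any linear constraint A x = b, where the projection is again available in closed form.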
At the #ML4PS workshop, @gandry.bsky.social and @sachalewin.bsky.social will present Appa, our large weather model for global data assimilation. Appa differs from other weather models in that it can do reanalysis, nowcasting and forecasting within the same framework. arxiv.org/abs/2504.18720
November 27, 2025 at 1:37 PM
@francois-rozet.bsky.social (attending @euripsconf.bsky.social) will present the work he did at @polymathicai.bsky.social as an intern. In a nutshell, we find that emulating physics in latent space leads to better results than trying to generate in pixel-space directly. arxiv.org/abs/2507.02608
November 27, 2025 at 1:37 PM
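The latent-space emulation idea can be illustrated on a toy linear system: encode high-dimensional "pixel" snapshots into a low-dimensional latent space, fit the dynamics there, and decode only at the end. A linear PCA encoder and least-squares latent dynamics stand in for the learned autoencoder and generative emulator; none of this is the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "pixel-space" trajectory: 2D rotating physics lifted to 16 pixels.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
P = rng.standard_normal((16, 2))  # lifting map from latent physics to pixels
s = rng.standard_normal(2)
snapshots = [P @ s]
for _ in range(200):
    s = A @ s
    snapshots.append(P @ s)
X = np.stack(snapshots)  # (201, 16) pixel-space snapshots

# Linear "autoencoder" from PCA: the top-2 components span the physics.
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
enc = Vt[:2]       # encoder: 16 -> 2
Z = X @ enc.T      # latent trajectory

# Fit the emulator in latent space: one-step least-squares dynamics.
A_lat, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)

# Roll out 200 steps entirely in latent space, then decode to pixels.
z = Z[0]
for _ in range(200):
    z = z @ A_lat
pred = z @ enc
err = np.linalg.norm(pred - X[-1]) / np.linalg.norm(X[-1])
print(err)
```

Here the latent rollout reconstructs the final snapshot essentially exactly, because the toy dynamics really is low-dimensional; the paper's point is that even for real physics, generating in a learned latent space beats generating in pixel space directly.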
For some puzzling reason, my (existing) Scholar profile cannot be found using Scholar itself; it's been like this for years 🙃 (search engines like Bing do find it, however)
November 22, 2025 at 11:04 AM
Yes, that was indeed me! I did have the feeling you looked familiar :-) Looking forward to crossing paths again
November 12, 2025 at 3:28 PM
It is only useful when the training data is noisy or incomplete. See e.g. arxiv.org/abs/2405.13712, where we train diffusion models from sparse images only.
Learning Diffusion Priors from Observations by Expectation Maximization
Diffusion models recently proved to be remarkable priors for Bayesian inverse problems. However, training these models typically requires access to large amounts of clean data, which could prove diffi...
arxiv.org
October 27, 2025 at 3:38 PM
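The core idea of learning a prior from corrupted observations by expectation maximization can be shown in one dimension with a Gaussian prior standing in for the diffusion model (a drastic simplification of the paper's setup): we never see the clean samples x, only noisy measurements y = x + noise, yet EM recovers the prior's variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent "clean" samples from an unknown prior N(0, true_var); we only
# observe noisy measurements y = x + noise, never x itself.
true_var, noise_var = 4.0, 1.0
x = rng.normal(0.0, np.sqrt(true_var), 10_000)
y = x + rng.normal(0.0, np.sqrt(noise_var), x.size)

var = 1.0  # initial guess for the prior variance
for _ in range(100):
    # E-step: Gaussian posterior over each latent x_i given y_i and the
    # current prior (precisions add; mean is precision-weighted).
    post_var = 1.0 / (1.0 / var + 1.0 / noise_var)
    post_mean = post_var * y / noise_var
    # M-step: refit the prior to the expected sufficient statistics.
    var = np.mean(post_mean**2 + post_var)

print(var)  # converges near true_var = 4.0
```

In the paper, the Gaussian prior is replaced by a diffusion model and the E-step by posterior sampling, but the alternation is the same.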