lebellig (@lebellig.bsky.social)
Ph.D. student on generative models and domain adaptation for Earth observation 🛰
Previously intern @SonyCSL, @Ircam, @Inria

🌎 Personal website: https://lebellig.github.io/
Pinned
I created 3 introductory notebooks on Flow Matching models to help get started with this exciting topic! ✨

1. Annotated Flow Matching paper: github.com/gle-bellier/...
2. Discrete Flow Matching: github.com/gle-bellier/...
3. Minimal FM in Jax: github.com/gle-bellier/...
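For anyone skimming before opening the notebooks: the core mechanics of (rectified-flow-style) flow matching fit in a few lines. A minimal NumPy sketch of my own, with random toy data standing in for a real dataset and no neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_batch(x1, rng):
    """Build one conditional flow matching training batch.

    Linear interpolant: x_t = (1 - t) * x0 + t * x1, whose conditional
    velocity target is u = x1 - x0; a network v(x_t, t) regresses u.
    """
    x0 = rng.standard_normal(x1.shape)       # noise endpoint
    t = rng.uniform(size=(x1.shape[0], 1))   # one time per sample
    xt = (1 - t) * x0 + t * x1               # point on the straight path
    u = x1 - x0                              # regression target for v(xt, t)
    return xt, t, u

x1 = rng.standard_normal((4, 2))             # toy "data" batch
xt, t, u = cfm_batch(x1, rng)
print(xt.shape, u.shape)                     # (4, 2) (4, 2)
```

The training loop would simply minimize the mean squared error between v(xt, t) and u over such batches.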
Calling it for today... I tried using the Gemini 3 Pro preview to build some JS animations, and it went well
November 18, 2025 at 8:17 PM
Interpolation between two Gaussian distributions on a flat torus (my personal benchmark for new LLMs)
November 18, 2025 at 6:43 PM
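The fiddly part of that benchmark is the wrap-around: naive linear interpolation between angles can take the long way around the torus. A minimal sketch of shortest-arc interpolation (my own toy version, not the prompt given to the models):

```python
import numpy as np

def torus_interp(theta0, theta1, t):
    """Interpolate angles along the shortest arc on a flat torus.

    Wrapping the difference to (-pi, pi] avoids taking the long way around.
    """
    d = np.mod(theta1 - theta0 + np.pi, 2 * np.pi) - np.pi  # shortest signed diff
    return np.mod(theta0 + t * d, 2 * np.pi)                # stay in [0, 2*pi)

# Interpolating 350° -> 10° should pass through 0°, not through 180°.
mid = torus_interp(np.deg2rad(350.0), np.deg2rad(10.0), 0.5)
assert min(mid, 2 * np.pi - mid) < 1e-6  # midpoint sits at 0°
```

Interpolating full distributions then reduces to transporting samples along these wrapped geodesics rather than straight Euclidean lines.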
It may not top ImageNet benchmarks, but honestly, that hardly matters... the removal of the VAE component is a huge relief and makes it much easier to apply diffusion models to domain-specific datasets that lack large-scale VAEs.
"Back to Basics: Let Denoising Generative Models Denoise" by Tianhong Li & Kaiming He arxiv.org/abs/2511.13720
Diffusion models in pixel space, without a VAE, with clean-image prediction = nice generation results. Not a new framework, but a nice exploration of the design space of diffusion models.
November 18, 2025 at 5:10 PM
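The "clean-image prediction" part is easy to state concretely. A toy NumPy sketch of my own (schedule values are made up for illustration) showing x-prediction, and why it is algebraically interchangeable with noise prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

# One noising step: x_t = a_t * x + s_t * eps, with (a_t, s_t) from some schedule.
x = rng.standard_normal((8, 3))      # clean "images"
eps = rng.standard_normal(x.shape)   # Gaussian noise
a_t, s_t = 0.7, 0.5                  # illustrative schedule values at one t
xt = a_t * x + s_t * eps

# An x-prediction network regresses x from (xt, t) directly; an eps-prediction
# network can be converted to one: given a perfect noise estimate, the clean
# image is recovered exactly.
eps_hat = eps                        # pretend the network nailed it
x_hat = (xt - s_t * eps_hat) / a_t
assert np.allclose(x_hat, x)
```

The parametrizations differ only in how approximation errors are weighted across noise levels, which is exactly the design-space question the paper explores.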
Reposted by lebellig
We figured out flow matching over states that change dimension. With "Branching Flows", the model decides how big things must be! This works wherever flow matching works, with discrete, continuous, and manifold states. We think this will unlock some genuinely new capabilities.
November 10, 2025 at 9:10 AM
Reposted by lebellig
We created a 1-hour live-coding tutorial to get started on imaging problems with AI, using the DeepInverse library

youtu.be/YRJRgmXV8_I?...
November 13, 2025 at 3:24 PM
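As a taste of the problem class the tutorial targets, here is a generic sketch of mine (not the DeepInverse API): recover a signal from blurred, noisy measurements y = A x + n by gradient descent on the data-fidelity term. Libraries like DeepInverse supply the physics operators and add learned priors on top of this basic recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: y = A @ x + noise, with A a mild 1D blur.
n = 32
A = np.eye(n) + 0.3 * np.eye(n, k=1) + 0.3 * np.eye(n, k=-1)
A /= A.sum(axis=1, keepdims=True)            # rows average neighboring pixels

x_true = np.zeros(n)
x_true[10:20] = 1.0                          # piecewise-constant "image"
y = A @ x_true + 0.001 * rng.standard_normal(n)

# Gradient descent on 0.5 * ||A x - y||^2 (no prior, for illustration only).
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2       # safe step size
for _ in range(1000):
    x -= step * A.T @ (A @ x - y)            # gradient of the data-fidelity term

assert np.max(np.abs(x - x_true)) < 0.05     # near-perfect recovery here
```

With a more ill-conditioned operator or heavier noise, this plain least-squares recovery breaks down, which is where the learned regularizers covered in the tutorial come in.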
I’ll be at EurIPS in Copenhagen in early December! Always up for chats about diffusion, flow matching, Earth observation, AI4climate, etc. Ping me if you’re going! 🇩🇰🌍
November 12, 2025 at 9:07 PM
I first came across the idea of learning curved interpolants in "Branched Schrödinger Bridge Matching" arxiv.org/abs/2506.09007. I liked it, but I’m curious how well it scales to high-dimensional settings and how hard it is to learn interpolants good enough to train the diffusion bridge.
November 12, 2025 at 8:57 PM
"Curly Flow Matching for Learning Non-gradient Field Dynamics" @kpetrovvic.bsky.social et al. arxiv.org/pdf/2510.26645
Solving the Schrödinger bridge problem with a non-zero-drift reference process: learn curved interpolants, apply minibatch OT with the induced metric, and learn the mixture of diffusion bridges.
November 12, 2025 at 8:09 PM
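The minibatch OT ingredient is easy to prototype. A brute-force sketch of my own with the plain Euclidean cost (the paper uses the metric induced by the learned interpolants, and real pipelines use a Hungarian or Sinkhorn solver rather than exhaustive search):

```python
import itertools
import numpy as np

def minibatch_ot_pairs(x0, x1):
    """Brute-force minibatch OT: reorder x1 to minimize total squared cost.

    Exhaustive over permutations, so only viable for tiny batches.
    """
    n = len(x0)
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)  # pairwise costs
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return x1[list(best)]

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 2))
x1 = x0 + 0.01 * rng.standard_normal((4, 2))    # near-copies of x0
paired = minibatch_ot_pairs(x0, x1[rng.permutation(4)])  # shuffle, then re-pair
assert np.allclose(paired, x1)                   # OT recovers the natural pairing
```

Training on OT-paired endpoints straightens the learned flow compared to pairing noise and data at random.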
Reposted by lebellig
I'm on my way to @caltech.edu for an AI + Science conference. Looking forward to seeing some friends and meeting new ones. There will be a livestream.
aiscienceconference.caltech.edu
November 9, 2025 at 8:41 PM
Reposted by lebellig
“Entropic (Gromov) Wasserstein Flow Matching with GENOT” by D. Klein et al. arxiv.org/abs/2310.09254
Transport between two distributions defined on different spaces by training a noise-to-data flow model in the target space, conditioned on the source data and leveraging Gromov–Wasserstein couplings.
October 30, 2025 at 10:43 PM
Reposted by lebellig
💥 DeepInverse is now part of the official PyTorch Landscape💥

We are excited to join an ecosystem of great open-source AI libraries, including @hf.co diffusers, MONAI, einops, etc.

pytorch.org/blog/deepinv...
November 5, 2025 at 5:31 PM
Reposted by lebellig
🌀🌀🌀New paper on the generation phases of Flow Matching arxiv.org/abs/2510.24830
Are FM & diffusion models nothing more than denoisers at every noise level?
In theory yes, *if trained optimally*. But in practice, do all noise levels matter equally?

with @annegnx.bsky.social, S Martin & R Gribonval
November 5, 2025 at 9:03 AM
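The "FM models are denoisers" reading rests on an exact identity: under the linear interpolant, the conditional velocity is a rescaled displacement toward the clean sample, so the optimal velocity field is a function of the optimal denoiser. A quick per-sample check (notation mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear interpolant between noise x0 and data x1: x_t = (1 - t) * x0 + t * x1.
# Per sample, the FM target v = x1 - x0 can be rewritten using only x_t and x1:
#   v = (x1 - x_t) / (1 - t).
# Taking conditional expectations given x_t turns this into
#   E[v | x_t] = (denoiser(x_t) - x_t) / (1 - t),
# i.e. the optimal velocity is a rescaled denoising direction.
x0 = rng.standard_normal((5, 2))
x1 = rng.standard_normal((5, 2))
t = 0.3
xt = (1 - t) * x0 + t * x1
v = x1 - x0
assert np.allclose(v, (x1 - xt) / (1 - t))
```

The paper's question is then whether a model trained suboptimally still behaves like a good denoiser at every noise level, or only at the ones that matter for generation.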
Reposted by lebellig
Want to work on generative models and Earth Observation? 🌍

I'm looking for:
🧑‍💻 an intern on generative models for change detection
🧑‍🔬 a PhD student on neurosymbolic generative models for geospatial data

Both starting beginning of 2026.

Details are below, feel free to email me!
November 4, 2025 at 10:08 AM
Reposted by lebellig
We introduce MIRO: a new paradigm for T2I model alignment integrating reward conditioning into pretraining, eliminating the need for separate fine-tuning/RL stages. This single-stage approach offers unprecedented efficiency and control.

- 19x faster convergence ⚡
- 370x fewer FLOPs than FLUX-dev 📉
October 31, 2025 at 11:24 AM
Reposted by lebellig
New paper, with @rkhashmani.me @marielpettee.bsky.social @garrettmerz.bsky.social Hellen Qu. We introduce a framework for generating realistic, highly multimodal datasets with explicitly calculable mutual information. This is helpful for studying self-supervised learning
arxiv.org/abs/2510.21686
October 28, 2025 at 5:23 PM
"The Principles of Diffusion Models" by Chieh-Hsin Lai, Yang Song, Dongjun Kim, Yuki Mitsufuji, Stefano Ermon. arxiv.org/abs/2510.21890
It might not be the easiest intro to diffusion models, but this monograph is an amazing deep dive into the math behind them and all their nuances.
October 28, 2025 at 8:35 AM
Reposted by lebellig
I'm excited to share jaxion, a differentiable Python/JAX library for fuzzy dark matter (axions) + gas + stars, scalable on multiple GPUs

⭐️repo: github.com/JaxionProjec...
📚docs: jaxion.readthedocs.io

Feedback + collaborations welcome!
October 27, 2025 at 6:10 PM
Reposted by lebellig
Fisher meets Feynman! 🤝

We use score matching and a trick from quantum field theory to make a product-of-experts family both expressive and efficient for variational inference.

To appear as a spotlight @ NeurIPS 2025.
#NeurIPS2025 (link below)
October 27, 2025 at 12:51 PM
That, and please share/repost the articles you’re interested in (especially if you’re not the author). If I’m following you, I want to see what you’re reading. We don’t need a fancy algorithm if we can discover great research through the curated posts of the people we follow
If you’re going to post a paper on twitter, why not do it a few days after the bluesky post? No harm to your career but makes clear it’s a slower information source
October 27, 2025 at 1:55 PM
Reposted by lebellig
Strong afternoon session: Ségolène Martin on how to go from flow matching to denoisers (and hopefully come back?) and Claire Boyer on how learning rate and working in latent spaces affect diffusion models
October 24, 2025 at 3:04 PM
Reposted by lebellig
Kickstarting our workshop on Flow matching and Diffusion with a talk by Eric Vanden Eijnden on how to optimize learning and sampling in Stochastic Interpolants!

Broadcast available at gdr-iasis.cnrs.fr/reunions/mod...
October 24, 2025 at 8:30 AM
Reposted by lebellig
Excited to share SamudrACE, the first 3D AI ocean–atm–sea-ice #climate emulator! 🚀 Simulates 800 years in 1 day on 1 GPU, ~100× faster than traditional models, straight from your laptop 👩‍💻 Collaboration with @ai2.bsky.social and GFDL, advancing #AIforScience with #DeepLearning.
tinyurl.com/Samudrace
October 15, 2025 at 4:11 PM
I'm already waiting for the next generation of "diffusion transformer features are well-suited for discriminative tasks" papers, but with DiTs trained on these representation autoencoders, and then the loop will be closed
Diffusion Transformers with Representation Autoencoders by Boyang Zheng et al. (arxiv.org/abs/2510.116...)

Unexpected result: swapping the SD-VAE for a pretrained visual encoder improves FID, challenging the idea that encoders' information compression is not suited for generative modeling!
October 15, 2025 at 11:55 AM