David Picard
@davidpicard.bsky.social
Professor of Computer Vision/Machine Learning at Imagine/LIGM, École nationale des Ponts et Chaussées @ecoledesponts.bsky.social Music & overall happiness 🌳🪻 Born well below 350ppm 😬 mostly silly personal views
📍Paris 🔗 https://davidpicard.github.io/
CVF membership is free!
November 20, 2025 at 9:27 PM
To be honest, it's kinda close. Less than 3% on each channel...
November 20, 2025 at 4:21 PM
The availability of the chips (which you have to buy fast and all at once because they quickly become obsolete), the power line, heat dissipation, etc.
Frankly, it looks like a marketing plan to attract investment. Much more risk of a financial sinkhole than of an ecological disaster.
November 19, 2025 at 8:18 PM
I'm always a bit skeptical when I see "1 GW" written down. As an order of magnitude, that's a million GPUs at 1 kW each. I'm happy to be wrong by a factor of 3 and say it's only 300k GPUs. Are they even available?
For comparison: epoch.ai/data/gpu-clu...
Data on GPU clusters
Our database of over 500 GPU clusters and supercomputers tracks large hardware facilities, including those used for AI training and inference.
epoch.ai
November 19, 2025 at 8:13 PM
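The order-of-magnitude claim above fits in a couple of lines; the ~1 kW per GPU figure is the post's own assumption, not a measured number:

```python
# Back-of-the-envelope check of the thread's estimate:
# a 1 GW facility at ~1 kW per GPU (assumed, including overhead)
# implies roughly a million GPUs.
site_power_w = 1e9       # 1 GW
power_per_gpu_w = 1e3    # ~1 kW per GPU (assumption from the post)

gpu_count = site_power_w / power_per_gpu_w
print(f"{gpu_count:,.0f} GPUs")      # 1,000,000 GPUs

# Even granting a factor-of-3 error, that is still ~300k GPUs.
print(f"{gpu_count / 3:,.0f} GPUs")  # 333,333 GPUs
```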
More like he is spreading denialism and antisemitism right now: bsky.app/profile/maza...

Please translate it using any visual translation tools like Google lens, and let it be known that there is currently a legal action being taken in France.
Aren't we doing just fine here? :)
November 19, 2025 at 7:54 PM
Yeah, sorry, I realized I was being confusing: "electric field screening" has a single-word translation in French ("écrantage", which literally translates to "screening"), and for us there is no ambiguity when using that word, whereas "screening" is a fairly common word in English.
November 19, 2025 at 3:42 PM
I don't have time to waste on that. It adds 0 bits of information.
Tell me what you can do, what you want to do and why you think we're a good match.
November 19, 2025 at 12:44 PM
You mean I'm lucky?

bsky.app/profile/davi...
Pro-tip: if you send me an email asking for an internship, a PhD or a postdoc position, don't copy/paste an over-hyped summary of one of my papers you just asked ChatGPT to spit out.

Pro-tip#2: don't do that with other professors either. It's not just me.
November 19, 2025 at 8:25 AM
And do the presentation with manim?
November 19, 2025 at 6:30 AM
We're in the same boat. Maybe we can convince each other not to do it. But the lack of video support is possibly a deal breaker anyway.
November 19, 2025 at 12:11 AM
Nice!
November 18, 2025 at 11:39 PM
Because the hidden dim needs to be >> data dim. In pixel space with 16x16 patches, data dim = 16×16×3 = 768, so you'd need a transformer with a hidden dim of at least 2k.
What they do is essentially just learning a PCA so that data dim = 128, and thus they can get away with a hidden dim of only 1k.
VAEs have a much lower data dim, typically 16.
November 18, 2025 at 7:01 PM
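For concreteness, the per-token dimensions discussed above (the 128 and 16 values are the figures quoted in the thread, not measured here):

```python
# Per-token data dims for the three settings discussed above.
patch_dim = 16 * 16 * 3   # raw 16x16 RGB pixel patch -> 768
pca_dim = 128             # after the learned linear (PCA-like) projection
vae_dim = 16              # typical VAE latent channels per token

# Rule of thumb from the post: transformer hidden dim should be a few
# times the data dim, so ~2k for raw pixels vs ~1k after the projection.
print(patch_dim, pca_dim, vae_dim)
```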
Did the Occitans dig too deep?
November 18, 2025 at 6:41 PM
I don't think it's clear enough: looking at the diffusion analytical solution, every training point is a strong attractor and masks what's behind it (hence the "screening" analogy). In contrast, in FM you can easily pass through training points because of the normalization factors.
November 18, 2025 at 4:06 PM
In theory: I suspect it's related to the analytical solution. In FM, you have a (1-t) normalizing term that still pushes the current point along if by luck you land at epsilon from a training point for t<1. Whereas in diffusion, training points perform "screening", if you allow the EM analogy.
November 18, 2025 at 3:56 PM
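A 1-D sketch of that point, under the usual rectified-flow setup x_t = (1-t)·x0 + t·x1 with x0 ~ N(0,1) (a toy example of mine, not from the thread): the analytical marginal velocity carries a 1/(1-t) factor, so even when x_t lands exactly on t·x1_i for t < 1, the field is nonzero and keeps pushing the sample toward the data instead of stalling on the training point.

```python
import numpy as np

# Toy 1-D training set (hypothetical values for illustration)
data = np.array([-1.0, 1.0])

def fm_velocity(x, t):
    """Analytical marginal FM velocity for x_t = (1-t)*x0 + t*x1, x0 ~ N(0,1).

    Posterior-weighted average of conditional velocities (x1_i - x)/(1-t);
    the 1/(1-t) factor keeps the field from vanishing near training points.
    """
    var = (1.0 - t) ** 2                       # Var[x_t | x1] along the path
    logw = -((x - t * data) ** 2) / (2 * var)  # log posterior weights over x1_i
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return np.sum(w * (data - x)) / (1.0 - t)

# Landing exactly on t*x1 of the first point at t = 0.9: the velocity is
# still ~ -1.0, carrying the sample on toward x1 = -1 rather than stopping.
print(fm_velocity(-0.9, 0.9))
```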
In practice: FM seems able to cover the entire support of the target distribution even if the two are not aligned, whereas diffusion struggles with that. I suspect that's a reason why the statistics of the VAE are so carefully tuned with a magic constant (centered, variance = 0.5 IIRC).
November 18, 2025 at 3:56 PM
How much in a hurry are you? I may want to have something to relax my mind during future holidays 😅
November 18, 2025 at 3:36 PM
Not really with flow matching. It's sufficiently robust to the statistics of the noise distribution, whereas diffusion isn't.
November 18, 2025 at 3:33 PM