David Picard
@davidpicard.bsky.social
Professor of Computer Vision/Machine Learning at Imagine/LIGM, École nationale des Ponts et Chaussées @ecoledesponts.bsky.social Music & overall happiness 🌳🪻 Born well below 350ppm 😬 mostly silly personal views
📍Paris 🔗 https://davidpicard.github.io/
To be honest, it's kinda close. Less than 3% on each channel...
November 20, 2025 at 4:21 PM
Found this in my notebook. Never been more appropriate. 🥰
November 18, 2025 at 5:59 PM
In practice: FM seems able to cover the entire support of the target distribution even when the source and target are not aligned, whereas diffusion struggles with that. I suspect that's one reason why the statistics of the VAE are so carefully tuned with a magic constant (centered, variance = 0.5 IIRC).
November 18, 2025 at 3:56 PM
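A toy sketch of the contrast in the post above, assuming a linear flow-matching path and a DDPM-style noising process (function names and the schedule are illustrative, not from any specific paper):

```python
import numpy as np

def fm_pair(x0, x1, t):
    """Flow matching: sample on the straight path x_t = (1-t)*x0 + t*x1.

    The regression target is the constant velocity x1 - x0, defined for
    any pair (x0, x1), so training pairs can cover the whole support of
    the target distribution even when source and target are misaligned."""
    xt = (1.0 - t) * x0 + t * x1
    return xt, x1 - x0

def ddpm_pair(x1, eps, alpha_bar):
    """DDPM-style diffusion: corrupt the data sample x1 with Gaussian eps.

    x_t = sqrt(alpha_bar)*x1 + sqrt(1-alpha_bar)*eps, and the model
    regresses eps; every path starts from noise centered on zero."""
    xt = np.sqrt(alpha_bar) * x1 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps
```

At t=0 the FM sample is exactly x0 and at t=1 exactly x1, while the DDPM sample interpolates between noise and data according to alpha_bar.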
10 years already 😭
November 18, 2025 at 9:59 AM
I happen to have this novel by Maupassant on my phone (don't ask why). In French, of course.
November 18, 2025 at 9:58 AM
Perks of having meetings in Paris: if you plan ahead, there's always something interesting to see. For example a stroll in the Montparnasse cemetery, among some of the greatest writers.
November 18, 2025 at 9:58 AM
Not everything is retrieval, although I know I'm biased towards k-NN. But in the special case of ICL, it's mostly about picking the nearest example from the context rather than "emergent properties".
See for example: arxiv.org/abs/2404.15736
November 16, 2025 at 4:29 PM
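A minimal sketch of the retrieval view of ICL from the post above: the "prediction" is just the label of the nearest in-context example. The function name and data are mine, not from the linked paper:

```python
import numpy as np

def icl_as_1nn(context_x, context_y, query):
    # Retrieval view of in-context learning: return the label of the
    # nearest example in the context, i.e. plain 1-NN over the prompt.
    dists = np.linalg.norm(context_x - query, axis=1)
    return context_y[int(np.argmin(dists))]
```

Under this view, "learning" from the context reduces to a similarity search over the examples already in the prompt.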
Is it really surprising given your choice of kernel? It has really long interactions, so I assume that with sufficient step size and number of iterations, the local minima are unattainable due to numerical precision. It's a different story with a Gaussian kernel w/o tuning bandwidth, lr, #steps.
November 15, 2025 at 11:21 AM
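To illustrate the Gaussian-kernel point above: with a narrow bandwidth, the gradient of a kernel-density objective is numerically zero away from the data, so bandwidth, learning rate, and number of steps have to be tuned together for gradient steps to go anywhere. A 1-D sketch (the bandwidth value is illustrative):

```python
import numpy as np

def kde_grad(x, data, bandwidth):
    # Gradient of an (unnormalized) Gaussian KDE at point x.
    # With a small bandwidth the kernel weights decay fast, so the
    # gradient is essentially zero far from the data: descent stalls
    # there unless bandwidth / lr / #steps are tuned jointly.
    diff = data - x
    w = np.exp(-0.5 * (diff / bandwidth) ** 2)
    return float(np.sum(w * diff) / (bandwidth ** 2 * len(data)))
```

Near the data the gradient pulls toward the cluster; far away it underflows to zero, which is the numerical-precision issue the post mentions.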
The arrogance
November 14, 2025 at 7:04 PM
💩
November 14, 2025 at 6:04 PM
For people using collabora online, do you know how to fix these ugly fonts in the menu?
It doesn't hinder functionality, but it somehow annoys me so much that I've noticed I'm slower because of that.
November 14, 2025 at 3:37 PM
www.lemonde.fr/campus/artic...

At first, I honestly thought it was a parody of a sketch by Les Inconnus. But it raises real questions about how we get out of a dynamic where higher education keeps becoming more favorable to the same populations, and not only because of the price.
November 14, 2025 at 2:54 PM
I saw this in many labs 20 years ago. The wall of irrelevant maths that was required to avoid getting annoyed by undergrad reviewers has now been replaced with a wall of useless compute. The sad thing is that, contrary to the first one, the second one requires a lot of wasted money.
November 14, 2025 at 12:59 PM
They're asking GPT-5 to make the graph.
Never forget!
November 13, 2025 at 12:49 PM
November 12, 2025 at 6:52 AM
You can't deny the beauty of a forest in autumn 🍂🍁
November 9, 2025 at 9:01 AM
Archaeology of black metal logos?
November 3, 2025 at 9:20 AM
Light on the Seine valley.
I'm at a workshop on AI for cultural heritage organized in the beautiful château de Saint-Germain-en-Laye.
(Organized by the GDR IASIS and PEPR ICARE national programs)
November 3, 2025 at 8:30 AM
Case in point: an email received this morning from a colleague I greatly appreciate and find brilliant, in a great lab (but not a UMR).

I've spent the last two years visiting quite a few CS labs: I think universities and engineering schools have no chance at the ERC; only the EPST do.
October 31, 2025 at 9:30 AM
👀 arxiv.org/abs/2510.25897

Thread with all details coming soon!
October 31, 2025 at 8:55 AM
To be clearer: equivalence does not mean all formulations are equally easy to solve. Of course, you never get the optimal solution, just an approximation of it, and depending on the formulation, the resulting approximation can have different properties: fast/slow to converge, stable/unstable, sharp/blurry.
October 28, 2025 at 9:51 AM
This is probably one of the most important parts to remember. All diffusion models are basically doing the same thing up to the parametrization of which part you want to approximate with a neural network, which means different formulations put emphasis on different aspects of the task.
October 28, 2025 at 9:46 AM
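A small sketch of that equivalence, assuming the standard forward process x_t = sqrt(ᾱ)·x0 + sqrt(1−ᾱ)·ε: predicting x0 or predicting ε are the same model up to a deterministic reparametrization (function names are mine):

```python
import numpy as np

def x0_from_eps(xt, eps_hat, alpha_bar):
    # Invert x_t = sqrt(ab)*x0 + sqrt(1-ab)*eps for x0,
    # given a noise prediction eps_hat.
    return (xt - np.sqrt(1.0 - alpha_bar) * eps_hat) / np.sqrt(alpha_bar)

def eps_from_x0(xt, x0_hat, alpha_bar):
    # Invert the same relation for eps, given a data prediction x0_hat.
    return (xt - np.sqrt(alpha_bar) * x0_hat) / np.sqrt(1.0 - alpha_bar)
```

Since each prediction determines the other exactly, the formulations differ only in which quantity the network regresses, and hence in where the approximation error lands.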
I mean, if this does not remind you of Zothique, maybe you should read some of his books
October 24, 2025 at 8:15 AM
Half of the images on this article are straight out of a Clark Ashton Smith short story
October 24, 2025 at 8:12 AM
No ICCV FOMO for me as I was able to see Paradise Lost tonight! 😎🤘
October 20, 2025 at 8:56 PM