Working on Self-supervised Cross-modal Geospatial Learning.
Personal webpage: https://gastruc.github.io/
Introducing AnySat: one model for any resolution (0.2m–250m), scale (0.3–2600 hectares), and modalities (choose from 11 sensors & time series)!
Try it with just a few lines of code:
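The snippet itself isn't reproduced in this post, so here is a minimal sketch of what loading AnySat could look like, assuming the gastruc/anysat repository exposes an `anysat` entry point through torch.hub; the input keys, tensor shapes, and forward arguments below are illustrative assumptions rather than a verified API.

```python
import torch

# Assumption: gastruc/anysat ships a hubconf.py with an 'anysat' entry point.
model = torch.hub.load("gastruc/anysat", "anysat", pretrained=True)
model.eval()

# Illustrative input: one 60 m x 60 m tile seen by two modalities.
# Key names, band counts, and the '*_dates' convention are assumptions.
data = {
    "aerial": torch.randn(1, 4, 300, 300),       # 0.2 m aerial image (RGB-NIR)
    "s2": torch.randn(1, 12, 10, 6, 6),          # Sentinel-2 series: 12 dates x 10 bands x 6x6 px
    "s2_dates": torch.randint(0, 365, (1, 12)),  # day of year of each Sentinel-2 acquisition
}

with torch.no_grad():
    # 'patch_size' and 'output' are illustrative arguments: the idea is to get one
    # embedding per sub-patch of the tile, ready to plug into a downstream head.
    features = model(data, patch_size=10, output="patch")

print(features.shape)  # e.g. (batch, num_patches, embedding_dim)
```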
- 19x faster convergence ⚡
- 370x fewer FLOPs than FLUX-dev 📉
It outperforms CLIP-like models (SigLIP 2, fine-tuned StreetCLIP)… and that's shocking 🤯
Why? CLIP models have an innate advantage: they literally learn place names paired with images. DINOv3 doesn't.
If you're here and want to discuss geolocation or geospatial foundation models, let's connect!
A new large-scale, multimodal dataset for land cover and crop type mapping
🤗 Dataset: huggingface.co/datasets/IGN...
📄 Preprint: arxiv.org/abs/2506.07080
🤗 Pretrained models: huggingface.co/collections/...
💻 Code: github.com/IGNF/FLAIR-HUB
Honored and grateful that this paper received the best student paper award!
“When majority rules, minority loses: bias amplification of gradient descent”
We often blame biased data, but training itself also amplifies biases. Our paper explores how ML algorithms favor stereotypes at the expense of minority groups.
➡️ arxiv.org/abs/2505.13122
(1/3)
Check it out:
📄 Paper: arxiv.org/abs/2412.14123
🌐 Project: gastruc.github.io/anysat
📄 Paper: arxiv.org/abs/2503.15683
🍵 MAtCha reconstructs sharp, accurate, and scalable meshes of both foreground AND background from just a few unposed images (e.g., 3 to 10 images)...
...while also working with dense-view datasets (hundreds of images)!
Registration is open (it's free) with priority given to authors of accepted papers: cvprinparis.github.io/CVPR2025InPa...
Big 🧵👇 with details!
We leverage our coherence-aware training to improve textual understanding.
It has a package and pretrained models!
🖥️ nicolas-dufour.github.io/cad.html
🤖 github.com/nicolas-dufo...
AnySat: An Earth Observation Model for Any Resolutions, Scales, and Modalities
https://arxiv.org/abs/2412.14123
☑️ With MAtCha, we leverage a pretrained depth model to recover sharp meshes from sparse views, including both foreground and background, within minutes!🧵
🌐 Webpage: anttwo.github.io/matcha/
🗺️ Paper, code, and demo: nicolas-dufour.github.io/plonk