Guillaume Astruc
@gastruc.bsky.social
2nd-year PhD student at Imagine-ENPC/IGN/CNES

Working on Self-supervised Cross-modal Geospatial Learning.

Personal webpage: https://gastruc.github.io/
We've added new experiments demonstrating robust generalization! Notably, AnySat shows strong performance on HLS Burn Scars, a benchmark built on a sensor never seen during pretraining! 🔥🛰️
Check it out:
📄 Paper: arxiv.org/abs/2412.14123
🌐 Project: gastruc.github.io/anysat
April 30, 2025 at 2:00 PM
🔗 Check it out:
📜 Paper: arxiv.org/abs/2412.14123
🌐 Project: gastruc.github.io/anysat
🤗 HuggingFace: huggingface.co/g-astruc/Any...
🐙 GitHub: github.com/gastruc/AnySat
December 19, 2024 at 10:46 AM
🚀 Even better: AnySat supports linear probing for semantic segmentation!
That means you train just a few thousand parameters and still reach SOTA results on challenging tasks, all with minimal effort.
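To make that concrete, here is a minimal linear-probing sketch in PyTorch. It is not the AnySat API: the dummy backbone, feature dimension, and random batch are all placeholders; the point is the pattern of a frozen encoder plus a trained 1×1-conv head.

```python
import torch
import torch.nn as nn

# Placeholders, not the AnySat API: a frozen dummy "patch encoder" and a
# random batch stand in for the pretrained backbone and a real dataset.
D, NUM_CLASSES = 768, 10
backbone = nn.Conv2d(3, D, kernel_size=16, stride=16).eval()
for p in backbone.parameters():
    p.requires_grad_(False)  # the pretrained encoder stays frozen

head = nn.Conv2d(D, NUM_CLASSES, kernel_size=1)  # the only trained parameters
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 256, 256)                  # dummy inputs
masks = torch.randint(0, NUM_CLASSES, (2, 256, 256))  # dummy label masks

with torch.no_grad():
    feats = backbone(images)            # (2, 768, 16, 16) patch features
logits = head(feats)                    # per-patch class scores
logits = nn.functional.interpolate(     # upsample to label resolution
    logits, size=masks.shape[-2:], mode="bilinear", align_corners=False)
loss = criterion(logits, masks)
optimizer.zero_grad()
loss.backward()                         # gradients reach only the head
optimizer.step()
```

Since only the head's few thousand parameters receive gradients, each step is cheap: the frozen features do the heavy lifting.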
December 19, 2024 at 10:46 AM
AnySat achieves SOTA performance on 6 tasks across 10 datasets:
🌱 Land cover mapping
🌾 Crop type segmentation
🌳 Tree species classification
🌊 Flood detection
🌍 Change detection
December 19, 2024 at 10:46 AM
We trained AnySat on 5 multimodal datasets simultaneously:
📡 11 distinct sensors
📏 Resolutions: 0.2m–500m
🔁 Revisit: single date to weekly
🏞️ Scales: 0.3–150 hectares

The pretrained model can adapt to truly diverse data, and probably yours too!
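For a sense of what that diversity looks like at the input, batches are dictionaries of per-modality tensors. A hedged sketch: the modality keys, channel counts, and `_dates` convention below follow my reading of the GitHub README and should be verified against the repo.

```python
import torch

# Hypothetical 60 m x 60 m tile; keys and shapes are assumptions based on
# the AnySat README, not guaranteed to match the current repo.
batch = {
    "aerial": torch.randn(1, 4, 300, 300),       # 0.2 m aerial imagery, 4 bands
    "s2": torch.randn(1, 12, 10, 6, 6),          # Sentinel-2: 12 dates, 10 bands, 10 m
    "s2_dates": torch.randint(0, 365, (1, 12)),  # day of year of each acquisition
}
```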
December 19, 2024 at 10:46 AM
🔍 Thanks to our modified JEPA training scheme and scale-adaptive spatial encoders, AnySat trains on datasets with diverse scales, resolutions, and modalities!
🧠 75% of its parameters are shared across all inputs, enabling unmatched flexibility.
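For intuition, here is a toy JEPA-style training step in PyTorch. This shows the generic joint-embedding predictive idea, not the authors' modified scheme: an online encoder sees a masked view, an EMA target encoder embeds the full view, and a predictor regresses the latent targets of the hidden tokens. All sizes and the zero-masking are illustrative.

```python
import copy
import torch
import torch.nn as nn

# Toy JEPA step on random "patch tokens"; every dimension is illustrative.
D, N, B = 64, 16, 8  # embedding dim, tokens per sample, batch size
layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
target_encoder = copy.deepcopy(encoder)  # EMA copy, never backpropagated
predictor = nn.Sequential(nn.Linear(D, D), nn.GELU(), nn.Linear(D, D))
optim = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4)

tokens = torch.randn(B, N, D)   # stand-in for embedded sensor patches
mask = torch.rand(B, N) < 0.5   # tokens the online encoder must not see

with torch.no_grad():
    targets = target_encoder(tokens)  # latent targets from the full view

context = tokens.masked_fill(mask.unsqueeze(-1), 0.0)  # crude zero-masking
pred = predictor(encoder(context))           # predict latents of hidden tokens
loss = (pred - targets)[mask].pow(2).mean()  # regress masked tokens only

optim.zero_grad()
loss.backward()
optim.step()
with torch.no_grad():  # EMA update keeps the target encoder slow-moving
    for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
        tp.mul_(0.99).add_(p, alpha=0.01)
```

Predicting in latent space rather than pixel space is what lets a single objective span modalities with very different resolutions.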
December 19, 2024 at 10:46 AM
🤔 What if embedding multimodal EO data was as easy as using a ResNet on images?
Introducing AnySat: one model for any resolution (0.2m–250m), scale (0.3–2600 hectares), and modalities (choose from 11 sensors & time series)!
Try it with just a few lines of code:
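A hedged sketch of that snippet, based on the repository's advertised torch.hub entry point; the exact `patch_size` units and `output` modes are my assumptions and worth double-checking in the README:

```python
import torch

# Assumption: the hub entry point follows the AnySat GitHub README;
# argument names and modality keys may differ in the current repo.
model = torch.hub.load("gastruc/anysat", "anysat", pretrained=True)
model.eval()

batch = {
    "s2": torch.randn(1, 12, 10, 6, 6),          # hypothetical Sentinel-2 series
    "s2_dates": torch.randint(0, 365, (1, 12)),  # day of year per acquisition
}
with torch.no_grad():
    features = model(batch, patch_size=10, output="patch")  # patch embeddings
```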
December 19, 2024 at 10:46 AM