Lupin-Jimenez et al. (2025)
"Simultaneous Emulation and Downscaling..."
doi.org/10.1029/2025JH000851
💾 Code & data:
zenodo.org/record/14607130
We’d love to hear from collaborators in ocean ML, emulation, and climate AI 🌊🤝
✅ Physically grounded
✅ 10x–1000x faster than ROMS
✅ Enables regional “digital twins”
✅ Sets up for coupled ocean–atmosphere emulation
✅ Works across different reanalysis sources
AI meets ocean science.
Beats interpolation
Matches or outperforms ROMS in short-term accuracy
Stays stable & realistic over 10 years
Captures mean state and eddy variability
Preserves spectral energy across scales
No exploding gradients here 💥
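For the curious: "preserves spectral energy across scales" is typically checked with a radially averaged (isotropic) power spectrum. A minimal NumPy sketch of such a diagnostic; the field, grid size, and binning here are illustrative assumptions, not the paper's exact metric:

```python
import numpy as np

def isotropic_spectrum(field):
    """Radially averaged power spectrum of a 2-D field: bin |FFT|^2
    by integer wavenumber magnitude to compare energy across scales."""
    h, w = field.shape
    power = np.abs(np.fft.fft2(field))**2
    ky = np.fft.fftfreq(h) * h
    kx = np.fft.fftfreq(w) * w
    k = np.sqrt(ky[:, None]**2 + kx[None, :]**2)   # wavenumber magnitude
    k_bins = np.arange(0.5, min(h, w) // 2, 1.0)
    which = np.digitize(k.ravel(), k_bins)
    spec = np.bincount(which, weights=power.ravel(),
                       minlength=len(k_bins) + 1)
    return spec[1:len(k_bins)]  # drop the mean (bin 0) and partial outer bin

rng = np.random.default_rng(1)
truth = rng.standard_normal((64, 64))
spec = isotropic_spectrum(truth)
print(spec.shape)  # (31,)
```

Comparing emulated vs. reference spectra per wavenumber (e.g. via their log-ratio) then shows whether fine scales lose energy over long rollouts.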
We don't just super-resolve existing data.
We downscale from an emulator that predicts ocean dynamics.
Plus: our downscaler learns to correct both model bias and physical mismatch (GLORYS → CNAPS). That’s new.
An FNO emulator predicts SSH, SSU, SSV, SSKE daily at 8 km
A UNet + PatchGAN-VAE downscales to 4 km & corrects bias
Spectral loss + online fine-tuning ensure physical consistency
Together: speed, structure, and stability.
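Not the paper's architecture, but the core FNO building block can be sketched in a few lines of NumPy: a spectral convolution that keeps only the lowest Fourier modes of a field. The mode count and weights below are made-up placeholders; a real FNO learns complex weights per channel and also retains negative wavenumbers.

```python
import numpy as np

def spectral_conv2d(field, weights, n_modes=12):
    """Core FNO idea: multiply the lowest Fourier modes of a 2-D
    field by (normally learned) complex weights, zeroing the rest.
    field:   (H, W) real array, e.g. an 8 km SSH snapshot
    weights: (n_modes, n_modes) complex array."""
    f_hat = np.fft.rfft2(field)                  # (H, W//2 + 1) complex
    out_hat = np.zeros_like(f_hat)
    out_hat[:n_modes, :n_modes] = f_hat[:n_modes, :n_modes] * weights
    return np.fft.irfft2(out_hat, s=field.shape)

# Toy usage: identity weights pass the retained low modes through.
rng = np.random.default_rng(0)
ssh = rng.standard_normal((64, 64))
out = spectral_conv2d(ssh, np.ones((12, 12), dtype=complex))
print(out.shape)  # (64, 64)
```

Because the convolution lives in Fourier space, the same learned weights apply at any grid resolution, which is one reason FNOs suit emulate-then-downscale pipelines.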
Regional ocean models like the Gulf of Mexico are hard—complex coastlines, eddies, Loop Current, chaotic boundary forcing.
Physics models = accurate but slow.
ML = fast, but unstable after a few weeks. We wanted the best of both.
If you're working on GenAI for Earth systems, let’s connect — curious to hear your thoughts!
#GenAI #ClimateAI #OceanML #FNO #DDPM #DataAssimilation
⚡️ One-shot
🌀 Physics-consistent
🌐 Scalable
It captures high-wavenumber, fine-scale structures other ML baselines miss. Spectral diagnostics & vorticity metrics confirm this. (4/5)
• FNO (Fourier Neural Operator)
• DDPM (Denoising Diffusion Probabilistic Model)
✅ Reconstructs high-resolution states from as little as 0.1%–1% of the data
✅ Works on synthetic turbulence, GLORYS reanalysis & real satellite altimetry
✅ No forward solver required (3/5)
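A rough NumPy sketch of the conditioning idea, not the paper's model: a standard DDPM reverse loop in which the denoiser is conditioned on a neural-operator estimate built from ~1% observed pixels. The denoiser here is a dummy stand-in for the trained network, and the noise schedule is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed setup: `fno_estimate` stands in for a neural-operator
# reconstruction from ~1% observed pixels.
H, W = 32, 32
truth = rng.standard_normal((H, W))
mask = rng.random((H, W)) < 0.01          # ~1% observed locations
obs = np.where(mask, truth, 0.0)
fno_estimate = obs                         # placeholder for the FNO output

T = 50
betas = np.linspace(1e-4, 0.02, T)         # assumed noise schedule
alphas_bar = np.cumprod(1.0 - betas)

def dummy_denoiser(x_t, cond, t):
    """Stand-in for the trained noise predictor eps_theta(x_t, cond, t);
    the real model takes the operator output `cond` as conditioning."""
    return x_t - np.sqrt(alphas_bar[t]) * cond

# Standard DDPM reverse update, conditioned on the operator estimate.
x = rng.standard_normal((H, W))            # x_T ~ N(0, I)
for t in range(T - 1, -1, -1):
    eps = dummy_denoiser(x, fno_estimate, t)
    x = (x - betas[t] / np.sqrt(1.0 - alphas_bar[t]) * eps) \
        / np.sqrt(1.0 - betas[t])
    if t > 0:                              # add noise except at the last step
        x = x + np.sqrt(betas[t]) * rng.standard_normal((H, W))
print(x.shape)  # (32, 32)
```

The point of the sketch: no forward solver appears anywhere; sampling alone maps sparse observations (via the operator estimate) to a full high-resolution state.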
This makes reconstructing fine-scale ocean dynamics like eddies and fronts very hard — especially for forecasting.
We tackle this using a diffusion model conditioned on a neural operator. (2/5)