Anton Obukhov
obukhov.ai
Research Scientist in Computer Vision and Generative AI
New modalities include surface normals and intrinsic image decompositions: albedo, material properties (roughness, metallicity), and lighting. Marigold proves to be an efficient fine-tuning protocol that generalizes across image analysis tasks.
May 15, 2025 at 4:23 PM
Big Marigold update!
Last year, we showed how to turn Stable Diffusion 2 into a SOTA depth estimator with a few synthetic samples and 2–3 days on just 1 GPU.
Today's release features:
🏎️ 1-step inference
🔢 New modalities
🫣 High resolution
🧨 Diffusers support
🕹️ New demos
🧶👇
May 15, 2025 at 4:23 PM
RollingDepth rolls into Nashville for #CVPR2025! 🎸
February 28, 2025 at 10:26 AM
MDEC Challenge update! The 4th Monocular Depth Estimation Workshop at #CVPR2025 will be accepting submissions in two phases:
🚀 Dev phase: Feb 1 – Mar 1
🎯 Final phase: Mar 1 – Mar 21
Website: jspenmar.github.io/MDEC/
🌐 CodaLab: codalab.lisn.upsaclay.fr/competitions...

Bring your best depth!
February 4, 2025 at 3:57 PM
Update about the 4th Monocular Depth Estimation Workshop at #CVPR2025:
🎉 Website is LIVE: jspenmar.github.io/MDEC/
🎉 Keynotes: Peter Wonka, Yiyi Liao, and Konrad Schindler
🎉 Challenge updates: new prediction types, baselines & metrics
January 31, 2025 at 7:23 PM
The 4th Monocular Depth Estimation Challenge (MDEC) is coming to #CVPR2025, and I’m excited to join the org team! After 2024’s breakthroughs in monodepth, driven by advances in generative models (transformers and diffusion), this year's focus is on OOD generalization and evaluation.
December 21, 2024 at 3:52 PM
Introducing ⇆ Marigold-DC — our training-free zero-shot approach to monocular Depth Completion with guided diffusion! If you have ever wondered how else a long denoising diffusion schedule can be useful, we have an answer for you! Details 🧵
December 19, 2024 at 1:52 AM
Introducing 🛹 RollingDepth 🛹 — a universal monocular depth estimator for arbitrarily long videos! Our paper, “Video Depth without Video Models,” delivers exactly that, setting new standards in temporal consistency. Check out more details in the thread 🧵
December 2, 2024 at 7:59 AM