Christoph Reich
@christophreich.bsky.social
120 followers 260 following 14 posts
@ellis.eu Ph.D. Student @CVG (@dcremers.bsky.social), @visinf.bsky.social & @oxford-vgg.bsky.social | Ph.D. Scholar @zuseschooleliza.bsky.social | M.Sc. & B.Sc. @tuda.bsky.social | Prev. @neclabsamerica.bsky.social https://christophreich1996.github.io
Reposted by Christoph Reich
[6/8] Motion-Refined DINOSAUR for Unsupervised Multi-Object Discovery (Oral at ILR+G Workshop)

by Xinrui Gong*, @olvrhhn.bsky.social*, @christophreich.bsky.social, Krishnakant Singh, @simoneschaub.bsky.social, @dcremers.bsky.social, and @stefanroth.bsky.social
Reposted by Christoph Reich
Interested in 3D DINO features from a single image or unsupervised scene understanding? 🦖
Come by our SceneDINO poster at NeuSLAM today 14:15 (Kamehameha II) or Tue, 15:15 (Ex. Hall I 627)!
W/ Jevtić @fwimbauer.bsky.social @olvrhhn.bsky.social Rupprecht, @stefanroth.bsky.social @dcremers.bsky.social
Reposted by Christoph Reich
Some impressions from our VISINF summer retreat at Lizumer Hütte in the Tyrolean Alps — including a hike up the Geier and new research ideas at 2,857 m! 🇦🇹🏔️
Check out our blog post about SceneDINO 🦖
For more details, check out our project page, 🤗 demo, and the #ICCV2025 paper 🚀

🌍Project page: visinf.github.io/scenedino/
🤗Demo: visinf.github.io/scenedino/
📄Paper: arxiv.org/abs/2507.06230
@jev-aleks.bsky.social
Reposted by Christoph Reich
𝗙𝗲𝗲𝗱-𝗙𝗼𝗿𝘄𝗮𝗿𝗱 𝗦𝗰𝗲𝗻𝗲𝗗𝗜𝗡𝗢 𝗳𝗼𝗿 𝗨𝗻𝘀𝘂𝗽𝗲𝗿𝘃𝗶𝘀𝗲𝗱 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗦𝗰𝗲𝗻𝗲 𝗖𝗼𝗺𝗽𝗹𝗲𝘁𝗶𝗼𝗻
Aleksandar Jevtić, Christoph Reich, Felix Wimbauer ... Daniel Cremers
arxiv.org/abs/2507.06230
Trending on www.scholar-inbox.com
Reposted by Christoph Reich
The code for our #CVPR2025 paper, PRaDA: Projective Radial Distortion Averaging, is now out!

Turns out distortion calibration from multiview 2D correspondences can be fully decoupled from 3D reconstruction, greatly simplifying the problem (a sketch of a generic radial distortion model follows below).

arxiv.org/abs/2504.16499
github.com/DaniilSinits...
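For context, the radial distortion that such calibration recovers is typically modeled as a polynomial scaling of normalized image points. Below is a minimal, hedged sketch of that generic two-parameter model; the function name and coefficients are illustrative assumptions, not PRaDA's actual parametrization.

```python
# Minimal sketch of a generic two-parameter polynomial radial distortion
# model (Brown-Conrady style). Illustrative only; PRaDA's actual
# parametrization may differ.
import numpy as np

def apply_radial_distortion(pts: np.ndarray, k1: float, k2: float) -> np.ndarray:
    """Distort normalized image points of shape (N, 2)."""
    r2 = np.sum(pts ** 2, axis=1, keepdims=True)   # squared radial distance
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)    # radial scaling factor

pts = np.array([[0.1, 0.2], [0.3, -0.4]])
print(apply_radial_distortion(pts, k1=-0.2, k2=0.05))
```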
✅ SceneDINO offers refined, high-resolution, and multi-view consistent (rendered) 2D features.
✅ SceneDINO outperforms our unsupervised baseline (S4C + STEGO) in unsupervised SSC accuracy.
✅ Linear probing our feature field leads to an SSC accuracy on par with the 2D-supervised S4C.
⚗️ Distilling and clustering SceneDINO's feature field in 3D results in unsupervised semantic scene completion predictions.
🏋️ SceneDINO is trained to estimate an expressive 3D feature field using multi-view self-supervision and 2D DINO features.
🚀 SceneDINO is unsupervised and infers 3D geometry and features from a single image in a feed-forward manner. Distilling and clustering SceneDINO's 3D feature field leads to unsupervised semantic scene completion predictions (a minimal clustering sketch follows below).
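To make the "distilling and clustering" step concrete, here is a minimal, hypothetical sketch that assigns pseudo-semantic classes by running k-means over per-voxel features. The field shape, cluster count, and helper name are assumptions for illustration; this is not the official SceneDINO code.

```python
# Hypothetical sketch: unsupervised semantic labels from a dense 3D feature
# field via k-means. Shapes and cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def cluster_feature_field(field: np.ndarray, k: int = 10) -> np.ndarray:
    """Cluster a (X, Y, Z, C) feature field into k pseudo-semantic classes."""
    x, y, z, c = field.shape
    flat = field.reshape(-1, c)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)  # cosine-style
    labels = KMeans(n_clusters=k, n_init="auto", random_state=0).fit_predict(flat)
    return labels.reshape(x, y, z)

# Toy usage with random features standing in for SceneDINO's feature field.
print(cluster_feature_field(np.random.randn(16, 16, 8, 64)).shape)
```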
Reposted by Christoph Reich
Aleksandar Jevtić, Christoph Reich, Felix Wimbauer, Oliver Hahn, Christian Rupprecht, Stefan Roth, Daniel Cremers
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion
https://arxiv.org/abs/2507.06230
Reposted by Christoph Reich
Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work!
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
Reposted by Christoph Reich
We had a great time at #CVPR2025 in Nashville!
Reposted by Christoph Reich
Scene-Centric Unsupervised Panoptic Segmentation

by @olvrhhn.bsky.social, @christophreich.bsky.social, @neekans.bsky.social, @dcremers.bsky.social, Christian Rupprecht, and @stefanroth.bsky.social

Sunday, 8:30 AM, ExHall D, Poster 330
Project Page: visinf.github.io/cups
Reposted by Christoph Reich
Can we match vision and language representations without any supervision or paired data?

Surprisingly, yes! 

Our #CVPR2025 paper with @neekans.bsky.social and @dcremers.bsky.social shows that the pairwise distances in both modalities are often enough to find correspondences (a toy sketch follows after this post).

⬇️ 1/4
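As a toy illustration of why intra-modal pairwise similarities alone can suffice for matching, here is a hedged sketch using SciPy's quadratic-assignment solver on synthetic embeddings that share a common latent structure. This is a generic graph-matching baseline, not necessarily the paper's actual algorithm.

```python
# Toy sketch: match two embedding sets using only their *intra-modal*
# pairwise similarities, via a quadratic-assignment (graph matching) solver.
# Synthetic data; not the paper's actual method.
import numpy as np
from scipy.optimize import quadratic_assignment

rng = np.random.default_rng(0)
n = 30
latent = rng.normal(size=(n, 16))               # shared underlying structure
vision = latent @ rng.normal(size=(16, 128))    # "vision" embeddings
text = latent @ rng.normal(size=(16, 64))       # "language" embeddings

def cosine_gram(x: np.ndarray) -> np.ndarray:
    """Intra-modal pairwise cosine similarity matrix."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T

# Find the permutation that best aligns the two similarity structures.
res = quadratic_assignment(cosine_gram(vision), cosine_gram(text),
                           options={"maximize": True})
print("fraction of true pairs recovered:", np.mean(res.col_ind == np.arange(n)))
```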
Reposted by Christoph Reich
Can you train a model for pose estimation directly on casual videos without supervision?

Turns out you can!

In our #CVPR2025 paper AnyCam, we directly train on YouTube videos and achieve SOTA results by using an uncertainty-based flow loss and monocular priors! (A generic sketch of such a loss follows after this post.)

⬇️
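The "uncertainty-based flow loss" mentioned above can be thought of, generically, as a heteroscedastic reprojection loss in which the network predicts a per-pixel uncertainty. Here is a hedged PyTorch sketch of that generic idea; names, shapes, and the exact formulation are assumptions, not AnyCam's actual implementation.

```python
# Hedged sketch of a generic uncertainty-weighted flow loss; the exact
# formulation in AnyCam may differ.
import torch

def uncertainty_flow_loss(flow_pred: torch.Tensor,
                          flow_obs: torch.Tensor,
                          log_sigma: torch.Tensor) -> torch.Tensor:
    """Laplace-style heteroscedastic loss over per-pixel flow residuals.

    flow_pred, flow_obs: (B, 2, H, W); log_sigma: (B, 1, H, W) predicted
    log-uncertainty. High-uncertainty pixels are down-weighted, while the
    log_sigma term keeps the network from inflating uncertainty everywhere.
    """
    residual = (flow_pred - flow_obs).abs().sum(dim=1, keepdim=True)
    return (residual * torch.exp(-log_sigma) + log_sigma).mean()

# Toy usage with random tensors:
b, h, w = 2, 8, 8
loss = uncertainty_flow_loss(torch.randn(b, 2, h, w),
                             torch.randn(b, 2, h, w),
                             torch.zeros(b, 1, h, w))
print(loss.item())
```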
Reposted by Christoph Reich
Check out our recent #CVPR2025 paper AnyCam, a fast method for pose estimation in casual videos!

1️⃣ Can be directly trained on casual videos without the need for 3D annotation.
2️⃣ Built around a feed-forward transformer with lightweight refinement.

Code and more info: ⏩ fwmb.github.io/anycam/
Check out our recent #CVPR2025 #highlight paper on unsupervised panoptic segmentation 🚀
🌍 visinf.github.io/cups/
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥

We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!

🌎 visinf.github.io/cups
Check out the #MCML blog post on our recent #CVPR2025 #highlight paper 🔥
𝗠𝗖𝗠𝗟 𝗕𝗹𝗼𝗴: Robots & self-driving cars rely on scene understanding, but AI models for understanding these scenes need costly human annotations. Daniel Cremers & his team introduce 🥤🥤 CUPS: a scene-centric unsupervised panoptic segmentation approach to reduce this dependency. 🔗 mcml.ai/news/2025-04...