Simone Schaub-Meyer
@simoneschaub.bsky.social
Assistant Professor of Computer Science at TU Darmstadt, Member of @ellis.eu, DFG #EmmyNoether Fellow, PhD @ETH Computer Vision & Deep Learning
🎉 Today, Simon Kiefhaber will present our ICCV oral paper on how to make optical flow estimators more efficient (faster inference and lower memory usage) with state-of-the-art accuracy:

🌍 visinf.github.io/recover

Talk: Tue 09:30 AM, Kalakaua Ballroom
Poster: Tue 11:45 AM, Exhibit Hall I #76
Reposted by Simone Schaub-Meyer
📢Excited to share our IROS 2025 paper “Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model”!

Work by Jannik Endres, @olvrhhn.bsky.social, Charles Cobière, @simoneschaub.bsky.social, @stefanroth.bsky.social and Alexandre Alahi.
Reposted by Simone Schaub-Meyer
[1/8] We are presenting four main conference papers, two workshop papers, and a workshop at @iccv.bsky.social 2025 in Hawaii! 🎉🏝
Reposted by Simone Schaub-Meyer
We are presenting five papers at the DAGM German Conference on Pattern Recognition (GCPR, @gcpr-by-dagm.bsky.social) in Freiburg this week!
Reposted by Simone Schaub-Meyer
Efficient Masked Attention Transformer for Few-Shot Classification and Segmentation (GCPR 2025)

by @dustin-carrion.bsky.social, @stefanroth.bsky.social, and @simoneschaub.bsky.social

🌍: visinf.github.io/emat

Poster: Wednesday, 03:30 PM, Poster 8
Reposted by Simone Schaub-Meyer
Removing Cost Volumes from Optical Flow Estimators (ICCV 2025 Oral)

by @skiefhaber.de, @stefanroth.bsky.social, and @simoneschaub.bsky.social

🌍: visinf.github.io/recover

Poster: Friday, 10:30 AM, Poster 14
Reposted by Simone Schaub-Meyer
🚀 Open-Mic Opinions! 🚀

We welcome you to voice your opinion on the state of XAI. You get 5 minutes to speak (in-person only) during the workshop.

📷 Submit your proposals here: lnkd.in/d7_EWKXp

For more details: lnkd.in/dpYWVYXS

@iccv.bsky.social #ICCV2025 #eXCV
Reposted by Simone Schaub-Meyer
Some impressions from our VISINF summer retreat at Lizumer Hütte in the Tirol Alps — including a hike up Geier Mountain and new research ideas at 2,857 m! 🇦🇹🏔️
Reposted by Simone Schaub-Meyer
🚨Deadline Approaching! 🚨

Non-Proceedings track closes in 2 days!

Be sure to submit on time!

We are awaiting your submissions!

More info at: excv-workshop.github.io

@iccv.bsky.social #ICCV2025 #eXCV
Reposted by Simone Schaub-Meyer
Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work!
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
Reposted by Simone Schaub-Meyer
Join us in taking stock of the state of the field of explainability in computer vision, at our Workshop on Explainable Computer Vision: Quo Vadis? at #ICCV2025!

@iccv.bsky.social
Reposted by Simone Schaub-Meyer
Reasonable Artificial Intelligence and The Adaptive Mind: TU Darmstadt has been awarded two funded cluster projects under the Excellence Strategy of the German federal and state governments. A milestone for our university! www.tu-darmstadt.de/universitaet...
Reposted by Simone Schaub-Meyer
"Reasonable AI" got selected as a cluster of excellence www.tu-darmstadt.de/universitaet...

Overwhelmingly happy to be part of RAI & continue working with the smart minds at TU Darmstadt & hessian.AI, while also seeing my new home at Uni Bremen achieve a historic success in the excellence strategy!
Reposted by Simone Schaub-Meyer
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥

We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!

🌎 visinf.github.io/cups
Reposted by Simone Schaub-Meyer
Why has continual ML not had its breakthrough yet?

In our new collaborative paper w/ many amazing authors, we argue that “Continual Learning Should Move Beyond Incremental Classification”!

We highlight 5 examples to show where CL algos can fail & pinpoint 3 key challenges

arxiv.org/abs/2502.11927
Reposted by Simone Schaub-Meyer
🏔️⛷️ Looking back on a fantastic week full of talks, research discussions, and skiing in the Austrian mountains!
Reposted by Simone Schaub-Meyer
Excited to share that today our paper recommender platform www.scholar-inbox.com has reached 20k users! We hope to reach 100k by the end of the year. Lots of new features are currently in the works and will roll out soon.
Reposted by Simone Schaub-Meyer
Understanding what AI models can do, and what they cannot: an interview with @simoneschaub.bsky.social, early-career researcher in the cluster project "RAI" (Reasonable Artificial Intelligence).
"RAI" is one of the projects with which TU Darmstadt is applying for a cluster of excellence.
www.youtube.com/watch?v=2VAm...
Hi Julian, I just joined Bluesky. I am working on XAI in computer vision, so it would be great to be added to the list as well. Thanks!
Reposted by Simone Schaub-Meyer
Want to learn about how model design choices affect the attribution quality of vision models? Visit our #NeurIPS2024 poster on Friday afternoon (East Exhibition Hall A-C #2910)!

Paper: arxiv.org/abs/2407.11910
Code: github.com/visinf/idsds
Reposted by Simone Schaub-Meyer
Our work, "Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals" is accepted at TMLR! 🎉

visinf.github.io/primaps/

PriMaPs generate masks from self-supervised features, making it possible to boost unsupervised semantic segmentation via stochastic EM.