Visual Inference Lab
@visinf.bsky.social
Visual Inference Lab of @stefanroth.bsky.social at @tuda.bsky.social - Research in Computer Vision and Machine Learning.

See https://www.visinf.tu-darmstadt.de/visual_inference
Reposted by Visual Inference Lab
@neuripsconf.bsky.social is two weeks away!

📢 Stop missing great workshop speakers just because the workshop wasn’t on your radar. Browse them all in one place:
robinhesse.github.io/workshop_spe...

(also available for @euripsconf.bsky.social)

#NeurIPS #EurIPS
November 19, 2025 at 8:00 PM
📢🎓 We have open PhD positions in Computer Vision & Machine Learning at @tuda.bsky.social and @hessianai.bsky.social within the Reasonable AI Cluster of Excellence — supervised by @stefanroth.bsky.social, @simoneschaub.bsky.social and many others!

www.career.tu-darmstadt.de/tu-darmstadt...
November 4, 2025 at 2:04 PM
Reposted by Visual Inference Lab
🎉 Today, Simon Kiefhaber will present our ICCV oral paper on how to make optical flow estimators more efficient (faster inference and lower memory usage) with state-of-the-art accuracy:

🌍 visinf.github.io/recover

Talk: Tue 09:30 AM, Kalakaua Ballroom
Poster: Tue 11:45 AM, Exhibit Hall I #76
October 21, 2025 at 7:13 PM
Reposted by Visual Inference Lab
Interested in 3D DINO features from a single image or unsupervised scene understanding?🦖
Come by our SceneDINO poster at NeuSLAM today 14:15 (Kamehameha II) or Tue, 15:15 (Ex. Hall I 627)!
With Jevtić, @fwimbauer.bsky.social, @olvrhhn.bsky.social, Rupprecht, @stefanroth.bsky.social, @dcremers.bsky.social
October 19, 2025 at 8:38 PM
[1/8] We are presenting four main conference papers, two workshop papers, and a workshop at @iccv.bsky.social 2025 in Hawaii! 🎉🏝
October 19, 2025 at 3:35 PM
📢Excited to share our IROS 2025 paper “Boosting Omnidirectional Stereo Matching with a Pre-trained Depth Foundation Model”!

Work by Jannik Endres, @olvrhhn.bsky.social, Charles Corbière, @simoneschaub.bsky.social, @stefanroth.bsky.social and Alexandre Alahi.
October 17, 2025 at 9:27 PM
🎓 Looking for a PhD position in computer vision? Apply to the European Laboratory for Learning & Intelligent Systems (ELLIS) and work with @stefanroth.bsky.social & @simoneschaub.bsky.social! Join the info session on Oct 1.

@ellis.eu @tuda.bsky.social

ellis.eu/news/ellis-p...
ELLIS PhD Program: Call for Applications 2025
The ELLIS mission is to create a diverse European network that promotes research excellence and advances breakthroughs in AI, as well as a pan-European PhD program to educate the next generation of AI...
September 29, 2025 at 9:35 AM
We are presenting five papers at the DAGM German Conference on Pattern Recognition (GCPR, @gcpr-by-dagm.bsky.social) in Freiburg this week!
September 23, 2025 at 5:46 PM
Some impressions from our VISINF summer retreat at Lizumer Hütte in the Tyrolean Alps — including a hike up the Geier and new research ideas at 2,857 m! 🇦🇹🏔️
August 29, 2025 at 12:48 PM
Reposted by Visual Inference Lab
🌟 Keynotes at #GCPR2025 🌟

🎤 Prof. Dima Damen (Uni Bristol & Google DeepMind)

🗓️ Thursday, Sept 25, 2025, 10:30–11:30

Talk: Opportunities in Egocentric Vision

Discover new frontiers in egocentric video understanding, from wearable devices to large-scale datasets.

🔗 www.dagm-gcpr.de/year/2025/re...
August 21, 2025 at 4:49 PM
Reposted by Visual Inference Lab
🚨 Nectar Track @ #GCPR2025 — Call for Submissions! 🧠📢

Have a top-tier paper from the last year (CVPR, NeurIPS, ICLR, ECCV, ICCV, etc.)?

Share your work with the vibrant GCPR community!

🗓️ Submission Deadline: July 28, 2025

🔗 Instructions: www.dagm-gcpr.de/year/2025/su...
July 22, 2025 at 3:41 PM
Reposted by Visual Inference Lab
Feed-Forward SceneDINO for Unsupervised Semantic Scene Completion
Aleksandar Jevtić, Christoph Reich, Felix Wimbauer ... Daniel Cremers
arxiv.org/abs/2507.06230
Trending on www.scholar-inbox.com
July 11, 2025 at 6:00 AM
Reposted by Visual Inference Lab
Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work!
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
June 26, 2025 at 9:22 AM
We had a great time at #CVPR2025 in Nashville!
June 20, 2025 at 5:04 AM
We are presenting 3 papers at #CVPR2025!
June 11, 2025 at 8:56 PM
Reposted by Visual Inference Lab
Check out the #MCML blog post on our recent #CVPR2025 #highlight paper🔥
MCML Blog: Robots & self-driving cars rely on scene understanding, but AI models for understanding these scenes need costly human annotations. Daniel Cremers & his team introduce 🥤🥤 CUPS: a scene-centric unsupervised panoptic segmentation approach to reduce this dependency. 🔗 mcml.ai/news/2025-04...
April 4, 2025 at 1:36 PM
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥

We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!

🌎 visinf.github.io/cups
April 4, 2025 at 1:38 PM
Reposted by Visual Inference Lab
We are thrilled to have 12 papers accepted to #CVPR2025. Thanks to all our students and collaborators for this great achievement!
For more details check out cvg.cit.tum.de
March 13, 2025 at 1:11 PM
Reposted by Visual Inference Lab
In February, Simone Schaub-Meyer took up her professorship in Image and Video Analysis at the TU Department of Computer Science. In this interview, she shares what is exciting about her research, how important interdisciplinarity is, and which other field she would like to try out: www.tu-darmstadt.de/universitaet...
In Brief
Short news items from TU Darmstadt: find current announcements here, brief, concise, and strictly factual.
February 25, 2025 at 3:11 PM
🏔️⛷️ Looking back on a fantastic week full of talks, research discussions, and skiing in the Austrian mountains!
January 31, 2025 at 7:38 PM
Reposted by Visual Inference Lab
Understanding what AI models can do, and what they cannot: an interview with @simoneschaub.bsky.social, early-career researcher in the cluster project "RAI" (Reasonable Artificial Intelligence).
"RAI" is one of the projects with which TU Darmstadt is applying for a Cluster of Excellence.
www.youtube.com/watch?v=2VAm...
Understanding what AI models can and cannot do: RAI researcher Dr. Simone Schaub-Meyer in an interview
YouTube video by Technische Universität Darmstadt
January 13, 2025 at 12:18 PM
Want to learn about how model design choices affect the attribution quality of vision models? Visit our #NeurIPS2024 poster on Friday afternoon (East Exhibition Hall A-C #2910)!

Paper: arxiv.org/abs/2407.11910
Code: github.com/visinf/idsds
December 13, 2024 at 10:10 AM
Our work, "Boosting Unsupervised Semantic Segmentation with Principal Mask Proposals" is accepted at TMLR! 🎉

visinf.github.io/primaps/

PriMaPs generate masks from self-supervised features, boosting unsupervised semantic segmentation via stochastic EM.
November 28, 2024 at 5:41 PM