Matteo Dunnhofer
@mdunnhofer.bsky.social
MSCA Postdoctoral Fellow at University of Udine 🇮🇹 and York University 🇨🇦 - interested in computer vision 👁️🤖

https://matteo-dunnhofer.github.io
As a Highlight ✨ in the main conference, we will present the results of a new investigation of object tracking in first-person vision, comparing it against third-person videos. Work done at the MLP lab of the University of Udine, led by Christian Micheloni

📆 Oct 21st, 15:00 - 17:00
📍 Poster #542

3/3
October 18, 2025 at 3:59 PM
At the Human-inspired Computer Vision #HiCV2025 workshop, I will present a poster with recent results on comparing video-based ANNs and the primate visual system. Ongoing project at the ViTA lab @yorkuniversity.bsky.social led by @kohitij.bsky.social

📆 Oct 20th, 8:30 - 12:30
📍 Room 309

2/3
October 18, 2025 at 3:59 PM
Joint work with @zairamanigrasso.bsky.social and Christian Micheloni

Funded by PRIN 2022 PNRR, MSCA Actions

(7/7)
July 23, 2025 at 2:51 PM
All details can be found in our paper.

📄 arXiv: arxiv.org/abs/2507.16015
🌐 Webpage: machinelearning.uniud.it/datasets/vista

The VISTA benchmark will be released soon. Stay tuned!

(6/7)
Is Tracking really more challenging in First Person Egocentric Vision?
July 23, 2025 at 2:51 PM
These FPV-specific challenges include:
- Frequent object disappearances
- Continuous camera motion altering object appearance
- Object distractors
- Wide field-of-view distortions near frame edges

(5/7)
July 23, 2025 at 2:51 PM
- Trackers learn viewpoint biases and perform best on the viewpoint seen during training.
- FPV tracking presents its own specific challenges.

(4/7)
July 23, 2025 at 2:51 PM
Key takeaways from our study:

- FPV is challenging for state-of-the-art generalist trackers.
- Tracking objects in human-object interaction videos is difficult across both first- and third-person viewpoints.

(3/7)
July 23, 2025 at 2:51 PM
We specifically examined whether these drops are due to FPV itself or to the complexity of human-object interaction scenarios.

To do this, we designed VISTA, a benchmark based on synchronized first- and third-person recordings of the same activities.

(2/7)
July 23, 2025 at 2:51 PM
This paper contributes to our projects PRIN 2022 EXTRA EYE and PRIN 2022 PNRR TEAM, funded by the European Union - NextGenerationEU.

6/6
March 3, 2025 at 5:49 PM
This work was led by Moritz Nottebaum (stop by his poster!) at the Machine Learning and Perception Lab of the University of Udine.

5/6
March 3, 2025 at 5:49 PM
LowFormer achieves significant speedups in image throughput and latency on various hardware platforms, while maintaining or surpassing the accuracy of current state-of-the-art models across image recognition, object detection, and semantic segmentation.

4/6
March 3, 2025 at 5:49 PM
We used insights from this analysis to enhance the hardware efficiency of backbones at the macro level, and introduced a slimmed-down version of multi-head self-attention to improve efficiency at the micro level.

3/6
March 3, 2025 at 5:49 PM
We empirically found that MACs alone do not accurately account for inference speed.

2/6
March 3, 2025 at 5:49 PM
Is attendance open to YorkU researchers (e.g., postdocs)? I would love to learn from your teaching style!
December 23, 2024 at 1:25 AM
Did the same a few weeks ago in Toronto. I think this is the best pizza flavor you can get in Canada 😂
December 15, 2024 at 12:40 PM
The top-performing teams will be invited to present their solution at the 3rd Workshop on Computer Vision for Winter Sports at #WACV2025!

📄 sites.google.com/unitn.it/cv4...

3/3
December 1, 2024 at 2:50 PM