Hirokatsu Kataoka | 片岡裕雄
@hirokatukataoka.bsky.social
130 followers 150 following 50 posts
Chief Scientist @ AIST | Academic Visitor @ Oxford VGG | PI @ cvpaper.challenge | 3D ResNet (Top 0.5% in 5-yr CVPR) | FDSL (ACCV20 Award/BMVC23 Award Finalist)
We’ve released the ICCV 2025 Report!
hirokatsukataoka.net/temp/presen/...

Compiled during ICCV in collaboration with LIMIT.Lab, cvpaper.challenge, and Visual Geometry Group (VGG), this report offers meta insights into the trends and tendencies observed at this year’s conference.

#ICCV2025
[Workshop Paper; 5/5; 20 Oct 15:40 - 16:30] Masatoshi Tateno, Gido Kato, Kensho Hara, Hirokatsu Kataoka, Yoichi Sato, Takuma Yagi, HanDyVQA: A Video QA Benchmark for Fine-Grained Hand-Object Interaction Dynamics, ICCV 2025 HANDS Workshop. hands-workshop.org/workshop2025...
[Workshop Paper; 4/5; 20 Oct 15:10 - 16:00] Jumpei Nakao, Yuto Shibata, Rintaro Yanagi, Masaru Isonuma, Hirokatsu Kataoka, Junichiro Mori, Ichiro Sakata, Synthetic Text-to-Image Pre-training through Fractals with Pseudo-Captions, ICCV 2025 Trustworthy FMs (T2FM) Workshop. t2fm-ws.github.io/T2FM-ICCV25/...
[Workshop Paper; 2/5; 19 Oct 16:40 - 18:00] Shinichi Mae, Ryousuke Yamada, Hirokatsu Kataoka, Industrial Synthetic Segment Pre-training, ICCV 2025 LIMIT Workshop (Invited Poster). arxiv.org/abs/2505.13099
[Workshop Paper; 1/5; 19 Oct 11:25 - 12:15] Misora Sugiyama, Hirokatsu Kataoka, Simple Visual Artifact Detection in Sora-Generated Videos, ICCV 2025 Workshop on Human-Interactive Generation and Editing. arxiv.org/abs/2504.21334 / higen-2025.github.io
[Main Conference Paper; 2/2; 22 Oct 10:45 - 12:45; Poster #451] Risa Shinoda, Nakamasa Inoue, Iro Laina, Christian Rupprecht, Hirokatsu Kataoka, AnimalClue: Recognizing Animals by their Traces, ICCV 2025 (Highlight). dahlian00.github.io/AnimalCluePa...
[Main Conference Paper; 1/2; 21 Oct 15:00 - 17:00; Poster #246] Risa Shinoda, Nakamasa Inoue, Hirokatsu Kataoka, Masaki Onishi, Yoshitaka Ushiku, AgroBench: Vision-Language Model Benchmark in Agriculture, ICCV 2025. dahlian00.github.io/AgroBenchPage/
I’m planning to attend ICCV 2025 in person!

Here are my accepted papers and roles at this year’s #ICCV2025 / @iccv.bsky.social .

Please check out the threads below:
We organized the "Cambridge Computer Vision Workshop" at the University of Cambridge together with Elliott Wu, Yoshihiro Fukuhara, and LIMIT.Lab! It was a fantastic workshop featuring presentations, networking, and discussions.
cambridgecv-workshop-2025sep.limitlab.xyz
Finally, the accepted papers at the #ICCV2025 / @iccv.bsky.social LIMIT Workshop have been publicly released!
--
- OpenReview: openreview.net/group?id=the...
- Website: iccv2025-limit-workshop.limitlab.xyz
At ICCV 2025, I am organizing two workshops: the LIMIT Workshop and the FOUND Workshop.

◆ LIMIT Workshop (19 Oct, PM): iccv2025-limit-workshop.limitlab.xyz
◆ FOUND Workshop (19 Oct, AM): iccv2025-found-workshop.limitlab.xyz

We warmly invite you to attend these workshops at ICCV 2025 in Hawaii!
I’m thrilled to announce my invited talk at "Smart Cameras for Smarter Autonomous Vehicles and Robots" at BMVC 2025!

supercamerai.github.io
Our AnimalClue has been accepted to #ICCV2025 as a highlight🎉🎉🎉 We have also released an official press release from AIST!! This is a collaboration between AIST and Oxford VGG.

Project page: dahlian00.github.io/AnimalCluePa...
Dataset: huggingface.co/risashinoda
Press: www.aist.go.jp/aist_j/press...
Our AgroBench has been accepted to #ICCV2025 🎉🎉🎉 We have released the project page, paper, code, and dataset!!

Project page: dahlian00.github.io/AgroBenchPage/
Paper: arxiv.org/abs/2507.20519
Code: github.com/dahlian00/Ag...
Dataset: huggingface.co/datasets/ris...
We’ve released the CVPR 2025 Report!
hirokatsukataoka.net/temp/presen/...

Compiled during CVPR in collaboration with LIMIT.Lab, cvpaper.challenge, and Visual Geometry Group (VGG), this report offers meta insights into the trends and tendencies observed at this year’s conference.

#CVPR2025
For the research community, we’ve named it "LIMIT.Community." If you’re interested, please feel free to contact us. Students are also welcome.
LIMIT.Lab brings together computer vision researchers from Japan, the UK, Germany, and the Netherlands! Below are our current partner institutions:

🇯🇵 AIST, Science Tokyo, TUS
🇬🇧 Oxford VGG, Cambridge
🇩🇪 UTN FunAI Lab
🇳🇱 UvA
(Fields and partner institutions are continually expanding)
[LIMIT.Lab Launched]
limitlab.xyz

We’ve established "LIMIT.Lab," a collaboration hub for building multimodal AI models covering images, videos, 3D, and text when any resource (e.g., compute, data, or labels) is limited.
“Industrial Synthetic Segment Pre-training” on arXiv!

Formula-driven supervised learning (FDSL) has surpassed the vision foundation model "SAM" on industrial data. It delivers strong transfer performance on industrial tasks while minimizing IP-related concerns.

arxiv.org/abs/2505.13099
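
Since the post mentions FDSL without unpacking it, here is a minimal, hedged sketch of the "formula-driven" idea, assuming a FractalDB-style iterated-function-system generator; the function names (sample_ifs, render_fractal) and all parameter choices are illustrative assumptions, not the paper's actual pipeline:

```python
# Minimal sketch of the "formula-driven" idea behind FDSL: images are
# rendered from a mathematical formula (here, a random Iterated Function
# System fractal) and the formula's parameters serve as the class label,
# so pre-training needs no real photos and no human annotation.
# Assumption-based illustration only -- not the code from the paper.
import numpy as np
from PIL import Image


def sample_ifs(num_maps=4, rng=None):
    """Sample random affine maps (a, b, c, d, e, f); one IFS = one category."""
    rng = rng or np.random.default_rng()
    maps = rng.uniform(-1.0, 1.0, size=(num_maps, 6))
    for m in maps:
        # Rescale the linear part so each map is a contraction and the
        # chaos game below stays bounded.
        norm = np.linalg.norm(m[:4])
        if norm > 0.9:
            m[:4] *= 0.9 / norm
    return maps


def render_fractal(maps, size=256, n_points=100_000, rng=None):
    """Render the IFS attractor with the chaos game as a grayscale image."""
    rng = rng or np.random.default_rng()
    x, y = 0.0, 0.0
    pts = []
    for i in range(n_points):
        a, b, c, d, e, f = maps[rng.integers(len(maps))]
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:  # discard burn-in iterations
            pts.append((x, y))
    pts = np.asarray(pts)
    low, high = pts.min(axis=0), pts.max(axis=0)
    ij = ((pts - low) / (high - low + 1e-8) * (size - 1)).astype(int)
    canvas = np.zeros((size, size), dtype=np.uint8)
    canvas[ij[:, 1], ij[:, 0]] = 255
    return Image.fromarray(canvas)


# Tiny synthetic "dataset": each sampled IFS defines one class.
rng = np.random.default_rng(0)
for label in range(3):
    img = render_fractal(sample_ifs(rng=rng), rng=rng)
    img.save(f"fractal_class_{label}.png")
```

In an actual FDSL setup, many such formula-defined categories would be rendered at scale and a backbone pre-trained on them with standard supervised classification before transfer; this sketch only illustrates where the "formula-driven" labels come from.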
I’m honored to serve as an Area Chair for CVPR 2025 for the second time. Thank you so much for the support!!

cvpr.thecvf.com/Conferences/...
Reposted by Hirokatsu Kataoka | 片岡裕雄
where I apparently asked you to present your poster on 3D ResNet *in 2 minutes* at #CVPR2018...
7 years later, I am very grateful for your *50-minute* talk and full-day visit to my group...
Thanks for the personal touch ☺️
2/2