Chengzu
@chengzu-li.bsky.social
PhD student at Language Technology Lab, University of Cambridge
Round of applause for the fantastic collaborators in this project: Wenshan Wu, Huanyu Zhang, Yan Xia, Shaoguang Mao, Li Dong, Ivan Vulić and Furu Wei🥳🥳
January 14, 2025 at 2:50 PM
📄 Dive Deeper into MVoT

Discover how MVoT rewrites the rules of reasoning, with details on loss design, image tokenization, and interleaved multimodal training.
👉 Read our paper on arXiv: arxiv.org/abs/2501.07542
Imagine while Reasoning in Space: Multimodal Visualization-of-Thought
Chain-of-Thought (CoT) prompting has proven highly effective for enhancing complex reasoning in Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs). Yet, it struggles in complex ...
January 14, 2025 at 2:50 PM
🔗 MVoT + CoT: New Ceiling for Reasoning

MVoT doesn’t replace CoT; it elevates it. Combining MVoT with CoT lets multimodal and verbal reasoning complement each other, pushing performance toward its upper bound and showing that two reasoning paradigms can be better than one!
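A toy sketch of one way to score such a combination, under my own assumption that an instance counts as solved if either paradigm solves it; this is illustrative, not the paper's exact protocol.

```python
# Toy sketch (illustrative assumption, not the paper's exact protocol): score the
# CoT + MVoT combination as an upper bound where an instance counts as solved
# if either reasoning paradigm gets it right.
def combined_upper_bound(cot_correct: list[bool], mvot_correct: list[bool]) -> float:
    assert len(cot_correct) == len(mvot_correct)
    solved = [c or m for c, m in zip(cot_correct, mvot_correct)]
    return sum(solved) / len(solved)

# Example: each paradigm alone solves 3/5 tasks, but together they cover 4/5.
print(combined_upper_bound([True, True, True, False, False],
                           [True, False, True, True, False]))  # 0.8
```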
January 14, 2025 at 2:50 PM
🎨 Revolutionizing Visual Reasoning with Token Discrepancy Loss

Messy visuals? Not anymore. Our token discrepancy loss ensures that MVoT generates accurate, meaningful visualizations with less redundancy.

Result? Better images, clearer reasoning, stronger performance.
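For a concrete picture, here is a minimal sketch of a token discrepancy loss in this spirit, under the assumption that it penalizes predicted probability mass in proportion to how far each visual codebook embedding lies from the ground-truth token's embedding; see the paper for the exact formulation.

```python
# Minimal sketch of a token discrepancy loss (my reading of the idea above, not the
# paper's exact code): plain cross-entropy treats every wrong visual token as equally
# wrong, whereas this penalizes predicted probability mass in proportion to how far
# each codebook embedding lies from the ground-truth token's embedding.
import torch

def token_discrepancy_loss(logits: torch.Tensor,      # (B, L, V) scores over visual vocab
                           target_ids: torch.Tensor,  # (B, L)    ground-truth visual token ids
                           codebook: torch.Tensor     # (V, D)    image-tokenizer embeddings
                           ) -> torch.Tensor:
    probs = logits.softmax(dim=-1)                                # (B, L, V)
    target_emb = codebook[target_ids]                             # (B, L, D)
    # Squared L2 distance from every codebook entry to the ground-truth embedding.
    diffs = target_emb.unsqueeze(2) - codebook[None, None]        # (B, L, V, D)
    dist = diffs.pow(2).sum(dim=-1)                               # (B, L, V)
    # Expected distance under the predicted distribution, averaged over positions.
    return (probs * dist).sum(dim=-1).mean()
```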
January 14, 2025 at 2:50 PM
🎯 Performance Boosts with MVoT

MVoT isn’t just new—it’s better.
🔥 Better and more stable performance than CoT, particularly in complex scenarios like FrozenLake.
🌟 Plug-and-play power: Supercharges models like GPT-4o for unprecedented versatility.
January 14, 2025 at 2:50 PM
🧠 MVoT

MVoT moves beyond Chain-of-Thought (CoT) by letting the model imagine what it is thinking: alongside the verbal reasoning trace, it generates images of its intermediate states. Blending verbal and visual reasoning in this way makes tackling complex spatial problems more intuitive, interpretable, and powerful.
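To make "imagining while reasoning" concrete, here is a hedged sketch of an interleaved text-and-image generation loop in this style; the model, tokenizer, and boundary-token interfaces below are hypothetical placeholders, not MVoT's actual API.

```python
# Hedged sketch of interleaved verbal + visual reasoning (illustrative only).
# Assumes a unified autoregressive model that emits both text tokens and discrete
# image tokens, switching modality at special boundary tokens. All interfaces here
# (model.generate, tokenizer.image_start_id, image_detokenizer.decode, ...) are hypothetical.
from dataclasses import dataclass

@dataclass
class ReasoningStep:
    thought: str           # verbal reasoning for this step
    visualization: object  # decoded image of the imagined intermediate state, or None

def reason_with_visualizations(model, tokenizer, image_detokenizer, prompt, max_steps=10):
    context = tokenizer.encode(prompt)
    trace = []
    for _ in range(max_steps):
        # 1) Verbal thought: generate text tokens until the model opens an image span or ends.
        text_ids = model.generate(context, stop_ids={tokenizer.image_start_id, tokenizer.eos_id})
        context = context + text_ids
        # 2) Visual thought: generate image tokens until the span closes, then decode them.
        if text_ids and text_ids[-1] == tokenizer.image_start_id:
            image_ids = model.generate(context, stop_ids={tokenizer.image_end_id})
            context = context + image_ids
            image = image_detokenizer.decode(image_ids)
        else:
            image = None
        trace.append(ReasoningStep(tokenizer.decode(text_ids), image))
        if tokenizer.eos_id in text_ids:   # final answer produced; stop interleaving
            break
    return trace
```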
January 14, 2025 at 2:50 PM
Hi, would love to be added to the list! Thanks!
December 5, 2024 at 3:06 PM
🙋 Working on VLMs and would love to be added! Thanks!
December 5, 2024 at 3:02 PM