Sjoerd van Steenkiste
@svansteenkiste.bsky.social
96 followers
52 following
24 posts
Researching AI models that can make sense of the world @GoogleAI. Gemini Thinking.
Posts
Pinned
Reposted by Sjoerd van Steenkiste
📢 Excited to announce our upcoming workshop - Vision Language Models For All: Building Geo-Diverse and Culturally Aware Vision-Language Models (VLMs-4-All) @CVPR 2025!
🌐 sites.google.com/view/vlms4all
Reposted by Sjoerd van Steenkiste
ICLR 2025 decisions and meta-reviews are now available on OpenReview.
We reviewed 11,565 submissions, with an overall acceptance rate of 32.08%. Oral/poster decisions will be announced at a later date. Camera ready deadline is March 1st.
Reposted by Sjoerd van Steenkiste
Mehdi S. M. Sajjadi
@msajjadi.com
· Jan 13
Excited to be at #NeurIPS2024. A few papers we are presenting this week:
MooG: arxiv.org/abs/2411.05927
Neural Assets: arxiv.org/abs/2406.09292
Probabilistic reasoning in LMs: openreview.net/forum?id=arYXg…
Let’s connect if any of these research topics interest you!
What counts as in-context learning (ICL)? Typically, you might think of it as learning a task from a few examples. However, we’ve just written a perspective (arxiv.org/abs/2412.03782) that suggests interpreting a much broader spectrum of behaviors as ICL! Quick summary thread: 1/7
The broader spectrum of in-context learning
The ability of language models to learn a task from a few examples in context has generated substantial interest. Here, we provide a perspective that situates this type of supervised few-shot learning...
arxiv.org
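To make the “typical” notion of ICL mentioned above concrete, here is a minimal, illustrative sketch (not taken from the paper) of supervised few-shot prompting: the task is specified entirely by a handful of input-output examples placed in the context, and the model is expected to continue the pattern without any weight updates. The helper function, example pairs, and expected completion are hypothetical placeholders.

    # Illustrative sketch of few-shot in-context learning (assumed example, not from the paper).
    def build_few_shot_prompt(examples, query):
        """Concatenate labelled input-output examples, followed by an unlabelled query."""
        lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
        lines.append(f"Input: {query}\nOutput:")
        return "\n\n".join(lines)

    # Usage: an English-to-French "task" defined purely in context (placeholder data).
    prompt = build_few_shot_prompt(
        examples=[("sea otter", "loutre de mer"), ("cheese", "fromage")],
        query="peppermint",
    )
    print(prompt)
    # A language model is expected to continue with "menthe poivrée": the task is
    # "learned" from the two in-context examples, with no parameter updates.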
Reposted by Sjoerd van Steenkiste
Excited to announce MooG for learning video representations. MooG allows tokens to move “off-the-grid”, enabling better representation of scene elements, even as they move across the image plane through time.
📜 https://arxiv.org/abs/2411.05927
🌐 https://moog-paper.github.io/
✍️ Reminder to reviewers: Check author responses to your reviews, and ask follow-up questions if needed.
50% of papers have discussion - let’s bring this number up!