🧠📈 🧪 #NeuroAI #MLSky
Can we simultaneously learn transformation-invariant and transformation-equivariant representations with self-supervised learning?
TL;DR Yes! This is possible via simple predictive learning & architectural inductive biases – without extra loss terms or predictors!
🧵 (1/10)
Seq-JEPA is a step in the right direction. It learns by predicting sensory outcomes from a series of interactions. Cool things emerged! 👇 with @shahabbakht.bsky.social
#MLSky #NeuroAI
Biologically inspired networks (for AI models).