Shahab Bakhtiari
@shahabbakht.bsky.social
|| assistant prof at University of Montreal || leading the systems neuroscience and AI lab (SNAIL: https://www.snailab.ca/) 🐌 || associate academic member of Mila (Quebec AI Institute) || #NeuroAI || vision and learning in brains and machines
The lure of manifolds isn't just for neuroscientists :)

bsky.app/profile/euge...
Find someone who loves you more than ML people love saying manifold
November 20, 2025 at 2:50 PM
🥳
November 19, 2025 at 3:05 AM
I think people are mainly referring to published work where the raw data isn't available. At least in my comment above, specifically regarding @neurograce.bsky.social's interest in LIP/FEF for information integration, I meant data that have already been published but remain inaccessible.
November 18, 2025 at 7:02 PM
Ah I see. So you’re using a frozen embedding for the glimpses.

In that case, our results would predict that invariance/equivariance to saccades actually depends on the presence or absence of efference copies in your training.
November 18, 2025 at 3:14 PM
but then if you integrate across multiple saccades, the output of the integrator will be invariant to the saccades.
November 18, 2025 at 3:01 PM
If you’re using a SimCLR-like contrastive loss without action-conditioning, it’ll learn to be invariant to the transformations, i.e., the saccades in your case.

If you condition your predictor on the saccade, the latent embedding of the glimpses will become equivariant to the saccade vectors …
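
To make the distinction concrete, here’s a minimal sketch (hypothetical PyTorch code, not the model from either thread; the encoder, sizes, and loss details are placeholder assumptions): a plain SimCLR-style loss pushes the embeddings of two glimpses together regardless of the saccade, whereas conditioning a predictor on the saccade vector lets the embedding keep (stay equivariant to) the action.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes, for illustration only.
D_EMB, D_ACT = 128, 2   # embedding dim, saccade vector (dx, dy)

# Stand-in encoder for 32x32 glimpses; a frozen backbone would slot in here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, D_EMB))

def contrastive_loss(z1, z2, temperature=0.1):
    # SimCLR-style InfoNCE over positive pairs (i, i). With no action
    # conditioning, minimizing this pulls the two glimpses' embeddings
    # together, so the representation becomes invariant to the saccade.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))        # diagonal = positives
    return F.cross_entropy(logits, targets)

# Action-conditioned predictor: it sees the saccade, so the glimpse
# embedding no longer has to discard it -- equivariance can survive.
predictor = nn.Sequential(
    nn.Linear(D_EMB + D_ACT, 256), nn.ReLU(), nn.Linear(256, D_EMB)
)

def action_conditioned_loss(z1, z2, saccade):
    # Predict the next glimpse's embedding from (embedding, action).
    z2_hat = predictor(torch.cat([z1, saccade], dim=-1))
    return F.mse_loss(z2_hat, z2.detach())

# Toy usage with random "glimpses" and saccade vectors.
g1, g2 = torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32)
z1, z2 = encoder(g1), encoder(g2)
a = torch.randn(8, D_ACT)
print(contrastive_loss(z1, z2).item(), action_conditioned_loss(z1, z2, a).item())
```

Integrating the per-glimpse embeddings downstream, as noted in the adjacent reply, is then what yields a saccade-invariant summary on top of the equivariant embeddings.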
November 18, 2025 at 3:01 PM
Interesting!

I have the same problem with many AI benchmarks: we often lack proper human performance benchmarking, so we know how the AI fails or succeeds but not where the model really stands relative to humans.
November 18, 2025 at 2:34 PM
I wonder if the better alignments with the visual cortex that you observed could be linked to the co-existence of invariance and equivariance in the model.
November 18, 2025 at 2:29 PM
Very cool work. Congrats!

You may find our recent work relevant: bsky.app/profile/shah...

We showed how a similar model (which we called sequential JEPA) developed action-invariant and -equivariant representations simultaneously within the same model due to action-conditioned prediction.
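
For flavor, a hedged sketch of that division of labor (hypothetical code, much simplified relative to the actual seq-JEPA architecture; the aggregator design and sizes are assumptions): individual glimpse embeddings stay action-equivariant because prediction is conditioned on the action, while an aggregator over the (embedding, action) sequence produces a summary that is invariant to the particular saccades taken.

```python
import torch
import torch.nn as nn

D_EMB, D_ACT = 128, 2  # assumed sizes, for illustration

class SeqAggregator(nn.Module):
    # Toy stand-in for a seq-JEPA-style aggregator: consumes a sequence
    # of (glimpse embedding, action) pairs and returns one summary vector.
    # Integrating across saccades makes this summary the natural locus of
    # action-invariant content, while per-glimpse embeddings remain
    # action-equivariant.
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D_EMB + D_ACT, D_EMB)
        self.rnn = nn.GRU(D_EMB, D_EMB, batch_first=True)

    def forward(self, z_seq, a_seq):          # (B, T, D_EMB), (B, T, D_ACT)
        x = self.proj(torch.cat([z_seq, a_seq], dim=-1))
        _, h = self.rnn(x)                    # final hidden state
        return h.squeeze(0)                   # (B, D_EMB) invariant summary

agg = SeqAggregator()
summary = agg(torch.randn(4, 5, D_EMB), torch.randn(4, 5, D_ACT))
print(summary.shape)  # torch.Size([4, 128])
```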
Thrilled to see this work accepted at NeurIPS!

Kudos to @hafezghm.bsky.social for the heroic effort in demonstrating the efficacy of seq-JEPA in representation learning from multiple angles.

#MLSky 🧠🤖
Excited to share that seq-JEPA has been accepted to NeurIPS 2025!
November 18, 2025 at 2:29 PM
How can this be controversial?
November 18, 2025 at 12:28 PM
Until recently, when a professional editor removed all the spaces from my draft and turned the whole thing into LLM perfection.
November 18, 2025 at 4:26 AM
Huh … then I’m not the only one doing that 😅
November 18, 2025 at 4:22 AM
From the examples in the report, it seems the data was provided to the model separately, so as a user, you’d need to find the potentially useful datasets and give them to the model.
November 17, 2025 at 6:24 PM
Based on the report, it’s a combination of multi-agent literature search (up to ~1,500 papers) and data analysis (up to ~40,000 lines of code).
November 17, 2025 at 6:17 PM
That wouldn’t be needed with a clear definition of roles and contributions. Not every single co-author needs to contribute to drafting the paper.
November 17, 2025 at 5:17 PM