In that case, our results would predict that invariance/equivariance to saccades actually depends on the presence or absence of efference copies in your training.
If you condition your predictor on the saccade, the latent embedding of the glimpses will become equivariant to the saccade vectors …
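For anyone who wants to see the mechanism concretely, here is a minimal sketch (PyTorch) of what "conditioning the predictor on the saccade" could look like; the module names, dimensions, and loss are my illustrative assumptions, not code from this thread. The predictor receives the current glimpse embedding together with the saccade vector (the efference copy) and is trained to predict the embedding of the next glimpse.

```python
# Minimal sketch (PyTorch) of "condition the predictor on the saccade".
# All names and dimensions here are illustrative assumptions, not code from the thread.
import torch
import torch.nn as nn

latent_dim, saccade_dim = 128, 2  # saccade as a (dx, dy) efference copy

# Hypothetical encoder stand-in: maps a glimpse to a latent embedding z.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))

# The predictor sees the current embedding AND the saccade vector; training it to
# predict the next glimpse's embedding is what pushes the latents toward
# equivariance with respect to the saccade.
predictor = nn.Sequential(
    nn.Linear(latent_dim + saccade_dim, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)

glimpse_t  = torch.randn(8, 3, 32, 32)   # current glimpse (batch of 8)
glimpse_t1 = torch.randn(8, 3, 32, 32)   # glimpse after the saccade
saccade    = torch.randn(8, saccade_dim) # efference copy of the eye movement

z_t, z_t1 = encoder(glimpse_t), encoder(glimpse_t1)
z_t1_pred = predictor(torch.cat([z_t, saccade], dim=-1))
loss = nn.functional.mse_loss(z_t1_pred, z_t1.detach())  # stop-grad on target, JEPA-style
```

Dropping the saccade from the predictor input is the ablation the prediction above refers to: without the efference copy, there is no pressure for the latents to encode the action.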
I have the same problem with many AI benchmarks: we lack proper human performance benchmarking, so we know how the AI fails/succeeds but don't know where the model really stands compared to humans.
You may find our recent work relevant: bsky.app/profile/shah...
We showed how a similar model (which we called sequential JEPA) developed action-invariant and -equivariant representations simultaneously within the same model due to action-conditioned prediction.
Kudos to @hafezghm.bsky.social for the heroic effort in demonstrating the efficacy of seq-JEPA in representation learning from multiple angles.
#MLSky 🧠🤖
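To make "invariant and equivariant at the same time" concrete, here is a rough sketch of how I read the seq-JEPA setup from the post; the transformer aggregator, module names, and dimensions are my assumptions, not the authors' implementation. Per-glimpse embeddings, each paired with the action that produced it, are pooled across the sequence; that aggregate, concatenated with the next action, feeds the predictor.

```python
# Rough sketch of the seq-JEPA idea as described in the post; the transformer
# aggregator, module names, and dimensions are my assumptions, not the authors' code.
import torch
import torch.nn as nn

dim, action_dim, seq_len = 128, 2, 4

class SeqAggregator(nn.Module):
    """Pools a sequence of (glimpse embedding, action) pairs into one summary vector."""
    def __init__(self, dim, action_dim):
        super().__init__()
        self.proj = nn.Linear(dim + action_dim, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, z_seq, a_seq):
        tokens = self.proj(torch.cat([z_seq, a_seq], dim=-1))  # (B, T, dim)
        return self.transformer(tokens).mean(dim=1)            # (B, dim) aggregate

aggregator = SeqAggregator(dim, action_dim)
predictor = nn.Sequential(nn.Linear(dim + action_dim, 256), nn.ReLU(), nn.Linear(256, dim))

z_seq  = torch.randn(8, seq_len, dim)        # encoder embeddings of past glimpses
a_seq  = torch.randn(8, seq_len, action_dim) # the saccades that produced them
a_next = torch.randn(8, action_dim)          # efference copy of the upcoming saccade
z_next = torch.randn(8, dim)                 # target: embedding of the next glimpse

# The aggregate summary is where an action-invariant readout tends to emerge,
# while the per-glimpse embeddings in z_seq stay action-equivariant because the
# predictor is explicitly conditioned on the action.
s = aggregator(z_seq, a_seq)
z_next_pred = predictor(torch.cat([s, a_next], dim=-1))
loss = nn.functional.mse_loss(z_next_pred, z_next.detach())
```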