Will Redman
@wtredman.bsky.social
Incoming assist. prof. JHU ECE | Koopman operators and hippocampi | He/Him | You know what the French c'est - "Hate baguettes hate"

https://wredman4.wixsite.com/wtredman
To begin to test this hypothesis, we developed a “cavity method” approach, computing how the removal of each individual weight impacts the statistics of the preactivations. We found that IMP removes weights (on average) exactly when their removal would increase non-Gaussianity 11/
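The paper's cavity computation is analytic; the following is only a brute-force sketch of the same quantity for a single linear layer (shapes, inputs, and variable names like `X` and `W` are illustrative): zero out each weight in turn and record how the excess kurtosis of the affected unit's preactivation changes.

```python
import numpy as np
from scipy.stats import kurtosis  # excess kurtosis: 0 for a Gaussian

rng = np.random.default_rng(0)
X = rng.laplace(size=(10_000, 64))   # heavy-tailed inputs (stand-in for natural images)
W = rng.standard_normal((64, 32))    # one fully connected layer

Z = X @ W                            # preactivations, one column per unit
base = kurtosis(Z, axis=0)

# "Cavity" pass: remove one weight at a time and measure the change in the
# non-Gaussianity of the unit it feeds into.
delta = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        z_wo = Z[:, j] - X[:, i] * W[i, j]       # preactivation without weight (i, j)
        delta[i, j] = kurtosis(z_wo) - base[j]   # > 0: removal increases non-Gaussianity

# Compare delta against the magnitudes |W| that IMP actually prunes on
print(np.corrcoef(np.abs(W).ravel(), delta.ravel())[0, 1])
```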
February 27, 2025 at 2:44 PM
We next measured the kurtosis of the preactivations (as a way to quantify non-Gaussianity) with each round of IMP. We found that, compared to random and one-shot pruning, IMP significantly increases the non-Gaussianity of the preactivations 9/
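For reference, excess kurtosis is the standard fourth-moment measure of non-Gaussianity: zero for a Gaussian, positive for heavier tails. A quick sanity check:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
print(kurtosis(rng.standard_normal(100_000)))  # ~0: Gaussian baseline
print(kurtosis(rng.laplace(size=100_000)))     # ~3: heavy-tailed, non-Gaussian
```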
February 27, 2025 at 2:44 PM
Three years ago, I teamed up with Seb and Ale to try to tackle this question. We first showed that inputs with non-Gaussian statistics are necessary for IMP to discover local RFs, as training on a Gaussian clone of ImageNet32 led to no localization 8/
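One standard construction of such a clone, which we assume is the one meant here: sample from a Gaussian with the dataset's mean and covariance, so the first two moments match but all higher-order (non-Gaussian) structure is destroyed.

```python
import numpy as np

def gaussian_clone(X, seed=0):
    """Gaussian dataset with the same mean and covariance as X.

    X: (n_samples, n_features), e.g. ImageNet32 images flattened to 32*32*3.
    """
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    return rng.multivariate_normal(mu, cov, size=X.shape[0])
```

Training on X versus gaussian_clone(X) then isolates the role of the non-Gaussian input statistics.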
February 27, 2025 at 2:44 PM
The first direct evidence of this came from Franco Pellegrini and @giuliobiroli.bsky.social, who found that IMP discovers local RFs in FCNs (proceedings.mlr.press/v162/pellegr...) 5/
February 27, 2025 at 2:44 PM
This heterogeneity in grid properties is robust, but small. Is it enough to gain a computational benefit? Using synthetic grid cell populations, we show that heterogeneity makes it possible to achieve high linear decoding accuracy of local space from a single module 12/
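A minimal version of that experiment (all numbers illustrative; the idealized three-plane-wave grid model is standard, but this is not the paper's exact pipeline): build one module with small spacing/orientation jitter and linearly regress position on population activity.

```python
import numpy as np

rng = np.random.default_rng(0)

def grid_rate(pos, spacing, orient, phase):
    """Idealized grid cell: sum of three plane waves 60 degrees apart."""
    angles = orient + np.deg2rad([0.0, 60.0, 120.0])
    ks = 2 * np.pi / spacing * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.cos((pos - phase) @ ks.T).sum(axis=1)

n_cells = 200
spacings = 0.50 + 0.02 * rng.standard_normal(n_cells)   # ~4% spacing jitter
orients = 0.10 + 0.03 * rng.standard_normal(n_cells)    # small orientation jitter (rad)
phases = rng.uniform(0.0, 0.5, size=(n_cells, 2))

pos = rng.uniform(0.0, 1.0, size=(5_000, 2))            # positions in a 1 m box
R = np.stack([grid_rate(pos, s, o, p)
              for s, o, p in zip(spacings, orients, phases)], axis=1)
R = np.hstack([R, np.ones((len(R), 1))])                # bias term for the decoder

train, test = slice(0, 4_000), slice(4_000, None)
Wdec, *_ = np.linalg.lstsq(R[train], pos[train], rcond=None)
err = np.linalg.norm(R[test] @ Wdec - pos[test], axis=1).mean()
print(f"mean linear decoding error: {err:.3f} m")  # error grows if jitter is set to 0
```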
February 20, 2025 at 7:39 PM
We find that there is variability in grid spacing and orientation in such RNNs! This could be another explanation for why these models capture MEC neuronal response profiles (@anayebi.bsky.social proceedings.neurips.cc/paper/2021/h... ) 11/
February 20, 2025 at 7:39 PM
We perform a number of control analyses to convince ourselves that this is not an artifact of our methodology. And we find that this result holds across a number of different modules! 9/
February 20, 2025 at 7:39 PM
To assess the robustness of the heterogeneity, we perform a shuffle analysis and compute the within- and between-cell variability. Consistent with the variability being an underlying property of the grid code, we find greater between- than within-cell variability 8/
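A sketch of the kind of shuffle we mean (`spacing_halves` is a hypothetical (n_cells, 2) array of per-cell spacing estimates from two halves of a session): if grid properties are stable per-cell properties, true within-cell (across-half) variability should sit well below between-cell variability, and permuting cell labels on one half should inflate it to the between-cell level.

```python
import numpy as np

rng = np.random.default_rng(0)

def within_cell(spacing_halves):
    """Mean |difference| between the two half-session estimates, per cell."""
    return np.abs(spacing_halves[:, 0] - spacing_halves[:, 1]).mean()

def between_cell(spacing_halves):
    """Mean |difference| between session means of different cells."""
    m = spacing_halves.mean(axis=1)
    diff = np.abs(m[:, None] - m[None, :])
    return diff[np.triu_indices(len(m), k=1)].mean()

def shuffled_within(spacing_halves, n_shuffles=1_000):
    """Null: re-pair first-half estimates with random cells' second halves."""
    n = len(spacing_halves)
    return np.array([
        within_cell(np.column_stack([spacing_halves[:, 0],
                                     spacing_halves[rng.permutation(n), 1]]))
        for _ in range(n_shuffles)
    ])

# Expected pattern for robust heterogeneity:
# within_cell << between_cell ~= shuffled_within.mean()
```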
February 20, 2025 at 7:39 PM
We find that grid cells within the same module can exhibit a wide range of grid spacings and orientations - even when we only include the most “griddy” grid cells 7/
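For context, per-cell spacing and orientation are typically read off the inner ring of six peaks in the spatial autocorrelogram (the same map used for gridness scores). A simplified sketch, using circular FFT autocorrelation rather than the usual Pearson sliding autocorrelogram, with minimal peak handling:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def spacing_and_orientation(rate_map, bin_size):
    """Grid spacing (same units as bin_size) and orientation (deg, mod 60)."""
    m = rate_map - rate_map.mean()
    # Circular autocorrelation via FFT, zero lag shifted to the center
    ac = np.fft.fftshift(np.real(np.fft.ifft2(np.abs(np.fft.fft2(m)) ** 2)))
    cy, cx = np.array(ac.shape) // 2
    is_peak = (ac == maximum_filter(ac, size=5)) & (ac > 0.3 * ac.max())
    ys, xs = np.nonzero(is_peak)
    d = np.hypot(ys - cy, xs - cx)
    r0 = d[d > 3].min()                 # distance to the nearest off-center peak
    ring = (d > 3) & (d < 1.5 * r0)     # the six innermost peaks of the hexagon
    spacing = np.median(d[ring]) * bin_size
    ang = np.arctan2(ys[ring] - cy, xs[ring] - cx)
    orientation = np.rad2deg(np.median(np.mod(ang, np.pi / 3)))
    return spacing, orientation
```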
February 20, 2025 at 7:39 PM
If grid cells within the same module have heterogeneity in their grid properties, the translation symmetry can be broken (over a finite area), allowing local spatial information to be encoded by population activity from a single module! 5/
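A toy 1D illustration of the argument (illustrative numbers, not the paper's analysis): with identical spacing the population vector is periodic, so positions a grid period apart are indistinguishable; a small spacing difference breaks that symmetry over the much longer beat length.

```python
import numpy as np

x = np.linspace(0.0, 4.0, 4001)   # 4 m of 1D "space", 1 mm bins

# Homogeneous pair: identical 0.5 m spacing, different phases
same = np.stack([np.cos(2 * np.pi * x / 0.5),
                 np.cos(2 * np.pi * (x - 0.1) / 0.5)])

# Heterogeneous pair: 4% spacing difference; the joint pattern only repeats
# at the beat length 1 / (1/0.50 - 1/0.52) ~ 13 m, far beyond this box
hetero = np.stack([np.cos(2 * np.pi * x / 0.50),
                   np.cos(2 * np.pi * x / 0.52)])

def self_similarity(pop, shift_bins):
    """Correlation of the population vector with itself shifted in space."""
    return np.corrcoef(pop[:, :-shift_bins].ravel(),
                       pop[:, shift_bins:].ravel())[0, 1]

shift = 2000  # 2.0 m = four grid periods of the 0.5 m cells
print(self_similarity(same, shift))    # ~1.0: positions four periods apart are aliased
print(self_similarity(hetero, shift))  # < 1.0: symmetry broken; drops further with more cells
```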
February 20, 2025 at 7:39 PM
TLDR: Grid cells in the same module are not “identical”! There is small but robust heterogeneity in grid spacing and orientation that enables grid cells from a single module to encode info about local space 2/
February 20, 2025 at 7:39 PM