Chris Versteeg
cversteeg.bsky.social
Applying ODIN to neural activity from the monkey motor cortex, we find that ODIN can reconstruct held-out firing rates with high accuracy using only ~10 state dimensions, outperforming state-of-the-art models with more than double ODIN’s dimensionality. 13/
September 15, 2023 at 5:59 PM
Additionally, ODIN recovers the nature of the simulated nonlinear embedding more accurately than the alternative readouts, suggesting that ODIN is well suited to model neural manifolds. 12/
September 15, 2023 at 5:59 PM
We also find that ODIN allows more accurate recovery of fixed points than models that don’t account for embedding nonlinearities. 11/
September 15, 2023 at 5:58 PM
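For readers unfamiliar with fixed-point recovery: once a model has learned a latent dynamics function, fixed points are typically found by minimizing a "speed" objective over latent states. The sketch below illustrates the general technique on a toy learned dynamics function; the dynamics `f`, sizes, and optimizer here are illustrative assumptions, not ODIN's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3  # latent state dimensionality (illustrative)

# Toy "learned" discrete-time dynamics x_{t+1} = f(x_t) = tanh(W x_t).
# W stands in for weights a trained generator would provide (assumption);
# it is rescaled to be a contraction so a fixed point exists.
A = rng.standard_normal((D, D))
W = 0.8 * A / np.linalg.norm(A, 2)

def f(x):
    return np.tanh(W @ x)

def q(x):
    """Speed function: fixed points are minima where q(x) = 0."""
    r = f(x) - x
    return 0.5 * r @ r

def grad_q(x):
    r = f(x) - x
    J = (1.0 - np.tanh(W @ x) ** 2)[:, None] * W  # Jacobian of f at x
    return (J - np.eye(D)).T @ r

# Gradient descent on q from a random initial latent state.
x = rng.standard_normal(D)
for _ in range(5000):
    x -= 0.2 * grad_q(x)

print(np.allclose(f(x), x, atol=1e-5))  # True: x is (numerically) a fixed point
```

In practice this search is run from many initial states to collect all fixed points of the learned system, and the Jacobian at each one characterizes its local dynamics (attractor, saddle, etc.).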
We find that models with Linear or MLP readouts fail to reconstruct neural activity or recover latents poorly when the state dimensionality is incorrectly chosen. In contrast, ODIN performs well at all relevant state dimensionalities. 10/
September 15, 2023 at 5:57 PM
To test the ability of ODIN to accurately recover neural latent dynamics and their embedding, we simulated neural activity from a low-dimensional dynamical system nonlinearly embedded into neural activity. 9/
September 15, 2023 at 5:57 PM
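The simulation setup described above can be sketched as follows. This is a generic toy version of the recipe (latent trajectory → nonlinear embedding → Poisson spikes); the specific latent system, embedding, and scales in the paper will differ.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D, N = 500, 2, 50  # timesteps, latent dim, neurons (illustrative sizes)

# Low-dimensional latent trajectory: a simple limit cycle (assumption; the
# paper's simulated dynamical system may differ).
theta = np.linspace(0, 8 * np.pi, T)
z = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # (T, D)

# Nonlinear embedding of latents into neural space: a random linear lift
# followed by an elementwise nonlinearity (a stand-in for the true embedding).
W = rng.standard_normal((D, N)) / np.sqrt(D)
rates = np.exp(np.tanh(z @ W))  # (T, N) positive firing rates, a.u.

# Poisson spike counts per time bin, as in typical LFADS-style simulations.
spikes = rng.poisson(rates * 0.05)

print(spikes.shape)  # (500, 50)
```

A model is then trained only on `spikes`, and latent recovery is scored by how well its inferred states match the ground-truth `z` up to an invertible transformation.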
Our new readout, called Flow, is based on invertible ResNets. Flow models the embedding of latent activity into neural activity as a reversible dynamical system, imposing an inductive bias towards injectivity. 8/
September 15, 2023 at 5:57 PM
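A minimal numpy sketch of the invertible-ResNet idea Flow builds on: a residual layer y = x + g(x) is invertible whenever g has Lipschitz constant below 1, and its inverse can be computed by Banach fixed-point iteration. The particular g, sizes, and rescaling below are illustrative assumptions, not ODIN's implementation (i-ResNets enforce the Lipschitz bound via spectral normalization during training).

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10  # layer width (illustrative)

# Residual map g with Lipschitz constant < 1, enforced here by a one-off
# spectral rescaling of the weight matrix.
A = rng.standard_normal((D, D))
A *= 0.9 / np.linalg.norm(A, 2)  # ||A||_2 = 0.9

def g(x):
    return A @ np.tanh(x)  # tanh is 1-Lipschitz, so Lip(g) <= 0.9

def forward(x):
    """Invertible residual layer: y = x + g(x)."""
    return x + g(x)

def invert(y, n_iters=200):
    """Fixed-point iteration x <- y - g(x); converges because Lip(g) < 1."""
    x = y.copy()
    for _ in range(n_iters):
        x = y - g(x)
    return x

x = rng.standard_normal(D)
y = forward(x)
print(np.allclose(invert(y), x, atol=1e-6))  # True: the layer is invertible
```

The Lipschitz constraint is what gives the injectivity inductive bias: distinct latent states cannot be mapped to the same neural activity pattern.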
ODIN’s primary innovation is its injective nonlinear readout, which obligates all latent activity to affect neural reconstruction. This penalizes superfluous dynamical features, while readout nonlinearity allows ODIN to capture nonlinear embeddings (i.e., neural manifolds). 7/
September 15, 2023 at 5:56 PM
Our previous work (led by @arsedle) has shown that neural ODE-based architectures can recover the latent space better than RNNs. Unfortunately, we also found that higher dimensional models of all types tend to sacrifice latent recovery for reconstruction performance! 5/
September 15, 2023 at 5:54 PM
In contrast to task-trained models, “data-trained” (e.g., LFADS-like) models learn to approximate a latent dynamical system (the “generator”) and an embedding of those dynamics into neural space (the “readout”) that reconstructs observed spiking data. 3/
September 15, 2023 at 5:54 PM
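The generator/readout decomposition above can be made concrete with a toy generative sketch. Everything here is illustrative (a linear generator instead of a trained RNN, a log-linear readout, arbitrary sizes); it only shows the two pieces and the Poisson reconstruction objective that data-trained models optimize.

```python
import numpy as np

rng = np.random.default_rng(3)
D, N, T = 3, 20, 100  # latent dim, neuron count, timesteps (illustrative)

# Generator: latent dynamics z_{t+1} = F(z_t). A data-trained model learns
# this map (LFADS uses an RNN); here F is a toy damped rotation.
c, s = np.cos(0.2), np.sin(0.2)
F = 0.98 * np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 0.9]])

# Readout: embeds latent states into per-neuron firing rates (exp keeps
# rates positive).
C = rng.standard_normal((D, N)) / np.sqrt(D)

z = np.zeros((T, D))
z[0] = [1.0, 0.0, 0.5]
for t in range(1, T):
    z[t] = F @ z[t - 1]       # generator rolls the latent state forward

rates = np.exp(z @ C)         # readout maps latents to rates, shape (T, N)
spikes = rng.poisson(rates)   # observed spike counts

# Training minimizes the Poisson negative log-likelihood of the spikes
# (up to a constant) with respect to the generator and readout parameters:
nll = np.sum(rates - spikes * np.log(rates))
print(spikes.shape)
```

The key point of the thread is that many (generator, readout) pairs can reconstruct `spikes` equally well, so the readout's inductive biases decide whether the learned generator matches the true latent dynamics.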
Recent work has demonstrated that task-trained RNNs learn to perform computation via dynamical features (e.g., fixed points) that can provide an intuitive understanding of their underlying computational mechanisms. 2/
September 15, 2023 at 5:53 PM
Ever wondered if the dynamics learned by LFADS-like models could help us understand neural computation? @chethan, @arsedle, @JonathanDMcCart, and I developed ODIN to robustly recover latent dynamical features through the power of injectivity! arxiv.org/abs/2309.06402 1/
September 15, 2023 at 5:53 PM