Jonathan Kadmon
@kadmonj.bsky.social
Assistant professor of theoretical neuroscience @ELSCbrain. My opinions are deterministic activity patterns in my neocortex. http://neuro-theory.org
More than 450 researchers have signed this declaration in the past 24 hours. While we continue to focus most of our efforts within Israel, there is, and always has been, significant resistance in the country. The academic community is one of many voices expressing this sentiment.
July 27, 2025 at 8:36 PM
(6/6) Our findings challenge some prevailing conceptions about gradient-based learning, opening new avenues for understanding efficient neural learning in both artificial systems and the brain.
Read our full preprint here: arxiv.org/abs/2502.20580 #Neuroscience #MachineLearning #DeepLearning
Training Large Neural Networks With Low-Dimensional Error Feedback
Training deep neural networks typically relies on backpropagating high-dimensional error signals, a computationally intensive process with little evidence supporting its implementation in the brain. Ho...
arxiv.org
March 23, 2025 at 9:23 AM
(5/6) Applying our method to a simple ventral visual stream model replicates the results of Lindsey, @suryaganguli.bsky.social and @stphtphsn.bsky.social and shows that the bottleneck in the error signal—not the feedforward pass—shapes the receptive fields and representations of the lower layers.
March 23, 2025 at 9:23 AM
(4/6) The key insight: the dimensionality of the error signal doesn't need to scale with network size, only with task complexity. Low-dimensional feedback can effectively guide learning even in very large nonlinear networks. The trick is to align the feedback weights with the error signal (the gradient of the loss).
March 23, 2025 at 9:23 AM
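The idea in (4/6) can be sketched in a few lines. This is my own toy, not the paper's code: a two-layer network on a linear teacher task, where the hidden layer receives only a fixed random k-dimensional summary of the output error instead of the full backpropagated gradient. All sizes and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: low-dimensional error feedback in place of backpropagation.
n_in, n_hid, n_out, k = 20, 100, 10, 3   # feedback dimension k << n_out

W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_hid, n_in))
W2 = rng.normal(0, 1 / np.sqrt(n_hid), (n_out, n_hid))
A = rng.normal(0, 1 / np.sqrt(n_out), (k, n_out))  # compresses output error to k dims
B = rng.normal(0, 1 / np.sqrt(k), (n_hid, k))      # fixed random feedback to hidden layer

M = rng.normal(0, 1 / np.sqrt(n_in), (n_out, n_in))  # linear teacher defining the task
lr = 0.01
losses = []
for step in range(2000):
    x = rng.normal(size=n_in)
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - M @ x                 # full error exists only at the output...
    fb = B @ (A @ e)              # ...but only a k-dim summary is fed back
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(fb * (1 - h ** 2), x)
    losses.append(float(e @ e))
```

Here the output weights follow an ordinary delta rule, while the hidden weights learn from a 3-dimensional feedback signal; the loss still decreases, illustrating (not proving) the claim that error dimensionality can be far below network width.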
(3/6) We developed a local learning rule leveraging low-dimensional error feedback, decoupling forward and backward passes. Surprisingly, performance matches traditional backpropagation across deep nonlinear networks—including convolutional nets and even transformers!
March 23, 2025 at 9:23 AM
(2/6) Backpropagation, the gold standard for training neural networks, relies on precise but high-dimensional error signals. Proposed alternatives, like feedback alignment, do not challenge this assumption. Could simpler, low-dimensional signals achieve similar results?
March 23, 2025 at 9:23 AM
Just realized I've mentioned the wrong Jan above. Sorry @japhba.bsky.social!
February 17, 2025 at 9:50 PM
Just as the meaning of words depends on context, the brain must infer context and meaning simultaneously to adapt in real-time. We believe these insights uncover core principles of #CognitiveFlexibility—check it out! www.biorxiv.org/content/10.1...
Congrats to John and Jan on their outstanding work!
Neural mechanisms of flexible perceptual inference
What seems obvious in one context can take on an entirely different meaning if that context shifts. While context-dependent inference has been widely studied, a fundamental question remains: how does ...
www.biorxiv.org
February 17, 2025 at 1:43 PM
Remarkably, DeepRL networks converged on near-optimal strategies and exhibited the same nontrivial Bayesian-like belief-updating dynamics—despite never being trained on these computations directly. This suggests that inference mechanisms can emerge naturally through reinforcement learning.
February 17, 2025 at 1:43 PM
By deriving a Bayes-optimal policy, we show that rapid context shifts emerge from sequential belief-state updates—driven by flexible internal models. The resulting dynamics resemble an integrator with nontrivial context-dependent, adaptive bounds.
February 17, 2025 at 1:43 PM
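The belief-update dynamics described above can be sketched as a log-odds integrator with a hazard-rate leak. This is my own minimal toy, not the paper's model: two contexts assign opposite meanings to a noisy Gaussian cue, and the hazard-rate term soft-bounds the integrator, which is the qualitative behavior the post describes. The hazard rate and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

h = 0.02      # hazard rate: per-step probability that the context switches
sigma = 1.0   # observation noise

def update_log_odds(L, x):
    """One Bayes-optimal step of context inference (context 1 vs context 2).

    The prior is first leaked toward 0 because the context may have switched
    (this bounds the integrator), then the log-likelihood ratio of x is added.
    """
    p = 1 / (1 + np.exp(-L))            # P(context 1)
    p = p * (1 - h) + (1 - p) * h       # account for a possible context switch
    L = np.log(p / (1 - p))
    llr = 2 * x / sigma**2              # cue means are +1 / -1 under the two contexts
    return L + llr

# Observations drawn from context 1 (mean +1): belief should move toward context 1.
L = 0.0
for _ in range(50):
    x = 1.0 + sigma * rng.normal()
    L = update_log_odds(L, x)
```

Because the leak caps the accumulated evidence at roughly log((1-h)/h), the belief can flip within one or two trials after a real context change, matching the "first-trial adaptation" described in the thread.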
We tackled this challenge with behavioral experiments in mice, Bayesian theory, and #DeepRL. Using a novel change-detection task, we show how mice and networks adapt on the first trial after a context change by inferring both context and meaning, without trial and error.
February 17, 2025 at 1:43 PM