It's often said that hippocampal replay, which helps to build up a model of the world, is biased by reward. But canonical temporal-difference learning requires updates proportional to the reward-prediction error (RPE), not to reward magnitude.
1/4
rdcu.be/eRxNz
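For reference, the standard TD(0) update the thread is appealing to (textbook notation, not spelled out in the post):

```latex
\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t), \qquad
V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t
```

The size of the update scales with the RPE $\delta_t$, so a large but fully predicted reward gives $\delta_t \approx 0$ and no update; a replay bias proportional to raw reward would be a different quantity.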
**Class-conditioned latent diffusion and the semantically cued hippocampus share a remarkably similar computational architecture...**
A woman walked into the graveyard, waved, said hello, and, as we had previously arranged, handed me a paper bag full of human bones.
The life of a churchwarden is a strange one...
It does. It very much does. What do you think citing other work is for? What do you think writing a paper is for? What do you *think*?
fortune.com/2026/01/21/n...
profmarkfabian.substack.com/p/airing-my-...
To learn more about pro-human policies, rules, and rights, join 88,800+ subscribers here: www.luizasnewsletter.com
Excited to share our new preprint, the outcome of an extensive effort led by Johannes Niediek, showing that reactivation of human concept neurons reflects memory content rather than event sequence.
They store facts outside the main NN layers and perform lookups during inference via n-grams.
This benefits not just knowledge but also reasoning, because fewer weights are dedicated to facts.
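The post doesn't give implementation details, so here's a minimal hypothetical sketch of what an external n-gram fact table queried at inference time could look like (the `NGramMemory` class and its API are my invention for illustration):

```python
# Hypothetical sketch: facts live in a hash table keyed by token n-grams,
# outside the network's weights, and are looked up at inference time.
from collections import defaultdict

class NGramMemory:
    def __init__(self, n=3):
        self.n = n
        self.table = defaultdict(list)  # maps an n-gram key -> stored facts

    def add(self, tokens, fact):
        # Index the fact under every n-gram in the token sequence.
        for i in range(len(tokens) - self.n + 1):
            self.table[tuple(tokens[i:i + self.n])].append(fact)

    def lookup(self, context):
        # Query with the trailing n-gram of the current context.
        return self.table.get(tuple(context[-self.n:]), [])

mem = NGramMemory(n=3)
mem.add(["the", "eiffel", "tower", "is", "in", "paris"], "Eiffel Tower -> Paris")
print(mem.lookup(["visit", "the", "eiffel", "tower"]))  # ['Eiffel Tower -> Paris']
```

The retrieved facts would then be fed back to the model (e.g. appended to the context), which is what lets the main layers spend fewer weights memorizing them.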
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.
My hope is that this will be a living document, continuously improved as I get feedback.
The 10th of these, would you believe?
This year we have foundation models, breakthroughs in using light to understand the brain, a gene therapy, and more
Enjoy!
medium.com/the-spike/20...
🧪
#AcademicSky
www.nature.com/articles/d41...
(OP @drgbuckingham.bsky.social )
- People are happy they can ask questions quickly without judgment or looking for the Right Person to ask in the office.
- People are unhappy that nobody asks them questions, because that is how they get to know colleagues and win their trust.
- Developers like Claude Code + Claude Opus 4.5.
- People appreciate that AI does not judge. Unlike coworkers, who may silently label you as incompetent if you ask one too many “stupid” questions, AI will answer every question - including the ones you hesitate to ask.
google-deepmind.github.io/disco_rl/
www.sciencedirect.com/science/arti...
www.nature.com/articles/s41...
I am a comma stan.
#philsci #cogsky #CognitiveNeuroscience
@phaueis.bsky.social
aktuell.uni-bielefeld.de/2025/11/24/t...
That is, in Go, your reward is 0 for most time steps and only +1/-1 at the end. That sounds sparse, but not from an algorithmic perspective.
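Presumably the point (my gloss): with bootstrapping, the terminal outcome propagates backwards through the value function, so earlier steps receive nonzero prediction errors even though their rewards are zero. A toy illustration, assuming plain TD(0) on a 5-state chain (the setup and numbers are mine, not from the thread):

```python
# TD(0) on a chain 0 -> 1 -> 2 -> 3 -> 4 -> terminal, reward only at the end.
gamma, alpha = 0.9, 0.5
V = [0.0] * 5  # one value per non-terminal state
for episode in range(50):
    for s in range(5):
        r = 1.0 if s == 4 else 0.0           # +1 only on the final transition
        v_next = V[s + 1] if s < 4 else 0.0  # terminal state has value 0
        delta = r + gamma * v_next - V[s]    # reward-prediction error
        V[s] += alpha * delta                # nonzero at earlier states too,
                                             # once value has propagated back
print([round(v, 2) for v in V])  # -> approaches [0.66, 0.73, 0.81, 0.9, 1.0]
```

Even though four of the five transitions pay zero reward, every state's value moves on most episodes, because delta bootstraps off V[s+1] rather than waiting for the reward itself.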