Emma Roscow
@emmaroscow.bsky.social
Machine learning @ EcoVadis | ex-neuroscience-postdoc who still dabbles
Pinned
New(ish) paper!

It's often said that hippocampal replay, which helps to build up a model of the world, is biased by reward. But the canonical temporal-difference learning rule requires updates proportional to reward-prediction error (RPE), not reward magnitude.

1/4

rdcu.be/eRxNz
Post-learning replay of hippocampal-striatal activity is biased by reward-prediction signals
Nature Communications - It is unclear which aspects of experience shape sleep’s contributions to learning. Here, by combining neural recordings in rats with reinforcement learning, the...
rdcu.be
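A minimal sketch of the TD point above (my illustration, not the paper's model): in tabular TD(0), the value update scales with the RPE, so a fully predicted reward drives no update regardless of its magnitude.

```python
import numpy as np

# Toy tabular TD(0) update (illustrative only, not the paper's model).
# The update is proportional to the reward-prediction error
#   delta = r + gamma * V[s'] - V[s],
# not to the reward magnitude r itself.
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    rpe = r + gamma * V[s_next] - V[s]
    V[s] += alpha * rpe
    return rpe

V = np.zeros(2)                   # values for two states
print(td_update(V, 0, 1.0, 1))    # unexpected reward: large RPE, large update
V[0] = 1.0                        # suppose the reward is now fully predicted
print(td_update(V, 0, 1.0, 1))    # same reward, zero RPE: no update
```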
Reposted by Emma Roscow
People have been looking at "AI & brains" through the lens of LLMs/transformers. But what about latent diffusion? (think text-to-image/video platforms such as Midjourney).

Class-conditioned latent diffusion and semantically cued hippocampus share a remarkably similar computational architecture...
January 25, 2026 at 5:59 AM
Reposted by Emma Roscow
Are AI agents ready for the workplace? A new benchmark raises doubts.

techcrunch.com/2026/01/22/a...
Are AI agents ready for the workplace? A new benchmark raises doubts | TechCrunch
New research looks at how leading AI models hold up doing actual white-collar work tasks, drawn from consulting, investment banking, and law. Most models failed.
techcrunch.com
January 23, 2026 at 3:30 PM
Reposted by Emma Roscow
A few days ago, I stood in the graveyard of an 1100 year old church, getting damper and damper in the drizzling rain.

A woman walked into the graveyard, waved, said hello, and, as we had previously arranged, handed me a paper bag full of human bones.

The life of a churchwarden is a strange one...
January 23, 2026 at 11:44 AM
Reposted by Emma Roscow
What is wild to me is the defense, BY THE NEURIPS BOARD, that fabricated citations do not mean "the content of the papers themselves [is] necessarily invalidated"

It does. It very much does. What do you think citing other work is for? What do you think writing a paper is for? What do you *think*?
January 21, 2026 at 9:34 PM
Reposted by Emma Roscow
if i could make you read ONE (1) single post to improve your understanding of the challenges of social science in general it would be this one from @markfabian.bsky.social about wellbeing science specifically

profmarkfabian.substack.com/p/airing-my-...
Airing my grievances with wellbeing science
We have a streetlight problem
profmarkfabian.substack.com
January 19, 2026 at 9:35 AM
Reposted by Emma Roscow
As I often write in my newsletter, the future of AI doesn't exist yet; we are building it right now, and every policy and regulatory choice counts.

To learn more about pro-human policies, rules, and rights, join 88,800+ subscribers here: www.luizasnewsletter.com
January 15, 2026 at 12:33 PM
Reposted by Emma Roscow
Proud to have contributed to @jiaxuanqi.bsky.social's masterpiece out @nature.com! She shows that dopamine transients track the learned quality of song during juvenile learning and that dopamine release is driven not just by VTA firing, but by a local cholinergic mechanism! (1/x)
Dual neuromodulatory dynamics underlie birdsong learning - Nature
Dopamine release in the basal ganglia of the zebra finch is driven by neurons associated with reinforcement learning and by cholinergic signalling, and tracks performance quality during long-term lear...
www.nature.com
March 12, 2025 at 4:54 PM
Reposted by Emma Roscow
How does the brain replay memories during sleep?
Excited to share our new preprint, the outcome of an extensive effort led by Johannes Niediek, showing that reactivation of human concept neurons reflects memory content rather than event sequence.
Episodic memory consolidation by reactivation of human concept neurons during sleep reflects contents, not sequence of events https://www.biorxiv.org/content/10.64898/2026.01.10.698827v1
January 13, 2026 at 9:13 AM
Reposted by Emma Roscow
Engram: separate the factual info from the weights, and dedicate more weights to reasoning instead of fact lookup.

They store facts outside the main NN layers and perform lookups during inference via n-grams.

This benefits not just knowledge but also reasoning, because fewer weights are dedicated to facts.
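A toy sketch of how I read that idea (hypothetical names, not Engram's actual implementation): facts live in an external table keyed by n-grams and are looked up at inference time, leaving the network's weights free for reasoning.

```python
from collections import defaultdict

# Hypothetical illustration of an n-gram-keyed fact store kept outside the
# main network; not Engram's actual code.
class NgramFactStore:
    def __init__(self, n=2):
        self.n = n
        self.table = defaultdict(list)   # n-gram key -> stored facts

    def _keys(self, tokens):
        return [tuple(tokens[i:i + self.n]) for i in range(len(tokens) - self.n + 1)]

    def add(self, tokens, fact):
        for key in self._keys(tokens):
            self.table[key].append(fact)

    def lookup(self, tokens):
        hits = []
        for key in self._keys(tokens):
            for fact in self.table[key]:
                if fact not in hits:
                    hits.append(fact)
        return hits

store = NgramFactStore(n=2)
store.add("the capital of france".split(), "capital(France) = Paris")
print(store.lookup("what is the capital of france".split()))
# -> ['capital(France) = Paris']; the main model would consume this retrieved
#    fact instead of storing it in its weights.
```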
January 12, 2026 at 10:28 PM
Reposted by Emma Roscow
Interesting read on how chunking can emerge directly from #synaptic dynamics: by temporarily suppressing groups of items via synaptic augmentation, #WorkingMemory can retrieve up to 8 items despite a base capacity of only 4.
January 12, 2026 at 8:32 PM
Reposted by Emma Roscow
With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.
January 9, 2026 at 1:27 AM
Reposted by Emma Roscow
Just published my review of neuroscience in 2025, on The Spike.

The 10th of these, would you believe?

This year we have foundation models, breakthroughs in using light to understand the brain, a gene therapy, and more

Enjoy!

medium.com/the-spike/20...
2025: A Review of the Year in Neuroscience
Enlightening the brain
medium.com
December 30, 2025 at 3:52 PM
Reposted by Emma Roscow
Andrej Karpathy is worried about keeping up with software engineering practices
December 27, 2025 at 9:40 AM
Reposted by Emma Roscow
Where is the story in a book?
Where are thoughts in the brain? Are they in the brain?
December 21, 2025 at 10:32 AM
Reposted by Emma Roscow
Seven feel-good science stories to round up 2025. All too often we forget to celebrate the positives
🧪
#AcademicSky

www.nature.com/articles/d41...
Seven feel-good science stories to restore your faith in 2025
Immense progress in gene-editing, drug discovery and conservation are just some of the reasons to be cheerful about 2025.
www.nature.com
December 18, 2025 at 8:15 AM
Reposted by Emma Roscow
Ok, this is nuts. Once you see it you cannot unsee it. Do you see it?
(OP @drgbuckingham.bsky.social)
December 16, 2025 at 7:39 PM
Reposted by Emma Roscow
I’ve been hearing two things:
- People are happy they can ask questions quickly without judgment or looking for the Right Person to ask in the office.
- People are unhappy that nobody asks them questions, because that is how they get to know colleagues and win their trust.
In the AI social sphere:

- Developers like Claude Code + Claude Opus 4.5.
- People appreciate that AI does not judge. Unlike coworkers, who may silently label you as incompetent if you ask one too many "stupid" questions, AI will answer every question, including the ones you hesitate to ask.
December 15, 2025 at 3:18 PM
Reposted by Emma Roscow
Completely agree. And if I can make a self-promoting plug here, we have a nice table in this paper trying to separate some of these ideas out. The brain is very information-efficient (bits/ATP), while still being very expensive in energy consumption (ATP/sec).
www.sciencedirect.com/science/arti...
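A tiny worked example of that distinction, with made-up numbers (my illustration, not from the paper): efficiency is information per unit energy, expenditure is energy per unit time, and the same system can score high on both.

```python
# Made-up numbers, purely to illustrate efficiency vs expenditure.
bits_per_sec = 1e7          # hypothetical information throughput
atp_per_sec = 1e9           # hypothetical energy expenditure

efficiency = bits_per_sec / atp_per_sec   # bits per ATP (how cheaply info is processed)
expenditure = atp_per_sec                 # ATP per second (how much energy is burned)

# A system can process information very cheaply per bit (high bits/ATP)
# while still burning a lot of ATP per second, simply because it processes
# a lot of bits per second.
print(f"efficiency:  {efficiency:.2g} bits/ATP")
print(f"expenditure: {expenditure:.2g} ATP/s")
```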
December 8, 2025 at 2:23 PM
Reposted by Emma Roscow
Soapbox time: the problem with metabolic efficiency arguments in neuroscience is that they often confuse energy efficiency with energy expenditure. Biological systems are optimized for energy efficiency, but that does NOT imply they are optimized for low energy expenditure 🧵 1/
December 8, 2025 at 1:31 PM
Reposted by Emma Roscow
As a Doctor Who fan, I had to read and then re-read this.

I am a comma stan.
December 3, 2025 at 9:04 PM
Reposted by Emma Roscow
A neat perspective on what makes RL for LLMs tractable
I'd say that's because it's not sparse reward in a meaningful way, in the same way Go in self-play is not sparse in a meaningful way.

That is, in Go, your reward is 0 for most time steps and only +1/-1 at the end. That sounds sparse, but not from an algorithmic perspective.
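A minimal sketch of why that terminal reward still trains every step (my illustration, assuming simple Monte Carlo returns rather than the post's setup): the return carries the final +/-1 back to every move in the game.

```python
# Illustrative only: a "sparse" reward that is zero everywhere except the
# terminal step still yields a learning signal at every step once returns
# are computed.
def returns(rewards, gamma=1.0):
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return list(reversed(out))

episode = [0, 0, 0, 0, 1]     # reward only on the final move
print(returns(episode))       # [1.0, 1.0, 1.0, 1.0, 1.0] -> every step gets credit
```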
December 1, 2025 at 12:52 PM
Reposted by Emma Roscow
1/3 How reward prediction errors shape memory: when people gamble and cues signal unexpectedly high reward probability, incidental images shown on those trials are remembered better than images from safe trials, linking RL computations to episodic encoding. #RewardSignals #neuroskyence www.nature.com/articles/s41...
Positive reward prediction errors during decision-making strengthen memory encoding - Nature Human Behaviour
Jang and colleagues show that positive reward prediction errors elicited during incidental encoding enhance the formation of episodic memories.
www.nature.com
November 30, 2025 at 11:12 AM