Bowen Zheng
@bwz-brain.bsky.social
MIT BCS | grad
part-time reductionist, full-time human
Reposted by Bowen Zheng
Human speech is continuous, and many meaning spaces (like color) are continuous too. Yet we use discrete words like “blue” and “green” that carve these spaces into categories.

In our new paper, we ask: How do people turn continuous spaces into structured, word-like systems for communication? (1/8)
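As a toy illustration of the general idea (not the paper's model): discrete words can carve a continuous space into categories by assigning each point to its nearest category prototype. The prototype hues below are made-up placeholders, not values from the study.

```python
# Hypothetical prototype hues (degrees on the color wheel) for four words.
PROTOTYPES = {"red": 0, "yellow": 60, "green": 120, "blue": 240}

def circular_distance(a, b):
    """Shortest angular distance between two hues, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def name_hue(hue):
    """Map a continuous hue to the word whose prototype is nearest."""
    return min(PROTOTYPES, key=lambda w: circular_distance(hue, PROTOTYPES[w]))

# e.g. name_hue(200) -> "blue": a continuous input lands in a discrete category.
```

Every hue gets exactly one label, so the continuous circle is partitioned into discrete regions, which is the "carving" the post describes.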
Discrete and systematic communication in a continuous signal-meaning space
Abstract. Human spoken language uses a continuous stream of acoustic signals to communicate about continuous features of the world, by using discrete forms
academic.oup.com
November 26, 2025 at 2:35 PM
Reposted by Bowen Zheng
What does it mean to understand language? We argue that the brain’s core language system is limited, and that *deeply* understanding language requires EXPORTING info to other brain regions.
w/ @neuranna.bsky.social @evfedorenko.bsky.social @nancykanwisher.bsky.social
arxiv.org/abs/2511.19757
1/n🧵👇
What does it mean to understand language?
Language understanding entails not just extracting the surface-level meaning of the linguistic input, but constructing rich mental models of the situation it describes. Here we propose that because pr...
arxiv.org
November 26, 2025 at 4:26 PM
Reposted by Bowen Zheng
"Spiking Networks Hate It! Find Out the One Plasticity Trick They Don’t Want You to Know! Never stabilise models by hand again." - I woke up thinking we missed an opportunity with the title of this one. :/ www.science.org/doi/10.1126/... Also: It snowed in Vienna, 10cm white fluffies! Happy Sunday!
Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks
Plasticity at inhibitory synapses maintains balanced excitatory and inhibitory synaptic inputs at cortical neurons.
www.science.org
November 23, 2025 at 7:09 AM
Reposted by Bowen Zheng
This raises what I like to call the "AI test for tasks".

If many people use AI to do task X, then that tells you that task X is actually just a brainless administrative exercise.

Any such task should probably be eliminated, and if that's not an option, modified to make automation even easier.
There was never any point to having reference letters. That's why we've all started using AI to do this nonsense task.

References should only be used for short-listed candidates for important positions/awards, and ideally, be done via a call to get the most honest opinion possible.
From my discussions with other faculty, the use of generative AI I hear about the most is writing reference letters.

What's the point of having reference letters anymore if everyone is just having them written by machine?
November 14, 2025 at 7:14 PM
Reposted by Bowen Zheng
What is the most profitable industry in the world, this side of the law? Not oil, not IT, not pharma.

It's *scientific publishing*.

We call this the Drain of Scientific Publishing.

Paper: arxiv.org/abs/2511.04820
Background: doi.org/10.1162/qss_...

Thread @markhanson.fediscience.org.ap.brid.gy 👇
November 12, 2025 at 10:31 AM
Reposted by Bowen Zheng
The way Sutton himself interprets the “bitter lesson” in this interview definitely caught a lot of bitter lesson enthusiasts off guard.
That LLMs are not actually an example of the bitter lesson was a nuance no one saw coming.

youtu.be/21EYKqUsPfg?...
Richard Sutton – Father of RL thinks LLMs are a dead end
YouTube video by Dwarkesh Patel
youtu.be
October 4, 2025 at 3:55 AM
Reposted by Bowen Zheng
So far, learning traps seem robust to social learning in our cases. Surprisingly, despite many manipulations that have tried to reduce this learning trap, the most effective has been simply being a child (see @emilyliquin.bsky.social's work on traps in children) osf.io/preprints/ps...
OSF
osf.io
September 26, 2025 at 3:30 AM
Reposted by Bowen Zheng
New preprint! How can you remember an image you saw once, even after seeing thousands of them? We find a role for humble mid-level visual cortex in high-capacity, one-shot learning. doi.org/10.1101/2025.09.22.677855 🧵🧪1/
Neuronal signatures of successful one-shot memory in mid-level visual cortex
High-capacity, one-shot visual recognition memory challenges theories of learning and neural coding because it requires rapid, robust, and durable representations. Most studies have focused on the hip...
doi.org
September 23, 2025 at 3:09 PM
Reposted by Bowen Zheng
The New York Times piece today about US science is terrible and wrong—in many ways.

I could write a whole article about this, but as one example:

“To close observers, the original crisis began well before any of this…”
No. I’m a close observer of science, and this is incorrect.
September 22, 2025 at 12:20 PM
Reposted by Bowen Zheng
Can a single cell learn? Even without a brain, some microbes show simple forms of cognition. Can this basal cognition be engineered? Check our new paper with @jordiplam.bsky.social on the minimal synthetic circuits & their cognitive limits. @drmichaellevin.bsky.social www.biorxiv.org/content/10.1...
September 10, 2025 at 11:48 AM
Reposted by Bowen Zheng
LLRX republished the blogpost www.llrx.com/2025/08/ai-s...
August 22, 2025 at 8:20 PM
Reposted by Bowen Zheng
I wrote a Comment on neurotheory, and now you can read it!

Some thoughts on where neurotheory has and has not taken root within the neuroscience community, how it has shaped those subfields, and where we theorists might look next for fresh adventures.

www.nature.com/articles/s41...
Theoretical neuroscience has room to grow
Nature Reviews Neuroscience - The goal of theoretical neuroscience is to uncover principles of neural computation through careful design and interpretation of mathematical models. Here, I examine...
www.nature.com
August 20, 2025 at 4:09 PM
Reposted by Bowen Zheng
After interviewing 150 execs, surveying 350 workers, and analyzing 300 projects, MIT’s NANDA initiative found that 95% of generative AI deployments fail. The real “productivity gains” seem to come from layoffs and squeezing more work from fewer people, not AI.
MIT report: 95% of generative AI pilots at companies are failing
There’s a stark difference in success rates between companies that purchase AI tools from vendors and those that build them internally.
fortune.com
August 20, 2025 at 4:52 AM
Reposted by Bowen Zheng
Our paper just out in Nature Communications!
www.nature.com/articles/s41...

We introduce curved neural networks that naturally incorporate higher-order interactions, showing:
• explosive phase transitions
• enhanced memory retrieval via self-annealing
• increased memory capacity through geometric curvature
Explosive neural networks via higher-order interactions in curved statistical manifolds - Nature Communications
Higher-order interactions shape complex neural dynamics but are hard to model. Here, authors use a generalization of the maximum entropy principle to introduce a family of curved neural networks, reve...
www.nature.com
July 24, 2025 at 10:24 AM
Reposted by Bowen Zheng
So what drives drift? We looked closely at the neurons and found that a small group of them were stable. These stable neurons were more excitable than neighboring cells, making the fate of the cells predictable.
July 23, 2025 at 4:15 PM
Reposted by Bowen Zheng
DO EVERYTHING YOU CAN TO GET FOOD AND WATER IN TO GAZA.
This is from Lemkin Institute begging..... we are all begging.
July 21, 2025 at 9:57 PM
Reposted by Bowen Zheng
Really interesting results, suggesting that long-term place field stability is not from long-lasting synaptic plasticity, but is instead from an increased *probability of plasticity induction* in subsequent days.
Formation of an expanding memory representation in the hippocampus - Nature Neuroscience
Multiday imaging of CA1 neurons during learning reveals that the representation stabilizes as the number of readily retrievable, information-rich and stable place cells increases and suggests novel me...
www.nature.com
July 17, 2025 at 12:14 PM
Reposted by Bowen Zheng
Who doesn't like a good model of the brain? Yet, from simple regression to neural nets, some limitations keep popping up (e.g., overfitting) @mjwolff.bsky.social & I saw some cool but puzzling data, ran a quick analysis & found one such limitation: model mimicry. Now in #naturecommunications &🧵below
Model mimicry limits conclusions about neural tuning and can mistakenly imply unlikely priors
Nature Communications - Model mimicry limits conclusions about neural tuning and can mistakenly imply unlikely priors
rdcu.be
July 2, 2025 at 8:50 AM
Reposted by Bowen Zheng
My latest Aronov lab paper is now published @Nature!

When a chickadee looks at a distant location, the same place cells activate as if it were actually there 👁️

The hippocampus encodes where the bird is looking, AND what it expects to see next -- enabling spatial reasoning from afar

bit.ly/3HvWSum
June 11, 2025 at 10:24 PM
Reposted by Bowen Zheng
It occurred to me last night that microwaves are kinda like LLMs.
Remember when they first came out, people bought microwave cookbooks, and special vented plastic cookware, and they were going to change the way we cooked and ate forever?
Now we use them for defrosting mince, and reheating cold tea.
May 8, 2025 at 8:39 AM
Reposted by Bowen Zheng
We’re excited about this project! We present a model of motor savings without the need for context.
April 2, 2025 at 1:37 PM
Reposted by Bowen Zheng
Kilosort4 detects a LOT of neurons; I recorded 15k neurons in one year 🤯 Traditionally, one would curate these detected units to see if they are well-isolated single neurons. This is not feasible anymore, so today let's look at three options that are out there to automate this process! 🤖👇
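One common flavor of automated curation (a generic sketch, not the specific tools the thread covers) is thresholding per-unit quality metrics. The metric names and cutoffs below are illustrative assumptions, not values from Kilosort4 or any particular pipeline.

```python
# Hypothetical metric-based curation: keep a sorted unit only if its
# quality metrics pass simple thresholds. Names and cutoffs are made up.
def auto_curate(units, max_isi_violation=0.01, min_amplitude_uv=50.0):
    """Return units whose refractory-period (ISI) violation rate and
    spike amplitude look consistent with a well-isolated single neuron."""
    return [
        u for u in units
        if u["isi_violation_rate"] <= max_isi_violation
        and u["amplitude_uv"] >= min_amplitude_uv
    ]

units = [
    {"id": 0, "isi_violation_rate": 0.002, "amplitude_uv": 80.0},  # clean
    {"id": 1, "isi_violation_rate": 0.050, "amplitude_uv": 90.0},  # contaminated
    {"id": 2, "isi_violation_rate": 0.001, "amplitude_uv": 20.0},  # low amplitude
]
kept = auto_curate(units)  # only unit 0 passes both thresholds
```

With 15k units, replacing manual inspection with a pass like this is what makes curation tractable; the hard part is choosing metrics and cutoffs that match what a human curator would accept.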
March 27, 2025 at 10:38 AM
Reposted by Bowen Zheng
I’m hiring a full-time lab tech for two years starting May/June. Strong coding skills required, ML a plus. Our research on the human brain uses fMRI, ANNs, intracranial recording, and behavior. A great stepping stone to grad school. Apply here:
careers.peopleclick.com/careerscp/cl...
......
Technical Associate I, Kanwisher Lab
MIT - Technical Associate I, Kanwisher Lab - Cambridge MA 02139
careers.peopleclick.com
March 26, 2025 at 3:09 PM