Dan Levenstein
@dlevenstein.bsky.social
3.9K followers 1.3K following 1.2K posts
Neuroscientist, in theory. Studying sleep and navigation in 🧠s and 💻s. Assistant Professor at Yale Neuroscience, Wu Tsai Institute. An emergent property of a few billion neurons, their interactions with each other and the world over ~1 century.
Pinned
dlevenstein.bsky.social
Thrilled to announce I'll be starting my own neuro-theory lab, as an Assistant Professor at @yaleneuro.bsky.social @wutsaiyale.bsky.social this Fall!

My group will study offline learning in the sleeping brain: how neural activity self-organizes during sleep and the computations it performs. 🧵
Reposted by Dan Levenstein
itsneuronal.bsky.social
It's possible to get a first-order approximate understanding of RNNs performing relatively complex tasks.

[1906.10720] Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics share.google/ElT686dgAUIk...

But some tasks are harder than others 🤷‍♂️
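To make "first-order approximate understanding" concrete: the approach in the linked paper finds approximate fixed points of a trained RNN's zero-input dynamics and linearizes around them; eigenvalues near 1 pick out slow directions, and a continuum of them is the signature of a line attractor. A minimal NumPy sketch of that idea, using a toy random vanilla RNN rather than a trained sentiment model (illustrative only):

```python
import numpy as np

# Toy vanilla RNN: h_{t+1} = tanh(W h_t + b), input held at zero.
# In practice W and b would come from a trained network; here they are random.
rng = np.random.default_rng(0)
N = 64
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
b = rng.normal(scale=0.1, size=N)

def step(h):
    """One step of the autonomous (zero-input) RNN dynamics."""
    return np.tanh(W @ h + b)

def find_fixed_point(h0, lr=0.1, iters=5000):
    """Minimize q(h) = 0.5 * ||h - step(h)||^2 by gradient descent."""
    h = h0.copy()
    for _ in range(iters):
        f = step(h)
        r = h - f                        # residual h - f(h)
        Jf = (1.0 - f**2)[:, None] * W   # Jacobian of step() at h
        grad = (np.eye(N) - Jf).T @ r    # dq/dh
        h -= lr * grad
    return h, np.linalg.norm(h - step(h))

# Start from a state visited by the dynamics, then descend to a slow point.
h_init = step(rng.normal(size=N))
h_star, residual = find_fixed_point(h_init)
print(f"fixed-point residual: {residual:.2e}")

# Linearize around the (approximate) fixed point and inspect eigenvalues.
# |lambda| close to 1 marks slow directions; many such directions along a
# single axis would indicate line-attractor-like dynamics.
f_star = step(h_star)
J = (1.0 - f_star**2)[:, None] * W
eigvals = np.linalg.eigvals(J)
print("largest |eigenvalues|:", np.sort(np.abs(eigvals))[-5:])
```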
Reposted by Dan Levenstein
gershbrain.bsky.social
Yeah, I agree. There seems to be a pattern across various "adversarial" papers where they feel the need to take down the orthodoxy. I think it's more often "yes, and" rather than "no, but".
Reposted by Dan Levenstein
engeltatiana.bsky.social
We apply our model to survey the spiking irregularity across cortical areas and find that Poisson irregularity is a rare exception, not a rule. Our results show the need to include non-Poisson spiking in inferring neural dynamics from single trials.
dlevenstein.bsky.social
Language is a great example - not necessary for intelligence to emerge, but might be enough to bootstrap. Once you’re using language data, you’re not building intelligence de novo.

I’d bet social interactions are the opposite - you need them for intelligence to emerge but not to bootstrap.
dlevenstein.bsky.social
Always find it an interesting question which ingredients are necessary for intelligence to emerge from scratch, vs. which are necessary if you only want to bootstrap from an existing intelligent system…
ricardsole.bsky.social
Moreover, AGI might also require social interactions to become a reality, where cultural evolution (and extended mind) play a major role. It is not by chance that brain and culture evolved together, allowing complex minds to emerge @anilseth.bsky.social @mitibennett.bsky.social
Reposted by Dan Levenstein
agreco.bsky.social
here the trick is to unpack the word *understanding*

it’s not about what each neuron does, but the rules shaping the behaviour of biological systems!

"rules for development and learning in brains may be far easier to understand than their resulting properties"

👇👌
arxiv.org/abs/1907.06374
dlevenstein.bsky.social
It is possible that really appreciating this tweet requires having gone to college in the States around 2010 😅
dlevenstein.bsky.social
I’m at the restaurant and they’re playing my song and can you imagine anyone making a song right now called “Party in the USA”?
dlevenstein.bsky.social
The constraints and self-regulation are the special sauce. I like the point that understanding self-regulation could be a secret back door (“leverage”) to understanding computation (or maybe not needing to… 🥲)
dlevenstein.bsky.social
A human-comprehensible story about how the patterns of activation lead to a network’s competencies in real-world tasks, and how they come to do so with learning. Which we can back up with predictions and perturbations. TL;DR the dream of systems neuroscience.
dlevenstein.bsky.social
Because a deep RNN is a much simpler system than the brain, one that also operates through the parallel/distributed activity of connected input/output units, where the efficacy of connections plays a key role in its operation. If we can’t understand that, how can we hope to understand the brain?
dlevenstein.bsky.social
So I get that a Neuroscientist Couldn’t Understand a Microprocessor, and TBH I’m ok with that. But could a neuroscientist understand a deep RNN? Because that seems like a more pressing issue.

*assuming you think the brain operates through the parallel activity of many connected input/output units
Reposted by Dan Levenstein
shahabbakht.bsky.social
Regardless of what explainability/mech interp in AI is actually after, and whether or not they know what they’re searching for, we can confidently say they’re pursuing what systems neuroscience has pursued for decades, with very similar puzzles and confusions.
bayesianboy.bsky.social
What problem is explainability/interpretability research trying to solve in ML, and do you have a favorite paper articulating what that problem is?
dlevenstein.bsky.social
And if you’re looking for a postdoc not a faculty position, we have those too 😉
dlevenstein.bsky.social
Come do a postdoc at the Wu Tsai Institute!

WTI fellows have freedom to work with anyone at the institute, and preference is given to applicants who want to work on interdisciplinary projects with multiple faculty mentors.

If you’re interested in working with me, please reach out!
wutsaiyale.bsky.social
📣 Calling experimental, computational, or theoretical researchers!

WTI's Postdoc Fellowships application is now open, offering a competitive salary, structured mentorship, world-class facilities + more: wti.yale.edu/initiatives/...

Apply by November 10: apply.interfolio.com/174525

#KnowTogether
dlevenstein.bsky.social
The Wu Tsai Institute at Yale is hiring another faculty member in neurocomputation. Come work with us in a growing community at the interface of neuroscience and AI!

More info below 👇
wutsaiyale.bsky.social
📣 WTI is hiring for faculty positions! Are you interested in advancing our understanding of the brain + how it gives rise to cognition?

Two calls are open:

Open-rank search, Neurocomputation, deadline: 12.1.25
Senior search, Neurodevelopment, rolling review

🔗 wti.yale.edu/opportunities

#KnowTogether
dlevenstein.bsky.social
Sounds like a… bitter pill for them to swallow, eh? 😅🥁
dlevenstein.bsky.social
Kauffman Level 3 is when you get the superpowers 👍
dlevenstein.bsky.social
Ty! 🙏🙏🙏 We’ll have an updated preprint soon - with non-spatial representations (“splitter”, “lap”, etc. cells), an orthogonalized manifold, spatial cell “type” quantification, and sparse-lognormal connectivity.

Also a package+tutorial so you can easily train sequential pRNNs in your own environment!
dlevenstein.bsky.social
📌 This feed to see what the people who like the same things you like like. 🫧

bsky.app/profile/spac...
spacecowboy17.bsky.social
Welcome to the ✨For You✨ feed!

It finds people who liked the same posts as you, and shows you what else they've liked recently.

📌 Pin to add it to your top bar
❤️ Like the feed and repost to spread the goodness
dlevenstein.bsky.social
New feed based on your co-likers’ likes just dropped

bsky.app/profile/spac...