Dan Goodman
@neural-reckoning.org
Computational neuroscientist at Imperial College. I like spikes and making science better (Neuromatch, Brian spiking neural network simulator, SNUFA annual workshop on spiking neurons).
🧪 https://neural-reckoning.org/
📷 https://adobe.ly/3On5B29
I don't really understand what that means in practice. What I mean is that if you gave me the magic power to get any data I wanted, the results of any possible experiment, we still wouldn't know what to do to make sense of the brain.
November 17, 2025 at 7:54 PM
Yeah this is premature I agree. I find the family of backprop algorithms and related software packages useful for science but not much beyond that.
November 17, 2025 at 7:30 PM
But I bet a lot of people would describe the ANN as the "data-driven" approach. So yeah I agree with you there needs to be a tether but it doesn't need to be new data.
November 17, 2025 at 7:17 PM
I think it really depends on what your criteria are for that experimental grounding. For example, is a small recurrent SNN more or less tethered to data than a large ANN with connectivity derived from a connectomic database? I'd say the former because it's tethered by decades of data and models.
November 17, 2025 at 7:17 PM
I'm not sure there's any incentive in the universe that would get me to agree to be in charge of coordinating a thousand academics to agree to a single final text.
November 17, 2025 at 5:06 PM
Yeah but who wants to organise that?
November 17, 2025 at 3:48 PM
Same! I'd be totally fine with normalising this. Wouldn't scale to approaches that combine across large numbers of datasets though.
November 17, 2025 at 3:16 PM
Honestly, how many of those 3.6k are even people?

Anything that says AI is doing 6 months of work in a day when it still can't match parentheses consistently is going to make scientists raise an eyebrow and scroll on without clicking.
November 17, 2025 at 2:50 PM
Agree! I did a thread (on twitter, maybe) a couple of years back where I pointed out that the information gathering rate of hypothesis-testing science is 1 bit per experiment, i.e. often 1 bit per several months. We can't make fast progress like that! The naive Popperianism of some expt neuro must go.
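A back-of-envelope sketch of that 1-bit figure (the candidate-model count and experiment cadence below are illustrative assumptions, not numbers from the thread): a yes/no hypothesis test yields at most the Shannon entropy of its outcome, which peaks at 1 bit.

```python
import math

def bits_per_experiment(p):
    """Shannon entropy (in bits) of a binary experiment outcome with
    success probability p -- the most information one yes/no test can yield."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# A maximally informative binary experiment (p = 0.5) yields exactly 1 bit:
print(bits_per_experiment(0.5))  # 1.0

# A lopsided experiment (we were 90% sure of the outcome) yields far less:
print(round(bits_per_experiment(0.9), 3))  # 0.469

# Illustrative: distinguishing among 2**20 candidate models at ~1 bit per
# experiment, with one experiment every 3 months, takes decades:
bits_needed = 20  # log2(2**20)
years = bits_needed * 3 / 12
print(years)  # 5.0 -- and that's for a generously small hypothesis space
```

The second print makes the sharper point: most experiments test hypotheses we already half-believe, so the realised rate is well under 1 bit each.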
November 17, 2025 at 2:46 PM
🤮
November 17, 2025 at 9:38 AM
That's a really useful term indeed!
November 17, 2025 at 3:11 AM
It's a great situation to be in though! Might feel hard right now but what a unique position it'll put you in.
November 17, 2025 at 2:52 AM
Yeah but my impression is that there are big areas of neuroscience that theory people just don't read. Maybe I'm just speaking for myself though. 😂
November 17, 2025 at 2:48 AM
I'm trying to do this in my work, e.g. this recent preprint combining neuromodulators, SNNs and training networks to perform challenging tasks using backprop. Directly linking biological mechanisms to function. But it still feels like baby steps.

neural-reckoning.org/pub_neuromod...
Neuromodulation enhances dynamic sensory processing in spiking neural network models
Neuromodulators allow circuits to dynamically change their biophysical properties in a context-sensitive way. In addition to their role in learning, neuromodula...
neural-reckoning.org
November 17, 2025 at 2:47 AM
Hard to say what such an idea would look like but I think we can guess some of the properties it would need to have. It would have to be good enough that it could lead to understanding of a fully observable DNN. And it would need to include a link to biology. No current approaches fulfil these.
November 17, 2025 at 2:42 AM
That agreement can't be imposed by one side or the other, and it's not something that should be negotiated either since that would generate just a compromise that wouldn't actually work. It needs to be a new idea that shows both sides a better way to do things.
November 17, 2025 at 2:42 AM
You're right and it's been like this for decades. We probably won't find an easy solution but it's worth keeping on trying. My feeling is that it's not just about listening to what each other are doing, we need to find an agreed shared approach that can feasibly answer questions.
November 17, 2025 at 2:42 AM
Yeah you're probably right about that. What I would say though is that I feel like until we know what a satisfying solution might even look like, it's hard to know what a good starting place would be. That does feel like a good reason for doing some pure comp theory work with ML toolboxes.
November 17, 2025 at 2:34 AM
Just about to fall asleep and my brain is like 90% of the way there, but I think I'd agree with that. Seems more measured than what you're saying in this thread though? Maybe my sleepy brain is betraying me.
November 17, 2025 at 2:30 AM
Nice!
November 17, 2025 at 2:26 AM
More data would always be nice, but my impression is that we're not doing all we could with the data we've got. And we may be collecting the wrong data, or not the most efficient data that we would be collecting if we had a better idea of what we should be looking for.
November 17, 2025 at 2:04 AM
I'm not sure I agree that data collection and perturbation methods are the limiting factor, for the reasons below. 👇

bsky.app/profile/neur...
November 17, 2025 at 1:32 AM
What's missing is not data, nor models, it's a theoretical approach that is powerful enough to generate understanding of such a complex nonlinear system whose function we don't even understand (if indeed that's a meaningful thing to even say, which we also don't know).
November 17, 2025 at 1:29 AM
I'm not sure I agree. If we had all the data we wanted we still wouldn't know how to make sense of it. And we know this because we don't understand how DNNs work despite having 100% of the info we need and being able to do arbitrary perturbations.
November 17, 2025 at 1:29 AM