Brad Aimone
@jbimaknee.bsky.social
Computational neuroscientist-in-exile; computational neuromorphic computing; putting neurons in HPC since 2011; dreaming of a day when AI will actually be brain-like.
The analog computing in neurons, particularly within dendrites, likely allows the brain to compress a lot of complex computation into a very small space. We absolutely need to capture that in neural computational models and hardware. But that should be to complement digital, not to bury it.
November 19, 2025 at 4:30 PM
If analog were so amazing, evolution wouldn't have invented spikes to scale up neural computation. C. elegans and similar purely analog neural systems would rule the earth.

Digital computers (like the ones everyone is reading this post on) rule the day because digital is, on average, better for computing.
November 19, 2025 at 4:30 PM
Bottom line, it is just part of the cost. We get paid to do science, most of us by our communities through grants in some form. Part of doing that science is communicating and sharing those results. That's part of the cost, it is part of the budget, it is part of the obligation.
November 18, 2025 at 6:13 PM
All sharing (data, code, etc) is a lot of work. I recall spending many days reformatting and cleaning up the ugly Matlab code from my PhD thesis into a form suitable to share when requested by another group. This was before code sharing was standard. I learned to have sharing in mind from the start.
November 18, 2025 at 6:13 PM
I guess I'm confused; I don't know any theorist that would ask for raw unpublished data, ask for extensive processing and analysis, and then just include that in a paper without including those people as authors. That obviously would be sketchy...
(and useless, as any reviewer should question that)
November 18, 2025 at 6:01 PM
Obviously one should give credit. Just as an experimentalist who bases their experimental design on prior theoretical work should cite them and give credit. No one says otherwise.
November 18, 2025 at 4:59 PM
I don't think experimentalists appreciate that a good data set can enable citations far beyond imagination. MNIST, CIFAR, etc have hundreds of thousands of cites.

People complain about citations, but they are a currency and they are valid, especially at scale. I'm grateful for any citations I get.
November 18, 2025 at 2:06 PM
This confused me: did federal, taxpayer-funded grants pay for the research? If so, then yes, it should be free.
November 18, 2025 at 1:33 PM
This isn't to knock experiments, but theory has to push experimentalists to new ways of thinking, not the other way around.

This happened in mol bio 25 years ago. When the genome was sequenced, people initially still thought one gene at a time. But there was a switch and people now think bigger.
November 17, 2025 at 2:53 PM
I remember that thread!

This is easily apparent in meetings like SfN, where I suspect 50% of the abstracts could be from 2015 or even 2005 (we should have an LLM try to guess the years of SfN abstracts...). The rate is too slow. We have to think differently.
November 17, 2025 at 2:49 PM
This doesn't scale. It will never scale. For every "I need to see how L2/3 V1 neurons interact with L2/3 V2 neurons" answered, there are about 10 more pairwise questions that need to be answered.

We *have* to stop thinking about one question at a time.
November 17, 2025 at 1:50 PM
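[A rough back-of-envelope on the scaling point above; the numbers are only illustrative, not from the post: with on the order of 100 cortical areas and roughly 6 layers each, there are ~600 candidate populations, and the pairwise interaction questions number about n(n - 1)/2 = 600 x 599 / 2 ≈ 180,000. Answering them one dedicated experiment at a time is hopeless.]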
A challenge in neuro is that data is too often collected in the context of a narrow experiment, such that the data isn't useful for the next question. It's optimized for high-impact papers, PhD theses, etc. It isn't meant to raise all ships. So theorists always have to ask for more.
November 17, 2025 at 1:50 PM
So stay tuned! Linking applications like this and neuromorphic hardware to connectomes and functional data is the next step. And of course, solving sparse linear systems is a powerful and well-studied area of applied math, so incorporating that domain knowledge into neuroscience will be exciting! 10/10
November 16, 2025 at 4:16 PM
This is just a start. Can a more motor cortex-like model do the same or even better? This gives us a legit math application to target, but more importantly it gives us a clear path to go from the weakly constrained "can my model solve this task" to "can it solve it efficiently" 9/
November 16, 2025 at 4:16 PM
What is exciting about this application is that the model is not learned, the network is defined by the math for the system that needs to be solved. This is something that could be genetically encoded into a circuit (think animals that can walk at birth). This breaks us out of UFA. 8/
November 16, 2025 at 4:16 PM
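[To make "the network is defined by the math" concrete, here is the generic textbook version of the idea; this is only an illustration, not necessarily the exact formulation used in the paper. For a sparse system A x = b with diagonal part D, the Jacobi splitting

x^(k+1) = (I - D^-1 A) x^(k) + D^-1 b

already reads as a recurrent network: fixed recurrent weights W = I - D^-1 A, fixed input drive D^-1 b, and the network state converges to the solution. Every weight is read directly off the system to be solved; nothing is trained.]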
[side note: "But the brain doesn't do math!" is a common refrain, but that conflates conscious perception with implicit solutions. Of course our brains do math; all cognition is mathematically rich and complex. It simply isn't generic; it is specific to the task and region.] 7.5/
November 16, 2025 at 4:16 PM
In this study, we show that a motor cortex-like circuit running on brain-like hardware can achieve near-ideal scaling and efficiency when solving complex physics tasks, specifically the sparse linear system arising from a finite element model. 7/
November 16, 2025 at 4:16 PM
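[For a sense of what the kind of problem in 7/ looks like, here is a minimal, purely illustrative sketch in plain NumPy; it is not the paper's model or neuromorphic implementation, and the matrix is just a toy 1-D finite-element-style Laplacian. The point is the mapping from the system to an untrained recurrent update, not the speed.

import numpy as np

# Toy stand-in for a finite element system: the 1-D Laplacian stiffness matrix
# (tridiagonal [-1, 2, -1]), which is sparse and symmetric positive definite.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)  # constant "forcing" term

# "Network defined by the math": the Jacobi splitting gives fixed recurrent
# weights W = I - D^-1 A and a fixed input drive c = D^-1 b. Nothing is learned.
D_inv = 1.0 / np.diag(A)
W = np.eye(n) - D_inv[:, None] * A
c = D_inv * b

x = np.zeros(n)          # network "state"
for _ in range(10000):   # recurrent dynamics; plain Jacobi converges here, just slowly
    x = W @ x + c

print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # relative residual, should be small
]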
So back to our paper. Our growing belief is that the more we find that a neural circuit is *efficient* at something, the more likely it is that the brain is doing that type of computation. The more brain-like the circuit, the more likely the brain is performing that type of function. Basically, can we work backwards? 6/
November 16, 2025 at 4:16 PM
So we need another constraint for neural computation beyond "can neurons do the task?"
One approach is to consider costs. There is huge evolutionary pressure for whatever the brain does to be space-, time- and energy-efficient. This is likely the driving force behind why neural circuits are what they are. 5/
November 16, 2025 at 4:16 PM
But it is worthless for neuroscience. UFA is so powerful it becomes too weak as a constraint. So even if I have an ANN that does vision, it does not necessarily tell me *anything* about how the visual cortex implements vision. It could, if I'm lucky, but it may not. And that is the problem. 4/
November 16, 2025 at 4:16 PM
An appeal of the success of ANNs is that we now have "neurons" solving real tasks. And maybe they can teach us?
The power of ANNs lies in universal function approximation (UFA). Coarsely stated, for essentially any function, there exists an ANN that can approximate it. That's cool! And it has given us modern AI! 3/
November 16, 2025 at 4:16 PM
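[A toy illustration of what 3/ means by universal function approximation; this is my sketch, not part of the thread. Even a single hidden layer of random tanh units, with only the linear readout fit by least squares, approximates an arbitrary smooth 1-D function well.

import numpy as np

# One hidden layer of random tanh features; only the readout is fit.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 500)[:, None]
y = np.sin(3 * x) + 0.5 * np.cos(x)   # arbitrary target function

W_in = 3.0 * rng.normal(size=(1, 200))  # random, untrained hidden weights
b_in = rng.normal(size=200)
H = np.tanh(x @ W_in + b_in)            # hidden activations
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout

print(np.max(np.abs(H @ w_out - y)))  # max approximation error, should be small
]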
A big challenge in neuroscience is that we can't concretely define the computations different brain regions perform. We can guess at it based on intuition (cerebellum does control, hippocampus does associative memory, etc.). But we lack concrete formalisms that we can tie to circuits. 2/
November 16, 2025 at 4:16 PM
I think the limits to collecting data make theory and algorithms even more important. It is expensive and difficult to map connectomes, record neurons in bulk, etc. It feels like we need stronger guides for what to prioritize: are the questions and model systems from our grandparents still right?
November 15, 2025 at 6:00 PM