Brad Aimone
@jbimaknee.bsky.social
Computational neuroscientist-in-exile; computational neuromorphic computing; putting neurons in HPC since 2011; dreaming of a day when AI will actually be brain-like.
Our paper on a neuromorphic algorithm for solving finite element models is out, which is exciting for a number of reasons. But really I want to talk a bit about how this relates to understanding the brain and #NeuroAI. 🤖🧪🧠🧵 1/

www.nature.com/articles/s42...
Solving sparse finite element problems on neuromorphic hardware - Nature Machine Intelligence
Theilman and Aimone introduce a natively spiking algorithm for solving partial differential equations on large-scale neuromorphic computers and demonstrate the algorithm on Intel’s Loihi 2 neuromorphi...
www.nature.com
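If "sparse finite element problem" is fuzzy, here is a minimal (non-spiking) reminder of what it means in practice: discretizing a 1D Poisson equation with linear elements turns the PDE into a sparse linear system K u = b. Plain NumPy/SciPy, toy mesh size and load, and definitely not the algorithm in the paper; the paper's point is that systems like this can be solved natively with spiking dynamics.

```python
# Toy example: linear finite elements reduce -u'' = f on (0,1) with
# u(0) = u(1) = 0 to a sparse tridiagonal system K u = b.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

n = 99                        # interior nodes
h = 1.0 / (n + 1)             # uniform mesh spacing
main = 2.0 * np.ones(n) / h   # stiffness diagonal
off = -1.0 * np.ones(n - 1) / h
K = sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csr")

x = np.linspace(h, 1 - h, n)
b = h * np.ones(n)            # load vector for f(x) = 1
u = spsolve(K, b)             # conventional sparse direct solve

# exact solution of -u'' = 1 is x(1-x)/2; should match at the nodes to round-off
print(np.max(np.abs(u - x * (1 - x) / 2)))
```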
November 16, 2025 at 4:16 PM
Nothing like flying out of a city when all the SfN folks are flying in.

Satellite meetings are where it's at. 3 over 3 days was a little intense. But smaller meetings are more fun. Enjoy SfN everyone!
November 15, 2025 at 5:01 PM
I was amused to find myself quoted, kind of, in the last paragraph of this; don't know if I've ever been the punchline quote before!
Can #NeuromorphicComputing help reduce AI’s high #energy cost? Researchers see big potential in #EnergyEfficient systems inspired by the #HumanBrain. A PNAS Core Concept explainer: https://ow.ly/45rk50XkYt5

#AI #ArtificialIntelligence #LLMs #ChatGPT #NeuralNetwork #DataCenter
November 2, 2025 at 5:10 PM
This is a great thread, and I think it hits on one of the biggest challenges in neuroscience that I hope NeuroAI can impact.

Until we can rigorously define what computations are occurring in the brain, we can't make any real progress in our functional understanding. We need constraints. 🧠🤖🧪
I once saw a (very interesting) talk about sleep in which the speaker started by saying that we don't really know how to define sleep, and then proceeded to operationalize sleep in flies as basically periods when they are still for a long time. This got me thinking...
September 21, 2025 at 3:44 PM
We put the FlyWire connectome on the Loihi 2 neuromorphic platform

🤖🧠🧪🪰

arxiv.org/abs/2508.16792
Neuromorphic Simulation of Drosophila Melanogaster Brain Connectome on Loihi 2
We demonstrate the first-ever nontrivial, biologically realistic connectome simulated on neuromorphic computing hardware. Specifically, we implement the whole-brain connectome of the adult Drosophila ...
arxiv.org
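To give a flavor of what this kind of simulation involves, here is a toy leaky integrate-and-fire sweep in NumPy/SciPy over a random sparse matrix standing in for the real FlyWire weights (FlyWire has roughly 140k neurons; the size, weights, and constants below are made up, and Loihi 2 does the equivalent update natively in hardware rather than as a matrix product).

```python
# Toy LIF sweep over a sparse "connectome"; the random matrix is a
# placeholder for real FlyWire weights.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n = 1000                                    # neurons (FlyWire: ~140k)
W = sparse.random(n, n, density=0.01, random_state=rng, format="csr") * 0.5

v = np.zeros(n)                             # membrane potentials
spikes = rng.random(n) < 0.05               # seed activity
tau, v_th, bias = 20.0, 1.0, 0.05           # decay, threshold, constant drive

for t in range(100):
    v = v * np.exp(-1.0 / tau) + W @ spikes.astype(float) + bias
    spikes = v >= v_th                      # threshold crossing
    v[spikes] = 0.0                         # reset
    if t % 20 == 0:
        print(t, int(spikes.sum()))         # spike count per step
```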
August 26, 2025 at 1:46 AM
An interesting comment and discussion that gets to the deeper question of whether NeuroAI is a move towards something new or a repackaging of old, tired approaches with a shiny AI paint job.
This statement is frustrating because systems neuroscientists (e.g. @gunnarblohm.bsky.social) have spent many years trying to build biologically realistic, mechanistic network models that are largely ignored outside a small community. And now NeuroAI is going to discover this is important?
“NeuroAI should not remain limited to learning statistical relationships, but should also help in building mechanistic and causal models of neural activity. These models will incorporate biological properties of neural circuits, including cellular characteristics and network properties.”

💯💯💯
August 15, 2025 at 2:53 PM
Traffic in Seattle is really bad...
August 13, 2025 at 1:12 AM
I'm a big fan of computing with neurons, stem cells, organoids, and energy efficient AI.

But we really have to stop coming up with systems (bio or inorganic) that are a few thousand neurons and claiming wins. AI models have BILLIONS of neurons. GPUs are quite efficient for the models they run 🤖🧠🧪
August 9, 2025 at 2:08 PM
Reposted by Brad Aimone
Spiking neural networks people, this message is for you!

The annual SNUFA workshop is now open for abstract submission (deadline Sept 26) and (free) registration. This year's speakers include Elisabetta Chicca, Jason Eshraghian, Tomoki Fukai, Chengcheng Huang, and... you?

snufa.net/2025/

🤖🧠🧪
SNUFA 2025
Spiking Neural networks as Universal Function Approximators
snufa.net
August 7, 2025 at 11:56 AM
For a few years I have said that neuromorphic is specialized general-purpose computing, like GPUs, but with different advantages.

In this preprint I try to put some substance to that claim. There are real theoretical advantages, but they aren't obvious. 🧪🧠🤖 www.arxiv.org/abs/2507.17886
Neuromorphic Computing: A Theoretical Framework for Time, Space, and Energy Scaling
Neuromorphic computing (NMC) is increasingly viewed as a low-power alternative to conventional von Neumann architectures such as central processing units (CPUs) and graphics processing units (GPUs), h...
www.arxiv.org
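The flavor of the argument, as a back-of-envelope sketch with made-up constants (not the preprint's actual model): a dense matrix-vector multiply touches all N^2 weights every step, while an event-driven update only touches the synapses downstream of the neurons that actually spiked.

```python
# Back-of-envelope operation counts; spike_rate and fanout are made up.
def dense_ops(n):
    return n * n                          # every weight, every step

def event_driven_ops(n, spike_rate=0.02, fanout=1000):
    return n * spike_rate * fanout        # active neurons x their fan-out

for n in (10_000, 100_000, 1_000_000):
    print(n, dense_ops(n), int(event_driven_ops(n)),
          round(dense_ops(n) / event_driven_ops(n), 1))   # dense/event ratio grows with n
```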
July 29, 2025 at 2:04 PM
This is an interesting thread getting to the heart of the neuroscience / AI disconnect.

I think we need better "comp neuro for dummies" options, but there is a widely held view among engineers and computing people that neuroscientists obsess over details for no reason.
For trainees entering computational neuroscience or NeuroAI from an engineering background, where do you direct them to learn some neuroscience these days? Books, courses, ...?

And no... I'm not interested in scaring them off with Kandel!

🧠🤖, 🧠📈
July 19, 2025 at 11:32 PM
Dear editors:

If you want me to review a paper, give me a form with two entries: comments for authors, optional comments for editor. Maybe (maybe) ask for accept/revise/reject recommendation, but that is really your job.

Please don't ask me to answer 12!!! separate questions... 🧪
June 3, 2025 at 7:46 PM
Lately my feed has been full of stories like "The brain's learning is more complex than Hebb thought!" and "Does the adult brain have new neurons?". Hebb was almost 100 years ago and the neurogenesis "debate" is 30 years old. We are worse than Hollywood in terms of rehashing the same old stories 🧠🧪
June 2, 2025 at 4:29 PM
Bill Dally presenting the rise in necessary compute for AI

I see why this is good for Nvidia. But why would this be considered good for anyone else?
May 15, 2025 at 6:44 PM
At the @cra-hq.bsky.social CCC Computing Futures Symposium this week - exciting (and interesting) times to be talking about the future of computing research!

As part of this event, last night we demo'd neuromorphic and its potential impact on ModSim and #NeuroAI at the US Senate!
May 15, 2025 at 12:58 PM
With the demise of Twitter, I greatly miss the often critical but honest takes on new results. LinkedIn's AI community is all hype and often wrong. I would like to see Bsky prioritize honesty and truth over becoming an echo chamber, particularly with new #NeuroAI results that risk being misused 🧠🧪🤖
There is a strong anti-negative-comment bias on here. And how can you discuss if the negative part of the spectrum is unacceptable?
May 12, 2025 at 5:56 PM
This RealID chaos is so strange. I am quite certain I have had a compliant ID for about 10, maybe even 15 years. What is going on that some states are so backwards that they can't figure this out?

And I used to live in some pretty dysfunctional states that could even manage this.
May 7, 2025 at 1:49 PM
I'm unsure whether I agree with this. On the one hand, questions matter the most.

On the other hand, I'm not convinced we (as neuroscientists) know the right questions to ask.
While I would say my research is part of #NeuroAI, I don't actually think it's very useful to define a research field as a constellation of methodologies. We should organize primarily around questions, and then use whatever methods best answer them.
What other big picture/perspective pieces are part of the #neuroAI vision?

Any parts of the field you see not well-represented yet in the perspective literature?
April 29, 2025 at 5:12 PM
I'm not a fan of either, but at least with "ANNs to describe the brain" the idea is new. For 50+ years physicists have been trying to shoehorn the brain into mean field approaches with little justification beyond "it would be so convenient"

If you don't want to think about the brain, don't study it
April 23, 2025 at 6:21 PM
All right world, I think it is time to write...
April 4, 2025 at 2:43 PM
Has anyone looked into using LLMs to interpret the information content of words in different languages? For instance, German has many words for 'the' (thus presumably higher information content), and Spanish verbs capture contextual uncertainty much differently than English
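Per-token surprisal from a causal LM seems like the natural starting point. A rough sketch using Hugging Face transformers is below; GPT-2 and the example sentences are just placeholders (GPT-2 is mostly English-trained, so a multilingual model would be the fairer choice for cross-language comparisons), and note it scores tokens, not words.

```python
# Rough sketch: per-token surprisal (in bits) from a causal LM.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisal_bits(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # log-prob of each token given its left context (first token is unscored)
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    nats = -logprobs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
    return (nats / math.log(2)).tolist()    # nats -> bits, one value per token

print(surprisal_bits("Der Hund sieht die Katze."))
print(surprisal_bits("The dog sees the cat."))
```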
February 27, 2025 at 3:54 PM
My only comment about NIH indirects is that universities should pay postdocs and grad students more (a direct cost), and maybe, just maybe, seriously looking at where the money is going may eventually direct more $ to those who actually do the research.
February 13, 2025 at 1:00 AM
If the endless "some people optimized LLMs" news is starting to bore you, check out our preprint on neuromorphic-compatible neural circuits that can solve sparse linear systems.

arxiv.org/abs/2501.10526

This opens up some exciting new doors for brain-like algorithms! 🧪🧠🤖

#NeuroAI #Neuromorphic
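To be clear about what "neural circuits solving linear systems" can look like, the sketch below is not the circuit from the preprint; it is plain Jacobi iteration written as a rate-neuron update, just to show that units relaxing to a fixed point and solving A x = b are the same kind of computation.

```python
# Toy "neural" solver: Jacobi iteration as a rate-neuron update on a
# random sparse, diagonally dominant system (not the preprint's circuit).
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
n = 200
A = sparse.random(n, n, density=0.05, random_state=rng, format="csr")
A = A + sparse.eye(n) * (abs(A).sum(axis=1).max() + 1)   # force diagonal dominance
b = rng.standard_normal(n)

d = A.diagonal()                  # each unit's self-term
R = A - sparse.diags(d)           # off-diagonal "synaptic" couplings
x = np.zeros(n)                   # unit activities
for _ in range(200):
    x = (b - R @ x) / d           # each unit updates from its inputs
print(np.linalg.norm(A @ x - b))  # residual should be tiny at the fixed point
```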
January 28, 2025 at 3:22 PM
I've never been directly funded by NIH, but in honor of this amazing organization with great people, here is my talk from last fall when they hosted me. One of the best visits I have had anywhere.

oir.nih.gov/wals/2024-20...
How Neuromorphic Computing Can Help Us Understand the Brain | NIH Office of Intramural Research
oir.nih.gov
January 24, 2025 at 12:26 AM
I'm very excited that this review on scalable neuromorphic hardware, headed up by Dhireesha Kudithipudi, has finally come out in Nature🧪🧠🤖

rdcu.be/d7atq

Neuromorphic hardware is ready for prime time. There are things to keep advancing, but I think the bigger opportunities lie in how we use it!
Neuromorphic computing at scale
Nature - Approaches for the development of future at-scale neuromorphic systems based on principles of biointelligence are described, along with potential applications of scalable neuromorphic...
rdcu.be
January 23, 2025 at 2:07 PM