Brad Aimone
@jbimaknee.bsky.social
Computational neuroscientist-in-exile; neuromorphic computing; putting neurons in HPC since 2011; dreaming of the day when AI will actually be brain-like.
Bill Dally presenting the rise in compute required for AI

I see why this is good for Nvidia. But why would this be considered good for anyone else?
May 15, 2025 at 6:44 PM
At the @cra-hq.bsky.social CCC Computing Futures Symposium this week - exciting (and interesting) times to be talking about the future of computing research!

As part of this event, last night we demo'd neuromorphic computing and its potential impact on ModSim and #NeuroAI at the US Senate!
May 15, 2025 at 12:58 PM
If the endless "some people optimized LLMs" news is starting to bore you, check out our preprint on neuromorphic-compatible neural circuits that can solve sparse linear systems.

arxiv.org/abs/2501.10526

This opens up some exciting new doors for brain-like algorithms! 🧪🧠🤖

#NeuroAI #Neuromorphic
January 28, 2025 at 3:22 PM
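For intuition, here is a minimal sketch of the flavor of the approach (a generic gradient-flow toy in NumPy/SciPy, not the circuit from the preprint): for a symmetric positive-definite A, recurrent dynamics in which each unit integrates its share of b - Ax settle at the solution of Ax = b.

    # Hedged sketch: gradient-flow dynamics dx/dt = b - A @ x, not the paper's circuit.
    import numpy as np
    from scipy.sparse import random as sparse_random

    rng = np.random.default_rng(0)
    n = 50
    M = sparse_random(n, n, density=0.1, random_state=0).toarray()
    A = M @ M.T + n * np.eye(n)         # a sparse-ish SPD system matrix
    b = rng.standard_normal(n)

    x = np.zeros(n)                     # "membrane" state of n units
    dt = 1.0 / np.linalg.norm(A, 2)     # step size small enough for stability
    for _ in range(5000):
        x += dt * (b - A @ x)           # each unit integrates its recurrent input

    print(np.allclose(x, np.linalg.solve(A, b), atol=1e-6))  # True

For SPD systems these dynamics contract to the unique fixed point at a rate set by the smallest eigenvalue; see the preprint for how a solver like this maps onto spiking, neuromorphic-compatible circuits.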
I'm not sure about the connection to SGD in large ML models. What I have always seen boils down to this figure: naively, as scale (# of inputs) goes up at a constant E/I ratio of inputs to a neuron, the expected value of firing stays the same, but the sensitivity of the firing rate to any variability grows.
December 20, 2024 at 2:04 PM
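A back-of-envelope version of that argument, with made-up numbers (80% excitatory inputs at weight +w and 20% inhibitory at -4w, so the mean drive cancels exactly; Bernoulli spiking): the expected net input stays flat as N grows, while its standard deviation grows like sqrt(N).

    # Toy demo with assumed numbers, not taken from the figure.
    import numpy as np

    rng = np.random.default_rng(1)
    w, p = 1.0, 0.1                                 # synaptic weight, spike probability
    for N in (100, 1000, 10000):
        nE = int(0.8 * N)                           # fixed 80/20 E/I split
        weights = np.r_[np.full(nE, w), np.full(N - nE, -4 * w)]
        spikes = rng.random((2000, N)) < p          # 2000 trials of Bernoulli inputs
        drive = spikes @ weights                    # net input to the neuron, per trial
        print(N, round(drive.mean(), 1), round(drive.std(), 1))
    # mean stays ~0 at every N; std grows like 0.6 * sqrt(N)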
My new office art
December 14, 2024 at 9:52 PM
This is all sorts of fun.
February 1, 2024 at 2:20 AM
2) More interestingly, the 2nd most popular choice reflects the uncertainty you would naively expect: shirts and jackets on Fashion MNIST; dogs and deer on CIFAR-10; 1's and 7's on MNIST. 3) This amazingly seems pretty robust to the precision of the probabilities. 7/
December 4, 2023 at 7:59 PM
The paper (which is pretty short) goes into all of the details. But the punchlines: 1) It works. With minimal impact on training, a majority vote of samples gets pretty close to deterministic accuracy in classification. 6/
December 4, 2023 at 7:59 PM
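To make "majority vote of samples" concrete, here is a toy illustration (a generic stochastic classifier, not the network from the paper): draw a label from each input's class probabilities S times, take the most common label, and compare it to the deterministic argmax.

    # Toy illustration with assumed stand-in scores, not the paper's model.
    import numpy as np

    rng = np.random.default_rng(2)
    logits = rng.standard_normal((100, 10))         # stand-in per-image class scores
    probs = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)

    deterministic = probs.argmax(1)                 # argmax of the exact probabilities
    samples = np.array([[rng.choice(10, p=pi) for pi in probs]
                        for _ in range(101)])       # S = 101 stochastic label draws
    vote = np.apply_along_axis(lambda c: np.bincount(c, minlength=10).argmax(),
                               0, samples)          # per-image majority vote
    print((vote == deterministic).mean())           # close to 1.0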