Vineet Tiruvadi, MD, PhD
@virati.bsky.social
Reverse Neuroengineer. Health AI ← Control Theory. Community Matters. vineet.tiruvadi.net

Research Fellow @harvardmed @bwh. Prev: @emorysom @gatech @hume_ai
I think a lot of folks working on adaptive DBS don't realize they're working on a canonical control-theoretic problem - without having any of the control-theoretic background needed to solve it.
November 12, 2025 at 5:49 PM
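The point in the post above can be made concrete with a minimal sketch: adaptive DBS is, at its core, a closed-loop control problem - measure a symptom biomarker, compare it to a target, and adjust stimulation accordingly. Everything below (the plant dynamics, gains, and names) is hypothetical and purely illustrative, not any actual device algorithm:

```python
# Hypothetical sketch: adaptive DBS framed as classical feedback control.
# A proportional-integral (PI) controller drives a measured biomarker
# (e.g., a band-power readout) toward a clinician-set target by adjusting
# stimulation amplitude. The "plant" dynamics here are entirely made up.

def simulate_adaptive_dbs(target=0.5, steps=50, kp=0.8, ki=0.1):
    biomarker = 1.0   # initial (elevated) symptom biomarker
    integral = 0.0    # accumulated error for the integral term
    history = []
    for _ in range(steps):
        error = biomarker - target
        integral += error
        stim = max(0.0, kp * error + ki * integral)  # PI control law
        # Toy plant: stimulation suppresses the biomarker, which otherwise
        # relaxes back toward its elevated baseline of 1.0.
        biomarker += 0.2 * (1.0 - biomarker) - 0.3 * stim
        history.append(biomarker)
    return history

trace = simulate_adaptive_dbs()
```

The integral term is what eliminates steady-state error - a standard control-theory result that an ad hoc threshold-crossing scheme (common in early adaptive DBS designs) doesn't give you.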
Contrasted with trying to reinvent your own approach to inference.
September 5, 2025 at 11:39 PM
Also: engineering approaches to inference are strictly better for complex systems. Period.

If your analysis pipeline is not amenable to engineering approaches to inference, then you're better off spending time finding a way to represent your data so that it becomes amenable.
September 5, 2025 at 11:39 PM
Why linearize?
May 3, 2025 at 12:02 AM
I assume a requirement of consistency? "The sky is blue and not any other color. The sky is red and not any other color." can be written mathematically, but it's quite different than a sentence that doesn't contradict itself.

Funny enough, I think they meant "in a set theoretic way" informally...
May 1, 2025 at 8:49 PM
$100 I can guess their demographics... best not to conflate conditional distributions with joint distributions...
May 1, 2025 at 8:46 PM
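The conditional-vs-joint distinction in the post above is easy to demonstrate numerically; here is a hypothetical two-variable toy example (the counts are invented for illustration):

```python
# Hypothetical example: P(A | B) and P(A, B) are different quantities.
# Joint:       fraction of ALL outcomes where both A and B hold.
# Conditional: fraction of B-outcomes where A also holds.

# Toy population: (has_trait_A, has_trait_B) -> count
counts = {
    (True, True): 30,
    (True, False): 10,
    (False, True): 20,
    (False, False): 40,
}
total = sum(counts.values())  # 100

p_joint = counts[(True, True)] / total                        # P(A, B)
p_b = (counts[(True, True)] + counts[(False, True)]) / total  # P(B)
p_cond = p_joint / p_b                                        # P(A | B)

# Here P(A, B) = 0.30 but P(A | B) = 0.60: conflating the two
# is off by a factor of 1/P(B).
```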
as one of those "we" you keep bringing up - I found that abstract + literature quite interesting and important.
May 1, 2025 at 8:32 PM
Reposted by Vineet Tiruvadi, MD, PhD
I believe this (friendly piece) aligns with your question about the absurdity of reducing strong emotions & compulsions to happenings in the brain. Bonus: it captures a diversity of researcher perspectives about the best ways to think about it.

www.thetransmitter.org/the-big-pict...
What, if anything, makes mood fundamentally different from memory?
To better understand mood disorders—and to develop more effective treatments—should we target the brain, the mind, the environment or all three?
www.thetransmitter.org
April 27, 2025 at 7:26 PM
I love the model comparison approach (closer to the medical inference engine) - but if your models are hyperparametrized, there's an implicit "big data" in there, no?

Maybe not "foundation," since that's more tightly linked to Transformers/LLMs, but I still don't see an explicit "small data" vision.
April 29, 2025 at 7:54 PM
Reposted by Vineet Tiruvadi, MD, PhD
I'd put these on the NeuroAI vision board:

@tyrellturing.bsky.social's Deep learning framework
www.nature.com/articles/s41...

@tonyzador.bsky.social's Next-gen AI through neuroAI
www.nature.com/articles/s41...

@adriendoerig.bsky.social's Neuroconnectionist framework
www.nature.com/articles/s41...
April 28, 2025 at 11:15 PM
Yup (where foundation ≡ big-data derived).

Has anyone outlined + whipped up votes + published an alternative vision in a similarly public way?
April 28, 2025 at 11:02 PM
I want to be supportive of my NeuroAI colleagues, but who is working on alternative visions?

What about one that seeks to bring ML/AI to where clinical neuro(engineering) is already working magic?
April 28, 2025 at 10:19 PM