Josh Dudman
@dudman.bsky.social
husband, father, neuroscientist, senior group leader at Janelia @hhmijanelia.bsky.social, beard enthusiast, unrepentant dilettante, he|him
www.dudmanlab.org
https://orcid.org/0000-0002-4436-1057
+1. LLMs have taught me two things about myself. More of my thought is probably search-like than I had appreciated. And more of my language use turns on good-sounding statistical structure than I realized. “Poetics” in Weathersby’s terminology. I do hope that in both cases it is less than _all_ of what I do…
November 21, 2025 at 6:59 PM
Fascinating. I appreciate the detailed description. I agree modern LLMs are amazing, and the work here is really interesting. On the last bit: there are different ways to think, and only a subset resemble search, imo. A few things you say above do feel like search over existing code (e.g., it must be in NumPy, not JAX).
November 21, 2025 at 4:00 PM
It sounds like the prompts were to do curve fitting, so any GitHub repo doing curve fitting with stretched exponentials could be in the search space, right? One could also imagine the LLM making a mistake that turns out to be useful, like curve fitting for “imaging” (data) being scored as related to “images” or “vision”.
November 20, 2025 at 12:35 PM
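For illustration, a minimal sketch of the kind of semantic-similarity scoring imagined above, assuming the sentence-transformers package; the model name and example phrases are hypothetical, not from the thread.

```python
# Hypothetical illustration: how "imaging" could score as close to
# "images" / "vision" under an off-the-shelf embedding model.
# Assumes the sentence-transformers package; phrases are made up.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
phrases = ["calcium imaging data", "natural images", "visual neuroscience"]
emb = model.encode(phrases)  # shape: (3, embedding_dim)

def cosine(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for p, e in zip(phrases[1:], emb[1:]):
    print(f"sim('{phrases[0]}', '{p}') = {cosine(emb[0], e):.2f}")
```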
I would also imagine GitHub has many repos with Python code for fitting stretched exponentials. It seems reasonable that, given the prompts, it searches over repos and tries out programs that already exist. I don’t think “applied in neuroscience” means anything to an LLM; that’s part of its strength, in a way.
November 20, 2025 at 12:27 PM
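For context, a minimal sketch of the kind of stretched-exponential fit such repos typically contain, assuming SciPy’s curve_fit; the synthetic data and parameter values are illustrative only, not from the work being discussed.

```python
# Minimal sketch: fitting a stretched exponential (Kohlrausch function)
# f(t) = A * exp(-(t / tau)**beta) with SciPy. Synthetic data for illustration.
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, A, tau, beta):
    return A * np.exp(-((t / tau) ** beta))

rng = np.random.default_rng(0)
t = np.linspace(0.01, 10, 200)
y = stretched_exp(t, 1.0, 2.0, 0.6) + 0.02 * rng.standard_normal(t.size)

# p0 gives initial guesses; bounds keep tau positive and beta in (0, 2]
popt, pcov = curve_fit(
    stretched_exp, t, y, p0=[1.0, 1.0, 1.0],
    bounds=([0, 1e-6, 1e-6], [np.inf, np.inf, 2.0]),
)
print("A, tau, beta =", popt)
```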
Right, there are papers arguing the nonlinearity should be a free parameter in vision tuning curves, plus many Wikipedia etc. entries on the various functional forms. I was interested in your intuition, having so thoughtfully interacted with the LLM/training. Agreed that this is often what progress is. Thanks!
November 20, 2025 at 11:50 AM
I am curious whether you think of the AI scientist as being prompted well to find previous examples in its training set (e.g., @kordinglab.bsky.social ’s paper ieeexplore.ieee.org/abstract/doc...) vs. “coming up with it”, which implies more of a train of reasoning? Cool work, and the LLM is useful either way.
Learning the Nonlinearity of Neurons from Natural Visual Stimuli
Learning in neural networks is usually applied to parameters related to linear kernels and keeps the nonlinearity of the model fixed. Thus, for successful models, properties and parameters of the nonl...
ieeexplore.ieee.org
November 19, 2025 at 3:12 PM
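To make the “nonlinearity as a free parameter” idea concrete, a minimal sketch of a tuning-curve fit where the output exponent is fit rather than fixed, assuming SciPy; this is illustrative only, not the method of the linked paper, and all parameter values are made up.

```python
# Minimal sketch of "nonlinearity as a free parameter": fit a tuning curve
# r(s) = b + g * max(s - s0, 0)**alpha with the exponent alpha left free.
import numpy as np
from scipy.optimize import curve_fit

def tuning_curve(s, b, g, s0, alpha):
    return b + g * np.maximum(s - s0, 0.0) ** alpha

rng = np.random.default_rng(1)
s = np.linspace(0, 1, 100)                      # stimulus intensity
r = tuning_curve(s, b=1.0, g=8.0, s0=0.2, alpha=1.7)
counts = rng.poisson(r)                         # Poisson spike counts

popt, _ = curve_fit(
    tuning_curve, s, counts, p0=[1.0, 5.0, 0.1, 1.0],
    bounds=([0, 0, 0, 0.1], [np.inf, np.inf, 1.0, 5.0]),
)
print("fitted alpha =", popt[3])
```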
All sounds fascinating. Would love to coordinate, maybe at BonnBrain? We did some head-mounted video on mice years ago. It was pretty nauseating to watch, but it would be fun to get back to.
September 24, 2025 at 5:14 PM
Agreed. Would love to coordinate on this with you and others.
September 24, 2025 at 12:23 PM