Sam Gershman
gershbrain.bsky.social
Professor, Department of Psychology and Center for Brain Science, Harvard University
https://gershmanlab.com/
I find these emails borderline outrageous because it's basically resume padding ("I worked with the great Professor Bergelson at HARVARD UNIVERSITY"). No one needs expert advisors in high school. Youth is for enjoying one's batshit crazy ideas without the cold shower of expert advice.
November 25, 2025 at 9:11 PM
Berlioz is an inspiration to all of us who aspire to take reasonable ideas to their unreasonable extremes.
November 25, 2025 at 4:28 PM
I love Feyerabend, and I think scientists should do whatever kind of science they want. But if they are going to publish a claim, with inferential statistics to back it up, then we want to be sure that the result is reproducible and not mining noise. We are currently awash in non-reproducibility.
November 12, 2025 at 12:46 PM
I agree that not all scientific research is hypothesis testing. But I think published research should be, even if it comes *after* some initial exploratory phase. That's how you ensure robustness and reproducibility.
November 12, 2025 at 12:20 PM
The biggest barrier is probably cultural; we're just not used to doing things this way, and I'm as guilty as anyone.
November 12, 2025 at 11:55 AM
The reason to do this is that it removes the incentive to p-hack, inflate claims, overestimate effect sizes, tell post hoc stories, and generally engage in random walk science.
November 12, 2025 at 11:55 AM
My favorite thing about eLife!
November 8, 2025 at 2:40 PM
Also, it is so cringe to force prostrate authors to cite your papers. Don't do that!
November 8, 2025 at 11:32 AM
Everyone would be happier if reviewers took this as their mandate rather than a "criticize everything" approach. We'd all spend a lot less time on peer review, and papers would get less mangled by the process.
November 8, 2025 at 11:22 AM
"Let thine eye be thy cook" - Henry V, Act 5, Scene 2
November 5, 2025 at 2:20 PM
If you're interested in a broader perspective on the connections between linear and softmax attention, as well as to fast weight programming, RNNs, state space models, and neurobiology, Kazuki has also written a primer:
arxiv.org/abs/2508.08435
Fast weight programming and linear transformers: from machine learning to neurobiology
November 4, 2025 at 10:33 AM
brilliant
October 30, 2025 at 5:25 PM
Is there a term for "solutions that solve the problem by not solving it"?
October 30, 2025 at 3:09 PM
There's another C&H cartoon, which I can't find right now, that's even more relevant: Calvin invents a robot so that he doesn't have to clean his room. The robot doesn't do anything, but when Calvin returns at the end of the day his room still isn't cleaned by him, so it worked!
October 30, 2025 at 3:09 PM