john sakon
@johnsakon.bsky.social
I’m a neuroscientist studying memory @ UCLA
🌌 *Were most of your stars out?* 🌌
this was just my personal opinion but
November 24, 2025 at 9:03 PM
Similar vibes in this one
elifesciences.org/reviewed-pre...
November 23, 2025 at 6:09 PM
Recently saw this played live and there’s no way it’s outside the top 25
October 4, 2025 at 4:46 AM
Done. It’s insane, Dems caving on their morals for ABSOLUTELY NOTHING. This is not an issue anyone votes on! 26th out of 27 across age groups!

It’s a Fox News talking point that’s used to distract from Rs ushering in fascism. @ezraklein.bsky.social
September 29, 2025 at 7:58 PM
This was a real team effort, with fellow lead authors
Yuanyi Ding and Soraya Dunn, and senior authors
Itzhak Fried and Vwani Roychowdhury.

Shout out to our collaborators at U. of Iowa Neurosurgery and our selfless participants, who performed research despite no direct benefit to themselves. PS/6
August 28, 2025 at 7:42 PM
Now here’s the really cool part: theory predicts medial temporal lobe (MTL) first stores memories, but cortex is the final storage site. Our models show this after one night of sleep! MTL models decode presleep, while frontal cortex (FC) decodes postsleep, but not vice versa. 5/6
August 28, 2025 at 7:39 PM
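The asymmetry in the post above (MTL decodes presleep, frontal cortex decodes postsleep, but not vice versa) can be illustrated with a toy cross-period analysis. Everything below is simulated stand-in data, and the nearest-centroid classifier is a deliberately simple substitute for the actual transformer models:

```python
import numpy as np

rng = np.random.default_rng(1)

def centroid_decoder(X_train, y_train, X_test):
    """Nearest-centroid classifier: a minimal stand-in for the
    region-specific models (train in one sleep period, test in another)."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = ((X_test[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

def acc(X_tr, y_tr, X_te, y_te):
    return (centroid_decoder(X_tr, y_tr, X_te) == y_te).mean()

# Toy data: 8 concepts, 20 units per "region"; concept-tuned firing exists
# only in the period where that region carries the trace (by construction)
n_trials, n_units, n_concepts = 200, 20, 8
y_pre = rng.integers(0, n_concepts, n_trials)
y_post = rng.integers(0, n_concepts, n_trials)
tuning = rng.normal(size=(n_concepts, n_units))  # invented tuning curves

def noise():
    return rng.normal(scale=0.5, size=(n_trials, n_units))

mtl_pre, mtl_post = tuning[y_pre] + noise(), noise()  # MTL: presleep only
fc_pre, fc_post = noise(), tuning[y_post] + noise()   # FC: postsleep only

mtl_within = acc(mtl_pre, y_pre, mtl_pre, y_pre)   # high
mtl_cross = acc(mtl_pre, y_pre, mtl_post, y_post)  # near 1-in-8 chance
fc_within = acc(fc_post, y_post, fc_post, y_post)  # high
fc_cross = acc(fc_post, y_post, fc_pre, y_pre)     # near 1-in-8 chance
```

Decoding within the informative period succeeds, while transferring the same decoder to the other period falls to roughly the 1-in-8 chance level, mirroring the reported asymmetry.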
Pooling eight concepts from the episode people tend to remember, our models successfully predict which concept people are about to recall a few seconds before they say it! We tested people's memory both before and after sleep, and we achieve above-chance decoding for both. 4/6
August 28, 2025 at 7:38 PM
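For the above-chance claim in the post above, a standard sanity check is an exact binomial test against the 1-in-8 chance level for eight pooled concepts. This is a generic sketch, not the paper's actual statistics, and the trial counts below are made up:

```python
from math import comb

def binomial_pvalue(n_correct, n_trials, chance=1 / 8):
    """P(X >= n_correct) under Binomial(n_trials, chance): the probability
    of decoding at least this many recall events correctly by guessing."""
    return sum(
        comb(n_trials, k) * chance**k * (1 - chance) ** (n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )

# Hypothetical example: 30 of 80 recall events decoded correctly,
# tested against the 1-in-8 guessing rate for eight concepts
p = binomial_pvalue(30, 80)
```

In practice a permutation test (shuffling concept labels and re-decoding) is the more common control for neural decoders, since it respects temporal correlations in the data.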
After they finish watching, participants recall everything they can from the episode. We then test if our model, using neural patterns recorded during memory recall, can predict when people are about to recall concepts from the episode. E.g. the show's main character (J. Bauer). 3/6
August 28, 2025 at 7:38 PM
Our transformer-based model takes only two inputs: unsorted voltage spikes recorded during episode viewing and labels of the concepts (e.g. main characters or scene locations) currently onscreen. We then train individual models for each person that watches the episode. 2/6
August 28, 2025 at 7:36 PM
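The pipeline in the post above (unsorted voltage spikes plus onscreen concept labels, one model per participant) can be sketched with toy data. The real model is a transformer; the ridge decoder here is just a minimal hypothetical stand-in, and every size and value below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy recording: 40 unsorted channels, 60 s of episode viewing
n_channels, duration, n_concepts, bin_s = 40, 60.0, 8, 1.0
spike_channels = rng.integers(0, n_channels, size=5000)  # which channel spiked
spike_times = rng.uniform(0, duration, size=5000)        # when it spiked (s)

# Concept label onscreen during each 1 s bin (0..7), e.g. character IDs
n_bins = int(duration / bin_s)
labels = rng.integers(0, n_concepts, size=n_bins)

# Step 1: bin the unsorted spikes into per-channel counts per time window
X = np.zeros((n_bins, n_channels))
for ch, t in zip(spike_channels, spike_times):
    X[min(int(t / bin_s), n_bins - 1), ch] += 1

# Step 2: fit one decoder per participant mapping counts -> concept
# (ridge regression onto one-hot labels, standing in for the transformer)
Y = np.eye(n_concepts)[labels]
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_channels), X.T @ Y)

# Step 3: decoded concept for each window = argmax over concept scores
pred = (X @ W).argmax(axis=1)
train_acc = (pred == labels).mean()
```

Working from unsorted spikes skips the spike-sorting step entirely, which is part of what makes the two-input setup appealing: only raw threshold crossings and the concept labels are needed.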
Excited to present our new work reading minds!

Ok, not *that* kind of mind reading, but we have created a deep learning method that uses single-neuron recordings from people watching TV episodes to predict when they recall specific memories from the episode. 1/6
August 28, 2025 at 7:35 PM
I’ve been to the museum, and 1) it’s far and 2) it would have been way easier to hit the vehicle when the car was coming towards the building instead of when it turned left and was driving away
March 24, 2025 at 8:08 PM
Thanks to the Trump admin the past month I have been inundated with scam phone calls. I haven’t lived in CT for 20 years so any time I get a call from an 860 I know they’re just spamming by area.

Just one more way life has gotten worse.
March 20, 2025 at 8:17 PM
Came across these 8 years ago today
February 22, 2025 at 12:33 AM
🌟The Democrats🌟
January 8, 2025 at 12:07 AM
Why does MIT expect me to know anything about MIT? And presumably customize my letter as such?
December 3, 2024 at 8:55 PM
Looks good to me. What’d he do now?
November 21, 2024 at 10:24 PM
Sure. I guess.
November 17, 2024 at 11:42 PM