Naomi Saphra
@nsaphra.bsky.social
Waiting on a robot body. All opinions are universal and held by both employers and family.

Literally a professor. Recruiting students to start my lab.
ML/NLP/they/she.
Pinned
I wrote something up for AI people who want to get into bluesky and either couldn't assemble an exciting feed or gave up doomscrolling when their Following feed switched to talking politics 24/7.
The AI Researcher's Guide to a Non-Boring Bluesky Feed | Naomi Saphra
How to migrate to bsky without a boring feed.
nsaphra.net
Reposted by Naomi Saphra
Fatma @fguney.bsky.social gets the best comment award 😆
November 28, 2025 at 4:22 PM
Reposted by Naomi Saphra
Oof
November 28, 2025 at 2:52 PM
Reposted by Naomi Saphra
A thread of directed acyclic graphs (DAGs) that look like record covers... because that's EXACTLY what the world needs

1. Huey Lewis and the News: link.springer.com/chapter/10.1...
November 28, 2025 at 11:36 AM
Reposted by Naomi Saphra
Happy Thanksgiving
November 27, 2025 at 3:57 PM
Reposted by Naomi Saphra
the only time i ever gave back to my community (cinephile dorks whose parents claim they can't see it but we suspect are messing with us a little)
November 27, 2025 at 1:19 AM
Reposted by Naomi Saphra
Fifteen Years

xkcd.com/3172/
November 26, 2025 at 10:32 PM
Reposted by Naomi Saphra
Wow 80% bad-faith responses, and people lecturing the creator of Flask here and the creator of Django/datasette in the comments on why AI is useless for software engineering...
Is this platform still massively against AI or has it moved more towards acceptance?
November 26, 2025 at 12:15 PM
Reposted by Naomi Saphra
which I know from personal inspection. What it had was the biggest (n-gram) language model anyone had yet built. @nsaphra.bsky.social et al. have a nice paper on this analogy. arxiv.org/abs/2311.05020
First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models
Many NLP researchers are experiencing an existential crisis triggered by the astonishing success of ChatGPT and other systems based on large language models (LLMs). After such a disruptive change to o...
arxiv.org
November 26, 2025 at 3:15 PM
Reposted by Naomi Saphra
🎉 "High-dimensional neuronal activity from low-dimensional latent dynamics: a solvable model" will be presented as an oral at #NeurIPS2025 🎉

Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.

www.biorxiv.org/content/10.1...
1/2
September 19, 2025 at 8:01 AM
Reposted by Naomi Saphra
Finally somebody is doing something about the podcast shortage
Starting to regret inventing the podcast
November 25, 2025 at 1:00 PM
Reposted by Naomi Saphra
Excited to announce that I’ll be presenting a paper at #NeurIPS this year! Reach out if you’re interested in chatting about LM training dynamics, architectural differences, shortcuts/heuristics, or anything at the CogSci/NLP/AI interface in general! #NeurIPS2025
November 25, 2025 at 2:27 PM
Reposted by Naomi Saphra
Looking forward to sharing our work at #NeurIPS2025 next week!

Session 6 on Fri 12/5 at 4:30-7:30pm, Poster 2001 ("a space odyssey")

Details on this thread by the brilliant lead author @annhuang42.bsky.social below:
📍Excited to share that our paper was selected as a Spotlight at #NeurIPS2025!

arxiv.org/pdf/2410.03972

It started from a question I kept running into:

When do RNNs trained on the same task converge/diverge in their solutions?
🧵⬇️
November 24, 2025 at 5:16 PM
Reposted by Naomi Saphra
literally unusable
Nano banana still can't do Steller's jays. Total California erasure. It probably makes redwood cones into pinecones or some shit too.
November 24, 2025 at 8:20 AM
I've written many reviews and received several top reviewer awards. I've also written some absolute dogwater critiques based on skimming at the last second with a fever. My point is that it's totally random: it's not just whether you rolled a decent reviewer but whether they've had lunch that day
November 23, 2025 at 11:58 PM
Engagement KPIs have been responsible for infinite scroll UI traps, RecSys radicalization spirals, public shame brigades, and basically every modern ailment novel to the past decade. At some point, you have to recognize that the problem isn't any one technology, it's the metric.
The company essentially turned a dial that made ChatGPT more appealing and made people use it more, but sent some of them into delusional spirals.

OpenAI has since made the chatbot safer, but that comes with a tradeoff: less usage.
November 23, 2025 at 8:12 PM
Reposted by Naomi Saphra
the economic success of the U.S. is significantly built on the land grant universities and in particular their excellent agricultural science tradition.
Really important to stress that the Crown Jewels of the US higher education system were never the Ivies or elite SLACs (other countries have equivalents of these) but the well-funded, large, cheap, and excellently staffed public state university systems bringing high quality education to the masses.
One of the bragging rights that the US ed system had in the 20th century is that we didn't have education tracks. Essentially, any kid could go to a CC or state school & major in whatever they wanted to (obviously an oversimplification). I fear this aspect of the American dream is dying.
November 23, 2025 at 5:26 PM
Reposted by Naomi Saphra
I had a popular account with a valuable audience and my Twitter payout was $80 a month-ish, to the point where I disabled monetization instead of uploading my ID. Payouts are only material if you live in a developing country, so “guy in Nigeria posting right-wing Amerislop” has taken over the site.
Twitter pays people based on engagement (views, retweets, comments, etc). It appears that many MAGA accounts are based abroad and they use AI technology to generate low-effort rage bait.

My guess is that this will get worse as AI tech improves. For instance, fake videos of minorities doing crime.
November 23, 2025 at 3:31 PM
Reposted by Naomi Saphra
Very important point! We've made arguments from a computational perspective that low-variance features can be computationally relevant (bsky.app/profile/lamp...), but it's much cooler to see it demonstrated on a model of real neural dynamics
“Our findings challenge the conventional focus on low-dimensional coding subspaces as a sufficient framework for understanding neural computations, demonstrating that dimensions previously considered task-irrelevant and accounting for little variance can have a critical role in driving behavior.”
Neural dynamics outside task-coding dimensions drive decision trajectories through transient amplification
Most behaviors involve neural dynamics in high-dimensional activity spaces. A common approach is to extract dimensions that capture task-related variability, such as those separating stimuli or choice...
www.biorxiv.org
November 23, 2025 at 5:05 PM
Reposted by Naomi Saphra
Nice example of Simpson’s Paradox in this post.

Minor league umpires have a higher accuracy rate on ball-strike calls than major league umpires but

(a) they are worse on easy calls and
(b) they are worse on hard calls.

blogs.fangraphs.com/your-final-p...
Your Final Pre-Robo-Zone Umpire Accuracy Update
This is the last time we’ll get to judge umpire accuracy without the ABS challenge system. Where do umpires stand, and how might we expect their accuracy to change once the robots get involved?
blogs.fangraphs.com
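The umpire pattern above is classic Simpson's paradox: a group can trail on every subgroup yet lead overall if it faces an easier mix. A minimal sketch with made-up numbers (not the FanGraphs figures) shows how worse accuracy on both easy and hard calls can still yield a higher overall rate:

```python
# Hypothetical (accuracy, number of calls) per call difficulty.
# These numbers are invented for illustration, not from the article.
minor = {"easy": (0.97, 9500), "hard": (0.60, 500)}
major = {"easy": (0.99, 8000), "hard": (0.70, 2000)}

def overall(groups):
    """Pooled accuracy: total correct calls over total calls."""
    correct = sum(acc * n for acc, n in groups.values())
    total = sum(n for _, n in groups.values())
    return correct / total

# Major league umps are better on BOTH easy and hard calls...
assert major["easy"][0] > minor["easy"][0]
assert major["hard"][0] > minor["hard"][0]

# ...yet minor league umps come out ahead overall (about 0.95 vs 0.93),
# because they see a far easier mix of pitches.
assert overall(minor) > overall(major)
```

The driver is the mix: in this sketch minor league umps see hard calls only 5% of the time versus 20% in the majors, so the pooled rate rewards the easier workload, not the better judgment.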
November 22, 2025 at 9:54 PM
Reposted by Naomi Saphra
If we had to hear “Baby Shark” everywhere we went for two whole months out of the year we’d throw a fit, right?

And rightfully so.

So tell me, why do we put up with it when the song is “Frosty the Snowman”?
November 19, 2025 at 6:16 PM
my 1-year-old nephew is obsessed with ballet and Pavarotti (he sings along as only a baby who cannot yet talk would) and I'm dreading when he gets to preschool and the other toddlers are all stuffing him in lockers for his weird ballet thing
November 22, 2025 at 2:48 AM
Reposted by Naomi Saphra
SVMs are the only moral type of machine learning, which is why they’re still taught in ML classes. Unlike next token prediction, they are inherently good
November 21, 2025 at 9:02 PM
Reposted by Naomi Saphra
Folks, I don’t know how it’s possible, but it gets funnier.
November 21, 2025 at 3:19 PM