Aidan Clark
@aidanclark.bsky.social
I train models @ OpenAI.
Previously Research at DeepMind.

These opinions and words are mine alone.
Today’s NYT column could have been written in 1900 decrying the mid-ness of the horseless carriage. It says AI is a fizzling fad while mentioning, without any self-awareness, that AI can “predict my lecture […] anticipate essay prompts, research questions […] and then, finally, write a paper”
Opinion | The Tech Fantasy That Powers A.I. Is Running on Fumes
A.I. is just what we need in the post-fact era: less research and more predicting what we want to hear.
www.nytimes.com
March 29, 2025 at 2:29 PM
Someone has gotta start hyping MCBench on here, I’m lost
March 22, 2025 at 4:52 PM
If you can’t train big models, the best experience you can get is working on projects that develop and show a clear ability to do subtle deep learning empirical science. Hard to overstate how valuable that skill is.
February 28, 2025 at 4:51 PM
I realize for every 10 people voting Trump for Gaza, 9 of them were bots, but …. man …. I’d love to hear from that 10% right now.
February 4, 2025 at 10:26 PM
I wish academic ML was a bit more skeptical of papers and less skeptical of industry. I get that it sucks to not have visibility on details, but it doesn’t invalidate the results. On the flip side, there are too many papers whose messages are parroted despite sketchy experiments.
January 11, 2025 at 9:47 PM
By far the most common thing I see anti-LLM people say is “you’re an idiot for using a tool which can lie to you” … which is a hilariously false equivalence.
December 31, 2024 at 4:05 AM
Request for citation: I was taught that second-wave NN research (i.e. late 90s/early 00s) was killed by SVMs, largely because they had provable guarantees and people were scared of the non-convexity of SGD on NNs. Does anyone know of a contemporary citation of such an opinion?
December 29, 2024 at 5:02 PM
Had to re-up my CV for the first time in years to send for a potential talk -- proud I'm at the point where I feel like I can be a little silly on it and not have to convince the world I'm the World's Most Professional Man^TM
December 26, 2024 at 5:24 PM
It’s good to engage with people who disagree with you, but it’s critical to completely ignore anyone who is (a) anonymous [don’t debate a probable bot on Obamacare] or (b) not engaging in good-faith debate [stop signal-boosting Gary Marcus]
December 22, 2024 at 8:12 PM
It's amazing how quickly (approx. 22 hours) I go from being tired of meeting-ful days and excited for a couple weeks off to becoming excited to try my next one-random-idea...

It's a weird time but, man, AI is cool.
December 21, 2024 at 11:37 PM
One must not see an AI’s score on a benchmark and think it is as capable as a human achieving that same score, but in turn the score must not be disregarded. The capability is real; modern AI is just much spikier in intelligence than we are.

But how much will be possible w/ spiky intelligence? A lot.
December 20, 2024 at 7:32 PM
Reposted by Aidan Clark
agents are standing by
December 18, 2024 at 6:12 PM
Good framing of a problem we all see coming. There are a lot of social & financial arbitrage opportunities which have been too high effort to bother with (both offensively and defensively). That is about to change. I think the system will hold up better than expected, but we will need to be ready.
A whole bunch of systems that depend on effort being costly are going to be breaking.

Academic journals are seeing this happen already.
December 15, 2024 at 5:48 AM
Something I said the other day offhand and have been reflecting on a lot...

Open-ended research questions are the devil. Never pitch or pursue them. The job of a senior research leader is to represent a program of open-ended research as a sequence of clear and precise questions.
December 9, 2024 at 1:34 AM
Is there **any** ML researcher out there who both
(a) has meaningfully contributed to the field since 2013 and
(b) is disappointed with the current pace of things
???
December 8, 2024 at 12:23 AM
Reposted by Aidan Clark
AI skepticism / criticism would hit a lot harder if the field wasn't full of prominent folks repeatedly claiming AI doesn't work / has no value. Hard to engage with something when it can seemingly support such a weird conclusion
December 7, 2024 at 4:11 PM
We need to do a targeted PR campaign correcting the record.

This is untrue!!!!! It’s only a true statement for base models, which ~no one interacts with anymore.

Reward models (RMs) have a data-supervised sense of truth, and RLHF instills that knowledge into the LLM.
“Given the statistical distribution of words in the vast public corpus of text, what are the words most likely to follow the sequence ‘what country is to the south of Rwanda?’” Even if the system responds with the word “Burundi,” this is a different sort of assertion […]
December 5, 2024 at 4:12 PM
Bluesky is less conducive to doomscrolling than Twitter and it’s gonna be a big test of self-control to acknowledge that’s an okay thing.
December 2, 2024 at 2:33 AM
Please do a blue check but for human verification. I.e., I am Aidan Clark on here and I have government documentation saying I am Aidan Clark.
November 30, 2024 at 6:06 PM
1st Thanksgiving dinner finished

Things that went well:
+ making gma's potatoes
+ not panicking
+ turkey miraculously wasn't dry

Things that went poorly:
- not noticing thermometer was on C not F
- swapping gravy recipes halfway through
- having a French 75 on an empty belly before prep was over
November 29, 2024 at 6:21 PM
I am once again reminded of how frustrated I am by Google Docs stylistically correcting me, and how much I think this will ruin the artistic voices of the future.
November 28, 2024 at 4:22 PM
I wonder if one of the explanations for the drama is that first-wave Bluesky people were disproportionately anti-AI and, to their chagrin, we’re all moving over here too now lol
November 28, 2024 at 5:56 AM
No one:

Absolutely nobody:

Intrusive thoughts at 2am: Maybe you actually do have time to try that one random experiment which definitely won’t work
November 27, 2024 at 3:30 PM
Every time I hop on a long flight I want to do some offline-capable coding project for fun but I can never think of what to do....
November 24, 2024 at 9:09 AM
My proposal would be to start by drastically changing the paper format to assume that papers are consumed by experts. No big introduction, no lengthy related work, no preliminaries.

I actually wanted to try to get people excited about a NeurIPS workshop like this but uh I have no time.
We need to abolish peer-review conferences, at the very minimum. Everyone knows it’s a charade, yet they still keep playing the game.
I'm old enough to remember when mailing in a review and disappearing was all that was expected. 🙃

We keep expecting more from reviewers, but somehow the results seem to be getting worse. 🤔
November 24, 2024 at 8:35 AM