allie lawsen
@lexlawsen.bsky.social
AI grantmaking at Open Philanthropy
Previously 80,000 Hours
lawsen.substack.com
Anthropic before deploying a new model
May 9, 2025 at 8:43 AM
xAI before releasing a new model
May 9, 2025 at 8:22 AM
Google before deploying a new model
May 9, 2025 at 8:11 AM
OpenAI before deploying a new model
May 9, 2025 at 7:44 AM
I could have titled my new blog post "The one skill that makes or breaks your career." I didn't, because that really isn't my style. But I do think the post contains a lot of the best general career advice I can give. 🧵 below

lawsen.substack.com/p/four-and-a...
Four (and a half) Frames for Thinking About Ownership
It's hard to write a guide to being good at ownership.
lawsen.substack.com
April 24, 2025 at 3:45 PM
Interesting...

Proxy-generator is here in case you're still using the other place: substack-proxy.glitch.me
April 24, 2025 at 3:42 PM
If only this were the AI that had been asked for tariff-setting advice.
April 9, 2025 at 10:32 PM
What deep research queries have you tried and been disappointed by? I'm thinking about writing a post on how to use it well, and it would be good to have interesting examples.

Reply with the prompt and model you used.
April 3, 2025 at 10:31 PM
What's the difference between good and *great* prompt engineering?
🧵 ↓
April 1, 2025 at 9:20 AM
What does it take for people to actually, reliably use a whistleblowing function? 1/8
March 24, 2025 at 1:05 PM
Just published a new post on the Claude Projects I'm currently maintaining. I've had a lot of interest in how I'm using AI, so I wrote this breakdown.
March 6, 2025 at 11:45 AM
1. I'm unironically excited about Anthropic's Pokémon eval. It's a step towards the kind of thing I was hoping we'd get in response to this section of our recent RFP.
February 26, 2025 at 1:41 PM
1. I wrote this a while ago, but it's come up in a few conversations since so it seemed worth a short thread.
February 26, 2025 at 11:42 AM
Reposted by allie lawsen
🚨 Emergency pod from me 🚨

Elon offers $97b to buy OpenAI and derail Altman's plan to break free of non-profit control.

Would it hold up in court?
What hurdles does Elon have to jump?
And why the hell is OAI cutting AI from its non-profit mission entirely?

www.youtube.com/watch?v=9TCK...
Will Elon's $97b bid for OpenAI hold up in court? (emergency pod with Rose Chan Loui)
YouTube video by 80,000 Hours
www.youtube.com
February 12, 2025 at 6:56 PM
My current writing workflow.
As of Feb 2025
open.substack.com
February 11, 2025 at 8:33 PM
1/8 Really happy to share Open Phil's new Request for Proposals to improve AI capability evaluations. It's been a big project for me and my colleague Catherine. Getting evaluations right is crucial, and we're ready to fund serious work to make that happen. 🧵
February 6, 2025 at 7:22 PM
Got a fully adjustable, ergonomic Steelcase chair for working from home and it sucks.

My back still hurts after 5 hours working hunched over my laptop on the sofa because I didn't want to go upstairs.
February 1, 2025 at 7:47 PM
TFW you realise anti-sycophancy training still has a way to go because your conspiracy theorist relative texts you that Claude agrees with them "once they supply it with the facts the MSM is missing"
January 28, 2025 at 9:15 PM
OpenAI's Operator, from the sound of it, barely works when it comes to a bunch of things. Luckily, as we all know, it's really hard to go from 'barely works' to 'works' to 'superhuman' in AI, especially once you have the basic setup that gets you to 'barely works'.
January 24, 2025 at 7:47 PM
Intelligence has to be from the Jacques region of France! What you're talking about is just sparkling matrix multiplications!
January 17, 2025 at 2:24 PM
Reposted by allie lawsen
Wrote a long rambling post about why individual AI use isn't bad for the environment substack.com/home/post/p-...
Individual AI use is not bad for the environment
And a plea to think seriously about climate change without getting distracted
substack.com
January 13, 2025 at 10:28 PM
Real
January 11, 2025 at 9:00 AM
Training a flexible, general-purpose reasoner that can succeed despite unexpected obstacles seems pretty hard.
Worryingly, training a flexible, general-purpose reasoner that can succeed despite unexpected obstacles *except when those obstacles are humans trying to stop it succeeding* seems harder.
January 7, 2025 at 11:37 PM
Say It Twice, Write It Down
Following my own advice
open.substack.com
January 7, 2025 at 7:45 PM
Would you be more likely to read an essay entitled:
1. "Just" has no explanatory power
2. "Just" is a semantic stop sign
3. Un-Just dismissal
?
December 30, 2024 at 11:01 PM