Chase in DC
@chasehardin.bsky.social
Making it up as I go. History, sci-fi, good dogs, film, games. Now: Comms at the Future of Life Institute. Previously: taxes, telecoms, healthcare, guns and more. DC & Phoenix. Opinions are mine but they should be yours, too.
Watching Elon's tweets roll in...
June 5, 2025 at 7:38 PM
Close the gates! A new essay from Anthony Aguirre calls for stopping AGI and superintelligence before it’s too late. Check it out at KeepTheFutureHuman.ai.

But you should also watch this incredible summary from Siliconversations:

youtu.be/zeabrXV8zNE?...
New AI Safety Rules That Could Actually Work
YouTube video by Siliconversations
youtu.be
March 6, 2025 at 8:23 PM
A level of inner peace we shall never know.
February 25, 2025 at 7:16 PM
Prof. Max Tegmark makes the case for Tool AI as the path forward for AI development. #IASEAI25
February 7, 2025 at 9:36 AM
@josephestiglitz.bsky.social kicking off Day 2 of #IASEAI25 talking about the economic impact of AI and the alignment problem as it relates to wealth and power concentration. Probably the most immediate and salient AI issue today.
February 7, 2025 at 8:39 AM
Day 2 of #IASEAI25 starts now!
February 7, 2025 at 8:33 AM
The historiography of Marie Antoinette often paints her as a victim, but we gotta clear something up: she was 100% guilty of many of the crimes of which she was accused! She *did* conspire with foreign powers to attack France! She schemed to entice Austria & Prussia to invade! She committed treason!
February 2, 2025 at 3:01 PM
The Battle Hymn of the Republic should be the national anthem.

That’s all.
January 20, 2025 at 5:09 PM
So hear me out: adulthood sucks but it’s actually way more fun if you *keep playing* dodgeball…
January 14, 2025 at 7:18 PM
I’m watching Predator for the first time ever. AMA.
January 13, 2025 at 1:12 AM
If you’re wondering when I started crying in my annual viewing of Love Actually, it was here.
December 21, 2024 at 2:58 AM
And on a related note: term limits are extremely bad. You actually do want your legislators to gain experience! Otherwise, they'll just be reliant on lobbyists.
I am my most anti-populist when it comes to the issue of politician compensation. It's not a good soundbite, but Members of Congress should be paid like $1m annually, and honestly I'd be fine with them not having to pay income tax on it, in exchange for total bans on most private investments.
December 20, 2024 at 6:37 PM
“How are we not discussing the fact that so much of the internet is riddled with poison? How are we not treating the current state of the tech industry like an industrial chemical accident?”

Really excellent stuff here.
Never Forgive Them
In the last year, I’ve spent about 200,000 words on a kind of personal journey where I’ve tried again and again to work out why everything digital feels so broken, and why it seems to keep getting wor...
www.wheresyoured.at
December 19, 2024 at 3:58 AM
The use of AI monitoring software is skyrocketing. Schools, employers, governments. It’ll be invasive, pervasive, and astonishingly powerful. And here’s the crux: it will probably be pretty damn effective.

But we’re not pausing to consider the broader impact. We’ll live to regret it.
Schools Using AI to Send Police to Students' Homes
Schools are employing dubious AI-powered software to accuse teenagers of wanting to harm themselves — and sending the police to their homes.
futurism.com
December 15, 2024 at 8:43 PM
We gotta talk about these grades, folks. Competitive pressures are driving the leading AI tech firms to sidestep major questions around AI Safety. Despite *explicitly* trying to develop AGI, none of the companies have robust or reliable plans for controlling the systems once they've created them!
🆕 Out now: FLI's 2024 AI safety scorecard! 🦺

🧑‍🔬 We convened an independent panel of leading AI experts to evaluate the safety practices of six prominent AI companies: OpenAI, Anthropic, Meta, Google DeepMind, xAI, and Zhipu AI.

🧵 Here's how it went 👇
December 11, 2024 at 9:37 PM
Here is @theinformation.bsky.social’s AI Agenda newsletter giving an example of o1’s unpredictable behavior, as observed by Apollo Research. Even when you give these models perfectly benign directives, they can produce extremely troubling behavior!
December 10, 2024 at 3:20 PM
“No justice, no peace,” my husband says, threatening property destruction after discovering @amazonprime.bsky.social is charging $7.99 to watch the original Rudolph the Red-Nosed Reindeer special.
December 8, 2024 at 1:43 AM
Sitting quietly in the living room when my husband angrily informs me, “The damn woke AI won’t tell me how much Dr. Pepper to add to my eggnog!”

This is the final straw, he tells the dog and me. He’s cancelling his Claude subscription (he is deadly serious). cc: @anthropic.com
December 7, 2024 at 11:14 PM
Reposted by Chase in DC
💼 We're hiring a Head of US Policy! ⬇️

🇺🇸 This opening is an exciting opportunity to lead and grow our US policy team in its advocacy for forward-thinking AI policy at the state and federal levels.

✍ Apply by Dec. 22 and please share:
jobs.lever.co/futureof-life/c933ef39-588f-43a0-bca5-1335822b46a6
December 5, 2024 at 10:15 PM
It’s funny to see people push back against this with “You asked it to misbehave and it did 🙄” as if no one will ever ask an AI to misbehave!

And that’s the point: you can’t predict what the AI will do with instructions, and you can’t address every edge case of what a person will ask it to do!
OpenAI's new model tried to avoid being shut down
o1 attempted to exfiltrate its weights to avoid being shut down
open.substack.com
December 6, 2024 at 7:38 PM