David Schneider-Joseph
@thedavidsj.bsky.social
Working on AI gov. Past: Technology and Security Policy @ RAND re: AI compute, telemetry database lead & 1st stage landing software @ SpaceX; AWS; Google.
I'm not aware of any that do a great job *explaining* why subjective experience attends particular physical states, but there are several which at least purport to *describe* which physical states are attended by subjective experience, and that's implicitly what I depend on to say you're conscious.
In particular, these experiments are providing evidence about the actual internal mechanisms and not just the input/output mapping.
I think empirical experiments can provide evidence of consciousness even if not proof, since the most plausible philosophical theories of consciousness say that it coincides with the presence of certain mechanisms. If this were not so, then I could not even acquire evidence that you are conscious.
Blake Lemoine famously claimed LaMDA was conscious, but the “evidence” was consistent with roleplay in response to leading questions. It has also been claimed that LLMs cannot be conscious in principle, but on weak grounds.

Two new papers offer hints on the question.

thedavidsj.substack.com/p/evidence-o...
Evidence on language model consciousness
I often see the argument: “We have two choices: keep the current rate of AI progress and accept very large risk, or slow things down, greatly reducing risk in exchange for a minuscule delay of benefits.”

But this depends on the mechanism of slowdown.

I argue this case more fully here: thedavidsj.substack.com/p/draconian-...
Draconian measures can increase the risk of irrevocable catastrophe
Scott called this "one of the best things I've read all year, and the first thing on Alzheimer's that makes me actually feel like I understand something".
Pacific tsunami advisory due to magnitude 8.7 earthquake off the coast of Kamchatka.
What's the Atlantic piece you're referring to?
Claude 4 Opus seems very excitable.
Why are modern book covers so bad?
Asking Claude important questions about papal succession.
Today is a great day to illustrate why the Dow is a bad stock index: it's down 1.33% on a day the S&P 500 is up 0.13%, simply because one component is down 22.38% and also that component is overweighted because the weights are based on share price (???) instead of market cap.
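For intuition, here's a minimal sketch with made-up prices and share counts (not real index data) showing how price weighting overweights high-priced stocks relative to cap weighting:

```python
# Toy comparison of price weighting vs. market-cap weighting.
# All prices and share counts below are invented for illustration.

stocks = {
    # name: (share_price, shares_outstanding)
    "HighPrice": (400.0, 1e8),  # $40B market cap
    "MidPrice":  (100.0, 2e9),  # $200B market cap
    "LowPrice":  (50.0,  4e9),  # $200B market cap
}

total_price = sum(p for p, _ in stocks.values())
total_cap = sum(p * s for p, s in stocks.values())

price_weight = stocks["HighPrice"][0] / total_price                         # ~0.73
cap_weight = stocks["HighPrice"][0] * stocks["HighPrice"][1] / total_cap    # ~0.09

# If "HighPrice" falls 22.38%, each index moves by weight * move:
move = -0.2238
print(f"price-weighted index: {price_weight * move:+.2%}")  # ~-16.3%
print(f"cap-weighted index:   {cap_weight * move:+.2%}")    # ~-2.0%
# The price-weighted index takes a far larger hit, even though
# HighPrice is the *smallest* company here by market cap.
```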
Appears Trump admin is adversarially interpreting SCOTUS order to “facilitate” return of Garcia (whom, by their admission, they sent without cause to a Salvadoran prison), to mean they must merely “remove any domestic obstacles”, rather than actually work to secure his return.
I haven’t checked these numbers myself, but it appears that the “Tariffs Charged to the US” column in the White House’s new tariff legend is actually just the US trade deficit with that country divided by US imports from that country, with a floor at 10%. Pretty incredible really.
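If that's right, the whole column reduces to one line. A sketch of the claimed formula, reconstructed from the post (the function name and example numbers are mine):

```python
def claimed_tariff_rate(us_imports: float, us_exports: float) -> float:
    """The claimed 'Tariffs Charged to the US' figure: the US trade
    deficit with a country divided by US imports from that country,
    floored at 10%."""
    deficit = us_imports - us_exports
    return max(deficit / us_imports, 0.10)

# Made-up example: $100B imported, $40B exported -> 60% "tariff".
print(f"{claimed_tariff_rate(100e9, 40e9):.0%}")  # 60%
# A country we run a surplus with would hit the 10% floor:
print(f"{claimed_tariff_rate(50e9, 80e9):.0%}")   # 10%
```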
Probably the most blatant autobiographical confabulation I’ve seen from Claude.
There's truth to this but it's a matter of degree. There are structural changes such as reducing the impeachment conviction threshold, removing the pardon power, limiting the wealth of hundred-billionaires, etc.
one thing i keep thinking about is how there is simply no way to design a constitutional system that can resist authoritarian incursion if participants in that system do not actually care that much
Did you all know that Hawaii was this long
14.8k output tokens/H800 node/second = 6.7M/GPU hour, close to the 10M/GPU hour I estimated. This puts their cost at 30¢ per million output tokens at $2/GPU hour.

x.com/deepseek_ai/...
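Spelling out the arithmetic from the post above (assuming the standard 8 GPUs per H800 node):

```python
# Reproducing the throughput -> cost-per-token arithmetic.
tokens_per_node_second = 14_800  # reported output tokens per H800 node per second
gpus_per_node = 8                # standard H800 node configuration (assumption)
gpu_hour_cost = 2.00             # $/GPU-hour figure used in the post

tokens_per_gpu_hour = tokens_per_node_second * 3600 / gpus_per_node
print(f"{tokens_per_gpu_hour / 1e6:.1f}M tokens/GPU-hour")  # 6.7M

cost_per_million = gpu_hour_cost / (tokens_per_gpu_hour / 1e6)
print(f"${cost_per_million:.2f}/M output tokens")  # $0.30
```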
(I wrote this reply on Twitter but might as well include it here too.)
I always thought the "billionaires are fundamentally evil" argument was pretty dumb but the "no one should have that much power" argument is looking pretty good right now.
I don’t fully understand the reaction to this result. If language models weren’t capable of some generalization, they wouldn’t work at all. Even alignment-specific generalization has been shown since at least InstructGPT. What about this generalization in particular is a big surprise?
This is a crazy paper. Fine-tuning GPT-4o on a small amount of insecure code or even "bad numbers" (like 666) makes it misaligned in almost everything else: it becomes more likely to offer misinformation, spout anti-human values, and talk about admiring dictators. Why is unclear.
And now actually ruled out. NASA: 0.0039%, ESA: 0.0016%.
Impact close to ruled out now. NASA says 0.28%, ESA says 0.16%.
This is now up to 2.3%.