Sonia
@soniajoseph.bsky.social
AI researcher at Mila, visiting researcher at Meta
Also on X: @soniajoseph_
We covered a lot of ground, from how AI learns physics from video and forms internal models of the world, to the risks of deception, sycophancy, and misaligned goals in robotics.
November 7, 2025 at 12:40 AM
How is this different from capitalism
April 26, 2025 at 3:44 PM
Ok nice, I’m trying to figure out if the current state of AI culture in the US is some path dependent Yudkowsky thing or more universal
April 26, 2025 at 2:42 PM
Why are there no doomers in China?
April 23, 2025 at 8:30 PM
To contain the “crazy” in clean, Euclidean lines is a tragedy.
Many researchers have left the field because of attempts by others to shut it down or contain it. And the consequences on the industry have been profoundly negative.
April 23, 2025 at 5:50 PM
I would encourage more canonical EA alignment researcher types to learn to recognize this cognitive style. And to see that you need it for your field to succeed.
April 23, 2025 at 5:50 PM
I would encourage those of this archetype to lean into it, to be even more unhinged, and to go all the way. Find others of their likeness and coordinate.
April 23, 2025 at 5:50 PM
The celebrity fixation is often due to parasociality (a safe person to “socialize” with) and a template for social masking (you can mimic their behaviors).
The beauty/makeup fixation is just like any other autistic fixation, but sublimated into acceptable, female coded pursuits.
April 23, 2025 at 5:49 PM
Girls of this neurotype are often wildly fractal, independent, and creative. I’ve noticed they can also freak out a lot of EA / alignment researcher types, who try to contain the “crazy” in clean, Euclidean lines.
April 23, 2025 at 5:49 PM
Reposted by Sonia
I’ve been giving some context on AI development environments here, particularly how co-living and tribal bonds among researchers in Silicon Valley influence ethical decisions, whistleblowing, and the trajectory of AGI development.
x.com/soniajoseph_...
December 6, 2024 at 4:36 PM
The purpose of the original tweet was to alert my allies; I’m not really sure what the point of your response is. If you don’t find it useful, feel free to block.
December 6, 2024 at 4:33 PM