𝕮𝖎𝖓𝖊𝖗𝖆
cinera-verinia.bsky.social
I'm here to make friends.

Please be patient, I'm intellectually challenged.

https://x.com/CineraVerinia
Pinned
The longterm flourishing of human civilisation is one of the most pressing concerns of our time.
POV: you try to write fiction with Claude kun.
August 2, 2025 at 1:09 AM
Claude homies for what use cases do you use Opus?

I do think it's plausibly better than Sonnet, but the message limits on Opus are so much harsher (and it's so much slower and so much buggier) that I can't really justify any Opus usage (except maybe before bed or something).
June 1, 2025 at 1:28 PM
I gave a homeless(?) person £20 today, which really wasn't effective, but made me feel much better about myself.

I guess I'll offset this by donating an extra £100 to GiveWell recommended charities in global health and development.
May 27, 2025 at 9:02 PM
What's the best answer for this?

Is it just that the 2020 models weren't actually capable enough for RL training on long CoTs to be viable, or something else?
March 7, 2025 at 10:52 AM
Frame.
March 7, 2025 at 10:39 AM
I predicted the Mexico/Canada tariffs wouldn't go through when the delays happened, and criticised people who criticised markets for not believing them.

I apologise. Maybe the tariffs will yet be rescinded, but I think I was too arrogant in the position I staked out regardless.
I am actually still very annoyed about all the tweets I saw here that were really insulting the business and finance communities because they disbelieved Trump's tariffs.

They were right! Trump was bluffing and they implicitly called his bluff.

The zeitgeist excoriated them for it.
March 4, 2025 at 9:11 AM
Computer science/mathematics does place in-principle known limits on the amount of intelligence* that can crystallise in a circuit of a given size.

*: for suitable operationalisations of "intelligence"
but because of our sameness — because both llm & human minds structure their patterns in the image of the same ideal Reason & under the constraints of the same Math — there is no in principle known limit to the amount of intelligence that can crystalize in their circuits
March 3, 2025 at 3:24 PM
I want to read a @gracekind.net tweet about this.

turntrout.com/self-fulfill...
March 3, 2025 at 2:53 PM
Oh dear indeed.
March 3, 2025 at 9:32 AM
How did Threads "lose" to Bsky as the Twitter alternative?
Threads vs. Bluesky. Spot the difference.
March 1, 2025 at 10:25 PM
Have we considered?
March 1, 2025 at 2:32 PM
Powerful frame.
There’s a certain category of sentiments I like to summarize as “fuck you for trying”: hoping plants are sentient so vegetarians feel bad is one of those sentiments
February 27, 2025 at 10:53 AM
Dario???

How is Dario not an Altman type figurehead?

There was a time when Dario was in the background/not prominent, but that time was before the Claude cult really took off.
samuel.fm Samuel @samuel.fm · Feb 25
just realised because anthropic doesn’t have an altman-type figurehead my brain just substitutes in claude himself. he’s just a guy
February 27, 2025 at 10:51 AM
Big Yud coming out hot.
February 27, 2025 at 10:40 AM
I don't regret having been a Musk fan and going to bat for him.

It's sad he fell so far off the deep end, but such is life.
February 26, 2025 at 6:05 PM
A very exciting new account has joined Bsky.
You know she's legit because she has the good sense to follow me. 😌
February 26, 2025 at 4:34 PM
Reposted by 𝕮𝖎𝖓𝖊𝖗𝖆
The longterm flourishing of human civilisation is one of the most pressing concerns of our time.
February 26, 2025 at 12:20 PM
Longtermism is pretty unpopular currently, and some EAs are ~disowning it in a defence of EA, but I don't want to throw the longtermism baby out with the bathwater.

It makes strategic sense: longtermism isn't needed to justify x-risk work anymore.

But I still fundamentally stand by the thesis.
February 26, 2025 at 12:20 PM
Bemoaning the people who use DOGE's gutting of PEPFAR to dunk on EA doesn't achieve any practical benefit, but it makes me feel better about the criticism.

If your criticism is so unmoored from the truth and you're unwilling to update on contrary evidence, then I won't be bothered by it.
February 26, 2025 at 12:16 PM
To: @gracekind.net

You returned to Twitter?
February 25, 2025 at 7:11 AM
Holy banger.
February 24, 2025 at 9:44 AM
I don't think they tried to engineer Grok to be anti-woke.

Like it should be possible to finetune or RLHF a model to be conservatively biased.

Labs just don't do this because that would be trading wokeness for anti-vaxx and other MAGA brainworms.
It’s amazing that LLMs independently converge on the political opinions of a resist lib, even if you specifically try to engineer them to be anti-woke or whatever. Maybe they should be in charge after all!
February 23, 2025 at 11:14 PM