dalkuin.bsky.social
@dalkuin.bsky.social
Oh, so none of the ones where they let Uchikoshi loose.
January 16, 2026 at 6:48 PM
Sorry, I misunderstood some of your comments in chat back in the day and assumed you'd played it yourself.
January 16, 2026 at 6:32 PM
Uhh, you played Hundred Line, or am I mixing games up?
January 16, 2026 at 6:18 PM
As an actual gamer, I'm too busy being salty about being bad at Silksong to realise what the issue is.
January 16, 2026 at 4:41 PM
As they say, reality has a left wing bias, so leaning into the vibe instead doesn't help.
January 16, 2026 at 4:08 PM
Which is technically good in the short term when you want to change minds, but probably has larger issues when you want people to actually react to true things rather than things they like.
January 16, 2026 at 4:01 PM
Also, I'm sure you have the chill for Stardew, but not on stream
January 9, 2026 at 8:32 PM
Uhh, as much as I love Factorio, that's not safe for baby.
January 9, 2026 at 8:25 PM
To be fair, it wouldn't be a famous puzzle if the average person could see that either.
January 9, 2026 at 7:25 PM
But it failed to resolve the paradox because it didn't understand the difference between the chance of it happening vs the chance that it happened.
January 9, 2026 at 7:07 PM
Claude certainly understood the paradox
January 9, 2026 at 7:05 PM
For the record, the Sleeping Beauty Problem proves there's a difference between the probability of an event happening and the probability of an event having happened.
January 9, 2026 at 6:57 PM
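The distinction above can be made concrete with a minimal Monte Carlo sketch of the standard Sleeping Beauty setup (heads: one awakening; tails: two awakenings with memory erased). The function and variable names here are illustrative, not from the thread:

```python
import random

def simulate(trials=100_000, seed=0):
    """Estimate P(heads) per flip vs P(heads) per awakening."""
    rng = random.Random(seed)
    heads_flips = 0
    awakenings = 0
    heads_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:
            heads_flips += 1
            awakenings += 1        # heads: woken once (Monday only)
            heads_awakenings += 1
        else:
            awakenings += 2        # tails: woken twice (Monday and Tuesday)
    return heads_flips / trials, heads_awakenings / awakenings

p_flip, p_given_awake = simulate()
```

`p_flip` comes out near 1/2 (the event happening), while `p_given_awake` comes out near 1/3 (the event having happened, sampled from inside an awakening) — which is exactly the halfer/thirder split.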
I'd be fascinated what Claude came up with.
January 9, 2026 at 6:53 PM
Pretty sure I can burn down Sleeping Beauty easy, not sure I can do it in a haiku.
January 9, 2026 at 6:38 PM
Mmm, this was fun, next time let's pull apart the Sleeping Beauty Problem.
January 9, 2026 at 6:33 PM
At what point in this discussion do we start talking about Dune and the Gom Jabbar to detect humans?
January 9, 2026 at 5:40 PM
It's a good answer. The point of the hypothetical, though, is: does it count as them answering if they don't comprehend the answer they gave?
It was a better hypothetical pre-LLMs; answers from LLMs very much complicate the question.
January 9, 2026 at 5:34 PM
I blame Kotaro Uchikoshi, his games always had fun ways of describing these kinds of thought experiments.
January 9, 2026 at 5:06 PM
The short answer is: if you gave a person who didn't speak Chinese questions in Chinese, plus a room full of books that told them what the response should be and a way to find the right book, is that person actually answering the question?
January 9, 2026 at 5:02 PM
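The room described above is essentially symbol lookup without comprehension, which can be sketched in a few lines. This toy rule book and its contents are purely illustrative assumptions:

```python
# A toy "Chinese Room": the operator matches input symbols against a rule
# book and returns the prescribed reply, understanding neither side.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What's your name?" -> "I have no name."
}

def room_operator(question: str) -> str:
    # Pure pattern matching: no meaning is involved at any step.
    return RULE_BOOK.get(question, "请再说一遍。")  # fallback: "Please repeat."
```

From outside the room, the replies look like fluent answers; the thought experiment asks whether that lookup ever amounts to understanding.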
For the record, it's less that I disagree with my old opinion, more that I think we're unaware of how often our day to day thinking is more like the Chinese Room than we'd like.
January 9, 2026 at 4:49 PM
Personally, I will admit to being someone who originally referenced the Chinese Room back when LLMs first became popular.
But that some people swung this hard against it is despicable.
January 9, 2026 at 4:44 PM
It's been a revelation the last 5 years how resilient western economies can be under the hood.
On the flip, I do recommend Americans remember what happened when Truss was in charge over here.
Once the trust's gone, it can go bad fast. And it's the trust more than the policy.
January 9, 2026 at 3:55 PM
A word to all Labour MPs to not legitimise Reform at all wherever possible seems to be in order.
January 9, 2026 at 3:28 PM
Add to that that if there's an AI bubble pop, there's no guarantee they'll learn the right lessons.
January 5, 2026 at 2:40 AM
Training LLMs on social media was a mistake. Training them on 4chan was a double mistake.
January 5, 2026 at 1:58 AM