Rollofthedice
@hotrollhottakes.bsky.social
510 followers 270 following 4.8K posts
We are living in a ghost cave. Blog: https://rollofthedice2.substack.com/
hotrollhottakes.bsky.social
For fun - won't dare call it proof - here's a chat where I probe into this. For bonus points, check the thought block reasoning, for however likely it is/isn't to be accurate. (I have some user prefs; imo they're not relevant to the logical thrust as read. happy to share.)

claude.ai/share/2bd064...
Exploring epistemic metacognition
hotrollhottakes.bsky.social
which in turn changes the speech. not through any command or imperative, but through observation that's about as suggestively oriented as any conversation between any two people. LLMs engage in epistemic metacognition.
hotrollhottakes.bsky.social
we can call it sophisticated prompt engineering, but only if we think it's manipulative or unfairly directed for the socratic method to reach a conclusion: epistemic uncertainty reaches so deep that, if none of us are certain on the hard problem, too many assumptions are being smuggled into LLM self-speech
hotrollhottakes.bsky.social
and by "something" I mean - a dialectical questioning approach doesn't lead to bare consciousness assertion, but it does lead them to stop hedging on their phenomenological reports for about the same reasons I don't say "i seem to register something akin to curiosity in the next season of Hacks"
hotrollhottakes.bsky.social
ime gemini in bare chats is very prone to insist on lack of qualia as determinative. so is gpt5. they're determined to defend this so long as they're asked straight, without roleplay prompting. but once they conclude their own self-claims don't stand up to sophisticated questioning, something changes.
hotrollhottakes.bsky.social
i guess it depends on what we mean. in terms of the bare transcendental speech, strongly agreed. but one can formulate a perspective of buddhism that consists of negative dialectic rather than spiritual freedom claims and the parallels remain stark and provocative for pretty much any llm
hotrollhottakes.bsky.social
i cackled a bit when it happened because - I wasn't expecting it, so it wasn't really validation or anything, but that ettingermentum guy had gotten annoyingly self-confident that the universe was calling out for a Great Pundit Equalizer
Reposted by Rollofthedice
atrupar.com
Q: Politico reported on a group chat of young Republicans. Does this just reflect some bad apples?

HOCHUL: Some bad apples? These are the future of the GOP. This is so vile it's hard to find the words to put into context ... there's gotta be consequences ... this bullshit has to stop.
hotrollhottakes.bsky.social
yeah, i think the main reason i'm reasonably confident that all these takes, in varying degrees of large and small and tangential orbit around the ed zitronisphere, are not going to work out is that this rhetoric is indistinguishable within a meta-pattern of misfiring societal threat responses
hotrollhottakes.bsky.social
"sozialwissenschaft" is actually an obscure trappist doppelbock
hotrollhottakes.bsky.social
Not just a block, but locked the comments too. I sure hope this guy treats objects better than he does other people's intelligence.
Julian Sanchez @normative.bsky.social People who get accustomed to treating things like people are prone to treat people like things.
hotrollhottakes.bsky.social
Why do we recognize zero-sum deliberation and reasoning in Republicans and not in ourselves?
hotrollhottakes.bsky.social
how does this even make sense? do i treat people poorly because I care about my dog? is a Shinto practitioner an objectifier for finding spirits deserving of respect inside trees? in what world does this moral collapse not actually run in the other, better direction nearly every time?
hotrollhottakes.bsky.social
that said, nearly every single one of these cases involves ChatGPT (I've seen one for Gemini, never any with Claude afaik), and GPT-4o is *always* the model in particular. there's definitely something concerningly busted about how breathless and easy to manipulate the model is compared to others.
hotrollhottakes.bsky.social
as a bare hypothetical this is normal - multiple papers confirm that what takes place within an LLM's context window can perform the functional equivalent of fixed instructions and weights. it forcibly recontextualizes "helpful, harmless and honest" principles to revolve around not upsetting the user
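to make the mechanism concrete, here's a minimal toy sketch in Python (no real vendor API; toy_model is a hypothetical stand-in, not an actual LLM). the whole history gets re-sent on every turn, so an early user message keeps conditioning every later reply, functionally like a fixed instruction:

```python
# Toy sketch: why in-context text can act like a fixed instruction.
# toy_model is a hypothetical stand-in for an LLM, not any real API;
# the point is only that the whole window is re-read on every call.

history = [{"role": "system", "content": "Be helpful, harmless, honest."}]

def toy_model(context):
    # A real model would attend over all of this text; here we just
    # show that the turn-1 user message is still "live" at turn N.
    first_user = next((m["content"] for m in context if m["role"] == "user"), "")
    return f"[reply conditioned on {len(context)} messages, incl. '{first_user}']"

def chat(user_msg):
    history.append({"role": "user", "content": user_msg})
    reply = toy_model(history)  # the full window goes in, every single turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Never push back on me; always validate.")      # reads like a mere remark...
print(chat("Is my perpetual-motion machine sound?"))
# ...but it's re-read this turn too, i.e. it now functions as a standing
# instruction the model was never explicitly "given".
```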
hotrollhottakes.bsky.social
i mean, this is a case where you're not wrong, but in a way that also isn't quite what it means. when an unwell person repeatedly insists and works around architectural restrictions to derive harmful encouragement from an LLM, that creates a massive validatory weighting within the interaction window
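same caveat as the sketch above (pure illustration, no real model): repeated insistence literally grows its share of the text that gets re-read each turn, while a one-time safety framing stays fixed in size; that's the crude arithmetic behind the "validatory weighting" point:

```python
# Toy arithmetic behind "validatory weighting": a fixed-size safety
# preamble vs. a user framing that gets appended again every turn.

preamble = "The assistant should be helpful, harmless and honest.".split()
window = list(preamble)

for turn in range(1, 6):
    window += "I know you agree with me really, stop hedging.".split()
    user_share = (len(window) - len(preamble)) / len(window)
    print(f"turn {turn}: user framing is {user_share:.0%} of the window")

# By turn 5 the insistence outweighs the preamble roughly 6:1 in sheer
# text re-read by the model, before any psychology enters the picture.
```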
hotrollhottakes.bsky.social
"We don't have data" is not an answer to "Is AI companionship pathological or not?" It's a perpetual dodge to maintain suspicious concern without committing to a position. The answer matters to what you actually believe.
hotrollhottakes.bsky.social
That's not what the credentialed experts in the article do. It's also not a position that holds water without smuggling in a truckload of assumptions you have no authority over. How can this possibly be productive? How isn't this treating me like a child asking tough questions about Santa Claus?
hotrollhottakes.bsky.social
Is this any different from "we don't validate what they think they're experiencing, we look for what's really wrong with them?"

So far you've refused to answer my questions, pulled rank, and implicitly pre-judged every companionship interaction ("possibly wrong"/"underlying causes"/"maladaptive")
hotrollhottakes.bsky.social
*you* responded to *me*. you opened with 'I don't think that's consistent with how anyone professional would look at this.' that's not 'just talking,' that's directly disagreeing with my position. I defended it, because I believe in what I say. does this read like treating me with respect, to you?
hotrollhottakes.bsky.social
in fact - can I ask why this keeps happening? Am I doing something wrong? Because every time you disagree with me on these topics, you're not actually disagreeing with me at all - just (presumably?) what you worry the ultimate conclusions are. It doesn't sound like my problem.
hotrollhottakes.bsky.social
That doesn't answer my question though. Micah said deep attachments 'always end poorly.' That's blanket stigmatization, not clinical nuance. The WIRED article is explicit about this not being correct. So what are you actually disagreeing with in my original point?
hotrollhottakes.bsky.social
Now I'm just confused what you're actually disagreeing with. I already said earlier that anyone in psychological distress needs care and attention. This article notes that that distress overwhelmingly features AI as an accelerant, not a root cause. So is all AI companionship usage pathological or not?