Nate A.M.
@kbnet.bsky.social
sakuga dork; avgeek; many things besides.
I help with http://artistunknown.info and allegedly run Why Animation? on youtube.
Fedibridge: [email protected]
The Union forever: down with the traitors; up with the Stars.
There's plenty of hot air to work with
November 27, 2025 at 12:48 AM
I guess the only solution is that a solarpunk writer should write my project and I should write solarpunk
November 26, 2025 at 6:20 AM
The same reason a solarpunk writer would roll their eyes at my embryonic space opera project where liberal internationalism is thriving in the late 28th century and be unsympathetic to my defense that large swathes of their political culture and economy would be utterly unrecognizable
November 26, 2025 at 6:19 AM
Unsurprising that people who have an ideological-aesthetic project want to present that project in the best possible light – but that's a skill issue. It's trivial for me to come up with that conflict, but only because I'm negatively polarized to think small is Bad Generally
November 26, 2025 at 6:19 AM
Reposted by Nate A.M.
saw some real good ones last night 🖤
November 26, 2025 at 3:06 AM
Next time, if they do the radio program!
November 23, 2025 at 3:30 AM
But imo that just makes it all the more important to try to think about these things clearly. That's where I'm coming from.
November 22, 2025 at 11:08 PM
As someone who, as an old forum mod, was once approached by an actively suicidal person (…it turned into an intensely strange years-long storyline but did not end happily), I am not insensitive to why someone might have a hard time with a post like that.
November 22, 2025 at 11:08 PM
For instance, you keep talking about *the chatbot* being held liable. That is a real opinion people can and do consistently hold if they believe LLMs are part of the same moral community as humans. I'm 99% sure you'd find this ridiculous (and I'd agree), but if I didn't know you, then who's to say?
November 22, 2025 at 11:08 PM
Yeah I def. can't argue with hitting block and moving on. Honestly, I'm not even talking about "the benefit of the doubt" about intentions (though I tend to think that's also a good idea), I just mean trying to understand what it is other people even think.
November 22, 2025 at 11:08 PM
That said, I've definitely seen posts I think are deeply mistaken, looked in the notes, and been disappointed that nobody has voiced a strong counterargument, knowing that I don't have the energy to write it myself. I *very* much empathize with that part lol
November 22, 2025 at 8:06 PM
Which like, that's partially on them for not being clear too, but that doesn't absolve me.

(as for myself, I'm not sure what the right laws ought to be, except GPT-4o is fundamentally fucked and needs to be recalled)
November 22, 2025 at 8:06 PM
idk and I'm not really interested in asking them to find out. Whether or not you think any of this has merit, I think it's unclear enough that a callout post is unwarranted. I've certainly made the embarrassing and insulting mistake of weighing in against someone who I fundamentally misunderstood
November 22, 2025 at 8:06 PM
So they might come back and say that they only meant the LLM devs shouldn't be induced to snitch on kids through liability, but that they *should* be held accountable for "misaligned" sycophantic models that are a greater risk. Or they could have something else in mind entirely
November 22, 2025 at 8:06 PM
OP sounds like they could mean something like "the LLM didn't say anything that a well-meaning but misguided human friend wouldn't, and we wouldn't hold that human liable, so the company shouldn't be held liable either." But it also sounds like their main concern is that the LLM not snitch on kids
November 22, 2025 at 8:06 PM