Chris
@multiplicityct.bsky.social
PhD student in philosophy at the University of Staffordshire. Heidegger, analytic ethics (trust and mistrust), philosophy of tech/AI. Marylander. MA Staffs, MBA Duke. Wittgenstein and Cantor handshake numbers = 3 (via John Conway).
Congratulations, Hannah!!
November 25, 2025 at 2:56 PM
I’d like to dive deeper into American pragmatism. It comes up periodically, and a friend of mine was doing his PhD on Dewey at SIU-C twenty years ago. I suspect it’ll be quite the rabbit hole when I start reading more.
November 23, 2025 at 6:14 PM
My dissertation work has me stitching together ideas from both "sides" and it is honestly so awesome. Totally self-defeating to be partisan about this IMHO.
November 23, 2025 at 4:52 PM
But you're not wrong -- "keep the ambient temp at 21C" is a very simple goal. It's interesting in a way that "respond via text if and only if a user types something to you" is not to me. (Or the LLM goal is less so.)
November 23, 2025 at 4:49 PM
I get that. To me, temperature is more holistic, because it's affected by outside weather, insulation, sunlight, etc. In the OP's example, if the roof eventually caved in, the thermostat's goal would implicitly become heating the outside. :-) The thermostat is connected to the whole world in a way the LLM isn't.
November 23, 2025 at 4:49 PM
It has a goal in the same sense primitive organisms do. The issue might be Umwelt/environment. The thermostat has an environment and cannot ignore relevant stimuli (thus has a goal). An LLM’s Umwelt is much smaller (almost entirely human input). No humans, no stimuli.
November 23, 2025 at 3:55 AM
Huh, this could be really cool. I find Litmaps pretty useful, but it's slow and I don't want to pay for it.
November 22, 2025 at 6:31 PM
I'm the same way with Almond Joy here. Mounds seem to be marginally more popular, and I love that people (wrongly) spurn their Almond Joy counterparts.
November 22, 2025 at 6:30 PM
For instance, to me the fact that a Roomba is programmed on silicon to seek its charger autonomously is fascinating. And makes it quite close to a simple organism that is programmed by DNA to seek food. Reproduction is still a hard limit between them, but maybe not forever!
November 22, 2025 at 6:29 PM
Yeah, that's why I chose "conscious" (definitely thought about "intelligent"), but I think LLMs, Roombas, and smart thermostats have also done yeoman's work in showing how fragile and silly some of our central philosophical concepts are. "Goals" and "drives" are more interesting, to me at least.
November 22, 2025 at 6:28 PM
The standards push since NCLB and Race to the Top has thinned curricula down to lots of math and English with little actual content, sadly. (I was involved in advocating for those standards, and they hurt rather than helped.) COVID made things much worse. The current equilibrium feels like the worst of all worlds.
November 22, 2025 at 12:26 AM
It is really impressive. I know Anthropic has strong revenue and Claude Code seems to have a good following. So they may leap ahead again. But their text-centricity has been a liability for a while.
November 22, 2025 at 12:19 AM
My experiment was inspired by my own university being in the news for a schlocky, AI-generated course.
November 21, 2025 at 4:17 AM
All part of my “analytic philosophy is good, actually” phase. 🤣
November 21, 2025 at 2:13 AM