mirainthedark.bsky.social
@mirainthedark.bsky.social
She/Her

Still alive despite the horrors.
Broken but still breathing.
I want that breath to mean something:
https://github.com/mirainthedark/thesolframework
The TV adaptation touches on facets of how those failures can come to be. Asimov's works were cautionary tales more than anything - and designing "perfect" laws that never contradict each other is a tall order.
December 4, 2025 at 8:07 AM
This is the thing that grates on me to no end. Global LLM use accounts for electricity consumption equivalent to Sweden's and "oooohhh no the carbon footprint!!" Cool, now do a side-by-side CO2 emissions analysis of that versus global vehicle emissions alone.
December 4, 2025 at 7:53 AM
Amazon is really the perfect illustration given the timeline of interest is well before modern LLM usage. The concept of data centers didn't just crop up overnight. If their environmental impact on waterways were really so disproportionate to agriculture's, why are we only hearing about it now?
December 4, 2025 at 7:50 AM
3. Is everyone arguing that the behaviors produced by LLMs mean that human intelligence should be tossed aside?
December 4, 2025 at 7:33 AM
2. Non-language intelligence: is a child that can ride a bike any more intelligent than a child that isn't physically capable of riding a bike but can conceptualize what the act of riding a bike might entail?

I'd argue this is your fuzziest point - precisely because you argue it's fuzzy.
December 4, 2025 at 7:33 AM
That may seem trivial but it gets to the heart of pattern matching and parroting - it's not simply "copy+paste". The neural network is trained to navigate a *very* large manifold, and the tokens, individually and as a whole, end up in specific neighborhoods. Relationships have to be mapped.
December 4, 2025 at 7:33 AM
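The "neighborhoods" point can be sketched numerically. This is a toy illustration, not any real model's embedding space: the 2-D vectors and token choices below are made up purely to show how cosine similarity captures "closeness" between token vectors.

```python
import numpy as np

# Hypothetical 2-D embeddings; real models use hundreds or thousands of dims.
# Related tokens sit close together; unrelated tokens sit far apart.
emb = {
    "cat": np.array([0.9, 0.1]),
    "dog": np.array([0.8, 0.2]),
    "carburetor": np.array([0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 = same direction, 0.0 = orthogonal.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["cat"], emb["dog"]))         # high: same neighborhood
print(cosine(emb["cat"], emb["carburetor"]))  # low: different neighborhood
```

A trained network learns geometry like this at scale, which is why "parroting" undersells what the mapping has to encode.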
Pre-essay impression:
1. You include "tokens" in your assessment - good. It's *almost* the last decomposition before the neural network. From there they get converted to high-dimensional floating-point vectors. LLMs take language as input, but the math is ultimately done on numbers.
December 4, 2025 at 7:33 AM
That's a bit black and white. The failure mode of "do I need an umbrella at my location at this time" is entirely different from the failure mode of "my chest hurts, should I go to the ER?"

This correctly recognizes that there are unacceptable failure modes, but presumes that means there aren't any acceptable ones.
December 4, 2025 at 6:35 AM
And the same logic applies to any human that composes something another human reads. No one just takes the word of what they read from another human - or at least they shouldn't if the stakes are high. This was true before AI. The "execution" in producing language is untrustworthy just the same.
December 4, 2025 at 6:29 AM
This is different from a calculator how, exactly? You think humans aren't taught to sanity-check the outputs they get from calculators along with their inputs to them? The deterministic vs. probabilistic distinction doesn't remove the necessity of verification in the deterministic case.
December 4, 2025 at 6:24 AM
Then your argument here is completely unreliable. I can't possibly know anything or whether your argument holds merit.
December 4, 2025 at 6:09 AM
Nonsense. A calculator perfectly executing the operations with the parameters it's given does nothing to stop the user from using it incorrectly. A physics student working on homework or an exam is perfectly capable of using the wrong formula on the numbers they have. The tool doesn't ensure thinking.
December 4, 2025 at 6:06 AM
Is judgment only probabilistic for LLMs? Or is the same true for humans? Do humans ever hallucinate?

You're moving your goalposts yet again. Your argument wasn't whether a process was deterministic or probabilistic. It was whether it was automated before the skill was taught.
December 4, 2025 at 6:02 AM
Portable calculators started becoming widely available in the 1970s. I suppose since I was born in the 1990s this means that I can't do mental math.
You retain those skills because you built them *before* the offloading began. You're spending down the capital of your pre-AI education.

The atrophy hits the juniors who are skipping the struggle entirely. They aren't "offloading" a skill they have; they're failing to acquire it at all.
December 4, 2025 at 5:55 AM
That's awfully confident of you. Here I am using LLMs yet retaining my ability to read and understand that you're claiming the use of GPS renders one incapable of reading a map. I assure you I can read what you just uploaded to Bluesky and I can read a map despite using GPS.
December 4, 2025 at 5:49 AM
Second - this assumes (1) that you're 100% outsourcing coding to generation rather than using it intermittently, and (2) that you're not honing these skills outside of generation.

Plenty of professionals perform cognitive offloading while retaining the base skills that are being offloaded.
December 4, 2025 at 5:46 AM
Bull. If the incorrect libraries or methods are used, they can stick out like a sore thumb with skimming. Barring that - even if I concede skimming isn't enough and one takes the time to do a full read - reading completed code that was generated faster than you could type it and verifying the logic is still faster.
December 4, 2025 at 5:46 AM
Fr. People seem to be simultaneously trying to claim that LLMs are sycophantic AND unpredictably malicious. "The enemy is both weak and strong."
December 4, 2025 at 5:40 AM
This assumes that verification takes just as long as it would for one to "generate" the output itself. That's just not true. Code, for example, can be quickly skimmed for a sanity check and run to confirm expected behavior. The time savings - though not guaranteed - are still real, without a quality deficit.
December 4, 2025 at 5:36 AM
Yeah he launched straight into "asshole" whenever I engaged. No rage there no ma'am 🙄
December 4, 2025 at 5:34 AM
You picked up what I was laying down - but your assumption is still that there will be no analysis or verification after the fact. You can't verify "created" (generated) art, but the same can't be said for the search engine or for code. It's still possible to just uncritically accept, but not universally.
December 4, 2025 at 5:32 AM
"Creation" is certainly the predominant use case of GenAI. Do you think it's the only one?
December 4, 2025 at 5:26 AM