Elly
@ellyzoe.bsky.social
Multidisciplinerdy. Lit & writing major, CS PhD, teacher of many subjects. Jack of all trades. Jewish hillbilly in the PNW. 🏳️‍🌈 Fix your hearts or die. she/they/elle/iel. English/Français
I mean, Baba Yaga is the best. I was thinking of her but was not gonna attempt chicken feet; I thought a hefty tree trunk was safer
December 21, 2025 at 10:50 PM
Haha makes sense. I’m not a polyglot but I love languages: speak two languages pretty well, understand 50-70% of two more and have studied another 3-4 some, and IME you are correct. It’s all just work. For some people the work feels fun often, and immersion helps a lot, but only work “works” 😂
December 21, 2025 at 10:49 PM
Haha, tho in our house they usually sit around til January some time when they are visibly dusty and kind of gross 😂
December 21, 2025 at 10:42 PM
It was pretty great
December 21, 2025 at 10:37 PM
Ooo maybe?? I dunno!
December 21, 2025 at 10:37 PM
That’s what I thought! And it did look like it would. Maybe it’s the weight of all the decorations
December 21, 2025 at 10:36 PM
Musta been too much weight. Maybe it needed a double layer for both trunk dimensions?
December 21, 2025 at 10:35 PM
The trunk cookies just literally broke in half.
December 21, 2025 at 10:35 PM
We did that. 😭 We just kept the trunk and the house separate to decorate (but had glued a platform to the top of the trunk for the house to sit on)
December 21, 2025 at 10:28 PM
It was so cool! 😭
December 21, 2025 at 10:22 PM
(This was poorly phrased—some companies have made multi-component systems that include both LLMs and explicit reasoning systems that evaluate what the LLMs output before deciding what to pursue/output)
December 21, 2025 at 10:00 PM
Yup. It’s fine for something you can verify, but if you don’t know the right answer yourself, it’s a nightmare bc it’ll make up something that sounds very convincingly answer-shaped and it might be true or it might kill you. 😵‍💫
December 21, 2025 at 9:52 PM
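(A minimal sketch of that "fine for something you can verify" point, in Python. ask_llm is a hypothetical stand-in for any chatbot call; the trustworthy part is the multiply-back check bolted on outside the model, not anything inside it.)

```python
# "Only trust what you can verify": the caller supplies the check,
# because the model has none. ask_llm() is hypothetical throughout.

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns plausible-sounding text."""
    return "391 = 17 x 23"  # answer-shaped, may or may not be true

def verified_factorization(n: int) -> tuple[int, int] | None:
    answer = ask_llm(f"Factor {n} into two integers.")
    # Parse the claimed factors out of the free-text answer.
    try:
        lhs, rhs = answer.split("=")[1].split("x")
        a, b = int(lhs), int(rhs)
    except (IndexError, ValueError):
        return None  # not even parseable, let alone true
    # The independent check: multiply back. This is the step the LLM lacks.
    return (a, b) if a * b == n else None

print(verified_factorization(391))  # (17, 23) only if the claim checks out
```

If you have no check like that multiplication, you are back in nightmare territory: the answer-shaped text is all you get.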
Sometimes, or even often in the right situations, most-likely-next-word sort of language generation DOES produce true statements, but there is no actual checking or verification built into the system, so there's no way to be sure without checking everything after.
December 21, 2025 at 9:48 PM
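(A toy sketch of that most-likely-next-word loop, in Python: a hand-made bigram table with made-up numbers, decoded greedily. Real LLMs are transformers over huge token vocabularies, not lookup tables, but the shape of the loop is the point: every step asks what is likely next, never whether it is true.)

```python
# Toy "most-likely-next-word" generation. All probabilities invented.
# Note there is no step anywhere that checks whether the output is
# *true*; the only question ever asked is "what usually comes next?"
next_word_probs = {
    "the":     {"capital": 0.4, "moon": 0.3, "answer": 0.3},
    "capital": {"of": 0.9, "city": 0.1},
    "of":      {"france": 0.5, "mars": 0.5},
    "france":  {"is": 1.0},
    "mars":    {"is": 1.0},
    "is":      {"paris": 0.6, "rome": 0.4},
}

def generate(word: str, steps: int = 5) -> str:
    out = [word]
    for _ in range(steps):
        options = next_word_probs.get(out[-1])
        if not options:
            break
        # Pick whichever word is most probable after the current one.
        out.append(max(options, key=options.get))
    return " ".join(out)

print(generate("the"))  # "the capital of france is paris": fluent, and
                        # true here only by luck of the training counts
```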
Some companies, for some purposes, have hooked up LLMs with explicit reasoning, and that has worked better, but isn't what chatbots are using—they're just barfing out free-associations basically, bc that's what LLMs *do*
December 21, 2025 at 9:47 PM
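(A rough sketch of that generate-then-verify pattern, in Python. ask_llm and the square-root task are hypothetical stand-ins; real multi-component systems are much more elaborate, but the gate is the same idea: nothing leaves until an explicit checker says yes.)

```python
# Generate-then-verify: sample candidate answers from a model, keep
# only ones an independent verifier accepts. ask_llm() is hypothetical.
import random

def ask_llm(prompt: str) -> int:
    """Hypothetical LLM: proposes a plausible-looking answer."""
    return random.randint(1, 20)

def verifier(candidate: int) -> bool:
    """Explicit check the LLM itself doesn't have: here, x*x == 144."""
    return candidate * candidate == 144

def solve(prompt: str, attempts: int = 50) -> int | None:
    for _ in range(attempts):
        candidate = ask_llm(prompt)
        if verifier(candidate):  # only verified output leaves the system
            return candidate
    return None  # better to admit failure than emit an unchecked guess

print(solve("What is the square root of 144?"))  # 12, or None
```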
They’re vaguely, sorta like if you set the language center of your brain to free associating without any of the other bits that evaluate whether what’s coming out makes sense or not. They’re very good at producing language that sounds natural and grammatical, and good at spotting patterns in language
December 21, 2025 at 9:45 PM
LLMs are a subfield of machine learning, which is a subfield of artificial intelligence, which is a category we have been using for decades to classify pretty much every problem we don’t know how to solve well computationally.

LLMs alone will never be reliable for “truth”
December 21, 2025 at 9:44 PM