Jackdaws_Of_Midgard aka Elijah Blyth
jackdawsofmidgard.bsky.social
@jackdawsofmidgard.bsky.social
I am a streamer (Twitch, see if you can guess my handle), an author in training; I compose bad music and create terrible art... you could call me a Jack of all trades... but that would not be my name.
There is; there are even a few laws that were put forward by AI researchers in places like the EU. I would say most reputable AI researchers would say there's a need for it, and all accredited universities require a code of ethics for all research. But there are a fair few specifically asking for it.
January 15, 2026 at 2:02 PM
Yeah, the insidious, murderous, propaganda driving, CSAM, and invasive shit is exactly why private companies shouldn't be allowed to run wild without unbiased scientific oversight for these LLMs. Preferably in the form of an international body like the UN. Never trust any "self policing" industry.
January 15, 2026 at 2:08 AM
Him along with Palantir. I'm getting distracted

What I mean is that those with grievances have every right to them with LLMs, I just wish the TechBros would stop calling LLMs "AI". While LLMs are a tiny subset of AI research, they are not AI as a whole, and it damages actual useful AI technologies immeasurably by association.
January 15, 2026 at 1:46 AM
I appreciate the calm, and knowledge driven debating. It's an emotive subject for sure, especially with the likes of fElon Musk(rat) creating an LLM that basically enables child pornography and then says "But Ma Free Speech". Odious little... well, you know. I really hope he blows himself up soon.
January 15, 2026 at 1:46 AM
The difference between someone using a tool for what it's supposed to be for, the betterment of mankind, and those who see the potential for a quick buck at the expense of mankind, is stark, unfortunately.

Never trust a CEO to do the right thing, and even when they do, they have ulterior motives.
January 15, 2026 at 1:36 AM
I will wait and see with LLMs outside of laboratory uses, but inside university labs, they use only a fraction of the compute that the likes of ChatGPT or Copilot use, and are used and developed under full oversight. Mainly because universities are super restrictive with funding, unlike corporations.
January 15, 2026 at 1:34 AM
With LLMs? Not so sure, haven't seen a killer application that excuses the costs (both environmental and ethical) of the current lot, but with other AI techniques that are actually built with efficiency in mind when designing them, and are built to assist rather than replace, absolutely, yes.
January 15, 2026 at 1:28 AM
Very cool. Not tried any of their games myself, been stuck on Warframe since... well... it was released. If you ever get the chance to meet any of the folks at Digital Extremes, they're also genuinely good folks.
January 14, 2026 at 4:35 PM
If that's how you feel, that's your choice. I was trying to have a genuine conversation, but if you feel threatened by that, I won't bother you again. It's okay, we shall go our separate ways. I wish you all the best.
January 14, 2026 at 2:46 PM
This is how it goes without oversight, if someone values money over morals or ethics, they can make anything dangerous.

What I did was purely based on reason and my own ethics. I decided that being superwealthy wasn't worth all the lives it would cost. Some don't see it that way unfortunately.
January 14, 2026 at 2:27 PM
This is the single reason I quit my PhD, as they tried pushing me to work with the UK Ministry Of Defence. I wanted to make computer games more fun by having enemies and computer allies more interesting to play against or with. They wanted more efficient murderbots. I left and deleted everything.
January 14, 2026 at 2:27 PM
Unfortunately, some folks at my university saw the potential money they could make from an AI that knows how to utilise destructible environments to better achieve its goals (in games this is fun; applied to drones this is terrifying and immoral), thanks to using various techniques in synergy.
January 14, 2026 at 2:27 PM
My research was in unifying AI techniques and identifying the best/most efficient use cases for each technique (it was to be used in computer games, so efficiency was paramount), and while LLMs can do some impressive stuff, they're far from efficient... downright criminally inefficient in my mind.
January 14, 2026 at 2:27 PM
I agree. There's a limit to how effective a technique is, and throwing more and more processing power at something just reaches that limit quicker, as can be seen with LLMs: they've had more processing time than any previous AI technique, yet have pretty much plateaued. Again, oversold and overhyped.
January 14, 2026 at 2:27 PM
Yeah, like a lot of tech before LLMs came along, there is always someone who tries to oversell something, then reality hits and folks get hurt.

Why yes, we can have 3000hp jet propelled cars... should we have 3000hp jet propelled cars on the road? Hell no. Doesn't mean jets are bad.
January 14, 2026 at 1:55 PM
I think that AI should be used to Augment, not replace, and should never be left without supervision. There are some things AI simply cannot do (yet), and when it can, it should still be as an assistant, not as the director... I believe in Symbiosis, but that is far more philosophically divisive.
January 14, 2026 at 1:53 PM
This is also an example of companies thinking "hey, we can just use cheap labour to do this"... like Microsoft did with its entire testing department towards the end of Windows 7 (long before LLMs were about). They fired almost their entire testing staff, and the results speak for themselves.
January 14, 2026 at 1:53 PM
This, so much this. AI, like all science, NEEDS oversight and unbiased validation, otherwise it's literally not science.
January 14, 2026 at 1:47 PM
The problem is, as they said in one of their posts, the CEOs who are overselling it, who are pushing it down everyone's throats whilst allowing unchecked development by people who have no business using the technology (Elon Musk, for instance). It absolutely SHOULD have oversight.
January 14, 2026 at 1:43 PM
This does not mean that the threat of Nukes is any less scary or that Nuclear Science could cause untold damage to the world if unchecked. Subjects that can have both a positive and negative tend to get emotive, especially if some people oversell or push a subject without understanding it.
January 14, 2026 at 1:43 PM
It's quite alright, people get emotional about things that matter, and this does matter. The perception the general public has is that ALL AI is bad, just like the perception that Nuclear Science must all be bad because Nukes. The thing is, that detracts from the very real advantages.
January 14, 2026 at 1:43 PM
Again, LLMs are useful for a limited, use-case-specific range of things. How they are presented and sold to the public is NOT useful, and does more damage to AI as a whole. This entire thread is proof of that. Some people are getting VERY emotional about AI, thinking that it's all Copilot or Grok🤢
January 14, 2026 at 1:37 PM
I don't disagree, LLMs are oversold, overhyped, and overused. LLMs have a very limited range of specific use-cases, i.e. the collation and tabulation of information so that it can be presented in a human-understandable way. They can replicate patterns and find patterns, useful in limited ways.
January 14, 2026 at 1:37 PM