Thomas Dietterich
@tdietterich.bsky.social
Safe and robust AI/ML, computational sustainability. Former President of AAAI and IMLS. Distinguished Professor Emeritus, Oregon State University. https://web.engr.oregonstate.edu/~tgd/
There are plenty of narrow AI systems that exceed human performance. Example: AlphaFold for protein folding.

Even a simple calculator beats humans at arithmetic.

Properly deployed, AI can help us address many important problems.
November 18, 2025 at 6:12 AM
There has been a Promethean thread throughout the history of AI. Bezos is bringing it out into the open.
November 18, 2025 at 5:46 AM
Yes, maybe that’s the fix. It still feels a bit slippery.
November 17, 2025 at 5:41 PM
Good question. I've thought a bit about this but can't decide. If memory is free, you could remember everything as you suggest. But otherwise, the rules of cache management would apply, I guess.
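To make the cache-management idea concrete, here is a minimal Python sketch assuming an LRU (least-recently-used) policy; the MemoryStore class and its interface are hypothetical, purely for illustration:

```python
# A minimal sketch of "cache management" applied to a learner's memory,
# assuming a least-recently-used (LRU) eviction policy. The MemoryStore
# class and its interface are hypothetical, purely for illustration.
from collections import OrderedDict

class MemoryStore:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: OrderedDict = OrderedDict()  # key -> remembered value

    def remember(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)        # refresh recency
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)     # evict least-recently-used

    def recall(self, key):
        if key not in self.items:
            return None                        # forgotten, or never stored
        self.items.move_to_end(key)            # recalling refreshes recency
        return self.items[key]
```

Under this policy, recalling a memory keeps it alive, and anything unused long enough is eventually evicted; if memory really were free, capacity would be unbounded and nothing would ever be forgotten.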
November 17, 2025 at 6:56 AM
Yes. I define learning as an increase in the knowledge of the system. At a minimum, that requires memory. Memory without generalization is "rote learning". Speaking from experience, maybe we should call generalization without memory a kind of "senior moment"?
November 16, 2025 at 8:07 PM
I generally agree with your analysis here. I didn't intend my comment as an attack or as "whataboutism", but rather to have exactly this discussion. Thank you.
November 16, 2025 at 7:55 PM
Sorry for my confusing analogy.
November 15, 2025 at 11:25 PM
I can give instructions to "agentic" LLM systems and they will execute them. That is a form of programming. I don't think the vendor of such systems is liable for every single token that is produced or action that is taken. But the vendor should be responsible for harms caused by LLM flaws.
November 15, 2025 at 11:25 PM
The question is how does the voting public understand the slogan. In the 60s, some folks interpreted "Peace Now" as "Let's withdraw from Vietnam" and others interpreted it as "Let's unilaterally disarm".
November 15, 2025 at 10:58 PM
Machine learning is a mimicry technology, so of course these LLMs mimic us. But that is not necessarily evidence about the nature of general intelligence (i.e., intelligence more general than ours).
November 15, 2025 at 10:53 PM
I agree that bad medical advice is the fault of the vendor, not the user.
November 15, 2025 at 2:12 AM
OpenAI in the ChatGPT case, as well as all of these “agentic” systems that are coming on the market.
November 15, 2025 at 2:10 AM
Oh, I worry! (But I still fly.)
November 15, 2025 at 2:08 AM
There is a gray zone where the user tells an AI system to commit a crime (and it does). Under what conditions is the AI vendor an accessory to the crime? @rcalo.bsky.social? Is this in your book?
November 15, 2025 at 2:06 AM
Some nuance is required. If I write a computer program that prints out something libelous, for example, the compiler vendor is not liable. But if a compiler bug causes someone to be harmed, the vendor should be liable.
November 15, 2025 at 2:06 AM
Russia and China combined have created twice as much debris as the US. China notoriously destroyed one of its own satellites in a 2007 anti-satellite test. It would be supreme justice if it were Chinese-sourced debris that struck the Chinese spacecraft.
www.armscontrol.org/act/2007-03/...
Chinese Satellite Destruction Stirs Debate | Arms Control Association
November 8, 2025 at 2:43 AM
“Archival” and “workshop” don’t usually go together. If you can provide evidence of strong peer review, that’s the key. You may need to do that in an appeal, as we don’t have any mechanism in our submission system for providing such evidence.
November 5, 2025 at 9:40 PM
You should still fix the first paragraph. We will be releasing review articles and position papers, but only after they have passed peer review.
November 4, 2025 at 7:01 PM
This is a very good point. It is one of the reasons why I think generic chatbots should probably be outlawed.
November 4, 2025 at 12:19 AM
Here is a good use: LLMs as proof assistants in mathematical research.

Here is a bad use: Automated synthesis of misinformation for social media.

It *is* a new technology; people are trying to figure out how to use it, both for good and for ill. Not all technology has a "use" when it is invented.
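To make the proof-assistant use concrete, here is a toy Lean snippet (a hypothetical illustration, not output from any real system): the human states the lemma, an LLM might suggest the closing step, and Lean's kernel checks that the suggestion actually proves it.

```lean
-- Toy illustration of the "LLM as proof assistant" workflow (hypothetical).
-- The human states the goal; an LLM might propose `Nat.add_comm` as the
-- justification; Lean then verifies that the proposal closes the goal.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```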
November 3, 2025 at 3:31 AM
This only concerns a small fraction of the papers submitted to or posted on arXiv.
November 2, 2025 at 5:54 AM
This is only for position papers and literature surveys. We see a lot of slop literature surveys.
November 2, 2025 at 5:30 AM