Harvey Lederman
@harveylederman.bsky.social
Professor of Philosophy, UT Austin. Philosophical logic, formal epistemology, philosophy of language, Wang Yangming.

www.harveylederman.com
As a Wang Yangming partisan, I cheered at this quote:
October 30, 2025 at 11:30 AM
😢
October 23, 2025 at 12:35 AM
Could be, but what about a not-top but decent student? Very bad incentives there, I fear.
October 22, 2025 at 11:06 PM
The fear is that we're under-accounting for the fact that the student who doesn't want to use AI is currently punished, because their peers do better by taking shortcuts.
October 22, 2025 at 7:00 PM
Yes, agreed! You want to make the risk bad enough that using AI becomes less incentivized; current practice means that if you don't use it, you're being dumb... I want to change that.
October 22, 2025 at 6:48 PM
Reposted by Harvey Lederman
The professor I'm currently TAing for is making students use an extension called 'Process Feedback' that tracks keystroke logs and time spent on the document: processfeedback.org
See how you write or use AI | Process Feedback: Every Student's Work Has a Story
Process Feedback enables teachers and students to see the writing process and AI usage. It helps students reflect on their writing and the role of AI.
processfeedback.org
October 22, 2025 at 5:33 PM
Yes
October 22, 2025 at 5:14 PM
Yes I ask them to make me editor on their doc
October 22, 2025 at 4:42 PM
oh? I have been using the “history” function there but it doesn’t track copy-paste
October 22, 2025 at 4:37 PM
Totally agree! It's such a confusing and hard area. I fear that feelings run so high about it that many are (reasonably) steering clear of discussing it for fear of error. But IMO we need to get clearer in our thinking, even if that involves stumbles along the way.
October 17, 2025 at 4:43 PM
Not actually relevant, but I don't eat meat (including fish), and I do delete AI chats all the time, so take that for what it's worth.
October 17, 2025 at 4:38 PM
Thanks, I appreciate this. I hoped it was clear that the analogy is about illustrating that many think that uncertainty about what is a welfare subject can motivate action, not that "fish = AI". But ambiguity is in the eye of the reader and I'm sorry to hear it isn't/wasn't clear.
October 17, 2025 at 4:38 PM
The analogy is clearly about risk! We say "It is *uncertain*". This uncertainty...it's clear that the point is about potential welfare subjects...

Your original post said we are "equating" them; I don't think that's a reasonable reading of this
October 17, 2025 at 4:29 PM
This is not "equating" the moral status of the two as you originally said. It's an **analogy** about risk.

These are hard issues, and I appreciate that people have very strong feelings about them. But exactly for that reason it's important to be fair when issuing very strongly worded claims.
October 17, 2025 at 4:27 PM
neutral on how welfare status and mentality are understood. That's a presentational issue, not a misunderstanding.

2. We wrote: "As an analogy, it is uncertain whether fish are welfare subjects. This uncertainty stops many people from eating fish, because they want to avoid the risk of moral harm."
October 17, 2025 at 4:27 PM
1. This is very different from what you said in your original post. It is not misunderstanding "how AI works". I appreciate that you would have done things differently than we did, but this is an unfair accusation. You would have liked functionalism to be a premise; we thought it was better to be...
October 17, 2025 at 4:27 PM
We definitely don't make this "equation". We give an example to illustrate why potential moral subjecthood can matter to what we should do. An illustrative example is not an equation.

Your point about repeating is interesting. I don't share that view, but I understand it.
October 17, 2025 at 4:16 PM