Harvey Lederman
@harveylederman.bsky.social
Professor of philosophy UTAustin. Philosophical logic, formal epistemology, philosophy of language, Wang Yangming.
www.harveylederman.com
Tiwald also has a nice academic article on this important topic, if you want to go deeper!
philpapers.org/rec/TIWGIO
Justin Tiwald, “Getting It Oneself” (Zide 自得) as an Alternative to Testimonial Knowledge and Deference to Tradition - PhilPapers
October 30, 2025 at 11:30 AM
As a Wang Yangming partisan, I cheered at this quote:
October 30, 2025 at 11:30 AM
Essay here: scottaaronson.blog?p=9030
ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman
October 24, 2025 at 4:02 PM
Could be, but what about a not-too-good but decent student? Very bad incentives there, I fear.
October 22, 2025 at 11:06 PM
the fear is that we’re under-accounting for the fact that
the student who doesn’t want to use AI is currently punished because their peers do better by shortcuts
October 22, 2025 at 7:00 PM
yes agree! you want to make the risk bad enough that it becomes less incentivized; current practice means that if you don't use it, you're being dumb...I want to change that
October 22, 2025 at 6:48 PM
Reposted by Harvey Lederman
The professor I'm currently TAing for is making students use an extension called 'Process Feedback' that tracks keystroke logs and time spent on the document: processfeedback.org
See how you write or use AI | Every Student’s Work Has a Story | Process Feedback
Process Feedback enables teachers and students to see the writing process and AI usage. It helps students reflect on their writing and the role of AI.
October 22, 2025 at 5:33 PM
Yes I ask them to make me editor on their doc
October 22, 2025 at 4:42 PM
oh? I have been using the “history” function there but it doesn’t track copy-paste
October 22, 2025 at 4:37 PM
Totally agree! it's such a confusing and hard area. I fear that feelings run so high about it that many are (reasonably) steering clear of discussing it for fear of error. But IMO we need to get clearer in our thinking, even if that involves stumbles along the way.
October 17, 2025 at 4:43 PM
Not actually relevant, but I don't eat meat (including fish), and I do delete AI chats all the time, so take that for what it's worth.
October 17, 2025 at 4:38 PM
Thanks, I appreciate this. I hoped it was clear that the analogy is about illustrating that many think that uncertainty about what is a welfare subject can motivate action, not that "fish = AI". But ambiguity is in the eye of the reader and I'm sorry to hear it isn't/wasn't clear.
October 17, 2025 at 4:38 PM
The analogy is clearly about risk! We say "It is *uncertain*". This uncertainty...it's clear that the point is about potential welfare subjects...
Your original post said we are "equating" them; I don't think that's a reasonable reading of this
October 17, 2025 at 4:29 PM
This is not "equating" the moral status of the two as you originally said. It's an **analogy** about risk.
These are hard issues. I appreciate people have very strong feelings about them. But exactly for that reason it's important to be fair in issuing very strongly worded claims.
October 17, 2025 at 4:27 PM
neutral on how welfare status and mentality are understood. That's a presentational issue, not a misunderstanding.
2. We wrote: "As an analogy, it is uncertain whether fish are welfare subjects. This uncertainty stops many people from eating fish, because they want to avoid the risk of moral harm."
October 17, 2025 at 4:27 PM
1. This is very different from what you said in your original post. It is not misunderstanding "how AI works". I appreciate you would have done things differently than we did, but this is an unfair accusation. You would have liked functionalism to be a premise; we thought it was better to be...
October 17, 2025 at 4:27 PM
We definitely don't make this "equation". We give an example to illustrate why potential moral subject-hood can matter to what we should do. An illustrative example is not an equation.
Your point about repeating is interesting. I don't share that view, but I understand it.
October 17, 2025 at 4:16 PM