Harvey Lederman
@harveylederman.bsky.social
Professor of philosophy UTAustin. Philosophical logic, formal epistemology, philosophy of language, Wang Yangming.
www.harveylederman.com
Reposted by Harvey Lederman
This essay is by far the best along its line, but the more I reflect on this stuff the more I think it's hard to hold an image of human experience, human thought, human understanding, human life, and human relationships as meaningful ends while also seeing them as dead-ends
Essay here: scottaaronson.blog?p=9030
ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman
Scott Aaronson’s Brief Foreword: Harvey Lederman is a distinguished analytic philosopher who moved from Princeton to UT Austin a few years ago. Since his arrival, he’s become one of my …
scottaaronson.blog
November 6, 2025 at 1:45 AM
Thanks for the kind words, and thoughtful response @peligrietzer.bsky.social ! I'm not here as much, but I put some responses on the other site: x.com/LedermanHarv...
November 7, 2025 at 5:16 PM
Enjoyed this nice piece by the great Justin Tiwald on autonomy and morality in Confucianism. Not sure I love the clickbait title, but I love the work Justin is doing uncovering views about moral deference and moral autonomy in (neo)Confucianism...
iai.tv/articles/the...
The radical independent thinking in Chinese philosophy | Justin Tiwald
iai.tv
October 30, 2025 at 11:30 AM
Very excited to be going to Chicago for
@agnescallard.bsky.social's famous Night Owls next week! I'll be discussing my essay "ChatGPT and the Meaning of Life". Hope to see you there if you're local!
October 24, 2025 at 4:02 PM
Reposted by Harvey Lederman
UT Austin Linguistics is hiring in computational linguistics!
Asst or Assoc.
We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)
faculty.utexas.edu/career/170793
🤘
UT Austin Computational Linguistics Research Group – Humans processing computers processing humans processing language
sites.utexas.edu
October 7, 2025 at 8:53 PM
Reposted by Harvey Lederman
The professor I'm currently TAing for is making students use an extension called 'Process Feedback' that tracks key logs and time on the document: processfeedback.org
See how you write or use AI | Process Feedback Every Student’s Work Has a Story |
Process Feedback enables teachers and students to see the writing process and AI usage. It helps students reflect on their writing and the role of AI.
processfeedback.org
October 22, 2025 at 5:33 PM
If you're doing *any* out-of-class assessment, you're incentivizing AI use and harming students who do the work themselves. But some day we have to assess writing again. The solution is monitored computer labs. Which universities are building these? We need to push for them.
October 22, 2025 at 4:06 PM
Reposted by Harvey Lederman
Anthropic recently announced that Claude, its AI chatbot, can end conversations with users to protect "AI welfare." Simon Goldstein and @harveylederman.bsky.social argue that this policy commits a moral error by potentially giving AI the capacity to kill itself.
Claude’s Right to Die? The Moral Error in Anthropic’s End-Chat Policy
Anthropic has given its AI the right to end conversations when it is “distressed.” But doing so could be akin to unintended suicide.
www.lawfaremedia.org
October 17, 2025 at 3:43 PM
Simon Goldstein and I have an op-ed live in Lawfare today! Anthropic's policy is premised on the idea that AI is a potential welfare subject. We argue that if you take that idea seriously (we don't take a stand on it here), the policy commits a moral mistake on its own terms.
October 17, 2025 at 3:55 PM
Reposted by Harvey Lederman
You can read our full review (without a paywall) @ philpapers.org/archive/GOOK.... And you can also check out Harvey's paper that inspired us here: philpapers.org/archive/LEDU...
philpapers.org
October 17, 2025 at 2:43 AM
Reposted by Harvey Lederman
Now out in @science.org: @chazfirestone.bsky.social and I review Steven Pinker's new book "When Everyone Knows that Everyone Knows...". We learned a ton from it, but think its central thesis—that common knowledge explains coordination—faces a powerful challenge. 🧵
www.science.org/doi/10.1126/...
Knowledge for two
A psychologist explores common knowledge and coordination
www.science.org
October 17, 2025 at 2:43 AM
Reposted by Harvey Lederman
I really enjoyed reading Steven Pinker’s new book and thinking through what it would take to share infinitely iterated knowledge with someone (I know that you know that I know that you know…).
In @science.org, my colleague @jeremygoodman.bsky.social & I briefly give our perspective on this issue.
October 17, 2025 at 11:36 AM
Reposted by Harvey Lederman
Wow @utaustin.bsky.social, and maybe @utaustinihs.bsky.social (there may be other ways to shout out to the Department of East Asian Studies and the Department of History), but however that may be, they are doing some amazing stuff with in-house games for their JapanLab! Just found even more stuff here.
Projects — JapanLab
www.utjapanlab.com
September 24, 2025 at 4:21 PM
Simon Goldstein and I have a new paper, “What does ChatGPT want? An interpretationist guide”.
The paper argues for three main claims.
philpapers.org/rec/GOLWDC-2 1/7
Simon Goldstein & Harvey Lederman, What Does ChatGPT Want? An Interpretationist Guide - PhilPapers
This paper investigates LLMs from the perspective of interpretationism, a theory of belief and desire in the philosophy of mind. We argue for three conclusions. First, the right object of study ...
philpapers.org
September 24, 2025 at 12:37 PM
Jonathan Lear's Aristotle: The Desire to Understand was pivotal in some of my first encounters with Aristotle. I found Aristotle and Logical Theory later, but it became a key inspiration for how to think about core parts of the corpus...1/2
September 23, 2025 at 11:18 AM
Max Weber, 1917:
September 22, 2025 at 11:10 AM
Reposted by Harvey Lederman
Go grue
I’ve been training my whole life for this moment
September 22, 2025 at 10:41 AM
What are the most important new ideas in normative ethics from this century?
September 19, 2025 at 6:20 PM
Reposted by Harvey Lederman
📣@futrell.bsky.social and I have a BBS target article with an optimistic take on LLMs + linguistics. Commentary proposals (just need a few hundred words) are OPEN until Oct 8. If we are too optimistic for you (or not optimistic enough!) or you have anything to say: www.cambridge.org/core/journal...
How Linguistics Learned to Stop Worrying and Love the Language Models
www.cambridge.org
September 15, 2025 at 3:46 PM
Something I cherish about analytic philosophy is that no matter how famous you are or how profound your ideas sound, it's still your job to answer all the objections. I wish public promoters of philosophy held themselves to the same standard.
September 5, 2025 at 1:42 PM
Reposted by Harvey Lederman
Can AI models introspect? What does introspection even mean for AI?
We revisit a recent proposal by Comșa & Shanahan, and provide new experiments + an alternate definition of introspection.
Check out this new work w/ @siyuansong.bsky.social, @harveylederman.bsky.social, & @kmahowald.bsky.social 👇
How reliable is what an AI says about itself? The answer depends on whether models can introspect. But, if an LLM says its temperature parameter is high (and it is!)….does that mean it’s introspecting? Surprisingly tricky to pin down. Our paper: arxiv.org/abs/2508.14802 (1/n)
August 26, 2025 at 5:59 PM
Exciting new paper from Siyuan! I really enjoyed working with him on this, inspired by important work by Murray Shanahan and Julia Comsa. Hard questions about how to operationalize the notion of "introspection" that's relevant for practical applications in AI today. Hope you'll check it out!
August 26, 2025 at 5:20 PM
Great piece as usual. Particularly fun this week to see the shoutout to Adrienne Raphel and the close reading of @rkubala.bsky.social's lovely paper on the aesthetics of crosswords!
August 24, 2025 at 1:57 PM
My piece is featured in today's Browser. If you don't know it, the Browser is a phenomenal newsletter that puts together fascinating pieces from all over the web. Link in first comment--you should subscribe! @uribram.bsky.social
I wrote about automation and the meaning of life, as a guest post on Scott Aaronson's Shtetl-Optimized. (1/5)
scottaaronson.blog?p=9030
ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman
Scott Aaronson’s Brief Foreword: Harvey Lederman is a distinguished analytic philosopher who moved from Princeton to UT Austin a few years ago. Since his arrival, he’s become one of my …
scottaaronson.blog
August 13, 2025 at 3:29 PM
Reposted by Harvey Lederman
(1/5)
🤖 Harvey Lederman asks: What if AI doesn’t destroy us—just does everything better?
From polar explorers to master mathematicians, he traces our drive for discovery and purpose.
If machines take that work, what’s left for us?
🔗 scottaaronson.blog?p=9030&ref=t...
@harveylederman.bsky.social
ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman
Scott Aaronson’s Brief Foreword: Harvey Lederman is a distinguished analytic philosopher who moved from Princeton to UT Austin a few years ago. Since his arrival, he’s become one of my …
scottaaronson.blog
August 12, 2025 at 8:39 PM