Vincent Carchidi
@vcarchidi.bsky.social
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind. (It's confusing. Just go with it.)

https://philpeople.org/profiles/vincent-carchidi

All opinions entirely my own.
Pinned
Sharing a new preprint on AI and philosophy of mind/cogsci.

I make the case that human beings exhibit a species-specific form of intellectual freedom, expressible through natural language, and that this is likely an unreachable threshold for computational systems.

philpapers.org/rec/CARCBC
Vincent Carchidi, Computational Brain, Creative Mind: Intellectual Freedom as the Upper Limit on Artificial Intelligence - PhilPapers
Some generative linguists have long maintained that human beings exhibit a species-specific form of intellectual freedom expressible through natural language. With roots in Descartes’ effort to distin...
philpapers.org
"It is not enough, therefore, to build a machine that could use words (if that were possible), it would have to be able to create concepts and to find for itself suitable words in which to express additions to knowledge that it brought about. Otherwise, it would be no more than a cleverer parrot...
November 18, 2025 at 5:34 PM
Larry Summers being on the board of the OpenAI nonprofit has kind of flown under the radar. Arguably a more influential position than his (current) Harvard affiliation.
November 18, 2025 at 2:04 PM
One of the reasons debates about phenomena like consciousness seem unending is that the only way to observe it is to be a subject with consciousness. So far as we know, only humans have consciousness. We can only be sure of our own. We assume others are conscious because we're made similarly. 🧵
i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i'm made of magic meat i
November 18, 2025 at 3:11 AM
Some Fukuyama content for the Fukuyama enjoyers out there

youtu.be/-GDAEzuHn1U?...
Why Populism Is Winning (w/ Francis Fukuyama)
YouTube video by The Bulwark
youtu.be
November 18, 2025 at 12:46 AM
Muted "Nuzzi." Still seeing Nuzzi content.
November 17, 2025 at 6:19 PM
Insightful piece, echoes much of what I've come to in my own writing over the years.

I'd just suggest an additional problem for LLM writing beyond how they're trained: part of the oomph of writing, as Nathan discusses, is the person's voice. But that is a result of idiosyncratic motivations to use
November 17, 2025 at 4:53 PM
You tell 'em Yann
November 17, 2025 at 3:10 PM
Purely anecdotal, but I have been coming across more published articles that either don't pass the smell test or have hallucinated citations.

Came across a piece that cites a real article I co-authored, but lists my name as Victoria Carchidi, who is a real but completely unrelated researcher.
Hey folks. I have spent *days* reading and meticulously drafting comments on a very lengthy manuscript. Which I have just found includes an AI-faked quote. Attributed to ME.

Here is a thread of my feelings
GIF: a man in a blue shirt says "i am untethered and my rage knows no bounds!"
November 16, 2025 at 6:36 PM
What an absolute disgrace Chomsky made of his last years...
"He quickly became a highly valued friend and regular source of intellectual exchange and stimulation." Noam Chomsky on Jeffrey Epstein. New article catalogues everything we know about Chomsky's friendship with Epstein. Link 👇
www.realtimetechpocalypse.com/p/noam-choms...
Noam Chomsky Is a Scumbag
From Jeffrey Epstein to Lawrence Krauss to Woody Allen, Chomsky has shown a clear pattern of poor judgment and low moral standards. Hard to express how disappointed I am in him. (2,800 words.)
www.realtimetechpocalypse.com
November 16, 2025 at 2:42 PM
Silver lining of a possible AI downturn: it would give me an excuse to learn more about tech (and tech policy) I've always been interested in but never had time to dive into. Renewables are probably top of the list.
November 15, 2025 at 5:20 PM
These past few months have probably been the most difficult period I've had staying focused on any one thing. Feels like everything is in a state of transition, and it doesn't make much sense to think about much beyond the next year or two.
November 15, 2025 at 3:10 PM
Still thinking about this quote from Larry Summers, which is by no means a novel attitude, but it so clearly expresses an attitude that's usually kept at least somewhat under wraps.

And I keep thinking about how destructive an attitude this is for the elite of an advanced society to hold.
November 14, 2025 at 6:21 PM
Good stuff here.

I saw something from Benedict Evans recently where he noticed that the term "AI slop" at some point shifted from its original meaning of "trash output" to "anything automated by an LLM."

It seems like even a barrage of *better* outputs in areas like recruitment becomes slop
November 14, 2025 at 2:39 PM
Appreciate the general idea in this thread, but I think this answers the original question about why some people can't/don't use them appropriately: the thrust of the current agent craze is that the models are...agents. They shouldn't need handholding. Why learn when you can automate?
ed3d.net Ed @ed3d.net · 5d
What's frustrating (and @golikehellmachine.com has ranted about this to me before) is that Anthropic etc. seem largely uninterested in teaching their users this stuff. The theory seems to be "we'll just make it smarter so you don't have to know how to do that".

It's uh. Not working so well!
November 13, 2025 at 6:31 PM
I accepted some time ago that pretty much anyone I've been influenced by intellectually who was born before a certain year will have made...questionable decisions.
Jeffrey Epstein was developing a series, moderated by @lkrauss1.bsky.social, to bring scientists and celebrities together. The first season would include an episode where "Woody Allen talks about the human condition with Linguist Noam Chomsky."

drive.google.com/file/d/14Sla...
HOUSE_OVERSIGHT_023123.txt
drive.google.com
November 13, 2025 at 1:20 AM
There's also the intellectual autonomy issue here, which I repeatedly bring up because so long as the quality of LLMs' output depends on the competencies of the person prompting them, it's difficult to say the LLM has mastered the skills necessary to produce that output in any sense relevant to us.
LLMs' essays are almost always impressive in the "it's amazing what they can produce" sense, rather than the "making conceptual progress" sense.

This can quickly turn into a can of worms that I don't want to open, because many would likely say they do this already (or that humans don't do this).
November 12, 2025 at 4:02 PM
I think it's true the paradox is less useful/clean-cut than it used to be.

I've been thinking recently that it also started getting misapplied in the 2023-present period.

(gyges may not agree with this part, just my hot take)
i honestly think moravec's paradox has broken. like the boundary between easy and hard is this fucked up fractal thing and we're sort of just sitting on it. we can do some things and not others and no simple heuristic tells you which ones
it would be cool to have humanoid robotics. we are still trying to deal with moravec's paradox in robotics
November 12, 2025 at 3:58 PM
Reminded of some commentary from the pre-ChatGPT era because of the LeCun news.

AI Winters are seen as being in the rearview mirror; deep learning might be capturing general principles of intelligence, but either way it's useful enough to set those concerns aside.

ojs.aaai.org/aimagazine/i...
The Third AI Summer: AAAI Robert S. Engelmore Memorial Lecture | AI Magazine
ojs.aaai.org
November 11, 2025 at 6:07 PM
Only just got around to this, which turned out to be interesting in a different way than I expected. It's not a defense of a specific AI nativist research program so much as a case that there *should be* an AI nativist program comparable to ML empiricism.

philarchive.org/rec/KARAET-4
Brett Karlan, AI empiricism: the only game in town? - PhilArchive
I offer an epistemic argument against the dominance of empiricism and empiricist-inspired methods in contemporary machine learning (ML) research. I first establish, as many ML researchers and philosop...
philarchive.org
November 11, 2025 at 1:32 AM
This is an interesting read, and seems like a well-motivated project.

"Small open high quality sources are increasingly more valuable than large data collections of questionable provenance."
Breaking: we release a fully synthetic generalist dataset for pretraining, SYNTH and two new SOTA reasoning models exclusively trained on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range. pleias.fr/blog/blogsyn...
November 10, 2025 at 10:02 PM