Vincent Carchidi
@vcarchidi.bsky.social
470 followers 590 following 1.5K posts
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind. (It's confusing. Just go with it.) https://philpeople.org/profiles/vincent-carchidi All opinions entirely my own.
vcarchidi.bsky.social
"a market crash today is unlikely to result in the brief and relatively benign economic downturn that followed the dotcom bust. There is a lot more wealth on the line now—and much less policy space to soften the blow of a correction."
economist.com
“Though technological innovation is undeniably reshaping industries and increasing productivity, there are good reasons to worry that the current rally may be setting the stage for another painful market correction,” writes Gita Gopinath in a guest essay
Gita Gopinath on the crash that could torch $35trn of wealth
The world has become dangerously dependent on American stocks, writes the former IMF chief economist
econ.st
vcarchidi.bsky.social
Yeah I hear that, and I haven't read his most recent stuff. But IIRC the survival of OpenAI was often bound up with the survival of either SV itself or major firms like Microsoft. I remember an air of "GenAI is SV's last idea, and if it goes belly up, so does SV."
vcarchidi.bsky.social
I think Zitron is probably right about that, but he seems to have much grander things in mind than just this one company not surviving, no?
vcarchidi.bsky.social
Would add a willingness to disagree and be criticized to this.
vcarchidi.bsky.social
A good take from Martin Peers.
vcarchidi.bsky.social
I could also see an argument that ties them both together - an intelligent system, free of our (human) deficiencies, would not hallucinate, could reliably apply the algorithm(s) responsible for its training data *because* it has none of our frailties, etc.
vcarchidi.bsky.social
A bit of an issue is that the leading voices in support of Neuro-Symbolic AI have often made the case for it in reference to "real" intelligence or whatever.

I could see plausible arguments for or against that, but it adds to the confusion - is it for "real" intelligence or greater applicability?
theophite.bsky.social
(4) is a LLM a dead end in terms of real artificial intelligence?

most of these arguments are made by people who think that neurosymbolic models are necessary to produce real intelligence -- essentially, a fixed ontology alongside statistical learning.
vcarchidi.bsky.social
Good piece to get your bearings on the bubble talk.

The gist is that economists/organizations are themselves debating this, and that AI risks are intersecting with other potentially destabilizing factors like political pressure on the US Federal Reserve.

www.ft.com/content/fe47...
IMF and BoE warn AI boom risks ‘abrupt’ stock market correction
Kristalina Georgieva and UK financial stability watchdog say valuations are closing in on dotcom bubble levels
www.ft.com
vcarchidi.bsky.social
Yeah in retrospect, I've done this myself in the past.

Very much an empirical question, but anecdotally, I've had someone tell me that GPT-4o not following their image generation instructions fully was the model exercising its "artistic liberty."
vcarchidi.bsky.social
The shift discussed from RL in a traditional ML context to Neuro-Symbolic is worth paying attention to...not confined to this research team.
vcarchidi.bsky.social
I've always assumed LLM-Modulo is most promising for verification on narrow-ish problems, but not quite as narrow as problems GOFAI was put toward. Boosting accuracy with the increased flexibility of LLMs as generators. Would follow the trend of successful N-S approaches being mostly specialized.
vcarchidi.bsky.social
Yeah this is a good point...I suppose then the question is whether there's any real baggage attached to that kind of talk. If pressed, do people still say it's thinking or do they default to just saying they don't know? Probably varies quite a bit.
vcarchidi.bsky.social
Academically, I think the Kambhampati-led ASU group has been notably level headed about this. (And I am partial to how he specifically tends to approach the study of language models, i.e. drawing first from compsci instead of searching for humanity in them.)

arxiv.org/abs/2504.09762
Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces!
Intermediate token generation (ITG), where a model produces output before the solution, has been proposed as a method to improve the performance of language models on reasoning tasks. These intermedia...
arxiv.org
vcarchidi.bsky.social
My opinion is that calling the intermediate tokens "Chains-of-Thought" has effectively turned reasoning models into a collective Rorschach test that nobody asked to take.
vcarchidi.bsky.social
Descriptively, I think things may go toward something along these lines, at least among the public. They "think," but not like us.

Though it also depends quite a bit on commercial dynamics. Would the general public say ChatGPT is "thinking" if that's not what the interface said? (I don't know).
timkellogg.me
my current position is, “LLMs can think” is a useful anthropomorphism as long as you remember that the mechanics for how they think are entirely different from our own with their own tradeoffs
vcarchidi.bsky.social
The 2024 US elections were supposed to be at risk of AI-generated misinformation leading to a crisis. We're about a year out from that, and the quality of the misinfo has only gotten better. Not saying it's not a problem (obviously is), but I think the problem is more about how to judge sources.
vcarchidi.bsky.social
I have no idea how this'll look a decade from now, but yeah, more realistic misinformation hasn't really led to the info apocalypse that was predicted. I think it's just leading to a more banal (still bad) situation where the internet is filled with more garbage than before.
vcarchidi.bsky.social
One thing I'd be happy to get input on: to be blunt, I find the urge to automate a person's life without first consulting them in some systematic way a little odd...especially if this leads to dismay that non-AI people don't love it. Clashes with virtuous tech posturing.
vcarchidi.bsky.social
Agree with the sentiment, but I do think a number of people (not OP) who say "I want robots to do my dishes so I can do art" don't actually want their manual chores automated.

And, if we're being charitable, they may not be wrong to worry about all household chores being automated. Time will tell.
sam.robotsfightingdinosaurs.com
i think it's a pretty clear repudiation of modern capitalism and capitalism as a whole that what people actually want robots to do is shit like laundry, taking the trash out, vacuuming, etc, and instead the market has forced robots to be The Future Of Everything. no, please just wash my dishes thx
sethdmichaels.bsky.social
AI HUCKSTER: "i built you a machine that reads books for you."

ME: "can you build me a machine that unloads the dishwasher for me so *i* can read a book?"
vcarchidi.bsky.social
I'd add that the US govt may react to a crash in what we might call non-traditional ways. Idk how that impacts everything else.
vcarchidi.bsky.social
I think there's ample room to argue that the tech is not useless but the correction would still not be quick.

My disclaimer is that I have no idea how it'll play out, but difficult for me to not see the risk of a painful period of correction.