Eryk Salvaggio
@eryk.bsky.social
12K followers 4.6K following 2.9K posts
Situationist Cybernetics. Gates Scholar researching AI & the Humanities at the University of Cambridge. Tech Policy Press Writing Fellow. Researcher, AI Pedagogies, metaLab (at) Harvard University. Aim to be kind. cyberneticforests.com
eryk.bsky.social
Tech is full of philosophers who couldn’t get jobs in philosophy
Reposted by Eryk Salvaggio
simongroth.com
“Current architectures of LLMs cannot imagine, but they can sequence … For the same reason that a dog can go to church but a dog cannot be Catholic, an LLM can have a conversation but cannot participate in the conversation.”
eryk.bsky.social
I've been re-acquainting myself with NNs, transformers and LLMs this week, so these are early thoughts. But I am starting to think about language without the capacity to imagine language, i.e., an LLM unable to imagine itself participating in language. mail.cyberneticforests.com/what-machine...
What Machines Don't Know
Imagining Language Without Imagination It's important to acknowledge that Large Language Models are complex. There's an oversimplified binary in online chatter between the dismissive characterizatio...
mail.cyberneticforests.com
Reposted by Eryk Salvaggio
dryad.technology
What I'm listening to today: "Olson"

Boards of Canada arranged to play on a DEC PDP-1 from 1959. The PDP-1 doesn't have sound, but it does have front-panel light bulbs for debugging, so they rewired the light bulb lines into speakers to create 4 square wave channels

www.youtube.com/watch?v=wubk...
Boards of Canada "Olson" on a 1959 PDP-1 Computer
YouTube video by Joe Lynch
www.youtube.com
eryk.bsky.social
Recent posts re: Meta’s goals to build solipsistic networks that simulate social interaction through sycophantic LLMs to replace their dead mall vibe reminded me of this text, written in 2023, which tied the logics of social media analytics directly to logics of generative AI.
eryk.bsky.social
“Now AI promises to further constrain those relationships, to move us from a time when one could speak and hear from many to a time when one can speak only to ourselves: one-to-none communication, a throwback to the days of yelling at the TV, but now the TV can adjust.” (From 2023).
The Hypothetical Image — Cybernetic Forests.
The history of AI images as data analytics, and AI art as the aestheticization of Big Data’s politics.
www.cyberneticforests.com
Reposted by Eryk Salvaggio
abruzos.bsky.social
Excellent essay!

"To believe this to be true, one would have to imagine that all human speech is motivated entirely by grammar."

But it is not. Meaning depends on extra-linguistic context much more than we realize, so language is more than just words and syntax; it's an embodied practice.
Reposted by Eryk Salvaggio
benjaminjriley.bsky.social
"Human language is motivated by the articulation of thought; machine language is crafted through structure....As a result, the likelihood of finding new arrangements of words through an LLM is determined not by the capacity of AI reason, but to shuffle the expectations of a word's proper position."
eryk.bsky.social
A lot of people do, in fact, "pretend to be themselves," and this makes them more sympathetic to the idea that LLMs "think the way humans do." It's typical in younger people who lack experiences that shape their self-knowledge, so when we ask "which mind are we modeling?" that ought to factor in.
Reposted by Eryk Salvaggio
patmat.bsky.social
1/10 There are very abstract concepts that must be taken as truth in order to subjugate humans to technology and its owners. One of the most powerful is the idea that computation is consciousness and life is information, the core belief behind what philosophers call computational functionalism.
👇
Reposted by Eryk Salvaggio
valoisdubins.bsky.social
THIS

It’s also why I really find it difficult to give two shits about superhero CGI stunt work and can’t fathom why others do.
eryk.bsky.social
No shame, I’ve laughed at those TikTok videos of dogs crashing into wedding cakes, funny. But the joke — dog crashing into cake! — is actually not as funny when you know it’s staged, or prompted. That is a legitimate response to humor: context can make it unfunny after the fact of laughing.
eryk.bsky.social
“If it makes you laugh, who cares if it’s AI?” Well, I care, because my laughter and aesthetic pleasure are not how I determine whether a thing is ethical or good for society. I like how the sun bounces off of oil slicks in parking lots; that does not mean I want oil spills everywhere.
eryk.bsky.social
Lots of real slop making it through the social filters these days thanks to Sora 2. If you laugh out loud at a TikTok video, it’s probably AI. It’s fine to laugh, of course. But now you have to deal with the slop hangover.
Reposted by Eryk Salvaggio
techviews.bsky.social
"...some of the politically connected people in the charter cities movement, people receiving investment from Thiel and others, were very excited about the prospect of the U.S. taking over #Greenland."

www.politico.com/news/magazin...
‘The Democrats Still May Not Understand What They're Dealing With’
A Silicon Valley chronicler on the increasingly radical politics of Elon Musk, David Sacks and Mark Zuckerberg.
www.politico.com
Reposted by Eryk Salvaggio
danrosen.xyz
"This back-and-forth between builders and critics makes neither happy, but ultimately leads to compromises...The adoption of new technological forms does not erase the interest in technosocial purposes to which it might be directed."
eryk.bsky.social
I’ve just learned that John Searle, whose Chinese Room thought experiment is often used to challenge ideas of “understanding” in LLMs, died at age 93 on Sunday. www.theguardian.com/world/2025/o...
John Searle obituary
American philosopher whose Chinese Room thought experiment rebuts the idea that computers can think as humans do
www.theguardian.com
eryk.bsky.social
Sorry, what “position” are you referring to here?
Reposted by Eryk Salvaggio
eryk.bsky.social
The "you don't understand how (AI/LLMs/Diffusion Models/NNs) work" posture in any debate is often just "we don't agree on how to interpret what (AI/LLMs/Diffusion Models/NNs) are doing"