Eryk Salvaggio
@eryk.bsky.social
Situationist Cybernetics. Gates Scholar researching AI’s impacts on the Humanities at the University of Cambridge. Tech Policy Press Writing Fellow. Researcher, AI Pedagogies, metaLab (at) Harvard University. Aim to be kind. cyberneticforests.com
Not sure where I glorify the artist. I’m more interested in creative hobbyists and the way the pressure to be productive has shifted the nature of intrinsic motivation to make stuff or practice things.
November 30, 2025 at 7:33 PM
I don’t mean to frame it as inauthentic, but Pfaller suggests it is a kind of disappearance of the “self that enjoys things,” because it facilitates a postponement of joy. But the fantasy of postponement is still true: buying a record is fun, even if you never listen to it.
November 30, 2025 at 7:30 PM
(5/5 — this is straying pretty far from the original concept, just acknowledging that, but the conclusions might be helpful to riff on anyway.)
November 30, 2025 at 6:48 PM
Thanks for the heads up on Attali! I am trying to think a lot about noise & AI at the moment.
November 30, 2025 at 6:46 PM
Think of kids who show up and try to convince you they’re already educated on the topic, instead of trying to show you they’re engaged and curious. They’re smart! But education is about improving on that, not demonstrating they don’t “need” to learn. [4/4]
November 30, 2025 at 6:36 PM
Students know education is supposed to bridge the gap they are seeing, but don’t seem to know the mechanism. Many assume they’re meant to exhibit their existing mastery (tests reinforce this) rather than the process of learning. [3/4]
November 30, 2025 at 6:36 PM
In my experience the reason students use Gen AI (aside from just cheating, which cheaters will do anyway) is because Gen AI output looks like what they know they are aiming for, but don’t know how to do themselves. But there it is! They “have it.” [2/4]
November 30, 2025 at 6:36 PM
That’s the cognitive offloading concept at its best. And cognitive offloading can absolutely be helpful in some cases (like bus schedules or GPS, etc). Just a matter of identifying when and where it is appropriate vs becoming a negative influence on things we want / need to remember.
November 30, 2025 at 3:51 PM
Thanks! I have to say, for me a model does not imply a complete representation of the thing being modeled. I kinda think even the best possible models will be a little shabby. So I don’t mean LLMs are successful models of language, just that language is what they are intended to represent.
November 30, 2025 at 3:47 PM