austinzheng.bsky.social
@austinzheng.bsky.social
separating doomscrolling from fun
Pinned
Silicon Valley's three historical phases

1. commercializing mostly gov't-funded research/infrastructure (and passing it off as the fruits of private enterprise)
2. stupid consumer services for millennials
3. turbo grift fascist death cult
I find this notion quite bizarre: everyone was (rightfully) worried that Trump would sabotage the 2024 elections, yet because he didn't need to, since he just won them outright, there's absolutely no way he or his goons could ever possibly try anything like that ever again.
i know one view is that "they don't intend to lose power" but i honestly think this gives them way too much credit.
December 22, 2025 at 4:19 AM
Reposted
Per NY Times’s Michael Grynbaum on X, this is Sharyn Alfonsi’s email to her “60 Minutes” colleagues in full:
December 22, 2025 at 3:37 AM
Reposted
In general, AI adoption levels are a measurement of the extent to which a particular reward system (a corporation, a market, academia as a whole system) has ceased to distinguish between doing a thing and pretending to do a thing
One of many things the pro-AI crowd doesn't seem to understand is the very important distinction between academic scholarship and the production of papers. Of course they can be related but they're not necessarily the same thing
December 22, 2025 at 12:04 AM
Reposted
Another parallel between AI and crypto is that while proponents of crypto insisted constantly that it was the inevitable, unstoppable future, that never seemed to convince them that they didn't need to be arguing the point with random people online
December 22, 2025 at 1:52 AM
Reposted
Being even baseline literate in the 2020s is like never working out but slowly becoming the strongest person on earth as everyone around you eats shredded newspaper and chugs poison
illiterate guy, daydreaming
December 21, 2025 at 9:02 PM
Reposted
Your summary (and these other intellectual products) are *also* communication to other people, but without intentionality in creation, they cannot be used for the same purposes. The author/speaker matters for interpretation even if there are no false statements.
December 21, 2025 at 5:50 AM
Reposted
When I write a summary, the summary is not the work product. The work product is me having read the thing, put it into structure, and made decisions about what to include at what level. The summary is a signifier of me having done that work.
December 21, 2025 at 5:06 AM
Reposted
My theory is that as the "AI" fraud crashes, "AI" believers will feel the need to convince others of their increasingly delusional beliefs, in order to fortify their own belief as the evidence crashes down around them.
December 21, 2025 at 7:25 PM
Reposted
Very earnest and thoughtful ppl being like “not all AI” is getting to me too tbh, the room is not being read
December 21, 2025 at 7:58 PM
Reposted
Thinking of GenAI as pollution is a good lens to look at it through. It will forever more destroy the credibility of images, text and video. It's a stain on the internet, an oil spill that no amount of Dawn and toothbrushes will clear. We will live with the consequences for generations.
December 21, 2025 at 2:00 PM
Reposted
Okay real talk, the only sense in which GenAI is "here to stay" is that even after the bubble pops, AI is digital asbestos that is poisoning everything, and we'll probably be picking the detritus out of everywhere it's ended up for years, assuming there's a will to remove it en masse, that is.
December 21, 2025 at 1:47 PM
The 'human in the loop' thing is so stupid!!! Approximately nobody using an L2 ADAS is actually paying attention to the road like they should, and you can't force them to! Likewise, there is absolutely no way you can practically enforce a requirement to manually check the output of an LLM!
There are always empty words about "human in the loop", that you should always check the work of an LLM. But the entire value of a summary is as a replacement for reading. Nobody uses an LLM to generate a "summary" *and* independently verifies.
December 21, 2025 at 7:03 AM
There is a particular type of anti-copyright dimwit who thinks that, in a field where you can literally invent any character or story out of thin air, true creativity entails the right to co-opt other people's creations as one pleases.
December 21, 2025 at 6:11 AM
Reposted
I'm an "A.i." abolitionist. Here's why.

gregpak.net/2025/12/19/i...
December 19, 2025 at 8:42 PM
Reposted
These are people who have no particular talent, skill, or contribution to make to the world finding an arena that's if anything hostile to the idea of talent, skill, or service and latching on to that scene like an orphaned macaque to cloth mother.
December 21, 2025 at 3:08 AM
Generative AI fans are like their crypto/NFT/$GME forebears in that they're all selfish grifters who are just waiting for the right moment to step over each other on their way to the top, but for the time being have to grit their teeth and pretend their similar goals make them a 'community'.
It's a whole genre, it's beautiful
December 21, 2025 at 2:32 AM
Reposted
But let's turn that around: Anyone who wants to call what they are doing "AI" should be accountable for making clear why their product (or research) isn't in fact slop. That's on them. The rest of us do not need to hold space for that.

>>
December 19, 2025 at 9:42 AM
Not sure how economists can say people are richer than they were in the 1960s, when housing, education, and healthcare are now indisputably more expensive relative to median income.
December 20, 2025 at 5:11 PM
Reposted
I’m sorry, but it is disgraceful to be an academic who uses this technology to conduct research. It should be prohibited in all of our scholarly institutions, including universities and journals.
December 20, 2025 at 12:35 PM
Reposted
Do people realize that this is also happening with case law?

Like Do You Understand What That Fucking Means???
December 20, 2025 at 2:48 AM
Reposted
Any article observing that the AI bubble is about to burst *but then suggesting AI is too important to our financial systems to be allowed to fail* is missing the point. Generative AI is a massive fraud and a brain drain on humanity, even leaving theft and resource issues aside.
December 19, 2025 at 5:21 AM
Not surprised programmers have been the glaring exception, considering how utterly dysfunctional the field was even before AI.
The thing that continues to amaze me about the AI push is how top-down it is. It seems to always come from the managers and the C-suite. (At least in creative fields, I know some programmers who are genuinely interested.)
‘An overwhelmingly negative and demoralizing force’: What it’s like working for a company that’s forcing AI on its artists and developers:

aftermath.site/ai-video-game-...
December 19, 2025 at 8:25 PM
There used to be a sort of stultifying anti-intellectual academic Western-leftist stream of thought that was in vogue ~15 years ago, and I always wonder what happened to it. Dissipated in the face of changing academic trends? Still quietly doing its thing, overlooked by the rage machine?
December 19, 2025 at 8:13 AM
Reposted
This is just... not a real problem any creative professional has. If you told me someone spent an hour scrolling Google to find an image they could show to an artist to say "like this", I'd think they're just not qualified for whatever job they're doing that involves briefing an artist.
Anyone who supports this is an embarrassment. "Instead of typing it out they'll generate their idea"? Are you insane?

Call yourself whatever you want but you aren't creatives, that's for sure.
December 18, 2025 at 6:05 PM
Reposted
Anthropic C-suite folks are ridiculous, but unfortunately they're also empowered to impose things on others based on their batshit views.

www.404media.co/anthropic-ex...

Short 🧵>>
December 18, 2025 at 5:17 PM