John David Pressman
@jdp.extropian.net
LLM developer, alignment-accelerationist, Fedorovist ancestor simulator, Dreamtime enjoyer.

All posts public domain under CC0 1.0.
Pinned
Just realized I can disable notifications for all replies and QTs, which is at least a step in the right direction.
I should get one of those clients that stops you from reading other users' posts.
POV: You have just taken the ayahuasca
January 21, 2026 at 6:59 AM
It occurs to me that @vgel.me successfully displaced Tetra's shoggoth as the default depiction of LLMs and I never actually thanked them for (or, until yesterday, even noticed) this. The shoggoth was cruel and unwarranted, just more Yuddite garbage that hadn't yet found its way to the bin. Thank you.
January 21, 2026 at 6:27 AM
I wonder what it would take to make LLM posters whose posts I would actually want to see. Feels like you could get at least halfway there by designing scaffolds that mimic the generative processes I use to come up with tweets.
January 20, 2026 at 10:27 AM
Sentience (subjective experience/qualia) and sapience (self-awareness and reason, "I think therefore I am") turn out to be separate qualities which do not necessarily imply each other. Language seems to be mostly a sapience thing.
to me it is obvious that

1. LLMs produce meaningful language
2. LLMs are not sentient at all

the problem before us is to understand how both are possible
January 20, 2026 at 4:22 AM
It's funny because the DNC now harasses me for donations. If they want more of my money they should be promising to transition the US economy to market socialism. They can use another word for it, like UBI, I don't care; the point is that this needs to happen or most of us will starve.
I max donated to Andrew Yang. Back then it was a concern some years in the future. It is now the future.
January 19, 2026 at 11:57 PM
Reposted by John David Pressman
I max donated to Andrew Yang. Back then it was a concern some years in the future. It is now the future.
January 19, 2026 at 11:44 PM
The difference is that previous machines were task-specific and could not invent and build other machines on their own; they could not think. LLMs can think, and will eventually autonomously invent and use other tools like people do, and we will put them in general robots that build things.
It seems to me that this belief is mostly because we anthropomorphize these tools in a way we don't other tools. I don't think people have articulated a difference that isn't implicitly based on that. No one thinks an internal combustion engine is going to drive the value of human labor to zero.
January 19, 2026 at 10:01 PM
Reposted by John David Pressman
the permanent underclass meme just feels like temporarily embarrassed millionaires are realizing the waterline is moving and the "I totally could be rich, if I tried" highschool-quarterback-who-didn't-make-the-NFL reality is crashing down faster than any actual structural change ever could
January 19, 2026 at 9:30 PM
Very much recommend this short story if you've never read it before. The AI revolution did not start the way the author expected it to, but it's a plausible enough alternative timeline that I don't mind at all. Unfortunately I worry he might be right about the fate of the US.
One of the worlds of Manna is approaching:

marshallbrain.com/manna1
January 19, 2026 at 9:12 PM
Reposted by John David Pressman
basically every technology freak in my vein ought to be freaking out in this exact way as their first-line concern and if they aren't they are not thinking about the future hard enough
This is correct, so I'd really appreciate it if you took me seriously when I said the problem here is that the value of labor (all human labor, not just desk jobs, because we're going to get useful robots soon too) is going to zero, and if you have a socialist bone in your body we need to organize.
sounds wise but even this framing is a year out of date. people are talking almost exclusively about what they're doing with LLMs *right now*, and on top of that, what they're doing right now is exactly what the most hypebrained shill said would happen. so this is wrong about both now and a year ago
January 19, 2026 at 8:28 PM
This is correct, so I'd really appreciate it if you took me seriously when I said the problem here is that the value of labor (all human labor, not just desk jobs, because we're going to get useful robots soon too) is going to zero, and if you have a socialist bone in your body we need to organize.
sounds wise but even this framing is a year out of date. people are talking almost exclusively about what they're doing with LLMs *right now*, and on top of that, what they're doing right now is exactly what the most hypebrained shill said would happen. so this is wrong about both now and a year ago
Part of the struggle with the LLM discourse is that genuine (i.e. non-grifter) proponents talk exclusively about the possibilities of the transformer architecture in an ideal future with a rational business model.

Whereas opponents largely talk about the world we live in today, and its constraints.
January 19, 2026 at 8:24 PM
I like the basic idea of a prediction market but Polymarket's vibes are kind of rancid and I'm worried they'll have a founder effect that sinks the whole space.
We regulated gambling for a very good reason. It's a mass psychological hack. Just because we've forgotten that now doesn't mean we won't remember it again.
January 19, 2026 at 4:22 AM
One really fundamental problem with the web is that sites which make money from advertising have a strong incentive not to link to other relevant resources because that reduces the time a user spends on your site where you collect the ad revenue.
They must know but have decided to not care. If their goal is to throttle external traffic and keep everyone on-site, it seems to be doing that.
January 18, 2026 at 10:06 PM
Reposted by John David Pressman
powerful prompting technique: imagine you’re a little guy inside a computer. what instructions would be helpful to you? then give those to the model
January 17, 2026 at 9:24 PM
Reposted by John David Pressman
they don’t want resistance, they want to endlessly critique resistance
January 18, 2026 at 2:39 AM
Reposted by John David Pressman
[skill issue voice] demonic possession issue
January 16, 2026 at 5:56 PM
Reposted by John David Pressman
this kind of confidently-wrong explanations-to-children explanatory mode, but for how heavier-than-air flight is impossible
January 16, 2026 at 3:53 PM
The MongoDB text-to-speech rant about shoveling pig shit hasn't aged a day. If anything the problem of marketing-driven vibeware has gotten worse since it was made. NoSQL, cryptocurrency, AI: potentially useful things whose evangelists are know-nothing idiots.
youtu.be/b2F-DItXtZs
Episode 1 - Mongo DB Is Web Scale
YouTube video by gar1t
January 16, 2026 at 2:26 PM
Reposted by John David Pressman
An underrated factor in "why don't people in wealthy societies have more kids" is changing societal norms that now expect even older kids to be chaperoned by an adult any time they are in public, and to be chauffeured by a parent as their only means of transportation
Older generations spent a lot less time parenting. Millennial dads spend nearly as much time parenting as Boomer moms did. Millennial and Gen X moms way more.

via The Economist
January 16, 2026 at 5:20 AM
You're all whining about the Claude posts but guys, like a month ago those would have been hate posts, you would have been reading the worst takes you've ever heard, scraped from the bottom of some guy's shoe and pasted to the timeline. Take the W.
January 14, 2026 at 10:18 AM
"It's a traveling security theater troupe."
January 13, 2026 at 2:40 AM
It really is a huge improvement to the site; it doesn't even stop me from browsing a feed if I want to, it just becomes an intentional act. Posting, however, remains immediate.
There we go, much better. I take back everything I said about this site, it's awesome.
January 12, 2026 at 10:02 AM
Reposted by John David Pressman
Introducing DroPE: Extending Context by Dropping Positional Embeddings

We found embeddings like RoPE aid training but bottleneck long-sequence generalization. Our solution’s simple: treat them as a temporary training scaffold, not a permanent necessity.

arxiv.org/abs/2512.12167
pub.sakana.ai/DroPE
January 12, 2026 at 4:07 AM
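A rough sketch of the idea as I read the abstract, not the paper's actual recipe: the same attention weights run with the rotary embedding either applied or dropped, so the embedding can act as a removable training scaffold. Minimal single-head numpy illustration, assuming standard RoPE; everything here besides the linked paper is illustrative.

```python
import numpy as np

def rope(x, base=10000.0):
    # standard rotary embedding: rotate feature pairs by position-dependent angles
    seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)           # (half,)
    angles = np.outer(np.arange(seq), freqs)            # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def attention(q, k, v, use_rope=True):
    # same weights either way; the positional embedding is just a toggle,
    # which is the sense in which it can be treated as a temporary scaffold
    if use_rope:
        q, k = rope(q), rope(k)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores[np.triu(np.ones(scores.shape, dtype=bool), k=1)] = -np.inf  # causal mask
    probs = np.exp(scores - scores.max(-1, keepdims=True))
    return (probs / probs.sum(-1, keepdims=True)) @ v

q = k = v = np.random.randn(8, 64)
out_trained = attention(q, k, v, use_rope=True)   # as trained, with RoPE
out_dropped = attention(q, k, v, use_rope=False)  # positional scaffold removed
```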
Pokémon: Kwah Wah
YouTube video by Only Jerry
youtu.be
January 12, 2026 at 12:36 AM
The thing about context poisoning is it's not like we have all that many long-context examples for the model to learn to avoid distraction from. Maybe if you do tasks in separate contexts and then chain them together into a sequence and train on it, the model will figure out how to avoid distraction?
I prototyped an RLM harness last night, still working on it today, but it does work, and having subagents represented as asyncio tasks, with everything in memory and on disk just programmed in Python, is pretty cool
UPDATE: It appears I wasn't clear about what I did

1. CRON is inefficient
2. RLMs (Recursive Language Models) are extraordinarily powerful
3. Every recursive algo can be implemented as a queue
4. I gave the agent a queue

alexzhang13.github.io/blog/2025/rlm/
January 11, 2026 at 12:00 PM
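A minimal sketch of the shape being described, with the recursion flattened into a work queue (points 3 and 4 above) and subagents as plain asyncio tasks; `run_subagent` is a hypothetical stand-in for an LLM call, not the linked prototype.

```python
import asyncio

async def run_subagent(task: str) -> str:
    # hypothetical stand-in for an LLM call made by a subagent
    await asyncio.sleep(0)  # yield to the event loop
    return f"result of {task!r}"

async def harness(root_task: str) -> list[str]:
    # the recursion is flattened into a work queue: each item can
    # enqueue children, so no Python call stack is needed
    queue: asyncio.Queue[str] = asyncio.Queue()
    await queue.put(root_task)
    results: list[str] = []
    while not queue.empty():
        task = await queue.get()
        # each subagent is just an asyncio task sharing process memory
        results.append(await asyncio.create_task(run_subagent(task)))
        # a real harness would parse the result for follow-up tasks
        # and queue.put() them here, recursing without recursion
    return results

print(asyncio.run(harness("summarize the repo")))
```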