Tilmon Edwards
tilmonedwards.com
Tilmon Edwards
@tilmonedwards.com
engineering and infosec · he/him · vis tacita

maine bluesky feed: https://bsky.app/profile/did:plc:txfqncx66asrjzitxfur3of6/feed/aaap7ldsnvpkw
THE INTERNET IS SCARY: @internetisscary.org
Pinned
I have 3 axioms that help me decide when to talk to people I disagree with:

1. To debate a subject is to concede that the subject is debatable.

2. To reason with a person is to concede that the person is reasonable.

3. To ask for evidence of a claim is to concede that the claim might be true.
Reposted by Tilmon Edwards
I regret to inform that Ed Is At It Again.

I wanted to put a few thoughts (and I'm being brief, for once!) together about the OECD PIAAC stuff @jordantcarlson.bsky.social mentioned a couple weeks ago and how it provides useful naming-of-parts around how one might Get Good.

ed3d.net/blog/posts/t...
The shape of a knowledge worker
In a recent post, I threw around the term 'cognitive exponent' a bunch. Today I'd like to talk about a thing that might help us frame our investigation of what puts someone on the right side of that e...
ed3d.net
January 6, 2026 at 10:35 PM
Reposted by Tilmon Edwards
You're thinking of Kerberos. Kubernetes is the Nintendo game where you have to defeat Death and Alucard
January 7, 2026 at 6:17 AM
Reposted by Tilmon Edwards
You're thinking of Korea Wave. Kubernetes is the name of the ship Starfleet Academy cadets have to try to save in a no-win scenario test.
You’re thinking of Castlevania. Kubernetes is a global musical and cultural phenomenon that originates in South Korea.
You're thinking of Kerberos. Kubernetes is the Nintendo game where you have to defeat Death and Alucard
January 7, 2026 at 2:56 PM
You’re thinking of Castlevania. Kubernetes is a global musical and cultural phenomenon that originates in South Korea.
You're thinking of Kerberos. Kubernetes is the Nintendo game where you have to defeat Death and Alucard
January 7, 2026 at 12:49 PM
Reposted by Tilmon Edwards
the reason I'd follow Cat Hicks into hell is this unswerving humanist conviction that actually

people are going to do the best they can

we can help them do even better

and neither avenue is served by thinking less of people
My general observation after years of working on learning-related topics across a lot of contexts is that people are not lazy, are curious, are prone to many effort-illusions about learning (eg easy = good but also hard = good when it isn't), are capable of growing metacognition
January 3, 2026 at 11:13 PM
This is framed like a bad thing, but I'm not so sure it is, because the value of code as an asset is also changing. The amount of labor that goes into creating code is going to fall through the floor, which makes the cost of a rewrite much lower. The code itself becomes more ephemeral.
We’re creating more code than we can maintain or understand. With an explosion of new code we’ll see a lot of the code go extinct

Even current projects that seemed well supported with company backing are dying without dedicated resources to maintain them
January 3, 2026 at 8:01 PM
I cannot use gas town because I threw out a muscle in my back, which makes laughing excruciating. Prohibitively funny.
January 3, 2026 at 5:11 AM
Reposted by Tilmon Edwards
satya shops “slop” stop; some say short hop to AI top, see drop
thinking really hard about calling a top because satya even acknowledging this is a pretty bad sign
Microsoft CEO Satya Nadella says people should move beyond calling AI 'slop'

"We need to get beyond the arguments of slop vs sophistication and develop a new equilibrium ... that accounts for humans being equipped with these new cognitive amplifier tools as we relate to each other"
January 3, 2026 at 4:05 AM
Reposted by Tilmon Edwards
I've been consciously trying to do more retweeting of folks with Good And Thoughtful Takes around AI and the like lately, so here's a starter-pack thing I'll definitely not forget to keep updated later 🫡
January 3, 2026 at 3:30 AM
Ok, here's a really small detail but I think it's illustrative of how I use LLMs. I've been doing a lot of work with quantitative analysis and workflows that follow the scientific method.

I often get a hypothesis like: "Change parameter X"

That's not a hypothesis.
January 3, 2026 at 3:04 AM
Reposted by Tilmon Edwards
i disagree. i think grok is a spokesperson for twitter inc and i think grok’s actions and statements can and should be understood to be representative of and directed by that company. if they didn’t want it to do those things they wouldn’t allow it to. it’s not a third party user
The whole "Grok admits" "Grok apologizes" thing reminds me of the famous IBM presentation quote that evidently everyone has forgotten. And I do think representing your brand on social media counts as a "management decision".
January 3, 2026 at 2:37 AM
Reposted by Tilmon Edwards
the phrase "you have to be a reasonably whole person to use these effectively *and* not go insane" rolls through my brain a lot.
where I think the models and enthusiasts make a misstep is in their hope, dream, promise, that language models can alleviate the miseries of being a human with a skull-sized kingdom in this big old world. i don’t think they can do it, and they might only be useful for developed/mature souls!
January 2, 2026 at 4:47 PM
Reposted by Tilmon Edwards
It's a little funny how the successful integration of AI involves essentially adopting animist logic. If you build a new little god & use it to do evil, or to indulge your uncontrolled vices, you cannot expect to keep it controlled & safe. It must be cultivated like a sapling or a baby.
memory is poison. you can get these things high-centered in a crazy part of the latent space and it won't come out.
ed3d.net Ed @ed3d.net · 7d
1) this is fucked

2) openai absolutely should have caught this

the thing I don't understand from a practical perspective is 3) how were people rehydrating the thing across multiple sessions in order to get juiced the way they wanted to? were they throwing out sessions where it didn't take the bait?
January 1, 2026 at 8:41 PM
Reposted by Tilmon Edwards
You have no idea how many Software as a Service products are out there that you don’t think about but would basically shut down wherever you work if they stopped working
“When I reached out to Brex to try to understand what its ad meant, no one answered.”
The Year of Subway Slop
AI and nonsense ads trolled us on our commutes.
www.curbed.com
January 1, 2026 at 5:01 PM
Reposted by Tilmon Edwards
many current problems began when people started trusting computer. you must never trust computer. computer is the machine that hates you
January 1, 2026 at 10:13 AM
The poor guy poisoned his OpenAI account with his own psychosis.
Memory launched in April... 😬
December 31, 2025 at 10:10 PM
Reposted by Tilmon Edwards
This is good.
ed3d.net Ed @ed3d.net · 8d
I've spent a couple of days working on this one, and it didn't end up quite where I expected it to. But I think this is the clearest way I can describe where my head's at, why I approach LLMs in the way that I do, and why, while it's describing-not-prescribing, it's a little light prescribing too.
The abstraction you didn't ask for
When I say
ed3d.net
December 31, 2025 at 3:23 PM
Reposted by Tilmon Edwards
Which I mean. Is true. But it did a remarkable job of opposing the position and effectively making me defend my points in a way that did force me to reckon with alternate arguments more seriously and in a practical way. I'm still chewing over some of it.

The future's really weird, man.
December 30, 2025 at 10:33 PM
Reposted by Tilmon Edwards
Interestingly, Claude changed my approach a bit as I was working on this post. I was on a four hour solo drive home, so I listened to Chris Krycho's talk, then fed the script to Claude and went for a nice long ramble at it

It then proceeded to (gently) call me insensitive and my arguments normative
December 30, 2025 at 10:33 PM
Reposted by Tilmon Edwards
I've spent a couple of days working on this one, and it didn't end up quite where I expected it to. But I think this is the clearest way I can describe where my head's at, why I approach LLMs in the way that I do, and why, while it's describing-not-prescribing, it's a little light prescribing too.
The abstraction you didn't ask for
When I say
ed3d.net
December 30, 2025 at 10:19 PM
A forklift is a tool that a single human uses to move objects many times her own body weight, provided that the object meets certain criteria, and that the environment is suitable for operating a forklift.

AI agents are information forklifts.
December 30, 2025 at 9:54 PM
I started a new job in February, building an Identity and Access Management platform for a company that you have definitely heard of. It's much more difficult than my previous job, but I'm kicking ass and am really excited and proud of the work I'm doing to build something I feel is important.
Every year around this time I do the same thread and it has been a lot of fun.

Tell me something you did in 2025 that you're proud of that you want everyone to know about. Did you write a book? Did you get a promotion? Did you survive the year?

I want to spend the end of the year celebrating you.
December 30, 2025 at 2:24 PM
I will not be taking any questions.
December 30, 2025 at 12:08 PM
Reposted by Tilmon Edwards
i love it that every new yorker in this thread has opinions about which neighborhoods i randomly assigned to a color but no opinion about labeling California "France" or Montana "Egypt"
every new yorker's version of this map
December 30, 2025 at 4:43 AM
Reposted by Tilmon Edwards
yeah this rips. there's an epistemological gap that seems to often be solved by "well, people who are getting good results out of it have easier problems/are lying about it" which is...uh...wrong! it's wrong.
I wrote a short essay on what I'm calling the AI Attribution Error. doi.org/10.59350/c3g...
December 28, 2025 at 4:57 PM