Computational Cosmetologist
@dferrer.bsky.social
ML Scientist (derogatory), Ex-Cosmologist, Post Large Scale Structuralist

I worked on the Great Problems in AI: Useless Facts about Dark Energy, the difference between a bed and a sofa, and now facilitating bank-on-bank violence. Frequentists DNI.
The reveal at the end is definitely that the super-human AI has reprinted the King James Bible and the scientists are hiding it
December 10, 2025 at 5:37 AM
If you forced me at gunpoint to write the scientist character in a conservative Christian propaganda movie, this is the sort of thing I’d come up with.
December 10, 2025 at 5:27 AM
I mean even in this hypothetical, I could see rejecting the magic pills because you *don’t think they work*—but accepting they work and rejecting them anyway is squarely in villain territory
December 10, 2025 at 5:18 AM
Thank you for being a publication that lives up to its nominal ideals
December 10, 2025 at 5:11 AM
I don’t think you can coherently call yourself a liberal and oppose trans rights. Some do. They’re wrong.

I *do* think you can oppose them while identifying as an anti-capitalist. I will oppose you, but I won’t insist you’re using the wrong label.
December 10, 2025 at 5:11 AM
This sort of thing is why it’s important to abandon the language of politics as a 1D continuum.

“This group disagrees with me on some things, so they must be a more moderate form of the group that disagrees with me on more things,” is almost never true in real politics.
December 10, 2025 at 5:11 AM
An engineer once escalated a demand I re-enable code execution to management because they had been “using it” and now the model claimed it wasn’t available. No such faculty ever existed. They had just created a prompt that hallucinated running scripts in a sandbox.
December 9, 2025 at 4:37 PM
I guess I’ve just seen too many people get burned by this. Trusting a system you know cannot be trustworthy is how you end up treating the LLM like a magical system.

I’ve had engineers with vast webs of incantations they add to prompts complain that I “broke” them when there were no changes.
December 9, 2025 at 4:31 PM
My reconstruction of my own process will be a plausible but wrong hallucination, even if I understand how I think better than anyone else.

Asking LLMs to do this can catch gross errors, but the output needs to be treated with *heavy* skepticism. It’s not quite an anti-pattern, but I’d call it “prompt-smell”.
December 9, 2025 at 4:12 PM
I *do* have special knowledge for that question. If the answer is “I’ve always been bad at mental long division”, that’s an interesting insight I might be able to reconstruct.

But if the answer is “a cute girl walked by and smiled at me during step b” I probably won’t remember that.
December 9, 2025 at 4:12 PM
I would call this an exploratory technique for debugging, but don’t trust it at all.

Even if introspection is real (the evidence is controversial), it’s forward-looking, not retrospective.

It’s like asking “why did you miss this problem on this midterm a decade ago”.
December 9, 2025 at 4:12 PM
Yud’s writing is like psychedelics. Is my life better for it? Absolutely not. Am I more interesting for having the experience? Also no.

But I still feel like it was compelling for reasons hard to articulate.
December 9, 2025 at 2:12 AM
Tried explaining this to my wife and now I have to convince her I’m not having a manic episode
December 9, 2025 at 2:02 AM
Is this like The Ring where now I have to show it to 100 people or in 7 days I’ll become addicted to Zyn?
December 9, 2025 at 1:58 AM
“Now, allow me to describe in exquisite detail the subtle gradations that separate this from ‘bad’ pedophilia with these visual aids I keep handy”
December 9, 2025 at 1:43 AM
This is the hardest thing about image / vid gen (arguably maybe even text) right now. How do you get consistency across generations without training in a specific narrow style?

It would be great to have a separable embedding of composition and components and pass along what you want to keep.
December 9, 2025 at 12:00 AM
In this paper, I show that all the previous things in the field that *did* work are a special case of my framework that will definitely produce a new, better working thing.

Here, look at this result where I show my new thing is almost as good on a toy problem as the SotA from 2 years ago.
December 8, 2025 at 10:22 PM
But realistically, 95+% of the times people say this to me they think Knowledge has an obvious, non-controversial definition they simply choose not to articulate at the moment.

You know, the "obvious" one.
December 8, 2025 at 10:15 PM
I'm very open to going beyond Justified True Belief, but I feel like if a critic wants to say "LLMs don't have Knowledge" and *doesn't* mean Justified True Belief, it's 100% on them to provide the definition of Knowledge they do mean.
December 8, 2025 at 10:15 PM
And like all the rest of the SSM / recurrence / fast+slow-weights work, the claims are never decisively *better* than SotA transformer variants. Whole-context attention is obviously dumb—it feels like if any of these worked they’d clean up at long context.

I would love this time to be different though.
December 8, 2025 at 2:46 PM
I *want* this to work, but I’ve spent the last 8 years getting burned by memory / recurrence architectures. Even TITANS, for all that it looked cool, was underwhelming. The open effort to reproduce it has been a real struggle. For non-language domains I’ve seen little success at all.
December 8, 2025 at 2:46 PM
I’ve seldom seen someone who seemed to care about it sincerely.

My last big engagement on this was like 8 years ago, on a now-lost academic blog post where I railed against GANs, though, so maybe I’m out of the loop.
December 8, 2025 at 11:56 AM
At a practical level, I feel like I usually see it deployed as something like whataboutism. “On average, all search algorithms are the same, so it’s actually not worse to use [my garbage]”

Less often the reverse: “actually all search performs identically, so my hatred of your method is justified.”
December 8, 2025 at 11:56 AM