Maxine 💃🏼
@maxine.science
🔬 Looking at the brain’s “dark matter”
🤯 Studying how minds change
👩🏼‍💻 Building science tools

🦋 ♾️ 👓
🌐 maxine.science
You missed my favorite,

• Possibly also quantum field theory
November 27, 2025 at 3:39 AM
My equivalent is like “hey there’s this pattern for numpy that I’ve done a thousand times but it’s super idiosyncratic cause the library is bad, can you remind me what this is I never remember”.

I do get that from afar frontend seems like basically you make the entire job that. Which, I’m sorry.
November 26, 2025 at 8:56 PM
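(A concrete instance of the kind of idiosyncratic NumPy pattern described above, chosen here as a hypothetical illustration: accumulating into duplicate indices. Fancy-indexed `+=` is buffered and silently drops repeats, so `np.add.at` is needed.)

```python
import numpy as np

idx = np.array([0, 1, 1, 3])  # note index 1 appears twice

# Buffered fancy indexing: each index is incremented at most once,
# so the duplicate at index 1 is silently lost.
counts = np.zeros(5)
counts[idx] += 1
# counts -> [1., 1., 0., 1., 0.]

# Unbuffered in-place ufunc: every occurrence counts.
counts2 = np.zeros(5)
np.add.at(counts2, idx, 1)
# counts2 -> [1., 2., 0., 1., 0.]
```

This is exactly the sort of thing one re-looks-up forever: the two lines look equivalent but only one is correct for histogram-style accumulation.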
that makes sense, frontend can be pretty pastiche
November 26, 2025 at 8:55 PM
I admit this in the grand scheme of software is pretty niche. But it’s not within like real-real scientific data science / ML.
November 26, 2025 at 6:55 PM
Yeah, I for sure see that on a lot of those tasks you could basically make a slash-command to take it out haha.

My workload is like pretty niche and new algorithm / platform architecture stuff where accuracy of interpretation on outputs is critical and introspection is time-consuming to check. +
November 26, 2025 at 6:55 PM
I think I mostly agree! For what I do this hasn’t reached the point of being a net-positive long term for much of my toil—there are insidious negatives I think get underweighted by others.

It’s at ~90% semantic accuracy on documentation, which recently moved into net positive for me—which is fun!
November 26, 2025 at 6:44 PM
Similarly for the AI scientist camp:

If you think these current systems are good at science, your metric defining “good at science” has been drawn to make the current systems good, not to actually reflect what good science is.
November 26, 2025 at 6:02 PM
My central critique of the pro-AI coder camp is I think the incentives of builders in this field are for exceptionally short time-horizons that are artificially maintained. I think those chickens will come home to roost—for anything other than a demo, maintainability reigns supreme.
November 26, 2025 at 6:00 PM
I can get it to be helpful.

At the point it is helpful for anything besides a relatively simple one-off prototype, I may be able to get it to work, but the result isn’t maintainable. I may gain time in terms of lines of code per hour at first; I lose time if my objective function’s time horizon is far. +
November 26, 2025 at 6:00 PM
“Haha these crazy water people talking nonsense we don’t need to listen to what they say.”

—The exact dynamic the non-water camp accuses the water camp of engaging in, the other way ‘round.
November 26, 2025 at 5:55 PM
again: prior or posterior with respect to the linked content?

I’m all for talking stuff over but piling on repost dislike based on a collective a priori agreement of negative valence without discussing the actual argument as presented is to me the null discourse.
November 26, 2025 at 5:53 PM
To be transparent, pastiche is pretty rad for docstrings. For actual domain research code that’s more than a toy even Opus 4.5 in Claude Code I find to be absolutely ass.
November 26, 2025 at 4:09 PM
There isn’t an LLM frontend I haven’t used directly, from a four-message chat with GPT-4o to Sonnet 4.5 in Claude Code, and I am still—as someone who gets the internals and follows the research—convinced most of the subjective gains are pastiche.

Not convinced they’ll necessarily even see “6” post-bubble. +
November 26, 2025 at 4:09 PM
“In the end, they discovered the answer really was the small specialist models they thought it was at the start.

“But the real meaning of the story was the fat pile of capex NVIDIA gouged outta companies with artificial supply scarcity created by paying people to buy their own GPUs along the way.”
November 26, 2025 at 1:25 PM
(Not like an issue specific thing I think it’s nuanced, but this issue in particular I’ve seen really bad siloing on both ends without much nuance.)
November 26, 2025 at 3:47 AM
Do you have specific critique of the article’s contents, or is this just finding people with your same priors?
November 26, 2025 at 3:46 AM
glad to see more people coming around to views like mine.

took an impending bubble collapse but better late than never 🤪
November 26, 2025 at 3:27 AM
The result of the paper is simply to say that if you don’t mandate (2), you tautologically get (1) by solving the task just falling out of the representation theory.
November 25, 2025 at 11:43 PM
2. as an embodied form, needing that representation theory to be compressed into the material substrate available at the given evolutionary epoch from the various composable units available.
November 25, 2025 at 11:41 PM