Steve Steiner
@stevejsteiner.bsky.social
Love learning new stuff. Exploring emerging AI dev practice and tools. Enthusiast of 4E Cog Sci, Story structure, process ontology.
Pinned
I want to analyze novels, and pysbd didn’t split dialog the way I want it split, so I wrote my own splitter.

I kind of rabbit-holed on optimizing this until I could get better than 1000 MB/sec, including the disk reads and writes.
GitHub - KnowSeams/KnowSeams: Fast, accurate sentence detection for English narrative with proper dialog handling.
Fast, accurate sentence detection for English narrative with proper dialog handling. - KnowSeams/KnowSeams
github.com
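
For a sense of what that number measures: here is a minimal sketch of an end-to-end throughput benchmark, with the disk read and write inside the timed region as the post describes. split_sentences, corpus.txt, and sentences.txt are hypothetical placeholders, not KnowSeams’ actual API.

use std::fs;
use std::time::Instant;

// Hypothetical stand-in for a real splitter; actual splitters must handle
// abbreviations, quotes, and dialog, which this naive version does not.
fn split_sentences(text: &str) -> Vec<&str> {
    text.split_terminator(['.', '!', '?']).collect()
}

fn main() -> std::io::Result<()> {
    let start = Instant::now();

    // Time the disk read, the split, and the disk write together.
    let text = fs::read_to_string("corpus.txt")?;
    let sentences = split_sentences(&text);
    fs::write("sentences.txt", sentences.join("\n"))?;

    let secs = start.elapsed().as_secs_f64();
    let mb = text.len() as f64 / 1_000_000.0;
    println!("{} sentences, {:.0} MB/sec end-to-end", sentences.len(), mb / secs);
    Ok(())
}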
“Once you become obsessed with the problem of process, you either become a complex systems nut or a cybernetics nut, and no one listens to either.”

www.argmin.net/p/staging-in...
Staging Interventions
Actions are fundamentally different than predictions, but it's hard to write this distinction in math.
www.argmin.net
November 9, 2025 at 4:10 PM
Next bookclub book.
November 6, 2025 at 2:44 AM
Reposted by Steve Steiner
What happens when we model the detective archetype at scale? 🕵️‍♂️📚
Our new paper, accepted for #CHR2025, combines literary history and computational modeling to trace how the figure of the detective evolves across 150 years of French fiction.

arxiv.org/pdf/2511.00627
November 4, 2025 at 5:36 PM
I had ChatGPT analyze our favorite TV shows and suggest an ideal series.

With The Diplomat, Patriot, Lupin, Tokyo Vice, Killing Eve, Minx, you’re consistently drawn to:
“people with skills navigating systems — stylishly, emotionally, and intelligently.”

Now Watching:
The Bureau (TV series) - Wikipedia
en.wikipedia.org
November 3, 2025 at 5:03 AM
Claude just loves creating layer violations.

Fortunately it’s not allowed to automatically change .proj files, so I can just say “don’t make a layer violation” and it fixes it.
November 1, 2025 at 11:38 PM
LLMs’ poor storytelling seems, to me at least, complicated by a simple bias toward non-suspensive sentences.
October 27, 2025 at 2:04 PM
I gave Claude Code some ‘feedback’.

Claude Code: “Yes that’s ass backwards”
October 27, 2025 at 12:48 AM
Given we’ve tried and given up multiple times, ChatGPT just suggested Slow Horses again, but added that we should skip the first episode.
October 26, 2025 at 9:16 PM
Claude Sonnet 4.5 in GH Copilot just dropped a FUCK YES when it solved the problem. Hmm, ICL?

For no reason at all: arstechnica.com/science/2023...
Do better coders swear more, or does C just do that to good programmers?
For open source C code, curses mean quality, a recent bachelor’s thesis suggests.
arstechnica.com
October 22, 2025 at 4:24 AM
Love this wording: “sloptimized models”

From: www.interconnects.ai/p/latest-ope...
Latest open artifacts (#15): It’s Qwen's world and we get to live in it, on CAISI's report, & GPT-OSS update
After a quiet month, Qwen is back in full force.
www.interconnects.ai
October 18, 2025 at 9:53 PM
Reposted by Steve Steiner
𝑵𝒆𝒘 𝒃𝒍𝒐𝒈𝒑𝒐𝒔𝒕! A rundown of some cool papers I got to chat about at #COLM2025 and some scattered thoughts

saxon.me/blog/2025/co...
COLM 2025: 9 cool papers and some thoughts
Reflections on the 2025 COLM conference, and a discussion of 9 cool COLM papers on benchmarking and eval, personas, and improving models for better long-context performance and consistency.
saxon.me
October 17, 2025 at 5:24 AM
I hear ‘grow an AI’ being used because training it is not ‘engineering an AI.’

This is a bad metaphor in a different direction.

‘Brew an AI’ vs ‘Bake an AI’ provides a more informative pair of metaphors about what’s happening.
October 17, 2025 at 2:48 PM
Claude —

Ha! No, “truthical” is not a word - you’ve caught me in an inconsistency! I was being overly prescriptive about “dynamical” vs “dynamic”:

“Dynamical systems” is indeed the standard term in mathematics

But it’s not because there’s some ironclad rule that “-al” makes things more technical
October 13, 2025 at 7:50 PM
Sonnet 4.5's suggestion on fixing the error it made splitting one large file into 10 smaller files.

"let me propose a more efficient approach: git reset HEAD"
October 12, 2025 at 11:32 PM
At some point he needs to say "Smash that subscribe button"
youtu.be/K5w7VS2sxD0?...
Kevin Buzzard - Where is Mathematics Going? (September 24, 2025)
YouTube video by Simons Foundation
youtu.be
October 12, 2025 at 9:48 PM
I was curious …

These findings establish robust evidence that viable microbes (or at least their DNA / reproductive capacity) can exist in upper-atmospheric aerosols, a domain we might call the “aerobiome.”

It’s an open question whether these microbes merely survive or actively live at altitude.
October 10, 2025 at 11:13 PM
Not good. We’ve swapped to using the AeroPress while waiting for a replacement.
October 9, 2025 at 4:28 PM
Things going well, then "I'll make 10K tests."

Claude my friend - What happened to the thing we were supposed to be doing?

"Root Cause: Vocabulary Shift
Phase 1 uses proof vocabulary
Phase 2.3 uses testing vocabulary

The WU1.8 task document diverged from it during the Phase 2 detailed planning."
October 9, 2025 at 3:26 AM
mixed_script_confusables?

Made some Lean 4 proofs before asking Claude Code to build the implementation in Rust.

The Rust toolchain was not having this Greek-symbol-in-source-code malarkey:

let θ = test_thresholds();
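
A minimal repro sketch, assuming the complaint came from rustc’s uncommon_codepoints / mixed_script_confusables lints (the latter being the post’s suspect); test_thresholds is a hypothetical stand-in for the real function.

// Assumed reproduction, not the actual project code. Rust accepts
// non-ASCII identifiers, but rustc may warn via the `uncommon_codepoints`
// or `mixed_script_confusables` lints, depending on the codepoint,
// unless they are explicitly allowed at crate level.
#![allow(uncommon_codepoints)]
#![allow(mixed_script_confusables)]

// Hypothetical stand-in for the real threshold function.
fn test_thresholds() -> f64 {
    0.5
}

fn main() {
    let θ = test_thresholds(); // Greek identifier, now accepted quietly
    println!("θ = {}", θ);
}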
October 9, 2025 at 1:13 AM
I’m a fan of Joe Edelman. His Attentional Policies approach to eliciting a person’s values was interesting before LLMs, and is far more interesting now.
This new paper is a collaboration. The norms model in it also sounds interesting.
www.full-stack-alignment.ai/paper
October 8, 2025 at 9:50 PM
Reposted by Steve Steiner
🎙️ w/ @edelwax.bsky.social on “Full Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value.”

Pluralism as a core principle in social design, thick models of value, multipolar traps, moral graph elicitation, starting with membranes, Moloch-free zones, co-optation risks, & more.
EP 325 Joe Edelman on Full-Stack AI Alignment - The Jim Rutt Show
Jim talks with Joe Edelman about the Meaning Alignment Institute's paper "Full Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value."
www.jimruttshow.com
October 8, 2025 at 12:59 PM