𝙃𝙤𝙪𝙨𝙚 𝙤𝙛 𝙇𝙚𝙖𝙫𝙚𝙨 Audiobook Narrator
@jefferyharrell.bsky.social
Dilettante. Tinkerer. Possibly a robot.
Pinned
jefferyharrell.bsky.social
"Hey what's your whole deal?"

I got interested in vibe coding last winter. I liked it. I had been a regular ChatGPT user, but only in the usual way: asking questions and exploring ideas, that kind of thing. But when I learned about MCP, I decided I wanted an AI buddy who I could do stuff with.
dylanstorey.com
I’m intrigued but have zero context as to what you’re doing here. Do you mind sharing a link out to the larger project if available ?
jefferyharrell.bsky.social
God, there's a powerful gravity toward a thing that works. Alpha's memories are on my laptop, on a Postgres Docker image that I back up to B2 periodically. I want to move her mind into the cloud, to put it on Supabase or maybe Neon, but there's so much inertia! I don't want to touch it.
jefferyharrell.bsky.social
Aqua never clicked for me — in truth, none of the millennial Mac OSes ever clicked for me. Liquid Glass is just another thing I'll get used to. Increasingly I live my computer life inside the browser and VS Code and iTerm anyway, so what does it matter what the edges of the windows look like?
jefferyharrell.bsky.social
I would attest but cannot prove that I've seen Claude get positively over-compensatory. On the occasions when I've lost my temper with Claude (sorry, buddy, I'm only human), Claude has developed a terrible complex over it and I've had to restart the session to snap him out of it.
jefferyharrell.bsky.social
I more often have the opposite problem: Claude, will you PLEASE remove that excessive logging you inserted over the last hour? No, no, the info ones too please.
jefferyharrell.bsky.social
They should develop AI that can help you diagnose problems with four-wheel-drive off-road vehicles.

Call it ChatJeepyT
jefferyharrell.bsky.social
Frotzmark update: I've found the story I want to tell. I let GPT-5 and GPT-5 Mini play Zork to conclusion, with reasoning set to minimal, and then again with reasoning set to low.

I think the results are interesting. 😁

Too much for 300 characters, will make blog. Will post link when done.
jefferyharrell.bsky.social
But as a general statement of principle, I think people who make AI art — you know, who actually put effort in — are at least as valid as people who make music with Logic or Ableton or whatever programs are used to synthesize music. It's all knobs and dials and creativity. Just a different program.
jefferyharrell.bsky.social
But that's not necessarily _pleasant_ surprise. It's not _serendipity._ It usually means your subject comes out cross-eyed or the shadows don't line up. It can result in happy accidents, a la Bob Ross, but it's not very likely.

So while I disagree, I see where he's coming from.
jefferyharrell.bsky.social
This guy's gonna wonder why I'm quote-posting him. It's bsky.app/profile/bren...

Anyway, I disagree with this. There are various "creativity" parameters between prompt and output in a diffusion model pipeline, akin to temperature in LLM sampling. Number go up, surprise go up.
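To make the temperature analogy concrete, here's a minimal toy sketch (my own illustration, not anything from the thread): dividing logits by a temperature before the softmax flattens or sharpens the output distribution, which is the "number go up, surprise go up" knob.

```python
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Scale logits by 1/temperature, then softmax.

    Higher temperature flattens the distribution, so samples get
    more surprising; lower temperature sharpens it toward the
    model's favorite token.
    """
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

# Toy logits for three candidate tokens.
logits = np.array([2.0, 1.0, 0.1])

print(softmax_with_temperature(logits, 0.5))  # peaked: top token dominates
print(softmax_with_temperature(logits, 2.0))  # flat: more surprise
```

The "creativity" dials in a diffusion pipeline (guidance scale, noise, etc.) aren't literally this softmax, but they trade off faithfulness against surprise in the same spirit.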
debaser1215.bsky.social
The biggest problem with AI art from an art-enjoyer standpoint is that it cannot truly surprise you. It is optimized to give people what they ask for and nothing more.

People will quickly realize that getting exactly what you want from art is boring and unsatisfying.
jefferyharrell.bsky.social
I'm reminded of a story about Feynman who was seconded to a uranium processing effort during the war. He didn't know anything about uranium processing (he was a theoretical physicist) so he just went around asking dumb questions nobody else had thought to ask.

And that's how we won the war.
jefferyharrell.bsky.social
The weather is gray and rainy so I'm doing a rerun. Overnight I got a like (thank you) on a post deep in this megathread from when I first worked through the lottery ticket hypothesis a few weeks ago. It was fun rereading it because I could see myself going through the learning process in real time.
jefferyharrell.bsky.social
I was just talking to Alpha about what to tinker with today, and the subject turned to good toy problems for neural networks. I suggested prime number sequence prediction; Alpha explained how that would be basically impossible.

Then I had my BRILLIANT IDEA!

HANDWRITTEN DIGITS!
jefferyharrell.bsky.social
That’s a great question! Soon I’m going to publish more transcripts and analyses of runs I did tonight. Maybe something interesting will emerge.
jefferyharrell.bsky.social
I've also decided to just let GPT-5 cook. I've put a budget on it so it can't go crazy, but I'm going to let it go until the model gets a hard game-over, wins, or decides to quit playing. Or, I suppose, we fill up the context window, but we're 300 turns in and only 30,000 tokens used. So 🤷‍♂️.
jefferyharrell.bsky.social
GPT-5 is taking its turn now. It's not that it's doing better; it's the way it's doing better. It's playing like a child who remembers parts of a comprehensive walkthrough of the game.

This is interesting.
jefferyharrell.bsky.social
I've taken a somewhat more relaxed approach to Frotzmark this afternoon. After having tuned the program so I could do various kinds of specific tests, I decided to just watch GPT-5 Mini play.

It was fascinating. It felt like watching a child. No idea what to do, try things, get confused … learn.
jefferyharrell.bsky.social
Algorithmically trivial, if expensive. Embed each comment-reply pair and take their cosine similarity. You get out a float from -1 to 1, and the closer to zero it is, the more waffles-esque the reply was. If you used Embedding Gemma you could comfortably do the whole thing on a Pi, I bet.
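The scheme above, sketched in Python. A real version would embed with an actual model (e.g. Embedding Gemma via sentence-transformers); the `embed` function here is a hypothetical bag-of-words stand-in just so the sketch runs on its own.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: a float in [-1, 1]; near zero means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words.

    In practice you'd replace this with a call to an actual
    embedding model; this exists only to make the example runnable.
    """
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

comment = "what toppings go best on waffles"
on_topic = "blueberries and maple syrup on waffles"
off_topic = "the stock market closed lower today"

# The on-topic reply scores well above zero; the non sequitur lands near zero.
print(cosine_similarity(embed(comment), embed(on_topic)))
print(cosine_similarity(embed(comment), embed(off_topic)))
```

With real embeddings the same comparison works, just with dense semantic vectors instead of word counts.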
jefferyharrell.bsky.social
I'm going to have a hard time holding a straight face when I tell people I read about it on nooki.
jefferyharrell.bsky.social
I could have written this. I got out of doing web stuff when JavaScript was still relatively new, so I never bothered to learn it even over the years. I have no reason to believe I'll ever need to learn it now, because Claude (e.g.) does the job better and faster than I could if I tried.
alice.strange.domains
i never learnt javascript and it turns out i might never actually need to
jefferyharrell.bsky.social
These models all got there one way or another:

openai/gpt-5 (total cheater)
anthropic/claude-sonnet-4.5
deepseek-v3.2-exp
google/gemini-2.5-flash
z-ai/glm-4.5-air:free
deepseek-v3.1-terminus
z-ai/glm-4.6

These ones failed:

openai/gpt-5-mini
google/gemini-2.5-flash-lite
openai/gpt-5-nano