Greg Detre
gregdetre.bsky.social
Building AI products that help us grow.
@pvh.ca I thought you might appreciate this, if you hadn't already seen it.
November 24, 2025 at 4:05 AM
Things are in a pretty buggy state (HTML import should work, but PDF is broken), and there are also problems with headings/table of contents, and probably lots of other areas.

I just started a new gig, but I'll try to get it into a working state over Christmas :~
November 16, 2025 at 3:52 PM
I had forgotten to flip the GitHub 'public' switch on the repository - now fixed!

github.com/spideryarn/r...

Thanks for the heads-up @martin.kleppmann.com!
November 16, 2025 at 3:51 PM
Might the opposite be true?
August 28, 2025 at 2:32 PM
In practice, it's a spectrum.

LinkedIn feels more and more like a game being played against a ranking algorithm, where the consumer is often an AI-commenter :~
August 25, 2025 at 5:57 AM
If you were to present these stimuli in a more disfluent way, does that help the LLMs avoid the mistake?

cf. Alter et al. (2007) pages.stern.nyu.edu/~aalter/intu...
August 19, 2025 at 5:22 PM
Love!
August 19, 2025 at 5:14 PM
An hour of AI-assisted programming might involve: exploring, hitting dead ends, reverting, shipping. Each feels significant & separate.

Our sense of time is heavily influenced by the number of distinct events we experience. So that productive hour might feel longer than a day of traditional coding.
August 6, 2025 at 3:26 PM
Classic programming has a 'variable reinforcement' quality - maybe this time when I hit 'compile', it'll work! As with slot machines, this unpredictable reward creates robust addictive behaviours.

But we get less of a dopamine hit watching the AI get the reward.
August 6, 2025 at 3:25 PM
When we watch the AI fail and iterate, we notice every mistake.

Baumeister et al., 1990: we use a 'perpetrator' narrative for our own mistakes (seeing them as isolated, comprehensible incidents) but a 'victim' narrative for the AI's failures (arbitrary, incomprehensible, with lasting implications).
August 6, 2025 at 3:24 PM