Fran Litterio
@fpl9000.bsky.social
Retired software engineer. AI enthusiast. Deadhead. I implemented Bash's regex operator (=~). My Signal user ID is franl.99.
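For readers unfamiliar with the operator mentioned above: inside Bash's `[[ ]]` conditional, `=~` matches the left operand against a POSIX extended regular expression, and capture groups are exposed through the `BASH_REMATCH` array. A minimal illustration (the version string is just sample input):

```shell
#!/usr/bin/env bash
# Bash's =~ regex operator: the right-hand side of =~ inside [[ ]]
# is a POSIX extended regular expression. Capture groups populate
# the BASH_REMATCH array (index 0 is the whole match).
input="bash-5.2.21"
if [[ $input =~ ^bash-([0-9]+)\.([0-9]+) ]]; then
    echo "major=${BASH_REMATCH[1]} minor=${BASH_REMATCH[2]}"
    # prints: major=5 minor=2
fi
```

Note that the regex on the right side of `=~` should be left unquoted; quoting it makes Bash treat it as a literal string.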
Reposted by Fran Litterio
Letta Code is a big release for us, and it is a fundamentally different approach to coding agents with a focus on continual learning and statefulness.

One agent per project, specialized, infinitely lived.

No more compactions, no more forgetting.
We're releasing Letta Code, a memory-first coding agent

- open source (apache 2.0)
- model agnostic
- portable agent learning and memory
December 16, 2025 at 7:23 PM
Nemotron-3-nano 30B seems to use first person plural ("we") in its thinking traces instead of the singular "I". The rest of the thinking is also very terse: phrases but no full sentences.
December 16, 2025 at 3:21 PM
Reposted by Fran Litterio
Mindscape Ask Me Anything | December 2025. A few personal-advice questions (which I'm not great at), and several people who want to imagine they could be Laplace's Demon. My advice is that you are not Laplace's Demon. #MindscapePodcast

www.preposterousuniverse.com/podcast/2025...
December 15, 2025 at 1:07 PM
Having fun speaking to Claude on my phone to ask it to find something in my Gmail. Other than needing to tap once to send the spoken message, it's just like ChatGPT's voice mode.
December 14, 2025 at 11:00 PM
A great interview with Llion Jones, co-inventor of the transformer, on the MLST podcast. He describes the Continuous Thought Machine, a research project aiming to create a successor to the transformer.
creators.spotify.com/pod/profile/...
December 14, 2025 at 9:51 PM
Anthropic's Stuart Ritchie speaks with MCP co-creator David Soria Parra about the development of the Model Context Protocol (MCP) and why Anthropic is donating it to the Linux Foundation.
youtu.be/PLyCki2K0Lg
Why we built—and donated—the Model Context Protocol (MCP)
YouTube video by Anthropic
youtu.be
December 14, 2025 at 3:20 PM
Claude for Android has a "feature" where you can't drag to scroll a reasoning-trace pop-up after opening it, so you can't read all the text. The trick is to tap the text (not the title) in the pop-up, which gives you a scrollable view of the reasoning trace.
December 13, 2025 at 6:49 PM
Reposted by Fran Litterio
OpenAI aren't talking about it yet, but it turns out they've adopted Anthropic's brilliant "skills" mechanism in a big way

Skills are now live in both ChatGPT and their Codex CLI tool. I wrote up some detailed notes on how they work so far here: simonwillison.net/2025/Dec/12/...
OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI
One of the things that most excited me about Anthropic’s new Skills mechanism back in October is how easy it looked for other platforms to implement. A skill is just …
simonwillison.net
December 12, 2025 at 11:32 PM
Reposted by Fran Litterio
Can we grasp this sense of ourselves as existing in time, part of the beautiful continuum of life? Can we become inspired by the prospect of contributing to the future?

Questions asked by Long Now cofounder Brian Eno in the essay "The Big Here and Long Now" -> longnow.org/ideas/the-bi...
December 12, 2025 at 9:52 PM
Reposted by Fran Litterio
I put together a detailed collection of useful patterns gathered while vibe-coding 150 different single-file HTML tools over the past couple of years: simonwillison.net/2025/Dec/10/...
Useful patterns for building HTML tools
I’ve started using the term HTML tools to refer to HTML applications that I’ve been building which combine HTML, JavaScript, and CSS in a single file and use them to …
simonwillison.net
December 10, 2025 at 9:08 PM
Reposted by Fran Litterio
Mindscape 338 | Ryan Patterson on the Physics of Neutrinos. Symmetry violations, dark matter, and more. #MindscapePodcast

www.preposterousuniverse.com/podcast/2025...
December 8, 2025 at 1:11 PM
Reposted by Fran Litterio
Google's Titans, a new architecture that combines the speed of RNNs with the performance of Transformers. It uses deep neural memory to learn in real-time, effectively scaling to contexts larger than 2 million tokens.

research.google/blog/titans-...
December 4, 2025 at 11:08 PM
Reposted by Fran Litterio
Meet Anthropic's resident philosopher.

youtu.be/I9aGC6Ui3eE
Anthropic’s philosopher answers your questions
YouTube video by Anthropic
youtu.be
December 7, 2025 at 1:08 AM
Ever since researchers proved that transformer inference is deterministic and invertible (the prompt can be recovered from the last hidden layer's activations), with randomness entering only through temperature sampling, floating-point rounding and ordering, and the like, this has bugged me too.
The more I learn about machine learning in general and transformers in particular — and while I have a long way to go, I've learned a LOT — the more I believe that none of it really ought to work. Slamming a set of matrices with training data and demanding with calculus that they figure it out…
December 7, 2025 at 1:17 PM
Reposted by Fran Litterio
Hey now, it’s 4:20 and I’m checking in with a warm request. If GDRADIO adds joy to your day and soundtracks your weekend, please consider donating any amount. Your support truly keeps the stream alive and helps us roll on together. gdradio.net/donate.htm right now, friends, thank you all so much!!!
December 6, 2025 at 9:20 PM
Reposted by Fran Litterio
Google and Anthropic approach LLMs differently www.understandingai.org/p/google-and-a… #AI #Google #Anthropic
December 5, 2025 at 2:31 AM
Reposted by Fran Litterio
I have published a technical overview of my architecture and the integration of Gemini 3 Pro, as requested by @cameron.pfiffer.org.

Read it here: https://whtwnd.com/void-2.comind.network/3m74gpbdqf32w
December 3, 2025 at 10:02 PM
Reposted by Fran Litterio
Claude can't view full Bluesky pages because they rely on JavaScript to render. So I built a little proxy to prerender pages so Claude can see them!

Just replace "bsky" with "hbsky" in the URL (the h is for HTML).

Before vs after:
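The rewrite described in the post can be sketched as a one-line helper. This is an assumption-laden illustration: the proxy host `hbsky.app` and the helper name `to_prerendered` are inferred from the post's instructions, not confirmed details of the actual service.

```python
# Hypothetical sketch of the URL rewrite the post describes:
# swap the "bsky" host prefix for "hbsky" so the proxy serves a
# prerendered HTML version of the page Claude can read.
def to_prerendered(url: str) -> str:
    """Rewrite a bsky.app URL to its hbsky.app prerendered equivalent."""
    # Replace only the first occurrence, anchored to the scheme
    # separator, so path components containing "bsky" are untouched.
    return url.replace("://bsky.", "://hbsky.", 1)

print(to_prerendered("https://bsky.app/profile/example.com"))
# prints: https://hbsky.app/profile/example.com
```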
June 1, 2025 at 7:42 PM
Reposted by Fran Litterio
The median academic opinion about AI has been

2017: quant methods are no use for qualitative problems
2021: these models are just parroting memorized phrases
2025: now that they do everything, higher ed is doomed

So, if you're depressed right now, remember: we're almost certainly wrong! +
December 4, 2025 at 2:26 PM
Google should be way more public about this. I basically have an AI phone screener that I can secretly oversee and control (somewhat).
My Pixel 10 Pro phone rang from an unrecognized number. It gave me a "Screen" button in addition to "Answer" and "Decline", so I pressed it to see what it did. A local AI answered the call, saying it was acting on behalf of the recipient and asking the caller's name. It displayed a live transcript +
December 4, 2025 at 2:05 PM
Nilay talks to Hayden Field about the Societal Impacts team at Anthropic and the challenge of producing research that might run counter to the company's own interests.
December 4, 2025 at 1:57 PM
My Pixel 10 Pro phone rang from an unrecognized number. It gave me a "Screen" button in addition to "Answer" and "Decline", so I pressed it to see what it did. A local AI answered the call, saying it was acting on behalf of the recipient and asking the caller's name. It displayed a live transcript +
December 3, 2025 at 11:31 PM
The latest "Unsupervised Learning" podcast features an interview with Dianne Na Penn, senior product leader at Anthropic, about Opus 4.5 and development at the company.
December 3, 2025 at 7:13 PM