working on https://blebbit.app | @blebbit.app | #blebbit
personal: https://verdverm.com | https://github.com/verdverm
- implemented autocomplete for its input
- environment as vscode scm with diffs
- sorted agents.md files, even discovered and adjusted its own prompt without prompting
- enhanced the ui around tool calls
changes spanning languages and front/backends
the idea that using polar coordinates, thereby separating "what" from "where", would make things easier for the llm seems to be confirmed by the results
reshaping the problem space often leads to optimizations
They found that PoPE maintains stable performance without any fine-tuning or frequency interpolation.
Paper: arxiv.org/abs/2509.10534
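The "what vs where" split can be illustrated with a toy polar/rotary-style encoding: content lives in each dim-pair's magnitude, position lives in its phase. This is a sketch of the intuition only, not the paper's actual PoPE formulation; all names and the frequency schedule here are illustrative.

```python
import numpy as np

def polar_encode(x, pos, base=10000.0):
    """Rotate pairs of embedding dims by a position-dependent phase.
    Each pair's magnitude ("what") is untouched; only its angle ("where")
    changes. Illustrative sketch, not the paper's exact method."""
    half = x.shape[-1] // 2
    z = x[..., :half] + 1j * x[..., half:]       # complex view of dim pairs
    freqs = base ** (-np.arange(half) / half)    # per-pair rotation rates
    z = z * np.exp(1j * pos * freqs)             # unit-magnitude phase factor
    return np.concatenate([z.real, z.imag], axis=-1)

x = np.random.randn(8)
y = polar_encode(x, pos=5)
# the rotation preserves each pair's magnitude (the "what")
assert np.allclose(np.abs(x[:4] + 1j * x[4:]), np.abs(y[:4] + 1j * y[4:]))
```

Because the phase factor has unit magnitude, position never distorts content, which is the separation the posts above are pointing at.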
title doesn't do justice to the topics covered nor to the wisdom Kelsey shares as a natural educator
www.youtube.com/watch?v=HdUb...
The difference: my agents can finally run tests! I'm not even to the good tools yet lol
I was about to write a token in a comment, but decided to rephrase so it wouldn't be conflated with a different, real var name.
I just crafted this image for me and my little helper #agent so we can work on all the things. Go, Node, Python, and even ZSH
github.com/hofstadter-i...
#cuelang #dagger
blog.dataengineerthings.org/lsp-hooks-an...
#carrot will be the #veggie take on a @cuelang.org interface to
@dagger.io to power envs for agents, skills, and changesets as well as dev needs
Something like git + docker + compose for agents
flash is way faster, very nice for iterating on UI
pro is slower, but seems to have a noticeable, though hard to quantify, grasp of the larger picture
unclear who the real winner is here
which do you think is which... and/or better?
Now a/b testing my #agent with gemini-3-flash/pro (46s|74k vs 2m|87k tokens). They did the exact same thing func-call wise; haven't delved into the code yet
1/
1. focused on the backend
2. focused on the frontend
Sent the last backend to the frontend, oops
💸
good, fast, cheap
This was 1M tokens ($0.50) vs Claude ($5.00)
It makes the chat input reflexive to the files and terminals you have open, including the agents and models available via config
github.com/hofstadter-i...
now I'm paying by the token and getting better insights into my usage; we'll see if the total is more or less. I'm ok with spending more if it's also more efficient than their agents & system prompts
*except that if I had waited a day I could have used gemini-3-flash and this could have cost a lot less, though a/b eval'n is probably in the cards anyway
keeps the context clean and the billz down
20k -> 45k -> 30k
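That 20k -> 45k -> 30k shape is what a compaction pass looks like: let the context grow to a threshold, then fold the oldest messages into one summary. A minimal sketch, assuming a summarize-the-oldest strategy; `summarize` stands in for an LLM call and "tokens" are just word counts here — all names hypothetical:

```python
# Toy sketch of keeping an agent's context small: once the running token
# count crosses a threshold, fold the oldest messages into one summary.
# `summarize` is a stand-in for a real LLM call; "tokens" are word counts.

def tokens(msgs):
    return sum(len(m.split()) for m in msgs)

def summarize(msgs):
    # placeholder for an LLM summarization call
    return f"summary of {len(msgs)} earlier messages"

def compact(msgs, limit=45_000, keep_recent=4):
    if tokens(msgs) <= limit:
        return msgs
    old, recent = msgs[:-keep_recent], msgs[-keep_recent:]
    return [summarize(old)] + recent
```

The recent tail stays verbatim so the agent keeps its working state, while the summarized head is what drops the count (and the billz) back down.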
It's a dead simple page to download the data Google has about you and covers all their products
I'm going to AI my data, but I'm also thinking about a future where @atproto.com can build a migration tool for your internet account
takeout.google.com
www.niemanlab.org/2025/12/jour...