verdverm
@verdverm.com
verdverm.com
dev & entrepreneur interested in atproto, cuelang, machine learning, developer experience, combating misinformation

working on https://blebbit.app | @blebbit.app | #blebbit

personal: https://verdverm.com | https://github.com/verdverm
seems big, though I'm not familiar enough with this part of the #ai space to know for sure

the idea that using polar coordinates, thereby separating "what" from "where", would make things easier for the LLM seems to be confirmed by the results

reshaping the problem space often leads to optimizations
They evaluated models pre-trained on 1024-token sequences, then tested on sequences up to 10,240 tokens.

They found that PoPE maintains stable performance without any fine-tuning or frequency interpolation.

Paper: arxiv.org/abs/2509.10534
December 26, 2025 at 3:38 AM
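Not the paper's actual formulation, but the what/where separation above can be toy-sketched like this — `polar_pair` and its frequency schedule are my own illustrative choices, assuming a RoPE-style angle schedule:

```python
import math

def polar_pair(radius: float, position: int,
               pair_index: int = 0, d_model: int = 64,
               base: float = 10000.0) -> tuple[float, float]:
    """Toy what/where split for one feature pair:
    the radius carries content ("what"),
    the angle carries position ("where")."""
    # RoPE-style frequency schedule for the angle
    theta = position / (base ** (2 * pair_index / d_model))
    return (radius * math.cos(theta), radius * math.sin(theta))
```

The nice property: moving a token to a new position only rotates the pair, so the content magnitude is untouched — which is one intuition for why extrapolating to longer sequences wouldn't need frequency interpolation.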
This may be @kelseyhightower.com's best interview, so good!

the title doesn't do justice to the topics covered, nor to the wisdom Kelsey shares as a natural educator

www.youtube.com/watch?v=HdUb...
AI, DevOps, and Kubernetes: Kelsey Hightower on What’s Next
YouTube video by JetBrains
www.youtube.com
December 26, 2025 at 2:57 AM
my favorite thing about #ai coding is that people are writing more docs
December 26, 2025 at 2:11 AM
Something seems up with gemini pro & flash today, they both seem obsessed with calling functions and not listening to instructions
December 24, 2025 at 11:12 PM
my hacky code caused the model to switch from pro to flash on the second or third message, but it three-tapped this feature I've been meaning to implement

The difference: my agents can finally run tests! I'm not even to the good tools yet lol
December 24, 2025 at 10:22 PM
the inherent mistakes LLMs make, from attention and all that, have me thinking about how I write code (yes, I still write source code organically)

I was about to write a token in a comment, but decided to rephrase so it wouldn't be conflated with a different, real var name.
December 24, 2025 at 8:55 PM
For those of you who don't know, I'm a big fan of Go, CUE, and Dagger and I'm finally starting to put all three together.

I just crafted this image for me and my little helper #agent so we can work on all the things: Go, Node, Python, and even ZSH

github.com/hofstadter-i...

#cuelang #dagger
December 24, 2025 at 1:36 PM
@timkellogg.me ever since you said "give your agents good tools and get out of the way", I keep seeing it take form in the good ai success articles

blog.dataengineerthings.org/lsp-hooks-an...
LSP, Hooks, and Workflow Design: What Actually Differentiates AI Coding Tools
Why toolchain integration outweighs model choice
blog.dataengineerthings.org
December 24, 2025 at 11:25 AM
running an env with #veggie

yup, that is indeed oh-my-zsh in a dev container for me and my #ai
December 24, 2025 at 12:47 AM
Time to try something more ambitious: a project long on the backburner, though only recently named

#carrot will be the #veggie take on a @cuelang.org interface to
@dagger.io to power envs for agents, skills, and changesets as well as dev needs

Something like git + docker + compose for agents
Iterated back and forth with them getting to the same point

flash is way faster, very nice for iterating on UI

pro is slower, but certainly has a noticeable, though hard to quantify, grasp of the larger picture (?)

unclear who the real winner is here

which do you think is which... and/or better?
December 22, 2025 at 9:07 AM
Took some code from a blog on the internet, one file for making svg graphs. Hacked on it to support multiple series and threshold lines, to draw token usage.

Now a/b testing my #agent with gemini-3-flash/pro (46s|74k / 2m|87k). They did the exact same thing func-call wise; haven't delved into the code

1/
December 22, 2025 at 5:11 AM
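The multi-series-plus-threshold shape above is roughly this — a minimal sketch, not the actual file from that blog post, and `svg_line_chart` is my own name for it:

```python
def svg_line_chart(series, threshold=None, width=300, height=100):
    """Render multiple data series as SVG polylines,
    with an optional dashed threshold line."""
    all_vals = [v for s in series for v in s]
    lo, hi = min(all_vals), max(all_vals)
    span = (hi - lo) or 1

    def y(v):
        # map a value into SVG y coordinates (y axis points down)
        return height - (v - lo) / span * height

    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    colors = ["steelblue", "tomato", "seagreen"]
    for i, s in enumerate(series):
        step = width / max(len(s) - 1, 1)
        pts = " ".join(f"{j * step:.1f},{y(v):.1f}" for j, v in enumerate(s))
        parts.append(f'<polyline fill="none" stroke="{colors[i % 3]}" '
                     f'points="{pts}"/>')
    if threshold is not None:
        ty = y(threshold)
        parts.append(f'<line x1="0" y1="{ty:.1f}" x2="{width}" y2="{ty:.1f}" '
                     f'stroke="gray" stroke-dasharray="4"/>')
    parts.append("</svg>")
    return "\n".join(parts)
```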
Just did my first session having two #agents working in tandem, to implement a button so I can click to see what the system prompt will look like with the current config & state, as markdown in VS Code

1. focused on the backend
2. focused on the frontend

Sent the last backend to the frontend, oops
December 21, 2025 at 3:55 AM
Sparklines, to know how your #ai token efficiency looks... or when things go off the rails

ʕ╯◔ϖ◔ʔ╯︵suǝʞoʇ
December 20, 2025 at 4:37 PM
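A token sparkline like the one above fits in a few lines of Python using the Unicode block characters (this is the generic trick, not my actual implementation):

```python
BARS = "▁▂▃▄▅▆▇█"  # Unicode block elements, shortest to tallest

def sparkline(values):
    """Turn per-turn token counts into a one-line sparkline."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid divide-by-zero on flat data
    return "".join(
        BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values
    )
```

Flat usage renders as a quiet baseline; a runaway turn sticks out as a tall bar — which is exactly the "off the rails" signal.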
2 days with gemini-3-flash, def a great model

- implemented autocomplete for its input
- environment as vscode scm with diffs
- sorted agents.md files, even discovered and adjusted its own prompt without prompting
- enhanced the ui around tool calls

changes spanning languages and front/backends
December 19, 2025 at 11:27 PM
I feel like I'm well on my way to making coding feel like micro transactions. I've been meaning to add the actual cost value to each event detail, and am now wondering if that will seal the deal on micro-agent'n

💸
So I kept going, instead of the usual advice to start fresh, because it kinda feels like it's in that butter zone of context, having seen a bunch of the code already, still less than 50k context... so I had it make some more changes, 4 turns ($0.20)
December 19, 2025 at 6:07 AM
If gemini-3-flash keeps doing things like this, it's gonna be my daily driver

good, fast, cheap

This was 1M tokens ($0.50) vs Claude ($5.00)

It makes the chat input reflexive to the files and terminals you have open, including the agents and models available via config

github.com/hofstadter-i...
(veg) make autocomplete awesome · hofstadter-io/hof@9e404bb
github.com
December 19, 2025 at 5:40 AM
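The math behind the comparison above is just tokens times rate — treating the post's totals as effective $/1M-token rates, not official pricing:

```python
def session_cost(tokens: int, price_per_mtok: float) -> float:
    """Dollar cost for a session: tokens used times the $/1M-token rate."""
    return tokens / 1_000_000 * price_per_mtok

# the post's totals as effective rates (illustrative, not published pricing)
flash_cost = session_cost(1_000_000, 0.50)   # ~$0.50
claude_cost = session_cost(1_000_000, 5.00)  # ~$5.00
```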
Reposted by verdverm
yeah, I had multiple monthly subs, not using the full amount depending on the provider

now I'm paying by the token and getting better insights into my usage, we'll see if the total is more or less, I'm ok with spending more if it is also more efficient than their agents & system prompts
December 19, 2025 at 3:08 AM
Last night was not an oopsie*, it is the direct cost of working with gemini-3-pro to generate #agent files across multiple projects

*other than if I had waited a day I could have used gemini-3-flash and this could have cost a lot less, though a/b eval'n is probably in the cards anyway
December 17, 2025 at 8:10 PM
Instructing your #agent to make lots of function calls in one turn and to clean up after itself is grrrrreat!

keeps the context clean and the billz down

20k -> 45k -> 30k
December 17, 2025 at 8:57 AM
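The 20k -> 45k -> 30k shape above comes from pruning old tool-call results once they've been used. A minimal sketch of that cleanup, assuming a simple `{"role", "content"}` message shape (my own illustrative structure, not my agent's actual code):

```python
def prune_tool_results(messages, keep_last=2):
    """Blank the bodies of older tool-result messages,
    keeping the most recent ones intact so the agent
    still sees fresh results while the context shrinks."""
    tool_idx = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    stale = tool_idx[:-keep_last] if keep_last else tool_idx
    for i in stale:
        messages[i] = {**messages[i], "content": "[pruned]"}
    return messages
```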
fyi @tangled.org

a ripening HN thread for new users

news.ycombinator.com/item?id=4629...
Pricing Changes for GitHub Actions | Hacker News
news.ycombinator.com
December 16, 2025 at 5:55 PM
Did you know about Google Takeout?

It's a dead simple page to download the data Google has about you and covers all their products

I'm going to AI my data, but I'm also thinking about a future where @atproto.com can build a migration tool for your internet account

takeout.google.com
December 16, 2025 at 4:54 AM
Reposted by verdverm
@traciepowell.bsky.social: “Journalism will look back on its influencer mania the way it now views the ‘pivot to video’ — as a costly diversion from building real community infrastructure.”

www.niemanlab.org/2025/12/jour...
December 16, 2025 at 1:45 AM
#ai is more likely the pin that pops the #ad bubble
December 16, 2025 at 1:40 AM