Roberto Empijei Clapis
@empijei.bsky.social
Security Toolsmith
Posts mostly about Go, banter, web development, security and cooking.
https://empijei.science
I'd call this approach "vibe fuzzing".
May 3, 2025 at 5:51 AM
This will give you a lot more confidence in your code and will allow you to find very niche bugs that would be very hard to find with conventional testing.
May 3, 2025 at 5:49 AM
Every bug you find in the AI code, you ask the AI to fix; every bug in yours, you fix carefully yourself.
The AI code will quickly become a fever dream/garbage fire, but you don't care since it's not code you'll ever run in prod.
The big advantage is that it's very likely to have bugs that are different from yours.
May 3, 2025 at 5:49 AM
The best fuzzing is, in fact, differential fuzzing. The issue is that you rarely have a reference implementation for your problem.
This is where AI comes in.
You vibe code the alternative implementation and you leave it in your tests, to compare against yours.
May 3, 2025 at 5:49 AM
As we discussed today, Sec-Fetch-Site should do most of the work for us 😊
April 17, 2025 at 11:50 AM
I still have to change its color, dammit XD
April 1, 2025 at 5:31 PM
Companies will realize that it is better (i.e. cheaper) to have a slower but more precise programmer than an LLM-driven developer.
February 23, 2025 at 8:52 AM
Also, I prefer to use channels over a sync API when there's a chance to parse while you lex. It's easier to parallelize and to handle cancellation.
January 24, 2025 at 6:55 PM
I didn't want to overcomplicate things and add too much new stuff.
January 24, 2025 at 6:53 PM
No specific reason, it was just an example. I can think of a use for both approaches :)
January 24, 2025 at 6:52 PM
I'm working on it: by benchmarking just the queues, it looks like the ring does indeed cause far fewer allocations, but it is still remarkably slower than slices on most benchmarks.
December 22, 2024 at 1:15 PM
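For context, a minimal sketch of the kind of ring-buffer queue being benchmarked — a simplified int-only version of my own, not the actual benchmark code. Push and Pop reuse the same backing array instead of allocating per operation, which is where the allocation savings come from:

```go
package main

import "fmt"

// Ring is a grow-on-demand ring-buffer FIFO queue.
type Ring struct {
	buf        []int
	head, size int
}

func (r *Ring) Push(v int) {
	if r.size == len(r.buf) {
		// Full (or zero-length buffer): copy into one twice as large.
		n := 2 * len(r.buf)
		if n == 0 {
			n = 1
		}
		nb := make([]int, n)
		for i := 0; i < r.size; i++ {
			nb[i] = r.buf[(r.head+i)%len(r.buf)]
		}
		r.buf, r.head = nb, 0
	}
	r.buf[(r.head+r.size)%len(r.buf)] = v
	r.size++
}

func (r *Ring) Pop() (int, bool) {
	if r.size == 0 {
		return 0, false
	}
	v := r.buf[r.head]
	r.head = (r.head + 1) % len(r.buf)
	r.size--
	return v, true
}

func main() {
	var r Ring
	for i := 1; i <= 3; i++ {
		r.Push(i)
	}
	for v, ok := r.Pop(); ok; v, ok = r.Pop() {
		fmt.Print(v, " ") // 1 2 3
	}
}
```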
Yeah, that is what puzzles me. I guess we've collectively gotten hardware and compiler makers to make terrible code go fast.
December 21, 2024 at 7:28 PM
Go advanced concurrency patterns: part 4 (unlimited buffer channels) - Blog Title
blogtitle.github.io
December 21, 2024 at 12:19 PM
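A minimal generic sketch of the pattern the linked post covers — an "unlimited buffer" channel, where a goroutine buffers values in a slice so sends never block. This is my own condensed version, not the code from the post:

```go
package main

import "fmt"

// unbounded returns a send side that never blocks and a receive side
// that yields the values in FIFO order. A goroutine in between buffers
// pending values in a slice; closing the send side drains the buffer
// and then closes the receive side.
func unbounded[T any]() (chan<- T, <-chan T) {
	in, out := make(chan T), make(chan T)
	go func() {
		defer close(out)
		var q []T
		for in != nil || len(q) > 0 {
			// A nil send channel disables the send case when q is empty.
			var send chan T
			var next T
			if len(q) > 0 {
				send, next = out, q[0]
			}
			select {
			case v, ok := <-in:
				if !ok {
					in = nil // sender closed: stop receiving, keep draining
					continue
				}
				q = append(q, v)
			case send <- next:
				q = q[1:]
			}
		}
	}()
	return in, out
}

func main() {
	in, out := unbounded[int]()
	for i := 1; i <= 3; i++ {
		in <- i // never blocks, regardless of the reader
	}
	close(in)
	for v := range out {
		fmt.Print(v, " ") // 1 2 3
	}
}
```

The nil-channel trick in the select is what lets one loop serve both "buffer has data" and "buffer empty" states without busy-waiting.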
I'll add a comment to the post about this
December 21, 2024 at 11:18 AM