Marvin Hagemeister
@marvinh.dev
I build simple and fast things. Part of Preact team.
Potatoes are the goat!
January 12, 2026 at 9:01 PM
Reposted by Marvin Hagemeister
OPEN YOUR EYES 👁️👄👁️
January 12, 2026 at 11:50 AM
Yeah it seems like that is the case. Changing that would also change how traits are resolved which feels like a change that is too drastic at this stage.
January 10, 2026 at 3:13 PM
Whenever I look into why something is slow to compile in Rust it usually boils down to two reasons:

1. Macros
2. Crate as a compilation unit

The 2nd point basically means the compiler always has to look at the crate as a whole. It cannot reason by looking at a single file.
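A tiny sketch of why that is: trait impls may live in any module of a crate, so even a one-line method call can't be resolved by looking at a single file in isolation. (The names and modules below are made up for illustration.)

```rust
// A made-up two-"file" crate: trait impls may live in any module,
// so resolving `x.frob()` means scanning the whole crate.
trait Frob {
    fn frob(&self) -> u32;
}

mod elsewhere {
    // The impl lives in a different module (think: a different file)
    // of the same crate.
    impl super::Frob for u32 {
        fn frob(&self) -> u32 {
            *self + 1
        }
    }
}

fn main() {
    // Resolving this call requires knowing every `impl Frob` in the
    // crate, not just what's visible in this module.
    println!("{}", 41u32.frob()); // prints 42
}
```

This is why touching one file can still force the compiler to reconsider resolution decisions made elsewhere in the crate.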
January 10, 2026 at 11:44 AM
Reposted by Marvin Hagemeister
Released a new markdown rendering library for Preact called preact-md. It currently uses the general unified remark/rehype pipeline by default and sanitises inputs.
January 10, 2026 at 10:46 AM
- rust-analyzer (salsa)
- Parcel v2
- Turbopack (next.js)

There are probably more that I don't know about.
January 5, 2026 at 12:51 AM
Ohh I didn't know that Parcel went down this road. That's pretty cool! Would love to hear your thoughts on the real world experiences of building a bundler around that. Did it go as expected? Were there some unknown challenges that came up?
January 4, 2026 at 4:49 PM
I feel like I should try this. I'm too often in front of a screen.
January 4, 2026 at 3:10 PM
Putting Signals in your compiler? Turns out this is already happening.

marvinh.dev/blog/signals...
Signals vs Query-Based Compilers
With the rise of LSPs, query-based compilers have emerged as a new architecture. That architecture is both more similar to Signals, and more different, than I initially assumed.
marvinh.dev
January 4, 2026 at 3:05 PM
Reposted by Marvin Hagemeister
I did a whole bunch of contributions to preact and svelte, which has been a highlight too. Both have such great teams behind them, and gave me the chance to delve into stacks I don't usually use 🙏
December 31, 2025 at 10:12 AM
But then there are also linters, type checkers, etc. I feel like we as an industry are dancing around wanting a shared API for all of that. But that API needs to support being incremental out of the box.
December 29, 2025 at 2:29 PM
Mostly curiosity, paired with the question of how we should architect our tooling for the next decade. These days our tools are much more like stateful running things. We want immediate feedback, it should only update what has changed, etc. Like, what if vite were built with such an architecture?
December 29, 2025 at 2:29 PM
Been looking a bit into query-based compiler architectures and it turns out it's essentially signals in compilers. There are some minor differences in implementation, but the core idea is the same.
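For the curious, here's a minimal toy sketch of the shared idea (my own simplification, not salsa's actual API): memoize each query result with the revision of the input it read, and recompute only when the input's revision has moved past it. That's exactly a computed signal caching its value until a dependency changes.

```rust
use std::collections::HashMap;

// A toy query-based "compiler database": inputs are versioned, and
// derived queries cache their result alongside the input revision
// they were computed at, like a computed signal.
struct Db {
    // input query: source text per file, tagged with the revision
    // at which it was last set
    sources: HashMap<String, (String, u64)>,
    revision: u64,
    // derived query cache: file -> (line_count, revision computed at)
    line_counts: HashMap<String, (usize, u64)>,
}

impl Db {
    fn new() -> Self {
        Db {
            sources: HashMap::new(),
            revision: 0,
            line_counts: HashMap::new(),
        }
    }

    // Setting an input bumps the global revision, which is what
    // invalidates dependents, like writing to a signal.
    fn set_source(&mut self, file: &str, text: &str) {
        self.revision += 1;
        self.sources
            .insert(file.to_string(), (text.to_string(), self.revision));
    }

    // A derived query: returns the cached value when its input has
    // not changed since the cache entry was written, and recomputes
    // otherwise, like reading a computed signal.
    fn line_count(&mut self, file: &str) -> usize {
        let (text, input_rev) =
            self.sources.get(file).cloned().expect("unknown file");
        if let Some(&(cached, cached_rev)) = self.line_counts.get(file) {
            if cached_rev >= input_rev {
                return cached; // cache hit: input unchanged
            }
        }
        let n = text.lines().count();
        self.line_counts.insert(file.to_string(), (n, input_rev));
        n
    }
}

fn main() {
    let mut db = Db::new();
    db.set_source("lib.rs", "fn main() {}\n");
    println!("{}", db.line_count("lib.rs")); // computed
    println!("{}", db.line_count("lib.rs")); // served from cache
}
```

Real systems like salsa add dependency tracking between derived queries and early-cutoff (skip recomputing dependents when a recomputed value is unchanged), but the revision-comparison core is the same.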
December 29, 2025 at 12:10 PM
Reposted by Marvin Hagemeister
Package managers keep using git as a database, it never works out.

https://nesbitt.io/2025/12/24/package-managers-keep-using-git-as-a-database.html
Package managers keep using git as a database, it never works out
Using git as a database is a seductive idea. You get version history for free. Pull requests give you a review workflow. It's distributed by design. GitHub will host it for free. Everyone already knows how to use it. Package managers keep falling for this. And it keeps not working out.

## Cargo

The crates.io index started as a git repository. Every Cargo client cloned it. This worked fine when the registry was small, but the index kept growing. Users would see progress bars like "Resolving deltas: 74.01%, (64415/95919)" hanging for ages, the visible symptom of Cargo's libgit2 library grinding through delta resolution on a repository with thousands of historic commits. The problem was worst in CI. Stateless environments would download the full index, use a tiny fraction of it, and throw it away. Every build, every time. RFC 2789 introduced a sparse HTTP protocol. Instead of cloning the whole index, Cargo now fetches files directly over HTTPS, downloading only the metadata for dependencies your project actually uses. (This is the "full index replication vs on-demand queries" tradeoff in action.) By April 2025, 99% of crates.io requests came from Cargo versions where sparse is the default. The git index still exists, still growing by thousands of commits per day, but most users never touch it.

## Homebrew

GitHub explicitly asked Homebrew to stop using shallow clones. Updating them was "an extremely expensive operation" due to the tree layout and traffic of homebrew-core and homebrew-cask. Users were downloading 331MB just to unshallow homebrew-core. The .git folder approached 1GB on some machines. Every `brew update` meant waiting for git to grind through delta resolution. Homebrew 4.0.0 in February 2023 switched to JSON downloads for tap updates.
The reasoning was blunt: "they are expensive to git fetch and git clone and GitHub would rather we didn't do that… they are slow to git fetch and git clone and this provides a bad experience to end users." Auto-updates now run every 24 hours instead of every 5 minutes, and they're much faster because there's no git fetch involved.

## CocoaPods

CocoaPods is the package manager for iOS and macOS development. It hit the limits hard. The Specs repo grew to hundreds of thousands of podspecs across a deeply nested directory structure. Cloning took minutes. Updating took minutes. CI time vanished into git operations. GitHub imposed CPU rate limits. The culprit was shallow clones, which force GitHub's servers to compute which objects the client already has. The team tried various band-aids: stopping auto-fetch on `pod install`, converting shallow clones to full clones, sharding the repository. The CocoaPods blog captured it well: "Git was invented at a time when 'slow network' and 'no backups' were legitimate design concerns. Running endless builds as part of continuous integration wasn't commonplace." CocoaPods 1.8 gave up on git entirely for most users. A CDN became the default, serving podspec files directly over HTTP. The migration saved users about a gigabyte of disk space and made `pod install` nearly instant for new setups.

## Go modules

Grab's engineering team went from 18 minutes for `go get` to 12 seconds after deploying a module proxy. That's not a typo. Eighteen minutes down to twelve seconds. The problem was that `go get` needed to fetch each dependency's source code just to read its go.mod file and resolve transitive dependencies. Cloning entire repositories to get a single file. Go had security concerns too.
The original design wanted to remove version control tools entirely because "these fragment the ecosystem: packages developed using Bazaar or Fossil, for example, are effectively unavailable to users who cannot or choose not to install these tools." Beyond fragmentation, the Go team worried about security bugs in version control systems becoming security bugs in `go get`. You're not just importing code; you're importing the attack surface of every VCS tool on the developer's machine. GOPROXY became the default in Go 1.13. The proxy serves source archives and go.mod files independently over HTTP. Go also introduced a checksum database (sumdb) that records cryptographic hashes of module contents. This protects against force pushes silently changing tagged releases, and ensures modules remain available even if the original repository is deleted.

## Beyond package managers

The same pattern shows up wherever developers try to use git as a database. Git-based wikis like Gollum (used by GitHub and GitLab) become "somewhat too slow to be usable" at scale. Browsing directory structure takes seconds per click. Loading pages takes longer. GitLab plans to move away from Gollum entirely. Git-based CMS platforms like Decap hit GitHub's API rate limits. A Decap project on GitHub scales to about 10,000 entries if you have a lot of collection relations. A new user with an empty cache makes a request per entry to populate it, burning through the 5,000 request limit quickly. If your site has lots of content or updates frequently, use a database instead. Even GitOps tools that embrace git as a source of truth have to work around its limitations. ArgoCD's repo server can run out of disk space cloning repositories. A single commit invalidates the cache for all applications in that repo. Large monorepos need special scaling considerations.

## The pattern

The hosting problems are symptoms.
The underlying issue is that git inherits filesystem limitations, and filesystems make terrible databases.

**Directory limits.** Directories with too many files become slow. CocoaPods had 16,000 pod directories in a single Specs folder, requiring huge tree objects and expensive computation. Their fix was hash-based sharding: split directories by the first few characters of a hashed name, so no single directory has too many entries. Git itself does this internally with its objects folder, splitting into 256 subdirectories. You're reinventing B-trees, badly.

**Case sensitivity.** Git is case-sensitive, but macOS and Windows filesystems typically aren't. Check out a repo containing both `File.txt` and `file.txt` on Windows, and the second overwrites the first. Azure DevOps had to add server-side enforcement to block pushes with case-conflicting paths.

**Path length limits.** Windows restricts paths to 260 characters, a constraint dating back to DOS. Git supports longer paths, but Git for Windows inherits the OS limitation. This is painful with deeply nested node_modules directories, where `git status` fails with "Filename too long" errors.

**Missing database features.** Databases have CHECK constraints and UNIQUE constraints; git has nothing, so every package manager builds its own validation layer. Databases have locking; git doesn't. Databases have indexes for queries like "all packages depending on X"; with git you either traverse every file or build your own index. Databases have migrations for schema changes; git has "rewrite history and force everyone to re-clone."

The progression is predictable. Start with a flat directory of files. Hit filesystem limits. Implement sharding. Hit cross-platform issues. Build server-side enforcement. Build custom indexes. Eventually give up and use HTTP or an actual database. You've built a worse version of what databases already provide, spread across git hooks, CI pipelines, and bespoke tooling.
None of this means git is bad. Git excels at what it was designed for: distributed collaboration on source code, with branching, merging, and offline work. The problem is using it for something else entirely. Package registries need fast point queries for metadata. Git gives you a full-document sync protocol when you need a key-value lookup. If you're building a package manager and git-as-index seems appealing, look at Cargo, Homebrew, CocoaPods, Go. They all had to build workarounds as they grew, causing pain for users and maintainers. The pull request workflow is nice. The version history is nice. You will hit the same walls they did.
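The hash-based sharding fix described above is simple enough to sketch (a hypothetical scheme with made-up constants, not CocoaPods' or crates.io's actual layout): hash the package name and use one byte of the hash as a subdirectory, turning one huge folder into 256 small ones.

```rust
// FNV-1a, a tiny deterministic hash, stands in here for whatever
// hash a real registry would use.
fn fnv1a(s: &str) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for b in s.bytes() {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

// Map a package name to "xx/name", where "xx" is one byte of the
// hash in hex: 16,000 entries spread over 256 subdirectories.
fn shard_path(name: &str) -> String {
    format!("{:02x}/{}", (fnv1a(name) & 0xff) as u8, name)
}

fn main() {
    println!("{}", shard_path("AFNetworking"));
}
```

The catch, as the article notes, is that this is just a hand-rolled, worse version of the indexing a real database gives you for free.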
nesbitt.io
December 24, 2025 at 4:49 PM
Yeah fair point. Maybe I'm too optimistic.
December 22, 2025 at 7:47 PM
Maybe it's foolish but my hope is that it affects the folks making the games. Like if _they_ don't get bigger machines the game will lag on their own machines. Maybe that changes things.
December 22, 2025 at 5:24 PM
I'm kinda curious if the RAM shortage will lead to better-optimized software in the future. Given the current prices I doubt many people will upgrade. It's likely that phone manufacturers are affected too.
December 22, 2025 at 1:48 PM
Came from Wahoo element too and the solar one I bought is the Coros Dura. The map view isn't as nice as Wahoo's, but for my needs it's more than enough.
December 22, 2025 at 12:21 AM
This thing instead just focuses on the core needs when cycling. No flashy UI or anything. Just here is your data and off you go.

Not having to worry about charging yet another device and it being instantly available is just so nice. I wish I knew more details about the engineering behind it.
December 21, 2025 at 8:26 PM
Do other competitors have nicer OLED screens with more and richer colors? Sure. Do others feel more fluid due to a higher refresh rate? Also true. Do others boot as fast? No, not by a long shot. Do others have equal battery life with the same number of sensors connected? Hard no.
December 21, 2025 at 8:26 PM
The devices I had before took 1min just to boot, which always annoyed me. Those were all Android-based devices. I don't know much about embedded software development, but I don't think the new solar computer is based on Android. This must be something different. It feels really well designed.
December 21, 2025 at 8:26 PM
What's even more crazy is that you don't turn this computer off when you're done, you just put it in sleep mode. In that mode it consumes even less energy. It can stay like that for weeks. And with a press of a button it's back in action again in <1s. It's instant.
December 21, 2025 at 8:26 PM
Like you can tell that the screen has a reduced refresh rate that adapts on the fly. When you start an activity the timer displays ms with what feels like 6fps. But as soon as it hits 5s it only shows seconds which allows it to drop the refresh rate even more to allow the CPU to sleep longer.
December 21, 2025 at 8:26 PM
I got a new bicycle computer that has a few solar cells. Went for a 1h lunch ride and it only consumed 1.1% battery - 0.7% gained by solar (cloudy day) = 0.4% net loss. It's impressive how the whole system is designed to be as power efficient as possible, despite being connected to a dozen sensors + GPS
December 21, 2025 at 8:26 PM
Masonry with native CSS? Heck yeah! webkit.org/blog/17660/i...
Introducing CSS Grid Lanes
It's here, the future of masonry layouts on the web!
webkit.org
December 20, 2025 at 9:01 AM