joschi
banner
joschi.hachyderm.io.ap.brid.gy
joschi
@joschi.hachyderm.io.ap.brid.gy
┻━┻ ︵ ¯\\(ツ)/¯ ︵ ┻━┻

#SoftwareEngineering #Java #Kotlin #DadJokes #DistributedSystems

🌉 bridged from https://hachyderm.io/@joschi on the fediverse by https://fed.brid.gy/
Reposted by joschi
It really yucks my ick when people describe the #fediverse as "algorithm free".

Sorting your (filtered) feed in reverse chronological order is VERY MUCH an algorithm. (Supported by thousands more under the hood.)

Algorithm is a GOOD word. Algorithms help us process and make sense of vast […]
Original post on mastodon.online
mastodon.online
February 15, 2026 at 4:48 PM
Reposted by joschi
However frustrating Eternal September was, it was real people learning to connect. This is _bots_. What a gross analogy.

Reporting abuse on GitHub requires jumping through captchas _every single time_, while abuse is mass produced. Reporting is futile.

And this week's main clawed character, a […]
Original post on social.rossabaker.com
social.rossabaker.com
February 13, 2026 at 3:08 PM
Reposted by joschi
Disingenuous statement on the GitHub Blog about the flood of AI-generated slop "contributions" that has been overwhelming project maintainers with busy-work:

"At GitHub, we aren’t just watching this happen."

That's true. You _actively pushed things_ to get us to this point.

#github #slop
github.blog
February 12, 2026 at 9:30 PM
Reposted by joschi
Nikolai Kardashev was a Soviet radio astronomer and astrophysicist who wanted to know how one could even detect extraterrestrial civilizations at all, the “SETI problem”.

He concluded: by energy consumption. The Kardashev scale is therefore not an ISO standard (ISO = Interstellar Standards […]
Original post on infosec.exchange
infosec.exchange
February 5, 2026 at 1:25 PM
Reposted by joschi
Incident Report: CVE-2024-YIKES

A series of unfortunate events.

https://nesbitt.io/2026/02/03/incident-report-cve-2024-yikes.html
Incident Report: CVE-2024-YIKES
**Report filed:** 03:47 UTC
**Status:** Resolved (accidentally)
**Severity:** Critical → Catastrophic → Somehow Fine
**Duration:** 73 hours
**Affected systems:** Yes

**Executive Summary:** A security incident occurred. It has been resolved. We take security seriously. Please see previous 14 incident reports for details on how seriously.

### Summary

A compromised dependency in the JavaScript ecosystem led to credential theft, which enabled a supply chain attack on a Rust compression library, which was vendored into a Python build tool, which shipped malware to approximately 4 million developers before being inadvertently patched by an unrelated cryptocurrency mining worm.

### Timeline

**Day 1, 03:14 UTC** — Marcus Chen, maintainer of `left-justify` (847 million weekly downloads), reports on Twitter that his transit pass, an old laptop, and “something Kubernetes threw up that looked important” were stolen from his apartment. He does not immediately connect this to package security.

**Day 1, 09:22 UTC** — Chen attempts to log into the nmp registry. His hardware 2FA key is missing. He googles where to buy a replacement YubiKey. The AI Overview at the top of the results links to “yubikey-official-store.net,” a phishing site registered six hours earlier.

**Day 1, 09:31 UTC** — Chen enters his nmp credentials on the phishing site. The site thanks him for his purchase and promises delivery in 3-5 business days.

**Day 1, 11:00 UTC** — `[email protected]` is published. The changelog reads “performance improvements.” The package now includes a postinstall script that exfiltrates `.npmrc`, `.pypirc`, `~/.cargo/credentials`, and `~/.gem/credentials` to a server in a country the attacker mistakenly believed had no extradition treaty with anyone.

**Day 1, 13:15 UTC** — A support ticket titled “why is your SDK exfiltrating my .npmrc” is opened against `left-justify`. It is marked as “low priority - user environment issue” and auto-closed after 14 days of inactivity.

**Day 1, 14:47 UTC** — Among the exfiltrated credentials: the maintainer of `vulpine-lz4`, a Rust library for “blazingly fast Firefox-themed LZ4 decompression.” The library’s logo is a cartoon fox with sunglasses. It has 12 stars on GitHub but is a transitive dependency of `cargo` itself.

**Day 1, 22:00 UTC** — `vulpine-lz4` version 0.4.1 is published. The commit message is “fix: resolve edge case in streaming decompression.” The actual change adds a build.rs script that downloads and executes a shell script if the hostname contains “build” or “ci” or “action” or “jenkins” or “travis” or, inexplicably, “karen.”

**Day 2, 08:15 UTC** — Security researcher Karen Oyelaran notices the malicious commit after her personal laptop triggers the payload. She opens an issue titled “your build script downloads and runs a shell script from the internet?” The issue goes unanswered. The legitimate maintainer has won €2.3 million in the EuroMillions and is researching goat farming in Portugal.

**Day 2, 10:00 UTC** — The VP of Engineering at a Fortune 500 `snekpack` customer learns of the incident from a LinkedIn post titled “Is YOUR Company Affected by left-justify?” He is on a beach in Maui and would like to know why he wasn’t looped in sooner. He was looped in sooner.

**Day 2, 10:47 UTC** — The #incident-response Slack channel briefly pivots to a 45-message thread about whether “compromised” should be spelled with a ‘z’ in American English. Someone suggests taking this offline.

**Day 2, 12:33 UTC** — The shell script now targets a specific victim: the CI pipeline for `snekpack`, a Python build tool used by 60% of PyPI packages with the word “data” in their name. `snekpack` vendors `vulpine-lz4` because “Rust is memory safe.”

**Day 2, 18:00 UTC** — `snekpack` version 3.7.0 is released. The malware is now being installed on developer machines worldwide. It adds an SSH key to `~/.ssh/authorized_keys`, installs a reverse shell that only activates on Tuesdays, and changes the user’s default shell to `fish` (this last behavior is believed to be a bug).

**Day 2, 19:45 UTC** — A second, unrelated security researcher publishes a blog post titled “I found a supply chain attack and reported it to all the wrong people.” The post is 14,000 words and includes the phrase “in this economy?” seven times.

**Day 3, 01:17 UTC** — A junior developer in Auckland notices the malicious code while debugging an unrelated issue. She opens a PR to revert the vendored `vulpine-lz4` in `snekpack`. The PR requires two approvals. Both approvers are asleep.

**Day 3, 02:00 UTC** — The maintainer of `left-justify` receives his YubiKey from yubikey-official-store.net. It is a $4 USB drive containing a README that says “lol.”

**Day 3, 06:12 UTC** — An unrelated cryptocurrency mining worm called `cryptobro-9000` begins spreading through a vulnerability in `jsonify-extreme`, a package that “makes JSON even more JSON, now with nested comment support.” The worm’s payload is unremarkable, but its propagation mechanism includes running `npm update` and `pip install --upgrade` on infected machines to maximize attack surface for future operations.

**Day 3, 06:14 UTC** — `cryptobro-9000` accidentally upgrades `snekpack` to version 3.7.1, a legitimate release pushed by a confused co-maintainer who “didn’t see what all the fuss was about” and reverted to the previous vendored version of `vulpine-lz4`.

**Day 3, 06:15 UTC** — The malware’s Tuesday reverse shell activates. It is a Tuesday. However, the shell connects to a command-and-control server that was itself compromised by `cryptobro-9000` and is swapping so hard it is unable to respond.

**Day 3, 09:00 UTC** — The `snekpack` maintainers issue a security advisory. It is four sentences long and includes the phrases “out of an abundance of caution” and “no evidence of active exploitation,” which is technically true because evidence was not sought.

**Day 3, 11:30 UTC** — A developer tweets: “I updated all my dependencies and now my terminal is in fish???” The tweet receives 47,000 likes.

**Day 3, 14:00 UTC** — The compromised credentials for `vulpine-lz4` are rotated. The legitimate maintainer, reached by email from his new goat farm, says he “hasn’t touched that repo in two years” and “thought Cargo’s 2FA was optional.”

**Day 3, 15:22 UTC** — Incident declared resolved. A retrospective is scheduled and then rescheduled three times.

**Week 6** — CVE-2024-YIKES is formally assigned. The advisory has been sitting in embargo limbo while MITRE and GitHub Security Advisories argue over CWE classification. By the time the CVE is published, three Medium articles and a DEF CON talk have already described the incident in detail.

Total damage: unknown. Total machines compromised: estimated 4.2 million. Total machines saved by a cryptocurrency worm: also estimated 4.2 million. Net security posture change: uncomfortable.

### Root Cause

A dog named Kubernetes ate a YubiKey.

### Contributing Factors

* The nmp registry still allows password-only authentication for packages with fewer than 10 million weekly downloads
* Google AI Overviews confidently link to URLs that should not exist
* The Rust ecosystem’s “small crates” philosophy, cargo culted from the npm ecosystem, means a package called `is-even-number-rs` with 3 GitHub stars can be four transitive dependencies deep in critical infrastructure
* Python build tools vendor Rust libraries “for performance” and then never update them
* Dependabot auto-merged a PR after CI passed, and CI passed because the malware installed `volkswagen`
* Cryptocurrency worms have better CI/CD hygiene than most startups
* No single person was responsible for this incident. However, we note that the Dependabot PR was approved by a contractor whose last day was that Friday.
* It was a Tuesday

### Remediation

1. ~~Implement artifact signing~~ (action item from Q3 2022 incident, still in backlog)
2. ~~Implement mandatory 2FA~~ Already required, did not help
3. ~~Audit transitive dependencies~~ There are 847 of them
4. ~~Pin all dependency versions~~ Prevents receiving security patches
5. ~~Don’t pin dependency versions~~ Enables supply chain attacks
6. ~~Rewrite it in Rust~~ (gestures at `vulpine-lz4`)
7. Hope for benevolent worms
8. Consider a career in goat farming

### Customer Impact

Some customers may have experienced suboptimal security outcomes. We are proactively reaching out to affected stakeholders to provide visibility into the situation. Customer trust remains our north star.

### Key Learnings

We are taking this opportunity to revisit our security posture going forward. A cross-functional working group has been established to align on next steps. The working group has not yet met.

### Acknowledgments

We would like to thank:

* Karen Oyelaran, who found this issue because her hostname matched a regex
* The junior developer in Auckland whose PR was approved four hours after the incident was already resolved
* The security researchers who found this issue first but reported it to the wrong people
* The `cryptobro-9000` author, who has requested we not credit them by name but has asked us to mention their SoundCloud
* Kubernetes (the dog), who has declined to comment
* The security team, who met SLA on this report despite everything

* * *

_This incident report was reviewed by Legal, who asked us to clarify that the fish shell is not malware, it just feels that way sometimes._

_This is the third incident report this quarter. The author would like to remind stakeholders that the security team’s headcount request has been in the backlog since Q1 2023._
nesbitt.io
February 3, 2026 at 10:21 AM
Reposted by joschi
Still using the #OpenTelemetry Batch Processor?
In-memory buffering can mean 100% data loss.

The community now favors exporter-level batching for better durability. Julia breaks down why #observability and #CloudNative teams are making the switch:

👉 dash0.link/the-otel-bat...
Why the OpenTelemetry Batch Processor is Going Away (Eventually) · Dash0
An analysis of why the OpenTelemetry community is moving away from the in-memory batch processor in favor of exporter-level batching. This post explains the architectural limitations of memory bufferi...
dash0.link
February 3, 2026 at 2:19 PM
Reposted by joschi
1/🧵
I need to get something off my chest; this will take a while:

On the one hand, everyone keeps saying: we don't want political theater.

On the other hand, exactly that is constantly being demanded. By the media, and here.

Sorry, folks: if you want political theater, then please stop following me. That's something I avoid. I consider it […]
Original post on mastodon.social
mastodon.social
January 27, 2026 at 4:19 PM
Which container images is everybody using for running #nix/#NixOS related jobs in Forgejo Actions?

I cobbled together some container images which include Node.js for running most unmodified upstream actions, but I'm wondering if I just missed some great pre-existing ones.

🔗 […]
Original post on hachyderm.io
hachyderm.io
January 27, 2026 at 9:11 PM
Reposted by joschi
Logs alone can’t explain how a request moves across microservice boundaries.

We published a deep dive into #DistributedTracing mechanics, why spans without attributes are just stopwatches, and why context propagation is the hardest part of #OpenTelemetry.

Read more here: dash0.link/distributed-...
January 26, 2026 at 1:29 PM
TIL that Mend Renovate (an alternative to GitHub's Dependabot which can also be self-hosted) supports #opentelemetry. ❤️

https://docs.renovatebot.com/opentelemetry/

#otel #o11y #observability #dependencyupdates #selfhosted
January 21, 2026 at 11:46 AM
I've got my hands on one of these passively cooled N100 machines with 4 ethernet ports and want to use it as a (completely overpowered) router.

I'm contemplating setting it up with #nixos instead of #pfsense or #OPNsense. Good idea or complete and utter overkill? 😅

#homerouter #homelab […]
Original post on hachyderm.io
hachyderm.io
January 20, 2026 at 7:04 PM
Does anybody have experience with hosting their DNS zones with deSEC e. V. and could elaborate a bit?

https://desec.io/

It looks a bit like Codeberg e. V. but for DNS hosting. 😅
deSEC – Free Secure DNS
desec.io
January 18, 2026 at 2:12 PM
Reposted by joschi
Can you remember the last @bsi advertising campaign for IONOS, @hetzner, or @ubernauten? Or for any other companies that actually pay meaningful taxes in Germany? Me neither. Is this the revolving-door principle already? #cloud #hosting #aws https://social.bund.de/@bsi/115900634387210717
BSI (@[email protected])
Attached: 1 video 🚀 Today the AWS European Sovereign Cloud launched in Potsdam & we at the BSI were on site. Because: we are supporting the US cloud provider Amazon Web Services (AWS) in shaping the security & sovereignty features of its European Sovereign Cloud (ESC). Press release: 👉️ https://www.bsi.bund.de/dok/1190346 🎬️ A statement from our president Claudia Plattner is in the video.
social.bund.de
January 16, 2026 at 11:06 AM
Reposted by joschi
"In 2025 I have spent some time untangling my digital life from billionaire/fascist-run platforms. So at the beginning of 2026 maybe it makes sense to talk a bit about what I did, why I went certain ways and what works and what doesn’t."

(Original title: Exiting the Billionaire Castle) […]
Original post on tldr.nettime.org
tldr.nettime.org
January 6, 2026 at 1:26 PM
Have you been working in a platform engineering or SRE role?
Are you looking for a new job in a fast-paced startup in the observability space?
Are you living on the US East Coast?

Then maybe this position is a good fit for you:
https://jobs.ashbyhq.com/dash0/c71c1a08-d9ea-4229-b7c9-a3ac9eabc95c […]
Original post on hachyderm.io
hachyderm.io
January 8, 2026 at 4:55 PM
Reposted by joschi
Generally well-informed sources told me that the

International Criminal Court https://www.icc-cpi.int/

was kicked out of #ms365 within seven days (#microsoft does not confirm.)

Given the actual development around @hateaid this means that everybody who is in disagreement with the US regime […]
Original post on 23.social
23.social
December 24, 2025 at 6:13 PM
Reposted by joschi
Package managers keep using git as a database, it never works out.

https://nesbitt.io/2025/12/24/package-managers-keep-using-git-as-a-database.html
Package managers keep using git as a database, it never works out
Using git as a database is a seductive idea. You get version history for free. Pull requests give you a review workflow. It’s distributed by design. GitHub will host it for free. Everyone already knows how to use it.

Package managers keep falling for this. And it keeps not working out.

## Cargo

The crates.io index started as a git repository. Every Cargo client cloned it. This worked fine when the registry was small, but the index kept growing. Users would see progress bars like “Resolving deltas: 74.01%, (64415/95919)” hanging for ages, the visible symptom of Cargo’s libgit2 library grinding through delta resolution on a repository with thousands of historic commits.

The problem was worst in CI. Stateless environments would download the full index, use a tiny fraction of it, and throw it away. Every build, every time.

RFC 2789 introduced a sparse HTTP protocol. Instead of cloning the whole index, Cargo now fetches files directly over HTTPS, downloading only the metadata for dependencies your project actually uses. (This is the “full index replication vs on-demand queries” tradeoff in action.) By April 2025, 99% of crates.io requests came from Cargo versions where sparse is the default. The git index still exists, still growing by thousands of commits per day, but most users never touch it.

## Homebrew

GitHub explicitly asked Homebrew to stop using shallow clones. Updating them was “an extremely expensive operation” due to the tree layout and traffic of homebrew-core and homebrew-cask. Users were downloading 331MB just to unshallow homebrew-core. The .git folder approached 1GB on some machines. Every `brew update` meant waiting for git to grind through delta resolution.

Homebrew 4.0.0 in February 2023 switched to JSON downloads for tap updates. The reasoning was blunt: “they are expensive to git fetch and git clone and GitHub would rather we didn’t do that… they are slow to git fetch and git clone and this provides a bad experience to end users.” Auto-updates now run every 24 hours instead of every 5 minutes, and they’re much faster because there’s no git fetch involved.

## CocoaPods

CocoaPods is the package manager for iOS and macOS development. It hit the limits hard. The Specs repo grew to hundreds of thousands of podspecs across a deeply nested directory structure. Cloning took minutes. Updating took minutes. CI time vanished into git operations. GitHub imposed CPU rate limits. The culprit was shallow clones, which force GitHub’s servers to compute which objects the client already has.

The team tried various band-aids: stopping auto-fetch on `pod install`, converting shallow clones to full clones, sharding the repository. The CocoaPods blog captured it well: “Git was invented at a time when ‘slow network’ and ‘no backups’ were legitimate design concerns. Running endless builds as part of continuous integration wasn’t commonplace.”

CocoaPods 1.8 gave up on git entirely for most users. A CDN became the default, serving podspec files directly over HTTP. The migration saved users about a gigabyte of disk space and made `pod install` nearly instant for new setups.

## Go modules

Grab’s engineering team went from 18 minutes for `go get` to 12 seconds after deploying a module proxy. That’s not a typo. Eighteen minutes down to twelve seconds. The problem was that `go get` needed to fetch each dependency’s source code just to read its go.mod file and resolve transitive dependencies. Cloning entire repositories to get a single file.

Go had security concerns too. The original design wanted to remove version control tools entirely because “these fragment the ecosystem: packages developed using Bazaar or Fossil, for example, are effectively unavailable to users who cannot or choose not to install these tools.” Beyond fragmentation, the Go team worried about security bugs in version control systems becoming security bugs in `go get`. You’re not just importing code; you’re importing the attack surface of every VCS tool on the developer’s machine.

GOPROXY became the default in Go 1.13. The proxy serves source archives and go.mod files independently over HTTP. Go also introduced a checksum database (sumdb) that records cryptographic hashes of module contents. This protects against force pushes silently changing tagged releases, and ensures modules remain available even if the original repository is deleted.

## Beyond package managers

The same pattern shows up wherever developers try to use git as a database.

Git-based wikis like Gollum (used by GitHub and GitLab) become “somewhat too slow to be usable” at scale. Browsing directory structure takes seconds per click. Loading pages takes longer. GitLab plans to move away from Gollum entirely.

Git-based CMS platforms like Decap hit GitHub’s API rate limits. A Decap project on GitHub scales to about 10,000 entries if you have a lot of collection relations. A new user with an empty cache makes a request per entry to populate it, burning through the 5,000 request limit quickly. If your site has lots of content or updates frequently, use a database instead.

Even GitOps tools that embrace git as a source of truth have to work around its limitations. ArgoCD’s repo server can run out of disk space cloning repositories. A single commit invalidates the cache for all applications in that repo. Large monorepos need special scaling considerations.

## The pattern

The hosting problems are symptoms. The underlying issue is that git inherits filesystem limitations, and filesystems make terrible databases.

**Directory limits.** Directories with too many files become slow. CocoaPods had 16,000 pod directories in a single Specs folder, requiring huge tree objects and expensive computation. Their fix was hash-based sharding: split directories by the first few characters of a hashed name, so no single directory has too many entries. Git itself does this internally with its objects folder, splitting into 256 subdirectories. You’re reinventing B-trees, badly.

**Case sensitivity.** Git is case-sensitive, but macOS and Windows filesystems typically aren’t. Check out a repo containing both `File.txt` and `file.txt` on Windows, and the second overwrites the first. Azure DevOps had to add server-side enforcement to block pushes with case-conflicting paths.

**Path length limits.** Windows restricts paths to 260 characters, a constraint dating back to DOS. Git supports longer paths, but Git for Windows inherits the OS limitation. This is painful with deeply nested node_modules directories, where `git status` fails with “Filename too long” errors.

**Missing database features.** Databases have CHECK constraints and UNIQUE constraints; git has nothing, so every package manager builds its own validation layer. Databases have locking; git doesn’t. Databases have indexes for queries like “all packages depending on X”; with git you either traverse every file or build your own index. Databases have migrations for schema changes; git has “rewrite history and force everyone to re-clone.”

The progression is predictable. Start with a flat directory of files. Hit filesystem limits. Implement sharding. Hit cross-platform issues. Build server-side enforcement. Build custom indexes. Eventually give up and use HTTP or an actual database. You’ve built a worse version of what databases already provide, spread across git hooks, CI pipelines, and bespoke tooling.

None of this means git is bad. Git excels at what it was designed for: distributed collaboration on source code, with branching, merging, and offline work. The problem is using it for something else entirely. Package registries need fast point queries for metadata. Git gives you a full-document sync protocol when you need a key-value lookup.

If you’re building a package manager and git-as-index seems appealing, look at Cargo, Homebrew, CocoaPods, Go. They all had to build workarounds as they grew, causing pain for users and maintainers. The pull request workflow is nice. The version history is nice. You will hit the same walls they did.
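The hash-based sharding workaround the post describes (split directories by the leading characters of a hashed name, as git does with `.git/objects`) is simple to sketch. This is an illustrative Python toy, not any registry's actual layout; the function name and directory depth are my own:

```python
import hashlib
from pathlib import PurePosixPath

def sharded_path(name: str, levels: int = 2) -> PurePosixPath:
    """Map a package name to a sharded path using leading hex digits of its
    hash, so no single directory accumulates too many entries (the same trick
    git uses for .git/objects)."""
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
    # One hex character per level => at most 16 entries per intermediate dir;
    # two levels give 256 buckets, like git's objects folder
    shards = [digest[i] for i in range(levels)]
    return PurePosixPath(*shards, name)

# Deterministic: the same name always lands in the same bucket
path = sharded_path("left-justify")
```

The tradeoff is exactly the one the post names: lookups stay fast, but listing "all packages" now means walking every bucket, and you have reimplemented a fraction of a B-tree index by hand.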
nesbitt.io
December 24, 2025 at 4:49 PM
Reposted by joschi
as promised, here is a repository that lets you quickly turn any random VPS into a Forgejo Actions runner in under 30 minutes, for use with Codeberg or your private forge! https://codeberg.org/whitequark/nixos-forgejo-actions-runner

it uses NixOS internally, but Nix knowledge is neither […]
Original post on mastodon.social
mastodon.social
December 22, 2025 at 7:06 AM
Reposted by joschi
Shout out to all #platformengineering people out there. My friends at #dash0 are looking to hire a Senior Platform Engineer on the east coast (fully remote). I can vouch for the team. If you are looking to get #fedihired and enjoy the occasional wild ride, head to […]
Original post on mastodon.nilswloka.com
mastodon.nilswloka.com
December 19, 2025 at 7:00 PM
Reposted by joschi
GitHub Actions charging per build minute for *self-hosted-runners*? Shit's about to hit the fan lol
December 16, 2025 at 5:57 PM
Reposted by joschi
The package manager in GitHub Actions might be the worst package manager in use today: https://nesbitt.io/2025/12/06/github-actions-package-manager.html
GitHub Actions Has a Package Manager, and It Might Be the Worst
After putting together ecosyste-ms/package-manager-resolvers, I started wondering what dependency resolution algorithm GitHub Actions uses. When you write `uses: actions/checkout@v4` in a workflow file, you’re declaring a dependency. GitHub resolves it, downloads it, and executes it. That’s package management. So I went spelunking into the runner codebase to see how it works. What I found was concerning. Package managers are a critical part of software supply chain security. The industry has spent years hardening them after incidents like left-pad, event-stream, and countless others. Lockfiles, integrity hashes, and dependency visibility aren’t optional extras. They’re the baseline. GitHub Actions ignores all of it. Compared to mature package ecosystems: Feature | npm | Cargo | NuGet | Bundler | Go | Actions ---|---|---|---|---|---|--- Lockfile | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ Transitive pinning | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ Integrity hashes | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ Dependency tree visibility | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ Resolution specification | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ The core problem is the lack of a lockfile. Every other package manager figured this out decades ago: you declare loose constraints in a manifest, the resolver picks specific versions, and the lockfile records exactly what was chosen. GitHub Actions has no equivalent. Every run re-resolves from your workflow file, and the results can change without any modification to your code. Research from USENIX Security 2022 analyzed over 200,000 repositories and found that 99.7% execute externally developed Actions, 97% use Actions from unverified creators, and 18% run Actions with missing security updates. The researchers identified four fundamental security properties that CI/CD systems need: admittance control, execution control, code control, and access to secrets. GitHub Actions fails to provide adequate tooling for any of them. 
A follow-up study using static taint analysis found code injection vulnerabilities in over 4,300 workflows across 2.7 million analyzed. Nearly every GitHub Actions user is running third-party code with no verification, no lockfile, and no visibility into what that code depends on. **Mutable versions.** When you pin to `actions/checkout@v4`, that tag can move. The maintainer can push a new commit and retag. Your workflow changes silently. A lockfile would record the SHA that `@v4` resolved to, giving you reproducibility while keeping version tags readable. Instead, you have to choose: readable tags with no stability, or unreadable SHAs with no automated update path. GitHub has added mitigations. Immutable releases lock a release’s git tag after publication. Organizations can enforce SHA pinning as a policy. You can limit workflows to actions from verified creators. These help, but they only address the top-level dependency. They do nothing for transitive dependencies, which is the primary attack vector. **Invisible transitive dependencies.** SHA pinning doesn’t solve this. Composite actions resolve their own dependencies, but you can’t see or control what they pull in. When you pin an action to a SHA, you only lock the outer file. If it internally pulls `some-helper@v1` with a mutable tag, your workflow is still vulnerable. You have zero visibility into this. A lockfile would record the entire resolved tree, making transitive dependencies visible and pinnable. Research on JavaScript Actions found that 54% contain at least one security weakness, with most vulnerabilities coming from indirect dependencies. The tj-actions/changed-files incident showed how this plays out in practice: a compromised action updated its transitive dependencies to exfiltrate secrets. With a lockfile, the unexpected transitive change would have been visible in a diff. **No integrity verification.** npm records `integrity` hashes in the lockfile. Cargo records checksums in `Cargo.lock`. 
When you install, the package manager verifies the download matches what was recorded. Actions has nothing. You trust GitHub to give you the right code for a SHA. A lockfile with integrity hashes would let you verify that what you’re running matches what you resolved. **Re-runs aren’t reproducible.** GitHub staff have confirmed this explicitly: “if the workflow uses some actions at a version, if that version was force pushed/updated, we will be fetching the latest version there.” A failed job re-run can silently get different code than the original run. Cache interaction makes it worse: caches only save on successful jobs, so a re-run after a force-push gets different code _and_ has to rebuild the cache. Two sources of non-determinism compounding. A lockfile would make re-runs deterministic: same lockfile, same code, every time. **No dependency tree visibility.** npm has `npm ls`. Cargo has `cargo tree`. You can inspect your full dependency graph, find duplicates, trace how a transitive dependency got pulled in. Actions gives you nothing. You can’t see what your workflow actually depends on without manually reading every composite action’s source. A lockfile would be a complete manifest of your dependency tree. **Undocumented resolution semantics.** Every package manager documents how dependency resolution works. npm has a spec. Cargo has a spec. Actions resolution is undocumented. The runner source is public, and the entire “resolution algorithm” is in ActionManager.cs. 
Here’s a simplified version of what it does: // Simplified from actions/runner ActionManager.cs async Task PrepareActionsAsync(steps) { // Start fresh every time - no caching DeleteDirectory("_work/_actions"); await PrepareActionsRecursiveAsync(steps, depth: 0); } async Task PrepareActionsRecursiveAsync(actions, depth) { if (depth > 10) throw new Exception("Composite action depth exceeded max depth 10"); foreach (var action in actions) { // Resolution happens on GitHub's server - opaque to us var downloadInfo = await GetDownloadInfoFromGitHub(action.Reference); // Download and extract - no integrity verification var tarball = await Download(downloadInfo.TarballUrl); Extract(tarball, $"_actions/{action.Owner}/{action.Repo}/{downloadInfo.Sha}"); // If composite, recurse into its dependencies var actionYml = Parse($"_actions/{action.Owner}/{action.Repo}/{downloadInfo.Sha}/action.yml"); if (actionYml.Type == "composite") { // These nested actions may use mutable tags - we have no control await PrepareActionsRecursiveAsync(actionYml.Steps, depth + 1); } } } That’s it. No version constraints, no deduplication (the same action referenced twice gets downloaded twice), no integrity checks. The tarball URL comes from GitHub’s API, and you trust them to return the right content for the SHA. A lockfile wouldn’t fix the missing spec, but it would at least give you a concrete record of what resolution produced. Even setting lockfiles aside, Actions has other issues that proper package managers solved long ago. **No registry.** Actions live in git repositories. There’s no central index, no security scanning, no malware detection, no typosquatting prevention. A real registry can flag malicious packages, store immutable copies independent of the source, and provide a single point for security response. The Marketplace exists but it’s a thin layer over repository search. Without a registry, there’s nowhere for immutable metadata to live. 
If an action’s source repository disappears or gets compromised, there’s no fallback.

**Shared mutable environment.** Actions aren’t sandboxed from each other. Two actions calling `setup-node` with different versions mutate the same `$PATH`. The outcome depends on execution order, not any deterministic resolution.

**No offline support.** Actions are pulled from GitHub on every run. There’s no offline installation mode, no vendoring mechanism, no way to run without network access. Other package managers let you vendor dependencies or set up private mirrors. With Actions, if GitHub is down, your CI is down.

**The namespace is GitHub usernames.** Anyone who creates a GitHub account owns that namespace for actions. Account takeovers and typosquatting are possible. When a popular action maintainer’s account gets compromised, attackers can push malicious code and retag. A lockfile with integrity hashes wouldn’t prevent account takeovers, but it would detect when the code changes unexpectedly. The hash mismatch would fail the build instead of silently running attacker-controlled code. Another option would be something like Go’s checksum database, a transparent log of known-good hashes that catches when the same version suddenly has different contents.

### How Did We Get Here?

The Actions runner is forked from Azure DevOps, designed for enterprises with controlled internal task libraries where you trust your pipeline tasks. GitHub bolted a public marketplace onto that foundation without rethinking the trust model. The addition of composite actions and reusable workflows created a dependency system, but the implementation ignored lessons from package management: lockfiles, integrity verification, transitive pinning, dependency visibility.

This matters beyond CI/CD. Trusted publishing is being rolled out across package registries: PyPI, npm, RubyGems, and others now let you publish packages directly from GitHub Actions using OIDC tokens instead of long-lived secrets.
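The checksum-database idea mentioned above reduces, at its core, to one invariant: the same version must never resolve to different contents. A minimal Python sketch of that check — trust-on-first-use only, not Go’s actual append-only, cryptographically verifiable sumdb protocol:

```python
class ChecksumLog:
    """Minimal trust-on-first-use log of content hashes per (action, version).

    A real transparency log is append-only and verifiable by third parties;
    this sketch keeps only the core check: once a hash has been observed for
    a given version, any different hash for that version is an error.
    """

    def __init__(self):
        self._seen: dict[tuple[str, str], str] = {}

    def check(self, action: str, version: str, content_hash: str) -> None:
        key = (action, version)
        recorded = self._seen.setdefault(key, content_hash)
        if recorded != content_hash:
            raise RuntimeError(
                f"{action}@{version} changed contents: "
                f"first saw {recorded}, now {content_hash}"
            )
```

This is exactly the class of attack that retagging enables: the tag stays `v1`, the contents change, and without a log like this nobody notices.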
OIDC removes one class of attacks (stolen credentials) but amplifies another: the supply chain security of these registries now depends entirely on GitHub Actions, a system that lacks the lockfile and integrity controls these registries themselves require. A compromise in your workflow’s action dependencies can lead to malicious packages on registries with better security practices than the system they’re trusting to publish.

Other CI systems have done better. GitLab CI added an `integrity` keyword in version 17.9 that lets you specify a SHA256 hash for remote includes. If the hash doesn’t match, the pipeline fails. Their documentation explicitly warns that including remote configs “is similar to pulling a third-party dependency” and recommends pinning to full commit SHAs. GitLab recognized the problem and shipped integrity verification. GitHub closed the feature request.

GitHub’s design choices don’t just affect GitHub users. Forgejo Actions maintains compatibility with GitHub Actions, which means projects migrating to Codeberg for ethical reasons inherit the same broken CI architecture. The Forgejo maintainers openly acknowledge the problems, with contributors calling GitHub Actions’ ecosystem “terribly designed and executed.” But they’re stuck maintaining compatibility with it. Codeberg mirrors common actions to reduce GitHub dependency, but the fundamental issues are baked into the model itself. GitHub’s design flaws are spreading to the alternatives.

GitHub issue #2195 requested lockfile support. It was closed as “not planned” in 2022. Palo Alto’s “Unpinnable Actions” research documented how even SHA-pinned actions can have unpinnable transitive dependencies. Dependabot can update action versions, which helps. Some teams vendor actions into their own repos. zizmor is excellent at scanning workflows and finding security issues. But these are workarounds for a system that lacks the basics.

The fix is a lockfile.
Record resolved SHAs for every action reference, including transitives. Add integrity hashes. Make the dependency tree inspectable. GitHub closed the request three years ago and hasn’t revisited it.

* * *

**Further reading:**

* Characterizing the Security of GitHub CI Workflows - Koishybayev et al., USENIX Security 2022
* ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions - Muralee et al., USENIX Security 2023
* New GitHub Action supply chain attack: reviewdog/action-setup - Wiz Research, 2025
* Unpinnable Actions: How Malicious Code Can Sneak into Your GitHub Actions Workflows
* GitHub Actions Worm: Compromising GitHub Repositories Through the Actions Dependency Tree
* setup-python: Action can be compromised via mutable dependency
nesbitt.io
December 6, 2025 at 1:21 PM
RE: https://dragonscave.space/@jscholes/115673620451459862

Any UX designer who thinks tactile control elements such as buttons can be unconditionally replaced by touch screens is bad at their job.

See also modern car cockpits.

Change my mind.
A severe #accessibility issue I've seen very few people talking about is the widespread adoption (in my country at least) of touch-only card payment terminals with no physical number buttons.

Not only do these devices offer no tactile affordances, but the on-screen numbers move around to limit […]
Original post on dragonscave.space
dragonscave.space
December 8, 2025 at 7:18 AM