John’s Internet House
johnwhiles.com.web.brid.gy
John Whiles is a person from England. This website contains their blog, music, and other things. Hopefully it will one day be the sort of website that people will look at and say, “They used to make good websites back in the day. What happened?”
# Why and how I rewrote these Obsidian plugins

If you’re plugged into “The JavaScript World” then you’ve probably been hearing a lot about supply chain attacks recently. The NPM ecosystem has seen a self-replicating worm spread by malicious post-install scripts - and in response a lot of people are becoming more worried about their dependency on... dependencies.

In an attempt to capitalise on this rare surge of interest in software security, the Obsidian team wrote an ill-judged blog post about how they’re delivering a more secure product by having fewer dependencies. In the post they explain how Obsidian is built to minimise third-party dependencies, and how by being particular, and not very timely, in how they update those dependencies they can further reduce supply chain attack risks. Unfortunately for the Obsidian team, the main result of this blog post has been to draw people’s attention to some long-standing concerns about their plugin system.

## What is wrong with Obsidian’s plugin system?

So, Obsidian lets you write “plugins” in JavaScript. These plugins get access to a bunch of Obsidian APIs so you can build weird new features into what is by default a fairly bare-bones app. This is catnip for people who like to spend more time configuring their tools than doing things.

In order for these plugins to be sufficiently powerful, they need to be able to do anything that the Obsidian app itself can do. Which means, for most users, that plugins can read any file on the computer they run on, make requests to random web servers, and delete things at random. They can more or less do anything. If you install a plugin made by a malicious developer, they can easily slurp up any information in your Obsidian vault, and cause a lot of other damage.

Obsidian have tried to mitigate these risks by locking plugins behind a scary opt-in, requiring plugins to be reviewed and approved before they appear in their ‘marketplace’, and by requiring source code to be open.
But once a plugin is approved, its author is able to push new versions at will - and because everyone now minifies and transpiles the code they write, the built code that gets installed can easily _not_ match the published source code without anyone realising. There are thousands of plugins in Obsidian’s marketplace, many of which are unmaintained but still popular. It would be relatively easy for an attacker to take over an existing plugin, and then push malicious code to all the users of that plugin. We’ve seen this happen in like every other similar system, so chances are that it will at some point happen in Obsidian.

## Obsidian isn’t especially dangerous

I want to stress that I don’t think Obsidian is doing anything unusually bad with their plugin system. Many other applications have similar systems with the same risk profile. The CEO of Obsidian correctly notes that “This is not unique to Obsidian. VS Code (and Cursor) works the same way despite Microsoft being a multi-trillion dollar company.” Obsidian presents a different threat model for most users though, since it contains a lot of personal, potentially sensitive information. Whereas their Visual Studio Code just contains company secrets.

## How I’ve tried to mitigate the risk from Obsidian’s plugins

So, circuitously, we approach the point. I like Obsidian, but am genuinely concerned about the risks in the plugin ecosystem. Unfortunately the amount of noise around software supply chain attacks has materially increased the risk that someone will successfully attack the ecosystem. The advent of AI tools has also made it easier to automate the process of finding ways to attack the ecosystem, and the amount of attention means that someone is certainly trying.
Therefore I’ve decided that I will not update any community plugins that I am currently using; I will only use new plugins if they are made by myself or by the core Obsidian team; and I will begin replacing the plugins that I do use with my own re-implementations.

Now I want to tell you about how it was easy, and also fun, to rewrite a bunch of plugins! Obsidian plugins are straightforward to build - but if you look at the source code of most popular plugins you will be overwhelmed. This is because widely used plugins need to consider lots of different use cases, and provide lots of options for their many users. A plugin might, for example, let people select their preferred date format, or link format, or enable and disable features in a granular manner. Plugins also tend to bundle various tangentially related functionality together, and it’s common to install a plugin in order to use a tiny subset of its feature set. So I’ve been rewriting the plugins that I’ve used in the most barebones way possible - avoiding dependencies other than build tools, and Obsidian itself.

## Here’s what I’ve done

### Homepage -> Johnpage

I was using a plugin called Homepage, which lets you set a note as a “homepage”. You can then open that page with a keyboard shortcut, or when the app opens, or when you open a new tab, or whatever. The Homepage plugin also has a bunch of features that I frankly don’t understand, and is hundreds of lines of code. My version _only_ lets you set a file as a homepage, and then bind a key to open it. Nothing else. Most of the code is there to create an input box so you can select the file that should open.

### Nldates -> Johnldates

Nldates is a nice plugin which lets you write `@today` or `@next friday` or whatever and have that turned into a link to a daily note. It also has a bunch of weird other features that I have no interest in, like custom URI actions and a date picker.
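For a sense of how little machinery a stripped-down replacement needs, here is a minimal sketch of a trie over a handful of date keywords. This is illustrative code of my own, not the plugin’s actual source, and it ignores the `last`/`next` prefixes for brevity:

```typescript
// Illustrative sketch only - not the actual plugin source.
type TrieNode = {
  children: Map<string, TrieNode>;
  terminal: boolean; // true when the path to this node spells a whole keyword
};

// Insert each keyword into the trie, one character per level.
function buildTrie(words: string[]): TrieNode {
  const root: TrieNode = { children: new Map(), terminal: false };
  for (const word of words) {
    let node = root;
    for (const ch of word.toLowerCase()) {
      if (!node.children.has(ch)) {
        node.children.set(ch, { children: new Map(), terminal: false });
      }
      node = node.children.get(ch)!;
    }
    node.terminal = true;
  }
  return root;
}

// Walk the trie along `input` and return the longest keyword it starts with.
function longestMatch(root: TrieNode, input: string): string | null {
  const lower = input.toLowerCase();
  let node = root;
  let best: string | null = null;
  for (let i = 0; i < lower.length; i++) {
    const next = node.children.get(lower[i]);
    if (!next) break;
    node = next;
    if (node.terminal) best = input.slice(0, i + 1);
  }
  return best;
}

const dates = buildTrie(["today", "tomorrow", "yesterday", "saturday"]);
// longestMatch(dates, "Tomorrow at noon") → "Tomorrow"
```

Adding a new keyword then means appending to a word list rather than editing a regex.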
It also pulls in Chrono to actually parse the dates, which weighs in at a sweet 154kB when minified! I just wanted to be able to write `@Today` or `@Saturday` or similar, and so I figured that I could replace the Chrono library with something simpler. Like a series of if statements. In the end I started by building this somewhat nightmarish regex, which handles the subset of natural language dates that I actually cared about writing:

```javascript
const regex = /(Today|Yesterday|Tomorrow)|(last|next)?\s?(Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)/
```

It’s ugly and stupid - but with some supporting code, it worked perfectly for my particular use case. And because I’m only building these plugins for myself, that’s the only use case I need to think about. However, after a few days I felt a bit unsatisfied with this approach. I realised that if I wanted to handle other sorts of input, then I’d need to _update a Regex_. So instead I decided to use a trie. This was really fun, because I got to actually write and use a data structure for the first time since I last prepared for a job interview.

### Obsidian Git -> a cronjob

I was using this plugin to sync my vault into a git repo on a regular basis. I’m now doing exactly the same thing with a cronjob. YOLO.

## What’s left?

I have one theme, and two community plugins, that I didn’t write. The theme is “Minimal” - made by the CEO of Obsidian. Hopefully this one should remain safe. Ditto for “Minimal Theme Settings” - a plugin he made to let you toggle elements of that theme. The final plugin is Style Settings - which I use to further customise the Minimal theme. It was last updated a year ago.

So for now I plan to leave these plugins installed and never update them. Alternatively, I could probably remove them and simply use the default UI. Which would be fine.

Oh, and I’ll probably have to think about my neovim config, since it’s susceptible to all the same threats as Obsidian.
And then also think about every other program on my computer. :(

Anyway. A lot of the time we assume that to do something on a computer, we need to rely on someone else to do it for us. But it’s fun and empowering to make your own things.

### Further reading

Thread about plugin security from 2020
An open letter to the Obsidian team
CEO of Obsidian responds to plugin concerns
johnwhiles.com
October 30, 2025 at 10:35 AM
# AI slows down open source developers. Peter Naur can teach us why.

Metr recently published a paper about the impact AI tools have on open-source developer productivity1. They show that when open source developers working in codebases that they are deeply familiar with use AI tools to complete a task, they take longer to complete that task compared to tasks where they are barred from using AI tools. Interestingly, the developers predict that AI will make them faster, and continue to believe that it did make them faster, even after completing the task slower than they otherwise would have!

> When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

We can't generalise these results to all software developers. The developers in this study are a very particular sort of developer, working on very particular projects: they are experienced open source developers, working on their own projects. This study tells us that the current suite of AI tools appears to slow such developers down - but it doesn't mean that we can assume the same applies to other developers. For example, we might expect that corporate drones working on next.js apps that were mostly built by other people who've long since left the company (me) see huge productivity improvements!

One thing we can also do is theorise about why these particular open source developers were slowed down by tools that promise to speed them up. I'm going to focus in particular on why they were slowed down, not on the gap between perceived and real performance.
The inability of developers to tell if a tool sped them up or slowed them down is fascinating in itself. It probably applies to many other forms of human endeavour, and explains things as varied as why so many people think that AI has made them 10 times more productive, why I continue to use Vim, and why people drive in London. But I don't have any particular thoughts about why this gap arises. I do have an opinion about why they are slowed down.

## So why are they slowed down?

A while ago I wrote, somewhat tangentially, about an old paper by Peter Naur called Programming as Theory Building. That paper states:

> programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand

That is to say, the real product when we write software is our mental model of the program we've created. This model is what allowed us to build the software, and in future it is what allows us to understand the system, diagnose problems within it, and work on it effectively. If you agree with this theory, which I do, then it explains things like why everyone hates legacy code, why small teams can outperform larger ones, why outsourcing generally goes badly, etc.

We know that the programmers in Metr's study are all people with extremely well developed mental models of the projects they work on. And we also know that the LLMs they used had no real access to those mental models. The developers could provide chunks of that mental model to their AI tools - but doing so is a slow and lossy process that will never truly capture the theory of the program that exists in their minds. By offloading their software development work to an LLM, they hampered their unique ability to work on their codebases effectively.

Think of a time that you've tried to delegate a simple task to someone else, say putting a baby to bed.
You can write down what you think are unambiguous instructions - "give the baby milk, put it to bed, if it cries _do not_ respond" - but you will find that nine times out of ten, when you get home, the person following those instructions will have done the exact opposite of what you intended. Maybe they'll have gotten the crying baby out of bed and taken it on a walk to see some frogs.

The mental models with which we understand the world are incredibly rich, to the extent that even the simplest of them take an incredible amount of effort to transfer to another person. What's more, that transfer can never be totally successful, and it's very hard to determine how successful it has been until we run into problems caused by a lack of shared understanding. These problems are what allow us to notice a mismatch, and mutually adapt our mental models to perform better in future. When you are limited to transferring a mental model through text, to an entity that will never challenge you or ask clarifying questions, which can't really learn, and which cannot treat one statement as more important than any other - well, the task becomes essentially impossible.

This is why AI coding tools, as they exist today, will generally slow someone down if they know what they are doing and are working on a project that they understand.

## Should I ban LLMs at my workplace?

Well, maybe not. In the previous paragraph I wrote that AI tools will slow down someone who "knows what they are doing, and who is working on a project they understand" - does this describe the average software developer in industry? I doubt it. Does it describe the software developers in your workplace? It's common for engineers to end up working on projects of which they don't have an accurate mental model. Projects built by people who have long since left the company for pastures new.
It's equally common for developers to work in environments where little value is placed on understanding systems, but a lot of value is placed on quickly delivering changes that mostly work. In this context, I think that AI tools have more of an advantage. They can ingest the unfamiliar codebase faster than any human can, and can often generate changes that will essentially work. So if we take this narrow and short termed view of productivity and say that it is simply time to produce business value - then yes I think that an LLM can make developers more productive. I can't prove it - not having any data - but I'd love if someone did do this study. If there are no takers then I might try experimenting on myself. But there is a problem with using AI tools in this context. ## What about building mental models Okay, so if you don't have a mental model of a program, then maybe an LLM could improve your productivity. However, we agreed earlier that the main purpose of writing software is to build a mental model. If we outsource our work to the LLM are we still able to effectively build the mental model? I doubt it2. So should you avoid using these tools? Maybe. If you expect to work on a project long term, want to truly understand it, and wish to be empowered to make changes effectively then I think you should just write some code yourself3. If on the other hand you are just slopping out slop at the slop factory, then install cursor4 and crack on - yolo. * * * 1 It's a really fabulous study, and I strongly suggest reading at least the summary. 2 One of the commonly suggested uses of Claude Code et al is that you can use them to quickly onboard into new projects by asking questions about that project. Does that help us build a mental model. Maybe yes! Does generating code 10 times faster than a normal developer lead to a strong mental model of the system that is being created? Almost certainly not. 
3 None of this is to say that there couldn’t be AI tools which meaningfully speed up developers who have a mental model of their projects, or which help them build those mental models. But the current suite of tools doesn’t seem to be heading in that direction. It’s possible that if models improve then we might get to a point where there’s no need for any human to ever hold a mental model of a software artifact. But we’re certainly not there yet.

4 Don't install cursor, it sucks. Use Claude Code like an adult.
johnwhiles.com
July 21, 2025 at 9:40 AM