Louis Maddox
@permutans.bsky.social
Combinatorially curious https://spin.systems
Also, because it's so fast, I'm curious (once I have some non-trivial idea) what other datasets (besides code, again too obvious) might have something useful to mine. The last IRL demo I saw in the space was news digging; maybe a "year in review" type one...
December 18, 2025 at 12:55 PM
I like this analogy because, more than merely inhibit or slow, frictive processes *destabilise* the 'developer velocity'—which is more of a psychical property—a momentum taking into account working memory and the limited lifetime of both mental caching + [literal] virtual artifacts in machine memory
December 18, 2025 at 11:58 AM
nope, embedding model inference went from sub-second on CUDA to 8s (either initialisation or slowdown), I'm out… the subgraph profile was empty, which I think means it didn't make good use of it [after running the conversion on the non-quantised ONNX model]
December 16, 2025 at 11:25 PM
All-MiniLM-L6-v2 took longer but was surprisingly accurate (99.9%) on the doc half matching task – only 1 mistake, matching the Python 3.13 Release Schedule PEP to the 3.12 one
December 16, 2025 at 2:00 PM
Confirmed it's useful for document retrieval (following the blog post's example of matching document halves), correctly matching 97% of the Python PEP corpus

The failures look justifiable, e.g.
8102 (2021 Steering Council Election) ⇒ 8103 (the 2022 one)
361 (2.6 Release Schedule) ⇒ 373 (the 2.7 one)
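The half-matching eval described above can be sketched as: split each document into two halves, embed every half, and check whether each first half's nearest neighbour (by cosine similarity) among all second halves belongs to the same document. A minimal self-contained sketch, with a toy bag-of-words embedder standing in for a real model like all-MiniLM-L6-v2 (swap in actual embeddings to reproduce the accuracies quoted):

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" so the sketch is self-contained;
    # replace with a real model (e.g. all-MiniLM-L6-v2) for real use.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def half_match_accuracy(docs: list[str]) -> float:
    # Split each doc in half; score how often the best-matching
    # second half comes from the same doc as the first half.
    halves = []
    for doc in docs:
        words = doc.split()
        mid = len(words) // 2
        halves.append((" ".join(words[:mid]), " ".join(words[mid:])))
    firsts = [embed(first) for first, _ in halves]
    seconds = [embed(second) for _, second in halves]
    correct = 0
    for i, f in enumerate(firsts):
        best = max(range(len(seconds)), key=lambda j: cosine(f, seconds[j]))
        correct += best == i
    return correct / len(docs)
```

The justifiable failures above (adjacent-year elections, adjacent release schedules) are exactly the cases where two documents' halves share most of their vocabulary.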
December 16, 2025 at 1:45 PM
I was curious to try this out myself so I adapted polars-fastembed to run Luxical One, and no, as advertised, it's not useful for retrieval [for search], but it runs in 1.8s an operation that takes 30s with all-MiniLM-L6-v2 on CPU (20s on GPU), so approx. 0.5ms per 1k tokens! github.com/lmmx/polars-...
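Per-operation timings like those quoted can be taken with a minimal best-of-N harness (a hypothetical sketch: `embed_batch` stands in for whichever backend is under test, e.g. Luxical One via polars-fastembed or all-MiniLM-L6-v2; it is not an actual polars-fastembed API):

```python
import time


def time_embed(embed_batch, texts, repeats=3):
    # Best-of-N wall-clock time for one batch embedding call.
    # Taking the minimum over repeats discounts one-off costs
    # like model initialisation, which can dominate a single run.
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        embed_batch(texts)
        best = min(best, time.perf_counter() - t0)
    return best

# Dividing the best time by the corpus token count gives the
# ms-per-1k-tokens figure: e.g. 1.8 s at ~0.5 ms per 1k tokens
# implies a corpus on the order of 3.6M tokens.
```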
December 16, 2025 at 10:41 AM