Louis Maddox
@permutans.bsky.social
Combinatorially curious https://spin.systems
Pinned
My new library and CLI for precise file patching is up!

textum docs.rs/textum/lates...

You give a target to delete/replace/insert at, and whether to include/exclude/extend the match boundary

Target can be specified as:
💬 String 🧩 Regex 📏 Line/Char/Byte # 📐 Position (row, col)

🔮 tree-sitter AST 🔜
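To illustrate the idea (this is a toy sketch in plain Python, not textum's actual API — the `patch` function and its `mode` parameter here are hypothetical):

```python
import re

def patch(text: str, target: str, replacement: str, mode: str = "exclude") -> str:
    """Toy sketch of target-based patching (NOT textum's real API).

    mode="include": the matched span itself is replaced
    mode="exclude": the match is kept and the replacement inserted after it
    """
    m = re.search(target, text)
    if m is None:
        return text
    if mode == "include":
        return text[: m.start()] + replacement + text[m.end() :]
    return text[: m.end()] + replacement + text[m.end() :]

print(patch("hello world", r"hello", " there,"))  # hello there, world
```

The real library also supports line/char/byte numbers and (row, col) positions as targets, per the post above.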
*removes the mother*
"Yeast science is dead"
December 18, 2025 at 12:59 PM
Luxical One being no use for semantic search with a query fragment could be a blessing to think outside the box... For the Python PEP corpus Claude suggests grouping paragraphs into concepts, identifying 'orphan' PEPs, concept evolution (e.g. type hints), clustering (outliers being unique proposals)
December 18, 2025 at 12:53 PM
There's a sort of homeostatic withdrawal to frictive development processes wherein, rather than actively noping out of a process [reactance], a developer will simply let the plate spin slightly too slowly to be kept in motion and ultimately fall off, in a way that gives plausible deniability of intent
December 18, 2025 at 11:51 AM
Going to upload my first m*del to H*ggingF*ce [dies of embarrassment]
December 17, 2025 at 12:17 AM
Ohhh TensorRT doesn't like (some types of) quant models...
December 16, 2025 at 11:03 PM
😇
December 16, 2025 at 10:12 PM
hm, Windows binaries are incompressible by UPX due to a "Control Flow Guard"...
December 16, 2025 at 8:30 PM
These new macOS 14 GHA runners have agoraphobia ._.
December 16, 2025 at 6:19 PM
Reposted by Louis Maddox
the audacity of github to charge me to use my own self-hosted runners

resources.github.com/actions/2026...
Pricing changes for GitHub Actions
GitHub Actions pricing update: Discover lower runner rates (up to 39% off) following a major re-architecture for faster, more reliable CI/CD.
resources.github.com
December 16, 2025 at 6:07 PM
ONNX runtime CUDA dynlibs shrunk -77% with upx + still work 🥳
December 16, 2025 at 5:32 PM
Reposted by Louis Maddox
December 16, 2025 at 1:26 PM
New record: 3 Python packages published from one repo across separate workflows, 1 mixed Python/Rust via maturin-action, 2 regular pure Python via pypa/gh-action-pypi-publish (all via Trusted Publishing) 🚀🚀🚀
December 16, 2025 at 3:43 PM
🗞️ “US pauses implementation of $40 billion technology deal with Britain” www.reuters.com/world/europe...
US pauses implementation of $40 billion technology deal with Britain
The United States has paused a $40 billion technology agreement with Britain, officials said, following concerns in Washington over London's approach to digital regulation and food standards.
www.reuters.com
December 16, 2025 at 12:22 PM
Decided to rename ‘release paraphernalia’ to ‘mech’ because my brain was not caffeinated enough to type that all into a commit message
December 16, 2025 at 11:04 AM
I guess we storing JSON in NPZ now
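For anyone curious, one way this works (a minimal sketch, assuming you just want JSON metadata riding alongside arrays in the same .npz): serialize the dict and store it as a 0-d NumPy string array, which round-trips without `allow_pickle`.

```python
import io
import json

import numpy as np

# Store JSON metadata next to arrays in an .npz: serialize to a string,
# wrap it in a 0-d unicode array (no pickle needed on load).
meta = {"model": "demo", "dim": 192}
buf = io.BytesIO()
np.savez(buf, embeddings=np.zeros((2, 3)), meta=np.array(json.dumps(meta)))
buf.seek(0)

loaded = np.load(buf)
recovered = json.loads(loaded["meta"].item())
print(recovered["dim"])  # 192
```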
December 16, 2025 at 1:42 AM
v cool: Luxical from datologyai www.datologyai.com/blog/introdu...
Snowflake Arctic (m-v2.0) as teacher model, 192D embeddings, vocab of 5-grams from FineWeb, BERT uncased tokeniser, custom CPU kernel in numba github.com/datologyai/l...

Buried lede: arrow-tokenize in Rust github.com/datologyai/l...
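The 5-gram-vocab idea can be sketched generically (illustrative only — this is not Luxical's implementation, and `ngram_counts` is a hypothetical helper): hash each character 5-gram into a fixed-size count vector, which is the kind of sparse featurization a numba CPU kernel would then accelerate.

```python
import numpy as np

def ngram_counts(text: str, n: int = 5, vocab_size: int = 2**16) -> np.ndarray:
    """Hash each character n-gram into a fixed-size count vector.

    Toy version of hashed n-gram featurization; a real pipeline would use
    a learned n-gram vocabulary and a compiled kernel instead of a loop.
    """
    vec = np.zeros(vocab_size, dtype=np.float32)
    for i in range(len(text) - n + 1):
        vec[hash(text[i : i + n]) % vocab_size] += 1.0
    return vec

v = ngram_counts("tokenization")
print(int(v.sum()))  # 12 chars -> 8 five-grams, so total count is 8
```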
luxical/src/luxical/csr_matrix_utils.py at e40f6bb3bdcca7776740a0544009c5bb83eef6e3 · datologyai/luxical
github.com
December 16, 2025 at 1:03 AM
Optimal transport planning on embeddings (Neurips 2024) proceedings.neurips.cc/paper_files/...
FASTopic github.com/bobxwu/fasto...
proceedings.neurips.cc
December 16, 2025 at 12:19 AM
“we observe that TensorRT always outperforms CUDA”
December 16, 2025 at 12:06 AM
30+ min ONNX builds down to <2m ✧⁠\⁠(⁠>⁠o⁠<⁠)⁠ノ⁠✧
December 15, 2025 at 10:44 PM
Reposted by Louis Maddox
uh oh: "By operating directly over raw UTF-8 bytes..."
December 15, 2025 at 5:22 PM
hmm TensorRT is a no for embeddings apparently
December 15, 2025 at 4:41 PM
📝 TensorRT 10.x installation notes on Ubuntu 24.04 github.com/lmmx/devnote...
Installing TensorRT 10.x on Ubuntu 24.04
obscure technical resolutions re: errors, installation quirks, custom setups etc. - lmmx/devnotes
github.com
December 15, 2025 at 3:57 PM
“Infinity is a high-throughput, low-latency serving engine for text-embeddings, reranking models, clip, clap and colpali” github.com/michaelfeil/...
GitHub - michaelfeil/infinity: Infinity is a high-throughput, low-latency serving engine for text-embeddings, reranking models, clip, clap and colpali
github.com
December 15, 2025 at 3:19 PM
Interesting, ONNX runtime has separate providers for TensorRT & “TensorRT RTX” onnxruntime.ai/docs/executi...
Launched 6 months ago github.com/NVIDIA/Tenso...
NVIDIA - TensorRT RTX
Instructions to execute ONNX Runtime on NVIDIA RTX GPUs with the NVIDIA TensorRT RTX execution provider
onnxruntime.ai
December 15, 2025 at 2:38 PM