max 🌌
@sneptech.bsky.social
530 followers 450 following 2.9K posts
the future is a nation we will become citizens of together founder & frontend dev @ Dreamtime
Pinned
sneptech.bsky.social
I've got an idea for commercially viable mass asteroid mining that, roughly condensed into five words, is: SOLAR BULK CENTRIFUGE PEEL MINING

laying out the system design in a thread 👇

(artist's conception by @CalebABingham on 🐦🖥️)
sneptech.bsky.social
and his High Showeriness presided over the greatest golden age in all of human history up until that point

* technologically and economically anyway. socially some stuff was left to fix
sneptech.bsky.social
funnily enough I had that exact same argument with someone and their argument boiled down to "nuh uh deflation is fine because bitcoin is deflationary and bitcoin is good"

ideally you'd want something that expands with use algorithmically, early experiments like Ampleforth kinda tried doing that
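
a minimal sketch of what "expands with use algorithmically" can look like, modeled loosely on an Ampleforth-style rebase: supply scales by the same factor for every holder to chase a target price. the target, deadband, and lag below are illustrative assumptions, not the protocol's real parameters

```python
# sketch of an Ampleforth-style rebase (illustrative numbers only;
# the real protocol uses a price oracle, a deadband, and a smoothing lag)

TARGET_PRICE = 1.00   # target price, assumed
DEADBAND = 0.05       # no rebase within +/-5% of target, assumed
LAG = 10              # spread each correction over 10 rebases, assumed

def rebase(total_supply: float, oracle_price: float) -> float:
    """Expand or contract supply toward the target price."""
    deviation = (oracle_price - TARGET_PRICE) / TARGET_PRICE
    if abs(deviation) < DEADBAND:
        return total_supply  # close enough, leave supply alone
    # every holder's balance scales by the same factor, so ownership
    # shares stay fixed while total supply expands or contracts with demand
    return total_supply * (1 + deviation / LAG)

supply = 1_000_000.0
for price in (1.00, 1.20, 0.90):
    supply = rebase(supply, price)
    print(f"price ${price:.2f} -> supply {supply:,.0f}")
```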
sneptech.bsky.social
using the hot shower index alone predicted the regression of Germany 10 years ago as prices for power exploded and the culture shifted to treat those who would want such decadence as immoral
sneptech.bsky.social
"but it uses lots of water" -> massive desal expansion, recycling
"that's a lot of energy" -> heat exchangers, nuclear-maxxing
"what about work" -> automation revolution, UBI, resource-based economy
etc etc.

the index part is that you can measure the success of a civ by the length of its showers too
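
quick sanity check on the index, with assumed-typical numbers (none of these are from the original rant): one endless hot shower is roughly a 14 kW thermal load, which is why the answers above jump straight to desal, heat exchangers, and nuclear

```python
# energy cost of one continuously running hot shower
# (flow rate and temperature rise are assumed typical values)

flow_l_per_min = 8.0     # showerhead flow, assumed
delta_t = 25.0           # heat 15 C inlet water to a 40 C shower, assumed
c_water = 4186.0         # J per kg per K, specific heat of water
kg_per_l = 1.0

watts = (flow_l_per_min / 60) * kg_per_l * c_water * delta_t
print(f"~{watts / 1000:.1f} kW thermal per running shower")    # ~14.0 kW
print(f"10 min shower ~ {watts * 600 / 3.6e6:.1f} kWh heat")   # ~2.3 kWh
```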
sneptech.bsky.social
years ago i went on a rant about how if your only driving force was "everybody should have a hot shower that lasts indefinitely if they want it to" you can reliably assemble a gameplan for utopian civilization from that alone
sneptech.bsky.social
the moonboi hypothesis runs in direct opposition to this because anything that looks problematic to lawmakers hurts line going up
sneptech.bsky.social
the best usecase for cryptocurrency was and continues to be being able to pay for things without a bureaucrat standing in your way going "no"
Reposted by max 🌌
smpritchard.bsky.social
A cold rocky world languishes in the feeble glow of a brown dwarf, itself part of a distant binary pair whose partner can just be seen as a small magenta speck just over its limb. Perhaps a view like this exists around the real-life brown dwarf binary Luhman-16, the closest brown dwarfs to Sol.
An image depicting a rocky landscape bathed in deep red light. In the center, a brown dwarf dominates the scene, with alternating bands of incandescent cloud layers glowing various shades of magenta, red, and orange. From our vantage point the cloud bands are oriented roughly vertically. Over the north-eastern limb of the brown dwarf, a distant second companion is visible only as a tiny magenta dot.
sneptech.bsky.social
internet poisoning
sneptech.bsky.social
finally i can pursue my true calling in life. going triple platinum on facebook reels
Reposted by max 🌌
sneptech.bsky.social
and then came fucking Nixon
bsky.app/profile/snep...
sneptech.bsky.social
i have +infinity hatred towards Richard Nixon because if he hadn't canceled the MSRE solely to look good in his home state (where the LMFBR was), then 9/11 and all that subsequent horseshit would probably never have happened

yes. Nixon being a piece of shit Butterfly Effect'd 9/11.
swiftonsecurity.com
"Hey what if humanity had infinite energy in the 70's instead of inventing global ecocide and dictatorial petrostates and century-long populous displacement"
sneptech.bsky.social
AFAIK the MSRE didn't do continuous online chemical reprocessing where they removed fission products but I could be wrong
Reposted by max 🌌
timkellogg.me
Karpathy: nanochat

A small training+inference pipeline for creating your own LLM from scratch

$100 will get you a somewhat functional model

$1000 is more coherent & solves math

detailed walkthrough: github.com/karpathy/nan...

repo: github.com/karpathy/nan...
Andrej Karpathy & @karpathy
X.com
Excited to release new repo: nanochat! (it's among the most unhinged I've written).
Unlike my earlier similar repo nanoGPT which only covered pretraining, nanochat is a minimal, from scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single, dependency-minimal codebase. You boot up a cloud GPU box, run a single script, and as little as 4 hours later you can talk to your own LLM in a ChatGPT-like web UI.
It weighs ~8,000 lines of imo quite clean code to:
- Train the tokenizer using a new Rust implementation
- Pretrain a Transformer LLM on FineWeb, evaluate CORE score across a number of metrics
- Midtrain on user-assistant conversations from SmolTalk, multiple choice questions, tool use.
- SFT, evaluate the chat model on world knowledge multiple choice (ARC-E/C, MMLU), math (GSM8K), code (HumanEval)
- RL the model optionally on GSM8K with "GRPO"
- Efficient inference the model in an Engine with KV cache, simple prefill/decode, tool use (Python interpreter in a lightweight sandbox), talk to it over CLI or ChatGPT-like WebUI.
- Write a single markdown report card, summarizing and gamifying the whole thing.
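
for anyone who hasn't met the "KV cache, simple prefill/decode" split in that list: prefill embeds the whole prompt once and caches each position's keys/values, then decode extends the cache one token at a time instead of re-processing the prompt. a toy NumPy illustration of the idea (not nanochat's actual code):

```python
import numpy as np

# toy single-head attention with a KV cache; random embeddings, no training
D = 4
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))

def attend(q, K, V):
    w = np.exp(q @ K.T)   # unnormalized attention weights
    w /= w.sum()          # softmax over all cached positions
    return w @ V

# prefill: run the whole prompt once, caching K and V per position
prompt = rng.normal(size=(5, D))            # 5 "token" embeddings
K_cache, V_cache = prompt @ Wk, prompt @ Wv

# decode: each new token appends only its own k/v and attends over
# the cache, so the prompt is never re-processed
for step in range(3):
    x = rng.normal(size=(1, D))             # next "token" embedding
    K_cache = np.vstack([K_cache, x @ Wk])
    V_cache = np.vstack([V_cache, x @ Wv])
    out = attend((x @ Wq)[0], K_cache, V_cache)
    print(f"decode step {step}: output {out.round(2)}")
```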
Even for as low as ~$100 in cost (~4 hours on an 8XH100 node), you can train a little ChatGPT clone that you can kind of talk to, and which can write stories/poems, answer simple questions.
About ~12 hours surpasses GPT-2 CORE metric.
As you further scale up towards ~$1000 (~41.6 hours of training), it quickly becomes a lot more coherent and can solve simple math/code problems and take multiple choice tests. E.g. a depth 30 model trained for 24 hours (this is about equal to FLOPs of GPT-3 Small 125M and 1/1000th of GPT-3) gets into 40s on MMLU and 70s on ARC-Easy, 20s on GSM8K, etc.
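
the quoted price points are mutually consistent if you assume one flat hourly rate for the 8XH100 node (the post never states the rate; it's inferred here):

```python
# inferring the node rate from "$100 for ~4 hours"
node_rate = 100 / 4   # ~$25/hour for the node, i.e. ~$3 per GPU-hour
print(f"implied rate: ${node_rate:.0f}/hour")
print(f"$1000 tier: 41.6 h x ${node_rate:.0f} = ${41.6 * node_rate:,.0f}")  # ~$1,040
print(f"depth-30 run: 24 h x ${node_rate:.0f} = ${24 * node_rate:,.0f}")    # $600
```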
My goal is to get the full "strong baseline" stack into one cohesive, minimal, readable, hackable, maximally forkable repo. nanochat will be the capstone project of LLM101n (which is still being developed). I think it also has potential to grow into a research harness, or a benchmark, similar to nanoGPT before it. It is by no means finished, tuned or optimized (actually I think there's likely quite a bit of low-hanging fruit), but I think it's at a place where the overall skeleton is ok enough that it can go up on GitHub where all the parts of it can be improved.
Link to repo and a detailed walkthrough of the nanochat speedrun is in the reply.