"Thank you for your valuable feedback. Rest assured, everything will be in 2D when viewed on screen or in print. Note that this accommodation may not cover readers with curved screens or proclivities towards crumpling paper."
November 5, 2025 at 12:47 AM
"Thank you for your valuable feedback. Rest assured, everything will be in 2D when viewed on screen or in print. Note that this accommodation may not cover readers with curved screens or proclivities towards crumpling paper."
Science limited only by imagination is an exhilarating idea. We're not 100% there yet, because (a) models are still dumb in frustrating ways (losing context, not understanding intent, hallucinations, etc.) and (b) the entire AI pipeline is too resource intensive. But the future finally feels close!
September 4, 2025 at 2:37 AM
10 years ago I had to manually derive gradients and updates (weeks of work). PyTorch and autodiff made it possible to easily fit essentially any model I could code up (~a day of work). Now I can *describe* any model and get code + inference in about an hour. It's exciting!
September 4, 2025 at 2:19 AM
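To make the contrast concrete, here is a minimal sketch (in plain numpy, with an illustrative 1-D linear regression I've made up) of the kind of hand-derived gradient update the comment describes; with autodiff (e.g. PyTorch's `loss.backward()`) these gradient formulas come for free instead of being worked out on paper:

```python
import numpy as np

# Hand-derived gradient descent for y ≈ w*x + b under mean squared error.
# The two grad_* lines are the formulas one used to derive by hand;
# an autodiff framework computes them automatically from the loss.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0 + 0.01 * rng.normal(size=100)  # true w=3, b=1, small noise

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    err = w * x + b - y              # residuals
    grad_w = 2 * np.mean(err * x)    # d(MSE)/dw, derived by hand
    grad_b = 2 * np.mean(err)        # d(MSE)/db, derived by hand
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges near the true (3.0, 1.0)
```

The point of the analogy: every model change here means re-deriving `grad_w` and `grad_b` by hand, which is exactly the weeks-of-work step that autodiff eliminated.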
Mainly I feel limited by API restrictions (token budget, rate limits); otherwise I'd happily run more things in parallel (maybe ~10x more?). Tools like Claude Code have felt analogous to going from manual numpy to PyTorch with automatic inference. Way lower barriers for exploration + experimenting!
September 4, 2025 at 2:14 AM