formerly spent time in startups, big tech, and physics labs
www.ethanrosenthal.com
Youtube: youtu.be/wCAud478Dg8
Slides (pdf): drive.google.com/file/d/18KGH...
It’s always been about the prices of essentials.
Before writing the quoted blog post, I tried to build the thing that I describe in the post.
I couldn't figure it out.
I've been writing Python professionally for a decade, but it was beyond me.
Link: www.ethanrosenthal.com/2024/11/19/y...
This is a very wonky post about configuring training loops for ML models 🧵
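For flavor, here is a minimal sketch of the kind of setup the post is about: a config object driving a training loop. Everything below (the dataclass, the field names, the loop structure) is my own illustration, not code from the post.

```python
# Hypothetical illustration of a config-driven training loop.
# None of these names come from the linked post.
from dataclasses import dataclass

import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset


@dataclass
class TrainConfig:
    lr: float = 1e-3      # optimizer learning rate
    epochs: int = 10      # passes over the dataset
    batch_size: int = 32  # examples per gradient step


def train(model: nn.Module, dataset: Dataset, cfg: TrainConfig) -> None:
    loader = DataLoader(dataset, batch_size=cfg.batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=cfg.lr)
    loss_fn = nn.MSELoss()
    for _ in range(cfg.epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
```

This flat-scalars case is the easy part; configs get wonky once they have to describe whole objects (optimizers, schedulers, models) rather than numbers.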
Combining contrastive learning and message passing markedly improves the features produced by graph embeddings, and the approach scales to huge graphs.
It taught us a lot about graph feature learning 👇
1/10
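A rough sketch of the general recipe (my reading, in the style of GRACE/GraphCL-type methods; the thread's actual method, augmentations, and loss may differ): encode two augmented views of the graph with a message-passing layer, then apply an InfoNCE-style contrastive loss that matches each node to itself across views.

```python
# Illustrative sketch of contrastive learning on top of message passing.
# The augmentation (edge dropping) and loss (InfoNCE) are my assumptions.
import torch
import torch.nn.functional as F
from torch import nn


class MeanPassing(nn.Module):
    """One message-passing layer: average neighbor features, then project."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg  # mean over neighbors
        return F.relu(self.proj(torch.cat([x, neigh], dim=1)))


def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Contrastive loss: node i in view 1 should match node i in view 2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)


def drop_edges(adj: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Cheap graph augmentation: randomly remove a fraction of edges."""
    mask = (torch.rand_like(adj) > p).float()
    return adj * mask


# Toy run: 100 nodes, 16-dim features, random graph.
x = torch.randn(100, 16)
adj = (torch.rand(100, 100) < 0.05).float()
layer = MeanPassing(16)
z1 = layer(x, drop_edges(adj))  # embeddings from view 1
z2 = layer(x, drop_edges(adj))  # embeddings from view 2
loss = info_nce(z1, z2)
loss.backward()  # gradients flow into the message-passing layer
```

The scalability angle, as I understand it, comes from the fact that both pieces work on sampled minibatches of nodes rather than the whole graph at once.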
I'm gonna bet on the company that delivers 100 features customers would like over the company that delivers 10 features they marty cagan'd their brains into believing are the best features
(Not that you had a choice)
drumset
🍿🍿🍿🍿🍿🍿🍿🍿🍿
open.substack.com/pub/chadorze...
en.wikipedia.org/wiki/Malmqui...
✅ YES on #1
✅ YES on #2
✅ YES on #3
✅ YES on #4
✅ YES on #5
🚫 NO on #6
Oh he'd been cooking alright, but then he was cooked
Transformer patches don't need to be of uniform size -- choose sizes based on entropy --> faster training/inference. Are scale-spaces gonna make a comeback?
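A toy sketch of the idea as I read it (real systems score entropy with a small trained byte-level model; the bigram "model" and threshold rule here are my simplifications): measure how predictable each byte is, and start a new patch wherever predicted-next-byte entropy spikes, so predictable stretches get long patches and surprising ones get short ones.

```python
# Toy illustration of entropy-based variable-size patches.
# The bigram "entropy model" and the threshold rule are my simplifications.
import numpy as np


def bigram_entropy(seq: bytes) -> np.ndarray:
    """Entropy of p(next byte | current byte), one value per byte value."""
    counts = np.full((256, 256), 0.01)  # lightly smoothed bigram counts
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1.0
    probs = counts / counts.sum(axis=1, keepdims=True)
    return -(probs * np.log2(probs)).sum(axis=1)


def entropy_patches(seq: bytes, threshold: float) -> list[bytes]:
    """Start a new patch at byte i when i was hard to predict from byte i-1."""
    ent = bigram_entropy(seq)
    patches, start = [], 0
    for i in range(1, len(seq)):
        if ent[seq[i - 1]] > threshold:  # high uncertainty -> patch boundary
            patches.append(seq[start:i])
            start = i
    patches.append(seq[start:])
    return patches


# Repetitive regions stay in big patches; surprising ones get chopped up.
print(entropy_patches(b"aaaaaaaaaaaaQXZRaaaaaaaaaaaa", threshold=3.0))
```

The payoff is that compute is spent where the data is hard: fewer, larger patches over easy regions means fewer transformer positions, hence the faster training/inference.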