Zach Mueller
@muellerzr.bsky.social
3.4K followers 82 following 230 posts
Technical Lead on Accelerate @ Hugging Face | Passionate about Open Source | https://muellerzr.github.io
Pinned
Hi all 👋

Back very briefly to mention I’m working on a new course, and there’s a star-studded set of guest speakers 🎉

From Scratch to Scale: Distributed Training (from the ground up).

From now until I’m done writing the course material, it’s 25% off :)

maven.com/walk-with-co...
Dispatching works by keeping the dataset on one process and then sending the batches to the other workers throughout training. This incurs a memory cost, since this is a GPU -> GPU transfer; however, many find this more appealing than the alternatives.
DataLoader Dispatching

When you're constrained, for a variety of reasons, from keeping multiple copies (or mmaps) of the dataset in memory, be it too many concurrent streams, low resource availability, or a slow CPU, dispatching is here to help.
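
A minimal sketch of the idea, under some simplifying assumptions: only rank 0 holds the dataset and its DataLoader, every batch is a single tensor, and a process group is already initialized. The helper name dispatch_batches is made up for illustration and is not Accelerate's actual implementation.

import torch
import torch.distributed as dist

def dispatch_batches(dataloader, rank, world_size, device, num_batches):
    # num_batches is assumed to be known on every rank; in practice the
    # length would also be communicated from the main process.
    data_iter = iter(dataloader) if rank == 0 else None
    for _ in range(num_batches):
        if rank == 0:
            global_batch = next(data_iter).to(device)
            meta = [tuple(global_batch.shape), global_batch.dtype]
        else:
            meta = [None, None]
        # Share the shape/dtype so non-main ranks can allocate a receive buffer.
        dist.broadcast_object_list(meta, src=0)
        if rank != 0:
            global_batch = torch.empty(meta[0], dtype=meta[1], device=device)
        dist.broadcast(global_batch, src=0)  # the GPU -> GPU transfer
        yield global_batch.chunk(world_size, dim=0)[rank]  # keep this rank's slice

The whole global batch crosses the interconnect each step, which is the cost mentioned above; in exchange, only one process ever has to materialize the dataset.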
The second cohort of Scratch to Scale starts on November 1st! Come learn the major tricks and algorithms used when single-GPU training hits failure points.
This ensures consistent randomness and avoids redundant computation, making it well-suited for more complex sampling strategies.
Batch sampler sharding is especially efficient when using non-trivial sampling methods (weighted, balanced, temperature-based, etc.), since the sampling logic runs once globally rather than being duplicated per worker.
Unlike iterable dataset sharding, where each worker pulls raw data directly in its __iter__, batch sampler sharding first generates indices that specify which items to fetch. These indices are grouped into batches, the corresponding items are fetched and collated into tensors, and the result is moved to CUDA for training.
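
A minimal sketch of that flow with PyTorch-style samplers. The ShardedBatchSampler class and the round-robin slicing are illustrative choices, not Accelerate's actual BatchSamplerShard.

import torch
from torch.utils.data import BatchSampler, DataLoader, RandomSampler

class ShardedBatchSampler:
    """Wraps a sampler that yields *global* index batches; each rank keeps its slice."""
    def __init__(self, batch_sampler, rank, world_size):
        self.batch_sampler = batch_sampler
        self.rank = rank
        self.world_size = world_size

    def __iter__(self):
        for global_indices in self.batch_sampler:
            # Every rank iterates the same index stream (same seed => consistent
            # randomness) and keeps only its own share of each global batch.
            shard = global_indices[self.rank::self.world_size]
            if shard:
                yield shard

    def __len__(self):
        return len(self.batch_sampler)

# Seeding identically on every rank keeps the sampling logic effectively global.
generator = torch.Generator().manual_seed(42)
dataset = list(range(1_000))
global_batches = BatchSampler(RandomSampler(dataset, generator=generator),
                              batch_size=32, drop_last=True)
loader = DataLoader(dataset,
                    batch_sampler=ShardedBatchSampler(global_batches, rank=0, world_size=4))
for batch in loader:
    batch = batch.cuda(non_blocking=True)  # collated tensor, moved to CUDA for training
    break

Only the indices are produced globally; each rank's DataLoader then fetches and collates just the items it was assigned.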
The second cohort of Scratch to Scale starts on November 1st! Come learn the major tricks and algorithms used when single-GPU training hits failure points. You'll also get access to the prior cohort's guest speakers! (And get in for 35% off) maven.com/walk-with-c...
Scratch to Scale: Large-Scale Training in the Modern World by Zachary Mueller on Maven
Learn the techniques used today to take your model training from Colab to Clusters
maven.com
The main benefit of doing things this way is that it's extremely fast (there are no communications involved, since each process can figure out its own slice of the data); however, it's more RAM-heavy, since the entire dataset needs to live in memory on every process.
Afterwards, all that's needed is to move the data to the device and you're ready to perform distributed data parallelism!
For this to work, every process holds the entire dataset (so it uses more RAM, but RAM is cheap). As the DataLoader iterates, each process grabs a pre-determined "chunk" of the global batch of items from the dataset in __getitem__().
Dataset Sharding

When performing distributed data parallelism, we split the data on every batch so that each device sees a different chunk. There are different methods for doing so. One example is sharding at the *dataset* level, shown here.
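
A minimal sketch of dataset-level sharding, using a hypothetical ShardedDataset (illustrative only, not Accelerate's implementation): every rank holds the full data and deterministically maps its local indices onto its own slice of each global batch, so no communication is needed.

import torch
from torch.utils.data import Dataset, DataLoader

class ShardedDataset(Dataset):
    def __init__(self, data, rank, world_size, batch_size_per_rank):
        self.data = data  # the *entire* dataset lives on every process
        self.rank = rank
        self.world_size = world_size
        self.bs = batch_size_per_rank
        self.global_bs = batch_size_per_rank * world_size

    def __len__(self):
        # Each rank sees 1/world_size of the data (ragged tail dropped).
        return (len(self.data) // self.global_bs) * self.bs

    def __getitem__(self, idx):
        # Map a local index onto this rank's pre-determined chunk of the global batch.
        batch, offset = divmod(idx, self.bs)
        global_idx = batch * self.global_bs + self.rank * self.bs + offset
        return torch.as_tensor(self.data[global_idx])

# e.g. with 2 processes and batch_size_per_rank=8:
# rank 0: DataLoader(ShardedDataset(range(64), 0, 2, 8), batch_size=8) -> items 0-7, 16-23, ...
# rank 1: DataLoader(ShardedDataset(range(64), 1, 2, 8), batch_size=8) -> items 8-15, 24-31, ...

From there, each process just moves its batch to its own device, as described above.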
Reach out to me if you need financial assistance or the cost is too much. I'm happy to try and work within budgets (or company education stipends): [email protected]
Reposted by Zach Mueller
AAAHHHHHHHHH BE NICE TO OPEN SOURCE MAINTAINERS OH MY GOD. SOME OF YOU ARE SO RUDE, WHO RAISED YOU
JK, STRONG is still awesome. The solution (which they were already aware of) was literally logout -> login!

11/10 customer support
I specifically mean my gym tracking app, STRONG. I've used it for 5 years and have a lifetime membership, and they decided that adding warmup sets now requires internet access to check for premium. Which is a bit insane. (It is a premium feature, but there are better ways to do this.)
Enshittification has come to my gym app. Lord help us all 😭
And I'll try wearing it only when not sleeping :)
(As a result I no longer wear mine and track my sleep using other methods that don't have that psychological effect)
As someone who did Oura for 2 years, I can say it got to the point where my sleep depended on the Oura score, rather than the Oura score providing insight. So be wary of that long-term; otherwise, fully agree 💯