Stéphane Thiell
@sthiell.bsky.social
77 followers 40 following 19 posts
I do HPC storage at Stanford and always monitor channel 16 ⛵
Thrilled to host Lustre Developer Day at @stanford-rc.bsky.social post-LUG 2025! 🌟 With 14+ organizations represented, including DDN, LANL, LLNL, HPE, CEA, AMD, ORNL, AWS, Google, NVIDIA, Sandia, and Jefferson Lab, we discussed HSM, Trash Can, and upstreaming Lustre into the Linux kernel.
@stanford-rc.bsky.social was proud to host the Lustre User Group 2025 organized with OpenSFS! Thanks to everyone who participated and our sponsors! Slides are already available at srcc.stanford.edu/lug2025/agenda 🤘Lustre! #HPC #AI
Getting things ready for next week's Lustre User Group 2025 at Stanford University!
Why not use inode quotas to catch that earlier?
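For reference, a minimal sketch of setting and checking inode (file count) quotas on a Lustre filesystem with the standard lfs setquota and lfs quota commands, driven from Python. The user name, limits, and mount point are placeholder examples, not Sherlock's actual configuration.

```python
# Sketch: set an inode quota and report usage via the standard `lfs` CLI.
# User, limits, and mount point are made-up example values.
import subprocess

MOUNT = "/scratch"                  # hypothetical Lustre mount point
USER = "alice"                      # hypothetical user
SOFT, HARD = 1_000_000, 1_200_000   # example inode soft/hard limits

# Set an inode quota: -i is the inode soft limit, -I the inode hard limit.
subprocess.run(
    ["lfs", "setquota", "-u", USER, "-i", str(SOFT), "-I", str(HARD), MOUNT],
    check=True,
)

# Report current block and inode usage for the user on this filesystem.
report = subprocess.run(
    ["lfs", "quota", "-u", USER, MOUNT],
    check=True, capture_output=True, text=True,
)
print(report.stdout)
```

Hitting the inode soft limit surfaces a runaway file count well before the filesystem itself runs out of metadata capacity.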
Join us for the Lustre User Group 2025 hosted by @stanford-rc.bsky.social in collaboration with OpenSFS.
Check out the exciting agenda! 👉 srcc.stanford.edu/lug2025/agenda
LUG 2025 Agenda
srcc.stanford.edu
ClusterShell 1.9.3 is now available in EPEL and Debian. Not using clustershell groups on your #HPC cluster yet?! Check out the new bash completion feature! Demo recorded on Sherlock at @stanford-rc.bsky.social with ~1,900 compute nodes and many group sources!

asciinema.org/a/699526
clustershell bash completion (v1.9.3)
This short recording demonstrates the bash completion feature available in ClusterShell 1.9.3, showcasing its benefits when using the clush and cluset command-line tools.
asciinema.org
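For anyone new to ClusterShell, a minimal sketch of its Python API, the same engine behind clush and cluset; the node names and command below are hypothetical examples, not Sherlock's real configuration.

```python
# Minimal ClusterShell Python API sketch (node names and command are
# made-up examples).
from ClusterShell.NodeSet import NodeSet
from ClusterShell.Task import task_self

# Folded node set notation, the same syntax clush/cluset understand.
# With group sources configured, "@groupname"-style names also resolve.
nodes = NodeSet("sh-101-[01-20],sh-102-[01-20]")
print("%d nodes: %s" % (len(nodes), nodes))

# Run a command in parallel over ssh and print the output, grouped by
# nodes that returned identical buffers.
task = task_self()
task.run("uname -r", nodes=str(nodes))
for output, nodelist in task.iter_buffers():
    print("%s: %s" % (NodeSet.fromlist(nodelist), output.message().decode()))
```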
SAS 24Gb/s (12 x 4 x 24Gb/s) switch from SpectraLogic on display at #SC24. Built by Astek Corporation.
Just another day for Sherlock's home-built scratch Lustre filesystem at Stanford: Crushing it with 136+GB/s aggregate read on real research workload! 🚀 #Lustre #HPC #Stanford
A great show of friendly open source competition and collaboration: the lead developers of Environment Modules and Lmod (Xavier of CEA and Robert of @taccutexas.bsky.social) at #SC24. They often exchange ideas and push each other to improve their tools!
Newly announced at the #SC24 Lustre BoF! Lustre User Group 2025, organized by OpenSFS, will be hosted at Stanford University on April 1-2, 2025. Save the date!
Fun fact: the Georgia Aquarium (a nonprofit), next to the Georgia World Congress Center, is the largest aquarium in the U.S. and the only one that houses whale sharks. I went on Sunday and it was worth it. Just in case you need a break from SC24 this week… 🦈
I always enjoy an update from JD Maloney (NCSA), but even more when it is about using S3 for Archival Storage, something we are deploying at scale at Stanford this year (using MinIO server and Lustre/HSM!)
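For context, the user-facing side of a Lustre/HSM archive tier looks roughly like this sketch; the file path is a made-up example, and the copytool that actually moves data to the S3/MinIO backend is site-specific and not shown.

```python
# Sketch of the user-facing Lustre HSM workflow (archive, check state,
# release) using the standard `lfs hsm_*` commands. Path is hypothetical.
import subprocess

path = "/scratch/groups/lab/old_dataset.tar"   # hypothetical file on Lustre

# Request archival to the HSM backend (asynchronous: a copytool does the copy).
subprocess.run(["lfs", "hsm_archive", path], check=True)

# Inspect HSM state flags (e.g. "exists", "archived", "released").
subprocess.run(["lfs", "hsm_state", path], check=True)

# Once archived, release the local data blocks to free Lustre space;
# the file is transparently restored from the archive on next access.
subprocess.run(["lfs", "hsm_release", path], check=True)
```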
Our branch of robinhood-lustre-3.1.7 on Rocky Linux 9.3 with our own branch of Lustre 2.15.4 and MariaDB 10.11 can ingest more than 35K Lustre changelogs/sec. Those gauges seem appropriate for Pi Day, no?
github.com/stanford-rc/...
github.com/stanford-rc/...
I can't speak for all of Stanford, but at Research Computing we manage more than 60PB of on-prem research storage, most of it in use and growing...
We keep a few interesting numbers here: www.sherlock.stanford.edu/docs/tech/fa...
Facts - Sherlock
User documentation for Sherlock, Stanford's HPC cluster.
www.sherlock.stanford.edu
Filesystem upgrade complete! Stanford cares about HPC I/O! The Sherlock cluster now has ~10PB of all-flash Lustre scratch storage at 400 GB/s to support a wide range of research jobs on large datasets! Fully built in-house!
My header image is cropped from this photo taken at The Last Bookstore in Los Angeles, a really cool place.