Yuzhe Yang
@yuzheyang.bsky.social
1.7K followers 70 following 22 posts
Asst Prof @UCLA | RS @Google | PhD @MIT | BS @PKU #ML, #AI, #health, #medicine https://www.cs.ucla.edu/~yuzhe
Pinned
🚨 Let your wearable data "speak" for themselves! ⌚️🗣️

Introducing *SensorLM*, a family of sensor-language foundation models, trained on ~60 million hours of data from >103K people, enabling robust wearable sensor data understanding with natural language. 🧵
Beyond its discriminative power, SensorLM showcases compelling generative capabilities. It can produce hierarchical and realistic captions from wearable sensor data alone, offering more coherent & correct descriptions than LLMs like Gemini 2.0 Flash. ✍️✨

(7/8)
SensorLM also demonstrates intriguing scaling behavior over data size, model size, and compute. 📈💡

(6/8)
Experiments across real-world tasks in human activity analysis 🏃‍♀️ & healthcare ⚕️ showcase its superior performance over SOTA models in:
✨ Zero-shot recognition
✨ Few-shot learning
✨ Cross-modal retrieval

(5/8)
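(A rough sketch of how zero-shot recognition works in a sensor-language setup like this: embed a sensor window and candidate text labels in the same space, then pick the closest label. The encoders and labels below are placeholder stubs, not the actual SensorLM models.)

```python
# Hypothetical zero-shot activity recognition via sensor-text embedding
# similarity. `sensor_encoder` and `text_encoder` are random stubs standing
# in for pretrained SensorLM-style towers, so the sketch runs end to end.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 128

def sensor_encoder(window: np.ndarray) -> np.ndarray:
    """Placeholder: map a (timesteps, channels) sensor window to an embedding."""
    return rng.normal(size=EMBED_DIM)

def text_encoder(prompt: str) -> np.ndarray:
    """Placeholder: map a text label/caption to an embedding."""
    return rng.normal(size=EMBED_DIM)

def zero_shot_classify(window, label_prompts):
    z = sensor_encoder(window)
    z /= np.linalg.norm(z)
    scores = {}
    for label, prompt in label_prompts.items():
        t = text_encoder(prompt)
        t /= np.linalg.norm(t)
        scores[label] = float(z @ t)  # cosine similarity
    return max(scores, key=scores.get), scores

labels = {
    "running": "a person running outdoors",
    "sleeping": "a person sleeping at night",
    "cycling": "a person riding a bicycle",
}
window = rng.normal(size=(600, 6))  # e.g. a 6-channel wearable window
pred, scores = zero_shot_classify(window, labels)
print(pred, scores)
```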
SensorLM extends prominent multimodal pretraining architectures (e.g., contrastive, generative), unifying their principles for sensor data and recovering prior approaches as specific configurations within a single architecture. 🏗️🔗

(4/8)
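(To make the unification concrete, here is a minimal PyTorch sketch of combining a contrastive, CLIP-style loss with a generative captioning loss in one training step. The toy modules and equal loss weighting are illustrative assumptions, not SensorLM's actual implementation.)

```python
# Illustrative combination of contrastive + generative objectives over
# paired (sensor, text) batches. All modules are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySensorLM(nn.Module):
    def __init__(self, embed_dim=128, vocab_size=1000):
        super().__init__()
        self.sensor_encoder = nn.Linear(6, embed_dim)            # stub sensor tower
        self.text_encoder = nn.Embedding(vocab_size, embed_dim)  # stub text tower
        self.decoder_head = nn.Linear(embed_dim, vocab_size)     # stub caption decoder
        self.logit_scale = nn.Parameter(torch.tensor(2.0))

    def forward(self, sensor, tokens):
        z_s = F.normalize(self.sensor_encoder(sensor).mean(dim=1), dim=-1)  # (B, D)
        z_t = F.normalize(self.text_encoder(tokens).mean(dim=1), dim=-1)    # (B, D)

        # Contrastive term: matched (sensor, text) pairs on the diagonal are positives.
        logits = self.logit_scale.exp() * z_s @ z_t.t()
        targets = torch.arange(sensor.size(0))
        loss_contrastive = (F.cross_entropy(logits, targets) +
                            F.cross_entropy(logits.t(), targets)) / 2

        # Generative term: predict the next caption token conditioned on the sensor embedding.
        h = self.text_encoder(tokens[:, :-1]) + z_s.unsqueeze(1)
        loss_caption = F.cross_entropy(
            self.decoder_head(h).reshape(-1, self.decoder_head.out_features),
            tokens[:, 1:].reshape(-1),
        )
        return loss_contrastive + loss_caption  # equal weighting is an assumption

model = ToySensorLM()
sensor = torch.randn(4, 600, 6)           # batch of sensor windows
tokens = torch.randint(0, 1000, (4, 16))  # batch of tokenized captions
loss = model(sensor, tokens)
loss.backward()
print(float(loss))
```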
This enabled us to curate the largest sensor-language dataset to date: over 59.7 million hours of data from >103,000 people. That's orders of magnitude larger than prior studies! 🚀💾

(3/8)
Despite the pervasiveness of wearable sensor data, aligning & interpreting it with language remains challenging 📈 due to the lack of richly annotated sensor-text descriptions. 🚫

Our solution? A hierarchical pipeline that captures statistical📊, structural🏗️, and semantic🧠 sensor info.

(2/8)
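(As a toy illustration of the statistical layer of such a pipeline, one can turn raw signal statistics into a templated caption; the structural and semantic layers would build on top of this. The thresholds and wording below are made up for the example.)

```python
# Toy "statistical" captioner for a heart-rate + step-count stream.
# Numbers and phrasing are illustrative, not the pipeline behind SensorLM.
import numpy as np

def statistical_caption(heart_rate: np.ndarray, steps: np.ndarray) -> str:
    hr_mean, hr_max = heart_rate.mean(), heart_rate.max()
    total_steps = int(steps.sum())
    activity = "high" if total_steps > 3000 else "low"
    return (f"Mean heart rate {hr_mean:.0f} bpm (peak {hr_max:.0f} bpm) "
            f"with {total_steps} steps, indicating {activity} activity.")

rng = np.random.default_rng(0)
hr = rng.normal(95, 12, size=3600)   # one hour at 1 Hz
steps = rng.poisson(1.2, size=3600)
print(statistical_caption(hr, steps))
```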
Reposted by Yuzhe Yang
🩻⚖️ AI underdiagnoses Black female patients

A new study found that expert-level vision-language models for chest X-rays systematically underdiagnose marginalised groups – especially Black women – more than radiologists.

🔗 doi.org/10.1126/sciadv.adq0305

#SciComm #AI #HealthEquity 🧪
Demographic bias of expert-level vision-language foundation models in medical imaging
Compared to certified radiologists, expert-level AI models show notable and consistent demographic biases across pathologies.
doi.org
Science News provides great coverage of our paper: www.science.org/content/arti...

Started in 2023, delayed but finally out! Huge congrats & thanks to amazing collaborators: Yujia, @xliucs, @Avanti0609, @Mastrodicasa_MD, Vivi, @ejaywang, @sahani_dushyant, Shwetak 🎉

(6/6)
#AI #health #fairness
AI models miss disease in Black and female patients
Analysis of chest x-rays underscores need for monitoring artificial intelligence tools for bias, experts say
science.org
Why the gap? These foundation models in medical imaging encode demographic info (age, sex, race) from X-rays—more than humans do! Fascinating, but a challenge for fair healthcare ⚖️.

(5/6)
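(A common way to quantify "the encoder captures demographic info" is a linear probe on frozen embeddings: train a simple classifier to predict the attribute from them. The sketch below uses synthetic embeddings and a generic scikit-learn probe as an assumption about the setup, not the paper's exact protocol.)

```python
# Hypothetical linear-probe check for demographic information in frozen
# image embeddings (synthetic data; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 2000, 256
embeddings = rng.normal(size=(n, d))    # stand-in for frozen VLM embeddings
attribute = rng.integers(0, 2, size=n)  # stand-in for a binary attribute (e.g., sex)

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, attribute, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
print(f"probe AUROC: {auc:.2f}")  # ~0.5 on random data; a much higher value on real
                                  # embeddings would indicate the attribute is encoded
```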
This fairness disparity also holds for pathologies unseen during training, as well as for differential diagnoses across 50+ pathologies. ⚕️

(4/6)
While expert-level VLMs can achieve _overall_ diagnostic accuracy on par with clinicians, they show significant underdiagnosis disparities across (intersectional) subpopulations vs. radiologists 🚨

(3/6)
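(One minimal way to measure underdiagnosis disparity is the gap in false-negative rate among truly positive cases across subgroups, for the model vs. the radiologists. The code below sketches that metric on synthetic data; it is not the paper's released evaluation code, which lives at github.com/YyzHarry/vlm-fairness.)

```python
# Sketch of an underdiagnosis-disparity metric: false-negative rate (FNR)
# among truly positive cases, computed per demographic subgroup.
import numpy as np
import pandas as pd

def fnr_by_group(df: pd.DataFrame, pred_col: str) -> pd.Series:
    positives = df[df["label"] == 1]
    return positives.groupby("group")[pred_col].apply(lambda s: float((s == 0).mean()))

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "label": rng.integers(0, 2, size=1000),       # ground truth: finding present?
    "model_pred": rng.integers(0, 2, size=1000),  # stand-in model predictions
    "rad_pred": rng.integers(0, 2, size=1000),    # stand-in radiologist reads
    "group": rng.choice(["Black female", "White male"], size=1000),
})

model_fnr = fnr_by_group(df, "model_pred")
rad_fnr = fnr_by_group(df, "rad_pred")
print("model FNR gap:", model_fnr.max() - model_fnr.min())
print("radiologist FNR gap:", rad_fnr.max() - rad_fnr.min())
```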
We tested top vision-language models like CheXzero on 5 global datasets 🌍. Result? They consistently show disparities in diagnosis based on race, sex, and age—esp. across marginalized groups—compared to certified radiologists 📷

(2/6)
Do foundation models in medical imaging see everyone fairly?🤔

Excited to share our new Science Advances paper uncovering & auditing demographic biases of expert-level VLMs, and comparing them to board-certified radiologists🧑‍⚕️

📄science.org/doi/10.1126/sciadv.adq0305
💻github.com/YyzHarry/vlm-fairness
(1/6)
Reposted by Yuzhe Yang
Just published in Nature Biomedical Engineering! Working with the incredible PhD student Wei Qiu and our brilliant collaborator Kamila Naxerova at Harvard was a great pleasure. Our deep profiling framework enables us to view 18 human cancers through the lens of AI!

www.nature.com/articles/s41...
Deep profiling of gene expression across 18 human cancers - Nature Biomedical Engineering
Using unsupervised deep learning to generate low-dimensional latent spaces for gene-expression data can unveil biological insight across cancers.
www.nature.com
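(The core idea of compressing a high-dimensional gene-expression matrix into a low-dimensional latent space with an unsupervised model can be sketched with a tiny autoencoder; the paper's deep-profiling framework is far richer, so treat this only as a concept illustration on synthetic data.)

```python
# Tiny autoencoder sketch: compress synthetic "gene expression" profiles
# (samples x genes) into a 2-D latent space. Illustrative only; not the
# paper's deep-profiling framework.
import torch
import torch.nn as nn

n_samples, n_genes, latent_dim = 256, 5000, 2
x = torch.randn(n_samples, n_genes)  # stand-in for normalized expression data

encoder = nn.Sequential(nn.Linear(n_genes, 128), nn.ReLU(), nn.Linear(128, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, n_genes))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(200):
    z = encoder(x)                                   # low-dimensional latent codes
    loss = nn.functional.mse_loss(decoder(z), x)     # reconstruction objective
    opt.zero_grad()
    loss.backward()
    opt.step()

print(z.shape)  # (256, 2): each sample now has a 2-D coordinate to analyze
```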
Reposted by Yuzhe Yang
A neurologist with 2 APOE4 copies tells us about his experience with #Alzheimers disease
washingtonpost.com/wellness/202...
Reposted by Yuzhe Yang
Seven years ago, Scott Lundberg presented our SHAP framework at the NeurIPS 2017 conference. Since then, SHAP has become one of the most widely used feature attribution methods, with our paper receiving approximately 30,000 citations. It's wonderful that SHAP's birthday aligns perfectly with mine! 😊
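(For readers new to SHAP, a minimal usage sketch of the shap library on a generic tree model; the dataset and model here are illustrative, not from the original paper.)

```python
# Minimal SHAP usage sketch: explain a tree-ensemble regressor's predictions
# with per-feature attributions. Dataset/model choices are illustrative.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

print(shap_values.shape)           # (n_samples, n_features)
shap.summary_plot(shap_values, X)  # global view of feature importance
```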
I will be at #NeurIPS and #ML4H all next week — let me know if you would like to catch up in person!

📢 I am also recruiting PhD students! Drop me an email if you're attending NeurIPS and would like to chat or learn more 😀
Hello world! I’m recruiting ~3 PhD students for Fall 2025 at UCLA 🚀

Please apply to UCLA CS or CompMed if you are interested in ML and (Gen)AI for healthcare / medicine / science.

See my website for more on my research & how to apply: people.csail.mit.edu/yuzhe
Would love to be added!
Would love to be added, thanks!
Reposted by Yuzhe Yang
Here is a #compbio starter kit! go.bsky.app/QVPoZXp To all the #Bioinformatics #Genomics #MachineLearning folks: please RP and let’s build this together!
⌚️ Check out our latest work on scaling foundation models for large-scale multimodal wearable sensor data!
Scaling wearable foundation models
research.google
Would love to be added!