Jason Alan Fries
@jason-fries.bsky.social
Research scientist at Stanford University working on healthcare AI, foundation models, and data-centric AI. I focus on evaluating model reproducibility, training multimodal models with EHRs, and improving human-AI collaboration in medicine.
I’m excited to share that I’ll be joining Stanford as a tenure-track Assistant Professor of Biomedical Data Science and of Medicine on Dec 1, 2025. 🎉

I’ll hold a joint appointment in DBDS and the Division of Computational Medicine.
December 1, 2025 at 7:29 AM
Reposted by Jason Alan Fries
AI in Clinical Science - amazing data being presented today by @jason-fries.bsky.social Sylvia Plevritis @roxanadaneshjou.bsky.social @akshay-chaudhari.bsky.social but still feel like we are just barely cracking the egg in this field. So impatient for the omelette…!

@stanford-cancer.bsky.social
November 4, 2025 at 10:33 PM
🎉 Headed to MLHC 2025 this weekend?

Swing by Poster #154 (Session C) on Saturday, Aug 16 to check out FactEHR — our new benchmark for evaluating factuality in clinical notes. As LLMs enter the clinic, we need rigorous, source-grounded tools to measure what they get right (and wrong).
📢 How factual are LLMs in healthcare?
We’re excited to release FactEHR — a new benchmark to evaluate factuality in clinical notes. As generative AI enters the clinic, we need rigorous, source-grounded tools to measure what these models get right — and what they don’t. 🏥 🤖
August 14, 2025 at 6:17 PM
🎉 Excited to present our #ICLR2025 work—leveraging future medical outcomes to improve pretraining for prognostic vision models.

🖼️ "Time-to-Event Pretraining for 3D Medical Imaging"
👉 Hall 3+2B #23
📍 Sat 26 Apr, 10 AM–12:30 PM
🔗 iclr.cc/virtual/2025...
ICLR Poster: Time-to-Event Pretraining for 3D Medical Imaging | ICLR 2025
iclr.cc
April 23, 2025 at 9:00 PM
[1/4] 🎉 We're thrilled to announce the general release of three de-identified, longitudinal EHR datasets from Stanford Medicine—now freely available for non-commercial research use worldwide! 🚀
Learn more on our HAI blog:
hai.stanford.edu/news/advanci...
Advancing Responsible Healthcare AI with Longitudinal EHR Datasets
Current evaluations of AI models in healthcare rely on limited datasets such as MIMIC that lack complete patient trajectories. New benchmark datasets offer an alternative.
hai.stanford.edu
February 13, 2025 at 1:38 AM
Reposted by Jason Alan Fries
[1/4] Excited to share that our paper "Time-to-Event Pretraining for 3D Medical Imaging" is accepted at ICLR 2025! 🚀
We introduce TTE pretraining, using EHR-linked imaging to improve AI-driven prognosis—essential for assessing disease progression.
🔗 Paper: arxiv.org/abs/2411.09361
Time-to-Event Pretraining for 3D Medical Imaging
With the rise of medical foundation models and the growing availability of imaging data, scalable pretraining techniques offer a promising way to identify imaging biomarkers predictive of future disea...
arxiv.org
February 2, 2025 at 6:10 AM
Excited to share our paper "Time-to-Event Pretraining for 3D Medical Imaging" is accepted at ICLR 2025! 🚀

We introduce time-to-event pretraining for imaging, leveraging longitudinal EHRs to provide temporal supervision and enhance disease prognosis performance.

🔗 Paper: arxiv.org/abs/2411.09361
February 2, 2025 at 9:31 PM