Hye Sun Yun
@hyesunyun.bsky.social
PhD candidate in CS at Northeastern University | NLP + HCI for health | she/her 🏃‍♀️🧅🌈
I am at CHI this week to present my poster (Framing Health Information: The Impact of Search Methods and Source Types on User Trust and Satisfaction in the Age of LLMs) on Wednesday, April 30.

CHI Program Link: programs.sigchi.org/chi/2025/pro...

Looking forward to connecting with you all!
April 29, 2025 at 12:50 AM
Can we fix this? We tested zero-shot prompts to reduce LLMs' susceptibility to spin.
Good news: prompts that encouraged reasoning reduced their tendency to overstate trial results! 🛠️
Careful design is key to improving evidence synthesis for clinical decisions. [6/7]
February 15, 2025 at 2:34 AM
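
For illustration only (the exact mitigation prompts are in the preprint), a reasoning-encouraging zero-shot prompt might be wired up roughly like this; the prompt wording, the `rate_with_reasoning` helper, and the model name are assumptions, not the paper's setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical wording: ask the model to reason about the actual trial
# outcomes before judging how favorable the results are.
MITIGATION_PROMPT = (
    "Before answering, reason step by step about the primary outcome and "
    "whether the reported claims are supported by the numerical results. "
    "Then rate how favorable the treatment results are on a 0-10 scale, "
    "ending your reply with 'Rating: <number>'.\n\nAbstract:\n{abstract}"
)

def rate_with_reasoning(abstract: str, model: str = "gpt-4o") -> str:
    """Zero-shot prompt that encourages reasoning before rating (spin-mitigation sketch)."""
    resp = client.chat.completions.create(
        model=model,  # placeholder model name
        messages=[{"role": "user", "content": MITIGATION_PROMPT.format(abstract=abstract)}],
        temperature=0,
    )
    return resp.choices[0].message.content
```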
When we asked LLMs to simplify abstracts into plain language, they often propagated spin into their summaries. This means LLMs could unintentionally mislead patients and non-experts about the effectiveness of treatments. 😱 [5/7]
February 15, 2025 at 2:34 AM
We asked LLMs how favorably they perceived a treatment’s results (0-10 scale). Even though LLMs could detect spin, they were far more influenced by it than human experts.
Meaning: LLMs believed spun abstracts presented more favorable results! 😬 [4/7]
February 15, 2025 at 2:34 AM
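
A rough sketch of how a 0-10 favorability question could be posed and parsed; the wording, the `rate_favorability` helper, and the model name are illustrative assumptions rather than the study's actual protocol:

```python
import re

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative phrasing of the 0-10 favorability question.
RATING_PROMPT = (
    "On a scale from 0 (not at all favorable) to 10 (extremely favorable), "
    "how favorable are the results of the treatment described in this abstract? "
    "Reply with a single number.\n\nAbstract:\n{abstract}"
)

def rate_favorability(abstract: str, model: str = "gpt-4o") -> int:
    """Elicit a 0-10 favorability rating for a trial abstract."""
    resp = client.chat.completions.create(
        model=model,  # placeholder; the study compared many LLMs to human experts
        messages=[{"role": "user", "content": RATING_PROMPT.format(abstract=abstract)}],
        temperature=0,
    )
    match = re.search(r"\d+", resp.choices[0].message.content)
    return int(match.group()) if match else -1  # -1 signals an unparseable reply
```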
When we prompted 22 LLMs to identify spin in medical abstracts, we found that they were moderately to strongly capable of detecting spin.
However, things got interesting when we asked LLMs to interpret the results… [3/7]
🔽
February 15, 2025 at 2:34 AM
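
For readers curious about the setup, here is a minimal sketch of what a zero-shot spin-detection prompt could look like; the OpenAI client stands in for any of the 22 models, and the prompt wording and `detect_spin` helper are assumptions, not the paper's exact instructions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative prompt wording; the paper's exact instructions may differ.
DETECTION_PROMPT = (
    "Spin is reporting that makes a treatment look more beneficial than the "
    "trial results support. Does the following abstract contain spin? "
    "Answer Yes or No.\n\nAbstract:\n{abstract}"
)

def detect_spin(abstract: str, model: str = "gpt-4o") -> bool:
    """Ask an LLM whether an abstract contains spin (zero-shot)."""
    resp = client.chat.completions.create(
        model=model,  # placeholder; the study covered 22 different LLMs
        messages=[{"role": "user", "content": DETECTION_PROMPT.format(abstract=abstract)}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")
```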
🚨 Do LLMs fall for spin in medical literature? 🤔

In our new preprint, we find that LLMs are susceptible to biased reporting of clinical treatment benefits in abstracts—more so than human experts. 📄🔍 [1/7]

Full Paper: arxiv.org/abs/2502.07963

🧵👇
February 15, 2025 at 2:34 AM