Dustin Wright
@dustinbwright.com
3.7K followers 1K following 49 posts
Postdoc @ University of Copenhagen (CopeNLU) | Making the world's knowledge reliable and accessible w/ ML + NLP | Former UMSI, AI2, IBM Research, UCSD | https://dustinbwright.com
dustinbwright.com
And finally, this work was done with amazing colleagues!

Sarah Masud, Jared Moore, @srishtiy.bsky.social, @mariaa.bsky.social, Peter Ebert Christensen, Chan Young Park, and @iaugenstein.bsky.social

10/10
dustinbwright.com
🛣️ Our methodology can be used in future work to study epistemic diversity for arbitrary topics, downstream tasks, and real-world use cases with open-ended plain-text LLM outputs. This allows researchers to answer questions about which, whose, and how much knowledge LLMs represent.

9/10
dustinbwright.com
📏 To quantify diversity, we use a statistically grounded metric commonly applied to species diversity in ecology, which lets us fairly compare the relative diversity of models across settings.

8/10
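For intuition, here is a minimal sketch of an ecology-style diversity index: a Hill number computed over claim-cluster abundances, where q = 1 gives the exponential of Shannon entropy (the "effective number of clusters"). The function and the choice of q are illustrative assumptions, not necessarily the paper's exact estimator.

    # Minimal sketch (illustrative, not the paper's exact estimator):
    # Hill-number diversity over claim-cluster abundances, the same family
    # of indices used for species diversity in ecology.
    import math
    from collections import Counter

    def hill_diversity(cluster_labels, q=1.0):
        """Effective number of distinct claim clusters (Hill number of order q)."""
        counts = Counter(cluster_labels)
        total = sum(counts.values())
        p = [c / total for c in counts.values()]
        if math.isclose(q, 1.0):
            # q = 1 is the exponential of Shannon entropy
            return math.exp(-sum(pi * math.log(pi) for pi in p))
        return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))

    # 10 claims spread over 3 clusters vs. concentrated in 1 dominant cluster
    print(hill_diversity(["a"] * 4 + ["b"] * 3 + ["c"] * 3))  # ~2.97 (diverse)
    print(hill_diversity(["a"] * 9 + ["b"]))                  # ~1.38 (homogeneous)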
dustinbwright.com
🪛 Approach: we propose a new methodology that samples plain-text LLM outputs using 200 prompt variations from real chats across 155 topics, decomposes the outputs into individual claims, and clusters those claims based on entailment.

7/10
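To make the pipeline concrete, here is a hypothetical sketch of the clustering step, assuming an off-the-shelf NLI model (roberta-large-mnli) and a greedy rule that puts two claims in the same cluster when they mutually entail each other; the paper's actual models and merging procedure may differ. Cluster sizes from this step are what a diversity index like the one sketched above is computed over.

    # Hypothetical sketch of entailment-based claim clustering.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    NAME = "roberta-large-mnli"  # assumed model; the paper may use another
    tok = AutoTokenizer.from_pretrained(NAME)
    nli = AutoModelForSequenceClassification.from_pretrained(NAME).eval()

    def entails(premise, hypothesis):
        inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs = nli(**inputs).logits.softmax(-1)[0]
        return probs[2].item() > 0.5  # index 2 = ENTAILMENT for this model

    def cluster_claims(claims):
        clusters = []  # each cluster is a list of mutually entailing claims
        for claim in claims:
            for cluster in clusters:
                rep = cluster[0]  # compare against the cluster's first claim
                if entails(claim, rep) and entails(rep, claim):
                    cluster.append(claim)
                    break
            else:
                clusters.append([claim])
        return clusters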
dustinbwright.com
🌍 There are gaps in country-specific knowledge. When matching claims to English and local-language Wikipedia, no local language is statistically significantly more represented than English, while English-language knowledge is statistically significantly more represented for 5 of 8 countries.

6/10
dustinbwright.com
🏗️ Model size has an unintuitive negative impact on diversity: smaller models tend to be more diverse.

🔎 RAG has a positive impact on diversity, underscoring the value of retrieval for broadening LLM outputs. However, the gains from RAG are not equal across topics about different countries.

5/10
dustinbwright.com
📈 Knowledge in LLMs across 3 of 4 model families has *expanded* since 2023 ✅; however, their absolute diversity is quite low compared to even a very modest traditional search baseline 👎

4/10
dustinbwright.com
👍 To assess this risk, we set out to measure the extent to which LLMs are homogeneous in terms of the *real-world claims* they generate. We perform a large study across 27 LLMs and 2 generation settings, with different model versions and sizes. In a nutshell, our findings are:

3/10
dustinbwright.com
🤔 A lot of people are using LLMs. However, their outputs are not very diverse. What does this mean for the future of knowledge? Many speculate that overreliance on LLMs will lead to "knowledge collapse", where the diversity of human knowledge narrows through reliance on homogeneous LLMs.

2/10
dustinbwright.com
Which, whose, and how much knowledge do LLMs represent?

I'm excited to share our preprint answering these questions:

"Epistemic Diversity and Knowledge Collapse in Large Language Models"

📄Paper: arxiv.org/pdf/2510.04226
💻Code: github.com/dwright37/ll...

1/10
dustinbwright.com
🦾 We demonstrate across 5 LLMs and 4 datasets that LLMs adapted with SUnsET generate more relevant and factually consistent evidence, extract evidence from more diverse locations in their context, and produce more relevant and consistent summaries than baselines.
dustinbwright.com
🔎 We show that existing large language models often copy evidence incorrectly and get "lost-in-the-middle". To help with this task, we create the Summaries with Unstructured Evidence Text dataset (☀️SUnsET☀️), a synthetic dataset that can be used to train models for unstructured evidence citation.
dustinbwright.com
💡 Normally, when automatically generated summaries cite supporting evidence, they cite evidence at a fixed granularity, e.g., individual sentences or whole documents. Our work proposes extracting spans of *any* length as more relevant and consistent evidence for long-context, query-focused summarization.
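As a toy illustration of arbitrary-granularity evidence, the sketch below scores variable-length sentence windows against a query by embedding similarity and returns the best one. This is only an assumption-laden stand-in: the SUnsET-adapted models generate evidence spans rather than retrieving them, and the encoder name and parameters here are illustrative.

    # Toy sketch: pick the variable-length span most similar to the query.
    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

    def best_span(query, sentences, max_len=4):
        """Return the contiguous sentence window most similar to the query."""
        candidates = [
            " ".join(sentences[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(sentences) - n + 1)
        ]
        sims = util.cos_sim(encoder.encode([query]), encoder.encode(candidates))[0]
        return candidates[int(sims.argmax())]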
dustinbwright.com
🎉 Our work on attribution in summarization is now accepted to #EMNLP2025 main! 🎉

"Unstructured Evidence Attribution for Long Context Query Focused Summarization"

w/ @zainmujahid.me , Lu Wang, @iaugenstein.bsky.social , and @davidjurgens.bsky.social
dustinbwright.com
There’s something really special about seeing a physical print copy of our work 🤩

You can read “Efficiency is Not Enough: A Critical Perspective on Environmentally Sustainable AI” now in CACM!!!

dl.acm.org/doi/10.1145/...
Reposted by Dustin Wright
andersgiovanni.com
No fewer than three people were needed to cover all aspects of our dialogue simulation paper. Thanks for the interest; check out the preprint. Link in Dustin's post.
@dustinbwright.com @ic2s2.bsky.social #ic2s2
dustinbwright.com
We had a great time talking about dialogue simulation with LLMs at @ic2s2.bsky.social!!! Amazing work by all of our colleagues at UMich.

See the preprint of this work here: arxiv.org/abs/2409.08330
Reposted by Dustin Wright
ariannapera.bsky.social
The work “Extracting Participation in Collective Action from Social Media”, in collaboration with @lajello.bsky.social, is at @ic2s2.bsky.social today!

Check out the paper ojs.aaai.org/index.php/IC... and models huggingface.co/ariannap22

Feat. poster and research buddy @alessianetwork.bsky.social ♥️
dustinbwright.com
Open PhD positions in Denmark! daracademy.dk/fellowship/f...

If you want to apply to work with me and Johannes Bjerva at @aau.dk Copenhagen, I'll be at @ic2s2.bsky.social this week and @aclmeeting.bsky.social next week! DM me if you'd like to meet :)
dustinbwright.com
Join us for the Pre-ACL 2025 Workshop in Copenhagen, 26 July, 2025!
🇩🇰 We're bringing international NLP experts from Columbia, UCLA, University of Michigan, and more to Copenhagen to meet with the Danish NLP community. 🇩🇰
📅 Poster submission deadline: June 16, 2025
🔗 Register: www.aicentre.dk/events/pre-a...
Reposted by Dustin Wright
aicentre.dk
Thanks to @dustinbwright.com (@copenlu.bsky.social) and @mxij.me (@itu.dk) for sharing insights on your research in the Speech & Language collaboratory at the Last Fridays Talks!