Come chat with me about this at #EMNLP2025!
Huge thanks to my amazing collaborators
@indiiigo.bsky.social, @wanlo.bsky.social, Elisa Rogers, and @mstrohm.bsky.social!
7/7
This is the first systematic evidence of language-switching bias in persona prompting!
6/7
Across all tasks and measures, larger models performed worse than smaller ones—sometimes even showing lower opinion alignment than a random baseline.
5/7
4/7
✅ Reduces stereotyping
✅ Improves diversity of responses
3/7
Simulations of nonbinary, Hispanic, and Middle Eastern personas are more stereotyped and less diverse than those of other groups.
2/7
You can find our code and annotated dataset of papers here: github.com/Indiiigo/LLM...
We annotated way more things, e.g., LLM used, response format, so please check it out!
5/5