Aina Gallego
@ainagallego.bsky.social
Associate Professor of Political Science at Universitat de Barcelona. I am interested in AI & Politics.

http://www.ainagallego.org/
Full call for papers:
February 6, 2025 at 7:04 AM
For more details on the approach, check out our open-access paper: 5/5
www.cambridge.org/core/journal...
Positioning Political Texts with Large Language Models by Asking and Averaging | Political Analysis | Cambridge Core
January 31, 2025 at 7:46 AM
Positioning British party manifestos on the economic policy dimension (left-wing to right-wing scale). The numbers next to the dots indicate the years of the manifestos (4/5)
January 31, 2025 at 7:46 AM
Positioning Senators of the 117th Congress on the left-right ideological spectrum based on a random sample of 100 of their tweets (3/5)
January 31, 2025 at 7:46 AM
Positioning tweets published by members of the US Congress on the left-right ideological spectrum: (2/5)
January 31, 2025 at 7:46 AM
Please follow my brilliant coauthor @glemens.bsky.social, the soul of this project.
January 28, 2025 at 8:45 AM
Explore more: the paper is available open access here: 7/7
www.cambridge.org/core/journal...
Positioning Political Texts with Large Language Models by Asking and Averaging | Political Analysis | Cambridge Core
January 28, 2025 at 7:02 AM
Our findings provide a proof of concept that positioning political texts by "asking" an LLM and "averaging" the position ratings produces accurate position estimates. An important caveat is that the scope of application of this approach is unclear without case-by-case validation. 6/7
January 28, 2025 at 7:02 AM
Ideological scaling with LLMs has significant advantages over previous approaches:
• Cost efficiency ($1.50 for 900 tweets vs. >$1,000 for human coding)
• Speed
• Scalability to diverse text types
• Reproducibility (with open LLMs like Llama or Mistral) 5/7
January 28, 2025 at 7:02 AM
Our results show that position estimates correlate strongly (r > 0.90) with benchmarks based on expert coding, crowd workers, or roll-call votes. Moreover, this direct-query approach outperforms traditional supervised classifiers like BERT trained on thousands of texts, especially for tweets! 4/7
January 28, 2025 at 7:02 AM
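To make the benchmark comparison concrete, here is a minimal Pearson-correlation sketch in Python; the `pearson` helper and both score lists are made up for illustration and are not data from the paper:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative (made-up) numbers: LLM position estimates vs. expert scores.
llm_scores = [20, 35, 50, 70, 85]
expert_scores = [2.0, 3.9, 4.5, 7.2, 8.0]
print(round(pearson(llm_scores, expert_scores), 2))
# prints 0.99
```

Correlations above 0.90, as reported in the paper, indicate that the LLM estimates rank and space texts almost identically to the benchmark.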
We validated this method with:
• A random sample of 900 US Congress tweets published in 2023
• Positions of US Senators based on their published tweets
• UK party manifestos on economic/social policies
• Multilingual EU legislative speeches on subsidy policy 3/7
January 28, 2025 at 7:02 AM
The approach is simple: we ask the LLM to evaluate where a short text (e.g., tweet or sentence) stands on a political scale such as a 0 to 100 left-right scale. We then average the ratings to obtain the position of longer texts (party manifestos) or political actors (US Senators). 2/7
January 28, 2025 at 7:01 AM
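The "asking and averaging" recipe described above can be sketched in a few lines of Python. The prompt text and the `rate_text`/`position` helpers below are illustrative assumptions, not the paper's exact implementation, and the canned-reply lambda stands in for a real model call:

```python
import statistics

# Hypothetical prompt; the paper's exact wording may differ.
PROMPT = (
    "On a scale from 0 (far left) to 100 (far right), where does the "
    "following text stand politically? Answer with a single number.\n\n"
    "Text: {text}"
)

def rate_text(text, llm):
    """'Asking': query the LLM (any callable prompt -> string) for a 0-100 rating."""
    reply = llm(PROMPT.format(text=text))
    return float(reply.strip())

def position(texts, llm):
    """'Averaging': the position of a longer document or actor is the
    mean of the per-text ratings of its short pieces."""
    return statistics.mean(rate_text(t, llm) for t in texts)

# Stand-in for a real LLM call, returning canned ratings.
canned = iter(["20", "35", "50"])
print(position(["tweet 1", "tweet 2", "tweet 3"], lambda p: next(canned)))
# prints 35.0
```

Swapping the lambda for an actual API or local-model call (e.g., to Llama or Mistral) turns this sketch into a working pipeline.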