Louis Teitelbaum
@louisteitelbaum.bsky.social
37 followers 92 following 10 posts
Computational Social Psychology @ Ben-Gurion University, Distributional Semantics × Spread of Ideas. Co-author of https://ds4psych.com/
Reposted by Louis Teitelbaum
mikeybiddlestone.bsky.social
Our article "Norm-enhanced prebunking for actively open-minded thinking indirectly improves misinformation discernment and reduces conspiracy beliefs" has now been published open access in JESP!
@rakoenmaertens.bsky.social
@profsanderlinden.bsky.social
authors.elsevier.com/sd/article/S00…
🧵👇
louisteitelbaum.bsky.social
9/
Finally: cosine is not the only similarity metric out there. We go through the pros and cons of the main options, with advice about when, e.g., the dot product is more effective.
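A quick base-R illustration of the difference (just a sketch with made-up toy vectors):

```r
# Toy vectors standing in for a document embedding and a construct embedding.
doc_vec       <- c(0.2, -0.5, 0.7, 0.1)
construct_vec <- c(0.6, -1.5, 2.1, 0.3)

dot_product <- sum(doc_vec * construct_vec)

cosine_sim <- dot_product /
  (sqrt(sum(doc_vec^2)) * sqrt(sum(construct_vec^2)))

# Cosine normalizes away vector length; the dot product also rewards magnitude,
# which can matter when embedding norms carry meaningful signal.
dot_product
cosine_sim
```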
louisteitelbaum.bsky.social
8/
You may think good and evil are opposites, but your embedding model might think: “Those are both moral judgements! Very similar!” If your construct has an opposite, consider using an anchored vector.
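A minimal base-R sketch of an anchored vector (an illustration of the idea, not necessarily the paper's exact recipe); `embed()` here is a fake stand-in for a real embedding model:

```r
# Fake embedding function, purely for illustration: returns a deterministic
# pseudo-random vector per input. Swap in a real model in practice.
embed <- function(text, dim = 8) {
  set.seed(sum(utf8ToInt(text)))
  rnorm(dim)
}

# Anchored vector: the direction pointing from "evil" toward "good",
# so the two poles end up on opposite sides rather than looking alike.
anchor <- embed("good") - embed("evil")

# Score a text by cosine similarity with that direction.
text_vec <- embed("She donated her savings to the shelter.")
score <- sum(text_vec * anchor) /
  (sqrt(sum(text_vec^2)) * sqrt(sum(anchor^2)))
score
```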
louisteitelbaum.bsky.social
7/
CAV = learn a vector representation from labeled examples. Humans rate a few posts; you apply the pattern to analyze new texts! This new method gives precise, interpretable scores if you have relevant training data on hand.
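One way to sketch the general idea in base R, using ridge regression on placeholder data (an assumption-laden illustration, not necessarily the exact CAV procedure described in the paper):

```r
# Placeholder data: 50 human-rated posts with 8-dimensional embeddings.
set.seed(1)
X_train <- matrix(rnorm(50 * 8), nrow = 50)  # rows = rated posts
ratings <- rnorm(50)                         # human ratings (placeholder)

# Closed-form ridge regression: w = (X'X + lambda * I)^{-1} X'y
lambda <- 0.1
w <- solve(t(X_train) %*% X_train + lambda * diag(ncol(X_train)),
           t(X_train) %*% ratings)

# Apply the learned construct vector to score new, unrated texts.
X_new  <- matrix(rnorm(5 * 8), nrow = 5)
scores <- X_new %*% w
scores
```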
louisteitelbaum.bsky.social
6/
CCR = embed a questionnaire. Very powerful when your texts are similar to questionnaire scale items (e.g. open-ended responses). We point out a risk—if you aren’t careful, you might measure how much your texts sound like psychological questionnaires—but there are solutions!
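The core move, sketched in base R with a fake `embed()` stand-in and invented scale items (an illustration of the general idea, not the paper's exact pipeline):

```r
# Fake embedding function, purely for illustration.
embed <- function(text, dim = 8) {
  set.seed(sum(utf8ToInt(text)))
  rnorm(dim)
}

# Invented worry-style scale items.
items <- c(
  "I often worry about things that might go wrong.",
  "I feel tense or on edge much of the time."
)
item_vecs <- t(sapply(items, embed))   # one row per scale item
scale_vec <- colMeans(item_vecs)       # averaged item embedding = construct vector

# Score an open-ended response by cosine similarity to the scale vector.
resp_vec  <- embed("Lately I can't stop thinking about everything falling apart.")
ccr_score <- sum(resp_vec * scale_vec) /
  (sqrt(sum(resp_vec^2)) * sqrt(sum(scale_vec^2)))
ccr_score
```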
louisteitelbaum.bsky.social
5/
DDR = average embedding of a word list. Great for summarizing abstract dimensions (emotion, morality) across genres. Not good for more complex constructs. NEW important tip: weight words by frequency to reduce noise from rare words.
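In base R the idea looks roughly like this (a sketch; `embed()` is a fake stand-in and the word frequencies are invented):

```r
# Fake embedding function, purely for illustration.
embed <- function(text, dim = 8) {
  set.seed(sum(utf8ToInt(text)))
  rnorm(dim)
}

dictionary <- c("happy", "joy", "delighted", "content", "cheerful")
word_freq  <- c(900, 650, 40, 300, 25)   # invented corpus frequencies

word_vecs <- t(sapply(dictionary, embed))  # one row per dictionary word

# Frequency-weighted average: rare, noisier words contribute less.
weights <- word_freq / sum(word_freq)
ddr_vec <- colSums(word_vecs * weights)

# Compare a document to the dictionary vector with cosine similarity.
doc_vec   <- embed("What a wonderful, joyful day!")
ddr_score <- sum(doc_vec * ddr_vec) /
  (sqrt(sum(doc_vec^2)) * sqrt(sum(ddr_vec^2)))
ddr_score
```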
louisteitelbaum.bsky.social
4/
We review 3 ways to improve on traditional methods: Distributed Dictionary Representation (DDR), Contextualized Construct Representation (CCR) & our new Correlational Anchored Vectors (CAV).
Each has advantages and disadvantages.
louisteitelbaum.bsky.social
3/
Your trusty Likert scale questionnaire could be free response instead.
Your validated word list could be leveraged to analyze words that aren’t included.
Your painstakingly MTurk-rated dataset could be extended to analyze 10,000 social media posts.
louisteitelbaum.bsky.social
2/
What’s an embedding?
Why choose one model over another?
Why do you need embeddings when you can ask ChatGPT to rate your texts?
Take a look: doi.org/10.31234/osf...
louisteitelbaum.bsky.social
New conceptual review + tutorial on text embeddings out in #APA_Journals w/ @almogsi. Beginner-friendly, but experts will find spicy new takes as well. Tag a colleague who’s still counting words... #RStats #tidyverse #quanteda
1/
louisteitelbaum.bsky.social
Word embeddings are still the right tool for studying words.
Blind people show similar associations between adjectives (e.g. cold) and colours (e.g. blue) as sighted people; word embedding models trained on corpora of written and spoken language learn these associations from indirect co-occurrences.
www.nature.com/articles/s44...
Learning about color from language - Communications Psychology
Reposted by Louis Teitelbaum
soniakmurthy.bsky.social
(1/9) Excited to share my recent work on "Alignment reduces LM's conceptual diversity" with @tomerullman.bsky.social and @jennhu.bsky.social, to appear at #NAACL2025! 🐟

We want models that match our values...but could this hurt their diversity of thought?
Preprint: arxiv.org/abs/2411.04427
Reposted by Louis Teitelbaum
fabiocarrella.bsky.social
How politicians communicate shapes online discourse in ways we might overlook.

Our new paper shows that their choice between fact-based (evidence-driven) and belief-based (sincerity-driven) honesty creates a "contagion" effect, influencing how users engage and respond. ⬇️(1/8)
Reposted by Louis Teitelbaum
jongreen.bsky.social
Extremely happy to share that "Curation Bubbles" is online (open access!) at @apsrjournal.bsky.social: www.cambridge.org/core/journal...
Title: Curation Bubbles
Abstract: Information on social media is characterized by networked curation processes in which users select other users from whom to receive information, and those users in turn share information that promotes their identities and interests. We argue this allows for partisan “curation bubbles” of users who share and consume content with consistent appeal drawn from a variety of sources. Yet, research concerning the extent of filter bubbles, echo chambers, or other forms of politically segregated information consumption typically conceptualizes information’s partisan valence at the source level as opposed to the story level. This can lead domain-level measures of audience partisanship to mischaracterize the partisan appeal of sources’ constituent stories—especially for sources estimated to be more moderate. Accounting for networked curation aligns theory and measurement of political information consumption on social media.
Figure 1: Stylized Examples. a) Users consuming information directly from sources; b) Users curating information for other users.
Figure 4: URL Scores by Share Volume for Selected Domains on Twitter and Facebook.
Figure 8: Proportion of URLs Substantively Distinct from Domain for Different Facebook Engagement Types.
Reposted by Louis Teitelbaum
almogsi.bsky.social
Our paper is finally out!
fintan-smith.bsky.social
Hyperpartisan content thrives on social media, increasing affective polarisation and poisoning political discourse. Our new paper with @lewan.bsky.social, @almogsi.bsky.social, and Dawn Holford, just out in @commspsychol.bsky.social, finds that inoculation interventions may help us tackle the problem. 🧵
Reposted by Louis Teitelbaum
tobigerstenberg.bsky.social
🔊 New paper just accepted in JPSP 🥳

In "Inference from social evaluation", we explore how people use social evaluations, such as judgments of blame or praise, to figure out what happened.

📜 osf.io/preprints/ps...

📎 github.com/cicl-stanfor...

1/6