This study shows we should focus on building what we call digital strength:
a holistic skillset for navigating AI-mediated information environments,
built not just on detection skills
but also on open-minded thinking and evidentiary judgment. (10/10)
It’s not enough to teach people how to spot AI-generated content.
We also need to help them know when to trust authentic content.
Effective interventions must combine GenAI literacy, cognitive reflection training, and demographic targeting. (9/)
Two factors helped:
🧠 Actively Open-Minded Thinking (AOT):
A cognitive tendency to consider evidence that challenges one’s prior beliefs.
📚 GenAI knowledge:
Factual understanding of generative AI.
AOT especially helped restore trust in real images, not just spot synthetic ones. (8/)
Older adults: more likely to doubt authentic images
Women: a larger accuracy gap than men
Partisans: more likely to doubt real images that conflict with their beliefs
#GenAI is amplifying existing digital and partisan divides. (7/)
Because trust in authentic political imagery is eroding.
This isn’t just about deception—it’s about undermining visual evidence itself, leading to a "liar’s dividend":
real images get dismissed as fake. (6/)
Participants over-attributed AI generation, labeling nearly 60% of all images as synthetic—even though only half were.
This "AI attribution bias" leads to:
✅ Higher accuracy detecting synthetic images
❌ Lower accuracy recognizing authentic images (5/)
Participants evaluated political images balanced by party lean (pro-Dem vs. pro-Rep) and image type (authentic vs. AI-generated), using actual images that circulated online during the election. (4/)
⚠️ BUT our study shows a different threat:
People have become suspicious of real images too.
Authentic visual evidence is no longer taken for granted. (3/)
But did voters mistake them for authentic imagery? (2/)