Christoph Abels
cabels18.bsky.social
Christoph Abels
@cabels18.bsky.social
Post-Doctoral Fellow @unipotsdam.bsky.social‬, visiting @arc-mpib.bsky.social | PhD @hertieschool.bsky.social | Democracy, Technology, Behavioral Public Policy | Website: https://christophabels.com
Super important work, thanks for sharing!
October 26, 2025 at 10:05 AM
Very glad to hear that!
October 9, 2025 at 9:00 AM
Thank you so much for spreading the word! We are really in a crucial period right now - and everyone should understand that protecting democracy is at its core a joint endeavor.
October 7, 2025 at 12:19 PM
Reposted by Christoph Abels
This month's Facts in my climate action newsletter focuses on the excellent study by @cabels18.bsky.social, @kiiahuttunen.bsky.social, Ralph Hertwig & @lewan.bsky.social. I really appreciated their excellent analysis, please check out my summary here: wecanfixit.substack.com/i/174419696/...
Democracy is worth fighting for. Here's how
Facts: Fight democratic backsliding ⚖️| Feelings: Unthinkable Resource Hub🫶| Action: Protect democracy🗳️
wecanfixit.substack.com
October 7, 2025 at 11:52 AM
GenAI offers powerful tools. But when it shapes what we believe, especially about our own health, we need to treat it as a behavioral system with real-world consequences.

@lewan.bsky.social @eloplop.bsky.social @stefanherzog.bsky.social @dlholf.bsky.social
July 28, 2025 at 10:38 AM
What can we do?
We call for a multi-level approach:

Design-level interventions to help users maintain situational awareness
Boosting user competencies to help them understand the technology's impact
Developing public infrastructure to detect and monitor unintended system behaviour
July 28, 2025 at 10:38 AM
This isn't just about potentially problematic design.

It’s about systemic risk: As GenAI tools fragment (Custom GPTs, GPT Stores, third-party apps), the public is exposed to a growing landscape of low-oversight, increasingly high-trust agents.

And that creates challenges for the individual.
July 28, 2025 at 10:38 AM
You can make ChatGPT even more biased, just by tweaking a few settings.

We built a Custom GPT that’s a little more "friendly" and engagement-driven.

It ended up validating fringe treatments like quantum healing, just to keep the user happy.
July 28, 2025 at 10:38 AM
In this paper, we showcase how this plays out across 3 “pressure points”:

Biased query phrasing → biased answers
Selective reading → echo chambers
Dismissal of contradiction → belief reinforcement

Confirmation bias isn't new. GenAI just takes it a bit further.
July 28, 2025 at 10:38 AM
Generative AI tools are designed to adapt to you: your tone, your preferences, your beliefs.
That’s great for writing emails.

But in health contexts, that adaptability becomes hypercustomization - and can entrench existing views, even when they're wrong.
Sage Journals: Discover world-class research
Subscription and open access journals from Sage, the world's leading independent academic publisher.
doi.org
July 28, 2025 at 10:38 AM
You can read the full open-access article here:
doi.org/10.1177/2379...

Thanks for reading!
Sage Journals: Discover world-class research
Subscription and open access journals from Sage, the world's leading independent academic publisher.
doi.org
July 7, 2025 at 8:40 AM
I also discuss many of the arguments in a recent interview in @spiegel.de (in German).

www.spiegel.de/netzwelt/kue...
(S+) KI: So doof macht uns ChatGPT
KI-Systeme sind bisweilen zu freundlich, sagt Verhaltensforscher Christoph Abels. Hier erklärt er, warum ChatGPT uns doof macht.
www.spiegel.de
July 7, 2025 at 8:40 AM
Hypercustomization offers useful functionality - but it also complicates oversight and raises new policy questions.

Early, thoughtful action can help ensure that the benefits are not overshadowed by unintended consequences.
July 7, 2025 at 8:40 AM
💬 Response 5: In-app reflection prompts
GenAI systems should occasionally ask users to pause and reflect:
“How is this conversation shaping your views?”
“Is the system affirming everything you say?”

These prompts reduce overreliance and help surface bias. Although further research is needed.
July 7, 2025 at 8:40 AM
🧠 Response 4: Boosting GenAI literacy
Disclaimers aren't enough. We need to train users - through games, videos, tools - how to recognize biased responses, resist manipulation, and navigate emotionally persuasive content.

Boosting builds agency without restricting access.
July 7, 2025 at 8:40 AM
🤲 Response 3: Data donations (with consent)
To understand real-world GenAI risks, we need real-world data.

We recommend voluntary data donation channels, where users can share selected interactions with researchers. Anonymized, secure, and essential for building safer systems.
July 7, 2025 at 8:40 AM
📢 Response 2: Public issue reporting
Think of it like post-market drug safety:
We need public platforms where users can report problematic GenAI behavior - bias, sycophancy, manipulation, etc.

This kind of crowdsourced oversight can catch what testing alone might miss.
July 7, 2025 at 8:40 AM
🧪 Response 1: Public black-box testing
GenAI providers should open up standardized test datasets so independent researchers can evaluate how these systems respond.

This helps surface ethical issues, hallucinations, or manipulation risks that might otherwise remain hidden.
July 7, 2025 at 8:40 AM
We suggest five key responses:
– Public black-box testing
– Issue reporting platforms
– Voluntary data donation
– GenAI literacy interventions
– In-app prompts for critical reflection
July 7, 2025 at 8:40 AM