Gunnar König
@gunnark.bsky.social
PostDoc @ Uni Tübingen
explainable AI, causality

gunnarkoenig.com
Reposted by Gunnar König
🔹 Speakers: @jessicahullman.bsky.social, @doloresromerom.bsky.social, @tpimentel.bsky.social & Bernt Schiele

🕒 Call for contributions open until Oct 15 (AoE)
🔗 More info: eurips.cc/ellis
ELLIS UnConference - A NeurIPS-endorsed conference in Europe, held in Copenhagen, Denmark
October 13, 2025 at 9:10 AM
In short: Many XAI papers are based on goals such as "transparency". But what does that mean? We argue that XAI methods should be motivated by concrete goals (e.g., explaining how to change an unfavorable prediction) instead of vague concepts (e.g., interpretability).

Section 3, Misconception 1
October 8, 2025 at 8:13 AM
Our article is also on arXiv: arxiv.org/pdf/2306.04292
October 8, 2025 at 7:57 AM
Not that I know of. But the method is relatively easy to implement. Please reach out if you would like to use it. I'm happy to assist!
July 8, 2025 at 3:00 PM
Sounds interesting? Have a look at our paper!

Joint work with Eric Günther and @ulrikeluxburg.bsky.social.
arxiv.org
July 7, 2025 at 3:43 PM
DIP
✅ is unique under mild assumptions,
✅ is easy to interpret,
✅ entails an efficient estimation procedure,
✅ describes properties of the data (instead of just a specific model), and
✅ comes with a Python implementation (github.com/gcskoenig/dipd).
July 7, 2025 at 3:41 PM
In our recent AISTATS paper, we propose DIP, a novel mathematical decomposition of feature attribution scores that cleanly separates individual feature contributions from the contributions of interactions and dependencies.
July 7, 2025 at 3:40 PM
Dependencies are not only a neglected cooperative force; they also complicate the definition and quantification of feature interactions. In particular, the contributions of interactions and dependencies may cancel each other out, so they must be disentangled to be fully revealed.
July 7, 2025 at 3:39 PM
For example, suppose we predict kidney function (Y) from creatinine (C) and muscle mass (M), and that C reflects Y but also M, which is not linked to Y. Here, M becomes useful once combined with C, as it allows us to subtract irrelevant variation from C. In other words, C&M cooperate via dependence!
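A minimal simulation sketch of this cooperation effect (a toy example of my own, not the paper's DIP estimator): assuming C = Y + M with M independent of Y, M alone explains nothing, yet adding it to C lets a linear model recover Y exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Y: kidney function; M: muscle mass, independent of Y; C: creatinine = Y + M
y = rng.normal(size=n)
m = rng.normal(size=n)
c = y + m

def r2(cols, target):
    """R^2 of an OLS fit of target on the given columns (with intercept)."""
    X = np.column_stack([np.ones(len(target)), *cols])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return 1 - resid.var() / target.var()

r2_c = r2([c], y)      # ~0.5: C alone is a noisy proxy for Y
r2_m = r2([m], y)      # ~0.0: M alone is useless, it is independent of Y
r2_cm = r2([c, m], y)  # ~1.0: jointly they recover Y = C - M exactly
```

The joint score exceeds the sum of the individual scores: the surplus comes purely from the dependence between C and M, with no interaction term in the data-generating process.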
July 7, 2025 at 3:39 PM
Determining whether variables are relevant due to cooperation is crucial, as variables that cooperate must be considered jointly to understand their relevance. Notably, features cooperate not only through interactions but also through statistical dependencies, which existing methods neglect.
July 7, 2025 at 3:38 PM