Tim van Erven
@timvanerven.nl
Associate professor in machine learning at the University of Amsterdam. Topics: (online) learning theory and the mathematics of explainable AI.

www.timvanerven.nl

Theory of Interpretable AI seminar: https://tverven.github.io/tiai-seminar
When reading a large literature, it is really helpful to have strong opinions that help to categorize papers.

One of mine for explainable AI is that methods need to address a fundamental limit in how much information can be communicated.

Blog post: www.timvanerven.nl/blog/xai-com... (no math)
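The limit can be made concrete with a toy counting argument (my own sketch, not taken from the blog post): an explanation of k bits can distinguish at most 2^k model behaviors, while even the class of Boolean functions on n binary features already has 2^(2^n) members.

```python
import math

def max_distinguishable_behaviors(bits: int) -> int:
    """A k-bit explanation can label at most 2**k distinct model behaviors."""
    return 2 ** bits

def bits_needed(n_behaviors: int) -> int:
    """Minimum number of bits required to single out one of n_behaviors."""
    return math.ceil(math.log2(n_behaviors))

# Even for just 5 binary features there are 2**(2**5) Boolean functions,
# so fully identifying the model already takes 2**5 = 32 bits of explanation,
# and this grows doubly exponentially in the number of features.
n_features = 5
print(bits_needed(2 ** (2 ** n_features)))  # 32
```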
The Central Challenge in Explainable AI: Channel Capacity
Explainable AI is about communication: we want to tell people how or why a machine learning model is making certain decisions. Why is this so difficult? In this post I take an information-theoretic pers...
www.timvanerven.nl
November 21, 2025 at 1:11 PM
Reposted by Tim van Erven
Our libraries are cutting staff so that Elsevier can have its 32% profit margin
A staggering statistic: "North American researchers were charged over US$2.27 billion by just two for-profit publishers. The Canadian research councils and the US National Science Foundation were allocated US$9.3 billion in that year." What are we doing?
We wrote the Strain on scientific publishing to highlight the problems of time & trust. With a fantastic group of co-authors, we present The Drain of Scientific Publishing:

a 🧵 1/n

Drain: arxiv.org/abs/2511.04820
Strain: direct.mit.edu/qss/article/...
Oligopoly: direct.mit.edu/qss/article/...
November 14, 2025 at 1:37 AM
Reposted by Tim van Erven
The schedule for our Workshop on the Theory of XAI is now online!

🕰️ Dec 2, starting 9am
📍 Bella Center Copenhagen (co-located with EurIPS)
🔗 sites.google.com/view/theory-...
Theory of XAI Workshop
Explainable AI (XAI) is now deployed across a wide range of settings, including high-stakes domains in which misleading explanations can cause real harm. For example, explanations are required by law ...
sites.google.com
November 12, 2025 at 10:45 AM
Great seminar talk by @ulrikeluxburg.bsky.social yesterday.

Here's the video if you missed it: youtu.be/zR_GvDF65OM

Seminar info: tverven.github.io/tiai-seminar/
November 12, 2025 at 8:06 AM
Happening in two hours.
Coming up tomorrow (Tuesday 11 Nov) in the Theory of Interpretability seminar: Ulrike von Luxburg will discuss why
informative explanations only exist for simple functions 👀

tverven.github.io/tiai-seminar/
November 11, 2025 at 1:01 PM
Coming up tomorrow (Tuesday 11 Nov) in the Theory of Interpretability seminar: Ulrike von Luxburg will discuss why
informative explanations only exist for simple functions 👀

tverven.github.io/tiai-seminar/
November 10, 2025 at 9:57 AM
Reposted by Tim van Erven
Here is a formal impossibility result for XAI: Informative Post-Hoc Explanations Only Exist for Simple Functions. I'll give an online presentation about this work next Tuesday in @timvanerven.nl's Theory of Interpretable AI Seminar:

arxiv.org/abs/2508.11441

tverven.github.io/tiai-seminar/
November 7, 2025 at 6:25 AM
Reposted by Tim van Erven
The blog post is available: blog.arxiv.org/2025/10/31/a...
November 1, 2025 at 5:06 PM
I am recruiting via Ellis for a PhD position to develop mathematical foundations for explainable AI.

Benefits:
- Do new fundamental math / machine learning theory on a societally important topic
- Part of a team
- Good conditions in Amsterdam, NL

Applications via Ellis PhD program before Oct. 31.
October 23, 2025 at 9:02 AM
The video for Shahaf's talk is now available: youtu.be/AsJB4kym4Tc

See the seminar website for upcoming talks: tverven.github.io/tiai-seminar/
October 21, 2025 at 3:17 PM
Happening tomorrow, October 21!
The next talk in the Theory of Interpretable AI seminar will be by Shahaf Bassan on "The Computational Complexity of Explaining ML Models".

Tuesday October 21
tverven.github.io/tiai-seminar/
October 20, 2025 at 2:42 PM
The next talk in the Theory of Interpretable AI seminar will be by Shahaf Bassan on "The Computational Complexity of Explaining ML Models".

Tuesday October 21
tverven.github.io/tiai-seminar/
October 16, 2025 at 1:59 PM
Reposted by Tim van Erven
How can we make AI explanations provably correct — not just convincing? 🤔

Join us for the Theory of Explainable Machine Learning Workshop, part of the ELLIS UnConference Copenhagen 🇩🇰 on Dec 2, co-located with #EurIPS.

🕒 Call for contributions open until Oct 15 (AoE)
🔗 eurips.cc/ellis
October 13, 2025 at 9:10 AM
Reposted by Tim van Erven
🎓 Ready to expand your horizons in #AI research? Do your PhD with 2 leading academic institutions across Europe and build your international network with the Academic Track in the #ELLISPhD Program.

Apply now via our central recruiting portal until Oct 31.

👉 All info: https://bit.ly/45DSe75
October 10, 2025 at 1:10 PM
Reposted by Tim van Erven
Looking forward to talking about our work on the value of explanation for decision-making at this workshop
October 7, 2025 at 2:32 PM
Reposted by Tim van Erven
Are you coming?

I will be talking about
#XAI #orms
October 7, 2025 at 6:27 PM
Excellent bachelor's thesis by my student David van Batenburg with a mathematically rigorous treatment of the axiomatic characterization of Shapley values for feature attribution:

arxiv.org/abs/2510.03281

Clears up and corrects many math aspects that tripped me up when first learning about SHAP.
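For readers new to SHAP, the object the thesis axiomatizes can be sketched in a few lines (a generic illustration of the Shapley formula, not code from the thesis):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game given by `value` (set -> float)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Additive game: each player's Shapley value equals its own contribution,
# and the values sum to the grand-coalition value (the efficiency axiom).
v = lambda S: sum(S)
phi = shapley_values([1, 2, 3], v)
# phi is approximately {1: 1.0, 2: 2.0, 3: 3.0}, summing to v({1, 2, 3}) = 6
```

In feature attribution, the "players" are features and `value(S)` is the model's prediction with only the features in S present; the axioms (efficiency, symmetry, dummy, linearity) pin this formula down uniquely.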
Mathematically rigorous proofs for Shapley explanations
Machine Learning is becoming increasingly more important in today's world. It is therefore very important to provide understanding of the decision-making process of machine-learning models. A popular ...
arxiv.org
October 7, 2025 at 1:42 PM
Reposted by Tim van Erven
Interested in provable guarantees and fundamental limitations of XAI? Join us at the "Theory of Explainable AI" workshop Dec 2 in Copenhagen! @ellis.eu @euripsconf.bsky.social

Speakers: @jessicahullman.bsky.social @doloresromerom.bsky.social @tpimentel.bsky.social

Call for Contributions: Oct 15
October 7, 2025 at 12:53 PM
I have 2 open PhD positions on Mathematical Foundations for Explainable AI:

Position 1: werkenbij.uva.nl/en/vacancies... (apply by October 13, 2025)

Position 2: applications via the Ellis PhD Program: ellis.eu/news/ellis-p... by Oct. 31.

Both positions are equivalent (except for starting dates).
Vacancy — PhD Position on Mathematical Foundations for Explainable AI
Are you highly motivated to do PhD research in mathematical machine learning, with special emphasis on mathematical foundations for explainable AI? If yes, the Korteweg-de Vries Institute for Mathemat...
werkenbij.uva.nl
October 7, 2025 at 11:48 AM
🚨 Workshop on the Theory of Explainable Machine Learning

Call for ≤2-page extended abstract submissions by October 15 now open!

📍 Ellis UnConference in Copenhagen
📅 Dec. 2
🔗 More info: sites.google.com/view/theory-...

@gunnark.bsky.social @ulrikeluxburg.bsky.social @emmanuelesposito.bsky.social
September 30, 2025 at 2:01 PM
Reposted by Tim van Erven
Late, but arxiv.org/abs/0804.2996 is *incredible*, so many good lines (e.g., "This comes close to being an accusation of a false claim of priority for a false discovery of an untrue fact, which would be a rare triple-negative in the history of intellectual property disputes.").
The Epic Story of Maximum Likelihood
At a superficial level, the idea of maximum likelihood must be prehistoric: early hunters and gatherers may not have used the words ``method of maximum likelihood'' to describe their choice of where a...
arxiv.org
September 28, 2025 at 9:13 PM
Reposted by Tim van Erven
I am hiring PhD students and/or postdocs to work on the theory of explainable machine learning. Please apply through Ellis or IMPRS, deadlines end of October / mid-November. In particular: Women, where are you? Our community needs you!!!

imprs.is.mpg.de/application
ellis.eu/news/ellis-p...
September 17, 2025 at 6:18 AM
Reposted by Tim van Erven
I reported a paper with fake citations to the NeurIPS editors a few months ago, and I've been informed NeurIPS will be introducing systematic checking of citations.
September 14, 2025 at 10:58 PM
Many interesting ideas in Tobias' talk today!

You can still watch the recording: youtu.be/IZ4m_II3DVI
September 9, 2025 at 3:27 PM
Reposted by Tim van Erven
Wanna chat with an experienced learning theory researcher? We have open office hours online with four awesome folks: Surbhi Goel (@surbhigoel.bsky.social), Gautam Kamath (@gautamkamath.com), Ayush Sekhari, and Lydia Zakynthinou!

Book a slot and ask them anything!
let-all.com/officehours....
September 8, 2025 at 4:26 PM