Mikkel Jordahn
@mjordahn.bsky.social
Probabilistic Machine Learning for Molecular Discovery
In this work we investigated the commonly held belief that trivially equipping Deep Ensembles (DEs) with local posterior structure (obtaining what we call DE-BNNs) should improve predictive uncertainty and model calibration. (2/7)
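To make the construction concrete: a DE averages the predictive distributions of M independently trained networks, while a DE-BNN additionally draws S samples from a local approximate posterior around each trained member and averages over all M x S predictions. Below is a minimal NumPy sketch of that averaging - the random-feature "networks", the Gaussian local posterior, and the scale sigma are illustrative assumptions, not the paper's actual models or inference methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: ensemble members, posterior samples per member,
# input dimension, number of classes.
M, S, D, C = 5, 20, 16, 3

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-ins for M independently trained networks (their MAP weights).
maps = [rng.normal(size=(D, C)) for _ in range(M)]
x = rng.normal(size=(8, D))  # a batch of inputs

# Deep Ensemble (DE): average the members' predictive distributions.
de_pred = np.mean([softmax(x @ W) for W in maps], axis=0)

# DE-BNN: additionally sample weights around each member from an assumed
# Gaussian local posterior (Laplace-style), then average all M * S predictions.
sigma = 0.1  # assumed local posterior scale
de_bnn_pred = np.mean(
    [softmax(x @ (W + sigma * rng.normal(size=W.shape)))
     for W in maps for _ in range(S)],
    axis=0,
)

print(de_pred.shape, de_bnn_pred.shape)  # both (8, 3)
```

Note that both variants average probabilities rather than logits, which is the standard way ensemble predictive distributions are formed, and the plain DE is recovered in the sigma -> 0 limit of the DE-BNN.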
Surprisingly, we find that across a number of datasets, architectures, approximate inference methods and tasks, this is not the case when the ensembles grow large enough (but not in the asymptotic regime). A few key points from the paper: (3/7)
We show that increased out-of-distribution performance in DE-BNNs often comes at an in-distribution performance cost, and that DEs generally outperform DE-BNNs on in-distribution metrics for large ensemble sizes. (4/7)
We also conduct a number of sensitivity and ablation studies to explain the difference in predictive performance between DEs and DE-BNNs. (5/7)
Finally, we open-source all of our trained BNNs - both because of the computational effort required to train these models, and to enable further analysis of empirical results that we find highly unintuitive and surprising. (6/7)
If you would like to read the paper, download the models or catch us at AISTATS, here are the details:
📑: arxiv.org/abs/2503.13296
📍: Poster Session 1 - Poster 109
Code+Models: github.com/jonasvj/OnLo... (7/7)
April 22, 2025 at 11:48 AM