Meet DIsoN, our 🧹💨 privacy-preserving OOD detector that compares test samples to training data without ever sharing the training data.
We make Out-of-Distribution detection decentralized!
📄 Paper: arxiv.org/pdf/2506.09024
🧵👇
⚠️ Models must flag unusual scans (artifacts, rare conditions) so clinicians can double-check.
But there’s a problem:
📦 Training data is often private, large, and unavailable after deployment.
How does it work?
🔑 We train a binary classifier per test sample to “isolate” it from the training data.
📈 The more training steps needed to separate it → the more likely the sample is in-distribution.
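To make the idea concrete, here is a minimal single-machine sketch of the isolation score; the architecture, optimizer, and stopping rule below are illustrative assumptions, not the paper’s exact setup:

```python
import torch
import torch.nn as nn

def isolation_steps(test_x, train_x, max_steps=200, lr=1e-3, tol=0.9):
    """Train a small binary classifier to separate one test sample (label 1)
    from a batch of training samples (label 0). Return the number of gradient
    steps needed: few steps -> easy to isolate -> likely OOD; many steps ->
    hard to isolate -> likely in-distribution."""
    x = torch.cat([test_x.view(1, -1), train_x.view(len(train_x), -1)])
    y = torch.cat([torch.ones(1), torch.zeros(len(train_x))])

    clf = nn.Sequential(nn.Linear(x.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(1, max_steps + 1):
        opt.zero_grad()
        loss = loss_fn(clf(x).squeeze(1), y)
        loss.backward()
        opt.step()
        with torch.no_grad():
            p = torch.sigmoid(clf(x).squeeze(1))
            # Stop once the test sample is confidently isolated.
            if p[0] > tol and (p[1:] < 1 - tol).all():
                break
    return step
```

The step count then serves as the OOD score: rank test samples by it (or threshold it) to flag outliers.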
💥 DIsoN consistently outperforms state-of-the-art methods, with higher AUROC and fewer false positives.
Attention, bad pun: 🧹 DIsoN cleans up OOD samples like a Dyson 💨
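And the “decentralized” part: our reading is that only classifier parameters travel between the site holding the training data and the deployment site, never the data itself. A rough FedAvg-style sketch, where every function name and detail is an assumption for illustration:

```python
import copy
import torch
import torch.nn as nn

def local_update(model, x, y, lr=1e-3):
    """One gradient step on locally held data; only weights leave the site."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nn.BCEWithLogitsLoss()(model(x).squeeze(1), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model.state_dict()

def fedavg(a, b):
    """Average two parameter dicts (hypothetical aggregation step)."""
    return {k: (a[k] + b[k]) / 2 for k in a}

def decentralized_round(model, train_x, test_x):
    """One communication round: each side trains on its own data, then the
    averaged weights are shared back. Raw scans never cross the wire."""
    w_train = local_update(model, train_x, torch.zeros(len(train_x)))  # training site: label 0
    w_test = local_update(model, test_x.view(1, -1), torch.ones(1))    # deployment site: label 1
    model.load_state_dict(fedavg(w_train, w_test))
    return model
```

Counting how many such rounds it takes to isolate the test sample would then play the role of the step count in the sketch above.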