Felix Wagner
@felwag.bsky.social
PhD student at University of Oxford 💻 Computer Vision for Medicine | Federated Learning 🖥️👨‍💻
🙏 Thanks to my supervisor Prof. @kostaskamnitsas.bsky.social and co-authors @psaha.bsky.social, @harryanthony.bsky.social, Prof. Alison Noble

Excited to present at @neuripsconf.bsky.social - code coming soon!
@ox.ac.uk
@oxengsci.bsky.social

#OOD #ComputerVision #AI #ML #Research
September 20, 2025 at 8:29 AM
Tested on 12 OOD tasks across 🧴 dermatology, 🩻 chest X-ray, ultrasound & 🔬 histopathology.
💥 DIsoN consistently outperforms state-of-the-art methods, with higher AUROC and fewer false positives.

Attention, bad pun incoming: 🧹 DIsoN cleans up OOD samples like a Dyson 💨
September 20, 2025 at 8:29 AM
BUT here's the key:
Only model parameters are exchanged between the training and deployment sites; no raw data ever leaves the training site.

We also add a class-conditional extension (CC-DIsoN):
Compare each test sample only to training samples of its predicted class → stronger OOD performance (see the sketch below).
September 20, 2025 at 8:29 AM
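To make the parameter-exchange idea in the post above concrete, here is a minimal PyTorch toy sketch (my own illustration, not the released code): the deployment node holds only the single test sample, the training node holds only training samples of the predicted class (the CC variant), and each round the two sites exchange nothing but model weights. The helper names, feature dimensions, and the simple two-way averaging are assumptions for illustration.

```python
# Hypothetical sketch (names and sizes are assumptions, not from the paper):
# two nodes fine-tune a shared binary classifier and exchange ONLY its
# parameters, never raw images. The deployment node holds the single test
# sample; the training node holds (class-filtered) training data.
import copy
import torch
import torch.nn as nn

def local_steps(model, batch, labels, steps=1, lr=1e-3):
    """Run a few SGD steps on one node's private data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(batch).squeeze(-1), labels).backward()
        opt.step()
    return model

def average_params(model_a, model_b):
    """FedAvg-style merge: element-wise mean of the two nodes' weights."""
    merged = copy.deepcopy(model_a)
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    merged.load_state_dict({k: (sd_a[k] + sd_b[k]) / 2 for k in sd_a})
    return merged

# Toy features standing in for images (assumption: precomputed features, not raw scans).
feat_dim = 32
test_sample = torch.randn(1, feat_dim)          # deployment site, labelled 1
train_class_batch = torch.randn(64, feat_dim)   # training site, predicted class only (CC variant)

model = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

for round_idx in range(10):
    # Each site updates its own copy on private data ...
    m_deploy = local_steps(copy.deepcopy(model), test_sample, torch.ones(1))
    m_train = local_steps(copy.deepcopy(model), train_class_batch, torch.zeros(64))
    # ... and only the parameters travel between sites.
    model = average_params(m_deploy, m_train)
```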
DIsoN enables comparing a test sample with the training data distribution, without data transfer!

How?
🔑 We train a binary classifier per test sample to “isolate” it from training data.
📈 The more training steps needed → the more likely the sample is in-distribution.
September 20, 2025 at 8:29 AM
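A minimal, centralized toy sketch of the isolation score described above (helper names, thresholds, and hyperparameters are mine, and it skips the federated parameter exchange shown earlier): train a small binary classifier to separate one test sample from the training features and count the steps it needs.

```python
# Toy illustration of the isolation idea: the harder it is to separate
# ONE test sample from the training data, the more in-distribution it looks.
import torch
import torch.nn as nn

def isolation_score(test_x, train_x, max_steps=500, threshold=0.9, lr=1e-2):
    """Return how many optimisation steps are needed to 'isolate' test_x."""
    model = nn.Sequential(nn.Linear(test_x.shape[-1], 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    x = torch.cat([test_x, train_x])                          # one positive vs. many negatives
    y = torch.cat([torch.ones(len(test_x)), torch.zeros(len(train_x))])
    for step in range(1, max_steps + 1):
        opt.zero_grad()
        loss_fn(model(x).squeeze(-1), y).backward()
        opt.step()
        with torch.no_grad():
            if torch.sigmoid(model(test_x)).item() > threshold:
                return step                                   # isolated quickly: likely OOD
    return max_steps                                          # hard to isolate: likely in-distribution

train_feats = torch.randn(256, 16)           # stand-in for training features
in_dist = torch.randn(1, 16)                 # looks like the training data
far_ood = torch.randn(1, 16) + 8.0           # clearly shifted sample, typically easier to isolate
print(isolation_score(in_dist, train_feats), isolation_score(far_ood, train_feats))
```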
In medical imaging, safe deployment isn’t just about accuracy.

⚠️ Models must flag unusual scans (artifacts, rare conditions) so clinicians can double-check.

But there’s a problem:
📦 Training data is often private, large, and unavailable after deployment.
September 20, 2025 at 8:29 AM
The paper will be presented at @wacvconference.bsky.social on March 1 in Arizona🌵
@ox.ac.uk
I am happy that my first post on 🦋 brings such exciting news! 🎉
#MedicalImaging #FL #AI #WACV25
January 27, 2025 at 7:11 PM
Big thank you to my supervisor @kostaskamnitsas.bsky.social and my co-authors: @psaha.bsky.social, Wentian Xu, Ziyun Liang, Daniel Whitehouse, David Menon, Virginia Newcombe, Natalie Voets, J. Alison Noble
January 27, 2025 at 7:11 PM
This is the first time we’ve demonstrated that FL can train a single 3D segmentation model for decentralized MRI datasets, each with:
🧠 Different brain diseases
📷 Varying MRI modalities

A step forward in training large foundation models for multi-modal MRIs 🙌
January 27, 2025 at 7:11 PM
🏆 Our results: FedUniBrain was evaluated on 7 MRI datasets with 5 brain diseases.
📊 It achieved promising results across all diseases seen during training!

Even better, it generalizes to new datasets with unseen modality combinations, something traditional methods fail to do.
January 27, 2025 at 7:11 PM
We propose the FedUniBrain framework: Train a single model across decentralized MRI datasets with:
✔️ Different brain diseases per dataset
✔️ Different modality combinations per dataset
✔️ No data sharing
January 27, 2025 at 7:11 PM
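For intuition, here is a toy FedAvg-style sketch of such a setup, not the actual FedUniBrain recipe: each site trains the shared model on its own private data, disease, and modality combination, and only weights are averaged on a server. Zero-filling missing modality channels and the tiny Conv3d "segmenter" are my assumptions purely for illustration.

```python
# Illustrative FedAvg-style sketch, NOT the FedUniBrain implementation:
# sites keep their MRI data locally and only model weights are exchanged.
import copy
import torch
import torch.nn as nn

ALL_MODALITIES = ["T1", "T2", "FLAIR", "DWI"]               # fixed channel order (assumption)

def to_fixed_channels(scans: dict, shape=(16, 16, 16)) -> torch.Tensor:
    """Stack available modalities into a fixed channel layout; zero-fill missing ones (assumption)."""
    chans = [scans.get(m, torch.zeros(shape)) for m in ALL_MODALITIES]
    return torch.stack(chans).unsqueeze(0)                  # (1, C, D, H, W)

def local_update(model, volume, mask, steps=5, lr=1e-3):
    """A site trains its copy of the shared 3D segmentation model on private data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.binary_cross_entropy_with_logits(model(volume), mask).backward()
        opt.step()
    return model.state_dict()

def fed_avg(state_dicts):
    """Server merges site updates by simple parameter averaging."""
    return {k: torch.stack([sd[k] for sd in state_dicts]).mean(0) for k in state_dicts[0]}

# Toy decentralized sites: different diseases, different modality combinations.
sites = [
    {"scans": {"T1": torch.randn(16, 16, 16), "FLAIR": torch.randn(16, 16, 16)}},
    {"scans": {"T2": torch.randn(16, 16, 16), "DWI": torch.randn(16, 16, 16)}},
]
for s in sites:
    s["x"] = to_fixed_channels(s["scans"])
    s["y"] = (torch.rand(1, 1, 16, 16, 16) > 0.5).float()   # fake lesion mask

global_model = nn.Conv3d(len(ALL_MODALITIES), 1, kernel_size=3, padding=1)  # stand-in "3D segmenter"

for federated_round in range(3):
    updates = [local_update(copy.deepcopy(global_model), s["x"], s["y"]) for s in sites]
    global_model.load_state_dict(fed_avg(updates))           # only weights leave each site
```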
Traditional brain segmentation models are disease-specific and rely on predefined MRI modalities for both training and inference. They can’t handle other diseases or scans with different input modalities 🚫 Plus, patient privacy prevents the creation of big centralized databases 🧠
January 27, 2025 at 7:11 PM