Huge thanks to the team: Mostafa ElAraby, Sabyasachi Sahoo, Yann Pequignot, Paul Novello, Liam Paull.
And collaborators: @Mila_Quebec, @DIRO_UdeM, @UniversiteLaval, & the DEEL project.
GROOD achieves SOTA performance on major benchmarks. Highlights:
🔹 84.44% Far-OOD AUROC on CIFAR-100
🔹 94.8% Far-OOD AUROC on ImageNet-1K
🔹 Robust on Transformers (ViT & Swin-T) where others fail!
While the standard feature space (left) is messy, our gradient-aware space (right) creates a much clearer separation between In-Distribution (blue) and OOD (red/orange) samples!
#DataScience #ComputerVision
Instead of relying on feature distance alone, GROOD uses a synthetic OOD prototype to measure a sample's gradient sensitivity.
ID samples yield stable, small gradients, while OOD samples are highly sensitive. This makes for a powerful, clear detection signal!
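To build intuition for the "gradient sensitivity" idea, here is a minimal sketch (not GROOD's actual implementation, and it omits the synthetic OOD prototype): for a toy linear softmax classifier, we score a sample by the norm of the cross-entropy gradient with respect to the input, using the predicted class as a pseudo-label. Confident, ID-like inputs give near-zero gradients; uncertain, OOD-like inputs give large ones. All names and weights below are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def gradient_sensitivity_score(W, x):
    """Norm of d(cross-entropy)/dx with the predicted class as pseudo-label.

    For logits z = W @ x, the analytic gradient of CE loss w.r.t. x is
    W.T @ (p - y). Confident predictions (p close to one-hot) give a
    near-zero gradient; uncertain ones give a large gradient.
    """
    p = softmax(W @ x)
    y = np.zeros_like(p)
    y[p.argmax()] = 1.0
    grad = W.T @ (p - y)
    return np.linalg.norm(grad)

# Toy 2-class linear model: rows of W are class directions.
W = np.array([[ 4.0, 0.0],
              [-4.0, 0.0]])

x_id  = np.array([1.0, 0.0])   # aligned with class 0 -> confident logits
x_ood = np.array([0.0, 1.0])   # orthogonal to both classes -> uncertain logits

print(gradient_sensitivity_score(W, x_id))   # small (stable ID sample)
print(gradient_sensitivity_score(W, x_ood))  # large (sensitive OOD sample)
```

Thresholding this score separates the two regimes, which is the core intuition the tweet describes.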