Neehar Kondapaneni
@therealpaneni.bsky.social
Researching interpretability and alignment in computer vision.
PhD student @ Vision Lab Caltech
Thank you to my mentors @oisinmacaodha.bsky.social and Pietro Perona. Check out our project page! nkondapa.github.io/rdx-page/
November 19, 2025 at 4:50 PM
To summarize our experimental results, we propose a new metric that quantifies how well a method isolates model differences. We found RDX consistently outperforms baseline approaches on this metric and several other established metrics.
November 19, 2025 at 4:50 PM
Beyond controlled experiments, RDX uncovers previously unknown differences. For example, we found DINOv2 has extra structure for distinguishing monkey species, helping explain its improved fine-grained performance.
November 19, 2025 at 4:50 PM
In a case study on models with small performance differences, baselines (like SAE and NMF) mostly captured shared concepts. RDX, in contrast, localized groups of images that were incorrectly clustered in the weaker model, revealing subtle but important differences.
November 19, 2025 at 4:50 PM
To do this, we use a graph-based method to find clusters of images that are unique to one model. In controlled experiments, RDX reliably recovered the exact changes we introduced, while other methods identified shared structure instead.
November 19, 2025 at 4:50 PM
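For intuition, here is a minimal sketch of this kind of graph-based comparison (illustrative only, not the exact RDX algorithm): build a k-nearest-neighbor graph from each model's embeddings, keep the edges that only Model A's graph has, and cluster the resulting difference graph. The embeddings `emb_a` and `emb_b` and all hyperparameters are placeholders.

```python
# Minimal sketch of a graph-based difference comparison (illustrative only,
# not the exact RDX algorithm). Assumes image embeddings from two models,
# `emb_a` and `emb_b`, each of shape (n_images, dim).
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

def knn_adjacency(emb, k=10):
    # Symmetric binary k-NN graph over cosine distances.
    g = kneighbors_graph(emb, n_neighbors=k, metric="cosine", mode="connectivity")
    return ((g + g.T) > 0).astype(float)

def difference_clusters(emb_a, emb_b, k=10, n_clusters=8):
    a, b = knn_adjacency(emb_a, k), knn_adjacency(emb_b, k)
    diff = a.toarray() * (1 - b.toarray())   # edges Model A has but Model B lacks
    # Cluster the difference graph; each cluster is a candidate "A-only" grouping.
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(diff + 1e-6)
    return labels

# Usage: labels = difference_clusters(emb_a, emb_b)
# Each cluster groups images Model A ties together that Model B does not.
```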
In the game, you’ll see that prior methods often highlight the wrong parts of each model’s representations by explaining shared structure. RDX, by contrast, consistently focuses on the differences between models, making it much easier to interpret what changed during training. 🔍
November 19, 2025 at 4:50 PM
When comparing models, we were surprised by how often existing tools fail even on simple change-detection tasks. Even small, intentional differences got washed out by shared structure. So we’ve built a little game to show these failure cases.👇
🔗https://nkondapa.github.io/rdx-page/game/
November 19, 2025 at 4:50 PM
This work was done in collaboration with @oisinmacaodha and @PietroPerona. It builds on our earlier related work RSVC (ICLR 2025). Check out our project page here nkondapa.github.io/rdx-page/ and our preprint here arxiv.org/abs/2505.23917.
Representational Difference Explanations (RDX)
Isolating and creating explanations of representational differences between two vision models.
nkondapa.github.io
July 8, 2025 at 3:43 PM
TLDR: RDX is a new method for isolating representational differences and leads to insights about subtle yet important differences between models. We test it on vision models, but the method is general and can be applied to any representational space.
July 8, 2025 at 3:43 PM
Due to these issues, we took a graph-based approach for RDX that does not use combinations of concept vectors. That means the explanation grid and the concept are equivalent -- what you see is what you get. This makes it much simpler to interpret RDX outputs.
July 8, 2025 at 3:43 PM
Even on a simple MNIST model, it is essentially impossible to anticipate that a weighted sum over these explanations results in this normal-looking five. Linear combinations of explanation grids are tricky to understand!
July 8, 2025 at 3:43 PM
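As an illustration of the problem (not the paper's exact MNIST setup), the sketch below factorizes 8x8 digit images with NMF: each dictionary atom looks sensible on its own, yet the digit you actually see only emerges from a weighted sum over many atoms.

```python
# Illustrative sketch: reconstruct a digit as a weighted sum of NMF atoms.
# Each atom ("explanation grid") may look interpretable in isolation, but the
# final image only emerges from the full linear combination.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import NMF

X = load_digits().data                      # (1797, 64) flattened 8x8 digits
nmf = NMF(n_components=16, max_iter=500, random_state=0)
W = nmf.fit_transform(X)                    # per-image weights over atoms
H = nmf.components_                         # 16 dictionary atoms

i = 15                                      # pick some digit image
reconstruction = W[i] @ H                   # weighted sum of all 16 atoms
print("weights:", np.round(W[i], 2))        # many atoms contribute at once,
# so no single atom tells you what the reconstructed digit looks like.
```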
Notably, we noticed two challenges with applying DL methods to model comparison. In vision, a DL explanation is a grid of images. These grids (1) can overly simplify the underlying concept and/or (2) must be interpreted as part of a linear combination of concepts.
July 8, 2025 at 3:43 PM
We compare RDX to several popular dictionary-learning (DL) methods (like SAEs and NMF) and find that the DL methods struggle. In the spotted wing (SW) comparison experiment, we find that NMF shows model similarities rather than differences.
July 8, 2025 at 3:43 PM
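For reference, this is roughly what such a dictionary-learning baseline looks like in practice, assuming non-negative activations `acts` (e.g. post-ReLU features) from one model; the component count and top-k are placeholders.

```python
# Rough sketch of an NMF-style dictionary-learning baseline over activations.
# Each component is treated as a "concept"; its explanation grid is the set of
# images with the highest weight on that component.
import numpy as np
from sklearn.decomposition import NMF

def nmf_concepts(acts, n_concepts=20, top_k=9):
    W = NMF(n_components=n_concepts, max_iter=500).fit_transform(acts)
    # Rows = concepts, columns = indices of the top-k images for that concept.
    return np.argsort(-W, axis=0)[:top_k].T

# Run separately on two models, these grids tend to surface concepts the models
# share, which is why they can miss the actual differences.
```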
After demonstrating that RDX works when there are known differences, we compare models with unknown differences. For example, when comparing DINO and DINOv2, we find that DINOv2 has learned a color-based categorization of gibbons that is not present in DINO.
July 8, 2025 at 3:43 PM
We apply RDX to trained models with known differences and show that it isolates the core differences. For example, we compare model representations with and without a “spotted wing” (SW) concept and find that RDX shows that only one model groups birds according to this feature.
July 8, 2025 at 3:43 PM
Model comparison allows us to subtract away shared knowledge, revealing interesting concepts that explain model differences. Our method, RDX, isolates differences by answering the question: what does Model A consider similar that Model B does not?
nkondapa.github.io/rdx-page/
July 8, 2025 at 3:43 PM
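A minimal sketch of that question in code, assuming embeddings `emb_a` and `emb_b` for the same set of images (RDX itself builds a graph on top of relations like these rather than using the raw matrix):

```python
# "What does Model A consider similar that Model B does not?" expressed as a
# difference of cosine-similarity matrices (illustrative only).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def a_not_b_similarity(emb_a, emb_b):
    sim_a = cosine_similarity(emb_a)        # (n, n) similarities under Model A
    sim_b = cosine_similarity(emb_b)        # (n, n) similarities under Model B
    return np.clip(sim_a - sim_b, 0, None)  # pairs A ties together but B does not

# Large entries point at image pairs where the two models diverge.
```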
The poster will actually be presented on Saturday at 10am (Singapore time). Please ignore the previous time.
April 24, 2025 at 3:34 PM
If you’re attending ICLR, stop by our poster on April 25 at 3PM (Singapore time).
I’ll also be presenting a workshop poster pushing further in this direction at the Bi-Align Workshop: bialign-workshop.github.io#/ .
April 11, 2025 at 4:11 PM
We found these unique and important concepts to be fairly complex, requiring deep analysis. We use ChatGPT-4o to analyze the concept collages and find that it gives detailed and clear explanations about the differences between models. More examples here -- nkondapa.github.io/rsvc-page/
April 11, 2025 at 4:11 PM
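A hedged sketch of this kind of LLM-assisted analysis, assuming the OpenAI Python client and an already-saved concept collage; the file path and prompt are placeholders, and the paper's actual workflow may differ.

```python
# Illustrative sketch: ask a vision-capable LLM to describe a concept collage.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_collage(image_path, prompt):
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

# Example (hypothetical paths/prompt):
# print(describe_collage("concept_collage.png",
#                        "Describe what these image groups have in common."))
```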
We then look at “in-the-wild” models. We compare ResNets and ViTs trained on ImageNet. We measure concept importance and concept similarity. Do models learn unique and important concepts? Yes, sometimes they do!
April 11, 2025 at 4:11 PM
We first show this approach can recover known differences. We train Model 1 to use a pink square to make classification decisions and Model 2 to ignore it. Our method, RSVC, isolates this difference.
April 11, 2025 at 4:11 PM
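A hypothetical sketch of how such a controlled setup could be constructed (the paper's exact construction may differ): the pink square correlates with a target class in Model 1's training data and appears at random in Model 2's, so only Model 1 has an incentive to rely on it.

```python
# Hypothetical "pink square" setup: label-correlated patch for Model 1's data,
# uncorrelated (random) patch for Model 2's data.
import numpy as np

PINK = np.array([255, 105, 180], dtype=np.uint8)

def stamp_square(img, size=6):
    img = img.copy()
    img[:size, :size] = PINK                 # top-left pink patch
    return img

def make_dataset(images, labels, correlated, target_class=0, rng=None):
    rng = rng or np.random.default_rng(0)
    out = []
    for img, y in zip(images, labels):
        if correlated:
            add = (y == target_class)         # Model 1: square predicts the class
        else:
            add = rng.random() < 0.5          # Model 2: square carries no signal
        out.append(stamp_square(img) if add else img)
    return np.stack(out), np.asarray(labels)
```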