Nhut
@tlmnhut.bsky.social
PhD student at CIMeC, University of Trento. Interested in computational cognitive neuroscience and machine learning. tlmnhut.github.io
Reposted by Nhut
@annabavaresco.bsky.social and
@tlmnhut.bsky.social show: supervised pruning of a DNN’s feature space better aligns with human category representations, selects distinct subspaces for different categories, and more accurately predicts people’s preferences for GenAI images.
doi.org/10.1145/3768...
Modeling Human Concepts with Subspaces in Deep Vision Models | ACM Transactions on Interactive Intelligent Systems
Improving the modeling of human representations of everyday semantic categories, such as animals or food, can lead to better alignment between AI systems and humans. Humans are thought to represent su...
doi.org
September 22, 2025 at 7:28 PM
Reposted by Nhut
Come and check out our poster at #CCN2025, presented by @tlmnhut.bsky.social
August 15, 2025 at 12:23 PM
Reposted by Nhut
Interested in category selectivity and topographic modelling? Come see my poster tomorrow at CCN (A57). We show that encoding models confirm dissociable selective responses to bodies, hands, and tools, and test if topographic ANNs capture that organization.
See you there!
August 11, 2025 at 3:26 PM
Reposted by Nhut
Missed #CCN2025 this year, but still excited to share two works there!

1️⃣ From my PhD with @manpiazza.bsky.social — accepted in the CCN proceedings. My young collaborator @tlmnhut.bsky.social will be presenting it. It’s about numerosity representation in CNNs.

📄 tinyurl.com/yc2dyhm3
Investigation of Numerosity Representation in Convolutional Neural...
Convolutional neural networks (CNNs) have emerged as powerful models for predicting neural activity and behavior in visual tasks. Recent studies suggest that number-detector units—analogous to...
tinyurl.com
August 11, 2025 at 8:14 AM
Reposted by Nhut
New lab preprint, led by @tlmnhut.bsky.social. We show that certain topographic CNNs offer computational advantages, including greater weight matrix robustness, better handling of OOD noisy data, and higher entropy of unit activation.
arxiv.org/abs/2508.00043
Improved Robustness and Functional Localization in Topographic CNNs Through Weight Similarity
Topographic neural networks are computational models that can simulate the spatial and functional organization of the brain. Topographic constraints in neural networks can be implemented in multiple w...
arxiv.org
August 7, 2025 at 8:21 PM
Reposted by Nhut
New preprint out! We propose that action is a key dimension shaping the topographic organization of object categories in lateral occipitotemporal cortex (LOTC)—and test whether standard and topographic neural networks capture this pattern. A thread:

www.biorxiv.org/content/10.1...

🧵 1/n
Investigating action topography in visual cortex and deep artificial neural networks
High-level visual cortex contains category-selective areas embedded within larger-scale topographic maps like animacy and real-world size. Here, we propose action as a key organizing factor shaping vi...
www.biorxiv.org
August 7, 2025 at 3:17 PM
Reposted by Nhut
DNNs can predict human similarity judgments—but why? In a new #XAI study, we introduce Alignment Importance Scores (AIS), a method that improves AI-human alignment and generates heatmaps highlighting the image features that drive this alignment. link.springer.com/article/10.1...
Explaining Human Comparisons Using Alignment-Importance Heatmaps - Computational Brain & Behavior
We present a computational explainability approach for human comparison tasks, using Alignment Importance Score (AIS) heatmaps derived from deep-vision models. The AIS reflects a feature map’s unique ...
link.springer.com
March 11, 2025 at 5:28 PM
Reposted by Nhut
Excited to share our work with @tlmnhut.bsky.social at the NeurIPS Workshop on Behavioral Machine Learning! 🧠

Come visit our poster!

#NeurIPS2024 #BehavioralML #Numerosity #DeepLearning
December 15, 2024 at 11:33 AM
Reposted by Nhut
(1/2) We'll be presenting two recent projects at the ICLR 2024 Re-Align Workshop. PhD students Nhut Truong and Dario Pesenti introduce an explainability technique indicating which image information is relevant when an image is compared to a target image cohort. openreview.net/forum?id=bWe... #cogsci
Explaining Human Comparisons using Alignment-Importance Heatmaps
We present a computational explainability approach for human comparison tasks, using Alignment Importance Score (AIS) heatmaps derived from deep-vision models. The AIS reflects a feature-map's...
openreview.net
March 3, 2024 at 6:50 PM
Reposted by Nhut
An updated version of our work on using feature maps of pre-trained DNNs to explain human similarity judgments; now on arXiv.
arxiv.org/abs/2409.16292
Explaining Human Comparisons using Alignment-Importance Heatmaps
We present a computational explainability approach for human comparison tasks, using Alignment Importance Score (AIS) heatmaps derived from deep-vision models. The AIS reflects a feature-map's...
arxiv.org
September 26, 2024 at 6:34 AM