Spiros Chavlis
@spiroschav.bsky.social
Postdoc @dendritesgr.bsky.social
Passionate about math, neuro & AI

dendrites.gr
🔍 Finally, we crafted challenging scenarios for traditional ANNs, starting with added noise and sequentially feeding batches of the same class. Our findings show that dANNs with RFs exhibit greater robustness, accuracy, and efficiency, especially as task difficulty increases. (13/14)
January 31, 2025 at 9:27 AM
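The "challenging scenarios" above can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: `add_noise` corrupts inputs with Gaussian noise, and `same_class_batches` feeds batches drawn from a single class at a time; the function names, `sigma`, and `batch_size` are assumptions for the sketch.

```python
import numpy as np

def add_noise(images, sigma=0.3, seed=0):
    """Corrupt images with additive Gaussian noise, clipped to [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, sigma, images.shape)
    return np.clip(noisy, 0.0, 1.0)

def same_class_batches(images, labels, batch_size=32):
    """Yield batches that each contain samples of a single class,
    presented sequentially, class by class."""
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        for start in range(0, len(idx), batch_size):
            sel = idx[start:start + batch_size]
            yield images[sel], labels[sel]
```

Evaluating a trained model on these streams (rather than on shuffled, clean batches) is one way to probe robustness as task difficulty increases.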
🔍 Rather than becoming class-specific early, dANNs show mixed-selectivity in both layers. This yields more trustworthy representations, achieving high accuracy with less overfitting and fewer, fully utilized params. (12/14)
January 31, 2025 at 9:27 AM
🔍 To understand dANN's edge over vANNs, we analyzed weight distributions after training on Fashion MNIST. dANNs fully utilize their parameters, especially the dendrosomatic weights. Entropy and selectivity distributions also indicate that the two model families adopt different strategies for the same classification task. (11/14)
January 31, 2025 at 9:27 AM
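The entropy and selectivity measures mentioned above can be approximated simply. A minimal sketch, assuming a histogram-based Shannon entropy over a layer's weights and a common (max vs. rest) selectivity index; the exact definitions in the paper may differ:

```python
import numpy as np

def weight_entropy(w, bins=30):
    """Shannon entropy (bits) of a layer's weight histogram.
    Higher entropy = weights spread over many values, i.e.
    parameters are more fully utilized."""
    hist, _ = np.histogram(np.ravel(w), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def selectivity(mean_acts):
    """Selectivity of one unit from its mean activation per class:
    (best - mean of the rest) / (best + mean of the rest).
    1 = responds to a single class; 0 = responds equally to all."""
    mean_acts = np.asarray(mean_acts, dtype=float)
    top = mean_acts.max()
    rest = np.delete(mean_acts, mean_acts.argmax()).mean()
    return float((top - rest) / (top + rest + 1e-12))
```

Comparing these distributions across a dANN and a vANN trained on the same task is one way to expose the different strategies the post refers to.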
🔍 Our findings highlight that structured connectivity and restricted input sampling in dANNs yield significant efficiency gains in image classification over classical vANNs! When comparing dANNs and pdANNs to vANNs, we found that RFs boost efficiency, but not to the extent of dANNs. (10/14)
January 31, 2025 at 9:27 AM
📈 To validate the benefits of dendritic features, we tested dANN models on five benchmark datasets. Results showed that top dANN models matched or even outperformed the best vANNs in accuracy and loss! Additionally, dANNs proved significantly more efficient across all datasets. (9/14)
January 31, 2025 at 9:27 AM
Our dANN and pdANN models show improved learning across network sizes, with lower loss and better accuracy! More importantly, they maintain performance and stability as the number of layers increases. This reveals their potential for deeper architectures! 🧠💪 (8/14)
January 31, 2025 at 9:27 AM
🔍 We explored three input sampling methods for dendritic ANN models (dANN): a) random (R), b) local receptive fields (LRF), and c) global receptive fields (GRF). We also included a fully connected sampling (F), calling it a partly-dendritic ANN (pdANN) 🧠✨. (7/14)
January 31, 2025 at 9:27 AM
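The four sampling schemes can be pictured as binary dendrite-to-input masks. A minimal sketch, assuming 28×28 images and a fixed synapse budget per dendrite; `make_masks`, the patch construction, and the GRF decay constant are illustrative choices, not the paper's exact parameterization:

```python
import numpy as np

def make_masks(n_pixels, n_dendrites, n_synapses, mode, img_side=28, seed=0):
    """Binary input masks (n_dendrites x n_pixels), one scheme per mode:
    'R'   - random pixels per dendrite
    'LRF' - a contiguous local patch (local receptive field)
    'GRF' - pixels drawn with probability decaying with distance
            from a random centre (global receptive field)
    'F'   - fully connected sampling (the pdANN case)
    """
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_dendrites, n_pixels), dtype=bool)
    # (row, col) coordinate of every pixel in the flattened image
    coords = np.stack(np.divmod(np.arange(n_pixels), img_side), axis=1)
    for d in range(n_dendrites):
        if mode == "F":
            masks[d] = True
        elif mode == "R":
            masks[d, rng.choice(n_pixels, n_synapses, replace=False)] = True
        else:
            centre = coords[rng.integers(n_pixels)]
            dist = np.linalg.norm(coords - centre, axis=1)
            if mode == "LRF":  # the n_synapses pixels nearest the centre
                masks[d, np.argsort(dist)[:n_synapses]] = True
            else:              # GRF: distance-weighted random sample
                p = np.exp(-dist / 5.0)
                masks[d, rng.choice(n_pixels, n_synapses,
                                    replace=False, p=p / p.sum())] = True
    return masks
```

Multiplying the input by such a mask before each dendritic unit restricts which pixels a dendrite can sample, which is the structural constraint the thread credits for the efficiency gains.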