Zahra Moradimanesh
@zrmor.bsky.social
Postdoc @fz-juelich.de working at the intersection of ML and Neuro: #ai4neuro and #neuroai | Outside the office, mountain and rock climbing 🏔️🧗🏻‍♀️

https://www.linkedin.com/in/zrmor/
Reposted by Zahra Moradimanesh
These are among the oldest forests on earth, but they're running up against the new world of climate change--firefighting help badly needed!
www.nytimes.com/2025/11/23/w...
Fire Threatens Iran’s Ancient Forest, a World Heritage Site
www.nytimes.com
November 24, 2025 at 2:15 PM
Reposted by Zahra Moradimanesh
For our next UCL #NeuroAI online seminar, we are happy to welcome Dr Cian O’Donell @cianodonnell.bsky.social (@ulsteruni.bsky.social)

🗓️Wed 11 June 2025
⏰2-3pm BST

Talk title: 'Neurobiological constraints on learning: bug or feature?'

ℹ️ Details / registration: www.eventbrite.co.uk/e/ucl-neuroa...
UCL NeuroAI Talk Series
A series of NeuroAI themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.
www.eventbrite.co.uk
May 30, 2025 at 10:31 AM
Reposted by Zahra Moradimanesh
Out in @natureneuro.bsky.social today 🥂

Cytoarchitecture, wiring and signal flow of the human default mode network

Combining 3D histology, 7T MRI, and connectomics to explore DMN structure-function associations

Led by Casey Paquola, @themindwanders.bsky.social & a terrific team of colleagues 🙏
The architecture of the human default mode network explored through cytoarchitecture, wiring and signal flow - Nature Neuroscience
The default mode network (DMN) is implicated in cognition and behavior. Here, the authors show that the DMN is cytoarchitecturally heterogeneous, contains regions receptive to input from the sensory cortex as well as a core relatively insulated from environmental input, and uniquely balances its output across sensory hierarchies.
www.nature.com
January 28, 2025 at 2:24 PM
Reposted by Zahra Moradimanesh
Nature

How neurons make a memory
www.nature.com/articles/d41...
How neurons make a memory
Loosely packaged DNA might make these nerve cells better able to encode memories.
www.nature.com
November 21, 2024 at 7:54 AM
Reposted by Zahra Moradimanesh
This paper may be very important:

www.biorxiv.org/content/10.1...

tl;dr: if you repeatedly give an animal a stimulus sequence XXXY, then throw in the occasional XXXX, there are large responses to the Y in XXXY, but not to the final X in XXXX, even though that's statistically "unexpected".

🧠📈 🧪
Stimulus history, not expectation, drives sensory prediction errors in mammalian cortex
Predictive coding (PC) is a popular framework to explain cortical responses. PC states that the brain computes internal models of expected events and responds robustly to unexpected stimuli with predi...
www.biorxiv.org
October 4, 2024 at 5:43 PM
Reposted by Zahra Moradimanesh
Findings of scientific misconduct on a monumental scale by a prominent Alzheimer's researcher

www.science.org/content/arti...
Did a top NIH official manipulate Alzheimer's and Parkinson’s studies for decades?
Agency announces research misconduct finding for neuroscientist Eliezer Masliah as scores of his papers fall under suspicion
www.science.org
September 26, 2024 at 8:03 PM
Reposted by Zahra Moradimanesh
Evidence that entorhinal cortex and prefrontal cortex (in mice, sorry) work together to encode outcomes in associative learning:

www.nature.com/articles/s41...

Studying this kind of outcome monitoring is going to be critical to understand the losses at play in the brain!

🧠📈
Prefrontal and lateral entorhinal neurons co-dependently learn item–outcome rules - Nature
The bidirectional loop circuit between layers 5/6 of the lateral entorhinal cortex and the medial prefrontal cortex encodes item–outcome associative memory in mice.
www.nature.com
September 26, 2024 at 3:21 PM
Reposted by Zahra Moradimanesh
Contrastive Learning Explains the Emergence and Function of Visual Category Selectivity

kempnerinstitute.harvard.edu/research/dee...
Contrastive Learning Explains the Emergence and Function of Visual Category Selectivity
How does the visual system support our effortless ability to recognize faces, places, objects, and words?...
kempnerinstitute.harvard.edu
September 27, 2024 at 6:52 AM
Reposted by Zahra Moradimanesh
some cool news - I've started a regular column at The Transmitter @thetransmitter.bsky.social

First column out now on that most convenient of all the fictions in neuroscience: averaging
thetransmitter.org/neural-codin...
Averaging is a convenient fiction of neuroscience
But neurons don’t take averages. This ubiquitous practice hides from us how the brain really works.
thetransmitter.org
September 23, 2024 at 8:08 PM
Reposted by Zahra Moradimanesh
This post led to a lot of people saying, "Well, is in-context learning really *learning*?"

I'd like to add to that confusing mix: Learning by thinking

www.cell.com/trends/cogni...

If I figure out something purely through self-reflection, is that "learning"? If not, why?

🧠📈 🧪
September 19, 2024 at 9:09 PM
Reposted by Zahra Moradimanesh
This paper from @martinhebart.bsky.social's lab is fantastic: www.nature.com/articles/s41...

My takeaways:

1/ Clearly, semantic categories alone aren't enough to explain object perception or the neural system behind it.

#neuroscience #VisionScience
Distributed representations of behaviour-derived object dimensions in the human visual system - Nature Human Behaviour
Contier et al. show that dimensions are superior to categories at predicting brain responses to visual objects.
www.nature.com
September 12, 2024 at 1:36 PM
Reposted by Zahra Moradimanesh
What aspects of human knowledge are vision models missing, and can we align them with human knowledge to improve their performance and robustness on cognitive and ML tasks? Excited to share this new work (arxiv.org/abs/2409.06509) by @lukasmut.bsky.social! 1/10
September 13, 2024 at 11:04 PM
Reposted by Zahra Moradimanesh
1/ Here's a critical problem that the #neuroai field is going to have to contend with:

Increasingly, it looks like neural networks converge on the same representational structures - regardless of their specific losses and architectures - as long as they're big and trained on real world data.

🧠📈 🧪
September 12, 2024 at 7:00 PM