Christoph Miehl
cmiehl.bsky.social
HFSP Postdoc Fellow at U Chicago in the Doiron lab | former PhD @MPI for Brain Research & TU Munich in the Gjorgjieva lab
https://www.christophmiehl.com/
I have heard about it, but I am not sure about the connection to our model. Could you share some papers with me that you think might be relevant? Thanks!
July 28, 2025 at 9:05 AM
Thank you!
Yes, exactly, the idea is that (in cortex) the VIP-SST disinhibitory pathway gates plasticity at the dendrites, while PVs stabilize the activity of E cells. We have a paragraph about this with more details in the Discussion section of the manuscript.
July 25, 2025 at 3:16 PM
This work has been co-led by Sebastian Onasch and me, with help from M. Maurycy Miękus under the guidance of @gjorjulijana.bsky.social.
Thanks to our funding sources @erc.europa.eu and @hfspo.bsky.social!

If you have questions/feedback, don’t hesitate to contact us! 12/12
July 25, 2025 at 10:53 AM
In conclusion, we introduce new avenues for performing assembly computations with biologically inspired mechanisms. Our model learns and combines assemblies flexibly and without forgetting, bridging the gap between biological realism and computational functionality in circuit modeling. 11/12
July 25, 2025 at 10:53 AM
We show that in hierarchically connected areas, assemblies enhance recall & pattern completion in downstream areas, even when upstream areas are disrupted.
Finally, we show how our model allows associations to be learned with existing assemblies, another key concept for flexible learning. 10/12
July 25, 2025 at 10:53 AM
We designed a visual-auditory association task to demonstrate the applicability of this ‘assembly calculus’ framework in a real-world example. Our model can correctly classify letters and numbers in a downstream concept area, even if only part of the sensory information is presented. 9/12
July 25, 2025 at 10:53 AM
We next investigated how the formed assemblies can be combined across areas, focusing on two assembly operations – projection and association. In short, these operations can be learned across areas – importantly, without any decay (forgetting) of previously learned structures. 8/12
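The projection operation can be illustrated with a minimal sketch: an active source assembly recruits a downstream assembly via winner-take-all competition plus Hebbian strengthening, and because only the winners' synapses change, earlier learned structures are left intact. Everything here (network size, `k`, `eta`, the pure winner-take-all readout) is an illustrative assumption, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(W, src_assembly, k=5, eta=0.2):
    """Toy 'projection' operation: neurons in a source assembly
    recruit a downstream assembly via winner-take-all, then the
    active synapses are strengthened Hebbian-style. Only the rows
    of the k winners change, so previously formed assemblies persist.
    (k and eta are illustrative, not the paper's parameters.)
    """
    drive = W[:, src_assembly].sum(axis=1)      # input drive to each downstream neuron
    winners = np.argsort(drive)[-k:]            # k most-driven neurons form the new assembly
    W[np.ix_(winners, src_assembly)] += eta     # strengthen active source->winner synapses
    return winners

W = rng.random((50, 50)) * 0.1                  # weak random feedforward weights
asm_b = project(W, src_assembly=np.arange(10))  # project source assembly {0..9}
```

Repeating `project` with a different source assembly recruits a (generically different) downstream group without overwriting the first one, which is the non-forgetting property highlighted above.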
July 25, 2025 at 10:53 AM
In a recurrent network with 400 multi-compartment neurons, switching one inhibitory context “on” allows for learning of stable dendrite-specific assemblies at disinhibited dendritic compartments. 7/12
July 25, 2025 at 10:53 AM
Assuming that inhibitory (sub-) populations encode “context” signals controlling distinct dendritic compartments, inhibitory neurons can flexibly gate plasticity on or off at each specific dendritic compartment. 6/12
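This gating idea can be sketched in a few lines: a Hebbian update is applied only at compartments whose inhibitory "context" population is silent, so inhibition clamps plasticity off everywhere else. The rate-based Hebbian rule and the parameter values are simplified stand-ins for the full model, chosen only to make the gating logic concrete.

```python
import numpy as np

def gated_update(weights, pre_rates, post_rate, inh_context, eta=0.05):
    """Hebbian update applied only at disinhibited dendritic compartments.

    inh_context[d] is True when the inhibitory population targeting
    compartment d is active, which switches plasticity off there.
    (Simplified rate-based stand-in for the paper's plasticity rule.)
    """
    gate = ~np.asarray(inh_context)                   # plasticity only where inhibition is withdrawn
    dw = eta * post_rate * np.outer(gate, pre_rates)  # gated Hebbian term per compartment
    return weights + dw

w = np.zeros((2, 3))                                  # 2 compartments x 3 presynaptic inputs
w = gated_update(w, pre_rates=np.ones(3), post_rate=1.0,
                 inh_context=[False, True])           # only compartment 0 is disinhibited
```

After this update, only the synapses on the disinhibited compartment have grown; the compartment under active inhibition is untouched.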
July 25, 2025 at 10:53 AM
We use a multi-compartment neuron model with a spiking soma in which synapses at the dendrites undergo voltage-dependent synaptic plasticity. Here, the balance of excitatory and inhibitory inputs at each dendrite determines the sign and amount of plasticity. 5/12
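A minimal rate-based caricature of such a rule: the dendritic depolarization is set by the local excitation-inhibition balance, and the weight change flips sign around a threshold, so inhibition at a dendrite can turn potentiation into depression. The specific functional form and the parameters `theta` and `eta` are illustrative assumptions, not the voltage-dependent rule used in the manuscript.

```python
import numpy as np

def dendritic_plasticity_step(w, pre_rate, exc_input, inh_input,
                              theta=0.5, eta=0.01):
    """One step of a toy voltage-dependent plasticity rule.

    The dendritic 'voltage' is taken as excitation minus inhibition;
    depolarization above the threshold theta potentiates, below it
    depresses. (theta and eta are illustrative, not from the paper.)
    """
    v_dend = exc_input - inh_input          # E-I balance sets dendritic depolarization
    dw = eta * pre_rate * (v_dend - theta)  # sign of plasticity follows balance vs. threshold
    return np.clip(w + dw, 0.0, 1.0)        # keep the weight bounded

# Excitation dominating at the dendrite -> potentiation
w_pot = dendritic_plasticity_step(0.5, pre_rate=1.0, exc_input=1.0, inh_input=0.2)
# Inhibition nearly balancing excitation -> depression
w_dep = dendritic_plasticity_step(0.5, pre_rate=1.0, exc_input=1.0, inh_input=0.9)
```

The same presynaptic activity thus potentiates or depresses depending on the inhibitory input to that dendrite, which is what lets inhibitory context signals steer learning.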
July 25, 2025 at 10:53 AM
We propose a biologically plausible model, combining nonlinear dendrites and inhibitory context-dependent gating to enable flexible learning without forgetting. Our model bridges scales from dendritic properties to assembly learning in recurrent circuits to multi-area assembly computations. 4/12
July 25, 2025 at 10:53 AM
At the same time, artificial neural networks (ANNs) have proven successful in performing a diversity of tasks and computations. However, they mainly rely on assumptions that are not biologically plausible and suffer from the problem of catastrophic forgetting. 3/12
July 25, 2025 at 10:53 AM
A prominent hypothesis suggests that groups of neurons (assemblies) provide the basis of perception/memory in the brain. Previous studies often fall short in showing how assemblies can be flexibly learned and combined to perform complex computations without forgetting previously learned ones. 2/12
July 25, 2025 at 10:53 AM