Magdalena Kachlicka
@mkachlicka.bsky.social
Postdoctoral researcher @birkbeckpsychology.bsky.social @audioneurolab.bsky.social | speech + sounds + brains 🧠 cognitive sci, auditory neurosci, brain imaging, language, methods https://mkachlicka.github.io
Reposted by Magdalena Kachlicka
New preprint by Mika Nash and others on how selective attention affects neural tracking of prediction during ecologically valid music listening: www.biorxiv.org/content/10.1...
Neural tracking of melodic prediction is pre-attentive
Music’s ability to modulate arousal and manipulate emotions relies upon formation and violation of predictions. Music is often used to modulate arousal and mood while individuals focus on other tasks,...
www.biorxiv.org
November 4, 2025 at 4:09 PM
Reposted by Magdalena Kachlicka
As it's hiring season again I'm resharing the NeuroJobs feed. Add #NeuroJobs to your post if you're recruiting or looking for an RA, PhD, Postdoc, or faculty position in Neuro or an adjacent field.

bsky.app/profile/did:...
September 3, 2025 at 3:25 PM
Reposted by Magdalena Kachlicka
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.

In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.

Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
August 19, 2025 at 1:12 AM
Reposted by Magdalena Kachlicka
My PhD student Yue Li is looking for L1 speakers of Chinese and Spanish for her online English experiment! Please see below for details!
🎓 Call for Participants – Paid Online English Study

We are looking for: native speakers of Spanish or Chinese with an advanced level of English

💸 Compensation provided

✅ Check the flyer for eligibility
📲 Scan the QR code to get in touch.

Feel free to share this news!

#linguistics #paidstudy
August 14, 2025 at 3:01 PM
Reposted by Magdalena Kachlicka
Can you think of examples of books, films, TV shows, etc. featuring earworms or other types of imagined music? Please share them here! musicinmyhead.org/inner-music-...
Inner Music in Fiction and Biography - The Inner Music and Wellbeing Network
Inner Music in Fiction and Biography ‘Inner music’ or ‘musical imagery’ refers to the music that one hears in one’s own head. For example, an ‘earworm’ is a catchy piece of music that is stuck in one’...
musicinmyhead.org
August 6, 2025 at 7:45 PM
Reposted by Magdalena Kachlicka
🎧 Join us for some fun listening tasks!

🧠 Researchers at the University of Manchester want to recruit normal-hearing volunteers aged 18-50 who are native English speakers to take part in research that will help us understand different aspects of listening in noise.

#hearinghealth #research
July 23, 2025 at 1:11 PM
Reposted by Magdalena Kachlicka
A ✨bittersweet✨ moment – after 5 years at UCL, my final first-author project with @smfleming.bsky.social is ready to read as a preprint! 🥲
Distinct neural representations of perceptual and numerical absence in the human brain: https://doi.org/10.31234/osf.io/zyrdk_v1
July 25, 2025 at 9:23 AM
Reposted by Magdalena Kachlicka
Nice review, but why "controversies"? Evidence isn’t controversial. Like "epiphenomenon," it often just means "doesn’t fit my hypothesis." That’s ad hominem science.

Brain rhythms in cognition -- controversies and future directions
arxiv.org/abs/2507.15639
#neuroscience
arxiv.org
July 25, 2025 at 3:25 PM
Reposted by Magdalena Kachlicka
Delighted to have our newest paper out in #JNeurosci! We looked at how much a single cell contributes to an auditory-evoked EEG signal. Big thanks to my co-authors Ira Kraemer, Christine Köppl, Catherine Carr and Richard Kempter (all not on Bsky). Here’s how: (1/13)
bsky.app/profile/sfnj...
#JNeurosci: @paulakuokkanen.bsky.social et al. isolated scalp signals from single neurons in the 1st processing stage of the barn owl auditory pathway, finding that single neurons' contributions to the scalp signal were unexpectedly large, and time-locked to the 2nd peak.
vist.ly/3n7ycdj
June 28, 2025 at 2:18 PM
Reposted by Magdalena Kachlicka
Children are incredible language learning machines. But how do they do it? Our latest paper, just published in TICS, synthesizes decades of evidence to propose four components that must be built into any theory of how children learn language. 1/
www.cell.com/trends/cogni... @mpi-nl.bsky.social
Constructing language: a framework for explaining acquisition
Explaining how children build a language system is a central goal of research in language acquisition, with broad implications for language evolution, adult language processing, and artificial intelli...
www.cell.com
June 27, 2025 at 5:17 AM
Reposted by Magdalena Kachlicka
🚨 New preprint 🚨

Prior work has mapped how the brain encodes concepts: If you see fire and smoke, your brain will represent the fire (hot, bright) and smoke (gray, airy). But how do you encode features of the fire-smoke relation? We analyzed fMRI with embeddings extracted from LLMs to find out 🧵
June 24, 2025 at 1:49 PM
Reposted by Magdalena Kachlicka
In what way is the frontoparietal network domain-general? We show it uses the same neural resources to represent rules in auditory and visual tasks but does so with independent codes: doi.org/10.1162/IMAG... Thanks to A Rich, D Moerel, @linateichmann.bsky.social, J Duncan, and @alexwoolgar.bsky.social
June 24, 2025 at 9:27 AM
Reposted by Magdalena Kachlicka
What makes humans similar or different to AI? In a paper out in @natmachintell.nature.com led by @florianmahner.bsky.social & @lukasmut.bsky.social, w/ Umut Güclü, we took a deep look at the factors underlying their representational alignment, with surprising results.

www.nature.com/articles/s42...
Dimensions underlying the representational alignment of deep neural networks with humans - Nature Machine Intelligence
An interpretability framework that compares how humans and deep neural networks process images has been presented. Their findings reveal that, unlike humans, deep neural networks focus more on visual ...
www.nature.com
June 23, 2025 at 8:03 PM
Reposted by Magdalena Kachlicka
Music is universal. It varies more within than between societies and can be described by a few key dimensions. That’s because brains operate by using the raw materials of music: oscillations (brainwaves).
www.science.org/doi/10.1126/...
#neuroscience
Universality and diversity in human song
Songs exhibit universal patterns across cultures.
www.science.org
June 23, 2025 at 11:38 AM
Reposted by Magdalena Kachlicka
a plea to think carefully about surprisal + what it means to understand how we understand >> link.springer.com/article/10.1...

brand new paper in Computational Brain and Behaviour with @andreaeyleen.bsky.social at @mpi-nl.bsky.social
What’s Surprising About Surprisal - Computational Brain & Behavior
In the computational and experimental psycholinguistic literature, the mechanisms behind syntactic structure building (e.g., combining words into phrases and sentences) are the subject of considerable...
link.springer.com
February 25, 2025 at 9:26 AM
Reposted by Magdalena Kachlicka
🔍 When do neurons encode multiple concepts?

We introduce PRISM, a framework for extracting multi-concept feature descriptions to better understand polysemanticity.

📄 Capturing Polysemanticity with PRISM: A Multi-Concept Feature Description Framework
arxiv.org/abs/2506.15538

🧵 (1/7)
June 19, 2025 at 3:18 PM
Reposted by Magdalena Kachlicka
Out now @cp-trendscognsci.bsky.social, w/ @akalt.bsky.social & @drmattdavis.bsky.social.

Are sensory sampling rhythms fixed by intrinsically determined processes, or do they couple to external structure? Here we highlight the incompatibility between these accounts and propose a resolution [1/6]
June 19, 2025 at 11:18 AM
Reposted by Magdalena Kachlicka
Excited to announce a new study from my time at the Center for Music in the Brain, Aarhus University, now available on bioRxiv!

Inharmonicity enhances brain signals of attentional capture and auditory stream segregation

www.biorxiv.org/content/10.1...

#EEG
#neuroscience
#neuroskyence
Inharmonicity enhances brain signals of attentional capture and auditory stream segregation
Harmonicity is an important feature for auditory perception as it influences pitch processing, memory and hearing in noisy environments. However, the neural substrates of processing harmonic and inhar...
www.biorxiv.org
April 30, 2025 at 1:27 PM
Reposted by Magdalena Kachlicka
For auditory processing aficionados: new from Y Sun, O Ghitza & G Michalareas
Tracking the rate of periodicity is achievable without sharp acoustic edges or consistent phase alignment to the envelope, consistent with the assumption of distinct processes for phase and rate tracking.
www.jneurosci.org/content/45/2...
Complex Impact of Stimulus Envelope on Motor Synchronization to Sound
The human brain tracks temporal regularities in acoustic signals faithfully. Recent neuroimaging studies have shown complex modulations of synchronized neural activities to the shape of stimulus envel...
www.jneurosci.org
June 20, 2025 at 3:43 AM
Reposted by Magdalena Kachlicka
Multidimensional feature tuning in category-selective areas of human visual cortex 🧠👇

New preprint from Leonard E. van Dyck, @martinhebart.bsky.social @kathadobs.bsky.social

www.biorxiv.org/content/10.1...

#neuroskyence
www.biorxiv.org
June 17, 2025 at 7:41 PM
Reposted by Magdalena Kachlicka
New paper with Chao Zhou in BLC: doi.org/10.1017/S1366728925100114
What makes lexical tones challenging for L2 learners? Previous studies suggest that phonological universals are at play... In our perceptual study, we found little evidence for these universals.
L2 difficulties in the perception of Mandarin tones: Phonological universals or domain-general aptitude? | Bilingualism: Language and Cognition | Cambridge Core
doi.org
June 16, 2025 at 6:24 PM
Reposted by Magdalena Kachlicka
Neural Speech-Tracking During Selective Attention: A Spatially Realistic Audiovisual Study

www.eneuro.org/content/earl...
Neural Speech-Tracking During Selective Attention: A Spatially Realistic Audiovisual Study
Paying attention to a target talker in multi-talker scenarios is associated with its more accurate neural-tracking relative to competing non-target speech. This “neural-bias” to target speech has larg...
www.eneuro.org
June 16, 2025 at 2:10 PM
Reposted by Magdalena Kachlicka
my latest, in Trends in Cognitive Sciences

this review lays out what I think the fundamental specializations are for music perception in humans, namely, the hierarchical processing of pitch and rhythm

or, how our minds turn vibrating air into music

authors.elsevier.com/a/1lG9G_V1r-...
June 13, 2025 at 9:16 PM
Reposted by Magdalena Kachlicka
Check out our preprint (linked again here: osf.io/preprints/ps...) where we measured neural sensitivity to changes in semantic space while listening to a podcast. We look not only at word-to-word changes but also at larger chunks (2-gram, 5-gram, 10-gram) to examine meaning construction at multiple levels.
June 13, 2025 at 8:33 PM
Reposted by Magdalena Kachlicka
1/
Excited to share our new paper just out in Scientific Reports!
🧠🎧 Using intracranial EEG, we show how the human brain automatically encodes patterns in random sounds – without attention or explicit awareness.
🔗 doi.org/10.1038/s415...
Direct brain recordings reveal implicit encoding of structure in random auditory streams - Scientific Reports
doi.org
May 5, 2025 at 10:36 AM