Peter Donhauser
pwdonh.bsky.social
Cognitive Neuroscience Researcher | Speech & Audition | MEG

Frankfurt, Germany

https://scholar.google.com/citations?user=276f1C0AAAAJ
The development of parallel phonological representations varied based on the timing of language exposure, showing how earlier-learned languages shape the acquisition of subsequent ones.
August 29, 2025 at 12:20 PM
We show that multiple phonological systems are organized through parallel representations, preserving the unique aspects of each language while maintaining shared articulatory features (e.g., manner of articulation and consonant voicing).
August 29, 2025 at 12:20 PM
We demonstrate the approach on a dataset collected using a speaker odd-one-out task, where we show that people’s first language can shape how they perceive continuous and categorical aspects of accents.
February 6, 2025 at 3:46 PM
(Q2) However, we show in simulations how to incorporate design matrices in the model fit. This allows us to quantify how well participants' odd-one-out choices can be explained using prior knowledge (here: stimulus categories).
February 6, 2025 at 3:46 PM
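A minimal sketch of the design-matrix idea in the post above (illustrative only; not the preprint's actual fitting procedure — the category labels and weights here are made up): prior knowledge enters as a design matrix whose columns are candidate features, here one-hot stimulus-category indicators.

```python
import numpy as np

# Hypothetical example: 6 stimuli, each assumed to belong to one of 3 categories.
categories = np.array([0, 0, 1, 1, 2, 2])   # assumed category of each stimulus
design = np.eye(3)[categories]              # 6 x 3 one-hot design matrix

weights = np.ones(3)                        # feature weights (would be fit to choices)

def dist(i, j):
    """Weighted squared distance between stimuli under the design-matrix features."""
    return float(np.sum(weights * (design[i] - design[j]) ** 2))

# Stimuli sharing a category are identical under these features ...
assert dist(0, 1) == 0.0
# ... while cross-category pairs differ, so category structure alone already
# predicts odd-one-out choices that single out the cross-category stimulus.
assert dist(0, 2) > 0.0
```

Fitting the weights to observed choices then quantifies how much of participants' behavior the category structure explains.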
(Q1) In this task, human raters choose the odd one out in a triplet of stimuli. In this simulated example, two raters disagree on one triplet. Our approach assumes a common feature space that describes the stimuli, but raters can weight features differently in their choices.
February 6, 2025 at 3:46 PM
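The idea in the post above can be sketched as follows (a toy illustration, not the preprint's actual model — the feature space and weights are randomly generated): stimuli share one feature space, and each rater's odd-one-out choice follows from rater-specific feature weights.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stimuli, n_features = 6, 3
stimuli = rng.normal(size=(n_stimuli, n_features))  # shared feature space
rater_weights = rng.random(size=(2, n_features))    # one weight vector per rater

def odd_one_out(triplet, weights):
    """Pick the stimulus least similar to the other two under weighted distances."""
    a, b, c = triplet
    def d(i, j):
        return float(np.sum(weights * (stimuli[i] - stimuli[j]) ** 2))
    # the odd one out maximizes its summed distance to the remaining pair
    scores = [d(a, b) + d(a, c), d(b, a) + d(b, c), d(c, a) + d(c, b)]
    return triplet[int(np.argmax(scores))]

# The same triplet can yield different choices under different rater weightings,
# which is how the model accommodates individual differences.
choices = [odd_one_out((0, 1, 2), w) for w in rater_weights]
```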
(Q2) Sometimes the features underlying people’s similarity judgments are not obvious. How can we combine prior knowledge about stimulus domains with data-driven approaches to gain new insights?
February 6, 2025 at 3:46 PM
In a new preprint with @kleind.bsky.social, we ask two questions: (Q1) People differ in how they perceive the similarity of stimuli in their environment. How can we model the features underlying similarity judgments in arbitrary domains, while accounting for individual differences? osf.io/agpb5_v1 🧵
February 6, 2025 at 3:46 PM
This way, online participants can rate samples on multiple features, sort them into a fixed or flexible number of clusters, or judge their similarity.
December 13, 2024 at 2:57 PM
The nice thing is that audio samples are associated with visual tokens that control audio playback and that can be manipulated depending on the type of rating. You can try it out here: pwdonh.github.io/posts/audio-tokens/
December 13, 2024 at 2:57 PM
Re-advertising a tool we created some time ago for rating, sorting and comparing audio samples in the browser. It can be used as a jspsych plugin for online behavioral experiments. Check the repository: github.com/pwdonh/audio_tokens 🧵
December 13, 2024 at 2:57 PM
"So, how was your weekend?" #neuroimaging #neuroscience #worklifebalance
December 9, 2024 at 10:52 AM