David Brang
@dbrang.bsky.social
Associate Professor of Psychology UMich. Cognitive neuroscience of multisensory perception, neural oscillations, and brain tumor physiology. 🏳️‍🌈
sites.lsa.umich.edu/brang-lab/
Our department is recruiting! New tenure-track, open rank faculty position in the Department of Psychology at the University of Michigan (emphasis on human cognition and artificial intelligence).
apply.interfolio.com/169170
July 16, 2025 at 3:46 PM
New paper with @herveyjumper.bsky.social and Vardhaan Ambati! It shows that causal (DCS+) language cortex in glioma patients has greater ECoG information encoding (higher entropy and linguistic decoding). DCS+ sites showed stronger oscillatory and high-gamma activity, enabling prediction of DCS+ sites.
July 6, 2025 at 7:33 PM
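The entropy measure here can be illustrated with a minimal sketch (my own illustration, not the paper's pipeline): Shannon entropy of a signal's amplitude histogram, where signals spread across more amplitude states score higher.

```python
import numpy as np

def signal_entropy(x, bins=32):
    """Shannon entropy (bits) of a signal's amplitude histogram.
    Higher values = amplitude spread over more states (richer encoding)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                     # drop empty bins (0*log 0 := 0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
broadband = rng.uniform(-1, 1, 10_000)                    # spread over many states
pure_tone = np.sin(np.linspace(0, 200 * np.pi, 10_000))   # concentrated distribution
print(signal_entropy(broadband) > signal_entropy(pure_tone))  # → True
```

The bin count (32) is an arbitrary choice for the example; real analyses would use richer measures, but the intuition is the same.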
Tissue samples were from healthy cortex (e.g., removed during insular tumor resection), allowing transcriptionally defined neuronal subclasses to be quantified, including the numbers of glutamatergic and GABAergic neurons. This link strengthens the case for using the aperiodic (1/f) slope from EEG/ECoG as a surrogate for cortical excitability.
May 24, 2025 at 7:37 PM
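For intuition about what "flatter aperiodic slope" means: the slope is the linear fit of log power vs log frequency. A minimal sketch (tools like specparam/FOOOF do this more robustly by also modeling oscillatory peaks; the fit range here is an arbitrary example choice):

```python
import numpy as np

def aperiodic_slope(freqs, psd, fit_range=(30.0, 50.0)):
    """Estimate the aperiodic (1/f) slope as a linear fit of
    log10(power) vs log10(frequency) within fit_range (Hz)."""
    mask = (freqs >= fit_range[0]) & (freqs <= fit_range[1])
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return slope

# Toy spectra: a steeper 1/f^2 spectrum vs a flatter 1/f^1 spectrum.
freqs = np.arange(1.0, 101.0)
steep = freqs ** -2.0   # slope ≈ -2
flat = freqs ** -1.0    # slope ≈ -1 (flatter: relatively more excitation)

print(aperiodic_slope(freqs, steep))  # → ≈ -2.0
print(aperiodic_slope(freqs, flat))   # → ≈ -1.0
```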
New paper with Shawn Hervey-Jumper's lab! As only PART of this project, it validates the relationship between aperiodic slope and cortical excitability using ECoG and single-nucleus RNA sequencing in humans. Tissue samples with more excitatory neurons showed a flatter aperiodic slope.
May 24, 2025 at 7:37 PM
No other tools currently exist specifically for intraOp ECoG registration. This was a multi-year collaboration with the amazing lab of Shawn Hervey-Jumper and included @senaoten.bsky.social, @sanjeevherr.bsky.social, Vardhaan Ambati, @ysibih.bsky.social, Katie Lu, and Jasleen Kaur.
May 8, 2025 at 9:21 PM
Localizing ECoG electrodes from photos alone is challenging (particularly with small craniotomies) and novice undergrads/medical students showed high variability (mean error = 16mm, ICC = .40), compared to experts with neuroanatomy training (mean error = 4mm, ICC = .93).
May 8, 2025 at 9:21 PM
ExtraOp ECoG/sEEG localization requires a postOp MRI or CT, which are unavailable in intraOp contexts. As an alternative, we used preOp MRI imaging, vascular reconstructions, and intraOp photos to place grids along cortical reconstructions, conforming electrodes to the brain surface.
May 8, 2025 at 9:21 PM
Fourier Wave Explorer lets you create a waveform from 5 sine waves that vary in frequency and amplitude, then shows the individual and composite waveforms, along with the PSD.

Github pages:
github.com/dbrang/Fouri...
github.com/dbrang/Fouri...

Features inspired by: github.com/Jezzamonn/fo...
March 31, 2025 at 2:57 PM
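The core idea behind the Explorer can be sketched in a few lines (frequencies and amplitudes are arbitrary example values): sum five sine waves into a composite, then recover the component frequencies from the PSD.

```python
import numpy as np

fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)    # 2 s of signal

# Five sine components (frequency in Hz, amplitude) -- example values.
components = [(3, 1.0), (10, 0.8), (22, 0.5), (40, 0.3), (75, 0.2)]
composite = sum(a * np.sin(2 * np.pi * f * t) for f, a in components)

# PSD via the FFT (periodogram); peaks fall at the component frequencies.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(composite)) ** 2 / t.size

top = sorted(float(f) for f in freqs[np.argsort(psd)[-5:]])
print(top)  # → [3.0, 10.0, 22.0, 40.0, 75.0]
```

Because each component frequency falls exactly on an FFT bin (2 s of data gives 0.5 Hz resolution), the five largest PSD bins recover the five input frequencies with no spectral leakage.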
Fourier Wave Creator lets you either select example waveforms (e.g., square wave, 10 Hz + 3 Hz sine waves) or draw your own waveform, then shows the sine waves needed to reconstruct that complex waveform, along with the PSD. You can suppress frequencies to show the effects on reconstruction accuracy.
March 31, 2025 at 2:57 PM
I'm teaching an undergrad EEG/MEG/iEEG methods course and created a pair of interactive webpages to help build intuition about the Fourier Transform.
dbrang.github.io/Fourier-Wave...
dbrang.github.io/Fourier-Wave...
March 31, 2025 at 2:57 PM
Transition GIF showing the photos with and without the grid registered together. {Warning: these are intraoperative photos}
December 4, 2024 at 3:14 PM
First R01 for the lab! Grateful to have such amazing lab members and collaborators. We’re recruiting postdocs for this work and for an NSF grant (salary 65-70k; the postdoc could potentially live out of state and work remotely). More info at...
November 23, 2024 at 5:06 AM
These data also emphasize that silent visual speech increases neural activity (increased BOLD and iEEG HGp) at the posterior STG and STS, but suppresses activity elsewhere in the STG (including A1). Visemes could be classified from both regions suggesting two mechanisms. (5/7)
November 23, 2024 at 5:41 AM
Classifiers were also run at individual time-points to test when viseme information is encoded in the auditory system, showing similar onsets across phonemes and visemes (potentially starting earlier for visemes). For context, visual information can precede the speech audio by 50-200 ms. (4/7)
November 23, 2024 at 5:34 AM
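A toy version of this time-resolved decoding analysis, using simulated data and a simple nearest-class-mean decoder standing in for the actual classifiers: accuracy sits at chance before the simulated information "onset" and rises afterward.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_chans, n_times, onset = 80, 16, 50, 20
X = rng.normal(size=(n_trials, n_chans, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, onset:] += 1.0   # class difference appears only after "onset"

def decode_timepoint(X, y, ti, rng):
    """Split-half nearest-class-mean decoding accuracy at one time point."""
    idx = rng.permutation(len(y))
    train, test = idx[:len(y) // 2], idx[len(y) // 2:]
    m0 = X[train][y[train] == 0, :, ti].mean(axis=0)
    m1 = X[train][y[train] == 1, :, ti].mean(axis=0)
    d = X[test, :, ti]
    pred = (np.linalg.norm(d - m1, axis=1) < np.linalg.norm(d - m0, axis=1)).astype(int)
    return (pred == y[test]).mean()

acc = np.array([decode_timepoint(X, y, ti, rng) for ti in range(n_times)])
print(acc[:onset].mean(), acc[onset:].mean())  # chance (~0.5) before onset, high after
```

Running the same decoder independently at every time point is what turns a classifier into an information-onset estimate.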
We extended this result in patients using iEEG with different phonemes and visemes. Classification was strongest for ERP signals (reflecting oscillatory phase and power) but also present in broadband power (70-150 Hz). (3/7)
November 23, 2024 at 5:27 AM
During silent visual speech, visemes could be classified from the STG (bilaterally) and the left pSTS using fMRI. Classification was driven by distributed spatial information, consistent with the population-coded nature of phonemic representations. (2/7)
November 23, 2024 at 5:20 AM
New preprint from the lab! "Auditory cortex encodes lipreading information through spatially distributed activity." We used fMRI (n=64) and intracranial recordings (n=6) to study how the auditory system represents lipreading information....
November 23, 2024 at 5:06 AM
These were lightly photoshopped and lovely, but came in second place to the one up at https://sites.lsa.umich.edu/brang-lab/
November 23, 2024 at 5:20 AM
But visual responses did not differ according to the rate of the amplitude-modulated sound in terms of phase or power. In contrast, auditory electrodes showed strong phase-based entrainment at each rhythmic sound rate. (4/5)
November 23, 2024 at 5:34 AM
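Phase-based entrainment of this kind is typically quantified with inter-trial phase coherence (ITC): the length of the mean resultant vector of per-trial phases at the stimulation frequency. A simulated sketch (the 4 Hz rate, trial count, and noise level are arbitrary example values):

```python
import numpy as np

def itc(trials, fs, freq):
    """Inter-trial phase coherence at one frequency: length of the mean
    resultant vector of per-trial FFT phases (1 = perfect entrainment)."""
    n = trials.shape[1]
    k = int(round(freq * n / fs))            # FFT bin for the target frequency
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(2)
fs, dur, n_trials = 500, 2.0, 60
t = np.arange(0, dur, 1 / fs)

# Entrained: constant phase across trials; non-entrained: random phase per trial.
entrained = np.array([np.sin(2 * np.pi * 4 * t) + rng.normal(0, 1, t.size)
                      for _ in range(n_trials)])
random_ph = np.array([np.sin(2 * np.pi * 4 * t + rng.uniform(0, 2 * np.pi))
                      + rng.normal(0, 1, t.size) for _ in range(n_trials)])
print(itc(entrained, fs, 4), itc(random_ph, fs, 4))  # high vs near 1/sqrt(n_trials)
```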
We found that 41 out of 178 (23%) visual electrodes responded to the onset of a sound (clustered largely around hMT+ and V1), with 10 of these 41 electrodes (24%) also generating a second response to the offset of a sound. (3/5)
November 23, 2024 at 5:27 AM
New iEEG paper from the lab testing what auditory features visual cortex is sensitive to (we've previously found that the auditory system sends sound onset timing and some coarse spatial information, but anything else?). https://journals.physiology.org/doi/full/10.1152/jn.00164.2021...
November 23, 2024 at 5:06 AM
The classic model (the race model) assumes that the senses are independent and that faster multisensory RTs are due simply to the presence of 2 stimuli vs 1. Multisensory effects are those that exceed the race model's prediction. This works well but fails to explain slower RTs (2/3)
November 23, 2024 at 5:20 AM
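The race model's prediction (Miller's inequality) is easy to state in code: the redundant-target RT CDF cannot exceed the sum of the unisensory CDFs. A sketch with simulated RTs, where the "multisensory" condition is literally an independent race (all distribution parameters are made-up example values):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative RT distribution evaluated at times t."""
    return np.searchsorted(np.sort(rts), t, side="right") / len(rts)

rng = np.random.default_rng(1)
t = np.linspace(150, 600, 200)

# Hypothetical RTs (ms): two independent unisensory channels, and a
# "multisensory" condition built as the faster of two independent draws.
rt_a = rng.normal(350, 50, 5000)
rt_v = rng.normal(380, 50, 5000)
rt_race = np.minimum(rng.normal(350, 50, 5000), rng.normal(380, 50, 5000))

bound = np.clip(ecdf(rt_a, t) + ecdf(rt_v, t), 0, 1)   # Miller's bound
violation = np.max(ecdf(rt_race, t) - bound)
print(violation)  # ≤ 0 (up to sampling noise): an independent race never violates
```

Observed multisensory RTs that push above this bound are the "multisensory effects" the race model cannot explain; note the bound is one-sided, which is why slowed RTs fall outside its scope.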