jerrytang.bsky.social
@jerrytang.bsky.social
postdoctoral fellow at UT Austin interested in language disorders and brain-computer interfaces
Our new approach can decode language from a participant without any language data from that participant! First, we train decoders on reference participants who do have language data. Then we use *silent movies* to align brain responses across participants. Finally, we decode the new participant's responses using the reference decoders.

3/5
February 6, 2025 at 5:39 PM
We tested our approach on neurologically healthy participants and found that silent movies are nearly as effective as narrative stories for transferring semantic decoders. Scientifically, this adds to the growing evidence that semantic representations are shared between language and vision.

4/5
February 6, 2025 at 5:39 PM
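The three-step pipeline described in the thread (train decoders on reference participants, align a new participant via shared silent-movie responses, then decode through that alignment) can be sketched with a toy simulation. Everything here is an illustrative assumption, not the authors' implementation: the dimensions are made up, responses are simulated as linear functions of shared stimulus features, and both the "decoder" and the cross-participant alignment are stand-in ridge regressions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Simulated dimensions (illustrative, not from the study).
n_movie_trs, n_story_trs = 200, 150   # timepoints of movie / story data
ref_voxels, new_voxels = 50, 60       # voxel counts per participant
n_features = 10                       # shared semantic feature space

# Shared latent stimulus features drive both participants' responses.
movie_feats = rng.standard_normal((n_movie_trs, n_features))
story_feats = rng.standard_normal((n_story_trs, n_features))
ref_basis = rng.standard_normal((n_features, ref_voxels))
new_basis = rng.standard_normal((n_features, new_voxels))

# Step 1: train a "decoder" for the reference participant on story
# (language) data: here, ridge mapping responses -> stimulus features.
ref_story = story_feats @ ref_basis + 0.1 * rng.standard_normal((n_story_trs, ref_voxels))
decoder = Ridge(alpha=1.0).fit(ref_story, story_feats)

# Step 2: align the new participant to the reference participant using
# responses to the same silent movie (no language data needed).
ref_movie = movie_feats @ ref_basis + 0.1 * rng.standard_normal((n_movie_trs, ref_voxels))
new_movie = movie_feats @ new_basis + 0.1 * rng.standard_normal((n_movie_trs, new_voxels))
aligner = Ridge(alpha=1.0).fit(new_movie, ref_movie)

# Step 3: decode held-out responses from the new participant by mapping
# them into the reference participant's space, then applying the decoder.
new_test = story_feats @ new_basis + 0.1 * rng.standard_normal((n_story_trs, new_voxels))
pred_feats = decoder.predict(aligner.predict(new_test))

# Score: mean per-feature correlation between predicted and true features.
r = float(np.mean([np.corrcoef(pred_feats[:, i], story_feats[:, i])[0, 1]
                   for i in range(n_features)]))
print(f"mean feature correlation: {r:.2f}")
```

In this low-noise linear toy, the composed mapping recovers the stimulus features well; the real challenge the approach addresses is that real fMRI responses are noisy and participants' voxel spaces differ, which is exactly what the movie-based alignment step absorbs.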