Leyla Isik
@lisik.bsky.social
Cognitive neuroscientist studying visual and social perception. Asst Prof at JHU Cog Sci. She/her
Excited to be in Amsterdam for #CCN2025! If you're here, check out the presentations from our lab 👇 @qinwenshuo.bsky.social @ziruichen.bsky.social @manasimalik.bsky.social

@cogcompneuro.bsky.social
August 11, 2025 at 8:54 AM
If you're heading to @cogscisociety.bsky.social, check out the presentations from our lab

And congratulations again to @emaliemcmahon.bsky.social on her Glushko Dissertation Prize 🥳
July 29, 2025 at 1:08 PM
I’m happy to be at #VSS2025 and share what our lab has been up to this year!

I’m also honored to receive this year’s Young Investigator Award and will give a short talk at the awards ceremony on Monday
May 16, 2025 at 6:13 PM
We found distinct neural signatures for physical interactions (e.g., fighting, dancing) and communicative interactions along posterior-to-anterior regions of the recently proposed lateral pathway, a distinction that has been difficult to find with hypothesis-driven experiments.

2/n
May 14, 2025 at 10:30 PM
While we observed marginal differences between species, these differences could not be attributed to language model prediction alone, suggesting they arise from other inter-species or recording-modality factors. 5/n
March 14, 2025 at 4:19 PM
We compared vision, language, and multimodal models to ventral stream responses and found strikingly similar results across species. Notably, pure language models (evaluated on image captions) predicted ventral stream responses almost as well as pure vision models in both humans AND macaques. 4/n
March 14, 2025 at 4:18 PM
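The comparison in the thread above is, at its core, an encoding-model analysis: extract features for each image from a vision model (or from a language model run on the image's caption), fit a regularized regression to the neural responses, and score held-out predictions. A minimal sketch of that idea with random stand-in data; the ridge pipeline, feature dimensions, and cross-validation scheme here are my assumptions, not necessarily the paper's exact procedure:

```python
# Hypothetical encoding-model comparison: which feature space (vision vs.
# language-on-captions) better predicts held-out neural responses?
# All arrays below are random stand-ins, not the NSD1000 data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_images = 1000                                    # NSD1000: 1,000 shared scenes
vision_feats = rng.normal(size=(n_images, 512))    # stand-in vision-model embeddings
language_feats = rng.normal(size=(n_images, 512))  # stand-in LLM caption embeddings
neural = rng.normal(size=(n_images, 100))          # stand-in voxel/site responses

def encoding_score(features, responses):
    """Mean cross-validated correlation between predicted and actual responses."""
    ridge = RidgeCV(alphas=np.logspace(-2, 4, 7))
    preds = cross_val_predict(ridge, features, responses, cv=5)
    rs = [np.corrcoef(preds[:, i], responses[:, i])[0, 1]
          for i in range(responses.shape[1])]
    return float(np.mean(rs))

print("vision features:  ", encoding_score(vision_feats, neural))
print("language features:", encoding_score(language_feats, neural))
```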
We collected electrophysiology data from the ventral visual stream of rhesus macaques while they viewed the same 1,000 natural scene images as human fMRI participants from the Natural Scenes Dataset (the "NSD1000"). 3/n
March 14, 2025 at 4:17 PM
New preprint: “Monkey See, Model Knew: LLMs accurately predict visual responses in humans AND NHPs”
Led by Colin Conwell with @emaliemcmahon.bsky.social, Akshay Jagadeesh, Kasper Vinken, @amrahs-inolas.bsky.social, @jacob-prince.bsky.social, George Alvarez, @taliakonkle.bsky.social & Marge Livingstone. 1/n
March 14, 2025 at 4:14 PM
Monkey See, Model Knew: LLMs Accurately Predict Human AND Macaque Visual Brain Activity
Colin Conwell, @emaliemcmahon.bsky.social, Akshay Vivek Jagadeesh, Kasper Vinken, @amrahs-inolas.bsky.social, @jacob-prince.bsky.social, George Alvarez, @taliakonkle.bsky.social and Marge Livingstone
December 13, 2024 at 3:24 PM
"Vision and language representations in multimodal AI models and human social brain regions during natural movie viewing," led by @hsmall.bsky.social with @hleemasson.bsky.social and Stewart Mostofsky
December 13, 2024 at 3:12 PM
Our paper "Relational visual representations underlie human social interaction recognition" led by @manasimalik.bsky.social is now out in Nature Communications
www.nature.com/articles/s41...
November 13, 2023 at 3:54 PM
Our paper "Hierarchical organization of social action features in the lateral visual stream" led by @emaliemcmahon.bsky.social with Mick Bonner is now out in @currentbiology.bsky.social

www.sciencedirect.com/science/arti...
November 1, 2023 at 4:55 PM
📣 Now out in J Neurosci, led by @hleemasson.bsky.social: Using EEG-fMRI fusion, we find that observed touch events (social vs. non-social) are processed in a feedforward manner through social perceptual brain regions
www.jneurosci.org/content/earl...
#neuroskyence #PsychSciSky
October 24, 2023 at 2:19 PM
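For readers unfamiliar with the method named in the post above: EEG-fMRI fusion is commonly done via representational similarity analysis, correlating the EEG representational dissimilarity matrix (RDM) at each timepoint with each fMRI region's RDM; the time at which each region's correlation peaks suggests when information reaches it (earlier peaks in posterior than anterior regions would be consistent with feedforward processing). A minimal sketch with made-up data; the ROI names, condition counts, and RDM values are illustrative, not the study's:

```python
# Sketch of RSA-based EEG-fMRI fusion with synthetic RDMs.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_cond, n_times = 20, 100
eeg_rdms = rng.random(size=(n_times, n_cond, n_cond))  # one EEG RDM per timepoint
fmri_rdms = {"ROI_posterior": rng.random(size=(n_cond, n_cond)),  # illustrative ROIs
             "ROI_anterior":  rng.random(size=(n_cond, n_cond))}

tri = np.triu_indices(n_cond, k=1)  # RDMs are symmetric; compare upper triangles

for roi, rdm in fmri_rdms.items():
    ts = np.array([spearmanr(eeg_rdms[t][tri], rdm[tri]).correlation
                   for t in range(n_times)])
    print(roi, "peak fusion timepoint:", int(ts.argmax()))
```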
Great examples! I get the sense that the internal prompt engineering does help somewhat, and possibly even within a chat...

For example, if I first ask for a "closed, empty box", it seems to do better with "closed box with sheep inside, not visible" (though still only 50-50!)
October 23, 2023 at 9:01 PM