Rachel Ryskin
@ryskin.bsky.social
440 followers 270 following 18 posts
Cognitive scientist @ UC Merced http://raryskin.github.io PI of Language, Interaction, & Cognition (LInC) lab: http://linclab0.github.io she
Pinned
ryskin.bsky.social
🚨 Postdoc Opportunity PSA! 🚨

🗓️ UC President’s Postdoctoral Fellowship Program applications are due Nov. 1 (ppfp.ucop.edu/info/)

Open to anyone interested in a postdoc & academic career at a UC campus.

I'm happy to sponsor an applicant if there’s a good fit— please reach out!
University of California | President’s Postdoctoral Fellowship Program
ppfp.ucop.edu
Reposted by Rachel Ryskin
strijkers.bsky.social
The first publication of the #ERC project ‘LaDy’ is out, and it’s an important one, I think:

We show that word processing and meaning prediction are fundamentally different during social interaction compared to using language individually!
👀 short 🧵/1

psycnet.apa.org/fulltext/202...
#OpenAccess
Reposted by Rachel Ryskin
neuranna.bsky.social
As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @neurotaha.bsky.social built a library to easily compare design choices & model features across datasets!

We hope it will be useful to the community & plan to keep expanding it!
1/
neurotaha.bsky.social
🚨 Paper alert:
To appear in the DBM NeurIPS Workshop

LITcoder: A General-Purpose Library for Building and Comparing Encoding Models

📄 arxiv: arxiv.org/abs/2509.091...
🔗 project: litcoder-brain.github.io
Reposted by Rachel Ryskin
gretatuckute.bsky.social
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.

In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.

Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
Reposted by Rachel Ryskin
alexanderhuth.bsky.social
New paper with @rjantonello.bsky.social @csinva.bsky.social, Suna Guo, Gavin Mischler, Jianfeng Gao, & Nima Mesgarani: We use LLMs to generate VERY interpretable embeddings where each dimension corresponds to a scientific theory, & then use these embeddings to predict fMRI and ECoG. It WORKS!
biorxiv-neursci.bsky.social
Evaluating scientific theories as predictive models in language neuroscience https://www.biorxiv.org/content/10.1101/2025.08.12.669958v1
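A rough sketch of the idea described above (toy data; the variable names and feature counts are invented, and this is not the authors' code): each stimulus gets a vector of yes/no answers to theory-derived questions (supplied by an LLM in the paper), and ridge regression maps those vectors to brain responses, so every fitted weight is tied to a named theory.

```python
# Toy sketch of theory-based interpretable encoding (not the authors' code).
# Each stimulus is represented by a vector whose dimensions are answers to
# questions derived from scientific theories; ridge regression then maps
# that vector to brain responses.

import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)

n_stimuli, n_theories, n_voxels = 200, 12, 50

# Hypothetical theory-based features: 1 if the theory's property holds
# for the stimulus (e.g., "mentions a concrete object"), else 0.
X = rng.integers(0, 2, size=(n_stimuli, n_theories)).astype(float)

# Fake brain data generated from a subset of the features, plus noise.
true_weights = rng.normal(size=(n_theories, n_voxels)) * (rng.random((n_theories, 1)) > 0.5)
Y = X @ true_weights + rng.normal(scale=0.5, size=(n_stimuli, n_voxels))

# Fit on the first 150 stimuli, evaluate held-out prediction on the rest.
model = RidgeCV(alphas=np.logspace(-2, 3, 10)).fit(X[:150], Y[:150])
print("held-out R^2:", model.score(X[150:], Y[150:]))

# Because each column of X corresponds to a named theory, the fitted
# coefficients in model.coef_ read out as per-theory contributions.
```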
Reposted by Rachel Ryskin
adelegoldberg.bsky.social
The LLM finds it FAR easier to distinguish b/w DO & PO constructions when the lexical & info structure of instances conforms more closely w/ the respective constructions (left 👇). Where's pure syntax? The LLM seems to say "🤷‍♀️" (right) @SRakshit
adele.scholar.princeton.edu/sites/g/file...
Reposted by Rachel Ryskin
nogazs.bsky.social
If you missed us at #cogsci2025, my lab presented 3 new studies showing how efficient (lossy) compression shapes individual learners, bilinguals, and action abstractions in language, further demonstrating the extraordinary applicability of this principle to human cognition! 🧵

1/n
Reposted by Rachel Ryskin
moshepoliak.bsky.social
(1)💡NEW PUBLICATION💡
Word and construction probabilities explain the acceptability of certain long-distance dependency structures

Work with Curtis Chen and Ted Gibson

Link to paper: tedlab.mit.edu/tedlab_websi...

In memory of Curtis Chen.
Reposted by Rachel Ryskin
thomashikaru.bsky.social
1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday 1pm PDT in Nob Hill A! I'll be talking about our work towards an implemented computational model of noisy-channel comprehension (with @postylem.bsky.social, Ted Gibson, and @rplevy.bsky.social).
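For readers unfamiliar with the framework, here is a toy noisy-channel illustration (made-up sentences and probabilities; not the implemented model from the talk): the comprehender scores each candidate intended sentence by its prior probability times the probability that noise turned it into the perceived string.

```python
# Toy noisy-channel comprehension sketch (illustrative only).
# The comprehender infers the intended sentence s from a perceived string p via
#   P(s | p) ∝ P(p | s) * P(s),
# where P(s) is a prior over plausible sentences and P(p | s) is a noise model
# (here: each word-level edit costs a constant factor).

# Hypothetical prior over intended sentences (numbers are made up).
PRIOR = {
    "the mother gave the candle to the daughter": 0.95,  # plausible PO reading
    "the mother gave the candle the daughter": 0.05,     # implausible literal DO reading
}

def edit_distance(a: str, b: str) -> int:
    """Word-level Levenshtein distance between two sentences."""
    a, b = a.split(), b.split()
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)] for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return dp[len(a)][len(b)]

def likelihood(perceived: str, intended: str, noise_rate: float = 0.2) -> float:
    """P(perceived | intended): each word-level edit costs a factor of noise_rate."""
    return noise_rate ** edit_distance(perceived, intended)

def posterior(perceived: str) -> dict:
    scores = {s: likelihood(perceived, s) * p for s, p in PRIOR.items()}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

# The plausible PO interpretation wins despite the literal DO input.
print(posterior("the mother gave the candle the daughter"))
```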
ryskin.bsky.social
Looking forward to seeing everyone at #CogSci2025 this week! Come check out what we’ve been working on in the LInC Lab, along with our fantastic collaborators!

Paper 🔗 in 🧵👇
ryskin.bsky.social
Thrilled to see this work published — and even more thrilled to have been part of such a great collaborative team!

One key takeaway for me: Webcam eye-tracking w/ jsPsych is awesome for 4-quadrant visual world paradigm studies -- less so for displays w/ smaller ROIs.
Reposted by Rachel Ryskin
ellscain.bsky.social
New paper w/ @ryskin.bsky.social and Chen Yu: We analyzed parent-child toy play and found that cross-situational learning statistics were present in naturalistic settings!

onlinelibrary.wiley.com/doi/epdf/10....
Reposted by Rachel Ryskin
gretatuckute.bsky.social
What are the organizing dimensions of language processing?

We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals
Reposted by Rachel Ryskin
rtommccoy.bsky.social
🤖🧠 Paper out in Nature Communications! 🧠🤖

Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths?

Our answer: Use meta-learning to distill Bayesian priors into a neural network!

www.nature.com/articles/s41...

1/n
A schematic of our method. On the left are shown Bayesian inference (visualized using Bayes’ rule and a portrait of the Reverend Bayes) and neural networks (visualized as a weight matrix). Then, an arrow labeled “meta-learning” combines Bayesian inference and neural networks into a “prior-trained neural network”, described as a neural network that has the priors of a Bayesian model – visualized as the same portrait of Reverend Bayes but made out of numbers. Finally, an arrow labeled “learning” goes from the prior-trained neural network to two examples of what it can learn: formal languages (visualized with a finite-state automaton) and aspects of English syntax (visualized with a parse tree for the sentence “colorless green ideas sleep furiously”).
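A minimal sketch of the prior-distillation idea, assuming a Reptile-style meta-update as a stand-in for the paper's meta-learning procedure (the toy task distribution and hyperparameters are invented): tasks are sampled from a prior, the network adapts briefly to each one, and the meta-parameters drift toward the adapted solutions so that the initialization comes to encode the prior.

```python
# Sketch of distilling a prior into a neural network via meta-learning.
# Illustrative only: Reptile-style update, toy tasks, made-up hyperparameters.

import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_task():
    """Sample a toy task from a simple prior: a random linear rule mapping
    8-bit strings to accept/reject labels. Stands in for sampling formal
    languages from a Bayesian model's prior."""
    w = torch.randn(8)
    x = (torch.rand(32, 8) > 0.5).float()
    y = (x @ w > 0).float().unsqueeze(1)
    return x, y

def make_model():
    return nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

meta_model = make_model()
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5
loss_fn = nn.BCEWithLogitsLoss()

for episode in range(200):
    x, y = sample_task()
    # Copy the meta-parameters and adapt briefly to the sampled task.
    fast_model = make_model()
    fast_model.load_state_dict(meta_model.state_dict())
    opt = torch.optim.SGD(fast_model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        opt.zero_grad()
        loss_fn(fast_model(x), y).backward()
        opt.step()
    # Reptile meta-update: move the meta-parameters toward the adapted ones,
    # so the network's initial state encodes the prior over tasks.
    with torch.no_grad():
        for p_meta, p_fast in zip(meta_model.parameters(), fast_model.parameters()):
            p_meta += meta_lr * (p_fast - p_meta)

# After meta-training, the network should learn a new task drawn from this
# prior from very little data, mimicking a Bayesian learner with that prior.
```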
Reposted by Rachel Ryskin
sparksociety.bsky.social
Unfortunately, the NSF grant that supports our work has been terminated. This is a setback, but our mission has not changed. We will continue to work hard on making cognitive science a more inclusive field. Stay tuned for upcoming events.
Reposted by Rachel Ryskin
thomhills.bsky.social
Does the mind degrade or become enriched as we grow old? For healthy aging effects, the evidence supports enrichment. Indeed, the evidence suggests that changes in crystallized intelligence (enrichment) and fluid intelligence (slowing) share a common cause. psycnet.apa.org/record/2026-...
Reposted by Rachel Ryskin
mcxfrank.bsky.social
Super excited to submit a big sabbatical project this year: "Continuous developmental changes in word recognition support language learning across early childhood": osf.io/preprints/ps...
Images: the paper's title and author list; the time course of word recognition for kids at different ages.