andrea e. martin
andreaeyleen.bsky.social
::language, cognitive science, neural dynamics::
Lise Meitner Group Leader, Max Planck Institute for Psycholinguistics |
Principal Investigator, Donders Centre for Cognitive Neuroimaging, Radboud University |
http://www.andreaemartin.com/
lacns.GitHub.io
Reposted by andrea e. martin
I don't know about you but the way my brain works is by analyzing the contents of the entire internet to make an educated guess about what word I should use next.
November 25, 2025 at 2:05 PM
Reposted by andrea e. martin
Are we really at a stage in public education where we consider it OK to have literally Google-branded schoolchildren whose learner identities are tied to being "responsible AI" users of private for-profit technologies?
November 22, 2025 at 8:31 PM
Correct. No notes
November 17, 2025 at 7:16 PM
And “patriarch maxxing” was right there…
November 17, 2025 at 7:15 PM
Reposted by andrea e. martin
Now, we should think about what the questions are & how we can answer them.

An important question is: how is the brain capable of bootstrapping structure from statistics? And the reverse: does the brain refine probabilistic representations with structured knowledge, and if so, how does this work?
[GIF, via media.tenor.com] Alt: a cartoon elephant with glasses says "i now have additional questions"
November 17, 2025 at 5:13 PM
Reposted by andrea e. martin
This means that any effects found for surprisal always leave room for the possibility of latent factors driving both the probabilities and the human responses, and do not allow any conclusions about which factors are involved (and why).

So... Now what?

(image by Noémie te Rietmolen)
November 17, 2025 at 5:13 PM
Reposted by andrea e. martin
By contrast, using a data-driven feature like surprisal as an explanation prevents us from examining the influence of latent factors, because the variance that stems from those factors is only reflected second-hand, as a second-order variable.
November 17, 2025 at 5:13 PM
Reposted by andrea e. martin
The problem with this power is that a data-driven estimate will perform better than a theory-driven estimate: the data do not err, the theorizer does (@olivia.science & @andreaeyleen.bsky.social, 2021). These mistakes are awesome: they are opportunities to adjust our theory!
November 17, 2025 at 5:13 PM
Reposted by andrea e. martin
The power of surprisal stems from the fact that (lexical) surprisal can (and will) parametrically reflect variation stemming from any domain or representational level of language. Why? Because words form patterns for many reasons! Semantics, syntax, frequency... Surprisal does not distinguish.
November 17, 2025 at 5:13 PM
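[Editor's note: the lexical surprisal discussed in this thread is standardly defined as -log P(word | context). A minimal sketch of that definition, using a toy bigram model over a hypothetical corpus (the corpus and all names here are illustrative, not from the thread):]

```python
import math
from collections import Counter

# Hypothetical toy corpus, for illustration only.
corpus = "the dog chased the cat the cat chased the mouse".split()

# Bigram counts and context (preceding-word) counts.
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def surprisal(context: str, word: str) -> float:
    """Lexical surprisal in bits: -log2 P(word | context)."""
    p = bigrams[(context, word)] / contexts[context]
    return -math.log2(p)

# "the" is followed by dog, cat, cat, mouse, so P(cat | the) = 2/4.
print(surprisal("the", "cat"))  # 1.0 bit
print(surprisal("the", "dog"))  # 2.0 bits
```

[Note that nothing in this number says *why* "cat" is likelier than "dog" after "the" — semantic, syntactic, and frequency effects all collapse into the single probability, which is the thread's point.]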
Reposted by andrea e. martin
Surprisal is the ‘everything bagel/nothing burger’ of predictors—it has everything baked in, which is the problem.
November 17, 2025 at 5:13 PM