Abstract Deadline: December 17
Notification: January 15
Really excited to have this out, where we give a formal account, w/ experiments, of how to make sense of that!
Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.
Yet they often assign higher probability to ungrammatical strings than to grammatical strings.
How can both things be true? 🧵👇
Also delighted the ACL community continues to recognize unabashedly linguistic topics like filler-gaps... and the huge potential for LMs to inform such topics!
Topics of interest include pragmatics, metacognition, reasoning, & interpretability (in humans and AI).
Check out JHU's mentoring program (due 11/15) for help with your SoP 👇
Our PhD students also run an application mentoring program for prospective students. Mentoring requests due November 15.
tinyurl.com/2nrn4jf9
TTIC is recruiting both tenure-track and research assistant professors: ttic.edu/faculty-hiri...
NYU is recruiting faculty fellows: apply.interfolio.com/174686
Happy to chat with anyone considering either of these options
Very happy to chat about my experience as a new faculty member at UT Ling; come find me at #COLM2025 if you're interested!
Asst or Assoc.
We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)
faculty.utexas.edu/career/170793
🤘
Will be in Montreal all week and excited to chat about LM interpretability + its interaction with human cognition and ling theory.
New work with @kmahowald.bsky.social and @cgpotts.bsky.social!
🧵👇!
When: Tuesday, 11 AM – 1 PM
Where: Poster #75
Happy to chat about my work and topics in computational linguistics & cogsci!
Also, I'm on the PhD application journey this cycle!
Paper info 👇:
Across models and domains, we did not find evidence that LLMs have privileged access to their own predictions. 🧵(1/8)
@siyuansong.bsky.social Tue am introspection arxiv.org/abs/2503.07513
@qyao.bsky.social Wed am controlled rearing: arxiv.org/abs/2503.20850
@sashaboguraev.bsky.social INTERPLAY ling interp: arxiv.org/abs/2505.16002
I’ll talk at INTERPLAY too. Come say hi!
Not presenting anything but here are two posters you should visit:
1. @qyao.bsky.social on controlled rearing for direct and indirect evidence for datives (w/ me, @weissweiler.bsky.social, and @kmahowald.bsky.social), Wed morning
Paper: arxiv.org/abs/2503.20850
Are you fascinated by whether linguistic representations are lurking in LLMs?
Are you in need of a richer model of spatial words across languages?
Consider UT Austin for all your Computational Linguistics Ph.D. needs!
mahowak.github.io
A stimulus-computable rational model of visual habituation in infants and adults doi.org/10.7554/eLif...
This is the thesis of two wonderful students: @anjiecao.bsky.social @galraz.bsky.social, w/ @rebeccasaxe.bsky.social
Cool ideas about representations in LLMs with linguistic relevance!
I want to draw your attention to a COLM paper by my student @sfeucht.bsky.social that has totally changed the way I think and teach about LLM representations. The work is worth knowing.
And you can meet Sheridan at COLM, Oct 7!
bsky.app/profile/sfe...
Check out @sebajoe.bsky.social’s feature on ✨AstroVisBench:
A new benchmark developed by researchers at the NSF-Simons AI Institute for Cosmic Origins is testing how well LLMs implement scientific workflows in astronomy and visualize results.
The paper argues for three main claims.
philpapers.org/rec/GOLWDC-2 1/7
My own perspective is that while there is utility to LMs, the scientific insights are greatly overstated.
🥳I'm excited to share that I've started as a postdoc at Uppsala University NLP @uppsalanlp.bsky.social, working with Joakim Nivre on topics related to constructions and multilinguality!
🙏Many thanks to the Walter Benjamin Programme of the DFG for making this possible.