Kyle Mahowald
kmahowald.bsky.social
UT Austin linguist http://mahowak.github.io/. computational linguistics, cognition, psycholinguistics, NLP, crosswords. occasionally hockey?
Check it out for cool plots like this one, showing affinities between words in sentences and how those affinities reveal that Green Day isn't like green paint or green tea. And congrats to @coryshain.bsky.social and the CLiMB lab! climblab.org
March 11, 2025 at 8:04 PM
We are happy to join them, as well as more recent positions:
January 29, 2025 at 4:07 PM
Many of these points about different levels of analysis were presciently made about earlier models, including in 1988 by Smolensky philpapers.org/rec/SMOOTP-2 and here by Stabler www.cambridge.org/core/journal...
January 29, 2025 at 4:07 PM
For a sentence like “The *keys* to the cabinet *are* on the table”, it is a good explanation to say that a "grammatical subject" agrees with a "matrix verb", even if LMs or brains don’t have a structure that exactly corresponds to those concepts.
January 29, 2025 at 4:07 PM
FIRST: the success of deep learning makes us rethink language learning. The old idea in linguistics (and machine learning!) was: if you want your learner to generalize in the nice blue way, not in the bad red way, you need to restrict your learner to a small set of hypotheses.
January 29, 2025 at 4:07 PM
1. Although there are edge cases, LMs have made progress learning syntax. It’s increasingly hard to argue they’re doing so only through shallow tricks (but it’s important to be careful). Rather, they’ve learned some of “the real thing”. (Figure from Hu et al. 2024)
January 29, 2025 at 4:07 PM
Language Models learn a lot about language, much more than we expected, without much built-in structure. This matters for linguistics and opens up enormous opportunities. So should we just throw out linguistics? No! Quite the opposite: we need theory and structure. See this slide for a summary.
January 29, 2025 at 4:07 PM
LMs need linguistics! New paper, with @futrell.bsky.social, on LMs and linguistics that conveys our excitement about what the present moment means for linguistics and what linguistics can do for LMs. Paper: arxiv.org/abs/2501.17047. 🧵below.
January 29, 2025 at 4:07 PM
If this is anything like the live version at the LSA (and it seems to be!), it's worth watching for an inspiring vision for how linguistics and LLMs can fit together...and, as this slide near the end shows, how linguistic phenomena can be described neurally, artificial-neurally, or symbolically.
January 13, 2025 at 4:06 PM
Getting back to my South Florida roots at #EMNLP2024 this week, say hi. Tuesday: @kanishka.bsky.social on how LLMs learn a rare construction, plus posters on his work on property inheritance. Wednesday: Ask-a-Philosopher! @harveylederman.bsky.social and I will be presenting our TACL philosophy paper.
November 10, 2024 at 11:26 PM