Venkat
@venkatasg.net
venkatasg.net
Assistant Professor CS @ Ithaca College. Computational Linguist interested in pragmatics & social aspects of communication.

I love creating this graph every five years over ACL Anthology titles and abstracts. Mentions of nuance/fine-grain seem to be doubling every five years 🙃 Nuance rising has yet to level off among *CL publications.
September 20, 2025 at 11:15 PM
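For anyone curious how a count like this might be produced, here's a minimal sketch. It assumes a hypothetical CSV dump of the Anthology with year, title, and abstract columns; this is an illustration, not necessarily the pipeline behind the actual graph.

```python
# Count mentions of nuance/fine-grain per five-year bucket, assuming a
# hypothetical "anthology.csv" with 'year', 'title', and 'abstract' columns.
import csv
import re
from collections import Counter

PATTERN = re.compile(r"nuanc|fine.?grain", re.IGNORECASE)

mentions = Counter()
totals = Counter()
with open("anthology.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        bucket = (int(row["year"]) // 5) * 5  # five-year bucket
        totals[bucket] += 1
        text = f"{row['title']} {row['abstract']}"
        if PATTERN.search(text):
            mentions[bucket] += 1

for bucket in sorted(totals):
    rate = mentions[bucket] / totals[bucket]
    print(f"{bucket}-{bucket + 4}: {mentions[bucket]} papers ({rate:.2%})")
```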
Exhibit N on how synthetic text/AI detectors just don't work reliably. I generated some (long) sentences from GPT-4.1 and GPT-5 with the same prompt; the top open-source model on the RAID benchmark classifies most GPT-4.1 outputs as synthetic and most GPT-5 outputs as not synthetic.
September 10, 2025 at 8:05 PM
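A minimal sketch of this kind of check, assuming a hypothetical Hugging Face model id for the detector ("org/raid-top-detector" is a placeholder, not the actual RAID leaderboard model) and placeholder generations:

```python
# Score generations from two models with an off-the-shelf detector.
# The model id and the label names checked below are assumptions.
from transformers import pipeline

detector = pipeline("text-classification", model="org/raid-top-detector")

gpt41_outputs = ["..."]  # long sentences generated by GPT-4.1 (placeholders)
gpt5_outputs = ["..."]   # same prompt, generated by GPT-5 (placeholders)

for name, texts in [("GPT-4.1", gpt41_outputs), ("GPT-5", gpt5_outputs)]:
    preds = detector(texts, truncation=True)
    # Which label string means "machine-generated" varies by detector.
    flagged = sum(p["label"].lower() in {"synthetic", "ai", "machine"} for p in preds)
    print(f"{name}: {flagged}/{len(texts)} flagged as synthetic")
```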
Love the glaze 😍 I started a beginner wheel class recently and was so happy with what came out! Didn’t know it would be this satisfying
June 7, 2025 at 12:09 AM
If we keep telling people to ‘be on the right side of history’ it will surely work sooner or later!
December 21, 2024 at 9:01 PM
A totally normal way to respond to a positive interaction with LLMs... from www.wired.com/story/at-age...
December 3, 2024 at 8:01 PM
Genius's lyric annotations are inadequate for this moment; only memes can make any sense of this 😭
November 23, 2024 at 4:56 PM
Did Tyler and ASAP both watch German expressionist films all summer 😅 Both their new music videos are so clearly influenced by them — really wonder why that style is in vogue now.
October 25, 2024 at 1:51 PM
YouTube has AI summaries now? They do a terrible job of advertising the video as something you should watch. Is the point that I shouldn't watch it now that I've read the summary?
October 9, 2024 at 1:59 PM
There are at least 3 different books titled 'The Last Man Who Knew Everything' — one says it's Leibniz, another Young, and the third Fermi??? Guess the phrase was too catchy not to use for a polymath 😂
October 9, 2024 at 12:19 AM
Implicit references to the in-group go up, and references to the in-group using 'they' go down, eclipsed by out-group 'they'. WP is implicit in the language commenters use, even when the model doesn't receive it as input in any way, as these clear linear trends show...[5/7]
June 27, 2024 at 5:59 PM
Large-scale analysis of comments with our best fine-tuned model reveals some surprising trends. Commenters are less likely to refer to the in-group (or at all) the more likely their team is to win, in a linear fashion. Within referring comments, things get more interesting... [4/7]
June 27, 2024 at 5:58 PM
insight into how models use WP, as well as how usage of referring expressions changes with WP. Few-shot (but not fine-tuning) results on our gold dataset are best with linguistic descriptions of WP rather than numbers. LLMs are fickle with processing numerical information... [3/7]
June 27, 2024 at 5:58 PM
this as a *tagging* task - tag references to in/out group, and build models end-to-end to output the tagged comment from the original comment. As it's a live game, we can align each comment with the **live win probability** for each team. Grounding comments in win probability (WP) gives us... [2/7]
June 27, 2024 at 5:56 PM
What differentiates in-group speech from out-group speech? I've been pondering this question for most of my PhD, and the final chapter of my dissertation tackles this question in a super interesting domain: comments from NFL🏈 team subreddits on live game threads. Our insight was to frame...🧵[1/7]
June 27, 2024 at 5:54 PM
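A minimal sketch of the setup described in the thread above: a few-shot prompt that asks for the tagged comment back, grounding it in WP described linguistically rather than numerically. The tag names ([IN]...[/IN], [OUT]...[/OUT]), the thresholds, and the prompt wording are hypothetical illustrations, not the paper's actual scheme.

```python
# Build a few-shot tagging prompt grounded in win probability (WP).
# Tags, thresholds, and wording are illustrative assumptions.

def build_fewshot_prompt(comment: str, win_prob: float) -> str:
    """Describe WP linguistically (the thread reports this works better
    few-shot than raw numbers), then ask for the tagged comment."""
    if win_prob > 0.75:
        wp_desc = "the commenter's team is very likely to win"
    elif win_prob > 0.5:
        wp_desc = "the commenter's team is slightly favored"
    elif win_prob > 0.25:
        wp_desc = "the commenter's team is slightly behind"
    else:
        wp_desc = "the commenter's team is very likely to lose"
    return (
        "Tag references to the commenter's own team with [IN]...[/IN] "
        "and references to the opposing team with [OUT]...[/OUT].\n"
        "Example: 'we need a stop here' -> '[IN]we[/IN] need a stop here'\n"
        f"Context: {wp_desc}.\n"
        f"Comment: {comment}\nTagged:"
    )

print(build_fewshot_prompt("they can't keep getting away with this", 0.18))
```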
Other fonts also seem to have it, I think, but Times New Roman is the worst.
June 13, 2024 at 7:18 PM
So I used to pooh-pooh the numbers estimated for greenhouse gas emissions from LMs, but LMs are so big now that the Llama 3 models emitted as much as me taking 2000 return flights from NY to Delhi??? Assuming 100 ppl in a plane, it's a more reasonable (sounding, to me at least) 20 flights.
April 26, 2024 at 8:45 PM
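A back-of-the-envelope check of that last step. Only the 2000-flight equivalence and the 100-passengers-per-plane figure come from the post; the per-passenger emissions number is an assumed round value for illustration.

```python
# Sanity-check the flights arithmetic from the post above.
TCO2E_PER_RETURN_FLIGHT = 1.0   # assumed tCO2e per passenger, NY-Delhi return
passenger_flights = 2000        # the post's equivalence for Llama 3 training
passengers_per_plane = 100      # the post's assumption

total_emissions = passenger_flights * TCO2E_PER_RETURN_FLIGHT
plane_trips = passenger_flights / passengers_per_plane

print(f"~{total_emissions:.0f} tCO2e total")           # ~2000 tCO2e
print(f"= {plane_trips:.0f} full-plane round trips")   # 20
```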
DeTeXt is available as a native app on Apple Vision Pro for anyone who has one of these, and writes in LaTeX - I'm sure there will be dozens of you in a few years. If you need to find the command for that symbol, I got you. apps.apple.com/us/app/id153... mastodon.social/@cameronbang...
February 18, 2024 at 5:39 PM
translucent latex 😩
January 13, 2024 at 4:15 PM
If any of you (lol) get the Vision Pro in a few weeks, don't worry - you can look up LaTeX commands 'spatially' while you write your super influential papers with my app DeTeXt, which works out of the box. I assume you draw by hovering your finger in the air?
January 13, 2024 at 1:47 PM
I was curious about the subreddit r/BrandNewSentence - are the sentences actually brand new? I got the averaged log probs from the Mistral LLM and compared them to a similar subreddit... sort of? One problem is getting the actual brand-new sentence - it's not always in the title 🙄 which is what I used...
December 16, 2023 at 2:43 AM
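A minimal sketch of the averaged log-prob scoring described above, assuming the Mistral-7B base checkpoint on Hugging Face (the post doesn't say which Mistral variant was used):

```python
# Average per-token log probability of a sentence under a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16)
model.eval()

def mean_logprob(sentence: str) -> float:
    """Score each token given its left context, then average."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift by one: position t predicts token t+1.
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_lp.mean().item()

print(mean_logprob("Colorless green ideas sleep furiously."))
```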
2023 in films
December 15, 2023 at 2:41 AM
Presumably pictures of monkey rock in Lost Maples park, Texas, are in its training data, but Bard cites a picture from an article about a rock *in France*, then says it's from China???? It misses the prominent ears of a 🐵? There are no trees or people in the foreground either 🙄
December 7, 2023 at 8:04 PM
Ridley Scott’s filmography is honestly bewildering - so many highs and lows and mids, and so many different genres. I’d find this personally more satisfying if I were looking back on my career - he seems willing to fuck around.
December 4, 2023 at 2:23 AM
I'm happy to announce that 🐮Lil-Bevo🤠 is ready to see the world. It's UT Austin's submission to BabyLM with @kmahowald.bsky.social, Juan Diego & Kaj Bostrom. We tried 3 strategies inspired by human learning - music, shorter sequences, and targeted pretraining. Read our paper: arxiv.org/abs/2310.17591
October 27, 2023 at 4:13 PM