Ted Underwood
@tedunderwood.com
Uses machine learning to study literary imagination, and vice-versa. Likely to share news about AI & computational social science / Sozialwissenschaft / 社会科学

Information Sciences and English, UIUC. Distant Horizons (Chicago, 2019). tedunderwood.com
Pinned
Wrote a short piece arguing that higher ed must help steer AI. TLDR: If we outsource this to tech, we outsource our whole business. But rejectionism is basically stalling. If we want to survive, schools themselves must proactively shape AI for education & research. [1/6, unpaywalled at 5/6] +
Opinion | AI Is the Future. Higher Ed Should Shape It.
If we want to stay at the forefront of knowledge production, we must fit technology to our needs.
www.chronicle.com
Reposted by Ted Underwood
It's first-round interview season and the most useful thing I can recommend is to spend time on these: csfaculty.github.io
Interview Questions for Computer Science Faculty Jobs
Practice answering typical interview questions you might be asked during faculty job interviews in Computer Science
csfaculty.github.io
January 20, 2026 at 4:07 AM
Reposted by Ted Underwood
Baguettotron is now taught in the classroom.
January 19, 2026 at 11:00 PM
Everyone has heard that AI complicates traditional forms of student assessment.

What you haven't heard is that AI also makes it possible for motivated students to design & execute quite interesting projects, so some profs (cough) are now regularly running 4-6 independent studies. Sustainable? 🤷‍♂️
January 19, 2026 at 8:35 PM
Reposted by Ted Underwood
Two extremes of student LLM usage: 1) avoid ever touching one & maybe get really into fountain pens and 100gsm paper, vs 2) build some kind of Rube Goldberg setup where a novel-length set of prompts and 10,000 lines of Python directs a half-dozen models recursively critiquing each other's outputs.
January 19, 2026 at 8:13 PM
Reposted by Ted Underwood
I love this new preprint from Cody Kommers + @ari-holtzman.bsky.social so much. arxiv.org/abs/2601.08768
Super contrarian & generative argument that we need to start better evaluating AI systems for their capacity to delight/entertain, not just perform intelligence/cognition - as cultural machines.
AI as Entertainment
Generative AI systems are predominantly designed, evaluated, and marketed as intelligent systems which will benefit society by augmenting or automating human cognitive labor, promising to increase per...
arxiv.org
January 19, 2026 at 7:06 PM
Reposted by Ted Underwood
We didn’t watch Veronica Mars in 2003 but the series is now on Nflix

Apart from the fun dated fashion, what’s striking is the show’s celebration of the lower middle class & hatred of violent rich kids. Just before the wave of princess and superhero fantasies, which were conspicuously undemocratic 😕
January 19, 2026 at 7:11 PM
Reposted by Ted Underwood
Loved this piece, which can be read as a defense of descriptive work. Instead of scales, let people speak. "Will it be causal? Hell no."
if i could make you read ONE (1) single post to improve your understanding of the challenges of social science in general it would be this one from @markfabian.bsky.social about wellbeing science specifically

profmarkfabian.substack.com/p/airing-my-...
Airing my grievances with wellbeing science
We have a streetlight problem
profmarkfabian.substack.com
January 19, 2026 at 6:30 PM
Reposted by Ted Underwood
sounds wise but even this framing is a year out of date. people are talking almost exclusively about what they're doing with LLMs *right now*, and on top of that, what they're doing right now is exactly what the most hypebrained shill said would happen. so this is wrong about both now and a year ago
Part of the struggle with the LLM discourse is that genuine (i.e. non-grifter) proponents talk exclusively about the possibilities of the transformer architecture in an ideal future with a rational business model.

Whereas opponents largely talk about the world we live in today, and its constraints.
What many of us have been saying for a while: AI*-related technologies are tools like any other technology, useful in some places, not in others. If it had been presented (and funded) as such, we'd be in a better place.

Instead, tech ideologues pushed AI* as the Second Coming, went all-in on it. /1
January 19, 2026 at 5:26 PM
Reposted by Ted Underwood
Zhipu just released a powerful lightweight option of GLM 4.7

✨ 30B total/3B active - MoE
huggingface.co/zai-org/GLM-...
January 19, 2026 at 4:30 PM
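For readers who haven't met mixture-of-experts models: "30B total / 3B active" means the full parameter count lives in many expert blocks, but a router activates only a few of them per token. Here is a toy sketch of that routing step, with invented sizes and expert counts (nothing here reflects GLM's actual configuration):

```python
# Toy mixture-of-experts routing, to illustrate "30B total / 3B active":
# all experts exist in the model, but the router picks only a few per token,
# so only a small fraction of the weights does work on any given input.
# Dimensions and expert count are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 10, 1           # tiny made-up dimensions

experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector to its top-k experts and mix their outputs."""
    scores = x @ router                         # one score per expert
    chosen = np.argsort(scores)[-top_k:]        # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)                 # (8,): only 1 of 10 experts ran
```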
Reposted by Ted Underwood
yes yes we’ve all seen claude code
January 19, 2026 at 4:16 PM
Reposted by Ted Underwood
Blacksky @blackskyweb.xyz now has its own servers running the network

This means they're able to run their own feature development and moderation policies, while still connecting to Bluesky through the shared Atmosphere network!

s/o to @rude1.blacksky.team and the blacksky team for amazing work
January 19, 2026 at 5:29 PM
Reposted by Ted Underwood
Will I be at variance with Nelson? Maybe?

At the DSI this morning.
January 19, 2026 at 4:09 PM
Reposted by Ted Underwood
To calm your doomscrolling, birds at a feeder on a very snowy day.
January 19, 2026 at 12:22 PM
Reposted by Ted Underwood
One thing I appreciate about Bluesky at moments like this is it's gotten pretty good at surfacing credible reports from careful journalists who do their homework.

That Bluesky remains relatively small seems a dire commentary on how much the world values careful reporting
January 19, 2026 at 1:04 PM
Reposted by Ted Underwood
if i could make you read ONE (1) single post to improve your understanding of the challenges of social science in general it would be this one from @markfabian.bsky.social about wellbeing science specifically

profmarkfabian.substack.com/p/airing-my-...
Airing my grievances with wellbeing science
We have a streetlight problem
profmarkfabian.substack.com
January 19, 2026 at 9:35 AM
My, this letter to the Norwegian prime minister must certainly be embarrassing for Americans. I've never been more grateful that I made the decision to emigrate to a scientific research station on the Kerguelen Islands, where I feel guilty only that our research sometimes disturbs penguins.
January 19, 2026 at 6:42 AM
Reposted by Ted Underwood
Call for papers -- due March 31, 2026 (abstracts due March 26)
colmweb.org/cfp.html

Call for workshops -- due April 14, 2026
colmweb.org/cfw.html
January 18, 2026 at 10:18 PM
Reposted by Ted Underwood
i don't think there is any desk job i have ever done that i could not have done better by having Claude Code to hand. being able to build bespoke widgets at just below the speed of language is really, really good!
these tools were not good at dev tasks until a massive amount of resources made them fantastic at them. i do believe other fields will be more difficult.

but if you are counting these tools out and have confidence they'll never be able to work.. i do not understand that confidence.
January 18, 2026 at 11:03 PM
The telling detail rn is that AI users are much less interested in debating Doctorow about efficacy than in reading posts by Ronacher and Hughes about how to handle transformative change
January 18, 2026 at 11:20 PM
Reposted by Ted Underwood
More broadly, imagine dumping, like, every 19th century diary into a machine and being able to ask questions! My personal fantasy is to be able to go to a single non-special street corner in London and look up every reference in history to that single spot.
January 18, 2026 at 3:26 PM
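A minimal sketch of what the "dump the diaries in and ask questions" fantasy looks like at its crudest: index the text and retrieve every passage that mentions a spot. The diary snippets and the query below are invented, and a serious system would use semantic embeddings (or an LLM over retrieved passages) rather than bare TF-IDF:

```python
# Toy illustration: "dump the diaries in and ask questions" via plain TF-IDF retrieval.
# The snippets and query are invented; a real corpus would be millions of pages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

diary_snippets = [
    "Walked past the corner of Fleet Street and Fetter Lane in a thick fog.",
    "Mother bought ribbon at the draper's near Covent Garden this morning.",
    "The omnibus stopped again at the Fleet Street corner; a crowd had gathered.",
]
query = "the corner of Fleet Street"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(diary_snippets)   # one vector per snippet
query_vector = vectorizer.transform([query])

# Rank snippets by cosine similarity to the query and print the best matches first.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, snippet in sorted(zip(scores, diary_snippets), reverse=True):
    print(f"{score:.2f}  {snippet}")
```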
A key piece of received wisdom right now is that LLMs by definition produce "average & predictable" output and can only homogenize human culture.

This is where it matters that people envision models working via search/retrieval and haven't thought about high-dimensional probability distributions. +
I think it'd be so good if people in general had the idea that these models' outputs exist in a 'latent space'

I remember in early 2023 my uncle, who was a techie (hasn't coded in ages), asked me how ChatGPT worked w/o a 'massive database'. Got to roll up my sleeves and give my "✨it's math!✨" spiel
January 18, 2026 at 4:40 PM
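A tiny numeric sketch of the distribution point, with made-up logits rather than anything from a real model: output is sampled from a probability distribution over next tokens, not looked up in a database, and the sampling temperature controls how far the model strays from the single most likely continuation.

```python
# Illustrative only: toy "next-token" logits, not from any real model.
# Shows that output is sampled from a probability distribution, and that
# temperature trades off the most likely token against more surprising ones.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["predictable", "unexpected", "luminous", "forgettable"]
logits = np.array([2.0, 0.5, 0.0, -1.0])   # made-up scores for a toy vocabulary

def sample(temperature: float, n: int = 10) -> list[str]:
    """Softmax the logits at a given temperature and draw n tokens."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return [tokens[i] for i in rng.choice(len(tokens), size=n, p=probs)]

print("low temperature :", sample(0.3))    # hugs the most likely token
print("high temperature:", sample(1.5))    # spreads across the distribution
```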
Reposted by Ted Underwood
Someone said "we become the person we think other people think we are" and it's been stuck in my head as I'm pretty sure it's true
January 18, 2026 at 2:41 PM
Reposted by Ted Underwood
formats over apps
A Social Filesystem — overreacted
Formats over apps.
overreacted.io
January 18, 2026 at 7:05 AM
Reposted by Ted Underwood
Weekend thoughts on Gas Town, Beads, slop AI browsers, and AI-generated PRs flooding overwhelmed maintainers. I don't think we're ready for the new powers we're wielding. lucumr.pocoo.org/2026/1/18/ag...
Agent Psychosis: Are We Going Insane?
What’s going on with the AI builder community right now?
lucumr.pocoo.org
January 18, 2026 at 10:38 AM
Watching atproto turn into a real medium for social experiments — and thinking back to hellthread/alf days — gives me a glimpse of what parents must feel looking at their successful adult children and remembering 6500 diapers.
Maybe the most important thing I’ve learned over the past few years is that the solution to what looks like a collective action problem is to just start solving it. People will show up.
January 18, 2026 at 1:00 PM