Tucker Hermans
@thermans.bsky.social
Professor of Computing. I do research on robotics and AI. I also ski.

https://robot-learning.cs.utah.edu/thermans
Reminding me of Voldo from Soulcalibur
November 12, 2025 at 3:37 AM
Sounds painful
October 25, 2025 at 12:22 AM
I've been vibecoding demos for class this semester, stuff I would historically just draw by hand is now much more precise, like PID control for simple unicycle models. I find it to still be fairly interpretable since it's not building massive codebases on its own.
October 24, 2025 at 12:29 AM
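(Not part of the original post, just a hedged illustration: a minimal sketch of the kind of demo mentioned above, a PID heading controller for a simple unicycle model. The gains, timestep, goal, and constant forward speed are all illustrative assumptions, not anything from the post.)

```python
import numpy as np

def simulate_unicycle(goal=np.array([4.0, 3.0]), v=1.0, dt=0.05, steps=400,
                      kp=2.0, ki=0.0, kd=0.1):
    """Drive a unicycle toward a goal point with PID control on heading.

    All parameters are illustrative assumptions for a classroom-style demo.
    """
    x, y, theta = 0.0, 0.0, 0.0
    integral, prev_err = 0.0, 0.0
    traj = []
    for _ in range(steps):
        # Heading error toward the goal, wrapped to [-pi, pi]
        desired = np.arctan2(goal[1] - y, goal[0] - x)
        err = np.arctan2(np.sin(desired - theta), np.cos(desired - theta))
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        # PID on heading error gives the turn rate command
        omega = kp * err + ki * integral + kd * deriv
        # Unicycle kinematics: constant forward speed, controlled turn rate
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += omega * dt
        traj.append((x, y))
        if np.hypot(goal[0] - x, goal[1] - y) < 0.05:
            break
    return np.array(traj)

if __name__ == "__main__":
    path = simulate_unicycle()
    print(f"Reached ({path[-1][0]:.2f}, {path[-1][1]:.2f}) in {len(path)} steps")
```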
Reading groups were maybe my favorite part of grad school! Off and on over the past 10 years I've been able to convince my group that they're great, but it's never seemed to capture the magic of my youth.
October 17, 2025 at 4:26 PM
I too prefer the no-presenter version: we go around the table and have everyone comment on their impressions before jumping into the details.
October 17, 2025 at 4:20 PM
Confidence based on the original stopping criteria or the updated analysis from this year?
October 16, 2025 at 11:22 PM
A junior colleague wanted to list me as co-last author on a paper from their group as thanks for helping advise, and I had to say no. It was the first I’d heard that people were doing this. This was just in the past month.
October 5, 2025 at 10:37 PM
This was big at the tail end of the learning-with-kernels era that ended about 10 years ago. I’m curious what you think we can learn from it for neural nets beyond "use latent states or RNNs"; asking because I am dissatisfied with that answer myself!
October 5, 2025 at 10:23 PM
I still love reading textbooks! Especially when the authors release the PDFs :)
September 16, 2025 at 2:55 AM
I had a student ask me after class how he could get into more detail than the lecture. I asked if he'd been reading the textbook and he said "Oh yeah, I should do that." Much better than last year: a student missed 2 weeks of class, I suggested the textbook, and he said "I'm not fond of reading."
September 16, 2025 at 12:53 AM
I wrote my statement of purpose for grad school apps on using RL and transfer learning to learn team-level strategy in 2008. Things went a very different way for me, but that would be fun to get back into, especially with quadrupeds, as I miss the Aibos we won RoboCup with.
September 3, 2025 at 2:59 AM
The semester started. I doubt that explains it all, but maybe it does for profs? I’ve barely logged on since classes started up.
September 3, 2025 at 2:48 AM
The reality gap
August 30, 2025 at 3:25 PM
Lyndon LaRouche Machine?
August 6, 2025 at 3:19 PM
I agree it helps make clearer what was actually done, but I feel that math is helpful to justify that what was done is correct and reasonable. We should of course also just publish the code, which could sometimes substitute for the pseudocode in the paper if cleanly structured.
July 22, 2025 at 3:08 PM
LOL, I just had to double check, but yeah, the papers I’ve written with you have way less math and more pseudocode than my typical paper.
July 22, 2025 at 2:46 PM
I think you're right that it's rare. Pre-9/11 I think there was a lot more "America is the federal government and we are wary of them" type sentiment, and that was still the view post-9/11 for some. I don't have as great a sense of things now, having left 20 years ago, but those folks are MAGA.
July 19, 2025 at 3:59 PM
Having grown up in Texas I definitely knew people who put Texas before America, but they were essentially neo-Confederates.
July 19, 2025 at 3:40 PM
That Thing You Dune
July 10, 2025 at 2:03 AM
It's exactly my strong memory of the deep learning skepticism that I experienced and participated in during grad school that's helped me keep a much more agile view about research as a prof for the past decade.
June 21, 2025 at 10:36 PM
Just an abuser.
June 7, 2025 at 2:49 PM