Daniel Lowd
@lowd.bsky.social
4.1K followers · 490 following · 400 posts
CS Prof at the University of Oregon, studying adversarial machine learning, data poisoning, interpretable AI, probabilistic and relational models, and more. Avid unicyclist and occasional singer-songwriter. He/him
In case you haven’t been following closely, there’s been amazing progress in AI models for music!

My partner has been adapting her original spec fic story collections into whole albums, one song per story. (Kind of like how Elton John’s “Rocket Man” is based on the Bradbury short.)
This is a song inspired by the first story I ever sold — “Forget Me Not” — which is about a man addicted to a memory drug. I love how jazzy and catchy it turned out.

youtu.be/w3ui-XMrFcQ?...
Forget Me Not
YouTube video by Mary E. Lowd - Topic
Oh wow I have to check this out
Eh, I think monarchy-as-mascot is mostly harmless. It’s a silly cultural phenomenon that makes some people happy, like Mardi Gras or the Olympics. And if you’re going to have a national, cultural focal point like that, a figurehead monarchy is much less dangerous than religion.
I’ve done therapy. It’s… fine, but only if you have a specific goal and a good therapist who is a good match for you. That’s hard to find.

Talking to AI is like a fancy form of journaling — you reflect on your thoughts and feelings and a milquetoast average of the internet responds.
Reposted by Daniel Lowd
I can remember analogous panics about Dungeons and Dragons.

Everything that teenagers start doing in large numbers is dangerous. It always causes them to lose touch with reality and estranges them from their families. I doubt that civilization can survive if we keep letting teenagers do new stuff.
Reposted by Daniel Lowd
Yeah, to be clear, I expect slop to end up like kitsch, in that it becomes its own countercultural art movement that people defiantly brand their work as to get attention and cater to the audience that's looking for it.
I worry that, in an effort to prevent chatbots from supporting self-harm, we will end up with LLMs that refuse to comment on a draft of a murder mystery novel. Worse, all LLMs will be controlled by large companies, because they’re the only entities that can afford the liability.
Reposted by Daniel Lowd
I thought I wouldn’t be one of those academics super into outreach talks, but I just put together something about understanding LLMs for laypeople and I get to talk about results that I don’t really focus on in any of my technical talks! It’s actually really cool. I made this lil takeaway slide
Reposted by Daniel Lowd
melt down the guns
forge the steel into beams
raise the beams to build schools
but first
melt down all the guns
Using a verification email for 2FA turns a 10-second login experience into a 10-minute login experience.
I'm thousands of miles from home, but my laptop connects to the wifi with no hiccups. I love eduroam!
I'm at @satml.org this week! Great conference, great community, and hosted in a great country this year (Denmark).
Reposted by Daniel Lowd
A very exciting day for open-source AI! We're releasing our biggest open source model yet -- OLMo 2 32B -- and it beats the latest GPT-3.5, GPT-4o mini, and leading open weight models like Qwen and Mistral. As usual, all data, weights, code, etc. are available.
UO faculty have voted to authorize a strike.

89% turnout, 92% in favor.

Why? During the pandemic, inflation soared while salaries were stagnant. UO admin refuses to acknowledge the problem and wants us to just accept the pay cut.

Thanks to my union, @uauoregon.bsky.social, for representing us!
Reposted by Daniel Lowd
New research from Costello, Pennycook, and Rand. Can conversations with AI help reduce belief in conspiracy theories? Quite possibly. What’s the mechanism? Evidence production. Let me explain my little-t theory about how this might work.
Last year, we published a paper showing that AI models can "debunk" conspiracy theories via personalized conversations. That paper raised a major question: WHY are the human<>AI convos so effective? In a new working paper, we have some answers.

TLDR: facts

osf.io/preprints/ps...
Reposted by Daniel Lowd
New blog post: Stop talking about AGI, it's lazy and misleading

I argue that we all could do better than using the slippery, undefinable term Artificial General Intelligence. We would have a much better discussion by focusing on concrete tasks and capabilities.
togelius.blogspot.com/2025/01/stop...
Reposted by Daniel Lowd
If the current AI gold rush is anything like the dot com boom, some of these ventures will end up like pets dot com and some will end up like Amazon dot com.
Reposted by Daniel Lowd
I think that's a key question here: do you find AI tools useful or not?

People who don't find them useful think their energy use is a complete waste

People (like myself) who use them every day are much more accepting of their energy costs
After all these years, The Hampsterdance Album is still a masterpiece.
AI is like plastic — a lot of people hate it because it often comes across as fake and tacky, but it’s flexible and it’s cheap, and there are so many things that would be impossible or impractical without it.

And yes, a lot of AI will be junk. Just like everything else. (cf. Sturgeon’s Law.)
I think social media would be better if there were separate servers for PVP and PVE, just like in WoW.

Let everyone who wants to troll, dunk, flame, and pile-on do so in their own space.

And then let the rest of us gain xp, form parties, collect achievements, or whatever, without being ganked.
What I’m getting from this is that days are bad for you and you should try to avoid them.
Garbage in, garbage out — and it may not take much garbage to do real harm! This is why data curation and model testing are so important.
🧪 NYU researchers show AI models can be easily poisoned with medical misinformation, increasing risks of false outputs. Their study also suggests strategies to intercept and mitigate harmful content. 🩺💻 #MLSky
Vaccine misinformation can easily poison AI – but there's a fix
Adding just a little medical misinformation to an AI model’s training data increases the chances that chatbots will spew harmful false content about vaccines and other topics
www.newscientist.com