Emily M. Bender
@emilymbender.bsky.social
But the number of people who seem to think that we need some kind of empirical study to support the claim that students cheat out of desperation, but not for the claim that students cheat because they're lazy or whatevs, is kind of astonishing to me.
December 9, 2025 at 8:28 PM
For the purposes of having a classroom environment that is actually conducive to learning, I think it's critical that instructors see students as being there with the goal of learning --- and avoid setting ourselves up as cops.

>>
December 9, 2025 at 8:27 PM
I'm sorry to hear that. That's malpractice and needs to be treated as such.
December 9, 2025 at 7:43 PM
Would you like to be asked if you are a bot?
December 9, 2025 at 7:14 PM
But somehow that post has attracted a lot of attention from people who seem to want to believe that students, by their nature, cheat, and that any statement to the contrary is inadmissible unless backed up by empirical studies.
December 9, 2025 at 7:08 PM
The grounds for the statement are that I have empathy for my students, and believe that they are interested in learning and wouldn't take the shortcuts that cut off learning if they weren't under pressure.

>>
December 9, 2025 at 7:08 PM
Against that background, the only legit use for university administrators is, as I said, as a contrast-dye test.
December 9, 2025 at 6:44 PM
I'm not sure why two separate posters have asked me the same aggressive question in response to that tweet.

But the logic is: ChatGPT et al. are detrimental to the project of education. Education administrators should not bow to pressure to find use cases.

www.youtube.com/live/l-OWi6V...
Digital Learning Week 2025 (YouTube video by UNESCO)
December 9, 2025 at 6:43 PM
Did you read the post previous to the one you are responding to? It's important context. I was talking to university administrators who seemed under pressure to use/advocate for the use of these new "tools" (ahem, products).
December 9, 2025 at 6:30 PM
You can find all of the resources for the Data Statements for NLP project here:

techpolicylab.uw.edu/data-stateme...

But it's deliberately set up not to be automated. Part of the point is actually engaging with the data.
Data Statements | Tech Policy Lab
December 9, 2025 at 5:25 PM
p.s. for Anil -- that was directed at the troll, not you.
December 9, 2025 at 3:04 PM
This will be my last response before muting you:

You are exhibiting troll behavior, not candid discussion.

And: the friction is where the learning happens. There are no shortcuts. LLMs are harmful to the project of education, period.
December 9, 2025 at 3:03 PM
Yes, agreed that we have to come at this from all angles. But I think if we're going to change public sentiment, in addition to leading with empathy for the situations that lead to LLM use in the first place, we also can't come in with "using a little bit is okay, actually".
December 9, 2025 at 3:02 PM
Being a linguist here: "makes mistakes" is anthropomorphizing. Making mistakes is something people do. We have accountability for our mistakes, too.

System output can be incorrect, can be an error, etc, but the system isn't "making a mistake".
December 9, 2025 at 3:00 PM
Do you believe that the purpose of education is grades?
December 9, 2025 at 2:45 PM
Warning labels are weak sauce, honestly. We need stronger regulation.
December 9, 2025 at 2:43 PM
I think it is crucial to hold a vision of e.g. information access where OpenAI, Google, Meta, etc in fact are not granted the right to push their technologies of isolation (h/t @hypervisible.blacksky.app ) into our every sphere of interaction.
December 9, 2025 at 2:43 PM
I think we're about 80% on the same page, actually.

When you say "millions of people using these tools every day" you are locating the source of that use primarily with the users, and not with the marketing, loss-leader tactics, etc of the companies behind the products.

>>
December 9, 2025 at 2:41 PM
And I think an interesting angle here is that the problems are long-standing. The illusory solution is 3 years old (or less, in some cases). So we can ask: What did you do three years ago? How can we build on that?
December 9, 2025 at 2:40 PM
That's actually a separate point from what I was making. ChatGPT is a product, not a tool. It is controlled by and benefits OpenAI, not the users.
December 9, 2025 at 2:39 PM
It is possible to acknowledge and validate the need (e.g. for accessible medical information, presented without condescension) without validating the use of the product for those services -- i.e. without confirming big tech's marketing.
December 9, 2025 at 2:37 PM
Agreed that we need to approach people with empathy and find ways to communicate the message that work. But what I'm saying is that "these products are dangerous to you & those you love" has to be part of the message.

>>
December 9, 2025 at 2:37 PM