Emily M. Bender
@emilymbender.bsky.social
35K followers · 570 following · 2.1K posts
Book: https://thecon.ai Web: https://faculty.washington.edu/ebender
Reposted by Emily M. Bender
charlotteclymer.bsky.social
"After a few moments of laughter..."

The No Kings organizers with a fun, tidy response to Mike Johnson's shenanigans.
Reposted by Emily M. Bender
emilymbender.bsky.social
Here's a rule of thumb: If "AI" seems like a good solution, you are probably both misjudging what the "AI" can do and misframing the problem.

>>
Comment by Tom Dietterich on a LinkedIn post, reading:

"You can't "test-in quality" in engineering; you can't "review-in quality" in research. We need incentives for people to do better research. Our system today assumes that 75% of submitted papers are low quality, and it is probably right (I'll bet it is higher). If this were a manufacturing organization, an 75% defect rate would result in bankruptcy. 

Imagine a world in which you could have an AI system check the correctness/quality of your paper. If your paper passed that bar, then it could be published (say, on arXiv). Subsequent human review could assess its importance to the field. 

In such a system, authors would be incentivized to satisfy the AI system. This will lead to searching for exploits in the AI system. A possible solution is to select the AI evaluator at random from a large pool and limit the number of permitted submissions. I imagine our colleagues in mechanism design can improve on this idea."

Original:
https://www.linkedin.com/feed/update/urn:li:activity:7381685800549257216/?commentUrn=urn%3Ali%3Acomment%3A(activity%3A7381685800549257216%2C7382628060044599296)&dashCommentUrn=urn%3Ali%3Afsd_comment%3A(7382628060044599296%2Curn%3Ali%3Aactivity%3A7381685800549257216)
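For concreteness, here is a minimal sketch (not from the thread; the pool members, cap, and function names are illustrative assumptions) of the mechanism the quoted comment gestures at: route each submission to a randomly chosen evaluator from a pool and cap the number of submissions per author.

    import random
    from collections import defaultdict

    # Illustrative sketch of the quoted proposal: random evaluator assignment
    # plus a per-author submission cap. Pool members and the cap are invented.
    EVALUATOR_POOL = ["evaluator_a", "evaluator_b", "evaluator_c"]
    MAX_SUBMISSIONS_PER_AUTHOR = 3

    submission_counts = defaultdict(int)

    def submit_paper(author, paper):
        """Assign a random evaluator, or refuse once the author hits the cap."""
        if submission_counts[author] >= MAX_SUBMISSIONS_PER_AUTHOR:
            raise RuntimeError(f"{author} has reached the submission limit")
        submission_counts[author] += 1
        # Random assignment is meant to make it harder to tailor a paper
        # to one evaluator's quirks, i.e., to game the system.
        return random.choice(EVALUATOR_POOL)

    print(submit_paper("author_x", "paper_1"))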
emilymbender.bsky.social
In other words, CS has culture problems. (What else is new?) I don't have quick fixes to those problems, but those are the problems that I think should be addressed.
emilymbender.bsky.social
My guess is that it is a combination of pressures to publish in quantity (bean-counting much?) combined with field-level norms that set the MPU (minimum publishable unit) too small.

>>
emilymbender.bsky.social
To solve that problem, rather than reaching for imagined "AI" as a band-aid, we could ask how it came to be.

>>
emilymbender.bsky.social
The problem here seems to be publishing venues getting overrun with shoddy submissions.

>>
Reposted by Emily M. Bender
judiciarydems.senate.gov
BREAKING: Sens. DURBIN, DUCKWORTH were just denied entrance to the Broadview ICE Facility in Illinois, unable to conduct their constitutional role of oversight.
emilymbender.bsky.social
I wish @thedailyshow.com would stop talking to Doomers and TESCREALists and instead interview someone like Dr. Joy Buolamwini, @timnitgebru.bsky.social , @ruha9.bsky.social , @mmitchell.bsky.social , @alexhanna.bsky.social or me :)
emilymbender.bsky.social
This.
timnitgebru.bsky.social
One side, "AI 2027" is led by a bunch of privileged white people working at the companies causing the other side (AI con/Empire of AI).

One side, the "AI is gonna be so powerful if you don't let US be the AGI builders" is making money from stealing data, killing the environment & exploiting labor.
emilymbender.bsky.social
If your invited keynote speaker works for a company started because OpenAI wasn't basing its work enough on science fiction and said speaker uncritically cites TESCREAList, fictional work masquerading as science, you might not have organized a serious academic gathering.
emilymbender.bsky.social
I mean, I guess if people are going around spouting bullshit about "reasoning machines" they probably would feel insulted when someone calls it out. But that's a them problem.
emilymbender.bsky.social
New favorite octopus fact
wordsmithgetxo.bsky.social
In an old ad for the board game Scattergories in Spain, a player was shown flouncing out while another said “OK, we’ll accept ‘octopus’ as a pet”.
“Aceptamos pulpo” has now entered the language, meaning “that’s a bit of a stretch, but let’s go with it just for the sake of argument”.
emilymbender.bsky.social
No, I don't "AI" represents a useful or essential skill set. What's needed is the ability to think critically about automation.

You might find some useful arguments in our book (w/ @alexhanna.bsky.social )

thecon.ai
THE AI CON
How to Fight Big Tech's Hype and Create the Future We Want
emilymbender.bsky.social
Funny typo, dude. Also, see my previous reply to you.
emilymbender.bsky.social
Maybe check who you're talking to before jumping in with the mansplaining.