Arthur Holland Michel
banner
writearthur.bsky.social
Arthur Holland Michel
@writearthur.bsky.social
310 followers 700 following 280 posts
Friendly reminders about spooky technologies. Words in Wired, The Economist, The Atlantic, and others. Author of “Eyes in the Sky”
Posts Media Videos Starter Packs
Holy hell. An internal Meta AI safety policy document sated that “It is acceptable [for a chatbot] to engage a child in conversations that are romantic or sensual.”

Meta only removed that line (and others like it) after Reuters contacted them for comment.

www.reuters.com/investigates...
A flirty Meta AI bot invited a retiree to meet. He never made it home.
Impaired by a stroke, a man fell for a Meta chatbot originally created with Kendall Jenner. His death spotlights Meta’s AI rules, which let bots tell falsehoods.
www.reuters.com
Maybe this won’t come as a surprise but this tech is built, in part, on models made by OpenAI (CLIP) and Meta (DINOv2). So there’s that, too.
I’ve spent a decade reminding people that for all their terrible powers of intrusion, drones still can’t recognize your face. Well, they could do so soon.
Someone needs to make a manual on the do's and don'ts of writing about predictive policing.

eg. never say something like "spot future killers." There's no such thing as a future killer.
Aaargh, deleted that pre crime post because article was from 2022. I was looking for this one from today, but so far it’s only on Apple News.
Imagine if a college in 2010 announced that it was going to accept the reality that a bunch of its students were paying people from Craigslist to write their essays.
The new definition of insanity is training a chatbot on the whole of the Internet and expecting it not to repeat the Web's ugly biases.
Study finds A.I. LLMs advise women to ask for lower salaries than men. When prompted w/ a user profile of same education, experience & job role, differing only by gender, ChatGPT advised the female applicant to request $280K salary; Male applicant=$400K.
thenextweb.com/news/chatgpt...
ChatGPT advises women to ask for lower salaries, study finds
A new study has found that large language models (LLMs) like ChatGPT consistently advise women to ask for lower salaries than men.
thenextweb.com
Reposted by Arthur Holland Michel
For this week's @economist.com I investigate how heavy AI use can degrade our cognitive abilities. The science on this question is still very new. But the evidence so far is troubling, to put it mildly.

www.economist.com/science-and-...
Does AI make you stupid?
Creativity and critical thinking might take a hit. But there are ways to soften the blow
www.economist.com
I'm trying to imagine what we would have thought in 2016 if the DoD had given Microsoft $200 million for Tay, the chatbot that went full bigot on Twitter within hours of being launched, a mere week after the debacle. Our heads would have exploded.
Reposted by Arthur Holland Michel
Media outlets can't pivot to AI to save themselves. It's not a business strategy and it's not going to work. The only path forward is for journalists to lean into their humanity, to do things AI can't, and to make clear they are writing for people, not algorithms:

www.404media.co/the-medias-p...
The Media's Pivot to AI Is Not Real and Not Going to Work
AI is not going to save media companies, and forcing journalists to use AI is not a business model.
www.404media.co
Those who predicted the AI revolution decades ago were right about a lot of things but they never counted on just how racist a lot of these machines would turn out to be
If you're wondering why people who work on AI ethics look a little tired, maybe it's because they've spent a decade watching AI do really racist stuff. They can be forgiven for being a bit grumpy.
Reposted by Arthur Holland Michel
My takeaway from the Grok white genocide debacle is that it has been nine years since Microsoft Tay and tech companies still have zero idea how to get their chatbots say what they want them to say.
I find this a little hard to believe. Unless by "serious defense officials" the author means people with very limited understanding of where the tech was at the time and the extremely limited ways it was, at that time, being used.
It’s dangerous for LLMs to generate terms such as “I understand your point,” “my intention was” and “I apologize.” None of these words represent the actual reason the system did what it did.
Friendly reminder that 99.999% of what tech leaders say about AI is not expertise. It's content.