wendy norris
@wendynorris.bsky.social
Asst Prof | Ethical Data Science
Ex-investigative reporter and editor
Crisis informatics nerd
Makes good trouble
Tell your dog I said "hi"
Reposted by wendy norris
I call this a technology of coercion because it is almost entirely an upper-elite phenomenon being forced on unaccepting organizations, which frames how I approach "might as well learn how to use it" discourses.
A study by Dayforce shows 87% of executives use AI for work, compared to 57% of managers and just 27% of employees.

I think this explains the massive disconnect we see in how CEOs talk about AI versus everyone else. It also raises the question of how useful it truly is for frontline work.
Execs are embracing AI more than their employees are, new research suggests
Research from HR software company Dayforce suggests that executives are leaning into AI far more than their employees.
www.businessinsider.com
November 29, 2025 at 3:43 PM
Reposted by wendy norris
This blog post by Helen Toner (well-known by some for a stint on the OpenAI board) is *really really* interesting because it is the first thing I've read by someone in Frontier AI that starts to get to grips with sociotechnical concerns open.substack.com/pub/helenton...
Taking Jaggedness Seriously
Why we should expect AI capabilities to keep being extremely uneven, and why that matters
open.substack.com
November 29, 2025 at 6:32 AM
The data rights hill that I will die on:

If content (computer-generated or otherwise) is predicated on theft, the resulting output is not ethical, legal or moral. Full stop.
November 29, 2025 at 4:47 PM
Reposted by wendy norris
I joined @cwarzel.bsky.social on the Galaxy Brain Podcast to explain some of the reasons why America, and a lot of other Western democracies, are sliding into authoritarianism and why it's probably inevitable
www.youtube.com/watch?v=HvFh...
America’s Slide Toward Simulated Democracy with Eliot Higgins
YouTube video by The Atlantic
www.youtube.com
November 28, 2025 at 4:30 PM
Introduce yourself with five concerts you've seen — Part 2

The Bridge School Benefit 2010:
Neil Young
Buffalo Springfield
Elton John
Leon Russell
Elvis Costello
Grizzly Bear
Jeff Bridges
Kris Kristofferson
Modest Mouse
Neko Case
Pearl Jam
Ralph Stanley
T Bone Burnett
November 29, 2025 at 2:05 AM
Introduce yourself with five concerts you've seen —

Queen (Night at the Opera tour!)
Talking Heads
Sarah McLachlan
Boz Scaggs
KD Lang
November 29, 2025 at 1:40 AM
Reposted by wendy norris
How come the policy obsession with "evidence-based" policy and technology in education of at least the last 20 years has completely evaporated with "AI"? ...
November 29, 2025 at 12:37 AM
Reposted by wendy norris
don’t cheapen your slides! there *are* places to get free, human-created, rich, visual content on this bitch of an internet, some on this list: livelaugh.blog/posts/non-ai...
November 28, 2025 at 4:40 PM
Reposted by wendy norris
I don’t know if anyone else notices or cares, but when I see a presentation in which the speaker uses obviously generated-AI images to illustrate their slides, it makes me immediately less confident in whatever other content they’re presenting.
November 28, 2025 at 3:07 PM
I know a lot of people like to dump on Facebook. But let me make an IRL argument that, in these particular times, the platform is a counterculture goldmine.

Come along for a story about how a basic community page can shift to hyperlocal news sharing and mutual aid-centered organizing to reach everyday folks.
November 28, 2025 at 4:37 PM
Friendly reminder to you Black Friday shoppers:

AI is for surveillance
Chatbots are for surveillance
Wearables are for surveillance

Your data/clicks/behavior/content is collected and analyzed to be used against you in decisions re: hiring, insurance risk, credit cards, loans, and law enforcement.
I wrote about Oura's security and privacy practices earlier this year for this.weekinsecurity.com, and found:

• Oura rings *don't* end-to-end encrypt users' health data;
• Oura *can* access its users' data;
• Oura told me that the company *has* received U.S. government demands for users' data.
November 28, 2025 at 3:32 PM
I teach computational data science, not history, but I found the examples helpful and extensible to my own class prep and formative assessments.

Thanks for sharing!
Excited to launch, with @callingdrjones.bsky.social, 'History in Practice' - an informal space to collectively share, reflect, and collaborate on all things related to history teaching in universities

Some excellent pieces to begin with, please do get involved!

www.history-uk.ac.uk/history-in-p...
History in Practice
‘Doing the Readings’ – Will Pooley ‘Escaping the Lecture: Using game-based learning to engage History students’ – Rebecca Andrew and Sam Chadwick ‘Using Fo…
www.history-uk.ac.uk
November 28, 2025 at 3:22 PM
Reposted by wendy norris
“When we critique AI, we should do so with intellectual honesty and in a principled way. Utmost care is needed to avoid ethics washing, greenwashing, and generally — what we dub — critical washing.”
“Reflecting on the harms of AI is not itself harm reduction. It may even contribute to rationalizing, normalizing, and enabling harm. Critical reflection without appropriate action is thus quintessentially critical washing.”

Suarez et al. (2025, par. 7)

zenodo.org/records/1567...
Critical AI Literacy: Beyond hegemonic perspectives on sustainability
How can universities resist being coopted and corrupted by the AI industries’ agendas? Originally published here: https://rcsc.substack.com/p/critical-ai-literacy-beyond-hegemonic
zenodo.org
November 27, 2025 at 11:56 PM
Reposted by wendy norris
In light of record submission rates and a large volume of AI-generated slop, SocArXiv recently implemented a policy requiring ORCIDs linked in the OSF profile of submitting authors, and narrowing our focus to social science subjects. Today we are taking two more steps:
/1
November 27, 2025 at 2:54 PM
Reposted by wendy norris
Not one mention of eugenics.
November 27, 2025 at 4:01 AM
The problem with returning a "single answer" is that an AI overview is unable to grapple with fundamentally human ways of knowing: uncertainty and ambiguity.

Smoothing out these necessary frictions with Gen AI chatbot outputs and AI overviews from search retrieval is dangerously reductive.
We need more research and articles like this about how people are actually using LLMs and chatbots, instead of ones wishcasting in either direction, so we can make informed decisions about how to help people make better sense of the world where they are.
The ChatGPT effect: In 3 years the AI chatbot has changed the way people look things up. By @debmsu.bsky.social

"shift of the tool people reach for first for finding information is at the heart of how ChatGPT has changed everyday technology use."

theconversation.com/the-chatgpt-...
November 26, 2025 at 10:21 PM
Reposted by wendy norris
BROOKINGS: "Our survey results lend themselves" to some conclusions:

"... professional AI use is far from ubiquitous and many respondents expressed skepticism that it would be as revolutionary as some experts expect."

@brookings.edu
www.brookings.edu/articles/how...
How are Americans using AI? Evidence from a nationwide survey | Brookings
Brookings scholars Alikhani, Harris, and Patnaik break down the latest evidence on how Americans use AI, both personally and at work
www.brookings.edu
November 26, 2025 at 2:42 PM
When is the last time that you saw a hypothetical situation described in a Terms of Use agreement? Doesn't this admission provide evidence that OpenAI knew that people were seeking self-harm information from ChatGPT *and* that the LLM would return dangerous outputs with no apparent guardrails?
Additionally, OpenAI argues it's not liable because Raine, by using ChatGPT for self-harm, broke its terms of service.
November 26, 2025 at 4:52 AM
It's been a minute, but this former news editor can still sniff out the unmistakable scent of desperate pre-Black Friday press releases extolling the virtues of some tech gadget.

This week, the magic word that makes assignment editors weak in the knees is "conscious".
can’t fucking catch a breath

make it stop
November 25, 2025 at 11:05 PM
If you come for Chotiner you best not miss
Every American should have this in their wallet:
November 25, 2025 at 7:23 PM
Reposted by wendy norris
Deloitte used AI without doing any fact-checking on the information it provided, making "recommendations" on Newfoundland & Labrador's healthcare system using made-up sources and papers.

I hope this constitutes contractual breach and the province can claw back the $1.6M for the "analysis" they got.
Deloitte just got caught again citing fabricated and potentially AI-generated research—this time in a million-dollar report for a Canadian provincial government | Fortune
In a healthcare report aimed to address a nurse and doctor shortage, Deloitte cited several fake studies with real researchers’ names attached.
fortune.com
November 25, 2025 at 2:04 PM
Reposted by wendy norris
Sometimes I think it’s going to be the librarians who will save us all.
November 25, 2025 at 1:47 AM
Reposted by wendy norris
Grateful to The Verge for publishing my essay on why large-language models are not going to achieve general intelligence nor push the scientific frontier.

www.theverge.com/ai-artificia...
Is language the same as intelligence? The AI industry desperately needs it to be
The AI boom is based on a fundamental mistake.
www.theverge.com
November 25, 2025 at 12:49 PM
Reposted by wendy norris
I will add the following: our students lack the research skills required to audit an LLM essay for errors. They don't arrive on campus with these skills; we teach them over four long years. So throwing freshmen in the deep end and saying "swim your way to a shore of rectitude" is folly.
November 24, 2025 at 1:23 PM
Reposted by wendy norris
I don't think I've ever disagreed and agreed more strongly with a piece, seesawing from one paragraph to the next.

Will come back to dissect.

#GiftLink #GiftArticle
America’s Children Are Unwell. Are Schools Part of the Problem?
www.nytimes.com
November 24, 2025 at 11:42 AM