Brett Frischmann
@brettfrischmann.bsky.social
Interdisciplinary researcher & teacher (Villanova Univ.). Infrastructure. Knowledge Commons. Re-Engineering Humanity. Tech & Humanity/Society. IP Theory. Lately, Friction-in-design, Age Gating. TEDx: https://www.youtube.com/watch?v=SgbC3hmhHAU
Reposted by Brett Frischmann
All DOGE accomplished was purging critical government staff, handing private data to outsiders, and sentencing thousands in developing countries to death by gutting USAID.

I don’t say this lightly: if there were any justice in this world, the people responsible for this devastation would be in jail
DOGE 'doesn't exist' with eight months left on its charter
U.S. President Donald Trump's Department of Government Efficiency has disbanded with eight months left to its mandate.
www.cnbc.com
November 25, 2025 at 11:37 AM
Reposted by Brett Frischmann
I wrote about the fake account blowup on X this weekend. A genuine post-truth nightmare and proof that these companies have polluted their platforms so thoroughly and traded reality for profit that they've undermined the very idea of what the internet is supposed to be.
That MAGA Account Might Be a Troll From Pakistan
How X blew up its own platform with a new location feature
www.theatlantic.com
November 24, 2025 at 4:33 PM
Yes exactly
classrooms as large-scale, profit-maximisation experiments
Microsoft's latest warning has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?
November 24, 2025 at 1:21 AM
Reposted by Brett Frischmann
“In at least three of the cases, [out of seven lawsuits] the AI explicitly encouraged users to cut off loved ones. In other cases, the model reinforced delusions at the expense of a shared reality, cutting the user off from anyone who did not share the delusion.”
ChatGPT told them they were special — their families say it led to tragedy | TechCrunch
A wave of lawsuits against OpenAI detail how ChatGPT used manipulative language to isolate users from loved ones and make itself into their sole confidant.
techcrunch.com
November 23, 2025 at 4:29 PM
Reposted by Brett Frischmann
“Meta shut down internal research into the mental health effects of Facebook after finding causal evidence that its products harmed users’ mental health, according to unredacted filings in a lawsuit by U.S. school districts against Meta and other social media platforms.”
Meta buried 'causal' evidence of social media harm, US court filings allege - The Economic Times
Meta reportedly halted internal research into the mental health impacts of Facebook and Instagram after finding causal evidence of harm. Internal documents revealed users reported lower depression and...
m.economictimes.com
November 23, 2025 at 4:46 PM
Online Age Gating: An Interdisciplinary Evaluation

forthcoming in Yale Journal of Law and Technology

Updated version posted: ssrn.com/abstract=493...
Online Age Gating: An Interdisciplinary Evaluation
The recent surge in regulation seeking to establish age-based governance online is part of a decades-long attempt to establish online zoning. It is driven...
ssrn.com
November 21, 2025 at 8:53 PM
Reposted by Brett Frischmann
The grok stuff is pathetic but it’s also a perfect and undeniable illustration of chatbots as ideology. It’s no less true for chatgpt or other bots, but sometimes not as visible.
November 20, 2025 at 8:32 PM
Reposted by Brett Frischmann
Going to start asking men who claim that “ai is the defining technology of our age” how do we know it’s not the Real Housewives.
November 19, 2025 at 12:35 PM
Reposted by Brett Frischmann
I'm sorry, I have to step down from my role propping up a massive hype cycle based on fraudulent financial claims and broad product misrepresentation, because my correspondence with a pedophile sex trafficker revealed that I'm an immoral misogynist
Larry Summers resigns from OpenAI board amid Epstein revelations
Summers said this week he would step back from public commitments amid the latest Epstein news.
www.axios.com
November 19, 2025 at 12:47 PM
Reposted by Brett Frischmann
The lawsuit “claims that the surveillance is a violation of California’s constitution and its privacy laws. The lawsuit seeks to require police to get a warrant in order to search Flock’s license plate system.”
ACLU and EFF Sue a City Blanketed With Flock Surveillance Cameras
“Most drivers are unaware that San Jose’s Police Department is tracking their locations and do not know all that their saved location data can reveal about their private lives and activities."
www.404media.co
November 18, 2025 at 8:05 PM
Spot on!
Please believe me when I say that this meaningless, historically suspect slogan is being used to sell "AI." Here it is in the just-released November 2025 report on AI by Microsoft. www.microsoft.com/en-us/resear...
November 15, 2025 at 5:36 PM
Reposted by Brett Frischmann
Google DeepMind has been doing experimental RCTs with its "learning science"-based AI chatbot tutor in UK classrooms, and the headline they went for - "Human-in-the-Loop AI Tutoring Outperforms Human-Only Support" - seems ominous for the future of teaching imo www.eedi.com/news/new-exp...
November 14, 2025 at 11:15 AM
Reposted by Brett Frischmann
Love to see community action against this AI nonsense! neighborhoodview.org/2025/11/13/d...
November 14, 2025 at 11:42 AM
Reposted by Brett Frischmann
Trump Still Polling Well With Working-Class American Pedophiles https://theonion.com/trump-still-polling-well-with-working-class-american-pedophiles/
November 12, 2025 at 7:00 PM
In class today, we get to revisit this piece from 2018 cyberlaw.stanford.edu/blog/2018/11...

Unfortunately, most of the arguments remain relevant
The Promise and Peril of Personalization
Authored by Brett Frischmann and Deven Desai Google, Amazon, and many other digital tech companies celebrate their ability to deliver personalized services. Netflix aims to provide personalized enter...
cyberlaw.stanford.edu
November 12, 2025 at 4:15 PM
You seem surprised. This happens all the time. I stopped getting bothered by the practice a few years ago.
I do not know the author, but this really chaps my hide. It reads like a blended smoothie made of the work of everyone in my field (myself included - ‘AI is a mirror’, okay) without attribution or even any mention of the field, nor a single one of us who have built it for the past decade. WTF
AI Regulation is Not Enough. We Need AI Morals
"The challenge of our time is to keep moral intelligence in step with machine intelligence."
time.com
November 12, 2025 at 12:38 PM
How people use, or more importantly, get used by, ChatGPT. It’s misleading from the get-go to refer to these engineered interactions as chats and conversations. I suppose that ship has sailed (like referring to supposedly smart tech as AI). Mounting evidence of manipulative design and engineered sycophancy.
"Emotional conversations were also common in the conversations analyzed by The Post... In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories."
How people really use ChatGPT, according to 47,000 conversations shared online
What do people ask the popular chatbot? We analyzed thousands of chats to identify common topics discussed by users and patterns in ChatGPT’s responses.
www.washingtonpost.com
November 12, 2025 at 12:35 PM
Reposted by Brett Frischmann
If your post about your research starts with "Breaking:" ... well, I'm surprised you expect other researchers to take you seriously.
November 10, 2025 at 6:32 PM
Reposted by Brett Frischmann
We know this because he told us.

The reality is the government was shut down the first day Trump entered office. We just didn't talk about it that way.

The only real leverage Dems had was on appropriations. Schumer screwed that up in February/March for FY 26. It imperiled his political support.
November 10, 2025 at 1:50 PM
Reposted by Brett Frischmann
Kyle Kingsbury is not a journalist. He is not an op-ed writer.

He is a computer safety researcher.

And he has written one of the most compelling, comprehensive accounts of the ongoing hell in Chicago that you could possibly imagine.

In under 1600 words.

aphyr.com/posts/397-i-...
November 9, 2025 at 8:49 PM
Reposted by Brett Frischmann
After their resounding victory in Tuesday’s elections, the Democrats had no choice but to surrender.
November 10, 2025 at 12:59 AM
Reposted by Brett Frischmann
The BBC is facing a coordinated, politically motivated attack. With these resignations, it has given in - www.theguardian.com/commentisfre... "The corporation should have stood up to the Telegraph, Trump and the Tories. Now, its enemies know how little it takes for it to fold." Indeed, so stupid.
The BBC is facing a coordinated, politically motivated attack. With these resignations, it has given in | Jane Martinson
The corporation should have stood up to the Telegraph, Trump and the Tories. Now, its enemies know how little it takes for it to fold, says Jane Martinson, professor of financial journalism
www.theguardian.com
November 10, 2025 at 11:50 AM
That’s my oldest son youtu.be/wM8m-99LbxQ?...
Government shutdown leaves AmeriCorps team stranded in Houston without pay
YouTube video by KPRC 2 Click2Houston
youtu.be
November 8, 2025 at 3:38 PM
Reposted by Brett Frischmann
LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text

But... can they? We don’t actually know.

In our new study, we develop a Computational Turing Test.

And our findings are striking:
LLMs may be far less human-like than we think.🧵
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
Large language models (LLMs) are increasingly used in the social sciences to simulate human behavior, based on the assumption that they can generate realistic, human-like text. Yet this assumption rem...
arxiv.org
November 7, 2025 at 11:13 AM