Jed Brown
jedbrown.org
@jedbrown.org
Prof developing fast algorithms, reliable software, and healthy communities for computational science. Opinions my own. https://hachyderm.io/@jedbrown

https://PhyPID.org | aspiring killjoy | against epistemicide | he/they
Reposted by Jed Brown
🧵 Trump administration AI policy is widely described as deregulatory. This description is misleading. What's happening is not the absence of governance but its rearrangement--intensive state intervention operating through mechanisms we don't typically call regulation. www.science.org/doi/10.1126/...
The mirage of AI deregulation
One of the most interventionist approaches to technology governance in the United States in a generation has cloaked itself in the language of deregulation. In early December 2025, President Donald Tr...
www.science.org
January 15, 2026 at 7:23 PM
Indeed, though "AI" is more a culture than a well-defined technology. LLM use in manuscript preparation and "review" is a recent phenomenon, but the culture of grandiose claims and using "AI" terminology to displace prior work in the field is as old as the name, abetted by hype and FOMO of funders.
January 15, 2026 at 10:01 PM
The fields that are welcoming this have abdicated their professional responsibility. We should regard these metrics in the context of citation cartels abetted by publishers.
Senior academics organizing these conferences laid the groundwork for this denial of service attack, creating explicit policies to welcome LLM-generated "papers", to normalize LLM-generated "reviews", and partnered with tech companies in PR/lobbying campaigns.
www.theguardian.com/technology/2...
January 15, 2026 at 5:18 PM
@chalkbeat.org Readers need the context that *this* is what MagicSchool is: diluting the Holocaust and slavery, stripping meaning and passion from historical figures and their expression while giving an illusion of understanding. (Epistemic violence is inherent to how all of these products work.)
January 15, 2026 at 5:07 PM
Reposted by Jed Brown
Texas A&M decided to publicly cancel my Ethics class, and to share a false statement that I declined to provide information, which made it impossible for [them] to request an exemption. See for yourself whether this statement is true.

They are getting creative!
ETHICS IS NOW CANCELED AT TEXAS A&M

Statement from Dr. Leonard Bright....
January 14, 2026 at 10:12 PM
While capitulation can be rationalized (albeit contrary to everything known about autocracy) when under extreme pressure, the rampant unforced errors by so many institutions are telling.
January 15, 2026 at 1:52 AM
University IT offices should be aware of CrowdStrike's "no test plans and no quality assurance team" when deciding to mandate mass surveillance of faculty, staff, and students, entrusting CrowdStrike with the collection and processing of those records. And faculty should calibrate trust accordingly.
January 14, 2026 at 11:21 PM
@coattnygeneral.bsky.social Colorado SB25-288 should give you the tools to prosecute this behavior. The company is creating the CSAM and NCII itself, so it is not shielded by a Section 230 exemption.

oag.ca.gov/news/press-r...
Attorney General Bonta Launches Investigation into xAI, Grok Over Undressed, Sexual AI Images of Women and Children
Potential victims of xAI can file a complaint at oag.ca.gov/report OAKLAND — California Attorney General Rob Bonta today announced opening an investigation into the proliferation of nonconsensual sexu...
oag.ca.gov
January 14, 2026 at 8:17 PM
Reposted by Jed Brown
“If a mostly white community can push back on this project and get it stopped, it’s unacceptable that the next move is to fly under the radar in a rural Black community with even less transparency,” Black added.

www.canarymedia.com/articles/dat...
After a white town rejected a data center, developers eyed a Black…
Four million Americans live within 1 mile of a data center. The communities closest to them are “overwhelmingly” non-white.
www.canarymedia.com
January 14, 2026 at 6:11 PM
Third try is a charm, I guess.
January 14, 2026 at 5:48 PM
Reposted by Jed Brown
Friends, Claudette Colvin—the 15-year-old who refused to give up her seat on a Montgomery bus in March 1955 & joined the federal case against bus segregation that went to the Supreme Court — died today at age 86. But there are lots of myths and mis-impressions about her. A short corrective thread:
January 14, 2026 at 1:27 AM
Reposted by Jed Brown
I’m seeing lots of people on here misunderstand the purpose of ICE watch.

It’s de-escalation. And it’s grounded in the social science of violence. 🧵
January 13, 2026 at 5:37 PM
Today in "guardrails are a scam": Prompting about the death of Adam Raine shifted the text extruder into suicide coach mode.
January 13, 2026 at 4:42 AM
Economists are finally catching on to the obvious con. I wonder if the people (there are many) who correctly diagnosed the con years ago will ever be listened to.
January 11, 2026 at 1:04 AM
Reposted by Jed Brown
“We can no longer lend our credibility to an organization that has lost its integrity… each of us independently reached the decision to resign in protest of the actions of an administration that treats science not as a process for building knowledge, but as a means to advance its political agenda.”
The NIH has lost its scientific integrity. So we left
“We can no longer lend our credibility to an organization that has lost its integrity,” write four scientists and administrators who recently resigned from the NIH.
www.statnews.com
January 10, 2026 at 2:07 PM
Reposted by Jed Brown
I genuinely believe the way out of this political moment is to pick up the causes our oppressors want us to leave behind—movements like #MeToo and Black Lives Matter.

I was thrilled to get a chance to talk about that with @ctpublic.bsky.social. (And the other guest is @kattenbarge.bsky.social!)
After #MeToo, what has changed?
While #MeToo went viral in 2017, the Me Too movement has been around for 20 years. This hour, we explore the role social media can play for survivors and what, if anything, has changed.
www.ctpublic.org
January 9, 2026 at 6:50 PM
This preprint is a big caveat to (1) above. It suggests that plagiarism is common in LLM responses to organic prompts. If plagiarism detectors aren't flagging it, that may be because the passages are shorter or because the detectors aren't checking against the original content.
"Chatbots are routinely breaching the ethical standards that humans are normally held to."

A common question is how often organic prompting returns near-verbatim content in the responses. This preprint shows it's very common, especially with expository writing and code.

arxiv.org/abs/2411.10242
January 10, 2026 at 12:42 AM
"Chatbots are routinely breaching the ethical standards that humans are normally held to."

It is often asked how often organic prompting returns near-verbatim content in the responses. This preprint shows it's very common, especially with expository writing and code.

arxiv.org/abs/2411.10242
January 10, 2026 at 12:37 AM
Great contextualization of this work. When we let financial interests choose terminology and accept corporate testimony as though it were an honest and accurate depiction of the technology, we are perpetuating a lie to the public and abetting bad court rulings.
January 10, 2026 at 12:22 AM
Unsourced and improperly sourced claims are rampant, as seen in the deluge of slop papers, legal briefs, and government/Deloitte reports that people are constantly getting caught fraudulently trying to pass off as human work. And note that these are not the crime, but merely evidence of the crime.
January 9, 2026 at 4:40 AM
I think it's a bad question for informing decisions (like "what's the chance I get stopped for speeding in this school zone?"), but the answer is that we really don't know. Only a subset of organic LLM interactions are checked for that purpose and current checkers are fallible in many ways.
January 9, 2026 at 4:40 AM
We know that:
1. organic prompting for content that is routinely run through plagiarism detectors (which access a subset of the LLM's training data) does not frequently turn red, and
2. some prompting elicits extensive verbatim content.

This is a recipe for lulling people into complacency.
January 9, 2026 at 3:29 AM
Ghost authorship and paraphrased plagiarism are rarely detected/enforced without other evidence (contracts, confessions/bragging, other process records), but it's still a clear professional norm, while a lot of people want to normalize LLMs as somehow being an exemption card for such norms.
January 9, 2026 at 3:29 AM
There is no consistent procedure for assessing plagiarism. Journals and institutions have internal protocols, but it's a subjective standard and not a legal matter (no court, no jury; that's only for copyright infringement). But it's still misconduct if you don't get caught.
January 9, 2026 at 3:29 AM
If you trust an LLM's "summary" (it isn't really a summary), you may commit misconduct by misstating the sources' actual claims. If you take LLM output as a sort of fuzzy search/idea generator and track down original sources (don't trust LLM output), read them, and then write your own paper, that's fine.
January 9, 2026 at 3:29 AM