Jed Brown
@jedbrown.org
460 followers 580 following 330 posts
Prof developing fast algorithms, reliable software, and healthy communities for computational science. https://hachyderm.io/@jedbrown https://PhyPID.org | aspiring killjoy | against epistemicide | he/him
jedbrown.org
Everything is moving so fast, we need to accept papers generated by LLMs based on "reviews" generated by LLMs. But don't worry, it's all Rigorous and Responsible.

Deeply unserious.
aaai.org/aaai-launche...
Cutting-Edge Methods with Rigorous Safeguards

The pilot employs state-of-the-art methodologies in the responsible deployment of LLM technology, including:

    Multi-step reasoning processes at inference time
    Web search capabilities as tools in the reasoning chain
    Rigorous checks for proper data source attribution
    Comprehensive monitoring and evaluation of LLM contributions
jedbrown.org
Also, there is no company advertising all-human bad scholarly practice as the future of scholarship and jobs. Educators can do a better job at teaching critical scholarship skills, but at no point has manual bad scholarship been taught or promoted. It was human nature: cutting corners, without pride.
jedbrown.org
It does mimic (somewhat common) bad scholarly practice to an extent; however, a person just reading an abstract is still epistemically better. People actually interpret meaning and can notice adjacent claims that may or may not be consistent with their understanding (prompting more thought/reading).
jedbrown.org
We also have to recognize the context of "inevitability" discourse and outrageous claims. The METR study, being a statement against interest and pretty good *relative* to other citable studies, is useful to start a more sober conversation and reconsider our null hypotheses.
jedbrown.org
Some of the studies are expensive, federal funding agencies have a lot of FOMO, and academic incentives are not great for that kind of work. Meanwhile, industry can shut off their own such studies at the drop of a hat (or stop publishing them).
jedbrown.org
It's a societal vulnerability that significant amounts of "AI" critique is coming from industry and industry-friendly sources (Apple on illusion of reasoning, Microsoft on medical benchmark brittleness, Meta on delimiter brittleness, METR, etc.). They're notable as statements against interest.
Reposted by Jed Brown
olivia.science
important on LLMs for academics:

1️⃣ LLMs are usefully seen as lossy content-addressable systems

2️⃣ we can't automatically detect plagiarism

3️⃣ LLMs automate plagiarism & paper mills

4️⃣ we must protect literature from pollution

5️⃣ LLM use is a CoI

6️⃣ prompts do not cause output in authorial sense
5 Ghostwriter in the Machine
A unique selling point of these systems is conversing and writing in a human-like way. This is eminently understandable, although wrong-headed, when one realises these are systems that essentially function as lossy content-addressable memory: when input is given, the output generated by the model is text that stochastically matches the input text. The reason text at the output looks novel is because by design the AI product performs an automated version of what is known as mosaic or patchwork plagiarism (Baždarić, 2013) — due to the nature of input masking and next token prediction, the output essentially uses similar words in similar orders to what it has been exposed to. This makes the automated flagging of plagiarism unlikely, which is also true when students or colleagues perform this type of copy-paste and then thesaurus trick, and true when so-called AI plagiarism detectors falsely claim to detect AI-produced text (Edwards, 2023a). This aspect of LLM-based AI products can be seen as an automation of plagiarism and especially of the research paper mill (Guest, 2025; Guest, Suarez, et al., 2025; van Rooij, 2022): the “churn[ing] out [of] fake or poor-quality journal papers” (Sanderson, 2024; Committee on Publication Ethics, …).

[…] Either way, even if the courts decide in the favour of companies, we should not allow these companies with vested interests to write our papers (Fisher et al., 2025), or to filter what we include in our papers. Because it is not the case that we only operate based on legal precedents, but also on our own ethical values and scientific integrity codes (ALLEA, 2023; KNAW et al., 2018), and we have a direct duty to protect, as with previous crises and in general, the literature from pollution. In other words, the same issues as in previous sections play out here, where essentially now every paper produced using chatbot output must declare a conflict of interest, since the output text can be biased in subtle or direct ways by the company who owns the bot (see Table 2).

Seen in the right light — AI products understood as content-addressable systems — we see that framing the user, the academic in this case, as the creator of the bot’s output is misplaced. The input does not cause the output in an authorial sense, much like input to a library search engine does not cause relevant articles and books to be written (Guest, 2025). The respective authors wrote those, not the search query!
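To make the excerpt's analogy concrete, here is a minimal sketch of my own (an illustration only, not code from the paper and not how LLMs are actually implemented): a toy "lossy content-addressable" store that retrieves the stored text most similar to a query and splices fragments of its best matches together, so the result looks novel while being assembled from what was stored.

```python
# Toy illustration only (assumed names, not from the quoted paper): a lossy
# content-addressable lookup that recombines fragments of its best matches.
from difflib import SequenceMatcher

CORPUS = [
    "the cat sat on the mat",
    "a dog slept on the rug",
    "the cat chased the dog around the rug",
]

def similarity(a: str, b: str) -> float:
    # Surface similarity between query and stored text (the content addressing).
    return SequenceMatcher(None, a, b).ratio()

def lossy_recall(query: str, k: int = 2) -> str:
    # Rank stored passages by similarity to the query, then splice the first
    # half of each of the top-k matches together (the "mosaic" step).
    ranked = sorted(CORPUS, key=lambda s: similarity(query, s), reverse=True)[:k]
    fragments = [s.split()[: max(1, len(s.split()) // 2)] for s in ranked]
    return " ".join(word for frag in fragments for word in frag)

print(lossy_recall("where did the cat sit"))
```

The point of the analogy is that the query addresses and recombines existing content; it does not author it.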
Reposted by Jed Brown
sifill.bsky.social
We have the timidity of countless corporate and university GCs to thank for the fact DEI is treated as though it were illegal.
itsafronomics.bsky.social
lol gentle reminder that DEI is not illegal. Supporting marginalized groups is not illegal. Anyone reneging on support is making a CHOICE to do so.
Reposted by Jed Brown
irisvanrooij.bsky.social
“Deloitte “misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent,” Pocock told Australian Broadcasting Corp. “I mean, the kinds of things that a first-year university student would be in deep trouble for.””

👀

fortune.com/2025/10/07/d...
Deloitte was caught using AI in $290,000 report to help the Australian government crack down on welfare after a researcher flagged hallucinations | Fortune
The updates “in no way impact” the report’s findings and recommendations, the Big Four firm said.
Reposted by Jed Brown
michae.lv
Does your university have a contract with Grammarly? Write to the decision-maker asking if they think the university should be paying for a tool that is fast integrating features that can only be used for academic misconduct and cognitive offloading, and request they drop the contract.
jedbrown.org
It is not "attribution and sourcing" to generate post-hoc citations that have not been read and did not inform the student's writing. Those should be regarded as fraudulent: artifacts testifying to human actions and thought that did not occur.
www.theverge.com/news/760508/...
For help with attribution and sourcing, Grammarly is releasing a citation finder agent that automatically generates correctly formatted citations backing up claims in a piece of writing, and an expert review agent that provides personalized, topic-specific feedback. Screenshot from Grammarly's demo of inserting a post-hoc citation.
https://www.grammarly.com/ai-agents/citation-finder
jedbrown.org
To be clear, the original crime is lying about the diligence and cognitive process that are claimed to underlie the report. A fake citation is secondary, and most significant as circumstantial evidence of the first.

There is now an industry around obfuscating the evidence.
jedbrown.org
Imagine being able to get caught doing fraud and simply clean up the evidence, avoid reputational harm, keep most of the money, and continue to influence government in a way that most honest actors could only dream of.
“The updates made in no way impact or affect the substantive content, findings and recommendations in the report,” Deloitte wrote.
jedbrown.org
One can interpret this brittleness as (a) an embarrassing property to be concealed by training/prompting, or (b) a failure of model validation (the benchmark score doesn't assess what it is implied to assess), in which case one either restricts the application space or goes back to the drawing board. Science demands the latter.
Figure 1: One can manipulate rankings to put any model in the lead by varying the single delimiter character. On the left, we show the delimiter used to separate examples in common evals with few-shot examples such as MMLU. On the right, we show model rankings based on MMLU performance as the example delimiter varies, with each column corresponding to a different ranking.
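For readers unfamiliar with the setup, a minimal sketch of what varies in such an evaluation (my own illustration, assuming a generic MMLU-style few-shot harness, not the paper's actual code): the worked examples and the question are identical, and only the delimiter string joining them changes.

```python
# Hypothetical few-shot prompt builder (illustrative only): the content is fixed
# and only the delimiter between examples varies, yet rankings can shift with it.
EXAMPLES = [
    ("Q: What is 2 + 2?\nA:", " 4"),
    ("Q: What is the capital of France?\nA:", " Paris"),
]

def build_prompt(question: str, delimiter: str) -> str:
    # Join the worked examples with the chosen delimiter, then append the query.
    shots = delimiter.join(q + a for q, a in EXAMPLES)
    return shots + delimiter + question

for delim in ["\n\n", "\n", " ## ", " || "]:
    prompt = build_prompt("Q: What is 3 + 5?\nA:", delim)
    print(repr(delim), "->", repr(prompt))
```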
Reposted by Jed Brown
jedbrown.org
He hasn't been subtle about it.
blacksky.community/profile/did:...
letsgomathias.bsky.social
Jack collaborated with neo-Nazi twins to make a documentary. He was a fan of white supremacist Richard Spencer. He has tweeted 1488, the alphanumeric code for Heil Hitler. He wrote an unreadable anti-antifa book. Last year he wrote a book abt the left called “Unhumans.” That he’s now speaking here…
premthakker.bsky.social
From Donald Trump's Roundtable on Antifa just now —
"Antifa has been around in various iterations for almost 100 years in some instances, going back to the Weimar Republic in Germany."
— special guest Jack Posobiec
jedbrown.org
Just like the segregationists who filled community swimming pools with concrete.
Asked what percentage of children she imagines should be in public schools going forward, Justice, who is now with The Heritage Foundation’s political advocacy arm, told ProPublica: “I hope zero. I hope to get to zero.”
Reposted by Jed Brown
molly.wiki
New research from AWU/CWU/Techquity on AI data workers in North America. “[L]ow paid people who are not even treated as humans [are] making the 1 billion dollar, trillion dollar AI systems that are supposed to lead our entire society and civilization into the future.”
cwa-union.org/ghost-worker...
We identify four broad themes that should concern policymakers:

    Workers struggle to make ends meet. 86% of surveyed workers worry about meeting their financial responsibilities, and 25% of respondents rely on public assistance, primarily food assistance and Medicaid. Nearly two-thirds of respondents (66%) report spending at least three hours weekly sitting at their computers waiting for tasks to be available, and 26% report spending more than eight hours waiting for tasks. Only 30% of respondents reported that they are paid for the time when no tasks are available. Workers reported a median hourly wage of $15 and a median workweek of 29 hours of paid time, which equates to annual earnings of $22,620.

    Workers perform critical, skilled work but are increasingly hamstrung by lack of control over the work process, which results in lower work output and, in turn, higher-risk AI systems. More than half of the workers who are assigned an average estimated time (AET) to complete a task felt that AETs are often not long enough to complete the task accurately. 87% of respondents report they are regularly assigned tasks for which they are not adequately trained.

    With limited or no access to mental health benefits, workers are unable to safeguard themselves even as they act as a first line of defense, protecting millions of people from harmful content and imperfect AI systems. Only 23% of surveyed workers are covered by health insurance from their employer.

    Deeply involved in every aspect of building AI systems, workers recognize the wide range of risks that these systems pose to themselves and to society at large. Fifty-two percent of surveyed workers believe they are training AI to replace other workers’ jobs, and 36% believe they are training AI to replace their own jobs. 74% were concerned about AI’s contribution to the spread of disinformation, 54% concerned about surveillance, and 47% concerned about the use of AI to suppress free speech, among other issues.
jedbrown.org
Indeed, and the fediverse has lots of experience with similar moderation and federation issues. It limits what a PDS can provide, but choice of PDS is still your choice of trust. In the past few months, the Blacksky team has consistently shown courage and principle while Bluesky has disappointed.
jedbrown.org
@hyraemous.and.camera You can keep a custom domain (choose *.myatproto.social or whatever during migration, then Settings -> Profile -> Handle; no DNS changes are needed).

Choice of handle has nothing to do with moderation.

Choice of PDS is about trust in moderation and infrastructure/data.
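To illustrate why no DNS changes are needed, a sketch of my own (not Bluesky tooling): in atproto, a handle is just a pointer to a DID, verified either through a DNS TXT record at _atproto.<handle> or through the domain's /.well-known/atproto-did endpoint; migrating to a new PDS keeps the same DID, so the existing pointer stays valid.

```python
# Minimal handle-to-DID resolution via the HTTPS well-known method (assumes the
# domain serves this file; some handles use the _atproto DNS TXT record instead).
import urllib.request

def resolve_handle(handle: str) -> str:
    # The endpoint returns the account's DID as plain text, e.g. "did:plc:...".
    url = f"https://{handle}/.well-known/atproto-did"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    # The DID printed here does not change when the account moves to another PDS.
    print(resolve_handle("example.com"))  # hypothetical handle
```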
jedbrown.org
The pro-AI people evidently haven't been reading the Journal of Marketing. blacksky.community/profile/did:...
jedbrown.org
This appears in Journal of Marketing and so some of the writing is 🙃 dystopian, suggesting that an effective strategy is to intentionally promote misconceptions and keep the low-AI-literacy audience ignorant. I prefer to read that as a warning rather than an instruction manual.

doi.org/10.1177/0022...
Our results suggest that as people become more AI literate over time, general AI receptivity may decrease. Thus, our findings suggest that until capability considerations outweigh AI receptivity fueled by perceptions of AI as magical, there may be unintended consequences of policy makers’ efforts to educate the public about AI. Of course, other factors—such as shifting norms and expectations about AI adoption or improvements in AI capabilities—may affect and change the nature of the relationship between literacy and receptivity over time. Future research could examine the lower literacy–higher receptivity link longitudinally.

Further, the results of Study 7 suggest that there are opportunities to target not only those with lower AI literacy but also those with higher AI literacy, as long as these two groups of consumers are targeted differently. Specifically, our findings suggest that marketing efforts aiming to make AI more accessible to a broader audience by demystifying how AI works may inadvertently reduce consumers’ receptivity toward AI by making it seem less magical. Thus, marketing efforts targeting low-AI-literacy consumers may benefit from promoting an aura of magic around AI by emphasizing how the AI powering the respective product or service emulates human characteristics. Conversely, marketing efforts targeting high-AI-literacy consumers may benefit from highlighting how their AI-based products and services execute tasks that are based on characteristics shared between humans and AI.
jedbrown.org
And that's before getting into the centralization of power, incentive structure, and technosolutionist trap that afflicts even the more benignly worded "AI for social good".
blacksky.community/profile/did:...
jedbrown.org
The "AI for Social Good" narrative is a tactic to consolidate power in the hands of those with no commitment to social good. The tech does not do what it is claimed to do and what it is claimed to do exacerbates root problems. Uncritical technosolutionism is a denial of service attack on humanity.
abeba.bsky.social
AI is the wrong tool to tackle complex societal & systemic problems. AI4SG is more about PR victories, boosting AI adoption (regardless of merit/usefulness) & laundering accountability for harmful tech, extractive practices, abetting atrocities. yours truly
www.project-syndicate.org/magazine/ai-...
jedbrown.org
Autocracy thrives on the erosion of truth; it does not need to convince everyone of a single preferred narrative. That's why "flood the zone" is effective. There cannot be pluralistic democracy without shared reality. The asymmetry is intrinsic, and "anti-fascist disinformation" isn't effective.
Reposted by Jed Brown
rude1.blacksky.team
TL; DR
In the next 5 business days you should be able to go to Link's page on blacksky.community and see his posts, and he'll be able to post, but people using the bsky mobile apps won't see it.
After that we'll get Blacksky-only posts up.
After that we can talk about purging bsky mod tools
fin/11
jedbrown.org
Yes, there is no reason to presume integrity of statements found between the detected fraud. The entire report should be considered fraud and Deloitte should repay the entire contract plus interest and damages.

Also, the non-existent refs will soon be replaced with fraudulent citations of real refs.
"This is no longer a ‘strong hypothesis’,” Rudge said. “Deloitte has now issued a confession, albeit buried in the methodology section. Deloitte has admitted to using generative AI for a core analytical task; but it failed to disclose this in the first place.”
The academic said the recommendations of the report could no longer be trusted because “the core analysis was done by an AI”.
Reposted by Jed Brown
fishkin.bsky.social
I thought I'd put the administration's proposed "compact" with universities in context, so I wrote the blog post below.

It's especially for journalists covering this story!

Many details about how the compact itself works and why the administration has retreated to this strategy.
Balkinization: The Art of Replacing the Law with the Deal
A group blog on constitutional law, theory, and politics
balkin.blogspot.com