Juliette Wade
@juliettewade.bsky.social
5.2K followers 5.6K following 5.1K posts
Novelist, linguist, anthropologist, Ph.D. Fascinated by social systems. Rep'd by K. O'Higgins. The Broken Trust series: 1. Mazes of Power (DAW 2020). juliettewade.com | Patreon: https://www.patreon.com/c/JulietteWade | CA in AUS, Naarm | she/her | NO AI
Pinned
juliettewade.bsky.social
Things I like to post:
#SFF
books, stories, art
human rights/politics
Discourse Analysis
LGBT+ support, trans and acespec rep
Cute animals esp otters, corvids, possums, cats
Science!
IRL language and culture geekery
worldbuilding, especially linguistic and cultural
my novels/stories and WIPs
The cover of Mazes of Power, Book One of The Broken Trust by Juliette Wade. The cover is in block letters over a series of rotating frames that seem to pull you in toward an underground city, while silhouettes emerge from different layers of the spin, hinting at the people inside.
Reposted by Juliette Wade
olivia.science
thank you both! also this might be useful: bsky.app/profile/oliv...
olivia.science
Really enjoyed & honoured to be part of the Critical AI Literacy Symposium yesterday. Also @lucyavraamidou.bsky.social & @miquelpt.bsky.social (and others) spoke about their wonderful work too. Big thanks to @irisvanrooij.bsky.social, Leo & Barbara for organising. ✨

www.youtube.com/watch?v=Fxyg...
juliettewade.bsky.social
Zoom in on that Venn diagram.
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI's ChatGPT and Apple's Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
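The figure's taxonomy is easiest to check by treating it as plain overlapping sets. Here is a minimal sketch in Python, assuming the category memberships as read off the caption above; the placement of closed-source systems such as ChatGPT is only a presumption, per the caption itself (cf. Dingemanse 2025), and LDA's tag reflects its standard description as a classical generative classifier.

```python
# Sketch of Figure 1's set-theoretic view: each example system is tagged
# with the subsets of the AI superset it belongs to. Memberships are one
# reading of the caption, not the paper's definitive data.
MEMBERSHIP: dict[str, set[str]] = {
    "ChatGPT": {"LLM", "ANN", "generative", "chatbot"},  # presumed: closed source
    "BERT": {"LLM", "ANN"},
    "AlexNet": {"ANN"},
    "GAN": {"ANN", "generative"},                # the purple subset
    "Boltzmann machine": {"ANN", "generative"},  # also purple
    "ELIZA": {"chatbot"},       # 1966 rule-based chatbot, no ANN
    "A.L.I.C.E.": {"chatbot"},
    "LDA": {"generative"},      # classical generative classifier
}

def in_subsets(*categories: str) -> set[str]:
    """Return the systems lying in the intersection of all given categories."""
    wanted = set(categories)
    return {name for name, cats in MEMBERSHIP.items() if wanted <= cats}

# The purple region: generative AND ANN, e.g. GANs and Boltzmann machines.
print(in_subsets("generative", "ANN"))  # {'GAN', 'Boltzmann machine', 'ChatGPT'}
# 'chatbot' alone does not isolate modern LLM products:
print(in_subsets("chatbot"))            # 1960s ELIZA sits beside ChatGPT
```

This is the point of the table's caveat: none of these labels is orthogonal, so no single buzzword cleanly picks out the products one might want to critique or proscribe.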
Reposted by Juliette Wade
olivia.science
Also very important ❣️

HUGE thank you to all my co-authors @marentierra.bsky.social @altibel.bsky.social @jedbrown.org @lucyavraamidou.bsky.social @felienne.bsky.social @irisvanrooij.bsky.social; full list here for those not on bsky: doi.org/10.5281/zeno... — sorry if I did not tag anybody!

11/n
Reposted by Juliette Wade
olivia.science
We end on "Machine Yearning for a Better Present" because why can't we dream? Why accept that universities are not places of learning? Nothing, except industry and their paid shills amongst us, force us to accept this & this force is not one of reason, but one of regressive values & profit.

10/n
Reposted by Juliette Wade
olivia.science
7. the kinda appealing, but substantively indefensible, idea that somehow AI is different to other technology, like calculators, in a pedagogical context — but we totally ban a great deal of technology in the classroom.

(Section 3.7 here doi.org/10.5281/zeno...)

9/n
Reposted by Juliette Wade
olivia.science
6. the extremely unhinged series of claims that, without training students on how to be users of such systems, we somehow fail as teachers — truly ludicrous, utterly bizarre, and in fact in direct contradiction with other industry selling points.

(Section 3.6 here doi.org/10.5281/zeno...)

8/n
Reposted by Juliette Wade
olivia.science
5. the nonsense refrain that somehow everybody — every one of our students — is cheating now and we need to police them more and more.

(Section 3.5 here doi.org/10.5281/zeno...)
7/n
3.5 Supposedly students are all cheating now
No serious scholar or scientist in their right mind would want LLMs to produce their
texts; and hence, also no student pursuing an academic education would want to do so.
Iris van Rooij (2022, para. 7)
Students have always cheated. Bending and breaking the rules is human nature. And by the same
token, educators are not police. We are not here to obsessively surveil our students — education is
based on mutual trust. Therefore, our duty is to build mutually shared values with our students and
colleagues. Especially when education is not valued, we as educators are obliged to show our students
that they are not just here to receive a degree: education is more than qualification (Biesta 2021). It is
about preparing students to become capable and active members of society.
We emphasize that there are two victims of plagiarism: the original authors whose work is taken
without credit and the audience who is being deceived.
Reposted by Juliette Wade
olivia.science
4. the disregard for the corrosive power of anthropomorphism, which is taken advantage of by industry to sell & steal our data in the best case scenario, and in the worst to abuse and push vulnerable groups to dependence and worse.

(Section 3.4 here doi.org/10.5281/zeno...)
6/n
3.4 Anthropomorphism and other circular reasoning
While opacity is a distinguishing feature of many other areas of science and technology, the myths surrounding computing may stem less from the fact that it is an opaque
esoteric subject and more from the way in which it can be seen to blur the boundary between people and machines (Turkle 1984). To be sure, most people do not understand
the workings of a television set or how to program their video cassette recorders properly, but then they do not usually believe that these machines can have intelligence. The
public myths about computing and AI are also no doubt due to the ways in which computers are often depicted in the mass media — e.g. as an abstract source of wisdom, or
as a mechanical brain.
Brian P. Bloomfield (1987, p. 72)
There is circular reasoning at play when we suggest and assume machines can think, reason, or argue
like humans can, and therefore, treat them — and test them — like humans. Within human-machine
interaction research, often, AI technology output is compared to human performance, mistakenly
assuming such benchmarks are informative about AI’s capabilities. However, correlations with human output mean little to substantiate claims of human-likeness, especially when the input to the AI
models tested is the output of human cognition in the first place. There are so many cases of this from
daily life and the history of science that it appears shocking such results are taken so uncritically to
be cognition (Bernardi 2024; Guest 2025; Guest and Martin 2023; Placani 2024; van Rooij and Guest
2025). An example from the 1960s:
Weizenbaum (1966) was afraid of…
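As a toy illustration of that circularity (an illustration of my own, not an example from the paper): if a system's only input is human output, a high correlation with human output is guaranteed by construction, so the benchmark says nothing about cognition.

```python
# Toy demonstration: a "model" that merely echoes human data (a stand-in
# for any system trained on human-produced text) correlates almost
# perfectly with held-out human judgements -- without any cognition.
import numpy as np

rng = np.random.default_rng(0)

human = rng.normal(size=1000)                     # "human" ratings for 1000 items
model = human + rng.normal(scale=0.3, size=1000)  # noisy copy of that human signal

r = np.corrcoef(human, model)[0, 1]
print(f"correlation with human output: r = {r:.2f}")  # ~0.96

# Interpreting r ~ 0.96 as evidence of human-like reasoning would be the
# circular move the section describes: the human signal was the input, so
# high agreement measures the data pipeline, not human-likeness.
```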
Reposted by Juliette Wade
olivia.science
3. the obsession with denying and rewriting history, pretending AI only appeared in the last 3 years or that it has no history before the last few decades, etc.

(Section 3.3 here doi.org/10.5281/zeno...)
5/n
3.3 Ahistoricism and the AI hype cycles
When I started writing about science decades ago [...] I edited an article in which [a computer scientist] predicted that AI would soon replace experts in law, medicine, finance
and other professions. That was in 1984.
John Horgan (2020, n.p.)
When we engage with the public, we notice people think that AI, as a field or a technology, appeared
on the scene in the last three years. And they experience confusion and even dissonance when they
discover the field and the technologies have existed for decades, if not centuries or even millennia
(Bloomfield 1987; Boden 2006; Bogost 2025; Guest 2025; Hamilton 1998; Mayor 2018). Such ahistoricism facilitates “the AI-hype cycles that have long been fuelled by extravagant claims that substitute
fiction for science.” (Heffernan 2025, n.p.; Duarte et al. 2024). We have been here before, both with entanglements of AI and statistics with industry corrupting our academic processes, and with so-called
AI summers: hype cycles that pivot from funding booms to complete busts and cessation of research
(Bassett and Roberts 2023; Boden 2006; Law 2024; Lighthill et al. 1973; Merchant 2023; Olazaran
1996; Perez 2002; P. Smith and L. Smith 2024; Thornhill 2025).
To understand how industry tries to influence independent research for their benefit, we can look
to past examples of entanglement of industry and statistics. Ronald A. Fisher, a eugenicist and “the
founder of modern statistics” (Rao 1992), having been paid by the tobacco industry, claimed that because ‘correlation is not causation’, ‘smoking does not cause lung cancer’ (Fisher 1958;
Stolley 1991). The parallel between tobacco and technology does not end here: “both industries’ increased funding of academia was as a reaction to increasingly unfavourable public opinion and an increased threat of legislation.” (Mohamed Abdalla and Moustafa Abdalla 2021, p. 2; also see Knoester
et al. 2025) The histories of eugenics, statistics, comput…
Reposted by Juliette Wade
olivia.science
2. the strange but often repeated cultish mantra that we need to "embrace the future" — this is so bizarre given, e.g. how destructive industry forces have proven to be in science, from petroleum to tobacco to pharmaceutical companies.

(Section 3.2 here doi.org/10.5281/zeno...)
4/n
3.2 We do not have to ‘embrace the future’ & we can turn back the tide
It must be the sheer magnitude of [artificial neural networks’] incompetence that makes
them so popular.
Jerry A. Fodor (2000, p. 47)
Related to the rejection of expertise is the rejection of imagining a better future and the rejection
of self-determination free from industry forces (Hajer and Oomen 2025; Stengers 2018; van Rossum
2025). Not only AI enthusiasts, but even some scholars whose expertise concentrates on identifying
and critically interrogating ideologies and sociotechnical relationships — such as historians and gender scholars — unfortunately fall prey to the teleological belief that AI is an unstoppable force. They
embrace it because alternative responses seem too difficult, incompatible with industry developments,
or non-existent. Instead of falling for this, we should “refuse [AI] adoption in schools and colleges,
and reject the narrative of its inevitability.” (Reynoldson et al. 2025, n.p., also Benjamin 2016; Campolo and Crawford 2020; CDH Team and Ruddick 2025; Garcia et al. 2022; Kelly et al. 2025; Lysen
and Wyatt 2024; Sano-Franchini et al. 2024; Stengers 2018). Such rejection is possible and has historical precedent, to name just a few successful examples: Amsterdammers kicked out cars, rejecting
that cycling through the Dutch capital should be deadly. Organised workers died for the eight-hour
workday, the weekend and other workers’ rights, and governments banned chlorofluorocarbons from
fridges to mitigate ozone depletion in the atmosphere. And we know that even the tide itself famously
turns back. People can undo things; and we will (cf. Albanese 2025; Boztas 2025; Kohnstamm Instituut 2025; van Laarhoven and van Vugt 2025). Besides, there will be no future to embrace if we deskill
our students and selves, and allow the technology industry’s immense contributions to climate crisis…
Reposted by Juliette Wade
olivia.science
We also go through many arguments that can be used as counters to typical false frames forced upon us, such as:

1. the powerful nonsense that we as experts know nothing

(Section 3.1 here doi.org/10.5281/zeno...)

3/n
3.1 Rejection of expertise, ironically including our own
Being in a colonizing discipline first demands and then encourages an attitude that might
be called intellectual hubris. Furthermore, since you cannot master all the disciplines that
you have designs on, you need confidence that your knowledge makes the ‘traditional
wisdom’ of these fields unworthy of serious consideration. Here too, the AI scientist
feels that seeing things through a computational prism so fundamentally changes the
rules of the game in the social and behavioural sciences that everything that came before
is relegated to a period of intellectual immaturity.
Sherry Turkle (1984, p. 230)
Every field that comes into contact with AI discourse becomes infected, even within AI as a field of
study (recall Table 1). Our colleagues have embraced these systems, uncritically incorporating them
into their workflows and their classrooms, without input from experts on automation, cognitive science, computer science, gender and diversity studies, human-computer interaction, pedagogy, psychology, and law to name but a few fields with direct relevant expertise (Sloane et al. 2024). Meanwhile, technology companies have rushed to invest in ‘AI ethics’ or ‘AI safety’ to ethics wash their
claims, thereby “laundering accountability” (as Abeba Birhane explains in Arseni 2025) and “distracti[ng] from real AI ethics” (Crane 2021), while censoring academics and thus, violating academic
freedom (Gebru and Torres 2024; Gerdes 2022; Goudarzi 2025; Munn 2023; Ochigame 2019; Suarez
et al. 2025; Tafani 2023).
Reposted by Juliette Wade
olivia.science
As seen in the table & figure above, we dissect and explain how terminology is abused and contorted by industry — terms like 'generative' or 'agentic' are not able to isolate what is being critiqued. We have seen this countless times before, flitting from one nonsense buzzword to another. 2/n
extract from page 3
Reposted by Juliette Wade
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
Reposted by Juliette Wade
haushoferl.bsky.social
“High technology is often as socially regressive as it is technically revolutionary or progressive.”

Referencing the wonderful @histoftech.bsky.social, this critical piece on the uncritical adoption of AI in universities pulls no punches.
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
Reposted by Juliette Wade
thepixelproject.bsky.social
Our daily Violence Against Women Helpline Re-Tweet session is coming right up! Everybody please RT - you cld save someone's life! #VAW
juliettewade.bsky.social
The thing I struggle with is not whether I write well; it's whether anyone will care whether I write well or not.

It gets me down. I'm grateful for all my friends who're willing to hold my hand when I feel that way (literally or figuratively).
Reposted by Juliette Wade
needhibhalla.bsky.social
"Even if a political cause isn’t popular now, that doesn’t mean it can’t be, with the right moral leadership.

Until it’s done, we fight. We take *pride* in fighting for people like Sylvia. And that is how we make the alternative of transphobia as shameful as it needs to be."
Reposted by Juliette Wade
hillarymonahan.bsky.social
Any violence will result in more occupations. Chicago is similarly occupied. Note: no red state occupation despite Kirk's assassination happening in red Utah.

Every accusation, in this case of violent thuggery, is a confession. Tackling a blow up unicorn? VERY CLEARLY indicates the aggressors. 2/2
Reposted by Juliette Wade
hillarymonahan.bsky.social
Seen intnl criticism of how Portland is protesting occupation. I understand it looks surreal. That said, the narrative is blue bastion cities are full of organized, violent antifa thugs. The mainstream news outlets are owned by billionaires propping up this narrative--they ARE the oligarchs. 1/2
Reposted by Juliette Wade
rahaeli.bsky.social
Holy shit, and also, great blessings upon the family of the human recipient mentioned in the article for allowing their loved one with no remaining brain activity to be a test candidate to determine if their body would reject a treated kidney.
honestcanadian.bsky.social
This is huge news!
UBC has developed an enzyme that can convert donor organs to type O, making them universal.
Normally, using organs of the wrong blood type causes the recipient's immune system to attack the organ, leading to failure.
❤️🇨🇦⚕️
news.ubc.ca/2025/10/univ...
UBC enzyme technology clears first human test toward universal donor organs for transplantation - UBC News
UBC-developed enzymes successfully converted a kidney to universal type O for transplant, marking a major step toward faster, more compatible organ donations.
news.ubc.ca
Reposted by Juliette Wade
skyladawn.ca
I just want to add that for people on immunosuppressants, catching covid (or many other things) means they're often directed to stop taking their medication so they can fight off the virus. This risks a relapse of their condition and introduces a host of complications in addition to the covid ones.
rippermd41.bsky.social
To you, getting covid may be no big deal.

But thanks to a failure in public health you have no idea who or what you’re risking.

Repeat covid infections are really bad for those with Long Covid.

Did you know that Black, Latino and Native American communities are at highest risk for Long Covid?