Lauren Woolsey
@cgsunit.bsky.social
Earthling, she/her, teacher of science, reader of books, player of board games, and more. Avatar by Corinne Roberts :) These days, I post a lot about how AI sucks; see zines and info at padlet.com/laurenUU/antiAI
Reposted by Lauren Woolsey
We finally get to see @adambecker.bsky.social nerd out about astrophysics! He neatly covers light travel time back to the CMB, time itself forward to the heat death of the universe, and a discussion of entropy (p177-182).

"The impermanence of the universe does not make existence meaningless."
July 26, 2025 at 2:50 AM
Reposted by Lauren Woolsey
It’s an exercise in infantilization to do everything you can to prevent a student from taking a shortcut. Let them develop their own agency if they want to. Use your time as an educator to come up with assignments that are good on their own merits & not bc they flummox a probability machine.
November 14, 2025 at 3:13 PM
Reposted by Lauren Woolsey
It is literally not your job to mount defensive maneuvers against LLM vomited essays. Give those papers the grades the words deserve (generally it's a 'C'). Take a deep breath and decompress after the sense of disappointment. Move on.
November 14, 2025 at 2:49 PM
Ha, right?! Either take a photo yourself, make some art (love yours, Jordan!), or pay an artist (I went with that commission option and credit her in my bio, so I didn't have to choose a selfie ever again lol)
November 13, 2025 at 11:17 PM
Here's this entire thread all in one easy-scrolling webpage!

(Seeing it that way reminds me why this was a four-day endeavor for 130 pages of reading LOL)
November 13, 2025 at 11:03 PM
Thank you all for reading along with me, and consider supporting @fractalecho.bsky.social with your own copy of the book (link.springer.com/book/10.1007...).

Lastly, I have my own free zines and collected online resources at padlet.com/laurenUU/antiAI!

[Next post = readable link to full thread]
November 13, 2025 at 10:59 PM
I was sad the book ended; this has been one of my favorites on the topic. Williams notes it was "an exercise in working my own way out of hopelessness."

Forging new human relationships, healing others, growing away from toxic/hateful ones - that's "a way out of our contemporary dystopia" (p127)
November 13, 2025 at 10:50 PM
The Just AI Toolkit presented by Rua Williams consists of four stages of info gathering, evaluation, and reflection:

1. Specify the nature of the system
2. Observe the flow of data and interactions
3. Assess system's claims against outcomes
4. Rewrite the system or policies as needed (or reject it!)
November 13, 2025 at 10:46 PM
This part feels too central to summarize, so here's a longer clip with extended discussion from Abeba Birhane's "Algorithmic Injustice: A Relational Ethics Approach" (in Patterns 2, 2021)
November 13, 2025 at 10:42 PM
Williams also highlights the "Playbook for Resistance," a set of ideas from "Data Grab: The New Colonialism of Big Tech and How to Fight Back" by Ulises A. Mejias and Nick Couldry (2024).

When a system is corrupt, we can work within, against, or beyond that system to shift values and resist together.
November 13, 2025 at 10:33 PM
After that, a whole suite of ideas for radically changing our priorities, values, and motivations:

Ruha Benjamin's Abolitionist Toolkit, Allied Media's Design Justice Principles, Disability Justice Principles of Sins Invalid, Meredith Broussard's Public Interest Technologies, etc.
November 13, 2025 at 10:27 PM
Next section: some commentary on the bias in GPT checkers and human anti-AI scrutiny against ESL and neurodivergent folks. I have been *very* vocal about this at my institution, and it's one reason I don't have a ban on AI in my classrooms. The idea of enforcing/policing such a ban gives me the ick.
November 13, 2025 at 10:23 PM
"I find the question of robot rights to be a wholly contemptible distraction from the present-day human rights violations already being perpetuated by human executives of automated systems." (Williams, p118)

(I put a book dart by that, @fractalecho.bsky.social & I would have clapped in a live talk)
November 13, 2025 at 10:19 PM
Williams also discusses responsibility gaps when trying to determine who is liable when a system fails. They quote Shannon Vallor and Bhargavi Ganesh's "Artificial Intelligence and the Imperative of Responsibility: Reconceiving AI Governance as Social Care" (and ask who will be deemed worthy of care?)
November 13, 2025 at 10:14 PM
Those and others, however, do not address "the reality that the premise of a system itself may be flawed" (Williams, p115).

Regarding alignment, "to whom and to whose values? [...] Alignment relies on the rationalist conceit that values are measurable and proceduralizable" (Williams, p116).
November 13, 2025 at 10:03 PM
Next section deals with ethics and "alignment." Williams notes the prominent "FAT*" framework (Fairness, Accountability, and Transparency) and some critiques of it. Also mentioned: Explainable AI as a project in machine learning, sometimes considered a prerequisite to other frameworks of ethics.
November 13, 2025 at 10:00 PM
Beyond those (user, client, builder, executive, researcher), Williams adds:

-"reporter" : journalists and other influencers who play role in "how language is used to represent individual AI systems and AI as a whole."

-"governor" : regulators of policies; in institutions, orgs, and govt agencies
November 13, 2025 at 9:56 PM
Within any context, you might be a user (could be a consumer or a worker in this role), a client (with purchasing decisions), a builder (with design/development decisions), or an executive (with purpose/goal decisions).

Researchers might operate as each of these based on their research questions.
November 13, 2025 at 9:51 PM
They note that use of optimist, skeptic, pessimist categories "inherently uphold the AI fatalism that asserts a nebulous and ungovernable 'AI' is 'here to stay' that we must 'learn to live with' or risk being 'left behind'" (Williams, p111).

Instead, notice the role you play in specific systems. 🧵➡️
November 13, 2025 at 9:47 PM