LJ_MI
@lj-mi.bsky.social
Higher ed., PhD, teacher of rhetoric, design, tech comm, and digital studies. I increasingly appreciate silence and listening to others. 🚴‍♀️
Reposted by LJ_MI
“Just when you thought you heard it all, AI systems designed to spot cancer have startled researchers with a baked-in penchant for racism.”
December 20, 2025 at 4:30 PM
Reposted by LJ_MI
And tbh we had better, because this shit at scale is going to introduce so many more harms we can’t even fully comprehend yet.
We can do even more.
December 18, 2025 at 9:26 PM
Reposted by LJ_MI
The ACM Digital Library, where a LOT of computing-related research is published (I'd say at least 75% of my own publications), is now not only providing (without consent of the authors and without opt-in by readers) AI-generated summaries of papers, but they appear as the *default* over abstracts.
December 16, 2025 at 11:31 PM
Reposted by LJ_MI
& I’m being purposefully a bit naive about how students self-report—there is 100% some mirroring at play, as students perceive—whether rightly or wrongly—how teachers want them to talk about AI & echo it back—not unlike how students echoed teacherly warnings about Wikipedia 10 years ago+
December 18, 2025 at 8:45 PM
Reposted by LJ_MI
This all resonates for me with this reply from @thekitchentapes.bsky.social, down in the threads under @mkirschenbaum.bsky.social's post—because if you had a chat with your class & they seemed super opposed to AI, that is absolutely not sufficient to ensure they aren’t using it, if that’s your goal+
I know faculty who still think *their* students aren't using AI bc they had a good discussion about it as a class, their assignments are truly unique, students are passionate about the subject matter, etc.

How do you maintain basic standards with colleagues who refuse to acknowledge the problem?
December 18, 2025 at 8:29 PM
Reposted by LJ_MI
Ethical AI is an oxymoron, like automated science
December 16, 2025 at 10:10 PM
Reposted by LJ_MI
Grading and googling hallucinated citations, as one does nowadays. Now that LLMs have been around for a while, I've discovered new horrors: hallucinated journals are now appearing in Google Scholar with dozens of citations bc so many people are citing these fake things
December 15, 2025 at 8:41 PM
Reposted by LJ_MI
Merriam-Webster’s human editors have chosen ‘slop’ as the 2025 Word of the Year.
December 15, 2025 at 2:07 PM
Reposted by LJ_MI
If you really wanted to change the political climate in this country you'd get back on LiveJournal
December 15, 2025 at 7:45 PM
Reposted by LJ_MI
I am heartbroken for Brown and the students, staff, and faculty who have to get through this. I hope they have everything they need to navigate such senseless brutality.
December 13, 2025 at 11:34 PM
Reposted by LJ_MI
Constant surveillance and in return you don’t have to communicate with your family.
Amazon’s new Alexa aims to detangle household chaos, like who fed the dog and the name of that restaurant everyone wanted to try | Fortune
This is a “classic fight in my house,” said Panos Panay, SVP of devices and services at Amazon.
fortune.com
December 9, 2025 at 11:52 AM
Reposted by LJ_MI
It is so striking to see the disclaimer "AI can make mistakes. Check outputs before using" on every single piece of software. Imagine the equivalent warning label on the Ford Pinto. "The fuel tank ruptures and bursts into deadly flames when crashed. Do not crash the Ford Pinto while driving"
I will be teaching about the Ford Pinto case in my sociology of law lecture tomorrow on the risk society & collective harms. It is clear that the development of (gen) #AI fits the same capitalist & risk logic, but the big difference is that this type of corporate behaviour has become normalised and accepted
December 9, 2025 at 11:25 AM
Reposted by LJ_MI
I mean we are absolutely in a place now where the only solution to this information disorder is for everyone to constantly evaluate the source of information. Never trust a chatbot, but also don't believe a video unless you know and trust where it comes from.

Unfortunately... that's a lot of work.
December 5, 2025 at 11:18 PM
Reposted by LJ_MI
Both the "Continue" button and the "Try it now" button do *the same thing* in what is the clearest example of a dark pattern I've ever seen. Imagine someone is getting metrics about the "number of Google Docs users who are using AI" and all the numbers are total bullshit.
December 7, 2025 at 10:44 PM
Reposted by LJ_MI
Here's the reality this example illustrates:

It's not even just about people blindly trusting what ChatGPT tells them. LLMs are poisoning the entire information ecosystem. You can't even necessarily trust that the citations in a published paper are real (or a search engine's descriptions of them).
December 5, 2025 at 11:15 PM
Reposted by LJ_MI
“we estimate employment-weighted share of Americans using AI at work has fallen by a percentage point & now sits at 11%. Adoption has fallen sharply at the largest businesses, those employing +250 people. 3 yrs into the genAI wave, demand looks surprisingly flimsy” www.economist.com/finance-and-...
Investors expect AI use to soar. That’s not happening
Recent surveys point to flatlining business adoption
www.economist.com
December 4, 2025 at 4:12 PM
Reposted by LJ_MI
Weird situation. AI adoption in the work world is stagnating, while higher-ed institutions are rushing to embrace it so that students will be prepared for…the work world?
“we estimate employment-weighted share of Americans using AI at work has fallen by a percentage point & now sits at 11%. Adoption has fallen sharply at the largest businesses, those employing +250 people. 3 yrs into the genAI wave, demand looks surprisingly flimsy” www.economist.com/finance-and-...
Investors expect AI use to soar. That’s not happening
Recent surveys point to flatlining business adoption
www.economist.com
December 4, 2025 at 4:17 PM
Reposted by LJ_MI
Last week @theverge.com published my essay exploring the limitations of large-language models. This week, that same essay is cited by a federal judge in Michigan to distinguish the process of human reasoning from what these models do. Very, very gratifying.
h/t @robertfreundlaw.bsky.social

holy shit, an accurate legal critique of LLMs. LLMs don't reason because they're just stitching together plausible-looking sentences indifferent to the content
December 3, 2025 at 9:04 PM
Reposted by LJ_MI
It's wild to be of the generation of educators who was told by elders to never let students use Wikipedia (which is meticulously sourced and curated and which I taught students to use as a starting point) to now have those same folks embrace a crystal ball that doesn't identify its sources.
December 1, 2025 at 1:17 PM
Reposted by LJ_MI
I especially feel this when sociologists who study technology do this.

Y'all are supposed to take sociotechnical approaches to the study of technology. A throwaway AI-generated image does not help that credibility.
I don’t know if anyone else notices or cares, but when I see a presentation in which the speaker uses obviously generated-AI images to illustrate their slides, it makes me immediately less confident in whatever other content they’re presenting.
November 29, 2025 at 5:54 PM
Reposted by LJ_MI
A study by Dayforce shows 87% of executives use AI for work, compared to 57% of managers and just 27% of employees.

I think this explains the massive disconnect we see in how CEOs talk about AI versus everyone else. It also raises the question of how useful it truly is for frontline work.
Execs are embracing AI more than their employees are, new research suggests
Research from HR software company Dayforce suggests that executives are leaning into AI far more than their employees.
www.businessinsider.com
November 28, 2025 at 10:58 AM
Reposted by LJ_MI
Imagine posing for a photographer friend who sells your image to iStock, and a year later you see this is what The Washington Post has done with the image of you walking on a beautiful fall day.
November 29, 2025 at 6:02 AM
Reposted by LJ_MI
If your admin says "but we need to prepare students for AI-driven careers," you can calmly say no. Reiterate that AI integration is the result of wild capitalist greed, the technologies themselves aren't "generative" or useful in most careers, and students should focus on process-based learning.
November 26, 2025 at 6:18 PM