Ben Williamson
@benpatrickwill.bsky.social
Researching data, tech, futures, and biological sciences in education | Senior Lecturer and co-director at the Centre for Research in Digital Education | University of Edinburgh | Editor of Learning, Media and Technology @lmt-journal.bsky.social
Pinned
New paper just out on how changing sociotechnical systems of knowledge production and access - platforms, the cloud, AI - pose profound challenges to educational practice and research doi.org/10.1080/0305...
Knowledge infrastructure crisis: digital democratic deficits and alternative designs for education
The production and circulation of knowledge in education increasingly depends on large-scale digital infrastructures. In this article we provide a critical review of the transformation of the conte...
doi.org
Reposted by Ben Williamson
I have a feature essay for The Guardian today on the mirage of AI medicine, why care cannot be automated, and how overwhelming uptake of AI by American health capitalism threatens to undermine the very possibility of democracy.
www.theguardian.com/us-news/ng-i...
What we lose when we surrender care to algorithms | Eric Reinhart
A dangerous faith in AI is sweeping American healthcare – with consequences for the basis of society itself
www.theguardian.com
November 9, 2025 at 4:05 PM
Yes, academic publishing is surely an industry, but it is getting worse

"... in the boom years after the second world war, entrepreneurs built fortunes by taking publishing out of the hands of scientists and expanding the business on a previously unimaginable scale" www.theguardian.com/science/2017...
November 9, 2025 at 12:17 AM
Reposted by Ben Williamson
Identifying flaws in GenAI unfortunately offers a pretext for claims that perfecting the product is just a matter of time & money. So pointing to chatbots’ role in, say, suicides can only go so far if we don’t also identify the systemic, irresolvable lack of GenAI’s human commitment, because math has none
November 8, 2025 at 1:51 AM
Reposted by Ben Williamson
“When the neuroscientist reduces love to neurochemistry or the economist treats humans as rational utility-maximizers, these explanations may be technically sophisticated, but they feel profoundly dehumanizing…hyper-specialization destroyed: comprehensive rational frameworks for human flourishing.”
November 8, 2025 at 7:03 PM
Maybe I'm missing something but is it actually even *meaningful* if AI can improve measurable learning performance by a few standard deviation points? This leaves me so cold as an account of learning, even leaving aside all the *other* problems with AI in schools that learning science often ignores.
The evidence AI tutors work "astonishingly well" is incredibly thin. We've seen these same promises made for virtually every technological "innovation" of the past 100 years. (Carl's essay is more nuanced than this post describing it.)
Latest post looks at the complicated picture emerging on AI tutoring: when AI tutors work astonishingly well and when they quietly make students worse. ⬇️
November 8, 2025 at 11:10 PM
Reposted by Ben Williamson
The 2025 Curriculum and Assessment Review Final Report was meant to “build a world-class curriculum for all.” In reality, it signals something quite different: the closure of curriculum reform in England. 🧵
November 5, 2025 at 10:08 PM
"If a school mandates a ‘phone-free’ environment, it needs a system. Enter the entrepreneurs. We are seeing an explosion of products like magnetic locking pouches and high-tech storage solutions ... to manage a problem that, arguably, schools could handle with existing disciplinary policies."
'The real debate isn't whether to manage phones; it’s whether a blanket, national prohibition is a genuine solution or merely a symbolic surrender that distracts from the deeper work of education'

https://schoolsweek.co.uk/banning-phones-in-schools-is-a-lazy-opt-out/
November 8, 2025 at 8:48 AM
Reposted by Ben Williamson
So this is all a bunch of bullshit, especially the “Foundational AI” cohort.

“Foundational AI” and “Foundation Models” are garbage concepts developed to distance LLMs and GenAI from longstanding criticisms by STS scholars who make clear the harms of such technologies.
November 7, 2025 at 11:17 PM
We wrote this paper a while ago and since then OpenAI has claimed plans to be "core infrastructure" of education, Google has rammed Gemini into schools via its education platforms, and AWS showed it underpins most edtech platforms... 1/
November 7, 2025 at 9:31 PM
This is so right, and why everyone wanting to cram "AI" into the school curriculum - as if we even knew *what* we'd teach about it to children in 2028 - is so wrong.
For political hacks like myself, 'put it in the curriculum' is the oldest and hattiest of old hat policy ideas.

Why do we think schools should solve all our problems? What if we thought differently about life skills and civic education?

howtorunacountry.substack.com/p/schools-ca...
Schools cannot solve all our problems
The limits of ‘put it in the curriculum’ politics
howtorunacountry.substack.com
November 7, 2025 at 7:29 PM
Reposted by Ben Williamson
«Silicon Valley isn’t building apps anymore.
It’s building empires.»

The Authoritarian Stack: A mapping of firms, funds, and political actors turning core state functions into private platforms, by @francescabria.bsky.social et al. for @fesonline.bsky.social
www.authoritarian-stack.info
The Authoritarian Stack
How Tech Billionaires Are Building a Post-Democratic America — And Why Europe Is Next
www.authoritarian-stack.info
November 2, 2025 at 10:00 AM
Reposted by Ben Williamson
“Whoever controls AI infrastructure—compute, models, data, & cloud—will shape the economic & political order of the 21st century. The U.S. & China understand this & are mobilizing every instrument of statecraft to secure supremacy. Europe must understand it too.”

@francescabria.bsky.social

#ai
Reclaiming Europe’s Digital Sovereignty | NOEMA
Europe can accept permanent technological dependency, or it can build democratic digital systems rooted in climate commitments, labor protections and social diversity.
www.noemamag.com
October 2, 2025 at 5:22 PM
Reposted by Ben Williamson
Google's Gemini Deep Research can now read your Gmail and rummage through Google Drive
Even with more info, web giant says agent can't be trusted to keep you healthy, wealthy, and wise Google's Gemini Deep Research tool can now reach deep into Gmail, Drive, and Chat to obtain data that might be useful for answering research questions.…
dlvr.it
November 7, 2025 at 5:33 PM
Reposted by Ben Williamson
Lots of terrible news out lately about ChatGPT, but it doesn't seem like anyone is investigating Gemini, especially in the education space (i.e. Gems). If you want to crack AI edtech, that's a big one. Lots of Google ecosystem schools have mandated Gemini, and I have few critical resources on it specifically
November 7, 2025 at 3:36 PM
Reposted by Ben Williamson
“We should train novices based on the practices of disciplinary experts who have achieved AI fluency in their discipline. Unfortunately, there aren’t any such experts yet.” www.chronicle.com/article/stop...
Opinion | Stop Pretending You Know How to Teach AI
Colleges are racing to make students ‘fluent.’ One problem: No one knows what that means.
www.chronicle.com
November 7, 2025 at 12:48 PM
Reposted by Ben Williamson
“And then you have librarians who are experiencing a real existential crisis because they are getting asked by their jobs to promote [AI] tools that produce more misinformation. It's the most, like, emperor-has-no-clothes-type situation that I have ever witnessed.” - Alison Macrina
AI Is Supercharging the War on Libraries, Education, and Human Knowledge
"Fascism and AI, whether or not they have the same goals, they sure are working to accelerate one another."
www.404media.co
November 7, 2025 at 7:15 AM
Reposted by Ben Williamson
In light of that 404 article about librarians fighting against AI:
What if we could work with our public libraries to create + sustain convivial infrastructures for local news, local 🎶, digital equity, and more? I wrote about lots of communities that are doing this work, building networks of solidarity + resistance, modeling alternatives to extractive commercial systems.
I wrote about public libraries as spaces of civic solidarity, common knowledge, and public infrastructure — all endangered by commercial and federal saboteurs
November 7, 2025 at 12:00 PM
Bit late to this but it's such a clever piece of AI criticism, developing a literary critique of Sam Altman's auto-metafiction story as a way to explore the grave threats to the "intellectual infrastructure" of the humanities - and HE more broadly - posed by AI lareviewofbooks.org/article/lite...
Literature Is Not a Vibe: On ChatGPT and the Humanities | Los Angeles Review of Books
Rachele Dini discusses OpenAI’s “A Machine-Shaped Hand” and an academic sector in crisis.
lareviewofbooks.org
November 7, 2025 at 10:52 AM
Reposted by Ben Williamson
"This initiative will give teachers from every region of Iceland [...] access to advanced AI tools as the country explores how artificial intelligence can transform education."

Have there ever before been products sold where customers* must figure out the use?

www.anthropic.com/news/anthrop...
Anthropic and Iceland announce one of the world’s first national AI education pilots
Anthropic and Iceland announce national AI education pilot
www.anthropic.com
November 5, 2025 at 7:10 AM
Reposted by Ben Williamson
*sigh* and if you're in the UK and have been stonewalling the age verification thing, access to these settings is denied...
Wondering why no one likes your posts anymore, even among your friends? It's because @jay.bsky.team and team have decided to hide a huge amount of content from all of our feeds by default.

Here's how to turn it off.

First go to the hamburger menu in the upper left corner
November 7, 2025 at 8:38 AM
Reposted by Ben Williamson
I'm surprised the LMS companies aren't being more helpful considering all the money they make from providing access to data. AI agents will poison years of learning analytics research.
November 6, 2025 at 11:37 PM
"AI agents will poison years of learning analytics research."

Great point. Where are the learning analytics folks at on this stuff?
I'm surprised the LMS companies aren't being more helpful considering all the money they make from providing access to data. AI agents will poison years of learning analytics research.
November 6, 2025 at 11:45 PM
"If we do not act, we risk seeing the development of a fully automated loop in which assignments are generated by AI with the support of a LMS, AI-generated content is submitted...on behalf of the student, and AI-driven metrics evaluate the work" www.mla.org/Resources/Ad...
November 6, 2025 at 11:14 PM