Ben Williamson
@benpatrickwill.bsky.social
6.4K followers 1.3K following 1.2K posts
Researching data, tech, futures, and biological sciences in education | Senior Lecturer and co-director at the Centre for Research in Digital Education | University of Edinburgh | Editor of Learning, Media and Technology @lmt-journal.bsky.social
Reposted by Ben Williamson
neilselwyn.bsky.social
Aidan Walker on why "now is *not* the time to ban phones ... why Jonathan Haidt sucks"

howtodothingswithmemes.substack.com/p/now-is-not...
If you take Haidt’s premise that phones and social media are hurting children as true, then you should question whether the right policy remedy is an intervention in the way teachers run their classrooms and what children are allowed to see and do online

Why not fine the companies for endangering kids or create new rules they have to follow? Why not introduce competition into a monopolized market space, so that parents and kids have more choice in how to spend their time online? Why not put consumer safety standards on the algorithms, the software, the devices themselves? 

Why is the preferred tool to save a generation from anguish and our democracy from decline a patchwork of laws governing the decisions consumers can make, instead of a strategy to hold bad actors and industry to account?
benpatrickwill.bsky.social
I would like to learn more about grassroots developments if you have time to share what's going on there.
Reposted by Ben Williamson
justinhendrix.bsky.social
"Sceptics are privately - and some now publicly - asking whether the rapid rise in the value of AI tech companies may be, at least in part, the result of what they call 'financial engineering'. In other words - there are fears these companies are overvalued."
A tangled web of deals stokes AI bubble fears in Silicon Valley
Some are worried that the rapid rise in the value of AI tech companies may be a bubble waiting to burst.
www.bbc.com
benpatrickwill.bsky.social
Once again, there is a good history of critical research on AI in education which both pre-dates ChatGPT and provides insights to inform how we should respond to AI in education now bsky.app/profile/benp...
benpatrickwill.bsky.social
A social sciences and humanities reading list on AI in education 🧵
benpatrickwill.bsky.social
Google, MSFT, even Amazon have been after education for years with cloud and platforms/apps, so my view is AI is the latest effort to attach education to big tech infrastructure. I struggle to see AI in education beyond this lens as it utterly depends on them, right?
benpatrickwill.bsky.social
I understand and appreciate efforts to work with AI in teaching and research for well-specified reasons and purposes but only so long as it's acknowledged AI in education is also and mainly a public problem for the sector that still needs addressing
codeactsineducation.wordpress.com/2024/02/22/a...
AI in education is a public problem
Photo by Mick Haupt on Unsplash Over the past year or so, a narrative that AI will inevitably transform education has become widespread. You can find it in the pronouncements of investors, tech ind…
codeactsineducation.wordpress.com
benpatrickwill.bsky.social
AI in education is unaccountable, opaque, and increasingly impervious to critique - you can't challenge it, you can't know what it does, you can't say it's wrong because it responsibilizes the user for its faults.

AI in education is something to avoid as much as you can.
benpatrickwill.bsky.social
AI in education is an investors' dream: education is a huge sector to lock in for big returns while their investments elsewhere are returning little

AI in education is speculative capitalism at full throttle
benpatrickwill.bsky.social
AI in education is a policy discourse without substance except economic speculation about "jobs for the future"

AI in education policy talks about "AI literacy" or "responsible AI" but doesn't consider the irresponsibilities of those promoting AI in education
benpatrickwill.bsky.social
AI in education amplifies student surveillance

AI in education makes surveillance companies into trusted educational providers

AI in education surveillance techniques always threaten to creep beyond their original scope of operations
benpatrickwill.bsky.social
AI in education amplifies and intensifies educational systems of product-centredness

AI in education reproduces the idea of de-skilled, casualized pedagogy where the computer is the primary reader of the curriculum/syllabus and the tutor plays a subsidiary role
benpatrickwill.bsky.social
AI in education centres entrepreneurs as experts in teaching and learning

AI in education is based mostly on technical potential not educational needs

AI in education locks learning into models that afford summarization instead of archives of knowledge
benpatrickwill.bsky.social
AI in education amplifies existing biases against marginalized or vulnerable groups

AI in education is not evidence-based but based on speculation and proof of concept claims
benpatrickwill.bsky.social
I don't really care if AI is useful/interesting/good for some things in education - besides those things, it is clearly already a big problem that maybe needs listing yet again:
benpatrickwill.bsky.social
I mean, I'm thousands of miles away but this looks like an algo-fascistic takeover of education from here, and there are plenty here who want to emulate your awful state of affairs too.
Reposted by Ben Williamson
michae.lv
Does your university have a contract with Grammarly? Write to the decision-maker asking if they think the university should be paying for a tool that is fast integrating features that can only be used for academic misconduct and cognitive offloading and request they drop the contract.
jedbrown.org
It is not "attribution and sourcing" to generate post-hoc citations that have not been read and did not inform the student's writing. Those should be regarded as fraudulent: artifacts testifying to human actions and thought that did not occur.
www.theverge.com/news/760508/...
For help with attribution and sourcing, Grammarly is releasing a citation finder agent that automatically generates correctly formatted citations backing up claims in a piece of writing, and an expert review agent that provides personalized, topic-specific feedback. Screenshot from Grammarly's demo of inserting a post-hoc citation.
https://www.grammarly.com/ai-agents/citation-finder
benpatrickwill.bsky.social
When kids in the UK were screwed over by a biased statistical model 5 years ago, it led to "Fuck the algorithm" protests and screeching government U-turns. But that model was explainable. AI is a biased black box. At some point it's going to produce a new scandal www.theguardian.com/commentisfre...
Why 'Ditch the algorithm' is the future of political protest | Louise Amoore
Students challenging the A-levels debacle have exposed the anti-democratic politics of predictive models, says Louise Amoore, a professor of political geography
www.theguardian.com
benpatrickwill.bsky.social
Now we have intentional political bias in the fine-tuning layer of LLMs, designed to restrict access to "woke" content, which I read as code for limiting social scientific and humanities styles of thinking. It's a serious imposition of political bias in education bsky.app/profile/benp...
benpatrickwill.bsky.social
The alarming aspect of this deliberate "anti-woke" algorithmic biasing of LLMs from an educational perspective is our institutions all bought in to an imaginary of innovation, then got locked in to enterprise contracts, and now the models are being recoded so they undermine educational values
marcusluther.bsky.social
Genuine question: for those enthusiastically pushing AI tools into every part of our education system—what checks/guardrails are there around algorithmic biases like this?
benpatrickwill.bsky.social
Another is the rapid rollout of AI writing detection tech that is biased against non-English native writers - which was well documented as a problem with Turnitin years ago hai.stanford.edu/news/ai-dete...
AI-Detectors Biased Against Non-Native English Writers | Stanford HAI
Don’t put faith in detectors that are “unreliable and easily gamed,” says scholar.
hai.stanford.edu
benpatrickwill.bsky.social
Now generative AI is producing new bias challenges. One is the underrepresentation of children with additional needs, which "may result in technologies that misunderstand, overlook or pathologise neurodivergent and disabled learners"
schoolsweek.co.uk/ai-bias-pose...
How AI bias could undermine inclusive education
The promise of personalised learning masks very real risks that demand careful attention from educators and policymakers
schoolsweek.co.uk
benpatrickwill.bsky.social
We have known that data and "AI" systems in education are prone to various biases for years so it is just inexcusable it's being ignored again now, especially by the teacher unions. A quick thread🧵
peterswimm.com
Right now, almost none. Most tools enter classrooms with zero independent bias audits.

Schools rely on vendor assurances and pilot anecdotes instead of accountability.