Ben Williamson
@benpatrickwill.bsky.social
6.4K followers 1.3K following 1.2K posts
Researching data, tech, futures, and biological sciences in education | Senior Lecturer and co-director at the Centre for Research in Digital Education | University of Edinburgh | Editor of Learning, Media and Technology @lmt-journal.bsky.social
Reposted by Ben Williamson
edtechequity.bsky.social
A super helpful anthology on the philosophical, ethical and pedagogical dilemmas posed by the disruptive influence of AI in education, by @unesco.org #Education #AI #Schools #Teachers #EdTech
unesdoc.unesco.org
benpatrickwill.bsky.social
"we are now seeing the emergence of the single platform solution, notably from big tech companies like Microsoft, with apps to cover all aspects of school life—from precise monitoring of student learning through to expense management and human resources" - great paper by @lucipangrazio.bsky.social
Reposted by Ben Williamson
eryk.bsky.social
New from me on productivity, AI, and the problems in how we measure it: “AI’s core function is completing goals as if they were a yes/no checkmark: having a text, rather than writing the text; having a numbered list of ideas rather than finding a good idea.” www.techpolicy.press/generative-a...
Generative AI’s Productivity Myth | TechPolicy.Press
People may be using artificial intelligence, but that doesn’t mean it’s useful, writes Eryk Salvaggio.
www.techpolicy.press
Reposted by Ben Williamson
samuelmoore.org
"Tasks that once took months of manual work — from curating datasets and checking compliance to creating metadata and publishable outputs — are now completed in minutes by the AI Data Steward"

Another case of commercial publishers looking to replace library labour with their junk AI.
90% of Science Is Lost: Frontiers’ revolutionary AI-powered service transforms data sharing to deliver breakthroughs faster
Frontiers, the open-science publisher, is tackling this problem with the launch of Frontiers FAIR² Data Management, the world's first all-in-one, AI-powered service...
www.frontiersin.org
Reposted by Ben Williamson
neilselwyn.bsky.social
Aidan Walker on why "now is *not* the time to ban phones ... why Jonathan Haidt sucks"

howtodothingswithmemes.substack.com/p/now-is-not...
If you take Haidt’s premise that phones and social media are hurting children as true, then you should question whether the right policy remedy is an intervention in the way teachers run their classrooms and what children are allowed to see and do online.

Why not fine the companies for endangering kids or create new rules they have to follow? Why not introduce competition into a monopolized market space, so that parents and kids have more choice in how to spend their time online? Why not put consumer safety standards on the algorithms, the software, the devices themselves? 

Why is the preferred tool to save a generation from anguish and our democracy from decline a patchwork of laws governing the decisions consumers can make, instead of a strategy to hold bad actors and industry to account?
benpatrickwill.bsky.social
I would like to learn more about grassroots developments if you have time to share what's going on there.
Reposted by Ben Williamson
justinhendrix.bsky.social
"Sceptics are privately - and some now publicly - asking whether the rapid rise in the value of AI tech companies may be, at least in part, the result of what they call 'financial engineering'. In other words - there are fears these companies are overvalued."
A tangled web of deals stokes AI bubble fears in Silicon Valley
Some are worried that the rapid rise in the value of AI tech companies may be a bubble waiting to burst.
www.bbc.com
benpatrickwill.bsky.social
Once again, there is a good history of critical research on AI in education which both pre-dates ChatGPT and provides insights to inform how we should respond to AI in education now. bsky.app/profile/benp...
benpatrickwill.bsky.social
A social sciences and humanities reading list on AI in education 🧵
benpatrickwill.bsky.social
Google, MSFT, even Amazon have been after education for years with cloud and platforms/apps, so my view is AI is the latest effort to attach education to big tech infrastructure. I struggle to see AI in education beyond this lens as it utterly depends on them, right?
benpatrickwill.bsky.social
I understand and appreciate efforts to work with AI in teaching and research for well-specified reasons and purposes, but only so long as it's acknowledged that AI in education is also, and mainly, a public problem for the sector that still needs addressing
codeactsineducation.wordpress.com/2024/02/22/a...
AI in education is a public problem
Photo by Mick Haupt on Unsplash Over the past year or so, a narrative that AI will inevitably transform education has become widespread. You can find it in the pronouncements of investors, tech ind…
codeactsineducation.wordpress.com
benpatrickwill.bsky.social
AI in education is unaccountable, opaque, and increasingly impervious to critique - you can't challenge it, you can't know what it does, you can't say it's wrong because it responsibilizes the user for its faults.

AI in education is something to avoid as much as you can.
benpatrickwill.bsky.social
AI in education is an investors' dream: education is a huge sector to lock in for big returns while little is being returned on their investments elsewhere

AI in education is speculative capitalism at full throttle
benpatrickwill.bsky.social
AI in education is a policy discourse without substance except economic speculation about "jobs for the future"

AI in education policy talks about "AI literacy" or "responsible AI" but doesn't consider the irresponsibilities of those promoting AI in education
benpatrickwill.bsky.social
AI in education amplifies student surveillance

AI in education makes surveillance companies into trusted educational providers

AI in education surveillance techniques always threaten to creep beyond their original scope of operations
benpatrickwill.bsky.social
AI in education amplifies and intensifies educational systems of product-centredness

AI in education reproduces the idea of de-skilled, casualized pedagogy where the computer is the primary reader of the curriculum/syllabus and the tutor plays a subsidiary role
benpatrickwill.bsky.social
AI in education centres entrepreneurs as experts in teaching and learning

AI in education is based mostly on technical potential not educational needs

AI in education locks learning into models that afford summarization instead of archives of knowledge
benpatrickwill.bsky.social
AI in education amplifies existing biases against marginalized or vulnerable groups

AI in education is not evidence-based but based on speculation and proof of concept claims
benpatrickwill.bsky.social
I actually don't really care if AI is useful/interesting/good for some things in education - besides these things, it is clearly already a big problem that maybe needs listing yet again:
Reposted by Ben Williamson
michae.lv
Does your university have a contract with Grammarly? Write to the decision-maker asking if they think the university should be paying for a tool that is fast integrating features that can only be used for academic misconduct and cognitive offloading and request they drop the contract.
jedbrown.org
It is not "attribution and sourcing" to generate post-hoc citations that have not been read and did not inform the student's writing. Those should be regarded as fraudulent: artifacts testifying to human actions and thought that did not occur.
www.theverge.com/news/760508/...
For help with attribution and sourcing, Grammarly is releasing a citation finder agent that automatically generates correctly formatted citations backing up claims in a piece of writing, and an expert review agent that provides personalized, topic-specific feedback. Screenshot from Grammarly's demo of inserting a post-hoc citation.
https://www.grammarly.com/ai-agents/citation-finder
benpatrickwill.bsky.social
When kids in the UK were screwed over by a biased statistical model 5 years ago, it led to "Fuck the algorithm" protests and screaming government u-turns. But that model was explainable. AI is a biased black box. At some point it's going to produce a new scandal www.theguardian.com/commentisfre...
Why 'Ditch the algorithm' is the future of political protest | Louise Amoore
Students challenging the A-levels debacle have exposed the anti-democratic politics of predictive models, says Louise Amoore, a professor of political geography
www.theguardian.com
benpatrickwill.bsky.social
Now we have intentional political bias in the fine-tuning layer of LLMs, designed to restrict access to "woke" content, which I read as code for limiting social scientific and humanities styles of thinking. It's a serious imposition of political bias in education bsky.app/profile/benp...
benpatrickwill.bsky.social
The alarming aspect of this deliberate "anti-woke" algorithmic biasing of LLMs, from an educational perspective, is that our institutions all bought into an imaginary of innovation, then got locked into enterprise contracts, and now the models are being recoded so they undermine educational values.
marcusluther.bsky.social
Genuine question: for those enthusiastically pushing AI tools into every part of our education system—what checks/guardrails are there around algorithmic biases like this?