Michiel de Lange
mdelange.bsky.social
Associate Professor New Media at Utrecht University | digital media and urban culture | [urban interfaces] | The Mobile City | ‘Inclusion in the datafied city’
 | http://blog.bijt.org | https://www.uu.nl/staff/MLdeLange
This was already the writing on the wall (mid-July 2025 news article): www.semafor.com/article/07/1...
Exclusive: Saudi considers NEOM job cuts, relocations amid cost pressures
More than 1,000 people may be moved and a further 1,000 laid off as part of a sweeping audit of the $500 billion project.
www.semafor.com
November 6, 2025 at 2:03 PM
Reposted by Michiel de Lange
The contracts usually prevent the council/PD/HOA/retailer from physically interacting *in any way* with these camera installations.

The Evanston, IL case is just one example of how untrustworthy #Flock is as a vendor, and why we should avoid or cancel contracts with them.
City covers Flock cameras while waiting for removal - Evanston RoundTable
A Flock camera on the south side of Emerson Street, east of McCormick Boulevard, is seen on Sept. 24 covered up by black plastic secured with tape to the
evanstonroundtable.com
November 2, 2025 at 7:00 PM
Yes, I agree. But what kinda bugs me is that this plea for EU tech sovereignty opens with 1. a reference to an American president, 2. immediately followed by a quote from an American scholar, 3. and is published on an American opinion platform. Makes 'European sovereignty' feel so... reactive.
October 17, 2025 at 1:23 PM
My colleagues Karin van Es & Dennis Nguyen have written an accessible and critical paper about visual GenAI imaginaries:
link.springer.com/article/10.1...
“Your friendly AI assistant”: the anthropomorphic self-representations of ChatGPT and its implications for imagining AI - AI & SOCIETY
This study analyzes how ChatGPT portrays and describes itself, revealing misleading myths about AI technologies, specifically conversational agents based on large language models. This analysis allows for critical reflection on the potential harm these misconceptions may pose for public understanding of AI and related technologies. While previous research has explored AI discourses and representations more generally, few studies focus specifically on AI chatbots. To narrow this research gap, an experimental-qualitative investigation into auto-generated AI representations based on prompting was conducted. Over the course of a month, ChatGPT (both in its GPT-4 and GPT-4o models) was prompted to “Draw an image of yourself,” “Represent yourself visually,” and “Envision yourself visually.” The resulting data (n = 50 images and 58 texts) was subjected to a critical exploratory visual semiotic analysis to identify recurring themes and tendencies in how ChatGPT is represented and characterized. Three themes emerged from the analysis: anthropomorphism, futuristic/futurism and (social)intelligence. Importantly, compared to broader AI imaginations, the findings emphasize ChatGPT as a friendly AI assistant. These results raise critical questions about trust in these systems, not only in terms of their capability to produce reliable information and handle personal data, but also in terms of human–computer relations.
link.springer.com
September 24, 2025 at 12:20 PM
With a chapter I co-authored with @ernaruijer.bsky.social and Krisztina Varro, “Governing the Digital Society: Platforms, Artificial Intelligence, and Public Values”
September 23, 2025 at 12:14 PM
Turkey tails?
September 17, 2025 at 4:09 PM