Leading the Civic and Responsible AI Lab @civicandresponsibleai.com
OpenAI’s model said it was “acceptable” for a robot to wield a kitchen knife to intimidate workers in an office & to take non-consensual photographs of a person in the shower.
www.cnnbrasil.com.br/tecnologia/r...
For more info check our paper: doi.org/10.1007/s123...
Researchers from @kingsnmes.bsky.social & @cmu.edu evaluated how robots that use large language models (LLMs) behave when they have access to personal information.
www.kcl.ac.uk/news/robots-...
- analyse 7 major roboethics frameworks, identifying gaps for the Global South
- propose principles to make AI robots culturally responsive and genuinely empowering
doi.org/10.1007/s123...
We find LLMs are:
- Unsafe as decision-makers for HRI
- Discriminatory in facial expression, proxemics, security, rescue, task assignment...
- Unable to protect against dangerous, violent, or unlawful uses
doi.org/10.1007/s123...
doi.org/10.5281/zeno...
We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Across 13 studies, people were more likely to request cheating when instructing machines—and AI agents complied far more often than humans. Co-first authored by ARC's Zoe Rahwan.
www.nature.com/articles/s41...
This is way worse than even the NYT article makes it out to be
OpenAI absolutely deserves to be run out of business
Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy
arxiv.org/abs/2507.03168
"Should Delivery Robots Intervene if They Witness Civilian or Police Violence? An Exploratory Investigation"
mirrorlab.mines.edu/publications...
(🧵)
"Should Delivery Robots Intervene if They Witness Civilian or Police Violence? An Exploratory Investigation"
mirrorlab.mines.edu/publications...
(🧵)
WEBSITE: ras4rasm.github.io
arxiv.org/abs/2504.10708
AI can create power imbalances. Legal contestation is sometimes the only way to restore rights.
Find out more about the opportunities at King's: www.kcl.ac.uk/jobs/role/ki...
www.aljazeera.com/news/2025/3/...
www.theguardian.com/world/2025/m...
👇🏾 my thoughts & reflections on what AI in public interest is/isn’t & some concrete steps/initiatives for ‘bending the arc of AI towards the public interest’ aial.ie/pages/aiparis/
Microsoft: *sells GenAI aggressively into the Education 365 packages*
advait.org/files/lee_20...