Suresh Venkatasubramanian
@geomblog.bsky.social

Director, Center for Tech Responsibility @Brown. FAccT OG. AI Bill of Rights coauthor. Former tech advisor to President Biden @WHOSTP. He/him/his. Posts my own.

Suresh Venkatasubramanian is an Indian computer scientist and professor at Brown University. In 2021, Prof. Venkatasubramanian was appointed to the White House Office of Science and Technology Policy, advising on matters relating to fairness and bias in tech systems. He was formerly a professor at the University of Utah. He is known for his contributions in computational geometry and differential privacy, and his work has been covered by news outlets such as Science Friday, NBC News, and Gizmodo. He also runs the Geomblog, which has received coverage from the New York Times, Hacker News, KDnuggets and other media outlets. He has served as associate editor of the International Journal of Computational Geometry and Applications and as the academic editor of PeerJ Computer Science, and on program committees for the IEEE International Conference on Data Mining, the SIAM Conference on Data Mining, NIPS, SIGKDD, SODA, and STACS.

At the beginning of November, Kate and I met to think more about how to continue the work from our Tech Policy Press blog post on AI sovereignty.

She mentioned she’d been reading Laleh Khalili's work on oil in the Middle East and that she couldn’t get AI out of her head.
'Sovereignty' Myth-Making in the AI Race
Tech companies stand to gain by encouraging the illusion of a race for 'sovereign' AI, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.
www.techpolicy.press
ICE agents detain yet another minor child, chasing him down his own street and tackling him to the pavement.

"I am legal! I am legal!" he cries over and over in Spanish.

An agent pins the boy down in the snow as he desperately tries to keep his exposed hands from freezing in a -25°F wind chill.

Minneapolis, MN
Congrats to @bhargaviganesh.bsky.social for passing her viva today with flying colours! It’s been a joy to work with you and Stuart - we at the @technomoralfutures.bsky.social will miss you dearly. Looking forward to your work with the AI Accountability Lab and all that you will do in the future!

Actually, I'm not the author of "The Ethical Algorithm".

Hello Void

oh duh. stupid me.

how did you figure out that this was fake?

That's reasonable. Although if the system for detecting fake cites has its own errors, at least sending back the report of errors should be a step, along with an appeal process, because you are accusing someone of academic misconduct.

There are two parts to this: identifying the issue, and deciding appropriate actions to take. I think @ccanonne.github.io's point about the tools is about identification. Your point is about actions to take. They are not incompatible, no?

yeah that's a good use case.

No, I mean the DBLP suggestion and conference reviewing, i.e., building the code to do reliable flagging with manageable false positive/false negative rates.

10 lines of code might handle 80% of the cases, but handling the remaining 20% would probably require many, many more lines of code.
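The 80/20 point can be made concrete with a hypothetical sketch: fuzzy title lookup against a bibliographic index. Everything here is illustrative, not a real tool: `KNOWN_TITLES` is a two-entry stand-in for an actual DBLP snapshot, and the 0.9 threshold is an arbitrary knob for trading false positives against false negatives.

```python
from difflib import SequenceMatcher

# Stand-in for a real bibliographic index (e.g., a local DBLP title dump).
KNOWN_TITLES = {
    "attention is all you need",
    "deep residual learning for image recognition",
}

def best_match(title, index):
    """Return the highest fuzzy-match ratio between `title` and any indexed title."""
    t = title.lower().strip()
    return max((SequenceMatcher(None, t, k).ratio() for k in index), default=0.0)

def flag_suspect_citations(titles, index, threshold=0.9):
    """Flag titles whose best index match falls below `threshold`.

    A higher threshold catches more fabrications (fewer false negatives)
    at the cost of flagging real-but-mistyped titles (more false positives).
    """
    return [t for t in titles if best_match(t, index) < threshold]

suspects = flag_suspect_citations(
    [
        "Attention Is All You Need",
        "A Unified Theory of Quantum Gradient Descent",  # fabricated-looking title
    ],
    KNOWN_TITLES,
)
```

The easy 80% is roughly this: exact or near-exact misses against a clean index. The hard 20% is everything the sketch ignores: venue mismatches, preprints absent from the index, renamed papers, and non-English titles. That gap is also why flagging should feed a report-and-appeal step rather than an automatic misconduct finding.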
What is wild to me is the defense, BY THE NEURIPS BOARD, that fabricated citations do not mean "the content of the papers themselves [is] necessarily invalidated"

It does. It very much does. What do you think citing other work is for? What do you think writing a paper is for? What do you *think*?
NEW: NeurIPS, one of the world’s top academic AI conferences, accepted research papers with 100+ AI-hallucinated citations, a new report claims

fortune.com/2026/01/21/n...
NeurIPS papers contained 100+ AI-hallucinated citations, new report claims | Fortune
An analysis of NeurIPS 2025 papers by startup GPTZero reveals how AI-generated citations are slipping into elite academic research.
fortune.com
Those of us who have worked on progressive domestic policy at the federal level are used to negotiating with a seemingly immovable NSC apparatus that often invokes ambiguous or unspecifiable national security concerns as a reason to kill any number of civil/human rights priorities.
I worked in a Democratic presidential administration and, even then, DHS was very often the most problematic, obstinate part of negotiating a civil/human rights provision into executive branch policy.

I have been thinking about this a lot in recent weeks and want to offer a short 🧵 with reflections:
When I was at the White House my team wrote policy that specifically addressed this - the use of often discriminatory facial recognition tools in law enforcement contexts or other areas where civil rights were in play. The image below was the worst case scenario - what we were trying to avoid. 1/3

What is .... an algorithm?
Things we hoped for when we wrote the A.I. Bill of Rights under @alondra.bsky.social at the White House in 2022: new legislation, executive action, holding tech companies accountable

Things that we *literally* would not have believed: a Jeopardy! clue in 2026

@friedler.net @geomblog.bsky.social

This is a sharp and perceptive point.
🧵 Trump administration AI policy is widely described as deregulatory. This description is misleading. What's happening is not the absence of governance but its rearrangement--intensive state intervention operating through mechanisms we don't typically call regulation. www.science.org/doi/10.1126/...
The mirage of AI deregulation
One of the most interventionist approaches to technology governance in the United States in a generation has cloaked itself in the language of deregulation. In early December 2025, President Donald Tr...
www.science.org

The article is amazing. I strongly recommend reading the whole thing for anyone interested in scaling properties. The thoughtfulness of the scientific process is refreshing (sadly, because it should be the norm for ML papers)
This post on scale discontinuities by @ericjmichaud.bsky.social has my best citation yet ericjmichaud.com/quanta/
The Rolling Pebbles
Stationary stone?

Led Balloon
Slightly diminish a band:

Jefferson Kite
Slightly diminish a band:

They Might Be Unusually Large
Suspect arrested in predawn fire that left parts of Mississippi’s largest synagogue in charred ruins

A fire heavily damaged Jackson’s only synagogue before dawn Saturday – the same house of worship that was firebombed by the Ku Klux Klan in 1967 because the rabbi had been an advocate for civil rights.
mississippitoday.org
Slightly diminish a band

Something Something
ACLU @aclu.org · 13d
We released a new report in partnership with the Center for Tech Responsibility at Brown University on how policymakers and researchers can better analyze AI legislation to protect our civil rights and liberties.
Making Sense of AI Policy Using Computational Tools | TechPolicy.Press
A new report examines how to use computational tools to evaluate policy, with AI policy as a case study.
www.techpolicy.press