Jevan Hutson
@jevan.bsky.social
data privacy/cybersecurity attorney by day, tech law professor/clinic director by night. into data rights, not into data wrongs.
Reposted by Jevan Hutson
No, no it cannot. Looks like we need to re-up our paper @jevan.bsky.social
November 6, 2025 at 7:51 PM
Reposted by Jevan Hutson
“Terrible things are happening outside. Poor helpless people are being dragged out of their homes. Families are torn apart. Men, women, and children are separated. Children come home from school to find that their parents have disappeared.”

Diary of Anne Frank
January 13, 1943
October 4, 2025 at 8:17 PM
Reposted by Jevan Hutson
📣🚨NEW: ☁️ Big Cloud—Google, Microsoft & Amazon—control two thirds of the cloud compute market. They’re getting rich off the AI gold rush.

In new work with @nathanckim.bsky.social, we scrutinize Big Cloud's *investments* to show how they're expanding their empire… 🧵

📄PDF: dx.doi.org/10.2139/ssrn...
August 6, 2025 at 2:40 PM
Reposted by Jevan Hutson
I'm delighted to share that my article, AI and Doctrinal Collapse, is forthcoming in Stanford Law Review! Draft at papers.ssrn.com/sol3/papers.....
August 17, 2025 at 3:25 PM
Reposted by Jevan Hutson
Judge a society by how it treats its most vulnerable members
July 22, 2025 at 5:14 PM
Reposted by Jevan Hutson
Excellent scoop by @eileenguo.bsky.social! This is a timely reminder: always keep private data off public sites. If it's out there, it will likely be harvested. Prioritise your online privacy and think twice before oversharing.
#PII #Privacy #Cybersecurity
NEW FROM ME: new research has found millions of examples of personal info, including credit cards, passports, résumés, birth certificates, etc., in one of the largest web-scraped datasets used to train image generation AI models.

It's a major privacy violation.

www.technologyreview.com/2025/07/18/1...
A major AI training data set contains millions of examples of personal data
Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models.
www.technologyreview.com
July 19, 2025 at 1:38 AM
Reposted by Jevan Hutson
Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.
A major AI training data set contains millions of examples of personal data
Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models.
www.technologyreview.com
July 18, 2025 at 9:57 PM
Reposted by Jevan Hutson
Important new work from @jevan.bsky.social, Yoshi Kohno, & other UW & CMU coauthors.
NEW FROM ME: new research has found millions of examples of personal info, including credit cards, passports, résumés, birth certificates, etc., in one of the largest web-scraped datasets used to train image generation AI models.

It's a major privacy violation.

www.technologyreview.com/2025/07/18/1...
A major AI training data set contains millions of examples of personal data
Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models.
www.technologyreview.com
July 18, 2025 at 7:15 PM
Reposted by Jevan Hutson
My response to Ring's decision to roll back reforms and go back to surveillance on behalf of police as a business model.

This is part of a trend of companies bending over backwards to get in on the techno-authoritarian money/power grab that comes with being a lapdog of the carceral state.
Amazon Ring Cashes in on Techno-Authoritarianism and Mass Surveillance
Ring founder Jamie Siminoff is back at the helm of the surveillance doorbell company, and with him is the surveillance-first-privacy-last approach that made Ring one of the most maligned tech devices....
www.eff.org
July 18, 2025 at 4:14 PM
Reposted by Jevan Hutson
Another major AI privacy violation and, again, almost no way for injured individuals to obtain a remedy—or even to know who stole or misused their sensitive personal data.
NEW FROM ME: new research has found millions of examples of personal info, including credit cards, passports, résumés, birth certificates, etc., in one of the largest web-scraped datasets used to train image generation AI models.

It's a major privacy violation.

www.technologyreview.com/2025/07/18/1...
A major AI training data set contains millions of examples of personal data
Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models.
www.technologyreview.com
July 18, 2025 at 3:55 PM
Reposted by Jevan Hutson
The bottom line, says @willie-agnew.bsky.social, a postdoctoral fellow in AI ethics at Carnegie Mellon University and one of the coauthors, is that “anything you put online can [be] and probably has been scraped.”

The paper: arxiv.org/pdf/2506.17185
July 18, 2025 at 3:52 PM
Reposted by Jevan Hutson
NEW FROM ME: new research has found millions of examples of personal info, including credit cards, passports, résumés, birth certificates, etc., in one of the largest web-scraped datasets used to train image generation AI models.

It's a major privacy violation.

www.technologyreview.com/2025/07/18/1...
A major AI training data set contains millions of examples of personal data
Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models.
www.technologyreview.com
July 18, 2025 at 3:25 PM
Reposted by Jevan Hutson
Super excited and thankful to have Tech Review feature our work!
Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.
A major AI training data set contains millions of examples of personal data
Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models.
www.technologyreview.com
July 18, 2025 at 3:51 PM
Reposted by Jevan Hutson
Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found.
A major AI training data set contains millions of examples of personal data
Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models.
www.technologyreview.com
July 18, 2025 at 1:39 PM
Reposted by Jevan Hutson
"Men who sell machines that mimic people want us to become people who mimic machines. They want techno feudal subjects who will believe and do what they’re told. We, as people, are being strategically simplified. This is a fascist process."

organizingmythoughts.org/some-thought...
Some Thoughts on Techno-Fascism From Socialism 2025
"This is the endgame of our isolation."
organizingmythoughts.org
July 5, 2025 at 10:24 AM
Reposted by Jevan Hutson
New paper alert! In a collaboration between computer scientists and legal scholars, we find a significant amount of PII in a common AI training dataset and conduct a legal analysis showing that these issues put web-scale datasets in tension with existing privacy law. [🧵1/N] arxiv.org/abs/2506.17185
A Common Pool of Privacy Problems: Legal and Technical Lessons from a Large-Scale Web-Scraped Machine Learning Dataset
We investigate the contents of web-scraped data for training AI systems, at sizes where human dataset curators and compilers no longer manually annotate every sample. Building off of prior privacy con...
arxiv.org
June 30, 2025 at 9:15 PM
Reposted by Jevan Hutson
Forget Me Not? Machine Unlearning's Implications for Privacy Law: this paper explores the technical challenges and legal implications of effectively removing or suppressing personal #data from large, complex models.
http://spkl.io/63325fCCiz
@jevan.bsky.social @cedricwhitney.bsky.social
June 24, 2025 at 3:00 PM
Reposted by Jevan Hutson
If another world leader had threatened, in the middle of a bombing campaign, that a government "must make a deal, before there is nothing left" -- both scholars and the U.S. government would have labeled it a threat of genocide.
June 14, 2025 at 2:37 AM
Reposted by Jevan Hutson
ICE is going to parks in Los Angeles and kidnapping nannies who are caring for small children

The children are witnessing these kidnappings and being held in custody until their parents are contacted

Women and children in parks
June 14, 2025 at 12:55 AM
Reposted by Jevan Hutson
New from me, @gabrielgeiger.bsky.social + Justin-Casimir Braun:

Amsterdam believed that it could build a #predictiveAI for welfare fraud that would ALSO be fair, unbiased, & a positive case study for #ResponsibleAI. It didn't work.

Our deep dive into why: www.technologyreview.com/2025/06/11/1...
Inside Amsterdam’s high-stakes experiment to create fair welfare AI
The Dutch city thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can these programs ever be fair?
www.technologyreview.com
June 11, 2025 at 5:04 PM
Reposted by Jevan Hutson
“Thousands of newly obtained documents show” that the founders of Clearview AI—which is backed by Peter Thiel—“always intended to target immigrants & the political left. Now their digital dragnet is in the hands of the Trump admin.” This is incredible reporting by @lukeobrien.bsky.social (April 2025). 1/
The shocking far-right agenda behind the surveillance tech used by ICE and the FBI
Clearview AI’s founders always intended to target immigrants and the political left. Now their digital dragnet is in the hands of the Trump administration.
www.motherjones.com
June 1, 2025 at 5:47 PM
Reposted by Jevan Hutson
The Administration has arrested a judge in WI and a mayor in NJ, and is threatening to arrest federal lawmakers. In any other country we would say the strongman's security forces are locking up political opponents. We need to treat it exactly that way here. www.axios.com/2025/05/10/t...
Trump administration threatens to arrest House Democrats over ICE facility incident
"There will likely be more arrests coming," a Department of Homeland Security spokesperson said.
www.axios.com
May 10, 2025 at 7:48 PM
Reposted by Jevan Hutson
Elon Musk’s AI data centers polluting Black communities so that his Nazi followers can ask his chatbot questions is a perfect distillation of why AI is fascist technology
May 10, 2025 at 10:49 PM