#NCII
@berinszoka.bsky.social and I wrote about this in the context of the Grok debacle, where defenders of children stayed silent as indefensible acts took place and Elon tried to normalize them.

This one covers CG-CSAM & NCII law and govt responses, too.

www.lawfaremedia.org/article/grok...
Grok, ‘Censorship,’ & the Collapse of Accountability
Grok’s nudification scandal shows how “free speech” rhetoric is being used to obscure any ethical responsibility for real-world harm.
www.lawfaremedia.org
January 31, 2026 at 12:06 AM
And, of course, let's not let Amazon take up the entire spotlight:

*Hey X and xAI, are you even bothering to file NCMEC reports after you decided to let Grok become the massive-scale deepfake CSAM/NCII machine?
*Hey NCMEC, how many CyberTipline reports have X Corp. & xAI filed since December 2025?
January 30, 2026 at 2:13 AM
Turns out, there are a TON of image/video AI models hosted on CivitAI with dogwhistles for NCII and/or CSAM in their names. 👀

Max Kamachee and I just updated our "Video Deepfake Abuse" paper with this new fig:

🔗 papers.ssrn.com/sol3/papers....
January 30, 2026 at 9:43 PM
[Repeating this]
Don't treat "CSAM" as a trendy way to avoid saying "child pornography" outright.
"Child Sexual Abuse Material" is explicitly distinct from "child pornography."
www.missingkids.org/theissues/csam

Don't treat "NCII" as a trendy way to avoid saying "revenge porn" outright, either.
Even a non-sexual deepfake, like one staged to show you "being close with your fave ♡", counts as NCII.
stopncii.org
Stop Non-Consensual Intimate Image Abuse | StopNCII.org
StopNCII.org is operated by the Revenge Porn Helpline which is part of SWGfL, a charity that believes that all should benefit from technology, free from harm.
stopncii.org
January 25, 2026 at 7:30 PM
NCII stands for "non-consensual intimate imagery."
The term was defined to cover image-based evidence staged to look as if everything happened consensually, when the subject was in fact coerced or in a situation where they could not refuse. So please don't sloppily flatten it into "non-consensual sexual images."

CSAM, likewise, stands for "Child Sexual Abuse Material" and is explicitly defined as distinct from "child pornography," so people who conflate the two should stop, even if they are calling for generative-AI regulation.

Both camps, those who accept the status quo and those who reject it, conflate and misuse these terms, and that is what is truly harmful.
January 25, 2026 at 5:05 PM
Both NCII and CSAM exist "to keep protecting people's dignity, so that everyone in the world is fully guaranteed the basic human rights to pursue happiness, to live, and to exercise free will." Neither is a term for "eradicating indecent, shameless obscenity from the world."

Please remember that the problem is that "your dignity is being threatened and defiled somewhere without your knowledge."

Persuading people of this is genuinely hard, which is why delicate word choice is needed.

As it stands, the conversation gets cut off with a crude, tactless throwaway line, "well, smut is bad, right?", and that stigmatization just continues.
January 25, 2026 at 7:18 PM
Let me say this plainly: when CSAM was detected in LAION-5B, the people who spread the story as "child porn found in an AI dataset" (and who still do) were wrong, however righteous their motives, and I am angry about it.

It is doubly problematic as agitation that exploits sexual disgust toward pornography.

Think hard about what it means that NCII includes media where the victim (adults included) is threatened before and after with "this was 'consensual,' right?", while the video or audio record is arranged so the coercion never appears in it.
January 25, 2026 at 5:11 PM
Consider that with deepfakes, what becomes a problem at the NCII stage, even more than porn production, is the ability to fabricate a public image such as "this actor is lecherous; married or not, he routinely hugs and kisses female fans as fan service."

It is not just stripping someone naked or fabricating sex acts: "evidence footage of him playfully groping someone's backside" and the like can destroy a person's social standing without being pornography at all.

Please realize that the tendency to crudely lump these delicate discussions under "being sexual" is itself dangerous.
January 25, 2026 at 5:25 PM
Predictably, every mass-media outlet and the consensus on social media have fixated on just two stigmas, "naked children" and "consuming human beings as sexual entertainment," and are trying to close the issue out with a wrong interpretation and an easy fix.

Records of sexual abuse can be made, and do exist, without ever stripping a child naked. And that is exactly how de facto sexual exploitation gets reproduced under the cover of "this version is safe."

With NCII, too, people still fail to see that even "an AI image of my favorite actor kissing me" is flat-out over the line.

That is the huge problem.
January 25, 2026 at 5:17 PM
I'm over a week late in reading this, but huge kudos to Charlie Warzel and Matteo Wong for doing the thing nobody else seemed to be doing, which is asking xAI's investors why they're giving money to the deepfake NCII/CSAM machine (spoiler alert: no answers): www.theatlantic.com/technology/2...
Elon Musk Cannot Get Away With This
If there is no red line around AI-generated sex abuse, then no line exists.
www.theatlantic.com
January 23, 2026 at 12:15 AM
35 Attorneys General are demanding that xAI take further action to prevent Grok from creating sexualized images, which have included images of children.
January 23, 2026 at 8:45 PM
Five clear trends emerged:
• Advancing AI literacy for students & teacher training
• Creating guidance on responsible AI use (privacy, transparency, security)
• Establishing studies & task forces
• Prohibiting high-risk AI uses
• Addressing AI-generated deepfake NCII in schools
January 23, 2026 at 7:28 PM
"one of the first"?!
I've been researching this for multiple years; it's been happening to children in schools across the US for years, with multiple news segments and court cases.
NCII (Non-Consensual Intimate Imagery) is a massive industry across the world and has been for several years.
January 22, 2026 at 10:19 PM
Without meaningful pushback from regulators and users, Grok’s example risks setting a permissive precedent—signaling to other nudification apps and platform companies that AI-generated NCII can be treated as funny, harmless, or otherwise tolerable, writes Kaylee Williams.
Grok Supercharges the Nonconsensual Pornography Epidemic
Without meaningful pushback from regulators and users, Grok’s example risks setting a dangerously permissive precedent, writes Kaylee Williams.
buff.ly
January 18, 2026 at 1:48 PM
We've also literally just seen the US put tariffs on Europe because European leaders are reacting to threats of invasion against one of their neighbours and are starting to take action against X for being a CSAM/NCII producer.
January 18, 2026 at 7:24 AM
Pointed out just some of the many hypocrisies (not to mention stupidities) of the US gov't threatening the UK for opening an investigation into Grok generating NCII.
January 15, 2026 at 10:05 PM
So! Now we have a famous plaintiff's-side NCII lawyer suing one AI company over its image generator's nonconsensual deepfake porn of the plaintiff, and she's married to a famous 1A lawyer defending the developer of a different AI tool for making nonconsensual deepfake porn of a different plaintiff.
wait I'm sorry **Carrie's husband** is Randazza‽‽
January 16, 2026 at 1:43 AM
I’m really happy to say that #portmoody council voted last night to get off X. Governments should not be participating in a platform that makes and distributes CSAM and NCII.

thetyee.ca/Opinion/2026...
Musk’s Grok Is Abusing Women and Children. Our Government Needs to Act | The Tyee
Where are Canadian lawmakers? Oh, they’re on X.
thetyee.ca
January 14, 2026 at 5:57 PM
California DOJ is the latest to initiate a formal investigation of xAI and its Grok issues - the AI chatbot and image generator enabled widespread creation and publication of deepfake NCII & CSAM (nonconsensual intimate images, and child sexual abuse material). www.cnbc.com/2026/01/14/e...
Elon Musk's xAI probed by California DOJ over Grok's deepfake explicit images
Musk's Grok AI chatbot faces investigations from India, Malaysia, Indonesia, Ireland, Australia as well.
www.cnbc.com
January 14, 2026 at 9:25 PM
I really only know AI-CSAM law but am getting unwillingly dragged into NCII law (where 1A issues make regulating harder). For familiarity with state regs, the usual suspects are folks like Carrie Goldberg, Mary Anne Franks, @daniellecitron.bsky.social, or Erica Johnstone at Ridder Costa Johnstone.
January 14, 2026 at 6:35 PM
What I appreciate most about @dwillner.bsky.social and @samidh.bsky.social is their tendency towards action. A very timely labeler and prompt to detect non-consensual intimate imagery (NCII) solicitation:

zentropi.ai/labelers/eb6...
Zentropi - Build Custom Content Labelers Instantly
zentropi.ai
January 13, 2026 at 11:56 PM