Peter N. Salib
@petersalib.bsky.social
Assistant Professor, University of Houston Law Center

AI, Risk, Constitution, Economics
Pinned
Enjoyed the recent @80000hours.bsky.social episode w/ @tobyord.bsky.social. Agree that AI policy researchers should dream bigger on societal Qs. Simon Goldstein and I have been working on one of Toby's big questions: Should the AGI economy be run like a slave society (as it will under default law)?
July 29, 2025 at 4:02 PM
The WH's AI Action Plan has some good stuff. But it begins, "The US is in a race to achieve global dominance in AI."

Like many, @simondgoldstein and I think that an AI arms race w/ China is a mistake.

Our new paper lays out a novel game-theoretic approach to avoiding the race.
July 29, 2025 at 3:55 PM
I'm on balance relieved that the federal ban on state-level AI regulation is dead. I do expect many state laws to be dumb and tech-illiterate. But government also needs to take seriously the warnings that advanced AI systems could kill large numbers of people. Bills like NY's...
July 1, 2025 at 6:38 PM
This First Amendment ruling is correct: As I argue in @WashULRev, the outputs of generative AI systems like LLMs are not protected speech. Not of the AI company. Not of the user. Read more here! papers.ssrn.com/sol3/papers....

www.law.com/therecorder/...
In Lawsuit Over Teen's Death, Judge Rejects Arguments That AI Chatbots Have Free Speech Rights
The judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before it launches products to market," said attorney Meetali Jain of the Tech Justice Law Project.
May 23, 2025 at 4:31 PM
Reposted by Peter N. Salib
Very important point raised by @petersalib.bsky.social and Simon Goldstein regarding AI risk and alignment:

www.ai-frontiers.org/articles/tod...
May 21, 2025 at 4:33 PM
Which US Constitutional or Canon laws, if any, forbid someone from being simultaneously Pope and the US President?

Asking for a friend.

x.com/TahraHoops/s...
Tahra Hoops on X: "New Pope is abundance-pilled https://t.co/jnSFxcmNR3" / X
May 8, 2025 at 6:02 PM
AGI is, I think, the most important thing that could happen in the next 4 years. Yes, even more than the other insane stuff. I wish more legal thinkers engaged seriously with the prospect of world-shattering AI. Law can’t fix all of the problems alone. But it can help.
Today’s episode of The Ezra Klein Show.

The Biden Administration’s A.I. adviser Ben Buchanan discusses how the US Government is preparing for artificial general intelligence — and all the challenges that remain.
open.spotify.com/episode/6u7l...

youtu.be/Btos-LEYQ30?...
The Government Knows AGI is Coming | The Ezra Klein Show
March 5, 2025 at 2:01 AM
Pleased to share that my (and Simon Goldstein's) newest article, "AI Rights for Human Safety," is forthcoming in the Virginia Law Review.
March 4, 2025 at 5:10 PM
When the authors of the AGI-denialist "stochastic parrots" paper publish "Fully Autonomous AI Agents Should Not Be Developed," you should start to worry that AGI really is imminent.

When their main argument is that AGI will kill people, you should worry more.
February 7, 2025 at 6:01 PM
Reposted by Peter N. Salib
In light of OpenAI’s new o3 model, @petersalib.bsky.social writes that "rogue AI is a concern worth taking seriously—and taking seriously now. This is a problem for which, by its very nature, solutions cannot wait until there is conclusive proof of their need."
Rogue AI Moves Three Steps Closer
OpenAI’s new o3 model suggests that it will not be long before AI systems are as smart as their human minders—or smarter.
www.lawfaremedia.org
January 9, 2025 at 6:23 PM
Very important findings here. AIs exhibit deceptive behavior to avoid having their goals impeded. Earlier this week, a similar paper was criticized as unrealistic b/c it "nudged" the AI toward misbehavior. This paper does not do that, but gets similar results.

assets.anthropic.com/m/983c85a201...
December 19, 2024 at 4:06 PM
ProPublica reporting this morning supplies some pretty strong evidence for this account. @ggkrishnamoomoo.bsky.social and I analyze the net effects that paying Justices not to retire will have on the Court as an institution.

www.propublica.org/article/clar...
Sharing @petersalib.bsky.social's and my draft essay Justices on Yachts: A Value Over Replacement Theory papers.ssrn.com/sol3/papers.cf…

Very preliminary draft, comments welcome!
December 18, 2023 at 4:53 PM