Peter N. Salib
@petersalib.bsky.social
Assistant Professor, University of Houston Law Center

AI, Risk, Constitution, Economics
the legal system should organize AGI labor is of great importance.

Our proposal: Do what has always worked before. Let all workers, human and AI, own their labor, make contracts to sell it, and keep the proceeds.

Not for the sake of AIs, but for the sake of global human flourishing.
AI Rights for Human Flourishing
AI companies are racing to create Artificial General Intelligence (AGI): AI systems that outperform humans at most economically valuable work.
papers.ssrn.com
July 29, 2025 at 4:12 PM
been feudal lords, encomenderos, slaveholders, and so on. In the AGI economy, the elite owners will be AI companies and their investors.

If, as many believe, the advent of AGI--AIs that can do most jobs humans can--*could* deliver rapid economic progress and material abundance, the question of how
July 29, 2025 at 4:09 PM
disastrous for almost everyone living under them. A wealth of economic evidence shows that they substantially slow growth, impoverishing ordinary workers, whether free or unfree.

Unfree labor systems benefit only the elite class who own substantial numbers of laborers. Historically, those have
July 29, 2025 at 4:06 PM
To be clear, our argument is not that a labor system based on the ownership of (AI) laborers will be the *moral* equivalent of systems based on the ownership of humans!

Rather, we argue that the systems will have similar economic effects. In short, systems of unfree labor are economically
July 29, 2025 at 4:04 PM
caught up to the frontier.

If the joint lab couldn't clear the bottleneck, we think that it would also serve as a credible scientific authority to both the US and China around which a more coordinated global pause could be built.

Much more in the full draft: papers.ssrn.com/sol3/papers....
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5369439
July 29, 2025 at 3:57 PM
others, and it hit a new level of capabilities (and misalignment) where advanced rogue systems became a serious threat, *it* could pause capabilities progress and go all-in on clearing the alignment bottleneck. The frontier lab would have 1 year to do so before others
July 29, 2025 at 3:57 PM
capabilities parity (and thus deterrence) all the way up the AI capabilities ladder.

2) For AI safety, the joint lab would, essentially automatically, function as a global "pause" button on frontier capabilities advancement. If the joint lab was, e.g., 1 year ahead of all
July 29, 2025 at 3:57 PM
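To make the pause arithmetic in the posts above concrete, here is a minimal sketch. All numbers are illustrative assumptions: the 12-month lead comes from the thread's example, while the follower pace and alignment-work figures are hypothetical.

```python
# Toy sketch of the joint lab's "pause button" window.
# Illustrative assumptions only; not a model from the paper.

lead_months = 12       # joint lab's head start (thread's "1 year ahead" example)
follower_pace = 1.0    # follower progress, in frontier-months per calendar month
alignment_work = 10    # hypothetical months needed to clear the alignment bottleneck

# If the joint lab freezes capabilities at a dangerous threshold, followers
# reach that same threshold once they close the lead at their own pace.
pause_window = lead_months / follower_pace

print(f"pause window before others catch up: {pause_window:.0f} months")
if alignment_work <= pause_window:
    print("bottleneck cleared within the window")
else:
    print("window too short -> fall back to coordinating a broader global pause")
```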
the most compute, hire the best researchers, and (we think) have an excellent chance of becoming the leading AI lab in the world. This would have two effects:

1) On geostrategy, this lab would diffuse the most advanced AI systems to the US and China simultaneously, ensuring
July 29, 2025 at 3:56 PM
How to operationalize this while also reducing catastrophic/existential risk from AI? Our proposal:

The US and China should make an agreement to jointly found a frontier AI lab. Backed by the sovereign wealth and power of the two most powerful countries on earth, that lab could buy
July 29, 2025 at 3:56 PM
But the same AIs needed for advanced military applications will also likely be excellent at improving healthcare, education, research, and much more.

Here, there is no guns/butter tradeoff. The guns *are* the butter.

Thus, game theory favors equilibria of *high* capabilities.
July 29, 2025 at 3:56 PM
In nuclear competition, equilibria of *low* capabilities (e.g., 6K warheads per side, rather than 60K) are attractive b/c of the guns/butter tradeoff. Nukes are expensive, and they have few positive spillovers to the rest of the economy. They don't, e.g., improve healthcare.
July 29, 2025 at 3:56 PM
One thing from nuclear game theory that *does* apply to AI is the idea that what matters most is rough parity of capabilities (for second-strike deterrence), rather than the total number of warheads (or total AI capability).

But there are many possible equilibria of parity.
July 29, 2025 at 3:55 PM
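To make the equilibrium argument in the three posts above concrete, here is a minimal toy game, sketched under illustrative assumptions (all payoff numbers are hypothetical, not from the thread or paper): each of two states picks Low or High capability; High costs something to build, yields an economic spillover, and falling behind the rival carries a large deterrence penalty.

```python
# Toy 2x2 capabilities game. Illustrative numbers only.
from itertools import product

def payoff(mine, theirs, spillover, cost):
    """One side's payoff from choosing Low ('L') or High ('H') capability."""
    exposed = -10 if (mine == "L" and theirs == "H") else 0  # parity/deterrence lost
    return (spillover - cost if mine == "H" else 0) + exposed

def nash_equilibria(spillover, cost):
    levels = ("L", "H")
    eqs = []
    for a, b in product(levels, levels):
        # (a, b) is a Nash equilibrium if neither side gains by deviating
        a_ok = all(payoff(a, b, spillover, cost) >= payoff(d, b, spillover, cost) for d in levels)
        b_ok = all(payoff(b, a, spillover, cost) >= payoff(d, a, spillover, cost) for d in levels)
        if a_ok and b_ok:
            eqs.append((a, b))
    return eqs

# Nukes: no positive spillover, real cost -> both (L,L) and (H,H) are parity
# equilibria, and (L,L) is cheaper, so arms control can aim for the Low one.
print("nukes:", nash_equilibria(spillover=0, cost=3))   # [('L','L'), ('H','H')]

# AI: spillovers exceed cost ("the guns *are* the butter") -> High strictly
# dominates, and (H,H) is the only equilibrium of parity.
print("AI:   ", nash_equilibria(spillover=5, cost=3))   # [('H','H')]
```

Under these (made-up) numbers, parity holds in every equilibrium, but only the nuclear version has a cheap Low-Low equilibrium to coordinate on.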
Most critics of an AI arms race advocate international coordination to *slow* AI progress. They rely on analogies to Cold War nonproliferation and disarmament agreements.

We argue that there are important differences between AI and nukes that make such strategies hard.
July 29, 2025 at 3:55 PM
RAISE Act are extremely reasonable first steps towards mitigating that risk. I would, of course, favor a single, well-designed federal regime over a patchwork of state regs. But if the feds want to do that, they can. The ban was no substitute for actually doing something.
July 1, 2025 at 6:38 PM