www.lesswrong.com/posts/csdn3e...
forms.gle/xcfgBNmaP7Wk...
Co-organized with @kulveit.bsky.social, @scasper.bsky.social, Raymond Douglas, and Maria Kostylew
Atoosa Kasirzadeh of CMU on "Taking post-AGI human power seriously"
Deger Turan, CEO of Metaculus, on "Concrete Mechanisms for Slow Loss of Control"
Anna Yelizarova of Windfall Trust on "What would UBI actually entail?"
Ivan Vendrov of Midjourney on "Supercooperation as an alternative to Superintelligence"
Anton Korinek on the Economics of Transformative AI
Alex Tamkin of Anthropic on "The fractal nature of automation vs. augmentation"
Anders Sandberg on "Cyborg Leviathans and Human Niche Construction"
“Democracies are still quite young, and were made possible only by technologies that made liberal, pluralistic societies globally competitive. We’re fortunate to have lived through this great confluence of human flourishing and state power, but we can’t take it for granted.”
"Liberalism's goal is to avoid the value alignment question, and to mostly avoid the question of who should control society, but AGI/ASI makes the question unavoidable for your basic life."
www.lesswrong.com/posts/onsZ4J...
"Liberalism's goal is to avoid the value alignment question, and to mostly avoid the question of who should control society, but AGI/ASI makes the question unavoidable for your basic life."
www.lesswrong.com/posts/onsZ4J...
www.post-agi.org
Co-organized with Raymond Douglas, Nora Ammann,
@kulveit.bsky.social, and @davidskrueger.bsky.social
- Do Malthusian conditions necessarily make it hard to preserve uncompetitive, idiosyncratic values?
- What empirical evidence could help us tell which trajectory we’re on?
- Could alignment of single AIs to single humans be sufficient to solve global coordination problems?
- Will agency tend to operate at ever-larger scales, multiple scales, or something else?
- What future trajectories are plausible?
- What mechanisms could support long-term legacies?
- New theories of agency, power, and social dynamics.
- AI representatives and new coordination mechanisms.
- How will AI alter cultural evolution?
We'll draw from political theory, cooperative AI, economics, mechanism design, history, and hierarchical agency.