46 min

Lawfare Daily: Peter Salib on AI Self-Improvement
The Lawfare Podcast

In foundational accounts of AI risk, the prospect of AI self-improvement looms large. The idea is simple. For any capable, goal-seeking system, the system’s goal will be more readily achieved if the system first makes itself even more capable. Having become somewhat more capable, the system will be able to improve itself again. And so on, possibly generating a rapid explosion of AI capabilities, resulting in systems that humans cannot hope to control.
Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, spoke with Peter Salib, who is less worried about this danger than many. Salib is an Assistant Professor of Law at the University of Houston Law Center and co-Director of the Center for Law & AI Risk. He recently published a white paper in Lawfare's ongoing Digital Social Contract paper series arguing that the very thing that makes it difficult for humans to align AI systems will also make AI systems hesitant to self-improve.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.
Support this show http://supporter.acast.com/lawfare.

Hosted on Acast. See acast.com/privacy for more information.

