AI Safety Fundamentals: Governance - BlueDot Impact
Listen to resources from the AI Safety Fundamentals: Governance course! https://aisafetyfundamentals.com/governance
Future Risks of Frontier AI
This report from the UK’s Government Office for Science envisions five potential risk scenarios from frontier AI. Each scenario includes information on the AI system’s capabilities, ownership and access, safety, level and distribution of use, and geopolitical context. It provides key policy issues for each scenario and concludes with an overview of existential risk. If you have extra time, we’d recommend you read the entire document.
Original text:
https://assets.publishing.service.gov.uk/media/653bc393d10f3500139a6ac5/future-risks-of-frontier-ai-annex-a.pdf
Author:
The UK Government Office for Science
What risks does AI pose?
This resource, written by Adam Jones at BlueDot Impact, provides a comprehensive overview of the existing and anticipated risks of AI. As you're going through the reading, consider what different futures might look like should different combinations of risks materialize.
Original text:
https://aisafetyfundamentals.com/blog/ai-risks/
Author:
Adam Jones
AI Could Defeat All Of Us Combined
This blog post from Holden Karnofsky, Open Philanthropy’s Director of AI Strategy, explains how advanced AI might overpower humanity. It summarizes the standard superintelligent-takeover arguments and presents a scenario in which human-level AI disempowers humans without ever achieving superintelligence. As Karnofsky summarizes: “if there's something with human-like skills, seeking to disempower humanity, with a population in the same ballpark as (or larger than) that of all humans, we've got a civilization-level problem.”
Original text:
https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#the-standard-argument-superintelligence-and-advanced-technology
Author:
Holden Karnofsky
Moore's Law for Everything
This blog post by Sam Altman, the CEO of OpenAI, provides insight into what AI company leaders are saying and thinking about their reasons for pursuing advanced AI. It lays out how Altman thinks the world will change because of AI and what policy changes he believes will be needed in response.
As you’re reading, consider Altman’s position and how it might affect the way he discusses this technology or his policy recommendations.
Original text:
https://moores.samaltman.com
Author:
Sam Altman
The Transformative Potential of Artificial Intelligence
This paper by Ross Gruetzemacher and Jess Whittlestone examines the concept of transformative AI, which significantly impacts society without necessarily achieving human-level cognitive abilities. The authors propose three categories of transformation: Narrowly Transformative AI, affecting specific domains like the military; Transformative AI, causing broad changes akin to general-purpose technologies such as electricity; and Radically Transformative AI, inducing profound societal shifts comparable to the Industrial Revolution.
Note: this resource uses “GPT” to refer to general purpose technologies, which the authors define as “a technology that initially has much scope for improvement and eventually comes to be widely used.” Keep in mind that this is a different term from a generative pre-trained transformer (GPT), which is a type of large language model used in systems like ChatGPT.
Original text:
https://arxiv.org/pdf/1912.00747.pdf
Authors:
Ross Gruetzemacher and Jess Whittlestone
The Economic Potential of Generative AI: The Next Productivity Frontier
This report from McKinsey discusses the substantial economic growth that generative AI could drive, examining its key drivers and exploring potential productivity gains across different business functions. While reading, evaluate how realistic its claims are and how they might affect the organization you work at (or organizations you might work at in the future).
Original text:
https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
Authors:
Michael Chui et al.