2 hr 1 min

Daniela and Dario Amodei on Anthropic | Future of Life Institute Podcast

Daniela and Dario Amodei join us to discuss Anthropic: a new AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Topics discussed in this episode include:

-Anthropic's mission and research strategy
-Recent research and papers by Anthropic
-Anthropic's structure as a "public benefit corporation"
-Career opportunities

You can find the page for the podcast here: https://futureoflife.org/2022/03/04/daniela-and-dario-amodei-on-anthropic/

Watch the video version of this episode here: https://www.youtube.com/watch?v=uAA6PZkek4A

Careers at Anthropic: https://www.anthropic.com/#careers

Anthropic's Transformer Circuits research: https://transformer-circuits.pub/

Follow Anthropic on Twitter: https://twitter.com/AnthropicAI

microCOVID Project: https://www.microcovid.org/

Follow Lucas on Twitter: https://twitter.com/lucasfmperry

Have any feedback about the podcast? You can share your thoughts here:
www.surveymonkey.com/r/DRBFZCT

Timestamps:

0:00 Intro
2:44 What was the intention behind forming Anthropic?
6:28 Do the founders of Anthropic share a similar view on AI?
7:55 What is Anthropic's focused research bet?
11:10 Does AI existential safety fit into Anthropic's work and thinking?
14:14 Examples of AI models today that have properties relevant to future AI existential safety
16:12 Why work on large scale models?
20:02 What does it mean for a model to lie?
22:44 Safety concerns around the open-endedness of large models
29:01 How does safety work fit into race dynamics toward increasingly powerful AI?
36:16 Anthropic's mission and how it fits into AI alignment
38:40 Why explore large models for AI safety and scaling to more intelligent systems?
43:24 Is Anthropic's research strategy a form of prosaic alignment?
46:22 Anthropic's recent research and papers
49:52 How difficult is it to interpret current AI models?
52:40 Anthropic's research on alignment and societal impact
55:35 Why did you decide to release tools and videos alongside your interpretability research?
1:01:04 What is it like working with your sibling?
1:05:33 Inspiration around creating Anthropic
1:12:40 Is there an upward bound on capability gains from scaling current models?
1:18:00 Why is it unlikely that simply increasing the number of model parameters will lead to AGI?
1:21:10 Bootstrapping models
1:22:26 How does Anthropic see itself as positioned in the AI safety space?
1:25:35 What does being a public benefit corporation mean for Anthropic?
1:30:55 Anthropic's perspective on windfall profits from powerful AI systems
1:34:07 Issues with current AI systems and their relationship with long-term safety concerns
1:39:30 Anthropic's plan to communicate its work to technical researchers and policymakers
1:41:28 AI evaluations and monitoring
1:42:50 AI governance
1:45:12 Careers at Anthropic
1:48:30 What it's like working at Anthropic
1:52:48 Why hire people from a wide variety of technical backgrounds?
1:54:33 What's a future you're excited about or hopeful for?
1:59:42 Where to find and follow Anthropic

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
