Artificial Intelligence: The Journey and the Risks with Jaan Tallinn
Join us on this fascinating journey as we sit down with Jaan Tallinn, co-founder of Skype and Kazaa, to explore the groundbreaking world of AI and its potential implications for society and businesses.
Listen in as we tackle the difference between summoning AI and summoning aliens, and discuss what it means for our ability to control the outcomes of AI development. We also delve into computational universality, the Church-Turing thesis, and how AI's rapid advance currently hinges on significant computational resources. Additionally, we ponder whether modeling AI development more closely on the functioning of the human brain could reduce the computational power AI currently requires.
The conversation doesn't stop there. We examine the three components that can increase AI's power and how in-context learning allows a model to modify its behavior according to a given context. The risks of AI's black-box nature, the difficulty of predicting how AI might act in the future, the public's attitude towards AI, its potential economic implications, and the increasing leverage of technology are all on the table. Jaan shares his insights on these critical topics as we underscore that, as technology advances, the futures that contain humans become a small target.
Lastly, we discuss the potential implications of AI and how it differs from the human brain. Jaan offers intriguing insights into the need for regulation and the potential pitfalls of having one company control the compute. We debate the pros and cons of constraining AI experiments and weigh the risks of centralization against existential risks. Don't miss this illuminating conversation with Jaan Tallinn as we traverse the captivating world of AI.
Key Quotes:
- Ultimately, we were developed by evolution, right? Evolution had no idea what it was doing; it wasn't planning ahead at all. So ultimately there is no knockdown argument that you can't reach these things by just throwing compute at it. It's clear that human brains are doing things that no AI is currently doing.
- I just file it under the general problem that the current paradigm of neural networks are black boxes that are grown, not built. Therefore it's very hard to get actual guarantees about their performance, and as for performance failures, again, I'm going to repeat that the value of AI is mostly measured by its mistakes, or more generally by its misbehavior.
- We have had warnings from people like Alan Turing from 70 years ago. He said that once AI becomes as powerful as humans, we should expect to lose control to it. We didn't really spend much of those 70 years researching how to remain in control or how to make sure that the future goes well with AI that is potentially more powerful than humans. So we should catch up; we should do more of that research. But in order to do that research, we need time.
Time Stamps:
(00:24) - AI's Risks and Opportunities
(13:34) - AI Advancements, Risks, and Regulation
(20:13) - Neural Networks and the Need for Regulation
(25:18) - Centralization Risks Versus Existential Risks
Links:
Learn more about Jaan Tallinn
Learn more about the Future of Life Institute
Connect with Vahe
Check out all things Cognaize
Information:
- Frequency: Updated Biweekly
- Published: November 30, 2023 at 6:00 AM UTC
- Length: 26 min
- Episode: 3
- Rating: Clean