15 episodes

When will the world create an artificial intelligence that matches human-level capabilities, better known as an artificial general intelligence (AGI)? What will that world look like & how can we ensure it's positive & beneficial for humanity as a whole? Tech entrepreneur & software engineer Soroush Pour (@soroushjp) sits down with AI experts to discuss AGI timelines, pathways, implications, opportunities & risks as we enter this pivotal new era for our planet and species.

Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/

Artificial General Intelligence (AGI) Show with Soroush Pour
Soroush Pour

    • Technology

    Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT)

    We speak with Stephen Casper, or "Cas" as his friends call him. Cas is a PhD student at MIT in the Computer Science (EECS) department, in the Algorithmic Alignment Group advised by Prof Dylan Hadfield-Menell. Formerly, he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI (CHAI) at Berkeley. His work focuses on understanding the internal workings of AI models (better known as “interpretability”), making them robust to various kinds of adversarial attacks, and ca...

    • 2 hrs 42 min
    Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts)

    We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI: when certain capabilities will arise, what AI will look like, and how it will all go for humanity. We talk to Katja about:
    * How AI Impacts' latest rigorous survey of leading AI researchers shows they've dramatically reduced their timelines to when AI will successfully tackle all human tasks & occupations.
    * The survey's methodology and why...

    • 1 hr 20 min
    Ep 12 - Education & advocacy for AI safety w/ Rob Miles (YouTube host)

    We speak with Rob Miles. Rob is the host of the “Robert Miles AI Safety” channel on YouTube, the single most popular AI alignment video series out there, with 145,000 subscribers and a top video with ~600,000 views. He goes much deeper than most educational resources on alignment, covering important technical topics like the orthogonality thesis, inner misalignment, and instrumental convergence. Through his work, Robert has educated thousands on AI safety, including many now wo...

    • 1 hr 21 min
    Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)

    We speak with Thomas Larsen, Director of Strategy at the Center for AI Policy in Washington, DC, to do a "speed run" overview of all the major technical research directions in AI alignment. It's a great way to quickly learn broadly about the field of technical AI alignment. In 2022, Thomas spent ~75 hours putting together an overview of what everyone in technical alignment was doing. Since then, he's continued to be deeply engaged in AI safety. We talk to Thomas to share an updated overview to hel...

    • 1 hr 37 min
    Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)

    We speak with Ryan Kidd, Co-Director of the ML Alignment & Theory Scholars (MATS) program, previously "SERI MATS". MATS (https://www.matsprogram.org/) provides research mentorship, technical seminars, and connections to help new AI researchers get established and start producing impactful research towards AI safety & alignment. Prior to MATS, Ryan completed a PhD in Physics at the University of Queensland (UQ) in Australia. We talk about:
    * What the MATS program is
    * Who should apply to MATS (...

    • 1 hr 16 min
    Ep 9 - Scaling AI safety research w/ Adam Gleave (CEO, FAR AI)

    We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI’s mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not ready for commercialisation. They work on adversarial robustness, interpretability, preference learning, & more. We talk to Adam about:
    * The founding story of FAR as an AI safety org, and how it's different from the big commercial labs (e.g. OpenAI) and...

    • 1 hr 19 min
