13 episodes

When will the world create an artificial intelligence that matches human-level capabilities, better known as an artificial general intelligence (AGI)? What will that world look like & how can we ensure it's positive & beneficial for humanity as a whole? Tech entrepreneur & software engineer Soroush Pour (@soroushjp) sits down with AI experts to discuss AGI timelines, pathways, implications, opportunities & risks as we enter this pivotal new era for our planet and species.

Hosted by Soroush Pour. Follow me for more AGI content:
Twitter: https://twitter.com/soroushjp
LinkedIn: https://www.linkedin.com/in/soroushjp/

Artificial General Intelligence (AGI) Show with Soroush Pour

    • Technology

    Ep 12 - Education & advocacy for AI safety w/ Rob Miles (YouTube host)

    We speak with Rob Miles. Rob is the host of the “Robert Miles AI Safety” channel on YouTube, the single most popular AI alignment video series out there, with 145,000 subscribers and ~600,000 views on his top video. He goes much deeper than many educational resources on alignment, covering important technical topics like the orthogonality thesis, inner misalignment, and instrumental convergence. Through his work, Robert has educated thousands on AI safety, including many now wo...

    • 1 hr 21 min
    Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)

    We speak with Thomas Larsen, Director of Strategy at the Center for AI Policy in Washington, DC, to do a "speed run" overview of all the major technical research directions in AI alignment. It's a great way to quickly learn broadly about the field of technical AI alignment. In 2022, Thomas spent ~75 hours putting together an overview of what everyone in technical alignment was doing. Since then, he's continued to be deeply engaged in AI safety. We talk to Thomas to share an updated overview to hel...

    • 1 hr 37 min
    Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)

    We speak with Ryan Kidd, Co-Director at the ML Alignment & Theory Scholars (MATS) program, previously "SERI MATS". MATS (https://www.matsprogram.org/) provides research mentorship, technical seminars, and connections to help new AI researchers get established and start producing impactful research towards AI safety & alignment. Prior to MATS, Ryan completed a PhD in Physics at the University of Queensland (UQ) in Australia. We talk about:
    * What the MATS program is
    * Who should apply to MATS (...

    • 1 hr 16 min
    Ep 9 - Scaling AI safety research w/ Adam Gleave (CEO, FAR AI)

    We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI’s mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not ready for commercialisation. They work on everything from adversarial robustness and interpretability to preference learning, & more. We talk to Adam about:
    * The founding story of FAR as an AI safety org, and how it's different from the big commercial labs (e.g. OpenAI) and...

    • 1 hr 19 min
    Ep 8 - Getting started in AI safety & alignment w/ Jamie Bernardi (AI Safety Lead, BlueDot Impact)

    We speak with Jamie Bernardi, co-founder & AI Safety Lead at the not-for-profit BlueDot Impact, which hosts the biggest and most up-to-date courses on AI safety & alignment at AI Safety Fundamentals (https://aisafetyfundamentals.com/). Jamie completed his Bachelors (Physical Natural Sciences) and Masters (Physics) at the University of Cambridge and worked as an ML Engineer before co-founding BlueDot Impact. The free courses they offer are created in collaboration with people on the cutting edge of AI sa...

    • 1 hr 7 min
    Ep 7 - Responding to a world with AGI - Richard Dazeley (Prof AI & ML, Deakin University)

    In this episode, we speak with Prof Richard Dazeley about the implications of a world with AGI and how we can best respond. We talk about what he thinks AGI will actually look like, as well as the technical and governance responses we should put in place today and in the future to ensure a safe and positive future with AGI. Prof Richard Dazeley is Deputy Head of the School of Information Technology at Deakin University in Melbourne, Australia. He’s also a senior member of the Internatio...

    • 1 hr 10 min
