For Humanity: An AI Safety Podcast

John Sherman

For Humanity: An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within 2 to 10 years. This podcast is solely about the threat of human extinction from AGI. We'll name and meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

  1. Protecting Our Kids From AI Risk | Episode #58

    27 JAN.

    Protecting Our Kids From AI Risk | Episode #58

    Host John Sherman interviews Tara Steele, Director of The Safe AI For Children Alliance, about her work to protect children from AI risks such as deepfakes, her concern about AI causing human extinction, and what we can do about all of it.

    FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
    $1/MONTH: https://buy.stripe.com/7sI3cje3x2Zk9SodQT
    $10/MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
    $25/MONTH: https://buy.stripe.com/3cs9AHf7B9nIggM4gh
    $100/MONTH: https://buy.stripe.com/aEU007bVp7fAfcI5km
    You can also donate any amount one time.

    Get Involved!
    EMAIL JOHN: forhumanitypodcast@gmail.com
    SUPPORT PAUSE AI: https://pauseai.info/
    SUPPORT STOP AI: https://www.stopai.info/about

    RESOURCES:
    BENGIO/NG DAVOS VIDEO: https://www.youtube.com/watch?v=w5iuHJh3_Gk&t=8s
    STUART RUSSELL VIDEO: https://www.youtube.com/watch?v=KnDY7ABmsds&t=5s
    AL GREEN VIDEO (WATCH ALL 39 MINUTES, THEN REPLAY): https://youtu.be/SOrHdFXfXds?si=s_nlDdDpYN0RR_Yc
    Check out our partner channel: Lethal Intelligence AI: https://lethalintelligence.ai
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE: /@doomdebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK: https://stephenhansonart.bigcartel.co...
    22-Word Statement from the Center for AI Safety, Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on...
    Best account on Twitter: AI Notkilleveryoneism Memes: /aisafetymemes

    ****************
    To learn more about protecting our children from AI risks such as deepfakes, visit our YouTube channel. Topics covered: AI, AI risk, AI safety.

    1 h 43 min
  2. 2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57

    13 JAN.

    2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57

    What will 2025 bring? Sam Altman says AGI is coming in 2025. Agents will arrive for sure. Military use will expand greatly. Will we get a warning shot? Will we survive the year? In Episode #57, host John Sherman interviews AI Safety Research Engineer Max Winga about the latest AI advances and risks and the year to come.

    FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
    $1/MONTH: https://buy.stripe.com/7sI3cje3x2Zk9SodQT
    $10/MONTH: https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
    $25/MONTH: https://buy.stripe.com/3cs9AHf7B9nIggM4gh
    $100/MONTH: https://buy.stripe.com/aEU007bVp7fAfcI5km

    Anthropic Alignment Faking Video: https://www.youtube.com/watch?v=9eXV64O2Xp8&t=1s
    Neil deGrasse Tyson Video: https://www.youtube.com/watch?v=JRQDc55Aido&t=579s
    Max Winga's Amazing Speech: https://www.youtube.com/watch?v=kDcPW5WtD58

    Get Involved!
    EMAIL JOHN: forhumanitypodcast@gmail.com
    SUPPORT PAUSE AI: https://pauseai.info/
    SUPPORT STOP AI: https://www.stopai.info/about

    Check out our partner channel: Lethal Intelligence AI: https://lethalintelligence.ai
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE: https://www.youtube.com/@DoomDebates
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK: https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom
    22-Word Statement from the Center for AI Safety, Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on-ai-risk
    Best account on Twitter: AI Notkilleveryoneism Memes: https://twitter.com/AISafetyMemes

    1 h 40 min
  3. AI Risk Funding | Big Tech vs. Small Safety | Episode #51

    23 OCT. 2024

    AI Risk Funding | Big Tech vs. Small Safety | Episode #51

    In Episode #51, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding and the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.

    Learn more about Founders Pledge: https://www.founderspledge.com/

    No celebration of life this week! YouTube finally got me with a copyright flag, so I had to edit the song out.

    THURSDAY NIGHTS: LIVE FOR HUMANITY COMMUNITY MEETINGS, 8:30PM EST
    Join Zoom Meeting: https://storyfarm.zoom.us/j/816517210...
    Passcode: 829191

    Please donate here to help promote For Humanity: https://www.paypal.com/paypalme/forhu...
    EMAIL JOHN: forhumanitypodcast@gmail.com

    This podcast is not journalism. But it's not opinion either. It is a long-form public service announcement. The show simply strings together the existing facts and underscores the unthinkable probable outcome: the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2 years. We'll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

    ****************
    RESOURCES:
    SUBSCRIBE TO LIRON SHAPIRA'S DOOM DEBATES ON YOUTUBE: /@doomdebates
    Join the Pause AI Weekly Discord, Thursdays at 2pm EST: /discord
    Max Winga's "A Stark Warning About AI Extinction"
    For Humanity theme music by Josef Ebner. YouTube: /@jpjosefpictures. Website: https://josef.pictures
    BUY STEPHEN HANSON'S BEAUTIFUL AI RISK BOOK: https://stephenhansonart.bigcartel.co...
    22-Word Statement from the Center for AI Safety, Statement on AI Risk | CAIS: https://www.safe.ai/work/statement-on...
    Best account on Twitter: AI Notkilleveryoneism Memes: /aisafetymemes

    ***********************
    Explore the reality of AI risk funding and the imbalance between big tech's spending and the small budgets behind AI safety work. Topics covered: AI, AI safety, AI safety research.

    1 h 6 min

Trailer

4.4 out of 5 (8 ratings)

