Consistently Candid
Sarah Hastings-Woodhouse
Technology
AI safety, philosophy and other things.
-
#10 Nathan Labenz on the current AI state-of-the-art, the Red Team in Public project, reasons for hope on AI x-risk & more
Nathan Labenz is the founder of the AI content-generation platform Waymark and host of The Cognitive Revolution Podcast, and he now works full-time on tracking and analysing developments in AI. We chatted about where we currently stand with state-of-the-art AI capabilities, whether we should be advocating for a pause on scaling frontier models, Nathan's Red Team in Public project, and some reasons not to be a hardcore doomer! Follow Nathan on Twitter. Listen to The Cognitive Revolution.
-
#9 Sneha Revanur on founding Encode Justice, California's SB-1047, and youth advocacy for safe AI development
Sneha Revanur is the founder of Encode Justice, an international, youth-led network campaigning for the responsible development of AI, which was among the sponsors of California's proposed AI bill SB-1047. We chatted about why Sneha founded Encode Justice, the importance of youth advocacy in AI safety, and what the movement can learn from climate activism. We also dug into the details of SB-1047 and answered some common criticisms of the bill! Follow Sneha on Twitter: https://twitter.co...
-
#8 Nathan Young on forecasting, AI risk & regulation, and how not to lose your mind on Twitter
Nathan Young is a forecaster, software developer and tentative AI optimist. In this episode, we discussed how Nathan approaches forecasting, why his p(doom) is 2-9%, whether we should pause AGI research, and more! Follow Nathan on Twitter: Nathan 🔍 (@NathanpmYoung) / X (twitter.com). Nathan's Substack: Predictive Text | Nathan Young | Substack. My Twitter: sarah ⏸️ (@littIeramblings) / X (twitter.com)
-
#7 Noah Topper helps me understand Eliezer Yudkowsky
A while back, my self-confessed inability to fully comprehend the writings of Eliezer Yudkowsky elicited the sympathy of the author himself. In an attempt to more completely understand why AI is going to kill us all, I enlisted Noah Topper, recent Computer Science Master's graduate and long-time EY fan, to help me break down A List of Lethalities (which, for anyone unfamiliar, is a fun list of 43 reasons why we're all totally screwed). Follow Noah on Twitter: Noah Topper 🔍⏸️ (@Noah...
-
#6 Holly Elmore on pausing AI, protesting, warning shots & more
Holly Elmore is an AI pause advocate and Executive Director of PauseAI US. We chatted about the case for pausing AI, her experience of organising protests against frontier AGI research, the danger of relying on warning shots, the prospect of techno-utopia, possible risks of pausing, and more! Follow Holly on Twitter: Holly ⏸️ Elmore (@ilex_ulmus) / X (twitter.com). Official PauseAI US Twitter account: PauseAI US ⏸️ (@pauseaius) / X (twitter.com). My Twitter: sarah ⏸️ (@littIeramblings) / X (twitter...
-
#5 Joep Meindertsma on founding PauseAI and strategies for communicating AI risk
In this episode, I talked with Joep Meindertsma, founder of PauseAI, about how he discovered AI safety, the emotional experience of internalising existential risks, strategies for communicating AI risk, his assessment of recent AI policy developments, and more! Find out more about PauseAI at www.pauseai.info