18 episodes

The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations which should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI".

For better formatted show notes, additional resources, and more, go to https://into-ai-safety.github.io
For even more content and community engagement, head over to my Patreon at https://www.patreon.com/IntoAISafety

    INTERVIEW: StakeOut.AI w/ Dr. Peter Park (3)

    As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT.
    As you may have ascertained from the previous two segments of the interview, Dr. Park cofounded StakeOut.AI along with Harry Luk and one other cofounder, whose name has been removed due to requirements of her current position. The non-profit had a simple but important mission: make the adoption of AI technology go well for humanity. Unfortunately, StakeOut.AI had to dissolve in late February of 2024 because no grantmaker would fund it. Although it is certainly disappointing that the organization is no longer operating, all three cofounders continue to contribute positively towards improving our world in their current roles.
    If you would like to learn more about Dr. Park's work, visit his website, view his Google Scholar profile, or follow him on Twitter.
    00:00:54 ❙ Intro
    00:02:41 ❙ Rapid development
    00:08:25 ❙ Provable safety, safety factors, & CSAM
    00:18:50 ❙ Litigation
    00:23:06 ❙ Open/Closed Source
    00:38:52 ❙ AIxBio
    00:47:50 ❙ Scientific rigor in AI
    00:56:22 ❙ AI deception
    01:02:45 ❙ No takesies-backsies
    01:08:22 ❙ StakeOut.AI's start
    01:12:53 ❙ Sustainability & Agency
    01:18:21 ❙ "I'm sold, next steps?" -you
    01:23:53 ❙ Lessons from the amazing Spiderman
    01:33:15 ❙ "I'm ready to switch careers, next steps?" -you
    01:40:00 ❙ The most important question
    01:41:11 ❙ Outro
    Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
    StakeOut.AI
    Pause AI
    AI Governance Scorecard (go to Pg. 3)
    CIVITAI
    Article on CIVITAI and CSAM
    Senate Hearing: Protecting Children Online
    PBS Newshour Coverage
    The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work

    Open Source/Weights/Release/Interpretation
    Open Source Initiative
    History of the OSI
    Meta’s LLaMa 2 license is not Open Source
    Is Llama 2 open source? No – and perhaps we need a new definition of open…
    Apache License, Version 2.0
    3Blue1Brown: Neural Networks
    Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators
    The online table
    Signal
    Bloomz model on HuggingFace
    Mistral website

    NASA Tragedies
    Challenger disaster on Wikipedia
    Columbia disaster on Wikipedia

    AIxBio Risk
    Dual use of artificial-intelligence-powered drug discovery
    Can large language models democratize access to dual-use biotechnology?
    Open-Sourcing Highly Capable Foundation Models (sadly, I can't rename the article...)
    Propaganda or Science: Open Source AI and Bioterrorism Risk
    Exaggerating the risks (Part 15: Biorisk from LLMs)
    Will releasing the weights of future large language models grant widespread access to pandemic agents?
    On the Societal Impact of Open Foundation Models
    Policy brief
    Apart Research

    Science
    Cicero
    Human-level play in the game of Diplomacy by combining language models with strategic reasoning
    Cicero webpage
    AI Deception: A Survey of Examples, Risks, and Potential Solutions
    Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation

    AI Safety Camp
    Into AI Safety Patreon

    • 1 hr 41 min
    INTERVIEW: StakeOut.AI w/ Dr. Peter Park (2)

    Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Dr. Park cofounded StakeOut.AI, a non-profit focused on making AI go well for humans, together with Harry Luk and one other individual, whose name has been removed due to requirements of her current position.
    In addition to the normal links, I wanted to include links to the petitions that Dr. Park mentions during the podcast. Note that the non-profit which began these petitions, StakeOut.AI, has been dissolved.

    Right AI Laws, to Right Our Future: Support Artificial Intelligence Safety Regulations Now
    Is Deepfake Illegal? Not Yet! Ban Deepfakes to Protect Your Family & Demand Deepfake Laws
    Ban Superintelligence: Stop AI-Driven Human Extinction Risk

    00:00:54 - Intro
    00:02:34 - Battleground 1: Copyright
    00:06:28 - Battleground 2: Moral Critique of AI Collaborationists
    00:08:15 - Rich Sutton
    00:20:41 - OpenAI Drama
    00:34:28 - Battleground 3: Contract Negotiations for AI Ban Clauses
    00:37:57 - Tesla, Autopilot, and FSD
    00:40:02 - Recycling
    00:47:40 - Battleground 4: New Laws and Policies
    00:50:00 - Battleground 5: Whistleblower Protections
    00:53:07 - Whistleblowing on Microsoft
    00:54:43 - Andrej Karpathy & Exercises in Empathy
    01:05:57 - Outro
    Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
    StakeOut.AI
    The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work
    Susman Godfrey LLP

    Rich Sutton
    Reinforcement Learning: An Introduction (textbook)
    AI Succession (presentation by Rich Sutton)
    The Alberta Plan for AI Research

    Moore's Law
    The Future of Integrated Electronics (original paper)
    Computer History Museum's entry on Moore's Law
    Stochastic gradient descent (SGD) on Wikipedia

    OpenAI Drama
    Max Read's Substack post
    Zvi Mowshowitz's Substack series, in order of posting:
    OpenAI: Facts from a Weekend
    OpenAI: The Battle of the Board
    OpenAI: Altman Returns
    OpenAI: Leaks Confirm the Story ← best singular post in the series
    OpenAI: The Board Expands
    Official OpenAI announcement
    WGA on Wikipedia
    SAG-AFTRA on Wikipedia

    Tesla's False Advertising
    Tesla's response to the DMV's false-advertising allegations: What took so long?
    Tesla Tells California DMV that FSD Is Not Capable of Autonomous Driving
    What to Call Full Self-Driving When It Isn't Full Self-Driving?
    Tesla fired an employee after he posted driverless tech reviews on YouTube
    Tesla's page on Autopilot and Full Self-Driving

    Recycling
    Boulder County Recycling Center Stockpiles Accurately Sorted Recyclable Materials
    Out of sight, out of mind
    Boulder Eco-Cycle Recycling Guidelines
    Divide-and-Conquer Dynamics in AI-Driven Disempowerment

    Microsoft Whistleblower
    Whistleblowers call out AI's flaws
    Shane's LinkedIn post
    Letters sent by Jones
    Karpathy announces departure from OpenAI

    • 1 hr 6 min
    MINISODE: Restructure Vol. 2

    UPDATE: Contrary to what I say in this episode, I won't be removing any already-published episodes from the podcast RSS feed.

    After getting some advice and reflecting more on my own personal goals, I have decided to shift the direction of the podcast towards accessible content regarding "AI" instead of the show's original focus. I will still be releasing what I am calling research ride-along content to my Patreon, but the show's feed will consist only of content that I aim to make as accessible as possible.

    00:35 - TL;DL
    01:12 - Advice from Pete
    03:10 - My personal goal
    05:39 - Reflection on refining my goal
    09:08 - Looking forward (logistics)

    • 13 min
    INTERVIEW: StakeOut.AI w/ Dr. Peter Park (1)

    Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans.

    00:54 - Intro
    03:15 - Dr. Park, x-risk, and AGI
    08:55 - StakeOut.AI
    12:05 - Governance scorecard
    19:34 - Hollywood webinar
    22:02 - Regulations.gov comments
    23:48 - Open letters
    26:15 - EU AI Act
    35:07 - Effective accelerationism
    40:50 - Divide and conquer dynamics
    45:40 - AI "art"
    53:09 - Outro

    Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.


    StakeOut.AI
    AI Governance Scorecard (go to Pg. 3)
    Pause AI
    Regulations.gov
    USCO StakeOut.AI Comment
    OMB StakeOut.AI Comment


    AI Treaty open letter
    TAISC
    Alpaca: A Strong, Replicable Instruction-Following Model
    References on EU AI Act and Cedric O
    Tweet from Cedric O
    EU policymakers enter the last mile for Artificial Intelligence rulebook
    AI Act: EU Parliament’s legal office gives damning opinion on high-risk classification ‘filters’
    EU’s AI Act negotiations hit the brakes over foundation models
    The EU AI Act needs Foundation Model Regulation
    BigTech’s Efforts to Derail the AI Act


    Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation
    Divide-and-Conquer Dynamics in AI-Driven Disempowerment

    • 54 min
    MINISODE: "LLMs, a Survey"

    Take a trip with me through the paper Large Language Models, A Survey, published on February 9th of 2024. All figures and tables mentioned throughout the episode can be found on the Into AI Safety podcast website.

    00:36 - Intro and authors
    01:50 - My takes and paper structure
    04:40 - Getting to LLMs
    07:27 - Defining LLMs & emergence
    12:12 - Overview of PLMs
    15:00 - How LLMs are built
    18:52 - Limitations of LLMs
    23:06 - Uses of LLMs
    25:16 - Evaluations and Benchmarks
    28:11 - Challenges and future directions
    29:21 - Recap & outro

    Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
    Large Language Models, A Survey
    Meysam's LinkedIn Post

    Claude E. Shannon
    A symbolic analysis of relay and switching circuits (Master's Thesis)
    Communication theory of secrecy systems
    A mathematical theory of communication
    Prediction and entropy of printed English

    Future ML Systems Will Be Qualitatively Different
    More Is Different
    Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
    Are Emergent Abilities of Large Language Models a Mirage?
    Are Emergent Abilities of Large Language Models just In-Context Learning?
    Attention is all you need
    Direct Preference Optimization: Your Language Model is Secretly a Reward Model
    KTO: Model Alignment as Prospect Theoretic Optimization
    Optimization by Simulated Annealing
    Memory and new controls for ChatGPT
    Hallucinations and related concepts—their conceptual background

    • 30 min
    FEEDBACK: Applying for Funding w/ Esben Kran

    Esben reviews an application that I would soon submit for Open Philanthropy's Career Transition Funding opportunity. Although I didn't end up receiving the funding, I think this episode can be a valuable resource both for others and for myself when applying for funding in the future.
    Head over to Apart Research's website to check out their work, or the Alignment Jam website for information on upcoming hackathons.
    A doc-capsule of the application at the time of this recording can be found at this link.
    01:38 - Interview starts
    05:41 - Proposal
    11:00 - Personal statement
    14:00 - Budget
    21:12 - CV
    22:45 - Application questions
    34:06 - Funding questions
    44:25 - Outro
    Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.
    AI governance talent profiles we’d like to see
    The AI Governance Research Sprint
    Reasoning Transparency

    Places to look for funding:
    Open Philanthropy's Career development and transition funding
    Long-Term Future Fund
    Manifund

    • 45 min
