Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a Ph.D. from MIT (https://www.linkedin.com/in/dr-keith-duggar/).

  1. Michael Timothy Bennett: Defining Intelligence and AGI Approaches

    6 DAYS AGO

    Michael Timothy Bennett: Defining Intelligence and AGI Approaches

    Dr. Michael Timothy Bennett is a computer scientist who's deeply interested in understanding artificial intelligence, consciousness, and what it means to be alive. He's known for his provocative paper "What the F*** is Artificial Intelligence", which challenges conventional thinking about AI and intelligence.

    *** SPONSOR MESSAGES ***
    Prolific: Quality data. From real people. For faster breakthroughs.
    https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=mb
    ***

    Michael takes us on a journey through some of the biggest questions in AI and consciousness. He starts by exploring what intelligence actually is, settling on the idea that it's about "adaptation with limited resources" (a definition from researcher Pei Wang that he particularly likes).

    The discussion ranges from technical AI concepts to philosophical questions about consciousness, with Michael offering fresh perspectives that challenge Silicon Valley's "just scale it up" approach to AI. He argues that true intelligence isn't just about having more parameters or data; it's about being able to adapt efficiently, like biological systems do.

    TOC:
    1. Introduction & Paper Overview [00:01:34]
    2. Definitions of Intelligence [00:02:54]
    3. Formal Models (AIXI, Active Inference) [00:07:06]
    4. Causality, Abstraction & Embodiment [00:10:45]
    5. Computational Dualism & Mortal Computation [00:25:51]
    6. Modern AI, AGI Progress & Benchmarks [00:31:30]
    7. Hybrid AI Approaches [00:35:00]
    8. Consciousness & The Hard Problem [00:39:35]
    9. The Diverse Intelligences Summer Institute (DISI) [00:53:20]
    10. Living Systems & Self-Organization [00:54:17]
    11. Closing Thoughts [01:04:24]

    Michael's socials:
    https://michaeltimothybennett.com/
    https://x.com/MiTiBennett

    Transcript:
    https://app.rescript.info/public/share/4jSKbcM77Sf6Zn-Ms4hda7C4krRrMcQt0qwYqiqPTPI

    References:
    Bennett, M.T. "What the F*** is Artificial Intelligence" https://arxiv.org/abs/2503.23923
    Bennett, M.T. "Are Biological Systems More Intelligent Than Artificial Intelligence?" https://arxiv.org/abs/2405.02325
    Bennett, M.T. PhD Thesis "How To Build Conscious Machines" https://osf.io/preprints/thesiscommons/wehmg_v1
    Legg, S. & Hutter, M. (2007). "Universal Intelligence: A Definition of Machine Intelligence"
    Wang, P. "Defining Artificial Intelligence" - on non-axiomatic reasoning systems (NARS)
    Chollet, F. (2019). "On the Measure of Intelligence" - introduces the ARC benchmark and developer-aware generalization
    Hutter, M. (2005). "Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability"
    Chalmers, D. "The Hard Problem of Consciousness"
    Descartes, R. - Cartesian dualism and the pineal gland theory (historical context)
    Friston, K. - Free Energy Principle and Active Inference framework
    Levin, M. - work on collective intelligence, cancer as information isolation, and "mind blindness"
    Hinton, G. (2022). "The Forward-Forward Algorithm" - introduces the mortal computation concept
    Alexander Ororbia & Friston - formal treatment of mortal computation
    Sutton, R. "The Bitter Lesson" - on search and learning in AI
    Pearl, J. "The Book of Why" - causal inference and reasoning

    Alternative AGI Approaches:
    Wang, P. - NARS (Non-Axiomatic Reasoning System)
    Goertzel, B. - Hyperon system and modular AGI architectures

    Benchmarks & Evaluation:
    Hendrycks, D. - Humanity's Last Exam benchmark (mentioned re: saturation)

    Filmed at the Diverse Intelligences Summer Institute (DISI): https://disi.org/

    1h 6m
  2. Superintelligence Strategy (Dan Hendrycks)

    14 AUG

    Superintelligence Strategy (Dan Hendrycks)

    Deep dive with Dan Hendrycks, a leading AI safety researcher and co-author of the "Superintelligence Strategy" paper with former Google CEO Eric Schmidt and Scale AI CEO Alexandr Wang.

    *** SPONSOR MESSAGES ***
    Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal - https://github.com/google-gemini/gemini-cli
    Prolific: Quality data. From real people. For faster breakthroughs.
    https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=script-gen
    ***

    Hendrycks argues that society is making a fundamental mistake in how it views artificial intelligence. We often compare AI to transformative but ultimately manageable technologies like electricity or the internet. He contends that a far better and more realistic analogy is nuclear technology. Like nuclear power, AI has the potential for immense good, but it is also a dual-use technology that carries the risk of unprecedented catastrophe.

    The Problem with an AI "Manhattan Project": A popular idea is for the U.S. to launch a "Manhattan Project" for AI - a secret, all-out government race to build a superintelligence before rivals like China. Hendrycks argues this strategy is deeply flawed and dangerous for several reasons:
    - It wouldn't be secret. You cannot hide a massive, heat-generating data center from satellite surveillance.
    - It would be destabilizing. A public race would alarm rivals, causing them to start their own desperate, corner-cutting projects, dramatically increasing global risk.
    - It's vulnerable to sabotage. An AI project can be crippled in many ways, from cyberattacks that poison its training data to physical attacks on its power plants. This is what the paper refers to as a "maiming attack."

    This vulnerability leads to the paper's central concept: Mutual Assured AI Malfunction (MAIM). This is the AI-era version of the nuclear era's Mutual Assured Destruction (MAD). In this dynamic, any nation that makes an aggressive, destabilizing bid for a world-dominating AI must expect its rivals to sabotage the project to ensure their own survival. This deterrence, Hendrycks argues, is already the default reality we live in.

    A Better Strategy - The Three Pillars: Instead of a reckless race, the paper proposes a more stable, three-part strategy modeled on Cold War principles:
    - Deterrence: Acknowledge the reality of MAIM. The goal should not be to "win" the race to superintelligence, but to deter anyone from starting such a race in the first place through the credible threat of sabotage.
    - Nonproliferation: Just as we work to keep fissile materials for nuclear bombs out of the hands of terrorists and rogue states, we must control the key inputs for catastrophic AI. The most critical input is advanced AI chips (GPUs). Hendrycks makes the powerful claim that building cutting-edge GPUs is now more difficult than enriching uranium, making this strategy viable.
    - Competitiveness: The race between nations like the U.S. and China should not be about who builds superintelligence first. Instead, it should be about who can best use existing AI to build a stronger economy, a more effective military, and more resilient supply chains (for example, by manufacturing more chips domestically).

    Dan says the stakes are high if we fail to manage this transition:
    - Erosion of Control
    - Intelligence Recursion
    - Worthless Labor

    Hendrycks maintains that while the risks are existential, the future is not set.

    TOC:
    1. Measuring the Beast [00:00:00]
    2. Defining the Beast [00:11:34]
    3. The Core Strategy [00:38:20]
    4. Ideological Battlegrounds [00:53:12]
    5. Mechanisms of Control [01:34:45]

    TRANSCRIPT: https://app.rescript.info/public/share/cOKcz4pWRPjh7BTIgybd7PUr_vChUaY6VQW64No8XMs

    1h 46m
  3. DeepMind Genie 3 [World Exclusive] (Jack Parker Holder, Shlomi Fruchter)

    5 AUG

    DeepMind Genie 3 [World Exclusive] (Jack Parker Holder, Shlomi Fruchter)

    This episode features Shlomi Fruchter and Jack Parker Holder from Google DeepMind, who are unveiling a new AI called Genie 3. The host, Tim Scarfe, describes it as the most mind-blowing technology he has ever seen. We were invited to their offices to conduct the interview (not sponsored).

    Imagine you could create a video game world just by describing it. That's what Genie 3 does. It's an AI "world model" that learns how the real world works by watching massive amounts of video. Unlike a normal video game engine (like Unreal or the one for Doom) that needs to be programmed manually, Genie generates a realistic, interactive, 3D world from a simple text prompt.

    *** SPONSOR MESSAGES ***
    Prolific: Quality data. From real people. For faster breakthroughs.
    https://prolific.com/mlst?utm_campaign=98404559-MLST&utm_source=youtube&utm_medium=podcast&utm_content=script-gen
    ***

    Here's a breakdown of what makes it so revolutionary:

    From Text to a Virtual World: You can type "a drone flying by a beautiful lake" or "a ski slope," and Genie 3 creates that world for you in about three seconds. You can then navigate and interact with it in real time.

    It's Consistent: The worlds it creates have a reliable memory. If you look away from an object and then look back, it will still be there, just as it was. The guests explain that this consistency isn't explicitly programmed in; it's a surprising, "emergent" capability of the powerful AI model.

    A Huge Leap Forward: The previous version, Genie 2, was a major step, but it wasn't fast enough for real-time interaction and was much lower resolution. Genie 3 is 720p, interactive, and photorealistic, running smoothly for several minutes at a time.

    The Killer App - Training Robots: Beyond entertainment, the team sees Genie 3 as a game-changer for training AI. Instead of training a self-driving car or a robot in the real world (which is slow and dangerous), you can create infinite simulations. You can even prompt rare events to happen, like a deer running across the road, to teach an AI how to handle unexpected situations safely.

    The Future of Entertainment: This could lead to a "YouTube version 2" or a new form of VR, where users can create and explore endless, interconnected worlds together, like the experience machine from philosophy.

    While the technology is still a research prototype and not yet available to the public, it represents a monumental step towards creating true artificial worlds from the ground up.

    Jack Parker Holder [Research Scientist at Google DeepMind in the Open-Endedness Team]
    https://jparkerholder.github.io/
    Shlomi Fruchter [Research Director, Google DeepMind]
    https://shlomifruchter.github.io/

    TOC:
    [00:00:00] - Introduction: "The Most Mind-Blowing Technology I've Ever Seen"
    [00:02:30] - The Evolution from Genie 1 to Genie 2
    [00:04:30] - Enter Genie 3: Photorealistic, Interactive Worlds from Text
    [00:07:00] - Promptable World Events & Training Self-Driving Cars
    [00:14:21] - Guest Introductions: Shlomi Fruchter & Jack Parker Holder
    [00:15:08] - Core Concepts: What is a "World Model"?
    [00:19:30] - The Challenge of Consistency in a Generated World
    [00:21:15] - Context: The Neural Network Doom Simulation
    [00:25:25] - How Do You Measure the Quality of a World Model?
    [00:28:09] - The Vision: Using Genie to Train Advanced Robots
    [00:32:21] - Open-Endedness: Human Skill and Prompting Creativity
    [00:38:15] - The Future: Is This the Next YouTube or VR?
    [00:42:18] - The Next Step: Multi-Agent Simulations
    [00:52:51] - Limitations: Thinking, Computation, and the Sim-to-Real Gap
    [00:58:07] - Conclusion & The Future of Game Engines

    REFS:
    World Models [David Ha, Jürgen Schmidhuber] https://arxiv.org/abs/1803.10122
    POET https://arxiv.org/abs/1901.01753
    The Fractured Entangled Representation Hypothesis [Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley] https://arxiv.org/pdf/2505.11581

    TRANSCRIPT: https://app.rescript.info/public/share/Zk5tZXk6mb06yYOFh6nSja7Lg6_qZkgkuXQ-kl5AJqM
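    The episode leans on the idea of a "world model" (see the Ha & Schmidhuber reference above): a learned simulator that keeps a compact internal state and predicts what happens next when an agent acts. Genie 3's actual architecture is not public, so the sketch below is only a minimal, generic illustration of that encode / predict / decode loop; every name and shape in it (encode, dynamics, decode, LATENT_DIM, and the random linear maps standing in for trained networks) is invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    LATENT_DIM, ACTION_DIM, FRAME_SHAPE = 32, 4, (64, 64, 3)

    # Stand-in "learned" parameters: random linear maps instead of trained networks.
    W_enc = rng.normal(size=(np.prod(FRAME_SHAPE), LATENT_DIM)) * 0.01
    W_dyn = rng.normal(size=(LATENT_DIM + ACTION_DIM, LATENT_DIM)) * 0.1
    W_dec = rng.normal(size=(LATENT_DIM, np.prod(FRAME_SHAPE))) * 0.1

    def encode(frame):        # pixels -> compact latent state
        return np.tanh(frame.reshape(-1) @ W_enc)

    def dynamics(z, action):  # predict the next latent state from state + action
        return np.tanh(np.concatenate([z, action]) @ W_dyn)

    def decode(z):            # latent state -> rendered frame
        return np.tanh(z @ W_dec).reshape(FRAME_SHAPE)

    # "Play" inside the model: start from one real frame, then imagine the future.
    z = encode(rng.random(FRAME_SHAPE))
    for t in range(5):
        action = np.eye(ACTION_DIM)[t % ACTION_DIM]  # e.g. forward / left / ...
        z = dynamics(z, action)                      # the world advances in latent space
        frame = decode(z)                            # render what the agent would see
        print(f"t={t} imagined frame: shape={frame.shape}, mean={frame.mean():+.3f}")

    The consistency the guests describe (objects staying put when you look away) would have to live in that internal state; in Genie 3 they report it as an emergent property of the trained model rather than something hand-coded, which is exactly what this toy loop lacks.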

    58 min
  4. Large Language Models and Emergence: A Complex Systems Perspective (Prof. David C. Krakauer)

    31 JUL

    Large Language Models and Emergence: A Complex Systems Perspective (Prof. David C. Krakauer)

    Prof. David Krakauer, President of the Santa Fe Institute, argues that we are fundamentally confusing knowledge with intelligence, especially when it comes to AI. He defines true intelligence as the ability to do more with less - to solve novel problems with limited information. This is contrasted with current AI models, which he describes as doing less with more; they require astounding amounts of data to perform tasks that don't necessarily demonstrate true understanding or adaptation. He humorously calls this "really shit programming".

    David challenges the popular notion of "emergence" in Large Language Models (LLMs). He explains that the tech community's definition - seeing a sudden jump in a model's ability to perform a task like three-digit math - is superficial. True emergence, from a complex systems perspective, involves a fundamental change in the system's internal organization, allowing for a new, simpler, and more powerful level of description. He gives the example of moving from tracking individual water molecules to using the elegant laws of fluid dynamics. For LLMs to be truly emergent, we'd need to see them develop new, efficient internal representations, not just get better at memorizing patterns as they scale.

    Drawing on his background in evolutionary theory, David explains that systems like brains, and later culture, evolved to process information that changes too quickly for genetic evolution to keep up. He calls culture "evolution at light speed" because it allows us to store our accumulated knowledge externally (in books, tools, etc.) and build upon it without corrupting the original. This leads to his concept of "exbodiment", where we outsource our cognitive load to the world through things like maps, abacuses, or even language itself. We create these external tools, internalize the skills they teach us, improve them, and create a feedback loop that enhances our collective intelligence.

    However, he ends with a warning. While technology has historically complemented our deficient abilities, modern AI presents a new danger. Because we have an evolutionary drive to conserve energy, we will inevitably outsource our thinking to AI if we can. He fears this is already leading to a "diminution and dilution" of human thought and creativity. Just as our muscles atrophy without use, he argues, our brains will too, and we risk becoming mentally dependent on these systems.

    TOC:
    [00:00:00] Intelligence: Doing more with less
    [00:02:10] Why brains evolved: The limits of evolution
    [00:05:18] Culture as evolution at light speed
    [00:08:11] True meaning of emergence: "More is Different"
    [00:10:41] Why LLM capabilities are not true emergence
    [00:15:10] What real emergence would look like in AI
    [00:19:24] Symmetry breaking: Physics vs. Life
    [00:23:30] Two types of emergence: Knowledge In vs. Out
    [00:26:46] Causality, agency, and coarse-graining
    [00:32:24] "Exbodiment": Outsourcing thought to objects
    [00:35:05] Collective intelligence & the boundary of the mind
    [00:39:45] Mortal vs. Immortal forms of computation
    [00:42:13] The risk of AI: Atrophy of human thought

    David Krakauer, President and William H. Miller Professor of Complex Systems
    https://www.santafe.edu/people/profile/david-krakauer

    REFS:
    Large Language Models and Emergence: A Complex Systems Perspective
    David C. Krakauer, John W. Krakauer, Melanie Mitchell
    https://arxiv.org/abs/2506.11135

    Filmed at the Diverse Intelligences Summer Institute: https://disi.org/
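    Krakauer's water-molecules-to-fluid-dynamics example can be made concrete with a toy coarse-graining demo: tracking every "molecule" is the micro-level description, while one macro quantity obeying a simple law (here, the variance of a diffusing cloud growing roughly linearly with time) is the new, simpler level of description he has in mind. The sketch below is illustrative only; the particle counts and random-walk model are invented, not anything from the episode.

    import random, statistics

    random.seed(0)
    N_PARTICLES, N_STEPS = 10_000, 100

    # Micro description: the position of every individual "molecule" after a random walk.
    positions = [0.0] * N_PARTICLES
    for _ in range(N_STEPS):
        positions = [x + random.choice((-1.0, 1.0)) for x in positions]

    # Macro description: a single number (the variance of the cloud). For unit steps,
    # diffusion theory predicts it grows linearly with time: Var(x_t) ~ t.
    print(f"micro state: {N_PARTICLES} coordinates, first three = {positions[:3]}")
    print(f"macro state: variance = {statistics.pvariance(positions):.1f} (diffusion predicts ~ {N_STEPS})")

    His point about LLMs is that a genuine claim of emergence would require evidence of this kind of shift to a more compact internal description, not just a jump on a benchmark score.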

    50 min
  5. Pushing compute to the limits of physics

    21 JUL

    Pushing compute to the limits of physics

    Dr. Maxwell Ramstead grills Guillaume Verdon (AKA "Beff Jezos"), founder of the thermodynamic computing startup Extropic. Guillaume shares his unique path - from dreaming about space travel as a kid to becoming a physicist, then working on quantum computing at Google, to developing a radically new form of computing hardware for machine learning. He explains how he hit roadblocks with traditional physics and computing, leading him to start his company, building "thermodynamic computers." These are based on a new design for super-efficient chips that use the natural chaos of electrons (think noise and heat) to power AI tasks, which promises to speed up AND lower the costs of modern probabilistic techniques like sampling. He is driven by the pursuit of building computers that work more like your brain, which (by the way) runs on a banana and a glass of water!

    Guillaume talks about his alter ego, Beff Jezos, and the "Effective Accelerationism" (e/acc) movement that he initiated. Its objective is to speed up tech progress in order to "grow civilization" (as measured by energy use and innovation), rather than "slowing down out of fear". Guillaume argues we need to embrace variance, exploration, and optimism to avoid getting stuck or outpaced by competitors like China. He and Maxwell discuss big ideas like merging humans with AI, decentralizing intelligence, and why boundless growth (with smart constraints) is "key to humanity's future".

    REFS:
    1. John Archibald Wheeler - "It From Bit" Concept [00:04:45] - Foundational work proposing that physical reality emerges from information at the quantum level. Learn more: https://cqi.inf.usi.ch/qic/wheeler.pdf
    2. AdS/CFT Correspondence (Holographic Principle) [00:05:15] - Theoretical physics duality connecting quantum gravity in Anti-de Sitter space with conformal field theory. https://en.wikipedia.org/wiki/Holographic_principle
    3. Renormalization Group Theory [00:06:15] - Mathematical framework for analyzing physical systems across different length scales. https://www.damtp.cam.ac.uk/user/dbs26/AQFT/Wilsonchap.pdf
    4. Maxwell's Demon and Information Theory [00:21:15] - Thought experiment linking information processing to thermodynamics and entropy. https://plato.stanford.edu/entries/information-entropy/
    5. Landauer's Principle [00:29:45] - Fundamental limit establishing the minimum energy required for information erasure. https://en.wikipedia.org/wiki/Landauer%27s_principle
    6. Free Energy Principle and Active Inference [01:03:00] - Mathematical framework for understanding self-organizing systems and perception-action loops. https://www.nature.com/articles/nrn2787
    7. Max Tegmark - Information Bottleneck Principle [01:07:00] - Connections between information theory and renormalization in machine learning. https://arxiv.org/abs/1907.07331
    8. Fisher's Fundamental Theorem of Natural Selection [01:11:45] - Mathematical relationship between genetic variance and evolutionary fitness. https://en.wikipedia.org/wiki/Fisher%27s_fundamental_theorem_of_natural_selection
    9. Tensor Networks in Quantum Systems [00:06:45] - Computational framework for simulating many-body quantum systems. https://arxiv.org/abs/1912.10049
    10. Quantum Neural Networks [00:09:30] - Hybrid quantum-classical models for machine learning applications. https://en.wikipedia.org/wiki/Quantum_neural_network
    11. Energy-Based Models (EBMs) [00:40:00] - Probabilistic framework for unsupervised learning based on energy functions. https://www.researchgate.net/publication/200744586_A_tutorial_on_energy-based_learning
    12. Markov Chain Monte Carlo (MCMC) [00:20:00] - Sampling algorithm fundamental to modern AI and statistical physics. https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo
    13. Metropolis-Hastings Algorithm [00:23:00] - Core sampling method for probability distributions. https://arxiv.org/abs/1504.01896

    *** SPONSOR MESSAGE ***
    Google Gemini 2.5 Flash is a state-of-the-art language model in the Gemini app. Sign up at https://gemini.google.com
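    Refs 11-13 are the software side of what the hardware is meant to accelerate: drawing samples from a probability distribution defined by an energy function. Below is a minimal Metropolis-Hastings loop over a made-up double-well energy; it is purely illustrative (nothing here is Extropic's API or hardware), but it shows the propose/accept dance that a thermodynamic chip would, in effect, get from physical noise rather than from a pseudo-random number generator.

    import math, random

    random.seed(0)

    def energy(x):
        # Invented double-well energy: two modes near x = -1 and x = +1.
        return (x * x - 1.0) ** 2

    def metropolis_hastings(n_steps=50_000, step_size=0.5, temperature=1.0):
        x, samples = 0.0, []
        for _ in range(n_steps):
            proposal = x + random.gauss(0.0, step_size)   # symmetric random proposal
            delta_e = energy(proposal) - energy(x)
            # Metropolis rule: always accept downhill moves, sometimes accept uphill ones.
            if delta_e <= 0 or random.random() < math.exp(-delta_e / temperature):
                x = proposal
            samples.append(x)
        return samples

    samples = metropolis_hastings()
    left = sum(s < 0 for s in samples) / len(samples)
    print(f"fraction of samples in the left well: {left:.2f}")  # roughly 0.5 by symmetry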

    1h 24m
  6. The Fractured Entangled Representation Hypothesis (Kenneth Stanley, Akarsh Kumar)

    6 JUL

    The Fractured Entangled Representation Hypothesis (Kenneth Stanley, Akarsh Kumar)

    Are the AI models you use today impostors?

    Please watch the intro video we did before this: https://www.youtube.com/watch?v=o1q6Hhz0MAg

    In this episode, hosts Dr. Tim Scarfe and Dr. Keith Duggar are joined by AI researcher Prof. Kenneth Stanley and MIT PhD student Akarsh Kumar to discuss their fascinating paper, "Questioning Representational Optimism in Deep Learning."

    Imagine you ask two people to draw a perfect skull. One is a brilliant artist who understands anatomy; the other is a machine that just traces the image. Both drawings look identical, but the artist understands what a skull is - they know where the mouth is, how the jaw works, and that it's symmetrical. The machine just has a tangled mess of lines that happens to form the right picture. An AI with an elegant representation has the building blocks to generate truly new ideas.

    The Path Is the Goal: As Kenneth Stanley puts it, "it matters not just where you get, but how you got there". Two students can ace a math test, but the one who truly understands the concepts - instead of just memorizing formulas - is the one who will go on to make new discoveries.

    The show is a mixture of 3 separate recordings we have done: the original Patreon warmup with Tim/Kenneth, the Tim/Keith "Steakhouse" recorded after the main interview, then the main interview with Kenneth/Akarsh/Keith/Tim. Feel free to skip around. We had to edit this in a rush as we are travelling next week, but it's reasonably cleaned up.

    TOC:
    00:00:00 Intro: Garbage vs. Amazing Representations
    00:05:42 How Good Representations Form
    00:11:14 Challenging the "Bitter Lesson"
    00:18:04 AI Creativity & Representation Types
    00:22:13 Steakhouse: Critiques & Alternatives
    00:28:30 Steakhouse: Key Concepts & Goldilocks Zone
    00:39:42 Steakhouse: A Sober View on AI Risk
    00:43:46 Steakhouse: The Paradox of Open-Ended Search
    00:47:58 Main Interview: Paper Intro & Core Concepts
    00:56:44 Main Interview: Deception and Evolvability
    01:36:30 Main Interview: Reinterpreting Evolution
    01:56:16 Main Interview: Impostor Intelligence
    02:11:15 Main Interview: Recommendations for AI Research

    REFS:
    Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis
    Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley
    https://arxiv.org/pdf/2505.11581

    Why Greatness Cannot Be Planned: The Myth of the Objective
    Kenneth O. Stanley, Joel Lehman
    https://amzn.to/44xLaXK

    Original show with Kenneth from 4 years ago: https://www.youtube.com/watch?v=lhYGXYeMq_E

    Kenneth Stanley is SVP Open Endedness at Lila Sciences: https://x.com/kenneth0stanley
    Akarsh Kumar (MIT): https://akarshkumar.com/

    AND... Kenneth is HIRING (this is an OPPORTUNITY OF A LIFETIME!)
    Research Engineer: https://job-boards.greenhouse.io/lila/jobs/7890007002
    Research Scientist: https://job-boards.greenhouse.io/lila/jobs/8012245002

    TRANSCRIPT: https://app.rescript.info/public/share/W_T7E1OC2Wj49ccqlIOOztg2MJWaaVbovTeyxcFEQdU

    2h 16m
  7. The Fractured Entangled Representation Hypothesis (Intro)

    5 JUL

    The Fractured Entangled Representation Hypothesis (Intro)

    What if today's incredible AI is just a brilliant "impostor"? This episode features host Dr. Tim Scarfe in conversation with guests Prof. Kenneth Stanley (ex-OpenAI), Dr. Keith Duggar (MIT), and Akarsh Kumar (MIT).

    While AI today produces amazing results on the surface, its internal understanding is a complete mess, described as "total spaghetti" [00:00:49]. This is because it's trained with a brute-force method (SGD) that's like building a sandcastle: it looks right from a distance, but has no real structure holding it together [00:01:45].

    To explain the difference, Keith Duggar shares a great analogy about his high school physics classes [00:03:18]. One class was about memorizing lots of formulas for specific situations (like the "impostor" AI). The other used calculus to derive the answers from a deeper understanding, which was much easier and more powerful. This is the core difference: one method memorizes, the other truly understands.

    The episode then introduces a different, more powerful way to build AI, based on Kenneth Stanley's old experiment, "Picbreeder" [00:04:45]. This method creates AI with a shockingly clean and intuitive internal model of the world. For example, it might develop a model of a skull where it understands the "mouth" as a separate component it can open and close, without ever being explicitly trained on that action [00:06:15]. This deep understanding emerges bottom-up, without massive datasets.

    The secret is to abandon a fixed goal and embrace "deception" [00:08:42] - the idea that the stepping stones to a great discovery often don't look anything like the final result. Instead of optimizing for a target, the AI is built through an open-ended process of exploring what's "interesting" [00:09:15]. This creates a more flexible and adaptable foundation, a bit like how evolvability wins out in nature [00:10:30].

    The show concludes by arguing that this choice matters immensely. The "impostor" path may be hitting a wall, requiring insane amounts of money and energy for progress and failing to deliver true creativity or continual learning [00:13:00]. The ultimate message is a call to not put all our eggs in one basket [00:14:25]. We should explore these open-ended, creative paths to discover a more genuine form of intelligence, which may be found where we least expect it.

    REFS:
    Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis
    Akarsh Kumar, Jeff Clune, Joel Lehman, Kenneth O. Stanley
    https://arxiv.org/pdf/2505.11581

    Why Greatness Cannot Be Planned: The Myth of the Objective
    Kenneth O. Stanley, Joel Lehman
    https://amzn.to/44xLaXK

    Original show with Kenneth from 4 years ago: https://www.youtube.com/watch?v=lhYGXYeMq_E

    Kenneth Stanley is SVP Open Endedness at Lila Sciences: https://x.com/kenneth0stanley
    Akarsh Kumar (MIT): https://akarshkumar.com/

    AND... Kenneth is HIRING (this is an OPPORTUNITY OF A LIFETIME!)
    Research Engineer: https://job-boards.greenhouse.io/lila/jobs/7890007002
    Research Scientist: https://job-boards.greenhouse.io/lila/jobs/8012245002

    Tim's code visualisation of FER based on Akarsh's repo: https://github.com/ecsplendid/fer

    TRANSCRIPT: https://app.rescript.info/public/share/YKAZzZ6lwZkjTLRpVJreOOxGhLI8y4m3fAyU8NSavx0
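    Picbreeder's images come from compositional pattern-producing networks (CPPNs): each pixel's value is a small function of its coordinates, built by composing simple primitives, so regularities like symmetry live in the structure of the function rather than in memorized pixels. The sketch below is a hand-written stand-in for an evolved genome, purely to show the mechanism; the particular primitives and weights are invented (for the real FER experiments, see Tim's visualisation repo linked above).

    import math

    def cppn(x, y):
        # Each pixel is a composition of simple functions of its coordinates.
        # abs(x) bakes in left-right symmetry; the other terms add stripes and a ring.
        d = math.sqrt(x * x + y * y)              # radial distance from the centre
        stripes = math.sin(4.0 * abs(x))          # symmetric vertical stripes
        ring = math.exp(-4.0 * (d - 0.6) ** 2)    # a circular ring
        return math.tanh(stripes + 2.0 * ring - y)

    SIZE = 24
    for row in range(SIZE):
        y = 2.0 * row / (SIZE - 1) - 1.0
        line = "".join(
            "#" if cppn(2.0 * col / (SIZE - 1) - 1.0, y) > 0 else "."
            for col in range(SIZE)
        )
        print(line)  # crude ASCII rendering of the pattern

    Because the regularities sit in the composed structure, tweaking one term moves a whole coherent feature (the ring, the stripes) at once - the kind of modular, reusable representation the episode contrasts with SGD's "total spaghetti".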

    16 min
  8. Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)

    24 JUN

    Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)

    What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive 2-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence - and why the people building it might be the biggest threat of all. Kokotajlo predicts AGI by 2028 based on compute scaling trends. Marcus argues we haven't solved basic cognitive problems from his 2001 research. The stakes? If Kokotajlo is right and Marcus is wrong about safety progress, humanity may have already lost control.

    Sponsor messages:
    ========
    Google Gemini: Google Gemini features Veo3, a state-of-the-art AI video generation model in the Gemini app. Sign up at https://gemini.google.com
    Tufa AI Labs are hiring for ML Engineers and a Chief Scientist in Zurich/SF. They are top of the ARCv2 leaderboard! https://tufalabs.ai/
    ========

    Guest Powerhouse:
    Gary Marcus - Cognitive scientist, author of "Taming Silicon Valley," and AI's most prominent skeptic, who's been warning about the same fundamental problems for 25 years (https://garymarcus.substack.com/)
    Daniel Kokotajlo - Former OpenAI insider turned whistleblower who reveals the disturbing rationalizations of AI lab leaders in his viral "AI 2027" scenario (https://ai-2027.com/)
    Dan Hendrycks - Director of the Center for AI Safety who created the benchmarks used to measure AI progress and argues we have only years, not decades, to prevent catastrophe (https://danhendrycks.com/)

    Transcript: http://app.rescript.info/public/share/tEcx4UkToi-2jwS1cN51CW70A4Eh6QulBRxDILoXOno

    TOC:
    Introduction: The AI Arms Race
    00:00:04 - The Danger of Automated AI R&D
    00:00:43 - The Rationalization: "If we don't, someone else will"
    00:01:56 - Sponsor Reads (Tufa AI Labs & Google Gemini)
    00:02:55 - Guest Introductions
    The Philosophical Stakes
    00:04:13 - What is the Positive Vision for AGI?
    00:07:00 - The Abundance Scenario: Superintelligent Economy
    00:09:06 - Differentiating AGI and Superintelligence (ASI)
    00:11:41 - Sam Altman: "A Decade in a Month"
    00:14:47 - Economic Inequality & The UBI Problem
    Policy and Red Lines
    00:17:13 - The Pause Letter: Stopping vs. Delaying AI
    00:20:03 - Defining Three Concrete Red Lines for AI Development
    00:25:24 - Racing Towards Red Lines & The Myth of "Durable Advantage"
    00:31:15 - Transparency and Public Perception
    00:35:16 - The Rationalization Cascade: Why AI Labs Race to "Win"
    Forecasting AGI: Timelines and Methodologies
    00:42:29 - The Case for Short Timelines (Median 2028)
    00:47:00 - Scaling Limits: Compute, Data, and Money
    00:49:36 - Forecasting Models: Bio-Anchors and Agentic Coding
    00:53:15 - The 10^45 FLOP Thought Experiment
    The Great Debate: Cognitive Gaps vs. Scaling
    00:58:41 - Gary Marcus's Counterpoint: The Unsolved Problems of Cognition
    01:00:46 - Current AI Can't Play Chess Reliably
    01:08:23 - Can Tools and Neurosymbolic AI Fill the Gaps?
    01:16:13 - The Multi-Dimensional Nature of Intelligence
    01:24:26 - The Benchmark Debate: Data Contamination and Reliability
    01:31:15 - The Superhuman Coder Milestone Debate
    01:37:45 - The Driverless Car Analogy
    The Alignment Problem
    01:39:45 - Has Any Progress Been Made on Alignment?
    01:42:43 - "Fairly Reasonably Scares the Sh*t Out of Me"
    01:46:30 - Distinguishing Model vs. Process Alignment
    Scenarios and Conclusions
    01:49:26 - Gary's Alternative Scenario: The Neurosymbolic Shift
    01:53:35 - Will AI Become Jeff Dean?
    01:58:41 - Takeoff Speeds and Exceeding Human Intelligence
    02:03:19 - Final Disagreements and Closing Remarks

    REFS:
    Gary Marcus (2001) - The Algebraic Mind https://mitpress.mit.edu/9780262632683/the-algebraic-mind/ [00:59:00]
    Gary Marcus & Ernest Davis (2019) - Rebooting AI https://www.penguinrandomhouse.com/books/566677/rebooting-ai-by-gary-marcus-and-ernest-davis/ [01:31:59]
    Gary Marcus (2024) - Taming Silicon Valley https://www.hachettebookgroup.com/titles/gary-marcus/taming-silicon-valley/9781541704091/ [00:03:01]

    2h 7m
