Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).

  1. How AI Could Be A Mathematician's Co-Pilot by 2026 (Prof. Swarat Chaudhuri)

    2 DAYS AGO

    How AI Could Be A Mathematician's Co-Pilot by 2026 (Prof. Swarat Chaudhuri)

    Professor Swarat Chaudhuri of the University of Texas at Austin, and a visiting researcher at Google DeepMind, discusses breakthroughs in AI reasoning, theorem proving, and mathematical discovery. Chaudhuri explains his groundbreaking work on COPRA (a GPT-based prover agent) and shares insights on neurosymbolic approaches to AI.

    Professor Swarat Chaudhuri: https://www.cs.utexas.edu/~swarat/

    SPONSOR MESSAGES:
    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
    Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focussed on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/

    TOC:
    [00:00:00] 0. Introduction / CentML ad, Tufa ad
    1. AI Reasoning: From Language Models to Neurosymbolic Approaches
    [00:02:27] 1.1 Defining Reasoning in AI
    [00:09:51] 1.2 Limitations of Current Language Models
    [00:17:22] 1.3 Neuro-symbolic Approaches and Program Synthesis
    [00:24:59] 1.4 COPRA and In-Context Learning for Theorem Proving
    [00:34:39] 1.5 Symbolic Regression and LLM-Guided Abstraction
    2. AI in Mathematics: Theorem Proving and Concept Discovery
    [00:43:37] 2.1 AI-Assisted Theorem Proving and Proof Verification
    [01:01:37] 2.2 Symbolic Regression and Concept Discovery in Mathematics
    [01:11:57] 2.3 Scaling and Modularizing Mathematical Proofs
    [01:21:53] 2.4 COPRA: In-Context Learning for Formal Theorem-Proving
    [01:28:22] 2.5 AI-Driven Theorem Proving and Mathematical Discovery
    3. Formal Methods and Challenges in AI Mathematics
    [01:30:42] 3.1 Formal Proofs, Empirical Predicates, and Uncertainty in AI Mathematics
    [01:34:01] 3.2 Characteristics of Good Theoretical Computer Science Research
    [01:39:16] 3.3 LLMs in Theorem Generation and Proving
    [01:42:21] 3.4 Addressing Contamination and Concept Learning in AI Systems

    REFS:
    00:04:58 The Chinese Room Argument, https://plato.stanford.edu/entries/chinese-room/
    00:11:42 Software 2.0, https://medium.com/@karpathy/software-2-0-a64152b37c35
    00:11:57 Solving Olympiad Geometry Without Human Demonstrations, https://www.nature.com/articles/s41586-023-06747-5
    00:13:26 Lean, https://lean-lang.org/
    00:15:43 A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go Through Self-Play, https://www.science.org/doi/10.1126/science.aar6404
    00:19:24 DreamCoder (Ellis et al., PLDI 2021), https://arxiv.org/abs/2006.08381
    00:24:37 The Lambda Calculus, https://plato.stanford.edu/entries/lambda-calculus/
    00:26:43 Neural Sketch Learning for Conditional Program Generation, https://arxiv.org/pdf/1703.05698
    00:28:08 Learning Differentiable Programs With Admissible Neural Heuristics, https://arxiv.org/abs/2007.12101
    00:31:03 Symbolic Regression With a Learned Concept Library (Grayeli et al., NeurIPS 2024), https://arxiv.org/abs/2409.09359
    00:41:30 Formal Verification of Parallel Programs, https://dl.acm.org/doi/10.1145/360248.360251
    01:00:37 Training Compute-Optimal Large Language Models, https://arxiv.org/abs/2203.15556
    01:18:19 Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, https://arxiv.org/abs/2201.11903
    01:18:42 Draft, Sketch, and Prove: Guiding Formal Theorem Provers With Informal Proofs, https://arxiv.org/abs/2210.12283
    01:19:49 Learning Formal Mathematics From Intrinsic Motivation, https://arxiv.org/pdf/2407.00695
    01:20:19 An In-Context Learning Agent for Formal Theorem-Proving (Thakur et al., CoLM 2024), https://arxiv.org/pdf/2310.04353
    01:23:58 Learning to Prove Theorems via Interacting With Proof Assistants, https://arxiv.org/abs/1905.09381
    01:39:58 An In-Context Learning Agent for Formal Theorem-Proving (Thakur et al., CoLM 2024), https://arxiv.org/pdf/2310.04353
    01:42:24 Programmatically Interpretable Reinforcement Learning (V
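    To make the COPRA-style loop discussed in the episode concrete, here is a minimal sketch, not COPRA's actual code: `query_llm` (an LLM call) and `env` (a proof-assistant wrapper with `run_tactic` and `goal_closed` methods) are assumed interfaces introduced only for illustration. The model proposes a tactic, the proof assistant checks it, and failures are fed back into the prompt.

    # Hypothetical sketch of an in-context theorem-proving agent loop.
    # `query_llm` and `env` are assumed interfaces, not a real API.

    def format_prompt(goal, proof, history):
        lines = [f"Goal: {goal}", "Proof so far: " + "; ".join(proof)]
        lines += [f"Failed: {t} -> {e}" for t, e in history]  # checker feedback
        return "\n".join(lines)

    def prove(goal, query_llm, env, max_steps=32):
        """Iterate LLM tactic proposals against a symbolic checker."""
        history, proof = [], []                 # failures shown back to the model
        for _ in range(max_steps):
            tactic = query_llm(format_prompt(goal, proof, history))
            ok, feedback = env.run_tactic(tactic)  # proof assistant verifies step
            if ok:
                proof.append(tactic)
                history.clear()                    # fresh context after progress
                if env.goal_closed():
                    return proof
            else:
                history.append((tactic, feedback))  # error message guides retry
        raise RuntimeError("no proof found within budget")

    The division of labor is the neurosymbolic point: the LLM may hallucinate freely, but only steps the proof assistant verifies ever enter the proof.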

    1h 45m
  2. Nora Belrose - AI Development, Safety, and Meaning

    17 NOV

    Nora Belrose - AI Development, Safety, and Meaning

    Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), while highlighting how neural networks' progression from simple to complex learning patterns could have important implications for AI safety.

    Many fear that advanced AI will pose an existential threat, pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up. Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.

    The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.

    SPONSOR MESSAGES:
    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
    Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focussed on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/

    Nora Belrose:
    https://norabelrose.com/
    https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en
    https://x.com/norabelrose

    SHOWNOTES: https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0

    TOC:
    1. Neural Network Foundations
    [00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias
    [00:02:20] 1.2 LEACE and Concept Erasure Fundamentals
    [00:13:16] 1.3 LISA Technical Implementation and Applications
    [00:18:50] 1.4 Practical Implementation Challenges and Data Requirements
    [00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure
    2. Machine Learning Theory
    [00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias
    [00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation
    [00:43:05] 2.3 Grokking Phenomena and Training Dynamics
    [00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models
    [00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations
    3. AI Systems and Value Learning
    [00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems
    [00:53:06] 3.2 Global Connectivity vs Local Culture Preservation
    [00:58:18] 3.3 AI Capabilities and Future Development Trajectory
    4. Consciousness Theory
    [01:03:03] 4.1 4E Cognition and Extended Mind Theory
    [01:09:40] 4.2 Thompson's Views on Consciousness and Simulation
    [01:12:46] 4.3 Phenomenology and Consciousness Theory
    [01:15:43] 4.4 Critique of Illusionism and Embodied Experience
    [01:23:16] 4.5 AI Alignment and Counting Arguments Debate
    (TRUNCATED, TOC embedded in MP3 file with more information)
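    For intuition about the concept-erasure work discussed here, a bare-bones sketch follows, assuming binary concept labels. This is not the actual LEACE eraser from Belrose's paper (which is the least-change affine eraser under a whitened norm); it only illustrates the core idea that zeroing the feature-label cross-covariance leaves a linear probe with nothing to read.

    # Simplified linear concept erasure in the spirit of LEACE (not the real
    # eraser). Zeroing Cov(X, y) guarantees the best *linear* predictor of the
    # binary concept y from the edited features is a constant.
    import numpy as np

    def erase_concept(X, y):
        """Project features X (n x d) orthogonally to the X-y cross-covariance."""
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        sigma_xy = Xc.T @ yc / len(y)            # d-vector: Cov(X, y)
        u = sigma_xy / np.linalg.norm(sigma_xy)  # unit direction carrying the concept
        return X - np.outer(Xc @ u, u)           # remove that component from each row

    # Tiny demo: after erasure, the linear signal (the cross-covariance) is ~0.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000).astype(float)
    X = rng.normal(size=(1000, 8)) + np.outer(y, np.ones(8))  # concept linearly present
    Xe = erase_concept(X, y)
    print(np.abs((Xe - Xe.mean(0)).T @ (y - y.mean()) / len(y)).max())  # approx. 0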

    2h 30m
  3. Why Your GPUs are underutilised for AI - CentML CEO Explains

    13 NOV

    Why Your GPUs are underutilised for AI - CentML CEO Explains

    Prof. Gennady Pekhimenko (CEO of CentML, UofT) joins us in this *sponsored episode* to dive deep into AI system optimization and enterprise implementation. From NVIDIA's technical leadership model to the rise of open-source AI, Pekhimenko shares insights on bridging the gap between academic research and industrial applications. Learn about "dark silicon," GPU utilization challenges in ML workloads, and how modern enterprises can optimize their AI infrastructure. The conversation explores why some companies achieve only 10% GPU efficiency, and practical solutions for improving AI system performance. A must-watch for anyone interested in the technical foundations of enterprise AI and hardware optimization.

    CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Cheaper, faster, no commitments, pay as you go, scale massively, simple to set up. Check it out! https://centml.ai/pricing/

    SPONSOR MESSAGES:
    MLST is also sponsored by Tufa AI Labs - https://tufalabs.ai/ They are hiring cracked ML engineers/researchers to work on ARC and build AGI!

    SHOWNOTES (diarised transcript, TOC, references, summary, best quotes etc.): https://www.dropbox.com/scl/fi/w9kbpso7fawtm286kkp6j/Gennady.pdf?rlkey=aqjqmncx3kjnatk2il1gbgknk&st=2a9mccj8&dl=0

    TOC:
    1. AI Strategy and Leadership
    [00:00:00] 1.1 Technical Leadership and Corporate Structure
    [00:09:55] 1.2 Open Source vs Proprietary AI Models
    [00:16:04] 1.3 Hardware and System Architecture Challenges
    [00:23:37] 1.4 Enterprise AI Implementation and Optimization
    [00:35:30] 1.5 AI Reasoning Capabilities and Limitations
    2. AI System Development
    [00:38:45] 2.1 Computational and Cognitive Limitations of AI Systems
    [00:42:40] 2.2 Human-LLM Communication Adaptation and Patterns
    [00:46:18] 2.3 AI-Assisted Software Development Challenges
    [00:47:55] 2.4 Future of Software Engineering Careers in AI Era
    [00:49:49] 2.5 Enterprise AI Adoption Challenges and Implementation
    3. ML Infrastructure Optimization
    [00:54:41] 3.1 MLOps Evolution and Platform Centralization
    [00:55:43] 3.2 Hardware Optimization and Performance Constraints
    [01:05:24] 3.3 ML Compiler Optimization and Python Performance
    [01:15:57] 3.4 Enterprise ML Deployment and Cloud Provider Partnerships
    4. Distributed AI Architecture
    [01:27:05] 4.1 Multi-Cloud ML Infrastructure and Optimization
    [01:29:45] 4.2 AI Agent Systems and Production Readiness
    [01:32:00] 4.3 RAG Implementation and Fine-Tuning Considerations
    [01:33:45] 4.4 Distributed AI Systems Architecture and Ray Framework
    5. AI Industry Standards and Research
    [01:37:55] 5.1 Origins and Evolution of MLPerf Benchmarking
    [01:43:15] 5.2 MLPerf Methodology and Industry Impact
    [01:50:17] 5.3 Academic Research vs Industry Implementation in AI
    [01:58:59] 5.4 AI Research History and Safety Concerns
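    The "10% GPU efficiency" point can be made tangible with a toy micro-benchmark. This is illustrative only, not CentML tooling, and assumes PyTorch with a CUDA device: achieved throughput on small kernels is a tiny fraction of peak because launch overhead and memory traffic dominate, and utilization climbs with workload size.

    # Illustrative micro-benchmark of GPU utilization vs workload shape.
    # Requires PyTorch and a CUDA device.
    import time
    import torch

    def matmul_tflops(n, iters=50):
        a = torch.randn(n, n, device="cuda", dtype=torch.float16)
        b = torch.randn(n, n, device="cuda", dtype=torch.float16)
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(iters):
            a @ b
        torch.cuda.synchronize()                 # wait for queued kernels to finish
        dt = time.perf_counter() - t0
        return 2 * n**3 * iters / dt / 1e12      # 2*n^3 FLOPs per n x n matmul

    for n in (256, 1024, 4096, 8192):
        print(f"n={n:5d}: {matmul_tflops(n):7.1f} TFLOP/s")
    # Typical pattern: tiny matrices achieve a small fraction of peak throughput;
    # real workloads full of such small kernels show the same low utilization.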

    2h 9m
  4. Eliezer Yudkowsky and Stephen Wolfram on AI X-risk

    11 NOV

    Eliezer Yudkowsky and Stephen Wolfram on AI X-risk

    Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks. They traverse fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence. The discourse centers on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approaches the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears.

    *** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ ***

    TOC:
    1. Foundational AI Concepts and Risks
    [00:00:01] 1.1 AI Optimization and System Capabilities Debate
    [00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
    [00:20:09] 1.3 Existential Risk and Species Succession
    [00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
    2. Ethics and Philosophy in AI
    [00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
    [00:36:30] 2.2 Ethics and Moral Philosophy Debate
    [00:39:58] 2.3 Existential Risks and Digital Immortality
    [00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
    3. Truth and Logic in AI Systems
    [00:54:39] 3.1 AI Persuasion Ethics and Truth
    [01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
    [01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
    [01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
    4. AI Capabilities and Constraints
    [01:21:21] 4.1 AI Perception and Physical Laws
    [01:28:33] 4.2 AI Capabilities and Computational Constraints
    [01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
    [01:38:09] 4.4 Prediction vs Agency in AI Systems
    5. AI System Architecture and Behavior
    [01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
    [01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
    [02:09:41] 5.3 Machine Learning as Assembly of Computational Components
    [02:29:52] 5.4 AI Safety and Predictability in Complex Systems
    6. Goal Optimization and Alignment
    [02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
    [02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
    [03:02:18] 6.3 Optimization Goals and Human Existential Risk
    [03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
    7. AI Evolution and Risk Assessment
    [03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
    [03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
    [03:56:05] 7.3 AI Risk and Biological System Analogies
    [04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
    8. Future Implications and Economics
    [04:13:01] 8.1 Economic and Proliferation Considerations

    SHOWNOTES (transcription, references, summary, best quotes etc.): https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0
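    Wolfram's notion of computational irreducibility, which anchors much of this debate, is easy to demonstrate. The sketch below runs Rule 30, his canonical example: as far as is known, there is no shortcut to the center column's value at step t other than simulating all t steps, which bears directly on whether an AI system's behavior can be predicted without "running" it.

    # Rule 30 cellular automaton: Wolfram's stock example of computational
    # irreducibility. The center column looks statistically random and has no
    # known closed-form shortcut.
    def rule30_center_column(steps, width=None):
        width = width or (2 * steps + 1)        # wide enough to contain the light cone
        row = [0] * width
        row[width // 2] = 1                     # single seed cell
        column = [row[width // 2]]
        for _ in range(steps):
            row = [
                # Rule 30: new cell = left XOR (center OR right)
                row[i - 1] ^ (row[i] | row[(i + 1) % width])
                for i in range(width)
            ]
            column.append(row[width // 2])
        return column

    print("".join(map(str, rule30_center_column(40))))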

    4h 19m
  5. Pattern Recognition vs True Intelligence - Francois Chollet

    6 NOV

    Pattern Recognition vs True Intelligence - Francois Chollet

    Francois Chollet, a prominent AI expert and creator of ARC-AGI, discusses intelligence, consciousness, and artificial intelligence. Chollet explains that real intelligence isn't about memorizing information or having lots of knowledge - it's about being able to handle new situations effectively. This is why he believes current large language models (LLMs) have "near-zero intelligence" despite their impressive abilities. They're more like sophisticated memory and pattern-matching systems than truly intelligent beings.

    *** MLST IS SPONSORED BY TUFA AI LABS! The current winners of the ARC challenge, MindsAI, are part of Tufa AI Labs. They are hiring ML engineers. Are you interested?! Please go to https://tufalabs.ai/ ***

    He introduces his "Kaleidoscope Hypothesis," which suggests that while the world seems infinitely complex, it's actually made up of simpler patterns that repeat and combine in different ways. True intelligence, he argues, involves identifying these basic patterns and using them to understand new situations.

    Chollet also talks about consciousness, suggesting it develops gradually in children rather than appearing all at once. He believes consciousness exists in degrees - animals have it to some extent, and even human consciousness varies with age and circumstances (like being more conscious when learning something new versus doing routine tasks).

    On AI safety, Chollet takes a notably different stance from many in Silicon Valley. He views AGI development as a scientific challenge rather than a religious quest, and doesn't share the apocalyptic concerns of some AI researchers. He argues that intelligence itself isn't dangerous - it's just a tool for turning information into useful models. What matters is how we choose to use it.

    ARC-AGI Prize: https://arcprize.org/
    Francois Chollet: https://x.com/fchollet
    Shownotes: https://www.dropbox.com/scl/fi/j2068j3hlj8br96pfa7bi/CHOLLET_FINAL.pdf?rlkey=xkbr7tbnrjdl66m246w26uc8k&st=0a4ec4na&dl=0

    TOC:
    1. Intelligence and Model Building
    [00:00:00] 1.1 Intelligence Definition and ARC Benchmark
    [00:05:40] 1.2 LLMs as Program Memorization Systems
    [00:09:36] 1.3 Kaleidoscope Hypothesis and Abstract Building Blocks
    [00:13:39] 1.4 Deep Learning Limitations and System 2 Reasoning
    [00:29:38] 1.5 Intelligence vs. Skill in LLMs and Model Building
    2. ARC Benchmark and Program Synthesis
    [00:37:36] 2.1 Intelligence Definition and LLM Limitations
    [00:41:33] 2.2 Meta-Learning System Architecture
    [00:56:21] 2.3 Program Search and Occam's Razor
    [00:59:42] 2.4 Developer-Aware Generalization
    [01:06:49] 2.5 Task Generation and Benchmark Design
    3. Cognitive Systems and Program Generation
    [01:14:38] 3.1 System 1/2 Thinking Fundamentals
    [01:22:17] 3.2 Program Synthesis and Combinatorial Challenges
    [01:31:18] 3.3 Test-Time Fine-Tuning Strategies
    [01:36:10] 3.4 Evaluation and Leakage Problems
    [01:43:22] 3.5 ARC Implementation Approaches
    4. Intelligence and Language Systems
    [01:50:06] 4.1 Intelligence as Tool vs Agent
    [01:53:53] 4.2 Cultural Knowledge Integration
    [01:58:42] 4.3 Language and Abstraction Generation
    [02:02:41] 4.4 Embodiment in Cognitive Systems
    [02:09:02] 4.5 Language as Cognitive Operating System
    5. Consciousness and AI Safety
    [02:14:05] 5.1 Consciousness and Intelligence Relationship
    [02:20:25] 5.2 Development of Machine Consciousness
    [02:28:40] 5.3 Consciousness Prerequisites and Indicators
    [02:36:36] 5.4 AGI Safety Considerations
    [02:40:29] 5.5 AI Regulation Framework
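    A toy sketch (in no way Chollet's code, and far simpler than anything competitive on ARC) of the program-synthesis view discussed here: treat a handful of grid primitives as reusable building blocks, then search their compositions, shortest first, for a program consistent with a few input/output examples.

    # Brute-force program search over composable grid primitives, ARC-style.
    # Shortest programs are tried first, a crude form of Occam's razor.
    from itertools import product

    PRIMITIVES = {
        "rot90":  lambda g: [list(r) for r in zip(*g[::-1])],  # rotate clockwise
        "flip_h": lambda g: [r[::-1] for r in g],              # mirror left-right
        "flip_v": lambda g: g[::-1],                           # mirror top-bottom
        "invert": lambda g: [[1 - c for c in r] for r in g],   # swap 0s and 1s
    }

    def search_program(examples, max_depth=3):
        """Return the shortest primitive composition mapping every input to its output."""
        for depth in range(1, max_depth + 1):
            for names in product(PRIMITIVES, repeat=depth):
                def run(g, names=names):
                    for name in names:
                        g = PRIMITIVES[name](g)
                    return g
                if all(run(x) == y for x, y in examples):
                    return names
        return None

    examples = [([[1, 0], [0, 0]], [[0, 1], [0, 0]]),   # hidden task: horizontal flip
                ([[1, 1], [0, 1]], [[1, 1], [1, 0]])]
    print(search_program(examples))  # -> ('flip_h',)

    The combinatorial explosion of this search with depth is exactly the challenge the episode's program-synthesis sections discuss.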

    2h 43m
  6. The Elegant Math Behind Machine Learning - Anil Ananthaswamy

    4 NOV

    The Elegant Math Behind Machine Learning - Anil Ananthaswamy

    Anil Ananthaswamy is an award-winning science writer and former staff writer and deputy news editor for the London-based New Scientist magazine.

    Machine learning systems are making life-altering decisions for us: approving mortgage loans, determining whether a tumor is cancerous, or deciding if someone gets bail. They now influence developments and discoveries in chemistry, biology, and physics - the study of genomes, extrasolar planets, even the intricacies of quantum systems. And all this before large language models such as ChatGPT came on the scene. We are living through a revolution in machine-learning-powered AI that shows no signs of slowing down. This technology is based on relatively simple mathematical ideas, some of which go back centuries, including linear algebra and calculus, the stuff of seventeenth- and eighteenth-century mathematics. It took the birth and advancement of computer science, and the kindling of 1990s computer chips designed for video games, to ignite the explosion of AI that we see today. In this enlightening book, Anil Ananthaswamy explains the fundamental math behind machine learning, while suggesting intriguing links between artificial and natural intelligence. Might the same math underpin them both? As Ananthaswamy resonantly concludes, to make safe and effective use of artificial intelligence, we need to understand its profound capabilities and limitations, the clues to which lie in the math that makes machine learning possible.

    Why Machines Learn: The Elegant Math Behind Modern AI: https://amzn.to/3UAWX3D
    https://anilananthaswamy.com/

    Sponsor message: DO YOU WANT TO WORK ON ARC with the MindsAI team (current ARC winners)? Apply for an ML research position: benjamin@tufa.ai

    Shownotes: https://www.dropbox.com/scl/fi/wpv22m5jxyiqr6pqfkzwz/anil.pdf?rlkey=9c233jo5armr548ctwo419n6p&st=xzhahtje&dl=0

    Chapters:
    1. ML Fundamentals and Prerequisites
    [00:00:00] 1.1 Differences Between Human and Machine Learning
    [00:00:35] 1.2 Mathematical Prerequisites and Societal Impact of ML
    [00:02:20] 1.3 Author's Journey and Book Background
    [00:11:30] 1.4 Mathematical Foundations and Core ML Concepts
    [00:21:45] 1.5 Bias-Variance Tradeoff and Modern Deep Learning
    2. Deep Learning Architecture
    [00:29:05] 2.1 Double Descent and Overparameterization in Deep Learning
    [00:32:40] 2.2 Mathematical Foundations and Self-Supervised Learning
    [00:40:05] 2.3 High-Dimensional Spaces and Model Architecture
    [00:52:55] 2.4 Historical Development of Backpropagation
    3. AI Understanding and Limitations
    [00:59:13] 3.1 Pattern Matching vs Human Reasoning in ML Models
    [01:00:20] 3.2 Mathematical Foundations and Pattern Recognition in AI
    [01:04:08] 3.3 LLM Reliability and Machine Understanding Debate
    [01:12:50] 3.4 Historical Development of Deep Learning Technologies
    [01:15:21] 3.5 Alternative AI Approaches and Bio-inspired Methods
    4. Ethical and Neurological Perspectives
    [01:24:32] 4.1 Neural Network Scaling and Mathematical Limitations
    [01:31:12] 4.2 AI Ethics and Societal Impact
    [01:38:30] 4.3 Consciousness and Neurological Conditions
    [01:46:17] 4.4 Body Ownership and Agency in Neuroscience
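    As a minimal worked example of the book's thesis that centuries-old math powers learning, the sketch below fits a linear model by gradient descent, using nothing beyond derivatives (calculus) and matrix products (linear algebra).

    # Gradient descent on least-squares linear regression: the calculus and
    # linear algebra the book traces from the 1600s to modern ML.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=200)   # noisy observations

    w = np.zeros(3)
    lr = 0.1
    for _ in range(500):
        grad = 2 / len(y) * X.T @ (X @ w - y)     # derivative of mean squared error
        w -= lr * grad                            # step downhill along the gradient
    print(w.round(2))                             # recovers roughly [2.0, -1.0, 0.5]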

    1h 53m
  7. Michael Levin - Why Intelligence Isn't Limited To Brains.

    24 OCT

    Michael Levin - Why Intelligence Isn't Limited To Brains.

    Professor Michael Levin explores the revolutionary concept of diverse intelligence, demonstrating how cognitive capabilities extend far beyond traditional brain-based intelligence. Drawing from his groundbreaking research, he explains how even simple biological systems like gene regulatory networks exhibit learning, memory, and problem-solving abilities. Levin introduces key concepts like "cognitive light cones" - the scope of goals a system can pursue - and shows how these ideas are transforming our approach to cancer treatment and biological engineering. His insights challenge conventional views of intelligence and agency, with profound implications for both medicine and artificial intelligence development.

    This deep discussion reveals how understanding intelligence as a spectrum, from molecular networks to human minds, could be crucial for humanity's future technological development. Contains technical discussion of biological systems, cybernetics, and theoretical frameworks for understanding emergent cognition.

    Prof. Michael Levin
    https://as.tufts.edu/biology/people/faculty/michael-levin
    https://x.com/drmichaellevin

    Sponsor message: DO YOU WANT TO WORK ON ARC with the MindsAI team (current ARC winners)? Apply for an ML research position: benjamin@tufa.ai

    TOC:
    1. Intelligence Fundamentals and Evolution
    [00:00:00] 1.1 Future Evolution of Human Intelligence and Consciousness
    [00:03:00] 1.2 Science Fiction's Role in Exploring Intelligence Possibilities
    [00:08:15] 1.3 Essential Characteristics of Human-Level Intelligence and Relationships
    [00:14:20] 1.4 Biological Systems Architecture and Intelligence
    2. Biological Computing and Cognition
    [00:24:00] 2.1 Agency and Intelligence in Biological Systems
    [00:30:30] 2.2 Learning Capabilities in Gene Regulatory Networks
    [00:35:37] 2.3 Biological Control Systems and Competency Architecture
    [00:39:58] 2.4 Scientific Metaphors and Polycomputing Paradigm
    3. Systems and Collective Intelligence
    [00:43:26] 3.1 Embodiment and Problem-Solving Spaces
    [00:44:50] 3.2 Perception-Action Loops and Biological Intelligence
    [00:46:55] 3.3 Intelligence, Wisdom and Collective Systems
    [00:53:07] 3.4 Cancer and Cognitive Light Cones
    [00:57:09] 3.5 Emergent Intelligence and AI Agency

    Shownotes: https://www.dropbox.com/scl/fi/i2vl1vs009thg54lxx5wc/LEVIN.pdf?rlkey=dtk8okhbsejryiu2vrht19qp6&st=uzi0vo45&dl=0

    REFS:
    [0:05:30] A Fire Upon the Deep - Vernor Vinge sci-fi novel on AI and consciousness
    [0:05:35] Maria Chudnovsky - MacArthur Fellow, Princeton mathematician, graph theory expert
    [0:14:20] Bow-tie architecture in biological systems - Network structure research by Csete & Doyle
    [0:15:40] Richard Watson - Southampton Professor, evolution and learning systems expert
    [0:17:00] Levin paper on human issues in AI and evolution
    [0:19:00] Bow-tie architecture in Darwin's agential materialism - Levin
    [0:22:55] Philip Goff - Work on panpsychism and consciousness in Galileo's Error
    [0:23:30] Strange Loop - Hofstadter's work on self-reference and consciousness
    [0:25:00] The Hard Problem of Consciousness - Van Gulick
    [0:26:15] Daniel Dennett - Theories on consciousness and intentional systems
    [0:29:35] Principle of Least Action - Light path selection in physics
    [0:29:50] Free Energy Principle - Friston's unified behavioral framework
    [0:30:35] Gene regulatory networks - Learning capabilities in biological systems
    [0:36:55] Minimal networks with learning capacity - Levin
    [0:38:50] Multi-scale competency in biological systems - Levin
    [0:41:40] Polycomputing paradigm - Biological computation by Bongard & Levin
    [0:45:40] Collective intelligence in biology - Levin et al.
    [0:46:55] Niche construction and stigmergy - Torday
    [0:53:50] Tasmanian Devil Facial Tumor Disease - Transmissible cancer research
    [0:55:05] Cognitive light cone - Computational boundaries of self - Levin
    [0:58:05] Cogniti
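    As a toy illustration of the episode's claim that even tiny networks can "learn" (a generic habituation model, not one from Levin's papers), the following two-variable system habituates: its response to a repeated, identical stimulus shrinks as an inhibitory state accumulates, one of the simplest behaviors counted as learning.

    # Minimal habituation: repeated pulses evoke ever-smaller responses because
    # each response builds up a slowly decaying inhibitor.
    def habituating_node(stimuli, k_inhibit=0.3, k_decay=0.02):
        inhibitor = 0.0
        responses = []
        for s in stimuli:
            response = s / (1.0 + inhibitor)     # inhibitor damps the response
            inhibitor += k_inhibit * response    # each response builds inhibition
            inhibitor *= (1.0 - k_decay)         # slow forgetting between pulses
            responses.append(response)
        return responses

    pulses = [1.0] * 10
    for i, r in enumerate(habituating_node(pulses), 1):
        print(f"pulse {i:2d}: response {r:.3f}")  # monotonically decreasing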

    1h 4m
  8. Speechmatics CTO - Next-Generation Speech Recognition

    23 OCT

    Speechmatics CTO - Next-Generation Speech Recognition

    Will Williams is CTO of Speechmatics in Cambridge. In this sponsored episode, he shares deep technical insights into modern speech recognition technology and system architecture. The episode covers several key technical areas:

    * Speechmatics' hybrid approach to ASR, which focusses on unsupervised learning methods, achieving comparable results with 100x less data than fully supervised approaches. Williams explains why this is more efficient and generalizable than end-to-end models like Whisper.
    * Their production architecture implementing multiple operating points for different latency-accuracy trade-offs, with careful latency padding (up to 1.8 seconds) to ensure a consistent user experience. The system uses lattice-based decoding with language model integration for improved accuracy.
    * The challenges and solutions in real-time ASR, including their approach to diarization (speaker identification), handling cross-talk, and implicit source separation. Williams explains why these problems remain difficult even with modern deep learning approaches.
    * Their testing and deployment infrastructure, including the use of mirrored environments for catching edge cases in production, and their strategy of maintaining global models rather than allowing customer-specific fine-tuning.
    * Technical evolution in ASR, from the early days of custom CUDA kernels and manual memory management to modern frameworks, with Williams offering interesting critiques of current PyTorch memory management approaches and arguing for more efficient direct memory allocation in production systems.

    Get coding with their API! This is their URL: https://www.speechmatics.com/

    DO YOU WANT TO WORK ON ARC with the MindsAI team (current ARC winners)? MLST is sponsored by Tufa Labs. Focus: ARC, LLMs, test-time compute, active inference, system-2 reasoning, and more. Interested? Apply for an ML research position: benjamin@tufa.ai

    TOC:
    1. ASR Core Technology & Real-time Architecture
    [00:00:00] 1.1 ASR and Diarization Fundamentals
    [00:05:25] 1.2 Real-time Conversational AI Architecture
    [00:09:21] 1.3 Neural Network Streaming Implementation
    [00:12:49] 1.4 Multi-modal System Integration
    2. Production System Optimization
    [00:29:38] 2.1 Production Deployment and Testing Infrastructure
    [00:35:40] 2.2 Model Architecture and Deployment Strategy
    [00:37:12] 2.3 Latency-Accuracy Trade-offs
    [00:39:15] 2.4 Language Model Integration
    [00:40:32] 2.5 Lattice-based Decoding Architecture
    3. Performance Evaluation & Ethical Considerations
    [00:44:00] 3.1 ASR Performance Metrics and Capabilities
    [00:46:35] 3.2 AI Regulation and Evaluation Methods
    [00:51:09] 3.3 Benchmark and Testing Challenges
    [00:54:30] 3.4 Real-world Implementation Metrics
    [01:00:51] 3.5 Ethics and Privacy Considerations
    4. ASR Technical Evolution
    [01:09:00] 4.1 WER Calculation and Evaluation Methodologies
    [01:10:21] 4.2 Supervised vs Self-Supervised Learning Approaches
    [01:21:02] 4.3 Temporal Learning and Feature Processing
    [01:24:45] 4.4 Feature Engineering to Automated ML
    5. Enterprise Implementation & Scale
    [01:27:55] 5.1 Future AI Systems and Adaptation
    [01:31:52] 5.2 Technical Foundations and History
    [01:34:53] 5.3 Infrastructure and Team Scaling
    [01:38:05] 5.4 Research and Talent Strategy
    [01:41:11] 5.5 Engineering Practice Evolution

    Shownotes: https://www.dropbox.com/scl/fi/d94b1jcgph9o8au8shdym/Speechmatics.pdf?rlkey=bi55wvktzomzx0y5sic6jz99y&st=6qwofv8t&dl=0
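    Since section 4.1 covers WER calculation, here is the standard metric in code: word error rate is the word-level Levenshtein distance (substitutions + insertions + deletions) divided by the number of reference words.

    # Standard word error rate via dynamic-programming edit distance.
    def wer(reference, hypothesis):
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between first i ref words and first j hyp words
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i                          # deleting i reference words
        for j in range(len(hyp) + 1):
            dp[0][j] = j                          # inserting j hypothesis words
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
        return dp[len(ref)][len(hyp)] / len(ref)

    print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words

    Note that WER can exceed 1.0 when the hypothesis is much longer than the reference, one of the evaluation quirks touched on in the benchmarking discussion.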

    1h 46m
