Machine Learning Street Talk (MLST) Podcast

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, Ph.D. (MIT) (https://www.linkedin.com/in/dr-keith-duggar/).

  1. Prof. Mark Solms - The Hidden Spring

    2 DAYS AGO

    Prof. Mark Solms - The Hidden Spring

Prof. Mark Solms, a neuroscientist and psychoanalyst, discusses his groundbreaking work on consciousness, challenging conventional cortex-centric views and emphasizing the role of brainstem structures in generating consciousness and affect.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

Key points discussed:
- The limitations of vision-centric approaches to consciousness studies
- Evidence from decorticated animals and hydranencephalic children supporting the brainstem's role in consciousness
- The relationship between homeostasis, the free energy principle, and consciousness
- Critiques of behaviorism and modern theories of consciousness
- The importance of subjective experience in understanding brain function

The discussion also explored broader topics:
- The potential impact of affect-based theories on AI development
- The role of the SEEKING system in exploration and learning
- Connections between neuroscience, psychoanalysis, and philosophy of mind
- Challenges in studying consciousness and the limitations of current theories

Mark Solms: https://neuroscience.uct.ac.za/contacts/mark-solms

Show notes and transcript: https://www.dropbox.com/scl/fo/roipwmnlfmwk2e7kivzms/ACjZF-VIGC2-Suo30KcwVV0?rlkey=53y8v2cajfcgrf17p1h7v3suz&st=z8vu81hn&dl=0

TOC (* are best bits):
00:00:00 1. Intro: Challenging vision-centric approaches to consciousness *
00:02:20 2. Evidence from decorticated animals and hydranencephalic children *
00:07:40 3. Emotional responses in hydranencephalic children
00:10:40 4. Brainstem stimulation and affective states
00:15:00 5. Brainstem's role in generating affective consciousness *
00:21:50 6. Dual-aspect monism and the mind-brain relationship
00:29:37 7. Information, affect, and the hard problem of consciousness *
00:37:25 8. Wheeler's participatory universe and Chalmers' theories
00:48:51 9. Homeostasis, free energy principle, and consciousness *
00:59:25 10. Affect, voluntary behavior, and decision-making
01:05:45 11. Psychoactive substances, REM sleep, and consciousness research
01:12:14 12. Critiquing behaviorism and modern consciousness theories *
01:24:25 13. The SEEKING system and exploration in neuroscience

Refs:
1. Mark Solms' book "The Hidden Spring" [00:20:34] (MUST READ!) https://amzn.to/3XyETb3
2. Karl Friston's free energy principle [00:03:50] https://www.nature.com/articles/nrn2787
3. Hydranencephaly condition [00:07:10] https://en.wikipedia.org/wiki/Hydranencephaly
4. Periaqueductal gray (PAG) [00:08:57] https://en.wikipedia.org/wiki/Periaqueductal_gray
5. Positron Emission Tomography (PET) [00:13:52] https://en.wikipedia.org/wiki/Positron_emission_tomography
6. Paul MacLean's triune brain theory [00:03:30] https://en.wikipedia.org/wiki/Triune_brain
7. Baruch Spinoza's philosophy of mind [00:23:48] https://plato.stanford.edu/entries/spinoza-epistemology-mind
8. Claude Shannon's "A Mathematical Theory of Communication" [00:32:15] https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
9. Francis Crick's "The Astonishing Hypothesis" [00:39:57] https://en.wikipedia.org/wiki/The_Astonishing_Hypothesis
10. Frank Jackson's Knowledge Argument [00:40:54] https://plato.stanford.edu/entries/qualia-knowledge/
11. Mesolimbic dopamine system [01:11:51] https://en.wikipedia.org/wiki/Mesolimbic_pathway
12. Jaak Panksepp's SEEKING system [01:25:23] https://en.wikipedia.org/wiki/Jaak_Panksepp#Affective_neuroscience
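For listeners who want the formal anchor behind the homeostasis discussion, the variational free energy at the heart of Friston's principle (ref 2) can be written in its standard textbook form. This is background we have added for reference, not a formula quoted in the episode:

```latex
% Variational free energy F as an upper bound on surprise (standard form):
% minimizing F over the approximate posterior q(s) both fits q to the true
% posterior p(s|o) and bounds the organism's "surprise" -ln p(o).
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[\, q(s) \,\|\, p(s \mid o) \,\right] - \ln p(o) \;\ge\; -\ln p(o)
```

On this reading, a homeostatic system that minimizes F is implicitly keeping itself within expected (unsurprising) states, which is the bridge Solms draws between physiology and affect.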

    1h 27m
  2. Patrick Lewis (Cohere) - Retrieval Augmented Generation

    4 DAYS AGO

    Patrick Lewis (Cohere) - Retrieval Augmented Generation

Dr. Patrick Lewis, who coined the term RAG (Retrieval Augmented Generation) and now works at Cohere, discusses the evolution of language models, RAG systems, and challenges in AI evaluation.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

Key topics covered:
- Origins and evolution of Retrieval Augmented Generation (RAG)
- Challenges in evaluating RAG systems and language models
- Human-AI collaboration in research and knowledge work
- Word embeddings and the progression to modern language models
- Dense vs sparse retrieval methods in information retrieval

The discussion also explored broader implications and applications:
- Balancing faithfulness and fluency in RAG systems
- User interface design for AI-augmented research tools
- The journey from chemistry to AI research
- Challenges in enterprise search compared to web search
- The importance of data quality in training AI models

Patrick Lewis: https://www.patricklewis.io/
Cohere Command models, check them out - they are amazing for RAG! https://cohere.com/command

TOC:
00:00:00 1. Intro to RAG
00:05:30 2. RAG Evaluation: Poll framework & model performance
00:12:55 3. Data Quality: Cleanliness vs scale in AI training
00:15:13 4. Human-AI Collaboration: Research agents & UI design
00:22:57 5. RAG Origins: Open-domain QA to generative models
00:30:18 6. RAG Challenges: Info retrieval, tool use, faithfulness
00:42:01 7. Dense vs Sparse Retrieval: Techniques & trade-offs
00:47:02 8. RAG Applications: Grounding, attribution, hallucination prevention
00:54:04 9. UI for RAG: Human-computer interaction & model optimization
00:59:01 10. Word Embeddings: Word2Vec, GloVe, and semantic spaces
01:06:43 11. Language Model Evolution: BERT, GPT, and beyond
01:11:38 12. AI & Human Cognition: Sequential processing & chain-of-thought

Refs:
1. Retrieval Augmented Generation (RAG) paper / Patrick Lewis et al. [00:27:45] https://arxiv.org/abs/2005.11401
2. LAMA (LAnguage Model Analysis) probe / Petroni et al. [00:26:35] https://arxiv.org/abs/1909.01066
3. KILT (Knowledge Intensive Language Tasks) benchmark / Petroni et al. [00:27:05] https://arxiv.org/abs/2009.02252
4. Word2Vec algorithm / Tomas Mikolov et al. [01:00:25] https://arxiv.org/abs/1301.3781
5. GloVe (Global Vectors for Word Representation) / Pennington et al. [01:04:35] https://nlp.stanford.edu/projects/glove/
6. BERT (Bidirectional Encoder Representations from Transformers) / Devlin et al. [01:08:00] https://arxiv.org/abs/1810.04805
7. "The Language Game" book / Nick Chater and Morten H. Christiansen [01:11:40] https://amzn.to/4grEUpG

Disclaimer: This is the sixth video from our Cohere partnership. We were not told what to say in the interview. Filmed in Seattle in June 2024.
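For readers new to the technique, here is a minimal sketch of the retrieve-augment-generate loop this episode is about. It is illustrative only: `search` and `generate` are hypothetical stand-ins for a retriever (a search API or a dense/sparse index) and an LLM client, not Cohere's or Brave's actual SDKs.

```python
# Minimal RAG sketch (illustrative): retrieve evidence, augment the
# prompt with it, then generate a grounded answer.

def search(query: str, k: int = 3) -> list[str]:
    """Hypothetical retriever: return the k most relevant text passages."""
    raise NotImplementedError("plug in your retriever here")

def generate(prompt: str) -> str:
    """Hypothetical LLM call: return the model's completion for the prompt."""
    raise NotImplementedError("plug in your LLM client here")

def rag_answer(question: str) -> str:
    # 1. Retrieve passages relevant to the question.
    passages = search(question)
    # 2. Augment: ground the prompt in the retrieved evidence, so the
    #    answer can be faithful and attributable rather than purely
    #    parametric (the faithfulness/fluency trade-off discussed above).
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer using only the sources below; cite them as [n].\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    # 3. Generate the grounded, citation-bearing answer.
    return generate(prompt)
```

The dense-vs-sparse discussion in the episode concerns step 1 only: whether `search` ranks passages by embedding similarity or by term-matching scores such as BM25.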

    1h 14m
  3. Ashley Edwards - Genie Paper (DeepMind/Runway)

    13 SEPT

    Ashley Edwards - Genie Paper (DeepMind/Runway)

Ashley Edwards, who co-authored the Genie paper while at DeepMind and is now at Runway, covers several key aspects of the Genie AI system and its applications in video generation, robotics, and game creation.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

Key points discussed:
- Genie's approach to learning interactive environments, balancing compression and fidelity
- The use of latent action models and VQ-VAE models for video processing and tokenization
- Challenges in maintaining action consistency across frames and integrating text-to-image models
- Evaluation metrics for AI-generated content, such as FID and PS&R diff metrics

The discussion also explored broader implications and applications:
- The potential impact of AI video generation on content creation jobs
- Applications of Genie in game generation and robotics
- The use of foundation models in robotics and the differences between internet video data and specialized robotics data
- Challenges in mapping AI-generated actions to real-world robotic actions

Ashley Edwards: https://ashedwards.github.io/

TOC (* are best bits):
00:00:00 1. Intro to Genie & Brave Search API: Trade-offs & limitations *
00:02:26 2. Genie's Architecture: Latent action, VQ-VAE, video processing *
00:05:06 3. Genie's Constraints: Frame consistency & image model integration
00:07:26 4. Evaluation: FID, PS&R diff metrics & latent induction methods
00:09:44 5. AI Video Gen: Content creation impact, depth & parallax effects
00:11:39 6. Model Scaling: Training data impact & computational trade-offs
00:13:50 7. Game & Robotics Apps: Gamification & action mapping challenges *
00:16:16 8. Robotics Foundation Models: Action space & data considerations *
00:19:18 9. Mask-GPT & Video Frames: Real-time optimization, RL from videos
00:20:34 10. Research Challenges: AI value, efficiency vs. quality, safety
00:24:20 11. Future Dev: Efficiency improvements & fine-tuning strategies

Refs:
1. Genie (learning interactive environments from videos) / Ashley Edwards and DeepMind colleagues [00:01] https://arxiv.org/abs/2402.15391
2. VQ-VAE (Vector Quantized Variational Autoencoder) / Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu [02:43] https://arxiv.org/abs/1711.00937
3. FID (Fréchet Inception Distance) metric / Martin Heusel et al. [07:37] https://arxiv.org/abs/1706.08500
4. PS&R (Precision and Recall) metric / Mehdi S. M. Sajjadi et al. [08:02] https://arxiv.org/abs/1806.00035
5. Vision Transformer (ViT) architecture / Alexey Dosovitskiy et al. [12:14] https://arxiv.org/abs/2010.11929
6. Genie (robotics foundation models) / Google DeepMind [17:34] https://deepmind.google/research/publications/60474/
7. Chelsea Finn's lab work on robotics datasets / Chelsea Finn [17:38] https://ai.stanford.edu/~cbfinn/
8. Imitation from observation in reinforcement learning / YuXuan Liu [20:58] https://arxiv.org/abs/1707.03374
9. Waymo's autonomous driving technology / Waymo [22:38] https://waymo.com/
10. Gen-3 model release by Runway / Runway [23:48] https://runwayml.com/
11. Classifier-free guidance technique / Jonathan Ho and Tim Salimans [24:43] https://arxiv.org/abs/2207.12598
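As background on the tokenization step discussed here, below is a sketch of the VQ-VAE quantization operation (ref 2): continuous encoder outputs are snapped to their nearest codebook entries, yielding the discrete tokens a model like Genie operates on. The codebook size and dimensions are illustrative assumptions, not Genie's actual configuration.

```python
import numpy as np

# Sketch of VQ-VAE quantization (van den Oord et al., ref 2): each encoder
# output vector is replaced by its nearest codebook entry, turning
# continuous features into discrete tokens. Shapes here are illustrative.

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # 512 discrete codes, 64-dim each
z_e = rng.normal(size=(16, 64))         # 16 encoder output vectors

# Pairwise squared distances between encoder outputs and codebook entries.
d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (16, 512)
tokens = d.argmin(axis=1)     # discrete token ids (what the model predicts)
z_q = codebook[tokens]        # quantized vectors passed to the decoder
print(tokens[:5], z_q.shape)  # first five token ids and (16, 64)
```

The compression/fidelity balance mentioned above lives in these choices: a smaller codebook or coarser spatial grid compresses more but reconstructs less faithfully.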

    25 min
  4. Cohere's SVP Technology - Saurabh Baji

    12 SEPT

    Cohere's SVP Technology - Saurabh Baji

Saurabh Baji discusses Cohere's approach to developing and deploying large language models (LLMs) for enterprise use.

- Cohere focuses on pragmatic, efficient models tailored for business applications rather than pursuing the largest possible models.
- They offer flexible deployment options, from cloud services to on-premises installations, to meet diverse enterprise needs.
- Retrieval-augmented generation (RAG) is highlighted as a critical capability, allowing models to leverage enterprise data securely.
- Cohere emphasizes model customization, fine-tuning, and tools like reranking to optimize performance for specific use cases (see the sketch after this entry).
- The company has seen significant growth, transitioning from developer-focused to enterprise-oriented services.
- Major customers like Oracle, Fujitsu, and TD Bank are using Cohere's models across various applications, from HR to finance.
- Baji predicts a surge in enterprise AI adoption over the next 12-18 months as more companies move from experimentation to production.
- He emphasizes the importance of trust, security, and verifiability in enterprise AI applications.

The interview provides insights into Cohere's strategy, technology, and vision for the future of enterprise AI adoption.

https://www.linkedin.com/in/saurabhbaji/
https://x.com/sbaji
https://cohere.com/
https://cohere.com/business

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

TOC (* are best bits):
00:00:00 1. Introduction and Background
00:04:24 2. Cloud Infrastructure and LLM Optimization
00:06:43 2.1 Model deployment and fine-tuning strategies *
00:09:37 3. Enterprise AI Deployment Strategies
00:11:10 3.1 Retrieval-augmented generation in enterprise environments *
00:13:40 3.2 Standardization vs. customization in cloud services *
00:18:20 4. AI Model Evaluation and Deployment
00:18:20 4.1 Comprehensive evaluation frameworks *
00:21:20 4.2 Key components of AI model stacks *
00:25:50 5. Retrieval Augmented Generation (RAG) in Enterprise
00:32:10 5.1 Pragmatic approach to RAG implementation *
00:33:45 6. AI Agents and Tool Integration
00:33:45 6.1 Leveraging tools for AI insights *
00:35:30 6.2 Agent-based AI systems and diagnostics *
00:42:55 7. AI Transparency and Reasoning Capabilities
00:49:10 8. AI Model Training and Customization
00:57:10 9. Enterprise AI Model Management
01:02:10 9.1 Managing AI model versions for enterprise customers *
01:04:30 9.2 Future of language model programming *
01:06:10 10. AI-Driven Software Development
01:06:10 10.1 AI bridging human expression and task achievement *
01:08:00 10.2 AI-driven virtual app fabrics in enterprise *
01:13:33 11. Future of AI and Enterprise Applications
01:21:55 12. Cohere's Customers and Use Cases
01:21:55 12.1 Cohere's growth and enterprise partnerships *
01:27:14 12.2 Diverse customers using generative AI *
01:27:50 12.3 Industry adaptation to generative AI *
01:29:00 13. Technical Advantages of Cohere Models
01:29:00 13.1 Handling large context windows *
01:29:40 13.2 Low latency impact on developer productivity *

Disclaimer: This is the fifth video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out of the interview. Filmed in Seattle in Aug 2024.
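Since reranking comes up as one of the optimization tools above, here is a generic sketch of second-stage reranking: a cheap first-pass retriever returns many candidates, and a more expensive relevance model reorders them before the LLM sees the top few. `relevance` is a hypothetical stand-in for a cross-encoder or a hosted rerank endpoint; this is not Cohere's API.

```python
# Generic second-stage reranking sketch (illustrative only).

def relevance(query: str, doc: str) -> float:
    """Hypothetical scorer: higher means more relevant to the query."""
    raise NotImplementedError("plug in a reranking model here")

def rerank(query: str, candidates: list[str], top_n: int = 5) -> list[str]:
    # Score every first-pass candidate with the stronger model, then keep
    # only the best top_n for the generation step.
    ranked = sorted(candidates, key=lambda d: relevance(query, d), reverse=True)
    return ranked[:top_n]
```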

    1h 30m
  5. David Hanson's Vision for Sentient Robots

    10 SEPT

    David Hanson's Vision for Sentient Robots

David Hanson, CEO of Hanson Robotics and creator of the humanoid robot Sophia, explores the intersection of artificial intelligence, ethics, and human potential. In this thought-provoking interview, Hanson discusses his vision for developing AI systems that embody the best aspects of humanity while pushing beyond our current limitations, aiming to achieve what he calls "super wisdom."

YT version: https://youtu.be/LFCIEhlsozU

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

The interview with David Hanson covers:
- The importance of incorporating biological drives and compassion into AI systems
- Hanson's concept of "existential pattern ethics" as a basis for AI morality
- The potential for AI to enhance human intelligence and wisdom
- Challenges in developing artificial general intelligence (AGI)
- The need to democratize AI technologies globally
- Potential future advancements in human-AI integration and their societal impacts
- Concerns about technological augmentation exacerbating inequality
- The role of ethics in guiding AI development and deployment

Hanson advocates for creating AI systems that embody the best aspects of humanity while surpassing current human limitations, aiming for "super wisdom" rather than just artificial superintelligence.

David Hanson: https://www.hansonrobotics.com/david-hanson/
https://www.youtube.com/watch?v=9u1O954cMmE

TOC:
1. Introduction and Background [00:00:00]
1.1. David Hanson's interdisciplinary background [0:01:49]
1.2. Introduction to Sophia, the realistic robot [0:03:27]
2. Human Cognition and AI [0:03:50]
2.1. Importance of social interaction in cognition [0:03:50]
2.2. Compassion as distinguishing factor [0:05:55]
2.3. AI augmenting human intelligence [0:09:54]
3. Developing Human-like AI [0:13:17]
3.1. Incorporating biological drives in AI [0:13:17]
3.2. Creating AI with agency [0:20:34]
3.3. Implementing flexible desires in AI [0:23:23]
4. Ethics and Morality in AI [0:27:53]
4.1. Enhancing humanity through AI [0:27:53]
4.2. Existential pattern ethics [0:30:14]
4.3. Expanding morality beyond restrictions [0:35:35]
5. Societal Impact of AI [0:38:07]
5.1. AI adoption and integration [0:38:07]
5.2. Democratizing AI technologies [0:38:32]
5.3. Human-AI integration and identity [0:43:37]
6. Future Considerations [0:50:03]
6.1. Technological augmentation and inequality [0:50:03]
6.2. Emerging technologies for mental health [0:50:32]
6.3. Corporate ethics in AI development [0:52:26]

This was filmed at AGI-24.

    53 min
  6. The Fabric of Knowledge - David Spivak

    5 SEPT

    The Fabric of Knowledge - David Spivak

David Spivak, a mathematician known for his work in category theory, discusses a wide range of topics related to intelligence, creativity, and the nature of knowledge. He explains category theory in simple terms and explores how it relates to understanding complex systems and relationships.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

We discuss abstract concepts like collective intelligence, the importance of embodiment in understanding the world, and how we acquire and process knowledge. Spivak shares his thoughts on creativity, discussing where it comes from and how it might be modeled mathematically. A significant portion of the discussion focuses on the impact of artificial intelligence on human thinking and its potential role in the evolution of intelligence. Spivak also touches on the importance of language, particularly written language, in transmitting knowledge and shaping our understanding of the world.

David Spivak: http://www.dspivak.net/

TOC:
00:00:00 Introduction to category theory and functors
00:04:40 Collective intelligence and sense-making
00:09:54 Embodiment and physical concepts in knowledge acquisition
00:16:23 Creativity, open-endedness, and AI's impact on thinking
00:25:46 Modeling creativity and the evolution of intelligence
00:36:04 Evolution, optimization, and the significance of AI
00:44:14 Written language and its impact on knowledge transmission

REFS:
Mike Levin's work https://scholar.google.com/citations?user=luouyakAAAAJ&hl=en
Eric Smith's videos on complexity and early life https://www.youtube.com/watch?v=SpJZw-68QyE
Richard Dawkins' book "The Selfish Gene" https://amzn.to/3X73X8w
Carl Sagan's statement about the cosmos knowing itself https://amzn.to/3XhPruK
Herbert Simon's concept of "satisficing" https://plato.stanford.edu/entries/bounded-rationality/
DeepMind paper on open-ended systems https://arxiv.org/abs/2406.04268
Karl Friston's work on active inference https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind
MIT category theory lectures by David Spivak (available on the Topos Institute channel) https://www.youtube.com/watch?v=UusLtx9fIjs
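For listeners meeting functors for the first time, here is a toy rendering in code, added as generic background rather than an example from the episode: the "list" construction maps each type to lists of that type and each function to its element-wise version, and a functor must preserve composition.

```python
# Toy functor illustration: lifting functions on elements to functions
# on lists, and checking that composition is preserved.

def fmap(f, xs: list) -> list:
    """Lift a function on elements to a function on lists."""
    return [f(x) for x in xs]

f = lambda x: x + 1
g = lambda x: x * 2
xs = [1, 2, 3]

# Functor law: mapping g then f equals mapping their composition.
assert fmap(f, fmap(g, xs)) == fmap(lambda x: f(g(x)), xs)  # [3, 5, 7]
```

Category theory's appeal for modeling complex systems lives in exactly this kind of guarantee: structure-preserving maps let you reason about relationships without inspecting the parts.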

    46 min
  7. Jürgen Schmidhuber - Neural and Non-Neural AI, Reasoning, Transformers, and LSTMs

    28 AUG

    Jürgen Schmidhuber - Neural and Non-Neural AI, Reasoning, Transformers, and LSTMs

Jürgen Schmidhuber, the father of generative AI, shares his groundbreaking work in deep learning and artificial intelligence. In this exclusive interview, he discusses the history of AI, some of his contributions to the field, and his vision for the future of intelligent machines. Schmidhuber offers unique insights into the exponential growth of technology and the potential impact of AI on humanity and the universe.

YT version: https://youtu.be/DP454c1K_vQ

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

TOC:
00:00:00 Intro
00:03:38 Reasoning
00:13:09 Potential AI Breakthroughs Reducing Computation Needs
00:20:39 Memorization vs. Generalization in AI
00:25:19 Approach to the ARC Challenge
00:29:10 Perceptions of ChatGPT and AGI
00:58:45 Abstract Principles of Jürgen's Approach
01:04:17 Analogical Reasoning and Compression
01:05:48 Breakthroughs in 1991: the P, the G, and the T in ChatGPT and Generative AI
01:15:50 Use of LSTM in Language Models by Tech Giants
01:21:08 Neural Network Aspect Ratio Theory
01:26:53 Reinforcement Learning Without Explicit Teachers

Refs:
★ "Annotated History of Modern AI and Deep Learning" (2022 survey by Schmidhuber)
★ Chain Rule For Backward Credit Assignment (Leibniz, 1676)
★ First Neural Net / Linear Regression / Shallow Learning (Gauss & Legendre, circa 1800)
★ First 20th Century Pioneer of Practical AI (Quevedo, 1914)
★ First Recurrent NN (RNN) Architecture (Lenz, Ising, 1920-1925)
★ AI Theory: Fundamental Limitations of Computation and Computation-Based AI (Gödel, 1931-34)
★ Unpublished ideas about evolving RNNs (Turing, 1948)
★ Multilayer Feedforward NN Without Deep Learning (Rosenblatt, 1958)
★ First Published Learning RNNs (Amari and others, ~1972)
★ First Deep Learning (Ivakhnenko & Lapa, 1965)
★ Deep Learning by Stochastic Gradient Descent (Amari, 1967-68)
★ ReLUs (Fukushima, 1969)
★ Backpropagation (Linnainmaa, 1970); precursor (Kelley, 1960)
★ Backpropagation for NNs (Werbos, 1982)
★ First Deep Convolutional NN (Fukushima, 1979); later combined with Backprop (Waibel 1987, Zhang 1988)
★ Metalearning or Learning to Learn (Schmidhuber, 1987)
★ Generative Adversarial Networks / Artificial Curiosity / NN Online Planners (Schmidhuber, Feb 1990; see the G in Generative AI and ChatGPT)
★ NNs Learn to Generate Subgoals and Work on Command (Schmidhuber, April 1990)
★ NNs Learn to Program NNs: Unnormalized Linear Transformer (Schmidhuber, March 1991; see the T in ChatGPT)
★ Deep Learning by Self-Supervised Pre-Training; Distilling NNs (Schmidhuber, April 1991; see the P in ChatGPT)
★ Experiments with Pre-Training; Analysis of Vanishing/Exploding Gradients, Roots of Long Short-Term Memory / Highway Nets / ResNets (Hochreiter, June 1991; further developed 1999-2015 with other students of Schmidhuber)
★ LSTM journal paper (1997, most cited AI paper of the 20th century)
★ xLSTM (Hochreiter, 2024)
★ Reinforcement Learning Prompt Engineer for Abstract Reasoning and Planning (Schmidhuber, 2015)
★ Mindstorms in Natural Language-Based Societies of Mind (2023 paper by Schmidhuber's team) https://arxiv.org/abs/2305.17066
★ Bremermann's physical limit of computation (1982)

EXTERNAL LINKS:
CogX 2018 - Professor Juergen Schmidhuber https://www.youtube.com/watch?v=17shdT9-wuA
Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability (Neural Networks, 1997) https://sferics.idsia.ch/pub/juergen/loconet.pdf
The paradox at the heart of mathematics: Gödel's Incompleteness Theorem - Marcus du Sautoy https://www.youtube.com/watch?v=I4pQbo5MQOs

(Refs truncated; full list in the YT video description.)
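As background on the LSTM work discussed throughout the episode, here is a minimal single-step LSTM cell in NumPy, following the standard formulation with a forget gate (added by Gers et al. after the 1997 paper). Weights and sizes are illustrative assumptions, not taken from the interview.

```python
import numpy as np

# Minimal LSTM cell: gates decide what the cell state forgets, writes,
# and exposes, which is what lets gradients survive long time spans.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One time step: returns the new hidden and cell states."""
    z = W @ np.concatenate([x, h]) + b          # all four gates in one matmul
    f, i, o, g = np.split(z, 4)                 # forget, input, output, candidate
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)
    c_new = f * c + i * np.tanh(g)              # gated cell state (the "memory")
    h_new = o * np.tanh(c_new)                  # exposed hidden state
    return h_new, c_new

n_in, n_h = 8, 16                               # illustrative sizes
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * n_h, n_in + n_h))
b = np.zeros(4 * n_h)
h = c = np.zeros(n_h)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h.shape, c.shape)                         # (16,) (16,)
```

The vanishing/exploding-gradient analysis cited above (Hochreiter, 1991) is what motivates the additive update to `c`: unlike a plain RNN's repeated matrix multiplications, the gated sum lets error signals flow back through many steps.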

    1h 40m
  8. "AI should NOT be regulated at all!" - Prof. Pedro Domingos

    25 AUG

    "AI should NOT be regulated at all!" - Prof. Pedro Domingos

Professor Pedro Domingos is an AI researcher and professor of computer science. He expresses skepticism about current AI regulation efforts and argues for faster AI development rather than slowing it down. He also discusses the need for new innovations to fulfil the promises of current AI techniques.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.

Show notes:
- Domingos' views on AI regulation and why he believes it's misguided
- His thoughts on the current state of AI technology and its limitations
- Discussion of his novel "2040", a satirical take on AI and tech culture
- Explanation of his work on "tensor logic", which aims to unify neural networks and symbolic AI
- Critiques of other approaches in AI, including those of OpenAI and Gary Marcus
- Thoughts on the AI "bubble" and potential future developments in the field

Prof. Pedro Domingos: https://x.com/pmddomingos
2040: A Silicon Valley Satire [Pedro's new book] https://amzn.to/3T51ISd

TOC:
00:00:00 Intro
00:06:31 Bio
00:08:40 Filmmaking skit
00:10:35 AI and the wisdom of crowds
00:19:49 Social Media
00:27:48 Master algorithm
00:30:48 Neurosymbolic AI / abstraction
00:39:01 Language
00:45:38 Chomsky
01:00:49 2040 Book
01:18:03 Satire as a shield for criticism?
01:29:12 AI Regulation
01:35:15 Gary Marcus
01:52:37 Copyright
01:56:11 Stochastic parrots come home to roost
02:00:03 Privacy
02:01:55 LLM ecosystem
02:05:06 Tensor logic

Refs:
The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World [Pedro Domingos] https://amzn.to/3MiWs9B
Rebooting AI: Building Artificial Intelligence We Can Trust [Gary Marcus] https://amzn.to/3AAywvL
Flash Boys [Michael Lewis] https://amzn.to/4dUGm1M

    2h 12m

