100 episodes

Welcome! We at MLST are inspired by scientists and each week we have a hard-hitting discussion with the leading thinkers in the AI space. Street Talk is ridiculously technical and we believe strongly in diversity of thought in AI, covering all the main ideas in the field, avoiding hype where possible.

MLST is run by Dr. Tim Scarfe and Dr. Keith Duggar, and with regular appearances from Dr. Yannic Kilcher.

Machine Learning Street Talk (MLST)

    • Technology
    • 5.0 • 14 Ratings

    #99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism

    YT version (with references): https://www.youtube.com/watch?v=lxaTinmKxs0

    Support us! https://www.patreon.com/mlst

    MLST Discord: https://discord.gg/aNPkGUQtc5



    Carla Cremer and Igor Krawczuk argue that AI risk should be understood as an old problem of politics, power and control with known solutions, and that threat models should be driven by empirical work. The collapse of FTX and its entanglement with the Effective Altruism community have sparked a lot of discussion about the dangers of optimization, and Carla's Vox article highlights the need for an institutional turn when taking on a responsibility like risk management for humanity.



    Carla's “Democratising Risk” paper found that certain types of risk fall through the cracks if risks are categorised only as, say, climate change or biological risks. Deliberative democracy can be a better way to make collective decisions, and AI tools could scale this kind of democracy and be used for good, but the algorithms involved must be transparent to the citizens using the platform.



    Aggregating people’s diverse ways of thinking about a problem within a risk-averse procedure makes it far more likely that the group converges on the best policy. There needs to be a good reason to trust any one organization with the risk management of humanity, and all the different ways of thinking about risk must be taken into account.
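
    To make "risk-averse procedure" concrete, here is a minimal sketch (our own illustration in Python, not anything from the episode) of one such aggregation rule: act on a pessimistic quantile of diverse expert estimates rather than their mean, so that no single worldview's optimism can dominate the decision.

    import numpy as np

    # Hypothetical probability-of-catastrophe estimates from experts who
    # hold different threat models (some dismissive, some alarmed).
    estimates = np.array([0.001, 0.02, 0.05, 0.10, 0.30])

    mean_view = estimates.mean()               # optimism and alarm cancel out
    risk_averse = np.quantile(estimates, 0.8)  # weight the worried minority

    print(f"mean estimate:        {mean_view:.3f}")
    print(f"80th-percentile rule: {risk_averse:.3f}")
    # A policy keyed to the quantile triggers precaution even when the
    # median expert is unconcerned -- one concrete risk-averse procedure.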



    The ambition of the EA community and Altruism Inc. is to protect and manage risk for the whole of humanity, and doing this effectively requires an institutional turn. The dangers of optimization are real, and it is essential that the risk management of humanity is done properly and ethically, by aggregating diverse ways of thinking within risk-averse procedures as described above.



    Carla Zoe Cremer

    https://carlacremer.github.io/



    Igor Krawczuk

    https://krawczuk.eu/



    Interviewer: Dr. Tim Scarfe



    TOC:

    [00:00:00] Introduction: Vox article and effective altruism / FTX

    [00:11:12] Luciano Floridi on Governance and Risk

    [00:15:50] Connor Leahy on alignment

    [00:21:08] Ethan Caballero on scaling

    [00:23:23] Alignment, Values and politics

    [00:30:50] Singularitarians vs AI-theists

    [00:41:56] Consequentialism

    [00:46:44] Does scale make a difference?

    [00:51:53] Carla's “Democratising Risk” paper

    [01:04:03] Vox article - How effective altruists ignored risk

    [01:20:18] Does diversity breed complexity?

    [01:29:50] Collective rationality

    [01:35:16] Closing statements

    • 1 hr 39 min
    [NO MUSIC] #98 - Prof. LUCIANO FLORIDI - ChatGPT, Singularitarians, Ethics, Philosophy of Information

    Support us! https://www.patreon.com/mlst

    MLST Discord: https://discord.gg/aNPkGUQtc5

    YT version: https://youtu.be/YLNGvvgq3eg



    We are living in an age of rapid technological advancement, and with this growth comes a digital divide. Professor Luciano Floridi of the Oxford Internet Institute / Oxford University believes that this divide not only affects our understanding of the implications of this new age, but also the organization of a fair society. 

    The Information Revolution has been transforming the global economy, with the majority of global GDP now relying on intangible goods, such as information-related services. This in turn has led to the generation of immense amounts of data, more than humanity has ever seen in its history. With 95% of this data being generated by the current generation, Professor Floridi believes that we are becoming overwhelmed by this data, and that our agency as humans is being eroded as a result. 

    According to Professor Floridi, the digital divide has caused a lack of balance between technological growth and our understanding of this growth. He believes that the infosphere is becoming polluted and the manifold of the infosphere is increasingly determined by technology and AI. Identifying, anticipating and resolving these problems has become essential, and Professor Floridi has dedicated his research to the Philosophy of Information, Philosophy of Technology and Digital Ethics. 

    We must equip ourselves with a viable philosophy of information to help us better understand and address the risks of this new information age. Professor Floridi is leading the charge, and his research on Digital Ethics, the Philosophy of Information and the Philosophy of Technology is helping us to better anticipate, identify and resolve problems caused by the digital divide.

    TOC:

    [00:00:00] Introduction to Luciano and his ideas

    [00:14:00] ChatGPT / language models

    [00:28:45] AI risk / "Singularitarians" 

    [00:37:15] Forms of governance

    [00:43:56] Re-ontologising the world

    [00:55:56] It from bit, computationalism, and philosophy without purpose

    [01:03:05] Getting into Digital Ethics



    Interviewer: Dr. Tim Scarfe



    References:

    GPT‐3: Its Nature, Scope, Limits, and Consequences [Floridi]

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3827044



    Ultraintelligent Machines, Singularity, and Other Sci-fi Distractions about AI [Floridi]

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4222347



    The Philosophy of Information [Floridi]

    https://www.amazon.co.uk/Philosophy-Information-Luciano-Floridi/dp/0199232393



    Information: A Very Short Introduction [Floridi]

    https://www.amazon.co.uk/Information-Very-Short-Introduction-Introductions/dp/0199551375



    https://en.wikipedia.org/wiki/Luciano_Floridi

    https://www.philosophyofinformation.net/

    • 1 hr 6 min
    #98 - Prof. LUCIANO FLORIDI - ChatGPT, Superintelligence, Ethics, Philosophy of Information

    Support us! https://www.patreon.com/mlst

    MLST Discord: https://discord.gg/aNPkGUQtc5

    YT version: https://youtu.be/YLNGvvgq3eg

    (If the music is annoying, skip to the main interview at 14:14.)

    (Episode description as in the [NO MUSIC] version above.)



    TOC:

    [00:00:00] Introduction to Luciano and his ideas

    [00:14:40] ChatGPT / language models

    [00:29:24] AI risk / "Singularitarians" 

    [00:30:34] Re-ontologising the world

    [00:56:35] It from bit, computationalism, and philosophy without purpose

    [01:03:43] Getting into Digital Ethics



    References: as listed under the [NO MUSIC] version above.

    • 1 hr 6 min
    #97 SREEJAN KUMAR - Human Inductive Biases in Machines from Language

    Research has shown that humans possess strong inductive biases which enable them to learn quickly and generalize. To instill these useful human inductive biases into machines, Sreejan Kumar presented a paper at the NeurIPS conference which won an Outstanding Paper award: Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines.

    The paper uses a controlled stimulus space of two-dimensional binary grids to define the space of abstract concepts that humans hold, together with a feedback loop of human-machine collaboration, to characterise the differences between human and machine inductive biases.

    It is important to make machines more human-like so that we can collaborate with them and understand their behaviour. Synthesised discrete programs running on a Turing-machine computational model, rather than a neural-network substrate, offer promise for the future of artificial intelligence. Neural networks and program induction should both be explored to get a well-rounded view of intelligence: one which works across multiple domains and computational substrates, and which can acquire a diverse set of capabilities.

    Natural language understanding in models can also be improved by instilling human language biases and programs into AI models. Sreejan used an experimental framework consisting of two task distributions, one generated from human priors and one from machine priors, to understand the differences between human and machine inductive biases. Furthermore, he demonstrated that compressive abstractions can be used to capture the essential structure of the environment for more human-like behaviour. This means that emergent language-based inductive priors can be distilled into artificial neural networks, and AI models can be aligned to us, the world and, indeed, our values.
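
    As a rough sketch of how such distillation can work (our own illustration in Python; the architecture, sizes and synthetic data are assumptions, not the paper's code): a grid-completion network is co-trained so that its hidden representation also predicts an embedding of a natural-language description of the task, letting the human prior carried by language shape what the network learns.

    import torch
    import torch.nn as nn

    GRID = 7  # 7x7 binary grids, echoing the paper's stimulus space

    class GridAgent(nn.Module):
        def __init__(self, lang_dim=64, hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Flatten(), nn.Linear(GRID * GRID, hidden), nn.ReLU())
            self.head = nn.Linear(hidden, GRID * GRID)   # predict the full board
            self.to_lang = nn.Linear(hidden, lang_dim)   # auxiliary language target

        def forward(self, board):
            h = self.encoder(board)
            return self.head(h), self.to_lang(h)

    def loss_fn(agent, board, target, lang_emb, alpha=0.5):
        logits, lang_pred = agent(board)
        task = nn.functional.binary_cross_entropy_with_logits(
            logits, target.flatten(1))
        # Auxiliary loss: pull the representation toward the embedding of a
        # human-written description of the concept behind the grid.
        aux = nn.functional.mse_loss(lang_pred, lang_emb)
        return task + alpha * aux

    # Synthetic stand-ins: partial observations, full boards, and
    # (hypothetical) precomputed language embeddings for each task.
    agent = GridAgent()
    opt = torch.optim.Adam(agent.parameters(), lr=1e-3)
    board = torch.rand(32, GRID, GRID)
    target = (torch.rand(32, GRID, GRID) > 0.5).float()
    lang_emb = torch.randn(32, 64)
    opt.zero_grad()
    loss_fn(agent, board, target, lang_emb).backward()
    opt.step()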

    Humans possess strong inductive biases which enable them to quickly learn to perform various tasks. This is in contrast to neural networks, which lack these inductive biases and struggle to learn them empirically from observational data; as a result, they have difficulty generalizing to novel environments because they lack prior knowledge.

    Sreejan's results showed that when guided with representations from language and programs, the meta-learning agent not only improved performance on task distributions humans are adept at, but also decreased performance on control task distributions where humans perform poorly. This indicates that the abstraction supported by these representations, whether in the substrate of language or, indeed, a program, is key to developing aligned artificial agents with human-like generalization, capabilities, values and behaviour.



    References:

    Using natural language and program abstractions to instill human inductive biases in machines [Kumar et al/NEURIPS]

    https://openreview.net/pdf?id=buXZ7nIqiwE



    Core Knowledge [Elizabeth S. Spelke / Harvard]

    https://www.harvardlds.org/wp-content/uploads/2017/01/SpelkeKinzler07-1.pdf



    The Debate Over Understanding in AI's Large Language Models [Melanie Mitchell]

    https://arxiv.org/abs/2210.13966



    On the Measure of Intelligence [Francois Chollet]

    https://arxiv.org/abs/1911.01547



    ARC challenge [Chollet]

    https://github.com/fchollet/ARC

    • 24 min
    #96 Prof. PEDRO DOMINGOS - There are no infinities, utility functions, neurosymbolic

    Pedro Domingos, Professor Emeritus of Computer Science and Engineering at the University of Washington, is renowned for his research in machine learning, particularly for his work on Markov logic networks that allow for uncertain inference. He is also the author of the acclaimed book "The Master Algorithm".
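
    As a toy illustration of what "uncertain inference" means in a Markov logic network (a sketch of the standard formulation in Python, not Domingos' implementation): each first-order rule carries a weight w, and a possible world x is scored P(x) ∝ exp(Σ_i w_i n_i(x)), where n_i(x) counts the satisfied groundings of rule i, so worlds that violate a rule become improbable rather than impossible.

    import itertools, math

    people = ["anna", "bob"]
    w = 1.5  # rule weight: larger = harder constraint (infinite = pure logic)

    def n_satisfied(world):
        # Count satisfied groundings of "Smokes(p) => Cancer(p)".
        return sum(1 for p in people
                   if (not world[f"smokes({p})"]) or world[f"cancer({p})"])

    atoms = [f"smokes({p})" for p in people] + [f"cancer({p})" for p in people]
    worlds = [dict(zip(atoms, vals))
              for vals in itertools.product([False, True], repeat=len(atoms))]

    scores = [math.exp(w * n_satisfied(wd)) for wd in worlds]
    Z = sum(scores)  # partition function over all 2^4 possible worlds

    # Unlike pure first-order logic, a rule-violating world keeps nonzero
    # probability -- that is the "uncertain inference".
    p_violating = sum(s for s, wd in zip(scores, worlds)
                      if n_satisfied(wd) < len(people)) / Z
    print(f"P(some grounding of the rule is violated) = {p_violating:.3f}")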



    Panel: Dr. Tim Scarfe



    TOC:

    [00:00:00] Introduction

    [00:01:34] Galactica / misinformation / gatekeeping

    [00:12:31] Is there a master algorithm?

    [00:16:29] Limits of our understanding 

    [00:21:57] Intentionality, Agency, Creativity

    [00:27:56] Compositionality 

    [00:29:30] Digital Physics / It from bit / Wolfram 

    [00:35:17] Alignment / Utility functions

    [00:43:36] Meritocracy  

    [00:45:53] Game theory 

    [01:00:00] EA/consequentialism/Utility

    [01:11:09] Emergence / relationalism 

    [01:19:26] Markov logic 

    [01:25:38] Moving away from anthropocentrism 

    [01:28:57] Neurosymbolic / infinity / tensor algebra

    [01:53:45] Abstraction

    [01:57:26] Symmetries / Geometric DL

    [02:02:46] Bias variance trade off 

    [02:05:49] What we saw at NeurIPS

    [02:12:58] Chalmers' talk on LLMs

    [02:28:32] Definition of intelligence

    [02:32:40] LLMs 

    [02:35:14] On experts in different fields

    [02:40:15] Back to intelligence

    [02:41:37] Spline theory / extrapolation



    YT version:  https://www.youtube.com/watch?v=C9BH3F2c0vQ



    References:



    The Master Algorithm [Domingos]

    https://www.amazon.co.uk/s?k=master+algorithm&i=stripbooks&crid=3CJ67DCY96DE8&sprefix=master+algorith%2Cstripbooks%2C82&ref=nb_sb_noss_2



    INFORMATION, PHYSICS, QUANTUM: THE SEARCH FOR LINKS [John Wheeler/It from Bit]

    https://philpapers.org/archive/WHEIPQ.pdf



    A New Kind Of Science [Wolfram]

    https://www.amazon.co.uk/New-Kind-Science-Stephen-Wolfram/dp/1579550088



    The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future [Tom Chivers]

    https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795



    The Status Game: On Social Position and How We Use It [Will Storr]

    https://www.goodreads.com/book/show/60598238-the-status-game



    Newcomb's paradox

    https://en.wikipedia.org/wiki/Newcomb%27s_paradox



    The Case for Strong Emergence [Sabine Hossenfelder]

    https://philpapers.org/rec/HOSTCF-3



    Markov Logic: An Interface Layer for Artificial Intelligence [Domingos]

    https://www.morganclaypool.com/doi/abs/10.2200/S00206ED1V01Y200907AIM007



    Note: Pedro discussed “Tensor Logic”; I was not able to find a reference.



    Neural Networks and the Chomsky Hierarchy [Grégoire Delétang/DeepMind]

    https://arxiv.org/abs/2207.02098



    Connectionism and Cognitive Architecture: A Critical Analysis [Jerry A. Fodor and Zenon W. Pylyshyn]

    https://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/proseminars/Proseminar13/ConnectionistArchitecture.pdf



    Every Model Learned by Gradient Descent Is Approximately a Kernel Machine [Pedro Domingos]

    https://arxiv.org/abs/2012.00152



    A Path Towards Autonomous Machine Intelligence Version 0.9.2, 2022-06-27 [LeCun]

    https://openreview.net/pdf?id=BZ5a1r-kVsf



    Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges [Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković]

    https://arxiv.org/abs/2104.13478



    The Algebraic Mind: Integrating Connectionism and Cognitive Science [Gary Marcus]

    https://www.amazon.co.uk/Algebraic-Mind-Integrating-Connectionism-D

    • 2 hrs 49 min
    #95 - Prof. IRINA RISH - AGI, Complex Systems, Transhumanism

    Irina Rish holds the Canadian Excellence Research Chair in Autonomous AI, as well as an MSc and PhD in AI from the University of California, Irvine, and an MSc in Applied Mathematics from the Moscow Gubkin Institute. Her research focuses on machine learning, neural data analysis, and neuroscience-inspired AI. In particular, she is exploring continual lifelong learning, optimization algorithms for deep neural networks, sparse modelling and probabilistic inference, dialog generation, biologically plausible reinforcement learning, and dynamical systems approaches to brain imaging analysis. Prof. Rish holds 64 patents and has published over 80 research papers, several book chapters, three edited books, and a monograph on Sparse Modelling; she has served as a Senior Area Chair for NeurIPS and ICML. Her research aims to take us closer to the holy grail of Artificial General Intelligence, and she continues to push the boundaries of neuroscience-inspired machine learning.

    In a conversation about artificial intelligence (AI), Irina and Tim discussed the idea of transhumanism and the potential for AI to improve human flourishing. Irina suggested that instead of looking at AI as something to be controlled and regulated, people should view it as a tool to augment human capabilities. She argued that attempting to create an AI that is smarter than humans is not the best approach, and that a hybrid of human and AI intelligence is much more beneficial. As an example, she mentioned how technology can be used as an extension of the human mind, to track mental states and improve self-understanding. Ultimately, Irina concluded that transhumanism is about having a symbiotic relationship with technology, which can have a positive effect on both parties.

    Tim then discussed the contrasting types of intelligence and how this could lead to something interesting emerging from the combination. He brought up the Trolley Problem and how difficult moral quandaries could be programmed into an AI. Irina then referenced The Garden of Forking Paths, a story which explores the idea of how different paths in life can be taken and how decisions from the past can have an effect on the present.

    To better understand AI and intelligence, Irina suggested looking at it from multiple perspectives and recognizing the importance of complex systems science for programming and understanding dynamical systems. She discussed the work of Michael Levin, who is looking into reprogramming biological computers with chemical interventions, and Tim mentioned Alexander Mordvintsev, who is looking into the self-healing and repair of these systems. Ultimately, Irina argued that the key to understanding AI and intelligence is to recognize the complexity of the systems and to create hybrid models of human and AI intelligence.

    Find Irina;

    https://mila.quebec/en/person/irina-rish/

    https://twitter.com/irinarish



    YT version: https://youtu.be/8-ilcF0R7mI 

    MLST Discord: https://discord.gg/aNPkGUQtc5



    References:

    The Garden of Forking Paths [Jorge Luis Borges]

    https://www.amazon.co.uk/Garden-Forking-Paths-Penguin-Modern/dp/0241339057

    The Brain from Inside Out [György Buzsáki]

    https://www.amazon.co.uk/Brain-Inside-Out-Gy%C3%B6rgy-Buzs%C3%A1ki/dp/0190905387

    Growing Isotropic Neural Cellular Automata [Alexander Mordvintsev]

    https://arxiv.org/abs/2205.01681

    The Extended Mind [Andy Clark and David Chalmers]

    https://www.jstor.org/stable/3328150

    The Gentle Seduction [Marc Stiegler]

    https://www.amazon.co.uk/Gentle-Seduction-Marc-Stiegler/dp/0671698877

    • 39 min
