Generally Intelligent (formerly Untitled AI)

15 episodes

A podcast interviewing machine learning researchers studying how to build intelligence. Made for researchers, by researchers.

    • Technology
    • 4.8 • 4 Ratings

    Episode 15: Martín Arjovsky, INRIA, on benchmarks for robustness and geometric information theory

    Martín Arjovsky did his Ph.D. at NYU with Léon Bottou. Some of his well-known works include the Wasserstein GAN and a paradigm called Invariant Risk Minimization. In this episode, we discuss out-of-distribution generalization, geometric information theory, and the importance of good benchmarks.

    • 1 hr 26 min
    Episode 14: Yash Sharma, MPI-IS, on generalizability, causality, and disentanglement

    Yash Sharma is a Ph.D. student at the International Max Planck Research School for Intelligent Systems. He previously studied electrical engineering at Cooper Union and has spent time at Borealis AI and IBM Research. Yash’s early work was on adversarial examples, and his current research interests span a variety of topics in representation disentanglement. In this episode, we discuss robustness to adversarial examples, causality vs. correlation in data, and how to make deep learning models generalize better.

    • 1 hr 27 min
    Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning

    Jonathan Frankle is finishing his Ph.D. at MIT, advised by Michael Carbin. His main research interest is using experimental methods to understand the behavior of neural networks. His current work focuses on finding sparse, trainable neural networks.

    Highlights from our conversation:

    🕸  "Why is sparsity everywhere? This isn't an accident."

    🤖  "If I gave you 500 GPUs, could you actually keep those GPUs busy?"

    📊  "In general, I think we have a crisis of science in ML."

    • 1 hr 21 min
    Episode 12: Jacob Steinhardt, UC Berkeley, on machine learning safety, alignment, and measurement

    Jacob Steinhardt is an assistant professor at UC Berkeley. His main research interest is designing machine learning systems that are reliable and aligned with human values. His specific research directions include robustness, reward specification and reward hacking, and scalable alignment.

    Highlights from our conversation:

    📜“Test accuracy is a very limited metric.”

    👨‍👩‍👧‍👦“You might not be able to get lots of feedback on human values.”

    📊“I’m interested in measuring the progress in AI capabilities.”

    • 1 hr
    Episode 11: Vincent Sitzmann, MIT, on neural scene representations for computer vision and more general AI

    Vincent Sitzmann is a postdoc at MIT. His work is on neural scene representations in computer vision. Ultimately, he wants to make representations that AI agents can use to solve the same visual tasks humans solve regularly but that are currently impossible for AI.

    Highlights from our conversation:

    👁 “Vision is about the question of building representations”

    🧠 “We (humans) likely have a 3D inductive bias”

    🤖 “All computer vision should be 3D computer vision. Our world is a 3D world.”

    • 1 hr 10 min
    Episode 10: Dylan Hadfield-Menell, UC Berkeley/MIT, on the value alignment problem in AI

    Dylan Hadfield-Menell recently finished his Ph.D. at UC Berkeley and is starting as an assistant professor at MIT. He works on the problem of designing AI algorithms that pursue the intended goals of their users, designers, and society in general. This is known as the value alignment problem.

    Highlights from our conversation:

    👨‍👩‍👧‍👦 How to align AI to human values

    📉 Consequences of misaligned AI: bias and misdirected optimization

    📱 Better AI recommender systems

    • 1 hr 32 min

