The Gradient: Perspectives on AI
The Gradient

    • Technology

129 episodes

Deeply researched, technical interviews with experts thinking about AI and technology. Hosted, recorded, researched, and produced by Daniel Bashir.

thegradientpub.substack.com

    C. Thi Nguyen: Values, Legibility, and Gamification

    Episode 127
    I spoke with Christopher Thi Nguyen about:
    * How we lose control of our values
    * The tradeoffs of legibility, aggregation, and simplification
    * Gamification and its risks
    Enjoy—and let me know what you think!
    As of July 2020, C. Thi Nguyen is Associate Professor of Philosophy at the University of Utah. His research focuses on how social structures and technology can shape our rationality and our agency. He has published on trust, expertise, group agency, community art, cultural appropriation, aesthetic value, echo chambers, moral outrage porn, and games. He received his PhD from UCLA. Once, he was a food writer for the Los Angeles Times.
    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:10) The ubiquity of James C. Scott
    * (06:03) Legibility and measurement
    * (12:50) Value capture, classes and measurement
    * (17:30) Political value choice in ML
    * (23:30) Why value collapse happens
    * (33:00) Blackburn, “Hume and Thick Connexions” — projectivism and legibility
    * (36:20) Heuristics and decision-making
    * (40:08) Institutional classification systems
    * (46:55) Back to Hume
    * (48:27) Epistemic arms races, stepping outside our conceptual architectures
    * (56:40) The “what to do” question
    * (1:04:00) Gamification, aesthetic engagement
    * (1:14:51) Echo chambers and defining utility
    * (1:22:10) Progress, AGI millenarianism
    * (disclaimer: I don’t know what’s going to happen with the world, either.)
    * (1:26:04) Parting visions
    * (1:30:02) Outro
    Links:
    * Christopher’s Twitter and homepage
    * Games: Agency as Art
    * Papers referenced
    * Transparency is Surveillance
    * Games and the art of agency
    * Autonomy and Aesthetic Engagement
    * Art as a Shelter from Science
    * Value Capture
    * Hostile Epistemology
    * Hume and Thick Connexions (Simon Blackburn)


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 30 min
    Vivek Natarajan: Towards Biomedical AI

    Episode 126
    I spoke with Vivek Natarajan about:
    * Improving access to medical knowledge with AI
    * How an LLM for medicine should behave
    * Aspects of training Med-PaLM and AMIE
    * How to facilitate appropriate amounts of trust in users of medical AI systems
    Vivek Natarajan is a Research Scientist at Google Health AI, advancing biomedical AI to help scale world-class healthcare to everyone. Vivek is particularly interested in building large language models and multimodal foundation models for biomedical applications, and leads the Google Brain moonshot behind Med-PaLM, Google's flagship medical large language model. Med-PaLM has been featured in Scientific American, The Economist, STAT News, CNBC, Forbes, and New Scientist, among others.
    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (00:35) The concept of an “AI doctor”
    * (06:54) Accessibility to medical expertise
    * (10:31) Enabling doctors to do better/different work
    * (14:35) Med-PaLM
    * (15:30) Instruction tuning, desirable traits in LLMs for medicine
    * (23:41) Axes for evaluation of medical QA systems
    * (30:03) Medical LLMs and scientific consensus
    * (35:32) Demographic data and patient interventions
    * (40:14) Data contamination in Med-PaLM
    * (42:45) Grounded claims about capabilities
    * (45:48) Building trust
    * (50:54) Genetic Discovery enabled by an LLM
    * (51:33) Novel hypotheses in genetic discovery
    * (57:10) Levels of abstraction for hypotheses
    * (1:01:10) Directions for continued progress
    * (1:03:05) Conversational Diagnostic AI
    * (1:03:30) Objective Structured Clinical Examination as an evaluative framework
    * (1:09:08) Relative importance of different types of data
    * (1:13:52) Self-play — conversational dispositions and handling patients
    * (1:16:41) Chain of reasoning and information retention
    * (1:20:00) Performance in different areas of medical expertise
    * (1:22:35) Towards accurate differential diagnosis
    * (1:31:40) Feedback mechanisms and expertise, disagreement among clinicians
    * (1:35:26) Studying trust, user interfaces
    * (1:38:08) Self-trust in using medical AI models
    * (1:41:39) UI for medical AI systems
    * (1:43:50) Model reasoning in complex scenarios
    * (1:46:33) Prompting
    * (1:48:41) Future outlooks
    * (1:54:53) Outro
    Links:
    * Vivek’s Twitter and homepage
    * Papers
    * Towards Expert-Level Medical Question Answering with LLMs (2023)
    * LLMs encode clinical knowledge (2023)
    * Towards Generalist Biomedical AI (2024)
    * AMIE
    * Genetic Discovery enabled by an LLM (2023)


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 55 min
    Thomas Mullaney: A Global History of the Information Age

    Episode 125
    False universalism freaks me out. It doesn’t freak me out as a first principle because of epistemic violence; it freaks me out because it works.
    I spoke with Professor Thomas Mullaney about:
    * Telling stories about your work and balancing what feels meaningful with practical realities
    * Destabilizing our understandings of the technologies we feel familiar with, and the work of researching the history of the Chinese typewriter
    * The personal nature of research
    The Chinese Typewriter and The Chinese Computer are two of the best books I’ve read in a very long time. And they’re not just good and interesting, but important to read, for the history they tell and the ideas and arguments they present—I can’t recommend them and Professor Mullaney’s other work enough.
    Tom is Professor of History and, by courtesy, Professor of East Asian Languages and Cultures at Stanford University. He is also the Kluge Chair in Technology and Society at the Library of Congress, and a Guggenheim Fellow. He is the author or lead editor of eight books, including The Chinese Computer, The Chinese Typewriter (winner of the Fairbank Prize), Your Computer Is on Fire, and Coming to Terms with the Nation: Ethnic Classification in Modern China.
    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:00) “In Their Own Words” interview: on telling stories about your work
    * (07:42) Clashing narratives and authenticity/inauthenticity in pursuing your work
    * (15:48) Why Professor Mullaney pursued studying the Chinese typewriter
    * (18:20) Worldmaking, transforming the physical world to fit our descriptive models
    * (30:07) Internal and illegible continuities/coherence in work
    * (31:45) The role of a “self”
    * (43:06) The 2008 Beijing Olympics and false (alphabetical) universalism, projectivism
    * (1:04:23) “Kicking the ladder” and the personal nature of research
    * (1:18:07) The “Technolinguistic Chinese Exclusion Act” — the situatedness of historians in their work
    * (1:33:00) Is the Chinese typewriter project finished? / on the resolution of problems
    * (1:43:35) Outro
    Links:
    * Professor Mullaney’s homepage and Twitter
    * In Their Own Words: Thomas Mullaney
    * Books
    * The Chinese Computer: A Global History of the Information Age
    * The Chinese Typewriter: A History
    * Coming to Terms with the Nation: Ethnic Classification in Modern China


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 43 min
    Seth Lazar: Normative Philosophy of Computing

    Episode 124
    You may think you’re doing a priori reasoning, but actually you’re just over-generalizing from your current experience of technology.
    I spoke with Professor Seth Lazar about:
    * Why managing near-term and long-term risks isn’t always zero-sum
    * How to think through axioms and systems in political philosophy
    * Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI
    Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (00:54) Ad read — MLOps conference
    * (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation
    * (03:53) Attention allocation as an independent good (or bad)
    * (08:22) Axioms in political philosophy
    * (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust
    * (15:05) AI safety / catastrophic risk concerns
    * (22:10) Superintelligence arguments, reasoning about technology
    * (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?
    * (35:55) GPT-2, model weights, related debates
    * (39:11) Power and economics—coordination problems, company incentives
    * (50:42) Morality tales, relationship between safety and capabilities
    * (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy
    * (1:02:28) What is a feasibility horizon?
    * (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter
    * (1:14:25) Sociotechnical lenses, narrowly technical solutions
    * (1:19:47) Experiments for responsibly integrating AI systems into society
    * (1:26:53) Helpful/honest/harmless and antagonistic AI systems
    * (1:33:35) Managing incentives conducive to developing technology in the public interest
    * (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia
    * (1:46:54) How we can help legitimize and support interdisciplinary work
    * (1:50:07) Outro
    Links:
    * Seth’s Linktree and Twitter
    * Resources
    * Attention, moral skill, and algorithmic recommendation
    * Catastrophic AI Risk slides


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 50 min
    Suhail Doshi: The Future of Computer Vision

    Episode 123
    I spoke with Suhail Doshi about:
    * Why benchmarks aren’t prepared for tomorrow’s AI models
    * How he thinks about artists in a world with advanced AI tools
    * Building a unified computer vision model that can generate, edit, and understand pixels
    Suhail is a software engineer and entrepreneur known for founding Mixpanel, Mighty Computing, and Playground AI (they’re hiring!).
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (00:54) Ad read — MLOps conference
    * (01:30) Suhail is *not* in pivot hell but he *is* all-in on 50% AI-generated music
    * (03:45) AI and music, similarities to Playground
    * (07:50) Skill vs. creative capacity in art
    * (12:43) What we look for in music and art
    * (15:30) Enabling creative expression
    * (18:22) Building a unified computer vision model, underinvestment in computer vision
    * (23:14) Enhancing the aesthetic quality of images: color and contrast, benchmarks vs user desires
    * (29:05) “Benchmarks are not prepared for how powerful these models will become”
    * (31:56) Personalized models and personalized benchmarks
    * (36:39) Engaging users and benchmark development
    * (39:27) What a foundation model for graphics requires
    * (45:33) Text-to-image is insufficient
    * (46:38) DALL-E 2 and Imagen comparisons, FID
    * (49:40) Compositionality
    * (50:37) Why Playground focuses on images vs. 3D, video, etc.
    * (54:11) Open source and Playground’s strategy
    * (57:18) When to stop open-sourcing?
    * (1:03:38) Suhail’s thoughts on AGI discourse
    * (1:07:56) Outro
    Links:
    * Playground homepage
    * Suhail on Twitter


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 8 min
    Azeem Azhar: The Exponential View

    Episode 122
    I spoke with Azeem Azhar about:
    * The speed of progress in AI
    * Historical context for some of the terminology we use and how we think about technology
    * What we might want our future to look like
    Azeem is an entrepreneur, investor, and adviser. He is the creator of Exponential View, a global platform for in-depth technology analysis, and the host of the Bloomberg Original series Exponentially.
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (00:32) Ad read — MLOps conference
    * (01:05) Problematizing the term “exponential”
    * (07:35) Moore’s Law as social contract, speed of technological growth and impedances
    * (14:45) Academic incentives, interdisciplinary work, rational agents and historical context
    * (21:24) Monolithic scaling
    * (26:38) Investment in scaling
    * (31:22) On Sam Altman
    * (36:25) Uses of “AGI,” “intelligence”
    * (41:32) Historical context for terminology
    * (48:58) AI and teaching
    * (53:51) On the technology-human divide
    * (1:06:26) New technologies and the futures we want
    * (1:10:50) Inevitability narratives
    * (1:17:01) Rationality and objectivity
    * (1:21:13) Cultural affordances and intellectual history
    * (1:26:15) Centralized and decentralized AI systems
    * (1:32:54) Instruction tuning and helpful/honest/harmless
    * (1:39:18) Azeem’s future outlook
    * (1:46:15) Outro
    Links:
    * Azeem’s website and Twitter
    * Exponential View


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 46 min
