59 episodes

Interviews with various people who research, build, or use AI, including academics, engineers, artists, entrepreneurs, and more.

thegradientpub.substack.com

The Gradient Podcast, by The Gradient

    • Technology
    • 5.0 • 1 Rating


    Steve Miller: Will AI Take Your Job? It's Not So Simple.


    In episode 58 of The Gradient Podcast, Daniel Bashir speaks to Professor Steve Miller.
    Steve is a Professor Emeritus of Information Systems at Singapore Management University. He served as Founding Dean of the SMU School of Information Systems, where he established and developed the school's technology research and project capabilities in Cybersecurity, Data Management & Analytics, Intelligent Systems & Decision Analytics, and Software & Cyber-Physical Systems, as well as its management-science-oriented capability in Information Systems & Management. Steve works closely with a number of Singapore government ministries and agencies via steering committees, advisory boards, and advisory appointments.
    Have suggestions for future podcast guests (or other feedback)? Let us know here!
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (02:40) Steve’s evolution of interests in AI, time in academia and industry
    * (05:15) How different is this “industrial revolution”?
    * (10:00) What new technologies enable, the human role in technology’s impact on jobs
    * (11:35) Automation and augmentation and the realities of integrating new technologies in the workplace
    * (21:50) Difficulties of applying AI systems in real-world contexts
    * (32:45) Re-calibrating human work with intelligent machines
    * (39:00) Steve’s thinking on the nature of human/machine intelligence, implications for human/machine hybrid work
    * (47:00) Tradeoffs in using ML systems for automation/augmentation
    * (52:40) Organizational adoption of AI and speed
    * (1:01:55) Technology adoption is more than just a technology problem
    * (1:04:50) Progress narratives, “safe to speed”
    * (1:10:27) Outro
    Links:
    * Steve’s SMU Faculty Profile and Google Scholar
    * Working with AI by Steve Miller and Tom Davenport


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 10 min
    Blair Attard-Frost: Canada’s AI strategy and the ethics of AI business practices


    In episode 57 of The Gradient Podcast, Andrey Kurenkov speaks to Blair Attard-Frost.
    Note: this interview was recorded 8 months ago, and some aspects of Canada’s AI strategy have changed since then. It nonetheless remains a good overview of AI governance and related topics.
    Blair is a PhD Candidate at the University of Toronto’s Faculty of Information who researches the governance and management of artificial intelligence. More specifically, they are interested in the social construction of intelligence, unintelligence, and artificial intelligence, the relationship between organizational values and AI use, and the political economy, governance, and ethics of AI value chains. They integrate perspectives from service sciences, cognitive sciences, public policy, information management, and queer studies for their research.
    Have suggestions for future podcast guests (or other feedback)? Let us know here!
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter or Mastodon
    Outline:
    * Intro
    * Getting into AI research
    * What is AI governance
    * Canada’s AI strategy
    * Other interests
    Links:
    * Once a promising leader, Canada’s artificial-intelligence strategy is now a fragmented laggard
    * The Ethics of AI Business Practices: A Review of 47 Guidelines


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 58 min
    Linus Lee: At the Boundary of Machine and Mind


    In episode 56 of The Gradient Podcast, Daniel Bashir speaks to Linus Lee.
    Linus is an independent researcher interested in the future of knowledge representation and creative work aided by machine understanding of language. He builds interfaces and knowledge tools that expand the domain of thoughts we can think and qualia we can feel. Linus has been writing online since 2014 (his blog boasts half a million words) and has built well over 100 side projects. He has also spent time as a software engineer at Replit, Hack Club, and Spensa, and was most recently a Researcher in Residence at Betaworks in New York.
    Have suggestions for future podcast guests (or other feedback)? Let us know here!
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (02:00) Linus’s background and interests, vision-language models
    * (07:45) Embodiment and limits for text-image
    * (11:35) Ways of experiencing the world
    * (16:55) Origins of the handle “thesephist”, languages
    * (25:00) Math notation, reading papers
    * (29:20) Operations on ideas
    * (32:45) Overview of Linus’s research and current work
    * (41:30) The Oak and Ink languages, programming languages
    * (49:30) Personal search engines: Monocle and Reverie, what you can learn from personal data
    * (55:55) Web browsers as mediums for thought
    * (1:01:30) This AI Does Not Exist
    * (1:03:05) Knowledge representation and notational intelligence
    * Notation vs language
    * (1:07:00) What notation can/should be
    * (1:16:00) Inventing better notations and expanding human intelligence
    * (1:23:30) Better interfaces between humans and LMs for precise control, the inefficiency of prompt engineering
    * (1:33:00) Inexpressible experiences
    * (1:35:42) Linus’s current work using latent space models
    * (1:40:00) Ideas as things you can hold
    * (1:44:55) Neural nets and cognitive computing
    * (1:49:30) Relation to Hardware Lottery and AI accelerators
    * (1:53:00) Taylor Swift Appreciation Session, mastery and virtuosity
    * (1:59:30) Mastery/virtuosity and interfaces / learning curves
    * (2:03:30) Linus’s stories, the work of fiction
    * (2:09:00) Linus’s thoughts on writing
    * (2:14:20) A piece of writing should be focused
    * (2:16:15) On proving yourself
    * (2:28:00) Outro
    Links:
    * Linus’s Twitter and website


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 2 hrs 28 min
    Suresh Venkatasubramanian: An AI Bill of Rights


    In episode 55 of The Gradient Podcast, Daniel Bashir speaks to Professor Suresh Venkatasubramanian.
    Professor Venkatasubramanian is a Professor of Computer Science and Data Science at Brown University, where his research focuses on algorithmic fairness and the impact of automated decision-making systems in society. He recently served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy, where he co-authored the Blueprint for an AI Bill of Rights.
    Have suggestions for future podcast guests (or other feedback)? Let us know here!
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (02:25) Suresh’s journey into AI and policymaking
    * (08:00) The complex graph of designing and deploying “fair” AI systems
    * (09:50) The Algorithmic Lens
    * (14:55) “Getting people into a room” isn’t enough
    * (16:30) Failures of incorporation
    * (21:10) Trans-disciplinary vs interdisciplinary, the limiting nature of “my lane” / “your lane” thinking, going beyond existing scientific and philosophical ideas
    * (24:50) The trolley problem is annoying, its usefulness and limitations
    * (25:30) Breaking the frame of a discussion, self-driving doesn’t fit into the parameters of the trolley problem
    * (28:00) Acknowledging frames and their limitations
    * (29:30) Social science’s inclination to critique, flaws and benefits of solutionism
    * (30:30) Computer security as a model for thinking about algorithmic protections, the risk of failure in policy
    * (33:20) Suresh’s work on recourse
    * (38:00) Kantian autonomy and the value of recourse, non-Western takes and issues with individual benefit/harm as the most morally salient question
    * (41:00) Community as a valuable entity and its implications for algorithmic governance, surveillance systems
    * (43:50) How Suresh got involved in policymaking / the OSTP
    * (46:50) Gathering insights for the AI Bill of Rights Blueprint
    * (51:00) One thing the Bill did miss… Struggles with balancing specificity and vagueness in the Bill
    * (54:20) Should “automated system” be defined in legislation? Suresh’s approach and issues with the EU AI Act
    * (57:45) The danger of definitions, overlap with chess world controversies
    * (59:10) Constructive vagueness in law, partially theorized agreements
    * (1:02:15) Digital privacy and privacy fundamentalism, focus on breach of individual autonomy as the only harm vector
    * (1:07:40) GDPR traps, the “legacy problem” with large companies and post-hoc regulation
    * (1:09:30) Considerations for legislating explainability
    * (1:12:10) Criticisms of the Blueprint and Suresh’s responses
    * (1:25:55) The global picture, AI legislation outside the US, legislation as experiment
    * (1:32:00) Tensions in entering policy as an academic and technologist
    * (1:35:00) Technologists need to learn additional skills to impact policy
    * (1:38:15) Suresh’s advice for technologists interested in public policy
    * (1:41:20) Outro
    Links:
    * Suresh is on Mastodon @geomblog@mastodon.social (and also Twitter)
    * Suresh’s blog
    * Blueprint for an AI Bill of Rights
    * Papers
    * Fairness and abstraction in sociotechnical systems
    * A comparative study of fairness-enhancing interventions in machine learning
    * The Philosophical Basis of Algorithmic Recourse
    * Runaway Feedback Loops in Predictive Policing


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 40 min
    Pete Florence: Dense Visual Representations, NeRFs, and LLMs for Robotics


    In episode 54 of The Gradient Podcast, Andrey Kurenkov speaks with Pete Florence.
    Note: this was recorded 2 months ago. Andrey plans to get back to releasing episodes next year.
    Pete Florence is a Research Scientist on the Robotics at Google team within the Brain team at Google Research. His research focuses on topics in robotics, computer vision, and natural language, including 3D learning, self-supervised learning, and policy learning in robotics. Before Google, he completed his PhD in Computer Science at MIT with Russ Tedrake.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00:00) Intro
    * (00:01:16) Start in AI
    * (00:04:15) PhD Work with Quadcopters
    * (00:08:40) Dense Visual Representations 
    * (00:22:00) NeRFs for Robotics
    * (00:39:00) Language Models for Robotics
    * (00:57:00) Talking to Robots in Real Time
    * (01:07:00) Limitations
    * (01:14:00) Outro
    Papers discussed:
    * Aggressive quadrotor flight through cluttered environments using mixed integer programming
    * Integrated perception and control at high speed: Evaluating collision avoidance maneuvers without maps
    * High-speed autonomous obstacle avoidance with pushbroom stereo
    * Dense Object Nets: Learning Dense Visual Object Descriptors By and For Robotic Manipulation (Best Paper Award, CoRL 2018)
    * Self-Supervised Correspondence in Visuomotor Policy Learning (Best Paper Award, RA-L 2020)
    * iNeRF: Inverting Neural Radiance Fields for Pose Estimation
    * NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields
    * Reinforcement Learning with Neural Radiance Fields
    * Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
    * Inner Monologue: Embodied Reasoning through Planning with Language Models
    * Code as Policies: Language Model Programs for Embodied Control


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 15 min
    Melanie Mitchell: Abstraction and Analogy in AI


    Have suggestions for future podcast guests (or other feedback)? Let us know here!
    In episode 53 of The Gradient Podcast, Daniel Bashir speaks to Professor Melanie Mitchell.
    Professor Mitchell is the Davis Professor at the Santa Fe Institute. Her research focuses on conceptual abstraction, analogy-making, and visual recognition in AI systems. She is the author or editor of six books and her work spans the fields of AI, cognitive science, and complex systems. Her latest book is Artificial Intelligence: A Guide for Thinking Humans. 
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (02:20) Melanie’s intro to AI
    * (04:35) Melanie’s intellectual influences, AI debates over time
    * (10:50) We don’t have the right metrics for empirical study in AI
    * (15:00) Why AI is Harder than we Think: the four fallacies
    * (20:50) Difficulties in understanding what’s difficult for machines vs humans
    * (23:30) Roles for humanlike and non-humanlike intelligence
    * (27:25) Whether “intelligence” is a useful word
    * (31:55) Melanie’s thoughts on modern deep learning advances, brittleness
    * (35:35) Abstraction, Analogies, and their role in AI
    * (38:40) Concepts as analogical and what that means for cognition
    * (41:25) Where does analogy bottom out
    * (44:50) Cognitive science approaches to concepts
    * (45:20) Understanding how to form and use concepts is one of the key problems in AI
    * (46:10) Approaching abstraction and analogy, Melanie’s work / the Copycat architecture
    * (49:50) Probabilistic program induction as a promising approach to intelligence
    * (52:25) Melanie’s advice for aspiring AI researchers
    * (54:40) Outro
    Links:
    * Melanie’s homepage and Twitter
    * Papers
    * Difficulties in AI, hype cycles
    * Why AI is Harder Than We Think
    * The Debate Over Understanding in AI’s Large Language Models
    * What Does It Mean for AI to Understand?
    * Abstraction, analogies, and reasoning
    * Abstraction and Analogy-Making in Artificial Intelligence
    * Evaluating understanding on conceptual abstraction benchmarks


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 54 min
