The Gradient: Perspectives on AI

Daniel Bashir

Deeply researched, technical interviews with experts thinking about AI and technology. thegradientpub.substack.com

  1. Some Changes at The Gradient

    5 DAYS AGO

    Some Changes at The Gradient

    Hi everyone! If you’re a new subscriber or listener, welcome. If you’re not new, you’ve probably noticed that things have slowed down from us a bit recently. Hugh Zhang, Andrey Kurenkov, and I sat down to recap some of The Gradient’s history, where we are now, and how things will look going forward.

    To summarize and give some context: The Gradient has been around for about six years now — we began as an online magazine, and started producing our own newsletter and podcast about four years ago. With a team of volunteers — we take in a bit of money through Substack that we use for subscriptions to tools we need, and we try to pay ourselves a bit — we’ve been able to keep this going for quite some time. Our team has less bandwidth than we’d like right now (and I’ll admit that at least some of us are running on fumes…), so we’ll be making a few changes:

    * Magazine: We’re going to be scaling down our editing work on the magazine. While we won’t be accepting pitches for unwritten drafts for now, if you have a full piece that you’d like to pitch to us, we’ll consider posting it. If you’ve reached out about writing and haven’t heard from us, we’re really sorry. We’ve tried a few different arrangements to manage the pipeline of articles we have, but it’s been difficult to make it work. We still want this to be a place to promote good work and writing from the ML community, so we intend to continue using this Substack for that purpose. If we have more editing bandwidth on our team in the future, we want to continue doing that work.
    * Newsletter: We’ll aim to continue the newsletter as before, but with a “Best from the Community” section highlighting posts. We’ll have a way for you to send articles you want featured, but for now you can reach us at editor@thegradient.pub.
    * Podcast: I’ll be continuing this (at a slower pace), but will eventually transition it away from The Gradient given its expanded range. If you’re interested in following, it might be worth subscribing on another player like Apple Podcasts or Spotify, or using the RSS feed.
    * Sigmoid Social: We’ll keep this alive as long as there’s financial support for it.

    If you like what we do and/or want to help us out in any way, do reach out to editor@thegradient.pub. We love hearing from you.

    Timestamps
    * (0:00) Intro
    * (01:55) How The Gradient began
    * (03:23) Changes and announcements
    * (10:10) More Gradient history! On our involvement, favorite articles, and some plugs

    Some of our favorite articles! There are so many, so this is very much a non-exhaustive list:
    * NLP’s ImageNet moment has arrived
    * The State of Machine Learning Frameworks in 2019
    * Why transformative artificial intelligence is really, really hard to achieve
    * An Introduction to AI Story Generation
    * The Artificiality of Alignment (I didn’t mention this one in the episode, but it should be here)

    Places you can find us!

    Hugh:
    * Twitter
    * Personal site
    * Papers/things mentioned:
      * A Careful Examination of LLM Performance on Grade School Arithmetic (GSM1k)
      * Planning in Natural Language Improves LLM Search for Code Generation
      * Humanity’s Last Exam

    Andrey:
    * Twitter
    * Personal site
    * Last Week in AI Podcast

    Daniel:
    * Twitter
    * Substack blog
    * Personal site (under construction)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    34 min
  2. 10 OCT

    Jacob Andreas: Language, Grounding, and World Models

    Episode 140

    I spoke with Professor Jacob Andreas about:
    * Language and the world
    * World models
    * How he’s developed as a scientist

    Enjoy!

    Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar), and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT’s Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML, and NAACL.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (00:40) Jacob’s relationship with grounding fundamentalism
    * (05:21) Jacob’s reaction to LLMs
    * (11:24) Grounding language — is there a philosophical problem?
    * (15:54) Grounding and language modeling
    * (24:00) Analogies between humans and LMs
    * (30:46) Grounding language with points and paths in continuous spaces
    * (32:00) Neo-Davidsonian formal semantics
    * (36:27) Evolving assumptions about structure prediction
    * (40:14) Segmentation and event structure
    * (42:33) How much do word embeddings encode about syntax?
    * (43:10) Jacob’s process for studying scientific questions
    * (45:38) Experiments and hypotheses
    * (53:01) Calibrating assumptions as a researcher
    * (54:08) Flexibility in research
    * (56:09) Measuring Compositionality in Representation Learning
    * (56:50) Developing an independent research agenda and developing a lab culture
    * (1:03:25) Language Models as Agent Models
    * (1:04:30) Background
    * (1:08:33) Toy experiments and interpretability research
    * (1:13:30) Developing effective toy experiments
    * (1:15:25) Language Models, World Models, and Human Model-Building
    * (1:15:56) OthelloGPT’s bag of heuristics and multiple “world models”
    * (1:21:32) What is a world model?
    * (1:23:45) The Big Question — from meaning to world models
    * (1:28:21) From “meaning” to precise questions about LMs
    * (1:32:01) Mechanistic interpretability and reading tea leaves
    * (1:35:38) Language and the world
    * (1:38:07) Towards better language models
    * (1:43:45) Model editing
    * (1:45:50) On academia’s role in NLP research
    * (1:49:13) On good science
    * (1:52:36) Outro

    Links:
    * Jacob’s homepage and Twitter
    * Language Models, World Models, and Human Model-Building
    * Papers
      * Semantic Parsing as Machine Translation (2013)
      * Grounding language with points and paths in continuous spaces (2014)
      * How much do word embeddings encode about syntax? (2014)
      * Translating neuralese (2017)
      * Analogs of linguistic structure in deep representations (2017)
      * Learning with latent language (2018)
      * Learning from Language (2018)
      * Measuring Compositionality in Representation Learning (2019)
      * Experience grounds language (2020)
      * Language Models as Agent Models (2022)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 53m
  3. 26 SEPT

    Evan Ratliff: Our Future with Voice Agents

    Episode 139

    I spoke with Evan Ratliff about:
    * Shell Game, Evan’s new podcast, where he creates an AI voice clone of himself and sets it loose.
    * The end of the Longform Podcast and his thoughts on the state of journalism.

    Enjoy!

    Evan is an award-winning investigative journalist, bestselling author, podcast host, and entrepreneur. He’s the author of The Mastermind: A True Story of Murder, Empire, and a New Kind of Crime Lord; the writer and host of the hit podcasts Shell Game and Persona: The French Deception; and the cofounder of The Atavist Magazine, Pop-Up Magazine, and the Longform Podcast. As a writer, he’s a two-time National Magazine Award finalist. As an editor and producer, he’s a two-time Emmy nominee and National Magazine Award winner.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (01:05) Evan’s ambitious and risky projects
    * (04:45) Wearing different personas as a journalist
    * (08:31) Boundaries and acceptability in using voice agents
    * (11:42) Impacts on other people
    * (13:12) “The kids these days” — how will new technologies impact younger people?
    * (17:12) Evan’s approach to children’s technology use
    * (20:05) Techno-solutionism and improvements in medicine, childcare
    * (24:15) Evan’s perspective on simulations of people
    * (27:05) On motivations for building tech startups
    * (30:42) Evan’s outlook for Shell Game’s impact and motivations for his work
    * (36:05) How Evan decided to write for a career
    * (40:02) How voice agents might impact our conversations
    * (43:52) Evan’s experience with Longform and podcasting
    * (47:15) Perspectives on doing good interviews
    * (52:11) Mimicking and inspiration, developing style
    * (57:15) Writers and their motivations, the state of longform journalism
    * (1:06:15) The internet and writing
    * (1:09:41) On the ending of Longform
    * (1:19:48) Outro

    Links:
    * Evan’s homepage and Twitter
    * Shell Game, Evan’s new podcast
    * Longform Podcast

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 20m
  4. 12 SEPT

    Meredith Ringel Morris: Generative AI's HCI Moment

    Episode 138

    I spoke with Meredith Morris about:
    * The intersection of AI and HCI and why we need more cross-pollination between AI and adjacent fields
    * Disability studies and AI
    * Generative ghosts and technological determinism
    * Developing a useful definition of AGI

    I didn’t get to record an intro for this episode since I’ve been sick. Enjoy!

    Meredith is Director for Human-AI Interaction Research at Google DeepMind and an Affiliate Professor in The Paul G. Allen School of Computer Science & Engineering and in The Information School at the University of Washington, where she participates in the dub research consortium. Her work spans the areas of human-computer interaction (HCI), human-centered AI, human-AI interaction, computer-supported cooperative work (CSCW), social computing, and accessibility. She has been recognized as an ACM Fellow and ACM SIGCHI Academy member for her contributions to HCI.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Meredith’s influences and earlier work
    * (03:00) Distinctions between AI and HCI
    * (05:56) Maturity of fields and cross-disciplinary work
    * (09:03) Technology and ends
    * (10:37) Unique aspects of Meredith’s research direction
    * (12:55) Forms of knowledge production in interdisciplinary work
    * (14:08) Disability, Bias, and AI
    * (18:32) LaMPost and using LMs for writing
    * (20:12) Accessibility approaches for dyslexia
    * (22:15) Awareness of AI and perceptions of autonomy
    * (24:43) The software model of personhood
    * (28:07) Notions of intelligence, normative visions and disability studies
    * (32:41) Disability categories and learning systems
    * (37:24) Bringing more perspectives into CS research and re-defining what counts as CS research
    * (39:36) Training interdisciplinary researchers, blurring boundaries in academia and industry
    * (43:25) Generative Agents and public imagination
    * (45:13) The state of ML conferences, the need for more cross-pollination
    * (46:42) Prestige in conferences, the move towards more cross-disciplinary work
    * (48:52) Joon Park Appreciation
    * (49:51) Training interdisciplinary researchers
    * (53:20) Generative Ghosts and technological determinism
    * (57:06) Examples of generative ghosts and clones, relationships to agentic systems
    * (1:00:39) Reasons for wanting generative ghosts
    * (1:02:25) Questions of consent for generative clones and ghosts
    * (1:05:01) Labor involved in maintaining generative ghosts, psychological tolls
    * (1:06:25) Potential religious and spiritual significance of generative systems
    * (1:10:19) Anthropomorphization
    * (1:12:14) User experience and cognitive biases
    * (1:15:24) Levels of AGI
    * (1:16:13) Defining AGI
    * (1:23:20) World models and AGI
    * (1:26:16) Metacognitive abilities in AGI
    * (1:30:06) Towards Bidirectional Human-AI Alignment
    * (1:30:55) Pluralistic value alignment
    * (1:32:43) Meredith’s perspective on deploying AI systems
    * (1:36:09) Meredith’s advice for younger interdisciplinary researchers

    Links:
    * Meredith’s homepage, Twitter, and Google Scholar
    * Papers
      * Mediating Group Dynamics through Tabletop Interface Design
      * SearchTogether: An Interface for Collaborative Web Search
      * AI and Accessibility: A Discussion of Ethical Considerations
      * Disability, Bias, and AI
      * LaMPost: Design and Evaluation of an AI-assisted Email Writing Prototype for Adults with Dyslexia
      * Generative Ghosts
      * Levels of AGI

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 38m
  5. 5 SEPT

    Davidad Dalrymple: Towards Provably Safe AI

    Episode 137

    I spoke with Davidad Dalrymple about:
    * His perspectives on AI risk
    * ARIA (the UK’s Advanced Research and Invention Agency) and its Safeguarded AI Programme

    Enjoy—and let me know what you think!

    Davidad is a Programme Director at ARIA. He was most recently a Research Fellow in technical AI safety at Oxford. He co-invented the top-40 cryptocurrency Filecoin, led an international neuroscience collaboration, and was a senior software engineer at Twitter and multiple startups.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (00:36) Calibration and optimism about breakthroughs
    * (03:35) Calibration and AGI timelines, effects of AGI on humanity
    * (07:10) Davidad’s thoughts on the Orthogonality Thesis
    * (10:30) Understanding how our current direction relates to AGI and breakthroughs
    * (13:33) What Davidad thinks is needed for AGI
    * (17:00) Extracting knowledge
    * (19:01) Cyber-physical systems and modeling frameworks
    * (20:00) Continuities between Davidad’s earlier work and ARIA
    * (22:56) Path dependence in technology, race dynamics
    * (26:40) More on Davidad’s perspective on what might go wrong with AGI
    * (28:57) Vulnerable world, interconnectedness of computers and control
    * (34:52) Formal verification and world modeling, Open Agency Architecture
    * (35:25) The Semantic Sufficiency Hypothesis
    * (39:31) Challenges for modeling
    * (43:44) The Deontic Sufficiency Hypothesis and mathematical formalization
    * (49:25) Oversimplification and quantitative knowledge
    * (53:42) Collective deliberation in expressing values for AI
    * (55:56) ARIA’s Safeguarded AI Programme
    * (59:40) Anthropic’s ASL levels
    * (1:03:12) Guaranteed Safe AI
    * (1:03:38) AI risk and (in)accurate world models
    * (1:09:59) Levels of safety specifications for world models and verifiers — steps to achieve high safety
    * (1:12:00) Davidad’s portfolio research approach and funding at ARIA
    * (1:15:46) Earlier concerns about ARIA — Davidad’s perspective
    * (1:19:26) Where to find more information on ARIA and the Safeguarded AI Programme
    * (1:20:44) Outro

    Links:
    * Davidad’s Twitter
    * ARIA homepage
    * Safeguarded AI Programme
    * Papers
      * Guaranteed Safe AI
      * Davidad’s Open Agency Architecture for Safe Transformative AI
      * Dioptics: a Common Generalization of Open Games and Gradient-Based Learners (2019)
      * Asynchronous Logic Automata (2008)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 21m
  6. 29 AUG

    Clive Thompson: Tales of Technology

    Episode 136

    I spoke with Clive Thompson about:
    * How he writes
    * Writing about the climate and biking across the US
    * Technology culture and persistent debates in AI
    * Poetry

    Enjoy—and let me know what you think!

    Clive is a journalist who writes about science and technology. He is a contributing writer for Wired magazine, and is currently writing his next book about micromobility and cycling across the US.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (01:07) Clive’s life as a Tarantino movie
    * (03:07) Boring life and interesting art, life as material for art
    * (10:25) Cycling across the US — Clive’s new book on mobility and decarbonization
    * (15:07) Turning inward in writing
    * (27:21) Including personal experience in writing
    * (31:53) Personal and less personal writing
    * (36:08) Conveying uncertainty and the “voice from nowhere” in traditional journalism
    * (41:10) Finding the natural end of a piece
    * (1:02:10) Writing routine
    * (1:05:08) Theories of change in Clive’s writing
    * (1:12:33) How Clive saw things before the rest of us
    * (1:27:00) Automation in software engineering
    * (1:31:40) The anthropology of coders, poetry as a framework
    * (1:43:50) Proust discourse
    * (1:45:00) Technology culture in NYC + interaction between the tech world and other worlds
    * (1:50:30) Technological developments Clive wants to see happen (free ideas)
    * (2:01:11) Clive’s argument for memorizing poetry
    * (2:09:24) How Clive finds poetry
    * (2:18:03) Clive’s pursuit of freelance writing and making compromises
    * (2:27:25) Outro

    Links:
    * Clive’s Twitter and website
    * Selected writing
      * The Attack of the Incredible Grading Machine (Lingua Franca, 1999)
      * The Know-It-All Machine (Lingua Franca, 2001)
      * How to teach AI some common sense (Wired, 2018)
      * Blogs to Riches (NY Mag, 2006)
      * Clive vs. Jonathan Franzen on whether the internet is good for writing (The Chronicle of Higher Education, 2013)
      * The Minecraft Generation (New York Times, 2016)
      * What AI College Exam Proctors are Really Teaching Our Kids (Wired, 2020)
      * Companies Don’t Need to Be Creepy to Make Money (Wired, 2021)
      * Is Sucking Carbon Out of the Air the Solution to Our Climate Crisis? (Mother Jones, 2021)
      * AI Shouldn’t Compete with Workers—It Should Supercharge Them (Wired, 2022)
      * Back to BASIC—the Most Consequential Programming Language in the History of Computing (Wired, 2024)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    2h 28m
  7. 22 AUG

    Judy Fan: Reverse Engineering the Human Cognitive Toolkit

    Episode 136

    I spoke with Judy Fan about:
    * Our use of physical artifacts for sensemaking
    * Why cognitive tools can be a double-edged sword
    * Her approach to scientific inquiry and how that approach has developed

    Enjoy—and let me know what you think!

    Judy is Assistant Professor of Psychology at Stanford and director of the Cognitive Tools Lab. Her lab employs converging approaches from cognitive science, computational neuroscience, and artificial intelligence to reverse engineer the human cognitive toolkit, especially how people use physical representations of thought — such as sketches and prototypes — to learn, communicate, and solve problems.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (00:49) Throughlines and discontinuities in Judy’s research
    * (06:26) “Meaning” in Judy’s research
    * (08:05) Production and consumption of artifacts
    * (13:03) Explanatory questions, why we develop visual artifacts, science as a social enterprise
    * (15:46) Unifying principles
    * (17:45) “Hard limits” to knowledge and optimism
    * (21:47) Tensions in different fields’ forms of sensemaking and establishing truth claims
    * (30:55) Dichotomies and carving up the space of possible hypotheses, conceptual tools
    * (33:22) Cognitive tools and projectivism, simplified models vs. nature
    * (40:28) Scientific training and science as process and habit
    * (45:51) Developing mental clarity about hypotheses
    * (51:45) Clarifying and expressing ideas
    * (1:03:21) Cognitive tools as double-edged
    * (1:14:21) Historical and social embeddedness of tools
    * (1:18:34) How cognitive tools impact our imagination
    * (1:23:30) Normative commitments and the role of cognitive science outside the academy
    * (1:32:31) Outro

    Links:
    * Judy’s Twitter and lab page
    * Selected papers (there are lots!)
      * Overviews
        * Drawing as a versatile cognitive tool (2023)
        * Using games to understand the mind (2024)
        * Socially intelligent machines that learn from humans and help humans learn (2024)
      * Research papers
        * Communicating design intent using drawing and text (2024)
        * Creating ad hoc graphical representations of number (2024)
        * Visual resemblance and interaction history jointly constrain pictorial meaning (2023)
        * Explanatory drawings prioritize functional properties at the expense of visual fidelity (2023)
        * SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction (2023)
        * Parallel developmental changes in children’s production and recognition of line drawings of visual concepts (2023)
        * Learning to communicate about shared procedural abstractions (2021)
        * Visual communication of object concepts at different levels of abstraction (2021)
        * Relating visual production and recognition of objects in the human visual cortex (2020)
        * Collabdraw: an environment for collaborative sketching with an artificial agent (2019)
        * Pragmatic inference and visual abstraction enable contextual flexibility in visual communication (2019)
        * Common object representations for visual production and recognition (2018)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 33m
  8. 15 AUG

    L.M. Sacasas: The Questions Concerning Technology

    Episode 135

    I spoke with L. M. Sacasas about:
    * His writing and intellectual influences
    * The value of asking hard questions about technology and our relationship to it
    * What happens when we decide to outsource skills and competency
    * Evolving notions of what it means to be human and questions about how to live a good life

    Enjoy—and let me know what you think!

    Michael is Executive Director of the Christian Study Center of Gainesville, Florida and author of The Convivial Society, a newsletter about technology and society. He does some of the best writing on technology I’ve had the pleasure to read, and I highly recommend his newsletter.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :) You can also support upkeep for the full Gradient team/project through a paid subscription on Substack!

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (01:12) On podcasts as a medium
    * (06:12) Michael’s writing
    * (12:38) Michael’s intellectual influences, contingency
    * (18:48) Moral seriousness
    * (22:00) Michael’s ambitions for his work
    * (26:17) The value of asking the right questions (about technology)
    * (34:18) Technology use and the “natural” pace of human life
    * (46:40) Outsourcing of skills and competency, engagement with others
    * (55:33) Inevitability narratives and technological determinism, the “Borg Complex”
    * (1:05:10) Notions of what it is to be human, embodiment
    * (1:12:37) Higher cognition vs. the body, dichotomies
    * (1:22:10) The body as a starting point for philosophy, questions about the adoption of new technologies
    * (1:30:01) Enthusiasm about technology and the cultural milieu
    * (1:35:30) Projectivism, desire for knowledge about and control of the world
    * (1:41:22) Positive visions for the future
    * (1:47:11) Outro

    Links:
    * Michael’s Substack: The Convivial Society and his book, The Frailest Thing: Ten Years of Thinking about the Meaning of Technology
    * Michael’s Twitter
    * Essays
      * Humanist Technology Criticism
      * What Does the Critic Love?
      * The Ambling Mind
      * Waste Your Time, Your Life May Depend On It
      * The Work of Art
      * The Stuff of (a Well-Lived) Life

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 47m
