127 episodes

Deeply researched, technical interviews with experts thinking about AI and technology. Hosted, recorded, researched, and produced by Daniel Bashir.

thegradientpub.substack.com

The Gradient: Perspectives on AI
The Gradient

    • Technology
    • 3.7 • 3 ratings


    Thomas Mullaney: A Global History of the Information Age

    Episode 125
    False universalism freaks me out. It doesn’t freak me out as a first principle because of epistemic violence; it freaks me out because it works.
    I spoke with Professor Thomas Mullaney about:
    * Telling stories about your work and balancing what feels meaningful with practical realities
    * Destabilizing our understandings of the technologies we feel familiar with, and the work of researching the history of the Chinese typewriter
    * The personal nature of research
    The Chinese Typewriter and The Chinese Computer are two of the best books I’ve read in a very long time. And they’re not just good and interesting, but important to read, for the history they tell and the ideas and arguments they present—I can’t recommend them and Professor Mullaney’s other work enough.
    Tom is Professor of History and Professor of East Asian Languages and Cultures, by courtesy. He is also the Kluge Chair in Technology and Society at the Library of Congress, and a Guggenheim Fellow. He is the author or lead editor of 8 books, including The Chinese Computer, The Chinese Typewriter (winner of the Fairbank prize), Your Computer is on Fire, and Coming to Terms with the Nation: Ethnic Classification in Modern China.
    I spend a lot of time on this podcast—if you like my work, you can support me on Patreon :)
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:00) “In Their Own Words” interview: on telling stories about your work
    * (07:42) Clashing narratives and authenticity/inauthenticity in pursuing your work
    * (15:48) Why Professor Mullaney pursued studying the Chinese typewriter
    * (18:20) Worldmaking, transforming the physical world to fit our descriptive models
    * (30:07) Internal and illegible continuities/coherence in work
    * (31:45) The role of a “self”
    * (43:06) The 2008 Beijing Olympics and false (alphabetical) universalism, projectivism
    * (1:04:23) “Kicking the ladder” and the personal nature of research
    * (1:18:07) The “Technolinguistic Chinese Exclusion Act” — the situatedness of historians in their work
    * (1:33:00) Is the Chinese typewriter project finished? / on the resolution of problems
    * (1:43:35) Outro
    Links:
    * Professor Mullaney’s homepage and Twitter
    * In Their Own Words: Thomas Mullaney
    * Books
    * The Chinese Computer: A Global History of the Information Age
    * The Chinese Typewriter: A History
    * Coming to Terms with the Nation: Ethnic Classification in Modern China


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 43 min
    Seth Lazar: Normative Philosophy of Computing

    Episode 124
    You may think you’re doing a priori reasoning, but actually you’re just over-generalizing from your current experience of technology.
    I spoke with Professor Seth Lazar about:
    * Why managing near-term and long-term risks isn’t always zero-sum
    * How to think through axioms and systems in political philosophy
    * Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI
    Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (00:54) Ad read — MLOps conference
    * (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation
    * (03:53) Attention allocation as an independent good (or bad)
    * (08:22) Axioms in political philosophy
    * (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust
    * (15:05) AI safety / catastrophic risk concerns
    * (22:10) Superintelligence arguments, reasoning about technology
    * (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?
    * (35:55) GPT-2, model weights, related debates
    * (39:11) Power and economics—coordination problems, company incentives
    * (50:42) Morality tales, relationship between safety and capabilities
    * (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy
    * (1:02:28) What is a feasibility horizon?
    * (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter
    * (1:14:25) Sociotechnical lenses, narrowly technical solutions
    * (1:19:47) Experiments for responsibly integrating AI systems into society
    * (1:26:53) Helpful/honest/harmless and antagonistic AI systems
    * (1:33:35) Managing incentives conducive to developing technology in the public interest
    * (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia
    * (1:46:54) How we can help legitimize and support interdisciplinary work
    * (1:50:07) Outro
    Links:
    * Seth’s Linktree and Twitter
    * Resources
    * Attention, moral skill, and algorithmic recommendation
    * Catastrophic AI Risk slides


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 50 min
    Suhail Doshi: The Future of Computer Vision

    Episode 123
    I spoke with Suhail Doshi about:
    * Why benchmarks aren’t prepared for tomorrow’s AI models
    * How he thinks about artists in a world with advanced AI tools
    * Building a unified computer vision model that can generate, edit, and understand pixels
    Suhail is a software engineer and entrepreneur known for founding Mixpanel, Mighty Computing, and Playground AI (they’re hiring!).
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (00:54) Ad read — MLOps conference
    * (01:30) Suhail is *not* in pivot hell but he *is* all-in on 50% AI-generated music
    * (03:45) AI and music, similarities to Playground
    * (07:50) Skill vs. creative capacity in art
    * (12:43) What we look for in music and art
    * (15:30) Enabling creative expression
    * (18:22) Building a unified computer vision model, underinvestment in computer vision
    * (23:14) Enhancing the aesthetic quality of images: color and contrast, benchmarks vs user desires
    * (29:05) “Benchmarks are not prepared for how powerful these models will become”
    * (31:56) Personalized models and personalized benchmarks
    * (36:39) Engaging users and benchmark development
    * (39:27) What a foundation model for graphics requires
    * (45:33) Text-to-image is insufficient
    * (46:38) DALL-E 2 and Imagen comparisons, FID
    * (49:40) Compositionality
    * (50:37) Why Playground focuses on images vs. 3d, video, etc.
    * (54:11) Open source and Playground’s strategy
    * (57:18) When to stop open-sourcing?
    * (1:03:38) Suhail’s thoughts on AGI discourse
    * (1:07:56) Outro
    Links:
    * Playground homepage
    * Suhail on Twitter


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 8 min
    Azeem Azhar: The Exponential View

    Episode 122
    I spoke with Azeem Azhar about:
    * The speed of progress in AI
    * Historical context for some of the terminology we use and how we think about technology
    * What we might want our future to look like
    Azeem is an entrepreneur, investor, and adviser. He is the creator of Exponential View, a global platform for in-depth technology analysis, and the host of the Bloomberg Original series Exponentially.
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (00:32) Ad read — MLOps conference
    * (01:05) Problematizing the term “exponential”
    * (07:35) Moore’s Law as social contract, speed of technological growth and impedances
    * (14:45) Academic incentives, interdisciplinary work, rational agents and historical context
    * (21:24) Monolithic scaling
    * (26:38) Investment in scaling
    * (31:22) On Sam Altman
    * (36:25) Uses of “AGI,” “intelligence”
    * (41:32) Historical context for terminology
    * (48:58) AI and teaching
    * (53:51) On the technology-human divide
    * (1:06:26) New technologies and the futures we want
    * (1:10:50) Inevitability narratives
    * (1:17:01) Rationality and objectivity
    * (1:21:13) Cultural affordances and intellectual history
    * (1:26:15) Centralized and decentralized AI systems
    * (1:32:54) Instruction tuning and helpful/honest/harmless
    * (1:39:18) Azeem’s future outlook
    * (1:46:15) Outro
    Links:
    * Azeem’s website and Twitter
    * Exponential View


    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 46 min
    David Thorstad: Bounded Rationality and the Case Against Longtermism

    Episode 122
    I spoke with Professor David Thorstad about:
    * The practical difficulties of doing interdisciplinary work
    * Why theories of human rationality should account for boundedness, heuristics, and other cognitive limitations
    * Why EA epistemics suck (ok, it’s a little more nuanced than that)
    Professor Thorstad is an Assistant Professor of Philosophy at Vanderbilt University, a Senior Research Affiliate at the Global Priorities Institute at Oxford, and a Research Affiliate at the MINT Lab at Australian National University. One strand of his research asks how cognitively limited agents should decide what to do and believe. A second strand asks how altruists should use limited funds to do good effectively.
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:15) David’s interest in rationality
    * (02:45) David’s crisis of confidence, models abstracted from psychology
    * (05:00) Blending formal models with studies of the mind
    * (06:25) Interaction between academic communities
    * (08:24) Recognition of and incentives for interdisciplinary work
    * (09:40) Movement towards interdisciplinary work
    * (12:10) The Standard Picture of rationality
    * (14:11) Why the Standard Picture was attractive
    * (16:30) Violations of and rebellion against the Standard Picture
    * (19:32) Mistakes made by critics of the Standard Picture
    * (22:35) Other competing programs vs Standard Picture
    * (26:27) Characterizing Bounded Rationality
    * (27:00) A worry: faculties criticizing themselves
    * (29:28) Self-improving critique and longtermism
    * (30:25) Central claims in bounded rationality and controversies
    * (32:33) Heuristics and formal theorizing
    * (35:02) Violations of Standard Picture, vindicatory epistemology
    * (37:03) The Reason Responsive Consequentialist View (RRCV)
    * (38:30) Objective and subjective pictures
    * (41:35) Reason responsiveness
    * (43:37) There are no epistemic norms for inquiry
    * (44:00) Norms vs reasons
    * (45:15) Arguments against epistemic nihilism for belief
    * (47:30) Norms and self-delusion
    * (49:55) Difficulty of holding beliefs for pragmatic reasons
    * (50:50) The Gibbardian picture, inquiry as an action
    * (52:15) Thinking how to act and thinking how to live — the power of inquiry
    * (53:55) Overthinking and conducting inquiry
    * (56:30) Is thinking how to inquire as an all-things-considered matter?
    * (58:00) Arguments for the RRCV
    * (1:00:40) Deciding on minimal criteria for the view, stereotyping
    * (1:02:15) Eliminating stereotypes from the theory
    * (1:04:20) Theory construction in epistemology and moral intuition
    * (1:08:20) Refusing theories for moral reasons and disciplinary boundaries
    * (1:10:30) The argument from minimal criteria, evaluating against competing views
    * (1:13:45) Comparing to other theories
    * (1:15:00) The explanatory argument
    * (1:17:53) Parfit and Railton, norms of friendship vs utility
    * (1:20:00) Should you call out your friend for being a womanizer?
    * (1:22:00) Vindicatory Epistemology
    * (1:23:05) Panglossianism and meliorative epistemology
    * (1:24:42) Heuristics and recognition-driven investigation
    * (1:26:33) Rational inquiry leading to irrational beliefs — metacognitive processing
    * (1:29:08) Stakes of inquiry and costs of metacognitive processing
    * (1:30:00) When agents are incoherent, focuses on inquiry
    * (1:32:05) Indirect normative assessment and its consequences
    * (1:37:47) Against the Singularity Hypothesis
    * (1:39:00) Superintelligence and the ontological argument
    * (1:41:50) Hardware growth and general intelligence growth, AGI definitions
    * (1:43:55) Difficulties in arguing for hyperbolic growth
    * (1:46:07) Chalmers and the proportionality argument
    * (1:47:53) Arguments for/against diminishing growth, research productivity, Moore’s Law
    * (1:50:08) On progress studies
    * (1:52:40) Improving research productivity and techno

    • 2 hr 19 min
    Ryan Tibshirani: Statistics, Nonparametric Regression, Conformal Prediction

    Episode 121
    I spoke with Professor Ryan Tibshirani about:
    * Differences between the ML and statistics communities in scholarship, terminology, and other areas
    * Trend filtering
    * Why you can’t just use garbage prediction functions when doing conformal prediction
    Ryan is a Professor in the Department of Statistics at UC Berkeley. He is also a Principal Investigator in the Delphi group. From 2011-2022, he was a faculty member in Statistics and Machine Learning at Carnegie Mellon University. From 2007-2011, he did his Ph.D. in Statistics at Stanford University.
    Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
    The Gradient Podcast on: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:10) Ryan’s background and path into statistics
    * (07:00) Cultivating taste as a researcher
    * (11:00) Conversations within the statistics community
    * (18:30) Use of terms, disagreements over stability and definitions
    * (23:05) Nonparametric Regression
    * (23:55) Background on trend filtering
    * (33:48) Analysis and synthesis frameworks in problem formulation
    * (39:45) Neural networks as a specific take on synthesis
    * (40:55) Divided differences, falling factorials, and discrete splines
    * (41:55) Motivations and background
    * (48:07) Divided differences vs. derivatives, approximation and efficiency
    * (51:40) Conformal prediction
    * (52:40) Motivations
    * (1:10:20) Probabilistic guarantees in conformal prediction, choice of predictors
    * (1:14:25) Assumptions: i.i.d. and exchangeability — conformal prediction beyond exchangeability
    * (1:25:00) Next directions
    * (1:28:12) Epidemic forecasting — COVID-19 impact and trends survey
    * (1:29:10) Survey methodology
    * (1:38:20) Data defect correlation and its limitations for characterizing datasets
    * (1:46:14) Outro
    Links:
    * Ryan’s homepage
    * Works read/mentioned
    * Nonparametric Regression
    * Adaptive Piecewise Polynomial Estimation via Trend Filtering (2014) 
    * Divided Differences, Falling Factorials, and Discrete Splines: Another Look at Trend Filtering and Related Problems (2020)
    * Distribution-free Inference
    * Distribution-Free Predictive Inference for Regression (2017)
    * Conformal Prediction Under Covariate Shift (2019)
    * Conformal Prediction Beyond Exchangeability (2023)
    * Delphi and COVID-19 research
    * Flexible Modeling of Epidemics
    * Real-Time Estimation of COVID-19 Infections
    * The US COVID-19 Trends and Impact Survey and Big data, big problems: Responding to “Are we there yet?”



    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    • 1 hr 46 min

Customer Reviews

3.7 out of 5
3 ratings

Most popular podcasts in Technology

SvD Tech brief
Svenska Dagbladet
Lex Fridman Podcast
Lex Fridman
The TED AI Show
TED
Acquired
Ben Gilbert and David Rosenthal
Darknet Diaries
Jack Rhysider
AI Sweden Podcast
AI Sweden

You might also like

Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and al
Alessio + swyx
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Erik Torenberg, Nathan Labenz
Last Week in AI
Skynet Today
No Priors: Artificial Intelligence | Technology | Startups
Conviction | Pod People
Practical AI: Machine Learning, Data Science
Changelog Media