The Gradient: Perspectives on AI

Daniel Bashir

Deeply researched, technical interviews with experts thinking about AI and technology. thegradientpub.substack.com

  1. JAN 22

    2025 in AI, with Nathan Benaich

    Episode 144 Happy New Year! This is one of my favorite episodes of the year — for the fourth time, Nathan Benaich and I did our yearly roundup of AI news and advancements, including selections from this year’s State of AI Report. If you’ve stuck around and continue to listen, I’m really thankful you’re here. I love hearing from you.

    You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com. Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

    Outline
    * (00:00) Intro
    * (00:44) Air Street Capital and Nathan world
      * Nathan’s path from cancer research and bioinformatics to AI investing
      * The “evergreen thesis” of AI from niche to ubiquitous
      * Portfolio highlights: Eleven Labs, Synthesia, Crusoe
    * (03:44) Geographic flexibility: Europe vs. the US
      * Why SF isn’t always the best place for original decisions
      * Industry diversity in New York vs. San Francisco
      * The Munich Security Conference and Europe’s defense pivot
      * Playing macro games from a European vantage point
    * (07:55) VC investment styles and the “solo GP” approach
      * Taste as the determinant of investments
      * SF as a momentum game with small information asymmetry
      * Portfolio diversity: defense (Delian), embodied AI (Syriact), protein engineering
      * Finding entrepreneurs who “can’t do anything else”
    * (10:44) State of AI progress in 2025
      * Momentous progress in writing, research, computer use, image, and video
      * We’re in the “instruction manual” phase
      * The scale of investment: private markets, public markets, and nation states
    * (13:21) Range of outcomes and what “going bad” looks like
      * Today’s systems are genuinely useful — worst case is a valuation problem
      * Financialization of AI buildouts and GPUs
    * (14:55) DeepSeek and China closing the capability gap
      * Seven-month lag analysis (Epoch AI)
      * Benchmark skepticism and consumer preferences (“Coca-Cola vs. Pepsi”)
      * Hedonic adaptation: humans reset expectations extremely quickly
      * Bifurcation of model companies toward specific product bets
    * (18:29) Export controls and the “evolutionary pressure” argument
      * Selective pressure breeds innovation
      * Chinese companies rushing to public markets (Minimax, ZAI)
    * (21:30) Reasoning models and test-time compute
      * Chain of thought faithfulness questions
      * Monitorability tax: does observability reduce quality?
      * User confusion about when models should “think”
      * AI for science: literature agents, hypothesis generation
    * (23:53) Chain of thought interpretability and safety
      * Anthropomorphization concerns
      * Alignment faking and self-preservation behaviors
      * Cybersecurity as a bigger risk than existential risk
      * Models as payloads injected into critical systems
    * (27:26) Commercial traction and AI adoption data
      * Ramp data: 44% of US businesses paying for AI (up from 5% in early 2023)
      * Average contract values up to $530K from $39K
      * State of AI survey: 92% report productivity gains
      * The “slow takeoff” consensus and human inertia
      * Use cases: meeting notes, content generation, brainstorming, coding, financial analysis
    * (32:53) The industrial era of AI
      * Stargate and xAI data centers
      * Energy infrastructure: gas turbines and grid investment
      * Labs need to own models, data, compute, and power
      * Poolside’s approach to owning infrastructure
    * (35:40) Venture capital in the age of massive GPU capex
      * The GP lives in the present, the entrepreneur in the future, the LP in the past
      * Generality vs. specialism narratives
      * “Two and 20”: management fees vs. carried interest
      * Scaling funds to match entrepreneur ambitions
    * (40:10) NVIDIA challengers and returns analysis
      * Chinese challengers: 6x return vs. 26x on NVIDIA
      * US challengers: 2x return vs. 12x on NVIDIA
      * Grok acquired for $20B; SambaNova markdown to $1.6B
      * “The tide is lifting all boats” — demand exceeds supply
    * (44:06) The hardware lottery and architecture convergence
      * Transformer dominance and custom ASICs making a comeback
      * NVIDIA still 90–95% of published AI research
    * (45:49) AI regulation: Trump agenda and the EU AI Act
      * Domain-specific regulators vs. blanket AI policy
      * State-level experimentation creates stochasticity
      * EU AI Act: “born before GPT-4, takes effect in a world shaped by GPT-7”
      * Only three EU member states compliant by late 2025
    * (50:14) Sovereign AI: what it really means
      * True sovereignty requires energy, compute, data, talent, chip design, and manufacturing
      * The US is sovereign; the UK by itself is not
      * Form alliances or become world-class at one level of the stack
      * ASML and the Netherlands as an example
    * (52:33) Open weight safety and containment
      * Three paths: model-based safeguards, scaffolding/ecosystem, procedural/governance
      * “Pandora’s box is open” — containment on distribution, not weights
      * Leak risk: the most vulnerable link is often human
      * Developer–policymaker communication and regulator upskilling
    * (55:43) China’s AI safety approach
      * Matt Sheehan’s work on Chinese AI regulation
      * Safety summits and China’s participation
      * New Chinese policies: minor modes, mental health intervention, data governance
      * UK’s rebrand from “safety” to “security” institutes
    * (58:34) Prior predictions and patterns
      * Hits on regulatory/political areas; misses on semiconductor consolidation, AI video games
    * (59:43) 2026 predictions
      * A Chinese lab overtaking the US on the frontier (likely ZAI or DeepSeek, on scientific reasoning)
      * Data center NIMBYism influencing midterm politics
    * (01:01:01) Closing

    Links and Resources

    Nathan / Air Street Capital
    * Air Street Capital
    * State of AI Report 2025
    * Air Street Press — essays, analysis, and the Guide to AI newsletter
    * Nathan on Substack
    * Nathan on Twitter/X
    * Nathan on LinkedIn

    From Air Street Press (mentioned in episode)
    * Is the EU AI Act Actually Useful? — by Max Cutler and Nathan Benaich
    * China Has No Place at the UK AI Safety Summit (2023) — by Alex Chalmers and Nathan Benaich

    Research & Analysis
    * Epoch AI: Chinese AI Models Lag US by 7 Months — the analysis referenced on the US–China capability gap
    * Sara Hooker: The Hardware Lottery — the essay on how hardware determines which research ideas succeed
    * Matt Sheehan: China’s AI Regulations and How They Get Made — Carnegie Endowment

    Companies Mentioned
    * Eleven Labs — AI voice synthesis (Air Street portfolio)
    * Synthesia — AI video generation (Air Street portfolio)
    * Crusoe — clean compute infrastructure (Air Street portfolio)
    * Poolside — AI for code (Air Street portfolio)
    * DeepSeek — Chinese AI lab
    * Minimax — Chinese AI company
    * ASML — semiconductor equipment

    Other Resources
    * Search Engine Podcast: Data Centers (Part 1 & 2) — PJ Vogt’s two-part series on xAI data centers and the AI financing boom
    * RAAIS Foundation — Nathan’s AI research and education charity

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 1m
  2. 11/26/2025

    Iason Gabriel: Value Alignment and the Ethics of Advanced AI Systems

    Episode 143 I spoke with Iason Gabriel about:
    * Value alignment
    * Technology and worldmaking
    * How AI systems affect individuals and the social world

    Iason is a philosopher and Senior Staff Research Scientist at Google DeepMind. His work focuses on the ethics of artificial intelligence, including questions about AI value alignment, distributive justice, language ethics, and human rights. You can find him on his website and Twitter/X.

    Find me on Twitter (or LinkedIn if you want…) for updates, and reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

    Outline
    * (00:00) Intro
    * (01:18) Iason’s intellectual development
    * (04:28) Aligning language models with human values, democratic civility and agonism
    * (08:20) Overlapping consensus, differing norms, procedures for identifying norms
    * (13:27) Rawls’ theory of justice, the justificatory and stability problems
    * (19:18) Aligning LLMs and cooperation, speech acts, justification and discourse norms, literacy
    * (23:45) Actor Network Theory and alignment
    * (27:25) Value alignment and Iason’s starting points
    * (33:10) The Ethics of Advanced AI Assistants, AI’s impacts on social processes and users, personalization
    * (37:50) AGI systems and social power
    * (39:00) Displays of care and compassion, Machine Love (Joel Lehman)
    * (41:30) Virtue ethics, morality and language, virtue in AI systems vs. MacIntyre’s conception in After Virtue
    * (45:00) The Challenge of Value Alignment
    * (45:25) Technologists as worldmakers
    * (51:30) Technological determinism, collective action problems
    * (55:25) Iason’s goals with his work
    * (58:32) Outro

    Links
    Papers:
    * AI, Values, and Alignment (2020)
    * Aligning LMs with Human Values (2023)
    * Toward a Theory of Justice for AI (2023)
    * The Ethics of Advanced AI Assistants (2024)
    * A matter of principle? AI alignment as the fair treatment of claims (2025)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    59 min
  3. 12/26/2024

    2024 in AI, with Nathan Benaich

    Episode 142 Happy holidays! This is one of my favorite episodes of the year — for the third time, Nathan Benaich and I did our yearly roundup of all the AI news and advancements you need to know. This includes selections from this year’s State of AI Report, some early takes on o3, and a few minutes LARPing as China Guys… If you’ve stuck around and continue to listen, I’m really thankful you’re here. I love hearing from you.

    You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com. Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

    Outline
    * (00:00) Intro
    * (01:00) o3 and model capabilities + reasoning capabilities
    * (05:30) Economics of frontier models
    * (09:24) Air Street’s year and industry shifts: product-market fit in AI, major developments in science/biology, “vibe shifts” in defense and robotics
    * (16:00) Investment strategies in generative AI, how to evaluate and invest in AI companies
    * (19:00) Future of BioML and scientific progress: on AlphaFold 3, evaluation challenges, and the need for cross-disciplinary collaboration
    * (32:00) The AGI question and technology diffusion: Nathan’s take on AGI and timelines, technology adoption, the gap between capabilities and real-world impact
    * (39:00) Differential economic impacts from AI, tech diffusion
    * (43:00) Market dynamics and competition
    * (50:00) DeepSeek and global AI innovation
    * (59:50) A robotics renaissance? Robotics coming back into focus + advances in vision-language models and real-world applications
    * (1:05:00) Compute infrastructure: NVIDIA’s dominance, GPU availability, the competitive landscape in AI compute
    * (1:12:00) Industry consolidation: partnerships, acquisitions, regulatory concerns in AI
    * (1:27:00) Global AI politics and regulation: international AI governance and varying approaches
    * (1:35:00) The regulatory landscape
    * (1:43:00) 2025 predictions
    * (1:48:00) Closing

    Links and Resources

    From Air Street Press:
    * The State of AI Report
    * The State of Chinese AI
    * Open-endedness is all we’ll need
    * There is no scaling wall: in discussion with Eiso Kant (Poolside)
    * Alchemy doesn’t scale: the economics of general intelligence
    * Chips all the way down
    * The AI energy wars will get worse before they get better

    Other highlights/resources:
    * DeepSeek: The Quiet Giant Leading China’s AI Race — an interview with DeepSeek CEO Liang Wenfeng via ChinaTalk, translated by Jordan Schneider, Angela Shen, Irene Zhang, and others
    * A great position paper on open-endedness by Minqi Jiang, Tim Rocktäschel, and Ed Grefenstette — Minqi also wrote a blog post on this for us!
    * For China Guys only: China’s AI Regulations and How They Get Made by Matt Sheehan (+ an interview I did with Matt in 2022!)
    * The Simple Macroeconomics of AI by Daron Acemoglu + a critique by Maxwell Tabarrok (more links in the Report)
    * AI Nationalism by Ian Hogarth (from 2018)
    * Some analysis on the EU AI Act + regulation from Lawfare

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 49m
  4. 12/12/2024

    Philip Goff: Panpsychism as a Theory of Consciousness

    Episode 141 I spoke with Professor Philip Goff about:
    * What a “post-Galilean” science of consciousness looks like
    * How panpsychism helps explain consciousness and the hybrid cosmopsychist view

    Enjoy! Philip Goff is a British author, idealist philosopher, and professor at Durham University whose research focuses on philosophy of mind and consciousness — specifically, on how consciousness can be part of the scientific worldview. He is the author of multiple books, including Consciousness and Fundamental Reality; Galileo’s Error: Foundations for a New Science of Consciousness; and Why? The Purpose of the Universe.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (01:05) Goff vs. Carroll on the Knowledge Argument and explanation
    * (08:00) Preferences for theories
    * (12:55) Curiosity (Grounding, Essence) and the Knowledge Argument
    * (14:40) Phenomenal transparency and physicalism vs. anti-physicalism
    * (29:00) How exactly does panpsychism help explain consciousness?
    * (30:05) The argument for hybrid cosmopsychism
    * (36:35) “Bare” subjects / subjects before inheriting phenomenal properties
    * (40:35) Bundle theories of the self
    * (43:35) Fundamental properties and new subjects as causal powers
    * (50:00) Integrated Information Theory
    * (55:00) Fundamental assumptions in hybrid cosmopsychism
    * (1:00:00) Outro

    Links:
    * Philip’s homepage and Twitter
    * Papers
      * Putting Consciousness First
      * Curiosity (Grounding, Essence) and the Knowledge Argument

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1 hr
  5. 11/21/2024

    Some Changes at The Gradient

    Hi everyone! If you’re a new subscriber or listener, welcome. If you’re not new, you’ve probably noticed that things have slowed down from us a bit recently. Hugh Zhang, Andrey Kurenkov, and I sat down to recap some of The Gradient’s history, where we are now, and how things will look going forward.

    To summarize and give some context: The Gradient has been around for about six years now — we started as an online magazine, and began producing our own newsletter and podcast about four years ago. With a team of volunteers — we take in a bit of money through Substack that we use for subscriptions to tools we need, and try to pay ourselves a bit — we’ve been able to keep this going for quite some time. Our team has less bandwidth than we’d like right now (and I’ll admit that at least some of us are running on fumes…), so we’ll be making a few changes:
    * Magazine: We’re going to be scaling down our editing work on the magazine. While we won’t be accepting pitches for unwritten drafts for now, if you have a full piece that you’d like to pitch to us, we’ll consider posting it. If you’ve reached out about writing and haven’t heard from us, we’re really sorry. We’ve tried a few different arrangements to manage the pipeline of articles we have, but it’s been difficult to make it work. We still want this to be a place to promote good work and writing from the ML community, so we intend to continue using this Substack for that purpose. If we have more editing bandwidth on our team in the future, we want to continue doing that work.
    * Newsletter: We’ll aim to continue the newsletter as before, but with a “Best from the Community” section highlighting posts. We’ll have a way for you to send articles you want to be featured, but for now you can reach us at editor@thegradient.pub.
    * Podcast: I’ll be continuing this (at a slower pace), but will eventually transition it away from The Gradient given its expanded range of topics. If you’re interested in following, it might be worth subscribing on another player like Apple Podcasts or Spotify, or using the RSS feed.
    * Sigmoid Social: We’ll keep this alive as long as there’s financial support for it.

    If you like what we do and/or want to help us out in any way, do reach out to editor@thegradient.pub. We love hearing from you.

    Timestamps
    * (0:00) Intro
    * (01:55) How The Gradient began
    * (03:23) Changes and announcements
    * (10:10) More Gradient history! On our involvement, favorite articles, and some plugs

    Some of our favorite articles! There are so many, so this is very much a non-exhaustive list:
    * NLP’s ImageNet moment has arrived
    * The State of Machine Learning Frameworks in 2019
    * Why transformative artificial intelligence is really, really hard to achieve
    * An Introduction to AI Story Generation
    * The Artificiality of Alignment (I didn’t mention this one in the episode, but it should be here)

    Places you can find us!
    Hugh:
    * Twitter
    * Personal site
    * Papers/things mentioned:
      * A Careful Examination of LLM Performance on Grade School Arithmetic (GSM1k)
      * Planning in Natural Language Improves LLM Search for Code Generation
      * Humanity’s Last Exam
    Andrey:
    * Twitter
    * Personal site
    * Last Week in AI Podcast
    Daniel:
    * Twitter
    * Substack blog
    * Personal site (under construction)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    34 min
  6. 10/10/2024

    Jacob Andreas: Language, Grounding, and World Models

    Episode 140 I spoke with Professor Jacob Andreas about:
    * Language and the world
    * World models
    * How he’s developed as a scientist

    Enjoy! Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar), and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT’s Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML, and NAACL.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (00:40) Jacob’s relationship with grounding fundamentalism
    * (05:21) Jacob’s reaction to LLMs
    * (11:24) Grounding language — is there a philosophical problem?
    * (15:54) Grounding and language modeling
    * (24:00) Analogies between humans and LMs
    * (30:46) Grounding language with points and paths in continuous spaces
    * (32:00) Neo-Davidsonian formal semantics
    * (36:27) Evolving assumptions about structure prediction
    * (40:14) Segmentation and event structure
    * (42:33) How much do word embeddings encode about syntax?
    * (43:10) Jacob’s process for studying scientific questions
    * (45:38) Experiments and hypotheses
    * (53:01) Calibrating assumptions as a researcher
    * (54:08) Flexibility in research
    * (56:09) Measuring Compositionality in Representation Learning
    * (56:50) Developing an independent research agenda and developing a lab culture
    * (1:03:25) Language Models as Agent Models
    * (1:04:30) Background
    * (1:08:33) Toy experiments and interpretability research
    * (1:13:30) Developing effective toy experiments
    * (1:15:25) Language Models, World Models, and Human Model-Building
    * (1:15:56) OthelloGPT’s bag of heuristics and multiple “world models”
    * (1:21:32) What is a world model?
    * (1:23:45) The Big Question — from meaning to world models
    * (1:28:21) From “meaning” to precise questions about LMs
    * (1:32:01) Mechanistic interpretability and reading tea leaves
    * (1:35:38) Language and the world
    * (1:38:07) Towards better language models
    * (1:43:45) Model editing
    * (1:45:50) On academia’s role in NLP research
    * (1:49:13) On good science
    * (1:52:36) Outro

    Links:
    * Jacob’s homepage and Twitter
    * Language Models, World Models, and Human Model-Building
    * Papers
      * Semantic Parsing as Machine Translation (2013)
      * Grounding language with points and paths in continuous spaces (2014)
      * How much do word embeddings encode about syntax? (2014)
      * Translating neuralese (2017)
      * Analogs of linguistic structure in deep representations (2017)
      * Learning with latent language (2018)
      * Learning from Language (2018)
      * Measuring Compositionality in Representation Learning (2019)
      * Experience grounds language (2020)
      * Language Models as Agent Models (2022)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 53m
  7. 09/26/2024

    Evan Ratliff: Our Future with Voice Agents

    Episode 139 I spoke with Evan Ratliff about:
    * Shell Game, Evan’s new podcast, where he creates an AI voice clone of himself and sets it loose
    * The end of the Longform Podcast and his thoughts on the state of journalism

    Enjoy! Evan is an award-winning investigative journalist, bestselling author, podcast host, and entrepreneur. He’s the author of The Mastermind: A True Story of Murder, Empire, and a New Kind of Crime Lord; the writer and host of the hit podcasts Shell Game and Persona: The French Deception; and the cofounder of The Atavist Magazine, Pop-Up Magazine, and the Longform Podcast. As a writer, he’s a two-time National Magazine Award finalist. As an editor and producer, he’s a two-time Emmy nominee and National Magazine Award winner.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (01:05) Evan’s ambitious and risky projects
    * (04:45) Wearing different personas as a journalist
    * (08:31) Boundaries and acceptability in using voice agents
    * (11:42) Impacts on other people
    * (13:12) “The kids these days” — how will new technologies impact younger people?
    * (17:12) Evan’s approach to children’s technology use
    * (20:05) Techno-solutionism and improvements in medicine, childcare
    * (24:15) Evan’s perspective on simulations of people
    * (27:05) On motivations for building tech startups
    * (30:42) Evan’s outlook for Shell Game’s impact and motivations for his work
    * (36:05) How Evan decided to write for a career
    * (40:02) How voice agents might impact our conversations
    * (43:52) Evan’s experience with Longform and podcasting
    * (47:15) Perspectives on doing good interviews
    * (52:11) Mimicking and inspiration, developing style
    * (57:15) Writers and their motivations, the state of longform journalism
    * (1:06:15) The internet and writing
    * (1:09:41) On the ending of Longform
    * (1:19:48) Outro

    Links:
    * Evan’s homepage and Twitter
    * Shell Game, Evan’s new podcast
    * Longform Podcast

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 20m
  8. 09/12/2024

    Meredith Ringel Morris: Generative AI's HCI Moment

    Episode 138 I spoke with Meredith Morris about:
    * The intersection of AI and HCI, and why we need more cross-pollination between AI and adjacent fields
    * Disability studies and AI
    * Generative ghosts and technological determinism
    * Developing a useful definition of AGI

    I didn’t get to record an intro for this episode since I’ve been sick. Enjoy! Meredith is Director of Human-AI Interaction Research at Google DeepMind and an Affiliate Professor in The Paul G. Allen School of Computer Science & Engineering and in The Information School at the University of Washington, where she participates in the dub research consortium. Her work spans the areas of human-computer interaction (HCI), human-centered AI, human-AI interaction, computer-supported cooperative work (CSCW), social computing, and accessibility. She has been recognized as an ACM Fellow and ACM SIGCHI Academy member for her contributions to HCI.

    Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Meredith’s influences and earlier work
    * (03:00) Distinctions between AI and HCI
    * (05:56) Maturity of fields and cross-disciplinary work
    * (09:03) Technology and ends
    * (10:37) Unique aspects of Meredith’s research direction
    * (12:55) Forms of knowledge production in interdisciplinary work
    * (14:08) Disability, Bias, and AI
    * (18:32) LaMPost and using LMs for writing
    * (20:12) Accessibility approaches for dyslexia
    * (22:15) Awareness of AI and perceptions of autonomy
    * (24:43) The software model of personhood
    * (28:07) Notions of intelligence, normative visions and disability studies
    * (32:41) Disability categories and learning systems
    * (37:24) Bringing more perspectives into CS research and re-defining what counts as CS research
    * (39:36) Training interdisciplinary researchers, blurring boundaries in academia and industry
    * (43:25) Generative Agents and public imagination
    * (45:13) The state of ML conferences, the need for more cross-pollination
    * (46:42) Prestige in conferences, the move towards more cross-disciplinary work
    * (48:52) Joon Park Appreciation
    * (49:51) Training interdisciplinary researchers
    * (53:20) Generative Ghosts and technological determinism
    * (57:06) Examples of generative ghosts and clones, relationships to agentic systems
    * (1:00:39) Reasons for wanting generative ghosts
    * (1:02:25) Questions of consent for generative clones and ghosts
    * (1:05:01) Labor involved in maintaining generative ghosts, psychological tolls
    * (1:06:25) Potential religious and spiritual significance of generative systems
    * (1:10:19) Anthropomorphization
    * (1:12:14) User experience and cognitive biases
    * (1:15:24) Levels of AGI
    * (1:16:13) Defining AGI
    * (1:23:20) World models and AGI
    * (1:26:16) Metacognitive abilities in AGI
    * (1:30:06) Towards Bidirectional Human-AI Alignment
    * (1:30:55) Pluralistic value alignment
    * (1:32:43) Meredith’s perspective on deploying AI systems
    * (1:36:09) Meredith’s advice for younger interdisciplinary researchers

    Links:
    * Meredith’s homepage, Twitter, and Google Scholar
    * Papers
      * Mediating Group Dynamics through Tabletop Interface Design
      * SearchTogether: An Interface for Collaborative Web Search
      * AI and Accessibility: A Discussion of Ethical Considerations
      * Disability, Bias, and AI
      * LaMPost: Design and Evaluation of an AI-assisted Email Writing Prototype for Adults with Dyslexia
      * Generative Ghosts
      * Levels of AGI

    Get full access to The Gradient at thegradientpub.substack.com/subscribe

    1h 38m
4.7 out of 5 (47 Ratings)
