High Signal: Data Science | Career | AI

Delphina

Welcome to High Signal, the podcast for data science, AI, and machine learning professionals. High Signal brings you the best from the best in data science, machine learning, and AI. Hosted by Hugo Bowne-Anderson and produced by Delphina, each episode features deep conversations with leading experts, such as Michael Jordan (UC Berkeley), Andrew Gelman (Columbia), and Chiara Farronato (HBS). Join us for practical insights from the best to help you advance your career and make an impact in these rapidly evolving fields. More on our website: https://high-signal.delphina.ai/

  1. JAN 27

    Episode 33: Why Your AI Product Will Be Obsolete in Six Months (And What To Do About It)

    Benn Stancil, writer and co-founder of Mode, joins High Signal to ask some uncomfortable questions about the current AI moment. Is now actually a terrible time to start a company? If the tools you build on today are obsolete in six months, at what point does the head start stop mattering? Is all that context engineering you're doing a waste of time, destined to go the way of Boolean search syntax in the 90s? Benn argues that AI is turning us all into Steve Jobs: not the visionary who delegated, but the one who berated people over pixel placement. As AI takes over the doing, our job becomes obsessing over the polish. He makes the case that technical debt may be self-healing: if future models can untangle the mess today's models made, then messy code isn't debt; it's a spec for a clean rewrite. We also dig into why Claude Cowork can't work. AI has these uncanny tics you can't beat out, so anything it writes "as you" will smell like AI. The solution isn't better AI writing; it's to stop pretending we write to each other at all. Benn envisions a future where communication is radically intermediated: I dump facts into a shared repository, your AI reads them, and nobody bothers with the social decoration in between.

    LINKS
    - Benn's blog on Substack
    - Benn.website, with links to everything else Benn-related
    - Will there ever be a worse time to start a startup?
    - Today's frontier is tomorrow's tech debt.
    - Why Cowork can't work: The future isn't collaborative.
    - Producer theory: Platforms are overrated.
    - Tim O'Reilly on High Signal: The End of Programming As We Know It
    - Watch the podcast episode on YouTube
    - Delphina's Newsletter

    1 hr
  2. 12/11/2025

    Episode 30: The AI Paradox: Why Your Data Team’s Workload is About to Explode

    Chris Child, VP of Product, Data Engineering at Snowflake, joins High Signal to deliver a new playbook for data leaders based on his recent MIT report, revealing why AI is paradoxically creating more work for data teams, not less. He explains how the function is undergoing a forced evolution from back-office "plumbing" to the strategic core of the enterprise, determining whether AI initiatives succeed or fail. The conversation maps the new skills and organizational structures required to navigate this shift. We dig into why off-the-shelf LLMs consistently fail to generate useful SQL without a semantic layer to provide business context, and how the most effective data engineers must now operate like product managers to solve business problems. Chris provides a clear framework on the shift from writing code to managing a portfolio of AI agents, why solving for AI risk is an extension of existing data governance, and the counterintuitive strategy of moving slowly on foundations to unlock rapid, production-grade deployment.

    LINKS
    - MIT Technology Review Report: Redefining Data Engineering in the Age of AI
    - The Evolution of the Modern Data Engineer: From Coders to Architects
    - Why Most AI Agents Fail (and What It Takes to Reach Production) with Anu Bharadwaj (Atlassian)
    - The End of Programming As We Know It with Tim O'Reilly
    - The Incentive Problem in Shipping AI Products — and How to Change It with Roberto Medri (Meta)
    - Andrej Karpathy — AGI is still a decade away
    - Chris Child on LinkedIn
    - High Signal podcast
    - Watch the podcast episode on YouTube
    - Delphina's Newsletter

    50 min
  3. 11/28/2025

    Episode 29: Why AI Adoption Fails: A Behavioral Framework for AI Implementation

    Lis Costa of the Behavioural Insights Team returns to High Signal to deliver a behavioral science playbook for the AI era, focused on human and business impact. We discuss why the potential of AI can only be fulfilled by understanding a single bottleneck: human behavior. The conversation reveals why leaders must intervene now to prevent temporary adoption patterns from calcifying into permanent organizational norms (the QWERTY Effect), and how to move organizations past simply automating drudgery to achieving deep integration. We dig into why AI adoption is fundamentally a behavioral challenge, providing a diagnostic framework for leaders to identify stalled progress using the Motivation-Capability-Trust triad. Lis explains how to reframe AI deployment by leveraging Loss Aversion to bypass employee skepticism, and how to design workflows that improve human reasoning rather than replace it. The conversation provides clear guidance on intentional task offloading, the power of using AI to stress-test decisions, and why sanctioning employee experimentation is essential to discovering high-value use cases.

    LINKS
    - AI & Human Behaviour: Augment, Adopt, Align, Adapt
    - Thinking Fast and Slow in AI
    - How does LLM use affect decision-making?
    - Defaults, Decisions, and Dynamic Systems: Behavioral Science Meets AI with Lis Costa (High Signal)
    - The Behavioural Insights Team
    - Lis Costa on LinkedIn
    - High Signal podcast
    - Watch the podcast episode on YouTube
    - Delphina's Newsletter

    49 min
  4. 11/13/2025

    Episode 28: From Context Engineering to AI Agent Harnesses: The New Software Discipline

    Lance Martin of LangChain joins High Signal to outline a new playbook for engineering in the AI era, where the ground is constantly shifting under builders' feet. He explains how the exponential improvement of foundation models is forcing a complete rethink of how software is built, revealing why top products from Claude Code to Manus are in a constant state of re-architecture simply to keep up. We dig into why the old rules of ML engineering no longer apply, and how Rich Sutton's "bitter lesson" dictates that simple, adaptable systems are the only ones that will survive. The conversation provides a clear framework for leaders on the critical new disciplines of context engineering to manage cost and reliability, the architectural power of the "agent harness" to expand capabilities without adding complexity, and why the most effective evaluation of these new systems is shifting away from static benchmarks and towards a dynamic model of in-app user feedback.

    LINKS
    - Lance on LinkedIn
    - Context Engineering for Agents by Lance Martin
    - Learning the Bitter Lesson by Lance Martin
    - Context Engineering in Manus by Lance Martin
    - Context Rot: How Increasing Input Tokens Impacts LLM Performance by Chroma
    - Building effective agents by Erik Schluntz and Barry Zhang at Anthropic
    - Effective context engineering for AI agents by Anthropic
    - How we built our multi-agent research system by Anthropic
    - Measuring AI Ability to Complete Long Tasks by METR
    - Your AI Product Needs Evals by Hamel Husain
    - Introducing Roast: Structured AI workflows made easy by Shopify
    - Watch the podcast episode on YouTube
    - Delphina's Newsletter

    51 min

