FUTUREPROOF.

Jeremy Goldman

Welcome to FUTUREPROOF. We're the podcast that delves into the future. From Augmented Reality to Artificial Intelligence to Smart Cities to the Internet of Things to Virtual Reality, we speak with some of the sharpest minds to help you better understand what the next few years may look like. Brought to you by author Jeremy Goldman (Going Social, Getting to Like). For booking inquiries: vie@futureproofshow.com

  1. MAR 26

    The Storytelling Revolution: Why Humanity's Earliest Innovation Still Matters (ft. author Kevin Ashton)

    In this episode of FUTUREPROOF., we sit down with Kevin Ashton—the technologist who coined the term Internet of Things and helped usher in the smartphone era—to talk about something even more foundational than AI: stories. In his new book, The Story of Stories, Kevin traces a million-year arc—from the first fires where early humans gathered, to the invention of writing and printing, to electricity, electronics, and the smartphone. His thesis is provocative: language did not create stories. Stories created language.

    Every major storytelling revolution has followed a simple pattern: it increases the number of people who can tell stories—and the number of people who can hear them. For the first time in history, anyone can tell stories to everyone. But there’s a catch. While AI cannot understand meaning, algorithms now determine which stories we see, amplifying bias, shaping belief, and influencing behavior at scale. The power of storytelling has never been more democratized—or more intermediated.

    We explore:
    - Why storytelling is innate, not cultural
    - The eight great revolutions of human communication
    - Why machines can generate content but not meaning
    - The risks of algorithmic amplification
    - The role of critical thinking in a post-scarcity information world
    - Whether the next storytelling revolution is technological—or cognitive

    This conversation isn’t about nostalgia. It’s about understanding the oldest human technology in a moment when the newest one is accelerating everything. If we think in stories—and we always will—the question becomes: who shapes the stories that shape us?

    24 min
  2. MAR 10

    GLP-1s, AI, and the New Health Economy (ft. Rajiv Leventhal, health analyst)

    Healthcare is colliding with technology faster than most people realize. In this episode of FUTUREPROOF., I sit down with analyst Rajiv Leventhal, who covers the intersection of healthcare, pharma, and tech, to unpack three forces reshaping the system at once: AI, GLP-1 weight loss drugs, and the mental health impact of digital life.

    We start with AI as a health tool. Nearly a quarter of ChatGPT’s global weekly users now use it for health-related prompts. That’s not a niche behavior. It’s a mainstream one. The question isn’t whether people will turn to AI for medical guidance. They already are. The real tension is trust and liability. General-purpose AI tools aren’t bound by HIPAA in the same way healthcare providers are. Yet they’re increasingly acting as digital concierges — answering late-night pediatric questions, explaining lab results, and helping people prepare for appointments in a system where access is strained.

    And that system is strained. Even in major cities, patients can wait months — sometimes a year — to see specialists. When access gaps widen, alternative tools step in. AI isn’t replacing doctors. It’s filling holes.

    We then turn to GLP-1 drugs and the weight-loss explosion. What began as a diabetes treatment became a cultural and commercial wave driven by social media, FDA approvals, and aggressive advertising. But beneath the surface is a regulatory gray market of compounded versions, patent battles, and telehealth platforms monetizing demand.

    Finally, we tackle social media’s impact on mental health. The evidence linking heavy use — especially among teens — to anxiety and depression is growing, even if causation remains complex. Is this a regulation problem? A parental problem? A public health issue? Or another example of technology moving faster than governance?

    This episode isn’t about hype. It’s about what happens when broken systems create openings — and tech companies move into the space. Because when trust erodes and access declines, people don’t wait. They improvise.

    27 min
  3. FEB 24

    Less DEI, more FAIRness (ft. author Lily Zheng)

    For years, organizations have poured millions into DEI training. And yet most employees still report discrimination. Promotion gaps persist. Trust remains uneven. So what’s going on?

    In this episode of FUTUREPROOF., I sit down with Lily Zheng — strategist and author of Fixing Fairness — to interrogate a hard truth: much of what we call DEI doesn’t work. Not because fairness is unpopular. Not because inclusion is misguided. But because we keep trying to fix people instead of fixing systems. Lily introduces the FAIR framework — Fairness, Access, Inclusion, and Representation — and argues that the real leverage isn’t in workshops. It’s in incentives, evaluation criteria, hiring processes, and executive accountability.

    We explore:
    - Why standalone DEI training can backfire
    - The “missing stair” metaphor — and how organizations normalize dysfunction
    - The Cobra Effect of poorly designed diversity incentives
    - Why representation is ultimately about trust, not optics
    - What meritocracy gets wrong about itself
    - And why rebranding DEI won’t solve structural problems

    At a moment when DEI faces political backlash and corporate retrenchment, Lily makes a counterintuitive claim: the future of workplace inclusion will be more rigorous, more measured, and more accountable — not less. This is a systems conversation. Not about slogans. Not about performative commitments. About incentives, power, and what actually moves outcomes. If you care about leadership, governance, and the second-order effects of institutional design, this episode will challenge you.

    32 min
  4. FEB 17

    Soft Skills Are the Hard Advantage in the AI Era (ft. Bushra Khan)

    For years, we treated emotional intelligence like a cultural add-on. Nice to have. Important, maybe. But not central to performance. That framing doesn’t survive the AI era.

    In this episode of FUTUREPROOF., I sit down with Dr. Bushra Khan, founder of Leading with BK, to examine what actually differentiates leaders as automation compresses the knowledge gap. When AI can draft, analyze, summarize, and even simulate difficult conversations, the advantage shifts. It moves from what you know to how you show up. Bushra has spent over 15 years helping leaders translate emotional intelligence from buzzword into operating system. We talk about why “soft skills” should be understood as strategic skills, how negativity bias quietly distorts leadership judgment, and why loneliness inside high-performing teams is less about remote work and more about emotional avoidance.

    We also explore some uncomfortable tensions:
    - If AI amplifies leaders, what exactly is it amplifying?
    - When does candor become bluntness — and erode trust instead of building it?
    - Why do leaders underestimate the emotional consequences of automation?
    - What does bravery look like when decisions are both rational and painful?

    Bushra argues that most organizations are still trying to fix people instead of fixing environments. They invest in workshops while ignoring incentives. They push productivity while neglecting psychological safety. They assume proximity equals connection. But as AI takes over more technical tasks, influence becomes the real differentiator. And influence is emotional before it is analytical.

    This conversation isn’t about positivity or platitudes. It’s about leadership under pressure — layoffs, automation, rapid skills shifts — and what it takes to signal trust and authority through noise. Because the future of work won’t just test our systems. It will test our emotional maturity.

    28 min
  5. FEB 10

    How People Endure When Systems Collapse (ft. Trevor Reed, author & Russia detainee)

    This episode of FUTUREPROOF. is different. My guest is Trevor Reed, a former U.S. Marine who was wrongfully detained and abused in a Russian gulag for nearly three years, freed in a high-profile prisoner exchange in 2022—and who then made a decision few could comprehend: he voluntarily went to Ukraine to fight against the same system that imprisoned him.

    In this conversation, Trevor reflects on what captivity does to the human mind, how survival reshapes your definition of justice, and why freedom—real freedom—can’t be taken for granted once you’ve lost it.

    We talk about:
    - What daily life inside a Russian penal colony is actually like—and how close he came to dying there
    - The mental discipline required to survive prolonged isolation, hunger, and uncertainty
    - The emotional toll of being turned into a geopolitical bargaining chip
    - Why revenge eventually gave way to a deeper definition of justice
    - The surreal contrast between everyday life and active war zones in Ukraine
    - Being critically wounded by a landmine—and what it means to survive twice
    - How his understanding of freedom, responsibility, and humanity has fundamentally changed

    This is not a conversation about politics. It’s a conversation about power, resilience, moral injury, and what it means to remain human when systems fail you. Trevor’s memoir, Retribution: A Former US Marine's Harrowing Journey from Wrongful Imprisonment in Russia to the Front Lines of the Ukrainian War, is not an easy read—but it is an important one. And this conversation is not comfortable—but it is necessary.

    25 min
  6. JAN 27

    Designing AI You Can Trust & the Future of Human-Centered Healthcare (ft. Peter Skillman, Philips' global head of design)

    Healthcare is entering its most consequential design moment in decades. As AI moves from the background into the core of clinical decision-making, diagnostics, and patient experience, the real question isn’t what AI can do—it’s whether people can trust it.

    This week on FUTUREPROOF., I’m joined by Peter Skillman, Global Head of Design at Philips, and one of the few leaders shaping what responsible, human-centered AI looks like in healthcare at scale. Peter has spent three decades designing products and systems at the intersection of hardware, software, and services—across Palm, Nokia, Microsoft, AWS, and now Philips. Today, he’s helping reimagine healthcare not as a hierarchy of authority, but as an experience built around patients, clinicians, and trust.

    We talk about:
    - Why AI in healthcare must be designed with people, not just for them
    - What happens when teenagers—future patients and clinicians—help design care systems
    - How healthcare design is shifting from “what looks impressive” to “what feels humane”
    - Why speed, clarity, and emotional context now matter as much as clinical accuracy
    - The long timelines of healthcare innovation—and why today’s design choices shape the next decade
    - What it really means to make AI visible, explainable, and trustworthy in life-and-death environments

    This conversation isn’t about futuristic demos or abstract ethics. It’s about how design decisions today will determine whether AI improves healthcare—or quietly erodes trust in it.

    26 min
5 out of 5, 42 Ratings
