Future Around & Find Out

Dan Blumberg

* Winner of the 2026 Webby Award for Best Technology Podcast * Future Around & Find Out helps builders think clearly about AI and emerging technologies, grapple with the implications, and decide what to build next. Independent technologist and former NPR journalist Dan Blumberg speaks with founders, makers, and you to celebrate breakthroughs, call BS on the hype, explore how things might go sideways — and how we can steer the future in the right direction. On Tuesdays, we interview the builders changing how we work, live, and play. On FAFO Fridays, futurist Kwaku Aning joins Dan for a playful recap of the week in tech, including the amazing, the scary, and the strange. You’ll also hear about innovations that too often get overshadowed by AI, including in deep tech, biotech, fintech, quantum computing, robotics, blockchain, and more. Across it all, you’ll hear sharp takes on what comes next and what builders need to know now. So let’s Future Around & Find Out together! https://www.FutureAround.com

  1. Robots Don't Have to Be Creepy. Meet the Dancer Reimagining Them. | Catie Cuan (Founder & CEO, ART Lab)

    1D AGO


    Catie Cuan's dad was in the hospital, surrounded by machines that were supposed to help him. Instead they made him feel alienated and afraid. Catie, a dancer-turned-roboticist, realized it's not enough for a machine to do its job — it has to be relatable, too. Today she's the founder and CEO of ART Lab, focused on what she calls the "interaction gap" between what a robot can do and how it makes us feel. Catie danced at the Metropolitan Opera Ballet and ran her own dance company before getting her PhD at Stanford and becoming an artist-in-residence at Google X, where she worked on the Everyday Robots moonshot — including teaching office robots that it's rude to cut between two people having a conversation. Now ART Lab is building a home robot that won't look anything like a robot, plus a new kind of AI model that conditions success on how the human in the room responds, not just whether the task got done. Listen for the case against humanoids, why the future of AI shouldn't live inside your phone, and a sneak peek at what our life with robots might look like.

    Chapters:
    (02:11) - “There will be billions of robots” – from dishwashers to elder care
    (04:45) - Why robots can be capable and still feel unsettling
    (08:00) - How robots could read your reactions and respond in real time
    (11:45) - What shape should robots take?
    (15:30) - The case against humanoids
    (19:00) - A nine-foot robot hand and the wild future robot design could take
    (23:15) - What it's like to dance with robots
    (28:30) - “The robot just died” – when a live failure changed the whole performance
    (32:45) - Friendship, loneliness, and home robots (and why builders need to be clear about the future they are creating)
    (37:11) - Why the home may become robotics’ biggest use case (and what ART Lab is building)
    (40:06) - Robot tutors, homework help, and why teachers still matter most
    (43:51) - “We have a tremendous amount of agency” – choosing the future we build now
    (46:16) - Why inequality and access worry Catie most (and who gets left behind)
    (48:56) - Why builders need to get outside their own bubble

    Support Future Around & Find Out
    Follow Dan on LinkedIn
    Get the free newsletter
    Become a paid subscriber and help future-proof FAFO!

    51 min
  2. 4D AGO

    The Goblin in the Machine | FAFO Friday

    I don't think we pause enough to marvel at how freakin' weird AI is. Here's an actual instruction from OpenAI to its latest model: "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant." Apparently goblins and other mythical creatures crept in when OpenAI released its "nerdy" personality a few models back, and they've proliferated ever since. It's a bizarre example of AI bias and, as it's relatively adorable, one that OpenAI was happy to write about. But what else is lurking? That's the jumping-off point for Kwaku Aning and me (Dan Blumberg) on this latest FAFO Friday edition, which plays off of Tuesday's interview with responsible AI expert Rumman Chowdhury. Along the way, we discuss AI personalities, TV commercials, and brand strategies; how AI thinks you should shoot a three-pointer; what gets lost when humans no longer write the code; and why we need (?) whimsical garbage cans. Plus, we tie a few stories together: why a reckoning is coming for the all-you-can-eat AI token buffet, as the "millennial lifestyle subsidy" for AI is ending; tokenmaxxing; the growing (and bipartisan!) data center backlash; and why Earth's (AI-powering) solar panels may soon run 24/7 thanks to light redirected from outer space.

    Links:
    Where the goblins came from (OpenAI blog post)
    My interview with responsible AI expert Dr. Rumman Chowdhury (Future Around & Find Out)
    GitHub Copilot is moving to usage-based billing (GitHub announcement)
    ‘The Most Bipartisan Issue Since Beer’: Opposition to Data Centers (NYTimes, gift link)
    Meta inks deal for solar power at night, beamed from space (TechCrunch)

    Support Future Around & Find Out
    Follow Dan on LinkedIn
    Get the free newsletter
    Become a paid subscriber and help future-proof FAFO!

    35 min
  3. AI doesn't do anything. We do. | Rumman Chowdhury on reclaiming agency and rejecting "moral outsourcing"

    APR 28


    Rumman Chowdhury wants to remind you that “AI isn't doing anything.” We do things. AI is not to blame for layoffs or denied medical coverage. People are. Eight years ago, Rumman coined the term “moral outsourcing” to describe this excuse, in which we blame tech for decisions that people make. Why do the semantics matter? Because, Rumman says: In world one, where “AI did X,” it's very scary. It's like, “oh my gosh, this thing that is bigger and smarter than me has come and descended and now it's gonna wipe out every job.” [But if we center on people, then we have agency and accountability and we can say] “no, you built a thing that was broken and flawed.” Rumman is the founder and CEO of Humane Intelligence, a public benefit corporation building evaluation infrastructure to make Gen AI systems safe, trustworthy, and compliant. She also served as the U.S. Science Envoy for Artificial Intelligence under the Biden administration, led AI ethics teams at Twitter and Accenture, and is a Responsible AI Fellow at Harvard.

    In this conversation:
    Why "moral outsourcing" is the sneakiest trick in tech — and how execs use AI as a shield for decisions humans made
    How to avoid — or at least mitigate — creating AI that’s biased
    Red teaming AI and creating bias bounties
    The "grandma hack" and other ways regular people accidentally jailbreak AI models
    How AI companies are quietly rewriting their terms of service to dodge liability when things go wrong
    Why the benchmarks you see when a new model drops are "basically spelling tests"
    AI psychosis, parasocial chatbots, and the cold emails Rumman gets once a month from people who think AI is alive
    What builders can do right now to take back agency — and why Rumman is more excited about agentic AI than anything that came before

    Chapters:
    (00:00) - "The thing I believe in the most is human agency"
    (02:14) - Why builders have more agency than they realize
    (04:00) - What is a bias bounty?
    (06:41) - What 2,000 hackers at DEF CON found
    (09:40) - The grandma hack
    (11:30) - Why guardrails fall apart
    (14:54) - Anthropic's new bug-finding model and the cat-and-mouse game
    (19:10) - Why most evals are "basically spelling tests"
    (21:30) - How to actually evaluate an AI agent
    (27:16) - "Moral outsourcing" and the AI layoff lie
    (29:41) - Inside Rumman's tenure as U.S. AI Science Envoy
    (33:06) - The legal loophole AI companies use to dodge liability
    (36:31) - AI psychosis and the cold emails Rumman gets
    (39:36) - Why Google's AI overview is quietly dangerous
    (45:31) - The problem with "AI literacy"
    (49:01) - Can we trust anything we see anymore?
    (51:11) - What builders can do right now to take back agency

    Support Future Around & Find Out
    Follow Dan on LinkedIn
    Get the free newsletter
    Become a paid subscriber and help future-proof FAFO!

    55 min
  4. APR 25

    We Won a Webby Award! Who Could've Predicted That? And Are All Predictions Bunk Anyway?

    We won the Webby Award for best tech podcast of 2026!!! I’m stunned! But Kwaku doesn’t like it when I say stuff like that, because as he reminds me in this “FAFO Friday” edition, “sometimes good things happen to good people.” OK, I'll take it. We won! And now I need to prepare a five-word speech to give. "FAFO Fridays Are My Favorite" comes to mind... But really, who could’ve predicted this? And also, are all predictions bunk? Kwaku just returned from a week at “Big TED” and he reports back that the talk everyone is talking about is “Beware the power of prediction” from philosopher and AI ethicist Carissa Véliz. What do the story of Oedipus and your insurance premiums have in common? They are both driven by self-fulfilling prophecies, according to Véliz, and she warns us, on stage and in her new book, that we should be wary of false prophets — and of relying on AI-driven predictions. Some predictions are useful, she says: weather forecasts are great because the weather doesn’t care what you predict. But others become self-fulfilling prophecies: if an AI says someone is uninsurable and you deny them insurance, then yes, they are uninsurable, but were they before you (or your algorithm) said so? It all speaks to a powerlessness many of us feel. Speaking of which… Meta just rolled out employee surveillance that tracks keystrokes and mouse clicks and takes periodic screenshots — to train AI on their employees' own jobs… Someone threw a Molotov cocktail at Sam Altman's house… The anti-data-center backlash is getting physical. And (sorry) here’s a prediction: if people don’t start feeling like they have some agency, we’re going to see more of this (especially in an election year). But as Kwaku puts it, we are the fuel. AI does nothing without us, so let’s reclaim our agency, because… The Future Needs a Word. That’s one of the five-word speech options we consider.
    I’m drawn to it, but not sold on it, so please share your own suggestions…

    ---
    FutureAround.com is the home for Future Around & Find Out. Go there to subscribe to the newsletter and to contribute to the show. And, as always, please tell a friend about the show. That's how podcasts grow.

    39 min
  5. "I Can't Believe It's Not Software!" Paul Ford on AI and the Asterisk*

    APR 21


    So what even is “real” software anyway? Someone builds an app over the weekend. It works. It looks good. And then the search begins — for the asterisk. Security? Design quality? Can it go to production? Paul Ford says we’re in a new era: "I can't believe it's not software!" Paul is the co-founder of Aboard, where he helps organizations build custom software quickly, using AI tools. He's also one of my favorite tech writers. You may know him from "What Is Code," the opus he wrote for Bloomberg Businessweek a decade ago, or from his writing in the New York Times, including his recent opinion piece, The A.I. Disruption We’ve Been Waiting for Has Arrived. Or perhaps you’re hip to Ftrain, where he’s been writing for longer than we’ve had the word “blog.” In this conversation, recorded at Aboard’s podcast studio (Paul and his cofounder also host a great show), we dig into the strange new world where roles are colliding, software* gets built quickly, and no one is quite sure what to teach their kids.
    We get into:
    What Paul calls "the great search for the asterisk" — the moment someone demos an app and everyone scrambles to find the catch
    How the power dynamic between engineers and everyone else is fundamentally shifting — and why that's both liberating and destabilizing
    Why vibe-coded prototypes are changing how agencies pitch and price their work — and why pricing is "very unresolved"
    The skills that actually matter now: client communication, systems thinking, and depth over velocity
    Why "the environmental costs [of AI] have become essentially a truthful folk narrative to talk about how difficult and scary and painful it is to see your life get continually smashed into bits"
    What he's teaching his kids (hint: it's not to code)

    Chapters:
    (01:40) - “We’re in a funny moment now” – catching up on the ten years since “What Is Code?”
    (05:30) - “You gotta stop fighting” – AI code is genuinely useful, caveats and all
    (08:44) - AI enables people who could never afford custom software to have it
    (09:50) - Why he knew he’d get yelled at for his recent piece in the NYTimes
    (13:00) - “AI washing” and job cuts
    (14:50) - Paul’s theory for why the market oscillates so wildly on AI news + are we going to vibe code our own DoorDash?
    (17:00) - What’s the hardest thing about building with AI right now?
    (19:36) - Hiring, the most in-demand skills, and “forward-deployed engineers”
    (27:50) - “Product is still hard” – in response to: “What is something that AI will never be great at?”
    (31:36) - “What is something that sounds like science fiction, but that will soon be real — and commonplace?”
    (32:46) - Why Paul is excited about world models (and thinks LLMs are topping out)
    (36:06) - Why environmental concerns have become a “truthful folk narrative about how difficult and scary” AI is
    (39:26) - There is no magic solution for climate (but one positive thing AI can do is help digest climate data)
    (41:26) - Why kids should learn systems thinking

    Support Future Around & Find Out
    Get the free newsletter
    Become a paid subscriber and help future-proof this thing!
    Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@futurearound.com

    45 min
  6. APR 16

    We're a Webby nominee for Best Tech Podcast! Please vote! And here are the FAFO highlights the Webbys loved so much

    Hey everyone... so, in case you haven't heard... this show, Future Around & Find Out, has been nominated for a Webby for best tech podcast! *** VOTE HERE: https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology *** I was kind of being chill about this. I am, admittedly, not my own best hype man, but then I got riled up when I heard the hosts of The Vergecast, one of the other nominees and last year's winner, complain that they weren't winning by enough votes and that they wanted to win by such a large margin that it -- quote -- hurts everyone's feelings. Well, those are my feelings Nilay Patel was talking about! Look, I like the Verge -- and I definitely didn't have them on my list of people I might feud with this year -- but f* those guys! Let's win this thing! So could you please vote? Today, April 16th, is the last day to do so, and we're currently just behind, in second place. The link to vote is in the show notes. You can also find it on the show's website at Future Around dot com. And what is it you're voting for? Well, if you've been listening then you already know what this show is all about, but I also thought for newbies, and even for longtime listeners, it might be fun to hear exactly what the Webby judges listened to when they voted for FAFO to be a best tech podcast nominee. They ask for ten minutes of audio, so I made a highlight reel — and here it is. *** VOTE HERE: https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology ***

    11 min
  7. We Need Inventors. And Inventors Need Us. Pablos Holman on Finding and Backing Zero to One Builders

    APR 14


    We live in a world where every crisis lands in your pocket the moment it happens. The result? We're more informed than ever — and somehow less capable of doing anything about it. Inventor and investor Pablos Holman has a diagnosis: we're spreading ourselves across every problem, which means we're solving none of them. His prescription is uncomfortable — pick one thing, go all in, and cut the noise. ***QUICK PLUG: Future Around & Find Out is nominated for a Webby for best tech podcast! Voting is open now for the People's Choice Award. Please vote before April 16th! https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology*** Pablos is the co-founder of Deep Future, where he hunts for inventors tackling world-scale problems: energy, water, food, waste, transportation. Not apps. Atoms. And thanks to advances in AI and software, these "impossible" problems are more solvable than ever — if the right people show up to back them. In this conversation, recorded at the fabulous PopTech conference, he makes the case that inventors are the most important creative class on earth — and the most invisible. They're undersupported, uncelebrated, and working alone in garages. Some of them are probably going to blow themselves up. Those are exactly the people he's looking for.
    We get into:
    Why doomscrolling is literally eroding your ability to make a difference
    The difference between craft (optimization) and creation (zero-to-one) — and why AI is great at one and struggling with the other
    Why you can name 100 musicians but fewer than two living inventors
    How solving energy unlocks clean water, sanitation, and climate — essentially for free
    Why software people are uniquely positioned to work on the hardest problems in the world right now

    Chapters:
    (01:15) - Why the world isn't as broken as your newsfeed makes it seem
    (03:00) - The sticky note exercise: how to pick the one problem worth your time
    (04:30) - Inventors are the most important creative class nobody talks about
    (07:00) - Living inventors you should actually know
    (09:00) - What AI is good at — and what it still can't do
    (12:30) - Why software people are the right ones to tackle deep tech problems
    (22:56) - Energy is the root problem — solve it and you solve a lot else
    (25:56) - Climate change needs a thousand solutions, not one big fix
    (28:26) - The fashion industry's dirty secret and what robots can do about it

    Links & Resources:
    Pablos Holman on LinkedIn
    Deep Future: VC firm, book, and podcast

    Support Future Around & Find Out
    FAFO is nominated for a Webby for best tech podcast! Vote now!
    Get the free newsletter
    And consider becoming a paid subscriber to help future-proof this thing!
    Sponsor the show? Are you looking to reach an audience of senior technologists and decision-makers? Email me: dan@futurearound.com

    ---
    Pablos's first appearance on the show covers his work at Blue Origin and Intellectual Ventures. Scroll in your podcast app to July 2025 to find that fun conversation. (You can listen before or after this one; it's not a prerequisite.)

    32 min

4.1 out of 5 (51 Ratings)

