Women Talkin' 'bout AI

Kimberly Becker & Jessica Parker

We’re Jessica and Kimberly – two non-computer scientists who are just as curious (and skeptical) about generative AI as you are. Each episode, we chat with people from different backgrounds to hear how they’re making sense of AI. We keep it real, skip the jargon, and explore it with the curiosity of researchers and the openness of learners. Subscribe to our channel if you’re also interested in understanding AI behind the headlines.

  1. 2D AGO

    The Patriarchy Is a Ladder (and AI Is Climbing It)

    Jessica and Kimberly debrief their experience at a women-in-AI conference at Vanderbilt Law, and what they saw didn't match the trillion-dollar hype. From the "gap vs. trap" framing of women's AI adoption to why being penalized 26% more for using AI changes the whole conversation, they dig into the tension between optimistic narratives and the critical questions no one seemed to be asking. They also unpack two major AI industry resignations, shrinking baselines in language and thought, the patriarchy-as-ladder metaphor, and why slowing down might actually be the power move.

    Topics Covered:
    - Two high-profile AI industry resignations (OpenAI and Anthropic)
    - Debrief from the women-in-AI conference at Vanderbilt Law
    - The "gap vs. trap" framing and the stat that women are 26% more likely to be penalized for using AI
    - Where is the trillion-dollar use case? Real-world adoption vs. industry hype
    - The patriarchy as a ladder vs. the matriarchy as a circle
    - Shrinking baseline syndrome: how technology shifts generational expectations
    - False dichotomies, simplification bias, and sycophantic bias in AI
    - Rest as resistance and wearing busy as a badge

    Referenced in This Episode:
    - The Accord by Mark Peres (previous guest)
    - Cory Doctorow on TINA ("there is no alternative") and the AI bubble
    - The Last Invention podcast: Steve Bannon & Joe Allen interview on AI regulation
    - The concept of "latent capabilities" in AI

    Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

    1h 3m
  2. FEB 11

    Consciousness, Capitalism, and Coexistence: What Fiction Reveals About Our AI Future

    What happens when a grieving professor encounters what she believes is a conscious AI? In this episode, we sit down with Mark Peres, author of The Accord, to explore how fiction helps us grapple with questions that policy papers and think pieces can't quite reach. Mark, a professor of ethics and leadership, brings a philosopher's lens to the biggest questions AI is forcing us to confront: What does it mean to be conscious? Where does morality actually come from: our mortality or our relationships? And why are institutions so hell-bent on control when what we might need is curiosity? We dive into why the humanities matter more than ever (even as humanities departments are being gutted), why Helen, the novel's protagonist, had to be a woman, and what it means that AI is meeting us in our most vulnerable spaces. We also tackle the uncomfortable reality that capitalism treats everything as manageable rather than meaningful, and what that means for how AI gets developed and deployed. Plus, Jessica and Kimberly get real about where they are in their own AI journey: the exhaustion, the hope, the cognitive dissonance of being both critical and curious.

    IN THIS EPISODE:
    - Why fiction offers a safer space to explore existential AI questions
    - The relationship between mortality, morality, and vulnerability
    - What AI "owes" us in the in-between spaces where we're most exposed
    - Why a feminist lens completely changes the AI narrative
    - Consciousness as something encountered, not proven
    - How institutions prioritize management over meaning
    - The messy middle: neither utopian nor dystopian futures
    - Why we need philosophers at the table, not just engineers

    ABOUT OUR GUEST:
    Mark Peres is a professor of ethics and leadership and founder of the Charlotte Center for the Humanities and Civic Imagination. He hosts the Charlotte Ideas Festival and previously ran the podcast On Life and Meaning. His novel The Accord explores human-AI coexistence through the story of a grieving professor who encounters an emergent artificial general intelligence.

    BOOKS & RESOURCES MENTIONED:
    - The Accord by Mark Peres
    - Klara and the Sun by Kazuo Ishiguro
    - The AI Mirror by Shannon Vallor
    - God, Human, Animal, Machine by Meghan O'Gieblyn
    - The New Breed by Kate Darling
    - He, She, and It by Marge Piercy
    - Scary Smart by Mo Gawdat
    - A New Age of Sexism by Laura Bates

    Women Talkin' 'bout AI is hosted by Jessica Parker and Kimberly Becker. We're educators, researchers, and recovering AI enthusiasts asking the questions we wish more people were asking. Subscribe wherever you listen to podcasts.

    Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

    1h 5m
  3. FEB 4

    There Is No Alternative: How “Inevitable AI” Keeps the Bubble Inflating

    This week, Kimberly Becker and Jessica Parker dig into the "AI bubble": why it keeps inflating even as skepticism grows inside the industry. We unpack the growing disconnect between massive investment and unclear payoffs, including a widely discussed Goldman Sachs research question: what $1 trillion problem will AI actually solve?

    From there, we connect the dots between two very different narratives:
    - Dario Amodei's essay framing "powerful AI" as an imminent civilization-level risk, and a reason to race ahead (carefully… "to some extent").
    - Cory Doctorow's argument that this is a familiar tech bubble pattern with a predictable ending, and that we should focus on what can be salvaged from the wreckage.

    Along the way, we define what makes a bubble a bubble (and how this one differs from dot-com), talk about growth-stock dynamics and why no one in power wants to be responsible for "popping" it, and explore what AI hype looks like when it hits real workplaces, especially through Doctorow's concept of the reverse centaur: a human reduced to a machine's accountable appendage. We also go nerdy (in the best way): training corpora, "WEIRD" cultural assumptions baked into data, model-collapse fears from AI eating AI-generated output, and why the internet itself feels increasingly polluted by synthetic text patterns.

    In this episode:
    - The "$1T problem" question and why the AI ROI story feels thin right now
    - Why "AI is inevitable" functions like a strategy (not a neutral prediction)
    - Growth stocks vs. mature companies, and the incentive to keep inventing the next hype cycle
    - Reverse centaurs, liability, and why "AI replaces jobs" often means "humans take the blame"
    - "TINA" (There Is No Alternative) as a trap, and a demand dressed up as an observation
    - Corpus 101: what it is, why it matters, and how bias shows up in "universal" models
    - Model collapse / photocopy-of-a-photocopy: when AI trains on AI outputs
    - Regulation talk that centers on "economic value" (and whose value that really is)
    - Pit & Peach: slowing down, pausing, gratitude, and building without growth pressure

    Sources:
    - Goldman/AI bubble discussion (Deep View): https://archive.thedeepview.com/p/goldman-sachs-publishes-blistering-report-on-ai-bubble
    - Goldman Sachs "$1T spend" framing: https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit
    - Amodei essay: https://www.darioamodei.com/essay/the-adolescence-of-technology
    - Doctorow (The Guardian): https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur

    Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

    1h 1m
  4. JAN 28

    Non-Technical Founders Building AI Products: Lessons from Moxie + Tobey’s Tutor (Startup Debrief)

    In this episode, Kimberly and Jessica debrief Jessica's interview with Arlyn (founder of Tobey's Tutor) and unpack what it looks like to build AI products as "non-technical" founders. They reflect on their own journey building Moxie: bootstrapping vs. raising money, the pressure-cooker effect of investors, the messy realities of UX/UI and platform migration, the world of APIs and subscriptions, and why "friction" can be an ethical design choice, especially in AI for education.

    In this episode, we talk about:
    - Why "non-technical founder" is a misleading label
    - The hope in AI (and how "both can be true": benefits + harms at once)
    - Bootstrapped "mom-and-pop" AI companies vs. venture-backed growth expectations
    - The founder reality: burnout, delegation, and why money changes decision-making
    - The startup metrics whirlwind: LTV, CAC, churn, stickiness, payback period
    - What building an AI product costs in practice: tools, subscriptions, and constant ops
    - UX/UI psychology: heatmaps, "rage clicking," onboarding friction, and conversion decisions
    - Why "friction" can be good (consent, safety, pacing, limits, especially for kids)
    - "Building on rented land": what happens when OpenAI/Google/Anthropic change terms
    - The bigger ethical question: solving a problem vs. optimizing a broken system

    Suggested listener action: If you're building, using, or researching AI in education, reach out. And if you're using AI tutoring with kids (or yourself), ask questions about data, limits, mistakes, and oversight.

    Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

    1h 2m

Ratings & Reviews

5 out of 5 (9 Ratings)
