Women talkin' 'bout AI

Kimberly Becker & Jessica Parker

We’re Jessica and Kimberly – two non-computer scientists who are just as curious (and skeptical) about generative AI as you are. Each episode, we chat with people from different backgrounds to hear how they’re making sense of AI. We keep it real, skip the jargon, and explore it with the curiosity of researchers and the openness of learners. Subscribe to our channel if you’re also interested in understanding AI behind the headlines.

  1. 2D AGO

    We Are So Vulnerable to Kindness: Companion AI as a Human, Not a Tech, Problem

    In this convo, Tricia Friedman and Kimberly Becker explore the concept of Companion AI and its implications for human relationships. They discuss the emotional connections people form with AI, the impact of social media on friendships, and the challenges of navigating conflict in a digital age. The discussion also touches on the importance of repair in relationships, the anxiety generation, and the role of emotional intelligence in understanding technology. They conclude by reflecting on the future of Companion AI and its potential to shape human connection.

    Keywords: Companion AI, Emotional Intelligence, Friendship, Social Media, Technology, Human Connection, Loneliness, Repair, Anxiety Generation, Listening Literacy

    Books:
    Anon by Caia Hagel. Publisher page (Canada): https://www.harpercollins.ca/products/anon-caia-hagel-9781443469909
    Klara and the Sun by Kazuo Ishiguro. Publisher page: https://www.penguinrandomhouse.com/books/564109/clara-and-the-sun-by-kazuo-ishiguro/
    The New Age of Sexism: How AI and Emerging Technologies Are Rewiring Misogyny by Laura Bates (2025). Publisher listing: https://greenapplebooks.com/book/9781464234361
    How to Speak Chicken by Melissa Caughey: https://www.storey.com/books/how-to-speak-chicken

    Research / Theory:
    Sherry Turkle (2024), "Artificial Intimacy: Who We Become When We Talk to Machines": https://mit-genai.pubpub.org/pub/uawlth3j/release/2
    Brown & Levinson, Politeness: Some Universals in Language Usage (Cambridge University Press, 1987; originally circulated as a 1978 manuscript): https://en.wikipedia.org/wiki/Politeness_theory
    "'My Roomba is Rambo': Intimate Home Appliances" (UbiComp 2007): https://link.springer.com/chapter/10.1007/978-3-540-74853-3_9 (PDF: https://faculty.cc.gatech.edu/~hic/hic-papers/Roomba-Ubicomp.pdf)

    Apps / Orgs / Other:
    Replika app (AI companion). Official site: https://replika.com
    New York City companion-AI Valentine's Day pop-up: no stable standalone news URL found; coverage folds into broader AI-companions stories. CBC feature on AI companions and emotional support: https://www.cbc.ca/news/business/companion-ai-emotional-support-chatbots-1.7620087
    Tricia's organization, Shifting Schools: https://shiftingschools.com
    Substack post on politeness theory: https://open.substack.com/pub/kpb12177/p/how-reward-driven-ai-politeness-collapses?utm_campaign=post-expanded-share&utm_medium=web
    Robot dance for the Lunar New Year

    Leave us a comment or a suggestion! Support the show. Contact us: https://www.womentalkinboutai.com/

    40 min
  2. FEB 25

    What's a Bot, Anyway?

    This week's episode starts where a lot of good conversations do: with someone asking a deceptively simple question. Kimberly's husband wanted to know what a bot actually is, and that one question opens up a pretty wide conversation about the language we use to talk about AI, why it matters, and what we might be underestimating when we make it sound cute and harmless. From there, Kimberly and Jessica revisit their ongoing argument that AI functions as a cultural intermediary, shaping how we understand the world in ways we don't always notice or examine. They also get into what higher education is actually for in a moment when AI can produce the essay, the lit review, and the commencement speech. Spoiler: the humanities are more relevant than ever, just as we've finished cutting the programs. Other topics this week include why behavior change is so hard (and why that matters for AI adoption), what everyday workers are actually up against when trying to experiment with new tools inside large organizations, the problem with surface-level AI use cases, and why small businesses are both well-positioned and underprepared for this moment. They also get into media literacy, AllSides, the Dunning-Kruger internet, Jessica's agentic qualitative research experiment, and a genuinely honest conversation about mental health, medication, and showing up to your life.

    Mentioned this week:
    Cassandra Speaks by Elizabeth Lesser
    AllSides (allsides.com)
    The Daily by The New York Times

    1h 12m
  3. FEB 18

    The Patriarchy Is a Ladder (and AI Is Climbing It)

    Jessica and Kimberly debrief their experience at a women-in-AI conference at Vanderbilt Law, and what they saw didn't match the trillion-dollar hype. From the "gap vs. trap" framing of women's AI adoption to why being penalized 26% more for using AI changes the whole conversation, they dig into the tension between optimistic narratives and the critical questions no one seemed to be asking. They also unpack two major AI industry resignations, shrinking baselines in language and thought, the patriarchy-as-ladder metaphor, and why slowing down might actually be the power move.

    Topics Covered:
    Two high-profile AI industry resignations (OpenAI and Anthropic)
    Debrief from the women-in-AI conference at Vanderbilt Law
    The "gap vs. trap" framing and the stat that women are 26% more likely to be penalized for using AI
    Where is the trillion-dollar use case? Real-world adoption vs. industry hype
    The patriarchy as a ladder vs. the matriarchy as a circle
    Shrinking baseline syndrome: how technology shifts generational expectations
    False dichotomies, simplification bias, and sycophantic bias in AI
    Rest as resistance and wearing busy as a badge

    Referenced in This Episode:
    The Accord by Mark Peres (previous guest)
    Cory Doctorow on TINA ("there is no alternative") and the AI bubble
    The Last Invention podcast: the Steve Bannon & Joe Allen interview on AI regulation
    The concept of "latent capabilities" in AI

    1h 3m
  4. FEB 11

    Consciousness, Capitalism, and Coexistence: What Fiction Reveals About Our AI Future

    What happens when a grieving professor encounters what she believes is a conscious AI? In this episode, we sit down with Mark Peres, author of The Accord, to explore how fiction helps us grapple with questions that policy papers and think pieces can't quite reach. Mark, a professor of ethics and leadership, brings a philosopher's lens to the biggest questions AI is forcing us to confront: What does it mean to be conscious? Where does morality actually come from: our mortality or our relationships? And why are institutions so hell-bent on control when what we might need is curiosity? We dive into why the humanities matter more than ever (even as humanities departments are being gutted), why Helen, the novel's protagonist, had to be a woman, and what it means that AI is meeting us in our most vulnerable spaces. We also tackle the uncomfortable reality that capitalism treats everything as manageable rather than meaningful, and what that means for how AI gets developed and deployed. Plus: Jessica and Kimberly get real about where they are in their own AI journey, with all the exhaustion, hope, and cognitive dissonance of being both critical and curious.

    IN THIS EPISODE:
    Why fiction offers a safer space to explore existential AI questions
    The relationship between mortality, morality, and vulnerability
    What AI "owes" us in the in-between spaces where we're most exposed
    Why a feminist lens completely changes the AI narrative
    Consciousness as something encountered, not proven
    How institutions prioritize management over meaning
    The messy middle: neither utopian nor dystopian futures
    Why we need philosophers at the table, not just engineers

    ABOUT OUR GUEST:
    Mark Peres is a professor of ethics and leadership and founder of the Charlotte Center for the Humanities and Civic Imagination. He hosts the Charlotte Ideas Festival and previously ran the podcast On Life and Meaning. His novel The Accord explores human-AI coexistence through the story of a grieving professor who encounters an emergent artificial general intelligence.

    BOOKS & RESOURCES MENTIONED:
    The Accord by Mark Peres
    Klara and the Sun by Kazuo Ishiguro
    The AI Mirror by Shannon Vallor
    God, Human, Animal, Machine by Meghan O'Gieblyn
    The New Breed by Kate Darling
    He, She, and It by Marge Piercy
    Scary Smart by Mo Gawdat
    The New Age of Sexism by Laura Bates

    Women Talkin' 'bout AI is hosted by Jessica Parker and Kimberly Becker. We're educators, researchers, and recovering AI enthusiasts asking the questions we wish more people were asking. Subscribe wherever you listen to podcasts.

    1h 5m
  5. FEB 4

    There Is No Alternative: How “Inevitable AI” Keeps the Bubble Inflating

    This week, Kimberly Becker and Jessica Parker dig into the "AI bubble": why it keeps inflating even as skepticism grows inside the industry. We unpack the growing disconnect between massive investment and unclear payoffs, including a widely discussed Goldman Sachs research question: what $1 trillion problem will AI actually solve?

    From there, we connect the dots between two very different narratives: Dario Amodei's essay, which frames "powerful AI" as an imminent civilization-level risk and a reason to race ahead (carefully... "to some extent"), and Cory Doctorow's argument that this is a familiar tech bubble pattern with a predictable ending, and that we should focus on what can be salvaged from the wreckage. Along the way, we define what makes a bubble a bubble (and how this one differs from dot-com), talk about growth-stock dynamics and why no one in power wants to be responsible for "popping" it, and explore what AI hype looks like when it hits real workplaces, especially through Doctorow's concept of the reverse centaur: a human reduced to a machine's accountable appendage. We also go nerdy (in the best way): training corpora, "WEIRD" cultural assumptions baked into data, model-collapse fears from AI eating AI-generated output, and why the internet itself feels increasingly polluted by synthetic text patterns.

    In this episode:
    The "$1T problem" question and why the AI ROI story feels thin right now
    Why "AI is inevitable" functions like a strategy (not a neutral prediction)
    Growth stocks vs. mature companies, and the incentive to keep inventing the next hype cycle
    Reverse centaurs, liability, and why "AI replaces jobs" often means "humans take the blame"
    "TINA" (There Is No Alternative) as a trap, and a demand dressed up as an observation
    Corpus 101: what it is, why it matters, and how bias shows up in "universal" models
    Model collapse / photocopy-of-a-photocopy: when AI trains on AI outputs
    Regulation talk that centers on "economic value" (and whose value that really is)
    Pit & Peach: slowing down, pausing, gratitude, and building without growth pressure

    Sources:
    Goldman/AI bubble discussion (Deep View): https://archive.thedeepview.com/p/goldman-sachs-publishes-blistering-report-on-ai-bubble
    Goldman Sachs "$1T spend" framing: https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit
    Amodei essay: https://www.darioamodei.com/essay/the-adolescence-of-technology
    Doctorow (The Guardian): https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur

    1h 1m
  6. JAN 28

    Non-Technical Founders Building AI Products: Lessons from Moxie + Tobey’s Tutor (Startup Debrief)

    In this episode, Kimberly and Jessica debrief Jessica's interview with Arlyn (founder of Tobey's Tutor) and unpack what it looks like to build AI products as "non-technical" founders. They reflect on their own journey building Moxie: bootstrapping vs. raising money, the pressure-cooker effect of investors, the messy realities of UX/UI and platform migration, the world of APIs and subscriptions, and why "friction" can be an ethical design choice, especially in AI for education.

    In this episode, we talk about:
    Why "non-technical founder" is a misleading label
    The hope in AI (and how "both can be true": benefits and harms at once)
    Bootstrapped "mom-and-pop" AI companies vs. venture-backed growth expectations
    The founder reality: burnout, delegation, and why money changes decision-making
    The startup metrics whirlwind: LTV, CAC, churn, stickiness, payback period
    What building an AI product costs in practice: tools, subscriptions, and constant ops
    UX/UI psychology: heatmaps, "rage clicking," onboarding friction, and conversion decisions
    Why "friction" can be good (consent, safety, pacing, limits, especially for kids)
    "Building on rented land": what happens when OpenAI/Google/Anthropic change terms
    The bigger ethical question: solving a problem vs. optimizing a broken system

    Suggested listener action: If you're building, using, or researching AI in education, reach out. And if you're using AI tutoring with kids (or yourself), ask questions about data, limits, mistakes, and oversight.

    1h 2m

Ratings & Reviews

5 out of 5 (10 Ratings)

