Womansplaining AI

Logan Currie

Most AI content is either breathless hype or doomsday scrolling. We're neither. Womansplaining AI is for the "non-technical" woman curious about AI but unsure where to start, the one vibe coding on weekends and worried about her job security... and everyone in between. We do the reading so you don't have to. Each episode, we break down one big question about AI—the research, the implications, what it actually means for your work and your life. You'll walk away with something to think about, something to talk about, and maybe something to try. No jargon. No gatekeeping. Just two women processing the future out loud. Expect honest talk, fierce ambivalence, and the occasional swear word. We don't tell you what to think—we give you the frameworks to form your own opinion.

Your hosts: Mara spent 20+ years on women's economic empowerment—from leading gender justice campaigns at Oxfam (reaching 440M people) to roles in finance and government. A Harvard Kennedy School Fellow, she founded First Prompt and asks "what does this change?" Logan spent 13 years in edtech across Asia before returning to the US to found a company building tools for job seekers (Careerspan) and complete her master's at Harvard, where she researched learning and AI adoption. An advanced-AI tinkerer, Logan asks "how do we use this?" We met as *cough* mature-student classmates in 2024 and haven't stopped talking since.

Got something to say? Leave us a voice message. Disagreement is encouraged.

Episodes

  1. 5 DAYS AGO

    On Feeling Smarter and Being Wrong

    We recorded this episode three hours before the Pentagon's 5:01 PM deadline for Anthropic to drop its two remaining safety red lines — no mass domestic surveillance, no fully autonomous weapons — or be designated a supply chain risk alongside Huawei. We break down the standoff, the Orwellian doublethink of calling a company's safety restrictions a national security threat, and what it means that the DOD wanted Anthropic's tools specifically because they're the best.

    Then: OpenAI is putting ads in ChatGPT. A former research scientist quit the same day and wrote a New York Times op-ed calling their chat logs "the most detailed record of private human thought ever assembled." We unpack what happens when a sycophantic AI meets an ad revenue engine — and why it's not just about behavior anymore. Facebook targeted you based on what you clicked. ChatGPT will target you based on what you think.

    Our main artifact: a Wharton study called "Thinking Fast, Slow, and Artificial." When AI is confidently wrong, people follow it 80% of the time — and their self-reported confidence goes up. We dig into cognitive surrender, algorithmic loafing, and why working with AI activates the same brain centers as gambling. The scariest part isn't that AI gets things wrong. It's that you feel smarter while it's happening.

    Also: Mara won't use AI to take out your appendix (she explains why with help from the board game Operation). Your therapist pauses mid-session to recommend Nesquik hot chocolate. We need a German word for the specific rage of being gaslit by your AI at 2 AM. And AI note-takers in meetings make women speak 9% more.

    Leave us a voicemail at womansplainingai.com — we want your voice in future episodes!

    1 hr 2 min
  2. FEB 18

    On Adolescent Technology and 20,000-Word Warnings

    The CEO of the company building one of the most powerful AIs on earth just wrote a 20,000-word warning about what's coming. Should we believe him? In this episode, we break down Dario Amodei's essay "The Adolescence of Technology"—section by section, with the gloves off. We cover what he gets right (the economic pain will be real and gendered), what he dances around (his company is accelerating the thing he's warning about), and why this reads less like a blog post and more like a historical artifact.

    But first: the news. Companies are citing AI for layoffs that AI can't actually do yet. We dig into the Oxford Economics report on AI-washing and the HBR survey showing these are almost entirely anticipatory layoffs—firing people for what AI might do, not what it does.

    Also in this episode:
    • "Slow until it's fast"—the breakdown of the employer-employee social contract from Reagan to now
    • Elizabeth Holmes and the Theranos parallel: when "fake it till you make it" meets actual human lives
    • AI companies as nation-states: constitutions, town halls, and statecraft
    • Dario's "country of geniuses in a data center" metaphor—and what it means for entry-level workers
    • The 80% wealth pledge: all Anthropic co-founders pledged to donate 80% of profits
    • "A national highway system with no speed limits"—Mara's best metaphor of the season
    • AI 2027: the predictions document Mara says to read with comfort food nearby
    • Claude Code tips: Mara's breakthrough moment and Logan's Mandarin nanny story

    1 hr 6 min
  3. FEB 17

    On Job Tsunamis and Invisible Pockets of Vulnerability

    Episode 2: The She-Session No One's Talking About

    The Davos headlines screamed "job tsunami"—but whose jobs, exactly? In this episode, we unpack the Brookings study that sliced the data everyone else missed: of workers in the most vulnerable quadrant—high automation risk AND lowest capacity to adapt—86% are women. Not truck drivers. Not coal miners. Medical secretaries. Insurance clerks. Receptionists. And nobody at Davos said a word about them.

    We also dig into Anthropic's new Claude Constitution—what it means to give an AI a moral center, why Logan's college professor's definition of "institution" (where expectations converge) suddenly feels prophetic, and whether a corporate constitution can actually build trust with women who've been burned before.

    Also in this episode:
    • The IMF chief's "labor market tsunami" vs. Jamie Dimon's truck driver boogeyman—and why the framing is gendered
    • OpenAI ads in ChatGPT vs. Anthropic's constitution: two very different visions for AI's future
    • The Grok of it all (briefly, because Mara refuses to touch it)
    • "Algorithmic loafing"—the research on why one correct AI answer makes you stop catching the wrong ones
    • The boy vs. girl AI experiment: ask any LLM to predict a child's life trajectory and watch the million-dollar wage gap appear
    • Entrepreneurs of necessity: what happens when women are locked out of the job market and told to "just reskill"
    • Universal Basic Benefits > Universal Basic Income—and why decoupling healthcare from employment changes everything
    • Logan's 10-minute exercise: benchmark yourself against the market (could you get your own job right now?)

    Your assignment: Start a Womansplaining pod. Find 2-3 women. One hour a week, protected time. Do a skills audit together—ask each other "what are my superpowers?" Then pull up the Anthropic constitution and decide what's missing. That's it. That's the on-ramp.

    Resources mentioned:
    • Brookings Institution: "Measuring US Workers' Capacity to Adapt to AI-Driven Job Displacement" (Jan 2025)
    • NBER paper on automation exposure (the one that forgot to mention women)
    • Burning Glass Institute research on augmentation vs. automation
    • Burning Glass data on college graduate underemployment
    • Hard Fork podcast on ChatGPT advertising
    • Logan's Substack on learning pods and community-based AI learning

    Got a reaction? Leave us a voice message. We want to hear from you.

    43 min
  4. FEB 16

    On Women, AI and Who Gets a Seat

    Why are women using AI at lower rates than men—and is that actually a problem? In our first episode, we dig into the data: Logan scraped 1,000+ comments from a viral TikTok about women resisting AI and ran sentiment analysis to find the patterns. The top reasons? Pride in independent thinking. Skepticism about accuracy. Gendered critique of tech bros. Fear of cognitive decline. And a deep, earned distrust: "You fooled me once with social media." We get into all of it—the valid reasons to be wary, the real risks of opting out, and why ambivalence might be the healthiest response to this moment.

    Also in this episode:
    • Anthropic's Claude Cowork launch (they built it in a week and a half using Claude itself)
    • The $100-200/month cognitive inequality gap—who gets to experiment at the frontier?
    • Mara's savings circle analogy: what women's financial inclusion groups taught her about building AI on-ramps
    • The "fooled me once" theory: why women who lived through social media's promises aren't buying the AI hype
    • Carol Gilligan's "In a Different Voice" and why women's way of asking questions gets pathologized
    • The protégé effect: why teaching someone else is the best way to learn

    We end with voice notes from women around the world on what "Womansplaining AI" means to them—from a nonprofit founder explaining RAG systems to a PhD economist in Ottawa to a friend in Australia talking about reproductive justice and accessibility. This is not a show that tells you AI is good or bad. It's a space to hold both—to be excited and freaked out at the same time—and to figure out what that means for your life, your work, and the people you care about.

    Resources mentioned:
    • Didoriot's TikTok on women and AI resistance
    • Ethan Mollick's "Co-Intelligence"
    • Carol Gilligan's "In a Different Voice"
    • Nicholas Michelson's "The Death of a Knowledge System" (Substack)
    • Mara's Stanford Social Innovation Review piece on ambivalence

    Got a reaction? Leave us a voice message. We want to hear from you.

    1 hr 12 min

