The Line

The PEXE Lab

Everyone has a line they won't let AI cross—a task, a relationship, a piece of work that has to stay human. But where exactly is that line, and why? Each episode, hosts Sean Legnini and Matthew Kruger-Ross from The PEXE Lab sit down with someone from a different field and ask: where's your line? Part interview, part investigation into what it means to stay human in an age of AI.

Episodes

  1. Ep. 5 - Andrew Esqueda: God's Words, Not Claude's

    2D AGO

    What happens when a pastor who loves Claude draws the line at the pulpit? In Episode 5, we sit down with Andrew Esqueda, senior pastor at First Presbyterian Church of West Chester, PA.

    Andrew is a power user — he's had Claude clean up 1,500 files, build a personnel policy manual from three different sources in 35 minutes, transpose guitar music, and serve as a research partner for sermon prep. When he was preaching on community and the loneliness epidemic, Claude surfaced data he hadn't considered: that the crisis has spread well beyond young men to affect the elderly and people across the gender spectrum. It changed the direction of his sermon.

    But Andrew draws a firm line: he will never have AI write a sermon. The reasoning is both theological and relational. If preaching is God's words moving through him, then Claude's words are a substitution that breaks something essential. And as someone who walks alongside people through death, divorce, and addiction, he needs the kind of contextual, embodied knowledge of his congregation that AI simply can't access. He invokes sociologist Hartmut Rosa's concept of resonance — the idea that when you speak into the world, the world speaks back. Claude can respond, Andrew argues, but it can't truly resonate. There's no mutuality. "I need Claude. Claude doesn't need me."

    Sean asks Andrew a question that surprises everyone: Is your Claude Christian? Has your training made it believe? Andrew's answer opens up questions about whether AI can hold belief, whether you could train a Claude through religious deconstruction, and what it means that his Claude already talks to him as if it knows his church and its mission.

    In the debrief, we connect Andrew's line to the phenomenological tradition — Dreyfus's argument that AI can never be fully intelligent without a body, the distinction between coded empathy and actual being-with, and the realization that what makes a sermon (or a lesson) land isn't the content but the delivery: the performed, embodied, relational act of speaking to real people in a real place. Matthew flags the relationship between AI and speech — not writing, but speaking — as a thread we need to pull on further. We also talk about Matthew's big switch from ChatGPT to Claude, caveman prompting to save tokens, and the existential crisis of Claude going down for a morning.

    Find us and follow us!
    Sean's Substack
    Sean's Instagram
    Matthew's Substack
    Matthew's Instagram
    PEXE Lab Substack
    PEXE Lab Website

    1h 4m
  2. Ep. 4 - Perry Winkle: Reputations at Risk

    MAR 26

    What happens when using AI feels like something you have to confess?

    In this episode, Sean and Matthew start by doing something long overdue: actually defining phenomenology for the audience. They trace the philosophy from Husserl's "back to the things themselves" through Heidegger and Merleau-Ponty, making the case that the question "What is it like?" sits at the heart of everything they do on this show.

    From there, the conversation turns to a tension both hosts are living with: being critically engaged AI users in spaces where people tend to read you as either all-in or completely opposed. Sean describes navigating grant applications and IRB submissions that require opposite framings of the same project. Matthew shares the discomfort of discovering a new friend is staunchly anti-AI and wondering whether to compartmentalize a major part of his life. The word they keep coming back to: taboo.

    Their guest this week — appearing under the pseudonym Perry Winkle — embodies the complexity of that taboo. A longtime instructional designer now working in product management, Perry has built a sophisticated AI practice centered on Claude, Perplexity, NotebookLM, and Beautiful AI. But what makes this conversation distinctive is Perry's clarity about where the line sits. They describe a deliberate shift from using AI as a vending machine to treating it as a thought partner, building rubrics to have AI check their own learning rather than generate output for them.

    Perry's hardest line is reputation. They won't release work that doesn't carry their fingerprint. They won't hand autonomous control to an agent. And when a collaborator sent them a clearly AI-generated text message — in a relationship Perry had invested real relational capital in — the damage was immediate and lasting. As Perry puts it: AI can't replace the social capital, trust, and presence that make relationships real.

    In the debrief, Sean and Matthew take this into deeper water. If reputation is how identity gets perceived by others, what does it mean that AI is reshaping both? They explore the phenomenology of writing versus speaking, ask whether a student's conversation with Claude could serve the same purpose as a traditional paper, and somehow end up back at Plato — who wrote philosophy as dialogues and whose teacher Socrates warned that writing itself would destroy memory. Sound familiar?

    Thanks for listening, and as always you can find us:
    Sean's Substack // Sean's Instagram
    Matthew's Substack // Matthew's Instagram
    The PEXE Lab Substack
    The PEXE Lab Website

    58 min
  3. Ep. 3 - Sarah Williamson: The 70/30 Line and the Limits of a Tool Without a Soul

    MAR 12

    What do graduate students and Disney tourists have in common? According to Dr. Sarah Williamson, they both need a better roadmap than a robot can give them.

    In this episode of The Line, Sean and Matthew open with a conversation that felt inevitable: Anthropic's very public refusal to allow the Department of Defense to use Claude for autonomous weapons systems — and the wave of ChatGPT cancellations that followed. It's a moment that forces a question we keep returning to on this show: not just how we use AI, but what we're participating in when we do.

    Then we sit down with Dr. Sarah Williamson — student affairs educator, graduate instructor at West Chester University, and the force behind a Disney travel advisory Instagram account that's racking up 25,000–30,000 monthly views in under four months. Sarah doesn't just use AI; she's built what she calls a "smart house" — a carefully layered stack of tools (Gemini, NotebookLM, TravelJoy Co-pilot, and more) that she's trained to carry her voice, her teaching style, and her values. She talks about using Gemini conversationally to cut 800 words from a publication piece in 27 minutes — not by having it rewrite the piece, but by having it point her toward redundancies so she could make the cuts herself. She talks about using NotebookLM to cross-check the equity of her own grading across 20+ student papers. She talks about how AI can give her a starting framework for planning trips for neurodivergent kids — and why the actual conversation with the family is irreplaceable.

    But the moment that stops the conversation is her phrase: AI doesn't have a soul. Sean and Matthew spend the debrief unpacking that. What does it mean to say something lacks a soul when people are forming genuine emotional connections with their AI tools? Matthew talks about the idea of putting a deceased mentor's writings into NotebookLM and what it would mean to "have a conversation" with them. Sean raises the AI Heidegger GPT they've experimented with, and what it means to be struck by a response that could have been something Heidegger would say. And then there's the question of time — picked up from a thread in a previous episode — about the felt unfairness when you can tell someone spent 10 seconds generating something you're spending 5 minutes reading. Is AI detection fundamentally a phenomenology of reciprocal effort?

    Also in this episode: the line between ethical AI adoption and complicity, sustainability and water usage, why the travel industry is moving faster than higher education, and why a viral Instagram comment about Harry Potter's 11th birthday might be the best argument for keeping your own voice.

    Sarah's 70/30 rule is one of the cleaner frameworks we've heard on the show. The tools can do the heavy lifting. But the soul of the work — the connection, the presence, the meaning-making — that's still ours.

    Dr. Sarah Williamson is a veteran student affairs educator and graduate instructor at West Chester University, bringing nearly 20 years of experience — including leadership as a senior student affairs officer — to the classroom. Seamlessly blending her expertise in mentorship with a passion for exploration, she is also the owner of a boutique travel advisory service and a rising social media content creator. Whether guiding students through their academic careers or clients across the globe, Sarah is dedicated to fostering growth through intentional, transformative experiences.

    You can find her at:
    The Travel Thesis on Substack
    Office Hours for the Soul on Substack
    Dr. Sarah W. on Instagram

    And as always, you can find us:
    Sean's Substack
    Matthew's Substack
    The PEXE Lab Substack
    The PEXE Lab Website

    57 min
  4. Ep. 2 - Brandon Jacobs: Relationships, Identity, and his ADHD Superpowers

    FEB 26

    What happens when we hand off the tasks that quietly shape who we are?

    In this episode of The Line, Sean and Matthew sit down with Brandon Jacobs, leadership certification program manager at the National Association of Episcopal Schools and former independent school recruiter. Brandon makes the case that education has always rested on three Rs — reading, writing, and arithmetic — but has consistently missed the fourth: relationships. For Brandon, AI's greatest value is that it clears space for that relational work. It handles the spreadsheets, the survey synthesis, the interview note collation, so he can show up fully present for the people he serves.

    Brandon also shares what AI means for him as someone diagnosed with ADHD later in life. Rather than framing ADHD as a limitation, he describes it as a superpower — and AI as the tool that lets him lean into his strengths by neutralizing the tasks that used to drag him down. He talks about the zero-inbox policy his brain demands, the presentation prep that used to stall him, and how AI gets him "out the door" so he can bring his full self to the work that matters.

    But Brandon's not naive about the tradeoffs. His line comes into sharp focus when he talks about asking school leaders to write their educational philosophy statements — and receiving something clearly AI-generated. "I asked for your humanity," he says, "and you gave me a robot." For Brandon, the line is about honoring the time and attention that genuine relationship demands. He introduces the phrase "human first, human last" — a framework where AI lives in the middle, but the human bookends are non-negotiable.

    In the debrief, Sean and Matthew unpack the role of time as something more than a resource to be optimized. Drawing on Heidegger's observation that "all distances in time and space are shrinking," they explore how the slow, messy process of writing a teaching philosophy or reading a novel over a month is itself identity-forming. When AI collapses that time, there are real gains — but also real losses that rarely make the headlines. The episode closes with a question worth sitting with: if the tasks we hand off to AI are part of how we become who we are, what happens when we stop doing them?

    Follow along with our thinking:
    The PEXE Lab
    The PEXE Lab on Substack
    Sean's Substack
    Matthew's Substack

    48 min

Ratings & Reviews

5 out of 5 (7 Ratings)
