The Line

The PEXE Lab

Everyone has a line they won't let AI cross—a task, a relationship, a piece of work that has to stay human. But where exactly is that line, and why? Each episode, hosts Sean Legnini and Matthew Kruger-Ross from The PEXE Lab sit down with someone from a different field and ask: where's your line? Part interview, part investigation into what it means to stay human in an age of AI.

Episodes

  1. MAR 12

    Ep. 3 - Sarah Williamson: The 70/30 Line and the Limits of a Tool Without a Soul

    What do graduate students and Disney tourists have in common? According to Dr. Sarah Williamson, they both need a better roadmap than a robot can give them.

    In this episode of The Line, Sean and Matthew open with a conversation that felt inevitable: Anthropic's very public refusal to allow the Department of Defense to use Claude for autonomous weapons systems, and the wave of ChatGPT cancellations that followed. It's a moment that forces a question we keep returning to on this show: not just how we use AI, but what we're participating in when we do.

    Then we sit down with Dr. Sarah Williamson: student affairs educator, graduate instructor at West Chester University, and the force behind a Disney travel advisory Instagram account that has racked up 25,000–30,000 monthly views in under four months. Sarah doesn't just use AI; she's built what she calls a "smart house," a carefully layered stack of tools (Gemini, NotebookLM, TravelJoy Co-pilot, and more) that she's trained to carry her voice, her teaching style, and her values. She talks about using Gemini conversationally to cut 800 words from a publication piece in 27 minutes, not by having it rewrite the piece but by having it point her toward redundancies so she could make the cuts herself. She talks about using NotebookLM to cross-check the equity of her own grading across 20+ student papers. And she talks about how AI can give her a starting framework for planning trips for neurodivergent kids, and why the actual conversation with the family is irreplaceable.

    But the moment that stops the conversation is her phrase: AI doesn't have a soul. Sean and Matthew spend the debrief unpacking that. What does it mean to say something lacks a soul when people are forming genuine emotional connections with their AI tools? Matthew talks about the idea of putting a deceased mentor's writings into NotebookLM and what it would mean to "have a conversation" with them. Sean raises the AI Heidegger GPT they've experimented with, and what it means to be struck by a response that could have been something Heidegger would say. And then there's the question of time, picked up from a thread in a previous episode: the felt unfairness when you can tell someone spent 10 seconds generating something you're spending 5 minutes reading. Is AI detection fundamentally a phenomenology of reciprocal effort?

    Also in this episode: the line between ethical AI adoption and complicity, sustainability and water usage, why the travel industry is moving faster than higher education, and why a viral Instagram comment about Harry Potter's 11th birthday might be the best argument for keeping your own voice.

    Sarah's 70/30 rule is one of the cleaner frameworks we've heard on the show. The tools can do the heavy lifting. But the soul of the work (the connection, the presence, the meaning-making) is still ours.

    Dr. Sarah Williamson is a veteran student affairs educator and graduate instructor at West Chester University, bringing nearly 20 years of experience, including leadership as a senior student affairs officer, to the classroom. Seamlessly blending her expertise in mentorship with a passion for exploration, she is also the owner of a boutique travel advisory service and a rising social media content creator. Whether guiding students through their academic careers or clients across the globe, Sarah is dedicated to fostering growth through intentional, transformative experiences.

    You can find her at:
    The Travel Thesis on Substack
    Office Hours for the Soul on Substack
    Dr. Sarah W. on Instagram

    And as always, you can find us:
    Sean's Substack
    Matthew's Substack
    The PEXE Lab Substack
    The PEXE Lab Website

    57 min
  2. FEB 26

    Ep. 2 - Brandon Jacobs: Relationships, Identity, and His ADHD Superpowers

    What happens when we hand off the tasks that quietly shape who we are?

    In this episode of The Line, Sean and Matthew sit down with Brandon Jacobs, leadership certification program manager at the National Association of Episcopal Schools and former independent school recruiter. Brandon makes the case that education has always rested on three Rs (reading, writing, and arithmetic) but has consistently missed the fourth: relationships. For Brandon, AI's greatest value is that it clears space for that relational work. It handles the spreadsheets, the survey synthesis, and the interview note collation, so he can show up fully present for the people he serves.

    Brandon also shares what AI means for him as someone diagnosed with ADHD later in life. Rather than framing ADHD as a limitation, he describes it as a superpower, and AI as the tool that lets him lean into his strengths by neutralizing the tasks that used to drag him down. He talks about the zero-inbox policy his brain demands, the presentation prep that used to stall him, and how AI gets him "out the door" so he can bring his full self to the work that matters.

    But Brandon's not naive about the tradeoffs. His line comes into sharp focus when he talks about asking school leaders to write their educational philosophy statements and receiving something clearly AI-generated. "I asked for your humanity," he says, "and you gave me a robot." For Brandon, the line is about honoring the time and attention that genuine relationship demands. He introduces the phrase "human first, human last": a framework where AI lives in the middle, but the human bookends are non-negotiable.

    In the debrief, Sean and Matthew unpack the role of time as something more than a resource to be optimized. Drawing on Heidegger's observation that "all distances in time and space are shrinking," they explore how the slow, messy process of writing a teaching philosophy or reading a novel over a month is itself identity-forming. When AI collapses that time, there are real gains, but also real losses that rarely make the headlines.

    The episode closes with a question worth sitting with: if the tasks we hand off to AI are part of how we become who we are, what happens when we stop doing them?

    Follow along with our thinking:
    The PEXE Lab
    The PEXE Lab on Substack
    Sean's Substack
    Matthew's Substack

    48 min

Ratings & Reviews

5 out of 5 (7 Ratings)
