Margin of Thought with Priten

Priten Soundar-Shah

Margin of Thought is a podcast about the questions we don’t always make time for but should. Hosted by Priten Soundar-Shah, the show features wide-ranging conversations with educators, civic leaders, technologists, academics, and students. Each season centers on a key tension in modern life that affects how we raise and educate our children. Learn more about Priten and his upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & K-12 at priten.org and ethicaledtech.org.

  1. Is Surveillance Culture Ruining Trust in Schools? – Jessica Maddry

    3D AGO

    In this episode, Priten and Jessica Maddry examine how surveillance culture and rigid policy enforcement are eroding trust and genuine learning in schools. From cell phone bans that criminalize normal behavior to reading programs that strip away the joy of stories, they explore how the gap between written policies and their ethical implementation has created environments of control rather than connection. The conversation spans zero-tolerance enforcement, AI detection tools, and the critical importance of human relationships in education.

    Key Takeaways:
    - Policies should serve ethics, not replace them. Following rules isn't the same as doing the right thing. When a student has their phone off in their pocket but gets suspended because it's not in their backpack, the punishment no longer serves the policy's original intent of reducing distraction.
    - Surveillance culture damages the learning environment. Constant monitoring and zero-tolerance enforcement create an atmosphere where students feel unsafe and disengaged. When students associate school with punishment rather than growth, absenteeism and mental health crises follow naturally.
    - Deep literacy is becoming a privilege again. Many students no longer read books from start to finish, instead consuming only passages for standardized tests. This loss of story-based learning strips away both the joy of reading and critical thinking skills.
    - AI detection is an unwinnable arms race. The cycle of AI detectors, humanizers, and humanizer-detectors demonstrates a fundamental misunderstanding of how to address academic integrity—tools cannot replace the trust and relationships needed for genuine learning.
    - Human connection is irreplaceable in education. Whether it's a professor scrapping class to process a difficult moment with students, or a teacher stepping aside to comfort a struggling child, the most impactful educational experiences come from authentic human relationships—something no technology can replicate.

    About Jessica Maddry:
    Jessica Maddry is an educator, strategist, and cofounder of BrightMinds AI, where she works with schools and districts to integrate AI ethically, intentionally, and with educators at the center. Her work focuses on helping systems move beyond hype toward human-centered, purpose-driven design, supporting policy, implementation, and systems change so technology strengthens learning, equity, and student well-being rather than undermining them.

    32 min
  2. What Does Representative Governance Mean for Our Future? - Nathán Goldberg

    5D AGO

    In this episode, Priten speaks with Nathán Goldberg, a philosopher-statistician whose career weaves together two unlikely threads: professional soccer and democratic activism. As Vice President of the US Soccer Federation and founder of both Harvard Forward and Bluebonnet Data, Nathán has spent years thinking about who gets to sit in the rooms where decisions are made—and why it matters.

    Key Takeaways:
    - Voting isn't enough—perspective is. The people impacted by decisions need to be in the rooms where those decisions get made.
    - Outsiders can win. Harvard Forward gathered 4,500 signatures on parchment paper, won board seats, and a decade of resistance to divestment collapsed within a year.
    - Institutions resist until they can't. Harvard ignored them, then attacked them. It didn't work.
    - The model scales. The same playbook worked at Yale and Penn State. One elected climate scientist shifted Penn State's investment policy.
    - Soccer has the same problem. 4 million youth players, zero recent youth players in governance.

    About Nathán Goldberg:
    Born and raised in México, Nathán Goldberg Crenier is a new(ish) American who is passionate about using the power of democracy and sports to make the world a better place. He has been recognized in the Forbes 30 Under 30 list for his work in progressive politics and nonprofit management, in the New York Times for his work as an electoral organizer and climate advocate, and in the Sports Business Journal New Voices Under 30 list for his work as a soccer executive. He is also a proud recipient of the 2025 Paul & Daisy Soros Fellowship for New Americans as he pursues his JD at Harvard Law School, having graduated with a joint degree in philosophy and statistics from Harvard College, where he played for and captained the D1 varsity men’s soccer team.

    49 min
  3. How Do We Teach the Journey When AI Offers the Destination? - Varun Gupta

    MAR 19

    In this episode, Priten speaks with Varun Gupta, an Accounting and Economics professor at Wharton County Junior College in the Houston area who has been teaching since 2007. Varun is refreshingly candid about his own complicated relationship with AI—he uses it extensively for lesson planning, assignment creation, and communication, but worries deeply about what happens when students skip the grind entirely.

    Key Takeaways:
    - The helicopter problem is real. Using AI to get answers without effort is like taking a helicopter to the top of Mount Everest. You get there, but you missed the point. The grind, the failure, the figuring-it-out—that's where the learning lives.
    - Cognitive offloading is already happening to teachers, too. Varun no longer does mental math. He GPS's the airport he's been to hundreds of times. AI is next. The concern isn't hypothetical—it's already underway for him personally.
    - Post-COVID is the bigger shift, not post-ChatGPT. Students who came through COVID developed habits of not showing up, not following through, and not asking questions. That behavioral shift is more visible than any change attributable to AI alone.
    - The stress is gone—and that's the tell. Before ChatGPT, students peppered him with term paper questions all semester. Now? Silence. They're not less anxious because they're more prepared. They're less anxious because they've already decided how they'll produce the paper.
    - There's inherent hypocrisy in the dynamic—and it's worth naming. Using AI to create assignments while discouraging students from using it to complete them isn't perfectly clean. Varun acknowledges it. The distinction is in where the journey matters: for the teacher creating the prompt, or for the student doing the thinking.
    - The human value is in the face-to-face. In asynchronous online courses, the line between professor and bot is thin. Where Varun sees his irreplaceable value is in the in-person relationship—lived experience, empathy, career conversations, and the daily modeling of what professional effort actually looks like.

    About Varun Gupta:
    Varun Gupta, a.k.a. The "Knotty" Economist, is a dynamic and engaging economics professor with 19 years of experience making complex concepts both accessible and exciting. He has spent his entire career at Wharton County Jr. College (i.e., the "other" Wharton). Known for his fun and energetic presentation style and ever-present elaborate necktie, he has delivered insightful talks at conferences, college professional development events, and civic groups—both live and virtual. A passionate educator, Varun specializes in applying fundamental economic principles to real-world decision-making and classroom engagement. Whether tackling macro, micro, or the economics of everyday life, he brings a unique mix of expertise and humor that keeps audiences learning and laughing. When he’s not using economic concepts to explain the world, he spends time catering to his 4-year-old goldendoodle, Cinnamon.

    29 min
  4. Can We Preserve Core Classroom Values While Integrating Ed Tech? — Brian Tash

    MAR 17

    In this episode, Priten speaks with Brian Tash, an elementary school teacher with nearly 30 years of experience who has witnessed the complete arc of education technology—from Scantrons to Google Classroom to AI. Brian shares how he balances technology integration with preserving fundamental skills like reading stamina and handwriting. The conversation covers his transparent approach to using AI for faster student feedback, why he's concerned about declining empathy and attention spans post-COVID, how he teaches prompt engineering to third and fourth graders, and his hope that educators will become more mindful about why they're using technology rather than just adopting everything new. He argues that personal connection, problem-solving, and collaboration are what students need most—and those can't come from a screen.

    Key Takeaways:
    - Follow the 80-20 rule with AI. AI gets you 80% of the way—the other 20% is you adding your own elements. This applies to teachers giving feedback and students creating work.
    - Transparency builds trust. When students understand why you're using AI for feedback, they embrace it. Brian's study found 90% of students were in favor once they understood the reasoning.
    - Technology can't replace human connection. Students need to learn how to talk to each other, problem-solve collaboratively, and develop empathy—skills that don't come from screens.
    - Stamina is the real crisis. Post-COVID students struggle to push through hard things. The growth mindset isn't there. Writing a paragraph makes their hands hurt.
    - Teach prompting, not just usage. Focus on prompt engineering—how to get what you want from AI. Experiment with students: change the words, add details, see what happens.
    - Standards-based grading may help. With clear standards, teachers can focus instruction, use AI to target specific skills, and have more time for the human elements once mastery is achieved.

    28 min
  5. Why Do We Teach Foreign Languages When AI is Multilingual? - Noelia Pozo

    MAR 12

    In this episode, Priten speaks with Noelia Pozo, a high school Spanish and French teacher with nearly two decades of experience who now heads the Foreign Language and Classical Department at her school. Noelia shares how she transformed her classroom by using AI openly alongside students rather than policing it. The conversation covers how she handles AI-generated work through relationship-building rather than detection tools, why she collects phones in a "Telephone Hotel," how exploring AI bias with students sparked deeper learning than lectures, and her frustration with colleagues who refuse to adapt while hypocritically using AI themselves. She argues that the question isn't whether to engage with these tools, but how to do so while preserving human connection, critical thinking, and genuine learning.

    Key Takeaways:
    - Show students language is already in their lives. From "in lieu of" to Chipotle menus—they're already speaking foreign languages without realizing it. Recognition breeds respect.
    - AI can't replace human connection. You can't build trust through a machine. Professional relationships require authentic communication, not a technological relay.
    - Create honesty, not surveillance. Use AI openly alongside students and ask only for transparency. When trust flows both ways, students voluntarily admit mistakes—and learn from them.
    - Teach students to verify AI output. AI isn't infallible. Once you put something in your paper, you own it—right or wrong.
    - Explore AI bias together. "Nobody looks like me" in AI images sparked deeper conversations about bias and better prompting than any lecture could.
    - Adapt or be replaced. Teachers won't lose jobs to AI—but they may lose them to teachers who use AI well.

    29 min
  6. Do Kids Need Phones? — Shon Holland

    MAR 11

    In this episode, Priten speaks with Shon Holland, a middle school science teacher at Sells Middle School in Dublin, Ohio. After a first career in hazardous waste management and environmental health and safety, Shon made the leap to education about 20 years ago. His experience with both seventh and eighth graders gives him frontline insight into how adolescents interact with technology. The conversation explores his balanced approach to tools like GoGuardian—using technology to monitor without creating surveillance culture—why he believes giving students responsibility actually lightens a teacher's load, and his blunt assessment that smartphones simply aren't healthy for middle schoolers.

    Key Takeaways:
    - Misuse is inevitable—guidance is the goal. Middle schoolers can misuse anything from rulers to AI. Instead of trying to eliminate misuse, focus on teaching students how to make tools work for them and guiding them when they stumble.
    - Relationships trump detection tools. Teachers who know their students can spot AI-generated work by recognizing when writing doesn't match a student's voice or level—no software required. Treat violations as learning moments, not punishments.
    - Give responsibility to gain freedom. When you trust students with responsibility and show them consequences aren't personal, they give you space to actually teach. The more ownership they have, the less you need to police.
    - Parents need to parent. The research on smartphones and adolescent brains is irrefutable. Kids don't need iPhones—they need dumb phones, landlines, and parents willing to set boundaries even when their children push back.
    - Know the time and place. Technology and AI are fantastic tools that can differentiate instruction, translate languages, and unlock learning. But sometimes you just need human brain power. The skill is knowing when to use tech and when to walk away.

    26 min
  7. How Can AI Support Writing Instruction? - Kim Cowperthwaite

    MAR 5

    In this episode, Priten speaks with Kim Cowperthwaite, an English Language Arts teacher at Freeport Middle School in Maine who has been teaching for over 20 years. Growing up in a tech-forward household in the 1970s and later working in the newspaper industry as it faced digital disruption, Kim brings a unique perspective on technological change. She was among the first teachers in the nation to work in Maine's pioneering one-to-one laptop program starting in 2004. The conversation explores her unconventional approach to AI in the classroom—treating it like "a book or a pencil"—why she believes building community and relationships matters more than policing technology use, and how she helps students recognize when AI has written their work without making it punitive.

    Key Takeaways:
    - Know your students better than any detector. Teachers who build relationships with their students can identify AI-generated work by recognizing changes in sentence length, structure, and voice—no detection tools required.
    - Make AI conversations transparent, not secretive. Rather than creating a surveillance culture, openly discuss how AI works, when it's appropriate, and how you can tell when it's been used—students respond better to honesty than to policing.
    - Technology should amplify human expression, not replace it. Start with handwritten journals and personal ideas first, then bring in technology as a tool to enhance what students have already created on their own.
    - Teaching self-control is lifelong. Help students recognize their own impulse patterns with technology—the habit of drifting to games during a thinking pause—because they'll need to manage this their whole lives.
    - Focus on the goal, then find the tool. Instead of teaching specific AI technologies that come and go, teach students to identify what they want to achieve first, then select appropriate tools—this approach works for both students and teachers in professional development.

    25 min
  8. Should Students Be Trusted With Phones During Exams? - Dini Arini

    MAR 3

    In this episode, Priten speaks with Dini Arini, a PhD candidate in language literacy and technology at Washington State University who has been teaching for over 15 years. Growing up in Indonesia without access to English courses that her classmates had, Dini experienced firsthand the anxiety of being left behind—an experience that now fuels her optimism about AI's potential to democratize education. The conversation explores her unconventional approach to classroom technology, including allowing students to use phones during exams, why she believes teachers who truly know their students don't need AI detectors, and how her research into AI ethics policy is uncovering the gap between institutional guidelines and classroom reality. Dini also shares what genuinely worries her: emerging research suggesting that over-reliance on AI may be physically changing our brains.

    Key Takeaways:
    - Know your students better than any detector. Teachers who truly understand their students' abilities and writing styles can identify AI-generated work without relying on detection tools—you become the filter.
    - Technology can bridge access gaps. For students without resources for tutoring or courses, AI tools can serve as supplementary learning support that was previously unavailable.
    - Trust can work as enforcement. Having students acknowledge an honor statement and knowing their baseline abilities can be as effective as surveillance—students often rise to the expectation of integrity.
    - Adapt assessments to what you're testing. Use technology-enabled tests when appropriate, but return to pen-and-paper or presentations when the skill being assessed requires it.
    - Stay creative ahead of AI. As AI improves, teachers must develop AI-resistant assignments and varied assessment methods rather than abandoning technology entirely.

    23 min

Ratings & Reviews

5 out of 5 (12 Ratings)
