Margin of Thought with Priten

Priten Soundar-Shah

Margin of Thought is a podcast about the questions we don’t always make time for but should. Hosted by Priten Soundar-Shah, the show features wide-ranging conversations with educators, civic leaders, technologists, academics, and students. Each season centers on a key tension in modern life that affects how we raise and educate our children. Learn more about Priten and his upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & K-12 at priten.org and ethicaledtech.org.

  1. Can You Still Teach Critical Thinking? - Paul Blaschko

    3D AGO

    Can You Still Teach Critical Thinking? - Paul Blaschko

    In this episode, Priten speaks with Paul Blaschko, an assistant teaching professor of philosophy at the University of Notre Dame. Paul's work sits at the intersection of liberal education, critical thinking instruction, and course design. The central question driving their conversation: in an era of AI that can generate plausible-sounding arguments and explanations, can we still teach students to think critically—or must we fundamentally reimagine what critical thinking means?

    Key Takeaways:

    - EdTech should solve existing problems, not create new ones. Paul approaches technology as a tool only when he's already facing a pedagogical challenge. This shifts the question from "what can this tool do?" to "what does my classroom need?"
    - YouTube explainers preceded ChatGPT in reshaping how students research and learn. Long before AI, students were outsourcing understanding to video tutorials rather than wrestling with dense texts, revealing a deeper shift in how students approach knowledge.
    - Critical thinking instruction requires direct practice with real arguments, not shortcuts around difficulty. There's no substitute for students actually constructing and defending their own positions through dialogue and written work, even when AI can do it faster.
    - Scaling critical thinking instruction demands new infrastructure, not just new pedagogy. Paul and his team are testing whether platforms like Think Arguments can help instructors manage the feedback and iteration needed to teach reasoning at scale across institutions.
    - AI may not replace the professor's role so much as expand it into explicit curation and judgment. In a world where explanations are abundant, the teacher's value shifts toward deciding which frameworks matter and helping students evaluate competing arguments.

    Paul Blaschko is an assistant teaching professor at the University of Notre Dame. He teaches God and the Good Life, a course dedicated to asking the big questions about meaning, morality, and faith. He also serves as the Director of the Sheedy Family Program in Economy, Enterprise, and Society, a program devoted to exploring how the humanities can help us find meaning in work. With Meghan Sullivan, he has co-authored The Good Life Method (Penguin Press, 2022), a book about how philosophy can help us live better lives. He is currently working on a book on the philosophy of work (under contract with Princeton University Press), and is the co-founder of a Notre Dame-based tech start-up that aims to solve problems with dialogue on the internet.

    51 min
  2. What Is Age-Appropriate AI in Education? - Megan Barnes

    4D AGO

    What Is Age-Appropriate AI in Education? - Megan Barnes

    In this episode, Priten speaks with Megan Barnes, a PhD student in learning technologies at the University of North Texas and a K-12 librarian with 14 years of experience, about what age-appropriate AI in education actually means. Megan holds dual roles as library director and director of educational technology for early childhood through fourth grade in Dallas, and her research draws on cognitive and affective neuroscience to evaluate how emerging tools interact with child development. The conversation moves through the real-versus-synthetic distinction that young children struggle with, the attention economy driving AI product design, information literacy as a foundation for AI literacy, and why curiosity may be the most important thing educators need to protect.

    Key Takeaways:

    - Before children can use chatbots, they need a solid concept of real versus not real. Most kindergartners interact with AI through voice and animated characters, adding layers of anthropomorphization that make it nearly impossible for them to distinguish a computer from a person. Megan argues that chatbot-based AI is not developmentally appropriate at this age, and any exposure should be adult-controlled and side-by-side, consistent with American Academy of Pediatrics guidance on co-viewing media.
    - The attention economy is becoming a relational economy—and children are the target. The same design logic that removed page numbers from Google search results is now being applied to conversational AI. If a child builds five years of chat history with a platform before adulthood, that relationship becomes a powerful lock-in mechanism. Megan also raises the concern that chat histories are now being used to drive advertising, meaning the tools students use for learning are simultaneously selling to them.
    - AI literacy in elementary school means information literacy, not prompt engineering. Rather than teaching young students how to use AI tools directly, Megan focuses on helping them understand who generates information, who validates it, and where AI is already present in their daily lives. During morning announcements, she points out the background remover tool and tells students, "This is AI right here." The goal is building foundational skills for evaluating any new technology, not training on a specific product.
    - Every generation of creative technology triggers the same panic—and the pattern holds. Megan draws on her background as a violinist and recording arts student. When Apple's GarageBand launched during her final semester, her synthesizer professor declared it the downfall of music. Instead, it democratized creativity. More people creating doesn't mean everything produced is good, but the tool itself is not the threat. AI follows the same arc.
    - Curiosity doesn't need to be taught—it needs to be protected. Young children arrive with natural wonder intact. Megan distinguishes between formal classroom learning and the informal learning space of the library, where autonomy and exploration still drive engagement. The job of early education is not to instill curiosity but to give children frameworks for approaching new things with wonder while still thinking critically, so that instinct survives into adulthood.

    Megan E. Barnes is a librarian with over 14 years' experience, as well as a Ph.D. student in Learning Technologies at the University of North Texas. Her research focuses on ethical considerations in educational technology adoption and curriculum design. She is currently a research assistant developing curriculum for edge AI and is an ed-tech leader and library director at an independent school. She believes that librarians are information professionals uniquely suited to exploring the intersection of information, technology, and pedagogy.

    44 min
  3. Is AI Literacy the New Professional Credential? - Anna Zendell

    APR 9

    Is AI Literacy the New Professional Credential? - Anna Zendell

    In this episode, Priten speaks with Anna Zendell, a social worker turned educator who oversees healthcare management, human services, and wellness programs at Bay Path University, about what it takes to rebuild a curriculum around AI when the stakes are patient outcomes. Zendell is currently piloting an AI-enhanced program from the ground up, designing courses where a closed AI system mentors students through interactive activities while faculty retain grading authority and instructional presence. The conversation covers why traditional learning outcomes don't translate cleanly into AI-driven instruction, how adult learners in healthcare face unique pressure to acquire AI literacy for careers that already demand it, and the trust gaps between students, faculty, and administrators that complicate adoption.

    Key Takeaways:

    - Curriculum doesn't absorb AI -- it has to be rebuilt for it. Zendell found that standard learning outcomes written with Bloom's Taxonomy are too broad for an AI system to use as mentoring scaffolds. Her team breaks each outcome into granular component steps, essentially teaching the AI how to guide a student the way an experienced instructor would.
    - AI is the first classroom technology to split faculty, students, and administration into opposing camps. Some faculty add zero-tolerance rubric rows while others experiment eagerly. Students range from uneasy to already reliant. Zendell describes a three-way perception gap she hasn't seen with any previous technology, including the transition to online learning.
    - Healthcare employers aren't waiting for higher ed to figure this out. Zendell regularly scans job postings for healthcare leadership roles and finds AI literacy and AI tool proficiency appearing with increasing frequency, particularly in informatics, clinical data analytics, and healthcare finance. Her students are asking for these skills and feeling the urgency themselves.
    - A student tester changed the entire design process. Zendell recruited an informatics student with an interest in healthcare AI to take each module as a learner before it goes live. That feedback loop -- where the student flags where prompts mislead or where the AI drifts into unproductive territory -- became central to how the team iterates on course design.
    - The real danger isn't AI itself -- it's losing the habit of questioning it. Zendell's deepest concern is dependency: that convenience erodes the capacity to critically evaluate AI output. In healthcare especially, where students might default to ChatGPT instead of dedicated clinical interfaces, the gap between accessible and appropriate matters.

    Anna Zendell is the program director for the MS in Healthcare Administration program. For over a decade, she has directed degree programs in healthcare administration, health sciences, and public administration. She teaches regularly at the graduate and undergraduate levels. A major emphasis is on ensuring equitable and accessible higher education for students of all abilities by leveraging the power of online learning and the unique attributes that adult learners bring to their learning. Prior to her academic administration and teaching work, Anna oversaw operations and evaluations for grant-funded research projects focusing on issues such as walkable communities, community health education, and dementia interventions. She developed enduring interdisciplinary partnerships with organizations, local governments, and community members. She provided professional development and continuing education for healthcare professionals. Key focus areas in Anna’s work include fostering meaningful inclusion in workplaces and communities and addressing health disparities, particularly around chronic illness and health promotion. Anna earned her doctorate and master’s degrees in social work at the University at Albany with a focus on management and community systems.

    28 min
  4. What's the Line Between Research Integrity and Using AI as a Tool? - Kari Weaver

    APR 7

    What's the Line Between Research Integrity and Using AI as a Tool? - Kari Weaver

    In this episode, Priten speaks with Kari Weaver, a librarian educator and program manager for the Artificial Intelligence and Machine Learning Initiative at the Ontario Council of University Libraries (OCUL), about why existing tools like citation and methodology sections can't capture how AI is actually being used in research and learning -- and what a structured disclosure standard might look like instead. Weaver, who also teaches graduate students at the University of Toronto and created the AID Framework for AI disclosure, walks through the practical and philosophical challenges of building trust infrastructure for an ecosystem that doesn't have bright lines yet. The conversation covers disciplinary divides in how AI use is understood, the global effort to establish a disclosure standard, and why the authorship question remains genuinely unresolved.

    Key Takeaways:

    - Citation can't bridge the gap between AI-generated ideas and their sources. Traditional citation connects ideas to a discrete, traceable origin. AI severs that connection by synthesizing across sources in ways that can't be pinpointed. Weaver notes this is structurally similar to what Western scholarship has long done to traditional and lived knowledge -- and now researchers are experiencing that same disconnection applied to their own work.
    - A global AI disclosure standard is actively being built. Weaver is co-leading a large-scale effort with the European Network of Research Integrity Offices, the International Science Council, and the Committee on Publication Ethics to develop a consistent disclosure framework through the World Conferences on Research Integrity. The goal is to stop researchers from having to tailor disclosures to each journal's idiosyncratic requirements.
    - AI use in research often falls outside methodology entirely. A researcher translating articles from an unfamiliar language using AI is a real and beneficial use case, but it doesn't fit neatly into a methods section. These peripheral uses still shape how researchers interact with and think about their material, which is exactly why disclosure needs to be broader than methodological reporting.
    - Separating the disclosure from the assignment makes students more likely to do it. At the undergraduate level, voluntary disclosure is hard to get. Weaver recommends having students submit a disclosure rubric alongside their assignment in a separate dropbox. This treats disclosure as a professional skill worth practicing on its own, and it gives instructors a reference point if questions arise about how an assignment was produced.
    - Authorship will likely settle at the disciplinary level, not the universal one. Weaver is candid that she doesn't have an answer to the authorship question. In qualitative research, she sees coding as irreplaceable human work. In STEM fields, AI-assisted analysis may be more readily accepted. She expects discourse communities will develop their own standards -- but that shouldn't delay building consistent disclosure practices across all of them.

    Kari D. Weaver (she/her) holds a B.A. from Indiana University, a M.L.I.S. from the University of Rhode Island, and an Ed.D. in Curriculum and Instruction from the University of South Carolina, where her dissertation examined the impact of professional development interventions on academic librarian teaching self-efficacy. She is the Program Manager, Artificial Intelligence and Machine Learning with the Ontario Council of University Libraries, on secondment from her permanent role as the Learning, Teaching, and Instructional Design Librarian at the University of Waterloo. Additionally, Dr. Weaver is a continuing sessional faculty member in the Department of Leadership, Higher, and Adult Education at the Ontario Institute for Studies in Education (OISE) at the University of Toronto. Her wide-ranging research background includes the study of accessibility in online learning, information literacy, academic integrity, and misinformation. She is widely recognized as an expert in AI citation, attribution, and disclosure practices for her development of the Artificial Intelligence Disclosure (AID) Framework and is currently the co-lead of the 2026 World Conferences on Research Integrity Focus Track: Toward a Global Reporting Standard for AI Disclosure in Research.

    38 min
  5. What Does Medicine Look Like When AI in the Room? - Jack Kincaid

    APR 2

    What Does Medicine Look Like When AI Is in the Room? - Jack Kincaid

    In this episode, Priten speaks with Jack Kincaid, a third-year medical student at Harvard Medical School, about navigating clinical training in an era of powerful AI tools. Jack shares his perspective on Open Evidence (a medical LLM), Harvard's AI Sandbox, and the tension between leveraging new technology and developing as a physician.

    Key Takeaways:

    - AI tools can accelerate diagnostic reasoning—but training still requires struggle. Platforms like Open Evidence can reliably synthesize evidence and suggest diagnoses, but reflexively reaching for them risks stunting the critical thinking that clinical practice demands. The goal should be building heuristics strong enough to stay present with patients, not offloading cognition.
    - Transparency about surveillance matters. From Canvas quiz monitoring in college to clinical logging systems, students often don't know what's being tracked. Jack's experience as a TA revealed the extent of visibility administrators have—and raised questions about whether strategic ambiguity helps maintain standards or just breeds anxiety.
    - Institutions are starting to take AI governance seriously. Harvard Medical School's AI Sandbox gives trainees access to multiple LLMs in a secure environment that protects curriculum materials and personal data (though it's not HIPAA compliant). This kind of infrastructure signals that leadership is thinking carefully about responsible use.
    - Career concerns about AI replacement are real. For students considering imaging-heavy specialties like radiology or radiation oncology, the specter of AI "scope creep" is a recurring topic in conversations with attendings and senior trainees. It's not paranoia—it's a practical factor in career planning.
    - Discovery often happens peer-to-peer. Jack first learned about Open Evidence by glancing at a classmate's screen during a simulation exercise. The most impactful tools aren't always introduced through formal curricula—they spread through observation and word of mouth.

    John “Jack” Kincaid is a trainee in the Harvard/MIT MD-PhD Program at Harvard Medical School interested in the intersection of diet and disease. Jack received B.A. (Nutritional Biochemistry and Metabolism) and M.S. (Nutrition) degrees from Case Western Reserve University in 2021, where he helped investigate the impact of obesity and obesogenic diet on cancer development in the laboratory of Nathan Berger at Case Comprehensive Cancer Center. Concomitantly, Jack worked with a variety of food access and health literacy groups including CWRU Food Recovery Network and Cooking Matters STL. After leaving CWRU, Jack relocated to the UK to train as a postgraduate in the group of Sir Stephen O’Rahilly at the University of Cambridge Institute of Metabolic Science, studying the neuroendocrine regulation of human appetitive behavior and body weight. As a physician scientist, Jack hopes to leverage basic science and clinical medicine to help address the growing burden of diet-associated illnesses as well as develop safe, effective treatments for metabolic disease.

    23 min
  6. Who Builds the Tools Teachers Are Asked to Use? - Yanni Chen

    MAR 31

    Who Builds the Tools Teachers Are Asked to Use? - Yanni Chen

    In this episode, Priten and Yanni Chen explore what it actually looks like to build AI tools that support learning rather than shortcut it. Yanni, a master's student at Harvard Graduate School of Education and product developer at Deep Brain Academy, shares her experience creating an AI math tutor with a genuine commitment to scaffolding, cultural inclusivity, and keeping teachers central to the learning process.

    Key Takeaways:

    - Scaffolding matters more than speed. AI tools often give direct answers because that's what they're engineered for. But real learning requires guiding students through the thinking process—something teachers do that AI cannot replicate. Educators should look for tools that provide step-by-step guidance rather than instant solutions.
    - Teacher skepticism is healthy—and often fades with use. Most teachers approach AI with skepticism, which is appropriate. But just as PowerPoint and video were once new classroom tools, AI becomes less intimidating through hands-on experience. The recommendation: start with personal, low-stakes use before thinking about classroom implementation.
    - Gen Alpha's AI fluency makes teacher presence more important, not less. Students are already fluent AI users. This doesn't diminish the teacher's role—it elevates it. Teachers need to help students navigate bias, develop critical thinking, and understand when AI is appropriate and when it isn't.
    - We lack clear guidelines—so educators must set their own. In the absence of federal or state AI policies, individual educators need to establish clear ethical boundaries around data security, safety, and appropriate use. The technology is moving faster than regulation can keep up.
    - Creative technologies extend beyond chatbots. From 3D printing and laser cutting that let students build physical objects to AR/VR simulations for medical training, there's a whole landscape of educational technology that emphasizes hands-on learning and creative exploration—not just AI conversation.

    Yanni Chen is an Ed.M. candidate at the Harvard Graduate School of Education, where she studies Learning Design, Innovation, and Technology. She earned her B.S. from Boston University, majoring in Public Relations and minoring in Applied Human Development. Her work sits at the intersection of education, product management, AI, XR, and edtech. She focuses on student experience and the design of educational products that foster engagement, growth, and meaningful learning outcomes. Drawing from both her academic training and her work in edtech, Yanni brings the perspective of both a student and a product manager to conversations about teaching, learning, and educational innovation.

    30 min
  7. Is Surveillance Culture Ruining Trust in Schools? - Jessica Maddry

    MAR 27

    Is Surveillance Culture Ruining Trust in Schools? - Jessica Maddry

    In this episode, Priten and Jessica Maddry examine how surveillance culture and rigid policy enforcement are eroding trust and genuine learning in schools. From cell phone bans that criminalize normal behavior to reading programs that strip away the joy of stories, they explore how the gap between written policies and their ethical implementation has created environments of control rather than connection. The conversation spans zero-tolerance enforcement, AI detection tools, and the critical importance of human relationships in education.

    Key Takeaways:

    - Policies should serve ethics, not replace them. Following rules isn't the same as doing the right thing. When a student has their phone off in their pocket but gets suspended because it's not in their backpack, the punishment no longer serves the policy's original intent of reducing distraction.
    - Surveillance culture damages the learning environment. Constant monitoring and zero-tolerance enforcement create an atmosphere where students feel unsafe and disengaged. When students associate school with punishment rather than growth, absenteeism and mental health crises follow naturally.
    - Deep literacy is becoming a privilege again. Many students no longer read books from start to finish, instead consuming only passages for standardized tests. This loss of story-based learning strips away both the joy of reading and critical thinking skills.
    - AI detection is an unwinnable arms race. The cycle of AI detectors, humanizers, and humanizer-detectors demonstrates a fundamental misunderstanding of how to address academic integrity—tools cannot replace the trust and relationships needed for genuine learning.
    - Human connection is irreplaceable in education. Whether it's a professor scrapping class to process a difficult moment with students, or a teacher stepping aside to comfort a struggling child, the most impactful educational experiences come from authentic human relationships—something no technology can replicate.

    Jessica Maddry is an educator, strategist, and co-founder of BrightMinds AI, where she works with schools and districts to integrate AI ethically, intentionally, and with educators at the center. Her work focuses on helping systems move beyond hype toward human-centered, purpose-driven design, supporting policy, implementation, and systems change so that technology strengthens learning, equity, and student well-being rather than undermining them.

    32 min
  8. What Does Representative Governance Mean for Our Future? - Nathán Goldberg

    MAR 25

    What Does Representative Governance Mean for Our Future? - Nathán Goldberg

    In this episode, Priten speaks with Nathán Goldberg, a philosopher-statistician whose career weaves together two unlikely threads: professional soccer and democratic activism. As Vice President of the US Soccer Federation and founder of both Harvard Forward and Bluebonnet Data, Nathán has spent years thinking about who gets to sit in the rooms where decisions are made—and why it matters.

    Key Takeaways:

    - Voting isn't enough—perspective is. The people impacted by decisions need to be in the rooms where those decisions get made.
    - Outsiders can win. Harvard Forward gathered 4,500 signatures on parchment paper, won board seats, and a decade of resistance to divestment collapsed within a year.
    - Institutions resist until they can't. Harvard ignored them, then attacked them. It didn't work.
    - The model scales. The same playbook worked at Yale and Penn State. One elected climate scientist shifted Penn State's investment policy.
    - Soccer has the same problem. 4 million youth players, zero recent youth players in governance.

    Born and raised in México, Nathán Goldberg Crenier is a new(ish) American who is passionate about using the power of democracy and sports to make the world a better place. He has been recognized in the Forbes 30 Under 30 list for his work in progressive politics and nonprofit management, in the New York Times for his work as an electoral organizer and climate advocate, and in the Sports Business Journal New Voices Under 30 list for his work as a soccer executive. He is also a proud recipient of the 2025 Paul & Daisy Soros Fellowship for New Americans as he pursues his JD at Harvard Law School, having graduated with a joint degree in philosophy and statistics from Harvard College, where he played for and captained the D1 varsity men’s soccer team.

    49 min

