Family IT Guy Podcast

Family IT Guy

Ben Gillenwater helps families protect children from digital dangers, bringing 30 years of cybersecurity expertise to the parenting journey. His background includes working with the NSA and serving as Chief Technologist of a $10 billion IT company, where he built global-scale systems and understood technology's risks at every level. His mission began when he gave his young son an iPad with "kid-safe" apps—only to discover inappropriate content days later. Despite his deep technical background, Ben realized that if protecting children online was challenging for him, it must be even more difficult for parents without his expertise. Through Family IT Guy, Ben creates videos and articles that help parents and kids learn how to leverage the positive parts of the internet while avoiding the dangerous and risky parts. His approach bridges the knowledge gap between complex technology and practical family protection, making digital safety accessible to everyone.

  1. 5D AGO · VIDEO

    FBI Psychologist: Your Kid’s Phone Is More Dangerous Than a Gun

    Dr. Lisa Strohman spent 30 years studying what hurts kids—from profiling at FBI Quantico after Columbine to serving as an expert witness in New Mexico v. Meta. Her conclusion? The phone in your child's pocket is more dangerous than a gun on your kitchen table. At least the kid knows to be afraid of the gun. We get into the CDC data, what she saw inside Meta's own research, why 400 girls at one school deleted social media on Valentine's Day, and what happened when she gave her own son Snapchat and immediately regretted it.

    Timestamps:
    0:00 - Lisa's background: FBI, Columbine, 30 years in digital safety
    2:15 - 800,000 kids follow the Columbine ideology
    5:10 - CDC data: self-harm spikes after social media
    7:00 - False narratives from platforms
    9:30 - "A phone is more dangerous than a gun on the table"
    12:05 - Inside the case against Meta
    19:40 - 400 girls quit social media on Valentine's Day
    22:50 - "Tech is a tool, not a toy"
    27:00 - Warning signs for parents of girls
    33:37 - The expert's own parenting story
    39:40 - "I gave my son Snapchat"
    43:17 - One thing parents can do this week
    45:11 - Digital Citizen Academy
    49:45 - Final question

    About Dr. Lisa Strohman: Clinical psychologist, attorney, and founder of Digital Citizen Academy (digitalcitizenacademy.org). Her free book "Digital Distress" is available at digitalcitizenacademy.org/digital-resources.

    Resources:
    Family IT Guy: https://www.familyitguy.com
    iPhone Setup Guide: http://familyitguy.com/go/iphoneguide

    52 min
  2. FEB 3

    AI Videos vs Deepfakes: How to Tell What’s Real Online (With Jeremy Carrasco)

    How do you protect your kids online when even adults can’t tell what’s real anymore? AI-generated videos, deepfakes, and synthetic audio are not just a tech issue. They are showing up inside the apps our kids use every day, mixed in with cartoons, music clips, and “safe” educational content. Most children, and plenty of adults, are being trained to trust whatever looks and sounds real.

    In this episode of the Family IT Guy Podcast, I sat down with Jeremy Carrasco (@showtoolsai), a media producer and AI analyst, to talk about what parents need to understand right now: how AI content is made, how algorithms push it, and how families can spot it before it causes harm. Jeremy is not guessing from the outside. He has spent years in professional video production, live streaming, and audio engineering. He knows what real human media looks like when it is made by actual people, and where AI still gives itself away. One of the biggest tells? 👉 AI doesn’t breathe.

    AI videos can look believable, especially on a small phone screen. But once you know what to listen and look for, the cracks show up fast. Those cracks matter because kids do not have the life experience or media literacy to notice them on their own. In this conversation, we break things down in a way parents can actually use.

    First, AI videos versus deepfakes. They are often treated as the same thing, but they are not. Jeremy explains the difference, why deepfakes tend to be targeted, and why mass-produced AI videos are now flooding platforms at scale, often designed to hook kids with familiar characters, faces, or voices.

    Second, why audio matters more than visuals. Parents are taught to watch what their kids see, but listening is just as important. We talk about unnatural speech pacing, missing breaths, flat or mismatched emotion, and why the human voice is still one of the hardest things for AI to fake convincingly.

    Third, visual and behavioral red flags parents can learn. Subtle background warping, strange eye movement, awkward timing, and non-human rhythm. These are things media professionals spot quickly, but they can also be taught to parents who want to be more proactive instead of reactive.

    We also zoom out to the bigger issue parents are up against. Algorithms do not understand childhood, safety, or values. They understand engagement. A feed that starts with something harmless, Bluey, Miss Rachel, animal videos, or learning content, can shift quickly after one curious search or autoplay chain. That is how kids end up exposed to disturbing, violent, or sexualized AI-generated content that looks playful but is not.

    We talk about:
    - Why kids’ algorithms are some of the most profitable and dangerous systems online
    - How “safe” feeds slowly drift without parents realizing
    - Why YouTube Kids is safer than regular YouTube but still not a set-it-and-forget-it solution
    - The rise of AI-generated sexualized content involving children
    - Why sharing kids online can create exposure parents never intended
    - Safer ways to share family photos using privacy-first tools
    - Why adults have to act as stewards of their children’s digital privacy, even when the platforms will not

    This episode is not about fear or banning technology. It is about giving parents clarity in a digital world that is changing faster than most families realize. If you are raising kids right now, or care about the internet they are growing up in, this conversation is worth your time.

    🎙️ Guest: Jeremy Carrasco — Media Producer & AI Analyst
    🎧 Podcast: Family IT Guy

    1h 17m
  3. JAN 11 · VIDEO

    Protecting Kids Online: Missing Children, AI Scams, and Digital Exploitation with Shawnna Hoffman

    In this episode of the Family IT Guy Podcast, I sit down with Shawnna Hoffman, CEO of the International Center for Missing and Exploited Children (ICMEC), for a raw and deeply personal conversation about online exploitation, AI-enabled scams, human trafficking, and the growing risks facing kids and teens online. Shawnna shares her journey from decades in Big Tech and AI leadership to leading a global organization focused on returning missing children to their families. She also opens up about her own family’s experience with a long-term online scam that targeted her autistic son, exposing how sophisticated, patient, and psychologically damaging modern online exploitation has become.

    This episode covers:
    • How online grooming and long-term scams target kids and young adults
    • The role AI and social platforms play in exploitation and manipulation
    • Why parental controls alone are not enough
    • The reality of missing children and trafficking on a global scale
    • How ICMEC measures success by one metric only: kids reunited with families
    • The difference between facial detection and facial recognition
    • Why digital safety requires community action, better safeguards, and real accountability

    If you are a parent, caregiver, educator, or anyone concerned about child safety online, this conversation is essential listening.

    🔒 Learn more about protecting kids online: https://familyitguy.com
    🌍 Learn more about the International Center for Missing and Exploited Children: https://www.icmec.org

    1h 39m

Ratings & Reviews

5 out of 5 · 6 Ratings

