How do you protect your kids online when even adults can’t tell what’s real anymore? AI-generated videos, deepfakes, and synthetic audio are not just a tech issue. They are showing up inside the apps our kids use every day, mixed in with cartoons, music clips, and “safe” educational content. Most children, and plenty of adults, are being trained to trust whatever looks and sounds real.

In this episode of the Family IT Guy Podcast, I sat down with Jeremy Carrasco (@showtoolsai), a media producer and AI analyst, to talk about what parents need to understand right now: how AI content is made, how algorithms push it, and how families can spot it before it causes harm.

Jeremy is not guessing from the outside. He has spent years in professional video production, live streaming, and audio engineering. He knows what real human media looks like when it is made by actual people, and where AI still gives itself away. One of the biggest tells?

👉 AI doesn’t breathe.

AI videos can look believable, especially on a small phone screen. But once you know what to listen and look for, the cracks show up fast. Those cracks matter because kids do not have the life experience or media literacy to notice them on their own.

In this conversation, we break things down in a way parents can actually use.

First, AI videos versus deepfakes. They are often treated as the same thing, but they are not. Jeremy explains the difference, why deepfakes tend to be targeted, and why mass-produced AI videos are now flooding platforms at scale, often designed to hook kids with familiar characters, faces, or voices.

Second, why audio matters more than visuals. Parents are taught to watch what their kids see, but listening is just as important. We talk about unnatural speech pacing, missing breaths, flat or mismatched emotion, and why the human voice is still one of the hardest things for AI to fake convincingly.

Third, visual and behavioral red flags parents can learn.
Subtle background warping, strange eye movement, awkward timing, and non-human rhythm. These are things media professionals spot quickly, but they can also be taught to parents who want to be proactive instead of reactive.

We also zoom out to the bigger issue parents are up against. Algorithms do not understand childhood, safety, or values. They understand engagement. A feed that starts with something harmless (Bluey, Miss Rachel, animal videos, or learning content) can shift quickly after one curious search or autoplay chain. That is how kids end up exposed to disturbing, violent, or sexualized AI-generated content that looks playful but is not.

We talk about:
- Why kids’ algorithms are some of the most profitable and dangerous systems online
- How “safe” feeds slowly drift without parents realizing
- Why YouTube Kids is safer than regular YouTube but still not a set-it-and-forget-it solution
- The rise of AI-generated sexualized content involving children
- Why sharing kids online can create exposure parents never intended
- Safer ways to share family photos using privacy-first tools
- Why adults have to act as stewards of their children’s digital privacy, even when the platforms will not

This episode is not about fear or banning technology. It is about giving parents clarity in a digital world that is changing faster than most families realize. If you are raising kids right now, or care about the internet they are growing up in, this conversation is worth your time.

🎙️ Guest: Jeremy Carrasco — Media Producer & AI Analyst
🎧 Podcast: Family IT Guy