Secured

MarketScale

We talk to top security experts, legislators, and school administrators to get an inside look at how parents and school staff can be the two golden components of any successful security plan.

  1. Why AI Can’t Replace Mental Healthcare for Teens

    4D AGO · BONUS VIDEO


    In this Secured clip, Dr. Jacqueline Benson, licensed clinical psychologist and Founder & CEO of Center Stage Psychology, addresses a growing concern: teens turning to AI for mental health support. Dr. Benson makes a critical distinction — AI is not mental health treatment. While AI tools can provide information and function as digital resources, they were not designed to replace licensed professionals. Mental health care requires years of clinical training and, just as importantly, real-time human judgment. Effective therapy is individualized, responsive, and deeply human — something AI cannot replicate.

    She also warns that while AI can offer helpful information, it can just as easily surface harmful or inaccurate content. In vulnerable moments, that misinformation can do real damage. Emerging cases have already shown AI interactions worsening psychotic symptoms or even encouraging suicidality in teens. Dr. Benson emphasizes that licensed mental health professionals must play an active role in shaping ethical safeguards around AI systems, particularly because the populations most likely to rely on these tools — adolescents and vulnerable individuals — are also those most at risk.

    For parents, her advice is clear: stay engaged. Learn about AI alongside your children. Have open conversations. And when serious mental health concerns arise, prioritize support from trusted, licensed professionals over quick digital responses. Speed and accessibility do not equal safety — especially when it comes to mental health.

    3 min
  2. How Is AI Reshaping Trust and Fraud in the Workplace?

    MAR 6 · BONUS VIDEO


    In this Secured clip, Andrew Feigenson, CEO of InformData, explains how AI is fundamentally reshaping trust in the workplace — both by enabling more sophisticated fraud and by strengthening detection. With the rise of AI-generated resumes, fabricated credentials, and synthetic identities, identity fraud is becoming harder to detect and easier to scale. This evolution raises the bar for employers and screening providers, who can no longer rely on traditional verification methods to ensure accuracy.

    At the same time, AI is equipping organizations with more powerful tools to combat these risks. Machine learning can identify subtle data patterns that signal fraud, detect inconsistencies that human reviewers might overlook, and accelerate verification processes — improving both security and the candidate experience.

    But Feigenson emphasizes that the most important shift is conceptual. Risk is no longer binary. It’s not simply “cleared” or “not cleared,” nor is it confined to a single moment in time. Instead, trust must be contextual, ongoing, and adaptive. He draws a parallel to cybersecurity: just as one-time security scans are insufficient in a constantly evolving threat landscape, background screening cannot remain a static, check-the-box compliance exercise. It must become part of a broader, continuous trust strategy — one that protects not only the organization, but also its clients, partners, and workforce.

    2 min
  3. Why Are Deepfakes Becoming One of the Biggest Security Threats?

    MAR 4 · BONUS VIDEO


    In this Secured clip, Jason Crawford, Founder and CEO of Sware, discusses how artificial intelligence is fundamentally reshaping trust in digital media. Today, nearly every industry — from insurance and logistics to healthcare, security, military, and intelligence — relies on digital content to communicate, validate processes, and substantiate decisions. The authenticity of that content has long been assumed. But as AI enables the creation of hyper-realistic synthetic media, that assumption is eroding.

    Crawford warns that we are entering a world where the burden of proof shifts. Historically, the responsibility was to prove that something was fake. Increasingly, organizations will need to prove that something is real — particularly in high-stakes environments like the legal system, where evidentiary standards will inevitably tighten.

    Most current efforts focus on defense: forensic analysis designed to detect manipulated media after it circulates. But Crawford argues this approach is unsustainable. As the velocity and sophistication of AI-generated content increases, detection becomes a reactive, unwinnable arms race. Instead, he advocates protecting authenticity at the moment of creation — establishing an independent chain of custody that secures not just the pixels or audio, but the full context: when, where, how, and by whom the content was recorded. By separating security from the asset itself — using cryptographic fingerprints and distributed verification models — organizations can create stronger, tamper-evident proof of authenticity before trust is compromised.

    5 min
