ClearTech Loop: In the Know, On the Move

ClearTech Research / Jo Peterson

ClearTech Loop is a fast, focused podcast delivering sharp, soundbite-ready insights on what’s next in cybersecurity, cloud, and AI. Hosted by Jo Peterson, Chief Analyst at ClearTech Research, each 10-minute episode explores today’s most pressing tech and risk issues through a business-focused lens.  Whether it’s CISOs rethinking cyber strategy or AI reshaping risk governance, ClearTech Loop brings clarity to a shifting landscape—built for tech leaders who don’t have time for fluff.  We cut through hype. We rethink assumptions. We keep you in the loop.

  1. Security Is an Environment with Miri Rodriguez

    16H AGO

    What CISOs miss when security only lives in features

    AI security is still being framed as a technology problem: new tools, new controls, new dashboards, and new rules. In this episode of ClearTech Loop, Jo Peterson talks with Miri Rodriguez, Cofounder and CEO of Empressa.ai, about why that framing keeps breaking down in the real world.

    Miri brings a people-first lens to AI adoption and security. She argues that security is not just something you install; it is an environment people are willing to enter. When the environment does not feel secure, adoption either slows or goes underground, and security teams are left trying to govern what they cannot see.

    This conversation connects three practical leadership threads: using GenAI upstream to understand real adoption patterns, embedding security and privacy without slowing innovation by designing for humans, and building governance that becomes habit instead of paperwork.

    Subscribe to ClearTech Loop on LinkedIn: https://www.linkedin.com/newsletters/7346174860760416256/

    Key Quotes

    “The opportunity is massive when you think about security as an environment, not just a technology or a feature.”

    “The features don’t matter if you can’t tell me why the features are important in my space.”

    Three Big Ideas from This Episode

    GenAI beyond the tool stack
    Generative AI can help security leaders widen the lens on adoption. Before policies and controls, leaders need to understand where people hesitate, where they take shortcuts, and why the secure path gets avoided.

    Inclusion is a security control
    Speed without inclusion creates blind spots, and blind spots become risk. Security and privacy do not have to slow innovation, but they do have to be designed in a way people can understand and follow.

    Governance is behavior
    If governance does not translate into day-to-day habits, it is just documentation. Training format matters as much as content, and security sticks when people see it as personal responsibility, not corporate paperwork.

    Additional Resources

    AI Foundations for Women (Empressa AI): https://empressa.ai/ai-foundations-for-women/
    Most Tools Weren’t Built with Women in Mind, AI Is Just the Latest: https://empressa.ai/2025/04/03/most-tools-werent-built-with-women-in-mind-ai-is-just-the-latest/
    IABC Catalyst, Building Your Brand With Microsoft Senior Storyteller Miri Rodriguez: https://www.iabc.com/Catalyst/Article/building-your-brand-with-microsoft-senior-storyteller-miri-rodriguez

    About the Guest

    Miri Rodriguez is Cofounder and CEO at Empressa.ai, an AI and storytelling strategist, bestselling author, and Microsoft alum. She focuses on ethical innovation, inclusion, and building trustworthy AI environments where women can connect, learn, and thrive. She is also the author of Brand Storytelling: Put Customers at the Heart of Your Brand Story.

    🎧 Listen: In Buzzsprout Player
    ▶ Watch on YouTube: https://www.youtube.com/@ClearTechResearch/playlist
    📰 Subscribe to the Newsletter: https://www.linkedin.com/newsletters/7346174860760416256/

    16 min
  2. AI, Cost, Control, and Relevance with Margaret Dawson, CMO, SUSE

    2D AGO

    AI is getting embedded into everything: security workflows, engineering workflows, and customer-facing products, often faster than organizations can govern it. The upside is real. The risk is in the assumptions leaders are making while they chase speed.

    In this episode of ClearTech Loop, Jo Peterson sits down with Margaret Dawson, Chief Marketing Officer at SUSE, for a slightly different conversation than our usual guest mix: the same three questions, but through a CMO lens shaped by decades of enterprise buying cycles.

    Margaret breaks down how security leaders can use generative AI to move beyond tools and tech, what it actually takes to embed security and privacy without slowing innovation, and how CMOs should talk about AI without getting trapped in AI washing. The throughline is practical: productivity gains are real, but cost, control, and credibility are not automatic.

    Subscribe to ClearTech Loop on LinkedIn: https://www.linkedin.com/newsletters/7346174860760416256/

    Key Quotes

    “The mistake that we’re making as leaders is we are assuming that the integration of AI is an automatic reduction in cost.” — Margaret Dawson

    “Until all of a sudden, the CFO got a million-dollar cloud bill.” — Margaret Dawson

    “Consumers get smarter, very, very fast, especially B2B tech customers, and they start to know what’s real and what’s not.” — Margaret Dawson

    “Being very specific on what it is doing for your product, for that customer, tying it back to the business outcome.” — Margaret Dawson

    “People trust their peers more than any vendor or anyone else.” — Margaret Dawson

    Three Big Ideas from This Episode

    AI adoption fails when leaders treat it like automatic margin
    The board-level narrative is tempting, but dangerous. AI can improve productivity, but assuming it instantly reduces cost is how organizations create governance debt and business continuity risk.

    Speed is possible, but only with guardrails
    Embedding security and privacy is not what slows innovation. Confusion, rework, and incidents slow innovation. The answer is clear boundaries, clear accountability, and controls that keep pace with adoption.

    In marketing, relevance and proof beat hype
    Buyers calibrate fast. AI messaging has to be specific, tied to outcomes, and backed by customer evidence. Credibility is built peer to peer, not through louder claims.

    Resources Mentioned

    The Secret to Digital Transformation is Human Connection by Margaret Dawson: https://devops.com/the-secret-to-digital-transformation-is-human-connection/
    INTERVIEW: SUSE CMO Margaret Dawson on AI, Kubernetes & Open Source | KubeCon + CloudNativeCon NA 2025: https://www.youtube.com/watch?v=kUcEl3hfvG8
    ClearTech Loop: AI Security Needs Better Execution with Zach Lewis: https://www.buzzsprout.com/2248577

    🎧 Listen: In Buzzsprout Player
    ▶ Watch on YouTube: https://www.youtube.com/@ClearTechResearch/playlist
    📰 Subscribe to the Newsletter: https://www.linkedin.com/newsletters/7346174860760416256/

    18 min
  3. AI Security Needs Better Execution with Zach Lewis (CIO/CISO | Author, Locked Up)

    FEB 12

    AI security is getting marketed like it requires a brand-new playbook—new frameworks, new job titles, new spend.

    In this episode of ClearTech Loop, Jo Peterson sits down with Zach Lewis (CIO/CISO and author of Locked Up) for a practical reset: AI doesn’t erase the fundamentals—it punishes you faster when you ignore them. Zach breaks down where GenAI is already helping security programs today (especially tabletop exercises and prioritization), what it actually takes to embed security and privacy into AI models without slowing innovation (data classification, access controls, segmentation, documentation, least privilege), and why adoption fails when leaders treat AI like a tool rollout instead of a behavior change.

    The takeaway is simple and actionable: get the foundations right, use AI to reduce friction where it matters, and build a culture where AI augments people rather than creating fear.

    Subscribe to ClearTech Loop on LinkedIn: https://www.linkedin.com/newsletters/7346174860760416256/

    Key Quotes

    “Strong AI security… starts with doing the basics well.” — Zach Lewis

    “One of the best use cases I found for it was tabletop exercises.” — Zach Lewis

    “You had a billion alerts… and you’re like, which one’s important?” — Jo Peterson

    Three Big Ideas from This Episode

    1) AI security is data discipline + access discipline
    Before you talk tools, talk foundations: classify data before it touches a model, segment critical workloads, gate access by role and sensitivity, document prompts, sources, and model versions, and enforce least privilege with ongoing testing and validation.

    2) GenAI can make readiness more real (tabletop exercises)
    Instead of running the same scripted scenario every year, GenAI can generate realistic incidents, inject curveballs, and help teams identify missed actions—turning tabletops into a real maturity-building loop.

    3) Adoption is a leadership problem, not a platform problem
    AI initiatives stall when people are afraid or unsupported. Training, shared use cases, and visible wins (time saved, friction removed) create a safe environment where AI augments work rather than threatening it.

    📘 Zach’s book: Locked Up: Cybersecurity Threat Mitigation Lessons from a Real-World LockBit Ransomware Response: https://www.amazon.com/Locked-Cybersecurity-Mitigation-Real-World-Ransomware/dp/1394357044

    Resources Mentioned

    ClearTech Loop with Michael Machado, AI Risk is Mostly Not New: https://www.buzzsprout.com/2248577/episodes/18535354-ai-risk-is-mostly-not-new-with-michael-machado
    NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
    MITRE ATT&CK Framework: https://attack.mitre.org/

    🎧 Listen: In Buzzsprout Player
    ▶ Watch on YouTube: https://www.youtube.com/@ClearTechResearch/playlist
    📰 Subscribe to the Newsletter: https://www.linkedin.com/newsletters/7346174860760416256/

    11 min
  4. Governance Is a Catalyst in AI Security with Stefano Righi (AMI)

    FEB 10

    AI is speeding up both attackers and defenders. The goal is not speed alone. The goal is speed with trust.

    In this episode of ClearTech Loop, Jo Peterson sits down with Stefano Righi, Chief Security Architect at American Megatrends, for a hot-take conversation on what AI is changing inside security programs. Stefano breaks down how GenAI can move teams beyond reactive, tool-centric security into anticipatory defense through predictive threat modeling, dynamic risk assessment, and orchestration. He also explains what secure by design means for AI systems, including privacy, adversarial resiliency, prompt injection, and data poisoning, plus why human oversight still matters. The conversation closes on firmware security below the operating system and why governance aligned to standards becomes an accelerator, not paperwork.

    Subscribe to ClearTech Loop on LinkedIn: https://www.linkedin.com/newsletters/7346174860760416256/

    Key Quotes

    “Governance may act as a catalyst, not as a brake to innovation, enabling innovation while ensuring trust.” — Stefano Righi

    “From the very start, we need to pursue secure by design for AI models… build privacy and adversarial resiliency into AI life cycles… mitigate risks like prompt injection and data poisoning without slowing innovation.” — Stefano Righi

    “Firmware runs under the operating system, and any attack that could happen at such a layer could go undetected by any antivirus solution running in the operating system.” — Stefano Righi

    Three Big Ideas from This Episode

    Governance accelerates trusted AI adoption
    Governance is not paperwork. Done right, it enables innovation while ensuring trust, and it must be cross-functional rather than owned by security alone.

    Secure by design has to include AI systems
    Privacy and adversarial resiliency belong in the AI lifecycle from the start, with attention to risks like prompt injection and data poisoning, plus human oversight to ensure compliance and prevent misuse.

    Firmware is a visibility blind spot below the OS
    Platform layers from microcode to BIOS to BMC to Root of Trust create real complexity, and attacks below the OS can bypass controls that security leaders rely on for visibility.

    Episode Notes / Links

    🎧 Listen: In player above
    ▶ Watch on YouTube: https://youtu.be/wXOf6erkQ6k
    📰 Subscribe to the Newsletter: https://www.linkedin.com/newsletters/7346174860760416256/

    Resources Mentioned

    ClearTech Loop: The CSA AI Safety Initiative with George Finney: https://cleartechresearch.com/the-csa-ai-safety-initiative-with-george-finney/
    NIST AI Standards: https://www.nist.gov/artificial-intelligence/ai-standards
    OWASP guidance for AI: https://owasp.org/www-project-ai-security-and-privacy-guide/
    CSA: How Generative AI is Reshaping Zero Trust Security: https://cloudsecurityalliance.org/blog/2026/01/09/how-generative-ai-is-reshaping-zero-trust-security

    12 min
  5. From Tool-Driven Cyber to Adaptive AI Defense with Ryan Lutz

    FEB 5

    Cybersecurity has become a tool-driven industry. Organizations buy platforms, stack controls, generate alerts, and ask humans to stitch it all together under pressure.

    In this episode of ClearTech Loop, Jo Peterson sits down with Ryan Lutz to explore what changes when AI becomes part of the security workflow: not as another console, but as an adaptive capability that helps teams interpret signals faster, prioritize more intelligently, and respond more consistently when the volume is too high for humans to manage alone.

    The conversation focuses on three real-world themes: why the SOC is the best initial use case for AI augmentation, how leaders should think about the inherent exposure that comes with more AI and more code, and why Ryan’s research on AI malware matters for building adaptive defensive responses.

    Subscribe to ClearTech Loop on LinkedIn: https://www.linkedin.com/newsletters/7346174860760416256/

    Key Quotes

    “Cyber is a very tool driven industry… with the implementation of AI being generative, I think that we’re going to see AI being used more in a way that’s adaptive.” — Ryan Lutz

    “In a setting like a SOC analyst… you have a ton of information coming in… millions of possible attack vectors… it’s very applicable to use AI… to generate a response very quickly and more efficiently.” — Ryan Lutz

    “How should the CISO be thinking about AI adoption… from an organizational governance perspective, because you don’t want to be the Department of No.” — Jo Peterson

    Three Big Ideas from This Episode

    1) Adaptive beats tool-driven
    AI helps security teams move beyond tool sprawl by accelerating interpretation, prioritization, and decision-making in high-volume environments.

    2) The SOC is the natural first use case
    SOC work is overwhelmed by inputs and possible attack paths. Ryan explains why AI can rank what matters, accelerate analysis, and suggest response paths quickly and efficiently.

    3) Governance must guide adoption without killing innovation
    More AI and more code create more exposure. The leadership job is balance: govern the use and guide adoption without becoming the “Department of No.”

    Episode Notes / Links

    🎧 Listen: In player
    ▶ Watch on YouTube: https://youtu.be/-2mxfnCexjQ
    📰 Subscribe to the Newsletter: https://www.linkedin.com/newsletters/7346174860760416256/

    Resources Mentioned

    MITRE ATT&CK Framework: https://attack.mitre.org/
    NIST Cybersecurity Framework (CSF): https://www.nist.gov/cyberframework
    ClearTech Loop: AI Only Works If Your Foundations Do: A Conversation with Dr. Anton Chuvakin: https://www.buzzsprout.com/2248577/episodes/18211623-ai-only-works-if-your-foundations-do-a-conversation-with-dr-anton-chuvakin

    9 min
  6. From Reactive to Predictive in AI Security with Jen Waltz

    FEB 3

    Cybersecurity has been trapped in a reactive cycle for years: a new threat emerges, a new tool gets purchased, and teams get overwhelmed by alerts.

    In this episode of ClearTech Loop, Jo Peterson sits down with Jen Waltz (Chief Information Security Officer at IMAJENATIVE) to unpack how generative AI can fundamentally disrupt that cycle—shifting the focus from managing tools to achieving strategic outcomes.

    The conversation goes beyond “faster alerts” and gets practical about what’s changing right now:

    Moving beyond alert triage into predictive threat hunting, including simulating adversary behavior and generating TTP playbooks—especially when paired with threat intel and MITRE ATT&CK data.
    Upskilling SOC teams by using GenAI to reduce menial work, provide clearer remediation paths, and support more anticipatory defense postures.
    Embedding security, privacy, and governance early so “secure-by-design” becomes a business enabler, not a speed bump.

    Jen also gives a clear governance warning: as AI adoption accelerates, organizations must guide usage with approved tools and acceptable-use controls—especially to reduce the risk of sensitive data being dropped into consumer AI chat tools like ChatGPT.

    If you’re responsible for security operations, AI strategy, or governance, this episode offers a grounded path for adopting GenAI without losing control.

    👉 Subscribe to ClearTech Loop on LinkedIn: https://www.linkedin.com/newsletters/7346174860760416256/

    Key Quotes

    “Cybersecurity has long been trapped in this reactive cycle… generative AI… can fundamentally disrupt the cycle by shifting the focus from managing tools to achieving strategic outcomes.” — Jen Waltz

    “The CISO no longer is this superhero defender of the perimeter. You have to become a business strategist…” — Jen Waltz

    Three Big Ideas from This Episode

    1. GenAI can break the reactive cycle—if teams target outcomes, not tools
    Jen frames GenAI as an opportunity to move beyond buying more technology and instead shift security programs toward strategic outcomes and anticipatory defense.

    2. Predictive threat hunting becomes practical with TTP playbooks + MITRE ATT&CK context
    Rather than only prioritizing alerts, Jen describes prompting GenAI to simulate adversaries and generate playbooks—then connecting that to threat intel and MITRE ATT&CK data to anticipate attacker evolution earlier.

    3. AI governance is a leadership mandate—and the CISO role expands
    Jen argues the CISO must operate as a business strategist balancing innovation enablement with risk governance. That includes guiding internal AI use with hardened, approved tools and clear controls—without shutting down creativity.

    🎧 Listen: Buzzsprout player above
    ▶ Watch on YouTube: https://youtu.be/EEf0eRdCfzg
    📰 Subscribe to the ClearTech Loop Newsletter: https://www.linkedin.com/newsletters/7346174860760416256/

    Resources Mentioned

    MITRE ATT&CK Framework: https://attack.mitre.org/resources/attack-data-and-tools/
    NIST Cybersecurity Framework (CSF): https://www.nist.gov/cyberframework
    ISO/IEC 27001 (ISMS)

    17 min
  7. AI Security Is Still Software Security with Nicolas Moy

    JAN 29

    AI is being embedded into enterprise tools faster than most organizations can govern it. Productivity platforms, security systems, and development workflows now include AI capabilities by default, often without a single approval moment or a clear ownership model.

    In this episode of ClearTech Loop, Jo Peterson sits down with Nicolas Moy, Founder and CIO of LifeMark Financial and vCISO for Security Engineering at Halyard Labs, to talk about what AI security looks like in practice when it is treated as software, not as a separate discipline.

    Nicolas shares how security teams are already using AI today to accelerate policy development, reduce operational noise, and support threat modeling earlier in the design and build process. The conversation also explores why governance is struggling to keep pace with employee behavior, especially as sensitive information enters AI systems without clear visibility into where data goes or how it is reused.

    Rather than framing AI security as a future problem, this discussion focuses on what CISOs and CIOs are dealing with right now, and why accountability has to keep pace as AI compresses timelines across security and technology decisions.

    If you are navigating AI adoption across security, development, and governance, this episode provides a grounded perspective on how to approach AI without losing control.

    👉 Subscribe to ClearTech Loop on LinkedIn: https://www.linkedin.com/newsletters/7346174860760416256/

    Key Quotes

    “For AI, it’s similar, it’s software, but there’s some new evolutions to it.” — Nicolas Moy, CISSP, CCSK

    “If my employee puts this confidential information into an AI chat system, where is that being shipped out to?” — Nicolas Moy, CISSP, CCSK

    Three Big Ideas from This Episode

    1. AI security accelerates familiar risks rather than creating new ones
    Treating AI as software brings it into existing security and DevSecOps practices earlier, rather than isolating it as a separate problem.

    2. Governance is lagging behind real employee behavior
    AI tools are being used inside normal workflows faster than policies and controls were designed to handle.

    3. CISOs and CIOs must engage together earlier
    AI security sits between architecture, data governance, and risk ownership, requiring shared accountability across roles.

    🎧 Listen on player above
    ▶ Watch on YouTube: https://youtu.be/MBVbyAE33e0
    📰 Subscribe to the ClearTech Loop Newsletter: https://www.linkedin.com/newsletters/7346174860760416256/

    Resources Mentioned

    OWASP Top 10 for Large Language Model Applications: https://owasp.org/www-project-top-10-for-large-language-model-applications/
    OWASP AI Project: https://owasp.org/www-project-ai/
    ClearTech Loop Episode: AI as a Digital Co-Worker with Timothy Youngblood: https://www.buzzsprout.com/2248577/episodes/18509846-ai-as-a-digital-co-worker-with-the-experience-of-an-intern-with-timothy-youngblood

    11 min
  8. The CISO’s Job in AI Is Not to Stop the Wave, But to Shape It with Travis Farral, CISO at Archaea Energy

    JAN 27

    AI did not arrive through a single decision. It crept into enterprises through productivity tools, cloud platforms, security products, and SaaS applications that teams were already using. Most organizations did not choose to adopt AI. They woke up and realized it was already there.

    In this episode of ClearTech Loop, Jo Peterson sits down with Travis Farral, Vice President and Chief Information Security Officer at Archaea Energy, to talk about what that reality means for security leaders who are being asked to govern AI systems that are still evolving in real time.

    Travis explains why AI cannot be stopped, only shaped, and why the real risk is not the technology itself but the lack of clarity around what is actually being deployed. “This is not something that we’re going to be able to stop,” he said. “Even if we wanted to. It’s like standing in front of a tidal wave.”

    The conversation covers:

    Why “AI” has become a dangerously vague label
    How the AI threat model is shifting toward training data, prompts, and model behavior
    Why frameworks from NIST, OWASP, and MITRE already exist
    Why fluency, not guidance, is the real gap
    How CISOs can define guardrails without becoming the Department of No

    If you are responsible for cybersecurity, data governance, or enterprise risk, this episode offers a grounded way to think about AI adoption without losing control of your environment.

    🎧 Listen to the episode
    ▶ Watch on YouTube: https://youtu.be/JyQ2mgg_hYw
    📬 Subscribe to the ClearTech Loop Newsletter: https://www.linkedin.com/newsletters/7346174860760416256/

    Key Quote

    “This is not something that we’re going to be able to stop. Even if we wanted to. It’s like standing in front of a tidal wave.” — Travis Farral, CISO, Archaea Energy

    Additional Resources

    NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
    Travis specifically called out NIST as one of the primary sources for understanding the risks and controls around generative and agentic AI.

    OWASP Top 10 for Large Language Model Applications: https://owasp.org/www-project-top-10-for-large-language-model-applications/
    When Travis talked about protecting prompts, inputs, and model interfaces, he was pointing directly at the kinds of risks OWASP is mapping for LLMs.

    MITRE ATLAS: https://atlas.mitre.org/
    MITRE’s Adversarial Threat Landscape for AI is one of the frameworks Travis referenced when he talked about how attacks against models differ from traditional exploits.

    ClearTech Loop with Dutch Schwartz: https://cleartechresearch.com/bumpers-not-brakes/
    Travis’s comments about guardrails, controls, and not being the Department of No connect directly to Dutch’s episode on pragmatic AI safety.

    12 min

About

ClearTech Loop is a fast, focused podcast delivering sharp, soundbite-ready insights on what’s next in cybersecurity, cloud, and AI. Hosted by Jo Peterson, Chief Analyst at ClearTech Research, each 10-minute episode explores today’s most pressing tech and risk issues through a business-focused lens.  Whether it’s CISOs rethinking cyber strategy or AI reshaping risk governance, ClearTech Loop brings clarity to a shifting landscape—built for tech leaders who don’t have time for fluff.  We cut through hype. We rethink assumptions. We keep you in the loop.