The Security Strategist

EM360Tech

Stay ahead of cyberthreats with expert insights and practical security guidance. Led by an ensemble cast of industry thought leaders offering in-depth analysis and practical advice to fortify your organization's defenses.

  1. How Do Attackers Exploit Executives’ Personal Lives to Breach Companies?

    FEB 5

    How Do Attackers Exploit Executives’ Personal Lives to Breach Companies?

    Cybersecurity has traditionally focused on strengthening corporate networks, cloud systems, and devices. However, in a recent episode of The Security Strategist podcast, Dr. Chris Pierson, Founder and CEO of BlackCloak, and host Richard Stiennon, Chief Research Analyst at IT-Harvest, argue that the most significant vulnerabilities are now outside the office perimeter. As AI-driven attacks increase and cybercrime combines digital, physical, and reputational risks, executives and their close contacts have become prime targets. Protecting the business now means protecting executives in their personal lives.

    Broad Attack Surface: Private & Corporate Properties

    Pierson points out that cybercriminals follow basic economic principles. Attacking a company that spends millions on security is costly and time-consuming. Targeting an executive’s personal life—home networks, private emails, family devices—is cheaper, quicker, and often far more effective. Executives work in varied environments: primary homes, vacation properties, private jets, yachts, and remote offices equipped with smart home technology. Each of these locations broadens an attack surface that traditional corporate security programs rarely address. Home automation systems, private Wi-Fi networks, and personal email accounts have become part of the corporate risk landscape, whether or not organisations recognise it. Pierson notes that taking over personal email accounts remains the number one attack method, especially for board members, who often revert to personal accounts instead of corporate ones. Once attackers gain access, they can steal intellectual property, intercept financial transactions, or pivot back into the corporate network. The executive home, he states, is no longer just near the perimeter—it is the perimeter.

    AI, Deepfakes, and the Rise of Targeted Impersonation

    The discussion becomes even more pressing when it turns to AI-enabled threats. Deepfakes, once a theoretical possibility, are now practical tools for fraud and extortion. Pierson spotlights a critical incident in early 2024, when a deepfake impersonation of a CFO allowed attackers to move tens of millions of dollars in a single event. AI has removed much of the background work attackers used to do: public executive biographies, earnings calls, videos, and high-resolution images provide everything needed to imitate a voice or face. What used to take days of research can now happen in seconds. The result is a rise in hyper-realistic business email scams, payment diversion schemes, and reputational attacks that make it hard to distinguish truth from fabrication. Beyond financial losses, the reputational and personal fallout can be significant. Family members can become collateral damage, private moments can turn into leverage, and the risks to physical safety rise when travel plans and locations become known. As Pierson stresses, digital and physical executive protection are now interconnected. The podcast’s message is clear: high-level threats require specialised defences. BlackCloak’s strategy, which Pierson calls “Digital Executive Protection,” safeguards a small but vital group: board members, the C-suite, executive leaders, and key personnel such as patent holders, system administrators, executive assistants, and chiefs of staff. These individuals hold essential information, and attackers know it. For security leaders, the question is no longer...

    18 min
  2. Why Are AI Agents Forcing CISOs to Rethink Identity Security Architecture?

    FEB 4

    Why Are AI Agents Forcing CISOs to Rethink Identity Security Architecture?

    For decades, identity security relied on the assumption that identities are static, predictable, and mostly human. However, the growing scale and complexity of identities in the modern enterprise, along with the increasing adoption of artificial intelligence, have changed that picture. With AI agents multiplying in enterprises, acting independently, appearing and disappearing, and using credentials, the foundations of identity and access management are being tested in ways many organisations are not ready for. In a recent episode of The Security Strategist podcast, Raz Rotenberg, CEO and Co-Founder of Fabrix Security, sat down with host Richard Stiennon, Chief Research Analyst at IT-Harvest. “Everything we knew about identity is about to change,” Rotenberg cautioned. “We’ve viewed identities as mostly static. But AI agents are dynamic. They can do various tasks, change their behaviour, vanish, and reappear. Static identity models won’t survive.”

    The Unplanned Identity Explosion

    Identity has always been complex, but the scale and variety of identities that security teams face today are unprecedented. Besides employees and contractors, organisations now deal with service accounts, cloud workloads, APIs, and, increasingly, AI-driven agents that function on their own. According to Rotenberg, the challenge isn’t just the number of identities; it’s their variability. “The number of ways identities can behave is infinite,” he explained. “Every organisation is unique, every system is distinct, and identities are now changing in real time.” CISOs already see this explosion. Stiennon noted during the podcast that AI is quickly becoming a major source of new identities, with agents being deployed widely and given credentials to operate at machine speed. Yet most identity programs still depend on static role-based models and periodic reviews, approaches that struggle to keep up with dynamic, non-human agents.

    Multiple Identity Tools Can Lead to Hidden Risks

    Despite a crowded identity security market with hundreds of vendors in IAM, PAM, IGA, and cloud identity, Rotenberg argues that the main issue is not a lack of tools. “We’ve had identity tools for decades,” he said. “They do a good job of facilitating operations aimed at reducing risk. But they all miss the same point – they rely too much on the human factor.” Each tool, he explained, sees only a part of the identity landscape. Identity providers handle authentication, PAM tools manage privileged access, and governance platforms oversee reviews. None provides a unified, real-time view of identity behaviour across systems. The Fabrix CEO calls it “partial truth.” Security teams dealing with identity issues have to manually gather data from various platforms, piece it together, and make decisions with incomplete information. “This leads to long review cycles, manual investigations, and over-provisioning by default,” he said. “Permissions get copied and duplicated because people don’t fully grasp who has access to what or why.” Faced with unclear decisions, organisations tend to grant more permissions rather than fewer. Over time, this creates sprawling identity landscapes filled with excessive privileges and risky combinations. In some cases, an individual might have...

    14 min
  3. From Data to Insight: How Enterprises Are Making IoT Secure and Actionable

    JAN 30

    From Data to Insight: How Enterprises Are Making IoT Secure and Actionable

    Organisations continue to struggle with fragmented device-management data and architectures while facing pressure from business and regulators. As the technology landscape changes, the integration of Internet of Things (IoT) devices with Operational Technology (OT) presents both exciting opportunities and significant security challenges. In a recent episode of The Security Strategist podcast, host Christopher Steffen, alongside Dr Juergen Kraemer, Chief Product Officer of Cumulocity, examines the complexities of securing IoT environments and the importance of resilient analytics and accountability.

    Understanding the IoT-OT Disconnect

    The historical divide between IT and OT persists. As Dr Kraemer highlights, the operational technology sector has traditionally prioritised physical safety and availability over data confidentiality. This disconnect has created a significant gap in security policies, leaving IoT devices vulnerable to exploitation. The conversation emphasises that as organisations connect these previously isolated systems to IT networks, they inadvertently expose themselves to new risks, demanding a reevaluation of security strategies.

    Addressing Security Challenges

    Dr Kraemer points out that securing data access is critical, especially for organisations that deploy IoT devices across multiple sites. For instance, managing security for an elevator company with installations worldwide presents unique challenges. Organisations must navigate various networks and ensure compliance with new legislative requirements, such as the Cyber Resilience Act and the NIS2 directive. These regulations demand a structured approach to security that many legacy OT environments struggle to meet.

    The Importance of Unified Data Management

    As IoT solutions proliferate, organisations often find themselves managing a patchwork of legacy systems and newer platforms. Dr Kraemer advocates a hybrid approach, suggesting businesses create a unified data plane that integrates new and old systems. This strategy allows organisations to maintain operational continuity while gradually transitioning to modern platforms, ultimately leading to greater innovation and efficiency.

    Buy and Build Strategy

    A significant takeaway from the podcast is the concept of “buy and build.” Instead of choosing between purchasing a platform or developing one in-house, organisations should leverage established platforms like Cumulocity while also building innovative applications tailored to their specific needs. This dual approach allows businesses to focus on high-value projects without getting bogged down by the complexities of underlying infrastructure. The dialogue sheds light on the pressing need for organisations to adapt their cybersecurity strategies to the complexities of IoT and OT environments. By understanding the historical disconnect, addressing security challenges, and adopting a buy-and-build approach, enterprises can improve their cybersecurity posture and drive innovation in an increasingly interconnected world. To find out more, visit https://www.cumulocity.com/

    Takeaways

    IoT devices are often treated as secondary in security policies.
    The historical divide between IT and OT creates security challenges.
    Organisations struggle with integrating legacy and modern IoT systems.
    A buy-and-build strategy allows for...

    27 min
  4. Human-Led, AI-Driven: The Next Chapter of Security Operations

    JAN 29

    Human-Led, AI-Driven: The Next Chapter of Security Operations

    Security leaders are rethinking how detection and response work in practice in 2026, owing to the growing complexity of cybersecurity technology and the threat landscape. On this episode of The Security Strategist podcast, host Richard Stiennon, Chief Research Analyst at IT-Harvest, spoke with Daniel Martin, Director of Product Management at Rapid7. They discussed how modern Security Operations Centres (SOCs) are evolving, where AI truly adds value, and why outcomes—not features—should guide cybersecurity teams. A recurring theme in their discussion was that while the threat landscape continues to evolve, many core challenges for SOCs remain unchanged. According to Martin, security teams still struggle with alert fatigue, lack of context, and the pressure to respond quickly—all while juggling increasingly complicated domains. Organisations now require detection and response tailored to their specific environment, not generic threat models. This shift explains the rise of Managed Detection and Response (MDR) and the decline of one-size-fits-all managed security services. Customers want results, not noise, and they seek partners who understand their business context. Martin says this philosophy lies at the heart of Rapid7’s approach to Incident Command, its modern Security Information and Event Management (SIEM) offering. Instead of treating SIEM, Security Orchestration, Automation, and Response (SOAR), and threat intelligence as separate tools, Incident Command integrates them directly into the analyst workflow. The aim is to provide real-time decision support—delivering relevant context, threat intelligence, and recommended actions exactly when needed, without making analysts switch between systems. Martin emphasised that a modern SIEM’s success isn’t measured by the amount of data it can handle, but by how effectively it helps analysts make high-quality decisions quickly. Automation is important, but only if it’s applied thoughtfully. Deterministic automation, meaning actions that are predictable, auditable, and repeatable, remains vital for security operations. AI is most useful when it aids reasoning, summarisation, and prioritisation rather than replacing human judgment outright. “There’s a lot of excitement around autonomous security,” Martin noted, “but chaining unpredictable decisions together is not something customers can trust.” Instead, Rapid7 focuses on using AI to assist analysts at specific moments in an investigation, such as summarising activity, adding context to alerts, or helping decide whether more data collection is needed.

    Also Watch: Is Your Attack Surface a Swiss Cheese? Solving Attack Surface Management (ASM) Challenges

    The “Customer Zero” Approach

    A key aspect of Rapid7’s product development is its “customer zero” approach. By running its own global MDR SOC, Rapid7 continuously incorporates real analyst feedback into product design. Martin shared that an early mistake was putting AI-driven insights in a separate interface to avoid disrupting workflows; this was quickly corrected after analysts indicated they wouldn’t leave their main view to check a secondary opinion. The lesson was clear: if context matters, it must be available...

    15 min
  5. Why Are Vulnerability Backlogs Still Growing Despite Better Detection?

    JAN 28

    Why Are Vulnerability Backlogs Still Growing Despite Better Detection?

    Podcast: The Security Strategist
    Guest: John Amaral, Co-Founder & CTO, Root.io
    Host: Chris Steffen, VP of Research, Enterprise Management Associates (EMA)

    For over a decade, shift-left security has been the leading idea in DevSecOps. The concept was straightforward: move security earlier in the software development process so vulnerabilities could be fixed more quickly and cheaply. However, new benchmark data suggests that the reality is quite different. In the latest episode of The Security Strategist podcast, Chris Steffen sat down with John Amaral, Co-Founder and CTO of Root.io, to discuss why shift-left has stalled and why autonomous remediation and “shift-out” security is the best option moving forward. One striking data point mentioned in the episode comes from Root’s Shift-Out Benchmark Report. It reveals that 82 per cent of organisations say they are confident in their shift-left strategy, yet only four per cent have achieved a zero-CVE backlog. “That four per cent shocked me,” Steffen said during the conversation. “Honestly, it felt high.” Amaral explained that this gap exists because the industry has focused on detection instead of remediation. “We built CVE detection at computer speed,” Amaral noted. “But remediation has never scaled beyond human speed.” Modern pipelines can quickly identify vulnerabilities, open tickets, and generate extensive lists. However, the actual work of fixing those vulnerabilities still falls on engineering teams.

    Detection Scales but Humans Don’t

    Shift-left assumed that developers could fix security issues faster because they work closely with the code. In reality, that assumption falls apart, particularly for third-party and open-source dependencies. The Root.io CTO added that developers are being asked to fix code they didn’t write, don’t own, and don’t understand. “They want to build features, not reverse-engineer open-source libraries.” With over 90 per cent of modern applications built on open-source components, fixing vulnerabilities often depends on upgrades, and that creates a risky trade-off. Upgrading dependencies has long been the go-to remediation strategy. However, recent supply-chain attacks—like the Shai-Hulud-style malware injections—have shown how dangerous blind upgrades can be. “If you compromise a popular repository at the right moment, malware can spread to millions of downstream projects in minutes,” Amaral warned. Organisations now face a difficult choice: upgrade automatically and risk malware spreading, or pin dependencies and leave CVEs that are hard to fix quickly. “Pinning protects you from supply-chain attacks,” Amaral says, “but now you’ve created a CVE backlog you don’t have the resources to clear.”

    What Is “Shift-Out”...

    25 min
  6. What Happens to API Security When AI Agents Go Autonomous?

    JAN 16

    What Happens to API Security When AI Agents Go Autonomous?

    As companies speed up their adoption of AI, an old but increasingly serious problem is resurfacing: lack of visibility. In a recent episode of The Security Strategist podcast, Eric Schwake, Director of Cybersecurity Strategy at Salt Security, joined analyst Richard Stiennon to discuss why APIs, long the backbone of modern applications, have become essential to AI-driven businesses. They dive deep into the critical importance of API visibility and discovery in the context of rising AI integration within enterprises. They discuss the challenges organisations face in securing APIs, the significance of understanding the attack surface, and the role of governance in managing risks. The conversation also covers the emerging Model Context Protocol (MCP) and its implications for API security, as well as the future landscape of cybersecurity as AI systems become more autonomous. Schwake emphasises the need for CISOs to engage proactively with AI projects to ensure security is prioritised. If this layer isn’t secured, the entire organisation is at risk.

    APIs: The Foundation of AI

    APIs have been vital to business structures for years, especially with the growth of microservices. However, Schwake argues that AI has changed the scale of the issue significantly. “We saw a big increase in the number and usage of APIs when microservices became popular,” Schwake explained. “Now, with AI, it’s just 10 times or even 100 times whatever it is for APIs.” While much of the industry talk has centred on large language models (LLMs), Schwake emphasised that the real actions—and risks—occur one layer below. “Everything happening is driven by APIs. The AI agents, the MCP servers, the agents communicating with the LLMs—all of it is API traffic.” In essence, AI may represent innovation, but APIs are the mechanisms that enable it.

    APIs: The “Nervous System” Organisations Overlook

    As companies rush to implement copilots, agents, and automation, security often takes a back seat. Schwake warned that this creates a dangerous blind spot. “You need to ensure that you’re securing that underlying nervous system of this new world—and that relies on APIs.” This lack of attention has produced a surge of unknown, unmanaged, and “shadow” APIs, many of which were never documented or designed with security in mind. Without continuous discovery, security teams might not even know what they are trying to protect. “Visibility is a challenge in security. If you don’t have visibility, you can’t see what you’re protecting—you’re essentially out of luck.”

    Discovery First, Governance Second

    For the Director of Cybersecurity Strategy, API security begins with understanding the attack surface. This principle hasn’t changed in 20 years, but AI has made it more crucial. “With AI, the attack surface on APIs could grow tenfold. If you don’t have a grasp of that attack surface, you won’t be able to protect it.” After identifying APIs, the next step is governance: finding owners, setting rules, and reducing risks before attackers exploit vulnerabilities. “You want to ensure that there isn’t a big open gap inviting attackers.” This becomes even more important as AI tools start writing code and generating APIs, raising both speed and...

    15 min
  7. Why AI Agents Demand a New Approach to Identity Security

    12/23/2025

    Why AI Agents Demand a New Approach to Identity Security

    AI agents are evolving into capable collaborators in cybersecurity, acting as operational players. These agents read sensitive data, trigger workflows, and make decisions at a speed and scale beyond human capability. Matt Fangman, Field CTO at SailPoint, explains on The Security Strategist podcast that this new power has costs: AI agents have become a new, mostly unmanaged identity type, and enterprises are only starting to realise how far behind they are. In a recent episode, Fangman sat down with Alejandro Leal, Senior Analyst at KuppingerCole. They discussed the implications of AI agents for identity security, the rapid evolution of these agents, the challenges of visibility and governance, and the need for operational control in managing them. The conversation highlights the importance of just-in-time permissions, the evolution of identity controls, and strategic moves CISOs can make to manage the risks of agent-based operations.

    AI Agents Are Creating a Brand-New Identity Layer

    Fangman notes a turning point in the last 12 to 18 months, driven by the fast development of large language models (LLMs). These models gave agents the reasoning and autonomy to graduate from sandbox toys to real virtual workers. Organisations can now train agents with goals, equip them with tools, and connect them to one another. Since these agents do not tire, slow down, or forget, companies see a chance to grow their workforce without hiring new people. The issue: they didn’t establish identity controls for these AI workers. “They’ve created a brand-new layer of identities,” Fangman says, “but without the protections, ownership, or visibility that exist for humans.” Shadow agents, sometimes numbering in the thousands, operate unnoticed. Identity teams are unaware of them, security teams can’t monitor them, and cloud teams might spot them briefly in a dashboard and assume they are someone else’s problem. Meanwhile, the agents themselves explore, share tools, and adapt. It’s a governance gap that keeps widening. When Leal asks how the industry should respond, Fangman answers: “Start by treating agents like people. Give them roles. Define what they can access. Apply entitlements. Enforce policy.” Asked what CISOs should do before agents overwhelm their security programs, the SailPoint Field CTO recommends beginning with an inventory. If an organisation does not know what agents exist, what they access, or what they are doing, nothing else matters. Assigning each agent a corporate identity and tracking its behaviour is the essential foundation for everything that follows.

    Takeaways

    AI agents are becoming operational actors in business systems.
    The lack of visibility into agents creates governance risks.
    Just-in-time permissions are essential for managing agents.
    Agents are evolving into peer systems within organisations.
    Identity management is shifting towards relationships and context.
    CISOs need to inventory and track agent behaviour.

    13 min
  8. Is Your Holiday Traffic Human—or AI-Driven and Under Attack?

    12/23/2025

    Is Your Holiday Traffic Human—or AI-Driven and Under Attack?

    As businesses approach the holiday season, security teams feel the pressure of rising online activity. At the same time, AI is quickly changing how attacks are launched and how organisations function day to day. In a recent episode of The Security Strategist podcast, host Richard Stiennon, Chief Research Analyst at IT-Harvest, sits down with Pascal Geenens, VP of Threat Intelligence at Radware, to discuss why CISOs need to rethink their long-held beliefs about attackers, users, and what “web traffic” really means in an AI-driven world. They talk about the dual nature of AI in cybercrime, the emergence of new tools that facilitate attacks, and the importance of automated pen testing as a defence strategy. The conversation also highlights vulnerabilities associated with AI assistants, such as indirect prompt injection, and emphasises the need for organisations to adopt best practices to safeguard against these threats.

    Also Watch: From Prompt Injection to Agentic AI: The New Frontier of Cyber Threats

    AI Attacks Lower the Barrier for Cybercrime

    Geenens tells Stiennon that AI’s biggest effect on security is not a new type of futuristic attack but rather scale and accessibility. Tools like WormGPT, FraudGPT, and advanced platforms like Xanthorox AI provide reconnaissance, exploit development, data analysis, and phishing as subscription-based services. For a few hundred dollars a month, attackers can access AI-assisted tools that cover the entire cyber kill chain. This “vibe hacking” model resembles vibe coding: attackers describe their goals in natural language, and the AI generates scripts, reconnaissance workflows, or data extraction logic. While these tools have not fully automated attacks from start to finish, they significantly lower the skills needed to engage in cybercrime. As Geenens explains, attackers can now target hundreds or thousands of organisations simultaneously, a task that once required large teams. Attackers can also afford to fail repeatedly as part of their learning process, while defenders cannot. Even flawed AI-generated exploits speed up scanning, vulnerability detection, and phishing to levels that security teams find hard to handle. The result is a threat landscape that uses familiar techniques but operates with far greater speed and intensity.

    Also Watch: How Do You Stop an Encrypted DDoS Attack? How to Overcome HTTPS Challenges

    AI Assistants & Browsers Create Invisible Data Leak Risks

    The second, and more alarming, change the VP of Threat Intelligence emphasises occurs within companies themselves. As organisations adopt AI assistants and AI-powered browsers, they delegate authority along with convenience. These tools require access to emails, documents, and business systems to be effective, and that access creates new risks. Indirect prompt injection, shadow leaks, and echo leaks turn normal workflows into potential attack vectors. For instance, an AI assistant summarising emails may unintentionally process hidden commands embedded in a message. These commands can lead the model to leak sensitive information without the user clicking any links or noticing anything unusual. In some cases, the data never even leaves the endpoint; it exits directly from the AI provider’s cloud infrastructure, completely bypassing established data loss prevention and network monitoring. Meanwhile, Geenens points to a fundamental shift in traffic...

    24 min

About

Stay ahead of cyberthreats with expert insights and practical security guidance. Led by an ensemble cast of industry thought leaders offering in-depth analysis and practical advice to fortify your organization's defenses.