Daily Cyber & AI Briefing with Michael Housch. This episode was published automatically and includes the assembled audio plus full transcript.

Transcript

Welcome to today’s briefing on the evolving landscape of cyber and AI risk. If you’re a security leader, a risk executive, or simply someone who wants to stay ahead of the curve, this episode will help you navigate the most pressing issues facing organizations right now. Let’s dive in.

We’re living in a time where the adoption of artificial intelligence across enterprises is accelerating at a pace that’s frankly outstripping the maturity of our security controls and governance frameworks. This isn’t just a matter of playing catch-up; it’s about recognizing that the scale and subtlety of risk are changing, and the old playbooks aren’t enough. AI agents and AI-assisted development are multiplying both the opportunities for human error and the challenges of oversight. Meanwhile, the threat environment remains as active as ever, with state-sponsored actors exploiting vulnerabilities in critical infrastructure and attackers leveraging increasingly sophisticated social engineering and malware delivery techniques.

Let’s start with a look at some of the most important developments shaping the risk landscape today. First up, a significant alert regarding industrial control systems: more than 5,200 Rockwell programmable logic controllers, or PLCs, have been found exposed to the internet. These devices are the backbone of manufacturing and infrastructure operations, so their exposure is not a hypothetical risk; it’s an open invitation for remote exploitation, sabotage, or ransomware attacks. Iranian advanced persistent threat actors have already been observed targeting these systems. For risk leaders, this is a wake-up call: asset discovery, network segmentation, and continuous monitoring of operational technology environments are no longer optional; they’re essential. The potential for catastrophic disruption is real, and it’s immediate.
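To make the asset-discovery recommendation concrete, here is a minimal sketch of checking whether hosts on a network you own answer on TCP port 44818, the EtherNet/IP port commonly used by Rockwell PLCs. It is an illustration only, not a substitute for a proper OT discovery tool: a successful TCP connect is treated as "potentially exposed" (a simplification), the host list and function names are hypothetical, and you should only probe networks you are authorized to scan.

```python
import socket

# EtherNet/IP, the protocol family used by Rockwell/Allen-Bradley PLCs,
# conventionally listens on TCP 44818.
ENIP_PORT = 44818

def is_port_open(host: str, port: int = ENIP_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

def find_potentially_exposed(hosts: list[str]) -> list[str]:
    """Flag hosts answering on the EtherNet/IP port for manual review."""
    return [h for h in hosts if is_port_open(h)]
```

In practice you would feed this from your asset inventory rather than a hand-written list, and follow any hit with segmentation and firewall review; internet-wide exposure is what search engines like Shodan surface, which is how findings like the one above are typically made.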
Now, let’s talk about AI agents operating within enterprises. There’s a growing trend of deploying AI agents without adequate oversight from security teams. In many organizations, there’s little to no visibility into what these agents are doing, what data they’re accessing, or how they’re interacting with other systems. This creates significant blind spots for data leakage, privilege escalation, and compliance violations. The practical implication is clear: CISOs must move quickly to implement AI asset inventories, enforce policy controls, and develop monitoring capabilities tailored to both autonomous and semi-autonomous agents. If you don’t know what your AI is doing, you can’t secure it.

Closely related to this is the rapid adoption of AI-assisted development tools. These tools are designed to accelerate software development, but they’re also amplifying the risk of human error. Faster code generation without sufficient guardrails can lead to the propagation of insecure code, misconfigurations, and vulnerabilities, often at scale. Security and risk leaders need to prioritize secure development lifecycle practices, automated code review, and AI-specific governance. The goal is not to slow down innovation, but to ensure that speed doesn’t come at the expense of security.

Let’s shift gears to the threat landscape in the Middle East, where we’re seeing a sophisticated espionage campaign leveraging fake secure messaging applications to deliver ProSpy malware. This attack vector combines social engineering with advanced malware delivery, targeting sensitive communications and data exfiltration. For organizations with operations or partners in high-risk regions, this underscores the importance of user awareness, rigorous application vetting, and robust endpoint detection capabilities. The lesson here is that even trusted communication channels can be weaponized, and vigilance is critical.

In Taiwan, attackers