Artificial intelligence didn’t suddenly arrive in the power system. It arrived quietly—through decades of automation, control systems, and institutional delegation. In this first episode of a four-part series, host Michael Vincent sits down with Brandon N. Owens, founder of AIxEnergy and author of The Cognitive Grid, to trace a deeper and more unsettling story than the usual AI narrative. This is not a conversation about futuristic intelligence replacing humans. It is a conversation about how judgment itself moved into infrastructure—long before anyone used the language of AI.

The episode begins with a simple premise: modern power systems already act faster than human judgment can intervene. Long before machine learning entered the conversation, the grid evolved through layers of sensing, telemetry, supervisory control, and automated coordination. Each layer improved reliability. Each layer also quietly reshaped where decisions actually happen. As Owens explains, the most consequential shift was not automation replacing operators, but automation curating the decision space—determining which signals mattered, which deviations demanded attention, and how long human intervention could safely be deferred. Operators remained present, but authority began to migrate. Judgment did not disappear. It was reorganized.

The conversation moves through the historical inflection points that made this migration visible only in hindsight: the rise of supervisory control and data acquisition, the emergence of automatic generation control, and the major North American blackouts of 1965, 1977, 1996, and 2003. These failures are treated not as technical anomalies, but as governance stress tests—moments when institutions were forced to reconstruct decisions that had already been embedded in machinery.

A central theme emerges: governance almost always trails capability. Systems become indispensable because they work. Because they work, they become harder to inspect in real time. When failure finally occurs, legitimacy is tested after the fact—when responsibility is already diffuse and authority difficult to locate.

This episode argues that the real risk of AI in critical infrastructure is not runaway intelligence or loss of human control in the cinematic sense. The risk is quieter and more structural: authority migrating ahead of governance, judgment becoming opaque, and institutions encountering consequences before they have made permission explicit.

By grounding the discussion in the history of the electric grid—one of the most mature and consequential infrastructures in modern society—this episode makes a broader claim: if we cannot make machine-mediated judgment legible, bounded, and accountable here, we will struggle to do so anywhere.

This is not a warning about the future. It is an explanation of what already happened—and why it matters now.

In Episode 2, the series moves into the era that promised intelligence and often delivered instrumentation: the Smart Grid, and how that gap created conditions for AI to enter as the next layer of mediation.