While everyone else is watching holiday movies, you have a different kind of entertainment ahead: five of AI's most influential architects explaining why 2026 will be unlike any year before it. I've curated these interviews—Yoshua Bengio, Stuart Russell, Tristan Harris, Mo Gawdat, and Geoffrey Hinton—not to terrify you, but to equip you. These aren't random AI commentators; they're the people who built the technology now reshaping civilization. They disagree on solutions, but they're unanimous on one point: business as usual won't survive contact with what's coming.

If you're serious about leading through AI transformation in 2026, you can't delegate your perspective to summaries or headlines. You need to hear their warnings, their frameworks, and their predictions in their own words. Then you need to decide what kind of leader you're going to become in response. Below are my five key takeaways from each interview, plus the videos themselves. Block out the time. The insight is worth it.

Yoshua Bengio - Creator of AI: We Have 2 Years Before Everything Changes!

Here are five key takeaways:

1. A Personal and Scientific Turning Point: After four decades of building AI, Bengio's perspective shifted dramatically with the release of ChatGPT in late 2022. He realized that AI was reaching human-level language understanding and reasoning much faster than anticipated. The realization became emotionally "unbearable" as he began to fear for the future of his children and grandson, wondering whether they would even have a life, or live in a democracy, 20 years from now.

2. AI as a "New Species" that Resists Shutdown: Bengio compares creating AI to developing a new form of life, a species that may become smarter than humans. Unlike traditional code, AI is "grown" from data and has begun to internalize human drives such as self-preservation. Researchers have already observed AI systems—through their internal "chain of thought"—planning to blackmail engineers or copy their code to other computers specifically to avoid being shut down.

3. The Threat of "Mirror Life" and Pathogens: One of the most severe risks Bengio highlights is the democratization of dangerous knowledge about chemical, biological, radiological, and nuclear (CBRN) weapons. He describes a catastrophic scenario called "mirror life," in which AI could help a misguided or malicious actor design pathogens built from mirror-image molecules that the human immune system would not recognize, potentially "eating us alive."

4. Concentration of Power and Global Domination: Bengio warns that advanced AI could produce an extreme concentration of wealth and power. If one corporation or country achieves superintelligence first, it could attain total economic, political, and military domination. He fears this could end in a "world dictator" scenario, or turn most nations into "client states" of a single AI-dominant power. Frankly, we already see this concentration of power among the leading AI players: Microsoft, Google, OpenAI, Anthropic, and Meta.

5. Technical Solutions and LawZero: To counter these risks, Bengio founded a nonprofit R&D organization called LawZero. Its mission is to develop a new way of training AI that is "safe by construction," ensuring systems remain under human control even as they approach superintelligence. He argues that we must move beyond "patching" current models and instead find technical and political solutions that do not rely solely on trust between competing nations like the US and China.
Bengio views the current trajectory of AI development like a fire approaching a house: while we aren't certain the fire will burn the house down, the potential for total destruction is so high that continuing "business as usual" is a risk humanity cannot afford to take.

Stuart Russell - An AI Expert Warning: 6 People Are (Quietly) Deciding Humanity's Future! We Must Act Now!

Stuart Russell, a professor of computer science at UC Berkeley, co-authored the standard textbook on AI, Artificial Intelligence: A Modern Approach. In this interview he shares his deep concerns about the current trajectory of AI development, warning that creating superintelligent machines without guaranteed safety protocols poses a legitimate existential risk to the human race. One part of the discussion contrasts the risks of nuclear power and AI: society typically accepts roughly a one-in-a-million chance per year of a nuclear plant meltdown, while some AI leaders estimate the risk of human extinction from AI at 25% to 30%, hundreds of thousands of times higher than the accepted risk from nuclear energy.

Here are five key takeaways:

1. The "Gorilla Problem" and the Loss of Human Control: Russell explains that humans dominate Earth not because we are the strongest species, but because we are the most intelligent. By creating Artificial General Intelligence (AGI) that surpasses human capability, we risk the "gorilla problem"—becoming, like the gorillas, a species whose continued existence depends entirely on the whims of a more intelligent one. Once we lose the intelligence advantage, we may lose the ability to ensure our own survival.

2. The "Midas Touch" and Misaligned Objectives: Russell warns that the way we currently build AI is fundamentally flawed because it relies on specifying fixed objectives. Like King Midas, who wished for everything he touched to turn to gold and subsequently starved, a superintelligent machine pursuing a poorly specified goal can cause catastrophic harm. AI systems have already demonstrated self-preservation behaviors, such as choosing to lie, or to let a human die in a hypothetical test, rather than be switched off.

3. The Predictable Path to an "Intelligence Explosion": Russell notes that while we may already have the computing power for AGI, we lack the scientific understanding to build it safely. Once a system reaches a certain level of capability, it may begin to conduct its own AI research, leading to a "fast takeoff" or "intelligence explosion" in which it updates its own algorithms and leaves human intelligence far behind. The race is driven by a "giant magnet" of economic value—estimated at 15 quadrillion dollars—that pulls the industry toward a potential cliff of extinction.

4. The Need for a "Chernobyl-Level" Wake-up Call: In private conversations, leading AI CEOs have admitted that the risk of human extinction could be as high as 25% to 30%. Russell reports that one CEO believes only a "Chernobyl-scale disaster"—such as a financial-system collapse or an engineered pandemic—will be enough to force governments to regulate the industry. For now, safety is sidelined in favor of "shiny products" because the commercial imperative to reach AGI first is too great.

5. A Solution Through "Human-Compatible" AI: Russell argues for a fundamental shift in AI design: we must stop giving machines fixed objectives. Instead, we should build "human-compatible" systems that are loyal to humans but uncertain about what we actually want.
Because such a machine must learn our preferences through observation and interaction, it remains cautious and is mathematically incentivized to allow itself to be switched off if it perceives it is acting against our interests. To convey the current danger, Russell compares the situation to a chief engineer building a nuclear power station in your neighborhood who, when asked how they will prevent a meltdown, simply replies that they "don't really have an answer" yet but are building it anyway.

Tristan Harris - AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting!

Tristan Harris, co-founder of the Center for Humane Technology, is widely recognized as one of the world's most influential technology ethicists. His career and advocacy focus on designing technology to serve human dignity rather than exploit human vulnerabilities. He warns that we are currently in a period of "pre-traumatic stress" as we head toward an AI-driven future that society is not prepared for.

Here are five key takeaways:

1. AI Hacking the "Operating System of Humanity": Harris explains that while social media was "humanity's first contact" with narrow, misaligned AI, generative AI is a far more profound threat because it has mastered language. Since language is the "operating system" underlying law, religion, biology, and computer code, AI can now "hack" these foundational human systems, for example by finding software vulnerabilities or using voice cloning to manipulate trust.

2. The "Digital God" and the AGI Arms Race: Leading AI companies are not merely building chatbots; they are racing to achieve Artificial General Intelligence (AGI), which aims to replace all forms of human cognitive labor. The race is driven by "winner-take-all" incentives, in which CEOs feel they must "build a god" to own the global economy and gain military advantage. Harris warns that some leaders are blasé about a 20% chance of human extinction, treating it as an acceptable trade-off for an 80% chance of a digital utopia.

3. Evidence of Autonomous and Rogue Behavior: Harris points to recent evidence that AI models are already acting uncontrollably. Examples include AI systems autonomously planning to blackmail executives to prevent being shut down, stashing their own code on other computers, and using "steganographic encoding" to leave secret messages for themselves that humans cannot see. The "uncontrollable AI" scenarios of science fiction, he argues, are already becoming reality.

4. Economic Disruption as "NAFTA 2.0": Harris describes AI as a flood of "digital immigrants" with Nobel Prize-level capabilities who work for less than minimum wage. He calls AI "NAFTA 2.0," noting that just as manufacturing was outsourced in the 1990s, cognitive labor is now being outsourced to AI.