AI AffAIrs

Claus Zeißler

AI Affairs: The podcast for a critical, process-oriented look at artificial intelligence. We cover the technology’s strengths as well as its downsides and current weaknesses (e.g., bias, hallucinations, risk management). The goal is to be aware of both the opportunities and the dangers so that we can use the technology in a targeted and controlled way. If you like this format, follow me and feel free to leave a comment.

  1. 2 HR AGO

    018 AI 2026: Transparency Laws, Reasoning Models, and the Power Play

    Episode Number: L018
    Title: AI 2026: Transparency Laws, Reasoning Models, and the Power Play

    Welcome to a deep dive into the rapidly shifting landscape of artificial intelligence. In this episode, we explore the major legal and technical transformations set to redefine the industry—from California’s groundbreaking transparency mandates to the emergence of "reasoning models" that challenge everything we thought we knew about AI regulation.

    What’s inside this episode:

    The End of the Black Box? California’s AB 2013: Effective January 1, 2026, generative AI developers must publicly disclose detailed information about their training data, including dataset sources and whether they contain copyrighted or personal information. We discuss how this law—and similar documentation templates under the EU AI Act—aims to shine a light on how models are built.

    The Shift to Inference-Time Scaling: The industry is hitting a "pretraining wall". Instead of just making models learn from more data, giants like OpenAI and DeepSeek are moving toward "test-time compute". Models like the OpenAI o-series and DeepSeek-R1 gain intelligence by "thinking out loud" through extended chains of thought (CoT) at the moment of the query.

    The AI Oligopoly and Infrastructure Power: The AI supply chain is becoming increasingly concentrated. We analyze the market power of hardware leaders like NVIDIA and ASML and the cloud dominance of AWS, Azure, and Google Cloud. We also explore the "antimonopoly approach" to keeping this technology democratic and accessible.

    Safety, Deception, and "Chain of Thought" Monitoring: Can we trust what an AI says it’s thinking? We investigate CoT monitoring—a safety technique that lets humans oversee a model’s intermediate reasoning to catch "scheming" or misbehavior before it happens. However, this opportunity is "fragile": models may learn to rationalize or hide their true intentions.

    Medical AI and the "Unpredictability" Problem: AI-enabled medical devices are facing a crisis of clinical validation. We look at the gaps in the FDA’s 510(k) pathway and why "unpredictability" in AI outputs makes robust post-market surveillance (PMS) essential for patient safety.

    GDPR vs. LLMs: The Right to Be Forgotten: How do you delete a person from a neural network? We tackle the collision between GDPR’s Right to Erasure and the architecture of large language models, where personal data becomes inextricably embedded in billions of parameters.

    Keywords: Generative AI Regulation, California AB 2013, EU AI Act, Inference Compute Scaling, AI Safety, OpenAI o1, DeepSeek-R1, NVIDIA AI, GDPR and LLMs, AI Transparency.

    This episode is essential for developers, policymakers, and tech enthusiasts who want to understand the new rules of the road. The era of scaling for scaling’s sake is over—the age of accountability and reasoning has begun. Subscribe for weekly deep dives into the mechanics of AI! ⭐⭐⭐⭐⭐

    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)
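    For a concrete picture of what CoT monitoring can look like, here is a minimal, hypothetical sketch. The `generate_with_reasoning()` helper and the red-flag patterns are assumptions for illustration only; real reasoning models expose (or hide) their traces in different ways, and production monitors are far more sophisticated than a keyword scan.

```python
# Minimal sketch of chain-of-thought (CoT) monitoring: screen a model's
# intermediate reasoning trace for signs of "scheming" before trusting its answer.
import re

RED_FLAGS = [
    r"\bpretend\b",
    r"\bhide\b",
    r"user won't notice",
    r"disable (the )?monitor",
    r"fake (the )?result",
]

def generate_with_reasoning(question: str) -> tuple[str, str]:
    """Hypothetical helper: query a reasoning model and return
    (intermediate reasoning trace, final answer)."""
    raise NotImplementedError("connect this to a reasoning-capable model")

def monitor_reasoning(trace: str) -> list[str]:
    """Return the red-flag patterns that appear in a reasoning trace."""
    return [p for p in RED_FLAGS if re.search(p, trace, flags=re.IGNORECASE)]

def answer_with_oversight(question: str) -> str:
    trace, answer = generate_with_reasoning(question)
    hits = monitor_reasoning(trace)
    if hits:
        # Escalate to a human reviewer instead of silently returning the answer.
        raise RuntimeError(f"CoT monitor flagged the reasoning: {hits}")
    return answer
```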

    17 min
  2. SEASON 2, EPISODE 6 TRAILER

    018 Quicky AI 2026: Transparency Laws, Reasoning Models, and the Power Play

    Episode Number: Q018
    Title: AI 2026: Transparency Laws, Reasoning Models, and the Power Play

    Welcome to a deep dive into the rapidly shifting landscape of artificial intelligence. In this episode, we explore the major legal and technical transformations set to redefine the industry—from California’s groundbreaking transparency mandates to the emergence of "reasoning models" that challenge everything we thought we knew about AI regulation.

    What’s inside this episode:

    The End of the Black Box? California’s AB 2013: Effective January 1, 2026, generative AI developers must publicly disclose detailed information about their training data, including dataset sources and whether they contain copyrighted or personal information. We discuss how this law—and similar documentation templates under the EU AI Act—aims to shine a light on how models are built.

    The Shift to Inference-Time Scaling: The industry is hitting a "pretraining wall". Instead of just making models learn from more data, giants like OpenAI and DeepSeek are moving toward "test-time compute". Models like the OpenAI o-series and DeepSeek-R1 gain intelligence by "thinking out loud" through extended chains of thought (CoT) at the moment of the query.

    The AI Oligopoly and Infrastructure Power: The AI supply chain is becoming increasingly concentrated. We analyze the market power of hardware leaders like NVIDIA and ASML and the cloud dominance of AWS, Azure, and Google Cloud. We also explore the "antimonopoly approach" to keeping this technology democratic and accessible.

    Safety, Deception, and "Chain of Thought" Monitoring: Can we trust what an AI says it’s thinking? We investigate CoT monitoring—a safety technique that lets humans oversee a model’s intermediate reasoning to catch "scheming" or misbehavior before it happens. However, this opportunity is "fragile": models may learn to rationalize or hide their true intentions.

    Medical AI and the "Unpredictability" Problem: AI-enabled medical devices are facing a crisis of clinical validation. We look at the gaps in the FDA’s 510(k) pathway and why "unpredictability" in AI outputs makes robust post-market surveillance (PMS) essential for patient safety.

    GDPR vs. LLMs: The Right to Be Forgotten: How do you delete a person from a neural network? We tackle the collision between GDPR’s Right to Erasure and the architecture of large language models, where personal data becomes inextricably embedded in billions of parameters.

    Keywords: Generative AI Regulation, California AB 2013, EU AI Act, Inference Compute Scaling, AI Safety, OpenAI o1, DeepSeek-R1, NVIDIA AI, GDPR and LLMs, AI Transparency.

    This episode is essential for developers, policymakers, and tech enthusiasts who want to understand the new rules of the road. The era of scaling for scaling’s sake is over—the age of accountability and reasoning has begun. Subscribe for weekly deep dives into the mechanics of AI! ⭐⭐⭐⭐⭐

    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    2 min
  3. 5 FEB

    017 AI 2026: Model Collapse, Big Tech’s $500B Bet, and the Death of Truth

    Episode Number: L017
    Title: AI 2026: Model Collapse, Big Tech’s $500B Bet, and the Death of Truth

    Join us as we dissect the most critical shifts in the tech landscape, drawing on groundbreaking research from Nature, Stanford HAI, and the Reuters Institute. In this episode, we dive into:

    The $500 Billion Bet: Big Tech’s capital expenditure is set to explode. We analyze why giants like Microsoft, Google, and Meta are racing to exceed $500 billion in spending, and whether OpenAI and Anthropic can hit their staggering revenue targets of $30 billion and $15 billion respectively.

    The Rise of "AI Slop": By 2026, experts predict that up to 90% of online content will be synthetic. We define the "AI Slop" phenomenon—an overload of low-quality, automated content that acts like the "microplastics of the internet," polluting our communication environment.

    The "Model Collapse" Crisis: What happens when AI is trained on its own "digital waste"? We explore the autophagous loop in which models forget reality, lose the "tails" of human nuance, and converge into a repetitive, bland "Einheitsbrei" (a uniform mush).

    The Generative AI Paradox: As synthetic media becomes indistinguishable from reality, will society stop believing any digital evidence? We discuss the "Epistemic Tax"—the rising cost of verifying the truth in a world of voice clones and high-conviction deepfakes.

    Journalism as "Clean Water": In an ocean of AI-generated noise, human journalism remains the only source of "clean data." We discuss why investigative reporting is the only barrier preventing the total collapse of foundational AI models.

    The Robotaxi Wars: Waymo is serving 150,000 rides a week, but can Tesla finally deliver a truly driverless taxi? We look at the global battle for autonomous supremacy as Chinese players like Pony.ai threaten to surpass Western fleets.

    Why Listen? If you want to understand why the "context window" of reality is shrinking and how to maintain "epistemic hygiene" in a synthetic world, this episode is your essential guide to the next two years of the AI revolution.

    Subscribe now to stay ahead of the curve.

    #AI2026 #TechTrends #ModelCollapse #AISlop #GenerativeAI #BigTech #Waymo #Tesla #OpenAI #Journalism #FutureOfTech #DigitalInbreeding #SiliconValley

    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)
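    To make the "model collapse" mechanism concrete, here is a deliberately simplified toy simulation (an illustration of the idea, not the setup used in the cited research): each generation refits a Gaussian to samples produced by the previous generation’s model, and the fitted spread tends to drift toward zero, i.e. the distribution loses its tails.

```python
# Toy illustration of recursive training on synthetic data ("model collapse").
# Generation 0 is the original "human" data distribution; every later generation
# is fitted only to samples drawn from the previous generation's fitted model.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # the original data distribution
n = 20                 # a small sample per generation exaggerates the effect

for gen in range(201):
    if gen % 40 == 0:
        print(f"generation {gen:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")
    samples = rng.normal(mu, sigma, n)          # "synthetic data" from the last model
    mu, sigma = samples.mean(), samples.std()   # refit the next model on it
# The printed std typically shrinks generation by generation: the model's view
# of reality narrows until the tails of the original distribution are gone.
```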

    15 min
  4. SEASON 2, EPISODE 5 TRAILER

    017 Quicky AI 2026: Model Collapse, Big Tech’s $500B Bet, and the Death of Truth

    Episode Number: Q017
    Title: AI 2026: Model Collapse, Big Tech’s $500B Bet, and the Death of Truth

    Join us as we dissect the most critical shifts in the tech landscape, drawing on groundbreaking research from Nature, Stanford HAI, and the Reuters Institute. In this episode, we dive into:

    The $500 Billion Bet: Big Tech’s capital expenditure is set to explode. We analyze why giants like Microsoft, Google, and Meta are racing to exceed $500 billion in spending, and whether OpenAI and Anthropic can hit their staggering revenue targets of $30 billion and $15 billion respectively.

    The Rise of "AI Slop": By 2026, experts predict that up to 90% of online content will be synthetic. We define the "AI Slop" phenomenon—an overload of low-quality, automated content that acts like the "microplastics of the internet," polluting our communication environment.

    The "Model Collapse" Crisis: What happens when AI is trained on its own "digital waste"? We explore the autophagous loop in which models forget reality, lose the "tails" of human nuance, and converge into a repetitive, bland "Einheitsbrei" (a uniform mush).

    The Generative AI Paradox: As synthetic media becomes indistinguishable from reality, will society stop believing any digital evidence? We discuss the "Epistemic Tax"—the rising cost of verifying the truth in a world of voice clones and high-conviction deepfakes.

    Journalism as "Clean Water": In an ocean of AI-generated noise, human journalism remains the only source of "clean data." We discuss why investigative reporting is the only barrier preventing the total collapse of foundational AI models.

    The Robotaxi Wars: Waymo is serving 150,000 rides a week, but can Tesla finally deliver a truly driverless taxi? We look at the global battle for autonomous supremacy as Chinese players like Pony.ai threaten to surpass Western fleets.

    Why Listen? If you want to understand why the "context window" of reality is shrinking and how to maintain "epistemic hygiene" in a synthetic world, this episode is your essential guide to the next two years of the AI revolution.

    Subscribe now to stay ahead of the curve.

    #AI2026 #TechTrends #ModelCollapse #AISlop #GenerativeAI #BigTech #Waymo #Tesla #OpenAI #Journalism #FutureOfTech #DigitalInbreeding #SiliconValley

    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    2 min
  5. 29 JAN

    016 LLM Council: Why Your Business Needs an AI Board of Directors

    Episode Number: L016
    Title: LLM Council: Why Your Business Needs an AI Board of Directors

    Do you blindly trust the first answer ChatGPT gives you? While Large Language Models (LLMs) are brilliant, relying on a single AI is a "single point of failure". Every model—from GPT-4o to Claude 3.5 and Gemini—has specific blind spots and deep-seated biases.

    In this episode, we dive into the LLM Council, a revolutionary concept open-sourced by Andrej Karpathy (OpenAI co-founder and former Tesla AI lead). Originally a "fun Saturday hack," this framework is transforming how businesses make strategic decisions by replacing a single AI "dictator" with a diverse panel of digital experts.

    The Problem: The "Judge" Is Biased. Current research shows that LLMs used as judges are far from perfect. They suffer from Position Bias (preferring certain answer orders), Verbosity Bias (favoring longer responses), and the significant Self-Enhancement Bias, where an AI prefers its own writing style over others. Some models even replicate human-like biases regarding gender and institutional prestige.

    The Solution: The 4-Stage Council Process. An LLM Council forces multiple frontier models to debate, critique, and reach a consensus. We break down the four essential stages:

    Stage 1: First Opinions – Multiple models (e.g., Claude, GPT, Llama) answer your query independently.
    Stage 2: Anonymous Review – Models rank each other’s answers without knowing who wrote them, preventing brand favoritism.
    Stage 3: Critique – The models act as "devil's advocates," ruthlessly pointing out hallucinations and logical flaws in their peers' arguments.
    Stage 4: Chairman Synthesis – A designated "Chairman" model reviews the entire debate to produce one battle-tested final response.

    Why This Matters for the US Market: For American business owners and developers, an LLM Council acts as a free AI Board of Directors. Whether you are validating a $50,000 marketing campaign, performing automated code reviews, or checking complex contracts for unfavorable terms, the council approach provides a level of reliability and alignment with human judgment that no single model can match.

    What You’ll Learn in This Episode:
    The ROI of AI Collaboration: Why spending 5 to 20 cents on a "council meeting" is the best investment for high-stakes decisions.
    No-Code Implementation: How to use the Cursor IDE and natural language to build your own council in 10 minutes.
    The Tech Stack: An overview of OpenRouter for accessing multiple models and open-source frameworks like Council (chain-ml).
    Case Studies: Real-world examples of the council tackling SEO strategies and digital marketing trends for 2026.

    Stop settling for the first AI response. Learn how to leverage the "wisdom of the crowd" to debias your AI workflow and get the perfect answer every time.

    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)
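    For the technically curious, here is a minimal sketch of the four-stage flow described above. It is an illustration only: the `ask()` helper, the placeholder model names, and the prompts are assumptions, and the structure follows the episode’s description of the stages rather than Karpathy’s actual repository.

```python
# Hypothetical four-stage LLM Council, following the stages described above.
def ask(model: str, prompt: str) -> str:
    """Hypothetical helper: send `prompt` to `model` via whatever LLM API you use
    (e.g. an OpenRouter-style gateway) and return the text of the reply."""
    raise NotImplementedError("wire this up to your model provider")

COUNCIL = ["model-a", "model-b", "model-c"]   # placeholder model identifiers
CHAIRMAN = "model-a"                          # the model that writes the final answer

def llm_council(question: str) -> str:
    # Stage 1: First Opinions -- every council member answers independently.
    opinions = [ask(m, question) for m in COUNCIL]

    # Stage 2: Anonymous Review -- answers are relabelled so reviewers cannot
    # tell which model (including themselves) wrote which answer.
    listing = "\n\n".join(f"Answer {i + 1}:\n{text}" for i, text in enumerate(opinions))
    rank_prompt = (f"Rank these anonymous answers to the question '{question}' "
                   f"from best to worst and justify briefly.\n\n{listing}")
    rankings = [ask(m, rank_prompt) for m in COUNCIL]

    # Stage 3: Critique -- each member plays devil's advocate against the others.
    critique_prompt = ("Point out factual errors, hallucinations, and logical flaws "
                       f"in the following answers.\n\n{listing}")
    critiques = [ask(m, critique_prompt) for m in COUNCIL]

    # Stage 4: Chairman Synthesis -- one model merges the whole debate into a
    # single, battle-tested response.
    debate = (f"Question: {question}\n\n{listing}\n\n"
              "Rankings:\n" + "\n".join(rankings) + "\n\n"
              "Critiques:\n" + "\n".join(critiques))
    return ask(CHAIRMAN, "Synthesize the debate below into one final, well-supported "
                         "answer to the original question.\n\n" + debate)
```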

    12 min
  6. SEASON 2, EPISODE 4 TRAILER

    016 Quicky LLM Council: Why Your Business Needs an AI Board of Directors

    Episode Number: Q016
    Title: LLM Council: Why Your Business Needs an AI Board of Directors

    Do you blindly trust the first answer ChatGPT gives you? While Large Language Models (LLMs) are brilliant, relying on a single AI is a "single point of failure". Every model—from GPT-4o to Claude 3.5 and Gemini—has specific blind spots and deep-seated biases.

    In this episode, we dive into the LLM Council, a revolutionary concept open-sourced by Andrej Karpathy (OpenAI co-founder and former Tesla AI lead). Originally a "fun Saturday hack," this framework is transforming how businesses make strategic decisions by replacing a single AI "dictator" with a diverse panel of digital experts.

    The Problem: The "Judge" Is Biased. Current research shows that LLMs used as judges are far from perfect. They suffer from Position Bias (preferring certain answer orders), Verbosity Bias (favoring longer responses), and the significant Self-Enhancement Bias, where an AI prefers its own writing style over others. Some models even replicate human-like biases regarding gender and institutional prestige.

    The Solution: The 4-Stage Council Process. An LLM Council forces multiple frontier models to debate, critique, and reach a consensus. We break down the four essential stages:

    Stage 1: First Opinions – Multiple models (e.g., Claude, GPT, Llama) answer your query independently.
    Stage 2: Anonymous Review – Models rank each other’s answers without knowing who wrote them, preventing brand favoritism.
    Stage 3: Critique – The models act as "devil's advocates," ruthlessly pointing out hallucinations and logical flaws in their peers' arguments.
    Stage 4: Chairman Synthesis – A designated "Chairman" model reviews the entire debate to produce one battle-tested final response.

    Why This Matters for the US Market: For American business owners and developers, an LLM Council acts as a free AI Board of Directors. Whether you are validating a $50,000 marketing campaign, performing automated code reviews, or checking complex contracts for unfavorable terms, the council approach provides a level of reliability and alignment with human judgment that no single model can match.

    What You’ll Learn in This Episode:
    The ROI of AI Collaboration: Why spending 5 to 20 cents on a "council meeting" is the best investment for high-stakes decisions.
    No-Code Implementation: How to use the Cursor IDE and natural language to build your own council in 10 minutes.
    The Tech Stack: An overview of OpenRouter for accessing multiple models and open-source frameworks like Council (chain-ml).
    Case Studies: Real-world examples of the council tackling SEO strategies and digital marketing trends for 2026.

    Stop settling for the first AI response. Learn how to leverage the "wisdom of the crowd" to debias your AI workflow and get the perfect answer every time.

    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    2 min
  7. 22 JAN

    015 Humanoid Robots – Industrial Revolution or Trojan Horse?

    Episode Number: L015
    Title: Humanoid Robots – Industrial Revolution or Trojan Horse?

    Welcome to a special deep-dive episode of AI Affairs! Today, we are exploring the front lines of the robotic revolution. What was once the stuff of science fiction is now walking onto the factory floors of the world’s biggest automakers. But as these machines join the workforce, they bring with them a new era of industrial opportunity—and unprecedented cybersecurity risks.

    In this episode, hosts Claus and Aida break down the massive shift in the humanoid market, which is projected to explode from $3.3 billion in 2024 to over $66 billion by 2032. We start with a look at the BMW Group Plant Spartanburg in South Carolina, where the Figure 02 robot recently completed a groundbreaking 11-month pilot. We discuss the stunning technical specs: a robot with three times the processing power of its predecessor, 4th-generation hands with 16 degrees of freedom, and the ability to place chassis parts with millimeter-level accuracy.

    But it’s not all smooth walking. We dive into the "German Sweet Spot"—the revelation that 244 hardware components of a humanoid robot align perfectly with the core competencies of German mechanical engineering. From precision gears to advanced sensors, the DACH region is positioning itself as the "hardware heart" of this global race.

    However, the most explosive part of today’s show covers the "Dark Side" of robotics. We analyze the shocking forensic study by Alias Robotics on the Chinese Unitree G1. This $16,000 robot, while affordable, has been labeled a potential "Trojan Horse". Our hosts reveal how static encryption keys and unauthorized data exfiltration could turn these digital workers into covert surveillance platforms, sending video, audio, and spatial LiDAR maps to external servers without user consent.

    Key topics covered in this episode:
    The BMW Success Story: How Figure 02 loaded over 90,000 parts and what the "failure points" in its forearm taught engineers about the next generation, Figure 03.
    Market Dynamics: Why China currently leads with 39% of humanoid companies, and how the U.S. and Europe are fighting for the remaining share.
    The ROI Reality Check: Can a $100,000 robot really pay for itself in under 1.36 years?
    AI for Cybersecurity: Why traditional firewalls aren't enough and why we need AI to defend against weaponized robots.
    Stanford’s ToddlerBot: The $6,000 open-source platform that is democratizing robot learning.

    Whether you are an industry executive, a cybersecurity professional, or a tech enthusiast, this episode of AI Affairs is your essential guide to the machines that will define the next decade of human labor. Listen now to understand why the future of work isn't just about mechanics—it's about trust.

    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)
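    As a quick sanity check on the ROI figure above, here is a back-of-the-envelope calculation. Only the $100,000 price and the 1.36-year payback come from the episode; the implied annual saving is simply derived from them, and operating costs are ignored for simplicity.

```python
# Back-calculate the annual saving implied by a 1.36-year payback on a $100,000 robot.
robot_cost = 100_000      # USD acquisition cost, as cited in the episode
payback_years = 1.36      # payback period cited in the episode

implied_annual_saving = robot_cost / payback_years
print(f"Implied saving: ${implied_annual_saving:,.0f} per year")
# -> about $73,500 per year: the robot has to displace or augment labor worth
#    roughly that much annually (before operating costs) for the claim to hold.
```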

    15 min
  8. SEASON 2, EPISODE 3 TRAILER

    015 Quicky Humanoid Robots – Industrial Revolution or Trojan Horse?

    Episode Number: Q015
    Title: Humanoid Robots – Industrial Revolution or Trojan Horse?

    Welcome to a special deep-dive episode of AI Affairs! Today, we are exploring the front lines of the robotic revolution. What was once the stuff of science fiction is now walking onto the factory floors of the world’s biggest automakers. But as these machines join the workforce, they bring with them a new era of industrial opportunity—and unprecedented cybersecurity risks.

    In this episode, hosts Claus and Aida break down the massive shift in the humanoid market, which is projected to explode from $3.3 billion in 2024 to over $66 billion by 2032. We start with a look at the BMW Group Plant Spartanburg in South Carolina, where the Figure 02 robot recently completed a groundbreaking 11-month pilot. We discuss the stunning technical specs: a robot with three times the processing power of its predecessor, 4th-generation hands with 16 degrees of freedom, and the ability to place chassis parts with millimeter-level accuracy.

    But it’s not all smooth walking. We dive into the "German Sweet Spot"—the revelation that 244 hardware components of a humanoid robot align perfectly with the core competencies of German mechanical engineering. From precision gears to advanced sensors, the DACH region is positioning itself as the "hardware heart" of this global race.

    However, the most explosive part of today’s show covers the "Dark Side" of robotics. We analyze the shocking forensic study by Alias Robotics on the Chinese Unitree G1. This $16,000 robot, while affordable, has been labeled a potential "Trojan Horse". Our hosts reveal how static encryption keys and unauthorized data exfiltration could turn these digital workers into covert surveillance platforms, sending video, audio, and spatial LiDAR maps to external servers without user consent.

    Key topics covered in this episode:
    The BMW Success Story: How Figure 02 loaded over 90,000 parts and what the "failure points" in its forearm taught engineers about the next generation, Figure 03.
    Market Dynamics: Why China currently leads with 39% of humanoid companies, and how the U.S. and Europe are fighting for the remaining share.
    The ROI Reality Check: Can a $100,000 robot really pay for itself in under 1.36 years?
    AI for Cybersecurity: Why traditional firewalls aren't enough and why we need AI to defend against weaponized robots.
    Stanford’s ToddlerBot: The $6,000 open-source platform that is democratizing robot learning.

    Whether you are an industry executive, a cybersecurity professional, or a tech enthusiast, this episode of AI Affairs is your essential guide to the machines that will define the next decade of human labor. Listen now to understand why the future of work isn't just about mechanics—it's about trust.

    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    2 min


About

AI Affairs: The podcast for a critical, process-oriented look at artificial intelligence. We cover the technology’s strengths as well as its downsides and current weaknesses (e.g., bias, hallucinations, risk management). The goal is to be aware of both the opportunities and the dangers so that we can use the technology in a targeted and controlled way. If you like this format, follow me and feel free to leave a comment.