AI AffAIrs

Claus Zeißler

AI Affairs: The podcast for a critical and process-oriented look at artificial intelligence. We spotlight the strengths of the technology as well as its downsides and current weaknesses (e.g., bias, hallucinations, risk management). The goal is to understand both the opportunities and the dangers so that we can use the technology in a targeted and controlled manner. If you like this format, follow me and feel free to leave a comment.

  1. SEASON 2 TRAILER, EPISODE 8

    020 Quicky Silicon’s Successors: The Brain-Chip vs. Quantum Revolution

    Episode Number: Q020
    Title: Silicon's Successors: The Brain-Chip vs. Quantum Revolution

    Is the era of traditional computing coming to an end? For decades, Moore's Law—the steady doubling of transistors on silicon chips—has fueled our digital world, but we are finally hitting the fundamental physical limits of silicon. The "von Neumann bottleneck," the separation of memory and processing that creates data traffic jams, is becoming an unsustainable drain on energy. In this episode, we explore the two most promising frontiers designed to shatter these limits: neuromorphic computing and quantum technologies.

    What's inside this episode?

    Neuromorphic Computing – Engineering the Artificial Brain: We dive into how systems like Intel's Hala Point—the world's largest neuromorphic system, with 1.15 billion neurons—mimic the human brain to process data up to 20 times faster than a biological brain while using a fraction of the power of traditional CPUs. Discover why spiking neural networks (SNNs) are the secret to the future of autonomous vehicles, robotics, and energy-efficient edge AI.

    Quantum Computing – Solving the "Impossible": While neuromorphic chips mimic how we think, quantum computers exploit the strange laws of subatomic physics. We discuss the race for fault-tolerant quantum computing (FTQC) and how breakthroughs like Google's Willow chip and IBM's roadmap to the Starling system aim to solve problems in drug discovery, materials science, and cryptography that would take classical supercomputers millions of years.

    The Power of Convergence: The real magic happens where these two worlds meet. We examine neuromorphic quantum computing (NQC)—the integration of brain-like neural structures on quantum hardware. Learn how quantum materials, such as superconductors and topological insulators, are being used to create ultra-low-power neuromorphic components like superconducting memristors.

    Sustainability and "Green AI": With the energy demands of massive AI models like GPT-3 skyrocketing, we look at how these next-generation architectures offer a path toward sustainable AI.

    Why This Matters for the US Market: North America currently leads the world in commercial applications for these technologies. With massive investments from titans like IBM, Intel, and Google, and research under way at facilities like Sandia National Laboratories, the US is the primary battleground for the next era of high-performance computing. However, a significant talent shortage looms, with demand for quantum professionals expected to explode by 2030.

    Conclusion: This isn't a winner-take-all race. Neuromorphic and quantum computing are like a race car and a cargo ship, designed for completely different journeys. One will power the real-time intelligence of our devices, while the other will simulate the deepest secrets of our universe.

    Subscribe now to stay ahead of the curve on the future of hardware, artificial intelligence, and the post-silicon world, with weekly deep dives into the mechanics of AI! ⭐⭐⭐⭐⭐
    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    2 min
  2. 3 DAYS AGO

    019 The AI Education Reset: ChatGPT, Integrity, and the Future of Exams

    Episode Number: L019
    Title: The AI Education Reset: ChatGPT, Integrity, and the Future of Exams

    Is the traditional classroom prepared for the era of generative AI? Since the public release of ChatGPT in late 2022, the education sector has faced a structural and ethical transformation that is moving faster than any policy update. In this episode, we dive deep into how artificial intelligence is redefining the roles of students and teachers, challenging our views on academic integrity, and forcing a total reboot of our testing culture. We explore the dual nature of AI as a "sparring partner"—a tool that can act as a personal tutor and motivator to challenge thinking—while navigating the dangerous waters of "skill skipping," where students might bypass critical cognitive development steps by over-relying on automation.

    What we cover in this episode:

    The Rise of the AI Sparring Partner: How to use GenAI for brainstorming and deep learning while keeping a "human in the loop" to verify facts and combat AI "hallucinations" or "bullshitting."

    The Death of the Take-Home Essay? Why traditional assignments are vulnerable to AI authorship, and how institutions are pivoting toward e-portfolios, oral exams, and supervised "Bring Your Own Device" (BYOD) assessments.

    Policy vs. Practice: A look at the EU AI Act—the first binding regulatory framework for AI safety and transparency—and how it compares to the evolving GenAI policies at top 100 U.S. universities.

    AI Literacy & Competence Models: Breaking down the frameworks (such as the OECD's AI literacy framework or Anthropic's AI Fluency model) that help educators teach students how to interact with AI ethically and effectively.

    Digital Leadership: Why the digital transformation of schools isn't just about hardware, but about a new culture of leadership and professional development for teachers.

    Whether you are a K-12 teacher, a university professor, or a student navigating this new frontier, this episode provides data-driven insights into the future of learning. We analyze recent studies on AI-generated feedback versus human expertise and discuss why the human element remains irreplaceable in a world governed by algorithms. Join us as we decode the AI revolution in education!

    #AIinEducation #ChatGPT #EdTech #HigherEd #AcademicIntegrity #FutureOfLearning #AIAct #DigitalTransformation #USAEd #TeachingWithAI

    Subscribe for weekly deep dives into the mechanics of AI! ⭐⭐⭐⭐⭐
    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    15 min
  3. SEASON 2 TRAILER, EPISODE 7

    019 Quicky The AI Education Reset: ChatGPT, Integrity, and the Future of Exams

    Episode Number: Q019
    Title: The AI Education Reset: ChatGPT, Integrity, and the Future of Exams

    Is the traditional classroom prepared for the era of generative AI? Since the public release of ChatGPT in late 2022, the education sector has faced a structural and ethical transformation that is moving faster than any policy update. In this episode, we dive deep into how artificial intelligence is redefining the roles of students and teachers, challenging our views on academic integrity, and forcing a total reboot of our testing culture. We explore the dual nature of AI as a "sparring partner"—a tool that can act as a personal tutor and motivator to challenge thinking—while navigating the dangerous waters of "skill skipping," where students might bypass critical cognitive development steps by over-relying on automation.

    What we cover in this episode:

    The Rise of the AI Sparring Partner: How to use GenAI for brainstorming and deep learning while keeping a "human in the loop" to verify facts and combat AI "hallucinations" or "bullshitting."

    The Death of the Take-Home Essay? Why traditional assignments are vulnerable to AI authorship, and how institutions are pivoting toward e-portfolios, oral exams, and supervised "Bring Your Own Device" (BYOD) assessments.

    Policy vs. Practice: A look at the EU AI Act—the first binding regulatory framework for AI safety and transparency—and how it compares to the evolving GenAI policies at top 100 U.S. universities.

    AI Literacy & Competence Models: Breaking down the frameworks (such as the OECD's AI literacy framework or Anthropic's AI Fluency model) that help educators teach students how to interact with AI ethically and effectively.

    Digital Leadership: Why the digital transformation of schools isn't just about hardware, but about a new culture of leadership and professional development for teachers.

    Whether you are a K-12 teacher, a university professor, or a student navigating this new frontier, this episode provides data-driven insights into the future of learning. We analyze recent studies on AI-generated feedback versus human expertise and discuss why the human element remains irreplaceable in a world governed by algorithms. Join us as we decode the AI revolution in education!

    #AIinEducation #ChatGPT #EdTech #HigherEd #AcademicIntegrity #FutureOfLearning #AIAct #DigitalTransformation #USAEd #TeachingWithAI

    Subscribe for weekly deep dives into the mechanics of AI! ⭐⭐⭐⭐⭐
    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    2 min
  4. FEB 12

    018 AI 2026: Transparency Laws, Reasoning Models, and the Power Play

    Episode Number: L018
    Title: AI 2026: Transparency Laws, Reasoning Models, and the Power Play

    Welcome to a deep dive into the rapidly shifting landscape of artificial intelligence. In this episode, we explore the major legal and technical transformations set to redefine the industry—from California's groundbreaking transparency mandates to the emergence of "reasoning models" that challenge everything we thought we knew about AI regulation.

    What's inside this episode:

    The End of the Black Box? California's AB 2013: Effective January 1, 2026, generative AI developers must publicly disclose detailed information about their training data, including dataset sources and whether they contain copyrighted or personal information. We discuss how this law—and the similar documentation templates under the EU AI Act—aims to shine a light on how models are built.

    The Shift to Inference-Time Scaling: The industry is hitting a "pretraining wall." Instead of just making models learn from more data, giants like OpenAI and DeepSeek are moving toward "test-time compute." Models like the OpenAI o-series and DeepSeek-R1 gain intelligence by "thinking out loud" through extended chains of thought (CoT) at the moment of the query.

    The AI Oligopoly and Infrastructure Power: The AI supply chain is becoming increasingly concentrated. We analyze the market power of hardware leaders like NVIDIA and ASML and the cloud dominance of AWS, Azure, and Google Cloud. We also explore the "antimonopoly approach" to keeping this technology democratic and accessible.

    Safety, Deception, and Chain-of-Thought Monitoring: Can we trust what an AI says it's thinking? We investigate CoT monitoring—a safety technique that lets humans oversee a model's intermediate reasoning to catch "scheming" or misbehavior before it happens. This opportunity is "fragile," however, as models may learn to rationalize or hide their true intentions.

    Medical AI & the "Unpredictability" Problem: AI-enabled medical devices are facing a crisis of clinical validation. We look at the gaps in the FDA's 510(k) pathway and why unpredictability in AI outputs makes robust post-market surveillance (PMS) essential for patient safety.

    GDPR vs. LLMs: The Right to Be Forgotten: How do you delete a person from a neural network? We tackle the collision between the GDPR's right to erasure and the architecture of large language models, where personal data becomes inextricably embedded in billions of parameters.

    Keywords: Generative AI Regulation, California AB 2013, EU AI Act, Inference Compute Scaling, AI Safety, OpenAI o1, DeepSeek-R1, NVIDIA AI, GDPR and LLMs, AI Transparency.

    This episode is essential for developers, policymakers, and tech enthusiasts who want to understand the new rules of the road. The era of scaling for scaling's sake is over—the age of accountability and reasoning has begun.

    Subscribe for weekly deep dives into the mechanics of AI! ⭐⭐⭐⭐⭐
    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    17 min
  5. SEASON 2 TRAILER, EPISODE 6

    018 Quicky AI 2026: Transparency Laws, Reasoning Models, and the Power Play

    Episode Number: Q018
    Title: AI 2026: Transparency Laws, Reasoning Models, and the Power Play

    Welcome to a deep dive into the rapidly shifting landscape of artificial intelligence. In this episode, we explore the major legal and technical transformations set to redefine the industry—from California's groundbreaking transparency mandates to the emergence of "reasoning models" that challenge everything we thought we knew about AI regulation.

    What's inside this episode:

    The End of the Black Box? California's AB 2013: Effective January 1, 2026, generative AI developers must publicly disclose detailed information about their training data, including dataset sources and whether they contain copyrighted or personal information. We discuss how this law—and the similar documentation templates under the EU AI Act—aims to shine a light on how models are built.

    The Shift to Inference-Time Scaling: The industry is hitting a "pretraining wall." Instead of just making models learn from more data, giants like OpenAI and DeepSeek are moving toward "test-time compute." Models like the OpenAI o-series and DeepSeek-R1 gain intelligence by "thinking out loud" through extended chains of thought (CoT) at the moment of the query.

    The AI Oligopoly and Infrastructure Power: The AI supply chain is becoming increasingly concentrated. We analyze the market power of hardware leaders like NVIDIA and ASML and the cloud dominance of AWS, Azure, and Google Cloud. We also explore the "antimonopoly approach" to keeping this technology democratic and accessible.

    Safety, Deception, and Chain-of-Thought Monitoring: Can we trust what an AI says it's thinking? We investigate CoT monitoring—a safety technique that lets humans oversee a model's intermediate reasoning to catch "scheming" or misbehavior before it happens. This opportunity is "fragile," however, as models may learn to rationalize or hide their true intentions.

    Medical AI & the "Unpredictability" Problem: AI-enabled medical devices are facing a crisis of clinical validation. We look at the gaps in the FDA's 510(k) pathway and why unpredictability in AI outputs makes robust post-market surveillance (PMS) essential for patient safety.

    GDPR vs. LLMs: The Right to Be Forgotten: How do you delete a person from a neural network? We tackle the collision between the GDPR's right to erasure and the architecture of large language models, where personal data becomes inextricably embedded in billions of parameters.

    Keywords: Generative AI Regulation, California AB 2013, EU AI Act, Inference Compute Scaling, AI Safety, OpenAI o1, DeepSeek-R1, NVIDIA AI, GDPR and LLMs, AI Transparency.

    This episode is essential for developers, policymakers, and tech enthusiasts who want to understand the new rules of the road. The era of scaling for scaling's sake is over—the age of accountability and reasoning has begun.

    Subscribe for weekly deep dives into the mechanics of AI! ⭐⭐⭐⭐⭐
    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    2 min
  6. FEB 5

    017 AI 2026: Model Collapse, Big Tech’s $500B Bet, and the Death of Truth

    Episode Number: L017
    Title: AI 2026: Model Collapse, Big Tech's $500B Bet, and the Death of Truth

    Join us as we dissect the most critical shifts in the tech landscape, drawing on groundbreaking research from Nature, Stanford HAI, and the Reuters Institute. In this episode, we dive into:

    The $500 Billion Bet: Big Tech's capital expenditure is set to explode. We analyze why giants like Microsoft, Google, and Meta are racing to exceed $500 billion in spending, and whether OpenAI and Anthropic can hit their staggering revenue targets of $30 billion and $15 billion respectively.

    The Rise of "AI Slop": By 2026, experts predict that up to 90% of online content will be synthetic. We define the "AI slop" phenomenon—an overload of low-quality, automated content that acts like the "microplastics of the internet," polluting our communication environment.

    The "Model Collapse" Crisis: What happens when AI is trained on its own "digital waste"? We explore the autophagous loop in which models forget reality, lose the "tails" of human nuance, and converge into a repetitive, bland sameness.

    The Generative AI Paradox: As synthetic media becomes indistinguishable from reality, will society stop believing any digital evidence? We discuss the "epistemic tax"—the rising cost of verifying the truth in a world of voice clones and high-conviction deepfakes.

    Journalism as "Clean Water": In an ocean of AI-generated noise, human journalism remains the only source of "clean data." We discuss why investigative reporting is the only barrier preventing the total collapse of foundational AI models.

    The Robotaxi Wars: Waymo is serving 150,000 rides a week, but can Tesla finally deliver a truly driverless taxi? We look at the global battle for autonomous supremacy as Chinese players like Pony.ai threaten to surpass Western fleets.

    Why Listen? If you want to understand why the "context window" of reality is shrinking and how to maintain "epistemic hygiene" in a synthetic world, this episode is your essential guide to the next two years of the AI revolution. Subscribe now to stay ahead of the curve.

    #AI2026 #TechTrends #ModelCollapse #AISlop #GenerativeAI #BigTech #Waymo #Tesla #OpenAI #Journalism #FutureOfTech #DigitalInbreeding #SiliconValley
    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    15 min
  7. SEASON 2 TRAILER, EPISODE 5

    017 Quicky AI 2026: Model Collapse, Big Tech’s $500B Bet, and the Death of Truth

    Episode Number: Q017
    Title: AI 2026: Model Collapse, Big Tech's $500B Bet, and the Death of Truth

    Join us as we dissect the most critical shifts in the tech landscape, drawing on groundbreaking research from Nature, Stanford HAI, and the Reuters Institute. In this episode, we dive into:

    The $500 Billion Bet: Big Tech's capital expenditure is set to explode. We analyze why giants like Microsoft, Google, and Meta are racing to exceed $500 billion in spending, and whether OpenAI and Anthropic can hit their staggering revenue targets of $30 billion and $15 billion respectively.

    The Rise of "AI Slop": By 2026, experts predict that up to 90% of online content will be synthetic. We define the "AI slop" phenomenon—an overload of low-quality, automated content that acts like the "microplastics of the internet," polluting our communication environment.

    The "Model Collapse" Crisis: What happens when AI is trained on its own "digital waste"? We explore the autophagous loop in which models forget reality, lose the "tails" of human nuance, and converge into a repetitive, bland sameness.

    The Generative AI Paradox: As synthetic media becomes indistinguishable from reality, will society stop believing any digital evidence? We discuss the "epistemic tax"—the rising cost of verifying the truth in a world of voice clones and high-conviction deepfakes.

    Journalism as "Clean Water": In an ocean of AI-generated noise, human journalism remains the only source of "clean data." We discuss why investigative reporting is the only barrier preventing the total collapse of foundational AI models.

    The Robotaxi Wars: Waymo is serving 150,000 rides a week, but can Tesla finally deliver a truly driverless taxi? We look at the global battle for autonomous supremacy as Chinese players like Pony.ai threaten to surpass Western fleets.

    Why Listen? If you want to understand why the "context window" of reality is shrinking and how to maintain "epistemic hygiene" in a synthetic world, this episode is your essential guide to the next two years of the AI revolution. Subscribe now to stay ahead of the curve.

    #AI2026 #TechTrends #ModelCollapse #AISlop #GenerativeAI #BigTech #Waymo #Tesla #OpenAI #Journalism #FutureOfTech #DigitalInbreeding #SiliconValley
    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)

    2 min
  8. JAN 29

    016 LLM Council: Why Your Business Needs an AI Board of Directors

    Episode Number: L016
    Title: LLM Council: Why Your Business Needs an AI Board of Directors

    Do you blindly trust the first answer ChatGPT gives you? While Large Language Models (LLMs) are brilliant, relying on a single AI is a single point of failure. Every model—from GPT-4o to Claude 3.5 to Gemini—has specific blind spots and deep-seated biases. In this episode, we dive into the LLM Council, a concept open-sourced by Andrej Karpathy (OpenAI co-founder and former Tesla AI lead). Originally a "fun Saturday hack," this framework is transforming how businesses make strategic decisions by replacing a single AI "dictator" with a diverse panel of digital experts.

    The Problem: The "Judge" Is Biased. Current research shows that LLMs used as judges are far from perfect. They suffer from position bias (preferring certain answer orders), verbosity bias (favoring longer responses), and the significant self-enhancement bias, where an AI prefers its own writing style over others. Some models even replicate human-like biases regarding gender and institutional prestige.

    The Solution: The Four-Stage Council Process. An LLM Council forces multiple frontier models to debate, critique, and reach a consensus. We break down the four essential stages:
    Stage 1: First Opinions – Multiple models (e.g., Claude, GPT, Llama) answer your query independently.
    Stage 2: Anonymous Review – Models rank each other's answers without knowing who wrote them, preventing brand favoritism.
    Stage 3: Critique – The models act as devil's advocates, ruthlessly pointing out hallucinations and logical flaws in their peers' arguments.
    Stage 4: Chairman Synthesis – A designated "Chairman" model reviews the entire debate to produce one battle-tested final response.

    Why This Matters for the US Market: For American business owners and developers, an LLM Council acts as a free AI Board of Directors. Whether you are validating a $50,000 marketing campaign, performing automated code reviews, or checking complex contracts for unfavorable terms, the council approach provides a level of reliability and alignment with human judgment that no single model can match.

    What You'll Learn in This Episode:
    The ROI of AI Collaboration: Why spending 5 to 20 cents on a "council meeting" is a sound investment for high-stakes decisions.
    No-Code Implementation: How to use the Cursor IDE and natural language to build your own council in 10 minutes.
    The Tech Stack: An overview of OpenRouter for accessing multiple models, and of open-source frameworks like Council (chain-ml).
    Case Studies: Real-world examples of the council tackling SEO strategies and digital marketing trends for 2026.

    Stop settling for the first AI response. Learn how to leverage the "wisdom of the crowd" to debias your AI workflow and get the most reliable answer possible.
    (Note: This podcast episode was created with the support and structuring provided by Google's NotebookLM.)
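The four-stage council flow described in this episode can be sketched in a few lines of Python. This is a simplified, hypothetical illustration: the function names and prompt wording are our own, and the "models" are stubbed as plain functions rather than real API calls to Claude, GPT, or Llama (in practice each callable would wrap a request through a gateway such as OpenRouter).

```python
import random

def run_council(query, models, chairman):
    """Sketch of the four-stage LLM Council flow.

    `models` maps a model name to a callable (prompt -> answer text);
    `chairman` is one such callable designated to synthesize the result.
    """
    # Stage 1: First opinions -- every model answers the query independently.
    answers = [ask(query) for ask in models.values()]

    # Stage 2: Anonymous review -- shuffle the answers so reviewers cannot
    # infer authorship, then have each model rank the anonymized set.
    anonymized = answers[:]
    random.shuffle(anonymized)
    joined = "\n".join(anonymized)
    rankings = {name: ask("Rank these answers best to worst:\n" + joined)
                for name, ask in models.items()}

    # Stage 3: Critique -- each model plays devil's advocate, flagging
    # hallucinations and logical flaws in the (anonymous) answers.
    critiques = {name: ask("List factual or logical flaws in:\n" + joined)
                 for name, ask in models.items()}

    # Stage 4: Chairman synthesis -- one model sees the whole debate and
    # produces the final, battle-tested response.
    debate = (f"Question: {query}\nAnswers:\n{joined}\n"
              f"Rankings: {rankings}\nCritiques: {critiques}")
    return chairman("Synthesize one final answer from this debate:\n" + debate)

# Toy usage with stubbed "models" (plain functions instead of API calls):
stub_models = {
    "model_a": lambda prompt: "A says: 42",
    "model_b": lambda prompt: "B says: 41",
}
stub_chairman = lambda prompt: "FINAL: consensus is 42"
print(run_council("What is the answer?", stub_models, stub_chairman))
```

The anonymization in stage 2 is the key design choice: because reviewers see a shuffled, unattributed list, self-enhancement and brand-favoritism biases have no label to latch onto.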

    12 min

Trailers

About

AI Affairs: The podcast for a critical and process-oriented look at artificial intelligence. We spotlight the strengths of the technology as well as its downsides and current weaknesses (e.g., bias, hallucinations, risk management). The goal is to understand both the opportunities and the dangers so that we can use the technology in a targeted and controlled manner. If you like this format, follow me and feel free to leave a comment.