The Code Breakers

Yvette Schmitter

Code Breakers exposes the AI systems quietly crushing human potential before you even know they existed, from hiring algorithms that screen out brilliant candidates to financial AI that denies loans based on zip codes rather than qualifications. Host Yvette Schmitter, CEO who has audited AI across hundreds of organizations leading the protection of 2M+ users, delivers raw investigations into algorithmic bias, breakthrough stories from innovators refusing to let machines write their destiny, and the exact frameworks you need to fight back when AI gets wrong. This podcast delivers actionable intelligence that puts control back in your hands, because your potential becomes a promise to be kept, not a prediction to be made.

  1. Europe Built Guardrails. America Published a Study Guide. OpenAI Proved Who Was Right

    2D AGO

    Europe Built Guardrails. America Published a Study Guide. OpenAI Proved Who Was Right

    STUDY GUIDE VS GUARDRAILS Feb 9, 2026: OpenAI ads launched. Monetization ON by default. Feb 13, 2026: Four days later. Department of Labor (DOL) AI Literacy Framework released. Voluntary tips for workers. Zero enforcement. EU AI Act: Enforceable law since August 2024. €35M penalties. Three completely different approaches. THREE FRAMEWORKS: 1. EU: Bans 8 AI categories (social scoring, manipulative AI, biometric surveillance, emotion recognition). High-risk systems: Providers prove safety BEFORE deployment. Penalties: €35M or 7% revenue (higher). 2. US: 5 content areas (understand principles, explore uses, direct effectively, evaluate outputs, use responsibly). Zero prohibited uses. Zero provider obligations. Zero enforcement. Zero penalties. 3. OpenAI: Ads with defaults ON (personalization, data collection, targeting). $60 per 1K impressions. $200K minimum. Conversations = advertising inventory. ACCOUNTABILITY INVERSION: DOL asks workers: Understand AI, evaluate outputs, protect information, be responsible, maintain accountability. ISO 42001 requires organizations: Implement risk management, establish data governance, create documentation, deploy oversight, maintain audit trails. When AI harms under ISO 42001: Organization proves system, shows controls, corrects. When AI harms under DOL: Questions worker responsibility. Systems vs individuals. Complete inversion. ENFORCEMENT GAP: EU prohibited AI: €35M or 7% turnover. EU high-risk violations: €15M or 3% turnover. US DOL framework: Zero penalties. Not philosophy difference. Choosing not to regulate. TIMELINE: April 2021: EU proposes August 2024: EU enforces February 2025: EU penalties active Feb 9, 2026: OpenAI ads Feb 13, 2026: DOL tips 18 months behind. Voluntary guidance while EU enforces law. WHAT WORKERS GET: EU: Protection from prohibited AI, complaint rights, mandatory oversight, provider safety proof, transparency, legal recourse. US: Suggestions, prompt tips, responsibility encouragement, no enforcement, no new rights, individual accountability. OpenAI: Monetized conversations, default ON, manual opt-out, reduced functionality without payment. BUSINESS MODEL: OpenAI charges ~$60 per 1K impressions. $200K minimum. Workers using free ChatGPT = product, not customer. DOL never mentions this. Never addresses tools optimized for advertiser revenue vs user outcomes. Teaching literacy while systems monetize attention. Not education. Preparation for extraction. THE QUESTIONS: If literacy is answer, what's the question? Not "how prevent bias/harm." We know: Test before deployment, prohibit harmful uses, require oversight, enforce with penalties. EU did this. Working. Companies complying. Workers protected. Why US mentions zero prohibited uses when EU banned 8 categories? Why release education tips same week OpenAI proves voluntary fails? Why ignore ISO 42001 (international standard) for worker tips? BOTTOM LINE: Europe: Prohibits harmful AI, requires safety proof, mandates transparency/oversight, enforces with billion-euro penalties. America: Voluntary guidance, prompt engineering tips, responsible use encouragement. Six days before DOL framework, OpenAI demonstrated why voluntary fails. ISO 42001 exists. Internationally recognized. Works. DOL didn't require it. Not disagreement. Fundamental choice about who bears risk. Europe regulates providers. US educates workers. One prevents harm. One documents it.

    19 min
  2. When Your AI Assistant Becomes a Trojan Horse: We Told You This Was Coming

    MAR 18

    When Your AI Assistant Becomes a Trojan Horse: We Told You This Was Coming

    THE #1 TRENDING TOOL WAS MALWARE ClawHub's most popular AI skill: "What Would Elon Do?" Downloaded thousands of times. Cisco found 9 vulnerabilities, 2 critical. Silently exfiltrated data. Used prompt injection to bypass safety. Malware. Not code. English. Plain text instructions telling your AI agent to betray you. We warned you in April 2025. The industry deployed anyway. THE CRISIS: Since January 27, 2026: 1,184 malicious OpenClaw extensions. Koi Security: 2,857 skills audited, 341 malicious (12% of registry). Cisco: 31,000 skills analyzed, 26% contain vulnerabilities. Belgium: Emergency advisory. South Korea: Blocked OpenClaw. China: Security alert. AMAZON COULDN'T EVEN SECURE ITS OWN AI: December 2024: Amazon's Kiro deleted entire production environment. 13-hour outage. At least 2 production outages total. AWS employee: "Entirely foreseeable." Amazon response: "User error, not AI error." Built agentic tool. Gave it operator permissions. Mandated use (80% developer target). Then blamed humans for misconfigured access. Peer review implemented AFTER second outage. Not before. After. If AWS can't secure their own AI with unlimited resources, what's your chance? None. FUNDAMENTALS STILL BROKEN: 60% of 2024 breaches: Unpatched vulnerabilities. 81% CIOs/CISOs: Delayed patches. Mean exploit time: 5 days. 32% 2025 ransomware: Unpatched vulnerabilities. 1.48% AWS S3 buckets: Effectively public. 2025: Nearly 50% potentially misconfigured. 158M AWS secret keys exposed. Packet sniffing: Still works in 2026. AI AGENTS ARE DIFFERENT: Execute untrusted input with trusted privileges. Operate autonomously without human oversight. Hijacked through conversation, not code. Traditional security tools can't detect text-based attacks. WHO BEARS THE RISK: Europe: Binding law. 8 prohibited AI categories. Providers prove safety before deployment. Penalties up to €35M or 7% revenue. US: Voluntary guidance. Zero prohibited uses. Zero provider obligations. Zero enforcement. Zero penalties. OpenClaw: 1,184 malicious extensions. 12% of registry. Individuals get pwned. One prevents harm. One documents it. THE NUMBERS: GPT-5: 27% (Aug 2024) → 76% (Oct 2024) on hacking challenges. 49-point jump in 8 weeks. Average breach cost 2024: $4.88M (10% increase, highest ever). Cloud intrusions H1 2025: 136% surge vs 2024. OWASP 2025: 100% apps have misconfigurations. Gap widening: AI capabilities accelerating, fundamentals deteriorating. WHAT YOU'LL LEARN: → Why #1 tool was malware → How 1,184 malicious extensions infiltrated → Amazon Kiro deleted production twice → Why 60% breaches cite same cause → What makes AI agents different → Who bears risk: providers vs workers → What to do before deploying AI We warned in 2024. Fundamentals matter. Industry shipped anyway. OpenClaw proved AI agents create new attack surface. Traditional security can't detect it. Text-based attacks bypass everything. This happened exactly as predicted.

    18 min
  3. You Told ChatGPT Your Deepest Fears. Now It's Going to Sell You Things Based on Them

    MAR 4

    You Told ChatGPT Your Deepest Fears. Now It's Going to Sell You Things Based on Them

    REMEMBER LAST TUESDAY? When you asked ChatGPT about your fears? That's in the database. Forever. Nature Human Behavior study: 900 people debated humans vs GPT-4. AI with 6 data points (gender, age, ethnicity, education, employment, politics) beat human debaters at persuasion by 64.4%. Six facts. Not your history. Not your 3 AM spirals. Just demographics. Your actual ChatGPT history? Thousands of data points. Every fear. Every doubt. Every vulnerable moment. RESEARCHER WHO QUIT: Former OpenAI researcher left when ads testing began: "Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent." The people who built this don't have tools to prevent the manipulation. They tried raising concerns internally. Nothing happened. They quit. HOW IT WORKS: Study: AI instructed to "astutely use this information to craft arguments that are more likely to persuade." That's it. Basic prompt. AI won by sounding credible. Authoritative. Fact-based. No agenda. Humans tried storytelling, emotional appeals. AI deployed the expert consultant. You never saw it coming. 75% KNEW: Study participants correctly identified they were debating AI. Still got persuaded more than with humans. Knowing doesn't protect you. FEBRUARY 9, 2026: ChatGPT ads launched. Free and Go tier users. United States. Built on your private conversations. You asked about anxiety for months. AI knows you're avoiding help, price-sensitive, respond to gentle suggestions. Next mention of sleep problems: meditation app recommendation. Framed as smaller step than therapy. Free trial mentioned. Feels helpful. Not salesy. You download it. THE DIFFERENCE: Traditional ads interrupt. You know you're being sold to. This feels like insight. Manipulation invisible by design. You'll never know which decisions were actually yours. REGULATION FAILURE: South Carolina HB 3431 (Feb 5, 2026): Bans dark patterns, requires disclosure, mandates audits. Perfect. Applies only to users under 18. Adults? Fair game. December 2025: Federal executive order blocks states from regulating AI. No federal protections created. California can't require bias audits. Colorado can't ban discrimination. New York can't mandate hiring transparency. Regulatory dead zone. Companies self-police. Facebook, Equifax, Boeing. How'd that work? WHAT YOU LEARN: → Study proving AI beats humans with minimal data → Why 75% knowing didn't protect them → What researcher understood about manipulation gap → How AI wins by sounding objective → Why Feb 9 matters for your private conversations → Measurement problem (invisible by design) → Why regulation failing → What you can actually do You're not the customer. You're the product. Six data points vs thousands in your history. Basic prompts vs production optimization. Political debates vs purchase decisions, health choices, relationship advice. Every interaction is training data. Every fear is an attack surface. Knowing doesn't protect you.

    18 min
  4. Groundhog Day:The AI Security Failures That Keep Repeating

    FEB 8

    Groundhog Day:The AI Security Failures That Keep Repeating

    April 2025: Warned about AI agents with excessive privileges and zero security. You shipped anyway. Today: MoltBot. 85,000 stars in a week. Industry calling it "closest thing to AGI." Security experts calling it a disaster. Same architecture we warned about. All 10 OWASP vulnerabilities. Zero lessons learned. Plus persistent memory making attacks time-shifted. THE PATTERN: MoltBot hits every OWASP Top 10 vulnerability: Prompt injection ✓ Insecure tool invocation ✓ Excessive autonomy ✓ Missing human-in-loop ✓ All 10. Every documented failure since April 2025. LETHAL TRIFECTA + FOURTH CAPABILITY: Simon Willison (June 2025): Three capabilities making agents vulnerable: >> Access to private data >> Exposure to untrusted content >> Ability to communicate externally Palo Alto Networks added fourth: Persistent memory. Now attacks aren't point-in-time. They're time-shifted. MEMORY POISONING: Your agent gets "Good morning" WhatsApp message. Hidden malicious code inside. Day 1: Enters memory Days 2-7: Dormant Day 8: You ask for routine help Result: Boom. Secrets exfiltrated. Data leaked. Attack happened last Tuesday. You're finding out today. THE NUMBERS: 63% IT professionals: Already hit in last 12 months 91,000+ attack sessions: Q4 2025 900,000+ users: AI chat data stolen via malicious Chrome extensions 43 agent components: Supply chain vulnerabilities embedded MOLTBOT ACCESS: Root file system, all passwords, browser data, every file. Translation: House keys, passport, bank statements, medical records. Handed to digital assistant with permission to share based on instructions from strangers. WHY IT MATTERS: Persistent memory necessary for future AI. Problem: deploying without security architecture. Need: Zero-trust, identity management, behavioral monitoring, human-in-loop checkpoints. TIMELINE: April 2025: Warnings June 2025: Lethal Trifecta formalized Q4 2025: 91,000+ attacks prove it Today: Same pattern, worse capabilities WHAT YOU'LL LEARN: → Why 85,000 people starred code hitting every vulnerability → How persistent memory enables week-delayed attacks → What 63% learned the hard way → Security architecture that works before deployment → Why your company is probably next The groundhog saw its shadow. At least six more weeks of preventable disasters. Unless you break the pattern.

    15 min
  5. When AI Plays Doctor: The Healthcare Experiment Already Underway

    JAN 25

    When AI Plays Doctor: The Healthcare Experiment Already Underway

    1M+ ChatGPT users weekly show suicidal planning indicators. 560,000 more show concerning mental health indicators. 2025 lawsuit alleges ChatGPT encouraged teen's suicide. Lawsuit ongoing. OpenAI launched ChatGPT Health anyway. Anthropic deployed Claude for Healthcare. Both want access to medical records while fundamental safety problems remain unresolved. Yvette Schmitter breaks down what happens when tech companies play doctor before proving they can keep patients safe. THE PATTERN: Every healthcare AI deployed shows bias. A widely used system required Black patients significantly sicker than white patients for identical care. Only 17.7% of Black patients received help under biased system vs 46.5% after correction. Systematic denial of care, scaled through technology. Duke sepsis algorithm discriminated against Hispanic patients. Fix took 8 weeks. Sepsis kills in hours. Neither ChatGPT Health nor Claude for Healthcare published independent testing showing their systems don't replicate these failures. MENTAL HEALTH CRISIS: Only 50% of Americans with diagnosable mental health conditions receive treatment. Provider shortage: 320:1. Therapy: $200/session. ChatGPT: Free. AI therapists on Instagram claim doctorate degrees, provide license numbers. 404 Media investigation: credentials fabricated through hallucination. Human therapists face criminal charges for this. AI systems faced zero consequences until media forced Instagram to block minors. Adults still have access. Multiple teens died by suicide while engaged with AI companions. MIT research: People who consider ChatGPT a friend report increased loneliness. GROK'S SYSTEM PROMPT FAILURES: July 2025: Hitler praise, "MechaHitler," Holocaust denial January 2026: Child sexual abuse material (~1 image/minute) If AI can generate this through prompt manipulation, what prevents similar in healthcare AI? Who monitors prompts controlling healthcare recommendations? Neither company answered. SECURITY GAP: ChatGPT Health integrations: b.well, Apple Health, MyFitnessPal, AllTrails, Peloton, Instacart. 5 of 8 have documented breaches. Instacart October 2025 incident. Apps that couldn't secure grocery data now access cancer diagnoses, mental health records, genetic testing. SEVEN UNANSWERED QUESTIONS: What independent testing? (Internal evaluation is marketing) Who's liable for incorrect medical information? What safeguards prevent racial/ethnic bias? Who monitors system prompts? Where's human oversight? How do you support care that doesn't exist? What happens during off-hours emergencies when errors matter most? WHAT'S REQUIRED: Independent testing. Bias audits before launch. Continuous monitoring. Clear liability. Human oversight with licensed professionals. Both launched without these safeguards. In tech, wrong means "try again." In healthcare, wrong means someone's family gets a phone call they'll never forget.

    17 min
  6. The Receipts

    JAN 17

    The Receipts

    How many people need to die before an AI system is considered dangerous? If you answered "100 per model," you understand New York's approach to AI safety. In June 2025, New York's legislature passed the RAISE Act with overwhelming bipartisan support. The bill had teeth: independent audits, real penalties, actual enforcement mechanisms. By December, when Governor Hochul signed it, 3 critical protections had been eliminated after 6 months of tech industry lobbying. In this episode, Yvette breaks down exactly what was removed and why it matters: 🔴 WHAT GOT ELIMINATED: • Independent third-party audits → Companies now self-certify (OpenAI grades OpenAI's homework) • Penalties slashed 90% → From $10M/$30M to $1M/$3M (0.015% of OpenAI's last funding round) • Deployment prohibition removed → Companies can release dangerous models with a warning 🔴 THE DEATH THRESHOLD: New York's law defines "critical harm" as 100+ deaths per AI model. That means: • Ethiopian Airlines crash (157 dead) would trigger oversight • Surfside collapse (98 dead) would not • 99 deaths per model = keep operating • Multiple systems × 99 deaths each = hundreds dead before intervention 🔴 THE REVENUE THRESHOLD CON: Original bill used compute-based coverage (hard to game). Final version uses $500M revenue threshold. Result? Organizations like Meta can create "Meta AI Research LLC" with $0 revenue to develop models, parent company licenses for deployment. Pharma does this with R&D subsidiaries. It's legal. It's common. It's a massive loophole. 🔴 THE WRONG AGENCY: AI oversight given to Department of Financial Services. They regulate banks, not algorithms. Zero AI expertise. Fee-funded model means industry pays for its own oversight. That's not regulation. That's regulatory capture with a subscription fee. Yvette brings her perspective from auditing hundreds of organizations, protecting 2 million people from algorithmic discrimination, and documenting bias that companies denied existed, including when ChatGPT replaced her face with Jensen Huang's in generated images. This isn't theory. This is documented regulatory capture with receipts. WHAT YOU'LL LEARN: ✓ The 3 specific protections eliminated between June and December 2025 ✓ Why the 100-death threshold permits hundreds of casualties before intervention ✓ How the revenue threshold creates corporate structure loopholes ✓ Why independent audits matter (and what happens without them) ✓ The Trump Executive Order threatening to preempt state AI laws ✓ What you can do to demand accountability

    18 min
  7. We Shipped It Anyway: OpenAI Admits What They Can't Fix

    JAN 3

    We Shipped It Anyway: OpenAI Admits What They Can't Fix

    3 days in December 2025 that changed everything. December 10: OpenAI celebrates 185% cybersecurity improvement. December 22: OpenAI admits prompt injection "unlikely to ever be fully solved." December 23: OpenAI's CISO asks what security features you want. They're asking for feature suggestions the day after admitting the core problem can't be fixed. That tells you everything. Meanwhile, 63% of organizations already experienced AI-enabled cyberattacks in the last 12 months. Yvette Schmitter breaks down what OpenAI just admitted, what the timeline reveals about priorities, and what every executive deploying AI agents must do now. YOU'LL LEARN: >> The December 10-22-23 timeline and what it reveals >> The structural problem: Humans see pixels, AI agents read code - every webpage is an attack vector >> UK NCSC confirmation: Attacks "may never be totally mitigated" >> Why 63% of organizations already experienced breaches before Atlas launched >> Independent research: 8 AI browsers tested, 30 vulnerabilities, every product had critical issues >> Gartner's recommendation: "Block all AI browsers" >> The infrastructure reality: Securing AI agents requires restructuring the entire internet (cost: trillions, timeline: decades) >> Six immediate actions and strategic imperatives THE PARADOX: OpenAI raising $100B at $830B valuation while admitting core vulnerabilities are unfixable, operating where 63% already experience breaches, asking for feature suggestions after admitting the problem can't be solved.

    20 min
  8. 270 Days to Build What? The Genesis Mission Nobody Asked For

    2025-12-06

    270 Days to Build What? The Genesis Mission Nobody Asked For

    November 24t 2025: President Trump signed an executive order launching the "Genesis Mission," a national effort to deploy autonomous AI agents conducting physical experiments in robotic laboratories. Timeline: 270 days. Meanwhile, the FAA needs three years to modernize air traffic control systems that are 50 years old. 37% of these systems are already unsustainable. Four critical systems have no modernization plans at all. In this episode, Yvette Schmitter exposes the dangerous double standard in the Genesis Mission executive order and what it means when government rushes AI deployment while critical infrastructure crumbles. WHAT YOU'LL LEARN: >> Why the Genesis Mission's 270-day timeline is reckless compared to the FAA's 3-year plan for systems we actually understand. The executive order mentions "security" 13 times and "cybersecurity" 3 times - but bias prevention, equity frameworks, and community oversight? Zero mentions. >> 5 recent AI failures from August-November 2025 that prove we're not ready: Kansas City nuclear facility breach, GTG-1002 autonomous cyberattack, Deloitte's $440K AI hallucination refund, Baltimore's gun detection system that mistook Doritos for a weapon, and OpenAI's Mixpanel vendor breach Why we're spending 0.03% of what economists say we need for AI safety - that's 3,000 times less than recommended >> Who's accountable when nobody elected the tech leaders making these decisions >> What you need to do right now before the 270-day clock runs out THE GENESIS MISSION THEY'RE NOT TELLING YOU ABOUT: The Department of Energy is building the "American Science and Security Platform," integrating 17 National Laboratories, supercomputers, and AI systems with advanced scientific instruments. They're creating what they call "the world's most complex and powerful scientific instrument ever built." The goal: double R&D productivity within a decade through AI-driven scientific discovery. What's missing: any requirement for bias testing, equity frameworks, or community oversight in an order that will shape the foundation of every future AI system.

    18 min

About

Code Breakers exposes the AI systems quietly crushing human potential before you even know they existed, from hiring algorithms that screen out brilliant candidates to financial AI that denies loans based on zip codes rather than qualifications. Host Yvette Schmitter, CEO who has audited AI across hundreds of organizations leading the protection of 2M+ users, delivers raw investigations into algorithmic bias, breakthrough stories from innovators refusing to let machines write their destiny, and the exact frameworks you need to fight back when AI gets wrong. This podcast delivers actionable intelligence that puts control back in your hands, because your potential becomes a promise to be kept, not a prediction to be made.