STUDY GUIDE VS GUARDRAILS

Feb 9, 2026: OpenAI ads launch. Monetization ON by default.
Feb 13, 2026: Four days later, the Department of Labor (DOL) releases its AI Literacy Framework. Voluntary tips for workers. Zero enforcement.
EU AI Act: Enforceable law since August 2024. Penalties up to €35M.

Three completely different approaches.

THREE FRAMEWORKS:

1. EU: Bans 8 AI categories, including social scoring, manipulative AI, biometric surveillance, and emotion recognition. High-risk systems: providers must prove safety BEFORE deployment. Penalties: €35M or 7% of global revenue, whichever is higher.

2. US: Five content areas (understand principles, explore uses, direct effectively, evaluate outputs, use responsibly). Zero prohibited uses. Zero provider obligations. Zero enforcement. Zero penalties.

3. OpenAI: Ads with defaults ON (personalization, data collection, targeting). ~$60 per 1,000 impressions. $200K minimum buy. Conversations become advertising inventory.

ACCOUNTABILITY INVERSION:

DOL asks workers to: understand AI, evaluate outputs, protect information, act responsibly, maintain accountability.
ISO 42001 requires organizations to: implement risk management, establish data governance, create documentation, deploy oversight, maintain audit trails.

When AI causes harm under ISO 42001: the organization proves the system, shows its controls, and corrects. When AI causes harm under DOL guidance: the question is whether the worker was responsible. Systems vs. individuals. A complete inversion.

ENFORCEMENT GAP:

EU prohibited AI: €35M or 7% of turnover.
EU high-risk violations: €15M or 3% of turnover.
US DOL framework: zero penalties.

Not a difference in philosophy. A choice not to regulate.

TIMELINE:

April 2021: EU proposes the AI Act.
August 2024: the Act enters into force.
February 2025: EU penalties become active.
Feb 9, 2026: OpenAI launches ads.
Feb 13, 2026: DOL publishes tips.

18 months behind. Voluntary guidance while the EU enforces law.

WHAT WORKERS GET:

EU: protection from prohibited AI, complaint rights, mandatory oversight, provider safety proof, transparency, legal recourse.
US: suggestions, prompt tips, encouragement to be responsible, no enforcement, no new rights, individual accountability.
OpenAI: monetized conversations, defaults ON, manual opt-out, reduced functionality without payment.

BUSINESS MODEL:

OpenAI charges roughly $60 per 1,000 impressions, with a $200K minimum buy. Workers using free ChatGPT are the product, not the customer. The DOL framework never mentions this. It never addresses tools optimized for advertiser revenue rather than user outcomes. Teaching literacy while the systems monetize attention is not education. It is preparation for extraction.

THE QUESTIONS:

If literacy is the answer, what was the question? Not "how do we prevent bias and harm." We already know how: test before deployment, prohibit harmful uses, require oversight, enforce with penalties. The EU did this. It is working. Companies are complying. Workers are protected.

Why does the US framework name zero prohibited uses when the EU banned 8 categories? Why release education tips the same week OpenAI proved voluntary compliance fails? Why ignore ISO 42001, an internationally recognized standard, in favor of worker tips?

BOTTOM LINE:

Europe prohibits harmful AI, requires proof of safety, mandates transparency and oversight, and enforces with penalties reaching €35M or 7% of global revenue. America offers voluntary guidance, prompt-engineering tips, and encouragement to use AI responsibly.

Four days before the DOL framework, OpenAI demonstrated why voluntary fails. ISO 42001 exists. It is internationally recognized. It works. The DOL did not require it.

Not a disagreement. A fundamental choice about who bears the risk. Europe regulates providers. The US educates workers. One prevents harm. The other documents it.
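The arithmetic behind the figures above can be sketched in a few lines. This is a minimal illustration, not anything from the frameworks themselves: the penalty tiers are "fixed floor or percentage of turnover, whichever is higher" (€35M/7% for prohibited AI, €15M/3% for high-risk violations), and the ad figures imply a minimum impression volume. The revenue inputs are hypothetical examples.

```python
def eu_prohibited_penalty(annual_turnover_eur: float) -> float:
    """Prohibited-AI tier: EUR 35M or 7% of worldwide turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

def eu_high_risk_penalty(annual_turnover_eur: float) -> float:
    """High-risk tier: EUR 15M or 3% of worldwide turnover, whichever is higher."""
    return max(15_000_000, 0.03 * annual_turnover_eur)

def min_impressions(minimum_spend_usd: float = 200_000, cpm_usd: float = 60) -> float:
    """Impressions implied by the $200K minimum buy at ~$60 per 1,000 impressions."""
    return minimum_spend_usd / cpm_usd * 1_000

# Hypothetical turnover figures to show how the tiers behave:
print(eu_prohibited_penalty(2_000_000_000))  # 7% of EUR 2B exceeds the EUR 35M floor
print(eu_prohibited_penalty(100_000_000))    # the EUR 35M floor applies
print(min_impressions())                     # roughly 3.33M impressions to meet the minimum
```

For a large provider the percentage dominates; for a small one the fixed floor does, which is why both numbers appear in the law.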