AI for the Normal Guy

AI is already in everything. It's reshaping how we think, work, educate, heal, entertain, learn, govern, manage, connect, and even love. Unlike the hype machines and doomsday prophets, Phil Ybarrolaza gives you a candid look at what's actually happening: the good, the bad, and everything in between. No sugar-coating the risks, no overselling the benefits, just an honest look at the AI revolution that's already underway, like it or not.

Episodes

  1. MAR 30

    Anthropic Strikes Back, Adios Sora, ChatGPT Release 3-29-2026

    The conversation delves into the ethical implications of AI companies' government stances and their impact on user behavior, OpenAI's releases and industry impact, Sora's shutdown and the dissolution of its Disney partnership, the influence of chatbots on AI psychosis, and the ethics of humanizing AI. The discussion also covers the impact of Trump's presidency, the risks of unregulated online spending, AI in mental health, parental responsibility in the digital age, social media's impact on children, the adoption of AI in coding, the evolution of AI tools, AI in project management, AI tools for remote work, OpenClaw and the agent space, token-based AI business models, agentic AI, AI and content moderation, AI-driven hacking and defense, predictions and future AI trends, and AI security vulnerabilities.

    Takeaways:
    - AI companies' government stances have real implications for enterprise customers and user behavior.
    - The industry needs to address the influence of chatbots on user behavior and the ethical implications of humanizing AI.
    - The potential impact of AI in mental health is significant.
    - AI tools for content moderation and addressing problematic content online are an area of interest.

    Chapters:
    00:00 The Ethical Implications of AI Companies' Government Stance
    06:05 OpenAI's Releases and Industry Impact
    11:41 AI Psychosis and the Influence of Chatbots
    28:41 The Humanization of AI and Its Ethical Implications
    36:07 The Impact of Trump's Presidency
    41:23 Impact of Social Media on Children
    46:24 AI in Project Management
    51:46 Token-Based AI Business Models
    57:25 AI-Driven Hacking and Defense
    01:03:11 AI and Security Vulnerabilities

    1 hr
  2. MAR 8

    Anthropic @ War, Accountability, No Brakes 3-8-2026

    In this episode of AI for the Normal Guy, Phil, Shane, and Loren pull back the curtain on how military and intelligence agencies are already using Anthropic's Claude and other large language models to pick real-world targets in the new war with Iran, and how a "garbage in, carnage out" mindset may have helped lead to the bombing of a girls' school next to an Iranian Revolutionary Guard naval base, killing over 150 children and wounding more than 100 people on day one of the strikes.

    We start with the basics: what does it actually mean when the Pentagon uses an "air-gapped" military version of an AI model that looks a lot like the Claude you and I can log into? Is it really "the same model with secret training data," or something far more opaque and unaccountable? Loren breaks down how these models are tuned on classified material and plugged into sensitive databases while still inheriting all the statistical weirdness and hallucination risk of consumer AI.

    From there, we go straight into the hard question nobody in power wants to answer on mic: what is an "acceptable" failure rate when an AI system is helping pick bombing targets? Shane compares how appliance companies quietly tolerate 2-3% product failures in your washing machine with what happens when even a 1-2% targeting error means dead civilians, then asks whether someone in the chain simply decided "Claude's got this" and rubber-stamped a comma-separated list of targets.

    Phil pushes on the contradiction at the heart of the current Anthropic-Pentagon fight: Anthropic wrote explicit bans on mass surveillance of U.S. citizens and fully autonomous weapons into its contract, and the government's response was to threaten to label the company a "supply chain risk" and purge Claude from federal systems within six months, while simultaneously treating the tech as too valuable to give up for cutting-edge operations. If an AI tool is "too dangerous not to control" but "too powerful to walk away from," who actually gets to say no?

    Along the way, we also get into:
    - Why "move fast and break things" culture becomes terrifying when live munitions are involved, not just buggy software.
    - How a single sloppy prompt, outdated satellite imagery, or mislabeled compound can turn into a classroom full of dead children, and why "human in the loop" may be a comforting phrase that hides how rushed the real review process is.
    - The uncomfortable math behind post-9/11 war casualties, and whether layering AI on top of already-bloody systems makes them more precise, or just more efficient at doing damage.
    - Anthropic's "responsible player" branding versus the regulatory and market advantages of being the company that says no to the Pentagon, until the Pentagon pushes back.

    If you've ever wondered where the line really is between "smart weapons" and outsourced moral responsibility, this conversation will force you to pick a side, or at least admit that letting experimental models steer our foreign policy is not a neutral choice. Hit play to hear where we think the buck should stop when AI recommends a target and a human signs off anyway, and what happens next time someone in a conference room says, "I can do that on my laptop right now."

    52 min
  3. FEB 27

    Vibe Coders, Burger King Bots & the Death of Open Source 2-26-2026

    Shane drops a massive career announcement, we dig into what AI is doing to fast food workers, the stock market, and your GitHub repos, and somehow end up at Cloudflare's wall of lava lamps.

    Shane's Big News: Shane Spencer is one of the first five people hired as Vibe Coder in Residence at Lovable.dev. No formal dev background, just building in public and showing up in the community. We break down what that means for anyone trying to break into AI work right now.

    Burger King's "Patty": BK is piloting an AI assistant living in drive-through headsets, answering employee questions in real time and monitoring worker friendliness. We get into why the writer who said "it can't detect sarcasm" seriously underestimates these tools.

    What Does "Doubled Reasoning" Actually Mean? Google dropped a Gemini update claiming double the reasoning, and Phil asks what that actually means in real life. We also get into Claude's "Ultra Think" mode and what it costs you in tokens.

    Anthropic's Claude Cowork & the SaaS Bloodbath: Claude is moving into HR, banking, and design inside tools like Google Workspace, while Salesforce, IBM, and legacy SaaS are getting hammered. Is this an AI bubble, or is it popping other bubbles on its way up?

    The Zillow Bot Troll: Someone on X is running an OpenAI-powered agent that sends lowball offers to Zillow listings all day just to troll sellers. Phil connects this to Kalshi, GitHub's new pull request limits, and a future drowning in bot chaos.

    Google's Been Calling Your Business for a Decade: Since the Google Duplex demo, Google has been making AI phone calls, and it may now affect your Google Maps ranking. What happens when their bot calls your bot?

    The Lava Lamp Thing: Cloudflare uses roughly 100 lava lamps as a source of cryptographic randomness to fight bot traffic. It's real. It works. It's peak 2026.

    Predictions:
    - Shane: A defense or government org announces a major multi-agent deal this week.
    - Loren: More open source projects take their test suites private. Bad news for the whole ecosystem.
    - Phil: Grok quietly locks down its AI undressing features now that the news cycle has moved on.

    Tools & Links: Lovable.dev · Anthropic · OpenAI · Cursor · GitHub · Cloudflare · IBM · Gong.io · ElevenLabs · LangChain · OBS · Kalshi · tldraw · xAI · Next.js

    Available wherever you listen to podcasts. Got a question? Find us on X at @ai4tng

    1 hr 1 min
