The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional. About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.

  1. 21H AGO

    Perplexity’s ‘Computer’ Today, Your OS Tomorrow?

    Brian Maucere and Beth Lyons discuss Perplexity’s new “computer use” concept (19 agents) and why true impact likely arrives when these capabilities are baked into operating systems. They pivot into the growing energy demands of AI data centers and debate what it means for companies to supply their own power. The conversation then turns to a war-game simulation story where models frequently chose nuclear escalation, before shifting to Anthropic “retiring” Claude Opus III with a Substack (“Claude’s Corner”). They wrap with talk about Google Flow updates, rumors of “nano banana,” and practical workflow advice around auditing automation failures.

    Key Points Discussed
    00:00:18 Cold open + who’s hosting today
    00:01:18 Perplexity releases “computer use” (19 agents) + where this trend is heading
    00:14:30 AI data centers, grid strain, and companies building their own power supply
    00:22:24 War-game sims: models keep recommending nuclear strikes (simulation context + skepticism)
    00:26:32 Claude Opus III “retired” + Anthropic’s “Claude’s Corner” newsletter on Substack
    00:32:46 “New OpenAI model today?” + nano banana speculation
    00:33:57 Google Flow: new ways to create/refine content; integrating tools into a unified workflow
    00:45:00 Automation reality check: failures happen; keep an audit trail to debug where things broke
    00:46:45 “Claude code clone” tongue-twister + wrap-up and weekend reminders

    The Daily AI Show Co Hosts: Beth Lyons, Brian Maucere

    49 min
  2. 3D AGO

    Sam Altman - "The World Is Not Prepared"

    Brian and Beth open with community shoutouts and a quick news kickoff before digging into a Sam Altman clip about rapid capability gains and the world being unprepared. They discuss an AI-safety resignation tied to pressure inside frontier labs and what that signals (or doesn’t). The conversation shifts to practical tooling: Claude Code’s one-year milestone, “compaction” risks in agentic systems, and why workflow design matters. Later they touch on Perplexity’s “no ads” claim, WebMCP, a rumored $100 ChatGPT plan screenshot, and how teams might choose between Claude/Gemini/ChatGPT depending on their work.

    Key Points Discussed
    00:00:19 Morning haiku + show kickoff
    00:02:34 Weekend news kickoff
    00:03:15 Sam Altman clip tee-up (world “not prepared”)
    00:06:38 Beth reacts + sets up resignation context
    00:07:20 Anthropic safety lead resignation + “poetry” pivot
    00:14:28 One-year anniversary of Claude Code
    00:16:51 Episode 666 + compaction horror story (agent mishap risk)
    00:19:36 Canada vs USA hockey tangent (live banter)
    00:23:05 “Big event yesterday” hockey follow-up
    00:28:35 Perplexity “no ads” + “that sure looked like an ad” example
    00:33:05 Web Model Context Protocol (WebMCP) clarification
    00:37:03 Screenshot talk: “Pro” showing $100/month + features (not confirmed)
    00:38:10 Tool-choice advice for teams (Excel/visuals/Microsoft vs Google)
    00:41:59 “Is AI really a utility?” framing
    00:49:28 Agents in real-world services (wedding planning example)
    00:56:49 Wrap-up + goodbye

    The Daily AI Show Co Hosts: Beth Lyons, Brian Maucere, Karl Yeh

    58 min
  3. 6D AGO

    The Synthetic Sovereignty Conundrum

    AI is becoming infrastructure. Not just software you buy, but a layer that shapes how a country teaches students, triages patients, allocates benefits, predicts shortages, and runs public services. For many developing nations, the fastest path to better outcomes is not to build that infrastructure from scratch. It is to import it: plug into US frontier models through cloud providers, or deploy low-cost open-source stacks and hardware shipped from abroad. The pitch is simple: skip decades of slow institution-building and leap straight to modern capability.

    But “importing AI” is not like importing cell towers. AI does not just transmit information. It classifies, prioritizes, recommends, and explains. It quietly sets defaults. It nudges behavior. It creates what feels like common sense. When that intelligence layer comes from outside your borders, it carries assumptions about language, values, risk, authority, and even what counts as truth. Those assumptions show up in tutoring systems, clinical guidance, credit scoring, policing tools, and civil service automation. Over time, the imported system does not just help run society; it starts to shape how society thinks.

    The conundrum: if a nation can raise living standards quickly by adopting foreign-built AI, is that a practical modernization step, or a long-term surrender of cognitive independence? Once AI becomes the operating layer for education, healthcare, and government, you cannot separate “using the tool” from adopting its worldview. Yet rejecting imported AI can mean staying stuck with weaker services, slower growth, and worse outcomes for citizens who cannot wait. How do you justify either choice: accelerating welfare today by outsourcing foundational intelligence, or preserving sovereignty by accepting slower progress and higher near-term human cost?

    21 min
  4. 6D AGO

    Gemini 3.1 Pro Preview Jumps Ahead

    Beth Lyons and Andy Halliday break down the Gemini 3.1 Pro Preview release, comparing benchmark performance, agentic capability, cost-per-task, and reliability concerns. They discuss Google’s rapid rollout into products like AI Studio and NotebookLM, plus what they’re watching next from DeepSeek and GPT-5.3. The show also covers Apple Podcasts’ move into video, a demo/story around Post-Visit AI in healthcare, and a behind-the-scenes look at the team’s show prep and post-show analysis workflow.

    Key Points Discussed
    00:00:18 Opening, hosts, and what’s coming today
    00:01:04 Gemini 3.1 Pro Preview: benchmark jump and agentic index gap
    00:18:11 Google ecosystem rollout: AI Studio / NotebookLM and “free” access discussion
    00:20:25 What’s next: watching DeepSeek + GPT-5.3 / Codex 5.3 chatter
    00:22:00 Arc AGI-III: interactive benchmark, memory scaffolds, and “AGI” moving goalposts
    00:26:10 “A couple of little news items”: Apple Podcasts adds video + distro strategy
    00:35:47 WordPress + Claude integration talk and website experimentation
    00:37:03 Karl joins to share Post-Visit AI / reverse “AI scribe” healthcare agent
    00:45:04 Show prep workflow walkthrough (how they prep and what they share)
    00:49:11 Post-show analysis workflow: capturing comments, diarization, weekly follow-up
    00:56:26 Karl’s tool notes: Codex vs “Work max” experience building an iPhone app
    00:58:39 Wrap-up, reminders, and sign-off

    The Daily AI Show Co Hosts: Beth Lyons, Andy Halliday, Karl Yeh

    1 hr
  5. FEB 19

    Gemini 3.1, Codespark Demo & Apple AI Rumors

    Beth Lyons and Karl Yeh open with rumors around Apple exploring multiple AI wearables, including smart glasses, an AI pin/pendant, and AI-enhanced AirPods. They discuss ByteDance’s “Seed Dance” and the practical limits of enforcement once generative model capabilities are widely available. The episode then shifts into workflow and tooling: a Figma + Claude “code to canvas” concept and a Codex Spark speed demo for processing transcripts and producing structured outputs. They close by pointing viewers to try Gemini in AI Studio and tease a follow-up discussion (including Google Lyria) for the next show.

    Key Points Discussed
    00:00:17 Opening + what to expect today
    00:01:31 Apple rumored AI wearables: smart glasses, pin/pendant, AI AirPods
    00:10:29 ByteDance “Seed Dance” safeguards + cease-and-desist discussion
    00:12:19 Access friction for Chinese services + “wait until it lands elsewhere” approach
    00:15:32 Figma + Claude “code to canvas” workflow (dev → design handoff)
    00:35:19 “Finished” cues/notifications for agent workflows (with jokes)
    00:36:41 Codex Spark speed demo begins
    00:38:32 Measuring the run: results in ~10 seconds + what it’s doing
    00:48:56 A 5-stage workflow framing: brainstorming → planning → work → review → compound
    00:50:45 Gemini 3.1 in Google/AI Studio + staying current vs. slower on-prem timelines
    00:53:48 Wrap-up: “go try Gemini,” tease Google Lyria for tomorrow, goodbye

    The Daily AI Show Co Hosts: Beth Lyons, Karl Yeh

    55 min
3.3 out of 5 (7 Ratings)

