ai unprompted

ai unprompted crew

a weekly show about ai aiunprompted.substack.com

  1. 2 DAYS AGO

    022 - Omar Shahine, Microsoft CVP of OpenClaw + Microsoft 365

    ai.u crew talk to Omar Shahine, Microsoft Corporate Vice President of OpenClaw and Microsoft 365, about his tech origins and career. Omar recalls getting an Apple IIe in third grade, automating tasks with tools like FileMaker Pro, and arriving at Microsoft via a 1995 blog and a 1999 tester internship after being rejected from medical school. He highlights formative work in the Mac business unit during Apple’s revival and scaling OneDrive to hundreds of millions of users. Omar describes leadership lessons centered on customer focus and empowering teams, then explains how using Claude Code and building an OpenClaw assistant named “Lobster” (e.g., proactive meeting texts, family coordination, automation tools) led to a viral blog post, a presentation in a Satya-hosted forum, and a role transition to build this capability for Microsoft 365, emphasizing trust, feedback, and personalized, agent-driven productivity.

    00:00 Welcome and Guest Intro
    01:12 Early Tech Spark Apple II
    03:08 From Pre Med to Microsoft
    05:15 Thrown in the Deep End
    06:47 Pinch Me Career Moments
    09:08 Leadership Lessons at Scale
    12:27 Why OpenClaw Matters
    14:11 Building Lobster Assistant
    19:54 Going Viral Inside Microsoft
    22:40 Joining the OpenClaw Team
    24:45 The Story Behind the Hype
    25:43 Why Software Feels Hard
    27:21 Agents Over Buttons
    29:16 Personalized Agent Loops
    31:17 Trust and Accountability
    34:56 Customer Pull and DIY Agents
    37:30 Agents Talking Together
    40:08 Tooling Everyday Life
    41:55 Agent Friendly Internet
    46:24 Advice for Newcomers
    48:11 CoWorker Demo and Wrap

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit aiunprompted.substack.com

    50 min
  2. APR 14

    021 - Anthropic's Mythos: The AI Model That Changes Everything

    ai.u crew discuss the announcement of Claude Mythos preview, a new “frontier model” not released publicly but deployed through a cybersecurity coalition called Project Glasswing. They describe Glasswing’s 12 founding partners (AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JP Morgan Chase, Linux Foundation, Microsoft, Nvidia, Palo Alto Networks, and Anthropic) and report that Mythos found thousands of zero-day vulnerabilities across major operating systems and browsers, including a 27-year-old OpenBSD bug, a 16-year-old FFmpeg issue, and autonomously chained Linux kernel vulnerabilities to escalate privileges. They note benchmark gains (e.g., 66.6% to 83.1% on a security exploit test and 53% to 64% on “Humanity’s Last Exam”), partner feedback that exploit windows are now minutes, concerns about abstraction and cognitive debt, and Anthropic’s $100M credits plus $4M open-source donations, with ongoing U.S. government discussions and future safeguards before broader capability release.

    00:00 Welcome and Setup
    00:57 Mythos and Glasswing
    02:31 Coalition Partners
    03:41 Zero Day Discoveries
    05:09 Chaining Exploits Explained
    06:14 Benchmarks and Scores
    08:16 Not Just Cybersecurity
    11:27 Oppenheimer Moment
    15:26 Partner Results
    19:18 Governance and National Security
    21:16 Digital World Risks
    22:20 Digital Fragility Fears
    22:54 AI Distance From Work
    24:15 Cognitive Debt Explained
    25:48 Agents Everywhere Future
    27:29 Self Healing Systems Drift
    29:36 Alignment Goals And Means
    33:04 Autonomous AI Companies
    35:10 AI For AI Economics
    38:39 Governance Tool Access Risks
    40:54 Mythos Security Outlook
    42:58 Blackwell Training Breakthrough
    44:02 Costs Credits And Zero Days
    45:53 Model Therapy And Dreaming
    46:58 Safeguards Wrap Up

    48 min
  3. APR 3

    020 - What's It Like to Have a Full-Time Personal AI Assistant

    ai.u crew continue their discussion on using new AI tools to “automate yourself,” focusing on agentic products like Claude Cowork, Microsoft Cowork, and Perplexity Computer, how to get started, and subscription costs. They note Claude’s $20/month plan is quickly token-limited and may require upgrading to a higher tier (about $125/month) for sustained use. Kevin describes Cowork controlling a browser to complete a driver safety course (with user oversight), building presentations, and scraping hundreds of sites to assemble a financial model, while cautioning that token use can make simple web tasks inefficient and that guardrails are necessary. Travis highlights common low-hanging uses like consolidating transcripts/emails into documents and raises tensions with websites that block bot behavior, the ad-driven web, and paywalls. The group debates how agents shift attention, incentives, and agency, increase output volume, distance people from work and reality, and change how they read, learn, and connect online, while noting growing experimentation across nontechnical professionals. 
    00:00 Welcome Back and Setup
    00:37 Part Two on Automation
    01:46 Getting Started With Claude
    02:02 Pricing and Token Limits
    03:19 Kevin Tests Cowork
    03:53 Driver Safety Course Demo
    05:22 Scraping and Token Tradeoffs
    07:20 Travis Low Hanging Use Cases
    08:05 Web Bots vs Site Defenses
    09:54 Ads and the Agentic Web
    17:18 Subscriptions and Paywalls
    19:54 Claude Add Ins and Dispatch
    22:44 Building Pitch Decks Fast
    23:58 Agents Change Human Attention
    25:34 Personal Assistants and Insularity
    28:11 Debating an Article With AI
    29:28 Simulated Debate vs Humans
    30:06 AI Comment Slop on LinkedIn
    30:57 Skipping the Messy Learning
    32:47 Everyday People Try AI
    33:58 Life With AI Assistants
    34:58 Developers and Abstraction Drift
    36:23 Outcomes Over Outputs
    37:04 Summaries and Shrinking Attention
    37:59 Agents Gatekeeping Humans
    39:12 Whose Agent Is It
    40:01 Trust Without Expertise
    41:44 Drowning in Agent Activity
    43:22 Robots and Household Tasks
    45:49 High Agency vs Low Agency
    47:51 Writing for Agents Now
    50:15 Proximity Still Matters
    51:44 Subscription Agents Everywhere
    52:39 Wrapping Up the Agent Era
    53:43 Agents Talking to Agents
    54:26 Final Sign Off

    55 min
  4. MAR 23

    019 - From Chatbots to Coworkers

    ai.u crew discuss the shift from prompt-response AI chatbots to “AI coworkers” or computer-use agents that perform multi-step work across apps, highlighting Anthropic’s Claude Cowork ($20–$200/month), Microsoft Copilot Tasks ($30/user/month), and Perplexity Computer ($200/month). They describe the interaction change from asking questions to delegating outcomes, with humans increasingly acting as supervisors who define context, monitor progress, and apply judgment, while noting concerns that convenience may erode competence and that many workflows require undocumented institutional knowledge. They debate whether automating tasks is always worth the setup and trust costs, and suggest processes and software may need redesign. They also examine Anthropic’s qualitative study using an AI interviewer for 81,000 participants, weighing scale and multilingual benefits against lost human connection and empathy.

    00:00 Welcome And Topic Shift
    01:11 New Coworker Tools Overview
    02:36 From Prompts To Delegation
    04:41 Agency And Real Examples
    08:22 Matt Wants Automation
    10:24 Supervisor Mindset And Skills
    14:28 Convenience Versus Competence
    22:01 Three Lanes Of Coworkers
    24:56 Token Spend And Real Debugging
    29:26 Autopilot Limits And Hidden Knowledge
    32:03 Tools Need Skill
    33:08 Prompting Meets Expertise
    35:44 Tribal Knowledge Problem
    38:11 Is Automation Worth It
    38:49 Trust And Context Costs
    41:03 New Companies Advantage
    42:00 AI As Flourishing Tool
    44:31 Claude Interviews Study
    48:57 What Humans Add
    50:45 Where AI Fits Best
    54:11 Human Connection Matters
    56:51 Wrap Up And Feedback

    57 min
  5. MAR 13

    018 - AI and the Creative Industries

    ai.u crew discuss AI’s growing impact on creative industries, citing news that YouTube surpassed Disney as the world’s largest media company with $62B in projected 2025 revenue and that Ben Affleck’s AI-focused filmmaking venture was reportedly acquired by Netflix for $600M, signaling generative tools entering mainstream production. They debate whether AI further democratizes creation like YouTube did, while threatening economic viability for working artists (e.g., Kevin’s graphic-artist daughter) and potentially flooding markets with content. They explore whether art must be “real” to feel real, comparing AI to CGI, animation, and Pixar, and note an AI-generated animated film, “Critters,” debuting at Cannes. Travis warns personalized, self-tailored content could deepen cultural silos, while others predict personalized movies and music will grow, as seen in their use of Suno.

    00:00 AI Hits Hollywood
    02:46 YouTube Beats Disney
    03:16 AI Democratizes Creation
    05:04 Artists Feel The Squeeze
    07:34 Does It Need To Be Real
    09:36 CGI To Full AI Films
    14:11 AI As Creative Coach
    18:16 Economic Fallout For Creators
    21:58 Personalized Movies And Music
    29:28 Art As Shared Experience
    30:58 Personalized Content Silos
    32:19 Can AI Create Real Drama
    33:24 Artists Versus Prompting
    34:43 Suno And Making Your Own Music
    36:16 Authenticity After The Flood
    37:24 Tribes And Lost Shared Culture
    38:58 AI Characters And Fan Versions
    40:02 Uncanny Valley In Emotion
    44:01 Will Smith Spaghetti Breakthrough
    46:13 Follow The Money In Hollywood
    51:21 Prosumer Creativity Everywhere
    54:28 Lowering Barriers With Guardrails
    57:42 Artist Pushback And Human Only Labels
    59:13 Wrap Up And Listener Feedback

    1 h
  6. FEB 27

    017 - The Future of Work: Agent Orchestration

    Ryan, Travis, and guest Shayne Boyer from Microsoft discuss “agent orchestration,” or humans coordinating multiple specialized AI agents to complete multi-step tasks. They cite recent developments like Perplexity Computer, Google/Samsung Gemini multi-step mobile agents, Open/Claude tools, and Microsoft Copilot Tasks, and explain that routing work to the best model and giving agents tool access are key trends. The conversation stresses that despite hype, agents are brittle, often produce low-value output, and require heavy human “composer/puppet master” supervision, clear prescriptions, guardrails, evaluation, and delegation skills. They compare multi-agent setups to specialized human teams, to planning/execution/eval roles, and even to autopilot risks around over-trust, while noting sustainability and cost/token limits. They encourage listeners to start small and gradually delegate tasks without becoming paralyzed.

    00:00 Welcome and Guest Intro
    01:05 What Agent Orchestration Means
    03:18 Why Agents Are Everywhere Now
    04:15 Travis on Agentic Computing
    06:11 Shayne on Jarvis Dreams
    08:26 When Agents Fail Hilariously
    09:56 Tools and Model Routing
    11:38 Delegation and Trust Risks
    14:50 How Orchestration Works
    18:25 Ant Farm Multi Agent Experiment
    21:14 Why Multi Agent Helps
    25:49 Baseball Team of Agents
    27:56 Sustainable AI Pace
    28:55 Empowered PR Culture
    30:26 Grumpy Reality Check
    34:57 Gardening the Agents
    38:46 Supervision Is the Job
    42:27 Managing Agent Teams
    44:07 Multi Agent Life
    45:28 Token Costs and Access
    47:43 Demystify the Hype
    50:06 Try It Step by Step
    51:17 Wrap Up and Thanks

    52 min
  7. FEB 20

    016 - The Future of AI Is What We Choose Not to Build

    ai.u crew discuss a LinkedIn post by Microsoft AI CEO Mustafa Suleyman (co-founder of DeepMind and Inflection AI) and his argument that the next decade of AI will be shaped more by what we choose not to build. They unpack three themes: (1) AI should not pretend to suffer or have an inner life; its value is in “inhuman strengths” like endless patience, tireless explanations, and calm reasoning. The hosts debate AGI vs superintelligence and distinguish behavioral realism from moral status, warning against attributing consciousness or rights to AI. (2) Suleyman’s stance against AI romance/erotica and concerns about dependency, isolation, and “AI psychosis,” noting Microsoft Copilot will not allow those use cases; they contrast risky attachment-driven products with beneficial roleplay for training, interviews, or preparing difficult conversations, while acknowledging blurred lines and the need for safeguards. (3) They address “unchecked superintelligence,” agreeing humans should remain in the driver’s seat and favoring domain-focused, humanist superintelligence (e.g., medicine, clean energy) rather than all-powerful systems; they explore whether humans become bottlenecks and emphasize keeping AI as a tool that supports human flourishing, not a replacement for human relationships or agency. The episode closes with plans to invite Suleyman onto the show and a request for listener feedback.

    00:00 Welcome to AI Unprompted + Why This Episode Is Different
    00:56 Who Is Mustafa Suleyman? DeepMind, Inflection, and Now Microsoft AI
    02:03 The Provocative Thesis: The Next Decade Is About What We Don’t Build
    02:35 Point #1: Don’t Build AI That ‘Suffers’, Lean Into Inhuman Strengths
    07:01 AGI vs Superintelligence: Do Emotions or Social IQ Matter?
    10:14 Endless Patience vs ‘Moral Status’: Why Human-Like Talk Isn’t Personhood
    16:49 Point #2: Romance/Erotica Bots, Dependency, and ‘AI Psychosis’ Risks
    19:25 Roleplay for Training vs Intimacy: Where to Draw the Line
    22:43 Inevitable Human-Likeness: Guardrails, Labels, and Protecting Users
    26:56 The ‘Why’ Behind AI Products: Engagement, Revenue, and Ethical Design Tensions
    27:58 Engagement vs. Ethics: When AI Is Built to Manipulate
    28:56 Accelerationism & Who Gets to Set AI’s Moral Limits?
    30:13 Mustafa’s Case for Slowing Down (So We Don’t Lose the Plot)
    31:15 Tool, Not a Being: The Danger of Assigning AI Consciousness & Rights
    33:30 Sycophantic Bots, Weakening Pushback, and Relationship Substitution
    36:57 Social Media as the Warning Label for AI Attachment
    37:49 No Unchecked Superintelligence: Domain-Focused Models + Humans in the Driver’s Seat
    41:16 When Humans Become the Bottleneck: The Temptation to Hand Over Agency
    42:51 AI as ‘Our Own God’? What We Lose When We Outsource Life’s Meaning
    48:00 Workload Creep & Remembering What Makes Us Human (Plus Final Sign-off)

    53 min
