Design of AI | Build Products that Customers & Businesses Value

Design of AI

We provide a pragmatic, practical deep dive into what AI can do and how it is transforming industries. We help designers, researchers, and product managers excel in a rapidly changing future.

Hosted by:
Arpy Dragffy Guerrero https://www.linkedin.com/in/adragffy/
Brittany Hobbs https://www.linkedin.com/in/brittanyhobbs/

Subscribe to our Substack so you never miss an episode and to receive more strategic insights and news: https://designofai.substack.com/

Brought to you by PH1 (https://ph1.ca), a strategy consultancy specializing in improving the success of your AI product.

  1. 52. Clawd Bot & Moltbook: When Demos Hijack Reality [Jim Love]

    FEB 10

    52. Clawd Bot & Moltbook: When Demos Hijack Reality [Jim Love]

    Viral agent demos are training product teams to trust spectacle instead of outcomes—and that’s how unsafe automation slips into real workflows. In this episode we welcome Jim Love, one of the most respected voices in technology news, to unpack what “Clawdbot / OpenClaw” and Moltbook-style experiments actually prove, what they exaggerate, and why the hardest problems aren’t capability—they’re control, security, and measurement.

    In this episode we cover:
    Why viral demos distort reality: Hype spotlights novelty, not reliability—so teams miss what breaks when the demo meets real users.
    Local agents raise risk fast: Local access turns assistants into operators—writing, deleting, impersonating, and expanding blast radius.
    “It learns” is overstated: Many stacks “learn” by saving state—easy to inspect, steal, poison, and manipulate.
    Emergence isn’t intelligence: Weird behaviors can emerge at scale without intent—don’t mistake patterns for agency or judgment.
    Outcomes > inputs, always: Great teams define success, measure impact, and kill distractions—even when the tech looks magical.

    You’ll leave with a sharper lens for evaluating agent stacks before they create collateral damage you can’t see or stop.

    Jim Love has spent more than 40 years in technology, working globally as a consultant, leading an international consulting practice, serving as a CIO, and building his own consulting company. He was also CIO and head of content at the iconic publication IT World Canada. Today he runs a new publication, Tech Newsday, and hosts two widely followed technology podcasts, Cybersecurity Today and Hashtag Trending. He continues to advise a select group of companies, mostly startups looking to deal with AI. Jim is the author of both fiction and non-fiction, including Digital Transformation in the First Person. His latest novel, Elisa: A Tale of Quantum Kisses, explores a near-future shaped by artificial intelligence and became an Audible bestseller shortly after release.

    Tech Newsday — Jim Love’s publication covering tech, AI, and security: https://technewsday.com/
    Hashtag Trending — the podcast feed for fast tech headlines + commentary: https://technewsday.com/podcasts_categories/hashtag-trending/
    Elisa: A Tale of Quantum Kisses — Jim’s near-future AI novel (Amazon listing): https://www.amazon.com/Elisa-Quantum-Kisses-Jim-Love/dp/B0DPFZMDGZ

    If this episode helped, follow/subscribe so you don’t miss what’s next. And if you’re listening on Apple Podcasts or Spotify, leave a rating and a review—it’s the simplest way to help more product teams find the show. Get the ideas, frameworks, and episode takeaways as a written brief—subscribe to the Design of AI Substack.

    PH1 Research helps product teams improve digital experiences in the AI era—across strategy, benchmarking, and UX evaluations—so you can measure what matters, reduce impact blindness, and ship systems customers actually trust and adopt. Learn more at https://www.ph1.ca/.

    43 min
  2. 51. Agents Will Disrupt Search & Shopping [Devi Parikh, CEO Yutori, ex Meta]

    FEB 2

    51. Agents Will Disrupt Search & Shopping [Devi Parikh, CEO Yutori, ex Meta]

    While the world is obsessed with the Moltbot/Clawdbot AI agent, founders like Devi Parikh are laying the foundation for how agents will transform search and shopping—agents that monitor, negotiate, and navigate on behalf of users, securely. Search is becoming proactive. Shopping is becoming delegated. And the next interface won’t be a results page—it’ll be agents running quietly in the background, surfacing what matters when it matters.

    How agents turn search into continuous monitoring
    Why shopping shifts from browsing to delegation
    Where value shows up first in real workflows
    What trust requires before agents can transact
    The path from alerts → actions → autonomy

    In this episode, Devi breaks down how Scouts reframes search as “future-facing discovery”: track price drops, in-stock alerts, sales leads, funding news, flights, and local events—then get notified the moment conditions change. We also explore what comes next: moving from monitoring to task completion—where agents can execute purchases and bookings with explicit confirmations, hard guardrails, and a deliberate “trust staircase” designed to prevent surprises.

    If you enjoyed this episode, follow the podcast and leave a rating + review—it helps more builders find the show. Subscribe to the Design of AI Substack for in-depth AI product strategy resources, operator-grade analysis, and frameworks on what makes AI products succeed (and why they fail).

    This episode is brought to you by PH1 Research—a strategy + research partner for product leaders shipping AI-enabled experiences. We help teams define success metrics that actually matter, validate value before scaling, and reduce trust and adoption risk through AI strategy, UX evaluation, and evidence-driven product decisions.

    Devi Parikh is the co-founder and co-CEO of Yutori, and was previously a Senior Director in Generative AI at Meta and an Associate Professor at Georgia Tech. Her research focuses on human–AI collaboration, generative AI, multimodal AI, and AI for creativity. She holds a Ph.D. from Carnegie Mellon University and has received recognitions including the PAMI Mark Everingham Prize.

    Try Scouts: https://scouts.yutori.com/
    Blog: The Bitter Lesson for Web Agents: https://yutori.com/blog/the-bitter-lesson-for-web-agents

    43 min
  3. 50. Designing AI for 2026: Trust, Cost, Orchestration [Yaddy Arroyo]

    JAN 20

    50. Designing AI for 2026: Trust, Cost, Orchestration [Yaddy Arroyo]

    2026 will reward AI products that get three things right: trust, cost, and orchestration. This episode looks ahead at how those forces are reshaping AI product strategy—and what teams need to pay attention to now. Brittany and Arpy are joined by Yaddy Arroyo, who has spent a decade designing multimodal AI systems in financial services, where reliability and governance are table stakes. She has also been one of the key community builders among designers who are leading within AI orgs. Together, they reflect on what the last two years of AI adoption revealed and how those lessons are directly informing decisions teams are making in 2026.

    Why trust now shapes AI product success
    Orchestration matters more than prompting
    Token costs quietly reshape UX decisions
    When small models outperform large ones
    How AI design roles must evolve in 2026

    Episode chapters
    01:21 Reflecting on Two Years of AI Adoption
    02:52 The Rise of Copilot and AI's Impact on Creativity
    03:37 Challenges and Concerns with AI Safety
    04:24 Designing AI for Human-Centric Use Cases
    04:53 Meta's Investment and Intelligence as a Service
    09:25 Hallucinations and the Reliability of LLMs
    11:14 The Business Value and Limitations of Gen AI
    18:55 Founders and the Rush to Monetize AI
    19:25 Token Optimization and UX Challenges
    21:31 Personalizing AI Interactions
    21:48 Challenges in AI Adoption
    22:27 PH1's AI Solutions
    22:53 The Orchestration Problem
    24:22 AI's Role in Everyday Tasks
    26:08 AI in UX and Design
    27:55 Future of AI and Small Language Models
    30:35 Human in the Loop and UI Generators
    37:35 Accountability and AI's Future
    42:39 Closing Thoughts and Future Directions

    The conversation connects early generative AI optimism with today’s realities—probabilistic systems, rising costs, and scaling pressure—and surfaces where momentum is building, from smaller models to on-device intelligence. This episode also marks Episode 50 of Design of AI and two years of conversations with builders, researchers, and leaders shaping AI-powered products—follow the podcast to stay ahead as this next phase unfolds.

    About PH1
    The Design of AI podcast is brought to you by PH1, an AI strategy consultancy. PH1 has worked with the biggest corporations in tech to redefine CX in the era of AI through strategic research, prototyping, and aligning product to power. Visit ph1.ca to ask about your project.

    Go Deeper
    For deeper, unfiltered thinking on AI strategy, governance, and product decisions, our Substack (https://designofai.substack.com) is the best place to follow our work. It’s where we go beyond the episodes—breaking down what’s actually changing, what’s overhyped, and what leaders should do next.

    Connect with the Hosts
    Contact Arpy if you’re navigating AI product strategy, platform architecture, orchestration, or high-stakes system decisions that need to scale.
    Contact Brittany if you need clarity on AI UX, research, service design, or evaluating whether an AI product is actually delivering value for users.

    45 min
  4. 49. AI Was Supposed to Help Humans. What Happened? [Ovetta Sampson]

    JAN 2

    49. AI Was Supposed to Help Humans. What Happened? [Ovetta Sampson]

    If you’re building your product on private large language models, you are outsourcing control of your business—your data, your roadmap, and your long‑term defensibility—to companies whose incentives do not align with yours.

    Ovetta Sampson is a tech industry leader who has spent more than a decade leading engineers, designers, and researchers across some of the most influential organizations in technology, including Google, Microsoft, IDEO, and Capital One. She has designed and delivered machine learning, artificial intelligence, and enterprise software systems across multiple industries, and in 2023 was named one of Business Insider’s Top 15 People in Enterprise Artificial Intelligence. In 2025, Ovetta left her role as Director of AI and Compute Enablement at Google to found Right AI, a consultancy focused on helping organizations minimize the human, organizational, and strategic risks of building and deploying AI.

    In this episode you'll learn about:
    Why LLM‑first architectures undermine control and defensibility
    How enterprise data is unintentionally exposed and reused
    Where “responsible AI” breaks down in practice
    When generative AI is the wrong tool
    What safer, controllable AI systems look like instead

    If this episode challenged how you’re thinking about AI, make sure you’re following Design of AI wherever you listen to podcasts. Rating and reviewing the show helps more founders, product leaders, and designers find these conversations. For deeper, unfiltered thinking on AI strategy, governance, and product decisions, our Substack (https://designofai.substack.com) is the best place to follow our work. It’s where we go beyond the episodes—breaking down what’s actually changing, what’s overhyped, and what leaders should do next.

    Ovetta’s work focuses on helping leaders, designers, and organizations reduce human and systemic risk in AI—without defaulting to hype-driven architectures or opaque models.
    Follow Ovetta on LinkedIn: https://www.linkedin.com/in/ovettasampson/
    About Ovetta & her work: https://www.ovetta-sampson.com/
    Join her mailing list: https://www.ovetta-sampson.com/mailing-list-qr-code
    Right AI (consulting & advisory): https://www.rightainow.com/
    Free Mindful AI Playbook (QR Code): https://docs.google.com/presentation/d/1Tzsr25r4o0g0Szz4oOSnUvrrrxAuXfhpqcB08KzdTyA/edit?usp=sharing

    This is episode 49 and was hosted by Arpy Dragffy Guerrero. Follow him on LinkedIn: https://www.linkedin.com/in/adragffy/

    The Design of AI podcast is brought to you by PH1, an AI strategy consultancy. PH1 has worked with the biggest corporations in tech to redefine CX in the era of AI through strategic research, prototyping, and aligning product to power.

    48 min
  5. 48. AI Trap: Hard Truths About the Job Market

    12/15/2025

    48. AI Trap: Hard Truths About the Job Market

    2025 is almost over, and it’s time to stop pretending everything is fine. If you work in design, writing, product, research, or agencies, you’ve felt it: fewer jobs, lower rates, shrinking teams—and an industry telling you AI is here to free you while quietly replacing you.

    In AI Trap, Episode 48, we break down the biggest myths we’ve been sold:
    AI will free creatives to do more meaningful work
    AI will create more jobs than it destroys
    AI will make us smarter and more creative

    Some of these are partially true. That’s what makes them dangerous. We look at real data, real job market signals, and what’s already happening inside agencies and tech companies. We talk about why creativity is being commoditized, why value is collapsing for most creatives, and the line too many people are crossing: outsourcing their thinking instead of outsourcing their work.

    Please help us: we’re running a short survey alongside this episode. If you work in a creative or knowledge role, your input is critical. It takes about three minutes, and it helps us separate hype from reality. https://tally.so/r/Y5D2Q5

    This is episode 48 of the Design of AI podcast. If you found this conversation valuable, please rate and share the show — your support shapes what we explore next. For more AI strategy, creative research, and product insight, subscribe to designofai.substack.com

    Hosted by Arpy Dragffy Guerrero & Brittany Hobbs

    Most AI projects fail—not because the technology is weak, but because they’re not designed to deliver real customer value. PH1 Research helps organizations reimagine their customer experience with AI. We pinpoint what customers actually need, prototype and test solutions, and audit AI products before they ship. We’ve worked with teams at Microsoft, Spotify, and fast-growing startups. Learn more at ph1.ca, or reach out directly to our host, Arpy Dragffy.

    30 min
  6. 47. The Future of Human–AI Creativity [Dr. Maya Ackerman]

    12/03/2025

    47. The Future of Human–AI Creativity [Dr. Maya Ackerman]

    AI is threatening creativity, but that's because we're giving too much control to the machine to think on our behalf. In this episode, Dr. Maya Ackerman — AI-creativity researcher, professor, and author of Creative Machines: AI, Art & Us — explains why the danger isn’t AI itself, but the way we’re designing AI products. She breaks down how today’s tools are unintentionally flattening originality, how “Oracle-mode” models limit imagination, and why we must shift toward building systems that expand human creativity rather than automate it away.

    For designers, product managers, and builders, this conversation is a blueprint for developing AI tools that inspire exploration, push users beyond predictable patterns, and create space for genuine ingenuity. If you design tools for creative work, this episode reframes what it means to build technology that actually elevates the human mind rather than quietly replacing it.

    ➤ Why most AI products suppress creativity: How over-alignment and “correctness” kill imaginative output, and what to do instead.
    ➤ How hallucinations can fuel originality: Why they’re not failures, but essential sparks for new creative directions.
    ➤ How AI is reshaping cultural expectations of music, art, and design: And what this means for teams building creative platforms today.
    ➤ How to design “Humble Creative Machines”: AI that enhances a creator’s skill and taste instead of taking over the process.
    ➤ How to build incentives that reward curiosity and exploration.

    02:50 The Role of AI in Enhancing Human Creativity
    03:26 Historical Perspective on AI and Creativity
    04:39 The Importance of Novelty and Value in AI Creativity
    05:52 AI's Potential Beyond Current Applications
    07:32 Hallucinations: Feature or Bug?
    08:51 Ethical Considerations in AI Creativity
    11:13 Humble AI: Elevating Human Creativity
    24:18 The Role of AI in Creativity
    25:49 Integrating AI into Creative Workflows
    26:47 AI as a Creative Assistant
    27:31 AI in Coding vs. Creative Fields
    29:51 Incentives in the Creative Domain
    31:43 Balancing Technology and Humanity
    39:33 The Future of Education with AI
    43:37 Practical Tips for Adapting to AI
    44:04 Collective and Individual Actions for the AI Era
    45:44 Final Thoughts and Book Promotion

    The thread throughout: shifting product strategy away from speed and efficiency toward human ingenuity.

    Creative Machines: AI, Art & Us — Dr. Ackerman’s book: https://maya-ackerman.com/creative-machines-book
    LyricStudio — AI-powered lyric writing: https://lyricstudio.net
    “Humble Creative Machines: Creating AI to Elevate You”: https://www.theaioptimist.com/p/humble-creative-machines-creating

    This is Episode 47 of the Design of AI Podcast. If you found this conversation valuable, please rate and share the show — your support shapes what we explore next. For more AI strategy, creative research, and product insight, subscribe to https://designofai.substack.com.

    Hosted by Arpy Dragffy Guerrero https://www.linkedin.com/in/adragffy/

    Brought to you by PH1, a strategy consultancy specializing in prototyping the future of your products and business. https://ph1.ca

    It’s time for our year-end survey: Professional Success in the era of AI 2026. So many of you tell us you’re worried about your jobs and how to stay relevant as AI is rapidly evolving. You’ll also help us shape next year’s podcast topics. You can participate at https://tally.so/r/Y5D2Q5 – it will only take 5 minutes of your time.

    46 min
  7. 46. The AI Commercialization Playbook: Stop Selling Tech, Start Delivering Value [Jessica Randazza Pade, Neurable]

    11/12/2025

    46. The AI Commercialization Playbook: Stop Selling Tech, Start Delivering Value [Jessica Randazza Pade, Neurable]

    The hardest part of building an AI product isn’t the model, it’s the market. Jessica Randazza Pade, one of today’s leading experts in AI commercialization and Head of Brand Activation & Commercialization at Neurable, joins us to share why most AI products fail and how to build ones that don’t. She explains how to close the gap between engineering and adoption, why “AI” is not a value proposition, and how making technology invisible can become your biggest competitive advantage. Her core message: you have to deliver real value if you want to keep your customers tomorrow.

    What you’ll learn:
    ➤ Why “AI” isn’t a value prop and how to build for outcomes, not hype.
    ➤ The three traits every product that actually sells has in common.
    ➤ How to align engineering and marketing through user feedback loops.
    ➤ Why “cheaper, faster, better” is a trap and empathy is the real moat.
    ➤ The next frontier in AI health and personalization.

    This is episode 46 of the Design of AI Podcast. Please rate the show if you find it valuable, as your ratings and comments shape future topics. For more AI strategy and research, subscribe to our newsletter: designofai.substack.com

    This episode was hosted by Arpy Dragffy Guerrero: https://linkedin.com/in/adragffy

    Brought to you by PH1 Research, a strategy consultancy specializing in mapping the future of your business and product.

    Jessica leads the team driving brand activation and commercialization at Neurable. Named one of Campaign US’s 40 Over 40 and Elle Magazine’s 40 Under 40, Jessica is an award-winning global digital marketer, business leader, and storyteller. With a proven track record of helping both Fortune 100 companies and startups achieve explosive growth through data-driven, real-time marketing, she has built and sold successful companies and led high-performing teams at IDEO, Danone, and DigitasLBi across the U.S., EMEA, and Greater China.
    Follow her on LinkedIn: linkedin.com/in/jessicarandazza
    Learn more about Neurable and their pioneering brain-computer interface: https://www.neurable.com/

    It’s time for our year-end survey: Professional Success in the era of AI 2026. So many of you tell us you’re worried about your jobs and how to stay relevant as AI is rapidly evolving. You’ll also help us shape next year’s podcast topics. You can participate at https://tally.so/r/Y5D2Q5 – it will only take 5 minutes of your time.

    45 min
  8. 45. Agentics: Rebuilding How We Think, Work, and Create with AI [Kwame Nyanning, Author of Agentic]

    10/23/2025

    45. Agentics: Rebuilding How We Think, Work, and Create with AI [Kwame Nyanning, Author of Agentic]

    The next revolution in AI isn’t another tool, it’s a reconstruction of how organizations think, act, and create meaning. Strategist and designer Kwame Nyanning (Partner & Chief Design Officer at EY Seren) argues that the companies that win the agentic era won’t be the fastest to automate — they’ll be the first to redesign their ontology: the invisible system of goals, relationships, and decisions that define value. His book Agentics: How to design AI agents for impact, growth & innovations reframes AI not as a tool, but as a design medium for intent, alignment, and emergence.

    What you’ll learn:
    ➤ How agentic systems blur the line between user, product, and enterprise—and what it means to lead in that new reality.
    ➤ Why design is shifting from pixel-perfect outputs to intent-perfect systems, where meaning and business-redefining ontology drive performance.
    ➤ Why the real risk of automation isn’t lost jobs but lost meaning, and how friction, constraint, and emotion become the new luxury.
    ➤ How agencies and teams can thrive by trading efficiency for proof of alignment and outcomes that matter.

    This is episode 45 of the Design of AI Podcast. Please rate this podcast if you find it valuable. Your podcast ratings and comments are the best way to influence which topics we cover in future episodes. And if you want more AI strategy & research content, subscribe to our Substack newsletter at https://designofai.substack.com/

    Our next AI Product Strategy workshop is happening on November 20. In this 3-hour online workshop you’ll leverage our framework for discovering and pressure-testing disruptive AI product ideas. Get more info at designof.ai/workshop

    This episode was hosted by:
    Arpy Dragffy Guerrero https://www.linkedin.com/in/adragffy/
    Brittany Hobbs https://www.linkedin.com/in/brittanyhobbs/

    This episode is brought to you by PH1 Research, a research & strategy consultancy specializing in mapping the future of your business & product. Get more info at PH1.ca

    Kwame Nyanning is a strategist, designer, and thought leader working at the intersection of AI, enterprise transformation, and human-centered design. Formerly with McKinsey, Frog, and Infosys, and now embedded in EY Seren, Kwame’s work explores how organizations must move beyond incremental automation and instead reimagine their very ontology — the foundational logic of how value and meaning are created.
    Follow him on LinkedIn: https://uk.linkedin.com/in/kwamenyanning
    Buy his must-read book, available in paperback or Kindle.

    Follow the Design of AI podcast:
    Spotify: https://open.spotify.com/show/3O11vQKPpKI5ZlJhdRGwnf
    Apple Podcasts: https://podcasts.apple.com/us/podcast/design-of-ai-product-strategy-innovation-career-growth/id1734499859
    YouTube: https://www.youtube.com/@DesignofAI

    45 min

Ratings & Reviews

4.7 out of 5 (3 Ratings)

