Discussing Stupid: A byte-sized podcast on stupid UX

High Monkey

Discussing Stupid returns to the airwaves to transform digital facepalms into teachable moments—all in the time it takes to enjoy your coffee break! Sponsored by High Monkey, this podcast dives into ‘stupid’ practices across websites and Microsoft collaboration tools, among other digital realms. Our "byte-sized" bi-weekly episodes are packed with expert insights and a healthy dose of humor. Discussions focus on five key areas: Business Process & Collaboration, UX/IA, Inclusive Design, Content & Search, and Performance & SEO. Join us and let’s start making the digital world a bit less stupid, one episode at a time. Visit our website at https://www.discussingstupid.com

  1. 3D AGO

    S3E10 - Intentional AI: The Super Bowl didn't sell AI, it exposed it

    In Episode 10, we take a short detour in our Intentional AI series to talk about the Super Bowl. Not the game. The ads. A noticeable chunk of them leaned hard into AI. On the surface, it felt like a big moment for the industry. But when you look closer, it raises a different question. Are we watching real progress, or just very expensive hype? We unpack what was actually being sold, what was implied, and what gets left out when AI is positioned as effortless. AI has value. We are not arguing that it does not. But it works best when it is used intentionally and within clear boundaries. When it is marketed as a replacement for thinking, planning, or strategy, that is where things fall apart. If you are trying to separate signal from noise, this one is for you.

    Previously in the Intentional AI series:
    Episode 1: Intentional AI and the Content Lifecycle
    Episode 2: Maximizing AI for Research and Analysis
    Episode 3: Smarter Content Creation with AI
    Episode 4: The role of AI in content management
    Episode 5: How much can you trust AI for accessibility
    Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO
    Episode 7: Why AI can make your content personalization worse
    Episode 8: The real value of AI wireframes is NOT the wireframes
    Episode 9: Just because AI can create images doesn't mean you should use them

    New episodes every other Tuesday. For more conversations about AI, design, and digital strategy, visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters
    (0:00) - Intro
    (0:42) - We had to talk about the Super Bowl
    (2:05) - The numbers behind AI in the Super Bowl
    (3:55) - How AI is marketed vs the reality of AI
    (7:30) - This is why we started Intentional AI
    (8:30) - Reflections on the current realities of AI
    (13:20) - Where does AI make the most sense?
    (15:30) - Our reaction to the AI-generated ads
    (17:30) - Join us and learn to be responsible with AI
    (19:00) - Outro

    Disclaimer: there is a math error at 18:15; the correct calculation is closer to $100-150 million.

    Subscribe for email updates on our website: https://www.discussingstupid.com/
    Watch us on YouTube: https://www.youtube.com/@discussingstupid
    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024
    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0
    https://links.discussingstupid.com/soundcloud

    20 min
  2. FEB 10

    S3E9 - Intentional AI: Just because AI can create images doesn't mean you should use them

    In Episode 9 of the Intentional AI series, Cole and Virgil take on one of the most common and misunderstood uses of AI today: image and graphic generation. From social media visuals to promotional graphics, AI images are fast, easy, and everywhere. The conversation focuses on why images became the public on-ramp to AI and why that familiarity creates risk. Visuals feel harmless, but the moment AI starts generating finished-looking images, teams inherit decisions around ownership, ethics, and trust that they are often unprepared to make. A central theme of the episode is responsibility escalation. As AI reduces the effort required to create images, the importance of human judgment increases. Treating AI-generated visuals as final work can quickly introduce legal, ethical, and reputational problems. Virgil shares a practical experiment where he used a simple prompt to generate three social media promotional graphics from an existing article and tested the results across three tools: Canva, Claude, and Artlist. Canva produced the most generic and repetitive designs. Claude delivered cleaner structure and stronger messaging but struggled with fonts, formats, and variation. Artlist created the most visually interesting outputs, though it introduced workflow limitations and cost concerns. The episode reinforces a consistent conclusion across the series. AI can help jumpstart visual work, but it cannot replace judgment, intent, or responsibility.

    In this episode, they explore:
    Why AI images are so tempting to use
    Where AI-generated graphics actually help
    Why most AI visuals fall flat
    Ethical and ownership risks teams overlook
    A comparison of Canva, Claude, and Artlist

    A downloadable Episode Companion Guide is available below with example outputs and tool takeaways.
    https://links.discussingstupid.com/s3e9companion

    Previously in the Intentional AI series:
    Episode 1: Intentional AI and the Content Lifecycle
    Episode 2: Maximizing AI for Research and Analysis
    Episode 3: Smarter Content Creation with AI
    Episode 4: The role of AI in content management
    Episode 5: How much can you trust AI for accessibility
    Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO
    Episode 7: Why AI can make your content personalization worse
    Episode 8: The real value of AI wireframes is NOT the wireframes

    New episodes every other Tuesday. For more conversations about AI, design, and digital strategy, visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters
    (0:00) - Intro
    (1:40) - You can’t escape AI imagery
    (3:18) - Why AI images are risky
    (4:40) - The legal and ethical...

    29 min
  3. JAN 28

    S3E8 - Intentional AI: The real value of AI wireframes is NOT the wireframes

    In Episode 8 of the Intentional AI series, Cole, Virgil, and Chad explore one of the most tempting uses of AI in digital work: wireframing and page layout. With AI now able to generate full wireframes in minutes or even seconds, the promise of speed is undeniable. But speed alone is not the point. The conversation focuses on where AI genuinely helps in the wireframing process and where it introduces new risks. Wireframes are meant to establish structure, hierarchy, and intent, not just visual output. While AI can quickly generate layouts, components, and patterns, it still requires strong human judgment to evaluate what is correct, what is missing, and what could cause problems downstream. A key theme of the episode is escalation of responsibility. As AI reduces the time required to create wireframes, the importance of human review, direction, and decision making increases. Treating AI-generated wireframes as finished work can introduce serious risks, especially around accessibility, content fidelity, maintainability, and overall project direction. Virgil shares an experiment where he used AI to first generate a detailed prompt for wireframing, then tested that prompt across three tools: Claude, Google Gemini 3, and Figma Make. The results reveal clear differences in layout quality, accessibility handling, content retention, and how easily the outputs could be integrated into real workflows. Claude produced the strongest layout and structural patterns but failed badly on accessibility and removed large portions of content. Gemini generated simpler wireframes with clearer structure, but used even less content and still struggled with accessibility. Figma Make stood out for workflow integration, retaining all content and allowing direct editing inside Figma, though it also failed accessibility requirements and relied heavily on generic styling and placeholder imagery. Throughout the episode, the group returns to the same conclusion. AI is extremely effective at getting the first portion of wireframing done quickly. It is far less effective at making judgment calls, enforcing standards, or understanding context without guidance.

    In this episode, they explore:
    How wireframing fits into the content lifecycle
    Why speed changes the risk profile of design work
    Using AI to generate prompts instead of starting from scratch
    Where AI wireframes succeed and where they fail
    Accessibility and content risks in AI-generated layouts
    A wireframing comparison of Claude, Gemini 3, and Figma Make

    A downloadable Episode Companion Guide is available below with tool comparisons and key takeaways.
    DS-S3-E8-CompanionDoc.pdf

    Previously in the Intentional AI series:
    Episode 1: Intentional AI and the Content Lifecycle
    Episode 2: Maximizing AI for Research & Analysis
    Episode 3: Smarter Content Creation with AI
    Episode 4: The role of AI in content management
    Episode 5: How much can you trust AI for...

    29 min
  4. JAN 13

    S3E7 - Intentional AI: Why AI can make your content personalization worse

    In Episode 7 of the Intentional AI series, Cole and Virgil focus on content personalization and why it is one of the most overpromised areas of AI. While personalization is often positioned as simple and automated, doing it well requires far more clarity and intent than most tools suggest. They break personalization into two main approaches. Role based personalization tailors messages for specific audiences or job functions, while behavioral personalization adapts experiences based on how people interact with content over time. The conversation also touches on predictive analysis and where AI may eventually help interpret patterns across analytics data. A central theme of the episode is trust. Using AI for personalization assumes the system understands audience priorities and pain points. Without clear direction, AI fills in the gaps with assumptions. Cole and Virgil explain why personalization has always been difficult to implement, why adoption remains low, and why AI does not remove the need for strategy, measurement, or human judgment. The episode also addresses the risks of personalization. Messages that are too generic get ignored, while messages that feel overly personal can cross into uncomfortable territory. Finding the right balance is still a human responsibility. In the second half of the episode, they continue their ongoing experiment using the same AI-written accessibility article from earlier episodes. This time, they test three tools by asking them to generate role based promotional emails for a head of web marketing, a director of information technology, and a C-level executive. The results highlight meaningful differences in tone, structure, and assumptions across tools. The takeaway is consistent with the Intentional AI series. AI can support personalization, but only when you define goals, outcomes, and boundaries first.

    In this episode, they explore:
    What content personalization actually means
    Role based versus behavioral personalization
    Why personalization adoption remains low
    The balance between relevance and creepiness
    How AI supports personalization without replacing strategy
    A role based email comparison of Perplexity, Copilot, and Claude

    A downloadable Episode Companion Guide is available below with tool comparisons and practical takeaways.
    DS-S3-E7-CompanionDoc.pdf

    Previously in the Intentional AI series:
    Episode 1: Intentional AI and the Content Lifecycle
    Episode 2: Using AI for Research and Analysis
    Episode 3: AI and Content Creation
    Episode 4: Content Management and AI
    Episode 5: How much can you trust AI for accessibility?
    Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO

    New episodes every other Tuesday. For more conversations about AI and digital strategy, visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    26 min
  5. 12/16/2025

    S3E6 - Intentional AI: You’re asking AI to solve the wrong problems for SEO/GEO/AEO

    In Episode 6 of the Intentional AI series, Cole, Virgil, and Seth move into the visibility stage of the content lifecycle and tackle a common mistake they see everywhere. Teams keep treating SEO, GEO, and AEO as optimization problems, when in reality they are content quality, structure, and clarity problems. Search engines and generative models have both gotten smarter. Keyword tricks, shortcuts, and “secret sauce” tactics no longer work the way they once did. Instead, visibility now depends on clear intent, strong structure, accessible language, and content that actually helps people. The group looks at how SEO history is repeating itself, why organizations keep chasing hacks, and how that mindset actively works against long-term discoverability. They also dig into how SEO, GEO, and AEO overlap, where they differ, and why writing exclusively for AI can backfire by alienating human readers. The conversation covers content modeling, headless-style structures, and why these approaches help machines understand relationships without sacrificing usability. A major focus of the episode is schema. The team explains why schema is becoming increasingly important for generative engines, why it is difficult and error-prone to manage at scale, and where AI can help draft complex schema structures without fully understanding context. This leads to a broader point. AI can accelerate specific tasks, but it cannot replace judgment, prioritization, or review. In the second half of the episode, they continue their ongoing experiment using the same AI-written accessibility article from earlier episodes. They test how three tools approach GEO-focused improvements. Each tool surfaces different insights, none of them are complete on their own, and all of them require human decision-making to be useful. The takeaway is consistent with the theme of the series. AI is powerful when you ask it to solve the right problems, and dangerous when you expect it to fix foundational issues for you.

    In this episode, they explore:
    Why SEO, GEO, and AEO fail when treated as optimization tricks
    How search has shifted from keywords to clarity, structure, and intent
    Where SEO and GEO overlap and where they meaningfully diverge
    The risk of writing for AI instead of for people
    Why content modeling supports both search engines and generative engines
    How AI can assist with schema creation and where humans must intervene
    Why repeating the same schema everywhere weakens its value
    A GEO-focused comparison of Writesonic, Grammarly, and Claude
    Why broad prompts underperform and targeted prompts lead to better outcomes

    A downloadable Episode Companion Guide is available below. It includes tool notes, schema examples, prompt guidance, and practical takeaways for applying AI to search without losing clarity or control.
    DS-S3-E6-CompanionDoc.pdf

    Previously in the Intentional AI series:
    Episode 1: Applying AI to the content lifecycle
    Episode 2: Maximizing AI for research and analysis
    Episode 3: Smarter content creation with AI
    Episode 4: The role of AI in content management
    Episode 5: How much can you trust AI for accessibility?

    Upcoming episodes in the Intentional AI series:
    Jan 6, 2026 – Content Personalization
    Jan 20, 2026 – Wireframing and Layout
    Feb 3, 2026 – Design and Media
    Feb 17, 2026 – Back End Development
    Mar 3, 2026 – Conversational Search (with special guest)
    Mar 17, 2026 – Chatbots and Agentic AI
    Mar 31, 2026 – Series Finale and Tool Review

    Holiday break notice: Discussing Stupid will be taking a short break.
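    The schema discussion in this episode can be made concrete with a small example. The sketch below is purely illustrative (it is not the show's actual markup): it builds a JSON-LD object using schema.org's PodcastEpisode and PodcastSeries types, with episode details taken from this page, and serializes it for embedding in a script tag. This is the kind of structured data the team describes AI helping to draft, and humans needing to review.

```python
import json

# Illustrative sketch only: JSON-LD markup of the kind discussed in the
# episode, using schema.org's PodcastEpisode type. Episode details come
# from this page; the markup itself is hypothetical, not the show's own.
episode = {
    "@context": "https://schema.org",
    "@type": "PodcastEpisode",
    "name": "Intentional AI: You're asking AI to solve the wrong problems for SEO/GEO/AEO",
    "episodeNumber": 6,
    "datePublished": "2025-12-16",
    "partOfSeries": {
        "@type": "PodcastSeries",
        "name": "Discussing Stupid",
        "url": "https://www.discussingstupid.com",
    },
}

# Serialize for embedding in a <script type="application/ld+json"> block.
print(json.dumps(episode, indent=2))
```

    Note that per the episode's warning, pasting the same generic schema onto every page weakens its value; each page's markup should describe that page's actual content.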

    26 min
  6. 12/02/2025

    S3E5 - Intentional AI: How much can you trust AI for accessibility?

    In Episode 5 of the Intentional AI series, Cole, Virgil, and Seth shift into another part of the content lifecycle. This time, they focus on accessibility and how AI fits into that work. Accessibility is more than code checks. It is making sure people can actually use and understand what you create. The team walks through what happened when they ran the High Monkey website through an AI accessibility review, where the tool gave helpful guidance, and where it completely misread the page. They also talk about the pieces of accessibility that AI handles surprisingly well, especially language, metaphors, and readability, and why these areas are often missed by standard scanners. In the second half of the episode, they continue the ongoing experiment from earlier episodes. Using the same AI-written article from before, they test how three tools handle rewriting it to an eighth grade reading level, then compare the results with a readability checker. The differences across models show why simple writing, clear prompts, and human review are still necessary.

    In this episode, they explore:
    How AI evaluates accessibility on a real website
    Where AI tools give useful insights and where they misinterpret content
    Why conversational explanations can help non-technical teams
    How to prompt AI to look for the issues you actually care about
    The importance of plain language and readable writing in accessibility
    A readability comparison using Copilot, Perplexity, and Grammarly
    Why simple content supports both accessibility and AI performance

    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool notes, prompt examples, and practical advice for using AI in accessibility work.
    DS-S3-E5-CompanionDoc.pdf

    Upcoming episodes in the Intentional AI series:
    Dec 16, 2025 - SEO / AEO / GEO
    Jan 6, 2026 - Content Personalization
    Jan 20, 2026 - Front End Development and Wireframing
    Feb 3, 2026 - Design and Media
    Feb 17, 2026 - Back End Development
    Mar 3, 2026 - Conversational Search (with special guest)
    Mar 17, 2026 - Chatbots and Agentic AI
    Mar 31, 2026 - Series Finale and Tool Review

    Whether you work on websites, content workflows, or internal digital tools, this conversation is about using AI with care. The goal is to work smarter, keep content readable, and avoid handing all of your judgment over to automation. New episodes every other Tuesday. For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters
    (0:00) - Intro
    (0:46) - Today’s focus: Accessibility with AI
    (1:20) - We let AI audit HighMonkey.com
    (4:00) - Finding the human value in AI feedback
    (6:25) - The power of strategic prompting
    (12:33) - We tested 3 AI tools for accessibility
    (14:49) - AI tool findings
    (18:17) - Keep all your readers in mind
    (20:50) - Next episode preview

    Subscribe for email updates on our website: https://www.discussingstupid.com/
    Watch us on YouTube: https://www.youtube.com/@discussingstupid
    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://links.discussingstupid.com/applepodcasts ...
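    The readability checks described in this episode rely on standard formulas. As a rough sketch (this is not any of the tools the team tested), the Flesch-Kincaid grade level can be approximated in a few lines of Python; the syllable counter here is a simple vowel-group heuristic, so scores are approximate rather than authoritative.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels;
    # every word scores at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # Split on sentence-ending punctuation, keeping non-empty sentences.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Shorter words and sentences yield a lower (easier) grade level.
print(round(flesch_kincaid_grade("The cat sat on the mat."), 1))
```

    A lower grade means easier reading, which is why the plain, simple rewrites discussed in the episode score better on checkers like this.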

    23 min
  7. 11/11/2025

    S3E4 - Intentional AI: The role of AI in content management

    In Episode 4 of the Intentional AI series, Cole and Virgil move further into the content lifecycle, this time focusing on content management. Once your content’s written, the real work begins. Editing, organizing, translating, tagging, all the behind-the-scenes steps that keep content consistent and usable. In this episode, the team looks at how AI can help streamline those tasks and where it still creates new challenges. Joined by returning guest Chad, they break down where AI fits, where it fails, and what happens when you trust it to translate complex content on its own.

    In this episode, they explore:
    How AI supports the content management stage of the lifecycle
    Common use cases like translation, auto-summary fields, and accessibility checks
    Where automation makes sense and where it doesn’t
    The biggest risks of AI content management, from oversimplification to data privacy
    Why good input (clear, readable content) still determines good output
    How readable, accessible writing improves both human and AI understanding

    This episode also continues the real-world experiment from previous episodes. Using the accessibility article originally created with Writesonic, the team tests how well three AI tools (Google Translate, DeepL, and ChatGPT) handle translating the piece into Spanish. The results reveal major differences in accuracy, tone, and overall usability across each model.

    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool comparisons, and practical advice for using AI in the content management stage.
    DS-S3-E4-CompanionDoc.pdf

    🦃 Note: We’re taking a short Thanksgiving break; the next episode will drop on December 2, 2025.

    Upcoming episodes in the Intentional AI series:
    Dec 2, 2025 — Accessibility
    Dec 16, 2025 — SEO / AEO / GEO
    Jan 6, 2026 — Content Personalization
    Jan 20, 2026 — Front End Development & Wireframing
    Feb 3, 2026 — Design & Media
    Feb 17, 2026 — Back End Development
    Mar 3, 2026 — Conversational Search (with special guest!)
    Mar 17, 2026 — Chatbots & Agentic AI
    Mar 31, 2026 — Series Finale & Tool Review

    Whether you’re managing websites, content workflows, or entire digital ecosystems, this conversation is about using AI intentionally, to work smarter without losing the human judgment that keeps content trustworthy. New episodes every other Tuesday. For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters
    (0:00) - Intro
    (0:50) - Today's focus: Content management with AI
    (1:58) - Content management opportunities with AI
    (6:18) - Recurring series theme: Trust
    (8:34) - Refine your process one step at a time
    (9:53) - Better content = better everything
    (10:22) - We tested 3 AI translation tools
    (12:02) - Cole's "elephant in the room" test
    (14:28) - Poor content = poor translations
    (16:58) - True translation happens between people
    (18:45) - Closing takeaways

    Subscribe for email updates on our website: https://www.discussingstupid.com/
    Watch us on YouTube: https://www.youtube.com/@discussingstupid
    Listen on Apple Podcasts,...

    21 min
  8. 10/28/2025

    S3E3 - Intentional AI: Smarter content creation with AI

    In Episode 3 of the Intentional AI series, Cole and Virgil move into the next stage of the content lifecycle: content creation. AI can write faster than ever, but that doesn’t mean it writes well. From prompting and editing to maintaining voice and originality, AI-generated content still requires human effort and judgment. In this episode, the team explores where AI can help streamline production and where it can’t replace the creative process.

    In this episode, they explore:
    How AI fits into the content creation stage of the lifecycle
    Why AI-generated content often takes just as much time as writing from scratch
    The key risks of AI content creation, including accuracy, effort, and authenticity
    How to maintain your voice, tone, and originality when using AI tools
    Why humans are still responsible for quality control and credibility
    What happens when you test the same research prompt across three writing tools

    This episode also continues the real-world experiment from Episode 2. Using the research compiled with Perplexity, the team tests how three content-generation tools—Jenni AI, Perplexity Pro, and Writesonic—handle the same writing task. The results reveal just how differently each model performs when asked to create original, publishable content.

    A downloadable Episode Companion Guide is available below. It includes key takeaways, tool comparisons, and practical advice for using AI in the content creation stage.
    DS-S3-E3-CompanionDoc.pdf

    Upcoming episodes in the Intentional AI series:
    • Nov 11, 2025 — Content Management
    • Dec 2, 2025 — Accessibility
    • Dec 16, 2025 — SEO / AEO / GEO
    • Jan 6, 2026 — Content Personalization
    • Jan 20, 2026 — Front End Development & Wireframing
    • Feb 3, 2026 — Design & Media
    • Feb 17, 2026 — Back End Development
    • Mar 3, 2026 — Conversational Search (with special guest!)
    • Mar 17, 2026 — Chatbots & Agentic AI
    • Mar 31, 2026 — Series Finale & Tool Review

    Whether you’re a marketer, strategist, or developer, this conversation is about creating content intentionally and keeping your human voice at the center of it all. New episodes every other Tuesday. For more conversations about AI, digital strategy, and all the ways we get it wrong (and how to get it right), visit www.discussingstupid.com and subscribe on your favorite podcast platform.

    Chapters
    (0:00) - Intro
    (0:30) - Smarter content creation with AI
    (1:00) - Effort doesn't go away
    (3:20) - Tool / LLM differences
    (5:34) - Audience fit & voice
    (7:44) - We tested 3 tools for AI content creation
    (10:08) - Testing Jenni AI
    (13:23) - Testing Perplexity
    (14:55) - Testing Writesonic
    (16:55) - Key takeaways

    Subscribe for email updates on our website: https://www.discussingstupid.com/
    Watch us on YouTube: https://www.youtube.com/@discussingstupid
    Listen on Apple Podcasts, Spotify, or Soundcloud:
    https://podcasts.apple.com/us/podcast/discussing-stupid-a-byte-sized-podcast-on-stupid-ux/id1428145024
    https://open.spotify.com/show/0c47grVFmXk1cco63QioHp?si=87dbb37a4ca441c0

    20 min
