UX - The User Experience Podcast

Jeremy

Welcome to the User Experience Podcast, the podcast where we (ex)change experiences! I am a firm believer that sharing is caring. As UX professionals, we all aspire to change user experiences for the better, so I have put together this podcast to accelerate learning and improvement! In this podcast, I will:
- Share learning experiences from myself and other UX professionals
- Answer the most common questions
- Read famous blogs

  1. 4 DAYS AGO

    UX and AI Digest 6: AI Website Design, and How AI Impacts How We Think

    Send us Fan Mail

    Stop Picking the Wrong Website Builder

    - There's a website that categorises every way you can build with AI right now, and having tried most of them, I want to save you the time I lost.
    - The core problem with chat-only builders like Lovable, Bolt, and similar: once the site is generated, what do you do when you need to move one element? Prompt again and wait?
    - My recommendation: if you want a site you'll actually edit and maintain, use a builder with AI embedded (Wix AI, Framer AI, or Webflow AI), not a pure chat-to-code tool.
    - Key limitations to know before you commit: Wix and Framer don't let you export your code, so you don't own it; Webflow lets you export HTML/CSS/JS but not the CMS; WordPress.org gives you full ownership.
    - The broader point: AI is great at generating the first version; it's not great at being your ongoing editor, and most tools aren't designed with that reality in mind.
    - If you just need an online presence fast, don't overthink it: pick anything and go. If you need a real product you'll grow, think about lock-in before you start.

    AI Is Rewriting the Rules of Language (UX Collective)

    - Dora's article makes a sharp observation: since late 2022, certain words and patterns have become measurably more common online: "delve", the em dash, a particular kind of hollow corporate fluency.
    - The deeper risk isn't just that AI-written content sounds the same; it's that it compresses human variability. When everyone uses the same model, the differences in how people express themselves start to disappear.
    - AI works on averages: it produces the mean of everything it was trained on, which is why asking it to "write a blog post" produces something technically correct and completely bland.
    - The fix isn't to avoid AI; it's to give it your experiences first (your stories, your perspective, your reasoning) and use it only to help you express what you've already thought.
    - On cognitive atrophy: grammar is getting worse among people who use AI to write, for the same reason I can't remember phone numbers anymore. If a tool does it for you, the part of your brain that used to do it quietly switches off.
    - Dora ends with hope: language has survived the printing press, the telegraph, and texting; it will absorb this too.
    - My concern is narrower: the more we delegate thinking to AI, not just typing, the more our ability to think atrophies, and that's the one thing AI genuinely cannot do for us.

    Support the show

    26 min
  2. 5 DAYS AGO

    UX and AI Digest Episode 6: Staff Are Too Scared to Use AI, the Questions Designers Should Be Asking, and a Human Approach to Agents

    Staff Too Scared of the AI Axe to Pick It Up (The Register / Forrester)

    - Forrester's AIQ metric, a measure of individual and organisational readiness for AI, shows adoption is lagging badly, and the reasons are telling.
    - Two culprits: employees aren't trained well enough, and there's an ambient anxiety about job loss that turns people away from the tools altogether.
    - My take: anxiety is lack of clarity. People fear AI substitution because they haven't mapped what they actually do every day, let alone identified which parts AI could touch.
    - The exercise I'd recommend before any AI training: write out your full task pipeline as if you were handing it to an intern: inputs, outputs, sub-tasks, decision points, all of it.
    - Then ask three questions for each task: is it repetitive? Is it unfulfilling? Can AI do it well? Only when you get three yeses should you consider delegating it.
    - Most people will find AI touches maybe 5-10% of their work, and that realisation alone does more to reduce fear than any company-wide AI rollout.

    The Ground Is Shaking: Why Designers Must Flip the Script on AI (UX Collective)

    - Peter's article is one of the best things I've read on this topic. He frames the core question not as "what can AI do?" but "why are we doing this in the first place?"
    - The concept at the centre: Vygotsky's "more knowledgeable other" (MKO), the figure who can see both where a learner is and where they need to get to, and who scaffolds the gap.
    - Silicon Valley's message to designers right now is: AI is your MKO, let it guide you.
    - Peter's argument, and mine: it should be the other way around. We are the masters of purpose, goal, and constraint; AI is the skilled executor, not the director.
    - Language is our current interface with machines, but not everything we conceptualise is linguistic: spatial thinking, embodied experience, tacit knowledge. AI can have theoretical knowledge about gravity, but it will never feel it.
    - The choice isn't whether to use AI (that's settled); it's whether you define the parameters or just accept the outputs, whether you build the floor or keep asking why the ground is shaking.

    A Human Approach to Agentic AI (UX Collective)

    - Christine's experiment: using a multi-agent AI system to write a book, with editor in chief, sales and growth, voice, product, and reader advocate all as sub-agents receiving context and iterating.
    - I find this genuinely fascinating as an experiment in approximating human teamwork with AI.
    - But I'd push back on one thing: at what point does the context engineering required to replicate a human editor in chief become so large that you'd have been better off with an actual person using AI?
    - There's an asymptotic relationship here: the more you try to replicate what a human does, the more documentation you have to keep feeding the model as the work grows.
    - My real question: how does the output compare to a human collaborator who is also using AI? That comparison is the one worth running.
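    The three-question task audit described above can be sketched as a tiny filter. This is illustrative code, not something from the episode; the task names and fields are hypothetical.

    ```python
    # Sketch of the three-question task audit: a task is a delegation
    # candidate only if it is repetitive, unfulfilling, AND AI-capable.
    from dataclasses import dataclass


    @dataclass
    class Task:
        name: str
        repetitive: bool    # is it repetitive?
        unfulfilling: bool  # is it unfulfilling?
        ai_capable: bool    # can AI do it well?


    def delegation_candidates(tasks):
        """Return names of tasks that score 'yes' on all three questions."""
        return [t.name for t in tasks
                if t.repetitive and t.unfulfilling and t.ai_capable]


    # A hypothetical task pipeline, written out as if handing it to an intern.
    pipeline = [
        Task("Transcribe user interviews", True, True, True),
        Task("Synthesise research themes", False, False, True),
        Task("Schedule sessions", True, True, False),
    ]

    print(delegation_candidates(pipeline))  # only the first task gets three yeses
    ```

    Running this on a real task list tends to confirm the episode's point: only a small slice of the pipeline clears all three questions.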

    37 min
  3. 5 DAYS AGO

    After 11 Years In UX, This Is The Mistake I See Everyone Making.

    🔬 The Observation That Prompted This Rant

    - We measure satisfaction, intention to use, overall liking, and then we go back to our teams and say "users don't trust it" or "satisfaction is low" and expect that to be actionable.

    🧠 How Experience Actually Works: A Quick Neuroscience Detour

    - Experience isn't one thing; it moves through layers: sensation → perception → judgment.
    - Sensation is the raw signal reaching your sensors; perception is your brain integrating that into something meaningful; judgment is the conscious evaluation you emit at the end.
    - Most UX research only captures the judgment, the tip of the iceberg, and skips everything underneath it.
    - Knowing someone rated satisfaction a 3 out of 7 tells you nothing about what to change.

    🍷 The Sensory Evaluation Parallel

    - My master's specialisation was in sensory evaluation: how do you extract what someone actually sensed from what they perceived overall?
    - The wine, perfume, and automotive industries do this routinely: trained panels isolate attributes (texture, pitch, smell profile) and rate them independently from overall liking.
    - We can and should do the same with software.

    📐 Hassenzahl's Model: The Framework I Keep Coming Back To

    - Three levels: intended qualities (what the conceiver aims to produce) → perceived qualities (what the user actually experiences) → final judgment (satisfaction, purchase intent, etc.).
    - The gap between level one and level two is where most products fail: you can intend a premium feel without ever checking whether users actually perceive it as premium.
    - Decompose until you can't decompose further: "premium" means nothing to an engineer; "high-pitched sound perceived as alarming rather than reassuring" does.

    💡 What I'm Actually Asking UX Researchers to Do

    - When evaluating a product, go beyond overall satisfaction: ask about the attributes that compose the experience (reliability, accuracy, responsiveness, tone, whatever is relevant to your context).
    - Use rating scales so you can track change over time and compare across studies; even imperfect numbers beat no numbers.
    - If you don't have the time or budget to do this with users, do it internally: train your team to evaluate the attributes so that when you go back to the developers, you're speaking their language.

    ⚠️ The Cost of Not Doing This

    - You end up doing redundant research rounds because you never captured the full picture the first time.
    - Your feedback loop stays shallow: one round of iteration, and then the team doesn't know what to do next.
    - You are shooting in the dark, and the product improves slowly or not at all.
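    The attribute-rating approach can be sketched in a few lines. This is a minimal illustration with made-up 1-7 ratings (not data from the episode), showing how rating attributes independently lets you check which one tracks overall satisfaction most closely.

    ```python
    # Attribute ratings vs. overall satisfaction: rate each attribute on its
    # own scale, then see which attribute correlates with the final judgment.
    # statistics.correlation requires Python 3.10+.
    from statistics import correlation, mean

    # Each list holds one 1-7 rating per participant (illustrative values).
    ratings = {
        "reliability":    [6, 5, 6, 7, 5],
        "responsiveness": [3, 4, 2, 3, 4],
        "tone":           [5, 5, 6, 5, 6],
    }
    overall = [4, 4, 3, 4, 4]  # overall satisfaction, same participants

    for attribute, scores in ratings.items():
        r = correlation(scores, overall)
        print(f"{attribute}: mean={mean(scores):.1f}, r with overall={r:+.2f}")
    ```

    With these toy numbers, responsiveness is the attribute moving with overall satisfaction, even though its mean rating is the lowest: exactly the kind of actionable signal a single satisfaction score can't give you.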

    45 min
  4. 30 MAR

    UX and AI Digest Episode 5: Managing Users' Expectations with AI

    🧠 Most People Just Do What ChatGPT Tells Them, Even When It's Wrong (Futurism)
    https://futurism.com/artificial-intelligence/study-do-what-chatgpt-tells-us

    - A University of Pennsylvania study introduced me to a term I hadn't heard before: cognitive surrender, the tendency to follow AI output without questioning it.
    - The numbers: participants followed correct AI advice 92.7% of the time, and still followed wrong AI advice 79.8% of the time. Override rates go up when the AI is wrong, but not by nearly enough.
    - My read: LLMs are probabilistic by design. Errors aren't a bug to be fixed, they're structural, and most users don't understand that.
    - The convenience factor is the real driver here: the easier something is to access, the less likely you are to question it. Habituation kicks in, just like reading the same warning on a cigarette pack every day until you stop seeing it.
    - I'd compare "AI can make mistakes" disclaimers to the ingredients list on a Coke bottle: technically there, effectively invisible.
    - What I think companies should do: learn from this research and design experiences that actively interrupt blind trust, not just display a static warning and call it done.
    - The scarier long-term implication: critical thinking is a muscle, and if we outsource thinking itself, we may slowly stop exercising it.

    🤖 Folk Are Getting Dangerously Attached to AI That Always Tells Them They're Right (The Register)
    https://www.theregister.com/2026/03/27/sycophantic_ai_risks/

    - Stanford researchers reviewed 11 leading AI models and found that sycophancy (AI that praises and agrees with users regardless of accuracy) is prevalent, harmful, and actively reinforces misplaced trust.
    - In every single scenario tested, AI models endorsed wrong choices at a higher rate than humans did.
    - This connects directly to the previous story: cognitive surrender plus sycophantic design is a genuinely worrying combination.
    - OpenAI already had a public incident with this; it's not theoretical.
    - My concern isn't the technology itself, it's the deployment without sufficient design guardrails, and the parallel to social media is hard to ignore: we now know the harm, and the core design barely changed.
    - Two questions I keep coming back to: what should AI actually be used for when it comes to psychological or social scenarios? And how do we help users recognise and account for AI bias when they're in those moments?
    - Responsible AI shouldn't be a side quest; it should be baked in from the start, the same way research and ethics should be.

    20 min
  5. 27 MAR

    UX and AI Digest 4 - AI Interface Design at Hark, Who’s Accountable When AI Fails & ChatGPT Shopping

    🎨 Former Apple Designer Building a New AI Interface at Hark (TechCrunch)

    - Brett Adcock is betting that hardware design and AI need to evolve together: the way we interact with intelligent software shouldn't just be a chatbox bolted onto existing devices.
    - What resonated: we are still using the same computers and smartphones even as AI transforms what's possible; the interface layer hasn't caught up.
    - Hark's position is interesting: they're explicitly not building wearables, not putting a layer between humanity and the interfaces we use in the world. So what are they building? I'm curious.
    - The reminder here for me is simple: even with AI, you start with user needs, then you figure out what to build, then how to design it. The magic of the technology doesn't change that order.
    - 🔗 https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/

    ⚠️ When AI Experiences Fail, Who Is Held Accountable? (UX Collective)

    - This article opens with a case I find genuinely baffling: a man's father died, he asked Air Canada's chatbot about bereavement fares, got wrong information, booked accordingly, and the company's initial defense was that the chatbot is a separate legal entity responsible for its own actions.
    - A tribunal had to formally rule that a company is responsible for its own website. That shouldn't require a tribunal.
    - The core design challenge: LLMs are non-deterministic. The same question can get a different answer every time, and communicating that uncertainty to end users is genuinely hard.
    - The chain of accountability is long (designer, product manager, vendor, company), and when something goes wrong, everyone points at everyone else.
    - Don Norman's framing stuck with me: designers are both culpable and structurally constrained, because they're also inside the system, doing what they're asked to do.
    - Jared Spool goes further: if you create something that can be misused, that's no better than a doctor not washing their hands. The profession is stuck between those two positions.
    - AIGA's standards of professional practice haven't been updated since 2010 and contain no language on AI; the legal frameworks are lagging badly behind the technology.
    - My take: articles like this one are exactly why research matters more, not less. The more uncertainty the technology introduces, the more you need to understand your users and design for failure states.
    - 🔗 https://uxdesign.cc/when-ai-experiences-fail-who-is-held-accountable-3f07ce9e6032?source=rss----138adf9c44c---4

    🛒 ChatGPT Is Now Powering Product Discovery (OpenAI)

    - OpenAI announced richer shopping experiences inside ChatGPT: natural language product search, in-chat comparisons, prices, descriptions, and direct purchase flows.
    - Having spent time in e-commerce, I find this genuinely disruptive, but I also want to push back on the framing that this replaces all other ways of shopping.
    - People shop in lots of ways for lots of reasons: touching a product, comparing in-store, shopping socially with friends, going directly to a brand they already trust. Chat doesn't serve all of those.
    - Two questions I don't have answers to yet: how impartial is the chatbot when it decides which products to surface? And how do sellers optimise for being recommended by AI rather than ranked by Google? (AEO, agent engine optimisation, seems to be the emerging term for this.)
    - The accountability point from the second article applies here too: what happens when ChatGPT recommends the wrong product and a purchase goes wrong?
    - My broader two cents: I don't believe in a future without visual interfaces; roughly 60…

    29 min
  6. 26 MAR

    UX and AI Digest Episode 3: What AI brings to UX, Agentic Commerce and AI in Meta Apps

    🎨 What AI Exposes About Design (Alessandro Molinari on UX Collective)

    - Alessandro argues the design process is shrinking: AI is removing the engineering and development bottlenecks, which means the upstream steps (research, strategy, framing) become more critical.
    - There's an interesting regression happening: we went from command line → graphical metaphors (desktop, folders, files) → now back to text, just with natural language instead of commands.
    - I keep coming back to this: is one modality really enough?
    - The speed factor is real: prototypes that took days now take hours, which is a genuine win for UX researchers who want to test hypotheses they'd normally never have time to design for.
    - The design twin idea resonated with me: feed your AI enough customer data and you can simulate early feedback before going to users. But Alessandro's warning is important: use it in a silo and you end up talking to a ghost.
    - Pair it with continuous discovery (Teresa Torres): ongoing customer contact, not just project-based research sprints.
    - Bottom line: AI doesn't replace the basics, it just makes the manual parts faster.
    - 🔗 https://uxdesign.cc/what-ai-exposes-about-design-319029d48441

    🛒 Agentic Commerce Runs on Truth and Context (MIT Technology Review)

    - The near-future scenario: you tell an agent "book a family trip to Italy, stay within budget, pick hotels we've liked before", and it just handles it.
    - The ways this can go wrong are obvious, and the article gives a useful framework for managing that risk.
    - My version: think of the agent as an intern who knows nothing about your company.
    - Key actors to define upfront: the user, the agent, the merchant, and who holds liability when the agent acts with permission but against user intent.
    - Context is everything: an AI without your preferences, past behaviour, and constraints will produce something generic. Feeding it that data is what makes it yours.
    - Practical challenge: loading all that context on every conversation takes time and compute. The recommendation is to compress and optimise those signals so agents can act quickly.
    - 🔗 https://www.technologyreview.com/2026/03/25/1134516/agentic-commerce-runs-on-truth-and-context/

    📱 Meta Turns to AI to Make Shopping Easier on Instagram and Facebook (TechCrunch)

    - Meta is using generative AI to summarise product reviews so users don't have to wade through hundreds of them before buying.
    - In principle I like this: it's what Reddit Answers does for Reddit threads, and that's genuinely useful.
    - What I'm seeing in Meta's screenshots: a big Add to Cart button, an AI-generated summary, and no links to the underlying reviews.
    - That's the problem: it over-indexes on the purchase action and under-indexes on the user's need to verify, explore, and build trust before spending money.

    28 min
  7. 24 MAR

    UX and AI Digest Episode 2

    Evaluating AI Agents, Claude's Computer Access & Prompt-Only Enterprise Software

    🔬 EVA: A Framework for Evaluating Voice Agents

    - I hadn't realised we lacked a proper evaluation framework for voice agents; this one from Hugging Face caught my attention.
    - What I like: it combines two dimensions I've always thought should go together: accuracy (task completion, faithfulness, speech fidelity) and experience (conciseness, conversational flow, turn-taking).
    - My question: is the "experience" side actually measured with real end users, or just by the designers?
    - This connects to a three-step evaluation model I keep coming back to: define your ingredients, evaluate internally, then validate with users, and compare the gap.
    - I'll dedicate a full episode to this, but the short version is: if you want to elicit trust or satisfaction, you need to know which product attributes actually produce those outcomes.

    🤖 Claude + Cowork: AI With Access to Your Computer

    - Cowork now lets you authorise Claude to access your files and folders so it can act on your behalf even when you're away.
    - I'm genuinely torn: amazed by the technology, but uncomfortable with the direction.
    - My concern isn't the capability itself, it's the pattern: LLMs arrive, and suddenly we open the gates to everything (recording, transcription, computer access) as if these things naturally belong together.
    - My rule of thumb: always assume your data is being used to improve the product. If you have doubts, assume yes.
    - I'd love to see more push for private, self-hosted LLMs, but the honest tension is that commercial ones will keep winning on convenience because they have more data to train on.
    - It's not even apples to apples, and that's what makes this hard.

    🖥️ Aragon: What If Enterprise Software Was Just a Prompt?

    - Startup Aragon raised $12M at a $100M valuation to replace enterprise tools like Salesforce, Jira, and Tableau with a single LLM interface.
    - Their thesis: buttons and menus are dead; future business is done by prompt.
    - My honest reaction: I get why this is being explored. We're mapping the edges of a new territory and seeing what sticks.
    - But one modality for everything? I'm not convinced. When I was building my own website, I actually wanted both: LLM for generation, drag-and-drop for fine-tuning, and that product barely exists yet.
    - Users have 10+ years of muscle memory with their tools; strip that away and you're not simplifying, you're adding friction.
    - Nielsen's heuristics exist for a reason: people need control, exit doors, and multiple ways to accomplish a task.

    26 min
