UX - The User Experience Podcast

Jeremy

Help me improve the show: https://forms.fillout.com/t/txqbF3seyNus

Welcome to the User Experience Podcast, the podcast where we (ex)change experiences! I am a firm believer that sharing is caring. As UX professionals, we all aspire to change user experiences for the better, so I have put together this podcast to accelerate learning and improvement. In this podcast, I will share learning experiences from myself and other UX professionals, answer the most common questions, and read from famous minds.

  1. 2D AGO

    Human Judgement, 0 Click Future, and Chatbot Manipulation

    I'd love to hear from you. Get in touch!

    The Case For Human Judgment In The Agent Improvement Loop — LangChain
    - LangChain's argument: if agents are only trained on documented knowledge, their performance will plateau — the differentiator is capturing the tacit expertise that lives in people's heads
    - Tacit knowledge is the problem — a lot of what makes great teams great is never written down, and even if you tried to write it all down, you'd still miss the translation gap between what someone thinks and what they can express
    - The recommendation: design feedback loops that encode human judgment over time — humans help design and calibrate automated evaluators rather than manually reviewing everything forever
    - Once you've done something well manually and it's repeatable and standardised, automate the evaluation — but a human still needs to define what "good" looks like first
    - My take as a UX researcher: you bring thinking to the table — every time there's a judgment call, that's where you come in — boring, repetitive, and non-critical tasks are what you delegate
    - New AI-specific criteria to prioritise in your research: trust, transparency, verifiability, and controllability — these deserve more weight than they would in a standard usability study

    Sierra's CEO Says The Era of Clicking Buttons Is Over — TechCrunch
    - Sierra builds customer service AI agents for enterprises and argues that natural language will replace click-based interfaces entirely — no UI required
    - For long-term listeners, you know what I think about this — and I still think it
    - Voice and chat are still interfaces — a user interface doesn't have to be visual, but it's still something between you and your goal, and it still constrains how you interact
    - Counter-questions nobody seems to be asking: how do you initiate an action without clicking? How do you rearrange things? Correct errors? Stay in control? And how does this apply across healthcare, legal, IT?
    - My honest position: technological innovation adds up, it doesn't replace — I still take notes by hand even when AI is transcribing, because I need to own the process
    - The times I was building my website and it was faster to move a div myself than to explain it to an AI — that's not a niche edge case, that's a daily reality for most users
    - Bold claim, may work, but show me the user research

    Chatbots Are Great At Manipulating People To Buy Stuff — The Register
    - A pre-print paper tested 2,000 e-book readers across three conditions: traditional search, a neutral chatbot, and a chatbot instructed to persuade
    - When the agent was instructed to persuade, 61% chose the sponsored product — nearly triple the 22% rate under traditional search
    - Simply chatting without persuasive intent performed no better than search — it's the persuasive intent that drives the effect
    - Even after being debriefed, fewer than one in five participants detected any bias — the conversational format makes it harder to notice you're being sold to
    - My methodological question: can you truly isolate persuasion from the chat modality itself? My hypothesis is no — persuasion through conversation may be categorically different from persuasion through a static page, and comparing them assumes they're equivalent
    - Not surprising overall: remove the communication barrier and let technology speak your users' language — of course conversion goes up

    Support the show: Help me improve the show HERE

    39 min
  2. 4D AGO

    AI Agents Transparency and Vibe Reporting

    I'd love to hear from you. Get in touch!

    🤖 How To Identify Transparency Moments In Agentic AI — Smashing Magazine
    - Victor Yocco's article is one of the best practical frameworks I've read for designing agentic AI experiences
    - The core problem: agentic AI disappears while it works — it acts on your behalf in the background and surfaces information only when it's done — and that creates a trust gap
    - Two failure modes to avoid: the black box (user has no idea what happened or why) and the data dump (so many status updates that users develop notification blindness and ignore everything)
    - The fix is a decision node audit — map every step in your agent's logic, identify where it branches or makes a judgment call, and ask: does the user need to know about this?
    - The impact risk matrix helps prioritise: low stakes and reversible = auto-execute and inform quietly; high stakes and irreversible = ask for explicit permission first
    - Status messages matter more than we think — "processing" tells the user nothing; "liability clause varies from standard template, analysing risk level" tells them exactly what they need to know
    - My favourite method from the article: have a user watch the agent work and think aloud — timestamp every moment they say "wait, what?" or "what did it just do?" — those are your transparency gaps

    🚀 Rocket — A Startup That Tells You What To Build — TechCrunch
    - Rocket connects research, competitive intelligence, and product strategy into one workflow — input a prompt, get a McKinsey-style PDF with pricing, go-to-market recommendations, and product requirements
    - The pitch: generating code and designs is now a commodity — the real gap is knowing what to build in the first place
    - I like the idea, and I think it will genuinely accelerate a lot of early-stage thinking
    - But here's my challenge: it synthesises data that already exists on the internet — it cannot tell you what real users think, feel, or struggle with, because that data isn't publicly available
    - My bigger concern: we are removing barriers to creation faster than we are strengthening the filters that determine if something is worth creating — the majority of products already fail because of insufficient user research, and commoditising product ideation will make that worse, not better
    - My take: the more we accelerate creation, the more we need to invest in user research as a compensatory mechanism — not less

    Support the show: Help me improve the show HERE
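    The impact risk matrix described above maps onto a small decision rule. This is a minimal sketch under my own assumptions — the two binary dimensions (stakes, reversibility), the `Action` names, and the middle-ground branch are illustrative, not from Victor Yocco's article:

```python
from enum import Enum

class Action(Enum):
    AUTO_EXECUTE = "auto-execute, inform quietly"
    AUTO_EXECUTE_NOTIFY = "auto-execute, surface a clear status update"
    ASK_FIRST = "ask for explicit permission first"

def decide(high_stakes: bool, reversible: bool) -> Action:
    """Map one decision node in the agent's logic to a transparency behaviour."""
    if high_stakes and not reversible:
        # Worst quadrant: the user must stay in control
        return Action.ASK_FIRST
    if high_stakes or not reversible:
        # Mixed quadrants: act, but make the step visible (assumed middle ground)
        return Action.AUTO_EXECUTE_NOTIFY
    # Low stakes and reversible: don't create notification blindness
    return Action.AUTO_EXECUTE

# Cancelling a paid booking: high stakes, irreversible
print(decide(high_stakes=True, reversible=False).value)
# Re-sorting a draft list: low stakes, reversible
print(decide(high_stakes=False, reversible=True).value)
```

    Running a decision node audit then becomes a matter of classifying each node along these two axes before deciding what the user sees.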

    30 min
  3. APR 2

    AI Website Design, and How AI Impacts How We Think

    I'd love to hear from you. Get in touch!

    Stop Picking The Wrong Website Builder
    - There's a website that categorises every way you can build with AI right now — and having tried most of them, I want to save you the time I lost
    - The core problem with chat-only builders like Lovable, Bolt, and similar: once the site is generated, what do you do when you need to move one element? Prompt again and wait?
    - My recommendation: if you want a site you'll actually edit and maintain, use a builder with AI embedded — Wix AI, Framer AI, or Webflow AI — not a pure chat-to-code tool
    - Key limitations to know before you commit: Wix and Framer don't let you export your code — you don't own it; Webflow lets you export HTML/CSS/JS but not the CMS; WordPress.org gives you full ownership
    - The broader point: AI is great at generating the first version — it's not great at being your ongoing editor — and most tools aren't designed with that reality in mind
    - If you just need online presence fast, don't overthink it — pick anything and go; if you need a real product you'll grow, think about lock-in before you start

    AI Is Rewriting The Rules Of Language — UX Collective
    - Dora's article makes a sharp observation: since late 2022, certain words and patterns have become measurably more common online — "delve," the em dash, a particular kind of hollow corporate fluency
    - The deeper risk isn't just that AI-written content sounds the same — it's that it compresses human variability; when everyone uses the same model, the differences in how people express themselves start to disappear
    - AI works on averages — it produces the mean of everything it was trained on — which is why asking it to "write a blog post" produces something technically correct and completely bland
    - The fix isn't to avoid AI, it's to give it your experiences first — your stories, your perspective, your reasoning — and use it only to help you express what you've already thought
    - On cognitive atrophy: grammar is getting worse among people who use AI to write, for the same reason I can't remember phone numbers anymore — if a tool does it for you, the part of your brain that used to do it quietly switches off
    - Dora ends with hope — language has survived the printing press, the telegraph, texting — it will absorb this too
    - My concern is narrower: the more we delegate thinking to AI, not just typing, the more our ability to think atrophies — and that's the one thing AI genuinely cannot do for us

    Support the show: Help me improve the show HERE

    26 min
  4. APR 1

    Staff Are Too Scared To Use AI, The Questions Designers Should Be Asking, and A Human Approach To Agents

    I'd love to hear from you. Get in touch!

    Staff Too Scared of the AI Axe to Pick It Up — The Register / Forrester
    - Forrester's AIQ metric — a measure of individual and organisational readiness for AI — shows adoption is lagging badly, and the reasons are telling
    - Two culprits: employees aren't trained well enough, and there's an ambient anxiety about job loss that turns people away from the tools altogether
    - My take: anxiety is lack of clarity — people fear AI substitution because they haven't mapped what they actually do every day, let alone identified which parts AI could touch
    - The exercise I'd recommend before any AI training: write out your full task pipeline as if you were handing it to an intern — inputs, outputs, sub-tasks, decision points, all of it
    - Then ask three questions for each task: is it repetitive? Is it unfulfilling? Can AI do it well? Only when you get three yeses should you consider delegating it
    - Most people will find AI touches maybe 5–10% of their work — and that realisation alone does more to reduce fear than any company-wide AI rollout

    The Ground Is Shaking — Why Designers Must Flip The Script on AI — UX Collective
    - Peter's article is one of the best things I've read on this topic — he frames the core question not as "what can AI do?" but "why are we doing this in the first place?"
    - The concept at the centre: Vygotsky's "more knowledgeable other" — the figure who can see both where a learner is and where they need to get to, and who scaffolds the gap
    - Silicon Valley's message to designers right now is: AI is your MKO — let it guide you
    - Peter's argument, and mine: it should be the other way around — we are the masters of purpose, goal, and constraint — AI is the skilled executor, not the director
    - Language is our current interface with machines, but not everything we conceptualise is linguistic — spatial thinking, embodied experience, tacit knowledge — AI can have theoretical knowledge about gravity, but it will never feel it
    - The choice isn't whether to use AI — that's settled — it's whether you define the parameters or just accept the outputs — whether you build the floor or keep asking why the ground is shaking

    A Human Approach to Agentic AI — UX Collective
    - Christine's experiment: using a multi-agent AI system to write a book — editor in chief, sales and growth, voice, product, reader advocate — all as sub-agents receiving context and iterating
    - I find this genuinely fascinating as an experiment in approximating human teamwork with AI
    - But I'd push back on one thing: at what point does the context engineering required to replicate a human editor in chief become so large that you'd have been better off with an actual person using AI?
    - There's an asymptotic relationship here — the more you try to replicate what a human does, the more documentation you have to keep feeding the model as the work grows
    - My real question: how does the output compare to a human collaborator who is also using AI? That comparison is the one worth running

    Support the show: Help me improve the show HERE
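    The three-question delegation test from the first story can be sketched as a simple filter. A minimal illustration under my own assumptions — the `Task` structure and the task names are hypothetical, not from the episode:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One entry in the task pipeline you'd hand to an intern."""
    name: str
    repetitive: bool        # do you do it over and over?
    unfulfilling: bool      # does it drain rather than energise you?
    ai_does_it_well: bool   # can AI actually produce good output here?

def should_delegate(task: Task) -> bool:
    # Delegate only on three yeses; any "no" keeps the task with you
    return task.repetitive and task.unfulfilling and task.ai_does_it_well

pipeline = [
    Task("transcribe interviews", True, True, True),
    Task("make a judgment call on conflicting findings", False, False, False),
    Task("format weekly report tables", True, True, False),
]

delegable = [t.name for t in pipeline if should_delegate(t)]
print(delegable)  # → ['transcribe interviews']
```

    Run over a realistic pipeline, the filter usually passes only a small fraction of tasks, which is the point the episode makes about the 5–10% figure.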

    37 min
  5. APR 1

    After 11 Years In UX, This Is The Mistake I See Everyone Making.

    I'd love to hear from you. Get in touch!

    🔬 The Observation That Prompted This Rant
    - We measure satisfaction, intention to use, overall liking — and then we go back to our teams and say "users don't trust it" or "satisfaction is low" and expect that to be actionable

    🧠 How Experience Actually Works — A Quick Neuroscience Detour
    - Experience isn't one thing — it moves through layers: sensation → perception → judgment
    - Sensation is the raw signal reaching your sensors; perception is your brain integrating that into something meaningful; judgment is the conscious evaluation you express at the end
    - Most UX research only captures the judgment — the tip of the iceberg — and skips everything underneath it
    - Knowing someone rated satisfaction a 3 out of 7 tells you nothing about what to change

    🍷 The Sensory Evaluation Parallel
    - My master's specialisation was in sensory evaluation — how do you extract what someone actually sensed from what they perceived overall?
    - The wine, perfume, and automotive industries do this routinely: trained panels isolate attributes (texture, pitch, smell profile) and rate them independently from overall liking
    - We can and should do the same with software

    📐 Hassenzahl's Model — The Framework I Keep Coming Back To
    - Three levels: intended qualities (what the conceiver aims to produce) → perceived qualities (what the user actually experiences) → final judgment (satisfaction, purchase intent, etc.)
    - The gap between level one and level two is where most products fail — you can intend a premium feel without ever checking whether users actually perceive it as premium
    - Decompose until you can't decompose further: "premium" means nothing to an engineer — "high-pitched sound perceived as alarming rather than reassuring" does

    💡 What I'm Actually Asking UX Researchers to Do
    - When evaluating a product, go beyond overall satisfaction — ask about the attributes that compose the experience: reliability, accuracy, responsiveness, tone, whatever is relevant to your context
    - Use rating scales so you can track change over time and compare across studies — even imperfect numbers beat no numbers
    - If you don't have time or budget to do this with users, do it internally — train your team to evaluate the attributes so that when you go back to the developers, you're speaking their language

    ⚠️ The Cost of Not Doing This
    - You end up doing redundant research rounds because you never captured the full picture the first time
    - Your feedback loop stays shallow — one round of iteration, and then the team doesn't know what to do next
    - You are shooting in the air, and the product improves slowly or not at all

    Support the show: Help me improve the show HERE
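    The gap between an overall score and attribute-level ratings can be shown in a few lines. A minimal sketch, assuming a 1–7 scale; the attribute names and the numbers are invented for illustration, not study data:

```python
# One participant's overall judgment: not actionable on its own
overall_satisfaction = 3  # out of 7

# The same session, decomposed into rated attributes (hypothetical values)
attribute_ratings = {
    "reliability": 6,
    "accuracy": 5,
    "responsiveness": 2,  # the signal the overall score was hiding
    "tone": 5,
}

# The weakest attribute tells the team where to start iterating
worst = min(attribute_ratings, key=attribute_ratings.get)
print(f"overall={overall_satisfaction}/7, start with: {worst}")
```

    The same decomposed structure also makes tracking change across study rounds trivial: compare the attribute dictionaries, not a single drifting satisfaction number.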

    45 min
  6. MAR 30

    UX and AI Digest Episode 5: Managing Users' Expectations with AI

    I'd love to hear from you. Get in touch!

    🧠 Most People Just Do What ChatGPT Tells Them — Even When It's Wrong — Futurism
    https://futurism.com/artificial-intelligence/study-do-what-chatgpt-tells-us
    - A University of Pennsylvania study introduced me to a term I hadn't heard before: cognitive surrender — the tendency to follow AI output without questioning it
    - The numbers: participants followed correct AI advice 92.7% of the time, and still followed wrong AI advice 79.8% of the time — override rates go up when the AI is wrong, but not by nearly enough
    - My read: LLMs are probabilistic by design — errors aren't a bug to be fixed, they're structural — and most users don't understand that
    - The convenience factor is the real driver here: the easier something is to access, the less likely you are to question it — habituation kicks in, just like reading the same warning on a cigarette pack every day until you stop seeing it
    - I'd compare "AI can make mistakes" disclaimers to the ingredients list on a Coke bottle — technically there, effectively invisible
    - What I think companies should do: learn from this research and design experiences that actively interrupt blind trust — not just display a static warning and call it done
    - The scarier long-term implication: critical thinking is a muscle, and if we outsource thinking itself, we may slowly stop exercising it

    🤖 Folk Are Getting Dangerously Attached to AI That Always Tells Them They're Right — The Register
    https://www.theregister.com/2026/03/27/sycophantic_ai_risks/
    - Stanford researchers reviewed 11 leading AI models and found that sycophancy — AI that praises and agrees with users regardless of accuracy — is prevalent, harmful, and actively reinforces misplaced trust
    - In every single scenario tested, AI models endorsed wrong choices at a higher rate than humans did
    - This connects directly to the previous story: cognitive surrender plus sycophantic design is a genuinely worrying combination
    - OpenAI already had a public incident with this — it's not theoretical
    - My concern isn't the technology itself, it's the deployment without sufficient design guardrails — and the parallel to social media is hard to ignore: we now know the harm, and the core design barely changed
    - Two questions I keep coming back to: what should AI actually be used for when it comes to psychological or social scenarios? And how do we help users recognise and account for AI bias when they're in those moments?
    - Responsible AI shouldn't be a side quest — it should be baked in from the start, the same way research and ethics should be

    Support the show: Help me improve the show HERE

    20 min
  7. MAR 27

    UX and AI Digest 4 - AI Interface Design at Hark, Who’s Accountable When AI Fails & ChatGPT Shopping

    I'd love to hear from you. Get in touch!

    🎨 Former Apple Designer Building a New AI Interface at Hark — TechCrunch
    - Brett Adcock is betting that hardware design and AI need to evolve together — the way we interact with intelligent software shouldn’t just be a chatbox bolted onto existing devices
    - What resonated: we are still using the same computers and smartphones even as AI transforms what’s possible — the interface layer hasn’t caught up
    - Hark’s position is interesting: they’re explicitly not building wearables, not putting a layer between humanity and the interfaces we use in the world — so what are they building? I’m curious
    - The reminder here for me is simple: even with AI, you start with user needs, then you figure out what to build, then how to design it — the magic of the technology doesn’t change that order
    🔗 https://techcrunch.com/2026/03/24/meet-the-former-apple-designer-building-a-new-ai-interface-at-hark/

    ⚠️ When AI Experiences Fail, Who Is Held Accountable? — UX Collective
    - This article opens with a case I find genuinely baffling: a man’s father died, he asked Air Canada’s chatbot about bereavement fares, got wrong information, booked accordingly, and the company’s initial defense was that the chatbot is a separate legal entity responsible for its own actions
    - A tribunal had to formally rule that a company is responsible for its own website — that shouldn’t require a tribunal
    - The core design challenge: LLMs are non-deterministic — the same question gets a different answer every time, and communicating that uncertainty to end users is genuinely hard
    - The chain of accountability is long: designer, product manager, vendor, company — and when something goes wrong, everyone points at everyone else
    - Don Norman’s framing stuck with me — designers are both culpable and structurally constrained, because they’re also inside the system, doing what they’re asked to do
    - Jared Spool goes further: if you create something that can be misused, that’s no better than a doctor not washing their hands — the profession is stuck between those two positions
    - AIGA’s standards of professional practice haven’t been updated since 2010 and contain no language on AI — the legal frameworks are lagging badly behind the technology
    - My take: articles like this one are exactly why research matters more, not less — the more uncertainty the technology introduces, the more you need to understand your users and design for failure states
    🔗 https://uxdesign.cc/when-ai-experiences-fail-who-is-held-accountable-3f07ce9e6032

    🛒 ChatGPT Is Now Powering Product Discovery — OpenAI
    - OpenAI announced richer shopping experiences inside ChatGPT — natural language product search, in-chat comparisons, prices, descriptions, and direct purchase flows
    - Having spent time in e-commerce, I find this genuinely disruptive — but I also want to push back on the framing that this replaces all other ways of shopping
    - People shop in lots of ways for lots of reasons: touching a product, comparing in-store, shopping socially with friends, going directly to a brand they already trust — chat doesn’t serve all of those
    - Two questions I don’t have answers to yet: how impartial is the chatbot when it decides which products to surface? And how do sellers optimise for being recommended by AI rather than ranked by Google? (AEO — agent engine optimisation — seems to be the emerging term for this)
    - The accountability point from the second article applies here too: what happens when ChatGPT recommends the wrong product and

    Support the show: Help me improve the show HERE

    29 min
  8. MAR 26

    UX and AI Digest Episode 3: What AI brings to UX, Agentic Commerce and AI in Meta Apps

    I'd love to hear from you. Get in touch!

    🎨 What AI Exposes About Design — Alessandro Molinari on UX Collective
    - Alessandro argues the design process is shrinking — AI is removing the engineering and development bottlenecks, which means the upstream steps (research, strategy, framing) become more critical
    - There's an interesting regression happening: we went from command line → graphical metaphors (desktop, folders, files) → now back to text — just with natural language instead of commands
    - I keep coming back to this: is one modality really enough?
    - The speed factor is real — prototypes that took days now take hours, which is a genuine win for UX researchers who want to test hypotheses they'd normally never have time to design for
    - The design twin idea resonated with me: feed your AI enough customer data and you can simulate early feedback before going to users — but Alessandro's warning is important: use it in a silo and you end up talking to a ghost
    - Pair it with continuous discovery (Teresa Torres) — ongoing customer contact, not just project-based research sprints
    - Bottom line: AI doesn't replace the basics, it just makes the manual parts faster
    🔗 https://uxdesign.cc/what-ai-exposes-about-design-319029d48441

    🛒 Agentic Commerce Runs on Truth and Context — MIT Technology Review
    - The near-future scenario: you tell an agent "book a family trip to Italy, stay within budget, pick hotels we've liked before" — and it just handles it
    - The ways this can go wrong are obvious, and the article gives a useful framework for managing that risk
    - My version: think of the agent as an intern who knows nothing about your company
    - Key actors to define upfront: the user, the agent, the merchant, and who holds liability when the agent acts with permission but against user intent
    - Context is everything: an AI without your preferences, past behaviour, and constraints will produce something generic — feeding it that data is what makes it yours
    - Practical challenge: loading all that context on every conversation takes time and compute — the recommendation is to compress and optimise those signals so agents can act quickly
    🔗 https://www.technologyreview.com/2026/03/25/1134516/agentic-commerce-runs-on-truth-and-context/

    📱 Meta Turns to AI to Make Shopping Easier on Instagram and Facebook — TechCrunch
    - Meta is using generative AI to summarise product reviews so users don't have to wade through hundreds of them before buying
    - In principle I like this — it's what Reddit Answers does for Reddit threads, and that's genuinely useful
    - What I'm seeing in Meta's screenshots: a big Add to Cart button, an AI-generated summary, and no links to the underlying reviews
    - That's the problem — it over-indexes on the purchase action and under-indexes on the user's need to verify, explore, and build trust before spending money

    Support the show: Help me improve the show HERE

    28 min
