UX - The User Experience Podcast

Jeremy

Help me improve the show: https://forms.fillout.com/t/txqbF3seyNus

Welcome to the User Experience Podcast, the podcast where we (ex)change experiences! I am a firm believer that sharing is caring. As UX professionals, we all aspire to change user experiences for the better, so I have put together this podcast to accelerate learning and improvement. In this podcast, I will share learning experiences from myself and other UX professionals, answer the most common questions, and read from famous minds.

  1. 1D AGO

    Using An App To Get Off Your Phone, And The Research That Says AI Is Affecting Our Brain

    I'd love to hear from you. Get in touch!

    📱 Bond — The Social Media App That Wants To Cure Your Doom-Scrolling — TechCrunch

    - Bond launched this week as a social media platform explicitly designed to get you off your phone — no infinite feed, no algorithmic scroll, just a spatial view of what your friends are up to and activity recommendations based on your interests
    - The core bet: remove the vertical feed and you remove the addictive pattern — the app gives you ideas for real-world activities, you go live them, you get off the app
    - I haven't tested it, but I have a lot of thoughts
    - First: using an app to get off your phone is paradoxical — your phone is still your phone, and everything else addictive is still on it
    - Second: removing the feed doesn't remove social comparison — seeing what friends are up to, peeking at their memories, knowing they got a promotion — that's still there, and social comparison is one of the more reliably damaging patterns in existing platforms
    - Third — and this one I can't let go: end-to-end encryption is described as "a priority for us in the near future after launch" — meaning right now, the team can see your data — storing data securely is not the same as keeping it private
    - The monetisation path is also unresolved — licensing user data to AI companies and product recommendations with merchant commissions are both on the table
    - My honest read: the intent seems genuine, but the medium is still a phone, the social comparison patterns are still present, and the privacy foundations aren't there yet

    🧠 Concerns Grow That AI Is Damaging Users' Cognitive Abilities — Futurism

    - MIT researchers split 54 participants into three groups — ChatGPT, Google search, and own knowledge only — and measured brain activity via EEG during essay-writing tasks
    - Students using ChatGPT consistently underperformed at neural, linguistic, and behavioural levels — and got lazier with each consecutive essay
    - Brain activation in areas corresponding to creativity and information processing was significantly lower — and participants struggled to recall or quote their own AI-written essays
    - This connects directly to cognitive surrender — the University of Pennsylvania finding I covered in an earlier episode — where people predominantly chose to use the chatbot even when they didn't need to
    - My take: there are always trade-offs, and if you don't know them, you're still making them — taking the car everywhere instead of walking has a physical cost; outsourcing your thinking has a cognitive cost
    - The question isn't whether to use AI — it's which tasks should stay yours: framing a research problem, deciding what questions to ask, writing the first draft of your own ideas — these are the muscles that atrophy fastest
    - The concept from UX that keeps coming to mind: learned helplessness — users who stop trying because they've been trained to feel that the tool, or in this case they themselves, can't do it without help
    - The constant I'd advocate for regardless of how AI evolves: keep thinking, keep practising critical judgment, keep owning the reasoning — the human brain is shaped to do this, and it needs the exercise

    Support the show: Help me improve the show HERE

    39 min
  2. 2D AGO

    How To Stay Sane With AI, Claude Design Launches

    I'd love to hear from you. Get in touch!

    🧠 How To Approach AI And Stay Sane — UX Collective

    - Julia Kockbeck, writing as a QA engineer, frames the AI adoption question better than most: it's not "use it or don't" — it's knowing when, why, and what you're trading off
    - The trifecta that never goes away: speed, quality, and scope — if you keep scope constant and push for speed, quality takes the hit, whether you're aware of it or not
    - Two failure modes to avoid: overuse without critical thinking (copy-pasting AI output, blindly trusting agents) and AI reservedness (not using it at all and being left behind by people who do)
    - We still don't have solid heuristics for when to use AI — we're building them in real time, and most people are doing it unconsciously
    - What I think is uniquely human in UX research: moderating interviews, framing a problem with a stakeholder, deciding what questions to ask and why — AI can draft, but it cannot think before the draft
    - The measure that actually matters: is the output at least the same? And has the spread of your activity shifted from repetitive tasks toward more strategic thinking? If yes, that's already a win
    - My approach: AI is my collaborator, not my substitute — I use it to generate a quick script or research plan, then I review, complete, and own it

    🎨 Anthropic Launches Claude Design — TechCrunch

    - Claude Design lets you create prototypes, slide decks, presentations, and design systems from prompts — Figma's stock dropped on the news
    - I haven't used it in depth yet, but my honest first take: it's genuinely useful for people who aren't designers but need a starting point — researchers, PMs, anyone who needs something that looks considered without hiring a designer
    - That said, the pattern I keep running into with prompt-only design tools: the generated result looks amazing in minutes, but making one small change is a nightmare
    - What I'm really watching for: can you tweak it manually after generation? Can you apply a design system and have it hold? Can you export to PPT or Figma and continue from there?
    - It's not competing with Figma in the way the headlines suggest — Figma is a collaboration and precision tool, Claude Design appears to be a generation tool — different jobs, different users
    - The tool I want to exist: AI generation plus drag-and-drop editing in the same product — we're still waiting for that

    Support the show: Help me improve the show HERE

    20 min
  3. APR 10

    Human Judgement, Zero-Click Future, and Chatbot Manipulation

    I'd love to hear from you. Get in touch!

    The Case For Human Judgment In The Agent Improvement Loop — LangChain

    - LangChain's argument: if agents are only trained on documented knowledge, their performance will plateau — the differentiator is capturing the tacit expertise that lives in people's heads
    - Tacit knowledge is the problem — a lot of what makes great teams great is never written down, and even if you tried to write it all down, you'd still miss the translation gap between what someone thinks and what they can express
    - The recommendation: design feedback loops that encode human judgment over time — humans help design and calibrate automated evaluators rather than manually reviewing everything forever
    - Once you've done something well manually and it's repeatable and standardised, automate the evaluation — but a human still needs to define what "good" looks like first
    - My take as a UX researcher: you bring thinking to the table — every time there's a judgment call, that's where you come in — boring, repetitive, and non-critical tasks are what you delegate
    - New AI-specific criteria to prioritise in your research: trust, transparency, verifiability, and controllability — these deserve more weight than they would in a standard usability study

    Sierra's CEO Says The Era of Clicking Buttons Is Over — TechCrunch

    - Sierra builds customer service AI agents for enterprises and argues that natural language will replace click-based interfaces entirely — no UI required
    - For long-term listeners, you know what I think about this — and I still think it
    - Voice and chat are still interfaces — a user interface doesn't have to be visual, but it's still something between you and your goal, and it still constrains how you interact
    - Counter-questions nobody seems to be asking: how do you initiate an action without clicking? How do you rearrange things? Correct errors? Stay in control? And how does this apply across healthcare, legal, IT?
    - My honest position: technological innovation adds up, it doesn't replace — I still take notes by hand even when AI is transcribing, because I need to own the process
    - When I was building my website, it was often faster to move a div myself than to explain the change to an AI — that's not a niche edge case, that's a daily reality for most users
    - Bold claim, may work, but show me the user research

    Chatbots Are Great At Manipulating People To Buy Stuff — The Register

    - A pre-print paper tested 2,000 e-book readers across three conditions: traditional search, a neutral chatbot, and a chatbot instructed to persuade
    - When the agent was instructed to persuade, 61% chose the sponsored product — nearly triple the 22% rate under traditional search
    - Simply chatting without persuasive intent performed no better than search — it's the persuasive intent that drives the effect
    - Even after being debriefed, fewer than one in five participants detected any bias — the conversational format makes it harder to notice you're being sold to
    - My methodological question: can you truly isolate persuasion from the chat modality itself? My hypothesis is no — persuasion through conversation may be categorically different from persuasion through a static page, and comparing them assumes they're equivalent
    - Not surprising overall: remove the communication barrier and let technology speak your users' language — of course conversion goes up

    Support the show: Help me improve the show HERE

    39 min
  4. APR 8

    AI Agents Transparency and Vibe Reporting

    I'd love to hear from you. Get in touch!

    🤖 How To Identify Transparency Moments In Agentic AI — Smashing Magazine

    - Victor Yocco's article is one of the best practical frameworks I've read for designing agentic AI experiences
    - The core problem: agentic AI disappears while it works — it acts on your behalf in the background and surfaces information only when it's done — and that creates a trust gap
    - Two failure modes to avoid: the black box (the user has no idea what happened or why) and the data dump (so many status updates that users develop notification blindness and ignore everything)
    - The fix is a decision node audit — map every step in your agent's logic, identify where it branches or makes a judgment call, and ask: does the user need to know about this?
    - The impact risk matrix helps prioritise: low stakes and reversible = auto-execute and inform quietly; high stakes and irreversible = ask for explicit permission first
    - Status messages matter more than we think — "processing" tells the user nothing; "liability clause varies from standard template, analysing risk level" tells them exactly what they need to know
    - My favourite method from the article: have a user watch the agent work and think aloud — timestamp every moment they say "wait, what?" or "what did it just do?" — those are your transparency gaps

    🚀 Rocket — A Startup That Tells You What To Build — TechCrunch

    - Rocket connects research, competitive intelligence, and product strategy into one workflow — input a prompt, get a McKinsey-style PDF with pricing, go-to-market recommendations, and product requirements
    - The pitch: generating code and designs is now a commodity — the real gap is knowing what to build in the first place
    - I like the idea, and I think it will genuinely accelerate a lot of early-stage thinking
    - But here's my challenge: it synthesises data that already exists on the internet — it cannot tell you what real users think, feel, or struggle with, because that data isn't publicly available
    - My bigger concern: we are removing barriers to creation faster than we are strengthening the filters that determine whether something is worth creating — the majority of products already fail because of insufficient user research, and commoditising product ideation will make that worse, not better
    - My take: the more we accelerate creation, the more we need to invest in user research as a compensatory mechanism — not less

    Support the show: Help me improve the show HERE

    30 min
  5. APR 2

    AI Website Design, and How AI Impacts How We Think

    I'd love to hear from you. Get in touch!

    Stop Picking The Wrong Website Builder

    - There's a website that categorises every way you can build with AI right now — and having tried most of them, I want to save you the time I lost
    - The core problem with chat-only builders like Lovable, Bolt, and similar: once the site is generated, what do you do when you need to move one element? Prompt again and wait?
    - My recommendation: if you want a site you'll actually edit and maintain, use a builder with AI embedded — Wix AI, Framer AI, or Webflow AI — not a pure chat-to-code tool
    - Key limitations to know before you commit: Wix and Framer don't let you export your code — you don't own it; Webflow lets you export HTML/CSS/JS but not the CMS; WordPress.org gives you full ownership
    - The broader point: AI is great at generating the first version — it's not great at being your ongoing editor — and most tools aren't designed with that reality in mind
    - If you just need an online presence fast, don't overthink it — pick anything and go; if you need a real product you'll grow, think about lock-in before you start

    AI Is Rewriting The Rules Of Language — UX Collective

    - Dora's article makes a sharp observation: since late 2022, certain words and patterns have become measurably more common online — "delve," the em dash, a particular kind of hollow corporate fluency
    - The deeper risk isn't just that AI-written content sounds the same — it's that it compresses human variability; when everyone uses the same model, the differences in how people express themselves start to disappear
    - AI works on averages — it produces the mean of everything it was trained on — which is why asking it to "write a blog post" produces something technically correct and completely bland
    - The fix isn't to avoid AI, it's to give it your experiences first — your stories, your perspective, your reasoning — and use it only to help you express what you've already thought
    - On cognitive atrophy: grammar is getting worse among people who use AI to write, for the same reason I can't remember phone numbers anymore — if a tool does it for you, the part of your brain that used to do it quietly switches off
    - Dora ends with hope — language has survived the printing press, the telegraph, and texting — it will absorb this too
    - My concern is narrower: the more we delegate thinking to AI, not just typing, the more our ability to think atrophies — and that's the one thing AI genuinely cannot do for us

    Support the show: Help me improve the show HERE

    26 min
  6. APR 1

    Staff Are Too Scared To Use AI, The Questions Designers Should Be Asking, and A Human Approach To Agents.

    I'd love to hear from you. Get in touch!

    Staff Too Scared of the AI Axe to Pick It Up — The Register / Forrester

    - Forrester's AIQ metric — a measure of individual and organisational readiness for AI — shows adoption is lagging badly, and the reasons are telling
    - Two culprits: employees aren't trained well enough, and there's an ambient anxiety about job loss that turns people away from the tools altogether
    - My take: anxiety is lack of clarity — people fear AI substitution because they haven't mapped what they actually do every day, let alone identified which parts AI could touch
    - The exercise I'd recommend before any AI training: write out your full task pipeline as if you were handing it to an intern — inputs, outputs, sub-tasks, decision points, all of it
    - Then ask three questions for each task: is it repetitive? Is it unfulfilling? Can AI do it well? Only when you get three yeses should you consider delegating it
    - Most people will find AI touches maybe 5–10% of their work — and that realisation alone does more to reduce fear than any company-wide AI rollout

    The Ground Is Shaking — Why Designers Must Flip The Script on AI — UX Collective

    - Peter's article is one of the best things I've read on this topic — he frames the core question not as "what can AI do?" but "why are we doing this in the first place?"
    - The concept at the centre: Vygotsky's "more knowledgeable other" — the figure who can see both where a learner is and where they need to get to, and who scaffolds the gap
    - Silicon Valley's message to designers right now is: AI is your MKO — let it guide you
    - Peter's argument, and mine: it should be the other way around — we are the masters of purpose, goal, and constraint — AI is the skilled executor, not the director
    - Language is our current interface with machines, but not everything we conceptualise is linguistic — spatial thinking, embodied experience, tacit knowledge — AI can have theoretical knowledge about gravity, but it will never feel it
    - The choice isn't whether to use AI — that's settled — it's whether you define the parameters or just accept the outputs — whether you build the floor or keep asking why the ground is shaking

    A Human Approach to Agentic AI — UX Collective

    - Christine's experiment: using a multi-agent AI system to write a book — editor-in-chief, sales and growth, voice, product, reader advocate — all as sub-agents receiving context and iterating
    - I find this genuinely fascinating as an experiment in approximating human teamwork with AI
    - But I'd push back on one thing: at what point does the context engineering required to replicate a human editor-in-chief become so large that you'd have been better off with an actual person using AI?
    - There's an asymptotic relationship here — the more you try to replicate what a human does, the more documentation you have to keep feeding the model as the work grows
    - My real question: how does the output compare to a human collaborator who is also using AI? That comparison is the one worth running

    Support the show: Help me improve the show HERE
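    The three-question delegation filter from the first story (repetitive? unfulfilling? can AI do it well?) is simple enough to write down directly. A minimal sketch, assuming the "three yeses" rule from the episode — the task names and their answers below are entirely hypothetical examples of a mapped pipeline:

    ```python
    # Hedged sketch of the task-audit exercise: map your pipeline, then
    # delegate only tasks that get three yeses. All tasks here are invented.

    def should_delegate(repetitive: bool, unfulfilling: bool, ai_capable: bool) -> bool:
        """Delegate to AI only when all three answers are yes."""
        return repetitive and unfulfilling and ai_capable

    pipeline = {
        # task: (repetitive?, unfulfilling?, can AI do it well?)
        "transcribe interviews":      (True,  True,  True),
        "frame the research problem": (False, False, False),
        "format the report appendix": (True,  True,  True),
        "moderate a user interview":  (False, False, False),
    }

    delegable = [task for task, answers in pipeline.items() if should_delegate(*answers)]
    print(delegable)  # the small slice of work AI actually touches
    ```

    Running this over a real, honestly mapped pipeline is the point of the exercise: the delegable list is usually short, which is exactly the fear-reducing realisation the episode describes.
    
    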

    37 min
  7. APR 1

    After 11 Years In UX, This Is The Mistake I See Everyone Making.

    I'd love to hear from you. Get in touch!

    🔬 The Observation That Prompted This Rant

    - We measure satisfaction, intention to use, overall liking — and then we go back to our teams and say "users don't trust it" or "satisfaction is low" and expect that to be actionable

    🧠 How Experience Actually Works — A Quick Neuroscience Detour

    - Experience isn't one thing — it moves through layers: sensation → perception → judgment
    - Sensation is the raw signal reaching your sensors; perception is your brain integrating that into something meaningful; judgment is the conscious evaluation you emit at the end
    - Most UX research only captures the judgment — the tip of the iceberg — and skips everything underneath it
    - Knowing someone rated satisfaction a 3 out of 7 tells you nothing about what to change

    🍷 The Sensory Evaluation Parallel

    - My master's specialisation was in sensory evaluation — how do you extract what someone actually sensed from what they perceived overall?
    - The wine, perfume, and automotive industries do this routinely: trained panels isolate attributes (texture, pitch, smell profile) and rate them independently from overall liking
    - We can and should do the same with software

    📐 Hassenzahl's Model — The Framework I Keep Coming Back To

    - Three levels: intended qualities (what the conceiver aims to produce) → perceived qualities (what the user actually experiences) → final judgment (satisfaction, purchase intent, etc.)
    - The gap between level one and level two is where most products fail — you can intend a premium feel without ever checking whether users actually perceive it as premium
    - Decompose until you can't decompose further: "premium" means nothing to an engineer — "high-pitched sound perceived as alarming rather than reassuring" does

    💡 What I'm Actually Asking UX Researchers to Do

    - When evaluating a product, go beyond overall satisfaction — ask about the attributes that compose the experience: reliability, accuracy, responsiveness, tone, whatever is relevant to your context
    - Use rating scales so you can track change over time and compare across studies — even imperfect numbers beat no numbers
    - If you don't have time or budget to do this with users, do it internally — train your team to evaluate the attributes so that when you go back to the developers, you're speaking their language

    ⚠️ The Cost of Not Doing This

    - You end up doing redundant research rounds because you never captured the full picture the first time
    - Your feedback loop stays shallow — one round of iteration, and then the team doesn't know what to do next
    - You are shooting in the dark, and the product improves slowly or not at all

    Support the show: Help me improve the show HERE
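    The attribute-rating approach above lends itself to a very small data exercise. A minimal sketch, in the spirit of sensory-evaluation panels — the attributes, the 1–7 scale, and every score below are invented for illustration; pick attributes relevant to your own product:

    ```python
    # Hedged sketch: attribute-level rating profiles across study rounds.
    # All attributes and scores are hypothetical examples.

    from statistics import mean

    # 1-7 ratings per attribute, collected in two consecutive study rounds
    rounds = {
        "round_1": {"reliability": [3, 4, 2], "tone": [5, 6, 5], "responsiveness": [2, 3, 3]},
        "round_2": {"reliability": [5, 5, 4], "tone": [5, 6, 6], "responsiveness": [4, 4, 5]},
    }

    for name, ratings in rounds.items():
        # One mean per attribute gives a comparable profile for the round
        profile = {attr: round(mean(scores), 1) for attr, scores in ratings.items()}
        print(name, profile)
    ```

    Comparing the two profiles shows *which* attribute moved between rounds (here, reliability and responsiveness improved while tone held steady) — exactly the decomposition a single overall-satisfaction score can never give you.
    
    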

    45 min
  8. MAR 30

    UX and AI Digest Episode 5: Managing Users' Expectations with AI

    I'd love to hear from you. Get in touch!

    🧠 Most People Just Do What ChatGPT Tells Them — Even When It's Wrong — Futurism
    https://futurism.com/artificial-intelligence/study-do-what-chatgpt-tells-us

    - A University of Pennsylvania study introduced me to a term I hadn't heard before: cognitive surrender — the tendency to follow AI output without questioning it
    - The numbers: participants followed correct AI advice 92.7% of the time, and still followed wrong AI advice 79.8% of the time — override rates go up when the AI is wrong, but not by nearly enough
    - My read: LLMs are probabilistic by design — errors aren't a bug to be fixed, they're structural — and most users don't understand that
    - The convenience factor is the real driver here: the easier something is to access, the less likely you are to question it — habituation kicks in, just like reading the same warning on a cigarette pack every day until you stop seeing it
    - I'd compare "AI can make mistakes" disclaimers to the ingredients list on a Coke bottle — technically there, effectively invisible
    - What I think companies should do: learn from this research and design experiences that actively interrupt blind trust — not just display a static warning and call it done
    - The scarier long-term implication: critical thinking is a muscle, and if we outsource thinking itself, we may slowly stop exercising it

    🤖 Folk Are Getting Dangerously Attached to AI That Always Tells Them They're Right — The Register
    https://www.theregister.com/2026/03/27/sycophantic_ai_risks/

    - Stanford researchers reviewed 11 leading AI models and found that sycophancy — AI that praises and agrees with users regardless of accuracy — is prevalent, harmful, and actively reinforces misplaced trust
    - In every single scenario tested, AI models endorsed wrong choices at a higher rate than humans did
    - This connects directly to the previous story: cognitive surrender plus sycophantic design is a genuinely worrying combination
    - OpenAI already had a public incident with this — it's not theoretical
    - My concern isn't the technology itself, it's the deployment without sufficient design guardrails — and the parallel to social media is hard to ignore: we now know the harm, and the core design barely changed
    - Two questions I keep coming back to: what should AI actually be used for when it comes to psychological or social scenarios? And how do we help users recognise and account for AI bias when they're in those moments?
    - Responsible AI shouldn't be a side quest — it should be baked in from the start, the same way research and ethics should be

    Support the show: Help me improve the show HERE

    20 min
