Experiencing Data w/ Brian T. O’Neill

Brian T. O’Neill from Designing for Analytics

Does the value of your insights, analytics, or automated intelligence product sometimes feel invisible to buyers and users? Does your product have impressive analytics and AI technology, but user adoption and sales still are not where you want them to be? While it has never been easier to build data-driven products, why does it still seem so hard to build indispensable data products that users can't live without—and will gladly pay for? I’m Brian T. O’Neill, and on Experiencing Data — a Listen Notes top 2% global podcast — I help founders and B2B software product leaders close the Invisible Intelligence Gap through solo episodes and interviews with leaders at the intersection of product management, UX design, analytics, and AI. If you’re building analytics, BI, or automated intelligence (AI) products, this non-technical show will help you better connect your product to outcomes, value, and the human factors that still matter — even in the age of AI. Subscribe today on all major platforms or browse the episode archive.

Get 1-Page Episode Summaries: https://designingforanalytics.com/experiencing-data-podcast/
About the Host, Brian T. O'Neill: https://designingforanalytics.com/bio/

  1. MAR 17

    190 - Why Discovering Valuable Analytics Use Cases for Your Product Is So Hard (Even with AI)

    I’ve seen this pattern repeatedly with teams building analytics and AI products: the issue usually isn’t the quality of the models or the sophistication of the data. The technology often works just fine. The real breakdown happens earlier—when teams begin with the data they already have and try to figure out what to build, instead of starting with the decisions their customers need to make.

    That approach often produces polished dashboards and compelling features that generate interest, but fail to drive real action. The missing piece is context. Decisions in the real world depend on incentives, habits, risk tolerance, and uncertainty—not just clean data. If your product doesn’t reflect that reality, it won’t meaningfully change behavior.

    Another common trap is assuming all available data is *evidence* worth surfacing. This “more is better” mindset leads to cluttered analytics tools that offload interpretation onto users. Even conversational AI interfaces can fall into this, encouraging open-ended exploration without helping users reach decisions.

    The analytics and AI products that succeed take a different approach. They’re designed around decision-making to reduce uncertainty, fit into real workflows, and guide users toward clear actions. In doing so, they bridge the gap between analytical capability and real-world value, making the product’s intelligence tangible, usable, and worth paying for.

    Highlights/ Skip to:
    The core mistake I see people making during the discovery process of building an insights product (2:07)
    Improve your product strategy by working “backwards” and understanding what decisions customers are trying to make (6:06)
    Insights don’t equal decisions in the real world (7:39)
    Designing with a goal of improving the lives of users in mind (11:17)
    Prototypes as a means of discovery (vs. product/solution validation) (13:48)
    The bias of data availability (20:39)
    Using AI and LLMs for discovery and product UX (24:17)
    Why AI-assisted analytics products should shape UX around making structured decisions (31:03)
    Overcoming the Invisible Intelligence Gap (34:57)
    Final thoughts (37:21)

    Links
    CED: My UX Framework for Designing Analytics Tools That Drive Decision Making: https://designingforanalytics.com/ced
    Need my help finding the right use cases for your analytics or AI product? Book a complimentary 1x1 discovery call with me: https://designingforanalytics.com/contact/

    43 min
  2. MAR 5

    189 - The Invisible Intelligence Gap

    I’ve worked with a lot of teams building analytics and insights products and decision-support systems. The pattern I keep seeing isn’t that the math is wrong or the ML/AI models are weak. Much of the time, the technology is fine.

    The challenge is that all that [not always artificial!] intelligence is not surfacing as value to your customer. Dashboards look impressive. AI features demo well. Pilots get strong reactions. And then… usage stalls. Sales cycles drag. Teams quietly revert to spreadsheets. Buyers, or rather, prospective buyers, say they “like the vision,” but deals don’t move into the “closed” stage.

    If your gut tells you the primary blocker is not your sales process, pricing/packaging, procurement, data quality, or risk/compliance, then you may be suffering from what I call the Invisible Intelligence Gap. Your product’s intelligence simply isn’t visible to buyers and users.

    Three forces tend to amplify this gap. First, the value translation gap: buyers and users can’t easily connect insights to their own goals. Second, the workflow alignment gap: the product doesn’t fit how work actually gets done. Third, the trust and control gap: users lack confidence in how the system reaches conclusions. My frameworks like CED, FOWA, and MIRRR are designed to close these gaps by making value obvious, workflows smoother, and AI more trustworthy.

    Highlights/ Skip to:
    The challenge of insights not providing value to buyers, end-users, and stakeholders (3:20)
    How the invisible intelligence gap manifests itself (6:42)
    Common symptoms of the invisible intelligence gap (8:10)
    Examples of how changes in human behavior cause the gap (10:00)
    The three amplifiers of the invisible intelligence gap (11:47)
    The CED framework for addressing the intelligence gap problem (18:28)
    Addressing the invisible intelligence gap with FOWA (20:14)
    Using MIRRR to solve the invisible intelligence gap (21:25)

    25 min
  3. FEB 17

    188 - Can’t Close the Sale? Why Your Product’s UX and Workflow Misalignment Are Killing Sales (Part 2)

    I’m continuing my exploration of a hard truth many leaders of analytics software companies run into: deals don’t stall because the tech is weak. Instead, they stall because prospects can’t see the value soon enough, or the risk of changing the status quo is too high. This is often a product problem, not a sales one, and obtaining Flow-of-Work Alignment (FOWA) may help you start closing more evals and deals.

    So what is FOWA? The idea is simple, but demanding: stop showcasing features and start designing experiences that fit into how customers already do their work, create value, and add delight when your product is added into the loop. Getting to FOWA means tailoring demos with realistic, industry-specific data, reducing mental translation, and minimizing behavior change. In this scenario, improvements become small, testable bets tied to outcomes, not feature checklists. UX and usability are not cosmetic; they shape trust, adoption, and buyability. When prospects can clearly see themselves succeeding with your product, value feels obvious, evals progress, and deals close.

    Highlights/ Skip to:
    Steps to implementing Flow-of-Work Alignment (FOWA):
    Tailor your demo or POC to map to the prospects’ world and their workflow (1:53)
    Treat product improvements as bets that have to be tested, so that observable outcomes are what you’re holding your product team accountable for (3:57)
    Reducing perceived behavior change (6:39)
    Realize that your product’s visual design is likely impacting your product’s clarity and its desirability (12:29)
    Aligning your sales and product teams around customer outcomes, not feature gaps (18:03)
    Why you might think FOWA won’t work for your product—and how to reframe those objections (24:22)

    46 min
  4. FEB 4

    187 - Can’t Close the Sale? The Invisible Reasons Prospects Aren’t Buying Your Technically Superior Analytics or AI Product (Part 1)

    I’m digging into a frustrating reality many teams face: even technically superior analytics and AI products routinely lose deals—not because the KPIs or models aren’t good enough, but because buyers and users can’t clearly see how the product fits into their day-to-day work. Your demos and POCs may prove what’s possible, but long time-to-understanding, a heavy cognitive burden on the user, and required behavior or process changes introduce risk—and risk kills momentum. When value feels complicated, sales don’t move forward.

    Adding to the challenge is that many sales efforts focus almost entirely on the fiscal buyer while overlooking the end users who actually have to adopt the product to create outcomes. This buyer–user mismatch, combined with status quo bias, often leads to indecision rather than change. To address this, I explore reframing the sales challenge as a product problem—and I introduce the idea of achieving Flow-of-Work Alignment (FOWA). The goal isn’t better persuasion—it’s clearer value. Strong FOWA means transitioning from demonstrating capabilities to helping customers see themselves—and their workflows—represented in your demos and POCs. The result? Prospects understand your value quickly, ask deeper, contextual questions, and deals move forward.

    Highlights/ Skip to:
    Data products must work harder to expose value clearly to avoid the dreaded “closed-lost” deal stage in your CRM (1:38)
    Making your data product’s value instantly obvious (5:18)
    How the “old model” of selling based on capabilities and feature demos can lead to lost sales (7:22)
    What Flow-of-Work Alignment is and how it can help you unlock deals (13:02)
    How to know if you have achieved FOWA or not in your product and sales process (13:58)

    21 min
  5. JAN 20

    186 - Why Powerful AI & Analytics Products Feel Useless to Buyers

    I’m back! After more than seven years of bi-weekly publishing, I gave myself a break (to have the flu, in part), but now it’s back to business! In 2026, I’ll be focusing the podcast more on the commercial side of data products. This means more founders, CEOs, and product leader guests at small and mid-sized B2B software companies who are building technically impressive analytics and AI products. With all the attention on AI, I want to focus on the things that don’t change: what do value and outcomes look like to buyers and users, and how do we recreate them with analytics and AI? What learnings and changes have leaders had to make on the product and UI/UX side to get buyers to buy and users to use?

    So, that brings us to today’s episode. Today, I’ll explain why I think model quality, analytics data, and raw AI capability are quickly becoming commodities, shifting the real challenge to how effectively companies can translate their data and intelligence into value that buyers and users can clearly understand and defend. I dig into a core tension in B2B products: fiscal buyers and end users want different things. Buyers need confidence, risk reduction, and defensible ROI, while users care about making their daily work easier and safer. When products try to appeal broadly or force customers to figure out how AI fits into their workflows, adoption breaks down. Instead, I make the case for tightly scoped, workflow-aware solutions that make value obvious, deliver fast time-to-value, and support real decisions and actions.

    Highlights/ Skip to:
    Refocusing the trajectory of the show for 2026 (00:31)
    Turning your product’s intelligence into clear, actionable solutions so users can see the value without having to figure it out themselves (4:32)
    You’re selling capability, but buyers are buying relief from a specific pain point (7:33)
    Asking customers where AI fits into their workflow is poor design (16:57)
    Buyers and users both require proof of value, but in different ways (20:05)
    Why incomplete workflows kill trust (24:18)
    The importance of translating technical capability into something a human is willing to own (30:09)

    38 min
  6. DEC 23, 2025

    185 - Driving Healthcare Impact by Aligning Teams Around Outcomes with Bill Saltmarsh

    Bill Saltmarsh joins me to discuss where a modern CDO gets the inspiration to “operate in the producty way” in his domain, which is healthcare. Now Vice President of Enterprise Data and Transformation and Chief Data Officer at Children’s Mercy Kansas City, Bill traces this mindset to his early days as an analyst, which revealed a gap between what stakeholders asked for and the outcomes they sought. This convinced him that data teams need to pause, ask better questions, and prioritize meaningful outcomes over quickly churning out dashboards and reports.

    Bill and I discuss how a producty mindset can be embedded across an organization. He also talks about why data leaders must set firm expectations. We explore the personal and cultural shifts needed for analysts and data scientists to embrace design, facilitation, and deeper discovery, even when it initially seems to slow things down. We also examine how to define value and ROI in healthcare, where a data team’s impact is often indirect. By tying data efforts to organizational OKRs and investing in governance, strong data foundations, and data literacy, he argues that analytics, data, and AI can drive better decisions, enhance patient care, and create durable organizational value.

    Highlights/ Skip to:
    What led Bill Saltmarsh to run his team at Children’s Mercy “the producty way” (1:42)
    The kinds of environments Bill worked in prior that influenced his current management philosophy (4:36)
    Why data teams shouldn’t be report factories (6:37)
    Setting the standard at the leadership level vs. the everyday work (10:53)
    How Bill is skilling and hiring for non-technical skills (e.g., product, design) (13:51)
    Patterns that data professionals go through to know if they’re guiding stakeholders correctly (20:54)
    The point when Bill has to think about the financial side of the hospital (26:30)
    How Bill thinks about measuring the data team’s contributions to the hospital’s success (30:28)
    Bill’s philosophy on generative AI (36:00)

    Links
    Bill Saltmarsh on LinkedIn

    41 min
  7. DEC 9, 2025

    184 - Part III: Designing with the Flow of Work: Accelerating Sales in B2B Analytics and AI Products by Minimizing Behavior Change

    In this final part of my three-episode series on accelerating sales and adoption in B2B analytics and AI products, I unpack a growing challenge in the age of generative AI: what to do when your product automates a major chunk of a user’s workflow, only to reveal an entirely new problem right behind it. Building on Part I and Part II, I look at how AI often collapses the “front half” of a process, pushing the more complex, value-heavy work directly to users. This raises critical questions about product scope, market readiness, competitive risks, and whether you should expand your solution to tackle these newly surfaced problems or stay focused and validate what buyers will actually pay for.

    I also discuss why achieving customer delight—not mere satisfaction—is essential for earning trust, reducing churn, and creating the conditions where customers become engaged design partners. Finally, I highlight the common pitfalls of DIY product design and why intentional, validated UX work is so important, especially when AI is changing how work gets done faster than ever.

    Highlights/ Skip to:
    Finishing the journey: staying focused, delighting users, and intentional UX (00:35)
    AI solves problems—and can create new ones for your customers—now what? (2:17)
    Do AI products have to solve your customers’ downstream “tomorrow” problems too before they’ll pay? (6:24)
    Questions that reveal whether buyers will pay for expanded scope (6:45)
    UX outcomes: moving customers from satisfied to delighted before tackling new problems (8:11)
    How obtaining “delight” status in the customer’s mind creates trust, lock-in, and permission to build the next solution (9:54)
    Designing experiences with intention (not hope) as AI changes workflows (10:40)
    My “Ten Risks of DIY Product Design…”—why DIY UX often causes self-inflicted friction (11:46)

    Links
    Listen to Part I (Episode 182) and Part II (Episode 183)
    Read: “Ten Risks of DIY Product Design On Sales And Adoption Of B2B Data Products”
    Stop guessing what is blocking your own product’s adoption and sales: Schedule a Design-Eyes Assessment with me, and in 90 minutes, I’ll diagnose whether you’re facing a design problem, a product management gap, a positioning issue, or something else entirely. You’ll walk away knowing exactly what’s standing between your product and the traction you need—so you don’t waste time and money on product design “improvements” that won’t move your critical KPIs.

    14 min
  8. NOV 27, 2025

    183 - Part II: Designing with the Flow of Work: Accelerating Sales in B2B Analytics and AI Products by Minimizing Behavior Change

    In this second part of my three-part series (catch Part I via Episode 182), I dig deeper into the key idea that sales in commercial data products can be accelerated by designing for actual user workflows—vs. going wide with a “many-purpose” AI and analytics solution that “does more,” but is misaligned with how users’ most important work actually gets done.

    To unpack this, I explain the concept of user experience (UX) outcomes, and how building your solution to enable these outcomes may be a dependency for you to get sales traction, and for your customer to see the value of your solution. I also share practical steps to improve UX outcomes in commercial data products, from establishing a baseline definition of UX quality to mapping out users’ current workflows (and future ones, when agentic AI changes their job). Finally, I talk about how approaching product development as small “bets” helps you build small and learn fast so you can accelerate value creation.

    Highlights/ Skip to:
    Continuing the journey: designing for users, workflows, and tasks (00:32)
    How UX impacts sales—not just usage and adoption (02:16)
    Understanding how you can leverage users’ frustrations and perceived risks as fuel for building an indispensable data product (04:11)
    Definition of a UX outcome (7:30)
    Establishing a baseline definition of product (UX) quality, so you know how to observe and measure improvement (11:04)
    Spotting friction and solving the right customer problems first (15:34)
    Collecting actionable user feedback (20:02)
    Moving users along the scale from frustration to satisfaction to delight (23:04)
    Unique challenges of designing B2B AI and analytics products used for decision intelligence (25:04)

    Quotes from Today’s Episode

    “One of the hardest parts of building anything meaningful, especially in B2B or data-heavy spaces, is pausing long enough to ask what the actual ‘it’ is that we’re trying to solve. People rush into building the fix, pitching the feature, or drafting the roadmap before they’ve taken even a moment to define what the user keeps tripping over in their day-to-day environment. And until you slow down and articulate that shared, observable frustration, you’re basically operating on vibes and assumptions instead of behavior and reality. What you want is not a generic problem statement but an agreed-upon description of the two or three most painful frictions that are obvious to everyone involved, frictions the user experiences visibly and repeatedly in the flow of work. Once you have that grounding, everything else (prioritization, design decisions, sequencing, even organizational alignment) suddenly becomes much easier because you’re no longer debating abstractions; you’re working against the same measurable anchor. And the irony is, the faster you try to skip this step, the longer the project drags on, because every downstream conversation becomes a debate about interpretive language rather than a conversation about a shared, observable experience.”
    __
    Want people to pay for your product? Solve an *observable* problem—not a vague information or data problem. What do I mean?

    “When you’re trying to solve a problem for users, especially in analytical or AI-driven products, one of the biggest traps is relying on interpretive statements instead of observable ones. Interpretive phrasing like ‘they’re overwhelmed’ or ‘they don’t trust the data’ feels descriptive, but it hides the important question of what, exactly, we can see them doing that signals the problem. If you can’t film it happening, if you can’t watch the behavior occur in real time, then you don’t actually have a problem definition you can design around. Observable frustration might be the user jumping between four screens, copying and pasting the same value into different systems, or re-running a query five times because something feels off even though they can’t articulate why. Those concrete behaviors are what allow teams to converge and say, ‘Yes, that’s the thing, that is the friction we agree must change,’ and that shift from interpretation to observation becomes the foundation for better design, better decision-making, and far less wasted effort. And once you anchor the conversation in visible behavior, you eliminate so many circular debates and give everyone, from engineering to leadership, a shared starting point that’s grounded in reality instead of theory.”
    __
    One of the reasons that measuring the usability/utility/satisfaction of your product’s UX might seem hard is that you don’t have a baseline definition of how satisfactory (or not) the product is right now. As such, it’s very hard to tell if you’re just making product *changes*—or making *improvements* that might make the product worth paying for at all, worth paying more for, or easier to buy.

    “It’s surprisingly common for teams to claim they’re improving something when they’ve never taken the time to document what the current state even looks like. If you want to create a meaningful improvement, something a user actually feels, you need to understand the baseline level of friction they tolerate today, not what you imagine that friction might be. Establishing a baseline is not glamorous work, but it’s the work that prevents you from building changes that make sense on paper but do nothing to the real flow of work. When you diagram the existing workflow, when you map the sequence of steps the user actually takes, the mismatches between your mental model and their lived experience become crystal clear, and the design direction becomes far less ambiguous. That act of grounding yourself in the current state allows every subsequent decision (prioritizing fixes, determining scope, measuring progress) to be aligned with reality rather than assumptions. And without that baseline, you risk designing solutions that float in conceptual space, disconnected from the very pains you claim to be addressing.”
    __
    Prototypes are a great way to learn—if you’re actually treating them as a means to learn, and not a product you intend to deliver regardless of the feedback customers give you.

    “People often think prototyping is about validating whether their solution works, but the deeper purpose is to refine the problem itself. Once you put even a rough prototype in front of someone and watch what they do with it, you discover the edges of the problem more accurately than any conversation or meeting can reveal. Users will click in surprising places, ignore the part you thought mattered most, or reveal entirely different frictions just by trying to interact with the thing you placed in front of them. That process doesn’t just improve the design, it improves the team’s understanding of which parts of the problem are real and which parts were just guesses. Prototyping becomes a kind of externalization of assumptions, forcing you to confront whether you’re solving the friction that actually holds back the flow of work or a friction you merely predicted. And every iteration becomes less about perfecting the interface and more about sharpening the clarity of the underlying problem, which is why the teams that prototype early tend to build faster, with better alignment, and far fewer detours.”
    __
    Most founders and data people tend to measure UX quality by “counting usage” of their solution: tracking usage stats, analytics on sessions, etc. The problem is that this tells you nothing useful about whether people are satisfied (“meets spec”) or delighted (“a product they can’t live without”). These are product metrics—but they don’t reflect how people feel. There are better measurements for evaluating users’ experience that go beyond “willingness to pay.” Payment is great, but in B2B products, buyers aren’t always users—and we’ve all bought something based on the promise of what it would do for us, but the promise fell short.

    “In B2B analytics and AI products, the biggest challenge isn’t complexity, it’s ambiguity around what outcome the product is actually responsible for changing. Teams often define success in terms of internal goals like ‘adoption,’ ‘usage,’ or ‘efficiency,’ but those metrics don’t tell you what the user’s experience is supposed to look like once the product is working well. A product tied to vague business outcomes tends to drift because no one agrees on what the improvement should feel like in the user’s real workflow. What you want are visible, measurable, user-centric outcomes: outcomes that describe how the user’s behavior or experience will change once the solution is in place, down to the concrete actions they’ll no longer need to take. When you articulate outcomes at that level, it forces the entire organization to align around a shared target, reduces the scope bloat that normally plagues enterprise products, and gives you a way to evaluate whether you’re actually removing friction rather than just adding more layers of tooling. And ironically, the clearer the user outcome is, the easier it becomes to achieve the business outcome, because the product is no longer floating in abstraction, it’s anchored in the lived reality of the people who use it.”

    Links
    Listen to Part I: Episode 182
    Schedule a Design-Eyes Assessment with me and get clarity, now.

    35 min
4.9 out of 5 (43 Ratings)
