100 episodes

Experiencing Data w/ Brian T. O’Neill - Data Products, Product Management, & UX Design
Brian T. O’Neill from Designing for Analytics

    • Technology
    • 5.0 • 39 Ratings

Are you an enterprise data or product leader seeking to increase the user adoption and business value of your ML/AI and analytical data products?

While it is easier than ever to create ML and analytics from a technology perspective, do you find that getting users to use, buyers to buy, and stakeholders to make informed decisions with data remains challenging?

If you lead an enterprise data team, have you heard that a “data product” approach can help—but you’re not sure what that means, or whether software product management and UX design principles can really change consumption of ML and analytics?

My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting product designer’s perspective on why simply creating ML models and analytics dashboards isn’t sufficient to routinely produce outcomes for your users, customers, and stakeholders. My goal is to help you design more useful, usable, and delightful data products by better understanding your users’, customers’, and business sponsors’ needs. After all, you can’t produce business value with data if the humans in the loop can’t or won’t use your solutions.

Every 2 weeks, I release solo episodes and interviews with chief data officers, data product management leaders, and top UX design and research professionals working at the intersection of ML/AI, analytics, design, and product—and now, I’m inviting you to join the #ExperiencingData listenership.

Transcripts, 1-page summaries, and quotes are available at: https://designingforanalytics.com/ed

ABOUT THE HOST
Brian T. O’Neill is the Founder and Principal of Designing for Analytics, an independent consultancy helping technology leaders turn their data into valuable data products. He is also the founder of The Data Product Leadership Community. For over 25 years, he has worked with companies including Dell EMC, Tripadvisor, Fidelity, NetApp, Roche, AbbVie, and several SaaS startups. He has spoken internationally, giving talks at O’Reilly Strata, Enterprise Data World, the International Institute for Analytics Symposium, Predictive Analytics World, and Boston College. Brian also hosts the highly rated podcast Experiencing Data, advises students in MIT’s Sandbox Innovation Fund, and has been published by O’Reilly Media. He is also a professional percussionist who has backed up artists like The Who and Donna Summer, and he’s graced the stages of Carnegie Hall and The Kennedy Center.

Subscribe to Brian’s Insights mailing list at https://designingforanalytics.com/list.

    148 - UI/UX Design Considerations for LLMs in Enterprise Applications (Part 2)

    Ready for more ideas about UX for AI and LLM applications in enterprise environments? In part 2 of my topic on UX considerations for LLMs, I explore how an LLM might be used for a fictitious use case at an insurance company—specifically, to help internal tools teams get rapid access to primary qualitative user research. (Yes, it’s a little “meta”, and I’m also trying to nudge you with this hypothetical example—no secret!) ;-) My goal with these episodes is to share questions you might want to ask yourself such that any use of an LLM is actually contributing to a positive UX outcome. Join me as I cover the implications for design, the importance of foundational data quality, the balance between creative inspiration and factual accuracy, and the never-ending discussion of how we might handle hallucinations and errors posing as “facts”—all with a UX angle. At the end, I also share a personal story where I used an LLM to help me do some shopping for my favorite product: TRIP INSURANCE! (NOT!)
     
     
    Highlights/ Skip to:
    (1:05) I introduce a hypothetical internal LLM tool and the goal it serves for the team that would use it
    (5:31) Improving access to primary research findings for better UX 
    (10:19) What “quality data” means in a UX context
    (12:18) When LLM accuracy maybe doesn’t matter as much
    (14:03) How AI and LLMs are opening the door for fresh visioning work
    (15:38) Brian’s overall take on LLMs inside enterprise software as of right now
    (18:56) Final thoughts on UX design for LLMs, particularly in the enterprise
    (20:25) My inspiration for these 2 episodes—and how I had to use ChatGPT to help me complete a purchase on a website that could have integrated this capability right into its own site
     
     
    Quotes from Today’s Episode
    “If we accept that the goal of most product and user experience research is to accelerate the production of quality services, products, and experiences, the question is whether or not using an LLM for these types of questions is moving the needle in that direction at all. And secondly, are the potential downsides like hallucinations and occasional fabricated findings, is that all worth it? So, this is a design for AI problem.” - Brian T. O’Neill (8:09)
    “What’s in our data? Can the right people change it when the LLM is wrong? The data product managers and AI leaders reading this or listening know that the not-so-secret path to the best AI is in the foundational data that the models are trained on. But what does the word *quality* mean from a product standpoint and a risk reduction one, as seen from an end-users’ perspective? Somebody who’s trying to get work done? This is a different type of quality measurement.” - Brian T. O’Neill (10:40)
    “When we think about fact retrieval use cases in particular, how easily can product teams—internal or otherwise—and end-users understand the confidence of responses? When responses are wrong, how easily, if at all, can users and product teams update the model’s responses? Errors in large language models may be a significant design consideration when we design probabilistic solutions, and we no longer control what exactly our products and software are going to show to users. If bad UX can include leading people down the wrong path unknowingly, then AI is kind of like the team on the other side of the tug of war that we’re playing.” - Brian T. O’Neill (11:22)
    “As somebody who writes a lot for my consulting business, and composes music in another, one of the hardest parts for creators can be the zero-to-one problem of getting started—the blank page—and this is a place where I think LLMs have great potential. But it also means we need to do the proper research to understand our audience, and when or where they’re doing truly generative or creative work—such that we can take a generative UX to the next level that goes beyond delivering banal and obviously derivative content.” - Brian T. O’Neill

    • 26 min
    147 - UI/UX Design Considerations for LLMs in Enterprise Applications (Part 1)

    Let’s talk about design for AI (which more and more, I’m agreeing means GenAI to those outside the data space). The hype around GenAI and LLMs—particularly as it relates to dropping these in as features into a software application or product—seems to me, at this time, to largely be driven by FOMO rather than real value. In this “part 1” episode, I look at the importance of solid user experience design and outcome-oriented thinking when deploying LLMs into enterprise products. Challenges with immature AI UIs, the role of context, the constant game of understanding what accuracy means (and how much this matters), and the potential impact on human workers are also examined. Through a hypothetical scenario, I illustrate the complexities of using LLMs in practical applications, stressing the need for careful consideration of benchmarks and the acceptance of GenAI's risks. 
     
     
    I also want to note that LLMs are a very immature space in terms of UI/UX design—even if the foundation models continue to mature at a rapid pace. As such, this episode is more about the questions and mindset I would be considering when integrating LLMs into enterprise software more than a suggestion of “best practices.” 
     
     
    Highlights/ Skip to:
    (1:15) Currently, many LLM feature initiatives seem to be driven mostly by FOMO
    (2:45) UX Considerations for LLM-enhanced enterprise applications 
    (5:14) Challenges with LLM UIs / user interfaces
    (7:24) Measuring improvement in UX outcomes with LLMs
    (10:36) Accuracy in LLMs and its relevance in enterprise software 
    (11:28) Illustrating key considerations for implementing an LLM-based feature
    (19:00) Leadership and context in AI deployment
    (19:27) Determining UX benchmarks for using LLMs
    (20:14) The dynamic nature of LLM hallucinations and how we design for the unknown
    (21:16) Closing thoughts on Part 1 of designing for AI and LLMs
     
     
    Quotes from Today’s Episode
    “While many product teams continue to race to deploy some sort of GenAI and especially LLMs into their products—particularly this is in the tech sector for commercial software companies—the general sense I’m getting is that this is still more about FOMO than anything else.” - Brian T. O’Neill (2:07)
    “No matter what the technology is, a good user experience design foundation starts with not doing any harm, and hopefully going beyond usable to be delightful. And adding LLM capabilities into a solution is really no different. So, we still need to have outcome-oriented thinking on both our product and design teams when deploying LLM capabilities into a solution. This is a cornerstone of good product work.” - Brian T. O’Neill (3:03)
    “So, challenges with LLM UIs and UXs, right, user interfaces and experiences, the most obvious challenge to me right now with large language model interfaces is that while we’ve given users tremendous flexibility in the form of a Google search-like interface, we’ve also in many cases, limited the UX of these interactions to a text conversation with a machine. We’re back to the CLI in some ways.” - Brian T. O’Neill (5:14)
    “Before and after we insert an LLM into a user’s workflow, we need to know what an improvement in their life or work actually means.”- Brian T. O’Neill (7:24)
    "If it would take the machine a few seconds to process a result versus what might take a day for a worker, what’s the role and purpose of that worker going forward? I think these are all considerations that need to be made, particularly if you’re concerned about adoption, which a lot of data product leaders are." - Brian T. O’Neill (10:17)
    “So, there’s no right or wrong answer here. These are all range questions, and they’re leadership questions, and context really matters. They are important to ask, particularly when we have this risk of reacting to incorrect information that looks plausible and believable because of how these LLMs tend to respond to us with a positive sheen…”

    • 25 min
    146 - (Rebroadcast) Beyond Data Science - Why Human-Centered AI Needs Design with Ben Shneiderman

    Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.
     
     
    I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.
     
     
    In our chat, we covered:
    Ben's career studying human-computer interaction and computer science. (0:30)
    'Building a culture of safety': Creating and designing ‘safe, reliable and trustworthy’ AI systems. (3:55)
    'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI. (12:56)
    'There’s no such thing as an autonomous device': Designing human control into AI systems. (18:16)
    A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences. (21:08)
    Designing ‘comprehensible, predictable, and controllable’ user interfaces for explainable AI systems and why explainable AI (XAI) matters. (30:34)
    Ben's upcoming book on human-centered AI. (35:55)
     
     
    Resources and Links:
    People-Centered Internet: https://peoplecentered.net/
    Designing the User Interface (one of Ben’s earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
    Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
    Partnership on AI: https://www.partnershiponai.org/
    AI incident database: https://www.partnershiponai.org/aiincidentdatabase/
    University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
    ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
    Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
    Ben on Twitter: https://twitter.com/benbendc
     
     
    Quotes from Today’s Episode
    The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)
     
    The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used…

    • 42 min
    145 - Data Product Success: Adopting a Customer-Centric Approach With Malcolm Hawker, Head of Data Management at Profisee

    Wait, I’m talking to a head of data management at a tech company? Why!? Well, today I’m joined by Malcolm Hawker to get his perspective on data products and what he’s seeing out in the wild as Head of Data Management at Profisee. Why Malcolm? He’s a former head of product, and for several years, I’ve enjoyed his musings on LinkedIn about the value of a product-oriented approach to ML and analytics. We had a chance to meet at CDOIQ in 2023 as well, and he went on my “need to do an episode” list!
     
    According to Malcolm, empathy is the secret to addressing key UX questions that ensure adoption and business value. He also emphasizes the need for data experts to develop business skills so that they’re seen as equals by their customers. During our chat, Malcolm stresses the benefits of a product- and customer-centric approach to data products and what data professionals can learn from approaching problem-solving with a product orientation.
     
    Highlights/ Skip to:
    Malcolm’s definition of a data product (2:10)
    Understanding your customers’ needs is the first step toward quantifying the benefits of your data product (6:34)
    How product makers can gain access to users to build more successful products (11:36) 
    Answering the UX question to get past the adoption stage and provide business value (16:03)
    Data experts must develop business expertise if they want to be seen as equals by potential customers (20:07)
    What people really mean by “data culture” (23:02)
    Malcolm’s data product journey and his changing perspective (32:05)
    Using empathy to provide a better UX in design and data (39:24)
    Avoiding the death of data science by becoming more product-driven (46:23)
    Where the majority of data professionals currently land on their view of product management for data products (48:15)
    Quotes from Today’s Episode
    “My definition of a data product is something that is built by a data and analytics team that solves a specific customer problem that the customer would otherwise be willing to pay for. That’s it.” - Malcolm Hawker (3:42)
    “You need to observe how your customer uses data to make better decisions, optimize a business process, or to mitigate business risk. You need to know how your customers operate at a very, very intimate level, arguably, as well as they know how their business processes operate.” - Malcolm Hawker (7:36)
    “So, be a problem solver. Be collaborative. Be somebody who is eager to help make your customers’ lives easier. You hear "no" when people think that you’re a burden. You start to hear more “yeses” when people think that you are actually invested in helping make their lives easier.” - Malcolm Hawker (12:42)
    “We [data professionals] put data on a pedestal. We develop this mindset that the data matters more—as much or maybe even more than the business processes, and that is not true. We would not exist if it were not for the business. Hard stop.” - Malcolm Hawker (17:07)
    “I hate to say it, I think a lot of this data stuff should kind of feel invisible in that way, too. It’s like this invisible ally that you’re not thinking about the dashboard; you just access the information as part of your natural workflow when you need insights on making a decision, or a status check that you’re on track with whatever your goal was. You’re not really going out of mode.” - Brian O’Neill (24:59)
    “But you know, data people are basically librarians. We want to put things into classifications that are logical and work forwards and backwards, right? And in the product world, sometimes they just don’t, where you can have something be a product and be a material to a subsequent product.” - Malcolm Hawker (37:57)
    “So, the broader point here is just more of a mindset shift. And you know, maybe these things aren’t necessarily a bad thing, but how do we become a little more product- and customer-driven so that we avoid situations where everybody…”

    • 53 min
    144 - The Data Product Debate: Essential Tech or Excessive Effort? with Shashank Garg, CEO of Infocepts (Promoted Episode)

    Welcome to another curated, Promoted Episode of Experiencing Data! 
    In episode 144, Shashank Garg, Co-Founder and CEO of Infocepts, joins me to explore whether all this discussion of data products out on the web actually has substance and is worth the perceived extra effort. Do we always need to take a product approach for ML and analytics initiatives? Shashank dives into how Infocepts approaches the creation of data solutions that are designed to be actionable within specific business workflows—and as I often do, I started out by asking Shashank how he and Infocepts define the term “data product.” We discuss a few real-world applications Infocepts has built, and the measurable impact of these data products—as well as some of the challenges they’ve faced that your team might face as well. Skill sets also came up; who does design? Who takes ownership of the product/value side? And of course, we touch a bit on GenAI.
     
     
    Highlights/ Skip to
    Shashank gives his definition of data products  (01:24)
    We tackle the challenges of user adoption in data products (04:29)
    We discuss the crucial role of integrating actionable insights into data products for enhanced decision-making (05:47)
    Shashank shares insights on the evolution of data products from concept to practical integration (10:35)
    We explore the challenges and strategies in designing user-centric data products (12:30)
    I ask Shashank about typical environments and challenges when starting new data product consultations (15:57)
    Shashank explains how Infocepts incorporates AI into their data solutions (18:55)
    We discuss the importance of understanding user personas and engaging with actual users (25:06)
    Shashank describes the roles involved in data product development’s ideation and brainstorming stages (32:20)
    The issue of proxy users not truly representing end-users in data product design is examined (35:47)
    We consider how organizations are adopting a product-oriented approach to their data strategies (39:48)
    Shashank and I delve into the implications of GenAI and other AI technologies on product orientation and user adoption (43:47)
    Closing thoughts (51:00)
     
     
    Quotes from Today’s Episode
    “Data products, at least to us at Infocepts, refers to a way of thinking about and organizing your data in a way so that it drives consumption, and most importantly, actions.” - Shashank Garg (1:44)
    “The way I see it is [that] the role of a DPM (data product manager)—whether they have the title or not—is benefits creation. You need to be responsible for benefits, not for outputs. The outputs have to create benefits or it doesn’t count. Game over.” - Brian O’Neill (10:07)
    “We talk about bridging the gap between the worlds of business and analytics... There’s a huge gap between the perception of users and the tech leaders who are producing it.” - Shashank Garg (17:37)
    “IT leaders often limit their roles to provisioning their secure data, and then they rely on businesses to be able to generate insights and take actions. Sometimes this handoff works, and sometimes it doesn’t because of quality governance.” - Shashank Garg  (23:02)
    “Data is the kind of field where people can react very, very quickly to what’s wrong.”  - Shashank Garg (29:44)
    “It’s much easier to get to a good prototype if we know what the inputs to a prototype are, which include data about the people who are going to use the solution, their usage scenarios, use cases, attitudes, beliefs…all these kinds of things.” - Brian O’Neill (31:49)
    “For data, you need a separate person, and then for designing, you need a separate person, and for analysis, you need a separate person—the more you can combine, I don’t think you can create super-humans who can do all three, four disciplines, but at least two disciplines and can appreciate the third one that makes it easier.” - Shashank Garg (39:20)
    “When we think of AI, we’re all talking about multiple different delivery method…”

    • 52 min
    143 - The (5) Top Reasons AI/ML and Analytics SAAS Product Leaders Come to Me For UI/UX Design Help

    Welcome back! In today's solo episode, I share the top five struggles that enterprise SaaS leaders have in the analytics/insight/decision support space that most frequently lead them to think they have a UI/UX design problem that has to be addressed. A lot of today's episode will talk about "slow creep," unaddressed design problems that gradually build up over time and begin to impact both UX and your revenue negatively. I will also share 20 UI and UX design problems I often see (even if clients do not!) that, when left unaddressed, may create sales friction, adoption problems, churn, or unhappy end users. If you work at a software company or are directly monetizing an ML or analytical data product, this episode is for you!
    Highlights/ Skip to 
    I discuss how specific UI/UX design problems can significantly impact business performance (02:51)
    I discuss five common reasons why enterprise software leaders typically reach out for help (04:39)
    The 20 common symptoms I've observed in client engagements that indicate the need for professional UI/UX intervention or training (13:22)
    The dangers of adding too many features or customization and how it can overwhelm users (16:00)
    The issues of integrating AI into user interfaces and UXs without proper design thinking (30:08)
    I encourage listeners to apply the insights shared to improve their data products (48:02)
    Quotes from Today’s Episode
    “One of the problems with bad design is that some of it we can see and some of it we can't — unless you know what you're looking for." - Brian O’Neill (02:23)
    “Design is usually not top of mind for an enterprise software product, especially one in the machine learning and analytics space. However, if you have human users, even enterprise ones, their tolerance for bad software is much lower today than in the past.” - Brian O’Neill (13:04)
    “Early on when you're trying to get product market fit, you can't be everything for everyone. You need to be an A+ experience for the person you're trying to satisfy.” -Brian O’Neill (15:39)
    “Often when I see customization, it is mostly used as a crutch for not making real product strategy and design decisions.”  - Brian O’Neill (16:04) 
    "Customization of data and dashboard products may be more of a tax than a benefit. In the marketing copy, customization sounds like a benefit...until you actually go in and try to do it. It puts the mental effort to design a good solution on the user." - Brian O’Neill (16:26)
    “We need to think strategically when implementing Gen AI or just AI in general into the product UX because it won’t automatically help drive sales or increase business value.” - Brian O’Neill (20:50) 
    “A lot of times our analytics and machine learning tools… are insight decision support products. They’re supposed to be rooted in facts and data, but when it comes to designing these products, there’s not a whole lot of data and facts that are actually informing the product design choices.” - Brian O’Neill (30:37)
    “If your IP is that special, but also complex, it needs the proper UI/UX design treatment so that the value can be surfaced in such a way someone is willing to pay for it if not also find it indispensable and delightful.” - Brian O’Neill (45:02)
    Links
    The (5) big reasons AI/ML and analytics product leaders invest in UI/UX design help: https://designingforanalytics.com/resources/the-5-big-reasons-ai-ml-and-analytics-product-leaders-invest-in-ui-ux-design-help/ 
    Subscribe for free insights on designing useful, high-value enterprise ML and analytical data products: https://designingforanalytics.com/list 
    Access my free frameworks, guides, and additional reading for SAAS leaders on designing high-value ML and analytical data products: https://designingforanalytics.com/resources
    Need help getting your product’s design/UX on track—so you can see more sales, less churn, and higher user adoption? Schedule a free 60-minute Discovery Call with me

    • 50 min

Customer Reviews

5.0 out of 5
39 Ratings

JMDien

Excellent analysis, exceptional design

Brian is bringing it home with this podcast. I initially found the show as a result of my interests in UX design and data analytics. The podcast on UI/UX with LLMs is spot on. Highly recommended.

Adam-from-Texas

Time well spent

I met Brian a year ago at a CDO workgroup meeting and was immediately taken with his thinking around how to best leverage data and communicate actionable insights.

CJG, Data Analyst

Only listen if you actually want people to use your data products

I can not stress enough how useful this podcast is. Every episode, every interview, helps illuminate ways to ensure our users are delighted at both our data insights and the ease of getting to those insights. I especially appreciate how this podcast tackles thorny challenges like how to define success metrics for data products and what sorts of environments help data folks thrive. These are discussions we need to be having!

Top Podcasts In Technology

Acquired
Ben Gilbert and David Rosenthal
All-In with Chamath, Jason, Sacks & Friedberg
All-In Podcast, LLC
Lex Fridman Podcast
Lex Fridman
Hard Fork
The New York Times
The Vergecast
The Verge
TED Radio Hour
NPR

You Might Also Like

DataFramed
DataCamp
Data Engineering Podcast
Tobias Macey
Product Thinking
Melissa Perri
Practical AI: Machine Learning, Data Science, LLM
Changelog Media
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Sam Charrington
"Upstream" with Erik Torenberg
Erik Torenberg