The Data Journey

Roland Brown

The Data Journey: Big Ideas, Small Time Looking to stay ahead in data architecture, education strategy, and leadership—but short on time? The Data Journey delivers actionable insights in under 10 minutes, weekly. Each episode is designed for busy professionals: quick, practical, and easy to apply. No fluff, no filler—just the strategies and frameworks you need to make smarter decisions, faster. Subscribe to the newsletter at www.thedatajourney.com and transform your coffee break into a mini-masterclass in modern data and leadership.

  1. Episode 73: Why AI Exposes Weak Data Foundations

    5D AGO

    Episode 73: Why AI Exposes Weak Data Foundations

    Most organisations believe they’re starting their AI journey by choosing tools, models, or use cases. In reality, they’re starting a foundation test they’ve been postponing for years.

    In this series transition episode, Roland Brown connects everything explored from Episode 1 through Episode 70 to a single, uncomfortable truth: AI does not fix data; it exposes it. AI removes the human buffer that allowed inconsistency, unclear definitions, and silent quality issues to survive inside dashboards and reports. It consumes data continuously, learns from it, and acts on it. Whatever ambiguity exists in the data estate is no longer hidden; it is operationalised.

    The episode reframes AI as an amplifier rather than a solution. Strong foundations accelerate value. Fragile foundations accelerate failure. Roland walks back through the long arc of the podcast (platforms, the Medallion Architecture, governance, metadata, lineage, operating models, reliability, SLAs, observability, and data products), showing that none of these were isolated topics. Together, they form the minimum conditions for AI to work safely and sustainably.

    The core shift is this: traditional analytics tolerated uncertainty because humans added context informally. AI cannot do that. It forces organisations to answer questions they were previously able to defer:

    • Why does this data exist?
    • What decision does it support?
    • Which definition is correct?
    • Who is accountable when something goes wrong?
    • How is quality measured, not assumed?

    These are not new questions. They are the same unresolved issues data teams have lived with for years. The episode identifies the first points where weak foundations break under AI pressure:

    • purpose that was never explicit
    • definitions that were never reconciled
    • ownership that was always implied
    • quality that was never observable

    In BI, these show up as debates. In AI, they become outcomes.
    Roland then positions data products as the trust boundary for AI. AI should not consume raw pipelines. It should consume products with:

    • a clear purpose
    • an accountable owner
    • known consumers
    • explicit and measurable quality expectations

    This is where the Medallion Architecture quietly becomes critical again. Bronze preserves truth. Silver enforces consistency. Gold expresses intent. AI belongs at the Gold layer, where meaning is explicit and responsibility is clear.

    The episode closes the data product chapter and opens the AI series by redefining what AI readiness actually means. It is not about model sophistication or tooling maturity. The organisations succeeding with AI are the ones with the least fragile data foundations, not the most advanced algorithms. This is not a series about AI trends. It is a series about architecture, operating models, governance, and trust in an AI-scale world.

    Discover insights on:

    • Why AI is an amplifier, not a data solution
    • How AI removes the human buffer that hid data problems
    • Where weak data foundations fail first
    • Why unresolved definitions become AI outcomes
    • How ownership becomes unavoidable in AI-driven decisions
    • Why data products form the minimum viable trust boundary for AI
    • What AI readiness really means beyond tooling

    “AI doesn’t introduce new data problems. It removes your ability to ignore the old ones.”

    🎧 Listen to The Data Journey wherever you get your podcasts, or visit thedatajourney.com
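The four conditions above can be sketched as a small data structure. This is an illustrative sketch only, not anything from the episode; every name here (`DataProduct`, `ai_ready`, the example product) is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class QualityExpectation:
    """One explicit, measurable quality rule (hypothetical shape)."""
    metric: str       # e.g. "completeness" or "freshness_minutes"
    threshold: float  # the level consumers can rely on

@dataclass
class DataProduct:
    """Minimum conditions for a product that AI is allowed to consume."""
    name: str
    purpose: str                 # why this data exists
    decision_supported: str      # what decision it supports
    owner: str                   # who is accountable when something goes wrong
    consumers: list = field(default_factory=list)
    quality: list = field(default_factory=list)

    def ai_ready(self) -> bool:
        # Consumable only when every condition is explicit, not implied.
        return bool(self.purpose and self.decision_supported
                    and self.owner and self.consumers and self.quality)

churn_risk = DataProduct(
    name="gold.customer_churn_risk",
    purpose="Score customers by likelihood of churn",
    decision_supported="Which customers retention contacts this week",
    owner="retention-team",
    consumers=["retention-dashboard", "outreach-model"],
    quality=[QualityExpectation("completeness", 0.99),
             QualityExpectation("freshness_minutes", 60)],
)
print(churn_risk.ai_ready())  # → True
```

The point of the sketch is that `ai_ready` is a property of declared metadata, not of the pipeline: leave any field implicit and the product fails the check.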

    9 min
  2. Episode 72: From Project Delivery to Product Thinking

    FEB 20

    Episode 72: From Project Delivery to Product Thinking

    Most data initiatives don’t fail because they were badly executed. They fail because success was defined as delivery instead of value.

    In this episode, Roland Brown brings the entire data products series together by tackling the foundational shift that determines whether everything discussed in Episodes 64 through 71 actually sticks: moving from project delivery to product thinking.

    Roland explains why traditional project models dominate data work (clear scope, fixed timelines, sign-off milestones) and why these structures are deeply misaligned with how data products behave in real organisations. While projects optimise for completion, data products exist in environments that constantly change: business rules evolve, decisions shift, and consumer needs never stay still.

    The episode highlights the hidden assumption that breaks most data products: once something is delivered, its value is fixed. Data products don’t work that way. The moment a product goes live, it begins to age. Without ongoing ownership, feedback loops, and deliberate evolution, even well-designed products quietly decay.

    Roland shows how project thinking surfaces in familiar behaviours: “that was out of scope,” “we delivered what was agreed,” “that’s phase two.” These statements are reasonable in project contexts but destructive in product environments. Product thinking assumes change, while project thinking resists it.

    Building on earlier episodes, Roland connects product thinking directly to:

    • consumer-first design
    • ownership and accountability
    • data contracts
    • honest success measurement
    • sunsetting and lifecycle management
    • discovery and marketplaces

    None of these can survive in a delivery-only mindset. The episode reframes success as something ongoing rather than binary. Instead of asking whether something was delivered, product thinking asks whether it is still useful, still trusted, and still relied upon.
    Ownership no longer expires at go-live, and feedback becomes an input rather than a disruption.

    A practical reframe is introduced: projects can still build capabilities, but products sustain value. Projects create motion. Products create stability. Confusing the two is how organisations stay busy while trust erodes.

    The episode closes with a leadership-level reminder: value lives after go-live. When organisations continue to fund, measure, and reward delivery instead of confidence, data products will always underperform, no matter how modern the platform.

    Discover insights on:

    • Why delivery is not the same as value
    • How project thinking quietly undermines data products
    • What product thinking changes in practice
    • Why ownership cannot expire at go-live
    • How feedback and evolution sustain trust
    • What leaders must change for product models to stick

    “Delivery is an event. Value is a responsibility.”

    🎧 Listen to The Data Journey wherever you get your podcasts, or visit thedatajourney.com

    9 min
  3. Episode 71: Customer 360 as a Data Product: An End-to-End Example

    FEB 17

    Episode 71: Customer 360 as a Data Product: An End-to-End Example

    Almost every organisation claims to have a Customer 360. Very few trust it. Even fewer use it consistently to make better decisions.

    In this episode, Roland Brown takes one of the most familiar and most misunderstood concepts in data and walks through it end-to-end as a true data product. Building on the principles established in Episodes 64 through 70, he shows why Customer 360 initiatives so often fail, and how product thinking fundamentally changes the outcome.

    Roland explains that Customer 360 usually collapses under its own ambition. Teams try to create a single, complete view of the customer, integrating every possible source into one massive model. The result is technically impressive but operationally fragile. Definitions vary, ownership is unclear, trust erodes, and different teams quietly revert to their own versions of the truth.

    The episode reframes the problem with a simple but powerful shift: Customer 360 is not one product; it is a family of data products, each designed for a specific decision and consumer. Instead of asking “what data should go into Customer 360?”, Roland shows why teams must start with the decisions they are trying to support, such as:

    • retention teams deciding who to intervene with
    • sales teams prioritising customers
    • service teams understanding interaction history
    • risk teams assessing exposure

    From there, distinct Customer 360 products emerge, each with clear intent, scope, cadence, and interface, all supported by shared underlying data rather than one monolithic asset.
    The episode walks step-by-step through how Customer 360 changes when treated as a product:

    • Intent shifts from completeness to decision support
    • Ownership moves to the teams accountable for outcomes
    • Contracts make definitions and expectations explicit
    • Measurement focuses on adoption, reuse, and trust
    • Sunsetting removes outdated views before they damage confidence
    • Discovery surfaces the right customer product for the right decision

    Roland highlights why ownership is the turning point for Customer 360. When accountability sits with “the data team,” ambiguity becomes normal. When it sits with retention, sales, service, or risk leaders, clarity becomes non-negotiable.

    A practical walkthrough shows how the same customer data can power multiple high-confidence products, without duplication, when reuse happens beneath the surface rather than at the interface.

    The episode closes by dismantling a long-held assumption: a single view of the customer is useless if no one trusts what they are seeing. Customer 360 only delivers value when it is designed as a set of owned, trusted, decision-ready products, not as a technical artefact or platform milestone.

    Discover insights on:

    • Why most Customer 360 initiatives fail despite heavy investment
    • How product thinking changes Customer 360 outcomes
    • Why Customer 360 should be multiple products, not one
    • How ownership clarifies definitions and trust
    • The role of contracts, measurement, and lifecycle in customer data
    • How discovery makes Customer 360 usable at scale

    “Customer 360 isn’t about seeing everything. It’s about seeing enough to decide with confidence.”

    🎧 Listen to The Data Journey wherever you get your podcasts, or visit thedatajourney.com
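The “family of products over shared data” idea can be sketched in a few lines. This is a hypothetical illustration, not from the episode: one shared customer store feeds two decision-specific views, each exposing only what its consumer needs.

```python
# Shared underlying data: one reusable customer record store (hypothetical).
customers = [
    {"id": 1, "name": "Acme", "last_order_days": 95, "open_tickets": 4},
    {"id": 2, "name": "Birch", "last_order_days": 12, "open_tickets": 0},
]

def retention_360(rows):
    """Product for retention: who to intervene with (owner: retention team)."""
    return [
        {"id": r["id"], "name": r["name"], "at_risk": r["last_order_days"] > 60}
        for r in rows
    ]

def service_360(rows):
    """Product for service: interaction load per customer (owner: service team)."""
    return [{"id": r["id"], "name": r["name"], "open_tickets": r["open_tickets"]}
            for r in rows]

# Each consumer gets a small, decision-ready interface, while reuse
# happens beneath the surface in the shared store, not at the interface.
print(retention_360(customers))
print(service_360(customers))
```

Note the design choice this mirrors: neither view tries to be complete; each answers one decision, and duplication is avoided because both read the same underlying records.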

    8 min
  4. Episode 70: Data Marketplaces and Discovery: Finding what actually matters

    FEB 13

    Episode 70: Data Marketplaces and Discovery: Finding what actually matters

    Most organisations don’t struggle to find data. They struggle to find data they can trust.

    In this episode, Roland Brown reframes one of the most hyped topics in modern data architecture, data marketplaces and discovery, and explains why discovery is never a tooling problem on its own. Building on the foundations laid in Episodes 64 through 69, he shows why effective discovery is the last mile of trust, not the starting point.

    Roland challenges the common belief that better search, richer metadata, or AI-powered recommendations automatically solve discovery. He explains why marketplaces fail when they are treated as inventory systems instead of signal amplifiers, surfacing noise rather than clarity.

    The episode makes a critical distinction: data marketplaces do not create trust; they expose it. When ownership is unclear, contracts are implicit, products are poorly measured, and outdated products are never retired, discovery becomes guesswork. Users compensate by asking colleagues, copying old queries, or defaulting to whatever they used last, regardless of correctness.

    Roland defines what a data marketplace actually is in practice: a curated environment where consumers can discover trusted data products, understand the decisions they support, see who owns them, assess fitness for purpose, and act with confidence.

    Crucially, the episode explains why only data products belong in marketplaces. Datasets are ingredients: necessary, reusable, and powerful. But exposing everything directly creates confusion. Marketplaces work when they surface a small number of high-confidence products, not every possible asset.
    Drawing on earlier episodes, Roland shows how disciplined product practices make discovery possible:

    • Consumer-first design clarifies intent
    • Ownership provides accountability
    • Contracts make expectations explicit
    • Measurement surfaces what actually matters
    • Sunsetting removes ambiguity

    When these foundations are in place, discovery becomes fast, contextual, and reliable. A practical revenue example illustrates how weak discovery environments lead to conflicting definitions and endless searching, while strong marketplaces guide users directly to the right product for the decision at hand.

    The episode closes with a counter-intuitive insight: organisations that focus on building marketplaces often fail, while those that focus on building disciplined data products find that marketplaces emerge naturally as a reflection of maturity. Discovery is not a feature to be implemented. It is a capability that must be earned.

    Discover insights on:

    • Why discovery problems are really trust problems
    • The difference between data inventories and data marketplaces
    • Why products, not datasets, belong in discovery layers
    • What signals actually matter in a marketplace
    • How sunsetting improves discovery quality
    • Why good discovery is an outcome, not a starting point

    “You don’t discover data. You discover confidence.”

    🎧 Listen to The Data Journey wherever you get your podcasts, or visit thedatajourney.com
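The discipline-before-discovery point can be made concrete with a toy filter. This is a hypothetical sketch (all names invented): a marketplace surfaces only catalogue entries that carry the signals listed above (owner, contract, measured trust), rather than every asset.

```python
# Hypothetical catalogue: raw datasets and data products mixed together.
catalogue = [
    {"name": "raw.orders", "kind": "dataset"},
    {"name": "gold.revenue_by_region", "kind": "product",
     "owner": "finance-team", "contract": "v2", "trust_score": 0.9},
    {"name": "gold.legacy_revenue", "kind": "product",
     "owner": None, "contract": None, "trust_score": 0.3},
]

def marketplace(entries, min_trust=0.8):
    """Surface only products whose trust signals are actually in place."""
    return [
        e["name"] for e in entries
        if e["kind"] == "product"                  # datasets are ingredients, not listings
        and e.get("owner")                         # accountable owner
        and e.get("contract")                      # explicit expectations
        and e.get("trust_score", 0) >= min_trust   # measured, not assumed
    ]

print(marketplace(catalogue))  # → ['gold.revenue_by_region']
```

The filter is the whole argument in miniature: the marketplace has no trust of its own to add; it can only expose the signals that product discipline already produced.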

    9 min
  5. Episode 69: Killing Bad Data Products: Sunsetting Properly

    FEB 9

    Episode 69: Killing Bad Data Products: Sunsetting Properly

    Most organisations are very good at building data products. They are far less good at stopping them.

    In this episode, Roland Brown tackles one of the most uncomfortable yet essential capabilities of mature data organisations: sunsetting data products properly. Building directly on the failure modes discussed in Episode 68, he explains why keeping bad or outdated data products alive quietly damages trust far more than removing them.

    Roland shows that most data products don’t linger because they are still valuable; they linger because organisations avoid difficult conversations. “Someone might still be using it.” “We might need it later.” “It took effort to build.” These well-intentioned hesitations result in products that are neither alive nor dead, creating ambiguity, confusion, and false confidence.

    The episode reframes sunsetting not as deletion or failure, but as a deliberate lifecycle stage, one that was already implied in the anatomy of a good data product introduced in Episode 62. Products are born, they mature, they evolve, and eventually they should retire.

    Roland outlines the clearest signals that a data product should be considered for retirement:

    • The decision it supported no longer exists
    • Trust has eroded to the point of constant validation
    • No one is willing to own the outcome
    • A clearer, better product has replaced it

    None of these are failures. They are signals of change.

    The episode then walks through what responsible sunsetting actually looks like in practice:

    • Making the decision explicit instead of letting decay continue
    • Identifying who is still impacted and how
    • Providing a clear replacement or exit path
    • Running a managed transition period
    • Retiring interfaces cleanly and visibly

    Roland explains why silent decay is far more dangerous than visible retirement. Products that quietly rot teach consumers that data products can’t be trusted: not just the bad ones, but all of them.
    A practical revenue example illustrates how sunsetting, when done transparently, actually increases confidence rather than disrupting it. Consumers know where to go, what to use, and what no longer applies.

    The episode closes with a powerful maturity signal: healthy data ecosystems are not defined by how many products they have, but by how confidently they can let go of the ones that no longer serve decisions. Sunsetting is not an admission of failure. It is an act of respect for consumers, for clarity, and for trust.

    Discover insights on:

    • Why bad data products linger longer than they should
    • The hidden cost of keeping outdated products alive
    • How to recognise when a product should be retired
    • Why sunsetting is a lifecycle capability, not cleanup
    • What responsible, low-risk retirement actually looks like
    • How killing bad products strengthens the entire ecosystem

    “A product you’re afraid to kill is a product that’s already dangerous.”

    🎧 Listen to The Data Journey wherever you get your podcasts, or visit thedatajourney.com
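The lifecycle framing (born, mature, evolve, retire) can be sketched as a small state machine. Everything here is illustrative; the stage names and transitions are assumptions, not the episode’s terminology.

```python
# Allowed lifecycle transitions: retirement is a deliberate, visible stage,
# never a silent shortcut (hypothetical stage names).
TRANSITIONS = {
    "draft": {"live"},
    "live": {"live", "deprecated"},  # products evolve in place
    "deprecated": {"retired"},       # managed transition period
    "retired": set(),                # interfaces removed cleanly
}

def advance(stage, new_stage):
    """Move a product through its lifecycle, rejecting silent shortcuts."""
    if new_stage not in TRANSITIONS[stage]:
        raise ValueError(f"cannot go from {stage} to {new_stage}")
    return new_stage

stage = advance("draft", "live")
stage = advance(stage, "deprecated")  # the decision is made explicit
stage = advance(stage, "retired")     # visible retirement, not silent decay
print(stage)  # → retired
```

The key property the table encodes is that there is no edge from `live` straight to `retired` and no dead-end in between: a product must pass through a visible deprecation stage before its interfaces disappear.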

    7 min
  6. Episode 68: Why most data products fail

    FEB 6

    Episode 68: Why most data products fail

    Most data products don’t fail because the data is wrong. They fail because the conditions required for trust, accountability, and value were never designed in.

    In this episode, Roland Brown confronts an uncomfortable reality: despite modern platforms, sophisticated pipelines, and well-intentioned teams, most data products still fail to deliver lasting value. Building directly on Episodes 64 through 67, he explains why these failures are rarely dramatic and almost always avoidable.

    Roland shows that data product failure rarely looks like an outage or a rollback. Instead, it shows up as slow erosion: declining trust, growing workarounds, duplicated logic, and products that technically exist but are no longer relied on.

    The episode walks through the most common failure modes seen in practice:

    • Starting with artefacts instead of decisions
    • Assigning ownership without real authority
    • Treating trust as a feature instead of infrastructure
    • Measuring activity instead of confidence
    • Turning everything into a “product”
    • Ignoring lifecycle and sunsetting altogether

    Each failure mode is connected back to earlier episodes in the series, revealing how skipping even one foundational principle (consumer-first design, ownership, contracts, or honest measurement) quietly undermines everything else.

    Roland explains why many data products survive on paper long after they’ve failed in practice. They aren’t removed, because no one wants to admit failure. But by lingering, they actively damage the wider ecosystem, teaching consumers that data products cannot be trusted.

    A key insight of the episode is that motion is often mistaken for value. Teams continue delivering pipelines, dashboards, and enhancements, while confidence continues to fall. Without anchoring products to decisions and behaviours, delivery becomes theatre.

    The episode reframes failure as a design signal rather than a maturity problem.
    Data products fail when clarity is avoided: when teams hesitate to commit to intent, ownership, contracts, measurement, or endings.

    Roland closes with a critical reminder: most data product failures are not caused by lack of skill, tooling, or effort; they are caused by the absence of deliberate design choices. When those choices are made explicitly, failure becomes preventable.

    Discover insights on:

    • Why data product failure is usually quiet, not visible
    • The most common and preventable failure modes
    • How trust erodes long before usage drops
    • Why over-productisation damages ecosystems
    • How ignoring lifecycle guarantees decay
    • Why clarity, not complexity, determines success

    “Data products don’t fail because data is hard. They fail because clarity is avoided.”

    🎧 Listen to The Data Journey wherever you get your podcasts, or visit thedatajourney.com

    8 min
  7. Episode 67: Measuring Data Product Success: Reuse, Adoption, and Trust

    FEB 4

    Episode 67: Measuring Data Product Success: Reuse, Adoption, and Trust

    Most organisations measure their data success by how much they build. Pipelines delivered. Tables published. Dashboards created. And yet, trust still erodes, duplication spreads, and decisions remain slow.

    In this episode, Roland Brown challenges one of the most entrenched habits in modern data teams: measuring activity instead of value. Building on the foundations laid in Episodes 64, 65, and 66, he explains why traditional data metrics often create the illusion of progress, and why real data product success shows up in behaviour, not dashboards.

    Roland makes a clear distinction between usage and reliance. A data product can be queried frequently and still not be trusted. It can be reused widely and still be reinterpreted every time. When teams measure volume instead of confidence, failure hides behind busy charts.

    The episode introduces three signals that consistently reveal whether a data product is actually succeeding:

    • Adoption: are the right consumers using the product to make real decisions?
    • Reuse: is the product reducing duplication and rework, or just feeding more downstream variations?
    • Trust: do consumers rely on the product without validation, disclaimers, or reconciliation?

    Roland explains why adoption is often a lagging indicator, and why trust-related behaviours like increased validation, shadow calculations, and side-channel confirmations are some of the earliest signs that a product is in trouble.
    Drawing on the product anatomy discussed in Episode 62, he shows how success metrics change when data products are treated as long-lived capabilities instead of one-off deliveries:

    • Adoption aligns to decision cadence, not query counts
    • Reuse is measured by work eliminated, not consumers added
    • Trust reveals itself when products are used without hesitation

    A practical example demonstrates how two products with similar usage statistics can have radically different outcomes, one stabilising decisions and the other quietly creating more work, depending on whether trust is present.

    The episode also addresses a common leadership mistake: assuming that low adoption means more training is needed. Roland explains why adoption problems are almost always design or trust problems, not education problems, and why better metrics often reveal uncomfortable truths about ownership, contracts, and intent.

    The episode closes with a reframing that ties the entire data product arc together: data products do not succeed because they are visible; they succeed because they are relied on. When adoption, reuse, and trust are measured honestly, teams stop optimising for output and start optimising for confidence.

    Discover insights on:

    • Why activity metrics hide data product failure
    • The difference between usage and reliance
    • How to measure reuse without incentivising duplication
    • Why trust is the most honest and quietest success signal
    • How behaviour reveals value long before dashboards do
    • What leaders should measure if they actually care about outcomes

    “You don’t measure data products by how often they’re touched. You measure them by how rarely they’re questioned.”

    🎧 Listen to The Data Journey wherever you get your podcasts, or visit thedatajourney.com
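The usage-versus-reliance distinction can be sketched against a toy query log. This is a hypothetical illustration (the log shape and field names are invented): adoption counts distinct consumer-decision pairs rather than raw queries, and trust is the share of uses that needed no side-channel validation.

```python
# Hypothetical query-log events for one data product. "validated" marks a
# consumer re-checking the numbers elsewhere before acting (a trust warning).
events = [
    {"consumer": "finance", "decision": "monthly-forecast", "validated": False},
    {"consumer": "finance", "decision": "monthly-forecast", "validated": False},
    {"consumer": "sales", "decision": "territory-plan", "validated": True},
    {"consumer": "sales", "decision": "territory-plan", "validated": False},
]

def adoption(events):
    """Distinct consumers making real decisions, not raw query volume."""
    return len({(e["consumer"], e["decision"]) for e in events})

def trust(events):
    """Share of uses that needed no side-channel validation."""
    unvalidated = sum(not e["validated"] for e in events)
    return unvalidated / len(events)

print(adoption(events))  # → 2 (two decision-consumer pairs, despite 4 queries)
print(trust(events))     # → 0.75
```

Under these definitions, a product could log thousands of queries and still score poorly: high volume with frequent re-validation is exactly the “busy charts” failure the episode describes.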

    8 min
