Don't Panic! It's Just Data

EM360Tech

Not only do many businesses have more data than they know what to do with, but they also struggle to gain insights from some of the most valuable data in their possession, leaving many crucial data assets unused. Whether the issue is data quality, visualization, or management, getting lost in the sea of enterprise data in your possession can make it impossible to make smart, data-driven decisions that improve your business. The "Don't Panic! It's Just Data" podcast delves deep into the power of enterprise data. From groundbreaking vendor solutions to expert-backed best practices for making the most of your data assets, join us as we gather insights from leading tech vendors and professionals who depend on data daily.

  1. Is “Frankenstein Data” Slowing Down AI Transformation in Insurance Enterprises?

    4 DAYS AGO

    Is “Frankenstein Data” Slowing Down AI Transformation in Insurance Enterprises?

    Podcast Series: Don’t Panic It’s Just Data
    Guest: Mark Duffy, Senior Director, Artificial Intelligence & Analytics at Cognizant, and Mark Blake, FSI Industry Practice Lead, Stibo Systems
    Host: Scott Taylor, The Data Whisperer and Principal Consultant, MetaMeta Consulting

    Artificial intelligence (AI) is now prevalent in the insurance industry, but many firms are not seeing the results they expected. The issue isn’t with the AI models; it’s with the data. In this episode of the Don’t Panic It’s Just Data podcast, host Scott Taylor, The Data Whisperer and Principal Consultant at MetaMeta Consulting, is joined by Mark Duffy, Senior Director, Artificial Intelligence & Analytics at Cognizant, and Mark Blake, FSI Industry Practice Lead at Stibo Systems. The data industry experts address a key misunderstanding about enterprise AI – that companies can innovate their way out of poor data quality. “Some people think AI is a quick fix for data governance,” said Taylor. “If I need better data, I just use AI.” The experts warn that this belief is what’s holding insurers back.

    How Is Frankenstein Data Impacting AI?
    Despite significant investments in AI, cloud, and analytics, many insurers remain stuck in pilot mode. According to Mark Blake of Stibo Systems, the problem is the infrastructure. “AI itself isn’t the challenge,” he said. “It’s the ability to scale it, and that comes back to fixing the data.” In reality, most insurance enterprises face fragmented, siloed data across systems. Customer, policy, claims, and product data often don’t align. This results in what Taylor calls “Frankenstein data,” where inconsistent records lead to unreliable outputs. For AI to function effectively at scale, insurers need trusted, governed, and unified data. That’s where data governance and master data management (MDM) come in. “For us to truly gain benefits from AI, the end user really has to trust the data,” stated Mark Duffy of Cognizant.
    “That trust comes from having the right data foundation in place.”

    Also Watch: Can Your MDM Strategy Survive the Shift to Real-Time AI Decision-Making?

    How Does Master Data Management (MDM) Unlock Scalable AI?
    One of the key drivers of AI success in insurance is multi-domain master data management, a system that connects core business data across the enterprise. “You always have to have a starting point,” Blake explained. “Then you expand horizontally across the enterprise.” This “horizontal data layer” enables insurers to unify key entities like customers, products, and partners—often referred to as the “nouns of the business.” When these are standardised, AI models can work consistently and accurately. The business impact is substantial: more accurate underwriting decisions, reduced claims leakage, improved customer experience and retention, and better cross-sell and upsell opportunities. Duffy shared a real-world example in which better data management directly sped up AI adoption. “It gave them trust in the data,” he said. “They could run models faster and gain more value because they weren’t constantly fixing issues.” Instead of spending 80 per cent of their time cleaning data, teams could finally focus on using it.

    Why AI Is Forcing a Data Strategy Reset
    For years, data governance struggled to gain executive support, but AI has now shifted that. “There’s been a refocus,” Blake said. “They’re looking at data in a way they maybe haven’t done historically.” Today, AI is a priority for boards, driving alignment among CIOs, CDOs, and enterprise IT leaders. “Every C-suite executive wants to do more AI,” Duffy said. “But they’ve realised they can’t do that without the data foundation.” Still, some enterprises believe AI can fix poor data quality. The experts warn that this is a mistake. “You can use AI to support data quality,” Duffy said.
    “But you’re not going to use AI to build an MDM solution.”

    What’s the Solution to Frankenstein Data?
    As insurers develop their AI strategies for the next 12 to 24 months, one key principle stood out – success depends less on speed and more on structure. “Go back to the root cause,” Blake told Taylor. “Fix that, and then you can move forward with confidence.” In other words, AI highlights the need for strong data foundations; it doesn’t eliminate that need. For insurers serious about AI transformation, that’s no longer optional—it’s where they must begin.

    Also Watch: From Chaos to Launch: Your Product is Ready, Your Data Isn't

    Key Takeaways
    AI in insurance fails without strong data governance and quality foundations.
    Master Data Management (MDM) is critical for scaling AI across insurance enterprises.
    Fragmented, siloed data is the biggest barrier to AI adoption in insurance.
    Trusted, unified customer and policy data improves AI accuracy and business outcomes.
    AI cannot fix bad data—insurers must modernise data management first.

    Chapters
    00:00 Introduction to AI Readiness in Insurance
    03:08 The Importance of Data Foundations
    06:02 Challenges of Fragmented Data
    09:06 Modernising Data Foundations for AI
    11:56 Real-World Use Cases in Insurance
    15:03 The Role of Master Data Management
    17:56 Aligning Business and Data Strategies
    21:06 Final Thoughts on AI and Data Governance

    For more information, please visit em360tech.com and stibosystems.com.
    To learn more about AI in the MDM space and how Stibo Systems is progressing enterprise analytics intelligently, follow:
    Stibo Systems LinkedIn: @StiboSystems
    Stibo Systems X: @StiboSystems
    Stibo Systems YouTube: @StiboSystemsGlobal
    EM360Tech YouTube: @enterprisemanagement360
    EM360Tech LinkedIn: @EM360Tech
    EM360Tech X: @EM360Tech

    #MasterDataManagement #DataGovernance #AIinInsurance #EnterpriseTech #BigData #DataStrategy #AIReadiness #InsuranceTechnology #CIOInsights #StiboSystems #FrankensteinData

    26 min
  2. Can Your MDM Strategy Survive the Shift to Real-Time AI Decision-Making?

    30 APR

    Can Your MDM Strategy Survive the Shift to Real-Time AI Decision-Making?

    Podcast: Don’t Panic! It’s Just Data
    Guest: Jignesh Patel, Director of Product Strategy at Stibo Systems, and Elsebeth Gundersen Jensen, Product Owner at Nets
    Host: Dr Joe Perez, Data Analytics Expert and Amazon Bestselling Author

    We’re living in an always-on digital economy with no room for data errors. In this episode of the Don’t Panic It’s Just Data podcast, host Dr Joe Perez, Data Analytics Expert and Amazon Bestselling Author, sat down with Jignesh Patel, Director of Product Strategy at Stibo Systems, and Stibo Systems’ customer Elsebeth Gundersen Jensen, Product Owner at Nets. Perez pointed out that even the smallest inconsistency can "ripple completely across an entire operation, instantaneously." This reality is prompting enterprise tech leaders to rethink how they manage, govern, and use data, especially with the rapid growth of AI adoption. The guests deliver a clear message – trusted, real-time data is now a crucial part of business infrastructure.

    Also Watch: From Chaos to Launch: Your Product is Ready, Your Data Isn't

    What Is the Hidden Cost of Untrusted Data?
    For large enterprises, especially those growing through mergers and acquisitions, fragmented data systems are almost unavoidable. Jensen noted that when combining multiple customer portfolios, inconsistencies often arise in even the simplest fields, like organisation numbers formatted differently in various systems. “When you bring in different customer portfolios, you will also get this scattered data picture that you don’t want in a master data management system,” she explained. According to Patel, a lack of trusted data impacts four key areas: customer experience, revenue growth, decision-making, and operational efficiency. Without a unified customer view, enterprises struggle to offer personalised experiences or spot cross-sell opportunities.
    Moreover, analytics based on unreliable data undermine executive confidence and increase compliance risks. These issues are made worse by speed. Drawing on her observations, Jensen told Perez and Patel that modern customers expect contract changes or service interactions to be updated almost instantly. “They don’t want to wait a day,” she stated. “Everything should be faster, better, and accurate.”

    Also Watch: Why is a Customer Data Strategy a Competitive Edge?

    How Are Enterprises Mastering Intelligence?
    Traditionally, Master Data Management (MDM) has focused on creating the “golden record,” a single, reliable version of key business entities like customers or products. While this remains important, Patel believes the idea is changing quickly in the AI era. “MDM is moving beyond data correctness towards what I call mastering intelligence,” he said. “AI systems rely on trusted context—understanding what entities are, how they relate, and the business rules that apply.” This change is part of a larger transformation in enterprise architecture. Decision-making is no longer limited to human-driven dashboards; it is increasingly spreading across applications, analytics platforms, and AI agents acting in real time. In such a setup, inconsistent data does not just create errors; it amplifies them. “AI doesn’t eliminate the need for MDM or data governance. It emphasises it,” stated Patel. For enterprises heavily investing in AI, this insight is vital. Without a strong data foundation, AI models might provide insights but not dependable results. As enterprises move toward AI-driven and even agent-based business models, the need for trusted data will only grow. Patel highlights new questions from the C-suite – How will AI agents find my products? Why isn’t my business being recommended? The answer increasingly depends on structured, high-quality data. “AI success is dependent on trustworthy data,” says Patel.
    “MDM and governance are the foundation for the next generation of intelligent business systems.” For enterprise leaders, the key message is that in the race to implement AI, data trust is not merely a requirement – it is the competitive edge.

    Key Takeaways
    Real-time trusted data is essential for enterprise AI success and operational resilience.
    Poor data quality directly impacts customer experience, revenue growth, and compliance.
    Modern Master Data Management (MDM) is evolving from “golden records” to AI-ready data intelligence.
    Proactive data governance must replace reactive data cleanup to scale in real-time environments.
    A unified data model is the foundation for accurate, consistent, and AI-driven business insights.

    Chapters
    00:00 Introduction to Data Governance and MDM
    02:06 The Shift to Real-Time Data
    05:27 Business Risks of Lacking Trusted Data
    08:20 Growth Through Mergers and Acquisitions
    15:29 The Role of MDM in AI Initiatives
    20:02 Transitioning to Proactive Data Management
    22:01 Advice for CIOs on Managing Product Data

    For more information, please visit em360tech.com and stibosystems.com.

    To learn more about AI in the MDM space and how Stibo Systems is progressing enterprise analytics intelligently, follow:
    Stibo Systems LinkedIn: @StiboSystems
    Stibo Systems X: @StiboSystems
    Stibo Systems YouTube: @StiboSystemsGlobal
    EM360Tech YouTube: @enterprisemanagement360
    EM360Tech LinkedIn: @EM360Tech
    EM360Tech X: @EM360Tech

    #MDM #DataGovernance #EnterpriseAI #DataQuality #TrustedData #AIStrategy #RealTimeData #DigitalTransformation #StiboSystems #TechPodcast
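    Jensen's example of organisation numbers formatted differently across acquired portfolios hints at what golden-record consolidation actually does. A minimal sketch in Python, assuming hypothetical field names and formats (not taken from the episode):

```python
import re

def normalise_org_number(raw: str) -> str:
    """Strip punctuation and whitespace so the same organisation
    number matches regardless of source-system formatting."""
    return re.sub(r"[^0-9]", "", raw)

def build_golden_records(records: list[dict]) -> dict[str, dict]:
    """Merge records from different systems, keyed on the normalised
    organisation number, into one 'golden record' per entity."""
    golden: dict[str, dict] = {}
    for rec in records:
        key = normalise_org_number(rec["org_number"])
        merged = golden.setdefault(key, {"org_number": key})
        for field, value in rec.items():
            if field != "org_number" and value:
                # naive survivorship rule: first non-empty value wins
                merged.setdefault(field, value)
    return golden

# Two portfolios acquired through M&A, formatting the same number differently
portfolio_a = [{"org_number": "12-345-678", "name": "Acme Ltd"}]
portfolio_b = [{"org_number": "12345678", "country": "DK"}]

golden = build_golden_records(portfolio_a + portfolio_b)
```

    Real MDM platforms apply far richer matching and survivorship rules, but the principle is the same: without a shared key and normalisation step, the two portfolios above would look like two different customers.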

    27 min
  3. Why Is the Semantic Layer Critical for Data Governance, Compliance, and AI at Scale?

    20 APR

    Why Is the Semantic Layer Critical for Data Governance, Compliance, and AI at Scale?

    Podcast: Don’t Panic It’s Just Data!
    Guest: Adrian Estala, VP, Field Chief Data & AI Officer, Starburst
    Host: Doug Laney, Research & Advisory Fellow at BARC and Author of Infonomics & Data Juice

    After years of heavy investment in data lakes and warehouses, many enterprises still face a frustrating reality: insights remain slow, fragmented, and hard to trust. In this episode of the Don’t Panic It’s Just Data podcast, host Doug Laney, Research & Advisory Fellow at BARC and Author of Infonomics & Data Juice, is joined by Adrian Estala, VP, Field Chief Data & AI Officer at Starburst. They sat down to discuss why more enterprises are adopting a new architectural approach, the business semantic layer, to speed up AI adoption.

    What’s the Core Issue with Enterprise AI Data?
    The core issue, Estala argues, is not a lack of infrastructure but an inconsistency between how data is organised and how enterprises think. “No one’s really there yet,” he says, reflecting on a decade of backend optimisation. “We don’t know what ‘perfect’ architecture means, especially in the AI age.” The semantic layer, sometimes called a “context layer,” represents a shift from technical complexity to business usability. Typical systems require non-technical users to interpret schemas and pipelines; Starburst instead provides an abstraction that presents data in familiar business terms, along with metadata and governance rules. “If you build it right,” Estala explains, “when a CFO walks in the room and sees their semantic layer, it makes sense to them.” For an enterprise, this is more than a usability improvement. It reduces duplication, eliminates conflicting metrics, and reduces reliance on IT teams for routine analysis. As Laney notes during the discussion, the goal is not to replace existing systems but to make them “that much more accessible” by layering business meaning on top.
    Also Watch: AI Is Replacing BI — Here’s What CIOs Need to Know

    Sovereignty, Governance & the European Reality
    The conversation is even more acute in regions like Europe, where data sovereignty has become a major concern. Regulatory pressure has led enterprises to rethink not only where data is stored but also how it is accessed and shared. Estala describes a federated model where data stays within national boundaries while still being usable globally. Organisations set up local clusters in countries like Switzerland or the United Kingdom, build data products locally, and apply strict rules for what can be shared centrally. “I can decide which data products are approved to be shared,” he says, alluding to compliance mechanisms that ensure sensitive information cannot be traced back to individuals. This creates a system that satisfies both regulators and business leaders. Executives no longer need to worry about jurisdictional complexities; they work with a unified view of data that has already been filtered, governed, and approved. “For them, it just feels like it’s already been brought together,” Estala adds. As AI agents and copilots continue to gain popularity, the discussion also spotlights limitations. One such limitation is trust. Without confidence in the underlying data, even the most advanced AI tools struggle to provide meaningful value. “If they don’t trust the answers, it’s just a cool toy,” Estala says, describing a common pattern where initial excitement fades once users doubt the reliability of outputs. The semantic layer tackles this by embedding governance, lineage, and business rules directly into data products. Starburst helps enterprises clearly define which data is exposed to AI systems and under what conditions, making it easier to explain and justify decisions. Currently, Estala observes, AI mainly speeds up existing workflows instead of transforming them.
    Executives are asking the same questions they always have, but getting answers faster and from broader datasets. The real change, he suggests, will come when trust allows leaders to ask entirely new questions and rethink decision-making.

    How to Drive Business Value in 90 Days
    For CIOs and CDOs eager to move past experimentation, the Field Chief Data & AI Officer outlines a focused, business-led approach. Rather than launching large-scale transformations, he suggests starting with a single domain and building momentum from there. The first phase focuses on collaboration, bringing business stakeholders into the design of the semantic layer and defining the data products that matter most. “We design it with the business team in the room,” he explains, stressing ownership from the start. The next stage shifts to enablement, as teams begin to use and expand these data products themselves. This is where self-service takes root, reducing dependence on IT and promoting more exploratory use of data. By the final phase, enterprises are ready to introduce AI agents on top of a trusted foundation. At that stage, technology becomes almost secondary. “Once you get to a semantic layer that you trust, adding an agent is easy,” Estala says. As enterprises adopt AI at larger scales, their competitive edge will come less from algorithms and more from how effectively they organise, govern, and contextualise their data. In this sense, the semantic layer is quickly becoming the backbone of modern, AI-driven decision-making.

    Key Takeaways
    Semantic layers make governed data accessible for enterprise AI.
    Data sovereignty drives federated, compliant data architectures.
    Trusted AI needs governed, metadata-rich data products.
    Semantic layers can deliver business value within 90 days.
    Virtual layers reduce duplication and speed up analytics.
    Chapters
    00:00 The Shift to Business Semantic Layers
    08:02 Data Sovereignty and Governance in Modern Strategies
    13:08 Foundational Capabilities for AI Systems
    18:11 AI Agents and Decision Making
    23:04 Practical Steps for Implementing Semantic Layers

    To learn more about how data products and AI agents are changing enterprise analytics, follow:
    Starburst LinkedIn: @Starburst
    Starburst X: @starburstdata
    Starburst YouTube: @StarburstData
    EM360Tech YouTube: @enterprisemanagement360
    EM360Tech LinkedIn: @EM360Tech
    EM360Tech X: @EM360Tech

    Stay connected for more expert insights, podcast episodes, and enterprise data strategy discussions.

    #SemanticLayer #DataGovernance #EnterpriseAI #DataStrategy #DataArchitecture #AIatScale #Compliance #DataSovereignty #ContextLayer #AIagents #DataProducts #SelfServiceAnalytics #CIO #CDO #Starburst #AdrianEstala #DougLaney #DontPanicItsJustData #EM360Tech #TechPodcast
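    The "business semantic layer" Estala describes — business terms mapped onto physical sources, carrying metadata and governance rules — can be illustrated with a toy mapping in Python. Every term, source path, and rule below is invented for illustration, not taken from Starburst's product:

```python
# A toy semantic layer: business-friendly terms mapped onto physical
# columns, with definition, ownership, and sharing metadata attached.
SEMANTIC_LAYER = {
    "quarterly_revenue": {
        "source": "finance_db.gl_entries.amount_usd",
        "definition": "Sum of posted GL revenue entries per quarter",
        "owner": "Finance",
        "shareable": True,   # approved for cross-border sharing
    },
    "customer_churn_rate": {
        "source": "crm.accounts.status",
        "definition": "Share of accounts closed in trailing 12 months",
        "owner": "Sales Ops",
        "shareable": False,  # contains personal data, stays local
    },
}

def resolve(term: str) -> str:
    """Translate a business term into its physical source column,
    so a CFO never has to see the schema underneath."""
    return SEMANTIC_LAYER[term]["source"]

def exportable_terms() -> list[str]:
    """Only terms approved for sharing leave the local cluster,
    mirroring the federated-sovereignty model described above."""
    return [t for t, meta in SEMANTIC_LAYER.items() if meta["shareable"]]
```

    The point of the sketch is the separation of concerns: consumers query `quarterly_revenue`, while governance decisions (who owns it, whether it may be shared) travel with the term rather than with each downstream query.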

    27 min
  4. AI Is Replacing BI — Here’s What CIOs Need to Know

    8 APR

    AI Is Replacing BI — Here’s What CIOs Need to Know

    Podcast: Don’t Panic! It’s Just Data
    Guest: Adrian Estala, VP, Field Chief Data & AI Officer, Starburst
    Host: Shubhangi Dua, Podcast Producer, Host and B2B Tech Journalist, EM360Tech

    "AI is replacing BI,” stated Adrian Estala, VP and Field Chief Data & AI Officer at Starburst. When Shubhangi Dua, host of Don’t Panic, It’s Just Data, put the statement back to Estala, the tension was intentional. In enterprise tech, few systems are as ingrained as business intelligence (BI) dashboards. For two decades, they have been the common language of decision-making – static reports, polished charts, and visuals that meet compliance standards. However, Estala insists that the change isn't about removing dashboards. It's about staying relevant. “BI isn’t going away,” he explains. “It’s evolving.”

    How Is AI Replacing BI?
    The transformation begins with something deceptively simple – a business semantic layer. Instead of forcing executives to understand data through IT-designed schemas, enterprises are creating context-rich data products using business language. A CFO sees finance terms, not table joins. A loans team sees portfolios, not pipelines. Once this foundation is established, teams can plug the same governed, reusable data product into their BI tools, improving both performance and consistency. The growth doesn’t stop there; businesses typically ask for more. When a conversational agent is added next to a legacy dashboard, using the same trusted data product, behaviour changes quickly. Leaders start asking questions in natural language, exploring trends they have never charted before. They make forecasts in seconds and adjust their thinking on the go. What was once a static reporting experience becomes an interactive analytical dialogue. In one major bank, Estala recalls, a CEO challenged himself to avoid opening a dashboard for two weeks. He didn’t need to; the agent handled everything for him.
    Also Watch: Are You Scaling Intelligence — or Just Scaling Errors?

    Takeaways
    AI is replacing BI, but it's more about evolution than replacement.
    Organisations are moving towards data products for better analytics.
    Engaging business teams early is crucial for successful AI implementation.
    Conversational agents are transforming how teams interact with data.
    Data quality and governance are essential in the transition to AI.
    Business semantic layers help bridge the gap between IT and business needs.
    Organisations can achieve significant impact with AI in a short time.
    Don't wait for perfect architecture; start with a Pathfinder approach.
    Business teams can drive innovation when they understand their data.
    The future of data engagement lies in combining AI with traditional BI tools.

    Chapters
    00:00 The Evolution of BI to AI
    03:11 Understanding AI's Role in Business Intelligence
    14:44 Navigating the Transition to AI
    20:03 Ensuring Data Quality and Governance
    24:44 The Future of Data Engagement

    To learn more about how data products and AI agents are changing enterprise analytics, follow:
    Starburst LinkedIn: @Starburst
    Starburst X: @starburstdata
    Starburst YouTube: @StarburstData
    EM360Tech YouTube: @enterprisemanagement360
    EM360Tech LinkedIn: @EM360Tech
    EM360Tech X: @EM360Tech

    Stay connected for more expert insights, podcast episodes, and enterprise data strategy discussions.

    #AI #BI #AIvsBI #AIAgents #BusinessIntelligence #DataProducts #EnterpriseAnalytics #DataStrategy #Starburst #DontPanicItsJustData #AdrianEstala #ShubhangiDua #SemanticLayer #CIO #CDO #TechPodcast #DataGovernance #Dashboards

    29 min
  5. Why Data Quality Makes or Breaks AI Success in Supply Chain and Procurement

    24 MAR

    Why Data Quality Makes or Breaks AI Success in Supply Chain and Procurement

    We’re living in an age where new technology promises to improve everything: faster decisions, smarter workflows, and better outcomes. But behind that promise lies a quieter reality – many organisations have the ambition, but readiness often lags behind. In this episode of Don’t Panic! It’s Just Data, host Christina Stathopoulos, Founder of Dare to Data, speaks with Pascal Bensoussan, Chief Product Officer at Ivalua, about the growing excitement around AI and the reality many organisations face when trying to use it. Focusing on procurement, the conversation explores why many AI initiatives struggle to move beyond early stages and what’s needed to turn that ambition into real, measurable value.

    Data: The Backbone of AI
    Successful AI depends on high-quality, unified data. Fragmented sources, unclean data, and siloed systems make it difficult to build reliable AI applications. As Bensoussan explains: “Fix your data foundation. Without that, you can’t get started with AI. Don’t jump into an AI frenzy hoping it will sort itself out. First, you need a unified transactional and master data model that captures relationships, ensures semantic coherence, and creates a system of truth you can trust.” A unified data model enables AI to work effectively, increasing both its success rate and depth. Organisations should start with use cases that provide tangible value rather than trying to do everything at once. Governance frameworks, monitoring, and maintenance are critical to ensure reliability, security, and meaningful outcomes. Employee trust is another key factor. Users need confidence in AI outputs, and organisations must address scepticism about how AI might impact roles. Building that trust often requires broader cultural change, which can be one of the hardest barriers. Many teams are used to traditional methods and resist adopting new technologies.
    By combining solid data foundations with practical, focused use cases and a clear strategy, companies can guide teams through this change, ensuring AI initiatives don’t stall and instead deliver measurable results.

    Understanding AI Ambition vs. AI Readiness
    Ambition and readiness are not the same. AI ambition refers to the enthusiasm organisations have for integrating AI into operations, driven by the promise of efficiency and insight. AI readiness, on the other hand, measures whether an organisation can actually deploy AI effectively at scale. According to MIT research, 95 per cent of enterprise AI projects fail to move from proof of concept to production. Bensoussan calls this the “GenAI divide”: “The ambition is there because the promise is incredible, but the readiness is often missing because often the foundation is cracked.” Without a clear strategy or roadmap, even organisations with abundant resources can struggle to implement AI successfully. Starting with targeted, achievable use cases helps teams gain confidence, build trust, and generate measurable results before scaling more widely.

    AI in Procurement
    Procurement provides a unique lens for understanding AI adoption. Positioned at the intersection of data, compliance, risk, and finance, it offers significant opportunities but also considerable complexity. One major challenge is that unstructured data like contracts, risk assessments, and supplier communications must be integrated with transactional records, a process that is often time-consuming and difficult. Fragmented systems only add to the challenge, limiting AI’s ability to deliver meaningful, actionable insights. Bensoussan emphasises that seeing the entire process from supplier discovery to payment is essential. A comprehensive view ensures that AI-driven insights are reliable, actionable, and fully traceable, allowing organisations to understand why specific decisions are made and to make more strategic choices.
    AI in procurement is not about replacing humans; it is about augmenting them. By automating mundane tasks like data retrieval and report generation, professionals can focus on higher-value work, strategic thinking, and deeper evaluation. AI also enables richer insights, helping teams develop more effective strategies and make informed decisions. By addressing data challenges, building trust, and starting with targeted use cases, organisations can turn AI ambition into measurable value. With the right preparation and focus, AI can strengthen procurement operations, enhance decision-making, and unlock new levels of efficiency.

    For more information, visit www.ivalua.com

    Takeaways
    AI ambition vs. readiness in organisations
    Barriers to AI adoption: culture, strategy, data, trust, governance
    Importance of unified data models for AI effectiveness
    Practical AI applications in procurement: sourcing, contracts, invoicing
    Human-AI collaboration and the future of work in procurement

    Chapters
    00:00 AI Ambition vs. Readiness
    05:02 The Procurement Landscape and AI Adoption
    09:10 Data Foundations for AI Success
    13:03 Unified Data Models in Procurement
    16:43 The Human Element in AI Integration
    25:57 Real-World Applications of AI Agents
    32:22 Key Takeaways for Leaders in AI Adoption

    32 min
  6. Revenue-Ready Data Is Not Magic, It’s Engineering

    19 MAR

    Revenue-Ready Data Is Not Magic, It’s Engineering

    Artificial intelligence is everywhere right now – in boardrooms, strategy meetings, and product roadmaps. Organisations are investing heavily in machine learning, automation, and generative AI, all with the same promise: unlock new revenue and work smarter. In the latest episode of the Don’t Panic It’s Just Data podcast, EM360Tech’s Trisha Pillay explores this challenge with Chief Technology Officer Paul Brownell and Sergio Morales, Data and AI Engineering Leader, from Growth Acceleration Partners. Their discussion unpacks why so many AI initiatives fail to translate into revenue, and why the real starting point isn’t the model itself but the data, governance, and engineering practices that make meaningful outcomes possible. But here’s the uncomfortable truth: many AI strategies look powerful on paper, while their real financial impact is often unclear. This disconnect, called the revenue data gap, highlights an issue many organisations overlook. AI doesn’t create value on its own; without strong data foundations, governance, and engineering discipline, even the most ambitious AI strategy will struggle to deliver measurable results.

    The Revenue Data Gap in Enterprise AI
    For many organisations, the excitement surrounding AI creates a tendency to jump straight into experimentation. Teams begin exploring tools, deploying models, or building prototypes without first defining how those initiatives will produce tangible business outcomes. According to Brownell, this is where the first major disconnect appears. Many enterprises approach AI with what he describes as a “shiny object” mentality. They recognise that AI is powerful, but they have not yet defined where the value will actually come from. As a result, organisations may launch projects that generate interesting insights or technical demonstrations but fail to translate into revenue growth or cost reduction.
    Brownell emphasises the importance of establishing a data hypothesis before pursuing any AI initiative. A data hypothesis outlines the relationship between the data an organisation holds and the business value it expects to extract from it. In practical terms, it asks a simple but critical question: if we analyse this data, what decision or action will it enable, and how will that affect revenue? Without this hypothesis, organisations often find themselves exploring large volumes of data without a clear objective. Some companies may not even know where their most valuable data resides or whether it is reliable enough to support analytical models. Data quality, therefore, becomes another major component of the revenue data gap.

    Engineering the Foundations for AI That Delivers Business Impact
    While AI is often portrayed as a revolutionary technology, Morales points out that the engineering challenges behind it are not entirely new. Many of the same principles that guided earlier technology transformations, such as cloud adoption or microservices architecture, still apply to modern AI deployments. In fact, Morales argues that organisations struggling with AI today are often experiencing the consequences of earlier architectural decisions. Systems built years ago were rarely designed with advanced analytics or AI in mind. As a result, critical data may be trapped inside legacy applications, scattered across departments, or stored in formats that make integration difficult. These limitations become highly visible once organisations attempt to deploy AI at scale. Another major challenge lies in what Morales describes as the velocity mandate. Businesses increasingly expect technology teams to deliver results quickly, particularly when AI initiatives are positioned as strategic priorities. However, building the infrastructure required for reliable AI systems can take significant time and effort.
Morales explains that organisations do not necessarily need to choose between speed and stability. Instead, they can adopt a pragmatic approach that focuses on incremental progress. This strategy allows organisations to create early successes that build confidence across the business. Once stakeholders see tangible results from initial projects, it becomes easier to secure the support and investment needed for broader data transformation efforts.

Why Data Contracts and Governance Are Critical to AI Success

One of the most practical tools discussed is the concept of data contracts. Though less flashy than AI models, they ensure data flows reliably between systems. At their core, data contracts define a dataset’s structure and expectations: schemas, formats, and validation rules. Morales describes them as a way to embed governance directly into data pipelines, automatically catching violations before they disrupt downstream processes. This prevents silent errors that can skew analytics and decisions.

Data contracts aren’t a cure-all, though. Their effectiveness relies on clear organisational ownership and communication around each dataset. In large companies, data often comes from multiple systems managed by different teams, each with distinct priorities. Brownell explains that data contracts create a shared framework for collaboration, letting teams integrate and analyse information confidently. Implementation can be gradual: start with critical datasets for a specific use case and expand governance as needed. This iterative approach improves data reliability without requiring a full infrastructure overhaul.

What’s Next for AI?

While AI tools continue to evolve, the fundamentals of data management remain unchanged. Organisations must understand their data, govern it effectively, and design infrastructure that allows information to move reliably between systems. Closing the revenue data gap, therefore, requires more than deploying new AI models.
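To make the data-contract idea concrete, here is a minimal, illustrative sketch of what the episode describes: a declared schema plus validation rules, checked before records flow downstream. All names here (`DataContract`, `policy_contract`, the field names) are assumptions for the sketch, not from any specific product discussed in the episode.

```python
# Minimal sketch of a data contract: a declared schema plus validation
# rules, checked before records flow downstream. Names are illustrative.
from dataclasses import dataclass, field


@dataclass
class DataContract:
    """Declares the structure and expectations for one dataset."""
    required_fields: dict                       # field name -> expected type
    rules: list = field(default_factory=list)   # (description, predicate) pairs

    def validate(self, record: dict) -> list:
        """Return a list of violations; an empty list means the record passes."""
        violations = []
        for name, expected_type in self.required_fields.items():
            if name not in record:
                violations.append(f"missing field: {name}")
            elif not isinstance(record[name], expected_type):
                violations.append(f"{name}: expected {expected_type.__name__}")
        # Only apply business rules once the schema itself is satisfied.
        if not violations:
            for description, predicate in self.rules:
                if not predicate(record):
                    violations.append(f"rule failed: {description}")
        return violations


# A hypothetical contract for insurance policy records.
policy_contract = DataContract(
    required_fields={"policy_id": str, "premium": float},
    rules=[("premium must be positive", lambda r: r["premium"] > 0)],
)
```

In a pipeline, records with a non-empty violation list would be quarantined or rejected before reaching downstream analytics, which is the "catch violations early" behaviour Morales describes.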
It demands a strategic approach that begins with clear business objectives, continues through data engineering practices, and is reinforced by governance frameworks such as data contracts. If you would like to learn more, visit: https://www.growthaccelerationpartners.com/

Chapters
00:00 Introduction to AI Ambitions and Revenue Gaps
02:31 Understanding the Revenue Data Gap
05:47 Challenges of Legacy Architecture in AI
09:11 Closing the Revenue Data Gap
12:29 The Velocity Mandate in AI Implementation
16:42 Strategic and Technical Alignment for AI
18:31 Engineering Considerations for AI Initiatives
22:03 The Role of Data Contracts in AI Success
28:55 Practical Takeaways for AI Implementation

Takeaways
The revenue data gap is a common challenge that organisations face when implementing AI.
It’s crucial to define a clear data hypothesis and ensure data quality to drive measurable business impact.
Data contracts work only if teams know who owns datasets, how to maintain them, and how changes are communicated.
Balancing the velocity mandate with governance is key.
Engaging stakeholders and mapping the value chain ensures that AI initiatives are aligned with business needs, ultimately leading to revenue growth.

    30 min
  7. How to Navigate the Trust Paradox in AI Adoption: Insights from Informatica’s 2026 CDO Report

    18 MAR

    How to Navigate the Trust Paradox in AI Adoption: Insights from Informatica’s 2026 CDO Report

Enterprise AI budgets are climbing, but the data foundations beneath them remain uneven. In this episode of Don’t Panic, It’s Just Data, Kevin Petrie, VP of Research at BARC, and Nathan Turajski, Senior Director, Product Marketing at Informatica, examine the findings of the CDO Insights 2026 report, which argues that executive confidence in AI may be outpacing organisational readiness. The study centres on what it describes as a growing “trust paradox”: Chief Data Officers are accelerating AI initiatives even as data quality, governance maturity, and AI literacy struggle to keep up.

The Trust Paradox

The report exposes a striking disconnect. Turajski points out that while around 65 per cent of data leaders believe employees trust the data powering AI, 75 per cent say upskilling in data and AI literacy is essential. In other words, confidence is high, but readiness is lagging. This is the trust paradox: employees increasingly rely on AI outputs, while data leaders remain cautious about the quality, governance, and lineage behind those results. The risk is not scepticism but overconfidence. When AI-generated answers are accepted without scrutiny, flawed data can quietly scale poor decisions. For CDOs, the challenge is cultural as much as technical.

AI Adoption Soars While Data Readiness Lags

The harsh reality is that AI experimentation is no longer confined to innovation teams. It is spreading across marketing, operations, finance, and customer experience. As a result, scaling from pilot to production requires more than a model and a use case. To make AI work at scale, organisations need a data strategy that ensures consistency across domains, clear and transparent governance, measurable business impact, and sustainable management of their data assets.
Data Quality and Governance

Turajski explains that organisations are increasingly investing in data management and governance, with 86 per cent expanding data initiatives and 39 per cent prioritising upskilling. Metadata integration also helps unify distributed environments, providing the context AI needs to deliver reliable, trustworthy outputs.

Organisations need to remember that AI systems amplify whatever they are given: if inputs are inconsistent, incomplete, or poorly defined, outputs will reflect those weaknesses, often at scale. Data quality challenges frequently arise from duplicated or conflicting records, inconsistent definitions across business units, poor lineage visibility, and limited ownership accountability.

For example, a retailer might describe the same product in multiple ways across systems. Without standardisation, AI tools trained on that data produce fragmented insights, and when this occurs across thousands of products and regions, the distortions multiply. The takeaway from data leaders is clear: AI performance cannot be separated from disciplined, high-quality data management.

Upskilling and Scaling AI Adoption

Both Petrie and Turajski stress that technology alone won’t close the gap. Upskilling employees in data literacy, AI fluency, and governance awareness ensures AI experimentation evolves into measurable, real-world results, from improved customer experience to faster, more accurate analytics. The 2026 CDO Insights findings position data leaders at the centre of AI transformation. Their mandate extends beyond infrastructure to trust architecture. The trust paradox isn’t a reason to slow down innovation. It’s a reminder that lasting results require as much discipline as ambition. In 2026, the organisations that succeed won’t be the fastest to adopt new technologies, but those that build the most reliable data foundations to support them.
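The retailer scenario above, the same product described differently across systems, can be caught with a very simple consistency check. This is an illustrative sketch only; the record layout and field names are assumptions, and real master data management tools do far more (fuzzy matching, survivorship rules, lineage).

```python
# Illustrative sketch: flag products whose descriptions conflict across
# source systems, the kind of inconsistency that multiplies in AI outputs.
# Record layout and field names are assumptions for the example.
from collections import defaultdict


def find_conflicts(records):
    """Group records by product_id and report ids with differing descriptions."""
    seen = defaultdict(set)
    for rec in records:
        # Light normalisation so trivial case/whitespace differences don't count.
        seen[rec["product_id"]].add(rec["description"].strip().lower())
    return {pid for pid, descs in seen.items() if len(descs) > 1}


records = [
    {"product_id": "SKU-1", "description": "Blue Widget"},
    {"product_id": "SKU-1", "description": "Widget, blue"},  # genuine conflict
    {"product_id": "SKU-2", "description": "Red Widget"},
    {"product_id": "SKU-2", "description": "red widget"},    # same after normalising
]
```

Running `find_conflicts(records)` here flags only `SKU-1`: the two `SKU-2` entries collapse to one description after normalisation, while `SKU-1` genuinely disagrees across systems, which is exactly the kind of record an MDM process would route for resolution.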
To learn more about this, visit informatica.com

Takeaways
The trust paradox highlights a disconnect between employee confidence in AI and leadership's caution.
Data leaders recognise the need for upskilling in data and AI literacy.
Building a trusted context is essential for effective AI adoption.
The vendor landscape for data management is complex and requires careful navigation.
AI is being used to enhance customer experience and loyalty.
Measurable results from AI adoption are becoming a priority for organisations.
Data governance must keep pace with AI use to mitigate risks.
Successful organisations are leveraging unified data management platforms to drive AI value.

Chapters
00:00 Introduction to the CDO Insights Report
03:13 Understanding the Trust Paradox in AI Adoption
08:34 Building Trusted Context for AI
14:11 The Importance of Data Quality and Completeness
20:28 Navigating the Vendor Landscape for Data Management
23:09 From Experimentation to Measurable Results
27:38 Recommendations for CDOs and CISOs

    28 min
  8. Are You Scaling Intelligence — or Just Scaling Errors?

    10 MAR

    Are You Scaling Intelligence — or Just Scaling Errors?

What if the real advantage in AI lies not in having more data, but in having less? In this episode of the Don’t Panic, It’s Just Data podcast, host Shubhangi Dua, Podcast Producer and B2B Tech Journalist at EM360Tech, sits down with Herb Blecher, Research Director of Data and Analytics at Enterprise Management Associates (EMA). This conversation challenges a common belief in enterprise tech: that gathering everything ensures insight. Alluding to the modern-day AI craze, Blecher cautions the enterprise audience that just because you can access vast amounts of unstructured data doesn’t mean you should.

What Is the AI Gold Rush and Why Is It Risky?

Unstructured data now fills the enterprise tech space: voice calls, financial documents, customer chats, images, logs, and emails. “With AI and machine learning, we’ve finally figured out how to access and organise it,” Blecher says. However, he offers a stark reality check: AI doesn’t just increase insight; it increases error. When machines transition from calculating numbers to interpreting tone, images, and incomplete context, the chances for mistakes rise significantly. A blurry comma in a financial document, a misread abbreviation, a misplaced decimal. In low-stakes situations, this is inconvenient. In finance or healthcare, it can be disastrous.

The danger lies not just in faulty outputs, but in confidently flawed outputs. AI doesn’t hesitate as humans do. It doesn’t say, “This seems off.” It fills in gaps, often convincingly. That confidence, Blecher argues, makes governance essential. The real issue companies face isn’t a lack of data; it’s a lack of careful thought.

Also Read: AI is Making “As-Code” Inevitable

Why Human-in-the-Loop Is Imperative

Governance over hype is the key takeaway from the conversation. When AI both generates and consumes data, a new kind of risk emerges. In the past, including during financial troubles Blecher experienced at first hand, human judgment acted as the final safeguard.
Now, companies risk losing that safeguard in their rush to automate. Dua puts it simply: humans are the leaders; AI is the helper. The enterprises that succeed with unstructured data aren’t the fastest; they are the most thoughtful. They clearly define their questions first, build feedback loops, monitor continuously, and foster a culture of scepticism. What do the failures look like? Often, ambitious automation without safeguards, from flawed document scanning to high-profile AI rollouts like McDonald's testing automated drive-through ordering, where conversational nuance proved more challenging than anticipated. Tone, ambiguity, and context remain distinguishing human areas.

What Happens Five Years From Now?

Will AI solve data quality issues? No, it will not. Blecher believes data quality problems are here to stay. “What will change is the range of questions we try to answer. As AI develops, companies won’t stop dealing with edge cases; they’ll broaden the edge.” The future doesn’t promise easy automation. It promises increased capability, increased capacity, and increased responsibility. For CFOs and IT leaders investing in AI-driven data strategies, EMA’s Research Director of Data and Analytics has a final message:

Don’t confuse volume with value.
Don’t replace governance with optimism.
Don’t give up scepticism in a gold rush.

AI’s potential is huge. But more data doesn’t always mean better data. In a world eager to gather everything, restraint could be the most radical strategy of all.

Key Takeaways
More data doesn’t guarantee better insights; clarity of purpose matters more than volume.
AI doesn’t just scale intelligence; it scales errors if governance is weak.
Unstructured data is powerful, but without context and oversight it becomes a liability.
Human judgment remains essential, especially in high-stakes domains like finance and healthcare.
The most successful organisations move deliberately, not impulsively, in the AI gold rush.
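The human-in-the-loop safeguard discussed in the episode can be sketched as a simple confidence gate: low-confidence machine outputs are routed to a person instead of being applied automatically. This is a hypothetical illustration; the threshold value and field names are assumptions, not anything prescribed by Blecher or EMA.

```python
# Illustrative human-in-the-loop gate: model outputs below a confidence
# threshold are routed to a person instead of being applied automatically.
# The 0.9 threshold and the record fields are assumptions for the sketch.
def route(extraction: dict, threshold: float = 0.9) -> str:
    """Return 'auto' for high-confidence results, 'human_review' otherwise."""
    return "auto" if extraction["confidence"] >= threshold else "human_review"


# A confidently-read figure flows straight through; an ambiguous one
# (say, a blurry decimal separator in a scanned document) goes to a human.
clear_figure = {"value": "1,250.00", "confidence": 0.97}
blurry_figure = {"value": "1.250,00", "confidence": 0.62}
```

The important design choice is that the gate defaults to human review whenever the model is unsure, preserving exactly the "final protection" the conversation argues automation tends to erode.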
Chapters
00:00 Introduction to Data Quality and Its Importance
02:43 The Rise of Unstructured Data
05:42 Challenges in Ensuring Data Quality
08:46 AI's Role in Data Quality Management
11:30 Human Oversight in AI and Data Quality
14:47 Opportunities in Data Quality
17:32 Governance and Regulation in AI
20:25 Real-World Applications and Case Studies
23:27 Future of Data Quality and AI
26:18 Key Takeaways for Leaders

About Herb Blecher
Herb leads EMA's Data and Analytics practice. He brings more than two decades of experience building solutions across financial services, data product development, and enterprise analytics. His perspective is shaped by leading national data initiatives for U.S. mortgage servicers and government agencies, as well as driving product innovation and strategy in fast-moving technology environments. Herb's research spans enterprise data and analytics, including data architecture and platform modernisation, analytics and integration, governance, and AI/ML platforms.

#AI #DataAnalytics #TechPodcast #B2BTech #DataQuality #UnstructuredData #AIGoldRush #HumanInTheLoop #AICorporate #HerbBlecher #EMAPartners #CFOs #ITLeaders #DataStrategy #DontPanicItsJustData #EM360Tech #PodcastClips #DataInsights

    28 min


