Don't Panic! It's Just Data

EM360Tech

Not only do many businesses have more data than they know what to do with, but they also often struggle to gain insights from some of the most valuable data in their possession, leaving many of their crucial data assets unused. Whether it's issues with data quality, visualization, or management, getting lost in the sea of enterprise data in your possession can make it impossible to make smart, data-driven decisions that improve your business. The "Don't Panic! It's Just Data" podcast delves deep into the power of enterprise data. From groundbreaking vendor solutions to expert-backed best practices for making the most of your data assets, join us as we gather insights from leading tech vendors and professionals who depend on data daily.

  1. Are You Scaling Intelligence — or Just Scaling Errors?

    4 DAYS AGO

    Are You Scaling Intelligence — or Just Scaling Errors?

    What if the real advantage in AI lies not in having more data, but in having less? In this episode of the Don’t Panic, It’s Just Data podcast, host Shubhangi Dua, Podcast Producer and B2B Tech Journalist at EM360Tech, sits down with Herb Blecher, Research Director of Data and Analytics at Enterprise Management Associates (EMA). The conversation challenges a common belief in enterprise tech: that gathering everything guarantees insight. Alluding to the modern-day AI craze, Blecher cautions the enterprise audience that just because you can access vast amounts of unstructured data doesn’t mean you should.

    What Is the AI Gold Rush, and Why Is It Risky?

    Unstructured data now fills the enterprise tech space: voice calls, financial documents, customer chats, images, logs, and emails. “With AI and machine learning, we’ve finally figured out how to access and organise it.” However, Blecher offers a stark reality check. AI doesn’t just increase insight; it increases error. When machines move from calculating numbers to interpreting tone, images, and incomplete context, the chances for mistakes rise significantly: a blurry comma in a financial document, a misread abbreviation, a misplaced decimal. In low-stakes situations, this is inconvenient. In finance or healthcare, it can be disastrous. The danger lies not just in faulty outputs, but in confidently flawed outputs. AI doesn’t hesitate as humans do. It doesn’t say, “This seems off.” It fills in gaps, often convincingly. That confidence, Blecher argues, makes governance essential. The real issue companies face isn’t a lack of data; it’s a lack of careful thought.

    Also Read: AI is Making “As-Code” Inevitable

    Why Human-in-the-Loop Is Imperative

    Governance over hype is the key takeaway from the conversation. AI that generates and consumes data at the same time creates a new situation. In the past, including the financial troubles Blecher experienced directly, human judgment acted as the final protection.
    Now, companies risk losing that safeguard in their rush to automate. Dua puts it simply: humans are leaders; AI is the helper. The enterprises that succeed with unstructured data aren’t the fastest; they are the most thoughtful. They clearly define their questions first, build feedback loops, monitor continuously, and foster a culture of scepticism. What do the failures look like? Often, ambitious automation without safeguards, from flawed document scanning to high-profile AI rollouts such as McDonald's testing automated drive-through ordering, where conversational nuance proved more challenging than anticipated. Tone, ambiguity, and context remain distinctly human territory.

    What Happens Five Years From Now?

    Will AI solve data quality issues? No. Blecher believes data quality problems are here to stay. “What will change is the range of questions we try to answer. As AI develops, companies won’t stop dealing with edge cases; they’ll broaden the edge.” The future doesn’t promise easy automation. It promises increased capability, increased capacity, and increased responsibility. For CFOs and IT leaders investing in AI-driven data strategies, EMA’s Research Director of Data and Analytics has a final message:

    Don’t confuse volume with value.
    Don’t replace governance with optimism.
    Don’t give up scepticism in a gold rush.

    AI’s potential is huge. But more data doesn’t always mean better data. In a world eager to gather everything, restraint could be the most radical strategy of all.

    Key Takeaways

    - More data doesn’t guarantee better insights; clarity of purpose matters more than volume.
    - AI doesn’t just scale intelligence; it scales errors if governance is weak.
    - Unstructured data is powerful, but without context and oversight it becomes a liability.
    - Human judgment remains essential, especially in high-stakes domains like finance and healthcare.
    - The most successful organisations move deliberately, not impulsively, in the AI gold rush.
    Chapters

    00:00 Introduction to Data Quality and Its Importance
    02:43 The Rise of Unstructured Data
    05:42 Challenges in Ensuring Data Quality
    08:46 AI's Role in Data Quality Management
    11:30 Human Oversight in AI and Data Quality
    14:47 Opportunities in Data Quality
    17:32 Governance and Regulation in AI
    20:25 Real-World Applications and Case Studies
    23:27 Future of Data Quality and AI
    26:18 Key Takeaways for Leaders

    About Herb Blecher

    Herb leads EMA's Data and Analytics practice. He brings more than two decades of experience building solutions across financial services, data product development, and enterprise analytics. His perspective is shaped by leading national data initiatives for U.S. mortgage servicers and government agencies, as well as driving product innovation and strategy in fast-moving technology environments. Herb's research spans enterprise data and analytics, including data architecture and platform modernisation, analytics and integration, governance, and AI/ML platforms.

    #AI #DataAnalytics #TechPodcast #B2BTech #DataQuality #UnstructuredData #AIGoldRush #HumanInTheLoop #AICorporate #HerbBlecher #EMAPartners #CFOs #ITLeaders #DataStrategy #DontPanicItsJustData #EM360Tech #PodcastClips #DataInsights

    28 min
  2. Is AI Analytics the Missing Link Between Business Users and Data Teams?

    30 JAN

    Is AI Analytics the Missing Link Between Business Users and Data Teams?

    For years, enterprises have discussed data democratisation as if it were an inevitable end goal. The assumption was that turning on dashboards and training the business would mean insight followed naturally. But according to Barry McCardel, Co-Founder and CEO of Hex Technologies, the reality has been far more complicated. In a recent episode of the Don’t Panic, It’s Just Data podcast, McCardel joined host Kevin Petrie, VP Research and Head of Data Management at BARC, to talk about why access alone has never been enough. He also discussed how artificial intelligence (AI) is forcing the analytics community to rethink the purpose of data. The conversation dives into a familiar issue: how can organisations empower non-technical users without compromising data trust or overwhelming the technical teams responsible for it? “We’ve spent a decade pretending the problem was solved by self-service,” McCardel says. “But what we actually did was move complexity around instead of removing it.” As AI becomes part of analytics platforms, that complexity is finally being addressed, along with long-standing beliefs about roles, ownership, and teamwork.

    Addressing the Myth of Data Democratisation

    Tracing many of the analytics issues organisations face today, McCardel points to early self-service BI, which promised that business users could explore data on their own. This was supposed to free analysts and engineers to focus on more important tasks. In reality, the outcome often included duplicated logic, inconsistent metrics, and a widening trust gap between teams. “Access without context is chaos,” McCardel tells Petrie. “If everyone can answer questions, but everyone answers them differently, you haven’t democratized anything; you’ve just created noise.” This issue has grown more urgent as organisations expand. Different roles, from data engineers and analysts to data scientists and business stakeholders, approach data with distinct goals and skills.
    Traditional tools forced everyone into the same interfaces, often designed for one group while ignoring the needs of the others. Petrie notes that many companies responded by adding layers of control, but this approach had drawbacks. Stricter guidelines slowed insight generation and pushed business users back into reliance on centralised teams. McCardel argues that the main problem isn’t a lack of governance or tools but a lack of shared understanding. “We’ve treated analytics like a handoff,” he explains. “The data team builds it, the business consumes it. That model doesn’t work when questions are fluid, and decisions are continuous.” He believes AI is revealing the limits of that model and providing a path forward.

    Also Watch: “Data Teams Suffer from Fragmentation” | Charles Schaefer @ Big Data LDN 2025

    AI is the Bridge, Not the Shortcut

    While much of the industry conversation about AI in analytics focuses on automation and natural language querying, the CEO of Hex is cautious about viewing AI as a quick fix. “If AI just gives you faster wrong answers, that’s not progress,” he points out. Instead, he presents AI as a bridge that helps different roles collaborate in the same analytical space without flattening their expertise. In this view, AI helps translate: it turns business questions into structured analysis, brings relevant context to the surface, and makes assumptions clear instead of hidden in code. This is where McCardel sees platforms like Hex playing an important role. Instead of separating technical and non-technical users into different tools, Hex is designed to support collaboration within a single environment. Analysts can create rigorous, transparent logic, while business users can interact with the results, ask follow-up questions, and understand how conclusions were reached. “The goal isn’t to turn everyone into a data scientist,” McCardel clarifies.
    “It’s to let each person contribute at their level without breaking the chain of trust.” Trust, he stresses, is essential in modern analytics. As more insights come from AI, organisations will need clear lineage, better validation, and shared visibility into how answers are created. Black-box analytics may be quick, but they are also fragile. “We’re moving away from the idea that insight is a product you deliver,” McCardel added. “It’s a conversation you participate in.” As AI changes analytics workflows, the challenge for organisations won’t be just adopting the technology. It will be redesigning how people collaborate around data. The co-founder of Hex suggests that democratisation was never about removing experts from the process. It was about making expertise visible, accessible, and usable. And that, finally, may be something worth not panicking about.

    Takeaways

    - AI is reshaping the future of data analytics.
    - Data democratisation remains a significant challenge for organisations.
    - Trustworthiness in data outputs is crucial for effective decision-making.
    - Integration of different user personas is essential for collaboration.
    - Organisations can start using analytics tools without perfect data.
    - Expert users can help build trust in data analytics.
    - Natural language interfaces are key to making data accessible.
    - The role of AI in data exploration is becoming increasingly important.
    - Data quality and governance are critical for successful analytics.
    - Successful AI adoption requires a step-by-step approach.
    Chapters

    00:00 Introduction to AI and Data Analytics
    02:54 The Genesis of Hex Technologies
    06:04 Challenges in Data Democratisation
    09:10 AI's Role in Data Exploration
    12:14 Trust and Context in Data Analytics
    15:00 The Evolution of Analytics Tools
    18:10 Integrating Different User Personas
    21:09 The Importance of Contextual Understanding
    23:52 Data Preparation and Governance Challenges
    26:46 Incremental Adoption of AI in Organizations
    29:57 The Human Element in AI Adoption
    32:47 Conclusion and Next Steps for Leaders

    #DataDemocratisation #AIinAnalytics #SelfServiceAnalytics #FutureofData #DataStrategy #BusinessIntelligence #DataGovernance #DataTrust #NaturalLanguageQuery #EnterpriseAnalytics #HexTechnologies #BarryMcCardel #DontPanicItsJustData #KevinPetrie #BARC #CIO #ITLeaders #DataTeams #DataAnalysts #DataScientists #BusinessStakeholders #DataDemocratization #AIforDataTeams #analytics_tool #datastrategy #RethinkAnalyticsStrategy #blackbox #DataFragmentation

    36 min
  3. How To Scale AI in Digital Commerce Effectively

    14 JAN

    How To Scale AI in Digital Commerce Effectively

    Digital commerce teams rarely lack ideas. Most understand how AI, data, and personalisation could improve customer experiences. The problem, as explored in this episode of Don’t Panic, It’s Just Data, is turning those ideas into something that works at scale, in real time, and without slowing the business down. Hosted by Dana Gardner, Principal Analyst at Interarbor Solutions, the discussion brings together Jürgen Obermann, Senior GTM Leader EMEA, and Piotr Kobziakowski, Senior Principal Solutions Architect, from Vespa.ai. Rather than focusing on hype, the conversation centres on the everyday realities of modern e-commerce systems and why progress often feels harder than it should.

    When AI Meets Legacy Digital Commerce

    AI introduces new expectations around speed, relevance, and adaptability. Yet many digital commerce platforms are built on foundations designed for a different era. Years of development have resulted in fragmented environments, often based on microservices that once provided flexibility but now introduce complexity. As Jürgen explains, even small changes can trigger long delivery cycles. Engineering teams may need months to safely update systems, not because the ideas are difficult, but because the infrastructure has become fragile.

    Search and Personalisation Are Still Disconnected

    Search is where most e-commerce journeys begin, yet many platforms still rely on keyword-focused approaches that struggle to interpret intent. Customers expect results that reflect who they are, what they want, and why they’re searching. Delivering meaningful personalisation requires systems that combine signals, context, and ranking logic in real time. Without that, experiences remain generic even when the data is available.

    Architecture Becomes the Bottleneck

    The conversation then turns to architecture. Traditional search stacks, particularly Lucene-based systems, often hit performance limits when vector operations and advanced ranking are introduced.
    These capabilities tend to be bolted on rather than designed into the core. Piotr highlights a deeper issue: fragmentation. Search, ranking, recommendation, feature stores, and inference engines often live in separate systems. Each integration adds latency, duplicates data, and slows innovation.

    A More Grounded Path Forward

    This episode of Don’t Panic, It’s Just Data offers a calm, practical view of AI in digital commerce. Progress comes not from adding more complexity, but from simplifying how systems work together. When search, personalisation, and recommendation are designed as part of a cohesive whole, digital commerce platforms become easier to evolve and better equipped to serve both customers and the business. For more insights into modern search architectures and AI-native commerce platforms, visit Vespa.ai.

    Takeaways

    - Many teams see the potential of AI, but face practical blockers.
    - E-commerce companies struggle with operational, customer experience, and business challenges.
    - AI technologies enable sophisticated personalised search experiences.
    - Architectural bottlenecks often hinder e-commerce systems' performance.
    - AI-native architectures can significantly improve search capabilities.
    - Real-time personalisation is crucial for enhancing user experience.
    - Separate systems for search and recommendations create inefficiencies.
    - Phased migration is essential for transitioning from legacy systems.
    - AI's impact on revenue can be profound when implemented effectively.
    - Vespa is a comprehensive platform that integrates various functionalities.
    Chapters

    00:00 Introduction to AI-Driven Search in E-Commerce
    01:38 Challenges in Adopting AI for Digital Commerce
    04:02 Architectural Bottlenecks in E-Commerce Systems
    07:39 Designing AI-Native Search Architectures
    12:00 Advancements in Personalisation for E-Commerce
    16:21 Inefficiencies of Separate Search and Vector Systems
    19:24 Phased Migration to AI-Native Platforms
    21:51 Business Implications of AI in Search
    23:57 Advice for Technical Leaders in E-Commerce

    About Vespa.ai

    Vespa.ai is an AI search platform designed for building and operating large-scale, real-time applications. It brings together big data processing, vector search, machine-learned ranking, and real-time inference within a single system, enabling teams to deliver intelligent search, recommendation, and retrieval-augmented generation (RAG) at enterprise scale. With native tensor support, Vespa allows complex ranking and decision logic to run directly in production, rather than being bolted on as separate services. This architecture reduces latency, simplifies system design, and makes it easier to evolve AI-driven applications as data, models, and business needs change.

    25 min
  4. The Modern CFO is the Product Owner of Data

    13 JAN

    The Modern CFO is the Product Owner of Data

    In a recent episode of the Don’t Panic, It’s Just Data podcast, recorded live in London, Shubhangi Dua, Podcast Producer and B2B Tech Journalist at EM360Tech, reports on a conversation between Pavel Doležal, CEO of Keboola, and Vineta Bajaj, Group CFO of Holland & Barrett. They get specific about how modern finance leaders move faster: start with one governed source of truth, then layer automation, and only then AI. They explore how the CFO role is evolving, from reporting the numbers to also owning the non-financial “whys” behind them. In the age of the AI boom, that shift turns every CFO into a product owner of data. But as Doležal puts it, without a clean, connected foundation, AI is just noise. According to Bajaj, the role of the CFO has fundamentally changed. Today’s CFO must act as a product owner for data, not just owning the numbers but also determining how data is defined, structured, and used throughout the business.

    Finance and Data: A Complete Product

    Drawing on her experience with Ocado Group, Rohlik Group (one of the fastest-growing online grocery businesses in the world), and now Holland & Barrett, Bajaj points out that financial problems remain persistent across organisations. Issues such as slow month-end closes, duplicated processes, delayed reporting, and limited decision-making speed are still common. These challenges are even greater in complex businesses that operate across multiple entities and countries. Differing charts of accounts, outsourced finance teams, and fragmented systems create added friction. Bajaj stresses that the answer isn’t "add another tool". CFOs should treat finance and data as a complete product, one that serves the business as its customer. This requires understanding finance processes, clearly defining financial and non-financial data, and prioritising what has the greatest impact on the business.
    The Holland & Barrett CFO further emphasises that CFOs cannot pass this responsibility off to IT or BI teams. When data ownership sits outside finance, it becomes someone else’s problem. However, when finance takes ownership of master data and its definitions while working closely with commercial and operational teams, it creates a single source of truth that the entire organisation can trust.

    Also Watch: The Real Future of Data Isn’t AI — It’s Contextual Automation

    How to Build the Foundation for Real-Time Financial Intelligence & AI

    Analytics, automation, and AI only work if the foundations are solid. Before adding AI assistants or real-time dashboards, CFOs must ensure that finance processes are clean, standardised, and automated. Poorly coded purchase orders, late journal entries, and inconsistent definitions can undermine even the most advanced technology. At Holland & Barrett, this perspective led Bajaj to create a dedicated data function within finance. It ensures accountability for master data, definitions, and governance. The aim is not just to speed up reporting, but to gain deeper insights by linking financial outcomes with non-financial factors such as foot traffic, pricing, customer behaviour, and external influences like weather. This integrated viewpoint allows finance teams to go beyond explaining variances and focus on the key business questions: why performance changed, and what happens next. It also opens up self-service analytics, reducing reliance on central BI teams and enabling decision-makers to act in real time. Bajaj views AI as a powerful tool but not a shortcut. It pushes organisations to address long-standing data and process issues quickly. When data is well-defined and trusted, AI can facilitate scenario modelling, forecasting, and faster decision-making. Without proper discipline, AI merely adds to the confusion. Ultimately, the future CFO must take an active role.
    They should engage with data, map out processes, ask difficult questions, and create a clear plan. Those who do will move faster than traditional finance models allow and help their organisations thrive in an AI-driven future.

    Key Takeaways

    - The CFO’s role is evolving from reporter to product owner of data.
    - Slow month-end closes and fragmented processes block fast decision-making.
    - Finance must own data definitions to create a single source of truth.
    - Financial and non-financial data must be connected to explain the “why.”
    - AI only delivers value when financial data and processes are already clean.

    Chapters

    00:00 Introduction: The Modern CFO and Data
    01:05 What is the New Role of the CFO?
    03:34 The 3 Biggest Problems in Finance (Month-End, Reporting, Decisions)
    06:21 Why Every CFO is a Product Owner of Data
    09:37 Data Ownership: Should Master Data Sit in Finance or IT?
    13:30 The 3 Steps to Unleash Data Power (Process, Standardisation, Data Lake)
    17:08 How AI is Forcing Speed and Change in Finance
    20:25 The Future: Keboola's AI Assistant Roadmap
    21:36 Wrap-up and Final Thoughts

    23 min
  5. Responsible AI Starts with Responsible Data: Building Trust at Scale

    11/12/2025

    Responsible AI Starts with Responsible Data: Building Trust at Scale

    We live in a world where technology moves faster than most organisations can keep up. Every boardroom conversation, every team meeting, even casual watercooler chats now include discussions about AI. But here’s the truth: AI isn’t magic. Its promise is only as strong as the data that powers it. Without trust in your data, AI projects will be built on shaky ground. In this episode of the Don’t Panic, It’s Just Data podcast, Amy Horowitz, Group Vice President of Solution Specialist Sales and Business Development at Informatica, joins moderator Kevin Petrie, VP of Research at BARC, to tackle one of the most pressing topics in enterprise technology today: the role of trusted data in driving responsible AI. Their discussion goes beyond buzzwords to focus on actionable insights for organisations aiming to scale AI with confidence.

    Why Responsible AI Begins with Data

    Amy opens the conversation with a simple but powerful observation: “No longer is it okay to just have okay data.” This sets the stage for understanding that AI’s potential is only as strong as the data that feeds it. Responsible AI isn’t just about implementing the latest algorithms; it’s about embedding ethical and governance principles into every stage of AI development, starting with data quality. Kevin and Amy emphasise that organisations must treat data not as a byproduct, but as a foundational asset. Without reliable, well-governed data, even the most advanced AI initiatives risk delivering inaccurate, biased, or ineffective outcomes.

    Defining Responsible AI and Data Governance

    Responsible AI is more than compliance or policy checkboxes. As Amy explains, it is a framework of principles that guide the design, development, deployment, and use of AI. At its core, it is about building trust, ensuring AI systems empower organisations and stakeholders while minimising unintended consequences. Responsible data governance is the practical arm of responsible AI.
    It involves establishing policies, controls, and processes to ensure that data is accurate, complete, consistent, and auditable.

    Prioritise Data for Responsible AI

    The takeaway from this episode is clear: responsible AI starts with responsible data. For organisations looking to harness AI effectively:

    - Invest in data quality and governance — it is the foundation of all AI initiatives.
    - Embed ethical and legal principles in every stage of AI development.
    - Enable collaboration across teams to ensure transparency, accountability, and usability.
    - Start small, prove value, and scale — responsible AI is built step by step.

    Amy Horowitz’s insight resonates beyond the tech team: “Everyone’s ready for AI — except their data.” It’s a reminder that AI success begins not with the algorithms, but with the trustworthiness and governance of the data powering them. For more insights, visit Informatica.

    Takeaways

    - AI is only as good as its data inputs.
    - Data quality has become the number one obstacle to AI success.
    - Organisations must start small and find use cases for data governance.
    - Hallucinations in AI models highlight the need for vigilant data oversight.
    - Reputational damage from AI failures can be severe for organisations.
    - Metadata plays a crucial role in data management and governance.
    - Collaboration between data, AI, and development teams is essential.
    - Data governance is a must-have, not a nice-to-have.
    - Organisations need to enable their lines of business for effective AI implementation.
    - Everyone is ready for AI, except for the quality of their data.
    Chapters

    00:00 The Importance of Responsible AI and Trusted Data
    02:49 Defining Responsible AI and Data Governance
    05:40 Challenges in Data Quality and Governance
    08:51 Real-World Examples of Data Quality Issues
    11:51 The Role of Employees in Data Governance
    14:41 Successful AI Outcomes Through Responsible Data Practices
    17:42 The Risks of AI Governance and Reputational Damage
    20:42 Collaboration Across Data, AI, and Development Teams
    23:34 The Future of Metadata and Data Management
    26:42 Key Takeaways for Data and AI Leaders

    About Informatica

    Informatica, founded in 1993, is an enterprise data management company headquartered in Redwood City, California. The company provides software products for data integration, data quality, master data management, and data governance. With approximately 9,000 global customers across various industries, Informatica has positioned itself as a significant player in the data management market.

    26 min
  6. The Missing Piece: How Data and AI Impact Management Unlocks Business Value

    11/12/2025

    The Missing Piece: How Data and AI Impact Management Unlocks Business Value

    “What is the true value of our data and AI initiatives?” Too often, we pour all our energy into tools, processes, and outputs, but forget to ask how what we build actually makes a difference. For enterprises, this means looking beyond AI models and dashboards to see how data drives real, measurable impact. Understanding the difference between output and outcome is what separates activity from transformation. In this episode of Don’t Panic, It's Just Data, host Doug Laney and Nadiem von Heydebrand, CEO and Co-founder of Mindfuel, explore how organisations can turn data and AI efforts into actionable business outcomes. They discuss the concept of the “value layer”, a framework connecting data initiatives to business needs, emphasising the importance of understanding business problems before developing solutions. Nadiem stresses that prioritising initiatives and fostering strong collaboration between business and data teams are critical to unlocking maximum value from data and AI efforts.

    Why Data and AI Impact Management Matters

    Many organisations are investing heavily in data and AI, but turning these investments into real business value remains a challenge. A critical gap exists between technical execution and business outcomes: data and AI teams work on initiatives without first clarifying what business problems they're solving or how success will be measured. Data and AI Impact Management bridges this gap by establishing the “value layer” between business strategy and technical platforms. This approach starts with structured demand management for use cases, enables systematic prioritisation based on actual value potential, and tracks initiatives throughout their lifecycle to ensure they deliver impact against business goals.
    This shift, from building solutions in search of problems to solving qualified business problems with purpose-built solutions, transforms data and AI teams from technical support functions into strategic partners who deliver value, stronger strategic alignment, and lasting competitive advantage. Nadiem says, “Applying a product mindset within data initiatives is key, and it's the foundational effort to be able to drive value.” He also notes that not every use case delivers direct financial impact; the value layer helps clarify demand, manage use cases effectively, and uncover each initiative’s business value. For more insights and solutions, visit Mindfuel.

    Takeaways

    - Organisations struggle to connect data initiatives to business outcomes.
    - The value layer is essential for linking data to business demands.
    - Understanding the actual business problem is crucial for success.
    - Value management encompasses the entire lifecycle of initiatives.
    - A product mindset helps focus on outcomes rather than outputs.
    - Not all data use cases have direct dollar values.
    - Data and AI impact management creates transparency for data teams.
    - Establishing a product mindset is key for data products.
    - Connecting processes to the operating model enhances effectiveness.
    - Collaboration between business and data teams is vital for unlocking value.

    Chapters

    00:31 Introduction: Don't Panic, It's Just Data
    01:37 The Missing Piece: Introducing the Value Layer
    07:11 Value Management Lifecycle
    10:46 Product Mindset in Data Initiatives
    14:10 Distinguishing Value and Impact
    17:04 Impact Management and Investment Justification
    19:34 Mindfuel's Three-Step Guide to Impact Management
    21:00 Conclusion and Key Takeaways

    About Mindfuel

    Mindfuel is a data and AI impact management platform that gives data, analytics, and AI teams a single source of truth to prioritise high-impact use cases, connect initiatives to business outcomes, and demonstrate ROAI.
It replaces scattered tools and reactive, manual processes with a structured approach to managing use cases and data and AI products. This enables organisations to reduce business case bias, eliminate inefficiencies, and clearly communicate the value of AI initiatives, driving enterprise-wide trust, transparency, and impact.

    23 min
  7. The AI-Ready Data Core: Creating the Foundation for Intelligent Systems

    09/12/2025

    The AI-Ready Data Core: Creating the Foundation for Intelligent Systems

    As AI becomes a central pillar of business decision-making, enterprises face a new challenge: making their data AI-ready. It’s no longer enough to collect and digitise information. For organisations, data must be structured, contextualised, discoverable, and usable, both by humans and by intelligent systems. AI can only deliver if your data is truly ready, but most enterprises are drowning in fragmented, incomplete, or slow-to-update data. In this episode of Don't Panic, It's Just Data, host Doug Laney and Sushant Rai, Vice President of Product for AI and Data Strategy at Reltio, explore how modern data unification strategies are changing enterprises, enabling AI to deliver faster, more reliable insights. They focus on the shift from traditional Master Data Management (MDM) to next-generation AI-ready data cores, uncovering the risks of fragmented data and the strategies to overcome them.

    Why AI-Ready Data Matters

    AI, especially large language models (LLMs), is changing how people interact with data. Analysts, executives, and frontline teams now expect natural language queries and instant, actionable insights. Sushant explains: "AI performs at its best when it has full context, empowered with the right data. This allows AI agents to make decisions and take actions on behalf of your business." When you embed intelligence into your data layer, AI can help you manage and scale your data without drowning your teams in manual work. This will only work if your data is structured, clean, governed, and constantly updated, everything that makes it truly AI-ready.

    The Data Scale Challenge

    The volume of data generated daily is staggering. As Sushant notes: "The amount of data getting generated every single day is so massive that there’s no way to keep up without AI.
Even the largest organizations, with massive data stewardship teams, can’t catch up manually."This gap is driving the change in the modern data platforms, where AI automates stewardship, enriches data continuously, detects anomalies, and maintains quality in real time. Want to learn more about modern data unification and AI-ready platforms? Visit Reltio.com for insights, resources, and case studies. TakeawaysData unification provides a trusted, real-time view of key business elements.Organizations must balance speed and trust in data management.Classic MDM is evolving into modern data unification platforms.Real-time data access is crucial for AI and analytics.AI can enhance data quality and governance processes.Successful data initiatives require clear business outcomes and ownership.Data unification should be viewed as a business platform, not just an IT project.AI agents will play a significant role in automating data governance.Organizations need to focus on both structured and unstructured data.The future of data management involves continuous unification and enrichment of data. Chapters00:00 Introduction to Data Unification and AI 07:52 The Importance of Data Unification in Enterprises 15:44 AI and Data Quality Management 23:20 Organizational Success Factors for Data Initiatives 25:16 Future Trends in Data and AI About ReltioAt Reltio, we believe data should fuel your success in the enterprise AI era. Reltio Data Cloud™ is the agentic data fabric for the enterprise—powering real-time data intelligence and AI transformation. Reltio’s cloud-native SaaS platform delivers unified, trusted, and context-rich data across domains in real time. With Reltio, organizations gain 360-degree views of customers, products, suppliers, and more—mobilized in milliseconds to any application, user, or AI agent. 
Trusted by the world’s largest enterprises across life sciences, financial services, healthcare, technology, and more, we help organizations fuel frictionless operations, drive innovation, and reduce risk.

    26 min
  8. From Data Steward to AI Strategist: Redefining the Role of the CDO in the Agentic Era

    18/11/2025

    From Data Steward to AI Strategist: Redefining the Role of the CDO in the Agentic Era

    While the role of the chief data officer (CDO) was traditionally focused on regulatory compliance, it has now expanded to empowering the consistent and effective use of data across organizations to improve business outcomes. One of the most effective ways for CDOs to demonstrate their value is by developing a data strategy that is closely aligned with business goals, processes, and outcomes. In the latest episode of Tech Transformed, host Kevin Petrie, VP of Research at BARC, speaks with Brett Roscoe, Senior Vice President and GM of Cloud Data Governance and Cloud Ops at Informatica, about the evolving role of CDOs. Their conversation explores how CDOs are transitioning from data stewards to strategic leaders, the importance of data governance, and the challenges of managing unstructured data.

The Role of the CDO in the Agentic Era

As Roscoe notes, “CDOs are now pivotal in AI strategy,” reflecting how the role has grown from compliance oversight to guiding enterprise initiatives that directly support organizational goals. Today, CDOs are tasked with ensuring that data is both accessible and reliable, providing a foundation for informed decision-making across business units. This includes establishing policies for data quality, access, and governance, which Roscoe highlights as essential: “data governance is foundational for AI.” At the same time, unstructured data, ranging from documents and emails to multimedia, adds complexity that requires careful management to make it useful while minimizing risk. “Unstructured data presents challenges,” he adds, emphasizing the need for structured oversight to fully leverage these assets.

AI Strategy

Although technology and analytics are evolving rapidly, the CDO’s role in aligning data with strategic initiatives remains critical. By connecting data assets to business processes, CDOs help ensure that initiatives are informed by reliable, well-governed information and can deliver measurable results. For anyone looking to understand the evolving responsibilities of CDOs, the importance of governance, and strategies for handling unstructured data, this episode of Tech Transformed provides a detailed and practical discussion.

For more insights, follow Informatica:
X: @informatica
Instagram: @informaticacorp
Facebook: https://www.facebook.com/InformaticaLLC/
LinkedIn: https://www.linkedin.com/company/informatica/

Takeaways
CDOs are now central to shaping AI strategies and driving business growth.
Robust data governance is crucial for the successful deployment of AI technologies.
Unstructured data presents unique challenges and opportunities for AI development.
A balance between centralized governance and federated operations is essential.
Securing executive support is vital for the success of CDO-led initiatives.
Engaging business stakeholders enhances the impact of AI projects.
Demonstrating ROI through clear metrics is key to sustaining AI investments.
AI governance must extend beyond data to include models and agents.
New measures are needed to ensure the quality and governance of unstructured data.
CDOs must navigate the tension between fostering innovation and maintaining governance standards.

Chapters
00:00:00 Introduction to the Podcast and Guests
00:03:00 Brett Roscoe's Background and Role
00:06:00 The Evolving Role of CDOs
00:09:00 Data Governance as a Foundation for AI
00:12:00 Challenges with Unstructured Data
00:15:00 Governance Frameworks for AI and Data
00:18:00 Centralization vs. Decentralization in Data Governance
00:21:00 CDO Strategies for Success
00:24:00 Conclusion and Future Outlook

About Informatica
Informatica, founded in 1993, is an enterprise data management company headquartered in Redwood City, California. The company provides software products for data integration, data quality, master data management, and data governance. With approximately 9,000 global customers across various industries, Informatica has positioned itself as a significant player in the data management market.

    30 min

