Don't Panic! It's Just Data

EM360Tech

Not only do many businesses have more data than they know what to do with, but they also often struggle to gain insights from some of the most valuable data in their possession, leaving many of their crucial data assets unused. Whether it's issues with data quality, visualization, or management, getting lost in the sea of enterprise data in your possession can make it impossible to make smart, data-driven decisions that improve your business. The "Don't Panic! It's Just Data" podcast delves deep into the power of enterprise data. From groundbreaking vendor solutions to expert-backed best practices for making the most of your data assets, join us as we gather insights from leading tech vendors and professionals who depend on data daily.

  1. Why Data Quality Makes or Breaks AI Success in Supply Chain and Procurement

    24 MAR

    Why Data Quality Makes or Breaks AI Success in Supply Chain and Procurement

    We’re living in an age where new technology promises to improve everything: faster decisions, smarter workflows, and better outcomes. But behind that promise lies a quieter reality: ambition is high, while readiness often lags behind. In this episode of Don’t Panic! It’s Just Data, host Christina Stathopoulos, Founder of Dare to Data, speaks with Pascal Bensoussan, Chief Product Officer at Ivalua, about the growing excitement around AI and the reality many organisations face when trying to use it. Focusing on procurement, the conversation explores why many AI initiatives struggle to move beyond the early stages and what’s needed to turn that ambition into real, measurable value.

    Data: The Backbone of AI

    Successful AI depends on high-quality, unified data. Fragmented sources, unclean data, and siloed systems make it difficult to build reliable AI applications. As Bensoussan explains: “Fix your data foundation. Without that, you can’t get started with AI. Don’t jump into an AI frenzy hoping it will sort itself out. First, you need a unified transactional and master data model that captures relationships, ensures semantic coherence, and creates a system of truth you can trust.” A unified data model enables AI to work effectively, increasing both its success rate and depth. Organisations should start with use cases that provide tangible value rather than trying to do everything at once. Governance frameworks, monitoring, and maintenance are critical to ensure reliability, security, and meaningful outcomes.

    Employee trust is another key factor. Users need confidence in AI outputs, and organisations must address scepticism about how AI might impact roles. Building that trust often requires broader cultural change, which can be one of the hardest barriers. Many teams are used to traditional methods and resist adopting new technologies.
    By combining solid data foundations with practical, focused use cases and a clear strategy, companies can guide teams through this change, ensuring AI initiatives don’t stall and that they deliver measurable results.

    Understanding AI Ambition vs. AI Readiness

    Ambition and readiness are not the same. AI ambition refers to the enthusiasm organisations have for integrating AI into operations, driven by the promise of efficiency and insight. AI readiness, on the other hand, measures whether an organisation can actually deploy AI effectively at scale. According to MIT research, 95 per cent of enterprise AI projects fail to move from proof of concept to production. Bensoussan calls this the “GenAI divide”: “The ambition is there because the promise is incredible, but the readiness is often missing because often the foundation is cracked.” Without a clear strategy or roadmap, even organisations with abundant resources can struggle to implement AI successfully. Starting with targeted, achievable use cases helps teams gain confidence, build trust, and generate measurable results before scaling more widely.

    AI in Procurement

    Procurement provides a unique lens for understanding AI adoption. Positioned at the intersection of data, compliance, risk, and finance, it offers significant opportunities but also considerable complexity. One major challenge is that unstructured data like contracts, risk assessments, and supplier communications must be integrated with transactional records, a process that is often time-consuming and difficult. Fragmented systems only add to the challenge, limiting AI’s ability to deliver meaningful, actionable insights. Bensoussan emphasises that seeing the entire process, from supplier discovery to payment, is essential. A comprehensive view ensures that AI-driven insights are reliable, actionable, and fully traceable, allowing organisations to understand why specific decisions are made and to make more strategic choices.
    AI in procurement is not about replacing humans; it is about augmenting them. By automating mundane tasks like data retrieval and report generation, professionals can focus on higher-value work, strategic thinking, and deeper evaluation. AI also enables richer insights, helping teams develop more effective strategies and make informed decisions. By addressing data challenges, building trust, and starting with targeted use cases, organisations can turn AI ambition into measurable value. With the right preparation and focus, AI can strengthen procurement operations, enhance decision-making, and unlock new levels of efficiency. For more information, visit www.ivalua.com

    Takeaways
    - AI ambition vs. readiness in organisations
    - Barriers to AI adoption: culture, strategy, data, trust, governance
    - Importance of unified data models for AI effectiveness
    - Practical AI applications in procurement: sourcing, contracts, invoicing
    - Human-AI collaboration and the future of work in procurement

    Chapters
    00:00 AI Ambition vs. Readiness
    05:02 The Procurement Landscape and AI Adoption
    09:10 Data Foundations for AI Success
    13:03 Unified Data Models in Procurement
    16:43 The Human Element in AI Integration
    25:57 Real-World Applications of AI Agents
    32:22 Key Takeaways for Leaders in AI Adoption

    32 min
  2. Revenue-Ready Data Is Not Magic, It’s Engineering

    19 MAR

    Revenue-Ready Data Is Not Magic, It’s Engineering

    Artificial intelligence is everywhere right now: in boardrooms, strategy meetings, and product roadmaps. Organisations are investing heavily in machine learning, automation, and generative AI, all with the same promise: unlock new revenue and work smarter. In the latest episode of the Don’t Panic! It’s Just Data podcast, EM360Tech’s Trisha Pillay explores this challenge with Paul Brownell, Chief Technology Officer, and Sergio Morales, Data and AI Engineering Leader, from Growth Acceleration Partners. Their discussion unpacks why so many AI initiatives fail to translate into revenue, and why the real starting point isn’t the model itself but the data, governance, and engineering practices that make meaningful outcomes possible. But here’s the uncomfortable truth: many AI strategies look powerful on paper, while their real financial impact is often unclear. This disconnect, called the revenue data gap, highlights an issue many organisations overlook. AI doesn’t create value on its own; without strong data foundations, governance, and engineering discipline, even the most ambitious AI strategy will struggle to deliver measurable results.

    The Revenue Data Gap in Enterprise AI

    For many organisations, the excitement surrounding AI can create a tendency to jump straight into experimentation. Teams begin exploring tools, deploying models, or building prototypes without first defining how those initiatives will produce tangible business outcomes. According to Brownell, this is where the first major disconnect appears. Many enterprises approach AI with what he describes as a “shiny object” mentality. They recognise that AI is powerful, but they have not yet defined where the value will actually come from. As a result, organisations may launch projects that generate interesting insights or technical demonstrations but fail to translate into revenue growth or cost reduction.
    Brownell emphasises the importance of establishing a data hypothesis before pursuing any AI initiative. A data hypothesis outlines the relationship between the data an organisation holds and the business value it expects to extract from it. In practical terms, it asks a simple but critical question: if we analyse this data, what decision or action will it enable, and how will that affect revenue? Without this hypothesis, organisations often find themselves exploring large volumes of data without a clear objective. Some companies may not even know where their most valuable data resides or whether it is reliable enough to support analytical models. Data quality, therefore, becomes another major component of the revenue data gap.

    Engineering the Foundations for AI That Delivers Business Impact

    While AI is often portrayed as a revolutionary technology, Morales points out that the engineering challenges behind it are not entirely new. Many of the same principles that guided earlier technology transformations, such as cloud adoption or microservices architecture, still apply to modern AI deployments. In fact, Morales argues that organisations struggling with AI today are often experiencing the consequences of earlier architectural decisions. Systems built years ago were rarely designed with advanced analytics or AI in mind. As a result, critical data may be trapped inside legacy applications, scattered across departments, or stored in formats that make integration difficult. These limitations become highly visible once organisations attempt to deploy AI at scale. Another major challenge lies in what Morales describes as the velocity mandate: businesses increasingly expect technology teams to deliver results quickly, particularly when AI initiatives are positioned as strategic priorities. However, building the infrastructure required for reliable AI systems can take significant time and effort.
    Morales explains that organisations do not necessarily need to choose between speed and stability. Instead, they can adopt a pragmatic approach that focuses on incremental progress. This strategy allows organisations to create early successes that build confidence across the business. Once stakeholders see tangible results from initial projects, it becomes easier to secure the support and investment needed for broader data transformation efforts.

    Why Data Contracts and Governance Are Critical to AI Success

    One of the most practical tools discussed is the concept of data contracts. Though less flashy than AI models, they ensure data flows reliably between systems. At their core, data contracts define a dataset’s structure and expectations: schemas, formats, and validation rules. Morales describes them as a way to embed governance directly into data pipelines, automatically catching violations before they disrupt downstream processes. This prevents silent errors that can skew analytics and decisions.

    Data contracts aren’t a cure-all, though. Their effectiveness relies on clear organisational ownership and communication around each dataset. In large companies, data often comes from multiple systems managed by different teams, each with distinct priorities. Brownell explains that data contracts create a shared framework for collaboration, letting teams integrate and analyse information confidently. Implementation can be gradual: start with critical datasets for a specific use case and expand governance as needed. This iterative approach improves data reliability without requiring a full infrastructure overhaul.

    What’s Next for AI?

    While AI tools continue to evolve, the fundamentals of data management remain unchanged. Organisations must understand their data, govern it effectively, and design infrastructure that allows information to move reliably between systems. Closing the revenue data gap, therefore, requires more than deploying new AI models.
    It demands a strategic approach that begins with clear business objectives, continues through data engineering practices, and is reinforced by governance frameworks such as data contracts. If you would like to learn more, visit: https://www.growthaccelerationpartners.com/

    Chapters
    00:00 Introduction to AI Ambitions and Revenue Gaps
    02:31 Understanding the Revenue Data Gap
    05:47 Challenges of Legacy Architecture in AI
    09:11 Closing the Revenue Data Gap
    12:29 The Velocity Mandate in AI Implementation
    16:42 Strategic and Technical Alignment for AI
    18:31 Engineering Considerations for AI Initiatives
    22:03 The Role of Data Contracts in AI Success
    28:55 Practical Takeaways for AI Implementation

    Takeaways
    - The revenue data gap is a common challenge that organisations face when implementing AI.
    - It’s crucial to define a clear data hypothesis and ensure data quality to drive measurable business impact.
    - Data contracts work only if teams know who owns datasets, how to maintain them, and how changes are communicated.
    - Balancing the velocity mandate with governance is key.
    - Engaging stakeholders and mapping the value chain ensures that AI initiatives are aligned with business needs, ultimately leading to revenue growth.
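
    The data-contract idea discussed in this episode — a declared schema plus validation rules that catch violations before records flow downstream — can be sketched in a few lines of Python. The field names and rules below are hypothetical illustrations, not Growth Acceleration Partners' actual implementation.

```python
# Minimal sketch of a data contract: a dataset's declared structure and
# expectations (types plus validation rules), checked before records move
# downstream. Field names and rules are illustrative only.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class FieldRule:
    dtype: type                   # expected Python type for the field
    check: Callable[[Any], bool]  # additional validation rule

# The "contract" for a hypothetical invoice dataset.
INVOICE_CONTRACT = {
    "invoice_id": FieldRule(str, lambda v: len(v) > 0),
    "amount":     FieldRule(float, lambda v: v >= 0),
    "currency":   FieldRule(str, lambda v: v in {"USD", "EUR", "GBP"}),
}

def validate(record: dict, contract: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the record passes."""
    violations = []
    for field, rule in contract.items():
        if field not in record:
            violations.append(f"missing field: {field}")
        elif not isinstance(record[field], rule.dtype):
            violations.append(f"wrong type for {field}")
        elif not rule.check(record[field]):
            violations.append(f"rule failed for {field}")
    return violations

good = {"invoice_id": "INV-001", "amount": 120.0, "currency": "EUR"}
bad  = {"invoice_id": "INV-002", "amount": -5.0}  # negative amount, no currency

assert validate(good, INVOICE_CONTRACT) == []
print(validate(bad, INVOICE_CONTRACT))
```

    Running the check at the pipeline boundary, rather than inside downstream analytics, is what turns silent data drift into a loud, early failure.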

    30 min
  3. How to Navigate the Trust Paradox in AI Adoption: Insights from Informatica’s 2026 CDO Report

    18 MAR

    How to Navigate the Trust Paradox in AI Adoption: Insights from Informatica’s 2026 CDO Report

    Enterprise AI budgets are climbing, but the data foundations beneath them remain uneven. In this episode of Don’t Panic! It’s Just Data, Kevin Petrie, VP of Research at BARC, and Nathan Turajski, Senior Director, Product Marketing at Informatica, examine the findings of the CDO Insights 2026 report, which argues that executive confidence in AI may be outpacing organisational readiness. The study centres on what it describes as a growing “trust paradox”: Chief Data Officers are accelerating AI initiatives even as data quality, governance maturity, and AI literacy struggle to keep up.

    The Trust Paradox

    The report exposes a striking disconnect. Turajski points out that while around 65 per cent of data leaders believe employees trust the data powering AI, 75 per cent say upskilling in data and AI literacy is essential. In other words, confidence is high, but readiness is lagging. This is the trust paradox: employees increasingly rely on AI outputs, while data leaders remain cautious about the quality, governance, and lineage behind those results. The risk is not scepticism but rather overconfidence. When AI-generated answers are accepted without scrutiny, flawed data can quietly scale poor decisions. For CDOs, the challenge is cultural as much as technical.

    AI Adoption Soars While Data Readiness Lags

    The harsh reality is that AI experimentation is no longer confined to innovation teams. It is spreading across marketing, operations, finance, and customer experience. As a result, scaling from pilot to production requires more than a model and a use case. To make AI work at scale, organisations need a data strategy that ensures consistency across domains, clear and transparent governance, measurable business impact, and sustainable management of their data assets.
    Data Quality and Governance

    Turajski explains that organisations are increasingly investing in data management and governance, with 86 per cent expanding data initiatives and 39 per cent prioritising upskilling. Metadata integration also helps unify distributed environments, providing the context AI needs to deliver reliable, trustworthy outputs.

    Organisations need to remember that AI systems amplify whatever they are given: if inputs are inconsistent, incomplete, or poorly defined, outputs will reflect those weaknesses, often at scale. Data quality challenges frequently arise from duplicated or conflicting records, inconsistent definitions across business units, poor lineage visibility, and limited ownership accountability.

    For example, a retailer might describe the same product in multiple ways across systems. Without standardisation, AI tools trained on that data produce fragmented insights, and when this occurs across thousands of products and regions, the distortions multiply. The takeaway from data leaders is clear: AI performance cannot be separated from disciplined, high-quality data management.

    Upskilling and Scaling AI Adoption

    Both Petrie and Turajski stress that technology alone won’t close the gap. Upskilling employees in data literacy, AI fluency, and governance awareness ensures AI experimentation evolves into measurable, real-world results, from improved customer experience to faster, more accurate analytics. The 2026 CDO Insights findings position data leaders at the centre of AI transformation. Their mandate extends beyond infrastructure to trust architecture. The trust paradox isn’t a reason to slow down innovation. It’s a reminder that lasting results require as much discipline as ambition. In 2026, the organisations that succeed won’t be the fastest to adopt new technologies, but those that build the most reliable data foundations to support them.
    To learn more about this, visit informatica.com

    Takeaways
    - The trust paradox highlights a disconnect between employee confidence in AI and leadership's caution.
    - Data leaders recognise the need for upskilling in data and AI literacy.
    - Building a trusted context is essential for effective AI adoption.
    - The vendor landscape for data management is complex and requires careful navigation.
    - AI is being used to enhance customer experience and loyalty.
    - Measurable results from AI adoption are becoming a priority for organisations.
    - Data governance must keep pace with AI use to mitigate risks.
    - Successful organisations are leveraging unified data management platforms to drive AI value.

    Chapters
    00:00 Introduction to the CDO Insights Report
    03:13 Understanding the Trust Paradox in AI Adoption
    08:34 Building Trusted Context for AI
    14:11 The Importance of Data Quality and Completeness
    20:28 Navigating the Vendor Landscape for Data Management
    23:09 From Experimentation to Measurable Results
    27:38 Recommendations for CDOs and CISOs
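
    The retailer example in this episode — the same product described differently across systems, producing fragmented insights — can be sketched with a simple normalisation pass. The product names and systems below are hypothetical, chosen only to illustrate the standardisation step.

```python
# Sketch of the product-standardisation problem described above: the same
# item recorded under different spellings across systems. A canonical key
# lets the duplicates be grouped. All names and fields are hypothetical.
import re

def normalise(name: str) -> str:
    """Reduce a product description to a canonical key for matching."""
    key = name.lower()
    key = re.sub(r"[^a-z0-9 ]", " ", key)   # replace punctuation with spaces
    key = re.sub(r"\s+", " ", key).strip()  # collapse repeated whitespace
    return key

# Three systems describing what is plausibly the same product.
records = [
    {"system": "ecommerce", "name": "Acme Kettle 1.7L (Steel)"},
    {"system": "warehouse", "name": "ACME kettle 1.7l steel"},
    {"system": "finance",   "name": "Acme Kettle 1.7L - Steel"},
]

# Group records by canonical key: one group means one real-world product.
groups: dict[str, list[str]] = {}
for r in records:
    groups.setdefault(normalise(r["name"]), []).append(r["system"])

for key, systems in groups.items():
    print(f"{key!r} appears in: {systems}")
```

    In practice, master data management tools do far more than string normalisation (matching rules, survivorship, stewardship workflows), but the core idea is the same: one canonical representation per real-world entity.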

    28 min
  4. Are You Scaling Intelligence — or Just Scaling Errors?

    10 MAR

    Are You Scaling Intelligence — or Just Scaling Errors?

    What if the real advantage in AI lies not in having more data, but in having less? In this episode of the Don’t Panic! It’s Just Data podcast, host Shubhangi Dua, Podcast Producer and B2B Tech Journalist at EM360Tech, sits down with Herb Blecher, Research Director of Data and Analytics at Enterprise Management Associates (EMA). This conversation challenges a common belief in enterprise tech: that gathering everything ensures insight. Blecher, alluding to the modern-day AI craze, cautions the enterprise audience that just because you can access vast amounts of unstructured data doesn’t mean you should.

    What Is the AI Gold Rush, and Why Is It Risky?

    Unstructured data now fills the enterprise tech space: voice calls, financial documents, customer chats, images, logs, and emails. “With AI and machine learning, we’ve finally figured out how to access and organise it.” However, Blecher offers a stark reality check: AI doesn’t just increase insight; it increases error. When machines transition from calculating numbers to interpreting tone, images, and incomplete context, the chances for mistakes rise significantly. Consider a blurry comma in a financial document, a misread abbreviation, or a misplaced decimal. In low-stakes situations, this is inconvenient. In finance or healthcare, it can be disastrous. The danger lies not just in faulty outputs, but in confidently flawed outputs. AI doesn’t hesitate as humans do. It doesn’t say, “This seems off.” It fills in gaps, often convincingly. That confidence, Blecher argues, makes governance essential. The real issue companies face isn’t a lack of data; it’s a lack of careful thought.

    Why Human-in-the-Loop Is Imperative

    Governance over hype is the key takeaway from the conversation. AI generating and consuming data at the same time creates a new situation. In the past, including during financial troubles that Blecher experienced directly, human judgment acted as the final protection.
    Now, companies risk losing that safeguard in their rush to automate. Dua puts it simply: humans are the leaders; AI is the helper. The enterprises that succeed with unstructured data aren’t the fastest; they are the most thoughtful. They clearly define their questions first, build feedback loops, monitor continuously, and foster a culture of scepticism. What do the failures look like? Often, ambitious automation without safeguards: from flawed document scanning to high-profile AI rollouts like McDonald's testing automated drive-through ordering, where conversational nuance proved more challenging than anticipated. Tone, ambiguity, and context remain distinctly human territory.

    What Happens Five Years From Now?

    Will AI solve data quality issues? No, it will not. Blecher believes that data quality problems are here to stay. “What will change is the range of questions we try to answer. As AI develops, companies won’t stop dealing with edge cases; they’ll broaden the edge.” The future doesn’t promise easy automation. It promises increased capability and capacity, along with increased responsibility. For CFOs and IT leaders investing in AI-driven data strategies, EMA’s Research Director of Data and Analytics has a final message:

    - Don’t confuse volume with value.
    - Don’t replace governance with optimism.
    - Don’t give up scepticism in a gold rush.

    AI’s potential is huge. But more data doesn’t always mean better data. In a world eager to gather everything, restraint could be the most radical strategy of all.

    Key Takeaways
    - More data doesn’t guarantee better insights: clarity of purpose matters more than volume.
    - AI doesn’t just scale intelligence; it scales errors if governance is weak.
    - Unstructured data is powerful, but without context and oversight, it becomes a liability.
    - Human judgment remains essential, especially in high-stakes domains like finance and healthcare.
    - The most successful organisations move deliberately, not impulsively, in the AI gold rush.
    Chapters
    00:00 Introduction to Data Quality and Its Importance
    02:43 The Rise of Unstructured Data
    05:42 Challenges in Ensuring Data Quality
    08:46 AI's Role in Data Quality Management
    11:30 Human Oversight in AI and Data Quality
    14:47 Opportunities in Data Quality
    17:32 Governance and Regulation in AI
    20:25 Real-World Applications and Case Studies
    23:27 Future of Data Quality and AI
    26:18 Key Takeaways for Leaders

    About Herb Blecher

    Herb leads EMA's Data and Analytics practice. He brings more than two decades of experience building solutions across financial services, data product development, and enterprise analytics. His perspective is shaped by leading national data initiatives for U.S. mortgage servicers and government agencies, as well as by driving product innovation and strategy in fast-moving technology environments. Herb's research spans enterprise data and analytics, including data architecture and platform modernisation, analytics and integration, governance, and AI/ML platforms.

    28 min
  5. Is AI Analytics the Missing Link Between Business Users and Data Teams?

    30 JAN

    Is AI Analytics the Missing Link Between Business Users and Data Teams?

    For years, enterprises have discussed data democratisation as if it were an inevitable end goal. The assumption was that turning on dashboards and training the business would lead naturally to insight. But according to Barry McCardel, Co-Founder and CEO of Hex Technologies, the reality has been much more complicated. In a recent episode of the Don’t Panic! It’s Just Data podcast, McCardel joined host Kevin Petrie, VP Research and Head of Data Management at BARC, to talk about why access alone has never been enough. He also discussed how artificial intelligence (AI) is forcing the analytics community to rethink the purpose of data. The conversation dives into a familiar issue: how can organisations empower non-technical users without compromising data trust or overwhelming the technical teams responsible for it? “We’ve spent a decade pretending the problem was solved by self-service,” McCardel says. “But what we actually did was move complexity around instead of removing it.” As AI becomes part of analytics platforms, that complexity is finally being addressed, along with long-standing beliefs about roles, ownership, and teamwork.

    Addressing the Myth of Data Democratisation

    Tracing many of the analytics issues organisations face today, McCardel points to early self-service BI, which promised that business users could explore data on their own. This was supposed to free analysts and engineers to focus on more important tasks. In reality, the outcome often included duplicated logic, inconsistent metrics, and a widening trust gap between teams. “Access without context is chaos,” McCardel tells Petrie. “If everyone can answer questions, but everyone answers them differently, you haven’t democratised anything; you’ve just created noise.” This issue has grown more urgent as organisations expand. Different roles (data engineers, analysts, data scientists, and business stakeholders) approach data with distinct goals and skills.
    Traditional tools forced everyone into the same interfaces, often designed for one group while ignoring the needs of the others. Petrie notes that many companies responded by adding layers of control, but this approach had drawbacks: stricter guidelines slowed insight generation and pushed business users back into reliance on centralised teams. McCardel argues that the main problem isn’t a lack of governance or tools but a lack of shared understanding. “We’ve treated analytics like a handoff,” he explains. “The data team builds it, the business consumes it. That model doesn’t work when questions are fluid, and decisions are continuous.” He believes AI is revealing the limits of that model and providing a path forward.

    AI Is the Bridge, Not the Shortcut

    While much of the industry conversation about AI in analytics focuses on automation and natural language querying, the CEO of Hex is cautious about viewing AI as a quick fix. “If AI just gives you faster wrong answers, that’s not progress,” he points out. Instead, he presents AI as a bridge that helps different roles collaborate in the same analytical space without flattening their expertise. In this view, AI helps translate: it turns business questions into structured analysis, brings relevant context to the surface, and makes assumptions clear instead of hidden in code. This is where McCardel sees platforms like Hex playing an important role. Instead of separating technical and non-technical users into different tools, Hex is designed to support collaboration within a single environment. Analysts can create rigorous, transparent logic, while business users can interact with the results, ask follow-up questions, and understand how conclusions were reached. “The goal isn’t to turn everyone into a data scientist,” McCardel clarifies.
    “It’s to let each person contribute at their level without breaking the chain of trust.” Trust, he stresses, is essential in modern analytics. As more insights come from AI, organisations will need clear lineage, better validation, and shared visibility into how answers are created. Black-box analytics may be quick, but they are also fragile. “We’re moving away from the idea that insight is a product you deliver,” McCardel added. “It’s a conversation you participate in.” As AI changes analytics workflows, the challenge for organisations won’t be just adopting the technology; it will be redesigning how people collaborate around data. The co-founder of Hex suggests that democratisation was never about removing experts from the process. It was about making expertise visible, accessible, and usable. And that, finally, may be something worth not panicking about.

    Takeaways
    - AI is reshaping the future of data analytics.
    - Data democratisation remains a significant challenge for organisations.
    - Trustworthiness in data outputs is crucial for effective decision-making.
    - Integration of different user personas is essential for collaboration.
    - Organisations can start using analytics tools without perfect data.
    - Expert users can help build trust in data analytics.
    - Natural language interfaces are key to making data accessible.
    - The role of AI in data exploration is becoming increasingly important.
    - Data quality and governance are critical for successful analytics.
    - Successful AI adoption requires a step-by-step approach.
    Chapters
    00:00 Introduction to AI and Data Analytics
    02:54 The Genesis of Hex Technologies
    06:04 Challenges in Data Democratisation
    09:10 AI's Role in Data Exploration
    12:14 Trust and Context in Data Analytics
    15:00 The Evolution of Analytics Tools
    18:10 Integrating Different User Personas
    21:09 The Importance of Contextual Understanding
    23:52 Data Preparation and Governance Challenges
    26:46 Incremental Adoption of AI in Organisations
    29:57 The Human Element in AI Adoption
    32:47 Conclusion and Next Steps for Leaders

    36 min
  6. How To Scale AI in Digital Commerce Effectively

    14 JAN

    How To Scale AI in Digital Commerce Effectively

    Digital commerce teams rarely lack ideas. Most understand how AI, data, and personalisation could improve customer experiences. The problem, as explored in this episode of Don’t Panic! It’s Just Data, is turning those ideas into something that works at scale, in real time, and without slowing the business down. Hosted by Dana Gardner, Principal Analyst at Interarbor Solutions, the discussion brings together Jürgen Obermann, Senior GTM Leader EMEA, and Piotr Kobziakowski, Senior Principal Solutions Architect, from Vespa.ai. Rather than focusing on hype, the conversation centres on the everyday realities of modern e-commerce systems and why progress often feels harder than it should.

    When AI Meets Legacy Digital Commerce

    AI introduces new expectations around speed, relevance, and adaptability, yet many digital commerce platforms are built on foundations designed for a different era. Years of development have resulted in fragmented environments, often based on microservices that once provided flexibility but now introduce complexity. As Jürgen explains, even small changes can trigger long delivery cycles. Engineering teams may need months to safely update systems, not because the ideas are difficult, but because the infrastructure has become fragile.

    Search and Personalisation Are Still Disconnected

    Search is where most e-commerce journeys begin, yet many platforms still rely on keyword-focused approaches that struggle to interpret intent. Customers expect results that reflect who they are, what they want, and why they’re searching. Delivering meaningful personalisation requires systems that combine signals, context, and ranking logic in real time. Without that, experiences remain generic even when data is available.

    Architecture Becomes the Bottleneck

    The conversation then turns to architecture. Traditional search stacks, particularly Lucene-based systems, often hit performance limits when vector operations and advanced ranking are introduced.
    These capabilities tend to be bolted on rather than designed into the core. Piotr highlights a deeper issue: fragmentation. Search, ranking, recommendation, feature stores, and inference engines often live in separate systems. Each integration adds latency, duplicates data, and slows innovation.

    A More Grounded Path Forward

    This episode of Don’t Panic, It’s Just Data offers a calm, practical view of AI in digital commerce. Progress comes not from adding more complexity, but from simplifying how systems work together. When search, personalisation, and recommendation are designed as part of a cohesive whole, digital commerce platforms become easier to evolve and better equipped to serve both customers and the business. For more insights into modern search architectures and AI-native commerce platforms, visit Vespa.ai.

    Takeaways

    - Many teams see the potential of AI, but face practical blockers.
    - E-commerce companies struggle with operational, customer experience, and business challenges.
    - AI technologies enable sophisticated personalised search experiences.
    - Architectural bottlenecks often hinder e-commerce systems’ performance.
    - AI-native architectures can significantly improve search capabilities.
    - Real-time personalisation is crucial for enhancing user experience.
    - Separate systems for search and recommendations create inefficiencies.
    - Phased migration is essential for transitioning from legacy systems.
    - AI’s impact on revenue can be profound when implemented effectively.
    - Vespa is a comprehensive platform that integrates various functionalities.
    Chapters

    00:00 Introduction to AI-Driven Search in E-Commerce
    01:38 Challenges in Adopting AI for Digital Commerce
    04:02 Architectural Bottlenecks in E-Commerce Systems
    07:39 Designing AI-Native Search Architectures
    12:00 Advancements in Personalisation for E-Commerce
    16:21 Inefficiencies of Separate Search and Vector Systems
    19:24 Phased Migration to AI-Native Platforms
    21:51 Business Implications of AI in Search
    23:57 Advice for Technical Leaders in E-Commerce

    About Vespa.ai

    Vespa.ai is an AI search platform designed for building and operating large-scale, real-time applications. It brings together big data processing, vector search, machine-learned ranking, and real-time inference within a single system, enabling teams to deliver intelligent search, recommendation, and retrieval-augmented generation (RAG) at enterprise scale. With native tensor support, Vespa allows complex ranking and decision logic to run directly in production, rather than being bolted on as separate services. This architecture reduces latency, simplifies system design, and makes it easier to evolve AI-driven applications as data, models, and business needs change.
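The idea of computing keyword relevance and vector similarity in one ranking pass, rather than stitching results across separate systems, can be sketched in plain Python. This is an illustrative toy, not Vespa's API; the sample documents, embeddings, and the `hybrid_score` weighting are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(doc, query_terms, query_vec, alpha=0.5):
    """Blend a keyword match score with semantic similarity in a single pass."""
    text_terms = set(doc["text"].lower().split())
    keyword = len(text_terms & set(query_terms)) / max(len(query_terms), 1)
    semantic = cosine(doc["embedding"], query_vec)
    return alpha * keyword + (1 - alpha) * semantic

# Hypothetical product catalogue with precomputed embeddings.
docs = [
    {"id": "a", "text": "organic vitamin c tablets", "embedding": [0.9, 0.1, 0.0]},
    {"id": "b", "text": "running shoes men", "embedding": [0.1, 0.9, 0.2]},
]
query_terms = ["vitamin", "c"]
query_vec = [1.0, 0.0, 0.1]  # embedding of the query, computed upstream

ranked = sorted(docs, key=lambda d: hybrid_score(d, query_terms, query_vec),
                reverse=True)
print([d["id"] for d in ranked])  # 'a' wins on both keyword and vector signals
```

Because both signals are scored against the same in-memory documents, there is no cross-service round-trip per result, which is the latency argument the episode makes for fusing search and ranking in one engine.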

    25 min
  7. The Modern CFO is the Product Owner of Data

    13 JAN

    The Modern CFO is the Product Owner of Data

    In this episode of the Don’t Panic, It’s Just Data podcast, Shubhangi Dua, Podcast Producer and B2B Tech Journalist at EM360Tech, reports on a podcast recorded live in London. Guest speaker Pavel Doležal, CEO of Keboola, sits down with Vineta Bajaj, Group CFO of Holland & Barrett. They get specific about how modern finance leaders move faster: start with one governed source of truth, then layer automation, and only then AI.

    They explore how the CFO role is evolving, from reporting numbers to also owning the non-financial “whys” behind them. In the age of the AI boom, that shift turns every CFO into a product owner of data. But as Pavel Doležal puts it, without a clean, connected foundation, AI is just noise. According to Vineta Bajaj, the role of the CFO has fundamentally changed. Today’s CFO must act as a product owner for data, not just owning the numbers but also determining how data is defined, structured, and used throughout the business.

    Finance and Data: A Complete Product

    Drawing on her experience with Ocado Group, Rohlik Group (one of the fastest-growing online grocery businesses in the world), and now Holland & Barrett, Bajaj points out that financial problems persist across organisations. Issues such as slow month-end closes, duplicated processes, delayed reporting, and limited decision-making speed are still common. These challenges are even greater in complex businesses that operate across multiple entities and countries, where differing charts of accounts, outsourced finance teams, and fragmented systems create added friction. Bajaj stresses that the answer isn’t to “add another tool”. CFOs should treat finance and data as a complete product, one that serves the business as its customer. This requires understanding finance processes, clearly defining financial and non-financial data, and prioritising what has the greatest impact on the business.
    The Holland & Barrett CFO further emphasises that CFOs cannot pass this responsibility off to IT or BI teams. When data ownership sits outside finance, it becomes someone else’s problem. However, when finance takes ownership of master data and its definitions while working closely with commercial and operational teams, it creates a single source of truth that the entire organisation can trust.

    Also Watch: The Real Future of Data Isn’t AI — It’s Contextual Automation

    How to Build the Foundation for Real-Time Financial Intelligence & AI

    Analytics, automation, and AI only work if the foundations are solid. Before adding AI assistants or real-time dashboards, CFOs must ensure that finance processes are clean, standardised, and automated. Poorly coded purchase orders, late journal entries, and inconsistent definitions can undermine even the most advanced technology. At Holland & Barrett, this perspective led Bajaj to create a dedicated data function within finance, one that ensures accountability for master data, definitions, and governance. The aim is not just to speed up reporting, but to gain deeper insights by linking financial outcomes with non-financial factors such as foot traffic, pricing, customer behaviour, and external influences like weather. This integrated viewpoint allows finance teams to go beyond explaining variances and focus on the key business questions: why performance changed, and what happens next. It also opens up self-service analytics, reducing reliance on central BI teams and enabling decision-makers to act in real time.

    Bajaj views AI as a powerful tool, but not a shortcut. It forces organisations to address long-standing data and process issues quickly. When data is well-defined and trusted, AI can facilitate scenario modelling, forecasting, and faster decision-making. Without proper discipline, AI merely adds to the confusion. Ultimately, the future CFO must take an active role.
    They should engage with data, map out processes, ask difficult questions, and create a clear plan. Those who do will move faster than traditional finance models allow and help their organisations thrive in an AI-driven future.

    Key Takeaways

    - The CFO’s role is evolving from reporter to product owner of data.
    - Slow month-end and fragmented processes block fast decision-making.
    - Finance must own data definitions to create a single source of truth.
    - Financial and non-financial data must be connected to explain the “why.”
    - AI only delivers value when financial data and processes are already clean.

    Chapters

    00:00 Introduction: The Modern CFO and Data
    01:05 What is the New Role of the CFO?
    03:34 The 3 Biggest Problems in Finance (Month-End, Reporting, Decisions)
    06:21 Why Every CFO is a Product Owner of Data
    09:37 Data Ownership: Should Master Data Sit in Finance or IT?
    13:30 The 3 Steps to Unleash Data Power (Process, Standardisation, Data Lake)
    17:08 How AI is Forcing Speed and Change in Finance
    20:25 The Future: Keboola’s AI Assistant Roadmap
    21:36 Wrap-up and Final Thoughts

    23 min
  8. Responsible AI Starts with Responsible Data: Building Trust at Scale

    11/12/2025

    Responsible AI Starts with Responsible Data: Building Trust at Scale

    We live in a world where technology moves faster than most organisations can keep up. Every boardroom conversation, every team meeting, even casual watercooler chats now include discussions about AI. But here’s the truth: AI isn’t magic. Its promise is only as strong as the data that powers it. Without trust in your data, AI projects will be built on shaky ground.

    In this episode of the Don’t Panic, It’s Just Data podcast, Amy Horowitz, Group Vice President of Solution Specialist Sales and Business Development at Informatica, joins moderator Kevin Petrie, VP of Research at BARC, to tackle one of the most pressing topics in enterprise technology today: the role of trusted data in driving responsible AI. Their discussion goes beyond buzzwords to focus on actionable insights for organisations aiming to scale AI with confidence.

    Why Responsible AI Begins with Data

    Amy opens the conversation with a simple but powerful observation: “No longer is it okay to just have okay data.” This sets the stage for understanding that AI’s potential is only as strong as the data that feeds it. Responsible AI isn’t just about implementing the latest algorithms; it’s about embedding ethical and governance principles into every stage of AI development, starting with data quality. Kevin and Amy emphasise that organisations must treat data not as a byproduct, but as a foundational asset. Without reliable, well-governed data, even the most advanced AI initiatives risk delivering inaccurate, biased, or ineffective outcomes.

    Defining Responsible AI and Data Governance

    Responsible AI is more than compliance or policy checkboxes. As Amy explains, it is a framework of principles that guide the design, development, deployment, and use of AI. At its core, it is about building trust, ensuring AI systems empower organisations and stakeholders while minimising unintended consequences. Responsible data governance is the practical arm of responsible AI.
    It involves establishing policies, controls, and processes to ensure that data is accurate, complete, consistent, and auditable.

    Prioritise Data for Responsible AI

    The takeaway from this episode is clear: responsible AI starts with responsible data. For organisations looking to harness AI effectively:

    - Invest in data quality and governance — it is the foundation of all AI initiatives.
    - Embed ethical and legal principles in every stage of AI development.
    - Enable collaboration across teams to ensure transparency, accountability, and usability.
    - Start small, prove value, and scale — responsible AI is built step by step.

    Amy Horowitz’s insight resonates beyond the tech team: “Everyone’s ready for AI — except their data.” It’s a reminder that AI success begins not with the algorithms, but with the trustworthiness and governance of the data powering them. For more insights, visit Informatica.

    Takeaways

    - AI is only as good as its data inputs.
    - Data quality has become the number one obstacle to AI success.
    - Organisations must start small and find use cases for data governance.
    - Hallucinations in AI models highlight the need for vigilant data oversight.
    - Reputational damage from AI failures can be severe for organisations.
    - Metadata plays a crucial role in data management and governance.
    - Collaboration between data, AI, and development teams is essential.
    - Data governance is a must-have, not a nice-to-have.
    - Organisations need to enable their lines of business for effective AI implementation.
    - Everyone is ready for AI, except for the quality of their data.
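The governance controls described above, checking that data is complete, consistent, accurate, and auditable, can be illustrated with a minimal validation sketch. Every field name, rule, and the `validate` helper here is a hypothetical example, not Informatica's API.

```python
from datetime import datetime, timezone

# Hypothetical per-field rules: completeness, consistency, and an accuracy bound.
RULES = {
    "customer_id": lambda v: isinstance(v, str) and v != "",          # completeness
    "country":     lambda v: v in {"GB", "DE", "US"},                 # consistency
    "revenue":     lambda v: isinstance(v, (int, float)) and v >= 0,  # accuracy
}

def validate(records):
    """Return failing (row_index, field) pairs plus a simple audit-log entry."""
    failures = [
        (i, field)
        for i, row in enumerate(records)
        for field, rule in RULES.items()
        if not rule(row.get(field))
    ]
    audit = {
        "checked_at": datetime.now(timezone.utc).isoformat(),  # auditability
        "rows": len(records),
        "failures": len(failures),
    }
    return failures, audit

records = [
    {"customer_id": "C-1", "country": "GB", "revenue": 120.0},
    {"customer_id": "",    "country": "FR", "revenue": -5},
]
failures, audit = validate(records)
print(failures)  # row 1 fails all three rules
```

The point of the sketch is the shape of the control, not the rules themselves: each check is declared once, applied uniformly, and every run leaves an auditable record of what was checked and how much failed.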
    Chapters

    00:00 The Importance of Responsible AI and Trusted Data
    02:49 Defining Responsible AI and Data Governance
    05:40 Challenges in Data Quality and Governance
    08:51 Real-World Examples of Data Quality Issues
    11:51 The Role of Employees in Data Governance
    14:41 Successful AI Outcomes Through Responsible Data Practices
    17:42 The Risks of AI Governance and Reputational Damage
    20:42 Collaboration Across Data, AI, and Development Teams
    23:34 The Future of Metadata and Data Management
    26:42 Key Takeaways for Data and AI Leaders

    About Informatica

    Informatica, founded in 1993, is an enterprise data management company headquartered in Redwood City, California. The company provides software products for data integration, data quality, master data management, and data governance. With approximately 9,000 global customers across various industries, Informatica has positioned itself as a significant player in the data management market.

    26 min

Ratings & Reviews

5 out of 5 (2 Ratings)
