The Data Edge: Data Quality & AI Readiness

Stephanie Wiechers & Erwin de Werd

Welcome to the Pearstop podcast series on data management, where experts Stephanie Wiechers and Erwin de Werd dive into the world of data quality, standardization, and the real-world value of information in technical industries. From procurement and facility management to hard services and large-scale manufacturing, we explore how 'messy' data can cost organizations millions—and how to fix it. Join us as we break down complex topics like enterprise-level standardization and Microsoft Fabric into concrete, actionable steps. Whether you're a CEO, an asset manager, or a bid specialist, this series provides the insights you need to turn your data into a fuel for smart decision-making and AI readiness. Don't let your data work against you—learn how to make it your greatest competitive advantage.

Episodes

  1. AI & Data Standards

    17 HR AGO

    AI & Data Standards

    Building a Reliable Data Layer: Insights from "The Data Edge" Podcast

    In this episode of The Data Edge, Erwin de Werd and guest Stephanie Wiechers explore the critical aspects of data quality, standardization, and data movement for organizations aiming to leverage AI and advanced analytics effectively. They discuss practical challenges and strategic considerations for companies of all sizes seeking to build trustworthy, scalable data infrastructure.

    Main Topics:
    ✔ The increasing importance of data quality and reliability in AI applications
    ✔ Challenges in creating and trusting dashboards due to data flaws
    ✔ How data movement between systems influences decision-making and analytics
    ✔ The role of standardization in cross-entity data sharing and efficiency
    ✔ Trends and best practices for adopting data standards and improving data governance
    ✔ The impact of AI tools like Copilot on data analysis and development
    ✔ Strategies for smaller businesses to align with industry standards despite resource constraints

    Timestamps:
    00:00 - Introduction and overview of data quality challenges in AI development
    00:30 - The surge in democratized data analysis and its responsibilities
    01:34 - Risks of trusting dashboards with potential data flaws
    03:07 - The importance of data reliability for decision-making
    04:13 - Moving data across systems to enable advanced analytics
    05:18 - The significance of data standardization in different industries
    06:34 - How data lakes and recent platforms support data integration
    07:45 - The role of data quality as a foundation for dashboards and AI models
    08:26 - Standardization trends and industry-specific norms
    09:13 - Cost considerations and strategic choices in implementing standards
    10:27 - Challenges and strategies for smaller companies adopting standards
    11:48 - Practical steps for transitioning from non-standard to standardized data
    12:18 - Industry standards like UNSPSC and industry-specific frameworks
    13:25 - The strategic value of standardization for cost savings and operational efficiency
    14:09 - Use cases in procurement and spend analysis
    15:13 - The growing importance of data quality and standardization in analytics
    16:02 - Final thoughts and future topics

    Resources & Links:
    • UNSPSC (United Nations Standard Products and Services Code) – Industry-standard classification for products and services
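The UNSPSC standard is hierarchical: each 8-digit code encodes segment, family, class, and commodity as two-digit pairs, so broader levels can be derived from any full code. A minimal Python sketch of decoding that hierarchy (the example code `43211503` is illustrative, not an authoritative mapping):

```python
def unspsc_levels(code: str) -> dict:
    """Split an 8-digit UNSPSC code into its four hierarchy levels.

    Each level zero-pads the remaining digits, which is how broader
    segment/family/class codes are conventionally written.
    """
    if len(code) != 8 or not code.isdigit():
        raise ValueError("UNSPSC codes are 8 digits")
    return {
        "segment":   code[:2] + "000000",
        "family":    code[:4] + "0000",
        "class":     code[:6] + "00",
        "commodity": code,
    }

# Example (hypothetical code for illustration):
print(unspsc_levels("43211503"))
```

Grouping spend data by the derived segment or family code is the typical first step in the procurement and spend-analysis use cases mentioned in this episode.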

    16 min
  2. Success Factors for AI

    2 APR

    Success Factors for AI

    Unlocking the Power and Pitfalls of AI and Data Management

    In this episode of The Data Edge, Erwin de Werd and Stephanie Wiechers explore how AI can transform data management from a headache into a strategic advantage, if used wisely. They discuss the pitfalls of overhyped AI solutions, the importance of building robust systems, and practical steps to improve data quality.

    Key Topics:
    The proliferation of AI "skills" and why over 90% are ineffective
    How automation, when done properly, enhances data quality and operational efficiency
    The challenge of discerning quality in AI tools and avoiding superficial solutions
    Practical examples of AI in lead generation (Dream 100 strategy) and content creation
    How to build trust in AI-driven data solutions amidst industry hype
    The importance of authentic, human-centered communication in AI content
    The distinction between front-end conversation and back-end automation in data management
    Planning for a future where AI and data quality ensure better decision-making

    Timestamps:
    00:00 - Introduction: Transforming data management with AI
    00:30 - Why most AI skills are ineffective and what they entail
    01:25 - Explanation of skills as standard operating procedures (SOPs)
    02:24 - The explosion of AI skills on platforms like Instagram and their usability
    03:20 - The common problem of people not doing the work when using AI tools
    03:50 - Strategic laziness: automating repetitive tasks with quality checks
    04:32 - Pitfalls of trusting AI outputs without proper validation
    04:57 - Challenges in training AI models to produce accurate, high-quality content
    05:44 - Limitations of custom GPTs in professional tasks like LinkedIn content
    06:22 - The importance of investing effort upfront to create effective automation systems
    06:47 - Why cost savings lead to underinvestment in AI automation
    07:34 - Challenges of relying on incomplete or careless prompts
    07:45 - The habit of short-input prompts and the impact on output quality
    08:13 - Building outreach strategies with AI: the Dream 100 example
    08:51 - Automating research and outreach to generate leads efficiently
    09:35 - Using AI to identify influencers and industry events for strategic networking
    10:58 - The need for consistency and authenticity in AI-generated content
    12:04 - How good copywriters leverage AI as a starting point, not a replacement
    12:51 - Authenticity remains crucial despite the efficiency gains from AI
    13:17 - Connecting AI automation in data management with operational layers of business
    14:09 - The importance of backend automation for data quality and integrity
    15:14 - Trust issues in procurement and other industries regarding AI promises
    16:26 - The hype versus reality of AI solutions, and the upcoming industry shakeout
    17:08 - Final thoughts: Deepening the conversation in future episodes

    17 min
  3. AI & Human Collaboration

    26 MAR

    AI & Human Collaboration

    🎙️ The Data Edge: Data Quality in AI Projects

    In this episode, Erwin and Stephanie delve into the complexities of data quality in AI projects, emphasizing that messy data often leads to costly mistakes. They explore how human-AI collaboration and understanding the limitations of models like LLMs are crucial for success.

    🔑 Key Topics
    The common misconception that first data categorization is 100% accurate, and why errors are part of the process
    The reality of achieving high data quality and near automation (up to 95%) in data processing
    Expectations vs. reality: Why clients sometimes expect AI to be a 'magic bullet' and how to set realistic goals
    The importance of contextual knowledge and communication to improve model accuracy
    Methodologies for training AI models as 'new employees', including leveraging human expertise and internal knowledge
    A real-world construction project: data categorization challenges, including language issues (tablets as lozenges)
    Differentiating LLMs like ChatGPT from specialized machine learning models
    The role of human-AI cooperation in improving data quality and operational efficiency
    Creating a knowledge center for clients through ongoing data training and model refinement
    The value of building IP within organizations by developing tailored data solutions and models

    ⏱️ Timestamps
    00:00 Introduction: The impact of messy data on industry costs
    00:30 Setting the stage: From data quality to correction hiccups
    01:14 Why initial categorization often isn't perfect, and why that's normal
    02:02 The misconception of AI producing perfect results immediately
    02:50 Achieving high data quality and near automation possibilities
    03:17 Managing client expectations around AI and data processing
    04:05 Importance of communication about processes and contextual insights
    05:14 When models don't perform as expected: Training methodologies
    05:45 Example project in construction: Data categorization challenges
    06:47 Using dashboards to identify and fix misclassified data
    08:11 Language nuances affecting classification (e.g., tablets as lozenges)
    08:58 Differences between LLMs like ChatGPT and task-specific ML models
    10:16 The core distinction: General language models vs. specialized models
    12:11 Why consistency and rule-based training are vital
    13:24 Human-AI collaboration enhancing data accuracy
    14:02 Implementing biases and industry knowledge to improve models
    15:19 Building an organization's IP through data and model development
    16:21 Potential for transparency: Sharing system rules with clients
    17:05 Recap: Differentiating AI types and combining human expertise
    18:18 Closing: Key takeaways on data, AI, and IP in projects

    19 min
  4. How to reach 95% Data Quality

    13 MAR

    How to reach 95% Data Quality

    Ensuring Data Quality in AI Projects: A Conversation with Stephanie Wiechers

    In this episode, Erwin de Werd and Stephanie Wiechers explore the crucial role of data quality in AI and data projects. They discuss practical approaches to maintain high accuracy, the challenges of testing AI with AI, and the importance of human oversight to achieve reliable results.

    Key Topics:
    The impact of messy data on AI output and decision-making
    Strategies for achieving 95% data accuracy for automation
    The process of data enhancement using AI and rule-based systems
    Testing AI models: AI-to-AI vs. human review approaches
    Cost and time considerations in data quality verification
    The ongoing progress: from 85% to over 95% accuracy
    The collaborative role of humans and AI in data validation
    Future outlook: the importance of human involvement for reliable AI

    Timestamps:
    00:00 - Introduction: How messy data costs industries billions
    00:41 - Importance of data quality in AI and reporting
    01:25 - Common issues with data errors impacting insight generation
    02:17 - Automating error detection and correction in databases
    02:58 - Client quality expectations and the 95% accuracy benchmark
    03:26 - Achieving and validating 95% accuracy in AI models
    04:01 - Using AI and internal rules for data enhancement
    04:41 - Challenges of testing AI with AI and the need for human validation
    05:56 - The risk of relying solely on AI for quality checks
    06:37 - Human review as a reliable fallback
    07:03 - The four-step process for data validation
    08:25 - The iterative role of human review and AI learning
    09:06 - Balancing internal and outsourced validation efforts
    10:17 - Outsourcing testing versus internal validation challenges
    11:13 - Current progress: surpassing 85% accuracy
    12:00 - Upcoming guest episode and future projects

    Resources & Links:
    Pearstop
    Connect with Stephanie Wiechers: LinkedIn

    Note: Stay tuned for our next episode featuring a special guest from the field discussing real-world data projects and best practices.
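The 95% benchmark discussed in this episode implies a concrete check: compare model output against a human-reviewed sample and only automate when the match rate clears the threshold. A hedged sketch of that check (the labels, records, and threshold are hypothetical illustrations, not Pearstop's actual method):

```python
def accuracy(predicted: list, reviewed: list) -> float:
    """Share of predictions that match the human-reviewed labels."""
    matches = sum(p == r for p, r in zip(predicted, reviewed))
    return matches / len(reviewed)

def ready_for_automation(predicted: list, reviewed: list,
                         threshold: float = 0.95) -> bool:
    """True when the sample accuracy clears the automation threshold."""
    return accuracy(predicted, reviewed) >= threshold

# Hypothetical categorization sample: the model disagrees with the
# human reviewer on one record out of five (80% accuracy).
model_labels = ["valve", "pump", "valve", "filter", "pump"]
human_labels = ["valve", "pump", "valve", "filter", "seal"]

print(accuracy(model_labels, human_labels))            # 0.8
print(ready_for_automation(model_labels, human_labels))  # False
```

In practice the review sample would be drawn at random and re-measured as the model is retrained, which is how progress from 85% toward 95% can be tracked over time.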

    13 min
  5. The Human Factor in AI-Driven Procurement Data Management

    25 FEB

    The Human Factor in AI-Driven Procurement Data Management

    The Human Factor in AI-Driven Procurement Data Management

    In this episode, Erwin de Werd and Stephanie Wiechers explore the critical interplay between human expertise and AI in ensuring data integrity and standardization within procurement processes. Discover how organizations leverage AI to enhance categorization accuracy, streamline validation, and safeguard sensitive information.

    Key Topics
    The importance of human input in AI-driven data categorization
    Challenges of enterprise-level procurement data standardization
    Combining rule-based systems with machine learning models for enhanced accuracy
    The role of the validation process in ensuring data quality
    Leveraging large language models (LLMs) for granular categorization
    How ongoing user feedback refines AI performance over time
    Data security policies and anonymization in AI training
    Practical steps for integrating AI with existing procurement workflows
    The future of collaborative man-machine approaches in enterprise data management

    Timestamps
    00:00 - Introduction to the role of data quality in AI and enterprise decision-making
    00:42 - The importance of the human factor in AI projects
    01:37 - Case study: Procurement data integrity challenge in a large organization
    02:51 - Standardization challenges across multiple sites and teams
    03:44 - AI complexities in categorizing diverse invoice costs
    04:48 - Systemizing procurement data processes through AI and human insights
    05:42 - Combining rules and machine learning for improved categorization
    07:00 - Utilizing large language models for granular and flexible data classification
    08:54 - Automating validation and review processes within AI systems
    11:04 - Achieving high accuracy through training and feedback loops
    12:19 - Validation workflows involving multiple departmental reviews
    13:55 - Sharing and securing enterprise data in AI applications
    15:02 - The balance between data sharing and confidentiality in AI training
    16:16 - Ensuring compliance with corporate data and security policies
    17:01 - The evolving collaboration between humans and AI in procurement
    17:17 - Upcoming series: Field insights from client interviews

    Connect with Stephanie Wiechers: LinkedIn
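One way to picture the combination of rule-based systems and machine learning models discussed in this episode is a "rules first, model as fallback" pipeline: deterministic keyword rules catch the unambiguous invoice lines, and everything else falls through to a trained classifier. A minimal sketch with hypothetical rules and a stubbed model (not the actual Pearstop implementation):

```python
# Hypothetical keyword -> category rules; real systems would hold many
# more and typically match on supplier, GL account, and description.
RULES = {
    "invoice": "finance",
    "maintenance": "hard services",
    "laptop": "it hardware",
}

def ml_fallback(text: str) -> str:
    """Stand-in for a trained classifier (e.g. an ML model or LLM)."""
    return "uncategorized"

def categorize(text: str) -> str:
    """Apply deterministic rules first; defer to the model otherwise."""
    lowered = text.lower()
    for keyword, category in RULES.items():
        if keyword in lowered:
            return category
    return ml_fallback(text)

print(categorize("Monthly maintenance contract"))  # hard services
```

The appeal of the pattern is that rule hits are fully auditable and consistent, while the model only has to handle the long tail, which keeps validation and user-feedback loops focused where they matter.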

    17 min
  6. Unlocking Predictive Maintenance: A Guide

    12 FEB

    Unlocking Predictive Maintenance: A Guide

    Summary
    In this conversation, Erwin de Werd and Stephanie Wiechers discuss the complexities and actionable steps involved in predictive maintenance. They explore how technology has evolved to enable predictive maintenance, the benefits it offers in terms of operational efficiency and cost reduction, and the challenges companies face in managing data quality. Stephanie emphasizes the importance of a clean database and the role of AI in improving data management practices, ultimately guiding companies towards effective predictive maintenance strategies.

    Takeaways
    Predictive maintenance allows for smarter scheduling and planning.
    Technology advancements have made predictive maintenance more feasible.
    Data quality is crucial for effective predictive maintenance.
    Companies can reduce downtime by anticipating maintenance needs.
    A clean database is essential for accurate predictive maintenance.
    Quality assurance checks help maintain data integrity.
    AI can automate data cleaning and improve accuracy.
    Understanding asset lifecycle can optimize maintenance strategies.
    Predictive maintenance can lead to cost savings in parts procurement.
    Initial assessments are key to implementing predictive maintenance.

    Sound Bites
    "We wish it was that straightforward."
    "Reduce the amount of downtime."
    "Save hours on every service call."

    Chapters
    00:00 Introduction to Predictive Maintenance
    02:59 The Evolution of Predictive Maintenance
    05:56 Benefits of Predictive Maintenance
    08:58 Challenges in Data Management
    11:50 Technological Solutions for Data Quality
    14:49 Getting Started with Predictive Maintenance

    14 min
  7. Unlocking Data Potential (1/3)

    28 JAN

    Unlocking Data Potential (1/3)

    Summary
    This episode delves into the intricacies of data management and its pivotal role in driving business success. Experts Stephanie Wiechers and Erwin de Werd from Pearstop discuss the importance of data quality, the value it adds, and practical insights into managing data effectively. The conversation highlights the foundational layers necessary for data management and explores real-world examples of how data can be leveraged to achieve organizational goals.

    Keywords
    data management, data quality, business success, Pearstop, data insights

    Takeaways
    Data management is crucial for business success.
    Understanding data quality can drive value.
    A good data baseline opens up numerous possibilities.
    Data is the fuel for many business operations.
    Effective data management requires foundational layers.
    Data quality ensures accurate and actionable insights.
    Real-world examples illustrate data's impact.
    Data management involves both technical and strategic aspects.
    Pearstop experts share practical insights on data.
    The series explores data management in depth.

    Title Options
    Unlocking the Power of Data
    Mastering Data Management
    Data Insights for Success
    The Art of Data Quality
    Driving Value with Data
    Data Management Essentials
    Exploring Data Potential
    Data Strategies for Growth
    The Future of Data Management
    Data-Driven Business Success

    Sound Bites
    Data management is crucial.
    Unlocking data's potential.
    Data is the fuel.
    Quality data drives value.
    Foundational layers are key.
    Real-world data insights.
    Data management essentials.
    Strategic data use.
    Data quality matters.
    Driving success with data.

    Chapters
    00:00:20 Introduction to Data Management
    00:00:33 Importance of Data Quality
    00:01:16 Real-World Data Insights
    00:01:55 Foundational Layers of Data
    00:03:10 Data as Business Fuel
    00:03:56 Series Overview and Future Topics

    18 min
  8. Data Management (2/3)

    28 JAN

    Data Management (2/3)

    Summary
    In this episode, we explore the critical aspects of data management, enterprise standardization, and AI readiness. The discussion highlights the importance of having a reliable data foundation to leverage AI tools effectively, with insights into the trends and challenges faced by organizations in 2026.

    Keywords
    data management, enterprise standardization, AI readiness, data quality, Fabric migrations

    Takeaways
    Data management is crucial for enterprise success.
    Standardization enhances data quality and reliability.
    AI readiness requires a solid data foundation.
    2026 marks a shift towards reliable data layers.
    Fabric migrations are becoming more common.
    Organizations must focus on data quality for AI.
    AI tools need trustworthy data inputs.
    Data standardization supports organizational goals.
    Reliable data layers open new opportunities.
    AI readiness is a key trend for the future.

    Title Options
    Mastering Data Management for AI Success
    The Future of Enterprise Standardization
    AI Readiness: Building a Solid Data Foundation
    2026: The Year of Data Quality
    Fabric Migrations: A Growing Trend
    Why Data Management Matters
    Standardization: The Key to Reliable Data
    AI Tools and the Need for Quality Data
    Unlocking Opportunities with Reliable Data
    Preparing for AI: The Importance of Data

    Sound Bites
    Data management is crucial.
    Standardization enhances reliability.
    AI needs a solid foundation.
    2026 marks a data shift.
    Fabric migrations are rising.
    Focus on data quality.
    AI tools need trustworthy data.
    Standardization supports goals.
    Reliable data opens opportunities.
    AI readiness is key.

    Chapters
    00:01:11 Introduction to Data Management
    00:02:26 Understanding Data Quality and Standardization
    00:10:58 The Importance of a Reliable Data Layer
    00:11:26 AI Readiness and Future Trends

    15 min
