Industry40.tv

Kudzai Manditereza

Each episode of the Industry40.tv Podcast treats you to an in-depth interview with a leading AI practitioner, exploring the application of artificial intelligence in manufacturing and offering practical guidance for successful implementation.

  1. 6 days ago

    Scaling Agentic AI Workflows in Manufacturing with Causal AI: Bernhard Kratzwald - Co-Founder & CTO, EthonAI

    ## Episode: Building and Scaling Agentic AI Workflows in Manufacturing

    **Podcast Name:** AI in Manufacturing Podcast
    **Episode Title:** How to Build and Scale Agentic AI Workflows in Manufacturing
    **Guest:** Bernhard Kratzwald, Co-Founder & CTO at EthonAI
    **Host:** Kudzai Manditereza

    ---

    ## Episode Summary

    This episode explores how manufacturers can build and scale agentic AI workflows to achieve operational excellence across factories. Bernhard Kratzwald, Co-Founder and CTO at EthonAI, explains why traditional continuous improvement methods have reached their limits and how purpose-built industrial AI—grounded in process knowledge graphs and causal reasoning—unlocks the next wave of manufacturing optimization. Key insights include why deep data contextualization through knowledge graphs is essential for agentic AI (not just basic tag hierarchies), how causal AI differs from correlation-based analytics by making root cause findings actionable, and why a layered architecture of data infrastructure, specialized model layer, and application layer prevents hallucinated recommendations in safety-critical environments. Bernhard also shares real-world results, including a globally scaled deployment at Siemens that generated over $10 million in documented savings. Whether you're evaluating industrial AI platforms or architecting your data stack for agentic workflows, this episode provides a practical roadmap from data ingestion to autonomous process control.

    ---

    ## Key Questions Answered in This Episode

    - What is a process knowledge graph, and why is it essential for agentic AI in manufacturing?
    - How does causal AI differ from correlation-based analytics in industrial settings?
    - What architecture layers are needed to run agentic AI workflows reliably in manufacturing?
    - Why can't general-purpose LLMs like ChatGPT or Claude replace purpose-built industrial AI models?
    - How do you build a knowledge graph iteratively without delaying ROI?
    - What does a typical deployment timeline look like for industrial AI platforms?
    - How should manufacturers handle security and governance when connecting OT systems to cloud-based AI?

    ---

    ## Episode Highlights with Timestamps

    **[2:27]** – **Bernhard's Background & EthonAI Origin Story** — How a PhD in computer science and collaboration with Fortune 500 manufacturers like Siemens led to founding EthonAI, now approaching 100 employees with offices in Zurich and New York.

    **[4:24]** – **Why Traditional Methods Have Maxed Out** — Bernhard explains the "20 cents of every dollar goes to waste" principle and why classic automation and data science have hit diminishing returns, requiring agentic workflows and foundation models for the next improvement frontier.

    **[7:49]** – **What Deep Contextualization Really Means** — A detailed walkthrough of why basic UNS tag hierarchies aren't sufficient for agentic AI, using the example of tracing a batch rework problem across tanks, recipes, time series, and operator interventions.

    **[12:45]** – **Process Knowledge Graph Explained** — Bernhard defines ontologies and knowledge graph triples, showing how semantic meaning enables questions like "which five machines cost the most downtime today" versus simple tag queries.

    **[16:02]** – **Build the Graph First or Build the Application First?** — The chicken-and-egg debate on knowledge graph strategy, and why EthonAI chose to build the graph behind ROI-delivering applications rather than creating a monolithic model upfront.

    **[18:16]** – **Causal AI vs. Correlation Analytics** — The ice cream and shark attacks analogy applied to manufacturing: how causal models turn seasonal production correlations into actionable insights about cooling water temperature adjustments.
    **[21:28]** – **The Full Agentic AI Architecture Stack** — Bernhard outlines three layers: data infrastructure (connectivity + knowledge graph), model layer (purpose-built causal and inspection models), and application layer (agentic workflows or human interfaces).

    **[24:54]** – **Why General-Purpose LLMs Aren't Enough for Manufacturing** — Safety-critical environments require models that understand spec limits, user manuals, and process constraints—not just pattern-matched text generation.

    **[29:33]** – **EthonAI Platform Walkthrough** — A modular enterprise platform that measures what's happening, understands why, suggests improvement actions, and enables autonomous process control through dynamic SOPs and centerline dashboards.

    **[37:19]** – **Causal AI's Medical Origins Applied to Manufacturing** — How treating a production process like a patient (healthy or sick) allows causal models to extract actionable knowledge from months of operator interventions and process adjustments.

    **[48:03]** – **Deployment Timeline and Forward Deployed Engineers** — EthonAI's Palantir-inspired deployment model with on-site engineers, achieving first value consistently in under three months.

    **[51:17]** – **Case Studies: Siemens and Lindt & Sprüngli** — Globally scaled deployments with $10M+ documented savings at Siemens (published by the World Economic Forum) and significant waste reductions at Lindt & Sprüngli's chocolate production facilities.

    ---

    ## Key Takeaways

    - **Knowledge graphs are non-negotiable for agentic AI:** A unified namespace provides basic tag context, but agentic workflows require deep semantic relationships—connecting batches to recipes, tanks to flow paths, and time series to operator interventions. Without this ontology layer, AI agents cannot perform meaningful root cause investigation.
    - **Causal AI makes insights actionable, not just interesting:** Correlation analytics can tell you production runs better in winter, but causal AI identifies that lower feeding water temperature improves cooling behavior, giving operators a specific lever to pull in summer months. This distinction is critical for safety-critical environments where recommendations must be trustworthy.

    - **Purpose-built industrial models prevent hallucination in critical decisions:** By placing a specialized causal model layer between the data infrastructure and the agentic application layer, recommendations are grounded in verified causal relationships rather than LLM pattern matching. The agentic layer enriches these findings with SOPs and documentation but cannot fabricate the underlying analysis.

    - **Start with ROI-delivering applications, not infrastructure perfection:** Rather than building a complete knowledge graph before deploying AI, EthonAI's approach builds the graph incrementally behind applications that deliver measurable value. Users often don't realize they're building a knowledge graph because they're simply modeling their data while getting returns.

    - **Change management is as important as the technology:** Operators and process engineers have solved problems for decades without data-driven tools. AI systems must explain their reasoning through causal chains, build trust incrementally, and integrate into existing workflows without adding friction—even one extra second per task multiplied across thousands of repetitions creates significant resistance.

    - **Security requires one-way data flow by design:** When connecting legacy OT systems (some 20-30 years old) to cloud AI, the architecture must ensure information flows only from factory to cloud, with no return path that could serve as an attack vector. Edge-deployable modules handle latency-sensitive tasks like optical inspection independently.
    - **Cross-factory intelligence is the next major value unlock:** Most manufacturers still analyze individual lines or factories in isolation. Connecting multiple factories to shared knowledge graph concepts enables cross-site learning—identifying why one line outperforms another and transferring those insights globally.

    ---

    ## Notable Quotes

    > "Every dollar you spend on manufacturing, 20 cents go to waste. That has been true 50 years ago, and it will be true probably 50 years in the future, because there's always 20% to get." — Bernhard Kratzwald, CTO at EthonAI

    > "The insights you get cannot be hallucinated, because they're coming from this underlying model layer—from this causal model. The LLM agentic layer on top cannot fabricate that." — Bernhard Kratzwald, CTO at EthonAI

    > "You're never done with building your knowledge graph, because there's always more knowledge you can distill out of it." — Bernhard Kratzwald, CTO at EthonAI

    > "The only mistake you can make today is not doing anything. The best time to start was yesterday, and the second best time to start would be today." — Bernhard Kratzwald, CTO at EthonAI

    > "Every AI system will make some mistakes. So here is my best, wholehearted suggestion, and this is why I believe it's true—and now you can click and triple down, follow the root cause links, and investigate everything." — Bernhard Kratzwald, CTO at EthonAI

    ---

    ## Key Concepts Explained

    **Process Knowledge Graph**
    Definition: A semantic data model built on ontologies that assigns meaning to industrial data and defines how different data elements relate to each other—connecting machines, sensors, batches, recipes, and physical flows into a queryable graph structure using subject-predicate-object triples.
    Why it matters
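The subject-predicate-object triple structure defined above can be sketched in a few lines. This is a minimal illustration, not the EthonAI implementation; all machine names, predicates, and downtime figures are invented.

```python
# Minimal sketch of a process knowledge graph as subject-predicate-object
# triples. Every entity and number here is hypothetical.
triples = [
    ("Tank-101", "feeds", "Mixer-7"),
    ("Mixer-7", "produces", "Batch-42"),
    ("Batch-42", "uses_recipe", "Recipe-A"),
    ("Mixer-7", "has_downtime_minutes", 95),
    ("Tank-101", "has_downtime_minutes", 12),
    ("Packer-3", "has_downtime_minutes", 47),
]

def query(predicate):
    """Return all (subject, object) pairs matching a predicate."""
    return [(s, o) for s, p, o in triples if p == predicate]

# A semantic question like "which machines cost the most downtime today?"
# becomes a graph query rather than a raw tag lookup.
worst = sorted(query("has_downtime_minutes"), key=lambda pair: -pair[1])
print(worst)  # machines ranked by downtime, worst first
```

The point of the episode's example is exactly this shift: once relationships carry meaning (`feeds`, `uses_recipe`), an agent can traverse them instead of guessing from tag names.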

    55 min
  2. March 17

    Why The Unified Namespace is The Essential Foundation for Industrial AI & Agentic Operations: Walker Reynolds - President, 4.0 Solutions

    ## Episode: The State of Industrial AI, Unified Namespace, and Knowledge Graphs After PROVE IT 2025

    **Podcast Name:** AI in Manufacturing Podcast
    **Guest:** Walker Reynolds, President & Solutions Architect at 4.0 Solutions, Founder of the PROVE IT Conference
    **Host:** Kudzai Manditereza
    **Target Audience:** Manufacturing data leaders, IT/OT solution architects, and digital transformation professionals

    ---

    ## Episode Summary

    Walker Reynolds, President and Solutions Architect at 4.0 Solutions and founder of the PROVE IT conference, delivers an unfiltered assessment of where industrial AI actually stands in 2025. Drawing from conversations with over 1,000 attendees at this year's PROVE IT conference—70% of whom were end users working in manufacturing—Reynolds identifies three critical industry shifts: AI fatigue is setting in as vendors outpace market readiness, knowledge graphs have emerged as the essential technology for enabling agentic AI in manufacturing, and the gap between digitally mature and immature manufacturers is widening. The conversation covers why most manufacturers still aren't getting value from their unified namespace implementations, the five most practical AI applications seen at PROVE IT, and why autonomous agents are a mathematical impossibility given current LLM reliability. Reynolds closes with his complete recommended technology stack for manufacturers and a prediction that plant floors will see *more* people, not fewer—but they'll be analysts supervising AI agents rather than middle managers managing people.

    ---

    ## Key Questions Answered in This Episode

    - What is the current state of AI adoption in manufacturing in 2025?
    - Why are some manufacturers failing to get value from unified namespace implementations?
    - What role do knowledge graphs play in enabling agentic AI for manufacturing?
    - What are the most practical AI applications for manufacturers right now?
    - Can AI agents run autonomously in manufacturing operations?
    - What does the ideal industrial data architecture stack look like for a small to midsize manufacturer?
    - How does unified namespace serve as the backbone for agentic AI?

    ---

    ## Episode Highlights with Timestamps

    **[1:56]** — **Introduction and episode overview** — Kudzai sets the agenda: PROVE IT conference takeaways, unified namespace adoption status, agentic AI's role, and the ideal industrial data architecture.

    **[4:23]** — **Walker Reynolds' background** — From salt mines to tier-one automotive to founding 4.0 Solutions, IoT University, and the PROVE IT conference—plus why he always introduces himself as if no one knows who he is.

    **[8:36]** — **Three core observations from PROVE IT 2025** — AI fatigue is real, most end users still ask "where do I start?", and knowledge graphs emerged as the breakout technology everyone now understands they need.

    **[20:37]** — **Top five practical AI applications from PROVE IT** — WinCC OA and Tatsoft for AI-assisted development, Atanta Analytics' prompt-to-insights, Thread Cloud's knowledge graph-driven root cause analysis, and Maestro Hub's live module generation with Claude Code.

    **[29:08]** — **The knowledge gap in agentic AI adoption** — Reynolds draws an analogy to the leap from algebra to calculus, warning that not every organization has someone who can bridge the gap to agent-based architectures.

    **[35:04]** — **Why autonomous agents are a myth** — Current LLMs are 99.9% reliable at best—one error per 1,000 words—compared to a PLC's nine nines of reliability. Agents must be human-supervised.

    **[42:55]** — **Why manufacturers fail or succeed with unified namespace** — The differentiator is understanding UNS as the real-time current state of the business, not a historical transaction store.

    **[52:09]** — **UNS as the backbone for agentic AI** — How agents use the semantic structure of UNS to navigate operations and then retrieve deeper context via MCP tools.
    **[54:40]** — **Walker's complete recommended technology stack** — From Docker and Node-RED to HiveMQ, Litmus, Frameworks 10, Thread Cloud, and Snowflake—the full architecture laid out step by step.

    **[59:45]** — **Where AVEVA PI fits** — No need to rip and replace; limit PI to what it's good at (historian), and leverage AVEVA's more open Connect platform.

    **[1:02:11]** — **Prediction: More people on the plant floor, not fewer** — Fewer middle managers, more analysts supervising AI agents to optimize operations.

    ---

    ## Key Takeaways

    - **Knowledge graphs are the breakout technology of 2025:** Coming out of PROVE IT, even non-technical attendees understood that knowledge graphs—relational context between entities in an infrastructure—are essential for AI agents to navigate and reason through manufacturing systems. Manufacturers should prioritize building fluency in knowledge graph concepts now.

    - **AI fatigue is real, and vendors are outpacing market readiness:** Most end users are still asking "where do I start?" while vendors are shipping agentic AI features without clear problem-solution fit. The maturity gap between the most and least digitally advanced manufacturers is widening.

    - **Autonomous agents are not viable in manufacturing:** The most reliable LLMs achieve 99.9% accuracy—one error per 1,000 words—while PLCs operate at nine nines of reliability. Agents should be treated as force multipliers for human workers, not autonomous replacements.

    - **Unified namespace success depends on understanding what it is—and isn't:** UNS is the real-time current state of the business, semantically organized. Manufacturers who fail with UNS are trying to make it something it's not, such as a historical transaction store. It serves as the originating context that agents use before querying deeper systems.
    - **The most practical AI use cases are about building, not automating:** The top applications at PROVE IT involved using AI to accelerate development (natural language to code, dashboards, and workflows), not replacing human decision-making on the plant floor.

    - **Predefined workflows inside agents are a game changer:** Rather than letting agents create their own reasoning steps on the fly, giving engineers the ability to predefine part of an agent's workflow dramatically improves reliability and practical value.

    - **Start building AI fluency now, even if you haven't started your data journey:** Reynolds mandated his team use chatbots daily in January 2023—not because he knew how AI would be used, but to build fluency. Every manufacturer should be doing the same with knowledge graphs and agent concepts today.

    ---

    ## Notable Quotes

    > "The only person who believes agents can run autonomously are people who don't work with agents." — Walker Reynolds, President at 4.0 Solutions

    > "Think of agents as a force multiplier for your workforce, a way of unlocking the potential in people." — Walker Reynolds, President at 4.0 Solutions

    > "If you're not getting value out of unified namespace, then you're using it for something that it isn't." — Walker Reynolds, President at 4.0 Solutions

    > "We're going to see more people on the plant floor, not less. They're going to be analysts supervising AI to optimize operations." — Walker Reynolds, President at 4.0 Solutions

    > "Your homework this year is learn knowledge graphs, because you're going to need them." — Walker Reynolds, President at 4.0 Solutions

    ---

    ## Key Concepts Explained

    **Unified Namespace (UNS)**
    Definition: A unified namespace is a single, semantically organized source of truth that represents the real-time current state of a business—all events, data, and information models contextualized and normalized in one accessible structure.
    Why it matters: UNS serves as the foundational architecture for digital transformation and is the originating context layer that AI agents query to understand current operations before reasoning through deeper systems.
    Episode context: Reynolds emphasized that manufacturers failing with UNS misunderstand its purpose, treating it as a historical data store rather than a real-time state representation.

    **Knowledge Graphs**
    Definition: Knowledge graphs are data structures that represent the relationships between entities (nodes) in a system, providing relational context that enables navigation and reasoning across an infrastructure.
    Why it matters: AI agents require knowledge graphs to navigate up and down a business's infrastructure, moving from an objective at one layer to the specific data location where answers reside.
    Episode context: Reynolds identified knowledge graphs as the breakout technology from PROVE IT 2025, with Thread Cloud's root cause analysis demo receiving mid-presentation applause for demonstrating practical agent-driven analysis via knowledge graphs.

    **Model Context Protocol (MCP)**
    Definition: MCP is a protocol that allows AI agents to connect to external tools and data sources, enabling them to retrieve information and perform actions beyond what's contained in their training data.
    Why it matters: MCP enables agents to go beyond the initial context from UNS and query historical data, work orders, and other systems of record to
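The "real-time current state, not a historical store" distinction above can be sketched with a toy in-memory namespace. Topic paths follow the common ISA-95-style enterprise/site/area/line convention; all names and values are invented for illustration and have nothing to do with Reynolds' specific stack.

```python
# Toy unified namespace: a semantically organized map of topic -> latest value.
# Publishing overwrites the previous value (current state), rather than
# appending history the way a historian or transaction store would.
uns = {}

def publish(topic, payload):
    """Retained-value semantics: the UNS holds only the latest state."""
    uns[topic] = payload

publish("acme/dallas/packaging/line1/state", "RUNNING")
publish("acme/dallas/packaging/line1/oee", 0.81)
publish("acme/dallas/packaging/line1/oee", 0.79)  # overwrites, no history kept

def browse(prefix):
    """An agent navigates the namespace by its semantic structure,
    then uses deeper systems (via MCP tools) for historical context."""
    return {t: v for t, v in uns.items() if t.startswith(prefix)}

print(browse("acme/dallas/packaging/line1"))
```

Asking this structure "what is line 1 doing right now?" is cheap and semantic; asking it "what happened last Tuesday?" is the misuse Reynolds warns about, which is what the deeper systems of record are for.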

    1 hr 2 min
  3. March 11

    Unlocking Productivity With Causal Models and Agentic AI in Manufacturing: Michael Carroll - Global Executive in Industrial Innovation & AI, LNS Research

    # AI in Manufacturing Podcast — Episode Show Notes

    ## Episode Details
    - **Podcast Name:** AI in Manufacturing Podcast (Industry40.tv)
    - **Episode Title:** Unlocking Productivity With Causal Models and Agentic AI in Manufacturing
    - **Host:** Kudzai Manditereza
    - **Guest:** Michael Carroll
    - **Guest Title/Role:** Strategic Advisor & Fellow COO Council at LNS Research; Chief Strategy Officer at Trek AI
    - **Target Audience:** Manufacturing data leaders, COOs, VP of Operations, IT/OT solution architects, and digital transformation professionals

    ---

    ## 1. EPISODE SUMMARY

    Agentic AI is not another digital tool to add to the manufacturing technology stack — it is a fundamentally different species of software that treats decisions, not transactions, as the atomic unit of work. In this episode, Michael Carroll, Strategic Advisor at LNS Research and Chief Strategy Officer at Trek AI, explains why US manufacturing productivity has been flat since 2010 despite massive investments in digital tools, and why agentic AI with causal reasoning represents the structural fix. Carroll draws on his 15 years leading digital transformation at Georgia Pacific to reveal how the real productivity killer is not a lack of data or technology, but a cognitive overload crisis combined with organizational permission bottlenecks that drain value from companies in real time. He introduces a practical diagnostic framework — mapping inferencing load and permission load — that any operations leader can apply today to identify where value is leaking from their organization and where agentic AI can deliver immediate impact.

    ---

    ## 2. KEY QUESTIONS ANSWERED IN THIS EPISODE

    - Why has US manufacturing productivity been flat since 2010 despite massive digital investments?
    - What is agentic AI, and how is it fundamentally different from traditional manufacturing software like MES and ERP?
    - What is causal reasoning, and why does it matter more than explainable AI for manufacturing decisions?
    - How does the permission architecture in manufacturing organizations destroy value and slow decision velocity?
    - Where should COOs and VPs of Operations start when preparing their organizations for agentic AI?
    - Why do alignment meetings signal that a company's numbers can't be trusted?
    - How should IT and OT organizations restructure their relationship to enable competitive advantage?

    ---

    ## 3. EPISODE HIGHLIGHTS WITH TIMESTAMPS

    **[00:02]** - **Introduction & Guest Background** — Kudzai introduces Michael Carroll and his roles at LNS Research and Trek AI, emphasizing his prolific writing on LinkedIn about industrial AI.

    **[04:04]** - **Farm Roots and the Generalist Mindset** — Carroll shares how growing up on a farm in "Knock 'Em Stiff, Ohio" taught him orchestration and generalist thinking that shaped his approach to enterprise transformation.

    **[07:43]** - **The Flat Productivity Crisis** — Discussion of US Bureau of Labor Statistics data showing manufacturing productivity has been flat or declining from 2008-2023, despite heavy digitalization investments.

    **[09:39]** - **The COVID Productivity Paradox** — Carroll reveals how productivity actually spiked during COVID when corporate distractions were removed, disproving the hypothesis that talent attrition alone caused the decline.

    **[13:41]** - **The Cognitive Tipping Point** — Frontline workers now see 8x more information across 50% more equipment than in 1975, but have 50% less experience — creating a cognitive overload that degrades performance.

    **[16:56]** - **What Makes an Agent an Agent** — Carroll defines agentic AI through the lens of human agency: an agent shapes outcomes, bears your intention, but the responsibility remains yours.

    **[22:46]** - **Judea Pearl's Causal Ladder** — Deep explanation of how Pearl's three-layer causal framework (imagining, doing, observing) provides the mathematical foundation for trustworthy AI decision-making.

    **[24:49]** - **Chain of Reasoning vs. Explainability** — Carroll argues that "explainable AI" invites litigation, while causal chains of reasoning provide defensible, legitimate justification for decisions.

    **[30:00]** - **The Adaptive Architecture** — Carroll outlines the three-layer future architecture: ubiquitous connectivity, causal reasoning at the edge, and a trust/permission architecture at the center.

    **[36:39]** - **The Baum Study: Decision Speed and Performance** — Reference to J. Robert Baum's 2003 study of 318 companies showing decision speed — not decision quality — was the top predictor of company performance.

    **[47:22]** - **Causality Replaces Data Models** — Carroll explains why causal models are superior to traditional data models and ontologies, comparing data collection to stock options you wouldn't exercise immediately.

    **[53:30]** - **The Practical Starting Framework** — Carroll provides a step-by-step diagnostic: map your current architecture, identify where inferencing load and permission load are highest, and fix those intersection points first.

    ---

    ## 4. KEY TAKEAWAYS

    - **Manufacturing's productivity crisis is a cognitive overload problem, not a data problem:** Since 2010, frontline workers see 8x more information across 50% more equipment than in 1975, but have 50% less experience. More insights have not produced better performance — they have consumed the adaptive capacity workers need to make good decisions.

    - **Agentic AI treats decisions as the atomic unit of work, not transactions:** Unlike MES or ERP systems that automate transactions, agentic AI shapes outcomes by understanding what's true about the world, evaluating possible interventions, taking action, and learning from evidence. The responsibility always remains with the human.
    - **Causal reasoning provides defensible decisions; explainability invites litigation:** A chain of reasoning built through Judea Pearl's causal framework delivers the legitimate, defensible justification that governance structures require. Explainable AI merely offers interpretations that different stakeholders will contest — which is why alignment meetings exist.

    - **Decision speed outperforms decision quality as a predictor of company performance:** J. Robert Baum's 2003 study of 318 companies found that the highest-performing companies made decisions faster than competitors, centralized strategy while decentralizing operations, and only standardized things that were easy to standardize.

    - **The value leak happens between decision and action:** The time between knowing what to do and getting permission to do it is where most companies lose tremendous value. Permission architectures built around compliance — not governance — create vicious cycles between operations and IT that stall decision velocity.

    - **60% of value creation comes from staying focused:** Carroll's framework breaks down value creation: approximately 20% comes from doing the right things, 20% from doing those things right, and a full 60% from maintaining focus — which fragmented organizations systematically destroy.

    - **Start by mapping inferencing load and permission load:** Operations leaders should map how their company gets things done, identify where inferencing load (people synthesizing multiple insights to make decisions) and permission load (organizational gates) are both high, and target those intersection points first.

    ---

    ## 5. NOTABLE QUOTES

    > "We're not trying to be right. We're trying to get this right — because we're experiencing a time in humanity that's never been experienced before." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

    > "No machine will ever feel the consequences of the actions it takes and the decisions it makes — so the responsibility is still yours." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

    > "You know you have a company that can't trust its numbers when you have an alignment meeting — because alignment meetings mean the politics matter more than the numbers." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

    > "Something that automates a work process is not an agent. Something that carries out a task is not an agent. Because it doesn't shape an outcome." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

    > "Structure makes you effective. You've got to go be effective before you can ever be efficient." — Michael Carroll, Strategic Advisor at LNS Research & CSO at Trek AI

    ---

    ## 6. KEY CONCEPTS EXPLAINED

    **Agentic AI (Enterprise Agency)**
    Definition: Agentic AI is a category of artificial intelligence that shapes outcomes by understanding the current state of the world, evaluating possible interventions, taking action, and learning from evidence — operating on behalf of humans while the responsibility remains with the human.
    Why it matters: It represents a structural shift from transaction-based software (MES, ERP) to decision-based systems that can collapse the time between insight and action in manufacturing operations.
    Episode context: Carroll distinguishes agentic AI from task automation by emphasizing that true agents bear your intention and shape outcomes, rather than simply executing predefined workflows.

    **Causal Reasoning (Judea Pearl's Ladder of Causation)**
    Definition: A mathematical framework developed by Turing Award winner Judea Pearl consisting of three layers —
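The difference between Pearl's "observing" and "doing" rungs, which underpins the causal-reasoning discussion above, can be shown with a toy simulation. The variables here are invented (a hidden confounder Z drives both a setting X and an outcome Y); this is a generic textbook-style illustration, not anything from the episode.

```python
import random

random.seed(0)
N = 10_000

def simulate(intervene_x=None):
    """One sample of (x, y). Z confounds both; X has NO causal effect on Y."""
    z = random.random() < 0.5
    x = z if random.random() < 0.9 else not z   # X usually mirrors Z
    if intervene_x is not None:                 # Pearl's do(X): cut the Z->X edge
        x = intervene_x
    y = z                                       # Y is driven only by Z
    return x, y

# Rung 1 (observing): X and Y look strongly associated in passive data.
obs = [simulate() for _ in range(N)]
p_y_given_x1 = sum(y for x, y in obs if x) / sum(1 for x, _ in obs if x)
p_y_given_x0 = sum(y for x, y in obs if not x) / sum(1 for x, _ in obs if not x)

# Rung 2 (doing): forcing X reveals it changes nothing about Y.
p_y_do_x1 = sum(y for _, y in (simulate(True) for _ in range(N))) / N
p_y_do_x0 = sum(y for _, y in (simulate(False) for _ in range(N))) / N

print(round(p_y_given_x1 - p_y_given_x0, 2))  # large observational "effect"
print(round(p_y_do_x1 - p_y_do_x0, 2))        # near-zero causal effect
```

This is the gap Carroll is pointing at: a correlation-based system would recommend adjusting X, while a causal model recognizes the intervention would do nothing.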

    1 hr 1 min
  4. March 5

    Context Engineering Techniques for Building Reliable Industrial AI Agents: Zach Etier - VP of Architecture, Flow Software

    Podcast Name: AI in Manufacturing Podcast (Industry40.tv)
    Episode Title: Context Engineering Techniques for Building Reliable Industrial AI Agents
    Guest: Zach Etier, VP of Architecture at Flow Software
    Host: Kudzai Manditereza

    Episode Summary

    This episode explores context engineering — the discipline of curating and managing the information supplied to AI agents — and why it is the key to building reliable industrial AI systems. Zach Etier, VP of Architecture at Flow Software, joins host Kudzai Manditereza to break down why simply pumping more data into an AI agent's context window actually degrades performance through dilution, hallucination, and lost instructions. Zach walks through three core context engineering techniques — persisting context, summarization/compaction, and isolation via sub-agents — and explains how each one maps to real manufacturing use cases like automated shift-handover reports. The conversation also covers the practical differences between skills, MCP servers, and sub-agents, and why deterministic code should handle calculations while agents handle orchestration. Finally, Zach makes the case that knowledge graphs with formal ontologies will become essential data architecture for scaling industrial AI across the enterprise. Whether you are evaluating your first agent pilot or planning multi-site deployment, this episode provides a concrete framework for engineering context that agents can reliably act on.

    Key Questions Answered in This Episode

    What is an industrial AI agent, and how does it differ from a chatbot or general-purpose LLM?
    Why does giving an AI agent more context actually reduce its performance?
    What is context engineering, and why is it replacing prompt engineering for agentic AI?
    What are the three core techniques for managing an AI agent's context window in manufacturing?
    How should you decide when to use skills vs. MCP servers vs. sub-agents?
    Why should deterministic code handle calculations instead of letting the AI agent compute them?
    How do knowledge graphs and ontologies enable enterprise-scale industrial AI?

    Episode Highlights with Timestamps

    [00:33] — Meet Zach Etier — Zach introduces his role at Flow Software, his background at Northrop Grumman, and how he leads development of the Atlas knowledge modeling tool.
    [06:04] — Defining an Industrial AI Agent — A clear breakdown: an agent is an LLM that can call tools in a loop, acting as an orchestrator that reasons on context to decide which tool to invoke next.
    [09:54] — Shift-Handover Report Demo — Zach describes a concrete use case where an agent passively generates a shift-change report by pulling data from a historian, operator notes, the UNS, and MES/PLC data.
    [12:58] — From Prompt Engineering to Context Engineering — Why reasoning models and tool-calling changed the game: prompts are static, but agent context is dynamic.
    [16:37] — The Softmax Dilution Problem — How adding too many tokens dilutes relevant information through the normalization process, causing hallucinations and missed instructions.
    [17:17] — Lost in the Middle — The Stanford "needle in a haystack" study showing agents recall content at the start and end of the context window but lose information in the middle.
    [19:50] — Context vs. Knowledge — Context is curated knowledge packaged for a specific task — you don't read the entire equipment manual, only the sections relevant to your troubleshooting task.
    [25:00] — Four Categories of Industrial Context — Domain knowledge (equipment manuals, SOPs), data context (historian, UNS, MES), human-generated context (operator notes, Excel sheets), and behavioral context (skills, guardrails).
    [30:00] — Technique 1: Persisting Context — Writing context to the file system so agents (or sub-agents) can read curated information in future sessions.
[31:27] — Technique 2: Summarization/Compaction — Condensing large context into essential insights; why auto-compaction sometimes breaks agent behavior.
[33:56] — Technique 3: Isolation via Sub-Agents — Spinning up agents with clean context windows to offload research and prevent bloat in the main agent.
[36:05] — Deterministic Tools for Calculations — Why OEE and other calculations should be handled by validated scripts exposed as tools, not computed by the probabilistic model.
[54:21] — Knowledge Graphs for Enterprise-Scale AI — How ontologies provide a "map of meaning" that helps agents navigate large instance models across multi-site enterprises.
[1:02:00] — Federated Knowledge Graphs — Zach's argument for domain experts owning their own models, with governance at the integration interfaces between domains.

Key Takeaways

Context engineering is the new core competency for industrial AI. It is the practice of populating the context window with only highly relevant, curated information — not dumping in everything you have. The softmax normalization in transformer attention blocks dilutes important tokens when too much irrelevant information is present.
Agents recall the start and end of context, not the middle. The "Lost in the Middle" research confirms that instructions and critical data placed in the middle of a large context window are likely to be ignored, leading to hallucinations and forgotten instructions.
Use three techniques to manage context: persist, summarize, and isolate. Persist important context to files for future sessions. Summarize large documents down to essential insights. Isolate research and noisy tasks into sub-agents with clean context windows so the main agent stays focused.
Deterministic code should handle deterministic tasks. Never let a probabilistic model perform calculations. Write validated scripts for things like OEE, expose them as tools, and let the agent orchestrate when to call them.
Skills, MCP, and sub-agents solve different problems. Skills are modular, composable instruction sets with progressive disclosure. MCP servers supply vendor-defined tool context but can bloat the context window. Sub-agents provide isolated context windows for offloading research or preventing context poisoning.
Knowledge graphs are the data architecture for scalable industrial AI. An ontology (model of meaning) paired with an instance model gives agents a navigational map of the domain, enabling them to reason across large enterprises rather than drowning in flat instance data.
Treat prompts, skills, and agent definitions as code. Source-control them, evaluate them, iterate on them. The organizations building this muscle now are developing an expertise gap that will compound over the coming years.

Notable Quotes

"Context is curated knowledge that is packaged for a specific task." — Zach Etier, VP of Architecture at Flow Software
"If you have something that can be done with a deterministic tool, it should be done with a deterministic tool. Don't use an agent to do a calculation." — Zach Etier, VP of Architecture at Flow Software
"You get the reliability by managing the context window." — Zach Etier, VP of Architecture at Flow Software
"Agents can't do on-the-job training. The context needs to be digitized and packaged in a way the agent can reason on." — Zach Etier, VP of Architecture at Flow Software
"My hope is agents being this passive thing happening in the background — augmenting humans rather than becoming the team." — Zach Etier, VP of Architecture at Flow Software

Key Concepts Explained

Context Engineering
Definition: The practice of curating, managing, and optimizing the information placed into an AI agent's context window so that it contains only highly relevant content with no bloat.
Why it matters: It is the primary lever for improving agent reliability and reducing hallucinations in industrial settings.
Episode context: Zach contrasted it with prompt engineering, explaining that reasoning models and tool-calling made agent context dynamic rather than static, creating the need for deliberate context management.

Context Rot (Softmax Dilution)
Definition: The degradation of an agent's ability to reason on relevant information as more tokens are added to the context window, caused by the softmax normalization distributing attention weight across all tokens.
Why it matters: It explains why "more data" often leads to worse agent performance, which is counter-intuitive for many engineering teams.
Episode context: Zach explained this as the core reason the industry shifted from "give the agent everything" to deliberate context engineering.

MCP (Model Context Protocol)
Definition: A standard protocol that allows AI agents to connect to external tool servers, where the server supplies tool descriptions and context so the agent knows how to call tools it was never trained on.
Why it matters: It enables agents to interact with industrial software like historians, MES, and ERP systems through a standardized interface.
Episode context: Zach compared MCP to skills, noting that MCP loads all tool descriptions at once (potential bloat) while the vendor controls the context, whereas skills give users control with progressive disclosure.

Knowledge Graph (Ontology + Instance Model)
Definition: A data structure combining an ontology (a model of meaning that describes domain concepts and relationships) with an instance model (actual data and values), linked by explicit relationships.
Why it matters: It provides AI agents with a navigational map of the domain, enabling reasoning across large, complex enterprise data landscapes.
Episode context: Zach described knowledge graphs as the future data architecture for industrial AI, and explained Flow Software's Atlas product as a knowledge modeling tool built on this approach.
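The ontology-plus-instance-model idea can be illustrated with a minimal sketch. All names here (the site/line/machine classes and the tiny asset graph) are invented for illustration, not Flow Software's Atlas model:

```python
# Minimal sketch of a knowledge graph: an ontology ("map of meaning")
# plus an instance model (actual assets), linked by explicit relations.
# All class and asset names are invented for illustration.

# Ontology: which relationships are meaningful between concept types.
ONTOLOGY = {
    ("Site", "contains", "Line"),
    ("Line", "contains", "Machine"),
    ("Machine", "measuredBy", "Sensor"),
}

# Instance model: (subject, relation, object) triples over typed nodes.
TYPES = {"Dallas": "Site", "Line1": "Line",
         "Press3": "Machine", "TempSensor7": "Sensor"}
INSTANCES = {
    ("Dallas", "contains", "Line1"),
    ("Line1", "contains", "Press3"),
    ("Press3", "measuredBy", "TempSensor7"),
}

def neighbors(node: str, relation: str) -> list[str]:
    """Follow one relation outward from a node in the instance model."""
    return sorted(o for (s, r, o) in INSTANCES if s == node and r == relation)

def validate() -> bool:
    """Every instance triple must be licensed by the ontology."""
    return all((TYPES[s], r, TYPES[o]) in ONTOLOGY for (s, r, o) in INSTANCES)

# An agent can navigate the graph instead of scanning flat instance data:
machines = neighbors("Line1", "contains")
sensors = neighbors(machines[0], "measuredBy")
```

The point of the ontology layer is the `validate` step: the agent gets a small, stable map of which hops are meaningful, so it can traverse a large instance model without reading all of it.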
Isolation (Sub-Agent Pattern)
Definition: A context engineering technique where a sub-agent operates in its own context window, separate from the main agent, to perform research or noisy tasks without contaminating the main agent's context.
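The softmax-dilution ("context rot") effect this episode describes can be demonstrated with a few lines of arithmetic: hold one relevant token's attention score fixed and watch its normalized weight shrink as low-relevance tokens are added. The scores below are made up for illustration:

```python
import math

def relevant_weight(relevant_score: float, n_distractors: int,
                    distractor_score: float = 0.0) -> float:
    """Softmax attention weight of one relevant token among n distractors."""
    rel = math.exp(relevant_score)
    return rel / (rel + n_distractors * math.exp(distractor_score))

# Same relevant token, progressively more padding in the context window:
for n in (10, 1_000, 100_000):
    print(n, round(relevant_weight(5.0, n), 4))
```

Because softmax normalizes over every token in the window, the relevant token's share of attention falls monotonically as irrelevant tokens are added, which is exactly why curating context beats dumping everything in.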

    1 hr 12 min
  5. Feb 26

    Reducing Waste and Improving Efficiency with Multi-Agent Quality Control in Manufacturing: Wilhelm Klein - Co-Founder & CEO, Zetamotion

# AI in Manufacturing Podcast — Show Notes

## Episode: How to Reduce Waste and Improve Efficiency with AI-Powered Quality Control

**Podcast Name:** AI in Manufacturing Podcast (Industry 4.0 TV)
**Episode Title:** How to Reduce Waste and Improve Efficiency with AI-Powered Quality Control
**Guest:** Wilhelm Klein, CEO & Co-Founder, Zetamotion
**Host:** Kudzai Manditereza
**Target Audience:** Manufacturing data leaders, IT/OT solution architects, quality control professionals, and digital transformation leaders implementing AI in industrial operations

---

## 1. Episode Summary

This episode explores how AI-powered quality control can reduce waste and improve efficiency in manufacturing, featuring Wilhelm Klein, CEO and co-founder of Zetamotion. Wilhelm shares why over 90% of industrial AI pilots fail and explains that the real competitive advantage lies not in building bigger AI models, but in designing better end-to-end systems that integrate seamlessly into existing production environments. He introduces Zelia, Zetamotion's AI-powered inspection assistant that reduces model training from weeks of manual data labeling to under an hour using synthetic data and as few as five sample images. The conversation covers the tension between governance and grassroots innovation ("shadow AI"), why manufacturers overwhelmingly prefer edge deployment for quality control data, and why scaling AI across plants is far harder than leadership expects. Wilhelm also shares his vision for fully autonomous inspection systems that configure both software and hardware. Listeners will gain practical insight into what separates successful AI quality control deployments from the 90% that fail.

---

## 2. Key Questions Answered in This Episode

- Why do over 90% of industrial AI pilots fail, and what do the successful ones have in common?
- What is the difference between a model-centric and system-level approach to AI quality control?
- How can manufacturers deploy AI-powered visual inspection without needing an in-house data science team?
- What is synthetic data, and how does it reduce the time and cost of training machine vision models?
- How should manufacturing leaders balance AI governance with grassroots innovation on the shop floor?
- Why do manufacturers prefer edge deployment over cloud for AI-based quality control?
- What makes scaling AI quality control across multiple plants and production lines so difficult?

---

## 3. Episode Highlights with Timestamps

- **[0:00]** — **Introduction** — Host Kudzai Manditereza introduces the topic of AI-powered quality control and guest Wilhelm Klein of Zetamotion.
- **[1:00]** — **Wilhelm's unconventional background** — From Star Trek and the Chaos Computer Club to a PhD in philosophy of technology and technology ethics.
- **[5:01]** — **Where Zetamotion fits in the AI landscape** — Wilhelm traces AI history from Turing to the "GPT moment" and explains why most industrial AI pilots fail (90%+ failure rate per MIT study).
- **[11:06]** — **The "dark number" of shadow AI projects** — Unsanctioned grassroots AI projects by savvy factory workers signal the importance of empowering domain experts.
- **[14:48]** — **Governance vs. flexibility: A virtue ethics approach** — Wilhelm argues for educating engineers and granting reasonable freedom rather than imposing rigid rules.
- **[18:08]** — **System-level thinking over model obsession** — Why the best AI model is worthless if the surrounding system is clunky and unusable for operators.
- **[21:44]** — **Introducing Zelia** — Zetamotion's AI inspection assistant that uses synthetic data to go from five sample images to a trained model in under an hour.
- **[28:27]** — **The full vision for Zelia** — Autonomous end-to-end inspection solution building, including custom dashboards, API connectors, and deployment architecture.
- **[33:13]** — **Human-in-the-loop and the "supercharged magnifying glass"** — Why human expertise remains essential for edge cases and continuous improvement.
- **[33:46]** — **Time savings: From 100,000 labeled images to five samples** — A glass manufacturing example illustrating weeks or months of saved manpower.
- **[35:51]** — **Edge vs. cloud deployment** — Why manufacturers treat QC data as highly sensitive and overwhelmingly prefer on-premise edge solutions.
- **[38:10]** — **Scaling challenges across plants** — No two production lines are the same, even when running the same product, and why copy-paste deployment doesn't work.
- **[42:44]** — **Future vision: From inspection to physical AI** — Expanding Zelia beyond defect detection toward fully autonomous systems that configure their own hardware.

---

## 4. Key Takeaways

- **System-level design beats model performance:** A highly accurate AI model that creates more work for operators than manual inspection will collect dust. Successful AI quality control requires optimizing the entire workflow — UI, integration, reporting, and operator experience — not just the model.

- **Synthetic data dramatically reduces deployment time:** Traditional machine vision projects require collecting and labeling tens of thousands of images over weeks or months. Zetamotion's approach with Zelia requires as few as five good samples and five defect examples per category, achieving alignment in under an hour.

- **Shadow AI signals opportunity, not just risk:** Unsanctioned AI projects by factory workers indicate high-caliber talent and real inefficiencies worth solving. Leaders should channel this energy with reasonable guidelines rather than suppress it with rigid prohibitions.

- **Edge deployment is non-negotiable for most manufacturers:** Quality control data reveals intimate details about product defects and production parameters.
Most manufacturers consider this highly sensitive and strongly prefer on-premise edge solutions over cloud-connected systems.

- **Scaling across plants requires contextual adaptation:** No two production lines are identical, even when running the same product. Differences in equipment age, operating parameters, and environmental conditions mean AI models cannot simply be copied from one site to another without intelligent fine-tuning.

- **Democratization is the key unlock:** The biggest barrier to AI adoption in manufacturing isn't model capability — it's accessibility. Giving domain experts tools they can use without AI expertise (similar to how ChatGPT democratized LLMs) is where the real transformation happens.

- **Human-in-the-loop remains essential:** In quality control, novel defects and edge cases appear constantly. AI works best as a "supercharged magnifying glass" that directs human attention to where expertise is needed, with human feedback continuously improving the system.

---

## 5. Notable Quotes

> "Think of it not like a robot walking into your factory telling everyone to go home, but rather handing your best people a supercharged magnifying glass that draws their attention exactly where they need to apply their human expertise."
— Wilhelm Klein, CEO & Co-Founder, Zetamotion

> "What good is a model if you cannot situate it into the larger context where its performance can actually do very well?"
— Wilhelm Klein, CEO & Co-Founder, Zetamotion

> "The better your people, the higher your risk of shadow AI projects happening — because people see inefficiencies and they want to solve them."
— Wilhelm Klein, CEO & Co-Founder, Zetamotion

> "Nobody wants to have anyone have full scans of their products including all of the defects and QC parameters. It's like looking into someone's drawers — you see the skeletons in the closet."
— Wilhelm Klein, CEO & Co-Founder, Zetamotion

> "We're talking about weeks or months of manpower that can easily be saved by only having to show a couple of examples and defect images."
— Wilhelm Klein, CEO & Co-Founder, Zetamotion

---

## 6. Key Concepts Explained

**Synthetic Data (for Machine Vision)**
Definition: Synthetic data is artificially generated training data created by AI systems to simulate real-world images, eliminating the need to manually collect and label thousands of physical samples.
Why it matters: It removes the biggest bottleneck in deploying AI quality control — the months-long process of collecting, labeling, and curating training datasets.
Episode context: Wilhelm explained that Zelia uses synthetic data to go from five sample images to a fully trained inspection model in under an hour, compared to traditional approaches requiring 100,000+ hand-labeled images.

**Human-in-the-Loop (HITL)**
Definition: A system design approach where AI handles routine tasks autonomously but routes edge cases, novel situations, and final decisions to human operators for judgment and feedback.
Why it matters: In manufacturing quality control, new defect types and contamination scenarios appear constantly, making pure automation unreliable without human oversight.
Episode context: Wilhelm described Zetamotion's current deployment model as human-in-the-loop, where AI directs operator attention to areas requiring expertise, and human feedback continuously improves the system.
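The human-in-the-loop pattern can be sketched as a simple confidence-threshold router: routine verdicts pass automatically, while low-confidence cases go to a human review queue. The threshold, part IDs, and data structures below are invented for illustration and are not Zetamotion's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class InspectionResult:
    part_id: str
    defect_found: bool
    confidence: float  # model's confidence in its own verdict, 0..1

@dataclass
class HITLRouter:
    """Route low-confidence inspections to a human review queue."""
    threshold: float = 0.85
    auto_decisions: list = field(default_factory=list)
    review_queue: list = field(default_factory=list)

    def route(self, result: InspectionResult) -> str:
        if result.confidence >= self.threshold:
            self.auto_decisions.append(result)
            return "auto"
        self.review_queue.append(result)  # human expertise applied here
        return "human-review"

router = HITLRouter()
router.route(InspectionResult("P-001", False, 0.99))  # routine pass
router.route(InspectionResult("P-002", True, 0.97))   # clear defect
router.route(InspectionResult("P-003", True, 0.55))   # novel/edge case
```

In practice the human's verdict on queued items would also be fed back as new training signal, which is the "continuously improves the system" half of the pattern.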

    49 min
  6. Feb 19

    A Guide to Implementing AI Agents in Factories: James Zhang - Co-Founder & CPO, OpsMate AI

**Episode Title:** Practical Guidance for Implementing Industrial AI Agents in Manufacturing
**Guest:** James Zhang, Co-Founder & Chief Product Officer, OpsMate AI
**Host:** Kudzai Manditereza

---

## 1. Episode Summary

This episode explores how agentic AI is creating a new category of digital skilled workers for manufacturing, addressing the industry's deepening productivity plateau and skilled labor crisis. James Zhang, Co-Founder and Chief Product Officer of OpsMate AI, draws on over a decade of experience building and deploying manufacturing software — from SAP's cloud ERP to PTC's ThingWorx IoT platform — to explain why traditional digital transformation investments have failed to move the productivity needle. Zhang introduces the concept of a "decision intelligence and execution layer" that sits on top of existing systems of record (MES, ERP, CMMS, SCADA) to orchestrate AI agents that augment engineers, technicians, and frontline leaders. The conversation covers practical adoption patterns, the critical role of knowledge graphs and context graphs, why perfect data isn't a prerequisite for getting started, and real-world use cases in automotive and discrete manufacturing. Listeners will walk away with a clear framework for identifying, prioritizing, and scaling agentic AI use cases on the shop floor.

---

## 2. Key Questions Answered in This Episode

- What is agentic AI and why should manufacturers care about it now?
- What is the skilled labor crisis in manufacturing and how does agentic AI address it?
- What is the difference between a knowledge graph and a context graph in industrial AI?
- How should manufacturers approach data readiness for AI agent deployment — do you need perfect data?
- What are the best first use cases for AI agents on the factory floor?
- How does a decision intelligence layer differ from adding a copilot to existing manufacturing software?
- How should manufacturing leaders balance top-down AI governance with bottom-up frontline innovation?

---

## 3. Episode Highlights with Timestamps

**[0:54]** — **James Zhang's career journey** — From mechanical engineering to SAP, PTC ThingWorx, and founding OpsMate AI, tracing the evolution of manufacturing software.

**[3:53]** — **Why generative and agentic AI is different** — Zhang explains why this technology finally solves the problem he's spent his entire career pursuing.

**[4:38]** — **The manufacturing productivity plateau** — US BLS data shows total factor productivity in manufacturing declined from 2008–2023 despite massive technology investment.

**[8:50]** — **The skilled labor crisis defined** — Zhang explains how skilled workers (10% of cost but the most critical factor) are the true bottleneck, not capital or technology.

**[11:46]** — **Two ways to apply agentic AI** — Adding copilots to existing tools vs. creating a new decision intelligence and execution layer that mimics how humans actually work.

**[18:18]** — **Adoption patterns from other industries** — The progression from question-answering to task automation to autonomous workflows, applied to factory settings.

**[24:07]** — **Data readiness: you don't need perfect data** — Why AI agents with reasoning capabilities can handle messy data, unlike legacy machine learning approaches.

**[30:56]** — **Knowledge graphs vs. context graphs explained** — A detailed analogy breaking down the "what, where, who" (knowledge graph) from the "how and why" (context graph).

**[37:41]** — **OpsMate AI platform architecture** — Three layers: Factory Brain (knowledge graph), Agent Scaffolding (orchestration), and codified industry best practices.

**[47:59]** — **Real-world customer use cases** — Automotive tier manufacturers reducing material loss and discrete manufacturers cutting cycle times by coaching frontline workers.
**[53:28]** — **Advice for manufacturing leaders** — Embrace both top-down governance and bottom-up innovation by putting AI tools directly in frontline workers' hands.

---

## 4. Key Takeaways

- **The productivity plateau is a people problem, not a technology problem:** Despite massive investments in IoT, machine learning, and automation from 2008–2023, US manufacturing productivity has stagnated or declined. The limiting factor is skilled labor — the engineers, technicians, and frontline leaders who represent only 10–20% of costs but determine quality, safety, and efficiency.

- **Agentic AI creates a new abstraction layer, not a replacement for existing systems:** MES, ERP, CMMS, and SCADA systems remain as systems of record. AI agents sit on top as a "decision intelligence and execution layer" that uses these legacy systems as tools — just as a human worker would — rather than replacing them.

- **Start with information access, then progress to task automation:** The most immediate value comes from giving frontline workers a single natural-language interface to find information across siloed systems, potentially saving one hour per person per shift. From there, automate non-value-added tasks like closing work orders before tackling complex autonomous workflows.

- **Perfect data is not a prerequisite for AI agent deployment:** Unlike legacy machine learning that required clean, structured data, large language model-powered AI agents can reason through messy data. The real bottleneck is knowledge — specifically procedural and tribal knowledge — not data quality.

- **Context graphs capture decision traces, not just static knowledge:** While knowledge graphs store the "what, where, and who," context graphs track the step-by-step reasoning and decision-making process. This decision trace becomes the foundation for self-learning AI systems that improve over time.
- **Frontline workers must be the makers, not just users, of AI agents:** Because every factory, line, and shift presents unique problems, standardized playbooks from other domains don't transfer directly. The people doing the daily work need tools to build and customize their own AI agents.

- **This technology wave has unprecedented bottom-up pull:** Unlike ERP or MES implementations that were top-down management systems, agentic AI delivers direct individual productivity gains, creating demand from frontline workers who already use ChatGPT in their personal lives.

---

## 5. Notable Quotes

> "If you and I don't need perfect data to make a decision and take action, the AI agent does not need it either."
— James Zhang, Co-Founder & CPO at OpsMate AI

> "Through my whole career, I have never seen a technology with this pull — not only from the top but also from the bottom of organizations."
— James Zhang, Co-Founder & CPO at OpsMate AI

> "The bottleneck is not about data today. The bottleneck is more about knowledge."
— James Zhang, Co-Founder & CPO at OpsMate AI

> "The existing software will be there forever. But the value is moving up to this next layer. The current layer will become just another tool — for the AI agent."
— James Zhang, Co-Founder & CPO at OpsMate AI

> "Put the tools into the hands of your frontline team, let them explore. That's usually where magic will happen."
— James Zhang, Co-Founder & CPO at OpsMate AI

---

## 6. Key Concepts Explained

**Skilled Labor Crisis**
Definition: A term describing the critical shortage of experienced engineers, technicians, frontline leaders, and CI specialists in manufacturing — the knowledge workers who drive quality, safety, efficiency, and delivery decisions on the shop floor.
Why it matters: This workforce segment represents only 10–20% of factory costs but is the primary limiting factor behind the manufacturing productivity plateau.
Episode context: Zhang identifies this crisis (a term coined by Gene Farley, CEO of Pault) as the core problem agentic AI is uniquely positioned to solve.

**Decision Intelligence and Execution Layer**
Definition: A new software abstraction layer that sits on top of existing systems of record (MES, ERP, CMMS, SCADA) and systems of intelligence (IoT platforms, dashboards) to augment human decision-making and automate knowledge work through AI agents.
Why it matters: It represents a paradigm shift from giving workers more dashboards to providing AI-powered digital teammates that reason, act, and learn.
Episode context: Zhang positions this as the third generation of manufacturing software — after data collection (system of record) and data visualization (system of intelligence).

**Context Graph**
Definition: A data structure that captures the step-by-step decision trace of how and why decisions were made during problem-solving processes like troubleshooting, recording the reasoning path, hypothesis validation, and corrective actions taken.
Why it matters: Context graphs enable AI agents to learn from past decisions and generate
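The knowledge-graph vs. context-graph distinction drawn in the episode can be sketched as two complementary structures: static facts ("what, where, who") versus an ordered decision trace ("how and why"). The asset names and troubleshooting steps below are invented for illustration:

```python
# Knowledge graph: static facts about the factory ("what, where, who").
# All names are invented for illustration.
knowledge_graph = [
    ("Press-12", "located_in", "Plant-A/Line-3"),
    ("Press-12", "maintained_by", "Technician-Team-2"),
    ("Press-12", "produces", "Bracket-XL"),
]

# Context graph: the step-by-step decision trace of one troubleshooting
# session ("how and why"), recorded as it happens.
context_graph = []

def record_step(action: str, reasoning: str, outcome: str) -> None:
    context_graph.append({
        "step": len(context_graph) + 1,
        "action": action,
        "reasoning": reasoning,
        "outcome": outcome,
    })

record_step("check hydraulic pressure",
            "pressure faults are a common cause of cycle-time drift",
            "pressure normal, hypothesis rejected")
record_step("inspect die temperature",
            "operator notes mention an overnight HVAC outage",
            "die below spec temperature, root cause confirmed")
```

The replay of traces like `context_graph`, rather than the static facts alone, is what the episode frames as the foundation for self-learning agents.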

    58 min
  7. Feb 3

    Building a Data Foundation for AI-Native Industrial Intelligence: Craig Scott - Founder & CEO, Fuuz

1. EPISODE SUMMARY

This episode explores why most manufacturing AI initiatives fail and what companies must do to build a foundation for AI-native industrial intelligence. Craig Scott, Founder and CEO of Fuuz, an industrial intelligence platform, shares insights from nearly a decade of bridging the gap between shop floor data and enterprise systems. The conversation reveals why the missing "shim" between operational technology and enterprise systems is the root cause of unreliable data in manufacturing, and why model-driven approaches are essential for scaling AI across industrial operations. Craig explains how organizations can achieve a single source of truth by implementing a persistent contextualization layer that governs data before AI ever touches it. Whether you're struggling with fragmented point solutions, evaluating industrial data platforms, or preparing your data infrastructure for AI, this episode provides a practical framework for building scalable industrial intelligence.

2. KEY QUESTIONS ANSWERED IN THIS EPISODE

What is fundamentally broken with current manufacturing data infrastructure and how does it impact AI initiatives?
Why do most AI pilots fail to scale in manufacturing environments?
What is a model-driven approach to industrial data, and why is it superior to in-line data transformation?
How do you balance enterprise governance with plant-level flexibility in industrial data architectures?
Should manufacturers adopt industry-standard data models like ISA-95 or build custom models?
What is the difference between a data lake and an operational intelligence platform?
How can manufacturers prepare their data foundation before investing in AI technologies?

3. EPISODE HIGHLIGHTS WITH TIMESTAMPS

[0:00] - Introduction — Craig Scott's background from hands-on manufacturing at age 15 to founding Fuuz, and why the company's purple branding represents the merger of "red" (OT) and "blue" (IT) data.
[6:56] - What's Fundamentally Broken — Discussion of how critical manufacturing knowledge is leaving the business as the workforce ages, and why data-driven approaches are essential to capture and retain institutional knowledge.
[8:09] - The Missing Shim Problem — Craig explains the gap between real-time shop floor data (SCADA/historians) and enterprise systems (ERP/PLM), and why neither system alone provides a single source of truth.
[16:20] - MCP and I3x Integration — How Fuuz is implementing Model Context Protocol and aligning with the I3x initiative for standardized GraphQL APIs to enable AI connectivity.
[18:52] - Model-Driven vs. In-Line Transformation — Why data transformation tools that reshape data in motion create scaling challenges, and how persistent data models solve enterprise-wide consistency.
[24:06] - AI Governance and Hallucination Prevention — Why deterministic data models are essential for trustworthy AI: Claude can't "make up" OEE numbers when the data graph dictates values.
[28:41] - Custom vs. Standard Data Models — Discussion of when to use ISA-95 accelerators versus custom models, using an automotive OEM wall-to-wall deployment as an example.
[33:46] - Red and Blue Namespace Architecture — How Fuuz balances enterprise governance with plant-level flexibility through extensible tenant-based data models.
[37:28] - What Category is Fuuz? — Craig explains how the platform spans MES, WMS, data ops, and application development: an operational intelligence layer, not a data lake.
[47:13] - Technical Architecture Deep Dive — Overview of Kubernetes backend, Node.js framework, RabbitMQ messaging, MongoDB with custom ORM, and the hybrid gateway for edge connectivity.
[51:16] - Real-World Deployments — Case studies including an automotive OEM running an entire car plant on Fuuz, Highbar Steel's solar-powered green steel mill, and PepsiCo co-packer integrations.
[53:52] - Advice for Getting Started — Craig's recommendation to define the problem first, assemble cross-functional IT/OT teams, and start small with the understanding that small problems often expose bigger ones.

4. KEY TAKEAWAYS

The "shim" between shop floor and enterprise is the missing piece: ERP and PLM systems are only accurate for the first 15 minutes after data entry. Without a real-time contextualization layer synchronizing shop floor and enterprise data, there is no true single source of truth.

Model-driven persistence beats in-line transformation for scale: While edge tools that transform data in motion work for one or two sites, they require re-implementation across every site and system. A persistent data model is defined once and becomes the consistent interface for all enterprise systems.

AI governance requires deterministic data models: LLMs cannot reliably do math and will hallucinate if given unstructured data. By forcing AI to read from governed data graphs, organizations can move toward semi-autonomous and eventually autonomous operations with trustworthy outputs.

Extensible models balance governance and flexibility: Enterprise IT can define governed core models while individual plants extend them with additional metadata. Plants can add context but cannot change underlying structures, preserving data integrity while enabling local adaptation.

Operational intelligence is not the same as a data lake: Data lakes are good for reporting and analytics but don't help run real-time operations. An operational intelligence platform provides both persistent contextualized state and real-time event streaming for actual operational execution.

Start with the problem, not the technology: Many companies approach vendors saying they "need an MES" without understanding why. Defining value drivers first allows solutions to start small and expand as bigger problems reveal themselves.
Build tools that enable AI, don't rely on AI as the platform: LLMs are evolving rapidly and may be replaced by new model architectures. Building platforms around deterministic data foundations protects against technical debt from betting on novel technologies.

5. NOTABLE QUOTES

"There's a reason why our color is purple, because if you mix red and blue together, it makes purple. We are the part that's in between—the highly structured enterprise data like ERP and PLM and the really unstructured data that's happening on the plant floor." — Craig Scott, CEO at Fuuz

"The ERP is a good source of truth for like the first 15 minutes that the data goes into the system, and then immediately, when you start generating real time data from the shop floor, it's out of date. Nothing is in sync anymore." — Craig Scott, CEO at Fuuz

"When I connect Claude to Fuuz, Claude can't make anything up. It can't imagine an OEE for my line or my machine because it's being dictated by our data graph." — Craig Scott, CEO at Fuuz

"I still look at AI as a tool, and I don't know that we're ready to acknowledge AI as the platform yet. We want to build tools and platforms that enable the technology, not rely on the new technology to be our platform." — Craig Scott, CEO at Fuuz

"Data is money, and if we can turn that data into actionable insights, now we can make more money for your business." — Craig Scott, CEO at Fuuz

6. KEY CONCEPTS EXPLAINED

Industrial Intelligence Platform
Definition: A software layer that sits between operational technology (SCADA, historians, PLCs) and enterprise systems (ERP, PLM, CRM) to provide real-time data contextualization, persistence, and governance.
Why it matters: Traditional architectures leave a gap between shop floor data and business systems, causing data inconsistency and preventing AI from accessing trustworthy operational information.
**Episode context:** Craig describes Fuuz as the "shim" or "purple" layer that bridges red (OT) and blue (IT) data, enabling real-time synchronization and a true single source of truth.

### Model-Driven Architecture

**Definition:** An approach where data models are defined first as persistent, governed structures, and all systems read from and write to this single canonical model rather than transforming data in-line during transit.

**Why it matters:** In-line transformation tools work for small deployments but require re-implementation at every site. Model-driven persistence enables "once and done" enterprise-wide data consistency.

**Episode context:** Craig contrasted this with edge tools that reshape data in motion, explaining that persistent models scale across global enterprises with multiple ERPs and systems.

### Unified Namespace (UNS)

**Definition:** An architectural pattern that provides a single, hierarchical structure for all operational data, making it accessible to any system that needs it.

**Why it matters:** UNS is gaining adoption as a way to democratize data access, but without persistent contextualized state, it only provides current values—not the historical context needed for operations and AI.

**Episode context:** Craig acknowledged UNS as a great concept but emphasized that operational intelligence requires persistent state of contextualized data, not just real-time streaming.

### Model Context Protocol (MCP)

**Definition:** A protocol that enables AI systems to connect to and understand data from external platforms through standardized interfaces.

**Why it matters:** MCP allows AI tools like Claude to access governed industrial data without requiring custom integrations or exposing companies to AI hallucination risks.

**Episode context:** Fuuz added MCP capability to expose their data graph to AI systems, ensuring AI outputs are governed by deterministic data rather than generating unreliable information.
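The governance idea behind these concepts can be made concrete with a minimal Python sketch. All class, function, and topic names here are hypothetical illustrations, not Fuuz's actual API: the data layer computes metrics deterministically and persists them under UNS-style hierarchical paths, and a connected AI tool can only read what the graph already holds.

```python
# Minimal sketch (hypothetical names) of a "governed data graph":
# the platform computes metrics deterministically and stores them under
# UNS-style topic paths; an AI tool connected via something like MCP can
# only look stored values up, never invent or recompute them.

class GovernedDataGraph:
    def __init__(self):
        self._values = {}  # UNS-style topic path -> governed value

    def record(self, topic: str, value: float) -> None:
        """Only the platform writes, via deterministic calculations."""
        self._values[topic] = value

    def read(self, topic: str) -> float:
        """AI tools read here; a missing value is an error, not a guess."""
        if topic not in self._values:
            raise KeyError(f"No governed value at {topic}")
        return self._values[topic]

def oee(availability: float, performance: float, quality: float) -> float:
    """Deterministic OEE calculation owned by the data layer, not the LLM."""
    return availability * performance * quality

graph = GovernedDataGraph()
graph.record("acme/plant1/line1/oee", oee(0.90, 0.95, 0.98))

# An assistant asked "what is the OEE of line 1?" answers only from the graph:
print(round(graph.read("acme/plant1/line1/oee"), 4))  # 0.8379
```

The read-only surface is the point Craig makes about connecting Claude through MCP: when the model's only interface is a lookup into governed state, a missing value surfaces as an error rather than a hallucinated number.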
### I3x Initiative

**Definition:** An industry initiative working on standardized GraphQL APIs for industrial data exchange, enabling interoperability between industrial systems.

**Why it matters:** Standardized APIs reduce integration complexity and allow best-of-breed systems to share data through common interfaces.

**Episode context:** Craig mentioned Fuuz has be

    57 min
  8. Jan 22

    Driving Operational Excellence in Manufacturing with Practical AI: Mickey Shaposhnik - Founder & CEO, Next Plus

    Traditional MES platforms were built for a manufacturing world that no longer exists. They assume stable product lines. They assume you have time for lengthy implementations, tolerance for complexity, and operators who can navigate digital forms while running production.

    But here's the challenge: today's manufacturing reality is different.

    ⇨ Markets demand the flexibility to shift from 1.5-liter bottles to 1-liter bottles overnight
    ⇨ Low-volume, high-mix production is now the norm
    ⇨ Tribal knowledge is retiring faster than it's being captured
    ⇨ Workers stay 2-3 years, not 20, making traditional training models obsolete

    The cost of this disconnect?

    ❌ Frontline workforce unable to contribute operational intelligence at scale
    ❌ ROI delayed by complexity, not capability
    ❌ Two-year deployment cycles for basic systems
    ❌ Digital initiatives stuck in pilot purgatory

    That's why leading manufacturers are rethinking execution from the ground up, shifting from monolithic systems to AI-native, human-centric platforms built for today's workforce reality.

    This new approach is effective because it's built with an AI-native mindset, not as a digitized version of paper-based processes:

    ✅ AI-generated SOPs from video, cutting engineering time by 80%
    ✅ Learning systems that surface troubleshooting guidance from historical fault data
    ✅ Human-centric design that captures operational intelligence without disrupting workflows
    ✅ AI-powered interfaces that enable natural interaction; think voice, not dropdowns
    ✅ Rapid deployment measured in weeks
    ✅ Scalable without complexity; connect thousands of machines without lengthy integrations

    The companies winning today aren't planning more; they're executing faster and adapting continuously.

    In this episode of the AI in Manufacturing podcast, I speak with Mickey Shaposhnik, Founder and CEO of Next Plus, about how practical, AI-powered frontline execution is redefining operational excellence.

    Watch/Listen now

    44 min
