DataFramed

DataCamp

Welcome to DataFramed, a weekly podcast exploring how artificial intelligence and data are changing the world around us. On this show, we invite data & AI leaders at the forefront of the data revolution to share their insights and experiences on how they lead the charge in this era of AI. Whether you're a beginner looking to gain insights into a career in data & AI, a practitioner needing to stay up-to-date on the latest tools and trends, or a leader looking to transform how your organization uses data & AI, there's something here for everyone. Join co-hosts Adel Nehme and Richie Cotton as they delve into the stories and ideas that are shaping the future of data. Subscribe to the show and tune in to the latest episode on the feed below.

  1. #352 AI Agents at Work: What Actually Breaks (and How to Fix It) with Danielle Crop, EVP Digital Strategy & Alliances at WNS

    12H AGO


    AI agents are spreading across the data and AI industry, promising to automate everything from research to outreach. At the same time, teams are learning that these tools can hallucinate, leak data, or act in surprising ways. In day-to-day work, the challenge is deciding which tasks to hand off, what data to share, and how to keep the output trustworthy. Do your agents actually add value, or just add noise? Are they running in a secured, ring-fenced environment? How do you balance playful experimentation with critical checking when an agent confidently gets a key fact wrong? Danielle leads go-to-market strategy at WNS, Capgemini's AI transformation services arm. Previously, Danielle was Chief Data Officer at American Express and Albertsons. She also writes The Remix substack on technology trends, and is an Editorial Board Member for CDO Magazine. In the episode, Richie and Danielle explore AI agents at work, experimentation with guardrails, data privacy, access, tone controls, OpenClaw automation wins and failures, token costs, tying AI plans to P&L strategy, shifts in careers and hiring, how data teams handle unstructured data governance, and much more.

    Links Mentioned in the Show:
    WNS
    Connect with Danielle
    AI-Native Course: Intro to AI for Work
    Catch Danielle speaking at RADAR, April 1
    Related Episode: AI Agents Are the New Shadow IT (And Your Governance Isn’t Ready) with Stijn Christiaens, CEO at Collibra
    Explore AI-Native Learning on DataCamp
    New to DataCamp? Learn on the go using the DataCamp mobile app
    Empower your business with world-class data and AI skills with DataCamp for business

    56min
  2. #351 Will World Models Bring us AGI? with Eric Xing, President & Professor at MBZUAI

    MAR 16


    World models are emerging as the next step after large language models, pushing AI from book knowledge toward systems that can simulate the physical and social world. Instead of just generating text or short videos, the goal is steerable simulation with long-horizon consistency and planning. For practitioners, this raises practical choices: what data and representations do you need, and when do you mix symbolic reasoning with generative models? How do you test whether a model can follow actions over minutes, not seconds? And where do you start—robotics, driving safety, or synthetic data generation? Professor Eric Xing is President of Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and a world-leading computer scientist whose work spans statistical machine learning, distributed systems, computational biology, and healthcare AI. A fellow of AAAI, IEEE, and the American Statistical Association, he has authored over 400 research papers cited more than 44,000 times. Before MBZUAI, Eric was a Professor of Computer Science at Carnegie Mellon University, where he also founded the Center for Machine Learning and Health. He is the founder and chief scientist of Petuum Inc., recognized as a World Economic Forum Technology Pioneer, and has held visiting roles at Stanford and Facebook. He holds PhDs in both Molecular Biology and Computer Science. In the episode, Richie and Eric explore world models as simulators for action, the jump from book intelligence to physical and social skills, why long-horizon planning is still hard, architectures, robots, data generation, open K2 Think LLMs, virtual-cell biology, and much more.

    Links Mentioned in the Show:
    MBZUAI
    Pan World Model
    Connect with Eric
    AI-Native Course: Intro to AI for Work
    Related Episode: Developing Better Predictive Models with Graph Transformers with Jure Leskovec, Pioneer of Graph Transformers, Professor at Stanford
    Explore AI-Native Learning on DataCamp
    New to DataCamp? Learn on the go using the DataCamp mobile app
    Empower your business with world-class data and AI skills with DataCamp for business

    1h4min
  3. #350 How to Make Hard Choices in AI with Atay Kozlovski, Researcher at the University of Zurich

    MAR 9


    Across the AI industry, high-stakes tools are being deployed in places where errors can harm people: sepsis alerts in hospitals, identity checks, welfare fraud detection, immigration enforcement, and recommendation systems that shape life outcomes. The pattern is familiar: scale and speed go up, while human review becomes rushed, shallow, or punished for disagreeing. In daily work, that can look like a nurse forced to act on false alarms, or a team using an LLM summary in ways the designers never planned. When should you slow down deployment? How do you detect new “wild” use cases early? And what does responsible tracking and oversight look like under real pressure? Atay Kozlovski is a Postdoctoral Researcher at the University of Zurich’s Center for Ethics. He holds a PhD in Philosophy from the University of Zurich, an MA in PPE from the University of Bern, and a BA from Tel Aviv University. His current research focuses on normative ethics, hard choices, and the ethics of AI. In the episode, Richie and Atay explore why AI failures keep happening, from automation bias to opaque targeting and hiring models. They unpack “meaningful human control,” accountability, and design in healthcare, government, and warfare. You’ll also hear about deepfakes, consent, digital twins, AI-driven civic engagement, and much more.

    Links Mentioned in the Show:
    “Lavender” IDF recommendation system
    Amnesty International reports on AI/automation in welfare systems
    “Meaningful Human Control” (MHC) framework
    Connect with Atay
    AI-Native Course: Intro to AI for Work
    Related Episode: Harnessing AI to Help Humanity with Sandy Pentland, HAI Fellow at Stanford
    Explore AI-Native Learning on DataCamp
    New to DataCamp? Learn on the go using the DataCamp mobile app
    Empower your business with world-class data and AI skills with DataCamp for business

    1h10min
  4. #349 From AI Governance to AI Enablement with Stijn Christiaens, CEO at Collibra

    MAR 5


    Data governance has been around long enough to develop playbooks, but AI governance is evolving in real time. Industry trends like LLMs, agents, and emerging “swarms” are changing what oversight even means, from data lineage to agent-to-agent provenance. For working teams, the questions are immediate: who leads—legal, security, IT, data, or a new AI role? How do you set standards so engineers aren’t using a different tool for every task? What maturity framework should you measure against, and how often should you reassess as technology shifts? How do you help teams move fast without breaking trust? Stijn is a data governance veteran and one of the leading thinkers in the space. He runs data strategy, data infrastructure, and product evangelism at the data and AI governance company Collibra. Since founding Collibra 18 years ago, Stijn has held several executive positions, including COO and CTO. In the episode, Richie and Stijn explore AI governance failures and wins, risks from agents that can act on systems, creating visibility with an agent registry, how AI governance differs from data governance, ownership across legal, security, IT, and data teams, EU AI Act risk tiers, and much more.

    Links Mentioned in the Show:
    Collibra
    Connect with Stijn
    AI-Native Course: Intro to AI for Work
    Related Episode: The New Paradigm for Enterprise AI Governance with Blake Brannon, Chief Innovation Officer at OneTrust
    Explore AI-Native Learning on DataCamp
    New to DataCamp? Learn on the go using the DataCamp mobile app
    Empower your business with world-class data and AI skills with DataCamp for business

    53min
  5. #348 AI Agents in Your Systems: Speed, Security, and New Access Risks with Jeremy Epling, CPO at Vanta

    MAR 2


    Automation is moving from APIs to full “computer use,” where agents click through screens like a human. That power is transforming evidence collection, access reviews, and repetitive security tasks, but it also raises new risks. In everyday workflows, the safest gains often start with read-only actions, sandboxes, and clear opt-in for anything that writes changes. Do your tools know when an access request is an anomaly? Can you keep humans in the loop with fast review-and-approve steps? And if an agent can browse your systems, how do you stop data from walking out the door before customers or attackers notice? Jeremy Epling is Chief Product Officer at Vanta, where he leads product strategy and execution for the company’s trust management platform. He focuses on helping organizations automate security and compliance, enabling them to build and scale with confidence. Previously, he was VP of Product at GitHub, overseeing Actions, Codespaces, npm, and Packages—core components of the modern developer workflow used by millions worldwide. Before GitHub, Jeremy spent more than 16 years at Microsoft, leading product teams across Azure DevOps Pipelines and Repos, OneDrive, Outlook, Windows, and Internet Explorer. His work has centered on developer platforms, cloud infrastructure, and productivity tools at global scale. In the episode, Richie and Jeremy explore AI-driven security risks, vendor data use and trade-secret leakage, governance and access controls, compliance beyond audits, how agents automate security questionnaires and vendor reviews, how to ship faster safely, human-in-the-loop design, “computer use” automation, and much more.

    Links Mentioned in the Show:
    Vanta
    Vanta State of Trust Report
    Connect with Jeremy
    AI-Native Course: Intro to AI for Work
    Related Episode: Governing Pandora's Box: Managing AI Risks with Andrea Bonime-Blanc, CEO at GEC Risk Advisory
    Explore AI-Native Learning on DataCamp
    New to DataCamp? Learn on the go using the DataCamp mobile app
    Empower your business with world-class data and AI skills with DataCamp for business

    44min
  6. #347 Let's Get Physical with AI with Ivan Poupyrev, CEO at Archetype AI

    FEB 23


    Physical AI is showing up across the industry as sensors, connected devices, and foundation models move from the cloud into the real world. After years of IoT wiring everything to the internet, the big shift is turning raw measurements and video into meaning, not just dashboards. For day-to-day teams, that changes how you monitor equipment, detect failures, and decide what to do next. When thousands of sensor streams hit storage, who turns them into insights and recommendations fast enough to matter? Can one model generalize across different sensors and conditions? And what must run on the asset versus the cloud? Dr. Ivan Poupyrev is CEO and Founder of Archetype AI, where he is building a multimodal AI foundation model that combines real-time sensor data and natural language to help people and organizations better understand and act on the physical world. The company is developing a developer platform to unlock new applications of Physical AI across industries. Previously, he was Director of Engineering at Google’s Advanced Technology and Projects (ATAP) division, where he founded and led large cross-functional teams to create Soli, a radar-based sensing platform, and Jacquard, a connected apparel platform powered by smart textiles and embedded ML. These technologies shipped in more than 15 products across 33 countries, including collaborations with Levi’s, YSL, Adidas, and Samsonite, and were integrated into flagship devices such as Pixel 4 and Nest products. His work has been widely published, recognized with major international awards, and featured in global media. In the episode, Richie and Ivan explore physical AI beyond robotics, turning IoT sensor streams into insights, recommendations, and automation, why physical foundation models differ from LLMs, sensor-fusion wins like wind-turbine failure alerts, edge deployment and privacy, how to pick a first project in practice, and much more. 
    Links Mentioned in the Show:
    Archetype AI
    Attention Is All You Need (Original Transformer Architecture Paper)
    A Mathematical Theory of Communication (Shannon, 1948)
    Connect with Ivan
    AI-Native Course: Intro to AI for Work
    Related Episode: Enterprise AI Agents with Jun Qian, VP of Generative AI Services at Oracle
    Explore AI-Native Learning on DataCamp
    New to DataCamp? Learn on the go using the DataCamp mobile app
    Empower your business with world-class data and AI skills with DataCamp for business

    46min
  7. #346 Get Quantum Ready with Yonatan Cohen, CTO at Quantum Machines

    FEB 16


    Quantum computing is advancing fast, but it comes with a core industry challenge: noise. The big promise—better simulations, faster optimization, and maybe new kinds of AI—depends on quantum error correction and scaling from physical qubits to reliable logical qubits. For working professionals, that translates into system design questions, not just theory. How do you budget for the overhead of error correction? What does a hybrid quantum‑classical workflow look like when classical processors must process error data in real time? If a quantum approach shows “advantage” today, how do you know a better classical heuristic won’t catch up next month? Where should you focus first: hardware readiness or use cases? Dr. Yonatan Cohen is a physicist, entrepreneur, and co-founder of Quantum Machines, where he serves as Chief Technology Officer. He earned his Ph.D. at the Weizmann Institute of Science in Israel, focusing on quantum electronics, superconducting–semiconducting devices, and microfabrication. He is also a co-founder and former managing director of the Weizmann Institute’s entrepreneurship program and has published extensively in peer-reviewed journals, with recognized contributions to quantum computing. As CTO, Dr. Cohen has played a key role in developing the Quantum Orchestration Platform, a first-of-its-kind control and operating system for quantum computers that accelerates the path to practical, useful quantum systems. In the episode, Richie and Yonatan explore near-term quantum simulation, encryption risks, the open question of quantum AI, noisy qubits and error correction, physical vs logical scaling, the need for algorithms and use cases, how to try quantum coding via Amazon Braket, and much more. 
    Links Mentioned in the Show:
    Quantum Machines
    Amazon Braket
    IBM Qiskit
    NVIDIA CUDA Quantum
    Google Cirq
    Connect with Yonatan
    AI-Native Course: Intro to AI for Work
    Related Episode: Developing Better Predictive Models with Graph Transformers with Jure Leskovec, Pioneer of Graph Transformers, Professor at Stanford
    Explore AI-Native Learning on DataCamp
    New to DataCamp? Learn on the go using the DataCamp mobile app
    Empower your business with world-class data and AI skills with DataCamp for business

    49min
  8. #345 How to Drive Innovation with Brian Solis, Head of Global Innovation at ServiceNow

    FEB 9


    AI moves fast, and the news cycle can feel like a fire hose. New tools like agents and digital twins promise to help, but they also add more choices and noise. In day-to-day work, the challenge is less about knowing every breakthrough and more about deciding what matters, then making time to act. How do you cut meetings down, say no without friction, and still ship real work? How do you open your mind to new ideas while avoiding hype? And when you do spot a signal, how do you turn it into action across teams, stakeholders, and shifting priorities? As the Head of Global Innovation at ServiceNow, Brian Solis drives vision and strategy for future-focused innovation. He has three decades of experience as a technology leader, and Forbes called him "one of the more creative and brilliant business minds of our time". Previously, Brian was VP of Global Innovation at Salesforce. He has written nine books, including the best-selling "Mindshift". Brian is an author of the ServiceNow Enterprise AI Maturity Index 2025 Report. In the episode, Richie and Brian explore the challenges of staying updated with AI advancements, the importance of mindset shifts for innovation, the role of storytelling in driving change, and practical strategies for managing information overload, fostering organizational transformation, and much more.

    Links Mentioned in the Show:
    Brian’s Book: Mindshift
    ServiceNow
    Connect with Brian
    AI-Native Course: Intro to AI for Work
    Related Episode: The New Paradigm for Enterprise AI Governance with Blake Brannon, Chief Innovation Officer at OneTrust
    Explore AI-Native Learning on DataCamp
    New to DataCamp? Learn on the go using the DataCamp mobile app
    Empower your business with world-class data and AI skills with DataCamp for business

    1h8min
4.9 out of 5 (11 ratings)

