Trustworthy AI : De-risk business adoption of AI

Pamela Gupta

Description: Creating AI trust is a complex and hard problem. It is not obvious what it is or how it can be operationalized. We will demystify what Trustworthy AI is, how to adopt it efficiently, and how to leverage it to reduce risk in AI programs. McKinsey research indicates that companies seeing the biggest bottom-line returns from AI (those that attribute at least 20 percent of EBIT or profitability to their use of AI) are more likely than others to follow Trustworthy AI best practices, including explainability. Further, organizations that establish digital trust among consumers through responsible practices, such as making AI explainable, are more likely to see their annual revenue and profitability grow at rates of 10 percent or more.

  1. AI Governance Isn't Optional Anymore —What ISO 42001 Auditors Look For

    1D AGO

    AI Governance Isn't Optional Anymore —What ISO 42001 Auditors Look For

    Trustworthy AI: De-risk Adoption of Business AI, with Pamela Gupta. Most organizations have AI policies. Few have AI governance that holds up under audit. There's a difference, and that difference is where legal exposure, regulatory risk, and operational failure live. In this episode, I sit down with Dallas Bishoff, the newly appointed U.S. Vice-Chair of ISO Steering Committee 27 (Information Security, Cybersecurity & Privacy) and one of the first ISO 42001 Lead Auditors in the world. Dallas is both writing the rules and auditing against them, a rare perspective that every AI governance leader needs to hear. We go deep on:
    • The business case for ISO 42001 beyond compliance: what CEOs and boards actually need to understand
    • What auditors look for on day one of an AI governance audit, and what tells them within the first hour whether governance is real or theater
    • Red flags that expose paper governance programs with no operational teeth
    • How ISO 42001 handles third-party and vendor AI risk, where most enterprise AI exposure actually lives
    • The convergence of ITIL v5 (released January 2026) and ISO 42001, and why keeping IT service management and AI governance in silos is a costly mistake
    • Whether ISO 42001 is equipped for agentic AI: systems that act autonomously without human oversight
    • The medical device wake-up call: layering AI governance on top of existing quality management standards after AI-enabled surgical devices were linked to serious patient harm
    • Global regulatory pressure from the EU AI Act and what multinational organizations should prioritize now
    I also discuss how my AI TIPS™ framework complements ISO 42001 to create a complete governance stack. If you're a board director, CISO, privacy officer, or compliance leader evaluating AI governance readiness, this is the conversation you need to hear. Guest: Dallas Bishoff, U.S. Vice-Chair, ISO SC27 | ISO 42001 Lead Auditor | Author, ISO 42001 Pro Tips Newsletter. Host: Pamela Gupta, Creator, AI TIPS™ Framework | Founder, Trusted AI | CISSP, CISM, CSSLP | 2025 Joseph J. Wasserman Award, ISACA. Subscribe and follow Pamela Gupta on LinkedIn for weekly AI governance intelligence. Can Trustworthy AI help de-risk adoption of AI? Can Trustworthy AI be instrumental in helping organizations gain a competitive edge and promote better business outcomes, including accelerated innovation with AI? With extensive experience in global industry leadership across Business Strategy, Technology, and Cybersecurity, Pamela helps clients create a strategic approach to achieving business value with AI by adopting a holistic, risk-based approach to AI trust. She defined 8 essential pillars of trustworthy AI; read more at the Trustedai.ai website. Her insights have shaped the way we look at the impact of cyberwarfare on business, strategies for efficient digital transformation, and governance views on algorithmic failures. Join Pamela as she delves into her signature framework, AI TIPS, standing for Artificial Intelligence Trust, Integrity, Pillars, and Sustainability. This podcast is all about operationalizing governance and building Trustworthy AI systems from the ground up. For questions or comments on this podcast, reach out to me.

    36 min
  2. AI Governance 2026: What Leaders Need to Know Now

    FEB 2

    AI Governance 2026: What Leaders Need to Know Now

    Navigating Old & New AI Regulations, Liability Risks, and the Strategic Pivot for Q1. Welcome to Trustworthy AI: De-risk Adoption of Business AI. I'm Pamela Gupta, Founder of Trusted AI. This month I'm doing something a little different: a solo episode to kick off 2026. No guest today, just me and you, because a lot has happened in the last 60 days that I want to unpack. If you're a C-suite executive, a board member, a CISO, or anyone responsible for AI in your organization, this episode is your briefing. I'm going to cover:
    • The new US Executive Order on AI and what it signals
    • Why AI litigation just got real, and what it means for your vendors
    • EU AI Act enforcement kicking in
    • Three things you should do this quarter
    Let's get into it. There are slides accompanying the podcast; see https://youtu.be/X_0Yba6Hszg

    17 min
  3. Trustworthy Agentic AI Rests on Strong Data Governance

    12/21/2025

    Trustworthy Agentic AI Rests on Strong Data Governance

    When AI moves from tool to agent, content bloat transforms from inefficient to dangerous. Today, I'm joined by our sponsor, RecordPoint, a leader in AI governance and data lifecycle management. They've been helping highly regulated organizations, from government agencies to financial services, build what they call "ART": Accurate, Relevant, and Trusted data foundations for AI systems. I am speaking with Joe Pearce, Head of Product at RecordPoint, who leads the innovation, strategy, and roadmap for its Data and AI Governance platforms. We're going to explore why agentic AI demands a fundamentally different approach to data governance, what happens when organizations get it wrong, and how forward-thinking leaders are transforming their content management from passive archives into active AI strategy engines. Because here's the reality: your AI agents are only as trustworthy as the data you're giving them access to. And if that data is cluttered with ROT (redundant, obsolete, and trivial content), you haven't solved the hallucination problem; you've just moved it from the public web to your private chaos.

    34 min
  4. AI Governance for AI Value

    12/01/2025

    AI Governance for AI Value

    75% of companies are now using generative AI, but only a third have responsible controls in place. That's not just a statistic; it's a ticking time bomb. Today, I'm speaking with Dr. Paul Dongha, Head of Responsible AI at NatWest Group and co-author of the newly released 'Governing the Machine.' He's spent three decades bridging AI innovation with ethical implementation in one of the world's most regulated industries. If you want to know how to make AI governance an accelerator rather than a blocker, this is the conversation you need to hear. If you're navigating the EU AI Act, building assurance platforms, or trying to earn customer trust while scaling AI, this conversation provides the roadmap. We compare notes on my AI TIPS model for operationalizing AI governance and the framework that Ray Eitel-Porter, Paul Dongha, and Miriam Vogel present in Governing the Machine.

    54 min
  5. AI Cyber Threats at Warp Speed: Decoding the Attack Flow with MITRE ATLAS

    10/30/2025

    AI Cyber Threats at Warp Speed: Decoding the Attack Flow with MITRE ATLAS

    Is your organization ready for the AI cybersecurity threat wave? What is the role of AI cybersecurity in a holistic AI governance program? What are the industry partnerships from MITRE that every organization should be aware of, and why? The landscape of AI risk is evolving at an accelerated rate, demanding a security framework built specifically for the unique attack surfaces of machine learning and generative AI. Join host Pamela Gupta as she welcomes Walker Dimon, the MITRE ATLAS Lead, who is focused on advancing security for these rapidly evolving AI systems. This conversation reveals the critical flow and severity of modern AI threats:
    • Mapping the Adversary's Path: The MITRE ATLAS Matrix organizes the progression of attack tactics, providing practitioners with a common language and taxonomy for AI threats.
    • New, Realized Threats: The focus has shifted from predictive AI attacks (like data poisoning) to complex generative AI exploits. Walker explains that ATLAS techniques are only added if they are "realized," meaning there is real-world evidence of actual adversaries using these TTPs against victim systems.
    • The LLM Evolution: Learn about the need for new attack taxonomies, including the recent addition of triggered injection, to capture the delayed adversarial behavior unique to complex agentic AI systems.
    • Threat Modeling: Walker explains how CISOs can immediately use ATLAS for threat modeling by mapping data flows and user access points to the matrix.
    • Mitigation Strategies: ATLAS is also a resource for mitigations, offering strategies and exemplars such as open-repository guardrail packages (e.g., NeMo Guardrails) to define boundary conditions and prevent system compromise.
    Tune in to understand the dynamic nature of AI risks and get actionable guidance on leveraging the MITRE ATLAS Matrix to build trustworthy, safe, and secure AI systems. We discuss red teaming, prompt injection attacks, and the newly introduced category, "triggered injection." My last episode included a deep dive on agentic AI attacks; that was an example of this new attack. Pamela also poses a lightning round: one AI security myth to retire, and the most under-hyped attack vector. Walker's responses may surprise you. Finally, thanks to our sponsor RecordPoint; you can get more information about their unified data and governance platform.

    42 min
  6. Business Impact of Weaponized AI Agents

    10/21/2025

    Business Impact of Weaponized AI Agents

    I'm Pamela Gupta, 2025 Joseph J. Wasserman Award Honoree, the highest honor in information security and risk governance. I'm globally ranked number three in Risk Management and number seven in Cybersecurity by Thinkers360. But here's what I'm most proud of: I help organizations turn AI from a risk into revenue. In my work across 120 countries and with Fortune 500 companies, I've operationalized AI governance frameworks that don't just check compliance boxes; they enable business teams to launch AI initiatives in 60 days instead of staying stuck for months. I created the AI TIPS framework (Trust, Integrity, Pillars, and Sustainability) four years before NIST published their AI Risk Management Framework. I've advised the U.S. Department of Defense on AI strategy. I've built AI Centers of Excellence for critical infrastructure companies. And I've designed governance systems on platforms like IBM watsonx that automate policy enforcement while enabling innovation at scale. My mission is simple: de-risk AI adoption so organizations can confidently embrace the most transformative technology of our generation. Because when AI governance is done right, it's not a barrier; it's an accelerator. Today we're going to talk about one of the most significant AI security vulnerabilities discovered in 2024, and why it matters to every organization deploying AI agents. This is ForcedLeak. CVSS 9.4. Critical severity. It affected Salesforce Agentforce, a platform used by thousands of enterprise customers. This episode is for any and every organization to hear and act on as AI gets integrated into every product globally. Questions or comments? Contact me at https://www.linkedin.com/in/buildingtrustedaiholistically/

    37 min
  7. Enterprise Agentic AI Governance & Security

    09/23/2025

    Enterprise Agentic AI Governance & Security

    “A lack of AI governance will be a showstopper for unlocking the value of agentic systems.” Join us for a session on Enterprise Agentic AI Governance & Security with Jim Reavis, CEO of the Cloud Security Alliance, hosted by Pamela Gupta for a special Trustworthy AI podcast to set the stage for the upcoming Cybersecurity Awareness Month in October. Agentic AI represents an advancement in autonomous systems, increasingly enabled by Large Language Models (LLMs) and generative AI. While agentic AI predates modern LLMs, their integration has "significantly expanded their scale, capabilities, and associated risks." We will discuss:
    Segment 1, The Why: understanding the urgency of AI governance for agentic AI.
    Segment 2, The How: building AI-native assurance with Pamela's AI TIPS framework for AI governance and the Cloud Security Alliance's AI Control Management System.
    Conclusion, The Future: governance, innovation, and collaboration for a safer future.

    37 min
  8. Operationalizing Responsible AI in Healthcare

    09/01/2025

    Operationalizing Responsible AI in Healthcare

    Today on Trustworthy AI: De-risk Adoption of AI, I am honored to be speaking with an industry thought leader. Nicoleta J. Economou-Zavlanos, PhD, is the Director of the Duke Health AI Evaluation & Governance Program and the founding director of the Algorithm-Based Clinical Decision Support (ABCDS) Oversight initiative. She leads Duke Health's efforts to drive the responsible integration of AI in healthcare by establishing robust evaluation frameworks and oversight practices. AI in healthcare can transform lives, or land organizations in lawsuits if governance isn't built in from the start. That's why I'm so excited to sit down with Dr. Nicoleta Economou of Duke Health. Together, we'll discuss how her team is making Responsible AI real through:
    • A living governance model (ABCDS Oversight)
    • Multi-institutional collaboration (CHAI + TRAIN)
    • Forward-looking alignment with policies and federal rules
    If you want to see how a health system is leading nationally in AI governance, and what lessons you can apply, this is a session you won't want to miss.

    44 min

Ratings & Reviews

3.7 out of 5 (3 Ratings)
