The AI Governance Briefing

Dr. Tuboise Floyd

The AI Governance Briefing is an independent AI governance and strategy podcast for operators navigating institutions disrupted by artificial intelligence. Hosted by Dr. Tuboise Floyd, PhD — founder, researcher, and principal analyst at Human Signal.

The market has split in two. The consumption economy trades in noise, checklists, and compliance theater. The investment economy trades in signal infrastructure, physics, and sovereignty. The AI Governance Briefing is the intelligence feed for the investment economy. We do not trade in content. We trade in leverage.

Each episode applies the TAIMScore™ framework, GASP™, and the L.E.A.C. Protocol™ to reverse-engineer real institutional AI failures — and build governance infrastructure before autonomous systems break the institution. Hosted alongside Creative Director Jeremy Jarvis, the show covers asymmetric strategy, critical infrastructure, and the physics of risk for government contracting and builder sectors.

New episodes, visual briefs, and honest playbooks at humansignal.io/podcast

© 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

The AI Governance Briefing is an independent media and research platform. All episode content — including analysis, case studies, and framework application — is provided for educational and informational purposes only. Nothing in any episode constitutes legal, regulatory, compliance, financial, or professional advice. No advisory or consulting relationship is created by listening to or engaging with this content. Guest opinions are those of the guest alone and do not represent the positions of Human Signal or Dr. Tuboise Floyd. Case studies and institutional failure analyses are based on publicly available information and are presented as pedagogical tools — not legal findings or regulatory determinations.

This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy

  1. AI Accountability Is Broken. Here's Why

    11 APR

    AI Accountability Is Broken. Here's Why

    Episode Summary
    In this episode of Human Signal, Dr. Tuboise Floyd delivers a pointed analysis of why enterprise AI governance is failing at the structural level. The problem isn't a lack of policy — it's that governance was designed for a world that no longer exists. Distributed AI — running across edge devices, vendor stacks, and multi-agent pipelines — has dissolved the single point of control that traditional compliance frameworks depend on.

    Key Takeaway 1: Distributed AI Is a Governance Condition, Not a Technology Trend
    The shift to distributed AI isn't just an infrastructure evolution — it's a fundamental change in where accountability lives. When AI executes across multiple nodes, devices, or third-party systems without unified oversight, you're no longer in a governance framework. You're in a governance gap. Every edge deployment, every federated model, every multi-agent workflow is an accountability question first and a technology question second.

    Key Takeaway 2: The Architecture of Blame Is Predictable — and Avoidable
    The pattern behind every major AI failure in recent years is the same: the vendor says the output was within spec; the integrator says the client configured the workflow; the client says legal approved the policy; legal says the policy covered the old system. Nobody owns the failure. The reason isn't bad actors — it's structural ambiguity. When no one owns the decision at the node, blame distributes as efficiently as the AI does.

    Key Takeaway 3: "Permitted" Is Not the Same as "Admissible"
    A policy that allows a model to run is not the same as governance that can see what the model is doing. This visibility gap — between what is authorized on paper and what is observable in execution — is where accountability collapses. Functional governance requires audit trails, intervention triggers, and independence from vendor contracts built into the architecture itself, not appended to it.

    Dr. Floyd's 3 Diagnostic Questions
    1. Who owns the decision at the node — not the system, the decision? If the answer is vague, you have a gap.
    2. What is the escalation path? A single risk officer cannot handle fifty simultaneous failures across fifty nodes. The architecture must match the distribution.
    3. What accountability exists without the vendor? If your governance breaks when the vendor changes the API, you don't have governance — you have vendor dependency.

    Dr. Floyd's 3 Requirements for Functional Governance
    1. Visibility at every execution point. If you cannot see the node, you cannot govern the node.
    2. Accountability without humans in every loop. Humans cannot scale to distributed AI. Audit trails and intervention triggers must be designed into the system.
    3. Independence. The governance structure must survive vendor changes and contract terminations.

    Closing Reflection
    The winners in the AI era won't be the organizations with the best technology. They'll be the ones with the structural discipline to govern it. This week, ask yourself three things: Can you name every device where your AI is making decisions? If your vendor changed the model tonight, how long would it take you to find out? And who is responsible when failure happens inside a workflow you don't control? Architect for reality — or discover reality when the system fails.

    Subscribe to Human Signal for weekly AI governance briefings from Dr. Tuboise Floyd.

    Chapters / Timestamps
    0:00 - The Illusion of Governance
    0:32 - Distributed AI Outruns Policy
    1:10 - The Architecture of Blame
    1:52 - The Trust Gap Framework
    2:18 - Permitted ≠ Admissible
    2:45 - Redesigning Accountability Architecture
    3:28 - 3 Diagnostic Questions
    4:10 - What Functional Governance Actually Requires

    ABOUT THE HOST
    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES
    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis
    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT
    LinkedIn: linkedin.com/in/drtuboisefloyd
    Email: tuboise@humansignal.io

    TRANSCRIPT
    Full transcript available upon request at support@humansignal.io

    LEGAL
    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    Tags
    AI governance, AI accountability, distributed AI, AI policy, responsible AI, AI compliance, AI risk management, AI at the edge, federated learning, multi-agent systems, edge computing AI, AI governance framework, AI accountability gap, AI oversight, trust gap framework, AI leadership, AI regulation, AI vendor risk, governance architecture, AI decision making, AI audit trail, AI policy failure, Dr. Tuboise Floyd, Human Signal, The Signal AI briefing

    This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy
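    The three requirements above (visibility at every node, audit trails with intervention triggers, and vendor independence) can be sketched in code. This is a purely illustrative sketch, not anything specified in the episode; all names (DecisionRecord, AuditTrail, the node and vendor identifiers) are assumptions made up for the example:

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """One AI decision, logged at the node where it executed."""
        node_id: str   # which device/agent made the decision (visibility)
        decision: str  # what was decided
        owner: str     # the accountable human or team, never the vendor
        vendor: str    # recorded for context, but ownership does not depend on it
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    class AuditTrail:
        """Vendor-independent log: it survives API and contract changes
        because it stores decisions, not vendor-specific payloads."""

        def __init__(self) -> None:
            self.records: list[DecisionRecord] = []

        def log(self, record: DecisionRecord) -> None:
            if not record.owner:
                # Intervention trigger: refuse any decision without a named owner.
                raise ValueError(f"No owner for decision at node {record.node_id}")
            self.records.append(record)

        def unseen_nodes(self, known_nodes: set[str]) -> set[str]:
            """Visibility check: nodes that logged decisions you never registered."""
            return {r.node_id for r in self.records} - known_nodes

    trail = AuditTrail()
    trail.log(DecisionRecord("edge-07", "approve-claim",
                             owner="claims-risk-team", vendor="acme-ai"))
    print(trail.unseen_nodes({"edge-01", "edge-07"}))  # set() — every node accounted for
    ```

    The design choice mirrors the episode's argument: the owner field is mandatory at write time, so accountability cannot be deferred to the vendor after a failure.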

    5 min
  2. AI Governance Open Forum: Critical Thinking, Risk, and “Never Blindly Trust—Always Verify”

    29 MAR

    AI Governance Open Forum: Critical Thinking, Risk, and “Never Blindly Trust—Always Verify”

    EPISODE DESCRIPTION
    At a laid-back campus event, students put their questions about AI governance to Taiye Lambo, founder of the Holistic Information Security Practitioner Institute (HISPI), and Dr. Tuboise Floyd of Human Signal, an AI governance researcher and podcast host. The speakers emphasize that AI literacy is a civic and professional survival skill: employers expect workers to critically evaluate AI outputs, frame AI literacy as risk awareness, and focus on asking the right questions rather than becoming data scientists. Discussion covers deepfakes and short-form media, overreliance on AI (including a lawyer citing fabricated ChatGPT case law), "never blindly trust, always verify," and the need for continuous auditing, accountability, and an "honest human in the loop," especially in clinical and environmental contexts. Students are advised to build strong domain knowledge, think critically, pursue internships, and invest in AI governance and risk certifications over tool-specific training.

    ⏱️ Chapters
    00:00 Welcome and Setup
    00:52 Meet the Experts
    01:57 Taiye on Governance Focus
    02:53 Dr. Floyd Background and Podcast
    04:39 Open Forum Begins
    05:02 AI Literacy for Careers
    07:23 Threat or Opportunity Poll
    10:01 AI Literacy Beyond STEM
    10:49 Spotting Deepfakes in Shorts
    15:35 Using AI Without Replacing Learning
    16:14 Lawyer Case and Overtrusting AI
    18:08 Never Blindly Trust — Verify
    19:06 Wikipedia Analogy and Real Risks
    20:31 Business Ethics Reality Check
    21:06 Continuous Audits in Clinics
    21:28 Human in the Loop Matters
    22:04 Environmental AI Data Gaps
    23:13 Public Trust and Accountability
    23:33 Honest Human Oversight
    25:28 Tokens and Hallucinations
    26:51 Bias in Training Data
    27:56 Interviewing in the AI Era
    30:28 AI Disruption and Generational Shift
    33:21 High-Stakes AI Blind Spots
    36:02 Rapid Fire Career Advice
    41:03 Closing and Next Steps

    GUEST
    Taiye Lambo, Founder & Chief Artificial Intelligence Officer
    Holistic Information Security Practitioner Institute (HISPI)
    🔗 https://www.hispi.org
    🔗 https://projectcerebellum.com
    LinkedIn: linkedin.com/in/taiyelambo

    TAIMScore™ Assessor Workshop
    🔗 https://humansignal.io/taimscore_assessor_workshop

    SUBSCRIBE & SUPPORT
    Subscribe now to lock in the feed. This isn't just content — it's a continuing briefing for the Builder Class.
    Support Human Signal — help fuel six months of new episodes, visual briefs, and honest playbooks.
    🔗 https://humansignal.io/support
    Every contribution sustains the signal.

    ABOUT THE HOST
    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES
    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis
    Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT
    LinkedIn: linkedin.com/in/drtuboisefloyd
    Email: tuboise@humansignal.io

    TRANSCRIPT
    Full transcript available upon request at support@humansignal.io

    TAGS
    AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership

    HASHTAGS
    #AIGovernance #RiskManagement #Innovation #AIPolicy #CIOLeadership #AIEthics #HumanSignal #MilitaryTech #Cybersecurity #ProjectCerebellum

    LEGAL
    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    Takeaways:
    - The importance of AI governance is paramount for ensuring ethical decision-making and risk mitigation.
    - Employers now expect candidates to critically evaluate AI outputs rather than merely utilize them without scrutiny.
    - AI literacy transcends technical skills, emerging as a vital competency for professional survival in today's market.
    - The integration of human oversight in AI systems is essential to prevent unintended consequences and ensure accountability.
    - Understanding the implications of AI data training is crucial, especially in high-stakes environments like healthcare.
    - Being proactive in seeking knowledge through books and mentorship is vital for navigating the evolving landscape of AI.

    This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy

    43 min
  3. When Your Vendor Becomes Your Vulnerability

    26 MAR

    When Your Vendor Becomes Your Vulnerability

    EPISODE DESCRIPTION
    In this episode, Dr. Tuboise Floyd breaks down the Korean Air / KC&D supply chain breach — a forensic autopsy of what happens when data governance doesn’t travel with the data. In December 2025, Korean Air disclosed that 30,000 employee records were stolen. The breach didn’t come through Korean Air’s systems. It came through KC&D Service — a catering subsidiary spun off and sold to private equity in 2020. Five years later, KC&D was still holding Korean Air employee data on an unpatched Oracle ERP server. The Cl0p ransomware group exploited CVE-2025-61882 — CVSS 9.8 — and published 500GB on a dark web leak site. Six TAIMScore™ controls failed simultaneously. Three domains. All because the data moved out of sight — not out of risk. This is a Failure File. Not a warning. A forensic record.

    Key Topics:
    ∙ Supply chain governance and third-party vendor risk
    ∙ What happens when a divestiture doesn’t include data governance
    ∙ The Oracle EBS zero-day and its 100+ organizational victims
    ∙ TAIMScore™ forensic: GOVERN, MAP, and MANAGE domain failures
    ∙ The one question every institution needs to ask today

    GUESTS
    No guests. Solo episode.

    TAIMScore™ Assessor Workshop
    https://humansignal.io/taimscore_assessor_workshop

    SUBSCRIBE & SUPPORT
    Subscribe now to lock in the feed. This isn’t just content — it’s a continuing briefing for the Builder Class.
    Support Human Signal: Help fuel six months of new episodes, visual briefs, and honest playbooks.
    🔗 https://humansignal.io/support
    Every contribution sustains the signal.

    ABOUT THE HOST
    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES
    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis
    Tech Specs: Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT
    LinkedIn: linkedin.com/in/tuboise
    Email: tuboise@humansignal.io

    TRANSCRIPT
    Full transcript available upon request at hello@humansignal.io

    TAGS/KEYWORDS
    AI Governance, Supply Chain Risk, Third-Party Vendor Risk, Data Breach, Korean Air, KC&D, Cl0p Ransomware, Oracle EBS, CVE-2025-61882, TAIMScore, TAIM Framework, Failure File, Institutional Risk, Dr. Tuboise Floyd, Human Signal

    HASHTAGS
    #AIGovernance #SupplyChainRisk #DataBreach #TAIMScore #FailureFile #ThirdPartyRisk #CyberSecurity #InstitutionalRisk #HumanSignal #AIGovernanceBriefing

    LEGAL
    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy

    6 min
  4. AI Governance: Balancing Innovation With Risk Management

    25 MAR

    AI Governance: Balancing Innovation With Risk Management

    EPISODE DESCRIPTION
    In this episode, Dr. Tuboise Floyd is joined by Col. Kathy Swacina (USA Ret.), CIO of Sherpawerx, and Taiye Lambo, Founder and Chief Artificial Intelligence Officer of the Holistic Information Security Practitioner Institute (HISPI), to discuss Project Cerebellum, AI governance, and balancing innovation with risk management. We delve into the critical need for a holistic control layer in AI development. Without appropriate checks and balances, the rapid race to be first with AI could lead to dire consequences. The discussion touches on the role of CIOs, the moral compass of AI systems, and the potential risks of operating without proper oversight. This is a cautionary tale for executives and developers alike, emphasizing the importance of a balanced approach to AI innovation and governance.

    Key Topics:
    ∙ Project Cerebellum and holistic AI control layers
    ∙ The race to AI deployment vs. responsible governance
    ∙ The evolving role of CIOs in AI oversight
    ∙ Building moral compasses into AI systems
    ∙ Risk management frameworks that actually work

    GUESTS
    Col. Kathy Swacina (USA Ret.)
    CIO, Sherpawerx
    🔗 https://sherpawerx.com/
    Taiye Lambo
    Founder & Chief Artificial Intelligence Officer
    Holistic Information Security Practitioner Institute (HISPI)
    🔗 https://www.hispi.org/
    🔗 https://projectcerebellum.com

    TAIMScore™ Assessor Workshop
    https://humansignal.io/taimscore_assessor_workshop

    SUBSCRIBE & SUPPORT
    Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.
    Support Human Signal: Help fuel six months of new episodes, visual briefs, and honest playbooks.
    🔗 https://humansignal.io/support
    Every contribution sustains the signal.

    ABOUT THE HOST
    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES
    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis
    Tech Specs: Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT
    LinkedIn: linkedin.com/in/tuboise
    Email: tuboise@humansignal.io
    GoFundMe: https://gofund.me/117dd0d3d

    TRANSCRIPT
    Full transcript available upon request at hello@humansignal.io

    TAGS/KEYWORDS
    AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership

    HASHTAGS
    #AIGovernance #RiskManagement #Innovation #AIPolicy #CIOLeadership #AIEthics #HumanSignal #MilitaryTech #Cybersecurity #ProjectCerebellum

    LEGAL
    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy

    51 min
  5. When Your GPS Happily Drives You Into The Sea

    6 MAR

    When Your GPS Happily Drives You Into The Sea

    EPISODE DESCRIPTION
    An Amazon delivery van reportedly got stranded on the Broomway — one of Britain's most dangerous tidal tracks in Essex — after blindly following GPS directions toward Foulness Island. No alert. No override. No human in the loop. This isn't a story about bad technology. It's a story about ungoverned automation making context-free decisions about human movement in the physical world. And it's exactly the kind of incident the HISPI Project Cerebellum AI Incidents database exists to document — so organizations can stop repeating the same failures.

    🗺️ The Incident

    🔬 TAIMScore™ Failure Analysis
    Running this incident through a TAIMScore™ lens reveals failure across three critical dimensions:

    ❌ Safety — FAIL
    No guardrails for hazardous geographic areas. The routing system had no awareness of tidal zones, flood-risk roads, or environmental danger conditions. A system operating in the physical world with zero environmental context is an unacceptable safety liability.

    ❌ Trust — FAIL
    When workers discover that guidance systems can route them into danger, trust collapses — not just in that system, but in all automated guidance. The second-order effect is that workers either override systems entirely (defeating the purpose) or follow blindly (accepting the risk). Neither is acceptable.

    ❌ Responsibility — FAIL
    Who owns the risk when an algorithm routes a human into danger? The driver? The dispatcher? The software vendor? The organization deploying the tool? Without clear accountability architecture, no one owns it — until someone gets hurt.

    🎯 The Core Thesis
    The technology works exactly as designed. The governance around it does not exist.

    🔗 Resources & Links
    Referenced Tools & Projects
    - HISPI Project Cerebellum — AI Incidents Database tracking real-world AI failures across sectors
    - TAIMScore™ — Structured scoring framework for AI governance and risk assessment
    - TAIMScore™ Assessor Workshop — Live sessions where teams score real incidents and design controls

    Workshop Registration
    🔗 humansignal.io/taimscore_assessor_workshop

    The Broomway
    - One of the oldest roads in England, dating to the 1600s
    - Runs across tidal mudflats in the Thames Estuary
    - Floods rapidly and without visible warning
    - Has claimed numerous lives historically
    - Considered one of the most dangerous roads in the United Kingdom

    PRODUCTION NOTES
    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis
    Tech Specs: Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT
    LinkedIn: linkedin.com/in/tuboise
    Email: tuboise@humansignal.io
    GoFundMe: https://gofund.me/117dd0d3d

    TRANSCRIPT
    Full transcript available upon request at support@humansignal.io

    TAGS/KEYWORDS
    AI Governance, Risk Management, AI Policy, Tech Leadership, Institutional AI, Future of Work, AI Ethics, Governance Failure, Enterprise AI, Government AI, Spokane Transit, Amazon Hiring Bias, Workflow Design

    HASHTAGS
    #HumanSignalFailureFile #AIGovernance #TAIMScore #ProjectCerebellum #AIFailure #UngovernedAutomation #GPSFailure #LogisticsAI #AIRisk #ResponsibleAI #AIAccountability #HumanInTheLoop #AIIncidents #HISPI #AIEthics

    LEGAL
    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy

    2 min
  6. Making Digital Accessibility Work In The AI Era

    2 MAR

    Making Digital Accessibility Work In The AI Era

    Dr. Tuboise Floyd hosts Dr. Michele A. Williams to explain why digital accessibility failures, present across 97 percent of the web, create equity, resilience, and trust risks that AI can magnify at scale. Williams contrasts the medical and social models of disability, addresses ableism and language (person-first vs. identity-first), and argues that checklists cannot replace lived experience or disabled participation in UX research and leadership. They discuss how inaccessible code, tools, and AI trained on inaccessible data produce issues like missing labels, broken keyboard paths, and poor semantic structure, and warn against "disability dongles" that add tech instead of removing systemic barriers. Dr. Williams outlines a practical 90-day plan: establish a baseline with scans and process mapping.

    GUEST
    Dr. Michele A. Williams
    Making Accessibility Work
    UX and Accessibility Consultant
    Author of Accessible UX Research (Smashing Media)
    https://mawconsultingllc.com
    https://www.linkedin.com/in/micheleawilliams1

    Accessible UX Research
    Publisher: Smashing Magazine
    https://www.smashingmagazine.com/2025/06/accessible-ux-research-pre-release/
    YouTube: https://youtu.be/pxXLNsbyJhc?si=Dt9mf2HK4AtyCx6_

    00:00 Accessibility Wake Up Call
    00:57 Meet Dr Michele Williams
    02:07 Equity Resilience Trust
    04:01 Disability Mindset Shift
    05:59 Why Lived Experience Matters
    07:14 Person First vs Identity First
    13:01 AI Promise and Harm
    15:23 Social Model In Practice
    19:58 Beyond Screen Readers
    25:02 Exclusion Inside Real Teams
    26:58 Semantic Code Chaos
    28:32 Standards Lag Tech
    29:12 Siri Zoom Panic
    31:23 Disability Dongles
    33:36 AI Hype Reality
    37:25 Beyond Checklists
    40:32 90 Day Baseline
    42:30 Change Defaults
    44:17 Normalize Inclusion
    46:47 Nothing About Us
    49:13 One Action This Week
    50:35 Closing Credits

    SUBSCRIBE & SUPPORT
    Subscribe now to lock in the feed. This isn't just content; it's a continuing briefing for the Builder Class.
    Support Human Signal: Help fuel six months of new episodes, visual briefs, and honest playbooks.
    🔗 https://humansignal.io/support
    Every contribution sustains the signal.

    ABOUT THE HOST
    Dr. Tuboise Floyd is the founder of Human Signal, a strategy lab and podcast for people deploying AI inside government agencies, universities, and enterprise systems. A PhD social scientist and former federal contracting strategist, he reverse-engineers system failures and designs AI governance controls that survive real humans, real incentives, and real pressure.

    PRODUCTION NOTES
    Host & Producer: Dr. Tuboise Floyd
    Creative Director: Jeremy Jarvis
    Tech Specs: Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    CONNECT
    LinkedIn: linkedin.com/in/tuboise
    Email: tuboise@humansignal.io
    https://humansignal.io/

    TRANSCRIPT
    Full transcript available upon request at hello@humansignal.io

    TAGS/KEYWORDS
    AI Governance, Risk Management, Innovation, Project Cerebellum, CIO Leadership, AI Ethics, Military Technology, Cybersecurity, AI Policy, Enterprise AI, Government AI, Technology Leadership

    HASHTAGS
    #AIGovernance #RiskManagement #Innovation #AIPolicy #CIOLeadership #AIEthics #HumanSignal #MilitaryTech #Cybersecurity #ProjectCerebellum

    LEGAL
    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    Companies mentioned in this episode: Smashing Magazine, Accessibe

    Takeaways:
    - A staggering 97% of the web remains rife with accessibility barriers that hinder disabled individuals.
    - Accessibility is not merely a compliance issue but a vital consideration that impacts user experience and organizational culture.
    - To create truly inclusive products, it is essential to incorporate the perspectives and experiences of disabled individuals throughout the design process.
    - Artificial intelligence must not be viewed as the sole solution for accessibility; rather, it should be integrated thoughtfully with human expertise and oversight.

    This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy

    52 min
  7. Digital Accessibility In An AI World

    23 FEB

    Digital Accessibility In An AI World

    Digital Accessibility In An AI World 2026

    As a podcast host exploring the intersection of humanity and technology, I keep asking: are we really including everyone in our digital transformation? Dr. Michele A. Williams, Owner & Accessibility Consultant and author of the new book Accessible UX Research, challenges us to move beyond checklists and truly design with, not for, disabled users.

    Coming soon to the Human Signal podcast: Dr. Michele A. Williams, PhD, joins us to break down how to make digital accessibility work in an AI world.
    https://mawconsultingllc.com/

    Accessibility is not just about digital spaces. Accessibility is about fundamental human rights.

    What Gen X leaders and professionals need to know:
    ✓ How to spot invisible exclusion in UX research and code
    ✓ Moving beyond compliance checklists to build truly inclusive systems
    ✓ Using AI for captions and alt text without creating new barriers
    ✓ 90-day accessibility practices your team can sustain

    Because real inclusion means ensuring everyone has access to the places and systems they need, whether digital or physical.

    Production notes:
    Tech Specs: Recorded with true analog warmth. No artificial polish, no algorithmic smoothing. Just pure signal and real presence for leaders who value authentic sound.

    📧 Contact & Subscribe
    LinkedIn: linkedin.com/in/tuboise
    Email: tuboise@humansignal.io
    https://humansignal.io/

    Support Human Signal: Help fuel six months of new episodes, visual briefs, and honest playbooks.
    🔗 https://humansignal.io/support
    Every contribution sustains the signal.

    Transcript
    Full transcript available upon request at hello@humansignal.io

    Tags
    #HumanSignal #DigitalAccessibility #ArtificialIntelligence #InclusiveDesign #UXResearch #GenXLeaders #TechLeadership #Accessibility

    Legal
    © 2026 Dr. Tuboise Floyd. All rights reserved. Content is part of the Presence Signaling Architecture® (PSA), GASP™ and L.E.A.C. Protocol™.

    This podcast uses the following third-party services for analysis: OP3 - https://op3.dev/privacy

    2 min

