M365.FM - Modern work, security, and productivity with Microsoft 365

Mirko Peters (Microsoft 365 consultant and trainer)

Welcome to M365.FM — your essential podcast for everything Microsoft 365, Azure, and beyond. Join us as we explore the latest developments across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, and the entire Microsoft ecosystem. Each episode delivers expert insights, real-world use cases, best practices, and interviews with industry leaders to help you stay ahead in the fast-moving world of cloud, collaboration, and data innovation. Whether you're an IT professional, business leader, developer, or data enthusiast, M365.FM brings the knowledge, trends, and strategies you need to thrive in the modern digital workplace. Tune in, level up, and make the most of everything Microsoft has to offer. M365.FM is part of the M365-Show Network. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

  1. You Don’t Have a Microsoft Tool Problem — You Have a People Problem

    12 hours ago


    Most Microsoft 365 governance initiatives fail — not because the platform is too complex, but because organizations govern tools instead of systems. In this episode, we break down why assigning “Teams owners,” “SharePoint admins,” and “Purview specialists” guarantees chaos at scale, and how fragmented ownership turns Microsoft 365 into a distributed decision engine with no accountability. You’ll learn the real governance failure patterns leaders miss, the litmus test that exposes whether your tenant is actually governed, and the system-first operating model that fixes identity drift, collaboration sprawl, automation risk, and compliance theater. If your tenant looks “configured” but still produces incidents, audit surprises, and endless exceptions — this episode explains why.

    Who This Episode Is For (Search Intent Alignment)
    This episode is for you if you are searching for:
    - Microsoft 365 governance best practices
    - Why Microsoft 365 governance fails
    - Teams sprawl and SharePoint oversharing
    - Identity governance problems in Entra ID
    - Power Platform governance and Power Automate risk
    - Purview DLP and compliance not working
    - Copilot security and data exposure concerns
    - How to design an operating model for Microsoft 365
    This is not a tool walkthrough. It’s a governance reset.

    Key Topics Covered
    1. Why Microsoft 365 Governance Keeps Failing — Most organizations blame complexity, licensing, or “user behavior.” The real failure is structural: unclear accountability, siloed tool ownership, and governance treated as configuration instead of enforcement over time.
    2. Governing Tools vs Governing Systems — Microsoft 365 is not a collection of independent apps. It is a single platform making thousands of authorization decisions every minute across identity, collaboration, data, and automation. Tool-level ownership cannot control system-level behavior.
    3. Microsoft 365 as a Distributed Decision Engine — Every click, link, share, and flow run is a policy decision. If identity, permissions, and policies drift, the platform still executes — just not in ways leadership can predict or defend.
    4. The Org Chart Problem — Fragmented ownership creates “conditional chaos”: Teams admins optimize adoption, SharePoint admins lock down storage, Security tightens Conditional Access, Compliance rolls out Purview, and makers automate everything. Each role succeeds locally — and fails globally.
    5. Failure Pattern #1: Identity Blind Spots — Standing privilege, mis-scoped roles, forgotten guests, and unmanaged service principals turn governance into luck. Identity is not a directory — it’s an authorization compiler.
    6. Failure Pattern #2: Collaboration Sprawl & Orphaned Workspaces — Teams and SharePoint sites multiply without lifecycle ownership. Owners leave. Data remains. Search amplifies exposure. Copilot accelerates impact. (See the ownership-drift sketch after these notes.)
    7. Failure Pattern #3: Automation Without Governance — Power Automate is delegated execution, not a toy. Default environments, unrestricted connectors, and personal flows become invisible production systems that outlive their creators.
    8. Compliance Theater and Purview Illusions — Having DLP, retention, and labels does not mean you are governed. Policies without owners become noise. Alerts without authority become ignored. Compliance without consequences is theater.
    9. The Leadership Litmus Test — Ask one question to expose governance reality: “If this setting changes today, who feels it first — and how would we know?” If the answer is a tool name, you don’t have governance.
    10. The System-First Governance Model — Real governance has three parts: Intent (business-owned constraints), Enforcement (defaults that hold under pressure), and Feedback (routine drift detection and correction).
    11. Role Reset: From Tool Owners to System Governors — This episode defines the roles most organizations are missing: Platform Governance Lead, Identity & Access Steward, Information Flow Owner, and Automation Integrity Owner. Governance is not a committee. It’s outcome ownership.

    What You’ll Walk Away With
    - A mental model for Microsoft 365 governance that actually matches platform behavior
    - A way to explain governance failures to executives without blaming users
    - A litmus test leaders can use immediately
    - A practical operating model that reduces exceptions instead of managing them
    - Language to stop funding “more admins” and start funding accountability

    Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support. If this clashes with how you’ve seen it play out, I’m always curious. I use LinkedIn for the back-and-forth.
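    To make “ownerless resources” concrete: below is a minimal sketch, in Python against the Microsoft Graph REST API, that lists Microsoft 365 groups with no remaining owners. It assumes an app registration with Group.Read.All and a token acquired elsewhere; the token placeholder and the printed label are illustrative, not part of any shipped tooling.

```python
# Minimal sketch of ownership-drift detection: find Microsoft 365 groups
# that have no owners left, via the Microsoft Graph REST API.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder: acquire via MSAL or similar
headers = {
    "Authorization": f"Bearer {TOKEN}",
    # Advanced queries against directory objects use eventual consistency
    "ConsistencyLevel": "eventual",
}

def ownerless_groups():
    """Yield (id, displayName) for Microsoft 365 groups with zero owners."""
    url = (f"{GRAPH}/groups?$count=true"
           "&$filter=groupTypes/any(c:c eq 'Unified')"
           "&$select=id,displayName")
    while url:
        page = requests.get(url, headers=headers).json()
        for g in page.get("value", []):
            owners = requests.get(f"{GRAPH}/groups/{g['id']}/owners?$select=id",
                                  headers=headers).json()
            if not owners.get("value"):
                yield g["id"], g["displayName"]
        url = page.get("@odata.nextLink")  # follow server-side paging

for gid, name in ownerless_groups():
    print(f"ORPHANED: {name} ({gid})")
```

    In a governed tenant this list should route into a remediation workflow with a deadline, not into a report that nobody is obliged to act on.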

    1 hr 18 min
  2. The Resilience Mandate: Leading Security in the Age of AI

    1 day ago


    Most organizations believe they are well secured because they have deployed modern controls: phishing-resistant MFA, EDR, Conditional Access, a Zero Trust roadmap, and dashboards full of reassuring green checks. And yet breaches keep happening. Not because tools are missing—but because trust was never engineered as a system. This episode dismantles the illusion of control and reframes security as an operating capability, not a checklist. We explore why identity-driven incidents dominate modern breaches, how authorization failures hide inside “normal business,” and why decision latency—not lack of detection—is what turns minor compromises into enterprise-level crises. The conversation is anchored in real Microsoft platform mechanics, not theory, and focuses on one executive outcome: reducing Mean Time to Respond (MTTR) for identity-driven incidents.

    Opening Theme — The Control Illusion
    Security coverage feels like control. It isn’t. Coverage tells you what features are enabled. Control is about whether your trust model is enforceable when reality changes. This episode introduces the core shift leaders must make: from prevention fantasy to resilience discipline, and from dashboards to decision speed.

    Why “Well-Secured” Organizations Still Get Breached
    Breaches don’t happen because a product wasn’t bought. They happen because trust models decay quietly over time. Most enterprises still operate on outdated assumptions:
    - Authentication is treated as a finish line
    - Networks are assumed to be a boundary
    - Permissions are assumed to represent intent
    - Alerts are mistaken for response
    In reality, identity has become the enterprise control plane. And attackers don’t need to “break in” anymore—they operate using the pathways organizations have already built. MFA can be perfect, and the breach still succeeds, because the failure mode isn’t login. It’s authorization.

    Identity Is the Control Plane, Not a Directory
    Identity is no longer a place where users live. It is a distributed decision engine that determines who can act, what they can change, and how far damage can spread. Every file access, API call, admin action, workload execution, and AI agent request is an authorization decision. When identity is treated like plumbing instead of architecture, access becomes accidental, over-permissioned, and ungovernable under pressure. Human and non-human identities—service principals, automation, connectors, and agents—now make up a massive portion of enterprise authority, often with minimal ownership or review.

    Authorization Failures Beat Authentication Failures
    The most damaging incidents don’t look like hacking. They look like work. Authorization failures hide inside legitimate behavior:
    - Valid tokens
    - Allowed API calls
    - Approved roles
    - Standing privileges
    - OAuth grants that “made something work”
    Privilege creep isn’t misconfiguration—it’s entropy. Access accumulates because removal feels risky and slow. Over time, the organization loses the ability to answer critical questions during an incident: What breaks if we revoke this access? Who owns this identity? Is it safe to act now? When hesitation sets in, attackers win on time.

    Redefining Success: From Prevention Fantasy to Resilience Discipline
    “No breaches” is not a strategy. It’s weather. Prevention reduces probability. Resilience reduces impact. The real objective is bounded failure: limiting what a compromised identity can do, how long it can act, and how quickly the organization can recover. This shifts executive language from tools to outcomes:
    - Continuity — Can the business keep operating during containment?
    - Trust preservation — Can stakeholders see that you are in control?
    - Decision speed — How fast can you detect, decide, enforce, and recover?
    MTTR becomes the most honest security metric leadership has.

    Identity Governance as a Business Discipline
    Governance is not about saying “no.” It’s about making “yes” safe. Real identity governance introduces time, ownership, and accountability into access:
    - Access is scoped, sponsored, and expires
    - Privilege is eligible, not standing
    - Reviews restate intent instead of rubber-stamping history
    - Contractors, partners, and machine identities are first-class risk
    Without governance, access becomes archaeology. And during an incident, archaeology becomes paralysis.

    Scenario 1 — Entra ID: Governance + ITDR as the Foundation
    This episode reframes Entra as a trust compiler, not a directory. When identity governance and Identity Threat Detection & Response (ITDR) are treated as foundational:
    - Access becomes intentional and time-bound
    - Privileged actions are elevated quickly but temporarily
    - Identity signals drive enforcement, not just investigation
    - Response actions are safe because access design is clean
    Governance removes political hesitation. ITDR turns signals into decisive containment.

    Zero Trust Is Not a Product Rollout
    Turning on Conditional Access is not Zero Trust. Zero Trust is an operating model where trust decisions are dynamic, exceptions are governed, and enforcement actually happens. Programs fail when:
    - Exceptions accumulate without expiration
    - Ownership is unclear across identity, endpoint, network, and apps
    - Trust assumptions are documented but unenforceable
    Real Zero Trust reduces friction for normal work and constrains abnormal behavior—without relying on constant prompts.

    Trust Decays Continuously, Not at Login
    The session—not the login screen—is the modern attack surface. Authentication proves who you are once. Trust must be continuously evaluated after that. When risk changes and enforcement doesn’t, attackers are granted time by design. Continuous trust requires revocation that happens in business time, not token-expiry time.

    Scenario 2 — Continuous Access Evaluation (CAE)
    CAE makes Zero Trust real by collapsing the gap between decision and enforcement. When risk changes:
    - Sessions are re-evaluated in near real time
    - Access is revoked inside the app, not hours later
    - Precision containment replaces blanket shutdowns
    CAE exposes maturity fast: which apps honor revocation, which rely on legacy assumptions, and where exception culture quietly undermines the trust model.

    Detection Without Response Is Expensive Telemetry
    Alerting is not containment. Most organizations are rich in signal and poor in action. Analysts become human middleware, stitching context across tools while attackers exploit latency. Resilience requires a conversion layer:
    - Pre-defined, reversible containment actions
    - Clear authority
    - Automation that removes human latency
    - Humans focused on judgment, not mechanics

    Scenario 3 — Defender Signals Routed into ServiceNow
    This scenario shows how detection becomes coordinated response:
    - Defender correlates identity, endpoint, SaaS, and cloud signals
    - ServiceNow governs execution, approvals, and recovery
    - Automation handles first-response mechanics
    - Humans decide the high-blast-radius calls
    MTTR becomes measurable, improvable, and defensible at the board level.

    Safe Autonomy: The Real Objective
    The goal isn’t more control—it’s safe autonomy. Teams must move fast without creating existential risk. That requires:
    - Dynamic trust decisions
    - Enforceable constraints
    - Fast revocation
    - Recovery designed as a system
    When revocation is slow, security compensates with friction. When revocation is fast, autonomy becomes safe.

    The Leadership Metric: Reduce MTTR
    MTTR is not a SOC metric. It’s an enterprise resilience KPI. Leaders should demand visibility into:
    - Time to detect
    - Time to decide
    - Time to enforce
    - Time to recover
    If any link is slow, the organization is granting attackers time—by design. (A minimal sketch of this decomposition follows these notes.)

    Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support. If this clashes with how you’ve seen it play out, I’m always curious. I use LinkedIn for the back-and-forth.
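    To make the MTTR decomposition concrete, here is a toy sketch that splits “time to respond” into the four links named above. The incident record and its field names are hypothetical, not a Defender or ServiceNow schema.

```python
# Toy MTTR decomposition: detect / decide / enforce / recover, so each link
# in the response chain becomes measurable on its own.
from datetime import datetime

incident = {  # illustrative timestamps for one identity-driven incident
    "compromise": datetime(2024, 5, 6, 9, 0),
    "detected":   datetime(2024, 5, 6, 9, 42),   # first correlated alert
    "decided":    datetime(2024, 5, 6, 11, 5),   # containment approved
    "enforced":   datetime(2024, 5, 6, 11, 9),   # sessions revoked
    "recovered":  datetime(2024, 5, 6, 15, 30),  # access re-established
}

stages = [("detect",  "compromise", "detected"),
          ("decide",  "detected",   "decided"),
          ("enforce", "decided",    "enforced"),
          ("recover", "enforced",   "recovered")]

for name, start, end in stages:
    minutes = (incident[end] - incident[start]).total_seconds() / 60
    print(f"time to {name:<8}: {minutes:6.0f} min")

mttr = (incident["recovered"] - incident["compromise"]).total_seconds() / 60
print(f"MTTR (end to end): {mttr:.0f} min")
```

    Run against real incident data, the slowest stage tells leadership exactly where attackers are being granted time.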

    1 hr 19 min
  3. The Architecture of Excellence: Why AI Makes Humans Irreplaceable

    2 days ago


    Most organizations still talk about AI like it’s a faster stapler: a productivity feature you turn on. That framing is comforting—and wrong. Work now happens through AI, with AI, and increasingly because of AI. Drafts appear before debate. Summaries replace discussion. Outputs begin to masquerade as decisions. This episode argues that none of this makes humans less relevant—it makes them more critical. Because judgment, context, and accountability do not automate. To understand why, the episode introduces a simple but powerful model: collaboration has structural, cognitive, and experiential layers—and AI rewires all three.

    1. The Foundational Misunderstanding: “Deploy Copilot”
    The core mistake most organizations make is treating Copilot like a feature rollout instead of a sociotechnical redesign. Copilot is not “a tool inside Word.” It is a participant in how decisions get formed. The moment AI drafts proposals, summarizes meetings, and suggests next steps, it starts shaping what gets noticed—and what disappears. That’s not assistance. That’s framing. Three predictable failures follow:
    - Invisible co-authorship, where accountability for errors becomes unclear
    - Speed up, coherence down, where shared understanding erodes
    - Ownership migration, where humans shift from authors to reviewers
    The result isn’t better collaboration—it’s epistemic drift. The organization stops owning how it knows.

    2. The Three-Layer Collaboration Model
    To avoid slogans, the episode introduces a practical framework:
    - Structural: meetings, chat, documents, workflows, and where work “lives”
    - Cognitive: sensemaking, framing, trade-offs, and shared mental models
    - Experiential: psychological safety, ownership, pride, and voice
    Most organizations only manage the structural layer. AI touches all three simultaneously. Optimizing one while ignoring the others creates speed without resilience.

    3–5. Structural Drift: From Events to Artifacts
    Meetings are no longer events—they are publishing pipelines. Chat shifts from dialogue to confirmation. Documents become draft-first battlegrounds where optimization replaces reasoning. AI-generated recaps, summaries, and drafts become the organization’s memory by repetition, not accuracy. Whoever controls the artifact controls the narrative. Governance quietly moves from people to prose.

    6–10. Cognitive Shift: From Assistance to Co-Authorship
    Copilot doesn’t just help write—it proposes mental scaffolding. Humans move from constructing models to reviewing them. Authority bias creeps in: “the AI suggested” starts ending conversations. Alternatives disappear. Assumptions go unstated. Epistemic agency erodes. Work Graph and Work IQ intensify this effect by making context machine-readable. Relevance increases—but so does the danger of treating inferred narrative as truth. Context becomes the product. Curation becomes power.

    11–13. Experiential Impact: Voice, Ownership, and Trust
    Psychological safety changes shape. Disagreeing with AI output feels like disputing reality. Dissent goes private. Errors become durable. Productivity rises, but psychological ownership weakens. People ship work they can’t fully defend. Pride blurs. Accountability diffuses. Viva Insights can surface these signals—but only if leaders treat them as drift detectors, not surveillance tools.

    14. The Productivity Paradox
    AI increases efficiency while quietly degrading coherence. Outputs multiply. Understanding thins. Teams align on text, not intent. Speed masks fragility—until rework, reversals, and incidents expose it. This is not an adoption problem. It’s a decision architecture problem.

    15. The Design Principle: Intentional Friction
    Excellence requires purposeful friction at high-consequence moments. Three controls keep humans irreplaceable:
    - Human-authored problem framing
    - Mandatory alternatives
    - Visible reasoning and ownership
    Friction is not bureaucracy. It is steering.

    16. Case Study: Productivity Up, Confidence Sideways
    A real team adopted Copilot and gained speed—but lost debate, ownership, and confidence. Recovery came not from reducing AI use, but from making AI visible, separating generation from approval, and restoring human judgment where consequences lived.

    17–18. Leadership Rule & Weekly Framework
    Make AI visible where accountability matters. Every week, leaders should ask:
    - Does this require judgment and liability?
    - Does this shape trust, power, or culture?
    - Would removing human authorship reduce learning or debate?
    If yes: human-required, with visible ownership and reasoning. If no: automate aggressively. (A toy triage function follows these notes.)

    19. Collaboration Norms for the AI Era
    - Recaps are input, not truth
    - Chat must preserve space for dissent
    - Documents must name owners and assumptions
    - Canonical context must be intentional
    These are not cultural aspirations. They are entropy controls.

    Conclusion — The Question You Can’t Outsource
    AI doesn’t replace humans. It exposes which humans still matter. The real leadership question is not how to deploy Copilot. It’s this:

    Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support. If this clashes with how you’ve seen it play out, I’m always curious. I use LinkedIn for the back-and-forth.
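    As a concrete rendering of the weekly framework, here is a deliberately small sketch that encodes the three questions as a triage function. The questions and the human-required/automate split come from the episode; the function name and return strings are illustrative.

```python
# Toy triage rule: if any of the three questions is "yes", the work stays
# human-owned with visible reasoning; otherwise it is safe to automate.
def triage(requires_judgment_or_liability: bool,
           shapes_trust_power_or_culture: bool,
           removing_authorship_reduces_learning: bool) -> str:
    """Return who should own a piece of work in an AI-assisted workflow."""
    if (requires_judgment_or_liability
            or shapes_trust_power_or_culture
            or removing_authorship_reduces_learning):
        return "human-required: visible ownership and reasoning"
    return "automate aggressively"

# Example: a routine meeting recap is automatable; a reorg proposal is not.
print(triage(False, False, False))  # -> automate aggressively
print(triage(True, True, True))     # -> human-required: ...
```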

    1 hr 17 min
  4. The End of Outsourced Judgment: Why Your AI Strategy is Scaling Confusion

    3 days ago


    Most organizations think their AI strategy is about adoption: licenses, prompts, champions. They’re wrong. The real failure is simpler and more dangerous—outsourcing judgment to a probabilistic system and calling it productivity. Copilot isn’t a faster spreadsheet or deterministic software. It’s a cognition engine that produces plausible language at scale. This episode explains why treating cognition like a tool creates an open loop where confusion scales faster than capability—and why collaboration, not automation, is the only sustainable model.

    Chapter 1 — Why Tool Metaphors Fail
    Tool metaphors assume determinism: you act, the system executes, and failure is traceable. Copilot breaks that contract. It generates confident, coherent output that looks like understanding—but coherence is not correctness. The danger isn’t hallucination. It’s substitution. AI outputs become plans, policies, summaries, and narratives that feel “done,” even when no human ever accepted responsibility for what they imply. Without explicitly inverting the relationship—AI proposes, humans decide—judgment silently migrates to the machine.

    Chapter 2 — Cognitive Collaboration (Without Romance)
    Cognitive collaboration isn’t magical. It’s mechanical. The AI expands the option space. Humans collapse it into a decision. That requires four non-negotiable human responsibilities:
    - Intent: stating what you are actually trying to accomplish
    - Framing: defining constraints, audience, and success criteria
    - Veto power: rejecting plausible but wrong outputs
    - Escalation: forcing human checkpoints on high-impact decisions
    If those aren’t designed into the workflow, Copilot becomes a silent decision-maker by default.

    Chapter 3 — The Cost Curve: AI Scales Ambiguity Faster Than Capability
    AI amplifies what already exists. Messy data scales into messier narratives. Unclear decision rights scale into institutional ambiguity. Avoided accountability scales into plausible deniability. The real cost isn’t hallucination—it’s the rework tax:
    - Verification of confident but ungrounded claims
    - Cleanup of misaligned or risky artifacts
    - Incident response and reputational repair
    AI shifts labor from creation to evaluation. Evaluation is harder to scale—and most organizations never budget time for it.

    Chapter 4 — The False Ladder: Automation → Augmentation → Collaboration
    Organizations like to believe collaboration is just “more augmentation.” It isn’t. Automation executes known intent. Augmentation accelerates low-stakes work. Collaboration produces decision-shaping artifacts. When leaders treat collaboration like augmentation, they allow AI-generated drafts to function as judgments—without redefining accountability. That’s how organizations slide sideways into outsourced decision-making.

    Chapter 5 — Mental Models to Unlearn
    This episode dismantles three dangerous assumptions:
    - “AI gives answers” — it gives hypotheses, not truth
    - “Better prompts fix outcomes” — prompts can’t replace intent or authority
    - “We’ll train users later” — early habits become culture
    Prompt obsession is usually a symptom of fuzzy strategy. And “training later” just lets the system teach people that speed matters more than ownership.

    Chapter 6 — Governance Isn’t Slowing You Down—It’s Preventing Drift
    Governance in an AI world isn’t about controlling models. It’s about controlling what the organization is allowed to treat as true. Effective governance enforces:
    - Clear decision rights
    - Boundaries around data and interpretation
    - Audit trails that survive incidents
    Without enforcement, AI turns ambiguity into precedent—and precedent into policy.

    Chapter 7 — The Triad: Cognition, Judgment, Action
    This episode introduces a simple systems model:
    - Cognition proposes possibilities
    - Judgment selects intent and tradeoffs
    - Action enforces consequences
    Break any link and you get noise, theater, or dangerous automation. Most failed AI strategies collapse cognition and judgment into one fuzzy layer—and then wonder why nothing sticks.

    Chapter 8 — Real-World Failure Scenarios
    We walk through three places outsourced judgment fails fast:
    - Security incident triage: analysis without enforced response
    - HR policy interpretation: plausible answers becoming doctrine
    - IT change management: polished artifacts replacing real risk acceptance
    In every case, the AI didn’t cause the failure. The absence of named human decisions did.

    Chapter 9 — What AI Actually Makes More Valuable
    AI doesn’t replace thinking. It industrializes decision pressure. The skills that matter more, not less:
    - Judgment under uncertainty
    - Problem framing
    - Context awareness
    - Ethical ownership of consequences
    Strong teams use AI as scaffolding. Weak teams use it as an authority proxy. Over time, the gap widens.

    Chapter 10 — Minimal Prescriptions That Remove Deniability
    No frameworks. No centers of excellence. Just three irreversible changes:
    - Decision logs with named owners
    - Judgment moments embedded in workflows
    - Individual accountability, not committee diffusion
    If you can’t answer who decided what, why, and under which tradeoffs in under a minute—you didn’t scale capability. You scaled plausible deniability. (A minimal decision-log sketch follows these notes.)

    Conclusion — Reintroducing Judgment Into the System
    AI scales whatever you already are. If you lack clarity, it scales confusion. The fix isn’t smarter models—it’s making judgment unavoidable. Stop asking “What does the AI say?” Start asking “Who owns this decision?” Subscribe for the next episode, where we break down how to build judgment moments directly into the M365–ServiceNow operating model.

    Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support. If this clashes with how you’ve seen it play out, I’m always curious. I use LinkedIn for the back-and-forth.
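    To illustrate “decision logs with named owners,” here is a minimal sketch of such a log as a data structure. The fields are assumptions chosen for illustration; the one hard rule it enforces is the episode’s: no entry without a named human owner.

```python
# Minimal decision log: every entry names a human owner, states intent and
# tradeoffs, and makes AI assistance visible rather than hidden.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    what: str                  # the decision itself, not the AI draft
    owner: str                 # a named human, never a committee
    why: str                   # intent and constraints, stated by the owner
    tradeoffs: str             # what was knowingly given up
    ai_assisted: bool = False  # generation is visible, not hidden
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def record(log: list, d: Decision) -> None:
    """Refuse entries without a named owner; deniability does not scale here."""
    if not d.owner.strip():
        raise ValueError("no named owner: refusing to log the decision")
    log.append(d)

log: list = []
record(log, Decision(
    what="Adopt Copilot drafts for tier-1 runbooks",
    owner="j.doe",
    why="cut drafting time for routine procedures",
    tradeoffs="reviewers must verify every claimed step",
    ai_assisted=True))
print(f"{len(log)} decision(s) on record, all owned")
```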

    1 hr 15 min
  5. Showback Is Not Accountability

    4 days ago


    Most organizations believe showback creates accountability. It doesn’t. Showback creates visibility—and visibility feels like control. Dashboards appear. Reports circulate. Cost reviews get scheduled. Everyone relaxes. But nothing in the system is forced to change. A dashboard is not a decision. A report is not an escalation path. A monthly cost review is not governance. This episode dismantles the illusion. You can instrument cloud spend perfectly and still drift into financial chaos. Real governance only exists when visibility turns into enforced decisions—with owners, guardrails, workflows, and consequences.

    1. The Definitions Everyone Blurs (and Why It Matters)
    Words matter because platforms only respond to what is enforced—not what is intended.
    Showback is attribution without impact. It answers “Who did we think spent this money?” It produces telemetry: tags, allocation models, dashboards. Telemetry is useful. Telemetry is not a control.
    Chargeback is impact without intelligence. It answers “Who pays?” The spend hits a cost center or P&L. Behavior changes—but often in destructive ways. Teams optimize for looking cheap instead of being effective. Conflict replaces clarity when ownership models are weak.
    Accountability is neither of these. Accountability is owned decisions + enforced constraints + an audit trail. It means a human can say: “This spend exists because we chose it, we can justify it, and we accept the trade-offs.” And the platform can say: “No.” Not metaphorically. Literally. If your system cannot deny a bad deployment, quarantine unowned spend, escalate a breach, or expire an exception, you are not governing. You are persuading. And persuasion does not scale.

    2. Why Showback Fails at Scale: Observer With No Actuator
    Showback fails for the same reason monitoring fails without response. It observes but cannot act. Cloud spend is not one big decision—it’s thousands of micro-decisions made daily: SKU choices, regions, retention settings, redundancy, idle compute, “temporary” environments, premium licenses. Monthly reports cannot correct daily behavior. So dashboards become rituals:
    - Teams explain spikes
    - Narratives replace outcomes
    - Meetings repeat
    - Nothing changes
    The system trains everyone to optimize for explanation, not correction. The result is predictable: cost drift becomes normalized, then defended. Anyone trying to stop it is labeled as “slowing delivery.” That label kills governance faster than bad data ever could. This is not a failure of discipline. It is a failure of system design.

    3. Cost Entropy: Why Spend Drifts Even With Good Intentions
    Cloud cost behaves like security posture: it degrades unless continuously constrained. Tags decay. Owners change. Teams reorganize. Subscriptions multiply. Shared services blur accountability. “Temporary” resources become permanent because the platform never asks you to renew the decision. This is cost entropy—the unavoidable decay of ownership, attribution, and intent unless renewal is enforced. When entropy wins:
    - Unallocated spend grows
    - Exceptions pile up
    - Allocation models lie confidently
    - Finance argues with engineering over spreadsheets
    - Nobody can answer “who owns this?” fast enough to act
    This isn’t because tagging is “bad hygiene.” It’s because tagging is optional. Optional metadata produces optional accountability.

    4. Failure Mode #1: Informed Teams, No Obligation
    “We gave teams the data.” So what? Awareness without obligation is trivia. Obligation without authority is cruelty. Dashboards tell teams what already happened. They don’t change starting conditions. They don’t force closure. They don’t require decisions to end in accept, mitigate, escalate, or reforecast. So the same offenders show up every month. The same subscriptions spike. The same workloads drift. And the organization learns the real rule: nothing happens. Repeated cost spikes are not a cost problem. They are a governance failure the organization is tolerating.

    5. Failure Mode #2: Exception Debt and Policy Without Teeth
    Policies exist. Standards are published. Exceptions pile up. Exceptions are not edge cases—they are the operating model. And when exceptions have no owner, no scope, no expiry, and no enforcement, they become permanent bypasses. Policy without enforcement is not governance. It’s documentation with a logo. Exceptions multiply ambiguity, break allocation, and collapse enforcement. Over time, the only people who understand the “real rules” are the ones who were in old meetings—and they leave. Real exceptions must have:
    - An accountable owner
    - A defined blast radius
    - A justification tied to business intent
    - An enforced end date
    If an exception doesn’t expire, it isn’t an exception. It’s a new baseline you were too polite to name.

    6. Failure Mode #3: Shadow Spend Outside the Graph
    The most dangerous spend is the spend you never allocated in the first place. Shadow subscriptions, trial tenants, departmental SaaS, “temporary” Azure subscriptions, Power Platform environments—cloud removed the friction that once made these visible. Showback dashboards can be perfectly accurate and still fundamentally wrong, because they only show the governed part of the system. Meanwhile the real risk hides in the long tail of small, unowned, invisible spend. Once spend escapes the graph:
    - Cost governance collapses
    - Security posture fragments
    - Accountability disappears
    At that point, governance isn’t a design problem. It’s a detective story—and you always lose those eventually.

    7. Governance Is Not Documentation. It Is Enforced Intent
    Governance is not what your policy says. It’s what the platform will and will not allow. Real governance operates at creation time, not review time. That means:
    - Constraints that block bad defaults
    - Alarms that trigger decisions
    - Workflows that force closure
    - Audit trails that prove accountability
    Guidelines are optional by design. Constraints are not. If the system tolerates non-compliance by default, you chose speed over control. That may be intentional—but don’t call it governance.

    8. The System of Action: Guardrails, Alarms, Actuators
    Escaping the showback trap requires three enforceable systems working together:
    - Guardrails: Azure Policy to constrain creation — required tags, allowed regions, approved SKUs, dev/test restrictions. Not recommendations. Constraints. (See the policy sketch below.)
    - Alarms: Budgets as escalation contracts, not FYI emails. Owned alerts, response windows, and defined escalation paths.
    - Actuation: Workflow automation (ServiceNow, Power Automate) that turns anomalies into work items with owners, SLAs, decisions, and evidence. No email. No memory. (A work-item sketch closes out these notes.)
    Miss any one of these and governance collapses back into theater.

    9. Ownership as the Real Control Plane
    Ownership is not a tag. It is authority. A real owner can approve spend, accept risk, and say no. Distribution lists, FinOps teams, and “IT” are not owners. They are routing failures. Ownership must exist at:
    - Boundary level (tenant/subscription)
    - Workload/product level
    - Shared platform level
    And ownership must be enforced at creation time. After that, resources become politically protected—and you keep paying.
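    As a concrete example of the guardrail layer, here is the policyRule body of an Azure Policy definition that denies resource creation when an ‘owner’ tag is missing, shown as a Python dictionary for readability. The tag name is an assumption; adapt it to your own tagging standard.

```python
# Guardrail sketch: the policyRule body of an Azure Policy definition that
# denies any resource created without an 'owner' tag. Assigned at a
# subscription or management-group scope, this is a constraint, not advice.
import json

policy_rule = {
    "if": {
        "field": "tags['owner']",  # the tag that carries accountability
        "exists": "false",
    },
    "then": {
        "effect": "deny",          # the platform says "no", literally
    },
}

print(json.dumps(policy_rule, indent=2))
```

    Optional metadata produces optional accountability; a deny effect at creation time is what makes the owner tag non-optional.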
    10. From Cost Control to Value-Driven Governance
    The goal is not savings. Savings are a side effect. The real goal is spend that is:
    - Intentional
    - Attributable
    - Predictable
    - Defensible
    Showback tells you what happened. Governance determines what is allowed to happen next. When ownership is enforced, exceptions expire, and anomalies force decisions, cloud spend stops being a surprise and starts being strategy executed through infrastructure.

    Final Takeaway
    Showback is not accountability. It is an observer pattern with no actuator. Until your platform can force ownership, deny bad defaults, expire exceptions, and require decisions with evidence, you are not governing cloud spend. You are watching it drift—beautifully instrumented, perfectly explained, and completely uncontrolled. The next episode breaks down how to implement this system of action step by step.

    Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support. If this clashes with how you’ve seen it play out, I’m always curious. I use LinkedIn for the back-and-forth.
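    And as a sketch of the actuation layer described in the system of action above: a cost anomaly becomes a work item with an owner, a response window, and a forced closure state. The field names are illustrative, not a ServiceNow schema.

```python
# Actuation sketch: every anomaly must end in an explicit decision
# (accept / mitigate / escalate / reforecast), with an owner and evidence,
# rather than dissolving into an email thread.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

CLOSURE_STATES = {"accept", "mitigate", "escalate", "reforecast"}

@dataclass
class CostWorkItem:
    subscription: str
    anomaly: str
    owner: str          # an authority, not a distribution list
    due: datetime       # the response window is a contract
    decision: str = ""  # must land in CLOSURE_STATES to close

    def close(self, decision: str, evidence: str) -> None:
        if decision not in CLOSURE_STATES:
            raise ValueError(f"decision must be one of {CLOSURE_STATES}")
        self.decision = decision
        print(f"{self.subscription}: {decision} ({evidence})")

item = CostWorkItem(
    subscription="sub-analytics-prod",
    anomaly="egress spend +340% week over week",
    owner="platform-owner@contoso.example",
    due=datetime.now(timezone.utc) + timedelta(days=3))
item.close("mitigate", "moved exports to same-region storage")
```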

    1 hr 16 min
  6. The Governance Illusion: Why Your Tenant Is Beyond Control

    5 days ago


    Most organizations believe Microsoft 365 governance is something they do. They are wrong. Governance isn’t a project you complete—it’s a condition that must survive every day after the project ends. Microsoft 365 is not a static system you finish configuring. It is an ecosystem that continuously creates new Teams, Sites, apps, flows, agents, and access paths whether you planned for them or not. This episode strips away the illusion: why policy existing doesn’t mean policy is enforced, why “compliant” doesn’t mean “controlled,” and what predictable control actually looks like when nobody is selling you a fairy tale.

    1. Configuration Isn’t Control
    The foundational misunderstanding behind most failed governance programs is simple: configuration is mistaken for control. Configuration is what you set once. Control is what holds when the platform behaves unpredictably—on a Friday night, during turnover, or when Microsoft ships new defaults. Microsoft 365 is a distributed decision engine. Entra evaluates identity signals. SharePoint evaluates links. Teams evaluates membership. Power Platform evaluates connectors and execution context. Copilot queries across whatever survives those decisions. Intent lives in policy documents. Configuration lives in admin centers. Behavior is the only thing that matters. Most governance programs stop at visibility—dashboards, reports, and quarterly reviews. That’s governance theater. Visibility without consequence is not control. Control fails in the gap between the control plane (settings) and the work plane (where creation and sharing actually happen). Governance collapses when humans are expected to remember. If enforcement relies on memory, reviews, or good intentions, outcomes become probabilistic. Drift isn’t accidental—it’s guaranteed.

    2. Governance Fundamentals That Survive Reality
    Real governance treats Microsoft 365 like a living authorization graph that continuously decays. Only four primitives survive that reality:
    - Ownership – Every resource must have accountable humans. Ownership is not metadata; it’s an operational circuit. Without it, governance is impossible.
    - Lifecycle – Inactivity is not safety. Assets must expire, renew, archive, or die. Time—not memory—keeps systems clean. (See the expiration-policy sketch after these notes.)
    - Enforcement – Policies must block, force, expire, escalate, or remediate. Anything else is a suggestion.
    - Transparency – On demand, you must answer: what exists, who owns it, and why access is allowed—without stitching together five portals and a spreadsheet.
    Everything else is decoration.

    3. The Failure Loop: Projects End, Drift Begins
    Governance programs don’t fail during rollout. They fail afterward. One-time deployments create starting conditions, not sustained control. Drift accumulates through exceptions, bypasses, and new surfaces. New Teams get created from new places. Sharing happens through faster paths. Automation spreads. Defaults change. Organizations respond with more reviews. Reviews become queues. Queues create fatigue. Fatigue creates rubber-stamping. Rubber-stamping turns uncertainty into permanent approval. The tenant decays not because people are careless—but because the platform keeps producing state faster than humans can validate it.

    4. The Five Erosion Patterns
    Tenant decay follows predictable paths:
    - Sprawl – Uncontrolled creation plus weak lifecycle.
    - Sharing Drift – File-level access diverges from workspace intent.
    - Data Exfiltration – Legitimate export paths become silent leaks.
    - AI Exposure – Copilot accelerates discovery of existing mistakes.
    - Ownerless Resources – Assets persist without accountability.
    These patterns compound. Sprawl creates sharing drift. Sharing drift feeds Copilot. Automation industrializes mistakes. Ownerless resources prevent cleanup. None of this is random. It’s structural.

    5. Teams Sprawl Isn’t a People Problem
    Teams sprawl is an architectural outcome, not a training failure. Creation pathways multiply. Templates accelerate duplication. Retirement is optional. Archiving creates a false sense of closure. Guest access persists longer than projects. Naming policies give cosmetic order without control. Teams governance fails because Teams is not the system. Microsoft 365 primitives are. If you don’t enforce ownership, lifecycle, and time-bound access at the primitive layer, Teams sprawl is guaranteed.

    6. Channels, Guests, and Conditional Chaos
    Private and shared channels break the “Team membership equals access” model. Guests persist. Owners leave. Conditional Access gates sign-in but doesn’t clean permissions. Archiving feels like governance. It isn’t. Teams governance only works when creation paths are constrained, ownership is enforced, access is time-bound, and expiration is unavoidable.

    7–8. SharePoint: Where Drift Becomes Permanent
    SharePoint is where governance quietly dies. Permissions drift at the file level. Inheritance breaks forever. Links feel temporary but persist. Labels classify content without governing access. External sharing controls don’t retroactively fix exposure. Copilot doesn’t cause this. It reveals it. If you can’t inventory broken inheritance, stale links, and ownerless sites, your SharePoint estate is already ungovernable.

    9–10. Power Automate as an Exfiltration Fabric
    Low-code does not mean low-risk. Flows become production systems without review. Connectors move data legitimately into illegitimate contexts. Execution identity is ambiguous. Owners leave. Flows keep running. DLP helps—but without context, it creates overblocking, exceptions, and drift. Governance requires inventory, ownership, tiered environments, and least-privilege execution—not just connector rules.

    11–12. Copilot and Agents
    Copilot doesn’t create risk—it removes friction that once hid it. It collapses discovery time. It surfaces stale truth. It rewards messy estates. Agents compound this by introducing action, not just insight. Agents must be treated like identities:
    - Scoped builders
    - Controlled tools
    - Governed publishing
    - Enforced ownership
    - Expiration and review
    Unaccountable agents are not innovation. They are execution risk.

    13–15. Identity and Power Platform Reality
    Entra governs authentication—not tenant hygiene. Identity lifecycle does not clean up Teams, sites, flows, apps, or agents. App registrations become the same entropy problem in a different costume. Citizen development at scale demands environments, promotion paths, and execution controls. Otherwise the tenant becomes a shared workstation with enterprise permissions.

    16. The Silo Tax
    Native governance doesn’t converge because it wasn’t designed to. Admin centers reflect product teams, not tenant reality. Policy meanings differ by workload. Telemetry fragments. Lifecycle doesn’t exist end-to-end. Governance fails in the seams—where no admin center owns the incident.

    17–18. Control Patterns That Work
    Ownership enforcement turns orphaned assets into remediated incidents. Risk-based prioritization turns noise into throughput. Rank by blast radius, data sensitivity, external exposure, and automation—not by policy count. Measure fixes, not findings. Enforce consequence, not awareness.

    19. AI-Ready Governance
    AI-ready governance is not a new program. It’s ownership and risk applied to the surfaces AI accelerates. Baseline what Copilot can see before rollout. Govern agents before they govern you. Treat AI artifacts like software—not toys.

    20. Why a Unified Governance Layer Exists
    Native tools are necessary but insufficient. A unified governance layer exists to:
    - Maintain cross-service inventory
    - Enforce ownership everywhere
    - Apply lifecycle deterministically
    - Drive risk-based consequence
    - Produce audit-grade proof
    Not perfect control. Predictable control.

    Conclusion
    Governance fails when configuration is mistaken for control. Microsoft 365 will happily preserve your mistakes forever. If you want a governable tenant, stop chasing settings. Enforce ownership, lifecycle, and risk-based consequence continuously—across every workload. The next episode walks through how to implement these control patterns end-to-end.

    Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support. If this clashes with how you’ve seen it play out, I’m always curious. I use LinkedIn for the back-and-forth.
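    To ground “time—not memory—keeps systems clean”: the sketch below creates a Microsoft 365 group expiration policy through the Graph groupLifecyclePolicies endpoint, so workspaces expire unless an owner renews them. It assumes an app registration with the required directory permissions; the token and notification address are placeholders.

```python
# Lifecycle sketch: enforce expiration on Microsoft 365 groups via the
# Graph groupLifecyclePolicies endpoint, making renewal a deliberate act.
import requests

TOKEN = "<access-token>"  # placeholder: acquire via MSAL or similar

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groupLifecyclePolicies",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/json"},
    json={
        "groupLifetimeInDays": 180,   # groups expire unless renewed
        "managedGroupTypes": "All",   # apply to every Microsoft 365 group
        # Renewal mail for ownerless groups routes here instead of nowhere:
        "alternateNotificationEmails": "governance@contoso.example",
    })
resp.raise_for_status()
print(resp.json())
```

    The alternate-notification address matters: it is the safety net that keeps expiration working even after the ownership circuit has already broken.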

    1 hr 29 min
  7. MCP: The End of Custom AI Glue

    6 days ago


    Everyone is suddenly talking about MCP—but most people are describing it wrong. This episode argues that MCP is not a plugin system, not an API wrapper, and not “function calling, but standardized.” Those frames miss the point and guarantee that teams will simply recreate the same brittle AI glue they’re trying to escape. MCP is a security and authority boundary.

    As enterprises rush to integrate large language models into real systems—Graph, SharePoint, line-of-business APIs—the comfortable assumption has been that better prompts, better tools, or better agent frameworks will solve the problem. They won’t. The failure mode isn’t model intelligence. It’s unbounded action. Models don’t call APIs. They make probabilistic decisions about which described tools to request. And when those requests are executed against deterministic systems with real blast radius, ambiguity turns into incidents. MCP exists to insert a hard stop: a protocol-level choke point where identity, scope, auditability, and failure behavior can be enforced without trusting the model to behave.

    This episode builds that argument from first principles, walks through the architectural failures that made MCP inevitable, and then places MCP precisely inside a Microsoft-native world—where Entra, Conditional Access, and audit are the real control plane.

    Long-Form Show Notes

    MCP Isn’t About Intelligence — It’s About Authority
    The core misunderstanding this episode dismantles is simple but dangerous: the idea that LLMs “call APIs.” They don’t. An LLM never touches Graph, SharePoint, or your backend directly. It only sees text and structured tool descriptions. The actual execution happens somewhere else—inside a host process that decides which tools exist, what schemas they accept, and what identity is used when they run. That means the real problem isn’t how smart the model is. It’s who is allowed to act, and under what constraints. MCP formalizes that boundary.

    The Real Failure Mode: Probabilistic Callers Meet Deterministic Systems
    APIs assume disciplined, deterministic callers. LLMs are probabilistic planners. That collision creates a unique failure mode:
    - Ambiguous tool names lead to wrong tool selection
    - Optional parameters get “improvised” into unsafe inputs
    - Partial failures get treated as signals to retry elsewhere
    - Empty responses get interpreted as “no data exists”
    - And eventually, authority leaks without anyone noticing
    Prompt injection doesn’t bypass auth—it steers the caller. Without a hard orchestration boundary, you’re not securing APIs. You’re hoping a stochastic process won’t make a bad decision.

    Custom AI Glue Is an Entropy Generator
    Before MCP, every team built its own bridge:
    - bespoke Graph wrappers
    - ad-hoc SharePoint connectors
    - middleware services with long-lived service principals
    - “temporary” permissions that never got revoked
    Each one felt reasonable. Together they created:
    - tool sprawl
    - permission creep
    - policy drift
    - inconsistent logging
    - and integrations that fail quietly, not loudly
    That’s the worst possible failure mode for agentic systems—because the model fills in the gaps confidently. Custom AI glue doesn’t stay glue. It becomes policy, without governance.

    Why REST, Plugins, Functions, and Frameworks All Failed
    The episode walks through the industry’s four failed patterns:
    1. REST Everywhere — REST assumes callers understand semantics. LLMs guess. Ambiguity turns into behavior.
    2. Plugin Ecosystems — Plugins centralize distribution, not governance. They concentrate integration debt inside a vendor’s abstraction layer.
    3. Function Calling — Function calling is a local convention, not a protocol. Every team reinvents discovery, auth, logging, and policy—badly.
    4. Agent Frameworks — Frameworks accelerate prototypes, not ecosystems. They hide boundary decisions instead of standardizing them.
    Each attempt solved a short-term pain while making long-term coordination harder.

    Why a Protocol Was Inevitable
    Protocols exist when systems need to interoperate without sharing assumptions. HTTP didn’t win because it was elegant. OAuth didn’t win because it was pleasant. They won because they pinned down authority and interaction boundaries. MCP does the same thing for model-driven tool use. It doesn’t standardize “intelligence.” It standardizes how capabilities are described, discovered, and invoked—and where control lives when they are.

    MCP, Precisely Defined
    MCP is a protocol that defines:
    - how an AI host discovers external capabilities
    - how tools, resources, and prompts are declared
    - how calls are invoked over a standard envelope
    - how isolation and context boundaries are enforced
    The model proposes. The host decides. The protocol constrains. That’s the point. (A toy gate sketch follows these notes.)

    Why MCP Fits Microsoft Environments So Cleanly
    Microsoft environments are identity-first by necessity:
    - Entra defines what’s possible
    - Conditional Access defines what survives risk
    - Audit defines what’s defensible afterward
    Dropping agentic tool use into that world without a hard boundary would be reckless. MCP aligns naturally with Microsoft’s control-plane instincts:
    - Entra for authority
    - APIM for edge governance
    - managed identity and OBO for accountability
    - centralized logging for survivability
    This isn’t “AI plumbing.” It’s integration governance catching up to probabilistic systems.

    The Core Claim of the Episode
    This episode stakes one central claim and then proves it architecturally: MCP isn’t AI tooling. It’s an integration security boundary masquerading as developer ergonomics. Once you see that, everything snaps into focus:
    - why custom glue collapses at scale
    - why overlapping tools create chaos
    - why identity flow design matters more than prompts
    - and why enterprises can’t afford to let models act directly

    What Comes Next
    Later in the episode—and in follow-ups—we:
    - design a Microsoft-native MCP reference architecture
    - walk through Entra On-Behalf-Of vs managed identity tradeoffs
    - show where APIM belongs in real deployments
    - and demonstrate how MCP becomes the point where authority actually stops
    Because protocols don’t make things easy. They make them governable.

    Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support. If this clashes with how you’ve seen it play out, I’m always curious. I use LinkedIn for the back-and-forth.
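    To make “the model proposes, the host decides” concrete, here is a toy host-side gate in Python. The request shape follows MCP’s JSON-RPC envelope for tools/call; the tool names, allowlist, and error code are hypothetical, and a real host would also enforce identity and audit before executing anything.

```python
# Toy MCP host gate: the model may request any tool it can describe, but
# only tools inside the host's declared authority ever execute.
ALLOWED_TOOLS = {"sharepoint.search"}  # what this host is willing to run

def execute(tool: str, arguments: dict) -> dict:
    # Dispatch to the real tool implementation -- out of scope for the sketch.
    return {"jsonrpc": "2.0", "id": 7, "result": {"content": []}}

def gate(request: dict) -> dict:
    """Host-side choke point: deny anything outside declared authority."""
    tool = request["params"]["name"]
    if tool not in ALLOWED_TOOLS:
        # The model's confidence is irrelevant; authority stops here.
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32001,  # implementation-defined code
                          "message": f"tool '{tool}' not authorized"}}
    return execute(tool, request["params"]["arguments"])

proposed_call = {  # what the model asked for, as a JSON-RPC tools/call
    "jsonrpc": "2.0", "id": 7, "method": "tools/call",
    "params": {"name": "sharepoint.delete_site",
               "arguments": {"siteId": "contoso-finance"}}}

print(gate(proposed_call))  # -> JSON-RPC error: tool not authorized
```

    The point of the sketch is the asymmetry: the model can propose anything, but the set of things that can actually happen is fixed by the host, not by the prompt.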

    1 hr 43 min
  8. The Carbon Control Plane: Microsoft’s Impossible Audit

    Jan 30


    Corporate Social Responsibility is usually treated like branding. This episode argues that view is obsolete. CSR now functions as a control plane—a governance system that constrains how companies operate under real physical limits: power, carbon, water, land, and regulation. Using Microsoft as a case study, the episode examines what happens when sustainability stops being a pledge and starts becoming infrastructure.

    Microsoft promises to be carbon negative by 2030, yet its emissions have risen as cloud and AI capacity expands. Rather than dismissing this as hypocrisy, the episode treats it as an audit problem: can a planetary-scale technology company enforce sustainability while continuing to grow? The discussion focuses on three concrete artifacts: Microsoft’s sustainability reporting, its internal carbon fee, and Cloud for Sustainability. Together, they reveal how carbon is being turned into something budgetable, enforceable, and operational—while also exposing where the system strains under AI growth, scope 3 emissions, data-center power density, and reliance on carbon removal markets.

    The episode concludes with practical guidance: sustainability only works when it changes defaults. Treat carbon like cost. Assign ownership. Enforce constraints. Audit flows, not intentions.

    Long-Form Show Notes

    CSR Isn’t Charity — It’s Governance
    Most companies treat CSR as a marketing layer: reports, donations, pledges, and aspirational language. That model fails under modern constraints. Today, CSR exists because resources are scarce and accountable—not because companies became enlightened. Real CSR changes decisions. It introduces tradeoffs between people, planet, and profit inside procurement, architecture, and finance. If sustainability does not affect budgets, defaults, or enforcement, it is culture—not control.

    Why Sustainability Became a Business Requirement
    Environmental responsibility became mandatory because stakeholders hardened their demands. Customers now audit suppliers. Employees evaluate long-term alignment. Investors price unmanaged risk. Regulators demand traceability. And data centers turned “digital” into physical infrastructure competing for grid capacity. Sustainability moved from messaging into the machinery of how companies operate. Once that happened, vibes stopped working.

    The Fraud Boundary: Marketing vs. Mechanisms
    Greenwashing rarely looks like outright lying. It looks like storytelling that leads measurement. When narrative comes first, metrics become flexible and accountability disappears. The real fraud boundary is simple:
    - Did the organization change defaults?
    - Did it change budgets?
    - Did it create consequences someone can feel?
    If not, CSR is decorative.

    Microsoft as the Case Study
    Microsoft commits to becoming carbon negative by 2030 and removing its historical emissions by 2050. These are accounting claims, not values statements. They require defined boundaries, scopes, and enforcement mechanisms. At the same time, Microsoft is scaling cloud and AI infrastructure at planetary scale. That growth is inherently physical. The tension between scale and sustainability is not rhetorical—it’s architectural.

    Carbon Negative Is Not a Feeling
    “Carbon negative” only exists as a balance sheet outcome. Emissions must be measured within clear scopes, and removals must exceed them inside the same boundary. Reduction, replacement, and removal are separate levers. Confusing them allows net claims to survive without system change. Scope 3 emissions—supply chains and indirect impacts—are where the math becomes probabilistic and the audit gets hard.

    Artifact #1: The Internal Carbon Fee
    Microsoft’s internal carbon fee treats emissions as a real cost that hits business unit budgets. This moves carbon out of values language and into financial decision-making. The fee covers multiple scopes and reinvests proceeds into decarbonization efforts. Its power lies in making carbon loud during planning, forcing tradeoffs in regions, architectures, utilization patterns, and procurement. (A toy fee calculation follows these notes.) But incentives only work if measurement holds. Weak attribution turns enforcement into accounting disputes instead of behavior change.

    Measurement Is an Identity Problem
    Carbon accounting is not just data collection—it’s attribution. Who caused the emissions? Which decision owns the consequence? Scope 3 data relies on estimates, suppliers, and delayed reporting. Once emissions affect budgets, teams fight boundaries instead of behavior. A carbon control plane only works if responsibility is defensible.

    The Emissions Reality
    Microsoft’s reported emissions increased between 2020 and 2023 due to rapid AI and data-center expansion. This does not automatically invalidate its commitments. It reveals a constraint: growth happens faster than decarbonization propagates. The carbon control plane is not designed to prevent emissions from ever rising. It exists to ensure increases happen knowingly, with tradeoffs explicit and costs internalized.

    Carbon Removal: Buying Time, Buying Risk
    Microsoft is securing large, multi-year carbon removal contracts. This indicates that removals are now a structural dependency, not an emergency tool. Removals keep net claims viable under growth, but they introduce risks: quality, permanence, verification, market dependency, and moral hazard. The audit question becomes whether removals compensate for residual emissions—or enable unchecked expansion.

    AI Changes Everything
    AI turns sustainability from optimization into capacity planning. Training and inference behave differently, with inference often dominating long-term emissions. Carbon budgeting for AI workloads becomes unavoidable. Efficiency, model selection, routing, caching, and utilization now determine emissions at scale. Governance must assign ownership to both model creators and model consumers.

    Data Centers End the Abstraction
    High-density AI hardware forces liquid cooling, grid negotiations, land use tradeoffs, and long-term energy procurement. Power is no longer elastic. Communities, utilities, and regulators are now part of the architecture. At this scale, sustainability is availability risk.

    Artifact #2: Cloud for Sustainability
    Cloud for Sustainability attempts to turn emissions into a managed data domain—an ERP-like system for carbon. Its value is not inspiration, but instrumentation. Without enforcement, it becomes reporting theater. Wired into budgets, procurement, and design reviews, it becomes part of the control plane.

    Hidden Carbon: Low-Code and Automation Sprawl
    Power Platform sprawl creates always-on background compute with no ownership. Zombie flows and duplicated automations become invisible infrastructure load—carbon debt hiding behind convenience. Governance requires ownership, lifecycle management, deletion pipelines, and treating automation as consumption, not “innovation.”

    Reduction vs. Outsourcing Guilt
    Removals are not inherently bad. They are necessary when growth outpaces reduction. The real test is hierarchy: reduce first, replace where possible, remove what remains. Net claims live or die by system design, not moral framing.

    What Listeners Should Challenge
    - Whether reductions can realistically outpace AI growth
    - How dependent net claims are on removals
    - Whether supplier reporting equals supplier decarbonization
    - Where enforcement actually lives
    - Whether regulation is sufficient to keep net claims honest

    What Other Organizations Can Steal
    Don’t copy slogans. Copy mechanics:
    - Measurable, bounded goals
    - Carbon tied to budgets
    - Named owners
    - Regular disclosure
    - Sustainability treated as operational risk

    One-Week Implementation
    Pick one real workload. Assign one owner. Define a boundary. Estimate emissions. Ask: Would we deploy this the same way if carbon behaved like cost? Make one change. Set one default. Introduce one gate. That’s how control planes start.

    Final Thought
    The carbon control plane is not about virtue. It’s behavioral design. Systems change when constraints do. Sustainability only works when it makes the wasteful path expensive and the efficient path obvious. If you want the next layer—how to distinguish governance from greenwashing under pressure—stay tuned for the next episode.

    Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support. If this clashes with how you’ve seen it play out, I’m always curious. I use LinkedIn for the back-and-forth.
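    To show how an internal carbon fee turns emissions into a budget line, here is a toy calculation. The fee rate and the business-unit figures are invented for illustration; they are not Microsoft’s actual numbers.

```python
# Toy internal-carbon-fee mechanic: emissions become a line item on each
# business unit's budget, so carbon behaves like cost during planning.
FEE_PER_TONNE = 15.0  # USD per metric ton CO2e -- hypothetical rate

business_units = {
    # unit: estimated emissions in tCO2e for the period (illustrative)
    "cloud-infra": 120_000.0,
    "devices":      45_000.0,
    "corp-it":       8_500.0,
}

for unit, tco2e in business_units.items():
    charge = tco2e * FEE_PER_TONNE  # the fee makes carbon loud in planning
    print(f"{unit:<12} {tco2e:>10,.0f} tCO2e -> ${charge:>12,.2f}")
```

    The mechanic only changes behavior if attribution holds: the tCO2e figures per unit must be defensible, or the fee decays into an accounting dispute.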

    1 hr 18 min

Ratings & Reviews

5
out of 5
3 ratings
