M365.FM - Modern work, security, and productivity with Microsoft 365

Mirko Peters - Founder of m365.fm, m365.show and m365con.net

Welcome to M365.FM — your essential podcast for everything Microsoft 365, Azure, and beyond. Join us as we explore the latest developments across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, and the entire Microsoft ecosystem. Each episode delivers expert insights, real-world use cases, best practices, and interviews with industry leaders to help you stay ahead in the fast-moving world of cloud, collaboration, and data innovation. Whether you're an IT professional, business leader, developer, or data enthusiast, M365.FM brings the knowledge, trends, and strategies you need to thrive in the modern digital workplace. Tune in, level up, and make the most of everything Microsoft has to offer. M365.FM is part of the M365-Show Network. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

  1. Building and deploying production-grade AI agents with Microsoft Foundry with Edgar McOchieng [MVP]

    15H AGO

    Building and deploying production-grade AI agents with Microsoft Foundry with Edgar McOchieng [MVP]

    In this deep-dive episode of the M365 FM podcast, Mirko Peters welcomes Edgar McOchieng for an extensive conversation about enterprise AI architecture, Microsoft Foundry, scalable AI agents, and the real-world challenges organizations face when deploying production-grade AI systems. Edgar shares his journey from discovering Microsoft Azure during university in Kenya to becoming a Microsoft MVP focused on Microsoft Foundry, Business Applications, data engineering, and AI-driven enterprise solutions. He also talks about his passion for mentorship and community building through “Ochieng Labs,” where students and early-career developers gain hands-on experience with Power Platform, Microsoft Fabric, Copilot Studio, and modern AI engineering practices.

    BUILDING REAL-WORLD ENTERPRISE AI APPLICATIONS

    The conversation explores how organizations can move beyond AI experimentation and start building reliable, secure, and scalable AI applications that deliver measurable business value. Edgar explains how his team created an enterprise AI platform capable of connecting to SharePoint, OneDrive, Outlook, Microsoft Graph, AWS, and Google Cloud environments to help employees retrieve organizational knowledge faster and reduce data silos across departments. Listeners will learn how Retrieval-Augmented Generation (RAG), vector search, semantic indexing, embeddings, and enterprise search architectures play a critical role in modern AI systems. Edgar breaks down how AI applications can access live organizational knowledge instead of relying solely on static training data, helping businesses build more accurate and context-aware AI assistants.

    HYBRID AI ARCHITECTURES AND AI COST OPTIMIZATION

    A major focus of this episode is enterprise AI cost management and hybrid AI infrastructure design. Edgar openly discusses the challenges organizations face with rising AI costs caused by heavy usage of premium cloud-based large language models such as Anthropic Claude and GPT services.
    He explains how his team introduced a hybrid orchestration model that intelligently switches between local small language models and cloud-hosted LLMs depending on the complexity of the task. This hybrid AI approach dramatically reduced operational expenses while maintaining scalability and performance. The discussion also covers rate limiting, token management, AI workload monitoring, hosted agents, orchestration layers, and why enterprises increasingly need ownership and control over their AI infrastructure.

    MICROSOFT FOUNDRY, COPILOT STUDIO, AND AI DEVELOPMENT WORKFLOWS

    Edgar describes Microsoft Foundry as a powerful “model playground” where developers can experiment with multiple AI models, create hosted agents, build orchestration pipelines, evaluate model safety, apply guardrails, and integrate enterprise systems using MCP connectors. He also explains the differences between Microsoft 365 Copilot, Copilot Studio, and Microsoft Foundry — helping listeners understand when each platform is the right choice depending on customization requirements and technical maturity. The episode also dives into prompt engineering, AI workflows, GitHub Copilot, VS Code integrations, CI/CD pipelines with GitHub Actions, evaluation pipelines, hallucination testing, and the growing importance of developer tooling in AI application development. Edgar shares practical insights into how AI engineering teams structure, test, deploy, and continuously improve enterprise AI systems in production environments.

    AI GOVERNANCE, SECURITY, AND ENTERPRISE MONITORING

    Another key topic throughout the conversation is AI governance, observability, security, and responsible AI implementation. Edgar explains why governance and monitoring are becoming more important than simply selecting the “best” AI model. Organizations need visibility into user behavior, AI usage patterns, permissions, hallucination risks, security controls, and compliance requirements.
    The discussion also covers multi-tenant enterprise AI architectures, tenant isolation, data partitioning, hosted AI agents, containerization, Kubernetes integrations, Power Platform connectivity, Logic Apps orchestration, and enterprise-grade monitoring systems designed to support scalable AI workloads.

    THE FUTURE OF ENTERPRISE AI

    Toward the end of the episode, Mirko and Edgar discuss several hot topics shaping the future of enterprise AI, including small language models (SLMs), prompt engineering, orchestration-driven AI workflows, fine-tuning versus data grounding, and the long-term sustainability of relying entirely on external AI providers. Edgar argues that organizations increasingly need flexibility, transparency, governance, and infrastructure ownership to remain competitive as AI adoption continues to accelerate. This episode is packed with practical insights for enterprise architects, AI engineers, cloud developers, CTOs, IT leaders, Microsoft professionals, startup founders, and anyone interested in understanding how Microsoft Foundry and Azure AI technologies are reshaping modern enterprise software development and intelligent automation.
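    The hybrid orchestration idea discussed in this episode, routing cheap tasks to a local small language model and escalating complex ones to a metered cloud LLM, can be sketched roughly as follows. The complexity heuristic, model names, and threshold are illustrative assumptions, not the actual platform design:

```python
# Minimal sketch of a hybrid SLM/LLM router. The complexity heuristic,
# model labels, and threshold are illustrative assumptions only.

def estimate_complexity(prompt: str) -> float:
    """Crude proxy: longer, multi-step prompts score higher."""
    words = prompt.split()
    multi_step = sum(prompt.lower().count(k) for k in ("then", "compare", "analyze"))
    return min(1.0, len(words) / 200 + 0.2 * multi_step)

def route(prompt: str, threshold: float = 0.5) -> str:
    """Send simple tasks to a local SLM, complex ones to a cloud LLM."""
    if estimate_complexity(prompt) < threshold:
        return "local-slm"   # owned infrastructure, near-zero marginal cost
    return "cloud-llm"       # premium hosted model, metered per token

print(route("Summarize this sentence."))
print(route("Analyze quarterly sales, then compare regions. " + "data " * 250))
```

    In a production system the router would also account for rate limits, token budgets, and per-tenant policies, which is where the monitoring and governance topics above come in.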
    IN THIS EPISODE
    - Building production-grade AI agents with Microsoft Foundry
    - Designing scalable hybrid AI architectures for enterprises
    - Implementing AI governance, observability, and monitoring
    - Reducing enterprise AI costs using local and hosted models
    - Retrieval-Augmented Generation (RAG) and vector search
    - Hosted AI agents, orchestration layers, and prompt flows
    - Enterprise integrations with Microsoft Graph, SharePoint, and Power Platform
    - Multi-tenant AI architectures and secure data isolation
    - AI evaluation pipelines, guardrails, and hallucination prevention
    - CI/CD strategies for enterprise AI deployments

    KEY TECHNOLOGIES DISCUSSED
    - Microsoft Foundry
    - Azure AI Services
    - Microsoft 365 Copilot
    - Copilot Studio
    - Microsoft Fabric
    - Power Platform
    - GitHub Copilot
    - MCP Connectors
    - Vector Databases
    - Retrieval-Augmented Generation (RAG)
    - Kubernetes
    - Logic Apps
    - Azure Hosted Agents

    WHO SHOULD LISTEN

    This episode is highly recommended for enterprise architects, AI engineers, Microsoft consultants, cloud developers, CTOs, CIOs, IT decision-makers, Power Platform professionals, startup founders, security teams, and technology leaders looking to understand how enterprise AI systems can be designed, governed, scaled, and optimized using Microsoft’s modern AI ecosystem.
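    Retrieval-Augmented Generation, listed among the episode topics, reduces to a small loop: embed the query, find the nearest indexed documents, and ground the prompt in them. A toy sketch, where the bag-of-words "embedding" and the two-document corpus are stand-ins for a real vector database over an enterprise corpus:

```python
from collections import Counter
import math

DOCS = {  # stand-in for an indexed enterprise corpus (SharePoint, OneDrive, ...)
    "policy": "remote work policy requires manager approval",
    "vpn": "vpn access is requested through the it portal",
}

def embed(text: str) -> Counter:
    return Counter(text.lower().split())   # toy bag-of-words "embedding"

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    q = embed(query)
    return max(DOCS, key=lambda d: cosine(q, embed(DOCS[d])))

def grounded_prompt(query: str) -> str:
    doc_id = retrieve(query)   # the model answers from live documents,
    return f"Context: {DOCS[doc_id]}\nQuestion: {query}"  # not static training data

print(grounded_prompt("how do I get vpn access"))
```

    The same shape holds at enterprise scale; the Counter is replaced by a learned embedding model and the `max` by an approximate nearest-neighbor search over a vector index.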

    1h 2m
  2. Is Your Microservice Architecture a Ticking Time Bomb for Speed

    21H AGO

    Is Your Microservice Architecture a Ticking Time Bomb for Speed

    You adopted microservices because you wanted speed. Faster deployments. Faster teams. Faster product delivery. But somewhere along the journey, a simple feature stopped feeling simple. What used to be one local code change now requires cross-team coordination, API reviews, rollout sequencing, schema checks, tracing updates, retry planning, and governance approvals. The old bureaucracy never disappeared. It simply moved from the org chart directly into the runtime.

    And increasingly, organizations are realizing the tradeoff is no longer worth it. Recent industry research shows that forty-two percent of organizations are actively consolidating microservices back into larger deployment units. That statistic alone signals something important: many teams are discovering that the operational and coordination overhead of distributed systems has started consuming the very delivery speed those systems were supposed to create. In this episode, we unpack the deeper model behind that slowdown. This is not another simplistic “monolith versus microservices” debate. This conversation focuses on how distributed architectures quietly create runtime friction, organizational drag, and delivery bottlenecks inside modern .NET environments — especially for teams that adopted service boundaries long before they truly needed them. Because once the architecture begins fragmenting the flow of change, the cost starts showing up everywhere.

    THE ARCHITECTURAL ILLUSION OF PROGRESS

    Microservices were sold as autonomy. The promise sounded almost perfect: split systems into independent services, give teams ownership, scale components independently, and deploy faster without coordination bottlenecks. On paper, the model looked mature. But the architecture carried assumptions many organizations skipped right past.
    Microservices assume:
    - Stable domain boundaries
    - Mature platform engineering
    - Strong DevOps capabilities
    - Operational readiness
    - Long-term team ownership
    - Reliable observability
    - Clear contract discipline

    In many organizations, none of those conditions existed yet. And that is where the model starts fighting the organization itself. This episode explores why smaller and mid-sized engineering organizations often feel the pain first. Research consistently shows that for teams under roughly twenty to thirty engineers, coordination overhead frequently outweighs the scaling advantages of physical service separation. Instead of autonomy, teams inherit dependency chains with extra operational layers attached to every business change.

    We break down how:
    - One feature update becomes multiple synchronized deployments
    - Simple business logic turns into distributed coordination
    - API ownership becomes a negotiation process
    - Service boundaries create organizational silos
    - “Independent deployment” often increases release friction
    - Architectural complexity gets mistaken for engineering maturity

    Because adding more boxes to a diagram does not automatically create speed. Sometimes it simply creates more places where work can stop.

    THE HIDDEN TAX OF DISTRIBUTED COMPLEXITY

    One of the most deceptive things about microservices is that every service can appear individually clean while the production system becomes massively heavier underneath. This episode dives into the hidden runtime tax of distributed systems inside modern .NET environments. Inside a single process, code communicates at memory speed. Across service boundaries, that same interaction becomes:
    - Network traffic
    - Serialization
    - Authentication
    - Timeout handling
    - Retry logic
    - Correlation tracking
    - Distributed tracing
    - Partial failure management

    And those mechanics introduce costs that compound quickly.
    We explore how a simple business transaction can quietly transform into:
    - Multiple outbound HTTP or gRPC calls
    - Cascading latency chains
    - Retry storms
    - Expanded observability overhead
    - Increased debugging complexity
    - More cloud infrastructure consumption

    Because the real system is not just the services. It is everything between them. This episode also examines the operational impact of observability and service mesh adoption in .NET ecosystems. Distributed tracing, telemetry, mTLS enforcement, and sidecar proxies absolutely provide value — but they also introduce measurable overhead in memory usage, latency, throughput, and operational maintenance.

    We discuss:
    - Istio vs Linkerd operational tradeoffs
    - Sidecar memory overhead in Kubernetes clusters
    - Observability performance costs
    - Instrumentation latency impact
    - Why distributed debugging consumes dramatically more engineering time
    - How platform complexity becomes a staffing problem

    Small teams feel this pressure first because they rarely have dedicated platform engineering departments to absorb the operational load. The result is that developers stop spending most of their time building products and start spending it operating distributed infrastructure.

    HOW API CONTRACTS TURN INTO DIGITAL RED TAPE

    Once runtime complexity grows, the next slowdown appears in team coordination. API contracts are meant to create trust between services, but in many organizations, those contracts slowly evolve into rigid borders that require negotiation before every change. Something as small as renaming a single field can trigger:
    - Consumer coordination
    - Schema reviews
    - Versioning debates
    - Approval workflows
    - Rollout sequencing
    - Extended backward compatibility maintenance

    The technical change may take minutes. The organizational choreography around it can consume days. This episode explores how API governance frequently drifts into digital bureaucracy, especially when organizations lack strong automated contract validation pipelines.
    We discuss:
    - Why low contract testing adoption creates fear
    - How brittle API governance slows delivery
    - Why teams duplicate endpoints instead of evolving interfaces
    - The dangers of over-versioning
    - Governance drift inside enterprise architecture
    - Manual review bottlenecks
    - CI-driven contract enforcement
    - How AI coding tools accelerate coding but not organizational validation
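    CI-driven contract enforcement, mentioned above as the antidote to manual review bottlenecks, can be as small as a consumer-side check that a provider response still satisfies the fields the consumer depends on. A minimal sketch, where the contract and payloads are invented examples rather than a real API:

```python
# Minimal consumer-driven contract check: the consumer declares the fields
# it depends on, and CI fails the provider build if a response stops
# satisfying them. The schema and payloads below are invented examples.

CONSUMER_CONTRACT = {"order_id": str, "status": str, "total_cents": int}

def satisfies_contract(response: dict, contract: dict) -> list:
    """Return a list of violations; an empty list means the contract holds."""
    errors = []
    for field, expected_type in contract.items():
        if field not in response:
            errors.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

good = {"order_id": "A-42", "status": "shipped", "total_cents": 1999}
assert satisfies_contract(good, CONSUMER_CONTRACT) == []

renamed = {"orderId": "A-42", "status": "shipped", "total_cents": 1999}
print(satisfies_contract(renamed, CONSUMER_CONTRACT))  # the rename breaks the contract
```

    Running this in the provider's pipeline turns the "minutes of change, days of choreography" problem into an automated gate: a breaking rename fails fast in CI instead of surfacing in a cross-team incident.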

    20 min
  3. Why Your Microservices Are Turning the Cloud Toxic

    1D AGO

    Why Your Microservices Are Turning the Cloud Toxic

    One slow dependency can quietly poison an entire cloud platform long before any dashboard shows a major outage. The systems still appear healthy. CPU looks normal. Containers remain online. Health checks keep passing. Yet underneath the surface, capacity is already collapsing because the architecture was built on a dangerous assumption: every remote call will return quickly enough to keep the platform moving. That assumption breaks the moment real pressure arrives.

    In this episode, we dive deep into the mechanics behind cascading latency failures in modern .NET microservice environments and explain why “slow” is often more dangerous than “down.” Most teams prepare for crashes. Very few prepare for toxic waiting states that silently spread through APIs, queues, databases, gateways, and worker services until the entire platform grinds itself into exhaustion. This is not another discussion about generic retries or simplistic cloud scaling advice. This episode is about failure containment, resource protection, and architectural resilience under real-world pressure. Because the real problem isn’t usually the first failed request. It’s everything that gets trapped waiting behind it.

    SILENT LATENCY IS THE REAL CLOUD KILLER

    Modern distributed systems are incredibly good at hiding their own deterioration. A dependency becomes slower by a few hundred milliseconds. Then a few seconds. Requests begin stacking up quietly inside ASP.NET pipelines while outbound HTTP calls hold sockets open longer and longer. Connection pools start draining. Queues begin filling. Upstream APIs wait longer to respond while downstream services struggle to recover. Nothing appears catastrophic at first. That’s exactly why latency spreads so effectively. Unlike a hard outage, slow degradation gets admitted into the system and multiplied across every dependent service. A failed call is rejected immediately. A slow call infects everything upstream.
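    The standard defense against slow calls being admitted into the system is refusing to wait: a hard timeout converts a toxic slow call into a fast, handleable failure. A minimal sketch using Python's standard library, with a simulated 3-second dependency standing in for a degraded downstream service:

```python
import concurrent.futures
import time

def slow_dependency() -> str:
    time.sleep(3)                      # simulated degraded downstream service
    return "data"

def call_with_timeout(fn, timeout_s: float):
    """Fail fast instead of letting callers queue up behind a slow call."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn).result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return None                    # degrade explicitly: fallback or cached value
    finally:
        pool.shutdown(wait=False)      # do not block the caller on the stuck thread

start = time.monotonic()
result = call_with_timeout(slow_dependency, timeout_s=0.2)
elapsed = time.monotonic() - start
print(result, round(elapsed, 1))       # caller is unblocked after ~0.2s, not 3s
```

    The point is not the specific API; it is that the caller's capacity (threads, sockets, connection-pool slots) is released on a schedule the caller controls, rather than one the failing dependency dictates.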
    This episode explores how those waiting states become invisible capacity killers inside .NET systems, especially in high-traffic cloud architectures where services depend heavily on identity providers, APIs, databases, third-party platforms, and shared infrastructure.

    We break down:
    - Why slow dependencies are more dangerous than dead ones
    - How async code still consumes valuable platform resources
    - Why healthy-looking dashboards often hide collapsing throughput
    - How queue growth becomes a symptom of delayed completion rates
    - Why adding more replicas frequently makes the problem worse

    Because scaling a waiting room doesn’t solve the dependency poisoning the system underneath it.

    WHY RETRIES OFTEN MAKE OUTAGES WORSE

    Retries feel safe. In small systems, they usually are. But inside distributed cloud environments, retries can quickly become synchronized load amplification attacks against already struggling dependencies. This episode explains why retry logic changes completely once systems operate at scale. A single failed request can multiply into waves of duplicate traffic as every service instance follows the exact same retry behavior at the exact same time. Inside the .NET ecosystem, resilience frameworks make retries deceptively easy to implement. Developers add policies with good intentions, believing they’re improving stability. But poorly designed retry strategies frequently extend outages instead of containing them.

    We explore how:
    - Long timeout windows increase pressure across the platform
    - Retried requests consume even more thread time and socket capacity
    - Retry storms create artificial traffic spikes
    - Overloaded services become trapped in endless recovery loops
    - Broad retry policies generate massive cloud waste and instability

    This episode reframes retries for what they really are under pressure: Load generation. Not protection.
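    The retry-storm effect comes from every instance retrying on the same schedule. Exponential backoff with jitter desynchronizes them, so a fleet of clients spreads its retries out instead of arriving in waves. A minimal sketch of the "full jitter" variant; the base delay, cap, and attempt count are illustrative:

```python
import random

def backoff_delays(base_s: float = 0.1, cap_s: float = 10.0, attempts: int = 5):
    """'Full jitter' backoff: each delay is uniform in [0, min(cap, base * 2^n)],
    so thousands of instances retrying the same failure spread out instead of
    hammering the dependency in synchronized waves."""
    return [random.uniform(0, min(cap_s, base_s * 2 ** n)) for n in range(attempts)]

random.seed(7)  # deterministic only for this example run
for i, d in enumerate(backoff_delays()):
    print(f"attempt {i}: wait {d:.3f}s")
```

    Even with jitter, retries remain load generation; they are only safe for transient, idempotent operations, which is exactly the qualification the episode goes on to make.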
    You’ll also learn when retries do make sense, including how to safely handle transient faults, temporary network interruptions, and idempotent operations without accidentally creating synchronized platform-wide self-harm.

    BULKHEAD ISOLATION: STOPPING ONE FAILURE FROM TAKING DOWN EVERYTHING

    One of the most important concepts covered in this episode is bulkhead isolation. Most cloud teams believe their services are isolated because they run in separate containers or repositories. But if those services still share outbound connections, execution pools, database bottlenecks, or queue consumers, then the failure path remains shared. And shared pools become toxic during latency events. This episode explains how bulkhead isolation creates hard architectural boundaries that prevent one failing dependency from stealing resources from unrelated workloads.

    We discuss practical .NET resilience design strategies including:
    - Per-dependency concurrency limits
    - Dedicated outbound HTTP client policies
    - Isolated queue consumers
    - Separate execution paths for critical workloads
    - Reserved capacity for revenue-generating flows
    - Tenant-level isolation strategies
    - Business-priority-driven workload separation

    Because under pressure, equal access to shared resources becomes one of the fastest ways to collapse an entire platform. You’ll hear real-world examples of how reporting systems, background synchronization jobs, and low-priority workloads unintentionally starve checkout systems, identity flows, and customer-facing APIs simply because nobody created boundaries between them. This is where resilience stops being a technical optimization and becomes a business decision.

    CIRCUIT BREAKERS AND CONTROLLED FAILURE

    Once failures start spreading, the platform needs a way to stop panic from multiplying. That’s where circuit breakers become essential. This episode breaks down how circuit breakers act as real-time traffic control systems for unstable dependencies.
    Instead of allowing every request to independently discover failure through expensive timeouts, breakers create shared system memory that quickly stops doomed traffic before it spreads resource exhaustion upstream.

    We cover:
    - Closed, open, and half-open circuit states
    - Why fast rejection is healthier than slow waiting
    - How breaker thresholds influence platform behavior
    - The dangers of generic one-size-fits-all resilience policies
    - Proper timeout and breaker composition in .NET
    - Dependency-specific resilience tuning strategies
    - Why upstream systems must cooperate with degraded modes

    You’ll also learn why many teams accidentally sabotage their own circuit breaker strategies by continuing to aggressively feed traffic into failing dependencies from queues, schedulers, and upstream APIs. A breaker alone cannot save a platform that refuses to acknowledge degraded conditions.
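    The closed, open, and half-open cycle described above can be sketched as a tiny state machine. The thresholds here are illustrative, and a production breaker (for example Polly in the .NET ecosystem) adds thread safety, sliding windows, and metrics:

```python
import time

class CircuitBreaker:
    """Toy closed/open/half-open breaker; thresholds are illustrative only."""

    def __init__(self, failure_threshold: int = 3, reset_timeout_s: float = 5.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout_s = reset_timeout_s
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, fn):
        if self.state == "open":
            if time.monotonic() - self.opened_at < self.reset_timeout_s:
                # Fast rejection: no expensive timeout discovery per request.
                raise RuntimeError("circuit open: rejecting fast instead of waiting")
            self.state = "half-open"   # allow one probe request through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.state = "closed"
        return result

breaker = CircuitBreaker()

def flaky():
    raise TimeoutError("dependency stalled")

for _ in range(3):
    try:
        breaker.call(flaky)
    except TimeoutError:
        pass

print(breaker.state)   # "open": further calls are rejected immediately
```

    The breaker only helps if upstream producers (queues, schedulers, callers) respect the open state instead of continuing to feed traffic in, which is the cooperation point the episode closes on.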

    21 min
  4. From Figma design to the PowerApps with Lukas Pavelka [MVP]

    1D AGO

    From Figma design to the PowerApps with Lukas Pavelka [MVP]

    Development is changing faster than most teams can process. A few years ago, building enterprise applications meant long development cycles, hand-coded UI layers, endless testing loops, and massive backlogs between design teams and developers. Now AI agents can write code, generate layouts, repair syntax, optimize workflows, and even help translate entire applications into more than one hundred languages. But that shift creates a new question: If AI can generate applications faster than ever before, what actually separates good development from dangerous development?

    In this episode of the M365 FM Podcast, Mirko Peters sits down with Microsoft MVP Lukas Pavelka to explore the intersection of Figma, PowerApps, AI-assisted coding, Power BI, and the rapidly changing future of enterprise application development. The conversation goes far beyond low-code hype. This episode explores what really happens when AI agents enter the development lifecycle, how Figma is evolving into a complete ecosystem, why governance and security still matter deeply in AI-driven coding, and how developers can use tools like Copilot, Claude, GitHub Copilot, and vibe coding without losing control of their own codebase.

    FROM JAVA DEVELOPER TO FIGMA AND POWERAPPS CREATOR

    Lukas Pavelka started as a traditional Java developer more than twenty years ago before eventually transitioning into Power Platform development, automation, and AI-assisted application design. The turning point came through design. After discovering Figma through his wife’s design work, Lukas realized there was a major gap between beautiful design systems and practical PowerApps development workflows. That led to the creation of his PowerApps for Figma plugin, designed to help Power Platform developers move much faster between design and implementation.
    Today, Lukas develops multiple products focused on bridging design, automation, AI, and low-code development, including:
    - PowerApps for Figma
    - Power BI for Figma
    - My Bot Admin for Telegram automation

    The discussion explores how these products evolved from internal productivity ideas into community-focused tools aimed at helping developers, makers, and Power Platform teams reduce repetitive work and improve enterprise UI quality.

    WHY FIGMA IS BECOMING MUCH BIGGER THAN DESIGN

    One of the most fascinating parts of this episode is the discussion around Figma’s evolution. Lukas explains why Figma is no longer just a design platform. It is becoming a complete ecosystem that increasingly overlaps with development, prototyping, presentations, AI-assisted workflows, and enterprise application delivery.

    The conversation covers:
    - Figma design systems
    - Reusable component libraries
    - PowerApps UI translation
    - YAML export
    - Component variants
    - Multi-language enterprise apps
    - Design consistency across projects

    Lukas also explains how his plugins allow Power Platform developers to create scalable design systems that can be reused across enterprise projects while dramatically reducing repetitive UI work. The discussion highlights a major shift happening inside enterprise development: Good UX is no longer optional. Organizations increasingly realize that internal business applications must feel modern, intuitive, and scalable if they want employees to actually use them effectively.

    AI, VIBE CODING, AND THE REALITY OF MODERN DEVELOPMENT

    This episode dives deeply into AI-assisted development and the rise of “vibe coding.” Lukas shares practical experiences using GitHub Copilot, Claude, Visual Studio integrations, AI agents, and prompt-based coding workflows to accelerate development. But the conversation stays grounded in reality. One of the strongest themes throughout the episode is that AI coding still requires strong technical understanding.
    Lukas explains why developers cannot simply rely on AI-generated code without understanding architecture, debugging, security, versioning, and governance.

    The discussion explores:
    - Prompt engineering for developers
    - AI-assisted debugging
    - Model selection strategies
    - Token cost management
    - Versioning challenges
    - Secure coding practices
    - MCP and Model Context Protocol
    - AI coding limitations

    A major insight from the episode is that AI coding works best when prompts stay highly focused and scoped to one specific task at a time. Broader prompts often cause AI agents to rewrite working code unnecessarily or introduce instability into existing projects. The episode also explores how AI development changes the role of the developer itself. Instead of writing every line manually, developers increasingly supervise, guide, validate, secure, and orchestrate AI-generated output.

    THE BUSINESS REALITY OF AI DEVELOPMENT

    The conversation also moves into the economics behind AI-assisted development. Lukas and Mirko discuss token costs, cloud compute limitations, GPU demand, electricity consumption, and the growing operational cost of running large-scale AI systems.

    The episode examines:
    - Claude pricing
    - GitHub Copilot limits
    - AI token consumption
    - GPU infrastructure
    - Electricity challenges
    - AI model specialization
    - Cloud economics

    One particularly interesting part of the discussion focuses on how different AI models perform better for different development tasks. Some models perform better for frontend design work, others for deeper reasoning, debugging, or enterprise coding scenarios. This creates a new challenge for developers: Understanding not only how to code, but also which AI model to use for which type of work.

    SECURITY, GOVERNANCE, AND THE RISKS OF AI CODING

    As AI-generated development accelerates, governance becomes increasingly important. Lukas explains why developers still need to understand exactly what their code is doing, even when AI agents generate large portions of it automatically.
    The episode explores the growing risks around:
    - Security vulnerabilities
    - Poor governance
    - Exposed repositories
    - Unsafe prompts
    - Weak versioning practices
    - AI-generated technical debt

    One of the strongest warnings throughout the episode is simple: AI can accelerate bad development just as easily as good development. Without proper architecture, security awareness, governance structures, and development knowledge, organizations risk creating large amounts of insecure code much faster than before.

    WHAT COMES NEXT FOR AI DEVELOPMENT

    The future discussed in this episode moves beyond simple text prompts. Lukas explains why voice-driven development, AI skills, reusable agent capabilities, and contextual AI orchestration are becoming the next major wave in application delivery.

    The discussion explores how future AI systems may:
    - Understand spoken instructions
    - Build applications conversationally
    - Reuse trained development skills
    - Orchestrate workflows automatically
    - Connect through MCP servers
    - Generate full enterprise UI systems

    At the same time, both Lukas and Mirko emphasize that strong development fundamentals remain essential. The tools are changing rapidly. But architecture, security, UX thinking, governance, and operational understanding still matter most.

    36 min
  5. The Invisible Employee: Is Your Next Hire Actually an AI Agent

    1D AGO

    The Invisible Employee: Is Your Next Hire Actually an AI Agent

    Ticketing looks clean on paper. You get the numbers. The queues. The dashboards. But the real cost of support usually starts long before a ticket ever appears. An employee loses access to a file. A Teams meeting fails seconds before it starts. A sharing link breaks. Someone retries the same action over and over, asks a coworker for help, or wastes twenty minutes trying to fix something manually before they finally give up and open a support request. That hidden productivity loss rarely shows up in queue reports.

    In this episode of the M365 FM Podcast, Mirko Peters explores why the traditional help desk model is breaking under the scale and complexity of modern Microsoft 365 environments and what replaces it. The future of support is not faster triage. It is autonomous, invisible, policy-driven intervention that happens before users even realize they need help.

    THE DEATH OF THE TICKET

    The old support model still follows the same operational pattern. Something breaks, a user notices it, a ticket gets created, and IT begins translating the issue into categories, priorities, queues, and escalation paths. Then the waiting begins. That process feels normal because organizations have operated this way for decades, but the ticket itself is not the service. The ticket is evidence that support arrived too late. By the time the incident reaches the queue, the employee has already lost context, momentum, and productive work time. Modern Microsoft 365 estates are simply too dynamic for manual triage to scale efficiently anymore. Organizations now operate across Teams, SharePoint, Exchange, Intune, Entra, Defender, Copilot, hybrid devices, and Conditional Access policies simultaneously. The number of edge-case combinations grows faster than human-driven routing models can realistically absorb. Most organizations respond by adding another portal, another chatbot, or another workflow layer. But in reality, that usually increases friction instead of removing it.
    This episode breaks down why reactive ITIL-style operations are becoming structural bottlenecks and why most support labor still gets trapped inside repetitive routing, categorization, and clarification work instead of prevention and resilience engineering.

    THE INVISIBLE EMPLOYEE MODEL

    So what actually replaces the ticket? Not another chatbot. Not another AI assistant waiting for prompts. The invisible employee model introduces autonomous operational agents embedded directly inside Microsoft 365 workflows. These agents behave more like digital workers than simple software features. They operate with their own identity, defined permissions, governance boundaries, operational memory, and approval rules. Instead of waiting for users to describe problems manually, the invisible employee continuously monitors the environment for friction and operational drift.

    It can detect:
    - Sign-in failures
    - License mismatches
    - Sharing issues
    - Device compliance drift

    Then it acts safely inside policy before the issue escalates into a formal support event. Support no longer begins inside a portal. It begins exactly where the interruption happens, whether that is Teams, Outlook, SharePoint, or Entra. This episode explains why support is shifting from reactive ticket handling into proactive operational correction embedded directly inside daily work.

    THE ARCHITECTURE OF PREEMPTION

    Mirko breaks down how autonomous support actually works inside Microsoft 365. The model follows a simple operational chain: event, reasoning, orchestration, and verification. Microsoft 365 already generates massive amounts of telemetry through Entra, Intune, Defender, Teams, SharePoint, Exchange, and Microsoft Graph. The real transformation happens when agents can interpret those signals, compare current state against desired state, trigger approved remediation, and verify outcomes automatically.
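    The event, reasoning, orchestration, verification chain described here is essentially a desired-state reconciliation loop. A toy sketch, where the user record, drift checks, and remediation actions are invented stand-ins for Microsoft Graph and Entra telemetry plus approved runbooks:

```python
# Toy desired-state reconciliation loop for an "invisible employee" agent.
# The signals, checks, and fixes are invented stand-ins for Microsoft Graph /
# Entra telemetry and approved remediation runbooks.

DESIRED = {"license": "E5", "compliant": True}

def remediate(user: dict) -> list:
    actions = []
    if user["license"] != DESIRED["license"]:
        user["license"] = DESIRED["license"]       # orchestration: approved fix
        actions.append("reassigned license")
    if not user["compliant"]:
        actions.append("escalated to human")       # outside the safe-action boundary
    return actions

def reconcile(user: dict) -> list:
    drift = [k for k in DESIRED if user.get(k) != DESIRED[k]]   # event + reasoning
    if not drift:
        return []
    actions = remediate(user)
    if user["license"] != DESIRED["license"]:                   # verification
        actions.append("retry scheduled")
    return actions

user = {"license": "E3", "compliant": True}
print(reconcile(user))   # license drift detected, fixed, and verified
print(reconcile(user))   # second pass: no drift, no action needed
```

    The shape matters more than the details: detect drift, act only within an approved boundary, verify the outcome, and escalate or retry when verification fails, which is exactly the feedback loop the episode goes on to describe.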
The discussion explores real-world scenarios like access remediation, Conditional Access enforcement, meeting recovery, SharePoint sharing failures, and license mismatch correction. A critical point throughout the episode is that autonomous systems cannot rely on isolated AI responses. They require continuous feedback loops that detect issues, test conditions, apply fixes, validate outcomes, retry safely, and escalate when necessary. That feedback-driven architecture is what separates operational AI from simple chatbot automation.

GOVERNANCE, TRUST, AND AGENT IDENTITY

Once support starts acting autonomously, governance becomes the most important part of the system. Every support agent must be treated like a real operational worker inside the tenant. That means agents require Entra identities, defined ownership, lifecycle governance, least-privilege access, approval boundaries, and complete auditability. This episode explores why organizations cannot scale autonomous support safely if they do not fully understand which agents already exist in their environment and what those agents are allowed to do. The conversation also examines:

- Human approval paths
- Runtime monitoring
- Rollback logic
- Operational accountability

The key message is clear. Autonomous support only works when governance, trust, visibility, and operational control scale together.

THE NEW ROI OF INVISIBLE SUPPORT

Traditional support metrics focus on visible activity like tickets closed, calls handled, and SLA performance. But invisible support creates value through prevented interruption. The biggest operational gains come from reduced context switching, faster restoration, fewer escalations, lower manual effort, and smoother employee workflows. Mirko explains why organizations need entirely new KPI models for AI-driven support operations.
The conversation covers autonomous resolution rates, prevented incidents, reduced manual touches, productivity recovery, and why AI-driven support can dramatically reduce operational costs when implemented correctly. This is where IT stops acting like a reactive cost center and starts behaving like a reliability layer embedded directly into daily work.

WHAT THIS DOES TO THE SUPPORT TEAM

One of the biggest misconceptions around AI-driven support is that it eliminates people. In reality, the role of the support engineer changes completely. Teams move away from repetitive ticket handling and toward workflow orchestration, guardrail design, policy tuning, governance engineering, and exception management. The future support engineer becomes part reliability architect, part governance operator, and part automation supervisor. That shift requires organizations to rethink how support teams are trained, measured, and structured.

IMPLEMENTATION AND PAYOFF

The rollout strategy matters. Mirko recommends starting with one high-friction support flow inside Microsoft 365 instead of attempting a massive transformation project all at once. Access remediation, meeting recovery, sharing issues, and device compliance workflows are often strong starting points because the patterns are frequent and measurable. The critical design questions become:

- What defines healthy state?
- Which events indicate drift?
- Which actions are safe to automate?
- Where does human escalation begin?

Once those foundations are in place, support stops acting like a front desk reacting to incidents and starts operating like an intelligent reliability engine embedded directly into Microsoft 365 itself.

CONCLUSION

Support is shifting from visible reaction to embedded prevention. The ticket was never the service. It was proof the service showed up too late.
If you are leading Microsoft 365 operations, AI governance, Copilot adoption, identity architecture, support modernization, or enterprise automation strategy, this episode provides a practical blueprint for understanding where autonomous support is heading next. Subscribe to the M365 FM Podcast for more deep dives into Microsoft 365, Copilot, Entra, AI agents, governance, automation, and modern enterprise operating models. Connect with Mirko Peters on LinkedIn and share the episode with teams exploring the future of AI-driven support and operational automation. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

    20 min
  6. Digital Identity is Broken: How Entra External ID Fixes the Trust Gap

    2D AGO

    Digital Identity is Broken: How Entra External ID Fixes the Trust Gap

Identity used to be simple. Employees logged into corporate systems from managed devices inside a controlled network perimeter. Security teams built walls, directories stored accounts, and trust lived inside one organization. That world no longer exists. Today, customers move across apps and devices constantly. Partners collaborate across tenants. Contractors join and leave projects every week. AI agents and automated workflows request access without ever touching the traditional sign-in path older identity systems were designed for. Yet most identity architectures still behave like everything happens inside a border. That mismatch creates one of the biggest hidden operational problems in modern business: the trust gap. In this episode of the M365 FM Podcast, Mirko Peters breaks down why identity is no longer just an authentication problem. It is now a business growth problem, a customer experience problem, a governance problem, and increasingly, a digital trust problem.

THE DEATH OF THE PERIMETER

Most identity systems still rely on rebuilding trust from scratch inside every application, every onboarding flow, and every partner portal. Every time a customer registers again, every time a contractor creates another account, and every time a partner has to manually prove the same information twice, organizations create friction, duplicate data, and larger attack surfaces. The costs are massive. Research continues to show that complicated registration processes directly reduce conversion rates. Password problems still overwhelm support teams. Centralized identity silos create larger breach targets while slowing users down at the exact moment businesses want faster onboarding and smoother digital experiences. This episode explores why identity can no longer be treated as a static account sitting in a directory. Instead, the future moves toward portable trust.
WHY PORTABLE IDENTITY CHANGES EVERYTHING

Mirko explains the shift from account-centric identity to claim-centric identity. Rather than asking whether an organization owns an account record for a person, the better question becomes: What does this user, partner, customer, or system need to prove right now? That shift changes everything. The discussion covers how passkeys accelerated this transformation by replacing shared secrets with stronger proof tied to users and devices. Microsoft’s reported improvements in login speed and success rates demonstrate that stronger security and lower friction no longer need to compete against each other. The episode also explains why decentralized identity is often misunderstood inside enterprises. Decentralized identity does not mean the end of governance or enterprise control. It means trust becomes portable, verifiable, and policy-driven rather than dependent on one giant central identity store holding every attribute forever.

WHERE ENTRA EXTERNAL ID FITS

Mirko breaks down the architectural distinction many executives confuse. Entra External ID acts as the orchestration and governance layer for customer and partner identity journeys. Verified ID provides portable proof through verifiable credentials. Together, they create a hybrid model where organizations can modernize external identity without immediately abandoning every traditional CIAM pattern they already rely on. The episode also dives deep into the practical realities of migration from Azure AD B2C, including:

- Just-in-time password migration
- Modern Graph-centered architecture
- Federation and lifecycle control

Beyond architecture, this conversation focuses heavily on business impact. Identity friction directly affects customer conversion rates, support ticket volumes, partner onboarding speed, fraud exposure, operational costs, and product release timelines.

GOVERNANCE, RISK, AND DIGITAL SOVEREIGNTY

Technology alone does not solve the problem.
Governance becomes the central challenge. This episode explores the tension between user sovereignty, enterprise assurance, legal accountability, and operational recovery. Portable identity only works when organizations clearly define issuer trust, revocation processes, lifecycle governance, and policy enforcement. That is why Mirko frames Entra not as a magic decentralized identity platform, but as a practical orchestration layer where trust, proof, and governance can finally work together. The final section of the episode delivers a practical operating blueprint leaders can actually implement. Rather than attempting a massive identity transformation overnight, organizations should begin with one external journey where identity friction already creates visible business pain. The key questions every organization must answer are:

- What proof needs to travel?
- What policy must remain central?
- What risk events require step-up verification?

The organizations that solve those questions well will move faster, onboard users more efficiently, reduce operational overhead, and create more scalable ecosystems without multiplying identity silos.

IMPLEMENTATION PAYOFF AND CONCLUSION

Identity is no longer about protecting a border. It is about carrying trust across systems, organizations, devices, and automated workflows without forcing users to repeatedly rebuild proof from zero. If you are leading Microsoft 365, Entra, Zero Trust, security architecture, identity governance, or customer identity modernization initiatives, this episode gives you a strategic framework for understanding where identity is heading next and how Microsoft’s Entra platform fits into that transition. Subscribe to the M365 FM Podcast for more deep dives into Microsoft 365 architecture, governance, automation, AI, identity, and modern enterprise strategy.
Connect with Mirko Peters on LinkedIn and share the episode with teams working on identity modernization, external collaboration, CIAM, and Zero Trust transformation.

    22 min
  7. Using PowerShell to automate all things Azure and Microsoft 365 with Matthew Dowst [MVP]

    2D AGO

    Using PowerShell to automate all things Azure and Microsoft 365 with Matthew Dowst [MVP]

In this episode of the M365 podcast, host Mirko Peters sits down with PowerShell expert and automation architect Matthew Dowst. With over 20 years of experience, Matthew shares deep insights into automation across Microsoft 365 and Azure, drawing from his work in enterprise environments, community contributions, and real-world problem solving. The discussion explores how PowerShell has evolved, why it remains critical despite new tools like Copilot and Power Automate, and what the future holds for administrators.

WHAT POWERSHELL REALLY IS: MORE THAN JUST SCRIPTING

A central theme of the conversation is the identity of PowerShell. Is it a developer tool or an admin tool? According to Matthew, it is both—and that duality is exactly what makes it powerful. PowerShell enables simple administrative commands while also supporting full-scale automation solutions. It acts as a bridge between infrastructure, APIs, and services, allowing professionals to move beyond manual work into programmable environments.

FROM SMALL SCRIPTS TO ENTERPRISE AUTOMATION

Matthew shares how many professionals start with small, repeatable scripts—often in help desk or monitoring scenarios—and gradually expand into building full automation platforms. PowerShell’s object-oriented nature allows scripts to evolve into modular systems, where reusable functions and logic blocks can be combined into complex workflows. This progression highlights a key mindset shift: automation is not about isolated scripts, but about building adaptable systems.

THE ROLE OF MICROSOFT GRAPH AND MODERN MODULES

A major evolution in recent years has been the introduction of Microsoft Graph modules in PowerShell. Previously, administrators had to deal with fragmented tooling across services like Azure AD, SharePoint, and Exchange. The Graph ecosystem has unified access, making automation more consistent and standardized.
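To give a feel for what that unified surface looks like underneath the cmdlets, here is a small Python sketch of Graph's standard paging pattern, where a collection is consumed by following `@odata.nextLink` until it disappears. The HTTP call is injected so the logic stands alone; authentication, throttling, and the Microsoft.Graph PowerShell modules the episode actually discusses are deliberately out of scope.

```python
from typing import Callable, Optional

# The real Graph users collection endpoint; everything else here is a sketch.
GRAPH_USERS = "https://graph.microsoft.com/v1.0/users"

def list_all(fetch: Callable[[str], dict], url: str = GRAPH_USERS) -> list:
    """Collect every item in a paged Graph collection by following @odata.nextLink."""
    items: list = []
    next_url: Optional[str] = url
    while next_url:
        page = fetch(next_url)                   # GET one page of results as a dict
        items.extend(page.get("value", []))      # each page carries items under "value"
        next_url = page.get("@odata.nextLink")   # absent on the final page
    return items
```

Every Graph collection pages the same way, which is a large part of why tooling built on it feels consistent across workloads.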
While direct API calls still offer flexibility and control, PowerShell provides a more user-friendly abstraction, covering the majority of real-world use cases.

POWERSHELL VS APIs: CONTROL VS MAINTAINABILITY

The discussion highlights an important trade-off: using PowerShell modules versus direct API calls. PowerShell modules are easier to maintain and understand, especially in controlled environments. However, APIs provide tighter control and versioning when deploying solutions externally. This balance between convenience and precision is a recurring theme in automation design.

WHY POWERSHELL STILL MATTERS IN THE AGE OF AI

With the rise of Copilot and AI-driven tools, one might assume that PowerShell becomes less relevant. However, Matthew argues the opposite. PowerShell provides transparency and control—admins can inspect scripts before execution, ensuring predictable outcomes. AI may assist in generating scripts, but PowerShell remains the execution layer that professionals trust.

AUTOMATION AT SCALE: WHERE GUI TOOLS FAIL

Graphical interfaces are useful for one-off tasks, but they quickly break down at scale. PowerShell shines when dealing with hundreds or thousands of objects, enabling consistent and repeatable actions. The ability to process large datasets, automate bulk operations, and integrate logic makes it indispensable in enterprise environments.

REAL-WORLD USE CASE: LOG4J VULNERABILITY RESPONSE

One of the most compelling examples shared is how PowerShell was used during the Log4j security crisis. Matthew built a script that scanned entire environments—across Azure VMs and hybrid systems—to detect vulnerabilities. The script could even power on machines, scan them, and shut them down again, all in parallel. This level of automation enabled rapid identification and response, something impossible to achieve manually.

REPORTING, VISIBILITY, AND CROSS-TENANT INSIGHTS

PowerShell is also a powerful tool for reporting and visibility.
The episode highlights scenarios where built-in Microsoft tools fall short, such as accurately tracking external sharing in SharePoint and OneDrive. By using PowerShell, organizations can extract precise, meaningful insights instead of overwhelming, noisy data.

COST CONSIDERATIONS AND AZURE AUTOMATION

From a financial perspective, PowerShell itself is essentially free to run locally. Even when using Azure Automation, the costs remain minimal compared to the value delivered. This makes it a highly cost-effective solution for enterprise automation.

COMMON MISTAKES IN POWERSHELL AUTOMATION

Matthew outlines several common pitfalls:

- Not designing scripts to be restartable
- Poor error handling and logging
- Automating inefficient processes instead of improving them
- Overloading scripts with too many responsibilities

A key takeaway is that automation should be resilient and modular, allowing partial failures without breaking the entire process.

TESTING IN CONSTANTLY CHANGING ENVIRONMENTS

Testing automation in Microsoft environments is challenging due to constant updates and API changes. Matthew discusses strategies such as mocking APIs, replaying requests, and using dedicated test tenants. Building pipelines that reset environments to known states is critical for reliable testing.

POWERSHELL AND THE FUTURE OF MICROSOFT ECOSYSTEMS

PowerShell is not going away. Microsoft continues to invest in it, especially through its integration with .NET and Microsoft Graph. The company’s commitment ensures that anything achievable in the GUI will also be possible via PowerShell. As APIs expand, PowerShell’s capabilities grow alongside them.

ADVICE FOR NEW AND FUTURE ADMINS

For those starting out, the best way to learn PowerShell is practical:

- Recreate GUI tasks using scripts
- Save and reuse scripts as templates
- Focus on repeatability and scalability
- Build a habit of automation early

This approach helps transform everyday tasks into reusable solutions.
HOT TAKES AND KEY INSIGHTS

The episode concludes with several strong opinions:

- Managing Microsoft 365 without PowerShell is inefficient
- Power Automate complements, not replaces, PowerShell
- GUI-based automation does not scale for enterprises
- Most organizations struggle with process issues, not tooling
- Microsoft Graph will enhance PowerShell, not replace it

FINAL THOUGHTS

The overarching message is clear: PowerShell remains a foundational skill for modern IT professionals. It empowers administrators to move from reactive work to proactive automation, delivering efficiency, consistency, and scalability. As the Microsoft ecosystem evolves, PowerShell continues to adapt—making it more relevant than ever.

    43 min
  8. Is Your Copilot Safe: Stop Prompt Injections with Azure Logic Apps

    2D AGO

    Is Your Copilot Safe: Stop Prompt Injections with Azure Logic Apps

Your Copilot problem isn’t a feature issue—it’s a trust failure in the model behind it. Most organizations still believe safety lives in prompts, permissions, and a few edge filters. But attackers don’t need to break your prompt—they just need to poison the context around it. That’s where everything collapses. Hidden payloads inside emails, SharePoint files, or form inputs sit quietly until Copilot retrieves them and treats them like instructions. Incidents like EchoLeak and ShareLeak already proved the pattern—and patches didn’t fix the root cause. Because Copilot operates across Microsoft 365, one poisoned input can propagate fast. This episode shows why the real fix isn’t another dashboard—it’s inserting Azure Logic Apps as a control layer before execution.

THE REAL DANGER IS THE ARCHITECTURE, NOT THE PROMPT

The traditional approach assumes you can secure AI by writing better prompts. Strong system messages, delimiters, and user guidance feel logical—but they don’t create real security boundaries. The model processes everything in a shared language channel where data and instructions compete equally. That’s the flaw. Once Copilot starts retrieving from Microsoft Graph—emails, files, chats—the attack surface explodes. You’re no longer securing a conversation; you’re securing a live stream of mixed-trust inputs. Indirect prompt injection becomes the real threat: attackers plant malicious instructions in content long before it’s ever retrieved. When Copilot pulls that data later, it blends it into context—and the model follows it. The result? Sensitive data exposure, manipulated outputs, or even downstream actions triggered by poisoned inputs.

WHY BASIC DEFENSES FAIL IN PRODUCTION

Most teams rely on familiar controls—better prompts, delimiters, regex filters, and user training. These aren’t useless, but they’re not enforcement—they’re persuasion. A system prompt can suggest behavior, but it cannot block malicious content once it enters the model’s context.
Regex helps catch obvious phrases, but it fails against subtle or semantic attacks. Even advanced detection tools fall short if they only alert after execution. A log entry isn’t containment. A SIEM alert isn’t prevention. By the time you investigate, the damage may already be done. The core mistake is simple: teams analyze outputs but don’t control inputs. That order is backwards. Real security starts before the model runs.

THE LOGIC APP FIREWALL MODEL

Azure Logic Apps changes the control point. Instead of reacting after Copilot acts, you intercept inputs before execution. Logic Apps acts as a policy enforcement layer in the workflow. It normalizes incoming data, inspects it, scores risk, and decides what happens next. The process is simple but powerful: trigger, normalize, inspect, score, decide, and route. First, fast checks like regex flag obvious risks. Then deeper inspection happens using Azure AI Content Safety Prompt Shields, analyzing both prompts and retrieved documents together. Add threat intelligence from Microsoft Defender or external feeds to enrich the decision. The result is a scored workflow, not a binary filter. Low-risk inputs pass, medium-risk inputs get sanitized or reviewed, and high-risk inputs are blocked entirely. Every piece of context—user input, files, emails, tool arguments—is treated as untrusted until proven safe.

WHAT THE WORKFLOW DOES AT RUNTIME

In production, this isn’t just keyword scanning—it’s context-aware decisioning. Every request is enriched with metadata: who sent it, where it came from, and what action it triggers. Inputs are separated into trust zones—user prompt, retrieved content, history, and tool parameters—so risk can be traced accurately. Data is normalized to remove encoding tricks and inconsistencies. A fast pattern scan flags suspicious language, followed by deep analysis via Prompt Shields. Threat intelligence adds external context, and everything feeds into a composite risk score.
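The score-then-route step can be sketched in miniature. The patterns, weights, and thresholds below are invented for illustration only; this toy fast-pass does not replace the Azure AI Content Safety Prompt Shields stage the episode describes, it just shows how multiple weak signals combine into one routing decision.

```python
import re

# Hypothetical fast-pass patterns with weights; a real deployment would tune
# these and layer deeper semantic inspection on top.
SUSPICIOUS = [
    (re.compile(r"ignore (all )?previous instructions", re.I), 0.6),
    (re.compile(r"exfiltrate|send .* to http", re.I), 0.5),
]

def risk_score(text: str, source_weight: float = 1.0) -> float:
    """Sum matched pattern weights, capped at 1.0; retrieved content
    can be weighted higher than direct user input via source_weight."""
    return min(1.0, sum(w for rx, w in SUSPICIOUS if rx.search(text)) * source_weight)

def route(score: float) -> str:
    """Map the composite score onto the workflow's decision branches."""
    if score >= 0.8:
        return "block"
    if score >= 0.4:
        return "sanitize_or_review"
    return "allow"
```

The point of scoring rather than binary filtering is exactly what the thresholds show: one weak signal sends an input to review, while stacked signals block it outright.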
That score determines the outcome: allow, sanitize, quarantine, require approval, or block. Every decision is logged with a full audit trail, turning each blocked attempt into intelligence for future tuning.

HOW TO TUNE FOR LOW NOISE AND REAL BUSINESS USE

Building the workflow is easy—making it usable is the real challenge. Start small with high-risk scenarios like tool-enabled actions or sensitive data flows. Tune regex for recall, not perfection, and rely on scoring to reduce noise. Keep false positives below two percent to maintain user trust—because once friction rises, users will find workarounds. Focus on meaningful metrics: detection time, containment speed, and actual impact on decisions. Optimize cost by choosing the right Logic Apps plan based on usage patterns. Store only essential audit data to avoid creating new privacy risks. And align everything with governance frameworks like NIST AI RMF and Microsoft Purview. This isn’t just detection—it’s an operational model.

WHAT THIS CHANGES FOR LEADERS AND ARCHITECTS

This approach fundamentally shifts where security lives. It moves from configuration and prompts into the transaction path itself. Every Copilot interaction becomes an input channel that must be evaluated. For architects, this means designing interception points for every connector, plugin, and workflow. For security teams, it creates a unified response model across SOC, M365 admins, and AI owners. And for leadership, it reframes AI risk as a business process issue, not just a technical one. The cost of preventing an attack is always lower than cleaning one up—and with Copilot embedded in daily tools like Outlook, Teams, and SharePoint, the stakes are higher than ever.

IMPLEMENTATION PAYOFF AND CLOSE

The shift is simple: stop treating prompt injection as a wording problem and start treating it as runtime control over untrusted context. Map one Copilot workflow this week. Identify the last safe interception point.
Build a Logic App that inspects, scores, and controls that path before execution. That’s where real security begins. If you want more practical insights on securing Copilot and Microsoft 365, subscribe, leave a review, and connect with Mirko Peters on LinkedIn. Tell me which scenario you’re trying to secure next—and we’ll break it down.

    20 min


