M365.FM - Modern work, security, and productivity with Microsoft 365

Mirko Peters - Founder of m365.fm, m365.show and m365con.net

Welcome to M365.FM — your essential podcast for everything Microsoft 365, Azure, and beyond. Join us as we explore the latest developments across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, and the entire Microsoft ecosystem. Each episode delivers expert insights, real-world use cases, best practices, and interviews with industry leaders to help you stay ahead in the fast-moving world of cloud, collaboration, and data innovation. Whether you're an IT professional, business leader, developer, or data enthusiast, M365.FM brings the knowledge, trends, and strategies you need to thrive in the modern digital workplace. Tune in, level up, and make the most of everything Microsoft has to offer. M365.FM is part of the M365-Show Network. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

  1. Azure Policy Isn't Enough: The Secret to Real-Time Cloud Savings

    1 HR AGO

    Azure Policy Isn't Enough: The Secret to Real-Time Cloud Savings

    Your Azure bill usually starts going wrong long before finance ever notices the number. That’s the real problem. Most FinOps teams still operate on a reactive model built around dashboards, reports, alerts, exports, and month-end review cycles. But cloud spend doesn’t wait for governance meetings. It starts the second someone deploys the wrong SKU, selects an expensive region, skips ownership tags, enables premium defaults, or launches a service that scales faster than governance can respond. And while all of that is happening, Azure Policy often sits quietly in audit mode... documenting the damage instead of preventing it. In this episode, Mirko Peters breaks down why traditional FinOps approaches fail in modern Azure environments and why real cloud savings only happen when cost control moves directly into the deployment path. Instead of treating governance as reporting after the money is already spent, this episode explores how Azure Policy can become a real-time enforcement engine that blocks waste before billing ever starts. Because if your platform still relies on alerts instead of enforcement, AI workloads, autoscaling services, premium storage defaults, and weak deployment standards will continue multiplying cloud spend while your dashboards politely try to catch up.

    WHY REACTIVE FINOPS KEEPS FAILING

    Most FinOps programs produce visibility, but visibility is not control. That distinction changes everything. Traditional cloud governance usually follows the same cycle: observe spend, generate reports, investigate anomalies, open conversations, and then attempt remediation after the expensive deployment already exists. The issue is that cloud consumption moves too fast for that model. By the time a report explains the problem, the VM is already running, the premium disk is attached, the AI workload has already processed tokens, and the storage account is already growing. The conversation shifts from prevention to cleanup.
    And cleanup is always slower, more political, and more expensive. This episode explains why consumption-based cloud platforms fundamentally break older governance models built around delayed financial visibility. In Azure, spend happens in motion. Short-lived resources can generate cost in minutes, autoscale systems can multiply billing events rapidly, and AI services can create unpredictable spikes long before month-end reporting catches up. Mirko also explores the hidden second layer of waste most organizations ignore: the operational cost of remediation itself. Once bad deployments exist, companies don’t just pay for the resources. They also pay for the human cleanup loop around them — ticket reviews, owner tracing, escalation meetings, remediation planning, and endless coordination across engineering, finance, and platform teams.

    WHAT AZURE POLICY ACTUALLY DOES — AND WHERE MOST TEAMS MISUSE IT

    Azure Policy is far more than a compliance dashboard. At its core, it operates directly inside the Azure Resource Manager request path, which means it evaluates deployments before resources are successfully created. That makes Azure Policy one of the few governance tools capable of turning financial intent into real technical enforcement. This episode walks through how Azure Policy actually works internally, including:

    - ARM request evaluation
    - Policy effects and execution order
    - Modify versus Deny behavior
    - Append and DeployIfNotExists logic
    - Audit timing and compliance behavior
    - DenyAction protection scenarios
    - Management group assignment strategy

    Mirko explains why most organizations misunderstand Azure Policy entirely. Having policy assignments does not mean governance exists. In many environments, policies remain stuck in audit mode for months or years, collecting non-compliance reports while the deployment path stays fully open.
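    To make the request-path evaluation concrete, the sketch below expresses a deny policy as a Python dict. The `policyRule`/`if`/`then` structure and the `Microsoft.Compute/virtualMachines/sku.name` alias mirror the real Azure Policy JSON schema; the SKU allow-list and the toy evaluator are illustrative assumptions, since real evaluation happens inside Azure Resource Manager, not in your own code.

```python
# Sketch of an Azure Policy definition body, expressed as a Python dict.
# The shape (policyRule / if / then / effect) mirrors the Azure Policy
# JSON schema; the SKU list here is illustrative, not a recommendation.
policy_definition = {
    "mode": "All",
    "policyRule": {
        "if": {
            "allOf": [
                {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
                {"field": "Microsoft.Compute/virtualMachines/sku.name",
                 "notIn": ["Standard_B2s", "Standard_D2s_v5"]},
            ]
        },
        "then": {"effect": "deny"},
    },
}

def effect_for(resource_type: str, sku: str) -> str:
    """Toy evaluator: illustrates that the effect is decided from the
    request payload, i.e. before the resource ever exists."""
    rule = policy_definition["policyRule"]
    conditions = rule["if"]["allOf"]
    matches = (
        resource_type == conditions[0]["equals"]
        and sku not in conditions[1]["notIn"]
    )
    return rule["then"]["effect"] if matches else "allow"

print(effect_for("Microsoft.Compute/virtualMachines", "Standard_E64s_v5"))  # deny
print(effect_for("Microsoft.Compute/virtualMachines", "Standard_B2s"))      # allow
```

    The point of the sketch is the timing: the decision is made from the deployment request itself, which is why deny mode prevents spend instead of documenting it.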
    You’ll also learn why timing matters, why compliance dashboards are not real-time operational control surfaces, and why poorly scoped policy assignments often create governance drift instead of actual enforcement.

    TURNING AZURE POLICY INTO A REAL-TIME BUDGET MACHINE

    This is where the operating model changes completely. Instead of observing overspend after the fact, organizations can encode financial intent directly into deployment rules. That means:

    - Blocking oversized VM families in development environments
    - Restricting premium disks outside production
    - Denying unsupported regions
    - Requiring ownership and cost-routing tags
    - Enforcing approved deployment patterns
    - Preventing unaccountable spend before it begins

    Mirko explains why budgets alone do not control architecture. Patterns do. A written budget only suggests that teams should spend less. Policy enforcement changes what the platform physically allows. Once financial standards become deployment constraints, cost discipline stops depending on memory, meetings, and follow-up behavior. It becomes part of the platform contract itself. This episode also explores how Azure Policy initiatives, management groups, reusable parameters, and layered assignment strategies help organizations scale FinOps enforcement consistently across large Azure estates.

    WHERE MOST POLICY-DRIVEN FINOPS PROGRAMS COLLAPSE

    One of the biggest mistakes organizations make is confusing observation with enforcement. Many teams believe they have governance simply because they collect non-compliance reports. But if engineers can still deploy the same expensive patterns tomorrow, nothing has actually changed.
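    The enforcement patterns above, denying disallowed regions and requiring ownership and cost-routing tags, can be sketched as a toy pre-deployment check. The allowed regions and tag names below are illustrative assumptions; in Azure these checks would live in deny-mode policy assignments evaluated by Azure Resource Manager, not in application code.

```python
# Toy pre-deployment check mirroring what deny-mode policy assignments
# enforce in the ARM request path: allowed regions plus mandatory
# ownership / cost-routing tags. Values are illustrative assumptions.
ALLOWED_LOCATIONS = {"westeurope", "northeurope"}
REQUIRED_TAGS = {"Owner", "CostCenter"}

def violations(resource: dict) -> list[str]:
    """Return the reasons a deployment request would be denied."""
    problems = []
    if resource.get("location") not in ALLOWED_LOCATIONS:
        problems.append(f"location '{resource.get('location')}' not allowed")
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    for tag in sorted(missing):
        problems.append(f"required tag '{tag}' is missing")
    return problems

request = {"location": "eastus", "tags": {"Owner": "data-team"}}
print(violations(request))
```

    An empty result means the request passes; anything else is a deny with an explicit, explainable reason, which is exactly what keeps enforcement from feeling like surprise.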
    This episode dives deep into the most common Azure Policy rollout failures, including:

    - Audit-forever governance models
    - Over-aggressive deny rollouts
    - Policy surprise during deployments
    - Poor landing zone defaults
    - Weak pipeline integration
    - Assignment sprawl
    - Unmanaged exemption growth
    - Broken developer experience
    - Misaligned enforcement timing

    Mirko explains why deny itself is not the problem. Surprise is. The episode also explores how governance programs unintentionally teach bypass behavior when exemptions become easier than fixing deployment templates. Over time, standards lose authority, and policy slowly turns into documentation theater instead of runtime control.

    THE ROLLOUT MODEL THAT PRESERVES ENGINEERING VELOCITY

    Strong governance should accelerate delivery, not slow it down. That only happens when rules are visible early, deployment paths are already compliant, and engineers understand the standards before they reach Azure Resource Manager. This episode outlines a practical rollout path that starts narrow and scales safely:

    - Audit with a defined end date
    - Repair templates and landing zones first
    - Align Infrastructure-as-Code modules
    - Add CI/CD pipeline validation
    - Enable deny in non-production environments first
    - Introduce controlled exception handling
    - Package controls into reusable initiatives

    Mirko also explains why vague freedom slows teams down more than clear boundaries do. Engineers move faster when regions, SKUs, tags, and approved patterns are predictable instead of constantly changing through tribal knowledge and late-stage governance surprises. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

    20 min
  2. Stop Paying for Nothing: Build an Automated Azure Cleanup Engine

    13 HRS AGO

    Stop Paying for Nothing: Build an Automated Azure Cleanup Engine

    Cloud platforms love to promise efficiency. Azure tells you to pay only for what you use. But most organizations are not paying for active usage anymore. They are paying for forgotten infrastructure, abandoned projects, stale environments, orphaned disks, idle virtual machines, and resources nobody remembers creating. The billing meter never stops simply because a sprint ended or a team moved on. That is the real problem with manual governance. Cleanup depends on memory, spare time, and someone eventually noticing the cost report after the spend has already landed. Finance sees rising cloud bills. Engineering starts hunting through old tickets. Teams debate ownership while unused resources continue burning budget in the background. The cloud makes spinning things up incredibly easy, but shutting things down safely and consistently is where most organizations fail. In this episode, Mirko Peters breaks down how to build an automated Azure cleanup engine that removes waste before it scales into chaos. Instead of relying on manual reviews and reactive cost reports, the model combines Azure Policy, intelligent tagging, Resource Graph, and Logic Apps to continuously identify resources that no longer deserve to exist. The result is a governance approach that moves from “someone should clean this up” to a repeatable lifecycle control system that actually works.

    WHY CLOUD WASTE NEVER REALLY GOES AWAY

    Most cloud waste is not caused by oversized virtual machines or premium database tiers. The deeper issue is lifecycle drift. Projects start quickly, teams deploy temporary resources, proof-of-concept environments get created, and then priorities change. The work disappears, but the infrastructure survives. Over time, these forgotten assets turn into background noise that quietly inflates cloud spend month after month. Weak tagging makes the problem even worse. When resources lack ownership, expiry dates, or cost center alignment, cloud bills lose context.
    Organizations can see the spend, but they cannot see the story behind it. Accountability becomes blurry, cleanup slows down, and manual governance creates endless delays that protect waste instead of eliminating it. This episode explains why governance fails when it sits outside the delivery process and why the solution is not more reports, but stronger lifecycle enforcement built directly into the platform.

    THE GOVERNANCE MODEL BEHIND THE CLEANUP ENGINE

    The architecture is intentionally simple:

    - Azure Policy becomes the law
    - Tags provide the operational context
    - Logic Apps execute the cleanup actions
    - Resource Graph continuously discovers lifecycle drift

    Mirko walks through how to structure governance correctly using management groups, resource group inheritance, audit-first rollout strategies, and progressive enforcement models that move from Audit to Modify and finally to Deny once the organization is ready. You will learn why governance systems often fail when policies, automation, and tagging become overly complex — and how keeping the model small and explainable dramatically improves adoption and trust across engineering teams.

    THE TAGGING STRATEGY THAT MAKES SAFE DELETION POSSIBLE

    Tags are not decorative metadata. They are the decision engine behind automated cleanup. This episode explores the exact tag model needed to support safe lifecycle automation, including:

    - Owner
    - Environment
    - CostCenter
    - ExpiryDate or TTL
    - CleanupAction
    - ExceptionReason

    You will hear why strong tagging transforms deletion from a risky guess into a controlled operational decision, and why inheritance through resource groups is far more scalable than forcing manual tagging on every deployment. Mirko also explains how poor taxonomy design destroys automation credibility, why free-text exception handling creates governance drift, and how to build a tagging system teams will actually follow instead of bypassing.
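    A minimal sketch of that decision engine, assuming tag semantics in line with the model above (ExpiryDate as an ISO date, CleanupAction as the post-expiry action, ExceptionReason as an explicit exemption), might look like this:

```python
from datetime import date

# Sketch of tag-driven cleanup decisions. The tag names follow the model
# described above; the exact semantics are assumptions for illustration.
def cleanup_decision(tags: dict, today: date) -> str:
    if "ExceptionReason" in tags:
        return "skip"                      # explicit, auditable exemption
    expiry = tags.get("ExpiryDate")
    if expiry is None:
        return "flag-untagged"             # no lifecycle context: never auto-delete
    if date.fromisoformat(expiry) > today:
        return "keep"                      # still inside its agreed lifetime
    return tags.get("CleanupAction", "quarantine")  # default to quarantine, not delete

print(cleanup_decision({"ExpiryDate": "2025-01-01"}, date(2025, 6, 1)))
```

    Note the two safety defaults: untagged resources are flagged rather than deleted, and an expired resource without an explicit CleanupAction is quarantined instead of destroyed.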
    BUILDING THE LOGIC APP CLEANUP FLOW

    The cleanup workflow itself lives inside Azure Logic Apps Consumption, keeping operational costs low while allowing the engine to scale dynamically as cleanup demand changes. The episode covers the complete orchestration model: Discovery through Azure Resource Graph, validation paths, dependency checks, lock handling, approval flows, deletion branching by resource type, retry logic, managed identities, audit logging, and dry-run safety modes. Instead of relying on one giant deletion script, the cleanup engine becomes a structured orchestration platform capable of making consistent lifecycle decisions at scale. You will also learn why:

    - Deletion order matters in Azure
    - Resource locks often break automation
    - Soft-delete changes expected behavior
    - Governance policies can accidentally block cleanup workflows
    - Quarantine flows are safer than immediate deletion in uncertain scenarios

    MEASURING WHETHER THE ENGINE IS ACTUALLY WORKING

    Savings alone are not enough. This episode introduces a better measurement model that tracks both reclaimed cost and prevented cost through lifecycle enforcement. Mirko explains why the true success metric is not just how much waste gets deleted, but how much unnecessary spend never appears in the first place. The discussion includes:

    - Effective Avoidance Rate
    - Tag quality metrics
    - Ownership clarity
    - Workflow success and skip analysis
    - Drift monitoring
    - Automation ROI versus manual governance effort

    Because the real goal is not cleaner reports. The real goal is building a platform where ownership stays visible, lifecycle drift stays low, and cloud waste stops scaling faster than the organization itself.

    IMPLEMENTATION PAYOFF

    The best way to begin is small. Start with one cleanup class like unattached disks or expired development resource groups. Prove the tagging model. Validate the workflow. Run in audit mode first. Build trust through evidence instead of fear.
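    The unattached-disks starting point can be sketched as a discovery query plus a dry-run planner. The KQL text uses real Azure Resource Graph names (the Resources table and properties.diskState); the planning function, its action labels, and the `locked` field on the sample results are illustrative assumptions.

```python
# Discovery step: an Azure Resource Graph (KQL) query for unattached
# managed disks, plus a dry-run guard like the one the Logic App would
# apply before any deletion branch runs.
UNATTACHED_DISKS_QUERY = """
Resources
| where type == 'microsoft.compute/disks'
| where properties.diskState == 'Unattached'
| project id, name, resourceGroup, tags
"""

def plan_actions(discovered: list[dict], dry_run: bool = True) -> list[tuple[str, str]]:
    """Turn discovery results into (resource_id, action) pairs.
    In dry-run mode every action is logged instead of executed."""
    actions = []
    for res in discovered:
        if res.get("locked"):
            actions.append((res["id"], "skip-locked"))  # resource locks break deletes
        else:
            actions.append((res["id"], "log-only" if dry_run else "delete"))
    return actions

sample = [{"id": "disk1", "locked": False}, {"id": "disk2", "locked": True}]
print(plan_actions(sample))
```

    Keeping dry-run as the default is the "evidence instead of fear" principle in code: the engine proves what it would have deleted before it is ever allowed to delete anything.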
This episode is ultimately about changing how organizations think about governance. Cloud waste is not a reporting problem. It is a lifecycle control problem. If you are responsible for Azure architecture, platform engineering, governance, FinOps, cloud operations, or enterprise automation, this episode gives you a practical blueprint for building automated cleanup systems that scale with the cloud instead of constantly chasing it. Follow Mirko Peters on LinkedIn for more deep dives into Azure architecture, governance automation, AI infrastructure, and modern cloud operating models. And if this episode helped you rethink cloud governance, leave a review and share it with your team. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

    21 min
  3. The Truth About Microsoft Security and Copilot Readiness with Åsne Holtklimpen [MVP/MCT]

    1 DAY AGO

    The Truth About Microsoft Security and Copilot Readiness with Åsne Holtklimpen [MVP/MCT]

    AI adoption is accelerating across every industry, but many organizations are still asking the same critical question: Are we truly ready for Microsoft Copilot? In this episode of the m365.fm podcast, Mirko Peters sits down with Microsoft MVP and MCT Åsne Holtklimpen to uncover the real truth about Microsoft Security, Copilot readiness, data governance, and why AI is exposing long-hidden problems inside Microsoft 365 environments.

    MICROSOFT COPILOT IS NOT CREATING SECURITY RISKS — IT IS REVEALING THEM

    This episode goes far beyond the usual AI buzzwords. Instead of focusing only on productivity gains, Åsne explains why organizations must first understand their data, secure their environments, and establish proper governance before fully embracing Microsoft Copilot, AI agents, and automation tools. From SharePoint oversharing to sensitivity labels, Purview, Conditional Access, and Zero Trust strategies, this conversation is packed with practical insights for IT leaders, Microsoft 365 administrators, CIOs, CISOs, consultants, and business decision-makers. Åsne shares real-world experiences from working with organizations across the Nordic region, helping companies prepare their Microsoft 365 tenants for AI adoption while balancing productivity with security and compliance. The discussion highlights one important reality: Copilot does not create security problems — it exposes the problems that already exist. Overexposed SharePoint sites, outdated files, broken permissions, forgotten Teams channels, and uncontrolled sharing become significantly more visible once AI tools can access organizational data at scale.

    HOW MICROSOFT PURVIEW, SENSITIVITY LABELS, AND DLP SUPPORT AI SECURITY

    The conversation also dives deep into why Microsoft Purview plays a crucial role in modern AI governance.
    Åsne explains how sensitive information types, sensitivity labels, Data Loss Prevention (DLP), Conditional Access policies, and SharePoint governance can help organizations secure their data before enabling Copilot across the enterprise. If your company is discussing Copilot readiness, AI governance, or Microsoft Security strategies, this episode provides an honest and practical roadmap for getting started the right way.

    THE HIDDEN DANGERS OF SHAREPOINT AND TEAMS OVERSHARING

    One of the biggest takeaways from this episode is that “Copilot readiness” is really a Microsoft 365 data governance challenge. Organizations that spent years oversharing files, migrating content during the pandemic, and creating uncontrolled collaboration environments are now facing the reality that AI can quickly surface sensitive or outdated information. Åsne explains why proper governance, classification, cleanup, and ownership are no longer optional — they are foundational requirements for secure AI adoption. The discussion also explores how forgotten Teams sites, unused SharePoint folders, and legacy collaboration environments create serious exposure risks. Many companies still have sharing links active from years ago, with no ownership or lifecycle strategy in place. AI tools can amplify these problems if organizations fail to clean up their Microsoft 365 environments before enabling Copilot.

    ZERO TRUST, CONDITIONAL ACCESS, AND MODERN MICROSOFT SECURITY STRATEGIES

    Mirko and Åsne discuss why Zero Trust security principles are more important than ever in the AI era. Organizations must move beyond traditional perimeter security and start protecting identities, devices, data, and access policies holistically. The episode highlights how Conditional Access policies combined with Purview sensitivity labels can significantly reduce the risk of unauthorized access to sensitive information.
    The conversation also covers why many organizations still struggle with basic security practices such as MFA enforcement, secure identity management, and endpoint governance. Without these foundations, deploying AI solutions like Microsoft Copilot can create unnecessary exposure and operational risks.

    HOW TO PREPARE EMPLOYEES FOR AI ADOPTION IN MICROSOFT 365

    Another major theme throughout the episode is user education and adoption. Employees must understand how AI tools interact with existing permissions, how data spreads across Teams and SharePoint, and why deleting outdated or unnecessary files is critical for maintaining a healthy AI-ready environment. Åsne explains why organizations must stop behaving like “data hoarders” and start implementing proper lifecycle management across Microsoft 365. The episode also explores how businesses should introduce Copilot gradually using pilot groups, governance strategies, and clear use cases instead of blindly enabling AI organization-wide. Proper training, communication, and executive sponsorship are essential for successful AI transformation initiatives.

    WHY EXECUTIVES, CISOS, AND IT LEADERS MUST TAKE AI GOVERNANCE SERIOUSLY

    Mirko and Åsne also discuss how leadership teams often underestimate the importance of governance because security projects do not immediately generate revenue. However, the long-term risks of non-compliance, data exposure, identity compromise, and AI misuse can create massive financial and reputational damage for organizations that fail to prepare. This episode offers valuable guidance for executives trying to balance innovation, risk management, and digital transformation in the age of AI. Åsne shares practical examples from customer projects where organizations believed they had no sensitive information stored in Microsoft 365, only to discover large amounts of exposed personal data through Microsoft Purview assessments.
    These real-world examples demonstrate why governance and visibility are essential before scaling AI initiatives.

    IN THIS EPISODE

    - Why Microsoft Copilot exposes existing security and governance problems
    - How Microsoft Purview supports AI governance and data protection
    - The role of sensitivity labels, DLP, and Conditional Access in Copilot readiness
    - Why SharePoint and Teams oversharing creates serious AI security risks
    - How organizations should prepare employees and leadership for AI adoption
    - The importance of data classification and Zero Trust strategies
    - Common mistakes companies make when rushing into AI and Copilot deployments
    - Why AI governance is ultimately a Microsoft 365 governance challenge

    THE FUTURE OF AI SECURITY, COMPLIANCE, AND MICROSOFT 365 GOVERNANCE

    The episode also explores the future of AI security and why organizations will need even stronger governance strategies over the next several years. As cybercriminals increasingly adopt AI technologies themselves, companies must evolve their security posture, improve governance maturity, and invest in secure Microsoft 365 foundations to stay protected. Åsne explains that AI will not eliminate security challenges — in many ways, it may intensify them. This makes governance, compliance, classification, and identity protection more important than ever before for organizations operating in modern cloud environments.

    WHY THIS EPISODE MATTERS FOR MICROSOFT 365 PROFESSIONALS

    If your organization is planning to deploy Microsoft Copilot, Copilot Studio, AI agents, or any generative AI solution within Microsoft 365, this episode is essential listening. It delivers practical guidance without the marketing hype and provides a realistic perspective on what secure AI adoption actually requires.
    Whether you are a Microsoft 365 administrator, security architect, IT consultant, compliance officer, or business leader, you will gain actionable insights into:

    - AI governance best practices
    - Microsoft Security and Purview strategies
    - Copilot readiness assessments
    - Data classification and protection
    - Secure collaboration in SharePoint and Teams
    - Balancing productivity and compliance in the AI era

    CONNECT WITH ÅSNE HOLTKLIMPEN

    Åsne Holtklimpen is a Microsoft MVP and Microsoft Certified Trainer (MCT) specializing in Microsoft 365, Microsoft Security, Purview, governance, compliance, and Copilot readiness. She works with organizations across the Nordic region to help them securely adopt AI technologies while building strong governance foundations. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

    47 min
  4. Building and deploying production-grade AI agents with Microsoft Foundry with Edgar McOchieng [MVP]

    2 DAYS AGO

    Building and deploying production-grade AI agents with Microsoft Foundry with Edgar McOchieng [MVP]

    In this deep-dive episode of the M365 FM podcast, Mirko Peters welcomes Edgar McOchieng for an extensive conversation about enterprise AI architecture, Microsoft Foundry, scalable AI agents, and the real-world challenges organizations face when deploying production-grade AI systems. Edgar shares his journey from discovering Microsoft Azure during university in Kenya to becoming a Microsoft MVP focused on Microsoft Foundry, Business Applications, data engineering, and AI-driven enterprise solutions. He also talks about his passion for mentorship and community building through “Ochieng Labs,” where students and early-career developers gain hands-on experience with Power Platform, Microsoft Fabric, Copilot Studio, and modern AI engineering practices.

    BUILDING REAL-WORLD ENTERPRISE AI APPLICATIONS

    The conversation explores how organizations can move beyond AI experimentation and start building reliable, secure, and scalable AI applications that deliver measurable business value. Edgar explains how his team created an enterprise AI platform capable of connecting to SharePoint, OneDrive, Outlook, Microsoft Graph, AWS, and Google Cloud environments to help employees retrieve organizational knowledge faster and reduce data silos across departments. Listeners will learn how Retrieval-Augmented Generation (RAG), vector search, semantic indexing, embeddings, and enterprise search architectures play a critical role in modern AI systems. Edgar breaks down how AI applications can access live organizational knowledge instead of relying solely on static training data, helping businesses build more accurate and context-aware AI assistants.

    HYBRID AI ARCHITECTURES AND AI COST OPTIMIZATION

    A major focus of this episode is enterprise AI cost management and hybrid AI infrastructure design. Edgar openly discusses the challenges organizations face with rising AI costs caused by heavy usage of premium cloud-based large language models such as Anthropic Claude and GPT services.
    He explains how his team introduced a hybrid orchestration model that intelligently switches between local small language models and cloud-hosted LLMs depending on the complexity of the task. This hybrid AI approach dramatically reduced operational expenses while maintaining scalability and performance. The discussion also covers rate limiting, token management, AI workload monitoring, hosted agents, orchestration layers, and why enterprises increasingly need ownership and control over their AI infrastructure.

    MICROSOFT FOUNDRY, COPILOT STUDIO, AND AI DEVELOPMENT WORKFLOWS

    Edgar describes Microsoft Foundry as a powerful “model playground” where developers can experiment with multiple AI models, create hosted agents, build orchestration pipelines, evaluate model safety, apply guardrails, and integrate enterprise systems using MCP connectors. He also explains the differences between Microsoft 365 Copilot, Copilot Studio, and Microsoft Foundry — helping listeners understand when each platform is the right choice depending on customization requirements and technical maturity. The episode also dives into prompt engineering, AI workflows, GitHub Copilot, VS Code integrations, CI/CD pipelines with GitHub Actions, evaluation pipelines, hallucination testing, and the growing importance of developer tooling in AI application development. Edgar shares practical insights into how AI engineering teams structure, test, deploy, and continuously improve enterprise AI systems in production environments.

    AI GOVERNANCE, SECURITY, AND ENTERPRISE MONITORING

    Another key topic throughout the conversation is AI governance, observability, security, and responsible AI implementation. Edgar explains why governance and monitoring are becoming more important than simply selecting the “best” AI model. Organizations need visibility into user behavior, AI usage patterns, permissions, hallucination risks, security controls, and compliance requirements.
    The discussion also covers multi-tenant enterprise AI architectures, tenant isolation, data partitioning, hosted AI agents, containerization, Kubernetes integrations, Power Platform connectivity, Logic Apps orchestration, and enterprise-grade monitoring systems designed to support scalable AI workloads.

    THE FUTURE OF ENTERPRISE AI

    Toward the end of the episode, Mirko and Edgar discuss several hot topics shaping the future of enterprise AI, including small language models (SLMs), prompt engineering, orchestration-driven AI workflows, fine-tuning versus data grounding, and the long-term sustainability of relying entirely on external AI providers. Edgar argues that organizations increasingly need flexibility, transparency, governance, and infrastructure ownership to remain competitive as AI adoption continues to accelerate. This episode is packed with practical insights for enterprise architects, AI engineers, cloud developers, CTOs, IT leaders, Microsoft professionals, startup founders, and anyone interested in understanding how Microsoft Foundry and Azure AI technologies are reshaping modern enterprise software development and intelligent automation.
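    As a rough illustration of the hybrid orchestration idea discussed in this episode, a dispatcher that routes cheap requests to a local small model and expensive ones to a hosted LLM could be as simple as the sketch below. The word-count heuristic and backend names are assumptions for illustration; a production router would score complexity with a classifier and real cost data.

```python
# Minimal sketch of a hybrid model router. The complexity heuristic and
# the backend names ("local-slm", "hosted-llm") are hypothetical.
def route(prompt: str, context_docs: int = 0) -> str:
    # Crude proxy for task complexity: prompt length plus grounding context.
    complexity = len(prompt.split()) + 50 * context_docs
    return "hosted-llm" if complexity > 200 else "local-slm"

print(route("Summarize this short note"))                    # local-slm
print(route("Draft a migration plan ...", context_docs=8))   # hosted-llm
```

    The cost win comes from the default path: most day-to-day requests stay on the cheap local model, and only genuinely heavy tasks pay premium per-token prices.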
    IN THIS EPISODE

    - Building production-grade AI agents with Microsoft Foundry
    - Designing scalable hybrid AI architectures for enterprises
    - Implementing AI governance, observability, and monitoring
    - Reducing enterprise AI costs using local and hosted models
    - Retrieval-Augmented Generation (RAG) and vector search
    - Hosted AI agents, orchestration layers, and prompt flows
    - Enterprise integrations with Microsoft Graph, SharePoint, and Power Platform
    - Multi-tenant AI architectures and secure data isolation
    - AI evaluation pipelines, guardrails, and hallucination prevention
    - CI/CD strategies for enterprise AI deployments

    KEY TECHNOLOGIES DISCUSSED

    - Microsoft Foundry
    - Azure AI Services
    - Microsoft 365 Copilot
    - Copilot Studio
    - Microsoft Fabric
    - Power Platform
    - GitHub Copilot
    - MCP Connectors
    - Vector Databases
    - Retrieval-Augmented Generation (RAG)
    - Kubernetes
    - Logic Apps
    - Azure Hosted Agents

    WHO SHOULD LISTEN

    This episode is highly recommended for enterprise architects, AI engineers, Microsoft consultants, cloud developers, CTOs, CIOs, IT decision-makers, Power Platform professionals, startup founders, security teams, and technology leaders looking to understand how enterprise AI systems can be designed, governed, scaled, and optimized using Microsoft’s modern AI ecosystem. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

    1hr 2min
  5. Is Your Microservice Architecture a Ticking Time Bomb for Speed?

    2 DAYS AGO

    Is Your Microservice Architecture a Ticking Time Bomb for Speed?

    You adopted microservices because you wanted speed. Faster deployments. Faster teams. Faster product delivery. But somewhere along the journey, a simple feature stopped feeling simple. What used to be one local code change now requires cross-team coordination, API reviews, rollout sequencing, schema checks, tracing updates, retry planning, and governance approvals. The old bureaucracy never disappeared. It simply moved from the org chart directly into the runtime. And increasingly, organizations are realizing the tradeoff is no longer worth it. Recent industry research shows that forty-two percent of organizations are actively consolidating microservices back into larger deployment units. That statistic alone signals something important: many teams are discovering that the operational and coordination overhead of distributed systems has started consuming the very delivery speed those systems were supposed to create. In this episode, we unpack the deeper model behind that slowdown. This is not another simplistic “monolith versus microservices” debate. This conversation focuses on how distributed architectures quietly create runtime friction, organizational drag, and delivery bottlenecks inside modern .NET environments — especially for teams that adopted service boundaries long before they truly needed them. Because once the architecture begins fragmenting the flow of change, the cost starts showing up everywhere.

    THE ARCHITECTURAL ILLUSION OF PROGRESS

    Microservices were sold as autonomy. The promise sounded almost perfect: split systems into independent services, give teams ownership, scale components independently, and deploy faster without coordination bottlenecks. On paper, the model looked mature. But the architecture carried assumptions many organizations skipped right past.
Microservices assume:
- Stable domain boundaries
- Mature platform engineering
- Strong DevOps capabilities
- Operational readiness
- Long-term team ownership
- Reliable observability
- Clear contract discipline

In many organizations, none of those conditions existed yet. And that is where the model starts fighting the organization itself. This episode explores why smaller and mid-sized engineering organizations often feel the pain first. Research consistently shows that for teams under roughly twenty to thirty engineers, coordination overhead frequently outweighs the scaling advantages of physical service separation. Instead of autonomy, teams inherit dependency chains with extra operational layers attached to every business change.

We break down how:
- One feature update becomes multiple synchronized deployments
- Simple business logic turns into distributed coordination
- API ownership becomes a negotiation process
- Service boundaries create organizational silos
- “Independent deployment” often increases release friction
- Architectural complexity gets mistaken for engineering maturity

Because adding more boxes to a diagram does not automatically create speed. Sometimes it simply creates more places where work can stop.

THE HIDDEN TAX OF DISTRIBUTED COMPLEXITY
One of the most deceptive things about microservices is that every service can appear individually clean while the production system becomes massively heavier underneath. This episode dives into the hidden runtime tax of distributed systems inside modern .NET environments. Inside a single process, code communicates at memory speed. Across service boundaries, that same interaction becomes:
- Network traffic
- Serialization
- Authentication
- Timeout handling
- Retry logic
- Correlation tracking
- Distributed tracing
- Partial failure management

And those mechanics introduce costs that compound quickly.
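A back-of-the-envelope sketch of that compounding, in Python for brevity (every number here is an assumption for illustration, not a measurement): the same multi-step operation pays almost nothing for in-process calls, but tens of milliseconds per sequential network hop.

```python
# Illustrative only: compare one logical operation executed as in-process
# calls vs. as a chain of sequential cross-service calls. All timings
# are assumed round numbers, not benchmarks.

IN_PROCESS_CALL_MS = 0.001   # memory-speed method dispatch (assumed)
PER_HOP_OVERHEAD_MS = 20.0   # network + serialization + auth per hop (assumed)
BUSINESS_LOGIC_MS = 5.0      # the actual work being done (assumed)

def monolith_latency_ms(steps: int) -> float:
    """All steps run inside a single process."""
    return BUSINESS_LOGIC_MS + steps * IN_PROCESS_CALL_MS

def distributed_latency_ms(steps: int) -> float:
    """Each step is a sequential call across a service boundary."""
    return BUSINESS_LOGIC_MS + steps * PER_HOP_OVERHEAD_MS

for steps in (1, 3, 5):
    print(f"{steps} steps: in-process {monolith_latency_ms(steps):.3f} ms, "
          f"distributed {distributed_latency_ms(steps):.1f} ms")
```

At these assumed numbers, five hops turn a roughly 5 ms operation into roughly 105 ms before any retries, queueing, or tracing overhead, which is exactly the compounding the episode describes.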
We explore how a simple business transaction can quietly transform into:
- Multiple outbound HTTP or gRPC calls
- Cascading latency chains
- Retry storms
- Expanded observability overhead
- Increased debugging complexity
- More cloud infrastructure consumption

Because the real system is not just the services. It is everything between them. This episode also examines the operational impact of observability and service mesh adoption in .NET ecosystems. Distributed tracing, telemetry, mTLS enforcement, and sidecar proxies absolutely provide value — but they also introduce measurable overhead in memory usage, latency, throughput, and operational maintenance.

We discuss:
- Istio vs Linkerd operational tradeoffs
- Sidecar memory overhead in Kubernetes clusters
- Observability performance costs
- Instrumentation latency impact
- Why distributed debugging consumes dramatically more engineering time
- How platform complexity becomes a staffing problem

Small teams feel this pressure first because they rarely have dedicated platform engineering departments to absorb the operational load. The result is that developers stop spending most of their time building products and start spending it operating distributed infrastructure.

HOW API CONTRACTS TURN INTO DIGITAL RED TAPE
Once runtime complexity grows, the next slowdown appears in team coordination. API contracts are meant to create trust between services, but in many organizations, those contracts slowly evolve into rigid borders that require negotiation before every change. Something as small as renaming a single field can trigger:
- Consumer coordination
- Schema reviews
- Versioning debates
- Approval workflows
- Rollout sequencing
- Extended backward compatibility maintenance

The technical change may take minutes. The organizational choreography around it can consume days. This episode explores how API governance frequently drifts into digital bureaucracy, especially when organizations lack strong automated contract validation pipelines.
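One way such an automated validation gate can work, sketched in minimal Python (the schemas and field names are invented for illustration): diff the old and new versions of a provider's response schema in CI and fail the build on changes that break consumers.

```python
# Hypothetical contract check: diff two versions of a response schema.
# Removals and type changes break existing consumers; additions do not.

def breaking_changes(old: dict, new: dict) -> list:
    problems = []
    for field, ftype in old.items():
        if field not in new:
            problems.append(f"removed field: {field}")
        elif new[field] != ftype:
            problems.append(f"type change: {field} {ftype} -> {new[field]}")
    return problems

v1 = {"customer_id": "string", "total": "number"}
v2 = {"customer_id": "string", "total_amount": "number"}  # field renamed

print(breaking_changes(v1, v2))  # a rename looks like a removal to consumers
```

In a CI pipeline, a non-empty result would fail the build, replacing the manual schema-review meetings the episode criticizes with an automated gate.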
We discuss:
- Why low contract testing adoption creates fear
- How brittle API governance slows delivery
- Why teams duplicate endpoints instead of evolving interfaces
- The dangers of over-versioning
- Governance drift inside enterprise architecture
- Manual review bottlenecks
- CI-driven contract enforcement
- How AI coding tools accelerate coding but not organizational validation

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

    20 min
  6. Why Your Microservices Are Turning the Cloud Toxic

    3 DAYS AGO

    Why Your Microservices Are Turning the Cloud Toxic

One slow dependency can quietly poison an entire cloud platform long before any dashboard shows a major outage. The systems still appear healthy. CPU looks normal. Containers remain online. Health checks keep passing. Yet underneath the surface, capacity is already collapsing because the architecture was built on a dangerous assumption: every remote call will return quickly enough to keep the platform moving. That assumption breaks the moment real pressure arrives. In this episode, we dive deep into the mechanics behind cascading latency failures in modern .NET microservice environments and explain why “slow” is often more dangerous than “down.” Most teams prepare for crashes. Very few prepare for toxic waiting states that silently spread through APIs, queues, databases, gateways, and worker services until the entire platform grinds itself into exhaustion. This is not another discussion about generic retries or simplistic cloud scaling advice. This episode is about failure containment, resource protection, and architectural resilience under real-world pressure. Because the real problem isn’t usually the first failed request. It’s everything that gets trapped waiting behind it.

SILENT LATENCY IS THE REAL CLOUD KILLER
Modern distributed systems are incredibly good at hiding their own deterioration. A dependency becomes slower by a few hundred milliseconds. Then a few seconds. Requests begin stacking up quietly inside ASP.NET pipelines while outbound HTTP calls hold sockets open longer and longer. Connection pools start draining. Queues begin filling. Upstream APIs wait longer to respond while downstream services struggle to recover. Nothing appears catastrophic at first. That’s exactly why latency spreads so effectively. Unlike a hard outage, slow degradation gets admitted into the system and multiplied across every dependent service. A failed call is rejected immediately. A slow call infects everything upstream.
This episode explores how those waiting states become invisible capacity killers inside .NET systems, especially in high-traffic cloud architectures where services depend heavily on identity providers, APIs, databases, third-party platforms, and shared infrastructure.

We break down:
- Why slow dependencies are more dangerous than dead ones
- How async code still consumes valuable platform resources
- Why healthy-looking dashboards often hide collapsing throughput
- How queue growth becomes a symptom of delayed completion rates
- Why adding more replicas frequently makes the problem worse

Because scaling a waiting room doesn’t solve the dependency poisoning the system underneath it.

WHY RETRIES OFTEN MAKE OUTAGES WORSE
Retries feel safe. In small systems, they usually are. But inside distributed cloud environments, retries can quickly become synchronized load amplification attacks against already struggling dependencies. This episode explains why retry logic changes completely once systems operate at scale. A single failed request can multiply into waves of duplicate traffic as every service instance follows the exact same retry behavior at the exact same time. Inside the .NET ecosystem, resilience frameworks make retries deceptively easy to implement. Developers add policies with good intentions, believing they’re improving stability. But poorly designed retry strategies frequently extend outages instead of containing them.

We explore how:
- Long timeout windows increase pressure across the platform
- Retried requests consume even more thread time and socket capacity
- Retry storms create artificial traffic spikes
- Overloaded services become trapped in endless recovery loops
- Broad retry policies generate massive cloud waste and instability

This episode reframes retries for what they really are under pressure: Load generation. Not protection.
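To make the synchronization problem concrete, here is a language-agnostic sketch in Python (client counts and delays are invented for illustration): fixed retry schedules put every failing client back on the struggling dependency at the same instant, while exponential backoff with full jitter scatters those retries across time.

```python
# Illustrative sketch, not a real resilience library: compare fixed
# retry schedules (synchronized spikes) with exponential backoff plus
# full jitter (spread-out, capped delays).
import random

def fixed_delays(attempts, delay=1.0):
    """Every client retries at t = 1s, 2s, 3s... simultaneously."""
    return [delay * (i + 1) for i in range(attempts)]

def jittered_backoff(attempts, base=1.0, cap=30.0, rng=None):
    """Full jitter: each delay is uniform in [0, min(cap, base * 2^i)]."""
    rng = rng or random.Random()
    return [rng.uniform(0, min(cap, base * (2 ** i))) for i in range(attempts)]

# 100 clients with fixed delays all hit the dependency again at t = 1.0:
schedules = [fixed_delays(3) for _ in range(100)]
print("distinct first-retry instants (fixed):", len({s[0] for s in schedules}))

# With full jitter, first retries scatter across the interval instead:
jittered = [jittered_backoff(3, rng=random.Random(seed)) for seed in range(100)]
print("distinct first-retry instants (jittered):", len({s[0] for s in jittered}))
```

The fixed schedule produces exactly one retry instant shared by all clients, which is the artificial traffic spike the episode warns about; the jittered schedule spreads the same load out.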
You’ll also learn when retries do make sense, including how to safely handle transient faults, temporary network interruptions, and idempotent operations without accidentally creating synchronized platform-wide self-harm.

BULKHEAD ISOLATION: STOPPING ONE FAILURE FROM TAKING DOWN EVERYTHING
One of the most important concepts covered in this episode is bulkhead isolation. Most cloud teams believe their services are isolated because they run in separate containers or repositories. But if those services still share outbound connections, execution pools, database bottlenecks, or queue consumers, then the failure path remains shared. And shared pools become toxic during latency events. This episode explains how bulkhead isolation creates hard architectural boundaries that prevent one failing dependency from stealing resources from unrelated workloads.

We discuss practical .NET resilience design strategies including:
- Per-dependency concurrency limits
- Dedicated outbound HTTP client policies
- Isolated queue consumers
- Separate execution paths for critical workloads
- Reserved capacity for revenue-generating flows
- Tenant-level isolation strategies
- Business-priority-driven workload separation

Because under pressure, equal access to shared resources becomes one of the fastest ways to collapse an entire platform. You’ll hear real-world examples of how reporting systems, background synchronization jobs, and low-priority workloads unintentionally starve checkout systems, identity flows, and customer-facing APIs simply because nobody created boundaries between them. This is where resilience stops being a technical optimization and becomes a business decision.

CIRCUIT BREAKERS AND CONTROLLED FAILURE
Once failures start spreading, the platform needs a way to stop panic from multiplying. That’s where circuit breakers become essential. This episode breaks down how circuit breakers act as real-time traffic control systems for unstable dependencies.
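The bulkhead idea described above, per-dependency concurrency limits that fail fast when a pool is exhausted, can be sketched with a simple semaphore (illustrative Python; the dependency names and pool sizes are invented):

```python
# Bulkhead sketch: each downstream dependency gets its own small
# concurrency pool, and calls are rejected immediately when the pool
# is full, so one slow dependency cannot drain shared capacity.
import threading

class Bulkhead:
    def __init__(self, name, max_concurrent):
        self.name = name
        self._slots = threading.Semaphore(max_concurrent)

    def call(self, fn, *args):
        # Non-blocking acquire: fail fast when the pool is full rather
        # than letting callers pile up in a shared waiting state.
        if not self._slots.acquire(blocking=False):
            raise RuntimeError(f"bulkhead '{self.name}' full, rejecting call")
        try:
            return fn(*args)
        finally:
            self._slots.release()

payments = Bulkhead("payments-api", max_concurrent=10)   # critical flow
reporting = Bulkhead("reporting-db", max_concurrent=2)   # best-effort flow

print(payments.call(lambda order_total: order_total * 2, 21))  # prints 42
```

A slow reporting query can exhaust its own two slots and start rejecting, while the payments pool stays untouched, which is the hard boundary between revenue-generating and best-effort work the episode describes.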
Instead of allowing every request to independently discover failure through expensive timeouts, breakers create shared system memory that quickly stops doomed traffic before it spreads resource exhaustion upstream.

We cover:
- Closed, open, and half-open circuit states
- Why fast rejection is healthier than slow waiting
- How breaker thresholds influence platform behavior
- The dangers of generic one-size-fits-all resilience policies
- Proper timeout and breaker composition in .NET
- Dependency-specific resilience tuning strategies
- Why upstream systems must cooperate with degraded modes

You’ll also learn why many teams accidentally sabotage their own circuit breaker strategies by continuing to aggressively feed traffic into failing dependencies from queues, schedulers, and upstream APIs. A breaker alone cannot save a platform that refuses to acknowledge degraded conditions. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.
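The three breaker states can be sketched in a few dozen lines (illustrative Python; the thresholds and cooldowns are placeholder values, not tuning advice):

```python
# Minimal circuit breaker sketch: closed passes traffic, open rejects
# instantly, and half-open lets one probe through after a cooldown.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, cooldown_seconds=10.0):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.consecutive_failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, fn):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.cooldown_seconds:
                self.state = "half-open"   # allow a single probe request
            else:
                # Fast rejection: far cheaper than letting the call
                # hold a thread or socket until it times out.
                raise RuntimeError("circuit open, rejecting immediately")
        try:
            result = fn()
        except Exception:
            self.consecutive_failures += 1
            if self.state == "half-open" or \
                    self.consecutive_failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()
            raise
        self.consecutive_failures = 0
        self.state = "closed"              # a success closes the circuit
        return result
```

The shared `state` is the "system memory" mentioned above: once one request discovers the dependency is down, every later request is rejected in microseconds instead of each one rediscovering the failure through its own timeout.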

    21 min
  7. From Figma design to the PowerApps with Lukas Pavelka [MVP]

    3 DAYS AGO

    From Figma design to the PowerApps with Lukas Pavelka [MVP]

Development is changing faster than most teams can process. A few years ago, building enterprise applications meant long development cycles, hand-coded UI layers, endless testing loops, and massive backlogs between design teams and developers. Now AI agents can write code, generate layouts, repair syntax, optimize workflows, and even help translate entire applications into more than one hundred languages. But that shift creates a new question: If AI can generate applications faster than ever before, what actually separates good development from dangerous development? In this episode of the M365 FM Podcast, Mirko Peters sits down with Microsoft MVP Lukas Pavelka to explore the intersection of Figma, PowerApps, AI-assisted coding, Power BI, and the rapidly changing future of enterprise application development. The conversation goes far beyond low-code hype. This episode explores what really happens when AI agents enter the development lifecycle, how Figma is evolving into a complete ecosystem, why governance and security still matter deeply in AI-driven coding, and how developers can use tools like Copilot, Claude, GitHub Copilot, and vibe coding without losing control of their own codebase.

FROM JAVA DEVELOPER TO FIGMA AND POWERAPPS CREATOR
Lukas Pavelka started as a traditional Java developer more than twenty years ago before eventually transitioning into Power Platform development, automation, and AI-assisted application design. The turning point came through design. After discovering Figma through his wife’s design work, Lukas realized there was a major gap between beautiful design systems and practical PowerApps development workflows. That led to the creation of his PowerApps for Figma plugin, designed to help Power Platform developers move much faster between design and implementation.
Today, Lukas develops multiple products focused on bridging design, automation, AI, and low-code development, including:
- PowerApps for Figma
- Power BI for Figma
- My Bot Admin for Telegram automation

The discussion explores how these products evolved from internal productivity ideas into community-focused tools aimed at helping developers, makers, and Power Platform teams reduce repetitive work and improve enterprise UI quality.

WHY FIGMA IS BECOMING MUCH BIGGER THAN DESIGN
One of the most fascinating parts of this episode is the discussion around Figma’s evolution. Lukas explains why Figma is no longer just a design platform. It is becoming a complete ecosystem that increasingly overlaps with development, prototyping, presentations, AI-assisted workflows, and enterprise application delivery.

The conversation covers:
- Figma design systems
- Reusable component libraries
- PowerApps UI translation
- YAML export
- Component variants
- Multi-language enterprise apps
- Design consistency across projects

Lukas also explains how his plugins allow Power Platform developers to create scalable design systems that can be reused across enterprise projects while dramatically reducing repetitive UI work. The discussion highlights a major shift happening inside enterprise development: Good UX is no longer optional. Organizations increasingly realize that internal business applications must feel modern, intuitive, and scalable if they want employees to actually use them effectively.

AI, VIBE CODING, AND THE REALITY OF MODERN DEVELOPMENT
This episode dives deeply into AI-assisted development and the rise of “vibe coding.” Lukas shares practical experiences using GitHub Copilot, Claude, Visual Studio integrations, AI agents, and prompt-based coding workflows to accelerate development. But the conversation stays grounded in reality. One of the strongest themes throughout the episode is that AI coding still requires strong technical understanding.
Lukas explains why developers cannot simply rely on AI-generated code without understanding architecture, debugging, security, versioning, and governance.

The discussion explores:
- Prompt engineering for developers
- AI-assisted debugging
- Model selection strategies
- Token cost management
- Versioning challenges
- Secure coding practices
- MCP and Model Context Protocol
- AI coding limitations

A major insight from the episode is that AI coding works best when prompts stay highly focused and scoped to one specific task at a time. Broader prompts often cause AI agents to rewrite working code unnecessarily or introduce instability into existing projects. The episode also explores how AI development changes the role of the developer itself. Instead of writing every line manually, developers increasingly supervise, guide, validate, secure, and orchestrate AI-generated output.

THE BUSINESS REALITY OF AI DEVELOPMENT
The conversation also moves into the economics behind AI-assisted development. Lukas and Mirko discuss token costs, cloud compute limitations, GPU demand, electricity consumption, and the growing operational cost of running large-scale AI systems.

The episode examines:
- Claude pricing
- GitHub Copilot limits
- AI token consumption
- GPU infrastructure
- Electricity challenges
- AI model specialization
- Cloud economics

One particularly interesting part of the discussion focuses on how different AI models perform better for different development tasks. Some excel at frontend design work; others at deeper reasoning, debugging, or enterprise coding scenarios. This creates a new challenge for developers: Understanding not only how to code, but also which AI model to use for which type of work.

SECURITY, GOVERNANCE, AND THE RISKS OF AI CODING
As AI-generated development accelerates, governance becomes increasingly important. Lukas explains why developers still need to understand exactly what their code is doing, even when AI agents generate large portions of it automatically.
The episode explores the growing risks around:
- Security vulnerabilities
- Poor governance
- Exposed repositories
- Unsafe prompts
- Weak versioning practices
- AI-generated technical debt

One of the strongest warnings throughout the episode is simple: AI can accelerate bad development just as easily as good development. Without proper architecture, security awareness, governance structures, and development knowledge, organizations risk creating large amounts of insecure code much faster than before.

WHAT COMES NEXT FOR AI DEVELOPMENT
The future discussed in this episode moves beyond simple text prompts. Lukas explains why voice-driven development, AI skills, reusable agent capabilities, and contextual AI orchestration are becoming the next major wave in application delivery.

The discussion explores how future AI systems may:
- Understand spoken instructions
- Build applications conversationally
- Reuse trained development skills
- Orchestrate workflows automatically
- Connect through MCP servers
- Generate full enterprise UI systems

At the same time, both Lukas and Mirko emphasize that strong development fundamentals remain essential. The tools are changing rapidly. But architecture, security, UX thinking, governance, and operational understanding still matter most. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

    36 min
  8. The Invisible Employee: Is Your Next Hire Actually an AI Agent

    3 DAYS AGO

    The Invisible Employee: Is Your Next Hire Actually an AI Agent

Ticketing looks clean on paper. You get the numbers. The queues. The dashboards. But the real cost of support usually starts long before a ticket ever appears. An employee loses access to a file. A Teams meeting fails seconds before it starts. A sharing link breaks. Someone retries the same action over and over, asks a coworker for help, or wastes twenty minutes trying to fix something manually before they finally give up and open a support request. That hidden productivity loss rarely shows up in queue reports. In this episode of the M365 FM Podcast, Mirko Peters explores why the traditional help desk model is breaking under the scale and complexity of modern Microsoft 365 environments and what replaces it. The future of support is not faster triage. It is autonomous, invisible, policy-driven intervention that happens before users even realize they need help.

THE DEATH OF THE TICKET
The old support model still follows the same operational pattern. Something breaks, a user notices it, a ticket gets created, and IT begins translating the issue into categories, priorities, queues, and escalation paths. Then the waiting begins. That process feels normal because organizations have operated this way for decades, but the ticket itself is not the service. The ticket is evidence that support arrived too late. By the time the incident reaches the queue, the employee has already lost context, momentum, and productive work time. Modern Microsoft 365 estates are simply too dynamic for manual triage to scale efficiently anymore. Organizations now operate across Teams, SharePoint, Exchange, Intune, Entra, Defender, Copilot, hybrid devices, and Conditional Access policies simultaneously. The number of edge-case combinations grows faster than human-driven routing models can realistically absorb. Most organizations respond by adding another portal, another chatbot, or another workflow layer. But in reality, that usually increases friction instead of removing it.
This episode breaks down why reactive ITIL-style operations are becoming structural bottlenecks and why most support labor still gets trapped inside repetitive routing, categorization, and clarification work instead of prevention and resilience engineering.

THE INVISIBLE EMPLOYEE MODEL
So what actually replaces the ticket? Not another chatbot. Not another AI assistant waiting for prompts. The invisible employee model introduces autonomous operational agents embedded directly inside Microsoft 365 workflows. These agents behave more like digital workers than simple software features. They operate with their own identity, defined permissions, governance boundaries, operational memory, and approval rules. Instead of waiting for users to describe problems manually, the invisible employee continuously monitors the environment for friction and operational drift.

It can detect:
- Sign-in failures
- License mismatches
- Sharing issues
- Device compliance drift

Then it acts safely inside policy before the issue escalates into a formal support event. Support no longer begins inside a portal. It begins exactly where the interruption happens, whether that is Teams, Outlook, SharePoint, or Entra. This episode explains why support is shifting from reactive ticket handling into proactive operational correction embedded directly inside daily work.

THE ARCHITECTURE OF PREEMPTION
Mirko breaks down how autonomous support actually works inside Microsoft 365. The model follows a simple operational chain: event, reasoning, orchestration, and verification. Microsoft 365 already generates massive amounts of telemetry through Entra, Intune, Defender, Teams, SharePoint, Exchange, and Microsoft Graph. The real transformation happens when agents can interpret those signals, compare current state against desired state, trigger approved remediation, and verify outcomes automatically.
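That chain of detecting drift, applying an approved fix, verifying, and escalating can be sketched as a small loop. This is illustrative Python with a toy license flag standing in for real tenant state; it does not call Microsoft Graph or any actual API.

```python
# Toy remediation loop: compare current vs. desired state, apply an
# approved fix, re-check, and escalate to a human if drift persists.

def remediation_loop(get_state, desired, remediate, max_attempts=3):
    for _ in range(max_attempts):
        if get_state() == desired:
            return "healthy"           # verified: state matches policy
        remediate()                    # policy-bounded, approved action
    return "escalate" if get_state() != desired else "healthy"

# Hypothetical scenario: a user record is missing a license assignment.
account = {"license": None}
result = remediation_loop(
    get_state=lambda: account["license"],
    desired="E5",
    remediate=lambda: account.update(license="E5"),
)
print(result)  # the fix converges on the second check, so: healthy
```

The re-check after each remediation attempt is the point: the loop verifies outcomes rather than assuming the fix worked, and hands off to a human only when drift survives the allowed attempts.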
The discussion explores real-world scenarios like access remediation, Conditional Access enforcement, meeting recovery, SharePoint sharing failures, and license mismatch correction. A critical point throughout the episode is that autonomous systems cannot rely on isolated AI responses. They require continuous feedback loops that detect issues, test conditions, apply fixes, validate outcomes, retry safely, and escalate when necessary. That feedback-driven architecture is what separates operational AI from simple chatbot automation.

GOVERNANCE, TRUST, AND AGENT IDENTITY
Once support starts acting autonomously, governance becomes the most important part of the system. Every support agent must be treated like a real operational worker inside the tenant. That means agents require Entra identities, defined ownership, lifecycle governance, least-privilege access, approval boundaries, and complete auditability. This episode explores why organizations cannot scale autonomous support safely if they do not fully understand which agents already exist in their environment and what those agents are allowed to do.

The conversation also examines:
- Human approval paths
- Runtime monitoring
- Rollback logic
- Operational accountability

The key message is clear. Autonomous support only works when governance, trust, visibility, and operational control scale together.

THE NEW ROI OF INVISIBLE SUPPORT
Traditional support metrics focus on visible activity like tickets closed, calls handled, and SLA performance. But invisible support creates value through prevented interruption. The biggest operational gains come from reduced context switching, faster restoration, fewer escalations, lower manual effort, and smoother employee workflows. Mirko explains why organizations need entirely new KPI models for AI-driven support operations.
The conversation covers autonomous resolution rates, prevented incidents, reduced manual touches, productivity recovery, and why AI-driven support can dramatically reduce operational costs when implemented correctly. This is where IT stops acting like a reactive cost center and starts behaving like a reliability layer embedded directly into daily work.

WHAT THIS DOES TO THE SUPPORT TEAM
One of the biggest misconceptions around AI-driven support is that it eliminates people. In reality, the role of the support engineer changes completely. Teams move away from repetitive ticket handling and toward workflow orchestration, guardrail design, policy tuning, governance engineering, and exception management. The future support engineer becomes part reliability architect, part governance operator, and part automation supervisor. That shift requires organizations to rethink how support teams are trained, measured, and structured.

IMPLEMENTATION AND PAYOFF
The rollout strategy matters. Mirko recommends starting with one high-friction support flow inside Microsoft 365 instead of attempting a massive transformation project all at once. Access remediation, meeting recovery, sharing issues, and device compliance workflows are often strong starting points because the patterns are frequent and measurable.

The critical design questions become:
- What defines healthy state?
- Which events indicate drift?
- Which actions are safe to automate?
- Where does human escalation begin?

Once those foundations are in place, support stops acting like a front desk reacting to incidents and starts operating like an intelligent reliability engine embedded directly into Microsoft 365 itself.

CONCLUSION
Support is shifting from visible reaction to embedded prevention. The ticket was never the service. It was proof the service showed up too late.
If you are leading Microsoft 365 operations, AI governance, Copilot adoption, identity architecture, support modernization, or enterprise automation strategy, this episode provides a practical blueprint for understanding where autonomous support is heading next. Subscribe to the M365 FM Podcast for more deep dives into Microsoft 365, Copilot, Entra, AI agents, governance, automation, and modern enterprise operating models. Connect with Mirko Peters on LinkedIn and share the episode with teams exploring the future of AI-driven support and operational automation. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

    20 min

About

Welcome to the M365.FM — your essential podcast for everything Microsoft 365, Azure, and beyond. Join us as we explore the latest developments across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, and the entire Microsoft ecosystem. Each episode delivers expert insights, real-world use cases, best practices, and interviews with industry leaders to help you stay ahead in the fast-moving world of cloud, collaboration, and data innovation. Whether you're an IT professional, business leader, developer, or data enthusiast, the M365.FM brings the knowledge, trends, and strategies you need to thrive in the modern digital workplace. Tune in, level up, and make the most of everything Microsoft has to offer. M365.FM is part of the M365-Show Network. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.

You Might Also Like