M365 Show Podcast

Mirko Peters

Welcome to the M365 Show — your essential podcast for everything Microsoft 365, Azure, and beyond. Join us as we explore the latest developments across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, and the entire Microsoft ecosystem. Each episode delivers expert insights, real-world use cases, best practices, and interviews with industry leaders to help you stay ahead in the fast-moving world of cloud, collaboration, and data innovation. Whether you're an IT professional, business leader, developer, or data enthusiast, the M365 Show brings the knowledge, trends, and strategies you need to thrive in the modern digital workplace. Tune in, level up, and make the most of everything Microsoft has to offer. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support.

  1. 2 HOURS AGO

    Stop Paying for Cloud VMs: Run Azure on a Mini PC

🙋‍♀️ Who’s this for
- CIOs/CFOs cutting runaway cloud spend without losing governance
- IT Architects/Platform Teams standardizing control across hybrid/edge
- DevOps/SRE needing local latency + cloud-grade automation
- Retail/Manufacturing/Healthcare edge teams deploying at dozens/hundreds of sites
- Security/GRC teams wanting unified audit, RBAC, and policy across on-prem + cloud

🔍 Key Topics Covered
1) The Cloud Without the Cloud
- Azure = muscle (hardware) + brain (control plane). You can rent the brain while supplying your own muscle.
- Azure Arc “badges” non-Azure machines/clusters so Policy, Defender, Monitor, and RBAC apply from the same portal.
- Azure Local brings core Azure services to those Arc-managed boxes: VMs, AKS, networking—on your desk.
2) The Mini-PC Revolution
- Small form-factor hardware (Intel i5/i7, Ryzen; 16–64 GB RAM; NVMe SSD) is enough for a mini region.
- Mail-and-plug edge rollout: ship pre-vouchered units, plug in power/Ethernet, and the machine appears in Azure ready for policy.
- Benefits: near-zero latency, tiny power draw (~40–50 W), no colo, centralized lifecycle via Arc.
3) Escaping the AD Trap
- Skip building a domain forest for two nodes. Use certificate-based identity with Azure Key Vault.
- The vault stores cluster certs/keys/BitLocker secrets; machines mutually authenticate with zero-trust simplicity; unified audit via Azure.
4) Deploying Your Private Azure Region
- Zero-touch provisioning: voucher USB → phone home → enroll → Arc claims the nodes.
- Create a site, run validation, deploy Azure Local (compute/network/storage RPs, AKS).
- Provision VMs or AKS via the same wizards you use in public Azure; enable GitOps for auto-updates at the edge.
5) The Economics of Taking the Cloud Home
- Arc registration: free; you pay mainly for optional governance/observability (Defender, Policy, Monitor).
- Replace 24×7 VM rent with a one-time hardware purchase + electricity; keep Azure security/compliance intact.
- Hybrid sweet spot: stable workloads local; burst/global workloads stay in public regions.

✅ Implementation Checklist (Copy/Paste)
A) Hardware & Network
- Mini-PC with VT-x/AMD-V, 32–64 GB RAM, NVMe SSD (OS) + NVMe SSD (data)
- Reliable Ethernet; optional secondary node for HA/live migration
B) Arc & Identity
- Enroll nodes with Azure Arc; attach to a Resource Group/Subscription
- Choose Key Vault–backed local identity (no AD); enable RBAC + PIM
- Store secrets/certs in Key Vault; enable audit logging
C) Azure Local Deployment
- Voucher USB → zero-touch enrollment → assign to a Site
- Run readiness checks (firmware, NICs, storage throughput)
- Deploy Azure Local (compute/network/storage RPs, AKS)
D) Governance & Security
- Apply Azure Policy: tagging, region residency, baseline hardening
- Enable Defender for Cloud and Azure Monitor/Log Analytics
- Set up Update Management and Backup where needed
E) Workloads
- Create VMs via the Azure Portal; configure availability across nodes
- Deploy AKS; wire up GitOps for continuous delivery at edge sites
- Standardize images (Packer) and IaC (Bicep/Terraform) for repeatability
F) Cost & Ops
- Track Monitor/Defender/Logs usage; tune retention and sampling
- Right-size hardware; plan a 3-year refresh; keep a cold spare
- Run quarterly DR drills (voucher re-enroll, GitOps redeploy)

🧠 Key Takeaways
- Keep Azure’s brain, own the brawn. Arc + Local gives cloud-grade control without the per-hour meter.
- Mini-PCs are enough. Ship, plug, enroll—edge sites behave like mini regions.
- Ditch legacy AD at the edge. Key Vault–based certificates give lighter, auditable zero trust.
- Same portal, policies, and audit. Hybrid without the governance gaps.
- Opex → Capex. Predictable spend, local performance, centralized security.

🧩 Reference Architecture (one-liner)
Voucher USB → Arc-enrolled nodes → Azure Local (compute/network/storage/AKS) → Policy/Defender/Monitor → VMs & AKS via Portal/GitOps; identity & secrets in Key Vault (no AD).

🔎 Search tags
Azure Arc, Azure Local, Hybrid cloud, Edge computing, Mini-PC cluster, Key Vault certificates, Zero-touch provisioning, Arc-enabled servers, AKS at the edge, Azure Policy governance, Defender for Cloud, Cloud cost reduction, Capex vs Opex IT, GitOps Azure, On-prem Azure management

🎯 Final CTA
If you’re done renting cycles, bring the cloud home: keep Azure governance, run your compute locally, and make your bill boring again. Follow for the build-out guide to image standards, GitOps patterns, and cost guardrails for multi-site edge fleets.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on LinkedIn and Substack.
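The checklist above leans on Arc enrollment plus Azure Policy tagging for governance. As a rough illustration of how you might audit that from code, here is a minimal Python sketch, assuming the azure-identity and azure-mgmt-resource packages and placeholder values for the subscription ID and the required tag name. It lists Arc-enabled machines and flags any that are missing the tag; it is a starting point for a custom report, not a replacement for Azure Policy itself.

```python
# Hypothetical sketch: inventory Azure Arc-enabled machines and flag missing tags.
# Assumes: pip install azure-identity azure-mgmt-resource, and that
# DefaultAzureCredential can authenticate (az login, managed identity, etc.).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
REQUIRED_TAG = "site"                        # hypothetical governance tag

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

# Arc-enabled servers surface as Microsoft.HybridCompute/machines resources.
arc_machines = client.resources.list(
    filter="resourceType eq 'Microsoft.HybridCompute/machines'"
)

for machine in arc_machines:
    tags = machine.tags or {}
    status = "ok" if REQUIRED_TAG in tags else f"missing tag '{REQUIRED_TAG}'"
    print(f"{machine.name} ({machine.location}): {status}")
```

Running this from a scheduled job gives a quick drift report across every enrolled mini-PC without touching the boxes themselves.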

    23 min
  2. 14 HOURS AGO

    Stop Typing to Copilot: Use Your Voice NOW!

🔍 Key Topics Covered
1) Opening — The Problem with Typing to Copilot
- Typing (~40 wpm) throttles an assistant built for millisecond reasoning; speech (~150 wpm) restores flow.
- M365 already talks (Teams, Word dictation, transcripts); the one place that should be conversational—Copilot—still expects QWERTY.
- Voice carries nuance (intonation, urgency) that text strips away; your “AI collaborator” deserves a bandwidth upgrade.
2) Enter Voice Intelligence — GPT-4o Realtime API
- True duplex: low-latency audio in/out over WebSocket; interruptible responses; turn-taking that feels human.
- Understands intent from audio (not just post-hoc transcripts); dialogue forms during your utterance.
- Practical wins: hands-free CRM lookups, live policy Q&A, mid-sentence pivots without restarting prompts.
3) The Brain — Azure AI Search + RAG
- RAG = retrieve before generate: ground answers in governed company content.
- Vector + semantic search finds meaning, not just keywords; citations keep legal phrasing intact.
- Security by design: RBAC-scoped retrieval, confidential computing options, and a middle-tier proxy that executes tools, logs calls, and enforces policy.
4) The Mouth — Secure M365 Voice Integration
- UX lives in Copilot Studio / Power Apps / Teams; cognition lives in Azure; secrets stay server-side.
- Entra ID session context ≫ biometrics: no voice enrollment required; identity rides the session.
- DLP, information barriers, Purview audit: speech becomes just another compliant modality (like email/chat).
5) Deploying the Voice-Driven Knowledge Layer
- The blueprint: Prepare → Index → Proxy → Connect → Govern → Maintain.
- Avoid platform throttling: Power Platform orchestrates; Azure handles heavy audio + retrieval at scale.
- Outcome: real-time, cited, department-scoped answers—fast enough for live meetings, safe enough for Legal.

✅ Implementation Checklist (Copy/Paste)
A) Data & Indexing
- Consolidate source docs (policies/FAQs/standards) in Azure Blob with clean metadata (department, sensitivity, version).
- Create an Azure AI Search index (hybrid: vector + semantic); schedule incremental re-indexing.
- Attach metadata filters (department/sensitivity) for RBAC-aware retrieval.
B) Security & Governance
- Register data sources in Microsoft Purview; enable lineage scans & sensitivity labels.
- Enforce Azure Policy for tagging/region residency; use Managed Identity, PIM, Conditional Access.
- Route telemetry to Log Analytics/Sentinel; enable DLP policies for transcripts/answers.
C) Middle-Tier Proxy (critical)
- Expose endpoints for search(), ground(), and respond().
- Implement rate limits, tool-call auditing, per-department scopes, and response citation tagging.
- Store keys in Key Vault; never ship tokens to client apps.
D) Voice UX
- Build a Copilot Studio agent or Power App in Teams with mic I/O bound to the proxy.
- Connect GPT-4o Realtime through the proxy; support barge-in (interruption) and partial responses.
- Present sources (doc title/section) with each answer; allow “open the source” actions.
E) Ops & Cost
- Budget alerts for audio/compute; autoscale retrieval and Realtime workers.
- Event-driven re-indexing on content updates; nightly compaction & embedding refresh.
- Quarterly red-teaming of prompt injection & data leakage paths; rotate secrets by runbook.

🧠 Key Takeaways
- Voice removes the human I/O bottleneck; GPT-4o Realtime removes the latency; Azure AI Search removes the hallucination.
- The proxy layer is the unsung hero—tool execution, scoping, logging, and policy all live there.
- Treat speech as a first-class, compliant modality inside M365—auditable, governed, and fast.

🧩 Reference Architecture (one-liner)
Mic (Teams/Power App) → Proxy (auth, RAG, policy, logging) → Azure AI Search (vector/semantic) → GPT-4o Realtime (voice out) → M365 compliance (DLP/Purview/Sentinel).

🎯 Final CTA
Give Copilot a voice—and a memory inside policy. If this saved you keystrokes (or meetings), follow/subscribe for the next deep dive: hardening your proxy against prompt injection while keeping responses interruptible and fast.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on LinkedIn and Substack.
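To make the proxy’s “retrieve before generate” step concrete, here is a minimal Python sketch using the azure-search-documents SDK. The endpoint, API key, index name (“policies”) and field names (“title”, “content”) are placeholders, and the GPT-4o Realtime WebSocket exchange is deliberately omitted; the point is the grounding the proxy performs before any audio goes back to the user.

```python
# Minimal sketch of the proxy's "retrieve before generate" step, assuming an
# Azure AI Search index named "policies" with "title" and "content" fields
# (both names are placeholders). pip install azure-search-documents
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",  # placeholder
    index_name="policies",                                        # hypothetical index
    credential=AzureKeyCredential("<search-api-key>"),            # keep server-side
)

def ground(question: str, top: int = 3) -> str:
    """Return a prompt grounded in retrieved passages, with simple citations."""
    results = search_client.search(search_text=question, top=top)
    passages = []
    for doc in results:
        passages.append(f"[{doc['title']}]\n{doc['content']}")
    context = "\n\n".join(passages)
    return (
        "Answer using only the sources below and cite the bracketed titles.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

# The grounded prompt is then streamed to the GPT-4o Realtime session by the
# proxy; that WebSocket exchange is omitted here.
print(ground("What is our data retention policy?"))
```

Keeping this logic in the proxy, rather than the client app, is what keeps keys server-side and makes every retrieval auditable.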

    23 min
  3. 1 DAY AGO

    Stop Your Cloud Migration: You Are Not AI Ready

🔍 Key Topics Covered
1) The Cloud Migration Warning (Opening)
- “Cloud-first” ≠ AI-capable. VMs in Azure don’t buy you governance, lineage, or identity discipline.
- Lift-and-shift moves location, not logic—you just rehosted sprawl in someone else’s data center.
- AI needs fluid, governed, traceable data pipelines; static, siloed estates suffocate Copilots and LLMs.
2) The Cloud Migration Trap — Why Lift-and-Shift Fails AI
- Speed over structure: legacy directory trees, inconsistent tagging, and brittle dependencies survive the move.
- Security debt at scale: replicated roles/keys enable contextual AI over-reach (Copilot reads what users shouldn’t).
- Governance stalls: human reviews can’t keep up with AI’s data recombination; lineage gaps become compliance risk.
- Cost shock: scattered data + unoptimized workloads = orchestration friction and runaway cloud bills.
3) Pillar 1 — Data Readiness
- Readiness = structure, lineage, governance (or your AI outputs are eloquent nonsense).
- Microsoft Fabric unifies analytics, but it can’t normalize chaos you lifted as-is.
- Purview + Fabric: enforce classification/lineage; stop “temporary” shadow stores; standardize tags/schemas.
- Litmus test: if you can’t trace origin → transformations → access for your top 10 datasets in 1 hour, you’re not AI-ready.
4) Pillar 2 — Infrastructure & MLOps Maturity
- Mature orgs migrate control, not just apps: policy-driven platforms, orchestrated compute, reproducible pipelines.
- Azure AI Foundry + Azure ML: experiment tracking, lineage, gated promotion to prod—if you actually wire them in.
- DevOps → MLOps: datasets/models/metrics as code; provenance by default; automated approvals & rollbacks.
- Arc/Defender/Sentinel: hybrid observability with centralized policy; treat infrastructure as ephemeral & governed.
5) Pillar 3 — Talent & Governance Gap
- Tools don’t replace competence. You need governance technologists (people who can read both YAML and regulations).
- Convert roles: DBAs → data custodians; network admins → identity stewards; compliance → AI risk auditors.
- Governance ≠ secrecy; it’s structured transparency with executable proof (not slideware).
- Align to NIST AI RMF and ISO/IEC 42001—but enforce via code, not policy PDFs.
6) Case Study — Fintrax: The Cost of Premature Cloud
- Perfect “Cloud First” optics; the AI pilot collapses under data sprawl, inherited permissions, and lineage gaps.
- Result: a compliance incident, a 70% cost overrun, and an “AI is too expensive” myth—caused by governance, not GPUs.
- Lesson: migration is logistics; readiness is architecture + discipline.
7) The 3-Step AI-Ready Cloud Strategy (Do This Next): Unify → Fortify → Automate
Unify your data estate
- Inventory/consolidate; standardize naming & tagging; centralize under Fabric + Purview.
- Pipe Defender/Sentinel/Log Analytics signals into Fabric for cross-domain visibility.
Fortify with governance-as-code
- Azure Policy/Blueprints/Bicep enforce classification, residency, least privilege.
- Map Purview labels → Policy aliases; use Managed Identity, PIM, Conditional Access.
- Continuous validation in CI/CD; drift detection and auto-remediation.
Automate intelligence feedback
- Real-time telemetry (Fabric RTI + Azure Monitor) → policy actions (throttle, quarantine, alert).
- Cost guards and anomaly detection wired to budgets and risk thresholds.
- Treat governance as a living control loop, not a quarterly audit.

🧠 Key Takeaways
- Cloud ≠ AI. Without structure, lineage, and identity discipline, you’re just modernizing chaos.
- Lift-and-shift preserves risk: permissions sprawl + lineage gaps + Copilot = breach-at-scale potential.
- AI readiness is provable: Unify data + Fortify with code + Automate feedback = traceable, scalable intelligence.
- The success metric has changed: from “% of servers migrated” to “% of decisions traceable and defensible.”

✅ Implementation Checklist (Copy/Paste)
Data & Visibility
- Full inventory of subscriptions, resource groups, storage accounts, and lakes; close orphaned assets.
- Standardize naming/tagging; enforce via Azure Policy.
- Register sources in Purview; enable lineage scans; apply default sensitivity labels.
- Consolidate analytics into Fabric; define gold/curated zones with data contracts.
Identity & Access
- Replace keys/connection strings with Managed Identity; enforce PIM for elevation.
- Conditional Access on all admin planes; disable legacy auth; rotate secrets in Key Vault.
- RBAC review: least-privilege baselines for Copilot/LLM services.
MLOps & Governance-as-Code
- Track datasets/models/metrics in Azure ML/Foundry; enable lineage and gated promotions.
- Encode policies in Bicep/Blueprints; integrate checks into CI/CD (policy test gates).
- Log everything to Log Analytics/Sentinel; build dashboards for lineage, access, drift.
Operations & Cost
- Budgets + alerts; anomaly detection on spend and data egress.
- Tiered storage lifecycle; archive stale data; minimize cross-region chatter.
- Incident runbooks for data leaks/model rollback; table-top exercises quarterly.

🎯 Final CTA
If your roadmap still reads like a relocation plan, it’s time to redraw it as an AI architecture. Follow/subscribe for practical deep dives on Fabric + Foundry patterns, governance-as-code templates, and reference pipelines that compile—not just impress in slides.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on LinkedIn and Substack.
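For the “policy test gates” idea in the checklist, here is a hedged sketch of what a CI gate could look like: a short Python script, assuming azure-identity and azure-mgmt-resource and a hypothetical tagging baseline, that fails the pipeline when resource groups drift from the required tags. Real enforcement still belongs in Azure Policy; this only illustrates the drift-detection loop.

```python
# Hedged sketch of a CI/CD "policy test gate": fail the pipeline when resource
# groups drift from the tagging baseline. Tag names here are placeholders.
# pip install azure-identity azure-mgmt-resource
import sys
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"          # placeholder
REQUIRED_TAGS = {"owner", "dataClassification"}     # hypothetical baseline

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

violations = []
for rg in client.resource_groups.list():
    missing = REQUIRED_TAGS - set((rg.tags or {}).keys())
    if missing:
        violations.append(f"{rg.name}: missing {sorted(missing)}")

if violations:
    print("Tagging drift detected:")
    print("\n".join(violations))
    sys.exit(1)   # break the build; remediation happens in code, not slides
print("All resource groups meet the tagging baseline.")
```

Wiring a check like this into the pipeline turns “% of decisions traceable” from a slide metric into a failing build.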

    23 min
  4. 1 DAY AGO

    The NVIDIA Blackwell Architecture: Why Your Data Fabric is Too Slow

🔍 Key Topics Covered
1) The Real Problem: Your Data Fabric Can’t Keep Up
- “AI-ready” software on 2013-era plumbing = GPUs waiting on I/O.
- Latency compounds across thousands of GPUs, every batch, every epoch—and that’s money.
- Cloud abstractions can’t outrun bad transport (CPU–GPU copies, slow storage lanes, chatty ETL).
2) Anatomy of Blackwell — A Cold, Ruthless Physics Upgrade
- Grace-Blackwell Superchip (GB200): ARM-based Grace CPU + Blackwell GPU with coherent NVLink-C2C (~960 GB/s) → fewer copies, lower latency.
- NVL72 racks with 5th-gen NVLink Switch Fabric: up to ~130 TB/s of all-to-all bandwidth → a rack that behaves like one giant GPU.
- Quantum-X800 InfiniBand: 800 Gb/s lanes with congestion-aware routing → low-jitter cluster scale.
- Liquid cooling (zero-water-waste architectures) as a design constraint, not a luxury.
- Generational leap vs. Hopper: up to 35× inference throughput, better perf/watt, and sharp inference cost reductions.
3) Azure’s Integration — Turning Hardware Into Scalable Intelligence
- ND GB200 v6 VMs expose the NVLink domain; Azure stitches racks together with domain-aware scheduling.
- NVIDIA NIM microservices + Azure AI Foundry = containerized, GPU-tuned inference behind familiar APIs.
- Token-aligned pricing, reserved capacity, and spot economics → right-sized spend that matches workload curves.
- Telemetry-driven orchestration (thermals, congestion, memory) keeps training scaling linear instead of collapsing.
4) The Data Layer — Feeding the Monster Without Starving It
- Speed shifts the bottleneck to ingestion, ETL, and governance.
- Microsoft Fabric unifies pipelines, warehousing, and real-time streams—now with a high-bandwidth circulatory system into Blackwell.
- Move from batch freight to capillary flow: sub-ms coherence for RL, streaming analytics, and continuous fine-tuning.
- Practical wins: vectorization/tokenization no longer gate throughput; shorter convergence, predictable runtime.
5) Real-World Payoff — From Trillion-Parameter Scale to Cost Control
- Benchmarks show double-digit training gains and order-of-magnitude inference throughput.
- Faster iteration = shorter roadmaps, earlier launches, and lower $/token in production.
- Democratized scale: foundation training, multimodal simulation, and RL loops now within mid-enterprise reach.
- Sustainability bonus: perf/watt improvements + liquid-cooling reuse → compute that reads like a CSR win.

🧠 Key Takeaways
- Latency is a line item. If the interconnect lags, your bill rises.
- Grace-Blackwell + NVLink + InfiniBand collapse CPU–GPU and rack-to-rack delays into microseconds.
- Azure ND GB200 v6 makes rack-scale Blackwell a managed service with domain-aware scheduling and token-aligned economics.
- Fabric + Blackwell = a data fabric that finally moves at model speed.
- The cost of intelligence is collapsing; the bottleneck is now your pipeline design, not your silicon.

✅ Implementation Checklist (Copy/Paste)
Architecture & Capacity
- Profile current jobs: GPU utilization vs. input wait; map I/O stalls.
- Size clusters on ND GB200 v6; align NVLink domains with your model-parallelism plan.
- Enable domain-aware placement; avoid cross-fabric chatter for hot shards.
Data Fabric & Pipelines
- Move batch ETL to Fabric pipelines/RTI; minimize hop count and schema thrash.
- Co-locate feature stores/vector indexes with GPU domains; cut CPU–GPU copies.
- Adopt streaming ingestion for RL/online learning; enforce sub-ms SLAs.
Model Ops
- Use NVIDIA NIM microservices for tuned inference; expose them via Azure AI endpoints.
- Token-aligned autoscaling; schedule training into off-peak pricing windows.
- Bake in telemetry SLOs: step time, input latency, NVLink utilization, queue depth.
Governance & Sustainability
- Keep lineage & DLP in Fabric; shift from blocking syncs to in-path validation.
- Track perf/watt and cooling KPIs; report cost & carbon per million tokens.
- Run canary datasets each release; fail fast on topology regressions.

If this helped you see where the real bottleneck lives, follow the show and turn on notifications. Next up: AI Foundry × Fabric—operational patterns that turn Blackwell throughput into production-grade velocity, with guardrails your governance team will actually sign.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on LinkedIn and Substack.
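To put “latency is a line item” into numbers, here is a back-of-envelope Python calculation. Every figure (cluster size, hourly rate, step time, input wait) is an illustrative assumption; plug in your own profiling data from the first checklist item.

```python
# Back-of-envelope sketch: what GPU input-wait actually costs. All numbers
# below are illustrative assumptions, not benchmarks.
gpus = 512                    # cluster size
hourly_rate_per_gpu = 6.00    # assumed $/GPU-hour
step_time_s = 1.2             # measured wall-clock per training step
input_wait_s = 0.3            # portion of each step spent waiting on data
training_hours = 24 * 14      # a two-week run

wait_fraction = input_wait_s / step_time_s
wasted_gpu_hours = gpus * training_hours * wait_fraction
wasted_dollars = wasted_gpu_hours * hourly_rate_per_gpu

print(f"Input wait is {wait_fraction:.0%} of every step.")
print(f"Idle GPU-hours over the run: {wasted_gpu_hours:,.0f}")
print(f"Approximate cost of waiting on the data fabric: ${wasted_dollars:,.0f}")
```

With these assumed numbers the run burns roughly 43,000 idle GPU-hours, which is why fixing the pipeline often pays for itself before any silicon upgrade does.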

    23 min
  5. 2 DAYS AGO

    Stop Using Default Gateway Settings: Fix Your Power Platform Connectivity NOW!

🔍 Key Topics Covered
1) The Misunderstood Middleman — What the Gateway Actually Does
- The real flow: Service → Gateway cluster → Host → Data source → Return (auth, TLS, translation, buffering—not a “dumb relay”).
- Modes that matter: Standard (enterprise/clustered), Personal (single-user—don’t use it for shared workloads), VNet data gateway (Azure VNet, zero inbound).
- Memory, CPU, encryption, and temp files make the Gateway a processing engine, not a pipe.
2) Default Settings = Hidden Performance Killers
- Concurrency: the default is a “polite queue”; fix it by raising parallel queries (within host capacity).
- Buffer sizing: avoid disk spill; give RAM breathing room.
- AV exclusions: exclude the Gateway install/cache/log paths from real-time scanning.
- StreamBeforeRequestCompletes: great on low-latency LANs; risky over high-latency VPNs.
- Updates reset tweaks: post-update amnesia can tank refresh time—re-apply your tuning.
3) The Network Factor — Routing, Latency & Cold-Potato Reality
- Let traffic egress locally to the nearest Microsoft edge POP; ride the Microsoft global backbone.
- Stop hair-pinning through corporate VPNs/proxies “for control” (it adds hops, latency, and TLS inspection delays).
- Use the Microsoft Network routing preference for sensitive/interactive analytics; reserve the Internet routing option for bulk/low-priority traffic.
- Latency compounds; bad routing nullifies every other optimization.
4) Hardware & Hosting — Build a Real Gateway Host
- Practical specs: ≥16 GB RAM, 8+ physical cores, SSD/NVMe for cache/logs.
- VMs are fine if CPU/memory are reserved (no overcommit); otherwise go physical.
- Clusters (2+ nodes) for load & resilience; keep versions/configs aligned.
- Measure what matters: the Gateway Performance report + PerfMon (CPU, RAM, private bytes, query duration).
5) Proactive Optimization & Maintenance
- Don’t auto-update production; stage, test, then promote.
- Keep/restore config backups (cluster & data source settings).
- Weekly health dashboards: correlate spikes with refresh schedules; spread workloads.
- PowerShell health checks (status, version, queue depth); scheduled proactive restarts.
- Baseline & document: OS build, .NET, ports, AV exclusions; treat the Gateway like real infrastructure.

🧠 Key Takeaways
- The Gateway is infrastructure, not middleware: tune it, monitor it, scale it.
- Fix the two killers: routing (egress locally → Microsoft backbone) and concurrency/buffers (matched to the host).
- Spec a host like you mean it: RAM, cores, SSD, cluster.
- Protect performance from updates: stage, verify, and only then upgrade.
- Latency beats hardware every time—get off the VPN detour.

✅ Implementation Checklist (Copy/Paste)
- Verify mode: Standard Gateway (not Personal); cluster at least 2 nodes.
- Raise concurrency per data source/node; increase buffers (monitor RAM).
- Place cache/logs on SSD/NVMe; set AV exclusions for Gateway paths.
- Review StreamBeforeRequestCompletes based on network latency.
- Route egress locally; bypass VPN/proxy for M365/Power Platform endpoints.
- Confirm the Microsoft Network routing preference for analytic traffic.
- Host sizing: ≥16 GB RAM, 8+ cores, reserved resources if virtualized.
- Enable & review the Gateway Performance report; add PerfMon counters.
- Implement PowerShell health checks + scheduled, graceful service restarts.
- Stage updates on a secondary node; keep config/version backups; document the baseline.

🎧 Listen & Subscribe
If this episode shaved 40 minutes off your refresh window, follow the show and turn on notifications. Next up: routing optimization across M365—edge POP testing, endpoint allow-lists, and how to spot fake “healthy” paths that quietly burn your SLA.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on LinkedIn and Substack.
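Before touching StreamBeforeRequestCompletes or arguing about VPN bypass, it helps to measure the path. Here is a rough Python sketch (the hostname is just an example target, not an official test service) that times TCP connects from the gateway host so you can compare the VPN route against local egress with real numbers.

```python
# Rough sketch: measure TCP connect latency to a service endpoint so routing
# decisions (VPN hairpin vs. local egress) rest on numbers, not folklore.
# The hostname below is an example target, not an official test endpoint.
import socket
import statistics
import time

HOST, PORT, SAMPLES = "api.powerbi.com", 443, 10

def connect_latency_ms(host: str, port: int) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

samples = [connect_latency_ms(HOST, PORT) for _ in range(SAMPLES)]
print(f"{HOST}: median {statistics.median(samples):.1f} ms, "
      f"p95 {sorted(samples)[int(0.95 * (SAMPLES - 1))]:.1f} ms")
# Run once on the gateway host's normal route and once with the VPN/proxy
# bypassed; a large gap tells you where the refresh minutes are going.
```

A large median gap between the two runs is the clearest evidence you can take to the network team.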

    23 min
  6. 2 DAYS AGO

    Stop Dragging Planner Tasks: Automate NOW

🔍 Key Topics Covered
1) Understanding the Planner–Copilot Connection
- Planner = structure and boards; you shouldn’t be the workflow engine.
- Copilot Studio adds reasoning + orchestration (intent → right tool).
- Power Automate is still your backend conveyor belt for triggers/rules.
- Together: Copilot interprets, Automate executes, Planner stays tidy.
2) Building the Agent in Copilot Studio
- Create a new agent (e.g., “Task Planner”).
- Write tight Instructions: scope = create/list/update Planner tasks; answer concisely; don’t speculate.
- Wire up identity & connections with the right M365 account (the one that owns the target plan).
- Remember: Instructions = logic/behavior, Tools = capability.
3) Adding Planner Tools: Create, List, Update
- Create a task: lock Group ID/Plan ID as custom values; keep Title dynamic.
- Tool description tip: “Create one or more tasks from user intent; summarize long titles; don’t ask for titles if implied.”
- List tasks: same Group/Plan; description: “Retrieve tasks for reasoning and response.”
- Update a task: dynamic Task ID; Due date accepts natural language (“tomorrow”, “next Friday”).
- Description: “Change due dates/details of an existing task using natural language dates.”
- Test flows: “List my open tasks,” “Create two tasks…,” “Set design review due Friday.”
4) Deploying to Microsoft 365 Copilot
- Publish → Channels → Microsoft 365/Teams; approve permissions.
- Use it in Teams or M365 Copilot: “Create three tasks for next week’s sprint,” “Mark backlog review due next Wednesday.”
- Chain reasoning: “List pending tasks, then set all to Friday.”
- First-run connector approvals may re-prompt; approve once.
5) Automation Strategy & Limitations
- Right tool, right layer: deterministic triggers → Power Automate; interpretive requests → Copilot.
- Improve reliability with good tool descriptions (they act like prompts).
- Governance: DLP, RBAC, owner accounts, audit of connections; monitor failures/latency.
- Context window limits—keep commands concise.
- Licensing/tenant differences can affect grounding/features.
- Document Group/Plan IDs, connector owners, and the last publish date.

🧠 Key Takeaways
- Stop dragging cards—speak tasks into existence.
- Copilot Studio reasons; Planner stores; Power Automate runs rules.
- Lock Group/Plan IDs; keep titles/dates dynamic; write clear tool descriptions.
- Publish to Microsoft 365 Copilot so commands run where you work.
- Govern from day one: least privilege, logging, DLP, change control.

✅ Implementation Checklist (Copy/Paste)
- Create a Copilot Studio agent “Task Planner” with clear scope & tone.
- Connect Planner with the account that owns the target Group/Plan.
- Add tools: Create task, List tasks, Update task.
- Set Group ID/Plan ID as custom fixed values; keep Title/Due Date dynamic.
- Write strong tool descriptions (intent cues, natural language dates).
- Test: create → list → update flows; confirm due-date parsing.
- Publish to Microsoft 365/Teams; approve connector permissions.
- Monitor analytics; document IDs/owners; enforce DLP/RBAC.
- Train users to issue short, clear commands (one intent at a time).
- Iterate descriptions as you spot misfires.

🎧 Listen & Subscribe
If this cut ten clicks from your day, follow the show and turn on notifications. Next up: blending Copilot Studio + Power Automate for meeting-to-tasks pipelines that auto-assign and schedule sprints—no dragging required.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on LinkedIn and Substack.
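For context on the kind of call the Planner tools ultimately wrap, here is an illustrative Python sketch of task creation via the Microsoft Graph API, with the plan ID fixed and the title dynamic, mirroring the tool configuration described above. The access token and plan ID are placeholders, and acquiring the token through Entra ID (for example, MSAL with the Tasks.ReadWrite delegated permission) is assumed rather than shown; the Copilot Studio connector handles all of this for you.

```python
# Illustrative sketch of a Microsoft Graph Planner task creation call with a
# fixed plan ID and a dynamic title. ACCESS_TOKEN and PLAN_ID are placeholders.
import requests

ACCESS_TOKEN = "<entra-id-access-token>"   # placeholder
PLAN_ID = "<your-plan-id>"                 # locked per agent, like the tool config

def create_planner_task(title: str, due_date_iso: str | None = None) -> dict:
    body = {"planId": PLAN_ID, "title": title}
    if due_date_iso:
        body["dueDateTime"] = due_date_iso   # e.g. "2024-06-07T17:00:00Z"
    resp = requests.post(
        "https://graph.microsoft.com/v1.0/planner/tasks",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: one of the two tasks a "Create two tasks..." utterance would yield.
task = create_planner_task("Design review", "2024-06-07T17:00:00Z")
print(task["id"], task["title"])
```

Seeing the raw call makes it clearer why the agent needs the owning account’s connection and why the plan ID should be locked rather than inferred.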

    24 min
  7. 3 DAYS AGO

    The Autonomous Agent Excel Hack

🔍 Key Topics Covered
1) The Anatomy of an Autonomous Agent (Blueprint)
- What “autonomous” means in Copilot Studio: Trigger → Logic → Orchestration.
- Division of labor: Power Automate (email trigger, SharePoint staging, outbound reply) + Copilot Studio Agent (read the Excel table, generate answers, write back).
- End-to-end path: Email → SharePoint → Copilot Studio → Power Automate → Reply.
- Why RFIs are perfect: predictable schema (Question/Answer), high repetition, low tolerance for errors.
2) Feeding the Machine — Input Flow Design (Power Automate)
- Trigger: new email in a shared mailbox; filter .xlsx only (ditch PDFs/screenshots).
- Structure check: enforce a named table (e.g., Table1) with columns like Question/Answer.
- Staging: copy to SharePoint for versioning, stable IDs, and compliance.
- Pass the File ID + Message ID to the agent with a clear, structured prompt (scope, action, destination).
3) The AI Brain — Generative Answer Loop (Copilot Studio)
- The topic takes the File ID, runs List Rows in Table, and iterates rows deterministically.
- One question at a time to prevent context bleed; disable “send message” and store outputs in a variable.
- Generate the answer → update the matching row in the same Excel table via the SharePoint path.
- Knowledge grounding options: internal (SharePoint/Dataverse) for precision & compliance; web (Bing grounding) for general info—use it cautiously in regulated contexts.
- Result: a clean read → reason → respond → record loop.
4) The Write-Back & Reply Mechanism (Power Automate)
- Timing guardrails: a brief delay to ensure SharePoint commits changes (sync tolerance).
- Get File Content (binary) → send the email reply with the updated workbook attached; preserve the thread via Message ID.
- Resilience: table-not-found → graceful error email; consider batching/parallelism for large sheets.
5) Scaling, Governance, and Reality Checks
- Quotas & throttling exist—design for bounded autonomy and least privilege.
- When volume grows: migrate from raw Excel to Dataverse/SharePoint lists for concurrency and reliability.
- Telemetry & audits: monitor flow runs, agent transcripts, and export logs; adopt DLP, RBAC, change control.
- Human-in-the-loop QA for sampled outputs; combine automated checks with manual review.
- Future-proofing: this pattern extends to multi-agent orchestration (specialized bots collaborating).

🧠 Key Takeaways
- Automation ≠ typing faster. It’s removing typing entirely.
- Use Power Automate to detect, validate, stage, and dispatch; use Copilot Studio to read, reason, and write back.
- Enforce named tables and clean schemas—merged cells are the enemy.
- Prefer internal knowledge grounding for reliable, compliant answers.
- Design for governance from day one: least privilege, logs, and graceful failure paths.

✅ Implementation Checklist (Copy/Paste Ready)
- Shared mailbox created; Power Automate trigger: New email (with attachments).
- Filter .xlsx; reject non-Excel files with a friendly notice.
- Enforce a named table (Table1) with Question/Answer columns.
- Copy to a SharePoint library; capture the File ID + Message ID.
- Call the Copilot Studio Agent with structured parameters (file scope, action, reply target).
- In Copilot: List rows → per-row Generate Answer (internal grounding) → Update row.
- Back in Power Automate: Delay 60–120 s, Get File Content, Reply with attachment (threaded).
- Error paths: missing table/columns → notify the sender; log run IDs.
- Monitoring: flow history, agent transcripts, log exports to Log Analytics/Sentinel.
- Pilot on a small RFI set; then consider Dataverse for scale.

🎧 Listen & Subscribe
If this frees you from another week of copy-paste purgatory, follow the show and turn on notifications. Next up: evolving this pattern from Excel into Dataverse-first multi-agent workflows—because true autonomy comes with proper data design.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on LinkedIn and Substack.
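To see the read → answer → write-back loop outside the flow, here is an offline Python sketch using openpyxl against a local workbook named rfi.xlsx with a named table Table1 and Question/Answer columns (all of these names are placeholders matching the schema contract above); generate_answer() stands in for the Copilot Studio generative step.

```python
# Offline sketch of the agent's read -> answer -> write-back loop, using a local
# workbook instead of SharePoint. generate_answer() is a stand-in for the
# Copilot Studio generative step. pip install openpyxl
from openpyxl import load_workbook

def generate_answer(question: str) -> str:
    # Placeholder for the grounded LLM call made by the Copilot Studio topic.
    return f"[Drafted answer for: {question}]"

wb = load_workbook("rfi.xlsx")
ws = wb.active
table = ws.tables["Table1"]            # the schema contract: a named table
rows = list(ws[table.ref])             # header row + data rows
header = [cell.value for cell in rows[0]]
q_col, a_col = header.index("Question"), header.index("Answer")

for row in rows[1:]:                   # one question at a time, no context bleed
    question = row[q_col].value
    if question and not row[a_col].value:
        row[a_col].value = generate_answer(str(question))

wb.save("rfi.xlsx")                    # the flow would then attach and reply
```

Prototyping the loop this way also makes the “merged cells are the enemy” point obvious: anything that breaks the named table breaks the whole pattern.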

    23 min
  8. 3 DAYS AGO

    M365 Show - Microsoft 365 Digital Workplace Daily - The Secret to Putting SQL Data in Copilot Studio

🔍 Key Topics Covered
1) Why Copilots Fail Without Context
- LLMs without data grounding = fluent hallucinations and confident nonsense.
- The real memory lives in SQL Server—orders, invoices, inventory—behind the firewall.
- Hybrid parity goal: cloud intelligence with on-prem control, zero data exposure.
2) The Power Platform Data Gateway — Spine of Hybrid AI
- Not “middleware”—your encrypted, outbound-only tunnel (no inbound firewall punches).
- Gateway clusters for high availability; one gateway serves Power BI, Power Apps, Power Automate, and Copilot Studio.
- No replication: queries only, end-to-end TLS, AAD/SQL/Windows auth, and auditable telemetry.
3) Teaching Copilot to Read SQL (Knowledge Sources)
- Add Azure SQL via the Gateway in Copilot Studio; choose the right auth (SQL, Windows, or AAD-brokered).
- Expose clean views (well-named columns, read-optimized joins) for clarity and performance.
- Live answers: conversational context drives real-time T-SQL through the gateway—no CSV exports.
4) Giving Copilot Hands — Actions & Write-Backs
- Define SQL Actions (insert/update/execute stored procs) with strict parameter prompts.
- Separate read vs. write connections/privileges for least privilege; require confirmations for critical ops.
- Every write is encrypted, logged, and governed—from chat intent to committed row.
5) Designing the Hybrid Brain — Architecture & Scale
- Four-part model: SQL (memory) → Gateway (spine) → Copilot/Power Platform (brain) → Teams/Web (face).
- Scale with gateway clusters, indexes, read-optimized views, and nightly metadata refresh.
- Send logs to Log Analytics/Sentinel; prove compliance with user/time/action traces.

🧠 Key Takeaways
- Copilot without SQL context = eloquent guesswork. Ground it via the Data Gateway.
- The gateway is outbound-only, encrypted, auditable—no database exposure.
- Use Knowledge Sources for live reads and SQL Actions for safe, governed writes.
- Design for least privilege, versioned views, and telemetry from day one.
- Hybrid done right = real-time answers + compliant operations.

✅ Implementation Checklist (Practical)
- Install & register the On-Premises Data Gateway; create a cluster (2+ nodes).
- Create environment connections: separate read (SELECT) and write (INSERT/UPDATE) credentials.
- In Copilot Studio: Add Knowledge → Azure SQL via gateway → select read-optimized views.
- Verify live queries (small, filtered result sets; correct data types).
- Define SQL Actions with clear parameter labels & confirmations.
- Enable telemetry export to Log Analytics/Sentinel; document runbooks.
- Index & maintain views; schedule metadata refresh.
- Pen test: cert chain, outbound rules, least-privilege review.
- Pilot with a narrow use case (e.g., “invoice lookup + create customer”).
- Roll out with RBAC, DLP policies, and change control.

🎧 Listen & Subscribe
If this saved you from another late-night CSV shuffle, follow the show and turn on notifications. Next up: extending the same architecture to legacy APIs and flat-file systems—because proper wiring beats magic every time.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on LinkedIn and Substack.
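As a local stand-in for the kind of query Copilot sends through the gateway, here is a small Python sketch using pyodbc with a SELECT-only login against a read-optimized view. The server, database, login, and view name (dbo.vw_InvoiceSummary) are all placeholders; the point is the shape of the query: parameterized, filtered, and small.

```python
# Local sketch of the kind of read that flows through the gateway when Copilot
# answers from a read-optimized view. Connection details, the login, and the
# view name (dbo.vw_InvoiceSummary) are all placeholders. pip install pyodbc
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql01.contoso.local;"        # on-prem server behind the gateway
    "DATABASE=Finance;"
    "UID=copilot_reader;PWD=<secret>;"   # SELECT-only login: least privilege
    "Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Parameterized, filtered, small result set: the shape of a live answer.
    cursor.execute(
        "SELECT TOP (10) InvoiceNumber, CustomerName, AmountDue "
        "FROM dbo.vw_InvoiceSummary WHERE CustomerName = ?",
        ("Fabrikam",),
    )
    for invoice_number, customer, amount in cursor.fetchall():
        print(invoice_number, customer, amount)
```

If a query like this is slow or returns thousands of rows when run by hand, it will be worse through a conversational agent, which is why the view design matters more than the connector.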

    21 min
