M365 Show Podcast

Mirko Peters

Welcome to the M365 Show — your essential podcast for everything Microsoft 365, Azure, and beyond. Join us as we explore the latest developments across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, and the entire Microsoft ecosystem. Each episode delivers expert insights, real-world use cases, best practices, and interviews with industry leaders to help you stay ahead in the fast-moving world of cloud, collaboration, and data innovation. Whether you're an IT professional, business leader, developer, or data enthusiast, the M365 Show brings the knowledge, trends, and strategies you need to thrive in the modern digital workplace. Tune in, level up, and make the most of everything Microsoft has to offer. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support.

  1. 9 hr ago

    The NVIDIA Blackwell Architecture: Why Your Data Fabric is Too Slow

    🔍 Key Topics Covered

    1) The Real Problem: Your Data Fabric Can’t Keep Up
    - “AI-ready” software on 2013-era plumbing = GPUs waiting on I/O.
    - Latency compounds across thousands of GPUs, every batch, every epoch—that’s money.
    - Cloud abstractions can’t outrun bad transport (CPU–GPU copies, slow storage lanes, chatty ETL).

    2) Anatomy of Blackwell — A Cold, Ruthless Physics Upgrade
    - Grace-Blackwell Superchip (GB200): ARM Grace + Blackwell GPU, coherent NVLink-C2C (~960 GB/s) → fewer copies, lower latency.
    - NVL72 racks with 5th-gen NVLink Switch Fabric: up to ~130 TB/s of all-to-all bandwidth → a rack that behaves like one giant GPU.
    - Quantum-X800 InfiniBand: 800 Gb/s lanes with congestion-aware routing → low-jitter cluster scale.
    - Liquid cooling (zero-water-waste architectures) as a design constraint, not a luxury.
    - Generational leap vs. Hopper: up to 35× inference throughput, better perf/watt, and sharp inference cost reductions.

    3) Azure’s Integration — Turning Hardware Into Scalable Intelligence
    - ND GB200 v6 VMs expose the NVLink domain; Azure stitches racks together with domain-aware scheduling.
    - NVIDIA NIM microservices + Azure AI Foundry = containerized, GPU-tuned inference behind familiar APIs.
    - Token-aligned pricing, reserved capacity, and spot economics → right-sized spend that matches workload curves.
    - Telemetry-driven orchestration (thermals, congestion, memory) keeps training scaling linear instead of collapsing.

    4) The Data Layer — Feeding the Monster Without Starving It
    - Speed shifts the bottleneck to ingestion, ETL, and governance.
    - Microsoft Fabric unifies pipelines, warehousing, and real-time streams—now with a high-bandwidth circulatory system into Blackwell.
    - Move from batch freight to capillary flow: sub-ms coherence for RL, streaming analytics, and continuous fine-tuning.
    - Practical wins: vectorization/tokenization no longer gate throughput; shorter convergence, predictable runtimes.

    5) Real-World Payoff — From Trillion-Parameter Scale to Cost Control
    - Benchmarks show double-digit training gains and order-of-magnitude inference throughput.
    - Faster iteration = shorter roadmaps, earlier launches, and lower $/token in production.
    - Democratized scale: foundation training, multimodal simulation, and RL loops now within mid-enterprise reach.
    - Sustainability bonus: perf/watt improvements + liquid-cooling reuse → compute that reads like a CSR win.

    🧠 Key Takeaways
    - Latency is a line item. If the interconnect lags, your bill rises.
    - Grace-Blackwell + NVLink + InfiniBand collapse CPU–GPU and rack-to-rack delays into microseconds.
    - Azure ND GB200 v6 makes rack-scale Blackwell a managed service with domain-aware scheduling and token-aligned economics.
    - Fabric + Blackwell = a data fabric that finally moves at model speed.
    - The cost of intelligence is collapsing; the bottleneck is now your pipeline design, not your silicon.

    ✅ Implementation Checklist (Copy/Paste)

    Architecture & Capacity
    - Profile current jobs: GPU utilization vs. input wait; map I/O stalls.
    - Size clusters on ND GB200 v6; align NVLink domains with your model-parallelism plan.
    - Enable domain-aware placement; avoid cross-fabric chatter for hot shards.

    Data Fabric & Pipelines
    - Move batch ETL to Fabric pipelines/RTI; minimize hop count and schema thrash.
    - Co-locate feature stores/vector indexes with GPU domains; cut CPU–GPU copies.
    - Adopt streaming ingestion for RL/online learning; enforce sub-ms SLAs.

    Model Ops
    - Use NVIDIA NIM microservices for tuned inference; expose via Azure AI endpoints.
    - Token-aligned autoscaling; schedule training into off-peak pricing windows.
    - Bake in telemetry SLOs: step time, input latency, NVLink utilization, queue depth.

    Governance & Sustainability
    - Keep lineage & DLP in Fabric; shift from blocking syncs to in-path validation.
    - Track perf/watt and cooling KPIs; report cost & carbon per million tokens.
    - Run canary datasets each release; fail fast on topology regressions.

    If this helped you see where the real bottleneck lives, follow the show and turn on notifications. Next up: AI Foundry × Fabric—operational patterns that turn Blackwell throughput into production-grade velocity, with guardrails your governance team will actually sign. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on: LinkedIn | Substack
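The “latency is a line item” claim is plain arithmetic: every hour a GPU waits on input is an hour you still pay for. A minimal sketch—the hourly rate and utilization figures below are made-up placeholders, not benchmarks:

```python
# Estimate dollars lost to GPUs waiting on I/O. All numbers here are
# illustrative assumptions, not measured rates.

def idle_cost(num_gpus: int, hourly_rate: float,
              gpu_utilization: float, hours: float) -> float:
    """Dollars spent on GPU-hours where the GPU sat waiting on input."""
    idle_fraction = 1.0 - gpu_utilization
    return num_gpus * hourly_rate * hours * idle_fraction

# 1,024 GPUs at a notional $4/hr, 60% utilization, one 24-hour run:
wasted = idle_cost(1024, 4.0, 0.60, 24.0)
print(f"${wasted:,.0f} paid for I/O stalls")  # → $39,322 paid for I/O stalls
```

Raising utilization from 60% to 90% in this toy model cuts the waste by three quarters—which is the whole argument for fixing the transport layer before buying more silicon.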

    23 min.
  2. 21 hr ago

    Stop Using Default Gateway Settings: Fix Your Power Platform Connectivity NOW!

    🔍 Key Topics Covered

    1) The Misunderstood Middleman — What the Gateway Actually Does
    - The real flow: Service → Gateway cluster → Host → Data source → Return (auth, TLS, translation, buffering—not a “dumb relay”).
    - Modes that matter: Standard (enterprise/clustered), Personal (single-user—don’t use it for shared workloads), VNet Gateway (Azure VNet for zero inbound).
    - Why memory, CPU, encryption, and temp files make the Gateway a processing engine, not a pipe.

    2) Default Settings = Hidden Performance Killers
    - Concurrency: default = “polite queue”; fix by raising parallel queries (within host capacity).
    - Buffer sizing: avoid disk spill; give RAM breathing room.
    - AV exclusions: exclude Gateway install/cache/log paths from real-time scanning.
    - StreamBeforeRequestCompletes: great on low-latency LANs; risky over high-latency VPNs.
    - Updates reset tweaks: post-update amnesia can tank refresh time—re-apply your tuning.

    3) The Network Factor — Routing, Latency & Cold-Potato Reality
    - Let traffic egress locally to the nearest Microsoft edge POP; ride the Microsoft global backbone.
    - Stop hair-pinning through corporate VPNs/proxies “for control” (adds hops, latency, TLS-inspection delays).
    - Use the Microsoft Network routing preference for sensitive/interactive analytics; reserve the “Internet” option for bulk/low-priority traffic.
    - Latency compounds; bad routing nullifies every other optimization.

    4) Hardware & Hosting — Build a Real Gateway Host
    - Practical specs: ≥16 GB RAM, 8+ physical cores, SSD/NVMe for cache/logs.
    - VMs are fine if CPU/memory are reserved (no overcommit); otherwise go physical.
    - Clusters (2+ nodes) for load & resilience; keep versions/configs aligned.
    - Measure what matters: Gateway Performance report + PerfMon (CPU, RAM, private bytes, query duration).

    5) Proactive Optimization & Maintenance
    - Don’t auto-update to prod; stage, test, then promote.
    - Keep/restore config backups (cluster & data source settings).
    - Weekly health dashboards: correlate spikes with refresh schedules; spread workloads.
    - PowerShell health checks (status, version, queue depth); scheduled proactive restarts.
    - Baseline & document: OS build, .NET, ports, AV exclusions; treat the Gateway like real infrastructure.

    🧠 Key Takeaways
    - The Gateway is infrastructure, not middleware: tune it, monitor it, scale it.
    - Fix the two killers: routing (egress locally → MS backbone) and concurrency/buffers (match them to the host).
    - Spec a host like you mean it: RAM, cores, SSD, cluster.
    - Protect performance from updates: stage, verify, and only then upgrade.
    - Latency beats hardware every time—get off the VPN detour.

    ✅ Implementation Checklist (Copy/Paste)
    - Verify mode: Standard Gateway (not Personal); cluster at least 2 nodes.
    - Raise concurrency per data source/node; increase buffers (monitor RAM).
    - Place cache/logs on SSD/NVMe; set AV exclusions for Gateway paths.
    - Review StreamBeforeRequestCompletes based on network latency.
    - Route egress locally; bypass VPN/proxy for M365/Power Platform endpoints.
    - Confirm Microsoft Network routing preference for analytic traffic.
    - Host sizing: ≥16 GB RAM, 8+ cores, reserved if virtualized.
    - Enable & review the Gateway Performance report; add PerfMon counters.
    - Implement PowerShell health checks + scheduled, graceful service restarts.
    - Stage updates on a secondary node; keep config/version backups; document the baseline.

    🎧 Listen & Subscribe If this episode shaved 40 minutes off your refresh window, follow the show and turn on notifications. Next up: routing optimization across M365—edge POP testing, endpoint allow-lists, and how to spot fake “healthy” paths that quietly burn your SLA. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on: LinkedIn | Substack
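The StreamBeforeRequestCompletes guidance above boils down to “know your latency first.” A rough sketch of that check—the 10 ms threshold is an assumed rule of thumb, not Microsoft guidance, and the TCP connect time is only a stand-in for real query latency:

```python
# Measure rough round-trip time to a data source host, then decide
# whether streaming-before-completion is likely to help or hurt.
# Threshold and approach are illustrative assumptions.
import socket
import time

def measure_rtt_ms(host: str, port: int, samples: int = 3) -> float:
    """Average TCP connect time in ms — a crude proxy for link latency."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        total += (time.perf_counter() - start) * 1000
    return total / samples

def recommend_streaming(rtt_ms: float, threshold_ms: float = 10.0) -> bool:
    """True = enable StreamBeforeRequestCompletes (low-latency LAN path);
    False = leave it off (high-latency VPN/WAN path)."""
    return rtt_ms < threshold_ms
```

Run `measure_rtt_ms` from the gateway host against your data source before flipping the setting; anything that looks like VPN latency argues for leaving it off.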

    23 min.
  3. 1 day ago

    Stop Dragging Planner Tasks: Automate NOW

    🔍 Key Topics Covered

    1) Understanding the Planner–Copilot Connection
    - Planner = structure and boards; you shouldn’t be the workflow engine.
    - Copilot Studio adds reasoning + orchestration (intent → right tool).
    - Power Automate is still your backend conveyor belt for triggers/rules.
    - Together: Copilot interprets, Automate executes, Planner stays tidy.

    2) Building the Agent in Copilot Studio
    - Create a new agent (e.g., “Task Planner”).
    - Write tight Instructions: scope = create/list/update Planner tasks; answer concisely; don’t speculate.
    - Wire identity & connections with the right M365 account (one that owns the target plan).
    - Remember: Instructions = logic/behavior; Tools = capability.

    3) Adding Planner Tools: Create, List, Update
    - Create a task: lock Group ID/Plan ID as custom values; keep Title dynamic.
    - Tool description tip: “Create one or more tasks from user intent; summarize long titles; don’t ask for titles if implied.”
    - List tasks: same Group/Plan; description: “Retrieve tasks for reasoning and response.”
    - Update a task: dynamic Task ID; Due date accepts natural language (“tomorrow”, “next Friday”). Description: “Change due dates/details of an existing task using natural-language dates.”
    - Test flows: “List my open tasks,” “Create two tasks…,” “Set design review due Friday.”

    4) Deploying to Microsoft 365 Copilot
    - Publish → Channels → Microsoft 365/Teams; approve permissions.
    - Use in Teams or M365 Copilot: “Create three tasks for next week’s sprint,” “Mark backlog review due next Wednesday.”
    - Chain reasoning: “List pending tasks, then set all to Friday.”
    - First-run connector approvals may re-prompt; approve once.

    5) Automation Strategy & Limitations
    - Right tool, right layer: deterministic triggers → Power Automate; interpretive requests → Copilot.
    - Improve reliability with good tool descriptions (they act like prompts).
    - Governance: DLP, RBAC, owner accounts, audit of connections; monitor failures/latency.
    - Context-window limits—keep commands concise.
    - Licensing/tenant differences can affect grounding/features.
    - Document Group/Plan IDs, connector owners, last publish date.

    🧠 Key Takeaways
    - Stop dragging cards—speak tasks into existence.
    - Copilot Studio reasons; Planner stores; Power Automate runs rules.
    - Lock Group/Plan IDs; keep titles/dates dynamic; write clear tool descriptions.
    - Publish to Microsoft 365 Copilot so commands run where you work.
    - Govern from day one: least privilege, logging, DLP, change control.

    ✅ Implementation Checklist (Copy/Paste)
    - Create a Copilot Studio agent “Task Planner” with a clear scope & tone.
    - Connect Planner with the account that owns the target Group/Plan.
    - Add tools: Create task, List tasks, Update task.
    - Set Group ID/Plan ID as custom fixed values; keep Title/Due Date dynamic.
    - Write strong tool descriptions (intent cues, natural-language dates).
    - Test: create → list → update flows; confirm due-date parsing.
    - Publish to Microsoft 365/Teams; approve connector permissions.
    - Monitor analytics; document IDs/owners; enforce DLP/RBAC.
    - Train users to issue short, clear commands (one intent at a time).
    - Iterate descriptions as you spot misfires.

    🎧 Listen & Subscribe If this cut ten clicks from your day, follow the show and turn on notifications. Next up: blending Copilot Studio + Power Automate for meeting-to-tasks pipelines that auto-assign and schedule sprints—no dragging required. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on: LinkedIn | Substack
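Copilot resolves date phrases like “tomorrow” and “next Friday” natively; the hypothetical helper below just shows the underlying date arithmetic, which is handy when sanity-checking what you expect the Update-task tool to receive:

```python
# Resolve a tiny subset of natural-language due dates. Illustrative only —
# the real parsing happens inside Copilot, not in your flow.
from datetime import date, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def parse_due(phrase: str, today: date) -> date:
    phrase = phrase.strip().lower()
    if phrase == "today":
        return today
    if phrase == "tomorrow":
        return today + timedelta(days=1)
    if phrase.startswith("next ") and phrase[5:] in WEEKDAYS:
        target = WEEKDAYS.index(phrase[5:])
        days_ahead = (target - today.weekday()) % 7 or 7  # never 0 days out
        return today + timedelta(days=days_ahead)
    raise ValueError(f"unrecognized phrase: {phrase!r}")

# From Monday 2024-06-03, "next friday" lands four days out:
print(parse_due("next friday", date(2024, 6, 3)))  # → 2024-06-07
```

Note the ambiguity baked into “next Friday” (this coming Friday vs. Friday of next week)—a good reason to confirm the resolved date in the agent’s reply before users trust it.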

    24 min.
  4. 1 day ago

    The Autonomous Agent Excel Hack

    🔍 Key Topics Covered

    1) The Anatomy of an Autonomous Agent (Blueprint)
    - What “autonomous” means in Copilot Studio: Trigger → Logic → Orchestration.
    - Division of labor: Power Automate (email trigger, SharePoint staging, outbound reply) + Copilot Studio agent (read Excel table, generate answers, write back).
    - End-to-end path: Email → SharePoint → Copilot Studio → Power Automate → Reply.
    - Why RFIs are perfect: predictable schema (Question/Answer), high repetition, low tolerance for errors.

    2) Feeding the Machine — Input Flow Design (Power Automate)
    - Trigger: new email in a shared mailbox; filter to .xlsx only (ditch PDFs/screenshots).
    - Structure check: enforce a named table (e.g., Table1) with columns like Question/Answer.
    - Staging: copy to SharePoint for versioning, stable IDs, and compliance.
    - Pass File ID + Message ID to the agent with a clear, structured prompt (scope, action, destination).

    3) The AI Brain — Generative Answer Loop (Copilot Studio)
    - Topic takes the File ID, runs List Rows in Table, iterates rows deterministically.
    - One question at a time to prevent context bleed; disable “send message” and store outputs in a variable.
    - Generate answer → update the matching row in the same Excel table via the SharePoint path.
    - Knowledge-grounding options: internal (SharePoint/Dataverse) for precision & compliance; web (Bing grounding) for general info—use cautiously in regulated contexts.
    - Result: a clean read → reason → respond → record loop.

    4) The Write-Back & Reply Mechanism (Power Automate)
    - Timing guardrails: a brief delay to ensure SharePoint commits changes (sync tolerance).
    - Get File Content (binary) → send the email reply with the updated workbook attached; preserve the thread via Message ID.
    - Resilience: table-not-found → graceful error email; consider batching/parallelism for large sheets.

    5) Scaling, Governance, and Reality Checks
    - Quotas & throttling exist—design for bounded autonomy and least privilege.
    - When volume grows: migrate from raw Excel to Dataverse/SharePoint lists for concurrency and reliability.
    - Telemetry & audits: monitor flow runs, agent transcripts, and export logs; adopt DLP, RBAC, change control.
    - Human-in-the-loop QA for sampled outputs; combine automated checks with manual review.
    - Future-proofing: this pattern extends to multi-agent orchestration (specialized bots collaborating).

    🧠 Key Takeaways
    - Automation ≠ typing faster. It’s removing typing entirely.
    - Use Power Automate to detect, validate, stage, and dispatch; use Copilot Studio to read, reason, and write back.
    - Enforce named tables and clean schemas—merged cells are the enemy.
    - Prefer internal knowledge grounding for reliable, compliant answers.
    - Design for governance from day one: least privilege, logs, and graceful failure paths.

    ✅ Implementation Checklist (Copy/Paste Ready)
    - Shared mailbox created; Power Automate trigger: New email (with attachments).
    - Filter to .xlsx; reject non-Excel files with a friendly notice.
    - Enforce a named table (Table1) with Question/Answer columns.
    - Copy to a SharePoint library; capture File ID + Message ID.
    - Call the Copilot Studio agent with structured parameters (file scope, action, reply target).
    - In Copilot: List rows → per-row Generate Answer (internal grounding) → Update row.
    - Back in Power Automate: Delay 60–120 s, Get File Content, reply with attachment (threaded).
    - Error paths: missing table/columns → notify sender; log run IDs.
    - Monitoring: flow history, agent transcripts, log exports to Log Analytics/Sentinel.
    - Pilot on a small RFI set; then consider Dataverse for scale.

    🎧 Listen & Subscribe If this frees you from another week of copy-paste purgatory, follow the show and turn on notifications. Next up: evolving this pattern from Excel into Dataverse-first multi-agent workflows—because true autonomy comes with proper data design. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on: LinkedIn | Substack
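The intake guardrails described above (filter to .xlsx, enforce the Question/Answer columns) are simple predicates. In the real flow these checks live in Power Automate conditions; the function names below are illustrative:

```python
# Intake validation sketch: accept only Excel attachments and verify the
# expected table columns before the agent ever runs. Hypothetical helpers.

def is_excel_attachment(filename: str) -> bool:
    """Accept only .xlsx files (ditch PDFs/screenshots)."""
    return filename.lower().endswith(".xlsx")

def validate_table_headers(headers: list[str],
                           required: tuple[str, ...] = ("Question", "Answer")) -> bool:
    """True when the named table carries every required column."""
    present = {h.strip().lower() for h in headers}
    return all(col.lower() in present for col in required)

print(is_excel_attachment("rfi_2024.XLSX"))                     # → True
print(validate_table_headers(["Question", "Answer", "Owner"]))  # → True
print(validate_table_headers(["Q", "A"]))                       # → False
```

Rejecting malformed files at the trigger keeps the “table-not-found → graceful error email” path a rarity instead of a routine.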

    23 min.
  5. 2 days ago

    M365 Show - Microsoft 365 Digital Workplace Daily - The Secret to Putting SQL Data in Copilot Studio

    🔍 Key Topics Covered

    1) Why Copilots Fail Without Context
    - LLMs without data grounding = fluent hallucinations and confident nonsense.
    - The real memory lives in SQL Server—orders, invoices, inventory—behind the firewall.
    - Hybrid parity goal: cloud intelligence with on-prem control, zero data exposure.

    2) The Power Platform Data Gateway — Spine of Hybrid AI
    - Not “middleware”—your encrypted, outbound-only tunnel (no inbound firewall punches).
    - Gateway clusters for high availability; one gateway serves Power BI, Power Apps, Power Automate, and Copilot Studio.
    - No replication: queries only, end-to-end TLS, AAD/SQL/Windows auth, and auditable telemetry.

    3) Teaching Copilot to Read SQL (Knowledge Sources)
    - Add Azure SQL via the Gateway in Copilot Studio; choose the right auth (SQL, Windows, or AAD-brokered).
    - Expose clean views (well-named columns, read-optimized joins) for clarity and performance.
    - Live answers: conversational context drives real-time T-SQL through the gateway—no CSV exports.

    4) Giving Copilot Hands — Actions & Write-Backs
    - Define SQL Actions (insert/update/execute stored procs) with strict parameter prompts.
    - Separate read vs. write connections/privileges for least privilege; confirmations for critical ops.
    - Every write is encrypted, logged, and governed—from chat intent to committed row.

    5) Designing the Hybrid Brain — Architecture & Scale
    - Four-part model: SQL (memory) → Gateway (spine) → Copilot/Power Platform (brain) → Teams/Web (face).
    - Scale with gateway clusters, indexes, read-optimized views, and nightly metadata refresh.
    - Send logs to Log Analytics/Sentinel; prove compliance with user/time/action traces.

    🧠 Key Takeaways
    - Copilot without SQL context = eloquent guesswork. Ground it via the Data Gateway.
    - The gateway is outbound-only, encrypted, auditable—no database exposure.
    - Use Knowledge Sources for live reads and SQL Actions for safe, governed writes.
    - Design for least privilege, versioned views, and telemetry from day one.
    - Hybrid done right = real-time answers + compliant operations.

    ✅ Implementation Checklist (Practical)
    - Install & register the On-Premises Data Gateway; create a cluster (2+ nodes).
    - Create environment connections: separate read (SELECT) and write (INSERT/UPDATE) creds.
    - In Copilot Studio: Add Knowledge → Azure SQL via gateway → select read-optimized views.
    - Verify live queries (small, filtered result sets; correct data types).
    - Define SQL Actions with clear parameter labels & confirmations.
    - Enable telemetry export to Log Analytics/Sentinel; document runbooks.
    - Index & maintain views; schedule metadata refresh.
    - Pen test: cert chain, outbound rules, least-privilege review.
    - Pilot with a narrow use case (e.g., “invoice lookup + create customer”).
    - Roll out with RBAC, DLP policies, and change control.

    🎧 Listen & Subscribe If this saved you from another late-night CSV shuffle, follow the show and turn on notifications. Next up: extending the same architecture to legacy APIs and flat-file systems—because proper wiring beats magic every time. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on: LinkedIn | Substack
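The read/write separation above should primarily be enforced by database-side permissions, but a belt-and-suspenders guard can also live in code. A hypothetical sketch—crude statement-prefix matching, not a real SQL parser:

```python
# Refuse anything but a read statement on the read-only connection before
# it ever reaches the gateway. Illustrative sketch; DB-side GRANTs remain
# the actual enforcement layer.

READ_ONLY_PREFIXES = ("select", "with")  # WITH ... SELECT CTEs count as reads

def is_read_only(sql: str) -> bool:
    """Crude statement-level check — not a substitute for permissions."""
    first_word = sql.lstrip().split(None, 1)[0].lower() if sql.strip() else ""
    return first_word in READ_ONLY_PREFIXES

def run_on_read_connection(sql: str) -> str:
    if not is_read_only(sql):
        raise PermissionError("write statements must use the write connection")
    return f"would execute via gateway: {sql.strip()[:40]}"

print(is_read_only("SELECT TOP 10 * FROM dbo.InvoiceView"))  # → True
print(is_read_only("UPDATE dbo.Invoices SET Paid = 1"))      # → False
```

A guard like this mostly buys you a clear, loggable failure message when a write sneaks onto the wrong connection.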

    21 min.
  6. 2 days ago

    The Custom Connector Lie: How to Really Add MCP to Copilot Studio

    🔍 Key Topics Covered

    1) The Illusion of Simplicity
    - Why the “Add Tool → Model Context Protocol” UI only surfaces built-ins (Dataverse/SharePoint/etc.).
    - The difference between “appears in the list” and actually exchanging streamable context.
    - Why your “connected MCP” is often a placebo until you build the bridge.

    2) What MCP Actually Is (and Isn’t)
    - MCP as a lingua franca for agents and context sources—tools, actions, schemas, parameters, tokens.
    - Streaming-first behavior: partial, evented payloads for live reasoning (not bulk dumps).
    - Protocol ≠ data source: MCP standardizes the handshake and structure so AI can reason with governed context.

    3) Building a Real Custom Connector (The Unvarnished Path)
    - Where to start: create the connector in Power Apps Make, not inside Copilot Studio.
    - Template choice matters (streamable variant), and why “no-auth” is common in tenant-isolated setups.
    - The two silent killers: the host must be the bare domain (no https://, no /api/mcp), and the base URL must not duplicate route prefixes (avoid /api/mcp/api/mcp).
    - Schema alignment to the MCP spec: exact casing, array vs. object types, required fields.
    - Enable streaming (chunked transfer) or expect truncation/timeouts.
    - Certificates & proxies: trust chains, CDNs that strip streaming headers, and why “optimizations” break MCP.
    - Naming & caching quirks: unique names, patient publication, and avoiding “refresh-loop purgatory.”

    4) Testing & Verification That Actually Proves It Works
    - Visibility test: does your MCP tool appear in Copilot Studio after propagation?
    - Metadata handshake: do tool descriptions & parameters arrive from your server?
    - Functional probes: ask controlled queries and watch for markdown + citations arriving as a stream.
    - Failure decoding: empty responses → URL path misalignment; truncated markdown → missing chunked transfer; “I don’t know how to help” → schema mismatch; connection flaps → SSL/CA chain or proxy stripping.
    - Network sanity checks: confirm data: event chunks vs. single payload dumps.

    5) Why This Matters Beyond the Demo
    - Governance & auditability: sanctioned sources, explicit logs, repeatable citations.
    - Security posture: least-privilege connectors as embassy checkpoints (not open tunnels).
    - Zero-hallucination culture: MCP narrows the AI to approved truth.
    - Future-proofing: aligning to inter-agent standards as enterprise prerequisites.

    🧠 Key Takeaways
    - MCP ≠ data feed. It’s a protocol for structured, streamable context exchange.
    - Custom connectors ≠ shortcuts. They’re protocol translators you must design with schema + streaming discipline.
    - The MCP dropdown lists native servers; your custom MCP needs a real bridge to appear and function.
    - Testing is a protocol rehearsal—check visibility, metadata, streaming, and citations before you claim success.
    - Done right, MCP transforms Copilot from chatbot to compliant analyst with traceable sources.

    ✅ Implementation Checklist (Practical & Brutally Honest)
    - Create the connector in Power Apps Make (solution-aware).
    - Choose the streamable MCP template; leave auth minimal unless policy requires more.
    - Host = bare domain only; base URL = correct, no duplicate prefixes.
    - Align request/response schemas to the MCP spec (casing, shapes, required fields).
    - Enable streaming; verify Transfer-Encoding: chunked.
    - Use valid TLS; avoid proxies that strip streaming headers.
    - Publish and wait (don’t refresh-loop).
    - In Copilot Studio: add the tool, confirm metadata import.
    - Run controlled queries; confirm incremental render + citations.
    - Log & monitor: document failures, headers, and schema diffs for reuse.

    🎧 Listen & Subscribe If this episode saved you from another “connected but silent” demo, follow the show and turn on notifications. Future episodes land like a compliant connector: once, on time, fully streamed, with citations. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on: LinkedIn | Substack
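The “two silent killers” are easy to lint for before you publish. A hypothetical pre-flight check (function name, messages, and the example domain are made up):

```python
# Pre-publish sanity check for the connector config: host must be a bare
# domain, and the base URL must not duplicate the /api/mcp route prefix.

def check_connector(host: str, base_url: str,
                    route_prefix: str = "/api/mcp") -> list[str]:
    """Return a list of problems; an empty list means the basics look right."""
    problems = []
    if "://" in host:
        problems.append("host must be a bare domain (drop https://)")
    if "/" in host:
        problems.append("host must not contain a path")
    if base_url.count(route_prefix) > 1:
        problems.append(f"base URL repeats the route prefix ({route_prefix})")
    return problems

print(check_connector("https://mcp.contoso.com", "/api/mcp"))
# → ['host must be a bare domain (drop https://)', 'host must not contain a path']
print(check_connector("mcp.contoso.com", "/api/mcp/api/mcp"))
# → ['base URL repeats the route prefix (/api/mcp)']
print(check_connector("mcp.contoso.com", "/api/mcp"))  # → []
```

Both failure modes above produce the “empty responses → URL path misalignment” symptom from the testing section, so catching them statically saves a publish-and-wait cycle.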

    24 min.
  7. 3 days ago

    STOP Building Cloud Flows! Use Agent Flows Instead

    🔍 Key Topics Covered

    1) The Hidden Price Tag of Cloud Flows
    - Why “build an Automated Cloud Flow” often means “start a licensing tab.”
    - Premium connector ripple effect: add Dataverse/SQL/Salesforce and everyone touching the flow may need premium.
    - API call quotas & throttling: the invisible brake on your “set it and forget it” automations.
    - AI Builder double-pay: automation fees here, AI credits there—two currencies, one outcome: sprawl.

    2) Enter Agent Flows — Automation with a Copilot Brain
    - Lives in Copilot Studio; billed by messages/actions, not by who uses it.
    - Premium & custom connectors included under consumption.
    - AI capabilities (classification, extraction, summarization) aligned to the same credit pool.
    - Triggers from conversation, intent, or signals—automation that interprets before it executes.

    3) When Agent Flows Replace Cloud Flows (and When They Don’t)
    - Use Agent Flows for chat/intent-driven, personal, or AI-assisted tasks where usage is bursty and user-specific.
    - Keep Cloud Flows for shared, scheduled, multi-owner orchestration across teams.
    - Migration path: make the Cloud Flow solution-aware → switch the plan to Copilot Studio → it becomes an Agent Flow (one-way).
    - Governance parity: drafts, versions, audit logs, RBAC—now inside Copilot Studio.

    4) The Math: Why Consumption Wins
    - Cloud Flows = “buffet priced per person.” Great if maxed out; wasteful if idle.
    - Agent Flows = “à la carte per action.” Costs scale linearly with actual work.
    - Transparent cost tracing by flow, connector, and hour; predictable quotas; no surprise overages.
    - Optimization matters: consolidate actions, reduce chat hops, and you literally pay less.

    5) Strategy Shift — Automation Goes AI-Native
    - Cloud Flows built the highways; Agent Flows drive themselves along them.
    - Consolidate small, conversational automations into Copilot Studio to reduce double-licensing.
    - Treat every automation as a service inside an intelligent platform, not a one-off per-user asset.
    - Roadmap reality: AI-native orchestration becomes the default entry point; Cloud Flows remain the backend muscle.

    🧠 Key Takeaways
    - Cloud Flows automate structure; Agent Flows automate intelligence.
    - If it starts in Copilot/chat, is personalized, or spiky in usage—move it to Agent Flows.
    - If it’s shared, scheduled, cross-team infrastructure—Cloud Flows still shine.
    - Message-based billing converts licensing drama into straight arithmetic.
    - Make “solution-aware” your default; design with governance, versioning, and quotas in mind.

    🎯 Who Should Listen
    - Power Platform makers tired of hitting premium walls.
    - IT leaders/CFOs chasing cost control and clean licensing.
    - Automation architects moving to AI-native orchestration.
    - Ops leaders who want predictable spend and audit-ready governance.

    🧩 Practical Checklist: Pick the Right Flow
    - Trigger is conversational or AI-driven? → Agent Flow
    - Needs premium connectors but limited users? → Agent Flow (consumption)
    - Shared, scheduled, cross-department approvals? → Cloud Flow
    - Long-running batch or high-visibility orchestration? → Cloud Flow
    - Want tight cost tracing & quotas? → Agent Flow in Copilot Studio

    🎧 Listen & Subscribe If this episode saved your budget—or your weekend—follow the show and turn on notifications. New episodes land like a well-governed quota: predictable, clean, on time. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on: LinkedIn | Substack
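The buffet-vs-à-la-carte comparison really is straight arithmetic. All prices below are placeholder assumptions—substitute your actual licensing and consumption rates before drawing any conclusions:

```python
# Per-user licensing vs. per-message consumption, side by side.
# Every number here is an illustrative assumption, not a quoted price.

def cloud_flow_cost(users: int, per_user_monthly: float) -> float:
    """Per-user premium licensing: cost scales with people, not work."""
    return users * per_user_monthly

def agent_flow_cost(messages: int, per_message: float) -> float:
    """Consumption billing: cost scales with actual actions performed."""
    return messages * per_message

users, per_user = 40, 15.00        # assumed per-user premium price/month
messages, per_msg = 12_000, 0.01   # assumed monthly volume and message rate

print(cloud_flow_cost(users, per_user))    # → 600.0
print(agent_flow_cost(messages, per_msg))  # → 120.0
```

With these toy numbers the bursty, 40-user scenario is five times cheaper on consumption; crank the message volume up and the break-even point flips back toward per-user licensing—which is exactly the “right tool, right layer” call the episode makes.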

    20 min.
  8. 3 days ago

    Code Interpreter vs. Azure Functions: Stop The Python Misuse!

    🔍 Key Topics Covered

    1️⃣ The Python Problem in Power Platform
    - Why “Python runs natively” doesn’t mean “Python runs anywhere.”
    - The rise of Code Interpreter inside Copilot Studio—and the chaos that followed.
    - The real reason flows time out and files hit 512 MB limits.
    - Why using Azure Functions for everything—or nothing—is equally misguided.

    2️⃣ The Code Interpreter: Microsoft’s New Python Sandbox
    - How Code Interpreter works inside Copilot Studio (the “glass terrarium” analogy).
    - Admin controls: why Python execution is disabled by default.
    - What it can actually do: CSV transformations, data cleanup, basic analytics.
    - Key limitations: no internet calls, no pip installs, and strict timeouts.
    - Why Microsoft made it intentionally safe and limited for business users.
    - Real-world examples of using it correctly for ad-hoc data prep and reporting.

    3️⃣ Azure Functions: Python Without Training Wheels
    - What makes Azure Functions the true enterprise-grade Python runtime.
    - The difference between sandbox snippets and event-driven microservices.
    - How Azure Functions scales automatically, handles dependencies, and logs everything.
    - Integration with Power Automate and Power Apps for secure, versioned automation.
    - Governance, observability, and why IT actually loves this model.
    - Example: processing gigabytes of sales data without breaking a sweat.

    4️⃣ The Illusion of Convenience
    - Why teams keep mistaking Code Interpreter for production infrastructure.
    - How “sandbox convenience” turns into “production chaos.”
    - The cost illusion: why “free inside Power Platform” still burns your capacity.
    - The hidden governance risks of unmonitored Copilot scripts.
    - How Azure Functions delivers professional reliability vs. chat-prompt volatility.

    5️⃣ The Decision Framework — When to Use Which
    - A practical rulebook for choosing the right tool: Code Interpreter = immediate, disposable, interactive; Azure Functions = recurring, scalable, governed.
    - Governance and compliance boundaries between Power Platform and Azure.
    - Security contrasts: sandbox vs. managed identities and VNET isolation.
    - Maintenance and version-control differences—why prompts don’t scale.
    - The “Prototype-to-Production Loop”: start ideas in Code Interpreter, deploy in Functions.
    - How to align analysts and architects in one workflow.

    6️⃣ The Enterprise Reality Check
    - How quotas, throttles, and limits affect Python inside Power Platform.
    - Understanding compute capacity and why Code Interpreter isn’t truly “free.”
    - Security posture: sandbox isolation vs. Azure-grade governance.
    - Cost models: prepaid licensing vs. consumption billing.
    - Audit readiness: why Functions produce evidence and prompts produce panic.
    - Real-world governance failure stories—and how to prevent them.

    7️⃣ Final Takeaway: Stop the Misuse
    - Code Interpreter is for experiments, not enterprise pipelines.
    - Azure Functions is for scalable, auditable, production-ready automation.
    - Mixing them up doesn’t make you clever—it makes you a liability.
    - Prototype fast in Copilot, deploy properly in Azure.
    - Because “responsible architecture” isn’t a buzzword—it’s how you keep your job.

    🧠 Key Takeaways
    - Code Interpreter = sandbox: great for small data prep, visualizations, or lightweight automations inside Copilot Studio.
    - Azure Functions = infrastructure: perfect for production workloads, scalable automation, and secure integration across systems.
    - Don’t confuse ease for capability. The sandbox is for testing; the Function is for delivering.
    - Prototype → Promote → Deploy: the golden loop that balances agility with governance.
    - Governance, monitoring, and cost management matter as much as performance.

    🔗 Episode Mentions & Resources
    - Microsoft Docs: Python in Power Platform (Code Interpreter)
    - Azure Functions Overview
    - Power Platform Admin Center — Enable Code Execution
    - Copilot Studio for Power Platform

    🎧 Listen & Subscribe If this episode saved you from another flow timeout or a late-night “why did it fail again?” crisis, subscribe wherever you get your podcasts. Follow for upcoming deep dives into: Copilot in the enterprise, AI governance frameworks, and low-code meets pro-code—the future of automation. Hit Follow, enable notifications, and let every new episode arrive like a scheduled task—on time, with zero drama. Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support. Follow us on: LinkedIn | Substack
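The decision framework condenses into a rule-of-thumb function. The criteria mirror the episode’s rulebook (the 0.5 GB cutoff echoes the 512 MB file limit mentioned above); the exact thresholds and the function itself are assumptions, not an official Microsoft matrix:

```python
# Rule-of-thumb tool picker: immediate/disposable/interactive work fits the
# Code Interpreter sandbox; recurring, networked, or large-data work belongs
# in Azure Functions. Thresholds are illustrative assumptions.

def pick_runtime(recurring: bool, needs_network: bool,
                 needs_pip_installs: bool, data_gb: float) -> str:
    # Anything the sandbox cannot do forces Azure Functions:
    # no internet calls, no pip installs, ~512 MB file ceiling.
    if needs_network or needs_pip_installs or data_gb > 0.5:
        return "Azure Functions"
    # Scheduled/recurring work belongs in governed infrastructure too.
    if recurring:
        return "Azure Functions"
    # Interactive, disposable, small-data: the sandbox is the right fit.
    return "Code Interpreter"

print(pick_runtime(False, False, False, 0.05))  # → Code Interpreter
print(pick_runtime(True, False, False, 0.05))   # → Azure Functions
print(pick_runtime(False, True, False, 0.05))   # → Azure Functions
```

Encoding the rulebook this way also gives teams a shared artifact to argue about—far cheaper than discovering the sandbox’s limits mid-production.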

    21 min.
