M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit m365.show.

  1. Your Fabric Data Lake Is Too Slow: The NVMe Fix

    2 hours ago

    Your Fabric Data Lake Is Too Slow: The NVMe Fix

    Opening: “Your Data Lake Has a Weight Problem” Most Fabric deployments today are dragging their own anchors. Everyone blames the query, the spark pool, the data engineers—never the storage. But the real culprit? You’re shoveling petabytes through something that behaves like a shared drive from 2003. What’s that? Your trillion-row dataset refreshes slower than your Excel workbook from college? Precisely. See, modern Fabric and Power Platform setups rely on managed storage tiers—easy, elastic, and, unfortunately, lethargic. Each request canyon‑echoes across the network before anything useful happens. All those CPUs and clever pipelines are idling, politely waiting on the filesystem to respond. The fix isn’t more nodes or stronger compute. It’s proximity. When data sits closer to the processor, everything accelerates. That’s what Azure Container Storage v2 delivers, with its almost unfair advantage: local NVMe disks. Think of it as strapping rockets to your data lake. By the end of this, your workloads will sprint instead of crawl. Section 1: Why Fabric and Power Platform Feel Slow Let’s start with the illusion of power. You spin up Fabric, provision a lakehouse, connect Power BI, deploy pipelines—and somehow it all feels snappy… until you hit scale. Then, latency starts leaking into every layer. Cold path queries crawl. Spark operations shimmer with I/O stalls. Even “simple” joins act like they’re traveling through a congested VPN. The reason is embarrassingly physical: your compute and your data aren’t in the same room. Managed storage sounds glamorous—elastic capacity, automatic redundancy, regional durability—but each of those virtues adds distance. Every read or write becomes a small diplomatic mission through Azure’s network stack. The CPU sends a request, the storage service negotiates, data trickles back through virtual plumbing, and congratulations—you’ve just paid for hundreds of milliseconds of bureaucracy. Multiply that by millions of operations per job, and your “real-time analytics” have suddenly time-traveled to yesterday. Compare that to local NVMe storage. Managed tiers behave like postal services: reliable, distributed, and painfully slow when you’re in a hurry. NVMe, though, speaks directly to the server’s PCIe lanes—the computational equivalent of whispering across a table instead of mailing a letter. The speed difference isn’t mystical; it’s logistical. Where managed disks cap IOPS in the tens or hundreds of thousands, local NVMe easily breaks into the millions. Five GB per second reads aren’t futuristic—they’re Tuesday afternoons. Here’s the paradox: scaling up your managed storage costs you more and slows you down. Every time you chase performance by adding nodes, you multiply the data paths, coordination overhead, and, yes, the bill. Azure charges for egress; apparently, physics charges for latency. You’re not upgrading your system—you’re feeding a very polite bottleneck. What most administrators miss is that nothing is inherently wrong with Fabric or Power Platform. Their architecture expects closeness. It’s your storage choice that creates long-distance relationships between compute and data. Imagine holding a conversation through walkie-talkies while sitting two desks apart. That delay, the awkward stutter—that’s your lakehouse right now. So when your Power BI dashboard takes twenty seconds to refresh, don’t blame DAX or Copilot. Blame the kilometers your bytes travel before touching a processor. The infrastructure isn’t slow. 
It’s obediently obeying a disastrous topology. Your data is simply too far from where the thinking happens. Section 2: Enter Azure Container Storage v2 Enter Azure Container Storage v2, Microsoft’s latest attempt to end your I/O agony. It’s not an upgrade; it’s surgery. The first version, bless its heart, was a Frankenstein experiment—a tangle of local volume managers, distributed metadata databases, and polite latency that no one wanted to talk about. Version two threw all of that out the airlock. No LVM. No etcd. No excuses. It’s lean, rewritten from scratch, and tuned for one thing only: raw performance. Now, a quick correction before the average administrator hyperventilates. You might remember the phrase “ephemeral storage” from ACStor v1 and dismiss it as “temporary, therefore useless.” Incorrect. Ephemeral didn’t mean pointless; it meant local, immediate, and blazing fast—perfect for workloads that didn’t need to survive an apocalypse. V2 doubles down on that idea. It’s built entirely around local NVMe disks, the kind soldered onto the very servers running your containers. The point isn’t durability; it’s speed without taxes. Managed disks? Gone. Yes, entirely removed from ACStor’s support matrix. Microsoft knew you already had a dozen CSI drivers for those, each with more knobs than sense. What customers actually used—and what mattered—was the ephemeral storage, the one that let containers scream instead of whisper. V2 focuses exclusively on that lane. If your node doesn’t have NVMe, it’s simply not invited to the party. Underneath it all, ACStor v2 still talks through the standard Container Storage Interface, that universal translator Kubernetes uses to ask politely for space. Microsoft, being generous for once, even open‑sourced the local storage driver that powers it. The CSI layer means it behaves like any other persistent volume—just with reflexes of a racehorse. The driver handles the plumbing; you enjoy the throughput. And here’s where it gets delicious: automatic RAID striping. Every NVMe disk on your node is treated as a teammate, pooled together and striped in unison. No parity, no redundancy—just full bandwidth, every lane open. The result? Every volume you carve, no matter how small, enjoys the combined performance of the entire set of disks. It’s like buying one concert ticket and getting the whole orchestra. Two NVMes might give you a theoretical million IOPS. Four could double that. All while Azure politely insists you’re using the same hardware you were already paying for. Let’s talk eligibility, because not every VM deserves this level of competence. You’ll find the NVMe gifts primarily in the L‑series machines—Azure’s storage‑optimized line designed for high I/O workloads. That includes the Lsv3 and newer variants. Then there are the NC series, GPU‑accelerated beasts built for AI and high‑throughput analytics. Even some Dv6 and E‑class VMs sneak in local NVMe as “temporary disks.” Temporary, yes. Slow, no. Each offers sub‑millisecond latency and multi‑gigabyte‑per‑second throughput without renting a single managed block. And the cost argument evaporates. Using local NVMe costs you nothing extra; it’s already baked into the VM price. You’re quite literally sitting on untapped velocity. When people complain that Azure is expensive, they usually mean they’re paying for managed features they don’t need—elastic SANs, managed redundancy, disks that survive cluster death. 
For workloads like staging zones, temporary Spark caches, Fabric’s transformation buffers, or AI model storage, that’s wasted money. ACStor v2 liberates you from that dependency. You’re no longer obliged to rent speed you already own. So what you get is brutally simple: localized data paths, zero extra cost, and performance that rivals enterprise flash arrays. You remove the middlemen—no SAN controllers, no network hops, no storage gateways—and connect compute directly to the bytes that fuel it. Think of it as stripping latency fat off your infrastructure diet. Most of all, ACStor v2 reframes how you think about cloud storage. It doesn’t fight the hardware abstraction layer; it pierces it. Kubernetes persists, Azure orchestrates, but your data finally moves at silicon speed. That’s not a feature upgrade—that’s an awakening. Section 3: The NVMe Fix—How Local Storage Outruns the Cloud OK, let’s dissect the magic word everyone keeps whispering in performance circles—NVMe. It sounds fancy, but at its core, it’s just efficiency, perfected. Most legacy storage systems use protocols like AHCI, which serialize everything—one lane, one car at a time. NVMe throws that model in the trash. It uses parallel queues, directly mapped to the CPU’s PCIe lanes. Translation: instead of a single checkout line at the grocery store, you suddenly have thousands, all open, all scanning groceries at once. That’s not marketing hype—it’s electrical reality. Now compare that to managed storage. Managed storage is… bureaucracy with disks. Every read or write travels through virtual switches, hypervisor layers, service fabrics, load balancers, and finally lands on far‑away media. It’s the postal service of data: packages get delivered, sure, but you wouldn’t trust it with your split‑second cache operations. NVMe, on the other hand, is teleportation. No queues, no customs, no middle management—just your data appearing where it’s needed. It’s raw PCIe bandwidth turning latency into an urban legend. And here’s the kicker: ACStor v2 doesn’t make NVMe faster—it unleashes it. Remember that automatic RAID striping from earlier? Picture several NVMe drives joined in perfect harmony. RAID stripes data across every disk simultaneously, meaning reads and writes occur in parallel. You lose redundancy, yes, but gain a tsunami of throughput. Essentially, each disk handles a fraction of the workload, so the ensemble performs at orchestra tempo. The result is terrifyingly good: in Microsoft’s own internal benchmarking, two NVMe drives hit around 1.2 million input/output operations per second with a throughput of roughly five gigabytes per second. That’s the sort of number that makes enterprise arrays blush. To visualize it, think of Spark running its temporary shuffles, those massive in
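    To make the CSI path above concrete, here is a minimal Python sketch (using the official Kubernetes client) of how a workload would request a volume carved from that striped local-NVMe pool. The storage class name, namespace, and size are placeholders; the real class name comes from your Azure Container Storage v2 installation, and the volume is ephemeral, so treat it as cache or staging space rather than durable storage.

    ```python
    # Minimal sketch: request a PersistentVolumeClaim backed by a local-NVMe
    # storage class exposed through the CSI layer described above.
    # Assumptions: kubectl context points at the AKS cluster, and a storage
    # class named "local-nvme" (placeholder name) exists on NVMe-equipped nodes.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    pvc = client.V1PersistentVolumeClaim(
        api_version="v1",
        kind="PersistentVolumeClaim",
        metadata=client.V1ObjectMeta(name="spark-shuffle-cache"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],        # local NVMe is node-local
            storage_class_name="local-nvme",       # placeholder: your ACStor v2 class
            resources=client.V1ResourceRequirements(
                requests={"storage": "200Gi"}      # carved from the striped NVMe pool
            ),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="analytics", body=pvc
    )
    print("PVC submitted; the data is ephemeral and disappears with the node.")
    ```

    Because the striped pool has no redundancy, pair volumes like this with pipelines that can re-stage or recompute their data if a node is recycled.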

    21 min
  2. Stop Paying the Multi-Cloud Network Tax

    14 hours ago

    Stop Paying the Multi-Cloud Network Tax

    Everyone says they love multi‑cloud—until the invoice arrives. The marketing slides promised agility and freedom. The billing portal delivered despair. You thought connecting Azure, AWS, and GCP would make your environment “resilient.” Instead, you’ve built a networking matryoshka doll—three layers of identical pipes, each pretending to be mission‑critical. The truth is, your so‑called freedom is just complexity with better branding. You’re paying three providers for the privilege of moving the same gigabyte through three toll roads. And each insists the others are the problem. Here’s what this video will do: expose where the hidden “multi‑cloud network tax” lives—in your latency, your architecture, and worst of all, your interconnect billing. The cure isn’t a shiny new service nobody’s tested. It’s understanding the physics—and the accounting—of data that crosses clouds. So let’s peel back the glossy marketing and watch what actually happens when Azure shakes hands with AWS and GCP. Section 1 – How Multi‑Cloud Became a Religion Multi‑cloud didn’t start as a scam. It began as a survival instinct. After years of being told “stick with one vendor,” companies woke up one morning terrified of lock‑in. The fear spread faster than a zero‑day exploit. Boards demanded “vendor neutrality.” Architects began drawing diagrams full of arrows between logos. Thus was born the doctrine of hybrid everything. Executives adore the philosophy. It sounds responsible—diversified, risk‑aware, future‑proof. You tell investors you’re “cloud‑agnostic,” like someone bragging about not being tied down in a relationship. But under that independence statement is a complicated prenup: every cloud charges cross‑border alimony. Each platform is its own sovereign nation. Azure loves private VNets and ExpressRoute; AWS insists on VPCs and Direct Connect; GCP calls theirs VPC too, just to confuse everyone, then changes the exchange rate on you. You could think of these networks as countries with different visa policies, currencies, and customs agents. Sure, they all use IP packets, but each stamps your passport differently and adds a “service fee.” The “three passports problem” hits early. You spin up an app in Azure that needs to query a dataset in AWS and a backup bucket in GCP. You picture harmony; your network engineer pictures a migraine. Every request must leave one jurisdiction, pay export tax in egress charges, stand in a customs line at the interconnect, and be re‑inspected upon arrival. Repeat nightly if it’s automated. Now, you might say, “But competition keeps costs down, right?” In theory. In practice, each provider optimizes its pricing to discourage leaving. Data ingress is free—who doesn’t like imports?—but data egress is highway robbery. Once your workload moves significant bytes out of any cloud, the other two hit you with identical tolls for “routing convenience.” Here’s the best part—every CIO approves this grand multi‑cloud plan with champagne optimism. A few months later, the accountant quietly screams into a spreadsheet. The operational team starts seeing duplicate monitoring platforms, three separate incident dashboards, and a DNS federation setup that looks like abstract art. And yet, executives still talk about “best of breed,” while the engineers just rename error logs to “expected behavior.” This is the religion of multi‑cloud. 
It demands faith—faith that more providers equal more stability, faith that your team can untangle three IAM hierarchies, and faith that the next audit won’t reveal triple billing for the same dataset. The creed goes: thou shalt not be dependent on one cloud, even if it means dependence on three others. Why do smart companies fall for it? Leverage. Negotiation chips. If one provider raises prices, you threaten to move workloads. It’s a power play, but it ignores physics—moving terabytes across continents is not a threat; it’s a budgetary self‑immolation. You can’t bluff with latency. Picture it: a data analytics pipeline spanning all three hyperscalers. Azure holds the ingestion logic, AWS handles machine learning, and GCP stores archives. It looks sophisticated enough to print on investor decks. But underneath that graphic sits a mesh of ExpressRoute, Direct Connect, and Cloud Interconnect circuits—each billing by distance, capacity, and cheerfully vague “port fees.” Every extra gateway, every second provider monitoring tool, every overlapping CIDR range adds another line to the invoice and another failure vector. Multi‑cloud evolved from a strategy into superstition: if one cloud fails, at least another will charge us more to compensate. Here’s what most people miss: redundancy is free inside a single cloud region across availability zones. The moment you cross clouds, redundancy becomes replication, and replication becomes debt—paid in dollars and milliseconds. So yes, multi‑cloud offers theoretical freedom. But operationally, it’s the freedom to pay three ISPs, three security teams, and three accountants. We’ve covered why companies do it. Next, we’ll trace an actual packet’s journey between these digital borders and see precisely where that freedom turns into the tariff they don’t include in the keynote slides. Section 2 – The Hidden Architecture of a Multi‑Cloud Handshake When Azure talks to AWS, it’s not a polite digital handshake between equals. It’s more like two neighboring countries agreeing to connect highways—but one drives on the left, the other charges per axle, and both send you a surprise invoice for “administrative coordination.” Here’s what actually happens. In Azure, your virtual network—the VNet—is bound to a single region. AWS uses a Virtual Private Cloud, or VPC, bound to its own region. GCP calls theirs a VPC too, as if a shared name could make them compatible. It cannot. Each one is a sovereign network space, guarded by its respective gateway devices and connected to its provider’s global backbone. To route data between them, you have to cross a neutral zone called a Point of Presence, or PoP. Picture an international airport where clouds trade packets instead of passengers. Microsoft’s ExpressRoute, Amazon’s Direct Connect, and Google’s Cloud Interconnect all terminate at these PoPs—carrier‑neutral facilities owned by colocation providers like Equinix or Megaport. These are the fiber hotels of the internet, racks of routers stacked like bunk beds for global data. Traffic leaves Azure’s pristine backbone, enters a dusty hallway of cross‑connect cables, and then climbs aboard AWS’s network on the other side. You pay each landlord separately: one for Microsoft’s port, one for Amazon’s port, and one for the privilege of existing between them. There’s no magic tunnel that silently merges networks. There’s only light—literal light—traveling through glass fibers, obeying physics while your budget evaporates. 
Each gigabyte takes the scenic route through bureaucracy and optics. Providers call it “private connectivity.” Accountants call it “billable.” Think of the journey like shipping containers across three customs offices. Your Azure app wants to send data to an AWS service. At departure, Azure charges for egress—the export tariff. The data is inspected at the PoP, where interconnect partners charge “handling fees.” Then AWS greets it with free import, but only after you’ve paid everyone else. Multiply this by nightly sync jobs, analytics pipelines, and cross‑cloud API calls, and you’ve built a miniature global trade economy powered by metadata and invoices. You do have options, allegedly. Option one: a site‑to‑site VPN. It’s cheap and quick—about as secure as taping two routers back‑to‑back and calling it enterprise connectivity. It tunnels through the public internet, wrapped in IPsec encryption, but you still rely on shared pathways where latency jitters like a caffeine addict. Speeds cap around a gigabit per second, assuming weather and whimsy cooperate. It’s good for backup or experimentation, terrible for production workloads that expect predictable throughput. Option two: private interconnects like ExpressRoute and Direct Connect. Those give you deterministic performance at comically nondeterministic pricing. You’re renting physical ports at the PoP, provisioning circuits from multiple telecom carriers, and managing Microsoft‑ or Amazon‑side gateway resources just to create what feels like a glorified Ethernet cable. FastPath, the Azure feature that lets traffic bypass a gateway to cut latency, is a fine optimization—like removing a tollbooth from an otherwise expensive freeway. But it doesn’t erase the rest of the toll road. Now layer in topology. A proper enterprise network uses a hub‑and‑spoke model. The hub contains your core resources, security appliances, and outbound routes. The spokes—individual VNets or VPCs—peer with the hub to gain access. Add multiple clouds, and each one now has its own hub. Connect these hubs together, and you stack delay upon delay, like nesting dolls again but made of routers. Every hop adds microseconds and management overhead. Engineers eventually build “super‑hubs” or “transit centers” to simplify routing, which sounds tidy until billing flows through it like water through a leaky pipe. You can route through SD‑WAN overlays to mask the complexity, but that’s cosmetic surgery, not anatomy. The packets still travel the same geographic distance, bound by fiber realities. Electricity moves near the speed of light; invoices move at the speed of “end of month.” Let’s not forget DNS. Every handshake assumes both clouds can resolve each other’s private names. Without consistent name resolution, TLS connections collapse in confusion. Engineers end up forwarding DN
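    To see how quickly that “network tax” compounds, here is a back-of-the-envelope estimator. Every rate in it is an illustrative placeholder rather than any provider's published price; substitute the egress, handling, and port numbers from your own invoices.

    ```python
    # Back-of-the-envelope multi-cloud transfer cost model.
    # All rates below are ILLUSTRATIVE placeholders, not published pricing.

    def monthly_cost(gb_per_day: float,
                     egress_per_gb: float,     # source-cloud egress tariff
                     handling_per_gb: float,   # interconnect/colocation handling
                     port_fee_month: float,    # fixed port/circuit rental
                     days: int = 30) -> float:
        """Cost of pushing gb_per_day out of one cloud into another for a month."""
        variable = gb_per_day * days * (egress_per_gb + handling_per_gb)
        return variable + port_fee_month

    # Example: a nightly 500 GB sync to AWS and a 200 GB archive copy to GCP.
    legs = {
        "Azure -> AWS (analytics sync)": monthly_cost(500, egress_per_gb=0.05,
                                                      handling_per_gb=0.01,
                                                      port_fee_month=300),
        "Azure -> GCP (archive copy)":   monthly_cost(200, egress_per_gb=0.05,
                                                      handling_per_gb=0.01,
                                                      port_fee_month=300),
    }

    for leg, cost in legs.items():
        print(f"{leg}: ~${cost:,.0f}/month")
    print(f"Total network tax: ~${sum(legs.values()):,.0f}/month")
    ```

    Run the same arithmetic before and after consolidating a cross-cloud leg, and the premium you pay for “freedom” becomes very visible.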

    23 min
  3. Master Internal Newsletters With Outlook

    1 day ago

    Master Internal Newsletters With Outlook

    Opening – Hook + Teaching Promise Most internal company updates suffer the same tragic fate—posted in Teams, immediately buried by “quick question” pings and emoji reactions. The result? Critical updates vanish into digital noise before anyone even scrolls. There’s a simpler path. Outlook, the tool you already use every day, can quietly become your broadcast system: branded, consistent, measurable. You don’t need new software. You have Exchange. You have distribution groups. You have automation built into Microsoft 365—you just haven’t wired it all together yet. In the next few minutes, I’ll show you exactly how to build a streamlined newsletter pipeline inside M365: define your target audiences using Dynamic Distribution Groups, send from a shared mailbox for consistent branding, and track engagement using built‑in analytics. Clean, reliable, scalable. No external platforms, no noise. Let’s start at the root problem—most internal communications fail because nobody clarifies who the updates are actually for. Section 1 – Build the Foundation: Define & Target Audiences Audience definition is the part everyone skips. The instinct is to shove announcements into the “All Staff” list and call it inclusive. It’s not. It’s lazy. When everyone receives everything, relevance dies, and attention follows. You don’t need a thousand readers; you need the right hundred. That’s where Exchange’s Dynamic Distribution Groups come in. Dynamic Groups are rule‑based audiences built from Azure Active Directory attributes—essentially, self‑updating mailing lists. Define one rule for “Department equals HR,” another for “Office equals London,” and a third for “License type equals E5.” Exchange then handles who belongs, updating automatically as people join, move, or leave. No manual list editing, no “Who added this intern to Executive Announcements?” drama. These attributes live inside Azure AD because, frankly, Microsoft likes order. Each user record includes department, title, region, and manager relationships. You’re simply telling Exchange to filter users using those properties. For example, set a dynamic group called “Sales‑West,” rule: Department equals Sales AND Office starts with ‘West’. People who move between regions switch groups automatically. That’s continuous hygiene without administrative suffering. For stable or curated audiences—like a leadership insider group or CSR volunteers—Dynamic rules are overkill. Use traditional Distribution Lists. They’re static by design: the membership doesn’t change unless an administrator adjusts it. It’s perfect for invitations, pilot teams, or any scenario where you actually want strict control. Think of Dynamic as the irrigation system and traditional Distribution Lists as watering cans. Both deliver; one just automates the tedium. Avoid overlap. Never nest dynamic and static groups without checking membership boundaries, or you’ll double‑send and trigger the “Why did I get this twice?” complaints. Use clear naming conventions: prefix dynamic groups with DG‑Auto and static ones with DL‑Manual. Keep visibility private unless the team explicitly needs to see these lists in the global address book. Remember: discovery equals misuse. The result is calm segmentation. HR newsletters land only with HR. Regional sales digests reach their territories without polluting everyone’s inbox. The right message finds the right people automatically. 
And once your audiences self‑maintain, the whole communication rhythm stabilizes—you can finally trust your send lists instead of praying. Now that you know exactly who will receive your newsletter, it’s time to define a single, consistent voice behind it. Because nothing undermines professionalism faster than half a dozen senders all claiming to represent “Communications.” Once you establish a proper sender identity, everything clicks—from trust to tracking. Section 2 – Establish the Sender Identity: Shared Mailbox & Permissions Let’s deal with the most embarrassing problem first: sending from individual mailboxes. Every company has that one person who fires the “Team Update” from their personal Outlook account—subject line in Comic Sans energy—even when they mean well. The problem isn’t just aesthetic; it’s operational. When the sender goes on leave, changes roles, or, heaven forbid, leaves the company, the communication channel collapses. Replies go to a void. Continuity dies. A proper internal newsletter needs an institutional identity—something people recognize the moment they see it in their inbox. That’s where the shared mailbox comes in. In Exchange Admin Center, create one for your program—“news@company.com,” “updates@orgname,” or if you prefer flair, “inside@company.” The name doesn’t matter; consistency does. This mailbox is the company’s broadcast persona, not a person. Once created, configure “Send As” and “Send on Behalf of” permissions. The difference matters. “Send As” means the message truly originates from the shared address—completely impersonated. “Send on Behalf” attaches a trace of the sender like a return address: “Alex Wilson on behalf of News@Company.” Use the latter when you want transparency of authorship, the former when you want unified branding. For regular bulletins, “Send As” usually keeps things clean. Grant these permissions to your communications team, HR team, or anyone responsible for maintaining the cadence. Now, folders. Because every publication, even an internal one, accumulates the detritus of feedback and drafts. In the shared mailbox, create a “Drafts” folder for upcoming editions, a “Published” folder for archives, and an “Incoming Replies” folder with clean rules that categorize responses. Use Outlook’s built-in Rules and Categories to triage automatically—mark OOF replies as ignored, tag genuine comments for the editor, and file analytics notifications separately. This is your miniature publishing hub. Enable the shared calendar, too. That’s where the editorial team schedules editions. Mark send dates, review days, and submission cutoffs. It’s not glamorous, but when your next issue’s reminder pops up at 10 a.m. Monday, you’ll suddenly look terrifyingly organized. Let’s not forget compliance. Apply retention and archiving policies so nothing accidentally disappears. Internal newsletters qualify as formal communication under many governance policies. Configure the mailbox to retain sent items indefinitely or at least per your compliance team’s retention window. That also gives you searchable institutional memory—instantly retrievable context when someone asks, “Didn’t we announce this already in April?” Yes. And here’s the proof. Finally, avoid rookie traps. Don’t set automatic replies from the shared mailbox; you’ll create infinite loops of “Thank you for your email” between systems. Restrict forwarding to external addresses to prevent leaks. 
And disable public visibility unless the whole organization must discover it—let trust come from the content, not accidental access. By now, you have a consistent voice, a neat publishing archive, and shared team control. Congratulations—you’ve just removed the two biggest failure points of internal communication: mixed branding and personal dependency. Now we can address how that voice looks when it speaks. Visual consistency is not vanity; it’s reinforcement of authority. Section 3 – Design & Compose: Create the Newsletter Template The moment someone opens your message, design dictates whether they keep reading. Outlook, bless it, is not famous for beauty—but with discipline, you can craft clarity. The rule: simple, branded, repeatable. You’re not designing a marketing brochure; you’re designing recognition. Start with a reusable Outlook template. Open a new message from your shared mailbox, switch to the “View” tab, and load the HTML theme or stationery that defines your brand palette—company colors, typography equivalents, and a clear header image. Save it as an Outlook Template file (*.oft). This becomes your default canvas for every edition. Inside that layout, divide content into predictable blocks. At the top, a short banner or headline zone—no taller than 120 pixels—so it still looks right in preview panes. Below that, your opening paragraph: a concise summary of what’s inside. Never rely on the subject line alone; people scan by body preview in Outlook’s message list. If the first two lines look like something corporate sent under duress, they’ll skip it. Follow that with modular blocks: one for HR, one for Sales, one for IT. These sections aren’t random—they mirror the organizational silos that your Dynamic Groups target. Use subtle colored borders or headings for consistency. Include one clear call‑to‑action per section—“Access the HR Portal,” “View Q3 Targets,” “Review Maintenance Window.” Avoid turning the email into a link farm; prioritize two or three actions max. At the bottom, include a consistent footer—company logo, confidentiality line, and a “Sent via Outlook Newsletter” tag with the shared mailbox address. You’re not hiding that this is internal automation; you’re validating it. Regulatory disclaimers or internal-only markings can live here too. To maintain branding integrity, store the master template on SharePoint or Teams within your communications space. Version it. Rename each revision clearly—“NewsletterTemplate_v3_July2024”—and restrict edit rights to your design custodian. When someone inevitably decides to “improve” the font by changing it to whatever’s trendy that week, you’ll know exactly who disrupted the consistency. For actual composing, Outlook’s modern editor supports HTML snippets. You can drop in tables for structured content, inse
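    If you eventually want to script the send step itself rather than clicking through Outlook, the shared-mailbox pattern above maps onto Microsoft Graph's sendMail endpoint. The sketch below assumes an app registration that already holds Mail.Send permission for the shared mailbox and a token acquired elsewhere (for example via MSAL); the mailbox address, recipient group, subject, and body are placeholders.

    ```python
    # Minimal sketch: send an HTML newsletter from a shared mailbox via Microsoft Graph.
    # Assumes ACCESS_TOKEN was obtained separately (e.g. MSAL client credentials)
    # and the app has Mail.Send permission scoped to the shared mailbox.
    import requests

    ACCESS_TOKEN = "<token-from-msal>"      # placeholder
    SHARED_MAILBOX = "news@company.com"     # the broadcast persona, not a person

    message = {
        "message": {
            "subject": "Inside the Company – July Edition",
            "body": {"contentType": "HTML",
                     "content": "<h1>July Update</h1><p>Highlights from HR, Sales, and IT.</p>"},
            "toRecipients": [
                {"emailAddress": {"address": "DG-Auto-Sales-West@company.com"}}  # placeholder dynamic group
            ],
        },
        "saveToSentItems": True,   # keeps the edition in the shared mailbox archive
    }

    resp = requests.post(
        f"https://graph.microsoft.com/v1.0/users/{SHARED_MAILBOX}/sendMail",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
                 "Content-Type": "application/json"},
        json=message,
        timeout=30,
    )
    resp.raise_for_status()  # Graph returns 202 Accepted on success
    ```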

    22 min
  4. Master Dataverse Security: Stop External Leaks Now

    1 day ago

    Master Dataverse Security: Stop External Leaks Now

    Opening – The Corporate Leak You Didn’t See Coming Let’s start with a scene. A vendor logs into your company’s shiny new Power App—supposed to manage purchase orders, nothing more. But somehow, that same guest account wanders a little too far and stumbles into a Dataverse table containing executive performance data. Salaries, evaluations, maybe a few “candid” notes about the CFO’s management style. Congratulations—you’ve just leaked internal data, and it didn’t even require hacking. The problem? Everyone keeps treating Dataverse like SharePoint. They assume “permissions” equal “buckets of access,” so they hand out roles like Halloween candy. What they forget is that Dataverse is not a document library; it’s a relational fortress built on scoped privileges and defined hierarchies. Ignore that, and you’re effectively handing visitor passes to your treasury. Dataverse security isn’t complicated—it’s just precise. And precision scares people. Let’s tear down the myths one layer at a time. Section 1 – The Architecture of Trust: How Dataverse Actually Manages Security Think of Dataverse as a meticulously engineered castle. It’s not one big door with one big key—it’s a maze of gates, guards, courtyards, and watchtowers. Every open path is intentional. Every privilege—Create, Read, Write, Delete, Append, Append To, Assign, and Share—is like a specific key that opens a specific gate. Yet most administrators toss all the keys to everyone, then act surprised when the peasants reach the royal library. Let’s start at the top: Users, Teams, Security Roles, and Business Units. Those four layers define who you are, what you can do, where you can do it, and which lineage of the organization you belong to. This is not merely classification—it’s containment. A User is simple: an identity within your environment, usually tied to Entra ID. A Team is a collection of users bound to a security role. Think of a Team like a regiment in our castle—soldiers who share the same clearance level. The Security Role defines privileges at a granular level, like “Read Contacts” or “Write to a specific table.” The Business Unit? That’s the physical wall of the castle—the zone of governance that limits how far you can roam. Now, privileges are where most people’s understanding falls off a cliff. Each privilege has a scope—User, Business Unit, Parent:Child, or Organization. Think of “scope” as the radius of your power.

    * User scope means you control only what you personally own.
    * Business Unit extends that control to everything inside your local territory.
    * Parent:Child cascades downward—you can act across your domain and all its subdomains.
    * Organization? That’s the nuclear option: full access to every record, in every corner of the environment.

    When roles get assigned with “Organization” scope across mixed internal and external users, something terrifying happens: Dataverse stops caring who owns what. Guests suddenly can see everything, often without anyone realizing it. It’s like issuing master keys to visiting musicians because they claim they’ll only use the ballroom. Misalignment usually comes from lazy configuration. Most admins reason, “If everyone has organization-level read, data sharing will be easier.” Sure, easier—to everyone. The truth? Efficiency dies the moment external users appear. A single organizational-scope privilege defeats your careful environment separation, because the Dataverse hierarchy trusts your role definitions absolutely. It doesn’t argue; it executes.
Here’s how the hierarchy actually controls visibility. Business Units form a tree. At the top, usually “Root,” sit your global admins. Under that, branches for departments or operating regions, each with child units. Users belong to exactly one Business Unit at a time—like residents locked inside their section of the castle. When you grant scope at the “Business Unit” level, a user sees their realm but not others. Grant “Parent:Child,” and they see their kingdom plus every village below it. Grant “Organization,” and—surprise—they now have a spyglass overlooking all of Dataverse. Here’s where the conceptual mistake occurs. People assume roles layer together like SharePoint permissions—give a narrow one, add a broad one, and Dataverse will average them out. Wrong. Roles in Dataverse combine privileges additively. The broadest privilege overrides the restrictive ones. If a guest owns two roles—one scoped to their Business Unit and another with Organization-level read—they inherit the broader power. Or in castle terms: one stolen master key beats fifty locked doors. Now, add Teams. A guest may join a project team that owns specific records. If that team’s role accidentally inherits higher privileges, every guest in that team sees far more than they should. Inheritance is powerful, but also treacherous. That’s why granular layering matters—assign user-level roles for regular access and use teams only for specific, temporary visibility. Think of the scope system as concentric rings radiating outward. The inner ring—User scope—is safest, private ownership. The next ring—Business Unit—expands collaboration inside departments. The third ring—Parent:Child—covers federated units like regional offices under corporate control. And beyond that outer ring—Organization—lies the open field, where anything left unguarded can be seen by anyone with the wrong configuration. The castle walls don’t matter if you’ve just handed your enemy the surveyor’s map. Another classic blunder: cloning system administrator roles for testing. That creates duplicate “superuser” patterns everywhere. Suddenly the intern who’s “testing an app” holds Organization-level privilege over customer data. Half the security incidents in Dataverse environments result not from hacking, but from convenience. What you need to remember—and this is the dry but crucial part—is that Dataverse’s architecture of trust is mathematical. Each privilege assignment is a Boolean value: you either have access or you do not. There’s no “probably safe” middle ground. You can’t soft-fence external users; you have to architect their isolation through Business Units and minimize their privileges to what they demonstrably need. To summarize this foundation without ruining the mystery of the next section: Users and Teams define identities, Security Roles define rights, and Business Units define boundaries. The mistake that creates leaks isn’t ignorance—it’s false confidence. People assume Dataverse forgives imprecision. It doesn’t. It obediently enforces whatever combination of roles you define. Now that you understand that structure, we can safely move from blueprints to battlefield—seeing what actually happens when those configurations collide. Or, as I like to call it, “breaking the castle to understand why it leaks.” Section 2 – The Leak in Action: Exposing the Vendor Portal Fiasco Let’s reenact the disaster. You build a vendor portal. The goal is simple—vendors should update purchase orders and see their own invoices. 
    You create a “Vendor Guest” role and, to save time, clone it from the standard “Salesperson” role. After all, both deal with contacts and accounts, right? Except, small difference: Salesperson roles often have Parent:Child or even Organization-level access to the Contact table. The portal doesn’t know your intent; it just follows those permissions obediently. The vendor logs in. Behind the scenes, Dataverse checks which records this guest can read. Because their security role says “Read Contacts: Parent:Child,” Dataverse happily serves up all contacts under the parent business unit—the one your internal sales team lives under. In short: the vendor just inherited everyone’s address book. Now, picture what that looks like in the front-end Power App you proudly shipped. The app pulls data through Dataverse views. Those views aren’t filtering by ownership because you trusted role boundaries. So that helpful “My Clients” gallery now lists clients from every region, plus internal test data, partner accounts, executive contacts, maybe even HR records if they also share the same root table. You didn’t code the leak; you configured it. Here’s how it snowballs. Business Units in Dataverse often sit in hierarchy: “Corporate” at the top, “Departments” beneath, “Projects” below that. When you assign a guest to the “Vendors” business unit but give their role privileges scoped at Parent:Child, they can see every record the top business unit owns—and all its child units. The security model assumes you’re granting that intentionally. Dataverse doesn’t second-guess your trust. The ugly part: these boundaries cascade across environments. Export that role as a managed solution and import it into production, and you just replicated the flaw. Guests in your staging environment now have the same privileges as guests in production. And because many admins skip per-environment security audits, you’ll only discover this when a vendor politely asks why they can view “Corporate Credit Risk” data alongside invoice approvals. Now, let’s illustrate correct versus incorrect scopes without the whiteboard. In the incorrect setup, your guest has “Read” at the Parent:Child or Organization level. Dataverse returns every record the parent unit knows about. In the correct setup, “Read” is scoped to User, plus selective Share privileges for records you explicitly assign. The result? The guest’s Power App now displays only their owned vendor record—or any record an internal user specifically shared. This difference feels microscopic in configuration but enormous in consequence. Think of it like DNS misconfiguration: swap two values, and suddenly traffic answers from the wrong zone. Sa
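    To make the scope difference tangible, here is a deliberately simplified Python model of the read check described above. It is a mental model, not the real Dataverse authorization engine: depth is ordered User < Business Unit < Parent:Child < Organization, the broadest role a user holds wins, and explicit shares or ownership always grant access.

    ```python
    # Simplified mental model of Dataverse read visibility (NOT the real engine).
    # Depth order: User < BusinessUnit < ParentChild < Organization; when a user
    # holds several roles, the broadest read depth wins (privileges are additive).
    from dataclasses import dataclass, field

    DEPTHS = ["User", "BusinessUnit", "ParentChild", "Organization"]

    @dataclass
    class Record:
        owner: str
        business_unit: str
        shared_with: set = field(default_factory=set)

    def effective_depth(role_depths: list[str]) -> str:
        return max(role_depths, key=DEPTHS.index)

    def can_read(user: str, user_bu: str, child_bus: set[str],
                 role_depths: list[str], rec: Record) -> bool:
        if user in rec.shared_with or rec.owner == user:
            return True                                   # ownership or explicit share
        depth = effective_depth(role_depths)
        if depth == "Organization":
            return True                                   # sees everything
        if depth == "ParentChild":
            return rec.business_unit in ({user_bu} | child_bus)
        if depth == "BusinessUnit":
            return rec.business_unit == user_bu
        return False                                      # User depth: owned/shared only

    crm_contact = Record(owner="internal_rep", business_unit="Corporate")

    # Incorrect: the cloned role carries an Organization-level read privilege.
    print(can_read("vendor_guest", "Vendors", set(), ["User", "Organization"], crm_contact))  # True -> leak

    # Correct: User-scoped read plus an explicit share of the one record the vendor needs.
    vendor_po = Record(owner="internal_rep", business_unit="Vendors", shared_with={"vendor_guest"})
    print(can_read("vendor_guest", "Vendors", set(), ["User"], crm_contact))  # False
    print(can_read("vendor_guest", "Vendors", set(), ["User"], vendor_po))    # True
    ```

    Swap one role in that list and the same guest goes from seeing a single shared purchase order to seeing the entire contact table, which is the vendor portal fiasco in miniature.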

    19 min
  5. Stop Using Power BI Wrong: The $10,000 Data Model Fix

    2 days ago

    Stop Using Power BI Wrong: The $10,000 Data Model Fix

    Opening – The $10,000 Problem Your Power BI dashboard is lying to you. Not about the numbers—it’s lying about the cost. Every time someone hits “refresh,” every time a slicer moves, you’re quietly paying a performance tax. And before you smirk, yes, you are paying it, whether through wasted compute time, overage on your Power BI Premium capacity, or the hours your team spends waiting for that little yellow spinner to go away. Inefficient data models are invisible budget vampires. Every bloated column and careless join siphons money from your department. And when I say “money,” I mean real money—five figures a year for some companies. That’s the $10,000 problem. The fix isn’t a plug‑in, and it’s not hidden in the latest update. It’s architectural—a redesign of how your model thinks. By the end, you’ll know how to build a Power BI model that runs faster, costs less, and survives real enterprise workloads without crying for mercy. Section 1 – The Inefficiency Tax Think of your data model like a kitchen. A good chef arranges knives, pans, and spices so they can reach everything in two steps. A bad chef dumps everything into one drawer and hopes for the best. Most Power BI users? They’re the second chef—except their “drawer” is an imported Excel file from 2017, stuffed with fifty columns nobody remembers adding. This clutter is what we call technical debt. It’s all the shortcuts, duplicates, and half‑baked relationships that make your model work “for now” but break everything six months later. Every query in that messy model wanders the kitchen hunting for ingredients. Every refresh is another hour of the engine rummaging through the junk drawer. And yes, I know why you did it. You clicked “Import” on the entire SQL table because it was easier than thinking about what you actually needed. Or maybe you built calculated columns for everything because “that’s how Excel works.” Congratulations—you’ve just graduated from spreadsheet hoarder to BI hoarder. Those lazy choices have consequences. Power BI stores each unnecessary column, duplicates the data in the model, and expands memory use exponentially. Every time you add a fancy visual calling fifteen columns, your refresh slows. Slow refreshes become delayed dashboards; delayed dashboards mean slower decisions. Multiply that delay across two hundred analysts, and you’ll understand why your cloud bill resembles a ransom note. The irony? It’s not Power BI’s fault. It’s yours. The engine is fast. The DAX engine is clever. But your model? It’s a tangle of spaghetti code disguised as business insight. Ready to fix it? Good. Let’s rebuild your model like an adult. Section 2 – The Fix: Dimensional Modeling Dimensional modeling, also known as the Star Schema, is what separates a Power BI professional from a Power BI hobbyist. It’s the moment when your chaotic jumble of Excel exports grows up and starts paying rent. Here’s how it works. At the center of your star is a Fact Table—the raw events or transactions. Think of it as your receipts. Each record represents something that happened: a sale, a shipment, a login, whatever your business actually measures. Around that core, you build Dimension Tables—the dictionary that describes those receipts. Product, Customer, Date, Region—each gets its own neat dimension. This is the difference between hoarding and organization. Instead of stacking every possible field inside one table, you separate descriptions from events. The fact table stays lean: tons of rows, few columns. 
The dimensions stay wide: fewer rows, but rich descriptions. It’s relational modeling the way nature intended. Now, some of you get creative and build “many‑to‑many” relationships because you saw it once in a forum. Stop. That’s not creativity—that’s self‑harm. In a proper star, all relationships are one‑to‑many, pointing outward from dimension to fact. The dimension acts like a lookup—one Product can appear in many Sales, but each Sale points to exactly one Product. Break that rule, and you unleash chaos on your DAX calculations. Let’s talk cardinality. Power BI hates ambiguity. When relationships aren’t clear, it wastes processing power guessing. Imagine trying to index a dictionary where every word appears on five random pages—it’s miserable. One‑to‑many relationships give the engine a direct path. It knows exactly which filter context applies to which fact—no debates, no circular dependencies, no wasted CPU cycles pretending to be Sherlock Holmes. And while we’re cleaning up, stop depending on “natural keys.” Your “ProductName” might look unique until someone adds a space or mis‑types a letter. Instead, create surrogate keys—numeric or GUID IDs that uniquely identify each row. They’re lighter and safer, like nametags for your data. Maybe you’re wondering, “Why bother with all this structure?” Because structured models scale. The DAX engine doesn’t have to guess your intent; it reads the star and obeys simple principles: one direction, one filter, one purpose. Measures finally return results you can trust. Suddenly, your dashboards refresh in five minutes instead of an hour, and you can remove that awkward ‘Please wait while loading’ pop‑up your team pretends not to see. Here’s the weird part—once you move to a star schema, everything else simplifies. Calculated columns? Mostly irrelevant. Relationships? Predictable. Even your DAX gets cleaner because context is clearly defined. You’ll spend less time debugging relationships and more time actually analyzing numbers. Think of your new model as a modular house: each dimension a neat, labeled room; the fact table, the main hallway connecting them all. Before, you had a hoarder’s flat where you tripped over data every time you moved. Now, everything has its place, and the performance difference feels like you just upgraded from a landline modem to fiber optics. When you run this properly, Power BI’s Vertipaq engine compresses your model efficiently because the columnar storage finally makes sense. Duplicate text fields vanish, memory usage drops, and visuals render faster than your executives can say, “Can you export that to Excel?” But don’t celebrate yet. A clean model is only half the equation. The other half lives in the logic—the DAX layer. It’s where good intentions often become query‑level disasters. So yes, even with a star schema, you can still sabotage performance with what I lovingly call “DAX gymnastics.” In other words, it’s time to learn some discipline—because the next section is where we separate the data artists from the financial liabilities. Section 3 – DAX Discipline & Relationship Hygiene Yes, your DAX is clever. No, it’s not efficient. Clever DAX is like an overengineered Rube Goldberg machine—you’re impressed until you realize all it does is count rows. You see, DAX isn’t supposed to be “brilliant”; it’s supposed to be fast, predictable, and boring. That’s the genius you should aspire to—boring genius. Let’s start with the foundation: row context versus filter context. They’re not twins; they’re different species. 
Row context is each individual record being evaluated—think of it like taking attendance in a classroom. Filter context is the entire class after you’ve told everyone wearing red shirts to leave. Most people mix them up, then wonder why their SUMX runs like a snail crossing molasses. The rule? When you iterate—like SUMX or FILTER—you’re creating row context. When you use CALCULATE, you’re changing the filter context. Know which one you’re touching, or Power BI will happily drain your CPU while pretending to understand you. The greatest performance crime in DAX is calculated columns. They feel familiar because Excel had them—one formula stretched down an entire table. But in Power BI, that column is persisted; it bloats your dataset permanently. Every refresh recalculates it row by row. If your dataset has ten million rows, congratulations, you’ve just added ten million unnecessary operations to every refresh. That’s the computing equivalent of frying eggs one at a time on separate pans. Instead, push that logic back where it belongs—into Power Query. Do your data shaping there, where transformations happen once at load time, not repeatedly during report render. Let M language do the heavy lifting; it’s designed for preprocessing. The DAX engine should focus on computation during analysis, not household chores during refresh. Then there’s the obsession with writing sprawling, nested measures that reference one another eight layers deep. That’s not “modular,” that’s “recursive suffering.” Every dependency means another context transition the engine must trace. Instead, create core measures—like Total Sales or Total Cost—and build higher‑order ones logically on top. CALCULATE is your friend; it’s the clean switchboard operator of DAX. When used well, it rewires filters efficiently without dragging the entire model into chaos. Iterator functions—SUMX, AVERAGEX—are fine when used sparingly, but most users weaponize them unnecessarily. They iterate row by row when a simple SUM could do the job in one columnar sweep. Vertipaq, the in‑memory engine behind Power BI, is built for columnar operations. You slow it down every time you force it to behave like Excel’s row processor. Remember: DAX doesn’t care about your creative flair; it respects efficiency and clarity. Now about relationships—those invisible lines you treat like decoration. Single‑direction filters are the rule; bidirectional is an emergency switch, not standard practice. A bidirectional relationship is like handing out master keys to interns. Sure, it’s convenient until someone deletes the customers table while filtering products. It invites ambiguity, for
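    The modeling advice in this episode is tool-agnostic, so here is what the same star-schema discipline looks like in plain pandas: split a flat export into dimension tables with surrogate keys and keep the fact table lean before it ever reaches Power BI. The column names are invented for illustration.

    ```python
    # Turning a flat export into a star schema: slim fact table + descriptive dimensions.
    # Column names are illustrative; the pattern is what matters.
    import pandas as pd

    flat = pd.DataFrame({
        "OrderDate":    ["2024-07-01", "2024-07-01", "2024-07-02"],
        "ProductName":  ["Widget", "Gadget", "Widget"],
        "Category":     ["Hardware", "Hardware", "Hardware"],
        "CustomerName": ["Contoso", "Fabrikam", "Contoso"],
        "Region":       ["West", "East", "West"],
        "Amount":       [120.0, 340.0, 95.0],
    })

    def build_dimension(df: pd.DataFrame, cols: list[str], key: str) -> pd.DataFrame:
        """Deduplicate descriptive columns and add a numeric surrogate key."""
        dim = df[cols].drop_duplicates().reset_index(drop=True)
        dim[key] = dim.index + 1
        return dim

    dim_product  = build_dimension(flat, ["ProductName", "Category"], "ProductKey")
    dim_customer = build_dimension(flat, ["CustomerName", "Region"], "CustomerKey")

    # Fact table keeps only keys and measures: tons of rows, few columns.
    fact_sales = (flat
                  .merge(dim_product,  on=["ProductName", "Category"])
                  .merge(dim_customer, on=["CustomerName", "Region"])
                  [["OrderDate", "ProductKey", "CustomerKey", "Amount"]])

    print(fact_sales)
    ```

    Each resulting relationship is one-to-many from dimension to fact, which is the shape Vertipaq compresses and filters most efficiently.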

    14 min
  6. Stop Writing GRC Reports: Use This AI Agent Instead

    2 days ago

    Stop Writing GRC Reports: Use This AI Agent Instead

    Opening — The Pain of Manual GRC Let’s talk about Governance, Risk, and Compliance reports—GRC, the three letters responsible for more caffeine consumption than every SOC audit combined. Somewhere right now, there’s a poor analyst still copying audit logs into Excel, cell by cell, like it’s 2003 and macros are witchcraft. They’ll start with good intentions—a tidy workbook, a few filters—and end up with forty tabs of pivot tables that contradict each other. Compliance, supposedly a safeguard, becomes performance art: hours of data wrangling to reassure auditors that everything is “under control.” Spoiler: it rarely is. Manual GRC reporting is what happens when organizations mistake documentation for insight. You pull data from Microsoft Purview, export it, stretch it across spreadsheets, and call it governance. The next week, new activities happen, the data shifts, and suddenly, your pristine charts are lies told in color gradients. Audit trails that should enforce accountability end up enforcing burnout. What’s worse, most companies treat Purview as a vault—something to be broken into only before an audit. Its audit logs quietly accumulate terabytes of data on who did what, where, and when. Useful? Absolutely. Readable? Barely. Each entry is a JSON blob so dense it could bend light. And yes, you can parse them manually—if weekends are optional and sanity is negotiable. Now, contrast that absurdity with the idea of an AI Agent. Not a “magic” Copilot that just guesses the answers, but an automated, rules-driven agent constructed from Microsoft’s own tools: Copilot Studio for natural language intelligence, Power Automate for task orchestration, and Purview as the authoritative source of audit truth. In other words, software that does what compliance teams have always wanted—fetch, analyze, and explain—with zero sighing and no risk of spilling coffee on the master spreadsheet. Think of it as outsourcing your GRC reporting to an intern who never complains, never sleeps, and reads JSON like English. By the end of this explanation, you’ll know exactly how to build it—from connecting your Purview logs to automating report scheduling—all inside Microsoft’s ecosystem. And yes, we’ll cover the logic step that turns this from a simple automation into a fully autonomous auditor. For now, focus on this: compliance shouldn’t depend on caffeine intake. Machines don’t get tired, and they certainly don’t mislabel columns. There’s one logic layer, one subtle design choice, that makes this agent reliable enough to send reports without supervision. We’ll get there, but first, let’s understand what the agent actually is. What makes this blend of Copilot Studio and Power Automate something more than a flow with a fancy name? Section 1: What the GRC Agent Actually Is Let’s strip away the glamour of “AI” and define what this thing truly is: a structured automation built on Microsoft’s stack, masquerading as intelligence. The GRC Agent is a three-headed creature—each head responsible for one part of the cognitive process. Purview provides the raw memory: audit logs, classification data, and compliance events. Power Automate acts as the nervous system: it collects signals, filters noise, and ensures the process runs on schedule. Copilot Studio, finally, is the mouth and translator—it takes the technical gibberish of logs and outputs human-readable summaries: “User escalated privileges five times in 24 hours, exceeding policy threshold.” That’s English, not JSON. 
Here’s the truth: 90% of compliance tasks aren’t judgment calls—they’re pattern recognition. Yet, analysts still waste hours scanning columns of “ActivityType” and “ResultStatus” when automation could categorize and summarize those patterns automatically. That’s why this approach works—because the system isn’t trying to think like a person; it’s built to organize better than one. Let’s break down those components. Microsoft Purview isn’t just a file labeling tool; it’s your compliance black box. Every user action across Microsoft 365—sharing a document, creating a policy, modifying a retention label—gets logged. But unless you’re fluent in parsing nested JSON structure, you’ll never surface much insight. That’s the source problem: data abundance, zero readability. Next, Power Automate. It’s not glamorous, but it’s disciplined. It triggers on time, never forgets, and treats every step like gospel. You define a schedule—say, daily at 8 a.m.—and it invokes connectors to pull the latest Purview activity. When misconfigured, humans panic; when misconfigured here, the flow quietly fails but logs the failure in perfect detail. Compliance loves logs. Power Automate provides them with religious regularity. And finally, Copilot Studio, which turns structured data into a narrative. You feed it a structured summary—maybe a JSON table counting risky actions per user—and it outputs natural language “risk summaries.” This is where the illusion of intelligence appears. It’s not guessing; it’s following rules embedded in the prompt you design. For example, you instruct it: “Summarize notable risk activities, categorize by severity, and include one recommendation per category.” The output feels like an analyst’s memo, but it’s algorithmic honesty dressed in grammar. Now, let’s address the unspoken irony. Companies buy dashboards promising visibility—glossy reports, color-coded indicators—but dashboards don’t explain. They display. The GRC Agent, however, writes. It translates patterns into sentences, eliminating the interpretive gap that’s caused countless “near misses” in compliance reviews. When your executive asks for “last month’s risk patterns,” you don’t send them a Power BI link you barely trust—you send them a clean narrative generated by a workflow that ran at 8:05 a.m. while you were still getting coffee. Why haven’t more teams done this already? Because most underestimate how readable automation can be. They see AI as unpredictable, when in fact, this stack is deterministic—you define everything. The logic, the frequency, the scope, even the wording tone. Autonomy isn’t random; it’s disciplined automation with language skills. Before this agent can “think,” though, it must see. That means establishing a data pipeline that gives it access to the right slices of Purview audit data—no more, no less. Without that visibility, you’ll automate blindness. So next, we’ll connect Power Automate to Purview, define which events matter, and teach our agent where to look. Only then can we teach it what to think. Section 2: Building the Purview Data Pipeline Before you can teach your GRC agent to think, you have to give it eyes—connected directly to the source of truth: Microsoft Purview’s audit logs. These logs track who touched what, when, and how. Unfortunately, they’re stored in a delightful structural nightmare called JSON. Think of JSON as the engineer’s equivalent of legal jargon: technically precise, practically unreadable. 
The beauty of Power Automate is that it reads this nonsense fluently, provided you connect it correctly. Step one is Extract. You start with either Purview’s built‑in connector or, if you like pain, an HTTP action where you call the Purview Audit Log API directly. Both routes achieve the same thing: a data stream representing everything that’s happened inside your tenant—file shares, permission changes, access violations, administrator logins, and more. The more disciplined approach is to restrict scope early. Yes, you could pull the entire audit feed, but that’s like backing up the whole internet because you lost a PDF. Define what events actually affect compliance. Otherwise, your flow becomes an unintentional denial‑of‑service on your own patience. Now, access control. Power Automate acts only within the permissions it’s granted. If your flow’s service account can’t read Purview’s Audit Log, your agent will stare into the void and dutifully report “no issues found.” That’s not reassurance; that’s blindness disguised as success. Make sure the service account has the Audit Logs Reader role within Purview and that it can authenticate without MFA interruptions. AI is obedient, but it’s not creative—it won’t click an authenticator prompt at 2 a.m. Assign credentials carefully and store them in Azure Key Vault or connection references so you remain compliant while keeping automation alive. Once data extraction is stable, you move to Filter. No one needs every “FileAccessed” event for the cafeteria’s lunch menu folder. Instead, filter for real risk identifiers: UserLoggedInFromNewLocation, RoleAssignmentChanged, ExternalSharingInvoked, LabelPolicyModified. These tell stories auditors actually care about. You can filter at the query stage (using the API’s parameters) or downstream inside Power Automate with conditional logic—whichever keeps the payload manageable. Remember, you’re not hoarding; you’re curating. Then comes the part that separates professionals from those who think copy‑paste is automation: Feed. You’ll convert those JSON blobs into structured columns—something your later Copilot module can interpret. A simple method is using the “Parse JSON” action with a defined schema pulled from a sample Purview event. If the term “nested arrays” causes chest discomfort, welcome to compliance coding. Each property—UserId, Operation, Workload, ResultStatus, ClientIP—becomes its own variable. You’re essentially teaching your future AI agent vocabulary words before conversation begins. At this stage, you’ll discover the existential humor of Microsoft’s data formats. Some audit fields present as arrays even when they hold single values. Others hide outcomes under three layers of nesting, like Russian dolls of ambiguity.
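As a rough illustration of those Filter and Feed steps, here is the same logic in plain Python rather than Power Automate actions. The sample events, the unwrap helper, and the short field list are assumptions for the sketch; real Purview audit records carry many more properties.

```python
# A conceptual sketch of the Filter and Feed steps described above, written in
# plain Python instead of Power Automate actions. The sample events and the
# unwrap helper are illustrative; real Purview audit records hold more fields.
RISKY_OPERATIONS = {
    "UserLoggedInFromNewLocation",
    "RoleAssignmentChanged",
    "ExternalSharingInvoked",
    "LabelPolicyModified",
}

def unwrap(value):
    """Some audit fields arrive as one-element arrays; flatten those."""
    if isinstance(value, list) and len(value) == 1:
        return value[0]
    return value

def to_row(event: dict) -> dict:
    """Project a raw audit event onto the columns the agent will reason over."""
    return {
        "UserId": unwrap(event.get("UserId")),
        "Operation": unwrap(event.get("Operation")),
        "Workload": unwrap(event.get("Workload")),
        "ResultStatus": unwrap(event.get("ResultStatus")),
        "ClientIP": unwrap(event.get("ClientIP")),
    }

def filter_and_feed(raw_events: list[dict]) -> list[dict]:
    """Keep only risk-relevant operations, then flatten them into rows."""
    return [to_row(e) for e in raw_events if unwrap(e.get("Operation")) in RISKY_OPERATIONS]

# Hypothetical payload shaped like a parsed audit export:
sample = [
    {"UserId": ["adele@contoso.com"], "Operation": "RoleAssignmentChanged",
     "Workload": "AzureActiveDirectory", "ResultStatus": "Success", "ClientIP": "203.0.113.7"},
    {"UserId": "alex@contoso.com", "Operation": "FileAccessed",
     "Workload": "SharePoint", "ResultStatus": "Success", "ClientIP": "198.51.100.2"},
]
print(filter_and_feed(sample))  # only the RoleAssignmentChanged event survives
```

Filtering before flattening keeps the payload small, which is the same reason the flow should restrict scope at the API query stage whenever it can.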

    22 min
  7. Advanced Copilot Agent Governance with Microsoft Purview

    3 days ago

    Advanced Copilot Agent Governance with Microsoft Purview

    Opening – Hook + Teaching Promise You’re leaking data through Copilot Studio right now, and you don’t even know it. Every time one of your bright, shiny new Copilot Agents runs, it inherits your permissions—every SharePoint library, every Outlook mailbox, every Dataverse table. It rummages through corporate data like an overeager intern who found the master key card. And unlike that intern, it doesn’t get tired or forget where the confidential folders are. That’s the part too many teams miss: Copilot Studio gives you power automation wrapped in charm, but under the hood, it behaves precisely like you. If your profile can see finance data, your chatbot can see finance data. If you can punch through a restricted connector, so can every conversation your coworkers start with “Hey Copilot.” The result? A quiet but consistent leak of context—those accidental overshares hidden inside otherwise innocent answers. By the end of this podcast, you’ll know exactly how to stop that. You’ll understand how to apply real Data Loss Prevention (DLP) policies to Copilot Studio so your agents stop slurping up whatever they please. We’ll dissect why this happens, how Power Platform’s layered DLP enforcement actually works, and what Microsoft’s consent model means when your AI assistant suddenly decides it’s an archivist. And yes, there’s one DLP rule that ninety percent of admins forget—the one that truly seals the gap. It isn’t hidden in a secret portal; it’s sitting in plain sight, quietly ignored. Let’s just say that after today, your agents will act less like unsupervised interns and more like disciplined employees who understand the word confidential. Section 1: The Hidden Problem – Agents That Know Too Much Here’s the uncomfortable truth: every Copilot Agent you publish behaves as an extension of the user who invokes it. Not a separate account. Not a managed identity unless you make it one. It borrows your token, impersonates your rights, and goes shopping in your data estate. It’s convenient—until someone asks about Q2 bonuses and the agent obligingly quotes from the finance plan. Copilot Studio links connectors with evangelical enthusiasm. Outlook? Sure. SharePoint? Absolutely. Dataverse? Why not. Each connector seems harmless in isolation—just another doorway. Together, they form an entire complex of hallways with no security guard. The metaphor everyone loves is “digital intern”: energetic, fast, and utterly unsupervised. One minute it’s fetching customer details, the next it’s volunteering the full sales ledger to a chat window. Here’s where competent organizations trip. They assume policy inheritance covers everything: if a user has DLP boundaries, surely their agents respect them. Unfortunately, that assumption dies at the boundary between the tenant and the Power Platform environment. Agents exist between those layers—too privileged for tenant restrictions, too autonomous for simple app policies. They occupy the gray space Microsoft engineers politely call “service context.” Translation: loophole. Picture this disaster-class scenario. A marketing coordinator connects the agent to Excel Online for campaign data, adds Dataverse for CRM insights, then saves without reviewing the connector classification. The DLP policy in that environment treats Excel as Business and Dataverse as Non‑Business. The moment someone chats, data crosses from one side to the other, and your compliance officer’s blood pressure spikes. Congratulations—your Copilot just built a makeshift export pipeline. 
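To see how that scenario slips through, here is a tiny, hypothetical check that mirrors the story: given one environment's connector classification, does an agent's connector list cross the Business and Non-Business line? The classification table copies the misconfigured example above, not Microsoft's default grouping, and the function is invented purely for illustration.

```python
# A back-of-the-napkin check for the scenario above: given one environment's
# DLP classification, does an agent's connector list span the Business /
# Non-Business boundary? The classification dict mirrors the (misconfigured)
# example policy in the story, not Microsoft's defaults.
from typing import Iterable

def dlp_violations(connectors: Iterable[str], classification: dict[str, str]) -> list[str]:
    """Return warnings when an agent mixes Business and Non-Business connectors."""
    groups = {classification.get(c, "Unclassified") for c in connectors}
    warnings = []
    if {"Business", "Non-Business"} <= groups:
        warnings.append("Agent bridges Business and Non-Business connectors.")
    if "Unclassified" in groups:
        warnings.append("Agent uses connectors the policy never classified.")
    return warnings

environment_policy = {"Excel Online": "Business", "Dataverse": "Non-Business"}
agent_connectors = ["Excel Online", "Dataverse"]
print(dlp_violations(agent_connectors, environment_policy))
```

The logic is trivial on purpose: the hard part in practice is not the check itself but running it against every agent in every environment before publication, rather than after the audit alert.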
The paradox deepens because most admins configure DLP reactively. They notice trouble only after strange audit alerts appear or a curious manager asks, “Why is Copilot quoting private Teams posts?” By then the event logs show legitimate user tokens, meaning your so‑called leak looks exactly like proper usage. Nothing technically broke; it simply followed rules too loosely written. This is why Microsoft keeps repeating that Copilot Studio doesn’t create new identities—it extends existing ones. So when you wonder who accessed that sensitive table, the answer may be depressing: you did, or at least your delegated shadow did. If your Copilot can see finance data, so can every curious chatbot session your employees open, because it doesn’t need to authenticate twice. It already sits inside your trusted session like a polite hitchhiker with full keychain access. What most teams need to internalize is that “AI governance” isn’t just a fancy compliance bullet. It’s a survival layer. Permissions without containment lead to what auditors politely call “context inference.” That’s when a model doesn’t expose a file but paraphrases its contents from cache. Try explaining that to regulators. Now, before you panic and start ripping out connectors, understand the goal isn’t to eliminate integration—it’s to shape it. DLP exists precisely to draw those bright lines: what counts as Business, what belongs in quarantine, what never touches network A if it speaks to network B. Done correctly, Copilot Studio becomes powerful and predictable. Done naively, it’s the world’s most enthusiastic leaker wrapped in a friendly chat interface. So yes, the hidden problem isn’t malevolence; it’s inheritance. Your agents know too much because you granted them omniscience by design. The good news is that omniscience can be filtered. But to design the filter, you need to know how the data actually travels—through connectors, through logs, through analytic stores that never made it into your compliance diagram. So, let’s dissect how data really moves inside your environment before we patch the leak—because until you understand the route, every DLP rule you write is just guesswork wrapped in false confidence. Section 2: How Data Flows Through Copilot Studio Let’s trace the route of one innocent‑looking question through Copilot Studio. A user types, “Show me our latest sales pipeline.” That request doesn’t travel in a straight line. It starts at the client interface—web, Teams, or embedded app—then passes through the Power Platform connector linked to a service like Dataverse. Dataverse checks the user’s token, retrieves the data, and delivers results back to the agent runtime. The runtime wraps those results into text and logs portions of the conversation for analytics. By the time the answer appears on‑screen, pieces of it have touched four different services and at least two separate audit systems. That hopscotch path is the first vulnerability. Each junction—user token, connector, runtime, analytics—is a potential exfiltration point. When you grant a connector access, you’re not only allowing data retrieval. You’re creating a transit corridor where temporary cache, conversation snippets, and telemetry coexist. Those fragments may include sensitive values even when your output seems scrubbed. That’s why understanding the flow beats blindly trusting the UI’s cheerful checkboxes. Now, connectors themselves come in varieties: Standard, Premium, and Custom. 
Standard connectors—SharePoint, Outlook, OneDrive—sit inside Microsoft’s managed envelope. Premium ones bridge into higher‑value systems like SQL Server or Salesforce. Custom connectors are the real wild cards; they can point anywhere an API and an access token exist. DLP treats each tier differently. A policy may forbid combining Custom with Business connectors, yet admins often test prototypes in mixed environments “just once.” Spoiler: “just once” quickly becomes “in production.” Even connectors that feel safe—Excel Online, for instance—can betray you when paired with dynamic output. Suppose your agent queries an Excel sheet storing regional revenue, summarizes it, and pushes the result into a chat where context persists. The summarized numbers might later mingle with different data sources in analytics. The spreadsheet itself never left your tenant, but the meaning extracted from it did. That’s information leakage by inference, not by download. Add another wrinkle: Microsoft’s defaults are scoped per environment, not across the tenant. Each Power Platform environment—Development, Test, Production—carries its own DLP configuration unless you deliberately replicate the policy. So when you say, “We already have a tenant‑wide DLP,” what you really have is a polite illusion. Unless you manually enforce the same classification each time a new environment spins up, your shiny Copilot in the sandbox might still pipe confidential records straight into a Non‑Business connector. Think of it as identical twins who share DNA but not discipline. And environments multiply. Teams love spawning new ones for pilots, hackathons, or region‑specific bots. Every time they do, Microsoft helpfully clones permissions but not necessarily DLP boundaries. That’s why governance by memo—“Please remember to secure your environment”—fails. Data protection needs automation, not trust. Let me illustrate with a story that’s become folklore in cautious IT circles. A global enterprise built a Copilot agent for customer support, proudly boasting an airtight app‑level policy. They assumed the DLP tied to that app extended to all sub‑components. When compliance later reviewed logs, they discovered the agent had been cross‑referencing CRM details stored in an unmanaged environment. The culprit? The DLP lived at the app layer; the agent executed at environment scope. The legal team used words not suitable for slides. The truth is predictable yet ignored: DLP boundaries form at the connector‑environment intersection, not where marketing materials claim. Once a conversation begins, the system logs user input, connector responses, and telemetry into the conversation analytics store.
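Returning to the environment-scope point above: a "tenant-wide" DLP intent only holds if every environment actually carries the same classification. The sketch below, with invented environment names and connector groupings, shows the kind of drift check that replaces governance by memo with governance by automation.

```python
# A small sketch of the "polite illusion" problem: tenant-wide intent only
# holds if every environment actually carries the same classification. All
# environment names and groupings here are invented for illustration.
BASELINE = {"SharePoint": "Business", "Dataverse": "Business", "Custom HTTP": "Blocked"}

environments = {
    "Production": {"SharePoint": "Business", "Dataverse": "Business", "Custom HTTP": "Blocked"},
    "Hackathon-Pilot": {"SharePoint": "Business", "Dataverse": "Non-Business"},  # drifted
}

def report_drift(envs: dict[str, dict[str, str]], baseline: dict[str, str]) -> None:
    """Print every environment whose connector classification differs from the baseline."""
    for name, policy in envs.items():
        for connector, expected in baseline.items():
            actual = policy.get(connector, "Missing")
            if actual != expected:
                print(f"{name}: {connector} is '{actual}', baseline expects '{expected}'")

report_drift(environments, BASELINE)
```

Run something like this every time a new environment spins up and the "identical twins who share DNA but not discipline" problem becomes visible before it becomes a finding.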

    22 min
  8. Stop Building Ugly Power Apps: Master Containers Now

    3 days ago

    Stop Building Ugly Power Apps: Master Containers Now

    Opening – The Ugly Truth About Power Apps Most Power Apps look like they were designed by someone who fell asleep halfway through a PowerPoint presentation. Misaligned buttons, inconsistent fonts, half-broken responsiveness—the digital equivalent of mismatched socks at a corporate gala. The reason is simple: people skip Containers. They drag labels and icons wherever their mouse lands, then paste formulas like duct tape. Meanwhile, your branding department weeps. But here’s the fix: Containers and component libraries. Build once, scale everywhere, and stay perfectly on-brand. You’ll learn how to make Power Apps behave like professional software—responsive, consistent, and downright governed. IT loves structure; users love pretty. Congratulations—you’ll finally please both. Section 1 – Why Your Apps Look Amateur Let’s diagnose the disease before prescribing the cure. Most citizen-developed apps start as personal experiments that accidentally go global. One manager builds a form for vacation requests, another copies it, changes the color scheme “for personality,” and within six months the organization’s internal apps look like they were developed by twelve different companies fighting over a color wheel. Each app reinvents basic interface patterns—different header heights, inconsistent padding, and text boxes that resize like they’re allergic to symmetry. The deeper issue? Chaos of structure. Without Containers, Power Apps devolve into art projects. Makers align controls by eye and then glue them in place with fragile X and Y formulas—each tweak a cascading disaster. Change one label width and twenty elements shift unexpectedly, like dominoes in an earthquake. So when an executive asks, “Can we add our new logo?” you realize that simple graphic replacement means hours of manual realignment across every screen. That’s not design; that’s punishment. Now compare that to enterprise expectations—governance, consistency, reliability. In business, brand identity isn’t vanity; it’s policy. The logo’s position, the shade of blue, the margins around headers—all of it defines the company’s visible integrity. Research on enterprise UI consistency shows measurable payoffs: users trust interfaces that look familiar, navigate faster, make fewer mistakes, and report higher productivity. When your Power Apps look like cousins who barely talk, adoption plummets. Employees resist tools that feel foreign, even when functionality is identical. Every inconsistent pixel is a maintenance debt note waiting to mature. Skip Containers and you multiply that debt with each button and text box. Update the layout once? Congratulations: you’ve just updated it manually everywhere else too. And the moment one screen breaks responsiveness, mobile users revolt. The cost of ignoring layout structure compounds until IT steps in with an “urgent consolidation initiative,” which translates to rebuilding everything you did that ignored best practices. It’s tragic—and entirely avoidable. Power Apps already includes the cure. It’s been there this whole time, quietly waiting in the Insert panel: Containers. They look boring. They sound rigid. But like any strong skeleton, they keep the body from collapsing. And once you understand how they work, you stop designing hunchbacked monsters disguised as apps. Section 2 – Containers: The Physics of Layout A container in Power Apps is not decoration—it’s gravitational law. It defines how elements exist relative to one another. You get two major species: horizontal and vertical. 
The horizontal container lays its children side by side, distributing width according to flexible rules; the vertical one stacks them. Combine them—nest them, actually—and you create a responsive universe that obeys spatial logic instead of pixel guessing. Without containers, you’re painting controls directly on the canvas and telling each, “Stay exactly here forever.” Switch device orientation or resolution, and your app collapses like an untested building. Containers, however, introduce physics: controls adapt to available space, fill, shrink, or stretch depending on context. The app behaves more like a modern website than a static PowerPoint. Truly responsive design—no formulas, no prayers. Think in architecture: start with a screen container (the foundation). Inside it, place a header container (the roofline), a content container (the interior rooms), and perhaps a sidebar container (the utility corridor). Each of those can contain their own nested containers for buttons, icons, and text elements. Everything gets its coordinates from relationships, not arbitrary numbers. If you’ve ever arranged furniture by actual room structure rather than coordinates in centimeters, congratulations—you already understand the philosophy. Each container brings properties that mimic professional layout engines: flexible width, flexible height, padding, gap, and alignment. Flexible width lets a container’s children share space proportionally—two buttons could each take 50%, or a navigation section could stretch while icons remain fixed. Padding ensures breathing room, keeping controls from suffocating each other. Gaps handle the space between child elements—no more hacking invisible rectangles to create distance. Alignment decides whether items hug the start, end, or center of their container, both horizontally and vertically. Together, these rules transform your canvas from a static grid into a living, self-balancing structure. Now, I know what you’re thinking: “But I lose drag-and-drop freedom.” Yes… and thank goodness. That freedom is the reason your apps looked like abstract art. Losing direct mouse control forces discipline. Elements no longer wander off by one unintended pixel. You position objects through intent—“start, middle, end”—rather than by chance. You don’t drag things; you define relationships. This shift feels restrictive only to the untrained. Professionals call it “layout integrity.” Here’s a fun pattern: over-nesting. Beginners treat containers like Russian dolls, wrapping each control in another container until performance tanks. Don’t. Use them with purpose: structure major regions, not every decorative glyph. And for all that is logical, name them properly. “Container1,” “Container2,” and “Container10” are not helpful when debugging. Adopt a naming convention—cnt_Header, cnt_Main, cnt_Sidebar. It reads like a blueprint rather than a ransom note. Another rookie mistake: ignoring the direction indicators in the tree view. Every container shows whether it’s horizontal or vertical through a tiny icon. It’s the equivalent of an arrow on a road sign. Miss it, and your buttons suddenly stack vertically when you swore they’d line up horizontally. Power Apps isn’t trolling you; you simply ignored physics. Let’s examine responsiveness through an example. Imagine a horizontal container hosting three icons: Home, Reports, and Settings. On a wide desktop screen, they align left to right with equal gaps. 
On a phone, the available width shrinks, and the same container automatically stacks them vertically. No formulas, no conditional visibility toggles—just definition. You’ve turned manual labor into consistent behavior. That’s the engineering leap from “hobby project” to “enterprise tool.” Power Apps containers also support reordering—directly from the tree view, no pixel dragging required. You can move the sidebar before the main content or push the header below another region with a single “Move to Start” command. It’s like rearranging Lego pieces rather than breaking glued models. Performance-wise, containers remove redundant recalculations. Without them, every formula reevaluates positions on screen resize. With them, spatial rules—like proportional gaps and alignment—are computed once at layout level, reducing lag. It’s efficiency disguised as discipline. There’s one psychological barrier worth destroying: the illusion that formulas equal control. Many makers believe hand-coded X and Y logic gives precision. The truth? It gives you maintenance headaches and no scalability. Containers automate positioning mathematically and produce the same accuracy across devices. You’re not losing control; you’re delegating it to a system that doesn’t get tired or misclick. Learn to mix container types strategically. Vertical containers for stacking sections—header atop content atop footer. Horizontal containers within each for distributing child elements—buttons, fields, icons. Nesting them creates grids as advanced as any web framework, minus the HTML anxiety. The result is both aesthetic and responsive. Resize the window and watch everything realign elegantly, not collapse chaotically. Here’s the ultimate irony: you don’t need a single positioning formula. Zero. Entire screens built through containers alone automatically adapt to tablets, desktops, and phones. Every update you make—adding a new field, changing a logo—respects the defined structure. So when your marketing department introduces “Azure Blue version 3,” you just change one style property in the container hierarchy, not sixteen screens of coordinates. Once you master container physics, your organization can standardize layouts across dozens of apps. You’ll reduce support tickets about “missing buttons” or “crushed labels.” UI consistency becomes inevitable, not aspirational. This simple structural choice enforces the visual discipline your corporation keeps pretending to have in PowerPoint presentations. And once every maker builds within the same invisible skeleton, quality stops being a coincidence. That’s when we move from personal creativity to governed design. Or, if you prefer my version: elegance through geometry. Section 3 – Component Libraries: Corporate Branding on Autopilot Co

    23 min
