M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation

Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit M365.show.

  1. 5 HOURS AGO

    The Castle Gate Is Open—Is Your Entra ID Secured?

    Imagine your company’s digital castle with wide‑open gates. Everyone can stroll right in—vendors, employees who left years ago, even attackers dressed as your CFO. That’s what an unprotected identity perimeter looks like. Before we roll initiative on today’s breach boss, hit Subscribe so you get weekly security briefings without missing the quest log. Here’s the twist: in the Microsoft cloud, your castle gate is no longer a firewall—it’s Entra ID. In this video, you’ll get a practical overview of the essential locks—MFA, Conditional Access, Privileged Identity Management, and SSO—and the first steps to harden them. Because building walls isn’t enough when attackers can just blink straight past them.

    The New Castle Walls

    The new castle walls aren’t made of stone anymore. Once upon a time, you could build a giant moat, man every tower, and assume attackers would line up politely at the front gate. That model worked when business stayed behind a single perimeter, tucked safely inside racks of servers under one roof. But now your kingdom lives in clouds, browsers, and every laptop that walks out of the office. The walls didn’t just crack—they dissolved.

    Back then, firewalls were your dragons, roaring at the edge of the network. You trusted that anything inside those walls belonged there. Cubicles, desktops bolted under desks, devices you imaged yourself—every user was assumed trustworthy just by virtue of being within the perimeter. It was simpler, but it also hinged on one assumption: that the moat was wide enough, and attackers couldn’t simply skip it.

    That assumption crumbled fast. Cloud apps scattered your resources far beyond the citadel. Remote work spread employees everywhere from home offices to airport lounges. And bring-your-own-device policies let personal tablets and home laptops waltz right into the mix. Each shift widened the attack surface, and suddenly the moat wasn’t holding anyone back.

    In this new reality, firewalls didn’t vanish, but their ability to guard the treasure dropped sharply. An attacker doesn’t charge at your perimeter anymore; they slip past by grabbing a user’s credentials. A single leaked password can work like a skeleton key, no brute force required.

    That’s why the focus shifted. Identity became the castle wall. In the cloud, Microsoft secures the platform itself, but what lives within it—your configuration, your policies, your user access—that’s on you. That shared-responsibility split is the reason identity is now your primary perimeter. Your “walls” are no longer walls at all; they’re the constant verification points that decide whether someone truly belongs.

    Think of a password like a flimsy wooden door bolted onto your vault. It exists, but it’s laughably fragile. Add multi-factor authentication, and suddenly that wooden plank is replaced with a gate that slams shut unless the right key plus the right proof line up. It forces attackers to push harder, and often that effort leaves traces you can catch before they crown themselves royalty inside your systems.

    Identity checks aren’t just a speed bump—they’re where almost every modern attack begins. When a log-in comes from across the globe at 3 a.m. under an employee’s name, a perimeter-focused model shrugs and lets it pass. To the old walls, credentials are enough. But to a system built around identity, that’s the moment where the guard at the door says, “Wait—prove it.” Failure to control this space means intruders walk in dressed like your own staff.
    You won’t catch them with alerts about blocked ports or logon attempts at your firewall. They’re already inside, blending seamlessly with daily activity. That’s where data gets siphoned, ransomware gets planted, and attackers live quietly for months.

    So the new castle walls aren’t firewalls in a server room. They’re the tools that protect who can get in: identity protections, context checks, and policies wrapped around every account. And the main gate in that setup is Microsoft Entra ID. If it’s weak, every other safeguard collapses because entry has already been granted. Which leaves us at the real question administrators wrestle with: if keeping the gate means protecting identity, what does it look like to rely on just a single password? So if the walls no longer work, what becomes the gate? Identity—and Entra ID is the gatekeeper. And as we’ll see next, trusting passwords alone is like rolling a D20 and hitting a natural 1 every time.

    Rolling a Natural 1 with Passwords

    Passwords have long been the front door key for digital systems, but that lock is both brittle and predictable. For years, typing a string of characters into a box was the default proof of identity. It was cheap, simple, and everyone understood it. But that very simplicity created deep habits—habits attackers quickly learned to exploit.

    The main problem is reuse. People juggle so many accounts that recycling the same password across services feels inevitable. When one forum gets breached, those stolen logins often unlock doors at work too. Credential dumps sold on dark-web marketplaces mean attackers don’t even need to bother guessing—they just buy the keys already labeled. That’s a massive flaw when your entire perimeter depends on “something you know.”

    Even when users try harder, the math still works against them. Complex passwords laced with symbols and numbers might look tough, but machines can rattle through combinations at astonishing speed. Patterned choices—birthdays, company names, seasonal phrases—make it faster still. A short password today can fall to brute force in seconds, and no amount of rotating “Spring2024!” to “Summer2024!” changes that.

    On top of that, no lock can withstand social engineering when users get tricked into handing over the key. Phishing strips away even good password practices with a simple fake login screen. A convincing email and a spoofed domain are usually enough. At that point, attackers don’t outsmart a password policy—they just outsmart the person holding it.

    This is why passwords remain necessary, but never sufficient. Microsoft’s own guidance is clear: strong authentication requires layering defenses. That means passwords are only one factor among several, not the one defense holding back a breach. Without that layering, your user login page may as well be guarded by a cardboard cutout instead of a castle wall.

    The saving throw here is multi-factor authentication. MFA doesn’t replace your password—it backs it up. You supply a secret you know, but you must also confirm something you have or something you are. That extra check stops credential stuffing cold and makes stolen dumps far less useful. In practice, the difference is night and day: with MFA, logging in requires access to more than a leaked string of text. Entra ID supports multiple forms of this protection—push approvals, authenticator codes, even physical tokens. Which method you pick depends on your organization’s needs, but the point is consistency.
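    To see where a tenant actually stands, you can enumerate what each user has registered. A minimal sketch, assuming a Microsoft Graph access token with the UserAuthenticationMethod.Read.All permission; token acquisition is stubbed as a parameter, and this is an illustration rather than the show’s own tooling:

    ```python
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def list_auth_methods(user_id: str, token: str) -> list[dict]:
        """Return the registered authentication methods for one user."""
        resp = requests.get(
            f"{GRAPH}/users/{user_id}/authentication/methods",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("value", [])

    def password_only(methods: list[dict]) -> bool:
        # Each method carries an @odata.type, e.g.
        # #microsoft.graph.passwordAuthenticationMethod or
        # #microsoft.graph.microsoftAuthenticatorAuthenticationMethod.
        kinds = {m.get("@odata.type", "") for m in methods}
        return kinds <= {"#microsoft.graph.passwordAuthenticationMethod"}
    ```

    Any account for which password_only() returns True is still behind the wooden door.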
    Layering MFA across accounts drastically lowers the success rate of attacks because stolen credentials on their own lose most of their value. Policies enforcing periodic password changes or quirky complexity rules can actually backfire, creating predictable user behaviors. By contrast, MFA works with human tendencies instead of against them. It accepts that people will lean toward convenience, and it cushions those habits with stronger verification windows.

    If you only remember one thing from this section: passwords are the old wooden door—MFA is your reinforced gate. One is technically a barrier; the other turns casual attempts into real work for an attacker. And the cost bump to criminals is the whole point.

    Of course, even armor has gaps. MFA shields you against stolen passwords, but it doesn’t answer the question of context: who is logging in, from where, on what device, and at what time. That’s where the smarter systems step in. Imagine a guard at the castle gate who doesn’t just check if you have a key, but also notices if you’re arriving from a faraway land at 3 a.m. That’s where the real gatekeeping evolves.

    The Smart Bouncer at the Gate

    Picture a castle gate with a bouncer who doesn’t just wave you through because you shouted the right password. This guard checks your ID, looks for tells that don’t match the photo, and asks why you’re showing up at this hour. That’s Conditional Access in your Microsoft cloud. It’s not just another lock; it’s the thinking guard that evaluates signals like device compliance, user risk, and geographic location, then decides in real time whether to allow, block, or demand more proof.

    MFA alone is strong armor, but armor isn’t judgment. Social engineering and fatigue attacks can still trick a user into approving a fraudulent prompt at three in the morning, turning a “yes” into a false green light. Conditional Access closes that gap. If the login context looks suspicious—wrong city, unhealthy device, or risk scores that don’t align—policies can force another verification step or block the attempt outright. It’s the difference between blind acceptance and an actual interrogation.

    Take a straightforward scenario. An employee account logs in from across the globe at an odd hour, far from their normal region. Username, password, and MFA all check out. A traditional system shrugs. Conditional Access instead notices the anomaly, cross-references location and time, and triggers additional controls—like requiring another factor or denying the sign-in entirely. The bouncer doesn’t just say “you match the description”; it notices that nothing else makes sense.

    What makes this especially effective is how flexible the rules can be. A common early win is to ensure older, insecure authentication methods aren’t allowed.
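    One way that first step is commonly wired up is a Conditional Access policy targeting legacy authentication clients. A hedged sketch against the Microsoft Graph conditionalAccess endpoint, created in report-only mode so nothing is enforced until you’ve reviewed the impact; the field values follow the Graph policy schema as documented, but validate in a test tenant first:

    ```python
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    # Report-only policy that flags legacy authentication clients.
    policy = {
        "displayName": "Block legacy authentication (report-only)",
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
            # "other" covers older protocols that cannot do MFA.
            "clientAppTypes": ["exchangeActiveSync", "other"],
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }

    def create_policy(token: str) -> dict:
        # Requires the Policy.ReadWrite.ConditionalAccess permission.
        resp = requests.post(
            f"{GRAPH}/identity/conditionalAccess/policies",
            headers={"Authorization": f"Bearer {token}"},
            json=policy,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()
    ```

    Once the sign-in logs confirm nothing legitimate would break, flipping state to "enabled" turns the report into an enforced gate.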

    19 min
  2. 17 HOURS AGO

    The Hidden Engine Inside Microsoft Fabric

    Here’s the part that changes the game: in Microsoft Fabric, Power BI doesn’t have to shuttle your data back and forth. With OneLake and Direct Lake mode, it can query straight from the lake with performance on par with import mode. That means greatly reduced duplication, no endless exports, and less wasted time setting up fragile refresh schedules. The frame we’ll use is simple: input with Dataflows Gen2, process inside the lakehouse with pipelines, and output through semantic models and Direct Lake reports. Each step adds a piece to the engine that keeps your data ecosystem running. And it all starts with the vault that makes this possible.

    OneLake: The Data Vault You Didn’t Know You Already Owned

    OneLake is the part of Fabric that Microsoft likes to describe as “OneDrive for your data.” At first it sounds like a fluffy pitch, but the mechanics back it up. All workloads tap into a single, cloud-backed reservoir where Power BI, Synapse, and Data Factory already know how to operate. And since the lake is built on open formats like Delta Lake and Parquet, you’re not being locked into a proprietary vault that you can’t later escape. Think of it less as marketing spin and more as a managed, standardized way to keep everything in one governed stream.

    Compare that to the old way most of us handled data estates. You’d inherit one lake spun up by a past project, somebody else funded a warehouse, and every department shared extracts as if Excel files on SharePoint were the ultimate source of truth. Each system meant its own connectors and quirks, which failed just often enough to wreck someone’s weekend. What you ended up with wasn’t a single strategy for data, but overlapping silos where reconciling dashboards took more energy than actually using the numbers.

    A decent analogy is a multiplayer game where every guild sets up its own bank. Some have loose rules—keys for everyone—while others throw three-factor locks on every chest. You’re constantly remembering which guild has which currency, which chest you can still open, and when the locks reset. Moving loot between them turns into a burden. That’s the same energy when every department builds its own lake. You don’t spend time playing the game—you spend it accounting for the mess.

    OneLake tries to change that approach by providing one vault. Everyone drops their data into a single chest, and Fabric manages consistent access. Power BI can query it, Synapse can analyze it, and Data Factory can run pipelines through it—all without fragmenting the store or requiring duplicate copies. The shared chest model cuts down on duplication and arguments about which flavor of currency is real, because there is just one governed vault under a shared set of rules.
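    To make the open-format claim concrete: because lakehouse tables are plain Delta under the hood, any Delta-capable reader can query them in place over OneLake’s ADLS-compatible endpoint. A minimal sketch using the open-source deltalake Python package; the workspace, lakehouse, and table names are placeholders, and the storage option keys should be checked against the package docs for your version:

    ```python
    from deltalake import DeltaTable  # pip install deltalake

    # Workspace, lakehouse, and table names below are placeholders.
    TABLE = (
        "abfss://SalesWorkspace@onelake.dfs.fabric.microsoft.com/"
        "Sales.Lakehouse/Tables/orders"
    )

    # Option keys follow the object_store Azure configuration used by
    # deltalake; verify them for your package version.
    dt = DeltaTable(
        TABLE,
        storage_options={
            "bearer_token": "<entra-id-token>",
            "use_fabric_endpoint": "true",
        },
    )
    print(dt.to_pandas().head())  # read in place: no export, no duplicate copy
    ```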
    Now, here’s where hesitation kicks in. “Everything in one place” sounds sleek for slide decks, but having a single dependency raises real red flags. If the lake goes sideways, that could ripple through dashboards and reports instantly. The worry about a single point of failure is valid. But Microsoft attempts to offset that risk with built-in resilience tools baked into Fabric itself, along with governance hooks that are not bolted on later. Instead of an “instrumented by default” promise, consider the actual wiring: OneLake integrates directly with Microsoft Purview. That means lineage tracking, sensitivity labeling, and endorsement live alongside your data from the start.

    You’re not bolting on random scanners or third-party monitors—metadata and compliance tags flow in as you load data, so auditors and admins can trace where streams came from and where they went. Observability and governance aren’t wishful thinking; they’re system features you get when you use the lake.

    For administrators still nervous about centralization, Purview isn’t the only guardrail. Fabric also provides monitoring dashboards, audit logs, and admin control points. And if you have particularly strict network rules, there are Azure-native options such as managed private endpoints or trusted workspace configs to help enforce private access. The right pattern will depend on the environment, but Microsoft has at least given you levers to pilot access rather than leaving you exposed.

    That’s why the “OneDrive for data” image sticks. With OneDrive, you put files in one logical spot and then every Microsoft app can open them without you moving them around manually. You don’t wonder if your PowerPoint vanished into some other silo—it surfaces across devices because it’s part of the same account fabric. OneLake applies that model to data estates. Place it once. Govern it once. Then let the workloads consume it directly instead of spawning yet another copy. The simplicity isn’t perfect, but it does remove a ton of the noise many enterprises suffer from when shadow IT teams create mismatched lakes under local rules.

    Once you start to see Power BI, Synapse, and pipeline tools working against the same stream instead of spinning up different ones, the “OneLake” label makes more sense. Your environment stops feeling like a dozen unsynced chests and starts acting like one shared vault. And that sets us up for the real anxiety point: knowing the vault exists is one thing; deciding when to hit the switch that lights it up inside your Power BI tenant is another. That button is where most admins pause, because it looks suspiciously close to a self-destruct.

    Switching on Fabric Without Burning Down Power BI

    Switching on Fabric is less about tearing down your house and more about adding a new wing. In the Power BI admin portal, under tenant settings, sits the control that makes it happen. By default, it’s off so admins have room to plan. Flip it on, and you’re not rewriting reports or moving datasets. All existing workspaces stay the same. What you unlock are extra object types—lakehouses, pipelines, and new levers you can use when you’re ready. Think of it like waking up to see new abilities appear on your character’s skill tree; your old abilities are untouched, you’ve just got more options.

    Now, just because the toggle doesn’t break anything doesn’t mean you should sprint into production. Microsoft gives you flexibility to enable Fabric fully across the tenant, but also lets you enable it for selected users, groups, or even on a per-capacity basis. That’s your chance to keep things low-risk. Instead of rolling it out for everyone overnight, spin up a test capacity, give access only to IT or a pilot group, and build one sandbox workspace dedicated to experiments. That way the people kicking tires do it safely, without making payroll reporting the crash test dummy.

    When Fabric is enabled, new components surface but don’t activate on their own. Lakehouses show up in menus. Pipelines are available to build. But nothing auto-migrates and no classic dataset is reworked. It’s a passive unlock—until you decide how to use it.
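    If you stage the rollout this way, it’s worth verifying that pilot workspaces actually landed on the ring-fenced capacity. A small sketch against the Fabric REST API’s workspace listing; the capacity ID is a placeholder and token acquisition is out of scope:

    ```python
    import requests

    FABRIC = "https://api.fabric.microsoft.com/v1"

    def list_workspaces(token: str) -> list[dict]:
        """List the workspaces visible to the caller."""
        resp = requests.get(
            f"{FABRIC}/workspaces",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("value", [])

    def strays(workspaces: list[dict], trial_capacity_id: str) -> list[str]:
        # Flag anything assigned to a capacity other than the sandbox.
        return [
            ws["displayName"]
            for ws in workspaces
            if ws.get("capacityId") and ws["capacityId"] != trial_capacity_id
        ]
    ```

    An empty strays() result means the experiment is still contained where you put it.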
    On a natural 20, your trial team finds the new menus, experiments with a few templates, and moves on without disruption. On a natural 1, all that really happens is the sandbox fills with half-finished project files. Production dashboards still hum the same tune as yesterday.

    The real risk comes later when workloads get tied to capacities. Fabric isn’t dangerous because of the toggle—it’s dangerous if you mis-size or misplace workloads. Drop a heavy ingestion pipeline into a tiny trial SKU and suddenly even a small query feels like it’s moving through molasses. Or pile everything from three departments into one slot and watch refreshes queue into next week. That’s not a Fabric failure; that’s a deployment misfire.

    Microsoft expects this, which is why trial capacities exist. You can light up Fabric experiences without charging production compute or storage against your actual premium resources. Think of trial capacity as a practice arena: safe, ring-fenced, no bystanders harmed when you misfire a fireball. Microsoft even provides Contoso sample templates you can load straight in. These give you structured dummy data to test pipelines, refresh cycles, and query behavior without putting live financials or HR data at risk.

    Here’s the smart path. First, enable Fabric for a small test group instead of the entire tenant. Second, assign a trial capacity and build a dedicated sandbox workspace. Third, load up one of Microsoft’s example templates and run it like a stress test. Walk pipelines through ingestion, check your refresh schedules, and keep an eye on runtime behavior. When you know what happens under load in a controlled setting, you’ve got confidence before touching production.

    The mistakes usually happen when admins skip trial play altogether. They toss workloads straight onto undersized production capacity or let every team pile into one workspace. That’s when things slow down or queue forever. Users don’t see “Fabric misconfiguration”; they just see blank dashboards. But you avoid those natural 1 rolls by staging and testing first. The toggle itself is harmless. The wiring you do afterward decides whether you get smooth uptime or angry tickets.

    Roll Fabric into production after that and cutover feels almost boring. Reports don’t break. Users don’t lose their favorite dashboards. All you’ve done is make new building blocks available in the same workspaces they already know. Yesterday’s reports stay alive. Tomorrow’s teams get to summon lakehouses and pipelines as needed. Turning the toggle was never a doomsday switch—it was an unlock, a way to add an expansion pack without corrupting the save file.

    And once those new tools are visible, the next step isn’t just staring at them—it’s feeding them. These lakehouses won’t run on air. They need steady inputs to keep the system alive, and that means turning to the pipelines that actually stream fuel into the lake.

    Dataflows Gen2:

    19 min
  3. 1 DAY AGO

    Autonomous Agents Gone Rogue? The Hidden Risks

    Imagine logging into Teams and being greeted by a swarm of AI agents, each promising to streamline your workday. They’re pitching productivity—yet without rules, they can misinterpret goals and expand access in ways that make you liable. It’s like handing your intern a company credit card and hoping the spend report doesn’t come back with a yacht on it. Here’s the good news: in this episode you’ll walk away with a simple framework—three practical controls and some first steps—to keep these agents useful, safe, and aligned. Because before you can trust them, you need to understand what kind of coworkers they’re about to become.

    Meet Your New Digital Coworkers

    Meet your new digital coworkers. They don’t sit in cubicles, they don’t badge in, and they definitely never read the employee handbook. These aren’t the dusty Excel macros we used to babysit. Agents observe, plan, and act because they combine three core ingredients: memory, entitlements, and tool access. That’s the Microsoft-and-BCG framework, and it’s the real difference—your new “colleague” can keep track of past interactions, jump between systems you’ve already trusted, and actually use apps the way a person would.

    Sure, the temptation is to joke about interns again. They show up full of energy but have no clue where the stapler lives. Same with agents—they charge into your workflows without really understanding boundaries. But unlike an intern, they can reach into Outlook, SharePoint, or Dynamics the moment you deploy them. That power isn’t just quirky—it’s a governance problem. Without proper data loss prevention and entitlements, you’ve basically expanded the attack surface across your entire stack.

    If you want a taste of how quickly this becomes real, look at the roadmap. Microsoft has already teased SharePoint agents that manage documents directly in sites, not just search results. Imagine asking an assistant to “clean up project files,” and it actually reorganizes shared folders across teams. Impressive on a slide deck, but also one misinterpretation away from archiving the wrong quarter’s financials. That’s not a theoretical risk—that’s next year’s ops ticket.

    Old-school automation felt like a vending machine. You punched one button, the Twix dropped, and if you were lucky it didn’t get stuck. Agents are nothing like that. They can notice the state of your workflow, look at available options, and generate steps nobody hard-coded in advance. It’s adaptive—and that’s both the attraction and the hazard. On a natural 1, the outcome isn’t a stuck candy bar—it’s a confident report pulling from three systems with misaligned definitions, presented as gospel months later. Guess who signs off when Finance asks where the discrepancy came from?

    Still, their upside is obvious. A single agent can thread connections across silos in ways your human teams struggle to match. It doesn’t care if the data’s in Teams, SharePoint, or some Dynamics module lurking in the background. It will hop between them and compile results without needing email attachments, calendar reminders, or that one Excel wizard in your department. From a throughput perspective, it’s like hiring someone who works ten times faster and never stops to microwave fish in the breakroom.

    But speed without alignment is dangerous. Agents don’t share your business goals; they share the literal instructions you feed them. That disconnect is the “principal-agent problem” in a tech wrapper.
    You want accuracy and compliance; they deliver a closest-match interpretation with misplaced confidence. It’s not hostility—it’s obliviousness. And oblivious with system-level entitlements can burn hotter than malice. That’s how you get an over-eager assistant blasting confidential spreadsheets to external contacts because “you asked it to share the update.”

    So the reality is this: agents aren’t quirky sidelines; they’re digital coworkers creeping into core workflows, spectacularly capable yet spectacularly clueless about context. You might fall in love with their demo behavior, but the real test starts when you drop them into live processes without the guardrails of training or oversight. And here’s your curiosity gap: stick with me, because in a few minutes we’ll walk through the three things every agent needs—memory, entitlements, and tools—and why each one is both a superpower and a failure point if left unmanaged. Which sets up your next job: not just using tools, but managing digital workers as if they’re part of your team. And that comes with no HR manual, but plenty of responsibility.

    Managers as Bosses of Digital Workers

    Imagine opening your performance review and seeing a new line: “Managed 12 human employees and 48 AI agents.” That isn’t sci‑fi bragging—it’s becoming a real metric of managerial skill. Experts now say a manager’s value will partly be judged on how many digital workers they can guide, because prompting, verification, and oversight are fast becoming core leadership abilities. The future boss isn’t just delegating to people; they’re orchestrating a mix of staff and software.

    That shift matters because AI agents don’t work like tools you leave idle until needed. They move on their own once prompted, and they don’t raise a hand when confused. Your role as a manager now requires skills that look less like writing memos and more like defining escalation thresholds—when does the agent stop and check with you, and when does it continue? According to both PwC and the World Economic Forum, the three critical managerial actions here are clear prompting, human‑in‑the‑loop oversight, and verification of output. If you miss one of these, the risk compounds quickly.

    With human employees, feedback is constant—tone of voice, quick questions, subtle hesitation. Agents don’t deliver that. They’ll hand back finished work regardless of whether their assumptions made sense. That’s why prompting is not casual phrasing; it’s system design. A single vague instruction can ripple into misfiled data, careless access to records, or confident but wrong reports. Testing prompts before deploying them becomes as important as reviewing project plans.

    Verification is the other half. Leaders are used to spot‑checking for quality but may assume automation equals precision. Wrong assumption. Agents improvise, and improvisation without review can be spectacularly damaging. As Ayumi Moore Aoki points out, AI has a talent for generating polished nonsense. Managers cannot assume “professional tone” means “factually correct.” Verification—validating sources, checking data paths—is leadership now.

    Oversight closes the loop. Think of it less like old‑school micromanagement and more like access control. Babak Hodjat phrases it as knowing the boundaries of trust. When you hand an agent entitlements and tool access, you still own what it produces. Managers must decide in advance how much power is appropriate, and put guardrails in place. That oversight often means requiring human approval before an agent makes potentially risky changes, like sending data externally or modifying records across core systems.
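    What such a guardrail can look like in practice, reduced to a toy sketch: a dispatcher that refuses risky action categories unless a human approver signs off. Everything here is illustrative Python, not any vendor’s API; the action names are assumptions:

    ```python
    from dataclasses import dataclass, field
    from typing import Callable

    # Action categories the episode calls out as needing sign-off.
    RISKY = {"send_external", "modify_records", "delete"}

    @dataclass
    class GuardedAgent:
        approve: Callable[[str, dict], bool]   # the human-in-the-loop hook
        audit_log: list = field(default_factory=list)

        def act(self, action: str, payload: dict) -> None:
            if action in RISKY and not self.approve(action, payload):
                self.audit_log.append(("blocked", action))
                raise PermissionError(f"'{action}' requires human approval")
            self.audit_log.append(("allowed", action))
            # ...hand off to whatever tool access the agent actually has

    # Example wiring: a console prompt as the approver.
    # agent = GuardedAgent(approve=lambda a, p: input(f"Allow {a}? [y/N] ") == "y")
    ```

    The audit log matters as much as the block: it is the record you reach for when Finance asks where a discrepancy came from.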
    Here’s the uncomfortable twist: your reputation as a manager now depends on how well you balance people and digital coworkers. Too much control and you suffocate the benefits. Too little control and you get blind‑sided by errors you didn’t even see happening. The challenge isn’t choosing one style of leadership—it’s running both at once. People require motivation and empathy. Agents require strict boundaries and ongoing calibration. Keeping them aligned so they don’t disrupt each other’s workflows becomes part of your daily management reflex.

    Think of your role now as a conductor—not in the HR department sense, but literally keeping time with two different sections. Human employees bring creativity and empathy. AI agents bring speed and reach. But if no one directs them, the result is discord. The best leaders of the future will be judged not only on their team’s morale, but on whether human and digital staff hit the same tempo without spilling sensitive data or warping decision‑making along the way. On a natural 1, misalignment here doesn’t just break a workflow—it creates a compliance investigation.

    So the takeaway is simple. Your job title didn’t change, but the content of your role did. You’re no longer just managing people—you’re managing assistant operators embedded in every system you use. That requires new skills: building precise prompts, testing instructions for unintended consequences, validating results against trusted sources, and enforcing human‑in‑the‑loop guardrails. Success here is what sets apart tomorrow’s respected managers from the ones quietly ushered into “early retirement.”

    And because theory is nice but practice is better, here’s your one‑day challenge: open your Copilot or agent settings and look for where human‑in‑the‑loop approvals or oversight controls live. If you can’t find them, that gap itself is a finding—it means you don’t yet know how to call back a runaway process.

    Now, if managing people has always begun with onboarding, it’s fair to ask: what does onboarding look like for an AI agent? Every agent you deploy comes with its own starter kit. And the contents of that kit—memory, entitlements, and tools—decide whether your new digital coworker makes you look brilliant or burns your weekend rolling back damage.

    The Three Pieces Every Agent Needs

    If you were to unpack what actually powers an agent, Microsoft and BCG call it the starter kit: three essentials—memory, entitlements, and tools. Miss one, and instead of a digital coworker you can trust, you’ve got a half-baked bot stumbling around your environment. Get them wrong, a

    20 min
  4. 1 DAY AGO

    SharePoint Premium Is Not What You Think

    If you want advantage on governance, hit subscribe—it’s the stat buff that keeps your castle standing. Now, imagine giving Copilot the keys to your company’s content… but forgetting to lock the doors. That’s what happens when advanced AI runs inside a weak governance structure. SharePoint Premium doesn’t just boost productivity with AI—it includes SharePoint Advanced Management, or SAM, which adds walls like Restricted Access Control, Data Access Governance, and site lifecycle tools. SAM helps reduce oversharing and manage access, but you still need policies and owners to act. In this run, you’ll see how to spot overshared sites, enforce Restricted Access Control, and even run access reviews so your walls aren’t guarded by ducks. Which brings us to the question—does a moat really keep you safe?

    Why Your Castle Needs More Than a Moat

    Basic permissions feel comforting until you realize they don’t scale with the way AI works. Copilot can read, understand, and surface content from SharePoint and OneDrive at lightning speed. That’s great for productivity, but it also means anything shared too broadly becomes easier to discover. Role-based access control alone doesn’t catch this. It’s the illusion of safety—strong in theory, but shallow when one careless link spreads access wider than planned.

    The real problem isn’t that Copilot leaks data on its own—it’s that misconfigured sharing creates a larger surface area for Copilot to surface insights. A forgotten contract library with wide-open links looks harmless until the system happily indexes the files and makes them searchable. Suddenly, what was tucked in a corner turns into part of the knowledge backbone. Oversharing isn’t always dramatic—it’s often invisible, and that’s the bigger risk.

    This is where SharePoint Advanced Management comes in. Basic RBAC is your moat, but SAM adds walls and watchtowers. The walls are the enforcement policies you configure, and the watchtowers are your Data Access Governance views. DAG reports give administrators visibility into potentially overshared sites—what’s shared externally, how many files carry sensitivity labels, or which sites are using broad groups like “Everyone except external users.” With these views, you don’t just walk in circles telling yourself everything’s locked down—you can actually spot the fires smoldering on the horizon.

    DAG isn’t item-by-item forensics; it’s site-level intelligence. You see where oversharing is most likely, who the primary admin is, and how sensitive content might be spread. That’s usually enough to trigger a meaningful review, because now IT and content owners know *where* to look instead of guessing. Think of it as a high tower with a spyglass. You don’t see each arrow in flight, but you notice which gates are unguarded.

    Like any tool, DAG has limits. Some reports show only the top 100 sites in the admin center for the past 30 days, with CSV exports going up to 10,000 rows—and in some cases, up to a million. Reports can take hours to generate, and you can only run them once a day. That means you’re not aiming for nonstop surveillance. Instead, DAG gives you recurring, high-level intelligence that you still need to act on. Without people stepping in, a report is just a scroll pinned to the wall.

    So what happens when you act on it? Let’s go back to the contract library example. Running audits by hand across every site is impossible. But from that DAG report, you might spot the one site with external links still live from a completed project. It’s not an obvious problem until you see it—yet that one gate could let the wrong person stroll past your defenses. Now, instead of combing through thousands of sites, you zero in on the one that matters.
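    Since DAG findings leave the admin center as CSV exports, a little scripting turns a 10,000-row dump into a short triage list. A sketch in Python; the column names are assumptions for illustration, so match them to the headers your tenant actually exports:

    ```python
    import csv

    BROAD_GROUPS = ("Everyone", "Everyone except external users")

    def triage(dag_csv_path: str) -> list[dict]:
        """Rank report rows so the riskiest sites surface first."""
        flagged = []
        with open(dag_csv_path, newline="", encoding="utf-8") as fh:
            for row in csv.DictReader(fh):
                score = 0
                if any(g in row.get("SharedWith", "") for g in BROAD_GROUPS):
                    score += 2                  # broad groups in use
                if row.get("ExternalSharing", "").lower() == "on":
                    score += 1                  # external links possible
                if int(row.get("SensitiveFileCount") or 0) > 0:
                    score += 2                  # labeled content present
                if score:
                    flagged.append({"site": row.get("SiteUrl", "?"), "score": score})
        return sorted(flagged, key=lambda r: r["score"], reverse=True)
    ```

    The top of that list is where the spyglass should point first.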
    And here’s the payoff: using DAG doesn’t just show you a problem, it shows you unknown problems. It shifts the posture from “assume everything’s fine” to “prove everything is in shape.” It’s better than running around with a torch hoping you see something—because the tower view means you don’t waste hours on blind patrols.

    But here’s the catch: spotting risk is only half the battle. You still need people inside the castle to care enough to fix it. A moat and tower don’t matter if the folks in charge of the gates keep leaving them open. That’s where we look next—because in this defense system, the site owners aren’t just inhabitants. They’re supposed to be the guards.

    Turning Site Owners into Castle Guards

    In practice, a lot of governance gaps come from the way responsibilities are split. IT builds the systems, but the people closest to the content—the site owners—know who actually needs to be inside. They have the local context, which means they’re the only ones who can spot when a guest account or legacy teammate no longer belongs. That’s why SharePoint Advanced Management includes a feature built for them: Site Access Reviews.

    Most SAM features live in the hands of admins through the SharePoint admin center. But Site Access Reviews are different—they directly involve site owners. Instead of IT chasing down every outdated permission on every site, the feature pushes a prompt to the owner: here’s your list of who has access, now confirm who should stay. It’s a simple checklist, but it shifts the job from overloaded central admins to the people who actually understand the project history.

    The difference might not sound like much, but it rewires the whole governance model. Without this, IT tries to manage hundreds or thousands of sites blind, often relying on stale org charts or detective work through audit logs. With Site Access Reviews, IT delegates the check to owners who know who wrapped up the project six months ago and which externals should have been removed with it. No spreadsheets, no endless ticket queues. Just a structured prompt that makes ownership real.

    Take a common example: a project site is dormant, external sharing was never tightened, and a guest account is still roaming around months after the last handoff. Without this feature, IT has to hunt and guess. With Site Access Reviews, the site owner gets a nudge and can end that access in seconds. It’s not flashy—it’s scheduled housekeeping. But it prevents the quiet risks that usually turn into breach headlines.

    Another benefit is how the system links together. Data Access Governance reports highlight where oversharing is most likely: sites with broad groups like “Everyone” or external links. From there, you can initiate Site Access Reviews as a corrective step. One tool spots the gates left open, the other hands the keys back to the people running that tower. And if you’re managing at scale, there’s support for automation. If you run DAG outputs and use the PowerShell support, you can script actions or integrate with wider workflows so this isn’t just a manual cycle—it scales with the size of your tenant.

    The response from business units is usually better than admins expect. At first glance, a site owner might view this as extra work. But in practice, it gives them more control.
    They’re no longer left wondering why IT revoked a permission without warning. They’re the ones making the call, backed by clear data. Governance stops feeling like top-down enforcement and starts feeling like shared stewardship.

    And for IT, this is a huge relief. Instead of being the bottleneck handling every request, they set the policies, generate the DAG reports, and review overall compliance. They oversee the castle walls, but they don’t have to patrol every hallway. Owners do their part, AI provides the intelligence, and IT stays focused on bigger strategy rather than micromanaging. The system works because the roles are divided cleanly.

    In day-to-day terms, this keeps access drift from building up unchecked. Guest accounts don’t linger for years because owners are reminded to prune them. Overshared sites get revisited at regular intervals. Admins still manage the framework, but the continual maintenance is distributed. That’s a stronger model than endless firefighting.

    Seen together, Site Access Reviews with DAG reporting become less about command and control, and more about keeping the halls tidy so Copilot and other AI tools don’t surface content that never should have been visible. It’s proactive, not reactive. You get fewer surprises, fewer blind spots, and far less stress when auditors come asking hard questions.

    Of course, not every problem is about who should be inside the castle. Sometimes the bigger question is what kind of lock you’re putting on each door. Because even if owners are doing their reviews, not every room in your estate needs the same defenses.

    The Difference Between Bolting the Door and Locking the Vault

    Sometimes the real challenge isn’t convincing people to care about access—it’s choosing the right type of lock once they do. In SharePoint, that choice often comes down to two very different tools: Block Download and Restricted Access Control. Both guard sensitive content, but they work in distinct ways, and knowing the difference saves you from either choking off productivity or leaving gaps wider than you realize.

    Block Download is the lighter hand. It lets users view files in the browser but prevents downloading, printing, or syncing them. That also means no pulling the content into Office desktop apps or third‑party programs—the data stays inside your controlled web session. It’s a “look, but don’t carry” model. Administrators can configure it at the site level or even tie it to sensitivity labels so only marked content gets that extra protection. Some configurations, like applying it for Teams recordings, do require PowerShell, so it’s worth remembering this isn’t always a toggle in the UI.

    Restricted Access Control—or RAC—operates at a tougher level. Instead

    18 min
  5. 2 DAYS AGO

    Copilot Studio: Simple Build, Hidden Traps

    Imagine rolling out your first Copilot Studio agent, and instead of impressing anyone, it blurts out something flimsy like, “I think the policy says… maybe?” That’s the natural 1 of bot building. But with a couple of fixes—clear instructions, grounding it in the actual policy doc—you can turn that blunder into a natural 20 that cites chapter and verse. By the end of this video, you’ll know how to recreate a bad response in the Test pane, fix it so the bot cites the real doc, and publish a working pilot. Quick aside—hit Subscribe now so these walkthroughs auto‑deploy to your playlist. Of course, getting a clean roll in the test window is easy. The real pain shows up when your bot leaves the dojo and stumbles in the wild.

    Why Your Perfect Test Bot Collapses in the Wild

    So why does a bot that looks flawless in the test pane suddenly start flailing once it’s pointed at real users? The short version: Studio keeps things padded and polite, while the real world has no such courtesy. In Studio, the inputs you feed are tidy. Questions are short, phrased cleanly, and usually match the training examples you prepared. That’s why it feels like a perfect streak. But move into production, and people type like people. A CFO asks, “How much can I claim when I’m at a hotel?” A rep might type “hotel expnse limit?” with a typo. Another might just say, “Remind me again about travel money.” All of those mean the same thing, but if you only tested “What is the expense limit?” the bot won’t always connect the dots.

    Here’s a way to see this gap right now: open the Test pane and throw three variations at your bot—first the clean version, then a casual rewrite, then a version with a typo. Watch the responses shift. Sometimes it nails all three. Sometimes only the clean one lands. That’s your first hint that beautiful test results don’t equal real‑world survival.

    The technical reason is intent coverage. Bots rely on trigger phrases and topic definitions to know when to fire a response. If all your examples look the same, the model gets brittle. A single synonym can throw it. The fix is boring, but it works: add broader trigger phrases to your Topics, and don’t just use the formal wording from your policy doc. Sprinkle in the casual, shorthand, even slightly messy phrasing people actually use. You don’t need dozens, just enough to cover the obvious variations, then retest.

    Channel differences make this tougher. Studio’s Test pane is only a simulation. Once you publish to a channel like Teams, SharePoint, or a demo website, the platform may alter how input text is handled or how responses render. Teams might split lines differently. A web page might strip formatting. Even small shifts—like moving a key phrase to another line—can change how the model weighs it. That’s why Microsoft calls out the need for iterative testing across channels. A bot that passes in Studio can still stumble when real-world formatting tilts the terrain.

    Users also bring expectations. To them, rephrasing a question is normal conversation. They aren’t thinking about intents, triggers, or semantic overlap. They just assume the bot understands like a co-worker would. One bad miss—especially in a demo—and confidence is gone. That’s where first-time builders get burned: the neat rehearsal in Studio gave them false security, but the first casual user input in Teams collapsed the illusion.
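    The three-variation test is easy to script once your bot is reachable from code. A tiny harness sketch; ask() is a placeholder for however you reach the agent (manually via the Test pane, or a published channel client), and the $200 marker assumes the Contoso expenses example used later in this episode:

    ```python
    VARIANTS = [
        "What is the expense limit?",           # clean, matches training
        "Remind me again about travel money",   # casual rewrite
        "hotel expnse limit?",                  # shorthand with a typo
    ]

    def probe(ask):
        """Return which phrasings still surface the real policy answer."""
        return {q: "$200" in ask(q) for q in VARIANTS}

    # Example: results = probe(lambda q: my_bot_client.send(q))
    # If only the clean phrasing passes, broaden your trigger phrases.
    ```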
    Let’s ground this with one more example. In Studio, you type “What’s the expense limit?” The bot answers directly: “Policy states $200 per day for lodging.” Perfect. Deploy it. Now try “Hey, what can I get back for a hotel again?” Instead of citing the policy, the bot delivers something like “Check with HR” or makes a fuzzy guess. Same intent, totally different outcome. That swap—precise in rehearsal, vague in production—is exactly what we’re talking about.

    The practical takeaway is this: treat Studio like sparring practice. Useful for learning, but not proof of readiness. Before moving on, try the three‑variation test in the Test pane. Then broaden your Topics to include synonyms and casual phrasing. Finally, when you publish, retest in each channel where the bot will live. You’ll catch issues before your users do.

    And there’s an even bigger trap waiting. Because even if you get phrasing and channels covered, your bot can still crash if it isn’t grounded in the right source. That’s when it stops missing questions and starts making things up. Imagine a bot that sounds confident but is just guessing—that’s where things get messy next.

    The Rookie Mistake: Leaving Your Bot Ungrounded

    The first rookie mistake is treating Copilot Studio like a crystal ball instead of a rulebook. When you launch an agent without grounding it in real knowledge, you’re basically sending a junior intern into the boardroom with zero prep. They’ll speak quickly, they’ll sound confident—and half of what they say will collapse the second anyone checks. That’s the trap of leaving your bot ungrounded.

    At first, the shine hides it. A fresh build in Studio looks sharp: polite greetings, quick replies, no visible lag. But under the hood, nothing solid backs those words. The system is pulling patterns, not facts. Ungrounded bots don’t “know” anything—they bluff. And while a bluff might look slick in the Test pane, users out in production will catch it instantly.

    The worst outcome isn’t just weak answers—it’s hallucinations. That’s when a bot invents something that looks right but has no basis in reality. You ask about travel reimbursements, and instead of declining politely, the bot makes up a number that sounds plausible. One staffer books a hotel based on that bad output, and suddenly you’re cleaning up expense disputes and irritated emails. The sentence looked professional. The content was vapor.

    The Contoso lab example makes this real. In the official hands-on exercise, you’re supposed to upload a file called Expenses_Policy.docx. Inside, the lodging limit is clearly stated as $200 per night. Now, if you skip grounding and ask your shiny new bot, “What’s the hotel policy?” it may confidently answer, “$100 per night.” Totally fabricated. Only when you actually attach that Expenses_Policy.docx does the model stop winging it. Grounded bots cite the doc: “According to the corporate travel policy, lodging is limited to $200 per day.” That difference—fabrication versus citation—is all about the grounding step.

    So here’s exactly how you fix it in the interface. Go to your agent in Copilot Studio. From the Overview screen, click Knowledge. Select + Add knowledge, then choose to upload a file. Point it at Expenses_Policy.docx or another trusted source. If you’d rather connect to a public website or SharePoint location, you can pick that too—but files are cleaner. After uploading, wait. Indexing can take 10 minutes or more before the content is ready. Don’t panic if the first test queries don’t pull from it immediately. Once indexing finishes, rerun your question. When it’s grounded correctly, you’ll see the actual $200 answer along with a small citation showing it came from your uploaded doc. That citation is how you know you’ve rolled the natural 20.
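    That check is worth automating as a regression test so a re-published agent can’t silently lose its grounding. A hedged sketch; ask() is again a placeholder client, and the markers it looks for ($200 and the document name) come from the Contoso lab example:

    ```python
    def is_grounded(answer: str) -> bool:
        cites_limit = "$200" in answer
        cites_source = "Expenses_Policy" in answer
        return cites_limit and cites_source

    def smoke_test(ask) -> None:
        answer = ask("What's the hotel policy?")
        assert is_grounded(answer), (
            "Ungrounded reply. Check that indexing finished (it can take "
            "10+ minutes) and that the topic searches only your sources."
        )
    ```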
    One common misconception is assuming conversational boosting will magically cover the gaps. Boosting doesn’t invent policy awareness—it just amplifies text patterns. Without a knowledge source to anchor, boosting happily spouts generic filler. It’s like giving that intern three cups of coffee and hoping caffeine compensates for ignorance. The lab docs even warn about this: if no match is found in your knowledge, boosting may fall back to the model’s baked-in general knowledge and return vague or inaccurate answers. That’s why you should configure critical topics to only search your added sources when precision matters. Don’t let the bot run loose in the wider language model if the stakes are compliance, finance, or HR.

    The fallout from ignoring this step adds up fast. Ungrounded bots might work fine for chit‑chat, but once they answer about reimbursements or leave policies, they create real helpdesk tickets. Imagine explaining to finance why five employees all filed claims at the wrong rate—because your bot invented a limit on the fly. The fix costs more than just uploading the doc on day one.

    Grounding turns your agent from an eager but clueless intern into what gamers might call a rules lawyer. It quotes the book, not its gut. Attach the Expenses_Policy.docx, and suddenly the system enforces corporate canon instead of improvising. Better still, responses give receipts—clear citations you can check. That’s how you protect trust. On a natural 1, you’ve built a confident gossip machine that spreads made-up rules. On a natural 20, you’ve built a grounded expert, complete with citations. The only way to get the latter is by feeding it verified knowledge sources right from the start.

    And once your bot can finally tell the truth, you hit the next challenge: shaping how it tells that truth. Because accuracy without personality still makes users bounce.

    Teaching Your Bot Its Personality

    Personality comes next, and in Copilot Studio, you don’t get one for free. You have to write it in. This is where you stop letting the system sound like a test dummy and start shaping it into something your users actually want to talk to. In practice, that means editing the name, description, and instruction fields that live on the Overview page. Leave them blank, and you end up with canned replies that feel like an NPC stuck in tutorial mode.

    Here’s the part many first-time builders miss—the system already has a default style the second you hit “create.” If you don’t touch the fields, you’ll get a bland greeter with no authority and no context. Contex

    19 min
  6. 2 DAYS AGO

    Why Your Intranet Search Sucks (And How to Fix It)

    You know that moment when you search your intranet, type the exact title of a document, and it still vanishes into the void? That’s not bad luck—that’s bad Information Architecture. Before we start the dungeon crawl, hit subscribe so you don’t miss future best‑practice loot drops. Here’s what you’ll walk away with today: a quick checklist to spot what’s broken, fixes that make Copilot actually useful, and the small design choices that stop search from failing. Well‑planned IA is the prerequisite for a high‑performing intranet, and most orgs don’t realize it until users are already frustrated. So the real question is: where in the map is your IA breaking down?

    The Hidden Dungeon Map: The Six Core Elements

    If you want a working intranet, you need more than scattered pages and guesswork. The backbone is what I call the hidden dungeon map: six core elements that hold the whole architecture together. They’re not optional. They’re not interchangeable. They are the framework that keeps your content visible and usable: global navigation, hub navigation, local navigation, metadata, search, and personalization. Miss one, and the structure starts to wobble.

    Think of them as your six party roles. Global navigation is the tank that points everyone in the right direction. Hub navigation is the healer, tying related sites into something that actually works together. Local navigation is your DPS, cutting through site-level clicks with precision. Metadata is the scout, marking everything so it can be tracked and recovered later. Search is the wizard, powerful but only as good as the spell components—your metadata and navigation. And personalization is the bard, tuning the experience so the right message gets to the right person at the right time. That’s the full roster. Straightforward, but deadly when ignored.

    The trouble is, most intranet failures aren’t loud. They don’t trigger red banners. They creep in quietly. Users stop trying search because they never find what they need, or they bounce from one site to the next until they give up. Silent cuts like that build into a trust problem. You can see it in real terms if you ask: can someone outside your team find last year’s travel policy in under 90 seconds? If not, your IA is hiding more than it’s helping.

    Another problem is imbalance. Organizations love to overbuild one element while neglecting another. Giant navigation menus stacked three levels deep look impressive, but if your documents are all tagged with “final_v2,” search will flop. Relying only on the wizard when the scout never did its job is a natural 1 roll, every time. The reverse is also true: some teams treat metadata like gospel but bury their global links under six clicks. Each element leans on the others. If one role is left behind, the raid wipes.

    And here’s the hard truth—AI won’t save you from bad architecture. Copilot or semantic search can’t invent metadata that doesn’t exist. It can’t magically create navigation where no hub structure was set. The machine is only as effective as the groundwork you’ve already done. If you feed it chaos, you’ll get chaos back. Smart investments at the architecture level are what make the flashy tools worth using.

    It’s also worth pointing out this isn’t a solo job. Information architecture is a team sport, spread across roles. Global navigation usually falls with intranet owners and comms leads. Hubs are often run by hub owners and business stakeholders. Local navigation and metadata involve site owners and content creators.
    IT admins sit across the whole thing, wiring compliance and governance in. It’s cross-team by design, which means you need agreement on map-making before the characters hit the dungeon.

    When all six parts are set up, something changes. Navigation frames the world so people don’t get lost. Hubs bind related zones into meaningful regions. Metadata tags the loot. Search pulls it on demand. Personalization fine-tunes what matters to each player. That balance means you’re not improvising every fix or losing hours in scavenger hunts—it means you’re building a system where both humans and AI can actually succeed. That’s the real win condition.

    Before we move on, here’s a quick action you can take. Pause, pick one of the six elements—navigation, metadata, or search—and run a light audit. Don’t overthink it. Just ask if it’s working right now. That single diagnostic step can save you from months of frustration later. Because from here, we’re about to get specific. There are three different maps built into every intranet, and knowing how they overlap is the first real test of whether users make progress—or wander in circles.

    World Map vs. Local Maps: Global, Hub, and Local Navigation

    Every intranet lives on three distinct maps: the world map, the regional maps, and the street-level sketch. In platform terms, that’s global navigation, hub navigation, and local navigation. If those maps don’t agree, your users aren’t adventuring—they’re grinding random encounters with no idea which way is north.

    Global navigation is the overworld view. It tells everyone what lands exist and how major territories connect. In Microsoft 365, you unlock it through the SharePoint app bar, which shows up on every site once a home site is set. It’s tenant-wide by design. Global nav isn’t there to list every page or document—it’s the continental outline: Home, News, Resources, Tools. Broad categories everyone in the company should trust. If this skeleton bends out of shape, people don’t even know which continent they spawned on.

    Hub navigation works like a regional map. Join a guild hall in an RPG and you see trainers, quest boards, shops—the things tied to that one region. Hubs in SharePoint do exactly that. They unify related sites like HR, Finance, or Legal so they don’t float around as disconnected islands. Hub nav appears just below the suite bar, over the site’s local nav, and every site joined to that hub respects the same links and shared branding. It’s also security-trimmed: if a user doesn’t have access to a site in the hub, they won’t see its content surface magically. Permissions don’t change by association. Use audience targeting if you want private links to show up only for the right people. That stops mixed parties from thinking they missed a questline they were never allowed to run.

    Local navigation is the street map—the hand-drawn dungeon sketch you keep updating as you poke around. It’s specific to a single site and guides users from one page, list, library, or task to another inside that domain. On a team site it’s on the left as the quick launch. On a communication site it’s up top instead. Local nav should cover tactical moves: policies, project docs, calendars. The player should find common quests inside two clicks. If they’re digging five levels down and retracing breadcrumbs, the dungeon layout is broken.
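    The two-click rule and the link budgets discussed in this episode are scriptable if you model your navigation as a simple tree. A toy sketch; the nested-dict structure is purely illustrative, not how SharePoint stores navigation:

    ```python
    # Navigation modeled as nested dicts, purely for illustration.
    hub_nav = {
        "HR": {"Benefits": {}, "Payroll": {}},
        "Finance": {"Travel": {}, "Procurement": {}},
    }

    def count_links(nav: dict) -> int:
        return sum(1 + count_links(child) for child in nav.values())

    def max_depth(nav: dict) -> int:
        # Empty dicts are leaves; only non-empty children add depth.
        return 1 + max((max_depth(c) for c in nav.values() if c), default=0)

    assert count_links(hub_nav) <= 100, "keep hub nav scannable"
    assert max_depth(hub_nav) <= 2, "top tasks should sit within two clicks"
    ```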
Global says “HR,” hub says “People Services,” and local nav buries benefits documents under “Archive/Old-Version-Uploads.” Users follow one map, get looped back to another, and realize none of them match. Subsites layered five deep create breadcrumb trails that collapse the moment you reorganize, leading to dead ends in Teams or Outlook links. It only takes a few busted trails before staff stop trying navigation altogether and fire off emails instead. That’s when trust in the intranet collapses. There are also technical boundaries worth noting. Each navigation tier can technically hold up to 500 links, but stuffing them in is like stocking a bag with 499 health potions. Sure, it fits—but no one can use it. A practical rule is to keep hub nav under a hundred links. Anything more and users can’t scan it without scrolling fatigue. Use those limits as sanity checks when you’re tempted to add “just one more” menu. Here’s how to test this in practice—two checks you can run right now in under a minute. First, open the SharePoint app bar. Do those links boil down to your real global categories—Home, News, Tools—or are they trying to be a department sitemap? Second, pick a single site. Check the local nav. Count how many clicks it takes to hit the top three tasks. If it’s more than two, you’re making users roll with disadvantage every time. When these three layers match, things click. Users trust the overworld for direction, the hubs for context, and the locals for getting work done. Better still, AI tools see the same paths. Copilot doesn’t misplace scrolls if the maps agree on where those scrolls live. The system doesn’t feel like a coin toss; it behaves predictably for both people and machines. But even the best navigation can’t label a blade if every sword in the vault is called “Item_final_V3.” That’s a different kind of invisibility. The runes you carve into your gear—your metadata—are what make search cast real spells instead of fumbles. Metadata: The Magic Runes of Search When navigation gives you the map, metadata gives the legend. Metadata—the magic runes of search—is what tells SharePoint and AI tools what a file actually is, not just what it happens to be named. Without it, everything blurs into vague boxes and folders. With it, your system knows the difference between a project plan, a travel policy, and a vendor contract. The first rule: use columns and content types in your document libraries and Site Pages library. This isn’t overkill—it’s the translation layer that lets search and highlighted content web parts actually filter and roll up the right files. A tagged field like “Region = West” doesn’t just decorate the document; it becomes a lever for search, dynamic rollups, even audience-targeted news feeds. AI copilots look for those same properties…
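To make that last rune concrete, here is a minimal sketch of the kind of query tagged properties enable: asking SharePoint’s REST search API for documents where Region = West. It assumes the Region column has been mapped to the managed property RefinableString00 and that you already hold a bearer token; the site URL and token are placeholders, not working values.

```python
# Sketch: query SharePoint search for documents tagged Region = "West".
# Assumes the "Region" site column is mapped to the managed property
# RefinableString00, and ACCESS_TOKEN holds a valid Azure AD bearer
# token -- both are placeholders for illustration.
import requests

SITE = "https://contoso.sharepoint.com/sites/intranet"  # hypothetical site
ACCESS_TOKEN = "<bearer-token>"

resp = requests.get(
    f"{SITE}/_api/search/query",
    params={
        "querytext": "'RefinableString00:West'",
        "selectproperties": "'Title,Path'",
    },
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/json;odata=verbose",
    },
)
resp.raise_for_status()

# Walk the verbose OData response down to the result rows.
rows = (resp.json()["d"]["query"]["PrimaryQueryResult"]
        ["RelevantResults"]["Table"]["Rows"]["results"])
for row in rows:
    cells = {c["Key"]: c["Value"] for c in row["Cells"]["results"]}
    print(cells.get("Title"), "->", cells.get("Path"))
```

The point isn’t the code; it’s that the filter only works because someone tagged the column and mapped the property first. No metadata, no query.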

    18 min
  7. 3 DAYS AGO

    Copilot Studio vs. Teams Toolkit: Critical Differences

    Rolling out Microsoft 365 Copilot feels like unlocking a legendary item—until you realize it only comes with the starter kit. Out of the box, it draws on baseline model knowledge and the content inside your tenant. Useful, but what about your dusty SOPs, the HR playbook, or that monster ERP system lurking in the corner? Without connectors, grounding, or custom agents, Copilot can’t tap into those. The good news—you can teach it. The trick is knowing when to reach for Copilot Studio, when to switch to Teams Toolkit, and how governance, monitoring, and licensing fit into the run. Because here’s the real twist: building your first agent isn’t the final boss fight. It’s just the tutorial. The Build Isn’t the Boss Fight You test your first agent, the prompts work, the demo data looks spotless, and for a second you feel like you’ve cleared the game. That’s the trap. The real work starts once you aim that same build at production, where the environment plays by very different rules. Too many makers assume a clean answer in testing equals mission accomplished. In reality, that’s just story mode on easy difficulty. Production doesn’t care if your proof-of-concept responded well on your dev laptop. What production demands is stability under stress, with compliance checks, identity guardrails, and uptime standards breathing down its neck. And here’s where the first boss monsters appear. Scalability: can the agent handle enterprise load without choking? That’s where monitoring and diagnostic logs from the Copilot Control System matter. Stale grounding: when data in SharePoint or Dataverse changes, does the agent still tether to the right snapshot? Connectors and Graph grounding are the safeguards. Compliance and auditability: if a regulator or internal auditor taps you on the shoulder, can the agent’s history be reviewed with Purview logs and sensitivity labels in place? If any of these fail, the “victory screen” vanishes fast. Running tests in Copilot Studio is like sparring in a training arena with infinite health potions. You can throw spells, cycle prompts, and everything looks shiny. But in live use, every firewall block is a fizzled cast, and an overloaded external data source slows replies to a crawl. That’s the moment when users stop calling it smart and start filing tickets. The most common natural 1 roll comes from teams who put off governance. They tell themselves it’s something to layer on later. But postponing governance almost always leads to ugly surprises. Scaling issues, data mismatches, or compliance gaps show up at exactly the wrong moment. Security and compliance aren’t optional side quests. They’re part of the campaign map. Now let’s talk architecture, because Copilot’s brain isn’t a single block. You’ve got the foundation model—the raw language engine. On top, the orchestrator, which lines up what functions get called and when. Microsoft 365 Copilot provides that orchestration by default, so every request has structure. Then comes grounding—the tether back to enterprise content so answers aren’t fabricated. Finally, the skills—your custom plugins or connectors to do actual tasks. If you treat those four pieces as detached silos, the whole tower wobbles. A solid skill without grounding is just a fancy hallucination. Foundation with no compliance controls becomes a liability. Only when the layers are treated as one stack does the agent stay sturdy. So what does a “win” even look like in the wild? It’s not answering a demo prompt neatly. That’s practice mode. 
The mark of success is holding up under real-world conditions: mid-payroll crunch, data migrations in motion, compliance officers watching, all with a high request load. That’s where an agent proves it deserves to run. And here’s another reason many builds fail: organizations think of them as throwaway projects, not operational systems. Somebody spins up a prototype, shows off a flashy demo, then leaves it unmonitored. Soon, different departments build their own, none of them documented, all of them chewing tokens unchecked. Without a simple operational manual—who owns the connectors, who audits grounding, who checks credit consumption—the landscape turns into a mess of unsynced mini-bosses. Flip the perspective, and it gets much easier. If you start with an operational mindset, the design shifts. You don’t just care about whether the first test looked clean. You harden for the day-to-day campaign. Audit logs, admin gates, backups, health checks—those build trust while keeping the thing alive under pressure. Admins already have usable controls in the Microsoft 365 admin center, where scenarios can be managed and diagnostic feedback surfaces early. Leaning on those tools is what separates a novelty agent from a reliable operator. That’s why building alone doesn’t crown a winner. The test environment gets you to level one. Real deployment, with governance and monitoring in place, is where the actual survival challenge kicks off. And before you march too far into that, you’ll need the right weapon for the fight. Microsoft gives you two—different kits, different rules. Choose wrong, and it’ll feel like bringing a plastic sword to a raid. Copilot Studio vs. Teams Toolkit: Choosing Your Weapon That’s where the real question lands: which tool do you reach for—Copilot Studio or the Teams Toolkit, also called the Microsoft 365 Agents Toolkit? They sound alike, both claim to “extend Copilot,” but they serve very different groups of builders and needs. The wrong choice costs you time, budget, and possibly credibility when your shiny demo wilts in production. Copilot Studio is the maker’s arena. It’s a low‑code, visual builder designed for speed and clarity. You get drag‑and‑drop flows, templates, guided dialogs, and built‑in analytics. Studio comes bundled with a buffet of connectors to Microsoft 365 data sources, so a power user can pull SharePoint content, monitor Teams messages, or surface HR policy docs without ever touching code. You can test, adjust, and publish directly into Microsoft 365 Copilot or even release as a standalone agent with minimal friction. For a department that needs a working workflow this quarter—not next fiscal year—Studio is the fast track. Over 160,000 customers already use Studio for exactly this: reconciling financial data, onboarding employees, or answering product questions in retail. The reason isn’t a mystery—it simply lowers the barrier to entry. If your team already fiddles in PowerApps or automates routine reports in Power Automate, Studio feels like home turf. You don’t need to be a software engineer. You just need a clear goal and basic low‑code chops to click, configure, and deploy. Now, cross over to the Teams Toolkit. This is where full‑stack developers thrive. The Toolkit plugs into VS Code, not a drag‑and‑drop canvas. Here, you architect declarative agents with structured rules, or you go further and create custom engine agents where you define orchestration, model calls, and API handling from scratch. 
You get scaffolding, debugging, configuration, and publishing routes not just inside Copilot, but across Teams, Microsoft 365 apps, the web, and external channels. If Copilot Studio is prefab furniture from the catalog, Toolkit is milling your own planks and wiring the house yourself. The freedom is spectacular—but you’re also responsible for every nail and fuse. The real confusion? Both say “extend Copilot.” In practice, Studio means extending within Microsoft’s defined guardrails: safe connectors, administrative controls, and lightweight governance. The Toolkit means rewriting the guardrails: rolling your own orchestration, calling external LLMs, or building agent behaviors Microsoft didn’t provide out of the box. One approach keeps you safe with templates. The other gives you raw power and expects you to wield it responsibly. A lot of folks think “tool choice equals different UI.” Nope. End‑users see the same prompt box and answer card whether you built the agent in Studio or with Toolkit. That’s by design—the UX layer is unified. What actually changes is behind the curtain: grounding options, scalability, and administrative control. That’s why this decision is operational, not cosmetic. Here’s a practical rule: some grounding capabilities—things like SharePoint content, Teams chats and meetings, embedded files, Dataverse data, or connectors into email and people search—only light up if your tenant has Microsoft 365 Copilot licensing or Copilot Studio metering turned on. If you don’t have that entitlement, picking Studio won’t unlock those tricks. That single licensing check can be the deciding factor for which route you need. So how do you simplify the choice? Roll a quick checklist. One: need fast, auditable, admin‑controlled agents that power users can stand up without bugging IT? Pick Copilot Studio. Two: need custom orchestration, external AI models, or deep integration work stitched straight into enterprise backbones? Pick the Agents Toolkit. Three: don’t trust the labels—trust your team’s actual skill set and goals. The metaphor I use is housing. Studio is prefab—you pick colors and cabinets, but the plumbing and wiring are already safe. Toolkit is raw land—you design every inch, but also carry all the risks if the design buckles. Both can yield a beautiful home. One is faster and less complex, the other is limitless but fragile unless managed well. Both collapse without grounding. Your chosen weapon handles the build, but if it isn’t fed the right data, it just makes confident nonsense faster. A Studio agent without connectors is a parrot. A Toolkit agent without grounding is a custom‑coded parrot. Either way, you’re still living with a bird squawking guesses at your users. And that brings us to the real…
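To ground the “declarative agents with structured rules” idea, here is a hedged sketch of roughly what such a definition looks like, written as a Python dict and saved as the JSON manifest the Toolkit scaffolds. The declarative agent schema is still evolving, so treat the field names as illustrative and the site URL as a placeholder.

```python
# Rough sketch of a declarative agent definition: instructions plus
# SharePoint grounding, no custom orchestration. Field names follow
# the declarative agent manifest at the time of writing -- verify
# against the current schema before shipping. The URL is a placeholder.
import json

agent = {
    "version": "v1.0",
    "name": "HR Policy Agent",  # hypothetical example agent
    "description": "Answers HR policy questions from approved sources.",
    "instructions": "Answer only from the grounded SharePoint content. "
                    "If the answer is not there, say so.",
    "capabilities": [
        {
            "name": "OneDriveAndSharePoint",  # tenant content grounding
            "items_by_url": [
                {"url": "https://contoso.sharepoint.com/sites/HR"}
            ],
        }
    ],
}

with open("declarativeAgent.json", "w", encoding="utf-8") as f:
    json.dump(agent, f, indent=2)
```

A custom engine agent, by contrast, starts where this file ends: you write the orchestration and model calls yourself.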

    20 min
  8. 3 DAYS AGO

    Stop Blaming Users—Your Pipeline Is the Problem

    Ever wonder why your Dataverse pipeline feels like it’s built out of duct tape and bad decisions? You’re not alone. Most of us end up picking between Synapse Link and Dataflow Gen2 without a clear idea of which one actually fits. That’s what kills projects — picking wrong. Here’s the promise: by the end of this, you’ll know which to choose based on refresh frequency, storage ownership and cost, and rollback safety — the three things that decide whether your project hums along or blows up at 2 a.m. For context, Dataflow Gen2 caps out at 48 refreshes per day (about every 30 minutes), while Synapse Link can push as fast as every 15 minutes if you’re willing to manage compute. Hit subscribe to the M365.Show newsletter at m365 dot show for the full cheat sheet and follow the M365.Show LinkedIn page for MVP livestreams. Now, let’s put the scalpel on the table and talk about control. The Scalpel on the Table: Synapse Link’s Control Obsession You ever meet that one engineer who measures coffee beans with a digital scale? Not eyeball it, not a scoop, but grams on the nose. That’s the Synapse Link personality. This tool isn’t built for quick fixes or “close enough.” It’s built for the teams who want to tune, monitor, and control every moving part of their pipeline. If that’s your style, you’ll be thrilled. If not, there’s a good chance you’ll feel like you’ve been handed a jet engine manual when all you wanted was a light switch. At its core, Synapse Link is Microsoft giving you the sharpest blade in the drawer. You decide which Dataverse tables to sync. You can narrow it to only the fields you need, dictate refresh schedules, and direct where the data lands. And here’s the important part: it exports data into your own Azure Data Lake Storage Gen2 account, not into Microsoft’s managed Dataverse lake. That means you own the data, you control access, and you satisfy those governance and compliance folks who ask endless questions about where data physically lives. But that freedom comes with a trade-off. If you want Delta files that Fabric tools can consume directly, it’s up to you to manage that conversion — either by enabling Synapse’s transformation or spinning up Spark jobs. No one’s doing it for you. Control and flexibility, yes. But also your compute bill, your responsibility. And speaking of responsibility, setup is not some two-click wizard. You’re provisioning Azure resources: an active subscription, a resource group, a storage account with hierarchical namespace enabled, plus an app registration with the right permissions or a service principal with data lake roles. Miss one setting, and your sync won’t even start. It’s the opposite of a low-code “just works” setup. This is infrastructure-first, so anyone running it needs to be comfortable with the Azure portal and permissions at a granular level. Let’s go back to that freedom. The draw here is selective syncing and near-real-time refreshes. With Synapse Link, refreshes can run as often as every 15 minutes. For revenue forecasting dashboards or operational reporting — think sales orders that need to appear in Fabric within the hour — that precision is gold. Teams can engineer their pipelines to pull only the tables they need, partition the outputs into optimal formats, and minimize unnecessary storage. It’s exactly the kind of setup you’d want if you’re running pipelines with transformations before shipping data into a warehouse or lakehouse. But precision has a cost. 
Every refresh you tighten, every table you add, every column you leave in “just in case” spins up compute jobs. That means resources in Azure are running on your dime. Which also means finance is involved sooner rather than later. The bargain you’re striking is clear: total control plus table-level precision equals heavy operational overhead if you’re not disciplined with scoping and scheduling. Let me share a cautionary tale. One enterprise wanted fine-grained control and jumped into Synapse Link with excitement. They scoped tables carefully, enabled hourly syncs, even partitioned their exports. It worked beautifully for a while — until multiple teams set up overlapping links on the same dataset. Suddenly, they had redundant refreshes running at overlapping intervals, duplicated data spread across multiple lakes, and governance meetings that felt like crime-scene investigations. The problem wasn’t the tool. It was that giving everyone surgical precision with no central rules led to chaos. The lesson: governance has to be baked in from day one, or Synapse Link will expose every gap in your processes. From a technical angle, it’s impressive. Data lands in Parquet, not some black-box service. You can pipe it wherever you want — Lakehouse, Warehouse, or even external analytics platforms. That open format and storage ownership are exactly what makes engineers excited. Synapse Link isn’t trying to hide the internals. It’s exposing them and expecting you to handle them properly. If your team already has infrastructure for pipeline monitoring, cost management, and security — Synapse Link slots right in. If you don’t, it can sink you fast. So who’s the right audience? If you’re a data engineer who wants to trace each byte, control scheduling down to the quarter-hour, and satisfy compliance by controlling exactly where the data lives, Synapse Link is the right choice. A concrete example: you’re running near-real-time sales feeds into Fabric for forecasting. You only need four tables, but you need them every 15 minutes. You want to avoid extra Dataverse storage costs while running downstream machine learning pipelines. Synapse Link makes perfect sense there. If you’re a business analyst who just wants to light up a Power BI dashboard, this is the wrong tool. It’s like giving a surgical kit to someone who just wanted to open Amazon packages. Bottom line, Synapse Link gives surgical-grade control of your Dataverse integration. That’s freeing if you have the skills, infrastructure, and budgets to handle it. But without that, it’s complexity overload. And let’s be real: most teams don’t need scalpel-level control just to get a dashboard working. Sometimes speed and simplicity mean more than precision. And that’s where the other option shows up — not the scalpel, but the multitool. Sometimes you don’t need surgical precision. You just need something fast, cheap, and easy enough to get the job done without bleeding everywhere. The Swiss Army Knife That Breaks Nail Files: Dataflow Gen2’s Low-Code Magic If Synapse Link is for control freaks, Dataflow Gen2 is for the rest of us who just want to see something on a dashboard before lunch. Think of it as that cheap multitool hanging by the cash register at the gas station. It’s not elegant, it’s not durable, but it can get you through a surprising number of situations. The whole point here is speed — moving Dataverse data into Fabric without needing a dedicated data engineer lurking behind every button click. 
Where Synapse feels like a surgical suite, Dataflow Gen2 is more like grabbing the screwdriver out of the kitchen drawer. Any Power BI user can pick tables, apply a few drag‑and‑drop transformations, and send the output straight into Fabric Lakehouses or Warehouses. No SQL scripts, no complex Azure provisioning. Analysts, low‑code makers, and even the guy in marketing who runs six dashboards can spin up a Dataflow in minutes. Demo time: imagine setting up a customer engagement dashboard, pulling leads and contact tables straight from Dataverse. You’ll have visuals running before your coffee goes cold. Sounds impressive — but the gotchas show up the minute you start scheduling refreshes. Here’s the ceiling you can’t push through: Dataflow Gen2 runs refreshes up to 48 times a day — that’s once every 30 minutes at best. No faster. And unlike Synapse, you don’t get true incremental loads or row‑level updates. What happens is one of two things: append mode, which keeps adding to the Delta table in OneLake, or overwrite mode, which completely replaces the table contents during each run. That’s great if you’re testing a demo, but it can be disastrous if you’re depending on precise tracking or rollback. A lot of teams miss this nuance and assume it works like a transactionally safe system. It’s not — it’s bulk append or wholesale replace. I’ve seen the pain firsthand. One finance dashboard was hailed as a success story after a team stood it up in under an hour with Dataflow Gen2. Two weeks later, their nightly overwrite job was wiping historical rows. To leadership, the dashboard looked fine. Under the hood? Years of transaction history were half scrambled and permanently lost. That’s not a “quirk” — that’s structural. Dataflow doesn’t give you row‑level delta tracking or rollback states. You either keep every refresh stacked up with append (risking bloat and duplication) or overwrite and pray the current version is correct. Now, let’s talk money. Synapse makes you pull out the checkbook for Azure storage and compute. With Dataflow Gen2, it’s tied to Fabric capacity units (CUs). That’s a whole different kind of silent killer. It doesn’t run up Azure GB charges — instead, every refresh eats into a pool of capacity. If you don’t manage refresh frequency and volume, you’ll burn CUs faster than you expect. At first you barely notice; then, during mid‑day loads, your workspace slows to a crawl because too many Dataflows are chewing the same capacity pie. The users don’t blame poor scheduling — they just say “Fabric is slow.” That’s how sneaky the cost trade‑off works. And don’t overlook governance here. Dataflow Gen2 feels almost too open-handed. You can pick tables, filter columns, and mash them into golden datasets… right up until refresh jobs collide…
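To see why that append-versus-overwrite distinction bites, here is a small local illustration using the open-source deltalake package against a throwaway table. It mimics the two write semantics the episode describes; it is not Dataflow Gen2’s internals.

```python
# Local illustration of append vs. overwrite semantics on a Delta
# table -- a stand-in for the OneLake table a Dataflow Gen2 refresh
# writes into. Requires: pip install deltalake pandas
import pandas as pd
from deltalake import DeltaTable, write_deltalake

path = "/tmp/orders_delta"  # throwaway local table

monday = pd.DataFrame({"order_id": [1, 2], "amount": [100, 250]})
tuesday = pd.DataFrame({"order_id": [3], "amount": [75]})

# Append mode: every refresh stacks new rows onto the table.
write_deltalake(path, monday, mode="append")
write_deltalake(path, tuesday, mode="append")
print(len(DeltaTable(path).to_pandas()))  # 3 rows; history (and duplicates) accumulate

# Overwrite mode: each refresh replaces the table contents wholesale.
write_deltalake(path, tuesday, mode="overwrite")
print(len(DeltaTable(path).to_pandas()))  # 1 row; Monday's rows are gone from the current version
```

Run that overwrite nightly against a table other teams treat as history, and you have rebuilt the finance-dashboard incident at home.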

    22 min

