M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation

Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit M365.show.

  1. 6 h ago

    Your SharePoint Is Stuck in 2013 (Here’s the Fix)

    Your SharePoint isn’t outdated because you’re lazy—it’s outdated because legacy workflows are basically bosses that refuse to retire. If you want the cheat codes for modernizing SharePoint, hit Subscribe so these walkthroughs land in your feed. Here’s the twist: you don’t need to swing a +5 developer sword. With Power Platform, you can shape apps and automate flows straight from the lists you already have. And once AI Builder and Copilot Studio join the party, those repetitive file-tagging goblins vanish. And yes—when you use AI Builder with SharePoint, model training data lives in Microsoft Dataverse, accessible only to the model owner or approved admins. The point is simple: you can upgrade your dungeon into a modern AI-powered hub without starting over. Which raises the real question—why does your SharePoint still feel stuck in 2013?

Why Your SharePoint Still Feels Like a Dungeon

When you step into an older SharePoint environment, it often feels less like a collaboration hub and more like walking through a maze built years ago that hasn’t kept up with the rest of the game. Subsites sprawl like abandoned corridors, workflows stall in dark corners, and somewhere an InfoPath form refuses to give up. The result is a space that functions, but in the most lumbering way possible. Here’s the real drag: SharePoint was always meant to be the backbone of teamwork in Microsoft 365. But in many organizations, it never grew past the early levels. Lists and libraries stacked up inside subsites, reliable enough to hold files or track rows of data, but clunky to navigate and slow to adapt. The core is still solid—you’ve got the map of the dungeon—but without shortcuts or automation, you’re spending your time retracing steps. And that gap is where frustration lives. Other platforms have built-in intelligence—tools that automatically categorize, bots that respond in seconds, dashboards that refresh in real time. 
When your SharePoint environment leaves you rummaging through folders by hand or chasing down approvals with emails, the contrast is sharp. It’s not that SharePoint is obsolete. SharePoint data still matters—you modernize how you interact with it, not necessarily toss the data. But the way you use it now feels stuck in slow motion. Take a simple helpdesk scenario. A ticket enters your SharePoint list—a clean start. Ideally, it moves automatically into the right hands, gets tracked, and closes out smoothly. Instead, in an older setup, it drifts between folders like an item cursed to never land where it belongs. By the time support touches it, the requester is frustrated, managers are escalating, and the team looks unresponsive. The bottleneck isn’t staff competence—it’s brittle workflows that refuse to cooperate. That brittleness is tied to legacy workflows—especially those infamous 2010 and 2013 styles. Back when they arrived, they were powerful for their time, but today they’re a liability. They’re hard-coded, fragile, and break the moment you try to adjust them for modern business needs. Here’s the piece that makes this urgent: SharePoint 2010 workflows are already retired, and Microsoft has disabled SharePoint 2013 workflows for new tenants (April 2, 2024) and scheduled full retirement for SharePoint 2013 workflows in SharePoint Online on April 2, 2026 — so this isn’t optional if you’re migrating to the modern cloud. Quick win: run a simple inventory of any classic workflows or InfoPath forms in your environment — note them down, because those are the boss fights you’ll want to replace first. Sticking to old workflows is like running a Windows XP tower in an office full of modern devices. It technically boots and runs. At first, you think, hey—no license fee, no extra cost. But the hidden expense piles up: wasted clicks, missed notifications, and constant detours just to find the right file. Nothing implodes spectacularly. 
Instead, small inefficiencies accumulate until your team slowly stops trusting the system. Part of why this happens is the eternal tug-of-war between users and IT. Users want speed—like filling out forms on their phone or automating low-level tasks. IT worries (legitimately) about compliance, data residency, and governance. Modern tools promise efficiency, but adopting them always feels like rolling the dice: streamline the user’s life, or risk reading the dreaded “policy violation” alert. That tension explains why so many installations stay frozen in time. But here’s the thing: you don’t need to torch your environment and start over. SharePoint modernization isn’t a rebuild—it’s an upgrade in how you interact with what you already have. Your lists, libraries, and stored data still serve as the core. Modern tools like Power Platform simply layer on smarter workflows, adaptive apps, and accessible dashboards. Think of it less as tearing down the dungeon and more as unlocking fast travel: same map, new ways to move through it. And when you swap fragile workflows for modern automation, the payoff is immediate. That same helpdesk ticket can enter today, get logged instantly, assigned correctly, and tracked without anyone digging through folders. Notifications fire, dashboards update, and staff get visibility instead of suspense. For users, it feels like the system finally joined their side. On a natural 20, modernization even lets you reuse the cobwebs—the old structures—to build rope for climbing higher. You don’t abandon the environment. You evolve it. You keep the bones, but change the muscle so it actually supports how people want to work today. That’s the real win: efficiency without losing history. And once the workflows stop dragging you down, attention shifts to another big opportunity hiding in plain sight: those so-called “boring” lists. You may see them as simple spreadsheets, but there’s more potential there than most people realize. 
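Before moving on: the modernized helpdesk hand-off described above is, at its core, just a routing rule plus a status update. Here is a toy Python sketch of the decision a Power Automate flow would make when a new item lands in the helpdesk list. The category names and team names are invented for illustration; the real flow would do this with a Condition or Switch action against the actual list columns.

```python
# Toy model of the routing step an automated flow performs when a new
# ticket appears in a SharePoint helpdesk list. Categories and team
# names are hypothetical stand-ins.

ROUTING = {
    "hardware": "Desktop Support",
    "access": "Identity Team",
    "network": "Infrastructure",
}

def route_ticket(ticket: dict) -> dict:
    """Assign a ticket to a team and stamp its status."""
    team = ROUTING.get(ticket.get("category", ""), "Service Desk")  # fallback queue
    return {**ticket, "assigned_to": team, "status": "Assigned"}

ticket = {"id": 101, "title": "VPN drops hourly", "category": "network"}
routed = route_ticket(ticket)
print(routed["assigned_to"])  # Infrastructure
```

The point is how little logic it takes: once the rule lives in an automated flow instead of someone's inbox, the ticket never drifts between folders waiting for a human to notice it.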
Turning Lists into Playable Power Apps

This is where SharePoint starts feeling less like baggage and more like potential: lists can be turned into apps with Power Apps. The same data that looks dry in rows and columns can power a mobile-ready interface that your team actually wants to use. Instead of scrolling through cells, you tap, snap, and submit—with less friction and fewer groans. Think of the list as the backend engine. It hums along keeping data aligned, but on its own it asks you to fight through clunky forms and finicky clicks. When you connect that list into Power Apps, you suddenly add a front end that feels responsive and clean. The list still stores the information underneath, but what users see and tap on now behaves like a modern app instead of a spreadsheet in disguise. The usual hesitation hits quick: “But I’m not a developer.” That fear has kept plenty of admins from clicking the “Create App” button. You picture syntax errors, missed semicolons, maybe blowing away the whole list with one wrong keystroke. But reality plays out differently. No mountains of code, no black-screen console full of warnings—just drag fields, reorder layouts, adjust colors. Within minutes you’re holding a working interface built on top of your data. And here’s the kicker: Power Apps can generate a canvas app from a SharePoint list quickly—you don’t need to port your data or write backend code; the canvas app points directly to the list as its data source. That’s why people describe it as nearly one-click. It’s shaping, not coding. For advanced custom logic there’s Power Fx, but you don’t need to touch it unless you want to. The most obvious pain Power Apps solves is manual entry. In a plain SharePoint list, you’re wrestling dropdowns, adding attachments through awkward buttons, and hoping nobody fat-fingers a date. On mobile it’s worse—pinch-zoom gymnastics just to fill in a single item. 
That’s when motivation dies, because the tool feels like punishment instead of support. Now picture this: your team keeps an expense tracking list. Nobody likes updating it, receipts pile up, and reconciling takes weeks. Rebuild it as a Power App and suddenly field staff open it on their phone, snap a photo of a receipt, enter the number, and tap submit. Done. The data drops straight into the list, formatted correctly, already attached. What was a chore becomes muscle memory. That’s the magic worth keeping in focus. Power Apps canvas apps connect directly to lists, instantly interactive, no messy migrations. You don’t risk data loss. You don’t rebuild the backend. You just place a usable shield over the skeleton. Users get clear buttons and mobile-friendly forms, and you get better adoption because nobody has to fight the UI anymore. Here’s a quick win you can test right now: open any SharePoint list, hit the “Power Apps” menu, choose “Create an app,” and let it scaffold the shell for you. Change a field label, shift a button, hide a column you know is useless. In under ten minutes you’ll already have a version you could hand to the team that runs smoothly on desktop and mobile. Try it once, and you’ll never look at a list the same way again. Once that lightbulb turns on, it’s hard to stop. That contacts list becomes a tap-to-call phone book. The onboarding checklist becomes an app new hires actually breeze through without digging in a browser. Even asset inventory—the dusty pile of laptop records—comes alive when you can scan and update with a phone camera. Each little upgrade chips away at the friction that made SharePoint feel frozen in time. And the payoff comes fast: adoption rises, data quality improves, and your lists stop being a bottleneck. You don’t have to beg users to enter data; they’ll do it because it’s easier now. The skeleton is the same, but the armor makes it functional in today’s environment. 
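To be clear, the canvas app writes to the list through the SharePoint connector with no code at all. But if you want to see the "list as backend" idea under the hood, the same write can be expressed as a Microsoft Graph create-item call. A minimal sketch, assuming hypothetical site and list IDs (the helper function here is ours for illustration, not part of any SDK):

```python
import json

GRAPH = "https://graph.microsoft.com/v1.0"

def build_add_item_request(site_id: str, list_id: str, fields: dict):
    """Return (url, body) for Graph's create-listItem call.
    A canvas app performs this write for you via the connector."""
    url = f"{GRAPH}/sites/{site_id}/lists/{list_id}/items"
    body = json.dumps({"fields": fields})
    return url, body

# Placeholder IDs below -- substitute your real site and list IDs.
url, body = build_add_item_request(
    "contoso.sharepoint.com,GUID1,GUID2",   # hypothetical composite site id
    "expenses",                             # hypothetical list id
    {"Title": "Taxi receipt", "Amount": 42.50},
)
print(url)
```

Whether it is the connector or a raw Graph call doing the work, the list stays the single source of truth; the app is just a friendlier hand reaching into it.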
But here’s the catch: an app alone only solves half the problem.

    21 min.
  2. 18 h ago

    The Castle Gate Is Open—Is Your Entra ID Secured?

    Imagine your company’s digital castle with wide‑open gates. Everyone can stroll right in—vendors, employees who left years ago, even attackers dressed as your CFO. That’s what an unprotected identity perimeter looks like. Before we roll initiative on today’s breach boss, hit Subscribe so you get weekly security briefings without missing the quest log. Here’s the twist: in the Microsoft cloud, your castle gate is no longer a firewall—it’s Entra ID. In this video, you’ll get a practical overview of the essential locks—MFA, Conditional Access, Privileged Identity Management, and SSO—and the first steps to harden them. Because building walls isn’t enough when attackers can just blink straight past them.

The New Castle Walls

The new castle walls aren’t made of stone anymore. Once upon a time, you could build a giant moat, man every tower, and assume attackers would line up politely at the front gate. That model worked when business stayed behind a single perimeter, tucked safely inside racks of servers under one roof. But now your kingdom lives in clouds, browsers, and every laptop that walks out of the office. The walls didn’t just crack—they dissolved. Back then, firewalls were your dragons, roaring at the edge of the network. You trusted that anything inside those walls belonged there. Cubicles, desktops bolted under desks, devices you imaged yourself—every user was assumed trustworthy just by virtue of being within the perimeter. It was simpler, but it also hinged on one assumption: that the moat was wide enough, and attackers couldn’t simply skip it. That assumption crumbled fast. Cloud apps scattered your resources far beyond the citadel. Remote work spread employees everywhere from home offices to airport lounges. And bring-your-own-device policies let personal tablets and home laptops waltz right into the mix. Each shift widened the attack surface, and suddenly the moat wasn’t holding anyone back. 
In this new reality, firewalls didn’t vanish, but their ability to guard the treasure dropped sharply. An attacker doesn’t charge at your perimeter anymore; they slip past by grabbing a user’s credentials. A single leaked password can work like a skeleton key, no brute force required. That’s why the focus shifted. Identity became the castle wall. In the cloud, Microsoft secures the platform itself, but what lives within it—your configuration, your policies, your user access—that’s on you. That shared-responsibility split is the reason identity is now your primary perimeter. Your “walls” are no longer walls at all; they’re the constant verification points that decide whether someone truly belongs. Think of a password like a flimsy wooden door bolted onto your vault. It exists, but it’s laughably fragile. Add multi-factor authentication, and suddenly that wooden plank is replaced with a gate that slams shut unless the right key plus the right proof line up. It forces attackers to push harder, and often that effort leaves traces you can catch before they crown themselves royalty inside your systems. Identity checks aren’t just a speed bump—they’re where almost every modern attack begins. When a log-in comes from across the globe at 3 a.m. under an employee’s name, a perimeter-focused model shrugs and lets it pass. To the old walls, credentials are enough. But to a system built around identity, that’s the moment where the guard at the door says, “Wait—prove it.” Failure to control this space means intruders walk in dressed like your own staff. You won’t catch them with alerts about blocked ports or logon attempts at your firewall. They’re already inside, blending seamlessly with daily activity. That’s where data gets siphoned, ransomware gets planted, and attackers live quietly for months. So the new castle walls aren’t firewalls in a server room. 
They’re the tools that protect who can get in: identity protections, context checks, and policies wrapped around every account. And the main gate in that setup is Microsoft Entra ID. If it’s weak, every other safeguard collapses because entry has already been granted. Which leaves us at the real question administrators wrestle with: if keeping the gate means protecting identity, what does it look like to rely on just a single password? So if the walls no longer work, what becomes the gate? Identity—and Entra ID is the gatekeeper. And as we’ll see next, trusting passwords alone is like rolling a D20 and hitting a natural 1 every time.

Rolling a Natural 1 with Passwords

Passwords have long been the front door key for digital systems, but that lock is both brittle and predictable. For years, typing a string of characters into a box was the default proof of identity. It was cheap, simple, and everyone understood it. But that very simplicity created deep habits—habits attackers quickly learned to exploit. The main problem is reuse. People juggle so many accounts that recycling the same password across services feels inevitable. When one forum gets breached, those stolen logins often unlock doors at work too. Credential dumps sold on dark-web marketplaces mean attackers don’t even need to bother guessing—they just buy the keys already labeled. That’s a massive flaw when your entire perimeter depends on “something you know.” Even when users try harder, the math still works against them. Complex passwords laced with symbols and numbers might look tough, but machines can rattle through combinations at astonishing speed. Patterned choices—birthdays, company names, seasonal phrases—make it faster still. A short password today can fall to brute force in seconds, and no amount of rotating “Spring2024!” to “Summer2024!” changes that. On top of that, no lock can withstand social engineering when users get tricked into handing over the key. 
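To put rough numbers on that brute-force math: the search space is just alphabet size raised to the password length. The guess rate below is an illustrative assumption for an offline attack against fast hashes, not a measured benchmark.

```python
# Back-of-envelope brute-force arithmetic. The guess rate is an
# illustrative assumption, not a benchmark.
GUESSES_PER_SEC = 10_000_000_000  # assume 10 billion guesses per second

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to try every password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SEC

print(f"8 chars, a-z only:    {seconds_to_exhaust(26, 8):,.0f} seconds")
print(f"12 chars, 94 symbols: {seconds_to_exhaust(94, 12) / 3.15e7:,.0f} years")
```

The exact figures matter less than the shape: length and character variety grow the search space exponentially, which is why short patterned passwords fall in seconds while long random ones hold for geological timescales.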
Phishing strips away even good password practices with a simple fake login screen. A convincing email and a spoofed domain are usually enough. At that point, attackers don’t outsmart a password policy—they just outsmart the person holding it. This is why passwords remain necessary, but never sufficient. Microsoft’s own guidance is clear: strong authentication requires layering defenses. That means passwords are only one factor among several, not the one defense holding back a breach. Without that layering, your user login page may as well be guarded by a cardboard cutout instead of a castle wall. The saving throw here is multi-factor authentication. MFA doesn’t replace your password—it backs it up. You supply a secret you know, but you must also confirm something you have or something you are. That extra check stops credential stuffing cold and makes stolen dumps far less useful. In practice, the difference is night and day: with MFA, logging in requires access to more than a leaked string of text. Entra ID supports multiple forms of this protection—push approvals, authenticator codes, even physical tokens. Which method you pick depends on your organization’s needs, but the point is consistency. Layering MFA across accounts drastically lowers the success rate of attacks because stolen credentials on their own lose most of their value. Policies enforcing periodic password changes or quirky complexity rules can actually backfire, creating predictable user behaviors. By contrast, MFA works with human tendencies instead of against them. It accepts that people will lean toward convenience, and it cushions those habits with stronger verification windows. If you only remember one thing from this section: passwords are the old wooden door—MFA is your reinforced gate. One is technically a barrier; the other turns casual attempts into real work for an attacker. And the cost bump to criminals is the whole point. Of course, even armor has gaps. 
MFA shields you against stolen passwords, but it doesn’t answer the question of context: who is logging in, from where, on what device, and at what time. That’s where the smarter systems step in. Imagine a guard at the castle gate who doesn’t just check if you have a key, but also notices if you’re arriving from a faraway land at 3 a.m. That’s where the real gatekeeping evolves.

The Smart Bouncer at the Gate

Picture a castle gate with a bouncer who doesn’t just wave you through because you shouted the right password. This guard checks your ID, looks for tells that don’t match the photo, and asks why you’re showing up at this hour. That’s Conditional Access in your Microsoft cloud. It’s not just another lock; it’s the thinking guard that evaluates signals like device compliance, user risk, and geographic location, then decides in real time whether to allow, block, or demand more proof. MFA alone is strong armor, but armor isn’t judgment. Social engineering and fatigue attacks can still trick a user into approving a fraudulent prompt at three in the morning, turning a “yes” into a false green light. Conditional Access closes that gap. If the login context looks suspicious—wrong city, unhealthy device, or risk scores that don’t align—policies can force another verification step or block the attempt outright. It’s the difference between blind acceptance and an actual interrogation. Take a straightforward scenario. An employee account logs in from across the globe at an odd hour, far from their normal region. Username, password, and MFA all check out. A traditional system shrugs. Conditional Access instead notices the anomaly, cross-references location and time, and triggers additional controls—like requiring another factor or denying the sign-in entirely. The bouncer doesn’t just say “you match the description”; it notices that nothing else makes sense. What makes this especially effective is how flexible the rules can be. 
A common early win is to ensure older, insecure authentication methods aren’t allowed.
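Conditional Access policies are configured in the Entra admin center, not written as code, but the decision shape is easy to model. A toy evaluator with invented signal names and rules, just to show the allow / step-up / block pattern (real policies evaluate far richer signals than this):

```python
# Toy model of a Conditional Access decision. Signal names and rules
# are invented for illustration; real policies live in Entra ID.

def evaluate_sign_in(signals: dict) -> str:
    """Return 'block', 'mfa_required', or 'allow' for a sign-in attempt."""
    if signals.get("legacy_auth"):            # old protocols can't do MFA
        return "block"
    if signals.get("risk_level") == "high":   # e.g. leaked credentials
        return "block"
    if not signals.get("device_compliant") or signals.get("unfamiliar_location"):
        return "mfa_required"                 # step-up verification
    return "allow"

print(evaluate_sign_in({"device_compliant": True}))                    # allow
print(evaluate_sign_in({"device_compliant": True,
                        "unfamiliar_location": True}))                 # mfa_required
print(evaluate_sign_in({"legacy_auth": True}))                         # block
```

Note the first rule: blocking legacy authentication outright is exactly the early win described above, because those older protocols can never satisfy a step-up challenge.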

    19 min.
  3. 1 day ago

    The Hidden Engine Inside Microsoft Fabric

    Here’s the part that changes the game: in Microsoft Fabric, Power BI doesn’t have to shuttle your data back and forth. With OneLake and Direct Lake mode, it can query straight from the lake with performance on par with import mode. That means greatly reduced duplication, no endless exports, and less wasted time setting up fragile refresh schedules. The frame we’ll use is simple: input with Dataflows Gen2, process inside the lakehouse with pipelines, and output through semantic models and Direct Lake reports. Each step adds a piece to the engine that keeps your data ecosystem running. And it all starts with the vault that makes this possible.

OneLake: The Data Vault You Didn’t Know You Already Owned

OneLake is the part of Fabric that Microsoft likes to describe as “OneDrive for your data.” At first it sounds like a fluffy pitch, but the mechanics back it up. All workloads tap into a single, cloud-backed reservoir where Power BI, Synapse, and Data Factory already know how to operate. And since the lake is built on open formats like Delta Lake and Parquet, you’re not being locked into a proprietary vault that you can’t later escape. Think of it less as marketing spin and more as a managed, standardized way to keep everything in one governed stream. Compare that to the old way most of us handled data estates. You’d inherit one lake spun up by a past project, somebody else funded a warehouse, and every department shared extracts as if Excel files on SharePoint were the ultimate source of truth. Each system meant its own connectors and quirks, which failed just often enough to wreck someone’s weekend. What you ended up with wasn’t a single strategy for data, but overlapping silos where reconciling dashboards took more energy than actually using the numbers. A decent analogy is a multiplayer game where every guild sets up its own bank. Some have loose rules—keys for everyone—while others throw three-factor locks on every chest. 
You’re constantly remembering which guild has which currency, which chest you can still open, and when the locks reset. Moving loot between them turns into a burden. That’s the same energy when every department builds its own lake. You don’t spend time playing the game—you spend it accounting for the mess. OneLake tries to change that approach by providing one vault. Everyone drops their data into a single chest, and Fabric manages consistent access. Power BI can query it, Synapse can analyze it, and Data Factory can run pipelines through it—all without fragmenting the store or requiring duplicate copies. The shared chest model cuts down on duplication and arguments about which flavor of currency is real, because there is just one governed vault under a shared set of rules. Now, here’s where hesitation kicks in. “Everything in one place” sounds sleek for slide decks, but having a single dependency raises real red flags. If the lake goes sideways, that could ripple through dashboards and reports instantly. The worry about a single point of failure is valid. But Microsoft attempts to offset that risk with built-in resilience tools baked into Fabric itself, along with governance hooks that are not bolted on later. Instead of an “instrumented by default” promise, consider the actual wiring: OneLake integrates directly with Microsoft Purview. That means lineage tracking, sensitivity labeling, and endorsement live alongside your data from the start. You’re not bolting on random scanners or third-party monitors—metadata and compliance tags flow in as you load data, so auditors and admins can trace where streams came from and where they went. Observability and governance aren’t wishful thinking; they’re system features you get when you use the lake. For administrators still nervous about centralization, Purview isn’t the only guardrail. Fabric also provides monitoring dashboards, audit logs, and admin control points. 
And if you have particularly strict network rules, there are Azure-native options such as managed private endpoints or trusted workspace configs to help enforce private access. The right pattern will depend on the environment, but Microsoft has at least given you levers to pilot access rather than leaving you exposed. That’s why the “OneDrive for data” image sticks. With OneDrive, you put files in one logical spot and then every Microsoft app can open them without you moving them around manually. You don’t wonder if your PowerPoint vanished into some other silo—it surfaces across devices because it’s part of the same account fabric. OneLake applies that model to data estates. Place it once. Govern it once. Then let the workloads consume it directly instead of spawning yet another copy. The simplicity isn’t perfect, but it does remove a ton of the noise many enterprises suffer from when shadow IT teams create mismatched lakes under local rules. Once you start to see Power BI, Synapse, and pipeline tools working against the same stream instead of spinning up different ones, the “OneLake” label makes more sense. Your environment stops feeling like a dozen unsynced chests and starts acting like one shared vault. And that sets us up for the real anxiety point: knowing the vault exists is one thing; deciding when to hit the switch that lights it up inside your Power BI tenant is another. That button is where most admins pause, because it looks suspiciously close to a self-destruct.

Switching on Fabric Without Burning Down Power BI

Switching on Fabric is less about tearing down your house and more about adding a new wing. In the Power BI admin portal, under tenant settings, sits the control that makes it happen. By default, it’s off so admins have room to plan. Flip it on, and you’re not rewriting reports or moving datasets. All existing workspaces stay the same. What you unlock are extra object types—lakehouses, pipelines, and new levers you can use when you’re ready. 
Think of it like waking up to see new abilities appear on your character’s skill tree; your old abilities are untouched, you’ve just got more options. Now, just because the toggle doesn’t break anything doesn’t mean you should sprint into production. Microsoft gives you flexibility to enable Fabric fully across the tenant, but also lets you enable it for selected users, groups, or even on a per-capacity basis. That’s your chance to keep things low-risk. Instead of rolling it out for everyone overnight, spin up a test capacity, give access only to IT or a pilot group, and build one sandbox workspace dedicated to experiments. That way the people kicking tires do it safely, without making payroll reporting the crash test dummy. When Fabric is enabled, new components surface but don’t activate on their own. Lakehouses show up in menus. Pipelines are available to build. But nothing auto-migrates and no classic dataset is reworked. It’s a passive unlock—until you decide how to use it. On a natural 20, your trial team finds the new menus, experiments with a few templates, and moves on without disruption. On a natural 1, all that really happens is the sandbox fills with half-finished project files. Production dashboards still hum the same tune as yesterday. The real risk comes later when workloads get tied to capacities. Fabric isn’t dangerous because of the toggle—it’s dangerous if you mis-size or misplace workloads. Drop a heavy ingestion pipeline into a tiny trial SKU and suddenly even a small query feels like it’s moving through molasses. Or pile everything from three departments into one slot and watch refreshes queue into next week. That’s not a Fabric failure; that’s a deployment misfire. Microsoft expects this, which is why trial capacities exist. You can light up Fabric experiences without charging production compute or storage against your actual premium resources. 
Think of trial capacity as a practice arena: safe, ring-fenced, no bystanders harmed when you misfire a fireball. Microsoft even provides Contoso sample templates you can load straight in. These give you structured dummy data to test pipelines, refresh cycles, and query behavior without putting live financials or HR data at risk. Here’s the smart path. First, enable Fabric for a small test group instead of the entire tenant. Second, assign a trial capacity and build a dedicated sandbox workspace. Third, load up one of Microsoft’s example templates and run it like a stress test. Walk pipelines through ingestion, check your refresh schedules, and keep an eye on runtime behavior. When you know what happens under load in a controlled setting, you’ve got confidence before touching production. The mistakes usually happen when admins skip trial play altogether. They toss workloads straight onto undersized production capacity or let every team pile into one workspace. That’s when things slow down or queue forever. Users don’t see “Fabric misconfiguration”; they just see blank dashboards. But you avoid those natural 1 rolls by staging and testing first. The toggle itself is harmless. The wiring you do afterward decides whether you get smooth uptime or angry tickets. Roll Fabric into production after that and cutover feels almost boring. Reports don’t break. Users don’t lose their favorite dashboards. All you’ve done is make new building blocks available in the same workspaces they already know. Yesterday’s reports stay alive. Tomorrow’s teams get to summon lakehouses and pipelines as needed. Turning the toggle was never a doomsday switch—it was an unlock, a way to add an expansion pack without corrupting the save file. And once those new tools are visible, the next step isn’t just staring at them—it’s feeding them. These lakehouses won’t run on air. 
They need steady inputs to keep the system alive, and that means turning to the pipelines that actually stream fuel into the lake. Dataflows Gen2:

    19 min.
  4. 1 day ago

    Autonomous Agents Gone Rogue? The Hidden Risks

    Imagine logging into Teams and being greeted by a swarm of AI agents, each promising to streamline your workday. They’re pitching productivity—yet without rules, they can misinterpret goals and expand access in ways that make you liable. It’s like handing your intern a company credit card and hoping the spend report doesn’t come back with a yacht on it. Here’s the good news: in this episode you’ll walk away with a simple framework—three practical controls and some first steps—to keep these agents useful, safe, and aligned. Because before you can trust them, you need to understand what kind of coworkers they’re about to become.

Meet Your New Digital Coworkers

Meet your new digital coworkers. They don’t sit in cubicles, they don’t badge in, and they definitely never read the employee handbook. These aren’t the dusty Excel macros we used to babysit. Agents observe, plan, and act because they combine three core ingredients: memory, entitlements, and tool access. That’s the Microsoft-and-BCG framework, and it’s the real difference—your new “colleague” can keep track of past interactions, jump between systems you’ve already trusted, and actually use apps the way a person would. Sure, the temptation is to joke about interns again. They show up full of energy but have no clue where the stapler lives. Same with agents—they charge into your workflows without really understanding boundaries. But unlike an intern, they can reach into Outlook, SharePoint, or Dynamics the moment you deploy them. That power isn’t just quirky—it’s a governance problem. Without proper data loss prevention and entitlements, you’ve basically expanded the attack surface across your entire stack. If you want a taste of how quickly this becomes real, look at the roadmap. Microsoft has already teased SharePoint agents that manage documents directly in sites, not just search results. Imagine asking an assistant to “clean up project files,” and it actually reorganizes shared folders across teams. 
Impressive on a slide deck, but also one misinterpretation away from archiving the wrong quarter’s financials. That’s not a theoretical risk—that’s next year’s ops ticket. Old-school automation felt like a vending machine. You punched one button, the Twix dropped, and if you were lucky it didn’t get stuck. Agents are nothing like that. They can notice the state of your workflow, look at available options, and generate steps nobody hard-coded in advance. It’s adaptive—and that’s both the attraction and the hazard. On a natural 1, the outcome isn’t a stuck candy bar—it’s a confident report pulling from three systems with misaligned definitions, presented as gospel months later. Guess who signs off when Finance asks where the discrepancy came from? Still, their upside is obvious. A single agent can thread connections across silos in ways your human teams struggle to match. It doesn’t care if the data’s in Teams, SharePoint, or some Dynamics module lurking in the background. It will hop between them and compile results without needing email attachments, calendar reminders, or that one Excel wizard in your department. From a throughput perspective, it’s like hiring someone who works ten times faster and never stops to microwave fish in the breakroom. But speed without alignment is dangerous. Agents don’t share your business goals; they share the literal instructions you feed them. That disconnect is the “principal-agent problem” in a tech wrapper. You want accuracy and compliance; they deliver a closest-match interpretation with misplaced confidence. It’s not hostility—it’s obliviousness. And obliviousness with system-level entitlements can burn hotter than malice. 
That’s how you get an over-eager assistant blasting confidential spreadsheets to external contacts because “you asked it to share the update.” So the reality is this: agents aren’t quirky sidelines; they’re digital coworkers creeping into core workflows, spectacularly capable yet spectacularly clueless about context. You might fall in love with their demo behavior, but the real test starts when you drop them into live processes without the guardrails of training or oversight. And here’s your curiosity gap: stick with me, because in a few minutes we’ll walk through the three things every agent needs—memory, entitlements, and tools—and why each one is both a superpower and a failure point if left unmanaged. Which sets up your next job: not just using tools, but managing digital workers as if they’re part of your team. And that comes with no HR manual, but plenty of responsibility. Managers as Bosses of Digital Workers Imagine opening your performance review and seeing a new line: “Managed 12 human employees and 48 AI agents.” That isn’t sci‑fi bragging—it’s becoming a real metric of managerial skill. Experts now say a manager’s value will partly be judged on how many digital workers they can guide, because prompting, verification, and oversight are fast becoming core leadership abilities. The future boss isn’t just delegating to people; they’re orchestrating a mix of staff and software. That shift matters because AI agents don’t work like tools you leave idle until needed. They move on their own once prompted, and they don’t raise a hand when confused. Your role as a manager now requires skills that look less like writing memos and more like defining escalation thresholds—when does the agent stop and check with you, and when does it continue? According to both PwC and the World Economic Forum, the three critical managerial actions here are clear prompting, human‑in‑the‑loop oversight, and verification of output. If you miss one of these, the risk compounds quickly. 
With human employees, feedback is constant—tone of voice, quick questions, subtle hesitation. Agents don’t deliver that. They’ll hand back finished work regardless of whether their assumptions made sense. That’s why prompting is not casual phrasing; it’s system design. A single vague instruction can ripple into misfiled data, careless access to records, or confident but wrong reports. Testing prompts before deploying them becomes as important as reviewing project plans. Verification is the other half. Leaders are used to spot‑checking for quality but may assume automation equals precision. Wrong assumption. Agents improvise, and improvisation without review can be spectacularly damaging. As Ayumi Moore Aoki points out, AI has a talent for generating polished nonsense. Managers cannot assume “professional tone” means “factually correct.” Verification—validating sources, checking data paths—is leadership now. Oversight closes the loop. Think of it less like old‑school micromanagement and more like access control. Babak Hodjat phrases it as knowing the boundaries of trust. When you hand an agent entitlements and tool access, you still own what it produces. Managers must decide in advance how much power is appropriate, and put guardrails in place. That oversight often means requiring human approval before an agent makes potentially risky changes, like sending data externally or modifying records across core systems. Here’s the uncomfortable twist: your reputation as a manager now depends on how well you balance people and digital coworkers. Too much control and you suffocate the benefits. Too little control and you get blind‑sided by errors you didn’t even see happening. The challenge isn’t choosing one style of leadership—it’s running both at once. People require motivation and empathy. Agents require strict boundaries and ongoing calibration. Keeping them aligned so they don’t disrupt each other’s workflows becomes part of your daily management reflex. 
Think of your role now as a conductor—not in the HR department sense, but literally keeping time with two different sections. Human employees bring creativity and empathy. AI agents bring speed and reach. But if no one directs them, the result is discord. The best leaders of the future will be judged not only on their team’s morale, but on whether human and digital staff hit the same tempo without spilling sensitive data or warping decision‑making along the way. On a natural 1, misalignment here doesn’t just break a workflow—it creates a compliance investigation. So the takeaway is simple. Your job title didn’t change, but the content of your role did. You’re no longer just managing people—you’re managing assistant operators embedded in every system you use. That requires new skills: building precise prompts, testing instructions for unintended consequences, validating results against trusted sources, and enforcing human‑in‑the‑loop guardrails. Success here is what sets apart tomorrow’s respected managers from the ones quietly ushered into “early retirement.” And because theory is nice but practice is better, here’s your one‑day challenge: open your Copilot or agent settings and look for where human‑in‑the‑loop approvals or oversight controls live. If you can’t find them, that gap itself is a finding—it means you don’t yet know how to call back a runaway process. Now, if managing people has always begun with onboarding, it’s fair to ask: what does onboarding look like for an AI agent? Every agent you deploy comes with its own starter kit. And the contents of that kit—memory, entitlements, and tools—decide whether your new digital coworker makes you look brilliant or burns your weekend rolling back damage. The Three Pieces Every Agent Needs If you were to unpack what actually powers an agent, Microsoft and BCG call it the starter kit: three essentials—memory, entitlements, and tools. 
Miss one, and instead of a digital coworker you can trust, you’ve got a half-baked bot stumbling around your environment. Get them wrong, a

    20 min.
  5. -2 days

    SharePoint Premium Is Not What You Think

    If you want advantage on governance, hit subscribe—it’s the stat buff that keeps your castle standing. Now, imagine giving Copilot the keys to your company’s content… but forgetting to lock the doors. That’s what happens when advanced AI runs inside a weak governance structure. SharePoint Premium doesn’t just boost productivity with AI—it includes SharePoint Advanced Management, or SAM, which adds walls like Restricted Access Control, Data Access Governance, and site lifecycle tools. SAM helps reduce oversharing and manage access, but you still need policies and owners to act. In this run, you’ll see how to spot overshared sites, enforce Restricted Access Control, and even run access reviews so your walls aren’t guarded by ducks. Which brings us to the question—does a moat really keep you safe? Why Your Castle Needs More Than a Moat Basic permissions feel comforting until you realize they don’t scale with the way AI works. Copilot can read, understand, and surface content from SharePoint and OneDrive at lightning speed. That’s great for productivity, but it also means anything shared too broadly becomes easier to discover. Role-based access control alone doesn’t catch this. It’s the illusion of safety—strong in theory, but shallow when one careless link spreads access wider than planned. The real problem isn’t that Copilot leaks data on its own—it’s that misconfigured sharing creates a larger surface area for Copilot to surface insights. A forgotten contract library with wide-open links looks harmless until the system happily indexes the files and makes them searchable. Suddenly, what was tucked in a corner turns into part of the knowledge backbone. Oversharing isn’t always dramatic—it’s often invisible, and that’s the bigger risk. This is where SharePoint Advanced Management comes in. Basic RBAC is your moat, but SAM adds walls and watchtowers. The walls are the enforcement policies you configure, and the watchtowers are your Data Access Governance views. 
DAG reports give administrators visibility into potentially overshared sites—what’s shared externally, how many files carry sensitivity labels, or which sites are using broad groups like “Everyone except external users.” With these views, you don’t just walk in circles telling yourself everything’s locked down—you can actually spot the fires smoldering on the horizon. DAG isn’t item-by-item forensics; it’s site-level intelligence. You see where oversharing is most likely, who the primary admin is, and how sensitive content might be spread. That’s usually enough to trigger a meaningful review, because now IT and content owners know *where* to look instead of guessing. Think of it as a high tower with a spyglass. You don’t see each arrow in flight, but you notice which gates are unguarded. Like any tool, DAG has limits. Some reports show only the top 100 sites in the admin center, scoped to the past 30 days; CSV exports extend that to 10,000 rows, and for certain report types up to a million. Reports can take hours to generate, and you can only run them once a day. That means you’re not aiming for nonstop surveillance. Instead, DAG gives you recurring, high-level intelligence that you still need to act on. Without people stepping in, a report is just a scroll pinned to the wall. So what happens when you act on it? Let’s go back to the contract library example. Running audits by hand across every site is impossible. But from that DAG report, you might spot the one site with external links still live from a completed project. It’s not an obvious problem until you see it—yet that one gate could let the wrong person stroll past your defenses. Now, instead of combing through thousands of sites, you zero in on the one that matters. And here’s the payoff: DAG doesn’t just show you a known problem—it surfaces the problems you didn’t know you had. 
It shifts the posture from “assume everything’s fine” to “prove everything is in shape.” It’s better than running around with a torch hoping you see something—because the tower view means you don’t waste hours on blind patrols. But here’s the catch: spotting risk is only half the battle. You still need people inside the castle to care enough to fix it. A moat and tower don’t matter if the folks in charge of the gates keep leaving them open. That’s where we look next—because in this defense system, the site owners aren’t just inhabitants. They’re supposed to be the guards. Turning Site Owners into Castle Guards In practice, a lot of governance gaps come from the way responsibilities are split. IT builds the systems, but the people closest to the content—the site owners—know who actually needs to be inside. They have the local context, which means they’re the only ones who can spot when a guest account or legacy teammate no longer belongs. That’s why SharePoint Advanced Management includes a feature built for them: Site Access Reviews. Most SAM features live in the hands of admins through the SharePoint admin center. But Site Access Reviews are different—they directly involve site owners. Instead of IT chasing down every outdated permission on every site, the feature pushes a prompt to the owner: here’s your list of who has access, now confirm who should stay. It’s a simple checklist, but it shifts the job from overloaded central admins to the people who actually understand the project history. The difference might not sound like much, but it rewires the whole governance model. Without this, IT tries to manage hundreds or thousands of sites blind, often relying on stale org charts or detective work through audit logs. With Site Access Reviews, IT delegates the check to owners who know who wrapped up the project six months ago and which externals should have been removed with it. No spreadsheets, no endless ticket queues. 
Just a structured prompt that makes ownership real. Take a common example: a project site is dormant, external sharing was never tightened, and a guest account is still roaming around months after the last handoff. Without this feature, IT has to hunt and guess. With Site Access Reviews, the site owner gets a nudge and can end that access in seconds. It’s not flashy—it’s scheduled housekeeping. But it prevents the quiet risks that usually turn into breach headlines. Another benefit is how the system links together. Data Access Governance reports highlight where oversharing is most likely: sites with broad groups like “Everyone” or external links. From there, you can initiate Site Access Reviews as a corrective step. One tool spots the gates left open, the other hands the keys back to the people running that tower. And if you’re managing at scale, there’s support for automation. If you run DAG outputs and use the PowerShell support, you can script actions or integrate with wider workflows so this isn’t just a manual cycle—it scales with the size of your tenant. The response from business units is usually better than admins expect. At first glance, a site owner might view this as extra work. But in practice, it gives them more control. They’re no longer left wondering why IT revoked a permission without warning. They’re the ones making the call, backed by clear data. Governance stops feeling like top-down enforcement and starts feeling like shared stewardship. And for IT, this is a huge relief. Instead of being the bottleneck handling every request, they set the policies, generate the DAG reports, and review overall compliance. They oversee the castle walls, but they don’t have to patrol every hallway. Owners do their part, AI provides the intelligence, and IT stays focused on bigger strategy rather than micromanaging. The system works because the roles are divided cleanly. In day-to-day terms, this keeps access drift from building up unchecked. 
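As a sketch of what that PowerShell route can look like: the SharePoint Online Management Shell exposes DAG cmdlets for generating and exporting these reports. The tenant URL, site-group report entity, and report ID below are placeholders and assumptions—verify the exact cmdlet parameters and valid `-ReportEntity` values against the current module documentation before relying on them.

```powershell
# Connect to the tenant's admin endpoint (placeholder URL for your tenant)
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"

# Request a one-off DAG report run, e.g. sites using broad groups
# (-ReportEntity value is an assumption; check the module docs for valid names)
Start-SPODataAccessGovernanceInsight -ReportEntity EveryoneExceptExternalUsersAtSite `
    -Workload SharePoint -ReportType Snapshot

# Later: list completed report runs, then export one to CSV
Get-SPODataAccessGovernanceInsight -ReportEntity EveryoneExceptExternalUsersAtSite
Export-SPODataAccessGovernanceInsight -ReportID "<report-id>"
```

From the exported CSV you could then script follow-ups—opening tickets for the riskiest sites, or kicking off Site Access Reviews—so the report becomes an input to a workflow rather than a scroll pinned to the wall.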
Guest accounts don’t linger for years because owners are reminded to prune them. Overshared sites get revisited at regular intervals. Admins still manage the framework, but the continual maintenance is distributed. That’s a stronger model than endless firefighting. Seen together, Site Access Reviews with DAG reporting become less about command and control, and more about keeping the halls tidy so Copilot and other AI tools don’t surface content that never should have been visible. It’s proactive, not reactive. You get fewer surprises, fewer blind spots, and far less stress when auditors come asking hard questions. Of course, not every problem is about who should be inside the castle. Sometimes the bigger question is what kind of lock you’re putting on each door. Because even if owners are doing their reviews, not every room in your estate needs the same defenses. The Difference Between Bolting the Door and Locking the Vault Sometimes the real challenge isn’t convincing people to care about access—it’s choosing the right type of lock once they do. In SharePoint, that choice often comes down to two very different tools: Block Download and Restricted Access Control. Both guard sensitive content, but they work in distinct ways, and knowing the difference saves you from either choking off productivity or leaving gaps wider than you realize. Block Download is the lighter hand. It lets users view files in the browser but prevents downloading, printing, or syncing them. That also means no pulling the content into Office desktop apps or third‑party programs—the data stays inside your controlled web session. It’s a “look, but don’t carry” model. Administrators can configure it at the site level or even tie it to sensitivity labels so only marked content gets that extra protection. Some configurations, like applying it for Teams recordings, do require PowerShell, so it’s worth remembering this isn’t always a toggle in the UI. 
Restricted Access Control—or RAC—operates at a tougher level. Instead

    18 min.
  6. -2 days

    Copilot Studio: Simple Build, Hidden Traps

    Imagine rolling out your first Copilot Studio agent, and instead of impressing anyone, it blurts out something flimsy like, “I think the policy says… maybe?” That’s the natural 1 of bot building. But with a couple of fixes—clear instructions, grounding it in the actual policy doc—you can turn that blunder into a natural 20 that cites chapter and verse. By the end of this video, you’ll know how to recreate a bad response in the Test pane, fix it so the bot cites the real doc, and publish a working pilot. Quick aside—hit Subscribe now so these walkthroughs auto‑deploy to your playlist. Of course, getting a clean roll in the test window is easy. The real pain shows up when your bot leaves the dojo and stumbles in the wild. Why Your Perfect Test Bot Collapses in the Wild So why does a bot that looks flawless in the test pane suddenly start flailing once it’s pointed at real users? The short version: Studio keeps things padded and polite, while the real world has no such courtesy. In Studio, the inputs you feed are tidy. Questions are short, phrased cleanly, and usually match the training examples you prepared. That’s why it feels like a perfect streak. But move into production, and people type like people. A CFO asks, “How much can I claim when I’m at a hotel?” A rep might type “hotel expnse limit?” with a typo. Another might just say, “Remind me again about travel money.” All of those mean the same thing, but if you only tested “What is the expense limit?” the bot won’t always connect the dots. Here’s a way to see this gap right now: open the Test pane and throw three variations at your bot—first the clean version, then a casual rewrite, then a version with a typo. Watch the responses shift. Sometimes it nails all three. Sometimes only the clean one lands. That’s your first hint that beautiful test results don’t equal real‑world survival. The technical reason is intent coverage. Bots rely on trigger phrases and topic definitions to know when to fire a response. 
If all your examples look the same, the model gets brittle. A single synonym can throw it. The fix is boring, but it works: add broader trigger phrases to your Topics, and don’t just use the formal wording from your policy doc. Sprinkle in the casual, shorthand, even slightly messy phrasing people actually use. You don’t need dozens, just enough to cover the obvious variations, then retest. Channel differences make this tougher. Studio’s Test pane is only a simulation. Once you publish to a channel like Teams, SharePoint, or a demo website, the platform may alter how input text is handled or how responses render. Teams might split lines differently. A web page might strip formatting. Even small shifts—like moving a key phrase to another line—can change how the model weighs it. That’s why Microsoft calls out the need for iterative testing across channels. A bot that passes in Studio can still stumble when real-world formatting tilts the terrain. Users also bring expectations. To them, rephrasing a question is normal conversation. They aren’t thinking about intents, triggers, or semantic overlap. They just assume the bot understands like a co-worker would. One bad miss—especially in a demo—and confidence is gone. That’s where first-time builders get burned: the neat rehearsal in Studio gave them false security, but the first casual user input in Teams collapsed the illusion. Let’s ground this with one more example. In Studio, you type “What’s the expense limit?” The bot answers directly: “Policy states $200 per night for lodging.” Perfect. Deploy it. Now try “Hey, what can I get back for a hotel again?” Instead of citing the policy, the bot delivers something like “Check with HR” or makes a fuzzy guess. Same intent, totally different outcome. That swap—precise in rehearsal, vague in production—is exactly what we’re talking about. The practical takeaway is this: treat Studio like sparring practice. Useful for learning, but not proof of readiness. 
Before moving on, try the three‑variation test in the Test pane. Then broaden your Topics to include synonyms and casual phrasing. Finally, when you publish, retest in each channel where the bot will live. You’ll catch issues before your users do. And there’s an even bigger trap waiting. Because even if you get phrasing and channels covered, your bot can still crash if it isn’t grounded in the right source. That’s when it stops missing questions and starts making things up. Imagine a bot that sounds confident but is just guessing—that’s where things get messy next. The Rookie Mistake: Leaving Your Bot Ungrounded The first rookie mistake is treating Copilot Studio like a crystal ball instead of a rulebook. When you launch an agent without grounding it in real knowledge, you’re basically sending a junior intern into the boardroom with zero prep. They’ll speak quickly, they’ll sound confident—and half of what they say will collapse the second anyone checks. That’s the trap of leaving your bot ungrounded. At first, the shine hides it. A fresh build in Studio looks sharp: polite greetings, quick replies, no visible lag. But under the hood, nothing solid backs those words. The system is pulling patterns, not facts. Ungrounded bots don’t “know” anything—they bluff. And while a bluff might look slick in the Test pane, users out in production will catch it instantly. The worst outcome isn’t just weak answers—it’s hallucinations. That’s when a bot invents something that looks right but has no basis in reality. You ask about travel reimbursements, and instead of declining politely, the bot makes up a number that sounds plausible. One staffer books a hotel based on that bad output, and suddenly you’re cleaning up expense disputes and irritated emails. The sentence looked professional. The content was vapor. The Contoso lab example makes this real. In the official hands-on exercise, you’re supposed to upload a file called Expenses_Policy.docx. 
Inside, the lodging limit is clearly stated as $200 per night. Now, if you skip grounding and ask your shiny new bot, “What’s the hotel policy?” it may confidently answer, “$100 per night.” Totally fabricated. Only when you actually attach that Expenses_Policy.docx does the model stop winging it. Grounded bots cite the doc: “According to the corporate travel policy, lodging is limited to $200 per night.” That difference—fabrication versus citation—is all about the grounding step. So here’s exactly how you fix it in the interface. Go to your agent in Copilot Studio. From the Overview screen, click Knowledge. Select + Add knowledge, then choose to upload a file. Point it at Expenses_Policy.docx or another trusted source. If you’d rather connect to a public website or SharePoint location, you can pick that too—but files are cleaner. After uploading, wait. Indexing can take 10 minutes or more before the content is ready. Don’t panic if the first test queries don’t pull from it immediately. Once indexing finishes, rerun your question. When it’s grounded correctly, you’ll see the actual $200 answer along with a small citation showing it came from your uploaded doc. That citation is how you know you’ve rolled the natural 20. One common misconception is assuming conversational boosting will magically cover the gaps. Boosting doesn’t invent policy awareness—it just amplifies text patterns. Without a knowledge source to anchor it, boosting happily spouts generic filler. It’s like giving that intern three cups of coffee and hoping caffeine compensates for ignorance. The lab docs even warn about this: if no match is found in your knowledge, boosting may fall back to the model’s baked-in general knowledge and return vague or inaccurate answers. That’s why you should configure critical topics to only search your added sources when precision matters. Don’t let the bot run loose in the wider language model if the stakes are compliance, finance, or HR. 
The fallout from ignoring this step adds up fast. Ungrounded bots might work fine for chit‑chat, but once they answer about reimbursements or leave policies, they create real helpdesk tickets. Imagine explaining to finance why five employees all filed claims at the wrong rate—because your bot invented a limit on the fly. The fix costs more than just uploading the doc on day one. Grounding turns your agent from an eager but clueless intern into what gamers might call a rules lawyer. It quotes the book, not its gut. Attach the Expenses_Policy.docx, and suddenly the system enforces corporate canon instead of improvising. Better still, responses give receipts—clear citations you can check. That’s how you protect trust. On a natural 1, you’ve built a confident gossip machine that spreads made-up rules. On a natural 20, you’ve built a grounded expert, complete with citations. The only way to get the latter is by feeding it verified knowledge sources right from the start. And once your bot can finally tell the truth, you hit the next challenge: shaping how it tells that truth. Because accuracy without personality still makes users bounce. Teaching Your Bot Its Personality Personality comes next, and in Copilot Studio, you don’t get one for free. You have to write it in. This is where you stop letting the system sound like a test dummy and start shaping it into something your users actually want to talk to. In practice, that means editing the name, description, and instruction fields that live on the Overview page. Leave them blank, and you end up with canned replies that feel like an NPC stuck in tutorial mode. Here’s the part many first-time builders miss—the system already has a default style the second you hit “create.” If you don’t touch the fields, you’ll get a bland greeter with no authority and no context. Contex

    19 min.
  7. -3 days

    Why Your Intranet Search Sucks (And How to Fix It)

    You know that moment when you search your intranet, type the exact title of a document, and it still vanishes into the void? That’s not bad luck—that’s bad Information Architecture. Before we start the dungeon crawl, hit subscribe so you don’t miss future best‑practice loot drops. Here’s what you’ll walk away with today: a quick checklist to spot what’s broken, fixes that make Copilot actually useful, and the small design choices that stop search from failing. Well‑planned IA is the prerequisite for a high‑performing intranet, and most orgs don’t realize it until users are already frustrated. So the real question is: where in the map is your IA breaking down? The Hidden Dungeon Map: The Six Core Elements If you want a working intranet, you need more than scattered pages and guesswork. The backbone is what I call the hidden dungeon map: six core elements that hold the whole architecture together. They’re not optional. They’re not interchangeable. They are the framework that keeps your content visible and usable: global navigation, hub navigation, local navigation, metadata, search, and personalization. Miss one, and the structure starts to wobble. Think of them as your six party roles. Global navigation is the tank that points everyone in the right direction. Hub navigation is the healer, tying related sites into something that actually works together. Local navigation is your DPS, cutting through site-level clicks with precision. Metadata is the scout, marking everything so it can be tracked and recovered later. Search is the wizard, powerful but only as good as the spell components—your metadata and navigation. And personalization is the bard, tuning the experience so the right message gets to the right person at the right time. That’s the full roster. Straightforward, but deadly when ignored. The trouble is, most intranet failures aren’t loud. They don’t trigger red banners. They creep in quietly. 
Users stop trying search because they never find what they need, or they bounce from one site to the next until they give up. Silent cuts like that build into a trust problem. You can see it in real terms if you ask: can someone outside your team find last year’s travel policy in under 90 seconds? If not, your IA is hiding more than it’s helping. Another problem is imbalance. Organizations love to overbuild one element while neglecting another. Giant navigation menus stacked three levels deep look impressive, but if your documents are all tagged with “final_v2,” search will flop. Relying only on the wizard when the scout never did its job is a natural 1 roll, every time. The reverse is also true: some teams treat metadata like gospel but bury their global links under six clicks. Each element leans on the others. If one role is left behind, the raid wipes. And here’s the hard truth—AI won’t save you from bad architecture. Copilot or semantic search can’t invent metadata that doesn’t exist. It can’t magically create navigation where no hub structure was set. The machine is only as effective as the groundwork you’ve already done. If you feed it chaos, you’ll get chaos back. Smart investments at the architecture level are what make the flashy tools worth using. It’s also worth pointing out this isn’t a solo job. Information architecture is a team sport, spread across roles. Global navigation usually falls to intranet owners and comms leads. Hubs are often run by hub owners and business stakeholders. Local navigation and metadata involve site owners and content creators. IT admins sit across the whole thing, wiring compliance and governance in. It’s cross-team by design, which means you need agreement on map-making before the characters hit the dungeon. When all six parts are set up, something changes. Navigation frames the world so people don’t get lost. Hubs bind related zones into meaningful regions. Metadata tags the loot. Search pulls it on demand. 
Personalization fine-tunes what matters to each player. That balance means you’re not improvising every fix or losing hours in scavenger hunts—it means you’re building a system where both humans and AI can actually succeed. That’s the real win condition. Before we move on, here’s a quick action you can take. Pause, pick one of the six elements—navigation, metadata, or search—and run a light audit. Don’t overthink it. Just ask if it’s working right now. That single diagnostic step can save you from months of frustration later. Because from here, we’re about to get specific. There are three different maps built into every intranet, and knowing how they overlap is the first real test of whether users make progress—or wander in circles. World Map vs. Local Maps: Global, Hub, and Local Navigation Every intranet lives on three distinct maps: the world map, the regional maps, and the street-level sketch. In platform terms, that’s global navigation, hub navigation, and local navigation. If those maps don’t agree, your users aren’t adventuring—they’re grinding random encounters with no idea which way is north. Global navigation is the overworld view. It tells everyone what lands exist and how major territories connect. In Microsoft 365, you unlock it through the SharePoint app bar, which shows up on every site once a home site is set. It’s tenant-wide by design. Global nav isn’t there to list every page or document—it’s the continental outline: Home, News, Resources, Tools. Broad categories everyone in the company should trust. If this skeleton bends out of shape, people don’t even know which continent they spawned on. Hub navigation works like a regional map. Join a guild hall in an RPG and you see trainers, quest boards, shops—the things tied to that one region. Hubs in SharePoint do exactly that. They unify related sites like HR, Finance, or legal so they don’t float around as disconnected islands. 
Hub nav appears just below the suite bar, over the site’s local nav, and every site joined to that hub respects the same links and shared branding. It’s also security-trimmed: if a user doesn’t have access to a site in the hub, its content won’t magically surface for them. Permissions don’t change by association. Use audience targeting if you want private links to show up only for the right people. That stops mixed parties from thinking they missed a questline they were never allowed to run.

Local navigation is the street map—the hand-drawn dungeon sketch you keep updating as you poke around. It’s specific to a single site and guides users from one page, list, library, or task to another inside that domain. On a team site it’s on the left as the quick launch. On a communication site it’s up top instead. Local nav should cover tactical moves: policies, project docs, calendars. The player should find common quests inside two clicks. If they’re digging five levels down and retracing breadcrumbs, the dungeon layout is broken.

The real failure comes when these maps don’t line up. Global says “HR,” hub says “People Services,” and local nav buries benefits documents under “Archive/Old-Version-Uploads.” Users follow one map, get looped back to another, and realize none of them match. Subsites layered five deep create breadcrumb trails that collapse the moment you reorganize, leading to dead ends in Teams or Outlook links. It only takes a few busted trails before staff stop trying navigation altogether and fire off emails instead. That’s when trust in the intranet collapses.

There are also technical boundaries worth noting. Each nav tier can technically hold up to 500 links, but stuffing them in is like stocking a bag with 499 health potions. Sure, it fits—but no one can use it. A practical rule is to keep hub nav under a hundred links. Anything more and users can’t scan it without scrolling fatigue.
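As a rough illustration of those limits, here is a minimal audit sketch. Everything in it is invented for the example—the nav tree, the function name, and the thresholds (the 100-link budget and two-click rule from above, plus the 500-link platform ceiling)—it is not a SharePoint API, just a way to reason about your own exported navigation.

```python
# Illustrative only: walk a nested nav structure and flag overstuffed
# tiers and pages buried more than two clicks deep. The tree below is
# a made-up example, not real tenant data.

HUB_LINK_BUDGET = 100   # practical scanning limit discussed above
TIER_HARD_LIMIT = 500   # platform ceiling per tier
MAX_CLICKS = 2          # common tasks should sit within two clicks

def audit_nav(node, depth=0):
    """Return a list of navigation problems found under this node."""
    issues = []
    children = node.get("children", [])
    if len(children) > HUB_LINK_BUDGET:
        issues.append(f"'{node['title']}' has {len(children)} links (over budget)")
    if not children and depth > MAX_CLICKS:
        issues.append(f"'{node['title']}' is {depth} clicks deep")
    for child in children:
        issues.extend(audit_nav(child, depth + 1))
    return issues

nav = {
    "title": "Home",
    "children": [
        {"title": "HR", "children": [
            {"title": "Policies", "children": [
                {"title": "Archive", "children": [
                    {"title": "Travel Policy", "children": []},
                ]},
            ]},
        ]},
    ],
}

for issue in audit_nav(nav):
    print(issue)
```

Run against this toy tree, the audit flags the travel policy as four clicks deep—exactly the kind of buried quest item the two-click rule is meant to catch.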
Use those limits as sanity checks when you’re tempted to add “just one more” menu. Here’s how to test this in practice—two checks you can run right now in under a minute. First, open the SharePoint app bar. Do those links boil down to your real global categories—Home, News, Tools—or are they trying to be a department sitemap? Second, pick a single site. Check the local nav. Count how many clicks it takes to hit the top three tasks. If it’s more than two, you’re making users roll a disadvantage check every time.

When these three layers match, things click. Users trust the overworld for direction, the hubs for context, and the locals for getting work done. Better still, AI tools see the same paths. Copilot doesn’t misplace scrolls if the maps agree on where those scrolls live. The system doesn’t feel like a coin toss; it behaves predictably for both people and machines. But even the best navigation can’t label a blade if every sword in the vault is called “Item_final_V3.” That’s a different kind of invisibility. The runes you carve into your gear—your metadata—are what make search cast real spells instead of fumbles.

Metadata: The Magic Runes of Search

When navigation gives you the map, metadata gives the legend. Metadata—the magic runes of search—is what tells SharePoint and AI tools what a file actually is, not just what it happens to be named. Without it, everything blurs into vague boxes and folders. With it, your system knows the difference between a project plan, a travel policy, and a vendor contract.

The first rule: use columns and content types in your document libraries and Site Pages library. This isn’t overkill—it’s the translation layer that lets search and highlighted content web parts actually filter and roll up the right files. A tagged field like “Region = West” doesn’t just decorate the document; it becomes a lever for search, dynamic rollups, even audience-targeted news feeds. AI copilots look for those same properties. If th


    Copilot Studio vs. Teams Toolkit: Critical Differences

Rolling out Microsoft 365 Copilot feels like unlocking a legendary item—until you realize it only comes with the starter kit. Out of the box, it draws on baseline model knowledge and the content inside your tenant. Useful, but what about your dusty SOPs, the HR playbook, or that monster ERP system lurking in the corner? Without connectors, grounding, or custom agents, Copilot can’t tap into those. The good news—you can teach it. The trick is knowing when to reach for Copilot Studio, when to switch to Teams Toolkit, and how governance, monitoring, and licensing fit into the run. Because here’s the real twist: building your first agent isn’t the final boss fight. It’s just the tutorial.

The Build Isn’t the Boss Fight

You test your first agent, the prompts work, the demo data looks spotless, and for a second you feel like you’ve cleared the game. That’s the trap. The real work starts once you aim that same build at production, where the environment plays by very different rules. Too many makers assume a clean answer in testing equals mission accomplished. In reality, that’s just story mode on easy difficulty. Production doesn’t care if your proof-of-concept responded well on your dev laptop. What production demands is stability under stress, with compliance checks, identity guardrails, and uptime standards breathing down its neck.

And here’s where the first boss monsters appear. Scalability: can the agent handle enterprise load without choking? That’s where monitoring and diagnostic logs from the Copilot Control System matter. Stale grounding: when data in SharePoint or Dataverse changes, does the agent still tether to the right snapshot? Connectors and Graph grounding are the safeguards. Compliance and auditability: if a regulator or internal auditor taps you on the shoulder, can the agent’s history be reviewed with Purview logs and sensitivity labels in place? If any of these fail, the “victory screen” vanishes fast.
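Those three gates—load, grounding freshness, auditability—can be sketched as a simple pre-flight checklist. This is purely illustrative: the `AgentStatus` type, field names, and thresholds are invented for the example and are not part of any Microsoft API; the point is that each gate should be a concrete, checkable condition before rollout.

```python
# Hypothetical pre-flight checklist for an agent heading to production.
# All names and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class AgentStatus:
    p95_latency_ms: int          # response latency under test load
    grounding_age_days: int      # days since grounding sources were refreshed
    audit_logging_enabled: bool  # e.g. a Purview-style audit trail in place

def preflight(status, max_latency_ms=2000, max_grounding_age=7):
    """Return the list of production gates this agent currently fails."""
    failures = []
    if status.p95_latency_ms > max_latency_ms:
        failures.append("scalability: latency over budget under load")
    if status.grounding_age_days > max_grounding_age:
        failures.append("stale grounding: data sources not refreshed")
    if not status.audit_logging_enabled:
        failures.append("compliance: no reviewable audit history")
    return failures

demo = AgentStatus(p95_latency_ms=3500, grounding_age_days=2,
                   audit_logging_enabled=False)
for failure in preflight(demo):
    print(failure)
```

An empty list means the agent clears the checklist; anything else is a boss fight you should schedule before launch, not after.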
Running tests in Copilot Studio is like sparring in a training arena with infinite health potions. You can throw spells, cycle prompts, and everything looks shiny. But in live use, every firewall block is a fizzled cast, and an overloaded external data source slows replies to a crawl. That’s the moment when users stop calling it smart and start filing tickets.

The most common natural 1 roll comes from teams who put off governance. They tell themselves it’s something to layer on later. But postponing governance almost always leads to ugly surprises. Scaling issues, data mismatches, or compliance gaps show up at exactly the wrong moment. Security and compliance aren’t optional side quests. They’re part of the campaign map.

Now let’s talk architecture, because Copilot’s brain isn’t a single block. You’ve got the foundation model—the raw language engine. On top, the orchestrator, which lines up what functions get called and when. Microsoft 365 Copilot provides that orchestration by default, so every request has structure. Then comes grounding—the tether back to enterprise content so answers aren’t fabricated. Finally, the skills—your custom plugins or connectors to do actual tasks. If you treat those four pieces as detached silos, the whole tower wobbles. A solid skill without grounding is just a fancy hallucination. Foundation with no compliance controls becomes a liability. Only when the layers are treated as one stack does the agent stay sturdy.

So what does a “win” even look like in the wild? It’s not answering a demo prompt neatly. That’s practice mode. The mark of success is holding up under real-world conditions: mid-payroll crunch, data migrations in motion, compliance officers watching, all with a high request load. That’s where an agent proves it deserves to run. And here’s another reason many builds fail: organizations think of them as throwaway projects, not operational systems.
Somebody spins up a prototype, shows off a flashy demo, then leaves it unmonitored. Soon, different departments build their own, none of them documented, all of them chewing tokens unchecked. Without a simple operational manual—who owns the connectors, who audits grounding, who checks credit consumption—the landscape turns into a mess of unsynced mini-bosses.

Flip the perspective, and it gets much easier. If you start with an operational mindset, the design shifts. You don’t just care about whether the first test looked clean. You harden for the day-to-day campaign. Audit logs, admin gates, backups, health checks—those build trust while keeping the thing alive under pressure. Admins already have usable controls in the Microsoft 365 admin center, where scenarios can be managed and diagnostic feedback surfaces early. Leaning on those tools is what separates a novelty agent from a reliable operator.

That’s why building alone doesn’t crown a winner. The test environment gets you to level one. Real deployment, with governance and monitoring in place, is where the actual survival challenge kicks off. And before you march too far into that, you’ll need the right weapon for the fight. Microsoft gives you two—different kits, different rules. Choose wrong, and it’ll feel like bringing a plastic sword to a raid.

Copilot Studio vs. Teams Toolkit: Choosing Your Weapon

That’s where the real question lands: which tool do you reach for—Copilot Studio or the Teams Toolkit, also called the Microsoft 365 Agents Toolkit? They sound alike, both claim to “extend Copilot,” but they serve very different groups of builders and needs. The wrong choice costs you time, budget, and possibly credibility when your shiny demo wilts in production.

Copilot Studio is the maker’s arena. It’s a low‑code, visual builder designed for speed and clarity. You get drag‑and‑drop flows, templates, guided dialogs, and built‑in analytics.
Studio comes bundled with a buffet of connectors to Microsoft 365 data sources, so a power user can pull SharePoint content, monitor Teams messages, or surface HR policy docs without ever touching code. You can test, adjust, and publish directly into Microsoft 365 Copilot or even release it as a standalone agent with minimal friction. For a department that needs a working workflow this quarter—not next fiscal year—Studio is the fast track. Over 160,000 customers already use Studio for exactly this: reconciling financial data, onboarding employees, or answering product questions in retail. The reason isn’t a mystery—it simply lowers the barrier. If your team already fiddles in Power Apps or automates routine reports in Power Automate, Studio feels like home turf. You don’t need to be a software engineer. You just need a clear goal and basic low‑code chops to click, configure, and deploy.

Now, cross over to the Teams Toolkit. This is where full‑stack developers thrive. The Toolkit plugs into VS Code, not a drag‑and‑drop canvas. Here, you architect declarative agents with structured rules, or you go further and create custom engine agents where you define orchestration, model calls, and API handling from scratch. You get scaffolding, debugging, configuration, and publishing routes not just inside Copilot, but across Teams, Microsoft 365 apps, the web, and external channels. If Copilot Studio is prefab furniture from the catalog, the Toolkit is milling your own planks and wiring the house yourself. The freedom is spectacular—but you’re also responsible for every nail and fuse.

The real confusion? Both say “extend Copilot.” In practice, Studio means extending within Microsoft’s defined guardrails: safe connectors, administrative controls, and lightweight governance. The Toolkit means rewriting the guardrails: rolling your own orchestration, calling external LLMs, or building agent behaviors Microsoft didn’t provide out of the box. One approach keeps you safe with templates.
The other gives you raw power and expects you to wield it responsibly.

A lot of folks think “tool choice equals different UI.” Nope. End‑users see the same prompt box and answer card whether you built the agent in Studio or with the Toolkit. That’s by design—the UX layer is unified. What actually changes is behind the curtain: grounding options, scalability, and administrative control. That’s why this decision is operational, not cosmetic.

Here’s a practical rule: some grounding capabilities—things like SharePoint content, Teams chats and meetings, embedded files, Dataverse data, or connectors into email and people search—only light up if your tenant has Microsoft 365 Copilot licensing or Copilot Studio metering turned on. If you don’t have that entitlement, picking Studio won’t unlock those tricks. That single licensing check can be the deciding factor for which route you need.

So how do you simplify the choice? Roll a quick checklist. One: need fast, auditable, admin‑controlled agents that power users can stand up without bugging IT? Pick Copilot Studio. Two: need custom orchestration, external AI models, or deep integration work stitched straight into enterprise backbones? Pick the Agents Toolkit. Three: don’t trust the labels—trust your team’s actual skill set and goals.

The metaphor I use is housing. Studio is prefab—you pick colors and cabinets, but the plumbing and wiring are already safe. The Toolkit is raw land—you design every inch, but also carry all the risks if the design buckles. Both can yield a beautiful home. One is faster and less complex, the other is limitless but fragile unless managed well.

Both collapse without grounding. Your chosen weapon handles the build, but if it isn’t fed the right data, it just makes confident nonsense faster. A Studio agent without connectors is a parrot. A Toolkit agent without grounding is a custom‑coded parrot. Either way, you’re still living with a bird squawking guesses at your users. And that brings us to the real

