M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit M365.show.

  1. 3 HOURS AGO

    SharePoint Premium Is Not What You Think

    If you want advantage on governance, hit subscribe—it’s the stat buff that keeps your castle standing. Now, imagine giving Copilot the keys to your company’s content… but forgetting to lock the doors. That’s what happens when advanced AI runs inside a weak governance structure. SharePoint Premium doesn’t just boost productivity with AI—it includes SharePoint Advanced Management, or SAM, which adds walls like Restricted Access Control, Data Access Governance, and site lifecycle tools. SAM helps reduce oversharing and manage access, but you still need policies and owners to act. In this run, you’ll see how to spot overshared sites, enforce Restricted Access Control, and even run access reviews so your walls aren’t guarded by ducks. Which brings us to the question—does a moat really keep you safe? Why Your Castle Needs More Than a Moat Basic permissions feel comforting until you realize they don’t scale with the way AI works. Copilot can read, understand, and surface content from SharePoint and OneDrive at lightning speed. That’s great for productivity, but it also means anything shared too broadly becomes easier to discover. Role-based access control alone doesn’t catch this. It’s the illusion of safety—strong in theory, but shallow when one careless link spreads access wider than planned. The real problem isn’t that Copilot leaks data on its own—it’s that misconfigured sharing creates a larger surface area for Copilot to surface insights. A forgotten contract library with wide-open links looks harmless until the system happily indexes the files and makes them searchable. Suddenly, what was tucked in a corner turns into part of the knowledge backbone. Oversharing isn’t always dramatic—it’s often invisible, and that’s the bigger risk. This is where SharePoint Advanced Management comes in. Basic RBAC is your moat, but SAM adds walls and watchtowers. The walls are the enforcement policies you configure, and the watchtowers are your Data Access Governance views. DAG reports give administrators visibility into potentially overshared sites—what’s shared externally, how many files carry sensitivity labels, or which sites are using broad groups like “Everyone except external users.” With these views, you don’t just walk in circles telling yourself everything’s locked down—you can actually spot the fires smoldering on the horizon. DAG isn’t item-by-item forensics; it’s site-level intelligence. You see where oversharing is most likely, who the primary admin is, and how sensitive content might be spread. That’s usually enough to trigger a meaningful review, because now IT and content owners know *where* to look instead of guessing. Think of it as a high tower with a spyglass. You don’t see each arrow in flight, but you notice which gates are unguarded. Like any tool, DAG has limits. Some reports show only the top 100 sites in the admin center for the past 30 days, with CSV exports going up to 10,000 rows—and in some cases, up to a million. Reports can take hours to generate, and you can only run them once a day. That means you’re not aiming for nonstop surveillance. Instead, DAG gives you recurring, high-level intelligence that you still need to act on. Without people stepping in, a report is just a scroll pinned to the wall. So what happens when you act on it? Let’s go back to the contract library example. Running audits by hand across every site is impossible. But from that DAG report, you might spot the one site with external links still live from a completed project. 
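
For the script-inclined: that spotting step can be automated once the CSV export is in hand. Here's a minimal Python sketch that flags externally shared sites holding labeled content; the column headers are assumptions, so match them to whatever your DAG export actually emits.

```python
import csv

# Minimal sketch: triage a Data Access Governance CSV export.
# The column headers below are assumptions -- check your actual export.
URL_COL = "Site URL"               # hypothetical header
EXTERNAL_COL = "External sharing"  # hypothetical header
LABELED_COL = "Labeled files"      # hypothetical header

def flag_sites(path, min_labeled=1):
    """Yield (site URL, labeled-file count) for externally shared sites."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            external = row.get(EXTERNAL_COL, "").strip().lower() == "on"
            labeled = int(row.get(LABELED_COL) or 0)
            if external and labeled >= min_labeled:
                yield row.get(URL_COL, "?"), labeled

# Review the riskiest gates first: most labeled files, shared outside.
for url, labeled in sorted(flag_sites("dag_export.csv"), key=lambda r: -r[1])[:20]:
    print(f"{labeled:>5} labeled files  {url}")
```
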
It’s not an obvious problem until you see it—yet that one gate could let the wrong person stroll past your defenses. Now, instead of combing through thousands of sites, you zero in on the one that matters. And here’s the payoff: using DAG doesn’t just show you a problem, it shows you unknown problems. It shifts the posture from “assume everything’s fine” to “prove everything is in shape.” It’s better than running around with a torch hoping you see something—because the tower view means you don’t waste hours on blind patrols. But here’s the catch: spotting risk is only half the battle. You still need people inside the castle to care enough to fix it. A moat and tower don’t matter if the folks in charge of the gates keep leaving them open. That’s where we look next—because in this defense system, the site owners aren’t just inhabitants. They’re supposed to be the guards. Turning Site Owners into Castle Guards In practice, a lot of governance gaps come from the way responsibilities are split. IT builds the systems, but the people closest to the content—the site owners—know who actually needs to be inside. They have the local context, which means they’re the only ones who can spot when a guest account or legacy teammate no longer belongs. That’s why SharePoint Advanced Management includes a feature built for them: Site Access Reviews. Most SAM features live in the hands of admins through the SharePoint admin center. But Site Access Reviews are different—they directly involve site owners. Instead of IT chasing down every outdated permission on every site, the feature pushes a prompt to the owner: here’s your list of who has access, now confirm who should stay. It’s a simple checklist, but it shifts the job from overloaded central admins to the people who actually understand the project history. The difference might not sound like much, but it rewires the whole governance model. Without this, IT tries to manage hundreds or thousands of sites blind, often relying on stale org charts or detective work through audit logs. With Site Access Reviews, IT delegates the check to owners who know who wrapped up the project six months ago and which externals should have been removed with it. No spreadsheets, no endless ticket queues. Just a structured prompt that makes ownership real. Take a common example: a project site is dormant, external sharing was never tightened, and a guest account is still roaming around months after the last handoff. Without this feature, IT has to hunt and guess. With Site Access Reviews, the site owner gets a nudge and can end that access in seconds. It’s not flashy—it’s scheduled housekeeping. But it prevents the quiet risks that usually turn into breach headlines. Another benefit is how the system links together. Data Access Governance reports highlight where oversharing is most likely: sites with broad groups like “Everyone” or external links. From there, you can initiate Site Access Reviews as a corrective step. One tool spots the gates left open, the other hands the keys back to the people running that tower. And if you’re managing at scale, there’s support for automation. If you run DAG outputs and use the PowerShell support, you can script actions or integrate with wider workflows so this isn’t just a manual cycle—it scales with the size of your tenant. The response from business units is usually better than admins expect. At first glance, a site owner might view this as extra work. But in practice, it gives them more control. 
They’re no longer left wondering why IT revoked a permission without warning. They’re the ones making the call, backed by clear data. Governance stops feeling like top-down enforcement and starts feeling like shared stewardship. And for IT, this is a huge relief. Instead of being the bottleneck handling every request, they set the policies, generate the DAG reports, and review overall compliance. They oversee the castle walls, but they don’t have to patrol every hallway. Owners do their part, AI provides the intelligence, and IT stays focused on bigger strategy rather than micromanaging. The system works because the roles are divided cleanly. In day-to-day terms, this keeps access drift from building up unchecked. Guest accounts don’t linger for years because owners are reminded to prune them. Overshared sites get revisited at regular intervals. Admins still manage the framework, but the continual maintenance is distributed. That’s a stronger model than endless firefighting. Seen together, Site Access Reviews with DAG reporting become less about command and control, and more about keeping the halls tidy so Copilot and other AI tools don’t surface content that never should have been visible. It’s proactive, not reactive. You get fewer surprises, fewer blind spots, and far less stress when auditors come asking hard questions. Of course, not every problem is about who should be inside the castle. Sometimes the bigger question is what kind of lock you’re putting on each door. Because even if owners are doing their reviews, not every room in your estate needs the same defenses. The Difference Between Bolting the Door and Locking the Vault Sometimes the real challenge isn’t convincing people to care about access—it’s choosing the right type of lock once they do. In SharePoint, that choice often comes down to two very different tools: Block Download and Restricted Access Control. Both guard sensitive content, but they work in distinct ways, and knowing the difference saves you from either choking off productivity or leaving gaps wider than you realize. Block Download is the lighter hand. It lets users view files in the browser but prevents downloading, printing, or syncing them. That also means no pulling the content into Office desktop apps or third‑party programs—the data stays inside your controlled web session. It’s a “look, but don’t carry” model. Administrators can configure it at the site level or even tie it to sensitivity labels so only marked content gets that extra protection. Some configurations, like applying it for Teams recordings, do require PowerShell, so it’s worth remembering this isn’t always a toggle in the UI. Restricted Access Control—or RAC—operates at a tougher level. Instead

    18 min
  2. 15 HOURS AGO

    Copilot Studio: Simple Build, Hidden Traps

    Imagine rolling out your first Copilot Studio agent, and instead of impressing anyone, it blurts out something flimsy like, “I think the policy says… maybe?” That’s the natural 1 of bot building. But with a couple of fixes—clear instructions, grounding it in the actual policy doc—you can turn that blunder into a natural 20 that cites chapter and verse. By the end of this video, you’ll know how to recreate a bad response in the Test pane, fix it so the bot cites the real doc, and publish a working pilot. Quick aside—hit Subscribe now so these walkthroughs auto‑deploy to your playlist. Of course, getting a clean roll in the test window is easy. The real pain shows up when your bot leaves the dojo and stumbles in the wild. Why Your Perfect Test Bot Collapses in the Wild So why does a bot that looks flawless in the test pane suddenly start flailing once it’s pointed at real users? The short version: Studio keeps things padded and polite, while the real world has no such courtesy. In Studio, the inputs you feed are tidy. Questions are short, phrased cleanly, and usually match the training examples you prepared. That’s why it feels like a perfect streak. But move into production, and people type like people. A CFO asks, “How much can I claim when I’m at a hotel?” A rep might type “hotel expnse limit?” with a typo. Another might just say, “Remind me again about travel money.” All of those mean the same thing, but if you only tested “What is the expense limit?” the bot won’t always connect the dots. Here’s a way to see this gap right now: open the Test pane and throw three variations at your bot—first the clean version, then a casual rewrite, then a version with a typo. Watch the responses shift. Sometimes it nails all three. Sometimes only the clean one lands. That’s your first hint that beautiful test results don’t equal real‑world survival. The technical reason is intent coverage. Bots rely on trigger phrases and topic definitions to know when to fire a response. If all your examples look the same, the model gets brittle. A single synonym can throw it. The fix is boring, but it works: add broader trigger phrases to your Topics, and don’t just use the formal wording from your policy doc. Sprinkle in the casual, shorthand, even slightly messy phrasing people actually use. You don’t need dozens, just enough to cover the obvious variations, then retest. Channel differences make this tougher. Studio’s Test pane is only a simulation. Once you publish to a channel like Teams, SharePoint, or a demo website, the platform may alter how input text is handled or how responses render. Teams might split lines differently. A web page might strip formatting. Even small shifts—like moving a key phrase to another line—can change how the model weighs it. That’s why Microsoft calls out the need for iterative testing across channels. A bot that passes in Studio can still stumble when real-world formatting tilts the terrain. Users also bring expectations. To them, rephrasing a question is normal conversation. They aren’t thinking about intents, triggers, or semantic overlap. They just assume the bot understands like a co-worker would. One bad miss—especially in a demo—and confidence is gone. That’s where first-time builders get burned: the neat rehearsal in Studio gave them false security, but the first casual user input in Teams collapsed the illusion. Let’s ground this with one more example. 
In Studio, you type “What’s the expense limit?” The bot answers directly: “Policy states $200 per day for lodging.” Perfect. Deploy it. Now try “Hey, what can I get back for a hotel again?” Instead of citing the policy, the bot delivers something like “Check with HR” or makes a fuzzy guess. Same intent, totally different outcome. That swap—precise in rehearsal, vague in production—is exactly what we’re talking about. The practical takeaway is this: treat Studio like sparring practice. Useful for learning, but not proof of readiness. Before moving on, try the three‑variation test in the Test pane. Then broaden your Topics to include synonyms and casual phrasing. Finally, when you publish, retest in each channel where the bot will live. You’ll catch issues before your users do. And there’s an even bigger trap waiting. Because even if you get phrasing and channels covered, your bot can still crash if it isn’t grounded in the right source. That’s when it stops missing questions and starts making things up. Imagine a bot that sounds confident but is just guessing—that’s where things get messy next. The Rookie Mistake: Leaving Your Bot Ungrounded The first rookie mistake is treating Copilot Studio like a crystal ball instead of a rulebook. When you launch an agent without grounding it in real knowledge, you’re basically sending a junior intern into the boardroom with zero prep. They’ll speak quickly, they’ll sound confident—and half of what they say will collapse the second anyone checks. That’s the trap of leaving your bot ungrounded. At first, the shine hides it. A fresh build in Studio looks sharp: polite greetings, quick replies, no visible lag. But under the hood, nothing solid backs those words. The system is pulling patterns, not facts. Ungrounded bots don’t “know” anything—they bluff. And while a bluff might look slick in the Test pane, users out in production will catch it instantly. The worst outcome isn’t just weak answers—it’s hallucinations. That’s when a bot invents something that looks right but has no basis in reality. You ask about travel reimbursements, and instead of declining politely, the bot makes up a number that sounds plausible. One staffer books a hotel based on that bad output, and suddenly you’re cleaning up expense disputes and irritated emails. The sentence looked professional. The content was vapor. The Contoso lab example makes this real. In the official hands-on exercise, you’re supposed to upload a file called Expenses_Policy.docx. Inside, the lodging limit is clearly stated as $200 per night. Now, if you skip grounding and ask your shiny new bot, “What’s the hotel policy?” it may confidently answer, “$100 per night.” Totally fabricated. Only when you actually attach that Expenses_Policy.docx does the model stop winging it. Grounded bots cite the doc: “According to the corporate travel policy, lodging is limited to $200 per day.” That difference—fabrication versus citation—is all about the grounding step. So here’s exactly how you fix it in the interface. Go to your agent in Copilot Studio. From the Overview screen, click Knowledge. Select + Add knowledge, then choose to upload a file. Point it at Expenses_Policy.docx or another trusted source. If you’d rather connect to a public website or SharePoint location, you can pick that too—but files are cleaner. After uploading, wait. Indexing can take 10 minutes or more before the content is ready. Don’t panic if the first test queries don’t pull from it immediately. Once indexing finishes, rerun your question. 
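
While you wait on indexing, here's the difference you're testing for, as a toy sketch. This is emphatically not Copilot Studio's internals, just the behavioral gap between answering from a matched passage with a citation and free-styling filler.

```python
# Toy illustration of grounded vs. ungrounded answers -- not Copilot
# Studio's internals, just the behavior you're testing for.
KNOWLEDGE = {
    # The lodging passage mirrors the lab's Expenses_Policy.docx; the
    # meals line is a made-up extra for illustration.
    "lodging": "Policy states $200 per day for lodging.",
    "meals": "Meals are reimbursed up to $50 per day.",
}

def answer(question: str, grounded: bool) -> str:
    if grounded:
        # Only answer when a knowledge passage matches; always cite it.
        for topic, passage in KNOWLEDGE.items():
            if topic in question.lower():
                return f"{passage} [source: Expenses_Policy.docx]"
        return "I couldn't find that in the policy -- try rephrasing."
    # Ungrounded: confident-sounding filler with no basis.
    return "I think the limit is around $100 per night."

print(answer("What can I claim for lodging?", grounded=False))
print(answer("What can I claim for lodging?", grounded=True))
```
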
When it’s grounded correctly, you’ll see the actual $200 answer along with a small citation showing it came from your uploaded doc. That citation is how you know you’ve rolled the natural 20. One common misconception is assuming conversational boosting will magically cover the gaps. Boosting doesn’t invent policy awareness—it just amplifies text patterns. Without a knowledge source to anchor, boosting happily spouts generic filler. It’s like giving that intern three cups of coffee and hoping caffeine compensates for ignorance. The lab docs even warn about this: if no match is found in your knowledge, boosting may fall back to the model’s baked-in general knowledge and return vague or inaccurate answers. That’s why you should configure critical topics to only search your added sources when precision matters. Don’t let the bot run loose in the wider language model if the stakes are compliance, finance, or HR. The fallout from ignoring this step adds up fast. Ungrounded bots might work fine for chit‑chat, but once they answer about reimbursements or leave policies, they create real helpdesk tickets. Imagine explaining to finance why five employees all filed claims at the wrong rate—because your bot invented a limit on the fly. The fix costs more than just uploading the doc on day one. Grounding turns your agent from an eager but clueless intern into what gamers might call a rules lawyer. It quotes the book, not its gut. Attach the Expenses_Policy.docx, and suddenly the system enforces corporate canon instead of improvising. Better still, responses give receipts—clear citations you can check. That’s how you protect trust. On a natural 1, you’ve built a confident gossip machine that spreads made-up rules. On a natural 20, you’ve built a grounded expert, complete with citations. The only way to get the latter is by feeding it verified knowledge sources right from the start. And once your bot can finally tell the truth, you hit the next challenge: shaping how it tells that truth. Because accuracy without personality still makes users bounce. Teaching Your Bot Its Personality Personality comes next, and in Copilot Studio, you don’t get one for free. You have to write it in. This is where you stop letting the system sound like a test dummy and start shaping it into something your users actually want to talk to. In practice, that means editing the name, description, and instruction fields that live on the Overview page. Leave them blank, and you end up with canned replies that feel like an NPC stuck in tutorial mode. Here’s the part many first-time builders miss—the system already has a default style the second you hit “create.” If you don’t touch the fields, you’ll get a bland greeter with no authority and no context. Contex

    19 min
  3. 1 DAY AGO

    Why Your Intranet Search Sucks (And How to Fix It)

    You know that moment when you search your intranet, type the exact title of a document, and it still vanishes into the void? That’s not bad luck—that’s bad Information Architecture. Before we start the dungeon crawl, hit subscribe so you don’t miss future best‑practice loot drops. Here’s what you’ll walk away with today: a quick checklist to spot what’s broken, fixes that make Copilot actually useful, and the small design choices that stop search from failing. Well‑planned IA is the prerequisite for a high‑performing intranet, and most orgs don’t realize it until users are already frustrated. So the real question is: where in the map is your IA breaking down? The Hidden Dungeon Map: The Six Core Elements If you want a working intranet, you need more than scattered pages and guesswork. The backbone is what I call the hidden dungeon map: six core elements that hold the whole architecture together. They’re not optional. They’re not interchangeable. They are the framework that keeps your content visible and usable: global navigation, hub navigation, local navigation, metadata, search, and personalization. Miss one, and the structure starts to wobble. Think of them as your six party roles. Global navigation is the tank that points everyone in the right direction. Hub navigation is the healer, tying related sites into something that actually works together. Local navigation is your DPS, cutting through site-level clicks with precision. Metadata is the scout, marking everything so it can be tracked and recovered later. Search is the wizard, powerful but only as good as the spell components—your metadata and navigation. And personalization is the bard, tuning the experience so the right message gets to the right person at the right time. That’s the full roster. Straightforward, but deadly when ignored. The trouble is, most intranet failures aren’t loud. They don’t trigger red banners. They creep in quietly. Users stop trying search because they never find what they need, or they bounce from one site to the next until they give up. Silent cuts like that build into a trust problem. You can see it in real terms if you ask: can someone outside your team find last year’s travel policy in under 90 seconds? If not, your IA is hiding more than it’s helping. Another problem is imbalance. Organizations love to overbuild one element while neglecting another. Giant navigation menus stacked three levels deep look impressive, but if your documents are all tagged with “final_v2,” search will flop. Relying only on the wizard when the scout never did its job is a natural 1 roll, every time. The reverse is also true: some teams treat metadata like gospel but bury their global links under six clicks. Each element leans on the others. If one role is left behind, the raid wipes. And here’s the hard truth—AI won’t save you from bad architecture. Copilot or semantic search can’t invent metadata that doesn’t exist. It can’t magically create navigation where no hub structure was set. The machine is only as effective as the groundwork you’ve already done. If you feed it chaos, you’ll get chaos back. Smart investments at the architecture level are what make the flashy tools worth using. It’s also worth pointing out this isn’t a solo job. Information architecture is a team sport, spread across roles. Global navigation usually falls with intranet owners and comms leads. Hubs are often run by hub owners and business stakeholders. Local navigation and metadata involve site owners and content creators. 
IT admins sit across the whole thing, wiring compliance and governance in. It’s cross-team by design, which means you need agreement on map-making before the characters hit the dungeon. When all six parts are set up, something changes. Navigation frames the world so people don’t get lost. Hubs bind related zones into meaningful regions. Metadata tags the loot. Search pulls it on demand. Personalization fine-tunes what matters to each player. That balance means you’re not improvising every fix or losing hours in scavenger hunts—it means you’re building a system where both humans and AI can actually succeed. That’s the real win condition. Before we move on, here’s a quick action you can take. Pause, pick one of the six elements—navigation, metadata, or search—and run a light audit. Don’t overthink it. Just ask if it’s working right now. That single diagnostic step can save you from months of frustration later. Because from here, we’re about to get specific. There are three different maps built into every intranet, and knowing how they overlap is the first real test of whether users make progress—or wander in circles. World Map vs. Local Maps: Global, Hub, and Local Navigation Every intranet lives on three distinct maps: the world map, the regional maps, and the street-level sketch. In platform terms, that’s global navigation, hub navigation, and local navigation. If those maps don’t agree, your users aren’t adventuring—they’re grinding random encounters with no idea which way is north. Global navigation is the overworld view. It tells everyone what lands exist and how major territories connect. In Microsoft 365, you unlock it through the SharePoint app bar, which shows up on every site once a home site is set. It’s tenant-wide by design. Global nav isn’t there to list every page or document—it’s the continental outline: Home, News, Resources, Tools. Broad categories everyone in the company should trust. If this skeleton bends out of shape, people don’t even know which continent they spawned on. Hub navigation works like a regional map. Join a guild hall in an RPG and you see trainers, quest boards, shops—the things tied to that one region. Hubs in SharePoint do exactly that. They unify related sites like HR, Finance, or legal so they don’t float around as disconnected islands. Hub nav appears just below the suite bar, over the site’s local nav, and every site joined to that hub respects the same links and shared branding. It’s also security-trimmed: if a user doesn’t have access to a site in the hub, they won’t see its content surface magically. Permissions don’t change by association. Use audience targeting if you want private links to show up only for the right people. That stops mixed parties from thinking they missed a questline they were never allowed to run. Local navigation is the street map—the hand-drawn dungeon sketch you keep updating as you poke around. It’s specific to a single site and guides users from one page, list, library, or task to another inside that domain. On a team site it’s on the left as the quick launch. On a communication site it’s up top instead. Local nav should cover tactical moves: policies, project docs, calendars. The player should find common quests inside two clicks. If they’re digging five levels down and retracing breadcrumbs, the dungeon layout is broken. The real failure comes when these maps don’t line up. 
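
Before the misalignment horror stories, a side quest for the script-inclined: that two-click rule is easy to audit. A minimal sketch that models the quick launch as a nested tree (this one is invented, swap in your own) and flags anything buried deeper than two clicks:

```python
# Minimal sketch of a local-nav depth audit: flag anything more than
# two clicks deep. The nav tree below is invented for illustration.
NAV = {
    "Policies": {"Travel": {}, "Expenses": {}},
    "Projects": {"2023": {"Archive": {"Old-Version-Uploads": {}}}},
    "Calendar": {},
}

def audit(tree, path=(), max_depth=2):
    for label, children in tree.items():
        here = path + (label,)
        if len(here) > max_depth:
            print(f"{len(here)} clicks: {' > '.join(here)}")
        audit(children, here, max_depth)

audit(NAV)  # prints the Archive chain -- the broken dungeon layout
```
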
Global says “HR,” hub says “People Services,” and local nav buries benefits documents under “Archive/Old-Version-Uploads.” Users follow one map, get looped back to another, and realize none of them match. Subsites layered five deep create breadcrumb trails that collapse the moment you reorganize, leading to dead ends in Teams or Outlook links. It only takes a few busted trails before staff stop trying navigation altogether and fire off emails instead. That’s when trust in the intranet collapses. There are also technical boundaries worth noting. Each nav level can technically handle up to 500 links per tier, but stuffing them in is like stocking a bag with 499 health potions. Sure, it fits—but no one can use it. A practical rule is to keep hub nav under a hundred links. Anything more and users can’t scan it without scrolling fatigue. Use those limits as sanity checks when you’re tempted to add “just one more” menu. Here’s how to test this in practice—two checks you can run right now in under a minute. First, open the SharePoint app bar. Do those links boil down to your real global categories—Home, News, Tools—or are they trying to be a department sitemap? Second, pick a single site. Check the local nav. Count how many clicks it takes to hit the top three tasks. If it’s more than two, you’re making users roll a disadvantage check every time. When these three layers match, things click. Users trust the overworld for direction, the hubs for context, and the locals for getting work done. Better still, AI tools see the same paths. Copilot doesn’t misplace scrolls if the maps agree on where those scrolls live. The system doesn’t feel like a coin toss; it behaves predictably for both people and machines. But even the best navigation can’t label a blade if every sword in the vault is called “Item_final_V3.” That’s a different kind of invisibility. The runes you carve into your gear—your metadata—are what make search cast real spells instead of fumbles. Metadata: The Magic Runes of Search When navigation gives you the map, metadata gives the legend. Metadata—the magic runes of search—is what tells SharePoint and AI tools what a file actually is, not just what it happens to be named. Without it, everything blurs into vague boxes and folders. With it, your system knows the difference between a project plan, a travel policy, and a vendor contract. The first rule: use columns and content types in your document libraries and Site Pages library. This isn’t overkill—it’s the translation layer that lets search and highlighted content web parts actually filter and roll up the right files. A tagged field like “Region = West” doesn’t just decorate the document; it becomes a lever for search, dynamic rollups, even audience-targeted news feeds. AI copilots look for those same properties. If th

    18 min
  4. 1 DAY AGO

    Copilot Studio vs. Teams Toolkit: Critical Differences

    Rolling out Microsoft 365 Copilot feels like unlocking a legendary item—until you realize it only comes with the starter kit. Out of the box, it draws on baseline model knowledge and the content inside your tenant. Useful, but what about your dusty SOPs, the HR playbook, or that monster ERP system lurking in the corner? Without connectors, grounding, or custom agents, Copilot can’t tap into those. The good news—you can teach it. The trick is knowing when to reach for Copilot Studio, when to switch to Teams Toolkit, and how governance, monitoring, and licensing fit into the run. Because here’s the real twist: building your first agent isn’t the final boss fight. It’s just the tutorial. The Build Isn’t the Boss Fight You test your first agent, the prompts work, the demo data looks spotless, and for a second you feel like you’ve cleared the game. That’s the trap. The real work starts once you aim that same build at production, where the environment plays by very different rules. Too many makers assume a clean answer in testing equals mission accomplished. In reality, that’s just story mode on easy difficulty. Production doesn’t care if your proof-of-concept responded well on your dev laptop. What production demands is stability under stress, with compliance checks, identity guardrails, and uptime standards breathing down its neck. And here’s where the first boss monsters appear. Scalability: can the agent handle enterprise load without choking? That’s where monitoring and diagnostic logs from the Copilot Control System matter. Stale grounding: when data in SharePoint or Dataverse changes, does the agent still tether to the right snapshot? Connectors and Graph grounding are the safeguards. Compliance and auditability: if a regulator or internal auditor taps you on the shoulder, can the agent’s history be reviewed with Purview logs and sensitivity labels in place? If any of these fail, the “victory screen” vanishes fast. Running tests in Copilot Studio is like sparring in a training arena with infinite health potions. You can throw spells, cycle prompts, and everything looks shiny. But in live use, every firewall block is a fizzled cast, and an overloaded external data source slows replies to a crawl. That’s the moment when users stop calling it smart and start filing tickets. The most common natural 1 roll comes from teams who put off governance. They tell themselves it’s something to layer on later. But postponing governance almost always leads to ugly surprises. Scaling issues, data mismatches, or compliance gaps show up at exactly the wrong moment. Security and compliance aren’t optional side quests. They’re part of the campaign map. Now let’s talk architecture, because Copilot’s brain isn’t a single block. You’ve got the foundation model—the raw language engine. On top, the orchestrator, which lines up what functions get called and when. Microsoft 365 Copilot provides that orchestration by default, so every request has structure. Then comes grounding—the tether back to enterprise content so answers aren’t fabricated. Finally, the skills—your custom plugins or connectors to do actual tasks. If you treat those four pieces as detached silos, the whole tower wobbles. A solid skill without grounding is just a fancy hallucination. Foundation with no compliance controls becomes a liability. Only when the layers are treated as one stack does the agent stay sturdy. So what does a “win” even look like in the wild? It’s not answering a demo prompt neatly. That’s practice mode. 
The mark of success is holding up under real-world conditions: mid-payroll crunch, data migrations in motion, compliance officers watching, all with a high request load. That’s where an agent proves it deserves to run. And here’s another reason many builds fail: organizations think of them as throwaway projects, not operational systems. Somebody spins up a prototype, shows off a flashy demo, then leaves it unmonitored. Soon, different departments build their own, none of them documented, all of them chewing tokens unchecked. Without a simple operational manual—who owns the connectors, who audits grounding, who checks credit consumption—the landscape turns into a mess of unsynced mini-bosses. Flip the perspective, and it gets much easier. If you start with an operational mindset, the design shifts. You don’t just care about whether the first test looked clean. You harden for the day-to-day campaign. Audit logs, admin gates, backups, health checks—those build trust while keeping the thing alive under pressure. Admins already have usable controls in the Microsoft 365 admin center, where scenarios can be managed and diagnostic feedback surfaces early. Leaning on those tools is what separates a novelty agent from a reliable operator. That’s why building alone doesn’t crown a winner. The test environment gets you to level one. Real deployment, with governance and monitoring in place, is where the actual survival challenge kicks off. And before you march too far into that, you’ll need the right weapon for the fight. Microsoft gives you two—different kits, different rules. Choose wrong, and it’ll feel like bringing a plastic sword to a raid. Copilot Studio vs. Teams Toolkit: Choosing Your Weapon That’s where the real question lands: which tool do you reach for—Copilot Studio or the Teams Toolkit, also called the Microsoft 365 Agents Toolkit? They sound alike, both claim to “extend Copilot,” but they serve very different groups of builders and needs. The wrong choice costs you time, budget, and possibly credibility when your shiny demo wilts in production. Copilot Studio is the maker’s arena. It’s a low‑code, visual builder designed for speed and clarity. You get drag‑and‑drop flows, templates, guided dialogs, and built‑in analytics. Studio comes bundled with a buffet of connectors to Microsoft 365 data sources, so a power user can pull SharePoint content, monitor Teams messages, or surface HR policy docs without ever touching code. You can test, adjust, and publish directly into Microsoft 365 Copilot or even release as a standalone agent with minimal friction. For a department that needs a working workflow this quarter—not next fiscal year—Studio is the fast track. Over 160,000 customers already use Studio for exactly this: reconciling financial data, onboarding employees, or answering product questions in retail. The reason isn’t mystery—it simply lowers the bar. If your team already fiddles in PowerApps or automates routine reports in Power Automate, Studio feels like home turf. You don’t need to be a software engineer. You just need a clear goal and basic low‑code chops to click, configure, and deploy. Now, cross over to the Teams Toolkit. This is where full‑stack developers thrive. The Toolkit plugs into VS Code, not a drag‑and‑drop canvas. Here, you architect declarative agents with structured rules, or you go further and create custom engine agents where you define orchestration, model calls, and API handling from scratch. 
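
To make "from scratch" concrete, here's a toy sketch of that four-layer stack (foundation model, orchestrator, grounding, skills) wired as one pipeline. Every function is a stand-in, not Microsoft's actual orchestrator; the point is the wiring, not the parts.

```python
# Toy four-layer agent stack -- stand-ins only, not the real Copilot
# orchestrator.

def foundation_model(prompt: str) -> str:
    return f"draft answer for: {prompt}"      # the raw language engine

def ground(prompt: str, corpus: dict) -> str:
    # Tether the request to enterprise content so answers aren't invented.
    hits = [text for key, text in corpus.items() if key in prompt.lower()]
    return " ".join(hits)

def skill_create_ticket(summary: str) -> str:
    return f"ticket created: {summary[:48]}"  # a custom plugin doing real work

def orchestrate(prompt: str, corpus: dict) -> str:
    # The orchestrator decides what gets called, and in what order.
    context = ground(prompt, corpus)
    if not context:
        return "No grounded context found; refusing to guess."
    return skill_create_ticket(foundation_model(f"{prompt} [{context}]"))

CORPUS = {"payroll": "Payroll runs on the 25th."}  # hypothetical grounding data
print(orchestrate("Why did payroll fail?", CORPUS))
```
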
You get scaffolding, debugging, configuration, and publishing routes not just inside Copilot, but across Teams, Microsoft 365 apps, the web, and external channels. If Copilot Studio is prefab furniture from the catalog, Toolkit is milling your own planks and wiring the house yourself. The freedom is spectacular—but you’re also responsible for every nail and fuse. The real confusion? Both say “extend Copilot.” In practice, Studio means extending within Microsoft’s defined guardrails: safe connectors, administrative controls, and lightweight governance. The Toolkit means rewriting the guardrails: rolling your own orchestration, calling external LLMs, or building agent behaviors Microsoft didn’t provide out of the box. One approach keeps you safe with templates. The other gives you raw power and expects you to wield it responsibly. A lot of folks think “tool choice equals different UI.” Nope. End‑users see the same prompt box and answer card whether you built the agent in Studio or with Toolkit. That’s by design—the UX layer is unified. What actually changes is behind the curtain: grounding options, scalability, and administrative control. That’s why this decision is operational, not cosmetic. Here’s a practical rule: some grounding capabilities—things like SharePoint content, Teams chats and meetings, embedded files, Dataverse data, or connectors into email and people search—only light up if your tenant has Microsoft 365 Copilot licensing or Copilot Studio metering turned on. If you don’t have that entitlement, picking Studio won’t unlock those tricks. That single licensing check can be the deciding factor for which route you need. So how do you simplify the choice? Roll a quick checklist. One: need fast, auditable, admin‑controlled agents that power users can stand up without bugging IT? Pick Copilot Studio. Two: need custom orchestration, external AI models, or deep integration work stitched straight into enterprise backbones? Pick the Agents Toolkit. Three: don’t trust the labels—trust your team’s actual skill set and goals. The metaphor I use is housing. Studio is prefab—you pick colors and cabinets, but the plumbing and wiring are already safe. Toolkit is raw land—you design every inch, but also carry all the risks if the design buckles. Both can yield a beautiful home. One is faster and less complex, the other is limitless but fragile unless managed well. Both collapse without grounding. Your chosen weapon handles the build, but if it isn’t fed the right data, it just makes confident nonsense faster. A Studio agent without connectors is a parrot. A Toolkit agent without grounding is a custom‑coded parrot. Either way, you’re still living with a bird squawking guesses at your users. And that brings us to the real

    20 min
  5. 2 DAYS AGO

    Stop Blaming Users—Your Pipeline Is the Problem

Ever wonder why your Dataverse pipeline feels like it’s built out of duct tape and bad decisions? You’re not alone. Most of us end up picking between Synapse Link and Dataflow Gen2 without a clear idea of which one actually fits. That’s what kills projects — picking wrong. Here’s the promise: by the end of this, you’ll know which to choose based on refresh frequency, storage ownership and cost, and rollback safety — the three things that decide whether your project hums along or blows up at 2 a.m. For context, Dataflow Gen2 caps out at 48 refreshes per day (about every 30 minutes), while Synapse Link can push as fast as every 15 minutes if you’re willing to manage compute. Hit subscribe to the M365.Show newsletter at m365 dot show for the full cheat sheet and follow the M365.Show LinkedIn page for MVP livestreams. Now, let’s put the scalpel on the table and talk about control. The Scalpel on the Table: Synapse Link’s Control Obsession You ever meet that one engineer who measures coffee beans with a digital scale? Not eyeball it, not a scoop, but grams on the nose. That’s the Synapse Link personality. This tool isn’t built for quick fixes or “close enough.” It’s built for the teams who want to tune, monitor, and control every moving part of their pipeline. If that’s your style, you’ll be thrilled. If not, there’s a good chance you’ll feel like you’ve been handed a jet engine manual when all you wanted was a light switch. At its core, Synapse Link is Microsoft giving you the sharpest blade in the drawer. You decide which Dataverse tables to sync. You can narrow it to only the fields you need, dictate refresh schedules, and direct where the data lands. And here’s the important part: it exports data into your own Azure Data Lake Storage Gen2 account, not into Microsoft’s managed Dataverse lake. That means you own the data, you control access, and you satisfy those governance and compliance folks who ask endless questions about where data physically lives. But that freedom comes with a trade-off. If you want Delta files that Fabric tools can consume directly, it’s up to you to manage that conversion — either by enabling Synapse’s transformation or spinning up Spark jobs. No one’s doing it for you. Control and flexibility, yes. But also your compute bill, your responsibility. And speaking of responsibility, setup is not some two-click wizard. You’re provisioning Azure resources: an active subscription, a resource group, a storage account with hierarchical namespace enabled, plus an app registration with the right permissions or a service principal with data lake roles. Miss one setting, and your sync won’t even start. It’s the opposite of a low-code “just works” setup. This is infrastructure-first, so anyone running it needs to be comfortable with the Azure portal and permissions at a granular level. Let’s go back to that freedom. The draw here is selective syncing and near-real-time refreshes. With Synapse Link, refreshes can run as often as every 15 minutes. For revenue forecasting dashboards or operational reporting — think sales orders that need to appear in Fabric within the hour — that precision is gold. Teams can engineer their pipelines to pull only the tables they need, partition the outputs into optimal formats, and minimize unnecessary storage. It’s exactly the kind of setup you’d want if you’re running pipelines with transformations before shipping data into a warehouse or lakehouse. But precision has a cost.
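
Part of that cost is that the exported files are yours to handle downstream. A minimal sketch of picking them up, assuming the Delta/Parquet conversion is enabled and the container is mounted or synced locally; the path and column names are placeholders.

```python
# Minimal sketch: read Synapse Link output you own in ADLS Gen2.
# Needs pandas + pyarrow. Assumes the Delta/Parquet conversion is
# enabled; the local path and columns are placeholders.
import pandas as pd

df = pd.read_parquet("./dataverse_export/salesorder/")  # hypothetical path

# Trim to what you actually need before shipping it onward -- the same
# discipline that keeps the compute bill sane.
slim = df[["salesorderid", "totalamount", "modifiedon"]]  # hypothetical columns
print(slim.sort_values("modifiedon").tail(10))
```
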
Every refresh you tighten, every table you add, every column you leave in “just in case” spins up compute jobs. That means resources in Azure are running on your dime. Which also means finance is involved sooner than later. The bargain you’re striking is clear: total control plus table-level precision equals heavy operational overhead if you’re not disciplined with scoping and scheduling. Let me share a cautionary tale. One enterprise wanted fine-grain control and jumped into Synapse Link with excitement. They scoped tables carefully, enabled hourly syncs, even partitioned their exports. It worked beautifully for a while — until multiple teams set up overlapping links on the same dataset. Suddenly, they had redundant refreshes running at overlapping intervals, duplicated data spread across multiple lakes, and governance meetings that felt like crime-scene investigations. The problem wasn’t the tool. It was that giving everyone surgical precision with no central rules led to chaos. The lesson: governance has to be baked in from day one, or Synapse Link will expose every gap in your processes. From a technical angle, it’s impressive. Data lands in Parquet, not some black-box service. You can pipe it wherever you want — Lakehouse, Warehouse, or even external analytics platforms. That open format and storage ownership are exactly what makes engineers excited. Synapse Link isn’t trying to hide the internals. It’s exposing them and expecting you to handle them properly. If your team already has infrastructure for pipeline monitoring, cost management, and security — Synapse Link slots right in. If you don’t, it can sink you fast. So who’s the right audience? If you’re a data engineer who wants to trace each byte, control scheduling down to the quarter-hour, and satisfy compliance by controlling exactly where the data lives, Synapse Link is the right choice. A concrete example: you’re running near-real-time sales feeds into Fabric for forecasting. You only need four tables, but you need them every 15 minutes. You want to avoid extra Dataverse storage costs while running downstream machine learning pipelines. Synapse Link makes perfect sense there. If you’re a business analyst who just wants to light up a Power BI dashboard, this is the wrong tool. It’s like giving a surgical kit to someone who just wanted to open Amazon packages. Bottom line, Synapse Link gives surgical-grade control of your Dataverse integration. That’s freeing if you have the skills, infrastructure, and budgets to handle it. But without that, it’s complexity overload. And let’s be real: most teams don’t need scalpel-level control just to get a dashboard working. Sometimes speed and simplicity mean more than precision. And that’s where the other option shows up — not the scalpel, but the multitool. Sometimes you don’t need surgical precision. You just need something fast, cheap, and easy enough to get the job done without bleeding everywhere. The Swiss Army Knife That Breaks Nail Files: Dataflow Gen2’s Low-Code Magic If Synapse Link is for control freaks, Dataflow Gen2 is for the rest of us who just want to see something on a dashboard before lunch. Think of it as that cheap multitool hanging by the cash register at the gas station. It’s not elegant, it’s not durable, but it can get you through a surprising number of situations. The whole point here is speed — moving Dataverse data into Fabric without needing a dedicated data engineer lurking behind every button click. 
Where Synapse feels like a surgical suite, Dataflow Gen2 is more like grabbing the screwdriver out of the kitchen drawer. Any Power BI user can pick tables, apply a few drag‑and‑drop transformations, and send the output straight into Fabric Lakehouses or Warehouses. No SQL scripts, no complex Azure provisioning. Analysts, low‑code makers, and even the guy in marketing who runs six dashboards can spin up a Dataflow in minutes. Demo time: imagine setting up a customer engagement dashboard, pulling leads and contact tables straight from Dataverse. You’ll have visuals running before your coffee goes cold. Sounds impressive — but the gotchas show up the minute you start scheduling refreshes. Here’s the ceiling you can’t push through: Dataflow Gen2 runs refreshes up to 48 times a day — that’s once every 30 minutes at best. No faster. And unlike Synapse, you don’t get true incremental loads or row‑level updates. What happens is one of two things: append mode, which keeps adding to the Delta table in OneLake, or overwrite mode, which completely replaces the table contents during each run. That’s great if you’re testing a demo, but it can be disastrous if you’re depending on precise tracking or rollback. A lot of teams miss this nuance and assume it works like a transactionally safe system. It’s not — it’s bulk append or wholesale replace. I’ve seen the pain firsthand. One finance dashboard was hailed as a success story after a team stood it up in under an hour with Dataflow Gen2. Two weeks later, their nightly overwrite job was wiping historical rows. To leadership, the dashboard looked fine. Under the hood? Years of transaction history were half scrambled and permanently lost. That’s not a “quirk” — that’s structural. Dataflow doesn’t give you row‑level delta tracking or rollback states. You either keep every refresh stacked up with append (risking bloat and duplication) or overwrite and pray the current version is correct. Now, let’s talk money. Synapse makes you pull out the checkbook for Azure storage and compute. With Dataflow Gen2, it’s tied to Fabric capacity units. That’s a whole different kind of silent killer. It doesn’t run up Azure GB charges — instead, every refresh eats into a pool of capacity. If you don’t manage refresh frequency and volume, you’ll burn CUs faster than you expect. At first you barely notice; then, during mid‑day loads, your workspace slows to a crawl because too many Dataflows are chewing the same capacity pie. The users don’t blame poor scheduling — they just say “Fabric is slow.” That’s how sneaky the cost trade‑off works. And don’t overlook governance here. Dataflow Gen2 feels almost too open-handed. You can pick tables, filter columns, and mash them into golden datasets… right up until refresh jobs coll

    22 min
  6. 3 DAYS AGO

    How AI Agents Spot Angry Customers Before You Do

    What if your contact center could recognize a frustrated customer before they even said a word? That’s not science fiction—it’s sentiment analytics at work inside Dynamics 365 Contact Center. Before we roll initiative on today’s patch boss, hit subscribe so these briefings auto-deploy to your queue instead of waiting on hold. Here’s how it works: your AI agent scans tone, word choice, and pacing, then routes the case to the right human before tempers boil over. In this walkthrough, we’ll break down sentiment routing and show how Copilot agents handle the repetitive grind while your team tackles the real fights. And to see why that shift matters, you first have to understand what life in a traditional center feels like when firefighting never ends. Why Old-School Contact Centers Feel Like Permanent Firefighting In an old-school contact center, the default mode isn’t support—it’s survival. You clock in knowing the day will be a long sprint through tickets that already feel behind before you even log on. The tools don’t help you anticipate; they just throw the next case onto the pile. That’s why the whole operation feels less like steady service and more like emergency response on loop. You start your shift, headset ready, and the queues are already stacked. Phones ringing, chat windows pinging, emails blinking red. The real problem isn’t the flood of channels; it’s the silence in between them. Sure, you might see a customer’s name and a new case ID. But the context—the email they already sent, the chat transcript from ten minutes ago, the frustration building—is hidden. It’s like joining a campaign raid without the map or character sheets, while the monsters are already rolling initiative against you. That lack of context creates repetition. You ask for details the customer already gave. You verify the order again. You type notes that live in one system but never make it to the next. The customer is exasperated—they told the same story yesterday, and now they’re stuck telling it again. Without omnichannel integration, those conversations often don’t surface instantly across other channels, so every interaction feels like starting over from level one. The loop is obvious. The customer gets impatient, wondering why the company seems forgetful. You grow tired of smoothing over the same irritation call after call. The frustration compounds, and neither side leaves happy. Industry coverage and vendor studies link this very pattern—repetition, long waits, lack of context—to higher churn for both customers and agents. Every extra “let me pull that up” moment costs loyalty and morale. And morale is already thin on the contact center floor. Instead of problem-solving, most of what you’re doing is juggling scripts and copy-paste rituals. It stops feeling like skill-based play and starts feeling like a tutorial that never ends. Agents burn out fast because there’s little sense of progress, no room for creative fixes, just a queue of new fires to stamp out. Supervisors, meanwhile, aren’t dealing with strategy—they’re patching leaks. Shaving seconds off handle times or tweaking greeting scripts becomes the fix, when the real bottleneck is the fragmented system itself. You can optimize edges all day long, but a leaky bucket never holds water. Without unified insight, everyone is running, but the operation doesn’t feel efficient. The consequence? 
Customers lose patience from being forced into repeats, agents lose motivation from endless restarts, and managers lose stability from the turnover that follows. Costs climb as you’re stuck recruiting, training, and re-training staff just to maintain baseline service. It’s a cycle that punishes everyone involved while leaving the root cause untouched. So when people describe contact center life as firefighting, they aren’t exaggerating. You’re not planning; you’re barely keeping pace. The systems don’t talk, the history doesn’t follow the customer, and the same blazes flare up again and again. Both customers and agents know it, and both sides feel trapped in a dungeon where the final boss is frustration itself. Which raises the real question: what if we could spot the ember before the smoke alarm goes off? How AI Learns to Spot Frustration Before You Can Ever notice how some systems can clock someone’s mood faster than you can even process the words? That’s the deal with sentiment AI inside Dynamics 365 Copilot. It isn’t guessing from body language—it’s analyzing tone, phrasing, pacing, and the emotional weight behind each line. Where you might get worn down after a full day on phones or chat, the algorithm doesn’t fatigue. It keeps collecting signals all the way through. On the surface, the mechanics look simple. But under the hood, it’s natural language processing paired with sentiment analysis. Conversations—whether spoken or typed—are broken down and assessed not just for meaning, but for emotional context. “I need help” registers differently than “Why do I always have to call you for this?” The first is neutral; the second carries embedded frustration. Those layers are exactly what the system learns to read. Now picture being eight hours deep into a shift. You’ve dealt with billing, a hardware swap, a password reset gone sideways, and one customer who refuses the steps you already emailed. At that point, your focus slips. You skim too fast, you miss that slight rise in tension during a call. Meanwhile, the AI has no such blind spots. It sees the all-caps chat with “unacceptable” three times and recognizes it’s a churn risk. Rather than waiting for you to stumble on it, the platform nudges that case higher up the queue. That’s where routing changes the game. Traditionally, it’s first come, first served. Whoever is next in line gets answered, regardless of urgency. With sentiment models active, the order shifts. Urgent or emotional cases are surfaced sooner, and they land with the agents who are best equipped to defuse them. If you want a visual, imagine the system dropping a glowing marker on the board—the message that this encounter is boss-level, not a background mob. The principle isn’t mystical—it’s applied pattern recognition. Dynamics 365 processes text and speech through NLP and sentiment analysis, turning words, phrasing, and even pauses into usable signals. These signals then guide routing. Angry customer mentions “cancel”? Escalate. High-value account gets impatient? Prioritize. And supervisors aren’t locked out of the process; they can tune those rules. Some teams weight high-value customers most, others give churn threats top priority. It’s just configuration, not a black box guessing on its own. And while the flashy bits often focus on keywords, voice and transcript analytics can also surface things like long pauses or repeated clusters of heated terms. These aren’t always hard-coded red flags, but they’re added signals the model considers.
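
If you want the routing idea in miniature (nothing like the production NLP, just the signal-stacking), here's a toy sketch: score each message on a few heat signals and let the hottest cases jump the queue.

```python
# Toy sentiment triage -- a stand-in for the real NLP models, showing
# only the routing idea: stacked heat signals jump the queue.
HEAT_TERMS = ("unacceptable", "cancel", "ridiculous", "again")

def heat(message: str) -> int:
    text = message.lower()
    term_hits = sum(text.count(term) for term in HEAT_TERMS)
    shouting = sum(1 for word in message.split() if word.isupper() and len(word) > 2)
    return term_hits + shouting

def route(queue: list[str]) -> list[str]:
    # Hottest first, instead of first come, first served.
    return sorted(queue, key=heat, reverse=True)

queue = [
    "Hi, how do I update my mailing address?",
    "This is UNACCEPTABLE. Third call about this. I want to cancel.",
    "Quick question about my invoice.",
]
for case in route(queue):
    print(heat(case), case)
```
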
So when you hit that inbox or call queue, you’re not opening blind. There’s a sentiment indicator already in place—a quick read on whether the person is calm, annoyed, or ready to escalate. It doesn’t do the talking for you, but it tells you: this one’s heating up, maybe skip the script fluff and move straight into problem solving. That early signal cuts off extra rounds of repetition, saving both sides from another cycle of frustration. It might sound like a small optimization, but scale changes everything. Across thousands of contacts, AI-driven triage reduces wait times, gets high-risk cases in front of senior agents, and lowers stress since you’re not constantly guessing where to focus first. Dumb queues vanish. Instead, they’re replaced by intent-driven queues where the hardest fights land exactly where they should. And once you’ve got that emotional heatmap running, your perspective shifts. Sentiment detection isn’t just about spotting problems—it’s about freeing you to act strategically. Because when AI can keep watch for spikes of frustration, the obvious next step is: what else can it take off your plate? Could it handle copying data, logging details, and grinding through the endless ticket forms? That’s the next piece of the story, where these systems stop being mood readers and start acting like tireless interns, carrying the paperwork so your team doesn’t have to. Autonomous Agents: Your New Support Interns That Never Forget Think of it this way: sentiment spotting tells you which cases are heating up. But what happens once those cases hit your queue? That’s where autonomous agents step in—digital interns inside Dynamics 365 that handle repetitive case work so you don’t have to micromanage the clerical side. They don’t lead the party, but they keep things organized and consistent, sparing your live team from the grind. Microsoft breaks them into three main types: the Case Management agent, the Customer Intent agent, and the Customer Knowledge Management agent. Case Management focuses on creating and updating tickets. Customer Intent builds out an intent library from historical conversations, so the system can better predict what a customer actually needs. Knowledge Management, meanwhile, generates and maintains the articles your team leans on every day. Each one automates a specific slice of the service loop. Take Case Management first. Normally, every ticket requires you to type out customer details, set categories, and match timestamps. The AI parses the text, populates fields, and organizes entries against the right tags. When you configure rules, it can trigger follow-up actions or even auto-resolve straightforward scenarios—like closing a case once a customer confirms the fix actually worked.
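As a thought experiment, an auto-resolve rule of that kind boils down to “parse the reply, update the record.” Nothing below comes from the Dynamics 365 API—the field names and the confirmation pattern are invented—but it shows the shape of what the Case Management agent automates at scale.

```python
import re

# Hypothetical confirmation phrases; a real system would use an NLP intent
# model rather than a regex, and richer case state than a plain dict.
CONFIRMATION = re.compile(r"\b(that (fixed|solved) it|works now|all set|resolved)\b", re.I)

def apply_auto_resolve(case: dict, customer_reply: str) -> dict:
    """Close a straightforward case once the customer confirms the fix."""
    if case["status"] == "waiting-on-customer" and CONFIRMATION.search(customer_reply):
        case["status"] = "resolved"
        case["note"] = f"Auto-resolved on customer confirmation: {customer_reply!r}"
    return case

case = {"id": "CAS-0042", "status": "waiting-on-customer"}
print(apply_auto_resolve(case, "Thanks, that fixed it!")["status"])  # resolved
```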

    19 min
  7. Ditch Passwords—How Real Azure Apps Secure Everything

    3 DAYS AGO

    Ditch Passwords—How Real Azure Apps Secure Everything

Here’s a fun fact: embedding credentials in your Azure apps is basically handing out house keys at a bus stop. Entra ID and managed identities let you lock every door without juggling keyrings or hoping nobody notices the Post-It note under your keyboard. The good news—you don’t need to be a cryptography wizard to do this. I’ll show you step by step how to swap secrets for tokens and sleep better at night. The Doormat Key Problem Why do so many Azure apps still stash passwords in config files like we’re all still writing VBScript in 2003? Seriously, it’s 2024. We have cloud-native security systems that mint tokens on demand, yet someone somewhere is still committing a literal `sa` password to their repo like it’s a badge of honor. And the excuse is always the same: “We hard‑code it just to save time.” Save time today, and then spend weeks cleaning up the mess when it leaks. That’s not a shortcut. That’s procrastination with extra steps. The problem is bigger than laziness. Developers think dropping usernames and passwords into a web.config file or appsettings.json is harmless because it stays internal. Except nothing ever stays internal. That config gets copied to dev, test, staging, three different QA branches, backups, and a laptop someone left on a plane. That’s not a secret; that’s a distributed broadcast. Add in Git, where “oops, wrong push” has made more production passwords public than I care to count, and you’ve got an incident queue that writes itself. Here’s the part nobody likes to admit: these “quick fixes” don’t just risk exposure—they guarantee it over time. Secrets are slippery. They creep into log files because you forgot to sanitize an exception. They hide in screenshots shared over Teams. They get zipped into backups sitting unencrypted in blob storage because no one paid for the vault tier. All it takes is one bored attacker scanning public repos for obvious strings—`Password123!` is still a goldmine—and suddenly your entire app is wide open. One of my favorites? Remember when thousands of credentials showed up in public GitHub a few years back because devs used personal repos for “just testing”? Attackers didn’t even have to try. They ran keyword scans, found connection strings, and walked straight into production resources. No zero‑day. No Hollywood hacking montage. Just copy, paste, profit. That’s what hard‑coding secrets buys you—a house where the burglar doesn’t even need to pick a lock. The key’s under the mat, and you spray‑painted “KEY IS UNDER REACT APP SETTINGS” on the front porch. You wouldn’t leave your front door unlocked with the garage code written on a sticky note, but that’s exactly how connection strings behave when they include credentials. Sure, it works. Until a neighbor—by which I mean some anonymous botnet—figures out where you hid them. Microsoft has been very clear lately: hard‑coded credentials are being pushed into the same bucket as Internet Explorer and Clippy. Deprecated. You can limp along with them, but expect disappointment, breakage, and an audit log screaming at you. Add to that the sprawl problem. Each environment needs its own settings, so now you’ve got a password per dev box, an admin string in staging, another in production, and nobody knows if they’re rotated. Different teams hoard slightly out‑of‑date copies. Someone comments out an old connection string instead of deleting it. Congratulations: your app is a digital junk drawer of skeleton keys. Attackers love it because it’s a buffet.
And let’s not even mention what happens when contractors get read‑access to your repos. You think they only take the code? The takeaway here is simple: the real danger isn’t just a password leaking. It’s the way secrets breed. Once you let them into configs, they replicate across environments, backups, scripts, and documentation. You cannot manage that sprawl. You cannot contain it with “clever” obfuscation tricks. It’s not a problem you patch; it’s a problem you eliminate. Stop thinking about where to hide the key. Instead, stop using keys at all. That’s why tokens exist. They don’t behave like passwords. They aren’t long‑lived, they aren’t static, and they don’t sit in files for years daring the wrong person to find them. The cure for password sprawl isn’t to hide the passwords better—it’s to replace them with something that self‑destructs when it’s misused. Tokens let you do exactly that, and Entra ID is the system handing them out. Okay, so if we throw the doormat keys away, how do tokens avoid turning into even messier problems for us admins? Let’s talk about why they actually make life easier instead of harder. Why Tokens Beat Passwords Every Time If passwords are car keys, tokens are valet tickets—you use them for a single ride, and they’re worthless once the trip’s done. Nobody makes a sneaky copy of a valet ticket, and if they try, someone spots it right away. That’s the fundamental difference: passwords are static. Tokens are temporary and scoped. Which means if they leak, the blast radius is tiny, and the clock is already ticking before they expire. So what even is a token? In Azure land, your app hits Entra ID and says, “Hey, can I get permission to do this thing?” Entra ID checks who or what’s asking, does its paperwork, and then hands back a signed package: the access token. That token is short‑lived. It’s tied to the user or service identity. It’s got limits on what it can touch. And when the time window closes, it’s dead. Nobody resets it, nobody rotates it—it just vanishes. Of course, when you tell devs this, the first instinct is panic. “Oh no, now we’ve got rotating tokens flying around, do we have to cache them, store them, chase refresh tokens, build renewal logic?” The answer: no, stop sweating. The Microsoft Identity libraries do that plumbing for you. The SDKs literally grab, refresh, and dispose of tokens like janitors cleaning up after a conference. You don’t have to reinvent OAuth. You just call the function and focus on your actual app logic. Compare it with the old school way. A static password is like handing someone the master key to your entire building. They don’t just get into their floor; they can hit the executive suite, the server room, even the candy stash in HR’s drawer. Now take a token: it’s a guest pass that only works for Floor 3, and it shuts off at 5 p.m. If someone tries to use it after hours, access denied. No guessing, no lingering. It’s scope matched to function, not wide‑open duct tape. Tokens also carry brains inside them. Ever open up a token? It’s like inspecting a boarding pass—there’s your flight, seat row, gate, and zone. Tokens store claims: roles, scopes, tenant IDs, even the user’s basic info. That means your API isn’t just trusting “this person logged in.” It’s checking “this person is in finance, has the payments role, and was approved today.” You can build way tighter rules directly into your app logic, without managing ten different password sets.
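If you want to read one of those boarding passes yourself, a few lines of Python with the PyJWT library will do it. One caution: this decodes the payload without verifying the signature, which is fine for a human debugging a token but never a substitute for real validation in your API. The claim names printed here (tid, scp, roles) are the usual Entra ID access-token claims.

```python
import jwt  # PyJWT — used here only to *inspect* a token, not to validate it

def peek_at_claims(access_token: str) -> None:
    """Print a token's claims, boarding-pass style. Debugging aid only:
    skipping signature verification is never acceptable server-side."""
    claims = jwt.decode(access_token, options={"verify_signature": False})
    for key in ("aud", "iss", "tid", "roles", "scp", "exp"):
        if key in claims:
            print(f"{key:>5}: {claims[key]}")
```

Point it at any access token from your sign-in flow and you’ll see exactly what your API gets to make decisions with—audience, tenant, roles, scopes, and the expiry that makes the whole thing self-destruct.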
Here’s a real comparison. One shop I worked with had a database locked only with SQL credentials hard‑coded everywhere. That account had system admin. Predictably, one contractor copied it to a tool, and bam—the entire database was their playground. Every table exposed. Now look at a newer system using tokens scoped only to what the app needed: read access on a single schema. Even if someone stole that token, all they could do was select records, and only until the token expired. It turned a nightmare into a minor annoyance. Now I can hear some devs groaning: “Tokens sound neat in theory, but they’ll break my workflow. I don’t have time to micromanage renewal or expiration.” That’s the myth. Nobody’s expecting you to refresh them manually. The libraries you’re already using in .NET or Node grab the refresh token, swap it invisibly, and keep rolling. You don’t even know it happened unless you were sniffing packets. Which is the point—stop babysitting secrets; let the system handle it. Think of Entra ID as the passport office. You show up, prove who you are, and they stamp an ID that border guards (your APIs) trust. Expired stamp? No boarding. Wrong stamp? Denied. It centralizes the identity question in one authority. Your apps don’t argue, your APIs don’t babysit passwords—they just check the stamp and let you through. That’s infinitely better than juggling a dozen mystery keys and wondering which ones are still valid. The best part for us IT folks: no more forced password rotations every 90 days, no more scripts running at 3 a.m. to reset service accounts, no more debates about whether to store secrets in Key Vault or under a heavy dose of wishful thinking. Tokens expire on their own, and that expiration is your safety net. If something leaks, it’s already doomed to stop working. You didn’t fix anything—you just built a system that auto‑heals by design. So yes, tokens handle user logins securely and cleanly. Great. But it’s not just about users clicking sign‑in. The real test is when apps need to talk to other apps, or services fire off requests at three in the morning without a human anywhere near a keyboard. That’s where things shift from neat to necessary. Meet Managed Identities: Service Principals, but Less Dumb Imagine if your app could march into Azure services without ever typing a password, because it already came out of the factory with its own stamped identity. That’s not science fiction—it’s how Managed Identities work. And once you start using them, the whole idea of shuffling service principals around with expiring secrets will feel as dated as carrying a pager. Here’s the old mess we used to live with: service principals. You’d register one, generate a client secret or manage a certificate, paste it into config for every environment, and then hope someone rotated it before it expired and took production down with it.
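Here’s what the managed-identity version looks like in practice—a minimal Python sketch using the azure-identity and azure-storage-blob libraries. The storage account URL and container name are placeholders for your own resources; the point is that nothing resembling a password appears anywhere in the code or its config.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# No connection string, no client secret, nothing to rotate. On an Azure
# host this credential resolves to the assigned managed identity; on a
# dev box it falls back to your az CLI or VS Code sign-in.
credential = DefaultAzureCredential()

# "myaccount" and "invoices" are placeholders, not real resources.
service = BlobServiceClient(
    account_url="https://myaccount.blob.core.windows.net",
    credential=credential,
)

container = service.get_container_client("invoices")
for blob in container.list_blobs():
    print(blob.name)
```

The only setup is granting that identity a role on the account—say, Storage Blob Data Reader—and the scoping logic from earlier applies automatically. Delete the VM or revoke the role assignment, and access dies with it; there’s no leftover credential to leak.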

    20 min
  8. I Replaced 500 Measures Instantly—Here’s How

    4 DAYS AGO

    I Replaced 500 Measures Instantly—Here’s How

Ever stared at a Power BI model with 500 measures, all named like a toddler smashing a keyboard? That endless scroll of “what-does-this-even-mean” is a special kind of pain. If you want fewer helpdesk tickets about broken reports, hit subscribe now—future you will thank you when it’s cleanup time. The good news? Power BI now has project- and text-first formats that let you treat models more like code. That means bulk edits, source-control-style safety nets, and actual readability. I’ll walk through a real cleanup: bulk renaming, color find-and-replace, and measure documentation in minutes. And it all starts with seeing how bad those 500 messy names really are. When 500 Measures Look Like Goblin Script It feels less like data modeling and more like trying to raid a dungeon where every potion is labeled “Item1,” “Item2,” “Item3.” You know one of them heals, but odds are you’ll end up drinking poison. That’s exactly how scrolling through a field list packed with five hundred cryptic measures plays out—you’re navigating blind, wasting time just figuring out what’s safe to click. Now put yourself in the shoes of a business analyst trying to build a report. They open the model expecting clarity but see line after line of nonsense labels: “M1,” “Total1,” “NewCalc2.” It’s not impossible to work with—just painfully slow. Every choice means drilling back, cross-referencing, or second-guessing what the calculation actually does. Seconds turn into minutes, minutes add up to days, and the simple act of finding the right measure becomes the real job. With a handful of measures, sloppy names are irritating but tolerable. Scale that up, and the cracks widen fast. What used to be small friction balloons into a major drag on the entire team’s productivity. Confusion spreads, collaboration stalls, and duplicated effort sneaks in as people re-create calculations instead of trusting what’s already there. Poor naming doesn’t just clutter the field list—it reshapes how people work with the model. It’s a bit like Active Directory where half your OUs are just called “test.” You can still hunt down users if you’re patient, but you’d never onboard a new hire into that mess. The same goes here. New analysts try to ramp up, hit the wall of cryptic names, and end up burning time deciphering the basics instead of delivering insights. Complexity rises, learning curves get steeper, and the whole workflow slows to a crawl. You feel the tax most clearly in real-world reporting. Take something as simple as revenue. Instead of one clean measure, you’ve got “rev_calc1,” “revenueTest2,” and “TotalRev_Final.” Which one is the source of truth? Everyone pauses to double-check, then re-check again. That delay ripples outward—updates arrive late, dashboards need extra reviews, and trust in the reports slides downhill. So people try to fix it the hard way: renaming by hand. But manual cleanup is the natural 1 of measure management—a critical fail, every single time. Each rename takes clicks, dialog boxes, and round-trips. It’s slow, boring, and guaranteed to fall behind before you’ve even finished. By the time you clean up twenty labels, two more requests land on your desk. It’s spoon-versus-dragon energy, and the dragon always wins. The point isn’t that renaming is technically difficult—it’s that you’re locked into brittle tools that force one painful click at a time. What you really want is a spell that sweeps through the entire inventory in one pass: rename, refactor, document, done. That curiosity is the opening to a more scalable approach.
Because this isn’t just about sloppily named measures. It’s about the container itself. Right now, most models feel like sealed vaults—you tap around the outside but never see inside. And that’s why the next move matters. When we look at how Power BI stores its models, you’ll see just how much the container format shapes everything, from version control to bulk edits. Ever try to diff a PBIX in Git? That’s like comparing two JPEGs—you don’t see the meaning, just the noise. Binary Black Box vs. Human-Readable PBIP That’s where the real fork in the road shows up—binary PBIX files versus the newer project-style PBIP format. PBIX has always been the default, but it’s really just a closed container. Everything—reports, models, measures—is packed into one binary file that’s not designed for human eyes. You can work with it fine in Power BI Desktop, but the moment you want to peek under the hood or compare changes over time, the file isn’t built for that. PBIX files aren’t friendly to textual diffs, which makes them hard to manage with modern developer workflows. Quick note: if you’re documenting or teaching this, confirm the exact constraints in Microsoft’s official docs before stating it absolutely. Now picture trying to adjust a set of measures spread across dozens of reports. With PBIX, you’re clicking dialogs, hunting through dropdowns, copy-pasting by hand. You don’t have a reliable way to scan across projects, automate changes, or track exactly what shifted. It works at small scale, but the overhead stacks up fast. PBIP changes the layout completely. Instead of one sealed file, your work expands into a structured project folder. The visuals and the data model are each split into separate files, stored as text. The difference is night and day—now you can actually read, edit, and manage those pieces like source code. Microsoft has moved toward reusability before with templates (.PBIT) that let you standardize reports. PBIP takes the same idea further, but at the level of your whole project and model. Once your files are text, you can bring in standard tools. Open a measure in VS Code. Wire the folder to Git. Suddenly, a change shows up as a clean side-by-side diff: the old formula on the left, the new one on the right. No binary sludge, no guesswork. That transparency is the keystone. But it’s not only about visibility. You also gain revertability. A mistake no longer means “hope you made a manual backup.” It’s a matter of checking out a prior commit and moving on. And because the files are text, you gain automation. Need to apply formatting standards or swap a naming convention across hundreds of measures? Scripts can handle that in seconds. Those three beats—visibility, revertability, automation—are the real payoff. They turn Power BI projects from isolated files into artifacts that play by the same rules as code, making your analytics far easier to manage at scale. It doesn’t turn every business user into a software engineer, but it does mean that anyone managing a large model suddenly has options beyond “click and pray.” In practice, the shift to PBIP means ditching the black-box vibe and picking up a kit that’s readable, testable, and sustainable. Instead of stashing slightly different PBIX versions all over your desktop, you carry one source-controlled copy with a clean history. Instead of hoping you remember what changed last sprint, you can point to actual commits. 
And instead of being the bottleneck for every adjustment, you can spread responsibility across a team because the files themselves are transparent. Think of PBIX as a locked chest where you only get to see the loot after hauling it back to one specific cave. PBIP is more like a library of scrolls—open, legible, and organized. You can read them, copy them, or even apply batch changes without feeling like you’re breaking the seal on sacred text. The bottom line is this: PBIP finally gives you the clarity you’ve been missing. But clarity alone doesn’t fix the grunt work. Even with text-based projects, renaming 500 messy measures by hand is still tedious. That’s where the next tool enters, and it’s the one that actually makes those bulk edits feel like cheating. Why TMDL Is Basically a Cheat Code Now enter TMDL—short for Tabular Model Definition Language—a format that lays out the guts of your semantic model as plain text. Think of it less like cracking open a black box and more like spreading your entire character sheet on the table. Measures, columns, expressions, relationships—they’re all there in a standard syntax you can read and edit. No hidden menus, no endless scrolling. Just text you can parse, search, and modify. It’s worth a quick caution here: the exact behavior depends on your file format and tooling. Microsoft documentation should always be your source of truth. But the verified shift is this—where PBIP gives you a project folder, a tabular definition file exposes that model in editable text. That’s a major difference. It turns model management into something any text editor, automation script, or version-control workflow can help with, instead of limiting you to clicks inside Power BI Desktop. And that solves a big limitation. If you’ve ever tried renaming hundreds of fields using only the UI, you know the grind—each tiny rename chained to point-and-click loops. Even in PBIP without a model definition layer, the structure isn’t designed to make massive, organized replacements easy. TMDL fills that hole by laying the whole framework bare, so you’re no longer stuck in click-by-click combat. Here’s a straightforward example. Suppose your reports all use a specific shade of blue and it needs to change. Before, you’d open every formatting pane, scroll menus, and repeat—hours gone. In a text-based model file, those values exist as editable strings. You can global-replace “#3399FF” with “#0066CC” in seconds. That’s the kind of move that feels like rolling double damage on a tedious chore. Of course, confirm that your file format supports those edits and always keep a backup before you script a bulk change. This is where the design shows. The format is structured and consistent, not ad hoc. By representing your model in neatly organized text, it gives scripts and editors a stable target—every measure, expression, and property sits in a predictable place, which is exactly what makes safe bulk edits possible.
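To show the kind of bulk move that enables, here’s a small Python sketch. The project path, measure names, and hex codes are placeholders, and this is a sketch rather than a sanctioned tool—commit the PBIP folder to Git (or copy it) before running anything like it, then reopen the model in Power BI Desktop to validate.

```python
import re
from pathlib import Path

# Hypothetical PBIP layout; adjust to where your .tmdl files actually live.
PROJECT = Path("Sales.SemanticModel/definition")
RENAMES = {"rev_calc1": "Revenue_Net", "TotalRev_Final": "Revenue_YTD"}
OLD_BLUE, NEW_BLUE = "#3399FF", "#0066CC"

for tmdl in PROJECT.rglob("*.tmdl"):
    text = tmdl.read_text(encoding="utf-8")
    for old, new in RENAMES.items():
        # Word boundaries keep "rev_calc1" from also hitting "rev_calc10".
        text = re.sub(rf"\b{re.escape(old)}\b", new, text)
    text = text.replace(OLD_BLUE, NEW_BLUE)  # the color swap from above
    tmdl.write_text(text, encoding="utf-8")
    print(f"updated {tmdl}")
```

Because DAX references to a measure live in the same text files, the same pass catches them too. The word-boundary regex and the validation step afterward are what keep a rename from silently mangling a lookalike name.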

    17 min
