M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation. Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit M365.show.

  1. Survive Your First D365 API Call (Barely)


    You’ve got D365 running, and management drops the classic: “Integrate it with that tool over there.” Sounds simple, right? Except misconfigured permissions create compliance headaches, and using the wrong entity can grind processes to a halt. That’s why today’s survival guide is blunt and step‑by‑step. Here’s the roadmap: one, how to authenticate with Azure AD and actually get a token. Two, how to query F&O data cleanly with OData endpoints. Three, when to lean on custom services—and how to guard them so they don’t blow up on you later. We’ll register an app, grab a token, make a call, and set guardrails you can defend to both your CISO and your sanity. Integration doesn’t need duct tape—it needs the right handshake. And that’s where we start. Meet the F&O API: The 'Secret Handshake' Meet the Finance and Operations API: the so‑called “secret handshake.” It isn’t black magic, and you don’t need to sacrifice a weekend to make it work. Think of it less like wizardry and more like knowing the right knock to get through the right door. The point is simple: F&O won’t let you crawl in through the windows, but it will let you through the official entrance if you know the rules. A lot of admins still imagine Finance and Operations as some fortress with thick walls and scary guards. Fine, sure—but the real story is simpler. Inside that fortress, Microsoft already built you a proper door: the REST API. It’s not a hidden side alley or a developer toy. It’s the documented, supported way in. Finance and Operations exposes business data through OData/REST endpoints—customers, vendors, invoices, purchase orders—the bread and butter of your ERP. That’s the integration path Microsoft wants you to take, and it’s the safest one you’ve got. Where do things go wrong? It usually happens when teams try to skip the API. You’ve seen it: production‑pointed SQL scripts hammered straight at the database, screen scraping tools chewing through UI clicks at robot speed, or shadow integrations that run without anyone in IT admitting they exist. Those shortcuts might get you quick results once or twice, but they’re fragile. They break the second Microsoft pushes a hotfix, and when they break, the fallout usually hits compliance, audit, or finance all at once. In contrast, the API endpoints give you a structured, predictable interface that stays supported through updates. Here’s the mindset shift: Microsoft didn’t build the F&O API as a “bonus” feature. This API is the playbook. If you call it, you’re supported, documented, and when issues come up, Microsoft support will help you. If you bypass it, you’re basically duct‑taping integrations together with no safety net. And when that duct tape peels off—as it always does—you’re left explaining missing transactions to your boss at month‑end close. Nobody wants that. Now, let’s get into what the API actually looks like. It’s RESTful, so you’ll be working with standard HTTP verbs: GET, POST, PATCH, DELETE. The structure underneath is OData, which basically means you’re querying structured endpoints in a consistent way. Every major business entity you care about—customers, vendors, invoices—has its shelf. You don’t rummage through piles of exports or scrape whatever the UI happens to show that day. You call “/Customers” and you get structured data back. Predictable. Repeatable. No surprises. Think of OData like a menu in a diner. It’s not about sneaking into the kitchen and stirring random pots. 
The menu lists every dish, the ingredients are standardized, and when you order “Invoice Lines,” you get exactly that—every single time. That consistency is what makes automation and integration even possible. You’re not gambling on screen layouts or guessing which Excel column still holds the vendor ID. You’re just asking the system the right way, and it answers the right way. But OData isn’t your only option. Sometimes, you need more than an entity list—you need business logic or steps that OData doesn’t expose directly. That’s where custom services come in. Developers can build X++‑based services for specialized workflows, and those services plug into the same API layer. Still supported, still documented, just designed for the custom side of your business process. And while we’re on options, there’s one more integration path you shouldn’t ignore: Dataverse dual‑write. If your world spans both the CRM side and F&O, dual‑write gives you near real‑time, two‑way sync between Dataverse tables and F&O data entities. It maps fields, supports initial sync, lets you pause/resume or catch up if you fall behind, and it even provides a central log so you know what synced and when. That’s a world away from shadow integrations, and it’s exactly why a lot of teams pick it to keep Customer Engagement and ERP data aligned without hand‑crafted hacks. So the takeaway is this: the API isn’t an optional side door. It’s the real entrance. Use it, and you build integrations that survive patches, audits, and real‑world use. Ignore it, and you’re back to fragile scripts and RPA workarounds that collapse when the wind changes. Microsoft gave you the handshake—now it’s on you to use it. All of that is neat—but none of it matters until you can prove who you are. On to tokens. Authentication Without Losing Your Sanity Authentication Without Losing Your Sanity. Let’s be real: nothing tests your patience faster than getting stonewalled by a token error that helpfully tells you “Access Denied”—and nothing else. You’ve triple‑checked your setup, sacrificed three cups of coffee to the troubleshooting gods, and still the API looks at you like, “Who are you again?” It’s brutal, but it’s also the most important step in the whole process. Without authentication, every other clever thing you try is just noise at a locked door. Here’s the plain truth: every single call into Finance and Operations has to be approved by Azure Active Directory through OAuth 2.0. No token, no entry. Tokens are short‑lived keys, and they’re built to keep random scripts, rogue apps, or bored interns from crashing into your ERP. That’s fantastic for security, but if you don’t have the setup right, it feels like yelling SQL queries through a window that doesn’t open. So how do you actually do this without going insane? Break it into three practical steps: * Register the app in Azure AD. This gives you a Client ID, and you’ll pair it with either a client secret or—much better—a certificate for production. That app registration becomes the official identity of your integration, so don’t skip documenting what it’s for. * Assign the minimum API permissions it needs. Don’t go full “God Mode” just because it’s easier. If your integration just needs Vendors and Purchase Orders, scope it exactly there. Least privilege isn’t a suggestion; it’s the only way to avoid waking up to compliance nightmares down the line. 
* Get admin consent, then request your token using the client credentials flow (for app‑only access) or delegated flow (if you need it tied to a user). Once Azure AD hands you that token, that’s your golden ticket—good for a short window of time. For production setups, do yourself a favor and avoid long‑lived client secrets. They’re like sticky notes with your ATM PIN on them: easy for now, dangerous long‑term. Instead, go with certificate‑based authentication or managed identities if you’re running inside Azure. One extra hour to configure it now saves you countless fire drills later. Now let’s talk common mistakes—because we’ve all seen them. Don’t over‑grant permissions in Azure. Too many admins slap on every permission they can find, thinking they’ll trim it back later. Spoiler: they never do. That’s how you get apps capable of erasing audit logs when all they needed was “read Customers.” Tokens are also short‑lived on purpose. If you don’t design for refresh and rotation, your integration will look great on day one and then fail spectacularly 24 hours later. Here’s the practical side. When you successfully fetch that OAuth token from Azure AD, you’re not done—you actually have to use it. Every API request you send to Finance and Operations has to include it in the header: Authorization: Bearer OData Endpoints: Your New Best Friend OData endpoints: your new best friend. Picture this as the part where the API stops being a locked door and starts being an organized shelf. Up until now, it’s all been about access—tokens, scopes, and proving you should be in the room. With OData, you’re not sneaking through windows or pawing through random SQL tables; you’ve got clean, documented endpoints lined up: Customers, Vendors, Invoices, Purchase Orders, all waiting politely at predictable URLs. You need customers? Hit /Customers. Invoices? /VendorInvoices. It’s standardized, not guesswork. Contrast that with the “Export to Excel” culture we’ve all lived through. Hit that button and in seconds your data is outdated. The moment a record changes—updated address, new sales order—that exported file lies to you. With OData, you’re not emailing aging snapshots; you’re pulling live transactional data. Plug that into Power BI and suddenly your dashboards reflect what’s happening now, not what happened last week. It’s the difference between staring at a Polaroid and watching a livestream. Guess which one your CFO trusts when arguing about current numbers. The real power sits in CRUD: Create, Read, Update, Delete. In OData terms: POST, GET, PATCH, DELETE. A GET reads records, POST creates new ones, PATCH updates, and DELETE… deletes (use with caution). It’s simple: four verbs for almost every transactional integration you’ll need. No voodoo, no obscure syntax—just basic dat
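    To make the handshake concrete, here is a minimal sketch of the flow described above, using Python with the MSAL library: acquire a client-credentials token from Azure AD, send it in the Authorization: Bearer header, and read an OData entity. The tenant, client, environment URL, entity name, and $select fields are placeholders to swap for your own; in production, prefer a certificate or managed identity over the raw client secret shown here.

    ```python
    # Minimal sketch: client-credentials token plus one OData GET against F&O.
    # Tenant, client, environment URL, entity name, and fields are placeholders.
    import msal
    import requests

    TENANT_ID = "<your-tenant-id>"
    CLIENT_ID = "<app-registration-client-id>"
    CLIENT_SECRET = "<client-secret-or-better-a-certificate>"
    FNO_URL = "https://<your-environment>.operations.dynamics.com"

    # 1. Ask Azure AD for a token scoped to the F&O environment (client credentials flow).
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    token = app.acquire_token_for_client(scopes=[f"{FNO_URL}/.default"])
    if "access_token" not in token:
        raise RuntimeError(f"Token request failed: {token.get('error_description')}")

    # 2. Call an OData entity with the bearer token in the Authorization header.
    resp = requests.get(
        f"{FNO_URL}/data/Customers",
        headers={"Authorization": f"Bearer {token['access_token']}"},
        params={"$select": "CustomerAccount,Name", "$top": "10"},  # keep the payload lean
        timeout=30,
    )
    resp.raise_for_status()
    for customer in resp.json().get("value", []):
        print(customer)
    ```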

    17 min
  2. Microsoft Fabric Explained: No Code, No Nonsense


    Here’s a fun corporate trick: Microsoft managed to confuse half the industry by slapping the word “house” on anything with a data label. But here’s what you’ll actually get out of the next few minutes: we’ll nail down what OneLake really is, when to use a Warehouse versus a Lakehouse, and why Delta and Parquet keep your data from turning into a swamp of CSVs. That’s three concrete takeaways in plain English. Want the one‑page cheat sheet? Subscribe to the M365.Show newsletter. Now, with the promise clear, let’s talk about Microsoft’s favorite game: naming roulette. Lakehouse vs Warehouse: Microsoft’s Naming Roulette When people first hear “Lakehouse” and “Warehouse,” it sounds like two flavors of the same thing. Same word ending, both live inside Fabric, so surely they’re interchangeable—except they’re not. The names are what trip teams up, because they hide the fact that these are different experiences built on the same storage foundation. Here’s the plain breakdown. A Warehouse is SQL-first. It expects structured tables, defined schemas, and clean data. It’s what you point dashboards at, what your BI team lives in, and what delivers fast query responses without surprises. A Lakehouse, meanwhile, is the more flexible workbench. You can dump in JSON logs, broken CSVs, or Parquet files from another pipeline and not break the system. It’s designed for engineers and data scientists who run Spark notebooks, machine learning jobs, or messy transformations. If you want a visual, skip the sitcom-length analogy: think of the Warehouse as a labeled pantry and the Lakehouse as a garage with the freezer tucked next to power tools. One is organized and efficient for everyday meals. The other has room for experiments, projects, and overflow. Both store food, but the vibe and workflow couldn’t be more different. Now, here’s the important part Microsoft’s marketing can blur: neither exists in its own silo. Both Lakehouses and Warehouses in Fabric store their tables in the open Delta Parquet format, both sit on top of OneLake, and both give you consistent access to the underlying files. What’s different is the experience you interact with. Think of Fabric not as separate buildings, but as two different rooms built on the same concrete slab, each furnished for a specific kind of work. From a user perspective, the divide is real. Analysts love Warehouses because they behave predictably with SQL and BI tools. They don’t want to crawl through raw web logs at 2 a.m.—they want structured tables with clean joins. Data engineers and scientists lean toward Lakehouses because they don’t want to spend weeks normalizing heaps of JSON just to answer “what’s trending in the logs.” They want Spark, Python, and flexibility. So the decision pattern boils down to this: use a Warehouse when you need SQL-driven, curated reporting; use a Lakehouse when you’re working with semi-structured data, Spark, and exploration-heavy workloads. That single sentence separates successful projects from the ones where teams shout across Slack because no one knows why the “dashboard” keeps choking on raw log files. And here’s the kicker—mixing up the two doesn’t just waste time, it creates political messes. If management assumes they’re interchangeable, analysts get saddled with raw exports they can’t process, while engineers waste hours building shadow tables that should’ve been Lakehouse assets from day one. The tools are designed to coexist, not to substitute for each other. So the bottom line: Warehouses serve reporting. 
Lakehouses serve engineering and exploration. Same OneLake underneath, same Delta Parquet files, different optimizations. Get that distinction wrong, and your project drags. Get it right, and both sides of the data team stop fighting long enough to deliver something useful to the business. And since this all hangs on the same shared layer, it raises the obvious question—what exactly is this OneLake that sits under everything? OneLake: The Data Lake You Already Own Picture this: you move into a new house, and surprise—there’s a giant underground pool already filled and ready to use. That’s what OneLake is in Fabric. You don’t install it, you don’t beg IT for storage accounts, and you definitely don’t file a ticket for provisioning. It’s automatically there. OneLake is created once per Fabric tenant, and every workspace, every Lakehouse, every Warehouse plugs into it by default. Under the hood, it actually runs on Azure Data Lake Storage Gen2, so it’s not some mystical new storage type—it’s Microsoft putting a SaaS layer on top of storage you probably already know. Before OneLake, each department built its own “lake” because why not—storage accounts were cheap, and everyone believed their copy was the single source of truth. Marketing had one. Finance had one. Data science spun one up in another region “for performance.” The result was a swamp of duplicate files, rogue pipelines, and zero coordination. It was SharePoint sprawl, except this time the mistakes showed up in your Azure bill. Teams burned budget maintaining five lakes that didn’t talk to each other, and analysts wasted nights reconciling “final_v2” tables that never matched. OneLake kills that off by default. Think of it as the single pool everyone has to share instead of each team digging muddy holes in their own backyards. Every object in Fabric—Lakehouses, Warehouses, Power BI datasets—lands in the same logical lake. That means no more excuses about Finance having its “own version” of the data. To make sharing easier, OneLake exposes a single file-system namespace that stretches across your entire tenant. Workspaces sit inside that namespace like folders, giving different groups their place to work without breaking discoverability. It even spans regions seamlessly, which is why shortcuts let you point at other sources without endless duplication. The small print: compute capacity is still regional and billed by assignment, so while your OneLake is global and logical, the engines you run on top of it are tied to regions and budgets. At its core, OneLake standardizes storage around Delta Parquet files. Translation: instead of ten competing formats where every engine has to spin its own copy, Fabric speaks one language. SQL queries, Spark notebooks, machine learning jobs, Power BI dashboards—they all hit the same tabular store. Columnar layout makes queries faster, transactional support makes updates safe, and that reduces the nightmare of CSV scripts crisscrossing like spaghetti. The structure is simple enough to explain to your boss in one diagram. At the very top you have your tenant—that’s the concrete slab the whole thing sits on. Inside the tenant are workspaces, like containers for departments, teams, or projects. Inside those workspaces live the actual data items: warehouses, lakehouses, datasets. It’s organized, predictable, and far less painful than juggling dozens of storage accounts and RBAC assignments across three regions. 
On top of this, Microsoft folds in governance as a default: Purview cataloging and sensitivity labeling are already wired in. That way, OneLake isn’t just raw storage, it also enforces discoverability, compliance, and policy from day one without you building it from scratch. If you’ve lived the old way, the benefits are obvious. You stop paying to store the same table six different times. You stop debugging brittle pipelines that exist purely to sync finance copies with marketing copies. You stop getting those 3 a.m. calls where someone insists version FINAL_v3.xlsx is “the right one,” only to learn HR already published FINAL_v4. OneLake consolidates that pain into a single source of truth. No heroic intern consolidating files. No pipeline graveyard clogging budgets. Just one layer, one copy, and all the engines wired to it. It’s not magic, though—it’s just pooled storage. And like any pool, if you don’t manage it, it can turn swampy real fast. OneLake gives you the centralized foundation, but it relies on the Delta format layer to keep data clean, consistent, and usable across different engines. That’s the real filter that turns OneLake into a lake worth swimming in. And that brings us to the next piece of the puzzle—the unglamorous technology that keeps that water clear in the first place. Delta and Parquet: The Unsexy Heroes Ever heard someone drop “Delta Parquet” in a meeting and you just nodded along like you totally understood it? Happens to everyone. The truth is, it’s not a secret Microsoft code name or Star Trek tech—it’s just how Fabric stores tabular data under the hood. Every Lakehouse and Warehouse in Fabric writes to **Delta Parquet format**, which sounds dull until you realize it’s the reason your analytics don’t fall apart the second SQL and Spark meet in the same room. Let’s start with Parquet. Parquet is a file format that stores data in columns instead of rows. That simple shift is a game-changer. Think of it this way: if your data is row-based, every query has to slog through every field in every record, even if all you asked for was “Customer_ID.” It’s like reading every Harry Potter book cover-to-cover just to count how many times “quidditch” shows up. Columnar storage flips that around—you only read the column you need. It’s like going straight to the dictionary index under “Q” and grabbing just the relevant bits. That means queries run faster, fewer bytes are read, and your cloud bill doesn’t explode every time someone slices 200 million rows for a dashboard. Parquet delivers raw performance and efficiency. Without it, large tables turn into a laggy nightmare and cost far more than they should. With it, analysts can run their reports inside a coffee break instead of during an all-hands meeting. But Pa
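    As a quick illustration of "same Delta Parquet files, different experiences," here is a minimal Fabric-notebook-style sketch in PySpark. It assumes the built-in spark session a Fabric notebook provides, a Lakehouse attached as the default source, and an existing Delta table named Customers with CustomerId and Region columns; all of those names are placeholders.

    ```python
    # Minimal sketch for a Fabric notebook with a default Lakehouse attached.
    # "Customers", "CustomerId", and "Region" are placeholder names; the point is that
    # Spark, the SQL analytics endpoint, and Power BI read the same Delta files in OneLake.

    # Read the managed Delta table the Lakehouse exposes (stored under Tables/Customers).
    df = spark.read.table("Customers")

    # Columnar pruning: only the columns named here are scanned, not the whole row.
    df.select("CustomerId", "Region").groupBy("Region").count().show()

    # The same table answers Spark SQL too, with no export or copy step in between.
    spark.sql("SELECT Region, COUNT(*) AS customers FROM Customers GROUP BY Region").show()
    ```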

    19 min
  3. Breaking Power Pages Limits With VS Code Copilot


    You know that sinking feeling when your Power Pages form won’t validate and the error messages are about as useful as a ‘404 Brain Not Found’? That pain point is exactly what we’ll fix today. We’re covering five moves: streamlining Liquid templates, speeding JavaScript and form validation, getting plain-English code explanations, integrating HTML with Bootstrap for responsive layouts, and simplifying web API calls. One quick caveat—you’ll need VS Code with the Power Platform Tools extension, GitHub Copilot Chat, and your site content pulled down through the Power Platform CLI with Dataverse authentication. That setup makes Copilot context-aware. With that in place, Copilot stops lobbing random snippets. It gives contextual, iterative code that cuts down trial-and-error. I’ll show you the exact prompts so you can replicate results yourself. And since most pain starts with JavaScript, let’s roll into what happens when your form errors feel like a natural 1. When JavaScript Feels Like a Natural 1 JavaScript can turn what should be a straightforward form check into a disaster fast. One misplaced keystroke, and instead of stopping bad input, the whole flow collapses. That’s usually when you sit there staring at the screen, wondering how “banana” ever got past your carefully written validation logic. You know the drill: a form that looks harmless, a validator meant to filter nonsense, and a clever user typing the one thing you didn’t account for. Suddenly your console logs explode with complaints, and every VS Code tab feels like another dead end. The small errors hit the hardest—a missing semicolon, or a scope bug that makes sense in your head but plays out like poison damage when the code runs. These tiny slips show up in real deployments all the time, and they explain why broken validation is such a familiar ticket in web development. Normally, your approach is brute force. You tweak a line, refresh, get kicked back by another error, then repeat the cycle until something finally sticks. An evening evaporates, and the end result is often just a duct-taped script that runs—no elegance, no teaching moment. That’s why debugging validation feels like the classic “natural 1.” You’re rolling, but the outcome is stacked against you. Here’s where Copilot comes in. Generic Copilot suggestions sometimes help, but a lot of the time they look like random fragments pulled from a half-remembered quest log—useful in spirit, wrong in detail. That’s because plain Copilot doesn’t know the quirks of Power Pages. But add the @powerpages participant, and suddenly it’s not spitting boilerplate; it’s offering context-aware code shaped to fit your environment. Microsoft built it to handle Power Pages specifics, including Liquid templates and Dataverse bindings, which means the suggestions account for the features that usually trip you up. And it’s not just about generating snippets. The @powerpages integration can also explain Power Pages-specific constructs so you don’t just paste and pray—you actually understand why a script does what it does. That makes debugging less like wandering blindfolded and more like working alongside someone who already cleared the same dungeon. For example, you can literally type this prompt into Copilot Chat: “@powerpages write JavaScript code for form field validation to verify the phone field value is in the valid format.” That’s not just theory—that’s a reproducible, demo-ready input you’ll see later in this walkthrough. 
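    For a sense of what that prompt tends to return, here is a hedged, illustrative sketch of a phone-field validator. The form selector, the field ID telephone1, and the accepted pattern are all assumptions; adapt them to the column actually bound on your basic form, and treat this as a starting point rather than the exact output Copilot will generate.

    ```javascript
    // Illustrative sketch only: the field ID "telephone1" and the accepted pattern are
    // assumptions; adjust both to the field actually bound on your Power Pages form.
    $(document).ready(function () {
      $("form").on("submit", function (event) {
        var phoneInput = $("#telephone1");
        if (phoneInput.length === 0) { return; }          // field not on this form
        var phone = (phoneInput.val() || "").trim();
        // Optional leading +, then 7 to 15 digits allowing spaces or dashes in between.
        var validPhone = /^\+?\d[\d\s-]{6,14}$/;
        if (!validPhone.test(phone)) {
          event.preventDefault();                         // stop the submit
          alert("Please enter the phone number in a valid format, e.g. +1 425 555 0100.");
          phoneInput.focus();
        }
      });
    });
    ```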
The code that comes back isn’t a vague web snippet; it’s directly applicable and designed to compile in your Power Pages context. That predictability is the real shift. With generic Copilot, it feels like you’ve pulled in a bard who might strum the right chord, but half the time the tune has nothing to do with your current battle. With @powerpages, it’s closer to traveling with a ranger who already knows where the pitfalls are hiding. The quest becomes less about surviving traps and more about designing clear user experiences. The tool doesn’t replace your judgment—it sharpens it. You still decide what counts as valid input and how errors should guide the user. But instead of burning cycles on syntax bugs and boolean typos, you spend your effort making the workflow intuitive. Correctly handled, those validation steps stop being roadblocks and start being part of a smooth narrative for whoever’s using the form. It might not feel like a flashy win, but stopping the basic failures is what saves you from a flood of low-level tickets down the line. Once Copilot shoulders the grunt work of generating accurate validation code, your time shifts from survival mode to actually sharpening how the app behaves. That difference matters. Because when you see how well-targeted commands change the flow of code generation, you start wondering what else those commands can unlock. And that’s when the real advantage of using Copilot with Power Pages becomes clear. Rolling Advantage with Copilot Commands Rolling advantage here means knowing the right commands to throw into Copilot instead of hoping the dice land your way. That’s the real strength of using the @powerpages participant—it transforms Copilot Chat from a generic helper into a context-aware partner built for your Power Pages environment. Here’s how you invoke it. Inside VS Code, open the Copilot Chat pane, and then type your prompt with “@powerpages” at the front. That tag is what signals Copilot to load the Power Pages brain instead of the vanilla mode. You can ask for validators, Liquid snippets, even Dataverse-bound calls, and Copilot will shape its answers to fit the system you’re actually coding against. Now, before that works, you need the right loadout: Visual Studio Code installed, the Power Platform Tools extension, the GitHub Copilot Chat extension, and the Power Platform CLI authenticated against your Dataverse environment. The authentication step matters the most, because Copilot only understands your environment once you’ve actually pulled the site content into VS Code while logged in. Without that, it’s just guessing. And one governance caveat: some Copilot features for Power Pages are still in preview, and tenant admins control whether they’re enabled through the Copilot Hub and governance settings. Don’t be surprised if features demoed here are switched off in your org—that’s an admin toggle, not a bug. Here’s the difference once you’re set up. Regular Copilot is like asking a bard for battlefield advice: you’ll get a pleasant tune, maybe some broad commentary, but none of the detail you need when you’re dealing with Liquid templates or Dataverse entity fields. The @powerpages participant is closer to a ranger who’s already mapped the terrain. It’s not just code that compiles; it’s code that references the correct bindings, fits into form validators, and aligns with how Power Pages actually runs. One metaphor, one contrast, one payoff: usable context-aware output instead of fragile generic snippets. Let’s talk results. 
If you ask plain Copilot for a validation routine, you’ll probably get a script that works in a barebones HTML form. Drop it into Power Pages, though, and you’ll hit blind spots—no recognition of entity schema, no clue what Liquid tags are doing, and definitely no awareness of Dataverse rules. It runs like duct tape: sticky but unreliable. Throw the same request with @powerpages in the lead, and suddenly you’ve got validators that don’t just run—they bind to the right entity field references you actually need. Same request, context-adjusted output, no midnight patch session required. And this isn’t just about generating scripts. Commands like “@powerpages explain the following code {% include ‘Page Copy’ %}” give you plain-English walkthroughs of Liquid or Power Pages-specific constructs. You’re not copy-pasting blind; you’re actually building understanding. That’s a different kind of power—because you’re learning the runes while also casting them. The longer you work with these commands, the more your workflow shifts. Instead of patching errors alone at 2 AM, you’re treating Copilot like a second set of eyes that already knows what broke inside 90% of similar builds. The commands don’t give you cheat codes; they give you insight and working samples that respect the environment’s quirks. They save you from endless rework cycles. On a regular Copilot roll, you’ll land somewhere in the middle. Decent script, extra rework, some trial-and-error before it fits. On an @powerpages roll, it feels like a natural 20—validation scripts slot neatly into your Power Pages form, Liquid includes actually parse, Dataverse bindings don’t throw errors. It’s not luck; it’s what happens when Copilot knows the terrain you’re coding against. That advantage only gets you the first draft, though. A working snippet doesn’t mean a finished experience. Users type weird inputs, error messages need detail, and scripts have to evolve past the skeleton draft that Copilot hands you. And that’s where the next challenge begins. From Script Skeletons to Fully Armored Code From Script Skeletons to Fully Armored Code starts where most validation does—in bare bones. You get a first draft: a plain script that technically runs but blocks only the most obvious nonsense. Think of it as a level-one fighter holding a cardboard shield. It works for one or two weak hits, but the moment real users start swinging, the thing splinters. Anyone who has shipped a form knows the script never lasts long in its starter state. You guard against easy mistakes, but then a user pastes an emoji into a phone field or types “tomorrow” for their birthdate. Suddenly your defense collapses, tickets hit your inbox, and you’re left with brittle validati

    19 min
  4. SOC Team vs. Rogue Copilot: Who Wins?


    Copilot vs SOC team is basically Mortal Kombat with data. Copilot shouts “Finish Him!” by pulling up the files a user can already touch—but if those files were overshared or poorly labeled, sensitive info gets put in the spotlight. Fast, brutal, and technically “working as designed.” On the other side, your SOC team’s combos aren’t uppercuts, they’re DSPM dashboards, Purview policies, and Defender XDR hooks. The question isn’t if they can fight back—it’s who lands the fatality first. If you want these incident playbooks in your pocket, hit subscribe. Now, picture your first Copilot alert rolling onto the dashboard. When Your First AI Alert Feels Like a Glitch You log in for another shift, coffee still warm, and the SOC dashboard throws up something unfamiliar: “Copilot accessed a confidential financial file.” On the surface, it feels like a mistake. Maybe a noisy log blip. Except…it’s not malware, not phishing, not a Powershell one-liner hiding in the weeds. It’s AI—and your feeds now include an artificial coworker touching sensitive files. The first reaction is confusion. Did Copilot just perform its expected duty, or is someone abusing it as cover? Shrugging could mean missing actual data exfiltration. Overreacting could waste hours untangling an innocent document summary. Either way, analysts freeze because it doesn’t fit the kill-chain models they drilled on. It’s neither ransomware nor spam. It’s a new category. Picture a junior analyst already neck-deep in noisy spam campaigns and malicious attachments. Suddenly this alert lands in their queue: “Copilot touched a file.” There’s no playbook. Do you terminate the process? Escalate? Flag it as noise and move on? With no context, the team isn’t executing standard procedure—they’re rolling dice on something critical. That’s exactly why Purview Data Security Posture Management for AI exists. Instead of static logs, it provides centralized visibility across your data, users, and activities. When Copilot opens a file, you see how that intersects with your sensitive-data map. Did it enter a folder labeled “Finance”? Was a sharing policy triggered after? Did someone else gain access downstream? Suddenly, an ambiguous line becomes a traceable event. It’s no longer a blurry screenshot buried in the logs—it’s a guided view of where Copilot went and what it touched. [Pause here in delivery—let the audience imagine that mental mini-map.] Then resume: DSPM correlates sensitive-data locations, risky user activities, and likely exfiltration channels. It flags sequences like a sensitivity label being downgraded, followed by access or sharing, then recommends concrete DLP or Insider Risk rules to contain it. Instead of speculation, you’re handed practical moves. This doesn’t remove all uncertainty. But it reduces the blind spots. DSPM grounds each AI alert with added context—file sensitivity, label history, the identity requesting access. That shifts the question from “is this real?” to “what next action does this evidence justify?” And that’s the difference between guesswork and priority-driven investigation. Many security leaders admit there’s a maturity gap when it comes to unifying data security, governance, and AI. The concern isn’t just Copilot itself—it’s that alerts without context are ignored, giving cover for actual breaches. If the SOC tunes out noisy AI signals, dangerous incidents slip right past the fence. Oversight tools have to explain—not just announce—when Copilot interacts with critical information. 
So what looks like a glitch alert is really a test of whether your team has built the bridge between AI signals and traditional data security. With DSPM in place, that first confusing notification doesn’t trigger panic or dismissal. It transforms into a traceable sequence with evidence: here’s the data involved, here’s who requested it, here’s the timeline. Your playbook evolves from reactive coin-flipping to guided action. That’s the baseline challenge. But soon, things get less clean. Not every alert is about Copilot doing its normal job. Sometimes a human sets the stage, bending the rules so that AI flows toward places it was never supposed to touch. And that’s where the real fight begins. The Insider Who Rewrites the Rules A file stamped “Confidential” suddenly drops down to “Internal.” Minutes later, Copilot glides through it without resistance. On paper it looks like routine business—an AI assistant summarizing another document. But behind the curtain, someone just moved the goalposts. They didn’t need an exploit, just the ability to rewrite a label. That’s the insider playbook: change the sign on the door and let the system trust what it sees. The tactic is painfully simple. Strip the “this is sensitive” tag, then let Copilot do the summarizing, rewriting, or extracting. You walk away holding a neat package of insights that should have stayed locked, without ever cracking the files yourself. To the SOC, it looks mundane: approved AI activity, no noisy alerts, no red-flag network spikes. It’s business flow camouflaged as compliance. You’ve trained your defenses to focus on outside raiders—phishing, ransomware, brute-forcing. But insiders don’t need malware when they can bend the rules you asked everyone to trust. Downgraded labels become camouflage. That trick works—until DSPM and Insider Risk put the sequence under a spotlight. Here’s the vignette: an analyst wants a peek at quarterly budgets they shouldn’t access. Every AI query fails because the files are tagged “Confidential.” So they drop the label to “Internal,” rerun the prompt, and Copilot delivers the summary without complaint. No alarms blare. The analyst never opens the doc directly and slips under the DLP radar. On the raw logs, it looks as boring as a weather check. But stitched together, the sequence is clear: label change, followed by AI assist, followed by potential misuse. This is where Microsoft Purview DSPM makes a difference. It doesn’t just list Copilot requests; it ties those requests to the file’s label history. DSPM can detect sequences such as a label downgrade immediately followed by AI access, and flag that pairing as irregular. From there it can recommend remediation, or in higher-risk cases, escalate to Insider Risk Management. That context flips a suspicious shuffle from “background noise” into an alert-worthy chain of behavior. And you’re not limited to just watching. Purview’s DLP features let you create guardrails that block Copilot processing of labeled content altogether. If a file is tagged “Highly Confidential,” you can enforce label-based controls so the AI never even touches it. Copilot respects Purview’s sensitivity labels, which means the label itself becomes part of the defense layer. The moment someone tampers with it, you have an actionable trigger. There’s also a governance angle the insiders count on you overlooking. If your labeling system is overcomplicated, employees are more likely to mislabel or downgrade files by accident—or hide behind “confusion” when caught. 
Microsoft’s own guidance is to map file labels from parent containers, so a SharePoint library tagged “Confidential” passes that flag automatically to every new file inside. Combine that with a simplified taxonomy—no more than five parent labels with clear names like “Highly Confidential” or “Public”—and you reduce both honest mistakes and deliberate loopholes. Lock container defaults, and you stop documents from drifting into the wrong category. When you see it in practice, the value is obvious. Without DSPM correlations, SOC sees a harmless Copilot query. With DSPM, that same query lights up as part of a suspicious chain: label flip, AI access, risky outbound move. Suddenly, it’s not a bland log entry; it’s a storyline with intent. You can intervene while the insider still thinks they’re invisible. The key isn’t to treat AI as the villain. Copilot plays the pawn in these moves—doing what its access rules allow. The villain is the person shifting the board by altering labels and testing boundaries. By making label changes themselves a monitored event, you reveal intent, not just output. On a natural 20, your SOC doesn’t just react after the leak; it predicts the attempt. You can block the AI request tied to a label downgrade, or at the very least, annotate it for rapid investigation. That’s the upgrade—from shrugging at odd entries to cutting off insider abuse before data walks out the door. But label shenanigans aren’t the only kind of trick in play. Sometimes, what on the surface looks like ordinary Copilot activity—summarizing, syncing, collaborating—ends up chained to something very different. And separating genuine productivity from someone quietly laundering data is the next challenge. Copilot or Cover Story? A document sits quietly on SharePoint. Copilot pulls it, builds a neat summary, and then you see that same content synced into a personal OneDrive account. That sequence alone makes the SOC stop cold. Is it just an employee trying to be efficient, or someone staging exfiltration under AI’s cover? On the surface, both stories look the same: AI touched the file, output was generated, then data landed in a new location. That’s the judgment call SOC teams wrestle with. You can’t block every movement of data without choking productivity, but you can’t ignore it either. Copilot complicates this because it’s a dual actor—it can power real work or provide camouflage for theft. Think of it like a player mashing the same game dungeon. At first it looks like simple grinding, building XP. But when the loot starts flowing out of band, you realize it’s not practice—it’s a bug exploit. Same surface actions, different intent. Context is what revea
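    To ground that taxonomy guidance, here is a hedged sketch of what a small parent-label set could look like in Security & Compliance PowerShell. The label names mirror the episode's examples, the tooltips are placeholders, and you would still configure container defaults, auto-labeling, and the DLP rules that keep Copilot away from the top tier separately.

    ```powershell
    # Sketch of a simplified parent-label taxonomy; run from Security & Compliance PowerShell.
    # Names follow the episode's examples; tooltips and the policy scope are placeholders.
    Connect-IPPSSession

    New-Label -Name "Public"              -DisplayName "Public"              -Tooltip "Approved for external sharing"
    New-Label -Name "General"             -DisplayName "General"             -Tooltip "Everyday business content"
    New-Label -Name "Confidential"        -DisplayName "Confidential"        -Tooltip "Restricted to internal teams"
    New-Label -Name "Highly Confidential" -DisplayName "Highly Confidential" -Tooltip "Kept away from Copilot processing via DLP"

    # Publish the set so users (and container defaults) can actually apply it.
    New-LabelPolicy -Name "Core label policy" -Labels "Public","General","Confidential","Highly Confidential"
    ```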

    19 min
  5. R or T-SQL? One Button Changes Everything


    Here’s a story: a team trained a model, everything worked fine—until the dataset doubled. Suddenly, their R pipeline crawled for hours. The root cause wasn’t the algorithm at all. It was compute context. They were running in local compute, dragging every row across the network into memory. One switch to SQL compute context pushed the R script to run directly on the server, kept the data in place, and turned the crawl into a sprint. That’s the rule of thumb: if your dataset is large, prefer SQL compute context to avoid moving rows over the network. Try it yourself—run the same R script locally and then in SQL compute. Compare wall-clock time and watch your network traffic. You’ll see the difference. And once you understand that setting, the next question becomes obvious: where’s the real drag hiding when the data starts to flow? The Invisible Bottleneck What most people don’t notice at first is a hidden drag inside their workflow: the invisible bottleneck. It isn’t a bug in your model or a quirk in your code—it’s the way your compute context decides where the work happens. When you run in local compute context, R runs on your laptop. Every row from SQL Server has to travel across the network and squeeze through your machine’s memory. That transfer alone can strangle performance. Switch to SQL Server compute context, and the script executes inside the server itself, right next to the data. No shuffling rows across the wire, no bandwidth penalty—processing stays local to the engine built to handle it. A lot of people miss this because small test sets don’t show the pain. Ten thousand rows? Your laptop shrugs. Ten million rows? Now you’re lugging a library home page by page, wondering why the clock melted. The fix isn’t complex tuning or endless loop rewrites. It’s setting the compute context properly so the heavy lifting happens on the server that was designed for it. That doesn’t mean compute context is a magic cure-all. If your data sources live outside SQL Server, you’ll still need to plan ETL to bring them in first. SQL compute context only removes the transfer tax if the data is already inside SQL Server. Think of it this way: the server’s a fortress smithy; if you want the blacksmith to forge your weapon fast, you bring the ore to him rather than hauling each strike back and forth across town. This is why so many hours get wasted on what looks like “optimization.” Teams adjust algorithms, rework pipeline logic, and tweak parameters trying to speed things up. But if the rows themselves are making round trips over the network, no amount of clever code will win. You’re simply locked into bandwidth drag. Change the compute context, and the fight shifts in your favor before you even sharpen the code. Still, it’s worth remembering: not every crawl is caused by compute context. If performance stalls, check three things in order. First, confirm compute context—local versus SQL Server. Second, inspect your query shape—are you pulling the right columns and rows, or everything under the sun? Third, look at batch size, because how many rows you feed into R at a time can make or break throughput. That checklist saves you from wasting cycles on the wrong fix. Notice the theme: network trips are the real tax collector here. With local compute, you pay tolls on every row. With SQL compute, the toll booths vanish. And once you start running analysis where the data actually resides, your pipeline feels like it finally got unstuck from molasses. 
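    As a minimal sketch of that switch, here is what flipping the compute context looks like with RevoScaleR in SQL Server Machine Learning Services. The connection string is a placeholder, and numTasks is only a request; the server still decides what it actually grants.

    ```r
    # Minimal sketch (RevoScaleR on SQL Server Machine Learning Services).
    # The connection string is a placeholder; swap in your own server and credentials.
    library(RevoScaleR)

    connStr <- "Driver=SQL Server;Server=MYSERVER;Database=Sales;Trusted_Connection=True"

    # Local compute: rows travel over the network into this R session's memory.
    rxSetComputeContext(RxLocalSeq())

    # SQL Server compute: the same rx* calls now run inside the database engine,
    # next to the data, so nothing gets dragged across the wire.
    sqlCC <- RxInSqlServer(connectionString = connStr, numTasks = 4)
    rxSetComputeContext(sqlCC)

    # Wrap the same rxSummary() or rxLinMod() call in system.time() under each
    # context to reproduce the wall-clock comparison described above.
    ```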
But even with the right compute context, another dial lurks in the pipeline—how the rows are chunked and handed off. Leave that setting on default, and you can still find yourself feeding a beast one mouse at a time. That’s where the next performance lever comes in. Batch Size: Potion of Speed or Slowness Batch size is the next lever, and it behaves like a potion: dose it right and you gain speed, misjudge it and you stagger. In SQL Server, the batch size is controlled by the `rowsPerRead` parameter. By default, `rowsPerRead` is set to 50,000. That’s a safe middle ground, but once you start working with millions of rows, it often starves the process—like feeding a dragon one mouse at a time and wondering why it still looks hungry. Adjusting `rowsPerRead` changes how many rows SQL Server hands over to R in each batch. Too few, and R wastes time waiting for its next delivery. Too many, and the server may choke, running out of memory or paging to disk. The trick is to find the point where the flow into R keeps it busy without overwhelming the system. A practical way to approach this is simple: test in steps. Start with the default 50,000, then increase to 500,000, and if the server has plenty of memory, try one million. Each time, watch runtime and keep an eye on RAM usage. If you see memory paging, you’ve pushed too far. Roll back to the previous setting and call that your sweet spot. The actual number will vary based on your workload, but this test plan keeps you on safe ground. The shape of your data matters just as much as the row count. Wide tables—those with hundreds of columns—or those that include heavy text or blob fields are more demanding. In those cases, even if the row count looks small, the payload per row is huge. Rule of thumb: if your table is wide or includes large object columns, lower `rowsPerRead` to prevent paging. Narrow, numeric-only tables can usually handle much larger values before hitting trouble. Once tuned, the effect can be dramatic. Raising the batch size from 50,000 to 500,000 rows can cut wait times significantly because R spends its time processing instead of constantly pausing for the next shipment. Push past a million rows and you might get even faster results on the right hardware. The runtime difference feels closer to a network upgrade than a code tweak—even though the script itself hasn’t changed at all. A common mistake is ignoring `rowsPerRead` entirely and assuming the default is “good enough.” That choice often leads to pipelines that crawl during joins, aggregations, or transformations. The problem isn’t the SQL engine or the R code—it’s the constant interruption from feeding R too slowly. On the flip side, maxing out `rowsPerRead` without testing can be just as costly, because one oversized batch can tip memory over the edge and stall the process completely. That balance is why experimentation matters. Think of it as tuning a character build: one point too heavy on offense and you drop your defenses, one point too light and you can’t win the fight. Same here—batch size is a knob that lets you choose between throughput and resource safety, and only trial runs tell you where your system maxes out. The takeaway is clear: don’t treat `rowsPerRead` as a background setting. Use it as an active tool in your tuning kit. Small increments, careful monitoring, and attention to your dataset’s structure will get you to the best setting faster than guesswork ever will. 
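    Here is a small, hedged test harness for that stepped approach. The table, columns, and batch values are placeholders, and connStr is assumed to be the connection string from the previous sketch.

    ```r
    # Sketch: time the same summary at three batch sizes; watch memory before you commit.
    # Table and column names are placeholders; connStr comes from the earlier sketch.
    for (batch in c(50000, 500000, 1000000)) {
      orders <- RxSqlServerData(
        sqlQuery = "SELECT OrderId, Region, Amount FROM dbo.Orders",
        connectionString = connStr,
        rowsPerRead = batch
      )
      elapsed <- system.time(rxSummary(~ Amount, data = orders))[["elapsed"]]
      cat("rowsPerRead =", batch, "-> elapsed:", elapsed, "sec\n")
    }
    ```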
And while batch size can smooth how much work reaches R at once, it can’t make up for sloppy queries. If the SQL feeding the pipeline is inefficient, then even a well-tuned batch size will struggle. That’s why the next focus is on something even more decisive: how the query itself gets written and whether the engine can break it into parallel streams. The Query That Unlocks Parallel Worlds Writing SQL can feel like pulling levers in a control room. Use the wrong switch and everything crawls through one rusty conveyor. Use the right one and suddenly the machine splits work across multiple belts at once. Same table, same data, but the outcome is night and day. The real trick isn’t about raw compute—it’s whether your query hands the optimizer enough structure to break the task into parallel paths. SQL Server will parallelize happily—but only if the query plan gives it that chance. A naive “just point to the table” approach looks simple, but it often leaves the optimizer no option but a single-thread execution. That’s exactly what happens when you pass `table=` into `RxSqlServerData`. It pulls everything row by row, and parallelism rarely triggers. By contrast, defining `sqlQuery=` in `RxSqlServerData` with a well-shaped SELECT gives the database optimizer room to generate a parallel plan. One choice silently bottlenecks you; the other unlocks extra workers without touching your R code. You see the same theme with SELECT statements. “SELECT *” isn’t clever, it’s dead weight. Never SELECT *. Project only what you need, and toss the excess columns early. Columns that R can’t digest cleanly—like GUIDs, rowguids, or occasionally odd timestamp formats—should be dropped or cast in SQL itself, or wrapped in a view before you hand them to R. A lean query makes it easier for the optimizer to split tasks, and it keeps memory from being wasted on junk you’ll never use. Parallelism also extends beyond query shape into how you call R from SQL Server. There are two main dials here. If you’re running your own scripts through `sp_execute_external_script` and not using RevoScaleR functions, explicitly set `@parallel = 1`. That tells SQL it can attempt parallel processes on your behalf. But if you are using the RevoScaleR suite—the functions with the rx* prefix—then parallel work is managed automatically inside the SQL compute context, and you steer it with the `numTasks` parameter. Just remember: asking for 8 or 16 tasks doesn’t guarantee that many will spin up. SQL still honors the server’s MAXDOP and resource governance. You might request 16, but get 6 if that’s all the server is willing to hand out under current load. The lesson is simple: test both methods against your workload, and watch how the server responds. One smart diagnostic step is to check your query in Management Studio before ever running it with R. Execute it, right-
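    And for the non-RevoScaleR path mentioned above, this is roughly what requesting parallelism looks like from T-SQL. The table and column are placeholders, and a granted parallel plan still depends on MAXDOP and resource governance.

    ```sql
    -- Sketch: a hand-written R script (no rx* functions) asking SQL Server for a parallel plan.
    -- Table and column names are placeholders. @parallel = 1 suits row-independent work such
    -- as scoring, because each parallel stream's output is simply combined; a request is not
    -- a guarantee, since MAXDOP and Resource Governor still apply.
    EXECUTE sp_execute_external_script
        @language     = N'R',
        @script       = N'OutputDataSet <- data.frame(amount_thousands = InputDataSet$Amount / 1000);',
        @input_data_1 = N'SELECT Amount FROM dbo.Orders',   -- lean projection, never SELECT *
        @parallel     = 1
    WITH RESULT SETS ((amount_thousands FLOAT));
    ```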

    20 min
  6. CI/CD With Dev Containers: Flawless Victory Or Epic Fail?


    Imagine queuing up for raid night, but half your guild’s game clients are patched differently. That’s what building cloud projects feels like without Dev Containers—chaos, version drift, and way too many ‘works-on-my-machine’ tickets. If you work with Azure and teams, you care about one thing: consistent developer environments. Before we roll initiative on this boss fight, hit subscribe and toggle notifications so you’ve got advantage in every future run. In this session, you’ll see exactly how a devcontainer.json works, why Templates and Features stop drift, how pre-building images cuts startup lag, and how to share Git credentials safely inside containers. The real test—are Dev Containers in CI/CD your reliable path to synchronized builds, or do they sometimes roll a natural 1? Let’s start with what happens when your party can’t sync in the first place. When Your Party Can’t Sync When your squad drifts out of sync, it doesn’t take long before the fight collapses. Azure work feels the same when every engineer runs slightly different toolchains. What starts as a tiny nudge—a newer SQL client here, a lagging Node version there—snowballs until builds misfire and pipelines redline. The root cause is local installs. Everyone outfits their laptop with a personal stack of SDKs and CLIs, then crosses their fingers that nothing conflicts. It only barely works. CI builds splinter because one developer upgrades Node without updating the pipeline, or someone tests against a provider cached on their own workstation but not committed to source. These aren’t rare edge cases; the docs flag them as common drift patterns that containers eliminate. A shared image or pre‑built container means the version everyone pulls is identical, so the problem never spawns. Onboarding shows it most clearly. Drop a new hire into that mess and you’re handing them a crate of random tools with no map. They burn days installing runtimes, patching modules, and hunting missing dependencies before they can write a single line of useful code. That wasted time isn’t laziness—it’s the tax of unmanaged drift. Even when veterans dig in, invisible gaps pop up at the worst moments. Running mismatched CLIs is like casting spells with the wrong components—you don’t notice until combat starts. With Azure, that translates into missing Bicep compilers, outdated PowerShell modules, or an Azure CLI left to rot on last year’s build. Queries break, deployments hang, and the helpdesk gets another round of phantom tickets. The real‑world fallout isn’t hypothetical. The docs call out Git line‑ending mismatches between host and container, extension misfires on Alpine images, and dreaded SSH passphrase hangs. They’re not application bugs; they’re tool drift unraveling the party mid‑dungeon. This is where Dev Containers flatten the field. Instead of everyone stacking their own tower of runtimes, you publish one baseline. The devcontainer.json in the .devcontainer folder is the contract: it declares runtimes, extensions, mounts. That file keeps all laptops from turning into rogue instances. You don’t need to trust half‑remembered setup notes—everyone pulls the same container, launches VS Code inside it, and gets the same runtime, same extensions, same spelling of reality. It also kills the slow bleed of onboarding and failing CI. When your whole team spawns from the same image, no one wastes morning cycles copying config files or chasing arcane errors. Your build server gets the same gear loadout as your laptop. 
A junior engineer’s VM rolls with the same buffs as a senior’s workstation. Instead of firefighting mismatches, you focus on advancing the quest. The measurable payoff is speed and stability. Onboarding shrinks from days to hours. CI runs stop collapsing on trivial tool mismatches. Developers aren’t stuck interpreting mysterious error logs—they’re working against the same environment, every single time. Even experiments become safer: you can branch a devcontainer to test new tech without contaminating your base loadout. When you’re done, you roll back, and nothing leaks into your daily kit. So the core takeaway is simple: containers stop the desync before it wipes the group. Every player hits the dungeon on the same patch level, the buffs are aligned, and the tools behave consistently. That’s the baseline you need before any real strategy even matters. But synchronizing gear is just the first step. Once everyone’s in lockstep, the real advantage comes from how you shape that shared foundation—because no one wants to hand‑roll a wizard from scratch every time they log in. Templates as Pre-Built Classes In RPG terms, picking a class means you skip the grind of rolling stats from scratch and jump right into the fight with a kit that already works. That’s what Dev Container Templates do for your projects—they’re the pre-built classes of the dev world, baked with sane defaults and ready to run. Without them, you’re forcing every engineer to cobble their own sheet. One dev kludges together Docker basics, another scavenges an old runtime off the web, and somebody pastes in a dusty config file from a blog nobody checks anymore. Before writing a single piece of app code, you’ve already burned a day arguing what counts as “the environment.” Templates wipe out that thrash. In VS Code, you hit the Command Palette and choose “Dev Containers: Add Dev Container Configuration Files….” From there you pull from a public template index—what containers.dev calls the gallery. Select an Azure SQL Database template and VS Code auto-generates a .devcontainer folder with a devcontainer.json tuned for database work. Extensions, Docker setup, and baseline configs are already loaded. It’s the equivalent of spawning your spellcaster with starter gear and a couple of useful cantrips already slotted. Same deal with the .NET Aspire template. You can try duct taping runtimes across everyone’s laptops, or you can start projects with one standard template. The template lays down identical versions across dev machines, remote environments, and CI. Instead of builds diverging into chaos, you get consistency down to the patch level. Debugging doesn’t mean rerolling saves every five minutes, because every player is using the same rulebook. And it’s not just about the first spin-up. Templates continue to pay off daily. For Node in Azure, one template can define the interpreter, pull in the right package manager, and configure Docker integration so that every build comes container-ready. No scavenger hunt, no guesswork. Think of it like a class spec: you can swap one skill or weapon, but you aren’t forced to reinvent “what magic missile even does” every session. Onboarding is where it’s most obvious. With a proper template, adding a new engineer shifts from hours of patching runtimes and failed installs to minutes of opening VS Code and hitting “Reopen in Container.” As soon as the environment reloads, they’re running on the exact stack everyone else is using. 
Instead of tickets about missing CLIs or misaligned versions, they’re ready to commit before the coffee cools. Because templates live in repos, they evolve without chaos. When teams update a base runtime, fix a quirk, or add a handy extension, the change hits once and everyone inherits it. That’s like publishing an updated character guide—suddenly every paladin gets higher saves without each one browsing a patch note forum. Nothing is left to chance, and nobody gets stuck falling behind. Templates also scale with your team’s growth. Veteran engineers don’t waste time re-explaining local setup, and new hires don’t fight mystery configs. Everyone uses the same baseline loadout, the same devcontainer.json, the same reproducible outcome. In practice, that prevents drift from sneaking in and killing your pipeline later. The nutshell benefit: Templates transform setup from a dice roll into a repeatable contract. Every project starts on predictable ground, every laptop mirrors the same working environment, and your build server gets to play by the same rules. Templates give you stability at level one instead of praying for lucky rolls. But these base classes aren’t the whole story. Sometimes you want your kit tuned just a little tighter—an extra spell, a bonus artifact, the sort of upgrade that changes how your character performs. That’s when it’s time to talk about Features. Features: Loot Drops for Your Toolkit Features are the loot drops for your environment—modular upgrades that slot in without grind or guesswork. Clear the room, open the chest, and instead of a random rusty sword you get a tool that actually matters: Git, Terraform, Azure CLI, whatever your project needs. Technically speaking, a Feature is a self-contained install unit referenced under the "features" property in devcontainer.json and can be published as an OCI artifact (see containers.dev/features). That one line connects your container to a specific capability, and suddenly your characters all roll with the same buff. The ease is the point. Instead of writing long install scripts and baking them into every Dockerfile, you just call the Feature in your devcontainer.json and it drops into place. One example: you can reference ghcr.io/devcontainers/features/azure-cli:1 in the features section to install the Azure CLI. No scribbling apt-get commands, no worrying which engineer fat-fingered a version. It’s declarative, minimal, and consistent across every environment. Trying to work without Features means dragging your party through manual setup every time you need another dependency. Every container build turns into copy-paste scripting, apt-get loops, and the slow dread of waiting while installs grind. Worse, you still risk different versions sneaking in depending on base image or local ca
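In devcontainer.json terms, the Feature reference called out above is a single declarative line. A sketch, with the Terraform Feature added purely as a second illustration:

```jsonc
// Features as declarative add-ons; each line pins a tool for every environment.
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    // The Azure CLI Feature cited above
    "ghcr.io/devcontainers/features/azure-cli:1": {},
    // A second, purely illustrative Feature
    "ghcr.io/devcontainers/features/terraform:1": {}
  }
}
```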

    19 min
  7. You're Flying Blind Without Business Central Telemetry and how to fix it with Power BI

    -3 J

    You're Flying Blind Without Business Central Telemetry and how to fix it with Power BI

    Imagine rolling a D20 every morning just to see if Business Central will behave. No telemetry? That’s like rolling blindfolded. Quick side note—hit subscribe if you want regular, no‑nonsense admin walkthroughs like this one. It keeps you from wandering alone in the dungeon. Here’s the deal: I’ll show you how to connect the Power BI telemetry app to Azure Application Insights, and why one field—the Application ID—trips more admins than any boss fight. To run full live reports, you’ll need an Application Insights resource in Azure and a Power BI Pro license. Telemetry only captures behavior signals like sessions, errors, and performance—not customer invoice data. It’s privacy‑by‑design, meant for system health, not business secrets. Without it, you’re stumbling in the dark. So what happens when you try to run without that visibility? Think no mini‑map, no enemy markers, and no clear path forward. The Hidden Mini-Map: Why Telemetry Matters That’s where telemetry comes in—the hidden mini‑map you didn’t know you were missing. Business Central already emits the signals; you just need to surface them. With telemetry turned off, you aren’t choosing “less convenience.” You’re losing sight of how your environment actually behaves. Helpdesk tickets alone? That’s reaction mode. Users only raise their hand when something hurts, and by then it’s already failed. Telemetry keeps the loop tight. It shows performance shifts and error patterns as they build, not just when the roof caves in. Take deadlocks. By themselves, they’re quiet failures. Users rarely notice them until throughput explodes under load. In one real case, telemetry highlighted deadlocks tied directly to the Replication Counter update process. Enabling the “Skip Replication Counter Update” switch fixed it instantly. Without telemetry, you’d never connect those dots in time—you’d just watch payroll grind to a halt. That’s the real power: turning invisible pressure into visible patterns. The dashboards let you spot the slope of trouble before it hits the cliff. It’s the difference between scheduling a fix on Tuesday afternoon and watching your weekend vanish into emergency calls. And telemetry isn’t spying. It doesn’t capture who typed an invoice line or customer details. What it does capture are behavior signals—sessions starting and ending, login sources, SQL query durations, page views. Importantly, it covers both environment‑wide signals and per‑extension signals, so you aren’t locked into one dimension of visibility. It’s motion detection, not reading diaries. Of course, all that data goes into Azure Application Insights. That means one requirement: you need an Application Insights resource in your Azure subscription, and you need the proper permissions to read from it. Otherwise, the reports will come up blank and you’ll spend time “fixing” something that isn’t broken—it’s just gated by access. Compare that to raw error logs. They’re verbose, unreadable walls of text. Telemetry condenses that chaos onto dashboards where trends pop. Deadlocks line up on graphs. SQL lag shows up in comparison charts. Misbehaving extensions stand out. Instead of parsing twenty screens of stack traces, you just get a simple view of what’s wrong and when. That clarity changes your posture as an admin. With only logs, you’re reacting to pain reports after they land. With telemetry dashboards, you’re watching the health of the system live. You can spot spikes before they take down payroll. 
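For a sense of what those behavior signals look like once they land in Application Insights, here is a rough Kusto sketch, assuming telemetry is already flowing into the standard traces table; the seven-day window is arbitrary.

```kusto
// Rough KQL sketch: count Business Central telemetry signals by event ID over the last week.
// Assumes the environment is already emitting telemetry to this Application Insights resource.
traces
| where timestamp > ago(7d)
| extend eventId = tostring(customDimensions.eventId)
| summarize signals = count() by eventId
| order by signals desc
```

The Power BI apps below wrap queries like this into ready-made dashboards, so you rarely have to write Kusto by hand.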
You can measure “it’s slow” into actual metrics tied to contention, queries, or extensions. It arms both IT and dev teams with visibility users can’t articulate on their own. And here’s the kicker: Microsoft already provides the feed. Business Central can emit telemetry into Application Insights straight out of the box. The dashboards in Power BI are what turn that feed into something usable. By default, you only see demo samples. But once you connect with the right credentials, the map fills in with your real environment. That’s the unlock. It lets you stop working blind and start working ahead. Instead of asking, “why did it fail,” you start asking, “what trends do we need to solve before this becomes a failure.” Now the next question is which tool to use to actually view this stream. Microsoft gives you two apps—both free, both sitting in Power BI’s AppSource store. They look identical, but they aren’t. Choosing the wrong one means you’ll never see the data you need. Think of it as two treasure chests waiting on the floor—you want to know exactly which one to open. Picking Your Toolkit: The Two Power BI Apps When it comes to actually viewing telemetry in Power BI, Microsoft gives you not one but two apps, and knowing the difference matters. These are the “Dynamics 365 Business Central Usage” app and the “Dynamics 365 Business Central App Usage” app. Both are free from AppSource, both open source, and both come with prebuilt dashboards. But they serve very different roles, and if you install the wrong one for the question you’re asking, you’ll be staring at charts that don’t line up with reality. At first glance, they look nearly identical—same layout, same reports, same colors. That’s why so many admins spin them up, click around, and then wonder why they aren’t seeing the answers they expected. The split is simple once you know it. The “Usage” app is for environment telemetry. It covers cross-environment behaviors: logins, sessions, client types, performance across the whole system. Think of it like zooming out to see the whole town map in your game. The “App Usage” app, on the other hand, is tied to extension telemetry. It connects through app.json and focuses on one extension at a time—like checking a single character’s skill tree. Want to measure which custom app is throwing deadlocks or dragging queries? That’s what the “App Usage” app is for. To make it easier, Microsoft even provides direct install links. For the environment telemetry app, use aka.ms/bctelemetryreport. For the extension telemetry app, use aka.ms/bctelemetry-isv-app. Both links take you directly to AppSource where you can install them into your Power BI workspace. After install, don’t be surprised when you first open the app and see sample data. That’s baked in on purpose. Until you connect them to your actual telemetry, they load with demo numbers so you can preview the layouts. They also automatically create a Power BI workspace under the same name as the app, so don’t panic if you suddenly see a fresh workspace appear in your list. That’s normal. Now let’s talk capability, because both apps surface the same four report types—Usage, Errors, Performance, and Administration. Usage is your census ledger, capturing login counts, session timings, and client usage. Errors is the event list of failures, both user-triggered and system-level. Performance is where you spot long SQL durations or rising page load times before anyone raises a ticket. 
Administration logs environment events like extension installs, sandbox refreshes, and restarts—your system’s patch notes, all timestamped and organized. All of those reports are available in both apps, but the scope changes. In the environment “Usage” app, the reports describe patterns across your entire Business Central setup. You’ll see whether certain clients are more heavily used, if session counts spike at end-of-month, or where contention is hitting the system as a whole. In the extension “App Usage” app, those same reports zero in on telemetry tied strictly to that extension. Instead of studying every player in town, you’re watching how your wizard class performs when they cast spells. That focus is what lets you isolate a misbehaving customization without drowning in global stats. There is a cost gate here, though. While both apps are free to download, you can’t use them with live telemetry unless you have a Power BI Pro license. Without it, you’re limited to the static sample dashboards. With it, you get the real-time queries pulling from your Application Insights resource. That single license is the innkeeper’s fee—it’s what gets you from looking at mannequins to fighting your actual monsters. This is also why I flagged the subscription CTA earlier; if you’re planning to set this up, having Pro is not optional. So, the practical workflow ends up being straightforward. Use the Dynamics 365 Business Central Usage app when you need system-wide telemetry, the big picture of how your environment behaves. Use the Dynamics 365 Business Central App Usage app when you want to isolate one extension and judge its reliability or performance. They aren’t competing apps—you’ll want both. One shows you patterns across the whole campaign, the other reveals weaknesses or strengths in a single party member. With that toolkit installed, the next step is obvious. Right now the dashboards are running on sample skeletons. To bring them to life with your real environment, you need the key that unlocks Application Insights data. And that key comes in the form of a single code string—something buried in Azure that every admin has to hunt down before the charts mean anything at all. The Azure Portal Puzzle: Finding the Application ID The Azure portal is basically a maze. You expect neat hallways and labels, but what you get is blades, sections, and tabs that feel like they were designed to trip you up. Somewhere in that sprawl sits the one thing you need: the 36‑character Application ID inside Application Insights. Until you pull it out, Power BI won’t talk to your telemetry. Once you have it, the dashboards stop pretending and start reporting on your real system. Here’s the first rule: you won’t find that Application ID in Business Cen

    19 min
  8. Go Beyond the Demos—Make Copilot Do What You Need in Business Central

    -3 J

    Go Beyond the Demos—Make Copilot Do What You Need in Business Central

    Ever wish Business Central actually did the boring work for you? Like reconciling payments or drafting product text, instead of burying you in extra clicks and late-night Excel misery? That’s the promise of Copilot. And before you ask—yes, it’s built into Business Central online at no extra cost. Just don’t expect it to run on your on-prem install. Here’s the catch: most admins never look past the canned demos. Today we’ll strip it down and show you how to make Copilot work for *your* business. By the end, you’ll walk away with a survival checklist you can pressure-test in a sandbox. And it all starts with the hidden menu Microsoft barely talks about. The Secret Menu of Copilot Copilot’s real power isn’t in the flashy buttons you see on a customer card. The real trick is what Microsoft left sitting underneath. You’ll find those extension points inside the `System.AI` namespace — look for the Copilot Capability codeunit and related enums. That’s where the actual hooks for developers live. These aren’t random artifacts in the codebase. They’re built so you can define and register your own AI-powered features instead of waiting for Microsoft to sprinkle out a new demo every quarter. The menu most people interact with is just the surface. Type invoice data, get a neat summary, maybe draft a product description — fine. But those are demo scenarios to show “look, it works!” In reality, Business Central’s guts contain objects like Copilot Capability and Copilot Availability. In plain English: a Capability is the skill set you’re creating for Copilot. Availability tells the system when and where that skill should show up for end users. Together, that’s not just a menu of canned AI widgets — it’s a framework for making Copilot specific to your company. Here’s the kicker: most admins assume Copilot is fully locked down, like a shiny black box. They use what’s there, shrug, and move on. They never go looking for the extra controls. But at the developer level, you’ve got levers exposed. And yes, there’s a way for admins to actually see the results of what developers register. Head into the “Copilot & agent capabilities” page inside Business Central. Every capability you register shows up there. Admins can toggle them off one by one if something misbehaves. That connection — devs define it in AL, admins manage it in the UI — is the bridge that makes this more than just theory. Think of it less like a locked Apple device and more like a console with hidden debug commands. If all you ever do is click the main Copilot button, you’re leaving horsepower on the table. It’s like driving a Tesla and only ever inching forward in traffic. The “Ludicrous Mode” switch exists, but until you flip it, you’re just idling. Same thing here: the namespace objects are already in your tenant, but if you don’t know where to look, you’ll never use them. So what kind of horsepower are we talking about? The AI module inside Business Central gives you text completions, chat-like completions for workflow scenarios, and embeddings for semantic search. That means you can build a capability that, for example, drafts purchase orders based on your company’s patterns instead of Microsoft’s assumptions. It also means you can create assistants that talk in your company’s voice, not some sterilized HR memo. Quick note before anyone gets ideas: the preview “Chat with Copilot” feature you might have seen in Business Central isn’t extensible through this module. That chat is on its own path. 
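To put a shape on those hooks, here is a minimal AL sketch of a custom capability value; the object ID, extension name, and value are hypothetical placeholders.

```al
// Hypothetical sketch: add your own value to the Copilot Capability enum.
// Object ID, extension name, and value are placeholders; pick IDs from your own range.
enumextension 50100 "Contoso Copilot Capability" extends "Copilot Capability"
{
    value(50100; "Purchase Order Draft")
    {
        Caption = 'Purchase Order Draft';
    }
}
```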
What you *do* extend happens through the Capability model described here. Microsoft did a poor job of surfacing this in their marketing. Yes, it’s in the docs, but buried in a dry technical section few admins scroll through. But once you know these objects exist, the picture changes. Every finance quirk, every weird custom field, every messy approval workflow — all of it can be addressed with your own Copilot capability. Instead of waiting for Redmond to toss something down from on high, you can tailor the assistant directly to your environment. Of course, nothing this powerful comes without warning labels. These are sharp tools. Registering capabilities wrong can create conflicts, especially when Microsoft pushes updates. Do it badly, and suddenly the sandbox breaks, or worse, you block users in production. That’s why the Copilot & agent capabilities page matters: not only does it give admins visibility, it gives you a quick kill switch if your custom brain starts misbehaving. So the payoff here is simple: yes, there’s a secret menu under Copilot, yes, it’s in every tenant already, and yes, it turns Copilot from a demo toy into something useful. But knowing it exists is only step one. The real trick is registering those capabilities safely so you add firepower without burning your environment down — and that’s where we go next. Registering Without Burning Down Your Tenant Registering a Copilot capability isn’t some vague wizard trick. In plain AL terms, it means you create an `enumextension` for the `Copilot Capability` enum and then use an `Install` or `Upgrade` codeunit that calls `CopilotCapability.RegisterCapability`. That’s the handshake where you tell Business Central: “Here’s a new AI feature, treat it as part of the system.” Without that call, your extension might compile, but Copilot won’t even know the feature exists. Think of it as submitting HR paperwork: no record in the org chart, no desk, no email, no employee. Once you’ve got the basic definition in place, the next detail is scope and naming. Every capability you register lives in the same ecosystem Microsoft updates constantly. If you recycle a generic name or reserve a sloppy ID, you’re basically begging for a collision. Say you call it “Sales Helper” and tag it with a common enum value—then Microsoft ships a future update with a built-in capability in the same space. Suddenly the system doesn’t know which one to show, and your code is arguing with Redmond at runtime. The mitigation is boring but essential: pick unique names, assign your own enum values that don’t overlap with the common ranges, and version the whole extension deliberately. Add version numbers so you can track whether sandbox is on 1.2 while production’s still sitting at 1.0. And if something changes with the platform, your upgrade codeunits are the tool to carry the capability forward safely. Without those, you’re duct-taping new wiring into an old breaker box and hoping nothing bursts into flames. Now here’s where too many developers get casual. They throw the extension straight into production because “it’s just a capability.” That’s when your helpdesk lights up. The right path is simple: sandbox-only first. Break it, refactor, test it again, and only when it behaves do you move to prod. That controlled rollout reduces surprises. And this isn’t just about compiling code—it’s about governance. The Copilot & Agent capabilities page in Business Central doubles as your sanity check. 
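In code, that registration handshake looks roughly like this; the capability value, object ID, and URL are hypothetical, but the shape follows the Install codeunit plus RegisterCapability call described above.

```al
// Hypothetical sketch of the registration handshake; IDs, names, and the URL are placeholders.
codeunit 50101 "Contoso Register Capability"
{
    Subtype = Install;

    trigger OnInstallAppPerDatabase()
    var
        CopilotCapability: Codeunit "Copilot Capability";
        LearnMoreUrlTxt: Label 'https://contoso.example/copilot-po-draft', Locked = true;
    begin
        // Only register once; re-running install logic should not duplicate the entry
        if not CopilotCapability.IsCapabilityRegistered(Enum::"Copilot Capability"::"Purchase Order Draft") then
            CopilotCapability.RegisterCapability(
                Enum::"Copilot Capability"::"Purchase Order Draft",
                Enum::"Copilot Availability"::Preview,
                LearnMoreUrlTxt);
    end;
}
```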
If your capability doesn’t appear there after registration, you didn’t register it properly. That page reflects the system’s truth. Only after you’ve validated it there should you hand it off for admin review. And speaking of admins, flipping Copilot capabilities on or off, as well as configuring data movement, is something only admins with SUPER permissions or a Business Central admin role can do. Plan for that governance step ahead of time. A quick pro tip: when you register, use the optional `LearnMoreUrlTxt` parameter. That link shows up right there in the Copilot & Agent capabilities admin page. It’s not just a nice touch—it’s documentation baked in. Instead of making admins chase down a wiki link or bother you in Teams, they can click straight into the description of what the capability does and how to use it. Think of it as writing instructions on the actual light switch so the next person doesn’t flip the wrong one. Here’s a best-practice checklist that trims down the risks: 1) run everything in a sandbox before production, 2) pick unique enum values and avoid common ranges, 3) always use Install/Upgrade codeunits for clean paths forward, 4) attach that LearnMoreUrl so admins aren’t guessing later. Follow those four, and you’ll keep your tenant stable. Ignore them, and you’ll be restoring databases at three in the morning. The parking space metaphor still applies. Registering a capability is like officially reserving a spot for your new car. Fill out the right paperwork, it’s yours and everyone’s happy. Skip the process or park in the red zone, and now you’re blocking the fire lane and everyone’s angry. Registration is about carving out safe space for your feature so Business Central and Microsoft’s updates can coexist with it longer term. Bottom line: treat registration like production code, because that’s exactly what it is. Test in sandboxes, keep your scope unique, track your versions, and make your upgrade codeunits airtight. If something weird happens, the Copilot & Agent capabilities page plus your LearnMoreUrl is how admins will find, understand, and if needed, shut down the feature. Done right, registration sets you up for stability. Done sloppy, it sets you up for chaos. Once you’ve got that locked down, you’ll notice the capability itself is functional but generic. It answers, but without character. That’s like hiring someone brilliant who shows up mute at meetings. The next step is teaching your Copilot how to act—because if you don’t, it’ll sound less like a trusted assistant and more like a teenager rolling their eyes at your questions. Metaprompts: Teaching Your AI Manners That leads us straight into metaprompts—the part where you stop leaving Copilot adrift and start giving it rules of engagement. In Microsoft

    21 min
