M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation. Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit M365.show.

  1. Unlocking Power BI: The True Game Changer for Teams

    2 hrs ago

    Unlocking Power BI: The True Game Changer for Teams

    You ever feel like your data is scattered across 47 different dungeons, each guarded by a cranky boss? That’s most organizations today—everyone claims to be data-driven, but in practice, they’re just rolling saving throws against chaos. Here’s what you’ll get in this run: the key Power BI integrations already inside Microsoft 365, the roadmap feature that finally ends cross-department fights, and three concrete actions you can take to start wielding this tool where you already work. Power BI now integrates with apps like Teams, Excel, PowerPoint, Outlook, and SharePoint. That means your “legendary gear” is sitting inside the same backpack you open every day. Before we roll initiative, hit Subscribe to give yourself advantage later. So, with that gear in mind, let’s step into the dungeon and face the real boss: scattered data. The Boss Battle of Scattered Data Think of your organization’s data as treasure, but not the kind stored neatly in one vault. It’s scattered across different dungeons, guarded by mini-bosses, and half the time nobody remembers where the keys are. One knight drags around a chest of spreadsheets. A wizard defends a stash of dashboards. A ranger swears their version is the “real” truth. The loot exists, but the party wastes hours hauling it back to camp and comparing notes. That’s not synergy—it’s just running multiple raids to pick up one rusty sword. Many organizations pride themselves on being “data-driven,” but in practice, each department drives its own cart in a different direction. Finance clings to spreadsheets—structured but instantly outdated. Marketing lives in dashboards—fresh but missing half the context. Sales relies on CRM reports—clean, but never lining up with anyone else’s numbers. What should be one shared storyline turns into endless reconciliations, emails, and duplicated charts. On a natural 1, you end up with three “final” reports, each pointing at a different reality. Take a simple but painful example. Finance builds a quarterly projection filled with pivot tables and colorful headers. Sales presents leadership with a dashboard that tells another story. The numbers clash. Suddenly you’re in emergency mode: endless Teams threads, late-night edits, and that file inevitably renamed “FINAL-REVISION-7.” The truth isn’t gone—it’s just locked inside multiple vaults, and every attempt to compare versions feels like carrying water in a colander. The hours meant for decisions vanish in patching up divergent views of reality. Here’s the part that stings: the problem usually isn’t technology. The tools exist. The choke point is culture. Teams treat their data like personal loot instead of shared guild gear. And when that happens, silos form. Industry guidance shows plenty of companies already have the data—but not the unified systems or governance to put it to work. That’s why solutions like Microsoft Fabric and OneLake exist: to create one consistent data layer rather than a messy sprawl of disconnected vaults. The direct cost of fragmentation isn’t trivial. Every hour spent reconciling spreadsheets is an hour not spent on action. A launch slips because operations and marketing can’t agree on the numbers. Budget approvals stall because confidence in the data just isn’t there. By the time the “final” version appears, the window for decision-making has already closed. That’s XP lost—and opportunities abandoned. And remember, lack of governance is what fuels this cycle. When accuracy, consistency, and protection aren’t enforced, trust evaporates. 
That’s why governance tools—like the way Power BI and Microsoft Purview work together—are so critical. They keep the party aligned, so everyone isn’t second-guessing whether their spellbook pages even match. The bottom line? The villain here isn’t a shortage of reports. It’s the way departments toss their loot into silos and act like merging them is optional. That’s the boss fight: fragmentation disguised as normal business. And too often the raid wipes not because the boss is strong, but because the party can’t sync their cooldowns or agree on the map. So how do you stop reconciling and start deciding? Enter the weapon most players don’t realize is sitting in their backpack—the one forged directly into Microsoft 365. Power BI as the Legendary Weapon Power BI is the legendary weapon here—not sitting on a distant loot table, but integrating tightly with the Microsoft 365 world you already log into each day. That matters, because instead of treating analytics as something separate, you swing the same blade where the battles actually happen. Quick licensing reality check: some bundles like Microsoft 365 E5 include Power BI Pro, but many organizations still need separate Power BI licenses or Premium capacity if they want full access. It’s worth knowing before you plan the rollout. Think about the Microsoft 365 apps you already use—Teams, Excel, PowerPoint, Outlook, and SharePoint. Those aren’t just town squares anymore; they’re the maps where strategies form and choices get made. Embedding Power BI into those apps is a step-change. You’re not alt-tabbing for numbers; you’re seeing live reports in the same workspace where the rest of the conversation runs. It’s as if someone dropped a stocked weapon rack right next to the planning table. The common misstep is that teams still see Power BI as an optional side quest. They imagine it as a separate portal for data people, not a main slot item for everybody. That’s like holding a legendary sword in your bag but continuing to swing a stick in combat. The “separate tool” mindset keeps adoption low and turns quick wins into overhead. In practice, a lot of the friction comes from context switching—jumping out of Teams to load a dashboard somewhere else. Embedding directly in Teams, Outlook, or Excel cuts out that friction and ensures more people actually use the analytics at hand. Picture this: you’re in a Teams thread talking about last quarter’s sales. Instead of pasting a screenshot or digging for a file, you drop in a live Power BI report. Everyone sees the same dataset, filters it in real time, and continues the discussion without breaking flow. Move over to Excel and the theme repeats. You connect directly to a Power BI dataset, and your familiar rows and formulas now update from a live source instead of some frozen export. Same with Outlook—imagine opening an email summary that embeds an interactive visual instead of an attachment. And in SharePoint or PowerPoint, the reports become shared objects, not static pictures. Once you see it in daily use, the “why didn’t we have this before” moment hits hard. There’s a productivity kicker too. Analysts point out that context switching bleeds attention. Each app jump is a debuff that saps focus. Embed the report in flow, and you cancel the debuff. Adoption then becomes invisible—nobody’s “learning a new tool,” they’re just clicking the visuals in the workspace they already lived in. 
That design is why embedding reduces context-switch friction, which is one of the biggest adoption blockers when you’re trying to spread analytics beyond the BI team. And while embedding syncs the daily fight, don’t forget the larger battlefield. For organizations wrestling with massive data silos, Microsoft Fabric with its OneLake component extends what Power BI can do. Fabric creates the single data fabric that Power BI consumes, unifying structured, unstructured, and streaming data sources at enterprise scale. You need that if you’re aiming for true “one source of truth” instead of just prettier spreadsheets on top of fractured backends. Think of embedding as putting a weapon in each player’s hands, and Fabric as the forge that builds a single, consistent armory. What shifts once this weapon is actually equipped? Managers stop saying, “I’ll check the dashboard later.” They make calls in the same window where the evidence sits. Conversations shorten, decisions land faster, and “FINAL-REVISION-7” dies off quietly. Collaboration looks less like a patchwork of solo runs and more like a co-op squad progressing together. Next time someone asks for proof in a meeting, you’ve already got it live in the same frame—no detours required. On a natural 20, embedding Power BI inside Microsoft 365 apps doesn’t just give you crit-level charts, it changes the rhythm of your workflow. Data becomes part of the same loop as chat, email, docs, and presentations. And if you want to see just how much impact that has, stick around—because the next part isn’t about swords at all. It’s about the rare loot drops that come bundled with this integration, the three artifacts that actually alter how your guild moves through the map. The Legendary Loot: Three Game-Changing Features Here’s where things get interesting. Power BI in Microsoft 365 isn’t just about shaving a few clicks off your workflow—it comes with three features that feel like actual artifacts: the kind that change how the whole party operates. These aren’t gimmicks or consumables; they’re durable upgrades. The first is automatic surfacing of insights. Instead of building every query by hand, Power BI now uses AI features—like anomaly detection, Copilot-generated summaries, and suggested insights—to flag spikes, dips, or outliers as soon as you load a report. Think finance reviewing quarterly results: instead of stitching VLOOKUP chains and cross-checking old exports, the system highlights expense anomalies right away. The user doesn’t have to “magically” expect the platform to learn their patterns; they just benefit from built-in AI pointing out what’s worth attention. It’s like having a rogue at the table whispering, “trap ahead,” before you blunder into it. The second is deeper integratio
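
If you want to wire a report into a custom Teams tab or an internal portal rather than using the built-in Power BI tab, the usual pattern is to generate an embed token through the Power BI REST API. Below is a minimal TypeScript sketch of that call, not a full embedding solution: the workspace and report IDs are placeholders, and it assumes you already hold an Azure AD access token with Power BI scope.

```typescript
// Sketch: request an embed token for one report so it can be rendered in a
// custom Teams tab or web page. Workspace/report IDs are hypothetical; the
// AAD token is assumed to come from MSAL or similar with Power BI scope.
const workspaceId = "<workspace-guid>"; // placeholder
const reportId = "<report-guid>";       // placeholder
const aadToken = process.env.POWERBI_AAD_TOKEN!;

async function getEmbedToken(): Promise<string> {
  const url = `https://api.powerbi.com/v1.0/myorg/groups/${workspaceId}/reports/${reportId}/GenerateToken`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${aadToken}`,
      "Content-Type": "application/json",
    },
    // "View" is enough for read-only embedding; only request more if users must edit.
    body: JSON.stringify({ accessLevel: "View" }),
  });
  if (!res.ok) throw new Error(`GenerateToken failed: ${res.status} ${await res.text()}`);
  const { token } = await res.json();
  return token;
}

getEmbedToken().then((t) => console.log("Embed token acquired:", t.slice(0, 12), "..."));
```

The returned embed token, together with the report’s embed URL, is what the client-side embedding library consumes on the page — the point being that the report stays live and governed, instead of circulating as screenshots.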

    18 min.
  2. Survive Your First D365 API Call (Barely)

    14 hrs ago

    Survive Your First D365 API Call (Barely)

    Summary Making your first Dynamics 365 Finance & Operations API call often feels like walking through a minefield: misconfigured permissions, the wrong endpoints, and confusing errors can trip you up before you even start. In this episode, I break down the process step by step so you can get a working API call with less stress and fewer false starts. We’ll start with the essentials: registering your Azure AD app, requesting tokens, and calling OData endpoints for core entities like Customers, Vendors, and Invoices. From there, we’ll look at when you need to go beyond OData and use custom services, how to protect your endpoints with the right scopes, and the most common mistakes to avoid. You’ll hear not just the “happy path,” but also the lessons learned from failed attempts and the small details that make a big difference. By the end of this episode, you’ll have a clear mental map of how the D365 API landscape works, what to do first, and how to build integrations that can survive patches, audits, and real-world complexity. What You’ll Learn * How to authenticate with Azure AD and request a valid access token * The basics of calling OData endpoints for standard CRUD operations * When and why to use custom services instead of plain OData * Best practices for API security: least privilege, error handling, monitoring, and throttling * Common mistakes beginners make — and how to avoid them Guest No guest this time — just me, guiding you through the process. Full Transcript You’ve got D365 running, and management drops the classic: “Integrate it with that tool over there.” Sounds simple, right? Except misconfigured permissions create compliance headaches, and using the wrong entity can grind processes to a halt. That’s why today’s survival guide is blunt and step‑by‑step. Here’s the roadmap: one, how to authenticate with Azure AD and actually get a token. Two, how to query F&O data cleanly with OData endpoints. Three, when to lean on custom services—and how to guard them so they don’t blow up on you later. We’ll register an app, grab a token, make a call, and set guardrails you can defend to both your CISO and your sanity. Integration doesn’t need duct tape—it needs the right handshake. And that’s where we start. Meet the F&O API: The 'Secret Handshake' Meet the Finance and Operations API: the so‑called “secret handshake.” It isn’t black magic, and you don’t need to sacrifice a weekend to make it work. Think of it less like wizardry and more like knowing the right knock to get through the right door. The point is simple: F&O won’t let you crawl in through the windows, but it will let you through the official entrance if you know the rules. A lot of admins still imagine Finance and Operations as some fortress with thick walls and scary guards. Fine, sure—but the real story is simpler. Inside that fortress, Microsoft already built you a proper door: the REST API. It’s not a hidden side alley or a developer toy. It’s the documented, supported way in. Finance and Operations exposes business data through OData/REST endpoints—customers, vendors, invoices, purchase orders—the bread and butter of your ERP. That’s the integration path Microsoft wants you to take, and it’s the safest one you’ve got. Where do things go wrong? It usually happens when teams try to skip the API. You’ve seen it: production‑pointed SQL scripts hammered straight at the database, screen scraping tools chewing through UI clicks at robot speed, or shadow integrations that run without anyone in IT admitting they exist. 
Those shortcuts might get you quick results once or twice, but they’re fragile. They break the second Microsoft pushes a hotfix, and when they break, the fallout usually hits compliance, audit, or finance all at once. In contrast, the API endpoints give you a structured, predictable interface that stays supported through updates. Here’s the mindset shift: Microsoft didn’t build the F&O API as a “bonus” feature. This API is the playbook. If you call it, you’re supported, documented, and when issues come up, Microsoft support will help you. If you bypass it, you’re basically duct‑taping integrations together with no safety net. And when that duct tape peels off—as it always does—you’re left explaining missing transactions to your boss at month‑end close. Nobody wants that. Now, let’s get into what the API actually looks like. It’s RESTful, so you’ll be working with standard HTTP verbs: GET, POST, PATCH, DELETE. The structure underneath is OData, which basically means you’re querying structured endpoints in a consistent way. Every major business entity you care about—customers, vendors, invoices—has its shelf. You don’t rummage through piles of exports or scrape whatever the UI happens to show that day. You call “/Customers” and you get structured data back. Predictable. Repeatable. No surprises. Think of OData like a menu in a diner. It’s not about sneaking into the kitchen and stirring random pots. The menu lists every dish, the ingredients are standardized, and when you order “Invoice Lines,” you get exactly that—every single time. That consistency is what makes automation and integration even possible. You’re not gambling on screen layouts or guessing which Excel column still holds the vendor ID. You’re just asking the system the right way, and it answers the right way. But OData isn’t your only option. Sometimes, you need more than an entity list—you need business logic or steps that OData doesn’t expose directly. That’s where custom services come in. Developers can build X++‑based services for specialized workflows, and those services plug into the same API layer. Still supported, still documented, just designed for the custom side of your business process. And while we’re on options, there’s one more integration path you shouldn’t ignore: Dataverse dual‑write. If your world spans both the CRM side and F&O, dual‑write gives you near real‑time, two‑way sync between Dataverse tables and F&O data entities. It maps fields, supports initial sync, lets you pause/resume or catch up if you fall behind, and it even provides a central log so you know what synced and when. That’s a world away from shadow integrations, and it’s exactly why a lot of teams pick it to keep Customer Engagement and ERP data aligned without hand‑crafted hacks. So the takeaway is this: the API isn’t an optional side door. It’s the real entrance. Use it, and you build integrations that survive patches, audits, and real‑world use. Ignore it, and you’re back to fragile scripts and RPA workarounds that collapse when the wind changes. Microsoft gave you the handshake—now it’s on you to use it. All of that is neat—but none of it matters until you can prove who you are. On to tokens. Authentication Without Losing Your Sanity Authentication Without Losing Your Sanity. Let’s be real: nothing tests your patience faster than getting stonewalled by a token error that helpfully tells you “Access Denied”—and nothing else. 
You’ve triple‑checked your setup, sacrificed three cups of coffee to the troubleshooting gods, and still the API looks at you like, “Who are you again?” It’s brutal, but it’s also the most important step in the whole process. Without authentication, every other clever thing you try is just noise at a locked door. Here’s the plain truth: every single call into Finance and Operations has to be approved by Azure Active Directory through OAuth 2.0. No token, no entry. Tokens are short‑lived keys, and they’re built to keep random scripts, rogue apps, or bored interns from crashing into your ERP. That’s fantastic for security, but if you don’t have the setup right, it feels like yelling SQL queries through a window that doesn’t open. So how do you actually do this without going insane? Break it into three practical steps: * Register the app in Azure AD. This gives you a Client ID, and you’ll pair it with either a client secret or—much better—a certificate for production. That app registration becomes the official identity of your integration, so don’t skip documenting what it’s for. * Assign the minimum API permissions it needs. Don’t go full “God Mode” just because it’s easier. If your integration just needs Vendors and Purchase Orders, scope it exactly there. Least privilege isn’t a suggestion; it’s the only way to avoid waking up to compliance nightmares down the line. * Get admin consent, then request your token using the client credentials flow (for app‑only access) or delegated flow (if you need it tied to a user). Once Azure AD hands you that token, that’s your golden ticket—good for a short window of time. For production setups, do yourself a favor and avoid long‑lived client secrets. They’re like sticky notes with your ATM PIN on them: easy for now, dangerous long‑term. Instead, go with certificate‑based authentication or managed identities if you’re running inside Azure. One extra hour to configure it now saves you countless fire drills later. Now let’s talk common mistakes—because we’ve all seen them. Don’t over‑grant permissions in Azure. Too many admins slap on every permission they can find, thinking they’ll trim it back later. Spoiler: they never do. That’s how you get apps capable of erasing audit logs when all they needed was “read Customers.” Tokens are also short‑lived on purpose. If you don’t design for refresh and rotation, your integration will look great on day one and then fail spectacularly 24 hours later. Here’s the practical side. When you successfully fetch that OAuth token from Azure AD, you’re not done—you actually have to use it. Every API request you send to Finance and Operations has to include it in the header: Authorization: Bearer OData Endpoi
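
To make that "token plus header" handshake concrete, here is a minimal TypeScript sketch of the client credentials flow followed by a first OData read. The tenant ID, client ID, org URL, and the CustomersV3 entity name are placeholders and assumptions — entity and field names vary by environment and version, so check your own data entity list before leaning on this.

```typescript
// Sketch: app-only token request against Azure AD, then one OData call into a
// D365 Finance & Operations environment. All IDs/URLs below are placeholders.
const tenantId = "<tenant-guid>";
const clientId = "<app-registration-client-id>";
const clientSecret = process.env.D365_CLIENT_SECRET!; // prefer a certificate in production
const orgUrl = "https://yourorg.operations.dynamics.com";

async function getToken(): Promise<string> {
  const body = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
    scope: `${orgUrl}/.default`, // the F&O environment itself is the protected resource
  });
  const res = await fetch(`https://login.microsoftonline.com/${tenantId}/oauth2/v2.0/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body,
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  return (await res.json()).access_token;
}

async function listCustomers(): Promise<void> {
  const token = await getToken();
  // Every request carries the bearer token; $top and $select keep the payload small.
  const res = await fetch(
    `${orgUrl}/data/CustomersV3?$top=5&$select=CustomerAccount,OrganizationName`,
    { headers: { Authorization: `Bearer ${token}`, Accept: "application/json" } }
  );
  if (!res.ok) throw new Error(`OData call failed: ${res.status} ${await res.text()}`);
  const data = await res.json();
  console.log(data.value); // array of customer records
}

listCustomers().catch(console.error);
```

Swap the query for Vendors, Invoices, or your own custom entities and the pattern stays the same: one token request, one structured OData call, no duct tape.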

    17 min.
  3. Microsoft Fabric Explained: No Code, No Nonsense

    1 day ago

    Microsoft Fabric Explained: No Code, No Nonsense

    Summary Microsoft has a habit of renaming things in ways that make people scratch their heads — “Fabric,” “OneLake,” “Lakehouse,” “Warehouse,” etc. In this episode, I set out to cut through the naming noise and show what actually matters under the hood: how data storage, governance, and compute interact in Fabric, without assuming you’re an engineer. We dig into how OneLake works as the foundation, what distinguishes a Warehouse from a Lakehouse, why Microsoft chose Delta + Parquet as the storage engine, and how shortcuts, governance, and workspace structure help (or hurt) your implementation. This isn’t marketing fluff — it’s the real architecture that determines whether your organization’s data projects succeed or collapse into chaos. By the end, you’ll be thinking less “What is Fabric?” and more “How can we use Fabric smartly?” — with a sharper view of trade-offs, pitfalls, and strategies. What You’ll Learn * The difference between Warehouse and Lakehouse in Microsoft Fabric * How OneLake acts as the underlying storage fabric for all data workloads * Why Delta + Parquet matter — not just as buzzwords, but as core guarantees (ACID, versioning, schema) * How shortcuts let you reuse data without duplication — and the governance risks involved * Best practices for workspace design, permissions, and governance layers * What to watch out for in real deployments (e.g. role mismatches, inconsistent access paths) Full Transcript Here’s a fun corporate trick: Microsoft managed to confuse half the industry by slapping the word “house” on anything with a data label. But here’s what you’ll actually get out of the next few minutes: we’ll nail down what OneLake really is, when to use a Warehouse versus a Lakehouse, and why Delta and Parquet keep your data from turning into a swamp of CSVs. That’s three concrete takeaways in plain English. Want the one‑page cheat sheet? Subscribe to the M365.Show newsletter. Now, with the promise clear, let’s talk about Microsoft’s favorite game: naming roulette. Lakehouse vs Warehouse: Microsoft’s Naming Roulette When people first hear “Lakehouse” and “Warehouse,” it sounds like two flavors of the same thing. Same word ending, both live inside Fabric, so surely they’re interchangeable—except they’re not. The names are what trip teams up, because they hide the fact that these are different experiences built on the same storage foundation. Here’s the plain breakdown. A Warehouse is SQL-first. It expects structured tables, defined schemas, and clean data. It’s what you point dashboards at, what your BI team lives in, and what delivers fast query responses without surprises. A Lakehouse, meanwhile, is the more flexible workbench. You can dump in JSON logs, broken CSVs, or Parquet files from another pipeline and not break the system. It’s designed for engineers and data scientists who run Spark notebooks, machine learning jobs, or messy transformations. If you want a visual, skip the sitcom-length analogy: think of the Warehouse as a labeled pantry and the Lakehouse as a garage with the freezer tucked next to power tools. One is organized and efficient for everyday meals. The other has room for experiments, projects, and overflow. Both store food, but the vibe and workflow couldn’t be more different. Now, here’s the important part Microsoft’s marketing can blur: neither exists in its own silo. 
Both Lakehouses and Warehouses in Fabric store their tables in the open Delta Parquet format, both sit on top of OneLake, and both give you consistent access to the underlying files. What’s different is the experience you interact with. Think of Fabric not as separate buildings, but as two different rooms built on the same concrete slab, each furnished for a specific kind of work. From a user perspective, the divide is real. Analysts love Warehouses because they behave predictably with SQL and BI tools. They don’t want to crawl through raw web logs at 2 a.m.—they want structured tables with clean joins. Data engineers and scientists lean toward Lakehouses because they don’t want to spend weeks normalizing heaps of JSON just to answer “what’s trending in the logs.” They want Spark, Python, and flexibility. So the decision pattern boils down to this: use a Warehouse when you need SQL-driven, curated reporting; use a Lakehouse when you’re working with semi-structured data, Spark, and exploration-heavy workloads. That single sentence separates successful projects from the ones where teams shout across Slack because no one knows why the “dashboard” keeps choking on raw log files. And here’s the kicker—mixing up the two doesn’t just waste time, it creates political messes. If management assumes they’re interchangeable, analysts get saddled with raw exports they can’t process, while engineers waste hours building shadow tables that should’ve been Lakehouse assets from day one. The tools are designed to coexist, not to substitute for each other. So the bottom line: Warehouses serve reporting. Lakehouses serve engineering and exploration. Same OneLake underneath, same Delta Parquet files, different optimizations. Get that distinction wrong, and your project drags. Get it right, and both sides of the data team stop fighting long enough to deliver something useful to the business. And since this all hangs on the same shared layer, it raises the obvious question—what exactly is this OneLake that sits under everything? OneLake: The Data Lake You Already Own Picture this: you move into a new house, and surprise—there’s a giant underground pool already filled and ready to use. That’s what OneLake is in Fabric. You don’t install it, you don’t beg IT for storage accounts, and you definitely don’t file a ticket for provisioning. It’s automatically there. OneLake is created once per Fabric tenant, and every workspace, every Lakehouse, every Warehouse plugs into it by default. Under the hood, it actually runs on Azure Data Lake Storage Gen2, so it’s not some mystical new storage type—it’s Microsoft putting a SaaS layer on top of storage you probably already know. Before OneLake, each department built its own “lake” because why not—storage accounts were cheap, and everyone believed their copy was the single source of truth. Marketing had one. Finance had one. Data science spun one up in another region “for performance.” The result was a swamp of duplicate files, rogue pipelines, and zero coordination. It was SharePoint sprawl, except this time the mistakes showed up in your Azure bill. Teams burned budget maintaining five lakes that didn’t talk to each other, and analysts wasted nights reconciling “final_v2” tables that never matched. OneLake kills that off by default. Think of it as the single pool everyone has to share instead of each team digging muddy holes in their own backyards. Every object in Fabric—Lakehouses, Warehouses, Power BI datasets—lands in the same logical lake. 
That means no more excuses about Finance having its “own version” of the data. To make sharing easier, OneLake exposes a single file-system namespace that stretches across your entire tenant. Workspaces sit inside that namespace like folders, giving different groups their place to work without breaking discoverability. It even spans regions seamlessly, which is why shortcuts let you point at other sources without endless duplication. The small print: compute capacity is still regional and billed by assignment, so while your OneLake is global and logical, the engines you run on top of it are tied to regions and budgets. At its core, OneLake standardizes storage around Delta Parquet files. Translation: instead of ten competing formats where every engine has to spin its own copy, Fabric speaks one language. SQL queries, Spark notebooks, machine learning jobs, Power BI dashboards—they all hit the same tabular store. Columnar layout makes queries faster, transactional support makes updates safe, and that reduces the nightmare of CSV scripts crisscrossing like spaghetti. The structure is simple enough to explain to your boss in one diagram. At the very top you have your tenant—that’s the concrete slab the whole thing sits on. Inside the tenant are workspaces, like containers for departments, teams, or projects. Inside those workspaces live the actual data items: warehouses, lakehouses, datasets. It’s organized, predictable, and far less painful than juggling dozens of storage accounts and RBAC assignments across three regions. On top of this, Microsoft folds in governance as a default: Purview cataloging and sensitivity labeling are already wired in. That way, OneLake isn’t just raw storage, it also enforces discoverability, compliance, and policy from day one without you building it from scratch. If you’ve lived the old way, the benefits are obvious. You stop paying to store the same table six different times. You stop debugging brittle pipelines that exist purely to sync finance copies with marketing copies. You stop getting those 3 a.m. calls where someone insists version FINAL_v3.xlsx is “the right one,” only to learn HR already published FINAL_v4. OneLake consolidates that pain into a single source of truth. No heroic intern consolidating files. No pipeline graveyard clogging budgets. Just one layer, one copy, and all the engines wired to it. It’s not magic, though—it’s just pooled storage. And like any pool, if you don’t manage it, it can turn swampy real fast. OneLake gives you the centralized foundation, but it relies on the Delta format layer to keep data clean, consistent, and usable across different engines. That’s the real filter that turns OneLake into a lake worth swimming in. And that brings us to the next piece of the puzzle—the unglamorous technology that keeps that water clear in the first place. Delta and
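
Because OneLake speaks the ADLS Gen2 protocol, you can actually walk that tenant → workspace → item hierarchy with the ordinary Azure Storage SDK. A hedged TypeScript sketch follows, assuming a hypothetical "Sales" workspace containing an "Analytics" lakehouse and an identity that already has Fabric permissions:

```typescript
// Sketch: browsing OneLake through the standard ADLS Gen2 SDK. The endpoint is
// the real OneLake DFS endpoint; the workspace ("Sales") and lakehouse
// ("Analytics") names are made up for illustration.
import { DefaultAzureCredential } from "@azure/identity";
import { DataLakeServiceClient } from "@azure/storage-file-datalake";

async function listLakehouseTables(): Promise<void> {
  const credential = new DefaultAzureCredential();
  const service = new DataLakeServiceClient("https://onelake.dfs.fabric.microsoft.com", credential);

  // One OneLake per tenant; each Fabric workspace shows up as a file system.
  const workspace = service.getFileSystemClient("Sales");

  // Items are folders inside the workspace: "<item>.Lakehouse/Tables" holds the Delta tables.
  for await (const entry of workspace.listPaths({ path: "Analytics.Lakehouse/Tables", recursive: false })) {
    console.log(entry.isDirectory ? `[table] ${entry.name}` : entry.name);
  }
}

listLakehouseTables().catch(console.error);
```

Nothing here was provisioned by hand — the storage, the namespace, and the Delta folders all come with the tenant, which is exactly the "pool you already own" point.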

    19 min.
  4. Breaking Power Pages Limits With VS Code Copilot

    1 day ago

    Breaking Power Pages Limits With VS Code Copilot

    Summary You know that sinking feeling when your Power Pages form refuses to validate and the error messages feel basically useless? That’s exactly the pain we’re tackling in this episode. I walk you through how to use VS Code + GitHub Copilot (with the @powerpages context) to push past Power Pages limits, simplify validation, and make your developer life smoother. We’ll cover five core moves: streamlining Liquid templates, improving JavaScript/form validation, getting plain-English explanations for tricky code, integrating HTML/Bootstrap for responsive layouts, and simplifying web API calls. I’ll also share the exact prompts and setup you need so that Copilot becomes context aware of your Power Pages environment. If you’ve ever felt stuck debugging form behavior, messing up Liquid includes, or coping with cryptic errors, this episode is for you. By the end, you’ll have concrete strategies (and sample prompts) to make Copilot your partner — reducing trial-and-error and making your Power Pages code cleaner, faster, and more maintainable. What You’ll Learn * How to set up VS Code + Power Platform Tools + Copilot Chat in a context-aware way for Power Pages * How the @powerpages prompt tag makes Copilot suggestions smarter and tailored * Techniques for form validation with JavaScript & Copilot (that avoid guesswork) * How to cleanly integrate Liquid templates + HTML + Bootstrap in Power Pages * Strategies to simplify web API calls in the context of Power Pages * Debugging tactics: using Copilot to explain code, refine error messages, and evolve scripts beyond first drafts Full Transcript You know that sinking feeling when your Power Pages form won’t validate and the error messages are about as useful as a ‘404 Brain Not Found’? That pain point is exactly what we’ll fix today. We’re covering five moves: streamlining Liquid templates, speeding JavaScript and form validation, getting plain-English code explanations, integrating HTML with Bootstrap for responsive layouts, and simplifying web API calls. One quick caveat—you’ll need VS Code with the Power Platform Tools extension, GitHub Copilot Chat, and your site content pulled down through the Power Platform CLI with Dataverse authentication. That setup makes Copilot context-aware. With that in place, Copilot stops lobbing random snippets. It gives contextual, iterative code that cuts down trial-and-error. I’ll show you the exact prompts so you can replicate results yourself. And since most pain starts with JavaScript, let’s roll into what happens when your form errors feel like a natural 1. When JavaScript Feels Like a Natural 1 JavaScript can turn what should be a straightforward form check into a disaster fast. One misplaced keystroke, and instead of stopping bad input, the whole flow collapses. That’s usually when you sit there staring at the screen, wondering how “banana” ever got past your carefully written validation logic. You know the drill: a form that looks harmless, a validator meant to filter nonsense, and a clever user typing the one thing you didn’t account for. Suddenly your console logs explode with complaints, and every VS Code tab feels like another dead end. The small errors hit the hardest—a missing semicolon, or a scope bug that makes sense in your head but plays out like poison damage when the code runs. These tiny slips show up in real deployments all the time, and they explain why broken validation is such a familiar ticket in web development. Normally, your approach is brute force. 
You tweak a line, refresh, get kicked back by another error, then repeat the cycle until something finally sticks. An evening evaporates, and the end result is often just a duct-taped script that runs—no elegance, no teaching moment. That’s why debugging validation feels like the classic “natural 1.” You’re rolling, but the outcome is stacked against you. Here’s where Copilot comes in. Generic Copilot suggestions sometimes help, but a lot of the time they look like random fragments pulled from a half-remembered quest log—useful in spirit, wrong in detail. That’s because plain Copilot doesn’t know the quirks of Power Pages. But add the @powerpages participant, and suddenly it’s not spitting boilerplate; it’s offering context-aware code shaped to fit your environment. Microsoft built it to handle Power Pages specifics, including Liquid templates and Dataverse bindings, which means the suggestions account for the features that usually trip you up. And it’s not just about generating snippets. The @powerpages integration can also explain Power Pages-specific constructs so you don’t just paste and pray—you actually understand why a script does what it does. That makes debugging less like wandering blindfolded and more like working alongside someone who already cleared the same dungeon. For example, you can literally type this prompt into Copilot Chat: “@powerpages write JavaScript code for form field validation to verify the phone field value is in the valid format.” That’s not just theory—that’s a reproducible, demo-ready input you’ll see later in this walkthrough. The code that comes back isn’t a vague web snippet; it’s directly applicable and designed to compile in your Power Pages context. That predictability is the real shift. With generic Copilot, it feels like you’ve pulled in a bard who might strum the right chord, but half the time the tune has nothing to do with your current battle. With @powerpages, it’s closer to traveling with a ranger who already knows where the pitfalls are hiding. The quest becomes less about surviving traps and more about designing clear user experiences. The tool doesn’t replace your judgment—it sharpens it. You still decide what counts as valid input and how errors should guide the user. But instead of burning cycles on syntax bugs and boolean typos, you spend your effort making the workflow intuitive. Correctly handled, those validation steps stop being roadblocks and start being part of a smooth narrative for whoever’s using the form. It might not feel like a flashy win, but stopping the basic failures is what saves you from a flood of low-level tickets down the line. Once Copilot shoulders the grunt work of generating accurate validation code, your time shifts from survival mode to actually sharpening how the app behaves. That difference matters. Because when you see how well-targeted commands change the flow of code generation, you start wondering what else those commands can unlock. And that’s when the real advantage of using Copilot with Power Pages becomes clear. Rolling Advantage with Copilot Commands Rolling advantage here means knowing the right commands to throw into Copilot instead of hoping the dice land your way. That’s the real strength of using the @powerpages participant—it transforms Copilot Chat from a generic helper into a context-aware partner built for your Power Pages environment. Here’s how you invoke it. Inside VS Code, open the Copilot Chat pane, and then type your prompt with “@powerpages” at the front. 
That tag is what signals Copilot to load the Power Pages brain instead of the vanilla mode. You can ask for validators, Liquid snippets, even Dataverse-bound calls, and Copilot will shape its answers to fit the system you’re actually coding against. Now, before that works, you need the right loadout: Visual Studio Code installed, the Power Platform Tools extension, the GitHub Copilot Chat extension, and the Power Platform CLI authenticated against your Dataverse environment. The authentication step matters the most, because Copilot only understands your environment once you’ve actually pulled the site content into VS Code while logged in. Without that, it’s just guessing. And one governance caveat: some Copilot features for Power Pages are still in preview, and tenant admins control whether they’re enabled through the Copilot Hub and governance settings. Don’t be surprised if features demoed here are switched off in your org—that’s an admin toggle, not a bug. Here’s the difference once you’re set up. Regular Copilot is like asking a bard for battlefield advice: you’ll get a pleasant tune, maybe some broad commentary, but none of the detail you need when you’re dealing with Liquid templates or Dataverse entity fields. The @powerpages participant is closer to a ranger who’s already mapped the terrain. It’s not just code that compiles; it’s code that references the correct bindings, fits into form validators, and aligns with how Power Pages actually runs. One metaphor, one contrast, one payoff: usable context-aware output instead of fragile generic snippets. Let’s talk results. If you ask plain Copilot for a validation routine, you’ll probably get a script that works in a barebones HTML form. Drop it into Power Pages, though, and you’ll hit blind spots—no recognition of entity schema, no clue what Liquid tags are doing, and definitely no awareness of Dataverse rules. It runs like duct tape: sticky but unreliable. Throw the same request with @powerpages in the lead, and suddenly you’ve got validators that don’t just run—they bind to the right entity field references you actually need. Same request, context-adjusted output, no midnight patch session required. And this isn’t just about generating scripts. Commands like “@powerpages explain the following code {% include ‘Page Copy’ %}” give you plain-English walkthroughs of Liquid or Power Pages-specific constructs. You’re not copy-pasting blind; you’re actually building understanding. That’s a different kind of power—because you’re learning the runes while also casting them. The longer you work with these commands, the more your workflow shifts. Instead of patching errors alone at 2 AM, you’re treating Copilot like a second set of eyes that already kno
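
For reference, the output of that phone-validation prompt tends to look something like the TypeScript sketch below. It is not the exact code Copilot returns — the field id ("telephone1") and the submit-event wiring are assumptions you would adjust to your own form markup.

```typescript
// Sketch: the kind of validator a "@powerpages ... verify the phone field value
// is in the valid format" prompt might produce. Field id and error wording are
// hypothetical; adapt them to the actual form on your page.
const PHONE_PATTERN = /^\+?[0-9\s\-()]{7,15}$/; // loose international format

function validatePhoneField(form: HTMLFormElement): boolean {
  const input = form.querySelector<HTMLInputElement>("#telephone1");
  if (!input) return true; // field not present on this form step; nothing to check

  const valid = PHONE_PATTERN.test(input.value.trim());
  // Surface the problem inline instead of letting the submit fail silently.
  input.setCustomValidity(valid ? "" : "Enter a phone number such as +1 425 555 0100.");
  input.reportValidity();
  return valid;
}

document.querySelector<HTMLFormElement>("form")?.addEventListener("submit", (event) => {
  const form = event.currentTarget as HTMLFormElement;
  if (!validatePhoneField(form)) {
    event.preventDefault(); // block submission until the value passes the pattern
  }
});
```

The value of the context-aware version is less about the regex and more about Copilot binding the validator to the right field and explaining why it does what it does.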

    19 min.
  5. SOC Team vs. Rogue Copilot: Who Wins?

    2 days ago

    SOC Team vs. Rogue Copilot: Who Wins?

    Summary Imagine your security operations center (SOC) waking up to an alert: “Copilot accessed a confidential file.” It’s not a phishing email, not malware, not a brute force attack — it’s AI doing what it’s designed to do, but in your data space. In this episode, I explore that tense battleground: can your SOC team keep up with or contain a rogue (or overly ambitious) Copilot? We unpack how Copilot’s design allows it to surface files that a user can access — which means if permissions are too loose, data leaks happen by “design.” On the flip side, the SOC team’s tools (DSPM, alerting, policies) are built around more traditional threat models. I interrogate where the gaps are, what alerts are no longer enough, and how AI changes the rules of engagement in security. By episode end, you’ll see how your security playbooks must evolve. It’s no longer just about detecting attacks — it’s about understanding AI’s behaviors, interpreting intent, and building bridges between signal and policy before damage happens. What You’ll Learn * Why a Copilot “access” alert is different from a normal threat indicator * How overshared files and lax labeling amplify risk when AI tools are involved * The role of Data Security Posture Management (DSPM) in giving context to AI alerts * How traditional SOC tools (XDR, policies, dashboards) succeed or fail in this new paradigm * Key questions your team must answer when an AI “incident” appears: was it malicious? Overreach? Justifiable? * Strategies for evolving your SOC: better labeling, tighter permissions, AI-aware alerting Full Transcript Copilot vs SOC team is basically Mortal Kombat with data. Copilot shouts “Finish Him!” by pulling up the files a user can already touch—but if those files were overshared or poorly labeled, sensitive info gets put in the spotlight. Fast, brutal, and technically “working as designed.” On the other side, your SOC team’s combos aren’t uppercuts, they’re DSPM dashboards, Purview policies, and Defender XDR hooks. The question isn’t if they can fight back—it’s who lands the fatality first. If you want these incident playbooks in your pocket, hit subscribe. Now, picture your first Copilot alert rolling onto the dashboard. When Your First AI Alert Feels Like a Glitch You log in for another shift, coffee still warm, and the SOC dashboard throws up something unfamiliar: “Copilot accessed a confidential financial file.” On the surface, it feels like a mistake. Maybe a noisy log blip. Except…it’s not malware, not phishing, not a Powershell one-liner hiding in the weeds. It’s AI—and your feeds now include an artificial coworker touching sensitive files. The first reaction is confusion. Did Copilot just perform its expected duty, or is someone abusing it as cover? Shrugging could mean missing actual data exfiltration. Overreacting could waste hours untangling an innocent document summary. Either way, analysts freeze because it doesn’t fit the kill-chain models they drilled on. It’s neither ransomware nor spam. It’s a new category. Picture a junior analyst already neck-deep in noisy spam campaigns and malicious attachments. Suddenly this alert lands in their queue: “Copilot touched a file.” There’s no playbook. Do you terminate the process? Escalate? Flag it as noise and move on? With no context, the team isn’t executing standard procedure—they’re rolling dice on something critical. That’s exactly why Purview Data Security Posture Management for AI exists. 
Instead of static logs, it provides centralized visibility across your data, users, and activities. When Copilot opens a file, you see how that intersects with your sensitive-data map. Did it enter a folder labeled “Finance”? Was a sharing policy triggered after? Did someone else gain access downstream? Suddenly, an ambiguous line becomes a traceable event. It’s no longer a blurry screenshot buried in the logs—it’s a guided view of where Copilot went and what it touched. [Pause here in delivery—let the audience imagine that mental mini-map.] Then resume: DSPM correlates sensitive-data locations, risky user activities, and likely exfiltration channels. It flags sequences like a sensitivity label being downgraded, followed by access or sharing, then recommends concrete DLP or Insider Risk rules to contain it. Instead of speculation, you’re handed practical moves. This doesn’t remove all uncertainty. But it reduces the blind spots. DSPM grounds each AI alert with added context—file sensitivity, label history, the identity requesting access. That shifts the question from “is this real?” to “what next action does this evidence justify?” And that’s the difference between guesswork and priority-driven investigation. Many security leaders admit there’s a maturity gap when it comes to unifying data security, governance, and AI. The concern isn’t just Copilot itself—it’s that alerts without context are ignored, giving cover for actual breaches. If the SOC tunes out noisy AI signals, dangerous incidents slip right past the fence. Oversight tools have to explain—not just announce—when Copilot interacts with critical information. So what looks like a glitch alert is really a test of whether your team has built the bridge between AI signals and traditional data security. With DSPM in place, that first confusing notification doesn’t trigger panic or dismissal. It transforms into a traceable sequence with evidence: here’s the data involved, here’s who requested it, here’s the timeline. Your playbook evolves from reactive coin-flipping to guided action. That’s the baseline challenge. But soon, things get less clean. Not every alert is about Copilot doing its normal job. Sometimes a human sets the stage, bending the rules so that AI flows toward places it was never supposed to touch. And that’s where the real fight begins. The Insider Who Rewrites the Rules A file stamped “Confidential” suddenly drops down to “Internal.” Minutes later, Copilot glides through it without resistance. On paper it looks like routine business—an AI assistant summarizing another document. But behind the curtain, someone just moved the goalposts. They didn’t need an exploit, just the ability to rewrite a label. That’s the insider playbook: change the sign on the door and let the system trust what it sees. The tactic is painfully simple. Strip the “this is sensitive” tag, then let Copilot do the summarizing, rewriting, or extracting. You walk away holding a neat package of insights that should have stayed locked, without ever cracking the files yourself. To the SOC, it looks mundane: approved AI activity, no noisy alerts, no red-flag network spikes. It’s business flow camouflaged as compliance. You’ve trained your defenses to focus on outside raiders—phishing, ransomware, brute-forcing. But insiders don’t need malware when they can bend the rules you asked everyone to trust. Downgraded labels become camouflage. That trick works—until DSPM and Insider Risk put the sequence under a spotlight. 
Here’s the vignette: an analyst wants a peek at quarterly budgets they shouldn’t access. Every AI query fails because the files are tagged “Confidential.” So they drop the label to “Internal,” rerun the prompt, and Copilot delivers the summary without complaint. No alarms blare. The analyst never opens the doc directly and slips under the DLP radar. On the raw logs, it looks as boring as a weather check. But stitched together, the sequence is clear: label change, followed by AI assist, followed by potential misuse. This is where Microsoft Purview DSPM makes a difference. It doesn’t just list Copilot requests; it ties those requests to the file’s label history. DSPM can detect sequences such as a label downgrade immediately followed by AI access, and flag that pairing as irregular. From there it can recommend remediation, or in higher-risk cases, escalate to Insider Risk Management. That context flips a suspicious shuffle from “background noise” into an alert-worthy chain of behavior. And you’re not limited to just watching. Purview’s DLP features let you create guardrails that block Copilot processing of labeled content altogether. If a file is tagged “Highly Confidential,” you can enforce label-based controls so the AI never even touches it. Copilot respects Purview’s sensitivity labels, which means the label itself becomes part of the defense layer. The moment someone tampers with it, you have an actionable trigger. There’s also a governance angle the insiders count on you overlooking. If your labeling system is overcomplicated, employees are more likely to mislabel or downgrade files by accident—or hide behind “confusion” when caught. Microsoft’s own guidance is to map file labels from parent containers, so a SharePoint library tagged “Confidential” passes that flag automatically to every new file inside. Combine that with a simplified taxonomy—no more than five parent labels with clear names like “Highly Confidential” or “Public”—and you reduce both honest mistakes and deliberate loopholes. Lock container defaults, and you stop documents from drifting into the wrong category. When you see it in practice, the value is obvious. Without DSPM correlations, SOC sees a harmless Copilot query. With DSPM, that same query lights up as part of a suspicious chain: label flip, AI access, risky outbound move. Suddenly, it’s not a bland log entry; it’s a storyline with intent. You can intervene while the insider still thinks they’re invisible. The key isn’t to treat AI as the villain. Copilot plays the pawn in these moves—doing what its access rules allow. The villain is the person shifting the board by altering labels and testing boundaries. By making label changes themselves a monitored event, you reveal intent, not

    19 min.
  6. R or T-SQL? One Button Changes Everything

    2 days ago

    R or T-SQL? One Button Changes Everything

    Summary Here’s a story: a team trained a model, and everything worked fine — until their dataset doubled. Suddenly, their R pipeline crawled to a halt. The culprit? Compute context. By default they were running R in local compute, which meant every row had to cross the network. But when they switched to SQL compute context, the same job ran inside the server, next to the data, and performance transformed overnight. In this episode, we pull back the curtain on what’s really causing slowdowns in data workflows. It’s rarely the algorithm. Most often, it’s where the work is being executed, how data moves (or doesn’t), and how queries are structured. We talk through how to choose compute context, how to tune batch sizes wisely, how to shape your SQL queries for parallelism, and how to offload transformations so R can focus on modeling. By the end, you’ll have a set of mental tools to spot when your pipeline is bogged down by context or query design — and how to flip the switch so your data flows fast again. What You’ll Learn * The difference between local compute context and SQL compute context, and how context impacts performance * Why moving data across the network is often the real bottleneck (not your R code) * How to tune rowsPerRead (batch size) for throughput without overloading memory * How the shape of your SQL query determines whether SQL Server can parallelize work * Strategies for pushing transformations and type casting into SQL before handing over to R * Why defining categories (colInfo) upfront can save massive overhead in R Full Transcript Here’s a story: a team trained a model, everything worked fine—until the dataset doubled. Suddenly, their R pipeline crawled for hours. The root cause wasn’t the algorithm at all. It was compute context. They were running in local compute, dragging every row across the network into memory. One switch to SQL compute context pushed the R script to run directly on the server, kept the data in place, and turned the crawl into a sprint. That’s the rule of thumb: if your dataset is large, prefer SQL compute context to avoid moving rows over the network. Try it yourself—run the same R script locally and then in SQL compute. Compare wall-clock time and watch your network traffic. You’ll see the difference. And once you understand that setting, the next question becomes obvious: where’s the real drag hiding when the data starts to flow? The Invisible Bottleneck What most people don’t notice at first is a hidden drag inside their workflow: the invisible bottleneck. It isn’t a bug in your model or a quirk in your code—it’s the way your compute context decides where the work happens. When you run in local compute context, R runs on your laptop. Every row from SQL Server has to travel across the network and squeeze through your machine’s memory. That transfer alone can strangle performance. Switch to SQL Server compute context, and the script executes inside the server itself, right next to the data. No shuffling rows across the wire, no bandwidth penalty—processing stays local to the engine built to handle it. A lot of people miss this because small test sets don’t show the pain. Ten thousand rows? Your laptop shrugs. Ten million rows? Now you’re lugging a library home page by page, wondering why the clock melted. The fix isn’t complex tuning or endless loop rewrites. It’s setting the compute context properly so the heavy lifting happens on the server that was designed for it. That doesn’t mean compute context is a magic cure-all. 
If your data sources live outside SQL Server, you’ll still need to plan ETL to bring them in first. SQL compute context only removes the transfer tax if the data is already inside SQL Server. Think of it this way: the server’s a fortress smithy; if you want the blacksmith to forge your weapon fast, you bring the ore to him rather than hauling each strike back and forth across town. This is why so many hours get wasted on what looks like “optimization.” Teams adjust algorithms, rework pipeline logic, and tweak parameters trying to speed things up. But if the rows themselves are making round trips over the network, no amount of clever code will win. You’re simply locked into bandwidth drag. Change the compute context, and the fight shifts in your favor before you even sharpen the code. Still, it’s worth remembering: not every crawl is caused by compute context. If performance stalls, check three things in order. First, confirm compute context—local versus SQL Server. Second, inspect your query shape—are you pulling the right columns and rows, or everything under the sun? Third, look at batch size, because how many rows you feed into R at a time can make or break throughput. That checklist saves you from wasting cycles on the wrong fix. Notice the theme: network trips are the real tax collector here. With local compute, you pay tolls on every row. With SQL compute, the toll booths vanish. And once you start running analysis where the data actually resides, your pipeline feels like it finally got unstuck from molasses. But even with the right compute context, another dial lurks in the pipeline—how the rows are chunked and handed off. Leave that setting on default, and you can still find yourself feeding a beast one mouse at a time. That’s where the next performance lever comes in. Batch Size: Potion of Speed or Slowness Batch size is the next lever, and it behaves like a potion: dose it right and you gain speed, misjudge it and you stagger. In SQL Server, the batch size is controlled by the `rowsPerRead` parameter. By default, `rowsPerRead` is set to 50,000. That’s a safe middle ground, but once you start working with millions of rows, it often starves the process—like feeding a dragon one mouse at a time and wondering why it still looks hungry. Adjusting `rowsPerRead` changes how many rows SQL Server hands over to R in each batch. Too few, and R wastes time waiting for its next delivery. Too many, and the server may choke, running out of memory or paging to disk. The trick is to find the point where the flow into R keeps it busy without overwhelming the system. A practical way to approach this is simple: test in steps. Start with the default 50,000, then increase to 500,000, and if the server has plenty of memory, try one million. Each time, watch runtime and keep an eye on RAM usage. If you see memory paging, you’ve pushed too far. Roll back to the previous setting and call that your sweet spot. The actual number will vary based on your workload, but this test plan keeps you on safe ground. The shape of your data matters just as much as the row count. Wide tables—those with hundreds of columns—or those that include heavy text or blob fields are more demanding. In those cases, even if the row count looks small, the payload per row is huge. Rule of thumb: if your table is wide or includes large object columns, lower `rowsPerRead` to prevent paging. Narrow, numeric-only tables can usually handle much larger values before hitting trouble. Once tuned, the effect can be dramatic. 
Raising the batch size from 50,000 to 500,000 rows can cut wait times significantly because R spends its time processing instead of constantly pausing for the next shipment. Push past a million rows and you might get even faster results on the right hardware. The runtime difference feels closer to a network upgrade than a code tweak—even though the script itself hasn’t changed at all. A common mistake is ignoring `rowsPerRead` entirely and assuming the default is “good enough.” That choice often leads to pipelines that crawl during joins, aggregations, or transformations. The problem isn’t the SQL engine or the R code—it’s the constant interruption from feeding R too slowly. On the flip side, maxing out `rowsPerRead` without testing can be just as costly, because one oversized batch can tip memory over the edge and stall the process completely. That balance is why experimentation matters. Think of it as tuning a character build: one point too heavy on offense and you drop your defenses, one point too light and you can’t win the fight. Same here—batch size is a knob that lets you choose between throughput and resource safety, and only trial runs tell you where your system maxes out. The takeaway is clear: don’t treat `rowsPerRead` as a background setting. Use it as an active tool in your tuning kit. Small increments, careful monitoring, and attention to your dataset’s structure will get you to the best setting faster than guesswork ever will. And while batch size can smooth how much work reaches R at once, it can’t make up for sloppy queries. If the SQL feeding the pipeline is inefficient, then even a well-tuned batch size will struggle. That’s why the next focus is on something even more decisive: how the query itself gets written and whether the engine can break it into parallel streams. The Query That Unlocks Parallel Worlds Writing SQL can feel like pulling levers in a control room. Use the wrong switch and everything crawls through one rusty conveyor. Use the right one and suddenly the machine splits work across multiple belts at once. Same table, same data, but the outcome is night and day. The real trick isn’t about raw compute—it’s whether your query hands the optimizer enough structure to break the task into parallel paths. SQL Server will parallelize happily—but only if the query plan gives it that chance. A naive “just point to the table” approach looks simple, but it often leaves the optimizer no option but a single-thread execution. That’s exactly what happens when you pass `table=` into `RxSqlServerData`. It pulls everything row by row, and parallelism rarely triggers. By contrast, defining `sqlQuery=` in `RxSqlServerData` with a well-shaped SELECT gives the database optimizer room to generate a parallel plan.
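To make that concrete, here is a minimal sketch of the two approaches side by side. The column names, filter, and factor levels are illustrative placeholders; the point is that `sqlQuery=` hands the optimizer a trimmed SELECT, and `colInfo` declares categories up front instead of letting R discover them row by row:

```r
# Minimal sketch: pointing RxSqlServerData at a whole table versus a
# well-shaped query. Columns, filter, and factor levels are placeholders.
library(RevoScaleR)

connStr <- "Driver=SQL Server;Server=MYSERVER;Database=SalesDB;Trusted_Connection=yes"

# Whole-table reference: simple, but it streams every column of every row
# and gives the optimizer little room to parallelize.
allRows <- RxSqlServerData(connectionString = connStr,
                           table = "dbo.SalesHistory")

# Trimmed query: only the rows and columns the model needs, so less data
# moves and the engine can build a parallel plan. colInfo declares the
# category levels up front, saving R the overhead of inferring them.
trimmed <- RxSqlServerData(
  connectionString = connStr,
  sqlQuery    = "SELECT Region, Amount, Quantity
                 FROM dbo.SalesHistory
                 WHERE SaleYear >= 2023",
  rowsPerRead = 500000,
  colInfo     = list(Region = list(type = "factor",
                                   levels = c("North", "South", "East", "West")))
)

fit <- rxLinMod(Amount ~ Region + Quantity, data = trimmed)
```

Pushing the filtering and type casting into the SELECT is the same “let SQL do the shaping” idea the episode summary calls out.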

    20 min.
  7. CI/CD With Dev Containers: Flawless Victory Or Epic Fail?

    -3 days

    CI/CD With Dev Containers: Flawless Victory Or Epic Fail?

    Imagine queuing up for raid night, but half your guild’s game clients are patched differently. That’s what building cloud projects feels like without Dev Containers—chaos, version drift, and way too many ‘works-on-my-machine’ tickets. If you work with Azure and teams, you care about one thing: consistent developer environments. Before we roll initiative on this boss fight, hit subscribe and toggle notifications so you’ve got advantage in every future run. In this session, you’ll see exactly how a devcontainer.json works, why Templates and Features stop drift, how pre-building images cuts startup lag, and how to share Git credentials safely inside containers. The real test—are Dev Containers in CI/CD your reliable path to synchronized builds, or do they sometimes roll a natural 1? Let’s start with what happens when your party can’t sync in the first place. When Your Party Can’t Sync When your squad drifts out of sync, it doesn’t take long before the fight collapses. Azure work feels the same when every engineer runs slightly different toolchains. What starts as a tiny nudge—a newer SQL client here, a lagging Node version there—snowballs until builds misfire and pipelines redline. The root cause is local installs. Everyone outfits their laptop with a personal stack of SDKs and CLIs, then crosses their fingers that nothing conflicts. It only barely works. CI builds splinter because one developer upgrades Node without updating the pipeline, or someone tests against a provider cached on their own workstation but not committed to source. These aren’t rare edge cases; the docs flag them as common drift patterns that containers eliminate. A shared image or pre‑built container means the version everyone pulls is identical, so the problem never spawns. Onboarding shows it most clearly. Drop a new hire into that mess and you’re handing them a crate of random tools with no map. They burn days installing runtimes, patching modules, and hunting missing dependencies before they can write a single line of useful code. That wasted time isn’t laziness—it’s the tax of unmanaged drift. Even when veterans dig in, invisible gaps pop up at the worst moments. Running mismatched CLIs is like casting spells with the wrong components—you don’t notice until combat starts. With Azure, that translates into missing Bicep compilers, outdated PowerShell modules, or an Azure CLI left to rot on last year’s build. Queries break, deployments hang, and the helpdesk gets another round of phantom tickets. The real‑world fallout isn’t hypothetical. The docs call out Git line‑ending mismatches between host and container, extension misfires on Alpine images, and dreaded SSH passphrase hangs. They’re not application bugs; they’re tool drift unraveling the party mid‑dungeon. This is where Dev Containers flatten the field. Instead of everyone stacking their own tower of runtimes, you publish one baseline. The devcontainer.json in the .devcontainer folder is the contract: it declares runtimes, extensions, mounts. That file keeps all laptops from turning into rogue instances. You don’t need to trust half‑remembered setup notes—everyone pulls the same container, launches VS Code inside it, and gets the same runtime, same extensions, same spelling of reality. It also kills the slow bleed of onboarding and failing CI. When your whole team spawns from the same image, no one wastes morning cycles copying config files or chasing arcane errors. Your build server gets the same gear loadout as your laptop. 
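For reference, a stripped-down devcontainer.json can look something like the sketch below. The image, extensions, port, and post-create command are illustrative placeholders rather than a recommended stack; what matters is that the whole environment is declared in one file that lives in the repo:

```jsonc
// .devcontainer/devcontainer.json -- a minimal sketch of the "contract".
// Image, extensions, port, and command are illustrative placeholders.
{
  "name": "azure-service",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:20",
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-azuretools.vscode-bicep",
        "ms-vscode.azurecli"
      ]
    }
  },
  "forwardPorts": [3000],
  "postCreateCommand": "npm ci"
}
```

Commit that file and every machine that opens the repo, laptop or build agent, spins up from the same definition.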
A junior engineer’s VM rolls with the same buffs as a senior’s workstation. Instead of firefighting mismatches, you focus on advancing the quest. The measurable payoff is speed and stability. Onboarding shrinks from days to hours. CI runs stop collapsing on trivial tool mismatches. Developers aren’t stuck interpreting mysterious error logs—they’re working against the same environment, every single time. Even experiments become safer: you can branch a devcontainer to test new tech without contaminating your base loadout. When you’re done, you roll back, and nothing leaks into your daily kit. So the core takeaway is simple: containers stop the desync before it wipes the group. Every player hits the dungeon on the same patch level, the buffs are aligned, and the tools behave consistently. That’s the baseline you need before any real strategy even matters. But synchronizing gear is just the first step. Once everyone’s in lockstep, the real advantage comes from how you shape that shared foundation—because no one wants to hand‑roll a wizard from scratch every time they log in. Templates as Pre-Built Classes In RPG terms, picking a class means you skip the grind of rolling stats from scratch and jump right into the fight with a kit that already works. That’s what Dev Container Templates do for your projects—they’re the pre-built classes of the dev world, baked with sane defaults and ready to run. Without them, you’re forcing every engineer to cobble their own sheet. One dev kludges together Docker basics, another scavenges an old runtime off the web, and somebody pastes in a dusty config file from a blog nobody checks anymore. Before writing a single piece of app code, you’ve already burned a day arguing what counts as “the environment.” Templates wipe out that thrash. In VS Code, you hit the Command Palette and choose “Dev Containers: Add Dev Container Configuration Files….” From there you pull from a public template index—what containers.dev calls the gallery. Select an Azure SQL Database template and VS Code auto-generates a .devcontainer folder with a devcontainer.json tuned for database work. Extensions, Docker setup, and baseline configs are already loaded. It’s the equivalent of spawning your spellcaster with starter gear and a couple of useful cantrips already slotted. Same deal with the .NET Aspire template. You can try duct taping runtimes across everyone’s laptops, or you can start projects with one standard template. The template lays down identical versions across dev machines, remote environments, and CI. Instead of builds diverging into chaos, you get consistency down to the patch level. Debugging doesn’t mean rerolling saves every five minutes, because every player is using the same rulebook. And it’s not just about the first spin-up. Templates continue to pay off daily. For Node in Azure, one template can define the interpreter, pull in the right package manager, and configure Docker integration so that every build comes container-ready. No scavenger hunt, no guesswork. Think of it like a class spec: you can swap one skill or weapon, but you aren’t forced to reinvent “what magic missile even does” every session. Onboarding is where it’s most obvious. With a proper template, adding a new engineer shifts from hours of patching runtimes and failed installs to minutes of opening VS Code and hitting “Reopen in Container.” As soon as the environment reloads, they’re running on the exact stack everyone else is using. 
Instead of tickets about missing CLIs or misaligned versions, they’re ready to commit before the coffee cools. Because templates live in repos, they evolve without chaos. When teams update a base runtime, fix a quirk, or add a handy extension, the change hits once and everyone inherits it. That’s like publishing an updated character guide—suddenly every paladin gets higher saves without each one browsing a patch note forum. Nothing is left to chance, and nobody gets stuck falling behind. Templates also scale with your team’s growth. Veteran engineers don’t waste time re-explaining local setup, and new hires don’t fight mystery configs. Everyone uses the same baseline loadout, the same devcontainer.json, the same reproducible outcome. In practice, that prevents drift from sneaking in and killing your pipeline later. The nutshell benefit: Templates transform setup from a dice roll into a repeatable contract. Every project starts on predictable ground, every laptop mirrors the same working environment, and your build server gets to play by the same rules. Templates give you stability at level one instead of praying for lucky rolls. But these base classes aren’t the whole story. Sometimes you want your kit tuned just a little tighter—an extra spell, a bonus artifact, the sort of upgrade that changes how your character performs. That’s when it’s time to talk about Features. Features: Loot Drops for Your Toolkit Features are the loot drops for your environment—modular upgrades that slot in without grind or guesswork. Clear the room, open the chest, and instead of a random rusty sword you get a tool that actually matters: Git, Terraform, Azure CLI, whatever your project needs. Technically speaking, a Feature is a self-contained install unit referenced under the "features" property in devcontainer.json and can be published as an OCI artifact (see containers.dev/features). That one line connects your container to a specific capability, and suddenly your characters all roll with the same buff. The ease is the point. Instead of writing long install scripts and baking them into every Dockerfile, you just call the Feature in your devcontainer.json and it drops into place. One example: you can reference ghcr.io/devcontainers/features/azure-cli:1 in the features section to install the Azure CLI. No scribbling apt-get commands, no worrying which engineer fat-fingered a version. It’s declarative, minimal, and consistent across every environment. Trying to work without Features means dragging your party through manual setup every time you need another dependency. Every container build turns into copy-paste scripting, apt-get loops, and the slow dread of waiting while installs grind. Worse, you still risk different versions sneaking in depending on base image or local caches.
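As a sketch, wiring Features into that same file is just another property in devcontainer.json. The azure-cli reference comes straight from the transcript; the Terraform and Git features and the base image are placeholders to show the pattern:

```jsonc
// Features slot into devcontainer.json under the "features" property.
// Version tags are placeholders; pin what your team standardizes on.
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "features": {
    // The Azure CLI feature referenced in the transcript.
    "ghcr.io/devcontainers/features/azure-cli:1": {},
    // Terraform and Git arrive the same declarative way -- no hand-written install scripts.
    "ghcr.io/devcontainers/features/terraform:1": {},
    "ghcr.io/devcontainers/features/git:1": {}
  }
}
```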

    19 min.
  8. You’re Flying Blind Without Business Central Telemetry and how to fix it with Power BI

    -3 days

    You’re Flying Blind Without Business Central Telemetry and how to fix it with Power BI

    Summary

Running Business Central without proper telemetry is like flying a plane with no instruments — you might stay in the air, but you’ll have no idea when something’s about to fail. In this episode, I explain why telemetry is the single most important tool you’re not using, and how it can transform troubleshooting from guesswork into clear, data-driven action. We’ll cover how to hook telemetry into Azure Application Insights, why one small detail (the Application ID) trips up so many admins, and how Power BI turns telemetry into dashboards you can actually use. Just as important, I’ll clear up a common myth: telemetry doesn’t expose invoice or customer data. It’s strictly about system health and performance. By the end of this episode, you’ll understand what telemetry really is, how to enable it, and why flying blind without it is a risk no team should take.

What You’ll Learn

* Why telemetry is like your system’s flight instruments
* How to connect Business Central telemetry to Azure Application Insights
* Which Power BI apps to install and how to use them effectively
* The critical difference between Application ID and Instrumentation Key
* How to switch from sample dashboards to live telemetry data with lookback & refresh settings
* Patterns that reveal spikes, deadlocks, and extension issues before users raise tickets

Full Transcript

Imagine rolling a D20 every morning just to see if Business Central will behave. No telemetry? That’s like rolling blindfolded. Quick side note—hit subscribe if you want regular, no‑nonsense admin walkthroughs like this one. It keeps you from wandering alone in the dungeon. Here’s the deal: I’ll show you how to connect the Power BI telemetry app to Azure Application Insights, and why one field—the Application ID—trips more admins than any boss fight. To run full live reports, you’ll need an Application Insights resource in Azure and a Power BI Pro license. Telemetry only captures behavior signals like sessions, errors, and performance—not customer invoice data. It’s privacy‑by‑design, meant for system health, not business secrets. Without it, you’re stumbling in the dark. So what happens when you try to run without that visibility? Think no mini‑map, no enemy markers, and no clear path forward. The Hidden Mini-Map: Why Telemetry Matters That’s where telemetry comes in—the hidden mini‑map you didn’t know you were missing. Business Central already emits the signals; you just need to surface them. With telemetry turned off, you aren’t choosing “less convenience.” You’re losing sight of how your environment actually behaves. Helpdesk tickets alone? That’s reaction mode. Users only raise their hand when something hurts, and by then it’s already failed. Telemetry keeps the loop tight. It shows performance shifts and error patterns as they build, not just when the roof caves in. Take deadlocks. By themselves, they’re quiet failures. Users rarely notice them until throughput explodes under load. In one real case, telemetry highlighted deadlocks tied directly to the Replication Counter update process. Enabling the “Skip Replication Counter Update” switch fixed it instantly. Without telemetry, you’d never connect those dots in time—you’d just watch payroll grind to a halt. That’s the real power: turning invisible pressure into visible patterns. The dashboards let you spot the slope of trouble before it hits the cliff. It’s the difference between scheduling a fix on Tuesday afternoon and watching your weekend vanish into emergency calls.
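If you want to peek at the raw feed behind those dashboards, the same signals can be queried directly in the Application Insights resource. A minimal KQL sketch, assuming only the standard traces table and the eventId custom dimension that Business Central telemetry writes; the seven-day window and hourly bin are arbitrary choices:

```kusto
// Minimal sketch: count Business Central telemetry signals per event ID,
// per hour, in the Application Insights resource that receives the feed.
traces
| where timestamp > ago(7d)
| extend eventId = tostring(customDimensions.eventId)
| summarize signals = count() by eventId, bin(timestamp, 1h)
| order by timestamp asc
```

Counting signals per event ID per hour is a quick way to spot which spikes deserve a closer look before you even open the Power BI app.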
And telemetry isn’t spying. It doesn’t capture who typed an invoice line or customer details. What it does capture are behavior signals—sessions starting and ending, login sources, SQL query durations, page views. Importantly, it covers both environment‑wide signals and per‑extension signals, so you aren’t locked into one dimension of visibility. It’s motion detection, not reading diaries. Of course, all that data goes into Azure Application Insights. That means one requirement: you need an Application Insights resource in your Azure subscription, and you need the proper permissions to read from it. Otherwise, the reports will come up blank and you’ll spend time “fixing” something that isn’t broken—it’s just gated by access. Compare that to raw error logs. They’re verbose, unreadable walls of text. Telemetry condenses that chaos onto dashboards where trends pop. Deadlocks line up on graphs. SQL lag shows up in comparison charts. Misbehaving extensions stand out. Instead of parsing twenty screens of stack traces, you just get a simple view of what’s wrong and when. That clarity changes your posture as an admin. With only logs, you’re reacting to pain reports after they land. With telemetry dashboards, you’re watching the health of the system live. You can spot spikes before they take down payroll. You can measure “it’s slow” into actual metrics tied to contention, queries, or extensions. It arms both IT and dev teams with visibility users can’t articulate on their own. And here’s the kicker: Microsoft already provides the feed. Business Central can emit telemetry into Application Insights straight out of the box. The dashboards in Power BI are what turn that feed into something usable. By default, you only see demo samples. But once you connect with the right credentials, the map fills in with your real environment. That’s the unlock. It lets you stop working blind and start working ahead. Instead of asking, “why did it fail,” you start asking, “what trends do we need to solve before this becomes a failure.” Now the next question is which tool to use to actually view this stream. Microsoft gives you two apps—both free, both sitting in Power BI’s AppSource store. They look identical, but they aren’t. Choosing the wrong one means you’ll never see the data you need. Think of it as two treasure chests waiting on the floor—you want to know exactly which one to open. Picking Your Toolkit: The Two Power BI Apps When it comes to actually viewing telemetry in Power BI, Microsoft gives you not one but two apps, and knowing the difference matters. These are the “Dynamics 365 Business Central Usage” app and the “Dynamics 365 Business Central App Usage” app. Both are free from AppSource, both open source, and both come with prebuilt dashboards. But they serve very different roles, and if you install the wrong one for the question you’re asking, you’ll be staring at charts that don’t line up with reality. At first glance, they look nearly identical—same layout, same reports, same colors. That’s why so many admins spin them up, click around, and then wonder why they aren’t seeing the answers they expected. The split is simple once you know it. The “Usage” app is for environment telemetry. It covers cross-environment behaviors: logins, sessions, client types, performance across the whole system. Think of it like zooming out to see the whole town map in your game. The “App Usage” app, on the other hand, is tied to extension telemetry. 
It connects through app.json and focuses on one extension at a time—like checking a single character’s skill tree. Want to measure which custom app is throwing deadlocks or dragging queries? That’s what the “App Usage” app is for. To make it easier, Microsoft even provides direct install links. For the environment telemetry app, use aka.ms/bctelemetryreport. For the extension telemetry app, use aka.ms/bctelemetry-isv-app. Both links take you directly to AppSource where you can install them into your Power BI workspace. After install, don’t be surprised when you first open the app and see sample data. That’s baked in on purpose. Until you connect them to your actual telemetry, they load with demo numbers so you can preview the layouts. They also automatically create a Power BI workspace under the same name as the app, so don’t panic if you suddenly see a fresh workspace appear in your list. That’s normal. Now let’s talk capability, because both apps surface the same four report types—Usage, Errors, Performance, and Administration. Usage is your census ledger, capturing login counts, session timings, and client usage. Errors is the event list of failures, both user-triggered and system-level. Performance is where you spot long SQL durations or rising page load times before anyone raises a ticket. Administration logs environment events like extension installs, sandbox refreshes, and restarts—your system’s patch notes, all timestamped and organized. All of those reports are available in both apps, but the scope changes. In the environment “Usage” app, the reports describe patterns across your entire Business Central setup. You’ll see whether certain clients are more heavily used, if session counts spike at end-of-month, or where contention is hitting the system as a whole. In the extension “App Usage” app, those same reports zero in on telemetry tied strictly to that extension. Instead of studying every player in town, you’re watching how your wizard class performs when they cast spells. That focus is what lets you isolate a misbehaving customization without drowning in global stats. There is a cost gate here, though. While both apps are free to download, you can’t use them with live telemetry unless you have a Power BI Pro license. Without it, you’re limited to the static sample dashboards. With it, you get the real-time queries pulling from your Application Insights resource. That single license is the innkeeper’s fee—it’s what gets you from looking at mannequins to fighting your actual monsters. This is also why I flagged the subscription CTA earlier; if you’re planning to set this up, having Pro is not optional. So, the practical workflow ends up being straightforward

    19 min.

