M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit M365.show.

  1. Model-Driven Apps: The Unsung Power Platform Hero


    Model-Driven Apps: The Unsung Power Platform Hero

    Everyone loves to clown on Model-Driven Apps: “Old-school!” “Looks like 2008!” “Canvas does this better!” Fine. But here’s the punchline — I’ve built secure, production-ready Model-Driven apps fast, and in this video I’ll show you exactly how. By the end, you’ll see a plain Dataverse table turned into a themed, secure app with custom forms, views, a role, and even an automation — live. Subscribe to the M365.Show newsletter at m365 dot show so you don’t miss the follow-ups. Model-Driven isn’t supposed to be sexy — it’s supposed to work. Stick around for the 5-minute mark when that “boring” blank table becomes a usable form. And before we get there, let’s address why everyone insists these apps are boring in the first place.

    Why Everyone Thinks Model-Driven Apps Are Boring

    Model-Driven Apps carry a reputation problem. The common take is that they’re just old, stiff forms with no charm, no flexibility, and about as exciting as Windows XP’s Control Panel. That reputation sticks in kickoff meetings: bring up Canvas Apps and the room perks up; bring up Model-Driven and suddenly everyone needs another coffee. A lot of that comes from people who’ve only ever seen the default, grey starter app. They peek, smirk, and walk away. But judging it from that is like opening Excel, staring at the blank grid, and deciding spreadsheets are pointless—you never even tried the formulas.

    The criticism usually circles the same points. The interface looks dated. It’s not as customizable out of the box. Compared to the instant drag-and-drop magic in Canvas, it doesn’t grab attention. And to be fair, if your only exposure is the raw starter template, you’d assume it’s lifeless. Microsoft’s demo culture doesn’t exactly help either: Canvas is all bright colors and custom layouts, while Model-Driven gets positioned as “the serious option nobody shows off.” The flashy one is the cousin doing TikTok dances; Model-Driven is the one who shows up on time with the project plan.

    I’ll give you a concrete story. On one rollout I supported, the manager was sold on Canvas. The mock-ups looked slick. It impressed at first. But the moment our Dataverse schema started evolving, things got messy. Every column tweak triggered a handful of fixes across the app—dropdowns broke, formulas needed re-pointing, visibility conditions stopped working. It wasn’t catastrophic, but it was constant patching. Meanwhile, the Model-Driven build we had sitting quietly in parallel kept pace with the schema changes because the form logic was already sitting where it belonged—in Dataverse. No scrambling, no rework, just steady updates. That “boring” Honda Civic app ran quietly while the sports car prototype was in the shop.

    This stability isn’t about luck. In practice, when we use Model-Driven builds, the core business logic lives directly in Dataverse rather than being stretched over connectors and side scripts. That means when the schema shifts, the platform adjusts without a lot of duct tape. I’ve seen the difference firsthand. And to avoid staying too theoretical, we’ll actually attach a security role and wire up a relationship live in this demo so you can see how it behaves, not just hear me say it. Another piece the critics miss is security. In many Canvas builds, you end up writing conditions or duplicating checks in a few flows. In contrast, Model-Driven apps tie straight into Microsoft’s role-based model.
    In my experience, if an auditor or finance manager needs a clean division of what they can view versus what they can edit, roles handle it without me reinventing the wheel. It doesn’t look flashy, but it scales without panic. And that’s exactly why enterprise environments lean on it. Bottom line: Model-Driven isn’t trying to win design awards. It’s designed to get you to production without constant fires. People can call it dull, but dull and stable beats pretty and fragile when the business depends on it. Sure, it may not sparkle in the first five minutes, but once you push it, you start to understand the long game advantage.

    That’s what we’re going to put to the test. Instead of telling you it works better at scale, we’ll start from the absolute bare minimum and build forward in real time. From a single empty table, we’ll see how quickly structure, relationships, and even basic security can fall into place. And the best part is, you’ll watch the change happen right on screen.

    Building from Zero: The 'Before' State

    You open Dataverse and there’s one empty table. That’s where a lot of folks check out and dismiss Model-Driven apps. Nothing greets you—no dashboards, no forms, no quick sparkle of progress. Just a bare table with no personality. It looks like someone abandoned the setup wizard halfway, and for skeptics, that’s proof enough it’s boring. But really, this is just the launchpad. The truth is, on its own, that empty table does look lifeless. If you’ve been hanging around Canvas, you’re spoiled by the quick dopamine: drag a box, add a dropdown, and instantly it feels like something alive. Model-Driven doesn’t hand you that upfront rush. Instead, it’s designed around structure and stability. It asks: do you want something that looks busy right away, or something that’s going to hold together when business requirements shift six months from now?

    So let’s treat this plain starting point like a demo roadmap. Step one: add a few fields. Step two: create a relationship so tables can talk to each other. Step three: see how that instantly shapes the form without custom code. That’s enough to flip it from dead weight into a working scaffold you can keep building on.

    Fields are where we begin. You don’t need code, scripts, or PowerShell here. In the table designer, you drop in a text column for a name, maybe a choice column for status, maybe a lookup to another table. It’s straightforward, and the change is baked into Dataverse itself. I’ve built enough apps to know—when you define a column here, it’s consistent everywhere. You won’t be chasing 15 different dropdowns across the UI later. We’ll actually add those live in a second so you can see how quickly the form reacts. Think of it as snapping Lego bricks onto a base plate: every new field adds a piece, ready to link with the next. You can rearrange pieces without wrecking the foundation. Contrast that with Canvas, where sometimes it feels like pouring wet concrete—once it sets, you’re stuck with it unless you want to start over. The Lego model keeps things flexible without cracking the base.

    Now we add a relationship. Imagine linking Customers to Orders. In Dataverse, once you define that connection, the platform already knows how to behave. Relationships are treated as first-class citizens. You’re not patching formulas or fighting delegation rules. The moment you add the lookup, it flows through the app. When we set this up in the demo, watch how related records simply appear in the form—there’s your micro‑payoff.
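    If you ever want to see that first-class relationship outside the UI, the Dataverse Web API exposes it directly. Here’s a minimal sketch, assuming a placeholder environment URL and an OAuth token you’d acquire yourself (for example via MSAL); the account–contact relationship name shown is the common default, but treat every name here as an assumption to adapt:

```python
import requests

ENV_URL = "https://yourorg.crm.dynamics.com"  # placeholder environment URL
TOKEN = "<access-token>"                      # placeholder OAuth bearer token

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

# Because the lookup is a real relationship in Dataverse, related rows
# come back with a single $expand - no joins or formulas to maintain.
resp = requests.get(
    f"{ENV_URL}/api/data/v9.2/accounts",
    headers=headers,
    params={
        "$select": "name",
        "$expand": "contact_customer_accounts($select=fullname)",
        "$top": "5",
    },
    timeout=30,
)
resp.raise_for_status()

for account in resp.json()["value"]:
    contacts = [c["fullname"] for c in account.get("contact_customer_accounts", [])]
    print(account["name"], "->", contacts)
```

    The form designer does the same traversal for you when those related records show up on screen.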
    That’s the kind of instant feedback most people assume doesn’t happen in Model‑Driven apps. At this stage, it’s no longer an empty shell. You start to see the skeleton of a real business app: one table branching into another, roles you could imagine plugging in, and forms that are already becoming functional. Suddenly, it’s obvious how sales could log interactions, finance could track invoices, or managers could pull reports. Five minutes ago we had a blank placeholder; now, there’s scaffolding you can actually use.

    And here’s the important part that critics miss: this structure comes together without hacks or backflips. You don’t need to duct-tape flows or rewrite formulas just because a schema changed. The system is working with you instead of against you. It’s not about speed bragging rights. It’s about how reliably and cleanly the foundation grows once you start adding parts. So by the time we close out this “before” state, we’ve already gone from one lonely table to a connected framework with fields, choices, and relationships in place. When you see the first form render those related records, you realize the platform isn’t stuck in neutral. It’s already moving forward. Next, we go from skeleton to presentation. Because what everyone loves to complain about is how it looks. And that’s exactly where we’re headed.

    From Skeleton to App: Form, Views, and Theming

    When critics call Model-Driven Apps “stuck in 2008,” they’re really talking about the default look. Spin one up and you’re met with a grey interface that feels like corporate beige wall paint—functional, but forgettable. That plain start is intentional: it’s not trying to wow anyone out of the box, it’s giving you a blank workspace. Instead of dwelling on the blandness, we’ll walk through three quick moves that take the skeleton into something that feels like an actual app—forms, views, and theming.

    Step one is restructuring the form. Out of the box, every field gets dumped into one long column, like someone unloading all their groceries down the checkout belt with no order. Navigating that isn’t fun for your users, and it doesn’t help anyone make sense of the data. In the form designer, we can break that mess into logical sections: customer info on the left, order details on the right, and a new tab holding notes and history. Watch here as I move the contact block into a clean tab—notice how the record still renders properly and the relationships remain intact. Same data model underneath, just organized in a way that humans can actually use. Think of it like tailoring a suit off the rack; the material is the same, but once it’s adjusted, it looks and works ten times better.

    Step two is building out the views. The default view is a flat list that could make even the most caffeinated project manager g

    18 min
  2. The Dataverse Migration Nobody Wants (But Needs)


    The Dataverse Migration Nobody Wants (But Needs)

    Look, we joke about Microsoft licensing being a Rubik’s cube with missing stickers—but Dataverse isn’t just that headache. Subscribe to the M365.Show newsletter now, because when the next rename hits, you’ll want fixes, not a slide deck. Here’s the real story: Dataverse unlocks richer Power Platform scenarios that make Copilot and automation actually practical. Some features do hinge on extra licensing—we’ll flag those, and I’ll drop Microsoft’s own docs in the description so you can double‑check the fine print. Bottom line: Dataverse makes your solutions sturdier than duct tape, but it brings costs and skills you need to face upfront. We’ll be blunt about the skills and the migration headaches so you don’t get surprised. And that starts with the obvious question everyone asks—why not just keep it in a List?

    What Even Is Dataverse, and Why Isn’t It Just Another List?

    So let’s clear up the confusion right away—Dataverse is not just “another List.” It’s built as a database layer for the Power Platform, not a prettier SharePoint table. Sure, Lists give you an easy, no-license-required place to start, but Dataverse steps in when “easy” starts collapsing under real-world demands. Here’s why it actually matters: Lists handle simple tables—columns, basic permissions, maybe a lookup or two if you’re lucky. Dataverse takes that same idea and adds muscle. Think:

    * Proper relationships between tables (not duct tape lookups).
    * Role-based security, down to record and field level.
    * Auditing and history tracking baked right in.
    * Integration endpoints and APIs ready for automation.

    That’s why I call it SharePoint that hit the gym. It’s not flexing for show; it actually builds the structure to handle business-grade workloads. But let’s be fair—Lists feel fantastic the day you start. They’re fast, simple, and solve the nightmare of “project_final_FINAL_v7.xlsx” on a shared drive. If your team just needs a tracker or a prototype, they work beautifully. That’s why people keep reaching for them. Convenience wins, until it doesn’t.

    I’ve watched this play out: someone built a small project tracker in a List—simple at first, then it snowballed. Extra columns, multiple lookups, half the org piling on. Flows started breaking, permissions turned messy, and the whole thing became a fight just to stay online. At that point, Dataverse didn’t look like overkill anymore—it looked like the life raft. And that, right there, is the pivot. Lists hit limits when you try to bolt on complexity. Larger view thresholds, too many lookups, or data models that demand relationships—it doesn’t take long before things wobble. Microsoft even has docs explaining these constraints, and I’d link those in the description if you want the exact numbers. For now, just understand: Lists scale only so far, and Dataverse is designed for everything beyond that line.

    The shorthand is this: Lists = convenience. Dataverse = structural integrity. One is the quick patch; the other is the framework. Neither is “better” across the board—it comes down to fit. So how do you know which way to go? Here’s a simple gut-check:

    * Will your data need relationships across different objects? Yes → lean Dataverse. No → List could be fine.
    * Do you need record-level or field-level security, or auditing that stands up to compliance? Yes → Dataverse. No → List.
    * Is this something designed to scale or run a business-critical process long-term? Yes → Dataverse. No → List probably gets you there.

    That’s it. No flowcharts, just three questions.
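    If it helps to see those three questions as one executable rule, here’s a toy sketch — the function and flag names are invented for illustration, not an official sizing tool:

```python
def pick_backend(needs_relationships: bool,
                 needs_security_or_auditing: bool,
                 business_critical_long_term: bool) -> str:
    """Encodes the three-question gut check: any 'yes' leans Dataverse."""
    if (needs_relationships
            or needs_security_or_auditing
            or business_critical_long_term):
        return "Dataverse"
    return "List"

# A simple team tracker with no compliance needs stays a List:
print(pick_backend(False, False, False))  # -> "List"
# A cross-object, audited, long-lived process earns the armor:
print(pick_backend(True, True, True))     # -> "Dataverse"
```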
    Keep in mind that Dataverse brings licensing and governance overhead; Lists keep you quick and light. You don’t pick one forever—you pick based on scope and durability. Bottom line, both tools have a place. Lists cover prototypes and lightweight needs. Dataverse underpins apps that must handle scale, control, and governance. Get that match wrong, and you either drown in duct tape or overspend on armor you didn’t need. And this is where it gets interesting—because neither choice is flawless. Both have wins, both bring pain, and SQL still sits in the background like the grumpy uncle nobody can retire. That’s where we head next: the good, the bad, and the ugly of stacking Lists against Dataverse.

    The Good, The Bad, and The Ugly of Lists vs Dataverse

    Let’s be honest—none of these tools are perfect, and each will betray you if you put it in the wrong role. Lists, Dataverse, SQL: they all have their moments, they all have their limits, and they all have their specific ways of nuking your weekend. The real pain doesn’t come from the tools themselves—it comes from picking the wrong one, then acting shocked when it falls apart. So here’s the practical version of “the good, the bad, and the ugly.” Instead of dragging this out with a dating analogy *and* a food analogy, let’s just call it what it is: three tools, three trade-offs.

    * Lists are fast, low-cost, and anyone in your org who can open Excel can learn to use one. They’re perfect for quick fixes or lightweight projects, and they spare you extra license drama. But scale them up with multiple related lists or heavy lookups, and you’re duct-taping duct tape. Your “tracker” quickly mutates into a swamp of random errors and warning dialogs no one can explain.
    * Dataverse is structured and secure—it gives you real data relationships, role-based access, and features tuned for Power Platform apps. It’s the reliable backbone when compliance, auditing, or long-term apps are involved. The catch? It comes with licensing twists and storage costs that pile up fast. I won’t pretend to list exact tiers here—check the official Microsoft docs linked in the description if you need numbers—but the point is simple: Dataverse is powerful, but it carries an ongoing bill, both in dollars and skills.
    * SQL is legendary. It’s got power, flexibility, and the longest resume in the room. But most makers can’t touch it without a crash course in dark arts like permissions, indexing, and joins. For citizen developers, SQL is basically a locked door with a “you must be this tall to ride” sign. If your team doesn’t already have a DBA in their corner, it’s not where your Power Platform app should live.

    Each of these fails for a different reason. Lists fail when they get overloaded—suddenly you’re fighting view thresholds, broken lookups, and flows that stall out of nowhere. Dataverse fails when you underestimate the cost—it looks “included” at first, then you trigger the premium side of licensing and find out your budget was imaginary. SQL fails when you throw non-technical staff into it—it becomes an instant graveyard of half-finished apps no one can manage.

    So how do you decide? A simple ground rule: if you’re feeding a production app that multiple teams depend on, lean toward Dataverse unless your IT group has good reasons to keep SQL at the center. If it’s genuinely small or disposable, Lists handle it fine. And if you’re staring at an old SQL server in someone’s closet, understand that it may be reliable, but it’s also not where Microsoft is building the future.
    The key is clarity up front: map which tool belongs to which kind of project *before* anyone starts building. Otherwise, you’re not just choosing a tool—you’re scheduling your own emergency tickets for six months from now. Trust me, there’s nothing fun about explaining to your manager why the project tracker “just stopped working” because someone added one lookup too many. Here’s the bottom line. Lists win for lightweight and short-term needs. Dataverse shines for scalable, governed apps with security and automation at the core. SQL is still hanging around out of necessity, but for many orgs, it’s more habit than strategy. Get the match wrong, and the cost hits you in wasted hours, failed apps, or invoices you didn’t plan for. And speaking of cost, that’s where we go next. Because once you admit Dataverse might be the right choice, the real question isn’t about features anymore—it’s about what the bill looks like. Next up: how much will this actually cost in time and money?

    The Cost Nobody Puts in the Demo Slide

    Here’s the thing nobody shows you in a slick demo: the real cost doesn’t stop at “it runs” and a smiling screenshot. The marketing slides love telling you what Dataverse can do; they conveniently forget the part where you realize halfway through rollout that Microsoft charges for more than just buttons and tables. That gap between demo-land and production reality? That’s where teams get burned.

    Think of it like this: you budget for a bicycle, then Microsoft hands you not only the bike but also a helmet, gloves, reflective gear, and a bill for a maintenance plan you didn’t ask for. Licensing feels the same. It isn’t that Dataverse is a rip-off—it’s that there are layers most people don’t account for until the invoice hits. Expect licensing and storage to be the two knobs that turn your monthly bill higher. If you’re serious about adopting it, budget for capacity and premium features early instead of scrambling later.

    Makers often assume Dataverse is “free” because it shows up bundled in some trial or baked into their tenant. That’s the trap. Trials are temporary, and not every license covers production use. Don’t assume those trial checkboxes equal long-term rights. Validate your licenses with procurement before you migrate a single workload. If you miss that step, you’ll find yourself explaining to leadership why your shiny new enterprise app now needs a premium plan. Pro tip: include a licensing checklist in your planning doc. Better yet, grab the one we’ll link in the description or newsletter—it’ll save you from guessing. Here’s a quick budgeting checklist you s

    18 min
  3. How Data Goblins Wreck Copilot For Everyone


    How Data Goblins Wreck Copilot For Everyone

    Picture your data as a swarm of goblins: messy, multiplying in the dark, and definitely not helping you win over users. Drop Copilot into that chaos and you don’t get magic productivity—you get it spitting out outdated contract summaries and random nonsense your boss thinks came from 2017. Not exactly confidence-inspiring. Here’s the fix: tame those goblins with the right prep and rollout, and Copilot finally acts like the assistant people actually want. I’ll give you the Top 10 actions to make Copilot useful, not theory—stuff you can run this week. Quick plug: grab the free checklist at m365.show so you don’t miss a step. Because the real nightmare isn’t day two of Copilot. It’s when your rollout fails before anyone even touches it.

    Why Deployments Fail Before Day One

    Too many Copilot rollouts sputter out before users ever give it a fair shot. And it’s rarely because Microsoft slipped some bad code into your tenant or you missed a magic license toggle. The real problem is expectation—people walk in thinking Copilot is a switch you flip and suddenly thirty versions of a budget file merge into one perfect answer. That’s the dream. Reality is more like trying to fuel an Olympic runner with cheeseburgers: instead of medals, you just get cramps and regret.

    The issue comes down to data. Copilot doesn’t invent knowledge; it chews on whatever records you feed it. If your tenant is a mess of untagged files, duplicate spreadsheets, and abandoned SharePoint folders, you’ve basically laid out a dumpster buffet. One company I worked with thought their contract library was “clean.” In practice, some contracts were expired, others mislabeled, and half were just old drafts stuck in “final” folders. The result? Copilot spat out a summary confidently claiming a partnership from 2019 was still active. Legal freaked out. Leadership panicked. And trust in Copilot nosedived almost instantly.

    That kind of fiasco isn’t on the AI—it’s on the inputs. Copilot did exactly what it was told: turn garbage into polished garbage. The dangerous part is how convincing the output looks. Users hear the fluent summary and trust it, right up until they find a glaring contradiction. By then, the tool carries a new label: unreliable. And once that sticker’s applied, it’s hard to peel off. Experience and practitioner chatter all point to the same root problem: poor data governance kills AI projects before they even start. You can pay for licenses, bring in consultants, and run glossy kickoff meetings. None of it matters if the system underneath is mud.

    And here’s the kicker—users don’t care about roadmap PowerPoints or governance frameworks. If their very first Copilot query comes back wrong, they close the window and move on. From their perspective, the pitch is simple: “Here’s this fancy new assistant. Ask it anything.” So they try something basic like, “Show me open contracts with supplier X.” Copilot obliges—with outdated deals, missing clauses, and expired terms all mixed in. Ask yourself—would they click a second time after that? Probably not. As soon as the office rumor mill brands it “just another gimmick,” adoption flatlines.

    So what’s the fix? Start small. Take that first anecdote: the messy contract library. If it sounds familiar, don’t set out to clean your entire estate. Instead, triage. Pick one folder you can fix in two days. Get labels consistent, dates current, drafts removed. Then connect Copilot to that small slice and run the same test.
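    To kick off that triage, even a crude inventory script can surface the worst goblins before Copilot ever sees them. A rough sketch — the folder path, draft keywords, and two-year staleness cutoff are all assumptions to tune for your own library:

```python
import time
from pathlib import Path

FOLDER = Path("./contracts")        # the one folder you picked to fix
DRAFT_MARKERS = ("draft", "copy", "old", "final_v")  # illustrative red flags
STALE_AFTER_DAYS = 730              # arbitrary two-year cutoff

now = time.time()
for path in FOLDER.rglob("*"):
    if not path.is_file():
        continue
    age_days = (now - path.stat().st_mtime) / 86400
    flags = [m for m in DRAFT_MARKERS if m in path.name.lower()]
    if age_days > STALE_AFTER_DAYS:
        flags.append(f"untouched for {age_days:.0f} days")
    if flags:
        # Candidates for relabeling, archiving, or deleting before
        # connecting Copilot to this slice of content.
        print(path, "->", ", ".join(flags))
```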
    The difference is immediate—and more importantly, it rebuilds user confidence. Think of it like pest control. Every missing metadata field, every duplicate spreadsheet, every “Final_V7_REALLY.xlsx” is another goblin running loose in the basement. Leadership may be upstairs celebrating their shiny AI pilot, but downstairs those goblins are chewing wires and rearranging folders. Let Copilot loose down there, and you’ve just handed them megaphones.

    The takeaway is simple: bad data doesn’t blow up your deployment in one dramatic crash. It just sandpapers every interaction until user trust wears down completely. One bad answer becomes two. Then the whispers start: “It’s not accurate.” Soon nobody bothers to try it at all. So the hidden first step isn’t licensing or training—it’s hunting the goblins. Scrub a small set of records. Enforce some structure. Prove the tool works with clean inputs before scaling out. Skip that, and yes—your rollout fails before Day One. But there’s another side to this problem worth calling out. Even if the data is ready, users won’t lean in unless they actually *want* to. Which raises the harder question: why would someone ask for Copilot at all, instead of just ignoring it?

    How Organizations Got People to *Want* Copilot

    What flipped the script for some organizations was simple: they got people to *want* Copilot, not just tolerate it. And that’s rare in IT land. Normally, when we push out a new tool, it sits in the toolbar like an unwanted app nobody asked for. But when users see immediate value—actual time back in their day—they stop ignoring it and start asking managers why their department doesn’t have it yet. Here’s the key difference: tolerated tools just live on the desktop collecting dust, opened only when the boss says, “use it.” Demanded tools show up in hallway chatter—“Hey, this just saved me an hour.”

    That shift comes from visible wins. Not theory—practical things people can measure. For example: cutting monthly report prep from eight hours to two, automating status updates so approvals close a full day faster, or reducing those reconciliation errors that make finance teams want to chuck laptops out the window. Those are the kind of wins that turn curiosity into real appetite. Too many IT rollouts assume adoption works by decree. Licensing gets assigned, the comms team sends a cheerful Monday email, and someone hopes excitement spreads. It doesn’t. Users don’t care about strategy decks; they care if their Friday night is saved because they didn’t have to chase through thirty spreadsheets. Miss that, and Copilot gets ghosted before it has a chance.

    The opposite shows up in real deployments that created demand. I saw a finance firm run a small, focused Copilot pilot in one department. A handful of analysts went from drowning in Excel tabs to handing off half that grunt work to Copilot. Reports went out cleaner. Backlogs shrank. And the best part—word leaked beyond the pilot group. Staff in other departments started pressing managers with, “Why do they get this and we don’t?” Suddenly IT wasn’t pushing adoption—it was refereeing a line at the door. And if you want the playbook, here’s how they did it: six analysts, a three-week pilot, live spreadsheets, and a daily feedback loop. Tight scope, rapid iteration, visible gains. That’s the cafeteria effect: nobody cares about lukewarm mystery meat, but bring in a taco bar and suddenly there’s a line. And it sticks—because demand is driven by proof of value, not by another corporate comms blast.
    Want the pilot checklist to start your own “taco bar”? Grab it at m365.show. Here’s what the smart teams leaned on. First, they used champions inside the business—not IT staff—to show real stories like “this saved me an hour this morning.” Second, they picked wins others could see: reports delivered early, approvals unclogged, prep time cut in half. Third, they let the proof spread socially. Word of mouth across Teams chats and roundtables hit harder than any glossy announcement ever could. It wasn’t about marketing—it was about letting peer proof build credibility. That’s why people began asking for Copilot. Because suddenly it wasn’t one more login screen—it was the thing saving them from another tedious data grind. Organizations that made those wins visible flipped the whole posture. Instead of IT nagging people to “adopt,” users were pulling Copilot into their daily flow like oxygen. That’s adoption with teeth—momentum you don’t have to manufacture. Of course, showing the wins is one thing; structuring the rollout so it doesn’t feel like a sales pitch is another. And that’s where the right frameworks came into play.

    The Frameworks That Didn’t Sound Like Sales Pitches

    You ever sat through change management slides and thought, “Wow, this feels like an MBA group project”? Same here. AI rollouts should be simple: show users what the tool does, prep them to try it, and back them up when they get stuck. Instead, we get decks with a hundred arrows, concentric circles, and more buzzwords than a product rename week at Microsoft. That noise might impress a VP, but it doesn’t help the people actually grinding through spreadsheets. The frameworks that work are the ones stripped down, pointed at real pain points, and built short enough that employees don’t tune out.

    ADKAR was one of the few that translated cleanly into practice. On paper it’s Awareness, Desire, Knowledge, Ability, Reinforcement. In Copilot world, here’s what that means:

    * Awareness comes from targeted demos that actually show what Copilot can do for their role—not a glossy video about the “future of productivity.”
    * Desire means proving payoff right away, like showing them how a task they hate takes half the time.
    * Knowledge has to be microlearning, not death-by-deck. Give them five-minute checklists, cheat sheets, or tooltips.
    * Ability comes from sandboxing, letting users practice with fake data or non-critical work so they don’t feel like one wrong click could tank a project.
    * Reinforcement isn’t another corporate memo—it’s templates, shortcuts, or a manager giving recognition when someone pulls it off.

    Stripped of its acronym armor, ADKAR isn’t theory at all. It’s a roadmap that says: tell them what it is, why it improves their day, how to

    18 min
  4. GitHub, Azure DevOps, or Fabric—Who’s Actually in Charge?


    GitHub, Azure DevOps, or Fabric—Who’s Actually in Charge?

    Here’s a statement that might sting: without CI/CD, your so‑called Medallion Architecture is nothing more than a very expensive CSV swamp. Subscribe to the M365.Show newsletter so I can reach Gold Medallion on Substack! Now, the good news: we’re not here to leave you gasping in that swamp. We’ll show a practical, repeatable approach you can follow to keep Fabric Warehouse assets versioned, tested, and promotable without midnight firefights. By the end, you’ll see how to treat data pipelines like code, not mystery scripts. And that starts with the first layer, where one bad load can wreck everything that follows.

    Bronze Without Rollback: Your CSV Graveyard

    Picture this: your Bronze layer takes in corrupted data. No red lights, no alarms, just several gigabytes of garbage neatly written into your landing zone. What do you do now? Without CI/CD to protect you, that corruption becomes permanent. Worse, every table downstream is slurping it up without realizing. That’s why Bronze so often turns into what I call the CSV graveyard. Teams think it’s just a dumping ground for raw data, but if you don’t have version control and rollback paths, what you’re really babysitting is a live minefield.

    People pitch Bronze as the safe space: drop in your JSON files, IoT logs, or mystery exports for later. Problem is, “safe” usually means “nobody touches it.” The files become sacred artifacts—raw, immutable, untouchable. Except they’re not. They’re garbage-prone. One connector starts spewing broken timestamps, or a schema sneaks in three extra columns. Maybe the feed includes headers some days and skips them on others. Weeks pass before anyone realizes half the nightly reports are ten percent wrong. And when the Bronze layer is poisoned, there’s no quick undo. Think about it: you can’t just Control+Z nine terabytes of corrupted ingestion. Bronze without CI/CD is like writing your dissertation in one single Word doc, no backups, no versions, and just praying you don’t hit crash-to-desktop. Spoiler alert: crash-to-desktop always comes. I’ve seen teams lose critical reporting periods that way—small connector tweaks going straight to production ingestion, no rollback, no audit trail. What follows is weeks of engineers reconstructing pipelines from scratch while leadership asks why financials suddenly don’t match reality. Not fun.

    Here’s the real fix: treat ingestion code like any other codebase. Bronze pipelines are not temporary throwaway scripts. They live longer than you think, and if they’re not branchable, reviewable, and version-controlled, they’ll eventually blow up. It’s the same principle as duct taping your car bumper—you think it’s temporary until one day the bumper falls off in traffic. I once watched a retail team load a sea of duplicated rows into Bronze after an overnight connector failure. By the time they noticed, months of dashboards and lookups were poisoned. The rollback “process” was eight engineers manually rewriting ingestion logic while trying to reload weeks of data under pressure. That entire disaster could have been avoided if they had three simple guardrails.

    * Step one: put ingestion code in Git with proper branching. Treat notebooks and configs like real deployable code.
    * Step two: parameterize your connection strings and schema maps so you don’t hardwire production into every pipeline.
    * Step three: lock deployments behind pipeline runs that validate syntax and schema before touching Bronze.

    That includes one small but vital test—run a pre-deploy schema check or a lightweight dry‑run ingestion. That catches mismatched timestamps or broken column headers before they break Bronze forever.
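    Here’s a minimal sketch of what that pre-deploy check might look like in pandas — the expected column list and the timestamp column name are illustrative; in a real setup the contract would live in your repo and the script would run as a gate in the deployment pipeline:

```python
import sys
import pandas as pd

# Expected Bronze landing columns - in a real pipeline this contract
# lives in version control next to the ingestion code (names here are
# illustrative placeholders).
EXPECTED_COLUMNS = ["order_id", "order_ts", "amount", "store_code"]

def check_sample(sample_path: str) -> list[str]:
    """Dry-run a small sample and report drift before anything touches Bronze."""
    df = pd.read_csv(sample_path, nrows=1000)
    problems = []

    missing = set(EXPECTED_COLUMNS) - set(df.columns)
    extra = set(df.columns) - set(EXPECTED_COLUMNS)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if extra:
        problems.append(f"unexpected columns: {sorted(extra)}")

    # Catch the classic 'broken timestamps' failure: rows that refuse to parse.
    if "order_ts" in df.columns:
        unparseable = pd.to_datetime(df["order_ts"], errors="coerce").isna().sum()
        if unparseable:
            problems.append(f"order_ts: {unparseable} unparseable timestamps")

    return problems

if __name__ == "__main__":
    issues = check_sample(sys.argv[1])
    for issue in issues:
        print("PRE-DEPLOY CHECK FAILED:", issue)
    sys.exit(1 if issues else 0)  # nonzero exit blocks the deployment
```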
    Now replay that earlier horror story with these guardrails in place. Instead of panicking at three in the morning, you review last week’s commit, you roll back, redeploy, and everything stabilizes in minutes. That’s the difference between being crushed by Bronze chaos and running controlled, repeatable ingestion that you trust under deadline. The real lesson here? You never trust luck. You trust Git. Ingestion logic sits in version control, deployments run through CI/CD with schema checks, and rollback is built into the process. That way, when failure hits—and it always does—you’re not scrambling. You’re reverting. Big difference. Bronze suddenly feels less like Russian roulette and more like a controlled process that won’t keep you awake at night. Fixing Bronze is possible with discipline, but don’t take a victory lap yet. Because the next layer looks polished, structured, and safe—but it hides even nastier problems that most teams don’t catch until the damage is already done.

    Silver Layer: Where Governance Dies Quietly

    At first glance, Silver looks like the clean part of the Warehouse. Neat columns, standard formats, rows aligned like showroom furniture. But this is also where governance takes the biggest hit—because the mess doesn’t scream anymore, it tiptoes in wearing a suit and tie. Bronze failures explode loudly. Silver quietly bakes bad logic into “business-ready” tables that everyone trusts without question.

    The purpose of Silver, in theory, is solid. Normalize data types, apply basic rules, smooth out the chaos. Turn those fifty date formats into one, convert text IDs into integers, iron out duplicates so the sales team doesn’t have a meltdown. Simple enough, right? Except when rules get applied inconsistently. One developer formats phone numbers differently from another, someone abbreviates state codes while someone else writes them out, and suddenly you’ve got competing definitions in a layer that’s supposed to define truth. It looks organized, but the cracks are already there.

    The worst slip? Treating Silver logic as throwaway scripts. Dropping fixes straight into a notebook without source control. Making changes directly against production tables because “we just need this for tomorrow’s demo.” I’ve seen that happen. It solves the urgent problem but leaves test and production permanently out of sync. Later, your CI/CD jobs fail, your reports disagree, and nobody remembers which emergency tweak caused the divergence. That’s not cleanup—that’s sabotage by convenience.

    Here’s where we cut the cycle. Silver needs discipline, and there’s a blunt three‑step plan that works every time:

    * Step one: put every transformation into source control with pull‑request reviews. No exceptions. That’s filters, joins, derived columns—everything. If it changes data, it goes in Git.
    * Step two: build automated data‑quality checks into your CI pipeline—null checks, uniqueness checks, type enforcement (a minimal example follows below). Even something as basic as a schema‑compatibility check that fails if column names or types don’t match between dev and test. Make your CI run those automatically, so nobody deploys silent drift.
    * Step three: promote only through CI/CD with approvals, never by direct edits. That’s how dev, test, and prod stay aligned instead of living three separate realities you can’t reconcile later.
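    And here’s what those step-two checks can look like. Again a sketch with invented table and column names — point it at an extract of your Silver tables and let CI fail the build when a rule breaks:

```python
import pandas as pd

def quality_gate(df: pd.DataFrame) -> list[str]:
    """Basic Silver-layer checks: nulls, uniqueness, type enforcement."""
    failures = []

    # Null check: business keys must always be present.
    null_keys = df["customer_id"].isna().sum()
    if null_keys:
        failures.append(f"{null_keys} rows with null customer_id")

    # Uniqueness check: one row per customer, no silent duplicates.
    dupes = df["customer_id"].duplicated().sum()
    if dupes:
        failures.append(f"{dupes} duplicate customer_id values")

    # Type enforcement: IDs that arrive as text mean upstream drift.
    if not pd.api.types.is_integer_dtype(df["customer_id"]):
        failures.append(f"customer_id is {df['customer_id'].dtype}, expected integer")

    return failures

# In CI: load a dev/test extract, run the gate, and fail the build on drift.
sample = pd.DataFrame({"customer_id": [1, 2, 2, None]})
for failure in quality_gate(sample):
    print("QUALITY GATE:", failure)
```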
    Automated checks and PRs prevent “polite” Silver corruption from becoming executive‑level panic. Think about it—errors masked as clean column names are the ones that trigger frantic late‑night calls because reports look wrong, even though the pipelines say green. With governance in place, those failures get stopped at the pull request instead of at the boardroom. The professional payoff? You stop wasting nights chasing down half‑remembered one‑off fixes. You stop re‑creating six months of ad‑hoc transformations just to figure out why customer counts don’t match finance totals. Instead, your rules are peer‑reviewed, tested, and carried consistently through environments. What happens in dev is what happens in prod. That’s the standard.

    Bottom line: if Bronze chaos is messy but obvious, Silver chaos is clean but invisible. And invisible failures are worse because leadership doesn’t care that your layer “looked” tidy—they care that the numbers don’t match. Guardrails in Silver keep authority in your data, not just surface polish in your tables. Now we’ve talked about the quiet failures. But sometimes governance issues don’t wait until the monthly audit—they land in your lap in the middle of the night. And that’s when the next layer starts to hurt the most.

    Gold Layer: Analytics at 3 AM

    Picture this: you’re asleep, your phone buzzes, and suddenly finance dashboards have gone dark. Senior leadership expects numbers in a few hours, and nobody wants to hear “sorry, it broke.” Gold is either reliable, or it destroys your credibility before breakfast. This is the layer everyone actually sees. Dashboards, KPIs, reports that executives live by—Gold is the plate you’re serving, not the prep kitchen in back. Mess up here, and it doesn’t matter how meticulous Bronze or Silver were, because the customer-facing dish is inedible.

    That’s why shortcuts in Gold cost the most. Without CI/CD discipline, one casual schema tweak upstream can wreck trust instantly. Maybe someone added a column in Silver without testing. Maybe a mapping fix changed values in ways nobody noticed. Suddenly the quarter‑end metrics don’t reconcile, and you’re scrambling. Unlike Bronze, you can’t shrug and reload later—leaders already act on the data. You need guarantees that Gold only reflects changes that were tested and approved. Too many teams instead resort to panic SQL patch jobs. Manual updates to production tables at 4 a.m., hoping the dashboard lights back up in time for the CFO. Sure, the query might “fix” today, but prod drifts into its own reality while dev and test stay behind. No documentation, no rollback, and good luck remembering what changed when the issue resurfaces. If you want sanity, Gold needs mirrored environments. Dev, test, a

    18 min
  5. Your Power Automate Approval Flow Isn’t Audit-Proof


    Your Power Automate Approval Flow Isn’t Audit-Proof

    Here’s the catch Microsoft doesn’t highlight: Power Automate’s run history is time‑limited by default. Retention depends on your plan and license, and it’s not forever. Once it rolls off, it’s gone—like it never ran. Great for Microsoft’s servers. Terrible for your audit trail. Designing without logging is like deleting your CCTV before the cops arrive. You might think you’re fine until someone actually needs the footage. Today we’ll show you how to log approvals permanently, restart flows from a stage, use dynamic approvers, and build sane escalations and reminders. Subscribe to the newsletter at m365 dot show if you want blunt fixes, not marketing decks. Because here’s the question you need to face—think your workflow trail is permanent? Spoiler: it disappears faster than free donuts in the break room.

    Why Your Flow History Vanishes

    So let’s get into why your flow history quietly disappears in the first place. You hit save on a flow, you check the run history tab, and you think, “Perfect. There’s my record. Problem solved.” Except that little log isn’t built to last. It’s more like a Post-it note on the office fridge—looks useful for a while, but it eventually drops into the recycling bin. Here’s the truth: Power Automate isn’t giving you a permanent archive. It’s giving you temporary storage designed with Microsoft’s servers in mind—not your compliance officer. How long your runs stay visible varies by plan and license. If you want the specifics, check your tenant settings or Microsoft’s own documentation. I’ll link the official retention guidance in the notes—verify your setup, because what you see depends entirely on your license.

    Most IT teams assume “cloud equals forever.” Microsoft assumes “forever equals a storage nightmare.” So they quietly clean house. That’s the built-in expectation: logs expire, data rolls off, and your history evaporates. They’re doing housekeeping. You’re the one left without receipts when auditors come calling. Let’s bring it into real life. Imagine HR asks for proof of a promotion approval from last year. Fourteen months ago, your director clicked Approve, everyone celebrated, and the process moved on. Fast forward, compliance wants records. You open Power Automate, dig into runs... and there’s nothing left. That tidy approval trail you trusted has already been vacuumed away.

    That’s not Microsoft failing to tell you. It’s right there in the docs—you just don’t see it unless you squint through the licensing fine print. They’re clear they’re not your compliance archive. That’s your job. And if you walk into an audit with holes in your data, the meeting isn’t going to be pleasant. Now picture this: it’s like Netflix wiping your watch history every Monday. One week you know exactly where you paused mid-season. Next week? Gone. The system pretends you never binged a single show. That’s how absurd it looks when an auditor asks for approval records and your run history tab is empty.

    The kicker is the consequences. Missing records isn’t just a mild inconvenience. Failing to show documentation can trigger compliance reviews and consequences that vary by regulation—and if you’re in a regulated industry, that can get expensive very quickly. And even if regulators aren’t involved, leadership will notice. You were trusted to automate approvals. If you can’t prove past approvals existed, congratulations—you’re now the weak link in the chain. And no, screenshots don’t save you. Screenshots are like photos of your dinner—you can show something happened, but you can’t prove it wasn’t staged. Auditors want structured data: dates, times, names, decisions. All the detail that screenshots can’t provide. And that doesn’t live in the temporary run history.

    Here’s a quick reality check you can do right now. Pause this video, go into Power Automate, click “My flows,” open run history on one of your flows, and look for the oldest available run. That’s your retention window. If it’s missing approvals you thought were permanent, you’ve already felt the problem firsthand. Want to know the one-click way to confirm exactly what your tenant holds? Stick around—I’ll show you in the checklist. So where does this leave you? Simple: if you don’t build logging into your workflows, you don’t have approval history at all. Pretending defaults are enough is like trusting a teenager when they say they cleaned their room—by Monday the mess will resurface, and nothing important will have survived. The key takeaway: Power Automate run history is a debugging aid, not a record keeper. It’s disposable by design, not permanent or audit-ready. If you want usable records, you have to create your own structured logs outside that temporary buffer.
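    Inside a flow, that usually means a “Create item” or “Add a new row” action writing to a SharePoint list or Dataverse table the moment a decision lands. The shape of the record is the point, so here’s a sketch of it in plain Python — the field names and the JSON-lines file are illustrative stand-ins for whatever durable store you pick:

```python
import json
from datetime import datetime, timezone

def approval_log_entry(request_id: str, stage: str, approver: str,
                       decision: str, comments: str) -> str:
    """Build the structured record auditors actually ask for:
    dates, times, names, decisions - not screenshots."""
    entry = {
        "request_id": request_id,
        "stage": stage,
        "approver": approver,
        "decision": decision,           # e.g. Approve / Reject
        "comments": comments,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)

# The flow equivalent writes these same fields the moment the approval
# completes - long before run history ever expires.
with open("approval_audit.jsonl", "a", encoding="utf-8") as log:
    log.write(approval_log_entry("REQ-1042", "Finance review",
                                 "jane@contoso.com", "Approve",
                                 "Within budget") + "\n")
```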
    And this isn’t just about saving history. Weak logging means fragile workflows, and fragile workflows collapse the first time you push them beyond the basics. Which leads to the next problem—designing approval flows that don’t fall apart under real-world pressure.

    Designing Approval Flows That Don’t Collapse

    The harsh truth is most approval flows implode because they’re built too thin. A single-stage design feels quick and clever, but it’s fragile. It looks fine on paper until people actually try to run real processes through it. One person signs off? Easy. But the second another department asks for an alternate rule or an extra checkpoint, that quick win starts cracking. You’re stuck patching half-baked fixes on top of hard-coded steps. That’s why admins end up in trouble three months later. You built the demo, showed your manager, looked like a hero. Then Procurement comes knocking with five-stage rules, HR wants a single approver, and Legal demands two plus compliance. Suddenly, your flow is Frankenstein’s monster, stitched together with duct tape and conditions nobody understands. Sure, it runs—but it’s just waiting to crash.

    The worst habit? Hard-coding emails. That’s the death sentence of reusability. You wire the flow directly to Bob in Finance. Bob leaves. Now every approval for that process either fails or vanishes into a black hole. Critical approvals die in silence—all because one person’s address was baked into the logic. Store role membership, not emails. Use object IDs when possible so changes don’t break flows.

    Here’s the better play. Instead of hard-coding addresses, fetch approvers dynamically. Use a role lookup—an Azure AD group, a SharePoint list, or Dataverse (if you use it). Each option gives you a central place to swap out owners when people leave. That’s sustainable. For example, you define “Finance Approvers” in AD. If Bob quits, remove him from the group, add his replacement, and the flow never skips a beat. Keep it blunt: if Joe’s email is hard-coded and Joe quits, approvals either fail or disappear into nowhere. Avoid that nonsense. Always map a role to a person record, never tie business logic to a single mailbox.
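    Under the hood, that “Finance Approvers” lookup is just a group-membership query. In a flow you’d reach for the Azure AD or Office 365 Groups connector; the equivalent Microsoft Graph call looks like this — the group ID and token are placeholders:

```python
import requests

GROUP_ID = "<finance-approvers-group-id>"  # maintained in Entra ID, not in the flow
TOKEN = "<access-token>"                   # placeholder, e.g. acquired via MSAL

# Resolve the current members of the approver group at run time, so
# swapping Bob for his replacement never touches the workflow itself.
resp = requests.get(
    f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/members",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"$select": "displayName,mail"},
    timeout=30,
)
resp.raise_for_status()

approvers = [m["mail"] for m in resp.json()["value"] if m.get("mail")]
print("Routing approval to:", approvers)
```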
    The rigidity problem doesn’t stop with emails. I’ve seen admins chain one approval after another, custom to each department, no dynamic routing, no fallback logic, no state tracking. Looks great in a demo because everyone’s at their desks, clicking on time. But the first vacation, sick day, or missed reminder? Dead in the water. What your flows really need is state awareness. That means a persistent stage field—some durable store that records what step the request is on. Design your workflow to read and write that stage value. Then if the flow chokes, you can restart it at the right stage without guessing or resending the entire process. It’s a reset button baked into the design, and it saves you from firefighting messy re-runs.

    Now, let’s get practical. Finance might demand dual approvals for purchases. HR may only need one quick signoff. IT might worry people will stall, so they demand escalation logic. If you try cramming all three departments into one hard-coded flow, you either water down their rules or spam the wrong people. Modular, stage-driven flows solve that by flexing to each need without creating chaos. Pro tip: always pilot your multi-stage design with one department first. Don’t roll it out tenant-wide on day one. You’ll catch the edge cases quicker, and you won’t inherit a mountain of technical debt before you know the weak spots.

    The bottom line is simple. Fragile approval flows collapse because they assume nothing changes—same staff, same process, same perfect conditions. But in reality, people leave, rules shift, and exceptions pop up constantly. Make your flows modular, stage-aware, and role-driven. Store roles instead of emails. Track state so you can recover gracefully. Every shortcut you skip now turns into rework later. But even with clean design, there’s another killer. Approvals stall when people sit on them, and admins try to fix it with wave after wave of reminder emails. That’s how you create a whole new problem waiting to explode.

    Escalations and Reminders Without the Spam Storm

    Nothing says “bad flow design” louder than a pile of 45 unread reminder emails at the top of someone’s inbox on Monday morning. If your automation looks less like a workflow and more like a spam cannon, you didn’t build a process—you built workplace harassment with a subject line. And let’s be clear: reminders aren’t the enemy. Escalations aren’t the problem. The issue is when “nudging” gets confused with “relentless inbox hammering.” What you actually want is persistence, not digital water torture. Reminders exist because people get distracted, go on leave, or assume someone else will handle a task. Escalations exist because no one wants approvals left in limbo until three weeks later when someone finally complains. So yes, you need them. But you need them smar

    20 min
  6. Why Leadership Thinks Copilot Is Useless (And Where the Numbers Back Them Up)

    Why Leadership Thinks Copilot Is Useless (And Where the Numbers Back Them Up)

    Here’s something spicy: most organizations think Copilot is underused… not because users hate it, but because no one’s checking the right dashboard. Subscribe at m365.show if you want the survival guide, not the PPT bingo. We’ll check which Copilot telemetry matters, where users actually click, and how prompts reveal who’s using it for real work. Often the signals you need live in a different pane—let’s show you where to look in your tenant. This isn’t a pep rally; it’s a reality check with the data points that count. And once you’ve seen that, we need to talk about the reports leadership is already waving in their hands.

    The CFO’s Report Doesn’t Lie

    Ever had that moment when the CFO barges in, waving a glossy admin report like it’s courtroom evidence, and asks why the company shelled out for a Copilot license nobody seems to use? Your stomach drops, because you’re not just defending an IT budget line—you’re defending your job. And here’s the kicker: the chart they’re holding isn’t wrong, but it’s not telling the story they think it is. The leadership bind is simple: licenses cost real money, so execs want hard proof that Copilot isn’t just another line item in the finance system. Microsoft does provide reports, but what those charts measure isn’t what most people assume. Log into the admin center and you’ll see nice graphs of sign-ins and “active users.” Sounds impressive until you realize it’s basically counting how many times someone opened Word, not whether they actually touched Copilot once they were in there.

    This is where the data trips people up. That report showing 2,000 Word sign-ins? Leadership reads that as 2,000 instances of Copilot lighting up productivity. Reality: it just means 2,000 people still have Word pinned to their taskbar and clicked it once. No one tells them that Copilot activity is captured in separate telemetry. So while the chart says “adoption,” in truth Copilot might be sitting unused like an expensive treadmill doubling as a coat rack.

    Now, to be fair, Entra ID does exactly what it promises. It focuses on identity and sign-in telemetry—it tells you who walked through the door and which app they technically opened. What it does not do, by default, is surface the action-level data that proves Copilot adoption. Put simply: it’ll show you that John launched Word, but it won’t show you that John asked Copilot to crank out a three-page summary to save himself an afternoon. Always check your tenant’s docs or Insights pane for what’s actually available, because the defaults don’t go that far. Here’s one clean metaphor you can safely use with leadership: those reports are counting swipes of a gym membership card. They don’t show whether anyone touched the treadmill, lifted weights, or just grabbed a chocolate bar from the vending machine. That one line paints the picture without drowning in analogies.

    So what do you say when finance is breathing down your neck with pretty graphs? Here’s the leadership-ready soundbite: “Sign-in counts show who opened the apps. Copilot adoption means showing who actually used prompts and actions. I can pull behavioral reports for that—if our tenant has telemetry enabled.” That’s a safe, honest line that doesn’t oversell anything but tells executives you can provide a real answer once you’ve got the right data enabled. And this is where your action item comes in. Don’t waste time trying to prove adoption from identity numbers. Instead, verify whether your tenant has Copilot Insights or usage reports that surface prompts and actions.
    If it does, prep a side-by-side demo for the CFO or CIO: slide one shows a bland graph of “sign-ins,” slide two shows actual prompts being used in Outlook, Word, or Teams. The contrast makes your point in about 30 seconds flat. Because at the end of the day, raw sign-in and license charts will always frame the wrong narrative. They’re a door count, not a usage log. What leadership really needs to see are the actions that prove value—the moments where Copilot shaved hours off real workflows, not just opened an application window. And that sets up the bigger story. Because Microsoft doesn’t give you just one way to see Copilot activity—they give you two different dashboards. One is the guard at the door. The other is the camera inside the building. And only one of them will tell you whether Copilot is actually changing how people work.

    Entra ID vs. Insights: The Tale of Two Dashboards

    Microsoft gives you two main dashboards tied to Copilot adoption: Entra ID and Insights. At first glance they both look polished enough to screenshot into a slide deck, but they don’t measure the same thing. If you confuse them, you’ll end up telling execs a story that sounds great but falls apart the minute someone asks, “Okay, but did anyone actually use Copilot to get work done?”

    Here’s the split. Entra ID primarily records identity and sign‑in events. It’s useful for security and access checks—who got in, from where, on what device, and at what time. That’s its lane. It doesn’t go deep on what happens after the app opens. Think of it as the entry log. Helpful? Yes. Proof of adoption? Not really. Insights, on the other hand, is where you start seeing behavioral patterns. Depending on what your tenant exposes, it can surface things like prompt counts, app‑level activity, and even departmental usage. That’s the level you need if the conversation is about ROI.

    Here’s the trap admins fall into. You’re on the spot in a meeting, leadership asks, “How many people used Copilot this week?” Under pressure, you run a quick Entra ID report and point to a chart showing thousands of Word users. But what you’ve proved is that people still open Word every week. Copilot activity might be a fraction of that, but Entra won’t show it. It’s the classic “stadium is full” headline when the team never left the locker room. A better way to frame it is like this: Entra ID tells you John opened Word at 9 a.m. on Monday. Insights, if enabled, might show John used Copilot to generate a draft report in those same three minutes. Which one of those slides convinces a CFO that the product is paying for itself? Exactly.

    So what should you actually do here?

    * Step 1: in your admin center, confirm whether Copilot/Insights telemetry is visible and that you have the right permissions to view it. Don’t assume it’s just on. Depending on your role assignment, you might not be able to see adoption data at all.
    * Step 2: if telemetry is available, check what categories are exposed. Non‑definitively, you’re looking for at least three dimensions—activity by app, prompt/use counts, and departmental breakdowns.
    * Step 3: keep privacy in mind. Avoid dropping user‑level detail into leadership slides. Aggregate it by team, anonymize where needed, and stay compliant. Finance might want a name‑and‑shame list, but resist that urge, or you’ll create more HR tickets than Copilot tasks.
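    To make that privacy point concrete, here’s a sketch of aggregating usage before it ever reaches a slide. It assumes you’ve exported per-user activity to CSV — the file name and columns (department, app, prompt_count) are invented for the example, so map them to whatever your tenant actually exports:

```python
import pandas as pd

# Hypothetical export: one row per user/app with a prompt count column.
usage = pd.read_csv("copilot_usage_export.csv")

# Aggregate by department and app - no names, no shame lists.
by_dept = (
    usage.groupby(["department", "app"], as_index=False)["prompt_count"]
         .sum()
         .sort_values("prompt_count", ascending=False)
)

# Rank the top apps overall to decide where training money goes first.
top_apps = (
    usage.groupby("app")["prompt_count"].sum()
         .sort_values(ascending=False)
         .head(3)
)

print(by_dept.head(10))
print("Top 3 apps by prompt count:\n", top_apps)
```

    The same groupby hands you the top-three app ranking the next section asks for, without a single employee name on screen.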
This is the part that kills a lot of adoption projects. You can’t design smart training or change campaigns if the only metric you have is “they opened Word.” Insights, when surfaced, shows where work is really happening—or not happening. Maybe Marketing fires off prompts all day, while Legal never touches the thing. That department‑level picture is the difference between targeted adoption efforts and yet another generic, hour‑long training no one remembers. Bottom line: Entra ID is valuable, but it’s a door count. It makes sense for identity, security, and blocking attackers. It doesn’t measure whether Copilot replaced busywork. Insights, if wired into your tenant, is the view you need to argue for ROI, improve adoption, and spot which groups need support. Always check role permissions first, respect privacy rules, and don’t oversell what the data shows. Now that you know which pane matters, the next step is to examine where inside the apps Copilot usage actually shows up—and that picture looks very different from the marketing demos you’ve been shown. Where Copilot Clicks Actually Happen When you strip away the dashboards, the real story sits in the apps themselves—where people actually click Copilot. And here’s the surprise: those clicks aren’t happening in the flashy scenarios from Microsoft’s keynote clips. They’re landing in the everyday tools people touch constantly, and that pattern matters more than any marketing reel. A common pattern we see in many tenants is heavier Copilot usage in communication apps like Outlook, Teams, and Word, compared with apps like PowerPoint or Excel. It makes sense if you think about it—email triage happens every morning, chat replies pop up all day, and quick edits in Word are a constant grind. Slide decks or complex models? Those only hit the calendar once in a while. But don’t take my word for it. Validate it yourself against your tenant’s Insights reports. And this brings us to a key point: Copilot adoption grows fastest where the tool reduces friction in small, repetitive tasks. Think cleaning up an email, fixing awkward grammar, or smoothing a Teams message before you hit send. Those are quick, safe interactions that don’t require perfect prompts or lengthy context dumps. Users try it once for a throwaway email, it works, and then they start leaning on it every day. That’s “sticky adoption”—not glamorous, but reliable. Here’s your actionable step: pull app-level activity from your tenant and rank the top three apps by prompt count. Start your training and success stories in the top two. Don’t waste budget blasting universal sessions when you already have evidence of where your people click. Adoption always follows need, not wishful thinking. Let’s hold up PowerPoint as an example. In a lot of tenants, its Copilot numbers trail well behind Outlook and Teams—decks simply don’t come up often enough to build the daily habit that drives adoption.
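That ranking step is a five-minute script once you have an export. A sketch, assuming a CSV with hypothetical App and PromptCount columns—rename them to match your tenant’s actual report:

```python
# Sketch: rank apps by Copilot prompt count from an exported report.
# "App" and "PromptCount" are hypothetical column names -- rename them to
# whatever your tenant's CSV actually uses.
import csv
from collections import Counter

prompts_by_app = Counter()
with open("copilot_usage.csv", newline="", encoding="utf-8") as handle:
    for row in csv.DictReader(handle):
        prompts_by_app[row.get("App", "Unknown")] += int(row.get("PromptCount") or 0)

print("Top 3 apps by prompt count -- aim training at the top two:")
for app, count in prompts_by_app.most_common(3):
    print(f"  {app}: {count}")
```

Two columns and a Counter—that’s the whole business case for where to spend your training budget.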

    18 min
  7. The Power BI Gateway Horror Story No One Warned You About

    -3 J

    The Power BI Gateway Horror Story No One Warned You About

    You know what’s horrifying? A gateway that works beautifully in your test tenant but collapses in production because one firewall rule was missed. That nightmare cost me a full weekend and two gallons of coffee. In this episode, I’m breaking down the real communication architecture of gateways and showing you how to actually bulletproof them. By the end, you’ll have a three‑point checklist and one architecture change that can save you from the caffeine‑fueled disaster I lived through. Subscribe at m365.show — we’ll even send you the troubleshooting checklist so your next rollout doesn’t implode just because the setup “looked simple.” The Setup Looked Simple… Until It Wasn’t So here’s where things went sideways—the setup looked simple… until it wasn’t. On paper, installing a Power BI gateway feels like the sort of thing you could kick off before your first coffee and finish before lunch. Microsoft’s wizard makes it look like a “next, next, finish” job. In reality, it’s more like trying to defuse a bomb with instructions half-written in Klingon. The tool looks friendly, but in practice you’re handling something that can knock reporting offline for an entire company if you even sneeze on it wrong. That’s where this nightmare started. The plan itself sounded solid. One server dedicated to the gateway. Hook it up to our test tenant. Turn on a few connections. Run some validations. No heroics involved. In our case, the portal tests all reported back with green checks. Success messages popped up. Dashboards pulled data like nothing could go wrong. And for a very dangerous few hours, everything looked textbook-perfect. It gave us a false sense of security—the kind that makes you mutter, “Why does everyone complain about gateways? This is painless.” What changed in production? It’s not what you think—and that mystery cost us an entire weekend. The moment we switched over from test to production, the cracks formed fast. Dashboards that had been refreshing all morning suddenly threw up error banners. Critical reports—the kind you know executives open before their first meeting—failed right in front of them, with big red warnings instead of numbers. The emails started flooding in. First analysts, then managers, and by the time leadership was calling, it was obvious that the “easy” setup had betrayed us. The worst part? The documentation swore we had covered everything. Supported OS version? Check. Server patches? Done. Firewall rules as listed? In there twice. On paper it was compliant. In practice, nothing could stay connected for more than a few minutes. The whole thing felt like building an IKEA bookshelf according to the manual, only to watch it collapse the second you put weight on it. And the logs? Don’t get me started. Power BI’s logs are great if you like reading vague, fortune-cookie lines about “connection failures.” They tell you something is wrong, but not what, not where, and definitely not how to fix it. Every breadcrumb pointed toward the network stack. Naturally, we assumed a firewall problem. That made sense—gateways are chatty, they reach out in weird patterns, and one missing hole in the wall can choke them. So we did the admin thing: line-by-line firewall review. We crawled through every policy set, every rule. Nothing obvious stuck out. But the longer we stared at the logs, the more hopeless it felt. They’re the IT equivalent of being told “the universe is uncertain.” True, maybe. Helpful? Absolutely not. This is where self-doubt sets in. Did we botch a server config? 
Did Azure silently reject us because of some invisible service dependency tucked deep in Redmond’s documentation vault? And really—why do test tenants never act like production? How many of you have trusted a green checkmark in test, only to roll into production and feel the floor drop out from under you? Eventually, the awful truth sank in. Passing a connection test in the portal didn’t mean much. It meant only that the specific handshake *at that moment* worked. It wasn’t evidence the gateway was actually built for the real-world communication pattern. And that was the deal breaker: our production outage wasn’t caused by one tiny mistake. It collapsed because we hadn’t fully understood how the gateway talks across networks to begin with. That lesson hurts. What looked like success was a mirage. Test congratulated us. Production punched us in the face. It was never about one missed checkbox—it was about how traffic really flows once packets start leaving the server. And that’s the crucial point for anyone watching: the trap wasn’t the server, wasn’t the patch level, wasn’t even a bad line in a config file. It was the design. And this is where the story turns toward the network layer. Because when dashboards start choking, and the logs tell you nothing useful, your eyes naturally drift back to those firewall rules you thought were airtight. That’s when things get interesting. The Firewall Rule Nobody Talks About Everyone assumed the firewall was wrapped up and good to go. Turns out, “everyone” was wrong. The documentation gave us a starting point—some common ports, some IP ranges. Looks neat on the page. But in our run, that checklist wasn’t enough. In test, the basic rules made everything look fine. Open the standard ports, whitelist some addresses, and it all just hums along. But the moment we pushed the same setup into production, it fell apart. The real surprise? The gateway isn’t sitting around hoping clients connect in—it reaches outward. And in our deployment, we saw it trying to make dynamic outbound connections to Azure services. That’s when the logs started stacking up with repeated “Service Bus” errors. Now on paper, nothing should have failed. In practice, the corporate firewall wasn’t built to tolerate those surprise outbound calls. It was stricter than the test environment, and suddenly that gateway traffic went nowhere. That’s why the test tenant was smiling and production was crying. For us, the logs became Groundhog Day. Same error over and over, pointing us back to Azure. It wasn’t that we misconfigured the inbound rules—it was that outbound was clamped down so tightly, the server could never sustain its calls. Test had relaxed outbound filters, production didn’t. That mismatch was the hidden trap. Think about it like this: the gateway had its ID badge at the border, but when customs dug into its luggage, they tossed it right back. Outbound filtering blocked enough of its communication that the whole service stumbled. And here’s where things get sneaky. Admins tend to obsess over charted ports and listed IP ranges. We tick off boxes and move on. But outbound filtering doesn’t care about your charts. It just drops connections without saying much—and the logs won’t bail you out with a clean explanation. That’s where FQDN-based whitelisting helped us. Instead of chasing IP addresses that change faster than Microsoft product names, we whitelisted actual service names. In practice, that reduced the constant cycle of updates.
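To make that concrete, here’s the kind of pre-flight probe we wish we’d run on the gateway server before trusting the portal’s green check. The hosts and ports reflect commonly documented gateway requirements (443 plus the Service Bus relay ports), but confirm the current list against Microsoft’s docs—and note the relay namespace below is a placeholder you’d pull from your own logs:

```python
# Sketch: outbound pre-flight check to run on the gateway server itself.
# Hosts/ports reflect commonly documented gateway requirements (443 plus the
# Service Bus relay ports) -- confirm the current list in Microsoft's docs.
# The relay namespace is a placeholder; grab the real one from your logs.
import socket

TARGETS = [
    ("login.microsoftonline.com", 443),
    ("api.powerbi.com", 443),
    ("<your-namespace>.servicebus.windows.net", 443),   # placeholder host
    ("<your-namespace>.servicebus.windows.net", 5671),  # AMQP over TLS
    ("<your-namespace>.servicebus.windows.net", 9352),  # relay TCP range
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK      {host}:{port}")
    except OSError as err:
        print(f"BLOCKED {host}:{port} -> {err}")
```

If the 443 lines pass while the Service Bus lines come back blocked, you’re staring at the same outbound clamp that got us.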
We didn’t just stumble into that fix. It took some painful diagnostics first. Here’s what we did: First, we checked firewall logs to see if the drops were inbound or outbound—it became clear fast it was outbound. Then we temporarily opened outbound traffic in a controlled maintenance window. Sure enough, reports started flowing. That ruled out app bugs and shoved the spotlight back on the firewall. Finally, we ran packet captures and traced the destination names. That’s how we confirmed the missing piece: the outbound filters were killing us. So after a long night and a lot of packet tracing, we shifted from static rules to adding the correct FQDN entries. Once we did that, the error messages stopped cold. Dashboards refreshed, users backed off, and everyone assumed it was magic. In reality it was a firewall nuance we should’ve seen coming. Bottom line: in our case, the fix wasn’t rewriting configs or reinstalling the gateway—it was loosening outbound filtering in a controlled way, then adding FQDN entries so the service could talk like it was supposed to. The moment we adjusted that, the gateway woke back up. And as nasty as that was, it was only one piece of the puzzle. Because even when the firewall is out of the way, the next layer waiting to trip you up is permissions—and that’s where the real headaches began. When Service Accounts Become Saboteurs You’d think handing the Power BI gateway a domain service account with “enough” permissions would be the end of the drama. Spoiler: it rarely is. What looks like a tidy checkbox exercise in test turns into a slow-burn train wreck in production. And the best part? The logs don’t wave a big “permissions” banner. They toss out vague lines like “not authorized,” which might as well be horoscopes for all the guidance they give. Most of us start the same way. Create a standard domain account, park it in the right OU, let it run the On-Premises Data Gateway service. Feels nice and clean. In test, it usually works fine—reports refresh, dashboards update, the health checks are all green. But move the exact setup to production? Suddenly half your datasets run smooth, the other half throw random errors depending on who fires off the refresh. It doesn’t fail consistently, which makes you feel like production is haunted. In our deployments the service account actually needed consistent credential mappings across every backend in the mix—SQL, Oracle, you name it. SQL would accept integrated authentication, Oracle wanted explicit credentials, and if either side wasn’t mirrored correctly, the whole thing sputtered. The account looked healthy locally, but once reports touched multiple data sources, random “access denied” bombs dropped.
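If you’re stuck in the same Groundhog Day, crude log bucketing beats eyeballing. A sketch, assuming an exported gateway error log as plain text; the filename and patterns are assumptions based on the strings we kept seeing, so tune both to what your gateway actually writes:

```python
# Sketch: crude triage of an exported gateway log. The filename and patterns
# are assumptions based on the strings we kept seeing -- tune both to match
# what your own gateway actually writes.
import re
from collections import Counter
from pathlib import Path

PATTERNS = {
    "outbound/service-bus": re.compile(r"service\s*bus", re.I),
    "auth/credentials": re.compile(r"not authorized|access denied", re.I),
    "generic-connection": re.compile(r"connection fail", re.I),
}

buckets = Counter()
for line in Path("gateway_errors.log").read_text(encoding="utf-8").splitlines():
    for label, pattern in PATTERNS.items():
        if pattern.search(line):
            buckets[label] += 1

for label, count in buckets.most_common():
    print(f"{label}: {count}")
# A pile of auth/credentials hits across mixed data sources points at the
# service-account mapping, not the network.
```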

    19 min
  8. You're Probably Using Teams Channels Wrong

    -3 J

    You're Probably Using Teams Channels Wrong

Let’s be real—Teams channels are just three kinds of roommates. Standard channels are the open-door living room. Private channels are the locked bedroom. Shared channels? That’s when your roommate’s cousin “stays for a few weeks” and suddenly your fridge looks like a crime scene. Here’s the value: by the end, you’ll know exactly which channel to pick for marketing, dev, and external vendors—without accidentally leaking secrets. We’ll get into the actual mechanics later, not just the surface-level labels. Quick pause—subscribe to the M365.Show newsletter at m365 dot show. Save yourself when the next Teams disaster hits. Because the real mess happens when you treat every channel the same—and that’s where we’re heading next. Why Picking the Wrong Channel Wrecks Your Project Ever watched a project slip because the wrong kind of Teams channel got used? Confidential files dropped in front of the wrong people, interns scrolling through data they should never see, followed by that embarrassing “please delete that file” email that nobody deletes. It happens because too many folks treat the three channel types like carbon copies. They’re not, and one bad choice can sink a project before it’s out of planning mode. Quick story. A company handling a product launch threw marketing and dev into the same Standard channel. Marketing uploaded the glossy, client-ready files. Dev uploaded raw test builds and bug reports. End result: marketing interns who suddenly had access to unfinished code, and developers casually browsing embargoed press kits. Nobody meant to leak—Microsoft didn’t “glitch.” The leak happened because the structure guaranteed it. Here’s what’s going on under the hood. A Standard channel is tied to the parent Team. In practice, that means the files there behave like shared storage across the entire Team membership. No prompts, no “are you sure” moments—everyone in the Team sees it. That broad inheritance is great for open collaboration but dangerous if you only want part of the group to see certain content. This is documented behavior, not a quirk: standard-channel files land in a folder on the parent Team’s SharePoint site, which every Team member can reach. Think of that open spread as leaving your garage wide open. Nothing feels wrong until the neighbors start “borrowing” tools that were supposed to stay with you. Teams works the same way: what goes in a Standard channel gets shared broadly, like it or not. That’s why accidental data leaks feel less like bad luck and more like math. And here’s the real pain: once the wrong files land in the wrong channel, you’re stuck with cleanup. That means governance questions, compliance headaches, and scrambling to rebuild trust with the business. Worse—auditors love catching mistakes that could have been avoided if the right channel was set from the start. Choosing incorrectly doesn’t just create an access problem; it sets the wrong perimeter for every permission, audit log, and policy downstream. The takeaway? The channel type is not just a UI label. It’s your project’s security gate. Pick Standard and expect everyone in the Team to have visibility. Pick Private to pull a smaller group aside. Pick Shared if you’re bringing in external partners and don’t want to hand them the whole house key. You make the call once, and you deal with the consequences for the entire lifecycle. Here’s your quick fix if you’re running projects: decide the channel type during kickoff.
Don’t leave it to “we’ll just create one later.” Lock down who can even create channels, so you don’t wake up six months in with a sprawl of random Standard channels leaking files everywhere. That single governance move saves you from a lot of firefighting. So yes—wrong channel equals wrong audience, and wrong audience equals risk. Pretty UI aside, that’s how Teams behaves. Which raises the next big question: what actually separates these three flavors of channels, beyond the fluffy “collaboration space” jargon you keep hearing? That’s where we’re heading. Standard, Private, and Shared: Cutting the Marketing Fluff Microsoft’s marketing team loves to slap the phrase “collaboration space” on every channel type. Technically true, but about as helpful as calling your garage, your bedroom, and your driveway “living areas.” Sure, you can all meet in any of them, but eventually you’re wondering why your neighbor is folding laundry on your lawn. The reality is, Standard, Private, and Shared channels behave very differently. Treating them as identical is how files leak, audits fail, and admins lose sleep. So let’s cut the fluff. Think of channels less as “spaces” and more as three different security models wearing the same UI. They all show you a chat window, a files tab, and some app tabs. But underneath, the way data is stored and who sees it changes. Get those differences wrong, and you’re not running a project—you’re running damage control. Here’s the clean breakdown. Standard channels: What it is: the default channel type inside any Team. Where files live: in a folder on the parent Team’s SharePoint site—a folder, not a brand-new site collection. Who sees it: everyone who’s a member of the parent Team, no exceptions. When to use it: broad conversations, project chatter, updates you’re fine with all members seeing. Think of it as the living room. Collaborative and open, but not where you’ll leave your passport. Private channels: What it is: a channel locked to a smaller group of people already in the parent Team. Where files live: in its own separate SharePoint site, distinct from the parent Team’s. Who sees it: only the subset you explicitly add. Everyone else in the Team doesn’t even see it exist. When to use it: content meant for an inner circle—finance numbers, HR plans, leadership discussions. Private channels are the locked bedroom. You pick who has the key, and nobody else wanders in by accident. Shared channels: What it is: a channel you can share across Teams—or even across organizations—without granting access to the entire parent Team. Where files live: in a dedicated SharePoint site of its own, scoped to just that channel. Who sees it: both internal members you select and external participants you invite. The catch is they see only that channel, not the Team around it. When to use it: vendor engagement, client collaboration, or anywhere you want external voices inside one conversation but without giving them the house key. Shared channels are the Airbnb suite. Guests can use the room, but they don’t wander through your closets. That’s the part marketing glides right over. These aren’t three shades of the same tool—they’re three very different guardrail models.
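One practical way to keep those guardrails honest is to audit what already exists. A minimal sketch against Microsoft Graph—the channels endpoint and its membershipType values (standard, private, shared) are documented behavior, while the token and team ID are placeholders you’d wire up through your own auth flow:

```python
# Sketch: audit the channel types that already exist in a Team via Microsoft
# Graph. The endpoint and the membershipType values are documented; the token
# and team ID are placeholders for your own auth flow.
import requests

TOKEN = "<graph-access-token>"  # placeholder
TEAM_ID = "<team-guid>"         # placeholder

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/teams/{TEAM_ID}/channels",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

for channel in resp.json().get("value", []):
    # membershipType tells you which security model each channel runs on.
    kind = channel.get("membershipType", "standard")
    print(f"{channel['displayName']:35} {kind}")
```

Run it per Team and you get an instant map of which security models are actually in play—before an auditor builds that map for you.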
Standard opens everything to all Team members. Private carves off its own smaller room. Shared creates a bridge to outside people without flooding your directory with guests. Notice the pattern: what it is, where the files land, who sees it, when to use it. Once you force yourself to check those four boxes, the decision gets a lot simpler. You’re no longer guessing at vague phrases like “collaboration space”—you’re matching the right container to the right problem. Of course, description is one thing. Picking the right channel in real-world projects is where the headaches start. Use Standard too often and interns skim company financials. Lean too hard on Private and you build silos where nobody sees the full picture. Go all-in on Shared and you risk governance drift if nobody tracks who’s invited. Okay—now for the selection rules and real-world scenarios. Picking the Right Channel Without Getting Burned Picking the right channel without getting burned starts with one truth: stop clicking “new channel” like you’re ordering from a vending machine. Teams isn’t chips and soda. The default choice isn’t always the right choice, and one sloppy click can end with the intern casually browsing financial forecasts or the vendor stumbling into your board deck. Channel selection is governance, not guesswork. So here’s the channel rulebook boiled down to three sentences. Standard = broad transparency. Use it when the whole Team needs eyes on the same content. Example: a cross-department kickoff where marketing, sales, and HR all need to see the high-level plan. Private = inner-circle with limited access. Only the people you select get in. Use it for things like feature design or financials—content that would only confuse or risk exposure if the wider Team saw it. Example: developers hashing out raw build notes their VP doesn’t need popping up over morning coffee. Shared = external collaboration without Team-wide membership. It creates a doorway for vendors or clients to step into the conversation without turning them loose across your entire tenant. Example: a contractor who only needs one project space but doesn’t need to rummage around in your org chart. That’s your quick decision grid. No coin flips, no overthinking. Standard when you want sunlight. Private when you need walls. Shared when you’ve got additional guests. Done. Now here’s the part too many orgs skip: building a process so this choice happens the same way every time. Don’t leave it up to random project leads. That’s how you end up with a “Cold War bunker” of Private channels nobody remembers creating or a sprawl of orphaned Shared links floating around with God-knows-who invited in. The fix is a playbook. Four steps. First, scope the audience—ask “who must actually see this?” Don’t write a novel, just list the real participants. Second, match it to the rule-of-thumb cheat sheet above: Standard for sunlight, Private for walls, Shared for guests.
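And if you want that decision grid to survive personnel changes, encode it. A toy sketch of the rulebook—illustrative logic only, not an official API:

```python
# Sketch: the rulebook as code, so kickoff decisions come out the same way
# every time. Illustrative logic only -- not an official API.
def choose_channel(whole_team_needs_it: bool, has_external_guests: bool) -> str:
    if has_external_guests:
        return "shared"    # a doorway for guests, not a house key
    if whole_team_needs_it:
        return "standard"  # sunlight: every Team member sees it
    return "private"       # walls: only the subset you explicitly add

assert choose_channel(True, False) == "standard"   # cross-department kickoff
assert choose_channel(False, False) == "private"   # dev inner circle
assert choose_channel(False, True) == "shared"     # vendor collaboration
```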

    17 min
