M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation

Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit m365.show.

  1. You're Probably Using Teams Channels Wrong

    6 hours ago

    You're Probably Using Teams Channels Wrong

    Let’s be real—Teams channels are just three kinds of roommates. Standard channels are the open-door living room. Private channels are the locked bedroom. Shared channels? That’s when your roommate’s cousin “stays for a few weeks” and suddenly your fridge looks like a crime scene. Here’s the value: by the end, you’ll know exactly which channel to pick for marketing, dev, and external vendors—without accidentally leaking secrets. We’ll get into the actual mechanics later, not just the surface-level labels. Quick pause—subscribe to the M365.Show newsletter at m365 dot show. Save yourself when the next Teams disaster hits. Because the real mess happens when you treat every channel the same—and that’s where we’re heading next. Why Picking the Wrong Channel Wrecks Your Project Ever watched a project slip because the wrong kind of Teams channel got used? Confidential files dropped in front of the wrong people, interns scrolling through data they should never see, followed by that embarrassing “please delete that file” email that nobody deletes. It happens because too many folks treat the three channel types like carbon copies. They’re not, and one bad choice can sink a project before it’s out of planning mode. Quick story. A company handling a product launch threw marketing and dev into the same Standard channel. Marketing uploaded the glossy, client-ready files. Dev uploaded raw test builds and bug reports. End result: marketing interns who suddenly had access to unfinished code, and developers casually browsing embargoed press kits. Nobody meant to leak—Microsoft didn’t “glitch.” The leak happened because the structure guaranteed it. Here’s what’s going on under the hood. A Standard channel is tied to the parent Team. In practice, that means the files there behave like shared storage across the entire Team membership. No prompts, no “are you sure” moments—everyone in the Team sees it. That broad inheritance is great for open collaboration but dangerous if you only want part of the group to see certain content. (Editor note: verify against Microsoft Docs—if confirmed, simplify to plain English and cite. If not confirmed, reframe as observed admin behavior.) Think of that open spread as leaving your garage wide open. Nothing feels wrong until the neighbors start “borrowing” tools that were supposed to stay with you. Teams works the same way: what goes in a Standard channel gets shared broadly, like it or not. That’s why accidental data leaks feel less like bad luck and more like math. And here’s the real pain: once the wrong files land in the wrong channel, you’re stuck with cleanup. That means governance questions, compliance headaches, and scrambling to rebuild trust with the business. Worse—auditors love catching mistakes that could have been avoided if the right channel was set from the start. Choosing incorrectly doesn’t just create an access problem; it sets the wrong perimeter for every permission, audit log, and policy downstream. The takeaway? The channel type is not just a UI label. It’s your project’s security gate. Pick Standard and expect everyone in the Team to have visibility. Pick Private to pull a smaller group aside. Pick Shared if you’re bringing in external partners and don’t want to hand them the whole house key. You make the call once, and you deal with the consequences for the entire lifecycle. Here’s your quick fix if you’re running projects: decide the channel type during kickoff. 
Don’t leave it to “we’ll just create one later.” Lock down who can even create channels, so you don’t wake up six months in with a sprawl of random standards leaking files everywhere. That single governance move saves you from a lot of firefighting. So yes—wrong channel equals wrong audience, and wrong audience equals risk. Pretty UI aside, that’s how Teams behaves. Which raises the next big question: what actually separates these three flavors of channels, beyond the fluffy “collaboration space” jargon you keep hearing? That’s where we’re heading. Standard, Private, and Shared: Cutting the Marketing Fluff Microsoft’s marketing team loves to slap the phrase “collaboration space” on every channel type. Technically true, but about as helpful as calling your garage, your bedroom, and your driveway “living areas.” Sure, you can all meet in any of them, but eventually you’re wondering why your neighbor is folding laundry on your lawn. The reality is, Standard, Private, and Shared channels behave very differently. Treating them as identical is how files leak, audits fail, and admins lose sleep. So let’s cut the fluff. Think of channels less as “spaces” and more as three different security models wearing the same UI. They all show you a chat window, a files tab, and some app tabs. But underneath, the way data is stored and who sees it changes. Get those differences wrong, and you’re not running a project—you’re running damage control. Here’s the clean breakdown. Standard channels: What it is: the default channel type inside any Team. Where files live: inside the parent Team’s SharePoint site (verify against Microsoft Docs). It adds a folder, not a brand-new collection. Who sees it: everyone who’s a member of the parent Team, no exceptions. When to use it: broad conversations, project chatter, updates you’re fine with all members seeing. Think of it as the living room. Collaborative and open, but not where you’ll leave your passport. Private channels: What it is: a channel locked to a smaller group of people already in the parent Team. Where files live: many tenants show that a Private channel creates a separate SharePoint site (verify exact behavior in your tenant before stating as fact). Who sees it: only the subset you explicitly add. Everyone else in the Team doesn’t even see it exist. When to use it: content meant for an inner circle—finance numbers, HR plans, leadership discussions. Private channels are the locked bedroom. You pick who has the key, and nobody else wanders in by accident. Shared channels: What it is: a channel you can share across Teams—or even across organizations—without granting access to the entire parent Team. Where files live: most documentation confirms Shared channels create a distinct storage space (verify exact mechanism in Microsoft Docs and tenant behavior). Who sees it: both internal members you select and external participants you invite. The catch is they see only that channel, not the Team around it. When to use it: vendor engagement, client collaboration, or anywhere you want external voices inside one conversation but without giving them the house key. Shared channels are the Airbnb suite. Guests can use the room, but they don’t wander through your closets. That’s the part marketing glide right over. These aren’t three shades of the same tool—they’re three very different guardrail models. Standard opens everything to all Team members. Private carves off its own smaller room. 
Shared creates a bridge to outside people without flooding your directory with guests. Notice the pattern: what it is, where the files land, who sees it, when to use it. Once you force yourself to check those four boxes, the decision gets a lot simpler. You’re no longer guessing at vague phrases like “collaboration space”—you’re matching the right container to the right problem. Of course, description is one thing. Picking the right channel in real-world projects is where the headaches start. Use Standard too often and interns skim company financials. Lean too hard on Private and you build silos where nobody sees the full picture. Go all-in on Shared and you risk governance drift if nobody tracks who’s invited. Okay—now for the selection rules and real-world scenarios. Picking the Right Channel Without Getting Burned Picking the right channel without getting burned starts with one truth: stop clicking “new channel” like you’re ordering from a vending machine. Teams isn’t chips and soda. The default choice isn’t always the right choice, and one sloppy click can end with the intern casually browsing financial forecasts or the vendor stumbling into your board deck. Channel selection is governance, not guesswork. So here’s the channel rulebook boiled down to three sentences. Standard = broad transparency. Use it when the whole Team needs eyes on the same content. Example: a cross-department kickoff where marketing, sales, and HR all need to see the high-level plan. Private = inner-circle with limited access. Only the people you select get in. Use it for things like feature design or financials—content that would only confuse or risk exposure if the wider Team saw it. Example: developers hashing out raw build notes their VP doesn’t need popping up over morning coffee. Shared = external collaboration without Team-wide membership. It creates a doorway for vendors or clients to step into the conversation without turning them loose across your entire tenant. Example: a contractor who only needs one project space but doesn’t need to rummage around in your org chart. That’s your quick decision grid. No coin flips, no overthinking. Standard when you want sunlight. Private when you need walls. Shared when you’ve got additional guests. Done. Now here’s the part too many orgs skip: building a process so this choice happens the same way every time. Don’t leave it up to random project leads. That’s how you end up with a “Cold War bunker” of Private channels nobody remembers creating or a sprawl of orphaned Shared links floating around with God-knows-who invited in. The fix is a playbook. Four steps. First, scope the audience—ask “who must actually see this?” Don’t write a novel, just list the real participants. Second, match it to the rule-of-thumb c

    17 minutes
  2. Live Data in SPFx: Why Yours Isn’t Moving

    10 hours ago

    Live Data in SPFx: Why Yours Isn’t Moving

    Question for you: why does your SPFx web part look polished, but your users still ignore it? Because it’s not alive. They don’t care about a static list of names copied from last week—they want today’s data, updated the second they open Teams. In this video, we cover three wins you can actually ship: 1) connect SPFx to Graph and SharePoint securely, 2) make your calls faster with smaller payloads and caching, and 3) make updates real-time with webhooks and sockets. And good news—SPFx already has Graph and REST helpers baked in, so this isn’t an OAuth death march. Subscribe to the M365.Show newsletter at m365.show so you don’t miss these survival guides. Now, let’s take a closer look at why all that polish isn’t helping you yet. When Pretty Isn’t Enough You’ve put all the shine on your SPFx web part, but without live data it might as well be stuck behind glass. Sure, it loads, the CSS looks modern, the icons line up—but it’s no more useful than a lobby poster. Users figure it out in seconds: nothing moves, nothing changes, and that means nothing they can trust. The real issue isn’t looks—it’s trust. A dashboard is only valuable if it reflects reality. The moment it doesn’t, it stops being a tool and becomes a prop. Show users a “status board” that hasn’t updated in months, and you’ve trained them to stop checking it. Put yourself in their shoes: would you rely on metrics or contact info if you suspect it’s outdated? Probably not. That’s why static dashboards die fast, no matter how slick they appear. Here’s the simplest way to understand it: imagine a digital clock that’s frozen at 12:00. Technically, the screen works. The numbers display. But nobody uses it, because it’s lying the moment you look at it. In contrast, even a cheap wall clock with a ticking second hand feels alive and trustworthy. Our brains are wired to equate motion or freshness with reliability, which is exactly why your frozen SPFx display gets ignored. And the trap is deeper than just creating something irrelevant. When you polish a static web part, you actually amplify the problem. The nice gradients, the sleek tiles, the professional presentation—it broadcasts credibility. Users assume what they’re seeing is current. When they realize it’s six months old, that credibility collapses. Which hurts worse than if you had rolled out a plain text list. This isn’t just theory—it’s documented in Microsoft’s own SPFx case studies. One common failure pattern is the “team contacts” dashboard built as a static list. It looks helpful: one page, all the people in a group, with phones and emails. But if you’re not pulling straight from a live directory through Microsoft Graph or REST, those names go bad fast. Someone leaves, a role changes, numbers rotate—and suddenly the dashboard routes calls into a void. That’s not just dead data; it’s actively misleading. And as the research around SPFx examples confirms: people data always goes stale unless it’s pulled live. That one fact alone can sink adoption for otherwise solid projects. What makes it sting is how easy it is to avoid. SPFx already has the plumbing for exactly this: SharePoint REST endpoints, Microsoft Graph integration, and PnP libraries that wrap the messy parts. The pipes are there; you just have to open them up. Instead of your web part sitting frozen like a brochure, it can act like a real dashboard—a surface that reflects changes as they happen. That’s the difference between users glancing past it and users depending on it. 
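To make that concrete, here's a minimal sketch of what "pulling people data live" can look like inside an SPFx web part. It assumes the standard web part context object and a recent SPFx version (for `MSGraphClientV3`); the `/users` call, the 25-item page size, and the function name are illustrative, and listing users needs a directory scope such as `User.ReadBasic.All` approved by an admin — more on that in a moment.

```typescript
import { WebPartContext } from '@microsoft/sp-webpart-base';
import { MSGraphClientV3 } from '@microsoft/sp-http';

export interface IContact {
  displayName: string;
  mail: string;
}

// Sketch: fetch live directory data instead of rendering a hard-coded contact list.
export async function loadTeamContacts(context: WebPartContext): Promise<IContact[]> {
  // SPFx hands back a Graph client with token acquisition already handled.
  const client: MSGraphClientV3 = await context.msGraphClientFactory.getClient('3');

  // Ask only for the fields the web part actually renders.
  const response = await client
    .api('/users')
    .select('displayName,mail')
    .top(25)
    .get();

  return response.value as IContact[];
}
```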
And that’s really the message here: don’t waste your hours fiddling with padding values or button styling when the fix is turning on the live data feeds. SPFx wasn’t designed for static content any more than Outlook was designed for pen pals. Use the infrastructure it’s giving you. Because when the information is fresh—when it syncs to actual sources—the web part feels like something alive, not just another SharePoint decoration. Of course, the moment you start going live, you run face-first into the part everybody hates: authentication. And if you’ve ever tried to untangle OAuth token flows on your own, you already know it’s the programming version of reading an IKEA manual written in Klingon. So let’s hit that head-on and talk about how to stop authentication from killing your project. Beating Authentication Headaches Most devs don’t throw in the towel on Microsoft Graph because fetch calls are tricky—it’s because authentication feels like surviving an IKEA manual written in Klingon. Every token, every consent screen, every obscure “scope” suddenly turns into diagrams that don’t line up with reality. By the time you think you’ve wired it all together, the thing wobbles if you so much as breathe on it. I’ve seen hardened engineers lose entire weekends just trying to pass a single Graph call through that security handshake. The problem isn’t Graph itself—it’s the dance to get in the door. Here’s the bit that gets lost in the noise: SPFx actually saves you from most of this pain. The minute you use `MSGraphClient`, the framework is already juggling tokens behind the curtain. It grabs access, refreshes it, caches it—you don’t lift a finger for the ugly bits. Your real job is lining up the right permissions, which sounds easy until you open Azure AD and realize scopes are like cafeteria trays. Grab the wrong one, you lose half your lunch. And too many devs learn the hard way that “works for admin” doesn’t mean everyone else gets in. Misconfigured or missing scopes mean your users hit “access denied” long before they see anything useful. Think of it like this: OAuth is that nightclub bouncer none of us liked in college. Everyone’s outside in line. Only people with the right wristband get inside. Graph calls are your message to the DJ—but you don’t get anywhere unless the bouncer sees that wristband. SPFx plays chauffeur; it drives you to the door and waves the list around. But if you forgot to define those wristbands with `webApiPermissionRequests`, you’re left out front explaining why “get my profile” suddenly needs an executive sign-off. Here’s the good news: the roadmap is short and clear. Three steps and you’ll actually clear the front door. One: declare the Graph scopes you need in `webApiPermissionRequests` inside package-solution.json. Two: deploy the solution package to your App Catalog and have an admin approve those scopes in the SharePoint Admin Center. Three: test your web part with a normal user account, not your admin account, to make sure permissions behave the way you think they do. Skip one of those, and you’re back in the rain outside the club. Common scopes you might need? Start with `User.Read` if you’re just pulling names and emails. `Sites.Read.All` if you want to load data from SharePoint lists. `Mail.Send` if you’re generating outbound mail from a web part. Teams apps often dip into `Team.ReadBasic.All` or `Channel.ReadBasic.All`. The mix depends on what your web part does. The point is—don’t go greedy. 
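As a reference point, here's roughly what step one looks like in `package-solution.json`. The solution name, id, and package path are placeholders — the part that matters is `webApiPermissionRequests`, and it only asks for the scopes this hypothetical web part actually uses:

```json
{
  "$schema": "https://developer.microsoft.com/json-schemas/spfx-build/package-solution.schema.json",
  "solution": {
    "name": "team-contacts-client-side-solution",
    "id": "00000000-0000-0000-0000-000000000000",
    "version": "1.0.0.0",
    "webApiPermissionRequests": [
      { "resource": "Microsoft Graph", "scope": "User.Read" },
      { "resource": "Microsoft Graph", "scope": "User.ReadBasic.All" }
    ]
  },
  "paths": {
    "zippedPackage": "solution/team-contacts.sppkg"
  }
}
```

Once the package lands in the App Catalog, those requests show up on the API access page in the SharePoint Admin Center, where an admin approves or rejects them.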
The principle of least privilege isn’t just something security folks chant; it’ll keep your admin from rejecting the whole request and leaving your part dead on arrival. And here’s your strongest sanity tip: do not rely on admin-only testing. As an admin, you see far more than your users. Run your first Graph call under your admin account, and you’ll think it’s flawless. Then you push to production, and regular folks are locked out at every click. You end up with a ticket firestorm and the dreaded “works for me” defense. Save yourself: always test with an everyday user account and validate real-world scope behavior early. Once scopes are sorted, that’s when `MSGraphClient` feels like a cheat code. You call the endpoint, it returns live data with valid tokens attached, and you never touch an OAuth flow diagram again. That means you can hook `/me` for user profiles, check who’s in which Teams, glance at calendars, or even fire off emails—all without sweating token lifetimes or consent prompts mid-demo. If your package declares permissions properly and the tenant admin blesses them, you focus on building features instead of tracing 401s. The payoff is speed of delivery. Instead of weeks drowning in token soup, you’re shipping live Graph integrations in minutes. You click a dropdown, see real people from the organization appear instantly, and—boom—your web part looks alive. That’s the kind of moment where users stop treating your project as “another SharePoint tile” and start relying on it daily. All without losing your mind over security diagrams. So yes, the bouncer is picky, but SPFx hands you the VIP pass if you request the right scopes and validate them the right way. Do that, and Graph calls feel almost effortless. But we’re not out of the woods yet. Even if your call works, if it takes longer than a coffee refill to load, users are already disengaged. And that’s where performance tuning makes or breaks adoption. Making Graph Calls Snappy Here’s where most developers trip and eat asphalt: the call works, but the speed is dreadful. Making Graph calls snappy isn’t about adding more horsepower—it’s about cutting the junk out of the request and being smart with what you keep. By default, Graph isn’t shy about sending you way too much. Call `/me` with no filters, and it hands you an encyclopedia when you just needed a sticky note. Every unnecessary property adds weight to the payload, slows parsing, and chews through bandwidth. Multiply that waste across dozens of loads—and if you have hundreds of users hitting it—you’ll run right into Graph throttling. That’s why the first rule is: be selective. Graph gives you the `$select` parameter for a reason. Use it. If all you need is someone’s display n

    20 minutes
  3. The Info Architect’s Guide to Surviving Purview

    18 hours ago

    The Info Architect’s Guide to Surviving Purview

    Here’s the disaster nobody tells new admins: one bad Purview retention setting can chew through your storage like Pac-Man on Red Bull. Subscribe to the M365.Show newsletter at m365 dot show so you don’t miss the next rename-ocalypse. We’ll cover what Purview actually does, the retention trap, the IA guardrails to prevent disaster, and a simple pilot plan you can run this month. Reversing a bad retention setting often takes time and admin effort — check Microsoft’s docs and always test in a pilot before trust-falling your tenant into production. The good news: with a solid information architecture, Purview isn’t the enemy. It can actually become one of your strongest tools. So, before we talk guardrails, let’s start with the obvious question — what even is Purview? What Even Is Purview? Microsoft has a habit of tossing out new product names like it’s a side hustle, and the latest one lighting up eye-roll meters is Purview. A lot of information architects hear the word, decide it sounds like an IT-only problem, and quietly step out of the conversation. That’s a mistake. Ignoring Purview is like ignoring the safety inspector while you’re still building the house. You might think nothing’s wrong yet, but eventually they show up with a clipboard, and suddenly that “dream home” doesn’t meet code. Purview functions as the compliance and governance layer that helps enforce retention, classification, and other lifecycle controls across Microsoft 365 — in practice it acts like your tenant’s compliance inspector. Let’s break Microsoft’s jargon into plain English. Purview is the set of tools Microsoft gives us for compliance and content governance across the tenant. Depending on licensing, it usually covers retention, classification, sensitivity labels, access control, eDiscovery, and data lifecycle. If it’s sitting inside Microsoft 365 — files, Outlook mailboxes, Teams chats, SharePoint sites, even meeting recordings — Purview commonly has a say in how long it sticks around, how it’s classified, and when it should disappear. You can picture it as the landlord with the clipboard. But here’s the catch: the rules it enforces depend heavily on the structure you’ve set up. If information architecture is sloppy, Purview enforces chaos. If IA is solid, Purview enforces order. This is where a lot of architects get tripped up. It’s tempting to think Purview is “IT turf” and not really part of your world. But Purview reaches directly into your content stores whether you like it or not. Retention policies don’t distinguish between a contract worth millions and a leftover lunch flyer. If you haven’t provided metadata and categorization, Purview treats them the same. And when that happens, your intranet stops feeling like a library and starts feeling like a haunted house — doors welded shut, content blocked off, users banging on IT’s door because “the file is broken.” And remember, Purview doesn’t view your content with the same care you do. It doesn’t naturally recognize your taxonomy until you encode it in ways the system can read. Purview’s strength is enforcement: compliance, retention, and risk reduction. It’s not here to applaud your architecture; it’s here to apply rules without nuance. Think of it like a city building regulator. They don’t care if your house has a brilliant design — they care if you left out the fire exit. And when your IA isn’t strong, the “fines” aren’t literal dollars, but wasted storage, broken workflows, and frustrated end users who can’t reach their data. 
That’s why the partnership between IA and Purview matters. Without metadata, content types, and logical structures in place, Purview defaults into overkill mode. Its scans feel like a spam filter set to “paranoid.” It keeps far too much, flags irrelevant content, and generates compliance reports dense enough to melt your brain. But when your IA work is dialed in, Purview has the roadmap it needs to act smarter. It can retain only sensitive or regulated information, sweep out junk, and keep collaboration running without adding friction. There’s another wrinkle here: Copilot. If your organization wants to roll it out, Purview instantly becomes non-negotiable. Copilot feeds from Microsoft Search. Search quality depends on your IA. And Purview layers governance on that same foundation. If the structure is weak, Copilot turns into a chaos machine, surfacing garbage or the wrong sensitive info. Purview, meanwhile, swings from being a precision scalpel to a blunt-force hammer. Translation: those shiny AI demos you promised the execs collapse when retention locks half of your data in digital amber. The real bottom line is this: Purview is not some bolt-on compliance toy for auditors. It’s built into the bones of your tenant. Pretending it’s someone else’s problem is like pretending you don’t need brakes because only other people drive the car. If you’re an architect, it’s your concern too. Get the structure right, and Purview enforces it in your favor. Get it wrong, and you’ll be fielding angry tickets while your storage costs quietly double. Which brings us to the most dangerous button in Purview: retention. The Retention Policy Trap The first thing that pulls people in with Purview is that shiny option called “Retention Policy.” It sounds helpful, even protective — like you’re about to shield your data, keep the auditors off your back, and win IT citizenship of the month. But here’s the trap: applied without a plan, it can wreck user experience and bury your tenant in problems faster than you can open a ticket. Here’s the blunt version of how it works. Retention can be broad, or it can be precise. In the admin center you’ll see policies that apply at scale — things like entire Exchange mailboxes, OneDrive accounts, or Teams chats. You’ll also see labels that can be applied manually or automatically on libraries, folders, or individual files. The official marketing spin: “It sets rules for how long files stay and whether they’re deleted or kept.” The real world: if you misconfigure it, it’s like deciding to freeze your entire grocery store because you didn’t want the milk to spoil. Sure, things are technically preserved, but they’re also unusable. And retention doesn’t stop to ask questions. It doesn’t know the difference between a contract that runs your business or a disposable screenshot of yesterday’s lunch menu. Once you apply a rule, anything in scope gets frozen under that same setting. Meeting notes? Locked. Project files mid-edit? Locked. Teams threads that someone desperately needs to clean up? Locked as well. From the user side, it feels like the system is malfunctioning: “I can’t delete this.” “Why did this document get stuck?” The system isn’t broken, you just hard-coded compliance cement across their workflow. That fallout is where support calls start piling up. And unwinding it isn’t as easy as hitting undo. You’ll want to think carefully before enforcing retention across wide swaths of the tenant. The smarter path is prep work: First, map where content actually lives. 
Second, add metadata and content types to critical libraries so you’ve got meaningful ways to target things later. Third, pilot your retention policies in a small, low-risk scope before you go broad. Those three steps alone save you from an avalanche of “the file is broken” tickets. Now let’s talk about why rollback gets admins swearing. Once retention is set, reversing it is not like flipping permissions or unsharing a file. It can take reprocessing time, it might require re-indexing, and sometimes support has to step in. A safer plan: before you roll out a policy, write down a record of what you’re about to change — who owns that mailbox, what site collections are impacted, what type of content sits inside. Have a tested rollback path. Run your pilot. And know what resources you’ll need if you have to backtrack. That way, when a VP shouts that their project files are locked, you’ve got a ripcord and not just panic. As for those so-called “immutable” label settings, think of them as permanent tattoos. Some retention settings, once made, can’t simply be rolled back. Microsoft’s own docs advise treating them as effectively permanent — so always test in a contained spot before turning them on. If you’re not sure which ones can be reversed, check the docs and test, because there is no magic delete key for compliance labels once they’ve cemented content. Then there’s the hidden cost people don’t talk about: storage. Retention doesn’t just prevent deletion. It makes the system hoard files in the background even if the user tries to delete them. Suddenly you’ve got SharePoint sites jammed with preserved copies, OneDrive full of zombie documents, and Teams chat histories stretching back years because nobody told the system to cut them loose. Think of it like renting a storage unit — everything users try to throw out secretly ends up there. The bill shows up later, and finance wants answers. The better move: measure storage impact during your pilot. If growth spikes, fix your scope and labeling before expanding. The main point here isn’t “don’t use retention.” You will need it. Compliance, regulations, and eventual audits guarantee that. The key is to line it up with your information architecture so policies attach to categories and content types — not random guesswork across your environment. Strong IA is the difference between retention holding what matters and retention freezing everything in sight. Bad retention habits mean bloated storage, frustrated users, and needless chaos. Smart retention guided by IA means you satisfy compliance without strangling day-to-day collaboration. And that leads us directly into the next problem: Pu

    20 minutes
  4. Your Teams Notifications Are Dumb: Fix Them With Adaptive Cards

    1 day ago

    Your Teams Notifications Are Dumb: Fix Them With Adaptive Cards

    Your Teams notifications are dumb. Yeah, I said it. They spam reminders nobody reads, and they look like they were designed in 2003. Here’s the fix: we’re going to walk through three parts — structuring data in Microsoft Lists, designing an Adaptive Card, and wiring it together with Power Automate. Subscribe to the newsletter at m365 dot show if you want the full step‑by‑step checklist. Once you connect those pieces, the boring alerts turn into slick, clickable mini‑apps in Teams. By the end, you’ll build a simple task card — approve or snooze — without users ever leaving chat. Sounds good, but first let’s look at why the default Teams notifications are so useless in the first place. Why Teams Notifications Fail Ever notice how quick we are to hit “mark as read” on a Teams alert without even glancing at it? Happens all the time. The dirty truth is that most notifications aren’t worth the click — they aren’t asking you to actually *do* anything. They just pile up, little blocks of static text that technically “alert” you, but don’t invite action. Teams was supposed to make collaboration easier, yet those alerts work more like an old-school overhead PA system: loud, one-way, and usually ignored. Here’s the play-by-play. Somebody sets up a flow — say, an approval request or a reminder to check a task. Teams sends out the ping. But that ping is empty. It’s just words in a box with zero interactivity. The recipient shrugs, clears it, and forgets about it. Meanwhile, that request sits untouched, waiting like an abandoned ticket in the queue. Multiply that by dozens of alerts a week, and congratulations — you’ve built digital background noise on par with standing between a jackhammer and a jet engine. The fallout shows up fast. A manager needs an approval, but the request is sitting in limbo, so they end up chasing the person in chat: “Hey, did you see that?” That message promptly gets buried under noise about lunch-and-learns, upcoming surveys, or the outage notice no one can action anyway. Before long, muscle memory takes over: swipe, snooze, dismiss. The result isn’t that Teams is broken; the problem is that the notifications running through it were never meant for interaction. Think of the current system like a fax machine in 2024. Yes, the paper comes out the other side, and technically the information transferred. But nobody brags about using it. Same with Teams alerts: technically functional, but painfully outdated. The real “work” still spills into other channels — endless email trails, chat chasers, and manual spreadsheets. Teams becomes a hallway covered in digital flyers that everyone walks past. From what we’ve seen across real deployments and support cases, notifications that aren’t actionable get ignored. In practice, when users get hammered with these static “FYI” pings, response rates drop hard — we keep seeing the same pattern across tenants: the more hollow the alerts, the less anyone bothers to act on them. And with that, productivity craters. Missed approvals, overdue tasks, broken handoffs — it all snowballs into “sorry, I didn’t see that” excuses, and the cycle repeats. Time is where it really hurts. Every useless ping spawns follow-up emails, escalations, manual tracking, and a dozen extra steps that never needed to exist. Teams channels fill with bot posts nobody reads, and actual high-priority alerts sink unseen. The fastest way to torpedo user engagement with your processes is to keep flooding people with alerts that don’t let them resolve anything in place. 
One client story hammered this home. They had a Purchase Order approval process wired into Teams, but the messages were generic blurbs with a bland “view request” link. Clicking took you to a site with no context, no instructions, just a blank box waiting for input. One approval ended up sitting untouched for three weeks, holding up procurement until the vendor finally walked away. The lesson was obvious: context and action have to be built into the notification itself, or it fails completely. The real kicker is that none of this pain is needed. Notifications don’t have to be treated like paper slips shoved under a digital door. They can ask for action directly. They can carry buttons, fields, and context so users can respond instantly. That’s exactly where Adaptive Cards shift the game. Instead of shouting information and hoping someone reacts, the card itself says: here’s the choice, click it now. FYIs turn into “done with one click.” Bottom line: Teams notifications fail because they’re static. They dump context-free information and leave the user to go hunting elsewhere. Adaptive Cards succeed because they remove that hunting trip. They bring the needed action — approve, update, close — right into the chat window. That’s the difference between annoying noise and useful workflow. So the big question is, how do you make those cards actually work the way you want? The trick is that smart cards rely on smart data. If your underlying data is messy or unstructured, the cards will feel just as clunky as the static alerts. Next, we’ll dig into the tool most folks underestimate but is actually the foundation of the whole setup: Microsoft Lists. Want a heads-up when each part drops? Subscribe at m365 dot show so you don’t miss it. The Secret Weapon: Microsoft Lists So let’s talk about the real foundation of this whole setup: Microsoft Lists. Most folks glance at it and say, “Oh great, another Excel wannabe living in SharePoint.” Then they dump random notes in it, half-fill columns, and call it a day. But here’s the twist — Lists isn’t the sidekick. It’s the engine that makes your Adaptive Cards actually work. If the source data is junk, your cards will be junk. Simple as that. Adaptive Cards, no matter how sharp they look, are only as useful as the data behind them. If your List is full of inconsistent text, blank fields, and random guesses, the card becomes nonsense. Instead of a clear call to action, you’ve got reminders that confuse people and buttons tied to vague non-answers. That’s not a workflow — that’s digital wallpaper. Structured data is what makes these cards click. Without it, even the fanciest design falls flat. The pain shows up fast. I’ve seen Lists where an “Owner” column was filled with nicknames, first names, and one that literally said “ask John.” Great, now your card pings the wrong person or nobody at all. Or status fields where one entry says “In Progress,” another says “IP,” and another just says “working-ish.” Try automating that — good luck. The card ends up pulling “Task maybe working-ish” onto a button, and users will either ignore it or laugh at it before ignoring it. Here’s the cleaner way to think about it. Treat Microsoft Lists like your kitchen pantry. Adaptive Cards are just recipes pulling from those shelves. If the pantry is stocked with expired cans and mystery bags, your dinner’s ruined. But if everything’s labeled and consistent — flour, sugar, rice — the recipe comes out right. Same deal here. A clean List makes Adaptive Cards clear, actionable, and fast. 
Let’s ground it in a practical example. Say you want a simple task tracker to drive reminders inside Teams. Make a List with four fields: * TaskName (single line of text) * DueDate (date) * Owner (person) * ReminderFlag (choice or yes/no) That’s it. Four clean columns you can wire straight into a card. The card then shows the task, tells the owner when it’s due, and offers two buttons: “Mark Done” or “Snooze.” No guessing. No digging. Click, done. Now compare that to the same list where “Owner” just says “me,” “DueDate” is blank half the time, and “ReminderFlag” is written like “yes??” That card is confusing, and confusion kills engagement. Column types aren’t window dressing either. They’re the difference between a working card and a dead one. Choice columns give you neat, predictable options that translate cleanly into card buttons. Date/time columns let you trigger exact reminder logic. Use People/Person columns so you can present owner info and, in Teams, humans can recognize the person at a glance — name, and often an avatar. That’s way more reliable than shoving in a random free-text field. And here’s the pitfall I see again and again: the dreaded Notes column. One giant text blob that tries to capture everything. Don’t do it. Avoid dumping all your process into freeform notes. Use actual column types so the card can render clean, clickable elements instead of just spitting text. Once you shift your mindset, it clicks. Lists aren’t passive storage. They’re the schema — the definition of what your workflow actually means. Every column sets a rule. Every field enforces structure. That structure feeds into the card design, which then feeds into Power Automate when you wire it together. Get the schema right, and you’re not building a “card.” You’re building a mini-app that looks clean and works exactly how people expect. The bottom line is this: Microsoft Lists aren’t boring busywork. They’re the hidden layer that makes your notifications into something more than noise. Keep them structured, and your Adaptive Cards stop feeling like static spam and start feeling like tools people use. Pantry stocked? Next we design the recipe — the Adaptive Card. Designing Your First Adaptive Card Designing your first Adaptive Card can feel like opening an IKEA box where the instructions show four screws but the bag has fifteen. In short: a little confusing, and you start to wonder if this thing will collapse the first time someone leans on it. That’s the point where most people stall. You open the editor, you’re staring at raw JSON and random options, and suddenly the excitement drain
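That raw JSON is less intimidating than it looks. Here's a minimal, hedged sketch of the task card described above — the `${...}` tokens stand in for values bound from the List columns (in Power Automate you'd typically drop the item's dynamic content there instead), and the submit `data` payloads are just illustrative names for the flow to branch on:

```json
{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.4",
  "body": [
    { "type": "TextBlock", "size": "Medium", "weight": "Bolder", "text": "Task reminder: ${TaskName}" },
    {
      "type": "FactSet",
      "facts": [
        { "title": "Owner", "value": "${Owner}" },
        { "title": "Due date", "value": "${DueDate}" }
      ]
    }
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Mark Done", "data": { "action": "markDone" } },
    { "type": "Action.Submit", "title": "Snooze", "data": { "action": "snooze" } }
  ]
}
```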

    19 minutes
  5. Domains in Fabric: Easier Than It Looks? (Spoiler: No)

    1 day ago

    Domains in Fabric: Easier Than It Looks? (Spoiler: No)

    Admins, remember when Power BI Premium felt like your biggest headache? Overnight, you’re suddenly a Fabric Administrator, staring at domains, capacities, and tenant configs like a set of IKEA instructions written in Klingon. Microsoft says this makes things “simpler.” Our experience shows it just means fifty new moving parts you didn’t ask for. By the end, you’ll know what to lock down first, how to design workspaces that actually scale, and how to avoid surprise bills when capacity goes sideways. Subscribe to the M365.Show newsletter so you get our Fabric survival checklist before chaos hits. Let’s start with domains. Microsoft says they “simplify organization.” Reality: prepare for sprawl with fancier nametags. Domains Aren’t Just New Labels Domains aren’t just another batch of Microsoft labels to memorize. They change how data, people, and governance collide in your tenant—and if you treat them like renamed workspaces, you’ll spend more time firefighting than managing. On the surface, a domain looks tidy. It’s marketed as a logical container: HR gets one, Finance gets one, Marketing gets one. Sounds neat until you realize each domain doesn’t just sit there quietly—it comes with its own ownership, policies, and permission quirks. One team sees it as their sandbox, another sees it as their data vault, and suddenly you’re refereeing a brawl between groups who have never once agreed on governance rules. The scope is bigger too. Workspaces used to be about reports and maybe a dataset or two. Now your Marketing domain doesn’t just hold dashboards—it’s sucking in staging pipelines, raw data ingests, models, and random file dumps. That means your business analyst who just wanted to publish a campaign dashboard ends up sharing space with a data engineer pushing terabytes of logs. Guess who dominates that territory? Not the analyst. Then comes the permission puzzle. You’re not just picking Viewer, Contributor, or Admin anymore. Domains bring another layer: domain-level roles and domain-level policies that can override workspace rules. That’s when you start hearing from users: “Why can’t I publish in my own workspace?” The answer is buried in domain settings you probably didn’t know existed when you set it up. And every one of those support pings ends up at your desk. Here’s the metaphor that sticks: setting up domains without governance is like trying to organize your garage into “zones.” You put tools on one wall, bikes in another corner, boxes on shelves. Feels under control—until the neighbors dump their junk in there too. Now you’re tripping over someone else’s lawnmower trying to find your screwdriver. That’s domains: the illusion of neat order without actual rules. The punchline? Domains are an opportunity for chaos or control—and the only way you get control is by locking a few things in early. So what do you lock? First 30 days: define your domain taxonomy. Decide whether domains represent departments, projects, or purposes. Don’t let people invent that on the fly. First 60 days: assign single ownership with a clear escalation path. One team owns structure, another enforces usage, and everybody else knows where to escalate. First 90 days: enforce naming rules, then pilot policies with one team before rolling them out everywhere. That gives you a safe zone to see how conflicts actually surface before they become tenant-wide tickets. And what do you watch for along the way? 
Easy tells that a domain is already misconfigured: multiple near-identical domains like “Sales,” “Sales Reporting,” and “Sales 2025.” Owners you’ve never heard of suddenly holding keys to sensitive data. Users reporting mysterious “can’t publish” errors that resolve only when you dig into domain policies. Each of these is a canary in the coal mine—and if you ignore them, the sprawl hardens fast. We’ve seen domain sprawl happen quickly when teams can create domains freely. It’s not hypothetical—it only takes one unchecked department creating a new shiny container for their project, and suddenly you’ve got duplicates and silos sprouting up. The mess builds quicker than you think, and unlike workspaces, a domain is bigger by design, which means the fallout stretches further. The fix isn’t abandoning domains. Done right, they actually help carve order into Fabric. But doing it right means starting boring and staying boring. Naming conventions aren’t glamorous, and ownership charts don’t impress in a slide deck. But it’s exactly that unsexy work that prevents months of renaming, re-permissioning, and explaining to your boss why Finance can see HR’s data warehouse. Domains don’t magically simplify anything. You’ve got to build the scaffolding before they scale. When you skip that, Microsoft’s “simpler organization” just becomes another layer of chaos dressed up in clean UI. And once domains are running wild, the next layer you’ll trip over isn’t naming—it’s the foundation everything sits on: workspace architecture. That’s where the problems shift from labels to structure, and things start looking less like Legos and more like a Jenga tower. Workspace Architecture: From Lego to Jenga Now let’s dig into workspace architecture, because this is where admins either set order early or watch the entire tenant bend under its own weight. Old Power BI workspaces were simple—few reports, a dataset, done. In Fabric, that world is gone. Workspaces are crammed with lakehouses, warehouses, notebooks, pipelines, and the dashboards nobody ever stopped building. Different teams—engineering, analysts, researchers—are all piling their work into the same bucket, and you’re supposed to govern it like it’s still just reporting. That mismatch is where the headaches start. The scope has blown up. Workspaces aren’t just about “who sees which report” anymore. They cover ingestion, staging, analysis, and even experimentation. In the same space you’ve got someone dumping raw logs, another team tuning a model, and another trying to prep board slides. Mixing those roles with no structure means unstable results. Data gets pulled from the wrong copy, pipelines overwrite each other, performance sinks, and you’re dealing with another round of help desk chaos. The trap for admins is assuming old rules stretch to this new reality. Viewer and Member aren’t enough when the question is: who manages staging, who protects production, and who keeps experiments from knocking over production datasets? Workspace roles multiply risk, and if you manage them like it’s still just reports, you’re courting failure. Here’s what usually happens. Someone spins up a workspace for a “simple dashboard.” Six months later, it’s bloated with CSV dumps, multiple warehouses mislabeled, a couple of experimental notebooks, and datasets pointing at conflicting sources. Analysts can’t tell staging from production, someone presents the wrong numbers to leadership, and the blame lands on you for letting it spin out of control. 
Microsoft’s advice is “purpose-driven workspaces.” Good guidance—but many orgs treat them like folders with shinier icons. Need Q4 content? New workspace. Need a sandbox? New workspace. Before long, you’ve got dozens of abandoned ones idling with random objects, still eating capacity, and no clear rules holding any of it together. So how do you cut through the chaos? Three rules—short and blunt. Separate by function. Enforce naming and lifecycle. Automate with templates. That’s the backbone of sustainable Fabric workspace design. Separate by function: Staging, production, and analytics don’t belong in the same bucket. Keep them distinct. One workable pattern: create a staging workspace managed by engineering, a production workspace owned by BI, and a shared research space for experiments. Each team knows their ground, and reports don’t pull from half-built pipelines. Enforce naming and lifecycle: Don’t trust memory or guesswork. Is it SALES_PROD or SALES_STAGE? Tagging and naming stop the mix-ups. Pair it with lifecycle—every space needs an owner and expiry checks, so years from now you aren’t cleaning up junk nobody remembers making. Automate with templates: Humans forget rules; automation won’t. Build a workspace template that locks in owners, tags, and naming from the start. Don’t try to boil the ocean—pilot with one team, smooth the wrinkles, then expand it. Admins always want a sanity check: how do you know if you’ve structured it right? Run three quick tests. Are production reports separated from experiments? Can an experimenter accidentally overwrite a production dataset? Do warehouses and pipelines follow a naming convention that a stranger could recognize in seconds? If any answer is “no,” your governance won’t scale. The payoff is practical. When staging blows up, it doesn’t spill into production. When executives need reporting, they aren’t pulling test data. And when workloads start climbing, you know exactly which spaces should map to dedicated capacity instead of scrambling to unpick the mess later. Architecture isn’t about controlling creativity, it’s about making performance and governance predictable. Done right, architecture sets you up to handle the next big challenge. Because once multiple workspaces start hammering workloads, your biggest strain won’t just be who owns what—it’s what’s chewing through your compute. And that’s where every admin who thinks Premium still means what it used to gets a rude surprise. Capacities: When Premium Isn’t Premium Anymore Capacities in Fabric will test you in a way Premium never did. What used to feel like a horsepower upgrade for reports is now a shared fuel tank that everything taps into—reports, warehouses, pipelines, notebooks, and whatever else your teams spin up.

    19 minutes
  6. LINQ to SQL: Magic or Mayhem?

    2 days ago

    LINQ to SQL: Magic or Mayhem?

    Have you ever written a LINQ query that worked perfectly in C#, but when you checked the SQL it generated, you wondered—how on earth did it get to *that*? In this session, you’ll learn three things in particular: how expression trees control translation, how caching shapes performance and memory use, and what to watch for when null logic doesn’t behave as expected. If you’ve suspected there’s black-box magic inside Entity Framework Core, the truth is closer to architecture than magic. EF Core uses a layered query pipeline that handles parsing, translation, caching, and materialization behind the scenes. First we’ll look at how your LINQ becomes an expression tree, then the provider’s role, caching, null semantics, and finally SQL and materialization. And it all starts right at the beginning: what actually happens the moment you run a LINQ query. From LINQ to Expression Trees When you write a LINQ query, the code isn’t automatically fluent in SQL. LINQ is just C#—it doesn’t know anything about databases or tables. So when you add something like a `Where` or a `Select`, you’re really calling methods in C#, not issuing commands to SQL. The job of Entity Framework Core is to capture those calls into a form it can analyze, before making any decisions about translation or execution. That capture happens through expression trees. Instead of immediately hitting the database, EF Core records your query as a tree of objects that describe each part. A `Where` clause doesn’t mean “filter rows” yet—it becomes a node in the tree that says “here’s a method call, here’s the property being compared, and here’s the constant value.” At this stage, nothing has executed. EF is simply documenting intent in a structured form it can later walk through. One way to think about it is structure before meaning. Just like breaking a sentence into subject and verb before attempting a translation, EF builds a tree where joins, filters, projections, and ordering are represented as nodes. Only once this structure exists can SQL translation even begin. EF Core depends on expression trees as its primary mechanism to inspect LINQ queries before deciding how to handle them. Each clause you write—whether a join or a filter—adds new nodes to that object model. For example, a condition like `c.City == "Paris"` becomes a branch with left and right parts: one pointing to the `City` property, and one pointing to the constant string `"Paris"`. By walking this structure, EF can figure out what parts of your query map to SQL and what parts don’t. Behind the scenes, these trees are not abstract concepts, but actual objects in memory. Each node represents a method call, a property, or a constant value—pieces EF can inspect and categorize. This design gives EF a reliable way to parse your query without executing it yet. Internally, EF treats the tree as a model, deciding which constructs it can send to SQL and which ones it must handle in memory. This difference explains why some queries behave one way in LINQ to Objects but fail in EF. Imagine you drop a custom helper function inside a lambda filter. In memory, LINQ just runs it. But with EF, the expression tree now contains a node referring to your custom method, and EF has no SQL equivalent for that method. At that point, you’ll often notice a runtime error, a warning, or SQL falling back to client-side evaluation. That’s usually the signal that something in your query isn’t translatable. The important thing to understand is that EF isn’t “running your code” when you write it. 
It’s diagramming it into this object tree. And if a part of that tree doesn’t correspond to a known SQL pattern, EF either stops or decides to push that part of the work into memory, which can be costly. Performance issues often show up here—queries that seem harmless in C# suddenly lead to thousands of rows being pulled client-side because EF couldn’t translate one small piece. That’s why expression trees matter to developers working with EF. They aren’t just an internal detail—they are the roadmap EF uses before SQL even enters the picture. Every LINQ query is first turned into this structural plan that EF studies carefully. Whether a query succeeds, fails, or slows down often depends on what that plan looks like. But there’s still one more step in the process. Once EF has that expression tree, it can’t just ship it off to the database—it needs a gatekeeper. Something has to decide whether each part of the tree is “SQL-legal” or something that should never leave C#. And that’s where the next stage comes in. The Gatekeeper: EF Core’s Query Provider Not every query you write in C# is destined to become SQL. There’s a checkpoint in the middle of the pipeline, and its role is to decide what moves forward and what gets blocked. This checkpoint is implemented by EF Core’s query provider component, which evaluates whether the expression tree’s nodes can be mapped to SQL or need to be handled in memory. You can picture the provider like a bouncer at a club. Everyone can show up in line, but only the queries dressed in SQL-compatible patterns actually get inside. The rest either get turned away or get redirected for client-side handling. It’s not about being picky or arbitrary. The provider is enforcing the limits of translation. LINQ can represent far more than relational databases will ever understand. EF Core has to walk the expression tree and ask of each node: is this something SQL can handle, or is it something .NET alone can execute? That call gets made early, before SQL generation starts, which is why you sometimes see runtime errors up front instead of confusing results later. For the developer, the surprise often comes from uneven support. Many constructs map cleanly—`Where`, `Select`, `OrderBy` usually translate with no issue. Others are more complicated. For example, `GroupBy` can be more difficult to translate, and depending on the provider and the scenario, it may either fail outright or produce SQL that isn’t very efficient. Developers see this often enough that it’s a known caution point, though the exact behavior depends on the provider’s translation rules. The key thing the provider is doing here is pattern matching. It isn’t inventing SQL on the fly in some magical way. Instead, it compares the expression tree against a library of translation patterns it understands. Recognized shapes in the tree map to SQL templates. Unrecognized ones either get deferred to client-side execution or rejected. That’s why some complex queries work fine, while others lead to messages about unsupported translation. The decision is deterministic—it’s all about whether a given pattern has a known, valid SQL output. This is also the stage where client-side evaluation shows up. If a part of the query can’t be turned into SQL, EF Core may still run it in memory after fetching the data. At first glance, that seems practical. SQL gives you the data, .NET finishes the job. But the cost can be huge. 
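To see the kind of query that trips this gatekeeper, here's a small sketch. The `Customer` entity, `AppDbContext`, and the `IsPreferred` helper are hypothetical; what matters is the shape of the expression tree. In recent EF Core versions the commented-out query typically fails with a translation exception rather than running silently in memory, while older versions would fall back to client-side evaluation — exactly the trade-off described here.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Customer
{
    public int Id { get; set; }
    public string City { get; set; } = "";
    public int OrderCount { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Customer> Customers => Set<Customer>();
}

public static class CustomerQueries
{
    // Plain C# helper: the provider has no SQL translation for this method call.
    private static bool IsPreferred(Customer c) => c.OrderCount > 10;

    public static async Task<List<Customer>> LoadParisPreferredAsync(AppDbContext db)
    {
        // Untranslatable: the expression tree contains a node for IsPreferred(c),
        // so EF Core refuses to translate it (or, in older versions, pulls rows
        // back and filters them in memory on the client).
        // var risky = await db.Customers.Where(c => IsPreferred(c)).ToListAsync();

        // Translatable: the same intent written with patterns the provider recognizes.
        return await db.Customers
            .Where(c => c.City == "Paris" && c.OrderCount > 10)
            .ToListAsync();
    }
}
```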
This is also the stage where client-side evaluation shows up. If a part of the query can’t be turned into SQL, EF Core may still run it in memory after fetching the data. At first glance, that seems practical. SQL gives you the data, .NET finishes the job. But the cost can be huge. If the database hands over thousands or even millions of rows just so .NET can filter them afterward, performance collapses. Something that looked innocent in a local test database can stall badly in production when the data volume grows.

Developers often underestimate this shift. Think of a query that seems perfectly fine while developing against a dataset of a few hundred rows. In production, the same query retrieves tens of thousands of records and runs a slow operation on the application server. That’s when users start complaining that everything feels stuck. The provider’s guardrails matter here, and in many cases it’s safer to get an error than to let EF try to do something inefficient. For anyone building with EF, the practical takeaway is simple: always test queries against real or representative data, and pay attention to whether performance suddenly nosedives in production. If it feels fast locally but drags under load, that’s often a sign the provider has pushed part of your logic to client-side evaluation. It’s not automatically wrong, but it is a signal you need to pay closer attention.

So while the provider is the gatekeeper, it isn’t just standing guard—it’s protecting both correctness and performance. By filtering what can be translated into SQL and controlling when to fall back to client-side execution, it keeps your pipeline predictable. At the same time, it’s under constant pressure to make these decisions quickly, without rewriting your query structure from scratch every time. And that’s where another piece of EF Core’s design becomes essential: a system to remember and reuse decisions, rather than starting from zero on every request.

Caching: EF’s Secret Performance Weapon

Here’s where performance stops being theoretical. Entity Framework Core relies on caching as one of its biggest performance tools, and without it, query translation would be painfully inefficient. Every LINQ query starts its life as an expression tree and has to be analyzed, validated, and prepared for SQL translation. That work isn’t free. If EF had to repeat it from scratch on every execution, even simple queries would bog down once repeated frequently. To picture what that would mean in practice, think about running the same query thousands of times per second in a production app. Without caching, EF Core would grind through full parsing and translation on each call. The database wouldn’t necessarily be the problem—your CPU would spike just from EF redoing the prep work. This is why caching isn’t an optional optimization; it’s the foundation that makes EF Core workable at real-world scale.

So how does it actually help? EF Core uses caching to recognize when a query shape it has already processed shows up again. Instead of re-analyzing the expression tree node by node, EF can reuse the earlier work. That means whe…
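EF Core’s translation cache is keyed on query shape and works automatically, so there is nothing to call directly; compiled queries are the closest hands-on illustration of the same idea, paying the analysis cost once and reusing the result. A minimal sketch, reusing the hypothetical `AppDbContext` and `Customer` from the earlier example:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class CompiledQueryDemo
{
    // Translation happens once, when the delegate is built; every call after that
    // skips the per-execution expression-tree analysis.
    private static readonly Func<AppDbContext, string, IEnumerable<Customer>> ByCity =
        EF.CompileQuery((AppDbContext db, string city) =>
            db.Customers.Where(c => c.City == city));

    public static void Main()
    {
        using var db = new AppDbContext();
        var paris  = ByCity(db, "Paris").ToList();
        var berlin = ByCity(db, "Berlin").ToList();
        Console.WriteLine($"{paris.Count} customers in Paris, {berlin.Count} in Berlin");
    }
}
```

The parameter matters as much as the caching: because `city` is a parameter rather than a literal baked into the tree, the same cached translation serves "Paris" and "Berlin" alike, which is the same shape-reuse described above.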

    20 min
  7. 2 days ago

    Why Dirty Code Always Wins (Until It Doesn't)

Ever notice how the fastest way to ship code is usually the messiest? Logging scattered across controllers, validation stuffed into random methods, and authentication bolted on wherever it happens to work. It feels fast in the moment, but before long the codebase becomes something no one wants to touch. Dirty code wins the short-term race, but it rarely survives the marathon. In this session, we’ll unpack how cross-cutting concerns silently drain your productivity. You’ll hear how middleware and decorator-style wrappers let you strip out boilerplate and keep business logic clean. So how do we stop the rot without slowing down?

Why Messy Code Feels Like the Fastest Code

Picture this: a small dev team racing toward a Friday release. The product owner wants that new feature live by Monday morning. The tests barely pass, discussions about architecture get skipped, and someone says, “just drop a log here so we can see what happens.” Another teammate copies a validation snippet from a different endpoint, pastes it in, and moves forward. The code ships, everyone breathes, and for a moment, the team feels like heroes. That’s why messy code feels like the fastest path. You add logging right where it’s needed, scatter a few try-catch blocks to keep things from blowing up, and copy in just enough validation to stop the obvious errors. The feature gets out the door. The business sees visible progress, users get what they were promised, and the team avoids another design meeting. It’s immediate gratification—a sense of speed that’s tough to resist.

But the cost shows up later. The next time someone touches that endpoint, the logging you sprinkled in casually takes up half the method. The validation you pasted in lives in multiple places, but now each one fails the same edge case in the same wrong way. Debugging a new issue means wading through repetitive lines before you even see the business logic. Something that once felt quick now hides the real work under noise. Take a simple API endpoint that creates a customer record. On paper, it should be clean: accept a request, build the object, and save it. In practice, though, logging lives inside every try-catch block, validation code sits inline at the top of the method, and authentication checks are mixed in before anything else can happen. What should read like “create customer” ends up looking like “log, check, validate, catch, log again,” burying the actual intent. It still functions, it even passes tests, but it no longer reads like business logic—it reads like clutter.
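As a rough sketch of that “log, check, validate, catch, log again” shape (the controller, `CustomerRequest`, and `ICustomerService` types are hypothetical), notice how little of the method is actually about creating a customer:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[ApiController]
[Route("api/customers")]
public class CustomersController : ControllerBase
{
    private readonly ICustomerService _customers;
    private readonly ILogger<CustomersController> _logger;

    public CustomersController(ICustomerService customers, ILogger<CustomersController> logger)
    {
        _customers = customers;
        _logger = logger;
    }

    [HttpPost]
    public async Task<IActionResult> Create(CustomerRequest request)
    {
        _logger.LogInformation("Create customer called");            // logging noise

        if (User.Identity?.IsAuthenticated != true)                  // inline auth check
            return Unauthorized();

        if (string.IsNullOrWhiteSpace(request.Email))                // copy-pasted validation
        {
            _logger.LogWarning("Create customer rejected: missing email");
            return BadRequest("Email is required");
        }

        try
        {
            var id = await _customers.CreateAsync(request);          // the actual business logic
            _logger.LogInformation("Customer {Id} created", id);
            return Ok(id);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Create customer failed");          // catch-and-log wrapper
            return StatusCode(500);
        }
    }
}

public record CustomerRequest(string Name, string Email);

public interface ICustomerService
{
    Task<int> CreateAsync(CustomerRequest request);
}
```

One line does the work; everything around it is infrastructure that will be copied, slightly differently, into the next endpoint.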
So why do teams fall into this pattern, especially in startup environments or feature-heavy sprints? Because under pressure, speed feels like survival. We often see teams choose convenience over architecture when deadlines loom. If the backlog is full and stakeholders expect weekly progress, “just make it work now” feels safer than “design a pipeline for later.” It’s not irrational—it’s a natural response to immediate pressure. And in the short term, it works. Messy coding collapses the decision tree. Nobody has to argue about whether logging belongs in middleware or whether validation should be abstracted. You just type, commit, and deploy. Minutes later, the feature is live. That collapse of choice gives the illusion of speed, but each shortcut adds weight. You’re stacking boxes in the hallway instead of moving them where they belong. At first it’s faster. But as the hallway fills up, every step forward gets harder.

Those shortcuts don’t stay isolated, either. With cross-cutting tasks like logging or authentication, the repetition multiplies. Soon, the same debug log line shows up in twenty different endpoints. Someone fixes validation logic in one spot but misses the other seven. New hires lose hours trying to understand why controllers are crammed with logging calls and retry loops instead of actual business rules. What once supported delivery now taxes every future change. That’s why what feels efficient in the moment is really a deferred cost. Messy code looks like progress, but the debt it carries compounds. The larger the codebase grows, the heavier the interest gets, until the shortcuts block real development. What felt like the fast lane eventually turns into gridlock. The good news: you don’t have to choose between speed and maintainability.

The Rise of Cross-Cutting Concerns

So where does that slowdown really come from? It usually starts with something subtle: the rise of cross-cutting concerns. Cross-cutting concerns are the kinds of features every system needs but that don’t belong inside business logic. Logging, authentication, validation, audit trails, exception handling, telemetry—none of these are optional. As systems grow, leadership wants visibility, compliance requires oversight, and security demands checks at every step. But these requirements don’t naturally sit in the same place as “create order” or “approve transaction.” That’s where the clash begins: put them inline, and the actual intent of your code gets diluted.

The way these concerns creep in is painfully familiar. A bug appears that no one can reproduce, so a developer drops in a log statement. It doesn’t help enough, so they add another deeper in the call stack. Metrics get requested, so telemetry calls are scattered inside handlers. Then security notes a missing authentication check, so it’s slotted in directly before the service call. Over time, the method reads less like concise business logic and more like a sandwich: infrastructure piled on both ends, with actual intent hidden in the middle. Think of a clean controller handling a simple request. In its ideal form, it just receives the input, passes it to a service, and returns a result. Once cross-cutting concerns take over, that same controller starts with inline authentication, runs manual validation, writes a log, calls the service inside a try-catch that also logs, and finally posts execution time metrics before returning the response. It still works, but the business purpose is buried. Reading it feels like scanning through static just to find one clear sentence.

In more regulated environments, the clutter grows faster. A financial application might need to log every change for auditors. A healthcare system must store user activity traces for compliance. Under data protection rules like GDPR, every access and update often requires tracking across multiple services. No single piece of code feels extreme, but the repetition multiplies across dozens or even hundreds of endpoints. What began as a neat domain model becomes a tangle of boilerplate driven by requirements that were never part of the original design. The hidden cost is consistency. On day one, scattering a log call is harmless. By month six, it means there are twenty versions of the same log with slight differences—and changing them risks breaking uniformity across the app. Developers spend time revisiting old controllers, not because the business has shifted, but because infrastructure has leaked into every layer.
The debt piles up slowly, and by the time teams feel it, the price of cleaning up is far higher than it would have been if handled earlier. The pattern is always the same: cross-cutting concerns don’t crash your system in dramatic ways. They creep in slowly, line by line, until they smother the business logic. Adding a new feature should be a matter of expressing domain rules. Instead, it often means unraveling months of accumulated plumbing just to see where the new line of code belongs. That accumulation isn’t an accident—it’s structural. And because the problem is structural, the answer has to be as well. We need patterns that can separate infrastructure from domain intent, handling those recurring concerns cleanly without bloating the methods that matter. Which raises a practical question: what if you could enable logging, validation, or authentication across your whole API without touching a single controller?

Where Design Patterns Step In

This is where design patterns step in—not as academic buzzwords, but as practical tools for keeping infrastructure out of your business code. They give you a structured way to handle cross-cutting concerns without repeating yourself in every controller and service. Patterns don’t eliminate the need for logging, validation, or authentication. Instead, they move those responsibilities into dedicated structures where they can be applied consistently, updated easily, and kept separate from your domain rules. Think back to those bloated controllers we talked about earlier—the ones mixing authentication checks, logs, and error handling right alongside the actual business process. That’s not unusual. It’s the natural byproduct of everyone solving problems locally, with the fastest cut-and-paste solution. Patterns give you an alternative: instead of sprinkling behaviors across dozens of endpoints, you centralize them. You define one place—whether through a wrapper, a middleware component, or a filter—and let it run the concern system-wide. That’s how patterns reduce clutter while protecting delivery speed.

One of the simplest illustrations is the decorator pattern. At a high level, it allows you to wrap functionality around an existing service. Say you have an invoice calculation service. You don’t touch its core method—you keep it focused on the calculation. But you create a logging decorator that wraps around it. Whenever the calculation runs, the decorator automatically logs the start and finish. The original service remains unchanged, and now you can add or remove that concern without touching the domain logic at all. This same idea works for validation: a decorator inspects inputs before handing them off, throwing errors when something looks wrong. Clean separation, single responsibility preserved. Another powerful option, especially in .NET, is middleware…
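A minimal sketch of that invoice example, with hypothetical `IInvoiceCalculator`, `InvoiceCalculator`, and `LoggingInvoiceCalculator` types: the calculation class stays free of logging, and the wrapper can be added or removed purely at registration time.

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public record Invoice(int Id, decimal Subtotal, decimal TaxRate);

public interface IInvoiceCalculator
{
    Task<decimal> CalculateTotalAsync(Invoice invoice);
}

public class InvoiceCalculator : IInvoiceCalculator
{
    // Core business logic only: no logging, no try/catch, no metrics.
    public Task<decimal> CalculateTotalAsync(Invoice invoice)
        => Task.FromResult(invoice.Subtotal * (1 + invoice.TaxRate));
}

public class LoggingInvoiceCalculator : IInvoiceCalculator
{
    private readonly IInvoiceCalculator _inner;
    private readonly ILogger<LoggingInvoiceCalculator> _logger;

    public LoggingInvoiceCalculator(IInvoiceCalculator inner, ILogger<LoggingInvoiceCalculator> logger)
    {
        _inner = inner;
        _logger = logger;
    }

    public async Task<decimal> CalculateTotalAsync(Invoice invoice)
    {
        _logger.LogInformation("Calculating invoice {Id}", invoice.Id);
        var total = await _inner.CalculateTotalAsync(invoice);
        _logger.LogInformation("Invoice {Id} total: {Total}", invoice.Id, total);
        return total;
    }
}
```

Callers keep depending on `IInvoiceCalculator`; whether the logging wrapper is registered around the real calculator is a composition decision (a small factory, or a helper such as Scrutor’s `Decorate`), not something the domain code knows about. And for concerns that belong at the HTTP boundary rather than around a single service, ASP.NET Core middleware plays the same role once for every endpoint; a rough sketch in `Program.cs`:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
var app = builder.Build();

// One request-logging block covers every endpoint, instead of per-controller log lines.
app.Use(async (context, next) =>
{
    app.Logger.LogInformation("Handling {Path}", context.Request.Path);
    await next();
    app.Logger.LogInformation("Finished {Path} with status {Status}",
        context.Request.Path, context.Response.StatusCode);
});

app.MapControllers();
app.Run();
```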

    19 min
  8. 3 days ago

    No-Code vs. Pro-Code: Security Showdown

If your Power App suddenly exposed sensitive data tomorrow, would you know why it happened—or how to shut it down? No-code feels faster, but hidden governance gaps can quietly stack risks. Pro-code offers more control, but with heavier responsibility. We’ll compare how each model handles security, governance, and operational risk so you can decide which approach makes the most sense for your next project. Here’s the path we’ll follow: first, the tradeoff between speed and risk. Then, the different security models and governance overhead. Finally, how each choice fits different project types. Before we jump in, drop one word in the comments—“security,” “speed,” or “integration.” That’s your top concern, and I’ll be watching to see what comes up most. So, let’s start with the area everyone notices first: the speed of delivery—and what that speed might really cost you.

The Hidden Tradeoff: Speed vs. Security

Everyone in IT has heard the promise of shipping an app fast. No long requirements workshops, no drawn-out coding cycles. Just drag, drop, publish, and suddenly a spreadsheet-based process turns into a working app. On the surface, no-code tools like Power Apps make that dream look effortless. A marketing team can stand up a lightweight lead tracker during lunch. An operations manager can create an approval flow before heading home. Those wins feel great, but here’s the hidden tradeoff: the faster things move, the easier it is to miss what’s happening underneath. Speed comes from skipping the natural pauses that force you to slow down. Traditional development usually requires some form of documentation, testing environments, and release planning. With no-code, many of those checkpoints disappear. That freedom feels efficient—until you realize those steps weren’t just administrative overhead. They acted as guardrails. For instance, many organizations lack a formal review gate for maker-built apps, which means risky connectors can go live without anyone questioning the security impact. One overlooked configuration can quietly open a path to sensitive data.

Here’s a common scenario we see in organizations. A regional sales team needs something more dynamic than their weekly Excel reports. Within days, a manager builds a polished dashboard in Power Apps tied to SharePoint and a third-party CRM. The rollout is instant. Adoption spikes. Everyone celebrates. But just a few weeks later, compliance discovers the app replicates European customer data into a U.S. tenant. What looked like agility now raises GDPR concerns. No one planned for a violation. It happened because speed outpaced the checks a slower release cycle would have enforced.

Compare that to the rhythm of a pro-code project. Azure-based builds tend to move slower because everything requires configuration. Networking rules, managed identities, layered access controls—all of it has to be lined up before anyone presses “go live.” It can take weeks to progress from dev to staging. On paper, that feels like grinding delays. But the very slowness enforces discipline. Gatekeepers appear automatically: firewall rules must be met, access has to remain least-privileged, and data residency policies are validated. The process itself blocks you from cutting corners. Frustrating sometimes, but it saves you from bigger cleanup later. That’s the real bargain. No-code buys agility, but the cost is accumulated risk.
Think about an app that can connect SharePoint data to an external API in minutes. That’s productivity on demand, but it’s also a high-speed path for sensitive data to leave controlled environments without oversight. In custom code, the same connection isn’t automatic. You’d have to configure authentication flows, validate tokens, and enable logging before data moves. Slower, yes, but those steps act as security layers. Speed lowers technical friction—and lowers friction on risky decisions at the same time.

The problem is visibility. Most teams don’t notice the risks when their new app works flawlessly. Red flags only surface during audits, or worse, when a regulator asks questions. Every shortcut taken to launch a form, automate a workflow, or display a dashboard has a security equivalent. Skipped steps might not look like trouble today, but they can dictate whether you’re responding to an incident tomorrow. We’ll cover an example policy later that shows how organizations can stop unauthorized data movement before it even starts. That preview matters, because too often people assume this risk is theoretical until they see how easily sensitive information can slip between environments. Mini takeaway: speed can hide skipped checkpoints—know which checkpoints you’re willing to trade for agility. And as we move forward, this leads us to ask an even harder question: when your app does go live, who’s really responsible for keeping it secure?

Security Models: Guardrails vs. Full Control

Security models define how much protection you inherit by default and how much you’re expected to design yourself. In low-code platforms, that usually means working within a shared responsibility model. The vendor manages many of the underlying services that keep the platform operational, while your team is accountable for how apps are built, what data they touch, and which connectors they rely on. It’s a partnership, but one that draws boundaries for you. The upside is peace of mind when you don’t want to manage every technical layer. The downside is running into limits when you need controls the platform didn’t anticipate. Pro-code environments, like traditional Azure builds, sit on the other end of the spectrum. You get full control to implement whatever security architecture your project demands—whether that’s a custom identity system, a tailored logging pipeline, or your own encryption framework. But freedom also means ownership of every choice. There’s no baseline rule stepping in to stop a misconfigured endpoint or a weak password policy. The system is only as strong as the security decisions you actively design and maintain.

Think of it like driving. Low-code is similar to leasing a modern car with airbags, lane assist, and stability control already in place. You benefit from safety features even when you don’t think about them. Pro-code development is like building your own car in a workshop. You decide what protection goes in, but you’re also responsible for each bolt, weld, and safety feature. Done well, it could be outstanding. But if you overlook a detail, nothing kicks in automatically to save you.

This difference shows up clearly in how platforms prevent risky data connections. Many low-code tools give administrators DLP-style controls. These act as guardrails that block certain connectors from talking to others—for example, stopping customer records from flowing into an unknown storage location. The benefit is that once defined, these global policies apply everywhere. Makers barely notice anything; the blocked action just doesn’t go through. But because the setting is broad, it often lacks nuance. Useful cases can be unintentionally blocked, and the only way around it is to alter the global rule, which can introduce new risks. With custom-coded solutions, none of that enforcement is automatic. If you want to restrict data flows, you need to design the logic yourself. That could include implementing your own egress rules, configuring Azure Firewall, or explicitly coding the conditions under which data can move. You gain fine-grained control, and you can address unique edge cases the platform could never cover. But every safeguard you want has to be built, tested, and maintained. That means more work at the front end and ongoing responsibility to ensure it continues functioning as intended.
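A rough sketch of what “coding the conditions yourself” can look like on the pro-code side (the `OutboundDataGateway` class and the approved host are hypothetical): the allow-list, the logging, and the failure behavior only exist because someone wrote, tested, and now maintains them.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public class OutboundDataGateway
{
    // Hypothetical approved destination; anything else is refused.
    private static readonly HashSet<string> AllowedHosts =
        new(StringComparer.OrdinalIgnoreCase) { "api.contoso-crm.example" };

    private readonly HttpClient _http;
    private readonly ILogger<OutboundDataGateway> _logger;

    public OutboundDataGateway(HttpClient http, ILogger<OutboundDataGateway> logger)
    {
        _http = http;
        _logger = logger;
    }

    public async Task<HttpResponseMessage> SendAsync(Uri destination, HttpContent payload)
    {
        if (!AllowedHosts.Contains(destination.Host))
        {
            _logger.LogWarning("Blocked outbound transfer to {Host}", destination.Host);
            throw new InvalidOperationException($"Destination {destination.Host} is not approved.");
        }

        _logger.LogInformation("Sending data to {Host}", destination.Host);
        return await _http.PostAsync(destination, payload);
    }
}
```

The low-code equivalent is a tenant-wide connector policy an administrator defines once; the pro-code version buys nuance and edge-case handling at the price of owning code like this for the life of the system.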
It’s tempting to argue that pre-baked guardrails are always safer, but things become murky once your needs go beyond common scenarios. A global block that prevents one bad integration might also prevent the one legitimate integration your business critically relies on. At that point, the efficiency of inherited policies starts to feel like a constraint. On the other side, the open flexibility of pro-code environments can feel empowering—until you realize how much sustained discipline is required to keep every safeguard intact as your system evolves.

The result is that neither option is a clear winner. Low-code platforms give you protections you didn’t design, consistent across the environment but hard to customize. Pro-code platforms give you control over every layer, but they demand constant attention and upkeep. Each comes with tradeoffs: consistency versus flexibility, inherited safety versus engineered control. Here’s the question worth asking your own team: does your platform give you global guardrails you can’t easily override, or are you expected to craft and maintain every control yourself? That answer tells you not just how your security model works today, but also what kind of operational workload it creates tomorrow. And that naturally sets up the next issue—when something does break, who in your organization actually shoulders the responsibility of managing it?

Governance Burden: Who Owns the Risk?

When people talk about governance, what they’re really pointing to is the question of ownership: who takes on the risk when things inevitably go wrong? That’s where the contrast between managed low-code platforms and full custom builds becomes obvious. In a low-code environment, much of the platform-level maintenance is handled by the vendor. Security patches, infrastructure upkeep, service availability—all of that tends to be managed outside your direct view. For your team, the day-to-day work usually revolves around policy decisions, like which connectors are permissible or how environments are separated. Makers—the business users who build apps—focus almost entirely…

    21 min
