M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

Mirko Peters - Microsoft 365 Expert Podcast

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation
Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit M365.show.

  1. 3 hours ago

    How Telemetry Can Transform Your Business Central Experience

    In today's business world, data drives smart decisions, and an impressive 73% of companies use data to improve how they work. Telemetry is a powerful tool that gives you real-time information so you can make quick, informed decisions. How can you use telemetry to change your Business Central experience? What steps should you take to use this technology well? This blog will answer these questions and more.

Key Takeaways

* Telemetry gives you live data, so you can watch how your system behaves and make fast choices.
* With telemetry, you can find problems early. This stops issues from getting worse, which cuts downtime and boosts service quality.
* Tools like Azure Application Insights help you check performance and understand how users behave.
* Setting up telemetry the right way matters. Follow good practices to make sure monitoring works well.
* Analyzing telemetry data with tools like Power BI deepens your understanding and leads to smarter business choices.

Understanding Telemetry

Telemetry means measuring and sending data automatically from remote locations. Sensors collect different kinds of data, from electrical readings like voltage and current to physical readings like temperature and pressure, and transmit it elsewhere for monitoring and analysis. In Business Central, telemetry is important because it gives you real-time information about how the system is working.

Key Benefits of Telemetry

Using telemetry in business software like Business Central brings several big benefits:

* Performance Monitoring: Telemetry makes it easy to see how well things are working, so you can find errors and areas that need fixing quickly.
* Proactive Issue Detection: You notice problems before they escalate, which means faster fixes and fewer disruptions.
* Enhanced Decision-Making: Data about user behavior helps you decide on new features and how to allocate resources, so you keep improving.

Telemetry lets you collect and study performance data in an organized way, which is key to improving business processes. It gives you insight into how applications perform, finds slow spots, and predicts possible problems, so users get a great experience. Managing things this way improves how well your business runs and how you use resources.

Operational Insights

With telemetry, you get insights that change how you run your business. Real-time dashboards show system health through numbers like session counts, error rates, and query times. By watching these numbers, you can find slowdowns and permission errors before they affect your work. Pairing telemetry with tools like Azure Application Insights and Power BI lets you analyze the data in more depth: you can look closely at a specific extension or see everything at once. That kind of insight helps you make decisions that move your business forward.

Telemetry vs. Traditional Logging

Comparing telemetry with traditional logging reveals big differences that can improve your experience with Dynamics 365 Business Central.

Proactive Monitoring

Telemetry watches your system closely. It captures many activities and gives detailed information about how things are running. Traditional logging often depends on incomplete user reports; telemetry records exact error messages and their details, which helps you spot unusual patterns or behaviors quickly. With real-time monitoring you catch problems right away, which reduces downtime and keeps service quality high. Key differences compared to traditional logging:

* Logs vs. Events: Events carry complete details about a task, while logs might only show parts of that information.
* Use Cases: Log analytics mainly focus on fixing problems. Event analytics give insight into how products perform and how users interact.

Faster Troubleshooting

Telemetry also makes troubleshooting faster. You no longer have to rely on users to report errors: telemetry automatically logs all of them, including errors about user access and app performance, which helps you find and fix issues more quickly. For example, telemetry in Dynamics 365 Business Central lets you monitor field changes by sending signals to Application Insights, where you can run powerful KQL queries that go well beyond traditional event logging. Traditional logging needs manual setup for alerts, while telemetry can automate notifications through tools like Logic Apps or Power Automate.

Practical Applications of Application Insights Telemetry

Performance Monitoring

You can use Azure Application Insights to check how well your Business Central applications are working. The tool tracks the numbers that show how your apps are doing, and by defining custom telemetry events you can send specific trace events to Azure Application Insights. That information helps you assess performance and find areas that need fixing. Two important parts of performance monitoring with telemetry:

* Application Performance Tracking: See how your application performs in real time, find problems quickly, and improve the user experience.
* Feature Usage Logging: The Feature usage module from the System Application lets you log events from extensions, giving insight into how users use different features.

With these tools you get a better view of your application's health and can make informed choices to improve performance and keep users happy.

Error Diagnostics

Telemetry in Business Central also improves error diagnostics. You can collect detailed information about different types of errors and fix problems before they affect users. You can go further by adding extra signals: include the AL stack trace in job queue error signals to show where problems happen, display error messages in English for easier reading, and use the error codes in failed OData calls for troubleshooting. Real-world uses show how effective this is: tracking API call success and failure rates lets you respond quickly to problems, and you can notify your team about unusual increases in response times to keep the user experience high. By adding Azure Application Insights telemetry to your Business Central setup, you change how you monitor performance and diagnose errors, which leads to smoother operations and happier users.

Best Practices for Mastering Telemetry

Setting Up Telemetry

To set up telemetry in your Business Central environment, follow these steps:

* Choose an Azure subscription.
* Create a Resource Group.
* Create a Log Analytics workspace.
* Set up an Azure Application Insights instance in that workspace.
* Get the Application Insights connection string.
* Decide how long to keep data on the Application Insights instance.
* Set a daily cap on the Application Insights instance.
* Connect to Dynamics 365 Business Central using the Admin Center APIs.
* Put the connection string in the Dynamics 365 Business Central environment.

On a server instance you can set the Application Insights connection string with this command:

Set-NAVServerConfiguration -ServerInstance BC200 -KeyName ApplicationInsightsConnectionString -KeyValue 'InstrumentationKey=aaaaaaaa-0b0b-1c1c-2d2d-333333333333;IngestionEndpoint=https://westeurope-1.in.applicationinsights.azure.com/'

Setting everything up correctly is very important. Each Business Central tenant needs its own Application Insights resource, which helps you watch usage closely and control costs. Also make sure you comply with privacy laws by removing personal data.

Analyzing Telemetry Data

Once telemetry is flowing, analyzing the data is just as important, and several tools can help. Connecting Power BI to the telemetry data improves how you monitor system use: you can see daily active users, which parts of the app are used, and details like browser and device types. That view of user activity and system health helps you make better decisions and work more efficiently. Follow these best practices and your telemetry setup will work well, you will use the collected data wisely, and you will catch problems early.

Telemetry changes how you use Business Central. It gives you real-time information, helps you watch for problems, and lets you find issues before they get worse. That makes your work smoother and your choices better. Looking ahead, AI updates are expected to help with tasks like sales orders, and Azure Managed Grafana will strengthen data analysis and monitoring. Check out telemetry solutions today to stay ahead in your business journey! 🚀

FAQ

What is telemetry in Business Central?
Telemetry in Business Central means collecting and sending data automatically. The data shows how the system is working, error rates, and user actions, so you can check your system's health in real time.

How does telemetry improve decision-making?
Telemetry gives you real-time information about system performance. You can analyze user behavior and spot trends, which helps you make smart choices that improve your business.

Can I set up telemetry without technical skills?
You don't need to be an expert, but some technical knowledge helps. You can follow step-by-step guides, and many resources are available. You can also ask IT professionals for help if needed.
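As a concrete illustration of the analysis step above, here is a minimal sketch of pulling Business Central telemetry out of Application Insights with a KQL query from PowerShell. It assumes you have created an API key for the Application Insights resource and noted its Application ID; the IDs, the key, and the exact query are placeholders for illustration, not values from the episode.

```powershell
# Count the most frequent Business Central telemetry events over the last 24 hours.
# Replace the placeholders with your own Application Insights Application ID and API key
# (Application Insights > API Access in the Azure portal).
$appId  = "<application-insights-app-id>"    # placeholder
$apiKey = "<application-insights-api-key>"   # placeholder

# KQL: group trace events by the eventId dimension emitted by Business Central
$kql = @"
traces
| where timestamp > ago(1d)
| summarize occurrences = count() by eventId = tostring(customDimensions.eventId)
| top 10 by occurrences desc
"@

$headers = @{ "x-api-key" = $apiKey }
$uri = "https://api.applicationinsights.io/v1/apps/$appId/query?query=" +
       [uri]::EscapeDataString($kql)

$result = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get

# The response comes back as tables of rows; print eventId / count pairs
$result.tables[0].rows | ForEach-Object { "{0}: {1}" -f $_[0], $_[1] }
```

The same query text can be pasted into the Logs view of the Application Insights resource, or used as the source of a Power BI report like the ones described above.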

    36 minutes
  2. Dynamics 365 Sales Isn’t Just CRM—It’s Your Sales HQ

    8 hours ago

    Dynamics 365 Sales Isn’t Just CRM—It’s Your Sales HQ

    Think your CRM is just a fancy address book? The truth is, many teams still wrestle with manual logging and repetitive admin work instead of actually selling. Before we roll initiative, hit Subscribe so these ecosystem hacks auto-deploy to your feed—no patch window required. Now imagine this instead: your CRM whispering the next best move, drafting client-ready emails, and dropping call summaries straight onto your desk. That’s Copilot in Dynamics 365 Sales. Pair that with Outlook, Teams, and Power Platform plugged directly into your workflow, and you’ve got a real command hub—far more than a Rolodex in the cloud. So let’s talk about why this system isn’t just another CRM. Why This Isn’t Just Another CRM A lot of folks still picture CRM as a clunky filing cabinet with a search bar attached. That mindset leaves reps treating the tool like cold storage for names and notes instead of a command post for selling. The difference matters, because the moment your system stops being passive and starts acting like mission control, you gain actual leverage. Traditional CRMs keep track of calls, emails, and meetings, and they’re decent at showing a list of past actions. But notice the pattern—everything is retrospective. You type, you log, you file, and in exchange you get a static report once the quarter ends. It’s busywork wearing a business suit. In gaming terms, that’s like scribbling your character stats on loose paper while the battle rages on. You might capture history, but you have no live HUD showing where to swing next. Dynamics 365 Sales flips that script. Instead of a flat notebook, it’s more like a dashboard in a game showing health bars on accounts, XP levels on opportunities, and status alerts on what matters now. That one analogy gets the point across: real-time guidance over static notes. The “HQ” framing isn’t just a cute tagline either. It signals a shift from storage to orchestration. Headquarters are where signals arrive, orders are shaped, and teammates coordinate before moving. Microsoft backs this with more than branding—the platform actively invests in AI guidance with Sales Copilot, embedded agents, and extensibility in the current and upcoming release plan. It’s not just holding records; it’s wired to handle the flow of selling itself. Here’s where the HQ idea shows up in action. Instead of staring at blank fields and trying to guess what comes next, D365 can surface a playbook tied to your process. Playbooks, guided sequences, and next-best-action prompts create a worklist so you execute rather than chase scattered tasks. If a buyer opens your proposal, the system doesn’t just log the view—it nudges you to follow up with the right context. That replaces the haunting question of “what now?” with a clear sequence you can trust. And because everything connects, the HQ pulls signals from deals, calls, emails, and customer interactions into one view. You’re not juggling seven different apps to puzzle together the situation. Instead, insights and scoring surface in one console. That matters, because it cuts out manual overhead. Instead of slogging through updates like a secretary with a stack of forms, you scroll through a prioritized task list and act. The grunt work is offloaded, the decision-making stays with you. It’s worth spelling out the contrast. A record-keeper CRM tells you what already happened. A Sales HQ tells you what deserves your attention right now and with which tactic. 
Guided selling sequences, AI scoring, and task lists turn it into the tactical console, so every action counts. Once you run a few turns from that playbook, going back to static spreadsheets feels like a natural 1. That’s what earns it the “mission control” label. It transforms the feel of selling—less keyboard logging, more strategic steering. The HQ becomes the place you check for situational awareness, confident that all your comms, data points, and nudges are consolidated. With fewer clicks and cleaner signals, reps stop drowning in inputs and start executing with pace. But of course, even the best headquarters can feel distant if you have to travel back and forth just to use it. Which leads to the next real challenge: your daily workflow is already split between Outlook, Teams, and whatever else is screaming for your attention. So what happens when the HQ doesn’t sit apart at all, but pipes directly into the tools already fighting for space on your screen? No More Tab-Hopping: Outlook and Teams Built In How many windows do you juggle before lunch? A draft email half-written, CRM data hiding in another tab, Teams chat pinging like a party member spamming emotes. It’s not multitasking—it’s a tab zoo, and every extra switch pulls you out of rhythm. That friction adds up. Type a client email, realize you need account notes, bounce to CRM, copy details, hop back to Outlook—and by then Teams has already thrown you another “quick” question. It seems small, but it’s the drip damage that drains your focus bar one point at a time. Dynamics 365 Sales stitches those loose ends together. With Outlook integration, the context you always chase—deal stage, last meeting notes, open opportunities—sits right beside the email you’re drafting. You don’t alt‑tab. You don’t paste numbers back and forth. Copilot even goes further: it can summarize long client emails into the key points, suggest whether to track that message against a record, and draft a smart reply based on past interactions and your calendar. You stay in one window, but the system makes it feel like you have a support NPC feeding you intel in your ear. Teams joins the party the same way. Conversations stop becoming scavenger hunts. If a teammate pings “Who owns this account?” you no longer wait while somebody digs. The record is visible in‑chat, synced from Dynamics. For bigger deals, you can even spin up a dedicated deal room in Teams, tied directly to the opportunity in CRM. That room collects documents, stakeholders, notes, and chat threads—all linked, all live. Everyone sees the same board, no matter if they’re using Dynamics every day or not. The result is less about cutting clicks and more about keeping momentum. Instead of losing the flow because you’re checking three dashboards, the right data stands next to the conversation where you need it. Email threads show account insights. Chat threads show customer records. One screen, one context, no wasted rolls fumbling through menus. It also shifts adoption. Because Dynamics shows up inside Outlook and Teams—tools you already live in—the CRM stops being that separate place you dread updating. Tracking an email or logging a meeting becomes a natural extension of writing the message or joining the call. And because Copilot can input updates or draft responses on your behalf, the overhead shrinks even further. You’re not translating game notes back into the rulebook—the notes score themselves. That’s where the payoff hides. 
Every time you stay in context, you avoid the micro‑delays that chip away at an hour. Those reclaimed minutes compound into actual selling time. You don’t just feel less scattered—you are less scattered, because the platforms that normally compete for your attention now cooperate inside one frame. So Outlook stops being just an inbox. Teams stops being just a noisy chat queue. Together with Dynamics, they act like extensions of your HQ—spaces where action and record‑keeping overlap without you thinking about it. The tab zoo gets tamed into one coherent workspace. But that raises a new twist. If your CRM data already lives inside email and chat, what becomes of all the dashboards and long reports managers love? Do they still rule the strategy, or are they now background noise? And tucked inside that question is the next upgrade—because once the system stops just showing you data and starts guiding your moves, you’re no longer the only one calling plays. Copilot: Your Pipeline’s Dungeon Master Picture your pipeline with a Dungeon Master at the table—not rolling the dice for you, but laying out the map, marking the traps, and pointing to the treasure chest that’s actually worth opening. That’s Copilot inside Dynamics 365 Sales. It doesn’t replace your choices; it scores, prioritizes, and recommends, leaving you in command of every move. Here’s the common grind. A pipeline stacked with fifty names looks like a spreadsheet dungeon—rows of numbers, stages, and half-written notes that blur together after two minutes. Everyone says they’ll prioritize, then ends up chasing the loudest deal or the shiniest logo. Without help, deciding where to swing next feels like guessing with a blindfold. Copilot cuts through that fog. It looks at the same clutter you do, then assigns scores that highlight where effort pays off. Leads get ranked by likelihood to convert. Opportunities get graded, complete with relationship health estimates that flag if a client’s been ghosting. You don’t get a mystery wall of records—you get clear signals on where attention drives results. That scoring is paired with next-step suggestions, surfaced from the history of calls and emails. Instead of hunting through logs, you see “follow up now, reference last week’s proposal, and answer the client’s pending question.” It’s tactical advice, not crystal-ball theatrics. Think of it like heading into combat while a rogue in the party whispers which enemy is carrying healing potions. The strike is yours to make, but you make it with better odds because the data isn’t drowning you. You log in, Copilot already highlights where actions gain the most XP. And preparation—the time sink nobody misses—gets lighter too. Normally before a client call, you scramble through email threads, scrape LinkedIn, and re-read notes to avoid asking something obv

    18 minutes
  3. Licensing Nightmares: Why Self-Service BI Costs More Than You Think

    20 hours ago

    Licensing Nightmares: Why Self-Service BI Costs More Than You Think

    Licensing is not the footnote in your BI strategy—it’s the horror movie twist nobody sees coming. One month you feel empowered with Fabric; the next your CFO is asking why BI costs more than your ERP system. It’s not bad math; it’s bad planning. The scariest part? Many organizations lack clear approval paths or policies for license purchasing, so expenses pile up before anyone notices. Stick around—we’re breaking down how to avoid that mess with three fixes: Fabric Domains to control sprawl, a Center of Excellence to stop duplicate buys, and shared semantic models with proper licensing strategy. And once you see how unchecked self-service plays out in real life, the picture gets even messier. The Wild West of Self-Service BI Welcome to the Wild West of Self-Service BI. If you’ve opened a Fabric tenant and seen workspaces popping up everywhere, you already know the story: one team spins up their own playground, another duplicates a dataset, and pretty soon your tenant looks like a frontier town where everyone builds saloons but nobody pays the tax bill. At first glance, it feels empowering—dashboards appear faster, users skip the IT line, and folks cheer because they finally own their data. On the surface, it looks like freedom. But freedom isn’t free. Each one of those “just for us” workspaces comes with hidden costs. Refreshes multiply, storage stacks up, and licensing lines balloon. Think of it like everyone quietly adding streaming subscriptions on the corporate card—individually small, collectively eye-watering. The real damage doesn’t show up until your finance team opens the monthly invoice and realizes BI costs are sprinting ahead of plan. Here’s where governance makes or breaks you. A new workspace doesn’t technically require Premium capacity or PPU by default, but without policies and guardrails, users create so many of them that you’re forced to buy more capacity or expand PPU licensing just to keep up. That’s how you end up covering demand you never planned for. The sprawl itself becomes the driver of the bill, not any one big purchase decision. I’ve seen it firsthand—a sales team decided to bypass IT to launch their own revenue dashboard. They cloned central datasets into a private workspace, built a fresh semantic model, and handed out access like candy. Everyone loved the speed. Nobody noticed the cost. Those cloned datasets doubled refresh cycles, doubled storage, and added a fresh patch of licensing usage. It wasn’t malicious, just enthusiastic, but the outcome was the same: duplicated spend quietly piling up until the financial report hit leadership. This is the exact trade-off of self-service BI: speed versus predictability. You get agility today—you can spin up and ship reports without IT hand-holding. But you sacrifice predictability because sprawl drives compute, storage, and licensing up in ways you can’t forecast. It feels efficient right now, but when the CEO asks why BI spend exceeds your CRM or ERP, the “empowerment” story stops being funny. The other side effect of uncontrolled self-service? Conflicting numbers. Different teams pull their own versions of revenue, cost, or headcount. Analysts ask why one chart says margin is 20% and another claims 14%. Trust in the data erodes. When the reporting team finally gets dragged back in, they’re cleaning up a swamp of duplicated models, misaligned definitions, and dozens of half-baked dashboards. 
Self-service without structure doesn’t just blow up your budget—it undermines the very reason BI exists: consistent, trusted insight. None of this means self-service is bad. In fact, done right, it’s the only way to keep up with business demand. But self-service without guardrails is like giving every department a credit card with no limit. Eventually someone asks who’s paying the tab, and the answer always lands in finance. That’s why experts recommend rolling out governance in iterations—start light, learn from the first wave of usage, and tighten rules as adoption grows. It’s faster than over-centralizing but safer than a free-for-all. So the bottom line is simple: Fabric self-service doesn’t hand you cost savings on autopilot. It hands you a billing accelerator switch. Only governance determines whether that switch builds efficiency or blows straight through your budget ceiling. Which brings us to the next step. If giving everyone their own workbench is too chaotic, how do you maintain autonomy without burning cash? One answer is to rethink ownership—not in terms of scattered workspaces, but in terms of fenced-in domains. Data Mesh as Fencing, Not Policing Data Mesh in Fabric isn’t about locking doors—it’s about putting up fences. Not the barbed-wire kind, but the sort that gives people space without letting them trample the neighbor’s garden. Fabric calls these “Domains.” They let you define who owns which patch of data, catalog trusted datasets as products, and give teams the freedom to build reports without dragging half the IT department into every request. Think of it less as policing and more as building yards: you’re shaping where work happens so licensing and compute don’t spiral out of control. Here’s the plain-English version. In Fabric, a domain is just a scoped area of ownership. Finance owns revenue data. HR owns headcount. Sales owns pipeline. Each business unit is responsible for curating, publishing, and certifying its own data products. With Fabric Domains, you can assign owners, set catalog visibility, and document who’s accountable for quality. That way, report writers don’t keep cloning “their own” revenue model every week—the domain already provides a certified one. Users still self-serve, but now they do it off a central fence instead of pulling random copies into personal workspaces. If you’ve ever lived through the opposite, you know it hurts. Without domains, every report creator drags their own version of the same dataset into a workspace. Finance copies revenue. Sales copies revenue. Ops copies it again. Pretty soon, refresh times triple, storage numbers look like a cloud mining operation, and you feel forced to throw more Premium capacity at the problem. That’s not empowerment—it’s waste disguised as progress. Here’s the kicker: people assume decentralization itself is expensive. More workspaces, more chaos, more cost… right? Wrong. Microsoft’s governance guidance flat-out says the problem isn’t decentralization—it’s bad decentralization. If every domain publishes its own certified semantic model, one clean refresh can serve hundreds of users. You skip the twelve duplicate refresh cycles chewing through capacity at 2 a.m. The waste only comes when nobody draws boundaries. With proper guardrails, decentralization actually cuts costs because you stop paying for cloned storage and redundant licenses. Let’s put it in story mode. I once audited a Fabric tenant that looked clean on the surface. Reports ran, dashboards dazzled, nothing was obviously broken. 
But under the hood? Dozens of different revenue models sitting across random workspaces, each pulling from the same source system, each crunching refresh jobs on its own. Users thought they were being clever. Finance thought they were being agile. In reality, they were just stacking hidden costs. When we consolidated to one finance-owned semantic model, licensed capacity stabilized overnight. Costs stopped creeping, and the CFO finally stopped asking why Power BI was burning more dollars than CRM. And here’s the practical fix most teams miss: stop the clones at the source. In Fabric, you can endorse semantic models, mark them as discoverable in the OneLake catalog, and turn on Build permission workflows. That way, when a sales analyst wants to extend the revenue model, they request Build rights on the official version instead of dragging their own copy. Small config step, big financial payoff—because every non-cloned model is one less refresh hammering capacity you pay for. The math is simple: trusted domains + certified semantic models = predictable spend. Everybody still builds their own reports, but they build off the same vetted foundation. IT doesn’t get crushed by constant “why isn’t my refresh working” tickets, business teams trust the numbers, and finance doesn’t walk into another budget shock when Azure sends the monthly bill. Domains don’t kill freedom—they cut off the financial bleed while letting users innovate confidently. Bottom line, Data Mesh in Fabric works because it reframes governance. You’re not telling people “no.” You’re telling them “yes, through here.” Guardrails that reduce duplication, published models that scale, and ownership that keeps accountability clear. Once you set those fences, the licensing line on your budget actually starts to look like something you can defend. And while fenced yards keep the chaos contained, you still need someone walking the perimeter, checking the gates, and making sure the same mistakes don’t repeat in every department. That role isn’t about being the fun police—it’s about coordinated cleanup, smarter licensing, and scaling the good practices. Which is exactly where a Center of Excellence comes in. The Center of Excellence: Your Licensing SWAT Team Think of the Center of Excellence as your licensing SWAT team. Not the Hollywood kind dropping out of helicopters, but the squad that shows up before every department decides their dashboard needs a separate budget line. Instead of confiscating workspaces or wagging fingers, they’re more like a pit crew—tightening bolts, swapping tires, and keeping the engine from catching fire. And in this case, the “engine” is your licensing costs before they spin out of control. Here’s the problem: every department believes they’re an exception. HR thinks their attrition dash
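If you want to run the kind of tenant audit described here yourself, below is a minimal sketch using the MicrosoftPowerBIMgmt PowerShell module to inventory workspaces and count the datasets in each one. It assumes you have the module installed and a Power BI / Fabric administrator account; dataset counts are only a rough proxy for cloned semantic models, so treat the output as a starting point for the cleanup conversation rather than a verdict.

```powershell
# Requires the MicrosoftPowerBIMgmt module and a Power BI / Fabric admin account
# Install-Module MicrosoftPowerBIMgmt -Scope CurrentUser
Connect-PowerBIServiceAccount | Out-Null

# Pull every workspace in the tenant (admin scope), then count datasets per workspace
$workspaces = Get-PowerBIWorkspace -Scope Organization -All

$report = foreach ($ws in $workspaces) {
    $datasets = Get-PowerBIDataset -Scope Organization -WorkspaceId $ws.Id
    [pscustomobject]@{
        Workspace    = $ws.Name
        Type         = $ws.Type
        DatasetCount = @($datasets).Count
    }
}

# Workspaces carrying the most datasets are the first place to look for duplicated models
$report | Sort-Object DatasetCount -Descending | Select-Object -First 20
```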

    19 minutes
  4. The Azure CAF Nobody Follows (But Should)

    1 day ago

    The Azure CAF Nobody Follows (But Should)

    We’re promised six clean stages in Azure’s Cloud Adoption Framework: Strategy, Plan, Ready, Adopt, Govern, Manage. Sounds simple, right? Microsoft technically frames CAF as foundational phases plus ongoing operational disciplines, but let’s be honest — everyone just wants to know what breaks in the real world. I’ll focus on the two that trip people fastest: Strategy and Plan. In practice, Strategy turns into wish lists, Ready turns into turf wars over networking, and Governance usually appears only after an auditor throws a fit. Subscribe at m365 dot show for templates that don’t rot in SharePoint. So let’s start where it all falls apart: that first Strategy doc. The 'Strategy' Stage Nobody Reads Twice The so‑called Strategy phase is where most cloud journeys wobble before they even get going. On paper, Microsoft says this step is about documenting your motivations and outcomes. That’s fair. In reality, the “strategy doc” usually reads like someone stuffed a bingo card full of buzzwords—digital transformation, future‑proofing, innovation at scale—and called it a plan. It might look slick on a slide, but it doesn’t tell anyone what to actually build. The problem is simple: teams keep it too high‑level. Without measurable outcomes and a real link to workloads, the document is just poetry. A CIO can say, “move faster with AI,” but without naming the application or service, admins are left shrugging. Should they buy GPUs, rewrite a legacy app, or just glue a chatbot into Outlook signatures? If the words can mean anything, they end up meaning nothing. Finance spots the emptiness right away. They’re staring at fluffy phrases like “greater agility” and thinking, “where are the numbers?” And they’re right. CAF guidance and every piece of industry research says the same thing: strategies stall when leaders don’t pin outcomes to actual workloads and measurable business impact. If your only goal is “be more agile,” you won’t get far—because no one funds or builds around vibes. This is why real strategy should sound less like a vision statement and more like a to‑do list with metrics attached. One strong example: “Migrate identified SQL workloads onto Azure SQL Managed Instance to cut on‑prem licensing costs and simplify operations.” That sentence gives leadership something to measure, tells admins what Azure service to prepare, and gives finance a stake in the outcome. Compare that to “future‑proof our data layer” and tell me which one actually survives past the kickoff call. The CAF makes this easier if you actually pick up its own tools. There’s a strategy and plan template, plus the Cloud Adoption Strategy Evaluator, both of which are designed to help turn “motivations” into measurable business outcomes. Not fun to fill out, sure, but those worksheets force clarity. They ask questions like: What’s the business result? What motivates this migration? What’s the cost pattern? Suddenly, your strategy ties to metrics finance can understand and guardrails engineering can build against. When teams skip that, the fallout spreads fast. The landing zone design becomes a mess because nobody knows which workloads will use it. Subscription and networking debates drag on endlessly because no one agreed what success looks like. Security baselines stay abstract until something breaks in production. Everything downstream suffers from the fact that Strategy was written as copy‑paste marketing instead of a real playbook. I’ve watched organizations crash CAF this way over and over. 
And every time, the pattern is the same: endless governance fights, firefighting in adoption, endless meetings where each group argues, “well I thought…” None of this is because Azure doesn’t work. It’s because the business strategy wasn’t grounded in what to migrate, why it mattered, and what to measure. Building a tighter strategy doesn’t mean writing a 50‑page appendix of jargon. It means translating leadership’s slogans into bite‑sized commitments. Instead of “we’ll innovate faster,” write, “stand up containerized deployments in Azure Kubernetes Service to improve release cycles.” Don’t say “increase resilience.” Say, “implement Azure Site Recovery so payroll can’t go offline longer than 15 minutes.” Short, direct, measurable. Those are the statements people can rally around. That’s really the test: can a tech lead, a finance analyst, and a business sponsor all read the strategy document and point to the same service, the same workload, and the same expected outcome? If yes, you’ve just unlocked alignment. If no, then you’re building on sand, and every later stage of CAF will feel like duct tape and guesswork. So, trim the fluff, nail the three ingredients—clear outcome, named workload, linked Azure service—and use Microsoft’s own templates to force the discipline. Treat Strategy as the foundation, not the marketing splash page. Now, even if you nail that, the next question is whether the numbers actually hold up. Because unlike engineers, CFOs won’t be swayed by slides covered in promises of “synergy.” They want to see how the math works out—and that’s where we hit the next make‑or‑break moment in CAF. The Business Case CFOs Actually Believe You know what gets zero reaction in a CFO meeting? A PowerPoint filled with “collaboration synergies” and pastel arrows pointing in circles. That stuff is basically CFO repellant. If you want the finance side to actually lean forward, you need to speak in their language: concrete numbers, clear timelines, and accountability when costs spike. That’s exactly where the CAF’s Plan phase either makes you look credible or exposes you as an amateur. On paper, the Plan phase is straightforward. Microsoft tells you to evaluate financial considerations, model total cost of ownership, map ROI, and assign ownership. Sounds simple. But in practice? Teams often treat “build a business case” as an excuse to recycle the same empty jargon from the strategy doc. They’ll throw words like “innovation at scale” into a deck and call it evidence. To finance, that’s not a plan. That’s the horoscope section wearing a suit. Here’s the shortcut failure I’ve seen firsthand. A migration team promised cost savings in a glossy pitch but hadn’t even run an Azure Migrate assessment or looked at Reserved Instances. When finance asked for actual projections, they had nothing. The CFO torched the proposal on the spot, and months later half their workloads are still running in a half-empty data center. The lesson: never promise savings you can’t model, because finance will kill it instantly. So, what do CFOs actually want? It boils down to three simple checkpoints. First: the real upfront cost, usually the bill you’ll eat in the next quarter. No fluffy “ranges,” just an actual number generated from Azure Migrate or the TCO calculator. Second: a break-even timeline that shows when the predicted savings overtake the upfront spend. Saying “it’s cheaper long term” doesn’t work unless you pin dates to it. Third: accountability for overages. Who takes the hit if costs balloon? 
Without naming an owner, the business case looks like fantasy budgeting. CAF is crystal clear here: the Plan phase is about evaluating financial considerations and building a case that ties cloud economics to business outcomes. That means actually using the tools Microsoft hands you. Run an Azure Migrate assessment to get a defensible baseline of workload costs. Use the TCO calculator to compare on-prem numbers against Azure, factoring in cost levers like Reserved Instances, Savings Plans, and the Azure Hybrid Benefit. Then put those values into a model that finance understands—upfront expense, break-even point, and long-term cost control tied back to the workloads you already named in strategy. And don’t stop with raw numbers. Translate technical optimizations into measurable impacts that matter outside IT. Example: adopting Reserved Instances doesn’t just “optimize compute.” It locks cost predictability for three years, which finance translates into stable budgets. Leveraging Hybrid Use Benefit isn’t just “reduced licensing waste.” It changes the line item on your quarterly bill. Automating patching through Azure reduces ticket volume, and that directly cuts service desk hours, which is payroll savings the finance team can measure. These aren’t abstract IT benefits—they’re business outcomes written as numbers. Here’s why that shift works: IT staff often get hyped about words like “containers” or “zero trust.” Finance doesn’t. They respond when you connect those projects to reduced overtime hours, lower software licensing, or avoidance of capital hardware purchases. The CAF framework is designed to help you make those connections, but you actually have to fill in the models and show the math. Run the scenarios, document the timelines, and make overspend ownership explicit. That’s the difference between a CFO hearing “investment theater” and a CFO signing off budget. Bottom line: if you can walk into a boardroom and say, “Here’s next quarter’s Azure bill, here’s when we break even, and here’s who owns risk if we overspend,” you’ll get nods instead of eye-rolls. That’s a business case a CFO can actually believe. But the Plan phase doesn’t automatically solve the next trap. Even the best strategy and cost model often end up filed away in SharePoint, forgotten within weeks. The numbers may be solid, but they don’t mean much if nobody reopens the document once the project starts rolling. The Forgotten Strategy That Dies in SharePoint Here’s the quiet killer in most CAF rollouts: the strategy that gets filed away after kickoff and never looked at again. The so‑called north star ends up parked

    20 minutes
  5. Unlocking Power BI: The True Game Changer for Teams

    1 day ago

    Unlocking Power BI: The True Game Changer for Teams

    You ever feel like your data is scattered across 47 different dungeons, each guarded by a cranky boss? That’s most organizations today—everyone claims to be data-driven, but in practice, they’re just rolling saving throws against chaos. Here’s what you’ll get in this run: the key Power BI integrations already inside Microsoft 365, the roadmap feature that finally ends cross-department fights, and three concrete actions you can take to start wielding this tool where you already work. Power BI now integrates with apps like Teams, Excel, PowerPoint, Outlook, and SharePoint. That means your “legendary gear” is sitting inside the same backpack you open every day. Before we roll initiative, hit Subscribe to give yourself advantage later. So, with that gear in mind, let’s step into the dungeon and face the real boss: scattered data. The Boss Battle of Scattered Data Think of your organization’s data as treasure, but not the kind stored neatly in one vault. It’s scattered across different dungeons, guarded by mini-bosses, and half the time nobody remembers where the keys are. One knight drags around a chest of spreadsheets. A wizard defends a stash of dashboards. A ranger swears their version is the “real” truth. The loot exists, but the party wastes hours hauling it back to camp and comparing notes. That’s not synergy—it’s just running multiple raids to pick up one rusty sword. Many organizations pride themselves on being “data-driven,” but in practice, each department drives its own cart in a different direction. Finance clings to spreadsheets—structured but instantly outdated. Marketing lives in dashboards—fresh but missing half the context. Sales relies on CRM reports—clean, but never lining up with anyone else’s numbers. What should be one shared storyline turns into endless reconciliations, emails, and duplicated charts. On a natural 1, you end up with three “final” reports, each pointing at a different reality. Take a simple but painful example. Finance builds a quarterly projection filled with pivot tables and colorful headers. Sales presents leadership with a dashboard that tells another story. The numbers clash. Suddenly you’re in emergency mode: endless Teams threads, late-night edits, and that file inevitably renamed “FINAL-REVISION-7.” The truth isn’t gone—it’s just locked inside multiple vaults, and every attempt to compare versions feels like carrying water in a colander. The hours meant for decisions vanish in patching up divergent views of reality. Here’s the part that stings: the problem usually isn’t technology. The tools exist. The choke point is culture. Teams treat their data like personal loot instead of shared guild gear. And when that happens, silos form. Industry guidance shows plenty of companies already have the data—but not the unified systems or governance to put it to work. That’s why solutions like Microsoft Fabric and OneLake exist: to create one consistent data layer rather than a messy sprawl of disconnected vaults. The direct cost of fragmentation isn’t trivial. Every hour spent reconciling spreadsheets is an hour not spent on action. A launch slips because operations and marketing can’t agree on the numbers. Budget approvals stall because confidence in the data just isn’t there. By the time the “final” version appears, the window for decision-making has already closed. That’s XP lost—and opportunities abandoned. And remember, lack of governance is what fuels this cycle. When accuracy, consistency, and protection aren’t enforced, trust evaporates. 
That’s why governance tools—like the way Power BI and Microsoft Purview work together—are so critical. They keep the party aligned, so everyone isn’t second-guessing whether their spellbook pages even match. The bottom line? The villain here isn’t a shortage of reports. It’s the way departments toss their loot into silos and act like merging them is optional. That’s the boss fight: fragmentation disguised as normal business. And too often the raid wipes not because the boss is strong, but because the party can’t sync their cooldowns or agree on the map. So how do you stop reconciling and start deciding? Enter the weapon most players don’t realize is sitting in their backpack—the one forged directly into Microsoft 365. Power BI as the Legendary Weapon Power BI is the legendary weapon here—not sitting on a distant loot table, but integrating tightly with the Microsoft 365 world you already log into each day. That matters, because instead of treating analytics as something separate, you swing the same blade where the battles actually happen. Quick licensing reality check: some bundles like Microsoft 365 E5 include Power BI Pro, but many organizations still need separate Power BI licenses or Premium capacity if they want full access. It’s worth knowing before you plan the rollout. Think about the Microsoft 365 apps you already use—Teams, Excel, PowerPoint, Outlook, and SharePoint. Those aren’t just town squares anymore; they’re the maps where strategies form and choices get made. Embedding Power BI into those apps is a step-change. You’re not alt-tabbing for numbers; you’re seeing live reports in the same workspace where the rest of the conversation runs. It’s as if someone dropped a stocked weapon rack right next to the planning table. The common misstep is that teams still see Power BI as an optional side quest. They imagine it as a separate portal for data people, not a main slot item for everybody. That’s like holding a legendary sword in your bag but continuing to swing a stick in combat. The “separate tool” mindset keeps adoption low and turns quick wins into overhead. In practice, a lot of the friction comes from context switching—jumping out of Teams to load a dashboard somewhere else. Embedding directly in Teams, Outlook, or Excel cuts out that friction and ensures more people actually use the analytics at hand. Picture this: you’re in a Teams thread talking about last quarter’s sales. Instead of pasting a screenshot or digging for a file, you drop in a live Power BI report. Everyone sees the same dataset, filters it in real time, and continues the discussion without breaking flow. Move over to Excel and the theme repeats. You connect directly to a Power BI dataset, and your familiar rows and formulas now update from a live source instead of some frozen export. Same with Outlook—imagine opening an email summary that embeds an interactive visual instead of an attachment. And in SharePoint or PowerPoint, the reports become shared objects, not static pictures. Once you see it in daily use, the “why didn’t we have this before” moment hits hard. There’s a productivity kicker too. Analysts point out that context switching bleeds attention. Each app jump is a debuff that saps focus. Embed the report in flow, and you cancel the debuff. Adoption then becomes invisible—nobody’s “learning a new tool,” they’re just clicking the visuals in the workspace they already lived in. 
That design is why embedding reduces context-switch friction, which is one of the biggest adoption blockers when you’re trying to spread analytics beyond the BI team. And while embedding syncs the daily fight, don’t forget the larger battlefield. For organizations wrestling with massive data silos, Microsoft Fabric with its OneLake component extends what Power BI can do. Fabric creates the single data fabric that Power BI consumes, unifying structured, unstructured, and streaming data sources at enterprise scale. You need that if you’re aiming for true “one source of truth” instead of just prettier spreadsheets on top of fractured backends. Think of embedding as putting a weapon in each player’s hands, and Fabric as the forge that builds a single, consistent armory. What shifts once this weapon is actually equipped? Managers stop saying, “I’ll check the dashboard later.” They make calls in the same window where the evidence sits. Conversations shorten, decisions land faster, and “FINAL-REVISION-7” dies off quietly. Collaboration looks less like a patchwork of solo runs and more like a co-op squad progressing together. Next time someone asks for proof in a meeting, you’ve already got it live in the same frame—no detours required. On a natural 20, embedding Power BI inside Microsoft 365 apps doesn’t just give you crit-level charts, it changes the rhythm of your workflow. Data becomes part of the same loop as chat, email, docs, and presentations. And if you want to see just how much impact that has, stick around—because the next part isn’t about swords at all. It’s about the rare loot drops that come bundled with this integration, the three artifacts that actually alter how your guild moves through the map. The Legendary Loot: Three Game-Changing Features Here’s where things get interesting. Power BI in Microsoft 365 isn’t just about shaving a few clicks off your workflow—it comes with three features that feel like actual artifacts: the kind that change how the whole party operates. These aren’t gimmicks or consumables; they’re durable upgrades. The first is automatic surfacing of insights. Instead of building every query by hand, Power BI now uses AI features—like anomaly detection, Copilot-generated summaries, and suggested insights—to flag spikes, dips, or outliers as soon as you load a report. Think finance reviewing quarterly results: instead of stitching VLOOKUP chains and cross-checking old exports, the system highlights expense anomalies right away. The user doesn’t have to “magically” expect the platform to learn their patterns; they just benefit from built-in AI pointing out what’s worth attention. It’s like having a rogue at the table whispering, “trap ahead,” before you blunder into it. The second is deeper integratio

    18 minutes
  6. Survive Your First D365 API Call (Barely)

    2 days ago

    Survive Your First D365 API Call (Barely)

    Summary Making your first Dynamics 365 Finance & Operations API call often feels like walking through a minefield: misconfigured permissions, the wrong endpoints, and confusing errors can trip you up before you even start. In this episode, I break down the process step by step so you can get a working API call with less stress and fewer false starts. We’ll start with the essentials: registering your Azure AD app, requesting tokens, and calling OData endpoints for core entities like Customers, Vendors, and Invoices. From there, we’ll look at when you need to go beyond OData and use custom services, how to protect your endpoints with the right scopes, and the most common mistakes to avoid. You’ll hear not just the “happy path,” but also the lessons learned from failed attempts and the small details that make a big difference. By the end of this episode, you’ll have a clear mental map of how the D365 API landscape works, what to do first, and how to build integrations that can survive patches, audits, and real-world complexity. What You’ll Learn * How to authenticate with Azure AD and request a valid access token * The basics of calling OData endpoints for standard CRUD operations * When and why to use custom services instead of plain OData * Best practices for API security: least privilege, error handling, monitoring, and throttling * Common mistakes beginners make — and how to avoid them Guest No guest this time — just me, guiding you through the process. Full Transcript You’ve got D365 running, and management drops the classic: “Integrate it with that tool over there.” Sounds simple, right? Except misconfigured permissions create compliance headaches, and using the wrong entity can grind processes to a halt. That’s why today’s survival guide is blunt and step‑by‑step. Here’s the roadmap: one, how to authenticate with Azure AD and actually get a token. Two, how to query F&O data cleanly with OData endpoints. Three, when to lean on custom services—and how to guard them so they don’t blow up on you later. We’ll register an app, grab a token, make a call, and set guardrails you can defend to both your CISO and your sanity. Integration doesn’t need duct tape—it needs the right handshake. And that’s where we start. Meet the F&O API: The 'Secret Handshake' Meet the Finance and Operations API: the so‑called “secret handshake.” It isn’t black magic, and you don’t need to sacrifice a weekend to make it work. Think of it less like wizardry and more like knowing the right knock to get through the right door. The point is simple: F&O won’t let you crawl in through the windows, but it will let you through the official entrance if you know the rules. A lot of admins still imagine Finance and Operations as some fortress with thick walls and scary guards. Fine, sure—but the real story is simpler. Inside that fortress, Microsoft already built you a proper door: the REST API. It’s not a hidden side alley or a developer toy. It’s the documented, supported way in. Finance and Operations exposes business data through OData/REST endpoints—customers, vendors, invoices, purchase orders—the bread and butter of your ERP. That’s the integration path Microsoft wants you to take, and it’s the safest one you’ve got. Where do things go wrong? It usually happens when teams try to skip the API. You’ve seen it: production‑pointed SQL scripts hammered straight at the database, screen scraping tools chewing through UI clicks at robot speed, or shadow integrations that run without anyone in IT admitting they exist. 
Those shortcuts might get you quick results once or twice, but they’re fragile. They break the second Microsoft pushes a hotfix, and when they break, the fallout usually hits compliance, audit, or finance all at once. In contrast, the API endpoints give you a structured, predictable interface that stays supported through updates. Here’s the mindset shift: Microsoft didn’t build the F&O API as a “bonus” feature. This API is the playbook. If you call it, you’re supported, documented, and when issues come up, Microsoft support will help you. If you bypass it, you’re basically duct‑taping integrations together with no safety net. And when that duct tape peels off—as it always does—you’re left explaining missing transactions to your boss at month‑end close. Nobody wants that. Now, let’s get into what the API actually looks like. It’s RESTful, so you’ll be working with standard HTTP verbs: GET, POST, PATCH, DELETE. The structure underneath is OData, which basically means you’re querying structured endpoints in a consistent way. Every major business entity you care about—customers, vendors, invoices—has its shelf. You don’t rummage through piles of exports or scrape whatever the UI happens to show that day. You call “/Customers” and you get structured data back. Predictable. Repeatable. No surprises. Think of OData like a menu in a diner. It’s not about sneaking into the kitchen and stirring random pots. The menu lists every dish, the ingredients are standardized, and when you order “Invoice Lines,” you get exactly that—every single time. That consistency is what makes automation and integration even possible. You’re not gambling on screen layouts or guessing which Excel column still holds the vendor ID. You’re just asking the system the right way, and it answers the right way. But OData isn’t your only option. Sometimes, you need more than an entity list—you need business logic or steps that OData doesn’t expose directly. That’s where custom services come in. Developers can build X++‑based services for specialized workflows, and those services plug into the same API layer. Still supported, still documented, just designed for the custom side of your business process. And while we’re on options, there’s one more integration path you shouldn’t ignore: Dataverse dual‑write. If your world spans both the CRM side and F&O, dual‑write gives you near real‑time, two‑way sync between Dataverse tables and F&O data entities. It maps fields, supports initial sync, lets you pause/resume or catch up if you fall behind, and it even provides a central log so you know what synced and when. That’s a world away from shadow integrations, and it’s exactly why a lot of teams pick it to keep Customer Engagement and ERP data aligned without hand‑crafted hacks. So the takeaway is this: the API isn’t an optional side door. It’s the real entrance. Use it, and you build integrations that survive patches, audits, and real‑world use. Ignore it, and you’re back to fragile scripts and RPA workarounds that collapse when the wind changes. Microsoft gave you the handshake—now it’s on you to use it. All of that is neat—but none of it matters until you can prove who you are. On to tokens. Authentication Without Losing Your Sanity Authentication Without Losing Your Sanity. Let’s be real: nothing tests your patience faster than getting stonewalled by a token error that helpfully tells you “Access Denied”—and nothing else. 
You’ve triple‑checked your setup, sacrificed three cups of coffee to the troubleshooting gods, and still the API looks at you like, “Who are you again?” It’s brutal, but it’s also the most important step in the whole process. Without authentication, every other clever thing you try is just noise at a locked door. Here’s the plain truth: every single call into Finance and Operations has to be approved by Azure Active Directory through OAuth 2.0. No token, no entry. Tokens are short‑lived keys, and they’re built to keep random scripts, rogue apps, or bored interns from crashing into your ERP. That’s fantastic for security, but if you don’t have the setup right, it feels like yelling SQL queries through a window that doesn’t open. So how do you actually do this without going insane? Break it into three practical steps: * Register the app in Azure AD. This gives you a Client ID, and you’ll pair it with either a client secret or—much better—a certificate for production. That app registration becomes the official identity of your integration, so don’t skip documenting what it’s for. * Assign the minimum API permissions it needs. Don’t go full “God Mode” just because it’s easier. If your integration just needs Vendors and Purchase Orders, scope it exactly there. Least privilege isn’t a suggestion; it’s the only way to avoid waking up to compliance nightmares down the line. * Get admin consent, then request your token using the client credentials flow (for app‑only access) or delegated flow (if you need it tied to a user). Once Azure AD hands you that token, that’s your golden ticket—good for a short window of time. For production setups, do yourself a favor and avoid long‑lived client secrets. They’re like sticky notes with your ATM PIN on them: easy for now, dangerous long‑term. Instead, go with certificate‑based authentication or managed identities if you’re running inside Azure. One extra hour to configure it now saves you countless fire drills later. Now let’s talk common mistakes—because we’ve all seen them. Don’t over‑grant permissions in Azure. Too many admins slap on every permission they can find, thinking they’ll trim it back later. Spoiler: they never do. That’s how you get apps capable of erasing audit logs when all they needed was “read Customers.” Tokens are also short‑lived on purpose. If you don’t design for refresh and rotation, your integration will look great on day one and then fail spectacularly 24 hours later. Here’s the practical side. When you successfully fetch that OAuth token from Azure AD, you’re not done—you actually have to use it. Every API request you send to Finance and Operations has to include it in the header: Authorization: Bearer OData Endpoi
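To make that concrete, here is a minimal sketch in Python of the two steps described above: requesting an app-only token with the client credentials flow, then calling an OData entity with the Bearer header. The tenant, app registration, environment URL, entity name (CustomersV3), and selected fields are placeholders for illustration; check your own environment and its OData metadata before relying on them.

```python
import requests

# Hypothetical values: replace with your own tenant, app registration, and environment.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "prefer-a-certificate-or-managed-identity-in-production"
FNO_URL = "https://yourenv.operations.dynamics.com"

# Step 1: client credentials flow against Azure AD (app-only access).
token_response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": f"{FNO_URL}/.default",  # scope the token to the F&O environment
    },
    timeout=30,
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]  # short-lived; plan for refresh

# Step 2: call an OData entity with the token in the Authorization header.
customers = requests.get(
    f"{FNO_URL}/data/CustomersV3",  # entity name is illustrative; verify it in your metadata
    params={"$select": "CustomerAccount,OrganizationName", "$top": "5"},
    headers={"Authorization": f"Bearer {access_token}", "Accept": "application/json"},
    timeout=30,
)
customers.raise_for_status()
for customer in customers.json()["value"]:
    print(customer["CustomerAccount"], customer["OrganizationName"])
```

The same header pattern carries over to POST, PATCH, and DELETE calls, and swapping the client secret for certificate-based credentials only changes how the token request is built, not how the token is used afterwards.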

    17 minutes
  7. Microsoft Fabric Explained: No Code, No Nonsense

    2 days ago

    Microsoft Fabric Explained: No Code, No Nonsense

    Summary Microsoft has a habit of renaming things in ways that make people scratch their heads — “Fabric,” “OneLake,” “Lakehouse,” “Warehouse,” etc. In this episode, I set out to cut through the naming noise and show what actually matters under the hood: how data storage, governance, and compute interact in Fabric, without assuming you’re an engineer. We dig into how OneLake works as the foundation, what distinguishes a Warehouse from a Lakehouse, why Microsoft chose Delta + Parquet as the storage engine, and how shortcuts, governance, and workspace structure help (or hurt) your implementation. This isn’t marketing fluff — it’s the real architecture that determines whether your organization’s data projects succeed or collapse into chaos. By the end, you’ll be thinking less “What is Fabric?” and more “How can we use Fabric smartly?” — with a sharper view of trade-offs, pitfalls, and strategies. What You’ll Learn * The difference between Warehouse and Lakehouse in Microsoft Fabric * How OneLake acts as the underlying storage fabric for all data workloads * Why Delta + Parquet matter — not just as buzzwords, but as core guarantees (ACID, versioning, schema) * How shortcuts let you reuse data without duplication — and the governance risks involved * Best practices for workspace design, permissions, and governance layers * What to watch out for in real deployments (e.g. role mismatches, inconsistent access paths) Full Transcript Here’s a fun corporate trick: Microsoft managed to confuse half the industry by slapping the word “house” on anything with a data label. But here’s what you’ll actually get out of the next few minutes: we’ll nail down what OneLake really is, when to use a Warehouse versus a Lakehouse, and why Delta and Parquet keep your data from turning into a swamp of CSVs. That’s three concrete takeaways in plain English. Want the one‑page cheat sheet? Subscribe to the M365.Show newsletter. Now, with the promise clear, let’s talk about Microsoft’s favorite game: naming roulette. Lakehouse vs Warehouse: Microsoft’s Naming Roulette When people first hear “Lakehouse” and “Warehouse,” it sounds like two flavors of the same thing. Same word ending, both live inside Fabric, so surely they’re interchangeable—except they’re not. The names are what trip teams up, because they hide the fact that these are different experiences built on the same storage foundation. Here’s the plain breakdown. A Warehouse is SQL-first. It expects structured tables, defined schemas, and clean data. It’s what you point dashboards at, what your BI team lives in, and what delivers fast query responses without surprises. A Lakehouse, meanwhile, is the more flexible workbench. You can dump in JSON logs, broken CSVs, or Parquet files from another pipeline and not break the system. It’s designed for engineers and data scientists who run Spark notebooks, machine learning jobs, or messy transformations. If you want a visual, skip the sitcom-length analogy: think of the Warehouse as a labeled pantry and the Lakehouse as a garage with the freezer tucked next to power tools. One is organized and efficient for everyday meals. The other has room for experiments, projects, and overflow. Both store food, but the vibe and workflow couldn’t be more different. Now, here’s the important part Microsoft’s marketing can blur: neither exists in its own silo. 
Both Lakehouses and Warehouses in Fabric store their tables in the open Delta Parquet format, both sit on top of OneLake, and both give you consistent access to the underlying files. What’s different is the experience you interact with. Think of Fabric not as separate buildings, but as two different rooms built on the same concrete slab, each furnished for a specific kind of work. From a user perspective, the divide is real. Analysts love Warehouses because they behave predictably with SQL and BI tools. They don’t want to crawl through raw web logs at 2 a.m.—they want structured tables with clean joins. Data engineers and scientists lean toward Lakehouses because they don’t want to spend weeks normalizing heaps of JSON just to answer “what’s trending in the logs.” They want Spark, Python, and flexibility. So the decision pattern boils down to this: use a Warehouse when you need SQL-driven, curated reporting; use a Lakehouse when you’re working with semi-structured data, Spark, and exploration-heavy workloads. That single sentence separates successful projects from the ones where teams shout across Slack because no one knows why the “dashboard” keeps choking on raw log files. And here’s the kicker—mixing up the two doesn’t just waste time, it creates political messes. If management assumes they’re interchangeable, analysts get saddled with raw exports they can’t process, while engineers waste hours building shadow tables that should’ve been Lakehouse assets from day one. The tools are designed to coexist, not to substitute for each other. So the bottom line: Warehouses serve reporting. Lakehouses serve engineering and exploration. Same OneLake underneath, same Delta Parquet files, different optimizations. Get that distinction wrong, and your project drags. Get it right, and both sides of the data team stop fighting long enough to deliver something useful to the business. And since this all hangs on the same shared layer, it raises the obvious question—what exactly is this OneLake that sits under everything? OneLake: The Data Lake You Already Own Picture this: you move into a new house, and surprise—there’s a giant underground pool already filled and ready to use. That’s what OneLake is in Fabric. You don’t install it, you don’t beg IT for storage accounts, and you definitely don’t file a ticket for provisioning. It’s automatically there. OneLake is created once per Fabric tenant, and every workspace, every Lakehouse, every Warehouse plugs into it by default. Under the hood, it actually runs on Azure Data Lake Storage Gen2, so it’s not some mystical new storage type—it’s Microsoft putting a SaaS layer on top of storage you probably already know. Before OneLake, each department built its own “lake” because why not—storage accounts were cheap, and everyone believed their copy was the single source of truth. Marketing had one. Finance had one. Data science spun one up in another region “for performance.” The result was a swamp of duplicate files, rogue pipelines, and zero coordination. It was SharePoint sprawl, except this time the mistakes showed up in your Azure bill. Teams burned budget maintaining five lakes that didn’t talk to each other, and analysts wasted nights reconciling “final_v2” tables that never matched. OneLake kills that off by default. Think of it as the single pool everyone has to share instead of each team digging muddy holes in their own backyards. Every object in Fabric—Lakehouses, Warehouses, Power BI datasets—lands in the same logical lake. 
That means no more excuses about Finance having its “own version” of the data. To make sharing easier, OneLake exposes a single file-system namespace that stretches across your entire tenant. Workspaces sit inside that namespace like folders, giving different groups their place to work without breaking discoverability. It even spans regions seamlessly, which is why shortcuts let you point at other sources without endless duplication. The small print: compute capacity is still regional and billed by assignment, so while your OneLake is global and logical, the engines you run on top of it are tied to regions and budgets. At its core, OneLake standardizes storage around Delta Parquet files. Translation: instead of ten competing formats where every engine has to spin its own copy, Fabric speaks one language. SQL queries, Spark notebooks, machine learning jobs, Power BI dashboards—they all hit the same tabular store. Columnar layout makes queries faster, transactional support makes updates safe, and that reduces the nightmare of CSV scripts crisscrossing like spaghetti. The structure is simple enough to explain to your boss in one diagram. At the very top you have your tenant—that’s the concrete slab the whole thing sits on. Inside the tenant are workspaces, like containers for departments, teams, or projects. Inside those workspaces live the actual data items: warehouses, lakehouses, datasets. It’s organized, predictable, and far less painful than juggling dozens of storage accounts and RBAC assignments across three regions. On top of this, Microsoft folds in governance as a default: Purview cataloging and sensitivity labeling are already wired in. That way, OneLake isn’t just raw storage, it also enforces discoverability, compliance, and policy from day one without you building it from scratch. If you’ve lived the old way, the benefits are obvious. You stop paying to store the same table six different times. You stop debugging brittle pipelines that exist purely to sync finance copies with marketing copies. You stop getting those 3 a.m. calls where someone insists version FINAL_v3.xlsx is “the right one,” only to learn HR already published FINAL_v4. OneLake consolidates that pain into a single source of truth. No heroic intern consolidating files. No pipeline graveyard clogging budgets. Just one layer, one copy, and all the engines wired to it. It’s not magic, though—it’s just pooled storage. And like any pool, if you don’t manage it, it can turn swampy real fast. OneLake gives you the centralized foundation, but it relies on the Delta format layer to keep data clean, consistent, and usable across different engines. That’s the real filter that turns OneLake into a lake worth swimming in. And that brings us to the next piece of the puzzle—the unglamorous technology that keeps that water clear in the first place. Delta and
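As a rough illustration of that shared layer, here is a minimal PySpark sketch, assuming a Fabric notebook with a hypothetical Lakehouse attached. The workspace, Lakehouse, table, and column names are made up, and the OneLake path is shown only to illustrate the tenant, workspace, and item hierarchy described above.

```python
from pyspark.sql import functions as F

# Assumes a Microsoft Fabric notebook session, where `spark` is already provided
# and a hypothetical Lakehouse named "sales_lakehouse" is attached to the notebook.

# Engineers work against the Lakehouse with Spark; the tables are Delta Parquet files in OneLake.
orders = spark.read.table("sales_lakehouse.orders")  # hypothetical table

daily = (
    orders.groupBy("order_date")
          .agg(F.sum("order_total").alias("daily_total"))
)

# Writing back creates another Delta table in the same Lakehouse.
daily.write.mode("overwrite").format("delta").saveAsTable("sales_lakehouse.daily_totals")

# The same item is also addressable through the OneLake namespace
# (tenant -> workspace -> item -> Tables/<table>); this path format is illustrative.
onelake_path = (
    "abfss://SalesWorkspace@onelake.dfs.fabric.microsoft.com/"
    "sales_lakehouse.Lakehouse/Tables/orders"
)
orders_by_path = spark.read.format("delta").load(onelake_path)
print(orders_by_path.count())
```

An analyst never needs to open this notebook: the same daily_totals table surfaces through the SQL analytics endpoint and in Power BI, because the Delta Parquet files underneath are the single shared copy in OneLake.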

    19 minutes
  8. Breaking Power Pages Limits With VS Code Copilot

    3 days ago

    Breaking Power Pages Limits With VS Code Copilot

    Summary You know that sinking feeling when your Power Pages form refuses to validate and the error messages feel basically useless? That’s exactly the pain we’re tackling in this episode. I walk you through how to use VS Code + GitHub Copilot (with the @powerpages context) to push past Power Pages limits, simplify validation, and make your developer life smoother. We’ll cover five core moves: streamlining Liquid templates, improving JavaScript/form validation, getting plain-English explanations for tricky code, integrating HTML/Bootstrap for responsive layouts, and simplifying web API calls. I’ll also share the exact prompts and setup you need so that Copilot becomes context aware of your Power Pages environment. If you’ve ever felt stuck debugging form behavior, messing up Liquid includes, or coping with cryptic errors, this episode is for you. By the end, you’ll have concrete strategies (and sample prompts) to make Copilot your partner — reducing trial-and-error and making your Power Pages code cleaner, faster, and more maintainable. What You’ll Learn * How to set up VS Code + Power Platform Tools + Copilot Chat in a context-aware way for Power Pages * How the @powerpages prompt tag makes Copilot suggestions smarter and tailored * Techniques for form validation with JavaScript & Copilot (that avoid guesswork) * How to cleanly integrate Liquid templates + HTML + Bootstrap in Power Pages * Strategies to simplify web API calls in the context of Power Pages * Debugging tactics: using Copilot to explain code, refine error messages, and evolve scripts beyond first drafts Full Transcript You know that sinking feeling when your Power Pages form won’t validate and the error messages are about as useful as a ‘404 Brain Not Found’? That pain point is exactly what we’ll fix today. We’re covering five moves: streamlining Liquid templates, speeding JavaScript and form validation, getting plain-English code explanations, integrating HTML with Bootstrap for responsive layouts, and simplifying web API calls. One quick caveat—you’ll need VS Code with the Power Platform Tools extension, GitHub Copilot Chat, and your site content pulled down through the Power Platform CLI with Dataverse authentication. That setup makes Copilot context-aware. With that in place, Copilot stops lobbing random snippets. It gives contextual, iterative code that cuts down trial-and-error. I’ll show you the exact prompts so you can replicate results yourself. And since most pain starts with JavaScript, let’s roll into what happens when your form errors feel like a natural 1. When JavaScript Feels Like a Natural 1 JavaScript can turn what should be a straightforward form check into a disaster fast. One misplaced keystroke, and instead of stopping bad input, the whole flow collapses. That’s usually when you sit there staring at the screen, wondering how “banana” ever got past your carefully written validation logic. You know the drill: a form that looks harmless, a validator meant to filter nonsense, and a clever user typing the one thing you didn’t account for. Suddenly your console logs explode with complaints, and every VS Code tab feels like another dead end. The small errors hit the hardest—a missing semicolon, or a scope bug that makes sense in your head but plays out like poison damage when the code runs. These tiny slips show up in real deployments all the time, and they explain why broken validation is such a familiar ticket in web development. Normally, your approach is brute force. 
You tweak a line, refresh, get kicked back by another error, then repeat the cycle until something finally sticks. An evening evaporates, and the end result is often just a duct-taped script that runs—no elegance, no teaching moment. That’s why debugging validation feels like the classic “natural 1.” You’re rolling, but the outcome is stacked against you. Here’s where Copilot comes in. Generic Copilot suggestions sometimes help, but a lot of the time they look like random fragments pulled from a half-remembered quest log—useful in spirit, wrong in detail. That’s because plain Copilot doesn’t know the quirks of Power Pages. But add the @powerpages participant, and suddenly it’s not spitting boilerplate; it’s offering context-aware code shaped to fit your environment. Microsoft built it to handle Power Pages specifics, including Liquid templates and Dataverse bindings, which means the suggestions account for the features that usually trip you up. And it’s not just about generating snippets. The @powerpages integration can also explain Power Pages-specific constructs so you don’t just paste and pray—you actually understand why a script does what it does. That makes debugging less like wandering blindfolded and more like working alongside someone who already cleared the same dungeon. For example, you can literally type this prompt into Copilot Chat: “@powerpages write JavaScript code for form field validation to verify the phone field value is in the valid format.” That’s not just theory—that’s a reproducible, demo-ready input you’ll see later in this walkthrough. The code that comes back isn’t a vague web snippet; it’s directly applicable and designed to compile in your Power Pages context. That predictability is the real shift. With generic Copilot, it feels like you’ve pulled in a bard who might strum the right chord, but half the time the tune has nothing to do with your current battle. With @powerpages, it’s closer to traveling with a ranger who already knows where the pitfalls are hiding. The quest becomes less about surviving traps and more about designing clear user experiences. The tool doesn’t replace your judgment—it sharpens it. You still decide what counts as valid input and how errors should guide the user. But instead of burning cycles on syntax bugs and boolean typos, you spend your effort making the workflow intuitive. Correctly handled, those validation steps stop being roadblocks and start being part of a smooth narrative for whoever’s using the form. It might not feel like a flashy win, but stopping the basic failures is what saves you from a flood of low-level tickets down the line. Once Copilot shoulders the grunt work of generating accurate validation code, your time shifts from survival mode to actually sharpening how the app behaves. That difference matters. Because when you see how well-targeted commands change the flow of code generation, you start wondering what else those commands can unlock. And that’s when the real advantage of using Copilot with Power Pages becomes clear. Rolling Advantage with Copilot Commands Rolling advantage here means knowing the right commands to throw into Copilot instead of hoping the dice land your way. That’s the real strength of using the @powerpages participant—it transforms Copilot Chat from a generic helper into a context-aware partner built for your Power Pages environment. Here’s how you invoke it. Inside VS Code, open the Copilot Chat pane, and then type your prompt with “@powerpages” at the front. 
That tag is what signals Copilot to load the Power Pages brain instead of the vanilla mode. You can ask for validators, Liquid snippets, even Dataverse-bound calls, and Copilot will shape its answers to fit the system you’re actually coding against. Now, before that works, you need the right loadout: Visual Studio Code installed, the Power Platform Tools extension, the GitHub Copilot Chat extension, and the Power Platform CLI authenticated against your Dataverse environment. The authentication step matters the most, because Copilot only understands your environment once you’ve actually pulled the site content into VS Code while logged in. Without that, it’s just guessing. And one governance caveat: some Copilot features for Power Pages are still in preview, and tenant admins control whether they’re enabled through the Copilot Hub and governance settings. Don’t be surprised if features demoed here are switched off in your org—that’s an admin toggle, not a bug. Here’s the difference once you’re set up. Regular Copilot is like asking a bard for battlefield advice: you’ll get a pleasant tune, maybe some broad commentary, but none of the detail you need when you’re dealing with Liquid templates or Dataverse entity fields. The @powerpages participant is closer to a ranger who’s already mapped the terrain. It’s not just code that compiles; it’s code that references the correct bindings, fits into form validators, and aligns with how Power Pages actually runs. One metaphor, one contrast, one payoff: usable context-aware output instead of fragile generic snippets. Let’s talk results. If you ask plain Copilot for a validation routine, you’ll probably get a script that works in a barebones HTML form. Drop it into Power Pages, though, and you’ll hit blind spots—no recognition of entity schema, no clue what Liquid tags are doing, and definitely no awareness of Dataverse rules. It runs like duct tape: sticky but unreliable. Throw the same request with @powerpages in the lead, and suddenly you’ve got validators that don’t just run—they bind to the right entity field references you actually need. Same request, context-adjusted output, no midnight patch session required. And this isn’t just about generating scripts. Commands like “@powerpages explain the following code {% include ‘Page Copy’ %}” give you plain-English walkthroughs of Liquid or Power Pages-specific constructs. You’re not copy-pasting blind; you’re actually building understanding. That’s a different kind of power—because you’re learning the runes while also casting them. The longer you work with these commands, the more your workflow shifts. Instead of patching errors alone at 2 AM, you’re treating Copilot like a second set of eyes that already kno

    19 minutes
