M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily

The M365 Show – Microsoft 365, Azure, Power Platform & Cloud Innovation

Stay ahead in the world of Microsoft 365, Azure, and the Microsoft Cloud. The M365 Show brings you expert insights, real-world use cases, and the latest updates across Power BI, Power Platform, Microsoft Teams, Viva, Fabric, Purview, Security, AI, and more. Hosted by industry experts, each episode features actionable tips, best practices, and interviews with Microsoft MVPs, product leaders, and technology innovators. Whether you’re an IT pro, business leader, developer, or data enthusiast, you’ll discover the strategies, trends, and tools you need to boost productivity, secure your environment, and drive digital transformation. Your go-to Microsoft 365 podcast for cloud collaboration, data analytics, and workplace innovation. Tune in, level up, and make the most of everything Microsoft has to offer. Visit M365.show.

  1. The Hidden Governance Risk in Copilot Notebooks

    3 HOURS AGO

    The Hidden Governance Risk in Copilot Notebooks

    Opening – The Beautiful New Toy with a Rotten Core

    Copilot Notebooks look like your new productivity savior. They’re actually your next compliance nightmare. I realize that sounds dramatic, but it’s not hyperbole—it’s math. Every company that’s tasted this shiny new toy is quietly building a governance problem large enough to earn its own cost center.

    Here’s the pitch: a Notebook workspace that pulls together every relevant document, slide deck, spreadsheet, and email, then lets you chat with it like an omniscient assistant. At first, it feels like magic. Finally, your files have context. You ask a question; it draws in insights from across your entire organization and gives you intelligent synthesis. You feel powerful. Productive. Maybe even permanently promoted.

    The problem begins the moment you believe the illusion. You think you’re chatting with “a tool.” You’re actually training it to generate unauthorized composite data—text that sits in no compliance boundary, inherits no policy, and hides in no oversight system. Your Copilot answers might look harmless—but every output is a derivative document whose parentage is invisible. Think of that for a second. The most sophisticated summarization engine in the Microsoft ecosystem, producing text with no lineage tagging. It’s not the AI response that’s dangerous. It’s the data trail it leaves behind—the breadcrumb network no one is indexing. To understand why Notebooks are so risky, we need to start with what they actually are beneath the pretty interface.

    Section 1 – What Copilot Notebooks Actually Are

    A Copilot Notebook isn’t a single file. It’s an aggregation layer—a temporary matrix that pulls data from sources like SharePoint, OneDrive, Teams chat threads, maybe even customer proposals your colleague buried in a subfolder three reorganizations ago. It doesn’t copy those files directly; it references them through connectors that grant the AI contextual access. The Notebook is, in simple terms, a reference map wrapped around a conversation window.

    When users picture a “Notebook,” they imagine a tidy Word document. Wrong. The Notebook is a dynamic composition zone. Each prompt creates synthesized text derived from those references. Each revision updates that synthesis. And like any composite object, it lives in the cracks between systems. It’s not fully SharePoint. It’s not your personal OneDrive. It’s an AI workspace built on ephemeral logic—what you see is AI construction, not human authorship. Think of it like giving Copilot the master key to all your filing cabinets, asking it to read everything, summarize it, and hand you back a neat briefing. Then calling that briefing yours. Technically, it is. Legally and ethically? That’s blurrier.

    The brilliance of this structure is hard to overstate. Teams can instantly generate campaign recaps, customer updates, solution drafts—no manual hunting. Ideation becomes effortless; you query everything you’ve ever worked on and get an elegantly phrased response in seconds. The system feels alive, responsive, almost psychic. The trouble hides in that intelligence. Every time Copilot fuses two or three documents, it’s forming a new data artifact. That artifact belongs nowhere. It doesn’t inherit the sensitivity label from the HR record it summarized, the retention rule from the finance sheet it cited, or the metadata tags from the PowerPoint it interpreted. Yet all of that information lives, invisibly, inside its sentences.
    So each Notebook session becomes a small generator of derived content—fragments that read like harmless notes but imply restricted source material. Your AI-powered convenience quietly becomes a compliance centrifuge, spinning regulated data into unregulated text. To a user, the experience feels efficient. To an auditor, it looks combustible. Now, that’s what the user sees. But what happens under the surface—where storage and policy live—is where governance quietly breaks.

    Section 2 – The Moment Governance Breaks

    Here’s the part everyone misses: the Notebook’s intelligence doesn’t just read your documents, it rewrites your governance logic. The moment Copilot synthesizes cross‑silo information, the connection between data and its protective wrapper snaps. Think of a sensitivity label as a seatbelt—and stepping into a Notebook quietly unbuckles it. When you ask Copilot to summarize HR performance, it might pull from payroll, performance reviews, and an internal survey in SharePoint. The output text looks like a neat paragraph about “team engagement trends,” but buried inside those sentences are attributes from three different policy scopes. Finance data obeys one retention schedule; HR data another. In the Notebook, those distinctions collapse into mush.

    Purview, the compliance radar Microsoft built to spot risky content, can’t properly see that mush because the Notebook’s workspace acts as a transient surface. It’s not a file; it’s a conversation layer. Purview scans files, not contexts, and therefore misses half the derivatives users generate during productive sessions. Data Loss Prevention, or DLP, has the same blindness. DLP rules trigger when someone downloads or emails a labeled file, not when AI rephrases that file’s content and spit‑shines it into something plausible but policy‑free. It’s like photocopying a stack of confidential folders into a new binder and expecting the paper itself to remember which pages were “Top Secret.” It won’t. The classification metadata lives in the originals; the copy is born naked.

    Now imagine the user forwarding that AI‑crafted summary to a colleague who wasn’t cleared for the source data. There’s no alert, no label, no retention tag—just text that feels safe because it came from “Copilot.” Multiply that by a whole department and congratulations: you have a Shadow Data Lake, a collection of derivative insights nobody has mapped, indexed, or secured. The Shadow Data Lake sounds dramatic, but it’s mundane. Each Notebook persists as cached context in the Copilot system. Some of those contexts linger in the user’s Microsoft 365 cloud cache; others surface in exported documents or pasted Teams posts. Suddenly your compliance boundary has fractal edges—too fine for traditional governance to trace.

    And then comes the existential question: who owns that lake? The user who initiated the Notebook? Their manager who approved the project? The tenant admin? Microsoft? Everyone assumes it’s “in the cloud somewhere,” which is organizational shorthand for “not my problem.” Except it is, because regulators won’t subpoena the cloud; they’ll subpoena you. Here’s the irony—Copilot works within Microsoft’s own security parameters. Access control, encryption, and tenant isolation still apply. What breaks is inheritance. Governance assumes content lineage; AI assumes conceptual relevance. Those two logics are incompatible. So while your structure remains technically secure, it becomes legally incoherent.
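    To see why pattern-based inspection goes blind here, consider a toy model. The label names, the regex, and the paraphrase below are assumptions for illustration, not Purview's or DLP's actual behavior; the sketch only shows how a derived summary can defeat both label inheritance and content matching.

    ```python
    import re

    # A toy corpus: source files carry sensitivity labels and raw identifiers.
    source_files = {
        "payroll.xlsx": {"label": "Highly Confidential", "text": "Card 4111-1111-1111-1111, salary 92000"},
        "survey.docx": {"label": "Confidential", "text": "Engagement score 3.2 for the finance team"},
    }

    # A hypothetical AI summary of both files: same meaning, no matching tokens, no label.
    notebook_output = {"label": None, "text": "Premium card spend and pay levels correlate with low engagement in finance."}

    # A DLP-style rule: match card numbers (simplified PAN pattern).
    pan_rule = re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b")

    def dlp_flags(item):
        """Flag content only when the pattern literally appears, which is how regex rules work."""
        return bool(pan_rule.search(item["text"]))

    for name, f in source_files.items():
        print(name, "label:", f["label"], "| DLP hit:", dlp_flags(f))
    print("notebook output label:", notebook_output["label"], "| DLP hit:", dlp_flags(notebook_output))
    # The originals are labeled and flagged; the derived text is unlabeled and sails through.
    ```

    Real DLP evaluates far more than one regex, but the structural point stands: rules inspect content and labels, and the derivative has neither.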
    Once you recognize that each Notebook is a compliance orphan, you start asking the unpopular question: who’s responsible for raising it? The answer, predictably, is nobody—until audit season arrives and you discover your orphan has been very busy reproducing. Now that we’ve acknowledged the birth of the problem, let’s follow it as it grows up—into the broader crisis of data lineage.

    Section 3 – The Data Lineage and Compliance Crisis

    Data lineage is the genealogy of information—who created it, how it mutated, and what authority governs it. Compliance depends on that genealogy. Lose it, and every policy built on it collapses like a family tree written on a napkin. When Copilot builds a Notebook summary, it doesn’t just remix data; it vaporizes the family tree. The AI produces sentences that express conclusions sourced from dozens of files, yet it doesn’t embed citation metadata. To a compliance officer, that’s an unidentified adoptive child. Who were its parents? HR? Finance? A file from Legal dated last summer? Copilot shrugs—its job was understanding, not remembering.

    Recordkeeping thrives on provenance. Every retention rule, every “right to be forgotten” request, every audit trail assumes you can trace insight back to origin. Notebooks sever that trace. If a customer requests deletion of their personal data, GDPR demands you verify purging in all derivative storage. But Notebooks blur what counts as “storage.” The content isn’t technically stored—it’s synthesized. Yet pieces of that synthesis re‑enter stored environments when users copy, paste, export, or reference them elsewhere. The regulatory perimeter becomes a circle drawn in mist.

    Picture an analyst asking Copilot to summarize a revenue‑impact report that referenced credit‑card statistics under PCI compliance. The AI generates a paragraph: “Retail growth driven by premium card users.” No numbers, no names—so it looks benign. That summary ends up in a sales pitch deck. Congratulations: sensitive financial data has just been laundered through an innocent sentence. The origin evaporates, but the obligation remains.

    Some defenders insist Notebooks are “temporary scratch pads.” Theoretically, that’s true. Practically, users never treat them that way. They export answers to Word, email them, staple them into project charters. The scratch pad becomes the published copy. Every time that happens, the derivative data reproduces. Each reproduction inherits none of the original restrictions, making enforcement impossible downstream. Try auditing that mess. You can’t tag what you can’t trace. Purview’s catalog lists the source documents neatly, but the Notebook’s offspring appear nowhere. Version control? Irrelevant—there’s no version record because the AI overwrote itself conversationally. Your audit log shows a single session ID, not

    22 min
  2. Stop Wasting Money: The 3 Architectures for Fabric Data Flows Gen 2

    15 HOURS AGO

    Stop Wasting Money: The 3 Architectures for Fabric Data Flows Gen 2

    Opening Hook & Teaching Promise

    Somewhere right now, a data analyst is heroically exporting a hundred‑megabyte CSV from Microsoft Fabric—again. Because apparently, the twenty‑first century still runs on spreadsheets and weekend refresh rituals. Fascinating. The irony is that Fabric already solved this, but most people are too busy rescuing their own data to notice.

    Here’s the reality nobody says out loud: most Fabric projects burn more compute in refresh cycles than they did in entire Power BI workspaces. Why? Because everyone keeps using Dataflows Gen 2 like it’s still Power BI’s little sidecar. Spoiler alert—it’s not. You’re stitching together a full‑scale data engineering environment while pretending you’re building dashboards. Dataflows Gen 2 aren’t just “new dataflows.” They are pipelines wearing polite Power Query clothing. They can stage raw data, transform it across domains, and serve it straight into Direct Lake models. But if you treat them like glorified imports, you pay for movement twice: once pulling from the source, then again refreshing every dependent dataset. Double the compute, half the sanity.

    Here’s the deal. Every Fabric dataflow architecture fits one of three valid patterns—each tuned for a purpose, each with distinct cost and scaling behavior. One saves you money. One scales like a proper enterprise backbone. And one belongs in the recycle bin with your winter 2021 CSV exports. Stick around. By the end of this, you’ll know exactly how to design your dataflows so that compute bills drop, refreshes shrink, and governance stops looking like duct‑taped chaos. Let’s dissect why Fabric deployments quietly bleed money and how choosing the right pattern fixes it.

    Section 1 – The Core Misunderstanding: Why Most Fabric Projects Bleed Money

    The classic mistake goes like this: someone says, “Oh, Dataflows—that’s the ETL layer, right?” Incorrect. That was Power BI logic. In Fabric, the economic model flipped. Compute—not storage—is the metered resource. Every refresh triggers a full orchestration of compute; every repeated import multiplies that cost. Power BI’s import model trained people badly. In that world, storage was finite, compute was hidden, and refresh was free—unless you hit capacity limits. Fabric, by contrast, charges you per activity. Refreshing a dataflow isn’t just copying data; it spins up distributed compute clusters, loads staging memory, writes delta files, and tears it all down again. Do that across multiple workspaces? Congratulations, you’ve built a self‑inflicted cloud mining operation.

    Here’s where things compound. Most teams organize Fabric exactly like their Power BI workspace folders—marketing here, finance there, operations somewhere else—each with its own little ingestion pipeline. Then those pipelines all pull the same data from the same ERP system. That’s multiple concurrent refreshes performing identical work, hammering your capacity pool, all for identical bronze data. Duplicate ingestion equals duplicate cost, and no amount of slicer optimization will save you. Fabric’s design assumes a shared lakehouse model: one storage pool feeding many consumers. In that model, data should land once, in a standardized layer, and everyone else references it. But when you replicate ingestion per workspace, you destroy that efficiency. Instead of consolidating lineage, you spawn parallel copies with no relationship to each other. Storage looks fine—the files are cheap—but compute usage skyrockets. Dataflows Gen 2 were refactored specifically to fix this.
    They support staging directly to delta tables, they understand lineage natively, and they can reference previous outputs without re‑processing them. Think of Gen 2 not as Power Query’s cousin but as Fabric’s front door for structured ingestion. It builds lineage graphs and propagates dependencies so you can chain transformations without re‑loading the same source again and again. But that only helps if you architect them coherently. Once you grasp how compute multiplies, the path forward is obvious: architect dataflows for reuse. One ingestion, many consumers. One transformation, many dependents. Which raises the crucial question—out of the infinite ways you could wire this, why are there exactly three architectures that make sense? Because every Fabric deployment lives on a triangle of cost, governance, and performance. Miss one corner, and you start overpaying. So, before we touch a single connector or delta path, we’re going to define those three blueprints: Staging for shared ingestion, Transform for business logic, and Serve for consumption. Master them, and you stop funding Microsoft’s next datacenter through needless refresh cycles. Ready? Let’s start with the bronze layer—the pattern that saves you money before you even transform a single row.

    Section 2 – Architecture #1: Staging (Bronze) Dataflows for Shared Ingestion

    Here’s the first pattern—the bronze layer, also called the staging architecture. This is where raw data takes its first civilized form. Think of it like a customs checkpoint between your external systems and the Fabric ecosystem. Every dataset, from CRM exports to finance ledgers, must pass inspection here before entering the city limits of transformation.

    Why does this matter? Because external data sources are expensive to touch repeatedly. Each time you pull from them, you’re paying with compute, latency, and occasionally your dignity when an API throttles you halfway through a refresh. The bronze Dataflow fixes that by centralizing ingestion. You pull from the source once, land it cleanly into delta storage, and then everyone else references that materialized copy. The key word—references, not re‑imports.

    Here’s how this looks in practice. You set up a dedicated workspace—call it “Data Ingestion” if you insist on dull names—attached to your standard Fabric capacity. Within that workspace, each Dataflow Gen 2 process connects to an external system: Salesforce, Workday, SQL Server, whatever system of record you have. The Dataflow retrieves the data, applies lightweight normalization—standardizing column names, ensuring types are consistent, removing the occasional null delusion—and writes it into your Lakehouse as Delta files. Now stop there. Don’t transform business logic, don’t calculate metrics, don’t rename “Employee” to “Associates.” That’s silver‑layer work. Bronze is about reliable landings. Everything landing here should be traceable back to an external source, historically intact, and refreshable independently. Think “raw but usable,” not “pretty and modeled.”

    The payoff is huge. Instead of five departments hitting the same CRM API five separate times, they hit the single landed version in Fabric. That’s one refresh job, one compute spin‑up, one delta write. Every downstream process can then link to those files without paying the ingestion tax again. Compute drops dramatically, while lineage becomes visible in one neat graph. Now, why does this architecture thrive specifically in Dataflows Gen 2? Because Gen 2 finally understands persistence.
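    As a concrete picture of that persistence, here is a minimal PySpark sketch of the land-once, read-many idea. It assumes a Fabric notebook attached to a lakehouse (where a `spark` session is predefined); the paths and table names are hypothetical, and a real bronze layer would land data through the Dataflow itself rather than hand-written code.

    ```python
    # Bronze: land the external extract once as a Delta table (one compute spend).
    raw = spark.read.format("csv").option("header", "true").load("Files/landing/crm_accounts.csv")

    (raw.write.format("delta")
        .mode("overwrite")                      # full snapshot; incremental refresh would merge instead
        .save("Tables/bronze_crm_accounts"))    # hypothetical lakehouse table path

    # Downstream consumers reference the landed copy, never the CRM API itself.
    accounts = spark.read.format("delta").load("Tables/bronze_crm_accounts")
    active = accounts.filter(accounts["status"] == "active")   # silver-style logic reads bronze, not the source
    print(active.count())
    ```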
    The moment you output to a delta table, Fabric tracks that table as part of the lakehouse storage, meaning notebooks, data pipelines, and semantic models can all read it directly. You’ve effectively created a reusable ingestion service without deploying Data Factory or custom Spark jobs. The Dataflow handles connection management, scheduling, and even incremental refresh if you want to pull only changed records. And yes, incremental refresh belongs here, not in your reports. Every time you configure it at the staging level, you prevent a full reload downstream. The bronze layer remembers what’s been loaded and fetches only deltas. Between runs, the Lakehouse retains history as parquet or delta partitions, so you can roll back or audit any snapshot without re‑ingesting.

    Let’s puncture a common mistake: pointing every notebook directly at the original data source. It feels “live,” but it’s just reckless. That’s like giving every intern a key to the production database. You overload source systems and lose control of refresh timing. A proper bronze Dataflow acts as the isolating membrane—external data stays outside, your Lakehouse holds the clean copy, and everyone else stays decoupled.

    From a cost perspective, this is the cheapest layer per unit of data volume. Storage is practically free compared to compute, and Fabric’s delta tables are optimized for compression and versioning. You pay a small fixed compute cost for each ingestion, then reuse that dataset indefinitely. Contrast that with re‑ingesting snippets for every dependent report—death by refresh cycles. Once your staging Dataflows are stable, test lineage. You should see straight lines: source → Dataflow → delta output. If you see loops or multiple ingestion paths for the same entity, congratulations—you’ve built redundancy masquerading as best practice. Flatten it.

    So, with the bronze pattern, you achieve three outcomes—what a physicist would call equilibrium. One, every external source lands once, not five times. Two, you gain immediate reusability through delta storage. Three, governance becomes transparent because you can approve lineage at ingestion instead of auditing chaos later. When this foundation is solid, your data estate stops resembling a spaghetti bowl and starts behaving like an orchestrated relay. Each subsequent layer pulls cleanly from the previous without waking any source system. The bronze tier doesn’t make data valuable—it makes it possible. And once that possibility stabilizes, you’re ready to graduate to the silver layer, where transformation and business logic finally earn their spotlight.

    Section 3 – Architecture #2: Transform (Silver) Dataflows for Business Logic & Quality

    Now that your br

    24 min
  3. GPT-5 Fixes Fabric Governance: Stop Manual Audits Now!

    1 DAY AGO

    GPT-5 Fixes Fabric Governance: Stop Manual Audits Now!

    Opening – The Governance Headache

    You’re still doing manual Fabric audits? Fascinating. That means you’re voluntarily spending weekends cross-checking Power BI datasets, Fabric workspaces, and Purview classifications with spreadsheets. Admirable—if your goal is to win an award for least efficient use of human intelligence. Governance in Microsoft Fabric isn’t difficult because the features are missing; it’s difficult because the systems refuse to speak the same language. Each operates like a self-important manager who insists their department is “different.” Purview tracks classifications, Power BI enforces security, Fabric handles pipelines—and you get to referee their arguments in Excel.

    Enter GPT-5 inside Microsoft 365 Copilot. This isn’t the same obedient assistant you ask to summarize notes; it’s an auditor with reasoning. The difference? GPT-5 doesn’t just find information—it understands relationships. In this video, you’ll learn how it automates Fabric governance across services without a single manual verification. Chain-of-thought reasoning—coming up—turns compliance drudgery into pure logic.

    Section 1 – Why Governance Breaks in Microsoft Fabric

    Here’s the uncomfortable truth: Fabric unified analytics but forgot to unify governance. Underneath the glossy dashboards lies a messy network of systems competing for attention. Fabric stores the data, Power BI visualizes it, and Purview categorizes it—but none of them talk fluently. You’d think Microsoft built them to cooperate; in practice, it’s more like three geniuses at a conference table, each speaking their own dialect of JSON. That’s why governance collapses under its own ambition. You’ve got a Lakehouse full of sensitive data, Power BI dashboards referencing it from fifteen angles, and Purview assigning labels in splendid isolation. When auditors ask for proof that every classified dataset is secured, you discover that Fabric knows lineage, Purview knows tags, and Power BI knows roles—but no one knows the whole story.

    The result is digital spaghetti—an endless bowl of interconnected fields, permissions, and flows. Every strand touches another, yet none of them recognize the connection. Governance officers end up manually pulling API exports, cross-referencing names that almost—but not quite—match, and arguing with CSVs that refuse to align. The average audit becomes a sociology experiment on human patience. Take Helena from compliance. She once spent two weeks reconciling Purview’s “Highly Confidential” datasets with Power BI restrictions. Two weeks to learn that half the assets were misclassified and the other half mislabeled because someone renamed a workspace mid-project. Her verdict: “If Fabric had a conscience, it would apologize.” But Fabric doesn’t. It just logs events and smiles.

    The real problem isn’t technical—it’s logical. The platforms are brilliant at storing facts but hopeless at reasoning about them. They can tell you what exists but not how those things relate in context. That’s why your scripts and queries only go so far. To validate compliance across systems, you need an entity capable of inference—something that doesn’t just see data but deduces relationships between them. Enter GPT-5—the first intern in Microsoft history who doesn’t need constant supervision. Unlike previous Copilot models, it doesn’t stop at keyword matching. It performs structured reasoning, correlating Fabric’s lineage graphs, Purview’s classifications, and Power BI’s security models into a unified narrative.
    It builds what the tools themselves can’t: context. Governance finally moves from endless inspection to intelligent automation, and for once, you can audit the system instead of diagnosing its misunderstandings.

    Section 2 – Enter GPT-5: Reasoning as the Missing Link

    Let’s be clear—GPT‑5 didn’t simply wake up one morning and learn to type faster. The headlines may talk about “speed,” but that’s a side effect. The real headline is reasoning. Microsoft built chain‑of‑thought logic directly into Copilot’s operating brain. Translation: the model doesn’t just regurgitate documentation; it simulates how a human expert would think—minus the coffee addiction and annual leave.

    Compare that to GPT‑4. The earlier model was like a diligent assistant who answered questions exactly as phrased. Ask it about Purview policies, and it would obediently stay inside that sandbox. Intelligent, yes. Autonomous, no. It couldn’t infer that your question about dataset access might also require cross‑checking Power BI roles and Fabric pipelines. You had to spoon‑feed context. GPT‑5, on the other hand, teaches itself context as it goes. It notices the connections you forgot to mention and reasons through them before responding.

    Here’s what that looks like inside Microsoft 365 Copilot. The moment you submit a governance query—say, “Show me all Fabric assets containing customer addresses that aren’t classified in Purview”—GPT‑5 triggers an internal reasoning chain. Step one: interpret your intent. It recognizes the request isn’t about a single system; it’s about all three surfaces of your data estate. Step two: it launches separate mental threads, one per domain. Fabric provides data lineage, Purview contributes classification metadata, and Power BI exposes security configuration. Step three: it converges those threads, reconciling identifiers and cross‑checking semantics so the final answer is verified rather than approximated. Old Copilot stitched information; new Copilot validates logic.

    That’s why simple speed comparisons miss the point. The groundbreaking part isn’t how fast it replies—it’s that every reply has internal reasoning baked in. It’s as if Power Automate went to law school, finished summa cum laude, and came back determined to enforce compliance clauses. Most users mistake reasoning for verbosity. They assume a longer explanation means the model’s showing off. No. The verbosity is evidence of deliberation—it’s documenting its cognitive audit trail. Just as an auditor writes notes supporting each conclusion, GPT‑5 outlines the logical steps it followed. That audit trail is not fluff; it’s protection. When regulators ask how a conclusion was reached, you finally have an answer that extends beyond “Copilot said so.”

    Let’s dissect the functional model. Think of it as a three‑stage pipeline: request interpretation → multi‑domain reasoning → verified synthesis. In the first stage, Copilot parses language in context, understanding that “unlabeled sensitive data” implies a Purview classification gap. In the second stage, it reasons across data planes simultaneously, correlating fields that aren’t identical but are functionally related—like matching “Customer_ID” in Fabric with “CustID” in Power BI. In the final synthesis stage, it cross‑verifies every inferred link before presenting the summary you trust. And here’s the shocker: you never asked it to do any of that. The reasoning loop runs invisibly, like a miniature internal committee that debates the evidence before letting the spokesperson talk.
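    To ground that three-stage pipeline, here is a deliberately small sketch of the reconciliation step. The inventories, the field alias, and the helper are all hypothetical; this models the cross-check logic, not Copilot's internals.

    ```python
    # Toy inventories, one per domain (in reality these would come from service APIs).
    fabric_assets = {"Customers": {"columns": ["Customer_ID", "Address"], "pii": True}}
    purview_labels = {"Customers": None}                     # no classification assigned
    pbi_rls = {"CustID": ["SalesManagers"]}                  # RLS keyed by an aliased field name

    # Stage two's hard part: fields that aren't identical but are functionally related.
    aliases = {"Customer_ID": "CustID"}

    def governance_gaps():
        """Converge the three views and report assets that fail either check."""
        gaps = []
        for asset, meta in fabric_assets.items():
            if not meta["pii"]:
                continue
            if purview_labels.get(asset) is None:
                gaps.append((asset, "PII but unclassified in Purview"))
            key = aliases.get(meta["columns"][0], meta["columns"][0])
            if key not in pbi_rls:
                gaps.append((asset, "PII but no Power BI RLS rule"))
        return gaps

    print(governance_gaps())   # [('Customers', 'PII but unclassified in Purview')]
    ```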
    That’s what Microsoft means by embedded chain‑of‑thought. GPT‑5 chooses when deeper reasoning is required and deploys it automatically. So, when you ask a seemingly innocent compliance question—“Which Lakehouse tables contain PII but lack a corresponding Power BI RLS rule?”—GPT‑5 doesn’t resort to keyword lookup. It reconstructs the lineage graph, cross‑references Purview tags, interprets security bindings, and surfaces only those mismatches verifiable across all datasets. The result isn’t a guess; it’s a derived conclusion.

    And yes, this finally solves the governance problem that Fabric itself could never articulate. For the first time, contextual correctness replaces manual correlation. You spend less time gathering fragments and more time interpreting strategy. The model performs relational thinking on your behalf—like delegating analysis to someone who not only reads the policy but also understands the politics behind it. So, how different does your day look? Imagine an intern who predicts which policy objects overlap before you even draft the query, explains its reasoning line by line, and doesn’t bother you unless the dataset genuinely conflicts. That’s GPT‑5 inside Copilot: the intern promoted to compliance officer, running silent, always reasoning. Now, let’s put it to work in an actual audit.

    Section 3 – The Old Way vs. the GPT-5 Way

    Let’s walk through a real scenario. Your task: confirm every dataset in a Fabric Lakehouse containing personally identifiable information is classified in Purview and protected by Row‑Level Security in Power BI. Straightforward objective, catastrophic execution. The old workflow resembled a scavenger hunt designed by masochists. You opened Power BI to export access roles, jumped into Purview to list labeled assets, then exported Fabric pipeline metadata hoping column names matched. They rarely did. Three dashboards, four exports, two migraines—and still no certainty. You were reconciling data that lived in parallel universes.

    Old Copilot didn’t help much. It could summarize inside each service, but it lacked the intellectual glue to connect them. Ask it, “List Purview‑classified datasets used in Power BI,” and it politely retrieved lists—separately. It was like hiring three translators who each know only one language. Yes, they speak fluently, but never to each other. The audit ended with you praying the names aligned by coincidence. Spoiler: they didn’t. Now enter GPT‑5. Same query, completely different brain mechanics. You say, “Audit all Fabric assets with PII to confirm classification and security restrictions.” Copilot, powered by GPT‑5, interprets the

    22 min
  4. Stop Using GPT-5 Where The Agent Is Mandatory

    1 DAY AGO

    Stop Using GPT-5 Where The Agent Is Mandatory

    Opening: The Illusion of Capability

    Most people think GPT‑5 inside Copilot makes the Researcher Agent redundant. Those people are wrong. Painfully wrong. The confusion comes from the illusion of intelligence—the part where GPT‑5 answers in flawless business PowerPoint English, complete with bullet points, confidence, and plausible references. It sounds like knowledge. It’s actually performance art. Copilot powered by GPT‑5 is what happens when language mastery gets mistaken for truth. It’s dazzling. It generates a leadership strategy in seconds, complete with a risk register and a timeline that looks like it came straight from a consultant’s deck. But beneath that shiny fluency? No citation trail. No retrieval log. Just synthetic coherence.

    Now, contrast that with the Researcher Agent. It is slow, obsessive, and methodical—more librarian than visionary. It asks clarifying questions. It pauses to fetch sources. It compiles lineage you can audit. And yes, it takes minutes—sometimes nine of them—to deliver the same type of output that Copilot spits out in ten seconds. The difference is that one of them can be defended in a governance review, and the other will get you politely removed from the conference room. Speed versus integrity. Convenience versus compliance. Enterprises like yours live and die by that axis. GPT‑5 gives velocity; the Agent gives veracity. You can choose which one you value most—but not both at the same time. By the end of this video, you’ll know exactly where GPT‑5 is safe to use and where invoking the Agent is not optional, but mandatory. Spoiler: if executives are reading it, the Agent writes it.

    Section 1: Copilot’s Strength—The Fast Lie of Generative Fluency

    The brilliance of GPT‑5 lies in something known as chain‑of‑thought reasoning. Think of it as internal monologue for machines—a hidden process where the model drafts outlines, evaluates options, and simulates planning before giving you an answer. It’s what allows Copilot to act like a brilliant strategist trapped inside Word. You type “help me prepare a leadership strategy,” and it replies with milestones, dependencies, and delivery risks so polished that you could present them immediately. The problem? That horsepower is directed at coherence, not correctness. GPT‑5 connects dots based on probability, not provenance. It can reference documents from SharePoint or Teams, but it cannot guarantee those references created the reasoning behind its answer. It’s like asking an intern to draft a company policy after glancing at three PowerPoint slides and a blog post. What you get back looks professional—it cites a few familiar phrases—but you have no proof those citations informed the logic.

    This is why GPT‑5 feels irresistible. It imitates competence. You ask, it answers. You correct, it adjusts. The loop is instant and conversational. The visible speed gives the illusion of reliability because we conflate response time with thoughtfulness. When Copilot finishes typing before your coffee finishes brewing, it feels like intelligence. Unfortunately, in enterprise architecture, feelings don’t pass audits. Think of Copilot as the gifted intern: charismatic, articulate, and entirely undocumented. You’ll adore its drafts, you’ll quote its phrasing in meetings, and then one day you’ll realize nobody remembers where those numbers came from. Every unverified paragraph it produces becomes intellectual debt—content you must later justify to compliance reviewers who prefer citations over enthusiasm.
    And this is where most professionals misstep. They promote speed as the victory condition. They forget that artificial fluency without traceability creates a governance nightmare. The more fluent GPT‑5 becomes, the more dangerous it gets in regulated environments because it hides its uncertainty elegantly. The prose is clean. The confidence is absolute. The evidence is missing. Here’s the kicker: Copilot’s chain‑of‑thought reasoning isn’t built for auditable research. It’s optimized for task completion. When GPT‑5 plans a project, it’s predicting what a competent human would plan given the prompt and context, not verifying those steps against organizational standards. It’s synthetic synthesis, not verified analysis. Yet that’s precisely why it thrives in productivity scenarios—drafting emails, writing summaries, brainstorming outlines. Those don’t require forensic provenance. You can tolerate minor inaccuracy because the purpose is momentum, not verification.

    But hand that same GPT‑5 summary to a regulator or a finance auditor, and you’ve just escalated from “clever tool use” to “architectural liability.” Generative fluency without traceability becomes a compliance risk vector. When users copy AI text into Power BI dashboards, retention policies, or executive reports, they embed unverifiable claims inside systems designed for governance. That’s not efficiency; that’s contamination. Everything about Copilot’s design incentivizes flow. It’s built to keep you moving. Ask it another question, and it continues contextually without restarting its reasoning loop. That persistence—the way it picks up previous context—is spectacular for daily productivity. But in governance, context persistence without fresh verification equals compounding error.

    Still, we shouldn’t vilify Copilot. It’s not meant to be the watchdog of integrity; it’s the facilitator of progress. Used wisely, it accelerates ideation and lets humans focus on originality rather than formatting. What damages enterprises isn’t GPT‑5’s fluency—it’s the assumption that fluency equals fact. The danger is managerial, not mechanical. So when exactly does this shiny assistant transform from helpful companion into architectural liability? When the content must survive scrutiny. When every assertion needs lineage. When “probably right” stops being acceptable. Enter the Agent.

    Section 2: The Researcher Agent—Where Governance Lives

    If Copilot is the intern who dazzles the boardroom with fluent nonsense, the Researcher Agent is the senior auditor with a clipboard, a suspicion, and infinite patience. It doesn’t charm; it interrogates. It doesn’t sprint; it cross‑examines every source. Its purpose is not creativity—it’s credibility. When you invoke the Researcher Agent, the tone of interaction changes immediately. Instead of sprinting into an answer, it asks clarifying questions. “What scope?” “Which document set?” “Should citations include internal repositories or external verified sources?” Those questions—while undeniably irritating to impatient users—mark the start of auditability. Every clarifying loop defines the boundaries of traceable logic. Each fetch cycle generates metadata: where it looked, how long, what confidence weight it assigned. It isn’t stalling. It’s notarizing.

    Architecturally, the Agent is built on top of retrieval orchestration rather than probabilistic continuation. GPT‑5 predicts; the Agent verifies. That’s not a small difference. GPT‑5 produces a polished paragraph; the Agent produces a defensible record.
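    What such a defensible record might contain can be sketched as data. Every field below is an assumption made for illustration, not the Agent's actual log schema; it simply captures the source, timestamp, and verification triple that this episode describes.

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Evidence:
        """One verified retrieval step: what was consulted, when, and how it was checked."""
        source: str            # e.g., a SharePoint URL or knowledge-base ID (hypothetical)
        retrieved_at: datetime
        verification: str      # e.g., "cross-checked against finance ledger v3"
        confidence: float      # weight the agent assigned to this source

    def to_xml(steps: list[Evidence]) -> str:
        """Serialize the trail in an XML-ish shape an auditor could replay."""
        rows = "".join(
            f'  <step source="{s.source}" at="{s.retrieved_at.isoformat()}" '
            f'confidence="{s.confidence}">{s.verification}</step>\n'
            for s in steps
        )
        return f"<provenance>\n{rows}</provenance>"

    trail = [Evidence("sharepoint://finance/q3-ledger.xlsx",
                      datetime.now(timezone.utc),
                      "figures matched exported ledger totals", 0.92)]
    print(to_xml(trail))
    ```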
    It executes multiple verification passes—mapping references, cross‑checking conflicting statements, reconciling versions between SharePoint, Fabric, and even sanctioned external repositories. It’s like the operating system of governance, complete with its own checksum of truth. The patience is deliberate. A professional demonstrated this publicly: GPT‑5 resolved the planning prompt within seconds, while the Agent took nine full minutes, cycling through external validation before producing what resembled a research paper. That disparity isn’t inefficiency—it’s design philosophy. The time represents computational diligence. The Agent generates provenance logs, citations, and structured notes because compliance requires proof of process, not just deliverables. In governance terms, latency equals legitimacy.

    Yes, it feels slow. You can practically watch your ambition age while it compiles evidence. But that’s precisely the kind of slowness enterprises pay consultants to simulate manually. The Agent automates tedium that humans perform with footnotes and review meetings. It’s not writing with style; it’s writing with receipts. Think of Copilot as a creative sprint—energized, linear, impatient. Think of the Agent as a laboratory experiment. Every step is timestamped, every reagent labeled. If Copilot delivers a result, the Agent delivers a dataset with provenance, methodology, and margin notes explaining uncertainty. One generates outcomes; the other preserves accountability.

    This architecture matters most in regulated environments. A Copilot draft may inform brainstorming, but for anything that touches audit trails, data governance, or executive reporting, the Agent becomes non‑negotiable. Its chain of custody extends through the M365 ecosystem: queries trace to Fabric datasets, citations map back to Microsoft Learn or internal knowledge bases, and final summaries embed lineage so auditors can re‑create the reasoning path. That’s not over‑engineering—that’s survival under compliance regimes. Some users call the Agent overkill until a regulator asks, “Which document informed this recommendation?” That conversation ends awkwardly when your only answer is “Copilot suggested it.” The Agent, however, can reproduce the evidence in its log structure—an XML‑like output specifying source, timestamp, and verification step. In governance language, that’s admissible testimony.

    So while GPT‑5’s brilliance lies in fluid reasoning, the Researcher Agent’s power lies in fixed accountability. The two exist in separate architectural layers: one optimizes throughput, the other ensures traceability. Dismiss the Agent, and you’re effectively removing the black box recorder from your ente

    24 min
  5. SharePoint Agent vs. Human Admin: Can AI Replace You?

    2 DAYS AGO

    SharePoint Agent vs. Human Admin: Can AI Replace You?

    Opening: The Arrogant Intern Arrives

    You’ve probably heard this one already: “AI can run SharePoint now.” No, it cannot. What it can do is pretend to. The newest act in Microsoft’s circus of automation is the SharePoint Knowledge Agent—a supposedly brilliant little assistant that promises to “organize your content, generate metadata, and answer questions.” The pitch sounds amazing: a tireless, robotic librarian who never stops working. In reality, it behaves more like an overly confident intern who just discovered search filters. This “agent” lives inside SharePoint Premium, powered by Copilot, armed with the optimism of a first-year analyst and the discipline of a toddler with crayons. Microsoft markets it like you can finally fire your SharePoint admin and let AI do the filing. Users cheer, “Finally! Freedom from metadata hell!” And then—spoiler—it reorganizes your compliance folder alphabetically by emoji usage.

    Let’s be clear: it’s powerful, yes. But autonomous? Hardly. It’s less pilot, more co-pilot, which is a polite way of saying it still needs an adult in the room. In fact, it doesn’t remove your metadata duties; it triples them. Every document becomes a theological debate about column naming conventions. By the end of this, you’ll know what it really does, where it fumbles, and why governance officers are quietly sweating behind the scenes. So. Let’s start with what this digital intern swears it can do.

    Section 1: The Sales Pitch vs. Reality — “It Just Organizes Everything!”

    According to Microsoft’s marketing and a few overly enthusiastic YouTubers, the Knowledge Agent “organizes everything for you.” Those four words should come with an asterisk the size of a data center. What it really does is: generate metadata columns, create automated rules, build filtered views, and answer questions across sites. In other words, it’s not reorganizing SharePoint—it’s just giving your documents more personality disorders. Think of it like hiring an intern who insists they’ll “clean your desk.” You return two hours later to find your tax receipts sorted by paper thickness. It’s tidy, sure, but good luck filing your return.

    Before this thing even works, you must appease several bureaucratic gods:

    * A paid Microsoft 365 Copilot license,
    * An admin who opts you into SharePoint Premium,
    * And, ideally, someone patient enough to explain to your boss why half the columns now repeat the same data differently capitalized.

    Once summoned, the agent introduces three main tricks: Organize this library, Set up rules, and Ask a question. This triumvirate of convenience is Microsoft’s long bet—that Copilot-trained metadata will fuel better Q&A experiences across all 365 apps. Essentially, you teach SharePoint to understand your files today so Copilot can answer questions about them tomorrow. Admirable. Slightly terrifying.

    Now for reality: yes, it can automatically suggest metadata; yes, it can classify documents; but no, it cannot distinguish “Policy Owner” from “Owner Policy Copy2.” Every ounce of automation depends entirely on how clean your existing data is. Garbage in, labeled garbage out. And every fix requires—you guessed it—a human. The seductive part is the illusion of autonomy. You grant it permission, step away, and when you come back your library gleams with new columns and color-coded cheerfulness. Except behind that cheerful façade is quiet chaos—redundant fields, inconsistent tags, half-applied views. Automation doesn’t eliminate disorder; it simply buries it under polish.
    That’s the real magic trick: making disarray look smooth. So what happens when you let the intern loose on your document library for real? When you say, “Go ahead, organize this for me”? That’s when the comedy starts.

    Section 2: Auto‑Tagging — The Genius That Forgets Its Homework

    Here’s where our talented intern rolls up its digital sleeves and promises to “organize this library.” The phrase drips with confidence, like it’s about to alphabetize the universe. You press the button, expecting harmony. What you get looks more like abstract art produced by a neural network that just discovered Excel. The “organize this library” function sounds deceptively simple: it scans your documents, then suggests new columns of metadata—maybe things like review date, department owner, or document type. Except sometimes it decides your library needs a column called “ImportantNumberTwo” because it found the number two inside a filename. Yes, really. It’s like watching a gifted student ace calculus and then forget how to spell their own name.

    The first time you run it, you’re tempted to panic. The suggestions look random, the preview window glows with meaningless fields, and nothing seems coherent. That’s because it isn’t ready yet. The engine quietly does background indexing and content analysis, a process that can take hours. Until then, it’s basically guessing. In other words: if you click “create columns” right away, you get digital gibberish. Give it a night to sleep—apparently, even artificial intelligence needs to dream before it makes sense. When it finally wakes up, something magical happens: the column suggestions actually reflect structure. You might see “Review Date” correctly pulled from the header of dozens of policies. You realize it read the text, detected a pattern, and turned it into metadata. For about ten seconds, you’re impressed. Then you notice it also created “Policy Owner,” “policy owner,” and “POLICY OWNER” as separate fields. SharePoint now speaks three dialects of English.

    This is the first real lesson: the AI doesn’t create order—it amplifies whatever chaos already exists. Your messy document formatting? Congratulations, it’s now immortalized as structured data. It’s not malicious; it’s just painfully literal. Every inconsistency becomes a column. Every formatting quirk becomes an ontology. The intern has taken notes... on your sins. Now, Microsoft anticipated your existential crisis and thankfully made this process optional. None of these changes apply automatically. You, the alleged human adult, must review every suggestion and explicitly choose which columns to keep. The interface even highlights pending changes with polite blue indicators, whispering, “Are you sure about this?” Copilot isn’t autopilot; it’s manual labor dressed up in predictive text. You approve each change, remove duplicates, rename things, and only then commit the metadata to the view. The irony? It took you longer than just building the columns yourself.

    Still, when it works, it’s genuinely clever. You can preview values across a sample set—see how “Policy Owner” fills with department names, how “Review Date” populates from document headers. It’s a quick way to audit the mess. Then you apply it across the library and watch the autofill process begin: background workers hum, metadata slowly populates, and you briefly consider sending the AI a thank-you note. Don’t. It’s about to betray you again. Because here comes the lag.
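    (A quick aside before the lag: that duplicate-column cleanup is mechanical enough to script. Here is a minimal sketch, with hypothetical suggestions rather than the agent's actual API, that collapses case-variant columns before you approve anything.)

    ```python
    # Collapse case/spacing variants among suggested columns before approving them.
    suggested = ["Policy Owner", "policy owner", "POLICY OWNER", "Review Date", "ImportantNumberTwo"]

    def canonical(name: str) -> str:
        """Normalize for comparison: lowercase, single spaces."""
        return " ".join(name.lower().split())

    def dedupe(columns: list[str]) -> list[str]:
        seen: dict[str, str] = {}
        for col in columns:
            key = canonical(col)
            seen.setdefault(key, col)   # keep the first spelling as the survivor
        return list(seen.values())

    print(dedupe(suggested))
    # ['Policy Owner', 'Review Date', 'ImportantNumberTwo'] - one dialect of English again
    ```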
    Updating and filling metadata is asynchronous; while it churns, the columns display blank values. For minutes—or hours. Users think nothing happened, so they rerun the task, doubling the chaos. Then, minutes later, both sets of updates collide, overwriting partially filled data. It’s not a bug; it’s a test of faith. The agent rewards patience, punishes enthusiasm. Think of it as hiring a genius who works fast only when you stop looking.

    Versioning adds another comedy layer. Suppose you upload “Policy_v2.docx.” The AI dutifully copies the metadata from “Policy_v1,” including the outdated owner field. Then it takes a breath—sometime later, it realizes the content changed—and kicks off another round of metadata inference. Eventually, it catches up. Eventually. If your workflow relies on instant accuracy, this delay will drive you to drink. Once you understand that timing problem, you start treating it like the quirk it is. You schedule re-indexes overnight, monitor autofill queues, and laugh bitterly at anyone who thought this feature meant less work. That’s the human‑in‑the‑loop model in action: the AI proposes, the human disposes. You curate its guesses. You correct its spelling. You restrain its enthusiasm. The agent doesn’t replace judgment—it demands supervision.

    On good data sets, the results can be surprisingly useful. Policy libraries gain uniform fields. Audit teams can filter documents by owner. Searching for “review due next quarter” suddenly returns everything tagged correctly. The machine gives structure to your chaos—but only after you’ve rebuilt half of it yourself. The paradox of automation: it scales efficiency and stupidity at the same time. The truth? This tool shines in broad classification. It can tell contracts from templates, policies from forms. But when it comes to compliance tagging—records management, sensitivity labels, retention categories—it’s out of its depth. It reads content, not context. It recognizes words, not accountability. That’s fine for casual queries, disastrous for legal retention. And yet, despite all that, you’ll keep using it. Because even partial organization feels like progress. The intern may forget its homework, but at least it showed up and did something while you were asleep. Just remember to check its math before sending the report upstream.

    Of course, our intern isn’t satisfied with sorting papers. No, it wants responsibility. It insists it can follow rules too—rules written in plain English. Naturally, we’re about to let it try.

    Section 3: Natural Language Rules — Governance for Dummies

    Enter the second act of our drama: rules. The Knowledge Agent now claims it can “set up rules” using natural language—no coding, no Power Automate wizardry, just a friendly c

    21 min
  6. Stop Cleaning Data: The Copilot Fix You Need

    2 DAYS AGO

    Stop Cleaning Data: The Copilot Fix You Need

    The Data Cleanup Trap

    You think your job is analysis. It isn’t. It’s janitorial work with better branding. Every spreadsheet in your life begins the same way—tabular chaos pretending to be data. Dates in six formats, currencies missing symbols, column headers that read like riddles. You call that analysis? That’s housekeeping with formulas. Let’s be honest—half your “reports” are just therapy for the punishment Excel inflicts. You open the file, stare into the abyss of merged cells, sigh, and start another round of “Find and Replace.” Hours vanish. The terrible part? You already know how pointless it is. Because by the time you finish, the source data changes again, and you’re back to scrubbing.

    Every minute formatting cells is a minute not spent extracting insights. The company pays you to understand performance, forecast trends, and drive strategy. Yet most days you’re just fighting the effects of institutional laziness—people exporting garbage CSVs and calling it “data.” Here’s the twist: Excel Copilot isn’t a cute chatbot for formulas. It’s the AI janitor you’ve been pretending to be. It reads your mess, understands the structure, and cleans it before you can reach for the “Trim” function. By the end of this, you’ll stop scrubbing like an intern and start orchestrating intelligent automation. Oh—and we’ll eventually reach the single prompt that fixes eighty percent of cleanup tasks… if you survive the upcoming CSV horror story.

    Section 1: Why Excel Is a Chaos Factory

    Excel was never meant to be the world’s data hub. It was built for grids, not governance—a sandbox for accountants that somehow became the backbone of global analytics. Small wonder every enterprise treats spreadsheets like a duct-taped database. Functional? Yes. Sustainable? About as much as storing medical records on sticky notes. The flaw starts with human nature. Give an average user a column and they’ll type whatever they like into it. December 3 becomes 03-12, 12/3, or “Dec third.” Some countries reverse day and month; others write it longhand. Excel shrugs, pretends everything’s fine, and your visuals later show financial spikes that never happened. Those invisible trailing spaces—oh yes, the ghosts of data entry—break lookups, implode joins, and silently poison automations. You think your Power Automate flow failed randomly? No. It met a rogue space at the end of “Product Name” and gave up.

    Then there’s the notorious mixed-type column. Numbers acting like text. Text pretending to be numbers. A polite way of saying: formulas stop working without warning. One cell says “42,” the next says “forty-two.” You can’t sum that; you can only suffer. Every inconsistency metastasizes as your spreadsheet ages. Excel tries to please everyone, so it lets chaos breed. That flexibility—the ability to type anything anywhere—is both its genius and its curse.

    Now, extend the problem downstream. Those inconsistencies aren’t isolated; they’re contagious. A Power BI dashboard connected to bad data doesn’t display trends—it manufactures fiction. Power Automate flows crumble when a column header changes by one character. Fabric pipelines stall because one table used “CA” and another wrote “California.” I once saw a manager spend three days reconciling regional sales. She was convinced her west-coast numbers were incomplete. They were fine; they were just labeled differently. “California,” “Calif.,” and “CA” politely refused to unify because Excel doesn’t assume they’re the same thing.
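    For the record, the unification Excel refuses to assume takes one mapping and a few lines of pandas. This is a hedged sketch with made-up data; Copilot does this conversationally rather than through code you write.

    ```python
    import pandas as pd

    # Made-up regional sales with the three spellings that refused to unify.
    sales = pd.DataFrame({
        "region": ["California", "Calif. ", "CA", "Oregon"],   # note the trailing space
        "amount": [120, 80, 95, 40],
    })

    REGION_MAP = {"calif.": "California", "ca": "California", "california": "California",
                  "oregon": "Oregon"}

    # Trim the ghost spaces, normalize case, then map to one canonical name.
    sales["region"] = sales["region"].str.strip().str.lower().map(REGION_MAP)

    print(sales.groupby("region")["amount"].sum())
    # California    295
    # Oregon         40
    ```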
    By the time she found it, the reporting deadline had passed and the leadership team had already made a decision based on incomplete figures. Congratulations—you’ve automated misinformation. Excel’s architecture encourages this disaster. It has no schema enforcement, no input validation, no relational discipline. You can design a formula to calculate orbital mechanics but still accidentally delete a quarter’s worth of invoices by sorting one column independently. It’s like giving a toddler algebra tools and then acting surprised when the living room explodes.

    These flaws wouldn’t matter if Excel stayed personal—one analyst, one sheet. But it became collaborative, shared via OneDrive, circulated through Teams, copied endlessly across departments. Each copy accumulates its own micro‑mutations until no one remembers the original truth. The spreadsheet becomes a family heirloom of errors. And then, in desperation, we export the mess into Power Platform, expecting automation to transcend lunacy. Spoiler alert—it doesn’t. Flows break, connectors fail, dashboards lie, and you blame the platform instead of the real culprit: the spreadsheet habit.

    That’s the swamp Copilot was trained to drain. It doesn’t judge your column naming skills or your inconsistent capitalization; it just reads the chaos, classifies the problems, and offers to fix them. Excel remains wonderfully permissive—but now, finally, it has a sentient assistant that understands the consequences. The next time you stare at a corrupted CSV thinking, “Why does this keep happening?” remember: you’re not cursed—you’re using a tool designed for flexibility in a world that now demands precision. And Copilot’s job is to convert that flexibility into order before it leaks downstream into every automated nightmare you’ve ever unleashed.

    Section 2: Enter Copilot — Excel’s AI Janitor with a PhD

    Enter Copilot—the only coworker in your department who doesn’t sigh when opening a spreadsheet. This isn’t a plug‑in or a chatbot doing party tricks; it’s a full‑time analyst embedded in Excel’s bloodstream. While you’re still scrolling through columns wondering why C looks suspiciously like B, Copilot’s already mapped the logic, traced dependencies, and prepared a surgical checklist of what’s broken. Think of Copilot as an AI janitor with a PhD in pattern recognition. It doesn’t just mop up duplicated rows—it reads the history of your mess and understands why it happened. Because Copilot sits inside your Microsoft 365 environment, it sees the bigger context: the CSV you saved in OneDrive, the list you exported from a SharePoint table, even the numbers that came through last week’s Outlook attachment labeled “final_final3.xlsx.” It draws threads between them without you dragging in references or imports.

    Traditional Excel users shuffle data in with fear, hoping it aligns. Copilot knows that integration is safest when native. It pulls intelligently across OneDrive, SharePoint, and Teams, so your source never detaches from its environment. You don’t “import”; you delegate. The moment you say, “Summarize last quarter’s revenue by region from sales‑pipeline.xlsx,” Copilot knows which file you mean, because it’s living in the same digital apartment complex. Now, the real misunderstanding begins with its two personalities: Chat mode and App Skills mode. The chat panel is where you converse—ask questions, request summaries, probe for patterns. It’s conversational, diagnostic.
    Now, the real misunderstanding begins with its two personalities: Chat mode and App Skills mode. The chat panel is where you converse—ask questions, request summaries, probe for patterns. It’s conversational, diagnostic. You can type “show orders above ten thousand” and Copilot will politely tell you which rows qualify. But that’s all it does—it observes. The App Skills side, however, is operational. That’s where Copilot actually edits your spreadsheet: adds formulas, applies formatting, creates tables, runs transformations. Most users linger in chat, wondering why nothing changes. They’re basically talking to their analyst and ignoring the janitor holding the tools.

    Flip the switch to App Skills, and the gloves come off. Suddenly, your natural language becomes executable logic. Say, “Highlight orders over ten thousand,” and it rewrites your conditional formatting rule, generating the equivalent of nested IF statements faster than you can say “syntax error.” Under the hood, Copilot converts your phrasing into exact Excel formula syntax—clean, validated, and target-matched. You get the mathematical outcome, minus the ritual suffering.

    Picture Copilot reading your Excel file like a forensic accountant debriefing a crime scene. “Ah, column headings misaligned, date serialization inconsistent, numeric strings formatted as text.” With infinite patience, it corrects them one by one. Every correction is previewed before application; you remain the supervisor. Copilot proposes, you approve. It’s like watching an intern perform at doctoral level—and asking permission first.

    Here’s the trick most users miss: because Copilot sits atop Microsoft Graph, it understands the semantics of your data. “Revenue,” “Region,” and “Date” aren’t random words; they’re identified entities. So when you ask it to “normalize all region names,” it doesn’t merely align spelling—it recognizes the organizational geography underpinning your tenant. That’s why it’s more than autocomplete; it’s context-aware auditing.

    But the true power? It’s not understanding you—it’s obeying you. When you direct it, your spreadsheet becomes programmable through thought alone. You stop interpreting formulas and start issuing orders. The janitor becomes the executor, the mess becomes structure, and Excel—miraculously—behaves like it graduated from chaos management school. The next section shows what happens when you finally trust it to clean unattended.

    Section 3: The Three Commands That End Manual Cleanup

    Let’s commit heresy—delegate the cleanup. You’ve spent years believing spreadsheets require human penance. They don’t. Copilot is perfectly capable of tidying your data while you sip coffee and rehearse pretending to work. The trick is giving it the right orders, not vague pleas. Three categories of commands eliminate almost every manual cleanup ritual you still cling to. First, the nuclear option: Normalize Everything. When your dataset looks like the aftermath
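    And for skeptics who want plumbing instead of magic: the rule behind “Highlight orders over ten thousand” is ordinary conditional formatting, which you could script yourself. A minimal sketch, assuming the totals live in column D of an invented orders.xlsx:

```python
# What "Highlight orders over ten thousand" compiles down to:
# a plain conditional-formatting rule. File name and range invented.
from openpyxl import load_workbook
from openpyxl.formatting.rule import CellIsRule
from openpyxl.styles import PatternFill

wb = load_workbook("orders.xlsx")
ws = wb.active

red = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid")
ws.conditional_formatting.add(
    "D2:D500",  # assumption: order totals live in column D
    CellIsRule(operator="greaterThan", formula=["10000"], fill=red),
)
wb.save("orders.xlsx")
```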

    23 min
  7. 3 DAYS AGO

    Fix Power Apps Data Entry: Use THIS AI Agent

    The Data Entry Nightmare

    Let’s start with something familiar — the Power Apps form. Every organization has one. Rows of text boxes pretending to be productivity. You click “New Record,” a form opens, and suddenly you’re not an analyst or a manager. You’re a typist. Copying names, phone numbers, addresses, maybe from an email that someone forwarded, maybe from a PDF invoice that refuses to let you copy cleanly. This isn’t digital transformation. It’s clerical labor with branding.

    Now multiply that by hundreds of records. Each one entered manually, each one a potential typo waiting to ruin your reports later. The average user calls it “filling forms.” Professionals, however, know the truth — it’s slow, error-prone data decay, and it happens daily across every Power App ever built. And yet, Power Apps insists on those same rigid fields because, well, someone has to enter the data… right? Wrong. Enter the AI Data Entry Agent — and suddenly, the whole miserable ritual collapses.

    Why Traditional Power Apps Forms Fail

    Traditional Power Apps forms are a triumph of structure over sanity. They promise governance, validation, and consistency, but what they actually deliver is the illusion of control wrapped in user frustration. Every form designer knows the pain: text inputs aligned like soldiers, drop-down menus cloned from Dataverse tables, and those “required” asterisks that audibly sigh when someone forgets them.

    You build a “customer onboarding” form. Ten fields should be easy. But then finance wants two additional fields, sales wants three optional notes, and compliance insists every address follow a specific format. Suddenly your minimalist form looks like it was designed by a committee of auditors. Users stop reading; they tab blindly through fields like they’re trying to finish an exam they didn’t study for.

    And accuracy? Forget it. Data doesn’t start clean — it arrives as emails, chat logs, scanned documents, screenshots, half-finished Excel sheets. Each requires manual interpretation before those neat form fields ever see a keystroke. The result is garbage in, garbage out — only slower. Even when you paste in text, you still have to carve it apart. Name in one box, phone in another, and heaven help you if there’s a middle initial because now validation fails. Power Apps forms were never built for unstructured input. They’re databases disguised as paperwork.

    And that matters because the modern business world runs on unstructured content. The average customer record might originate in an Outlook thread, a Teams chat, or a photo of a business card someone snapped in a meeting. Expecting humans to manually normalize all that feels like asking accountants to do math on napkins.

    The consequence isn’t just inefficiency — it’s inaccuracy. The longer a human touches the data, the more opportunity for deviation creeps in. Typos, inconsistent abbreviations, blank fields. The cost cascades through reports, dashboards, and automated flows. “Why do our customer counts never match Power BI?” Because Susan misspelled Contoso twice. The system didn’t catch it because it was syntactically correct, just semantically wrong.

    And yet, this failure perpetuates. Admins add more validation rules. Makers add more labels explaining what to type. Trainers create tutorials teaching people how to copy information correctly — as if accuracy were a skill problem instead of a design flaw. What Power Apps needed wasn’t a better form. It needed a smarter interpreter — one that could read context, understand meaning, and populate fields without making the user think. That, at last, is what the AI Data Entry Agent delivers.
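    To appreciate how much interpretation a human performs per record, here is that carving step written as naive code. This is a sketch only: the sample text and regex patterns are invented, and the actual agent relies on an entity model rather than regexes:

```python
# The interpretation work a human does before a form ever sees a
# keystroke, expressed as naive regexes. Illustrative only; the real
# agent uses an entity model, not patterns like these.
import re

pasted = """Hey, we've onboarded a new customer called Contoso.
Contact: Sarah Miller, sarah.miller@contoso.com, +1 (425) 555-0199
Address: 1 Contoso Way, Redmond, WA 98052"""

email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", pasted)
phone = re.search(r"\+?\d[\d ()\-]{7,}\d", pasted)

record = {
    "email": email.group() if email else None,
    "phone": phone.group() if phone else None,
}
print(record)  # {'email': 'sarah.miller@contoso.com', 'phone': '+1 (425) 555-0199'}
```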
    Meet the AI Agent: Overview and Capabilities

    The AI Data Entry Agent isn’t a gimmick; it’s a demotion notice for manual data entry. Think of it as a bilingual translator living inside your form. It reads messy human text and speaks perfect Dataverse. When users open a record and activate the agent, they don’t have to interact with every field. They simply paste what they have — an email from a colleague, a paragraph of onboarding info, even raw notes copied from Teams — and the agent parses, interprets, and maps each piece to the correct column.

    Microsoft calls this Smart Paste, but that label undersells the brilliance. The model behind it recognizes entities like names, addresses, and phone numbers, but also learns from context within your specific table schema. If your table includes “Preferred Contact Method,” it understands that “email” in the text likely belongs there. It doesn’t hallucinate; it aligns with your metadata. In effect, the AI agent behaves like a form’s internal analyst — it reads unstructured input, determines intent, and builds structured data faster than any human could.

    But Smart Paste is only half the trick. The other is File Upload, a feature that feels slightly supernatural. Instead of text, you can drag in an image — say, a screenshot of that same email or a scan of a paper invoice. The AI agent extracts text using OCR, detects field-like patterns, and automatically fills them into your form. And yes, it knows the difference between a company name and a street address because it’s grounded in Microsoft’s AI foundations used across Outlook, Viva, and Dynamics. Most users see it work once and refuse to go back.

    Here’s the kicker: the agent doesn’t overwrite anything blindly. Every suggestion appears with its source context. You see exactly where each value came from and can accept or reject it individually — or, for the brave, accept all in bulk. Data accuracy rises not through enforcement, but through intelligent prediction. It makes users faster without making them careless.

    In practice, it feels like cheating. What used to take minutes per record now takes seconds. The user experience flips — instead of begging people to fill the form, you now watch them volunteer, simply because it stopped being annoying. And since it works directly within the Model-driven app experience, there’s zero new interface to learn. The same forms, the same tables — except now, they’re sentient enough to do half your job.

    Administrators appreciate it for different reasons. Every field filled by the AI agent respects existing validation rules and data types. No rogue inputs, no API calls, no custom connectors sneaking data around. It’s compliance-safe automation baked into the product. Meanwhile, makers can breathe again. They no longer have to redesign forms or create special “quick entry” apps. The AI agent does all the heavy lifting behind the scenes.

    So yes, technically, it’s just another Copilot feature. Functionally, though, it’s the end of manual entry in Power Apps. It transforms forms from passive receivers of text into active participants in data quality — a distinction the average user will never notice but every admin will silently celebrate.
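    Conceptually, the alignment step looks something like the sketch below. The Dataverse column names are examples, and the hard-coded mapping stands in for what the agent actually derives from your table’s metadata:

```python
# Sketch of the alignment step: parsed entities -> table columns.
# Column names are example Dataverse logical names; the real agent
# grounds this in your table's metadata, not a hard-coded dict.
parsed = {
    "email": "sarah.miller@contoso.com",
    "phone": "+1 (425) 555-0199",
    "company": "Contoso",
}

schema_map = {
    "email": "emailaddress1",
    "phone": "telephone1",
    "company": "name",
}

suggestions = {schema_map[k]: v for k, v in parsed.items() if k in schema_map}
unmatched = {k: v for k, v in parsed.items() if k not in schema_map}

# Every suggestion stays a suggestion until the user accepts it.
for column, value in suggestions.items():
    print(f"{column}: {value}  (accept / reject)")
```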
    Prerequisites: Enabling the AI Feature

    Before you can make your forms magically intelligent, you must first remove the most common obstacle in enterprise IT — disabled settings. The AI Data Entry Agent doesn’t emerge fully formed; it must be explicitly enabled within your environment. Yes, administrators, that means you. This isn’t “plug it in and hope”; it’s a controlled feature switch in Power Platform admin settings, hidden precisely where only the most adventurous will ever find it.

    Open your Power Platform admin center and navigate to Environment Settings → Product Features. Scroll down until you hit the cluster of options labeled AI capabilities. Inside, look for the one labeled “AI Form Fill.” That unassuming toggle is the gatekeeper. The moment you switch it on, Smart Paste and File Upload become available throughout your model-driven apps.

    Think of it as an evolutionary gene activation. You’re not installing something new; you’re awakening a function Microsoft already packaged into the platform. Crucially, this isn’t a tenant-level free-for-all. The feature respects environment boundaries. You can enable it for testing in one sandbox before trusting it in production. Because yes, every enterprise has that one department that copies email signatures into the address field, and you’ll want to test whether the algorithm forgives them. Spoiler: it usually does.

    Once enabled, existing forms need no redesign. The AI Agent integrates seamlessly into the current form experience. You’ll notice a small AI button or “Use Copilot to fill” icon appearing near the command bar in new or edit mode. That’s your portal. It connects your unstructured chaos with Dataverse order. If the icon doesn’t appear, either your environment is lagging behind updates, or someone with a deep aversion to change has disabled modern features globally. Politely remind them this is 2025 and that the keyboard is no longer humanity’s finest input device.

    After activation, the rest is user-level permission. Because the agent operates within Dataverse security roles, it uses the same privilege model as manual entry. Users who could previously write data can now ask AI to do it on their behalf. No custom roles, no risk of overexposure — just efficiency layered over existing governance. Once that’s confirmed, you’re ready to watch bureaucratic typing die its quiet death.

    Demo 1: Effortless Record Creation

    Now that the AI feature exists in your environment, let’s abuse it — constructively. Picture the typical onboarding scenario. You receive an email from someone named Sarah in operations: “Hey, we’ve onboarded a new customer called Contoso, here’s their information.” She dumps a few lines of text — address, phone, contact name, maybe the account manager — and expects you to “add it to the system.” Historically, this instruction translates to tabbing through ten required fields with growing resentment. With the Data En
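    For the record, once the fields are parsed, “add it to the system” reduces to a single Dataverse Web API call. A hedged sketch; the environment URL, token, and field values are placeholders:

```python
# What "add it to the system" amounts to once the agent has parsed
# Sarah's email: one Dataverse Web API call. The environment URL and
# token are placeholders; in practice you would authenticate via MSAL.
import requests

env = "https://yourorg.crm.dynamics.com"   # placeholder
token = "<access-token>"                   # placeholder

record = {
    "name": "Contoso",
    "telephone1": "+1 (425) 555-0199",
    "address1_city": "Redmond",
}

resp = requests.post(
    f"{env}/api/data/v9.2/accounts",
    json=record,
    headers={
        "Authorization": f"Bearer {token}",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
        "Accept": "application/json",
    },
)
resp.raise_for_status()  # 204 No Content on success
```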

    23 min
  8. 3 DAYS AGO

    Stop Migrating: Use Lists as Copilot Knowledge

    The Myth of Mandatory Migration

    Why is it that in every digital transformation meeting, someone insists the first step is to migrate everything? As if physical relocation somehow increases intelligence. A file sits peacefully in SharePoint, minding its own business, and then a consultant declares it must be “upgraded” to Dataverse for “future compatibility.” Translation: they’d like another project. You’re told that modernization equals movement, even though nothing’s broken—except, perhaps, your budget.

    For years, the myth persisted: that Copilot, Power BI, or any shiny AI assistant needed data that lived elsewhere—somewhere fancier, more “enterprise-class.” SharePoint Lists were treated like embarrassing relatives at a corporate reunion: useful once, but not to be seen in public. The assumption? Too old, too simple, too unworthy of conversational AI.

    And yet, quietly—without fanfare—Microsoft flipped that assumption. Copilot Studio now talks directly to SharePoint Lists. No ETL pipelines, no schema redesign, no recreating permissions you already spent months configuring. The connector authenticates in real time, retrieving live data without duplication. Suddenly, the “legacy” tool outsmarts the migration budget. So today we’re breaking a commandment the IT priesthood refuses to question: thou shalt not move data for no reason. You can keep your lists where they are and still have Copilot read them fluently. Let’s dismantle the migration mirage.

    Section 1: The Migration Mirage

    Every enterprise has a reflex. Something important appears? Move it to Dataverse. Something large? Fabric, obviously. Something nonstandard? Export it anyway; we’ll clean it later. It’s muscle memory disguised as strategy. Migration has become a ritual, not a necessity—a productivity tax masquerading as modernization.

    Consider the sales pipeline that already lives in a SharePoint list. It’s updated daily, integrated with Teams alerts, and feeds a dozen dashboards. But once Copilot entered the picture, someone panicked: “AI can’t use Lists; we’ll have to rebuild it in Dataverse.” Weeks later, the same data exists twice, with half the triggers broken, licensing costs multiplied, and no measurable improvement in functionality. Congratulations—you’ve achieved digital motion without progress.

    Modernization is supposed to make work easier. Instead, we build data ferries. Information leaves SharePoint, visits Power Automate for translation, docks at Fabric for modeling, and then returns to Teams pretending to be insight. It’s the world’s least efficient round trip.

    Let’s count the costs. First, licensing—because Dataverse isn’t free. Every migrated record incurs an invisible tax that someone in finance eventually notices with horror. Next, schema redesign—those column types in Lists never quite map one-to-one. Something breaks, which triggers meetings, which trigger Power Automate rebuilds. The end result: thousands of dollars spent achieving what you already had—a structured table accessible in Microsoft 365.

    And the absurdity compounds. Each year brings a new “recommended” platform, shinier than the last, so data hops again: Lists to Dataverse, Dataverse to Fabric, Fabric to some eventual “Unified Lake Platform.” The name changes, the bills persist, the value doesn’t. Users just want their information to answer questions; they never asked for serialized migration.

    The truth is brutal in its simplicity: Copilot never needed your data copied—it needed permission to see it. Authentication, not replication.
    All those hours spent writing connectors and dataflows? They existed to make up for an access gap that no longer exists. The new SharePoint List connector removes the gap entirely. For the first time, AI in Microsoft’s ecosystem understands the data where it naturally lives. No detours, no middleware acrobatics. It queries your list directly under the same user context you already trust. If you can open a row, so can Copilot. If you can’t, neither can it. Governance remains intact; logic remains simple.

    Think about what that means. The endless migration carousel—the expensive dance between platforms—wasn’t driven by technology limits. It was driven by institutional habit. Data migration became a corporate superstition, performed “just in case,” like carrying an umbrella indoors. The enterprise mind equated movement with progress, complexity with sophistication. It never occurred to anyone that simplicity might finally work.

    And now, without any ceremony, Microsoft just invalidated all that ritual. No new architecture diagram. No whitepaper claiming “revolution.” Just a quiet update: “SharePoint Lists can now be added as knowledge in Copilot Studio.” That’s it. Five seconds of configuration wiped away entire categories of budget justification. Governance teams who lived off “data modernization initiatives” now face an existential crisis. Because when information remains where it’s always been—secure, auditable, and instantly accessible—there’s nothing left to migrate. The new challenge isn’t infrastructure; it’s mindset.

    So, remember this the next time someone proposes “lifting and shifting” perfectly functional lists. Migration for its own sake isn’t strategy; it’s busywork with branding. Microsoft has made it redundant. Then, of course, there’s the punchline. After decades of consultants insisting SharePoint Lists were the problem—turns out, Lists were just waiting for Microsoft to stop pretending they weren’t good enough. And now they are.

    Section 2: Enter the SharePoint List Connector

    Then Microsoft did something uncharacteristic: it made the obvious choice. It stopped trying to rebrand common sense and simply allowed Copilot Studio to read SharePoint Lists directly. No extraneous layers, no “prerelease schema adapter,” just—connect and go. You paste a list URL, authenticate like any normal user, and Copilot immediately treats that list as an authoritative knowledge source. The operation takes less time than it does to argue about whether it’s “best practice.”

    Let’s go through this slowly because the sheer simplicity confuses people who are conditioned to complexity. In Copilot Studio, you open your agent, click “Add Knowledge,” choose SharePoint, and two familiar phrases appear: My Lists and Recent Lists. These aren’t marketing reinventions; they mirror exactly what you see in SharePoint itself. My Lists surfaces those you created through the Lists app. Recent Lists shows the ones you’ve visited lately. Microsoft even preserved that little bit of human laziness—your AI can only connect easily to lists you actually use. There’s poetic justice in that.

    Once you select a list, Copilot establishes an authenticated connection. It doesn’t copy the data or build an index; it simply references the live list through the same channels you already rely on. When you or a colleague update a record—add a new holiday, change a project deadline—Copilot’s knowledge reflects that instantly. No cache refresh, no waiting on a background indexing job. Real-time isn’t a buzzword here; it’s literal. The AI queries the latest data every single time you ask.
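    The access pattern is easy to picture in miniature: a delegated read of the live list at ask-time, with no copy and no index. A sketch against Microsoft Graph, where the site ID, list ID, token, and the HolidayDate column are all assumptions:

```python
# The access pattern in miniature: read the live list via Microsoft
# Graph under a delegated token. Site/list IDs, the token, and the
# "HolidayDate" column are placeholders; nothing is copied or indexed.
from datetime import date
import requests

token = "<delegated-access-token>"           # placeholder
site_id, list_id = "<site-id>", "<list-id>"  # placeholders

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/sites/{site_id}/lists/{list_id}/items",
    params={"$expand": "fields(select=Title,HolidayDate)"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

holidays = [
    (item["fields"]["Title"], date.fromisoformat(item["fields"]["HolidayDate"][:10]))
    for item in resp.json()["value"]
]

# Answer "What's the next company holiday?" from the live data.
next_date, next_name = min((d, name) for name, d in holidays if d >= date.today())
print(f"Next company holiday: {next_name} on {next_date}")
```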
    Consider a small demonstration. Your organization’s holiday calendar lives in a SharePoint list that you dutifully ignore until there’s free cake involved. Traditionally, if an assistant or chatbot needed that information, you’d have to export it, convert it, maybe import it into Dataverse, and then pray it synchronized correctly. Now? You just connect the list once. The HR team adds “Labor Day, September 1.” You ask Copilot, “What’s the next company holiday?” It answers instantly—“Labor Day.” No reindexing, no retraining, no “sync in progress.” The list updated, the AI saw it, end of story.

    Behind the scenes, the brilliance lies not in new technology but in what Microsoft chose not to do. They didn’t create a parallel data store; they respected your existing one. Copilot operates in the user’s security context, inheriting permissions automatically. If only certain departments can see salary data, then only those users’ instances of Copilot can retrieve it. No elevated service accounts, no security loopholes disguised as convenience. It’s not a shortcut; it’s the correct route we should’ve taken from the start.

    This architectural restraint—doing less in order to achieve more—is rare in enterprise software. For decades, integrations have thrived on duplication. But Microsoft realized that duplication is fragility. Every copy of data becomes a compliance liability, every new database a new failure point for governance. By letting Copilot talk to the original list, they’ve turned the humble SharePoint list into what might be the most cost-efficient knowledge base available.

    And the kicker? This practically eliminates the boundaries between operational data and conversational intelligence. Your support list, asset register, or task tracker—they all become live informational feeds for Copilot Studio. Nothing is archived for AI; everything is conversational in situ. The list becomes a living knowledge cell—structured, controlled, and continuously current.

    Now, of course, this shakes the foundations of entire departments whose job titles contain the word “migration.” They depend on the narrative that modernization requires movement. But when Copilot can derive structured insight from the same list your intern edits daily, that rationale evaporates. The platform that once required datacenter-grade justification suddenly needs nothing beyond a URL and a click.

    Think about the cultural shift that implies. For the first time, the intelligence layer doesn’t demand upheaval beneath it. The AI doesn’t force the business to reorganize around a new schema; it adapts to the existing one. In t

    21 min
