M365 Show with Mirko Peters - Microsoft 365 Digital Workplace Daily
M365 Show brings you expert insights, news, and strategies across Power Platform, Azure, Security, Data, and Collaboration in the Microsoft ecosystem.
Podcaster
Episodes
17.08.2025
22 minutes
Ever built the perfect Teams app locally… only to realize your
customer can’t test it without a painful deployment? What if you
could make your laptop act like a secure cloud endpoint in under
3 minutes? Dev Tunnels in Visual Studio can do exactly that—but
only if you configure them right. Today I’ll walk you through how
to open your local services to the internet, securely, for
Microsoft 365 testing—and the one setting that could accidentally
expose your entire dev box if you miss it.
What Dev Tunnels Really Are (and Why That Matters for M365 Apps)
Most developers hear “Dev Tunnels” and think “temporary URL.”
That description isn’t wrong—it’s just incomplete. A tunnel isn’t
just a link. It’s a remote entry point into whatever service is
running on your machine right now. And when you’re building for
Microsoft 365—whether that’s a Teams personal app, a SharePoint
web part, or a Power Platform custom connector—that distinction
matters a lot more than most people realize. At its simplest, a
Dev Tunnel is a way to take something running locally—say a Node
server for your Teams tab—and let people hit it from anywhere on
the internet. No publishing to Azure first. No staging
environment. It’s just your local code, reachable by someone
across the country the moment they load the link. For M365
scenarios, that’s gold. Teams tabs, SharePoint Framework
solutions, or webhooks for Power Automate flows often need to
make live calls to your dev service. Without an externally
reachable endpoint, those features either break or force you into
a slow deployment loop. Here’s the catch—because tunnels are so
easy to spin up, plenty of developers treat them as a disposable
convenience. They’ll click “create,” grab the link, and forget
about what’s actually being exposed. But the truth is, the way
you configure your tunnel shapes who has access, how secure it
is, and even how long it lasts. Without thinking about that, you
might hand out more access than you intended—sometimes to the
entire internet. Think of it like issuing a guest pass to your
dev machine. If that pass has “access all areas” printed on it,
and you’ve left your desk unattended, you can imagine the risk.
Dev Tunnels work the same way: the rules you set at creation time
are the guardrails that keep your guests from wandering into
places they shouldn’t. I’ve seen people run right into this
problem when testing with remote teammates. One dev tried to get
feedback on a Teams tab from a colleague in another city. The app
worked fine locally. But Teams, by design, wasn’t going to call
`localhost:3000` from a user’s client session in another tenant.
Without a tunnel, their only option was to package and deploy the
tab into a test tenant running in Azure. That deployment cycle?
Fifteen minutes per change. By lunchtime, they’d tested almost
nothing. The first time they used a Dev Tunnel, feedback was
instant—click save, reload in Teams, done. Microsoft actually has
a big base of developers using Microsoft 365 tools this way. A
significant portion report that they can run most of their
iteration cycles entirely locally, as long as they can push
traffic through a tunnel. Those working cloud-only generally
accept that slower loop as the trade-off. The tunnel group, in
contrast, gets real-time feedback. Before tunnels, devs relied on
staging servers or manually deploying builds to cloud sandboxes.
Staging works fine for stable features, but it’s overkill for
testing a half-built card in Adaptive Cards Designer or checking
if your bot’s messaging extension responds correctly when called
from a remote Teams client. Not to mention that staging
environments add network hops, authentication differences, and
configuration mismatches that can hide or introduce issues you
won’t see in production. And it’s not just about speed. When you
compress the feedback loop this much, collaboration changes. You
can have a PM, a designer, and a developer looking at the same
instance of your app in live Teams while you tweak things on your
laptop. You’re building in real time, without waiting for a
pipeline to run or an environment to reset. That leads to fewer
surprises once you do publish. So while “temporary URL” might be
the simplest way to describe a Dev Tunnel, it barely scratches
the surface. In Microsoft 365 development, they’re more like an
on-demand extension of your developer environment into the
outside world—one you can control down to the visibility,
authentication, and lifespan. They’re not a side tool. They’re
the connective tissue that makes secure, rapid iteration and real
collaboration possible without blowing up your schedule in
deployment waits. And once you see them as that kind of
infrastructure, the next question is pretty clear—how do you
switch them on without wrecking the setup you already depend on?
Enabling Dev Tunnels in Visual Studio Without Breaking Your Setup
The first time you switch on a tunnel in Visual Studio, it feels
almost effortless. A couple of clicks, and suddenly your dev box
is reachable from anywhere. But that’s also when you notice your
localhost SSL prompt breaking, your configured URLs no longer
lining up, or your OAuth redirect URIs throwing errors. The magic
moment turns into a head‑scratch fast if you’re not intentional
about the setup. Starting from a clean project, enabling a tunnel
is straightforward. Open your solution, go to the project node in
Solution Explorer, right‑click, and choose Properties. In
web‑friendly projects, you’ll see a Debug tab. That’s where the
Dev Tunnel option lives. Click the checkbox to enable a tunnel,
and Visual Studio will prompt you for a tunnel name and scope.
This is where most people just type something quick and hit
Enter. But what you name and how you scope it will shape how your
tunnel behaves later—especially when testing Microsoft 365 apps.
By default, Visual Studio tends to pick a public visibility
option if you don’t change it. That means anyone with the link
can hit your endpoint. For a throwaway demo, maybe that’s fine.
But if your project has authentication flows linked to an
internal tenant or exposes an API behind that tunnel, you’ve
effectively given the internet a way in. It’s all too common to
see developers click through this without realizing they’ve left
the door wide open. In the visibility dropdown, you’ll see
choices like Public, Public (authenticated), and Private
(authenticated). Pick carefully—this isn’t just a label. For a
Microsoft 365 Teams app that uses Entra ID, choosing an
authenticated tunnel keeps your audience to those who can sign in
with valid credentials. That makes accidental data exposure much
less likely. Once you’ve chosen scope, give your tunnel a clear
and reusable name. This isn’t just cosmetic. If you use a
consistent subdomain—for example, `myteamsapp.devtunnels.ms`—you
can avoid constant adjustments to registered redirect URIs in
Azure AD app registrations. OAuth callbacks are sensitive to
exact URLs. If your tunnel address changes every test session,
you’ll spend half your day in the Azure Portal re‑registering
redirect URLs. Locking in a persistent tunnel name avoids that
churn entirely. Visually, the setup page is simple. Under the Dev
Tunnel section, type your name, pick the scope, and hit “Apply.”
Visual Studio will then spin up the tunnel the next time you
debug. You’ll see a network forwarding table in the Output window
showing your local port mapped to the `.devtunnels.ms` address.
But here’s something that trips up a lot of people: if you’re
running IIS Express and enable a tunnel mid‑session, Visual
Studio often restarts the web server. That restart kills your
breakpoints for the current debugging run. If you were halfway
through tracking down a tricky state bug, it’s frustrating to
start over. The fix is simple—enable the tunnel before you start
debugging. That way, the process starts in its final state, and
your breakpoints hold as expected. For multi‑port projects—say,
you’ve got an API backend and a front‑end app—Visual Studio can
only tunnel one port per project by default. If you need both
exposed, you’ll have to either run them in separate projects with
their own tunnels or use the CLI later for advanced config.
Understanding this now helps you plan your debugging session
without surprises. Once configured, running your project now
serves it both on `localhost` and via the external tunnel link.
You can send that link to a colleague, paste it into your Teams
configuration, or register it in your SharePoint Framework
`serve.json` pointing to the externally accessible address. And
because you set the visibility right, you know exactly who can
reach it. Done this way, a tunnel doesn’t disrupt your local SSL
setup, it doesn’t kill your debugging context, and it doesn’t
require repeated redirect URI edits. It just adds a secure,
shareable endpoint on top of your normal workflow. That’s the
sweet spot—a transparent addition to your process, not a new set
of headaches. Once you’ve got the tunnel running smoothly like
this, the next big decision is the type of tunnel you
choose—because that choice decides whether your remote testers
are in a locked‑door environment or walking straight in off the
street.
Choosing Between Public Anonymous and Private Authenticated
Tunnels
Choosing the wrong tunnel type is basically like handing a
stranger the keys to your office and hoping they only look at the
thing you invited them to see. When Visual Studio prompts you to
pick between Public Anonymous, Public Authenticated, and Private
Authenticated, it’s not just a formality. Each choice changes
what someone on the other end can do and how easily they can get
in. Public Anonymous is the fastest to set up. You create the
tunnel, share the URL, and anyone with the link can hit your
endpoint. There’s no sign‑in, no extra step for the tester. That
speed is appealing during something like a hackathon, where your
biggest concern is getting an early proof of concept in front of
judges or teammates. You’re moving fast, the code is throwaway,
and there’s no sensitive tenant data involved. Public
Authenticated takes that same open link but requires the user to
log in with a Microsoft account before they can see your app.
It’s a middle ground—good if you want to keep out completely
anonymous access but don’t need to lock things to a single
identity provider or narrow group. You’re adding a speed bump
without fully closing the gate. Private Authenticated is where
you define a specific audience. Your testers have to
authenticate, and they have to be in whatever group, tenant, or
directory you’ve allowed. In the Microsoft 365 context, this
often means they must sign in with an account in your Microsoft
Entra ID tenant. That instantly cuts down who can see your app,
and it ties access to an identity you already manage. If someone
leaves your project or company, disabling their Entra ID account
means their tunnel access disappears too—no separate cleanup
needed. Here’s where developers get tripped up. It’s common to
default to Public Anonymous during early development because it’s
frictionless. But if your code references internal APIs, even
unintentionally, or your Teams app renders content from secure
Graph endpoints, you’re exposing more than just a demo UI. That
link could become an easy target for automated scanners or bots
that sweep public URLs. Even if you think the risk is low, you
have no visibility into who might be poking around. Think about
it in scenarios. At a hackathon, speed wins. You’re showing
features, not production data, so Public Anonymous might be fine
for those 24 hours. Compare that to pre‑release enterprise
testing for a Teams app that’s wired into a finance system. In
that case, Private Authenticated should be the default. Only
invited testers can sign in, and their activity is logged through
your normal Microsoft 365 auditing. When deciding, break it into
three questions: Who exactly needs to access this? What kind of
data will they see or interact with? Do I need to verify their
identity before they can connect? If the answers are “a small
known group,” “potentially sensitive business data,” and “yes,”
then Private Authenticated isn’t optional—it’s mandatory. If it’s
“anyone,” “sample data only,” and “no,” then speed might win out
with Public Anonymous, but you make that trade‑off consciously.
Entra ID integration is what makes Private Authenticated so
strong for M365 work. It’s not just another login screen—it’s
your tenant’s directory, with multi‑factor authentication and
conditional access policies baked in. That means even in a tunnel
context, you can enforce the same trust level as your production
systems. Testers get in the same way they would access company
resources, and you don’t have to invent a separate credential
system for dev testing. The security gap between public and
authenticated options is bigger than most people assume. Public
mode—anonymous or even with basic Microsoft account sign‑in—still
faces the risk of targeted attacks if the URL gets shared or
discovered. Authenticated tunnels tied to your directory shrink
that risk to a controlled tester pool, and since every request is
tied to a real account, you gain accountability. Once you start
thinking this way, picking a tunnel type stops being a “just
click the first one” step and becomes part of your security
posture. You’re no longer reacting after something gets
exposed—you’re defining access at the moment you open the
connection. And with that in place, the next logical step is
figuring out how to create, manage, and reuse those exact
configurations without manually clicking through Visual Studio
every time. That’s where the CLI completely changes the game.
Taking Control with the Dev Tunnels CLI
The Visual Studio UI works fine when you’re setting up a single
tunnel on your own machine. But if you’re trying to mirror that
exact tunnel on multiple laptops, or spin one up from a build
agent in seconds, the clicking becomes a bottleneck. That’s where
the Dev Tunnels CLI starts to pull its weight. It gives you the
same underlying tunnel service that Visual Studio uses, but with
the repeatability and scripting control that’s hard to match in a
GUI. Most developers never open the CLI, so they miss out on a
lot of practical capabilities. The UI is tied to the project you
have open, which feels natural until you want a tunnel for
something outside of Visual Studio or you need to reuse the same
port without re‑configuring. The CLI doesn’t have those limits.
You can start a tunnel without loading an IDE, you can
standardise naming conventions, and you can automate the whole
process so it’s consistent across every environment. If you’ve
never touched it before, the CLI approach is straightforward.
Once you’ve installed the Dev Tunnels tool, creating a new tunnel
is a single command. For example, `devtunnel create myteamsapp
--port 3000 --allow-unauthenticated` will set up a new tunnel
called “myteamsapp” forwarding port 3000 with anonymous access.
From there, `devtunnel list` shows all your active and saved
tunnels, along with their URLs. To start one, you use `devtunnel
host myteamsapp`, and the service comes online instantly. The
naming here is more than cosmetic. Consistent names translate to
consistent URLs, which is critical for any Microsoft 365 app that
has registered redirect URIs or webhook endpoints. One wrong
character in a callback URL, and your auth flow fails silently.
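To make that concrete, here’s a minimal PowerShell sketch of the kind of script a team might keep in its repo. It simply wraps the devtunnel commands quoted above, so treat the tunnel name, port, and flag names as illustrative rather than a definitive recipe, and check them against `devtunnel --help` for your installed version.

```powershell
# Sketch: stand up the team's shared Dev Tunnel for a local Teams tab backend.
# Assumes the devtunnel CLI is installed and you are already signed in.
# Tunnel name, port, and flags mirror the examples quoted in this section;
# adjust them to your project and verify flag names with: devtunnel --help

$tunnelName = "myteamsapp"   # keep this stable so registered redirect URIs never change
$port       = 3000           # the local port your Teams tab backend listens on

# Create the named tunnel (first run only); re-running against an existing
# name may simply report that it already exists.
devtunnel create $tunnelName --port $port --allow-unauthenticated

# Host the tunnel; the forwarding URL it prints is what you paste into your
# Teams manifest or share with a remote tester.
devtunnel host $tunnelName
```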
With the CLI, you can define those names once and know that every
developer on the team will get the same outcome. Config files
make this even smoother. You can store tunnel settings—name,
port, authentication type—in a JSON file alongside your code.
When a new developer joins the project, they run one command
against that file, and their tunnel matches the rest of the
team’s setup immediately. There’s no “go click here, then here,
then change this dropdown” walkthrough. No step is left to
memory. This standardisation is where larger teams see real
gains. Imagine onboarding three contractors into a Teams app
project. Instead of each person fumbling through UI settings,
they all run `devtunnel create --config devtunnel.json` and have
the exact same environment in under a minute. That removes a
whole category of “it works on my machine” problems before they
start. The CLI also opens the door to using Dev Tunnels in CI/CD
pipelines. If your automated build runs integration tests that
rely on an external callback—maybe a Teams messaging extension
that needs to call your dev API—you can have the pipeline spin up
a tunnel automatically during the job. The build agent hosts it,
the tests run against the real endpoint, and when the job
finishes, the tunnel is torn down. You get end‑to‑end testing
with live callbacks without exposing anything permanently to the
internet. For local debugging, this flexibility means you’re not
tied to having Visual Studio up and running just to serve
traffic. You can host the backend through the CLI, then hit it
from your front‑end app that’s running in another dev tool
entirely. That makes it easier to test scenarios where one part
of the solution is in .NET and another is in Node, without
forcing both into the same debugging session. And because the CLI
can reuse previous tunnels without changing URLs, your OAuth and
webhook registrations stay valid between runs. All of this adds
up to a key point: the CLI isn’t just for people who dislike
GUIs. It’s for anyone who needs repeatability, speed, and
environment parity without relying on manual clicks. Once you’ve
got a set of tested CLI commands, you can share them in your
repo’s README, embed them in npm scripts, or wire them into
PowerShell build routines. That’s a level of control and
consistency you simply can’t get if every tunnel is created from
scratch by hand. When you’re working this way, tunneling stops
being an occasional tool and becomes part of the core workflow.
Every machine can host the same configuration, automation can
stand up tunnels in seconds, and you can scale testing to more
people without slowing down setup. But with that power comes
responsibility—because once your tunnels are this fast to deploy,
you need to be deliberate about how you protect them from the
moment they come online.
Locking It Down: Avoiding Common Security Mistakes
One missed setting can turn your dev box into an open buffet for
attackers. It’s not an exaggeration—Dev Tunnels are essentially
internet-facing endpoints into whatever you’re running locally,
and if you treat them like a casual “share” link, you’re handing
out access without tracking who’s walking through the door. The
most common mistakes keep repeating. Developers spin up a tunnel
and leave it at the default visibility—often Public
Anonymous—without considering that the link may travel further
than intended. Or they finish testing, close their laptop, and
forget the tunnel is still running hours later. In some cases,
tunnels stay accessible overnight with services still responding.
And then there’s weak authentication—using accounts without MFA
or handing access to people outside a controlled tenant. The
pattern is predictable: speed wins in the moment, and security
gets added “later.” It doesn’t take much for a small oversight to
snowball. Picture this: a local API behind a public tunnel helps
test a Teams messaging extension. That API trusts calls from
Graph with a certain token. An attacker probing randomly finds
your tunnel’s URL, hits a less protected endpoint, and extracts
some internal IDs. Those IDs get used to craft an authenticated
request upstream. Now you’ve got a compromised API key tied back
to your connected M365 tenant. From one missed scope change to a
potential breach—without you ever realising someone outside your
circle connected. I’ve seen log snippets where exposed tunnels
started receiving GET requests from IP ranges in regions the
developer had no connection to. Patterns like `/wp-login.php` and
`/phpmyadmin` litter the logs—a clear sign someone’s scanning for
common admin pages. None of those existed on the dev box, but the
noise made it harder to spot requests that actually mattered. If
there had been a real vulnerable endpoint, the story could have
ended differently. This is why the principle of least privilege
applies as much here as it does to any cloud role assignment.
Open as little as possible for as short a time as possible. If
you only need port 3000 for a Teams client test, don’t forward
your entire web stack. Limit visibility to a specific tenant or
authorised accounts. And the moment testing is done, shut it
down—don’t leave the channel open “just in case.” Microsoft Entra
ID plays a big role here for M365-focused work. If you set your
tunnel to Private Authenticated and tie it to your Entra ID
tenant, you immediately raise the bar for access. Multi-factor
authentication can kick in for testers. Conditional access
policies can require devices to meet compliance before they
connect. It’s the same trust layer you’d rely on for production
resources, applied to your ephemeral dev endpoint. That’s far
more effective than a random password or relying on an
unvalidated public login. Logging and monitoring can feel like
overkill for something temporary, but it pays off the first time
you need to verify whether a tester actually saw the latest build
or if a strange request came from your team or from outside.
Tunnel logs give you request timestamps, source IPs, and hit
endpoints. Pair them with your app logs, and you get a timeline
you can actually trust. In a sensitive test cycle, that record is
your safety net. At a practical level, locking things down means
having a short checklist you run through every time you start a
tunnel: set the correct visibility, verify the authentication
mode, confirm only the necessary ports are forwarded, and plan a
clear stop time. If your work takes more than one session, reuse
named tunnels with fixed URLs instead of rebuilding them publicly
each day—especially if that rebuild defaults to open access. The
goal isn’t to make tunnelling cumbersome—it’s to make it
conscious. When security settings are part of your standard flow,
you stop treating them as an “extra” and start treating them like
the foundation. That’s how you avoid the feeling of “it was only
supposed to be a quick test” becoming “we might have just leaked
production data.” Once you bake those habits into your workflow,
Dev Tunnels stop being a casual risk and start being a reliable
part of your testing pipeline. Which brings us to the bigger
picture—how they fit strategically into how you build and
collaborate in Microsoft 365 without slowing down delivery.
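Before wrapping up, here’s that start-and-stop discipline as a small PowerShell sketch you could run at the end of a test session. The `devtunnel list` command is quoted earlier in this episode; the `devtunnel delete` command shown is assumed, so confirm the exact syntax against your installed CLI with `devtunnel --help`.

```powershell
# Sketch: end-of-session cleanup so no tunnel stays reachable overnight.
# "devtunnel list" is quoted earlier in this episode; "devtunnel delete" is
# assumed here, so verify the exact command with: devtunnel --help

$tunnelName = "myteamsapp"   # the named tunnel used during this test session

# Review what is currently defined under your account before you walk away.
devtunnel list

# Tear down the named tunnel once testing is finished (assumed syntax).
devtunnel delete $tunnelName
```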
Conclusion
Dev Tunnels aren’t just a handy trick to skip deployment—they’re
a bridge that connects your local code to real users securely
when you configure them with intention. In Microsoft 365
projects, that means faster collaboration without taking
shortcuts that weaken security. For your next Teams, SharePoint,
or Power Platform build, try starting with a private,
authenticated tunnel tied to your tenant. You’ll keep control
while giving testers the same experience as production. Next
time, we’ll look at pairing tunnels with automated Teams app
testing pipelines—so your local changes can be validated
end‑to‑end without ever leaving your development environment.
16.08.2025
22 minutes
If you’re still exporting Dynamics 365 data to Excel just to make
a chart, you’re losing hours you’ll never get back. What if those
insights could appear live, inside the CRM or ERP screens your
team already lives in? Today, we’re connecting Dynamics 365
directly to Microsoft Fabric’s analytics models — and then
embedding Power BI so your data updates instantly, right where
you need it. Forget static spreadsheets. Let’s see how real-time,
in-app analytics can change your sales and operations game.
When Reporting Feels Like Groundhog Day
Imagine pulling the same sales or ops report every morning,
opening it in Excel, tweaking the formulas just enough to make it
work, and then realising that by the time you press save, the
numbers are already stale. For a sales manager, that might be
this morning’s revenue by region. For an operations lead, it’s
the latest order fulfilment rates. Either way, the day starts
with the same ritual: download from Dynamics 365, open the
spreadsheet template, reapply pivot table filters, and hope
nothing in the export broke. It’s a routine that feels
productive, but it’s really just maintenance work — updating a
picture of the business that’s no longer accurate by the time the
first meeting rolls around. In most organisations, this happens
because it’s still the fastest way people know to get answers.
You can’t always wait for IT to build a new dashboard. You need
the numbers now, so you fall back on what you control — a
spreadsheet on your desktop. But that’s where the trouble begins.
Once the file leaves Dynamics 365, it becomes a standalone
snapshot. Someone else in the team has their own spreadsheet with
the same base data but a filter applied differently. Their totals
don’t match yours. By mid-morning, you’re in a call debating
which version is “right” rather than discussing what to do about
the actual trend in the numbers. Those mismatches don’t just
appear once in a while — they’re baked into how disconnected
reporting functions. One finance analyst might be updating the
same report you created yesterday with their own adjustments. A
territory manager might be adding in late-reported deals you
didn’t see. When you eventually try to combine these different
sources for a management review, it can take hours to reconcile.
A team of six working through three separate versions can lose
half a day chasing down why totals differ by just a few
percentage points. By the time it is sorted, whatever advantage
you had in acting early is gone. And this isn’t just about
spreadsheets. Even so-called “live” dashboards can end up pulling
stale data if they live in a different tool or need to be
manually refreshed. Maybe your Dynamics 365 instance syncs with a
separate analytics platform overnight. That means the sales
pipeline you’re looking at during a 9 a.m. meeting is really from
yesterday afternoon. In fast-moving environments, that delay
matters. A prime example: a regional sales push for a
limited-time promotion that didn’t register in the report until
after the campaign window closed. Because leadership didn’t see
the lagging numbers, they didn’t deploy extra resources to help —
and the shortfall in orders was baked in before anyone could
respond. Over time, this kind of lag erodes trust in the numbers.
When teams know the stats aren’t current, they start making
decisions based on gut feel, back-channel updates, or whatever
data source they like best. It becomes harder to align on
priorities. People hedge their bets in meetings with “well,
according to my numbers…” and nobody’s quite sure which dataset
should decide the next move. The more these manual steps pile up,
the more your so-called data-driven culture turns into a cycle of
checking, re-checking, and second-guessing. The irony is, none of
this points to a skill gap or a motivation problem. The people
involved are experienced. The processes they follow might even be
documented. The real block is that operational systems and
analytical systems aren’t wired to work as one. Your CRM is great
at capturing and processing transactions in real time. Your
analytics layer is good at aggregating and visualising trends.
But when they live apart, you end up shuffling snapshots back and
forth instead of making decisions from a shared, current view of
the truth. It doesn’t have to stay that way. There are ways to
bring live, contextual insight right into the same screen where
the work happens, without switching tabs or exporting a single
record. Once those two worlds are connected, the updates you need
are there as soon as the data changes — no rebuild, no refresh
lag, no version mismatch. Now that the pain is clear, let’s see
what changes when we actually bridge the operational and
analytical worlds.
The Missing Link Between Data and Action
Most teams treat operational data like it’s stuck in two separate
realities — it’s either living inside your CRM, updating
transaction by transaction, or frozen in some report that was
pulled last week and emailed around. The two rarely meet in a way
that drives actual decisions in the moment. Dynamics 365 is a
perfect example. It’s fantastic at capturing every customer
interaction, lead status change, order update, and service ticket
the second they happen. But once you need a cross-region sales
view, trend analysis, or combined operations snapshot, that data
has to go somewhere else to be worked on. And that’s where the
first gap appears. Transactional systems like CRM and ERP are
built for speed and accuracy in recording operational events.
Analytics platforms are designed for aggregation, correlation,
and historical trend tracking. Stitching the two together isn’t
as simple as pointing Power BI at your live database and calling
it done. Sure, Power BI can connect directly to data sources, but
raw transactional tables are rarely ready for reporting. They
need relationships defined. They need measures and calculated
columns. They need to be reshaped so that the “products” in one
system match the “items” in another. Without that modeling layer,
you might get a visual, but it won’t tell you much beyond a count
of rows. Even when teams have dashboards connected, placing them
outside the operational app creates its own friction. Imagine a
sales rep working through opportunity records in Dynamics 365.
They notice that their territory’s pipeline looks weak. They open
a separate dashboard in Power BI to explore why, but the filters
there don’t line up with the live CRM context. It takes mental
energy to align what they’re seeing with what they were just
working on. And the moment they switch away, the operational
detail is out of sight, meaning the analysis becomes disconnected
from the action they could be taking right then. The problem isn’t
a lack of tools. It’s that the live operational context and the
cleaned, modeled analytical view have been living in different
worlds. This is exactly where Microsoft Fabric changes the game.
Instead of exporting data out of Dynamics 365 or trying to keep
multiple refresh cycles in sync, Fabric creates one unified,
analysis-ready copy of the data. And it’s not just pulling in CRM
tables — it can merge data streams from finance systems, supply
chain trackers, marketing platforms, and anything else in your
Microsoft ecosystem into that same analytical copy. Think of
Fabric as the central nervous system in your organisation’s data
flow. Operational systems fire off events the way your body’s
sensors send impulses. Fabric catches those impulses in real
time, processes them so they make sense together, and then pushes
the relevant signal to wherever it’s needed — whether that’s a
Power BI report embedded in Dynamics 365, or a separate analytics
workspace for deeper exploration. The beauty here is that the
data arrives already modeled and fit for purpose. You’re not
waiting on an overnight process to prepare yesterday’s numbers.
You’ve got an always-on layer distributing clean, connected
insights. And once Fabric is part of your setup, embedding Power
BI into Dynamics 365 stops being a wishlist item and starts being
a straightforward configuration step. You already have the data
modeled in Fabric. Power BI can draw from it without complicated
query logic or repeated transformation steps. The report you
design can be built to match the exact context of a CRM form or
ERP process screen. That alignment means someone looking at a
customer record is seeing performance metrics that reflect that
moment, not a stale approximation from hours ago. What you end up
with is a single pipeline that runs from event to insight without
detouring through disconnected tools or stale exports. Dynamics
365 keeps doing what it’s best at — recording the truth as it
happens. Fabric continuously shapes that truth into a form that
can be visualised and acted on. And Power BI becomes the lens
that shows those insights right inside the workflow. With that
bridge in place, the friction between data and action disappears.
There’s no need to choose between speed and accuracy, or between
operational detail and analytical depth. The two become part of a
single experience. Now let’s uncover the actual process to wire
Dynamics 365 into Fabric.
Wiring Dynamics 365 to Fabric: The Practical Playbook
The idea of connecting two big enterprise systems sounds like a
month-long integration project — diagrams, code, test cycles, the
works. But if you know the right path, you can stand it up in a
fraction of the time without custom connectors or surprise costs.
The trick is understanding how Dynamics 365, Dataverse, Fabric,
and Power BI talk to each other, and setting each stage up so the
next one just clicks into place. Before you start, there are a
couple of non-negotiables. You need a Power BI workspace that’s
enabled for Fabric. Without that, you’re trying to build in an
environment that can’t actually host the analytical copy Fabric
produces. On the Dynamics 365 side, check that you have the right
admin permissions — at minimum, the ability to manage environment
settings and enable features inside Power Platform. If you’re
working in a larger org, you might also need to loop in the
security team to approve service access between Dynamics and
Fabric. A lot of admins assume this connection means standing up
middleware or buying a third-party integration tool. It doesn’t.
Microsoft built the bridge through Dataverse. Think of Dataverse
as the shared storage layer under Dynamics 365. Every table in
CRM or ERP already lives here. By pointing Fabric at Dataverse,
you’re essentially tapping into the source system without pulling
data out through an export file. This also means you inherit the
schema and relationships Dynamics already uses, so you’re not
recreating them later in Power BI. The first practical step is
enabling the analytics export in Power Platform admin. You select
the Dataverse tables you want — accounts, opportunities, orders,
whatever fits your reporting goals. Here’s where being
intentional matters. It’s tempting to turn on everything, but
that adds noise and processing overhead later. Once your tables
are mapped, you define the destination in Fabric where that data
copy will live. From there, schedule ingestion to keep that
analytical copy fresh. Depending on your latency needs, it could
be near real-time for operational KPIs or every few hours for
less time-sensitive metrics. Getting that raw data into Fabric is
only half the job. You still need it shaped for analysis, and
that’s where Fabric’s Data Factory or Dataflows Gen2 come in.
Data Factory gives you pipelines to join, filter, and transform
datasets at scale. Dataflows Gen2 works well for more targeted
transformation — renaming columns, splitting fields, adding
calculated measures. This is the point where you can also bring
in other data sources — maybe finance data from Business Central
or inventory signals from Supply Chain Management — and unify
them into that same Fabric workspace. Security isn’t an
afterthought here. Role-based access in both Dynamics 365 and
Power BI should align so users only see what they have rights to
in the source system. That’s where user identity mapping becomes
critical. You want someone viewing a report embedded in Dynamics
to see it filtered down to their territory or business unit
automatically, without manually applying filters. Data
sensitivity labels in Fabric can help prevent accidental exposure
when you start combining datasets from across departments. Once
this pipeline is in place, the heavy lifting is done. You now
have an analytical copy of your Dynamics 365 data flowing into
Fabric, kept in sync on your schedule, transformed into a model
that works for reporting, and secured in line with your
operational rules. At this stage, embedding a Power BI report
back into Dynamics is almost plug-and-play. Power BI connects to
the Fabric dataset. The report is built with the fields and
measures you’ve already prepared. Embedding settings in Dynamics
control where it appears — maybe in a dashboard tab, maybe right
inside a form. The connection stage isn’t about writing complex
code or debugging APIs. It’s about deliberately configuring each
link in the chain so the next step works out of the box. When
you’ve done that, the rest — building visuals, embedding them,
and delivering that in-app insight — becomes the quick part. With
live Fabric datasets in place, the next move is turning them into
meaningful visuals your teams will actually use.
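Before you start building those visuals, it’s worth a quick sanity check that the Fabric-enabled workspace and its datasets are visible to the account that will own the reports. Here’s a minimal sketch using the MicrosoftPowerBIMgmt PowerShell module; the workspace name below is an illustrative placeholder.

```powershell
# Sketch: confirm the Fabric-enabled workspace and its datasets are reachable
# before building and embedding reports. Requires the MicrosoftPowerBIMgmt
# module; the workspace name is an illustrative placeholder.
Connect-PowerBIServiceAccount

$workspace = Get-PowerBIWorkspace -Name "Dynamics 365 Analytics"

# List the semantic models (datasets) in that workspace; the model you build
# on the Fabric data is what the embedded report will bind to.
Get-PowerBIDataset -WorkspaceId $workspace.Id | Select-Object Name, Id
```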
Designing Embedded Reports Your Team Will Actually Use
A beautiful Power BI dashboard isn’t worth much if it lives in a
forgotten browser tab. The value isn’t in how good it looks —
it’s in how many decisions it influences. And that influence
drops to almost nothing if people have to break their flow to
find it. That’s where embedding inside Dynamics 365 changes the
game. Instead of expecting users to remember to open a separate
report, you bring the insights directly into the screens they
already rely on to manage customers, process orders, or track
cases. No extra logins, no juggling windows — the data is just
part of the process. When a report sits right next to the records
someone is working on, it stays inside their decision window. A
service rep handling a support case can see real-time backlog
trends without leaving the case form. An account manager
scrolling through an opportunity record can check projected
revenue impact without clicking into another app. That proximity
matters because it removes the mental gap between reviewing data
and taking action. You’re not moving from analysis mode to
execution mode — you’re doing both in the same place. But there’s
a trap here. Just because you can bring the full power of Power
BI into Dynamics 365 doesn’t mean you should flood the screen
with every chart you have. Too many metrics can turn into white
noise. Important indicators get buried under less relevant
trends, and users either ignore the whole thing or cherry-pick
the parts that confirm what they already thought. The goal is to
surface the right numbers for the role, in the right context. Take
a sales dashboard embedded into the opportunity form as an
example. Instead of a generic set of charts, it can show the
current deal’s probability score, the average cycle length for
similar deals, and the recommended next step pulled from your
sales playbook logic. If the deal is stuck in a stage longer than
average, the report can highlight that in red right in the view.
There’s no need to dig into another report — the prompt to act
sits in the exact place the rep enters notes or schedules the
next call. That role-specific focus applies across the business.
Sales teams care about pipeline value, win rates, and deals at
risk. Operations teams need to see production backlog, supply
metrics, and shipment delays. Finance might need invoice aging
and payment patterns. A one-size-fits-all embedded report means
everyone has to filter and interpret their way to what matters,
which eats into speed. Designing separate reports for each major
role means you control the signal-to-noise ratio from the
start. This is where row-level security in Power BI becomes more
than a compliance box-tick. Using RLS, those embedded reports can
adapt to the user. A territory manager sees only their geography.
A departmental lead sees only their cost centre’s data. That
filtering happens automatically based on their Dynamics 365
login, so they’re never staring at irrelevant numbers or — worse
— data they shouldn’t have. On the technical side, embedding is
straightforward once your dataset lives in Fabric. Power BI
reports use that dataset and are then placed into Dynamics 365
forms or dashboard sections through standard components. You can
add a report as a tab in a model-driven app, drop it into a
dashboard tile, or even embed it inside a specific record type’s
main form. That placement decides whether the context is broad,
like a company-wide dashboard, or narrow, like a report focused
on a single account record. When you get the alignment right — UI
placement, metric selection, and role-based filtering — you don’t
have to beg for adoption. People use the reports because they’re
unavoidable in the best way possible. They’re part of doing the
job, not an extra step on top of it. Over time, this
normalisation changes the way teams think about data. It’s no
longer an occasional check-in from a separate tool, but a
constant presence guiding everyday actions. Once the reports are
live and framed inside the workflows, something interesting
happens. They start paying for themselves almost immediately
through faster reactions, more consistent decision-making, and
fewer “I didn’t see that in the dashboard” conversations. The
next step is watching how those small, in-context insights
compound into bigger results when the setup is running day after
day.
From Reactive to Proactive: The Immediate Payoff
The difference between a reactive team and a proactive one often
comes down to timing. Specifically, how quickly they catch shifts
in the numbers before those shifts snowball into bigger problems.
If you’re only spotting a sales slump at the end of the month,
the damage is already baked in. But if that dip shows up in an
embedded report on Wednesday morning while reps are updating
their opportunities, you can address it before the week’s out.
That’s the kind of edge that changes outcomes. Picture a supply
chain manager watching a live backlog metric inside their
Dynamics 365 order management screen. A spike appears in red —
orders piling up in one warehouse faster than usual. They can
react before it cascades into slow deliveries for a key customer
segment. Without that embedded metric, that signal might only
show up in a monthly performance review, when frustrated
customers are already calling and delivery schedules are weeks
behind. It’s not that these issues didn’t have data trails before.
They did. But the old process meant waiting for a scheduled
review — end of week, end of month, quarterly dashboards. By the
time those numbers landed in the meeting deck, the context was
old, the causes harder to trace, and the options for fixing the
problem much narrower. A sales slump caught in the third week of
the month can still be turned around. The same slump identified
after month-end is just used to explain why the target was
missed. One of the clearest gains from embedding live
Fabric-powered reports is the collapse of insight latency. That’s
the lag between something happening in the business and the
moment you notice. In many organisations, that lag is measured in
days, sometimes longer. By wiring Fabric datasets into Dynamics
365 and embedding role-specific reports, you cut that down to
minutes. Pipeline value drops in one territory? You see it right
there in the same view you’re using to assign leads. Inventory
for a top-selling product dips below reorder threshold? It’s
flagged in real time on the order entry screen. There’s a
psychological shift that comes with this immediacy. When teams
trust that the numbers on their screen are current to the last
few minutes, confidence in acting on those numbers goes up. They
stop second-guessing the data or cross-checking with three other
sources “just to be sure.” That extra caution made sense when
most dashboards were based on yesterday’s extracts. But it also
slowed everything down and drained energy from the decision
process. Real-time embedded reports remove that
hesitation. Decision-makers also stop wasting mental bandwidth
juggling multiple systems. Without embedded analytics, you might
keep Dynamics 365 in one tab, Power BI in another, maybe even a
shared spreadsheet for quick custom views. Verifying a single KPI
means hopping between them, re-filtering datasets, and trying to
reconcile differences. That context-switching is not just tedious
— it’s a point where focus gets lost. When the KPI is embedded
right next to the transaction data that drives it, you’re
validating and acting in one sweep. The compounding effect is easy
to underestimate. A single well-placed embedded report can
influence dozens of micro-decisions across a team every day. A
sales manager reallocating leads before the quarter-end crunch.
An operations lead rerouting orders to balance warehouse loads. A
service manager escalating certain cases earlier because backlog
metrics make the risk clear. Each decision might save an hour
here, a lost sale there. Over weeks and months, the aggregate
impact adds up to measurable revenue gains and efficiency
improvements just from putting the right numbers in the right
place. And this isn’t a fragile solution that needs constant
babysitting. Microsoft is iterating on Fabric and Power BI’s
integration points, making dataset refreshes faster, embedding
smoother, and security mapping more automatic. That means the
same pipeline you set up now will only get more capable with each
update, extending the range of reports and data combinations you
can embed without re-engineering the stack. You’re not locking
yourself into a snapshot of today’s capabilities — you’re putting
a growth path under your reporting layer. When people talk about
digital transformation in CRM or ERP, it often sounds abstract.
In reality, embedding Fabric-powered Power BI reports into
Dynamics 365 turns it from a place you store and retrieve data
into a live decision environment. The moment something changes
that matters to your role, the system can show you — right where
you work — without you having to go hunt for it. So where does
this leave you and your next steps?
Conclusion
Real-time, in-app analytics isn’t just a convenience feature —
it’s how modern teams outpace competitors who still wait for
end-of-month reviews. If your data lives where your people work,
action happens faster and with more confidence. Take a hard look
at your current Dynamics 365 reporting. Find the one workflow
where faster, context-aware insight could make the biggest
difference, and pilot an embedded Fabric-powered report there.
Microsoft’s already moving toward tighter integration and smarter
automation in Fabric. Soon, setup times will shrink, models will
get smarter, and that advantage you build today will compound
without extra maintenance.
16.08.2025
21 minutes
Ever wondered what your team is really doing in Microsoft 365?
Not in a micromanaging way, but from a compliance and security
perspective? The truth is, without auditing, you’re flying
blind—especially in a hybrid world where sensitive data moves
faster than ever. Today, we’re going to show you how Microsoft
Purview lets you actually see what’s happening behind the scenes.
Are your audit logs catching what matters most—or are you missing
the signs of a risk that could cost you? Let’s find out.
Why Visibility Matters More Than Ever
Your organization might be tracking logins, but do you know who’s
opening sensitive files at two in the morning? That’s the gap so
many companies miss. It’s easy to feel like activity is covered
when you see pretty dashboard charts of active users and
sign-ins, but that barely scratches the surface of what’s
actually happening in your environment. The shift to hybrid work
has been great for flexibility, but it’s also made user activity
harder to monitor. People are connecting from personal devices,
home networks you don’t control, and cloud apps that blur the
boundary between what lives in your tenant and what gets shared
outside of it. The lines are fuzzier than ever, and so are the
risks. Most companies assume the built-in usage reports in
Microsoft 365 are the same thing as audit logs. They’re not.
Usage reports might tell you that a OneDrive file was accessed
five times, but they rarely tell you which user accessed it,
under what session, or from where. That’s like checking the
odometer on your car—sure, you know how many miles were driven,
but you have no idea who was behind the wheel. It looks good
until your compliance officer asks for precise accountability,
and suddenly you realize those gaps aren’t just minor oversights.
They can turn into questions you can’t answer. Imagine this
scenario: your legal department asks you to provide a clear
account of who viewed and copied financial records last quarter.
Maybe there’s an investigation, maybe it’s just part of due
diligence. If all you have is a roll-up report or email activity
stats, you’ll find yourself staring at incomplete data that fails
to answer the actual question. When you can’t meet that level of
detail, the issue shifts from inconvenience to liability. The
ability to trace actions back to individual users, with a
timeline, is no longer a nice-to-have capability—it’s the
baseline expectation. Then you have the pressure of regulations
stacked on top. Frameworks like GDPR, HIPAA, and
industry-specific mandates demand that organizations keep
detailed records of user activity. They aren’t satisfied with
generic counts and summaries; they want traceability,
accountability, and proof. Regulators don’t care if your portal
makes things look secure. They care about evidence—clear logs of
who did what, when they did it, and in many cases, from what
device or IP. If you can’t produce that, you can end up with
everything from fines to litigation risk. And fines are the
visible part—damage to reputation or client trust is often far
worse. Without strong auditing, blind spots put you in danger two
ways. One is regulatory exposure, where you simply cannot produce
the information required. The other is making it easier for
insider threats to slip by unnoticed. You may catch a brute force
login attempt against an MFA-protected account, but would you
notice a trusted user quietly exporting mailbox data to a PST
file? If you don’t have the right granularity in your logs, some
of those actions blend into the background and never raise
alarms. That’s what makes blind spots so dangerous—they hide
activity in plain sight. It’s like setting up a building with
security cameras at the front door, but all those cameras do is
mark that “someone entered.” You have absolutely no view of
whether they walked straight to the lobby or broke into the
records room. That kind of system satisfies nobody. You wouldn’t
feel safe in that building, and you wouldn’t trust it to host
sensitive conversations or high-value assets. Yet many IT
organizations operate this way because they don’t realize their
current reports offer that same shallow view. The good news is
that Microsoft Purview closes those gaps. Rather than siloed or
surface-level data, it gives structured visibility into activity
happening across Exchange, SharePoint, Teams, Power BI, and more.
It doesn’t just say “a user connected”—it captures the actions
they performed. That difference moves you from broad usage stats
to fine-grained audit trails you can actually stand behind. At
this point, it’s clear that auditing user activity isn’t optional
anymore. It’s not just about checking a compliance box—it’s the
shield protecting both trust and accountability in your
organization. When you can show exactly who did what, you reduce
risk, strengthen investigations, and put yourself in a position
where regulators and security teams alike take your evidence
seriously. Now that we know why visibility is non-negotiable, the
next question is obvious: what exactly is Microsoft Purview
Audit, and how does it separate itself from the standard logs
already built into Microsoft 365?
What Microsoft Purview Audit Actually Is
So what makes Purview Audit different than simple activity
logging? On the surface, activity logs and usage reports seem
like they deliver the same thing. You get numbers, dates, and
maybe the high-level actions users performed. But Purview Audit
goes deeper—it isn’t just a log of who signed in or how many
files were shared. It’s Microsoft’s centralized system for
capturing the details of user and admin actions across Microsoft
365 services, letting you investigate events with much more
precision. Instead of looking at fragmented reports from
Exchange, SharePoint, Teams, and OneDrive individually, you work
from a single investigation pane. That unifies oversight and
makes evidence gathering a structured process rather than
scattered detective work. A lot of admins miss that difference.
It’s common to confuse the friendly graphs inside the M365 admin
center with actual auditing. A usage chart might reassure you
that Teams is “adopted widely” or SharePoint storage grew by some
percentage. But if your compliance team asks for proof about a
deleted file, that data won’t help. Purview Audit captures
forensic-level detail: the specific user, the activity type,
timestamps, and in many cases contextual metadata like client IP
or workload. It replaces the guesswork with provable logs that
hold up under scrutiny, whether that’s regulatory review or
incident response. There are two layers to understand—Standard
and Premium. Purview Audit Standard comes on for most tenants
automatically and gives you the baseline: actions like file
access, document sharing, email moves, mailbox logins, and basic
administrator activity across the core workloads such as
Exchange, SharePoint, OneDrive, and Azure Active Directory. Think
of Standard as the foundation. You’ll be able to track major user
events, verify if someone signed in, exported mail, or touched a
file, and set date ranges to review those actions. For smaller
organizations or those not working in deeply regulated
industries, it can feel sufficient. Premium is where the line
sharpens. With Audit Premium, Microsoft expands the scope and
retention of what’s captured. Suddenly you’re not only seeing the
obvious actions, you’re getting advanced signals like
forensic-level logon data including token usage, geolocation
context, and client details. Teams activity isn’t just about a
file uploaded; you can capture message reads, reactions, and link
clicks. The retention jumps from a limited 90 days in Standard to
up to 365 days or longer in Premium. That longer retention is
often the difference between being able to investigate past
incidents or hitting a frustrating dead end. If you’ve ever had
an investigation that spanned several months, you know why older
data is essential. Put this into a real-world example. Imagine
you suspect an insider quietly exported large quantities of
mailbox content. In Standard, you might see a note that “a
mailbox export was initiated” along with a timestamp and the
account name. Helpful, but limited. In Premium, you’d see the
session identifiers, the client used for the export, and the
specific context about how the action was initiated. That
additional metadata can point to whether it was a legitimate
admin following procedure or an unusual account trying to sneak
out data at 3 A.M. For forensic investigations and eDiscovery
readiness, that extra layer of granularity turns a flat report
into actionable intelligence. This is why for heavily regulated
industries—finance, healthcare, government—Standard won’t cut it
in the long term. Even if the basics cover today’s questions,
audits grow more complex as regulations get stricter. When an
auditor asks not just “who accessed this file” but “show me all
anomalous activity in the weeks before,” Premium-level logging
becomes essential. You cannot answer nuanced, time-sensitive
questions without that data. For everyone else, there’s still
value in Premium because subtle insider risks or advanced threats
won’t reveal themselves in just basic usage activity. What makes
Purview Audit stand out, then, is not simply volume. It’s the
nature of the information you can act on. You aren’t just
collecting logs to satisfy compliance; you’re capturing a
narrative of digital activity across your tenant. Every login,
every admin command, every unusual traffic spike can be turned
into evidence. The distinction boils down to this: with usage
reports you watch from 30,000 feet. With Purview, you walk the
floors and see exactly what happened, even months later. That’s
why Purview Audit isn’t just another dashboard tucked away in the
portal. It’s the fail-safe when things go sideways, the proof you
turn to after an incident, and the accountability layer for
compliance officers. Having the right edition for your scenario
determines whether you can quickly investigate or whether you’re
left scrambling for missing details. Now that we’ve clarified
what Purview Audit really is and why those distinctions matter,
the natural step is to see it in action. So let’s walk through
how to actually get hands-on with the audit experience inside the
portal.
How to Get Started in the Portal
The Compliance portal can feel overwhelming the first time you
log in. Tabs, widgets, categories—you get the sense Microsoft
wanted to pack everything neatly, but somehow it still turns into
a scroll marathon. So where do you even start if your goal is to
look at audit logs? The path isn’t obvious, and that’s why most
people hesitate the first time they land here. Don’t worry—once
you know the entry point, it actually makes sense. The place you
want to go is the Microsoft Purview compliance portal. You can
get there by heading to the URL compliance.microsoft.com and
signing in with the right level of admin privileges. If you
already have a bookmark to the Microsoft 365 admin center, don’t
confuse that for the same thing. The audit experience lives
specifically in the Purview compliance portal, not the core admin
center. That’s where Microsoft puts the compliance-focused tools
like eDiscovery, Insider Risk Management, and of course, Audit.
Here’s where most new admins trip up. You log in, you see this
long menu of solutions—Communication Compliance, Content Search,
Information Protection, Encryption, and on and on. You scroll
down, scanning through more than a dozen items, and wonder if
Audit even exists in your tenant. The answer is yes, it does. But
the menu uses broad grouping, so the “Audit” link is tucked right
under “Solutions.” You click there, and only then do you feel
like you’ve found the starting line. Picture opening this portal
for the first time. You’re scrolling past retention policies,
classification tabs, insider alerts, and endpoint data loss
prevention. It feels endless. Finally, Audit sneaks into view,
usually further down than you expect. That moment of “oh, there
it is” happens to almost everyone. And then another question pops
up: is audit actually running in the background right now? That’s
not always obvious either. By default, Microsoft enables Standard
audit logging for most tenants. What that means is user and admin
actions across your core services are likely being logged
already. But “likely” isn’t enough for compliance, and it’s
definitely not enough for peace of mind. The first thing you
should always do is confirm the setting. In the Audit homepage,
if audit logging isn’t on, you’ll see a clear option to enable
it. Click that, confirm the prompt, and from that point forward
everything across the core workloads starts landing in your logs.
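If you'd rather confirm and flip that switch from a script instead of the portal, a minimal sketch with Exchange Online PowerShell looks like this (the account shown is a placeholder, and the ExchangeOnlineManagement module has to be installed first):

```powershell
# A minimal sketch, assuming the ExchangeOnlineManagement module is installed
# and the signed-in account holds a role that can manage audit configuration.
Connect-ExchangeOnline -UserPrincipalName admin@contoso.com   # placeholder UPN

# Check whether unified audit log ingestion is currently enabled.
Get-AdminAuditLogConfig | Select-Object UnifiedAuditLogIngestionEnabled

# If it reports False, turn it on (the same thing the portal prompt does).
Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true
```
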
If it’s already on, you’ll see a confirmation banner letting you
know it’s active. Once that groundwork is settled, you can
finally run an actual search. This is where the tool starts to
show its value. At the top of the audit page, there’s an option
for a new search. Here you can filter based on user accounts,
specific activities, or date ranges. For example, maybe you want
to check whether a certain employee accessed files in SharePoint
over the last week. You enter their username, select the
activities you want to trace—like “File Accessed” or “File
Deleted”—and then set the timeframe. The system then queries the
logs and presents you with matching results. Every record comes
with the timestamp, the service involved, and often the IP
address or device associated with the action. Running that first
query feels like the hurdle is finally cleared. You move from
staring at an empty dashboard to seeing actual data that tells
you what happened in your environment. That’s when the tool
starts to feel useful instead of confusing. And investigators or compliance staff quickly realize it's not difficult to build targeted searches after they've seen the process once or twice.
Another feature here that gets overlooked is exporting. You’re
not limited to reviewing the data inside the Compliance portal.
Say your security team wants to line up activity with data from a
firewall appliance, or your compliance officer wants to build
charts for an internal review. You can select export to CSV
directly in the search results, hand that file off, and they can
run their own analysis. For organizations that need visualizations, the data can also be pulled into Power BI, giving
you filters and dashboards across departments. That’s a major
plus when audit needs to be shared beyond one technical team.
Once you’ve crossed that initial learning curve—finding Audit in
the portal, confirming logging is active, and running those first
queries—the tool feels much less intimidating. Search starts to
become second nature. You stop worrying about whether data is
captured, and instead focus on the insights hidden in the
records. Of course, this is just scratching the surface. Being
able to type queries and export results is one level of use, but
what happens when you need more? That’s when the question shifts
from portal clicks to integration. Because if you truly want to
catch threats or correlate behavior, you need those logs feeding
into bigger security workflows, not just sitting in a CSV file.
What If You Want to Go Further?
Running searches in the portal is nice, but what happens when you
need automation? Scrolling through logs on demand works for a
quick check, but no security team can realistically sit in the
portal each morning and run through 20 different filters. The
volume of activity in Microsoft 365 environments is massive, and
by the time someone notices something odd in a manual export,
it’s probably too late. Taking a CSV to Excel every time you want
insight gets old quickly, and more importantly, it creates lag.
If an attacker is already exfiltrating sensitive data, that
week-long lag between activity and discovery is exactly the
window they need. That’s why automation has to be part of the
picture. The audit data is only worth something if you can make
use of it in real time or on a repeatable schedule. This is where
PowerShell becomes a powerful extension of the Purview Audit
feature. Instead of relying on the portal alone, admins can
schedule scripts that query logs at set intervals and apply
advanced filters on the fly. With PowerShell, you can query by
user, IP address, activity type, or even combinations of those.
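As a rough sketch, assuming you've already connected with Connect-ExchangeOnline and that the user, operations, and thresholds below are placeholders you'd swap for your own:

```powershell
# A rough sketch of a daily audit pull, assuming Connect-ExchangeOnline has
# already been run. The user, operations, and thresholds are placeholders.
$results = Search-UnifiedAuditLog `
    -StartDate (Get-Date).AddDays(-1) `
    -EndDate   (Get-Date) `
    -UserIds   "jlee@contoso.com" `
    -Operations FileAccessed, FileDownloaded `
    -ResultSize 5000

# Flag anything logged outside business hours as a simple first filter.
$offHours = $results | Where-Object {
    $_.CreationDate.Hour -lt 6 -or $_.CreationDate.Hour -gt 20
}

# Hand the raw results to analysts or another tool as CSV.
$results | Export-Csv -Path .\DailyAuditPull.csv -NoTypeInformation
```
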
That lets you design audit pulls that map directly to what’s
relevant for your environment. For example, you might care less
about every Teams reaction and more about nonstop file downloads
in OneDrive. Building that logic into a scheduled job means the
question gets answered daily without anyone having to hit
“export.” Let’s put this into a scenario. Say you want to monitor
for unusual logins—accounts signing in outside business hours, or
connections coming from regions where your company doesn’t even
operate. With PowerShell you can create a script to query login
logs based on timestamps and geolocation, and automatically flag
results outside your expected ranges. Suddenly, the idea that
you’d only know about those odd logins a week later from an
analyst’s CSV disappears. You’ve got a repeatable detection
system feeding you results right away. Another example: if
someone tries to download hundreds of files in a short burst,
your script can be written to catch that behavior. Those are the
kinds of patterns that, if left unchecked, often indicate insider
threats or compromised accounts. Automating the search closes
that gap. But PowerShell is just one part. The other leap comes
when you integrate Microsoft Purview Audit data directly into
Sentinel, Microsoft’s SIEM and SOAR offering. Sentinel is where
security operations centers live day-to-day, watching dashboards,
running detections, and responding to alerts. If Purview sits
isolated as a compliance-only tool, audit insights aren’t helping
that SOC workflow. But once logs are funneled into Sentinel, they
stop being just historical evidence and start driving live
monitoring. You can create custom analytics rules that trigger
alerts when audit data matches suspicious behavior. Imagine near
real-time notifications for mass mailbox exports or repeated
SharePoint sharing to external domains—that context goes from
hidden in an export to front and center in your SOC screen.
Leaving audit isolated creates risk because it keeps valuable
data siloed. Compliance officers might be happy the logs exist,
but security teams lose the opportunity to act on them in the
moment. If an attacker is working slowly and carefully to avoid
detection, those siloed logs might catch the activity weeks later
during a compliance review. By then, the damage is long done.
Integrating audit into broader security workflows collapses that
timeline—you move from reactive reporting to proactive defense.
This is also why many enterprises don’t stop at just Sentinel.
They start weaving Purview Audit into other layers of Microsoft’s
security stack. For example, tying signals into Identity
Protection, so unusual audit activity combines with risk-based
conditional access policies. Or blending with Insider Risk
Management to surface subtler concerns, like employees
exfiltrating data before leaving the company. Data Loss
Prevention can even layer those insights further, correlating
what users are doing in logs with what files or items are
sensitive in the first place. The real strength arrives when
auditing isn’t sitting alone but feeding into a web of connected
defenses. When you reach that stage, the role of Purview Audit
transforms. It stops being simply a way to prove compliance
during a regulator’s audit. It becomes part of your everyday
detection engine and part of the reason your SOC spots unusual
behavior before it spirals into a breach. Instead of combing
through spreadsheets for answers after the fact, you position
audit data as an active layer of defense. It’s evidence when
questions come later, but more importantly, it’s intelligence you
can use right now. That brings us to the big picture. Having the
technology set up correctly matters, but if you want auditing to
serve its purpose, you need to think well beyond the mechanics of
settings, scripts, and exports.
Shaping Your Organization’s Strategy
It’s easy to treat auditing as a checkbox, but what if it shaped
your security culture instead of sitting quietly in the
background? Most organizations think of logs as something you
keep because compliance requires it, not because it can actively
change how the business operates. The truth is, the way you
approach auditing has a direct impact on whether it becomes a
living part of your security posture or just another archive
gathering dust. When Purview Audit is used strategically, it
stops being a tool you pull out during regulator check-ins and
becomes a system that guides your everyday understanding of
what’s normal versus what’s not. The first mindset shift is
realizing that logs by themselves don’t solve anything. Having
them switched on is the floor, not the ceiling. What matters is
how that data is used. If you never look for patterns, never test
what “normal” in your tenant feels like, then the logs collect
for months without producing real value. Reactive use of
auditing—waiting until an incident happens to start reading
through records—misses the point. Strategy means layering in
baselines from the start, understanding user rhythms, and
learning what expected activity looks like before a problem
arrives. This is where a lot of firms stumble. They enable
auditing once, assume that’s the win, and forget that the data is
useless without context. Let’s say your team logs a million
actions per week. On paper, that sounds impressive. But unless
you’ve established what counts as standard behavior for those
actions, spikes or gaps go unnoticed. An intruder who wants to
blend in doesn’t want to stand out—they want to look like
everyone else. If you never defined what “everyone else” looks
like, then camouflage works. That’s the tension: clear signals
exist in the logs, but no one notices them because there’s no
frame of reference. Baselining regular activity is one of the
simplest yet most powerful things you can do with Purview Audit.
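Here's a hedged sketch of what that can look like in practice, assuming an existing Exchange Online PowerShell session and using a couple of illustrative operations:

```powershell
# A hedged sketch of a weekly baseline pull; the operations are examples only
# and an Exchange Online PowerShell session is assumed to already exist.
$weekly = Search-UnifiedAuditLog `
    -StartDate (Get-Date).AddDays(-7) `
    -EndDate   (Get-Date) `
    -Operations SharingSet, FileUploaded `
    -ResultSize 5000

# Count events per day; chart these numbers week over week to form a baseline.
$weekly |
    Group-Object { $_.CreationDate.Date } |
    Sort-Object Name |
    Select-Object Name, Count
```
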
It’s not glamorous—sometimes it’s running the same queries week
by week and plotting them so you see patterns. But over time, a
picture forms of your organization’s digital heartbeat. How often
files get accessed, when Teams chats spike, when SharePoint usage
drops for weekends or holidays. Once you know these patterns,
deviations jump off the page. That’s how the system evolves from
endless records into insight that feels alive. Take Teams file
shares. If on average your organization shares 600 files a week
and suddenly that number doubles in two days, you don’t
immediately jump to “breach.” It could be a large project
deadline or a new department adopting Teams more actively. But
now you have a reason to check, because you noticed the spike in
the first place. Without that baseline, it would sit buried in
totals until someone stumbled across it. With the baseline, you
frame a question: is this legitimate growth, or an intruder
offloading data under the cover of normal traffic? The challenge
is that data volume grows quickly in any modern tenant. Without
strategy, logs shift from valuable signals to noisy chatter. You
can’t notice meaningful patterns if they’re buried under
thousands of inconsequential entries. That’s why strategy has to
go deeper than just turning on auditing—it’s about organizational
structure. Different roles need different lenses. Compliance
officers benefit from summaries that demonstrate who accessed
what, grouped into reports they can hand to oversight committees.
Security teams, by contrast, hunt for anomalies, spikes, and
correlations that point to risk. IT admins focus on proving who
performed high-impact changes, like mailbox exports or new
privilege assignments. Trying to dump the exact same audit data
onto each of these groups won’t work. Role-based reporting
ensures everyone consumes what matters to them. Breaking down
responsibilities this way addresses two issues: people don’t feel
overwhelmed by irrelevant noise, and the signal-to-noise ratio
improves for every team. Instead of everyone ignoring the logs
because they’re unreadable, each group sees the parts of the
audit system that align with their job. That ensures logs get
checked regularly, not only when forced by external pressure. The
payoff is that auditing shifts from a reactive fallback to a
proactive monitor. It becomes a living system inside your tenant,
an indicator of health and an early-warning system. You stop
framing logs as a burden and start framing them as
visibility—evidence of everything your cloud is doing and capable
of flagging when something doesn’t match expectations. Purview
Audit, with strategy wrapped around it, is more than storage for
records. It’s the pulse you check to make sure your digital
environment is safe and accountable. At this point, the next step
is obvious: you can’t wait until trouble surfaces to decide if
your audit approach is working. You need to act intentionally
today, or those unseen risks will keep piling up, hidden behind
the comfort of “at least the logs are turned on.”
Conclusion
Auditing isn’t a future nice-to-have—it’s the barrier keeping
your operations controlled instead of running on blind trust.
Without it, you’re left hoping your environment is safe rather
than knowing it. That distinction matters more each day as data
spreads across services, devices, and users you only partially
manage. So here’s the challenge: sign in to your Purview portal
today. Don’t assume logging is enough. Check whether your audit
setup is intentional or accidental, and ask if the data you’d
need tomorrow is truly there. Because the real risk isn’t what
you see—it’s what’s quietly happening when you’re not looking.
16.08.2025
22 Minuten
If you've ever wondered why your data suddenly disappears from a
report, or who exactly changed the source file feeding your
monthly dashboard, you're not alone. Most teams are flying blind
when it comes to seeing the full journey of their data. Today,
we're going to trace that journey inside Microsoft Fabric — from
ingestion, through transformation, into analytics — and uncover
how lineage, permissions, and the catalog work together to keep
you in control. By the end, you'll see every hop your data makes,
and exactly who can touch it.
Seeing the Invisible: The Path Data Actually Takes
Most people picture data traveling like a straight road: it
leaves the source, passes through a few hands, and ends up neatly
in a report. In reality, it’s closer to navigating an old
building that’s been renovated a dozen times. You’ve got hallways
that suddenly lead to locked doors, side passages you didn’t even
know existed, and shortcuts that bypass major rooms entirely.
That’s the challenge inside any modern analytics platform—your
data’s path isn’t just a single pipeline, it’s a web of steps,
connections, and transformations. Microsoft Fabric’s Lakehouse
model gives the impression of a single, unified home for your
data. And it is unified—but under the hood, it’s a mix of
specialized services working together. There’s a storage layer,
an analytics layer, orchestration tools, and processing engines.
They talk to each other constantly, passing data back and forth.
Without the right tools to record those interactions, what you
actually have is a maze with no map. You might know how records
entered the system and which report they eventually landed in,
but the middle remains a black box. When that black box gets in
the way, it’s usually during troubleshooting. Maybe a number is
wrong in last month’s sales report. You check the report logic,
it looks fine. The dataset it’s built on seems fine too. But
somewhere upstream, a transformation changed the values, and no
one documented it. That invisible hop—where the number stopped
being accurate—becomes the needle in the haystack. And the longer
a platform has been in use, the more invisible hops it tends to
collect. This is where Fabric’s approach to lineage takes the
maze and lays down a breadcrumb trail. Take a simple example:
data comes in through Data Factory. The moment the pipeline runs,
lineage capture starts—without you having to configure anything
special. Fabric logs not just the target table in the Lakehouse
but also every source dataset, transformation step, and
subsequent table or view created from it. It doesn’t matter if
those downstream objects live in the same workspace or feed into
another Fabric service—those links get recorded automatically in
the background. In practice, that means if you open the lineage
view for a dataset, you’re not just seeing what it feeds—you’re
seeing everything feeding it, all the way back to the ingestion
point. It’s like tracking a shipment and seeing its path from the
supplier’s warehouse, through every distribution center, truck,
and sorting facility, instead of just getting a “delivered”
notification. You get visibility over the entire chain, not just
the start and finish. Now, there’s a big difference between
choosing to document lineage and having the system do it for you.
With user-driven documentation, it’s only as current as the last
time someone updated it—assuming they remembered to update it at
all. With Fabric, this happens as a side effect of using the
platform. The metadata is generated as you create, move, and
transform data, so it’s both current and accurate. This reduces
the human factor almost entirely, which is the only way lineage
maps ever stay trustworthy in a large, active environment. It’s
worth noting that what Fabric stores isn’t just a static diagram.
That automatically generated metadata becomes the basis for other
controls—controls that don’t just visualize the flow but actually
enforce governance. It’s the foundation for connecting technical
lineage to permissions, audit trails, and compliance cataloging.
When you hear “metadata,” it can sound like passive information,
but here it’s the scaffolding that other rules are built on. And
once you have that scaffolding in place, permissions stop being
static access lists. They can reflect the actual relationships
between datasets, reports, and workspaces. Which means you’re not
granting access in isolation anymore—you’re granting it with the
full context of where that data came from and where it’s going.
That’s where lineage stops being just an operational tool for
troubleshooting and becomes a strategic tool for governance.
Because once you can see the full path every dataset takes, you
can make sure control over it travels just as consistently. And
that’s exactly where permission inheritance steps in.
One Permission, Everywhere It Matters
Imagine giving someone permission to open a finished, polished
report — only to find out they can now see the raw, unfiltered
data behind it. It’s more common than you’d think. The intent is
harmless: you want them to view the insights. But if the
permissions aren’t aligned across every stage, you’ve just handed
over access to things you never meant to share. In the Lakehouse,
Microsoft Fabric tries to solve this with permission inheritance.
Instead of treating ingestion, storage, and analytics as isolated
islands, it treats them like rooms inside the same building. If
someone has a key to enter one room, and that room directly feeds
into the next, they don’t need a separate key — the access
decision flows naturally from the first. The model works by using
your workspaces as the control point. Everything in that
workspace — whether it’s a table in the Lakehouse, a semantic
model in Power BI, or a pipeline in Data Factory — draws from the
same set of permissions unless you override them on purpose. In a
more siloed environment, permissions are often mapped at each
stage by different tools or even different teams: one team
manages database roles, another manages storage ACLs, another
handles report permissions. Over time, those separate lists drift
apart. You lock something down in one place but forget to match
it in another, or you remove a user from one system but they
still have credentials cached in another. That’s how security
drift creeps in — what was once a consistent policy slowly turns
into a patchwork. Let’s make this concrete. Picture a Lakehouse
table holding sales transactions. It’s secured so that only the
finance team can view it. Now imagine you build a Power BI
dataset that pulls directly from that table, and then a dashboard
on top of that dataset. In a traditional setup, you’d need to
manually ensure that the Power BI dataset carries the same
restrictions as the Lakehouse table. Miss something, and a user
with only dashboard access could still query the source table and
see sensitive details. In Fabric, if both the Lakehouse and the
Power BI workspace live under the same workspace structure, the
permissions cascade automatically. That finance-only table is
still finance-only when it’s viewed through Power BI. You don’t
touch a single extra setting to make that happen. Fabric already
knows that the dataset’s upstream source is a restricted table,
so it doesn’t hand out access to the dataset without verifying
the upstream rules. The mechanics are straightforward but
powerful. Because workspaces are the organizing unit, and
everything inside follows the same security model, there’s no
need to replicate ACLs or keep separate identity lists in sync.
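As a small, hedged illustration, the MicrosoftPowerBIMgmt module (which also manages Fabric workspaces) lets you grant and revoke that access once, at the workspace level; the workspace ID and address below are placeholders:

```powershell
# A hedged sketch using the MicrosoftPowerBIMgmt module, which also covers
# Fabric workspaces. The workspace ID and email address are placeholders.
Connect-PowerBIServiceAccount

$workspaceId = "00000000-0000-0000-0000-000000000000"   # hypothetical workspace

# One assignment at the workspace level covers every item inside it.
Add-PowerBIWorkspaceUser -Id $workspaceId `
    -UserEmailAddress "analyst@contoso.com" -AccessRight Viewer

# Revoking it later removes access to all of those assets in one step.
Remove-PowerBIWorkspaceUser -Id $workspaceId -UserEmailAddress "analyst@contoso.com"
```
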
If you remove someone from the workspace, they’re removed
everywhere that workspace’s assets appear. The administrative
load drops sharply, but more importantly, the chances of
accidental access go down with it. This is where the contrast
with old methods becomes clear. In a classic warehouse + BI tool
setup, you might have a database role in SQL Server, a folder
permission in a file share, and a dataset permission in your
reporting tool — all for the same logical data flow. Managing
those in parallel means triple the work and triple the
opportunity to miss a step. Even with automation scripts, that’s
still extra moving parts to maintain. The “one permission, many
surfaces” approach means that a change at the source isn’t just
reflected — it’s enforced everywhere downstream. If the Lakehouse
table is locked, no derived dataset or visual bypasses that lock.
For governance, that’s not a nice-to-have — it’s the control that
stops data from leaking when reports are shared more widely than
planned. It aligns your security model with your actual data
flow, instead of leaving them as two separate conversations. When
you combine this with the lineage mapping we just talked about,
those permissions aren’t operating in a void. They’re linked,
visually and technically, to the exact paths your data takes.
That makes it possible to see not just who has access, but how
that access might propagate through connected datasets,
transformations, and reports. And it’s one thing to enforce a
policy — it’s another to be able to prove it, step by step,
across your entire pipeline. Of course, having aligned
permissions is great, but if something goes wrong, you’ll want to
know exactly who made changes and when. That’s where the audit
trail becomes just as critical as the permission model itself.
A Single Source of Truth for What Happened and When
Ever try to figure out who broke a dashboard — and end up stuck
in a reply-all thread that keeps growing while no one actually
answers the question? You bounce between the data team, the BI
team, and sometimes even the storage admins, piecing together
guesses. Meanwhile, the person who actually made the change is
probably wondering why the metrics look “different” today. This
is the part of analytics work where the technical problem turns
into a game of office politics. Audit logs are Fabric’s way of
taking that noise out of the equation. They act like a black box
recorder for your entire Lakehouse environment. Every significant
action is captured: who did it, what they touched, and when it
happened. It’s not just a generic access log—Fabric ties these
entries directly to specific objects in the platform. So if a
dataset’s schema changes, you can see the exact user account that
made it, along with a timestamp and the method they used. Here’s
where the connection to lineage makes a difference. If all you
had was a folder of log files, you’d still end up manually
cross-referencing IDs and timestamps to figure out the impact.
But because Fabric already maps the data flow, those logs don’t
live in isolation. You can view a dataset’s lineage, click on a
node, and see precisely which actions were run against it. That
means you can trace a broken metric right back to the
transformation job it came from — and identify the person or
process that ran it. The coverage is broad, too. Fabric’s audit
layer records access events, so you know when someone queried a
table or opened a report. It logs creation and deletion of
datasets, pipelines, and tables. Modifications get a record
whether they’re structural, like changing a column type, or
procedural, like editing a pipeline activity. Even publishing a
new version of a Power BI report counts as an event, tied back to
its lineage. All of it gets the same treatment: time, user, and
object ID, stored in a consistent format. This uniformity is what
turns the logs into something usable for compliance. Regulatory
audits don’t care about your internal tooling—they care that you
can prove exactly who accessed sensitive data, under what
authorizations, and what they did with it. Fabric’s audit trail
can be queried to produce that history across ingestion,
transformation, and output. If an HR dataset is classified as
containing personal information, you can show not only the access
list but every interaction that dataset had, right down to report
exports. Incident investigations work the same way. Say a number
in a quarterly report doesn’t match the finance system. Instead
of speculating, you go to the dataset feeding that report, pull
its audit history, and see that two weeks ago a transformation
step was added to a notebook. The person who committed that
change is there in the log. You can verify if it was intentional,
test the outcome, and fix the issue without having to untangle
chains of hearsay. One of the underappreciated parts here is how
it integrates with Purview. While Fabric keeps the logs, Purview
can pull them in alongside the catalog and lineage data from
across the organization. That means the audit for a dataset in
one workspace can be looked at next to its related objects in
other workspaces and even non-Fabric data sources. For large
organizations, this stops investigations at the borders between
teams. Everything’s indexed in a single, searchable layer. When
you link logs and lineage like this, you get more than a record
of events—you get a timeline of your data’s actual life. You can
follow the route from source to report, while also seeing who
stepped in at each point. It’s a complete view that connects
human actions to data flows. That’s what saves you from chasing
people down in email threads or making decisions based on
guesswork. And beyond solving technical problems, this level of
visibility takes the politics out of post-mortems. You’re not
relying on memory or conflicting descriptions — you’ve got a
single, objective record. No matter how complex the pipeline or
how many teams touched it, you can back every claim with the same
source of truth. And once that visibility is in place, the
obvious next step is to scale it out, so that same clarity exists
across every dataset and every team in the organization. That’s
where the catalog comes in.
Purview: The Map Room for Your Data Universe
Knowing the lineage inside one workspace is useful — but it’s
also like knowing the street map of your own neighborhood without
ever seeing the city plan. You can navigate locally, but if the
delivery truck gets lost two suburbs over, you have no idea why
it’s late. That’s the gap between workspace-level insight and an
enterprise-wide view. And that’s exactly where Microsoft Purview
steps in. Purview sits above Fabric, acting like an index for
everything the platform knows about your data’s structure,
movement, and classification. Instead of digging into each
workspace separately, you get a single catalog that brings
lineage, definitions, and access rules into one place. It
aggregates metadata from multiple Fabric environments — and from
outside sources too — so your view isn’t limited by team or
project boundaries. The problem it solves is straightforward but
critical. Without a central catalog, each team’s view of lineage
ends at their own assets. The BI group might know exactly how
their dashboards are built from their datasets. The data
engineering team might know how those datasets were sourced and
transformed from raw data. But unless they’re trading notes
constantly, the full picture never exists in one system.
Troubleshooting, compliance checks, and data discovery all slow
down because you have to stitch fragments together manually. In
Purview’s catalog, lineage from ingestion to analytics is mapped
across every Fabric workspace it’s connected to. Imagine opening
a dataset’s page and not only seeing its lineage inside its
current workspace, but also the ingestion pipeline in another
workspace that feeds it, and the curated table two more steps
upstream. That’s not a separate diagram you have to maintain —
it’s read directly from Fabric’s metadata and preserved in the
catalog. From there, anyone with the right access can navigate it
like a continuous chain, no matter which logical or
organizational boundaries it crosses. One of the most tangible
benefits is search. Purview isn’t just indexing object names; it
understands classifications and sensitivity labels. If your
compliance officer wants to know where all data containing
“customer phone number” is stored or consumed, they can run a
query across the catalog and get every instance — in Lakehouse
tables, Power BI datasets, even Synapse artifacts. That search
works because Purview stores both the technical metadata and the
business metadata you’ve added, so “customer phone number” could
match a column in a Lakehouse table as well as a field in a
report’s data model. That connection to business glossaries is
where Purview goes beyond being a passive map. If you’ve defined
common business terms, you can link them directly to datasets or
columns in the catalog. It means that “Net Revenue” isn’t just a
label in a report — it’s tied to the actual data source,
transformation logic, and every report that uses it. For
governance, this reduces ambiguity. Different teams aren’t
debating definitions in chat threads; they’re all pointing to the
same glossary entry, which links back to the exact data objects
in Fabric. Integration with technical assets is broad and
consistent. Purview understands Power BI datasets, including
their table and column structures. It knows Lakehouse tables and
the pipelines feeding them. It registers Synapse notebooks, SQL
scripts, and dataflow artifacts. And for each asset, it keeps
track of lineage relationships and classifications. This makes it
just as easy to trace the origin of a KPI in a Power BI report as
it is to audit a transformation notebook’s impact on multiple
downstream tables. Centralizing all of this breaks down silos in
a practical way. With no single catalog, the security team might
only see logs and permissions for their own systems, while the
analytics team works in total isolation on reporting models.
Purview creates overlap — the catalog becomes the single
reference point for technical teams, analysts, and compliance
officers alike. It means a governance policy written at the
organizational level can be checked against real data flows,
instead of relying on assumptions or self-reported documentation.
And that’s the point where technical reality meets compliance
reporting. You’re not just drawing maps to satisfy curiosity.
You’re connecting verified lineage to actual usage,
classifications, and security rules in a way that can stand up to
audits or investigations. Whether the question is “Where is this
sensitive field stored?” or “Which reports depend on this table
we’re changing?”, the answer is in the catalog — complete,
current, and verifiable. With that kind of organization-wide
visibility in place, you can finally see how every piece of the
pipeline connects. Which raises the next challenge: ensuring that
transparency isn’t lost once the data starts changing inside
transformations.
Keeping Transparency Through Every Transformation
Every time data goes through a transformation, you’re removing or
reshaping something. Maybe it’s a simple column rename, maybe a
full aggregation — but either way, the original form changes. If
the system isn’t capturing that moment, you’re left with a number
you can’t properly account for. It still looks valid in a report,
but ask how it was calculated and you’ll find yourself digging
through scripts, emails, and memory to reconstruct what happened.
Inside Microsoft Fabric, this is where the Synapse transformation
layer earns its keep. Whether you’re working in SQL scripts,
Spark notebooks, or Dataflows, each step that changes the data
keeps its connection back to the original source. The Lakehouse
doesn’t just store the output table — it also knows exactly which
datasets or tables fed into it, and how they link together. Those
links become part of the lineage graph, so you can navigate both
the “before” and the “after” without guessing or relying on
separate documentation. The risk without transformation-level
lineage is pretty straightforward. You start trusting aggregates
or calculated fields that may be outdated, incomplete, or based
on incorrect joins. You can double-check the final query if you
have it, but that tells you nothing about upstream filters or
derived columns created three models earlier. This is how
well-meaning teams can ship KPIs that contradict each other —
each one consistent within its own context, but not rooted in the
same underlying data path. Here’s a simple scenario. You’ve got a
transaction table logging individual sales: date, product,
region, amount. The business asks for weekly sales totals by
region. In a notebook, you group by week and sum the amounts,
creating an aggregated table. In most systems, the link back to
the base table isn’t tracked beyond the notebook script itself.
In Fabric, that weekly sales table still appears in the lineage
graph with a live connection to the source transaction table.
When you click that node, you see where it came from, which
transformation objects touched it, and where it’s used downstream
in reports. That connection doesn’t fade after the job completes
— it’s part of the metadata until you delete the asset. On the
graph, each transformation appears as its own node: a Dataflow, a
Notebook, a SQL script. You can see both the incoming edges — the
datasets it consumes — and the outgoing edges — the tables,
views, or datasets it produces. This makes it obvious when
multiple outputs come from the same transformation. For example,
a cleansing script might produce a curated table for analytics
and a separate feed for machine learning. The lineage view shows
those two paths branching from the same point, so any changes to
that transformation are visible to the owners of both outputs.
What’s useful is that this scope isn’t limited to one type of
tool. A Dataflow transforming a CSV has the same kind of upstream
and downstream tracking as a Spark notebook joining two Lakehouse
tables. That consistency is possible because Fabric’s internal
service mesh treats these tools as peers, passing metadata the
same way it passes the actual data. The fact you built something
in SQL and your colleague built theirs in a visual Dataflow
doesn’t mean you need two different ways to see the lineage. This
automatic, tool-agnostic mapping turns an abstract governance
goal into something you can actually act on. Quality assurance
teams can audit an entire calculation path, not just the last
step. Compliance officers can prove that a sensitive field was
removed at a specific transformation stage and never
reintroduced. Analysts can check if two KPIs share a common base
table before deciding whether they truly compare like-for-like.
It’s not about policing work — it’s about trusting outputs
because you can see and verify every step that shaped them. In a
BI environment, trust is fragile. One unexplained spike or
mismatch erodes confidence quickly. When you’ve got
transformation-level lineage baked in, you can answer “Where did
this number come from?” with more than a shrug. You can click
your way from the report through each transformation, all the way
back to the original record. And when that degree of traceability
is combined with governance controls, permissions, and catalogs,
the result isn’t just visibility — it’s an entire data estate
where every decision and every metric can be backed by proof.
That’s what ties all of these capabilities together into
something more than the sum of their parts.
Conclusion
In Fabric, lineage, permissions, logging, and cataloging aren’t
extra features you bolt on later — they hold the Lakehouse
together. They work in the background, connecting every source,
transformation, and report with rules and proof you can actually
rely on. The clearer you see your data’s actual journey, the more
confidently you can use it without creating risk. That’s the
difference between trusting a number because it “looks right” and
trusting it because you’ve verified every step. Tomorrow, pick
one of your data flows. Trace it start to finish. See what’s
recorded — and what that visibility could save you.
16.08.2025
22 Minuten
If you think Copilot only shows what you’ve already got
permission to see—think again. One wrong Graph permission and
suddenly your AI can surface data your compliance team never
signed off on. The scary part? You might never even realize it’s
happening. In this video, I’ll break down the real risks of
unmanaged Copilot access—how sensitive files, financial
spreadsheets, and confidential client data can slip through. Then
I’ll show you how to lock it down using Graph permissions, DLP
policies, and Purview—without breaking productivity for the
people who actually need access.
When Copilot Knows Too Much
A junior staffer asks Copilot for notes from last quarter’s
project review, and what comes back isn’t a tidy summary of their
own meeting—it’s detailed minutes from a private board session.
Including strategy decisions, budget cuts, and names that should
never have reached that person’s inbox. No breach alerts went
off. No DLP warning. Just an AI quietly handing over a document
it should never have touched. This happens because Copilot doesn’t
magically stop at a user’s mailbox or OneDrive folder. Its reach
is dictated by the permissions it’s been granted through
Microsoft Graph. And Graph isn’t just a database—it’s the central
point of access to nearly every piece of content in Microsoft
365. SharePoint, Teams messages, calendar events, OneNote, CRM
data tied into the tenant—it all flows through Graph if the right
door is unlocked. That’s the part many admins miss. There’s a
common assumption that if I’m signed in as me, Copilot will only
see what I can see. Sounds reasonable. The problem is, Copilot
itself often runs with a separate set of application permissions.
If those permissions are broader than the signed-in user’s
rights, you end up with an AI assistant that can reach far more
than the human sitting at the keyboard. And in some deployments,
those elevated permissions are handed out without anyone
questioning why. Picture a financial analyst working on a
quarterly forecast. They ask Copilot for “current pipeline data
for top 20 accounts.” In their regular role, they should only see
figures for a subset of clients. But thanks to how Graph has been
scoped in Copilot’s app registration, the AI pulls the entire
sales pipeline report from a shared team site that the analyst
has never had access to directly. From an end-user perspective,
nothing looks suspicious. But from a security and compliance
standpoint, that’s sensitive exposure. Graph API permissions are
effectively the front door to your organization’s data. Microsoft
splits them into delegated permissions—acting on behalf of a
signed-in user—and application permissions, which allow an app to
operate independently. Copilot scenarios often require delegated
permissions for content retrieval, but certain features, like
summarizing a Teams meeting the user wasn’t in, can prompt admins
to approve application-level permissions. And that’s where the
danger creeps in. Application permissions ignore individual user
restrictions unless you deliberately scope them. These approvals
often happen early in a rollout. An IT admin testing Copilot in a
dev tenant might click “Accept” on a permission prompt just to
get through setup, then replicate that configuration in
production without reviewing the implications. Once in place,
those broad permissions remain unless someone actively audits
them. Over time, as new data sources connect into M365, Copilot’s
reach expands without any conscious decision. That’s silent
permission creep—no drama, no user complaints, just a gradual
widening of the AI’s scope. The challenge is that most security
teams aren’t fluent in which Copilot capabilities require what
level of Graph access. They might see “Read all files in
SharePoint” and assume it’s constrained by user context, not
realizing that the permission is tenant-wide at the application
level. Without mapping specific AI scenarios to the minimum
necessary permissions, you end up defaulting to whatever was
approved in that initial setup. And the broader those rights, the
bigger the potential gap between expected and actual
behavior. It’s also worth remembering that Copilot’s output
doesn’t come with a built-in “permissions trail” visible to the
user. If the AI retrieves content from a location the user would
normally be blocked from browsing, there’s no warning banner
saying “this is outside your clearance.” That lack of
transparency makes it easier for risky exposures to blend into
everyday workflows. The takeaway here is that Graph permissions
for AI deployments aren’t just another checkbox in the onboarding
process—they’re a design choice that shapes every interaction
Copilot will have on your network. Treat them like you would
firewall rules or VPN access scopes: deliberate, reviewed, and
periodically revalidated. Default settings might get you running
quickly, but they also assume you’re comfortable with the AI
casting a much wider net than the human behind it. Now that we’ve
seen how easily the scope can drift, the next question is how to
find those gaps before they turn into a full-blown incident.
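A practical starting point is to list what your Copilot-related app registrations have actually been granted. Here's a hedged sketch using the Microsoft Graph PowerShell SDK; the display name is a placeholder, and the app role IDs it returns still have to be mapped to permission names on the resource side:

```powershell
# A hedged sketch with the Microsoft Graph PowerShell SDK; the display name is
# a placeholder for whatever Copilot-related app registration your tenant uses.
Connect-MgGraph -Scopes "Application.Read.All", "DelegatedPermissionGrant.Read.All"

$sp = Get-MgServicePrincipal -Filter "displayName eq 'Contoso Copilot Extension'"

# Application permissions (app roles) assigned directly to the service principal.
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $sp.Id |
    Select-Object ResourceDisplayName, AppRoleId

# Delegated permissions granted on behalf of users.
Get-MgOauth2PermissionGrant -Filter "clientId eq '$($sp.Id)'" |
    Select-Object ConsentType, Scope
```
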
Finding Leaks Before They Spill
If Copilot was already surfacing data it shouldn’t, would you
even notice? For most organizations, the honest answer is no.
It’s not that the information would be posted on a public site or
blasted to a mailing list. The leak might show up quietly inside
a document draft, a summary, or an AI-generated answer—and unless
someone spots something unusual, it slips by without raising
alarms. The visibility problem starts with how most monitoring
systems are built. They’re tuned for traditional activities—file
downloads, unusual login locations, large email sends—not for the
way an AI retrieves and compiles information. Copilot doesn’t
“open” files in the usual sense. It queries data sources through
Microsoft Graph, compiles the results, and presents them as
natural language text. That means standard file access reports
can look clean, while the AI is still drawing from sensitive
locations in the background. I’ve seen situations where a company
only realized something was wrong because an employee casually
mentioned a client name that wasn’t in their department’s remit.
When the manager asked how they knew that, the answer was,
“Copilot included it in my draft.” There was no incident ticket,
no automated alert—just a random comment that led IT to
investigate. By the time they pieced it together, those same AI
responses had already been shared around several teams. Microsoft
365 gives you the tools to investigate these kinds of scenarios,
but you have to know where to look. Purview’s Audit feature can
record Copilot’s data access in detail—it’s just not labeled with
a big flashing “AI” badge. Once you’re in the audit log search,
you can filter by the specific operations Copilot uses, like
`SearchQueryPerformed` or `FileAccessed`, and narrow that down by
the application ID tied to your Copilot deployment. That takes a
bit of prep: you’ll want to confirm the app registration details
in Entra ID so you can identify the traffic. From there, it’s
about spotting patterns. If you see high-volume queries from
accounts that usually have low data needs—like an intern account
running ten complex searches in an hour—that’s worth checking.
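As a rough sketch of that kind of check (assuming an Exchange Online PowerShell session), a daily pull like the one below surfaces the accounts generating the most AI-related activity; you would still narrow the results to your Copilot app ID after inspecting a sample record:

```powershell
# A rough sketch, assuming an Exchange Online PowerShell session. The exact
# property that carries the client application ID varies by record type, so
# inspect a sample record before hard-coding a filter on it.
$records = Search-UnifiedAuditLog `
    -StartDate (Get-Date).AddDays(-1) -EndDate (Get-Date) `
    -Operations SearchQueryPerformed, FileAccessed `
    -ResultSize 5000

# Count activity per account to surface unusually busy low-privilege users.
$records |
    Group-Object UserIds |
    Sort-Object Count -Descending |
    Select-Object Name, Count -First 20

# The AuditData JSON holds the client application details you can match
# against the Copilot app ID recorded in Entra ID.
($records | Select-Object -First 1).AuditData | ConvertFrom-Json
```
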
Same with sudden spikes in content labeled “Confidential” showing
up in departments that normally don’t touch it. Purview can flag
label activity, so if a Copilot query pulls in a labeled
document, you’ll see it in the logs, even if the AI didn’t output
the full text. Role-based access reviews are another way to
connect the dots. By mapping which people actually use Copilot,
and cross-referencing with the kinds of data they interact with,
you can see potential mismatches early. Maybe Finance is using
Copilot heavily for reports, which makes sense—but why are there
multiple Marketing accounts hitting payroll spreadsheets through
AI queries? Those reviews give you a broader picture beyond
single events in the audit trail. The catch is that generic
monitoring dashboards won’t help much here. They aggregate every
M365 activity into broad categories, which can cause AI-specific
behavior to blend in with normal operations. Without creating
custom filters or reports focused on your Copilot app ID and
usage patterns, you’re basically sifting for specific grains of
sand in a whole beach’s worth of data. You need targeted
visibility, not just more visibility. It’s not about building a
surveillance culture; it’s about knowing, with certainty, what
your AI is actually pulling in. A proper logging approach answers
three critical questions: What did Copilot retrieve? Who
triggered it? And did that action align with your existing
security and compliance policies? Those answers let you address
issues with precision—whether that means adjusting a permission,
refining a DLP rule, or tightening role assignments. Without that
clarity, you’re left guessing, and guessing is not a security
strategy. So rather than waiting for another “casual comment”
moment to tip you off, it’s worth investing the time to structure
your monitoring so Copilot’s footprint is visible and traceable.
This way, any sign of data overexposure becomes a managed event,
not a surprise. Knowing where the leaks are is only the first
step. The real goal is making sure they can’t happen again—and
that’s where the right guardrails come in.
Guardrails That Actually Work
DLP isn’t just for catching emails with credit card numbers in
them. In the context of Copilot, it can be the tripwire that
stops sensitive data from slipping into an AI-generated answer
that gets pasted into a Teams chat or exported into a document
leaving your tenant. It’s still the same underlying tool in
Microsoft 365, but the way you configure it for AI scenarios
needs a different mindset. The gap is that most organizations’ DLP
policies are still written with old-school triggers in mind—email
attachments, file downloads to USB drives, copying data into
non‑approved apps. Copilot doesn’t trigger those rules by default
because it’s not “sending” files; it’s generating content on the
fly. If you ask Copilot for “the full list of customers marked
restricted” and it retrieves that from a labeled document, the
output can travel without ever tripping a traditional DLP
condition. That’s why AI prompts and responses need to be
explicitly brought into your DLP scope. One practical example: say
your policy forbids exporting certain contract documents outside
your secure environment. A user could ask Copilot to extract key
clauses and drop them into a PowerPoint. If your DLP rules don’t
monitor AI-generated content, that sensitive material now exists
in an unprotected file. By extending DLP inspection to cover
Copilot output, you can block that PowerPoint from being saved to
an unmanaged location or shared with an external guest in
Teams. Setting this up in Microsoft 365 isn’t complicated, but it
does require a deliberate process. First, in the Microsoft
Purview compliance portal, go to the Data Loss Prevention section
and create a new policy. When you choose the locations to apply
it to, include Exchange, SharePoint, OneDrive, and importantly,
Teams—because Copilot can surface data into any of those. Then,
define the conditions: you can target built‑in sensitive
information types like “Financial account number” or custom ones
that detect your internal project codes. If you use Sensitivity
Labels consistently, you can also set the condition to trigger
when labeled content appears in the final output of a file being
saved or shared. Finally, configure the actions—block the
sharing, show a policy tip to the user, or require justification
to proceed. Sensitivity labels themselves are a key part of making
this work. In the AI context, the label is metadata that Copilot
can read, just like any other M365 service. If a “Highly
Confidential” document has a label that restricts access and
usage, Copilot will respect those restrictions when generating
answers—provided that label’s protection settings are enforced
consistently across the apps involved. If the AI tries to use
content with a label outside its permitted scope, the DLP policy
linked to that label can either prevent the action or flag it for
review. Without that tie‑in, the label is just decoration from a
compliance standpoint. One of the most common misconfigurations I
run into is leaving DLP policies totally unaware of AI scenarios.
The rules exist, but there’s no link to Copilot output because
admins haven’t considered it a separate channel. That creates a
blind spot where sensitive terms in a generated answer aren’t
inspected, even though the same text in an email would have been
blocked. To fix that, you have to think of “AI‑assisted
workflows” as one of your DLP locations and monitor them along
with everything else. When DLP and sensitivity labels are properly
configured and aware of each other, Copilot can still be useful
without becoming a compliance headache. You can let it draft
reports, summarize documents, and sift through datasets—while
quietly enforcing the same boundaries you’d expect in an email or
Teams message. Users get the benefit of AI assistance, and the
guardrails keep high‑risk information from slipping out. The
advantage here isn’t just about preventing an accidental
overshare, it’s about allowing the technology to operate inside
clear rules. That way you aren’t resorting to blanket
restrictions that frustrate teams and kill adoption. You can tune
the controls so marketing can brainstorm with Copilot, finance
can run analysis, and HR can generate onboarding guides—each
within their own permitted zones. But controlling output is only
part of the puzzle. To fully reduce risk, you also have to decide
which people get access to which AI capabilities in the first
place.
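Before leaving DLP, here's a hedged sketch of what that portal walkthrough could look like in Security & Compliance PowerShell; the policy name, rule name, and sensitive information type are placeholders you would swap for your own labels and locations:

```powershell
# A hedged sketch in Security & Compliance PowerShell; names and the sensitive
# information type are placeholders, and the account shown is hypothetical.
Connect-IPPSSession -UserPrincipalName admin@contoso.com

New-DlpCompliancePolicy -Name "Copilot output guardrails" `
    -ExchangeLocation All -SharePointLocation All `
    -OneDriveLocation All -TeamsLocation All -Mode Enable

New-DlpComplianceRule -Name "Block financial data in shared output" `
    -Policy "Copilot output guardrails" `
    -ContentContainsSensitiveInformation @{Name = "Credit Card Number"} `
    -BlockAccess $true `
    -NotifyUser LastModifier `
    -GenerateIncidentReport SiteAdmin
```
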
One Size Doesn’t Fit All Access
Should a marketing intern and a CFO really have the same Copilot
privileges? The idea sounds absurd when you say it out loud, but
in plenty of tenants, that’s exactly how it’s set up. Copilot
gets switched on for everyone, with the same permissions, because
it’s quicker and easier than dealing with role-specific
configurations. The downside is that the AI’s access matches the
most open possible scenario, not the needs of each role. That’s
where role-based Copilot access groups come in. Instead of
treating every user as interchangeable, you align AI capabilities
to the information and workflows that specific roles actually
require. Marketing might need access to campaign assets and brand
guidelines, but not raw financial models. Finance needs those
models, but they don’t need early-stage product roadmaps. The
point isn’t to make Copilot less useful; it’s to keep its scope
relevant to each person’s job. The risks of universal enablement
are bigger than most teams expect. Copilot works by drawing on
the data your Microsoft 365 environment already holds. If all
staff have equal AI access, the technology can bridge silos
you’ve deliberately kept in place. That’s how you end up with HR
assistants stumbling into revenue breakdowns, or an operations
lead asking Copilot for “next year’s product release plan” and
getting design details that aren’t even finalized. None of it
feels like a breach in the moment—but the exposure is
real. Getting the access model right starts with mapping job
functions to data needs. Not just the applications people use,
but the depth and sensitivity of the data they touch day to day.
You might find that 70% of your sales team’s requests to Copilot
involve customer account histories, while less than 5% hit
high-sensitivity contract files. That suggests you can safely
keep most of their AI use within certain SharePoint libraries
while locking down the rest. Do that exercise across each
department, and patterns emerge. Once you know what each group
should have, Microsoft Entra ID—what many still call Azure
AD—becomes your enforcement tool. You create security groups that
correspond to your role definitions, then assign Copilot
permissions at the group level. That could mean enabling certain
Graph API scopes only for members of the “Finance-Copilot” group,
while the “Marketing-Copilot” group has a different set. Access
to sensitive sites, Teams channels, or specific OneDrive folders
can follow the same model. The strength of this approach is when
The real strength of this approach comes when it’s layered with the controls we’ve already covered. Graph permissions define the outer boundaries of what Copilot can technically reach. DLP policies monitor the AI’s output for sensitive content. Role-based groups sit in between, making sure the Graph permissions aren’t overly broad for lower-sensitivity roles, and that DLP doesn’t end up catching things you could have prevented in the first place by restricting input sources.
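Since those Graph permissions are the outer wall, it helps to be able to see exactly what has been consented to. Here’s a minimal sketch that lists delegated (OAuth2) permission grants in the tenant, assuming a Graph token with Directory.Read.All; application permissions granted as app role assignments would need a similar pass over /servicePrincipals:

```python
# Minimal sketch: list delegated OAuth2 permission grants so you can see
# which Graph scopes have actually been consented to in the tenant.
# Assumes a Graph token with Directory.Read.All; paging omitted for brevity.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

grants = requests.get(f"{GRAPH}/oauth2PermissionGrants", headers=headers).json()["value"]

for g in grants:
    # Resolve the client service principal's display name for readability.
    sp = requests.get(
        f"{GRAPH}/servicePrincipals/{g['clientId']}?$select=displayName",
        headers=headers,
    ).json()
    print(f"{sp.get('displayName', g['clientId'])}: {g['consentType']} -> {g['scope']}")
```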
But like any system, it can be taken too far. It’s tempting to create
a micro-group for every
scenario—“Finance-Analyst-CopilotWithReportingPermissions” or
“Marketing-Intern-NoTeamsAccess”—and end up with dozens of
variations. That level of granularity might look precise on
paper, but in a live environment it’s a maintenance headache.
Users change roles, projects shift, contractors come and go. If
the group model is too brittle, your IT staff will spend more
time fixing access issues than actually improving security.

The
real aim is balance. A handful of clear, well-defined role groups
will cover most use cases without creating administrative
gridlock. The CFO’s group needs wide analytical powers but tight
controls on output sharing. The intern group gets limited data
scope but enough capability to contribute to actual work.
Department leads get the middle ground, and IT retains the
ability to adjust when special projects require exceptions.
You’re not trying to lock everything down to the point of
frustration—you’re keeping each AI experience relevant, secure,
and aligned with policy.

When you get it right, the benefits show
up quickly. Users stop being surprised by the data Copilot serves
them, because it’s always something within their sphere of
responsibility. Compliance teams have fewer incidents to
investigate, because overexposures aren’t happening by accident.
And IT can finally move ahead with new Copilot features without
worrying that a global roll-out will quietly erode all the data
boundaries they’ve worked to build.

With access and guardrails
working together, you’ve significantly reduced your risk profile.
But even a well-designed model only matters if you can prove that
it’s working—both to yourself and to anyone who comes knocking
with an audit request.
Proving Compliance Without Slowing Down
Compliance isn’t just security theatre; it’s the evidence that
keeps the auditors happy. Policies and guardrails are great, but
if you can’t show exactly what happened with AI-assisted data,
you’re left making claims instead of proving them. An audit-ready
Copilot environment means that every interaction, from the user’s
query to the AI’s data retrieval, can be explained and backed up
with a verifiable trail.

The tricky part is that many companies
think they’re covered because they pass internal reviews. Those
reviews often check the existence of controls and a few sample
scenarios, but they don’t always demand the level of granularity
external auditors expect. When an outside assessor asks for a log
of all sensitive content Copilot accessed last quarter—along with
who requested it and why—it’s surprising how often gaps appear.
Either the logs are incomplete, or they omit AI-related events
entirely because they were never tagged that way in the first
place.

This is where Microsoft Purview can make a big difference. Its compliance capabilities aren’t just about applying labels and DLP policies; they also pull together the forensic evidence you need. In a Copilot context, Purview can record every relevant data access request, the identity behind it, and the source location. It can also correlate those events to data movement patterns—like sensitive files being referenced in drafts, summaries, or exports—without relying on the AI to self-report.
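If you want that evidence in your own hands and not only in the portal, one route is the Office 365 Management Activity API, which serves unified audit log content programmatically. Here’s a minimal sketch, assuming a subscription for the Audit.General content type has already been started and a token for manage.office.com is available; the exact record type for Copilot events is something to confirm against the current audit schema rather than trust this filter:

```python
# Minimal sketch: pull unified audit content blobs via the Office 365
# Management Activity API and keep records that look Copilot-related.
# Assumes the Audit.General subscription is already started; the filter
# below is a heuristic, so check the audit schema for the exact record type.
import os
import requests

TENANT = os.environ["TENANT_ID"]
BASE = f"https://manage.office.com/api/v1.0/{TENANT}/activity/feed"
headers = {"Authorization": f"Bearer {os.environ['MGMT_API_TOKEN']}"}

# List available content blobs for a (max 24-hour) time window.
blobs = requests.get(
    f"{BASE}/subscriptions/content",
    headers=headers,
    params={
        "contentType": "Audit.General",
        "startTime": "2024-06-01T00:00:00Z",
        "endTime": "2024-06-01T23:59:59Z",
    },
).json()

copilot_events = []
for blob in blobs:
    for record in requests.get(blob["contentUri"], headers=headers).json():
        if "copilot" in str(record.get("Operation", "")).lower():
            copilot_events.append(record)

print(f"Collected {len(copilot_events)} Copilot-related audit records")
```

In practice plenty of teams simply export from the audit search in the Purview portal instead; the point is that the trail exists either way and can be collected on a schedule.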
Purview’s compliance score is more than a vanity metric. It’s a snapshot of how your environment measures up against Microsoft’s recommended controls, including those that directly limit AI-related risks. Stronger Graph permission hygiene, tighter DLP configurations, and well-maintained role-based groups all feed into that score. And because the score updates as you make changes, you can see in near real time how improvements in AI governance increase your compliance standing.

Think about a regulatory exam where you have to justify
why certain customer data appeared in a Copilot-generated report.
Without structured logging, that conversation turns into
guesswork. With Purview properly configured, you can show the
access request in an audit log, point to the role and permissions
that authorized it, and demonstrate that the output stayed within
approved channels. That’s a much easier discussion than
scrambling to explain an undocumented event.

The key is to make
compliance reporting part of your normal IT governance cycle, not
just a special project before an audit. Automated reporting goes
a long way here. Purview can generate recurring reports on
information protection policy matches, DLP incidents, and
sensitivity label usage. When those reports are scheduled to drop
into your governance team’s workspace each month, you build a
baseline of AI activity that’s easy to review. Any anomaly stands
out against the historical pattern.

The time-saving features add
up. For instance, Purview ships with pre-built reports that
highlight all incidents involving labeled content, grouped by
location or activity type. If a Copilot session pulled a
“Confidential” document into an output and your DLP acted on it,
that incident already appears in a report without you building a
custom query from scratch. You can then drill into that record
for more details, but the heavy lifting of collection and
categorization is already done.

Another efficiency is the integration between Purview auditing and Microsoft 365’s role-based access data. Because Purview understands Entra ID groups, it can slice access logs by role type. That means you can quickly answer focused questions like, “Show me all instances where marketing roles accessed finance-labeled data through Copilot in the past 90 days.” That ability to filter down by both role and data classification is exactly what external reviewers are looking for.
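Here’s a rough sketch of answering that kind of question outside the portal as well, by joining Entra ID group membership from Microsoft Graph with audit records you’ve already exported. The group id comes from your own tenant, and the audit field names used in the filter are placeholders to adapt to whatever your export actually contains:

```python
# Minimal sketch: which "Marketing-Copilot" members touched finance-labeled
# content via Copilot? Joins Entra group membership (Microsoft Graph) with
# previously exported audit records. "UserId" and "SensitivityLabel" are
# placeholder field names -- map them to your real export.
import json
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

# 1. Pull the role group's members (paging via @odata.nextLink omitted).
group_id = os.environ["MARKETING_COPILOT_GROUP_ID"]
members = requests.get(
    f"{GRAPH}/groups/{group_id}/members?$select=userPrincipalName",
    headers=headers,
).json()["value"]
marketing_upns = {
    m["userPrincipalName"].lower() for m in members if "userPrincipalName" in m
}

# 2. Filter the exported audit records down to that role group.
with open("copilot_audit_records.json", encoding="utf-8") as f:
    records = json.load(f)

hits = [
    r for r in records
    if str(r.get("UserId", "")).lower() in marketing_upns
    and "finance" in str(r.get("SensitivityLabel", "")).lower()
]
print(f"{len(hits)} finance-labeled accesses by Marketing-Copilot members")
```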
When you think about it, compliance at this level isn’t a burden—it’s a guardrail that confirms your governance
design is working in practice. It also removes the stress from
audits because you’re not scrambling for evidence; you already
have it, neatly organized and timestamped. With the right setup,
proving Copilot compliance becomes as routine as applying
security updates to your servers. It’s not glamorous, but it
means you can keep innovating with AI without constantly worrying
about your next audit window. And that leads straight into the
bigger picture of why a governed AI approach isn’t just
safer—it’s smarter business.
Conclusion
Securing Copilot isn’t about slowing things down or locking
people out. It’s about making sure the AI serves your business
without quietly exposing it. The guardrails we’ve talked
about—Graph permissions, DLP, Purview—aren’t red tape. They’re
the framework that keeps Copilot’s answers accurate, relevant,
and safe. Before your next big rollout or project kick-off,
review exactly what Graph permissions you’ve approved, align your
DLP so it catches AI outputs, and check your Purview dashboards
for anything unusual. Done right, governed Copilot doesn’t just
avoid risk—it lets you use AI with confidence, speed, and
precision. That’s a competitive edge worth protecting.
Get full access to M365 Show - Microsoft 365 Digital Workplace
Daily at m365.show/subscribe