Governed AI: Keeping Copilot Secure and Compliant


22 minutes
Podcast
Podcaster
M365 Show brings you expert insights, news, and strategies across Power Platform, Azure, Security, Data, and Collaboration in the Microsoft ecosystem.
MirkoPeters

Stuttgart

Description

3 months ago

If you think Copilot only shows what you’ve already got permission to see—think again. One wrong Graph permission and suddenly your AI can surface data your compliance team never signed off on. The scary part? You might never even realize it’s happening.

In this video, I’ll break down the real risks of unmanaged Copilot access—how sensitive files, financial spreadsheets, and confidential client data can slip through. Then I’ll show you how to lock it down using Graph permissions, DLP policies, and Purview—without breaking productivity for the people who actually need access.


When Copilot Knows Too Much


A junior staffer asks Copilot for notes from last quarter’s project review, and what comes back isn’t a tidy summary of their own meeting—it’s detailed minutes from a private board session, including strategy decisions, budget cuts, and names that should never have reached that person’s inbox. No breach alerts went off. No DLP warning. Just an AI quietly handing over a document it should never have touched.

This happens because Copilot doesn’t magically stop at a user’s mailbox or OneDrive folder. Its reach is dictated by the permissions it’s been granted through Microsoft Graph. And Graph isn’t just a database—it’s the central point of access to nearly every piece of content in Microsoft 365. SharePoint, Teams messages, calendar events, OneNote, CRM data tied into the tenant—it all flows through Graph if the right door is unlocked. That’s the part many admins miss.

There’s a common assumption that if I’m signed in as me, Copilot will only see what I can see. Sounds reasonable. The problem is, Copilot itself often runs with a separate set of application permissions. If those permissions are broader than the signed-in user’s rights, you end up with an AI assistant that can reach far more than the human sitting at the keyboard. And in some deployments, those elevated permissions are handed out without anyone questioning why.

Picture a financial analyst working on a quarterly forecast. They ask Copilot for “current pipeline data for top 20 accounts.” In their regular role, they should only see figures for a subset of clients. But thanks to how Graph has been scoped in Copilot’s app registration, the AI pulls the entire sales pipeline report from a shared team site that the analyst has never had access to directly. From an end-user perspective, nothing looks suspicious. But from a security and compliance standpoint, that’s sensitive exposure.

Graph API permissions are effectively the front door to your organization’s data. Microsoft splits them into delegated permissions—acting on behalf of a signed-in user—and application permissions, which allow an app to operate independently. Copilot scenarios often require delegated permissions for content retrieval, but certain features, like summarizing a Teams meeting the user wasn’t in, can prompt admins to approve application-level permissions. And that’s where the danger creeps in. Application permissions ignore individual user restrictions unless you deliberately scope them.
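If you want to see what an AI-related app registration has actually been granted, Microsoft Graph can tell you directly. Here’s a minimal sketch in Python, assuming you already hold a Graph access token with rights to read service principals (for example Application.Read.All); the client ID is a placeholder for the app registration you’ve confirmed in Entra ID.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumptions: TOKEN is a Graph access token able to read service principals
# (e.g. Application.Read.All); APP_ID is the client ID of the app registration
# you confirmed in Entra ID. Both are placeholders here.
TOKEN = "<graph-access-token>"
APP_ID = "<copilot-app-client-id>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Resolve the service principal behind that client ID.
sp = requests.get(
    f"{GRAPH}/servicePrincipals",
    headers=HEADERS,
    params={"$filter": f"appId eq '{APP_ID}'"},
).json()["value"][0]

# Application permissions granted to the app. These apply tenant-wide and ignore
# the signed-in user's own rights. (appRoleId is a GUID; map it to a readable name
# via the appRoles list on the resource service principal, e.g. Microsoft Graph.)
app_perms = requests.get(
    f"{GRAPH}/servicePrincipals/{sp['id']}/appRoleAssignments", headers=HEADERS
).json()["value"]
print("Application permissions:")
for a in app_perms:
    print("  role", a["appRoleId"], "on", a["resourceDisplayName"])

# Delegated permissions granted to the app. These act on behalf of a signed-in
# user and are bounded by that user's own access.
delegated = requests.get(
    f"{GRAPH}/servicePrincipals/{sp['id']}/oauth2PermissionGrants", headers=HEADERS
).json()["value"]
print("Delegated scopes:")
for g in delegated:
    print(" ", g["scope"], f"({g['consentType']})")
```

The application permission assignments are the ones to scrutinize, because they apply tenant-wide regardless of who is signed in.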
These approvals often happen early in a rollout. An IT admin testing Copilot in a dev tenant might click “Accept” on a permission prompt just to get through setup, then replicate that configuration in production without reviewing the implications. Once in place, those broad permissions remain unless someone actively audits them. Over time, as new data sources connect into M365, Copilot’s reach expands without any conscious decision. That’s silent permission creep—no drama, no user complaints, just a gradual widening of the AI’s scope.

The challenge is that most security teams aren’t fluent in which Copilot capabilities require what level of Graph access. They might see “Read all files in SharePoint” and assume it’s constrained by user context, not realizing that the permission is tenant-wide at the application level. Without mapping specific AI scenarios to the minimum necessary permissions, you end up defaulting to whatever was approved in that initial setup. And the broader those rights, the bigger the potential gap between expected and actual behavior.

It’s also worth remembering that Copilot’s output doesn’t come with a built-in “permissions trail” visible to the user. If the AI retrieves content from a location the user would normally be blocked from browsing, there’s no warning banner saying “this is outside your clearance.” That lack of transparency makes it easier for risky exposures to blend into everyday workflows.

The takeaway here is that Graph permissions for AI deployments aren’t just another checkbox in the onboarding process—they’re a design choice that shapes every interaction Copilot will have on your network. Treat them like you would firewall rules or VPN access scopes: deliberate, reviewed, and periodically revalidated. Default settings might get you running quickly, but they also assume you’re comfortable with the AI casting a much wider net than the human behind it. Now that we’ve seen how easily the scope can drift, the next question is how to find those gaps before they turn into a full-blown incident.


Finding Leaks Before They Spill


If Copilot was already surfacing data it shouldn’t, would you even notice? For most organizations, the honest answer is no. It’s not that the information would be posted on a public site or blasted to a mailing list. The leak might show up quietly inside a document draft, a summary, or an AI-generated answer—and unless someone spots something unusual, it slips by without raising alarms.

The visibility problem starts with how most monitoring systems are built. They’re tuned for traditional activities—file downloads, unusual login locations, large email sends—not for the way an AI retrieves and compiles information. Copilot doesn’t “open” files in the usual sense. It queries data sources through Microsoft Graph, compiles the results, and presents them as natural language text. That means standard file access reports can look clean, while the AI is still drawing from sensitive locations in the background.

I’ve seen situations where a company only realized something was wrong because an employee casually mentioned a client name that wasn’t in their department’s remit. When the manager asked how they knew that, the answer was, “Copilot included it in my draft.” There was no incident ticket, no automated alert—just a random comment that led IT to investigate. By the time they pieced it together, those same AI responses had already been shared around several teams.

Microsoft 365 gives you the tools to investigate these kinds of scenarios, but you have to know where to look. Purview’s Audit feature can record Copilot’s data access in detail—it’s just not labeled with a big flashing “AI” badge. Once you’re in the audit log search, you can filter by the specific operations Copilot uses, like `SearchQueryPerformed` or `FileAccessed`, and narrow that down by the application ID tied to your Copilot deployment. That takes a bit of prep: you’ll want to confirm the app registration details in Entra ID so you can identify the traffic.
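As a rough illustration of what that filtering looks like outside the portal, here’s a sketch in Python against the Office 365 Management Activity API. It assumes you already have a token for that API, an active audit subscription, and the Copilot-related client ID from Entra ID; the content types queried and the field that carries the calling app’s ID are assumptions you should verify against your own records.

```python
import requests

# Assumptions: TENANT_ID and TOKEN (a bearer token for the Office 365 Management
# Activity API, resource https://manage.office.com) are obtained out of band, and
# an audit subscription is already started for the content types below.
TENANT_ID = "<tenant-id>"
TOKEN = "<management-api-token>"
COPILOT_APP_ID = "<copilot-app-client-id>"  # app registration confirmed in Entra ID
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"
OPERATIONS = {"SearchQueryPerformed", "FileAccessed"}
# Which content type carries which operation varies; querying both is a safe start.
CONTENT_TYPES = ["Audit.General", "Audit.SharePoint"]

hits = []
for content_type in CONTENT_TYPES:
    # List the content blobs available for a given (max 24h) window.
    blobs = requests.get(
        f"{BASE}/subscriptions/content",
        headers=HEADERS,
        params={
            "contentType": content_type,
            "startTime": "2024-06-01T00:00:00",
            "endTime": "2024-06-01T23:59:59",
        },
    ).json()
    for blob in blobs:
        # Each blob's contentUri returns a JSON array of individual audit records.
        for record in requests.get(blob["contentUri"], headers=HEADERS).json():
            if record.get("Operation") not in OPERATIONS:
                continue
            # The property naming the calling app differs by record type; this
            # AppAccessContext/ClientAppId path is an assumption to verify.
            app_id = (record.get("AppAccessContext") or {}).get("ClientAppId")
            if app_id == COPILOT_APP_ID:
                hits.append(record)

for r in hits:
    print(r["CreationTime"], r.get("UserId"), r["Operation"], r.get("ObjectId"))
```

However you collect them, the point is the same: a per-app, per-operation view of what the AI actually touched, rather than a generic activity feed.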
From there, it’s about spotting patterns. If you see high-volume queries from accounts that usually have low data needs—like an intern account running ten complex searches in an hour—that’s worth checking. Same with sudden spikes in content labeled “Confidential” showing up in departments that normally don’t touch it. Purview can flag label activity, so if a Copilot query pulls in a labeled document, you’ll see it in the logs, even if the AI didn’t output the full text.

Role-based access reviews are another way to connect the dots. By mapping which people actually use Copilot, and cross-referencing with the kinds of data they interact with, you can see potential mismatches early. Maybe Finance is using Copilot heavily for reports, which makes sense—but why are there multiple Marketing accounts hitting payroll spreadsheets through AI queries? Those reviews give you a broader picture beyond single events in the audit trail.

The catch is that generic monitoring dashboards won’t help much here. They aggregate every M365 activity into broad categories, which can cause AI-specific behavior to blend in with normal operations. Without creating custom filters or reports focused on your Copilot app ID and usage patterns, you’re basically sifting for specific grains of sand in a whole beach’s worth of data. You need targeted visibility, not just more visibility.

It’s not about building a surveillance culture; it’s about knowing, with certainty, what your AI is actually pulling in. A proper logging approach answers three critical questions: What did Copilot retrieve? Who triggered it? And did that action align with your existing security and compliance policies? Those answers let you address issues with precision—whether that means adjusting a permission, refining a DLP rule, or tightening role assignments. Without that clarity, you’re left guessing, and guessing is not a security strategy.

So rather than waiting for another “casual comment” moment to tip you off, it’s worth investing the time to structure your monitoring so Copilot’s footprint is visible and traceable. This way, any sign of data overexposure becomes a managed event, not a surprise. Knowing where the leaks are is only the first step. The real goal is making sure they can’t happen again—and that’s where the right guardrails come in.


Guardrails That Actually Work


DLP isn’t just for catching emails with credit card numbers in them. In the context of Copilot, it can be the tripwire that stops sensitive data from slipping into an AI-generated answer that gets pasted into a Teams chat or exported into a document leaving your tenant. It’s still the same underlying tool in Microsoft 365, but the way you configure it for AI scenarios needs a different mindset.

The gap is that most organizations’ DLP policies are still written with old-school triggers in mind—email attachments, file downloads to USB drives, copying data into non‑approved apps. Copilot doesn’t trigger those rules by default because it’s not “sending” files; it’s generating content on the fly. If you ask Copilot for “the full list of customers marked restricted” and it retrieves that from a labeled document, the output can travel without ever tripping a traditional DLP condition. That’s why AI prompts and responses need to be explicitly brought into your DLP scope.

One practical example: say your policy forbids exporting certain contract documents outside your secure environment. A user could ask Copilot to extract key clauses and drop them into a PowerPoint. If your DLP rules don’t monitor AI-generated content, that sensitive material now exists in an unprotected file. By extending DLP inspection to cover Copilot output, you can block that PowerPoint from being saved to an unmanaged location or shared with an external guest in Teams.

Setting this up in Microsoft 365 isn’t complicated, but it does require a deliberate process. First, in the Microsoft Purview compliance portal, go to the Data Loss Prevention section and create a new policy. When you choose the locations to apply it to, include Exchange, SharePoint, OneDrive, and importantly, Teams—because Copilot can surface data into any of those. Then, define the conditions: you can target built‑in sensitive information types like “Financial account number” or custom ones that detect your internal project codes. If you use sensitivity labels consistently, you can also set the condition to trigger when labeled content appears in the final output of a file being saved or shared. Finally, configure the actions—block the sharing, show a policy tip to the user, or require justification to proceed.
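The policy itself lives in Purview (or can be scripted through the Security & Compliance PowerShell cmdlets), not in your own code, but the matching logic behind a condition is easy to picture. Here’s a purely illustrative Python sketch of the kind of pattern check a custom sensitive information type performs, with a made-up project-code format standing in for your internal one.

```python
import re

# Illustrative stand-ins for the sensitive information types a DLP rule evaluates:
# a crude account-number pattern and a hypothetical internal project code format.
PATTERNS = {
    "Financial account number": re.compile(r"\b\d{8,12}\b"),
    "Internal project code": re.compile(r"\bPRJ-\d{4}\b"),
}

def dlp_matches(text: str) -> dict[str, list[str]]:
    """Return every info type that would fire on a piece of AI-generated output."""
    return {name: p.findall(text) for name, p in PATTERNS.items() if p.search(text)}

draft = "Summary for PRJ-4821: wire the retainer to account 004417820933 by Friday."
for info_type, found in dlp_matches(draft).items():
    # A real policy would block the save or share, show a policy tip, or ask for justification.
    print(f"Policy tip: '{info_type}' detected -> {found}")
```

The real engine is far more sophisticated (confidence levels, supporting evidence, proximity rules), but the takeaway holds: if Copilot output isn’t in scope for that evaluation, the match never happens.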
Sensitivity labels themselves are a key part of making this work. In the AI context, the label is metadata that Copilot can read, just like any other M365 service. If a “Highly Confidential” document has a label that restricts access and usage, Copilot will respect those restrictions when generating answers—provided that label’s protection settings are enforced consistently across the apps involved. If the AI tries to use content with a label outside its permitted scope, the DLP policy linked to that label can either prevent the action or flag it for review. Without that tie‑in, the label is just decoration from a compliance standpoint.

One of the most common misconfigurations I run into is leaving DLP policies totally unaware of AI scenarios. The rules exist, but there’s no link to Copilot output because admins haven’t considered it a separate channel. That creates a blind spot where sensitive terms in a generated answer aren’t inspected, even though the same text in an email would have been blocked. To fix that, you have to think of “AI‑assisted workflows” as one of your DLP locations and monitor them along with everything else.

When DLP and sensitivity labels are properly configured and aware of each other, Copilot can still be useful without becoming a compliance headache. You can let it draft reports, summarize documents, and sift through datasets—while quietly enforcing the same boundaries you’d expect in an email or Teams message. Users get the benefit of AI assistance, and the guardrails keep high‑risk information from slipping out.

The advantage here isn’t just about preventing an accidental overshare; it’s about allowing the technology to operate inside clear rules. That way you aren’t resorting to blanket restrictions that frustrate teams and kill adoption. You can tune the controls so marketing can brainstorm with Copilot, finance can run analysis, and HR can generate onboarding guides—each within their own permitted zones. But controlling output is only part of the puzzle. To fully reduce risk, you also have to decide which people get access to which AI capabilities in the first place.


One Size Doesn’t Fit All Access


Should a marketing intern and a CFO really have the same Copilot privileges? The idea sounds absurd when you say it out loud, but in plenty of tenants, that’s exactly how it’s set up. Copilot gets switched on for everyone, with the same permissions, because it’s quicker and easier than dealing with role-specific configurations. The downside is that the AI’s access matches the most open possible scenario, not the needs of each role.

That’s where role-based Copilot access groups come in. Instead of treating every user as interchangeable, you align AI capabilities to the information and workflows that specific roles actually require. Marketing might need access to campaign assets and brand guidelines, but not raw financial models. Finance needs those models, but they don’t need early-stage product roadmaps. The point isn’t to make Copilot less useful; it’s to keep its scope relevant to each person’s job.

The risks of universal enablement are bigger than most teams expect. Copilot works by drawing on the data your Microsoft 365 environment already holds. If all staff have equal AI access, the technology can bridge silos you’ve deliberately kept in place. That’s how you end up with HR assistants stumbling into revenue breakdowns, or an operations lead asking Copilot for “next year’s product release plan” and getting design details that aren’t even finalized. None of it feels like a breach in the moment—but the exposure is real.

Getting the access model right starts with mapping job functions to data needs. Not just the applications people use, but the depth and sensitivity of the data they touch day to day. You might find that 70% of your sales team’s requests to Copilot involve customer account histories, while less than 5% hit high-sensitivity contract files. That suggests you can safely keep most of their AI use within certain SharePoint libraries while locking down the rest. Do that exercise across each department, and patterns emerge.

Once you know what each group should have, Microsoft Entra ID—what many still call Azure AD—becomes your enforcement tool. You create security groups that correspond to your role definitions, then assign Copilot permissions at the group level. That could mean enabling certain Graph API scopes only for members of the “Finance-Copilot” group, while the “Marketing-Copilot” group has a different set. Access to sensitive sites, Teams channels, or specific OneDrive folders can follow the same model.
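Creating those groups can be done in the Entra admin center, but it scripts cleanly too. A minimal sketch using Microsoft Graph from Python, assuming a token with Group.ReadWrite.All; the group name follows the convention above and the member ID is a placeholder.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: TOKEN is a Graph access token with Group.ReadWrite.All.
TOKEN = "<graph-access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Create a security group that mirrors the role definition.
group = requests.post(
    f"{GRAPH}/groups",
    headers=HEADERS,
    json={
        "displayName": "Finance-Copilot",
        "mailNickname": "finance-copilot",
        "mailEnabled": False,
        "securityEnabled": True,
        "description": "Finance roles scoped for Copilot access",
    },
).json()

# Add a user to the role group (placeholder object ID).
member_id = "<user-object-id>"
requests.post(
    f"{GRAPH}/groups/{group['id']}/members/$ref",
    headers=HEADERS,
    json={"@odata.id": f"{GRAPH}/directoryObjects/{member_id}"},
)

print("Created", group["displayName"], group["id"])
```

Once the group exists, you point everything at it: license or app assignment, Conditional Access, SharePoint site membership, DLP policy scoping, rather than managing individuals one by one.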
The strength of this approach shows when it’s layered with the controls we’ve already covered. Graph permissions define the outer boundaries of what Copilot can technically reach. DLP policies monitor the AI’s output for sensitive content. Role-based groups sit in between, making sure the Graph permissions aren’t overly broad for lower-sensitivity roles, and that DLP doesn’t end up catching things you could have prevented in the first place by restricting input sources.

But like any system, it can be taken too far. It’s tempting to create a micro-group for every scenario—“Finance-Analyst-CopilotWithReportingPermissions” or “Marketing-Intern-NoTeamsAccess”—and end up with dozens of variations. That level of granularity might look precise on paper, but in a live environment it’s a maintenance headache. Users change roles, projects shift, contractors come and go. If the group model is too brittle, your IT staff will spend more time fixing access issues than actually improving security.

The real aim is balance. A handful of clear, well-defined role groups will cover most use cases without creating administrative gridlock. The CFO’s group needs wide analytical powers but tight controls on output sharing. The intern group gets limited data scope but enough capability to contribute to actual work. Department leads get the middle ground, and IT retains the ability to adjust when special projects require exceptions. You’re not trying to lock everything down to the point of frustration—you’re keeping each AI experience relevant, secure, and aligned with policy.

When you get it right, the benefits show up quickly. Users stop being surprised by the data Copilot serves them, because it’s always something within their sphere of responsibility. Compliance teams have fewer incidents to investigate, because overexposures aren’t happening by accident. And IT can finally move ahead with new Copilot features without worrying that a global roll-out will quietly erode all the data boundaries they’ve worked to build.

With access and guardrails working together, you’ve significantly reduced your risk profile. But even a well-designed model only matters if you can prove that it’s working—both to yourself and to anyone who comes knocking with an audit request.


Proving Compliance Without Slowing Down


Compliance isn’t just security theatre; it’s the evidence that keeps the auditors happy. Policies and guardrails are great, but if you can’t show exactly what happened with AI-assisted data, you’re left making claims instead of proving them. An audit-ready Copilot environment means that every interaction, from the user’s query to the AI’s data retrieval, can be explained and backed up with a verifiable trail.

The tricky part is that many companies think they’re covered because they pass internal reviews. Those reviews often check the existence of controls and a few sample scenarios, but they don’t always demand the level of granularity external auditors expect. When an outside assessor asks for a log of all sensitive content Copilot accessed last quarter—along with who requested it and why—it’s surprising how often gaps appear. Either the logs are incomplete, or they omit AI-related events entirely because they were never tagged that way in the first place.

This is where Microsoft Purview can make a big difference. Its compliance capabilities aren’t just about applying labels and DLP policies; they also pull together the forensic evidence you need. In a Copilot context, Purview can record every relevant data access request, the identity behind it, and the source location. It can also correlate those events to data movement patterns—like sensitive files being referenced in drafts, summaries, or exports—without relying on the AI to self-report.

Purview’s compliance score is more than a vanity metric. It’s a snapshot of how your environment measures up against Microsoft’s recommended controls, including those that directly limit AI-related risks. Stronger Graph permission hygiene, tighter DLP configurations, and well-maintained role-based groups all feed into that score. And because the score updates as you make changes, you can see in near real time how improvements in AI governance increase your compliance standing.

Think about a regulatory exam where you have to justify why certain customer data appeared in a Copilot-generated report. Without structured logging, that conversation turns into guesswork. With Purview properly configured, you can show the access request in an audit log, point to the role and permissions that authorized it, and demonstrate that the output stayed within approved channels. That’s a much easier discussion than scrambling to explain an undocumented event.

The key is to make compliance reporting part of your normal IT governance cycle, not just a special project before an audit. Automated reporting goes a long way here. Purview can generate recurring reports on information protection policy matches, DLP incidents, and sensitivity label usage. When those reports are scheduled to drop into your governance team’s workspace each month, you build a baseline of AI activity that’s easy to review. Any anomaly stands out against the historical pattern.
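The mechanics of that baseline check are simple enough to sketch. Assuming you log the monthly count of Copilot-related DLP incidents somewhere you can read back (the numbers below are invented), flagging an outlier is a few lines of Python.

```python
from statistics import mean, pstdev

# Invented monthly counts of DLP incidents involving labeled content surfaced
# through Copilot, e.g. tallied from Purview's recurring reports.
monthly_incidents = {
    "2024-01": 12, "2024-02": 14, "2024-03": 11,
    "2024-04": 13, "2024-05": 15, "2024-06": 41,
}

history = list(monthly_incidents.values())[:-1]
baseline, spread = mean(history), pstdev(history)
latest_month, latest = list(monthly_incidents.items())[-1]

# Flag the newest month if it sits well outside the historical pattern.
if spread and latest > baseline + 2 * spread:
    print(f"{latest_month}: {latest} incidents vs. baseline ~{baseline:.0f} -- investigate")
else:
    print(f"{latest_month}: within the historical range")
```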
The time-saving features add up. For instance, Purview ships with pre-built reports that highlight all incidents involving labeled content, grouped by location or activity type. If a Copilot session pulled a “Confidential” document into an output and your DLP acted on it, that incident already appears in a report without you building a custom query from scratch. You can then drill into that record for more details, but the heavy lifting of collection and categorization is already done.

Another efficiency is the integration between Purview auditing and Microsoft 365’s role-based access data. Because Purview understands Entra ID groups, it can slice access logs by role type. That means you can quickly answer focused questions like, “Show me all instances where marketing roles accessed finance-labeled data through Copilot in the past 90 days.” That ability to filter down by both role and data classification is exactly what external reviewers are looking for.
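You can reproduce that kind of slice yourself by joining group membership against the audit records collected earlier. A sketch in Python, assuming a Graph token that can read group members, the group naming used above, and a sensitivity-label field on the records (the field name is an assumption to verify).

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<graph-access-token>"  # Assumption: can read groups and their members
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Members of the marketing role group (naming convention from the access section).
group = requests.get(
    f"{GRAPH}/groups",
    headers=HEADERS,
    params={"$filter": "displayName eq 'Marketing-Copilot'"},
).json()["value"][0]
members = requests.get(f"{GRAPH}/groups/{group['id']}/members", headers=HEADERS).json()["value"]
marketing_upns = {m["userPrincipalName"].lower() for m in members if "userPrincipalName" in m}

# Copilot audit records gathered earlier (e.g. the `hits` list from the audit sketch).
copilot_audit_records: list[dict] = []

# Finance-related label GUIDs you care about; the SensitivityLabelId field name
# is an assumption -- check how labels appear in your own audit records.
FINANCE_LABEL_IDS = {"<finance-label-guid>"}

flagged = [
    r for r in copilot_audit_records
    if r.get("UserId", "").lower() in marketing_upns
    and r.get("SensitivityLabelId") in FINANCE_LABEL_IDS
]
for r in flagged:
    print(r["CreationTime"], r["UserId"], r.get("ObjectId"))
```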
When you think about it, compliance at this level isn’t a burden—it’s a guardrail that confirms your governance design is working in practice. It also removes the stress from audits because you’re not scrambling for evidence; you already have it, neatly organized and timestamped. With the right setup, proving Copilot compliance becomes as routine as applying security updates to your servers. It’s not glamorous, but it means you can keep innovating with AI without constantly worrying about your next audit window. And that leads straight into the bigger picture of why a governed AI approach isn’t just safer—it’s smarter business.


Conclusion


Securing Copilot isn’t about slowing things down or locking
people out. It’s about making sure the AI serves your business
without quietly exposing it. The guardrails we’ve talked
about—Graph permissions, DLP, Purview—aren’t red tape. They’re
the framework that keeps Copilot’s answers accurate, relevant,
and safe. Before your next big rollout or project kick-off,
review exactly what Graph permissions you’ve approved, align your
DLP so it catches AI outputs, and check your Purview dashboards
for anything unusual. Done right, governed Copilot doesn’t just
avoid risk—it lets you use AI with confidence, speed, and
precision. That’s a competitive edge worth protecting.


Get full access to M365 Show - Microsoft 365 Digital Workplace
Daily at m365.show/subscribe
