Your Phishing Reports Aren’t Showing the Whole Story
M365 Show brings you expert insights, news, and strategies across Power Platform, Azure, Security, Data, and Collaboration in the Microsoft ecosystem.
Ever wonder why your phishing reports feel like they’re missing
half the story? Most dashboards just show surface-level numbers,
but behind those simple stats is a constant stream of real
threats slipping through cracks. Today, I’ll show you how to
transform Microsoft Defender data into living dashboards that
actually tell you what’s happening in your environment — and what
you’re not seeing yet.
The Hidden Layer: What Defender Knows That Your Reports Don’t
If you’ve ever looked at your security dashboard and thought,
“Looks good to me,” you’re not alone. Execs love a tidy
chart—blocked emails, a drop in reported phishing, maybe one or
two suspicious sign-ins. It’s comforting, right? But here’s the
catch: the data sitting right underneath is almost never as
simple as those friendly graphs make it seem. In most orgs, the
actual story is far more complicated, largely because those
dashboards pull from the same handful of exportable stats. A lot
rides on whatever filter you set in your mail flow reports or
security tool. Most people stick to what’s easy to get out of
Exchange Online or the built-in phishing report from their email
provider. If a user flagged something, tick mark. If an email was
blocked, bar goes up. End of story—or so it appears.

But Microsoft
Defender for Office 365 is sitting on a goldmine of details most
teams skip over completely. It’s the classic iceberg: everything
you show in a regular incident review covers about twenty percent
of what actually gets picked up in the background. What Defender
captures is almost embarrassingly detailed. It logs every click
your users make on links inside emails—even when Safe Links steps
in to stop a detonation. It tracks those silent “near miss”
moments when a phish was one click away from success. Automated
Investigation & Response runs playbooks in the background,
picking up on correlated signals your manual review would
probably never spot until the situation escalates for real. Most
dashboards? They just don’t bother to look under the surface. We
all know those emails that get blocked right away get counted,
but a targeted attack that blends into a newsletter and is
manually reported by one vigilant user? Often lost in the
noise.

Let’s talk reality for a second. I saw this firsthand last
summer. Security had a dashboard that looked flawless—trendline
of blocked phishing up, reported incidents down, execs all happy.
Meanwhile, a low-volume spear-phishing campaign was targeting the
finance team. Defender tagged it with a high severity, ran an
automated investigation, and quietly bundled up the event in the
backend logs. None of it landed in the weekly cybersecurity
summary because nobody was pulling data from the Automated
Investigation & Response logs. It wasn’t even a blip for
execs until someone got suspicious about a calendar invite.
That’s the gap—Defender caught the signal, but the dashboard
never showed it.

If you crack open Defender’s portal, there are
three sources that almost always get left out: Threat Explorer,
Automated Investigation & Response, and User Submissions.
Threat Explorer is not just a list of threats—it maps
relationships between malicious files, sender infrastructure, and
user behavior. It tracks attack campaigns, figuring out who else
in your org saw the same phish, even if no one reported it. AIR,
that's Automated Investigation & Response, does more than
block an obvious threat. It pieces together what your automated
policies did: what devices were checked, how compromised accounts
were flagged, which mailboxes were scanned for ‘potentially
harmful’ content long before a breach is visible to end users.
And user submissions—probably the least appreciated signal—layer
something valuable on top: human reporting of suspicious items
that the filters missed. Defender takes those and sometimes
surfaces genuine threats by combining user intel with backend
analytics.
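To make that concrete, here’s a minimal sketch of pulling those Safe Links click events yourself through the Microsoft 365 Defender Advanced Hunting API, in Python. It assumes you already hold a bearer token for an app registration with Advanced Hunting permission (a connector sketch follows in the next section), and the EmailUrlClickEvents table comes from the published Advanced Hunting schema; verify the column and ActionType names against your own tenant.

# A minimal sketch: pull Safe Links click events ("near misses") that
# rarely make it into a standard phishing summary. Assumes a bearer token
# for the Microsoft 365 Defender API with Advanced Hunting permission.
import requests

API_URL = "https://api.security.microsoft.com/api/advancedhunting/run"

# KQL against EmailUrlClickEvents, the table Safe Links click telemetry
# lands in. Names follow the published schema; verify in your tenant.
KQL = """
EmailUrlClickEvents
| where Timestamp > ago(7d)
| where ActionType == "ClickBlocked"
| summarize BlockedClicks = count() by AccountUpn, bin(Timestamp, 1d)
| order by BlockedClicks desc
"""

def fetch_blocked_clicks(token: str) -> list[dict]:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"Query": KQL},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("Results", [])

Run on a schedule, a query like this surfaces exactly the near-miss clicks that never make it into the blocked-mail totals.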
Research from Microsoft regularly shows data gaps between what’s available in Defender logs and what actually gets piped into exec-facing tools. Even in mature security programs,
you’ll see dashboards showing blocked mail totals but skipping
over AIR investigations, user-reported near-miss phishes, or
campaign mapping data from Threat Explorer. In many tenants,
nobody’s wiring up the automated investigation tables to reports
at all—it’s an extra export, another click, something to fill
next quarter’s backlog. The net effect is that leaders walk into
security reviews seeing “zero incidents” when what actually
happened is much more complicated. They miss context—what threats
got close but were caught at the last second, how many users
actually clicked something dangerous before the block, or which
attack vectors are being tested by threat actors right now.

This
isn’t just a technical shortcoming—it's an awareness problem that
can leave the business exposed. Say you’re only catching two out
of five signals that matter. Maybe you’ve got blocks and
reports—but nothing from AIR or Threat Explorer. Leaders end up
believing that the risk is low because those details never make
it to the dashboard. But the most useful dashboards surface
signals most people miss: who’s being targeted and how often, how
employees respond to sophisticated lures, and whether automated
policies are actually working or just hiding problems until they
escalate.

The gap between what Defender knows and what hits the
regular reports is bigger than most orgs think. Those glossy,
high-level metrics end up creating a kind of invisible shield
where executive teams assume their controls are better than they
are. And all the while, the real signals—those near-misses,
automated investigation results, and full campaign data—get lost
in the shuffle because nobody wired them into the story. So if
all this data is right there in Defender, what’s stopping us from
using it? The answer: almost no one is building frameworks that
take advantage of it. That’s what needs to change, and that’s
exactly what I want to get into next.
Beyond the One-Off: Building a Repeatable Security Dashboard
Framework
If you’ve ever watched your shiny new dashboard fall apart the
moment Microsoft Defender changes a field name, you already know
how fragile these setups really are. Teams get excited, spin up
Power BI, connect to that first export, and within a week they’ve
got a handful of pretty charts. Job done—for now. But fast
forward to the next Defender update, or worse, the next round of
phishing attacks using totally new lures and attacker
infrastructure. Suddenly columns are missing, charts break, and
the data just doesn’t line up. The reality is, it’s
straightforward to pull a phishing summary for this month, but
building something that adapts to whatever the threat landscape
throws at you? That’s where most dashboards fall flat.

We’ve all
been there: your team spends hours every quarter scrambling
through spreadsheets, manually fixing broken queries and swapping
in new attack types that didn’t exist when you built the last
report. Someone pulls an export from AIR, another from Threat
Explorer, and now you’ve got two sources that don’t even speak
the same language. In the background, Defender itself is
updating; Microsoft tweaks schemas, new API endpoints arrive, and
suddenly all those beautiful visuals are out of sync. If your
dashboards rely on manual steps and one-off metrics, you’re not
just chasing attackers—you’re chasing your own tools.

That cycle
happens because most orgs treat dashboards like fixed artifacts,
not living systems. We see a lot of patchwork: tables copied out
of Excel, mismatched metrics stitched together, and visuals meant
to impress more than inform. The result? Dashboards that tell you
what happened last month, but can’t keep up with what’s happening
now because they break every time Defender evolves. When
executive reporting time comes, teams rush to update everything
by hand because automation was always “tomorrow’s problem.” It’s
familiar, but it’s also kind of exhausting. And risky.

This is
where the idea of a dashboard framework comes in—a repeatable,
modular system that’s designed to connect to the real Defender
data, model how everything relates, and standardize the critical
metrics that actually indicate risk. A real framework isn’t a
template you download and forget about. Instead, it’s a
collection of core building blocks: reliable connectors that pull
Defender’s freshest data automatically, a resilient model that
adapts when the source data structure shifts, a shortlist of KPIs
that matter for threat response, and flexible visuals focused on
what matters most, not just what looks pretty.

Let’s break that
down. First, reliable data connectors. Too many teams grab a CSV
from the portal, build out a dashboard, and call it a day. Until
next week, when they need a new CSV. Instead, you want direct
connections—using Defender’s API, set up in a way that survives
authentication changes and schema updates. Power BI’s connectors
can do this, but only if you invest the time upfront to map how
each table and field relates to real threat signals.
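To sketch what that can look like, here’s an unattended connector built on MSAL’s client-credentials flow; the tenant, client ID, and secret are placeholders for your own app registration, and the resulting token can feed a staging script or a Power BI dataflow. Treat it as one possible wiring, not the official pattern.

# A sketch of an unattended connector using MSAL's client-credentials
# flow, so the feed survives password resets and interactive-login
# changes. The IDs below are placeholders for your own app registration;
# keep the secret in a vault, not in code.
import msal

TENANT_ID = "<your-tenant-guid>"
CLIENT_ID = "<your-app-registration-id>"
CLIENT_SECRET = "<from-your-key-vault>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

def get_token() -> str:
    # The .default scope picks up whatever API permissions the app was
    # granted, so a permission change in the portal needs no code change.
    result = app.acquire_token_for_client(
        scopes=["https://api.security.microsoft.com/.default"]
    )
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "token failure"))
    return result["access_token"]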
Second, that resilient data model. Think of all the ways Defender can adjust
its logging—new columns, renamed fields, sudden additions for a
brand-new detection policy. If all you’ve got is a pile of flat
tables, every change is a ticket to go fix broken dashboards. But
if your model relates incidents, users, mailboxes, devices, and
actions in a unified schema, Defender’s tweaks don’t derail your
narrative. Microsoft’s own security ops guidance pushes this
approach: invest first in structuring your data before painting
any visuals.
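One defensive pattern worth sketching is an alias map that folds whatever column names the source currently emits into a canonical schema before the model ever sees them. The aliases below are illustrative, not an official mapping.

# A defensive pattern for schema drift: map whatever column names the
# source currently emits onto a canonical schema before modeling. The
# alias lists are illustrative; extend them as Defender's exports evolve.
import pandas as pd

CANONICAL = {
    "incident_id": ["incidentId", "id", "IncidentId"],
    "severity":    ["severity", "Severity", "ThreatLevel"],
    "user_upn":    ["AccountUpn", "userPrincipalName", "upn"],
}

def to_canonical(df: pd.DataFrame) -> pd.DataFrame:
    renames = {}
    for target, aliases in CANONICAL.items():
        for alias in aliases:
            if alias in df.columns:
                renames[alias] = target
                break
    out = df.rename(columns=renames)
    # Fail loudly on a truly new schema instead of silently charting gaps.
    missing = [c for c in CANONICAL if c not in out.columns]
    if missing:
        raise ValueError(f"Unmapped canonical columns: {missing}")
    return out

Because the mapping fails loudly on an unrecognized schema, a Defender rename becomes a one-line fix instead of a silently empty chart.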
Third, prioritized KPIs. Not all metrics deserve equal attention. Executive teams don’t need ten flavors of “email
blocked.” What they want: time to incident resolution, users
clicking on threats, high-risk accounts targeted repeatedly, and
which attack vectors got closest to succeeding. Defining these
KPIs up front, based on both operational needs and business
impact, means your dashboards are more than vanity metrics—they
drive decisions.

Finally, visual templates that highlight the
story. A mature framework always includes layouts for quickly
flagging anomalies, escalation paths for incidents, trendlines
for campaigns, and simple cues that answer, “How bad is it this
week?” Standardized visuals mean updates don’t have to be
custom-made every quarter when something changes.

The difference
here is simple. A report tells you what happened. A framework
shows you what’s changing right now. This is the core of avoiding
what Microsoft calls “dashboard drift”—where tools slowly lose
touch with reality and have to be rebuilt from scratch. Instead,
you get a setup that grows with your environment. Whether it’s a
new batch of phishing lures or Microsoft tweaking Defender’s
backend, your dashboard survives and stays actionable. The net
result: you’re not fighting the dashboard every time attackers
invent a new move.

And here’s the kicker: a framework is only ever
as strong as the data moving through it. Building one is great,
but if your data sources are shaky or your connections keep
breaking, the whole thing falls apart just as fast as a flat
Excel sheet. So how do you actually wire Power BI to Defender and
keep your feeds flowing even as the data shifts underneath?
That’s where most teams hit the real challenge, and it’s what
we’re unpacking next.
Connecting the Dots: Data Modeling and Power BI Pitfalls
If you’ve tried pushing Microsoft Defender data into Power BI and
found yourself knee-deep in cryptic error codes or missing
tables, you’re not alone. The setup looks easy enough: hook up
a dataset, hit refresh, and expect a stream of clean updates.
Five minutes later, Power BI throws a red warning about a broken
connection, and you’re scrolling help forums trying to figure out
which column name changed this month. These are the pitfalls that
slow down almost every team. Pulling raw Defender data sounds
like a win, but right away you run into mismatched schemas, API
rate limits, and a laundry list of missing relationships. You’re
working with logs that were designed for analysts, not reporting,
so every export is a puzzle with too many missing pieces.

It’s a
classic trap. Somebody gets an export from the Defender
portal—usually a CSV or Excel file—and builds out some charts in
Power BI. The results look promising at first. But as soon as
someone suggests automating the data feed, all those little
mismatches pop up. Defender’s APIs don’t line up exactly with the
portal exports. Field names shift from “incidentId” to “id,” or
there’s a GUID in one place and a username in another. Even when
you make it past the authentication hurdles, you hit API rate
limits that stop loads midway, or Defender returns extra fields
you hadn’t mapped because a new detection feature launched
overnight.

One of the biggest mistakes is relying on static
exports. It sounds easier than learning Defender’s REST API, but
those exports will never scale. Every time you run the same
report, the context changes—sometimes new attack types appear,
sometimes field definitions get tweaked because Microsoft quietly
updated the schema. Teams skipping normalization steps end up
with tables full of “unknown” or inconsistent values. What works
for a one-off audit falls apart when you need that dashboard to
keep running for six months straight.

Then there’s the battle with
Power BI’s refresh mechanics. DirectQuery and dataflows are
pitched as the dream solution: hit refresh, and the latest events
pour in automatically. In practice, though, DirectQuery brings
its own baggage. If you’re streaming data in real time, you’re
working against the clock—Power BI may slow down or throttle
requests if your model isn’t optimized. Dataflows help with
clean-up and joining tables, but they add another step where
something can break. If you don’t have careful control over how
your tables join—especially if you’ve mixed static exports and
API pulls—errors creep in quickly.

I watched a security team set
up a weekly dataflow refresh, confident that their dashboard
would catch anything critical. Looked good until a phishing
campaign hit over a holiday weekend. The attack started Thursday
night, peaked Friday, but since the refresh wasn’t set to pull
again until Monday, none of those incidents even showed up in the
report they’d prepped for senior management. That slice of time
vanished, so the debrief had a hole exactly where it mattered
most.

API integration can be a minefield, especially with
Defender’s quirks. Authentication isn’t just plugging in a
key—it’s handling OAuth tokens, setting up appropriate app
registrations, and dealing with permissions that change as the
security baseline is adjusted. Pagination is another one:
Defender’s API returns results in batches, so if you’re not
looping through every page correctly, you’re missing large chunks
of incident data. Even simple fields can be trouble—what’s
labeled as “ThreatLevel” in one table is “RiskScore” in another,
or maybe there’s a flag for “compromised” that only shows up if
you choose the right endpoint. If your connectors don’t
explicitly map these relationships, Power BI ends up with
mismatched or duplicate entries.
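Here’s a hedged sketch of defensive paging: many Microsoft security endpoints return batches with an @odata.nextLink continuation URL and answer 429 when you push too hard, though conventions vary by endpoint, so check the docs for the one you call.

# A sketch of defensive paging. Stopping after the first response
# silently drops everything past page one; a 429 means back off, not fail.
import time
import requests

def fetch_all(url: str, token: str) -> list[dict]:
    headers = {"Authorization": f"Bearer {token}"}
    items: list[dict] = []
    while url:
        resp = requests.get(url, headers=headers, timeout=60)
        if resp.status_code == 429:
            # Rate limited: honor Retry-After instead of failing the load.
            time.sleep(int(resp.headers.get("Retry-After", "30")))
            continue
        resp.raise_for_status()
        payload = resp.json()
        items.extend(payload.get("value", []))
        url = payload.get("@odata.nextLink")  # None on the last page
    return items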
Normalization is where the real work is. Threat data is noisy by design—it’s pulled from
thousands of mailboxes, endpoints, and apps, each with its own
format. Unless you run normalization scripts to standardize these
fields before they land in your dataset, you’ll never be able to
compare apples to apples. I always recommend setting up dataflows
with transformation steps: clean the column names, align field
types, and translate all your IDs into real user names or device
identifiers. This not only makes data more legible—it creates a
model that stands the test of shifting schemas.
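A minimal sketch of those transformation steps, assuming a pandas staging layer in front of Power BI; the column names and the user lookup are hypothetical stand-ins for your own exports.

# A sketch of the transformation steps described above: clean column
# names, align field types, and translate raw IDs into readable
# identities. The user lookup is a hypothetical directory export.
import pandas as pd

def normalize(df: pd.DataFrame, user_lookup: dict[str, str]) -> pd.DataFrame:
    out = df.copy()
    # 1. Clean column names: consistent snake_case, no stray whitespace.
    out.columns = [c.strip().replace(" ", "_").lower() for c in out.columns]
    # 2. Align field types so joins and date math behave predictably.
    out["timestamp"] = pd.to_datetime(out["timestamp"], utc=True, errors="coerce")
    out["severity"] = out["severity"].astype("category")
    # 3. Translate GUIDs into user names; keep the GUID when no match exists.
    out["user"] = out["user_id"].map(user_lookup).fillna(out["user_id"])
    return out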
But even a clean dataset isn’t enough unless you build a semantic model. This is
the layer where logs turn into actionable intelligence. Map
incidents to users, overlay geographic or business-unit metadata,
and group alerts by threat type or attack vector. The difference
is huge: instead of seeing a chart of “Incidents This Month,” you
can break down who was targeted, which teams are most exposed,
and if certain locations are being hammered more than others.
I’ve seen organizations take an extra step and link Defender data
with external HR data or device inventories, which gives even
richer context. Now, if a phishing attempt hits the finance team,
you immediately see which endpoints were targeted and which users
were most likely to fall for it.
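Sketched in that same staging layer, the semantic step is mostly joins and grouping. The HR columns here are illustrative; the point is relating incidents to people and org structure before anything reaches a visual.

# A sketch of the semantic layer: relate incidents to people and business
# units so "Incidents This Month" becomes "who was targeted, where".
# Column names are illustrative; match them to your own HR export.
import pandas as pd

def build_semantic_view(incidents: pd.DataFrame, hr: pd.DataFrame) -> pd.DataFrame:
    # hr carries user_upn, department, and location from the HR system.
    enriched = incidents.merge(hr, on="user_upn", how="left")
    # Group by business unit and attack vector to surface exposure hotspots.
    return (
        enriched.groupby(["department", "attack_vector"])
        .agg(incidents=("incident_id", "nunique"),
             users_hit=("user_upn", "nunique"))
        .reset_index()
        .sort_values("incidents", ascending=False)
    )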
All this detail means you go from a stack of logs to a living system that adapts as attackers shift
tactics. Incidents show relationships. Trends become visible.
Instead of chasing broken exports every week, you have a setup
that tracks what actually matters in real time. That sets you up
for the next—and maybe most important—challenge: turning streams
of data into visuals and KPIs that executives will actually use
to make decisions.
From Noise to Narrative: Executive KPIs and Visualization That
Drive Action
If you’ve ever sat through an executive security review, you know
the dashboard ritual by heart. Someone pulls up a slide full of
bar graphs—blocked emails, total phishing attempts last quarter,
maybe a pie chart breaking out malware types. Everybody nods, but
the room glazes over. And here’s the irony: even
with all those stats on the screen, the one chart every
leadership team needs almost never gets included. The missing
piece isn’t more numbers. It’s context that links those numbers
to real-world risk and actual decisions executives need to
make.

Standard metrics like “number of phishing attempts blocked”
might tick a compliance box, but those aren’t the numbers that
drive change or investment. Dashboards that focus on incident
counts or weekly summaries sound informative, but they don’t
actually answer what leaders care about—are we getting better at
stopping attacks, or are threats evolving faster than our
defenses? Too much raw data ends up hiding key signals. If your
dashboard looks like an airport arrival board, with endless lines
and totals, eventually everyone tunes out and starts checking
their phones.

I saw this play out with a finance sector client
last year. Their dashboard boasted all the classics: total
phishing mails, number of blocks, and average response time
stitched into slick visuals. But right in the middle of Q2, there
was a spike—an attack that actually made it past filtering and
led to a credential reset for a high-value account. The board
presentation buried this incident behind generic charts. The only
hint of the breach was a single row in a ten-page appendix. The
team thought they were providing full transparency, but in
reality, the story of what mattered most was lost in the noise.
Instead of sparking a discussion about process improvements or
extra training for targeted employees, the meeting circled back
to incident totals and ended early. That is, until compliance
flagged the event a month later.

So, what actually belongs at the
center of an executive dashboard? Vanity metrics like blocked
emails are easy wins, but they’re not what changes behavior.
Actionable KPIs do that by zeroing in on outcomes. Take attack
success rate—a measure of how often phishing attempts make it
through defenses and result in any real impact, like a user
clicking a malicious link. If you notice this rate ticking up,
that’s an instant alarm to review training, policies, or
technology gaps. User click trends go a step deeper. You can see
not just who received a phish, but who interacted with it, who
reported it as suspicious, and how quickly IT responded. If user
reporting rates are rising, that’s a healthy sign; if they’re
flat, attackers might be adapting faster than users can spot
threats.

Another overlooked metric is dwell time before
remediation. This is the window of exposure—the clock that starts
when a threat sneaks in and stops when it’s contained. If
incidents linger for hours, even after detection, you’re giving
attackers more room to operate. High dwell times directly
translate into higher risk, especially in organizations facing
targeted attacks.

Now, let’s get specific. Five KPIs consistently
separate noise from the insight executives actually want. First,
incident resolution time: how fast do you close out real threats
after they get reported or detected? Second, user-reporting
rates: what percent of users who get baited actually spot and
flag the phish? This doesn’t just measure security tools; it
tracks human awareness and shows where education is needed.
Third, high-risk entity exposure: which users, accounts, or
systems keep getting targeted, again and again? If it’s the CFO’s
mailbox every week, that’s a trend—one you need laid out in plain
sight. Fourth, attack vector trends: are attackers favoring
attachments, links, or business email compromise tactics this
month? Seeing how these shift lets everyone adjust defenses
proactively. And finally, near-miss escalation rates: the count
of threats detected at the last second—after a click but before
damage. If these rates spike, you’re winning the last-mile
battle, but barely.
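If you keep a normalized incident table, the math behind these five is short. This sketch assumes hypothetical columns (detected_at, resolved_at, clicked, reported_by_user, blocked_post_click) rather than any Defender schema.

# A sketch of the KPI math over a normalized incident table. The column
# names are assumptions for illustration, not a Defender schema.
import pandas as pd

def executive_kpis(df: pd.DataFrame) -> dict[str, float]:
    resolved = df.dropna(subset=["resolved_at"])
    dwell = resolved["resolved_at"] - resolved["detected_at"]
    return {
        # Median hours from detection to containment (dwell time).
        "median_resolution_hours": dwell.dt.total_seconds().median() / 3600,
        # Share of baited users who reported the phish themselves.
        "user_reporting_rate": df["reported_by_user"].mean(),
        # Share of delivered phish that produced a click (attack success).
        "attack_success_rate": df["clicked"].mean(),
        # Near-miss escalations: clicked, but blocked before damage.
        "near_miss_rate": (df["clicked"] & df["blocked_post_click"]).mean(),
    }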
Visualization matters just as much as what you measure. Highlight anomalies—don’t let peaks and spikes get lost
in the baseline. Use sparklines for trends over time, and color
strategically. It’s not about making dashboards pretty; it’s
about instantly flagging what’s urgent. If resolution times jump
after a new attack, that cell should go bright orange, not subtle
blue. When user reporting falls off a cliff, it ought to grab
attention before the next campaign rolls through. Simplicity here
is deceptive—you’re aiming for a dashboard where one glance tells
leaders what keeps them up at night.
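The color logic itself can live in the data rather than in someone’s head. A toy sketch, with thresholds you’d calibrate to your own baseline:

# A toy mapping from a KPI reading to a traffic-light cue, so one glance
# carries the signal. Thresholds are illustrative, not a standard.
def status_color(median_resolution_hours: float) -> str:
    if median_resolution_hours <= 4:
        return "green"   # within target
    if median_resolution_hours <= 24:
        return "amber"   # drifting; watch the trend
    return "orange"      # jumped after an attack; flag it loudly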
Microsoft’s Secure Score illustrates this approach. By mapping security actions and
configurations to a quantifiable score, it creates direct
alignment between technical steps and business risk. When you
connect Defender’s KPIs to something like Secure Score, you’re
telling business leaders not just what happened, but what to do
next. You relate every metric to a real-world outcome: more
clicks means more training; slower response times mean you need
better automation or more headcount.

The difference these
visualizations make is immediate. Executives stop skimming slides
and start asking questions: why are high-risk accounts showing up
every week? What changed last month that led to longer
remediation times? Suddenly, your dashboard isn’t just a history
lesson—it's a living status report that drives decisions in real
time. If you want dashboards that actually matter, you need to
move past surface-level counts and start telling the story of
your defenders, your users, and your threats in a way that
demands action.

So, if you’re ready to level up, it’s not just
about collecting more logs. It’s about building dashboards that
leaders will actually use, with stories that give context,
urgency, and direction—because that’s what changes outcomes.
Conclusion
If you’ve ever relied on a dashboard and assumed it covered all
your bases, now’s the time to challenge that comfort.
Surface-level phishing stats don’t tell the real story.
Defender’s deeper data adds missing context—those click logs,
near misses, and automated investigations fill in the gaps that
simple numbers always leave behind. When you start with richer
signals and build a dashboard framework that can survive real
change, you end up with a tool that warns you, not just informs
you. Ready to see dashboards actually drive action? Subscribe and
drop your toughest Power BI security questions in the comments
below.
Get full access to M365 Show - Microsoft 365 Digital Workplace
Daily at m365.show/subscribe