Platform, Security, Workplace
Someone might be inside your Microsoft 365 environment right now. This guide shows you exactly how to detect a Microsoft 365 breach fast. Keep in mind that this article is based on my own experience and point of view.
Picture this: a colleague walks up to your desk on a Tuesday morning looking confused. People are replying to emails she never sent. Someone called her asking why she needed an urgent wire transfer. Her calendar has meetings she never scheduled. According to IBM’s research, the global average cost of a data breach reached $4.44 million in 2025. Stolen or compromised credentials remain the most common attack vector, and among the hardest to detect. Organizations often take months to identify and contain them.
Every minute you spend figuring out where to look is a minute the attacker spends going deeper. How quickly you act, and whether you know where to look, is what separates a contained incident from a full organizational crisis.
This guide gives you a clear, repeatable process to detect a Microsoft 365 breach in under 10 minutes, written in plain language without assuming you have a security operations team behind you. Keep in mind this is rapid triage for obvious compromises, not guaranteed full detection.
The Unified Audit Log is automatically enabled for most modern Microsoft 365 tenants, but older tenants or legacy configurations may still have it disabled, and admins must then enable it manually before Microsoft 365 activities are recorded. Without audit logging you have no historical activity data for your investigation: if it was never turned on, you are investigating blind, because there is no record of what happened.
Go to the Microsoft Purview compliance portal → Solutions → Audit. If you see a banner saying “Start recording user and admin activity,” click it immediately. Then come back to this guide.
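If the banner does not appear, or you prefer the command line, you can also check and enable unified audit logging from Exchange Online PowerShell. A minimal sketch, assuming the ExchangeOnlineManagement module is installed and you hold an appropriate admin role:

```powershell
# Connect to Exchange Online (prompts for admin credentials)
Connect-ExchangeOnline

# Check whether unified audit log ingestion is currently on
Get-AdminAuditLogConfig | Format-List UnifiedAuditLogIngestionEnabled

# If it returns False, turn it on
Set-AdminAuditLogConfig -UnifiedAuditLogIngestionEnabled $true
```

Note that it can take a while after enabling before events start appearing in searches, so do this now, long before you ever need the data.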
Microsoft 365 E5 includes Audit (Premium), which retains audit records for Exchange, SharePoint, and Microsoft Entra for one year. It also provides access to critical events like when users access, reply to, or forward mail items. You can purchase a 10-year retention add-on for the E5 license. Standard audit (E3 and below) retains most records for only 90–180 days depending on the service. The shorter your retention window, the smaller your investigation window, and that matters if an attacker has been sitting quietly in the account for weeks.
| License | Default audit log retention | Notes |
|---|---|---|
| E3 / Microsoft 365 Business Standard/Premium | 90–180 days (depends on service and tenant settings) | Called Standard Audit. Some critical activities are not recorded or retained long enough for thorough investigations. |
| E5 / Microsoft 365 E5 Compliance Add-on | 1 year by default, extendable to 10 years with add-on | Called Premium Audit. Includes all critical audit events, like mailbox forwarding, admin role changes, OAuth consent events, and SharePoint/OneDrive file activity. |
What audit events matter most for breach detection?
Some events essential for detecting a Microsoft 365 breach are only available with E5 / Premium audit:
On E3 or lower, these events may not appear in your audit log or retention may be too short to investigate older compromises.
| Audit Event / Action | Available in E3 / Standard Audit? | Available in E5 / Premium Audit? | Why it matters for breach detection |
|---|---|---|---|
| Set-Mailbox (mailbox forwarding changes) | ❌ Not retained / may be missing | ✅ Fully retained | Detects attackers setting up mailbox-level forwarding to exfiltrate emails silently |
| New-InboxRule (including hidden rules) | ✅ Partially visible | ✅ Fully visible | Detects hidden inbox rules that hide security alerts or forward emails |
| Consent to application (OAuth app consent events) | ❌ Not fully visible | ✅ Fully visible | Detects malicious third-party apps maintaining access even after password reset or MFA changes |
| Add member to role (directory role changes) | ❌ Limited or not retained | ✅ Fully retained | Detects attackers trying to escalate privileges or gain admin access |
| FileDownloaded / FileSyncDownloadedFull (SharePoint & OneDrive bulk downloads) | ❌ Partially retained | ✅ Fully retained | Detects large-scale data exfiltration |
| Sign-in logs & authentication events | ✅ Basic logs available | ✅ Full logs + risky sign-ins | Impossible travel, unusual hours, legacy auth usage, MFA bypass detection |
E3 tenants: Consider direct PowerShell checks for mailbox forwarding and OAuth apps. OAuth consent events in particular require PowerShell on E3.
E5 tenants: Premium audit gives full visibility with 1-year retention by default, with full correlation across Exchange, SharePoint, OneDrive, Teams, and Entra ID.
Before you start digging through logs, you need to know what you are looking for. Attackers are predictable: they follow patterns. Here are the five things they almost always do after compromising an account. Treat any of these as serious until proven otherwise.
This is the single most reliable indicator when you need to detect a Microsoft 365 breach. It is also the most consistently overlooked.
After gaining access, attackers almost always create inbox rules that run silently in the background. These rules serve two purposes: hiding security notifications from the legitimate user, and siphoning data to an external address. A rule that moves emails containing words like “password,” “invoice,” or “Microsoft security alert” into the Junk folder means the account owner never sees warnings about their own compromise. A rule that forwards every incoming email to a Gmail address gives the attacker a live feed of everything, without ever logging in again.
What makes this particularly dangerous is that attackers deliberately hide some of these rules so they do not appear in standard admin tools, leaving security teams blind to the most dangerous ones. You can only find them with PowerShell: running Get-InboxRule with the -IncludeHidden parameter forces a direct query of the mailbox store and surfaces the hidden rules.
Separate from inbox rules, Microsoft 365 allows forwarding at the mailbox level, a global setting that silently copies every email to an external address. This differs from an inbox rule because it operates at the infrastructure level: Outlook does not show it to the user, and it survives even after someone cleans up suspicious inbox rules. An attacker who configures mailbox-level forwarding can clean up their inbox rules, remove their registered MFA device, and disappear, while email continues flowing to their external address for weeks afterward.
A login from a country your colleague has never visited. A successful authentication at 3am when she has never worked outside business hours. Two successful logins from different continents within the same 20-minute window, physically impossible unless someone else used her credentials. The sign-in logs reveal these patterns immediately, and they are often the fastest way to confirm what you already suspect. Microsoft flags this as “impossible travel.”
Once an attacker registers their own authenticator app or phone number on a compromised account, they lock in permanent access. Even after the legitimate user resets their password, the attacker authenticates with their own MFA method and walks straight back in. An unrecognized MFA registration is not a maybe: treat it as a confirmed breach until proven otherwise.
This is the sign most IT administrators miss entirely, and attackers rely on that blind spot, hoping you won't detect a Microsoft 365 breach. OAuth applications are third-party tools that users can authorize to access their Microsoft 365 data. When an attacker tricks someone into clicking a carefully crafted link, that user may unknowingly grant a malicious application permission to read all their emails, access their files, or send mail on their behalf. The application then maintains access independently of the user's password or MFA settings. You can reset the password, revoke sessions, even wipe the device, and the OAuth application still holds a valid token in your tenant, quietly doing its job.
Here’s your exact investigation sequence to detect a Microsoft 365 breach. Start the clock!
Go to: Microsoft Entra admin center (entra.microsoft.com) → Users → All Users → Select the suspected user → Sign-in logs
Filter for Successful sign-ins first. You are looking for:
Where to go (for standard inbox rules): Microsoft 365 admin center → Active users → Select user → Mail tab → Email apps → Manage email apps
For hidden inbox rules (the ones attackers actually use), you need PowerShell:
Open PowerShell connected to Exchange Online and run:
Get-InboxRule -Mailbox "user@yourdomain.com" -IncludeHidden | Format-List Name, Enabled, RedirectTo, ForwardTo, ForwardAsAttachmentTo, DeleteMessage
The -IncludeHidden flag is not optional here. Without it, PowerShell returns only the rules visible in standard admin tools; the attacker's rules will not appear. Look for any rule that forwards to an external address, redirects mail to Junk or Notes folders, or deletes messages automatically. Any of these on an account the user did not configure themselves is a red flag.
To check for global mailbox forwarding:
Get-Mailbox -Identity "user@yourdomain.com" | Format-List ForwardingAddress, ForwardingSmtpAddress, DeliverToMailboxAndForward
If ForwardingSmtpAddress has any value at all, email is being forwarded externally. If DeliverToMailboxAndForward is set to False on top of that, the user is not even receiving copies of their own incoming mail; it is going exclusively to the attacker's address.
Forwarding rules and mailbox-level forwarding get most of the attention, but there is a third way attackers silently maintain access to a mailbox that almost nobody checks during an initial investigation: delegate permissions. Mailbox delegation allows one account to access another account’s mailbox directly, reading emails, sending messages on behalf of the owner, or both. It is a legitimate feature used by executive assistants and shared mailbox setups. It is also something an attacker with temporary access to an account can configure in seconds, and it will survive a password reset completely intact.
The dangerous part is that the legitimate user has no obvious indication this happened. There is no notification, no visible setting in Outlook, and no inbox rule to stumble across. The attacker’s account simply has quiet, persistent read access to everything that arrives. To check whether any unexpected accounts have been granted delegate access, run the following in Exchange Online PowerShell:
Get-MailboxPermission -Identity "user@yourdomain.com" | Where-Object {$_.AccessRights -eq "FullAccess" -and $_.IsInherited -eq $false}
Any result that is not inherited and does not belong to a legitimate admin or assistant should be removed immediately. Also check Send As permissions separately:
Get-RecipientPermission -Identity "user@yourdomain.com" | Where-Object {$_.AccessRights -eq "SendAs" -and $_.IsInherited -eq $false}
A Send As permission means someone can send emails that appear to come directly from this person: no forwarding rule needed, no trace in the compromised account's Sent Items folder. For business email compromise this is particularly valuable to an attacker, because the emails look completely authentic to the recipient. If you find anything unexpected in either of these outputs, remove it immediately and note the account name it was granted to; that account may itself be compromised or attacker-controlled.
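If either check returns an unexpected entry, the cleanup is a single command per grant. A sketch, where "suspicious@yourdomain.com" is a placeholder for the account you found:

```powershell
# Remove an unexpected FullAccess delegate
Remove-MailboxPermission -Identity "user@yourdomain.com" -User "suspicious@yourdomain.com" `
    -AccessRights FullAccess -Confirm:$false

# Remove an unexpected Send As grant
Remove-RecipientPermission -Identity "user@yourdomain.com" -Trustee "suspicious@yourdomain.com" `
    -AccessRights SendAs -Confirm:$false
```

Record what you removed and when; you will want that trail for the incident report.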
Where to go: Microsoft Purview compliance portal (compliance.microsoft.com) → Solutions → Audit → New Search
The Unified Audit Log records activity across Microsoft 365 services such as Exchange, SharePoint, OneDrive, and Teams. Authentication events and detailed login information are found in Entra ID sign-in logs. Set your date range to cover the last 30 days (or more if you suspect a longer intrusion). Enter the user’s email address. Leave the activities filter broad for the initial pass, you want to see everything.
Key events to search for when trying to detect a Microsoft 365 breach:
The audit log can be used to find the IP address of the computer used to access a compromised account, determine who set up email forwarding for a mailbox, and determine if a user deleted email items in their mailbox.
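You can run the same search from Exchange Online PowerShell, which is faster to iterate on than the portal. A sketch, assuming a connected Exchange Online session; the user address is a placeholder:

```powershell
# Pull the last 30 days of audit activity for the suspected user
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) `
    -UserIds "user@yourdomain.com" -ResultSize 5000 |
    ForEach-Object {
        # Details such as the client IP live in the AuditData JSON payload
        $data = $_.AuditData | ConvertFrom-Json
        [pscustomobject]@{
            Time      = $_.CreationDate
            Operation = $_.Operations
            ClientIP  = $data.ClientIP
        }
    } | Sort-Object Time
```

Add -Operations "New-InboxRule","Set-Mailbox" to the search if you want to jump straight to the forwarding-related events.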
Where to go: Microsoft Entra admin center → Users → All users → Select the user → Authentication methods
Go through every single registered authentication method on this account. Ask yourself for each one: did this person add this? Is this their phone number? Is this their authenticator app? Anything you cannot account for was registered by someone else. Also check the Devices registered to this account. An unfamiliar device, particularly one enrolled recently and running an operating system inconsistent with what that user normally uses, suggests the attacker registered their own machine, potentially to satisfy device compliance requirements in your Conditional Access policies.
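The same review can be scripted with Microsoft Graph PowerShell, which helps when you need to check several accounts at once. A sketch, assuming the Microsoft.Graph module is installed and you can consent to the listed scope:

```powershell
# Requires the Microsoft.Graph PowerShell module
Connect-MgGraph -Scopes "UserAuthenticationMethod.Read.All"

# List every registered authentication method; the @odata.type value in
# AdditionalProperties tells you whether it is a phone, authenticator app,
# FIDO2 key, or Temporary Access Pass
Get-MgUserAuthenticationMethod -UserId "user@yourdomain.com" |
    Select-Object Id, AdditionalProperties
```

Anything in that list the user cannot vouch for was registered by someone else.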
Where to go: Microsoft Entra admin center → Applications → Enterprise applications
Filter by “Users and groups” or look for recently added applications. You are looking for anything that was not deliberately installed by your IT team, particularly applications with broad permission scopes such as:
– Mail.Read or Mail.ReadWrite
– Files.ReadWrite.All
– Mail.Send
– Directory.ReadWrite.All
Not all OAuth applications carry the same level of risk, and knowing the difference helps you prioritize what to investigate first. When a regular user authorizes an application, that consent applies only to their own data: the app can access their mailbox or their files, but nobody else's. When an administrator grants consent on behalf of the entire organization, however, that single approval gives the application access to every user's data across the whole tenant. This is called admin consent, and it is significantly more dangerous in the wrong hands.
When you are reviewing Enterprise Applications under a suspected compromise, sort by consent type and look at admin-consented applications first. A malicious app with admin consent is not just a problem for one account; it is a problem for every account in your organization simultaneously. Any admin-consented application you cannot directly account for should be treated as the highest priority item in your entire investigation, ahead of everything else on this list.
To quickly identify which applications have been granted admin consent across your tenant, go to Microsoft Entra admin center → Applications → Enterprise applications → filter by “Admin consent” in the permissions column, or run the following:
Get-MgOauth2PermissionGrant -All | Where-Object {$_.ConsentType -eq "AllPrincipals"} | Select-Object ClientId, Scope
Grants with a ConsentType of AllPrincipals were approved tenant-wide by an admin. The ClientId is the application's service principal object ID; resolve it to a display name with Get-MgServicePrincipal -ServicePrincipalId.
For a more detailed view of exactly what permissions each application holds, the Microsoft Entra admin center provides a cleaner picture than PowerShell for most investigations: navigate to the specific application → Permissions, and compare the Admin consent tab with the User consent tab side by side.
Any application with these permissions that you cannot explain the origin of should be treated as malicious until proven otherwise. Check when it was authorized, which account authorized it, and whether that timestamp aligns with suspicious sign-in activity from step one.
Detection without response is just watching the damage happen. The moment you confirm a breach, move through these steps without stopping.
Step 1: Block the account
Microsoft 365 admin center → Active users → Select user → Block sign-in
This prevents any new authentication using these credentials. It does not disconnect active sessions; that is the next step. Keep in mind that blocking sign-in sets the account's AccountEnabled property to false in Entra ID, which may affect dynamic security groups that filter on that property.
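The same block can be applied from Microsoft Graph PowerShell, which is handy if you are already working in a terminal. A sketch, assuming the Microsoft.Graph module:

```powershell
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Disable the account so no new sign-in can succeed
Update-MgUser -UserId "user@yourdomain.com" -AccountEnabled:$false
```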
Step 2: Revoke all active sessions
Microsoft Entra admin center → Users → Select user → Revoke sessions
This invalidates every active refresh token the attacker holds. Any open session now requires re-authentication, which the sign-in block prevents. One caveat: access tokens already issued typically remain valid for up to an hour before they expire naturally, although exact lifetimes depend on policy and service. Administrators can shorten effective token lifetimes with Conditional Access policies, but this is advanced and not always configured by default. Move quickly through the remaining steps.
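Session revocation is also a single Graph PowerShell call, useful if you want to script steps 1 and 2 together. A sketch, assuming the Microsoft.Graph module and sufficient permissions:

```powershell
# Invalidates all refresh tokens and session cookies for the user,
# forcing every device and app to re-authenticate
Revoke-MgUserSignInSession -UserId "user@yourdomain.com"
```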
Step 3: Reset the password
Use a strong, randomly generated password the account has never used before. Do not send it via email; if the attacker is still reading the inbox during that one-hour residual window, they will intercept it. Deliver it through a password management tool such as 1Password instead.
Step 4: Delete the malicious inbox rules and remove forwarding
Using the PowerShell commands from earlier, delete every suspicious rule:
Remove-InboxRule -Mailbox "user@yourdomain.com" -Identity "RuleName"
Keep in mind that it is smart to export the existing rules before deleting anything, for example: Get-InboxRule -Mailbox "user@yourdomain.com" -IncludeHidden | Export-Csv .\inbox-rules-backup.csv
And clear external forwarding:
Set-Mailbox -Identity "user@yourdomain.com" -ForwardingSmtpAddress $null -DeliverToMailboxAndForward $false
Step 5: Remove suspicious MFA methods and devices
Once you’ve blocked the account and revoked sessions, the next step is to make sure the attacker can’t simply log back in using their own authentication methods. Go into the Microsoft Entra admin center, navigate to the affected user, and check their Authentication methods. Look at every phone number, authenticator app, or security key that’s registered. Ask yourself: did the user personally add this? Anything unfamiliar is a red flag. Also check if a Temporary Access Pass was created!
Remove any methods that don’t belong to the legitimate user, and then force them to re-register MFA from scratch. This ensures they’re starting clean with verified methods only.
Next, check registered devices. Attackers sometimes enroll their own machines to satisfy Conditional Access policies or maintain persistent access. Any device that looks unfamiliar, especially one added recently or running an operating system the user doesn't normally use, should be removed immediately. This step ensures that the attacker's devices are cut off entirely.
Cleaning up MFA and devices might feel tedious, but it’s one of the most important ways to lock a compromised account down quickly. Without this, the attacker could bypass your password reset and regain access in minutes.
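Removing attacker-registered methods can also be scripted with Graph PowerShell once you have the method IDs from the authentication methods listing. A sketch; the method IDs are placeholders you fill in from the listing output:

```powershell
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"

# Remove an unrecognized Microsoft Authenticator registration by its method Id
Remove-MgUserAuthenticationMicrosoftAuthenticatorMethod -UserId "user@yourdomain.com" `
    -MicrosoftAuthenticatorAuthenticationMethodId "<method-id>"

# Phone-based methods have their own cmdlet
Remove-MgUserAuthenticationPhoneMethod -UserId "user@yourdomain.com" `
    -PhoneAuthenticationMethodId "<method-id>"
```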
Step 6: Revoke suspicious OAuth app access
Even after resetting passwords and cleaning up MFA, there’s one more trap many organizations overlook: malicious OAuth applications. These are third-party apps a user might have unwittingly authorized to access their Microsoft 365 data. Once an attacker gets a token through one of these apps, they can continue reading emails, sending messages, or accessing files, all without touching the password.
To check for this, go to Enterprise Applications in Entra. Filter for recently added apps or those with broad permissions, such as Mail.ReadWrite, Mail.Send, Files.ReadWrite.All, or Directory.ReadWrite.All. For each app, ask: did IT authorize this, or did the user install it knowingly? Anything suspicious should be removed immediately.
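Removal can be scripted too. A sketch using Microsoft Graph PowerShell; the app display name is a placeholder, and the scopes shown are what these cmdlets typically require:

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All","DelegatedPermissionGrant.ReadWrite.All"

# Find the service principal for the suspicious app
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Suspicious Mail App'"

# Delete every delegated permission grant issued to that app
Get-MgOauth2PermissionGrant -Filter "clientId eq '$($sp.Id)'" |
    ForEach-Object { Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id }

# Disable the service principal so its tokens can no longer be redeemed
Update-MgServicePrincipal -ServicePrincipalId $sp.Id -AccountEnabled:$false
```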
Don’t forget to check whether other users in the organization authorized the same app. Attackers often capture multiple accounts in a single phishing campaign.
To prevent this from happening again, consider setting up alerts in Microsoft Purview or Microsoft Sentinel that notify you whenever a user grants an application access. For most organizations, OAuth consent should be a rare event; if an alert fires, treat it as urgent.
Step 7: Tell the user, and tell your team
The compromised account may still have sent emails that appear to come from a trusted colleague. People in your organization may have already replied to requests from the attacker, shared documents, or taken actions they should not have. Get the word out quickly, and make clear that anything sent from that account in the relevant time window should be treated with suspicion until verified.
Running through this checklist catches the majority of compromised accounts. But the attacks making headlines right now specifically bypass every check on this list.
The most notable example: a token theft campaign KnowBe4 researchers first identified in December 2025 does not steal passwords or use MFA fatigue. Instead, it tricks users into completing a completely legitimate Microsoft authentication, on the real Microsoft domain, with their real MFA method, and intercepts the OAuth token Microsoft issues after a successful login. The attacker never needs the user’s credentials. They receive a valid, authenticated token that grants full account access. Your sign-in logs show a successful MFA-completed login. Everything looks normal. Nothing is.
This is why configuring the right alerts before a breach happens matters as much as knowing how to investigate one after.
Set up Risky User alerts. Microsoft Entra ID Protection, available with a P2 license, scores every sign-in for suspicious characteristics and flags accounts it considers at risk. By default, those flags sit in a dashboard that nobody watches. Configure email alerts for medium- and high-risk users so that the moment Microsoft's systems detect something wrong, your team knows about it within minutes rather than days.
Restrict device code flow. Device code authentication exists for legitimate purposes, allowing devices without browsers, like printers or shared screens, to authenticate with Microsoft 365. It is also the exact mechanism that token theft campaigns exploit. If your organization does not use shared devices that require this flow, disable it through a Conditional Access policy. Note that some shared devices like conference room systems or lab machines may require exceptions.
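The blocking policy can be created in the Entra portal, or sketched in Graph PowerShell as below. This assumes the authenticationFlows condition is available in your Graph API version; the policy starts in report-only mode so you can verify nothing legitimate breaks before enforcing it:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Block device code flow"
    state       = "enabledForReportingButNotEnforced"   # report-only first
    conditions  = @{
        users               = @{ includeUsers = @("All") }
        applications        = @{ includeApplications = @("All") }
        authenticationFlows = @{ transferMethods = "deviceCodeFlow" }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```

Review the report-only results for a week or two, carve out exceptions for any legitimate shared devices, then switch the state to enabled.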
Alert on OAuth consent events. Create an alert in Microsoft Purview or Microsoft Sentinel that fires every time any user in your organization grants permissions to an application. A typical user should almost never do this. When the alert fires, treat it as a priority investigation until you can confirm the application is legitimate.
The 10-minute window in this title is a target, not a guarantee. The first time you run through this process it will probably take longer; you are learning where things live and what normal looks like. That is exactly why you should practice it before you need it. Run through this checklist on a known-good account. Understand what your sign-in logs look like when nothing is wrong. Know what a clean inbox rule list looks like. Learn where the enterprise applications page is before 9am on a crisis morning.
Because when the moment comes to detect a Microsoft 365 breach, and for most organizations it eventually does, you will not have time to figure out the basics. You will only have time to act.
Interested in the difference between passkeys and security keys? I covered exactly that in a previous article; check it out.