
When Microsoft ships a Patch Tuesday with six zero-days already being exploited, that is not “patching.” That is incident response with a nicer name.

And no, your users do not care that it’s “Patch Tuesday.” They care that their finance app works and the printer prints. Meanwhile, attackers care that one person clicks one thing and now they have code execution or a privilege escalation chain.

February 2026 was a good example of why the old patch playbook is dead. Several of the exploited bugs were security feature bypasses: SmartScreen, MSHTML, Office. Those are the exact places attackers live because they turn phishing into a reliable delivery mechanism.

Here’s how I triage weeks like this in the real world, when you have a normal IT team, normal staffing, and a business that still expects “no downtime.”

The part everyone misses: exploited does not always mean “internet worm”

People see “zero-day exploited in the wild” and assume it’s a remote unauthenticated RCE on a public-facing server. Sometimes it is. A lot of the time it’s worse for most businesses: it’s a client-side bypass that makes phishing work again.

In February’s batch, the exploited vulnerabilities Microsoft called out fell squarely into those categories:

A SmartScreen security feature bypass, so a crafted file can skip the warning prompt users are trained to heed.

An MSHTML bypass, so rendered HTML in a page or email becomes a delivery vehicle again.

Office security feature bypasses, so a malicious document sails past the guardrails that are supposed to stop it.

If you run Windows in an enterprise, you do not get to ignore these because “it requires user interaction.” Your entire business runs on user interaction. That’s the point.

My triage rule: stop arguing about CVSS and start asking “what’s the kill chain?”

CVSS is useful for dashboards and compliance checklists. It is not how attackers plan their week.

When I’m triaging, I want to know where the vulnerability sits in the kill chain:

Initial access enablers: SmartScreen bypasses, MSHTML weirdness, Office bypasses. These are the ones that make a phishing campaign convert.

Privilege escalation and persistence helpers: Local elevation of privilege bugs in DWM, RDS, Win32k, whatever. These are the ones attackers chain after they land.

Both categories matter. The “initial access” stuff is how you keep new infections from happening. The “post-compromise” stuff is how you keep a single compromised user from turning into a domain takeover.

If you have to pick one to patch first, patch the things that are easiest to trigger at scale. Phishers do not need to be inside your network to take their shot.
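
To make that ordering concrete, here is a minimal Python sketch that sorts a patch queue by exploitation status and kill-chain role before CVSS ever enters the picture. The CVE identifiers and tags are hypothetical placeholders, not February’s actual list.

```python
# Toy prioritization sketch: order a patch queue by "exploited" status and kill-chain
# role, falling back to CVSS only as a tiebreaker. Entries are hypothetical placeholders.
from dataclasses import dataclass

# Lower rank = patch sooner. Initial-access enablers outrank post-compromise helpers
# because they are the easiest to trigger at scale (phishing).
ROLE_RANK = {"initial_access": 0, "post_compromise": 1, "other": 2}

@dataclass
class Vuln:
    cve: str
    role: str        # "initial_access", "post_compromise", or "other"
    exploited: bool  # actively exploited in the wild
    cvss: float

queue = [
    Vuln("CVE-0000-0001", "post_compromise", True, 7.8),  # hypothetical local EoP
    Vuln("CVE-0000-0002", "initial_access", True, 5.4),   # hypothetical SmartScreen-style bypass
    Vuln("CVE-0000-0003", "other", False, 9.8),           # scary score, no known exploitation
]

# Sort: exploited first, then by kill-chain role, then by CVSS.
queue.sort(key=lambda v: (not v.exploited, ROLE_RANK.get(v.role, 2), -v.cvss))

for v in queue:
    print(f"{v.cve}  exploited={v.exploited}  role={v.role}  cvss={v.cvss}")
```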

What I actually do in the first 24 hours

I’m going to keep this practical. Not “make sure you have a process.” Everyone has a process. The process is usually “panic and yell at SCCM.”

1) Patch a small pilot ring fast, then push broad

I want 10 to 25 machines patched within a few hours. IT staff, a couple power users, maybe someone in finance and someone in operations. Real software, real workflows.

If that pilot ring is clean, I go wide. If it breaks something, I still go wide, but with mitigations. Because leaving exploited zero-days unpatched is also “breaking something”; you just do not see it until it’s 2:00 AM and you’re talking to your insurance carrier.
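
For the pilot ring itself, it helps to confirm the update actually landed instead of trusting the deployment console. The sketch below is illustrative only: it assumes you run it from an admin workstation, the pilot hosts are reachable for remote WMI queries (which is what PowerShell’s Get-HotFix -ComputerName uses), and the hostnames and KB number are placeholders for your own.

```python
# Minimal pilot-ring verification sketch: ask each pilot machine whether a given KB
# is installed, via PowerShell's Get-HotFix over remote WMI. Hostnames and the KB
# number below are hypothetical placeholders.
import subprocess

PILOT_RING = ["it-ws01", "fin-ws02", "ops-ws03"]   # hypothetical pilot machines
TARGET_KB = "KB5034123"                            # hypothetical update ID

def has_kb(host: str, kb: str) -> bool:
    """Return True if the remote host reports the KB as installed."""
    cmd = [
        "powershell", "-NoProfile", "-Command",
        f"(Get-HotFix -ComputerName {host} | Select-Object -ExpandProperty HotFixID) -join ','",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    return kb.lower() in out.stdout.lower()

for host in PILOT_RING:
    try:
        status = "patched" if has_kb(host, TARGET_KB) else "MISSING"
    except Exception as exc:  # unreachable host, access denied, timeout, etc.
        status = f"error: {exc}"
    print(f"{host:<12} {status}")
```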

2) Reduce the blast radius while patches roll out

Patching takes time, even when you move fast. So I tighten controls that specifically hurt the bypass-and-phish style of attack.

Examples that actually move the needle:

Tighten mail and web filtering for the delivery formats these campaigns ride in on, LNK and HTML attachments in particular.

Keep Office applications from spawning script interpreters and other child processes on endpoints that have not taken the patch yet.

Verify that SmartScreen and Mark-of-the-Web enforcement is actually on, not just assumed.

These aren’t magic. They are time-buying measures that break common tradecraft while your patching catches up.
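
If Microsoft Defender is your endpoint control, it is worth auditing what is actually enforced before you lean on it. The read-only sketch below is my own illustration, assuming Defender and PowerShell’s Get-MpPreference are available; map the rule GUIDs it prints to names using Microsoft’s attack surface reduction documentation.

```python
# Read-only audit sketch: print which Defender attack surface reduction (ASR) rules
# are configured and in what mode. Assumes Microsoft Defender is the active AV and
# PowerShell's Get-MpPreference cmdlet is available on the host.
import subprocess

PS = (
    "$p = Get-MpPreference; "
    "for ($i = 0; $i -lt $p.AttackSurfaceReductionRules_Ids.Count; $i++) { "
    "  '{0} => {1}' -f $p.AttackSurfaceReductionRules_Ids[$i], $p.AttackSurfaceReductionRules_Actions[$i] "
    "}"
)

out = subprocess.run(
    ["powershell", "-NoProfile", "-Command", PS],
    capture_output=True, text=True, timeout=60,
)

# Action values per Defender's documented mapping: 0 = disabled, 1 = block, 2 = audit, 6 = warn.
print(out.stdout or "No ASR rules configured (or Defender is not the active AV).")
```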

3) Hunt for the boring indicators: LNKs, HTML, and weird execution chains

If you have EDR, use it. Stop searching for the latest fancy malware name and start searching for what these bugs enable.

Things I look for when Windows Shell, MSHTML, or Word bypasses are being exploited:

Freshly delivered LNK or HTML files in download and mail-attachment paths, especially ones that have lost their Mark-of-the-Web.

Office or Outlook spawning cmd, PowerShell, wscript, or mshta, the kind of execution chain that almost never shows up in legitimate workflows.

New scheduled tasks, run keys, or dropped scripts appearing right after one of those chains fires.

This is where most IT teams get stuck because they do not have time or the query language is annoying. I get it. But this is also where you can catch an intrusion before it turns into a ransomware weekend.
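
As a starting point on a single suspect host, even a crude stdlib-only check helps. The sketch below is illustrative: it lists recently created delivery-style artifacts in the user’s Downloads folder and notes whether each still carries a Mark-of-the-Web (Zone.Identifier) stream. The file types and the seven-day window are my own arbitrary choices, not an indicator list from Microsoft.

```python
# Single-host triage sketch (Windows, NTFS, stdlib only): flag recent delivery-style
# artifacts in Downloads and whether they carry a Zone.Identifier (Mark-of-the-Web)
# stream. Extensions and the 7-day window are arbitrary illustrative choices.
import time
from pathlib import Path

SUSPECT_EXTS = {".lnk", ".html", ".htm", ".iso", ".img"}
CUTOFF = time.time() - 7 * 24 * 3600

def has_motw(path: Path) -> bool:
    """True if the file has a Zone.Identifier alternate data stream (NTFS only)."""
    try:
        with open(f"{path}:Zone.Identifier", "r") as ads:
            return "ZoneId" in ads.read()
    except OSError:
        return False

downloads = Path.home() / "Downloads"

for f in downloads.rglob("*"):
    if f.is_file() and f.suffix.lower() in SUSPECT_EXTS and f.stat().st_mtime >= CUTOFF:
        label = "MOTW present" if has_motw(f) else "NO MOTW"
        print(f"{label:<13} {f}")
```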

Where patch triage goes to die: “we can’t reboot servers”

Let’s talk about the real political constraint: reboots.

Exploited client-side bypasses push you to patch endpoints immediately. Exploited elevation-of-privilege vulnerabilities push you to patch everything, including the servers people are scared to touch.

Here’s my opinionated take: if you have Windows servers that you cannot patch and reboot in a predictable window, you do not have “high availability.” You have unmanaged risk.

Build a reboot schedule the business can live with, then enforce it. Otherwise you are just waiting for a chained exploit to prove the point for you.

Why this matters beyond February: the attacker ROI is insane

Security feature bypass bugs are attacker gold. If they can reliably skip SmartScreen prompts, bypass MOTW protections, or sneak past Office safeguards, they can crank out phishing at scale and let automation do the rest.

Pair that with a local privilege escalation like DWM or an RDS bug, and now the attacker does not need a sophisticated toolchain. They just need one user to interact with one file.

That’s why I take “exploited” seriously, even when the vulnerability sounds boring on paper.

My blunt recommendation

If you are still treating patching as a monthly compliance checkbox, you are going to keep losing to the same playbook. The only winning move is to make patching a muscle-memory activity, with two lanes:

A routine lane: the normal monthly cycle, with test rings, scheduled reboots, and no drama.

An emergency lane: exploited-in-the-wild bugs get patched in days, not at the next convenient maintenance window.

Most organizations think they have both lanes. In practice, they have one lane, and it’s a traffic jam.

If you want an outside set of eyes to validate what an attacker would chain together in your environment, that is where a pentest actually earns its keep. Not as a compliance artifact. As a reality check.

Want to Know What an Attacker Would Actually Do With This?

Zio Security can run a practical penetration test and show you where a real adversary can turn a single phish into domain-level impact. If you have been meaning to pressure-test your patching and endpoint controls, let’s talk.

Book a Call