20 years of Patch Tuesday: it’s time to look outside the Windows when fixing vulnerabilities

For two decades we have been patching our Windows machines every second Tuesday of the month, devoting time and resources to testing and reviewing updates that are generally not rolled out until they have been validated and confirmed to do no damage. That may be a reasonable approach for key equipment with no backup, but is the process still worthwhile in this day and age of phishing and zero-days, or should those resources and security dollars be reprioritized?

Twenty years after Microsoft first introduced Patch Tuesday, I’d argue that we need to move some of our resources away from worrying so much about Windows systems and instead review everything else on our networks that needs firmware updates and patches. From edge devices to CPU microcode, nearly everything in a network needs to be monitored for security patches and updates. Patching teams should still pay attention to Microsoft’s Patch Tuesday, but it’s time to add every other vendor’s releases to the schedule. I guarantee you that our attackers know more about the patches we need than we do.
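One lightweight way to make that broader schedule concrete is to pull a public exploited-vulnerabilities feed and see which of your non-Microsoft vendors show up in it. The sketch below is a minimal Python example, assuming the current URL and field names of CISA’s Known Exploited Vulnerabilities JSON feed (both could change), that counts entries by vendor so a patching team can see who else needs attention this month.

    # Minimal sketch: group known-exploited vulnerabilities by vendor.
    # Assumes the CISA KEV feed URL and JSON field names shown here; verify
    # both against cisa.gov before relying on this in a patching workflow.
    import json
    import urllib.request
    from collections import Counter

    KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        feed = json.load(resp)

    vendors = Counter()
    for vuln in feed.get("vulnerabilities", []):
        vendor = vuln.get("vendorProject", "Unknown")
        if vendor.lower() != "microsoft":   # everything besides Patch Tuesday
            vendors[vendor] += 1

    for vendor, count in vendors.most_common(20):
        print(f"{vendor}: {count} actively exploited CVEs")

Even a rough tally like this tends to show how much of the actively exploited attack surface sits outside the Windows patching calendar.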

The plan for applying patches to workstations

First, let’s consider workstations. In a consumer setting, where the user typically has neither redundancy nor spare hardware, a blue screen of death or a failure after an update is installed means they are without computing resources. In a business setting, however, you should have plans and processes in place to deal with patching failures just as you would plan for recovery after a security incident.

There should be a plan in place for reinstalling, redeploying, or reimaging workstations and a similar plan to redeploy servers and cloud services should any issue occur. Where there are standardized applications, deploying updates should be automatic and done without testing.
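As one illustration of that automatic, test-free deployment for standardized applications, the sketch below wraps the Windows Package Manager (winget) in a small Python script. The winget command and flags are real; the log location and the decision to skip pre-deployment testing are assumptions made for this example rather than anything prescribed by Microsoft.

    # Minimal sketch: push all pending application updates without pre-testing,
    # keeping a log so a failed update can be traced and rolled back later.
    # Assumes winget is installed; the log path is an arbitrary example choice.
    import datetime
    import os
    import subprocess

    LOG_PATH = r"C:\PatchLogs\app-updates.log"   # hypothetical location
    os.makedirs(os.path.dirname(LOG_PATH), exist_ok=True)

    cmd = [
        "winget", "upgrade", "--all", "--silent",
        "--accept-package-agreements", "--accept-source-agreements",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)

    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(f"--- {datetime.datetime.now().isoformat()} ---\n")
        log.write(result.stdout)
        if result.returncode != 0:
            log.write(f"winget exited with code {result.returncode}\n{result.stderr}\n")

The point of the log is not testing; it is making the inevitable occasional failure easy to trace back to a specific update.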

Unanticipated side effects should trigger a standard process: either uninstall the deployed update and defer it to the following month (on the assumption that the vendor will have found and fixed the issue by then) or, if the failure is catastrophic, reimage and redeploy the operating system. Testing for Windows workstations and servers should be kept to a minimum. The goal for these systems is to have a plan in place to deal with any failure, conserving resources for elsewhere in the network.
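For the rollback half of that process, Windows updates delivered as KB packages can be removed with the built-in wusa.exe utility. The sketch below is a minimal Python wrapper around it; the KB number is a placeholder you would replace with the update that actually caused the problem.

    # Minimal sketch: uninstall a problem update by KB number, then defer it.
    # KB_NUMBER is a placeholder, not a real recommendation; /uninstall,
    # /kb, /quiet, and /norestart are standard wusa.exe switches.
    import subprocess

    KB_NUMBER = "5000000"   # hypothetical: replace with the offending update

    result = subprocess.run(
        ["wusa.exe", "/uninstall", f"/kb:{KB_NUMBER}", "/quiet", "/norestart"],
        capture_output=True,
        text=True,
    )

    if result.returncode == 0:
        print(f"KB{KB_NUMBER} removed; hold it for next month's deployment cycle.")
    else:
        print(f"wusa exited with code {result.returncode}; escalate to reimage and redeploy.")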

Today’s attacks call for better monitoring and logging

Testing before the deployment of patches should be reserved for those systems that cannot be quickly redeployed or reimaged. Some systems, such as special-purpose equipment controlled by Windows machines in healthcare situations, should be treated with more care and testing and, if possible, isolated.
