Earlier this month, Red Hat announced that its Enterprise Linux operating system was threatened by the Spectre and Meltdown vulnerabilities and that updates to the Linux kernel and virtualization-related components, in combination with a CPU microcode update, were required.

SEE: Spectre and Meltdown: Cheat sheet (TechRepublic)

Given the severity involved, a flurry of patching commenced across all industries once these vulnerabilities came to light. Here are six important lessons I took away from the process:

1. Even vendors produce problem patches

Unfortunately, I can say firsthand that the initial patch provided by Red Hat produced massive problems on Dell PowerEdge 630 servers with Intel Xeon E5-2643 v4 3.40GHz processors.

After applying the January microcode_ctl-1.17-25.2.el6_9:1.x86_64 package (using Red Hat Satellite), I had nine dead servers on my hands that would not boot RHEL 6. In each case, when the server powered on, the operating system began to load and proceeded past GRUB (the GRand Unified Bootloader), but then failed and dropped back to the boot loader screen.
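
If you need to confirm exactly which microcode package and CPU microcode revision a server is actually running, a few quick checks along these lines help; this is a rough sketch, and the output will vary by system:

    # Which microcode_ctl package is installed?
    rpm -q microcode_ctl

    # Which microcode revision is the CPU actually running?
    grep -m1 microcode /proc/cpuinfo

    # Did the kernel load a new microcode image at boot?
    dmesg | grep -i microcode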

Don’t be fooled into thinking this was a Dell hardware problem related to the CPU. While willing to help, Dell stated it had no involvement in the issue. I updated the Dell BIOS firmware on a problem server with the latest release from Dell, to no avail. That let me quickly rule out the hardware, which made sense, since I’d applied Red Hat and not Dell patches to the system in question.

I contacted Red Hat support, and they worked out what had gone wrong, publicly owned up to it, and announced the solution to the mystery.

Fortunately, these were test servers used to deploy initial patches, which leads me to lesson number two.

2. Always test patches on disposable systems

Always test patches on non-critical servers before you roll them out to production; this is as elementary as tying one’s shoes before heading out the door. The test systems should, of course, be identical in every way to your production systems; otherwise, testing fixes on them would be useless.

I patched nine test systems that died, and that was okay, since they were there to serve as advance scouts, as it were; but my job would have been on the line had this issue occurred in production without my knowing about it in advance. Forewarned is forearmed. Test, test, test where it doesn’t count before you apply fixes where it does.
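
As a rough sketch of what that test-first pass can look like on a disposable RHEL box (the package names here match my microcode case; adapt them to whatever you are patching):

    # Record what is on the box before you touch it
    rpm -q kernel microcode_ctl > /root/pre-patch-versions.txt

    # Apply the updates on the test system only
    yum update -y microcode_ctl kernel

    # Reboot and make sure the server actually comes back
    reboot

    # After reboot: confirm the new versions took effect and the box is healthy
    uname -r
    rpm -q microcode_ctl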

3. Have a quick recovery plan in place for your systems

The problem with my servers was that I couldn’t boot the Red Hat operating system at all, not even with a rescue CD. I opted to reimage them, since we use kickstart for automated Red Hat Linux installations, and I wanted a clean sweep of all the problem systems to ensure the issue was sufficiently resolved. I then associated my newly rebuilt test systems with the yum repo containing the new patch and installed it, confirming it worked perfectly. Had I been forced to manually reinstall the operating system, drivers, packages, and settings on these systems, it would have taken a painful amount of time.
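
When a bad patch leaves the system bootable (unlike my situation), yum itself offers a quicker recovery path than a full rebuild. The following is only a sketch; the transaction ID shown is hypothetical, and you would use whatever yum history reports on your own system:

    # List recent transactions and find the one that applied the bad package
    yum history list microcode_ctl

    # Undo that specific transaction (the ID 42 is an example; use your own)
    yum history undo 42

    # Or drop back to the previous release, if it is still in your repo
    yum downgrade microcode_ctl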

Some other common-sense rules, as always, apply to this scenario.

SEE: Network security policy (Tech Pro Research)

4. Check for the latest patches the moment before you proceed

If you are preparing to patch your systems, always apply the most recent fixes available; check the vendor website continuously and update your repos as needed. Companies still working off the old microcode_ctl-1.17-25.2.el6_9:1.x86_64 package that was initially released are in for a nasty surprise, which is why it’s crucial to stay abreast of package changes.
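
On a RHEL 6 system with the yum security plugin installed, a quick pre-flight check along these lines (a sketch, not a complete procedure) shows whether newer packages or security errata have landed since you last synced your repos:

    # Refresh repo metadata so you are not looking at stale package lists
    yum clean expire-cache

    # Is a newer microcode_ctl or kernel available than what is installed?
    yum check-update microcode_ctl kernel

    # List outstanding security updates (requires yum-plugin-security on RHEL 6)
    yum --security check-update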

5. Educate yourself on the patch ramifications

Always read the vulnerability and patch advisories and the other details involved so you know what you’re getting into: what issues may occur, what the resolutions may be, and how to prepare in advance. The more hype surrounds a vulnerability and its associated fix, the more caution you must employ when proceeding; these patches are often pushed out rapidly with less testing than may be advisable.

SEE: How strong is your company’s cybersecurity plan? Take this quick survey and tell us. (Tech Pro Research)

6. Seek alternate solutions where necessary

Not every vulnerability whose patch is causing problems necessarily requires that patch to be applied right away (although most do).

Before I received the updated package from Red Hat, I considered whether there could be other options to protect my production systems. Sometimes you can employ a workaround by turning an unneeded process off, for instance, or disabling an unused setting. Follow vendor guidelines and determine your best course.
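
As one example of that kind of homework: Red Hat’s patched kernels exposed runtime switches for the Spectre and Meltdown mitigations through debugfs, and newer kernels report mitigation status under sysfs. Checking them looks roughly like this; the exact paths vary by kernel version, so treat this as a sketch rather than a recipe, and only toggle anything after reading the vendor guidance:

    # Make sure debugfs is mounted (it usually already is on RHEL)
    mount -t debugfs debugfs /sys/kernel/debug 2>/dev/null

    # 1 = mitigation enabled, 0 = disabled (names and paths vary by kernel build)
    cat /sys/kernel/debug/x86/pti_enabled
    cat /sys/kernel/debug/x86/ibrs_enabled
    cat /sys/kernel/debug/x86/ibpb_enabled

    # Newer kernels also summarize mitigation status here:
    ls /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null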

In short: research and understand your vulnerabilities, know your remediation process, and build your plan of attack.