Spectre / Meltdown

These guys are a real pain……

So here we are, just a couple of weeks after the Spectre / Meltdown debacle welcomed all IT professionals into the New Year.

Many of us have been busy patching as we get the releases, and already the news stories have popped up with scary tales.

AMD were initially cocky towards Intel when the vulnerabilities were announced, only to then admit they were vulnerable too. On top of this, it then appeared that some of the patches were unduly nasty to AMD-based devices.

Schadenfreude for Intel there?

Also, when trying to patch Windows, we discovered that the slated patch in Windows Update never appeared. Yes, I know all about the registry hack (there is a quick sketch of it after the list below), and that qualified AV products are supposed to apply it for me. Well, in reality, for me it goes like this:

  • Run a Windows Update scan without the reg key – result: only updates up to Dec 2017 appear
  • Apply the reg key (or a new version of the AV product that auto-applies it)
  • Run Windows Update again – result: Jan 2018 updates appear – yippee!!!
  • Not so fast: look at the KB number – the Spectre / Meltdown patch has NOT been delivered!!!
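
For reference, here is the registry hack itself as a minimal sketch – it sets the QualityCompat value that the January 2018 updates check for before offering themselves (run as Administrator on the target box):

    # Sketch: set the QualityCompat registry value that the January 2018
    # Windows updates look for before they will offer themselves.
    # Run as Administrator on the Windows server you are patching.
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
    VALUE_NAME = "cadca5fe-87d3-4b96-b7fb-a231484277cc"  # value name Microsoft specified

    # Create the key if it does not exist, then set the REG_DWORD value to 0.
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 0)
        print("QualityCompat flag set - rescan Windows Update")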

This happened on every Windows 2012 / 2012R2 system we inspected, so we had to fetch the patch and install it manually after applying all the other patches from Windows Update.
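
If you want to confirm whether a particular update actually landed, one rough way is to query the installed hotfixes – a sketch below, where the KB number is only a placeholder and you should substitute the correct January 2018 KB for your OS:

    # Sketch: check whether a given KB is present on this Windows box.
    # "KB4056898" is only a placeholder - substitute the KB for your OS.
    import subprocess

    KB_TO_CHECK = "KB4056898"

    # "wmic qfe get HotFixID" lists installed hotfixes on Windows Server 2012/2012R2.
    output = subprocess.check_output(["wmic", "qfe", "get", "HotFixID"],
                                     universal_newlines=True)
    installed = {line.strip() for line in output.splitlines() if line.strip()}

    if KB_TO_CHECK in installed:
        print(KB_TO_CHECK + " is installed")
    else:
        print(KB_TO_CHECK + " is MISSING - install it manually")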

We also run ManageEngine Patch Manager on our systems. This was even worse. It correctly identified that the patch was missing, but refused to install it even with the registry key in place. Then, when we installed it manually and confirmed it was there, the next scan STILL reported it as missing!!!

WTF is that all about? I found little comment on this anywhere online, except one article saying AWS had reported the same problem with patches not being delivered automatically through Windows Update. So why isn’t it a big issue for everybody else?

I haven’t even raised it with ManageEngine yet; I’m already eyeing the whiskey bottle as it is……. Meltdown indeed.

It gets worse……

We are Intel based, so there was never any doubt we needed the patches on our hardware servers, and as we run VMware it is essential that the underlying hypervisor gets the patches too. This is to stop potential cross-guest VM contamination even if the guest OS is patched – critical in a multi-tenant hosting environment.

Not ones to rush in, we started with our test systems first to make sure all was OK, which it seemed to be….. until today.

We were just embarking on our live fixes when this KB popped up in a security advisory from VMware: https://kb.vmware.com/s/article/52345

Now, the long and the short of it is that if you applied the VMware fixes and you have Haswell or Broadwell CPUs, you may just have put a time bomb under your entire VMware cluster – because the microcode supplied by Intel seems to have a bug on certain of their CPUs.

I mean, FFS!

For those already too weary from patching to read the linked KB (though I advise you do), I’ll summarise:

  1. STOP ALL PATCHING OF YOUR VMWARE HOSTS IMMEDIATELY!!!
  2. Read number 1 again, in case you are not sure
  3. Check what processors you have and see if they are in the family groups highlighted in bold in the table below (there is a quick check sketch after the table)
  4. If you have already patched your hosts with the latest VMware patches containing the VMSA-2018-0004 fixes:
    • DO NOT POWER CYCLE ANY RUNNING VM ON THOSE SYSTEMS
    • Add this line to the /etc/vmware/config file on each host (see the sketch after this list): cpuid.7.edx = "----:00--:----:----:----:----:----:----"
    • This does not require a host reboot
    • This must be removed when a proper working microcode patch appears
    • Power cycle running VMs on the hosts when appropriate
    • My opinion: if you haven’t power cycled them since first applying the patch (a guest reboot doesn’t count), you should be OK anyway
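
For what it’s worth, a minimal sketch of applying that workaround line follows – assuming you are happy to do it from the ESXi shell (which ships a Python interpreter) rather than with a text editor; it only appends the line if it isn’t already present:

    # Sketch: append the VMware KB 52345 workaround line to /etc/vmware/config
    # if it is not already there. Assumes it is run on the ESXi host itself.
    CONFIG_PATH = "/etc/vmware/config"
    WORKAROUND = 'cpuid.7.edx = "----:00--:----:----:----:----:----:----"'

    with open(CONFIG_PATH) as f:
        lines = [line.rstrip("\n") for line in f]

    if WORKAROUND not in lines:
        with open(CONFIG_PATH, "a") as f:
            f.write(WORKAROUND + "\n")
        print("Workaround added - power cycle VMs when appropriate")
    else:
        print("Workaround already present - nothing to do")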

Table 1:

Note: processors impacted by the Intel microcode sightings are highlighted in bold in the VMware KB’s version of this table.

VCG Processor Series/Family | Encoded CPUID (Family.Model.Stepping) | Processor SKU Stepping | Microcode Revision
Intel Xeon E3-1200-v3; Intel i3-4300; Intel i5-4500-TE; Intel i7-4700-EQ | 0x000306C3 | C0 | 0x00000023
Intel Xeon E5-1600-v2; Intel Xeon E5-2400-v2; Intel Xeon E5-2600-v2; Intel Xeon E5-4600-v2 | 0x000306E4 | C1/M1/S1 | 0x0000042A
Intel Xeon E5-1600-v3; Intel Xeon E5-2400-v3; Intel Xeon E5-2600-v3; Intel Xeon E5-4600-v3 | 0x000306F2 | C0/C1, M0/M1, R1/R2 | 0x0000003B
Intel Xeon E7-8800/4800-v3 | 0x000306F4 | E0 | 0x00000010
Intel Xeon E3-1200-v4 | 0x00040671 | G0 | 0x0000001B
Intel Xeon E5-1600-v4; Intel Xeon E5-2600-v4; Intel Xeon E5-4600-v4 | 0x000406F1 | B0/M0/R0 | 0x0B000025
Intel Xeon E7-8800/4800-v4 | 0x000406F1 | B0/M0/R0 | 0x0B000025
Intel Xeon Gold 6100/5100, Silver 4100, Bronze 3100 (Skylake-SP) Series | 0x00050654 | H0 | 0x0200003A
Intel Xeon Platinum 8100 (Skylake-SP) Series | 0x00050654 | H0 | 0x0200003A
Intel Xeon D-1500 | 0x00050663 | V2 | 0x07000011
Intel Xeon E3-1200-v5 | 0x000506E3 | R0/S0 | 0x000000C2
Intel Xeon E3-1200-v6 | 0x000906E9 | B0 | 0x0000007C
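
If you are not sure which of those family groups your hosts fall into, here is a rough sketch of deriving the encoded CPUID signature from /proc/cpuinfo inside a Linux guest running on the host, to compare against the "Encoded CPUID" column above:

    # Sketch: compute the encoded CPUID signature (as listed in the table above)
    # from /proc/cpuinfo inside a Linux guest on the ESXi host.
    def encoded_cpuid(family, model, stepping):
        # CPUID leaf 1 EAX layout: stepping[3:0], model[7:4], family[11:8],
        # extended model[19:16], extended family[27:20].
        ext_family = max(family - 15, 0)
        base_family = min(family, 15)
        ext_model = model >> 4
        base_model = model & 0xF
        return (ext_family << 20) | (ext_model << 16) | (base_family << 8) | (base_model << 4) | stepping

    info = {}
    with open("/proc/cpuinfo") as f:
        for line in f:
            if ":" in line:
                key, _, value = line.partition(":")
                info.setdefault(key.strip(), value.strip())

    sig = encoded_cpuid(int(info["cpu family"]), int(info["model"]), int(info["stepping"]))
    print("Encoded CPUID: 0x%08X" % sig)   # e.g. 0x000306F2 for an E5-2600-v3 (Haswell)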

Really, considering the likelihood of actually being affected by the vulnerability any time soon, especially for non-internet-facing internal systems, this mad panic round of patching seems more dangerous than the vulnerability itself.

This latest debacle between Intel and VMware just takes the dog’s biscuit. We have now come to a grinding halt on our patching run, and have no idea when we can move forward.

What a year, and it’s still bloody January………