
L1TF Vulnerability (L1 Terminal Fault) and VMware





I would like to inform you about an important issue: the Intel L1 Terminal Fault (L1TF) vulnerabilities, which have a high impact on vSphere infrastructure. The issue was announced at 00:00 local time today (10:00 AM PDT).


This new class of vulnerabilities can occur on current and past Intel processors (from at least 2009 - 2018) when affected Intel microprocessors are speculating beyond an unpermitted data access.
By continuing the speculation in these cases, the affected Intel microprocessors expose a new side channel for attack, allowing a malicious VM to infer data belonging to the hypervisor and to other VMs running on the same core.
The most severe of the three vulnerabilities (CVE-2018-3646: L1 Terminal Fault – VMM) impacts all hypervisors running on x86 Intel CPUs, including VMware vSphere, VMware Workstation and VMware Fusion. As a consequence, our services that use these products (including VMware Cloud on AWS and VMware Horizon Cloud), and our VMware Cloud Provider Program partner environments are impacted.


As part of the August 14th disclosure by Intel, three vulnerabilities have been named:

  1. CVE-2018-3646 (L1 Terminal Fault - VMM)
Mitigation of CVE-2018-3646 requires Hypervisor-Specific Mitigations for hosts running on Intel hardware.

  2. CVE-2018-3620 (L1 Terminal Fault - OS)
Mitigation of CVE-2018-3620 requires Operating System-Specific Mitigations.

  3. CVE-2018-3615 (L1 Terminal Fault - SGX)
CVE-2018-3615 does not affect VMware products and/or services. See KB54913 for more information.



Action Plan:

CVE-2018-3646 (L1 Terminal Fault – VMM): This vulnerability impacts all hypervisors running on x86 Intel CPUs.

CVE-2018-3646 has two currently known attack vectors which will be referred to as "Sequential-Context" and "Concurrent-Context."
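Because the Concurrent-Context vector relies on two contexts executing simultaneously on the logical processors of a Hyper-Threading-enabled core, a useful first check is whether Hyper-Threading is active on the host. A minimal sketch from the ESXi shell (output fields may vary slightly by ESXi version):

# Show the host CPU topology and Hyper-Threading status
esxcli hardware cpu global get

Look at the "Hyperthreading Active" and "Hyperthreading Enabled" fields in the output; if Hyper-Threading is disabled, only the Sequential-Context vector applies.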





Reference:           https://kb.vmware.com/s/article/55806

Patches for <customer>

VMware Product | Product Version | Running On | Severity  | Replace with / Apply Patch                                            | Mitigation/Workaround
VC             | 6.7             | Any        | Important | 6.7.0d                                                                | None
VC             | 6.5             | Any        | Important | 6.5u2c                                                                | None
ESXi           | 6.7             | Any        | Important | ESXi670-201808401-BG*, ESXi670-201808402-BG**, ESXi670-201808403-BG* | None
ESXi           | 6.5             | Any        | Important | ESXi650-201808401-BG*, ESXi650-201808402-BG**, ESXi650-201808403-BG* | None
ESXi           | 6.0             | Any        | Important | ESXi600-201808401-BG*, ESXi600-201808402-BG**, ESXi600-201808403-BG* | None
ESXi           | 5.5             | Any        | Important | ESXi550-201808401-BG*, ESXi550-201808402-BG**, ESXi550-201808403-BG* | None

Notes: *These patches mitigate only the Sequential-Context attack vector: a malicious VM can potentially infer recently accessed L1 data of a previous context (hypervisor thread or other VM thread) on either logical processor of a processor core.

**These patches include microcode updates required for mitigation of the Sequential-context attack vector. This microcode may also be obtained from your hardware OEM in the form of a BIOS or firmware update.
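As a rough illustration of applying one of the ESXi patch bundles from the ESXi shell, the sketch below assumes a downloaded offline bundle staged on a datastore; the datastore path and bundle file name are placeholders, not the exact names for your environment:

# Place the host in maintenance mode before patching
esxcli system maintenanceMode set --enable true

# Apply the downloaded offline patch bundle (path and file name are placeholders)
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi670-201808001.zip

# Verify that the esx-base and cpu-microcode VIBs were updated
esxcli software vib list | grep -E 'esx-base|cpu-microcode'

# Reboot the host so the new hypervisor code and microcode take effect
reboot

vSphere Update Manager is the usual way to roll these patches out at scale; the esxcli steps are shown only to make the individual actions explicit.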

Concurrent-Context attack vector: a malicious VM can potentially infer recently accessed L1 data of a concurrently executing context (hypervisor thread or other VM thread) on the other logical processor of a Hyper-Threading-enabled processor core. For mitigation of this vector, refer to KB55806: https://kb.vmware.com/s/article/55806
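For reference, KB55806 describes enabling the ESXi Side-Channel-Aware Scheduler via the VMkernel.Boot.hyperthreadingMitigation advanced option once the patches above are installed. A minimal sketch from the ESXi shell (confirm the exact procedure for your ESXi version against the KB):

# Check the current state of the scheduler mitigation (FALSE = not enabled)
esxcli system settings kernel list -o hyperthreadingMitigation

# Enable the Side-Channel-Aware Scheduler; a host reboot is required to take effect
esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE

# Reboot the host to activate the new scheduler
reboot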

Important note: Please apply these patches and mitigations in a test environment before applying them to production, because enabling the mitigations may impact system performance.

CVE-2018-3620 (L1 Terminal Fault - OS): Operating System-Specific Mitigations

VMware has investigated the impact CVE-2018-3620 may have on virtual appliances. Details on this investigation including a list of unaffected virtual appliances can be found in KB55807.

Products that ship as an installable Windows or Linux binary are not directly affected, but patches may be required from the vendor of the operating system on which these products are installed. VMware recommends contacting your third-party operating system vendor to determine the appropriate actions for mitigation of CVE-2018-3620. This issue may also be applicable to customer-controlled environments running in a VMware SaaS offering; review KB55808.
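As a quick illustration on the guest side, Linux kernels that include the L1TF reporting interface expose the guest's mitigation status under sysfs, which can help confirm whether the operating-system vendor's patches are active (the exact wording of the output depends on the kernel version):

# Report the guest kernel's view of its L1TF mitigation status
cat /sys/devices/system/cpu/vulnerabilities/l1tf

# List all reported speculative-execution vulnerabilities for comparison
grep . /sys/devices/system/cpu/vulnerabilities/*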


