Sunday, December 30, 2018

New Home Lab managed by containerized PowerCLI and RACADM

Christmas holidays are a perfect time to rebuild the home lab. I got a "Christmas present" from a longtime colleague; we have known each other since the times when we were both Dell employees. Thank you, Ondrej. He currently works for a local IT company (a Dell partner), and because they did a hardware refresh for one of their customers, I got from him four decommissioned, but still good enough, Dell PowerEdge R620 servers, each with a single populated CPU socket and 96 GB RAM. The perfect setup for a home lab, isn't it? My home lab environment is a topic for another blog post, but today I would like to write about the containerization of management CLIs (VMware PowerCLI and Dell RACADM), which will eventually help me automate home lab power off / on operations.

Before these new Dell servers, I had four Intel NUCs in my lab, which I am now replacing with the Dell PE R620s. Someone could argue that the Dell servers will consume significantly more electrical energy; however, it is not that bad. A single PE R620 server draws around 70-80 Watts. Yes, it is more than an Intel NUC, but only roughly 2 or 3 times more. Anyway, 4 x 80 Watts = 320 Watts, which still costs around 45 EUR per month, so I have decided to keep the servers powered off and spin them up only on demand. The Dell servers have out-of-band management (iDRAC7), so it is easy to start and stop them automatically via the RACADM CLI. To gracefully shut down all virtual machines, put the ESXi hosts into maintenance mode, and shut them down, I will leverage PowerCLI. I've decided to use one Intel NUC with ESXi 6.5 to keep some workloads up and running at all times. These workloads are the vCenter Server Appliance, Management Server, Backup Server, etc. All other servers can stay powered off until I need to do some tests or demos in my home lab.
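For the record, the back-of-envelope math behind that 45 EUR estimate, assuming an electricity price of roughly 0.20 EUR per kWh (my local rate, yours will differ):

0.32 kW x 24 hours x 30 days = ~230 kWh per month
230 kWh x 0.20 EUR/kWh = ~46 EUR per month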

I would also like to have RACADM and PowerCLI up and running to manage the Dell servers and vSphere via CLI or automation scripts. PowerCLI is available as an official VMware Docker image and there are also some unofficial RACADM Docker images available on Docker Hub, therefore I have decided to deploy Photon OS as a container host and run RACADM and PowerCLI in Docker containers.

In this blog post, I'm going to document the steps and gotchas from this exercise.

DEPLOYMENT

Photon OS is available on GitHub as an OVA, so deployment is very easy.

CHANGE ROOT PASSWORD

The first step after Photon OS deployment is to log in as root with the default password (the default password is "changeme", without the quotation marks) and change the root password.

CHANGE IP ADDRESS

By default, the IP address is assigned via DHCP. I want to use a static IP address, therefore I have to change the network settings. In Photon OS, the systemd-networkd service is responsible for the network configuration.

You can check its status by executing the following command:

systemctl status systemd-networkd

By default, systemd-networkd receives its settings from the configuration file 99-dhcp-en.network located in /etc/systemd/network/ folder.

Setting a Static IP Address is documented here.

I have created the file /etc/systemd/network/10-static-en.network with the following content:

==============================================
[Match]
Name=eth0

[Network]
DHCP=no
IPv6AcceptRA=no
Address=192.168.4.7/24
Gateway=192.168.4.254
DNS=192.168.4.4 192.168.4.20
Domains=home.uw.cz
NTP=time1.google.com time2.google.com ntp.cesnet.cz
==============================================

File permissions should be 644, which you can enforce with the command
chmod 644 /etc/systemd/network/10-static-en.network

The new settings are applied with the command
systemctl restart systemd-networkd
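To verify that the static configuration has been applied, the following commands should work on Photon OS (networkctl is part of systemd):

networkctl status eth0
ip addr show eth0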

CREATE USER FOR REMOTE ACCESS

It is always better to use a regular user instead of the root account, which has full administrative rights on the system. Therefore, the next step is to add my personal account

useradd -m -G sudo dpasek

-m creates the home directory, while -G adds the user to the sudo group.

Set a password for this user

passwd dpasek

The next step is to edit the sudoers file with visudo. Search for %sudo and remove the '#' from that line. After that, you can log in with that account and run commands as root with 'sudo'. Please note that sudo is not installed by default, therefore you have to install it yourself with a single command

tdnf install sudo

as described later in this post.
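For reference, after removing the '#', the uncommented line in the sudoers file typically looks like this (standard sudoers syntax):

%sudo ALL=(ALL) ALL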

DISABLE PASSWORD EXPIRATION

If you want to disable password expiration, use the chage command

chage -M 99999 root
chage -M 99999 dpasek
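The new password aging settings can be verified with

chage -l dpasek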

ALLOW PING

Photon OS blocks ICMP by default, therefore you cannot ping the host from outside. Ping is, IMHO, an essential network troubleshooting tool, therefore it should always be enabled. I do not think it is worth disabling it for the sake of better security. Here are the commands to enable ping ...

iptables -A INPUT -p ICMP -j ACCEPT
iptables -A OUTPUT -p ICMP -j ACCEPT

iptables-save > /etc/systemd/scripts/ip4save
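To double-check that the ICMP rules are in place, something like this should do:

iptables -L -n | grep -i icmp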

UPDATE OS OR INSTALL ADDITIONAL SOFTWARE

The Photon OS package manager is tdnf, therefore the OS update is done with the command ...

tdnf update

If you need to install additional software, you can search for it and install it.

I have realized there is no sudo in the minimal installation from the OVA, therefore if you need it, you can search for sudo

tdnf search sudo

and install it

tdnf install sudo

START DOCKER DAEMON

I'm going to use Photon OS as a Docker host for two containers (PowerCLI and RACADM), therefore I have to start the Docker daemon ...

systemctl start docker

To start the Docker daemon on boot, use the command:

systemctl enable docker

ADD USER TO DOCKER GROUP

To run the docker command without sudo, I have to add my Linux user to the docker group.

usermod -a -G docker dpasek
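Note that the group change takes effect only after a new login. Once logged in again, a quick check:

id dpasek
docker ps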

POWERCLI DOCKER IMAGE

I already wrote a blog post about how to spin up a PowerCLI Core container here. So let's quickly pull the PowerCLI Core image and instantiate a PowerCLI container.

docker pull vmware/powerclicore

Now, I can remotely log in (SSH) as a regular user (dpasek) and run any of my PowerCLI commands to manage my home lab environment.

docker run --rm -it vmware/powerclicore

Option --rm stands for "Automatically remove the container when it exits".

To work with PowerCLI, the following commands are necessary to initialize the PowerCLI configuration.

Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $true
Set-PowerCLIConfiguration -InvalidCertificateAction:ignore

The configuration persists within each container session; however, it disappears when the container is removed. Therefore, it is better to instantiate the container without the --rm option, set the PowerCLI configuration, keep the container in the system, and simply start the container next time to perform any other PowerCLI operation.

docker run -it -v "/home/dpasek/scripts/homelab:/tmp/scripts" --name homelab-powercli --entrypoint='/usr/bin/pwsh' vmware/powerclicore

The --name option is useful for setting the name of the instantiated container, because the name can be used to restart the container and continue with PowerCLI.

Inside the container, we can initialize the PowerCLI configuration, use any other PowerCLI commands and scripts, and eventually exit from the container back to the host and return later with the command

docker start homelab-powercli -i

With this approach, the PowerCLI configuration persists.

RACADM DOCKER IMAGE

Another image I will need in my home lab is Dell RACADM to manage the Dell iDRACs. Let's pull and instantiate the most downloaded RACADM image.

docker pull justinclayton/racadm

It can be used interactively, for example to get system information from the iDRAC with hostname esx21-oob:

docker run --rm justinclayton/racadm -r esx21-oob -u root -p calvin getsysinfo
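For my power on/off automation, the same image can run iDRAC power operations. A minimal sketch, assuming the same iDRAC credentials and hostname as above (adjust to your environment):

# power the server on
docker run --rm justinclayton/racadm -r esx21-oob -u root -p calvin serveraction powerup

# hard power off (the graceful shutdown of ESXi is handled by PowerCLI beforehand)
docker run --rm justinclayton/racadm -r esx21-oob -u root -p calvin serveraction powerdown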

INSTALL AND CONFIGURE GIT

I would like to store all my home lab scripts in a GitHub repository, synchronize it with my container host, and leverage it to manage my home lab.

# install Git
sudo tdnf install git

# configure Git
git config --global user.name "myusrname"
git config --global user.email "mymail@example.com"

git clone https://github.com/davidpasek/homelab

# save Git credentials
git config credential.helper store

RUN POWERCLI SCRIPT STORED IN CONTAINER HOST

In case I do not want to use PowerCLI interactively and instead want to run some predefined PowerCLI scripts, the local script directory has to be mapped into the container as shown in the example below

docker run -it --rm -v /home/dpasek/scripts/homelab:/tmp/scripts --entrypoint='/usr/bin/pwsh' vmware/powerclicore /tmp/scripts/get-vms.ps1

The option --rm is used to remove the container from the system after the PowerCLI script is executed.

The option -v maps the container host directory /home/dpasek/scripts/homelab to the container directory /tmp/scripts.

I was not able to run the PowerCLI script directly with the docker command without the --entrypoint option.

The whole toolset is up and running, so the rest of the exercise is to develop RACADM and PowerCLI scripts to effectively manage my home lab. The idea is to shut down all VMs and ESXi hosts when the lab is not needed. When I need the lab, I will simply power on some vSphere cluster and the VMs within that cluster having the vSphere tag "StartUp".
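Below is a minimal sketch of what this orchestration could look like as a shell script on the Photon OS container host. The PowerCLI script names (shutdown-lab.ps1, startup-lab.ps1) and the additional iDRAC hostnames are placeholders for illustration only; the real scripts will live in the GitHub repository mentioned below.

#!/bin/bash
# lab-power.sh - sketch of the home lab power off/on sequence (script and iDRAC names are placeholders)
case "$1" in
  off)
    # Gracefully shut down VMs, put the ESXi hosts into maintenance mode and shut them down,
    # all via the containerized PowerCLI (shutdown-lab.ps1 is a hypothetical script).
    docker run --rm -v /home/dpasek/scripts/homelab:/tmp/scripts \
      --entrypoint='/usr/bin/pwsh' vmware/powerclicore /tmp/scripts/shutdown-lab.ps1
    ;;
  on)
    # Power on the physical servers through their iDRACs via the containerized RACADM.
    for IDRAC in esx21-oob esx22-oob esx23-oob esx24-oob; do
      docker run --rm justinclayton/racadm -r "$IDRAC" -u root -p calvin serveraction powerup
    done
    # Once the ESXi hosts are back online, a second PowerCLI script (startup-lab.ps1, also hypothetical)
    # would exit maintenance mode and power on the VMs tagged "StartUp".
    docker run --rm -v /home/dpasek/scripts/homelab:/tmp/scripts \
      --entrypoint='/usr/bin/pwsh' vmware/powerclicore /tmp/scripts/startup-lab.ps1
    ;;
  *)
    echo "Usage: $0 {off|on}"
    ;;
esac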

I'm planning to store all these scripts in a GitHub repository for two reasons
  1. The GitHub repository will be used as a backup solution 
  2. You can track the progress of my home lab automation project
Hope I will find some spare time to finish my idea and automate this process, which I have to do manually at the moment.


Tuesday, December 11, 2018

VMware Change Block Tracking (CBT) and the issue with incremental backups

One of my customers is experiencing a weird issue when using a traditional enterprise backup solution (IBM TSM / Spectrum Protect in this particular case) leveraging the VMware vSphere Storage APIs (aka VDDK) for image-level backups of vSphere 6.5 virtual machines. They observed strange behavior in the size of incremental backups. The IBM TSM backup solution should do a full backup once and incremental backups forever. This is a great approach to save space on the backup (secondary) storage. However, my customer observed, randomly over time and on some virtual machines, almost full backups instead of the expected continuous incremental backups. This obviously has a very negative impact on the capacity of the backup storage system and also on backup window times.

The customer has vSphere 6.5 U2 (build 9298722) and IBM TSM VE 8.1.4.1. They observed the problem only on VMs where the VM hardware was upgraded to version 13. The customer opened support cases with VMware GSS and IBM support.

IBM Support observed that the VADP/VDDK API function QueryChangedDiskAreas was failing with a TSM log message similar to ...

10/19/2018 12:04:26.230 [007260] [11900] : ..\..\common\vm\vmvisdk.cpp(2436): ANS9385W Error returned from VMware vStorage API for virtual machine 'VM-NAME' in vSphere API function __ns2__QueryChangedDiskAreas. RC=12, Detail message: SOAP 1.1 fault: "":ServerFaultCode[no subcode]
"Error caused by file /vmfs/volumes/583eb2d3-4345fd68-0c28-3464a9908b34/VM-NAME/VM-NAME.vmdk"

VMware Support (GSS) instructed my customer to reset CBT - https://kb.vmware.com/kb/2139574 or disable and re-enable CBT - https://kb.vmware.com/kb/1031873 and observe if it solves the problem.

A few days after the CBT reset, the backup problem occurred again, therefore it was not a resolution.

I did some research and found another KB - CBT reports larger area of changed blocks than expected if guest OS performed unmap on a disk (59608). We believe that this is the root cause, and the KB contains a workaround and the final resolution.

The root cause mentioned in VMware KB 59608 ...
When an unmap is triggered in the guest, the OS issues UNMAP requests to underlying storage. However, the requested blocks include not only unmapped blocks but also unallocated blocks. And all those blocks are captured by CBT and considered as changed blocks then returned to backup software upon calling the vSphere API queryChangedDiskAreas(changeId).
Workaround recommended in KB ...
Disable unmap in guest VM.
For example, in MS Windows operating systems, UNMAP can be disabled with the command

fsutil behavior set DisableDeleteNotify 1

and re-enabled by command

fsutil behavior set DisableDeleteNotify 0

Warning! Disabling UNMAP in the guest OS can have a tremendously negative impact on storage space reclamation; therefore, fixing a space issue on secondary storage can cause a storage space issue on your primary storage. Check your specific design before making the final decision on how to work around this issue.

Anyway, the final problem resolution has to be done by the backup software vendor ...
If you have VDDK 6.7 or later libraries, take the intersection of VixDiskLib_QueryAllocatedBlocks() and queryChangedDiskAreas(changeId) to calculate the actually changed blocks.
The backup software should not use just the API function QueryChangedDiskAreas but also the function QueryAllocatedBlocks, and calculate the disk blocks for incremental backups from both. Based on the VDDK 6.7 Release Notes, VDDK 6.7 can be leveraged even for vSphere 6.5 and 6.0. For more info, read the Release Notes here.

I believe the problem occurs only under the following conditions
  • The virtual disk must be thin-provisioned.
  • VM hardware is version 11 or later - older VM hardware versions do not pass UNMAP SCSI commands through
  • The guest operating system must be able to identify the virtual disk as thin and issue UNMAP SCSI commands down to the storage system
Based on the conditions above, I personally believe that another workaround for this issue would be to not use thin-provisioned virtual disks and to convert them into thick virtual disks. As far as I know, thick virtual disks do not pass UNMAP commands through the VM hardware, therefore they should not cause CBT issues.

My customer is not leveraging thin provisioning on the physical storage layer, therefore he is going to test the workaround recommended in KB 59608 (disable UNMAP in guest OSes) as a short-term solution and start investigating a long-term fix with IBM Spectrum Protect (aka TSM). It seems IBM Spectrum Protect Data Mover 8.1.6 leverages VDDK 6.7.1, so an upgrade from the current version 8.1.4 to 8.1.6 could solve the issue.

Friday, December 07, 2018

ESXi : This host is potentially vulnerable to issues described in CVE-2018-3646

This is a very short post in reaction to those who asked me recently.

When you update to one of the latest ESXi builds, you can see the warning message depicted on the screenshot below.

Warning message in ESXi Client User Interface (HTML5)
This message just informs you about the Intel CPU vulnerability described in VMware Security Advisory 2018-0020 (VMSA-2018-0020).

You have three choices

  • eliminate the security vulnerability
  • accept the potential security risk and suppress the warning
  • keep it as it is and ignore the warning in the user interface
Elimination of "L2 Terminal" security vulnerability is described in VMware KB 55806. It is configurable by ESXi advanced option VMkernel.Boot.hyperthreadingMitigation. If you set a value to TRUE or 1, ESXi will be protected.

The warning message suppression is configurable with another ESXi advanced option, UserVars.SuppressHyperthreadWarning. A value of TRUE or 1 will suppress the warning message.
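If you prefer the command line, both options can also be set from the ESXi shell with esxcli. This is a sketch based on the KBs mentioned above; verify the exact syntax against them for your ESXi build:

# enable the L1TF scheduler mitigation (takes effect after a host reboot)
esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE

# suppress the warning message in the UI instead
esxcli system settings advanced set -o /UserVars/SuppressHyperthreadWarning -i 1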

Thursday, December 06, 2018

VMware Metro Storage Cluster - is it DR solution?

Yesterday morning I had a design discussion with one of my customers about HA and DR solutions. We discussed the VMware Metro Storage Cluster topic the same afternoon within our internal team, which inspired me to write this blog article and use it as a reference for future similar discussions. By the way, I presented this topic at a local VMUG meeting two years ago, so you can find the original slides here on SlideShare. In this blog post, I would like to document the topics, architectures, and conclusions I discussed today with several folks.

Stretched (aka active/active) clusters are a very popular infrastructure architecture pattern nowadays. The VMware implementation of such an active/active cluster pattern is vMSC (VMware Metro Storage Cluster). The official VMware vSphere Metro Storage Cluster Recommended Practices can be found here. Let's start with a definition of what vMSC is and is not from the HA (High Availability), DA (Disaster Avoidance) and DR (Disaster Recovery) perspectives.

vMSC (VMware Metro Storage Cluster) is
  • A High Availability solution extending infrastructure high availability across two availability zones (sites within metro distance)
  • A Disaster Avoidance solution enabling live migration of VMs not only across ESXi hosts within a single availability zone (local cluster) but also to another availability zone (another site)
vMSC (VMware Metro Storage Cluster) is a great High Availability and Disaster Avoidance technology, but it is NOT a pure Disaster Recovery solution, even though it can help with two specific disaster scenarios (failure of one of the two storage systems, single site failure). Why is it not a pure DR solution? Here are a few reasons
  • vMSC requires a storage metro cluster technology which joins two storage systems into a single distributed storage system allowing stretched storage volumes (LUNs), but this creates a single fault zone for situations when LUNs are locked or badly served by the storage system. It is great for HA but not good for DR. Such a single fault zone can lead to a total cluster outage in situations like those described here - http://www-01.ibm.com/support/docview.wss?uid=ssg1S1005201 and https://kb.vmware.com/kb/2113956
  • The vMSC compute cluster (vSphere cluster) is required to be stretched across two availability zones, which creates a single fault zone. Such a single fault zone can lead to a total cluster outage in situations like the one described here - https://kb.vmware.com/kb/56492
  • DR is not only about infrastructure but also about applications, people and processes.
  • DR should be business-service oriented, therefore from an IT perspective, DR is more about applications than infrastructure
  • DR should be tested on a regular basis. Can you afford to power off the whole site and test that all VMs will be restarted on the other side? Are you sure the applications (or, more importantly, the business services) will survive such a test? I know a few environments where they can afford it, but most enterprise customers cannot.
  • DR should allow going back into the past, therefore the solution should be able to leverage old data recovery points. Recoverability from old recovery points should be possible per application group and not only for the whole infrastructure.
Combination of HA and DR solutions
Any HA solution should be combined with some DR solution. At a minimum, such a DR solution is a classic backup solution with local or even remote-site backup repositories. The typical challenge with any backup solution is the RTO (Recovery Time Objective) because
  • You must have the infrastructure hardware ready for the workloads to be restored and powered on
  • Recovery from traditional backup repositories is usually very time-consuming, and it may or may not fulfill the RTO requirement
That's the reason why orchestrated DR with storage replications and snapshots is usually a better DR solution than a classic backup. vMSC can be safely combined with storage-based DR solutions with lower RTO SLAs. VMware has a specific Disaster Recovery product called Site Recovery Manager (SRM) to achieve orchestrated vSphere or storage replications and automated workload recovery. With such a combination, you can get cross-site High Availability and cross-site Disaster Avoidance provided by vMSC, and pure Disaster Recovery provided by SRM. Such a combination is not so common, at least in my region, because it is relatively expensive. That's the reason customers usually have to decide on only one solution. Now, let's think about why vMSC is the solution preferred by infrastructure guys over pure DR like SRM. Here are the reasons
  • It is "more simple" and much easier to implement and operate
  • No need to understand, configure and test application dependencies
  • Can be "wrongly" claimed as DR solution 
It is not very well known, but VMware SRM nowadays supports Disaster Recovery and Avoidance on top of stretched storage. It is described in the last architecture concept below.

So let's have a look at various architecture concepts for cross-site HA and DR with VMware products.

VMware Metro Storage Cluster (vMSC) - High Availability and Disaster Avoidance Solution

VMware Metro Storage Cluster (vMSC)
In the figure above, I have depicted the VMware Metro Storage Cluster, which consists of
  • Two availability zones (Site A, Site B)
  • Single centralized vSphere Management (vCenter A)
  • Single stretched storage volume(s) distributed across two storage systems, each in a different availability zone (Site A, Site B)
  • A VMware vSphere cluster stretched across the two availability zones (Site A, Site B)
  • A third location (Site C) for the storage witness. If the third site is not available, the witness can be placed in Site A or B, but then the storage administrator is the real arbiter in case of potential split-brain scenarios
Advantages of such architecture are
  • Cross-site high availability (positive impact on Availability, thus Business Continuity)
  • Cross-site vMotion (good for Disaster Avoidance)
  • Protects against single storage system (storage in one site) failure scenario
  • Protects against single availability zone (one site) failure scenario
  • Self-initiated fail-over procedure.
Drawbacks
  • vMSC is a tightly integrated distributed cluster system combining a vSphere HA cluster and a storage metro cluster, therefore it is a potential single fault zone. The stretched LUN(s) is a single fault zone for issues caused by the distributed storage system or by bad behavior of the cluster filesystem (VMFS)
  • Typically, the third location is required for storage witness
  • It is usually very difficult to test HA
  • It is almost impossible to test DR
VMware Site Recovery Manager in Classic Architecture - Disaster Recovery Solution

VMware Site Recovery Manager - Classic Architecture
In the figure above, I have depicted the classic architecture of the VMware DR solution (Site Recovery Manager), which consists of
  • Two availability zones (Site A, Site B)
  • Two independent vSphere Management servers (vCenter A, vCenter B)
  • Two independent DR orchestration servers (SRM A, SRM B)
  • Two independent vSphere Clusters
  • Two independent storage systems, one in Site A and the second in Site B
  • Synchronous or asynchronous data replication between storage systems
  • Snapshots (multiple recovery points) on the backup site are optional but highly recommended if you take DR planning seriously.
Advantages of such architecture are
  • Cross-site disaster recoverability (positive impact on Recoverability, thus Business Continuity)
  • Maximal infrastructure independence, therefore we have two independent fault zones. The only connection between the two sites is storage (data) replication.
  • Human-driven and well-tested disaster recovery procedure.
  • Disaster Avoidance (migration of applications between sites) can be achieved, but only with business service downtime. A protection group has to be shut down on one site and restarted on the other site.
Drawbacks
  • Disaster Avoidance without service disruption is not available.
  • Usually, a huge level of effort goes into application dependency mapping, and application-specific recovery plans (automated or semi-automated runbooks) have to be planned, created and tested
VMware Site Recovery Manager in Stretched Storage Architecture - Disaster Recovery and Avoidance Solution

VMware Site Recovery Manager - Stretched Storage Architecture
In the last figure, I have depicted the new architecture of the VMware DR solution (Site Recovery Manager). In this architecture, SRM supports stretched storage volumes, but everything else is independent and specific to each site. The solution consists of
  • Two availability zones (Site A, Site B)
  • Two independent vSphere Management servers (vCenter A, vCenter B)
  • Two independent DR orchestration servers (SRM A, SRM B)
  • Two independent vSphere Clusters
  • A single distributed storage system having storage volumes stretched across Site A and Site B
  • Snapshots (multiple recovery points) on the backup site are optional but highly recommended if you take DR planning seriously.
Advantages of such architecture are
  • Cross-site disaster recoverability (positive impact on Recoverability, thus Business Continuity)
  • Maximal infrastructure independence, therefore we have two independent fault zones. The only connection between the two sites is storage (data) replication.
  • Human-driven and well-tested disaster recovery procedure.
  • Disaster Avoidance without service disruption leveraging cross vCenter vMotion technology.
Drawbacks
  • Usually, a huge level of effort goes into application dependency mapping, and application-specific recovery plans (automated or semi-automated runbooks) have to be planned, created and tested
  • The virtual machine's internal identifier (moRef ID) changes after a cross-vCenter vMotion, therefore your supporting solutions (backup software, monitoring software, etc.) must not depend on this identifier.
CONCLUSION

Infrastructure availability and recoverability are two independent infrastructure qualities. Both of them have a positive impact on business continuity, but each solves a different situation. High Availability solutions increase the reliability of the system with more redundancy and self-healing, automated failover among redundant system components. Recoverability solutions back up data from one system and allow a full recovery in another, independent system. Both solutions can and should be combined in compliance with SLA/OLA requirements.

VMware Metro Storage Cluster is a great High Availability technology, but it should not be used as a replacement for a disaster recovery technology. VMware Metro Storage Cluster is not a Disaster Recovery solution, even though it can protect the system against two specific disaster scenarios (single site failure, single storage system failure). You also do not call a VMware vSphere HA cluster a DR solution, even though it can protect you against a single ESXi host failure.

The final infrastructure architecture always depends on the specific use cases, requirements and expectations of the particular customer, but expectations should be set correctly, and we should know what the designed system does and does not do. It is always better to know about potential risks than to have unknown risks. For known risks, a mitigation or contingency plan can be prepared and communicated to system users and business clients. You cannot do that for unknown risks.

Other resources
There are other posts on the blogosphere explaining what vMSC is and is NOT.

“VMware vMSC can give organizations many of the benefits that a local high-availability cluster provides, but with geographically separate sites. Stretched clustering, also called distributed clustering, allows an organization to move virtual machines (VMs) between two data centers for failover or proactive load balancing. VMs in a metro storage cluster can be live migrated between sites with vSphere vMotion and vSphere Storage vMotion. The configuration is designed for disaster avoidance in environments where downtime cannot be tolerated, but should not be used as an organization's primary disaster recovery approach.”
Another very nice technical write up about vMSC is here - The dark side of stretched clusters

Tuesday, November 13, 2018

VCSA - This appliance cannot be used or repaired ...

I have just received an email from my customer describing a weird issue with the VMware vCenter Server Appliance (aka VCSA).

The customer is doing weekly native backups of the VCSA manually via VAMI. He wanted to run the VCSA native backup again, but when he tried to log into the Virtual Appliance Management Interface (VAMI), he got the following error message
 
Error message - This appliance cannot be used or repaired because a failure was encountered. You need to deploy a new appliance.
 
The error message includes a resolution: deploy a new appliance. Such a recommendation is the last thing a typical vSphere admin wants to hear when resolving an issue like this. Fortunately, there is another solution/workaround.

To resolve this issue, stop and start all the services on the VCSA:
  • PuTTY/SSH to the vCenter Server Appliance.
  • Log in to the VCSA using the root credentials.
  • Enable the "shell".
  • Restart the VCSA services.

To restart VCSA services run the following commands:
service-control --stop --all
service-control --start --all

In case a simple services restart does not help, you may have an issue with some recent backup job. In such a case, there is another resolution with an additional workaround
  • PuTTY/SSH to the vCenter Server Appliance.
  • Log in to the VCSA using the root credentials.
  • Enable the "shell".
  • Move the /var/vmware/applmgmt/backupRestore-history.json file to /var/tmp/ (see the commands below).
  • Restart the vCenter Server Appliance.
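The file move and restart from the list above boil down to something like this (the path is taken from the list; the reboot restarts the whole appliance):

mv /var/vmware/applmgmt/backupRestore-history.json /var/tmp/
reboot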
Hope this helps other folks in the VMware community.

Sunday, November 11, 2018

Intel Software Guard Extensions (SGX) in VMware VM

Yesterday, I got a very interesting question. I was asked by a colleague of mine whether Intel SGX can be leveraged within a VMware virtual machine. We both work for VMware as TAMs (Technical Account Managers), therefore we are the first stop for technical questions like this from our customers.

I'm always curious what the business reason behind a technical question is. The question comes from my colleague's customer, who is going to run infrastructure for some "blockchain" applications leveraging the Intel SGX CPU feature set. The customer would like to run these applications virtualized on top of VMware vSphere to
  • simplify infrastructure management and capacity planning
  • increase server high availability 
  • optimize compute resource management

However, SGX support is mandatory for this type of application, therefore if virtual machines do not support it, they are forced to run it on bare metal.

We, as VMware TAMs, can leverage a lot of internal resources; however, I personally believe that nothing compares to one's own testing. After a few hours of testing in the home lab, I feel more confident discussing the subject with other folks internally within VMware or externally with my customers. By the way, that's the reason I have my own vSphere home lab, and this is a very nice example of a justification to me and my family for why I have invested pretty decent money into the home lab in our garage. But back to the topic.

Let's start with the terminology and testing method

Intel Software Guard eXtensions (SGX) is a modern Intel processor security feature that enables apps to run within protected software containers known as enclaves, providing hardware-based memory encryption that isolates the applications' code and data in memory.

Intel Software Guard Extensions (SGX) is a set of central processing unit (CPU) instruction codes from Intel that allows user-level code to allocate private regions of memory, called enclaves, that are protected from processes running at higher privilege levels. Intel designed SGX to be useful for implementing secure remote computation, secure web browsing, and digital rights management (DRM). [source]

The CPUID opcode is a processor supplementary instruction (its name derived from CPU IDentification) for the x86 architecture allowing software to discover details of the processor. It was introduced by Intel in 1993 when it introduced the Pentium and SL-enhanced 486 processors. [source]

It is worth reading the document "Properly Detecting Intel® Software Guard Extensions (Intel® SGX) in Your Applications" [source]

The most interesting part is ...
What about CPUID? The CPUID instruction is not sufficient to detect the usability of Intel SGX on a platform. It can report whether or not the processor supports the Intel SGX instructions, but Intel SGX usability depends on both the BIOS settings and the PSW. Applications that make decisions based solely on CPUID enumeration run the risk of generating a #GP or #UD fault at runtime. In addition, VMMs (for example, Hyper-V*) can mask CPUID results, and thus a system may support Intel SGX even though the results of the CPUID report that the Intel SGX feature flag is not set.
For our purpose, CPUID detection should be enough, as we can test it on a bare metal OS and later on a guest OS running inside a virtual machine. The rest of the testing is up to the application itself, but that is out of the scope of this blog post.

Another article worth reading is "CPUID — CPU Identification" [source]. The most interesting part of this document is ...
INPUT EAX = 12H: Returns Intel SGX Enumeration Information. When CPUID executes with EAX set to 12H and ECX = 0H, the processor returns information about Intel SGX capabilities.
And the most useful resource is https://github.com/ayeks/SGX-hardware
There is GNU C source code to test SGX support and a clear explanation of how to identify the support within the operating system. I have used my favorite OS, FreeBSD, and simply downloaded the code from GitHub

fetch https://raw.githubusercontent.com/ayeks/SGX-hardware/master/test-sgx.c

compile it

cc test-sgx.c -o test-sgx

and run the executable application

./test-sgx

and you can see the application (test-sgx) output with information about SGX support. The output should be similar to this one.

 root@test-sgx-vmhw4:~/sgx # ./test-sgx  
 eax: 406f0 ebx: 10800 ecx: 2d82203 edx: fabfbff 
 stepping 0 
 model 15 
 family 6 
 processor type 0 
 extended model 4 
 extended family 0 
 smx: 0 
 Extended feature bits (EAX=07H, ECX=0H) 
 eax: 0 ebx: 0 ecx: 0 edx: 0 
 sgx available: 0 
 CPUID Leaf 12H, Sub-Leaf 0 of Intel SGX Capabilities (EAX=12H,ECX=0) 
 eax: 0 ebx: 440 ecx: 0 edx: 0 
 sgx 1 supported: 0 
 sgx 2 supported: 0 
 MaxEnclaveSize_Not64: 0 
 MaxEnclaveSize_64: 0 
 CPUID Leaf 12H, Sub-Leaf 1 of Intel SGX Capabilities (EAX=12H,ECX=1) 
 eax: 0 ebx: 3c0 ecx: 0 edx: 0 
 CPUID Leaf 12H, Sub-Leaf 2 of Intel SGX Capabilities (EAX=12H,ECX=2) 
 eax: 0 ebx: 0 ecx: 0 edx: 0 
 CPUID Leaf 12H, Sub-Leaf 3 of Intel SGX Capabilities (EAX=12H,ECX=3) 
 eax: 0 ebx: 0 ecx: 0 edx: 0 
 CPUID Leaf 12H, Sub-Leaf 4 of Intel SGX Capabilities (EAX=12H,ECX=4) 
 eax: 0 ebx: 0 ecx: 0 edx: 0 
 CPUID Leaf 12H, Sub-Leaf 5 of Intel SGX Capabilities (EAX=12H,ECX=5) 
 eax: 0 ebx: 0 ecx: 0 edx: 0 
 CPUID Leaf 12H, Sub-Leaf 6 of Intel SGX Capabilities (EAX=12H,ECX=6) 
 eax: 0 ebx: 0 ecx: 0 edx: 0 
 CPUID Leaf 12H, Sub-Leaf 7 of Intel SGX Capabilities (EAX=12H,ECX=7) 
 eax: 0 ebx: 0 ecx: 0 edx: 0 
 CPUID Leaf 12H, Sub-Leaf 8 of Intel SGX Capabilities (EAX=12H,ECX=8) 
 eax: 0 ebx: 0 ecx: 0 edx: 0 
 CPUID Leaf 12H, Sub-Leaf 9 of Intel SGX Capabilities (EAX=12H,ECX=9) 
 eax: 0 ebx: 0 ecx: 0 edx: 0 

Let's continue with testing of various combinations

My home lab is based on the Intel NUC 6i3SYH, which has support for SGX. SGX has to be enabled in the BIOS. There are three SGX options within the BIOS
  • Software Controlled (default)
  • Disabled
  • Enabled

 
BIOS screenshot
First of all, let's do three tests of SGX support on bare metal (Intel NUC 6i3SYH). I have installed FreeBSD 11.0 on a USB disk and tested SGX with all three SGX-related BIOS options.


Physical hardware (Software Controlled SGX)

eax: 406e3 ebx: 100800 ecx: 7ffafbbf edx: bfebfbff
stepping 3
model 14
family 6
processor type 0
extended model 4
extended family 0
smx: 0

Extended feature bits (EAX=07H, ECX=0H)
eax: 0 ebx: 29c6fbf ecx: 0 edx: 0
sgx available: 1 (TRUE)

CPUID Leaf 12H, Sub-Leaf 0 of Intel SGX Capabilities (EAX=12H,ECX=0)
eax: 0 ebx: 0 ecx: 0 edx: 0
sgx 1 supported: 0 (FALSE)
sgx 2 supported: 0
MaxEnclaveSize_Not64: 0 (FALSE)
MaxEnclaveSize_64: 0 (FALSE)

Test result: SGX is available for CPU but not enabled in BIOS


Physical hardware (Disabled SGX)

eax: 406e3 ebx: 1100800 ecx: 7ffafbbf edx: bfebfbff
stepping 3
model 14
family 6
processor type 0
extended model 4
extended family 0
smx: 0

Extended feature bits (EAX=07H, ECX=0H)
eax: 0 ebx: 29c6fbf ecx: 0 edx: 0
sgx available: 1 (TRUE)

CPUID Leaf 12H, Sub-Leaf 0 of Intel SGX Capabilities (EAX=12H,ECX=0)
eax: 0 ebx: 0 ecx: 0 edx: 0
sgx 1 supported: 0 (FALSE)
sgx 2 supported: 0
MaxEnclaveSize_Not64: 0 (FALSE)
MaxEnclaveSize_64: 0 (FALSE)

Test result: SGX is available for CPU but not enabled in BIOS


Physical hardware (Enabled SGX)

eax: 406e3 ebx: 100800 ecx: 7ffafbbf edx: bfebfbff
stepping 3
model 14
family 6
processor type 0
extended model 4
extended family 0
smx: 0

Extended feature bits (EAX=07H, ECX=0H)
eax: 0 ebx: 29c6fbf ecx: 0 edx: 0
sgx available: 1 (TRUE)

CPUID Leaf 12H, Sub-Leaf 0 of Intel SGX Capabilities (EAX=12H,ECX=0)
eax: 1 ebx: 0 ecx: 0 edx: 241f
sgx 1 supported: 1 (TRUE)
sgx 2 supported: 0
MaxEnclaveSize_Not64: 1f (OK)
MaxEnclaveSize_64: 24 (OK)

Test result: SGX is available for CPU and enabled in BIOS



So, we have validated that SGX capabilities are available in the FreeBSD operating system running on bare metal when SGX is enabled in the BIOS.

The next step is to repeat the tests on virtual machines running on top of the VMware hypervisor (ESXi) installed on the same physical hardware (Intel NUC 6i3SYH). At the moment, I have vSphere 6.5 (ESXi build 7388607), which supports VM hardware up to version 13. Let's run the SGX tests on the very old VM hardware version 4 and on the fresh VM hardware version 13. All tests with VMs were executed on the physical system with SGX explicitly enabled in the BIOS.



VM hardware version 4

eax: 406f0 ebx: 10800 ecx: 2d82203 edx: fabfbff
stepping 0
model 15
family 6
processor type 0
extended model 4
extended family 0
smx: 0

Extended feature bits (EAX=07H, ECX=0H)
eax: 0 ebx: 0 ecx: 0 edx: 0
sgx available: 0 (FALSE)

CPUID Leaf 12H, Sub-Leaf 0 of Intel SGX Capabilities (EAX=12H,ECX=0)
eax: 0 ebx: 440 ecx: 0 edx: 0
sgx 1 supported: 0 (FALSE)
sgx 2 supported: 0
MaxEnclaveSize_Not64: 0 (FALSE)
MaxEnclaveSize_64: 0 (FALSE)

Test result: SGX is not available for CPU in VM hardware version 4


VM hardware version 13

eax: 406f0 ebx: 10800 ecx: fffa3203 edx: fabfbff
stepping 0
model 15
family 6
processor type 0
extended model 4
extended family 0
smx: 0

Extended feature bits (EAX=07H, ECX=0H)
eax: 0 ebx: 1c2fbb ecx: 0 edx: 0
sgx available: 0 (FALSE)

CPUID Leaf 12H, Sub-Leaf 0 of Intel SGX Capabilities (EAX=12H,ECX=0)
eax: 7 ebx: 340 ecx: 440 edx: 0
sgx 1 supported: 1 (TRUE)
sgx 2 supported: 1 (TRUE)
MaxEnclaveSize_Not64: 0 (FALSE)
MaxEnclaveSize_64: 0 (FALSE)

Test result: CPU SGX functions are deactivated or SGX is not supported

Conclusion

To leverage Intel SGX CPU capabilities in an application, the physical hardware must support SGX and SGX must be enabled in the BIOS.
Note: Explicitly enabled SGX in the BIOS has been successfully tested with the FreeBSD 11 operating system running on bare metal (physical servers). It might work with the BIOS option "Software Controlled", but that would require software enablement within the guest OS. I did not test such a scenario, therefore further testing would be required to prove such an assumption.
The FreeBSD 11 operating system has been tested on bare metal with SGX enabled in the BIOS, and in such a configuration the SGX CPU capabilities have been successfully identified within the operating system.
SGX support in virtual machines on top of the VMware hypervisor (ESXi 6.5) has been tested solely on physical hardware with SGX explicitly enabled in the BIOS.
Unfortunately, SGX has NOT been successfully identified even with the latest VM hardware for vSphere 6.5 (VM hardware version 13), even though the CPU capabilities identified in VM hardware 13 by the guest operating system are significantly extended in comparison to VM hardware 4.
I will try to upgrade my home lab to the latest vSphere 6.7 U1 and do additional testing with VM hardware version 14. In the meantime, I will open a discussion inside the VMware organization about SGX support because, at the moment, one large VMware customer cannot virtualize a specific type of application even though they would like to.


UPDATE 2020-03-16: SGX is supported on vSphere 7. For further details, see the video vSGX & Secure Enclaves in vSphere 7.