Wednesday, October 20, 2021

Kubernetes vSphere CSI Driver

The main reason why I blog is to document technical details and design patterns I discuss with my customers. Usually, I decide to write a blog post about some topic when more than two customers want to know the same technical details or are experiencing the same technical challenge.

Today I will write my first blog post about Kubernetes. It seems to me that Kubernetes has finally reached momentum and everybody is trying to jump on the bandwagon. It is obvious that Kubernetes is the infrastructure platform for modern distributed applications. VMware recognized this trend very early and integrated Kubernetes into the VMware vSphere platform, also known as Tanzu. I do not want to describe the Tanzu platform from a product perspective because there are plenty of such blog posts across the blogosphere. Cormac Hogan is my favorite Tanzu/Kubernetes blogger, probably because in the past he was blogging about vSphere and storage related topics. Therefore, if you want to get some info about VMware Tanzu, I highly recommend Cormac's blog which is available at https://cormachogan.com/.

In this article, I would like to describe the architecture of the vSphere CSI Driver and some of the process flow behind the scenes.

Disclaimer: Please note that this is just my personal understanding of how it works, and some things can be inaccurate or described only at a high level. Nevertheless, if you believe there is something totally wrong, speak up in the comments below the article.

First things first, I'm a visual guy, therefore let's start with the overall solution architecture.


The DevOps process to create a persistent volume is as follows:

  • The DevOps Admin asks the Kubernetes cluster to create a persistent volume via kubectl and a YAML manifest (aka persistent volume claim)
  • The CSI driver has a control plane in the K8s supervisor (control plane nodes) and CSI driver agents on all K8s worker nodes
  • The DevOps Admin's request (claim) for a persistent volume is sent to the CSI driver control plane
  • The CSI driver control plane is integrated with vCenter Server via the vSphere API
  • The CSI driver control plane, via the vCenter API, asks vSphere to create a storage volume.
  • The storage volume can be a VMDK file on a VMFS filesystem, a vSAN object, a vVol (LUN on physical storage), or NFS shared storage (mountpoint).
  • vCenter creates such a storage volume via some ESXi host
  • The CSI driver control plane can leave the storage volume unattached (aka FCD - First Class Disk) or it can attach it, because it eventually knows into which K8s pod (container) the volume should be attached. It also knows on which K8s worker node (Linux guest OS on top of a virtual machine) the K8s pod is running, therefore it dynamically attaches the volume (leveraging the hot-plug/hot-add capability) to that particular virtual machine.
    • Note 1: Block persistent volumes are attached to virtual machines via the PVSCSI adapter as it supports a higher number (64) of disks, and as a virtual machine supports up to four (4) SCSI adapters, a single VM (K8s worker node) can have up to 256 volumes.
    • Note 2: The CSI driver can add additional PVSCSI adapters to the VM dynamically
    • Note 3: It only works when the VM advanced setting "devices.hotplug" is enabled, which is the default setting.
  • Finally, the CSI driver agent detects the new storage volume within the K8s worker node (Linux guest OS) and, because it knows into which K8s pod (Linux container / chroot) the particular volume should be attached, it attaches it to the desired container (pod).

Hope I did not forget anything in the automated workflow the vSphere CSI driver is doing :-)

I guess now you would ask me how the DevOps admin issues persistent volume claims to the K8s cluster, right?

Well, it is a two-step process. First of all, the K8s cluster must know the K8s Storage Class which is later used for persistent volume claims. A Storage Class is just a mapping between a vSphere Storage Policy and a K8s StorageClass object (aka kind). If you are not yet familiar with VMware vSphere SPBM (Storage Policy Based Management), please read this.

The second step is to create a Persistent Volume Claim describing the particular storage request.

Examples of both Kubernetes (YAML) requests are below. 

 
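Since the original screenshots are not included here, below is a minimal sketch of what both manifests could look like. The names, storage policy, and size are only illustrative placeholders, not values from a real environment; the provisioner name csi.vsphere.vmware.com and the storagepolicyname parameter come from the vSphere CSI driver.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-storage-class            # example name only
provisioner: csi.vsphere.vmware.com   # vSphere CSI driver
parameters:
  storagepolicyname: "Gold"           # vSphere SPBM storage policy name (example)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-app-pvc               # example name only
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gold-storage-class
  resources:
    requests:
      storage: 10Gi                   # example size

The DevOps admin would apply both manifests with "kubectl apply -f <file>.yaml", and the CSI driver control plane then performs the workflow described above.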

I believe the examples above are self-explanatory.

Hope this article helps broader VMware user community to understand what is under the cover.


Monday, October 04, 2021

2-Node vSAN Direct Connect and LACP

One of my customers is using 2-node vSAN at multiple branch offices. One of many reasons for using 2-node vSAN is the possibility to leverage the existing 1 Gb network and use a 25 Gb Direct Connect between the ESXi hosts (vSAN nodes) without the need for 25 Gb Ethernet switches. Generally, they have a very good experience with vSAN, but recently they experienced vSAN Direct Connect outages when testing network resiliency. The resiliency test was done by an administrative shutdown of one vmnic (physical NIC port) on one vSAN node. After further troubleshooting, they realized their particular NICs (Network Adapters) do not propagate the link-down state to the physical link when the vmnic is administratively disabled by the command "esxcli network nic down -n vmnic2".
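
For completeness, the link state as seen by each ESXi host can be checked from the ESXi shell, which is a quick way to confirm whether the peer node actually sees the link go down during such a test (vmnic2 is just an example name):

esxcli network nic list            # overview of all vmnics including their link status
esxcli network nic get -n vmnic2   # detailed information about a single vmnic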

It is worth mentioning that such a network outage does not mean a 2-node vSAN outage, because that's the reason why we have the vSAN witness; however, vSAN is in a degraded state and cannot provide mirror (RAID1) protection of vSAN objects.

Such network behavior is definitely strange and we have opened a discussion and root cause analysis with the hardware vendor; however, we have also started an internal discussion about the design alternatives we have to mitigate such weird situations and increase the resiliency and overall availability of the vSAN system.

Here are three design options for implementing direct connect networking between two ESXi hosts.

Design Option 1 - Switch independent teaming with explicit fail-over

Option 1 uses a single VMkernel interface (vmk2) connected to a single vSwitch portgroup which uses two uplinks with explicit fail-over teaming, where vmnic2 is the explicit active uplink and vmnic3 is used only in case vmnic2 is not available.
 

This design option is generally recommended by VMware.

Benefits: simple configuration, highly available solution

Drawbacks: in case of a link state hardware problem, you can end up in a situation where one vSAN node is still using the VMkernel interface via the vmnic2 uplink while the other node has already failed over to vmnic3. Because the direct connect links are independent point-to-point pairs, the nodes then no longer see each other and vSAN traffic is effectively black-holed.

Design Option 2 - Link Aggregation (LACP)

Option 2 uses a single VMkernel interface (vmk2) connected to a single vSwitch portgroup having a single logical uplink (LAG) which is backed by two uplinks (vmnic2, vmnic3) bonded into a port-channel. In such a network configuration, both uplinks are active. It is worth mentioning that in a 2-node configuration the LACP load balancing algorithm can help with load balancing of vSAN traffic across both uplinks, but the main benefit of LACP is the periodic heartbeating (sending LACPDUs), which by default is done every 30 seconds (slow LACP). For more information about LACP timers, read this blog post.

Benefits: The LAG virtual interface with LACPDU heartbeating can mitigate the risk of a black-hole scenario in case of problems with link state.

Drawbacks: 

  • LACP configuration is more complicated than switch independent teaming, therefore it has a negative impact on manageability. 
  • Network availability is not guaranteed with multiple vmknics in some asymmetric failures, such as one NIC failure on one host and another NIC failure on another host. However, more bundled links can increase vSAN traffic availability, because vSAN L3 connectivity stays up and running as long as at least a single L1 link is up.

Useful LACP commands

  • esxcli network vswitch dvs vmware lacp status get
  • esxcli network vswitch dvs vmware lacp stats get
  • esxcli network nic down -n vmnic2 
  • esxcli network nic up -n vmnic2

Design Option 3 - Two vSAN Air Gap Network

Two vSAN Air Gap Networks actually means two vSAN VMkernel interfaces connected to two totally independent (air-gapped) networks.

Benefits: A little bit easier configuration than LACP.

Drawbacks: 

  • Setup is complex and error prone, so troubleshooting is more complex. 
    • Requires multiple L3 VMkernel interfaces for vSAN traffic. 
  • Network availability is not guaranteed with multiple vmknics in some asymmetric failures, such as one NIC failure on one host and another NIC failure on another host. 
  • Source: Pros and Cons of Air Gap Network Configurations with vSAN

Conclusion and design decision

In this blog post, I have described three different options of network configuration for vSAN Direct Connect. I personally believe that design option 2 (LACP for vSAN Direct Connect) is the optimal design decision, especially if NIC link state propagation is not reliable, as is the case for my customer. However, design option 3 solves the issue as well. The final design decision is up to the customer.

Friday, October 01, 2021

Enhanced Load Balancing Path Selection Policy

This blog post will be very short.

A few years ago, I wrote a blog post about this topic. It is available here, so read it for further details.

What my colleagues and I realized today is that this VMW_PSP_RR sub-policy option is enabled by default, therefore the VMware Round Robin multi-pathing policy considers I/O latency for optimal storage path selection.

The ESXi setting can be validated in the ESXi shell with the command

esxcfg-advcfg -g /Misc/EnablePSPLatencyPolicy 

where the output in ESXi 6.7 U3 and above is

Value of EnablePSPLatencyPolicy is 1

Note: 1 is TRUE.

This is the reason why you can observe different traffic across different storage paths.
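
If you want to see how a particular device is configured, the NMP configuration can also be inspected from the ESXi shell; a quick sketch (the device identifier is just a placeholder):

esxcli storage nmp device list                                      # shows the PSP and its device config for every device
esxcli storage nmp psp roundrobin deviceconfig get -d naa.xxxxxxxx  # shows the Round Robin sub-policy settings for one device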

Thursday, September 30, 2021

VMware Distributed Switch - vSphere 6.7 versus 7.0

This will be a really quick heads-up for those upgrading vSphere 6 to vSphere 7.

I've been informed by a colleague that his customer had a network outage when upgrading VMware Distributed Switch (aka VDS) from version 6.6.0 (vSphere 6.7 U3) to 7.0.2 (vSphere 7.0 U2).

That was a surprise, as we were not aware of any VDS upgrade issues in the past.

The network outage was observed on Microsoft Network Load Balancer (aka NLB), which was a pretty good hint for the Root Cause Analysis.

After further analysis, the root cause turned out to be the change of the VDS default advanced setting "Multicast filtering mode".

In vSphere 6.7, the default "Multicast filtering mode" is basic.


In vSphere 7.0, the default "Multicast filtering mode" is IGMP/MLD Snooping.

 

For those who know how IGMP snooping works, it is not a big surprise why it might be a problem for Microsoft Network Load Balancer - in NLB multicast mode, traffic is sent to a multicast MAC address for which the receivers do not send IGMP joins, so the snooping switch filters it out.

Hope this will help the broader VMware community.
 


Thursday, September 09, 2021

vSphere design : ESXi protection against network port flapping

I've just finished a root cause analysis of a VM restart in a customer's production environment, so let me share with you the symptoms of the problem, the current customer's vSphere design, and the recommended improvements to avoid similar problems in the future.

After further discussion with the customer, we identified the following symptoms:

  • the VM was restarted on a different ESXi host
  • the original ESXi host, where the VM was running before the restart, was isolated (network isolation)
  • vSAN was partitioned

What does it mean?

Well, for those who understand how a vSphere HA cluster works, it is a pretty simple diagnosis.

  • the ESXi host was isolated from the network
  • the HA Cluster "Response for Host Isolation" was set to "Power off and restart VMs"
    • this is the recommended setting for IP storage, because when the network is not available, there is a high probability that the storage is not available either and the VM is in trouble
    • the customer has vSAN, which is IP-based storage, therefore such a setting makes perfect sense

That having been said, this was the reason the VM was restarted, and it is expected behavior to achieve higher VM availability at the cost of a short unavailability caused by the VM restart.

However, there is a logical question.

Why was the ESXi host isolated from the network when network teaming (vmnic1 + vmnic3) is configured?

The customer environment is depicted in the design drawing below.

When vSAN is used, vSphere HA heartbeating happens across the vSAN network, therefore the vmk3 L3 interface (vSAN) is in use, leveraging the vmnic1 and vmnic3 uplinks. The customer has both uplinks active with "Route based on originating virtual port", therefore the traffic goes either through vmnic1 or vmnic3. This is called uplink pinning and only one uplink is used for vSphere HA heartbeat traffic.

The customer is using VMware Log Insight (syslog + data analytics) for central log management, therefore troubleshooting was a piece of cake. We found vmnic3 flapping (link up, down, up, down, ...) and a Fault Domain Manager (aka FDM) log message about the host isolation and the VM restart.

Cool, we know the Root Cause, but what options do we have to avoid such situation?

Well, the issue described above is called Network Port Flapping. In our single-port issue with vmnic3, the vmk3 (vSAN, HA heartbeat) interface was originally pinned to vmnic3, and when vmnic3 went down, vmk3 failed over from vmnic3 to vmnic1. However, because vmnic3 came back up, the failover process stopped and vmk3 was kept on vmnic3. Nevertheless, vmnic3 went down, up, down, up, etc. again, and as the network was very unstable, vSphere HA heartbeating failed. As there are no traditional datastores, there is no vSphere HA storage heartbeating; we rely only on network heartbeating, which failed, thus the ESXi host was declared isolated and the VM was powered off and restarted on another ESXi host within the vSphere cluster, where the VM can provide its application services again. This is actually the goal of vSphere HA: to increase VM service availability, and network availability is part of that availability.

So, what is port flapping?

Source: https://lantern.splunk.com/IT_Use_Case_Guidance/Infrastructure_Performance_Monitoring/Network_Monitoring/Managing_Cisco_IOS_devices/Port_flapping_on_Cisco_IOS_devices

Port flapping is a situation in which a physical interface on the switch continually goes up and down, three or more times a second for at least 10 seconds.

Common causes for port flapping are bad, unsupported, or non-standard cable or other link synchronization issues. The cause for port flapping can be intermittent or permanent. You need a search to identify when it happens on your network so you can investigate and resolve the problem.

How to avoid port flapping consequences in vSphere Cluster?

(1) Link Dampening. There are some possibilities on the Ethernet switch side. I was blogging about "Dell Force10 Link Dampening" a few years ago, which should help in these situations.

(2) There is the VMware vSwitch "Teaming and failover" option Failback=No, available through the GUI.


(3) And there is the ESXi advanced setting "Net.teampolicyupdelay", which is something like the "Link Dampening" described above. Source: https://kb.vmware.com/s/article/2014075

Each option above has its own benefits and drawbacks.

+ means benefit

- means drawback

Let's go option by option and discuss pluses and minuses.

Option 1: Physical Ethernet switch Link Dampening

+ per physical switch port setting, therefore not too many places to set, but still some effort. Some switches support profile configuration, which can have a positive impact on manageability.

- such a feature might or might not be available for a particular network vendor, and if available, the configuration varies vendor by vendor

- must be done by the network admin, therefore the vSphere admin does not have the rights nor insight into such a setting, and you must explain and justify it to the network admin, network manager, etc.

Option 2: VMware vSwitch "Teaming and failover / Failback=No"

+ per vSwitch portgroup setting, therefore a single and straightforward setting in the case of a Distributed Virtual Switch (aka VDS)

- in the case of a Standard Virtual Switch (aka VSS), the setting must be done for the vSwitch on each ESXi host, which has a negative impact on manageability

- it will fail over all traffic from the flapping vmnic to the fully operational vmnic, but it will never fail back until an ESXi restart. This has a positive impact on availability but a potentially negative impact on performance and throughput

Option 3: ESXi advanced setting "Net.teampolicyupdelay"

- a per-ESXi advanced setting, which is not perfect from a manageability point of view

+ it has a positive impact on availability and also performance, because in case of a temporary flapping issue, it can fail traffic back after some longer time, let's say 5 or 10 seconds (see the command sketch below this list)

- unfortunately, there is no such granularity as with Force10 Link Dampening, which penalizes the interface based on flap frequency, with the penalty decaying exponentially depending on the configured half-life.
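
For reference, a quick sketch of how this advanced setting can be checked and changed from the ESXi shell. The 30000 ms value is only an illustrative example, not a recommendation, and the same setting is also reachable via the vSphere Client or PowerCLI:

esxcli system settings advanced list -o /Net/TeamPolicyUpDelay          # show the current value
esxcli system settings advanced set -o /Net/TeamPolicyUpDelay -i 30000  # delay link-up handling by 30 seconds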

Conclusion

What option should the customer implement? To be honest, it is up to a cross-team discussion, because each option has some advantages and disadvantages. Nevertheless, there are options to consider to increase system availability and resiliency.

Hope you have found this write-up useful. This is my give-back to the VMware community. I believe that sharing knowledge is the only way to improve not only technology but human civilization. If you have another opinion, other options, or experience, please do not hesitate to write a comment below this article.

Thursday, September 02, 2021

ESXi, Intel NICs and LLDP

This will be a very short blog post because Dusan Tekeljak has already written a blog post about this topic. Nevertheless, I was not aware of such Intel NIC driver behavior, which is pretty interesting, thus I'm writing this blog post for broader awareness.

My customer is modernizing their physical networking and implementing Cisco ACI, therefore moving from CDP (Cisco Discovery Protocol) to LLDP (Link Layer Discovery Protocol), which is the industry standard. They observed that LLDP does not work on some NICs and, after further troubleshooting, they realized it happens only on ESXi hosts with Intel X710 NICs. NICs from other vendors (Broadcom, QLogic) worked as expected.

After some further research, they found several internet articles describing the behavior, and after opening this topic with the server vendor (HPE), they also received a Customer Advisory.

Long story short, the Intel NIC driver contains an "LLDP agent" which is enabled by default and consumes LLDP Ethernet frames. By disabling the LLDP agent within the Intel NIC driver, LLDP frames are no longer handled by the NIC driver and can be observed within the ESXi hypervisor, thus vSwitch network discovery via LLDP works as expected.

If you have a four (4) port Intel NIC, you must use the following command to disable the LLDP agent on all four NIC ports.

esxcli system module parameters set -m i40en -p LLDP=0,0,0,0
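
A quick verification sketch (note that module parameter changes take effect only after an ESXi reboot):

esxcli system module parameters list -m i40en | grep LLDP   # the LLDP parameter should show the value 0,0,0,0

After the reboot, LLDP neighbor information should be visible again on the virtual switch in the vSphere Client.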

On HPE servers, this behavior can also be controlled via the BIOS. The feature for disabling the Link Layer Discovery Protocol (LLDP) in the BIOS/Platform Configuration (RBSU) has been included in the HPE Intel Online Firmware Upgrade released on 20 Dec 2019.

Thursday, July 15, 2021

vSAN Capacity and Performance Sizer

VMware vSAN is enterprise production-ready software-defined storage for VMware vSphere. After several (7+) years on the market, it is a proven storage technology especially for VMware Software-Defined Data Centers aka SDDC.   

As a seasoned vSphere infrastructure designer, I had a need for a vSAN sizer I could trust, and that was the reason to prepare just another spreadsheet with my own calculations. There are multiple official VMware vSAN and VxRail sizers; however, it is always good to have your own based on your understanding of the underlying technology. It is not only about the vSAN software, but also about vSphere/vSphere HA cluster details (Admission Control HA redundancy versus vSAN data protection within the storage policy) and hardware-related details like the performance difference between NAND flash and Intel Optane flash, NAND asymmetric read/write performance, etc.

If you are looking for an official vSAN Sizer, go to https://vsansizer.vmware.com/

Here is the link to my UNSUPPORTED "vSAN Capacity and Performance Sizer".

Usage Instructions:

Download the excel file from the link above and start to play with yellow cells to plan your capacity and performance design.

What's unique about my vSAN sizer?

  • Detailed storage capacity calculation allowing to consider vSphere HA Admission Control
  • Storage performance sizing for various workload patterns with 64 kB I/O size, which is IMHO a typical average I/O size in an enterprise environment
    • I/O Size 64 kB, 100% read, 100% random
    • I/O Size 64 kB, 70% read, 100% random
    • I/O Size 64 kB, 50% read, 100% random
    • I/O Size 64 kB, 30% read, 100% random
    • I/O Size 64 kB, 20% read, 100% random
    • I/O Size 64 kB, 0% read, 100% random

Disclaimer: PERFORMANCE SIZING IS VERY TRICKY!!! Over the years, I used this spreadsheet for various vSphere designs and sometimes even validated the capacity and performance estimations. Based on the test results, I was tuning the calculations and parameters. However, this is always just an estimation that always has to be validated by the vSphere designer responsible for a particular design.

Hope you will find this tool useful. The only ask for anybody who uses the spreadsheet for capacity and mainly performance estimations: please give me feedback on whether the calculated results were close to the capacity and mainly the performance you observed during your testing before putting the infrastructure into production. We all do perform test plans before production usage, right? :-) And we all know that VMware has a great synthetic storage performance test tool called HCI Bench, don't we?

Share and collaborate, this is the way we live in the VMware community!

Use comments below the blog post for further discussions.

Tuesday, June 15, 2021

vSphere 7 - ESXi boot media partition layout changes

VMware vSphere 7 is a major product release with a lot of design and architectural changes. Among these changes, VMware also reviewed and changed the layout of ESXi 7 storage partitions on boot devices. Such a change has some design implications which I'm trying to cover in this blog post.

Note: Please be aware that almost all information in this blog post is sourced from external resources such as VMware documentation, VMware KBs, VMware blog posts, and also VMware community blog posts.

Let's start with ESXi 7 Storage Requirements

Here is the list of boot device storage requirements from VMware documentation - source [2]:
  • Installing ESXi 7.0 requires a boot device that is a minimum of 8 GB for USB or SD devices, and 32 GB for other device types.
  • Upgrading to ESXi 7.0 requires a boot device that is a minimum of 4 GB. 
  • When booting from a local disk, SAN or iSCSI LUN, a 32 GB disk is required to allow for the creation of system storage volumes, which include a boot partition, boot banks, and a VMFS-L based ESX-OSData volume. 
  • The ESX-OSData volume takes on the role of the legacy /scratch partition, locker partition for VMware Tools, and core dump destination.

Key changes between ESXi 6 and ESXi 7

Here are the key boot media partitioning changes between ESXi 6 and ESXi 7:
  • larger system boot partition
  • larger boot banks
  • introducing ESX OSData (ROM-data, RAM-data)
    • consolidation of coredump, tools and scratch into a single VMFS-L based ESX-OSData volume
    • coredumps default to a file in ESX-OSData
  • variable partition sizes based on boot media capacity

The biggest change to the partition layout is the consolidation of VMware Tools Locker, Core Dump and Scratch partitions into a new ESX-OSData volume (based on VMFS-L). This new volume can vary in size (up to 138GB). [4]

Official support for specifying the size of ESX-OSData has been added in the ESXi 7.0 Update 1c release with a new ESXi kernel boot option called systemMediaSize, which takes one of four values [4] (see the usage sketch after the list):

  • min = 25GB
  • small = 55GB
  • default = 138GB (default behavior)
  • max = Consumes all available space
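
A minimal usage sketch, based on my reading of source [4] (please verify against the referenced documentation for your exact build): during an interactive installation you can press Shift+O at the ESXi installer boot prompt and append the option to the existing boot command line, for example:

<existing boot options> systemMediaSize=small

The same kernel option can reportedly also be supplied in scripted/PXE installations via the boot configuration.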

What is ESX OS Data partition?

ESX-OSData is a new partition that stores ESXi configuration, system state, and system or agent virtual machines. The OSData partition is divided into two sections:

  1. ROM-data
  2. RAM-data

ROM-data is not read-only as the name might imply; it is a section for data written to the disk infrequently. Examples of such data are VMware Tools ISOs, ESXi configurations, core dumps, etc.

RAM-data is for frequently written data like logs, VMFS global traces, vSAN EPD and traces, and live system state files.

How has the partition layout changed?

Below are depicted the partition layout in vSphere 6.x and the consolidated partition layout in vSphere 7 [1].



Partition size variations

There are various partition sizes based on the boot device size. The only fixed size is the system boot partition, which is always 100 MB. All other variations are depicted in the picture below [1].

Note: If you use USB or SD storage devices, the ESX-OSData partition is created on an additional storage device such as an HDD or SSD. When an additional storage device is not available, ESX-OSData is created on the USB or SD device, but the ESX-OSData partition is used only to store ROM-data, and RAM-data is stored on a RAM disk. [1]

What design options do I have? 

ESX-OSData is used as the unified location to store Scratch, Core Dump, and ProductLocker data. By default, it is located on the boot media partition (ESX-OSData), but there are advanced settings allowing these types of data to be relocated to an external location.

Design Option #1 - Changing ScratchPartition location

In ESXi 7.0, a VMFS-L based ESX-OSData volume (where logs, coredumps, and configuration are stored) replaces the traditional scratch partition. During an upgrade, the configured scratch partition is converted to ESX-OSData. The settings described in VMware KB 1033696 [7] are still applicable for cases where you want to point the scratch path to another location. It is about the ESXi advanced setting ScratchConfig.ConfiguredScratchLocation. I wrote a blog post about changing the scratch location here.
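
A minimal sketch of the approach from KB 1033696 [7], run from the ESXi shell; the datastore path is only an example and a host reboot is required for the change to take effect:

vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esx01

The current and configured values can be checked afterwards under Advanced System Settings in the vSphere Client.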

Design Option #2 - Create a core dump file on a datastore

Core dump location can be also changed. To create a core dump file on a datastore, see the KB article 2077516 [8].
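
For illustration, the commands from that KB look roughly like this (datastore and file names are just examples; please follow KB 2077516 [8] for the authoritative procedure):

esxcli system coredump file add -d datastore1 -f esx01-coredump   # create a new core dump file on a VMFS datastore
esxcli system coredump file list                                  # list available core dump files
esxcli system coredump file set --smart --enable true             # activate a suitable core dump file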

Design Option #3 - Changing ProductLocker location

To change the productLocker location from the boot media to a directory on a datastore, see the VMware KB article 2129825 [10].
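
On recent ESXi releases there is an advanced setting for this; a minimal sketch, assuming the datastore path is only an example (verify the exact procedure, and whether a reboot is required, against KB 2129825 [10] for your release):

esxcli system settings advanced set -o /UserVars/ProductLockerLocation -s "/vmfs/volumes/datastore1/productLocker"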

Applying all three options above can significantly reduce I/O operations to boot media with lower endurance, such as USB flash disks or SD cards. However, the hardware industry has improved over the last years and nowadays we have new boot media options such as SATA-DOM, M.2 slots for SSDs, or low-cost NVMe (PCIe SSD).

Note: I have not tested the above design options in my lab, therefore I'm assuming they work as expected based on the VMware KBs referred to in each option.

Other known problems you can observe when using USB or SD media

There are other known issues with using USB or SD as boot media, but some of these issues are already addressed or will be addressed in future patches, as USB and SD media is officially supported.
 
I'm aware of these issues:
  • ESXi hosts experiences All Paths Down events on USB based SD Cards while using the vmkusb driver [5] [15]
    • Luciano Patrao blogged about this (or a similar) issue at [14] and he found a workaround to use until the final VMware fix, which should be released in ESXi 7.0 U3. Luciano's workaround is to:
      1. login to ESXi console (SSH or DCUI)
      2. execute command "esxcfg-rescan -d vmhba32" several times until it finishes without an error.
      3. You need to give it some minutes between each rerun of the command. Be patient and try again in 2-5 minutes.
      4. After all errors are gone and the command finishes without any error, you should see in the logs that "mpx.vmhba32:C0:T0:L0" was mounted in rw mode, and you should be able to do some work on the ESXi host again.
      5. If you still have some issues, restart the management agents
        • /etc/init.d/hostd restart
        • /etc/init.d/vpxa restart   
      6. After this, you should be able to migrate your VMs to another ESXi host and reboot this one, until it breaks again when someone tries to use VMware Tools.
  • VMFS-L Locker partition corruption on SD cards in ESXi 7.0 U1 and U2 [6] (should be fixed in future ESXi patch)
  • High frequency of read operations on VMware Tools image may cause SD card corruption [12]
    • This issue has been addressed in ESXi 6.7 U3 - changes were made to reduce the number of read operations being sent to the SD card, and an advanced parameter was introduced that allows you to migrate your VMware Tools image to a ramdisk on boot. This way, the information is read from the SD card only once per boot cycle.
      • However, it seems that the problem reoccurred in ESXi 7.x, because the ToolsRamdisk option is not available in ESXi 7.0.x releases [13]
    • The other vSphere design solution is IMHO the change of the ProductLocker location mentioned above, because then the VMware Tools image is not located on the boot media.

Conclusion

ESXi 7 uses the ESX-OSData partition for various logging and debugging files. In addition, if vSAN and/or NSX is enabled on the ESXi host, there are additional trace files leading to even higher I/O. This ESXi system behavior requires higher endurance of the boot media than in the past.

If you are defining a new hardware specification, it is highly recommended to use larger boot media (~150 GB or more) based on NAND flash technology and connected through modern buses like M.2 or PCIe. When larger boot media is in use, ESXi 7 will do all the magic required for correct partitioning of the ESXi boot media.

In the case of existing hardware and no budget for an additional hardware upgrade, you can still use SD cards or USB drives, but you should carefully design the boot media layout and consider relocation of Scratch, Core Dump, and ProductLocker to external locations to mitigate the risk of boot media failure.

Hope this write-up helps, and if you have some other finding or comment, do not hesitate to let me know via the comments below the post, Twitter, or email.

Sources:

Saturday, May 15, 2021

AWS, FreeBSD AMIs and WebScale application FlexBook

I've started to play with AWS cloud computing. When I'm starting with any new technology, the best way to learn it is to use it for some project. And because I participate in one open-source project, where we develop a multi-cloud application which can run, scale, and auto-migrate among various cloud providers, I've decided to do a Proof of Concept in AWS.

The open-source software I'm going to deploy is FlexBook and is available on GitHub.

Below is the logical infrastructure design of the AWS infrastructure for the deployment of a web-scale application.

My first PoC is using the following AWS resources:

  • 1x AWS Region
  • 1x AWS VPC
  • 1x AWS Availability Zone
  • 1x AWS Internet Gateway
  • 1x AWS Public Segment
  • 1x AWS Private Segment
  • 1x AWS NAT Gateway
  • 6x EC2 Instances
    • 1x FlexBook Ingress Controller - NGINX used as an L7 load balancer redirecting ingress traffic to a particular FlexBook node
    • 1x WebPortal - NGINX used as a web server for a static portal page using JavaScript components leveraging REST API communication to the FlexBook cluster (3 FlexBook nodes which can auto-scale if necessary)
    • 1x FlexBook Manager - responsible for FlexBook cluster management including deployment, auto-scaling, application distributed resource management, etc.
    • 3x FlexBook Nodes - this is where the multi-tenant FlexBook application is running. App tenants can be migrated across FlexBook nodes.

For all EC2 instances I'm going to use my favorite operating system - FreeBSD.

I've realized that AWS EC2 instances do not support console access, therefore SSH is the only way to log in to the servers. You can generate an SSH key pair during EC2 deployment and download the private key (PEM) to your computer. AWS shows you how to connect to your EC2 instance. This is what you see in the instructions:

ssh -i "flxb-mgr.pem" root@ec2-32-7-14-5.eu-central-1.compute.amazonaws.com

However, the command above does not work for FreeBSD. AWS gives you the following information ...

Note: In most cases, the guessed user name is correct. However, read your AMI usage instructions to check if the AMI owner has changed the default AMI user name. 
And that's the point. The default username for FreeBSD AWS AMIs is ec2-user, therefore the following command will let you connect to an AWS EC2 FreeBSD instance.

ssh -i "flxb-mgr.pem" ec2-user@ec2-32-7-14-5.eu-central-1.compute.amazonaws.com

When you SSH in as ec2-user, you can su to the root account, which does not have any password.

Here are best practices for production usage (a minimal command sketch follows the list):

  • set a root password
  • remove the ec2-user account and create your own account with your own SSH keys
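
A minimal sketch of those two steps on FreeBSD. The account name admin1 is just an illustrative placeholder; adjust it to your own conventions and test SSH access with the new account before removing ec2-user:

su -                                                         # become root (no password is set initially)
passwd                                                       # set a root password
pw useradd -n admin1 -m -G wheel -s /bin/sh                  # create your own account
mkdir -p /home/admin1/.ssh
cp /home/ec2-user/.ssh/authorized_keys /home/admin1/.ssh/    # reuse the SSH key pair generated during deployment
chown -R admin1:admin1 /home/admin1/.ssh
pw userdel -n ec2-user -r                                    # remove the default account once the new one works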

That's it for now. I will continue with AWS discovery and potential production use of AWS for some FlexBook projects. 


Wednesday, March 24, 2021

What's new in vSphere 7 Update 2

vSphere 7 is not only about server virtualization (Virtual Machines) but also about containers orchestrated by the Kubernetes orchestration engine. VMware's Kubernetes distribution and the broader platform for modern applications (also known as CNA - Cloud Native Applications, or Developer Ready Infrastructure) is called VMware Tanzu. Let's start with enhancements in this area and continue with more traditional areas like operational, scalability, and security improvements.

Developer Ready Infrastructure

vSphere with Tanzu - Integrated LoadBalancer

vSphere 7 Update 2 includes a fully supported, integrated, highly available, enterprise-ready load balancer for the Tanzu Kubernetes Grid control plane and Kubernetes services of type LoadBalancer - NSX Advanced Load Balancer Essentials (formerly Avi Load Balancer). NSX Advanced Load Balancer Essentials is a scale-out load balancer. The data path for users accessing the VIPs goes through a set of Service Engines that automatically scale out as workloads increase.

vSphere with Tanzu - Private Registry Support

If you are using a container registry with self-signed or private CA signed certificates, this allows them to be used with TKG clusters.

vSphere with Tanzu - Advanced security for container-based workloads in vSphere with Tanzu on AMD

For customers interested in running containers with as much security in place as possible, Confidential Containers provides full and complete register and memory isolation and encryption from Pod to Pod and Hypervisor to Pod.

  • Builds on vSphere’s industry-leading, easy-to-enable support for AMD SEV-ES data protections on 2nd & 3rd generation AMD EPYC CPUs
  • Each Pod is uniquely encrypted to protect applications and data in use within CPU and memory
  • Enabled with standard Kubernetes YAML annotation

Artificial Intelligence & Machine Learning

vSphere and NVIDIA. The new Ampere family of NVIDIA GPUs is supported on vSphere 7U2. This is part of a bigger effort between the two companies to build a full stack AI/ML offering for customers.

  • Support for new NVIDIA Ampere family of GPUs
    • In the new Ampere family of GPUs, the A100 GPU is the new high-end offering. Previously the high-end GPU was the V100 – the A100 is about double the performance of the V100. 
  • Multi-Instance GPU (MIG) improves physical isolation between VMs & workloads
    • You can think of MIG as spatial  separation as opposed to the older form of vGPU which did time-slicing to separate one VM from another on the GPU. MIG is used through a familiar vGPU profile assigned to the VM. You enable MIG at the vSphere host level firstly using one simple command "nvidia-smi mig enable -I 0". This requires SR-IOV to be switched on in the BIOS (via the iDRAC on a Dell server, for example).  
  • Performance enhancements with GPUdirect & Address Translation Service in the hypervisor

Operational Enhancements

VMware vSphere Lifecycle Manager - support for Tanzu & NSX-T

  • vSphere Lifecycle Manager now handles vSphere with Tanzu “supervisor” cluster lifecycle operations
  • Uses declarative model for host management

VMware vSphere Lifecycle Manager Desired Image Seeding

Extract an image from an existing host

ESXi Suspend-to-Memory

Suspend to Memory introduces a new option to help reduce the overall ESXi host upgrade time.

  • Depends on Quick Boot
  • New option to suspend the VM state to memory during upgrades
  • Options defined in the Host Remediation Settings
  • Adds flexibility and reduces upgrade time

Availability & Efficiency

vSphere HA support for Persistent Memory Workloads

  • Use vSphere HA to automatically restart workloads with PMEM
  • Admission Control ensures NVDIMM failover capacity
  • Can be enabled with VM Hardware 19

Note: By default, vSphere HA will not attempt to restart a virtual machine using NVDIMM on another host. Allowing HA on host failure to fail over the virtual machine will restart the virtual machine on another host with a new, empty NVDIMM.

VMware vMotion Auto Scale

vSphere 7 U2 automatically tunes vMotion to the available network bandwidth for faster live-migrations for faster outage avoidance and less time spent on maintenance.

  • Faster live migration on 25, 40, and 100 GbE networks means faster outage avoidance and less time spent on maintenance
  • One vMotion stream capable of processing 15 Gbps+
  • vMotion automatically scales the number of streams to the available bandwidth
  • No more manual tuning to get the most from your network


AMD optimizations

As customers' trust in AMD increases, so does the performance of ESXi on modern AMD processors.

  • Optimized scheduler ​for AMD EPYC architecture
  • Better load balancing and cache locality
  • Enormous performance gains

Reduced I/O Jitter for Latency-sensitive Workloads

Under the hood vSphere kernel improvements in vSphere 7U2 allow for significantly improved I/O latency for virtual Telco 5G Radio Access Networks (vRAN) deployments.

  • Eliminate Jitter for Telco 5G Deployments
  • Significantly Improve I/O Latency
  • Reduce NIC Passthrough Interrupts

Security & Compliance

ESXi Key Persistence

ESXi Key Persistence helps eliminate dependency loops and creates options for encryption without the traditional infrastructure. It’s the ability to use a Trusted Platform Module, or TPM, on a host to store secrets. A TPM is a secure enclave for a server, and we strongly recommend customers install them in all of their servers because they’re an inexpensive way to get a lot of advanced security.

  • Helps Eliminate Dependencies
  • Enabled via Hardware TPM
  • Encryption Without vCenter Server

VMware vSphere Native Key Provider 

vSphere Native Key Provider puts data-at-rest protections in reach for all customers.

  • Easily enable vSAN Encryption, VM Encryption, and vTPM
  • Key provider integrated in vCenter Server & clustered ESXi hosts
  • Works with ESXi Key Persistence to eliminate dependencies
  • Adds flexible and easy-to-use options for advanced data-at-rest security
 vSphere has some pretty heavy-duty data-at-rest protections, like vSAN Encryption, VM encryption, and virtual TPMs for workloads. One of the gotchas there is that customers need a third-party key provider to enable those features, traditionally known as a key management service or KMS. There are inexpensive KMS options out there but they add significant complexity to operations. In fact, complexity has been a real deterrent to using these features… until now!

Storage

iSCSI path limits
 
ESXi has had a disparity in path limits between iSCSI and Fibre Channel. 32 paths for FC and 8 (8!) paths for iSCSI. As of ESXi 7.0 U2 this limit is now 32 paths. For further details read this.

File Repository on a vVol Datastore

VMware added a new feature that supports creating a custom size config vVol–while this was technically possible in earlier releases, it was not supported. For further details read this.

VMware Tools and Guest OS

Virtual Trusted Platform Module (vTPM) support on Linux & Windows

  • Easily enable in-guest security requiring TPM support
  • vTPM available for modern versions of Microsoft Windows and select Linux distributions
  • Does not require physical TPM
  • Requires VM Encryption, easy with Native Key Provider!

VMware Tools Guest Content Distribution

The Guest Store feature enables customers to distribute various types of content to the VMs, like an internal CDN system.

  • Distribute content “like an internal CDN”
  • Granular control over participation
  • Flexibility to choose content

VMware Time Provider Plugin for Precision Time on Windows

With the introduction of the new vmwTimeProvider plugin shipped with VMware Tools, guests can synchronize directly with hosts over a low-jitter channel.

  • VMware Tools plugin to synchronize guest clocks with Windows Time Service
  • Added via custom install option in VMware Tools
  • Precision Clock device available in VM Hardware 18+
  • Supported on Windows 10 and Windows Server 2016+
  • High quality alternative to traditional time sources like NTP or Active Directory

Conclusion

vSphere 7 Update 2 is a nice evolution of the vSphere platform. If you ask me what the most interesting feature in this release is, I would probably answer VMware vSphere Native Key Provider, because it has a positive impact on manageability and simplification of the overall architecture. The second one is VMware vMotion Auto Scale, which reduces operational time during ESXi maintenance operations in environments where 25+ Gb NICs are already adopted.


 




Wednesday, February 17, 2021

VMware Short URLs

VMware has a lot of products and technologies, so here are a few interesting URL shortcuts to quickly get resources for a particular product, technology, or other information.

VMware HCL and Interop

https://vmware.com/go/hcl - VMware Compatibility Guide

https://vmwa.re/vsanhclc or https://vmware.com/go/vsanvcg - VMware Compatibility Guide vSAN 

https://vmware.com/go/interop - VMware Product Interoperability Matrices

VMware Partners

https://www.vmware.com/go/partnerconnect - VMware Partner Connect

VMware Customers

https://www.vmware.com/go/myvmware - My VMware Overview
 
https://www.vmware.com/go/customerconnect - Customer Connect Overview

https://www.vmware.com/go/patch - Customer Connect, where you can download VMware bits

http://vmware.com/go/skyline - VMware Skyline

http://vmware.com/go/skyline/download - Download VMware Skyline

VMware vSphere

http://vmware.com/go/vsphere - VMware vSphere

VMware CLIs

http://vmware.com/go/dcli - VMware Data Center CLI

VMware Software-Defined Networking and Security

https://vmware.com/go/vcn - Virtual Cloud Network

https://vmware.com/go/nsx - VMware NSX Data Center

https://vmware.com/go/vmware_hcx - Download VMware HCX

VVD

https://vmware.com/go/vvd-diagrams - Diagrams for VMware Validated Design

https://vmware.com/go/vvd-stencils - VMware Stencils for Visio and OmniGraffle

http://vmware.com/go/vvd-community - VVD Community

http://www.vmware.com/go/vvd-sddc - Download VMware Validated Design for Software-Defined Data Center

VCF

https://vmware.com/go/vcfrc - VMware Cloud Foundation Resource Center

http://vmware.com/go/cloudfoundation - VMware Cloud Foundation

http://vmware.com/go/cloudfoundation-community - VMware Cloud Foundation Discussions

http://vmware.com/go/cloudfoundation-docs - VMware Cloud Foundation Documentation

Tanzu Kubernetes Grid (TKG)

http://vmware.com/go/get-tkg - Download VMware Tanzu Kubernetes Grid

Hope this helps at least one person in the VMware community.

Sunday, February 14, 2021

Top Ten Things VMware TAM should have on his mind and use on a daily basis

Readers may or may not know that I work for VMware as a TAM. For those who do not know, TAM stands for Technical Account Manager. A VMware TAM is a billable consulting role available to VMware customers who want to have an on-site dedicated technical advisor/consultant/advocate for long-term cooperation. The VMware TAM organization historically belonged under VMware PSO (Professional Services Organization); however, it has recently been moved under the Customer Success Organization, which makes perfect sense if you ask me, because customer success is the key goal of the TAM role.

How does a TAM engagement work? It is pretty easy. VMware Technical Account Managers have 5 slots (days) per week which can be consumed by one or many VMware customers. There are Tier 1, Tier 2, and Tier 3 offerings, where the Tier 1 TAM service includes one day per week for the customer, Tier 2 has 2.5 days per week, and a Tier 3 TAM is fully dedicated.

The TAM job role is very flexible and customizable based on specific customer demand. I like the figure below, describing TAM Service standard Deliverables and On-Demand Activities.


VMware TAM is delivering standard deliverables like
  • Kickoff Meeting and TAM Business Reviews to continuously align with customer expectations
  • Standard Analytics and Reporting including the report of customer estate in terms of VMware products and technologies (we call it CI.Next), Best Practices Review report highlighting a few best practices violations against VMware Health Check’s recommended practices.
  • Technical Advisory Service about VMware Product Releases, VMware Security Advisories, Specific TAM Customer Technical Webinars, Events, etc.
However, the most interesting part of the VMware TAM job role, at least for me, is the On-Demand Activities, including
  • Technical Enablements, DeepDives, Roadmaps, etc.
  • Planning and Conceptual Designing of Technical Solutions and Transformation Project
  • Problem Management and Design Troubleshooting
  • Product Feature Request management
  • Etc.

And this is the reason why I love my job: I like technical planning, designing, coordinating technical implementations, and validating and testing implementations before they are handed over to production. I also like to communicate with operations teams and, after a while, reevaluate the implemented design and take the operational feedback back to architecture and engineering for continuous solution improvement. 
That's the reason why the TAM role is my dream job at one of the best and most impactful IT companies in the world.

During the last One on One meeting with my manager, I was asked to write down the top ten things a VMware TAM should have on his mind and use on a daily basis in 2021. To be honest, the rules I will list are not specific to the year 2021 but are very general, applying to any other year, and are also easily reusable for any other human activity.

After 25 years in the IT industry, 15 years in professional consulting, and 5 years as a VMware TAM, I immodestly believe the 10 things below are the most important things to be a valuable VMware TAM for my customers. These are just my best practices, and it is good to know there are no best practices written in stone, therefore your opinion may vary. Anyway, take it or leave it. Here we go.

#1 Top-Down approach

I use the Top-Down approach to be able to split any project or solution into Conceptual, Logical, and Physical layers. I use Abstraction and Generalization. While abstraction reduces complexity by hiding irrelevant detail, generalization reduces complexity by replacing multiple entities that perform similar functions with a single construct. Do not forget, modern IT system complexity can be insane. Check the video "Power of Ten" to understand details about other systems' complexity and how it can be visible at various levels.

#2 Correct Expectations

I always set correct expectations. Discovering the customer's requirements, constraints, and specific use cases before going into any details or specific solutions is the key to customer success.

#3 Communication

Open and honest communication is the key to any long-term successful relationship. As a TAM, I have to be the communicator who can break barriers between various customer silos and teams, like VMware, compute, storage, network, security, application, developers, DevOps, you name it. They have to trust you, otherwise you cannot succeed.

#4 Assumptions

I do not assume. Sometimes we need some assumptions to avoid getting stuck and to move forward; however, we should validate those assumptions as soon as possible, because false assumptions lead to risks. And one of our primary goals as TAMs is to mitigate risks for our customers.

#5 Digital Transformation

I leverage online and digital platforms. Nothing compares to personal meetings and whiteboarding, however, tools like Zoom, Miro.com, and Monday.com increase efficiency and help with communication especially in COVID-19 times. This is probably the only related point to the year 2021, as COVID-19 challenges are staying with us for some time.

#6 Agile Methodologies

I use an agile consulting approach. Leveraging tools like Miro.com, Monday.com, etc. gives me a toolbox to apply agile software methodologies to technical infrastructure architecture design. In the past, when I worked as a software developer, software engineer, and software architect, I was a follower of Extreme Programming. I apply the same or similar concepts and methods to infrastructure architecture design and consulting. This approach helps me to keep up with the speed of IT and high business expectations.

#7 Documentation

I document everything. The documentation is essential. If it’s not written down, it doesn’t exist! I like "Eleven Rules of Design Documentation" by Greg Ferro.

#8 Resource Mobilization

I leverage resources, internal and external. As TAMs, we have access to a lot of internal resources (GSS, Engineering, Product Management, Technical Marketing, etc.) which we can leverage for our TAM customers. We can also leverage external resources like partners, other vendors from the broader VMware ecosystem, etc. However, we should use resources efficiently. Do not forget, all human resources are shared, thus limited. And time is the most valuable resource, at least for humans, therefore time management is important. Anyway, resource mobilization is the true value of the VMware TAM program, therefore we must know how to leverage these resources.

#9 Customer Advocacy

As a TAM, I work for VMware but also for TAM customers. Therefore, I do customer advocacy within VMware and VMware advocacy within the Customer organization. This is again about the art of communication.

#10 Technical Expertise

Last but not least, I must have technical expertise and competency. I’m a Technical Account Manager, therefore I try to have deep technical expertise in at least one VMware area and broader technical proficiency in few other areas. This approach is often called Full Stack Engineering. I’m very aware of the fact that expertise and competency are very tricky and subjective. It is worth understanding the Dunning Kruger-Effect which is the law about the correlation between competence and confidence. In other words, I’m willing to have real competence and not only false confidence about the competence. If I do not feel confident in some area, I honestly admit it and try to find another resource (see rule #8). The best approach to get and validate my competency and expertise is to continuously learn and validate it by VMware advanced certifications.

Hope this write-up will be useful for at least one person in the VMware TAM organization.

Thursday, February 04, 2021

Back to basics - MTU & IP defragmentation

This is just a short blog post as it can be useful for other full-stack (compute/storage/network) infrastructure engineers.

I have just had a call from my customer with the following problem symptom. 

Symptom:

When an ESXi host (in a ROBO location) is connected to vCenter (in the datacenter), the TCP/IP communication overloads the 60 Mbps network link. In such a scenario, IP packets are fragmented and huge packet retransmission is observed.

Design drawing:

Hypothesis:

IP fragmentation is happening in the physical network and the path MTU is lower than 1280 bytes.

Planned test:

Find the smallest MTU in the end-2-end network path between ESXi and vCenter

vmkping -s 1472 -d VCENTER-IP

Decrease the -s parameter value until the ping is successful. This is the way to find the smallest MTU in the IP network path.

Back to basics

IP fragmentation is an Internet Protocol (IP) process that breaks packets into smaller pieces (fragments), so that the resulting pieces can pass through a link with a smaller maximum transmission unit (MTU) than the original packet size. The fragments are reassembled by the receiving host. [source]

The vmkping command has some parameters you should know and use in this case:

-s to set the payload size

Syntax: vmkping -s size IP-address

With the parameter -s you can define the size of the ICMP payload. If you have defined an MTU size of e.g. 1500 bytes and use this size in your vmkping command, you may get a "Message too long" error. This happens because ICMP needs 8 bytes for its ICMP header and IP needs 20 bytes for the IP header:

The size you need to use in your command will be:

1500 (MTU size) – 8 (ICMP header) – 20 (IP header) = 1472 bytes for ICMP payload

-d to disable IP fragmentation

Syntax: vmkping -d IP-address

Use the command "vmkping -s 1472 -d IP-address" to test your end-2-end network path.

Decrease -s parameter until the ping is successful.

Monday, January 11, 2021

Server rack design and capacity planning

Our VMware local SE team has got a great Christmas present from regional Intel BU. Four rack servers with very nice technical specifications and the latest Intel Optane technology. 

Here is the server technical spec: 

Node configuration (component - description - quantity):

  • CPU - Intel Platinum 8280L (28 cores, max memory 4.5 TB) - 2
  • DDR4 Memory - 768 GB DDR4 DRAM RDIMM - 12 x 64 GB
  • Intel Persistent Memory - 3 TB Intel Persistent Memory - 12 x 256 GB
  • Caching Tier - Intel Optane SSD DC P4800X Series (750 GB, 2.5in PCIe x4, 3D XPoint™) - 2
  • Capacity Tier - Intel SSD DC P4510 Series (4.0 TB, 2.5in PCIe 3.1 x4, 3D2, TLC) - 4
  • Networking (+ transceivers, cables) - Intel® Ethernet Network Adapter XXV710-DA2 (25G, 2 ports) - 1

These servers are vSAN Ready and the local VMware team is planning to use them for demonstration purposes of VMware SDDC (vSphere, vSAN, NSX, vRealize), therefore VMware Cloud Foundation (VCF) is a very logical choice.

Anyway, even a Software-Defined Data Center requires power and cooling, so I've been asked to help with the server rack design and proper power capacity planning. To be honest, server rack planning and design is not rocket science. It is just simple math & elementary physics; however, you have to know the power consumption of each computer component. I did some research and here is the math exercise with the power consumption of each component:

  • CPU - 2x CPU Intel Platinum 8280L (110 W Idle, 150 W Computational,  360 W Peak load)
    • Estimation: 2x150 W = 300 W
  • RAM - 12x 64 GB DDR4 DRAM RDIMM (768 GB)
    • Estimation: 12x 24 Watt = 288 W
  • Persistent RAM - 12x 256GB (3TB) Intel Persistent Memory
    • Estimation: 12x 15 Watt = 180 W
  • vSAN Caching Tier - 2x Intel Optane SSD DC P4800X 750GB
    • Estimation: 2x18W =>  36W
  • vSAN Capacity Tier - 4x Intel SSD DC P4510 Series 4TB
    • Estimation: 4x 16W => 64 W
  • NIC - 1x Intel® Ethernet Network Adapter XXV710-DA2 (25G, 2 ports)
    • Estimation: 15 W

If we sum the power consumption above, we will get 883 Watt per single server.  

To validate the estimation above, I used the DellEMC Enterprise Infrastructure Planning Tool available at http://dell-ui-eipt.azurewebsites.net/#/, where you can place infrastructure devices and get the Power and Heating calculations. You can see the IDLE and COMPUTATIONAL consumptions below.

Idle Power Consumption


Computational Power Consumption

POWER CONSUMPTION
Based on the above calculations, the server power consumption ranges between 300 and 900 Watts, so it is good to plan a 1 kW power budget per server, which in our case would be 4 kW / 17.4 A (assuming 230 V) per single power branch; that means one 32 A PDU just for these 4 servers.

For a full 45U rack with 21 servers, it would be 21 kW / 91.3 A, which would mean 3x 32 A branches in the rack.

HEATING AND COOLING
Heating and cooling are other considerations. Based on the Dell Infrastructure Planning Tool, the temperature in the environment will rise by 9°C (idle load) or even 15°C (computational load). This also requires appropriate cooling and electricity planning.

Conclusion

1 kW per server is a pretty decent consumption. When you design your cool SDDC, do not forget the basics - Power and Cooling.