Monday, September 26, 2016

How to change default policy of VMware Virtual Distributed Switch (VDS)

The main advantage of the VMware virtual distributed switch (VDS) over the VMware virtual standard switch (VSS) is the centralized configuration, which is pushed to the ESXi hosts. This centralized management provides a uniform virtual switch configuration across all ESXi hosts in the VDS scope. Virtual switch specific settings can generally be reconfigured for each port-group. In other words, a port-group is a virtual switch construct which defines how a particular group of ports will behave. Port-groups inherit their default settings from the virtual switch.

So far so good. However, VDS and VSS differ. On a VSS, you can change the default settings of the switch and all port-groups on that particular switch inherit them. This is not possible on a VDS. To be precise, it is not possible through the vSphere Client; through the GUI you can only change the settings of a particular port-group, as you can see on the screenshot below.

Various DVS port-group settings.

It is very annoying and time consuming to change the settings for each port-group manually, and there is also a very high probability of ending up with inconsistent settings across port-groups. To mitigate the challenges described above, you can leverage PowerCLI and change the default VDS settings (policy), as you can see in the example below. This changes the default VDS settings, and all new port-groups will be configured consistently and as required by the particular design.

The PowerCLI script below shows how to change the default settings for traffic shaping (in/out) and the uplink teaming policy to LACP. This particular example shows how to configure LAG teaming with IP src/dst load balancing, but any other teaming and load balancing method can be used as well.

 $vDSName = "vDS01"  
 $vds = Get-VDSwitch $vDSName  
 $spec = New-Object VMware.Vim.DVSConfigSpec  
 $spec.configVersion = $vds.ExtensionData.Config.ConfigVersion  
 $spec.defaultPortConfig = New-Object VMware.Vim.VMwareDVSPortSetting
  
 $inShapingPolicy = New-Object VMware.Vim.DVSTrafficShapingPolicy  
 $inShapingPolicy.Enabled = New-Object VMware.Vim.BoolPolicy  
 $inShapingPolicy.AverageBandwidth = New-Object VMware.Vim.LongPolicy  
 $inShapingPolicy.PeakBandwidth = New-Object VMware.Vim.LongPolicy  
 $inShapingPolicy.BurstSize = New-Object VMware.Vim.LongPolicy  
 $inShapingPolicy.Enabled.Value = $true  
 $inShapingPolicy.AverageBandwidth.Value = 1000000000  
 $inShapingPolicy.PeakBandwidth.Value = 5000000000  
 $inShapingPolicy.BurstSize.Value = 19200000000  
 $inShapingPolicy.Inherited = $false  
 $outShapingPolicy = New-Object VMware.Vim.DVSTrafficShapingPolicy  
 $outShapingPolicy.Enabled = New-Object VMware.Vim.BoolPolicy  
 $outShapingPolicy.AverageBandwidth = New-Object VMware.Vim.LongPolicy  
 $outShapingPolicy.PeakBandwidth = New-Object VMware.Vim.LongPolicy  
 $outShapingPolicy.BurstSize = New-Object VMware.Vim.LongPolicy  
 $outShapingPolicy.Enabled.Value = $true  
 $outShapingPolicy.AverageBandwidth.Value = 1000000000  
 $outShapingPolicy.PeakBandwidth.Value = 5000000000  
 $outShapingPolicy.BurstSize.Value = 19200000000  
 $outShapingPolicy.Inherited = $false  
 $uplinkTeamingPolicy = New-Object VMware.Vim.VmwareUplinkPortTeamingPolicy  
 $uplinkTeamingPolicy.policy = New-Object VMware.Vim.StringPolicy  
 $uplinkTeamingPolicy.policy.inherited = $false  
 $uplinkTeamingPolicy.policy.value = "loadbalance_ip"  
 $uplinkTeamingPolicy.uplinkPortOrder = New-Object VMware.Vim.VMwareUplinkPortOrderPolicy  
 $uplinkTeamingPolicy.uplinkPortOrder.inherited = $false  
 $uplinkTeamingPolicy.uplinkPortOrder.activeUplinkPort = New-Object System.String[] (1) # designates the number of uplinks you will be specifying.  
 $uplinkTeamingPolicy.uplinkPortOrder.activeUplinkPort[0] = "LAG01"  
 $spec.DefaultPortConfig.InShapingPolicy = $inShapingPolicy  
 $spec.DefaultPortConfig.OutShapingPolicy = $outShapingPolicy  
 $spec.DefaultPortConfig.UplinkTeamingPolicy = $uplinkTeamingPolicy  
 $vds.ExtensionData.ReconfigureDvs_Task($spec)  

Another example reconfigures the default DVS teaming policy to VMware Load Based Teaming (aka LBT).

 Import-Module VMware.VimAutomation.vds  
 $vDSName = "test"  
 $vds = Get-VDSwitch $vDSName  
 $spec = New-Object VMware.Vim.DVSConfigSpec  
 $spec.configVersion = $vds.ExtensionData.Config.ConfigVersion  
 $spec.defaultPortConfig = New-Object VMware.Vim.VMwareDVSPortSetting  
 $uplinkTeamingPolicy = New-Object VMware.Vim.VmwareUplinkPortTeamingPolicy  
 $uplinkTeamingPolicy.policy = New-Object VMware.Vim.StringPolicy  
 $uplinkTeamingPolicy.policy.value = "loadbalance_loadbased"  
 $spec.DefaultPortConfig.UplinkTeamingPolicy = $uplinkTeamingPolicy  
 $vds.ExtensionData.ReconfigureDvs_Task($spec)  

VDS supports the following teaming policies (source: vSphere Documentation); a quick PowerCLI check of the currently configured switch default is shown after the list.

  • failover_explicit
  • loadbalance_ip
  • loadbalance_loadbased
  • loadbalance_srcid
  • loadbalance_srcmac
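If you want to double check which teaming policy is currently configured as the switch default, a minimal read-only PowerCLI sketch like the one below should be enough. It assumes an existing connection to vCenter and a vDS named vDS01 (an example name, adjust to your environment).

 # Read the default uplink teaming policy configured on the vDS
 $vds = Get-VDSwitch "vDS01"
 $vds.ExtensionData.Config.DefaultPortConfig.UplinkTeamingPolicy.Policy.Value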

Please note that the examples above have to be altered to fulfill your particular needs, but you should now have a pretty good idea how to change the default DVS policy.

Wednesday, September 14, 2016

VMworld 2016 US sessions worth watching

Here is the list of VMworld 2016 US sessions I have watched or still have to watch over the next days and weeks. After watching a session, I categorize it and write a brief description. I also assign category labels and a technical level to each session.

Category labels:
  • Strategy
  • Architecture
  • Operations
  • High Level Product Overview
  • Deep Dive Product Overview
  • Technology Preview
  • Idea for Improvement

Technical levels:
  • Basic
  • Middle
  • Advanced

Note: If you have trouble replaying a particular session from my links below, go to the OFFICIAL VMWORLD SITE and search for the session by its session code. You will need to register there, but it should still be free of charge.

Already watched and categorized sessions

Session code: INF9044
Speaker(s): Emad Younis
Category labels: Technology Preview
Technical level: Basic-Middle
Brief session summary: Introduction and demo of the vCenter migration tool for easy migration from a Windows based vCenter to the vCenter Server Appliance (VCSA).

Session code: INF8260
Speaker(s): William Lam, Alan Renouf
Category labels: Technology Preview
Technical level: Middle-Advanced
Brief session summary: William explains the current possibilities for automated VCSA deployment and Alan presents what is coming soon. In a nutshell, a REST API is coming and it is really great if you ask me.

Session code: INF8108
Speaker(s): Ravi Soundararajan, Priya Sethuraman
Category labels: Deep Dive Product Overview
Technical level: Advanced
Brief session summary: At the beginning, some numbers are presented explaining the vCenter Server Appliance performance benefits over the Windows based vCenter. At 00:07:00 the vCenter deep dive starts with an explanation of the vCenter internal architecture (vsphere-client, sso, directory-service, vpxd, vpxd-svcs, vmware-sps [aka storage profile services], perfcharts, eam [ESX agent manager]). New information for me was that vCenter does not use the Inventory Service anymore; since 2015 it has been replaced by vpxd-svcs. Then the vsphere-client plugin architecture is explained.
Single vCenter Internal Architecture
Later, search use cases and their consequences for different vCenter/SSO topologies are explained.
Search in Single Site multi vCenter Topology
Search in Multi Site multi vCenter Topology
In the next section, other PSC and vCenter performance considerations are discussed. For the PSC, 2 vCPUs and 4 GB RAM are sufficient for any environment.

vCenter 6 has several hard limits that are good to know:
  • 640 concurrent operations before incoming requests are queued
  • 2000 concurrent sessions (user sessions + incoming requests + remote console sessions)
ESXi host limits
  • A host can perform up to 8 provisioning operations at once (provisioning = clone, vMotion, relocate, snapshot, etc.)
  • If a host is both the source and the destination, it can only do 4 operations at once
Datastore limits
  • A datastore can perform up to 128 vMotions at once
  • A datastore can perform up to 8 Storage vMotions at once
The presentation highlights the fact that vSphere 6 supports a dedicated vmknic with a special Provisioning TCP/IP stack for cold migration, cloning, and snapshots. I did not know that! It is a pretty cool management and potential performance optimization trick.
Dedicated vmknic configuration for provisioning
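As a hedged side note, such a dedicated provisioning vmknic can also be created from PowerCLI through the esxcli V2 interface. The host name, vmk number and port-group name below are placeholders I made up, and the esxcli argument names should be verified against your PowerCLI version; this is a sketch, not the procedure shown in the session.

 # Create a vmkernel interface bound to the Provisioning TCP/IP stack (vSphereProvisioning)
 $esxcli = Get-EsxCli -VMHost "esx01.example.com" -V2
 $ifArgs = $esxcli.network.ip.interface.add.CreateArgs()
 $ifArgs.interfacename = "vmk9"             # example vmk number
 $ifArgs.portgroupname = "Provisioning-PG"  # example port-group name
 $ifArgs.netstack = "vSphereProvisioning"   # system netstack key of the provisioning stack
 $esxcli.network.ip.interface.add.Invoke($ifArgs)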
Higher latency between vCenter and ESXi does not have such a big impact on vCenter operations (up to 100 ms is fine), but latency between vCenter and its database is critical. Therefore the embedded database is the recommended and preferred configuration.

vCenter performance statistics levels have a big impact on vCenter performance. The biggest performance drop (4x) is between level 1 and level 2. Therefore, the recommendation is to keep statistics on level 1 and use an external monitoring solution (like vROps) for historical performance monitoring and capacity planning, or size your database performance appropriately.
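If you want to check which statistics levels your vCenter currently uses before touching anything, a small read-only PowerCLI sketch like the following should work (it assumes an existing Connect-VIServer session):

 # List the vCenter historical statistics intervals and their configured levels
 $si = Get-View ServiceInstance
 $perfMgr = Get-View $si.Content.PerfManager
 $perfMgr.HistoricalInterval | Select-Object Name, SamplingPeriod, Length, Level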

If you have the VCSA, you can use tools like vimtop and cloudvm-ram-size for performance troubleshooting and tuning.

At the end of the presentation, general conclusions are presented. Generally, more hardware resources improve vCenter 6 performance significantly more than vCenter 5. Additional vCenter plugins and extensions can require more hardware resources. Future releases of vCenter will provide better scale and performance. Use the appliance (VCSA) to achieve better performance and simplified management.

vSPHERE HA


Session code: INF8045
Speaker(s): Manoj Krishnan, Matthew Meyer
Category labels: Technology Preview
Technical level: Middle-Advanced
Brief session summary: At the beginning of the presentation, vSphere HA basics (master/slaves, heartbeating, host failure scenarios, partitioning, host isolation) are explained by Matthew and Manoj. After the basic HA intro, Matthew explains VM Component Protection (HA APD and PDL responses). Matthew's explanation of PDL (00:12:20) is a little bit strange because he explains it on an example with LUN masking on Fibre Channel switches, which is not the right case. LUN masking is done on storage arrays; on Fibre Channel switches we usually do zoning, which is a different technique. Matthew claims an ESXi host will get PDL after a wrong SAN fabric reconfiguration, which is IMHO not the case, because PDL has to be sent from the storage through each particular path. Therefore, if there is no path between the ESXi host (initiator) and the storage front-end port (target), ESXi cannot get PDL from the storage array. Yes, you can get PDL from the storage array when you make a LUN masking mistake on the storage array. The slides are correct; just the explanation example is not. More interesting is Manoj's explanation of APD timeouts (00:13:45). APD VM recovery (the VMCP APD response) is executed after the ESXi APD timeout (140 seconds) plus the VMCP APD timeout (default 3 minutes), so roughly 320 seconds after the APD condition starts. The VMCP APD timeout can be configured through the vSphere Web Client. Another option is "Response for APD recovery after APD timeout". This response (reset or none) is applied in case the datastore comes back to the ESXi host during the "VMCP APD timeout" period. In such a case, affected VMs are not restarted on a different host in the cluster but are restarted on the same ESXi host. This prevents a potential panic state inside the VM guest OS caused by storage unavailability. The next presented topics are networking and storage recommendations. The presentation continues with an explanation of HA differences when using conventional storage versus VSAN. When conventional storage is used, the management network is used for network heartbeating, any datastore connected to more than one host can be used for storage heartbeats, and a host is declared isolated when the isolation address cannot be pinged. When VSAN is used, the VSAN network is used for network heartbeating, storage heartbeating can be used only if traditional datastores exist as well, and host isolation is declared when the host cannot ping the isolation addresses over the VSAN network. At 00:28:43 the presentation covers Admission Control details and HA integration with DRS. HA can ask DRS for cluster defragmentation in case there is no available space for failed VMs. At 00:34:31 a technology preview of new things VMware is working on starts: vSphere HA priorities (5 priorities instead of 3), HA orchestrated restart (dependencies among VMs), FTT Admission Control (percentage based but calculated automatically), and Proactive HA (vMotion in case of hardware health degradation).
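As a side note to the isolation address discussion above, custom HA isolation addresses are configured as cluster advanced options, which can be set with PowerCLI. A hedged sketch follows; the cluster name and IP address are placeholders, not values from the session:

 # Point HA to a custom isolation address (for example an IP reachable over the VSAN network)
 $cluster = Get-Cluster "Cluster01"
 New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.usedefaultisolationaddress" -Value "false" -Confirm:$false
 New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.isolationaddress0" -Value "192.168.10.1" -Confirm:$false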

ESXi

Session name: vSphere 6.x Host Resource Deep Dive
Session code: INF8430
Speaker(s): Frank Denneman, Niels Hagoort
Category labels: Architecture
Technical level: Advanced
Brief session summary: Frank presents his findings about NUMA and the implications for vSphere architecture. Frank also explains the differences in proximity between RAM, local NVMe and external storage and the impact on access times. Niels very nicely explains how hardware features like VXLAN, RSS, VMDq, device drivers and driver parameters impact the performance of virtual overlay networking (VXLAN).

VIRTUAL SAN

Session name: VSAN Networking Deep Dive and Best Practices
Session code: STO8165R
Speaker(s): Ankur Pai, John Nicholson
Category labels: Architecture, Deep Dive Product Overview
Technical level: Advanced
Brief session summary: John starts the presentation by discussing what kind of virtual switch to use. You can use standard or distributed; the distributed virtual switch is highly recommended because of configuration consistency and advanced features like NIOC, LACP, etc. Next there is a discussion about multicast as used by VSAN. Multicast is used just for state and metadata; VSAN data traffic is transferred via unicast. IGMP snooping should always be configured for L2 to optimize metadata traffic, and PIM only in case the VSAN traffic should go over L3. VSAN uses two multicast addresses: 224.2.3.4 port 23451 (Agent Group Multicast Address) and 224.1.2.3 port 12345 (Master Group Multicast Address). These default multicast addresses have to be changed in case two VSAN clusters are running in the same broadcast domain. John shares esxcli commands for changing the default multicast addresses, but generally it is recommended to keep each VSAN cluster in a dedicated non-routable VLAN (a single broadcast domain). John explains that VSAN is supported on both L2 and L3 topologies, but keeping it in an L2 topology is significantly simpler. Physical network equipment, cabling and subscription ratios matter. John warns against Cisco FEX (port extenders) and blade chassis switches where oversubscription is usually expected. The next topic is troubleshooting, and the presenters introduce the VSAN health check plugin, which removes the need for CLI (Ruby vSphere Console aka RVC) troubleshooting commands. VSAN does not need jumbo frames, but you can use them if dictated by some other requirement. LLDP/CDP should be enabled for better visibility and easier troubleshooting. After minute 24, John passes the presentation to Ankur. Ankur presents Virtual SAN stretched cluster networking considerations. He starts with a high level overview of the stretched VSAN topology and the witness requirement in a third location. A stretched L2 network is recommended; L3 is supported but more complex. The cross-site round trip has to be less than 5 ms and the recommended bandwidth is 10 Gbps. The witness can be connected over L3 with a round trip of less than 200 ms and a bandwidth of 100 Mbps (2 Mbps per 1000 VSAN components). Communication between the witness and the main site is L3 unicast, TCP 2233 (IO) and UDP 23451 (cluster), in both directions. Periodic heartbeating between the witness and the main site occurs every second; failure is declared after 5 consecutive failures. Static routes have to be configured between the data hosts and the witness host because a custom ESXi TCP/IP stack for VSAN is not supported at the moment. Ankur presents three form factors of the witness appliance (Large, Medium, Tiny). Later, Ankur explains bandwidth calculations for the cross-site interconnect and another calculation for the witness network bandwidth requirements. At the end there is a Q&A for about 5 minutes.
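The exact esxcli commands John shows for changing the multicast addresses are not captured in my notes, but the current VSAN network configuration (including the agent and master multicast addresses and ports) can at least be inspected from PowerCLI. A hedged sketch, with a placeholder host name:

 # Show the VSAN network configuration of a host, including multicast addresses and ports
 $esxcli = Get-EsxCli -VMHost "esx01.example.com" -V2
 $esxcli.vsan.network.list.Invoke()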

Session name: Extreme Performance Series: Virtual SAN Performance Troubleshooting
Session code: STO8743
Speaker(s): Zach Shen, Ruijin Zhou
Category labels: Deep Dive Product Overview
Technical level: Advanced
Brief session summary: Zach and Ruijin are from the VMware Performance Engineering team. Zach starts the presentation with a quick VSAN architecture overview, depicted below.
VSAN Architecture
VSAN Architecture - LSOM detail
At 7:10 Ruijin takes the stage and explains what VSAN Observer is. VSAN Observer has to be initiated from RVC (Ruby vSphere Console) and then you can access live data via a web URL: https://vc-hostname:8010. Ruijin goes through the VSAN Observer GUI and explains what is what. At 25:25 the presentation continues with troubleshooting tool #2, which is the VSAN Performance Service. The VSAN Performance Service is integrated in the vSphere Web Client. One can monitor common metrics (IOPS, latency, throughput) at 5-minute granularity. A single plot covers a 24-hour range, and 90 days of data are available before roll over.


VIRTUAL VOLUMES (VVOLs)

Session name: Top 10 Things You MUST Know Before Implementing Virtual Volumes
Session code: STO5888
Speaker(s): Eric Siebert
Category labels: Architecture, Deep Dive Product Overview
Technical level: Middle
Brief session summary: Eric Siebert from HPE explains the VVOLs 1.0 architecture in pretty nice detail, with some HP 3PAR specific implementation details, which is very good for understanding how a particular VVOL implementation fits into the general VMware VVOL framework.

BUSINESS CRITICAL APPLICATIONS

Session name: Performance Tuning and Monitoring for Virtualized Database Servers
Session code: VIRT7511
Speaker(s): David Klee, Thomas LaRock
Category labels: Architecture, Operations
Technical level: Middle
Brief session summary: TBD

vSPHERE CONFIGURATION MANAGEMENT

Session name: Enforcing a vSphere Cluster Design with PowerCLI Automation
Session code: INF8036
Speaker(s): Duncan Epping, Chris Wahl
Category labels: Idea for Improvement
Technical level: Middle to Advanced
Brief session summary: TBD


SITE RECOVERY MANAGER

Session name: SRM with NSX: Simplifying Disaster Recovery Operations & Reducing RTO
Session code: STO7977
Speaker(s): Rumen Colov, Stefan Tsonev
Category labels: High Level Product Overview
Technical level: Basic-Medium
Brief session summary: Rumen Colov is the SRM Product Manager, so his part of the session is more of a high level introduction to SRM and NSX integration. The session really starts at 8:30; before that it is just introduction. Rumen then explains the main SRM use cases (Disaster Recovery, Datacenter Migration, Disaster Avoidance) and the differences among them. The latest SRM versions support zero-downtime VM mobility in an Active/Active datacenter topology. Then Storage Policy-Based Management integration with SRM is explained, as well as NSX 6.2 interoperability with an NSX stretched logical wire (NSX Universal Logical Switch) across two vCenters. Rumen also covers the product licensing editions required for the integration. At 0:22:53 Rumen says that network inventory auto-mapping for NSX stretched networking works only with storage based replication (SBR) and not with host based replication (aka HBR or vSphere Replication). It seems to me the reason is that with SBR the information about network connectivity is saved in the VMX file, which is replicated by the storage, while with HBR we would need manual inventory mappings. At 0:24:00 the presentation is handed over to Stefan Tsonev (Director R&D). Stefan shows several screenshots of the SRM and NSX integration. It seems that Stefan is very deeply SRM oriented with basic NSX knowledge, but it is a very nice introduction into the basics of SRM and NSX integration.

LOG INSIGHT

Session name: Insight into the World of Logs with VMware vRealize Log Insight
Session code: MGT7685R
Speaker(s): Iwan Rahabok, Karl Fultz, Manny Sidhu
Category labels: Operations, Deep Dive Product Overview
Technical level: Middle
Brief session summary: A very nice walkthrough of VMware Log Insight use cases and real examples of how to analyze logs. At the end of the presentation, some architecture topologies for log management in a financial institution are explained.

Session name: vSphere Logs Grow Up! Tech Preview of Actionable Logging with vRealize Log Insight
Session code: INF8845
Speaker(s): Mike Foley, Antoan Arnaudov
Category labels: Technology Preview
Technical level: Basic
Brief session summary: In this session, Antoan and Mike show what is coming in the next release of vSphere from a logging perspective. We can expect significantly improved details in the vCenter events which are sent to the syslog server, and if you use something like Log Insight you can do a lot of magic with these details. It can significantly help with security audits, configuration management, etc.

PSO and CoE

Session name: How to Relocate Your Physical Data Center Without Downtime in a Few Clicks, Thanks to Automation
Session code: INF8837
Speaker(s): Rene-Francois Mennecier (PSO), Constantin Natchev (CoE)
Category labels: Operations, Idea for Improvement
Technical level: Middle
Brief session summary: Rene-Francois and Constantin present a datacenter migration project they delivered to one of the largest banks in Europe. The project was fully automated with vRealize Orchestrator, and enhanced storage vMotion (shared-nothing vMotion) was leveraged for the workload migrations.

Still have to watch and categorize sessions below

STO7650 - Duncan Epping, Lee Dilworth : Software-Defined Storage at VMware Primer
INF8858 - vSphere Identity: Multifactor Authentication Deep Dive
INF9089 - Managing vCenter Server at Scale? Here's What You Need to Know
INF8553 - The Nuts and Bolts of vSphere Resource Management
INF8959 - Extreme Performance Series: DRS Performance Deep Dive—Bigger Clusters, Better Balancing, Lower Overhead
VIRT8530R - Deep Dive on pNUMA & vNUMA - Save Your SQL VMs from Certain Doom!
INF8780R - vSphere Core 4 Performance Troubleshooting and Root Cause Analysis, Part 1: CPU and RAM
INF8701R - vSphere Core 4 Performance Troubleshooting and Root Cause Analysis, Part 2: Disk and Network
INF9205R - Troubleshooting vSphere 6 Made Easy: Expert Talk
INF8755R - Troubleshooting vSphere 6: Tips and Tricks for the Real World
INF8850 - vSphere Platform Security
VIRT8290R - Monster VMs (Database Virtualization) Doing IT Right
VIRT7654 - SQL Server on vSphere: A Panel with Some of the World's Most Renowned Experts
VIRT7621 - Virtualize Active Directory, the Right Way!
INF8469 - iSCSI/iSER: HW SAN Performance Over the Converged Data Center
SDDC7808-S - How I Learned to Stop Worrying and Love Consistency: Standardizing Datacenter Designs
INF9048 - An Architect's Guide to Designing Risk: The VCDX Methodology
INF8644 - Getting the Most out of vMotion: Architecture, Features, Performance and Debugging
INF8856 - vSphere Encryption Deep Dive: Technology Preview
INF8914 - Mastering the VM Tools Lifecycle in your vSphere Data Center
INF7825 - vSphere DRS and vRealize Operations: Better Together
INF7827 - vSphere DRS Deep Dive: Understanding the Best Practices, Advanced Concepts, and Future Direction of DRS
INF8275R - How to Manage Health, Performance, and Capacity of Your Virtualized Data Center Using vSphere with Operations Management
INF9047 - Managing vSphere 6.0 Deployments and Upgrades
STO7965 - VMware Site Recovery Manager: Technical Walkthrough and Best Practices
STO7973 - Architecting Site Recovery Manager to Meet Your Recovery Goals
STO8344 - SRM with vRA 7: Automating Disaster Recovery Operations
STO8246R - Virtual SAN Technical Deep Dive and What’s New
STO8750 - Troubleshooting Virtual SAN 6.2: Tips & Tricks for the Real World
STO7904 - Virtual SAN Management Current & Future
STO8179R - Understanding the Availability Features of Virtual SAN
STO7557 - Successful Virtual SAN 6 Stretched Clusters
STO7645r - Virtual Volumes Technical Deep Dive
INF8038r - Getting Started with PowerShell and PowerCLI for Your VMware Environment
NET7858R - Reference Design for SDDC with NSX and vSphere: Part 2
CNA9993-S - Cloud Native Applications, What it means & Why it matters
CNA7741 - From Zero to VMware Photon Platform
CNA7524 - Photon Platform, vSphere, or Both?
SDDC7502 - On the Front Line: A VCDX Perspective Working in VMware Global Support Services
VIRT9034 - Oracle Databases Licensing on a Hyper-Converged Platform
VIRT9009 - Licensing SQL Server and Oracle on vSphere
INF9083 - Ask the vCenter Server Experts Panel
INF9151 - Getting to Zero: Zero Downtime, Zero Data loss with vSphere Fault Tolerance
INF9119 - How To manage PSC like Batman
INF8225 - The vCenter Server and PSC Guide to the Galaxy
INF9144 - An Overview of vCenter Server Appliance management interface
INF9128 - Day 2 operations: A vCenter Server Administrator's Diary
INF8172 - vSphere Client Roadmap: Host Client, HTML 5 Client, and web Client
INF8631 - VMware Certificate Management for Mere Mortals
INF8465 - Power Management's Impact on Performance
INF8089 - vSphere Compute and Memory
VIRT7598 - Monster VM Database Performance

All VMworld US 2016 Breakout Sessions

All VMworld US 2016 sessions are listed here.

If you cannot play a session from the list above, go to the OFFICIAL SITE of VMWORLD 2016 US and search for the particular session. You will need to register there, but it is free of charge.

And here is another repository of all videos for VMworld 2016 US.

Wednesday, September 07, 2016

VMware Virtual Machine Hardware Version and CPU Features

I always thought that the only device not virtualized by VMware ESXi is the CPU. That is generally true, but I have just been informed by someone that the available CPU instruction sets (features) depend on the VM hardware version. CPU features are generally enhanced CPU instruction sets for special purposes. For more information about CPUID and features, read this.

My regular readers know that I don't believe anything unless I test it. Therefore I did a simple test. I provisioned a new VM with hardware version 4, installed a FreeBSD guest OS and identified the CPU features. You can see the screenshot below.

VM with hardware version 4 and FreeBSD Guest OS
Next, I provisioned another VM with hardware version 10 and a FreeBSD guest OS on the same ESXi host and listed the CPU features. See the screenshot below.

VM with hardware version 10 and FreeBSD Guest OS
Now, if you compare the CPU features you can see the differences. The following CPU features are added in VM hardware version 10:

  • FMA
  • PCID
  • X2APIC
  • XSAVE
  • OSXSAVE
  • AVX
  • F16C
  • RDRAND
  • HV
Does it matter? Well, it depends on whether your particular application really needs advanced CPU features.

The vCPU in VMware virtualization is really not virtualized, but some CPU features are masked in older VM hardware versions because the VM hardware emulates a particular chipset.
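If you want to repeat the test yourself, the VM hardware version can be checked and upgraded from PowerCLI as well. A hedged sketch; the VM name is a placeholder and the VM has to be powered off before the upgrade:

 # Check the current virtual hardware version of the VM
 Get-VM "test-freebsd" | Select-Object Name, Version

 # Upgrade the VM to hardware version 10 (the VM must be powered off)
 Set-VM -VM (Get-VM "test-freebsd") -Version v10 -Confirm:$false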

I did not know that! I have never thought about it! My bad.

Anyway, this is just another proof that every day brings some surprise, and you can always learn something new even in an area you believe you are good at. We never know everything.