Sunday, September 24, 2017

How to downsize vCenter Server Appliance 6.5 storage?

Last week I was asked by a partner how to downsize vCenter Server Appliance (VCSA) 6.5 storage.

Well, let's start with upsizing. Adding CPU and RAM resources is very easy. VCSA 6.5 supports CPU Hot Add and Memory Hot Plug, so you do not even need to shut down VCSA to increase CPU and RAM resources.

CPU Hot Add and RAM Hot Plug
Storage expansion, though, is a little bit more difficult. You still do not have to shut down VCSA because the virtual disk can be hot-extended; however, after the disk is extended, you have to grow the disk partitions within the operating system, Photon OS in this particular case. William Lam wrote a blog post about it here. Generally, you just have to run the script /usr/lib/applmgmt/support/scripts/autogrow.sh within the VCSA shell, so it is not rocket science, you just need to know which script to execute.
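
A minimal sketch of the procedure, assuming the VMDK has already been hot-extended in the vSphere Web Client (the exact appliance shell commands may differ slightly between VCSA builds):

# Log in to the VCSA appliance shell as root and enable BASH access
shell.set --enabled true
shell

# Grow the Photon OS partitions into the newly added space
/usr/lib/applmgmt/support/scripts/autogrow.sh

# Verify the new filesystem sizes
df -h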

So upsizing is easily doable. But what about downsizing? VCSA 6.5 supports CPU Hot Remove, but RAM cannot be hot-removed, therefore for RAM downsizing you have to shut down VCSA, decrease the memory resources, and power the VM back on. Not a big deal, just a small downtime, so it can be done during a maintenance window. But a storage downsize is not possible. Well, in theory it is possible, but it is definitely not supported to decrease the size of the disk partitions and the virtual disk itself, because it is hard and also very risky, as you do not know where the data are located within the disk.
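
For illustration, the RAM downsize can also be scripted from outside the appliance, for example with the open-source govc CLI. This is just a sketch; the VM name and memory size are placeholders, and the same change can of course be done in the vSphere Web Client:

# Power off the VCSA VM (plan a maintenance window)
govc vm.power -off vcsa01

# Decrease the memory allocation to 10 GB (value is in MB)
govc vm.change -vm vcsa01 -m 10240

# Power the VCSA VM back on
govc vm.power -on vcsa01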

Warning: I have been told by someone that the downsizing method described below does not work and that the restore options only allow you to choose a form factor equal to or bigger than the originally backed-up VCSA. Unfortunately, I have the smallest form factor in my home lab, so I cannot verify it, but I believe him. Sorry for the misleading idea, but there might be some unsupported method to tweak the backup files to give the impression that the backup comes from a smaller VCSA form factor.

We have another downsizing option for VCSA 6.5. You are most probably aware that VCSA 6.5 has introduced application-based backup, where the vCenter inventory, identity, and even the database are backed up to a remote location via one of the following protocols: HTTP, HTTPS, SCP, FTP, or FTPS. The restore of a VCSA 6.5 backup is done as a new VCSA deployment executed in restore mode, where a previously created backup is used as the restore point. The nice thing is that during the VCSA restore you can choose a different VCSA form factor with a different storage footprint. So this is a potential way to downsize vCenter Server Appliance 6.5 storage in a supported manner. The only assumption is that your backup data will fit into the new storage size.

If you plan such a downsizing exercise, please do not forget to keep your original vCenter Server Appliance somewhere so you have a simple way to roll back.

Hope this is helpful and informative.

Tuesday, September 19, 2017

What is the difference between VMware vRealize Suite and vCloud Suite

Several times I have been asked by my customers what the difference is between VMware vRealize Suite and vCloud Suite. Both are actually licensing packaging suites. VMware vCloud Suite is a superset of VMware vRealize Suite. In other words, vCloud Suite includes everything in vRealize Suite plus the vSphere infrastructure (ESXi Enterprise Plus licenses).

VMware vRealize Suite is a purpose-built management solution for the heterogeneous data center and the hybrid cloud. It is designed to deliver and manage infrastructure and applications to increase business agility while maintaining IT control. It provides the most comprehensive management stack for private and public clouds, multiple hypervisors, and physical infrastructure.

A vRealize Suite editions comparison is available here and visualized in the figure below.

vRealize Suite editions comparison
More specifically, vRealize Suite 2017 includes the following components:
  • vSphere Replication 6.5.1
  • vSphere Data Protection 6.1.5
  • vSphere Big Data Extensions 2.3.2
  • vRealize Orchestrator Appliance 7.3.0
  • vRealize Suite Lifecycle Manager 1.0
  • vRealize Operations Manager 6.6.1
  • vRealize Business for Cloud 7.3.1
  • vRealize Log Insight 4.5.0
  • vRealize Automation 7.3.0
  • vRealize Code Stream Management Pack for IT DevOps 2.2.1
VMware vCloud Suite brings together VMware’s industry-leading vSphere hypervisor with vRealize Suite, the complete cloud management solution.

vCloud Suite 2017 currently includes vRealize Suite 2017, vSphere Enterprise Plus, and NSX-V:
  • Everything included in vRealize Suite 2017
  • plus vSphere Enterprise Plus Infrastructure for vCloud Suite (ESXi licenses only, vCenter is not included)
  • ESXi 6.5 U1
  • plus NSX-V
  • NSX-V 6.3.3
What's new in 2017 Suites?
  1. VMware has introduced a new overall manager called vRealize Suite Lifecycle Manager to simplify deployment and ongoing management of the vRealize products.
  2. NSX-V (NSX for vSphere) is included in vCloud Suite 2017 ???
Please note that vCenter Server is not included in vCloud Suite, therefore an additional vCenter Server license is required.

Hope this helps to understand VMware license packaging.

...........................................................................................

UPDATE 2017-09-20: The note about NSX-V licensing in the vCloud Suite 2017 release announcement is a little bit misleading. On the VMware website here it is written:
NSX is available as an optional component that can be purchased with vCloud Suite. 
So, VMware vCloud Suite 2017 customers are not entitled to use full NSX-V. It is just an entitlement to use NSX Manager for AV (a replacement for vCNS Endpoint). NSX Manager for AV has been available for free even before vCloud Suite 2017, so that is nothing new. And of course, any VMware customer can purchase full NSX-V to extend their vSphere infrastructure with network virtualization and datacenter modernization.

Sunday, September 17, 2017

CLI for VMware Virtual Distributed Switch - implementation procedure

Some time ago I blogged about Perl scripts emulating the well-known physical network switch CLI commands (show mac-address-table and show interface status) for VMware Distributed Virtual Switch (aka VDS). See the blog post "CLI for VMware Virtual Distributed Switch" here.

Now it is time to operationalize it. My scripts are written in Perl, leveraging the vSphere Perl SDK, which is distributed by VMware as vCLI. vCLI is available for Linux and Windows. I personally prefer Linux over Windows, therefore the implementation procedure below is for CentOS 7.

Implementation procedure for CentOS 7

1/ Install CentOS 7 (minimal OS installation)

2/ Install Perl 

yum install perl

3/ Install vCLI Prerequisite Software for Linux Systems with Internet Access

yum install e2fsprogs-devel libuuid-devel openssl-devel perl-devel
yum install glibc.i686 zlib.i686
yum install perl-XML-LibXML libncurses.so.5 perl-Crypt-SSLeay

4/ Install the vCLI Package on a Linux System with Internet Access

Download vSphere Perl SDK (Use link here)
tar -zxvf VMware-vSphere-CLI-6.X.X-XXXXX.x86_64.tar.gz
sudo vmware-vsphere-cli-distrib/vmware-install.pl

5/ Install VDSCLI scripts

# Create directory for vdscli
mkdir vdscli

# Change directory to vdscli
cd vdscli

# Install wget to be able to download VDSCLI files
yum install wget

# Download vdscli.pl
wget https://raw.githubusercontent.com/davidpasek/vdscli/master/vdscli.pl

# Download supporting shell wrapper for show mac-address-table
wget https://raw.githubusercontent.com/davidpasek/vdscli/master/show-mac-address-table.sh

# Download supporting shell wrapper for show interface status
wget https://raw.githubusercontent.com/davidpasek/vdscli/master/show-interface-status.sh

# Change mod of all files in VDSCLI directory to allow execution
chmod 755 *

# Edit shell wrappers with your specific vCenter hostname and credentials
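
For illustration only, the edited wrapper typically ends up looking something like the sketch below. The variable names and the vdscli.pl option names shown here are hypothetical placeholders; check vdscli.pl itself for the exact parameters it expects:

#!/bin/sh
# show-mac-address-table.sh - hypothetical example content, adjust to your environment
VCENTER=vc01.example.com
VDSCLI_USER=vdscli@vsphere.local
VDSCLI_PASS='SecretPassword'
./vdscli.pl --server "$VCENTER" --username "$VDSCLI_USER" --password "$VDSCLI_PASS" --show-mac-address-table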

It is highly recommended to create a specific read-only (AD or vSphere SSO) account for VDSCLI, as depicted on the screenshot below.

vSphere SSO account for VDSCLI

6/ VDSCLI validation

# You must be in vdscli directory
# Run command below
./show-mac-address-table.sh

You should get the MAC address table of the VMware Distributed Virtual Switch, something like the screenshot below.

Output from ./show-mac-address-table.sh 

So now we have Perl scripts to get information from a VMware Distributed Virtual Switch. So far so good. However, we would like to have an interactive CLI to get the same user experience as on a physical switch CLI, right? For the interactive CLI I have decided to use Python ishell (https://github.com/italorossi/ishell).

7/ iShell installation

# Install python
yum install python

# Install and upgrade pip which is Python package manager
yum install epel-release
yum install python-pip
pip install --upgrade pip

# We need gcc therefore we install all development tools
yum group install "Development Tools"
yum install python-devel
yum install ncurses-devel

# Now we can finally install ishell
pip install ishell

8/ Interactive VDSCLI validation

# You must be in vdscli directory
# Run the command below
./vdscli-ishell.py

You should be able to use an interactive shell with just two show commands. See the screenshot below to get an impression of how it works.

VDSCLI Interactive Shell

9/ Remote SSH or Telnet

The last step is to expose the VDSCLI interactive shell over SSH or Telnet. I will show you how to do it for SSH, but if you enable Telnet on your Linux server it will work as well.

Let's add a specific OS user:
adduser admin
passwd admin

You have to copy all vdscli scripts to the home directory of the newly created user (/home/admin). It is user admin in our case, but you can create any username you want.

We have to add our interactive shell /home/admin/vdscli-ishell.py into /etc/shells because only programs configured there can be used as login shells.

chsh admin
and use /home/admin/vdscli-ishell.py as the shell. The whole step is summarized in the sketch below.
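
Putting it all together, the whole step could look like this (a sketch; the source path /root/vdscli is an assumption based on where we built VDSCLI earlier):

# Copy the VDSCLI scripts to the new user's home directory
cp /root/vdscli/* /home/admin/
chown admin:admin /home/admin/*

# Register the interactive shell as a valid login shell
echo "/home/admin/vdscli-ishell.py" >> /etc/shells

# Assign it as the login shell of user admin
chsh -s /home/admin/vdscli-ishell.py admin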

CONCLUSION

So in the end you can simply SSH to the Linux system we have built and immediately use the CLI to the VMware Distributed Switch, as depicted on the screenshot below.

VDSCLI Interactive Shell over SSH
And that was the goal. I hope somebody else in the VMware community will find it useful.

Friday, September 01, 2017

ESXi Physical NIC Capabilities for NSX VTEP

NSX VTEP encapsulation significantly benefits from physical NIC offload capabilities. In this blog post, I will show how to identify these NIC capabilities.

Check NIC type and driver

esxcli network nic get -n vmnic4
[dpasek@esx01:~] esxcli network nic get -n vmnic4
   Advertised Auto Negotiation: false
   Advertised Link Modes: 10000BaseT/Full
   Auto Negotiation: false
   Cable Type: FIBRE
   Current Message Level: 0
   Driver Info: 
         Bus Info: 0000:05:00.0
         Driver: bnx2x
         Firmware Version: bc 7.13.75
         Version: 2.713.10.v60.4
   Link Detected: true
   Link Status: Up 
   Name: vmnic4
   PHYAddress: 1
   Pause Autonegotiate: false
   Pause RX: true
   Pause TX: true
   Supported Ports: FIBRE
   Supports Auto Negotiation: false
   Supports Pause: true
   Supports Wakeon: false
   Transceiver: internal
   Virtual Address: 00:50:56:59:d8:8c
   Wakeon: None
[dpasek@esx01:~] 

esxcli software vib list | grep bnx2x
[dpasek@esx01:~] esxcli software vib list | grep bnx2x
net-bnx2x                      2.713.10.v60.4-1OEM.600.0.0.2494585   QLogic     VMwareCertified   2017-05-10  
[dpasek@esx01:~]

Driver parameters can be listed with the following command …
esxcli system module parameters list -m bnx2x
[dpasek@esx01:~] esxcli system module parameters list -m bnx2x
Name                                  Type          Value  Description                                                                                                                                                                                                                                                                                    
------------------------------------  ------------  -----  -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
RSS                                   int                  Controls the number of queues in an RSS pool. Supported Values 2-4.                                                                                                                                                                                                                            
autogreeen                            uint                  Set autoGrEEEn (0:HW default; 1:force on; 2:force off)                                                                                                                                                                                                                                        
bnx2x_vf_passthru_wait_event_timeout  uint                 For debug purposes, set the value timeout value on VF OP to complete in ms                                                                                                                                                                                                                     
debug                                 uint                  Default debug msglevel                                                                                                                                                                                                                                                                        
debug_unhide_nics                     int                  Force the exposure of the vmnic interface for debugging purposes[Default is to hide the nics]1.  In SRIOV mode expose the PF                                                                                                                                                                   
disable_feat_preemptible              int                  For debug purposes, disable FEAT_PREEMPTIBLE when set to value of 1                                                                                                                                                                                                                            
disable_fw_dmp                        int                  For debug purposes, disable firmware dump  feature when set to value of 1                                                                                                                                                                                                                      
disable_iscsi_ooo                     uint                  Disable iSCSI OOO support                                                                                                                                                                                                                                                                     
disable_rss_dyn                       int                  For debug purposes, disable RSS_DYN feature when set to value of 1                                                                                                                                                                                                                             
disable_tpa                           uint                  Disable the TPA (LRO) feature                                                                                                                                                                                                                                                                 
disable_vxlan_filter                  int                  Enable/disable vxlan filtering feature. Default:1, Enable:0, Disable:1                                                                                                                                                                                                                         
dropless_fc                           uint                  Pause on exhausted host ring                                                                                                                                                                                                                                                                  
eee                                                        set EEE Tx LPI timer with this value; 0: HW default; -1: Force disable EEE.                                                                                                                                                                                                                    
enable_default_queue_filters          int                  Allow filters on the default queue. [Default is disabled for non-NPAR mode, enabled by default on NPAR mode]                                                                                                                                                                                   
enable_geneve_ofld                    int                  Enable/Disable GENEVE offloads. 1: [Default] Enable GENEVE Offloads. 0: Disable GENEVE Offloads.                                                                                                                                                                                               
enable_live_grcdump                   int                  Enable live GRC dump 0x0: Disable live GRC dump, 0x1: Enable Parity/Live GRC dump [Enabled by default], 0x2: Enable Tx timeout GRC dump, 0x4: Enable Stats timeout GRC dump                                                                                                                    
enable_vxlan_ofld                     int                  Allow vxlan TSO/CSO offload support.[Default is enabled, 1: enable vxlan offload, 0: disable vxlan offload]                                                                                                                                                                                    
heap_initial                          int                  Initial heap size allocated for the driver.                                                                                                                                                                                                                                                    
heap_max                              int                  Maximum attainable heap size for the driver.                                                                                                                                                                                                                                                   
int_mode                              uint                  Force interrupt mode other than MSI-X (1 INT#x; 2 MSI)                                                                                                                                                                                                                                        
max_agg_size_param                    uint                 max aggregation size                                                                                                                                                                                                                                                                           
max_vfs                               array of int         Number of Virtual Functions: 0 = disable (default), 1-64 = enable this many VFs                                                                                                                                                                                                                
mrrs                                  int                   Force Max Read Req Size (0..3) (for debug)                                                                                                                                                                                                                                                    
multi_rx_filters                      int                  Define the number of RX filters per NetQueue: (allowed values: -1 to Max # of RX filters per NetQueue, -1: use the default number of RX filters; 0: Disable use of multiple RX filters; 1..Max # the number of RX filters per NetQueue: will force the number of RX filters to use for NetQueue
native_eee                            int                                                                                                                                                                                                                                                                                                                 
num_queues                            int                   Set number of queues (default is as a number of CPUs)                                                                                                                                                                                                                                         
num_queues_on_default_queue           int                  Controls the number of RSS queues ( 1 or more) enabled on the default queue. Supported Values 1-7, Default=4                                                                                                                                                                                   
num_rss_pools                         int                  Control the existance of an RSS pool. When 0,RSS pool is disabled. When 1, there will be an RSS pool (given that RSS>0).                                                                                                                                                                       
poll                                  uint                  Use polling (for debug)                                                                                                                                                                                                                                                                       
pri_map                               uint                  Priority to HW queue mapping                                                                                                                                                                                                                                                                  
psod_on_panic                         int                   PSOD on panic                                                                                                                                                                                                                                                                                 
rss_on_default_queue                  int                  RSS feature on default queue on eachphysical function that is an L2 function. Enable=1, Disable=0. Default=0                                                                                                                                                                                   
skb_mpool_initial                     int                  Driver's minimum private socket buffer memory pool size.                                                                                                                                                                                                                                       
skb_mpool_max                         int                  Maximum attainable private socket buffer memory pool size for the driver.                                                                                                                                                                                                                      

To get a driver parameter value …
esxcfg-module --get-options bnx2x 
[dpasek@esx01:~] esxcfg-module -g bnx2x
bnx2x enabled = 1 options = ''



HCL device identifiers

vmkchdev -l | grep vmnic
[dpasek@esx01:~] vmkchdev -l | grep vmnic
0000:02:00.0 14e4:1657 103c:22be vmkernel vmnic0
0000:02:00.1 14e4:1657 103c:22be vmkernel vmnic1
0000:02:00.2 14e4:1657 103c:22be vmkernel vmnic2
0000:02:00.3 14e4:1657 103c:22be vmkernel vmnic3
0000:05:00.0 14e4:168e 103c:339d vmkernel vmnic4
0000:05:00.1 14e4:168e 103c:339d vmkernel vmnic5
0000:88:00.0 14e4:168e 103c:339d vmkernel vmnic6
0000:88:00.1 14e4:168e 103c:339d vmkernel vmnic7
So, in the case of vmnic4 we have:
  • VID:DID SVID:SSID
  • 14e4:168e 103c:339d

Check TSO configuration

esxcli network nic tso get
[dpasek@esx01:~] esxcli network nic tso get
NIC     Value
------  -----
vmnic0  on   
vmnic1  on   
vmnic2  on   
vmnic3  on   
vmnic4  on   
vmnic5  on   
vmnic6  on   
vmnic7  on   

esxcli system settings advanced list -o /Net/UseHwTSO
[dpasek@esx01:~] esxcli system settings advanced list -o /Net/UseHwTSO
   Path: /Net/UseHwTSO
   Type: integer
   Int Value: 1
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value: 
   Default String Value: 
   Valid Characters: 
   Description: When non-zero, use pNIC HW TSO offload if available

If you want to disable TSO, use the following commands …
esxcli network nic software set --ipv4tso=0 -n vmnicX
esxcli network nic software set --ipv6tso=0 -n vmnicX

TSO settings in a Linux guest OS can be changed with the command …
ethtool -K ethX tso on|off

Check LRO configuration

esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
[dpasek@esx01:~] esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
   Path: /Net/TcpipDefLROEnabled
   Type: integer
   Int Value: 1
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value: 
   Default String Value: 
   Valid Characters: 
   Description: LRO enabled for TCP/IP

VMXNET3 settings can be validated with the command …
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
[dpasek@esx01:~] esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
   Path: /Net/Vmxnet3HwLRO
   Type: integer
   Int Value: 1
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value: 
   Default String Value: 
   Valid Characters: 
   Description: Whether to enable HW LRO on pkts going to a LPD capable vmxnet3

LRO settings in a Linux guest OS can be changed with the command …
ethtool -K ethX lro on|off

Check Checksum offload configuration

ESX settings
esxcli network nic cso get
[dpasek@esx01:~] esxcli network nic cso get
NIC     RX Checksum Offload  TX Checksum Offload
------  -------------------  -------------------
vmnic0  on                   on                 
vmnic1  on                   on                 
vmnic2  on                   on                 
vmnic3  on                   on                 
vmnic4  on                   on                 
vmnic5  on                   on                 
vmnic6  on                   on                 
vmnic7  on                   on                 
[dpasek@esx01:~] 

The following command can be used for disabling CSO for a specific pNIC:
esxcli network nic cso set -n vmnicX



Check VXLAN offloading

vsish -e get /net/pNics/vmnic4/properties
[dpasek@esx01:~] vsish -e get /net/pNics/vmnic4/properties
properties {
   Driver Name:bnx2x
   Driver Version:2.713.10.v60.4
   Driver Firmware Version:bc 7.13.75
   System Device Name:vmnic4
   Module Interface Used By The Driver:vmklinux
   Device Hardware Cap Supported:: 0x493c032b -> VMNET_CAP_SG VMNET_CAP_IP4_CSUM VMNET_CAP_HIGH_DMA VMNET_CAP_TSO VMNET_CAP_HW_TX_VLAN VMNET_CAP_HW_RX_VLAN VMNET_CAP_SG_SPAN_PAGES VMNET_CAP_IP6_CSUM VMNET_CAP_TSO6 VMNET_CAP_TSO256k VMNET_CAP_ENCAP VMNET_CAP_GENEVE_OFFLOAD VMNET_CAP_SCHED
   Device Hardware Cap Activated:: 0x403c032b -> VMNET_CAP_SG VMNET_CAP_IP4_CSUM VMNET_CAP_HIGH_DMA VMNET_CAP_TSO VMNET_CAP_HW_TX_VLAN VMNET_CAP_HW_RX_VLAN VMNET_CAP_SG_SPAN_PAGES VMNET_CAP_IP6_CSUM VMNET_CAP_TSO6 VMNET_CAP_TSO256k VMNET_CAP_SCHED
   Device Software Cap Activated:: 0x30800000 -> VMNET_CAP_RDONLY_INETHDRS VMNET_CAP_IP6_CSUM_EXT_HDRS VMNET_CAP_TSO6_EXT_HDRS
   Device Software Assistance Activated:: 0 -> No matching defined enum value found.
   PCI Segment:0
   PCI Bus:5
   PCI Slot:0
   PCI Fn:0
   Device NUMA Node:0
   PCI Vendor:0x14e4
   PCI Device ID:0x168e
   Link Up:1
   Operational Status:1
   Administrative Status:1
   Full Duplex:1
   Auto Negotiation:0
   Speed (Mb/s):10000
   Uplink Port ID:0x0400000a
   Flags:: 0x41e0e -> DEVICE_PRESENT DEVICE_OPENED DEVICE_EVENT_NOTIFIED DEVICE_SCHED_CONNECTED DEVICE_USE_RESPOOLS_CFG DEVICE_RESPOOLS_SCHED_ALLOWED DEVICE_RESPOOLS_SCHED_SUPPORTED DEIVCE_ASSOCIATED
   Network Hint:
   MAC address:9c:dc:71:db:d0:38
   VLanHwTxAccel:1
   VLanHwRxAccel:1
   States:: 0xff -> DEVICE_PRESENT DEVICE_READY DEVICE_RUNNING DEVICE_QUEUE_OK DEVICE_LINK_OK DEVICE_PROMISC DEVICE_BROADCAST DEVICE_MULTICAST
   Pseudo Device:0
   Legacy vmklinux device:1
   Respools sched allowed:1
   Respools sched supported:1
}

VXLAN offload capability is called 'VMNET_CAP_ENCAP'. That's what you need to look for.
vsish -e get /net/pNics/vmnic4/properties | grep VMNET_CAP_ENCAP
[dpasek@esx01:~] vsish -e get /net/pNics/vmnic4/properties | grep VMNET_CAP_ENCAP
   Device Hardware Cap Supported:: 0x493c032b -> VMNET_CAP_SG VMNET_CAP_IP4_CSUM VMNET_CAP_HIGH_DMA VMNET_CAP_TSO VMNET_CAP_HW_TX_VLAN VMNET_CAP_HW_RX_VLAN VMNET_CAP_SG_SPAN_PAGES VMNET_CAP_IP6_CSUM VMNET_CAP_TSO6 VMNET_CAP_TSO256k VMNET_CAP_ENCAP VMNET_CAP_GENEVE_OFFLOAD VMNET_CAP_SCHED

Check VMDq (NetQueue)

This esxcli command shows information about the filters supported per vmnic and used by NetQueue.
esxcli network nic queue filterclass list
[dpasek@esx01:~] esxcli network nic queue filterclass list
NIC     MacOnly  VlanOnly  VlanMac  Vxlan  Geneve  GenericEncap
------  -------  --------  -------  -----  ------  ------------
vmnic0    false     false    false  false   false         false
vmnic1    false     false    false  false   false         false
vmnic2    false     false    false  false   false         false
vmnic3    false     false    false  false   false         false
vmnic4     true     false    false  false   false         false
vmnic5     true     false    false  false   false         false
vmnic6     true     false    false  false   false         false
vmnic7     true     false    false  false   false         false

Dynamic NetQ

The following command will output the NetQueue counts for all vmnics in your ESXi host.
esxcli network nic queue count get
[dpasek@esx01:~] esxcli network nic queue count get 
NIC     Tx netqueue count  Rx netqueue count
------  -----------------  -----------------
vmnic0                  1                  1
vmnic1                  1                  1
vmnic2                  1                  1
vmnic3                  1                  1
vmnic4                  8                  5
vmnic5                  8                  5
vmnic6                  8                  5
vmnic7                  8                  5

It is possible to disable NetQueue at the ESXi host level using the following command:
esxcli system settings kernel set --setting="netNetqueueEnabled" --value="false"

VXLAN PERFORMANCE

RSS can help when VXLAN is used because VXLAN traffic can be distributed among multiple hardware queues. NICs that offer RSS can achieve a throughput of around 9 Gbps, whereas NICs without RSS typically reach only around 6 Gbps. Therefore, the right choice of physical NIC is critical.
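
If the NIC driver supports it, RSS can be enabled through the driver module parameters listed earlier in this post. Here is a sketch for the bnx2x driver used in this example; the values are illustrative only, check the NIC vendor documentation for recommended settings, and note that a host reboot is required for module parameters to take effect:

# Enable one RSS pool with 4 queues on the bnx2x driver
esxcli system module parameters set -m bnx2x -p "RSS=4 num_rss_pools=1"

# Verify the configured parameter values
esxcli system module parameters list -m bnx2x | grep -E "RSS|num_rss_pools"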