Tuesday, August 21, 2012

Converting between CPU RDY summation and CPU % ready values

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2002181

To convert between the CPU ready summation value in vCenter's performance charts and the CPU ready % value that you see in esxtop, you must use a formula.
The formula requires you to know the default update intervals for the performance charts. These are the default update intervals for each chart:
  • Realtime: 20 seconds
  • Past Day: 5 minutes (300 seconds)
  • Past Week: 30 minutes (1800 seconds)
  • Past Month: 2 hours (7200 seconds)
  • Past Year: 1 day (86400 seconds)

CPU ready %

To calculate the CPU ready % from the CPU ready summation value, use this formula:
(CPU summation value / ([chart default update interval in seconds] * 1000)) * 100 = CPU ready %
Example: The Realtime stats for a virtual machine in vCenter might have an average CPU ready summation value of 1000. Use the appropriate values with the formula to get the CPU ready %.
(1000 / (20s * 1000)) * 100 = 5% CPU ready
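The conversion is easy to wrap in a small shell helper for sanity checks (a sketch; the function name is mine, and the summation value is assumed to be in milliseconds, as vCenter reports it):

```shell
# Hypothetical helper: convert a CPU ready summation (in ms) to CPU ready %.
# interval_s is the chart's default update interval in seconds (20 for Realtime).
cpu_ready_pct() {
  summation_ms=$1
  interval_s=$2
  awk -v s="$summation_ms" -v i="$interval_s" \
    'BEGIN { printf "%.2f", (s / (i * 1000)) * 100 }'
}

cpu_ready_pct 1000 20   # Realtime chart: prints 5.00
```

The same summation value of 1000 on the Past Day chart would be `cpu_ready_pct 1000 300`, i.e. roughly 0.33%.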

CPU ready summation value

To convert the CPU ready % into a CPU ready summation value, reverse the calculation and use this formula:
(CPU ready % / 100) * ([chart default update interval in seconds] * 1000) = CPU summation value
Example: If a virtual machine has a CPU ready value of 5%, its CPU ready summation value on the Realtime performance chart is calculated like this:
(5 / 100) * (20s * 1000) = 1000 CPU ready
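The reverse conversion can be sketched the same way (again a hypothetical helper, not from the KB article):

```shell
# Hypothetical helper: convert a CPU ready % back to a summation value (in ms).
cpu_ready_summation() {
  ready_pct=$1
  interval_s=$2
  awk -v p="$ready_pct" -v i="$interval_s" \
    'BEGIN { printf "%d", (p / 100) * i * 1000 }'
}

cpu_ready_summation 5 20   # Realtime chart: prints 1000
```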

Saturday, August 18, 2012

ls* Commands Are Even More Useful Than You May Have Thought


Information is copied from
http://www.cyberciti.biz/open-source/command-line-hacks/linux-ls-commands-examples/

lsscsi
list SCSI devices

lsblk
list block devices

lsb_release
list Linux distribution and release information

lsusb
list USB devices

lscpu
list CPU information

lspci
list PCI devices

lshw
list information about hardware configuration

lsof
list open files, network ports, active processes, ...

lsattr
list extended file attributes
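Not every distribution ships all of these tools by default, so before scripting against them it can help to probe which ones are installed (a quick sketch; the function name is made up):

```shell
# Report which of the ls* utilities are available on this system.
probe_ls_tools() {
  for cmd in lsscsi lsblk lsb_release lsusb lscpu lspci lshw lsof lsattr; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd: available"
    else
      echo "$cmd: not installed"
    fi
  done
}

probe_ls_tools
```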

Wednesday, August 15, 2012

Getting the NAA ID of the LUN


Getting the NAA ID of the LUN to be removed

From the vSphere Client, this information is visible from the Properties window of the datastore.

From the ESXi host, run the command:

# esxcli storage vmfs extent list
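If only the NAA IDs are needed, the command output can be piped through a small filter (a sketch; it simply picks out whitespace-separated fields beginning with "naa.", which is how the Device Name column appears on a typical ESXi 5.x host):

```shell
# Print just the naa.* device identifiers from the extent list output.
extract_naa_ids() {
  awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^naa\./) print $i }'
}

# On an ESXi host:
#   esxcli storage vmfs extent list | extract_naa_ids
```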


VMware vSphere 5 - Cluster Resource Allocations

Total capacities

Cluster Resource Allocation "Memory - Total Capacity" is "Total Cluster Memory" (what you see in the Summary Tab) minus approximately 2576MB of RAM reserved for each ESX host.

So if I have two ESX hosts, each with 8GB physical RAM, I see 16GB Total Cluster Memory in the Summary Tab. However, the two hosts together have reserved 2 x 2576MB, which is approximately 5GB. So in Cluster Resource Allocation I have 16GB - 5GB, which is around 11GB of RAM.

The same should apply to Cluster Resource Allocation "CPU - Total Capacity". Each ESX host has a reservation of 2341 MHz.

So if I have two ESX hosts, each with 10.636 GHz, I see 21GHz Total Cluster CPU Resources in the Summary Tab. However, the two hosts together have reserved 2 x 2421MHz, which is approximately 4.8GHz. So in Cluster Resource Allocation I should have 21GHz - 4.8GHz, which is around 16.2GHz. But during my tests I see 18.4GHz there, which looks as if only one host's reservations were subtracted. What is the magic, and why? Can someone comment on it below the article?

If you want to know how much MEMORY and CPU is reserved for a particular ESX hypervisor component, select an ESX host in the cluster, go to Configuration -> System Resource Allocation, and switch from the simple to the advanced view. You have to go through all components and sum all CPU and MEMORY reservations.

Reserved capacities

Cluster Resource Allocation "Reserved Capacity" is the sum of the reservations of virtual machines and resource pools. People are sometimes confused and surprised that the reserved capacity is very high. That is usually because an HA cluster is enabled, so fail-over capacity is also reserved and not available for use.

So if I have a two-node cluster with N+1 redundancy and at least one protected VM is running, then half of the cluster capacity is reserved by HA.
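The effect of that rule is easy to sanity-check with a bit of arithmetic (a sketch; the function name and the figures are illustrative):

```shell
# Usable capacity of an N-host HA cluster with N+1 admission control:
# one host's worth of resources is held back for fail-over.
ha_usable_ghz() {
  total_ghz=$1
  hosts=$2
  awk -v t="$total_ghz" -v n="$hosts" 'BEGIN { printf "%.1f", t * (n - 1) / n }'
}

ha_usable_ghz 21 2   # two-node cluster: prints 10.5 (half is reserved)
```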

Tuesday, August 14, 2012

Intel Server CPU generations

Intel Xeon 5400 = Harpertown
    » Penryn microarchitecture
    » Intel 64
    » 0.045 micron (45 nm)
    » Up to 4 cores
    » Up to 3.33 GHz
    » Up to 2x6 MB L2 cache
    » MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1
    » Demand-Based Switching (except E5405, L5408)
    » Enhanced Intel SpeedStep Technology (EIST) (except E5405)
    » XD bit (an NX bit implementation)
    » Virtualization (Intel VT-x, Intel VT-d)

Intel Xeon 5500 = Nehalem-EP
    » Nehalem microarchitecture
    » Intel 64
    » 0.045 micron (45 nm)
    » Up to 4 cores
    » Up to 3.33 GHz
    » Up to 8 MB L3 cache
    » MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2
    » Enhanced Intel SpeedStep Technology (EIST)
    » XD bit (an NX bit implementation)
    » HyperThreading
    » Virtualization (Intel VT-x, Intel VT-d)
  
Intel Xeon 5600 = Westmere-EP
    » Nehalem microarchitecture
    » Intel 64
    » 0.032 micron (32 nm)
    » Up to 6 cores
    » Up to 4.4 GHz
    » Up to 12 MB L3 cache
    » MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2
    » Enhanced Intel SpeedStep Technology (EIST)  
    » XD bit (an NX bit implementation)
    » TXT
    » AES-NI
    » Smart Cache
    » Demand-Based Switching
    » HyperThreading
    » Virtualization (Intel VT-x, Intel VT-d)
    » Turbo Boost (except E5603, E5606, E5607, L5609)

Intel Xeon E5-2600 = Sandy Bridge-EP   
    » Sandy Bridge microarchitecture
    » Intel 64
    » 0.032 micron (32 nm)
    » Up to 8 cores
    » Up to 3.3 GHz
    » Up to 20 MB L3 cache
    » MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX
    » Enhanced Intel SpeedStep Technology (EIST)
    » XD bit (an NX bit implementation)
    » TXT
    » AES-NI
    » Smart Cache
    » Demand-Based Switching
    » HyperThreading
    » Virtualization (Intel VT-x, Intel VT-d)
    » Turbo Boost (except E5-2603, E5-2609)

Caution: Information was collected from various public sources therefore the completeness and correctness is not guaranteed.

VMware - Software and Hardware Techniques for x86 Virtualization

In the early days of x86 virtualization, uniformity ruled: all CPUs implemented essentially the same 32-bit architecture and the virtual machine monitor (VMM) always used software techniques to run guest operating systems. This uniformity no longer exists. CPUs today come in 32- and 64-bit variants. Some CPUs have hardware support for virtualization; others do not. Moreover, this hardware support comes in multiple forms for virtualizing different aspects of the x86 architecture.  This document describes the x86 architecture from a virtualization point of view, relating critical architectural features to the major releases of VMware ESX. The goal is to provide, for each version of VMware ESX, an understanding of:

* Which CPU features are required
* Which CPU features can be utilized (but are not required)
* Which CPU features can be virtualized—that is, made available to software running in the virtual machine

With a better understanding of how CPU features are required, used, and virtualized by VMware ESX, you can reason more precisely about what can be virtualized, what performance levels may result for a given combination of CPU, guest operating system, and version of VMware ESX, and how workloads may respond to adjusting configuration parameters both for software running in the virtual machine and at the VMware ESX level.

Full white paper is located at
http://www.vmware.com/files/pdf/software_hardware_tech_x86_virt.pdf

Monday, August 06, 2012

Linux / Unix: lftp Command Mirror Files and Directories

lftp is a file transfer program that allows sophisticated FTP, HTTP and other connections to remote hosts. It has a built-in mirror command that can download or update a whole directory tree, and a reverse mirror (mirror -R) that uploads or updates a directory tree on the server. Mirror can also synchronize directories between two remote servers, using FXP if available.
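For illustration, typical mirror invocations look like this (the host, credentials and paths are placeholders, not from the linked article):

```shell
# Download or update a whole remote tree into a local directory:
lftp -e 'mirror /pub/downloads /var/tmp/downloads; quit' ftp://example.com

# Reverse mirror: upload or update a local tree on the server:
lftp -u user,secret -e 'mirror -R /var/www/html /htdocs; quit' ftp://example.com
```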

More info at http://www.cyberciti.biz/faq/lftp-mirror-example/

Install and Use nmon Tool To Monitor Linux Systems Performance

This systems administrator's tuning and benchmarking tool gives you a huge amount of important performance information in one go, with a single binary.

It works on Linux (on POWER, x86, amd64 and ARM based systems such as the Raspberry Pi) as well as IBM AIX. The nmon command displays and records local system information, and can run either in interactive or recording mode.
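A typical recording run looks like this (standard nmon flags; the interval and snapshot count are just example values):

```shell
# Record to a file in spreadsheet format (-f): one snapshot every 30
# seconds (-s 30), 120 snapshots in total (-c 120) = one hour of data.
nmon -f -s 30 -c 120

# Interactive mode is simply:
#   nmon
```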

More info at http://www.cyberciti.biz/faq/nmon-performance-analyzer-linux-server-tool/

Thursday, August 02, 2012

VMware ESX - Enable flow control on the 10Gb NICs used for SAN


First, update the ESXi 5 host by applying all VMware patches. The recommended way to do this is with VMware Update Manager. Be sure patch ESXi500-201112001 is installed.

1.    At the ESXi console, press [F2] and login as root, select Troubleshooting Options and press [Enter].

2.   Select Enable ESXi Shell and press [Enter].

3.   Press [Alt]+[F1] to open the local console and login as root.

4.   At the ESXi console type:

esxcfg-nics -l

5.   The available NICs are displayed (example: vmnic0, vmnic1, vmnic2…).

Using the output, determine which “vmnic” labels are assigned to adapters used for SAN
connectivity. For example, the two ports on the Broadcom 57711 may be listed as vmnic4 and vmnic5.
This will vary depending on the system configuration.

6.   At the ESXi console type:

vi /etc/rc.local

7.   Go to the end of the file:
Press [Esc], type :$, and then press [Enter] to go to the end of the file.
Type the letter "o" (lowercase) to open a new line below it and enter insert mode.

8.   Type:

ethtool --pause vmnicX tx on rx on
Substitute X with the number of each NIC identified in step 5 above, then press
[Enter]. Repeat this for each NIC that is connected to the SAN before proceeding to the next step.

9.   Press [Esc], type :wq, and then press [Enter] to save the file.

10. Type:

/sbin/auto-backup.sh
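After step 8, the lines appended to /etc/rc.local would look something like this (vmnic4 and vmnic5 are examples; substitute the SAN-facing NICs identified in step 5):

```shell
# Enable transmit and receive flow control on each SAN-facing 10Gb NIC:
ethtool --pause vmnic4 tx on rx on
ethtool --pause vmnic5 tx on rx on

# Verify the setting afterwards with:
#   ethtool --show-pause vmnic4
```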

Rapid EqualLogic Configuration Portal

The Dell Rapid EqualLogic Configuration Series of documents is intended to assist users in deploying EqualLogic iSCSI SAN solutions. The documents employ tested and proven Dell best practices for EqualLogic SAN environments.

http://en.community.dell.com/techcenter/storage/w/wiki/3615.rapid-equallogic-configuration-portal-by-sis.aspx

Wednesday, August 01, 2012

Designing VMware Infrastructure - Video Course

Learn to properly design a vSphere environment to avoid performance problems and downtime in this infrastructure design course by VCDX Scott Lowe. Create sound network designs and prepare for the VMware VCAP-DCD certification exam as an IT architect skilled in data center design.

http://www.trainsignal.com/Designing-VMware-Infrastructure.aspx