Tuesday, January 22, 2013

HP Flex-10 Design, Plan, Implement, Test

Before the design phase of a VMware vSphere infrastructure, I recommend reading the blog post "Understanding HP Flex-10 Mappings with VMware ESX/vSphere" to get a general overview of the server infrastructure and the advanced network interconnect. During the design phase, prepare a detailed test plan (aka operational verification) and execute it during the implementation phase. You can use the blog post "Testing Scenario's VMware / HP c-Class Infrastructure" as a template for your test plan. I don't doubt that you normally test infrastructure before putting it into production :-)

Saturday, January 19, 2013

MSCS RDMs causing long boot of ESX

That's because an RDM LUN attached to an MSCS cluster has a permanent SCSI reservation held by the active node of the cluster.

In ESX 5 you have to mark all such LUNs as perennially reserved, and then your ESX boot will be as fast as usual.

Here is the CLI command to mark a LUN as perennially reserved:
esxcli storage core device setconfig -d naa.id --perennially-reserved=true

This has to be changed on all ESX hosts that have visibility to the LUN.

More info at http://kb.vmware.com/kb/1016106

Wednesday, January 09, 2013

How to calculate storage performance from host perspective

Storage performance is usually quantified in IOPS (I/O operations per second). The performance from the storage perspective is quite easy to determine. It depends on the speed of each particular disk, also known as a spindle. Each disk type has a typical speed; below are the average values usually used for storage performance calculations:
  • SATA disk = 80 IOPS
  • SCSI DISK(SAS or FC) 10k RPM = 150 IOPS
  • SCSI DISK(SAS or FC) 15k RPM = 180 IOPS
  • SSD disk (SLC aka EFD) = 6000 IOPS
So when we need higher performance we have to bundle disks. Disks can be bundled with standard RAID technology.
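The storage-side math can be sketched in a few lines of Python. The per-spindle IOPS figures are the average values from the list above; the disk-type labels are my own shorthand:

```python
# Average per-spindle IOPS (typical values used for sizing estimates)
DISK_IOPS = {
    "SATA": 80,
    "SAS/FC 10k": 150,
    "SAS/FC 15k": 180,
    "SSD (SLC/EFD)": 6000,
}

def raw_storage_iops(disk_type: str, spindles: int) -> int:
    """Raw IOPS of a disk bundle from the storage perspective."""
    return DISK_IOPS[disk_type] * spindles

# A bundle of nine 15k RPM SAS/FC spindles
print(raw_storage_iops("SAS/FC 15k", 9))  # 9 x 180 = 1620
```

This is only the raw, storage-side number; what the host actually gets depends on the RAID type, as explained below.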

Here are most common RAID types used on standard disk arrays:
  • RAID 0 - no redundancy, pure striping, highest performance => WRITE PENALTY = 1 (each host write is a single back-end write)
  • RAID 1 - disk mirror, max bundle of 2 disks, high performance => WRITE PENALTY = 2
  • RAID 10 - RAID 1 + RAID 0 for bundling disk pairs, max disk bundle depends on disk array limits, high performance => WRITE PENALTY = 2
  • RAID 5 - block level striping with rotated parity, max disk bundle depends on disk array limits, moderate performance => WRITE PENALTY = 4
  • RAID 6 - block level striping with double parity, max disk bundle depends on disk array limits, lower performance => WRITE PENALTY = 6


So performance from the storage perspective and performance from the host perspective are two different things. Performance from the storage perspective is simply the sum of the speeds of all disks in the RAID group. Performance from the host perspective additionally depends on the selected RAID type.

To calculate the estimated storage performance from the host perspective, we need a formula with several variables.

First of all, let's define the variables:

P=write penalty of selected RAID type
R=Read % of disk workload
W=Write % of disk workload
H=IOPS from host perspective
S=IOPS from storage perspective

and now we can write the formula to calculate the storage performance from the host perspective:
H = S / (R+W*P)


Do you want to know how this formula is derived? It is simple. Start from another formula, which describes the storage behavior:
R*(1*H) + W*(P*H) = S
The formula above says that each host read IOPS generates a single storage IOPS, but each host write IOPS generates multiple storage IOPS, based on the RAID type's write penalty (P). Factoring out H gives H*(R + W*P) = S, and dividing both sides by (R + W*P) yields H = S / (R + W*P).

Does it make sense? If not, an example may help you understand.

My RAID group has 9 SAS disks (600 GB, 15k RPM) and I use RAID 5 (8+1).
So from the storage perspective I have 9 disks, each capable of 180 IOPS, which gives me 1620 IOPS from the storage perspective. Let's assume an unusual read/write ratio of 20:80.

S = 1620
P = 4 (because of RAID 5)
R = 20% = 0.2
W = 80% = 0.8
I need to know H ... the storage performance from the host perspective.

H = 1620 / (0.2 + 0.8 * 4) = 1620 / 3.4 = 476.47 IOPS from host perspective.

Note: Modern disk arrays often offer AST (Automated Storage Tiering). The calculation described in this blog post is valid even for those disk arrays. You have to fully understand the internal architecture and design of the particular storage, but generally all storage pools are built from sub disk groups bundled and protected by some RAID type. So if you have 125 disks grouped 5 at a time in RAID 5 (4+1), the principle is the same: we have 125 spindles and the write penalty is 4 because of RAID 5.
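The whole calculation, including the worked example above, can be sketched in a few lines of Python. The write-penalty values come from the RAID list earlier in the post (with RAID 0 as 1, i.e. no extra back-end writes); the function name is my own:

```python
# Write penalties for common RAID types (RAID 0 = 1: one back-end write per host write)
WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def host_iops(storage_iops: float, read_ratio: float, write_ratio: float,
              raid_type: str) -> float:
    """Estimated IOPS from the host perspective: H = S / (R + W*P)."""
    p = WRITE_PENALTY[raid_type]
    return storage_iops / (read_ratio + write_ratio * p)

# Worked example from the post: 9 x 15k RPM SAS disks in RAID 5, 20:80 read/write
s = 9 * 180                                 # 1620 IOPS from the storage perspective
h = host_iops(s, 0.2, 0.8, "RAID 5")
print(round(h, 2))                          # ~476.47 IOPS from the host perspective
```

Note how the same 1620 storage-side IOPS would yield a much higher host-side number with a read-heavy workload, since only writes are multiplied by the penalty.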