Wednesday, May 28, 2014

DELL Force10 : VLT - Virtual Link Trunking

Do you know CISCO's Virtual Port Channel? Do you want the same with DELL datacenter switches? Here we go.

General VLT overview

Virtual Link Trunking or VLT is a proprietary aggregation protocol developed by Force10 and available in their datacenter-class or enterprise-class network switches. VLT is implemented in the latest firmware releases (FTOS from 8.3.10.2) for their high-end switches like the S4810, S6000 and Z9000 10/40 Gb datacenter switches. Although VLT is a proprietary protocol from Force10, other vendors offer similar features to allow users to set up an aggregated link towards two (logical) different switches, where a standard aggregated link can only terminate on a single logical switch (thus either a single physical switch or on different members in a stacked switch setup).  For example CISCO's similar proprietary protocol is called Virtual Port Channel (aka vPC) and Juniper has another one called Multichassis LAG (MC-LAG).

VLT is a layer-2 link aggregation protocol between end-devices (servers) connected to (different) access-switches, offering these servers a redundant, load-balancing connection to the core-network in a loop-free environment, eliminating the requirement for the use of a spanning-tree protocol. Where existing link aggregation protocols like (static) LAG (IEEE 802.3ad) or LACP (IEEE 802.1ax) require the different (physical) links to be connected to the same (logical) switch (such as stacked switches), VLT allows link connectivity between a server and the network via two different switches.

Instead of using VLT between end-devices like servers, it can also be used for uplinks between (access/distribution) switches and the core switches.

The VLT general description above is from Wikipedia. For more information about VLT see http://en.wikipedia.org/wiki/Virtual_Link_Trunking

DELL published the Force10 VLT Reference Architecture (PDF - link cached by Google) where VLT is explained in detail, so it is highly recommended to read it, together with all product documentation and release notes, before any real planning, design and implementation.

VLT Basic concept and terminology

The VLT peers exchange and synchronize Layer 2 related tables to achieve consistent Layer 2 forwarding across the whole VLT domain; the mechanism involved is transparent to the administrator.

VLT is a trunk (as per its name) attaching remote hosts or switches.
VLTi is the interconnect link between the VLT peers. For historical reasons that is also called ICL (InterConnect Link) in the command outputs.

All of the following rules apply to VLT topologies:
 2 units per domain (as of FTOS 8.3.10.2)
 8 links per port-channel or fewer
 Units should run the same FTOS version
 The backup link should use a different link than the VLTi, and preferably a diverse path

Simple implementation plan

Below is a simplified implementation plan for VLT configuration, which should be handy for any lab or proof-of-concept deployment.

 The implementation plan is divided into 6 steps.
  1. Check or configure spanning tree protocol
  2. Check or configure LLDP
  3. Check or configure out of band management leveraged for VLT backup link
  4. Configure VLTi link (VLT inter connect)
  5. Configure VLT domain
  6. Configure VLT port-channel

Step 1 - Check or configure spanning tree protocol
Rapid Spanning-Tree should be enabled to protect against configuration and patching mistakes. The STP configuration depends on the customer environment and spanning tree topology preferences. The parameters below are just examples.

Switch A - configured to become RSTP root
protocol spanning-tree rstp
 no disable
 hello-time 1
 max-age 6
 forward-delay 4
 bridge-priority 4096 (if you want to have this switch as STP root)

Switch B - configured as backup root.
protocol spanning-tree rstp
 no disable
 hello-time 1
 max-age 6
 forward-delay 4
 bridge-priority 8192
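Once both switches are configured, you can verify the RSTP state and which switch was elected root. The commands below are just a sanity check; exact output and options vary by FTOS release.
show spanning-tree rstp brief
show running-config spanning-tree rstp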
Step 2 - LLDP configuration
LLDP must be enabled so that the switches advertise their configuration and receive configuration information from the adjacent LLDP-enabled devices.

Switch A
protocol lldp
  advertise management-tlv system-description system-name
  no disable

Switch B
protocol lldp
  advertise management-tlv system-description system-name
  no disable
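To confirm that the switches see their neighbors over LLDP, you can list the discovered neighbors; a simple sanity check, but verify the exact command options against your FTOS release.
show lldp neighbors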
Step 3 - VLT backup link
The VLT backup link is used to exchange heartbeat messages between the two VLT peers. We use the management interface at both VLT peers for the backup link.

Switch A
interface management 0/0
  ip address switch-A-IP/switch-A-mask
  no shutdown
Switch B
interface management 0/0
  ip address switch-B-IP/switch-B-mask
  no shutdown

Step 4 - VLTi (interconnect) link
Now we configure the VLTi, the connection between both VLT peers. It is recommended to use a static port-channel for redundancy reasons. Two 40GbE interfaces are enough and we bind them into Port-channel 127. No special configuration is required at the interface or port-channel configuration level. To become a VLTi (automatically managed by the system), the port-channel should be in default mode (no switchport).

Switch A
interface port-channel 127
  description "VLTi - interconnect link"
  channel-member VLTi_INTERFACE1
  channel-member VLTi_INTERFACE2
  no ip address 
  mtu 12000
  no shutdown

Switch B
interface port-channel 127
  description "VLTi - interconnect link"
  channel-member VLTi_INTERFACE1
  channel-member VLTi_INTERFACE2
  no ip address 
  mtu 12000
  no shutdown

Note 1: Don't forget to do no shutdown for the physical interfaces acting as port-channel members. Your port-channel stays down unless you bring them up.
Note 2: Neither the port-channel nor the physical member ports may be in switchport mode when used for the VLTi.
Note 3: If you are planning to use jumbo frames (a bigger MTU size) then you have to use them also on the VLTi links (the max MTU on Force10 is 12000, so it is a good idea to set it to the max).

Use the following configuration for all VLTi member interfaces
interface VLTi_INTERFACEx
  no shutdown
  no switchport

Verify port-channel status on both switches
show int po 127 brief

The port-channel should be up and composed of 2 ports.

Step 5 - VLT domain configuration
 We have to configure the domain number and the VLT domain options described below.
  • We use the peer-link command to select which port-channel is the VLTi interface.
  • To select the interface for heartbeat message exchange, we use the back-up destination command with the IP address of the other VLT peer.
  • We should set the primary-priority command to configure the VLT role (primary or secondary). The primary VLT node will be the switch with the lower priority.
  • The system-mac mac-address command must match at both peers in the VLT domain.
  • Setting the unit ID (0 or 1) with the unit-id command minimizes the time required for the VLT system to determine the unit ID assigned to each peer switch when one peer switch reboots.

Switch A (primary)
vlt domain 1
  peer-link port-channel 127
  back-up destination switch-B-IP
  primary-priority 1
  system-mac mac-address 02:00:00:00:00:01
  unit-id 0
Switch B (secondary)
vlt domain 1
  peer-link port-channel 127
  back-up destination switch-A-IP
  primary-priority 8192
  system-mac mac-address 02:00:00:00:00:01
  unit-id 1
For verification we can use the commands below
sh vlt brief
sh vlt statistics
sh vlt backup-link

Step 6 - VLT Port Channel
It is recommended that VLT port-channels facing hosts/switches be built with LACP, to benefit from the protocol negotiation. However, static port-channels are also supported.

It is also recommended to configure dampening (or equivalent) on the interfaces of connected hosts/switches (access switches, not the VLT peers). The reason to use dampening is that at start-up time, once the physical ports are active, a newly started VLT peer takes several seconds to fully negotiate protocols and synchronize (VLT peering, RSTP, VLT backup links, LACP, VLT LAG sync, etc.). The attached devices are not aware of that activity, and upon activation of a physical interface the connected device will start forwarding traffic on the restored link, despite the VLT peer unit still being unprepared. That black-holes traffic. Dampening on connected devices (access switches) holds an interface temporarily down after a VLT peer device reload. A reload is detected as a flap: the link goes down and then up. Dampening acts as a cold start delay, ensuring that the VLT peers are fully ready to forward before the physical interface is activated, avoiding temporary black holes. Suggested dampening time: 30 seconds to 1 minute. We use 60 seconds in our example.

So let's finally configure the port-channel (dynamic LAG) that interconnects the S4810s (VLT domain) to the upstream S60, which is our hypothetical L3 switch (router).

Switch A
interface port-channel 1
  description "Uplink to S60"
  no ip address
  switchport
  vlt-peer-lag port-channel 1
  no shutdown

interface tengigabit 0/PO1-INTERFACE
  port-channel-protocol lacp
    port-channel 1 mode active
  dampening 10 100 1000 60
  no shutdown
Switch B
interface port-channel 1
  description "Uplink to S60"
  no ip address
  switchport
  vlt-peer-lag port-channel 1
  no shutdown

interface tengigabit 0/PO1-INTERFACE
  port-channel-protocol lacp
    port-channel 1 mode active
  dampening 10 100 1000 60
  no shutdown
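As noted above, dampening really needs to live on the connected device, because a rebooting VLT peer loses its own dampening state. Below is a hypothetical sketch of the corresponding upstream S60 side; the interface names are placeholders, and if the S60 terminates Layer 3 here, an IP address would go on the port-channel or a VLAN interface instead of plain switchport mode.

S60 (upstream) - hypothetical sketch
interface port-channel 1
  description "Downlink to VLT domain"
  no ip address
  switchport
  no shutdown

interface tengigabit 0/S60-INTERFACE1
  port-channel-protocol lacp
    port-channel 1 mode active
  dampening 10 100 1000 60
  no shutdown

interface tengigabit 0/S60-INTERFACE2
  port-channel-protocol lacp
    port-channel 1 mode active
  dampening 10 100 1000 60
  no shutdown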
Hope it is helpful not only for me but also for someone else. Any comments are welcome.

Locally Administered Address Ranges

MAC Addresses
There are 4 sets of Locally Administered Address Ranges that can be used on your network without fear of conflict, assuming no one else has already assigned them there:
 
x2-xx-xx-xx-xx-xx
x6-xx-xx-xx-xx-xx
xA-xx-xx-xx-xx-xx
xE-xx-xx-xx-xx-xx

Replacing x with any hex value.

See http://en.wikipedia.org/wiki/MAC_address for more information.
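The pattern above boils down to a single bit: an address is locally administered when the second-least-significant bit of the first octet is set, and it is unicast when the least-significant bit is clear. A small Python sketch of that check, just for illustration:

def is_locally_administered(mac):
    # First octet, e.g. "02" from "02:00:00:00:00:01"
    first_octet = int(mac.replace("-", ":").split(":")[0], 16)
    locally_administered = bool(first_octet & 0x02)  # U/L bit set
    unicast = not (first_octet & 0x01)               # I/G bit clear
    return locally_administered and unicast

print(is_locally_administered("02:00:00:00:00:01"))  # True
print(is_locally_administered("00:50:56:aa:bb:cc"))  # False - globally unique OUI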

Update 2014-10-27: one of my readers notified me that some MAC OUIs complying with the rules above are already used by some vendors. See the examples below:
  02-07-01   (hex) RACAL-DATACOM
  02-1C-7C   (hex) PERQ SYSTEMS CORPORATION
  02-60-86   (hex) LOGIC REPLACEMENT TECH. LTD.
  02-60-8C   (hex) 3COM CORPORATION
  02-70-01   (hex) RACAL-DATACOM
  02-70-B0   (hex) M/A-COM INC. COMPANIES
  02-70-B3   (hex) DATA RECALL LTD
  02-9D-8E   (hex) CARDIAC RECORDERS INC.
  02-AA-3C   (hex) OLIVETTI TELECOMM SPA (OLTECO)
  02-BB-01   (hex) OCTOTHORPE CORP.
  02-C0-8C   (hex) 3COM CORPORATION
  02-CF-1C   (hex) COMMUNICATION MACHINERY CORP.
  02-E6-D3   (hex) NIXDORF COMPUTER CORPORATION

Therefore I would always recommend validating it at
http://en.wikipedia.org/wiki/MAC_address#Bit-reversed_notation
For example, 02-00-00 is not used by anybody, so you can most probably use it for internal purposes.

IP Addresses
Private network (internal)
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16

Private network (service provider - subscriber)
100.64.0.0/10
See http://en.wikipedia.org/wiki/Reserved_IP_addresses for more information.
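If you script around these ranges, Python's ipaddress module can test membership directly. A minimal sketch with the ranges listed above hard-coded (rather than relying on the library's own classification):

import ipaddress

PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("100.64.0.0/10"),  # shared address space (RFC 6598)
]

def is_internal(address):
    addr = ipaddress.ip_address(address)
    return any(addr in net for net in PRIVATE_RANGES)

print(is_internal("192.168.42.101"))  # True
print(is_internal("8.8.8.8"))         # False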
 

Sunday, May 18, 2014

How to convert thick zeroed virtual disk to thin and save storage space

Last week I was notified by a colleague about a long-standing VMware vSphere issue described in VMware KB 2048016. The issue is that vSphere Data Protection restores a thin-provisioned disk as a thick-provisioned disk. This sounds like a relatively big operational impact. However, after reading the VMware KB I explained to my colleague that this is not a typical issue or bug but rather expected behavior of VMware's CBT (Changed Block Tracking) technology and the VADP (vStorage APIs for Data Protection) framework. It's important to mention that it should behave like that only when you do the initial full backup of a thin-provisioned VM which has never been powered on. In other words, if the VM was ever powered on before the initial backup, you shouldn't experience this issue.

After the explanation above, another logical question appeared:
"How can you convert a thick zeroed virtual disk to thin?" ... when you experience the weird behavior explained above and your originally thin-provisioned VM is restored as a thick VM. The obvious objective is to save storage space again by leveraging VM thin provisioning.
My answer was to use "storage vMotion", which allows changing the vDisk type during migration. But just after my quick answer I realized there can be another potential issue with storage vMotion. If you use VAAI-capable storage, then storage vMotion is offloaded to the storage and it may not reclaim even zeroed vDisk space. This behavior and its resolution are described in VMware KB 2004155, named "Storage vMotion to thin disk does not reclaim null blocks". The workaround mentioned in the KB is to use an offline method leveraging vmkfstools. If you want live storage migration (conversion) without downtime, you would need another datastore with a different block size. You can do that with the legacy VMFS3 filesystem.
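For completeness, the offline method from KB 2004155 boils down to cloning the disk with vmkfstools and a thin target format while the VM is powered off. A hypothetical example with placeholder datastore and VMDK names:

vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk -d thin /vmfs/volumes/datastore1/myvm/myvm-thin.vmdk

Afterwards you re-point the VM to the new VMDK (or swap the file names) and delete the original thick disk.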

I decided to do a test to see the real storage vMotion behavior and know the truth; everything else would be just speculation. Therefore I tested storage vMotion behavior for thick-to-thin migration and space reclamation in my lab, where I have vSphere/ESX 5.5 and EqualLogic storage with VAAI support. To be honest, the result surprised me in a positive way. It seems that svMotion can save the space even when I do svMotion between datastores with the same block size and VAAI is enabled.

You can see the thick eager-zeroed 40 GB disk in the screenshot below ...
The provisioned size is 44 GB because the VM has 4 GB RAM and therefore a 4 GB swap file on VMFS.
The used storage is 40 GB.

After live storage vMotion with conversion to thin, the space was saved.
The used storage is just 22 GB. You can see the result in the screenshot below ...

So I have just verified that svMotion can do what you need without downtime. And I don't even need to migrate between datastores with different block sizes.

It was tested on ESX 5.5, EqualLogic firmware 6.x, and VMFS5 datastores created on thin-provisioned LUNs by EqualLogic. Storage thin provisioning is absolutely transparent to vSphere, so it should not have any impact on vSphere thin provisioning.


I know that this is just a workaround to the problem of VADP restoring a never-powered-on VMDK as thick, but it works. It converts thick to thin and is able to reclaim unused (zeroed) space inside virtual disks.

Conclusion:
vSphere 5.5 storage vMotion can convert a thick VM to thin even between datastores having the same block size, at least in the tested configuration. Good to know. If someone else can do the test in their environment, just leave a comment; it can be beneficial for others.

A/C Controller

A/C Controller is a FreeBSD-based appliance which monitors the environmental temperature and automatically powers Air Conditioning units on/off to achieve the required temperature. It's distributed as a 2 GB (204 MB zipped) pre-installed FreeBSD image.

Project page: https://sourceforge.net/projects/accontrol/
Author: David Pasek

Hardware Infrastructure Monitoring Proxy

Every enterprise infrastructure product - server, blade system, storage array, Fibre Channel or Ethernet switch - has some kind of CLI or API management. Lots of products support SNMP, but it usually doesn't return everything the CLI/API offers. This project is a set of connectors to different enterprise systems like DELL iDRAC and blade Chassis Management Controller, VMware vCenter and/or ESX, and DELL Compellent. The framework is universal and connectors to other systems can be easily developed.

Available connectors:
DELL CMC (racadm)
DELL DRAC (racadm)
General IPMI (ipmitools)
VMware vCLI (vcli)
Compellent Enterprise manager (odbc to MS-SQL)
Compellent Storage Center (CompCU.jar)
Brocade FC Switch (cli over telnet)

... other connectors and sensors can be easily developed, so if you have any need, don't hesitate to contact me.

See video introduction at  https://www.youtube.com/watch?v=JRomfnfymlY
Project page: https://sourceforge.net/projects/monitoringproxy/ 
Author: David Pasek

Friday, May 16, 2014

Unable to unmount ESX datastore

I've just been notified about an annoying problem by a customer for whom I did a vSphere 5.5 design. A datastore was impossible to unmount. The ESX logs contained something similar to the message below.
Cannot unmount volume 'Datastore Name: vm3:xxx VMFS uuid: 517c9950-10f30962-931f-00304830a1ea' because file system is busy. Correct the problem and retry the operation.
There is a KB article about this symptom. The VSAN component VSANTRACE was using the datastore; that was the reason for the busy file system. It was a pretty annoying issue, as VSAN was neither used nor enabled.

The solution is to disable the vsantraced service, so it is necessary to issue the following command on every ESX host ...
chkconfig vsantraced off 

Not so nice, right? That's the downside of VSAN software fully integrated into the general ESX hypervisor. I'm not happy with this approach. In my opinion, it would be much better to distribute VSAN as additional software installed as a regular VIB (vSphere Installation Bundle).

Friday, May 09, 2014

Understanding Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) Terminology

To understand the Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) capabilities of a device, you should become familiar with some basic terminology. I have just found an excellent single page explaining all the important terms from the FC and FCoE worlds. It is here.

Thanks, Juniper, for preparing it. I'm sure I will come back to it later for some abbreviation explanations.

 

Tuesday, May 06, 2014

Recovering from a Forgotten Password on the Force10 S series switch

I've just spent several hours looking for the recovery procedure for a forgotten password. Google returned just one relevant result, the Force10 tech tip page "How Do I Reset the S-Series to Factory Defaults?". However, that procedure doesn't work because there is no "Option menu" during system boot. It is most probably an old and deprecated procedure.

Here is the new procedure; I hope Google will return it for other people looking for the correct one.

Procedure to recover from a forgotten password on Force10 S-series switches (a condensed command recap follows the list):
  1. Use serial console
  2. Power off and then Power on all of the power modules
  3. Wait for message similar to "Hit Esc key to interrupt autoboot: 5" and press Esc key to go to Boot Loader (uBoot) interactive shell
  4. Change environment variable – setenv stconfigignore true - (uBoot - boot loader interactive shell)
  5. Save the changes - saveenv - (uBoot - boot loader interactive shell)
  6. Continue to boot the system – boot  - (uBoot - boot loader interactive shell)
  7. Default configuration is loaded so console login authentication is disabled by default
  8. Go to EXEC mode - en - (FTOS command line)
  9. Load startup configuration -  copy startup-config running-config - (FTOS command line)
  10. Now you can reconfigure the switch to change or add user login credentials
  11. Save configuration -  copy running-config startup-config - (FTOS command line)
  12. Reload the switch just for verification -  reload - (FTOS command line)
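Condensed, the command sequence from the recap above is:

(uBoot shell, after pressing Esc during autoboot)
setenv stconfigignore true
saveenv
boot

(FTOS command line, after the switch boots with the default configuration)
en
copy startup-config running-config
(reconfigure user login credentials here)
copy running-config startup-config
reload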

This procedure was tested on Force10 S4810 and S60.

Simple TFTP server for windows

Anybody working with networking equipment needs a simple TFTP server. A typical use case is downloading and/or uploading switch configurations and performing firmware upgrades.

I generally like simple tools which allow me to do my work quickly and efficiently. That's the reason I really like the portable version of TFTP32.

For more information about TFTP32 go here.


Monday, May 05, 2014

Microsoft Cluster Service (MSCS) support on VMware vSphere

Microsoft Cluster Service (MSCS) is a Microsoft cluster technology requiring shared storage that supports a SCSI reservation mechanism. Microsoft has introduced a new - perhaps more modern and more descriptive - name for the same technology. The new name is "Microsoft Failover Cluster", so don't be confused by the different names.

VMware has supplementary documentation called "Setup for Failover Clustering and Microsoft Cluster Service" covering the subject. Here is the online documentation for vSphere 5.5 and here for vSphere 5.1. This documentation is a must-read to fully understand what is and what is not possible.

However, the documentation is relatively old, so if you want up-to-date information you have to leverage the VMware KB. Here are two KB articles related to the topic:
  • Microsoft Cluster Service (MSCS) support on ESXi/ESX (1004617)
  • Microsoft Clustering on VMware vSphere: Guidelines for supported configurations (1037959)
And here is general advice.
Plan, plan, plan ... design ... review ... implement ... verify.
I hope you know what I mean.

DELL Force10 : Initial switch configuration

[ Previous | DELL Force10 : Series Introduction ]

I assume you have serial console access to the switch unit to perform the initial switch configuration. I guess it will not impress you that to switch from read mode to configuration mode you have to use the command
conf
... before continuing I would like to recap some important basic FTOS commands we will use later in this blog post. If you want to exit from configuration mode, or even from a deeper configuration hierarchy, you can do it with one or several
exit 
commands, which will jump to the upper level of the configuration hierarchy and eventually exit conf mode. However, the easiest way to leave configuration mode is to use the command
end
which will exit configuration mode immediately.

The last but very important and very often used command is
write mem
which will write your running switch configuration to flash, so the configuration will survive a switch reload. You can do the same with the more general command
copy running-config startup-config
If you want to display the running configuration you can use the command
show running-config
The whole configuration can be pretty long, so if you are interested in only some part of the running configuration you can use the following commands
show running-config interface managementethernet 0/0
show running-config interface gigabitethernet 0/2
show running-config spanning-tree rstp
show running-config boot
As you can see, the FTOS command line interface (CLI) is very similar to CISCO's.

OK, so after the basics let's start with the initial configuration. Switch configuration usually begins with the host name. It is generally good practice to use unique host names so you know which system you are logged in to.
hostname f10-s60
As the next step I usually configure management IP settings and enable remote access. You have to decide whether you will use in-band management, leveraging normal IP settings usually configured on a dedicated VLAN interface used just for system management, or whether you will leverage the dedicated out-of-band management port. In the example below you can see
  • out-of-band management port for system management IP settings
  • how to create admin user
  • how to enable ssh to allow remote system management
interface ManagementEthernet 0/0
  ip address 192.168.42.101/24
  no shutdown
exit
management route 0.0.0.0/0 192.168.42.1

username admin password YourPassword privilege 15

ip ssh server enable
Now you have to decide if you want to enforce login for users connected via the local console. By default no login is required, which can be a security risk, especially in environments without strict physical security rules. Below is the configuration which enforces local login credentials when using the serial console.
 aaa authentication login default local
At this point I would like to note that a Force10 switch has all capabilities and features disabled in the default factory configuration. That's the reason why, for example, each switch interface must be explicitly enabled before use; all interfaces are in shutdown state by default.
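Just as a hypothetical illustration of that default behavior, bringing a single access port into service requires at least enabling it explicitly (VLAN membership and other interface settings are covered in the next article of this series):

interface gigabitethernet 0/2
  switchport
  no shutdown
exit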

Before you enable any switch interface it is good practice to enable a spanning tree protocol as a safeguard against potential loops in the network. Once again, the spanning tree feature is not enabled by default, so you have to enable it explicitly. Force10 FTOS has implemented all the standard and even some non-standard (CISCO proprietary) spanning tree protocols like PVST+. The latest FTOS version supports the following spanning tree protocols:

  • STP (Spanning Tree Protocol)
  • RSTP (Rapid Spanning Tree Protocol)
  • MSTP (Multiple Spanning Tree Protocol)
  • PVST+ (Per-VLAN Spanning Tree Plus)
Below is a configuration example which enables the standard Rapid Spanning Tree Protocol (aka RSTP) ...
protocol spanning-tree rstp
  no disable 
Another decision you have to make before implementation is the location from which you want to boot your switch operating system. On some Force10 models (for example the S60) the default primary boot location is a TFTP server ...
boot system stack-unit 0 primary tftp://192.168.128.1/FTOS-SC-8.3.3.7.bin
boot system stack-unit 0 secondary system: A:
boot system stack-unit 0 default system: B:
boot system gateway 192.168.128.1
You can see that the primary boot location is a TFTP server. If you don't have tens or hundreds of switches you usually don't want to load FTOS remotely from a TFTP server but from the internal flash in the switch. The default switch configuration would work, because if the TFTP server boot fails the switch boot sequence continues with the secondary location, but it's better to configure the boot sequence explicitly based on your requirements. Below is a typical boot sequence configuration.
boot system stack-unit 0 primary system: A:
boot system stack-unit 0 secondary system: B:
boot system stack-unit 0 default system: A:
no boot system gateway
The next thing you should check is which FTOS version you have. Below is the command to check it ...
f10-s60#show version
Dell Force10 Networks Real Time Operating System Software
Dell Force10 Operating System Version: 1.0
Dell Force10 Application Software Version: 8.3.3.7
Copyright (c) 1999-2011 by Dell Inc.
Build Time: Sat Nov 26 01:23:50 2011
Build Path: /sites/sjc/work/build/buildSpaces/build20/E8-3-3/SW/SRC
f10-s60 uptime is 4 minute(s)
System image file is "system://A"
System Type: S60
Control Processor: Freescale MPC8536E with 2147483648 bytes of memory.
128M bytes of boot flash memory.
  1 48-port E/FE/GE (SC)
 48 GigabitEthernet/IEEE 802.3 interface(s)
  2 Ten GigabitEthernet/IEEE 802.3 interface(s)
You can see FTOS version 8.3.3.7, which is not the latest one, as the latest FTOS version at the time of writing this article is 8.3.3.9 with boot loader 1.0.0.5. It is generally good practice to upgrade FTOS to the latest version before performing verification tests and going into production. For the latest version you have to go to http://www.force10networks.com and sign in. If you don't have a Force10 account you can register there. Please note that each Force10 switch model uses different FTOS versions, so there can be FTOS 9.4.x for the S4810 and 8.3.x for the S60.

Now I'll show you how to do FTOS and boot loader upgrade.
FTOS should be upgraded first and Boot Loader later ...
upgrade system tftp: A:
upgrade system stack-unit all A: (applicable only if you have a stack configured)
upgrade boot ftp: (applicable only if a new boot loader compatible with the FTOS code exists)
reload
You can check the current FTOS version with
show version
and if you want to know which FTOS version you have in which boot bank you can use
show boot system stack-unit 0
By the way, have I told you there are two boot banks? Boot bank A: and boot bank B:, so you can choose primary and secondary boot locations. We have already covered the boot configuration, but here it is again ...
conf
  boot system stack-unit 0 primary system: A:
  boot system stack-unit 0 secondary system: B:
FTOS is loaded by the boot loader, and the current boot loader version can be displayed by the command below
show system stack-unit 0
Hope this post is helpful for the IT community. In case you have any questions, suggestions or ideas for improvement, please share your thoughts in the comments.

Stay tuned and wait for next article ...

[ Next | DELL Force10 : Interface configuration and VLANs ]

DELL Force10 : Series Introduction

I have just decided to write a dedicated blog post series about DELL Force10 networking. Why?

Anyone who knows me in person is most probably aware that my primary professional focus is on VMware vSphere infrastructure and datacenter enterprise hardware. Sometimes I have discussions with infrastructure experts, managers and other IT folks about which vSphere component - compute (servers), storage or networking - is the most important/complex/critical/expensive. I think it is a needless discussion because all components are important and have to be integrated into a single system fulfilling all requirements, dealing with known constraints and mitigating all potential risks. Such integrated infrastructures are very often called a POD, which stands, as far as I know, for Performance Optimized Datacenter. These integrated systems are, from my point of view, new datacenter computers having dedicated but distributed computing, storage and networking components. I would prefer to call such equipment an "Optimized Infrastructure Block" or a "Datacenter Computer" because it is not only about performance but also about reliability, capacity, availability, manageability, recoverability and security. We call these attributes infrastructure qualities, and the whole infrastructure block inherits its qualities from its sub-components. Older IT folks often compare this concept with mainframe architectures; however, nowadays we usually use commodity x86 hardware "a little bit" optimized for enterprise workloads.

By the way, that's one of the reasons I like the current DELL datacenter product portfolio: DELL has everything I need to build a POD - servers, storage systems and now also networking - so I'm able to design a single-vendor infrastructure block with unified support, warranty, etc. Maybe some of you don't know, but DELL acquired the EqualLogic and Compellent storage vendors some time ago and, more importantly for this blog post, DELL also acquired the well-known (at least in the US) datacenter networking producer Force10. For official acquisition details look here.

But back to networking. Everybody would probably agree that networking is a very important part of vSphere infrastructure for several reasons. It provides the interconnect between clustered components - think about vSphere networks like Management, vMotion, Fault Tolerance, VSAN, etc. It also routes network traffic to the outside world. And sometimes it even provides storage fabrics (iSCSI, FCoE, NFS). That's actually why I'm going to write this series of blog posts about DELL Force10 networking - because of the importance of networking. However, I don't want to write about legacy networking but about a modern networking approach for next-generation virtualized and software-defined datacenters.

Modern physical networking is not only about hardware (intelligence burned into ASICs with high-bandwidth, fast, low-latency interfaces) but also about software. The main software sits inside the DELL Force10 switches: the switch firmware called FTOS - Force10 Operating System (for more general information about FTOS look here). However, today it is not only about embedded switch firmware but also about the whole software ecosystem - management tools, centralized control planes, virtual distributed switches, network overlays, etc.
 
In future articles I would like to dive deep into FTOS features, configuration examples and virtualization-related integrations.

The next, and actually the first, technical article in this series will be about the typical initial configuration of a Force10 switch. I know it is not rocket science, but we have to know the basics before taking off. In the future I would like to write about more complex designs, capabilities and configurations like
  • Multi-Chassis Link Aggregation (aka MC-LAG). In Force10 terminology we call it VLT - Virtual Link Trunking.
  • Virtual Routing and Forwarding (aka VRF). Some S-series Force10 models with FTOS 9.4 support VRF-lite.
  • some Software Defined Networking (aka SDN) capabilities like Python/Perl scripting inside the switch, REST API, VXLAN hardware VTEP, integration with VMware Distributed Virtual Switch, integration with VMware NSX, OpenFlow, etc.
If you prefer some topics, please let me know in the comments and I'll try to prioritize them. Otherwise I'll write future posts based on my preferences.
So let's finish the blog post series introduction, start some technical stuff, and begin with a deep dive into the initial switch configuration! Just click Next below ...

[ Next | DELL Force10 : Initial switch configuration ]