Thursday, January 08, 2015

Can you please tell me more about VN-Link?

Back in 2010, when I worked for CISCO Advanced Services as a UCS Architect, Consultant, and Engineer, I compiled a presentation about CISCO's view of virtual networking in enterprise environments. Later I published this presentation on Slideshare as "VMware Networking, CISCO Nexus 1000V, and CISCO UCS VM-FEX". I used this presentation to educate CISCO partners and customers, because it was a really abstract topic for regular network specialists without server virtualization awareness. Please note that SDN (Software Defined Networking) was not yet a known and abused term at that time.

Yesterday I received the following Slideshare comment / question about this preso from John A.
Hi David, Thanks for this great material. Can you please tell me more about VN-Link?
I have decided to write a blog post instead of a simple answer in the Slideshare comments.

Disclaimer: I have not worked for CISCO for more than 3 years and I now work for a competitor (DELL), so this blog is my private opinion and my own understanding of CISCO technology. I might oversimplify some definitions or be inaccurate in some statements, but I believe I'm right conceptually, which is the most important thing for John and other readers interested in CISCO virtual networking technologies for enterprise environments.

VN-Link was a CISCO marketing and conceptual term which has since been replaced with the new term VM-FEX. VM-FEX (Virtual Machine Fabric Extender) is, in my opinion, a more understandable term for CISCO networking professionals familiar with CISCO FEX technology. However, the VN-Link/VM-FEX term is a purely conceptual and abstract construct achievable by several different technologies or protocols. I have always imagined VN-Link as a permanent virtual link between a virtual machine's virtual NIC (for example a VMware vNIC) and a CISCO switch switchport with virtualization capabilities (vEth). When I say switchport virtualization capabilities, I mean that several technologies can be used to fulfill the conceptual idea of VN-Link. The conceptual and logical idea of VN-Link is always the same, but the implementations differ. Generally it is some kind of network overlay, where each VN-Link (virtual link) is a tunnel implemented by some standard protocol or proprietary technology. One end point of a CISCO VN-Link tunnel is always the same - it is a vEth on some model of CISCO Nexus switch. It can be a physical Nexus switch (UCS FI, N5K, N7K, ...) or the virtual switch Nexus 1000V (N1K). The second tunnel end point (the vNIC) can be terminated in several places in your virtualized infrastructure. Below is a conceptual view of VN-Link, or a virtual wire if you wish.
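A rough text sketch of the idea (my simplification, not an official CISCO diagram):

  VM [vNIC] ========== VN-Link (virtual wire / tunnel) ========== [vEth] CISCO Nexus
                                                                  (UCS FI, N5K, N7K,
                                                                   or Nexus 1000V)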



So let's take a deep dive into two different technologies for implementing CISCO VN-Link tunnels.

Nexus 1000V (VN-Link in software)

VN-Link can be implemented in software by the CISCO Nexus 1000V. The first VN-Link tunnel end point (vEth) in this particular case lives in the Nexus 1000V VSM (Virtual Supervisor Module) and the second tunnel end point (vNIC) is instantiated in the Nexus 1000V VEM (Virtual Ethernet Module) on a particular hypervisor. The Nexus 1000V architecture is not in the scope of this blog post, but anyone familiar with CISCO FEX technology can imagine the VSM as the parent Nexus switch and each VEM as a remote line card (aka FEX - Fabric Extender).
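To give you a feeling for how this looks in practice, below is a minimal Nexus 1000V port-profile sketch (the profile name and VLAN number are made up for illustration). A port-profile of type vethernet is published by the VSM to vCenter as a port-group; when a VM vNIC is connected to that port-group, a vEth interface is dynamically instantiated on the Nexus 1000V:

  ! on the Nexus 1000V VSM (illustrative sketch, not a complete configuration)
  port-profile type vethernet VM-DATA
    vmware port-group            ! publish this profile to vCenter as a port-group
    switchport mode access
    switchport access vlan 100
    no shutdown
    state enabled                ! make the profile available for VM vNICs

  ! each VM vNIC attached to the port-group then shows up as a vEth interface:
  ! show interface vethernet 1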

VN-Link in software is hardware independent and everything is done in software. Is it Software Defined Networking? I can imagine the Nexus 1000V VSM as a form of SDN controller. However, when I spoke personally with Martin Casado about this analogy at VMworld 2012, he was against it. I agree that the Nexus 1000V has smaller scalability than the NSX controller, but conceptually this analogy works for me quite well. It always depends on what scale is required for a particular environment and what kind of scalability, performance, availability and manageability you are looking for. There are always some pros and cons to each technology.

CISCO UCS and hypervisor module (VN-Link in hardware)

For VN-Link in hardware you must have appropriate CISCO UCS (Unified Computing System) hardware supporting the 802.1Qbh protocol. 802.1Qbh (aka VN-TAG) allows physical switch port and server NIC port virtualization, effectively establishing virtual links over a physical link. This technology dynamically creates vEth interfaces on top of a physical switch interface (UCS FI). This vEth is one end point of the VN-Link (virtual link, virtual wire) established between the CISCO UCS FI vEth and the virtual machine vNIC. The virtual machine can be a virtual server instance on top of any server virtualization platform (VMware vSphere, Microsoft Hyper-V, KVM, etc.) for which CISCO has a plugin/module in the hypervisor. The CISCO VN-TAG (802.1Qbh) protocol is conceptually similar to HP Multichannel VEPA (802.1Qbg), but the VN-TAG advantage is that a virtual wire can be composed of several segments. This multisegment advantage is leveraged in UCS, because one virtual link is a combination of two consecutive virtual links - the first in hardware and the second in software. Below are the two segments of a single virtual link over UCS infrastructure, followed by a simple sketch of the whole path.
  1. From the UCS FI vEth to the UCS VIC (Virtual Interface Card) logical NIC (it goes through the UCS IOM, which is effectively a normal physical CISCO FEX)
  2. From the UCS VIC logical NIC to the VM vNIC (it goes through the hypervisor module - a software FEX)
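Putting both segments together, the whole path looks roughly like this (again my simplified view, not an official CISCO diagram):

  UCS FI vEth          <-- VN-Link end point on the Fabric Interconnect
      ||   segment 1 - hardware (through the UCS IOM acting as a physical FEX)
  UCS VIC logical NIC
      ||   segment 2 - software (through the hypervisor module, a software FEX)
  VM vNIC              <-- VN-Link end point at the virtual machine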
Below are the hardware components required for VN-Link in hardware; a small CLI illustration follows the list.
  • CISCO UCS FI (Fabric Interconnect). The UCS FI acts as the first VN-Link tunnel end point where the vEths exist.
  • CISCO UCS IOM (I/O Module) in each UCS blade chassis, working as a regular FEX.
  • CISCO VIC (Virtual Interface Card) in each server hosting the specific hypervisor, allowing NIC partitioning of a single physical adapter into logical NICs or HBAs.
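To illustrate, this is roughly how one can peek at the dynamically created vEth interfaces on the Fabric Interconnect (a sketch from my memory; exact commands and outputs may differ between UCS versions):

  ! from the UCS Manager CLI, drop into the NX-OS shell of the Fabric Interconnect
  connect nxos
  ! list interfaces - dynamically created vEths appear here, one per VM vNIC
  show interface brief
  ! inspect a single virtual interface (the interface number is just an example)
  show interface vethernet 1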

Conclusion

I hope this explanation of VN-Link answers John A.'s question and helps others who want to know what VN-Link really is. I forgot to mention that the primary VN-Link use case is mostly about operational collaboration between virtualization and network admins. CISCO believes that VN-Link allows virtual networking administration to stay with legacy network specialists. To be honest, I'm not very optimistic about this idea, because it makes the infrastructure significantly more complex. In my opinion, the IT silos (network, compute, storage) have to be merged into one team, and modern datacenter administrators must be able to administer servers, storage and networking. However, I agree that this is a pretty big mental shift and it will definitely take some time, especially in big enterprise environments.
