Understanding Software Defined Networking (SDN) and Network Virtualization

The evolution of virtualization has led to a wide range of virtualized technologies, including the key building block of a data center: the network. A traditional network used to be a wired collection of physical switches and devices. A network administrator's nightmare was making one configuration change and breaking another configuration in the process. Building a massive data center used to be an expensive venture and a lengthy project. With virtualization and cloud services on the horizon, almost anything can be offered as a service, and almost anything can be virtualized and software defined.

Since the development of Microsoft SCVMM and VMware NSX, network functions virtualization (NFV), network virtualization (NV) and software defined networking (SDN) have been making a bold statement for on-premises customers and cloud service providers alike. Of all the benefits of a software defined network, two stand out: easy provisioning of a network, and easy change control of that network. You don't have to fiddle with the physical layer of the network, and you certainly don't have to modify a virtual host, to provision a complete network with a few mouse clicks. How does it work?

Software Defined Networking- Software defined networking (SDN) is a dynamic, manageable, cost-effective and adaptable, high-bandwidth, agile, open architecture. SDN architectures decouple network control and forwarding functions, enabling network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services. Examples of Cisco SDN-capable switches are listed below.

The fundamental building blocks of SDN are:

  • Programmable: Network control is directly programmable because it is decoupled from forwarding functions.
  • Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
  • Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
  • Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.

Cisco SDN Capable Switches

Modular Switches

Cisco Nexus 9516
Cisco Nexus 9508
Cisco Nexus 9504

Fixed Switches

Cisco Nexus 9396PX
Cisco Nexus 9396TX
Cisco Nexus 93128TX
Cisco Nexus 9372PX
Cisco Nexus 9372TX
Cisco Nexus 9336PQ ACI Spine Switch
Cisco Nexus 9332PQ

Network Virtualization- A virtualized network simply partitions an existing physical network into multiple logical networks. Network virtualization creates logical segments in an existing network by dividing the network logically at the flow level. The end goal is to place multiple virtual machines in the same logical segment, or in a private portion of the network allocated to a business. In physical networking you cannot use the same IP address range twice within the same network and still manage traffic for two different kinds of services and applications. In a virtual world, however, you can have the same IP range segregated into separate logical networks. Let's say two different businesses/tenants both use the 10.124.3.x/24 IP address scheme on their internal networks, and both decide to migrate to the Microsoft Azure platform and bring their own IP address scheme (10.124.3.x/24) with them. It is absolutely possible for both to retain their own IP addresses and migrate to Microsoft Azure. You will not see any conflict within the Azure portal, and you will not even know that another organisation has the same internal IP address scheme and is possibly hosted on the same Hyper-V host. It is managed programmatically and logically by Azure Stack and SCVMM network virtualization technology.
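
Under the hood this relies on Hyper-V Network Virtualization, which keeps a policy table mapping each tenant's customer address (CA) to a provider address (PA) on the physical network. The following is only a rough sketch with hypothetical addresses, MAC addresses and virtual subnet IDs; in practice SCVMM or Azure Stack maintains these records for you rather than you running the cmdlets by hand.

# Both tenants use 10.124.3.10; different virtual subnet IDs keep the address spaces isolated (hypothetical values)
New-NetVirtualizationLookupRecord -CustomerAddress "10.124.3.10" -VirtualSubnetID 5001 -MACAddress "00155D010A01" -ProviderAddress "192.168.50.11" -Rule "TranslationMethodEncap"   # Tenant A
New-NetVirtualizationLookupRecord -CustomerAddress "10.124.3.10" -VirtualSubnetID 5002 -MACAddress "00155D010B01" -ProviderAddress "192.168.50.12" -Rule "TranslationMethodEncap"   # Tenant B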

Network Functions Virtualization- Network functions virtualization virtualizes layers 4 to 7 of the OSI model in a software defined network. NFV runs on high-performance x86 platforms, and it enables users to turn up functions on selected tunnels in the network. The end goal is to allow an administrator to create a service profile for a VM, create a logical workflow within the network (the tunnel), and then build virtual services on that specific logical environment. NFV saves a lot of time on provisioning and managing the application level of the network. Functions such as IDS, firewall and load balancer can be virtualized in Microsoft SCVMM and VMware NSX.

Here is an example of a Cisco NFV product:

IOS-XRv Virtual Router: Scale your network when and where you need it with this carrier-class router.

Network Service Virtualization- Network Service Virtualization (NSV) virtualizes a network service, for example a firewall module or an IPS software instance, by dividing the software image so that it can be accessed independently by different applications from a common hardware base. NSV eliminates the cost of acquiring separate hardware for a single purpose; instead, the same hardware services a different purpose each time the network is accessed or a service is requested. It also opens the door for service providers to offer security as a service to various customers.

Network security appliances are now bundled as a set of security functions within one appliance. For example, firewalls used to be offered on special-purpose hardware, as were IPS (Intrusion Prevention System), web filter, content filter, VPN (Virtual Private Network), NBAD (Network-Based Anomaly Detection) and other security products. This integration allows for greater software collaboration between security elements, lowers the cost of acquisition and streamlines operations.

The following Cisco virtualized network services are available on the Cisco Catalyst 6500 series platform.

Network security virtualization

  • Virtual firewall contexts, also called security contexts
  • Up to 250 mixed-mode multiple virtual firewalls
  • Routed firewalls (Layer 3)
  • Transparent firewalls (Layer 2, or stealth)
  • Mixed-mode firewalls: a combination of Layer 2 and Layer 3 firewalls coexisting on the same physical firewall

Virtual Routing and Forwarding (VRF) network services

  • NetFlow on VRF interfaces
  • VRF-aware syslog
  • VRF-aware TACACS
  • VRF-aware Telnet
  • Virtualized address management policies using VRF-aware DHCP
  • Optimized traffic redirection using PBR-set VRF

Finally, you can have all of this in one basket, without incurring the cost of each individual component, once you have System Center Virtual Machine Manager or Microsoft Azure Stack implemented in your on-premises infrastructure, or if you choose to migrate to the Microsoft Azure platform.

Relevant Articles

Comparing VMware vSwitch with SCVMM Network Virtualization

Understanding Network Virtualization in SCVMM 2012 R2

Cisco Nexus 1000V Switch for Microsoft Hyper-V

How to implement hardware load balancer in SCVMM

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Comparing VMware vSwitch with SCVMM Network Virtualization

| Feature | VMware vSphere Standard vSwitch | VMware vSphere DV Switch | System Center VMM 2012 R2 |
| --- | --- | --- | --- |
| Switch Features | Yes | Yes | Yes |
| Layer 2 Forwarding | Yes | Yes | Yes |
| IEEE 802.1Q VLAN Tagging | Yes | Yes | Yes |
| Multicast Support | Yes | Yes | Yes |
| Network Policy | | Yes | Yes |
| Network Migration | | Yes | Yes |
| NVGRE/VXLAN | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes |
| L3 Network Support | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes |
| Network Virtualization | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes |
| NIC Teaming | Yes | Yes | Yes |
| Network Load Balancing | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes |
| Virtual Switch Extension | | Yes | Yes |
| Physical Switch Connectivity | | | |
| EtherChannel | Yes | Yes | Yes |
| Load Balancing Algorithms | | | |
| Port Monitoring | Yes | Yes | Yes |
| Third-party Hardware Load Balancing | | Yes | Yes |
| Traffic Management Features | | | |
| Bandwidth Limiting | | Yes | Yes |
| Traffic Monitoring | | Yes | Yes |
| Security Features | | | |
| Port Security | Yes | Yes | Yes |
| Private VLANs | | Yes | Yes |
| Management Features | | | |
| Manageability | Yes | Yes | Yes |
| Third Party APIs | | Yes | Yes |
| Port Policy | Yes | Yes | Yes |
| Netflow | Yes* | Yes* | Yes |
| Syslog | Yes** | Yes** | Yes |
| SNMP | Yes | Yes | Yes |

* Experimental Support

** Virtual switch network syslog information is exported and included with VMware ESX events.

References:

VMware Distributed Switch

VMware NSX

Microsoft System Center Features 


Understanding Network Virtualization in SCVMM 2012 R2

Networking in SCVMM is the communication mechanism to and from the SCVMM server, Hyper-V hosts, Hyper-V clusters, virtual machines, applications, services, physical switches, load balancers and third-party hypervisors. Functionality includes:

SCVMM Network

Logical Networking of almost “Anything” hosted in SCVMM- A logical network is the concept of complete identification, transport and forwarding of Ethernet traffic in a virtualized environment. SCVMM lets you do the following (a brief VMM PowerShell sketch follows the list):

  • Provision and manage logical network resources of private and public clouds
  • Manage logical networks, subnets, VLANs, trunks or uplinks, PVLANs, MAC address pools, templates, profiles, static IP address pools, DHCP address pools and IP Address Management (IPAM)
  • Integrate and manage third-party hardware load balancers and the Cisco Nexus 1000V virtual switch
  • Provide virtual IP addresses (VIPs), quality of service (QoS), network traffic monitoring and virtual switch extensions
  • Create virtual switches and virtual network gateways
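
A minimal VMM PowerShell sketch of this provisioning, assuming a VMM server named vmm01.contoso.local and a host group called All Hosts (both hypothetical):

Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmm01.contoso.local" | Out-Null        # connect to the VMM server

# Create a logical network and a network site (logical network definition) with one VLAN/subnet pair
$ln   = New-SCLogicalNetwork -Name "Tenant-LN"
$vlan = New-SCSubnetVLan -Subnet "10.124.3.0/24" -VLanID 100
New-SCLogicalNetworkDefinition -Name "Tenant-LN-Site1" -LogicalNetwork $ln -SubnetVLan $vlan -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts")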

Network Virtualization – Network virtualization is a parallel concept to server virtualization: it allows you to abstract and run multiple virtual networks on a single physical network.

  • Connects virtual machines to other virtual machines, hosts, or applications running on the same logical network.
  • Provides independent migration of virtual machines: when a VM is moved from its original host to a different host, SCVMM automatically migrates the virtual network with the VM so that it remains connected to the rest of the infrastructure.
  • Allows multiple tenants to have their own isolated networks for security and privacy reasons.
  • Allows unique IP address ranges per tenant for management flexibility.
  • Communicates through a gateway with the same site or a different site, if permitted by the firewall.
  • Connects a VM running on a virtual network to any physical network in the same site or a different location.
  • Connects networks using an in-box NVGRE gateway that can be deployed as a VM to provide cross-network interoperability.

Network virtualization is defined in the Fabric > Networking tab of the SCVMM 2012 R2 management console. Virtual machine networking is defined in the VMs and Services > VM Networks tab of the SCVMM 2012 R2 management console.


Network virtualization terminology in SCVMM 2012 R2:


Logical networks: A logical network in VMM contains the VLAN, PVLAN and subnet information of a site on a Hyper-V host or Hyper-V cluster. An IP address pool and a VM network can be associated with a logical network, and a logical network can connect to one or many other networks. The cloud function of each logical network is:

| Logical network | Purpose | Tenant Cloud |
| --- | --- | --- |
| External | Site-to-site endpoint IP addresses; load balancer virtual IP addresses (VIPs); network address translation (NAT) IP addresses for virtual networks; tenant VMs that need direct connectivity to the external network with full inbound access | Yes |
| Infrastructure | Used for service provider infrastructure, including host management, live migration, failover clustering, and remote storage. It cannot be accessed directly by tenants. | No |
| Load Balancer | Uses static IP addresses; has outbound access to the external network via the load balancer; inbound access is restricted to only the ports that are exposed through the VIPs on the load balancer | Yes |
| Network Virtualization | Automatically used for allocating provider addresses when a VM that is connected to a virtual network is placed onto a host; only the gateway VMs connect to this network directly; tenant VMs connect to their own VM network, and each tenant's VM network is connected to the Network Virtualization logical network; a tenant VM will never connect to this network directly; static IP addresses are automatically assigned | Yes |
| Gateway | Associated with forwarding gateways, which require one logical network per gateway. For each forwarding gateway, a logical network is associated with its respective scale unit and forwarding gateway. | No |
| Services | Used for connectivity between services in the stamp by public-facing Windows Azure Pack features, and for SQL Server and MySQL DBaaS deployments; all deployments on the Services network are behind the load balancer and accessed through a virtual IP (VIP) on the load balancer; this logical network is also designed to support any service provider-owned service and is likely to be used by high-density web servers initially, but potentially many other services over time | No |

IP Address Pool: An IP address pool is a range of IP addresses assigned to a logical network in a site; it provides IP address, subnet, gateway, DNS and WINS information to virtual machines and applications.
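
For example, a static IP address pool for the network site from the earlier sketch could be created like this (names and addresses are hypothetical):

$site = Get-SCLogicalNetworkDefinition -Name "Tenant-LN-Site1"
$gw   = New-SCDefaultGateway -IPAddress "10.124.3.1" -Automatic
New-SCStaticIPAddressPool -Name "Tenant-LN-Pool" -LogicalNetworkDefinition $site -Subnet "10.124.3.0/24" -IPAddressRangeStart "10.124.3.50" -IPAddressRangeEnd "10.124.3.99" -DefaultGateway $gw -DNSServer "10.124.3.10"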

Mac Address Pool: A MAC address pool contains the default MAC address ranges for the virtual network adapters of virtual machines. You can also create customised MAC address pools and assign them to virtual machines.

| Pool Name | Vendor | MAC Address Range |
| --- | --- | --- |
| Default MAC address pool | Hyper-V and Citrix XenServer | 00:1D:D8:B7:1C:00 – 00:1D:D8:F4:1F:FF |
| Default VMware MAC address pool | VMware ESX | 00:50:56:00:00:00 – 00:50:56:3F:FF:FF |

Hardware Load Balancer: Hardware load balancer is functionality within SCVMM networking that provides third-party load balancing of applications and services. A virtual IP or an IP address pool can be associated with a hardware load balancer.

VIP Templates: A VIP template is a standard template used to define virtual IP addresses associated with a hardware load balancer. A VIP is allocated to applications, services and virtual machines hosted in SCVMM 2012 R2. For example, a template can specify the load-balancing behaviour for HTTPS traffic on a specific load balancer by manufacturer and model.

Logical Switch: Logical switches act as containers for the properties or capabilities that you want network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you specify the capabilities in port profiles and logical switches, which you then apply to the appropriate adapters. A logical switch acts as an extension of a physical switch, with the major difference that you don't have to drive to the data center, take a patch lead and connect it to a computer, then configure switch ports and assign a VLAN tag to each port. In a logical switch you define uplinks (the physical adapters of the Hyper-V hosts) and associate those uplinks with logical networks and sites.

Port Profiles: Port profiles act as containers for the security and capability settings that you want network adapters to have. Instead of configuring individual properties for each network adapter, you specify these capabilities in port profiles, which you then apply to the appropriate adapters. Uplink port profiles are associated with the uplinks in a logical switch.

Port Classification: Port classifications provide global names for identifying different types of virtual network adapter port profiles. A port classification can be used across multiple logical switches while the settings for the port classification remain specific to each logical switch. For example, you might create one port classification named FAST to identify ports that are configured to have more bandwidth, and another port classification named SLOW to identify ports that are configured to have less bandwidth.
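
The FAST/SLOW example above could be created with VMM PowerShell as follows (the classifications are only labels; the actual bandwidth settings live in the virtual port profiles you attach them to):

New-SCPortClassification -Name "FAST" -Description "High-bandwidth virtual adapter ports"
New-SCPortClassification -Name "SLOW" -Description "Low-bandwidth virtual adapter ports"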

Network Service: Network service is a container where you can add Windows and non-Windows network gateways and IP address management and monitoring information. An IP Address Management (IPAM) server running on Windows Server 2012 R2 can provide resources to VMM. You can use the IPAM server in the network resources of SCVMM to configure and monitor logical networks and their associated network sites and IP address pools. You can also use the IPAM server to monitor the usage of VM networks that you have configured or changed in VMM.

Virtual switch extension: A virtual switch extension manager in SCVMM allows you to use a vendor's software-based network-management console and the VMM management server together. For example, you can install the Cisco Nexus 1000V extension software on a VMM server and add the functionality of Cisco switches into the VMM console.

VM Network: A VM network on a logical network is the endpoint of network virtualization; it connects directly to virtual machines to allow public or private communication among VMs and with other networks and services. A VM network is associated with a logical network for direct access to other VMs.
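
A minimal sketch of creating an isolated VM network on top of the logical network from the earlier examples (names and the subnet are hypothetical):

$ln    = Get-SCLogicalNetwork -Name "Tenant-LN"
$vmnet = New-SCVMNetwork -Name "TenantA-VMNetwork" -LogicalNetwork $ln -IsolationType "WindowsNetworkVirtualization"
$vlan  = New-SCSubnetVLan -Subnet "10.124.3.0/24" -VLanID 0
New-SCVMSubnet -Name "TenantA-Subnet" -VMNetwork $vmnet -SubnetVLan $vlan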


Cisco Nexus 1000V Switch for Microsoft Hyper-V

Cisco Nexus 1000V Switch for Microsoft Hyper-V provides the following advanced features in Microsoft Hyper-V and SCVMM.

  • Integrates physical, virtual, and mixed environments
  • Allows dynamic policy provisioning and mobility-aware network policies
  • Improves security through integrated virtual services and advanced Cisco NX-OS features

The following table summarizes the capabilities and benefits of the Cisco Nexus 1000V Switch deployed with Microsoft Hyper-V and SCVMM.

| Capabilities | Features | Benefits |
| --- | --- | --- |
| Advanced Switching | Private VLANs, Quality of Service (QoS), access control lists (ACLs), port security, and Cisco vPath | Get granular control of virtual machine-to-virtual machine interaction. |
| Security | Dynamic Host Configuration Protocol (DHCP) Snooping, Dynamic Address Resolution Protocol Inspection, and IP Source Guard | Reduce common security threats in data center environments. |
| Monitoring | NetFlow, packet statistics, Switched Port Analyzer (SPAN), and Encapsulated Remote SPAN | Gain visibility into virtual machine-to-virtual machine traffic to reduce troubleshooting time. |
| Manageability | Simple Network Management Protocol, NetConf, syslog, and other troubleshooting command-line interfaces | Use existing network management tools to manage physical and virtual environments. |

The Cisco Nexus 1000V Series has two major components:

Virtual Ethernet Module (VEM)- The software component is embedded on each Hyper-V host as a forwarding extension. Each virtual machine on the host is connected to the VEM through a virtual Ethernet port.

Virtual Supervisor Module (VSM)- The management module controls multiple VEMs and helps in defining virtual machine (VM)-centric network policies.

Supported Configurations

  • Microsoft SCVMM 2012 SP1/R2
  • 64 Microsoft Windows Server 2012/R2 Hyper-V hosts
  • 2048 virtual Ethernet ports per VSM, with 216 virtual Ethernet ports per physical host
  • 2048 active VLANs
  • 2048 port profiles
  • 32 physical NICs per physical host
  • Compatible with all Cisco Nexus and Cisco Catalyst switches, as well as switches from other vendors

Comparison between Cisco Nexus 1000V editions:

| Features | Essential (Free Version) | Advanced |
| --- | --- | --- |
| VLANs, PVLANs, ACLs, QoS, Link Aggregation Control Protocol (LACP), and multicast | Yes | Yes |
| Cisco vPath (for virtual services) | Yes | Yes |
| Cisco NetFlow, SPAN, and ERSPAN (for traffic visibility) | Yes | Yes |
| SNMP, NetConf, syslogs, etc. (for manageability) | Yes | Yes |
| Microsoft SCVMM integration | Yes | Yes |
| DHCP snooping | | Yes |
| IP source guard | | Yes |
| Dynamic ARP Inspection | | Yes |
| Cisco VSG* | | Yes |

Installation Steps for Cisco Nexus 1000V Switch for Microsoft Hyper-V are:

Step1: Download Cisco Nexus 1000v Appliance/ISO

Log on to Cisco.com using your Cisco account and download the software from this URL.

Step2: Install SCVMM Components

Step3: Install and configure VSM

Step4: Configure SCVMM Fabric and VM Network

Step5: Prepare Hyper-v Hosts

Step6: Create 1000v logical switch

Step7: Create VMs or connect existing VMs with logical switch

References & Getting Started with Nexus 1000V

Cisco Nexus 1000v Quick Start Guide

Cisco Nexus 1000V Switch for Microsoft Hyper-V Deployment Guide

Cisco Nexus 1000v datasheet


How to Connect and Configure Virtual Fibre Channel, FC Storage and FC Tape Library from within a Virtual Machine in Hyper-v Server 2012 R2

Windows Server 2012 R2 with the Hyper-V role provides Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel storage directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage, allows you to cluster guest operating systems over Fibre Channel, and provides an important new storage option for the servers hosted in your virtual infrastructure.

Benefits:

  • Use existing Fibre Channel investments to support virtualized workloads.
  • Connect to a Fibre Channel tape library from within a guest operating system.
  • Support for many related features, such as virtual SANs, live migration, and MPIO.
  • Create an MSCS cluster of guest operating systems in a Hyper-V cluster.

Limitation:

  • Live Migration will not work if SAN zoning isn’t configured correctly.
  • Live Migration will not work if LUN mismatch detected by Hyper-v cluster.
  • A virtual workload is tied to a single Hyper-V host, making it a single point of failure if a single HBA is used.
  • Virtual Fibre Channel logical units cannot be used as boot media.

Prerequisites:

  • Windows Server 2012 or 2012 R2 with the Hyper-V role.
  • Hyper-V requires a computer with processor support for hardware virtualization. See details in BIOS setup of server hardware.
  • A computer with one or more Fibre Channel host bus adapters (HBAs) that have an updated HBA driver that supports virtual Fibre Channel.
  • An NPIV-enabled fabric, HBA and FC SAN. Almost all new-generation Brocade fabrics and storage arrays support this feature. NPIV is disabled on the HBA by default.
  • Virtual machines configured to use a virtual Fibre Channel adapter, which must use Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 or Windows Server 2012 R2 as the guest operating system. A maximum of 4 virtual Fibre Channel ports are supported per guest OS.
  • Storage accessed through a virtual Fibre Channel supports devices that present logical units.
  • MPIO Feature installed in Windows Server.
  • Microsoft Hotfix KB2894032

Before I begin elaborating the steps involved in configuring virtual Fibre Channel, I assume you have physical connectivity in place and physical multipathing configured and connected per vendor best practice. In this example configuration, I will present storage and an FC tape library to a virtualized backup server. I used the following hardware.

  • 2X Brocade 300 series Fabric
  • 1X FC SAN
  • 1X FC Tape Library
  • 2X Windows Server 2012 R2 with the Hyper-V role installed and configured as a cluster. Each host is connected to both fabrics using a dual-port HBA.

Step1: Update Firmware of all Fabric.

Use this LINK to update firmware.

Step2: Update Firmware of FC SAN

See OEM or vendor installation guide. See this LINK for IBM guide.

Step3: Enable hardware virtualization in Server BIOS

See OEM or Vendor Guidelines

Step4: Update Firmware of Server

See OEM or Vendor Guidelines. See Example of Dell Firmware Upgrade

Step5: Install MPIO driver in Hyper-v Host

See OEM or Vendor Guidelines

Step6: Physically Connect FC Tape Library, FC Storage and Servers to correct FC Zone

Step7: Configure Correct Zone and NPIV in Fabric

SSH to Fabric and Type the following command to verify NPIV.

Fabric:root>portcfgshow 0

If NPIV is enabled, it will show NPIV ON.

To enable NPIV on a specific port, type portCfgNPIVPort 0 1 (where 0 is the port number and 1 is the mode; 1=enable, 0=disable).

Open the Brocade fabric and configure aliases for the virtual HBAs and the FC tape. Note that you must place the FC tape, Hyper-V host(s), virtual machine and FC SAN in the same zone, otherwise it will not work.

Configure the correct zone, then add the zone to the correct zone config.

Once you have configured the correct zone in the fabric, the FC tape will show up in the Windows Server 2012 R2 host where the Hyper-V role is installed. Do not update the tape driver in the Hyper-V host, as we will use the guest virtual machine as the backup server, which is where the correct tape driver is needed.


Step8: Configure Virtual Fibre Channel

Open Hyper-V Manager, click Virtual SAN Manager > New Fibre Channel SAN.

Type the name of the Fibre Channel SAN, then click Apply > OK.

Repeat the process to create multiple vFCs for MPIO and live migration purposes. Remember that the physical HBAs must be connected to the two Brocade fabrics.

For the vFC configuration, keep the naming convention identical on both hosts. If you have two physical HBAs, configure two vFCs on each Hyper-V host, for example VFC1 and VFC2. Create two vFCs on the other host with the identical names VFC1 and VFC2. Assign both vFCs to the virtual machines.
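
The same virtual SANs and adapters can also be created with Hyper-V PowerShell on each host. The SAN and VM names below are hypothetical, and picking HBA ports with Select-Object is only a simplification for this sketch:

Get-InitiatorPort                                    # list the WWPNs of the physical HBA ports on this host
New-VMSan -Name "VFC1" -HostBusAdapter (Get-InitiatorPort | Select-Object -First 1)
New-VMSan -Name "VFC2" -HostBusAdapter (Get-InitiatorPort | Select-Object -Last 1)

# Add one virtual Fibre Channel adapter per virtual SAN to the (powered-off) VM
Add-VMFibreChannelHba -VMName "BackupVM" -SanName "VFC1"
Add-VMFibreChannelHba -VMName "BackupVM" -SanName "VFC2"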

Step9: Attach the Virtual Fibre Channel Adapter to the Virtual Machine

Open Failover Cluster Manager, select the virtual machine where the FC tape will be visible, and shut down the virtual machine.

Go to the Settings of the virtual machine > Add Fibre Channel Adapter > Apply > OK.


Record the WWPNs from the virtual Fibre Channel adapter.
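
If you prefer PowerShell over the GUI, the WWPNs assigned to the virtual adapters can be listed from the host (the VM name is hypothetical):

Get-VMFibreChannelHba -VMName "BackupVM" | Format-List *     # shows the virtual SAN name and the WWPN address sets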


Power on the virtual Machine.

Repeat the process to add both vFCs (VFC1 and VFC2) to the virtual machine.

Step10: Present Storage

Log on to the FC storage and add a host. The WWPNs entered here must match the WWPNs of the virtual Fibre Channel adapters.


Map the volume or LUN to the virtual server.


Step11: Install MPIO Driver in Guest Operating Systems

Open Server Manager > Add Roles and Features > add the Multipath I/O (MPIO) feature.
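
Alternatively, add and verify the feature from PowerShell:

Install-WindowsFeature -Name "Multipath-IO"        # add the MPIO feature
Get-WindowsFeature -Name "Multipath-IO"            # verify the install state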


Download the manufacturer's MPIO driver for the storage. The MPIO driver must be the correct and latest version to function correctly.


Now you have the FC SAN visible in your virtual machine.


Step12: Install Correct FC Tape Library Driver in Guest Operating Systems.

Download the correct FC tape driver and install it in the virtual backup server.

Now you have the FC tape library available in the virtual machine.


The backup software can now see the tape library and inventory the tapes.


Further Readings:

Brocade Fabric with Virtual FC in Hyper-v

Hyper-V Virtual Fibre Channel Overview

Clustered virtual machine cannot access LUNs over a Synthetic Fibre Channel after you perform live migration on Windows Server 2012 or Windows Server 2012 R2-based Hyper-V hosts

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

With server virtualization you can run multiple server instances concurrently on a single physical host, yet the servers are isolated from each other and operate independently. Similarly, network virtualization provides multiple virtual network infrastructures running on the same physical network, with or without overlapping IP addresses. Each virtual network infrastructure operates as if it is the only virtual network running on the shared network infrastructure. Hyper-V Network Virtualization also decouples the physical network from the virtual network. Network virtualization can be achieved via System Center Virtual Machine Manager (SCVMM) managing multiple Hyper-V servers, a single Hyper-V server or clustered Hyper-V servers. Microsoft Hyper-V Network Virtualization provides multi-tenant-aware, multi-VLAN-aware and non-hierarchical IP address assignment to virtual machines in conventional on-premises and cloud-based data centers.

Hyper-v Virtual Network Types

  • Private Virtual Network Switch allows communication between virtual machines connected to the same virtual switch. Virtual machines connected to this type of virtual switch cannot communicate with the Hyper-V parent partition. You can create any number of private virtual switches.
  • Internal Virtual Network Switch allows communication between virtual machines connected to the same switch and also allows communication with the Hyper-V parent partition. You can create any number of internal virtual switches.
  • External Virtual Network Switch allows communication between virtual machines running on the same Hyper-V server, the Hyper-V parent partition and virtual machines running on remote Hyper-V servers. It requires a physical network adapter on the Hyper-V host that is not mapped to any other external virtual network switch. As a result, you can create as many external virtual switches as you have physical network adapters that are not already mapped to other external virtual switches (a PowerShell sketch of all three switch types follows this list).
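
A minimal Hyper-V PowerShell sketch of the three switch types (switch and adapter names are hypothetical):

New-VMSwitch -Name "PrivateSwitch"  -SwitchType Private       # VM-to-VM only
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal      # VMs plus the parent partition
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet 3" -AllowManagementOS $false    # bound to a physical NIC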

Follow these guidelines to configure virtual networking on Windows Server 2012 R2 with the Hyper-V role installed. A highly available clustered Hyper-V server should have the following configuration parameters.

Example VLAN

| Network Type | VLAN ID | IP Addresses |
| --- | --- | --- |
| Default | 1 | 10.10.10.1/24 |
| Management | 2 | 10.10.20.1/24 |
| Live Migration | 3 | 10.10.30.1/24 |
| Prod Server | 4 | 10.10.40.1/24 |
| Dev Server | 5 | 10.10.50.1/24 |
| Test Server | 6 | 10.10.60.1/24 |
| Storage | 7 | 10.10.70.1/24 |
| DMZ | 99 | 192.168.1.1/24 |

Example NIC Configuration with 8 network cards (e.g. 2x quad-port NIC cards)

| Virtual Network Name | Purpose | Connected Physical Switch Port | Virtual Switch Configuration |
| --- | --- | --- | --- |
| MGMT | Management Network | Port configured with VLAN 2 | "Allow Management Network" ticked; "Enable VLAN identification for management operating system" ticked |
| LiveMigration | Live Migration | Port configured with VLAN 3 | "Allow Management Network" un-ticked; "Enable VLAN identification for management operating system" ticked |
| iSCSI | Storage | Port configured with VLAN 7 | "Allow Management Network" un-ticked; "Enable VLAN identification for management operating system" ticked |
| VirtualMachines | Prod, Dev, Test, DMZ | Port configured with Trunk Mode | "Allow Management Network" un-ticked; "Enable VLAN identification for management operating system" un-ticked |

Recommendation:

  • Do not assign the VLAN ID in the NIC Teaming wizard; instead, assign the VLAN ID in Virtual Switch Manager.
  • Configure the virtual switch network as an External Virtual Network.
  • Configure physical switch port aggregation using EtherChannel.
  • Configure logical network aggregation using the NIC Teaming wizard.
  • Enable the VLAN ID in the virtual machine settings (a PowerShell sketch of this layout follows the list).
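
A sketch of the MGMT and VirtualMachines switches from the NIC configuration table above, assuming NIC teams named MgmtTeam and VMTeam already exist (hypothetical names):

# External switch shared with the parent partition for management traffic, tagged with VLAN 2
New-VMSwitch -Name "MGMT" -NetAdapterName "MgmtTeam" -AllowManagementOS $true
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGMT" -Access -VlanId 2

# External switch for VM traffic only; VLAN tagging is done per VM rather than on the switch
New-VMSwitch -Name "VirtualMachines" -NetAdapterName "VMTeam" -AllowManagementOS $false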

Example Virtual Machine Network Configuration

| Virtual Machine Type | VLAN ID Tagged in VM > Settings > Network Adapter | Enable VLAN Identifier | Connected Virtual Network |
| --- | --- | --- | --- |
| Prod VM | 4 | Ticked | VirtualMachines |
| Dev VM | 5 | Ticked | VirtualMachines |
| Test VM | 6 | Ticked | VirtualMachines |
| DMZ VM with two NICs | 4, 99 | Ticked | VirtualMachines |
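
The per-VM VLAN tagging in the table above corresponds to Set-VMNetworkAdapterVlan on the host (VM names are hypothetical):

Set-VMNetworkAdapterVlan -VMName "ProdVM01" -Access -VlanId 4
Set-VMNetworkAdapterVlan -VMName "DevVM01"  -Access -VlanId 5
Set-VMNetworkAdapterVlan -VMName "TestVM01" -Access -VlanId 6
# For the DMZ VM with two NICs, tag each adapter individually
Get-VMNetworkAdapter -VMName "DMZVM01" | Select-Object -First 1 | Set-VMNetworkAdapterVlan -Access -VlanId 4
Get-VMNetworkAdapter -VMName "DMZVM01" | Select-Object -Last 1  | Set-VMNetworkAdapterVlan -Access -VlanId 99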

 

NIC Teaming with Virtual Switch

Multiple network adapters on a computer can be placed into a team for the following purposes:

  • Bandwidth aggregation
  • Traffic failover to prevent connectivity loss in the event of a network component failure

There are two basic configurations for NIC Teaming.

  • Switch-independent teaming. This configuration does not require the switch to participate in the teaming. Because the switch does not know that the network adapters are part of a team on the host, the adapters may be connected to different switches. Switch-independent modes of operation do not require that the team members connect to different switches; they merely make it possible.
  • Switch-dependent teaming. This configuration requires the switch to participate in the teaming, and all participating NICs must be connected to the same physical switch. There are two modes of operation for switch-dependent teaming: generic or static teaming (IEEE 802.3ad draft v1) and Link Aggregation Control Protocol teaming (IEEE 802.1ax, LACP). A PowerShell sketch of both team types follows this list.
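
Both team types can be created with the in-box NIC Teaming cmdlets; the NIC names are hypothetical:

# Switch-independent team (members may even connect to different physical switches)
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Switch-dependent LACP team; requires a matching port-channel on the physical switch
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC3","NIC4" -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort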

Load Balancing Algorithm

NIC teaming in Windows Server 2012 R2 supports the following traffic load distribution algorithms (a one-line example of changing the algorithm follows the list):

  • Hyper-V switch port. Since VMs have independent MAC addresses, the VM’s MAC address or the port it’s connected to on the Hyper-V switch can be the basis for dividing traffic.
  • Address Hashing. This algorithm creates a hash based on address components of the packet and then assigns packets that have that hash value to one of the available adapters. Usually this mechanism alone is sufficient to create a reasonable balance across the available adapters.
  • Dynamic. This algorithm takes the best aspects of each of the other two modes and combines them into a single mode. Outbound loads are distributed based on a hash of the TCP Ports and IP addresses. Dynamic mode also rebalances loads in real time so that a given outbound flow may move back and forth between team members. Inbound loads are distributed as though the Hyper-V port mode was in use.
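
For example, the algorithm can be changed on an existing team (the team name is hypothetical):

Set-NetLbfoTeam -Name "VMTeam" -LoadBalancingAlgorithm HyperVPort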

NIC Teaming within Virtual Machine

NIC teaming in Windows Server 2012 R2 may also be deployed in a VM. This allows a VM to have virtual NICs connected to more than one Hyper-V switch and still maintain connectivity even if the physical NIC under one switch gets disconnected.

To enable NIC teaming for a virtual machine, open Hyper-V Manager, go to the settings for the VM, select the VM's network adapter and its Advanced Features item, then enable the checkbox that allows the adapter to be part of a team in the guest operating system.
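
The equivalent PowerShell, run on the host against a hypothetical VM name, is:

Set-VMNetworkAdapter -VMName "GuestVM01" -AllowTeaming On     # allow the guest OS to team its virtual NICs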

Physical Switch Configuration

  • In Trunk Mode, the virtual switch listens to all network traffic and forwards it to all of its ports; in other words, network packets are sent to all the virtual machines connected to it. By default, a virtual switch in Hyper-V operates in trunk mode, so little configuration is needed.
  • In Access Mode, the virtual switch checks the VLAN ID tagged in each incoming network packet. If the VLAN ID matches the one configured on the virtual switch, the packet is accepted; any incoming packet that is not tagged with the same VLAN ID is discarded. A PowerShell sketch of applying access and trunk settings to a VM network adapter follows this list.
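
Access and trunk settings are applied per VM network adapter from the host, for example (the VM name and VLAN IDs are hypothetical):

Set-VMNetworkAdapterVlan -VMName "DMZVM01" -Trunk -AllowedVlanIdList "4,99" -NativeVlanId 99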

Cisco EtherChannel

EtherChannel provides automatic recovery for the loss of a link by redistributing the load across the remaining links. If a link fails, EtherChannel redirects traffic from the failed link to the remaining links in the channel without intervention. EtherChannel Negotiation Protocols are:

  • PAgP (Cisco Proprietary)
  • LACP (IEEE 802.3ad)

EtherChannel with Switch Independent NIC Teaming

This example shows how to configure an EtherChannel on a switch. It assigns two ports as static-access ports in VLAN 10 to channel 5 with the PAgP mode desirable:

1. To configure specific VLAN for teamed NIC

Switch# configure terminal
Switch(config)# interface range gigabitethernet0/1 -2
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
Switch(config-if-range)# channel-group 5 mode desirable non-silent
Switch(config-if-range)# end

2. To configure Trunk for teamed NIC

Switch# configure terminal
Switch(config)# interface range gigabitethernet0/1 -2
Switch(config-if-range)# switchport mode Trunk
Switch(config-if-range)# channel-group 5 mode desirable non-silent
Switch(config-if-range)# end

EtherChannel with Switch dependent NIC Teaming

This example shows how to configure an EtherChannel on a switch. It assigns two ports as static-access ports in VLAN 10 to channel 5 with the LACP mode active:

Switch# configure terminal
Switch(config)# interface range gigabitethernet0/1 -2
Switch(config-if-range)# switchport
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
Switch(config-if-range)# channel-group 5 mode active
Switch(config-if-range)# end
Switch# show etherchannel summary

This example shows how to configure a cross-stack EtherChannel. It uses LACP passive mode and assigns two ports on stack member 2 and one port on stack member 3 as static-access ports in VLAN 10 to channel 5:

Switch# configure terminal
Switch(config)# interface range gigabitethernet2/0/4 -5
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
Switch(config-if-range)# channel-group 5 mode active
Switch(config-if-range)# exit
Switch(config)# interface gigabitethernet3/0/3
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# channel-group 5 mode active
Switch(config-if)# exit

Set up dynamic load balancing with 802.3ad NIC teaming and the load-balancing method set to Automatic:

Switch#conf t
Switch(config)#int Gi2/0/23
Switch(config-if)#switchport
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 100
Switch(config-if)#spanning-tree portfast
Switch(config-if)#channel-group 1 mode active
Switch(config)#port-channel load-balance src-mac
Switch(config)#end
Switch#show etherchannel 1 summary
Switch#show spanning-tree interface port-channel 1
Switch#show etherchannel load-balance

HP Switch Configuration

LACP Config:

PROCURVE-Core1#conf ter
PROCURVE-Core1# trunk PORT1-PORT2 (e.g. C1/C2) Trk<ID> (e.g. Trk99) LACP
PROCURVE-Core1# vlan <VLANID>
PROCURVE-Core1# untagged Trk<ID> (e.g. Trk99)
PROCURVE-Core1# show lacp
PROCURVE-Core1# show log lacp

Trunk Config:

PROCURVE-Core1#conf ter
PROCURVE-Core1# trunk PORT1-PORT2 (e.g. C1/C2) Trk<ID> (e.g. Trk99) TRUNK
PROCURVE-Core1# vlan <VLANID>
PROCURVE-Core1# untagged Trk<ID> (e.g. Trk99)
PROCURVE-Core1# show Trunk
PROCURVE-Core1# show log trunk