Microsoft Multi-Site Failover Cluster for DR & Business Continuity

Not every organisation loses millions of dollars per second when systems go down, but some do. Even an organisation that does not may still treat customer service and reputation as its number one priority. These businesses want their workflows to be seamless and downtime free. This article is for those who consider business continuity money well spent. Here is how it is done:

Multi-Site Failover Cluster

A Microsoft Multi-Site Failover Cluster is a group of clustered nodes distributed across multiple sites, within one region or across regions, connected by low-latency network and storage links. As the diagram below illustrates, the cluster nodes in Data Center A are connected to a local SAN, which is replicated to a SAN in Data Center B. Replication is handled by identical software-defined storage at each site, which replicates volumes, or Logical Unit Numbers (LUNs), from the primary site (Data Center A in this example) to the disaster recovery site (Data Center B). The failover cluster is configured with pass-through storage, i.e. volumes, and these volumes are replicated to the DR site. In both the primary and DR sites, the physical network is built on Cisco Nexus 7000 switches. The data network and virtual machine network are logically segregated in Microsoft System Center VMM and on the physical switches using virtual local area networks (VLANs). A separate Storage Area Network (SAN) with low-latency storage is created in each site, and the pass-through volumes are replicated to identically sized volumes at the DR site.

image

                                     Figure: Highly Available Multi-site Cluster

image

                           Figure: Software Defined Storage in Each Site

 Design Components of Storage:

  • SAN-to-SAN replication must be configured correctly
  • Initial replication must be complete before the Failover Cluster is configured
  • MPIO software must be installed on the cluster nodes (N1, N2…N6)
  • Physical and logical multipathing must be configured
  • If storage is presented directly to virtual machines or cluster nodes, then NPIV must be configured on the fabric zones
  • All storage and fabric firmware must be up to date with the manufacturer's latest software
  • Identical software-defined storage must be used at both sites
  • If third-party software is used to replicate storage between sites, the storage vendor must be consulted before enabling replication
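
On the Windows side, the MPIO requirement above can be met with the built-in cmdlets. A minimal sketch (run on each cluster node; the vendor's DSM may still be required, and the bus type depends on your storage):

```powershell
# Install the Multipath I/O feature on the node (a reboot may be required).
Install-WindowsFeature -Name Multipath-IO

# Let the Microsoft DSM claim SAN devices; adjust BusType (SAS or iSCSI) per vendor guidance.
Enable-MSDSMAutomaticClaim -BusType SAS
```

Repeat on every node before forming the cluster, so all nodes see the replicated LUNs through identical paths.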

Further Reading:

Understanding Software Defined Storage (SDS)

How to configure SAN replication between IBM Storwize V3700 systems

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Application Scale-out File Systems

Design Components of Network:

  • Isolate management, virtual machine and data networks using VLANs
  • Use a reliable IPVPN or fibre-optic provider for replication over the network
  • Eliminate all single points of failure from all network components
  • Consider a stretched VLAN for multiple sites

Further Reading:

Understanding Network Virtualization in SCVMM 2012 R2

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Design failover Cluster Quorum

  • Use a Node & File Share Witness (FSW) Quorum for an even number of cluster nodes
  • Host the File Share Witness at a third site
  • Do not host the File Share Witness on a virtual machine in the same site
  • Alternatively, use Dynamic Quorum
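
The Node and File Share Witness model above can be configured with the failover clustering cmdlets. A sketch, assuming a hypothetical witness share hosted at the third site:

```powershell
# Configure Node and File Share Majority quorum with a witness at the third site.
Set-ClusterQuorum -NodeAndFileShareMajority "\\FS01\ClusterWitness"

# Review the resulting quorum configuration.
Get-ClusterQuorum
```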

Further Reading:

Understanding Dynamic Quorum in a Microsoft Failover Cluster

Design of Compute

  • Use a reputable vendor to supply compute hardware compatible with Microsoft Hyper-V
  • Make sure all the latest firmware updates are applied to the Hyper-V hosts
  • Have the manufacturer provide the latest HBA software to be installed on the Hyper-V hosts

Further Reading:

Windows Server 2012: Failover Clustering Deep Dive Part II

Implementing a Multi-Site Failover Cluster

Step1: Prepare Network, Storage and Compute

Understanding Network Virtualization in SCVMM 2012 R2

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Step2: Configure Failover Cluster on Each Site

Windows Server 2012: Failover Clustering Deep Dive Part II

Understanding Dynamic Quorum in a Microsoft Failover Cluster

Multi-Site Clustering & Disaster Recovery

Step3: Replicate Volumes

How to configure SAN replication between IBM Storwize V3700 systems

How to create a Microsoft Multi-Site cluster with IBM Storwize replication

Use Cases:

Use cases are determined by current and future workloads plus business continuity requirements. Deploy Veeam ONE to measure the current workloads on your infrastructure and to project future workloads and business continuity needs. Here is a list of use cases for a multi-site cluster.

  • Scale-Out File Server for application data – To store server application data, such as Hyper-V virtual machine files, on file shares, and obtain a similar level of reliability, availability, manageability, and high performance that you would expect from a storage area network. All file shares are simultaneously online on all nodes. File shares associated with this type of clustered file server are called scale-out file shares. This is sometimes referred to as active-active.

  • File Server for general use – This type of clustered file server, and therefore all the shares associated with the clustered file server, is online on one node at a time. This is sometimes referred to as active-passive or dual-active. File shares associated with this type of clustered file server are called clustered file shares.

  • Business Continuity Plan

  • Disaster Recovery Plan

  • DFS Replication and Namespace for unstructured data, e.g. user profiles, home drives, Citrix profiles

  • Highly Available File Server Replication 


Hyper-v Server 2016 What’s New

Changed and upgraded functionality of Hyper-v Server 2016.

  1. Hyper-V cluster with mixed Hyper-V versions
  • Join a Windows Server 2016 Hyper-V node to a Windows Server 2012 R2 Hyper-V cluster
  • The cluster functional level remains Windows Server 2012 R2
  • Manage the cluster, Hyper-V, and virtual machines from a node running Windows Server 2016 or Windows 10
  • Use only existing Hyper-V features until all of the nodes are migrated to the Windows Server 2016 cluster functional level
  • Virtual machine configuration versions of existing virtual machines aren't upgraded
  • Upgrade the configuration version after you upgrade the cluster functional level, using the Update-VmConfigurationVersion vmname cmdlet
  • New virtual machines created on Windows Server 2016 will be backward compatible
  • When the Hyper-V role is enabled on a computer that uses the Always On/Always Connected (AOAC) power model, the Connected Standby power state is now available
  2. Production checkpoints
  • For production checkpoints, the Volume Snapshot Service (VSS) is used inside Windows virtual machines
  • Linux virtual machines flush their file system buffers to create a file-system-consistent checkpoint
  • Checkpoints no longer use saved-state technology
  3. Hot add and remove for network adapters and memory
  • Add or remove a network adapter while the virtual machine is running, for both Windows and Linux machines
  • Adjust the memory of a running virtual machine even if you haven't enabled dynamic memory
  4. Integration Services delivered through Windows Update
  • Windows Update will distribute integration services
  • The ISO image file vmguest.iso is no longer needed to update integration components
  5. Storage quality of service (QoS)
  • Create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks
  • Hyper-V automatically adjusts virtual disks as the assigned storage QoS policies change
  6. Virtual machine improvements
  • Import a virtual machine with an older configuration version, upgrade it later, and live migrate it across hosts
  • After you upgrade the virtual machine configuration version, you can't move the virtual machine to a server that runs Windows Server 2012 R2
  • You can't downgrade the virtual machine configuration version back from version 6 to version 5
  • Turn off the virtual machine to upgrade the virtual machine configuration
  • The Update-VmConfigurationVersion cmdlet is blocked on a Hyper-V cluster when the cluster functional level is Windows Server 2012 R2
  • After the upgrade, the virtual machine will use the new configuration file format
  • The new configuration files use the .VMCX file extension for virtual machine configuration data and the .VMRS file extension for runtime state data
  • Ubuntu 14.04 and later, and SUSE Linux Enterprise Server 12, support secure boot using the Set-VMFirmware vmname -SecureBootTemplate MicrosoftUEFICertificateAuthority cmdlet
  7. Hyper-V Manager improvements
  • Support for alternate credentials
  • Down-level management of Hyper-V running on Windows Server 2012, Windows 8, Windows Server 2012 R2, and Windows 8.1
  • Connect to Hyper-V using the WS-MAN protocol with Kerberos or NTLM authentication
  8. Guest OS support
  • Any server operating system from Windows Server 2008 to Windows Server 2016
  • Any desktop operating system from Vista SP2 to Windows 10
  • FreeBSD, Ubuntu, SUSE Linux Enterprise, CentOS, Debian, Fedora, and Red Hat
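
The cluster functional level and configuration-version upgrades described above can be scripted. A sketch; note that the pre-release cmdlet name Update-VmConfigurationVersion shipped in the final Windows Server 2016 release as Update-VMVersion:

```powershell
# Raise the cluster functional level once every node runs Windows Server 2016.
Update-ClusterFunctionalLevel

# Upgrade the configuration version of VMs that are turned off.
Get-VM | Where-Object State -eq 'Off' | Update-VMVersion
```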

9. ReFS Accelerated VHDX 

  • Create a fixed-size VHDX on a ReFS volume instantly.
  • Faster backup operations and checkpoints.

10. Nested Virtualization

  • Run Hyper-V Server as a guest OS inside Hyper-V

11. Shared VHDX format

  • Host-based backup of shared VHDX files
  • Online resize of shared VHDX
  • Some usability changes in the UI
  • Shared VHDX files are now a new type of VHD called .vhds files.

12. Stretched Hyper-V Cluster 

  • A stretched cluster allows you to configure Hyper-V hosts and storage in a single stretch cluster, where two nodes share one set of storage and two nodes share another set of storage, and synchronous replication keeps both sets of storage mirrored in the cluster to allow immediate failover.
  • These nodes and their storage are typically located in separate physical sites, although that is not required.
  • The stretch cluster runs a Hyper-V compute workload.

 

Unsupported:

Hyper-V on Windows 10 doesn’t support failover clustering

How to configure Hyper-v Replica Step By Step

Hyper-V Replica provides IP-based asynchronous replication of virtual machines between two Hyper-V servers. Because the replication is asynchronous, the replica virtual machine will not have the most recent data. However, replica virtual machines provide a cost-effective way of keeping a copy of production virtual machines in a secondary site, which can be brought online in case of a disaster.

Benefits:

  • Shared or standalone storage to fulfil the capacity requirements of the replicated virtual machines
  • Asynchronous replication of Hyper-V virtual machines over an Ethernet IP-based network
  • Replica works with standalone servers, failover clusters, or a mixture of both
  • Hyper-V hosts can be physically co-located or in geographically diverse locations connected by MPLS or IPVPN
  • Hyper-V hosts can be domain-joined or standalone
  • Provides planned or unplanned failover
  • Any virtualized server on Hyper-V can be replicated using Hyper-V Replica

Requirements:

  • Windows Server 2012 R2 with the Hyper-V role installed, or
  • Windows Server 2012 with the Hyper-V role installed
  • A similar virtual network and physical network must be configured in the secondary site for the replica virtual machine to function like the production virtual machine.

Step1: Configure Firewall on Primary and Secondary Hyper-v Host

1. Right Click Windows Logo on Task Bar>Control Panel>Windows Firewall

2. Open Windows Firewall with Advance Security and click Inbound Rules.

3. Right-click Hyper-V Replica HTTP Listener (TCP-In) and click Enable Rule.

4. Right-click Hyper-V Replica HTTPS Listener (TCP-In) and click Enable Rule.
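
The same two firewall rules can be enabled from PowerShell on both hosts:

```powershell
# Enable the built-in Hyper-V Replica listener rules.
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTPS Listener (TCP-In)"
```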

Step2: Pre-stage Replica Broker Computer Object

1. Log on to DC>Open Active Directory Users & Computers>Create New Computer e.g. HVReplica

2. Right Click on HVReplica Computer Object>Properties>Security Tab>Hyper-v Cluster Nodes NetBIOS Name>Allow Full Permission>Apply>Ok.

Step3: Configure Replica Broker in Hyper-v Environment

Hyper-v Replica using Failover Cluster Wizard

1. Log on Hyper-v Host>open Failover Cluster Manager.

2. In the left pane, connect to the cluster, and while the cluster name is highlighted, click Configure Role in the Actions pane. The High Availability wizard opens

3. In the Select Role screen, select Hyper-V Replica Broker.

image

4. Complete the wizard, providing the NetBIOS name you created in the previous step and an IP address to be used as the connection point to the cluster.

5. Verify that the Hyper-V Replica Broker role comes online successfully. Click Finish.

6. To test Replica broker failover, right-click the role, point to Move, and then click Select Node. Then, select a node, and then click OK.

7. Click Roles in the Navigate category of the Details pane.

8. Right-click the role and choose Replication Settings.

9. In the Details pane, select Enable this cluster as a Replica server.

10. In the Authentication and ports section, select the authentication method: Kerberos over HTTP or certificate-based authentication over HTTPS.

11. To use certificate-based authentication, click Select Certificate and provide the requested certificate information.

12. In the Authorization and storage section, you can specify a default location, or specific servers with specific storage, using the Trust Group tag.

13. Click OK or Apply when you are finished.

 

Configure Hyper-v Replica using Hyper-v Manager

To configure a Hyper-V Replica server in a non-clustered environment:

1. In Hyper-V Manager, click Hyper-V Settings in the Actions pane.

2. In the Hyper-V Settings dialog, click Replication Configuration.

image

3. In the Details pane, select Enable this computer as a Replica server.

4. In the Authentication and ports section, select the authentication method: Kerberos over HTTP or certificate-based authentication over HTTPS.

5. To use certificate-based authentication, click Select Certificate and provide the requested certificate information.

6. In the Authorization and storage section, you can specify a default location, or specific servers with specific storage, using the Trust Group tag.

7. Click OK or Apply when you are finished.
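
The replica server settings above map to the Set-VMReplicationServer cmdlet. A sketch, assuming Kerberos authentication and a hypothetical default storage path:

```powershell
# Enable the host as a Replica server over Kerberos/HTTP (port 80).
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\HyperVReplica"
```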

Step4: Configure Replica Virtual Machine

1. In the Details pane of Hyper-V Manager, select a virtual machine by clicking it.

2. Right-click the selected virtual machine and point to Enable Replication. The Enable Replication wizard opens.

3. On the Specify Replica Server page, in the Replica Server box, enter either the NetBIOS name or fully qualified international domain name (FQIDN) of the Replica server that you configured in Step 3. If the Replica server is part of a failover cluster, enter the name of the Hyper-V Replica Broker that you also configured in Step 3. Click Next.

4. On the Specify Connection Parameters page, the authentication and port settings you configured for the Replica server in Step 3 will automatically be populated, provided that Remote WMI is enabled. If it is not enabled, you will have to provide the values. Click Next.

5. On the Choose Replication VHDs page, clear the checkboxes for any VHDs that you want to exclude from replication, then click Next.

6. On the Configure Recovery History page, select the number and types of recovery points to be created on the Replica server, then click Next.

7. On the Choose Initial Replication page, select the initial replication method and then click Next.

8. On the Completing the Enable Replication Relationship Wizard page, review the information in the Summary and then click Finish.

9. A Replica virtual machine is created on the Replica server. If you elected to send the initial copy over the network, the transmission begins either immediately or at the time you configured.
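
The wizard steps above have PowerShell equivalents. A sketch, assuming hypothetical VM and broker names and Kerberos over port 80:

```powershell
# Enable replication of a VM to the Replica Broker configured earlier.
Enable-VMReplication -VMName "SRV01" -ReplicaServerName "HVReplica" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Kick off the initial copy over the network.
Start-VMInitialReplication -VMName "SRV01"
```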

Step5: Test Replicated Virtual Machine

1. In Hyper-V Manager, right-click the virtual machine you want to test failover for, point to Replication…, and then point to Test Failover….

2. After you have concluded your testing, discard the test virtual machine by choosing Stop Test Failover under the Replication option

Step6: Planned Failover

1. Start Hyper-V Manager on the primary server and choose a virtual machine to fail over. Turn off the virtual machine that you want to fail over.

2. Right-click the virtual machine, point to Replication, and then point to Planned Failover.

3. Click Fail Over to actually transfer operations to the virtual machine on the Replica server. Failover will not occur if the prerequisites have not been met.

How to respond to unplanned Failover

1. Open Hyper-V Manager and connect to the Replica server.

2. Right-click the name of the virtual machine you want to use, point to Replication, and then point to Failover….

3. In the dialog that opens, choose the recovery snapshot you want the virtual machine to recover to, and then click Failover. The Replication Status will change to Failed over – Waiting completion, and the virtual machine will start using the network parameters you previously configured for it.

4. Use the Complete-VMFailover Windows PowerShell cmdlet to complete the failover.
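
A sketch of the unplanned-failover commands run on the Replica server, assuming a hypothetical VM name:

```powershell
# Fail over to the most recent recovery point on the Replica server.
Start-VMFailover -VMName "SRV01"

# Commit the failover once you are satisfied with the recovery point.
Complete-VMFailover -VMName "SRV01"
```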

Starting a reverse replication once disaster is over

1. Open Hyper-V Manager and connect to the Replica server.

2. Right-click the name of the virtual machine you want to reverse replicate, point to Replication, and then point to Reverse replication…. The Reverse Replication wizard opens.

3. Complete the Reverse Replication wizard. You will find the requested information to be very similar if not identical to the information you provided in the Enable Replication wizard

Similar Articles:

Migrating VMs from Standalone Hyper-v Host to clustered Hyper-v Host

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

How to configure SAN replication between IBM Storwize V3700 systems

How to Connect and Configure Virtual Fibre Channel, FC Storage and FC Tape Library from within a Virtual Machine in Hyper-v Server 2012 R2

Windows Server 2012 R2 with Hyper-v Role provides Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage and also allows you to cluster guest operating systems leveraging Fibre Channel, and provides an important new storage option for servers hosted in your virtual infrastructure.

Benefits:

  • Leverage existing Fibre Channel investments to support virtualized workloads.
  • Connect a Fibre Channel tape library from within a guest operating system.
  • Support for many related features, such as virtual SANs, live migration, and MPIO.
  • Create an MSCS cluster of guest operating systems in a Hyper-V cluster.

Limitation:

  • Live Migration will not work if SAN zoning isn’t configured correctly.
  • Live Migration will not work if LUN mismatch detected by Hyper-v cluster.
  • Virtual workload is tied with a single Hyper-v Host making it a single point of failure if a single HBA is used.
  • Virtual Fibre Channel logical units cannot be used as boot media.

Prerequisites:

  • Windows Server 2012 or 2012 R2 with the Hyper-V role.
  • Hyper-V requires a computer with processor support for hardware virtualization. See details in BIOS setup of server hardware.
  • A computer with one or more Fibre Channel host bus adapters (HBAs) that have an updated HBA driver that supports virtual Fibre Channel.
  • An NPIV-enabled fabric, HBA, and FC SAN. Almost all current-generation Brocade fabrics and storage support this feature. NPIV is disabled on the HBA by default.
  • Virtual machines configured to use a virtual Fibre Channel adapter, which must use Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2 as the guest operating system. A maximum of 4 vFC ports are supported per guest OS.
  • Storage accessed through a virtual Fibre Channel supports devices that present logical units.
  • MPIO Feature installed in Windows Server.
  • Microsoft Hotfix KB2894032

Before I begin elaborating the steps involved in configuring virtual Fibre Channel, I assume you have physical connectivity in place and physical multipathing configured per vendor best practice. In this example configuration, I will present storage and an FC tape library to a virtualized backup server. I used the following hardware.

  • 2X Brocade 300 series Fabric
  • 1X FC SAN
  • 1X FC Tape Library
  • 2X Windows Server 2012 R2 with the Hyper-V role installed and configured as a cluster. Each host is connected to both Fabrics using a dual-port HBA.

Step1: Update Firmware of all Fabric.

Use this LINK to update firmware.

Step2: Update Firmware of FC SAN

See OEM or vendor installation guide. See this LINK for IBM guide.

Step3: Enable hardware virtualization in Server BIOS

See OEM or Vendor Guidelines

Step4: Update Firmware of Server

See OEM or Vendor Guidelines. See Example of Dell Firmware Upgrade

Step5: Install MPIO driver in Hyper-v Host

See OEM or Vendor Guidelines

Step6: Physically Connect FC Tape Library, FC Storage and Servers to correct FC Zone

Step7: Configure Correct Zone and NPIV in Fabric

SSH to the Fabric and type the following command to verify NPIV.

Fabric:root>portcfgshow 0

If NPIV is enabled, it will show NPIV ON.

To enable NPIV on a specific port, type portCfgNPIVPort 0 1 (where 0 is the port number and 1 is the mode: 1=enable, 0=disable).

Open the Brocade Fabric and configure aliases. The red-marked items are the virtual HBAs and FC tape shown in the Fabric. Note that you must place the FC tape, Hyper-V host(s), virtual machine, and FC SAN in the same zone, otherwise it will not work.

image

Configure correct Zone as shown below.

image

Configure correct Zone Config as shown below.

image

Once you have configured the correct zone in the Fabric, you will see the FC tape in the Windows Server 2012 R2 host where the Hyper-V role is installed. Do not update the tape driver in the Hyper-V host, as we will use the guest (virtual machine) as the backup server, where the correct tape driver is needed.

image

Step8: Configure Virtual Fibre Channel

Open Hyper-v Manager, Click Virtual SAN Manager>Create new Fibre Channel

image

Type Name of the Fibre Channel> Apply>Ok.

image

Repeat the process to create multiple vFCs for MPIO and live migration purposes. Remember the physical HBA must be connected to the two Brocade Fabrics.

For the vFC configuration, keep the naming convention identical on both hosts. If you have two physical HBA ports, configure two vFCs on each Hyper-V host, for example VFC1 and VFC2. Create two vFCs on the other host with the identical names VFC1 and VFC2. Assign both vFCs to the virtual machines.

Step9: Attach Virtual Fibre Channel Adapter on to virtual Machine.

Open Failover Cluster Manager, select the virtual machine where the FC tape will be visible, and shut down the virtual machine.

Go to Settings of the virtual machine>Add Fibre Channel Adapter>Apply>Ok.

image

Record WWPN from the Virtual Fibre Channel.

image

Power on the virtual Machine.

Repeat the process to add the remaining vFCs (VFC1 and VFC2) to the virtual machine.
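
Adding the vFC adapters can also be done with PowerShell. A sketch, assuming the virtual SAN names VFC1/VFC2 created earlier and a hypothetical VM name:

```powershell
# Attach both virtual Fibre Channel adapters to the (powered-off) VM.
Add-VMFibreChannelHba -VMName "BackupSrv" -SanName "VFC1"
Add-VMFibreChannelHba -VMName "BackupSrv" -SanName "VFC2"

# Record the WWPNs to zone on the Fabric and register on the storage.
Get-VMFibreChannelHba -VMName "BackupSrv" |
    Select-Object SanName, WorldWidePortNameSetA, WorldWidePortNameSetB
```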

Step10: Present Storage

Log on to the FC storage and add a host. The WWPNs shown here must match the WWPNs of the virtual Fibre Channel adapters.

image

Map the volume or LUN to the virtual server.

image

Step11: Install MPIO Driver in Guest Operating Systems

Open Server Manager>Add Role & Feature>Add MPIO Feature.

image

Download the manufacturer's MPIO driver for the storage. The MPIO driver must be the correct and latest version to function correctly.

image

Now you have FC SAN in your virtual machine

image

image

Step12: Install Correct FC Tape Library Driver in Guest Operating Systems.

Download the correct FC tape driver and install it in the virtual backup server.

Now you have correct FC Tape library in virtual machine.

image

Backup software can see Tape Library and inventory tapes.

image

Further Readings:

Brocade Fabric with Virtual FC in Hyper-v

Hyper-V Virtual Fibre Channel Overview

Clustered virtual machine cannot access LUNs over a Synthetic Fibre Channel after you perform live migration on Windows Server 2012 or Windows Server 2012 R2-based Hyper-V hosts

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

With server virtualization you can run multiple server instances concurrently on a single physical host, yet the servers are isolated from each other and operate independently. Similarly, network virtualization allows multiple virtual network infrastructures to run on the same physical network, with or without overlapping IP addresses. Each virtual network infrastructure operates as if it were the only virtual network running on the shared network infrastructure. Hyper-V Network Virtualization also decouples the virtual network from the physical network. Network virtualization can be achieved via System Center Virtual Machine Manager (SCVMM) managing multiple Hyper-V servers, a single Hyper-V server, or clustered Hyper-V servers. Microsoft Hyper-V Network Virtualization provides multi-tenant-aware, multi-VLAN-aware, and non-hierarchical IP address assignment to virtual machines in conventional on-premises and cloud-based data centers.

Hyper-v Virtual Network Type

  • Private Virtual Network Switch allows communication between virtual machines connected to the same virtual switch. Virtual Machines connected to this type of virtual switch cannot communicate with Hyper-V Parent Partition. You can create any number of Private virtual switches.
  • Internal Virtual Network Switch can be used to allow communication between virtual machines connected to the same switch and also allow communication to the Hyper-V Parent Partition. You can create any number of internal virtual switches
  • External Virtual Network Switch allows communication between virtual machines running on the same Hyper-V Server, Hyper-V Parent Partition and Virtual Machines running on the remote Hyper-V Server. It requires a physical network adapter on the Hyper-V Host that is not mapped to any other External Virtual Network Switch. As a result, you can create External virtual switches as long as you have physical network adapters that are not mapped to any other external virtual switches.
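
The three switch types can be created with New-VMSwitch. A sketch; the switch and adapter names are placeholders:

```powershell
New-VMSwitch -Name "PrivateSwitch"  -SwitchType Private
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal

# External switches bind to a physical NIC not used by another external switch.
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet 1" -AllowManagementOS $true
```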

Follow these guidelines to configure virtual networking on Windows Server 2012 R2 with the Hyper-V role installed. A highly available clustered Hyper-V server should have the following configuration parameters.

Example VLAN

Network Type     VLAN ID   IP Addresses
Default          1         10.10.10.1/24
Management       2         10.10.20.1/24
Live Migration   3         10.10.30.1/24
Prod Server      4         10.10.40.1/24
Dev Server       5         10.10.50.1/24
Test Server      6         10.10.60.1/24
Storage          7         10.10.70.1/24
DMZ              99        192.168.1.1/24

Example NIC Configuration with 8 network cards (e.g. 2x quad-port NIC cards)

  • MGMT – Management network; switch port configured with VLAN 2; "Allow Management Network" ticked; "Enable VLAN identification for management operating system" ticked
  • LiveMigration – Live Migration; switch port configured with VLAN 3; "Allow Management Network" un-ticked; "Enable VLAN identification for management operating system" ticked
  • iSCSI – Storage; switch port configured with VLAN 7; "Allow Management Network" un-ticked; "Enable VLAN identification for management operating system" ticked
  • VirtualMachines – Prod, Dev, Test, DMZ; switch port configured in trunk mode; "Allow Management Network" un-ticked; "Enable VLAN identification for management operating system" un-ticked

Recommendation:

  • Do not assign a VLAN ID in the NIC Teaming wizard; instead assign VLAN IDs in Virtual Switch Manager.
  • Configure the virtual switch network as an External Virtual Network.
  • Configure physical switch port aggregation using EtherChannel.
  • Configure logical network aggregation using the NIC Teaming wizard.
  • Enable the VLAN ID in Virtual Machine Settings.
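
The per-VM and management-OS VLAN tagging recommended above can be done in PowerShell. A sketch, assuming the VLAN IDs from the example tables and hypothetical adapter/VM names:

```powershell
# Tag the management vNIC in the parent partition with VLAN 2.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGMT" -Access -VlanId 2

# Tag a production VM's adapter with VLAN 4.
Set-VMNetworkAdapterVlan -VMName "ProdVM01" -Access -VlanId 4
```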

Example Virtual Machine Network Configuration

Virtual Machine Type   VLAN ID (tagged in VM > Settings > Network Adapter)   Enable VLAN identifier   Connected Virtual Network
Prod VM                4                                                     Ticked                   VirtualMachines
Dev VM                 5                                                     Ticked                   VirtualMachines
Test VM                6                                                     Ticked                   VirtualMachines
DMZ VM with two NICs   4, 99                                                 Ticked                   VirtualMachines

 

NIC Teaming with Virtual Switch

NIC Teaming allows multiple network adapters on a computer to be placed into a team for the following purposes:

  • Bandwidth aggregation
  • Traffic failover to prevent connectivity loss in the event of a network component failure

There are two basic configurations for NIC Teaming.

  • Switch-independent teaming. This configuration does not require the switch to participate in the teaming. Because the switch does not know that the network adapters are part of a team on the host, the adapters may be connected to different switches. Switch-independent mode does not require that the team members connect to different switches; it merely makes that possible.
  • Switch-dependent teaming. This configuration requires the switch to participate in the teaming, so the participating NICs must be connected to the same physical switch. There are two modes of operation for switch-dependent teaming: generic or static teaming (IEEE 802.3ad draft v1), and Link Aggregation Control Protocol (LACP) teaming (IEEE 802.1ax).

Load Balancing Algorithm

NIC teaming in Windows Server 2012 R2 supports the following traffic load distribution algorithms:

  • Hyper-V switch port. Since VMs have independent MAC addresses, the VM’s MAC address or the port it’s connected to on the Hyper-V switch can be the basis for dividing traffic.
  • Address Hashing. This algorithm creates a hash based on address components of the packet and then assigns packets that have that hash value to one of the available adapters. Usually this mechanism alone is sufficient to create a reasonable balance across the available adapters.
  • Dynamic. This algorithm takes the best aspects of each of the other two modes and combines them into a single mode. Outbound loads are distributed based on a hash of the TCP Ports and IP addresses. Dynamic mode also rebalances loads in real time so that a given outbound flow may move back and forth between team members. Inbound loads are distributed as though the Hyper-V port mode was in use.
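These distribution modes correspond to the -LoadBalancingAlgorithm parameter of the NIC teaming cmdlets in Windows Server 2012 R2. A sketch, with placeholder team and adapter names:

```powershell
# Create a team using dynamic distribution
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Later, switch the same team to Hyper-V port distribution
Set-NetLbfoTeam -Name "VMTeam" -LoadBalancingAlgorithm HyperVPort
```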

NIC Teaming within Virtual Machine

NIC teaming in Windows Server 2012 R2 may also be deployed in a VM. This allows a VM to have virtual NICs connected to more than one Hyper-V switch and still maintain connectivity even if the physical NIC under one switch gets disconnected.

To enable NIC teaming within a virtual machine: in Hyper-V Manager, open the settings for the VM, select the VM's network adapter and its Advanced Features item, then enable the NIC Teaming checkbox for the VM.
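Guest teaming must also be permitted on the host side; the PowerShell equivalent of that checkbox is Set-VMNetworkAdapter (the VM name below is an example):

```powershell
# Allow the guest operating system to place this virtual NIC into a team
Set-VMNetworkAdapter -VMName "ProdVM01" -AllowTeaming On
```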

Physical Switch Configuration

  • In Trunk Mode, a virtual switch listens to all network traffic and forwards it to all of its ports; in other words, network packets are sent to all the virtual machines connected to it. By default, a virtual switch in Hyper-V is configured in Trunk Mode, so little configuration is needed for this mode.
  • In Access Mode, the virtual switch first checks the VLAN ID tagged in each incoming network packet. If the VLAN ID matches the one configured on the virtual switch, the packet is accepted; any incoming packet that is not tagged with the same VLAN ID is discarded.
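On the Hyper-V side, the equivalent access and trunk settings can be applied per virtual NIC; the VM names and VLAN list below are illustrative:

```powershell
# Access mode: only frames tagged with VLAN 6 reach this adapter
Set-VMNetworkAdapterVlan -VMName "TestVM01" -Access -VlanId 6

# Trunk mode: pass a set of VLANs through to the guest; untagged frames use the native VLAN
Set-VMNetworkAdapterVlan -VMName "RouterVM01" -Trunk -AllowedVlanIdList "4,5,6,99" -NativeVlanId 1
```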

Cisco EtherChannel

EtherChannel provides automatic recovery for the loss of a link by redistributing the load across the remaining links. If a link fails, EtherChannel redirects traffic from the failed link to the remaining links in the channel without intervention. EtherChannel Negotiation Protocols are:

  • PAgP (Cisco Proprietary)
  • LACP (IEEE 802.3ad)

EtherChannel with Switch-Independent NIC Teaming

This example shows how to configure an EtherChannel on a switch. It assigns two ports as static-access ports in VLAN 10 to channel 5 with the PAgP mode desirable:

1. To configure specific VLAN for teamed NIC

Switch# configure terminal
Switch(config)# interface range gigabitethernet 0/1 - 2
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
Switch(config-if-range)# channel-group 5 mode desirable non-silent
Switch(config-if-range)# end

2. To configure Trunk for teamed NIC

Switch# configure terminal
Switch(config)# interface range gigabitethernet 0/1 - 2
Switch(config-if-range)# switchport mode trunk
Switch(config-if-range)# channel-group 5 mode desirable non-silent
Switch(config-if-range)# end

EtherChannel with Switch-Dependent NIC Teaming

This example shows how to configure an EtherChannel on a switch. It assigns two ports as static-access ports in VLAN 10 to channel 5 with the LACP mode active:

Switch# configure terminal
Switch(config)# interface range gigabitethernet 0/1 - 2
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
Switch(config-if-range)# channel-group 5 mode active
Switch(config-if-range)# end
Switch# show etherchannel 5 summary

This example shows how to configure a cross-stack EtherChannel. It uses LACP passive mode and assigns two ports on stack member 2 and one port on stack member 3 as static-access ports in VLAN 10 to channel 5:

Switch# configure terminal
Switch(config)# interface range gigabitethernet 2/0/4 - 5
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
Switch(config-if-range)# channel-group 5 mode active
Switch(config-if-range)# exit
Switch(config)# interface gigabitethernet3/0/3
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# channel-group 5 mode active
Switch(config-if)# exit

To set up dynamic load balancing with 802.3ad NIC teaming (load-balancing method: Automatic):

Switch# configure terminal
Switch(config)# interface gigabitethernet 2/0/23
Switch(config-if)# switchport
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 100
Switch(config-if)# spanning-tree portfast
Switch(config-if)# channel-group 1 mode active
Switch(config-if)# exit
Switch(config)# port-channel load-balance src-mac
Switch(config)# end
Switch#show etherchannel 1 summary
Switch#show spanning-tree interface port-channel 1
Switch#show etherchannel load-balance

HP Switch Configuration

LACP Config:

PROCURVE-Core1#conf ter
PROCURVE-Core1# trunk PORT1-PORT2 (e.g. C1-C2) Trk<ID> (e.g. Trk99) LACP
PROCURVE-Core1# vlan <VLANID>
PROCURVE-Core1# untagged Trk<ID> (e.g. Trk99)
PROCURVE-Core1# show lacp
PROCURVE-Core1# show log lacp

Trunk Config:

PROCURVE-Core1#conf ter
PROCURVE-Core1# trunk PORT1-PORT2 (e.g. C1-C2) Trk<ID> (e.g. Trk99) TRUNK
PROCURVE-Core1# vlan <VLANID>
PROCURVE-Core1# untagged Trk<ID> (e.g. Trk99)
PROCURVE-Core1# show trunk
PROCURVE-Core1# show log trunk

VMware vSphere 6.0 vs Microsoft Hyper-V Server 2012 R2

With the emergence of vSphere 6.0, I would like to compare vSphere 6.0 with Windows Server 2012 R2. I collected the vSphere 6.0 feature list from a few blogs and the VMware community forum. Note that vSphere 6.0 is in a beta program, which means VMware can amend anything before the final release. The new functionality of the vSphere 6.0 beta is already available in Windows Server 2012 R2, so let's have a quick look at both virtualization products.

Features | vSphere 6.0 | Hyper-V Server 2012 R2
Certificates | Certificate Authority; Certificate Store | Active Directory Certificate Services; Certificate Store in the Windows OS
Single Sign-On | VMware retained SSO 2.0 from vSphere 5.5 | Active Directory Domain Services
Database | vPostgres database for the vCenter Appliance, up to 8 vCenters | Microsoft SQL Server, no limitation
Management Tools | Web Client and VI Client (VMware retained the VI Client) | SCVMM Console and Hyper-V Manager
Installer | Combined single installer with all input upfront | Combined single installer with all input upfront
vMotion | Long-distance migration up to 100+ ms RTT | Multi-site Hyper-V Cluster and Live Migration
Storage Migration | Storage vMotion with shared and unshared storage | Hyper-V Live Storage Migration between local and shared storage
Combined Cloud Products | Platform Services Controller (PSC) includes vCenter, vCOPs, vCloud Director, vCloud Automation | Microsoft System Center combines App Controller, Configuration Manager, Data Protection Manager, Operations Manager, Orchestrator, Service Manager, Virtual Machine Manager
Service Registration | View the services that are running in the system | Windows Services
Licensing | Platform Services Controller (PSC) includes licensing | Volume Activation role in Windows Server 2012 R2
Virtual Datacenters | A virtual datacenter aggregates CPU, memory, storage and network resources | Provision CPU, memory, storage and network using the Create Cloud wizard

One more point worth comparing: those who are planning to procure an FC tape library and maintain a virtual backup server should note that vSphere does not support FC tape even with NPIV, while Hyper-V supports FC tape using NPIV.

References:

http://www.wooditwork.com/2014/08/27/whats-new-vsphere-6-0-vcenter-esxi/

https://araihan.wordpress.com/2014/03/25/vmware-vs-hyper-v-can-microsoft-make-history-again/

https://araihan.wordpress.com/2013/01/24/microsofts-hyper-v-server-2012-and-system-center-2012-unleash-ko-punch-to-vmware/

https://araihan.wordpress.com/2015/08/20/hyper-v-server-2016-whats-new/

Microsoft Virtual Machine Converter: Switching from vSphere to Hyper-V Made Easy

    Are you having difficulty funding a renewal license for expensive VMware vSphere? There is an alternative brand that adds greater value to the business, reducing costs and accelerating your journey to the cloud. Making the shift from VMware to Microsoft could be the wisest decision you have made in years of working as a CIO or IS Manager. By migrating from VMware to Microsoft, you gain a unified infrastructure licensing model and simplified vendor management, and of course it is easier on your wallet too.
    Whether you are looking to add value to your organisation, save cost, support growth, or you are a fanatical environmentalist reducing your carbon footprint, Hyper-V is the correct choice for you. A move to Microsoft's virtualization and management platform can help you better meet your business needs. Simply by buying Windows Server 2012 Datacenter, you get the cloud computing benefits of unlimited virtualization and lower costs, consistently and predictably over time.
    System Center 2012 enables physical, virtual, private cloud, and public cloud management using a single platform. It offers support for multi-hypervisor management, third-party integration and process management, and deep application diagnostics and insight. You can see what is happening inside your applications' performance, remediate issues faster, and achieve increased agility for your organization.
    With the help of free tools like the Microsoft Assessment and Planning Toolkit (MAP) and the Microsoft Virtual Machine Converter (MVMC), you can quickly, easily and safely migrate over to Hyper-V. For enterprise customers with large numbers of virtual machines to migrate, the Migration Automation Toolkit (MAT) provides the scalability to handle mass migrations in an automated fashion. System Center 2012 and Hyper-V Server 2012 support guest virtual machines of all major Linux and UNIX distributions, including Microsoft OSes of course.
    In a nutshell, Microsoft Virtual Machine Converter:
  • Provides a quick, low-risk option for VMware customers to evaluate Hyper-V.
  • Converts VMware virtual machines to Hyper-V virtual machines.
  • Converts virtual hardware and keeps the same configuration as the original virtual machine.
  • Supports a clean migration to Hyper-V by uninstalling VMware Tools on the source virtual machine.
  • Provides a GUI or scriptable CLI and Windows PowerShell, making it simple to perform virtual machine conversion.
  • Installs integration services for Windows Server 2003 guests that are converted to Hyper-V virtual machines.
  • Supports conversion of virtual machines from VMware vSphere 4.1 and 5.0 hosts.
  • Supports migration of a guest machine that is part of a failover cluster.
  • Supports offline conversion of VMware-based virtual hard disks (VMDK) to the Hyper-V virtual hard disk file format (.vhd).
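As a sketch of the scriptable interface mentioned above, the offline VMDK-to-VHD conversion can be driven from the MVMC PowerShell module. The module path and file paths below are examples, and cmdlet availability depends on the MVMC version installed:

```powershell
# Load the MVMC cmdlets (default install path shown; adjust as needed)
Import-Module "C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1"

# Convert a VMware VMDK to a dynamically expanding Hyper-V .vhd offline
ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath "D:\VMs\web01.vmdk" `
    -DestinationLiteralPath "D:\Converted" -VhdType DynamicHardDisk -VhdFormat Vhd
```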
Relevant Articles:

  • Microsoft Virtual Machine Converter Solution Accelerator
  • Migration Automation Toolkit (MAT)
  • Cost Calculator
  • Download Windows Server 2012
  • Download System Center 2012
  • Hyper-V vs vSphere
  • Is VMware's fate heading towards Novell?

Is VMware's fate heading towards Novell?

Previously I wrote a blog comparing the price and features of Hyper-V and VMware. I got a lot of feedback and questions about why I believe Microsoft will win the battle. Here is a short answer.

Living in a mining city of Australia, it is true that most mining, oil and gas companies aren't adopting Microsoft Hyper-V yet, excluding Fortescue Metals Group (FMG). FMG took a smart decision to go for the Microsoft cloud rather than any other cloud technology. But the wind is shifting quickly, and not just among mining, oil and gas companies; consider the ING Direct and Suncorp Bank case studies. There is nothing to hide: Microsoft came late to the hypervisor game, but slowly and surely it is gaining momentum.

I have worked in IT for almost 15 years now, and on many occasions I have seen Microsoft crush its opponents and take over market share in their own business. This is what is happening in the hypervisor battle. Let's be honest: VMware is THE leader in virtualization, and I am sure there are skeptics who believe beating VMware isn't possible. Those skeptics bet their money on Novell NetWare, IBM Lotus Notes and Corel WordPerfect in their day. If I had told you in the year 2000 that Active Directory would beat Novell eDirectory, you would have burst out laughing; today you rarely see anyone working with eDirectory, WordPerfect or Lotus Notes. These examples say it all. VMware's fate was written when Microsoft released Windows Server 2012, Hyper-V Server 2012 and System Center 2012. By the next Windows, Hyper-V and System Center release, VMware may be heading the same way.

Pasting text to Hyper-V guests sometimes results in garbled characters: a workaround

        To work around this issue:

• RDP to the virtual machine using mstsc.exe
• Increase the keyboard class buffer size in the virtual machine
• Disable the synthetic keyboard in the virtual machine to force use of the emulated keyboard

        To Increase the keyboard class buffer size in the virtual machine

        1. Logon to a running virtual machine as an Administrator.

2. Hover the mouse over the top right corner, click Search, type regedit, right-click Registry Editor and click Run as Administrator.

        3. Locate and then click the following registry entry:

HKLM\SYSTEM\CurrentControlSet\Services\kbdclass\Parameters

4. In the details pane, double-click KeyboardDataQueueSize.

5. Select Decimal and type a value of 1024.

6. Click OK and close the Registry Editor. You can modify the same registry value for a group of Hyper-V virtual machines using a GPO; the GPO location is Computer Configuration > Windows Settings > Security Settings > Registry. Right-click and add the new registry value.
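Steps 3 to 6 can also be done in one line from an elevated PowerShell prompt inside the guest:

```powershell
# Set the keyboard class buffer size to 1024 (decimal), as in the registry steps above
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\kbdclass\Parameters" `
    -Name "KeyboardDataQueueSize" -Value 1024 -Type DWord
```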

        To disable the synthetic keyboard for a virtual machine

        1. Logon to a running virtual machine as a member of the Administrators group.

2. Hover the mouse over the top right corner, click Search, type devmgmt.msc, right-click Device Manager and click Run as Administrator.

        3. Click Keyboards, right click Microsoft Hyper-V Virtual Keyboard and click Disable.

        4. Close the Device Manager snap-in. Restart Virtual Machine.

        5. On Windows Server 2012 Core, download DevCon.exe from the Windows Driver Kit to disable this driver using the command-line.
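Where the PnP device cmdlets are available (Windows 8.1 / Server 2012 R2 and later), the synthetic keyboard can also be disabled without Device Manager; the friendly-name filter below is an assumption about how the device is labelled:

```powershell
# Disable the Hyper-V synthetic keyboard so the emulated keyboard is used instead
Get-PnpDevice -Class Keyboard |
    Where-Object { $_.FriendlyName -like "*Hyper-V*" } |
    Disable-PnpDevice -Confirm:$false
```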

Client Hyper-V in Windows 8

Client Hyper-V on Windows 8 provides a rich virtual platform for developers and IT professionals. You can create and manage virtual machines using Client Hyper-V, leveraging the security, scale, and manageability of the Windows 8 and Server Hyper-V platforms.

FF TMG 2010: Can the future be altered?

I read the following articles about Microsoft Forefront TMG 2010 and was shocked by the news. TMG 2010 is one of the most beautiful products Wintel engineers and security administrators can be proud of.

        System Center Virtual Machine Manager 2012 Beta First Look

System Requirements:

        • Windows Server 2008 R2 x64 domain member
        • Windows Remote Management (WinRM) 2.0
        • Windows PowerShell 2.0
        • Microsoft .NET Framework 3.5 Service Pack 1 (SP1)
        • Windows AIK for Windows 7
        • SQL Server 2008 or SQL Server 2008 R2
        • WDS and WSUS roles installed 

Installation of System Center Virtual Machine Manager 2012:

[Installation wizard screenshots 1-16]

        System Center Virtual Machine Manager 2012 Beta – Evaluation (VHD)

        SCVMM 2012 Beta Download