Microsoft Software Defined Storage AKA Scale-out File Server (SOFS)

Business Challenges:

  • $/IOPS and $/TB
  • Continuous Availability
  • Fault Tolerance
  • Storage Performance
  • Segregation of production, development and disaster recovery storage
  • De-duplication of unstructured data
  • Segregation of data between production site and disaster recovery site
  • Continual break/fix of Distributed File System (DFS) and file servers
  • Continually extending storage on the DFS servers
  • Single point of failure
  • File systems are not always available
  • Security of file systems is a constant concern
  • Proprietary, non-scalable storage
  • Management of physical storage
  • Vendor lock-in contract for physical storage
  • Migration path from single vendor to multi vendor storage provider
  • Management overhead of unstructured data
  • Comprehensive management of storage platform

Solutions:

Microsoft Software Defined Storage, also known as Scale-Out File Server (SOFS), is a feature designed to provide scale-out file shares that are continuously available for file-based server application storage. Scale-out file shares provide the ability to share the same folder from multiple nodes of the same cluster. The following table compares the Microsoft Software Defined Storage offering with third-party offerings:

Storage feature | Third-party NAS/SAN | Microsoft Software-Defined Storage
Fabric | Block protocol | File protocol
Network | Low latency network with FC | Low latency with SMB3 Direct
Management | Management of LUNs | Management of file shares
Data de-duplication | Data de-duplication | Data de-duplication
Resiliency | RAID resiliency groups | Flexible resiliency options
Pooling | Pooling of disks | Pooling of disks
Availability | High | Continuous (via redundancy)
Copy offload, snapshots | Copy offload, snapshots | SMB copy offload, snapshots
Tiering | Storage tiering | Performance with tiering
Persistent write-back cache | Persistent write-back cache | Persistent write-back cache
Scale up | Scale up | Automatic scale-out rebalancing
Storage Quality of Service (QoS) | Storage QoS | Storage QoS (Windows Server 2016)
Replication | Replication | Storage Replica (Windows Server 2016)
Updates | Firmware updates | Rolling cluster upgrades (Windows Server 2016)
n/a | n/a | Storage Spaces Direct (Windows Server 2016)
n/a | n/a | Azure-consistent storage (Windows Server 2016)

 

 Functional use of Microsoft Scale-Out File Servers:

1. Application Workloads

  • Microsoft Hyper-v Cluster
  • Microsoft SQL Server Cluster
  • Microsoft SharePoint
  • Microsoft Exchange Server
  • Microsoft Dynamics
  • Microsoft System Center DPM Storage Target
  • Veeam Backup Repository

2. Disaster Recovery Solution

  • Backup Target
  • Object storage
  • Encrypted storage target
  • Hyper-v Replica
  • System Center DPM

3. Unstructured Data

  • Continuously Available File Shares
  • DFS Namespace folder target server
  • Microsoft Data de-duplication
  • Roaming user Profiles
  • Home Directories
  • Citrix User Profiles
  • Outlook Cached location for Citrix XenApp Session Server

4. Management

  • Single Management Point for all Scale-out File Servers
  • Provide wizard driven tools for storage related tasks
  • Integrated with Microsoft System Center

Business Values:

  • Scalability
  • Load balancing
  • Fault tolerance
  • Ease of installation
  • Ease of management/operations
  • Flexibility
  • Security
  • High performance
  • Compliance & Certification

SOFS Architecture:

Microsoft Scale-out File Server (SOFS) is considered a Software-Defined Storage (SDS) solution. Microsoft SOFS is independent of the hardware vendor as long as the compute and storage are certified by Microsoft Corporation. The following figures show a Microsoft Hyper-v cluster, SQL cluster and object storage on SOFS.

image

                 Figure: Microsoft Software Defined Storage (SDS) Architecture

image

                     Figure: Microsoft Scale-out File Server (SOFS) Architecture

image

                                      Figure: Microsoft SDS Components

image

                        Figure: Unified Storage Management (See Reference)

Microsoft Software Defined Storage AKA Scale-out File Server Benefits:

SOFS:

  • Continuous availability file stores for Hyper-V and SQL Server
  • Load-balanced IO across all nodes
  • Distributed access across all nodes
  • VSS support
  • Transparent failover and client redirection
  • Continuous availability at a share level versus a server level

De-duplication:

  • Identifies duplicate chunks of data and only stores one copy
  • Provides up to 90% reduction in storage required for OS VHD files
  • Reduces CPU and Memory pressure
  • Offers excellent reliability and integrity
  • Outperforms Single Instance Storage (SIS) or NTFS compression.
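
The de-duplication settings above can be switched on with a couple of cmdlets. A minimal sketch, assuming the volume that stores VHDX files is E: (the drive letter is a placeholder):

# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication
# UsageType HyperV tunes dedupe for open VHD/VHDX files (VDI-style workloads)
Enable-DedupVolume -Volume "E:" -UsageType HyperV
# Check space savings after the first optimization job completes
Get-DedupStatus -Volume "E:" | Format-List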

SMB Multichannel

  • Automatic detection of SMB Multi-Path networks
  • Resilience against path failures
  • Transparent failover with recovery
  • Improved throughput
  • Automatic configuration with little administrative overhead
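
SMB Multichannel can be verified with the built-in SMB cmdlets. A minimal check, run from the SMB client side:

# Confirm multichannel is enabled on the client (it is by default)
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
# List active multichannel connections and the NICs SMB discovered
Get-SmbMultichannelConnection
Get-SmbClientNetworkInterface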

SMB Direct:

  • The Microsoft implementation of RDMA.
  • The ability to direct data transfers from a storage location to an application.
  • Higher performance and lower latency through CPU offloading
  • High-speed network utilization (including InfiniBand and iWARP)
  • Remote storage at the speed of local storage
  • A transfer rate of approximately 50Gbps on a single NIC port
  • Compatibility with SMB Multichannel for load balancing and failover
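
Whether SMB Direct can actually engage depends on the NICs. A quick check, assuming RDMA-capable adapters are installed:

# NetworkDirect must be enabled globally (the default is Enabled)
Get-NetOffloadGlobalSetting | Select-Object NetworkDirect
# List adapters that report RDMA capability
Get-NetAdapterRdma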

VHDX Virtual Disk:

  • Online VHDX Resize
  • Storage QoS (Quality of Service)
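
Both features are scriptable. A minimal sketch, assuming a VM named VM01 with a SCSI-attached VHDX on the SOFS share (names and sizes are placeholders):

# Grow the VHDX while the VM is running (online resize requires a SCSI-attached VHDX)
Resize-VHD -Path "\\SOFS\VHDShare\VM01.vhdx" -SizeBytes 200GB
# Cap and guarantee IOPS on the virtual disk (Storage QoS, Windows Server 2012 R2)
Set-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MinimumIOPS 100 -MaximumIOPS 1000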

Live Migration

  • Easy migration of virtual machine into a cluster while the virtual machine is running
  • Improved virtual machine mobility
  • Flexible placement of virtual machine storage based on demand
  • Migration of virtual machine storage to shared storage without downtime

Storage Protocol:

  • SAN discovery (FCP, SAS, iSCSI i.e. EMC VNX, EMC VMAX)
  • NAS discovery (Self-contained NAS, NAS Head i.e. NetApp OnTap)
  • File Server Discovery (Microsoft Scale-Out File Server, Unified Storage)

Unified Management:

  • A new architecture provides ~10x faster disk/partition enumeration operations
  • Remote and cluster-awareness capabilities
  • SM-API exposes new Windows Server 2012 R2 features (Tiering, Write-back cache, and so on)
  • SM-API features added to System Center VMM
  • End-to-end storage high availability space provisioning in minutes in VMM console
  • More Windows PowerShell

ReFS:

  • More resilience to power failures
  • Highest levels of system availability
  • Larger volumes with better durability
  • Scalable to petabyte size volumes

Storage Replica:

  • Hardware agnostic storage configuration
  • Provide a DR solution for planned and unplanned outages of mission critical workloads.
  • Use SMB3 transport with proven reliability, scalability, and performance.
  • Stretched failover clusters within metropolitan distances.
  • Manage end to end storage and clustering for Hyper-V, Storage Replica, Storage Spaces, Scale-Out File Server, SMB3, Deduplication, and ReFS/NTFS using Microsoft software
  • Reduce downtime, and increase reliability and productivity intrinsic to Windows.
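
In Windows Server 2016, a Storage Replica partnership is created with the StorageReplica PowerShell module. A minimal sketch; server names, replication group names and volume letters below are placeholders for your environment:

# Replicate D: from SR-SRV01 to SR-SRV02, with L: as the log volume on each side
New-SRPartnership -SourceComputerName "SR-SRV01" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SR-SRV02" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"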

Cloud Integration:

  • Cloud-based storage service for online backups
  • Windows PowerShell instrumented
  • Simple, reliable Disaster Recovery solution for applications and data
  • Supports System Center 2012 R2 DPM

Implementing Scale-out File Server

Scale-out File Server Recommended Configuration:

  1. Gather all virtual servers IOPS requirements*
  2. Gather Applications IOPS requirements
  3. Total IOPS of all applications and virtual machines must be less than the available IOPS of the physical storage
  4. Keep latency below 3 ms at all times for high performance
  5. Gather required capacity + potential growth + best practice
  6. N+1 Compute, Network and Storage Hardware
  7. Use low latency, high throughput networks
  8. Segregate storage network from data network using logical network (VLAN) or fibre channel
  9. Use measurement and benchmarking tools to validate the design (see the latency check after the notes below)

*Not all virtual servers are the same: a DHCP server generates few IOPS, while SQL Server and Exchange can generate thousands of IOPS.

*Do not place SQL Server on the same logical volume (LUN) as Exchange Server, Microsoft Dynamics or a backup server.

*Isolate high-IO workloads on a separate logical volume, or even a separate storage pool, if possible.
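
To check the 3 ms latency target above, the built-in performance counters are sufficient. A minimal sketch using Get-Counter; run it on a host during a representative workload:

# Average disk latency, sampled every 5 seconds for one minute.
# Values are in seconds: 0.003 = 3 ms, the ceiling recommended above.
Get-Counter -Counter "\PhysicalDisk(*)\Avg. Disk sec/Read","\PhysicalDisk(*)\Avg. Disk sec/Write" -SampleInterval 5 -MaxSamples 12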

Prerequisites for Scale-Out File Server

  1. Install the File and Storage Services server role and the Failover Clustering feature on the cluster nodes (see the PowerShell sketch after these steps)
  2. Configure a Microsoft failover cluster using this article: Windows Server 2012: Failover Clustering Deep Dive Part II
  3. Add a Cluster Shared Volume
  • Log on to the server as a member of the local Administrators group.
  • Open Server Manager> Click Tools, and then click Failover Cluster Manager.
  • Click Storage, right-click the disk that you want to add to the cluster shared volume, and then click Add to Cluster Shared Volumes> Add Storage Presented to this cluster.
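
The same prerequisites can be staged with PowerShell. A minimal sketch; the CSV disk name is a placeholder:

# Install the File Server role service and the Failover Clustering feature
Install-WindowsFeature -Name FS-FileServer, Failover-Clustering -IncludeManagementTools
# After the cluster is formed, promote a clustered disk to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"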

Configure Scale-out File Server

  1. Open Failover Cluster Manager> Right-click the name of the cluster, and then click Configure Role.
  2. On the Before You Begin page, click Next.
  3. On the Select Role page, click File Server, and then click Next.
  4. On the File Server Type page, select the Scale-Out File Server for application data option, and then click Next.
  5. On the Client Access Point page, in the Name box, type the NetBIOS name of the Scale-Out File Server, and then click Next.
  6. On the Confirmation page, confirm your settings, and then click Next.
  7. On the Summary page, click Finish.
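
The wizard steps above map to a single cmdlet. A minimal sketch, assuming the client access point is named SOFS:

# Creates the Scale-Out File Server role with the distributed network name "SOFS"
Add-ClusterScaleOutFileServerRole -Name "SOFS"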

Create Continuously Available File Share

  1. Open Failover Cluster Manager>Expand the cluster, and then click Roles.
  2. Right-click the file server role, and then click Add File Share.
  3. On the Select the profile for this share page, click SMB Share – Applications, and then click Next.
  4. On the Select the server and path for this share page, click the name of the cluster shared volume, and then click Next.
  5. On the Specify share name page, in the Share name box, type a name for the file share, and then click Next.
  6. On the Configure share settings page, ensure that the Continuously Available check box is selected, and then click Next.
  7. On the Specify permissions to control access page, click Customize permissions, grant the following permissions, and then click Next:
  • To use Scale-Out File Server file share for Hyper-V: All Hyper-V computer accounts, the SYSTEM account, cluster computer account for any Hyper-V clusters, and all Hyper-V administrators must be granted full control on the share and the file system.
  • To use Scale-Out File Server on Microsoft SQL Server: The SQL Server service account must be granted full control on the share and the file system

      8. On the Confirm selections page, click Create. On the View results page, click Close.
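
The share can also be created with PowerShell. A minimal sketch for the Hyper-V case; the CSV path and the computer/administrator accounts are placeholders for your environment:

# Continuously available share on a CSV, with full control for the
# Hyper-V hosts, the cluster account and the Hyper-V administrators
New-SmbShare -Name "VHDShare" -Path "C:\ClusterStorage\Volume1\VHDShare" `
    -ContinuouslyAvailable $true `
    -FullAccess "CONTOSO\HV01$","CONTOSO\HV02$","CONTOSO\HVCLUS$","CONTOSO\Hyper-V Admins"
# Mirror the share permissions onto the NTFS file system ACL
Set-SmbPathAcl -ShareName "VHDShare"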

Use SOFS for Hyper-v Server VHDX Store:

  1. Open Hyper-V Manager. Click Start, and then click Hyper-V Manager.
  2. Open Hyper-v Settings> Virtual Hard Disks> Specify Location of Store as \\SOFS\VHDShare\ and Specify location of Virtual Machine Configuration \\SOFS\VHDCShare
  3. Click Ok.
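
The same defaults can be set with PowerShell; a minimal sketch, assuming the share paths above:

Set-VMHost -VirtualHardDiskPath "\\SOFS\VHDShare" -VirtualMachinePath "\\SOFS\VHDCShare"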

Use SOFS in System Center VMM: 

  1. Add Windows File Server in VMM
  2. Assign SOFS Share to Fabric & Hosts

Use SOFS for SQL Database Store:

1. Assign SQL Service Account Full permission to SOFS Share

  • Open Windows Explorer and navigate to the scale-out file share.
  • Right-click the folder, and then click Properties.
  • Click the Sharing tab, click Advanced Sharing, and then click Permissions.
  • Ensure that the SQL Server service account has full-control permissions.
  • Click OK twice.
  • Click the Security tab. Ensure that the SQL Server service account has full-control permissions.

2. In SQL Server 2012, you can choose to store all database files in a scale-out file share during installation.  

3. On step 20 of the SQL Setup Wizard, provide the Scale-out File Server locations, i.e. \\SOFS\SQLData and \\SOFS\SQLLogs

4. Create a database on the SOFS share from an existing SQL Server using a SQL script:

CREATE DATABASE [TestDB]
ON PRIMARY
( NAME = N'TestDB', FILENAME = N'\\SOFS\SQLDB\TestDB.mdf' )
LOG ON
( NAME = N'TestDBLog', FILENAME = N'\\SOFS\SQLDBLog\TestDBLogs.ldf' )
GO

Use Backup & Recovery:

System Center Data Protection Manager 2012 R2

Configure and add a dedupe storage target in DPM 2012 R2. DPM 2012 R2 will not back up SOFS itself, but it will back up VHDX files stored on SOFS. Follow the Deduplicate DPM storage and protection for virtual machines with SMB storage guide to back up virtual machines.

Veeam Availability Suite

  1. Log on to Veeam Availability Console>Click Backup Repository> Right Click New backup Repository
  2. Select Shared Folder on the Type Tab
  3. Add SMB Backup Target \\SOFS\Repository
  4. Follow the wizard. Make sure the Veeam service account has full access permission to the \\SOFS\Repository share.
  5. Click Scale-out Repositories>Right-click Add Scale-out backup repository>Type the name
  6. Select the backup repository you created in the previous step, then follow the wizard to complete the task.

References:

Microsoft Storage Architecture

Storage Spaces Physical Disk Validation Script

Validate Hardware

Deploy Clustered Storage Spaces

Storage Spaces Tiering in Windows Server 2012 R2

SMB Transparent Failover

Cluster Shared Volume (CSV) Inside Out

Storage Spaces – Designing for Performance

Related Articles:

Scale-Out File Server Cluster using Azure VMs

Microsoft Multi-Site Failover Cluster for DR & Business Continuity

Microsoft Multi-Site Failover Cluster for DR & Business Continuity

Not every organisation loses millions of dollars per second, but some do. An organisation may not lose millions of dollars per second but still consider customer service and reputation its number one priority. These types of business want their workflow to be seamless and downtime free. This article is for those who consider business continuity money well spent. Here is how it is done:

Multi-Site Failover Cluster

Microsoft Multi-Site Failover Cluster is a group of clustered nodes distributed across multiple sites in a region, or across separate regions, connected by a low-latency network and storage. As the diagram below illustrates, the cluster nodes in Data Center A are connected to a local SAN storage, which is replicated to a SAN storage in Data Center B. Replication is handled by identical software-defined storage at each site, which replicates volumes or Logical Unit Numbers (LUNs) from the primary site (Data Center A) to the disaster recovery site (Data Center B). The Microsoft failover cluster is configured with pass-through storage, i.e. volumes, and these volumes are replicated to the DR site. In both sites the physical network is built on Cisco Nexus 7000 switches. Data and virtual machine networks are logically segregated in Microsoft System Center VMM and in the physical switches using virtual local area networks (VLANs). A separate Storage Area Network (SAN) with low-latency storage is created in each site, and the pass-through volumes are replicated to the DR site using identically sized volumes.

image

                                     Figure: Highly Available Multi-site Cluster

image

                           Figure: Software Defined Storage in Each Site

 Design Components of Storage:

  • SAN-to-SAN replication must be configured correctly
  • Initial replication must be complete before the failover cluster is configured
  • MPIO software must be installed on the cluster nodes (N1, N2…N6)
  • Physical and logical multipathing must be configured
  • If storage is presented directly to virtual machines or cluster nodes, then NPIV must be configured on the fabric zones
  • All storage and fabric firmware must be up to date with the manufacturer's latest software
  • Identical software-defined storage must be used on both sites
  • If third-party software is used to replicate storage between sites, the storage vendor must be consulted before replication is configured

Further Reading:

Understanding Software Defined Storage (SDS)

How to configure SAN replication between IBM Storwize V3700 systems

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Application Scale-out File Systems

Design Components of Network:

  • Isolate management, virtual and data network using VLAN
  • Use a reliable IPVPN or Fibre optic provider for the replication over the network
  • Eliminate all single point of failure from all network components
  • Consider stretched VLAN for multiple sites 

Further Reading:

Understanding Network Virtualization in SCVMM 2012 R2

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Design failover Cluster Quorum

  • Use a Node & File Share Witness (FSW) quorum for an even number of cluster nodes
  • Connect the File Share Witness to a third site
  • Do not host the File Share Witness on a virtual machine in the same site
  • Alternatively, use Dynamic Quorum (see the sketch after this list)
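
A minimal sketch of both options; the witness path is a placeholder:

# Node and File Share Majority with the witness on a third site
Set-ClusterQuorum -NodeAndFileShareMajority "\\FSW01\Quorum"
# Dynamic quorum is enabled by default in Windows Server 2012 R2; verify with:
(Get-Cluster).DynamicQuorum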

Further Reading:

Understanding Dynamic Quorum in a Microsoft Failover Cluster

Design of Compute

  • Use a reputable vendor to supply compute hardware compatible with Microsoft Hyper-v
  • Make sure all the latest firmware updates are applied to the Hyper-v hosts
  • Make sure the manufacturer provides you with the latest HBA software to be installed on the Hyper-v hosts

Further Reading:

Windows Server 2012: Failover Clustering Deep Dive Part II

Implementing a Multi-Site Failover Cluster

Step1: Prepare Network, Storage and Compute

Understanding Network Virtualization in SCVMM 2012 R2

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Step2: Configure Failover Cluster on Each Site

Windows Server 2012: Failover Clustering Deep Dive Part II

Understanding Dynamic Quorum in a Microsoft Failover Cluster

Multi-Site Clustering & Disaster Recovery

Step3: Replicate Volumes

How to configure SAN replication between IBM Storwize V3700 systems

How to create a Microsoft Multi-Site cluster with IBM Storwize replication

Use Cases:

The use case is determined by current workloads, future workloads and business continuity requirements. Deploy Veeam ONE to measure the current workload on your infrastructure and to project future workload and business continuity needs. Here is a list of use cases for a multi-site cluster.

  • Scale-Out File Server for application data-  To store server application data, such as Hyper-V virtual machine files, on file shares, and obtain a similar level of reliability, availability, manageability, and high performance that you would expect from a storage area network. All file shares are simultaneously online on all nodes. File shares associated with this type of clustered file server are called scale-out file shares. This is sometimes referred to as active-active.

  • File Server for general use – This type of clustered file server, and therefore all the shares associated with the clustered file server, is online on one node at a time. This is sometimes referred to as active-passive or dual-active. File shares associated with this type of clustered file server are called clustered file shares.

  • Business Continuity Plan

  • Disaster Recovery Plan

  • DFS Replication Namespace for Unstructured Data i.e. user profile, home drive, Citrix profile

  • Highly Available File Server Replication 

Multi-Site Hyper-v Cluster for High Availability and Disaster Recovery

For most SMB customers, the nodes of the cluster that reside at their primary data center provide access to the clustered service or application, with failover occurring only between clustered nodes. For an enterprise customer, however, failure of a business-critical application is not an option. In this case, disaster recovery and high availability are bound together so that when both/all nodes at the primary site are lost, the nodes at the secondary site begin providing service automatically, or with minimal intervention.

The maximum availability of any service or application depends on how you design the platform that hosts it. It is important to follow best practices in compute, network and storage infrastructure to maximize uptime and maintain SLAs.

The following diagram shows a multi-site failover cluster that uses four nodes and supports a clustered service or application.

 

image

 

The following rack diagram shows the identical compute, storage and networking infrastructure in both sites.

image

Physical Infrastructure

  • Primary and Secondary sites are connected via 2x10Gbps dark fibre
  • Storage vendor specific replication software, such as EMC RecoverPoint
  • Storage must have redundant storage processor
  • There must be redundant Switches for networking and storage
  • Each server must be connected to redundant switches with redundant NIC for each purpose
  • Each Hyper-v server must have minimum dual Host Bus Adapter (HBA) port connected to redundant MDS switches
  • Each network must be connected to dual NIC from server to switches
  • Only iLO/DRAC will have a single connection
  • Each site must have redundant power supply.

Storage Requirements

Since I am talking about a highly available and redundant system design, this sort of design must consist of replicated or clustered storage presented to the multi-site Hyper-v cluster nodes. Replication of data between sites is very important in a multi-site cluster and is accomplished in different ways by different hardware vendors. You will achieve higher performance through hardware or block-level replication than through software-based replication. Contact your storage vendor for solutions that provide replicated or clustered storage.

Network Requirements:

A multi-site cluster running Windows Server 2008 can contain nodes that are in different subnets; however, as a best practice, you should configure the Hyper-v cluster in the same subnet. Your applications and services can reside in separate subnets. To avoid conflicts, use a dark fibre connection or an MPLS network between sites that allows VLANs.

Note that you must configure Hyper-v hosts with static IP addresses. In a multi-site cluster, you might want to tune the "heartbeat" settings; see http://go.microsoft.com/fwlink/?LinkId=130588 for details.

Network Configuration Spreadsheet

Network | VLAN ID | NICs and switch port speed
iLO/DRAC | 10 | 1Gbps
MGMT | 20 | 2x1Gbps
Live Migration | 30 | 2x10Gbps
Storage Migration | 40 | 2x10Gbps
Virtual Machine | 50,60 | 4x10Gbps
iSCSI Network | 70 | 4x10Gbps
Heartbeat network | 80 | 2x1Gbps
Storage Replication (separate from Hyper-v) | 90 | Dark fibre, 2x10Gbps

Note that iSCSI network is only required if you are using IP Storage instead of Fibre Channel (FC) storage.

Cluster Selection: Node and File Share Majority (For Cluster with Special Configurations)

Quorum Selection: Since you will be configuring a Node and File Share Majority cluster, you will have the option to place the quorum files on a shared folder. Where do you place this shared folder? Since we are talking about a fully redundant and highly available Hyper-v cluster, we have several options for placing the quorum shared folder.

Option1: Secondary Site

Option 2: Third Site

Visit http://technet.microsoft.com/en-us/library/cc770620%28WS.10%29.aspx for more details on quorum.

Hyper-v Cluster Configuration:

Visit https://araihan.wordpress.com/2013/06/04/windows-server-2012-failover-clustering-deep-dive/ for detailed cluster configuration guide.

Hyper-v Server 2016 What’s New

Changed and upgraded functionality of Hyper-v Server 2016.

  1. Hyper-v cluster with mixed Hyper-v versions
  • Join a Windows Server 2016 Hyper-v node to a cluster with Windows Server 2012 R2 Hyper-v nodes
  • The cluster functional level remains Windows Server 2012 R2
  • Manage the cluster, Hyper-V, and virtual machines from a node running Windows Server 2016 or Windows 10
  • New Hyper-V features are unavailable until all of the nodes are migrated and the cluster functional level is upgraded to Windows Server 2016
  • Virtual machine configuration versions for existing virtual machines aren't upgraded automatically
  • Upgrade the configuration version after you upgrade the cluster functional level, using the Update-VMVersion vmname cmdlet (see the PowerShell sketch after this list)
  • New virtual machines created while the cluster runs at the Windows Server 2012 R2 functional level remain backward compatible
  • When the Hyper-V role is enabled on a computer that uses the Always On/Always Connected (AOAC) power model, the Connected Standby power state is now available
  2. Production checkpoints
  • With production checkpoints, the Volume Snapshot Service (VSS) is used inside Windows virtual machines
  • Linux virtual machines flush their file system buffers to create a file-system-consistent checkpoint
  • Checkpoints no longer use saved-state technology
  3. Hot add and remove for network adapters, virtual hard drives and memory
  • Add or remove a network adapter while the virtual machine is running, for both Windows and Linux machines
  • Adjust the memory of a running virtual machine even if you haven't enabled dynamic memory
  4. Integration Services delivered through Windows Update
  • Windows Update will distribute integration services
  • The ISO image file vmguest.iso is no longer needed to update integration components
  5. Storage quality of service (QoS)
  • Create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks
  • Hyper-V automatically updates virtual disks according to the storage QoS policies
  6. Virtual machine improvements
  • Import virtual machines with older configuration versions, update them later, and live migrate across any host
  • After you upgrade the virtual machine configuration version, you can't move the virtual machine to a server that runs Windows Server 2012 R2
  • You can't downgrade the virtual machine configuration version back from version 6 to version 5
  • Turn off the virtual machine to upgrade the virtual machine configuration
  • Update-VMVersion is blocked on a Hyper-V cluster when the cluster functional level is Windows Server 2012 R2
  • After the upgrade, the virtual machine will use the new configuration file format
  • The new configuration files use the .VMCX file extension for virtual machine configuration data and the .VMRS file extension for runtime state data
  • Ubuntu 14.04 and later, and SUSE Linux Enterprise Server 12, support secure boot using the Set-VMFirmware vmname -SecureBootTemplate MicrosoftUEFICertificateAuthority cmdlet
  7. Hyper-V Manager improvements
  • Support for alternative credentials
  • Down-level management of Hyper-V running on Windows Server 2012, Windows 8, Windows Server 2012 R2 and Windows 8.1
  • Connect to Hyper-V using the WS-MAN protocol with Kerberos or NTLM authentication
  8. Guest OS support
  • Any server operating system from Windows Server 2008 to Windows Server 2016
  • Any desktop operating system from Vista SP2 to Windows 10
  • FreeBSD, Ubuntu, SUSE Enterprise, CentOS, Debian, Fedora and Red Hat

9. ReFS Accelerated VHDX 

  • Create a fixed size VHDX on a ReFS volume instantly.
  • Gain great backup operations and checkpoints

10. Nested Virtualization

  • Run Hyper-V Server as a guest OS inside Hyper-V

11. Shared VHDX format

  • Host Based Backup of Shared VHDX files
  • Online Resize of Shared VHDX
  • Some usability change in the UI
  • Shared VHDX files are now a new type of VHD called .vhds files.

12. Stretched Hyper-V Cluster 

  • A stretched cluster allows you to configure Hyper-v hosts and storage in a single stretch cluster, where two nodes share one set of storage and two nodes share another set of storage, and synchronous replication keeps both sets of storage mirrored in the cluster to allow immediate failover.
  • These nodes and their storage should be located in separate physical sites, although this is not required.
  • The stretch cluster will run a Hyper-V compute workload.
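
Picking up the rolling-upgrade flow from items 1 and 6 above, a minimal PowerShell sketch; the VM name is a placeholder:

# Run after the last node has been upgraded to Windows Server 2016
Update-ClusterFunctionalLevel
# List configuration versions, then upgrade a VM (it must be turned off;
# the change is one-way, as noted above)
Get-VM | Select-Object Name, Version
Update-VMVersion -Name "VM01"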

 

Unsupported:

Hyper-V on Windows 10 doesn’t support failover clustering

How to configure Hyper-v Replica Step By Step

Hyper-V Replica provides IP-based asynchronous replication of virtual machines between two Hyper-v servers. Since this is asynchronous replication, the replica virtual machine will not have the most recent data. However, replica virtual machines provide a cost-effective way of keeping a copy of production virtual machines in a secondary site, and they can be made available in case of a disaster.

Benefits:

  • Shared or standalone storage to fulfill the capacity requirement of the replicated virtual machine
  • Asynchronous replication of Hyper-V virtual machines over Ethernet IP based network
  • Replica works with standalone servers, failover clusters, or a mixture of both
  • Hyper-v Hosts can be physically co-located or geographically diverse location with MPLS or IPVPN connection
  • Hyper-v Hosts can be domain joined or standalone
  • Provide planned or unplanned failover
  • Any Hyper-v virtualized server can be replicated using Hyper-v Replica

Requirement:

  • Windows Server 2012 R2 Hyper-v Role Installed
  • Windows Server 2012 Hyper-v Role Installed
  • Similar virtual and physical networks must be configured in the secondary site for the replica virtual machine to function as the production virtual machine.

Step1: Configure Firewall on Primary and Secondary Hyper-v Host

1. Right Click Windows Logo on Task Bar>Control Panel>Windows Firewall

2. Open Windows Firewall with Advance Security and click Inbound Rules.

3. Right-click Hyper-V Replica HTTP Listener (TCP-In) and click Enable Rule.

4. Right-click Hyper-V Replica HTTPS Listener (TCP-In) and click Enable Rule.

Step2: Pre-stage Replica Broker Computer Object

1. Log on to DC>Open Active Directory Users & Computers>Create New Computer e.g. HVReplica

2. Right Click on HVReplica Computer Object>Properties>Security Tab>Hyper-v Cluster Nodes NetBIOS Name>Allow Full Permission>Apply>Ok.

Step3: Configure Replica Broker in Hyper-v Environment

Hyper-v Replica using Failover Cluster Wizard

1. Log on Hyper-v Host>open Failover Cluster Manager.

2. In the left pane, connect to the cluster, and while the cluster name is highlighted, click Configure Role in the Actions pane. The High Availability wizard opens

3. In the Select Role screen, select Hyper-V Replica Broker.

image

4. Complete the wizard, providing a NetBIOS name you have created in previous step and IP address to be used as the connection point to the cluster.

5. Verify that the Hyper-V Replica Broker role comes online successfully. Click Finish.

6. To test Replica broker failover, right-click the role, point to Move, and then click Select Node. Then, select a node, and then click OK.

7. click Roles in the Navigate category of the Details pane

8. Right-click the role and choose Replication Settings.

9. In the Details pane, select Enable this cluster as a Replica server.

10. In the Authentication and ports section, select the authentication method Kerberos over HTTP and authentication over HTTPS.

11. To use certificate-based authentication, click Select Certificate and provide the request certificate information.

12. In the Authorization and storage section, you can specify default location or specific server with specific storage with the Trust Group tag.

13. Click OK or Apply when you are finished.

 

Configure Hyper-v Replica using Hyper-v Manager

To Configure Hyper-v replica Broker in non-clustered environment.

1. In Hyper-V Manager, click Hyper-V Settings in the Actions pane.

2. In the Hyper-V Settings dialog, click Replication Configuration.

image

3. In the Details pane, select Enable this computer as a Replica server.

4. In the Authentication and ports section, select the authentication method Kerberos over HTTP and authentication over HTTPS.

5. To use certificate-based authentication, click Select Certificate and provide the request certificate information.

6. In the Authorization and storage section, you can specify default location or specific server with specific storage with the Trust Group tag.

7. Click OK or Apply when you are finished.
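
Steps 3 to 7 can be collapsed into one cmdlet. A minimal sketch, assuming Kerberos over HTTP and a placeholder storage path:

Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos -KerberosAuthenticationPort 80 `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"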

Step4: Configure Replica Virtual Machine

1. In the Details pane of Hyper-V Manager, select a virtual machine by clicking it.

2. Right-click the selected virtual machine and point to Enable Replication. The Enable Replication wizard opens.

3. On the Specify Replica Server page, in the Replica Server box, enter either the NetBIOS name or the fully qualified domain name (FQDN) of the Replica server that you configured in Step 3. If the Replica server is part of a failover cluster, enter the name of the Hyper-V Replica Broker that you configured in Step 3. Click Next.

4. On the Specify Connection Parameters page, the authentication and port settings you configured for the Replica server in Step 3 will automatically be populated, provided that Remote WMI is enabled. If it is not enabled, you will have to provide the values. Click Next.

5. On the Choose Replication VHDs page, clear the checkboxes for any VHDs that you want to exclude from replication, then click Next.

6. On the Configure Recovery History page, select the number and types of recovery points to be created on the Replica server, then click Next.

7. On the Choose Initial Replication page, select the initial replication method and then click Next.

8. On the Completing the Enable Replication Relationship Wizard page, review the information in the Summary and then click Finish.

9. A Replica virtual machine is created on the Replica server. If you elected to send the initial copy over the network, the transmission begins either immediately or at the time you configured.
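
The same relationship can be enabled from PowerShell. A minimal sketch; the VM name and broker FQDN are placeholders:

# Enable replication to the broker over Kerberos/HTTP with 4 recovery points
Enable-VMReplication -VMName "VM01" -ReplicaServerName "HVReplica.corp.local" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos -RecoveryHistory 4
# Kick off the initial copy over the network
Start-VMInitialReplication -VMName "VM01"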

Step5: Test Replicated Virtual Machine

1. In Hyper-V Manager, right-click the virtual machine you want to test failover for, point to Replication…, and then point to Test Failover….

2. After you have concluded your testing, discard the test virtual machine by choosing Stop Test Failover under the Replication option
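
The same test cycle in PowerShell, run on the Replica server (the VM name is a placeholder):

# Creates a "VM01 - Test" virtual machine from a recovery point
Start-VMFailover -VMName "VM01" -AsTest
# Discard the test VM when finished
Stop-VMFailover -VMName "VM01"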

Step6: Planned Failover

1. Start Hyper-V Manager on the primary server and choose a virtual machine to fail over. Turn off the virtual machine that you want to fail over.

2. Right-click the virtual machine, point to Replication, and then point to Planned Failover.

3. Click Fail Over to actually transfer operations to the virtual machine on the Replica server. Failover will not occur if the prerequisites have not been met.
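
A planned failover can likewise be scripted. A minimal sketch under the same placeholder names; note that the VM on the primary server must be turned off first:

# On the primary server: send the final delta
Start-VMFailover -VMName "VM01" -Prepare
# On the Replica server: fail over, commit, and start the VM
Start-VMFailover -VMName "VM01"
Complete-VMFailover -VMName "VM01"
Start-VM -Name "VM01"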

How to respond to unplanned Failover

1. Open Hyper-V Manager and connect to the Replica server.

2. Right-click the name of the virtual machine you want to use, point to Replication, and then point to Failover….

3. In the dialog that opens, choose the recovery snapshot you want the virtual machine to recover to, and then click Failover…. The Replication Status will change to Failed over – Waiting completion, and the virtual machine will start using the network parameters you previously configured for it.

4. Use the Complete-VMFailover Windows PowerShell cmdlet below to complete failover.
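
A minimal sketch of that cmdlet, run on the Replica server (the VM name is a placeholder):

# Commits the failover and removes the remaining recovery points
Complete-VMFailover -VMName "VM01"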

Starting a reverse replication once disaster is over

1. Open Hyper-V Manager and connect to the Replica server.

2. Right-click the name of the virtual machine you want to reverse replicate, point to Replication, and then point to Reverse replication…. The Reverse Replication wizard opens.

3. Complete the Reverse Replication wizard. You will find the requested information to be very similar if not identical to the information you provided in the Enable Replication wizard

Similar Articles:

Migrating VMs from Standalone Hyper-v Host to clustered Hyper-v Host

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

How to configure SAN replication between IBM Storwize V3700 systems

How to Connect and Configure Virtual Fibre Channel, FC Storage and FC Tape Library from within a Virtual Machine in Hyper-v Server 2012 R2

Windows Server 2012 R2 with Hyper-v Role provides Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage and also allows you to cluster guest operating systems leveraging Fibre Channel, and provides an important new storage option for servers hosted in your virtual infrastructure.

Benefits:

  • Existing Fibre Channel investments to support virtualized workloads.
  • Connect Fibre Channel Tape Library from within a guest operating systems.
  • Support for many related features, such as virtual SANs, live migration, and MPIO.
  • Create MSCS Cluster of guest operating systems in Hyper-v Cluster

Limitation:

  • Live Migration will not work if SAN zoning isn’t configured correctly.
  • Live Migration will not work if LUN mismatch detected by Hyper-v cluster.
  • Virtual workload is tied with a single Hyper-v Host making it a single point of failure if a single HBA is used.
  • Virtual Fibre Channel logical units cannot be used as boot media.

Prerequisites:

  • Windows Server 2012 or 2012 R2 with the Hyper-V role.
  • Hyper-V requires a computer with processor support for hardware virtualization. See details in BIOS setup of server hardware.
  • A computer with one or more Fibre Channel host bus adapters (HBAs) that have an updated HBA driver that supports virtual Fibre Channel.
  • An NPIV-enabled fabric, HBA and FC SAN. Almost all new-generation Brocade fabrics and storage support this feature. NPIV is disabled in the HBA by default.
  • Virtual machines configured to use a virtual Fibre Channel adapter, which must use Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 or Windows Server 2012 R2 as the guest operating system. Maximum 4 vFC ports are supported in guest OS.
  • Storage accessed through a virtual Fibre Channel supports devices that present logical units.
  • MPIO Feature installed in Windows Server.
  • Microsoft Hotfix KB2894032

Before I begin elaborating the steps involved in configuring virtual fibre channel, I assume you have physical connectivity in place and that physical multipath is configured and connected as per vendor best practice. In this example configuration, I will be presenting storage and an FC tape library to a virtualized backup server. I used the following hardware:

  • 2X Brocade 300 series Fabric
  • 1X FC SAN
  • 1X FC Tape Library
  • 2X Windows Server 2012 R2 with Hyper-v Role installed and configured as a cluster. Each host connected to two Fabric using dual HBA port.

Step1: Update Firmware of all Fabric.

Use this LINK to update firmware.

Step2: Update Firmware of FC SAN

See OEM or vendor installation guide. See this LINK for IBM guide.

Step3: Enable hardware virtualization in Server BIOS

See OEM or Vendor Guidelines

Step4: Update Firmware of Server

See OEM or Vendor Guidelines. See Example of Dell Firmware Upgrade

Step5: Install MPIO driver in Hyper-v Host

See OEM or Vendor Guidelines

Step6: Physically Connect FC Tape Library, FC Storage and Servers to correct FC Zone

Step7: Configure Correct Zone and NPIV in Fabric

SSH to Fabric and Type the following command to verify NPIV.

Fabric:root>portcfgshow 0

If NPIV is enabled, it will show NPIV ON.

To enable NPIV on a specific port, type portCfgNPIVPort 0 1 (where 0 is the port number and 1 is the mode: 1=enable, 0=disable).

Open the Brocade fabric and configure aliases. The items marked in red are the virtual HBAs and FC tape shown in the fabric. Note that you must place the FC tape, Hyper-v host(s), virtual machine and FC SAN in the same zone, otherwise it will not work.

image

Configure correct Zone as shown below.

image

Configure correct Zone Config as shown below.

image

Once you have configured the correct zone in the fabric, you will see the FC tape library in the Windows Server 2012 R2 host where the Hyper-v role is installed. Do not update the tape driver in the Hyper-v host, as we will use the guest virtual machine as the backup server, where the correct tape driver is needed.

image

Step8: Configure Virtual Fibre Channel

Open Hyper-v Manager, Click Virtual SAN Manager>Create new Fibre Channel

image

Type Name of the Fibre Channel> Apply>Ok.

image

Repeat the process to create multiple vFCs for MPIO and live migration purposes. Remember, the physical HBAs must be connected to the two Brocade fabrics.

In the vFC configuration, keep the naming convention identical on both hosts. If you have two physical HBAs, configure two vFCs in each Hyper-v host, for example VFC1 and VFC2. Create two vFCs in the other host with the identical names VFC1 and VFC2. Assign both vFCs to the virtual machines; a PowerShell sketch follows.
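
The same vFC layout can be built with PowerShell. A minimal sketch; the WWNs and VM name are placeholders, and the VM must be powered off before the adapters are added:

# One virtual SAN per physical fabric, on each Hyper-v host
New-VMSan -Name "VFC1" -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"
New-VMSan -Name "VFC2" -WorldWideNodeName "C003FF0000FFFF01" -WorldWidePortName "C003FF5778E50003"
# Attach one vFC adapter per virtual SAN to the backup server VM
Add-VMFibreChannelHba -VMName "BackupServer" -SanName "VFC1"
Add-VMFibreChannelHba -VMName "BackupServer" -SanName "VFC2"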

Step9: Attach Virtual Fibre Channel Adapter on to virtual Machine.

Open Failover Cluster Manager,  Select the virtual machine where FC Tape will be visible>Shutdown the Virtual machine.

Go to Settings of the virtual machine>Add Fibre Channel Adapter>Apply>Ok.

image

Record WWPN from the Virtual Fibre Channel.

image

Power on the virtual Machine.

Repeat the process to add multiple VFCs which are VFC1 and VFC2 to virtual machine.

Step10: Present Storage

Log on to the FC storage and add a host. The WWPN shown here must match the WWPN of the virtual fibre channel adapter.

image

Map the volume or LUN to the virtual server.

image

Step11: Install MPIO Driver in Guest Operating Systems

Open Server Manager>Add Role & Feature>Add MPIO Feature.
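
Or from PowerShell inside the guest:

# Adds the native MPIO feature; the vendor DSM still needs to be installed afterwards
Install-WindowsFeature -Name Multipath-IO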

image

Download the manufacturer's MPIO driver for the storage. The MPIO driver must be the correct and latest version to function correctly.

image

Now you have FC SAN in your virtual machine

image

image

Step12: Install Correct FC Tape Library Driver in Guest Operating Systems.

Download and install correct FC Tape driver and install the driver into the virtual backup server.

Now you have correct FC Tape library in virtual machine.

image

Backup software can see Tape Library and inventory tapes.

image

Further Readings:

Brocade Fabric with Virtual FC in Hyper-v

Hyper-V Virtual Fibre Channel Overview

Clustered virtual machine cannot access LUNs over a Synthetic Fibre Channel after you perform live migration on Windows Server 2012 or Windows Server 2012 R2-based Hyper-V hosts

Windows Server 2012 R2 Gateway

Windows Server 2012 R2 can be configured as a gateway VM in a two- or four-node cluster on Hyper-v hosts. A gateway VM, or router, enhances the data center by providing a secure router for the public or private cloud. A gateway VM cluster can provide routing functionality for up to 200 tenants, and each gateway VM can provide routing functionality for up to 50 tenants.

Two different versions of the gateway router are available in Windows Server 2012 R2.

RRAS Multitenant Gateway – The RRAS Multitenant Gateway router can be used for multitenant or non-multitenant deployments, and is a full-featured BGP router. To deploy an RRAS Multitenant Gateway router, you must use Windows PowerShell commands (a sketch follows the list below).

RRAS Gateway configuration and options:

  • Configure the RRAS Multitenant Gateway for use with Hyper-V Network Virtualization
  • Configure the RRAS Multitenant Gateway for use with VLANs
  • Configure the RRAS Multitenant Gateway for Site-to-Site VPN Connections
  • Configure the RRAS Multitenant Gateway to Perform Network Address Translation for Tenant Computers
  • Configure the RRAS Multitenant Gateway for Dynamic Routing with BGP
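
A minimal sketch of that PowerShell deployment, assuming the Remote Access role is installed and "Contoso" stands in for a tenant routing domain; cmdlet options vary by scenario, so treat this as a starting point rather than a full deployment:

# Enable RRAS in multitenant mode
Install-RemoteAccess -MultiTenancy
# Create and enable a routing domain for one tenant
Enable-RemoteAccessRoutingDomain -Name "Contoso" -Type All -PassThru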

Windows Server 2012 R2 Gateway – To deploy Windows Server Gateway, you must use System Center 2012 R2 and Virtual Machine Manager (VMM). The Windows Server Gateway router is designed for use with multitenant deployments.

Multi-tenancy is the ability of a cloud infrastructure to support the virtual machine workloads of multiple tenants, but isolate them from each other, while all of the workloads run on the same infrastructure. The multiple workloads of an individual tenant can interconnect and be managed remotely, but these systems do not interconnect with the workloads of other tenants, nor can other tenants remotely manage them.

This feature allows service providers to virtually isolate different subnets, VLANs and network traffic that reside in the same physical core or distribution switch. Hyper-v Network Virtualization uses Network Virtualization using Generic Routing Encapsulation (NVGRE), which allows tenants to bring their own TCP/IP address space and name space to the cloud environment.

Systems requirements:

Option | Hyper-v Host | Gateway VM
CPU | 2-socket NUMA node | 8 vCPU for two VMs; 4 vCPU for four VMs
CPU core | 8 | 1
Memory | 48GB | 8GB
Network adapter | Two 10GB NICs connected to Cisco trunk port (see note 1) | 4 virtual NICs: operating systems, clustering heartbeat, external network, internal network
Clustering | Active-Active | Active-Active or Active-Passive

Note 1 (NIC teaming in the Hyper-v host): You can configure NIC teaming in the Hyper-v host for the two 10GB NICs. The Windows Server 2012 R2 gateway VM then uses four vNICs connected to the Hyper-V virtual switch that is bound to the NIC team.

Deployment Guides:

Windows Server 2012 R2 RRAS Deployment Guide

Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM

Clustering Windows Server 2012 R2