Microsoft Software Defined Storage AKA Scale-out File Server (SOFS)

Business Challenges:

  • $/IOPS and $/TB
  • Continuous Availability
  • Fault Tolerance
  • Storage Performance
  • Segregation of production, development and disaster recovery storage
  • De-duplication of unstructured data
  • Segregation of data between production site and disaster recovery site
  • Continuous break/fix of Distributed File System (DFS) and file servers
  • Continuously extending storage on the DFS servers
  • Single point of failure
  • File systems are not always available
  • Security of file systems is a constant concern
  • Proprietary, non-scalable storage
  • Management of physical storage
  • Vendor lock-in contract for physical storage
  • Migration path from single vendor to multi vendor storage provider
  • Management overhead of unstructured data
  • Comprehensive management of storage platform

Solutions:

Microsoft Software Defined Storage, also known as Scale-Out File Server (SOFS), is a feature designed to provide scale-out file shares that are continuously available for file-based server application storage. Scale-out file shares provide the ability to share the same folder from multiple nodes of the same cluster. The following table compares the Microsoft Software Defined Storage offering with third-party offerings:

| Storage feature | Third-party NAS/SAN | Microsoft Software-Defined Storage |
| --- | --- | --- |
| Fabric | Block protocol | File protocol |
| Network | Low latency network with FC | Low latency with SMB3 Direct |
| Management | Management of LUNs | Management of file shares |
| Data de-duplication | Data de-duplication | Data de-duplication |
| Resiliency | RAID resiliency groups | Flexible resiliency options |
| Pooling | Pooling of disks | Pooling of disks |
| Availability | High | Continuous (via redundancy) |
| Copy offload, Snapshots | Copy offload, Snapshots | SMB copy offload, Snapshots |
| Tiering | Storage tiering | Performance with tiering |
| Persistent write-back cache | Persistent write-back cache | Persistent write-back cache |
| Scale up | Scale up | Automatic scale-out rebalancing |
| Storage Quality of Service (QoS) | Storage QoS | Storage QoS (Windows Server 2016) |
| Replication | Replication | Storage Replica (Windows Server 2016) |
| Updates | Firmware updates | Rolling cluster upgrades (Windows Server 2016) |
|  |  | Storage Spaces Direct (Windows Server 2016) |
|  |  | Azure-consistent storage (Windows Server 2016) |

 

 Functional use of Microsoft Scale-Out File Servers:

1. Application Workloads

  • Microsoft Hyper-v Cluster
  • Microsoft SQL Server Cluster
  • Microsoft SharePoint
  • Microsoft Exchange Server
  • Microsoft Dynamics
  • Microsoft System Center DPM Storage Target
  • Veeam Backup Repository

2. Disaster Recovery Solution

  • Backup Target
  • Object storage
  • Encrypted storage target
  • Hyper-v Replica
  • System Center DPM

3. Unstructured Data

  • Continuously Available File Shares
  • DFS Namespace folder target server
  • Microsoft Data de-duplication
  • Roaming user Profiles
  • Home Directories
  • Citrix User Profiles
  • Outlook Cached location for Citrix XenApp Session Server

4. Management

  • Single Management Point for all Scale-out File Servers
  • Provide wizard driven tools for storage related tasks
  • Integrated with Microsoft System Center

Business Values:

  • Scalability
  • Load balancing
  • Fault tolerance
  • Ease of installation
  • Ease of management/operations
  • Flexibility
  • Security
  • High performance
  • Compliance & Certification

SOFS Architecture:

Microsoft Scale-out File Server (SOFS) is considered Software Defined Storage (SDS). Microsoft SOFS is independent of the hardware vendor as long as the compute and storage are certified by Microsoft Corporation. The following figures show a Microsoft Hyper-V cluster, a SQL cluster and object storage on SOFS.

image

                 Figure: Microsoft Software Defined Storage (SDS) Architecture

image

                     Figure: Microsoft Scale-out File Server (SOFS) Architecture

image

                                      Figure: Microsoft SDS Components

image

                        Figure: Unified Storage Management (See Reference)

Microsoft Software Defined Storage AKA Scale-out File Server Benefits:

SOFS:

  • Continuous availability file stores for Hyper-V and SQL Server
  • Load-balanced IO across all nodes
  • Distributed access across all nodes
  • VSS support
  • Transparent failover and client redirection
  • Continuous availability at a share level versus a server level

De-duplication:

  • Identifies duplicate chunks of data and only stores one copy
  • Provides up to 90% reduction in storage required for OS VHD files
  • Reduces CPU and Memory pressure
  • Offers excellent reliability and integrity
  • Outperforms Single Instance Storage (SIS) or NTFS compression.
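
Data deduplication is enabled per volume once the role service is installed. A minimal PowerShell sketch (the volume letter E: is only an example):

# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on an example volume, run an optimization job and check the savings
Enable-DedupVolume -Volume "E:"
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:" | Format-List SavedSpace, OptimizedFilesCount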

SMB Multichannel

  • Automatic detection of SMB Multi-Path networks
  • Resilience against path failures
  • Transparent failover with recovery
  • Improved throughput
  • Automatic configuration with little administrative overhead
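
SMB Multichannel needs no configuration, but you can verify that multiple interfaces and channels are in use. A quick check from a client that already has a connection to the SOFS share:

# List client NICs that SMB can use (RSS/RDMA capable NICs enable Multichannel)
Get-SmbClientNetworkInterface

# Show active SMB connections and the channels established per server
Get-SmbMultichannelConnection
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect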

SMB Direct:

  • The Microsoft implementation of RDMA.
  • The ability to direct data transfers from a storage location to an application.
  • Higher performance and lower latency through CPU offloading
  • High-speed network utilization (including InfiniBand and iWARP)
  • Remote storage at the speed of local storage
  • A transfer rate of approximately 50Gbps on a single NIC port
  • Compatibility with SMB Multichannel for load balancing and failover
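
To confirm that RDMA (SMB Direct) is actually being used, check the adapters and the multichannel connections. A small verification sketch:

# Check which network adapters report RDMA capability
Get-NetAdapterRdma | Where-Object Enabled

# RDMA-capable SMB connections report client/server RDMA capability as True
Get-SmbMultichannelConnection | Select-Object ServerName, ClientRdmaCapable, ServerRdmaCapable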

VHDX Virtual Disk:

  • Online VHDX Resize
  • Storage QoS (Quality of Service)

Live Migration

  • Easy migration of virtual machine into a cluster while the virtual machine is running
  • Improved virtual machine mobility
  • Flexible placement of virtual machine storage based on demand
  • Migration of virtual machine storage to shared storage without downtime
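
Storage can be moved to a scale-out file share while the virtual machine keeps running. A minimal sketch, assuming a VM named VM01 and the \\SOFS\VHDShare share used later in this article:

# Move a running VM's disks and configuration to the scale-out file share (no downtime)
Move-VMStorage -VMName "VM01" -DestinationStoragePath "\\SOFS\VHDShare\VM01"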

Storage Protocol:

  • SAN discovery (FCP, SAS, iSCSI i.e. EMC VNX, EMC VMAX)
  • NAS discovery (Self-contained NAS, NAS Head i.e. NetApp OnTap)
  • File Server Discovery (Microsoft Scale-Out File Server, Unified Storage)

Unified Management:

  • A new architecture provides ~10x faster disk/partition enumeration operations
  • Remote and cluster-awareness capabilities
  • SM-API exposes new Windows Server 2012 R2 features (Tiering, Write-back cache, and so on)
  • SM-API features added to System Center VMM
  • End-to-end storage high availability space provisioning in minutes in VMM console
  • More Windows PowerShell

ReFS:

  • More resilience to power failures
  • Highest levels of system availability
  • Larger volumes with better durability
  • Scalable to petabyte size volumes

Storage Replica:

  • Hardware agnostic storage configuration
  • Provide a DR solution for planned and unplanned outages of mission critical workloads.
  • Use SMB3 transport with proven reliability, scalability, and performance.
  • Stretched failover clusters within metropolitan distances.
  • Manage end to end storage and clustering for Hyper-V, Storage Replica, Storage Spaces, Scale-Out File Server, SMB3, Deduplication, and ReFS/NTFS using Microsoft software
  • Reduce downtime, and increase reliability and productivity intrinsic to Windows.
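
On Windows Server 2016, a Storage Replica partnership is created with the SR cmdlets. A hedged sketch in which the server names, replication group names and volumes are placeholders:

# Example only: source/destination names, data volumes and log volumes are placeholders
New-SRPartnership -SourceComputerName "SR-SRV01" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SR-SRV02" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"

# Check the partnership and replication status
Get-SRPartnership
(Get-SRGroup).Replicas | Select-Object DataVolume, ReplicationStatus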

Cloud Integration:

  • Cloud-based storage service for online backups
  • Windows PowerShell instrumented
  • Simple, reliable Disaster Recovery solution for applications and data
  • Supports System Center 2012 R2 DPM

Implementing Scale-out File Server

Scale-out File Server Recommended Configuration:

  1. Gather all virtual servers' IOPS requirements*
  2. Gather applications' IOPS requirements
  3. Total IOPS of all applications & virtual machines must be less than the available IOPS of the physical storage
  4. Keep latency below 3 ms at all times for high performance (see the latency check sketched after these notes)
  5. Gather required capacity + potential growth + best practice
  6. N+1 Compute, Network and Storage Hardware
  7. Use low latency, high throughput networks
  8. Segregate storage network from data network using logical network (VLAN) or fibre channel
  9. Tools to be used

*Not all virtual servers are the same; a DHCP server generates few IOPS, while SQL Server and Exchange can generate thousands of IOPS.

*Do not place SQL Server on the same logical volume (LUN) with Exchange Server or Microsoft Dynamics or Backup Server.

*Isolate high-IO workloads to a separate logical volume, or even a separate storage pool, if possible.
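
To check whether storage latency stays under the 3 ms target, the physical disk counters can be sampled from any Windows host. A quick, hedged check:

# Sample average disk latency (seconds per transfer) every 5 seconds, 12 times, and convert to milliseconds
Get-Counter -Counter "\PhysicalDisk(*)\Avg. Disk sec/Transfer" -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples | Select-Object InstanceName,
            @{Name = "LatencyMs"; Expression = { [math]::Round($_.CookedValue * 1000, 2) }}
    }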

Prerequisites for Scale-Out File Server

  1. Install the File and Storage Services server role and the Failover Clustering feature on the cluster nodes (a PowerShell sketch of these prerequisites follows this list)
  2. Configure Microsoft failover Clusters using this article Windows Server 2012: Failover Clustering Deep Dive Part II
  3. Add Cluster Share Volume
  • Log on to the server as a member of the local Administrators group.
  • Open Server Manager> Click Tools, and then click Failover Cluster Manager.
  • Click Storage, right-click the disk that you want to add to the cluster shared volume, and then click Add to Cluster Shared Volumes> Add Storage Presented to this cluster.
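
The prerequisites above can also be put in place with PowerShell. A minimal sketch, where the cluster disk name is an example:

# Run on each cluster node: install the File Server role service and the Failover Clustering feature
Install-WindowsFeature -Name FS-FileServer, Failover-Clustering -IncludeManagementTools

# Add an available cluster disk as a Cluster Shared Volume ("Cluster Disk 1" is an example name)
Add-ClusterSharedVolume -Name "Cluster Disk 1"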

Configure Scale-out File Server

  1. Open Failover Cluster Manager> Right-click the name of the cluster, and then click Configure Role.
  2. On the Before You Begin page, click Next.
  3. On the Select Role page, click File Server, and then click Next.
  4. On the File Server Type page, select the Scale-Out File Server for application data option, and then click Next.
  5. On the Client Access Point page, in the Name box, type a NetBIOS name for the Scale-Out File Server, and then click Next.
  6. On the Confirmation page, confirm your settings, and then click Next.
  7. On the Summary page, click Finish.
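
Alternatively, the same role can be created with a single cmdlet. A sketch, assuming the client access point is named SOFS as used throughout this article:

# Create the Scale-Out File Server role with a client access point named "SOFS"
Add-ClusterScaleOutFileServerRole -Name "SOFS"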

Create Continuously Available File Share

  1. Open Failover Cluster Manager>Expand the cluster, and then click Roles.
  2. Right-click the file server role, and then click Add File Share.
  3. On the Select the profile for this share page, click SMB Share – Applications, and then click Next.
  4. On the Select the server and path for this share page, click the name of the cluster shared volume, and then click Next.
  5. On the Specify share name page, in the Share name box, type a name for the file share, and then click Next.
  6. On the Configure share settings page, ensure that the Continuously Available check box is selected, and then click Next.
  7. On the Specify permissions to control access page, click Customize permissions, grant the following permissions, and then click Next:
  • To use Scale-Out File Server file share for Hyper-V: All Hyper-V computer accounts, the SYSTEM account, cluster computer account for any Hyper-V clusters, and all Hyper-V administrators must be granted full control on the share and the file system.
  • To use Scale-Out File Server on Microsoft SQL Server: The SQL Server service account must be granted full control on the share and the file system

  8. On the Confirm selections page, click Create. On the View results page, click Close.
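
A continuously available share with these permissions can also be created from PowerShell. A hedged sketch in which the CSV path, share name and account names are examples only:

# Create the folder on a Cluster Shared Volume (paths and account names below are examples)
New-Item -Path "C:\ClusterStorage\Volume1\VHDShare" -ItemType Directory -Force

# Share it as continuously available and grant the Hyper-V hosts, the Hyper-V cluster account and the admins Full Control
New-SmbShare -Name "VHDShare" -Path "C:\ClusterStorage\Volume1\VHDShare" `
    -ContinuouslyAvailable $true `
    -FullAccess "CONTOSO\HV-HOST1$", "CONTOSO\HV-HOST2$", "CONTOSO\HVCLUSTER$", "CONTOSO\Hyper-V Admins"

# The same accounts also need Full Control on the folder's NTFS ACL (for example via icacls or Explorer)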

Use SOFS for Hyper-v Server VHDX Store:

  1. Open Hyper-V Manager. Click Start, and then click Hyper-V Manager.
  2. Open Hyper-V Settings> Virtual Hard Disks> specify the location of the store as \\SOFS\VHDShare\ and specify the location of the virtual machine configuration as \\SOFS\VHDCShare
  3. Click Ok.
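
The Hyper-V default paths can also be set with PowerShell on each host; a sketch using the same share names:

# Point the Hyper-V host defaults at the scale-out file shares
Set-VMHost -VirtualHardDiskPath "\\SOFS\VHDShare" -VirtualMachinePath "\\SOFS\VHDCShare"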

Use SOFS in System Center VMM: 

  1. Add Windows File Server in VMM
  2. Assign SOFS Share to Fabric & Hosts

Use SOFS for SQL Database Store:

1. Assign SQL Service Account Full permission to SOFS Share

  • Open Windows Explorer and navigate to the scale-out file share.
  • Right-click the folder, and then click Properties.
  • Click the Sharing tab, click Advanced Sharing, and then click Permissions.
  • Ensure that the SQL Server service account has full-control permissions.
  • Click OK twice.
  • Click the Security tab. Ensure that the SQL Server service account has full-control permissions.
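
The share and NTFS permissions can also be granted from PowerShell. A sketch in which the service account CONTOSO\sqlsvc, the share name SQLData and the CSV path are assumptions:

# Grant the SQL Server service account Full Control on the SMB share
Grant-SmbShareAccess -Name "SQLData" -AccountName "CONTOSO\sqlsvc" -AccessRight Full -Force

# Grant the same account Full Control on the underlying folder (NTFS ACL)
icacls "C:\ClusterStorage\Volume2\SQLData" /grant "CONTOSO\sqlsvc:(OI)(CI)F"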

2. In SQL Server 2012, you can choose to store all database files in a scale-out file share during installation.  

3. On step 20 of the SQL Setup Wizard, provide the Scale-Out File Server locations, which are \\SOFS\SQLData and \\SOFS\SQLLogs

4. Create a database on the SOFS share from an existing SQL Server using a SQL script:

CREATE DATABASE [TestDB]
ON PRIMARY
( NAME = N'TestDB', FILENAME = N'\\SOFS\SQLDB\TestDB.mdf' )
LOG ON
( NAME = N'TestDBLog', FILENAME = N'\\SOFS\SQLDBLog\TestDBLogs.ldf')
GO

Use Backup & Recovery:

System Center Data Protection Manager 2012 R2

Configure and add a dedupe storage target in DPM 2012 R2. DPM 2012 R2 will not back up SOFS itself, but it will back up VHDX files stored on SOFS. Follow the "Deduplicate DPM storage and protection for virtual machines with SMB storage" guide to back up virtual machines.

Veeam Availability Suite

  1. Log on to Veeam Availability Console>Click Backup Repository> Right Click New backup Repository
  2. Select Shared Folder on the Type Tab
  3. Add SMB Backup Target \\SOFS\Repository
  4. Follow the wizard. Make sure the Veeam service account has full access permission to the \\SOFS\Repository share.
  5. Click Scale-out Repositories>Right Click Add Scale-out backup repository> Type the Name
  6. Select the backup repository you created previously>Follow the wizard to complete the tasks.

References:

Microsoft Storage Architecture

Storage Spaces Physical Disk Validation Script

Validate Hardware

Deploy Clustered Storage Spaces

Storage Spaces Tiering in Windows Server 2012 R2

SMB Transparent Failover

Cluster Shared Volume (CSV) Inside Out

Storage Spaces – Designing for Performance

Related Articles:

Scale-Out File Server Cluster using Azure VMs

Microsoft Multi-Site Failover Cluster for DR & Business Continuity

Microsoft Multi-Site Failover Cluster for DR & Business Continuity

Not every organisation loses millions of dollars per second, but some do. An organisation may not lose millions of dollars per second but may still consider customer service and reputation its number one priority. These types of businesses want their workflow to be seamless and downtime free. This article is for those who consider business continuity money well spent. Here is how it is done:

Multi-Site Failover Cluster

Microsoft Multi-Site Failover Cluster is a group of clustered nodes distributed across multiple sites in a region, or across separate regions, connected by low-latency network and storage. As per the diagram illustrated below, the Data Center A cluster nodes are connected to a local SAN storage, which is replicated to a SAN storage in Data Center B. Replication is handled by identical software-defined storage at each site. The software-defined storage replicates volumes, or Logical Unit Numbers (LUNs), from the primary site (in this example Data Center A) to the disaster recovery site (Data Center B). The Microsoft failover cluster is configured with pass-through storage, i.e. volumes, and these volumes are replicated to the DR site. In the primary and DR sites, the physical network is configured using Cisco Nexus 7000. The data network and virtual machine network are logically segregated in Microsoft System Center VMM and the physical switch using virtual local area networks (VLANs). A separate Storage Area Network (SAN) is created in each site with low-latency storage. Volumes of pass-through storage are replicated to the DR site using identically sized volumes.

image

                                     Figure: Highly Available Multi-site Cluster

image

                           Figure: Software Defined Storage in Each Site

 Design Components of Storage:

  • SAN to SAN replication must be configured correctly
  • Initial replication must be complete before the Failover Cluster is configured
  • MPIO software must be installed on the cluster nodes (N1, N2…N6)
  • Physical and logical multipathing must be configured
  • If storage is presented directly to virtual machines or cluster nodes, then NPIV must be configured on the fabric zones.
  • All storage and fabric firmware must be up to date with the manufacturer's latest software
  • Identical software-defined storage must be used on both sites
  • If third-party software is used to replicate storage between sites, the storage vendor must be consulted before configuring the replication.

Further Reading:

Understanding Software Defined Storage (SDS)

How to configure SAN replication between IBM Storwize V3700 systems

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Application Scale-out File Systems

Design Components of Network:

  • Isolate management, virtual and data network using VLAN
  • Use a reliable IPVPN or Fibre optic provider for the replication over the network
  • Eliminate all single point of failure from all network components
  • Consider stretched VLAN for multiple sites 

Further Reading:

Understanding Network Virtualization in SCVMM 2012 R2

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Design failover Cluster Quorum

  • Use Node & File Share Witness (FSW) Quorum for even number of Cluster Nodes
  • Connect File Share Witness on to the third Site
  • Do not host the File Share Witness on a virtual machine in the same site
  • Alternatively use Dynamic Quorum
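
A file share witness on the third site can be set with a single cmdlet. A sketch, where the witness share \\FS01\FSW is an example path:

# Configure Node and File Share Majority with a witness share hosted in a third site
Set-ClusterQuorum -NodeAndFileShareMajority "\\FS01\FSW"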

Further Reading:

Understanding Dynamic Quorum in a Microsoft Failover Cluster

Design of Compute

  • Use a reputable vendor to supply compute hardware compatible with Microsoft Hyper-V
  • Make sure all the latest firmware updates are applied to the Hyper-V hosts
  • Make sure the manufacturer provides you with the latest HBA software to be installed on the Hyper-V hosts

Further Reading:

Windows Server 2012: Failover Clustering Deep Dive Part II

Implementing a Multi-Site Failover Cluster

Step1: Prepare Network, Storage and Compute

Understanding Network Virtualization in SCVMM 2012 R2

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Step2: Configure Failover Cluster on Each Site

Windows Server 2012: Failover Clustering Deep Dive Part II

Understanding Dynamic Quorum in a Microsoft Failover Cluster

Multi-Site Clustering & Disaster Recovery

Step3: Replicate Volumes

How to configure SAN replication between IBM Storwize V3700 systems

How to create a Microsoft Multi-Site cluster with IBM Storwize replication

Use Cases:

Use cases can be determined by current and future workloads plus business continuity requirements. Deploy Veeam ONE to determine the current workloads on your infrastructure and to propose future workloads plus business continuity. Here is a list of use cases for a multi-site cluster.

  • Scale-Out File Server for application data-  To store server application data, such as Hyper-V virtual machine files, on file shares, and obtain a similar level of reliability, availability, manageability, and high performance that you would expect from a storage area network. All file shares are simultaneously online on all nodes. File shares associated with this type of clustered file server are called scale-out file shares. This is sometimes referred to as active-active.

  • File Server for general use – This type of clustered file server, and therefore all the shares associated with the clustered file server, is online on one node at a time. This is sometimes referred to as active-passive or dual-active. File shares associated with this type of clustered file server are called clustered file shares.

  • Business Continuity Plan

  • Disaster Recovery Plan

  • DFS Replication Namespace for Unstructured Data i.e. user profile, home drive, Citrix profile

  • Highly Available File Server Replication 

Understanding Dynamic Quorum in a Microsoft Failover Cluster

Windows Server 2012: Failover Clustering Deep Dive

Microsoft introduced an advanced quorum configuration option in Windows Server 2012/R2. You can choose to enable dynamic quorum management by the cluster. There are major benefits to having dynamic quorum in any Microsoft cluster, whether for an Exchange DAG, a SQL cluster, a Hyper-V cluster or a file server cluster. When you configure dynamic quorum, the cluster dynamically manages the vote assignment to nodes based on the state of each node. Votes are automatically removed from nodes that leave active cluster membership, and a vote is automatically assigned when a node re-joins the cluster. Dynamic quorum removes the dependency on a quorum disk in Hyper-V and also enables multi-site clusters in geographically diverse locations without sharing a common disk.

Pros:

  • With dynamic quorum management, it is also possible for a cluster to run on the last surviving cluster node.
  • By dynamically adjusting the quorum majority requirement, the cluster can sustain sequential node shutdowns to a single node.
  • The cluster software automatically configures the quorum for a new cluster, based on the number of nodes configured and the availability of shared storage.

Cons:

  • Dynamic quorum management does not allow the cluster to sustain a simultaneous failure of a majority of voting members. To continue running, the cluster must always have a quorum majority at the time of a node shutdown or failure.
  • If you have explicitly removed the vote of a node, the cluster cannot dynamically add or remove that vote.

How to configure a dynamic quorum?

Configure a standard cluster as you would in a Microsoft environment. Then use the Quorum Configuration Wizard in Failover Cluster Manager to configure advanced quorum.

  1. In Failover Cluster Manager, select the cluster that you want to change.
  2. With the cluster selected>under Actions>click More Actions> and then click Configure Cluster Quorum Settings> Click Next.
  3. On the Select Quorum Configuration Option page>click Advanced quorum configuration and witness selection
  4. On the Select Voting Configuration page>select an option to assign votes to nodes. By default, all nodes are assigned a vote.
  5. On the Configure Quorum Management page> enable the Allow cluster to dynamically manage the assignment of node votes
  6. On the Select Quorum Witness page>select Do not configure a quorum witness, and then complete the wizard
  7. Click Next>then click Next.

Once the quorum is reconfigured, run the Validate Quorum Configuration test to verify the updated quorum settings. Follow these steps to validate quorum.

  1. In Failover Cluster Manager, select the cluster> run the Validate Quorum Configuration test to verify the updated quorum settings.
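
Dynamic quorum and the per-node vote assignments can also be inspected from PowerShell. A hedged sketch:

# 1 = dynamic quorum management enabled (the default on Windows Server 2012/R2)
(Get-Cluster).DynamicQuorum

# Review the configured votes and the votes the cluster is currently assigning dynamically
Get-ClusterNode | Select-Object Name, State, NodeWeight, DynamicWeight

# Re-run the cluster validation tests that cover quorum configuration
Test-Cluster -Include "Cluster Configuration"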

How to Connect and Configure Virtual Fibre Channel, FC Storage and FC Tape Library from within a Virtual Machine in Hyper-v Server 2012 R2

Windows Server 2012 R2 with Hyper-v Role provides Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage and also allows you to cluster guest operating systems leveraging Fibre Channel, and provides an important new storage option for servers hosted in your virtual infrastructure.

Benefits:

  • Existing Fibre Channel investments to support virtualized workloads.
  • Connect Fibre Channel Tape Library from within a guest operating systems.
  • Support for many related features, such as virtual SANs, live migration, and MPIO.
  • Create MSCS Cluster of guest operating systems in Hyper-v Cluster

Limitation:

  • Live Migration will not work if SAN zoning isn’t configured correctly.
  • Live Migration will not work if LUN mismatch detected by Hyper-v cluster.
  • Virtual workload is tied with a single Hyper-v Host making it a single point of failure if a single HBA is used.
  • Virtual Fibre Channel logical units cannot be used as boot media.

Prerequisites:

  • Windows Server 2012 or 2012 R2 with the Hyper-V role.
  • Hyper-V requires a computer with processor support for hardware virtualization. See details in BIOS setup of server hardware.
  • A computer with one or more Fibre Channel host bus adapters (HBAs) that have an updated HBA driver that supports virtual Fibre Channel.
  • An NPIV-enabled fabric, HBA and FC SAN. Almost all new generation Brocade fabrics and storage support this feature. NPIV is disabled on the HBA by default.
  • Virtual machines configured to use a virtual Fibre Channel adapter, which must use Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 or Windows Server 2012 R2 as the guest operating system. A maximum of 4 vFC ports are supported per guest OS.
  • Storage accessed through a virtual Fibre Channel supports devices that present logical units.
  • MPIO Feature installed in Windows Server.
  • Microsoft Hotfix KB2894032

Before I begin elaborating the steps involved in configuring virtual Fibre Channel, I assume you have physical connectivity in place and physical multipathing configured and connected as per vendor best practice. In this example configuration, I will present storage and an FC tape library to a virtualized backup server. I used the following hardware.

  • 2X Brocade 300 series Fabric
  • 1X FC SAN
  • 1X FC Tape Library
  • 2X Windows Server 2012 R2 with the Hyper-V role installed and configured as a cluster. Each host is connected to both fabrics using dual HBA ports.

Step1: Update Firmware of all Fabric.

Use this LINK to update firmware.

Step2: Update Firmware of FC SAN

See OEM or vendor installation guide. See this LINK for IBM guide.

Step3: Enable hardware virtualization in Server BIOS

See OEM or Vendor Guidelines

Step4: Update Firmware of Server

See OEM or Vendor Guidelines. See Example of Dell Firmware Upgrade

Step5: Install MPIO driver in Hyper-v Host

See OEM or Vendor Guidelines

Step6: Physically Connect FC Tape Library, FC Storage and Servers to correct FC Zone

Step7: Configure Correct Zone and NPIV in Fabric

SSH to Fabric and Type the following command to verify NPIV.

Fabric:root>portcfgshow 0

If NPIV is enabled, it will show NPIV ON.

To enable NPIV on a specific port, type portCfgNPIVPort 0 1 (where 0 is the port number and 1 is the mode; 1=enable, 0=disable).

Open the Brocade fabric and configure aliases. The items marked in red are the virtual HBAs and the FC tape shown in the fabric. Note that you must place the FC tape, the Hyper-V host(s), the virtual machine and the FC SAN in the same zone, otherwise it will not work.

image

Configure correct Zone as shown below.

image

Configure correct Zone Config as shown below.

image

Once you have configured the correct zone in the fabric, you will see the FC tape appear in Windows Server 2012 R2 where the Hyper-V role is installed. Do not update the tape driver on the Hyper-V host, as we will use the guest (virtual machine) as the backup server, where the correct tape driver is needed.

image

Step8: Configure Virtual Fibre Channel

Open Hyper-v Manager, Click Virtual SAN Manager>Create new Fibre Channel

image

Type Name of the Fibre Channel> Apply>Ok.

image

Repeat the process to create multiple vFCs for MPIO and Live Migration purposes. Remember that the physical HBAs must be connected to the two Brocade fabrics.

On the vFC configuration, keep the naming convention identical on both hosts. If you have two physical HBAs, configure two vFCs per Hyper-V host, for example VFC1 and VFC2. Create two vFCs on the other host with the identical names VFC1 and VFC2. Assign both vFCs to the virtual machines.
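
The virtual SANs can also be created with the Hyper-V PowerShell module. A hedged sketch, where the names VFC1/VFC2 follow the naming convention above and the selection of physical HBA ports is an assumption about your hardware:

# List the physical Fibre Channel host bus adapter ports on this Hyper-V host
$hbaPorts = Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" }

# Create one virtual SAN per physical HBA port (names must be identical on every host)
New-VMSan -Name "VFC1" -HostBusAdapter $hbaPorts[0]
New-VMSan -Name "VFC2" -HostBusAdapter $hbaPorts[1]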

Step9: Attach Virtual Fibre Channel Adapter on to virtual Machine.

Open Failover Cluster Manager, select the virtual machine where the FC tape will be visible>Shut down the virtual machine.

Go to Settings of the virtual machine>Add Fibre Channel Adapter>Apply>Ok.

image

Record WWPN from the Virtual Fibre Channel.

image

Power on the virtual Machine.

Repeat the process to add both vFCs (VFC1 and VFC2) to the virtual machine.
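
The adapters can also be added and their WWPNs read out with PowerShell. A sketch, assuming the backup virtual machine is named Backup01:

# Add both virtual Fibre Channel adapters to the (powered-off) virtual machine
Add-VMFibreChannelHba -VMName "Backup01" -SanName "VFC1"
Add-VMFibreChannelHba -VMName "Backup01" -SanName "VFC2"

# Record the generated WWPNs so they can be zoned and registered on the storage
Get-VMFibreChannelHba -VMName "Backup01" | Select-Object SanName, WorldWidePortNameSetA, WorldWidePortNameSetB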

Step10: Present Storage

Log on to the FC storage>Add a host in the storage. The WWPN shown here must match the WWPN of the virtual Fibre Channel adapter.

image

Map the volume or LUN to the virtual server.

image

Step11: Install MPIO Driver in Guest Operating Systems

Open Server Manager>Add Role & Feature>Add MPIO Feature.

image

Download the manufacturer's MPIO driver for the storage. The MPIO driver must be the correct and latest version to function correctly.

image
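
The MPIO feature itself can be added from PowerShell inside the guest; the storage vendor's MPIO/DSM driver still has to be installed separately, as noted above:

# Inside the guest operating system: add the MPIO feature, then reboot if prompted
Install-WindowsFeature -Name Multipath-IO

# Verify the feature is present before installing the vendor's MPIO/DSM package
Get-WindowsFeature -Name Multipath-IO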

Now you have FC SAN in your virtual machine

image

image

Step12: Install Correct FC Tape Library Driver in Guest Operating Systems.

Download the correct FC tape driver and install it in the virtual backup server.

Now you have the correct FC tape library in the virtual machine.

image

Backup software can see Tape Library and inventory tapes.

image

Further Readings:

Brocade Fabric with Virtual FC in Hyper-v

Hyper-V Virtual Fibre Channel Overview

Clustered virtual machine cannot access LUNs over a Synthetic Fibre Channel after you perform live migration on Windows Server 2012 or Windows Server 2012 R2-based Hyper-V hosts

Migrating VMs from Standalone Hyper-v Host to clustered Hyper-v Host

Scenario 1: In-place migration of two standalone Windows Servers (Hyper-v role installed) into clustered Windows Servers (Hyper-v role installed).

The steps involved in this scenario are listed below. There will be downtime in this scenario.

  1. Delete all snapshots from VMs
  2. Update Windows Server to latest patches and hotfixes
  3. Reboot hosts
  4. Install Failover Clustering Windows Feature in both hosts
  5. Connect hosts with shared storage infrastructure either iSCSI or fibre channel
  6. Present shared storage (5GB for Quorum disk and additional disk for VMs store) to Hyper-v Hosts.
  7. Run Failover cluster Wizard, create cluster.
  8. From Failover Cluster Manager, click Disks, select the virtual machine storage and convert the disk to a cluster shared volume
  9. Open Hyper-V Manager from Server Manager, run storage migration and migrate all VM data to a single location, which is the shared storage.
  10. Now use the Configure Role wizard in Failover Cluster Manager, select Virtual Machine from the drop-down list, select one or more VMs and migrate those VMs to the failover cluster node (a PowerShell sketch follows this list).
  11. Test live migration.
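
Steps 10 and 11 can also be done with the failover cluster cmdlets. A sketch, where the VM and node names are examples:

# Add an existing VM (already on cluster shared storage) as a clustered role
Add-ClusterVirtualMachineRole -VMName "VM01"

# Test live migration of the clustered VM to another node
Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV-NODE2" -MigrationType Live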

Scenario 2: Migrating standalone Windows Servers (Hyper-v role installed) using local storage to different Windows Servers (Hyper-v role installed) cluster using shared storage.

In this scenario, the clustered Windows servers don't see the local storage available in the old Hyper-V host, and the old Hyper-V host doesn't see the shared storage in the new clustered Hyper-V environment. There will be downtime when you migrate VMs. Delete any snapshots and back up all VMs before you proceed.

Option A: Download Veeam Backup & Replication 8 trial version, configure a VM as Veeam management server. Add Source host as standalone hyper-v host and target host as Hyper-v cluster. Replicate all the VMs. Shutdown old VMs in standalone Hyper-v Hosts, then Power on VMs in Hyper-v cluster. Delete old VMs.

Option B: Copy the VHD and configuration files and save them to the cluster shared storage. Log on to one of the clustered Hyper-V hosts, open Hyper-V Manager and use the Import VM option to import the VM. Then use the Configure Role option in Failover Cluster Manager on the same host to migrate the VM into the cluster, then power on the VM in the cluster.

My recommendation: use Veeam B&R.

Scenario 3: Migrating standalone Windows Servers (Hyper-v role installed) using iSCSI storage to different Windows Servers (Hyper-v role installed) cluster using fibre channel or iSCSI storage.

Option A: Shut down the VMs. Present the same iSCSI storage that is connected to the standalone hosts to the clustered hosts. Use storage migration to migrate the VMs to the clustered hosts. Then use the Configure Role option in Failover Cluster Manager to migrate the VMs into the Hyper-V cluster.

Option B: Again use Veeam to do the job.

There are many factors/challenges when migrating VMs from a standalone environment to a clustered environment.

  1. iSCSI storage to Fibre Channel storage: when the new cluster has host bus adapters (HBAs) and the old standalone host doesn't, you can use the Microsoft iSCSI Initiator to fulfil the initiator requirement on the new host.
  2. Fibre Channel storage to iSCSI storage: there will be considerable downtime to fulfil this requirement because of the new architecture. Veeam can be part of the solution.
  3. A multi-site, geographically diverse cluster will depend on MPLS or IPVPN network latency and bandwidth.

In conclusion, there is no silver bullet for every situation. You have to consult with a Microsoft partner to get the migration path that best fits your requirements.