Microsoft Software Defined Storage AKA Scale-out File Server (SOFS)

Business Challenges:

  • $/IOPS and $/TB
  • Continuous Availability
  • Fault Tolerance
  • Storage Performance
  • Segregation of production, development and disaster recovery storage
  • De-duplication of unstructured data
  • Segregation of data between production site and disaster recovery site
  • Continuous break/fix of Distributed File System (DFS) & file servers
  • Continuously extending storage on the DFS servers
  • Single point of failure
  • File systems are not always available
  • Security of file systems is a constant concern
  • Proprietary, non-scalable storage
  • Management of physical storage
  • Vendor lock-in contract for physical storage
  • Migration path from single vendor to multi vendor storage provider
  • Management overhead of unstructured data
  • Comprehensive management of storage platform

Solutions:

Microsoft Software Defined Storage, also known as Scale-Out File Server (SOFS), is a feature designed to provide scale-out file shares that are continuously available for file-based server application storage. Scale-out file shares provide the ability to share the same folder from multiple nodes of the same cluster. The following table compares Microsoft Software Defined Storage with third-party offerings:

Storage feature | Third-party NAS/SAN | Microsoft Software-Defined Storage
Fabric | Block protocol | File protocol
Network | Low latency network with FC | Low latency with SMB3 Direct
Management | Management of LUNs | Management of file shares
Data de-duplication | Data de-duplication | Data de-duplication
Resiliency | RAID resiliency groups | Flexible resiliency options
Pooling | Pooling of disks | Pooling of disks
Availability | High | Continuous (via redundancy)
Copy offload, Snapshots | Copy offload, Snapshots | SMB copy offload, Snapshots
Tiering | Storage tiering | Performance with tiering
Persistent write-back cache | Persistent write-back cache | Persistent write-back cache
Scale up | Scale up | Automatic scale-out rebalancing
Storage Quality of Service (QoS) | Storage QoS | Storage QoS (Windows Server 2016)
Replication | Replication | Storage Replica (Windows Server 2016)
Updates | Firmware updates | Rolling cluster upgrades (Windows Server 2016)
Storage Spaces Direct | — | Storage Spaces Direct (Windows Server 2016)
Azure-consistent storage | — | Azure-consistent storage (Windows Server 2016)

 

Functional use of Microsoft Scale-Out File Servers:

1. Application Workloads

  • Microsoft Hyper-v Cluster
  • Microsoft SQL Server Cluster
  • Microsoft SharePoint
  • Microsoft Exchange Server
  • Microsoft Dynamics
  • Microsoft System Center DPM Storage Target
  • Veeam Backup Repository

2. Disaster Recovery Solution

  • Backup Target
  • Object storage
  • Encrypted storage target
  • Hyper-v Replica
  • System Center DPM

3. Unstructured Data

  • Continuously Available File Shares
  • DFS Namespace folder target server
  • Microsoft Data de-duplication
  • Roaming user Profiles
  • Home Directories
  • Citrix User Profiles
  • Outlook Cached location for Citrix XenApp Session Server

4. Management

  • Single Management Point for all Scale-out File Servers
  • Provide wizard driven tools for storage related tasks
  • Integrated with Microsoft System Center

Business Values:

  • Scalability
  • Load balancing
  • Fault tolerance
  • Ease of installation
  • Ease of management/operations
  • Flexibility
  • Security
  • High performance
  • Compliance & Certification

SOFS Architecture:

Microsoft Scale-out File Server (SOFS) is considered a form of Software Defined Storage (SDS). Microsoft SOFS is independent of the hardware vendor as long as the compute and storage are certified by Microsoft Corporation. The following figures show a Microsoft Hyper-v cluster, SQL cluster and object storage on SOFS.

[Figure: Microsoft Software Defined Storage (SDS) Architecture]

[Figure: Microsoft Scale-out File Server (SOFS) Architecture]

[Figure: Microsoft SDS Components]

[Figure: Unified Storage Management (See Reference)]

Microsoft Software Defined Storage AKA Scale-out File Server Benefits:

SOFS:

  • Continuous availability file stores for Hyper-V and SQL Server
  • Load-balanced IO across all nodes
  • Distributed access across all nodes
  • VSS support
  • Transparent failover and client redirection
  • Continuous availability at a share level versus a server level

De-duplication:

  • Identifies duplicate chunks of data and only stores one copy
  • Provides up to 90% reduction in storage required for OS VHD files
  • Reduces CPU and Memory pressure
  • Offers excellent reliability and integrity
  • Outperforms Single Instance Storage (SIS) or NTFS compression.

SMB Multichannel

  • Automatic detection of SMB Multi-Path networks
  • Resilience against path failures
  • Transparent failover with recovery
  • Improved throughput
  • Automatic configuration with little administrative overhead
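SMB Multichannel can be verified from PowerShell; a minimal sketch using cmdlets that ship with Windows Server 2012 R2 (the output naturally depends on your NICs and shares):

# List client NICs SMB can use; RSS- and RDMA-capable NICs enable Multichannel
Get-SmbClientNetworkInterface

# Show active multichannel connections once traffic has flowed to a share
Get-SmbMultichannelConnection

# Confirm the negotiated dialect (3.x is required for Multichannel and SMB Direct)
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect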

SMB Direct:

  • The Microsoft implementation of RDMA.
  • The ability to direct data transfers from a storage location to an application.
  • Higher performance and lower latency through CPU offloading
  • High-speed network utilization (including InfiniBand and iWARP)
  • Remote storage at the speed of local storage
  • A transfer rate of approximately 50Gbps on a single NIC port
  • Compatibility with SMB Multichannel for load balancing and failover

VHDX Virtual Disk:

  • Online VHDX Resize
  • Storage QoS (Quality of Service)
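Both features can be driven from PowerShell. A minimal sketch, assuming a running virtual machine named VM01 with a SCSI-attached VHDX; the path, name and IOPS figures are placeholders (IOPS values are normalized to 8 KB units):

# Grow the VHDX while the virtual machine keeps running (online resize)
Resize-VHD -Path '\\SOFS\VHDShare\VM01.vhdx' -SizeBytes 200GB

# Guarantee and cap IOPS on the virtual disk (Storage QoS)
Set-VMHardDiskDrive -VMName VM01 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MinimumIOPS 100 -MaximumIOPS 1000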

Live Migration

  • Easy migration of virtual machine into a cluster while the virtual machine is running
  • Improved virtual machine mobility
  • Flexible placement of virtual machine storage based on demand
  • Migration of virtual machine storage to shared storage without downtime

Storage Protocol:

  • SAN discovery (FCP, SAS, iSCSI i.e. EMC VNX, EMC VMAX)
  • NAS discovery (Self-contained NAS, NAS Head i.e. NetApp OnTap)
  • File Server Discovery (Microsoft Scale-Out File Server, Unified Storage)

Unified Management:

  • A new architecture provides ~10x faster disk/partition enumeration operations
  • Remote and cluster-awareness capabilities
  • SM-API exposes new Windows Server 2012 R2 features (Tiering, Write-back cache, and so on)
  • SM-API features added to System Center VMM
  • End-to-end storage high availability space provisioning in minutes in VMM console
  • More Windows PowerShell

ReFS:

  • More resilience to power failures
  • Highest levels of system availability
  • Larger volumes with better durability
  • Scalable to petabyte size volumes

Storage Replica:

  • Hardware agnostic storage configuration
  • Provide a DR solution for planned and unplanned outages of mission critical workloads.
  • Use SMB3 transport with proven reliability, scalability, and performance.
  • Stretched failover clusters within metropolitan distances.
  • Manage end to end storage and clustering for Hyper-V, Storage Replica, Storage Spaces, Scale-Out File Server, SMB3, Deduplication, and ReFS/NTFS using Microsoft software
  • Reduce downtime, and increase reliability and productivity intrinsic to Windows.

Cloud Integration:

  • Cloud-based storage service for online backups
  • Windows PowerShell instrumented
  • Simple, reliable Disaster Recovery solution for applications and data
  • Supports System Center 2012 R2 DPM

Implementing Scale-out File Server

Scale-out File Server Recommended Configuration:

  1. Gather all virtual servers' IOPS requirements*
  2. Gather applications' IOPS requirements
  3. Total IOPS of all applications & virtual machines must be less than the available IOPS of the physical storage
  4. Keep latency below 3 ms at all times for high performance
  5. Gather required capacity + potential growth + best practice
  6. N+1 compute, network and storage hardware
  7. Use low latency, high throughput networks
  8. Segregate the storage network from the data network using a logical network (VLAN) or fibre channel
  9. Tools to be used

*Not all virtual servers are the same: a DHCP server generates few IOPS, while SQL Server and Exchange can generate thousands of IOPS.

*Do not place SQL Server on the same logical volume (LUN) as Exchange Server, Microsoft Dynamics or a backup server.

*Isolate high-IO workloads to a separate logical volume, or even a separate storage pool, if possible.
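To make the IOPS budget in steps 1-3 concrete, here is a rough sizing sketch; the per-disk IOPS figure, write penalty and read/write mix are illustrative assumptions, so substitute your vendor's numbers:

# Illustrative pool: 24 x 10k SAS disks (~140 IOPS each), RAID-10 write penalty 2, 30% writes
$disks = 24; $iopsPerDisk = 140; $writePenalty = 2; $writeRatio = 0.3
$raw = $disks * $iopsPerDisk
# Effective IOPS after applying the write penalty to the write portion of the workload
$effective = $raw / ((1 - $writeRatio) + ($writeRatio * $writePenalty))
"Raw: {0} IOPS, effective: {1:N0} IOPS" -f $raw, $effective
# The sum of all VM and application IOPS must stay below the effective figure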

Prerequisites for Scale-Out File Server

  1. Install the File and Storage Services server role and the Failover Clustering feature on the cluster nodes (a scripted version follows this list)
  2. Configure Microsoft Failover Clustering using this article: Windows Server 2012: Failover Clustering Deep Dive Part II
  3. Add a Cluster Shared Volume
  • Log on to the server as a member of the local Administrators group.
  • Open Server Manager> Click Tools, and then click Failover Cluster Manager.
  • Click Storage, right-click the disk that you want to add to the cluster shared volume, and then click Add to Cluster Shared Volumes> Add Storage Presented to this cluster.
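Both prerequisites can also be staged from PowerShell. A minimal sketch, run on each node; the cluster disk name is a placeholder:

# Install the File Server role service and the Failover Clustering feature
Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools

# Add an available cluster disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name 'Cluster Disk 1'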

Configure Scale-out File Server

  1. Open Failover Cluster Manager> Right-click the name of the cluster, and then click Configure Role.
  2. On the Before You Begin page, click Next.
  3. On the Select Role page, click File Server, and then click Next.
  4. On the File Server Type page, select the Scale-Out File Server for application data option, and then click Next.
  5. On the Client Access Point page, in the Name box, type a NetBIOS name for the Scale-Out File Server, and then click Next.
  6. On the Confirmation page, confirm your settings, and then click Next.
  7. On the Summary page, click Finish.
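The same role can be created with a single cmdlet; a sketch assuming the NetBIOS name SOFS used throughout this article:

# Create the Scale-Out File Server role on the current failover cluster
Add-ClusterScaleOutFileServerRole -Name SOFS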

Create Continuously Available File Share

  1. Open Failover Cluster Manager>Expand the cluster, and then click Roles.
  2. Right-click the file server role, and then click Add File Share.
  3. On the Select the profile for this share page, click SMB Share – Applications, and then click Next.
  4. On the Select the server and path for this share page, click the name of the cluster shared volume, and then click Next.
  5. On the Specify share name page, in the Share name box, type a name for the file share, and then click Next.
  6. On the Configure share settings page, ensure that the Continuously Available check box is selected, and then click Next.
  7. On the Specify permissions to control access page, click Customize permissions, grant the following permissions, and then click Next:
  • To use Scale-Out File Server file share for Hyper-V: All Hyper-V computer accounts, the SYSTEM account, cluster computer account for any Hyper-V clusters, and all Hyper-V administrators must be granted full control on the share and the file system.
  • To use Scale-Out File Server on Microsoft SQL Server: The SQL Server service account must be granted full control on the share and the file system

      8. On the Confirm selections page, click Create. On the View results page, click Close.
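The share can also be created and secured from PowerShell. A minimal sketch; the CSV path and account names are placeholders for your own environment, and Set-SmbPathAcl mirrors the share permissions onto NTFS:

# Create a continuously available share scoped to the Scale-Out File Server
New-SmbShare -Name VHDShare -Path C:\ClusterStorage\Volume1\VHDShare -ScopeName SOFS -ContinuouslyAvailable $true -FullAccess 'CONTOSO\Hyper-V-Admins', 'CONTOSO\HV01$', 'CONTOSO\HV02$'

# Copy the share permissions to the underlying file system
Set-SmbPathAcl -ShareName VHDShare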

Use SOFS for Hyper-v Server VHDX Store:

  1. Open Hyper-V Manager. Click Start, and then click Hyper-V Manager.
  2. Open Hyper-v Settings> Virtual Hard Disks> specify the location of the store as \\SOFS\VHDShare, and specify the location of the virtual machine configuration as \\SOFS\VHDCShare.
  3. Click Ok.
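The same defaults can be set with PowerShell; a one-line sketch using this article's share names:

# Point the Hyper-V default stores at the scale-out file shares
Set-VMHost -VirtualHardDiskPath '\\SOFS\VHDShare' -VirtualMachinePath '\\SOFS\VHDCShare'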

Use SOFS in System Center VMM: 

  1. Add Windows File Server in VMM
  2. Assign SOFS Share to Fabric & Hosts

Use SOFS for SQL Database Store:

1. Assign SQL Service Account Full permission to SOFS Share

  • Open Windows Explorer and navigate to the scale-out file share.
  • Right-click the folder, and then click Properties.
  • Click the Sharing tab, click Advanced Sharing, and then click Permissions.
  • Ensure that the SQL Server service account has full-control permissions.
  • Click OK twice.
  • Click the Security tab. Ensure that the SQL Server service account has full-control permissions.

2. In SQL Server 2012, you can choose to store all database files in a scale-out file share during installation.  

3. On step 20 of the SQL Setup Wizard, provide the Scale-Out File Server locations, which are \\SOFS\SQLData and \\SOFS\SQLLogs

4. Create a Database on SOFS Share but on the existing SQL Server using SQL Script

CREATE DATABASE [TestDB]
ON PRIMARY
( NAME = N'TestDB', FILENAME = N'\\SOFS\SQLDB\TestDB.mdf' )
LOG ON
( NAME = N'TestDBLog', FILENAME = N'\\SOFS\SQLDBLog\TestDBLogs.ldf')
GO

Use Backup & Recovery:

System Center Data Protection Manager 2012 R2

Configure and add a dedupe storage target into DPM 2012 R2. DPM 2012 R2 will not back up SOFS itself, but it will back up VHDX files stored on SOFS. Follow the Deduplicate DPM storage and protection for virtual machines with SMB storage guide to back up virtual machines.

Veeam Availability Suite

  1. Log on to the Veeam Availability Console> Click Backup Repository> Right-click New Backup Repository
  2. Select Shared Folder on the Type tab
  3. Add the SMB backup target \\SOFS\Repository
  4. Follow the wizard. Make sure the Veeam service account has full access permission to the \\SOFS\Repository share.
  5. Click Scale-out Repositories> Right-click Add Scale-out backup repository> Type the name
  6. Select the backup repository you created previously> Follow the wizard to complete the task.

References:

Microsoft Storage Architecture

Storage Spaces Physical Disk Validation Script

Validate Hardware

Deploy Clustered Storage Spaces

Storage Spaces Tiering in Windows Server 2012 R2

SMB Transparent Failover

Cluster Shared Volume (CSV) Inside Out

Storage Spaces – Designing for Performance

Related Articles:

Scale-Out File Server Cluster using Azure VMs

Microsoft Multi-Site Failover Cluster for DR & Business Continuity

Microsoft Multi-Site Failover Cluster for DR & Business Continuity

Not every organisation loses millions of dollars per second, but some do. An organisation may not lose millions of dollars per second, yet still consider customer service and reputation its number one priority. These types of businesses want their workflow to be seamless and downtime free. This article is for those who consider business continuity money well spent. Here is how it is done:

Multi-Site Failover Cluster

Microsoft Multi-Site Failover Cluster is a group of clustered nodes distributed across multiple sites in a region, or across separate regions, connected by low latency network and storage. As the diagram below illustrates, the Data Center A cluster nodes are connected to a local SAN storage, which is replicated to a SAN storage in Data Center B. Replication is handled by identical software defined storage at each site. The software defined storage replicates volumes, or Logical Unit Numbers (LUNs), from the primary site (in this example Data Center A) to the disaster recovery site (Data Center B). The Microsoft failover cluster is configured with pass-through storage, i.e. volumes, and these volumes are replicated to the DR site. In the primary and DR sites, the physical network is built on Cisco Nexus 7000 switches. The data network and virtual machine network are logically segregated in Microsoft System Center VMM and in the physical switch using virtual local area networks (VLANs). A separate Storage Area Network (SAN) is created in each site with low latency storage. Volumes of pass-through storage are replicated to the DR site using identically sized volumes.

[Figure: Highly Available Multi-site Cluster]

[Figure: Software Defined Storage in Each Site]

 Design Components of Storage:

  • SAN to SAN replication must be configured correctly
  • Initial replication must be complete before the Failover Cluster is configured
  • MPIO software must be installed on the cluster nodes (N1, N2…N6)
  • Physical and logical multipathing must be configured
  • If storage is presented directly to virtual machines or cluster nodes, then NPIV must be configured on the Fabric zones
  • All storage and Fabric firmware must be up to date with the manufacturer's latest software
  • Identical software defined storage must be used on both sites
  • If third-party software is used to replicate storage between sites, then the storage vendor must be consulted before replication

Further Reading:

Understanding Software Defined Storage (SDS)

How to configure SAN replication between IBM Storwize V3700 systems

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Application Scale-out File Systems

Design Components of Network:

  • Isolate management, virtual and data network using VLAN
  • Use a reliable IPVPN or Fibre optic provider for the replication over the network
  • Eliminate all single point of failure from all network components
  • Consider stretched VLAN for multiple sites 

Further Reading:

Understanding Network Virtualization in SCVMM 2012 R2

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Design failover Cluster Quorum

  • Use Node & File Share Witness (FSW) Quorum for an even number of cluster nodes (see the sketch after this list)
  • Connect File Share Witness on to the third Site
  • Do not host File Share Witness on a virtual machine on same site
  • Alternatively use Dynamic Quorum
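The witness can be set from PowerShell as well; a one-line sketch assuming a witness share hosted at the third site (the path is a placeholder):

# Configure Node and File Share Majority with a witness at the third site
Set-ClusterQuorum -NodeAndFileShareMajority '\\WitnessServer\ClusterWitness'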

Further Reading:

Understanding Dynamic Quorum in a Microsoft Failover Cluster

Design of Compute

  • Use a reputable vendor to supply compute hardware compatible with Microsoft Hyper-v
  • Make sure all the latest firmware updates are applied to the Hyper-v hosts
  • Make sure the manufacturer provides you with the latest HBA software to be installed on the Hyper-v hosts

Further Reading:

Windows Server 2012: Failover Clustering Deep Dive Part II

Implementing a Multi-Site Failover Cluster

Step1: Prepare Network, Storage and Compute

Understanding Network Virtualization in SCVMM 2012 R2

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Step2: Configure Failover Cluster on Each Site

Windows Server 2012: Failover Clustering Deep Dive Part II

Understanding Dynamic Quorum in a Microsoft Failover Cluster

Multi-Site Clustering & Disaster Recovery

Step3: Replicate Volumes

How to configure SAN replication between IBM Storwize V3700 systems

How to create a Microsoft Multi-Site cluster with IBM Storwize replication

Use Cases:

Use cases can be determined by current and future workloads plus business continuity requirements. Deploy Veeam One to measure the current workloads on your infrastructure and to project future workloads plus business continuity. Here is a list of use cases for a multi-site cluster.

  • Scale-Out File Server for application data-  To store server application data, such as Hyper-V virtual machine files, on file shares, and obtain a similar level of reliability, availability, manageability, and high performance that you would expect from a storage area network. All file shares are simultaneously online on all nodes. File shares associated with this type of clustered file server are called scale-out file shares. This is sometimes referred to as active-active.

  • File Server for general use – This type of clustered file server, and therefore all the shares associated with the clustered file server, is online on one node at a time. This is sometimes referred to as active-passive or dual-active. File shares associated with this type of clustered file server are called clustered file shares.

  • Business Continuity Plan

  • Disaster Recovery Plan

  • DFS Replication Namespace for Unstructured Data i.e. user profile, home drive, Citrix profile

  • Highly Available File Server Replication 


Hyper-v Server 2016 What’s New

Changed and upgraded functionality of Hyper-v Server 2016.

  1. Hyper-v cluster with mixed Hyper-v versions
  • Join a Windows Server 2016 Hyper-v node to a cluster with Windows Server 2012 R2 Hyper-v nodes
  • The cluster functional level remains Windows Server 2012 R2
  • Manage the cluster, Hyper-V, and virtual machines from a node running Windows Server 2016 or Windows 10
  • Use new Hyper-V features once all of the nodes are migrated to the Windows Server 2016 cluster functional level
  • Virtual machine configuration versions for existing virtual machines aren't upgraded
  • Upgrade the configuration version after you upgrade the cluster functional level using the Update-VmConfigurationVersion vmname cmdlet (see the sketch after this list)
  • New virtual machines created in Windows Server 2016 will be backward compatible
  • When the Hyper-V role is enabled on a computer that uses the Always On/Always Connected (AOAC) power model, the Connected Standby power state is now available
  2. Production checkpoints
  • For production checkpoints, the Volume Snapshot Service (VSS) is used inside Windows virtual machines
  • Linux virtual machines flush their file system buffers to create a file-system-consistent checkpoint
  • Checkpoints no longer use saved state technology
  3. Hot add and remove for network adapters, virtual hard drives and memory
  • Add or remove a network adapter while the virtual machine is running, for both Windows and Linux machines
  • Adjust the memory of a running virtual machine even if you haven't enabled dynamic memory
  4. Integration Services delivered through Windows Update
  • Windows Update will distribute integration services
  • The ISO image file vmguest.iso is no longer needed to update integration components
  5. Storage quality of service (QoS)
  • Create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks
  • Hyper-v automatically adjusts virtual machine storage performance to match the assigned storage policies
  6. Virtual machine improvements
  • Import a virtual machine with an older configuration version, update it later, and live migrate it across any host
  • After you upgrade the virtual machine configuration version, you can't move the virtual machine to a server that runs Windows Server 2012 R2
  • You can't downgrade the virtual machine configuration version back from version 6 to version 5
  • Turn off the virtual machine to upgrade the virtual machine configuration
  • The Update-VmConfigurationVersion cmdlet is blocked on a Hyper-V cluster when the cluster functional level is Windows Server 2012 R2
  • After the upgrade, the virtual machine will use the new configuration file format
  • The new configuration files use the .VMCX file extension for virtual machine configuration data and the .VMRS file extension for runtime state data
  • Ubuntu 14.04 and later, and SUSE Linux Enterprise Server 12, support secure boot using the Set-VMFirmware vmname -SecureBootTemplate MicrosoftUEFICertificateAuthority cmdlet
  7. Hyper-V Manager improvements
  • Support for alternative credentials
  • Down-level management of Hyper-v running on Windows Server 2012, Windows 8, Windows Server 2012 R2 and Windows 8.1
  • Connect to Hyper-v using the WS-MAN protocol, with Kerberos or NTLM authentication
  8. Guest OS support
  • Any server operating system from Windows Server 2008 to Windows Server 2016
  • Any desktop operating system from Vista SP2 to Windows 10
  • FreeBSD, Ubuntu, SUSE Enterprise, CentOS, Debian, Fedora and Red Hat

9. ReFS Accelerated VHDX 

  • Create a fixed size VHDX on a ReFS volume instantly.
  • Gain great backup operations and checkpoints

10. Nested Virtualization

  • Run Hyper-V Server as a guest OS inside Hyper-V

11. Shared VHDX format

  • Host Based Backup of Shared VHDX files
  • Online Resize of Shared VHDX
  • Some usability changes in the UI
  • Shared VHDX files are now a new type of VHD called .vhds files.

12. Stretched Hyper-V Cluster

  • A stretched cluster allows you to configure Hyper-v hosts and storage in a single stretch cluster, where two nodes share one set of storage and two nodes share another set of storage, while synchronous replication keeps both sets of storage mirrored in the cluster to allow immediate failover.
  • These nodes and their storage should be located in separate physical sites, although it is not required.
  • The stretch cluster will run a Hyper-V compute workload.
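The upgrade path called out in items 1 and 6 maps to a handful of cmdlets. A minimal sketch, assuming a virtual machine named VM01; note that this article uses the preview-era name Update-VmConfigurationVersion, which ships as Update-VMVersion in release builds:

# Raise the cluster functional level once every node runs Windows Server 2016
Update-ClusterFunctionalLevel

# Upgrade the virtual machine configuration version (the VM must be turned off)
Update-VMVersion -Name VM01

# Enable secure boot for supported Linux guests (Ubuntu 14.04+, SLES 12)
Set-VMFirmware VM01 -SecureBootTemplate MicrosoftUEFICertificateAuthority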

 

Unsupported:

Hyper-V on Windows 10 doesn’t support failover clustering

How to configure Hyper-v Replica Step By Step

Hyper-V Replica provides IP-based asynchronous replication of virtual machines between two Hyper-v servers. Since this is asynchronous replication, the replica virtual machine will not have the most recent data. However, replica virtual machines provide a cost effective way of keeping a copy of production virtual machines in a secondary site, and they can be made available in case of a disaster.

Benefits:

  • Shared or standalone storage to fulfill the capacity requirement of the replicated virtual machines
  • Asynchronous replication of Hyper-V virtual machines over an Ethernet IP-based network
  • Replica works with standalone servers, failover clusters, or a mixture of both
  • Hyper-v hosts can be physically co-located or in geographically diverse locations with MPLS or IPVPN connections
  • Hyper-v hosts can be domain joined or standalone
  • Provides planned or unplanned failover
  • Any Hyper-v virtualized server can be replicated using Hyper-v Replica

Requirement:

  • Windows Server 2012 R2 Hyper-v Role Installed
  • Windows Server 2012 Hyper-v Role Installed
  • Similar virtual network and physical network must be configured in secondary site for replica virtual machine to function as production virtual machine.

Step1: Configure Firewall on Primary and Secondary Hyper-v Host

1. Right Click Windows Logo on Task Bar>Control Panel>Windows Firewall

2. Open Windows Firewall with Advance Security and click Inbound Rules.

3. Right-click Hyper-V Replica HTTP Listener (TCP-In) and click Enable Rule.

4. Right-click Hyper-V Replica HTTPS Listener (TCP-In) and click Enable Rule.

Step2: Pre-stage Replica Broker Computer Object

1. Log on to DC>Open Active Directory Users & Computers>Create New Computer e.g. HVReplica

2. Right Click on HVReplica Computer Object>Properties>Security Tab>Hyper-v Cluster Nodes NetBIOS Name>Allow Full Permission>Apply>Ok.

Step3: Configure Replica Broker in Hyper-v Environment

Hyper-v Replica using Failover Cluster Wizard

1. Log on Hyper-v Host>open Failover Cluster Manager.

2. In the left pane, connect to the cluster, and while the cluster name is highlighted, click Configure Role in the Actions pane. The High Availability wizard opens

3. In the Select Role screen, select Hyper-V Replica Broker.


4. Complete the wizard, providing the NetBIOS name you created in the previous step and an IP address to be used as the connection point to the cluster.

5. Verify that the Hyper-V Replica Broker role comes online successfully. Click Finish.

6. To test Replica broker failover, right-click the role, point to Move, and then click Select Node. Then, select a node, and then click OK.

7. click Roles in the Navigate category of the Details pane

8. Right-click the role and choose Replication Settings.

9. In the Details pane, select Enable this cluster as a Replica server.

10. In the Authentication and ports section, select the authentication method: Kerberos over HTTP, or certificate-based authentication over HTTPS.

11. To use certificate-based authentication, click Select Certificate and provide the request certificate information.

12. In the Authorization and storage section, specify the default storage location, or allow replication from specific servers with specific storage locations and a Trust Group tag.

13. Click OK or Apply when you are finished.

 

Configure Hyper-v Replica using Hyper-v Manager

To configure Hyper-v Replica in a non-clustered environment:

1. In Hyper-V Manager, click Hyper-V Settings in the Actions pane.

2. In the Hyper-V Settings dialog, click Replication Configuration.


3. In the Details pane, select Enable this computer as a Replica server.

4. In the Authentication and ports section, select the authentication method: Kerberos over HTTP, or certificate-based authentication over HTTPS.

5. To use certificate-based authentication, click Select Certificate and provide the request certificate information.

6. In the Authorization and storage section, specify the default storage location, or allow replication from specific servers with specific storage locations and a Trust Group tag.

7. Click OK or Apply when you are finished.
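The same Replica server settings can be applied with PowerShell. A minimal sketch; the storage path is a placeholder:

# Enable this host as a Replica server using Kerberos over HTTP (port 80)
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation 'D:\Replica'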

Step4: Configure Replica Virtual Machine

1. In the Details pane of Hyper-V Manager, select a virtual machine by clicking it.

2. Right-click the selected virtual machine and point to Enable Replication. The Enable Replication wizard opens.

3. On the Specify Replica Server page, in the Replica Server box, enter either the NetBIOS name or fully qualified international domain name (FQIDN) of the Replica server that you configured in Step 3. If the Replica server is part of a failover cluster, enter the name of the Hyper-V Replica Broker that you configured in Step 3. Click Next.

4. On the Specify Connection Parameters page, the authentication and port settings you configured for the Replica server in Step 3 will automatically be populated, provided that Remote WMI is enabled. If it is not enabled, you will have to provide the values. Click Next.

5. On the Choose Replication VHDs page, clear the checkboxes for any VHDs that you want to exclude from replication, then click Next.

6. On the Configure Recovery History page, select the number and types of recovery points to be created on the Replica server, then click Next.

7. On the Choose Initial Replication page, select the initial replication method and then click Next.

8. On the Completing the Enable Replication Relationship Wizard page, review the information in the Summary and then click Finish.

9. A Replica virtual machine is created on the Replica server. If you elected to send the initial copy over the network, the transmission begins either immediately or at the time you configured.
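The wizard steps above can be scripted. A minimal sketch, assuming a virtual machine named VM01 and the Replica Broker name HVReplica pre-staged earlier:

# Enable replication of VM01 to the Replica Broker over Kerberos/HTTP
Enable-VMReplication -VMName VM01 -ReplicaServerName HVReplica -ReplicaServerPort 80 -AuthenticationType Kerberos

# Send the initial copy over the network immediately
Start-VMInitialReplication -VMName VM01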

Step5: Test Replicated Virtual Machine

1. In Hyper-V Manager, right-click the virtual machine you want to test failover for, point to Replication…, and then point to Test Failover….

2. After you have concluded your testing, discard the test virtual machine by choosing Stop Test Failover under the Replication option

Step6: Planned Failover

1. Start Hyper-V Manager on the primary server and choose a virtual machine to fail over. Turn off the virtual machine that you want to fail over.

2. Right-click the virtual machine, point to Replication, and then point to Planned Failover.

3. Click Fail Over to actually transfer operations to the virtual machine on the Replica server. Failover will not occur if the prerequisites have not been met.

How to respond to unplanned Failover

1. Open Hyper-V Manager and connect to the Replica server.

2. Right-click the name of the virtual machine you want to use, point to Replication, and then point to Failover….

3. In the dialog that opens, choose the recovery snapshot you want the virtual machine to recover to, and then click Failover. The Replication Status will change to "Failed over – Waiting completion", and the virtual machine will start using the network parameters you previously configured for it.

4. Use the Complete-VMFailover Windows PowerShell cmdlet below to complete failover.
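A sketch of that cmdlet, assuming the failed-over virtual machine is named VM01:

# Run on the Replica server once the recovered virtual machine is verified
Complete-VMFailover -VMName VM01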

Starting a reverse replication once disaster is over

1. Open Hyper-V Manager and connect to the Replica server.

2. Right-click the name of the virtual machine you want to reverse replicate, point to Replication, and then point to Reverse replication…. The Reverse Replication wizard opens.

3. Complete the Reverse Replication wizard. You will find the requested information to be very similar if not identical to the information you provided in the Enable Replication wizard

Similar Articles:

Migrating VMs from Standalone Hyper-v Host to clustered Hyper-v Host

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

How to configure SAN replication between IBM Storwize V3700 systems

How to Connect and Configure Virtual Fibre Channel, FC Storage and FC Tape Library from within a Virtual Machine in Hyper-v Server 2012 R2

Windows Server 2012 R2 with Hyper-v Role provides Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage and also allows you to cluster guest operating systems leveraging Fibre Channel, and provides an important new storage option for servers hosted in your virtual infrastructure.

Benefits:

  • Existing Fibre Channel investments to support virtualized workloads.
  • Connect Fibre Channel Tape Library from within a guest operating systems.
  • Support for many related features, such as virtual SANs, live migration, and MPIO.
  • Create MSCS Cluster of guest operating systems in Hyper-v Cluster

Limitation:

  • Live Migration will not work if SAN zoning isn’t configured correctly.
  • Live Migration will not work if LUN mismatch detected by Hyper-v cluster.
  • Virtual workload is tied with a single Hyper-v Host making it a single point of failure if a single HBA is used.
  • Virtual Fibre Channel logical units cannot be used as boot media.

Prerequisites:

  • Windows Server 2012 or 2012 R2 with the Hyper-V role.
  • Hyper-V requires a computer with processor support for hardware virtualization. See details in BIOS setup of server hardware.
  • A computer with one or more Fibre Channel host bus adapters (HBAs) that have an updated HBA driver that supports virtual Fibre Channel.
  • An NPIV-enabled Fabric, HBA and FC SAN. Almost all new generation Brocade fabrics and storage support this feature. NPIV is disabled in the HBA by default.
  • Virtual machines configured to use a virtual Fibre Channel adapter, which must use Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 or Windows Server 2012 R2 as the guest operating system. A maximum of 4 vFC ports is supported per guest OS.
  • Storage accessed through a virtual Fibre Channel supports devices that present logical units.
  • MPIO Feature installed in Windows Server.
  • Microsoft Hotfix KB2894032

Before I begin elaborating the steps involved in configuring virtual fibre channel, I assume you have physical connectivity, and physical multipathing is configured and connected as per vendor best practice. In this example configuration, I will be presenting storage and an FC Tape Library to a virtualized backup server. I used the following hardware.

  • 2X Brocade 300 series Fabric
  • 1X FC SAN
  • 1X FC Tape Library
  • 2X Windows Server 2012 R2 with Hyper-v Role installed and configured as a cluster. Each host connected to the two Fabrics using dual HBA ports.

Step1: Update Firmware of all Fabric.

Use this LINK to update firmware.

Step2: Update Firmware of FC SAN

See OEM or vendor installation guide. See this LINK for IBM guide.

Step3: Enable hardware virtualization in Server BIOS

See OEM or Vendor Guidelines

Step4: Update Firmware of Server

See OEM or Vendor Guidelines. See Example of Dell Firmware Upgrade

Step5: Install MPIO driver in Hyper-v Host

See OEM or Vendor Guidelines

Step6: Physically Connect FC Tape Library, FC Storage and Servers to correct FC Zone

Step7: Configure Correct Zone and NPIV in Fabric

SSH to the Fabric and type the following command to verify NPIV:

Fabric:root> portcfgshow 0

If NPIV is enabled, it will show NPIV ON.

To enable NPIV on a specific port, type portCfgNPIVPort 0 1 (where 0 is the port number and 1 is the mode: 1=enable, 0=disable).

Open the Brocade Fabric and configure Aliases; the virtual HBAs and the FC Tape appear in the Fabric. Note that you must place the FC Tape, Hyper-v Host(s), Virtual Machine and FC SAN in the same zone, otherwise it will not work. Configure the correct Zone, then the correct Zone Config.

Once you have configured the correct Zone in the Fabric, the FC Tape will be visible in the Windows Server 2012 R2 host where the Hyper-v Role is installed. Do not update the tape driver in the Hyper-v host, as we will use the guest virtual machine as the backup server, where the correct tape driver is needed.

Step8: Configure Virtual Fibre Channel

Open Hyper-v Manager, click Virtual SAN Manager> Create new Fibre Channel. Type a Name for the Fibre Channel, then click Apply> OK.

Repeat the process to create multiple VFCs for MPIO and Live Migration purposes. Remember the physical HBAs must be connected to the two Brocade Fabrics.

For the vFC configuration, keep the naming convention identical on both hosts. If you have two physical HBAs, configure two VFCs in each Hyper-v host, for example VFC1 and VFC2. Create two VFCs in the other host with the identical names VFC1 and VFC2. Assign both VFCs to the virtual machines (a scripted version follows).
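The virtual SANs can also be created from PowerShell. A sketch under stated assumptions: the WWNN/WWPN values are placeholders that must match one physical HBA port on the host:

# Identify the physical Fibre Channel initiator ports on the host
Get-InitiatorPort | Where-Object ConnectionType -eq 'Fibre Channel'

# Create a virtual SAN bound to one physical HBA port (repeat for VFC2)
New-VMSan -Name VFC1 -WorldWideNodeName C003FF0000FFFF00 -WorldWidePortName C003FF5778E50002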

Step9: Attach Virtual Fibre Channel Adapter on to virtual Machine.

Open Failover Cluster Manager, select the virtual machine where the FC Tape will be visible, and shut down the virtual machine.

Go to Settings of the virtual machine> Add Fibre Channel Adapter> Apply> OK.

Record the WWPNs from the Virtual Fibre Channel adapter.

Power on the virtual machine.

Repeat the process to add both VFCs, VFC1 and VFC2, to the virtual machine (a PowerShell sketch follows).
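The adapters can be added and their WWPNs read back with PowerShell; a sketch assuming the virtual backup server is named Backup01:

# Add both virtual Fibre Channel adapters while the VM is powered off
Add-VMFibreChannelHba -VMName Backup01 -SanName VFC1
Add-VMFibreChannelHba -VMName Backup01 -SanName VFC2

# Record the WWPNs to zone on the Fabric and register on the storage array
Get-VMFibreChannelHba -VMName Backup01 | Select-Object SanName, WorldWidePortNameSetA, WorldWidePortNameSetB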

Step10: Present Storage

Log on to the FC storage and add a Host entry. The WWPNs registered here must match the WWPNs of the virtual fibre channel adapters. Then map the volume or LUN to the virtual server.

Step11: Install MPIO Driver in Guest Operating Systems

Open Server Manager> Add Roles & Features> add the MPIO feature.

Download the manufacturer's MPIO driver for the storage. The MPIO driver must be the correct and latest version to function properly.

Now you have the FC SAN visible in your virtual machine.

Step12: Install Correct FC Tape Library Driver in Guest Operating Systems.

Download the correct FC Tape driver and install it in the virtual backup server. Now you have the correct FC Tape Library in the virtual machine, and the backup software can see the Tape Library and inventory tapes.

Further Readings:

Brocade Fabric with Virtual FC in Hyper-v

Hyper-V Virtual Fibre Channel Overview

Clustered virtual machine cannot access LUNs over a Synthetic Fibre Channel after you perform live migration on Windows Server 2012 or Windows Server 2012 R2-based Hyper-V hosts

Windows Server 2012 R2 Gateway

Windows Server 2012 R2 can be configured as a Gateway VM in a two- or four-node cluster on Hyper-v hosts. A Gateway VM, or router, enhances a data center by providing a secure router for public or private cloud. A Gateway VM cluster can provide routing functionality for up to 200 tenants; each individual Gateway VM can provide routing functionality for up to 50 tenants.

Two different versions of the gateway router are available in Windows Server 2012 R2.

RRAS Multitenant Gateway – The RRAS Multitenant Gateway router can be used for multitenant or non-multitenant deployments, and is a full featured BGP router. To deploy an RRAS Multitenant Gateway router, you must use Windows PowerShell commands (see the sketch after the list below).

RRAS Gateway configuration and options:

  • Configure the RRAS Multitenant Gateway for use with Hyper-V Network Virtualization
  • Configure the RRAS Multitenant Gateway for use with VLANs
  • Configure the RRAS Multitenant Gateway for Site-to-Site VPN Connections
  • Configure the RRAS Multitenant Gateway to Perform Network Address Translation for Tenant Computers
  • Configure the RRAS Multitenant Gateway for Dynamic Routing with BGP
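As noted above, the RRAS Multitenant Gateway is deployed with PowerShell only. A heavily hedged sketch of the general shape, assuming the RemoteAccess module on Windows Server 2012 R2 and a tenant routing domain named Tenant01 (a placeholder); consult the deployment guide referenced below for the full procedure:

# Install RRAS in multitenant mode on the gateway VM
Install-RemoteAccess -MultiTenancy

# Create an isolated routing domain for one tenant
Enable-RemoteAccessRoutingDomain -Name Tenant01 -Type All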

Windows Server 2012 R2 Gateway – To deploy Windows Server Gateway, you must use System Center 2012 R2 and Virtual Machine Manager (VMM). The Windows Server Gateway router is designed for use with multitenant deployments.

Multi-tenancy is the ability of a cloud infrastructure to support the virtual machine workloads of multiple tenants, but isolate them from each other, while all of the workloads run on the same infrastructure. The multiple workloads of an individual tenant can interconnect and be managed remotely, but these systems do not interconnect with the workloads of other tenants, nor can other tenants remotely manage them.

This feature allows a service provider to virtually isolate different subnets, VLANs and network traffic that reside in the same physical core or distribution switch. Hyper-v network virtualization uses Network Virtualization using Generic Routing Encapsulation (NVGRE), which allows tenants to bring their own TCP/IP address space and naming into a cloud environment.

Systems requirements:

Option | Hyper-v Host | Gateway VM
CPU | 2 Socket NUMA Node | 8 vCPU for two VMs; 4 vCPU for four VMs
CPU Core | 8 | 1
Memory | 48GB | 8GB
Network Adapter | Two 10GB NICs connected to Cisco Trunk Ports (1) | 4 virtual NICs: operating systems, clustering heartbeat, external network, internal network
Clustering | Active-Active | Active-Active or Active-Passive

(1) NIC Teaming in the Hyper-v Host: you can configure NIC teaming in the Hyper-v host for the two 10GB NICs. The Windows Server 2012 R2 Gateway VM then uses four vNICs connected to the Hyper-V Virtual Switch that is bound to the NIC team.

Deployment Guides:

Windows Server 2012 R2 RRAS Deployment Guide

Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM

Clustering Windows Server 2012 R2

Data Deduplication in Windows Storage Server 2012 R2

Deduplication in Windows Server: Data deduplication involves finding and removing duplication within data without compromising its fidelity or integrity. The goal is to store more data in less space by segmenting files into small variable-sized chunks (32–128 KB), identifying duplicate chunks, and maintaining a single copy of each chunk. Redundant copies of the chunk are replaced by a reference to the single copy. The chunks are compressed and then organized into special container files in the System Volume Information folder.

Enhanced Dedupe features in Windows Server 2012 R2

  • Data deduplication for remote storage of Virtual Desktop Infrastructure (VDI) workloads
  • Expand an optimized file on its original path.

When using the Data Deduplication feature for the first time or migrating from a previous version of Windows Server, be sure to consider the following related technologies and issues:

  • BranchCache
  • Failover Clusters
  • DFS Replication
  • FSRM quotas
  • Single Instance Storage or NAS Box

Install and Configure Data Deduplication using GUI

1. Open Server Manager. From the Add Roles and Features Wizard, under Server Roles, select File and Storage Services.

2. Select the File Services check box, and then select the Data Deduplication check box.

3. Click Next until the Install button is active, and then click Install.

4. From the Server Manager dashboard, right-click a data volume and choose Configure Data Deduplication. The Deduplication Settings page appears.

5. In the Data deduplication box, select the workload you want to host on the volume. Select General purpose file server for general data files or Virtual Desktop Infrastructure (VDI) server when configuring storage for running virtual machines.

6. Enter the number of days that should elapse from the date of file creation until files are deduplicated, enter the extensions of any file types that should not be deduplicated, and then click Add to browse to any folders with files that should not be deduplicated.

7. Click Apply to apply these settings and return to the Server Manager dashboard, or click the Set Deduplication Schedule button to continue to set up a schedule for deduplication.

Install and Configure Data Deduplication using Windows PowerShell

Start Windows PowerShell. Right-click the Windows PowerShell icon on the taskbar, and then click Run as Administrator.

# Install the Data Deduplication role service
Import-Module ServerManager
Add-WindowsFeature -Name FS-Data-Deduplication
Import-Module Deduplication

# Enable deduplication on volume E: for Hyper-V (VDI) or general file data
Enable-DedupVolume E: -UsageType HyperV
Enable-DedupVolume E: -UsageType Default

# Only optimize files older than 20 days
Set-DedupVolume E: -MinimumFileAgeDays 20

# Review volume deduplication settings and savings
Get-DedupVolume | fl

# Run an optimization job immediately and wait for it to finish
Start-DedupJob E: -Type Optimization -Wait

References:

Windows Server 2012 R2 NAS Box with Deduplication Capacity

Introduction to Windows Deduplication

Windows PowerShell Cmdlet for Deduplication

Multi-Site Hyper-v Cluster for High Availability and Disaster Recovery

For most SMB customers, the nodes of the cluster that reside at their primary data center provide access to the clustered service or application, with failover occurring only between clustered nodes. For an enterprise customer, however, failure of a business critical application is not an option. In this case, disaster recovery and high availability are bound together so that when both/all nodes at the primary site are lost, the nodes at the secondary site begin providing service automatically, or with minimal intervention.

The maximum availability of any services or application depends on how you design your platform that hosts these services. It is important to follow best practices in Compute, Network and Storage infrastructure to maximize uptime and maintain SLA.

The following diagram shows a multi-site failover cluster that uses four nodes and supports a clustered service or application.

 

[Figure: Multi-site failover cluster with four nodes]

The following rack diagram shows the identical compute, storage and networking infrastructure in both sites.

[Figure: Identical compute, storage and networking in both sites]

Physical Infrastructure

  • Primary and secondary sites are connected via 2x10Gbps dark fibre
  • Storage vendor specific replication software, such as EMC RecoverPoint
  • Storage must have redundant storage processors
  • There must be redundant switches for networking and storage
  • Each server must be connected to redundant switches with redundant NICs for each purpose
  • Each Hyper-v server must have a minimum of dual Host Bus Adapter (HBA) ports connected to redundant MDS switches
  • Each network must be connected with dual NICs from server to switches
  • Only iLO/DRAC will have a single connection
  • Each site must have redundant power supplies

Storage Requirements

Since I am talking about highly available and redundant systems design, this sort of design must consist of replicated or clustered storage presented to the multi-site Hyper-v cluster nodes. Replication of data between sites is very important in a multi-site cluster, and is accomplished in different ways by different hardware vendors. You will achieve higher performance through hardware (block level) replication than through software replication. You should contact your storage vendor to come up with a solution that provides replicated or clustered storage.

Examples are:

StarWind Software

Steeleye DataKeeper

EMC Recovery Point

HP Storage

Network Requirements:

A multi-site cluster running Windows Server 2008 can contain nodes that are in different subnets; however, as a best practice, you should configure the Hyper-v cluster in the same subnet. Your applications and services can reside in separate subnets. To avoid any conflict, you should use a dark fibre connection or an MPLS network between sites that allows VLANs.

Note that you must configure Hyper-v with static IPs. In a multi-site cluster, you might want to tune the "heartbeat" settings; see http://go.microsoft.com/fwlink/?LinkId=130588 for details.

Network Configuration Spreadsheet

Network | VLAN ID | NICs and Switch Port Speed
iLO/DRAC | 10 | 1Gbps
MGMT | 20 | 2x1Gbps
Live Migration | 30 | 2x10Gbps
Storage Migration | 40 | 2x10Gbps
Virtual Machine | 50, 60 | 4x10Gbps
iSCSI Network | 70 | 4x10Gbps
Heartbeat network | 80 | 2x1Gbps
Storage Replication (separate from Hyper-v) | 90 | Dark Fibre, 2x10Gbps

Note that iSCSI network is only required if you are using IP Storage instead of Fibre Channel (FC) storage.

Cluster Selection: Node and File Share Majority (For Cluster with Special Configurations)

Quorum Selection: Since you will be configuring a Node and File Share Majority cluster, you will have the option to place the quorum files on a shared folder. Where do you place this shared folder? Since we are talking about a fully redundant and highly available Hyper-v cluster, we have several options for placing the quorum shared folder.

Option1: Secondary Site

Option 2: Third Site

Visit http://technet.microsoft.com/en-us/library/cc770620%28WS.10%29.aspx for more details on quorum.

Storage Configuration:

Visit http://www.starwindsoftware.com/images/content/technical_papers/StarWind_HA_Hyper-V_6.0.pdf , http://docs.us.sios.com/ and http://us.sios.com/wp-content/uploads/sios-datakeeper-replication-multi-site-clustering-windows-servers-enterprise.pdf for clustered storage configuration for Hyper-v.

Hyper-v Cluster Configuration:

Visit http://microsoftguru.com.au/2013/06/04/windows-server-2012-failover-clustering-deep-dive/ for detailed cluster configuration guide.

Windows Server 2012: Failover Clustering Deep Dive

Physical Hardware Requirements – Up to 23 instances of SQL Server require the following resources:

  1. Processor: 2 processors per instance; 23 instances of SQL Server on a single cluster node would require 46 CPUs.
  2. Memory: 2 GB of memory per instance; 23 instances of SQL Server on a single cluster node would require 48 GB of RAM (2 GB of additional memory for the operating system).
  3. Network adapters: Microsoft certified network adapters; converged adapter, iSCSI adapter or HBA.
  4. Storage adapter: multipath I/O (MPIO) supported hardware.
  5. Storage: shared storage that is compatible with Windows Server 2008/2012. Storage requirements include the following:
  • Use basic disks, not dynamic disks.
  • Use NTFS partitions.
  • Use either master boot record (MBR) or GUID partition table (GPT).
  • For storage volumes larger than 2 terabytes, use GUID partition table (GPT).
  • For storage volumes smaller than 2 terabytes, use master boot record (MBR).
  • 4 disks per instance; 23 instances of SQL Server on a cluster disk array would require 92 disks.
  • Cluster storage must not be on a Windows Distributed File System (DFS).

Software Requirements

Download SQL Server 2012 installation media. Review SQL Server 2012 Release Notes. Install the following prerequisite software on each failover cluster node and then restart nodes once before running Setup.

  1. Windows PowerShell 2.0
  2. .NET Framework 3.5 SP1
  3. .NET Framework 4

Active Directory Requirements

  1. Cluster nodes must be member of same Active Directory Domain Services
  2. The servers in the cluster must use Domain Name System (DNS) for name resolution
  3. Use cluster naming convention for example Production Physical Node: DC1PPSQLNODE01 or Production virtual node DC2PVSQLNODE02

Unsupported Configuration

The following configurations are unsupported:

  1. Do not include these characters in the cluster name: <, >, ", ', &
  2. Never install SQL Server on a Domain Controller
  3. Never install cluster services on a Domain Controller or Forefront TMG 2010

Permission Requirements

The system admin or project engineer who will be performing the task of creating the cluster must be at least a member of the Domain Users security group, with permission to create domain computer objects in Active Directory, and must be a member of the administrators group on each clustered server.

Network settings and IP addresses requirements

You need at least two network cards in each cluster node: one network card for domain or client connectivity, and another network card for the heartbeat network.

The following are the unique requirements for MS cluster.

  1. Use identical network settings on each node, such as Speed, Duplex Mode, Flow Control, and Media Type.
  2. Ensure that each of these private networks uses a unique subnet.
  3. Ensure that each node's heartbeat network is in the same IP address range.
  4. Ensure that each network has a unique subnet, whether the nodes are placed in a single geographic location or in diverse locations.

The domain network should be configured with an IP address, subnet mask, default gateway and DNS record. The heartbeat network should be configured with only an IP address and subnet mask.

Additional Requirements

  1. Verify that antivirus software is not installed on your WSFC cluster.
  2. Ensure that all cluster nodes are configured identically, including COM+, disk drive letters, and users in the administrators group.
  3. Verify that you have cleared the system logs in all nodes and viewed the system logs again.
  4. Ensure that the logs are free of any error messages before continuing.
  5. Before you install or update a SQL Server failover cluster, disable all applications and services that might use SQL Server components during installation, but leave the disk resources online.
  6. SQL Server Setup automatically sets dependencies between the SQL Server cluster group and the disks that will be in the failover cluster. Do not set dependencies for disks before Setup.
  7. If you are using SMB File share as a storage option, the SQL Server Setup account must have Security Privilege on the file server. To do this, using the Local Security Policy console on the file server, add the SQL Server setup account to Manage auditing and security log rights.

Supported Operating Systems

  • Windows Server 2012 64-bit x64 Datacenter
  • Windows Server 2012 64-bit x64 Standard
  • Windows Server 2008 R2 SP1 64-bit x64 Datacenter
  • Windows Server 2008 R2 SP1 64-bit x64 Enterprise
  • Windows Server 2008 R2 SP1 64-bit x64 Standard
  • Windows Server 2008 R2 SP1 64-bit x64 Web

Understanding Quorum configuration

In a simple definition, quorum is a voting mechanism in a Microsoft cluster. Each node has one vote. In an MSCS cluster, this voting mechanism constantly monitors how many nodes are online and how many nodes are required to run the cluster smoothly. Each node contains a copy of the cluster information, and that information is also stored on the witness disk/directory. For MSCS, you have to choose a quorum among four possible quorum configurations.

  • Node Majority- Recommended for clusters with an odd number of nodes. 


  • Node and Disk Majority – Recommended for clusters with an even number of nodes. Can sustain (Total no of Node)/2 failures if a disk witness node is online. Can sustain ((Total no of Node)/2)-1 failures if a disk witness node is offline.


  • Node and File Share Majority- Clusters with special configurations. Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness.


  • No Majority: Disk Only (not recommended)

Why is quorum necessary? Network problems can interfere with communication between cluster nodes. This can cause serious issues. To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many "votes" constitute a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until the quorum exists again.
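Each of the four quorum modes maps to a Set-ClusterQuorum switch. A minimal sketch; the disk name and witness path are placeholders:

# Node Majority (odd number of nodes)
Set-ClusterQuorum -NodeMajority

# Node and Disk Majority (even number of nodes)
Set-ClusterQuorum -NodeAndDiskMajority 'Cluster Disk 1'

# Node and File Share Majority (special configurations such as multi-site)
Set-ClusterQuorum -NodeAndFileShareMajority '\\FileServer\WitnessShare'

# Disk Only (not recommended)
Set-ClusterQuorum -DiskOnly 'Cluster Disk 1'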

Understanding a multi-site cluster environment

Hardware: A multi-site cluster requires redundant hardware with correct capacity, storage functionality, replication between sites, and network characteristics such as network latency.

Number of nodes and corresponding quorum configuration: For a multi-site cluster, Microsoft recommends having an even number of nodes and, for the quorum configuration, using the Node and File Share Majority option, that is, including a file share witness as part of the configuration. The file share witness can be located at a third site, that is, a different location from the main site and secondary site, so that it is not lost if one of the other two sites has problems.

Network configuration—deciding between multi-subnets and a VLAN: configuring a multi-site cluster with different subnets is supported. However, when using multiple subnets, it is important to consider how clients will discover services or applications that have just failed over. The DNS servers must update one another with the new IP address before clients can discover the service or application that has failed over. If you use multiple subnets with a multi-site cluster, you must reduce the Time to Live (TTL) of the DNS records so that clients discover the failed-over address quickly.

Tuning of heartbeat settings: The heartbeat settings include the frequency at which the nodes send heartbeat signals to each other to indicate that they are still functioning, and the number of heartbeats that a node can miss before another node initiates failover and begins taking over the services and applications that had been running on the failed node. In a multi-site cluster, you might want to tune the “heartbeat” settings. You can tune these settings for heartbeat signals to account for differences in network latency caused by communication across subnets.

Replication of data: Replication of data between sites is very important in a multi-site cluster, and is accomplished in different ways by different hardware vendors. Therefore, the choice of the replication process requires careful consideration. You will find many options for replicating data, but before you make any decision, consult your storage vendor, server hardware vendor, and software vendors; the replication design will differ depending on the vendor, for example NetApp or EMC. Review the following considerations:

Choosing replication level (block, file system, or application level): The replication process can function through the hardware (at the block level), through the operating system (at the file system level), or through certain applications such as Microsoft Exchange Server (which has a feature called Cluster Continuous Replication or CCR). Work with your hardware and software vendors to choose a replication process that fits the requirements of your organization.

Configuring replication to avoid data corruption: The replication process must be configured so that any interruptions to the process will not result in data corruption, but instead will always provide a set of data that matches the data from the main site as it existed at some moment in time. In other words, the replication must always preserve the order of I/O operations that occurred at the main site. This is crucial, because very few applications can recover if the data is corrupted during replication.

Choosing between synchronous and asynchronous replication: The replication process can be synchronous, where no write operation finishes until the corresponding data is committed at the secondary site, or asynchronous, where the write operation can finish at the main site and then be replicated (as a background operation) to the secondary site.

Synchronous replication means that the replicated data is always up to date, but it slows application performance while each operation waits for replication. Synchronous replication is best for multi-site clusters that use high-bandwidth, low-latency connections. Typically, this means that a cluster using synchronous replication must not be stretched over a great distance; synchronous replication is generally performed within about 200 km, where reliable, robust WAN connectivity with sufficient bandwidth is available. For example, with a GigE or 10 GigE MPLS connection, synchronous replication may be appropriate, depending on the volume of data.

Asynchronous replication can help maximize application performance, but if failover to the secondary site is necessary, some of the most recent user operations might not be reflected in the data after failover, because operations that finished recently might not yet be replicated. Asynchronous replication is best where you want to stretch the cluster over greater geographical distances with no significant application performance impact; it is typically used when the distance between sites is more than 200 km or the WAN connectivity between sites is not robust.

Utilizing Windows Storage Server 2012 as shared storage

Windows® Storage Server 2012 is the Windows Server® 2012 platform of choice for network-attached storage (NAS) appliances offered by Microsoft partners.

Windows Storage Server 2012 enhances the traditional file serving capabilities and extends file-based storage for application workloads like Hyper-V, SQL Server, Exchange, and Internet Information Services (IIS). Windows Storage Server 2012 is available in the following editions.

Workgroup Edition

  • As many as 50 connections
  • Single processor socket
  • Up to 32 GB of memory
  • As many as 6 disks (no external SAS)

Standard Edition

  • No license limit on number of connections
  • Multiple processor sockets
  • No license limit on memory
  • No license limit on number of disks
  • De-duplication, virtualization (host plus 2 virtual machines for storage and disk management tools), and networking services (no domain controller)
  • Failover clustering for higher availability
  • Microsoft BranchCache for reduced WAN traffic

Presenting Storage from Windows Storage Server 2012 Standard

From the Server Manager, Click Add roles and features. On the Before you begin page, Click Next. On the Installation type page, Click Next.


On the Server Roles Selection page, Select iSCSI Target Server and iSCSI Target Storage Provider, Click Next


On the Feature page, Click Next. On the Confirm page, Click Install. Click Close.

On the Server Manager, Click File and Storage Services, Click iSCSI


From the Tasks button, Click New iSCSI Virtual Disk, Select the volume from which you want to present storage, Click Next


Type the Name of the Storage, Click Next


Type the size of the shared disk, Click Next


Select New iSCSI Target, Click Next


Type the name of the target, Click Next


Select IP Address as the type under Enter a value for selected type, Type the IP address of the cluster node, Click Ok. Repeat the process to add the IP addresses of all cluster nodes.


Type the CHAP information if you want CHAP authentication; note that the CHAP secret must be at least 12 characters. Click Next to continue.


Click Create to create a shared storage. Click Close once done.


Repeat these steps to create all shared drives of your preferred size, and create a 2 GB shared drive for the quorum disk.
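
The same virtual disks and targets can be created with the iSCSI Target PowerShell cmdlets; a minimal sketch, where the path, target name, and initiator IP addresses are placeholders:

# Create a 2 GB virtual disk for the quorum and map it to both cluster nodes
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\Quorum.vhd -SizeBytes 2GB
New-IscsiServerTarget -TargetName ClusterTarget -InitiatorIds "IPAddress:10.10.10.11","IPAddress:10.10.10.12"
Add-IscsiVirtualDiskTargetMapping -TargetName ClusterTarget -Path C:\iSCSIVirtualDisks\Quorum.vhd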


Deploying a Failover Cluster in Microsoft environment

Step 1: Connect the cluster servers to the networks and storage

1. Review the details about networks in Hardware Requirements for a Two-Node Failover Cluster and Network infrastructure and domain account requirements for a two-node failover cluster, earlier in this guide.

2. Connect and configure the networks that the servers in the cluster will use.

3. Follow the manufacturer’s instructions for physically connecting the servers to the storage. For this article, we are using the software iSCSI initiator. Open the iSCSI initiator from Server Manager>Tools>iSCSI Initiator. Type the IP address of the target, that is, the IP address of the Windows Storage Server 2012 server. Click Quick Connect, Click Done.
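
The equivalent connection can be scripted with the iSCSI initiator cmdlets; a minimal sketch, where 10.10.10.5 is a placeholder for the storage server's address:

# Register the target portal, then connect to every discovered target persistently
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.5
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true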


4. Open Computer Management, Click Disk Management, and initialize and format the disk using either the MBR or GPT partition style. Go to the second server, open Computer Management, Click Disk Management, and bring the disk online by right-clicking the disk and clicking Bring Online. Ensure that the disks (LUNs) that you want to use in the cluster are exposed to the servers that you will cluster (and only those servers).
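
If you prefer to script this step, the Storage module in Windows Server 2012 can bring the new disks online and format them; a minimal sketch that initializes every raw disk as GPT and formats it NTFS:

Get-Disk | Where-Object PartitionStyle -eq 'RAW' | ForEach-Object {
    Set-Disk -Number $_.Number -IsOffline $false           # bring the disk online
    Initialize-Disk -Number $_.Number -PartitionStyle GPT  # GPT supports volumes larger than 2 TB
    New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -Confirm:$false
}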


5. On one of the servers that you want to cluster, click Start, click Administrative Tools, click Computer Management, and then click Disk Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.) In Disk Management, confirm that the cluster disks are visible.


6. If you want to have a storage volume larger than 2 terabytes, and you are using the Windows interface to control the format of the disk, convert that disk to the partition style called GUID partition table (GPT). To do this, back up any data on the disk, delete all volumes on the disk and then, in Disk Management, right-click the disk (not a partition) and click Convert to GPT Disk.

7. Check the file system of any exposed volume or LUN; use the NTFS file format.

Step 2: Install the failover cluster feature

In this step, you install the failover cluster feature. The servers must be running Windows Server 2012.

1. Open Server Manager and click Add roles and features. Follow the wizard to the Features page.

2. In the Add Features Wizard, click Failover Clustering, and then click Install.


3. Follow the instructions in the wizard to complete the installation of the feature. When the wizard finishes, close it.

4. Repeat the process for each server that you want to include in the cluster; a PowerShell alternative is sketched below.
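
A minimal PowerShell sketch, assuming SERVER1 and SERVER2 are placeholder node names:

# Install the Failover Clustering feature and its management tools on both nodes
Invoke-Command -ComputerName SERVER1,SERVER2 -ScriptBlock {
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
}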

Step 3: Validate the cluster configuration

Before creating a cluster, I strongly recommend that you validate your configuration. Validation helps you confirm that the configuration of your servers, network, and storage meets a set of specific requirements for failover clusters.

1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.


2. Confirm that Failover Cluster Manager is selected and then, in the center pane under Management, click Validate a Configuration. Click Next.


3. On the Select Servers page, type the fully qualified domain names of the nodes you would like to add to the cluster, then click Add.


4. Follow the instructions in the wizard to specify the two servers and the tests, and then run the tests. To fully validate your configuration, run all tests before creating a cluster. Click Next


5. On the Confirmation page, Click Next


6. The Summary page appears after the tests run. While still on the Summary page, click View Report and read the test results. Click Finish. You will be prompted to create the cluster if you selected Create the cluster now using the validated nodes.



To view the results of the tests after you close the wizard, see

SystemRoot\Cluster\Reports\Validation Report date and time.html

where SystemRoot is the folder in which the operating system is installed (for example, C:\Windows).

7. As necessary, make changes in the configuration and rerun the tests.

Step 4: Create a Failover Cluster

1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.


2. Confirm that Failover Cluster Manager is selected and then, in the center pane under Management, click Create a cluster. If you chose to create the cluster at the end of validation, the Create Cluster wizard opens automatically. Follow the instructions in the wizard to specify the following, then Click Next:

  • The servers to include in the cluster.
  • The name of the cluster, that is, the virtual name of the cluster.
  • The IP address of the virtual node.


3. Verify the IP address and cluster node name and click Next


4. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report. Click Finish.


Step 5: Verify Cluster Configuration

On the Cluster Manager, Click Networks, right-click each network, Click Properties, and make sure Allow clients to connect through this network is unchecked for the heartbeat network. Verify the IP range. Click Ok.


On the Cluster Manager, Click Networks, right-click each network, Click Properties, and make sure Allow clients to connect through this network is checked for the domain network. Verify the IP range. Click Ok.


On the Cluster Manager, Click Storage, Click Disks, and verify that the quorum disk and shared disks are available. You can add more disks by simply clicking Add Disk in the Tasks pane.


An automated MSCS cluster configuration adds the quorum automatically. However, you can manually configure the desired cluster quorum by right-clicking the cluster>More Actions>Configure Cluster Quorum Settings.
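
The same change can be scripted; a minimal sketch that switches the cluster to Node and Disk Majority using the 2 GB disk created earlier (the resource name is a placeholder):

Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"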


Configuring a Hyper-v Cluster

In the previous steps you configured an MSCS cluster; to configure a Hyper-V cluster, all you need to do is install the Hyper-V role on each cluster node. From the Server Manager, Click Add roles and features, follow the screens, and install the Hyper-V role. A reboot is required to install the Hyper-V role. Once the role is installed on both nodes, continue with the configuration below.

Note that at this stage you add storage for virtual machines, and networks for Live Migration, a storage network if using iSCSI, a virtual machine network, and a management network. Detailed configuration is out of scope for this article, as I am writing about the MSCS cluster, not Hyper-V.
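
A minimal PowerShell sketch for the role installation, assuming NODE1 and NODE2 are placeholder node names:

# Install the Hyper-V role on both nodes; each node restarts automatically
Invoke-Command -ComputerName NODE1,NODE2 -ScriptBlock {
    Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart
}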


From the Cluster Manager, right-click Networks, Click Network for Live Migration, and select the appropriate network for live migration.


If you would like additional virtual machine fault tolerance such as Hyper-V Replica, right-click the cluster virtual node, Click Configure Role, Click Next.


From the Select Role page, Click Hyper-V Replica Broker, Click Next. Follow the screen.


From the Cluster Manager, right-click Roles, Click Virtual Machine, Click New Hard Disk to configure the virtual machine storage and the virtual machine configuration disk drive. Once done, from the Cluster Manager, right-click Roles, Click Virtual Machine, Click New Virtual Machine to create a virtual machine.
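
The same can be scripted with the Hyper-V and Failover Clustering modules; a minimal sketch, where the VM name, memory size, and storage path are placeholders:

# Create the virtual machine on cluster storage, then make it highly available
New-VM -Name VM1 -MemoryStartupBytes 2GB -NewVHDPath C:\ClusterStorage\Volume1\VM1\VM1.vhdx -NewVHDSizeBytes 60GB
Add-ClusterVirtualMachineRole -VirtualMachine VM1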


Backing up Clustered data, application or server

There are multiple methods for backing up information that is stored on Cluster Shared Volumes in a failover cluster running on:

  • Windows Server 2008 R2
  • Hyper-V Server 2008 R2
  • Windows Server 2012
  • Hyper-V Server 2012

Operating System Level backup

The backup application runs within a virtual machine in the same way that a backup application runs within a physical server. When there are multiple virtual machines being managed centrally, each virtual machine can run a backup “agent” (instead of running an individual backup application) that is controlled from the central management server. The backup agent backs up application data, files, folders, and the system state of the operating system.


Hyper-V Image Level backup

The backup captures all the information about multiple virtual machines that are configured in a failover cluster that is using Cluster Shared Volumes. The backup application runs through Hyper-V, which means that it must use the VSS Hyper-V writer. The backup application must also be compatible with Cluster Shared Volumes. The backup application backs up the virtual machines that are selected by the administrator, including all the VHD files for those virtual machines, in one operation. VM1_Data.VHDX, VM2_Data.VHDX, VM1_System.VHDX, and VM2_System.VHDX are stored on a backup disk or tape. VM1_System.VHDX and VM2_System.VHDX contain system files and page files; the system state, snapshots, and VM configuration are stored as well.


Publishing an Application or Service in a Failover Cluster Environment

1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.

2. Right Click on Roles, click Configure Role to publish a service or application


3. Select a cluster service or application, and then click Next.


4. Follow the instructions in the wizard to specify the following details:

  • A name for the clustered file server
  • IP address of virtual node


5. On the Select Storage page, Select the storage volume or volumes that the clustered file server should use. Click Next


6. On the Confirmation page, review and Click Next


7. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report.

8. To close the wizard, click Finish.


9. In the console tree, make sure Services and Applications is expanded, and then select the clustered file server that you just created.

10. After completing the wizard, confirm that the clustered file server comes online. If it does not, review the state of the networks and storage and correct any issues. Then right-click the new clustered application or service and click Bring this service or application online.

Perform a Failover Test

To perform a basic test of failover, right-click the clustered file server, click Move this service or application to another node, and click the available choice of node. When prompted, confirm your choice. You can observe the status changes in the center pane of the snap-in as the clustered file server instance is moved.

Configuring a New Failover Cluster by Using Windows PowerShell

The following common cluster management tasks can be performed with the listed Windows PowerShell commands.

Run validation tests on a list of servers.

Test-Cluster -Node server1,server2

Where server1 and server2 are servers that you want to validate.

Create a cluster using defaults for most settings.

New-Cluster -Name cluster1 -Node server1,server2

Where server1 and server2 are the servers that you want to include in the new cluster.

Configure a clustered file server using defaults for most settings.

Add-ClusterFileServerRole -Storage "Cluster Disk 4"

Where Cluster Disk 4 is the disk that the clustered file server will use.

Configure a clustered print server using defaults for most settings.

Add-ClusterPrintServerRole -Storage "Cluster Disk 5"

Where Cluster Disk 5 is the disk that the clustered print server will use.

Configure a clustered virtual machine using defaults for most settings.

Add-ClusterVirtualMachineRole -VirtualMachine VM1

Where VM1 is an existing virtual machine that you want to place in a cluster.

Add available disks.

Get-ClusterAvailableDisk | Add-ClusterDisk

Review the state of nodes.

Get-ClusterNode

Run validation tests on a new server.

Test-Cluster -Node newserver,node1,node2

Where newserver is the new server that you want to add to a cluster, and node1 and node2 are nodes in that cluster.

Prepare a node for maintenance.

Get-ClusterNode node2 | Get-ClusterGroup | Move-ClusterGroup

Where node2 is the node from which you want to move clustered services and applications.

Pause a node.

Suspend-ClusterNode node2

Where node2 is the node that you want to pause.

Resume a node.

Resume-ClusterNode node2

Where node2 is the node that you want to resume.

Stop the Cluster service on a node.

Stop-ClusterNode node2

Where node2 is the node on which you want to stop the Cluster service.

Start the Cluster service on a node.

Start-ClusterNode node2

Where node2 is the node on which you want to start the Cluster service.

Review the signature and other properties of a cluster disk.

Get-ClusterResource "Cluster Disk 2" | Get-ClusterParameter

Where Cluster Disk 2 is the disk for which you want to review the disk signature.

Move Available Storage to a particular node.

Move-ClusterGroup "Available Storage" -Node node1

Where node1 is the node that you want to move Available Storage to.

Turn on maintenance for a disk.

Suspend-ClusterResource "Cluster Disk 2"

Where Cluster Disk 2 is the disk in cluster storage for which you are turning on maintenance.

Turn off maintenance for a disk.

Resume-ClusterResource "Cluster Disk 2"

Where Cluster Disk 2 is the disk in cluster storage for which you are turning off maintenance.

Bring a clustered service or application online.

Start-ClusterGroup "Clustered Server 1"

Where Clustered Server 1 is a clustered server (such as a file server) that you want to bring online.

Take a clustered service or application offline.

Stop-ClusterGroup "Clustered Server 1"

Where Clustered Server 1 is a clustered server (such as a file server) that you want to take offline.

Move or Test a clustered service or application.

Move-ClusterGroup "Clustered Server 1"

Where Clustered Server 1 is a clustered server (such as a file server) that you want to test or move.

Migrating clustered services and applications to a new failover cluster

Use the following instructions to migrate clustered services and applications from your old cluster to your new cluster. After the Migrate a Cluster Wizard runs, it leaves most of the migrated resources offline, so that you can perform additional steps before you bring them online. If the new cluster uses old storage, plan how you will make LUNs or disks inaccessible to the old cluster and accessible to the new cluster (but do not make changes yet).

1. To open the failover cluster snap-in, click Administrative Tools, and then click Failover Cluster Manager.

2. In the console tree, if the cluster that you created is not displayed, right-click Failover Cluster Manager, click Manage a Cluster, and then select the cluster that you want to configure.

3. In the console tree, expand the cluster that you created to see the items underneath it.

4. If the clustered servers are connected to a network that is not to be used for cluster communications (for example, a network intended only for iSCSI), then under Networks, right-click that network, click Properties, and then click Do not allow cluster network communication on this network. Click OK.

5. In the console tree, select the cluster. Click Configure, click Migrate services and applications.

6. Read the first page of the Migrate a Cluster Wizard, and then click Next.

7. Specify the name or IP Address of the cluster or cluster node from which you want to migrate resource groups, and then click Next.

8. Click View Report. The wizard also provides a report after it finishes, which describes any additional steps that might be needed before you bring the migrated resource groups online.

9. Follow the instructions in the wizard to complete the following tasks:

    • Choose the resource group or groups that you want to migrate.
    • Specify whether the resource groups to be migrated will use new storage or the same storage that you used in the old cluster. If the resource groups will use new storage, you can specify the disk that each resource group should use. Note that if new storage is used, you must handle all copying or moving of data or folders—the wizard does not copy data from one shared storage location to another.
    • If you are migrating from a cluster running Windows Server 2003 that has Network Name resources with Kerberos protocol enabled, specify the account name and password for the Active Directory account that is used by the Cluster service on the old cluster.
10. After the wizard runs and the Summary page appears, click View Report.

11. When the wizard completes, most migrated resources will be offline. Leave them offline at this stage.

Completing the transition from the old cluster to the new cluster. You must perform the following steps to complete the transition to the new cluster running Windows Server 2012.

1. Prepare for clients to experience downtime, probably brief.

2. Take each resource group offline on the old cluster.

3. Complete the transition for the storage:

    • If the new cluster will use old storage, follow your plan for making LUNs or disks inaccessible to the old cluster and accessible to the new cluster.
    • If the new cluster will use new storage, copy the appropriate folders and data to the storage. As needed for disk access on the old cluster, bring individual disk resources online on that cluster. (Keep other resources offline, to ensure that clients cannot change data on the disks in storage.) Also as needed, on the new cluster, use Disk Management to confirm that the appropriate LUNs or disks are visible to the new cluster and not visible to any other servers.

4. If the new cluster uses mount points, adjust the mount points as needed, and make each disk resource that uses a mount point dependent on the resource of the disk that hosts the mount point.

5. Bring the migrated services or applications online on the new cluster. To perform a basic test of failover on the new cluster, expand Services and Applications, and then click a migrated service or application that you want to test.

6. To perform a basic test of failover for the migrated service or application, under Actions (on the right), click Move this service or application to another node, and then click an available choice of node. When prompted, confirm your choice. You can observe the status changes in the center pane of the snap-in as the clustered service or application is moved.

7. If there are any issues with failover, review the following:

    • View events in Failover Cluster Manager. To do this, in the console tree, right-click Cluster Events, and then click Query. In the Cluster Events Filter dialog box, select the criteria for the events that you want to display, or to return to the default criteria, click the Reset button. Click OK. To sort events, click a heading, for example, Level or Date and Time.
    • Confirm that necessary services, applications, or server roles are installed on all nodes. Confirm that services or applications are compatible with Windows Server 2012 and run as expected.
    • If you used old storage for the new cluster, rerun the Validate a Cluster Configuration Wizard to confirm the validation results for all LUNs or disks in the storage.
    • Review migrated resource settings and dependencies.
    • If you migrated one or more Network Name resources with Kerberos protocol enabled, confirm that the following permissions change was made in Active Directory Users and Computers on a domain controller. In the computer accounts (computer objects) of your Kerberos protocol-enabled Network Name resources, Full Control must be assigned to the computer account for the failover cluster.

Migrating Cluster Resource with new Mount Point

When you are working with new storage for your cluster migration, you have some flexibility in the order in which you complete the tasks. The tasks that you must complete include creating the mount points, running the Migrate a Cluster Wizard, copying the data to the new storage, and confirming the disk letters and mount points for the new storage. After completing the other tasks, configure the disk resource dependencies in Failover Cluster Manager.

A useful way to keep track of disks in the new storage is to give them labels that indicate your intended mount point configuration. For example, in the new storage, when you are mounting a new disk in a folder called Mount1-1 on another disk, you can also label the mounted disk as Mount1-1. (This assumes that the label Mount1-1 is not already in use in the old storage.) Then when you run the Migrate a Cluster Wizard and you need to specify that disk for a particular migrated resource, you can look at the list and select the disk labeled Mount1-1. Then you can return to Failover Cluster Manager to configure the disk resource for Mount1-1 so that it is dependent on the appropriate resource, for example, the resource for disk F. Similarly, you would configure the disk resources for all other disks mounted on disk F so that they depended on the disk resource for disk F.
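
The dependency itself can be set with PowerShell; a minimal sketch using the hypothetical resource names from this example:

# Make the mounted disk's resource depend on the disk that hosts the mount point (disk F)
Set-ClusterResourceDependency -Resource "Cluster Disk Mount1-1" -Dependency "[Cluster Disk F]"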

Migrating DHCP to a Cluster Running Windows Server 2012

A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.

This guide describes the steps that are necessary when migrating a clustered DHCP server to a cluster running Windows Server 2012, beyond the standard steps required for migrating clustered services and applications in general. The guide indicates when to use the Migrate a Cluster Wizard in the migration, but does not describe the wizard in detail.

Step 1: Review requirements and create a cluster running Windows Server 2012

Before beginning the migration described in this guide, review the requirements for a cluster running Windows Server 2012, install the failover clustering feature on servers running Windows Server 2012, and create a new cluster.

Step 2: On the old cluster, adjust registry settings and permissions before migration

To prepare for migration, you must make changes to registry settings and permissions on each node of the old cluster.

1. Confirm that you have a current backup of the old cluster, one that includes the configuration information for the clustered DHCP server (also called the DHCP resource group).

2. Confirm that the clustered DHCP server is online on the old cluster. It must be online while you complete the remainder of this procedure.

3. On a node of the old cluster, open a command prompt as an administrator.

4. Type: regedit. Navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DHCPServer\Parameters

5. Choose the option that applies to your cluster: If the old cluster is running Windows Server 2008, skip to step 7. If the old cluster is running Windows Server 2003 or Windows Server 2003 R2:

    • Right-click Parameters, click New, click String Value, and for the name of the new value, type: ServiceMain
    • Right-click the new value (ServiceMain), click Modify, and for the value data, type: ServiceEntry
    • Right-click Parameters again, click New, click Expandable String Value, and for the name of the new value, type: ServiceDll
    • Right-click the new value (ServiceDll), click Modify, and for the value data, type: %systemroot%\system32\dhcpssvc.dll

6. Right-click Parameters, and then click Permissions.

7. Click Add. Locate the appropriate account and assign permissions:

    • On Windows Server 2008: Click Locations, select the local server, and then click OK. Under Enter the object names to select, type NT Service\DHCPServer. Click OK. Select the DHCPServer account and then select the check box for Full Control.
    • On Windows Server 2003 or Windows Server 2003 R2: Click Locations, ensure that the domain name is selected, and then click OK. Under Enter the object names to select, type Everyone, and then click OK (and confirm your choice if prompted). Under Group or user names, select Everyone and then select the check box for Full Control.

8. Repeat the process on the other node or nodes of the old cluster.

Step 3: On a node in the old cluster, prepare for export, and then export the DHCP database to a file

As part of migrating a clustered DHCP server, on the old cluster, you must export the DHCP database to a file. This requires preparatory steps that prevent the cluster from restarting the clustered DHCP resource during the export. The following procedure describes the process. On the old cluster, start the clustering snap-in and configure the restart setting for the clustered DHCP server (DHCP resource group):

1. Click Start, click Administrative Tools, and then click Failover Cluster Management. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

2. If the console tree is collapsed, expand the tree under the cluster that you are migrating settings from. Expand Services and Applications and then, in the console tree, click the clustered DHCP server.

3. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, do not restart.

This step prevents the resource from restarting during the export of the DHCP database, which would stop the export.

1. On the node of the old cluster that currently owns the clustered DHCP server, confirm that the clustered DHCP server is running. Then open a command prompt window as an administrator.

2. Type: netsh dhcp server export <exportfile> all

Where <exportfile> is the name of the file to which you want to export the DHCP database.

3. After the export is complete, in the clustering interface (Cluster Administrator or Failover Cluster Management), right-click the clustered DHCP server (DHCP resource group) and then click either Take Offline or Take this service or application offline. If the command is unavailable, in the center pane, right-click each online resource and click either Take Offline or Take this resource offline. If prompted for confirmation, confirm your choice.

4. If the old cluster is running Windows Server 2003 or Windows Server 2003 R2, obtain the account name and password for the Cluster service account (the Active Directory account used by the Cluster service on the old cluster). Alternatively, you can obtain the name and password of another account that has access permissions for the Active Directory computer accounts (objects) that the old cluster uses. For a migration from a cluster running Windows Server 2003 or Windows Server 2003 R2, you will need this information for the next procedure.

Step 4: On the new cluster, configure a network for DHCP clients and run the Migrate a Cluster Wizard

Microsoft recommends that you make the network settings on the new cluster as similar as possible to the settings on the old cluster. In any case, on the new cluster, you must have at least one network that DHCP clients can use to communicate with the cluster. The following procedure describes the cluster setting needed on the client network, and indicates when to run the Migrate a Cluster Wizard.

1. On the new cluster (running Windows Server 2012), click Server Manager, click Tools, and then click Failover Cluster Manager.

2. If the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.

3. If the console tree is collapsed, expand the tree under the cluster.

4. Expand Networks, right-click the network that clients will use to connect to the DHCP server, and then click Properties.

5. Make sure that Allow cluster network communication on this network and Allow clients to connect through this network are selected.

6. To prepare for the migration process, find and take note of the drive letter used for the DHCP database on the old cluster. Ensure that the same drive letter exists on the new cluster. (This drive letter is one of the settings that the Migrate a Cluster Wizard will migrate.)

7. In Failover Cluster Manager, in the console tree, select the new cluster, and then under Configure, click Migrate services and applications.

8. Use the Migrate a Cluster Wizard to migrate the DHCP resource group from the old cluster to the new cluster. If you are using new storage on the new cluster, during the migration, be sure to specify the disk that has the same drive letter on the new cluster as was used for the DHCP database on the old cluster. The wizard will migrate resources and settings, but not the DHCP database.

Step 5: On the new cluster, import the DHCP database, bring the clustered DHCP server online, and adjust permissions

To complete the migration process, import the DHCP database that you exported to a file in Step 3. Then you can bring the clustered DHCP server online and adjust settings that were changed temporarily during the migration process.

1. If you are reusing the old cluster storage for the new cluster, confirm that you have stored the exported DHCP database file in a safe location. Then be sure to delete all the DHCP files other than the exported DHCP database file from the old storage. This includes the DHCP database, log, and backup files.

2. On the new cluster, in Failover Cluster Manager, expand Services and Applications, right-click the clustered DHCP server, and then click Bring this service or application online. The DHCP service starts with an empty database.

3. Click the clustered DHCP server.

4. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, do not restart. This step prevents the resource from restarting during the import of the DHCP database, which would stop the import.

5. In the new cluster, on the node that currently owns the migrated DHCP server, view the disk used by the migrated DHCP server, and make sure that you have copied the exported DHCP database file to this disk.

6. In the new cluster, on the node that currently owns the migrated DHCP server, open a command prompt as an administrator. Change to the disk used by the migrated DHCP server.

7. Type: netsh dhcp server import <exportfile>

Where <exportfile> is the filename of the file to which you exported the DHCP database.

8. If the migrated DHCP server is not online, in Failover Cluster Manager, under Services and Applications, right-click the migrated DHCP server, and then click Bring this service or application online.

9. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, attempt restart on current node.

This returns the resource to the expected setting, instead of the “do not restart” setting that was temporarily needed during the import of the DHCP database.

10. If the cluster was migrated from Windows Server 2003 or Windows Server 2003 R2, after the clustered DHCP server is online on the new cluster, make the following changes to permissions in the registry:

  • On the node that owns the clustered DHCP server, open a command prompt as an administrator.
  • Type: regedit Navigate to:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DHCPServer\Parameters
  • Right-click Parameters, and then click Permissions.
  • Click Add, click Locations, and then select the local server.
  • Under Enter the object names to select, type NT Service\DHCPServer and then click OK. Select the DHCPServer account and then select the check box for Full Control. Then click Apply.
  • Select the Everyone account (created through steps earlier in this topic) and then click Remove. This removes the account from the list of those that are assigned permissions.

11. Perform the preceding steps only after DHCP is online on the new cluster. After you complete these steps, you can test the clustered DHCP server and begin to provide DHCP services to clients.

Configuring a Multisite SQL Server Failover Cluster

To install or upgrade a SQL Server failover cluster, you must run the Setup program on each node of the failover cluster. To add a node to an existing SQL Server failover cluster, you must run SQL Server Setup on the node that is to be added to the SQL Server failover cluster instance. Do not run Setup on the active node to manage the other nodes. The following options are available for SQL Server failover cluster installation:

Option1: Integration Installation with Add Node

Create and configure a single-node SQL Server failover cluster instance. When you configure the node successfully, you have a fully functional failover cluster instance. At this point, it does not have high availability because there is only one node in the failover cluster. On each node to be added to the SQL Server failover cluster, run Setup with Add Node functionality to add that node.

Option 2: Advanced/Enterprise Installation

After you run Prepare Failover Cluster on one node, Setup creates the ConfigurationFile.ini file that lists all the settings that you specified. On the additional nodes to be prepared, instead of following these steps manually, you can supply the autogenerated ConfigurationFile.ini file from the first node as an input to the Setup command line. This step prepares the nodes to be clustered, but there is no operational instance of SQL Server at the end of this step.
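
For reference, the prepare and complete phases can also be driven from the command line with the generated file; a minimal sketch, where the file name is a placeholder:

Setup.exe /q /ACTION=PrepareFailoverCluster /ConfigurationFile=ConfigurationFile.ini

Setup.exe /q /ACTION=CompleteFailoverCluster /ConfigurationFile=ConfigurationFile.ini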


After the nodes are prepared for clustering, run Setup on one of the prepared nodes. This step configures and finishes the failover cluster instance. At the end of this step, you will have an operational SQL Server failover cluster instance and all the nodes that were prepared previously for that instance will be the possible owners of the newly-created SQL Server failover cluster.

Follow the procedure to install a new SQL Server failover cluster using Integrated Simple Cluster Install 

  1. Insert the SQL Server installation media, and from the root folder, double-click Setup.exe. To install from a network share, browse to the root folder on the share, and then double-click Setup.exe.
  2. The Installation Wizard starts the SQL Server Installation Center. To create a new cluster installation of SQL Server, click New SQL Server failover cluster installation on the installation page


  3. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK.


  4. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report. To continue, click Next.
  5. On the Setup Support Files page, click Install to install the Setup support files.
  6. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue.


  7. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.
  8. On the Product key page, indicate whether you are installing a free edition of SQL Server, or whether you have a PID key for a production version of the product.
  9. On the License Terms page, read the license agreement, and then select the check box to accept the license terms and conditions.


  10. To help improve SQL Server, you can also enable the feature usage option and send reports to Microsoft. Click Next to continue.


  11. On the Feature Selection page, select the components for your installation. You can select any combination of check boxes, but only the Database Engine and Analysis Services support failover clustering. Other selected components will run as a stand-alone feature without failover capability on the current node that you are running Setup on.


  12. The prerequisites for the selected features are displayed in the right-hand pane. SQL Server Setup will install the prerequisites that are not already installed during the installation step described later in this procedure. SQL Server Setup then runs one more set of rules that are based on the features you selected to validate your configuration.


  13. On the Instance Configuration page, specify whether to install a default or a named instance. SQL Server Network Name: specify a network name for the new SQL Server failover cluster; this is the name of the virtual node of the cluster and is used to identify your failover cluster on the network. Instance ID: by default, the instance name is used as the instance ID, which identifies installation directories and registry keys for your instance of SQL Server; this is the case for default instances and named instances. For a default instance, the instance name and instance ID would be MSSQLSERVER. To use a nondefault instance ID, select the Instance ID box and provide a value. Instance root directory: by default, the instance root directory is C:\Program Files\Microsoft SQL Server. To specify a nondefault root directory, use the field provided, or click the ellipsis button to locate an installation folder.


  14. Detected SQL Server instances and features on this computer: the grid shows instances of SQL Server that are on the computer where Setup is running. If a default instance is already installed on the computer, you must install a named instance of SQL Server. Click Next to continue.


  15. The Disk Space Requirements page calculates the required disk space for the features that you specify, and compares requirements to the available disk space on the computer where Setup is running. Use the Cluster Resource Group page to specify the cluster resource group name where SQL Server virtual server resources will be located. To specify the SQL Server cluster resource group name, you have two options:
  • Use the drop-down box to specify an existing group to use.
  • Type the name of a new group to create. Be aware that the name “Available storage” is not a valid group name.


  16. On the Cluster Disk Selection page, select the shared cluster disk resource for your SQL Server failover cluster. More than one disk can be specified. Click Next to continue.


  17. On the Cluster Network Configuration page, specify the IP type and IP address for your failover cluster instance. Click Next to continue. Note that this IP address will resolve to the SQL Server Network Name (the virtual node) that you specified in the earlier step.


  18. On the Server Configuration — Service Accounts page, specify login accounts for SQL Server services. The actual services that are configured on this page depend on the features that you selected to install.


  19. Use this page to specify the Cluster Security Policy; use the default setting. Click Next to continue. The workflow for the rest of this topic depends on the features that you have specified for your installation; you might not see all the pages, depending on your selections (Database Engine, Analysis Services, Reporting Services).
  20. You can assign the same login account to all SQL Server services, or you can configure each service account individually. The startup type is set to manual for all cluster-aware services, including Full-Text Search and SQL Server Agent, and cannot be changed during installation. Microsoft recommends that you configure service accounts individually so that each SQL Server service is granted the minimum permissions it needs to complete its tasks. To specify the same logon account for all service accounts in this instance of SQL Server, provide credentials in the fields at the bottom of the page. When you are finished specifying login information for SQL Server services, click Next.
  • On the Server Configuration — Collation tab, use the default collations for the Database Engine and Analysis Services.
  • Use the Database Engine Configuration — Account Provisioning page to select Windows Authentication or Mixed Mode Authentication for your instance of SQL Server, and to specify at least one SQL Server administrator.


  21. Use the Database Engine Configuration — Data Directories page to specify nondefault installation directories. To install to the default directories, click Next. Use the Database Engine Configuration — FILESTREAM page to enable FILESTREAM for your instance of SQL Server. Click Next to continue.


  22. Use the Analysis Services Configuration — Account Provisioning page to specify users or accounts that will have administrator permissions for Analysis Services. You must specify at least one system administrator for Analysis Services. To add the account under which SQL Server Setup is running, click Add Current User. To add or remove accounts from the list of system administrators, click Add or Remove, and then edit the list of users, groups, or computers that will have administrator privileges for Analysis Services. When you are finished editing the list, click OK. Verify the list of administrators in the configuration dialog box. When the list is complete, click Next.


  23. Use the Analysis Services Configuration — Data Directories page to specify nondefault installation directories. To install to the default directories, click Next.


  24. Use the Reporting Services Configuration page to specify the kind of Reporting Services installation to create. For a failover cluster installation, the option is set to Unconfigured Reporting Services installation; you must configure Reporting Services after you complete the installation. However, there is no harm in selecting the Install and configure option if you are not a SQL expert.


  25. On the Error Reporting page, specify the information that you want to send to Microsoft to help improve SQL Server. By default, the error reporting options are disabled.


  26. The System Configuration Checker runs one more set of rules to validate your configuration with the SQL Server features that you have specified.


  27. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features, followed by the feature installation.


  28. During installation, the Installation Progress page provides status so that you can monitor installation progress as Setup continues. After installation, the Complete page provides a link to the summary log file for the installation and other important notes. To complete the SQL Server installation process, click Close.
  29. If you are instructed to restart the computer, do so now. It is important to read the message from the Installation Wizard when you have finished with Setup.
  30. To add nodes to the single-node failover cluster you just created, run Setup on each additional node and follow the steps for the Add Node operation.

SQL Advanced/Enterprise Failover Cluster Install

Step 1: Prepare Environment

  1. Insert the SQL Server installation media, and from the root folder, double-click Setup.exe.

  2. Windows Installer 4.5 is required, and may be installed by the Installation Wizard. If you are prompted to restart your computer, restart and then start SQL Server Setup again.

  3. After the prerequisites are installed, the Installation Wizard starts the SQL Server Installation Center. To prepare the node for clustering, move to the Advanced page and then click Advanced cluster preparation

  4. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

  5. On the Setup Support Files page click Install to install the Setup support files.

  6. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

  7. On the Language Selection page, you can specify the language; to continue, click Next

  8. On the Product Key page, enter the PID product key, Click Next

  9. On the License Terms page, accept the license terms and Click Next to continue.

  10. On the Feature Selection page, select the components for your installation, as you did for the simple installation described earlier.

  11. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.

  12. To complete the SQL Server installation process, click Close.

  13. If you are instructed to restart the computer, do so now.

  14. Repeat the previous steps to prepare the other nodes for the failover cluster. You can also use the autogenerated configuration file to run prepare on the other nodes; a ConfigurationFile.ini is generated at C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20130603_014118\ConfigurationFile.ini.


Step 2: Install SQL Server

  1. After preparing all the nodes as described in the prepare step, run Setup on one of the prepared nodes, preferably the one that owns the shared disk. On the Advanced page of the SQL Server Installation Center, click Advanced cluster completion.

  2. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

  3. On the Setup Support Files page, click Install to install the Setup support files.

  4. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

  5. On the Language Selection page, you can specify the language; to continue, click Next.

  6. Use the Cluster node configuration page to select the instance name prepared for clustering

  7. Use the Cluster Resource Group page to specify the cluster resource group name where SQL Server virtual server resources will be located. On the Cluster Disk Selection page, select the shared cluster disk resource for your SQL Server failover cluster. Click Next to continue

  8. On the Cluster Network Configuration page, specify the network resources for your failover cluster instance. Click Next to continue.

  9. Now follow the simple installation steps to configure the Database Engine, Reporting Services, Analysis Services, and Integration Services.

  10. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.

  11. Once installation is completed, click Close.

Follow the procedure if you would like to remove a node from an existing SQL Server failover cluster

  1. Insert the SQL Server installation media. From the root folder, double-click setup.exe. To install from a network share, navigate to the root folder on the share, and then double-click Setup.exe.

  2. The Installation Wizard launches the SQL Server Installation Center. To remove a node from an existing failover cluster instance, click Maintenance in the left-hand pane, and then select Remove node from a SQL Server failover cluster.

  3. The System Configuration Checker will run a discovery operation on your computer. To continue, click OK.

  4. After you click install on the Setup Support Files page, the System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue.

  5. On the Cluster Node Configuration page, use the drop-down box to specify the name of the SQL Server failover cluster instance to be modified during this Setup operation. The node to be removed is listed in the Name of this node field.

  6. The Ready to Remove Node page displays a tree view of options that were specified during Setup. To continue, click Remove.

  7. During the remove operation, the Remove Node Progress page provides status.

  8. The Complete page provides a link to the summary log file for the remove node operation and other important notes. To complete the SQL Server remove node, click Close.

Using Command Line Installation of SQL Server

1. To install a new, stand-alone instance with the SQL Server Database Engine, Replication, and Full-Text Search component, run the following command

Setup.exe /q /ACTION=Install /FEATURES=SQL /INSTANCENAME=MSSQLSERVER /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<password>"

2. To prepare a new, stand-alone instance with the SQL Server Database Engine, Replication, and Full-Text Search components, and Reporting Services, run the following command

Setup.exe /q /ACTION=PrepareImage /FEATURES=SQL,RS /INSTANCEID=<MYINST> /IACCEPTSQLSERVERLICENSETERMS

3. To complete a prepared, stand-alone instance that includes SQL Server Database Engine, Replication, and Full-Text Search components run the following command

Setup.exe /q /ACTION=CompleteImage /INSTANCENAME=MYNEWINST /INSTANCEID=<MYINST> /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<password>"

4. To upgrade an existing instance or failover cluster node from SQL Server 2005, SQL Server 2008, or SQL Server 2008 R2, run the following command

Setup.exe /q /ACTION=upgrade /INSTANCEID=<INSTANCEID> /INSTANCENAME=MSSQLSERVER /RSUPGRADEDATABASEACCOUNT="<Provide a SQL DB Account>" /IACCEPTSQLSERVERLICENSETERMS

5. To upgrade an existing instance of SQL Server 2012 to a different edition of SQL Server 2012, run the following command

Setup.exe /q /ACTION=editionupgrade /INSTANCENAME=MSSQLSERVER /PID=<PID key for new edition> /IACCEPTSQLSERVERLICENSETERMS

6. To install SQL Server using a configuration file, run the following command

Setup.exe /ConfigurationFile=MyConfigurationFile.INI

7. To install SQL Server using a configuration file and provide the service account passwords, run the following command

Setup.exe /SQLSVCPASSWORD="typepassword" /AGTSVCPASSWORD="typepassword" /ASSVCPASSWORD="typepassword" /ISSVCPASSWORD="typepassword" /RSSVCPASSWORD="typepassword" /ConfigurationFile=MyConfigurationFile.INI

8. To uninstall an existing instance of SQL Server, run the following command

Setup.exe /Action=Uninstall /FEATURES=SQL,AS,RS,IS,Tools /INSTANCENAME=MSSQLSERVER

Reference and Further Reading

Windows Storage Server 2012

Virtualizing Microsoft SQL Server

The Perfect Combination: SQL Server 2012, Windows Server 2012 and System Center 2012

EMC Storage Replication

Download Hyper-v Server 2012

Download Windows Server 2012

Windows Server 2012: Failover Clustering Deep Dive

Physical Hardware Requirements. Up to 23 instances of SQL Server require the following resources:

  1. Processor: 2 processors per instance, so 23 instances of SQL Server on a single cluster node would require 46 CPUs.
  2. Memory: 2 GB of memory per instance, so 23 instances of SQL Server on a single cluster node would require 48 GB of RAM (2 GB of additional memory for the operating system).
  3. Network adapters: Microsoft-certified network adapters; converged adapters, iSCSI adapters, or HBAs.
  4. Storage adapter: multipath I/O (MPIO) supported hardware.
  5. Storage: shared storage that is compatible with Windows Server 2008/2012. Storage requirements include the following:
    • Use basic disks, not dynamic disks.
    • Use NTFS partitions.
    • Use either master boot record (MBR) or GUID partition table (GPT): GPT for storage volumes larger than 2 terabytes, MBR for storage volumes smaller than 2 terabytes.
    • Disks: 4 disks per instance, so 23 instances of SQL Server as a cluster disk array would require 92 disks.
    • Cluster storage must not be on a Windows Distributed File System (DFS) share.

        Software Requirements

        Download SQL Server 2012 installation media. Review SQL Server 2012 Release Notes. Install the following prerequisite software on each failover cluster node and then restart nodes once before running Setup.

        • Windows PowerShell 2.0
        • .NET Framework 3.5 SP1
        • .NET Framework 4

          Active Directory Requirements

• Cluster nodes must be members of the same Active Directory Domain Services domain
• The servers in the cluster must use Domain Name System (DNS) for name resolution
• Use a cluster naming convention, for example, production physical node DC1PPSQLNODE01 or production virtual node DC2PVSQLNODE02
Unsupported Configurations

The following configurations are unsupported:

1. Do not include the characters <, >, ", ', or & in the cluster name
2. Never install SQL Server on a domain controller
3. Never install cluster services on a domain controller or on Forefront TMG 2010

                  Permission Requirements

The system administrator or project engineer who will create the cluster must be at least a member of the Domain Users security group, with permission to create domain computer objects in Active Directory, and must be a member of the local Administrators group on each clustered server.

                  Network settings and IP addresses requirements

You need at least two network cards in each cluster node: one network card for domain or client connectivity and another network card for the heartbeat network, as shown below.


The following are the unique network requirements for a Microsoft cluster.

                  1. Use identical network settings on each node such as Speed, Duplex Mode, Flow Control, and Media Type.

                  2. Ensure that each of these private networks uses a unique subnet.

3. Ensure that the heartbeat network on each node uses the same IP address range.

4. Ensure that each network uses a unique subnet range, whether the nodes are placed in a single geographic location or in diverse locations.

The domain network should be configured with an IP address, subnet mask, default gateway, and DNS record.


The heartbeat network should be configured with only an IP address and subnet mask.


                      Additional Requirements

                      1. Verify that antivirus software is not installed on your WSFC cluster.

                      2. Ensure that all cluster nodes are configured identically, including COM+, disk drive letters, and users in the administrators group.

                      3. Verify that you have cleared the system logs in all nodes and viewed the system logs again.

                      4. Ensure that the logs are free of any error messages before continuing.

                      5. Before you install or update a SQL Server failover cluster, disable all applications and services that might use SQL Server components during installation, but leave the disk resources online.

                      6. SQL Server Setup automatically sets dependencies between the SQL Server cluster group and the disks that will be in the failover cluster. Do not set dependencies for disks before Setup.

7. If you are using an SMB file share as a storage option, the SQL Server Setup account must have the Security Privilege on the file server. To grant it, use the Local Security Policy console on the file server to add the SQL Server Setup account to the Manage auditing and security log right.

Supported Operating Systems

                          • Windows Server 2012 64-bit x64 Datacenter

                          • Windows Server 2012 64-bit x64 Standard

                          • Windows Server 2008 R2 SP1 64-bit x64 Datacenter

                          • Windows Server 2008 R2 SP1 64-bit x64 Enterprise

                          • Windows Server 2008 R2 SP1 64-bit x64 Standard

                          • Windows Server 2008 R2 SP1 64-bit x64 Web

                              Understanding Quorum configuration

In simple terms, quorum is the voting mechanism in a Microsoft cluster. Each node has one vote. In an MSCS cluster, this voting mechanism constantly monitors how many nodes are online and how many nodes are required to run the cluster smoothly. Each node contains a copy of the cluster configuration, and that information is also stored on the witness disk or directory. For an MSCS cluster, you have to choose among four possible quorum configurations.

• Node Majority: recommended for clusters with an odd number of nodes. Can sustain failures of half the nodes (rounding up) minus one.


• Node and Disk Majority: recommended for clusters with an even number of nodes. Can sustain failures of half the nodes if the disk witness remains online; can sustain failures of half the nodes minus one if the disk witness goes offline or fails.


• Node and File Share Majority: for clusters with special configurations. Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness.


                                          • No Majority: Disk Only (not recommended)

Why is quorum necessary? Network problems can interfere with communication between cluster nodes. This can cause serious issues. To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many “votes” constitute a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until the quorum exists again.
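
The quorum model can also be inspected and changed with the failover clustering PowerShell cmdlets. A minimal sketch, assuming the FailoverClusters module is available; \\WITNESS01\FSW is an example witness share:

# Show the current quorum configuration
Get-ClusterQuorum

# Example: switch to Node and File Share Majority using a witness share at a third site
Set-ClusterQuorum -NodeAndFileShareMajority \\WITNESS01\FSW

# Example: switch to Node Majority (odd number of nodes)
Set-ClusterQuorum -NodeMajority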

                                              Understanding a multi-site cluster environment

                                              Hardware: A multi-site cluster requires redundant hardware with correct capacity, storage functionality, replication between sites, and network characteristics such as network latency.

Number of nodes and corresponding quorum configuration: For a multi-site cluster, Microsoft recommends having an even number of nodes and, for the quorum configuration, using the Node and File Share Majority option, that is, including a file share witness as part of the configuration. The file share witness can be located at a third site, that is, a different location from the main site and secondary site, so that it is not lost if one of the other two sites has problems.

Network configuration (deciding between multiple subnets and a VLAN): Configuring a multi-site cluster with different subnets is supported. However, when using multiple subnets, it is important to consider how clients will discover services or applications that have just failed over; the DNS servers must update one another with the new IP address before clients can discover the moved service or application. For that reason, in a multi-subnet configuration you should consider reducing the Time to Live (TTL) of the clustered name's DNS record so that clients re-resolve it sooner.
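
One practical way to do this is to lower the HostRecordTTL parameter on the clustered Network Name resource. A hedged sketch; the resource name "Cluster Name" and the 300-second TTL are example values:

# Lower the DNS TTL (in seconds) that the Network Name resource registers
Get-ClusterResource "Cluster Name" | Set-ClusterParameter HostRecordTTL 300

# Cycle the resource so the new TTL takes effect
Stop-ClusterResource "Cluster Name"
Start-ClusterResource "Cluster Name"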

Tuning of heartbeat settings: The heartbeat settings include the frequency at which the nodes send heartbeat signals to each other to indicate that they are still functioning, and the number of heartbeats that a node can miss before another node initiates failover and begins taking over the services and applications that had been running on the failed node. In a multi-site cluster, you might want to tune these settings to account for the additional network latency of communication across subnets.
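
These heartbeat settings are exposed as cluster common properties. A minimal sketch; the values shown are examples, not recommendations for every environment:

# Delay (ms) between heartbeats and the number of missed heartbeats tolerated,
# for nodes on the same subnet and for nodes across subnets
$cluster = Get-Cluster
$cluster.SameSubnetDelay = 1000
$cluster.SameSubnetThreshold = 5
$cluster.CrossSubnetDelay = 2000
$cluster.CrossSubnetThreshold = 10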

Replication of data: Replication of data between sites is very important in a multi-site cluster and is accomplished in different ways by different hardware vendors, so the choice of replication process requires careful consideration. There are many replication options; before you make any decision, consult your storage vendor, server hardware vendor, and software vendors. Your replication design will change depending on the vendor, for example NetApp or EMC. Review the following considerations:

Choosing the replication level (block, file system, or application level): The replication process can function through the hardware (at the block level), through the operating system (at the file system level), or through certain applications such as Microsoft Exchange Server (which has a feature called Cluster Continuous Replication, or CCR). Work with your hardware and software vendors to choose a replication process that fits the requirements of your organization.

                                              Configuring replication to avoid data corruption: The replication process must be configured so that any interruptions to the process will not result in data corruption, but instead will always provide a set of data that matches the data from the main site as it existed at some moment in time. In other words, the replication must always preserve the order of I/O operations that occurred at the main site. This is crucial, because very few applications can recover if the data is corrupted during replication.

                                              Choosing between synchronous and asynchronous replication: The replication process can be synchronous, where no write operation finishes until the corresponding data is committed at the secondary site, or asynchronous, where the write operation can finish at the main site and then be replicated (as a background operation) to the secondary site.

Synchronous replication means that the replicated data is always up to date, but it slows application performance while each operation waits for replication. Synchronous replication is best for multi-site clusters that use high-bandwidth, low-latency connections. Typically, this means that a cluster using synchronous replication must not be stretched over a great distance; it is usually performed within about 200 km, where reliable and robust WAN connectivity with enough bandwidth is available. For example, if you have a GigE or 10 GigE MPLS connection, you might choose synchronous replication, depending on the size of your data.

Asynchronous replication can help maximize application performance, but if failover to the secondary site is necessary, some of the most recent user operations might not be reflected in the data after failover, because operations that finished recently might not yet have been replicated. Asynchronous replication is best for clusters where you want to stretch the cluster over greater geographical distances with no significant application performance impact. It is typically used when the distance between sites is more than about 200 km or when WAN connectivity between sites is not robust.

                                              Utilizing Windows Storage Server 2012 as shared storage

                                              Windows® Storage Server 2012 is the Windows Server® 2012 platform of choice for network-attached storage (NAS) appliances offered by Microsoft partners.

Windows Storage Server 2012 enhances traditional file serving capabilities and extends file-based storage for application workloads like Hyper-V, SQL Server, Exchange, and Internet Information Services (IIS). Windows Storage Server 2012 provides the following features for an organization.

                                              Workgroup Edition

                                              • As many as 50 connections

                                              • Single processor socket

                                              • Up to 32 GB of memory

                                              • As many as 6 disks (no external SAS)

                                                  Standard Edition

                                                  • No license limit on number of connections

                                                  • Multiple processor sockets

                                                  • No license limit on memory

                                                  • No license limit on number of disks

                                                  • De-duplication, virtualization (host plus 2 virtual machines for storage and disk management tools), and networking services (no domain controller)

                                                  • Failover clustering for higher availability

                                                  • Microsoft BranchCache for reduced WAN traffic

                                                      Presenting Storage from Windows Storage Server 2012 Standard

From the Server Manager, Click Add roles and features. On the Before you begin page, Click Next. On the Installation Type page, Click Next.


On the Server Roles selection page, select iSCSI Target Server and iSCSI Target Storage Provider, Click Next


                                                      On the Feature page, Click Next. On the Confirm page, Click Install. Click Close.

                                                      On the Server Manager, Click File and Storage Services, Click iSCSI


On the Tasks menu, Click New iSCSI Virtual Disk. Select the disk drive from which you want to present storage, Click Next


                                                      Type the Name of the Storage, Click Next


                                                      Type the size of the shared disk, Click Next


                                                      Select New iSCSI Target, Click Next


                                                      Type the name of the target, Click Next


For the initiator ID, select IP Address as the type, type the IP address of a cluster node under Enter a value for the selected type, and Click OK. Repeat the process to add the IP address of each cluster node.


Type the CHAP information if you want CHAP authentication; note that the CHAP password must be at least 12 characters. Click Next to continue.


                                                      Click Create to create a shared storage. Click Close once done.


Repeat these steps to create all shared drives of your preferred size, and create a 2 GB shared drive for the quorum disk.
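
The same target can be built with the iSCSI Target cmdlets in Windows Server 2012. A hedged sketch; the path, size, target name, and node IP addresses are example values:

# Create a virtual disk that will back the LUN
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\SQLData1.vhd -Size 100GB

# Create the target and allow the two cluster nodes by IP address
New-IscsiServerTarget -TargetName SQLCluster -InitiatorIds "IPAddress:10.0.0.11","IPAddress:10.0.0.12"

# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName SQLCluster -Path C:\iSCSIVirtualDisks\SQLData1.vhd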


                                                      Deploying a Failover Cluster in Microsoft environment

Step 1: Connect the cluster servers to the networks and storage

                                                      1. Review the details about networks in Hardware Requirements for a Two-Node Failover Cluster and Network infrastructure and domain account requirements for a two-node failover cluster, earlier in this guide.

                                                      2. Connect and configure the networks that the servers in the cluster will use.

3. Follow the manufacturer’s instructions for physically connecting the servers to the storage. For this article, we are using the software iSCSI initiator. Open the iSCSI initiator from Server Manager>Tools>iSCSI Initiator. Type the IP address of the target, that is, the IP address of the Microsoft Windows Storage Server 2012 machine. Click Quick Connect, Click Done.
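
If you prefer PowerShell on the cluster nodes, the initiator connection can be scripted. A minimal sketch; 10.0.0.5 is an example address for the storage server:

# Make sure the iSCSI initiator service is running and starts automatically
Set-Service msiscsi -StartupType Automatic
Start-Service msiscsi

# Register the target portal and connect to discovered targets persistently
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.5
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true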


4. Open Computer Management, Click Disk Management, and initialize and format the disk using either the MBR or GPT disk type. Go to the second server, open Computer Management, Click Disk Management, and bring the disk online by right-clicking the disk and clicking Bring Online. Ensure that the disks (LUNs) that you want to use in the cluster are exposed to the servers that you will cluster (and only those servers).


5. On one of the servers that you want to cluster, click Start, click Administrative Tools, click Computer Management, and then click Disk Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.) In Disk Management, confirm that the cluster disks are visible.


6. If you want to have a storage volume larger than 2 terabytes, and you are using the Windows interface to control the format of the disk, convert that disk to the partition style called GUID partition table (GPT). To do this, back up any data on the disk, delete all volumes on the disk and then, in Disk Management, right-click the disk (not a partition) and click Convert to GPT Disk.

7. Check the format of any exposed volume or LUN. Use the NTFS file format.
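
Disk preparation can also be scripted from the node that owns the disks. A hedged sketch; the disk number, label, and GPT choice are example values:

# Bring the new iSCSI disk online, initialize it as GPT, and format it with NTFS
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLData" -Confirm:$false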

                                                      Step 2: Install the failover cluster feature

                                                      In this step, you install the failover cluster feature. The servers must be running Windows Server 2012.

1. Open Server Manager, click Add roles and features, and follow the wizard to the Features page.

2. In the Add Roles and Features Wizard, click Failover Clustering, and then click Install.


3. Follow the instructions in the wizard to complete the installation of the feature. When the wizard finishes, close it.

4. Repeat the process for each server that you want to include in the cluster.
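
Equivalently, the feature can be installed with PowerShell on each node. A minimal sketch; NODE2 is an example remote node name:

# Install the failover clustering feature locally, then on the second node
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName NODE2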

                                                      Step 3: Validate the cluster configuration

                                                      Before creating a cluster, I strongly recommend that you validate your configuration. Validation helps you confirm that the configuration of your servers, network, and storage meets a set of specific requirements for failover clusters.

                                                      1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.


                                                      2. Confirm that Failover Cluster Manager is selected and then, in the center pane under Management, click Validate a Configuration. Click Next.


3. On the Select Servers page, type the fully qualified domain names of the nodes you would like to add to the cluster, then click Add.


4. Follow the instructions in the wizard to specify the two servers and the tests, and then run the tests. To fully validate your configuration, run all tests before creating a cluster. Click Next.


                                                      5. On the confirmation page, Click Next


6. The Summary page appears after the tests run. If you select Create the cluster now using the validated nodes, you will be prompted to create a cluster when you click Finish.


7. While still on the Summary page, click View Report and read the test results.


                                                      To view the results of the tests after you close the wizard, see

                                                      SystemRoot\Cluster\Reports\Validation Report date and time.html

                                                      where SystemRoot is the folder in which the operating system is installed (for example, C:\Windows).

8. As necessary, make changes in the configuration and rerun the tests.

Step 4: Create a Failover Cluster

                                                      1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.


2. Confirm that Failover Cluster Manager is selected and then, in the center pane under Management, click Create a cluster. If you did not close the validation wizard, the Create Cluster Wizard opens automatically. Follow the instructions in the wizard to specify the following, clicking Next between pages:

                                                      • The servers to include in the cluster.

• The name of the cluster, that is, the virtual name of the cluster

• The IP address of the virtual node


3. Verify the IP address and cluster name, and click Next


                                                          4. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report. Click Finish.
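
Validation and cluster creation can also be done in two PowerShell commands. A hedged sketch; the node names, cluster name, and static address are example values:

# Validate the nodes, then create the cluster with its virtual name and IP address
Test-Cluster -Node NODE1,NODE2
New-Cluster -Name DC1PVSQLCLU01 -Node NODE1,NODE2 -StaticAddress 10.0.0.20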


Step 5: Verify Cluster Configuration

On the Cluster Manager, Click Networks, right-click each network, and Click Properties. Make sure Allow clients to connect through this network is unchecked for the heartbeat network. Verify the IP range. Click OK.


On the Cluster Manager, Click Networks, right-click each network, and Click Properties. Make sure Allow clients to connect through this network is checked for the domain network. Verify the IP range. Click OK.

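
The same client-access behavior maps to the Role property of each cluster network (3 = cluster and client traffic, 1 = cluster/heartbeat traffic only, 0 = none). A minimal sketch; the network names are example values:

# Domain network carries cluster and client traffic; heartbeat carries cluster traffic only
(Get-ClusterNetwork "Domain Network").Role = 3
(Get-ClusterNetwork "Heartbeat Network").Role = 1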

On the Cluster Manager, Click Storage, Click Disks, and verify that the quorum disk and shared disks are available. You can add more disks by clicking Add Disk in the task pane.


An automated MSCS cluster configuration adds a quorum automatically. However, you can manually configure the desired cluster quorum by right-clicking the cluster, then More Actions>Configure Cluster Quorum Settings.


                                                          Configuring a Hyper-v Cluster

In the previous steps you configured an MSCS cluster; to configure a Hyper-V cluster, all you need to do is install the Hyper-V role on each cluster node. From the Server Manager, Click Add roles and features, follow the wizard, and install the Hyper-V role. A reboot is required to install the Hyper-V role. Repeat on the other node so the role is installed on both nodes.

Note that at this stage you should add storage for virtual machines and the networks for live migration, a storage network if using iSCSI, a virtual machine network, and a management network. Detailed configuration is out of scope for this article, as the focus here is the MSCS cluster rather than Hyper-V.
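
The role installation can be scripted per node as well. A minimal sketch; note that the -Restart switch reboots the node immediately, so plan accordingly:

# Install Hyper-V with its management tools and reboot to finish
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart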


From the Cluster Manager, right-click Networks, Click Live Migration Settings, and select the appropriate network for live migration.


If you would like additional virtual machine fault tolerance such as Hyper-V Replica, right-click the cluster virtual node, Click Configure Role, Click Next.


From the Select Role page, Click Hyper-V Replica Broker, Click Next, and follow the wizard.


From the Cluster Manager, right-click Roles, Click Virtual Machines, then Click New Hard Disk to configure virtual machine storage and the virtual machine configuration disk drive. Once done, from the Cluster Manager, right-click Roles, Click Virtual Machines, then Click New Virtual Machine to create a virtual machine.


                                                          Backing up Clustered data, application or server

                                                          There are multiple methods for backing up information that is stored on Cluster Shared Volumes in a failover cluster running on

                                                          • Windows Server 2008 R2

                                                          • Hyper-V Server 2008 R2

                                                          • Windows Server 2012

                                                          • Hyper-V Server 2012

                                                              Operating System Level backup

The backup application runs within a virtual machine in the same way that a backup application runs within a physical server. When multiple virtual machines are managed centrally, each virtual machine can run a backup “agent” (instead of running an individual backup application) that is controlled from the central management server. The backup agent backs up application data, files, folders, and the system state of the operating system.


                                                              Hyper-V Image Level backup

The backup captures all the information about multiple virtual machines that are configured in a failover cluster that is using Cluster Shared Volumes. The backup application runs through Hyper-V, which means that it must use the VSS Hyper-V writer, and it must also be compatible with Cluster Shared Volumes. The backup application backs up the virtual machines selected by the administrator, including all the VHD files for those virtual machines, in one operation. VM1_Data.VHDX, VM2_Data.VHDX, VM1_System.VHDX, and VM2_System.VHDX are stored on a backup disk or tape; VM1_System.VHDX and VM2_System.VHDX contain system files and page files (that is, the system state), and snapshots and VM configurations are stored as well.


                                                              Publishing an Application or Service in a Failover Cluster Environment

                                                              1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.

                                                              2. Right Click on Roles, click Configure Role to publish a service or application


3. Select a clustered service or application, and then click Next.


                                                              4. Follow the instructions in the wizard to specify the following details:

                                                              • A name for the clustered file server

• The IP address of the virtual node


5. On the Select Storage page, select the storage volume or volumes that the clustered file server should use. Click Next


6. On the Confirmation page, review the settings and Click Next


                                                                  7. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report.

                                                                  8. To close the wizard, click Finish.


                                                                  9. In the console tree, make sure Services and Applications is expanded, and then select the clustered file server that you just created.

                                                                  10. After completing the wizard, confirm that the clustered file server comes online. If it does not, review the state of the networks and storage and correct any issues. Then right-click the new clustered application or service and click Bring this service or application online.

                                                                  Perform a Failover Test

                                                                  To perform a basic test of failover, right-click the clustered file server, click Move this service or application to another node, and click the available choice of node. When prompted, confirm your choice. You can observe the status changes in the center pane of the snap-in as the clustered file server instance is moved.

                                                                  Configuring a New Failover Cluster by Using Windows PowerShell

• Run validation tests on a list of servers: Test-Cluster -Node server1,server2 (where server1 and server2 are the servers that you want to validate).

• Create a cluster using defaults for most settings: New-Cluster -Name cluster1 -Node server1,server2 (where server1 and server2 are the servers that you want to include in the new cluster).

• Configure a clustered file server using defaults for most settings: Add-ClusterFileServerRole -Storage "Cluster Disk 4" (where Cluster Disk 4 is the disk that the clustered file server will use).

• Configure a clustered print server using defaults for most settings: Add-ClusterPrintServerRole -Storage "Cluster Disk 5" (where Cluster Disk 5 is the disk that the clustered print server will use).

• Configure a clustered virtual machine using defaults for most settings: Add-ClusterVirtualMachineRole -VirtualMachine VM1 (where VM1 is an existing virtual machine that you want to place in a cluster).

• Add available disks: Get-ClusterAvailableDisk | Add-ClusterDisk

• Review the state of nodes: Get-ClusterNode

• Run validation tests on a new server: Test-Cluster -Node newserver,node1,node2 (where newserver is the new server that you want to add to a cluster, and node1 and node2 are nodes in that cluster).

• Prepare a node for maintenance: Get-ClusterNode node2 | Get-ClusterGroup | Move-ClusterGroup (where node2 is the node from which you want to move clustered services and applications).

• Pause a node: Suspend-ClusterNode node2

• Resume a node: Resume-ClusterNode node2

• Stop the Cluster service on a node: Stop-ClusterNode node2

• Start the Cluster service on a node: Start-ClusterNode node2

• Review the signature and other properties of a cluster disk: Get-ClusterResource "Cluster Disk 2" | Get-ClusterParameter

• Move Available Storage to a particular node: Move-ClusterGroup "Available Storage" -Node node1

• Turn on maintenance for a disk: Suspend-ClusterResource "Cluster Disk 2"

• Turn off maintenance for a disk: Resume-ClusterResource "Cluster Disk 2"

• Bring a clustered service or application online: Start-ClusterGroup "Clustered Server 1"

• Take a clustered service or application offline: Stop-ClusterGroup "Clustered Server 1"

• Move or test a clustered service or application: Move-ClusterGroup "Clustered Server 1" (where Clustered Server 1 is a clustered server, such as a file server, that you want to test or move).

                                                                      Migrating clustered services and applications to a new failover cluster

                                                                      Use the following instructions to migrate clustered services and applications from your old cluster to your new cluster. After the Migrate a Cluster Wizard runs, it leaves most of the migrated resources offline, so that you can perform additional steps before you bring them online. If the new cluster uses old storage, plan how you will make LUNs or disks inaccessible to the old cluster and accessible to the new cluster (but do not make changes yet).

                                                                      1. To open the failover cluster snap-in, click Administrative Tools, and then click Failover Cluster Manager.

                                                                      2. In the console tree, if the cluster that you created is not displayed, right-click Failover Cluster Manager, click Manage a Cluster, and then select the cluster that you want to configure.

                                                                      3. In the console tree, expand the cluster that you created to see the items underneath it.

                                                                      4. If the clustered servers are connected to a network that is not to be used for cluster communications (for example, a network intended only for iSCSI), then under Networks, right-click that network, click Properties, and then click Do not allow cluster network communication on this network. Click OK.

5. In the console tree, select the cluster. Under Configure, click Migrate services and applications.

                                                                      6. Read the first page of the Migrate a Cluster Wizard, and then click Next.

                                                                      7. Specify the name or IP Address of the cluster or cluster node from which you want to migrate resource groups, and then click Next.

                                                                      8. Click View Report. The wizard also provides a report after it finishes, which describes any additional steps that might be needed before you bring the migrated resource groups online.

                                                                      9. Follow the instructions in the wizard to complete the following tasks:

                                                                      1. Choose the resource group or groups that you want to migrate.

                                                                      2. Specify whether the resource groups to be migrated will use new storage or the same storage that you used in the old cluster. If the resource groups will use new storage, you can specify the disk that each resource group should use. Note that if new storage is used, you must handle all copying or moving of data or folders—the wizard does not copy data from one shared storage location to another.

                                                                      3. If you are migrating from a cluster running Windows Server 2003 that has Network Name resources with Kerberos protocol enabled, specify the account name and password for the Active Directory account that is used by the Cluster service on the old cluster.

10. After the wizard runs and the Summary page appears, click View Report.

11. When the wizard completes, most migrated resources will be offline. Leave them offline at this stage.

Completing the transition from the old cluster to the new cluster

You must perform the following steps to complete the transition to the new cluster running Windows Server 2012.

                                                                          1. Prepare for clients to experience downtime, probably brief.

                                                                          2. Take each resource group offline on the old cluster.

                                                                          3. Complete the transition for the storage:

                                                                          1. If the new cluster will use old storage, follow your plan for making LUNs or disks inaccessible to the old cluster and accessible to the new cluster.

                                                                          2. If the new cluster will use new storage, copy the appropriate folders and data to the storage. As needed for disk access on the old cluster, bring individual disk resources online on that cluster. (Keep other resources offline, to ensure that clients cannot change data on the disks in storage.) Also as needed, on the new cluster, use Disk Management to confirm that the appropriate LUNs or disks are visible to the new cluster and not visible to any other servers.

                                                                              4. If the new cluster uses mount points, adjust the mount points as needed, and make each disk resource that uses a mount point dependent on the resource of the disk that hosts the mount point.

                                                                              5. Bring the migrated services or applications online on the new cluster. To perform a basic test of failover on the new cluster, expand Services and Applications, and then click a migrated service or application that you want to test.

                                                                              6. To perform a basic test of failover for the migrated service or application, under Actions (on the right), click Move this service or application to another node, and then click an available choice of node. When prompted, confirm your choice. You can observe the status changes in the center pane of the snap-in as the clustered service or application is moved.

                                                                              7. If there are any issues with failover, review the following:

                                                                              1. View events in Failover Cluster Manager. To do this, in the console tree, right-click Cluster Events, and then click Query. In the Cluster Events Filter dialog box, select the criteria for the events that you want to display, or to return to the default criteria, click the Reset button. Click OK. To sort events, click a heading, for example, Level or Date and Time.

2. Confirm that necessary services, applications, or server roles are installed on all nodes. Confirm that services or applications are compatible with Windows Server 2012 and run as expected.

                                                                              3. If you used old storage for the new cluster, rerun the Validate a Cluster Configuration Wizard to confirm the validation results for all LUNs or disks in the storage.

                                                                              4. Review migrated resource settings and dependencies.

                                                                              5. If you migrated one or more Network Name resources with Kerberos protocol enabled, confirm that the following permissions change was made in Active Directory Users and Computers on a domain controller. In the computer accounts (computer objects) of your Kerberos protocol-enabled Network Name resources, Full Control must be assigned to the computer account for the failover cluster.

                                                                                  Migrating Cluster Resource with new Mount Point

                                                                                  When you are working with new storage for your cluster migration, you have some flexibility in the order in which you complete the tasks. The tasks that you must complete include creating the mount points, running the Migrate a Cluster Wizard, copying the data to the new storage, and confirming the disk letters and mount points for the new storage. After completing the other tasks, configure the disk resource dependencies in Failover Cluster Manager.

                                                                                  A useful way to keep track of disks in the new storage is to give them labels that indicate your intended mount point configuration. For example, in the new storage, when you are mounting a new disk in a folder called \Mount1-1 on another disk, you can also label the mounted disk as Mount1-1. (This assumes that the label Mount1-1 is not already in use in the old storage.) Then when you run the Migrate a Cluster Wizard and you need to specify that disk for a particular migrated resource, you can look at the list and select the disk labeled Mount1-1. Then you can return to Failover Cluster Manager to configure the disk resource for Mount1-1 so that it is dependent on the appropriate resource, for example, the resource for disk F. Similarly, you would configure the disk resources for all other disks mounted on disk F so that they depended on the disk resource for disk F.

                                                                                  Migrating DHCP to a Cluster Running Windows Server 2012

                                                                                  A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.

This guide describes the steps that are necessary when migrating a clustered DHCP server to a cluster running Windows Server 2012, beyond the standard steps required for migrating clustered services and applications in general. The guide indicates when to use the Migrate a Cluster Wizard in the migration, but does not describe the wizard in detail.

                                                                                  Step 1: Review requirements and create a cluster running Windows Server 2012

Before beginning the migration described in this guide, review the requirements for a cluster running Windows Server 2012, install the failover clustering feature on servers running Windows Server 2012, and create a new cluster.

                                                                                  Step 2: On the old cluster, adjust registry settings and permissions before migration

                                                                                  To prepare for migration, you must make changes to registry settings and permissions on each node of the old cluster.

                                                                                  1. Confirm that you have a current backup of the old cluster, one that includes the configuration information for the clustered DHCP server (also called the DHCP resource group).

                                                                                  2. Confirm that the clustered DHCP server is online on the old cluster. It must be online while you complete the remainder of this procedure.

                                                                                  3. On a node of the old cluster, open a command prompt as an administrator.

4. Type regedit, and then navigate to:

                                                                                  HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DHCPServer\Parameters

5. Choose the option that applies to your cluster: if the old cluster is running Windows Server 2008, skip to step 6. If the old cluster is running Windows Server 2003 or Windows Server 2003 R2, create the following registry values (a PowerShell equivalent of these edits appears after this list):

                                                                                  1. Right-click Parameters, click New, click String Value, and for the name of the new value, type: ServiceMain

                                                                                  2. Right-click the new value (ServiceMain), click Modify, and for the value data, type: ServiceEntry

                                                                                  3. Right-click Parameters again, click New, click Expandable String Value, and for the name of the new value, type: ServiceDll

                                                                                  4. Right-click the new value (ServiceDll), click Modify, and for the value data, type: %systemroot%\system32\dhcpssvc.dll

                                                                                      6. Right-click Parameters, and then click Permissions.

                                                                                      7. Click Add. Locate the appropriate account and assign permissions:

                                                                                      1. On Windows Server 2008: Click Locations, select the local server, and then click OK. Under Enter the object names to select, type NT Service\DHCPServer. Click OK. Select the DHCPServer account and then select the check box for Full Control.

                                                                                      2. On Windows Server 2003 or Windows Server 2003 R2: Click Locations, ensure that the domain name is selected, and then click OK. Under Enter the object names to select, type Everyone, and then click OK (and confirm your choice if prompted). Under Group or user names, select Everyone and then select the check box for Full Control.

                                                                                        8. Repeat the process on the other node or nodes of the old cluster.
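
Where Windows PowerShell is available on the old cluster nodes, the registry values from step 5 can be created without regedit. A hedged sketch of the same edits:

# Create the ServiceMain and ServiceDll values under the DHCPServer Parameters key
$params = "HKLM:\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters"
New-ItemProperty -Path $params -Name ServiceMain -PropertyType String -Value "ServiceEntry"
New-ItemProperty -Path $params -Name ServiceDll -PropertyType ExpandString -Value "%systemroot%\system32\dhcpssvc.dll"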

                                                                                        Step 3: On a node in the old cluster, prepare for export, and then export the DHCP database to a file

                                                                                        As part of migrating a clustered DHCP server, on the old cluster, you must export the DHCP database to a file. This requires preparatory steps that prevent the cluster from restarting the clustered DHCP resource during the export. The following procedure describes the process. On the old cluster, start the clustering snap-in and configure the restart setting for the clustered DHCP server (DHCP resource group):

                                                                                        1. Click Start, click Administrative Tools, and then click Failover Cluster Management. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

                                                                                        2. If the console tree is collapsed, expand the tree under the cluster that you are migrating settings from. Expand Services and Applications and then, in the console tree, click the clustered DHCP server.

                                                                                        3. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, do not restart.

                                                                                        This step prevents the resource from restarting during the export of the DHCP database, which would stop the export.

                                                                                        1. On the node of the old cluster that currently owns the clustered DHCP server, confirm that the clustered DHCP server is running. Then open a command prompt window as an administrator.

                                                                                        2. Type: netsh dhcp server export <exportfile> all

                                                                                        Where <exportfile> is the name of the file to which you want to export the DHCP database.
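For example, assuming a folder named C:\dhcp exists on the node (the path and file name here are illustrative), the command might look like this; the all keyword exports every scope in the database:

netsh dhcp server export C:\dhcp\dhcpdb.txt all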

                                                                                        3. After the export is complete, in the clustering interface (Cluster Administrator or Failover Cluster Management), right-click the clustered DHCP server (DHCP resource group) and then click either Take Offline or Take this service or application offline. If the command is unavailable, in the center pane, right-click each online resource and click either Take Offline or Take this resource offline. If prompted for confirmation, confirm your choice.

                                                                                        4. If the old cluster is running Windows Server 2003 or Windows Server 2003 R2, obtain the account name and password for the Cluster service account (the Active Directory account used by the Cluster service on the old cluster). Alternatively, you can obtain the name and password of another account that has access permissions for the Active Directory computer accounts (objects) that the old cluster uses. For a migration from a cluster running Windows Server 2003 or Windows Server 2003 R2, you will need this information for the next procedure.

                                                                                        Step 4: On the new cluster, configure a network for DHCP clients and run the Migrate a Cluster Wizard

Microsoft recommends that you make the network settings on the new cluster as similar as possible to the settings on the old cluster. In any case, on the new cluster, you must have at least one network that DHCP clients can use to communicate with the cluster. The following procedure describes the cluster settings needed on the client network and indicates when to run the Migrate a Cluster Wizard.

                                                                                        1. On the new cluster (running Windows Server 2012), click Server Manager, click Tools, and then click Failover Cluster Manager.

                                                                                        2. If the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.

                                                                                        3. If the console tree is collapsed, expand the tree under the cluster.

                                                                                        4. Expand Networks, right-click the network that clients will use to connect to the DHCP server, and then click Properties.

                                                                                        5. Make sure that Allow cluster network communication on this network and Allow clients to connect through this network are selected.

                                                                                        6. To prepare for the migration process, find and take note of the drive letter used for the DHCP database on the old cluster. Ensure that the same drive letter exists on the new cluster. (This drive letter is one of the settings that the Migrate a Cluster Wizard will migrate.)

                                                                                        7. In Failover Cluster Manager, in the console tree, select the new cluster, and then under Configure, click Migrate services and applications.

8. Use the Migrate a Cluster Wizard to migrate the DHCP resource group from the old cluster to the new cluster. If you are using new storage on the new cluster, during the migration, be sure to specify the disk that has the same drive letter on the new cluster as was used for the DHCP database on the old cluster. The wizard migrates resources and settings, but not the DHCP database.

                                                                                        Step 5: On the new cluster, import the DHCP database, bring the clustered DHCP server online, and adjust permissions

To complete the migration process, import the DHCP database that you exported to a file in Step 3. Then you can bring the clustered DHCP server online and adjust settings that were changed temporarily during the migration process.

                                                                                        1. If you are reusing the old cluster storage for the new cluster, confirm that you have stored the exported DHCP database file in a safe location. Then be sure to delete all the DHCP files other than the exported DHCP database file from the old storage. This includes the DHCP database, log, and backup files.

                                                                                        2. On the new cluster, in Failover Cluster Manager, expand Services and Applications, right-click the clustered DHCP server, and then click Bring this service or application online. The DHCP service starts with an empty database.

                                                                                        3. Click the clustered DHCP server.

                                                                                        4. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, do not restart. This step prevents the resource from restarting during the import of the DHCP database, which would stop the import.

                                                                                        5. In the new cluster, on the node that currently owns the migrated DHCP server, view the disk used by the migrated DHCP server, and make sure that you have copied the exported DHCP database file to this disk.

                                                                                        6. In the new cluster, on the node that currently owns the migrated DHCP server, open a command prompt as an administrator. Change to the disk used by the migrated DHCP server.

7. Type: netsh dhcp server import <exportfile> all

                                                                                        Where <exportfile> is the filename of the file to which you exported the DHCP database.
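Continuing the illustrative path from the export step, and assuming the file was copied to the disk used by the migrated DHCP server, the command might look like this:

netsh dhcp server import C:\dhcp\dhcpdb.txt all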

                                                                                        8. If the migrated DHCP server is not online, in Failover Cluster Manager, under Services and Applications, right-click the migrated DHCP server, and then click Bring this service or application online.

                                                                                        9. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, attempt restart on current node.

                                                                                        This returns the resource to the expected setting, instead of the “do not restart” setting that was temporarily needed during the import of the DHCP database.

                                                                                        10. If the cluster was migrated from Windows Server 2003 or Windows Server 2003 R2, after the clustered DHCP server is online on the new cluster, make the following changes to permissions in the registry:

                                                                                      • On the node that owns the clustered DHCP server, open a command prompt as an administrator.

• Type regedit, and then in Registry Editor navigate to:

                                                                                        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DHCPServer\Parameters

                                                                                      • Right-click Parameters, and then click Permissions.

                                                                                      • Click Add, click Locations, and then select the local server.

                                                                                      • Under Enter the object names to select, type NT Service\DHCPServer and then click OK. Select the DHCPServer account and then select the check box for Full Control. Then click Apply.

                                                                                      • Select the Everyone account (created through steps earlier in this topic) and then click Remove. This removes the account from the list of those that are assigned permissions.

                                                                                          11. Perform the preceding steps only after DHCP is online on the new cluster. After you complete these steps, you can test the clustered DHCP server and begin to provide DHCP services to clients.
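As a quick sanity check after the import, you can list the scopes that the clustered DHCP server now holds. A minimal sketch, run from an elevated command prompt on the node that owns the clustered DHCP server:

netsh dhcp server show scope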

                                                                                          Configuring Print Server Cluster

                                                                                        • Open Failover Cluster Management. In the console tree, if the cluster that you created is not displayed, right-click Failover Cluster Management, click Manage a Cluster, and then select the cluster you want to configure.

• Click Services and Applications. Under Actions (on the right), click Configure a Service or Application, and then click Next. Click Print Server, and then click Next.

• Follow the instructions in the wizard to specify the following details: a name for the clustered print server, an IP address, and the storage volume or volumes that the clustered print server should use.

• After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report. To close the wizard, click Finish.

                                                                                        • In the console tree, make sure Services and Applications is expanded, and then select the clustered print server that you just created.

                                                                                        • Under Actions, click Manage Printers.

                                                                                        • An instance of the Failover Cluster Management interface appears with Print Management in the console tree.

                                                                                        • Under Print Management, click Print Servers and locate the clustered print server that you want to configure.

                                                                                        • Always perform management tasks on the clustered print server. Do not manage the individual cluster nodes as print servers.

                                                                                        • Right-click the clustered print server, and then click Add Printer. Follow the instructions in the wizard to add a printer.

                                                                                        • This is the same wizard you would use to add a printer on a nonclustered server.

                                                                                        • When you have finished configuring settings for the clustered print server, to close the instance of the Failover Cluster Management interface with Print Management in the console tree, click File and then click Exit.

                                                                                        • To perform a basic test of failover, right-click the clustered print server instance, click Move this service or application to another node, and click the available choice of node. When prompted, confirm your choice.
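The same failover test can be scripted with the deprecated cluster.exe tool included with Windows Server 2008 and Windows Server 2008 R2. A minimal sketch, where the group name PrintSrv1 and the node name Node2 are illustrative placeholders:

cluster group "PrintSrv1" /moveto:Node2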

Configuring a Multisite SQL Server Failover Cluster

                                                                                            To install or upgrade a SQL Server failover cluster, you must run the Setup program on each node of the failover cluster. To add a node to an existing SQL Server failover cluster, you must run SQL Server Setup on the node that is to be added to the SQL Server failover cluster instance. Do not run Setup on the active node to manage the other nodes. The following options are available for SQL Server failover cluster installation:

Option 1: Integrated Installation with Add Node

                                                                                            Create and configure a single-node SQL Server failover cluster instance. When you configure the node successfully, you have a fully functional failover cluster instance. At this point, it does not have high availability because there is only one node in the failover cluster. On each node to be added to the SQL Server failover cluster, run Setup with Add Node functionality to add that node.

                                                                                            Option 2: Advanced/Enterprise Installation

After you run Prepare Failover Cluster on one node, Setup creates the ConfigurationFile.ini file, which lists all the settings that you specified. On the additional nodes to be prepared, instead of following these steps, you can supply the autogenerated ConfigurationFile.ini file from the first node as an input to the Setup command line, as shown below. This step prepares the nodes to be clustered, but there is no operational instance of SQL Server at the end of this step.
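This prepare step also has a command-line form through the PrepareFailoverCluster Setup action. The following is a minimal sketch, assuming a default instance; the account and password values are placeholders:

Setup.exe /q /ACTION=PrepareFailoverCluster /FEATURES=SQL /INSTANCENAME=MSSQLSERVER /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>" /IACCEPTSQLSERVERLICENSETERMS

On the remaining nodes, the autogenerated file can be supplied instead:

Setup.exe /q /ACTION=PrepareFailoverCluster /ConfigurationFile=ConfigurationFile.ini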


After the nodes are prepared for clustering, run Setup on one of the prepared nodes. This step configures and completes the failover cluster instance. At the end of this step, you will have an operational SQL Server failover cluster instance, and all the nodes that were prepared previously for that instance will be possible owners of the newly created SQL Server failover cluster.
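The completion step can likewise be driven from the command line with the CompleteFailoverCluster action. A minimal sketch in which the network name, resource group, IP address details, and administrator account are all placeholders (the exact parameter list depends on the features you prepared):

Setup.exe /q /ACTION=CompleteFailoverCluster /INSTANCENAME=MSSQLSERVER /FAILOVERCLUSTERNETWORKNAME="<SqlNetworkName>" /FAILOVERCLUSTERGROUP="SQL Server (MSSQLSERVER)" /FAILOVERCLUSTERIPADDRESSES="IPv4;<IPAddress>;Cluster Network 1;255.255.255.0" /SQLSYSADMINACCOUNTS="<DomainName\UserName>" /IACCEPTSQLSERVERLICENSETERMS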

Follow this procedure to install a new SQL Server failover cluster by using the integrated simple cluster installation:

1. Insert the SQL Server installation media, and from the root folder, double-click Setup.exe. To install from a network share, browse to the root folder on the share, and then double-click Setup.exe.

2. The Installation Wizard starts the SQL Server Installation Center. To create a new cluster installation of SQL Server, click New SQL Server failover cluster installation on the Installation page.

3. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report. To continue, click Next.

4. On the Setup Support Files page, click Install to install the Setup support files.

5. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

6. On the Product key page, indicate whether you are installing a free edition of SQL Server, or whether you have a PID key for a production version of the product.

7. On the License Terms page, read the license agreement, and then select the check box to accept the license terms and conditions. To help improve SQL Server, you can also enable the feature usage option and send reports to Microsoft. Click Next to continue.

8. On the Feature Selection page, select the components for your installation. You can select any combination of check boxes, but only the Database Engine and Analysis Services support failover clustering. Other selected components run as stand-alone features without failover capability on the current node where you are running Setup.

9. The prerequisites for the selected features are displayed in the right-hand pane. SQL Server Setup installs any prerequisites that are not already installed during the installation step described later in this procedure. SQL Server Setup also runs one more set of rules, based on the features you selected, to validate your configuration.

10. On the Instance Configuration page, specify whether to install a default or a named instance:

• SQL Server Network Name – Specify a network name for the new SQL Server failover cluster. This is the name of the virtual node of the cluster, and it is used to identify your failover cluster on the network.

• Instance ID – By default, the instance name is used as the Instance ID, which identifies the installation directories and registry keys for your instance of SQL Server. This is the case for both default instances and named instances. For a default instance, the instance name and instance ID would be MSSQLSERVER. To use a nondefault instance ID, select the Instance ID box and provide a value.

• Instance root directory – By default, the instance root directory is C:\Program Files\Microsoft SQL Server\. To specify a nondefault root directory, use the field provided, or click the ellipsis button to locate an installation folder.

11. Detected SQL Server instances and features on this computer – The grid shows instances of SQL Server that are on the computer where Setup is running. If a default instance is already installed on the computer, you must install a named instance of SQL Server. Click Next to continue.

12. The Disk Space Requirements page calculates the required disk space for the features that you specify, and compares requirements to the available disk space on the computer where Setup is running. Use the Cluster Resource Group page to specify the cluster resource group name where SQL Server virtual server resources will be located. To specify the SQL Server cluster resource group name, you have two options:

• Use the drop-down box to specify an existing group to use.

• Type the name of a new group to create. Be aware that the name "Available storage" is not a valid group name.

13. On the Cluster Disk Selection page, select the shared cluster disk resource for your SQL Server failover cluster. More than one disk can be specified. Click Next to continue.

14. On the Cluster Network Configuration page, specify the IP type and IP address for your failover cluster instance. Click Next to continue. Note that this IP address resolves to the SQL Server network name (the virtual node) that you specified in the earlier step.

15. On the Server Configuration – Service Accounts page, specify login accounts for SQL Server services. The actual services that are configured on this page depend on the features that you selected to install. You can assign the same login account to all SQL Server services, or you can configure each service account individually. The startup type is set to manual for all cluster-aware services, including Full-Text Search and SQL Server Agent, and cannot be changed during installation. Microsoft recommends that you configure service accounts individually so that each service is granted the minimum permissions it needs to complete its tasks. To specify the same login account for all service accounts in this instance of SQL Server, provide credentials in the fields at the bottom of the page. When you are finished specifying login information for SQL Server services, click Next.

16. Use the Cluster Security Policy page to specify the cluster security policy; use the default setting, and click Next to continue. The workflow for the rest of this topic depends on the features that you have specified for your installation (Database Engine, Analysis Services, Reporting Services); you might not see all the pages.

17. On the Server Configuration – Collation tab, use the default collations for the Database Engine and Analysis Services.

18. Use the Database Engine Configuration – Account Provisioning page to select Windows Authentication or Mixed Mode Authentication for your instance of SQL Server, and to edit the list of SQL Server administrators. When you are finished editing the list, click OK. Verify the list of administrators in the configuration dialog box, and when the list is complete, click Next.

19. Use the Database Engine Configuration – Data Directories page to specify nondefault installation directories. To install to the default directories, click Next. Use the Database Engine Configuration – FILESTREAM page to enable FILESTREAM for your instance of SQL Server. Click Next to continue.

20. Use the Analysis Services Configuration – Account Provisioning page to specify users or accounts that will have administrator permissions for Analysis Services. You must specify at least one system administrator for Analysis Services. To add the account under which SQL Server Setup is running, click Add Current User. To add or remove accounts from the list of system administrators, click Add or Remove, and then edit the list of users, groups, or computers that will have administrator privileges for Analysis Services. When you are finished editing the list, click OK. Verify the list of administrators in the configuration dialog box. When the list is complete, click Next.

21. Use the Analysis Services Configuration – Data Directories page to specify nondefault installation directories. To install to the default directories, click Next.

22. Use the Reporting Services Configuration page to specify the kind of Reporting Services installation to create. For a failover cluster installation, the option is set to Unconfigured Reporting Services installation; you must configure Reporting Services after you complete the installation. (That said, there is no harm in selecting the Install and configure option if you are not a SQL expert.)

23. On the Error Reporting page, specify the information that you want to send to Microsoft to help improve SQL Server. By default, the error reporting options are disabled.

24. The System Configuration Checker runs one more set of rules to validate your configuration with the SQL Server features that you have specified.

25. The Ready to Install page displays a tree view of the installation options that were specified during Setup. To continue, click Install. Setup first installs the required prerequisites for the selected features, followed by the feature installation.

26. During installation, the Installation Progress page provides status so that you can monitor installation progress as Setup continues. After installation, the Complete page provides a link to the summary log file for the installation and other important notes. To complete the SQL Server installation process, click Close.

27. If you are instructed to restart the computer, do so now. It is important to read the message from the Installation Wizard when you have finished with Setup.

28. To add nodes to the single-node failover cluster instance that you just created, run Setup on each additional node and follow the steps for the Add Node operation, as shown in the command-line sketch that follows.
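If you prefer to script the Add Node operation, it is exposed as a Setup action as well. A minimal sketch with placeholder credentials:

Setup.exe /q /ACTION=AddNode /INSTANCENAME=MSSQLSERVER /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>" /IACCEPTSQLSERVERLICENSETERMS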

                                                                                                                                                                                  SQL Advanced/Enterprise Failover Cluster Install

Step 1: Prepare the Environment

                                                                                                                                                                                  1. Insert the SQL Server installation media, and from the root folder, double-click Setup.exe.

                                                                                                                                                                                  2. Windows Installer 4.5 is required, and may be installed by the Installation Wizard. If you are prompted to restart your computer, restart and then start SQL Server Setup again.

3. After the prerequisites are installed, the Installation Wizard starts the SQL Server Installation Center. To prepare the node for clustering, move to the Advanced page, and then click Advanced cluster preparation.

                                                                                                                                                                                  4. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

                                                                                                                                                                                  5. On the Setup Support Files page click Install to install the Setup support files.

                                                                                                                                                                                  6. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

7. On the Language Selection page, specify the language, and then click Next to continue.

8. On the Product key page, enter the PID key for your edition, and then click Next.

9. On the License Terms page, accept the license terms, and then click Next to continue.

10. On the Feature Selection page, select the components for your installation as you did for the simple installation described earlier.

                                                                                                                                                                                  11. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.

                                                                                                                                                                                  12. To complete the SQL Server installation process, click Close.

                                                                                                                                                                                  13. If you are instructed to restart the computer, do so now.

14. Repeat the previous steps to prepare the other nodes for the failover cluster. You can also use the autogenerated configuration file to run the prepare step on the other nodes. A ConfigurationFile.ini is generated at C:\Program Files\Microsoft SQL Server\110\Setup BootStrap\Log\20130603_014118\ConfigurationFile.ini, as shown below.

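The generated file cannot be reproduced exactly here, but a minimal illustrative sketch of its shape, assuming a default-instance prepare with the Database Engine and Analysis Services, looks roughly like this (the real file contains many more settings):

[OPTIONS]
; Setup action recorded for this unattended run
ACTION="PrepareFailoverCluster"
; Features prepared on this node
FEATURES=SQLENGINE,AS
INSTANCENAME="MSSQLSERVER"
INSTANCEID="MSSQLSERVER"
QUIET="False"
QUIETSIMPLE="False"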

Step 2: Install SQL Server

                                                                                                                                                                                      1. After preparing all the nodes as described in the prepare step, run Setup on one of the prepared nodes, preferably the one that owns the shared disk. On the Advanced page of the SQL Server Installation Center, click Advanced cluster completion.

                                                                                                                                                                                      2. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

                                                                                                                                                                                      3. On the Setup Support Files page, click Install to install the Setup support files.

                                                                                                                                                                                      4. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

5. On the Language Selection page, specify the language, and then click Next to continue.

6. Use the Cluster Node Configuration page to select the instance name prepared for clustering.

7. Use the Cluster Resource Group page to specify the cluster resource group name where SQL Server virtual server resources will be located. On the Cluster Disk Selection page, select the shared cluster disk resource for your SQL Server failover cluster. Click Next to continue.

8. On the Cluster Network Configuration page, specify the network resources for your failover cluster instance. Click Next to continue.

9. Now follow the simple installation steps to select the Database Engine, Reporting Services, Analysis Services, and Integration Services features.

                                                                                                                                                                                      10. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.

                                                                                                                                                                                      11. Once installation is completed, click Close.

Follow this procedure if you would like to remove a node from an existing SQL Server failover cluster:

                                                                                                                                                                                          1. Insert the SQL Server installation media. From the root folder, double-click setup.exe. To install from a network share, navigate to the root folder on the share, and then double-click Setup.exe.

2. The Installation Wizard launches the SQL Server Installation Center. To remove a node from an existing failover cluster instance, click Maintenance in the left-hand pane, and then select Remove node from a SQL Server failover cluster.

                                                                                                                                                                                          3. The System Configuration Checker will run a discovery operation on your computer. To continue, click OK.

4. After you click Install on the Setup Support Files page, the System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue.

                                                                                                                                                                                          5. On the Cluster Node Configuration page, use the drop-down box to specify the name of the SQL Server failover cluster instance to be modified during this Setup operation. The node to be removed is listed in the Name of this node field.

                                                                                                                                                                                          6. The Ready to Remove Node page displays a tree view of options that were specified during Setup. To continue, click Remove.

                                                                                                                                                                                          7. During the remove operation, the Remove Node Progress page provides status.

                                                                                                                                                                                          8. The Complete page provides a link to the summary log file for the remove node operation and other important notes. To complete the SQL Server remove node, click Close.
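For reference, removing a node can also be run unattended through the RemoveNode Setup action. A minimal sketch, assuming the default instance name:

Setup.exe /q /ACTION=RemoveNode /INSTANCENAME=MSSQLSERVER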

Using Command-Line Installation of SQL Server

1. To install a new, stand-alone instance with the SQL Server Database Engine, Replication, and Full-Text Search components, run the following command:

Setup.exe /q /ACTION=Install /FEATURES=SQL /INSTANCENAME=MSSQLSERVER /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>" /IACCEPTSQLSERVERLICENSETERMS

2. To prepare a new, stand-alone instance with the SQL Server Database Engine, Replication, and Full-Text Search components, and Reporting Services, run the following command:

Setup.exe /q /ACTION=PrepareImage /FEATURES=SQL,RS /INSTANCEID=<MYINST> /IACCEPTSQLSERVERLICENSETERMS

3. To complete a prepared, stand-alone instance that includes the SQL Server Database Engine, Replication, and Full-Text Search components, run the following command:

Setup.exe /q /ACTION=CompleteImage /INSTANCENAME=MYNEWINST /INSTANCEID=<MYINST> /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>"

4. To upgrade an existing instance or failover cluster node from SQL Server 2005, SQL Server 2008, or SQL Server 2008 R2, run the following command:

Setup.exe /q /ACTION=upgrade /INSTANCEID=<INSTANCEID> /INSTANCENAME=MSSQLSERVER /RSUPGRADEDATABASEACCOUNT="<Provide a SQL DB Account>" /IACCEPTSQLSERVERLICENSETERMS

5. To upgrade an existing instance of SQL Server 2012 to a different edition of SQL Server 2012, run the following command:

Setup.exe /q /ACTION=editionupgrade /INSTANCENAME=MSSQLSERVER /PID="<PID key for new edition>" /IACCEPTSQLSERVERLICENSETERMS

6. To install SQL Server by using a configuration file, run the following command:

                                                                                                                                                                                              Setup.exe /ConfigurationFile=MyConfigurationFile.INI

7. To install SQL Server by using a configuration file and provide the service account passwords on the command line, run the following command:

Setup.exe /SQLSVCPASSWORD="typepassword" /AGTSVCPASSWORD="typepassword" /ASSVCPASSWORD="typepassword" /ISSVCPASSWORD="typepassword" /RSSVCPASSWORD="typepassword" /ConfigurationFile=MyConfigurationFile.INI

8. To uninstall an existing instance of SQL Server, run the following command:

                                                                                                                                                                                              Setup.exe /Action=Uninstall /FEATURES=SQL,AS,RS,IS,Tools /INSTANCENAME=MSSQLSERVER
