Understanding Network Virtualization in SCVMM 2012 R2


Networking in SCVMM is the communication mechanism to and from the SCVMM server, Hyper-V hosts, Hyper-V clusters, virtual machines, applications, services, physical switches, load balancers and third-party hypervisors. Functionality includes logical networking of almost anything hosted in SCVMM.

How to implement hardware load balancer in SCVMM


The following procedure describes Network Load Balancing functionality in Microsoft SCVMM. Microsoft's native NLB is automatically included when you install SCVMM. This procedure describes how to install and configure a third-party load balancer in SCVMM. Prerequisites: Microsoft System Center.

Cisco Nexus 1000V Switch for Microsoft Hyper-V


Cisco Nexus 1000V Switch for Microsoft Hyper-V provides the following advanced features in Microsoft Hyper-V and SCVMM: it integrates physical, virtual, and mixed environments; allows dynamic policy provisioning and mobility-aware network policies; and improves security through integrated virtual services and advanced Cisco NX-OS features.

Hyper-v Server 2016 What’s New

Hyper-v Server 2016 includes the following changed and upgraded functionality.

  1. Hyper-v cluster with mixed Hyper-v versions
  • Join a Windows Server 2016 Hyper-v node to a Windows Server 2012 R2 Hyper-v cluster
  • The cluster functional level remains Windows Server 2012 R2
  • Manage the cluster, Hyper-V, and virtual machines from a node running Windows Server 2016 or Windows 10
  • New Hyper-V features cannot be used until all of the nodes are migrated and the cluster functional level is upgraded to Windows Server 2016
  • The virtual machine configuration version of existing virtual machines isn't upgraded automatically
  • Upgrade the configuration version after you upgrade the cluster functional level, using the Update-VmConfigurationVersion vmname cmdlet
  • New virtual machines created on Windows Server 2016 remain backward compatible until their configuration version is upgraded
  • When the Hyper-V role is enabled on a computer that uses the Always On/Always Connected (AOAC) power model, the Connected Standby power state is now available
  2. Production checkpoints
  • With production checkpoints, the Volume Snapshot Service (VSS) is used inside Windows virtual machines
  • Linux virtual machines flush their file system buffers to create a file-system-consistent checkpoint
  • Checkpoints no longer use saved-state technology
  3. Hot add and remove for network adapters, virtual hard disks and memory
  • Add or remove a network adapter while the virtual machine is running, for both Windows and Linux machines
  • Adjust the memory of a running virtual machine even if dynamic memory isn't enabled
  4. Integration Services delivered through Windows Update
  • Windows Update now distributes integration services
  • The ISO image file vmguest.iso is no longer needed to update integration components
  5. Storage quality of service (QoS)
  • Create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks
  • Hyper-v automatically updates virtual machines when the storage policies change
  6. Virtual machine improvements
  • Import a virtual machine with an older configuration version, upgrade it later, and live migrate it across any host
  • After you upgrade the virtual machine configuration version, you can't move the virtual machine to a server that runs Windows Server 2012 R2
  • You can't downgrade the virtual machine configuration version from version 6 back to version 5
  • Turn off the virtual machine before upgrading its configuration version
  • The Update-VmConfigurationVersion cmdlet is blocked on a Hyper-V cluster while the cluster functional level is Windows Server 2012 R2
  • After the upgrade, the virtual machine uses the new configuration file format
  • The new configuration files use the .VMCX file extension for virtual machine configuration data and the .VMRS file extension for runtime state data
  • Ubuntu 14.04 and later and SUSE Linux Enterprise Server 12 support secure boot, enabled with the Set-VMFirmware vmname -SecureBootTemplate MicrosoftUEFICertificateAuthority cmdlet
  7. Hyper-V Manager improvements
  • Supports alternative credentials
  • Down-level management of Hyper-v running on Windows Server 2012, Windows 8, Windows Server 2012 R2 and Windows 8.1
  • Connects to Hyper-v using the WS-MAN protocol with Kerberos or NTLM authentication
  8. Guest OS support
  • Any server operating system from Windows Server 2008 through Windows Server 2016
  • Any desktop operating system from Windows Vista SP2 through Windows 10
  • FreeBSD, Ubuntu, SUSE Linux Enterprise, CentOS, Debian, Fedora and Red Hat

9. ReFS Accelerated VHDX 

  • Create a fixed-size VHDX on a ReFS volume instantly.
  • Faster backup operations and checkpoints.

10. Nested Virtualization

  • Run Hyper-V Server as a guest OS inside Hyper-V

11. Shared VHDX format

  • Host Based Backup of Shared VHDX files
  • Online Resize of Shared VHDX
  • Some usability changes in the UI
  • Shared VHDX files are now a new type of virtual hard disk with the .vhds file extension.

12. Stretched Hyper-V Cluster 

  • A stretch cluster allows you to configure Hyper-v hosts and storage in a single cluster, where two nodes share one set of storage and two nodes share another set, and synchronous replication keeps both sets of storage mirrored in the cluster to allow immediate failover.
  • These nodes and their storage are typically located in separate physical sites, although this is not required.
  • The stretch cluster will run a Hyper-V Compute workload.
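
The configuration-version and secure-boot steps above can be sketched in PowerShell. This is a hedged example: the cluster and VM names are placeholders, and note that in the released version of Windows Server 2016 the cmdlet shipped as Update-VMVersion (early preview documentation used the name Update-VmConfigurationVersion).

```powershell
# Raise the cluster functional level once every node runs Windows Server 2016
Update-ClusterFunctionalLevel -Cluster "HVCluster01"

# Upgrade the configuration version of an existing VM (the VM must be turned off)
Stop-VM -Name "VM01"
Update-VMVersion -Name "VM01"    # named Update-VmConfigurationVersion in early previews
Start-VM -Name "VM01"

# Enable secure boot for a supported Linux guest (Generation 2 VM)
Set-VMFirmware -VMName "Ubuntu01" -EnableSecureBoot On `
    -SecureBootTemplate MicrosoftUEFICertificateAuthority
```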

 

Unsupported:

Hyper-V on Windows 10 doesn’t support failover clustering

Migrating VMs from a Standalone Hyper-v Host to a Clustered Hyper-v Host

Scenario 1: In-place migration of two standalone Windows Servers (Hyper-v role installed) into a cluster of Windows Servers (Hyper-v role installed).

The steps involved in this scenario are listed below. There will be downtime in this scenario.

  1. Delete all snapshots from the VMs
  2. Update Windows Server on both hosts with the latest patches and hotfixes
  3. Reboot the hosts
  4. Install the Failover Clustering Windows feature on both hosts
  5. Connect the hosts to the shared storage infrastructure, either iSCSI or Fibre Channel
  6. Present the shared storage (a 5GB quorum disk plus an additional disk for the VM store) to the Hyper-v hosts
  7. Run the Create Cluster wizard and create the cluster
  8. In Failover Cluster Manager, click Disks, select the virtual machine storage and convert the disk to a Cluster Shared Volume
  9. Open Hyper-v Manager from Server Manager, run storage migration and migrate all VM data to a single location on the shared storage
  10. Use the Configure Role wizard in Failover Cluster Manager, select Virtual Machine from the drop-down list, select one or more VMs and migrate them to a failover cluster node
  11. Test live migration
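
Steps 7 through 11 above can also be scripted instead of using the wizards. A minimal sketch, assuming placeholder host, cluster, disk and VM names:

```powershell
# Step 7: create the cluster from the two standalone hosts
New-Cluster -Name "HVCluster01" -Node "HOST1","HOST2" -StaticAddress 192.168.1.50

# Step 8: convert the VM storage disk to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Step 9: storage-migrate a VM's files to the shared volume
Move-VMStorage -VMName "VM01" -DestinationStoragePath "C:\ClusterStorage\Volume1\VM01"

# Step 10: make the VM highly available in the cluster
Add-ClusterVirtualMachineRole -VMName "VM01"

# Step 11: test live migration to the other node
Move-ClusterVirtualMachineRole -Name "VM01" -Node "HOST2" -MigrationType Live
```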

Scenario 2: Migrating standalone Windows Servers (Hyper-v role installed) using local storage to a different Windows Server cluster (Hyper-v role installed) using shared storage.

In this scenario, the clustered Windows Servers don't see the local storage in the old Hyper-v host, and the old Hyper-v host doesn't see the shared storage in the new clustered environment. There will be downtime when you migrate the VMs. Delete any snapshots and back up all VMs before you proceed.

Option A: Download the Veeam Backup & Replication 8 trial version and configure a VM as the Veeam management server. Add the standalone Hyper-v host as the source and the Hyper-v cluster as the target, then replicate all the VMs. Shut down the old VMs on the standalone Hyper-v host, power on the replicated VMs in the Hyper-v cluster, and delete the old VMs.

Option B: Copy the VHD and configuration files into the clustered shared storage. Log on to one of the clustered Hyper-v hosts, open Hyper-v Manager and use the Import Virtual Machine option to import each VM. Then use the Configure Role option in Failover Cluster Manager on the same host to add the VM to the cluster, and power it on.
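
Option B can be scripted as well. A sketch, assuming the VM's files were copied under a placeholder C:\ClusterStorage\Volume1 path:

```powershell
# Import the copied VM in-place on one of the clustered Hyper-v hosts
# (the configuration XML file name is normally a GUID)
Import-VM -Path "C:\ClusterStorage\Volume1\VM01\Virtual Machines\VM01.xml"

# Add the imported VM to the cluster, then start it
Add-ClusterVirtualMachineRole -VMName "VM01"
Start-VM -Name "VM01"
```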

My recommendation: use Veeam B&R.

Scenario 3: Migrating standalone Windows Servers (Hyper-v role installed) using iSCSI storage to a different Windows Server cluster (Hyper-v role installed) using Fibre Channel or iSCSI storage.

Option A: Shut down the VMs. Present the same iSCSI storage that is connected to the standalone hosts to the clustered hosts. Use storage migration to migrate the VMs to the clustered hosts, then use the Configure Role option in Failover Cluster Manager to add the VMs to the Hyper-v cluster.

Option B: Again use Veeam to do the job.

There are many factors/challenges when migrating VMs from standalone environment to clustered environment.

  1. iSCSI storage to Fibre Channel storage. When new cluster has host bus adapter (HBA) and old standalone host doesn’t have HBA. You can use Microsoft iSCSI initiation to fulfil the initiator requirement in new host.
  2. Fibre channel storage to iSCSI storage. There will heaps of downtime to fulfil this requirement because of new architecture. Veeam can be part of a solution.
  3. Multi-site and geographically diverse cluster will depend on MPLS or IPVPN network latency and bandwidth.

In conclusion, there is no silver bullet for every situation. Consult a Microsoft partner to get the correct migration path that best fits your requirements.

Windows Server 2012 R2 Gateway

Windows Server 2012 R2 can be configured as a gateway VM in a two- or four-node cluster on a Hyper-v host. A gateway VM, or router, enhances the data center by providing a secure router for public or private clouds. A gateway VM cluster can provide routing functionality for up to 200 tenants; each gateway VM can provide routing functionality for up to 50 tenants.

Two different versions of the gateway router are available in Windows Server 2012 R2.

RRAS Multitenant Gateway – The RRAS Multitenant Gateway router can be used for multitenant or non-multitenant deployments, and is a full-featured BGP router. To deploy an RRAS Multitenant Gateway router, you must use Windows PowerShell commands.

RRAS Gateway configuration and options:

  • Configure the RRAS Multitenant Gateway for use with Hyper-V Network Virtualization
  • Configure the RRAS Multitenant Gateway for use with VLANs
  • Configure the RRAS Multitenant Gateway for Site-to-Site VPN Connections
  • Configure the RRAS Multitenant Gateway to Perform Network Address Translation for Tenant Computers
  • Configure the RRAS Multitenant Gateway for Dynamic Routing with BGP
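
As noted above, the RRAS Multitenant Gateway is deployed with PowerShell rather than a GUI. A hedged sketch of the first steps, using placeholder routing-domain and BGP values:

```powershell
# Install RRAS in multitenant mode on the gateway VM
Install-RemoteAccess -MultiTenancy

# Enable a routing domain for one tenant (site-to-site VPN, NAT and BGP)
Enable-RemoteAccessRoutingDomain -Name "Tenant01" -Type All -PassThru

# Configure a BGP router for that tenant's routing domain
Add-BgpRouter -RoutingDomain "Tenant01" -BgpIdentifier 10.0.0.1 -LocalASN 64512
```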

Windows Server 2012 R2 Gateway – To deploy the Windows Server Gateway, you must use System Center 2012 R2 Virtual Machine Manager (VMM). The Windows Server Gateway router is designed for use with multitenant deployments.

Multi-tenancy is the ability of a cloud infrastructure to support the virtual machine workloads of multiple tenants, but isolate them from each other, while all of the workloads run on the same infrastructure. The multiple workloads of an individual tenant can interconnect and be managed remotely, but these systems do not interconnect with the workloads of other tenants, nor can other tenants remotely manage them.

This feature gives service providers the ability to virtually isolate different subnets, VLANs and network traffic that reside on the same physical core or distribution switch. Hyper-v Network Virtualization uses Network Virtualization using Generic Routing Encapsulation (NVGRE), which allows tenants to bring their own TCP/IP address space and name space into the cloud environment.

Systems requirements:

Option | Hyper-v Host | Gateway VM
CPU | 2-socket NUMA node | 8 vCPU for two VMs, or 4 vCPU for four VMs
CPU cores | 8 | 1
Memory | 48GB | 8GB
Network adapters | Two 10GB NICs connected to a Cisco trunk port (see note 1) | 4 virtual NICs: operating system (management), clustering heartbeat, external network, internal network
Clustering | Active-Active | Active-Active or Active-Passive

Note 1 – NIC teaming on the Hyper-v host: you can configure a NIC team from the two 10GB NICs on the Hyper-v host. The Windows Server 2012 R2 gateway VM's four vNICs connect to the Hyper-V virtual switch that is bound to the NIC team.
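
The host-side teaming described above can be sketched as follows, assuming two physical adapters with the placeholder names "NIC1" and "NIC2":

```powershell
# Team the two 10GB NICs on the Hyper-v host
New-NetLbfoTeam -Name "GatewayTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind an external virtual switch to the team
New-VMSwitch -Name "GatewaySwitch" -NetAdapterName "GatewayTeam" -AllowManagementOS $false

# Add one of the gateway VM's four vNICs (repeat for heartbeat, external, internal)
Add-VMNetworkAdapter -VMName "GW-VM01" -SwitchName "GatewaySwitch" -Name "External"
```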

Deployment Guides:

Windows Server 2012 R2 RRAS Deployment Guide

Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM

Clustering Windows Server 2012 R2

VMware vSphere 6.0 VS Microsoft Hyper-v Server 2012 R2

With the emergence of vSphere 6.0, I wanted to write an article comparing vSphere 6.0 and Windows Server 2012 R2. I collected vSphere 6.0 features from a few blogs and the VMware community forum. Note that vSphere 6.0 is in a beta program, which means VMware can change anything before the final release. Many new functionalities of the vSphere 6.0 beta are already available in Windows Server 2012 R2. So let's have a quick look at both virtualization products.

Features | vSphere 6.0 | Hyper-v Server 2012 R2
Certificates | Certificate Authority | Active Directory Certificate Services
Certificate Store | Certificate Store | Certificate Store in Windows OS
Single Sign-On | VMware retained SSO 2.0 from vSphere 5.5 | Active Directory Domain Services
Database | vPostgres database for the vCenter Appliance, up to 8 vCenters | Microsoft SQL Server, no limitation
Management Tools | Web Client & VI client (VMware retained the VI client) | SCVMM console & Hyper-v Manager
Installer | Combined single installer with all input upfront | Combined single installer with all input upfront
vMotion | Long-distance migration up to 100+ms RTT | Multisite Hyper-v cluster and live migration
Storage Migration | Storage vMotion with shared and unshared storage | Hyper-v live storage migration between local and shared storage
Combined Cloud Products | Platform Services Controller (PSC) includes vCenter, vCOPs, vCloud Director, vCloud Automation | Microsoft System Center combines App Controller, Configuration Manager, Data Protection Manager, Operations Manager, Orchestrator, Service Manager, Virtual Machine Manager
Service Registration | View the services that are running in the system | Windows Services
Licensing | Platform Services Controller (PSC) includes licensing | Volume Activation role in Windows Server 2012 R2
Virtual Datacenters | A virtual datacenter aggregates CPU, memory, storage and network resources | Provision CPU, memory, storage and network using the Create Cloud wizard

Another key point of comparison: those planning to procure an FC tape library and maintain a virtual backup server should note that vSphere doesn't support FC tape even with NPIV, while Hyper-v supports FC tape using NPIV.

References:

http://www.wooditwork.com/2014/08/27/whats-new-vsphere-6-0-vcenter-esxi/

https://araihan.wordpress.com/2014/03/25/vmware-vs-hyper-v-can-microsoft-make-history-again/

https://araihan.wordpress.com/2013/01/24/microsofts-hyper-v-server-2012-and-system-center-2012-unleash-ko-punch-to-vmware/

https://araihan.wordpress.com/2015/08/20/hyper-v-server-2016-whats-new/

VMware vs Hyper-v: Can Microsoft Make History Again?

In 1852 Karl Marx published "The Eighteenth Brumaire of Louis Bonaparte". In that book, Marx remarks that history repeats itself, "the first as tragedy, then as farce", referring respectively to Napoleon I and to his nephew Louis Napoleon (Napoleon III).

Here I am not talking about Karl Marx; I am not a specialist on this matter. I am a computer geek. So why am I referring to Karl Marx? I believe the above remark can be connected to the history between Microsoft and Novell.

In my past blog I compared VMware and Hyper-v:

http://microsoftguru.com.au/2013/01/24/microsofts-hyper-v-server-2012-and-system-center-2012-unleash-ko-punch-to-vmware/

http://microsoftguru.com.au/2013/09/14/vsphere-5-5-is-catching-up-with-hyper-v-2012-r2/

http://microsoftguru.com.au/2013/04/07/is-vmwares-fate-heading-towards-novell/

I found similar arguments echoed by other commentators:

http://blogs.gartner.com/david_cappuccio/2009/06/30/just-a-thought-will-vmware-become-the-next-novell/

http://virtualizedgeek.com/2012/12/04/is-vmware-headed-the-slow-painful-death-of-novell/

Here is Gartner Inc.’s verdict:

http://www.gartner.com/technology/reprints.do?id=1-1GJA88J&ct=130628&st=sb

http://www.gartner.com/technology/reprints.do?id=1-1LV8IX1&ct=131016&st=sb

So the question is: can Microsoft defeat VMware? Can Microsoft make history again? Here is why I believe Microsoft will make history once again, regardless of what VMware fanboys think. Let's start…

What’s New in Windows Server 2012 R2 Hyper-V

Microsoft has traditionally put out point releases to its server operating systems about every two years, but Windows Server is no longer a traditional operating system: it is a cloud OS in true terms and uses. Let's see what's new in Windows Server 2012 R2 in terms of virtualization.

· New Generation 2 Virtual Machines

· Automatic Server OS Activation inside VMs

· Upgrade and Live Migration Improvements in Windows Server 2012 R2

· Online VHDX Virtual Disk Resize

· Live VM Export and Clone

· Linux Guest VM Enhancements

· Storage Quality of Service ( QoS )

· Guest Clustering with Shared VHDXs

· Hyper-V Replica Site-to-Site Replication Enhancements

Generation 2 VMs

Hyper-V in Windows Server 2012 R2 supports the concept of a totally new architecture based on modern hardware with no emulated devices. This makes it possible to add a number of new features, such as secure boot for VMs and booting off of virtual SCSI or virtual network adapters.

VM Direct Connect

Windows Server 2012 R2 Hyper-V adds VM Direct Connect, which allows a direct remote desktop connection to any running VM over what's now called the VM bus. It's also integrated into the Hyper-V management experience.

Extend replication to a third site

Hyper-V Replica in Windows Server 2012 is currently limited to a single replication target. This makes it difficult to support scenarios like a service provider wanting to act both as a target for a customer to replicate and a source to replicate to another offsite facility. Windows Server 2012 R2 and Hyper-V now provide a tertiary replication capability to support just such a scenario. By the same token, enterprises can now save one replica in-house and push a second replica off-site.

Compression for faster migration

Two new options in Windows Server 2012 R2 Hyper-V help improve the performance of live migrations. The first is the ability to enable compression on the data to reduce the total number of bytes transmitted over the wire. The obvious caveat is that tapping CPU resources for data compression could potentially impact other operations, so you'll need to take that into consideration. The second option, SMB Direct, requires network adapters that support RDMA. Microsoft's advice: if you have 10Gb RDMA-capable networking available, use RDMA (up to 10x improvement); otherwise, use compression (roughly 2x improvement). Compression is the default choice and it works for the large majority of use cases.
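
The migration performance option is set per host. A sketch with a placeholder host name:

```powershell
# Default: compress memory state during live migration
Set-VMHost -ComputerName "HOST1" -VirtualMachineMigrationPerformanceOption Compression

# With RDMA-capable adapters, switch to SMB Direct instead
Set-VMHost -ComputerName "HOST1" -VirtualMachineMigrationPerformanceOption SMB
```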

Online VM exporting and cloning

It’s now possible to export or clone a running VM from System Center Virtual Machine Manager 2012 R2 with a few mouse clicks. As with pretty much anything related to managing Windows Server 2012, you can accomplish the same task using Windows PowerShell.

Online VHDX resizing

In Windows Server 2012 Hyper-V, it is not possible to resize a virtual hard disk attached to a running VM. Windows Server 2012 R2 removes this restriction, making it possible to not only expand but even reduce the size of the virtual disk (VHDX format only) without stopping the running VM.
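
The resize described above is a single cmdlet on the host. A sketch with a placeholder path; the disk must be a VHDX attached to the VM's virtual SCSI controller:

```powershell
# Expand a VHDX that is attached to a running VM
Resize-VHD -Path "C:\VMs\VM01\data.vhdx" -SizeBytes 500GB

# Shrinking is also supported, down to the smallest size the file system allows
Resize-VHD -Path "C:\VMs\VM01\data.vhdx" -ToMinimumSize
```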

Storage QoS

Windows Server 2012 R2 includes the ability to limit individual VMs to a specific level of I/O throughput. The IOPS are measured by monitoring the actual disk rate to and from the attached virtual hard drives. If you have applications capable of consuming large amounts of I/O, you’ll want to consider this setting to ensure that a single I/O-hungry VM won’t starve neighbor VMs or take down the entire host.
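
The per-VM limit is set on the virtual hard disk itself. A hedged sketch with placeholder names and values:

```powershell
# Cap an I/O-hungry VM's disk at 500 IOPS and reserve a 100 IOPS floor
Set-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -ControllerNumber 0 `
    -ControllerLocation 0 -MaximumIOPS 500 -MinimumIOPS 100
```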

Dynamic Memory support for Linux

In the Windows Server 2012 R2 release, Hyper-V gains the ability to dynamically expand the amount of memory available to a running VM. This capability is especially handy for any Linux workload (notably Web servers) where the amount of memory needed by the VM changes over time. Windows Server 2012 R2 Hyper-V also brings Windows Server backups to Linux guests.
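
Dynamic memory is enabled per VM. A sketch with placeholder sizes (the VM must be off to change this setting):

```powershell
# Enable dynamic memory for a Linux web server VM
Set-VMMemory -VMName "Web01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
```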

Shared VHDX

With Windows Server 2012 R2 Hyper-V, Windows guest clusters (think traditional Windows Server failover clustering, but using a pair of VMs) no longer require an iSCSI or Fibre Channel SAN; they can be configured using commodity storage, namely a shared VHDX file stored on a Cluster Shared Volume. Note that while the clustered VMs can be live migrated as usual, a live storage migration of the VHDX file requires one of the cluster nodes to be taken offline.
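
A guest cluster's shared data disk can be attached like this; a sketch with placeholder VM names, assuming the VHDX already sits on a Cluster Shared Volume:

```powershell
# Attach the same shared VHDX to both guest-cluster nodes
Add-VMHardDiskDrive -VMName "Node1" -Path "C:\ClusterStorage\Volume1\shared.vhdx" `
    -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "Node2" -Path "C:\ClusterStorage\Volume1\shared.vhdx" `
    -SupportPersistentReservations
```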

Bigger Bang for the Buck: Licensing Windows Server 2012 R2

The Windows Server 2012 R2 product is streamlined and simple, making it easy for customers to choose the edition that is right for their needs.

Datacenter edition – unlimited Windows Server 2012 R2 virtualization licenses.

Standard edition – two virtual server licenses, for lightly virtualized environments.

Essentials edition – for small businesses with up to 25 users, running on servers with up to two processors.

Foundation edition – for small businesses with up to 15 users, running on single-processor servers.

Edition | Feature comparison | Licensing model | Server pricing*
Datacenter | Unlimited virtual OSEs, all features | Processor + CAL | $6,155
Standard | Two virtual OSEs, all features | Processor + CAL | $882
Essentials | 2 processors, one OSE, limited features | Server, 25-user limit | $501
Foundation | 1 processor, limited features | Server, 15-user limit | OEM only

Client Access Licenses (CALs) will continue to be required for access to Windows Server 2012 R2 servers, and management licenses continue to be required for endpoints managed by System Center. You need a Windows Server 2012 CAL to access Windows Server 2012. You also need additional CALs to access Remote Desktop Services (RDS) and Active Directory Rights Management Services (AD RMS).

What’s New SCVMM 2012 R2

· Public Cloud for Service Provider using Windows Azure 

· Private Cloud with System Center 2012 R2 VMM

· Any storage approach- Use any kind of Storage: DAS, SAN, NAS, Windows Server 2012 File Server, Scale-out File Server Cluster

· Networking – Management of physical network switches via OMI as well as virtual network infrastructure ( PVLANs, NV-GRE Virtualized Networks, NV-GRE Gateways )

· Virtualization host agnostic – Intel/AMD/OEM Hardware running Windows Server 2012/R2/2008 R2 Hyper-V, VMware or Citrix XenServer

· Cisco Nexus 1000V Switch

· Bootstrapping a repeatable architecture

· Bare-Metal Provisioning Scale-Out File Server Cluster and Storage Spaces

· Provisioning Synthetic Fibre Channel in Guest VMs using VMM

· Guest Clustering with Shared VHDXs

· VMM Integration with IP Address Management ( IPAM )

· Hybrid Networking with Windows Azure Pack and System Center 2012 R2 VMM

· Windows Azure Hyper-V Recovery Manager

· Delegating Access Per Private Cloud

· OM Dashboard for VMM Fabric Monitoring

Fire Power of System Center: Licensing System Center 2012 R2

System Center 2012 R2 comes in two versions: Datacenter and Standard. Both versions comprise the following components:

· Operations Manager

· Configuration Manager

· Data Protection Manager

· Service Manager

· Virtual Machine Manager

· Endpoint Protection

· Orchestrator

· App Controller

System Center is licensed per processor. System Center 2012 R2 Datacenter costs USD 3,607 and System Center 2012 R2 Standard costs USD 1,323. The System Center license comes with a SQL Server Standard edition license; this SQL Server can only be used for System Center purposes. You can virtualize an unlimited number of VMs with the SC 2012 R2 Datacenter edition.

Comparing Server 2008 R2 and Server 2012 R2 in terms of virtualization.

Hyper-v is not the same product you knew in Windows Server 2008. To clear the fog, the following table shows the improvements Microsoft has made over the years.

Comparing VMware with Windows Server 2012 R2

While VMware is still number one in the hypervisor market, the Redmond giant can leverage almost a billion Windows OS users globally, as well as its expertise in software and a robust range of services (including Azure, Bing, MSN, Office 365, Skype and many more). A new battleground between Microsoft and VMware makes 2014 a pivotal hybrid cloud year. The hybrid cloud could indeed give Microsoft the chance to prevail in ways that it couldn't with the launch of Hyper-V. Hyper-V's market share has been gradually increasing since early 2011; according to Gartner, Microsoft gained 28% hypervisor market share last year.

Let’s dig deeper into comparison….

The following comparison is based on Windows Server 2012 R2 Data Center edition and System Center 2012 R2 Data Center edition Vs vSphere 5.5 Enterprise Plus and vCenter Server 5.5.

Licensing:

Options | Microsoft | VMware
# of physical CPUs per license | 2 | 1
# of managed OSEs per license | Unlimited | Unlimited
# of Windows Server VM licenses per host | Unlimited | 0
Includes anti-virus / anti-malware protection | Yes | Yes
Includes full SQL Database Server licenses for management databases | Yes | No
Database, hosts & VMs | A single database license is enough for 1,000 hosts and 25,000 VMs per management server | Purchase additional database server licenses to scale beyond 100 hosts and 3,000 VMs with the vCenter Server Appliance
Includes licensing for enterprise operations monitoring and management of hosts, guest VMs and application workloads running within VMs | Yes | No
Includes licensing for private cloud management capabilities (pooled resources, self-service, delegation, automation, elasticity, chargeback) | Yes | No
Includes management tools for provisioning and managing VDI solutions for virtualized Windows desktops | Yes | No
Includes web-based management console | Yes | Yes

Virtualization Scalability:

Options | Microsoft | VMware
Maximum # of logical processors per host | 320 | 320
Maximum physical RAM per host | 4TB | 4TB
Maximum active VMs per host | 1,024 | 512
Maximum virtual CPUs per VM | 64 | 64
Hot-adjust virtual CPU resources of a VM | Yes | Yes
Maximum virtual RAM per VM | 1TB | 1TB
Hot-add virtual RAM to a VM | Yes | Yes
Dynamic memory management | Yes | Yes
Guest NUMA support | Yes | Yes
Maximum # of physical hosts per cluster | 64 | 32
Maximum # of VMs per cluster | 8,000 | 4,000
Virtual machine snapshots | Yes | Yes
# of snapshots per VM | 50 | 32
Integrated application load balancing for scaling out application tiers | Yes | No
Bare-metal deployment of new hypervisor hosts and clusters | Yes | Yes
Bare-metal deployment of new storage hosts and clusters | Yes | No
Manage GPU virtualization for advanced VDI graphics | Yes | Yes
Virtualization of USB devices | Yes | Yes
Virtualization of serial ports | Yes | Yes
Minimum disk footprint while still providing management of multiple virtualization hosts and guest VMs | ~800KB micro-kernelized hypervisor (Ring -1) plus ~5GB drivers + management (parent partition, Ring 0 + 3) | ~155MB monolithic hypervisor with drivers (Ring -1 + 0) plus ~4GB management (vCenter Server Appliance, Ring 3)
Boot from flash | Yes | Yes
Boot from SAN | Yes | Yes

VM Portability, High Availability and Disaster Recovery:

Features | Microsoft | VMware
Live migration of running VMs | Yes | Yes
Live migration of running VMs without shared storage between hosts | Yes | Yes
Live migration using compression of VM memory state | Yes | No
Live migration over RDMA-enabled network adapters | Yes | No
Live migration of VMs clustered with Windows Server Failover Clustering (MSCS guest cluster) | Yes | No
Highly available VMs | Yes | Yes
Failover prioritization of highly available VMs | Yes | Yes
Affinity rules for highly available VMs | Yes | Yes
Cluster-Aware Updating for orchestrated patch management of hosts | Yes | Yes
Guest OS application monitoring for highly available VMs | Yes | Yes
VM guest clustering via shared virtual hard disk files | Yes | Yes
Maximum # of nodes per VM guest cluster | 64 | 5
Intelligent placement of new VM workloads | Yes | Yes
Automated load balancing of VM workloads across hosts | Yes | Yes
Power optimization of hosts when load-balancing VMs | Yes | Yes
Fault-tolerant VMs | No | Yes
Backup of VMs and applications | Yes | Yes
Site-to-site asynchronous VM replication | Yes | Yes

Storage:

Features | Microsoft | VMware
Maximum # of virtual SCSI hard disks per VM | 256 | 60 (PVSCSI), 120 (virtual SATA)
Maximum size per virtual hard disk | 64TB | 62TB
Native 4K disk support | Yes | No
Boot VM from virtual SCSI disks | Yes | Yes
Hot-add virtual SCSI storage to running VMs | Yes | Yes
Hot-expand virtual SCSI hard disks of running VMs | Yes | Yes
Hot-shrink virtual SCSI hard disks of running VMs | Yes | No
Storage quality of service | Yes | Yes
Virtual Fibre Channel to VMs | Yes | Yes
Live migrate virtual storage of running VMs | Yes | Yes
Flash-based read cache | Yes | Yes
Flash-based write-back cache | Yes | No
SAN-like storage virtualization using commodity hard disks | Yes | No
Automated tiered storage between SSD and HDD using commodity hard disks | Yes | No
Can consume storage via iSCSI, NFS, Fibre Channel and SMB 3.0 | Yes | Yes
Can present storage via iSCSI, NFS and SMB 3.0 | Yes | No
Storage multipathing | Yes | Yes
SAN offload capability | Yes | Yes
Thin provisioning and trim storage | Yes | Yes
Storage encryption | Yes | No
Deduplication of storage used by running VMs | Yes | No
Provision VM storage based on storage classifications | Yes | Yes
Dynamically balance and re-balance storage load based on demand | Yes | Yes
Integrated provisioning and management of shared storage | Yes | No

Networking:

Features | Microsoft | VMware
Distributed switches across hosts | Yes | Yes
Extensible virtual switches | Yes | Replaceable, not extensible
NIC teaming | Yes | Yes
# of NICs per team | 32 | 32
Private VLANs (PVLAN) | Yes | Yes
ARP spoofing protection | Yes | No
DHCP snooping protection | Yes | No
Router advertisement guard protection | Yes | No
Virtual port ACLs | Yes | Yes
Trunk mode to VMs | Yes | Yes
Port monitoring | Yes | Yes
Port mirroring | Yes | Yes
Dynamic Virtual Machine Queue | Yes | Yes
IPsec task offload | Yes | No
Single Root I/O Virtualization (SR-IOV) | Yes | Yes
Virtual Receive Side Scaling (vRSS) | Yes | Yes
Network quality of service | Yes | Yes
Network virtualization / software-defined networking (SDN) | Yes | No
Integrated network management of both virtual and physical network components | Yes | No

Virtualized Operating Systems Support: 

Operating Systems | Microsoft | VMware
Windows Server 2012 R2 | Yes | Yes
Windows 8.1 | Yes | Yes
Windows Server 2012 | Yes | Yes
Windows 8 | Yes | Yes
Windows Server 2008 R2 SP1 | Yes | Yes
Windows Server 2008 R2 | Yes | Yes
Windows 7 with SP1 | Yes | Yes
Windows 7 | Yes | Yes
Windows Server 2008 SP2 | Yes | Yes
Windows Home Server 2011 | Yes | No
Windows Small Business Server 2011 | Yes | No
Windows Vista with SP2 | Yes | Yes
Windows Server 2003 R2 SP2 | Yes | Yes
Windows Server 2003 SP2 | Yes | Yes
Windows XP with SP3 | Yes | Yes
Windows XP x64 with SP2 | Yes | Yes
CentOS 5.7, 5.8, 6.0-6.4 | Yes | Yes
CentOS Desktop 5.7, 5.8, 6.0-6.4 | Yes | Yes
Red Hat Enterprise Linux 5.7, 5.8, 6.0-6.4 | Yes | Yes
Red Hat Enterprise Linux Desktop 5.7, 5.8, 6.0-6.4 | Yes | Yes
SUSE Linux Enterprise Server 11 SP2 & SP3 | Yes | Yes
SUSE Linux Enterprise Desktop 11 SP2 & SP3 | Yes | Yes
OpenSUSE 12.1 | Yes | Yes
Ubuntu 12.04, 12.10, 13.10 | Yes | Yes
Ubuntu Desktop 12.04, 12.10, 13.10 | Yes | Yes
Oracle Linux 6.4 | Yes | Yes
Mac OS X 10.7.x & 10.8.x | No | Yes
Sun Solaris 10 | No | Yes

Windows Azure:

Here is a special factor that puts Microsoft ahead of VMware: Microsoft Azure for on-premises and service provider clouds.

Windows Azure Pack ships with Windows Server 2012 R2. The Azure code enables high-scale hosting and management of web sites and virtual machines.

Microsoft is leveraging its service provider expertise and footprint for Azure development while extending Azure into data centers on Windows servers. That gives Microsoft access to most, if not all, of the world's data centers; it could become a powerhouse in months instead of years. Widespread adoption of the Microsoft Azure platform gives Microsoft a winning edge against competitors like VMware.

On premises client install Windows Azure pack to manage their system center 2012 R2 and use Azure as self-service and administration portal for IT department and department within organization. To gain similar functionality in VMware you have to buy vCloud Director, Chargeback and vShield separately.

Conclusion:

This is a clash of titanic proportions between Microsoft and VMware, and ultimately the end user and customer will be the winner. Both companies are striving for innovation in the hypervisor and virtualization marketplace; end users will enjoy new technology, and businesses will gain from the price battle between the two vendors. These two key components could significantly increase the adoption of hybrid cloud operating models. Microsoft holds another trump card for cloud service providers: Exchange 2013 and Lync 2013, both already widely used for Software as a Service (SaaS), while VMware has nothing to offer in the messaging and collaboration space. Microsoft could become for the cloud what it became for the PC, enforcing consistency across clouds to an extent that perhaps no other player could. As the cloud shifts from infrastructure to apps, Microsoft could be in an increasingly powerful position and grow Hyper-V's share even further by adding SaaS to its product line. History may repeat itself, with Microsoft defeating VMware as it defeated Novell eDirectory, Corel WordPerfect, and IBM Notes.

References:

http://blogs.technet.com/b/keithmayer/archive/2013/10/15/vmware-or-microsoft-comparing-vsphere-5-5-and-windows-server-2012-r2-at-a-glance.aspx#.UxaKbYXazIV

http://www.datacentertcotool.com/

http://www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx#fbid=xrWmRt7RXCi

http://wikibon.org/wiki/v/VMware_vs_Microsoft:_It%27s_time_to_stop_the_madness

http://www.infoworld.com/d/microsoft-windows/7-ways-windows-server-2012-pays-itself-205092

http://www.trefis.com/stock/vmw/articles/221206/growing-competition-for-vmware-in-virtualization-market/2014-01-07

Supported Server and Client Guest Operating Systems on Hyper-V

Compatibility Guide for Guest Operating Systems Supported on VMware vSphere

vSphere 5.5 is Catching Up with Hyper-v 2012 R2

Is VMware catching up with Microsoft? Yes, you heard correctly: VMware is catching up with Microsoft. VMware released its latest update, vSphere 5.5, to catch up with Microsoft Windows Server 2012 R2. Here is a short comparison of the VMware improvements made to catch up with Hyper-V 2012 R2.

Options | vSphere 5.5 | Hyper-V 2012 R2
Host CPU cores | 320 (previous version: 160) | 320
vCPU/host | 2,048 | 2,048
vCPU/guest | 64 (previous version: 8) | 64
Host memory | 4 TB (previous version: 2 TB) | 4 TB
vRAM/VM | 1 TB (previous version: 32 GB) | 1 TB
VMs/host | 2,048 (previous version: 512) | 2,048
Maximum nodes | 32 | 64
Max VMs/cluster | 4,000 | 8,000

Networking

vSphere 5.5:

• Link Aggregation Control Protocol enhancements
• Traffic filtering
• Quality of Service tagging
• SR-IOV enhancements
• Enhanced host-level packet capture
• 40 Gb NIC support
• 10 GigE simultaneous live migrations limited to 8 VMs

Hyper-V 2012 R2:

• Support for SR-IOV networking devices
• Dynamic Virtual Machine Queue (D-VMQ)
• Accelerated network I/O
• IPsec Task Offload for virtual machines
• Metering virtual machine use in a multitenant environment
• IP Address Management (IPAM)
• Hyper-V Network Virtualization
• Hyper-V Extensible Switch
• Quality of Service (QoS)
• Remote Desktop Protocol (RDP) WAN optimizations
• WebSocket protocol
• Server Name Indication (SNI)
• DirectAccess and VPN
• Private VLANs (PVLANs)
• Trunk mode to virtual machines
• Unlimited 10 GigE simultaneous live migrations
• Site-to-site network connections using private IP addresses
• NVGRE (Network Virtualization using Generic Routing Encapsulation), with Cisco support

Storage

vSphere 5.5:

• Support for 62 TB VMDK
• MSCS updates
• vSphere 5.1 feature updates
• 16 Gb end-to-end Fibre Channel support
• PDL AutoRemove
• vSphere Replication interoperability
• vSphere Replication multi-point-in-time snapshot retention
• vSphere Flash Read Cache
• 64 TB VMFS
• 64 TB RDM

Hyper-V 2012 R2:

• 64 TB VHDX
• VHD de-duplication
• High availability, performance, reliability, and scalability features on inexpensive commodity storage
• Offloaded Data Transfer (ODX)
• Resilient File System (ReFS)
• Support for large NTFS volumes
• Thin provisioning and trim
• Cluster Shared Volumes version 2
• iSCSI Software Target
• Support for VMware virtual machines and NFS 4.1
• High-performance, highly available storage with SMB
• SMB Scale-Out
• Virtual Fibre Channel
• 256 TB+ pass-through disks (RDM)

Options | vSphere 5.5 | Hyper-V 2012 R2
Local storage | 64 TB (previous version: 2 TB) | 64 TB
Dynamic Memory | Yes | Yes
Resource metering | Yes (previous version: No) | Yes
Hardware GPU | Yes (previous version: No) | Yes
Unified VDI | No (buy VMware View) | Yes
Guest OS application monitoring | Yes (previous version: No) | Yes
Incremental backups | Yes (previous version: No) | Yes
VM replication | Yes (previous version: No) | Yes
Guest clustering with Dynamic Memory | No | Yes
Multi-tenancy | No (buy VMware vCloud) | Yes

VMware goes after biz critical apps with vSphere 5.5

VMware what’s New

Windows Server 2012 R2 what’s New

Microsoft Virtual Machine Converter: Switching from vSphere to Hyper-v Made Easy

Are you having difficulty funding a license renewal of expensive VMware vSphere? There is an alternative brand that adds greater value to the business, reducing costs and accelerating your journey to the cloud. Making the shift from VMware to Microsoft could be the wisest decision you make after years of working as a CIO or IS Manager. By migrating from VMware to Microsoft, you gain a unified infrastructure licensing model and simplified vendor management; of course, it is easier on your wallet too.

Whether you are looking to add value to your organisation, save cost, support growth, or you are a fanatical environmentalist reducing your carbon footprint, Hyper-V is the correct choice for you. A move to Microsoft's virtualization and management platform can help you better meet your business needs. Simply by buying Windows Server 2012 Datacenter, you get the cloud computing benefits of unlimited virtualization and lower costs, consistently and predictably over time.

System Center 2012 enables physical, virtual, private cloud, and public cloud management using a single platform. It offers support for multi-hypervisor management, third-party integration and process management, and deep application diagnostics and insight. You can see what is happening inside your applications' performance, remediate issues faster, and achieve increased agility for your organization.

With the help of free tools like the Microsoft Assessment and Planning Toolkit (MAP) and the Microsoft Virtual Machine Converter (MVMC), you can quickly, easily, and safely migrate over to Hyper-V. For enterprise customers with large numbers of virtual machines to migrate, the Migration Automation Toolkit (MAT) provides the scalability to handle mass migrations in an automated fashion. System Center 2012 and Hyper-V Server 2012 support guest virtual machines of all major Linux and UNIX distributions, and of course Microsoft operating systems as well.
In a nutshell, Microsoft Virtual Machine Converter:
  • Provides a quick, low-risk option for VMware customers to evaluate Hyper-V.
  • Converts VMware virtual machines to Hyper-V virtual machines.
  • Converts virtual hardware and keeps the same configuration as the original virtual machine.
  • Supports a clean migration to Hyper-V by uninstalling VMware Tools on the source virtual machine.
  • Provides a GUI or scriptable CLI and Windows PowerShell, making it simple to perform virtual machine conversions.
  • Installs integration services for Windows Server 2003 guests that are converted to Hyper-V virtual machines.
  • Supports conversion of virtual machines from VMware vSphere 4.1 and 5.0 hosts.
  • Supports migration of a guest machine that is part of a failover cluster.
  • Supports offline conversion of VMware-based virtual hard disks (VMDK) to the Hyper-V virtual hard disk file format (.vhd).
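As an illustration of the scriptable side, later MVMC releases (2.0 and up) ship a PowerShell module that can perform an offline disk conversion. This is a minimal sketch; the install path is the default, and the source/destination paths are hypothetical examples:

```powershell
# Load the MVMC cmdlet module (default install location; adjust if needed)
Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'

# Offline conversion of a VMware VMDK to a dynamically expanding Hyper-V VHDX.
# 'D:\Exports\web01.vmdk' and 'D:\Converted\' are hypothetical paths.
ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath 'D:\Exports\web01.vmdk' `
                              -DestinationLiteralPath 'D:\Converted\' `
                              -VhdType DynamicHardDisk `
                              -VhdFormat Vhdx
```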
      • Relevant Articles
        Microsoft Virtual Machine Converter Solution Accelerator
        Migration Automation Toolkit (MAT)
        Cost Calculator
        Download Windows Server 2012
        Download System Center 2012
        Hyper-v vs vSphere
        Is VMware’s fate heading towards Novell?

        Windows Server 2012: Failover Clustering Deep Dive

Physical Hardware Requirements - Up to 23 instances of SQL Server require the following resources:

1. Processor: 2 processors per instance; 23 instances of SQL Server on a single cluster node would require 46 CPUs.
2. Memory: 2 GB of memory per instance; 23 instances of SQL Server on a single cluster node would require 48 GB of RAM (2 GB of additional memory for the operating system).
3. Network adapters: Microsoft-certified network adapters (converged adapter, iSCSI adapter, or HBA).
4. Storage adapter: multipath I/O (MPIO) supported hardware.
5. Storage: shared storage that is compatible with Windows Server 2008/2012. Storage requirements include the following:
• Use basic disks, not dynamic disks.
• Use NTFS partitions.
• Use either master boot record (MBR) or GUID partition table (GPT).
• For storage volumes larger than 2 terabytes, use GUID partition table (GPT).
• For storage volumes smaller than 2 terabytes, use master boot record (MBR).
• 4 disks per instance; 23 instances of SQL Server as a cluster disk array would require 92 disks.
• Cluster storage must not be on Windows Distributed File System (DFS).

        Software Requirements

        Download SQL Server 2012 installation media. Review SQL Server 2012 Release Notes. Install the following prerequisite software on each failover cluster node and then restart nodes once before running Setup.

        1. Windows PowerShell 2.0
        2. .NET Framework 3.5 SP1
        3. .NET Framework 4
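On Windows Server 2012, the .NET Framework 3.5 prerequisite can be added from an elevated PowerShell prompt (PowerShell and .NET 4 are present by default). The media path below is an assumption; point it at your installation media's side-by-side store if the payload is not cached locally:

```powershell
# .NET Framework 3.5 SP1 (NET-Framework-Core); -Source is only needed when the
# feature files are not on the local disk. 'D:\sources\sxs' is an example path.
Install-WindowsFeature -Name NET-Framework-Core -Source 'D:\sources\sxs'

# Verify the PowerShell version already present on the node
$PSVersionTable.PSVersion
```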

        Active Directory Requirements

        1. Cluster nodes must be member of same Active Directory Domain Services
        2. The servers in the cluster must use Domain Name System (DNS) for name resolution
        3. Use cluster naming convention for example Production Physical Node: DC1PPSQLNODE01 or Production virtual node DC2PVSQLNODE02

Unsupported Configuration

The following configurations are unsupported:

1. Do not include these characters in the cluster name: <, >, ", ', &
2. Never install SQL Server on a domain controller.
3. Never install cluster services on a domain controller or on Forefront TMG 2010.

        Permission Requirements

The system admin or project engineer performing the cluster creation tasks must be at least a member of the Domain Users security group with permission to create domain computer objects in Active Directory, and must be a member of the local Administrators group on each clustered server.

        Network settings and IP addresses requirements

You need at least two network cards in each cluster node: one network card for domain or client connectivity and another network card for the heartbeat network, as shown below.

        image

The following are the unique requirements for an MS cluster:

1. Use identical network settings on each node, such as speed, duplex mode, flow control, and media type.
2. Ensure that each private network uses a unique subnet.
3. Ensure that each node has a heartbeat network in the same IP address range.
4. Ensure that each network has a unique subnet, whether the nodes are placed in a single geographic location or in diverse locations.

        Domain Network should be configured with IP Address, Subnet Mask, Default Gateway and DNS record.

        image

        Heartbeat network should be configured with only IP address and subnet mask.

        image

        Additional Requirements

        1. Verify that antivirus software is not installed on your WSFC cluster.
        2. Ensure that all cluster nodes are configured identically, including COM+, disk drive letters, and users in the administrators group.
        3. Verify that you have cleared the system logs in all nodes and viewed the system logs again.
        4. Ensure that the logs are free of any error messages before continuing.
        5. Before you install or update a SQL Server failover cluster, disable all applications and services that might use SQL Server components during installation, but leave the disk resources online.
        6. SQL Server Setup automatically sets dependencies between the SQL Server cluster group and the disks that will be in the failover cluster. Do not set dependencies for disks before Setup.
        7. If you are using SMB File share as a storage option, the SQL Server Setup account must have Security Privilege on the file server. To do this, using the Local Security Policy console on the file server, add the SQL Server setup account to Manage auditing and security log rights.

        Supported Operating Systems

        • Windows Server 2012 64-bit x64 Datacenter
        • Windows Server 2012 64-bit x64 Standard
        • Windows Server 2008 R2 SP1 64-bit x64 Datacenter
        • Windows Server 2008 R2 SP1 64-bit x64 Enterprise
        • Windows Server 2008 R2 SP1 64-bit x64 Standard
        • Windows Server 2008 R2 SP1 64-bit x64 Web

        Understanding Quorum configuration

In a simple definition, quorum is a voting mechanism in a Microsoft cluster: each node has one vote. In an MSCS cluster, this voting mechanism constantly monitors how many nodes are online and how many votes are required for the cluster to run smoothly. Each node contains a copy of the cluster configuration, and that information is also stored on the witness disk/directory. For an MSCS cluster, you have to choose among four possible quorum configurations.

        • Node Majority- Recommended for clusters with an odd number of nodes. 

        clip_image002

        • Node and Disk Majority – Recommended for clusters with an even number of nodes. Can sustain (Total no of Node)/2 failures if a disk witness node is online. Can sustain ((Total no of Node)/2)-1 failures if a disk witness node is offline.

        clip_image004 

        clip_image006 

        • Node and File Share Majority- Clusters with special configurations. Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness.

        clip_image008 

        clip_image010 

        • No Majority: Disk Only (not recommended)

        Why quorum is necessary? Network problems can interfere with communication between cluster nodes. This can cause serious issues. To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster must use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many “votes” constitutes a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until the quorum exists again.
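The quorum model can be inspected and changed with the FailoverClusters PowerShell module as well; a minimal sketch, where the witness share path is a hypothetical example:

```powershell
Import-Module FailoverClusters

# Show the current quorum configuration of the local cluster
Get-ClusterQuorum

# Switch to Node and File Share Majority, using a witness share on a third site
# ('\\witness01\ClusterWitness$' is a hypothetical path)
Set-ClusterQuorum -NodeAndFileShareMajority '\\witness01\ClusterWitness$'
```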

        Understanding a multi-site cluster environment

        Hardware: A multi-site cluster requires redundant hardware with correct capacity, storage functionality, replication between sites, and network characteristics such as network latency.

        Number of nodes and corresponding quorum configuration: For a multi-site cluster, Microsoft recommend having an even number of nodes and, for the quorum configuration, using the Node and File Share Majority option that is, including a file share witness as part of the configuration. The file share witness can be located at a third site, that is, a different location from the main site and secondary site, so that it is not lost if one of the other two sites has problems.

Network configuration—deciding between multiple subnets and a VLAN: configuring a multi-site cluster with different subnets is supported. However, when using multiple subnets, it is important to consider how clients will discover services or applications that have just failed over: the DNS servers must update one another with the new IP address before clients can discover the service or application that has failed over. If you use multiple subnets rather than a stretched VLAN, consider reducing the Time to Live (TTL) of the DNS records so that clients re-resolve the clustered name sooner.

        Tuning of heartbeat settings: The heartbeat settings include the frequency at which the nodes send heartbeat signals to each other to indicate that they are still functioning, and the number of heartbeats that a node can miss before another node initiates failover and begins taking over the services and applications that had been running on the failed node. In a multi-site cluster, you might want to tune the “heartbeat” settings. You can tune these settings for heartbeat signals to account for differences in network latency caused by communication across subnets.

        Replication of data: Replication of data between sites is very important in a multi-site cluster, and is accomplished in different ways by different hardware vendors. Therefore, the choice of the replication process requires careful consideration. There are many options you will find while replicating data. But before you make any decision, consult with your storage vendor, server hardware vendor and software vendors. Depending on vendor like NetApp and EMC, your replication design will change. Review the following considerations:

        Choosing replication level ( block, file system, or application level): The replication process can function through the hardware (at the block level), through the operating system (at the file system level), or through certain applications such as Microsoft Exchange Server (which has a feature called Cluster Continuous Replication or CCR). Work with your hardware and software vendors to choose a replication process that fits the requirements of your organization.

        Configuring replication to avoid data corruption: The replication process must be configured so that any interruptions to the process will not result in data corruption, but instead will always provide a set of data that matches the data from the main site as it existed at some moment in time. In other words, the replication must always preserve the order of I/O operations that occurred at the main site. This is crucial, because very few applications can recover if the data is corrupted during replication.

        Choosing between synchronous and asynchronous replication: The replication process can be synchronous, where no write operation finishes until the corresponding data is committed at the secondary site, or asynchronous, where the write operation can finish at the main site and then be replicated (as a background operation) to the secondary site.

Synchronous replication means that the replicated data is always up to date, but it slows application performance while each operation waits for replication. Synchronous replication is best for multi-site clusters that use high-bandwidth, low-latency connections. Typically, this means that a cluster using synchronous replication must not be stretched over a great distance; it is usually performed within about 200 km, where reliable and robust WAN connectivity with enough bandwidth is available. For example, if you have a GigE or 10 GigE MPLS connection, you might choose synchronous replication, depending on how large your data set is.

        Asynchronous Replication can help maximize application performance, but if failover to the secondary site is necessary, some of the most recent user operations might not be reflected in the data after failover. This is because some operations that were finished recently might not yet be replicated. Asynchronous replication is best for clusters where you want to stretch the cluster over greater geographical distances with no significant application performance impact. Asynchronous replication is performed when distance is more than 200km and WAN connectivity is not robust between sites.

        Utilizing Windows Storage Server 2012 as shared storage

        Windows® Storage Server 2012 is the Windows Server® 2012 platform of choice for network-attached storage (NAS) appliances offered by Microsoft partners.

        Windows Storage Server 2012 enhances the traditional file serving capabilities and extends file based storage for application workloads like Hyper-V, SQL, Exchange and Internet information Services (IIS). Windows Storage Server 2012 provides the following features for an organization.

        Workgroup Edition

        • As many as 50 connections
        • Single processor socket
        • Up to 32 GB of memory
        • As many as 6 disks (no external SAS)

        Standard Edition

        • No license limit on number of connections
        • Multiple processor sockets
        • No license limit on memory
        • No license limit on number of disks
        • De-duplication, virtualization (host plus 2 virtual machines for storage and disk management tools), and networking services (no domain controller)
        • Failover clustering for higher availability
        • Microsoft BranchCache for reduced WAN traffic

        Presenting Storage from Windows Storage Server 2012 Standard

        From the Server Manager, Click Add roles and features, On the Before you begin page, Click Next. On the installation type page, Click Next. 

        image

        On the Server Roles Selection page, Select iSCSI Target and iSCSI target storage provider, Click Next

        image

        On the Feature page, Click Next. On the Confirm page, Click Install. Click Close.

        On the Server Manager, Click File and Storage Services, Click iSCSI

        image

        On the Task Button, Click New iSCSI Target, Select the Disk drive from where you want to present storage, Click Next

        image

        Type the Name of the Storage, Click Next

        image

        Type the size of the shared disk, Click Next

        image

        Select New iSCSI Target, Click Next

        image

        Type the name of the target, Click Next

        image

        Select the IP Address on the Enter a value for selected type, Type the IP address of cluster node, Click Ok. Repeat the process and add IP address for the cluster nodes.   

        image

        image

Type the CHAP information. Note that the CHAP password must be at least 12 characters. Click Next to continue.

        image

        Click Create to create a shared storage. Click Close once done.

        image

        image

Repeat these steps to create all shared drives of your preferred size, and create a 2 GB shared drive for the quorum disk.

        image
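The same target can be built with the iSCSI Target Server cmdlets that ship with the role. A sketch under assumed names: the target name, VHDX paths, sizes, and initiator IP addresses are all hypothetical examples:

```powershell
# Create the virtual disks that back the LUNs (one data disk plus a 2 GB quorum disk)
New-IscsiVirtualDisk -Path 'E:\iSCSIVirtualDisks\Data1.vhdx'  -SizeBytes 100GB
New-IscsiVirtualDisk -Path 'E:\iSCSIVirtualDisks\Quorum.vhdx' -SizeBytes 2GB

# Create a target and restrict it to the two cluster nodes by IP address
New-IscsiServerTarget -TargetName 'SQLCluster' `
    -InitiatorIds 'IPAddress:10.0.0.11','IPAddress:10.0.0.12'

# Map the virtual disks to the target so the nodes can see them as LUNs
Add-IscsiVirtualDiskTargetMapping -TargetName 'SQLCluster' -Path 'E:\iSCSIVirtualDisks\Data1.vhdx'
Add-IscsiVirtualDiskTargetMapping -TargetName 'SQLCluster' -Path 'E:\iSCSIVirtualDisks\Quorum.vhdx'
```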

        Deploying a Failover Cluster in Microsoft environment

        Step1: Connect the cluster servers to the networks and storage

        1. Review the details about networks in Hardware Requirements for a Two-Node Failover Cluster and Network infrastructure and domain account requirements for a two-node failover cluster, earlier in this guide.

        2. Connect and configure the networks that the servers in the cluster will use.

3. Follow the manufacturer’s instructions for physically connecting the servers to the storage. For this article, we are using the software iSCSI initiator. Open the iSCSI initiator from Server Manager > Tools > iSCSI Initiator. Type the IP address of the target, that is, the IP address of the Windows Storage Server 2012 machine. Click Quick Connect, then click Done.

        image
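The initiator side, and the disk initialization in the next step, can be scripted too; a minimal sketch, where the portal address 10.0.0.5 is a hypothetical example:

```powershell
# Register the storage server as a target portal and connect to its targets persistently
New-IscsiTargetPortal -TargetPortalAddress '10.0.0.5'
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# On ONE node only: bring the new raw disks online, initialize as GPT, and format NTFS
Get-Disk | Where-Object PartitionStyle -eq 'RAW' | ForEach-Object {
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
    New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -Confirm:$false
}
```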

4. Open Computer Management, click Disk Management, then initialize and format the disk using either the MBR or GPT disk type. Go to the second server, open Computer Management, click Disk Management, and bring the disk online simply by right-clicking the disk and clicking Bring Online. Ensure that the disks (LUNs) that you want to use in the cluster are exposed to the servers that you will cluster (and only those servers).

image

5. On one of the servers that you want to cluster, click Start, click Administrative Tools, click Computer Management, and then click Disk Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.) In Disk Management, confirm that the cluster disks are visible.

image

6. If you want to have a storage volume larger than 2 terabytes, and you are using the Windows interface to control the format of the disk, convert that disk to the partition style called GUID partition table (GPT). To do this, back up any data on the disk, delete all volumes on the disk and then, in Disk Management, right-click the disk (not a partition) and click Convert to GPT Disk.

7. Check the format of any exposed volume or LUN. Use the NTFS file format.

        Step 2: Install the failover cluster feature

        In this step, you install the failover cluster feature. The servers must be running Windows Server 2012.

        1. Open Server Manager, click Add roles and features. Follow the screen, go to Feature page.

        2. In the Add Features Wizard, click Failover Clustering, and then click Install.

        image

3. Follow the instructions in the wizard to complete the installation of the feature. When the wizard finishes, close it.

4. Repeat the process for each server that you want to include in the cluster.
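The feature can also be installed on both nodes from one elevated prompt. DC1PPSQLNODE01 follows the naming convention used earlier in this article; DC1PPSQLNODE02 is a hypothetical second node:

```powershell
# Install the Failover Clustering feature (plus management tools) on both nodes remotely
Invoke-Command -ComputerName DC1PPSQLNODE01, DC1PPSQLNODE02 -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}
```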

        Step 3: Validate the cluster configuration

        Before creating a cluster, I strongly recommend that you validate your configuration. Validation helps you confirm that the configuration of your servers, network, and storage meets a set of specific requirements for failover clusters.

        1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.

        image

        2. Confirm that Failover Cluster Manager is selected and then, in the center pane under Management, click Validate a Configuration. Click Next.

        image

        3. On the Select Server Page, type the fully qualified domain name of the nodes you would like to add in the cluster, then click Add.

        image 

        4. Follow the instructions in the wizard to specify the two servers and the tests, and then run the tests. To fully validate your configuration, run all tests before creating a cluster. Click next

        image

        5. On the confirmation page, Click Next

        image

6. The Summary page appears after the tests run. To view the results, click Report, then click Finish. You will be prompted to create a cluster if you select Create the cluster now using the validated nodes.

        image 

7. While still on the Summary page, click View Report and read the test results.

        image

To view the results of the tests after you close the wizard, see

SystemRoot\Cluster\Reports\Validation Report date and time.html

where SystemRoot is the folder in which the operating system is installed (for example, C:\Windows).

8. As necessary, make changes in the configuration and rerun the tests.

        Step4: Create a Failover cluster

        1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.

        image

2. Confirm that Failover Cluster Manager is selected and then, in the center pane under Management, click Create a cluster. (If you did not close the validation wizard, it can open the cluster creation wizard automatically.) Follow the instructions in the wizard to specify the following, clicking Next as you go:

        • The servers to include in the cluster.
        • The name of the cluster i.e. virtual name of cluster
        • IP address of the virtual node

        image

        3. Verify the IP address and cluster node name and click Next

        image

        4. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report. Click Finish.

        image

        image
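Validation and cluster creation also have one-line PowerShell equivalents; a sketch, where the node names, the cluster name, and the static address are hypothetical examples:

```powershell
Import-Module FailoverClusters

# Validate the configuration of the two nodes, then create the cluster
Test-Cluster -Node DC1PPSQLNODE01, DC1PPSQLNODE02
New-Cluster -Name DC1PVSQLCLUS01 -Node DC1PPSQLNODE01, DC1PPSQLNODE02 `
            -StaticAddress 10.0.0.20
```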

        Step5: Verify Cluster Configuration

On the Cluster Manager, click Networks, right-click each network, and click Properties. Make sure Allow clients to connect through this network is unchecked for the heartbeat network. Verify the IP range, then click OK.

image

On the Cluster Manager, click Networks, right-click each network, and click Properties. Make sure Allow clients to connect through this network is checked for the domain network. Verify the IP range, then click OK.

image

On the Cluster Manager, click Storage, click Disks, and verify that the quorum disk and shared disks are available. You can add more disks by simply clicking Add New Disk in the Task pane.

        image

        An automated MSCS cluster configuration will add quorum automatically. However you can manually configure desired cluster quorum by right clicking on cluster>More Actions>Configure Cluster Quorum Settings.

        image
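The same network roles can be set from PowerShell. The network names below are whatever Failover Cluster Manager assigned in your environment ("Cluster Network 1/2" are hypothetical placeholders):

```powershell
Import-Module FailoverClusters

# Role 3 = allow cluster and client traffic (the domain network)
(Get-ClusterNetwork -Name 'Cluster Network 1').Role = 3

# Role 1 = cluster (heartbeat) traffic only, no client connections
(Get-ClusterNetwork -Name 'Cluster Network 2').Role = 1
```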

        Configuring a Hyper-v Cluster

In the previous steps you configured an MSCS cluster; to configure a Hyper-V cluster, all you need to do is install the Hyper-V role on each cluster node. From Server Manager, click Add roles and features, follow the screens, and install the Hyper-V role. A reboot is required to install the Hyper-V role. Repeat until the role is installed on both nodes.

Note that at this stage you should add storage for virtual machines and networks for live migration, a storage network if using iSCSI, a virtual machine network, and a management network. Detailed configuration is out of scope for this article, as I am writing about the MSCS cluster, not Hyper-V.

        image
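Installing the role on both nodes can be scripted in one line; the node names here are hypothetical examples:

```powershell
# Install the Hyper-V role (plus management tools) on both nodes and reboot them
Invoke-Command -ComputerName DC1PHVNODE01, DC1PHVNODE02 -ScriptBlock {
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
}
```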

        from the Cluster Manager, Right Click on Networks, Click Network for Live Migration, Select appropriate network for live Migration.

        image

        If you would like to have virtual machine additional fault tolerance like Hyper-v Replica, Right Click Cluster virtual node, Click Configure Role, Click Next.

        image

        From Select Role page, Click Hyper-v Replica broker, Click Next. Follow the screen.

        image

From the Cluster Manager, right-click Roles, click Virtual Machines, and click New Hard Disk to configure the virtual machine storage and the virtual machine configuration disk drive. Once done, from the Cluster Manager, right-click Roles, click Virtual Machines, and click New Virtual Machine to create a virtual machine.

        image
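Creating a VM on cluster storage and making it highly available can be done with two cmdlets; the VM name, memory size, and Cluster Shared Volume paths below are hypothetical examples:

```powershell
# Create the VM and its disk on a Cluster Shared Volume
New-VM -Name 'VM01' -MemoryStartupBytes 2GB `
       -NewVHDPath 'C:\ClusterStorage\Volume1\VM01\VM01.vhdx' -NewVHDSizeBytes 60GB `
       -Path 'C:\ClusterStorage\Volume1\VM01'

# Register the VM as a clustered role so it can fail over between nodes
Add-ClusterVirtualMachineRole -VMName 'VM01'
```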

        Backing up Clustered data, application or server

There are multiple methods for backing up information that is stored on Cluster Shared Volumes in a failover cluster running one of the following:

        • Windows Server 2008 R2
        • Hyper-V Server 2008 R2
        • Windows Server 2012
        • Hyper-V Server 2012

        Operating System Level backup

        The backup application runs within a virtual machine in the same way that a backup application runs within a physical server. When there are multiple virtual machines being managed centrally, each virtual machine can run a backup “agent” (instead of running an individual backup application) that is controlled from the central management server. Backup agent backs up application data, files, folder and systems state of operating systems.

        clip_image012

        Hyper-V Image Level backup

        The backup captures all the information about multiple virtual machines that are configured in a failover cluster that is using Cluster Shared Volumes. The backup application runs through Hyper-V, which means that it must use the VSS Hyper-V writer. The backup application must also be compatible with Cluster Shared Volumes. The backup application backs up the virtual machines that are selected by the administrator, including all the VHD files for those virtual machines, in one operation. VM1_Data.VHDX, VM2_data.VHDX and VM1_System.VHDX, VM2_system.VHDX are stored in a backup disk or tape. VM1_System.VHDX and VM2_System.VHDX contain system files and page files i.e. system state, snapshot and VM configuration are stored as well.


        Publishing an Application or Service in a Failover Cluster Environment

        1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.

        2. Right-click Roles, and then click Configure Role to publish a service or application.


        3. Select the service or application that you want to make highly available, and then click Next.


        4. Follow the instructions in the wizard to specify the following details:

        • A name for the clustered file server
        • An IP address for the virtual node


        5. On the Select Storage page, select the storage volume or volumes that the clustered file server should use, and then click Next.


        6. On the Confirmation page, review the settings, and then click Next.


        7. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report.

        8. To close the wizard, click Finish.


        9. In the console tree, make sure Services and Applications is expanded, and then select the clustered file server that you just created.

        10. After completing the wizard, confirm that the clustered file server comes online. If it does not, review the state of the networks and storage and correct any issues. Then right-click the new clustered application or service and click Bring this service or application online.
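        The clustered file server created by the wizard can also be built with a single PowerShell command; the role name, IP address, and disk below are illustrative placeholders:

```powershell
# Create a highly available file server role in one step (values are examples)
Add-ClusterFileServerRole -Name "FS-Cluster1" `
    -StaticAddress 192.168.1.50 `
    -Storage "Cluster Disk 4"

# Confirm that the new role has come online
Get-ClusterGroup -Name "FS-Cluster1"
```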

        Perform a Failover Test

        To perform a basic test of failover, right-click the clustered file server, click Move this service or application to another node, and click the available choice of node. When prompted, confirm your choice. You can observe the status changes in the center pane of the snap-in as the clustered file server instance is moved.
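        The same failover test can be scripted; the group and node names are placeholders:

```powershell
# Move the clustered file server to another node and check its new owner
Move-ClusterGroup -Name "FS-Cluster1" -Node "Node2"
Get-ClusterGroup -Name "FS-Cluster1" | Format-List Name, OwnerNode, State
```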

        Configuring a New Failover Cluster by Using Windows PowerShell

        Task

        PowerShell command

        Run validation tests on a list of servers.

        Test-Cluster -Node server1,server2

        Where server1 and server2 are servers that you want to validate.

        Create a cluster using defaults for most settings.

        New-Cluster -Name cluster1 -Node server1,server2

        Where server1 and server2 are the servers that you want to include in the new cluster.

        Configure a clustered file server using defaults for most settings.

        Add-ClusterFileServerRole -Storage "Cluster Disk 4"

        Where Cluster Disk 4 is the disk that the clustered file server will use.

        Configure a clustered print server using defaults for most settings.

        Add-ClusterPrintServerRole -Storage "Cluster Disk 5"

        Where Cluster Disk 5 is the disk that the clustered print server will use.

        Configure a clustered virtual machine using defaults for most settings.

        Add-ClusterVirtualMachineRole -VirtualMachine VM1

        Where VM1 is an existing virtual machine that you want to place in a cluster.

        Add available disks.

        Get-ClusterAvailableDisk | Add-ClusterDisk

        Review the state of nodes.

        Get-ClusterNode

        Run validation tests on a new server.

        Test-Cluster -Node newserver,node1,node2

        Where newserver is the new server that you want to add to a cluster, and node1 and node2 are nodes in that cluster.

        Prepare a node for maintenance.

        Get-ClusterNode node2 | Get-ClusterGroup | Move-ClusterGroup

        Where node2 is the node from which you want to move clustered services and applications.

        Pause a node.

        Suspend-ClusterNode node2

        Where node2 is the node that you want to pause.

        Resume a node.

        Resume-ClusterNode node2

        Where node2 is the node that you want to resume.

        Stop the Cluster service on a node.

        Stop-ClusterNode node2

        Where node2 is the node on which you want to stop the Cluster service.

        Start the Cluster service on a node.

        Start-ClusterNode node2

        Where node2 is the node on which you want to start the Cluster service.

        Review the signature and other properties of a cluster disk.

        Get-ClusterResource "Cluster Disk 2" | Get-ClusterParameter

        Where Cluster Disk 2 is the disk for which you want to review the disk signature.

        Move Available Storage to a particular node.

        Move-ClusterGroup "Available Storage" -Node node1

        Where node1 is the node that you want to move Available Storage to.

        Turn on maintenance for a disk.

        Suspend-ClusterResource "Cluster Disk 2"

        Where Cluster Disk 2 is the disk in cluster storage for which you are turning on maintenance.

        Turn off maintenance for a disk.

        Resume-ClusterResource "Cluster Disk 2"

        Where Cluster Disk 2 is the disk in cluster storage for which you are turning off maintenance.

        Bring a clustered service or application online.

        Start-ClusterGroup "Clustered Server 1"

        Where Clustered Server 1 is a clustered server (such as a file server) that you want to bring online.

        Take a clustered service or application offline.

        Stop-ClusterGroup "Clustered Server 1"

        Where Clustered Server 1 is a clustered server (such as a file server) that you want to take offline.

        Move or Test a clustered service or application.

        Move-ClusterGroup "Clustered Server 1"

        Where Clustered Server 1 is a clustered server (such as a file server) that you want to test or move.
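        Several of the commands above are typically combined when draining a node for maintenance; node2 is a placeholder node name:

```powershell
# Move all clustered roles off node2, then pause the node
Get-ClusterNode -Name node2 | Get-ClusterGroup | Move-ClusterGroup
Suspend-ClusterNode -Name node2

# After maintenance is complete, resume the node and verify its state
Resume-ClusterNode -Name node2
Get-ClusterNode -Name node2
```

        On Windows Server 2012, Suspend-ClusterNode also supports a -Drain parameter that moves the roles and pauses the node in a single step.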

        Migrating clustered services and applications to a new failover cluster

        Use the following instructions to migrate clustered services and applications from your old cluster to your new cluster. After the Migrate a Cluster Wizard runs, it leaves most of the migrated resources offline, so that you can perform additional steps before you bring them online. If the new cluster uses old storage, plan how you will make LUNs or disks inaccessible to the old cluster and accessible to the new cluster (but do not make changes yet).

        1. To open the failover cluster snap-in, click Administrative Tools, and then click Failover Cluster Manager.

        2. In the console tree, if the cluster that you created is not displayed, right-click Failover Cluster Manager, click Manage a Cluster, and then select the cluster that you want to configure.

        3. In the console tree, expand the cluster that you created to see the items underneath it.

        4. If the clustered servers are connected to a network that is not to be used for cluster communications (for example, a network intended only for iSCSI), then under Networks, right-click that network, click Properties, and then click Do not allow cluster network communication on this network. Click OK.

        5. In the console tree, select the cluster. Under Configure, click Migrate services and applications.

        6. Read the first page of the Migrate a Cluster Wizard, and then click Next.

        7. Specify the name or IP Address of the cluster or cluster node from which you want to migrate resource groups, and then click Next.

        8. Click View Report. The wizard also provides a report after it finishes, which describes any additional steps that might be needed before you bring the migrated resource groups online.

        9. Follow the instructions in the wizard to complete the following tasks:

          • Choose the resource group or groups that you want to migrate.
          • Specify whether the resource groups to be migrated will use new storage or the same storage that you used in the old cluster. If the resource groups will use new storage, you can specify the disk that each resource group should use. Note that if new storage is used, you must handle all copying or moving of data or folders—the wizard does not copy data from one shared storage location to another.
          • If you are migrating from a cluster running Windows Server 2003 that has Network Name resources with Kerberos protocol enabled, specify the account name and password for the Active Directory account that is used by the Cluster service on the old cluster.
        10. After the wizard runs and the Summary page appears, click View Report.

        11. When the wizard completes, most migrated resources will be offline. Leave them offline at this stage.

        Completing the transition from the old cluster to the new cluster

        Perform the following steps to complete the transition to the new cluster running Windows Server 2012.

        1. Prepare for clients to experience downtime, probably brief.

        2. Take each resource group offline on the old cluster.

        3. Complete the transition for the storage:

          • If the new cluster will use old storage, follow your plan for making LUNs or disks inaccessible to the old cluster and accessible to the new cluster.
          • If the new cluster will use new storage, copy the appropriate folders and data to the storage. As needed for disk access on the old cluster, bring individual disk resources online on that cluster. (Keep other resources offline, to ensure that clients cannot change data on the disks in storage.) Also as needed, on the new cluster, use Disk Management to confirm that the appropriate LUNs or disks are visible to the new cluster and not visible to any other servers.

        4. If the new cluster uses mount points, adjust the mount points as needed, and make each disk resource that uses a mount point dependent on the resource of the disk that hosts the mount point.

        5. Bring the migrated services or applications online on the new cluster. To perform a basic test of failover on the new cluster, expand Services and Applications, and then click a migrated service or application that you want to test.

        6. To perform a basic test of failover for the migrated service or application, under Actions (on the right), click Move this service or application to another node, and then click an available choice of node. When prompted, confirm your choice. You can observe the status changes in the center pane of the snap-in as the clustered service or application is moved.

        7. If there are any issues with failover, review the following:

          • View events in Failover Cluster Manager. To do this, in the console tree, right-click Cluster Events, and then click Query. In the Cluster Events Filter dialog box, select the criteria for the events that you want to display, or to return to the default criteria, click the Reset button. Click OK. To sort events, click a heading, for example, Level or Date and Time.
          • Confirm that necessary services, applications, or server roles are installed on all nodes. Confirm that services or applications are compatible with Windows Server 2012 and run as expected.
          • If you used old storage for the new cluster, rerun the Validate a Cluster Configuration Wizard to confirm the validation results for all LUNs or disks in the storage.
          • Review migrated resource settings and dependencies.
          • If you migrated one or more Network Name resources with Kerberos protocol enabled, confirm that the following permissions change was made in Active Directory Users and Computers on a domain controller. In the computer accounts (computer objects) of your Kerberos protocol-enabled Network Name resources, Full Control must be assigned to the computer account for the failover cluster.
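        The cluster events described above can also be pulled from PowerShell, which is convenient when comparing several nodes; the log name is the standard failover clustering operational channel:

```powershell
# Show the 50 most recent failover clustering events, newest first
Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" -MaxEvents 50 |
    Sort-Object TimeCreated -Descending |
    Format-Table TimeCreated, LevelDisplayName, Message -AutoSize
```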

        Migrating Cluster Resources with New Mount Points

        When you are working with new storage for your cluster migration, you have some flexibility in the order in which you complete the tasks. The tasks that you must complete include creating the mount points, running the Migrate a Cluster Wizard, copying the data to the new storage, and confirming the disk letters and mount points for the new storage. After completing the other tasks, configure the disk resource dependencies in Failover Cluster Manager.

        A useful way to keep track of disks in the new storage is to give them labels that indicate your intended mount point configuration. For example, in the new storage, when you are mounting a new disk in a folder called Mount1-1 on another disk, you can also label the mounted disk as Mount1-1. (This assumes that the label Mount1-1 is not already in use in the old storage.) Then when you run the Migrate a Cluster Wizard and you need to specify that disk for a particular migrated resource, you can look at the list and select the disk labeled Mount1-1. Then you can return to Failover Cluster Manager to configure the disk resource for Mount1-1 so that it is dependent on the appropriate resource, for example, the resource for disk F. Similarly, you would configure the disk resources for all other disks mounted on disk F so that they depended on the disk resource for disk F.
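        The dependency configuration described above has a PowerShell equivalent; the resource names below match the Mount1-1 example and are placeholders:

```powershell
# Make the mounted disk's resource depend on the disk that hosts the mount point
Add-ClusterResourceDependency -Resource "Cluster Disk Mount1-1" -Provider "Cluster Disk F"

# Verify the resulting dependency expression
Get-ClusterResourceDependency -Resource "Cluster Disk Mount1-1"
```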

        Migrating DHCP to a Cluster Running Windows Server 2012

        A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.

        This guide describes the steps that are necessary when migrating a clustered DHCP server to a cluster running Windows Server 2012, beyond the standard steps required for migrating clustered services and applications in general. The guide indicates when to use the Migrate a Cluster Wizard in the migration, but does not describe the wizard in detail.

        Step 1: Review requirements and create a cluster running Windows Server 2012

        Before beginning the migration described in this guide, review the requirements for a cluster running Windows Server 2012, install the failover clustering feature on servers running Windows Server 2012, and create a new cluster.

        Step 2: On the old cluster, adjust registry settings and permissions before migration

        To prepare for migration, you must make changes to registry settings and permissions on each node of the old cluster.

        1. Confirm that you have a current backup of the old cluster, one that includes the configuration information for the clustered DHCP server (also called the DHCP resource group).

        2. Confirm that the clustered DHCP server is online on the old cluster. It must be online while you complete the remainder of this procedure.

        3. On a node of the old cluster, open a command prompt as an administrator.

        4. Type regedit, and then navigate to:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters

        5. Choose the option that applies to your cluster: If the old cluster is running Windows Server 2008, skip to step 6. If the old cluster is running Windows Server 2003 or Windows Server 2003 R2:

          • Right-click Parameters, click New, click String Value, and for the name of the new value, type: ServiceMain
          • Right-click the new value (ServiceMain), click Modify, and for the value data, type: ServiceEntry
          • Right-click Parameters again, click New, click Expandable String Value, and for the name of the new value, type: ServiceDll
          • Right-click the new value (ServiceDll), click Modify, and for the value data, type: %systemroot%\system32\dhcpssvc.dll
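        Where Windows PowerShell is available on the old nodes, the two values above can be created with a short script instead of through Registry Editor; this is a sketch of the same steps, so verify the result in regedit afterward:

```powershell
# Registry location used by the DHCP Server service
$params = "HKLM:\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters"

# ServiceMain (REG_SZ) and ServiceDll (REG_EXPAND_SZ), as described above.
# PowerShell does not expand %systemroot%, so the value is stored literally.
New-ItemProperty -Path $params -Name ServiceMain -PropertyType String -Value "ServiceEntry"
New-ItemProperty -Path $params -Name ServiceDll -PropertyType ExpandString -Value "%systemroot%\system32\dhcpssvc.dll"
```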

        6. Right-click Parameters, and then click Permissions.

        7. Click Add. Locate the appropriate account and assign permissions:

          • On Windows Server 2008: Click Locations, select the local server, and then click OK. Under Enter the object names to select, type NT Service\DHCPServer. Click OK. Select the DHCPServer account and then select the check box for Full Control.
          • On Windows Server 2003 or Windows Server 2003 R2: Click Locations, ensure that the domain name is selected, and then click OK. Under Enter the object names to select, type Everyone, and then click OK (and confirm your choice if prompted). Under Group or user names, select Everyone and then select the check box for Full Control.

        8. Repeat the process on the other node or nodes of the old cluster.

        Step 3: On a node in the old cluster, prepare for export, and then export the DHCP database to a file

        As part of migrating a clustered DHCP server, on the old cluster, you must export the DHCP database to a file. This requires preparatory steps that prevent the cluster from restarting the clustered DHCP resource during the export. The following procedure describes the process. On the old cluster, start the clustering snap-in and configure the restart setting for the clustered DHCP server (DHCP resource group):

        1. Click Start, click Administrative Tools, and then click Failover Cluster Management. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

        2. If the console tree is collapsed, expand the tree under the cluster that you are migrating settings from. Expand Services and Applications and then, in the console tree, click the clustered DHCP server.

        3. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, do not restart.

        This step prevents the resource from restarting during the export of the DHCP database, which would stop the export.

        1. On the node of the old cluster that currently owns the clustered DHCP server, confirm that the clustered DHCP server is running. Then open a command prompt window as an administrator.

        2. Type: netsh dhcp server export <exportfile> all

        Where <exportfile> is the name of the file to which you want to export the DHCP database.

        3. After the export is complete, in the clustering interface (Cluster Administrator or Failover Cluster Management), right-click the clustered DHCP server (DHCP resource group) and then click either Take Offline or Take this service or application offline. If the command is unavailable, in the center pane, right-click each online resource and click either Take Offline or Take this resource offline. If prompted for confirmation, confirm your choice.

        4. If the old cluster is running Windows Server 2003 or Windows Server 2003 R2, obtain the account name and password for the Cluster service account (the Active Directory account used by the Cluster service on the old cluster). Alternatively, you can obtain the name and password of another account that has access permissions for the Active Directory computer accounts (objects) that the old cluster uses. For a migration from a cluster running Windows Server 2003 or Windows Server 2003 R2, you will need this information for the next procedure.

        Step 4: On the new cluster, configure a network for DHCP clients and run the Migrate a Cluster Wizard

        Microsoft recommends that you make the network settings on the new cluster as similar as possible to the settings on the old cluster. In any case, on the new cluster, you must have at least one network that DHCP clients can use to communicate with the cluster. The following procedure describes the cluster setting needed on the client network, and indicates when to run the Migrate a Cluster Wizard.

        1. On the new cluster (running Windows Server 2012), click Server Manager, click Tools, and then click Failover Cluster Manager.

        2. If the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.

        3. If the console tree is collapsed, expand the tree under the cluster.

        4. Expand Networks, right-click the network that clients will use to connect to the DHCP server, and then click Properties.

        5. Make sure that Allow cluster network communication on this network and Allow clients to connect through this network are selected.

        6. To prepare for the migration process, find and take note of the drive letter used for the DHCP database on the old cluster. Ensure that the same drive letter exists on the new cluster. (This drive letter is one of the settings that the Migrate a Cluster Wizard will migrate.)

        7. In Failover Cluster Manager, in the console tree, select the new cluster, and then under Configure, click Migrate services and applications.

        8. Use the Migrate a Cluster Wizard to migrate the DHCP resource group from the old cluster to the new cluster. If you are using new storage on the new cluster, during the migration, be sure to specify the disk that has the same drive letter on the new cluster as was used for the DHCP database on the old cluster. The wizard migrates resources and settings, but not the DHCP database.

        Step 5: On the new cluster, import the DHCP database, bring the clustered DHCP server online, and adjust permissions

        To complete the migration process, import the DHCP database that you exported to a file in Step 3. Then you can bring the clustered DHCP server online and adjust the settings that were changed temporarily during the migration process.

        1. If you are reusing the old cluster storage for the new cluster, confirm that you have stored the exported DHCP database file in a safe location. Then be sure to delete all the DHCP files other than the exported DHCP database file from the old storage. This includes the DHCP database, log, and backup files.

        2. On the new cluster, in Failover Cluster Manager, expand Services and Applications, right-click the clustered DHCP server, and then click Bring this service or application online. The DHCP service starts with an empty database.

        3. Click the clustered DHCP server.

        4. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, do not restart. This step prevents the resource from restarting during the import of the DHCP database, which would stop the import.

        5. In the new cluster, on the node that currently owns the migrated DHCP server, view the disk used by the migrated DHCP server, and make sure that you have copied the exported DHCP database file to this disk.

        6. In the new cluster, on the node that currently owns the migrated DHCP server, open a command prompt as an administrator. Change to the disk used by the migrated DHCP server.

        7. Type: netsh dhcp server import <exportfile>

        Where <exportfile> is the filename of the file to which you exported the DHCP database.

        8. If the migrated DHCP server is not online, in Failover Cluster Manager, under Services and Applications, right-click the migrated DHCP server, and then click Bring this service or application online.

        9. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, attempt restart on current node.

        This returns the resource to the expected setting, instead of the “do not restart” setting that was temporarily needed during the import of the DHCP database.

        10. If the cluster was migrated from Windows Server 2003 or Windows Server 2003 R2, after the clustered DHCP server is online on the new cluster, make the following changes to permissions in the registry:

        • On the node that owns the clustered DHCP server, open a command prompt as an administrator.
        • Type regedit, and then navigate to:
          HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters
        • Right-click Parameters, and then click Permissions.
        • Click Add, click Locations, and then select the local server.
        • Under Enter the object names to select, type NT Service\DHCPServer and then click OK. Select the DHCPServer account and then select the check box for Full Control. Then click Apply.
        • Select the Everyone account (created through steps earlier in this topic) and then click Remove. This removes the account from the list of those that are assigned permissions.

        11. Perform the preceding steps only after DHCP is online on the new cluster. After you complete these steps, you can test the clustered DHCP server and begin to provide DHCP services to clients.

        Configuring a Multisite SQL Server Failover Cluster

        To install or upgrade a SQL Server failover cluster, you must run the Setup program on each node of the failover cluster. To add a node to an existing SQL Server failover cluster, you must run SQL Server Setup on the node that is to be added to the SQL Server failover cluster instance. Do not run Setup on the active node to manage the other nodes. The following options are available for SQL Server failover cluster installation:

        Option 1: Integrated Installation with Add Node

        Create and configure a single-node SQL Server failover cluster instance. When you configure the node successfully, you have a fully functional failover cluster instance. At this point, it does not have high availability because there is only one node in the failover cluster. On each node to be added to the SQL Server failover cluster, run Setup with Add Node functionality to add that node.

        Option 2: Advanced/Enterprise Installation

        After you run Prepare Failover Cluster on one node, Setup creates the ConfigurationFile.ini file, which lists all the settings that you specified. On the additional nodes to be prepared, instead of following these steps, you can supply the autogenerated ConfigurationFile.ini file from the first node as input to the Setup command line. This step prepares the nodes to be clustered, but there is no operational instance of SQL Server at the end of this step.
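        As a sketch, the prepare/complete flow can be driven from the Setup command line; the actions and the /ConfigurationFile parameter are documented SQL Server Setup options, while other required parameters (such as accepting the license terms) are omitted here for brevity:

```powershell
# On each node to be prepared (ConfigurationFile.ini copied from the first node)
.\Setup.exe /QS /ACTION=PrepareFailoverCluster /ConfigurationFile=ConfigurationFile.ini

# On one prepared node, complete and bring the failover cluster instance online
.\Setup.exe /QS /ACTION=CompleteFailoverCluster /ConfigurationFile=ConfigurationFile.ini
```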


        After the nodes are prepared for clustering, run Setup on one of the prepared nodes. This step configures and finishes the failover cluster instance. At the end of this step, you will have an operational SQL Server failover cluster instance and all the nodes that were prepared previously for that instance will be the possible owners of the newly-created SQL Server failover cluster.

        Follow this procedure to install a new SQL Server failover cluster by using the integrated (simple) cluster installation:

        1. Insert the SQL Server installation media, and from the root folder, double-click Setup.exe. To install from a network share, browse to the root folder on the share, and then double-click Setup.exe.
        2. The Installation Wizard starts the SQL Server Installation Center. To create a new cluster installation of SQL Server, click New SQL Server failover cluster installation on the Installation page.


        3. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK.


        4. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report. To continue, click Next.
        5. On the Setup Support Files page, click Install to install the Setup support files.
        6. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue.


        7. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.
        8. On the Product Key page, indicate whether you are installing a free edition of SQL Server, or whether you have a PID key for a production version of the product.
        9. On the License Terms page, read the license agreement, and then select the check box to accept the license terms and conditions.


        10. To help improve SQL Server, you can also enable the feature usage option and send reports to Microsoft. Click Next to continue.


        11. On the Feature Selection page, select the components for your installation. You can select any combination of check boxes, but only the Database Engine and Analysis Services support failover clustering. Other selected components will run as stand-alone features without failover capability on the current node on which you are running Setup.


        12. The prerequisites for the selected features are displayed in the right-hand pane. SQL Server Setup will install any prerequisites that are not already installed during the installation step described later in this procedure. SQL Server Setup then runs another set of rules, based on the features that you selected, to validate your configuration.


        13. On the Instance Configuration page, specify whether to install a default or a named instance.
        • SQL Server Network Name — specify a network name for the new SQL Server failover cluster. This is the name of the virtual node of the cluster and is used to identify your failover cluster on the network.
        • Instance ID — by default, the instance name is used as the instance ID, which identifies the installation directories and registry keys for your instance of SQL Server. This applies to both default and named instances. For a default instance, the instance name and instance ID are MSSQLSERVER. To use a nondefault instance ID, select the Instance ID box and provide a value.
        • Instance root directory — by default, the instance root directory is C:\Program Files\Microsoft SQL Server. To specify a nondefault root directory, use the field provided, or click the ellipsis button to locate an installation folder.


        14. Detected SQL Server instances and features on this computer — the grid shows instances of SQL Server that are on the computer where Setup is running. If a default instance is already installed on the computer, you must install a named instance of SQL Server. Click Next to continue.


        15. The Disk Space Requirements page calculates the required disk space for the features that you specify and compares it to the available disk space on the computer where Setup is running. Use the Cluster Resource Group page to specify the cluster resource group name where the SQL Server virtual server resources will be located. To specify the SQL Server cluster resource group name, you have two options:
        • Use the drop-down box to specify an existing group to use.
        • Type the name of a new group to create. Be aware that the name “Available Storage” is not a valid group name.


        16. On the Cluster Disk Selection page, select the shared cluster disk resource for your SQL Server failover cluster. More than one disk can be specified. Click Next to continue.


        17. On the Cluster Network Configuration page, specify the IP type and IP address for your failover cluster instance, and then click Next to continue. Note that this IP address will resolve to the name of the virtual node that you specified in an earlier step.


        18. On the Server Configuration — Service Accounts page, specify login accounts for the SQL Server services. The actual services that are configured on this page depend on the features that you selected to install.


        1. Use this page to specify Cluster Security Policy. Use default setting. Click Next to continue. Work flow for the rest of this topic depends on the features that you have specified for your installation. You might not see all the pages, depending on your selections (Database Engine, Analysis Services, Reporting Services).
        2. You can assign the same login account to all SQL Server services, or you can configure each service account individually. The startup type is set to manual for all cluster-aware services, including full-text search and SQL Server Agent, and cannot be changed during installation. Microsoft recommends that you configure service accounts individually to provide least privileges for each service, where SQL Server services are granted the minimum permissions they have to have complete their tasks. To specify the same logon account for all service accounts in this instance of SQL Server, provide credentials in the fields at the bottom of the page. When you are finished specifying login information for SQL Server services, click Next.
• On the Server Configuration – Collation tab, use the default collations for the Database Engine and Analysis Services.
• Use the Database Engine Configuration — Account Provisioning page to specify the following:
• Select Windows Authentication or Mixed Mode Authentication for your instance of SQL Server.

        image

        1. Use the Database Engine Configuration – Data Directories page to specify nondefault installation directories. To install to default directories, click Next. Use the Database Engine Configuration – FILESTREAM page to enable FILESTREAM for your instance of SQL Server. Click Next to continue.

        image

1. Use the Analysis Services Configuration — Account Provisioning page to specify users or accounts that will have administrator permissions for Analysis Services. You must specify at least one system administrator for Analysis Services. To add the account under which SQL Server Setup is running, click Add Current User. To add or remove accounts from the list of system administrators, click Add or Remove, and then edit the list of users, groups, or computers that will have administrator privileges for Analysis Services. When you are finished editing the list, click OK. Verify the list of administrators in the configuration dialog box. When the list is complete, click Next.

        image

        1. Use the Analysis Services Configuration — Data Directories page to specify nondefault installation directories. To install to default directories, click Next.

        image

1. Use the Reporting Services Configuration page to specify the kind of Reporting Services installation to create. For a failover cluster installation, the option is set to Unconfigured Reporting Services installation; you must configure Reporting Services after you complete the installation. That said, there is no harm in selecting the Install and configure option if you are not a SQL expert.

        image

1. On the Error Reporting page, specify the information that you want to send to Microsoft to help improve SQL Server. By default, the options for error reporting are disabled.

        image

        1. The System Configuration Checker runs one more set of rules to validate your configuration with the SQL Server features that you have specified.

        image

        1. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.

        image

        1. During installation, the Installation Progress page provides status so that you can monitor installation progress as Setup continues. After installation, the Complete page provides a link to the summary log file for the installation and other important notes. To complete the SQL Server installation process, click Close.
        2. If you are instructed to restart the computer, do so now. It is important to read the message from the Installation Wizard when you have finished with Setup.
        3. To add nodes to the single-node failover you just created, run Setup on each additional node and follow the steps for Add Node operation.
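The Add Node operation can also be run unattended from the command line. The following is a minimal sketch, not a command taken from this article; the instance name, account, and password placeholders are assumptions you must replace with your own values.

```powershell
# Run on the node being added to the existing SQL Server failover cluster.
# All <...> placeholders are examples.
Setup.exe /q /ACTION=AddNode /INSTANCENAME="MSSQLSERVER" `
  /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>" `
  /AGTSVCACCOUNT="<DomainName\UserName>" /AGTSVCPASSWORD="<StrongPassword>" `
  /IACCEPTSQLSERVERLICENSETERMS
```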

        SQL Advanced/Enterprise Failover Cluster Install

        Step1: Prepare Environment

        1. Insert the SQL Server installation media, and from the root folder, double-click Setup.exe.

        2. Windows Installer 4.5 is required, and may be installed by the Installation Wizard. If you are prompted to restart your computer, restart and then start SQL Server Setup again.

3. After the prerequisites are installed, the Installation Wizard starts the SQL Server Installation Center. To prepare the node for clustering, move to the Advanced page and then click Advanced cluster preparation.

        4. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

        5. On the Setup Support Files page click Install to install the Setup support files.

        6. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

7. On the Language Selection page, you can specify the language. To continue, click Next.

8. On the Product Key page, enter the product key and click Next.

9. On the License Terms page, accept the license terms and click Next to continue.

10. On the Feature Selection page, select the components for your installation as you did for the simple installation described earlier.

        11. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.

        12. To complete the SQL Server installation process, click Close.

        13. If you are instructed to restart the computer, do so now.

14. Repeat the previous steps to prepare the other nodes for the failover cluster. You can also use the autogenerated configuration file to run the prepare step on the other nodes. A ConfigurationFile.ini is generated in C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20130603_014118\ConfigurationFile.ini, which is shown below.

        image
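For reference, a prepare-step configuration file looks roughly like the following. This is an illustrative sketch under assumed values, not the exact file from the screenshot; the feature list, instance names, and options will differ in your environment.

```ini
; SQL Server 2012 Configuration File (illustrative sketch)
[OPTIONS]
ACTION="PrepareFailoverCluster"
FEATURES=SQLENGINE,AS,RS
INSTANCENAME="MSSQLSERVER"
INSTANCEID="MSSQLSERVER"
QUIET="True"
IACCEPTSQLSERVERLICENSETERMS="True"
```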

        Step2 Install SQL Server

        1. After preparing all the nodes as described in the prepare step, run Setup on one of the prepared nodes, preferably the one that owns the shared disk. On the Advanced page of the SQL Server Installation Center, click Advanced cluster completion.

        2. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

        3. On the Setup Support Files page, click Install to install the Setup support files.

        4. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

5. On the Language Selection page, you can specify the language. To continue, click Next.

6. Use the Cluster Node Configuration page to select the instance name prepared for clustering.

7. Use the Cluster Resource Group page to specify the cluster resource group name where SQL Server virtual server resources will be located. On the Cluster Disk Selection page, select the shared cluster disk resource for your SQL Server failover cluster. Click Next to continue.

        8. On the Cluster Network Configuration page, specify the network resources for your failover cluster instance. Click Next to continue.

9. Now follow the simple installation steps to select the Database Engine, Reporting, Analysis, and Integration Services.

        10. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.

        11. Once installation is completed, click Close.
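The two advanced phases above also have command-line equivalents. A hedged sketch, assuming placeholder values (instance, account, network, and disk names) that you must replace:

```powershell
# Phase 1 - run on every node to be clustered (Advanced cluster preparation).
Setup.exe /q /ACTION=PrepareFailoverCluster /FEATURES=SQLENGINE,AS,RS `
  /INSTANCENAME="MSSQLSERVER" /SQLSVCACCOUNT="<DomainName\UserName>" `
  /SQLSVCPASSWORD="<StrongPassword>" /IACCEPTSQLSERVERLICENSETERMS

# Phase 2 - run once, on the prepared node that owns the shared disk
# (Advanced cluster completion).
Setup.exe /q /ACTION=CompleteFailoverCluster /INSTANCENAME="MSSQLSERVER" `
  /FAILOVERCLUSTERGROUP="SQL Server (MSSQLSERVER)" `
  /FAILOVERCLUSTERNETWORKNAME="<VirtualServerName>" `
  /FAILOVERCLUSTERIPADDRESSES="IPv4;<IPAddress>;Cluster Network 1;<SubnetMask>" `
  /FAILOVERCLUSTERDISKS="<ClusterDiskName>" /SQLSYSADMINACCOUNTS="<DomainName\GroupName>"
```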

Follow this procedure to remove a node from an existing SQL Server failover cluster:

        1. Insert the SQL Server installation media. From the root folder, double-click setup.exe. To install from a network share, navigate to the root folder on the share, and then double-click Setup.exe.

2. The Installation Wizard launches the SQL Server Installation Center. To remove a node from an existing failover cluster instance, click Maintenance in the left-hand pane, and then select Remove node from a SQL Server failover cluster.

        3. The System Configuration Checker will run a discovery operation on your computer. To continue, click OK.

        4. After you click install on the Setup Support Files page, the System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue.

        5. On the Cluster Node Configuration page, use the drop-down box to specify the name of the SQL Server failover cluster instance to be modified during this Setup operation. The node to be removed is listed in the Name of this node field.

        6. The Ready to Remove Node page displays a tree view of options that were specified during Setup. To continue, click Remove.

        7. During the remove operation, the Remove Node Progress page provides status.

        8. The Complete page provides a link to the summary log file for the remove node operation and other important notes. To complete the SQL Server remove node, click Close.
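Node removal can likewise be scripted. A minimal sketch; the instance name is a placeholder:

```powershell
# Run on the node being removed from the SQL Server failover cluster.
Setup.exe /q /ACTION=RemoveNode /INSTANCENAME="MSSQLSERVER"
```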

        Using Command Line Installation of SQL Server

1. To install a new, stand-alone instance with the SQL Server Database Engine, Replication, and Full-Text Search components, run the following command:

Setup.exe /q /ACTION=Install /FEATURES=SQL /INSTANCENAME=MSSQLSERVER /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>" /IACCEPTSQLSERVERLICENSETERMS

2. To prepare a new, stand-alone instance with the SQL Server Database Engine, Replication, and Full-Text Search components, and Reporting Services, run the following command:

Setup.exe /q /ACTION=PrepareImage /FEATURES=SQL,RS /INSTANCEID=<MYINST> /IACCEPTSQLSERVERLICENSETERMS

3. To complete a prepared, stand-alone instance that includes the SQL Server Database Engine, Replication, and Full-Text Search components, run the following command:

Setup.exe /q /ACTION=CompleteImage /INSTANCENAME=MYNEWINST /INSTANCEID=<MYINST> /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>"

4. To upgrade an existing instance or failover cluster node from SQL Server 2005, SQL Server 2008, or SQL Server 2008 R2, run the following command:

Setup.exe /q /ACTION=upgrade /INSTANCEID=<INSTANCEID> /INSTANCENAME=MSSQLSERVER /RSUPGRADEDATABASEACCOUNT="<Provide a SQL DB Account>" /IACCEPTSQLSERVERLICENSETERMS

5. To upgrade an existing instance of SQL Server 2012 to a different edition of SQL Server 2012, run the following command:

Setup.exe /q /ACTION=editionupgrade /INSTANCENAME=MSSQLSERVER /PID="<PID key for new edition>" /IACCEPTSQLSERVERLICENSETERMS

6. To install SQL Server using a configuration file, run the following command:

        Setup.exe /ConfigurationFile=MyConfigurationFile.INI

7. To install SQL Server using a configuration file and supply the service account passwords on the command line, run the following command:

Setup.exe /SQLSVCPASSWORD="typepassword" /AGTSVCPASSWORD="typepassword" /ASSVCPASSWORD="typepassword" /ISSVCPASSWORD="typepassword" /RSSVCPASSWORD="typepassword" /ConfigurationFile=MyConfigurationFile.INI

8. To uninstall an existing instance of SQL Server, run the following command:

        Setup.exe /Action=Uninstall /FEATURES=SQL,AS,RS,IS,Tools /INSTANCENAME=MSSQLSERVER

        Reference and Further Reading

        Windows Storage Server 2012

        Virtualizing Microsoft SQL Server

        The Perfect Combination: SQL Server 2012, Windows Server 2012 and System Center 2012

        EMC Storage Replication

        Download Hyper-v Server 2012

        Download Windows Server 2012

        Windows Server 2012: Failover Clustering Deep Dive

Physical Hardware Requirements – up to 23 instances of SQL Server require the following resources:

1. Processor – 2 processors per instance; 23 instances of SQL Server on a single cluster node would require 46 CPUs.
2. Memory – 2 GB of memory per instance; 23 instances of SQL Server on a single cluster node would require 48 GB of RAM (including 2 GB of additional memory for the operating system).
3. Network adapters – Microsoft-certified network adapters: converged adapters, iSCSI adapters, or HBAs.
4. Storage adapter – multipath I/O (MPIO)-supported hardware.
        5. Storage – shared storage that is compatible with Windows Server 2008/2012. Storage requirements include the following:
• Use basic disks, not dynamic disks.
• Use NTFS partitions.
• Use either master boot record (MBR) or GUID partition table (GPT).
• For storage volumes larger than 2 terabytes, use GUID partition table (GPT).
• For storage volumes smaller than 2 terabytes, use master boot record (MBR).
• Disks – 4 disks per instance; 23 instances of SQL Server as a cluster disk array would require 92 disks.
• Cluster storage must not be on a Windows Distributed File System (DFS) share.
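A quick way to verify the disk requirements above (basic disk, NTFS, MBR/GPT) is with the Storage cmdlets included in Windows Server 2012. A sketch; the disk number is an example placeholder:

```powershell
# Inspect candidate cluster disks: partition style (MBR/GPT) and size.
Get-Disk | Select-Object Number, FriendlyName, PartitionStyle, Size

# Confirm volumes are NTFS before presenting them to the cluster.
Get-Volume | Select-Object DriveLetter, FileSystem, Size

# Initialize a new disk as GPT (required for volumes larger than 2 TB).
# Disk number 3 is a placeholder.
Initialize-Disk -Number 3 -PartitionStyle GPT
```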

              Software Requirements

              Download SQL Server 2012 installation media. Review SQL Server 2012 Release Notes. Install the following prerequisite software on each failover cluster node and then restart nodes once before running Setup.

              • Windows PowerShell 2.0
              • .NET Framework 3.5 SP1
              • .NET Framework 4

                Active Directory Requirements

• Cluster nodes must be members of the same Active Directory Domain Services domain.
• The servers in the cluster must use Domain Name System (DNS) for name resolution.
• Use a cluster naming convention, for example production physical node DC1PPSQLNODE01 or production virtual node DC2PVSQLNODE02.
Unsupported Configurations

The following configurations are unsupported:

1. Do not include characters such as <, >, ", ', or & in the cluster name.
2. Never install SQL Server on a domain controller.
3. Never install the cluster service on a domain controller or on Forefront TMG 2010.

                        Permission Requirements

The system administrator or project engineer who will create the cluster must be at least a member of the Domain Users security group, with permission to create domain computer objects in Active Directory, and must be a member of the local Administrators group on each clustered server.

                        Network settings and IP addresses requirements

You need at least two network cards in each cluster node: one for domain or client connectivity and another for the heartbeat network, as shown below.

                        image

The following are the unique network requirements for a Microsoft cluster.

                        1. Use identical network settings on each node such as Speed, Duplex Mode, Flow Control, and Media Type.

                        2. Ensure that each of these private networks uses a unique subnet.

3. Ensure that the heartbeat network on every node uses the same IP address range.

4. Ensure that each network uses a unique subnet, whether the nodes are placed in a single geographic location or in diverse locations.

The domain network should be configured with an IP address, subnet mask, default gateway, and DNS record.

                            image

The heartbeat network should be configured with only an IP address and subnet mask.

                            image

                            Additional Requirements

                            1. Verify that antivirus software is not installed on your WSFC cluster.

                            2. Ensure that all cluster nodes are configured identically, including COM+, disk drive letters, and users in the administrators group.

                            3. Verify that you have cleared the system logs in all nodes and viewed the system logs again.

                            4. Ensure that the logs are free of any error messages before continuing.

                            5. Before you install or update a SQL Server failover cluster, disable all applications and services that might use SQL Server components during installation, but leave the disk resources online.

                            6. SQL Server Setup automatically sets dependencies between the SQL Server cluster group and the disks that will be in the failover cluster. Do not set dependencies for disks before Setup.

                            7. If you are using SMB File share as a storage option, the SQL Server Setup account must have Security Privilege on the file server. To do this, using the Local Security Policy console on the file server, add the SQL Server setup account to Manage auditing and security log rights.

Supported Operating Systems

                                • Windows Server 2012 64-bit x64 Datacenter

                                • Windows Server 2012 64-bit x64 Standard

                                • Windows Server 2008 R2 SP1 64-bit x64 Datacenter

                                • Windows Server 2008 R2 SP1 64-bit x64 Enterprise

                                • Windows Server 2008 R2 SP1 64-bit x64 Standard

                                • Windows Server 2008 R2 SP1 64-bit x64 Web

                                    Understanding Quorum configuration

In simple terms, quorum is a voting mechanism in a Microsoft cluster; each node has one vote. In an MSCS cluster, this voting mechanism continuously monitors how many nodes are online and how many are required for the cluster to run smoothly. Each node contains a copy of the cluster configuration, and this information is also stored on the witness disk/directory. For an MSCS cluster, you must choose one of four possible quorum configurations.

                                    • Node Majority- Recommended for clusters with an odd number of nodes. 

                                        clip_image002

• Node and Disk Majority – Recommended for clusters with an even number of nodes. Can sustain (total number of nodes)/2 failures if the disk witness is online. Can sustain ((total number of nodes)/2)-1 failures if the disk witness is offline.

                                            clip_image004 

                                            clip_image006 

                                            • Node and File Share Majority- Clusters with special configurations. Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness.

                                                clip_image008 

                                                clip_image010 

                                                • No Majority: Disk Only (not recommended)

Why is quorum necessary? Network problems can interfere with communication between cluster nodes, which can cause serious issues. To prevent the problems caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster knows how many “votes” constitute a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes still listen for the presence of other nodes, in case another node appears again on the network, but they do not begin to function as a cluster until quorum exists again.
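The quorum model can be inspected and changed with the FailoverClusters PowerShell module. A sketch; the cluster name CLUSTER01, the disk name, and the witness share \\DC3FS01\Witness are placeholder assumptions:

```powershell
# View the current quorum configuration.
Get-ClusterQuorum -Cluster CLUSTER01

# Node Majority - recommended for an odd number of nodes.
Set-ClusterQuorum -Cluster CLUSTER01 -NodeMajority

# Node and Disk Majority - recommended for an even number of nodes.
Set-ClusterQuorum -Cluster CLUSTER01 -NodeAndDiskMajority "Cluster Disk 1"

# Node and File Share Majority - typical for multi-site clusters.
Set-ClusterQuorum -Cluster CLUSTER01 -NodeAndFileShareMajority "\\DC3FS01\Witness"
```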

                                                    Understanding a multi-site cluster environment

                                                    Hardware: A multi-site cluster requires redundant hardware with correct capacity, storage functionality, replication between sites, and network characteristics such as network latency.

Number of nodes and corresponding quorum configuration: For a multi-site cluster, Microsoft recommends having an even number of nodes and, for the quorum configuration, using the Node and File Share Majority option, that is, including a file share witness as part of the configuration. The file share witness can be located at a third site, that is, a different location from the main and secondary sites, so that it is not lost if one of the other two sites has problems.

Network configuration, deciding between multiple subnets and a VLAN: Configuring a multi-site cluster with different subnets is supported. However, when using multiple subnets, it is important to consider how clients will discover services or applications that have just failed over: the DNS servers must replicate the new IP address to one another before clients can discover the failed-over service or application. If you use multiple subnets rather than a stretched VLAN, consider reducing the Time to Live (TTL) of the cluster's DNS records so that clients pick up the new address sooner.

                                                    Tuning of heartbeat settings: The heartbeat settings include the frequency at which the nodes send heartbeat signals to each other to indicate that they are still functioning, and the number of heartbeats that a node can miss before another node initiates failover and begins taking over the services and applications that had been running on the failed node. In a multi-site cluster, you might want to tune the “heartbeat” settings. You can tune these settings for heartbeat signals to account for differences in network latency caused by communication across subnets.
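On Windows Server 2012, the heartbeat settings described above are exposed as cluster common properties. A hedged sketch; the values shown are examples for a higher-latency multi-site link, not recommendations:

```powershell
# Inspect current heartbeat settings (delays in ms, thresholds in heartbeats).
Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold

# Relax cross-subnet heartbeats for a multi-site cluster.
(Get-Cluster).CrossSubnetDelay = 2000      # send a heartbeat every 2 seconds
(Get-Cluster).CrossSubnetThreshold = 10    # tolerate 10 missed heartbeats
```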

Replication of data: Replication of data between sites is very important in a multi-site cluster and is accomplished in different ways by different hardware vendors; therefore, the choice of replication process requires careful consideration. You will find many replication options, but before you make any decision, consult your storage, server hardware, and software vendors. Your replication design will vary with the vendor, for example NetApp or EMC. Review the following considerations:

Choosing replication level (block, file system, or application level): The replication process can function through the hardware (at the block level), through the operating system (at the file system level), or through certain applications such as Microsoft Exchange Server (which has a feature called Cluster Continuous Replication or CCR). Work with your hardware and software vendors to choose a replication process that fits the requirements of your organization.

                                                    Configuring replication to avoid data corruption: The replication process must be configured so that any interruptions to the process will not result in data corruption, but instead will always provide a set of data that matches the data from the main site as it existed at some moment in time. In other words, the replication must always preserve the order of I/O operations that occurred at the main site. This is crucial, because very few applications can recover if the data is corrupted during replication.

                                                    Choosing between synchronous and asynchronous replication: The replication process can be synchronous, where no write operation finishes until the corresponding data is committed at the secondary site, or asynchronous, where the write operation can finish at the main site and then be replicated (as a background operation) to the secondary site.

Synchronous replication means that the replicated data is always up to date, but it slows application performance because each write operation waits for replication. Synchronous replication is best for multi-site clusters that use high-bandwidth, low-latency connections; typically, this means that a cluster using synchronous replication must not be stretched over a great distance. Synchronous replication can be performed within roughly 200 km where reliable, robust WAN connectivity with sufficient bandwidth is available. For example, with a GigE or 10 GigE MPLS connection, you might choose synchronous replication, depending on how large your data set is.

Asynchronous replication can help maximize application performance, but if failover to the secondary site is necessary, some of the most recent user operations might not be reflected in the data after failover, because operations that finished recently might not yet have been replicated. Asynchronous replication is best for clusters that you want to stretch over greater geographical distances with no significant application performance impact; it is typically used when the distance is more than 200 km or WAN connectivity between sites is not robust.

                                                    Utilizing Windows Storage Server 2012 as shared storage

                                                    Windows® Storage Server 2012 is the Windows Server® 2012 platform of choice for network-attached storage (NAS) appliances offered by Microsoft partners.

Windows Storage Server 2012 enhances traditional file-serving capabilities and extends file-based storage to application workloads such as Hyper-V, SQL Server, Exchange, and Internet Information Services (IIS). Windows Storage Server 2012 provides the following features for an organization.

                                                    Workgroup Edition

                                                    • As many as 50 connections

                                                    • Single processor socket

                                                    • Up to 32 GB of memory

                                                    • As many as 6 disks (no external SAS)

                                                        Standard Edition

                                                        • No license limit on number of connections

                                                        • Multiple processor sockets

                                                        • No license limit on memory

                                                        • No license limit on number of disks

                                                        • De-duplication, virtualization (host plus 2 virtual machines for storage and disk management tools), and networking services (no domain controller)

                                                        • Failover clustering for higher availability

                                                        • Microsoft BranchCache for reduced WAN traffic

                                                            Presenting Storage from Windows Storage Server 2012 Standard

From Server Manager, click Add roles and features. On the Before you begin page, click Next. On the Installation Type page, click Next.

                                                            image

On the Server Roles selection page, select iSCSI Target Server and iSCSI Target Storage Provider, then click Next.

                                                            image

On the Features page, click Next. On the Confirmation page, click Install, then click Close.

In Server Manager, click File and Storage Services, then click iSCSI.

                                                            image

From the Tasks menu, click New iSCSI Virtual Disk, select the disk volume from which you want to present storage, and click Next.

                                                            image

Type the name of the storage and click Next.

                                                            image

Type the size of the shared disk and click Next.

                                                            image

Select New iSCSI target, then click Next.

                                                            image

Type the name of the target and click Next.

                                                            image

Under Type, select IP Address; in Enter a value for selected type, type the IP address of a cluster node, and click OK. Repeat the process to add the IP address of each cluster node.

                                                            image

                                                            image

Type the CHAP information; note that the CHAP password must be at least 12 characters. Click Next to continue.

                                                            image

Click Create to create the shared storage, then click Close when done.

                                                            image

                                                            image

Repeat these steps to create each shared disk at your preferred size, and create a 2 GB shared disk for the quorum.

                                                            image

                                                            Deploying a Failover Cluster in Microsoft environment

                                                            Step1: Connect the cluster servers to the networks and storage

                                                            1. Review the details about networks in Hardware Requirements for a Two-Node Failover Cluster and Network infrastructure and domain account requirements for a two-node failover cluster, earlier in this guide.

                                                            2. Connect and configure the networks that the servers in the cluster will use.

3. Follow the manufacturer’s instructions for physically connecting the servers to the storage. For this article, we are using the software iSCSI initiator. Open the iSCSI Initiator from Server Manager > Tools > iSCSI Initiator. Type the IP address of the target, that is, the IP address of the Windows Storage Server 2012 machine. Click Quick Connect, then click Done.

                                                            image
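The same connection can be scripted with the iSCSI initiator cmdlets on Windows Server 2012. A sketch; the portal address 10.10.10.5 is a placeholder for your storage server:

```powershell
# Register the Windows Storage Server as a target portal, then connect
# to the discovered targets persistently (reconnect after reboot).
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.5"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Verify the sessions and the disks that were presented.
Get-IscsiSession
Get-Disk | Where-Object BusType -eq "iSCSI"
```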

5. Open Computer Management, click Disk Management, and initialize and format the disk using either the MBR or GPT partition style. On the second server, open Computer Management, click Disk Management, and bring the disk online by right-clicking the disk and clicking Bring Online. Ensure that the disks (LUNs) that you want to use in the cluster are exposed to the servers that you will cluster (and only those servers).

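Initializing and formatting the new LUNs can also be done in PowerShell. A sketch that brings each raw disk online as a GPT-partitioned NTFS volume (run it on one node only; adjust the filters to match your environment):

```powershell
# Bring any offline shared disks online on this node
Get-Disk | Where-Object { $_.IsOffline } | Set-Disk -IsOffline $false

# Initialize each raw disk as GPT, create a single partition,
# and format it with NTFS
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -Confirm:$false
```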

5. On one of the servers that you want to cluster, click Start, click Administrative Tools, click Computer Management, and then click Disk Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.) In Disk Management, confirm that the cluster disks are visible.


6. If you want to have a storage volume larger than 2 terabytes, and you are using the Windows interface to control the format of the disk, convert that disk to the partition style called GUID partition table (GPT). To do this, back up any data on the disk, delete all volumes on the disk, and then, in Disk Management, right-click the disk (not a partition) and click Convert to GPT Disk.

7. Check the file system of any exposed volume or LUN. Use the NTFS file system.

                                                            Step 2: Install the failover cluster feature

                                                            In this step, you install the failover cluster feature. The servers must be running Windows Server 2012.

1. Open Server Manager and click Add roles and features. Follow the wizard to the Features page.

2. In the Add Roles and Features Wizard, click Failover Clustering, and then click Install.


3. Follow the instructions in the wizard to complete the installation of the feature. When the wizard finishes, close it.

4. Repeat the process for each server that you want to include in the cluster.
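The steps above can also be performed from an elevated PowerShell prompt. A sketch, where NODE2 is a placeholder for your second server's name:

```powershell
# Install the Failover Clustering feature plus its management tools locally
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Repeat remotely for the second node (NODE2 is a placeholder name)
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName NODE2
```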

                                                            Step 3: Validate the cluster configuration

                                                            Before creating a cluster, I strongly recommend that you validate your configuration. Validation helps you confirm that the configuration of your servers, network, and storage meets a set of specific requirements for failover clusters.

                                                            1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.

                                                            image

                                                            2. Confirm that Failover Cluster Manager is selected and then, in the center pane under Management, click Validate a Configuration. Click Next.

                                                            image

3. On the Select Servers page, type the fully qualified domain name of each node that you would like to add to the cluster, then click Add.


4. Follow the instructions in the wizard to specify the two servers and the tests, and then run the tests. To fully validate your configuration, run all tests before creating a cluster. Click Next.


5. On the Confirmation page, click Next.


6. The Summary page appears after the tests run. To view the results, click Report, then click Finish. You will be prompted to create a cluster if you select Create the cluster now using the validated nodes.


7. While still on the Summary page, click View Report and read the test results.


                                                            To view the results of the tests after you close the wizard, see

                                                            SystemRoot\Cluster\Reports\Validation Report date and time.html

                                                            where SystemRoot is the folder in which the operating system is installed (for example, C:\Windows).

8. As necessary, make changes in the configuration and rerun the tests.

Step 4: Create a Failover Cluster

                                                            1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.


2. Confirm that Failover Cluster Manager is selected and then, in the center pane under Management, click Create a cluster. (If you chose to create the cluster at the end of validation, the Create Cluster Wizard opens automatically.) Follow the instructions in the wizard to specify the following, clicking Next on each page:

• The servers to include in the cluster

• The name of the cluster (that is, the cluster’s virtual name)

• The IP address of the virtual node


3. Verify the IP address and cluster name, and click Next.


                                                                4. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report. Click Finish.


Step 5: Verify Cluster Configuration

In Failover Cluster Manager, click Networks, right-click each network, and click Properties. Make sure Allow clients to connect through this network is unchecked for the heartbeat network. Verify the IP range, then click OK.


In Failover Cluster Manager, click Networks, right-click each network, and click Properties. Make sure Allow clients to connect through this network is checked for the domain network. Verify the IP range, then click OK.


In Failover Cluster Manager, click Storage, then click Disks, and verify that the quorum disk and shared disks are available. You can add more disks by clicking Add Disk in the Actions pane.


An automated MSCS cluster configuration adds the quorum automatically. However, you can manually configure the desired cluster quorum by right-clicking the cluster > More Actions > Configure Cluster Quorum Settings.

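If you prefer to script the quorum change, the FailoverClusters module exposes it directly. A sketch, assuming the 2 GB witness disk appears as "Cluster Disk 1" (a placeholder for your quorum disk's resource name):

```powershell
# Configure a node-and-disk-majority quorum using the witness disk
# ("Cluster Disk 1" is a placeholder resource name)
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

# Verify the resulting quorum configuration
Get-ClusterQuorum
```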

Configuring a Hyper-V Cluster

In the previous steps you configured an MSCS cluster. To configure a Hyper-V cluster, all you need to do is install the Hyper-V role on each cluster node. From Server Manager, click Add roles and features, follow the wizard, and install the Hyper-V role. A reboot is required to complete the Hyper-V role installation. Repeat until the role is installed on both nodes.
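Installing the role from PowerShell works as well; a sketch for one node (the server restarts automatically to finish the installation):

```powershell
# Install the Hyper-V role with its management tools and reboot to complete it
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```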

Note that at this stage you should add storage for virtual machines, as well as networks for live migration, storage (if using iSCSI), virtual machines, and management. Detailed configuration is out of scope for this article, as I am writing about MSCS clusters, not Hyper-V.


From Failover Cluster Manager, right-click Networks, click Live Migration Settings, and select the appropriate network for live migration.


If you would like additional virtual machine fault tolerance such as Hyper-V Replica, right-click the cluster virtual node, click Configure Role, and click Next.


On the Select Role page, click Hyper-V Replica Broker, click Next, and follow the wizard.


From Failover Cluster Manager, right-click Roles, click Virtual Machines, and click New Hard Disk to configure the virtual machine storage and configuration disk drive. Once done, right-click Roles again, click Virtual Machines, and click New Virtual Machine to create a virtual machine.


Backing up Clustered Data, Applications, or Servers

                                                                There are multiple methods for backing up information that is stored on Cluster Shared Volumes in a failover cluster running on

                                                                • Windows Server 2008 R2

                                                                • Hyper-V Server 2008 R2

                                                                • Windows Server 2012

                                                                • Hyper-V Server 2012

Operating System Level Backup

The backup application runs within a virtual machine in the same way that a backup application runs within a physical server. When there are multiple virtual machines being managed centrally, each virtual machine can run a backup “agent” (instead of running an individual backup application) that is controlled from the central management server. The backup agent backs up application data, files, folders, and the system state of the operating system.


Hyper-V Image Level Backup

The backup captures all the information about multiple virtual machines that are configured in a failover cluster that is using Cluster Shared Volumes. The backup application runs through Hyper-V, which means that it must use the Hyper-V VSS writer. The backup application must also be compatible with Cluster Shared Volumes. The backup application backs up the virtual machines that are selected by the administrator, including all the VHD files for those virtual machines, in one operation. VM1_Data.VHDX, VM2_Data.VHDX, VM1_System.VHDX, and VM2_System.VHDX are stored on a backup disk or tape. VM1_System.VHDX and VM2_System.VHDX contain system files and page files (that is, the system state); snapshots and the VM configuration are stored as well.


                                                                    Publishing an Application or Service in a Failover Cluster Environment

                                                                    1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.

2. Right-click Roles, and click Configure Role to publish a service or application.


3. Select a clustered service or application, and then click Next.


                                                                    4. Follow the instructions in the wizard to specify the following details:

• A name for the clustered file server

• The IP address of the virtual node

5. On the Select Storage page, select the storage volume or volumes that the clustered file server should use, and click Next.


6. On the Confirmation page, review the settings and click Next.


                                                                        7. After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report.

                                                                        8. To close the wizard, click Finish.


                                                                        9. In the console tree, make sure Services and Applications is expanded, and then select the clustered file server that you just created.

                                                                        10. After completing the wizard, confirm that the clustered file server comes online. If it does not, review the state of the networks and storage and correct any issues. Then right-click the new clustered application or service and click Bring this service or application online.

                                                                        Perform a Failover Test

                                                                        To perform a basic test of failover, right-click the clustered file server, click Move this service or application to another node, and click the available choice of node. When prompted, confirm your choice. You can observe the status changes in the center pane of the snap-in as the clustered file server instance is moved.

                                                                        Configuring a New Failover Cluster by Using Windows PowerShell

                                                                        Task

                                                                        PowerShell command

                                                                        Run validation tests on a list of servers.

                                                                        Test-Cluster -Node server1,server2

                                                                        Where server1 and server2 are servers that you want to validate.

                                                                        Create a cluster using defaults for most settings.

                                                                        New-Cluster -Name cluster1 -Node server1,server2

                                                                        Where server1 and server2 are the servers that you want to include in the new cluster.

                                                                        Configure a clustered file server using defaults for most settings.

                                                                        Add-ClusterFileServerRole -Storage "Cluster Disk 4"

                                                                        Where Cluster Disk 4 is the disk that the clustered file server will use.

                                                                        Configure a clustered print server using defaults for most settings.

                                                                        Add-ClusterPrintServerRole -Storage "Cluster Disk 5"

                                                                        Where Cluster Disk 5 is the disk that the clustered print server will use.

                                                                        Configure a clustered virtual machine using defaults for most settings.

                                                                        Add-ClusterVirtualMachineRole -VirtualMachine VM1

                                                                        Where VM1 is an existing virtual machine that you want to place in a cluster.

                                                                        Add available disks.

                                                                        Get-ClusterAvailableDisk | Add-ClusterDisk

                                                                        Review the state of nodes.

                                                                        Get-ClusterNode

                                                                        Run validation tests on a new server.

                                                                        Test-Cluster -Node newserver,node1,node2

                                                                        Where newserver is the new server that you want to add to a cluster, and node1 and node2 are nodes in that cluster.

                                                                        Prepare a node for maintenance.

                                                                        Get-ClusterNode node2 | Get-ClusterGroup | Move-ClusterGroup

                                                                        Where node2 is the node from which you want to move clustered services and applications.

                                                                        Pause a node.

                                                                        Suspend-ClusterNode node2

                                                                        Where node2 is the node that you want to pause.

                                                                        Resume a node.

                                                                        Resume-ClusterNode node2

                                                                        Where node2 is the node that you want to resume.

                                                                        Stop the Cluster service on a node.

                                                                        Stop-ClusterNode node2

                                                                        Where node2 is the node on which you want to stop the Cluster service.

                                                                        Start the Cluster service on a node.

                                                                        Start-ClusterNode node2

                                                                        Where node2 is the node on which you want to start the Cluster service.

                                                                        Review the signature and other properties of a cluster disk.

                                                                        Get-ClusterResource "Cluster Disk 2" | Get-ClusterParameter

                                                                        Where Cluster Disk 2 is the disk for which you want to review the disk signature.

                                                                        Move Available Storage to a particular node.

                                                                        Move-ClusterGroup "Available Storage" -Node node1

                                                                        Where node1 is the node that you want to move Available Storage to.

                                                                        Turn on maintenance for a disk.

                                                                        Suspend-ClusterResource "Cluster Disk 2"

                                                                        Where Cluster Disk 2 is the disk in cluster storage for which you are turning on maintenance.

                                                                        Turn off maintenance for a disk.

                                                                        Resume-ClusterResource "Cluster Disk 2"

                                                                        Where Cluster Disk 2 is the disk in cluster storage for which you are turning off maintenance.

                                                                        Bring a clustered service or application online.

                                                                        Start-ClusterGroup "Clustered Server 1"

                                                                        Where Clustered Server 1 is a clustered server (such as a file server) that you want to bring online.

                                                                        Take a clustered service or application offline.

                                                                        Stop-ClusterGroup "Clustered Server 1"

                                                                        Where Clustered Server 1 is a clustered server (such as a file server) that you want to take offline.

                                                                        Move or Test a clustered service or application.

                                                                        Move-ClusterGroup "Clustered Server 1"

                                                                        Where Clustered Server 1 is a clustered server (such as a file server) that you want to test or move.

                                                                            Migrating clustered services and applications to a new failover cluster

                                                                            Use the following instructions to migrate clustered services and applications from your old cluster to your new cluster. After the Migrate a Cluster Wizard runs, it leaves most of the migrated resources offline, so that you can perform additional steps before you bring them online. If the new cluster uses old storage, plan how you will make LUNs or disks inaccessible to the old cluster and accessible to the new cluster (but do not make changes yet).

1. To open the failover cluster snap-in, click Server Manager, click Tools, and then click Failover Cluster Manager.

                                                                            2. In the console tree, if the cluster that you created is not displayed, right-click Failover Cluster Manager, click Manage a Cluster, and then select the cluster that you want to configure.

                                                                            3. In the console tree, expand the cluster that you created to see the items underneath it.

                                                                            4. If the clustered servers are connected to a network that is not to be used for cluster communications (for example, a network intended only for iSCSI), then under Networks, right-click that network, click Properties, and then click Do not allow cluster network communication on this network. Click OK.

                                                                            5. In the console tree, select the cluster. Click Configure, click Migrate services and applications.

                                                                            6. Read the first page of the Migrate a Cluster Wizard, and then click Next.

                                                                            7. Specify the name or IP Address of the cluster or cluster node from which you want to migrate resource groups, and then click Next.

                                                                            8. Click View Report. The wizard also provides a report after it finishes, which describes any additional steps that might be needed before you bring the migrated resource groups online.

                                                                            9. Follow the instructions in the wizard to complete the following tasks:

                                                                            1. Choose the resource group or groups that you want to migrate.

                                                                            2. Specify whether the resource groups to be migrated will use new storage or the same storage that you used in the old cluster. If the resource groups will use new storage, you can specify the disk that each resource group should use. Note that if new storage is used, you must handle all copying or moving of data or folders—the wizard does not copy data from one shared storage location to another.

                                                                            3. If you are migrating from a cluster running Windows Server 2003 that has Network Name resources with Kerberos protocol enabled, specify the account name and password for the Active Directory account that is used by the Cluster service on the old cluster.

10. After the wizard runs and the Summary page appears, click View Report.

11. When the wizard completes, most migrated resources will be offline. Leave them offline at this stage.

Completing the transition from the old cluster to the new cluster

You must perform the following steps to complete the transition to the new cluster running Windows Server 2012.

                                                                                1. Prepare for clients to experience downtime, probably brief.

                                                                                2. Take each resource group offline on the old cluster.

                                                                                3. Complete the transition for the storage:

                                                                                1. If the new cluster will use old storage, follow your plan for making LUNs or disks inaccessible to the old cluster and accessible to the new cluster.

                                                                                2. If the new cluster will use new storage, copy the appropriate folders and data to the storage. As needed for disk access on the old cluster, bring individual disk resources online on that cluster. (Keep other resources offline, to ensure that clients cannot change data on the disks in storage.) Also as needed, on the new cluster, use Disk Management to confirm that the appropriate LUNs or disks are visible to the new cluster and not visible to any other servers.

                                                                                    4. If the new cluster uses mount points, adjust the mount points as needed, and make each disk resource that uses a mount point dependent on the resource of the disk that hosts the mount point.

                                                                                    5. Bring the migrated services or applications online on the new cluster. To perform a basic test of failover on the new cluster, expand Services and Applications, and then click a migrated service or application that you want to test.

                                                                                    6. To perform a basic test of failover for the migrated service or application, under Actions (on the right), click Move this service or application to another node, and then click an available choice of node. When prompted, confirm your choice. You can observe the status changes in the center pane of the snap-in as the clustered service or application is moved.

                                                                                    7. If there are any issues with failover, review the following:

                                                                                    1. View events in Failover Cluster Manager. To do this, in the console tree, right-click Cluster Events, and then click Query. In the Cluster Events Filter dialog box, select the criteria for the events that you want to display, or to return to the default criteria, click the Reset button. Click OK. To sort events, click a heading, for example, Level or Date and Time.

2. Confirm that necessary services, applications, or server roles are installed on all nodes. Confirm that services or applications are compatible with Windows Server 2012 and run as expected.

                                                                                    3. If you used old storage for the new cluster, rerun the Validate a Cluster Configuration Wizard to confirm the validation results for all LUNs or disks in the storage.

                                                                                    4. Review migrated resource settings and dependencies.

                                                                                    5. If you migrated one or more Network Name resources with Kerberos protocol enabled, confirm that the following permissions change was made in Active Directory Users and Computers on a domain controller. In the computer accounts (computer objects) of your Kerberos protocol-enabled Network Name resources, Full Control must be assigned to the computer account for the failover cluster.

Migrating Cluster Resources with New Mount Points

                                                                                        When you are working with new storage for your cluster migration, you have some flexibility in the order in which you complete the tasks. The tasks that you must complete include creating the mount points, running the Migrate a Cluster Wizard, copying the data to the new storage, and confirming the disk letters and mount points for the new storage. After completing the other tasks, configure the disk resource dependencies in Failover Cluster Manager.

                                                                                        A useful way to keep track of disks in the new storage is to give them labels that indicate your intended mount point configuration. For example, in the new storage, when you are mounting a new disk in a folder called \Mount1-1 on another disk, you can also label the mounted disk as Mount1-1. (This assumes that the label Mount1-1 is not already in use in the old storage.) Then when you run the Migrate a Cluster Wizard and you need to specify that disk for a particular migrated resource, you can look at the list and select the disk labeled Mount1-1. Then you can return to Failover Cluster Manager to configure the disk resource for Mount1-1 so that it is dependent on the appropriate resource, for example, the resource for disk F. Similarly, you would configure the disk resources for all other disks mounted on disk F so that they depended on the disk resource for disk F.
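The dependency configuration described above can also be done from the command line. A sketch using cluster.exe (available on Windows Server 2008 R2), where the resource names "Mount1-1" and "Disk F:" are examples taken from the scenario above, not names your cluster necessarily uses:

```shell
:: List the cluster's disk resources to find the actual resource names
cluster res

:: Make the resource for the mounted disk depend on the resource for disk F:
:: (example names; substitute the names reported by the command above)
cluster res "Mount1-1" /adddep:"Disk F:"
```

Repeat the /adddep command for each disk mounted in a folder on disk F, so that every mounted disk depends on its host disk resource.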

                                                                                        Migrating DHCP to a Cluster Running Windows Server 2012

                                                                                        A failover cluster is a group of independent computers that work together to increase the availability of applications and services. The clustered servers (called nodes) are connected by physical cables and by software. If one of the cluster nodes fails, another node begins to provide service (a process known as failover). Users experience a minimum of disruptions in service.

                                                                                        This guide describes the steps that are necessary when migrating a clustered DHCP server to a cluster running Windows Server 2012, beyond the standard steps required for migrating clustered services and applications in general. The guide indicates when to use the Migrate a Cluster Wizard in the migration, but does not describe the wizard in detail.

                                                                                        Step 1: Review requirements and create a cluster running Windows Server 2012

                                                                                        Before beginning the migration described in this guide, review the requirements for a cluster running Windows Server 2012, install the failover clustering feature on servers running Windows Server 2012, and create a new cluster.

                                                                                        Step 2: On the old cluster, adjust registry settings and permissions before migration

                                                                                        To prepare for migration, you must make changes to registry settings and permissions on each node of the old cluster.

                                                                                        1. Confirm that you have a current backup of the old cluster, one that includes the configuration information for the clustered DHCP server (also called the DHCP resource group).

                                                                                        2. Confirm that the clustered DHCP server is online on the old cluster. It must be online while you complete the remainder of this procedure.

                                                                                        3. On a node of the old cluster, open a command prompt as an administrator.

                                                                                        4. Type regedit, and then navigate to the following key:

                                                                                        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DHCPServer\Parameters

                                                                                        5. Choose the option that applies to your cluster: If the old cluster is running Windows Server 2008, skip to step 7. If the old cluster is running Windows Server 2003 or Windows Server 2003 R2:

                                                                                        1. Right-click Parameters, click New, click String Value, and for the name of the new value, type: ServiceMain

                                                                                        2. Right-click the new value (ServiceMain), click Modify, and for the value data, type: ServiceEntry

                                                                                        3. Right-click Parameters again, click New, click Expandable String Value, and for the name of the new value, type: ServiceDll

                                                                                        4. Right-click the new value (ServiceDll), click Modify, and for the value data, type: %systemroot%\system32\dhcpssvc.dll

                                                                                            6. Right-click Parameters, and then click Permissions.

                                                                                            7. Click Add. Locate the appropriate account and assign permissions:

                                                                                            1. On Windows Server 2008: Click Locations, select the local server, and then click OK. Under Enter the object names to select, type NT Service\DHCPServer. Click OK. Select the DHCPServer account and then select the check box for Full Control.

                                                                                            2. On Windows Server 2003 or Windows Server 2003 R2: Click Locations, ensure that the domain name is selected, and then click OK. Under Enter the object names to select, type Everyone, and then click OK (and confirm your choice if prompted). Under Group or user names, select Everyone and then select the check box for Full Control.

                                                                                              8. Repeat the process on the other node or nodes of the old cluster.
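The registry edits in the sub-steps above can also be scripted instead of made by hand in Regedit. A sketch as a batch file using reg.exe, to be run in an elevated command prompt on each node of an old cluster running Windows Server 2003 or 2003 R2 (the key path and value data are taken from the steps above):

```shell
:: prep-dhcp-registry.cmd -- run elevated on each node of the old cluster
set KEY=HKLM\SYSTEM\CurrentControlSet\services\DHCPServer\Parameters

:: Sub-steps 1-2: create the ServiceMain string value
reg add %KEY% /v ServiceMain /t REG_SZ /d ServiceEntry /f

:: Sub-steps 3-4: create the ServiceDll expandable string value
:: (%%systemroot%% escapes the percent signs so the literal variable is stored)
reg add %KEY% /v ServiceDll /t REG_EXPAND_SZ /d "%%systemroot%%\system32\dhcpssvc.dll" /f
```

The permissions change in steps 6 and 7 is still done interactively in Regedit as described above.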

                                                                                              Step 3: On a node in the old cluster, prepare for export, and then export the DHCP database to a file

                                                                                              As part of migrating a clustered DHCP server, on the old cluster, you must export the DHCP database to a file. This requires preparatory steps that prevent the cluster from restarting the clustered DHCP resource during the export. The following procedure describes the process. On the old cluster, start the clustering snap-in and configure the restart setting for the clustered DHCP server (DHCP resource group):

                                                                                              1. Click Start, click Administrative Tools, and then click Failover Cluster Management. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

                                                                                              2. If the console tree is collapsed, expand the tree under the cluster that you are migrating settings from. Expand Services and Applications and then, in the console tree, click the clustered DHCP server.

                                                                                              3. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, do not restart.

                                                                                              This step prevents the resource from restarting during the export of the DHCP database, which would stop the export.

                                                                                              1. On the node of the old cluster that currently owns the clustered DHCP server, confirm that the clustered DHCP server is running. Then open a command prompt window as an administrator.

                                                                                              2. Type: netsh dhcp server export <exportfile> all

                                                                                              Where <exportfile> is the name of the file to which you want to export the DHCP database.

                                                                                              3. After the export is complete, in the clustering interface (Cluster Administrator or Failover Cluster Management), right-click the clustered DHCP server (DHCP resource group) and then click either Take Offline or Take this service or application offline. If the command is unavailable, in the center pane, right-click each online resource and click either Take Offline or Take this resource offline. If prompted for confirmation, confirm your choice.

                                                                                              4. If the old cluster is running Windows Server 2003 or Windows Server 2003 R2, obtain the account name and password for the Cluster service account (the Active Directory account used by the Cluster service on the old cluster). Alternatively, you can obtain the name and password of another account that has access permissions for the Active Directory computer accounts (objects) that the old cluster uses. For a migration from a cluster running Windows Server 2003 or Windows Server 2003 R2, you will need this information for the next procedure.
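The export in the steps above can be summarized at the command line. A sketch for an old cluster running Windows Server 2008; the file path C:\dhcpexport.txt and the group name "DHCP Service" are examples, so substitute your own path and the group name shown in Failover Cluster Management:

```shell
:: On the old-cluster node that owns the DHCP group, in an elevated prompt:
:: export the full DHCP configuration and database to a file (example path)
netsh dhcp server export C:\dhcpexport.txt all

:: Then take the DHCP resource group offline (example group name)
cluster group "DHCP Service" /offline
```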

                                                                                              Step 4: On the new cluster, configure a network for DHCP clients and run the Migrate a Cluster Wizard

                                                                                              Microsoft recommends that you make the network settings on the new cluster as similar as possible to the settings on the old cluster. In any case, on the new cluster, you must have at least one network that DHCP clients can use to communicate with the cluster. The following procedure describes the cluster setting needed on the client network, and indicates when to run the Migrate a Cluster Wizard.

                                                                                              1. On the new cluster (running Windows Server 2012), click Server Manager, click Tools, and then click Failover Cluster Manager.

                                                                                              2. If the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.

                                                                                              3. If the console tree is collapsed, expand the tree under the cluster.

                                                                                              4. Expand Networks, right-click the network that clients will use to connect to the DHCP server, and then click Properties.

                                                                                              5. Make sure that Allow cluster network communication on this network and Allow clients to connect through this network are selected.

                                                                                              6. To prepare for the migration process, find and take note of the drive letter used for the DHCP database on the old cluster. Ensure that the same drive letter exists on the new cluster. (This drive letter is one of the settings that the Migrate a Cluster Wizard will migrate.)

                                                                                              7. In Failover Cluster Manager, in the console tree, select the new cluster, and then under Configure, click Migrate services and applications.

                                                                                              8. Use the Migrate a Cluster Wizard to migrate the DHCP resource group from the old cluster to the new cluster. If you are using new storage on the new cluster, during the migration, be sure to specify the disk that has the same drive letter on the new cluster as was used for the DHCP database on the old cluster. The wizard migrates resources and settings, but not the DHCP database.

                                                                                              Step 5: On the new cluster, import the DHCP database, bring the clustered DHCP server online, and adjust permissions

                                                                                              To complete the migration process, import the DHCP database that you exported to a file in Step 3. Then you can bring the clustered DHCP server online and adjust settings that were changed temporarily during the migration process.

                                                                                              1. If you are reusing the old cluster storage for the new cluster, confirm that you have stored the exported DHCP database file in a safe location. Then be sure to delete all the DHCP files other than the exported DHCP database file from the old storage. This includes the DHCP database, log, and backup files.

                                                                                              2. On the new cluster, in Failover Cluster Manager, expand Services and Applications, right-click the clustered DHCP server, and then click Bring this service or application online. The DHCP service starts with an empty database.

                                                                                              3. Click the clustered DHCP server.

                                                                                              4. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, do not restart. This step prevents the resource from restarting during the import of the DHCP database, which would stop the import.

                                                                                              5. In the new cluster, on the node that currently owns the migrated DHCP server, view the disk used by the migrated DHCP server, and make sure that you have copied the exported DHCP database file to this disk.

                                                                                              6. In the new cluster, on the node that currently owns the migrated DHCP server, open a command prompt as an administrator. Change to the disk used by the migrated DHCP server.

                                                                                              7. Type: netsh dhcp server import <exportfile>

                                                                                              Where <exportfile> is the filename of the file to which you exported the DHCP database.

                                                                                              8. If the migrated DHCP server is not online, in Failover Cluster Manager, under Services and Applications, right-click the migrated DHCP server, and then click Bring this service or application online.

                                                                                              9. In the center pane, right-click the DHCP server resource, click Properties, click the Policies tab, and then click If resource fails, attempt restart on current node.

                                                                                              This returns the resource to the expected setting, instead of the “do not restart” setting that was temporarily needed during the import of the DHCP database.

                                                                                              10. If the cluster was migrated from Windows Server 2003 or Windows Server 2003 R2, after the clustered DHCP server is online on the new cluster, make the following changes to permissions in the registry:

                                                                                            • On the node that owns the clustered DHCP server, open a command prompt as an administrator.

                                                                                            • Type regedit, and then navigate to the following key:

                                                                                              HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DHCPServer\Parameters

                                                                                            • Right-click Parameters, and then click Permissions.

                                                                                            • Click Add, click Locations, and then select the local server.

                                                                                            • Under Enter the object names to select, type NT Service\DHCPServer and then click OK. Select the DHCPServer account and then select the check box for Full Control. Then click Apply.

                                                                                            • Select the Everyone account (created through steps earlier in this topic) and then click Remove. This removes the account from the list of those that are assigned permissions.

                                                                                                11. Perform the preceding steps only after DHCP is online on the new cluster. After you complete these steps, you can test the clustered DHCP server and begin to provide DHCP services to clients.
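The import portion of Step 5 can be sketched at the command line. The drive letter F:, the file name dhcpexport.txt, and the group name "DHCP Service" are examples; use the clustered disk, export file, and group name from your own migration:

```shell
:: On the new-cluster node that owns the migrated DHCP group, elevated prompt.
:: Change to the clustered disk that holds the exported file (example drive letter):
F:

:: Import the DHCP database (mirrors step 7 above; example file name)
netsh dhcp server import F:\dhcpexport.txt

:: Bring the clustered DHCP server online if it is not already (example group name)
cluster group "DHCP Service" /online
```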

                                                                                                Configuring Print Server Cluster

                                                                                              • Open Failover Cluster Management. In the console tree, if the cluster that you created is not displayed, right-click Failover Cluster Management, click Manage a Cluster, and then select the cluster you want to configure.

                                                                                              • Click Services and Applications. Under Actions (on the right), click Configure a Service or Application, and then click Next. Click Print Server, and then click Next.

                                                                                              • Follow the instructions in the wizard to specify the following details: a name for the clustered print server, any IP address information, and the storage volume or volumes that the clustered print server should use.

                                                                                              • After the wizard runs and the Summary page appears, to view a report of the tasks the wizard performed, click View Report. To close the wizard, click Finish.

                                                                                              • In the console tree, make sure Services and Applications is expanded, and then select the clustered print server that you just created.

                                                                                              • Under Actions, click Manage Printers.

                                                                                              • An instance of the Failover Cluster Management interface appears with Print Management in the console tree.

                                                                                              • Under Print Management, click Print Servers and locate the clustered print server that you want to configure.

                                                                                              • Always perform management tasks on the clustered print server. Do not manage the individual cluster nodes as print servers.

                                                                                              • Right-click the clustered print server, and then click Add Printer. Follow the instructions in the wizard to add a printer.

                                                                                              • This is the same wizard you would use to add a printer on a nonclustered server.

                                                                                              • When you have finished configuring settings for the clustered print server, to close the instance of the Failover Cluster Management interface with Print Management in the console tree, click File and then click Exit.

                                                                                              • To perform a basic test of failover, right-click the clustered print server instance, click Move this service or application to another node, and click the available choice of node. When prompted, confirm your choice.
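The basic failover test above can also be driven from PowerShell. A sketch using the FailoverClusters module (available on Windows Server 2008 R2 and later); "PrintServer1" is an example group name, so check yours with Get-ClusterGroup first:

```shell
# PowerShell sketch -- run on a cluster node or a machine with the cluster tools
Import-Module FailoverClusters

Get-ClusterGroup                          # list clustered services and their owner nodes
Move-ClusterGroup -Name "PrintServer1"    # move the print server group to another node
Get-ClusterGroup -Name "PrintServer1"     # confirm the new owner node after the move
```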

                                                                                                Configuring a Multisite SQL Server Failover Cluster

                                                                                                  To install or upgrade a SQL Server failover cluster, you must run the Setup program on each node of the failover cluster. To add a node to an existing SQL Server failover cluster, you must run SQL Server Setup on the node that is to be added to the SQL Server failover cluster instance. Do not run Setup on the active node to manage the other nodes. The following options are available for SQL Server failover cluster installation:

                                                                                                  Option1: Integration Installation with Add Node

                                                                                                  Create and configure a single-node SQL Server failover cluster instance. When you configure the node successfully, you have a fully functional failover cluster instance. At this point, it does not have high availability because there is only one node in the failover cluster. On each node to be added to the SQL Server failover cluster, run Setup with Add Node functionality to add that node.

                                                                                                  Option 2: Advanced/Enterprise Installation

                                                                                                  After you run Prepare Failover Cluster on one node, Setup creates the ConfigurationFile.ini file, which lists all the settings that you specified. On the additional nodes to be prepared, instead of following these steps, you can supply the autogenerated ConfigurationFile.ini file from the first node as input to the Setup command line. This step prepares the nodes to be clustered, but there is no operational instance of SQL Server at the end of this step.

                                                                                                  image

                                                                                                  After the nodes are prepared for clustering, run Setup on one of the prepared nodes. This step configures and finishes the failover cluster instance. At the end of this step, you will have an operational SQL Server failover cluster instance and all the nodes that were prepared previously for that instance will be the possible owners of the newly-created SQL Server failover cluster.
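The two-part Advanced/Enterprise flow above can be sketched from the command line. The file path is an example (ConfigurationFile.ini is generated by the first node's Prepare step), and additional required parameters such as service accounts are omitted for brevity; see the SQL Server Setup command-line documentation for the full option list:

```shell
:: 1) On each additional node to be prepared, reuse the autogenerated file
::    from the first node (example path):
Setup.exe /ACTION=PrepareFailoverCluster /ConfigurationFile=C:\Temp\ConfigurationFile.ini

:: 2) On one of the prepared nodes, complete the instance; this creates the
::    operational clustered SQL Server instance:
Setup.exe /ACTION=CompleteFailoverCluster
```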

                                                                                                  Follow this procedure to install a new SQL Server failover cluster by using the integrated simple cluster install:

                                                                                                1. Insert the SQL Server installation media, and from the root folder, double-click Setup.exe. To install from a network share, browse to the root folder on the share, and then double-click Setup.exe.

                                                                                                  1. The Installation Wizard starts the SQL Server Installation Center. To create a new cluster installation of SQL Server, click New SQL Server failover cluster installation on the Installation page.

                                                                                                    image

                                                                                                    1. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK.

                                                                                                        image

                                                                                                        1. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report. To continue, click Next.

                                                                                                        2. On the Setup Support Files page, click Install to install the Setup support files.

                                                                                                        3. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue.

                                                                                                            image

                                                                                                            1. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

                                                                                                            2. On the Product key page, indicate whether you are installing a free edition of SQL Server, or whether you have a PID key for a production version of the product.

                                                                                                            3. On the License Terms page, read the license agreement, and then select the check box to accept the license terms and conditions.

                                                                                                                image 

                                                                                                                1. To help improve SQL Server, you can also enable the feature usage option and send reports to Microsoft. Click Next to continue.

                                                                                                                    image

                                                                                                                    1. On the Feature Selection page, select the components for your installation. You can select any combination of check boxes, but only the Database Engine and Analysis Services support failover clustering. Other selected components will run as a stand-alone feature without failover capability on the current node that you are running Setup on.

                                                                                                                        image

                                                                                                                        1. The prerequisites for the selected features are displayed in the right-hand pane. SQL Server Setup installs any prerequisites that are not already present during the installation step described later in this procedure. SQL Server Setup then runs one more set of rules, based on the features you selected, to validate your configuration.

                                                                                                                            image

                                                                                                                            1. On the Instance Configuration page, specify whether to install a default or a named instance. SQL Server Network Name — specify a network name for the new SQL Server failover cluster. This is the name of the virtual node of the cluster, and it is used to identify your failover cluster on the network. Instance ID — by default, the instance name is used as the instance ID, which identifies the installation directories and registry keys for your instance of SQL Server; this is the case for both default and named instances. For a default instance, the instance name and instance ID would be MSSQLSERVER. To use a nondefault instance ID, select the Instance ID box and provide a value. Instance root directory — by default, the instance root directory is C:\Program Files\Microsoft SQL Server\. To specify a nondefault root directory, use the field provided, or click the ellipsis button to locate an installation folder.

                                                                                                                                image

                                                                                                                                1. Detected SQL Server instances and features on this computer – The grid shows instances of SQL Server that are on the computer where Setup is running. If a default instance is already installed on the computer, you must install a named instance of SQL Server. Click Next to continue.

                                                                                                                                    image

                                                                                                                                    1. The Disk Space Requirements page calculates the required disk space for the features that you specify, and compares requirements to the available disk space on the computer where Setup is running. Use the Cluster Resource Group page to specify the cluster resource group name where SQL Server virtual server resources will be located. To specify the SQL Server cluster resource group name, you have two options:

                                                                                                                                      • Use the drop-down box to specify an existing group to use.

                                                                                                                                      • Type the name of a new group to create. Be aware that the name “Available storage” is not a valid group name.

                                                                                                                                          image

                                                                                                                                        1. On the Cluster Disk Selection page, select the shared cluster disk resource for your SQL Server failover cluster. More than one disk can be specified. Click Next to continue.

                                                                                                                                            image

                                                                                                                                            1. On the Cluster Network Configuration page, specify the IP type and IP address for your failover cluster instance, and then click Next to continue. Note that this IP address resolves to the SQL Server network name (the virtual node) that you specified in an earlier step.


                                                                                                                                                1. On the Server Configuration — Service Accounts page, specify login accounts for SQL Server services. The actual services that are configured on this page depend on the features that you selected to install.


1. Use this page to specify the Cluster Security Policy. Use the default setting and click Next to continue. The workflow for the rest of this topic depends on the features that you have specified for your installation; you might not see all the pages, depending on your selections (Database Engine, Analysis Services, Reporting Services).

2. You can assign the same login account to all SQL Server services, or you can configure each service account individually. The startup type is set to manual for all cluster-aware services, including Full-Text Search and SQL Server Agent, and cannot be changed during installation. Microsoft recommends that you configure service accounts individually, so that each service is granted the minimum permissions it needs to complete its tasks. To specify the same login account for all service accounts in this instance of SQL Server, provide credentials in the fields at the bottom of the page. When you are finished specifying login information for SQL Server services, click Next.

• On the Server Configuration – Collation tab, use the default collations for the Database Engine and Analysis Services.

                                                                                                                                                      • Use the Database Engine Configuration — Account Provisioning page to specify the following:

• Select Windows Authentication or Mixed Mode Authentication for your instance of SQL Server.


                                                                                                                                                        1. Use the Database Engine Configuration – Data Directories page to specify nondefault installation directories. To install to default directories, click Next. Use the Database Engine Configuration – FILESTREAM page to enable FILESTREAM for your instance of SQL Server. Click Next to continue.


1. Use the Analysis Services Configuration — Account Provisioning page to specify users or accounts that will have administrator permissions for Analysis Services. You must specify at least one system administrator for Analysis Services. To add the account under which SQL Server Setup is running, click Add Current User. To add or remove accounts from the list of system administrators, click Add or Remove, and then edit the list of users, groups, or computers that will have administrator privileges for Analysis Services. When you are finished editing the list, click OK. Verify the list of administrators in the configuration dialog box. When the list is complete, click Next.


                                                                                                                                                                1. Use the Analysis Services Configuration — Data Directories page to specify nondefault installation directories. To install to default directories, click Next.


1. Use the Reporting Services Configuration page to specify the kind of Reporting Services installation to create. For a failover cluster installation, the option is set to Unconfigured Reporting Services installation, and you must configure the Reporting Services service after you complete the installation. However, there is no harm in selecting the Install and configure option if you are not a SQL expert.


1. On the Error Reporting page, specify the information that you want to send to Microsoft to help improve SQL Server. By default, the options for error reporting are disabled.


                                                                                                                                                                            1. The System Configuration Checker runs one more set of rules to validate your configuration with the SQL Server features that you have specified.


                                                                                                                                                                                1. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.


                                                                                                                                                                                    1. During installation, the Installation Progress page provides status so that you can monitor installation progress as Setup continues. After installation, the Complete page provides a link to the summary log file for the installation and other important notes. To complete the SQL Server installation process, click Close.

                                                                                                                                                                                    2. If you are instructed to restart the computer, do so now. It is important to read the message from the Installation Wizard when you have finished with Setup.

3. To add nodes to the single-node failover cluster you just created, run Setup on each additional node and follow the steps for the Add Node operation.
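As a sketch, the Add Node operation can also be run unattended from the command line on each new node. The instance name and account values below are placeholders for illustration, not settings taken from this environment:

```
Setup.exe /q /ACTION=AddNode /INSTANCENAME=MSSQLSERVER
/SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>"
/AGTSVCACCOUNT="<DomainName\UserName>" /AGTSVCPASSWORD="<StrongPassword>"
/IACCEPTSQLSERVERLICENSETERMS
```

Quiet mode (/q) requires /IACCEPTSQLSERVERLICENSETERMS because there is no wizard page on which to accept the license terms.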

                                                                                                                                                                                        SQL Advanced/Enterprise Failover Cluster Install

Step 1: Prepare the Environment

                                                                                                                                                                                        1. Insert the SQL Server installation media, and from the root folder, double-click Setup.exe.

                                                                                                                                                                                        2. Windows Installer 4.5 is required, and may be installed by the Installation Wizard. If you are prompted to restart your computer, restart and then start SQL Server Setup again.

3. After the prerequisites are installed, the Installation Wizard starts the SQL Server Installation Center. To prepare the node for clustering, go to the Advanced page and then click Advanced cluster preparation.

                                                                                                                                                                                        4. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

                                                                                                                                                                                        5. On the Setup Support Files page click Install to install the Setup support files.

                                                                                                                                                                                        6. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

7. On the Language Selection page, specify the language. To continue, click Next.

8. On the Product Key page, enter your product key and click Next.

9. On the License Terms page, accept the license terms and click Next to continue.

10. On the Feature Selection page, select the components for your installation as you did for the simple installation described earlier.

                                                                                                                                                                                        11. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.

                                                                                                                                                                                        12. To complete the SQL Server installation process, click Close.

                                                                                                                                                                                        13. If you are instructed to restart the computer, do so now.

14. Repeat the previous steps to prepare the other nodes for the failover cluster. You can also use the autogenerated configuration file to run prepare on the other nodes; a ConfigurationFile.ini is generated in C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20130603_014118\configurationfile.ini.
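The generated file is a plain INI file that Setup can replay on the other nodes. A minimal sketch of what it might contain is shown below; the exact parameters and values vary per environment, so treat these entries as illustrative assumptions rather than the actual contents of the generated file:

```
;SQL Server 2012 Configuration File generated by Advanced cluster preparation
[OPTIONS]
; The action recorded by the preparation workflow
ACTION="PrepareFailoverCluster"
; Features prepared on this node
FEATURES=SQLENGINE,AS,RS
; Instance naming
INSTANCENAME="MSSQLSERVER"
INSTANCEID="MSSQLSERVER"
; Service accounts (passwords are never written to the file)
SQLSVCACCOUNT="<DomainName\UserName>"
AGTSVCACCOUNT="<DomainName\UserName>"
```

To replay it on another node, pass it to Setup with /ConfigurationFile and supply the passwords on the command line.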


Step 2: Install SQL Server

                                                                                                                                                                                            1. After preparing all the nodes as described in the prepare step, run Setup on one of the prepared nodes, preferably the one that owns the shared disk. On the Advanced page of the SQL Server Installation Center, click Advanced cluster completion.

                                                                                                                                                                                            2. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

                                                                                                                                                                                            3. On the Setup Support Files page, click Install to install the Setup support files.

                                                                                                                                                                                            4. The System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue. You can view the details on the screen by clicking Show Details, or as an HTML report by clicking View detailed report.

5. On the Language Selection page, specify the language. To continue, click Next.

6. Use the Cluster Node Configuration page to select the instance name prepared for clustering.

7. Use the Cluster Resource Group page to specify the cluster resource group name where SQL Server virtual server resources will be located. On the Cluster Disk Selection page, select the shared cluster disk resource for your SQL Server failover cluster. Click Next to continue.

                                                                                                                                                                                            8. On the Cluster Network Configuration page, specify the network resources for your failover cluster instance. Click Next to continue.

9. Now follow the simple installation steps to select the Database Engine, Reporting, Analysis, and Integration Services.

                                                                                                                                                                                            10. The Ready to Install page displays a tree view of installation options that were specified during Setup. To continue, click Install. Setup will first install the required prerequisites for the selected features followed by the feature installation.

11. Once the installation is complete, click Close.

Follow this procedure to remove a node from an existing SQL Server failover cluster:

                                                                                                                                                                                                1. Insert the SQL Server installation media. From the root folder, double-click setup.exe. To install from a network share, navigate to the root folder on the share, and then double-click Setup.exe.

2. The Installation Wizard launches the SQL Server Installation Center. To remove a node from an existing failover cluster instance, click Maintenance in the left-hand pane, and then select Remove node from a SQL Server failover cluster.

                                                                                                                                                                                                3. The System Configuration Checker will run a discovery operation on your computer. To continue, click OK.

                                                                                                                                                                                                4. After you click install on the Setup Support Files page, the System Configuration Checker verifies the system state of your computer before Setup continues. After the check is complete, click Next to continue.

                                                                                                                                                                                                5. On the Cluster Node Configuration page, use the drop-down box to specify the name of the SQL Server failover cluster instance to be modified during this Setup operation. The node to be removed is listed in the Name of this node field.

                                                                                                                                                                                                6. The Ready to Remove Node page displays a tree view of options that were specified during Setup. To continue, click Remove.

                                                                                                                                                                                                7. During the remove operation, the Remove Node Progress page provides status.

                                                                                                                                                                                                8. The Complete page provides a link to the summary log file for the remove node operation and other important notes. To complete the SQL Server remove node, click Close.
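The same remove-node operation can be scripted. As a sketch (the instance name below is a placeholder, not a value from this environment), a quiet command-line run would look like:

```
Setup.exe /q /ACTION=RemoveNode /INSTANCENAME=MSSQLSERVER /CONFIRMIPDEPENDENCYCHANGE=0
```

Run this on the node you want to remove; the shared disks and the remaining nodes are left untouched.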

Command-Line Installation of SQL Server

1. To install a new, stand-alone instance with the SQL Server Database Engine, Replication, and Full-Text Search components, run the following command:

Setup.exe /q /ACTION=Install /FEATURES=SQL /INSTANCENAME=MSSQLSERVER /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>" /IACCEPTSQLSERVERLICENSETERMS

2. To prepare a new, stand-alone instance with the SQL Server Database Engine, Replication, and Full-Text Search components, and Reporting Services, run the following command:

Setup.exe /q /ACTION=PrepareImage /FEATURES=SQL,RS /INSTANCEID=<MYINST> /IACCEPTSQLSERVERLICENSETERMS

3. To complete a prepared, stand-alone instance that includes the SQL Server Database Engine, Replication, and Full-Text Search components, run the following command:

Setup.exe /q /ACTION=CompleteImage /INSTANCENAME=MYNEWINST /INSTANCEID=<MYINST> /SQLSVCACCOUNT="<DomainName\UserName>" /SQLSVCPASSWORD="<StrongPassword>"

4. To upgrade an existing instance or failover cluster node from SQL Server 2005, SQL Server 2008, or SQL Server 2008 R2, run the following command:

Setup.exe /q /ACTION=upgrade /INSTANCEID=<INSTANCEID> /INSTANCENAME=MSSQLSERVER /RSUPGRADEDATABASEACCOUNT="<Provide a SQL DB Account>" /IACCEPTSQLSERVERLICENSETERMS

5. To upgrade an existing instance of SQL Server 2012 to a different edition of SQL Server 2012, run the following command:

Setup.exe /q /ACTION=editionupgrade /INSTANCENAME=MSSQLSERVER /PID="<PID key for new edition>" /IACCEPTSQLSERVERLICENSETERMS

6. To install SQL Server using a configuration file, run the following command:

                                                                                                                                                                                                    Setup.exe /ConfigurationFile=MyConfigurationFile.INI

7. To install SQL Server using a configuration file and supply the service account passwords on the command line, run the following command:

Setup.exe /SQLSVCPASSWORD="typepassword" /AGTSVCPASSWORD="typepassword" /ASSVCPASSWORD="typepassword" /ISSVCPASSWORD="typepassword" /RSSVCPASSWORD="typepassword" /ConfigurationFile=MyConfigurationFile.INI

8. To uninstall an existing instance of SQL Server, run the following command:

                                                                                                                                                                                                    Setup.exe /Action=Uninstall /FEATURES=SQL,AS,RS,IS,Tools /INSTANCENAME=MSSQLSERVER

                                                                                                                                                                                                    Reference and Further Reading

                                                                                                                                                                                                    Windows Storage Server 2012

                                                                                                                                                                                                    Virtualizing Microsoft SQL Server

                                                                                                                                                                                                    The Perfect Combination: SQL Server 2012, Windows Server 2012 and System Center 2012

                                                                                                                                                                                                    EMC Storage Replication

                                                                                                                                                                                                    Download Hyper-v Server 2012

                                                                                                                                                                                                    Download Windows Server 2012

                                                                                                                                                                                                    Is VMware’s fate heading towards Novell?

Previously I wrote a blog post comparing the price and features of Hyper-V and VMware. I got a lot of feedback and questions about why I believe Microsoft will win the battle. Here is my short answer.

Living in a mining city in Australia, I can say that most mining, oil and gas companies haven't adopted Microsoft Hyper-V yet, with the notable exception of Fortescue Metals Group (FMG). FMG made a smart decision to go with the Microsoft cloud rather than any other cloud technology. But the wind is shifting quickly, and not just among mining, oil and gas companies. Here are other examples: the ING Direct case study and the Suncorp Bank case study. There is no hiding that Microsoft came late to the hypervisor game, but slowly and surely it is gaining momentum.

I have worked in this industry for almost 15 years now. On many occasions I have seen Microsoft crush an opponent and take over the market that opponent built, and that is what is happening in the hypervisor battle. Let's be honest: VMware is THE leader in virtualization, and I am sure there are skeptics who believe that beating VMware isn't possible. Those same skeptics bet their money on Novell NetWare, IBM Lotus Notes and Corel WordPerfect in their day. If I had told you in the year 2000 that Active Directory would beat Novell eDirectory, you would have burst out laughing; today there is nothing left to debate. You now rarely see or work with eDirectory, WordPerfect or Lotus Notes. These examples say it all. VMware's fate was written when Microsoft released Windows Server 2012, Hyper-V Server 2012 and System Center 2012. By the next Windows, Hyper-V and System Center release, VMware may be extinct.

If you need more evidence, you can find Microsoft's oil and gas customer success stories on Microsoft View Point.

Pasting text into Hyper-V guests sometimes results in garbled characters – a workaround

                                                                                                                                                                                                    To work around this issue:

• RDP to the virtual machine using mstsc.exe
• Increase the keyboard class buffer size in the virtual machine
• Disable the synthetic keyboard in the virtual machine to force use of the emulated keyboard

To increase the keyboard class buffer size in the virtual machine

1. Log on to a running virtual machine as an Administrator.

2. Hover the mouse in the top right corner of the screen, click Search, type regedit, right-click Registry Editor, and click Run as administrator.

                                                                                                                                                                                                    3. Locate and then click the following registry entry:

HKLM\SYSTEM\CurrentControlSet\Services\kbdclass\Parameters

4. In the details pane, double-click KeyboardDataQueueSize.

5. Select Decimal and type a value of 1024.

6. Click OK and close the Registry Editor. You can modify the same registry value for a group of Hyper-V virtual machines using a GPO. The GPO location is Computer Configuration\Windows Settings\Security Settings\Registry; right-click and add a new registry item.
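The registry steps above can also be scripted. A minimal PowerShell sketch, assuming an elevated prompt inside the guest (a restart is needed for the new queue size to take effect):

```powershell
# Increase the kbdclass keyboard buffer, as described in steps 3-6 above
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\kbdclass\Parameters'
Set-ItemProperty -Path $key -Name KeyboardDataQueueSize -Value 1024 -Type DWord

# Verify the new value
(Get-ItemProperty -Path $key).KeyboardDataQueueSize
```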

                                                                                                                                                                                                    To disable the synthetic keyboard for a virtual machine

1. Log on to a running virtual machine as a member of the Administrators group.

2. Hover the mouse in the top right corner of the screen, click Search, type devmgmt.msc, right-click Device Manager, and click Run as administrator.

3. Expand Keyboards, right-click Microsoft Hyper-V Virtual Keyboard, and click Disable.

4. Close the Device Manager snap-in and restart the virtual machine.

5. On Windows Server 2012 Server Core, download DevCon.exe from the Windows Driver Kit to disable this driver from the command line.
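On systems where the PnpDevice cmdlets are available (Windows 8.1 / Windows Server 2012 R2 and later), the synthetic keyboard can also be disabled from PowerShell instead of Device Manager. A sketch, assuming the device carries its default friendly name:

```powershell
# Disable the Hyper-V synthetic keyboard so the guest falls back to
# the emulated keyboard; run elevated, then restart the virtual machine
Get-PnpDevice -FriendlyName 'Microsoft Hyper-V Virtual Keyboard' |
    Disable-PnpDevice -Confirm:$false
```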

Microsoft’s Hyper-V Server 2012 and System Center 2012 Unleash a KO Punch to VMware

Hyper-V has been an integral part of Windows Server since 2008 and was enhanced with great features in Windows Server 2012. According to Gartner’s Magic Quadrant, Microsoft Hyper-V is positioned in the leader category, second to VMware. Combining Windows Server 2012 and System Center 2012 gives you a high-performance cloud technology. Microsoft’s licensing model is highly flexible, charges only by physical processors, and offers unlimited virtualization rights with the Datacenter edition. With Hyper-V, your return on investment (ROI) increases as your workload density increases.

                                                                                                                                                                                                    Pricing Comparison:

                                                                                                                                                                                                    The pricing is based on the following assumptions:

• Average consolidation ratio of 12 VMs per physical processor.
• Number of physical hosts required: 21. Each physical host contains 2 physical processors with six cores each.
• Three years of license and maintenance; the VMware cost includes Windows Server 2012 Datacenter edition for running guests.
• Costs do not include hardware, storage or project costs.
• Pricing is based on published US prices for VMware and Microsoft as of September 2012.
• The costs above don’t include the Microsoft Windows Server license cost for guest operating systems.
• Windows Server 2012 Datacenter allows you to run unlimited Windows Server 2012 guests on a Hyper-V Server 2012 host.
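For scale, the assumptions above work out as follows (illustrative arithmetic only, using the figures stated in this post):

```powershell
# 21 hosts x 2 physical processors x 12 VMs per processor
$hosts = 21
$procsPerHost = 2
$vmsPerProc = 12
$totalProcs = $hosts * $procsPerHost    # 42 licensed processors
$totalVMs = $totalProcs * $vmsPerProc   # 504 virtual machines
"$totalProcs processors, $totalVMs VMs"
```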

                                                                                                                                                                                                    Server Virtualization Environment:

                                                                                                                                                                                                    image

                                                                                                                                                                                                    Pricing Summary:

                                                                                                                                                                                                    image

Microsoft Server Virtualization Cost Breakdown

                                                                                                                                                                                                    image

VMware Server Virtualization Cost Breakdown

                                                                                                                                                                                                    image

Features vs. Cost Breakdown – Multi-Site Private Cloud Computing

Together with Windows Server 2012, System Center 2012 is truly a cloud and datacenter management solution, with eight separate components (management, monitoring, provisioning, disaster recovery and more) integrated into one unified product. A unified System Center management solution delivers greater OPEX cost savings than VMware, in addition to CAPEX cost savings.

                                                                                                                                                                                                    image

                                                                                                                                                                                                    Number Game:

                                                                                                                                                                                                    image

Breakdown of resources (Host/Guest/Cluster):

                                                                                                                                                                                                    image

                                                                                                                                                                                                    Network Virtualization

                                                                                                                                                                                                     image

                                                                                                                                                                                                    DR Solutions

                                                                                                                                                                                                    image

The truth about VMware’s claims:

You don’t have to be Einstein to understand that VMware is under significant pressure from all sides. Hence they are misleading the cloud market with biased information. I strongly recommend that you assess your business position and compare apples to apples before renewing or buying your next cloud products. Though VMware is still the no. 1 player in the cloud computing market, their fear is real: loyal VMware customers are continuously switching to Microsoft cloud technology. A declining enterprise market leads them to spread the following one-sided claims.

1. VMware claim: VMware vSphere 5.1 can achieve 18.9% higher VM density per host than Microsoft Hyper-V.

Facts: In one of VMware’s own tests, when adequate memory was provided to support the number of users, the performance variance between vSphere 5.1 and Hyper-V R2 SP1 was only 2% (using 24 VMs).

                                                                                                                                                                                                    2. VMware claim: Hyper-V performance is poor. If performance is important to you, choose VMware.

Facts: In reality, Hyper-V offers near-native levels of virtualization performance, for which there are multiple supporting proof points (including independent third-party validations):

• Enterprise Strategy Group Report (2011) – SharePoint, Exchange, and SQL on a Hyper-V host.
• Microsoft & Intel – 700,000 IOPS to a VM | Near native with VMQ: Windows Server and Hyper-V are not a limiting factor for I/O performance. There shouldn’t be any significant concern about I/O when virtualizing with Hyper-V.
• Project Virtual Reality Check (Terminal Services on Hyper-V).

3. VMware claim: Hyper-V isn’t ready for the enterprise. It can’t handle the most intensive workloads like VMware can.

Facts: Hyper-V offers near-native performance for key workloads, ensuring that customers can virtualize their mission-critical, high-performance applications and workloads with confidence on Hyper-V. Additionally, a growing number of enterprise customers are running their businesses on Microsoft Hyper-V. Please read the Microsoft Private Cloud success stories.

4. VMware claim: Hyper-V is lacking some of the key VMware features today. Features such as vMotion, HA, Memory Overcommit, DRS, Storage vMotion and Hot-Add are important features for us, and Hyper-V simply doesn’t come close.

                                                                                                                                                                                                    Facts: Hyper-V R2 SP1 and System Center 2012 provide Live Migration, High Availability, Storage Live Migration, Dynamic Memory Allocation, Hot-Add and subsequent removal of storage.

5. VMware claim: VMware vSphere 5.1 is more secure than Hyper-V because of its architecture and small code base.

Facts: A small footprint doesn’t equal a more secure hypervisor. Both vSphere and Hyper-V use a similar memory footprint to run. The disk footprint of ESXi 5.0 (144 MB) doubled from ESXi 4.0 (70 MB). Microsoft follows the rigorous, industry-leading Security Development Lifecycle (SDL) for all its products, and based on historical data it is possible to achieve a 40–60% reduction in patches using Server Core.

                                                                                                                                                                                                    6. VMware claim: There is no virtual firewall in Hyper-V while VMware provides vShield Zones.

Facts: Windows Server 2012 includes an integrated firewall with advanced security features. An old version of vShield Zones is included with vSphere 5.1 (details here), and vShield Zones has several limitations: every VM’s traffic passes through the Zones virtual appliances, which slows down the traffic.

                                                                                                                                                                                                    7. VMware claim: Microsoft doesn’t offer anything comparable to VMware Fault Tolerance.

                                                                                                                                                                                                    Facts: VMware Fault Tolerance has limited applicability and severe limitations. It cannot function with:

                                                                                                                                                                                                    • Thin Provisioning and Linked Clones
                                                                                                                                                                                                    • Storage vMotion
                                                                                                                                                                                                    • Hot plug devices and USB Pass-through
                                                                                                                                                                                                    • IPv6
                                                                                                                                                                                                    • vSMP
                                                                                                                                                                                                    • N-Port ID Virtualization (NPIV)
                                                                                                                                                                                                    • Serial/parallel ports
                                                                                                                                                                                                    • Physical and remote CD/floppy drives
• No more than 4 FT-protected VMs per host

8. VMware claim: VMware has significantly better support for Linux operating systems than Hyper-V.

Facts: In production environments, Hyper-V supports Microsoft Windows Server and Linux server guests without modifying the guest operating system or installing additional tools.

                                                                                                                                                                                                    9. VMware claim: VMware supports broad applications, while Hyper-V does not.

Facts: Since VMware does not have a certified logo program for applications, they are not in a position to dictate which applications are supported. On the contrary, every application that achieves a logo for Windows Server can run on a guest operating system on Hyper-V and is therefore inherently supported. There are over 2,500 ISV applications listed on Microsoft Pinpoint that work with Hyper-V. The truth is that neither Microsoft nor VMware dictates which applications you can install on a guest operating system; it is entirely up to you what you run there.

                                                                                                                                                                                                    10. VMware claim: VMware’s Site Recovery Manager (SRM) enables us to simplify our DR story, and provides us with a solution to not only perform a planned failover, but test it whenever we like. Microsoft simply can’t deliver an alternative to this.

Facts: System Center 2012 components such as Data Protection Manager and Orchestrator can provide tailored DR solutions, and Windows Server 2012 includes an in-box replication capability, Hyper-V Replica, at no extra cost.

11. VMware claim: Microsoft Hyper-V isn’t ready for hosters or service providers.

Facts: Hyper-V has been adopted by the service provider industry to host their own infrastructure and public clouds simultaneously, utilizing Microsoft network virtualization. Click here and filter by hosting and public cloud to find the list of hosters. Examples: Hostway, SoftsysHosting, Hyper-V-Mart, GeekHosting, BlueFire and many more.

12. VMware claim: Hyper-V does not fully comply with trunking and VLANs.

Facts: Microsoft network virtualization is more advanced than the VMware standard switch and distributed switch. Microsoft Hyper-V is fully compliant with 802.1Q trunking, VLANs, VIPs, network tunneling and multitenant IP management. VMware is catching up on network virtualization; being on the back foot, VMware advertised to hire a PR professional to campaign on network virtualization.

Bottom line: Why Select Hyper-V Over VMware

Beyond cost savings, here are the reasons why you should select Hyper-V and System Center 2012 over VMware vSphere 5.1:

1. Built-in virtualization: Hyper-V is an integral part of Windows Server 2008 and Windows Server 2012.

2. Familiarity with Windows: In-house IT staff can leverage their familiarity with the Windows environment to deploy Hyper-V, minimizing training cost and learning time.

3. Single-platform cloud management: System Center 2012 enables you to manage physical, virtual, private and public clouds using a common console, with multi-hypervisor management, third-party integration and process automation, the ability to manage applications via a single view across private and public clouds, and deep application diagnostics and insight.

4. Running common Microsoft applications: It is obvious that Microsoft applications run well on Hyper-V 2012; still, Microsoft has published third-party-validated lab results demonstrating best-in-class performance for Microsoft workloads on Hyper-V.

5. Private, public or hybrid cloud: Microsoft provides complete solutions for private, public and hybrid clouds, with next-generation computing models such as IaaS, PaaS and SaaS.

6. Value for money: Microsoft Private Cloud provides value for money. You receive unrestricted virtualization licensing once you buy Windows Server 2012 Datacenter and System Center 2012.

7. Easy migration: Convert a VMware virtual machine to a Microsoft Hyper-V virtual machine in a few easy steps. See this link.

8. Single vendor: Since your existing virtualization workload is mostly Windows Server, having Microsoft Hyper-V makes more sense from a vendor communication and contract management point of view.

                                                                                                                                                                                                    References:

                                                                                                                                                                                                    Microsoft Cloud Summit Australia

                                                                                                                                                                                                    Microsoft Private Cloud Cost Calculator

                                                                                                                                                                                                    Microsoft Private Cloud Success Stories

                                                                                                                                                                                                    Microsoft Cloud Computing

                                                                                                                                                                                                    System Center 2012

                                                                                                                                                                                                    Windows Server 2012

                                                                                                                                                                                                    Hyper-v Server 2012

                                                                                                                                                                                                    Download Microsoft System Center Private Cloud Evaluation Software