Understanding Network Virtualization in SCVMM 2012 R2

Networking in SCVMM provides the communication paths between the SCVMM server, Hyper-V hosts, Hyper-V clusters, virtual machines, applications, services, physical switches, load balancers and third-party hypervisors. Functionality includes:

SCVMM Network

Logical networking of almost anything hosted in SCVMM. A logical network models how Ethernet traffic is identified, transported and forwarded in a virtualized environment.

  • Provision and manage logical network resources for private and public clouds
  • Management of logical networks, subnets, VLANs, trunks/uplinks, PVLANs, MAC address pools, templates, profiles, static IP address pools, DHCP address pools and IP Address Management (IPAM)
  • Integration and management of third-party hardware load balancers and the Cisco Nexus 1000V virtual switch
  • Virtual IP addresses (VIPs), quality of service (QoS), network traffic monitoring and virtual switch extensions
  • Creation of virtual switches and virtual network gateways

Network Virtualization – Network virtualization is analogous to server virtualization: it abstracts the physical network so that multiple virtual networks can run on a single physical network.

  • Connects virtual machines to other virtual machines, hosts, or applications running on the same logical network.
  • Makes virtual machine migration independent of the physical network: when a VM is moved to a different host, SCVMM automatically moves the virtual network with it so that the VM remains connected to the rest of the infrastructure.
  • Allows multiple tenants to have their own isolated networks for security and privacy reasons.
  • Allows each tenant to use its own IP address ranges for management flexibility.
  • Allows communication through a gateway within the same site or to a different site, where permitted by the firewall.
  • Connects a VM running on a virtual network to any physical network in the same site or a different location.
  • Provides cross-network connectivity through the in-box NVGRE gateway, which can be deployed as a VM.

Network virtualization is defined in the Fabric > Networking tab of the SCVMM 2012 R2 management console. Virtual machine networking is defined in the VMs and Services > VM Networks tab of the SCVMM 2012 R2 management console.


Network virtualization terminology in SCVMM 2012 R2:

Fabric > Networking

Logical networks: A logical network in VMM contains the VLAN, PVLAN and subnet information for a site on a Hyper-V host or cluster. An IP address pool and a VM network can be associated with a logical network, and a logical network can connect to one or many other networks. The cloud function of each logical network is summarised below:

External (available in a tenant cloud: Yes)
  • Site-to-site endpoint IP addresses
  • Load balancer virtual IP addresses (VIPs)
  • Network address translation (NAT) IP addresses for virtual networks
  • Tenant VMs that need direct connectivity to the external network with full inbound access

Infrastructure (available in a tenant cloud: No)
  • Used for service provider infrastructure, including host management, live migration, failover clustering and remote storage. It cannot be accessed directly by tenants.

Load Balancer (available in a tenant cloud: Yes)
  • Uses static IP addresses
  • Has outbound access to the external network via the load balancer
  • Has inbound access that is restricted to only the ports exposed through the VIPs on the load balancer

Network Virtualization (available in a tenant cloud: Yes)
  • Automatically used for allocating provider addresses when a VM that is connected to a virtual network is placed onto a host
  • Only the gateway VMs connect to this network directly; a tenant VM never connects to it directly
  • Tenant VMs connect to their own VM network; each tenant’s VM network is connected to the Network Virtualization logical network
  • Static IP addresses are automatically assigned

Gateway (available in a tenant cloud: No)
  • Associated with forwarding gateways, which require one logical network per gateway. For each forwarding gateway, a logical network is associated with its respective scale unit and forwarding gateway.

Services (available in a tenant cloud: No)
  • Used for connectivity between services in the stamp by public-facing Windows Azure Pack features, and for SQL Server and MySQL DBaaS deployments
  • All deployments on the Services network sit behind the load balancer and are accessed through a virtual IP (VIP) on the load balancer
  • Also designed to support any service provider-owned service; likely to be used by high-density web servers initially, but potentially many other services over time

IP Address Pool: An IP address pool is a range of IP addresses assigned to a logical network in a site; it provides IP address, subnet, gateway, DNS and WINS information to virtual machines and applications.
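As a minimal PowerShell sketch of these fabric objects, run from the VMM command shell (the names, VLAN and address ranges below are examples, not values from this article):

  # Create a logical network with network virtualization enabled (example names)
  $hostGroup  = Get-SCVMHostGroup -Name "All Hosts"
  $logicalNet = New-SCLogicalNetwork -Name "Tenant-LN" -EnableNetworkVirtualization $true -UseGRE $true

  # Define a network site (logical network definition) with a subnet/VLAN pair
  $subnetVlan = New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 100
  $netSite    = New-SCLogicalNetworkDefinition -Name "Tenant-LN-Site1" -LogicalNetwork $logicalNet -VMHostGroup $hostGroup -SubnetVLan $subnetVlan

  # Create a static IP address pool on that network site
  $gateway = New-SCDefaultGateway -IPAddress "192.168.10.1" -Automatic
  New-SCStaticIPAddressPool -Name "Tenant-LN-Pool" -LogicalNetworkDefinition $netSite -Subnet "192.168.10.0/24" -IPAddressRangeStart "192.168.10.50" -IPAddressRangeEnd "192.168.10.200" -DefaultGateway $gateway -DNSServer "192.168.10.10"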

MAC Address Pool: A MAC address pool contains the default MAC address ranges used by the virtual network adapters of virtual machines. You can also create customised MAC address pools and assign them to virtual machines. The default pools are:

Pool name | Vendor | MAC address range
Default MAC address pool | Hyper-V and Citrix XenServer | 00:1D:D8:B7:1C:00 – 00:1D:D8:F4:1F:FF
Default VMware MAC address pool | VMware ESX | 00:50:56:00:00:00 – 00:50:56:3F:FF:FF

Hardware Load Balancer: Hardware load balancer support in SCVMM networking provides third-party load balancing for applications and services. A virtual IP (VIP) or an IP address pool can be associated with a hardware load balancer.

VIP Templates: A VIP template is a standard template used to define the virtual IP addresses associated with a hardware load balancer; the VIP is then allocated to applications, services and virtual machines hosted in SCVMM 2012 R2. For example, a template can specify the load-balancing behaviour for HTTPS traffic on a load balancer of a specific manufacturer and model.

Logical Switch: Logical switches act as containers for the properties or capabilities that you want network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you specify them in port profiles and logical switches, which you then apply to the appropriate adapters. A logical switch acts as an extension of a physical switch, with the major difference that you don’t have to drive to the data center, connect a patch lead to the computer, configure the switch ports and assign a VLAN tag to each port. In a logical switch you define the uplinks (physical adapters) of the Hyper-V hosts and associate those uplinks with logical networks and sites.
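A rough command-shell sketch of building these objects follows; the object names are examples, and the parameter names are as I recall them from the VMM 2012 R2 module, so treat this as an outline to verify against your build rather than a tested script:

  # Uplink port profile carrying the network site defined earlier (example names)
  $netSite = Get-SCLogicalNetworkDefinition -Name "Tenant-LN-Site1"
  $uplink  = New-SCNativeUplinkPortProfile -Name "Tenant-Uplink" -LogicalNetworkDefinition $netSite -LBFOTeamMode "SwitchIndependent" -LBFOLoadBalancingAlgorithm "HostDefault" -EnableNetworkVirtualization $true

  # Logical switch, then bind the uplink port profile to it
  $switch = New-SCLogicalSwitch -Name "Tenant-LogicalSwitch"
  New-SCUplinkPortProfileSet -Name "Tenant-UplinkSet" -LogicalSwitch $switch -NativeUplinkPortProfile $uplink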

Port Profiles: Port profiles act as containers for the security and capability settings that you want network adapters to have. Instead of configuring each network adapter individually, you specify these capabilities in port profiles, which you then apply to the appropriate adapters. Uplink port profiles are associated with the uplinks of a logical switch.

Port Classification: Port classifications provide global names for identifying different types of virtual network adapter port profiles. A port classification can be used across multiple logical switches while the settings for the port classification remain specific to each logical switch. For example, you might create one port classification named FAST to identify ports that are configured to have more bandwidth, and another port classification named SLOW to identify ports that are configured to have less bandwidth.

Network Service: Network Service is a container where you can add Windows and non-Windows network gateways as well as IP address management and monitoring information. An IP Address Management (IPAM) server running on Windows Server 2012 R2 can provide resources to VMM: you can add the IPAM server under the network services resources of SCVMM to configure and monitor logical networks and their associated network sites and IP address pools, and to monitor the usage of VM networks that you have configured or changed in VMM.

Virtual switch extension: A virtual switch extension manager in SCVMM allows you to use a vendor’s software-based network-management console together with the VMM management server. For example, you can install the Cisco Nexus 1000V extension software on the VMM server and add the functionality of Cisco switches to the VMM console.

VM Network: A VM network is the endpoint of network virtualization to which a virtual machine connects directly, allowing public or private communication with other VMs, networks and services. A VM network is associated with a logical network, which provides access to the rest of the environment.
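For example, a VM network with NVGRE isolation and its own tenant subnet can be created on top of a logical network that has network virtualization enabled (names and addresses below are illustrative):

  # VM network isolated with Hyper-V network virtualization (NVGRE)
  $logicalNet = Get-SCLogicalNetwork -Name "Tenant-LN"
  $vmNet = New-SCVMNetwork -Name "Tenant01-VMNetwork" -LogicalNetwork $logicalNet -IsolationType "WindowsNetworkVirtualization"

  # Tenant-owned subnet inside the VM network (can overlap with other tenants' ranges)
  $subnetVlan = New-SCSubnetVLan -Subnet "10.10.0.0/24"
  New-SCVMSubnet -Name "Tenant01-Subnet" -VMNetwork $vmNet -SubnetVLan $subnetVlan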


Related Articles:

Cisco Nexus 1000V Switch for Microsoft Hyper-V

How to implement hardware load balancer in SCVMM

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

How to implement hardware load balancer in SCVMM

The following procedure describes load-balancing functionality in Microsoft SCVMM. Microsoft's native Network Load Balancing (NLB) is automatically included when you install SCVMM; this procedure describes how to install and configure a third-party load balancer in SCVMM.

Prerequisites:

Note: The load balancer provider is a third-party product and must be downloaded from the vendor's website using your vendor credentials.

Step 1: Download and install the load balancer provider, then restart the SCVMM service in Windows Services. For Citrix NetScaler VPX, follow this procedure:

  1. Log on to NetScaler using the nsroot account or an LDAP account.
  2. Click Dashboard, then click Downloads in the right-hand corner.
  3. Click NetScaler LB Provider for Microsoft System Center Virtual Machine Manager 2012 to download the load balancer provider.
  4. Copy the load balancer provider to the SCVMM server and install it.
  5. Restart the SCVMM Windows services.

Step 2: Create a Run As account for the load balancer

  1. Open the Settings workspace.
  2. On the Home tab, in the Create group, click Create Run As Account. The Create Run As Account dialog box opens.
  3. Enter a name and an optional description to identify the credentials in VMM.
  4. Enter the credentials for the Run As account in the User name and Password boxes. These are the username and password of the virtual load balancer appliance you downloaded from the vendor's website and deployed on Hyper-V.
  5. Clear Validate domain credentials.
  6. Click OK to create the Run As account.
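The same Run As account can be created from the VMM command shell; the account name below is a placeholder:

  # Prompt for the load balancer appliance credentials and store them as a Run As account
  # (-NoValidation corresponds to clearing "Validate domain credentials")
  $cred = Get-Credential
  New-SCRunAsAccount -Name "LB-RunAs" -Credential $cred -NoValidation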

Step 3: Add the hardware load balancer. Follow the procedure below:

  1. Open the Fabric workspace.
  2. In the Fabric pane, expand Networking, right-click Load Balancers, and then click Add Load Balancer.
  3. On the Credentials page, next to the Run As account box, click Browse, select the Run As account you created in Step 2, click OK, and then click Next.
  4. On the Host Group page, select the check box next to each host group where the load balancer will be available. By default, any child host groups are also selected. Click Next.
  5. On the Manufacturer and Model page, specify the load balancer manufacturer and model, and then click Next.
  6. On the Address page, provide the IP address or FQDN and the port number of the load balancer, and then click Next.
  7. On the Logical Network Affinity page, specify the load balancer affinity to logical networks, and then click Next.
  8. On the Provider page, select the provider, click Test, and then click Next.
  9. On the Summary page, confirm the settings, and then click Finish.
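An approximate command-shell equivalent is shown below; the provider filter, address, manufacturer and model strings are placeholders that must match the provider you actually installed:

  # Add the load balancer using the installed configuration provider (example values)
  $provider  = Get-SCConfigurationProvider | Where-Object { $_.Name -like "*NetScaler*" }
  $runAs     = Get-SCRunAsAccount -Name "LB-RunAs"
  $hostGroup = Get-SCVMHostGroup -Name "All Hosts"
  Add-SCLoadBalancer -ConfigurationProvider $provider -RunAsAccount $runAs -VMHostGroup $hostGroup -LoadBalancerAddress "netscaler01.contoso.com" -LoadBalancerPort 443 -Manufacturer "Citrix" -Model "NetScaler VPX"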

Step 4: Create a VIP template for the third-party hardware load balancer

You can create two types of VIP template: generic and vendor-specific.

For a vendor-specific VIP template, do the following:

  1. In Virtual Machine Manager (VMM), open the Fabric workspace.
  2. In the Fabric pane, expand Networking, and then click VIP Templates.
  3. On the Home tab, in the Show group, click Fabric Resources.
  4. On the Home tab, in the Create group, click Create VIP Template.
  5. On the Name page, type the template name, a description and the port (for example 443), then click Next.
  6. On the Type page, select Specific, select the third-party vendor and load balancer model, and then click Next.
  7. On the Protocol page, select TCP, UDP or both as required, then continue through the remaining pages and click Finish.

For a generic VIP template, change step 6 to select Generic and then follow these steps:

  1. In Virtual Machine Manager (VMM), open the Fabric workspace.
  2. In the Fabric pane, expand Networking, and then click VIP Templates.
  3. On the Home tab, in the Show group, click Fabric Resources.
  4. On the Home tab, in the Create group, click Create VIP Template.
  5. On the Name page, type the template name, a description and the port (for example 443), then click Next.
  6. On the Type page, select Generic, and then click Next.
  7. On the Protocol page, select the option that matches your requirement, and then click Next. The options are:
  • HTTPS pass-through – traffic terminates at the virtual machine and is not decrypted at the load balancer.
  • HTTPS terminate – traffic is decrypted at the load balancer and re-encrypted to the virtual machine. This option suits Exchange OWA and similar applications; you must log on to the load balancer portal, import the OWA SSL certificate, and select the re-encrypt option in the VIP template.
  • HTTP and Custom options are also available on this page.
  8. On the Persistence page, select either persistent or non-persistent (custom) traffic, and then click Next. Persistent traffic keeps an OWA session directed to a specific Exchange CAS server.
  9. On the Load Balancing page, select a load-balancing method such as Round Robin, and then click Next.
  10. On the Health Monitors page, click Insert, enter the following values, and then click Next:
  • Protocol: HTTPS
  • Request: GET /
  • Response: 200
  • Interval: 120
  • Time-out: 60
  • Retries: 3

Note: The time-out value should be less than the interval value. The interval and time-out values are in seconds.

  11. On the Summary page, review the settings, and then click Finish.
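A hedged command-shell sketch of an equivalent HTTPS VIP template is below. The health-monitor values mirror the wizard entries above, the template name is a placeholder, and the cmdlet parameter names are as I recall them from the VMM 2012 R2 module; HTTPS terminate/re-encrypt and persistence options would be added through further parameters of these cmdlets:

  # Health monitor matching the values entered in the wizard
  $monitor = New-SCLoadBalancerHealthMonitor -ProtocolName "HTTPS" -Request "GET /" -Response 200 -IntervalSeconds 120 -TimeoutSeconds 60 -NumberOfRetries 3

  # HTTPS protocol object and the VIP template itself (example name, port 443, round robin)
  $protocol = New-SCLoadBalancerProtocol -ProtocolName "HTTPS"
  New-SCLoadBalancerVIPTemplate -Name "Web-HTTPS-VIP" -Description "Generic HTTPS VIP template" -LoadBalancerPort 443 -LoadBalancerProtocol $protocol -LoadBalancingMethod "RoundRobin" -LoadBalancerHealthMonitor $monitor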

The next step is to create a load-balanced web service template and connect it to the load balancer. In the service template, select the load-balanced option on the VM's network adapter (port profile), then deploy the template into production.

Cisco Nexus 1000V Switch for Microsoft Hyper-V

Cisco Nexus 1000V Switch for Microsoft Hyper-V provides the following advanced features in Microsoft Hyper-V and SCVMM:

  • Integrates physical, virtual, and mixed environments
  • Allows dynamic policy provisioning and mobility-aware network policies
  • Improves security through integrated virtual services and advanced Cisco NX-OS features

The following table summarizes the capabilities and benefits of the Cisco Nexus 1000V Switch deployed with Microsoft Hyper-V and SCVMM.

Capabilities | Features | Benefits
Advanced switching | Private VLANs, quality of service (QoS), access control lists (ACLs), port security, and Cisco vPath | Granular control of virtual machine-to-virtual machine interaction
Security | Dynamic Host Configuration Protocol (DHCP) snooping, Dynamic ARP Inspection, and IP Source Guard | Reduce common security threats in data center environments
Monitoring | NetFlow, packet statistics, Switched Port Analyzer (SPAN), and Encapsulated Remote SPAN (ERSPAN) | Gain visibility into virtual machine-to-virtual machine traffic to reduce troubleshooting time
Manageability | Simple Network Management Protocol (SNMP), NETCONF, syslog, and other troubleshooting command-line interfaces | Use existing network management tools to manage physical and virtual environments

The Cisco Nexus 1000V Series has two major components:

Virtual Ethernet Module (VEM)- The software component is embedded on each Hyper-V host as a forwarding extension. Each virtual machine on the host is connected to the VEM through a virtual Ethernet port.

Virtual Supervisor Module (VSM)- The management module controls multiple VEMs and helps in defining virtual machine (VM)-centric network policies.

Supported Configurations

  • Microsoft SCVMM 2012 SP1/R2
  • Up to 64 Microsoft Windows Server 2012/2012 R2 Hyper-V hosts
  • 2048 virtual Ethernet ports per VSM, with 216 virtual Ethernet ports per physical host
  • 2048 active VLANs
  • 2048 port profiles
  • 32 physical NICs per physical host
  • Compatible with all Cisco Nexus and Cisco Catalyst switches, as well as switches from other vendors

Comparison between Cisco Nexus 1000V editions:

Features | Essential (free) | Advanced
VLANs, PVLANs, ACLs, QoS, Link Aggregation Control Protocol (LACP), and multicast | Yes | Yes
Cisco vPath (for virtual services) | Yes | Yes
Cisco NetFlow, SPAN, and ERSPAN (for traffic visibility) | Yes | Yes
SNMP, NetConf, syslog, etc. (for manageability) | Yes | Yes
Microsoft SCVMM integration | Yes | Yes
DHCP snooping | No | Yes
IP Source Guard | No | Yes
Dynamic ARP Inspection | No | Yes
Cisco VSG* | No | Yes

Installation Steps for Cisco Nexus 1000V Switch for Microsoft Hyper-V are:

Step 1: Download the Cisco Nexus 1000V appliance/ISO

Log on to Cisco.com with your Cisco account and download the Cisco Nexus 1000V for Microsoft Hyper-V software package.

Step 2: Install the SCVMM components

Step 3: Install and configure the VSM

Step 4: Configure the SCVMM fabric and VM network

Step 5: Prepare the Hyper-V hosts

Step 6: Create the Nexus 1000V logical switch

Step 7: Create new VMs or connect existing VMs to the logical switch

References & Getting Started with Nexus 1000V

Cisco Nexus 1000v Quick Start Guide

Cisco Nexus 1000V Switch for Microsoft Hyper-V Deployment Guide

Cisco Nexus 1000v datasheet

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

 

Hyper-v Server 2016 What’s New

Changed and upgraded functionality of Hyper-v Server 2016.

  1. Hyper-V cluster with mixed Hyper-V versions
  • Join a Windows Server 2016 Hyper-V node to a Windows Server 2012 R2 Hyper-V cluster
  • The cluster functional level remains Windows Server 2012 R2
  • Manage the cluster, Hyper-V, and virtual machines from a node running Windows Server 2016 or Windows 10
  • Existing Hyper-V features continue to work, but new Windows Server 2016 Hyper-V features are not available until all nodes are migrated and the cluster functional level is upgraded
  • The virtual machine configuration version of existing virtual machines is not upgraded automatically
  • Upgrade the configuration version after you upgrade the cluster functional level, using the Update-VmConfigurationVersion vmname cmdlet (see the sketch after this list)
  • New virtual machines created in the mixed cluster remain backward compatible
  • When the Hyper-V role is enabled on a computer that uses the Always On/Always Connected (AOAC) power model, the Connected Standby power state is now available
  2. Production checkpoints
  • For production checkpoints, the Volume Shadow Copy Service (VSS) is used inside Windows virtual machines
  • Linux virtual machines flush their file system buffers to create a file-system-consistent checkpoint
  • Production checkpoints no longer use saved-state technology
  3. Hot add and remove of network adapters, virtual hard disks and memory
  • Add or remove a network adapter while the virtual machine is running, for both Windows and Linux guests
  • Adjust the memory of a running virtual machine even if dynamic memory is not enabled
  4. Integration services delivered through Windows Update
  • Windows Update distributes integration services updates
  • The vmguest.iso image file is no longer needed to update integration components
  5. Storage quality of service (QoS)
  • Create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks
  • Hyper-V automatically adjusts storage performance to comply with the assigned policies
  6. Virtual machine improvements
  • Import virtual machines with an older configuration version, upgrade them later, and live migrate them across hosts
  • After you upgrade the virtual machine configuration version, you can’t move the virtual machine to a server that runs Windows Server 2012 R2
  • You can’t downgrade the virtual machine configuration version from version 6 back to version 5
  • Turn off the virtual machine before upgrading the virtual machine configuration version
  • The Update-VmConfigurationVersion cmdlet is blocked on a Hyper-V cluster while the cluster functional level is Windows Server 2012 R2
  • After the upgrade, the virtual machine uses the new configuration file format
  • The new configuration files use the .VMCX file extension for virtual machine configuration data and the .VMRS file extension for runtime state data
  • Ubuntu 14.04 and later and SUSE Linux Enterprise Server 12 support Secure Boot, enabled with the Set-VMFirmware vmname -SecureBootTemplate MicrosoftUEFICertificateAuthority cmdlet
  7. Hyper-V Manager improvements
  • Support for alternate credentials
  • Down-level management of Hyper-V running on Windows Server 2012, Windows 8, Windows Server 2012 R2 and Windows 8.1
  • Connects to Hyper-V using the WS-MAN protocol with Kerberos or NTLM authentication
  8. Guest OS support
  • Server operating systems from Windows Server 2008 through Windows Server 2016
  • Desktop operating systems from Windows Vista SP2 through Windows 10
  • FreeBSD, Ubuntu, SUSE Linux Enterprise, CentOS, Debian, Fedora and Red Hat
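A quick sketch of the rolling-upgrade commands mentioned in item 1, run on a cluster node after the last node has been upgraded (the VM name is an example; note that on released Windows Server 2016 builds the configuration-version cmdlet is named Update-VMVersion):

  # Raise the cluster functional level once every node runs Windows Server 2016
  Update-ClusterFunctionalLevel

  # Upgrade a VM's configuration version while it is shut down
  Stop-VM -Name "VM01"
  Update-VMVersion -Name "VM01"
  Start-VM -Name "VM01"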

9. ReFS Accelerated VHDX 

  • Create a fixed-size VHDX on a ReFS volume almost instantly
  • Faster backup operations and checkpoints

10. Nested Virtualization

  • Run Hyper-V Server as a guest OS inside Hyper-V

11. Shared VHDX format

  • Host Based Backup of Shared VHDX files
  • Online Resize of Shared VHDX
  • Some usability changes in the UI
  • Shared VHDX files now use a new virtual disk format with the .vhds file extension

12. Stretched Hyper-V Cluster 

  • A stretched cluster allows you to configure Hyper-V hosts and storage in a single stretch cluster, where two nodes share one set of storage and two nodes share another set of storage; synchronous replication keeps both sets of storage mirrored in the cluster to allow immediate failover.
  • These nodes and their storage are ideally located in separate physical sites, although that is not required.
  • The stretch cluster will run a Hyper-V Compute workload.

 

Unsupported:

Hyper-V on Windows 10 doesn’t support failover clustering

Migrating VMs from Standalone Hyper-v Host to clustered Hyper-v Host

Scenario 1: In-place migration of two standalone Windows Servers (Hyper-v role installed) into clustered Windows Servers (Hyper-v role installed).

The steps involved in this scenario are listed below. There will be downtime in this scenario.

  1. Delete all snapshots from the VMs.
  2. Update Windows Server with the latest patches and hotfixes.
  3. Reboot the hosts.
  4. Install the Failover Clustering feature on both hosts.
  5. Connect the hosts to the shared storage infrastructure, either iSCSI or Fibre Channel.
  6. Present shared storage (a 5 GB quorum disk and an additional disk for the VM store) to the Hyper-V hosts.
  7. Run the Create Cluster wizard and create the cluster.
  8. In Failover Cluster Manager, click Disks, select the virtual machine storage disk and convert it to a Cluster Shared Volume.
  9. Open Hyper-V Manager, run a storage migration and move all VM data to a single location on the shared storage.
  10. In Failover Cluster Manager, run the Configure Role wizard, select Virtual Machine from the list, select one or more VMs and migrate them to the failover cluster.
  11. Test live migration.
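For reference, steps 4-10 map roughly to the following PowerShell, run on one of the hosts (host, cluster, IP, disk and VM names are examples):

  # Install the Failover Clustering feature (repeat on the second host or use -ComputerName)
  Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

  # Validate and create the cluster across both hosts
  Test-Cluster -Node HV01, HV02
  New-Cluster -Name HVCLUSTER01 -Node HV01, HV02 -StaticAddress 192.168.1.50

  # Convert the VM storage disk to a Cluster Shared Volume (disk name as listed in Failover Cluster Manager)
  Add-ClusterSharedVolume -Name "Cluster Disk 2"

  # Move the VM files onto the CSV, then make the VM highly available
  Move-VMStorage -VMName "VM01" -DestinationStoragePath "C:\ClusterStorage\Volume1\VM01"
  Add-ClusterVirtualMachineRole -VMName "VM01"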

Scenario 2: Migrating standalone Windows Servers (Hyper-V role installed) that use local storage to a different Windows Server (Hyper-V role installed) cluster that uses shared storage.

In this scenario, the clustered Windows servers cannot see the local storage in the old Hyper-V host, and the old Hyper-V host cannot see the shared storage in the new clustered environment. There will be downtime when you migrate the VMs. Delete any snapshots and back up all VMs before you proceed.

Option A: Download the Veeam Backup & Replication 8 trial version and configure a VM as the Veeam management server. Add the standalone Hyper-V host as the source and the Hyper-V cluster as the target, then replicate all the VMs. Shut down the old VMs on the standalone Hyper-V host, power on the replicated VMs in the Hyper-V cluster, and delete the old VMs.

Option B: Copy the VHD and configuration files to the clustered shared storage. Log on to one of the clustered Hyper-V hosts, open Hyper-V Manager and use the Import Virtual Machine option to import the VM. Then use the Configure Role option in Failover Cluster Manager on the same host to bring the VM into the cluster, and power it on.

My recommendation: use Veeam B&R.

Scenario 3: Migrating standalone Windows Servers (Hyper-v role installed) using iSCSI storage to different Windows Servers (Hyper-v role installed) cluster using fibre channel or iSCSI storage.

Option A: Shut down the VMs. Present the iSCSI storage that is connected to the standalone hosts to the clustered hosts. Use storage migration to move the VMs to the clustered hosts, then use the Configure Role option in Failover Cluster Manager to bring the VMs into the Hyper-V cluster.

Option B: Again use Veeam to do the job.

There are many factors/challenges when migrating VMs from standalone environment to clustered environment.

  1. iSCSI storage to Fibre Channel storage: the new cluster has host bus adapters (HBAs) while the old standalone host does not. You can use the Microsoft iSCSI initiator to meet the initiator requirement on the new hosts.
  2. Fibre Channel storage to iSCSI storage: there will be considerable downtime because of the new architecture. Veeam can be part of the solution.
  3. Multi-site and geographically dispersed clusters depend on MPLS or IP VPN network latency and bandwidth.

In conclusion, there is no silver bullet for every situation. Consult a Microsoft partner to determine the migration path that best fits your requirements.

Windows Server 2012 R2 Gateway

Windows Server 2012 R2 can be configured as a gateway VM in a two- or four-node cluster on Hyper-V hosts. The gateway VM (router) enhances the data center by providing a secure router for the public or private cloud. A gateway VM cluster can provide routing for up to 200 tenants, and each gateway VM can provide routing for up to 50 tenants.

Two different versions of the gateway router are available in Windows Server 2012 R2.

RRAS Multitenant Gateway – The RRAS Multitenant Gateway router can be used for multitenant or non-multitenant deployments and is a full-featured BGP router. To deploy an RRAS Multitenant Gateway router, you must use Windows PowerShell commands (a sketch follows the list below).

RRAS Gateway configuration and options:

  • Configure the RRAS Multitenant Gateway for use with Hyper-V Network Virtualization
  • Configure the RRAS Multitenant Gateway for use with VLANs
  • Configure the RRAS Multitenant Gateway for Site-to-Site VPN Connections
  • Configure the RRAS Multitenant Gateway to Perform Network Address Translation for Tenant Computers
  • Configure the RRAS Multitenant Gateway for Dynamic Routing with BGP
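As a hedged example of the PowerShell-based deployment mentioned above, run on the gateway VM (the tenant name, ASN and IP addresses are placeholders):

  # Install RRAS in multitenant mode on the gateway VM
  Install-WindowsFeature -Name RemoteAccess -IncludeManagementTools
  Install-RemoteAccess -MultiTenancy

  # Create a routing domain (compartment) for one tenant and enable its services
  Enable-RemoteAccessRoutingDomain -Name "Tenant01" -Type All -PassThru

  # Dynamic routing with BGP inside that tenant's routing domain
  Add-BgpRouter -RoutingDomain "Tenant01" -BgpIdentifier 10.1.0.1 -LocalASN 64512
  Add-BgpPeer -RoutingDomain "Tenant01" -Name "Edge01" -LocalIPAddress 10.1.0.1 -PeerIPAddress 10.1.0.2 -PeerASN 64513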

Windows Server 2012 R2 Gateway – To deploy Windows Server Gateway, you must use System Center 2012 R2 and Virtual Machine Manager (VMM). The Windows Server Gateway router is designed for use with multitenant deployments.

Multi-tenancy is the ability of a cloud infrastructure to support the virtual machine workloads of multiple tenants, but isolate them from each other, while all of the workloads run on the same infrastructure. The multiple workloads of an individual tenant can interconnect and be managed remotely, but these systems do not interconnect with the workloads of other tenants, nor can other tenants remotely manage them.

This feature allows service providers to virtually isolate different subnets, VLANs and network traffic that reside on the same physical core or distribution switches. Hyper-V Network Virtualization uses Network Virtualization using Generic Routing Encapsulation (NVGRE), which allows tenants to bring their own IP address space and naming into the cloud environment.

Systems requirements:

Option | Hyper-V host | Gateway VM
CPU | 2-socket NUMA node | 8 vCPU for two VMs; 4 vCPU for four VMs
CPU cores | 8 | 1
Memory | 48 GB | 8 GB
Network adapters | Two 10 GB NICs connected to a Cisco trunk port (1) | Four virtual NICs: operating system, clustering heartbeat, external network, internal network
Clustering | Active-Active | Active-Active or Active-Passive

(1) NIC teaming in the Hyper-V host: configure a NIC team in the Hyper-V host from the two 10 GB NICs. The Windows Server 2012 R2 gateway VM has four vNICs that are connected to the Hyper-V virtual switch bound to the NIC team.
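A short sketch of that host-side configuration (adapter, team, switch and VM names are examples):

  # On the Hyper-V host: team the two 10 GB NICs and bind a virtual switch to the team
  New-NetLbfoTeam -Name "GW-Team" -TeamMembers "NIC1", "NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
  New-VMSwitch -Name "GW-vSwitch" -NetAdapterName "GW-Team" -AllowManagementOS $true

  # Add the gateway VM's four vNICs (operating system, heartbeat, external, internal)
  "Management", "Heartbeat", "External", "Internal" | ForEach-Object {
      Add-VMNetworkAdapter -VMName "GW01" -SwitchName "GW-vSwitch" -Name $_
  }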

Deployment Guides:

Windows Server 2012 R2 RRAS Deployment Guide

Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM

Clustering Windows Server 2012 R2

VMware vSphere 6.0 VS Microsoft Hyper-v Server 2012 R2

With the emergence of vSphere 6.0, I would like to compare vSphere 6.0 with Windows Server 2012 R2 Hyper-V. I collected the vSphere 6.0 features from a few blogs and the VMware community forum. Note that vSphere 6.0 is still in beta, which means VMware can change anything before the final release. Many of the new capabilities in the vSphere 6.0 beta are already available in Windows Server 2012 R2, so let's take a quick look at both virtualization products.

Features | vSphere 6.0 | Hyper-V Server 2012 R2
Certificates | Certificate Authority | Active Directory Certificate Services
Certificate store | Certificate Store | Certificate store in the Windows OS
Single sign-on | SSO 2.0 retained from vSphere 5.5 | Active Directory Domain Services
Database | vPostgres database for the vCenter Appliance, up to 8 vCenter instances | Microsoft SQL Server, no such limitation
Management tools | Web Client and the retained VI client | SCVMM console and Hyper-V Manager
Installer | Combined single installer with all input upfront | Combined single installer with all input upfront
vMotion / Live Migration | Long-distance migration up to 100+ ms RTT | Multisite Hyper-V cluster and live migration
Storage migration | Storage vMotion with shared and unshared storage | Hyper-V live storage migration between local and shared storage
Combined cloud products | Platform Services Controller (PSC) includes vCenter, vCOps, vCloud Director and vCloud Automation | Microsoft System Center combines App Controller, Configuration Manager, Data Protection Manager, Operations Manager, Orchestrator, Service Manager and Virtual Machine Manager
Service registration | View the services that are running in the system | Windows Services
Licensing | Platform Services Controller (PSC) includes licensing | Volume Activation role in Windows Server 2012 R2
Virtual datacenters | A virtual datacenter aggregates CPU, memory, storage and network resources | Provision CPU, memory, storage and network using the Create Cloud wizard

Another key point for comparison: if you plan to procure an FC tape library and maintain a virtualized backup server, note that vSphere does not support FC tape even with NPIV, whereas Hyper-V supports FC tape using NPIV.

References:

http://www.wooditwork.com/2014/08/27/whats-new-vsphere-6-0-vcenter-esxi/

https://araihan.wordpress.com/2014/03/25/vmware-vs-hyper-v-can-microsoft-make-history-again/

https://araihan.wordpress.com/2013/01/24/microsofts-hyper-v-server-2012-and-system-center-2012-unleash-ko-punch-to-vmware/

https://araihan.wordpress.com/2015/08/20/hyper-v-server-2016-whats-new/