Understanding Software Defined Networking (SDN) and Network Virtualization

The evolution of virtualization led to a wide range of virtualized technologies, including the key building block of the data center: the network. A traditional network used to be a wired interconnection of physical switches and devices. A network administrator's nightmare was making a configuration change and possibly breaking another configuration in the process. Putting together a massive data center was an expensive venture and a lengthy project. With virtualization and cloud services on the horizon, almost anything can be offered as a service, and almost anything can be virtualized and software defined.

Since the arrival of Microsoft SCVMM and VMware NSX, network functions virtualization (NFV), network virtualization (NV) and software defined networking (SDN) have been making a bold statement for both on-premises customers and cloud service providers. Of all the benefits of a software defined network, two stand out: easy provisioning of a network, and easy change control of that network. You don't have to fiddle with the physical layer of the network, and you certainly don't have to modify a virtual host, to provision a complete network with a few mouse clicks. How does it work?

Software Defined Networking- Software defined networking (SDN) is a dynamic, manageable, cost-effective, adaptable, high-bandwidth, agile open architecture. SDN architectures decouple network control and forwarding functions, enabling network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services. Examples of Cisco SDN-capable switches are listed below.

The fundamental building blocks of SDN are:

  • Programmable: Network control is directly programmable because it is decoupled from forwarding functions.
  • Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
  • Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
  • Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.

Cisco SDN Capable Switches

Modular Switches

Cisco Nexus 9516
Cisco Nexus 9508
Cisco Nexus 9504

Fixed Switches

Cisco Nexus 9396PX
Cisco Nexus 9396TX
Cisco Nexus 93128TX
Cisco Nexus 9372PX
Cisco Nexus 9372TX
Cisco Nexus 9336PQ ACI Spine Switch
Cisco Nexus 9332PQ

Network Virtualization- A virtualized network is simply a partitioning of an existing physical network into multiple logical networks. Network virtualization creates logical segments in an existing network by dividing the network logically at the flow level. The end goal is to allow multiple virtual machines to share a logical segment, a private portion of the network allocated to a business. In physical networking you cannot have the same IP address range twice within the same network and still manage traffic for two different kinds of services and applications. But in a virtual world you can have the same IP range segregated into logical networks. Let's say two different businesses/tenants use the 10.124.3.x/24 IP address scheme in their internal networks, and both decide to migrate to the Microsoft Azure platform and bring their own IP address scheme (10.124.3.x/24) with them. It is absolutely possible for them to retain their own IP addresses and migrate to Microsoft Azure. You will not see changes within the Azure portal. You will not even know that another organisation has the same internal IP address scheme and is possibly hosted on the same Hyper-V host. It is programmatically and logically managed by Azure Stack and SCVMM network virtualization technology.
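To make the overlap concrete, here is a minimal VMM PowerShell sketch, assuming the SCVMM 2012 R2 cmdlets and a logical network with network virtualization already enabled; the names used (Tenants-LN, TenantA, TenantB) are hypothetical, not from the text above:

```powershell
# Sketch: two tenants reusing the same 10.124.3.0/24 subnet, isolated by
# Hyper-V Network Virtualization (NVGRE). All names are placeholders.
Import-Module virtualmachinemanager

$logicalNet = Get-SCLogicalNetwork -Name "Tenants-LN"

foreach ($tenant in "TenantA", "TenantB") {
    # Each tenant gets its own isolated VM network on the same logical network.
    $vmNet = New-SCVMNetwork -Name "$tenant-Net" -LogicalNetwork $logicalNet `
        -IsolationType "WindowsNetworkVirtualization"

    # Both tenants bring the identical address space; NVGRE keeps the
    # flows separate at the virtualization layer.
    $subnetVlan = New-SCSubnetVLan -Subnet "10.124.3.0/24"
    New-SCVMSubnet -Name "$tenant-Subnet" -VMNetwork $vmNet -SubnetVLan $subnetVlan
}
```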

Network Functions Virtualization- Network functions virtualization virtualizes layers 4 to 7 of the OSI model in a software defined network. NFV runs on high-performance x86 platforms, and it enables users to turn up functions on selected tunnels in the network. The end goal is to allow an administrator to create a service profile for a VM, create a logical workflow within the network (the tunnel), and then build virtual services on that specific logical environment. NFV saves a lot of time in provisioning and managing the application level of the network. Functions like IDS, firewall and load balancer can be virtualized in Microsoft SCVMM and VMware NSX.

Here are some Cisco NFV products.

IOS-XRv Virtual Router: Scale your network when and where you need with this carrier-class router.

Network Service Virtualization- Network Service Virtualization (NSV) virtualizes a network service, for example a firewall module or an IPS software instance, by dividing the software image so that it may be accessed independently by different applications, all from a common hardware base. NSV eliminates the cost of acquiring separate hardware for a single purpose; instead it uses the same hardware to serve a different purpose every time the network is accessed or a service is requested. It also opens the door for service providers to offer security as a service to various customers.

Network security appliances are now bundled as a set of security functions within one appliance. For example, firewalls were once offered on special-purpose hardware, as were IPS (Intrusion Prevention System), web filter, content filter, VPN (Virtual Private Network), NBAD (Network-Based Anomaly Detection) and other security products. This integration allows for greater software collaboration between security elements, lowers the cost of acquisition and streamlines operations.

The following Cisco virtualized network services are available on the Cisco Catalyst 6500 Series platform.

Network security virtualization

  • Virtual firewall contexts, also called security contexts
  • Up to 250 mixed-mode multiple virtual firewalls
  • Routed firewalls (Layer 3)
  • Transparent firewalls (Layer 2, or stealth)
  • Mixed-mode firewalls: a combination of Layer 2 and Layer 3 firewalls coexisting on the same physical firewall

Virtual Routing and Forwarding (VRF) network services

  • NetFlow on VRF interfaces
  • VRF-aware syslog
  • VRF-aware TACACS
  • VRF-aware Telnet
  • Virtualized address management policies using VRF-aware DHCP
  • Optimized traffic redirection using PBR-set VRF

Finally, you can have all of this in one basket, without incurring cost for each component, once you have System Center Virtual Machine Manager or Microsoft Azure Stack implemented in your on-premises infrastructure, or if you choose to migrate to the Microsoft Azure platform.

Relevant Articles

Comparing VMware vSwitch with SCVMM Network Virtualization

Understanding Network Virtualization in SCVMM 2012 R2

Cisco Nexus 1000V Switch for Microsoft Hyper-V

How to implement hardware load balancer in SCVMM

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Comparing VMware vSwitch with SCVMM Network Virtualization

Feature | VMware vSphere Standard vSwitch | VMware vSphere DV Switch | System Center VMM 2012 R2
Switch Features | Yes | Yes | Yes
Layer 2 Forwarding | Yes | Yes | Yes
IEEE 802.1Q VLAN Tagging | Yes | Yes | Yes
Multicast Support | Yes | Yes | Yes
Network Policy | – | Yes | Yes
Network Migration | – | Yes | Yes
NVGRE/VXLAN | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes
L3 Network Support | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes
Network Virtualization | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes
NIC Teaming | Yes | Yes | Yes
Network Load Balancing | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes
Virtual Switch Extension | – | Yes | Yes
Physical Switch Connectivity:
EtherChannel | Yes | Yes | Yes
Load Balancing Algorithms:
Port Monitoring | Yes | Yes | Yes
Third-party Hardware Load Balancing | – | Yes | Yes
Traffic Management Features:
Bandwidth Limiting | – | Yes | Yes
Traffic Monitoring | – | Yes | Yes
Security Features:
Port Security | Yes | Yes | Yes
Private VLANs | – | Yes | Yes
Management Features:
Manageability | Yes | Yes | Yes
Third-Party APIs | – | Yes | Yes
Port Policy | Yes | Yes | Yes
NetFlow | Yes* | Yes* | Yes
Syslog | Yes** | Yes** | Yes
SNMP | Yes | Yes | Yes

* Experimental Support

** Virtual switch network syslog information is exported and included with VMware ESX events.

References:

VMware Distributed Switch

VMware NSX

Microsoft System Center Features 

Related Articles:

Understanding Network Virtualization in SCVMM 2012 R2

Cisco Nexus 1000V Switch for Microsoft Hyper-V

How to implement hardware load balancer in SCVMM

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Understanding Network Virtualization in SCVMM 2012 R2

Networking in SCVMM is the communication mechanism to and from the SCVMM server, Hyper-V hosts, Hyper-V clusters, virtual machines, applications, services, physical switches, load balancers and third-party hypervisors. Functionality includes:

SCVMM Network

Logical networking of almost anything hosted in SCVMM- A logical network is a concept covering the complete identification, transportation and forwarding of Ethernet traffic in a virtualized environment.

  • Provision and manage logical network resources for private and public clouds
  • Manage logical networks, subnets, VLANs, trunks or uplinks, PVLANs, MAC address pools, templates, profiles, static IP address pools, DHCP address pools and IP Address Management (IPAM)
  • Integrate and manage third-party hardware load balancers and the Cisco Nexus 1000V virtual switch
  • Provide virtual IP addresses (VIPs), quality of service (QoS), network traffic monitoring and virtual switch extensions
  • Create virtual switches and virtual network gateways

Network Virtualization – Network virtualization is a parallel concept to server virtualization: it allows you to abstract and run multiple virtual networks on a single physical network.

  • Connects virtual machines to other virtual machines, hosts, or applications running on the same logical network.
  • Provides independent migration of virtual machines: when a VM is moved from its original host to a different host, SCVMM automatically migrates the virtual network with the VM so that it remains connected to the rest of the infrastructure.
  • Allows multiple tenants to have their own isolated networks for security and privacy reasons.
  • Allows unique IP address ranges per tenant for management flexibility.
  • Communicates through a gateway with the same site or a different site, if permitted by the firewall.
  • Connects a VM running on a virtual network to any physical network in the same site or a different location.
  • Connects across networks using an in-box NVGRE gateway that can be deployed as a VM to provide cross-network interoperability.

Network virtualization is defined in the Fabric > Networking tab of the SCVMM 2012 R2 management console. Virtual machine networking is defined in the VMs and Services > VM Networks tab of the SCVMM 2012 R2 management console.


Network virtualization terminology in SCVMM 2012 R2:


Logical networks: A logical network in VMM contains the VLAN, PVLAN and subnet information of a site in a Hyper-V host or Hyper-V cluster. An IP address pool and a VM network can be associated with a logical network. A logical network can connect to one or many other networks, and vice versa. The cloud function of each logical network is as follows:

External (tenant cloud: Yes)
  • Site-to-site endpoint IP addresses
  • Load balancer virtual IP addresses (VIPs)
  • Network address translation (NAT) IP addresses for virtual networks
  • Tenant VMs that need direct connectivity to the external network with full inbound access

Infrastructure (tenant cloud: No)
  • Used for service provider infrastructure, including host management, live migration, failover clustering, and remote storage. It cannot be accessed directly by tenants.

Load Balancer (tenant cloud: Yes)
  • Uses static IP addresses
  • Has outbound access to the external network via the load balancer
  • Has inbound access that is restricted to only the ports that are exposed through the VIPs on the load balancer

Network Virtualization (tenant cloud: Yes)
  • Automatically used for allocating provider addresses when a VM that is connected to a virtual network is placed onto a host
  • Only the gateway VMs connect to this network directly
  • Tenant VMs connect to their own VM network; each tenant's VM network is connected to the Network Virtualization logical network
  • A tenant VM will never connect to this network directly
  • Static IP addresses are automatically assigned

Gateway (tenant cloud: No)
  • Associated with forwarding gateways, which require one logical network per gateway. For each forwarding gateway, a logical network is associated with its respective scale unit and forwarding gateway.

Services (tenant cloud: No)
  • Used for connectivity between services in the stamp by public-facing Windows Azure Pack features, and for SQL Server and MySQL DBaaS deployments
  • All deployments on the Services network are behind the load balancer and accessed through a virtual IP (VIP) on the load balancer
  • Also designed to support any service provider-owned service; likely to be used by high-density web servers initially, but potentially many other services over time

IP Address Pool: An IP address pool is a range of IP addresses assigned to a logical network in a site; it provides IP address, subnet, gateway, DNS and WINS information to virtual machines and applications.
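As a rough illustration, an IP address pool can also be created from PowerShell; the sketch below assumes the VMM 2012 R2 cmdlets, and the logical network, site and address values are placeholders:

```powershell
# Sketch: a static IP address pool on a logical network definition (site).
Import-Module virtualmachinemanager

$logicalNet = Get-SCLogicalNetwork -Name "Tenants-LN"
$siteDef    = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNet -Name "Sydney-Site"

# Gateway and DNS details are handed out with each address from the pool.
$gateway = New-SCDefaultGateway -IPAddress "192.168.10.1" -Automatic

New-SCStaticIPAddressPool -Name "Sydney-Pool" `
    -LogicalNetworkDefinition $siteDef `
    -Subnet "192.168.10.0/24" `
    -IPAddressRangeStart "192.168.10.50" `
    -IPAddressRangeEnd "192.168.10.200" `
    -DefaultGateway $gateway `
    -DNSServer "192.168.10.10"
```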

MAC Address Pool: A MAC address pool contains the default MAC address ranges for the virtual network adapters of virtual machines. You can also create a customised MAC address pool and assign that pool to virtual machines.

Pool Name | Vendor | MAC Address Range
Default MAC address pool | Hyper-V and Citrix XenServer | 00:1D:D8:B7:1C:00 – 00:1D:D8:F4:1F:FF
Default VMware MAC address pool | VMware ESX | 00:50:56:00:00:00 – 00:50:56:3F:FF:FF

Hardware Load Balancer: Hardware load balancer support is functionality within SCVMM networking that provides third-party load balancing of applications and services. A virtual IP or an IP address pool can be associated with a hardware load balancer.

VIP Templates: A VIP template is a standard template used to define the virtual IP addresses associated with a hardware load balancer. A VIP is allocated to applications, services and virtual machines hosted in SCVMM 2012 R2. A template can specify, for example, the load-balancing behaviour for HTTPS traffic on a specific load balancer by manufacturer and model.

Logical Switch: Logical switches act as containers for the properties or capabilities that you want network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you specify the capabilities in port profiles and logical switches, which you can then apply to the appropriate adapters. A logical switch acts as an extension of a physical switch, with the major difference that you don't have to drive to the data center, take a patch lead and connect it to a computer, then configure switch ports and assign a VLAN tag to each port. In the logical switch you define the uplinks (physical adapters) of Hyper-V hosts and associate those uplinks with logical networks and sites.

Port Profiles: Port profiles act as containers for the security and capability settings that you want network adapters to have. Instead of configuring individual properties for each network adapter, you specify these capabilities in port profiles, which you can then apply to the appropriate adapters. Uplink port profiles are associated with uplinks in a logical switch.

Port Classification: Port classifications provide global names for identifying different types of virtual network adapter port profiles. A port classification can be used across multiple logical switches while the settings for the port classification remain specific to each logical switch. For example, you might create one port classification named FAST to identify ports that are configured to have more bandwidth, and another port classification named SLOW to identify ports that are configured to have less bandwidth.
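A minimal PowerShell sketch of these objects, reusing the FAST/SLOW names from the example above; the logical switch parameters shown are assumptions based on the VMM 2012 R2 cmdlets:

```powershell
# Sketch: port classifications plus a logical switch container.
Import-Module virtualmachinemanager

# Global names identifying port profile types across logical switches.
New-SCPortClassification -Name "FAST" -Description "Higher-bandwidth ports"
New-SCPortClassification -Name "SLOW" -Description "Lower-bandwidth ports"

# A logical switch that will later be applied to host network adapters;
# uplink mode "Team" lets VMM team the host uplinks it is bound to.
New-SCLogicalSwitch -Name "Production-LS" -EnableSriov $false `
    -SwitchUplinkMode "Team"
```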

Network Service: A network service is a container where you can add Windows and non-Windows network gateways, plus IP address management and monitoring information. An IP Address Management (IPAM) server running on Windows Server 2012 R2 can provide resources to VMM. You can use the IPAM server, added in the network services tab of SCVMM, to configure and monitor logical networks and their associated network sites and IP address pools. You can also use the IPAM server to monitor the usage of VM networks that you have configured or changed in VMM.

Virtual switch extension: A virtual switch extension manager in SCVMM allows you to use a software-based, vendor network-management console and the VMM management server together. For example, you can install the Cisco Nexus 1000V extension software on a VMM server and add the functionality of Cisco switches into the VMM console.

VM Network: A VM network on a logical network is the endpoint of network virtualization; it directly connects a virtual machine to allow public or private communication among VMs or with other networks and services. A VM network is associated with a logical network for direct access to other VMs.


Related Articles:

Cisco Nexus 1000V Switch for Microsoft Hyper-V

How to implement hardware load balancer in SCVMM

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

How to implement hardware load balancer in SCVMM

The following procedure describes network load balancing functionality in Microsoft SCVMM. Microsoft native NLB is automatically included when you install SCVMM. This procedure describes how to install and configure a third-party load balancer in SCVMM.

Prerequisites:

Note: A load balancer provider is a third-party product and must be obtained from the third party's website using third-party credentials.

Step 1: Download and install the load balancer provider, then restart the SCVMM services in Windows Services. For Citrix NetScaler VPX, follow this procedure:

  1. Log on to NetScaler using the nsroot account or an LDAP account.
  2. Click Dashboard, then click Downloads in the right-hand corner.
  3. Click NetScaler LB Provider for Microsoft System Center Virtual Machine Manager 2012 to download the load balancer provider.
  4. Copy the load balancer provider and install it on the SCVMM server.
  5. Restart the SCVMM Windows services.

Step 2: Create a Run As account for the load balancer (a PowerShell equivalent is sketched after the list).

  1. Open the Settings workspace.
  2. On the Home tab, in the Create group, click Create Run As Account. The Create Run As Account dialog box opens.
  3. Enter a name and optional description to identify the credentials in VMM.
  4. Enter credentials for the Run As account in the User name and Password text boxes. This is the username and password of the virtual load balancer you downloaded from the third-party website and deployed in Hyper-V.
  5. Unselect Validate domain credentials.
  6. Click OK to create the Run As account.
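A PowerShell equivalent of the steps above might look like this sketch; the account name is a placeholder, and -NoValidation mirrors unselecting Validate domain credentials:

```powershell
# Sketch: create the load balancer Run As account from PowerShell.
Import-Module virtualmachinemanager

# Prompts for the NetScaler (nsroot or LDAP) username and password.
$cred = Get-Credential

New-SCRunAsAccount -Name "NetScaler-LB-Account" -Credential $cred -NoValidation
```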

Step 3: Add the hardware load balancer. Follow the procedure below to add a load balancer.

  1. Open the Fabric workspace.
  2. In the Fabric pane, expand Networking, right-click Load Balancers, and then click Add Load Balancer.
  3. On the Credentials page, next to the Run As account box, click Browse, click the Run As account you created in Step 2, click OK, and then click Next.
  4. On the Host Group page, select the check box next to each host group where the load balancer will be available. By default, any child host groups are also selected.
  5. On the Manufacturer and Model page, specify the load balancer manufacturer and model, and then click Next.
  6. On the Address page, provide the IP address or FQDN and the port number of the load balancer, then click Next.
  7. On the Logical Network Affinity page, specify the load balancer affinity to logical networks, and then click Next.
  8. On the Provider page, select the provider, click Test, then click Next.
  9. On the Summary page, confirm the settings, and then click Finish.

Step 4: Create a VIP template for the third-party hardware load balancer.

You can create two types of VIP template: generic and vendor-specific.

For a vendor-specific load balancer, do the following.

  1. In Virtual Machine Manager (VMM), open the Fabric workspace.
  2. In the Fabric pane, expand Networking, and then click VIP Templates.
  3. On the Home tab, in the Show group, click Fabric Resources.
  4. On the Home tab, in the Create group, click Create VIP Template.
  5. On the Name page, type the name, description and port (for example, 443) of the template, then click Next.
  6. On the Type page, select Specific, select the third-party vendor and NLB type, then click Next.
  7. On the Protocol page, select TCP, UDP or both based on your requirement, click Next, click Next again, then click Finish.

For a generic load balancer provider, the steps are the same except that in step 6 you select Generic:

  1. In Virtual Machine Manager (VMM), open the Fabric workspace.
  2. In the Fabric pane, expand Networking, and then click VIP Templates.
  3. On the Home tab, in the Show group, click Fabric Resources.
  4. On the Home tab, in the Create group, click Create VIP Template.
  5. On the Name page, type the name, description and port (for example, 443) of the template, then click Next.
  6. On the Type page, select Generic, then click Next.
  7. On the Protocol page, select TCP, UDP or another option based on your requirement, then click Next. The HTTPS options on this page behave as follows:
  • HTTPS pass-through – traffic terminates directly at the virtual machine and is not decrypted at the load balancer.
  • HTTPS terminate – traffic is decrypted at the load balancer and re-encrypted to the virtual machine. This option is best for Exchange OWA and similar applications. You must log on to the load balancer portal, import the OWA SSL certificate, and select the re-encrypt option in the VIP template.
  • HTTP and Custom options are also available on this page.
  8. On the Persistence page, select either persistent or non-persistent (custom) traffic. Persistent traffic allows an OWA session to be directed to a specific Exchange CAS server.
  9. On the Load Balancing page, select a load-balancing method such as Round Robin, then click Next.
  10. On the Health Monitor page, click Insert, enter the following values, then click Next.
  • Protocol: https
  • Request: GET /
  • Response: 200
  • Interval: 120
  • Time-out: 60
  • Retries: 3

Note: The time-out value should be less than the interval value. The interval and time-out values are in seconds.

  11. On the Summary page, review the settings, and then click Finish.

The next step is to create a load-balanced web service template and connect it to the load balancer. In the port profile of the VM's service template, select network load balancing, then deploy the template into production.

Cisco Nexus 1000V Switch for Microsoft Hyper-V

Cisco Nexus 1000V Switch for Microsoft Hyper-V provides the following advanced features in Microsoft Hyper-V and SCVMM.

  • Integration of physical, virtual, and mixed environments
  • Dynamic policy provisioning and mobility-aware network policies
  • Improved security through integrated virtual services and advanced Cisco NX-OS features

The following table summarizes the capabilities and benefits of the Cisco Nexus 1000V Switch deployed with Microsoft Hyper-V and SCVMM.

Capability | Features | Benefits
Advanced switching | Private VLANs, quality of service (QoS), access control lists (ACLs), port security, and Cisco vPath | Granular control of VM-to-VM interaction
Security | Dynamic Host Configuration Protocol (DHCP) snooping, Dynamic ARP Inspection, and IP Source Guard | Reduced common security threats in data center environments
Monitoring | NetFlow, packet statistics, Switched Port Analyzer (SPAN), and Encapsulated Remote SPAN (ERSPAN) | Visibility into VM-to-VM traffic to reduce troubleshooting time
Manageability | Simple Network Management Protocol (SNMP), NetConf, syslog, and other troubleshooting command-line interfaces | Use existing network management tools to manage physical and virtual environments

The Cisco Nexus 1000V Series has two major components:

Virtual Ethernet Module (VEM)- The software component is embedded on each Hyper-V host as a forwarding extension. Each virtual machine on the host is connected to the VEM through a virtual Ethernet port.

Virtual Supervisor Module (VSM)- The management module controls multiple VEMs and helps in defining virtual machine (VM)-centric network policies.

Supported Configurations

  • Microsoft SCVMM 2012 SP1/R2
  • Up to 64 Microsoft Windows Server 2012/R2 Hyper-V hosts
  • 2,048 virtual Ethernet ports per VSM, with 216 virtual Ethernet ports per physical host
  • 2,048 active VLANs
  • 2,048 port profiles
  • 32 physical NICs per physical host
  • Compatible with all Cisco Nexus and Cisco Catalyst switches, as well as switches from other vendors

Comparison between Cisco Nexus 1000V editions:

Features | Essential (free version) | Advanced
VLANs, PVLANs, ACLs, QoS, Link Aggregation Control Protocol (LACP), and multicast | Yes | Yes
Cisco vPath (for virtual services) | Yes | Yes
Cisco NetFlow, SPAN, and ERSPAN (for traffic visibility) | Yes | Yes
SNMP, NetConf, syslogs, etc. (for manageability) | Yes | Yes
Microsoft SCVMM integration | Yes | Yes
DHCP snooping | – | Yes
IP Source Guard | – | Yes
Dynamic ARP Inspection | – | Yes
Cisco VSG* | – | Yes

The installation steps for the Cisco Nexus 1000V Switch for Microsoft Hyper-V are:

Step 1: Download the Cisco Nexus 1000V appliance/ISO

Log on to Cisco.com using your Cisco account and download the software from this URL.

Step 2: Install the SCVMM components

Step 3: Install and configure the VSM

Step 4: Configure the SCVMM fabric and VM network

Step 5: Prepare the Hyper-V hosts

Step 6: Create the Nexus 1000V logical switch

Step 7: Create VMs or connect existing VMs to the logical switch

References & Getting Started with Nexus 1000V

Cisco Nexus 1000v Quick Start Guide

Cisco Nexus 1000V Switch for Microsoft Hyper-V Deployment Guide

Cisco Nexus 1000v datasheet

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

 

Windows Server 2012 R2 Gateway

Windows Server 2012 R2 can be configured as a gateway VM in a two- or four-node cluster on Hyper-V hosts. The gateway VM (router) enhances the data center by providing a secure router for public or private clouds. A gateway VM cluster can provide routing functionality for up to 200 tenants; each gateway VM can provide routing functionality for up to 50 tenants.

Two different versions of the gateway router are available in Windows Server 2012 R2.

RRAS Multitenant Gateway – The RRAS Multitenant Gateway router can be used for multitenant or non-multitenant deployments, and is a full-featured BGP router. To deploy an RRAS Multitenant Gateway router, you must use Windows PowerShell commands.

RRAS Gateway configuration and options (a PowerShell sketch follows this list):

  • Configure the RRAS Multitenant Gateway for use with Hyper-V Network Virtualization
  • Configure the RRAS Multitenant Gateway for use with VLANs
  • Configure the RRAS Multitenant Gateway for Site-to-Site VPN Connections
  • Configure the RRAS Multitenant Gateway to Perform Network Address Translation for Tenant Computers
  • Configure the RRAS Multitenant Gateway for Dynamic Routing with BGP
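A minimal sketch of such a PowerShell-only deployment, assuming the RemoteAccess module in Windows Server 2012 R2; the tenant name, BGP identifier and ASN are made up:

```powershell
# Sketch: enable multitenant RRAS and onboard one tenant routing domain.
Install-RemoteAccess -MultiTenancy

# Create an isolated routing compartment for the tenant (site-to-site VPN,
# routing, or both; "All" enables everything).
Enable-RemoteAccessRoutingDomain -Name "TenantA" -Type All

# Dynamic routing with BGP inside the tenant compartment.
Add-BgpRouter -RoutingDomain "TenantA" -BgpIdentifier "10.124.3.1" -LocalASN 64512
```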

Windows Server 2012 R2 Gateway – To deploy Windows Server Gateway, you must use System Center 2012 R2 and Virtual Machine Manager (VMM). The Windows Server Gateway router is designed for use with multitenant deployments.

Multi-tenancy is the ability of a cloud infrastructure to support the virtual machine workloads of multiple tenants, but isolate them from each other, while all of the workloads run on the same infrastructure. The multiple workloads of an individual tenant can interconnect and be managed remotely, but these systems do not interconnect with the workloads of other tenants, nor can other tenants remotely manage them.

This feature gives service providers the ability to virtually isolate different subnets, VLANs and network traffic that reside on the same physical core or distribution switch. Hyper-V Network Virtualization uses Network Virtualization using Generic Routing Encapsulation (NVGRE), which allows tenants to bring their own IP address space and name space into the cloud environment.

System requirements:

Option | Hyper-V Host | Gateway VM
CPU | 2-socket NUMA node | 8 vCPU for two VMs; 4 vCPU for four VMs
CPU cores | 8 | 1
Memory | 48 GB | 8 GB
Network adapters | Two 10 GbE NICs connected to a Cisco trunk port¹ | 4 virtual NICs (operating system, clustering heartbeat, external network, internal network)
Clustering | Active-Active | Active-Active or Active-Passive

¹ NIC teaming on the Hyper-V host: you can configure NIC teaming on the Hyper-V host for the two 10 GbE NICs. The Windows Server 2012 R2 gateway VM then has four vNICs connected to the Hyper-V virtual switch that is bound to the NIC team.
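As an illustration, the host-side team, the external virtual switch and a gateway VM vNIC could be wired up as below; the adapter, team, switch and VM names are assumptions:

```powershell
# Sketch: team the two 10 GbE NICs and bind a Hyper-V switch to the team.
New-NetLbfoTeam -Name "GatewayTeam" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# External virtual switch on the team; no host vNIC is created on it.
New-VMSwitch -Name "GatewayExternal" -NetAdapterName "GatewayTeam" `
    -AllowManagementOS $false

# Add one of the gateway VM's four vNICs (repeat per network).
Add-VMNetworkAdapter -VMName "GW-VM01" -SwitchName "GatewayExternal" -Name "External"
```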

Deployment Guides:

Windows Server 2012 R2 RRAS Deployment Guide

Test Lab Guide: Windows Server 2012 R2 Hyper-V Network Virtualization with System Center 2012 R2 VMM

Clustering Windows Server 2012 R2

VMware vs Hyper-v: Can Microsoft Make History Again?

In 1852 Karl Marx published "The Eighteenth Brumaire of Louis Napoleon". In it, Marx writes that history repeats itself, "the first time as tragedy, the second time as farce", referring respectively to Napoleon I and to his nephew Louis Napoleon (Napoleon III).

I am not talking about Karl Marx here; I am not a specialist on that subject. I am a computer geek. So why do I refer to Karl Marx? Because I believe the remark above can be connected to the history between Microsoft and Novell.

In my past blog posts I compared VMware and Hyper-V:

http://microsoftguru.com.au/2013/01/24/microsofts-hyper-v-server-2012-and-system-center-2012-unleash-ko-punch-to-vmware/

http://microsoftguru.com.au/2013/09/14/vsphere-5-5-is-catching-up-with-hyper-v-2012-r2/

http://microsoftguru.com.au/2013/04/07/is-vmwares-fate-heading-towards-novell/

I found similar views echoed by other commentators:

http://blogs.gartner.com/david_cappuccio/2009/06/30/just-a-thought-will-vmware-become-the-next-novell/

http://virtualizedgeek.com/2012/12/04/is-vmware-headed-the-slow-painful-death-of-novell/

Here is Gartner Inc.’s verdict:

http://www.gartner.com/technology/reprints.do?id=1-1GJA88J&ct=130628&st=sb

http://www.gartner.com/technology/reprints.do?id=1-1LV8IX1&ct=131016&st=sb

So the question is: can Microsoft defeat VMware? Can Microsoft make history again? Here is why I believe Microsoft will make history once again, regardless of what VMware fans think. Let's start….

What’s New in Windows Server 2012 R2 Hyper-V

Microsoft has traditionally put out point releases to its server operating systems about every two years. Windows Server is no longer a traditional operating system; it is a cloud OS in true terms and uses. Let's see what's new in Windows Server 2012 R2 in terms of virtualization.

  • New Generation 2 virtual machines
  • Automatic server OS activation inside VMs
  • Upgrade and live migration improvements
  • Online VHDX virtual disk resize
  • Live VM export and clone
  • Linux guest enhancements
  • Storage Quality of Service (QoS)
  • Guest clustering with shared VHDX
  • Hyper-V Replica site-to-site replication enhancements

Generation 2 VMs

Hyper-V in Windows Server 2012 R2 supports a totally new VM architecture based on modern hardware with no emulated devices. This makes it possible to add a number of new features, such as Secure Boot for VMs and booting from virtual SCSI or virtual network adapters.
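For example, a Generation 2 VM with Secure Boot can be created from PowerShell; the name, path, sizes and switch below are examples:

```powershell
# Sketch: a Generation 2 VM booting from a virtual SCSI disk.
New-VM -Name "Gen2-VM01" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "C:\VMs\Gen2-VM01.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "External"

# Secure Boot is available only on Generation 2 VMs.
Set-VMFirmware -VMName "Gen2-VM01" -EnableSecureBoot On
```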

VM Direct Connect

Windows Server 2012 R2 Hyper-V adds VM Direct Connect, which allows a direct remote desktop connection to any running VM over what's now called the VM bus. It's also integrated into the Hyper-V management experience.

Extend replication to a third site

Hyper-V Replica in Windows Server 2012 is currently limited to a single replication target. This makes it difficult to support scenarios like a service provider wanting to act both as a target for a customer to replicate and a source to replicate to another offsite facility. Windows Server 2012 R2 and Hyper-V now provide a tertiary replication capability to support just such a scenario. By the same token, enterprises can now save one replica in-house and push a second replica off-site.
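A hedged sketch of chaining replication to a third site with the Hyper-V cmdlets; the server names are placeholders, and the second command runs on the replica server to extend the copy onward:

```powershell
# On the primary server: replicate VM01 to the replica server.
Enable-VMReplication -VMName "VM01" -ReplicaServerName "replica.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# On the replica server: extend replication of the replica VM to a third site.
Enable-VMReplication -VMName "VM01" -ReplicaServerName "dr.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
```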

Compression for faster migration

Two new options in Windows Server 2012 R2 Hyper-V help improve the performance of live migrations. The first is the ability to enable compression on the data to reduce the total number of bytes transmitted over the wire. The obvious caveat is that tapping CPU resources for data compression could potentially impact other operations, so you'll need to take that into consideration. The second option, SMB Direct, requires network adapters that support RDMA. Microsoft's advice: if you have 10 Gbps RDMA-capable networking available, use RDMA (10x improvement); otherwise, use compression (2x improvement). Compression is the default choice, and it works for the large majority of use cases.
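The per-host choice between the two options can be scripted with Set-VMHost, as in this sketch:

```powershell
# Compression is the default live migration performance option.
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# With RDMA-capable adapters, switch to SMB Direct instead.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```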

Online VM exporting and cloning

It’s now possible to export or clone a running VM from System Center Virtual Machine Manager 2012 R2 with a few mouse clicks. As with pretty much anything related to managing Windows Server 2012, you can accomplish the same task using Windows PowerShell.
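The PowerShell form is a one-liner; the VM name and path are placeholders:

```powershell
# Sketch: export a running VM without stopping it (Windows Server 2012 R2).
Export-VM -Name "VM01" -Path "D:\Exports"
```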

Online VHDX resizing

In Windows Server 2012 Hyper-V, it is not possible to resize a virtual hard disk attached to a running VM. Windows Server 2012 R2 removes this restriction, making it possible to not only expand but even reduce the size of the virtual disk (VHDX format only) without stopping the running VM.
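A quick sketch with the Hyper-V storage cmdlets; the path and target size are examples:

```powershell
# Grow a VHDX that is attached (via virtual SCSI) to a running VM.
Resize-VHD -Path "C:\VMs\VM01.vhdx" -SizeBytes 200GB

# Shrinking also works, down to the minimum the guest file system allows.
Get-VHD -Path "C:\VMs\VM01.vhdx" | Select-Object Path, Size, MinimumSize
```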

Storage QoS

Windows Server 2012 R2 includes the ability to limit individual VMs to a specific level of I/O throughput. The IOPS are measured by monitoring the actual disk rate to and from the attached virtual hard drives. If you have applications capable of consuming large amounts of I/O, you’ll want to consider this setting to ensure that a single I/O-hungry VM won’t starve neighbor VMs or take down the entire host.
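A sketch of capping one virtual disk, assuming the 2012 R2 Set-VMHardDiskDrive parameters; the VM name and IOPS values are examples:

```powershell
# Cap an I/O-hungry VM's disk at 500 IOPS and reserve 100 (normalized 8 KB IOPS).
Set-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MaximumIOPS 500 -MinimumIOPS 100
```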

Dynamic Memory support for Linux

In the Windows Server 2012 R2 release, Hyper-V gains the ability to dynamically expand the amount of memory available to a running VM. This capability is especially handy for any Linux workload (notably Web servers) where the amount of memory needed by the VM changes over time. Windows Server 2012 R2 Hyper-V also brings Windows Server backups to Linux guests.
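For instance, enabling Dynamic Memory on a guest looks like the sketch below; the VM name and memory values are illustrative:

```powershell
# Let the VM balloon between 512 MB and 4 GB as the workload demands.
Set-VMMemory -VMName "Web01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
```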

Shared VHDX

With Windows Server 2012 R2 Hyper-V, Windows guest clusters (think traditional Windows Server failover clustering, but using a pair of VMs) no longer require an iSCSI or Fibre Channel SAN; they can be configured using commodity storage, namely a shared VHDX file stored on a Cluster Shared Volume. Note that while the clustered VMs can be live migrated as usual, a live storage migration of the VHDX file requires one of the cluster nodes to be taken offline.
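A sketch of attaching one shared VHDX to two guest-cluster nodes, assuming the 2012 R2 -SupportPersistentReservations switch; the node names and CSV path are placeholders:

```powershell
# The shared data disk must live on a Cluster Shared Volume (or SMB 3.0 share).
foreach ($node in "SQLNode1", "SQLNode2") {
    Add-VMHardDiskDrive -VMName $node -ControllerType SCSI `
        -Path "C:\ClusterStorage\Volume1\SharedData.vhdx" `
        -SupportPersistentReservations
}
```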

Bigger Bang for the Buck: Licensing Windows Server 2012 R2

The Windows Server 2012 R2 product is streamlined and simple, making it easy for customers to choose the edition that is right for their needs.

Datacenter edition – unlimited Windows Server 2012 R2 virtual instances.

Standard edition – two virtual instances, for lightly virtualized environments.

Essentials edition – for small businesses with up to 25 users, running on servers with up to two processors.

Foundation edition – for small businesses with up to 15 users, running on single-processor servers.

Edition | Feature comparison | Licensing model | Server pricing*
Datacenter | Unlimited virtual OSEs; all features | Processor + CAL | $6,155
Standard | Two virtual OSEs; all features | Processor + CAL | $882
Essentials | One OSE; 2 processors; limited features; 25-user limit | Server | $501
Foundation | 1 processor; limited features; 15-user limit | Server | OEM only

Client Access Licenses (CALs) will continue to be required for access to Windows Server 2012 R2 servers, and management licenses continue to be required for endpoints managed by System Center. You need a Windows Server 2012 CAL to access Windows Server 2012. You also need CALs to access Remote Desktop Services (RDS) and Active Directory Rights Management Services (AD RMS).

What’s New SCVMM 2012 R2

  • Public cloud for service providers using Windows Azure
  • Private cloud with System Center 2012 R2 VMM
  • Any storage approach: DAS, SAN, NAS, Windows Server 2012 file server, Scale-Out File Server cluster
  • Networking: management of physical network switches via OMI as well as virtual network infrastructure (PVLANs, NVGRE virtualized networks, NVGRE gateways)
  • Virtualization-host agnostic: Intel/AMD/OEM hardware running Windows Server 2012/R2/2008 R2 Hyper-V, VMware or Citrix XenServer
  • Cisco Nexus 1000V switch
  • Bootstrapping a repeatable architecture
  • Bare-metal provisioning of Scale-Out File Server clusters and Storage Spaces
  • Provisioning synthetic Fibre Channel in guest VMs using VMM
  • Guest clustering with shared VHDX
  • VMM integration with IP Address Management (IPAM)
  • Hybrid networking with Windows Azure Pack and System Center 2012 R2 VMM
  • Windows Azure Hyper-V Recovery Manager
  • Delegating access per private cloud
  • Operations Manager dashboard for VMM fabric monitoring

Fire Power of System Center: Licensing System Center 2012 R2

System Center 2012 R2 comes in two editions: Datacenter and Standard. Both editions comprise the following components:

  • Operations Manager
  • Configuration Manager
  • Data Protection Manager
  • Service Manager
  • Virtual Machine Manager
  • Endpoint Protection
  • Orchestrator
  • App Controller

System Center is licensed per processor. System Center 2012 R2 Datacenter costs USD 3,607 and System Center 2012 R2 Standard costs USD 1,323. A System Center license comes with a SQL Server Standard edition license; this SQL Server can only be used for System Center purposes. You can virtualize an unlimited number of VMs with the System Center 2012 R2 Datacenter edition.

Comparing Server 2008 R2 and Server 2012 R2 in terms of virtualization

Hyper-V is not the same product you knew in Windows Server 2008. To clear the fog, Microsoft has improved it substantially with each release over the years.

Comparing VMware with Windows Server 2012 R2

While VMware is still number one in the hypervisor market, the Redmond giant can leverage almost a billion Windows OS users globally, as well as its expertise in software and a robust range of services (including Azure, Bing, MSN, Office 365, Skype and many more). A new battleground between Microsoft and VMware could make 2014 a pivotal hybrid cloud year. The hybrid cloud could give Microsoft the chance to prevail in ways that it couldn't with the launch of Hyper-V; Hyper-V's market share has been gradually increasing since early 2011. According to Gartner, Microsoft gained 28% hypervisor market share last year.

Let’s dig deeper into comparison….

The following comparison is based on Windows Server 2012 R2 Datacenter edition and System Center 2012 R2 Datacenter edition versus vSphere 5.5 Enterprise Plus and vCenter Server 5.5.

Licensing:

Options | Microsoft | VMware
# of physical CPUs per license | 2 | 1
# of managed OSEs per license | Unlimited | Unlimited
# of Windows Server VM licenses per host | Unlimited | 0
Includes anti-virus / anti-malware protection | Yes | Yes
Includes full SQL database server licenses for management databases | Yes | No
Database, hosts & VMs | A single database license is enough for 1,000 hosts and 25,000 VMs per management server | Purchase additional database server licenses to scale beyond managing 100 hosts and 3,000 VMs with vCenter Server Appliance
Includes licensing for enterprise operations monitoring and management of hosts, guest VMs and application workloads running within VMs | Yes | No
Includes licensing for private cloud management capabilities (pooled resources, self-service, delegation, automation, elasticity, chargeback) | Yes | No
Includes management tools for provisioning and managing VDI solutions for virtualized Windows desktops | Yes | No
Includes web-based management console | Yes | Yes

Virtualization Scalability:

Options | Microsoft | VMware
Maximum # of logical processors per host | 320 | 320
Maximum physical RAM per host | 4 TB | 4 TB
Maximum active VMs per host | 1,024 | 512
Maximum virtual CPUs per VM | 64 | 64
Hot-adjust virtual CPU resources to VM | Yes | Yes
Maximum virtual RAM per VM | 1 TB | 1 TB
Hot-add virtual RAM to VM | Yes | Yes
Dynamic memory management | Yes | Yes
Guest NUMA support | Yes | Yes
Maximum # of physical hosts per cluster | 64 | 32
Maximum # of VMs per cluster | 8,000 | 4,000
Virtual machine snapshots | Yes | Yes
# of snapshots per VM | 50 | 32
Integrated application load balancing for scaling out application tiers | Yes | No
Bare-metal deployment of new hypervisor hosts and clusters | Yes | Yes
Bare-metal deployment of new storage hosts and clusters | Yes | No
Manage GPU virtualization for advanced VDI graphics | Yes | Yes
Virtualization of USB devices | Yes | Yes
Virtualization of serial ports | Yes | Yes
Minimum disk footprint while still providing management of multiple virtualization hosts and guest VMs | ~800 KB micro-kernelized hypervisor (Ring -1); ~5 GB drivers + management (parent partition, Ring 0 + 3) | ~155 MB monolithic hypervisor with drivers (Ring -1 + 0); ~4 GB management (vCenter Server Appliance, Ring 3)
Boot from flash | Yes | Yes
Boot from SAN | Yes | Yes

VM Portability, High Availability and Disaster Recovery:

Features | Microsoft | VMware
Live migration of running VMs | Yes | Yes
Live migration of running VMs without shared storage between hosts | Yes | Yes
Live migration using compression of VM memory state | Yes | No
Live migration over RDMA-enabled network adapters | Yes | No
Live migration of VMs clustered with Windows Server Failover Clustering (MSCS guest cluster) | Yes | No
Highly available VMs | Yes | Yes
Failover prioritization of highly available VMs | Yes | Yes
Affinity rules for highly available VMs | Yes | Yes
Cluster-aware updating for orchestrated patch management of hosts | Yes | Yes
Guest OS application monitoring for highly available VMs | Yes | Yes
VM guest clustering via shared virtual hard disk files | Yes | Yes
Maximum # of nodes per VM guest cluster | 64 | 5
Intelligent placement of new VM workloads | Yes | Yes
Automated load balancing of VM workloads across hosts | Yes | Yes
Power optimization of hosts when load-balancing VMs | Yes | Yes
Fault-tolerant VMs | No | Yes
Backup of VMs and applications | Yes | Yes
Site-to-site asynchronous VM replication | Yes | Yes

Storage:

Features | Microsoft | VMware
Maximum # of virtual SCSI hard disks per VM | 256 | 60 (PVSCSI); 120 (virtual SATA)
Maximum size per virtual hard disk | 64 TB | 62 TB
Native 4K disk support | Yes | No
Boot VM from virtual SCSI disks | Yes | Yes
Hot-add virtual SCSI VM storage for running VMs | Yes | Yes
Hot-expand virtual SCSI hard disks for running VMs | Yes | Yes
Hot-shrink virtual SCSI hard disks for running VMs | Yes | No
Storage Quality of Service | Yes | Yes
Virtual Fibre Channel to VMs | Yes | Yes
Live migrate virtual storage for running VMs | Yes | Yes
Flash-based read cache | Yes | Yes
Flash-based write-back cache | Yes | No
SAN-like storage virtualization using commodity hard disks | Yes | No
Automated tiered storage between SSD and HDD using commodity hard disks | Yes | No
Can consume storage via iSCSI, NFS, Fibre Channel and SMB 3.0 | Yes | Yes
Can present storage via iSCSI, NFS and SMB 3.0 | Yes | No
Storage multipathing | Yes | Yes
SAN offload capability | Yes | Yes
Thin provisioning and trim storage | Yes | Yes
Storage encryption | Yes | No
Deduplication of storage used by running VMs | Yes | No
Provision VM storage based on storage classifications | Yes | Yes
Dynamically balance and re-balance storage load based on demands | Yes | Yes
Integrated provisioning and management of shared storage | Yes | No

Networking:

Features | Microsoft | VMware
Distributed switches across hosts | Yes | Yes
Extensible virtual switches | Yes | Replaceable, not extensible
NIC teaming | Yes | Yes
# of NICs | 32 | 32
Private VLANs (PVLAN) | Yes | Yes
ARP spoofing protection | Yes | No
DHCP snooping protection | Yes | No
Router advertisement guard protection | Yes | No
Virtual port ACLs | Yes | Yes
Trunk mode to VMs | Yes | Yes
Port monitoring | Yes | Yes
Port mirroring | Yes | Yes
Dynamic Virtual Machine Queue | Yes | Yes
IPsec task offload | Yes | No
Single-root I/O virtualization (SR-IOV) | Yes | Yes
Virtual receive-side scaling (virtual RSS) | Yes | Yes
Network Quality of Service | Yes | Yes
Network virtualization / software-defined networking (SDN) | Yes | No
Integrated network management of both virtual and physical network components | Yes | No

Virtualized Operating Systems Support: 

Operating Systems | Microsoft | VMware
Windows Server 2012 R2 | Yes | Yes
Windows 8.1 | Yes | Yes
Windows Server 2012 | Yes | Yes
Windows 8 | Yes | Yes
Windows Server 2008 R2 SP1 | Yes | Yes
Windows Server 2008 R2 | Yes | Yes
Windows 7 with SP1 | Yes | Yes
Windows 7 | Yes | Yes
Windows Server 2008 SP2 | Yes | Yes
Windows Home Server 2011 | Yes | No
Windows Small Business Server 2011 | Yes | No
Windows Vista with SP2 | Yes | Yes
Windows Server 2003 R2 SP2 | Yes | Yes
Windows Server 2003 SP2 | Yes | Yes
Windows XP with SP3 | Yes | Yes
Windows XP x64 with SP2 | Yes | Yes
CentOS 5.7, 5.8, 6.0–6.4 | Yes | Yes
CentOS Desktop 5.7, 5.8, 6.0–6.4 | Yes | Yes
Red Hat Enterprise Linux 5.7, 5.8, 6.0–6.4 | Yes | Yes
Red Hat Enterprise Linux Desktop 5.7, 5.8, 6.0–6.4 | Yes | Yes
SUSE Linux Enterprise Server 11 SP2 & SP3 | Yes | Yes
SUSE Linux Enterprise Desktop 11 SP2 & SP3 | Yes | Yes
OpenSUSE 12.1 | Yes | Yes
Ubuntu 12.04, 12.10, 13.10 | Yes | Yes
Ubuntu Desktop 12.04, 12.10, 13.10 | Yes | Yes
Oracle Linux 6.4 | Yes | Yes
Mac OS X 10.7.x & 10.8.x | No | Yes
Sun Solaris 10 | No | Yes

Windows Azure:

Here is a special factor that puts Microsoft ahead of VMware: Microsoft Azure for on-premises and service provider clouds.

Windows Azure Pack ships with Windows Server 2012 R2. The Azure code enables high-scale hosting and management of web and virtual machine workloads.

Microsoft is leveraging its service provider expertise and footprint for Azure development while extending Azure into data centers on Windows servers. That gives Microsoft access to most, if not all, of the world's data centers. It could become a powerhouse in months instead of years. Widespread adoption of the Microsoft Azure platform gives Microsoft a winning edge against competitors like VMware.

On-premises clients install Windows Azure Pack to manage their System Center 2012 R2 environment and use it as a self-service and administration portal for the IT department and other departments within the organization. To gain similar functionality in VMware you have to buy vCloud Director, Chargeback and vShield separately.

Conclusion:

This is a clash of titanic proportions between Microsoft and VMware. Ultimately, end users and customers will be the winners. Both companies are striving for new innovation in the hypervisor and virtualization marketplace. End users will enjoy new technology, and businesses will gain from the price battle between Microsoft and VMware. These two factors could significantly increase the adoption of hybrid cloud operating models. Microsoft has another trump card for cloud service providers: Exchange 2013 and Lync 2013, which are already widely used for Software as a Service (SaaS). VMware has nothing comparable to offer in the messaging and collaboration space. Microsoft could become for the cloud what it became for the PC. It could enforce consistency across clouds to an extent that perhaps no other player could. As the cloud shifts from infrastructure to apps, Microsoft could be in an increasingly powerful position and increase Hyper-V's share even further by adding SaaS to its product line. History will repeat itself once again if Microsoft defeats VMware the way it defeated Novell eDirectory, Corel WordPerfect and IBM Notes.

References:

http://blogs.technet.com/b/keithmayer/archive/2013/10/15/vmware-or-microsoft-comparing-vsphere-5-5-and-windows-server-2012-r2-at-a-glance.aspx#.UxaKbYXazIV

http://www.datacentertcotool.com/

http://www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx#fbid=xrWmRt7RXCi

http://wikibon.org/wiki/v/VMware_vs_Microsoft:_It%27s_time_to_stop_the_madness

http://www.infoworld.com/d/microsoft-windows/7-ways-windows-server-2012-pays-itself-205092

http://www.trefis.com/stock/vmw/articles/221206/growing-competition-for-vmware-in-virtualization-market/2014-01-07

Supported Server and Client Guest Operating Systems on Hyper-V

Compatibility Guide for Guest Operating Systems Supported on VMware vSphere