Understanding Network Virtualization in SCVMM 2012 R2

Networking in SCVMM is the communication mechanism between the SCVMM server, Hyper-V hosts, Hyper-V clusters, virtual machines, applications, services, physical switches, load balancers and third-party hypervisors. Functionality includes:

SCVMM Network

Logical networking of almost anything hosted in SCVMM. A logical network is a model for the complete identification, transport and forwarding of Ethernet traffic in a virtualized environment.

  • Provision and manage logical network resources for private and public clouds
  • Management of logical networks, subnets, VLANs, trunks or uplinks, PVLANs, MAC address pools, templates, profiles, static IP address pools, DHCP address pools and IP Address Management (IPAM)
  • Integrate and manage third-party hardware load balancers and the Cisco Nexus 1000V virtual switch
  • Provide virtual IP addresses (VIPs), quality of service (QoS), network traffic monitoring and virtual switch extensions
  • Creation of virtual switches and virtual network gateways

Network Virtualization – Network virtualization is the network-level parallel to server virtualization: it allows you to abstract and run multiple virtual networks on a single physical network.

  • Connects virtual machines to other virtual machines, hosts, or applications running on the same logical network.
  • Provides independent virtual machine migration: when a VM is moved from its original host to a different host, SCVMM automatically migrates the virtual network with the VM so that it remains connected to the rest of the infrastructure.
  • Allows multiple tenants to have their own isolated networks for security and privacy reasons.
  • Allows unique IP address ranges per tenant for management flexibility.
  • Communicates through a gateway with the same site or a different site, if permitted by the firewall.
  • Connects a VM running on a virtual network to any physical network in the same site or a different location.
  • Connects across networks using the inbox NVGRE gateway, which can be deployed as a VM to provide this cross-network interoperability.
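On the wire, each virtual subnet carried between hosts and the NVGRE gateway is identified by a 24-bit Virtual Subnet ID (VSID) in the GRE key field. As an illustration of the encapsulation format only (this is not SCVMM code), a minimal Python sketch of the NVGRE header as specified in RFC 7637:

```python
import struct

GRE_FLAGS_KEY_PRESENT = 0x2000   # K bit set; C and S bits clear (per RFC 7637)
ETHERTYPE_TEB = 0x6558           # Transparent Ethernet Bridging

def pack_nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Build the 8-byte NVGRE header that precedes the inner Ethernet frame."""
    if not 0 <= vsid < 2**24:
        raise ValueError("VSID is a 24-bit value")
    if not 0 <= flow_id < 2**8:
        raise ValueError("FlowID is an 8-bit value")
    # Key field: 24-bit VSID in the upper bits, 8-bit FlowID in the lower bits.
    return struct.pack("!HHI", GRE_FLAGS_KEY_PRESENT, ETHERTYPE_TEB,
                       (vsid << 8) | flow_id)

def unpack_nvgre_header(header: bytes) -> tuple:
    """Return (VSID, FlowID) from an NVGRE header."""
    flags, proto, key = struct.unpack("!HHI", header[:8])
    assert flags == GRE_FLAGS_KEY_PRESENT and proto == ETHERTYPE_TEB
    return key >> 8, key & 0xFF
```

Because the VSID travels in every encapsulated frame, two tenants can reuse the same IP ranges without conflict; the hosts and gateway demultiplex traffic by VSID, not by IP subnet.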

Network virtualization is defined on the Fabric > Networking tab of the SCVMM 2012 R2 management console. Virtual machine networking is defined on the VMs and Services > VM Networks tab of the SCVMM 2012 R2 management console.


Network virtualization terminology in SCVMM 2012 R2:


Logical networks: A logical network in VMM contains the VLAN, PVLAN and subnet information of a site on a Hyper-V host or Hyper-V cluster. An IP address pool and a VM network can be associated with a logical network, and a logical network can connect to one or more other networks. The cloud function of each logical network is:

External
  • Site-to-site endpoint IP addresses
  • Load balancer virtual IP addresses (VIPs)
  • Network address translation (NAT) IP addresses for virtual networks
  • Tenant VMs that need direct connectivity to the external network with full inbound access

Infrastructure (tenant cloud: No)
  • Used for service provider infrastructure, including host management, live migration, failover clustering, and remote storage. It cannot be accessed directly by tenants.

Load Balancer
  • Uses static IP addresses
  • Has outbound access to the external network via the load balancer
  • Has inbound access restricted to only the ports that are exposed through the VIPs on the load balancer

Network Virtualization
  • Automatically used for allocating provider addresses when a VM that is connected to a virtual network is placed onto a host.
  • Only the gateway VMs connect to this network directly.
  • Tenant VMs connect to their own VM network. Each tenant’s VM network is connected to the Network Virtualization logical network.
  • A tenant VM will never connect to this network directly.
  • Static IP addresses are assigned automatically.

Gateway (tenant cloud: No)
  • Associated with forwarding gateways, which require one logical network per gateway. For each forwarding gateway, a logical network is associated with its respective scale unit and forwarding gateway.

Services
  • Used for connectivity between services in the stamp by public-facing Windows Azure Pack features, and for SQL Server and MySQL DBaaS deployments.
  • All deployments on the Services network sit behind the load balancer and are accessed through a virtual IP (VIP) on the load balancer.
  • Also designed to support any service provider-owned service; likely to be used by high-density web servers initially, but potentially many other services over time.


IP Address Pool: An IP address pool is a range of IP addresses assigned to a logical network in a site. It provides IP address, subnet, gateway, DNS and WINS information to virtual machines and applications.
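The allocation behaviour of a static IP address pool can be sketched with Python's standard ipaddress module. This is an illustrative model only, not VMM's implementation, and the subnet and range used in the example are made up:

```python
import ipaddress

class StaticIPPool:
    """Illustrative model of a static IP address pool: a contiguous
    range inside a subnet from which addresses are handed out."""

    def __init__(self, subnet: str, first: str, last: str):
        self.subnet = ipaddress.ip_network(subnet)
        self.first = ipaddress.ip_address(first)
        self.last = ipaddress.ip_address(last)
        if self.first not in self.subnet or self.last not in self.subnet:
            raise ValueError("pool range must lie inside the subnet")
        self.allocated = set()

    def allocate(self) -> str:
        """Return the next free address in the range, lowest first."""
        addr = self.first
        while addr <= self.last:
            if addr not in self.allocated:
                self.allocated.add(addr)
                return str(addr)
            addr += 1
        raise RuntimeError("IP address pool exhausted")

    def release(self, address: str) -> None:
        """Return an address to the pool."""
        self.allocated.discard(ipaddress.ip_address(address))
```

For example, `StaticIPPool("10.0.1.0/24", "10.0.1.10", "10.0.1.50").allocate()` returns "10.0.1.10", and a released address becomes available again on the next allocation.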

MAC Address Pool: A MAC address pool contains the default MAC address ranges for the virtual network adapters of virtual machines. You can also create a customised MAC address pool and assign that pool to virtual machines.

Pool name Vendor MAC address range
Default MAC address pool Hyper-V and Citrix XenServer 00:1D:D8:B7:1C:00 – 00:1D:D8:F4:1F:FF
Default VMware MAC address pool VMware ESX 00:50:56:00:00:00 – 00:50:56:3F:FF:FF
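Addresses are handed out sequentially from these ranges, so the underlying arithmetic is just a 48-bit integer increment. A small Python sketch of that arithmetic (not VMM's actual allocator):

```python
def next_mac(mac: str) -> str:
    """Return the MAC address immediately after `mac` (48-bit increment)."""
    value = int(mac.replace(":", ""), 16) + 1
    if value >= 1 << 48:
        raise OverflowError("MAC address space exhausted")
    raw = f"{value:012X}"
    # Re-group the 12 hex digits into colon-separated octets.
    return ":".join(raw[i:i + 2] for i in range(0, 12, 2))

def pool_size(first: str, last: str) -> int:
    """Number of addresses in an inclusive MAC range."""
    return int(last.replace(":", ""), 16) - int(first.replace(":", ""), 16) + 1
```

Note how the increment carries across octet boundaries: the address after 00:1D:D8:B7:1C:FF is 00:1D:D8:B7:1D:00. For the default Hyper-V pool above, `pool_size` reports 3,998,720 addresses.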

Hardware Load Balancer: Hardware load balancer support is functionality within SCVMM networking that provides third-party load balancing for applications and services. A virtual IP or an IP address pool can be associated with a hardware load balancer.

VIP Templates: A VIP template is a standard template used to define the virtual IP addresses associated with a hardware load balancer. A VIP is allocated to applications, services and virtual machines hosted in SCVMM 2012 R2. For example, a template can specify the load-balancing behaviour for HTTPS traffic on a specific load balancer by manufacturer and model.

Logical Switch: Logical switches act as containers for the properties or capabilities that you want network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you specify the capabilities in port profiles and logical switches, which you can then apply to the appropriate adapters. A logical switch acts as an extension of a physical switch, with the major difference that you don’t have to drive to the data center, run a patch lead to the computer, configure the switch ports and assign a VLAN tag to each port. In a logical switch you define the uplinks (physical adapters) of Hyper-V hosts and associate those uplinks with logical networks and sites.

Port Profiles: Port profiles act as containers for the settings and capabilities that you want network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you specify these capabilities in port profiles, which you can then apply to the appropriate adapters. Uplink port profiles are associated with the uplinks in a logical switch.

Port Classification: Port classifications provide global names for identifying different types of virtual network adapter port profiles. A port classification can be used across multiple logical switches while the settings for the port classification remain specific to each logical switch. For example, you might create one port classification named FAST to identify ports that are configured to have more bandwidth, and another port classification named SLOW to identify ports that are configured to have less bandwidth.
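That indirection can be modelled in a few lines of Python: classification names such as FAST and SLOW are global, but each logical switch maps a name to its own settings. The switch names and bandwidth-weight figures below are made up for illustration:

```python
# Each logical switch carries its own mapping from a global port
# classification name to concrete virtual port settings.
LOGICAL_SWITCHES = {
    "ProductionSwitch": {
        "FAST": {"minimum_bandwidth_weight": 80},
        "SLOW": {"minimum_bandwidth_weight": 10},
    },
    "LabSwitch": {
        "FAST": {"minimum_bandwidth_weight": 50},  # same name, different settings
        "SLOW": {"minimum_bandwidth_weight": 5},
    },
}

def settings_for(switch: str, classification: str) -> dict:
    """Resolve a global classification name against a specific logical switch."""
    return LOGICAL_SWITCHES[switch][classification]
```

A VM template can therefore request "FAST" without knowing which switch it will land on; the effective bandwidth settings are decided by the logical switch on the placement host.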

Network Service: A network service is a container where you can add Windows and non-Windows network gateways, as well as IP address management and monitoring information. An IP Address Management (IPAM) server running on Windows Server 2012 R2 can provide resources to VMM. You can use the IPAM server, added on the network services tab of SCVMM, to configure and monitor logical networks and their associated network sites and IP address pools. You can also use the IPAM server to monitor the usage of VM networks that you have configured or changed in VMM.

Virtual switch extension: A virtual switch extension manager in SCVMM allows you to use a vendor’s software-based network-management console together with the VMM management server. For example, you can install the Cisco Nexus 1000V extension software on a VMM server and add the functionality of Cisco switches into the VMM console.

VM Network: A VM network on a logical network is the endpoint of network virtualization to which a virtual machine connects directly, allowing public or private communication with other VMs, networks and services. A VM network is associated with a logical network for direct access to other VMs.


Related Articles:

Cisco Nexus 1000V Switch for Microsoft Hyper-V

How to implement hardware load balancer in SCVMM

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

How to deploy VDI using Microsoft RDS in Windows Server 2012 R2

Remote Desktop Services is a server role that consists of several role services. Remote Desktop Services (RDS) accelerates and securely extends desktops and applications to any device, anywhere, for remote and roaming workers. Remote Desktop Services provides both a virtual desktop infrastructure (VDI) and session-based desktops.

In Windows Server 2012 R2, the following roles are available in Remote Desktop Services: 

RD Virtualization Host: Integrates with Hyper-V to deploy pooled or personal virtual desktop collections.

RD Session Host: Enables a server to host RemoteApp programs or session-based desktops.

RD Connection Broker: Provides the following services:

  • Allows users to reconnect to their existing virtual desktops, RemoteApp programs, and session-based desktops.
  • Evenly distributes the load among RD Session Host servers in a session collection or pooled virtual desktops in a pooled virtual desktop collection.
  • Provides access to virtual desktops in a virtual desktop collection.

RD Web Access: Provides the following services:

  • Access to RemoteApp programs and session-based desktops through the Start menu or through a web browser.
  • Access to RemoteApp programs and virtual desktops in a virtual desktop collection.

RD Licensing: Manages the licenses required for RD Session Host servers and VDI.

RD Gateway: Enables authorized users to connect to virtual desktops, RemoteApp programs, and session-based desktops over the Internet.

For an RDS lab, you will need the following servers.

  • RDSVHSRV01- Remote Desktop Virtualization Host server. Hyper-v Server.
  • RDSWEBSRV01- Remote Desktop Web Access server
  • RDSCBSRV01- Remote Desktop Connection Broker server.
  • RDSSHSRV01- Remote Desktop Session Host Server
  • FileSRV01- File Server to Store User Profile

This test lab consists of a subnet for the internal network and a DHCP client, i.e. the Client1 machine running the Windows 8 operating system, in a test domain called testdomain.com. You need a shared folder, hosted on the file server or on SAN storage presented to the Hyper-V cluster acting as the virtualization host. All RD Virtualization Host computer accounts must be granted Read/Write permission to the shared folder. I assume you have a functional domain controller, DNS, DHCP and a Hyper-V cluster. Now you can follow the steps below.

Step1: Create a Server Group

1. Open Server Manager from the taskbar. Click Dashboard, click View, click Show Welcome Tile, click Create a Server Group, and type RDS Servers as the name of the group.

2. Click Active Directory. In the Name (CN): box, type RDS, then click Find Now.

3. Select RDSWEBSRV01, RDSSHSRV01, RDSCBSRV01 and RDSVHSRV01, and then click the right arrow.

4. Click OK.

Step2: Deploy the VDI standard deployment

1. Log on to the Windows server by using the testdomain\Administrator account.

2. Open Server Manager from Taskbar, Click Manage, click Add roles and features.

3. On the Before You Begin page of the Add Roles and Features Wizard, click Next.

4. On the Select Installation Type page, click Remote Desktop Services installation, and then click Next.


5. On the Select deployment type page, click Standard deployment, and then click Next. A standard deployment allows you to deploy RDS across multiple servers, splitting the roles and features among them. A quick start deploys RDS onto a single server and publishes apps.


6. On the Select deployment scenario page, click Virtual Desktop Infrastructure, and then click Next.


7. On the role services page, review roles then click Next.


8. On the Specify RD Connection Broker server page, click RDSCBSRV01.Testdomain.com, click the right arrow, and then click Next.


9. On the Specify RD Web Access server page, click RDSWEBSRV01.Testdomain.com, click the right arrow, and then click Next.


10. On the Specify RD Virtualization Host server page, click RDSVHSRV01.Testdomain.com, click the right arrow, and then click Next. RDSVHSRV01 is a physical machine configured with Hyper-v. Check Create a New Virtual Switch on the selected server.


11. On the Confirm selections page, Check the Restart the destination server automatically if required check box, and then click Deploy.


12. After the installation is complete, click Close.




Step3: Test the VDI standard deployment connectivity

You can ensure that the VDI standard deployment succeeded by using Server Manager to check the Remote Desktop Services deployment overview.

1. Log on to the DC1 server by using the testdomain\Administrator account.

2. Click Server Manager, click Remote Desktop Services, and then click Overview.

3. In the DEPLOYMENT OVERVIEW section, ensure that the RD Web Access, RD Connection Broker, and RD Virtualization Host role services are installed. If an icon, rather than a green plus sign (+), appears next to the role service name, the role service is installed and part of the deployment.



Step4: Configure FileSRV01

You must create a network share on a computer in the testdomain domain to store the user profile disks. Use the following procedures to prepare the share:

  • Create the user profile disk network share
  • Adjust permissions on the network share

Create the user profile disk network share

1. Log on to the FileSRV01 computer by using the TESTDOMAIN\Administrator user account.

2. Open Windows Explorer.

3. Click Computer, and then double-click Local Disk (C:).

4. Click Home, click New Folder, type RDSUserProfile and then press ENTER.

5. Right-click the RDSUSERPROFILE folder, and then click Properties.

6. Click Sharing, and then click Advanced Sharing.

7. Select the Share this folder check box.

8. Click Permissions, and then grant Full Control permissions to the Everyone group.

9. Click OK twice, and then click Close.

Adjust permissions on the network share

1. Right-click the RDSUSERPROFILE folder, and then click Properties.

2. Click Security, and then click Edit.

3. Click Add.

4. Click Object Types, select the Computers check box, and then click OK.

5. In the Enter the object names to select box, type RDSVHSRV01.Testdomain.com, and then click OK.

6. Click RDSVHSRV01, and then select the Allow check box next to Modify.

7. Click OK two times.

Step5: Configure RDSVHSRV01

You must add the virtual desktop template to Hyper-V so you can assign it to the pooled virtual desktop collection.

Create Virtual Desktop Template in RDSVHSRV01

1. Log on to the RDSVHSRV01 computer by using the Testdomain\Administrator user account.

2. Click Start, and then click Hyper-V Manager.

3. Right-click RDSVHSRV01, point to New, and then click Virtual Machine.

4. On the Before You Begin page, click Next.

5. On the Specify Name and Location page, in the Name box, type Virtual Desktop Template, and then click Next.


6. On the Assign Memory page, in the Startup memory box, type 1024, and then click Next.


7. On the Configure Networking page, in the Connection box, click RDS Virtual, and then click Next.


8. On the Connect Virtual Hard Disk page, click the Use an existing virtual hard disk option.


9. Click Browse, navigate to the virtual hard disk that should be used as the virtual desktop template, and then click Open. Click Next.


10. On the Summary page, click Finish.

Step6: Create the managed pooled virtual desktop collection in RDSVHSRV01

Create the managed pooled virtual desktop collection so that users can connect to desktops in the collection.

1. Log on to the RDSCBSRV01 server by using the TESTDOMAIN\Administrator user account.

2. Server Manager will start automatically. If it does not automatically start, click Start, type servermanager.exe, and then click Server Manager.

3. In the left pane, click Remote Desktop Services, and then click Collections.

4. Click Tasks, and then click Create Virtual Desktop Collection.


5. On the Before you begin page, click Next.

6. On the Name the collection page, in the Name box, type Testdomain Managed Pool, and then click Next.


7. On the Specify the collection type page, click the Pooled virtual desktop collection option, ensure that the Automatically create and manage virtual desktops check box is selected, and then click Next.


8. On the Specify the virtual desktop template page, click Virtual Desktop Template, and then click Next.


9. On the Specify the virtual desktop settings page, click Provide unattended settings, and then click Next. In this step of the wizard, you can also choose to provide an answer file instead. A simple answer file can be obtained from URL1 and URL2.

10. On the Specify the unattended settings page, enter the following information and retain the default settings for the options that are not specified, and then click Next.

§ In the Local Administrator account password and Confirm password boxes, type the same strong password.

§ In the Time zone box, click the time zone that is appropriate for your location.

11. On the Specify users and collection size page, accept the default selections, and then click Next.

12. On the Specify virtual desktop allocation page, accept the default selections, and then click Next.

13. On the Specify virtual desktop storage page, accept the default selections, and then click Next.

14. On the Specify user profile disks page, in the Location user profile disks box, type \\FileSRV01\RDSUserProfile, and then click Next. Make sure that the RD Virtualization Host computer accounts have read and write access to this location.

15. On the Confirm selections page, click Create.

Step7: Test Remote Desktop Services connectivity

You can ensure the managed pooled virtual desktop collection was created successfully by connecting to the RD Web Access server and then connecting to the virtual desktop in the Testdomain Managed Pool collection.

1. Open Internet Explorer.

2. In the Internet Explorer address bar, type https://RDSWEBSRV01.Testdomain.com/RDWeb, and then press ENTER.

3. Click Continue to this website (not recommended).


4. In the Domain\user name box, type TESTDOMAIN\Administrator.

5. In the Password box, type the password for the TESTDOMAIN\Administrator user account, and then click Sign in.

6. Click Testdomain Managed Pool, and then click Connect.

Relevant Configuration

Remote Desktop Services with ADFS SSO

Remote Desktop Services with Windows Authentication

RDS With Windows Authentication

How to Connect and Configure Virtual Fibre Channel, FC Storage and FC Tape Library from within a Virtual Machine in Hyper-v Server 2012 R2

Windows Server 2012 R2 with the Hyper-V role provides virtual Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel storage directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage, allows you to cluster guest operating systems over Fibre Channel, and provides an important new storage option for servers hosted in your virtual infrastructure.


Benefits:

  • Use existing Fibre Channel investments to support virtualized workloads.
  • Connect to an FC tape library from within a guest operating system.
  • Support for many related features, such as virtual SANs, live migration, and MPIO.
  • Create MSCS clusters of guest operating systems in a Hyper-V cluster.


Limitations:

  • Live Migration will not work if SAN zoning is not configured correctly.
  • Live Migration will not work if a LUN mismatch is detected by the Hyper-V cluster.
  • A virtual workload is tied to a single Hyper-V host, making that host a single point of failure, if a single HBA is used.
  • Virtual Fibre Channel logical units cannot be used as boot media.


Prerequisites:

  • Windows Server 2012 or 2012 R2 with the Hyper-V role.
  • Hyper-V requires a computer with processor support for hardware virtualization. See the BIOS setup of the server hardware for details.
  • A computer with one or more Fibre Channel host bus adapters (HBAs) that have an updated HBA driver that supports virtual Fibre Channel.
  • An NPIV-enabled fabric, HBA and FC SAN. Almost all current-generation Brocade fabrics and storage support this feature. NPIV is disabled on the HBA by default.
  • Virtual machines configured to use a virtual Fibre Channel adapter, which must use Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 or Windows Server 2012 R2 as the guest operating system. A maximum of four vFC ports are supported per guest OS.
  • Storage accessed through virtual Fibre Channel must present logical units.
  • The MPIO feature installed in Windows Server.
  • Microsoft hotfix KB2894032.

Before I begin elaborating the steps involved in configuring virtual Fibre Channel, I assume you have physical connectivity and that physical multipath is configured and connected as per vendor best practice. In this example configuration, I will present storage and an FC tape library to a virtualized backup server. I used the following hardware.

  • 2x Brocade 300 series fabrics
  • 1x FC SAN
  • 1x FC tape library
  • 2x Windows Server 2012 R2 with the Hyper-V role installed and configured as a cluster. Each host is connected to the two fabrics using a dual-port HBA.

Step1: Update Firmware of all Fabrics

Use this LINK to update firmware.

Step2: Update Firmware of FC SAN

See OEM or vendor installation guide. See this LINK for IBM guide.

Step3: Enable hardware virtualization in Server BIOS

See OEM or Vendor Guidelines

Step4: Update Firmware of Server

See OEM or Vendor Guidelines. See Example of Dell Firmware Upgrade

Step5: Install MPIO driver in Hyper-v Host

See OEM or Vendor Guidelines

Step6: Physically Connect FC Tape Library, FC Storage and Servers to correct FC Zone

Step7: Configure Correct Zone and NPIV in Fabric

SSH to the fabric and type the following command to verify NPIV.

Fabric:root>portcfgshow 0

If NPIV is enabled, it will show NPIV ON.

To enable NPIV on a specific port, type portCfgNPIVPort 0 1 (where 0 is the port number and 1 is the mode: 1 = enable, 0 = disable).
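If you have many ports to verify, the CLI output can be screened with a short script. The sketch below matches any line mentioning NPIV and checks whether it ends in ON; the exact field label in portcfgshow output varies by Fabric OS version, so treat the sample text in the test as illustrative:

```python
def npiv_enabled(portcfgshow_output: str) -> bool:
    """Return True if portcfgshow output reports NPIV as ON.
    The exact field label varies by Fabric OS version, so match loosely."""
    for line in portcfgshow_output.splitlines():
        if "NPIV" in line.upper():
            return line.strip().upper().endswith("ON")
    return False

def enable_npiv_commands(ports) -> list:
    """Generate the portCfgNPIVPort enable command for each port number."""
    return [f"portCfgNPIVPort {port} 1" for port in ports]
```

For example, `enable_npiv_commands(range(0, 8))` produces the enable command for the first eight ports, which you can then paste into the fabric SSH session.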

Open the Brocade fabric and configure aliases. The red-marked items are the virtual HBAs and FC tape shown in the fabric. Note that you must place the FC tape, Hyper-V host(s), virtual machine and FC SAN in the same zone, otherwise it will not work.


Configure correct Zone as shown below.


Configure correct Zone Config as shown below.


Once you have configured the correct zone in the fabric, the FC tape will appear in the Windows Server 2012 R2 host where the Hyper-V role is installed. Do not update the tape driver on the Hyper-V host, as we will use the guest (virtual machine) as the backup server, which is where the correct tape driver is needed.


Step8: Configure Virtual Fibre Channel

Open Hyper-V Manager and click Virtual SAN Manager > Create new Fibre Channel SAN.


Type the name of the Fibre Channel SAN, then click Apply > OK.


Repeat the process to create multiple vFCs for MPIO and live migration purposes. Remember that the physical HBAs must be connected to the two Brocade fabrics.

When configuring vFCs, keep the naming convention identical on both hosts. If you have two physical HBAs, configure two vFCs per Hyper-V host, for example VFC1 and VFC2. Create two vFCs on the other host with the identical names VFC1 and VFC2, and assign both vFCs to the virtual machines.

Step9: Attach Virtual Fibre Channel Adapters to the Virtual Machine

Open Failover Cluster Manager, select the virtual machine where the FC tape will be visible, and shut down the virtual machine.

Go to the settings of the virtual machine > Add Fibre Channel Adapter > Apply > OK.


Record the WWPNs from the virtual Fibre Channel adapters.
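Storage arrays often display WWPNs with colon separators while other tools show them as a bare 16-digit hex string, so it helps to normalise both forms before comparing recorded values. A small helper for that (the sample WWPN in the example is illustrative):

```python
def normalize_wwpn(wwpn: str) -> str:
    """Return a WWPN as 16 uppercase hex digits with no separators."""
    cleaned = wwpn.replace(":", "").replace("-", "").replace(" ", "").upper()
    if len(cleaned) != 16 or any(c not in "0123456789ABCDEF" for c in cleaned):
        raise ValueError(f"not a valid WWPN: {wwpn!r}")
    return cleaned

def pretty_wwpn(wwpn: str) -> str:
    """Return a WWPN in colon-separated form, e.g. C0:03:FF:00:00:FF:00:00."""
    cleaned = normalize_wwpn(wwpn)
    return ":".join(cleaned[i:i + 2] for i in range(0, 16, 2))

def same_wwpn(a: str, b: str) -> bool:
    """Compare two WWPNs regardless of formatting."""
    return normalize_wwpn(a) == normalize_wwpn(b)
```

Comparing with `same_wwpn` avoids the common mistake of a host mapping silently failing because the WWPN was pasted into the storage console with different separators or case.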


Power on the virtual Machine.

Repeat the process to add the remaining vFCs (VFC1 and VFC2) to the virtual machine.

Step10: Present Storage

Log on to the FC storage and add a host entry for the virtual server. The WWPNs shown here must match the WWPNs of the virtual Fibre Channel adapters.


Map the volume or LUN to the virtual server.


Step11: Install MPIO Driver in Guest Operating Systems

Open Server Manager > Add Roles and Features and add the MPIO feature.


Download the storage manufacturer’s MPIO driver. The MPIO driver must be the correct and latest version to function properly.


Now you have FC SAN storage in your virtual machine.



Step12: Install the Correct FC Tape Library Driver in the Guest Operating System

Download the correct FC tape driver and install it in the virtual backup server.

Now you have the FC tape library in the virtual machine.


Backup software can see Tape Library and inventory tapes.


Further Readings:

Brocade Fabric with Virtual FC in Hyper-v

Hyper-V Virtual Fibre Channel Overview

Clustered virtual machine cannot access LUNs over a Synthetic Fibre Channel after you perform live migration on Windows Server 2012 or Windows Server 2012 R2-based Hyper-V hosts