Migrate a SQL Server database to Azure SQL Database

Azure Database Migration Service works together with the Data Migration Assistant (DMA) to migrate existing on-premises SQL Server, Oracle, and MySQL databases to Azure SQL Database, Azure SQL Database Managed Instance, or SQL Server on Azure virtual machines.

 

Azure SQL Migration (Source: Microsoft Corp)

 

Moving a SQL Server database to Microsoft Azure SQL Database with Data Migration Assistant is a three-part process:

  1. Prepare a database in a SQL Server instance for migration to Azure SQL Database by using the Data Migration Assistant (DMA).
  2. Export the database to a BACPAC file.
  3. Import the BACPAC file into an Azure SQL Database.

Using Microsoft Data Migration Assistant

Step 1: Prepare for migration

Complete these prerequisites:

  • Install the newest version of Microsoft SQL Server Management Studio (SSMS). Installing SSMS also installs the newest version of SQLPackage, a command-line utility that can be used to automate a range of database development tasks.
  • Download and install the Microsoft Data Migration Assistant (DMA).
  • Identify and have access to a database to migrate.

Follow these steps to use Data Migration Assistant to assess the readiness of your database for migration to Azure SQL Database:

  1. Open the Microsoft Data Migration Assistant. You can run DMA on any computer with connectivity to the SQL Server instance containing the database that you plan to migrate; you do not need to install it on the computer hosting the SQL Server instance.
  2. In the left-hand menu, click New to create an Assessment project. Fill in the form with a Project name (all other values should be left at their default values), and then click Create.
  3. On the Options page, click Next.
  4. On the Select sources page, enter the name of the SQL Server instance containing the database you plan to migrate. Change the other values on this page if necessary, and then click Connect.
  5. In the Add sources portion of the Select sources page, select the checkboxes for the databases to be tested for compatibility, and then click Add.
  6. Click Start Assessment.
  7. When the assessment completes, look for the checkmark in the green circle to see if the database is sufficiently compatible to migrate.
  8. Review the SQL Server feature parity results. Specifically, review the information about unsupported and partially supported features, and the recommended actions.
  9. Review the Compatibility issues by clicking that option in the upper left. Specifically, review the information about migration blockers, behavior changes, and deprecated features for each compatibility level. For the AdventureWorks2008R2 database, review the changes to Full-Text Search since SQL Server 2008 and the changes to SERVERPROPERTY('LCID') since SQL Server 2000; links are provided for more information about these changes. Many search options and settings for Full-Text Search have changed.
  10. Optionally, click Export report to save the report as a JSON file.
  11. Close the Data Migration Assistant.

Step 2: Export to BACPAC file

Follow these steps to use the SQLPackage command-line utility to export the AdventureWorks2008R2 database to local storage.

  1. Open a Windows command prompt and change your directory to a folder containing the 130 version of SqlPackage, such as C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin.
  2. Execute the following SQLPackage command at the command prompt to export the AdventureWorks2008R2 database from localhost to AdventureWorks2008R2.bacpac. Change any of these values as appropriate to your environment.


sqlpackage.exe /Action:Export /ssn:localhost /sdn:AdventureWorks2008R2 /tf:AdventureWorks2008R2.bacpac

Once the execution is complete, the generated BACPAC file is stored in the directory where the SqlPackage executable is located; in this example, C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin.

Next, log in to the Azure portal and create a SQL Server logical server.

A SQL Server logical server acts as a central administrative point for multiple databases. Follow these steps to create a SQL server logical server to contain the migrated Adventure Works OLTP SQL Server database.

  1. Click the New button found on the upper left-hand corner of the Azure portal.
  2. Type sql server in the search window on the New page, and select SQL server (logical server) from the filtered list.
  3. Click Create, and enter the properties for the new SQL Server (logical server).
  4. Complete the SQL server (logical server) form with your server name, server admin login, password, subscription, resource group, and location. Alternatively, create the server with Azure PowerShell, as sketched after these steps.
  5. Click Create to provision the logical server. Provisioning takes a few minutes.
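If you prefer scripting, the logical server can also be created with Azure PowerShell. This is a minimal sketch, assuming the AzureRM module is installed; the resource group name, server name, and location are example values to replace with your own.

# Log in to Azure
Login-AzureRmAccount

# Create a resource group for the server (skip if you already have one)
New-AzureRmResourceGroup -Name "myResourceGroup" -Location "Australia East"

# Create the SQL Server logical server; you are prompted for the server admin login and password
New-AzureRmSqlServer -ResourceGroupName "myResourceGroup" -ServerName "mynewserver20170403" -Location "Australia East" -SqlAdministratorCredentials (Get-Credential)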

Create a server-level firewall rule

  1. Click All resources from the left-hand menu, and click the new server on the All resources page. The overview page for the server opens and provides options for further configuration.
  2. Click Firewall in the left-hand menu under Settings on the overview page.
  3. Click Add client IP on the toolbar to add the IP address of the computer you are currently using, and then click Save. This creates a server-level firewall rule for this IP address (a PowerShell equivalent is sketched after these steps).
  4. Click OK.
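The same firewall rule can be created with Azure PowerShell. A minimal sketch, using example names and an example client IP address:

# Create a server-level firewall rule for a single client IP address
New-AzureRmSqlServerFirewallRule -ResourceGroupName "myResourceGroup" -ServerName "mynewserver20170403" -FirewallRuleName "ClientIPAddress" -StartIpAddress "203.0.113.25" -EndIpAddress "203.0.113.25"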

Step 3: Import a BACPAC file to Azure SQL Database

The SQLPackage command-line utility is the preferred method to import your BACPAC database to Azure SQL Database for most production environments.

SqlPackage.exe /a:import /tcs:"Data Source=<your_server_name>.database.windows.net;Initial Catalog=<your_new_database_name>;User Id=<change_to_your_admin_user_account>;Password=<change_to_your_password>" /sf:AdventureWorks2008R2.bacpac /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P6

Connect using SQL Server Management Studio (SSMS)

  1. Open SQL Server Management Studio.
  2. In the Connect to Server dialog box, enter this information.
  • Server type: Specify Database engine
  • Server name: Enter your fully qualified server name, such as mynewserver20170403.database.windows.net
  • Authentication: Specify SQL Server Authentication
  • Login: Enter your server admin account
  • Password: Enter the password for your server admin account
  3. Click Connect.
  4. In Object Explorer, expand Databases, and then expand myMigratedDatabase to view the objects in the sample database.
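As a quick check without SSMS, you can also query the imported database with the sqlcmd utility (installed with SSMS). A sketch with placeholder credentials:

sqlcmd -S mynewserver20170403.database.windows.net -d myMigratedDatabase -U <your_admin_user> -P <your_password> -Q "SELECT COUNT(*) AS TableCount FROM sys.tables"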

Using Azure Database Migration Service

Azure Database Migration Service (ADMS), now in limited preview, can help you migrate existing on-premises SQL Server, Oracle, and MySQL databases to Azure SQL Database, Azure SQL Database Managed Instance, or SQL Server on an Azure Virtual Machine.

ADMS is designed to simplify the complex workflows you can encounter when migrating various database types to databases in Azure.

  1. In the Azure portal, select Data Migration Service, and then click New Migration Project.
  2. In New Migration Project, enter a unique project name, a source server type, and a target server type.
  3. Click Start.
  4. Provide all options under Migration target details, and then click Save.
  5. Provide all options under Migration source details, and then click Save.
  6. In the Select source databases list, select each source database you want to migrate, and then click Save.
  7. Review the details summary, and then click Run Migration to start the migration. The amount of time the migration will run depends on a variety of factors including size and complexity of the database, source disk speed, and network speed.
  8. Once the migration is finished, a Completed status will be displayed in the SQL Migration dashboard.

Migrating VMware Virtual Workloads to Microsoft Azure Cloud

Overview

Migrating to the cloud doesn't have to be difficult, but many organizations struggle to get started. Before they can showcase the cost benefits of moving to the cloud or determine if their workloads will lift and shift without effort, they need deep visibility into their own environment and the tight interdependencies between applications, workloads, and data. Azure Migrate, Azure Database Migration Service, and Azure Cost Management provide a frictionless approach to moving VMware VMs to Azure.


Microsoft – Cloud Security Certification

Microsoft Azure has been certified by the Australian Signals Directorate (ASD), Department of Defence. If you have regulatory compliance requirements, check your region to verify Azure certification by the relevant regulator.

  • Microsoft has undergone an Information Security Registered Assessors Program (IRAP) assessment by the Australian Signals Directorate (ASD) and has been certified on the Certified Cloud Services List (CCSL) by ASD for Azure, Dynamics 365, and Office 365.
  • Microsoft Azure has been awarded the PROTECTED classification level by the Australian Signals Directorate (ASD). Microsoft Azure is the first global cloud provider to be awarded PROTECTED.
  • Azure, Cloud App Security, Intune, Office 365, Dynamics 365, and Power BI have been awarded certification after rigorous independent assessments of cloud providers by the Cloud Security Alliance (CSA).
  • Azure, Cloud App Security, Intune, Office 365, Dynamics 365, and Power BI have been awarded ISO/IEC 27001 certification, meeting the criteria specified in that standard.

Licensing Cost & Azure Hybrid Benefit

  • Customers with Software Assurance can run Windows Server VMs on Azure at a lower rate.
  • Save up to 40 percent on Windows Server VMs.
  • Use existing SQL Server licenses toward SQL Database managed instances.
  • Use Azure Reserved Virtual Machine Instances to further reduce costs: up to 72 percent compared to pay-as-you-go prices, with one-year or three-year terms on both Windows and Linux virtual machines.
  • Pay only for the underlying compute and storage for SQL Server VMs.
  • Save up to 82 percent over pay-as-you-go rates on Azure, and up to 67 percent compared to AWS RIs for Windows VMs.
  • The Azure TCO calculator estimates 49 percent cost savings compared to on-premises VMware VMs. Actual savings may vary based on region, instance type, and usage. Reference: Nucleus Research.
  • You can specify whether you're enrolled in Software Assurance and can use the Azure Hybrid Use Benefit.


Migration Path

Microsoft offers an end-to-end solution to provide you with a proven framework and tools to migrate your first workload and give you a complete roadmap for discovery, migration, and continual optimization, including better insights and strategies for running your entire datacenter portfolio on Azure. Migrating to Azure is a simple three-stage process that focuses on identifying the virtual machines, applications, and data that can easily be moved to the cloud.


Supported Platform

  • Virtual machines managed by VMware vCenter Server 5.5 or 6.0 and later versions
  • Any on-premises storage (vSAN, FC SAN, NFS, or iSCSI)
  • Appliance-based, agentless, and non-intrusive discovery of on-premises virtual machines
  • Currently, Azure Migrate supports only locally redundant storage (LRS). However, once you have migrated to Azure, you can use geo-redundant storage.
  • Lift-and-shift migration to Azure IaaS
  • Azure Migrate will recommend the use of Azure Database Migration Service for database migrations
  • Use Azure Site Recovery to migrate business-critical and large VMs to Azure

Stage 1 – Assess Your VMware vSphere Environment

Use these four steps to discover and assess your on-premises workloads for migration to Azure.

  1. Prepare your environment.
  2. Discover virtual machines.
  3. Group virtual machines.
  4. Assess the groups of virtual machines.

Step 1: Prepare your environment

  1. To get started with Azure Migrate, you need a Microsoft Azure account or the free trial.
  2. Assess VMware Virtual machines located on vSphere ESXi hosts that are managed with a vCenter server running version 5.5 or 6.0.
  3. The ESXi host or cluster on which the Collector VM (version 8.0) runs must be running version 5.0 or later.
  4. To discover virtual machines, Azure Migrate needs an account with read-only administrator credentials for the vCenter server.
  5. Create the Collector virtual machine: download the appliance (.ova format) and import it into the vCenter server to create the virtual machine. The virtual machine must be able to connect to the internet to send metadata to Azure.
  6. Set the statistics settings for the vCenter server to statistics level 2. The default level 1 will work, but Azure Migrate won't be able to collect the data needed for performance-based sizing of storage.

Tag your virtual machines in vCenter (optional)

Use these steps to tag your virtual machines in the vCenter server (a PowerCLI sketch follows the steps).

  1. In the VMware vSphere Web Client, navigate to the vCenter server instance.
  2. To review current tags, click Tags.
  3. To tag a virtual machine, click Related Objects > Virtual Machines, and select the virtual machine.
  4. In Summary > Tags, click Assign.
  5. Click New Tag, and specify a tag name and description.
  6. To create a category for the tag, select New Category in the drop-down list.
  7. Specify a category name and description and the cardinality, and click OK.
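Tagging can also be scripted with VMware PowerCLI. A minimal sketch, where the vCenter name, category, tag, and VM names are example values:

# Connect to the vCenter server
Connect-VIServer -Server "vcenter.contoso.local"

# Create a tag category and a tag for Azure Migrate grouping
New-TagCategory -Name "AzureMigrateGroup" -Cardinality Single -EntityType VirtualMachine
New-Tag -Name "Wave1" -Category "AzureMigrateGroup"

# Assign the tag to a virtual machine
New-TagAssignment -Tag (Get-Tag -Name "Wave1") -Entity (Get-VM -Name "AppVM01")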

Step 2: Discover virtual machines

Using Azure Migrate to discover on-premises workloads involves these steps.

  1. Create a Project.
  2. Download the Collector appliance.
  3. Create the Collector virtual machine.
  4. Run the Collector to discover virtual machines.
  5. Verify discovered virtual machines in the portal.

Create a Project

Azure Migrate projects hold the metadata of your on-premises machines and enable you to assess migration suitability. Use these steps to create a project.

  1. Log on to the Azure portal and click New.
  2. Search for Azure Migrate in the search box, and select the service Azure Migrate (preview) in the search results.
  3. Click Create.
  4. Specify a name for the new project.
  5. Select the subscription you want the project to be associated with.
  6. Create a new resource group, or select an existing one.
  7. Specify an Azure location.
  8. To quickly access the project from the Dashboard, select Pin to dashboard.
  9. Click Create. The new project appears on the Dashboard, under All resources, and in the Projects blade.

Download the Collector appliance

  1. Select the project, and click Discover & Assess on the Overview blade.
  2. Click Discover Machines, and then click Download.
  3. Copy the Project ID and project key values to use when you configure the Collector.

Deploy the Collector virtual machine

In the vCenter Server, import the Collector appliance as a virtual machine using the Deploy OVF Template wizard (a PowerCLI alternative is sketched after the steps).

  1. In vSphere Client console, click File > Deploy OVF Template.
  2. In the Deploy OVF Template Wizard > Source, specify the location for the .ovf file.
  3. In Name and Location, specify a friendly name for the Collector virtual machine and the inventory location in which the virtual machine will be hosted.
  4. In Host/Cluster, specify the host or cluster on which the Collector virtual machine will run.
  5. In Storage, specify the storage destination for the Collector virtual machine.
  6. In Disk Format, specify the disk type and size.
  7. In Network Mapping, specify the network to which the Collector virtual machine will connect. The network must be connected to the internet to send metadata to Azure.
  8. Review and confirm the settings, and then click Finish.
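If you prefer to script the deployment, PowerCLI's Import-VApp can deploy the appliance. A sketch with example host, datastore, and file names:

# Deploy the Collector appliance from the downloaded OVA
Connect-VIServer -Server "vcenter.contoso.local"
Import-VApp -Source "C:\Downloads\AzureMigrateCollector.ova" -VMHost (Get-VMHost -Name "esxi01.contoso.local") -Datastore (Get-Datastore -Name "datastore1") -Name "AzureMigrateCollector"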

Run the Collector to discover virtual machines

  1. In the vSphere Client console, right-click the virtual machine > Open Console.
  2. Provide the language, time zone, and password preferences for the appliance.
  3. In the Azure Migrate Collector, open Set Up Prerequisites, and then do the following:

o Accept the license terms, and read the third-party information.

o The Collector checks that the virtual machine has internet access. If the virtual machine accesses the internet via a proxy, click Proxy settings, and specify the proxy address and listening port. Specify credentials if proxy access needs authentication.

o The Collector checks that the Windows profiler service is running. The service is installed by default on the Collector virtual machine.

o Select to download and install the VMware PowerCLI.

  4. In Discover Machines, do the following:

o Specify the name (FQDN) or IP address of the vCenter server and the read-only account the Collector will use to discover virtual machines on the vCenter server.

o Select a scope for virtual machine discovery. The Collector can only discover virtual machines within the specified scope. Scope can be set to a specific folder, datacenter, or cluster, but it shouldn’t contain more than 1000 virtual machines.

o If you’re using tagging on the vCenter server, select tag categories for virtual machine grouping. Azure Migrate automatically groups virtual machines based on tag values in the category. If you’re not using tagging, you can group virtual machines in the Azure portal.

  5. In Select Project, specify the Azure Migrate project ID and key you copied from the Azure portal. If you didn't copy them, open Azure in a browser from the Collector virtual machine. In the project Overview page, click Discover Machines, and copy the values.
  6. In Complete Discovery, you can monitor the discovery status and check that metadata is collected from the virtual machines in scope. The Collector provides an approximate discovery time.

Verify discovered virtual machines in the portal

  1. In the migration project, click Manage > Machines.
  2. Check that the virtual machines you want to discover appear in the portal.

Step 3: Group virtual machines

Enterprises typically migrate virtual machines with dependencies together at the same time to ensure their functionality after migration to Azure. Azure Migrate allows you to categorize the virtual machines by group so you can assess all the virtual machines in a group.

  • If you provided a tag category—which was an optional step while configuring the Collector—groups will be automatically created for the workloads based on the tag values.
  • If a tag category is not provided while configuring the Collector, you can create groups of virtual machines in the Azure Migrate portal.

Optional: Assess machine dependencies before adding them to a group

  1. In Manage > Machines, search the Machine for which you want to view the dependencies.
  2. In the Dependencies column for the machine, click Install agent.
  3. To calculate dependencies, download and install these agents on the machine:

o Microsoft Monitoring agent

o Dependency agent

  4. Copy the workspace ID and key to use later when you install the Microsoft Monitoring agent on a machine.
  5. After you install the agents on the machine, return to the portal and click Machines. This time the Dependencies column for the machine should contain the text View dependencies. Click View dependencies.
  6. By default, the dependency time range is an hour. Click the time range to shorten it, specify start and end dates, or change the duration. Press Ctrl + Click to select multiple machines on the map, and then click Group machines.
  7. In Group machines, specify a group name. Verify the machines you added have the dependency agents installed and have been discovered by Azure Migrate. Machines must be discovered to assess them. We recommend that you install the dependency agents to complete dependency mapping.
  8. Click OK to save the group settings. Alternatively, you can add machines to an existing group.

Create a Group

You can create groups of virtual machines from the Machines blade or from the Groups blade, using a similar process.

Create a group from the Machines blade

  1. Navigate to the Dashboard of a project and click the Machines tile.
  2. Click Group Machines.
  3. Specify a name for the group in the Name box, and then select the machines that you want to add to the group.
  4. Click Create.

Add or remove machines in an existing group if you require

  1. Navigate to the dashboard of a project and click the Groups tile.
  2. Select the Group you want to add/remove machines to/from.
  3. Click Add Machines or Remove Machines.
  4. Select the machines that you want to add/remove to/from the group.
  5. Click Add or Remove.

Step 4: Assess groups of virtual machines

Create an assessment

Follow these steps to generate an assessment for the group.

  1. Select the project you want under Project.
  2. On the project dashboard, click Groups.
  3. Create a new group or select an existing group to assess under Group.
  4. Click Create Assessment to create a new assessment for the group.

The assessment includes these details.

  • Azure readiness: a summary of the number of machines suitable for Azure.
  • A monthly cost estimate for running the machines in Azure after migration.
  • A monthly storage cost estimate.

Assessment calculation

Azure Migrate performs three checks on virtual machines in this order:

  1. Azure Suitability Analysis
  2. Performance-based sizing
  3. Monthly cost estimate

Stage 2: Migrate virtual machines using Azure Site Recovery

Before you start deployment, review the architecture and make sure you understand all the components you need to deploy.

Next, make sure you understand the prerequisites and limitations for a Microsoft Azure account, Azure networks, and storage accounts. You also need:

  • On-premises Site Recovery components
  • On-premises VMware prerequisites
  • Mobility service component installed on the virtual machine you want to replicate.

These are the general steps to migrate:

  1. Set up Azure services such as virtual networks, availability sets, load balancers, address spaces, subnets, resource groups, storage accounts, and public IPs.
  2. Connect to VMware servers.
  3. Set up the target environment.
  4. Complete migration.

I assume you have completed step 1, so I will move on to step 2.

Create a Recovery Services vault

  1. Sign in to the Azure portal > Recovery Services.
  2. Click New > Monitoring & Management > Backup and Site Recovery.
  3. In Name, specify a friendly name to identify the vault. If you have more than one subscription, select one of them.
  4. Create a resource group, or select an existing one. Specify an Azure region. To check supported regions, see geographic availability in Azure Site Recovery Pricing Details.
  5. If you want to quickly access the vault from the dashboard, click Pin to dashboard, and then click Create.
  6. The new vault will appear on Dashboard > All resources and on the main Recovery Services vaults blade.

Select a protection goal

In this task, select what you want to replicate, and where you want to replicate to.

  1. Click Recovery Services vaults > vault.
  2. In the Resource Menu, click Site Recovery > Prepare Infrastructure > Protection goal.
  3. In Protection goal, select To Azure > Yes, with VMware vSphere Hypervisor.

Set up the source environment

In this task, set up the configuration server, register it in the vault, and discover virtual machines.

  1. Click Site Recovery > Step 1: Prepare Infrastructure > Source.
  2. If you don’t have a configuration server, click Configuration server.
  3. In Add Server, check that Configuration Server appears in Server type.
  4. Download the Site Recovery Unified Setup installation file.
  5. Download the vault registration key. You need this when you run Unified Setup. The key is valid for five days after you generate it.

Register the configuration server in the vault

The next task requires you to run Unified Setup to install the configuration server, the process server, and the master target server. Before you run Setup, complete these three steps.

  1. On the configuration server virtual machine, make sure that the system clock is synchronized with a time server. If the clock is more than 15 minutes ahead of or behind the correct time, setup might fail (a quick check is sketched after this list).
  2. Run setup as a Local Administrator on the configuration server virtual machine.
  3. Make sure TLS 1.0 is enabled on the virtual machine.
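A quick way to check the clock from an elevated command prompt on the configuration server:

# Show time service status, including the last successful sync and clock offset
w32tm /query /status

# Force an immediate resynchronization if the clock has drifted
w32tm /resync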

Now you are ready to run Setup.

  1. Run the Unified Setup installation file.
  2. In Before You Begin, select Install the configuration server and process server.
  3. From the Third-Party Software License screen, click I Accept to download and install MySQL.
  4. From the Registration screen, select the registration key you downloaded from the vault, and then click Next.
  5. From the Internet Settings screen, specify how the Provider running on the configuration server connects to Azure Site Recovery over the Internet.
  6. If you want to connect with the proxy that’s currently set up on the machine, select Connect to Azure Site Recovery using a proxy server.
  7. If you want the Provider to connect directly, select Connect directly to Azure Site Recovery without a proxy server.
  8. If the existing proxy requires authentication, or if you want to use a custom proxy for the Provider connection, select Connect with custom proxy settings. If you use a custom proxy, you need to specify the address, port, and credentials.
  9. From the Prerequisites Check screen, run a check to make sure that installation can run. If a warning appears about the global time sync check, verify that the time on the system clock (Date and Time settings) matches the time zone setting.
  10. In the MySQL Configuration screen, create credentials for logging on to the MySQL server instance that is installed.
  11. From the Environment Details screen, select whether you will replicate VMware virtual machines. If you will, Setup checks that PowerCLI 6.0 is installed.
  12. From the Install Location screen, select where you want to install the binaries and store the cache. The drive you select must have at least 5 GB of disk space available, but we recommend a cache drive with at least 600 GB of available space.
  13. From the Network Selection screen, specify the listener (network adapter and SSL port) on which the configuration server sends and receives replication data. Port 9443 is the default port used for sending and receiving replication traffic, but you can modify this port number to suit your environment’s requirements. In addition to the port 9443, we also open port 443, which is used by a web server to orchestrate replication operations. Do not use port 443 for sending or receiving replication traffic.
  14. In the Summary screen, review the information and click Install. When installation finishes, a passphrase is generated. You will need it when you enable replication, so copy it and keep it in a secure location. After registration finishes, the server is displayed on the Settings > Servers blade in the vault.

Step 2: Connect to VMware servers

To allow Azure Site Recovery to discover virtual machines running in your on-premises environment, you need to connect your VMware vCenter Server or vSphere ESXi hosts with Site Recovery. Note the following before you start:

  • If you add the vCenter server or vSphere hosts to Site Recovery with an account without administrator privileges on the server, the account needs these privileges enabled:

o Datacenter, Datastore, Folder, Host, Network, Resource, Virtual machine, vSphere Distributed Switch.

o The vCenter server needs Storage views permissions.

  • When you add VMware servers to Site Recovery, it can take 15 minutes or longer for them to appear in the portal.

Step 3: Set up the target environment

Before you set up the target environment, make sure you have an Azure storage account and a virtual network set up.

  1. Click Prepare infrastructure > Target, and select the Azure subscription you want to use.
  2. Specify whether your target deployment model is Resource Manager-based, or classic.
  3. Site Recovery verifies that you have one or more compatible Azure storage accounts and networks.

Create replication policy

You need a replication policy to automate the replication to Azure.

  1. To create a new replication policy, click Site Recovery infrastructure > Replication Policies > Replication Policy.
  2. Under RPO threshold, specify the RPO limit. This value specifies how often data recovery points are created. An alert is generated if continuous replication exceeds this limit.
  3. Under Recovery point retention, specify (in hours) how long the retention window is for each recovery point. Replicated virtual machines can be recovered to any point in a window. Up to 24 hours retention is supported for machines replicated to premium storage, and 72 hours for standard storage.
  4. Under App-consistent snapshot frequency, specify how often (in minutes) recovery points containing application-consistent snapshots will be created.
  5. Click OK to create the policy.
  6. When you create a new policy it’s automatically associated with the configuration server. By default, a matching policy is automatically created for failback. For example, if the replication policy is rep-policy then the failback policy will be rep-policy-failback. The failback policy isn’t used until you initiate a failback from Azure.

Prepare for push installation of the Mobility service

The Mobility service must be installed on all virtual machines you want to replicate. There are several ways to install the service, including manual installation, push installation from the Site Recovery process server, and installation using methods such as System Center Configuration Manager. Here you can review prerequisites and installation methods for the Mobility Service.

If you want to use push installation from the Azure Site Recovery process server, you need to prepare an account that Azure Site Recovery can use to access the virtual machine.

The following describes the options:

  • You can use a domain or local account

For Windows, if you’re not using a domain account, you need to disable Remote User Access control on the local machine. To do this, in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System, add the DWORD entry LocalAccountTokenFilterPolicy, with a value of 1.

  • If you want to add the registry entry for Windows from a command line, type:

REG ADD HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1
  • For Linux, the account should be root on the source Linux server.
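A PowerShell equivalent of the REG ADD command above, run from an elevated prompt on the Windows source machine:

# Disable Remote User Access control for local accounts (required for push installation with a local account)
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" -Name "LocalAccountTokenFilterPolicy" -Value 1 -PropertyType DWord -Force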

Install Mobility Service manually by using the GUI

  1. Copy the installer executable to the virtual machine that is being migrated to Azure, and then open the installer.
  2. On the Installation Option pane, select Install Mobility Service.
  3. Select the install location and click Install to begin the installation procedure.
  4. You can use the Installation Progress page to monitor the installer's progress.
  5. Once installation is complete, click the Proceed to Configuration button to register the Mobility Service with your Configuration server.
  6. Click the Register button to complete the registration.

Configure replication

After you have installed and configured both the Process Server and the Mobility Service agents, continue configuring replication in Azure.

  1. In the Azure portal, navigate to Site Recovery > Step 1: Replicate Application > Enable Replication, and then click Step 1: Source Configure > Source.
  2. In Source, select On-Premises.
  3. In Source location, select your Configuration Server.
  4. In Machine type, select Virtual Machines.
  5. In vCenter/vSphere Hypervisor, select the vCenter server that manages the vSphere host, or select the host.
  6. Select the process server or the configuration server if you haven’t created any additional process servers, and then click OK.
  7. In Target, select the subscription and the resource group in which you want to create the migrated virtual machines. Choose the deployment model for the migrated virtual machines that you want to use in Azure (classic or resource manager).
  8. Select the Azure storage account you want to use for replicating data. If you don’t want to use an account you’ve already set up, you can create a new one.
  9. Select the Azure network and subnet to which Azure Virtual Machines will connect when they’re created after migration. Select Configure now for selected machines to apply the network setting to all machines you select for protection, or select Configure later to select the Azure network per virtual machine.
  10. Point to Virtual Machines > Select, select each enabled machine you want to replicate, and then click OK.
  11. In Properties > Configure properties, select the process server account that will automatically install the Mobility service on the machine.
  12. By default, all disks are replicated. Click All Disks and clear any disks you don’t want to replicate, and then click OK. You can set additional virtual machine disk properties later if needed.
  13. In Replication settings > Configure replication settings, verify that the correct replication policy is selected. If you modify a policy, changes will be applied to the replicating machine and to new machines.
  14. Enable Multi-VM consistency if you want to gather machines into a replication group, specify a name for the group, and then click OK.
  15. Click Enable Replication. You can track progress of the Enable Protection job in Settings > Jobs > Site Recovery Jobs. After the Finalize Protection job runs, the machine is ready for failover.

Step 4: Complete migration

Because migration is different from failover, it is important to configure Site Recovery for a migration.

For migration, you don’t need to commit a failover or delete machines. Instead, select the Complete Migration option for each machine you want to migrate.

  1. In Replicated Items, right-click the virtual machine, and then click Complete Migration.
  2. Click OK to complete the migration.

You can track progress in the virtual machine properties by monitoring the Complete Migration job in Site Recovery jobs. The Complete Migration action completes the migration process, removes replication for the machine, and stops Site Recovery billing for the machine.

At this point, your virtual machine has been migrated to Azure and you can begin using the IP addresses you set up in Networking. If you must migrate a database, the earlier section outlines migrating SQL Server databases using the Data Migration Assistant and Azure Database Migration Service. Otherwise, the migration process continues with Stage 3.

Stage 3: Optimize migrated workloads

Cloudyn helps ensure that migrated virtual machines continue to deliver targeted resource utilization and optimal cost by recommending changes. Track costs against budget using spending reports that help identify which virtual machine types are consuming budget, and support decisions on how to modify the Azure environment to maximize ROI. Cloudyn benefits include:

  • Visibility into resource costs
  • Visibility into application and departmental costs
  • Budgeting
  • Cost optimization with right-sizing guidance

As organizations move on-premises virtual machines to Azure, a best practice is to move workloads through three stages: discover, migrate, and optimize. Microsoft and its partners offer tools to help increase the efficiency and reduce the complexity of those stages.

Understanding Network Virtualization in SCVMM 2012 R2

Networking in SCVMM is the communication mechanism to and from the SCVMM server, Hyper-V hosts, Hyper-V clusters, virtual machines, applications, services, physical switches, load balancers, and third-party hypervisors. Functionality includes:


Logical networking of almost anything hosted in SCVMM: a logical network is a concept for the complete identification, transportation, and forwarding of Ethernet traffic in a virtualized environment.

  • Provision and manage logical network resources for private and public clouds
  • Management of logical networks, subnets, VLANs, trunks or uplinks, PVLANs, MAC address pools, templates, profiles, static IP address pools, DHCP address pools, and IP Address Management (IPAM)
  • Integrate and manage third-party hardware load balancers and the Cisco Nexus 1000V virtual switch
  • Provide virtual IP addresses (VIPs) and quality of service (QoS), monitor network traffic, and support virtual switch extensions
  • Creation of virtual switches and virtual network gateways

Network Virtualization – Network virtualization is a concept parallel to server virtualization: it allows you to abstract and run multiple virtual networks on a single physical network.

  • Connects virtual machines to other virtual machines, hosts, or applications running on the same logical network.
  • Provides independent migration of virtual machines: when a VM is moved to a different host, SCVMM automatically migrates the virtual network with the VM so that it remains connected to the rest of the infrastructure.
  • Allows multiple tenants to have their own isolated networks for security and privacy reasons.
  • Allows unique IP address ranges per tenant for management flexibility.
  • Allows VMs to communicate through a gateway with the same site or a different site, if permitted by the firewall.
  • Connects a VM running on a virtual network to any physical network in the same site or a different location.
  • Connects across networks using an inbox NVGRE gateway, which can be deployed as a VM to provide cross-network interoperability.

Network virtualization is defined on the Fabric > Networking tab of the SCVMM 2012 R2 management console. Virtual machine networking is defined on the VMs and Services > VM Networks tab of the SCVMM 2012 R2 management console.


Network virtualization terminology in SCVMM 2012 R2:


Logical networks: A logical network in VMM contains the VLAN, PVLAN, and subnet information of a site in a Hyper-V host or Hyper-V cluster. An IP address pool and a VM network can be associated with a logical network, and a logical network can connect to one or many other networks. The cloud function of each logical network is shown below:

External (tenant cloud: Yes)

  • Site-to-site endpoint IP addresses
  • Load balancer virtual IP addresses (VIPs)
  • Network address translation (NAT) IP addresses for virtual networks
  • Tenant VMs that need direct connectivity to the external network with full inbound access

Infrastructure (tenant cloud: No)

Used for service provider infrastructure, including host management, live migration, failover clustering, and remote storage. It cannot be accessed directly by tenants.

Load Balancer (tenant cloud: Yes)

  • Uses static IP addresses
  • Has outbound access to the external network via the load balancer
  • Has inbound access that is restricted to only the ports that are exposed through the VIPs on the load balancer

Network Virtualization (tenant cloud: Yes)

  • This network is automatically used for allocating provider addresses when a VM that is connected to a virtual network is placed onto a host.
  • Only the gateway VMs connect to this network directly.
  • Tenant VMs connect to their own VM network. Each tenant's VM network is connected to the Network Virtualization logical network.
  • A tenant VM will never connect to this network directly.
  • Static IP addresses are automatically assigned.

Gateway (tenant cloud: No)

Associated with forwarding gateways, which require one logical network per gateway. For each forwarding gateway, a logical network is associated with its respective scale unit and forwarding gateway.

Services (tenant cloud: No)

  • The Services network is used for connectivity between services in the stamp by public-facing Windows Azure Pack features, and for SQL Server and MySQL Database DBaaS deployments.
  • All deployments on the Services network are behind the load balancer and accessed through a virtual IP (VIP) on the load balancer.
  • This logical network is also designed to provide support for any service provider-owned service and is likely to be used by high-density web servers initially, but potentially many other services over time.

IP Address Pool: An IP address pool is a range of IP addresses assigned to a logical network in a site; it provides IP address, subnet, gateway, DNS, and WINS information to virtual machines and applications.

MAC Address Pool: The MAC address pool contains the default MAC address ranges for the virtual network adapters of virtual machines. You can also create a customised MAC address pool and assign that pool to virtual machines.

Pool Name                         Vendor                         MAC Address Range
Default MAC address pool          Hyper-V and Citrix XenServer   00:1D:D8:B7:1C:00 – 00:1D:D8:F4:1F:FF
Default VMware MAC address pool   VMware ESX                     00:50:56:00:00:00 – 00:50:56:3F:FF:FF

Hardware Load Balancer: A hardware load balancer is functionality within SCVMM networking that provides third-party load balancing of applications and services. A virtual IP or an IP address pool can be associated with a hardware load balancer.

VIP Templates: A VIP template is a standard template used to define the virtual IP addresses associated with a hardware load balancer. A VIP is allocated to applications, services, and virtual machines hosted in SCVMM 2012 R2. For example, a template can specify the load-balancing behaviour for HTTPS traffic on a specific load balancer by manufacturer and model.

Logical Switch: Logical switches act as containers for the properties or capabilities that you want network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you specify the capabilities in port profiles and logical switches, which you can then apply to the appropriate adapters. A logical switch acts as an extension of a physical switch, with the major difference that you don't have to drive to the data center, take a patch lead and connect it to a computer, then configure switch ports and assign a VLAN tag to that port. In a logical switch you define the uplinks (physical adapters) of Hyper-V hosts and associate those uplinks with logical networks and sites.

Port Profiles: Port profiles act as containers for the settings and capabilities that you want network adapters to have. Instead of configuring individual properties or capabilities for each network adapter, you specify these capabilities in port profiles, which you can then apply to the appropriate adapters. Port profiles are associated with uplinks in a logical switch.

Port Classification: Port classifications provide global names for identifying different types of virtual network adapter port profiles. A port classification can be used across multiple logical switches while the settings for the port classification remain specific to each logical switch. For example, you might create one port classification named FAST to identify ports that are configured to have more bandwidth, and another port classification named SLOW to identify ports that are configured to have less bandwidth.

Network Service: A network service is a container where you can add Windows and non-Windows network gateways, and IP address management and monitoring information. An IP Address Management (IPAM) server running on Windows Server 2012 R2 can provide resources to VMM. You can use the IPAM server in the network resources tab of SCVMM to configure and monitor logical networks and their associated network sites and IP address pools. You can also use the IPAM server to monitor the usage of VM networks that you have configured or changed in VMM.

Virtual switch extension: A virtual switch extension manager in SCVMM allows you to use a software-based, vendor network-management console and the VMM management server together. For example, you can install the Cisco Nexus 1000V extension software on a VMM server and add the functionality of Cisco switches into the VMM console.

VM Network: A VM network in a logical network is the endpoint of network virtualization, which directly connects a virtual machine to allow public or private communication with other VMs, networks, and services. A VM network is associated with a logical network for direct access to other VMs.


Related Articles:

Cisco Nexus 1000V Switch for Microsoft Hyper-V

How to implement hardware load balancer in SCVMM

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

How to deploy VDI using Microsoft RDS in Windows Server 2012 R2

Remote Desktop Services is a server role consisting of several role services. Remote Desktop Services (RDS) accelerates and securely extends desktops and applications to any device, anywhere, for remote and roaming workers. Remote Desktop Services provides both a virtual desktop infrastructure (VDI) and session-based desktops.

In Windows Server 2012 R2, the following role services are available in Remote Desktop Services:

RD Virtualization Host: Integrates with Hyper-V to deploy pooled or personal virtual desktop collections.

RD Session Host: Enables a server to host RemoteApp programs or session-based desktops.

RD Connection Broker: Provides the following services:

  • Allows users to reconnect to their existing virtual desktops, RemoteApp programs, and session-based desktops.
  • Enables you to evenly distribute the load among RD Session Host servers in a session collection, or among pooled virtual desktops in a pooled virtual desktop collection.
  • Provides access to virtual desktops in a virtual desktop collection.

RD Web Access: Provides the following services:

  • Access to RemoteApp programs and session-based desktops through the Start menu or through a web browser (RemoteApp and Desktop Connection).
  • Access to RemoteApp programs and virtual desktops in a virtual desktop collection.

RD Licensing: Manages the licenses required to connect to RD Session Host servers and virtual desktops.

RD Gateway: Enables authorized users to connect to virtual desktops, RemoteApp programs, and session-based desktops on the internal network from any internet-connected device.

For an RDS lab, you will need the following servers.

  • RDSVHSRV01- Remote Desktop Virtualization Host server. Hyper-v Server.
  • RDSWEBSRV01- Remote Desktop Web Access server
  • RDSCBSRV01- Remote Desktop Connection Broker server.
  • RDSSHSRV01- Remote Desktop Session Host Server
  • FileSRV01- File Server to Store User Profile

This test lab consists of the 192.168.1.0/24 subnet for the internal network and a DHCP client (Client1) running the Windows 8 operating system, in a test domain called testdomain.com. You need a shared folder, hosted on the file server or on SAN storage presented to the Hyper-V cluster acting as the virtualization host. All RD Virtualization Host computer accounts must be granted read/write permission on the shared folder. I assume you have a functional domain controller, DNS, DHCP, and a Hyper-V cluster. Now you can follow the steps below.

Step1: Create a Server Group

1. Open Server Manager from the taskbar. Click Dashboard, click View, click Show Welcome Tile, click Create a Server Group, and type RDS Servers as the name of the group.

2. Click Active Directory. In the Name (CN): box, type RDS, and then click Find Now.

3. Select RDSWEBSRV01, RDSSHSRV01, RDSCBSRV01, and RDSVHSRV01, and then click the right arrow.

4. Click OK.

Step2: Deploy the VDI standard deployment

1. Log on to the Windows server by using the testdomain\Administrator account.

2. Open Server Manager from the taskbar. Click Manage, and then click Add roles and features.

3. On the Before You Begin page of the Add Roles and Features Wizard, click Next.

4. On the Select Installation Type page, click Remote Desktop Services scenario-based Installation, and then click Next.


5. On the Select deployment type page, click Standard deployment, and then click Next. A standard deployment allows you to deploy RDS on multiple servers splitting the roles and features among them. A quick start allows you to deploy RDS on to single servers and publish apps.


6. On the Select deployment scenario page, click Virtual Desktop Infrastructure, and then click Next.


7. On the role services page, review roles then click Next.


8. On the Specify RD Connection Broker server page, click RDSCBSRV01.Testdomain.com, click the right arrow, and then click Next.


9. On the Specify RD Web Access server page, click RDSWEBSRV01.Testdomain.com, click the right arrow, and then click Next.


10. On the Specify RD Virtualization Host server page, click RDSVHSRV01.Testdomain.com, click the right arrow, and then click Next. RDSVHSRV01 is a physical machine configured with Hyper-v. Check Create a New Virtual Switch on the selected server.


11. On the Confirm selections page, Check the Restart the destination server automatically if required check box, and then click Deploy.


12. After the installation is complete, click Close.
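The same scenario-based VDI deployment can be performed with the RemoteDesktop PowerShell module instead of the wizard. A minimal sketch using the server names from this lab:

# Deploy the VDI standard deployment across the three servers
Import-Module RemoteDesktop
New-RDVirtualDesktopDeployment -ConnectionBroker "RDSCBSRV01.Testdomain.com" -WebAccessServer "RDSWEBSRV01.Testdomain.com" -VirtualizationHost "RDSVHSRV01.Testdomain.com" -CreateVirtualSwitch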


Step3: Test the VDI standard deployment connectivity

You can ensure that VDI standard deployment deployed successfully by using Server Manager to check the Remote Desktop Services deployment overview.

1. Log on to the DC1 server by using the testdomain\Administrator account.

2. click Server Manager, Click Remote Desktop Services, and then click Overview.

3. In the DEPLOYMENT OVERVIEW section, ensure that the RD Web Access, RD Connection Broker, and RD Virtualization Host role services are installed. If there is an icon, and not a green plus sign (+), next to the role service name, the role service is installed and part of the deployment.


Step4: Configure FileSRV1

You must create a network share on a computer in the testdomain domain to store the user profile disks. Use the following procedures to prepare the share:

  • Create the user profile disk network share
  • Adjust permissions on the network share

Create the user profile disk network share

1. Log on to the FileSRV1 computer by using the TESTDOMAIN\Administrator user account.

2. Open Windows Explorer.

3. Click Computer, and then double-click Local Disk (C:).

4. Click Home, click New Folder, type RDSUserProfile and then press ENTER.

5. Right-click the RDSUSERPROFILE folder, and then click Properties.

6. Click Sharing, and then click Advanced Sharing.

7. Select the Share this folder check box.

8. Click Permissions, and then grant Full Control permissions to the Everyone group.

9. Click OK twice, and then click Close.

Adjust permissions on the network share

1. Right-click the RDSUSERPROFILE folder, and then click Properties.

2. Click Security, and then click Edit.

3. Click Add.

4. Click Object Types, select the Computers check box, and then click OK.

5. In the Enter the object names to select box, type RDSVHSRV01.Testdomain.com, and then click OK.

6. Click RDSVHSRV01, and then select the Allow check box next to Modify.

7. Click OK two times.
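Both procedures can also be scripted. A sketch, run on FileSRV01, that creates the folder, shares it, and grants the RD Virtualization Host computer account Modify rights on NTFS:

# Create and share the user profile disk folder
New-Item -Path "C:\RDSUserProfile" -ItemType Directory
New-SmbShare -Name "RDSUserProfile" -Path "C:\RDSUserProfile" -FullAccess "Everyone"

# Grant the RDSVHSRV01 computer account Modify NTFS permissions (note the trailing $)
icacls "C:\RDSUserProfile" /grant "TESTDOMAIN\RDSVHSRV01$:(OI)(CI)M"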

Step5: Configure RDSVHSRV01

You must add the virtual desktop template to Hyper-V so you can assign it to the pooled virtual desktop collection.

Create Virtual Desktop Template in RDSVHSRV01

1. Log on to the RDSVHSRV01 computer by using the TESTDOMAIN\Administrator user account.

2. Click Start, and then click Hyper-V Manager.

3. Right-click RDSVHSRV01, point to New, and then click Virtual Machine.

4. On the Before You Begin page, click Next.

5. On the Specify Name and Location page, in the Name box, type Virtual Desktop Template, and then click Next.


6. On the Assign Memory page, in the Startup memory box, type 1024, and then click Next.


7. On the Configure Networking page, in the Connection box, click RDS Virtual, and then click Next.


8. On the Connect Virtual Hard Disk page, click the Use an existing virtual hard disk option.


9. Click Browse, navigate to the virtual hard disk that should be used as the virtual desktop template, and then click Open. Click Next.


10. On the Summary page, click Finish.
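The template VM can also be created with the Hyper-V PowerShell module. A sketch, where the VHD path is an example location for your prepared template disk:

# Create the virtual desktop template VM on the RDS Virtual switch
New-VM -Name "Virtual Desktop Template" -MemoryStartupBytes 1GB -VHDPath "C:\VMs\VirtualDesktopTemplate.vhdx" -SwitchName "RDS Virtual"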

Step6: Create the managed pooled virtual desktop collection

Create the managed pooled virtual desktop collection so that users can connect to desktops in the collection.

1. Log on to the RDSCBSRV01 server by using the TESTDOMAIN\Administrator user account.

2. Server Manager will start automatically. If it does not automatically start, click Start, type servermanager.exe, and then click Server Manager.

3. In the left pane, click Remote Desktop Services, and then click Collections.

4. Click Tasks, and then click Create Virtual Desktop Collection.


5. On the Before you begin page, click Next.

6. On the Name the collection page, in the Name box, type Testdomain Managed Pool, and then click Next.


7. On the Specify the collection type page, click the Pooled virtual desktop collection option, ensure that the Automatically create and manage virtual desktops check box is selected, and then click Next.


8. On the Specify the virtual desktop template page, click Virtual Desktop Template, and then click Next.


9. On the Specify the virtual desktop settings page, click Provide unattended settings, and then click Next. In this step of the wizard, you can also choose to provide an answer file. A Simple Answer File can be obtained from URL1 and URL2

10. On the Specify the unattended settings page, enter the following information and retain the default settings for the options that are not specified, and then click Next.

o In the Local Administrator account password and Confirm password boxes, type the same strong password.

o In the Time zone box, click the time zone that is appropriate for your location.

11. On the Specify users and collection size page, accept the default selections, and then click Next.

12. On the Specify virtual desktop allocation page, accept the default selections, and then click Next.

13. On the Specify virtual desktop storage page, accept the default selections, and then click Next.

14. On the Specify user profile disks page, in the Location user profile disks box, type \\FileSRV01\RDSUserProfile, and then click Next. Make sure that the RD Virtualization Host computer accounts have read and write access to this location.

15. On the Confirm selections page, click Create.
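The collection can also be created with the RemoteDesktop module. This is a hedged sketch only; the allocation count, name prefix, and disk size are example values, so check the New-RDVirtualDesktopCollection documentation for the exact parameter set in your environment:

# Create the managed pooled collection with user profile disks on FileSRV01
New-RDVirtualDesktopCollection -CollectionName "Testdomain Managed Pool" -PooledManaged -VirtualDesktopTemplateName "Virtual Desktop Template" -VirtualDesktopTemplateHostServer "RDSVHSRV01.Testdomain.com" -VirtualDesktopAllocation @{"RDSVHSRV01.Testdomain.com" = 2} -VirtualDesktopNamePrefix "TDVD" -StorageType LocalStorage -UserProfileDiskPath "\\FileSRV01\RDSUserProfile" -MaxUserProfileDiskSizeGB 10 -ConnectionBroker "RDSCBSRV01.Testdomain.com"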

Step7: Test Remote Desktop Services connectivity

You can ensure the managed pooled virtual desktop collection was created successfully by connecting to the RD Web Access server and then connecting to the virtual desktop in the Testdomain Managed Pool collection.

1. Open Internet Explorer.

2. In the Internet Explorer address bar, type https://RDSWEBSRV01.Testdomain.com/RDWeb, and then press ENTER.

3. Click Continue to this website (not recommended).


4. In the Domain\user name box, type TESTDOMAIN\Administrator.

5. In the Password box, type the password for the TESTDOMAIN\Administrator user account, and then click Sign in.

6. Click Testdomain Managed Pool, and then click Connect.

Relevant Configuration

Remote Desktop Services with ADFS SSO

Remote Desktop Services with Windows Authentication


How to Connect and Configure Virtual Fibre Channel, FC Storage and FC Tape Library from within a Virtual Machine in Hyper-v Server 2012 R2

Windows Server 2012 R2 with the Hyper-V role provides virtual Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel storage directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage, allows you to cluster guest operating systems over Fibre Channel, and provides an important new storage option for servers hosted in your virtual infrastructure.

Benefits:

  • Leverage existing Fibre Channel investments to support virtualized workloads.
  • Connect to a Fibre Channel tape library from within a guest operating system.
  • Support for many related features, such as virtual SANs, live migration, and MPIO.
  • Create MSCS clusters of guest operating systems in a Hyper-V cluster.

Limitations:

  • Live migration will not work if SAN zoning isn't configured correctly.
  • Live migration will not work if a LUN mismatch is detected by the Hyper-V cluster.
  • A virtual workload is tied to a single Hyper-V host, making it a single point of failure, if a single HBA is used.
  • Virtual Fibre Channel logical units cannot be used as boot media.

Prerequisites:

  • Windows Server 2012 or 2012 R2 with the Hyper-V role.
  • Hyper-V requires a computer with processor support for hardware virtualization. See the BIOS setup of the server hardware for details.
  • A computer with one or more Fibre Channel host bus adapters (HBAs) with an updated HBA driver that supports virtual Fibre Channel.
  • An NPIV-enabled fabric, HBA, and FC SAN. Almost all new-generation Brocade fabrics and storage support this feature. NPIV is disabled on the HBA by default.
  • Virtual machines configured to use a virtual Fibre Channel adapter, which must use Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2 as the guest operating system. A maximum of four vFC ports is supported per guest OS.
  • Storage accessed through virtual Fibre Channel must support devices that present logical units.
  • The MPIO feature installed in Windows Server.
  • Microsoft hotfix KB2894032.

Before I elaborate the steps involved in configuring virtual Fibre Channel, I assume you have physical connectivity in place and physical multipath configured and connected as per vendor best practice. In this example configuration, I will present storage and an FC tape library to a virtualized backup server. I used the following hardware.

  • 2X Brocade 300 series Fabric
  • 1X FC SAN
  • 1X FC Tape Library
  • 2X Windows Server 2012 R2 with the Hyper-V role installed and configured as a cluster. Each host is connected to the two fabrics using dual HBA ports.

Step1: Update Firmware of all Fabric.

Use this LINK to update firmware.

Step2: Update Firmware of FC SAN

See OEM or vendor installation guide. See this LINK for IBM guide.

Step3: Enable hardware virtualization in Server BIOS

See OEM or Vendor Guidelines

Step4: Update Firmware of Server

See OEM or Vendor Guidelines. See Example of Dell Firmware Upgrade

Step5: Install MPIO driver in Hyper-v Host

See OEM or Vendor Guidelines

Step6: Physically Connect FC Tape Library, FC Storage and Servers to correct FC Zone

Step7: Configure Correct Zone and NPIV in Fabric

SSH to the fabric and type the following command to verify NPIV:

Fabric:root> portcfgshow 0

If NPIV is enabled, the output will show NPIV ON.

To enable NPIV on a specific port, type:

Fabric:root> portCfgNPIVPort 0 1

(where 0 is the port number and 1 is the mode: 1 = enable, 0 = disable)

In the Brocade fabric, configure aliases; the virtual HBAs and the FC tape library appear in the fabric. Note that you must place the FC tape library, the Hyper-V host(s), the virtual machine, and the FC SAN in the same zone, otherwise it will not work.


Configure correct Zone as shown below.


Configure correct Zone Config as shown below.


Once you have configured the correct zone in the fabric, you will see the FC tape library in the Windows Server 2012 R2 host where the Hyper-V role is installed. Do not update the tape driver in the Hyper-V host, as the guest (virtual machine) will be used as the backup server, and that is where the correct tape driver is needed.


Step8: Configure Virtual Fibre Channel

Open Hyper-V Manager, click Virtual SAN Manager > Create new Fibre Channel SAN.


Type a name for the Fibre Channel SAN, and then click Apply > OK.


Repeat the process to create multiple vFCs for MPIO and live migration purposes. Remember that each physical HBA must be connected to the two Brocade fabrics.

In the vFC configuration, keep the naming convention identical on both hosts. If you have two physical HBAs, configure two vFCs in each Hyper-V host, for example VFC1 and VFC2. Create two vFCs in the other host with the identical names VFC1 and VFC2. Assign both vFCs to the virtual machines. A PowerShell sketch follows.
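This is a minimal PowerShell sketch of the same vFC creation, run on each Hyper-V host; it assumes Get-InitiatorPort returns the two physical FC HBA ports in the same order on both hosts:

# Create one virtual SAN per physical HBA port
$hbas = Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" }
New-VMSan -Name "VFC1" -HostBusAdapter $hbas[0]
New-VMSan -Name "VFC2" -HostBusAdapter $hbas[1]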

Step9: Attach Virtual Fibre Channel Adapter on to virtual Machine.

Open Failover Cluster Manager, select the virtual machine in which the FC tape library will be visible, and shut down the virtual machine.

Go to the settings of the virtual machine > Add Fibre Channel Adapter > Apply > OK.


Record the WWPN from the virtual Fibre Channel adapter.


Power on the virtual machine.

Repeat the process to add both vFCs (VFC1 and VFC2) to the virtual machine.
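The same attachment can be scripted with the Hyper-V module. A sketch, where BackupSRV01 is an example name for the virtualized backup server:

# Shut down the VM, attach both vFC adapters, list the generated WWPNs, and power on
Stop-VM -Name "BackupSRV01"
Add-VMFibreChannelHba -VMName "BackupSRV01" -SanName "VFC1"
Add-VMFibreChannelHba -VMName "BackupSRV01" -SanName "VFC2"
Get-VMFibreChannelHba -VMName "BackupSRV01" | Format-List SanName, WorldWidePortNameSetA, WorldWidePortNameSetB
Start-VM -Name "BackupSRV01"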

Step10: Present Storage

Log on to the FC storage and add a host entry for the virtual server. The WWPNs shown here must match the WWPNs of the virtual Fibre Channel adapters.


Map the volume or LUN to the virtual server.


Step11: Install MPIO Driver in Guest Operating Systems

Open Server Manager > Add Roles & Features > add the MPIO feature. A PowerShell equivalent is sketched below.
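Or, from PowerShell in the guest:

# Install the MPIO feature (a restart may be required)
Install-WindowsFeature -Name Multipath-IO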


Download the manufacturer's MPIO driver for the storage. The MPIO driver must be the correct, latest version to function correctly.


Now you have FC SAN storage in your virtual machine.


Step12: Install Correct FC Tape Library Driver in Guest Operating Systems.

Download the correct FC tape driver and install it in the virtual backup server.

Now you have correct FC Tape library in virtual machine.


The backup software can now see the tape library and inventory the tapes.


Further Readings:

Brocade Fabric with Virtual FC in Hyper-v

Hyper-V Virtual Fibre Channel Overview

Clustered virtual machine cannot access LUNs over a Synthetic Fibre Channel after you perform live migration on Windows Server 2012 or Windows Server 2012 R2-based Hyper-V hosts