Windows Server 2012 R2 with the Hyper-V role provides virtual Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage, allows you to cluster guest operating systems over Fibre Channel, and provides an important new storage option for servers hosted in your virtual infrastructure.
Benefits of virtual Fibre Channel:
- Leverage existing Fibre Channel investments to support virtualized workloads.
- Connect a Fibre Channel tape library from within a guest operating system.
- Support for many related features, such as virtual SANs, live migration, and MPIO.
- Create an MSCS cluster of guest operating systems within a Hyper-V cluster.
Limitations:
- Live Migration will not work if SAN zoning is not configured correctly.
- Live Migration will not work if a LUN mismatch is detected by the Hyper-V cluster.
- A virtual workload is tied to a single Hyper-V host, making that host a single point of failure if only one HBA is used.
- Virtual Fibre Channel logical units cannot be used as boot media.
Prerequisites:
- Windows Server 2012 or 2012 R2 with the Hyper-V role.
- Hyper-V requires a computer with processor support for hardware virtualization; see the BIOS setup of the server hardware for details.
- A computer with one or more Fibre Channel host bus adapters (HBAs), each with an updated driver that supports virtual Fibre Channel.
- An NPIV-enabled fabric, HBA, and FC SAN. Almost all current-generation Brocade fabrics and storage arrays support this feature. NPIV is disabled on the HBA by default.
- Virtual machines configured to use a virtual Fibre Channel adapter, which must run Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2 as the guest operating system. A maximum of four vFC ports are supported per guest OS.
- Storage accessed through virtual Fibre Channel must present logical units.
- The MPIO feature installed in Windows Server.
- Microsoft hotfix KB2894032.
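The host-side role and feature prerequisites above can be sanity-checked from an elevated PowerShell prompt on each host. This is a minimal sketch using the built-in Windows Server 2012 R2 cmdlets; run it on each Hyper-V host:

```powershell
# Check whether the Hyper-V role and the MPIO feature are installed on this host
Get-WindowsFeature -Name Hyper-V, Multipath-IO | Format-Table Name, InstallState

# Install whichever is missing (adding the Hyper-V role requires a reboot)
Install-WindowsFeature -Name Multipath-IO
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```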
Before I begin elaborating the steps involved in configuring virtual Fibre Channel, I assume you have physical connectivity in place and physical multipathing configured and connected per vendor best practice. In this example configuration, I will present storage and an FC tape library to a virtualized backup server. I used the following hardware:
- 2x Brocade 300 series fabrics
- 1x FC SAN
- 1x FC tape library
- 2x Windows Server 2012 R2 hosts with the Hyper-V role installed and configured as a cluster. Each host is connected to both fabrics using a dual-port HBA.
Step 1: Update the firmware on all fabrics.
Use this LINK to update firmware.
Step 2: Update the firmware on the FC SAN
See the OEM or vendor installation guide. See this LINK for the IBM guide.
Step 3: Enable hardware virtualization in the server BIOS
See OEM or vendor guidelines.
Step 4: Update the server firmware
See OEM or vendor guidelines. See this example of a Dell firmware upgrade.
Step 5: Install the MPIO driver on the Hyper-V hosts
See OEM or vendor guidelines.
Step 6: Physically connect the FC tape library, FC storage, and servers to the correct FC zone
Step 7: Configure the correct zones and NPIV in the fabric
SSH to the fabric and check the port configuration to verify NPIV.
If NPIV is enabled, the port configuration will show NPIV ON.
To enable NPIV on a specific port, type portCfgNPIVPort 0 1 (where 0 is the port number and 1 is the mode; 1 = enable, 0 = disable).
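On Brocade FOS, the check and the change can be sketched as follows. The port number is an example, and the exact output format varies by FOS version:

```
switch:admin> portcfgshow 0        (look for the NPIV capability entry in the output)
switch:admin> portcfgnpivport 0 1  (enable NPIV on port 0; 1 = enable, 0 = disable)
switch:admin> portcfgshow 0        (confirm NPIV capability now shows ON)
```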
Open the Brocade fabric and configure aliases. The red-marked entries are the virtual HBAs and the FC tape shown in the fabric. Note that you must place the FC tape, Hyper-V host(s), virtual machine, and FC SAN in the same zone; otherwise it will not work.
Configure the correct zone as shown below.
Configure the correct zone config as shown below.
Once the correct zones are configured in the fabric, the FC tape will appear on the Windows Server 2012 R2 hosts where the Hyper-V role is installed. Do not update the tape driver on the Hyper-V host; the guest virtual machine will act as the backup server, and that is where the correct tape driver is needed.
Step 8: Configure virtual Fibre Channel
Open Hyper-V Manager and click Virtual SAN Manager > Create new Fibre Channel SAN.
Type the name of the Fibre Channel SAN, then click Apply > OK.
Repeat the process to create multiple vFCs for MPIO and Live Migration purposes. Remember that the physical HBAs must be connected to both Brocade fabrics.
When configuring vFCs, keep the naming convention identical on both hosts. If you have two physical HBAs, configure two vFCs on each Hyper-V host, for example VFC1 and VFC2. Create two vFCs on the other host with the identical names VFC1 and VFC2, and assign both vFCs to the virtual machines.
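The same virtual SANs can be created with PowerShell instead of Hyper-V Manager. This is a sketch; the SAN names follow the example above, and the WWNs are placeholders you would replace with the values reported for your physical HBA ports:

```powershell
# List the physical FC HBA ports on this host and note their WWNs
Get-InitiatorPort | Format-Table NodeAddress, PortAddress, ConnectionType

# Create one virtual SAN per physical HBA port, named identically on both hosts
New-VMSan -Name "VFC1" -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"
New-VMSan -Name "VFC2" -WorldWideNodeName "C003FF0000FFFF01" -WorldWidePortName "C003FF5778E50003"
```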
Step 9: Attach a virtual Fibre Channel adapter to the virtual machine
Open Failover Cluster Manager, select the virtual machine where the FC tape will be visible, and shut down the virtual machine.
Go to Settings of the virtual machine > Add Fibre Channel Adapter > Apply > OK.
Record the WWPNs from the virtual Fibre Channel adapter.
Power on the virtual machine.
Repeat the process to add both vFCs (VFC1 and VFC2) to the virtual machine.
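Attaching the adapters and recording their WWPNs can also be scripted. This is a sketch; the VM name "BackupVM" is a placeholder for your backup server, and the SAN names match the example above:

```powershell
# The VM must be powered off before virtual FC adapters are added
Stop-VM -Name "BackupVM"

# Add one virtual FC adapter per virtual SAN
Add-VMFibreChannelHba -VMName "BackupVM" -SanName "VFC1"
Add-VMFibreChannelHba -VMName "BackupVM" -SanName "VFC2"

# Record the generated WWPNs (sets A and B are used alternately during Live Migration)
Get-VMFibreChannelHba -VMName "BackupVM" |
    Format-Table SanName, WorldWidePortNameSetA, WorldWidePortNameSetB

Start-VM -Name "BackupVM"
```

Both WWPN sets should be zoned and registered on the storage array so paths survive a Live Migration.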
Step 10: Present storage
Log on to the FC storage and add a host entry for the virtual machine. The WWPNs entered here must match the WWPNs of the virtual Fibre Channel adapters.
Map the volume or LUN to the virtual server.
Step 11: Install the MPIO driver in the guest operating system
Open Server Manager > Add Roles & Features > add the MPIO feature.
Download the storage manufacturer's MPIO driver. The MPIO driver must be the correct, up-to-date version to function properly.
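Inside the guest, the MPIO feature can also be added and checked with PowerShell. This is a sketch; the vendor DSM is still installed separately per the manufacturer's instructions:

```powershell
# Install the MPIO feature in the guest operating system
Install-WindowsFeature -Name Multipath-IO -Restart

# After installing the vendor DSM, list MPIO-claimed disks and their paths
mpclaim.exe -s -d
```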
You now have the FC SAN presented to your virtual machine.
Step 12: Install the correct FC tape library driver in the guest operating system
Download the correct FC tape driver and install it in the virtual backup server.
You now have the FC tape library available in the virtual machine.
The backup software can now see the tape library and inventory the tapes.