Gartner’s verdict on mid-range and enterprise class storage arrays

Previously I wrote an article on how to select a SAN based on your requirements. Now let's look at Gartner's verdict on storage. Gartner scores storage arrays in two categories: mid-range and enterprise class. Here are the details of those scores.

Mid-Range Storage

Mid-range storage arrays are scored on manageability; reliability, availability and serviceability (RAS); performance; snapshot and replication; scalability; ecosystem; multi-tenancy and security; and storage efficiency.

Figure: Product Rating

Figure: Storage Capabilities

Figure: Product Capabilities

Figure: Total Score

Enterprise Class Storage

Enterprise class storage is scored on performance, reliability, scalability, serviceability, manageability, RAS, snapshot and replication, ecosystem, multi-tenancy, security, and storage efficiency. Vendor reputation carries more weight in this class. Product types include clustered, scale-out, scale-up and high-end (monolithic) arrays as well as federated architectures. EMC, Hitachi, HP, Huawei, Fujitsu, DDN, and Oracle arrays can all cluster across more than two controllers. These vendors provide the functionality, performance, RAS and scalability required to be considered in this class.

Figure: Product Ratings (Source: Gartner)

Where does Dell Compellent Stand?

There are known disadvantages in the Dell Compellent storage array: users with more than two nodes must carefully plan firmware upgrades for a time of low I/O activity or a period of planned downtime. Dell Compellent is advertised as a flash-cached array with read SSD and write SSD tiers, storage tiering and snapshots, but in reality it does its own thing in the background, which most customers are not aware of. Compellent runs a RAID scrub every day whether you like it or not, which generates huge IOPS across all tiers of the array, both SSD and SATA disks, so you will experience poor I/O performance during the scrub. When the write SSD tier fills up, the Compellent controller automatically triggers an on-demand tiering run during business hours and forces data to be written permanently to tier 3 disks, which will cripple virtualization, VDI and file system workloads. Storage tiering plus RAID scrub sends storage latency through the roof. If you are a big virtualization or VDI shop, you are left with no option but to ride out the poor performance and let the RAID scrub and tiering finish at a snail's pace. If you have terabytes of data to back up every night, you will experience an extended backup window and unachievable RPO and RTO, regardless of whether change block tracking (CBT) is enabled in your backup product.

If you are a Compellent customer wondering why Gartner didn't include Dell Compellent in the enterprise class, now you know: Dell Compellent is excluded from the enterprise class matrix because it doesn't meet the functionality and capability requirements to be considered enterprise class. Another factor may worry existing Dell EqualLogic customers: no direct migration or upgrade path has been communicated to on-premises storage customers for when the OEM relationship between Dell and EMC ends. Dell Pro Support and the partner channel confirm that Dell will no longer sell SAS drives, which means I/O-intensive arrays will lose storage performance. These situations put users of co-branded Dell:EMC CX systems in the difficult position of having to choose between changing storage system technologies or changing storage vendors altogether.

How to configure SAN replication between IBM Storwize V3700 systems

The Metro Mirror and Global Mirror Copy Services features enable you to set up synchronous or asynchronous replication between two volumes on two separate IBM storage systems, so that updates made by an application to a volume on the storage system at the prod site are mirrored to the corresponding volume on the storage system at the DR site.

  • The Metro Mirror feature provides synchronous replication. When a host writes to the primary volume, it does not receive confirmation of I/O completion until the write operation has completed on both the primary volume and the secondary volume. This ensures that the secondary volume is always up to date with the primary volume in the event that a failover operation must be performed. However, the host is subject to the latency and bandwidth limitations of the communication link to the secondary volume.
  • The Global Mirror feature provides asynchronous replication. When a host writes to the primary volume, confirmation of I/O completion is received before the write operation has completed for the copy on the secondary volume. If a failover operation is performed, the application must recover and apply any updates that were not committed to the secondary volume. If I/O operations on the primary volume are paused for a short length of time, the secondary volume can become an exact match of the primary volume.
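
On the Storwize CLI the two modes differ mainly by one flag on the mkrcrelationship command. A minimal sketch, assuming a hypothetical volume pair PROD_VOL01/DR_VOL01 and a partnership already defined with a remote system named DR_V3700 (verify the exact syntax against your code level):

# Metro Mirror (synchronous) relationship
mkrcrelationship -master PROD_VOL01 -aux DR_VOL01 -cluster DR_V3700 -name MM_VOL01

# Global Mirror (asynchronous) relationship: the only difference is the -global flag
mkrcrelationship -master PROD_VOL01 -aux DR_VOL01 -cluster DR_V3700 -global -name GM_VOL01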

Prerequisites:

  1. Both systems are connected via dark fibre, L2 MPLS or IP VPN for replication over IP.
  2. Both systems are connected via fibre for replication over FC.
  3. Both systems are running up-to-date firmware.
  4. Easy Tier and SSDs are installed in both systems.
  5. The Remote Copy license is activated on both systems.
  6. Identical volumes exist on the Prod and DR SAN.

Configure Metro Mirror in IBM v3700 Systems

Step1: Activate License

Log on to IBM V3700>Settings>System>Licensing

Activate Remote Copy and Easy Tier License.

    image

Step2: Configure Ethernet Ports & iSCSI in Production SAN and DR SAN

Both systems communicate over the management network, but volumes are replicated over the dedicated Ethernet ports when remote copy is configured to use replication over Ethernet. This step is required for Metro Mirror over Ethernet; skip it if you are replicating over FC.

Log on to the production IBM V3700 system: Settings>Network>Ethernet Ports. Right-click Node 1 Port 2>Configure to set up Copy Group 1 and Copy Group 2: assign an IP address, enable iSCSI and select Copy Group 1, then repeat to create Copy Group 2.
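
The same port configuration can be scripted from the CLI with cfgportip. A rough sketch with placeholder addresses, assuming node 1 port 2 and remote-copy port group 1 (parameter names can vary slightly between code levels, so check the V3700 CLI reference):

# Assign an IP to node 1 port 2 and place it in remote-copy port group 1 (illustrative values)
cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -remotecopy 1 2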

image

image

Repeat the step to configure Copy Groups in the DR SAN.

Note: The IP addresses assigned in the DR SAN can be in the same subnet as the production SAN or in a different subnet, as long as the two subnets can communicate with each other.

Step3: Create Partnership in Prod & DR SAN

Log on to Production V3700>Copy Services>Partnerships>Create Partnership>Add NetBIOS Name and Management IP of DR SAN

clip_image001

Fully Configured indicates that the partnership is defined on both the local and remote systems and has been started.

image 

The initial synchronization bandwidth is set to 2048 Mbps, but once I move the DR storage to the DR site I will reduce it to 1024 Mbps. Initial synchronization takes place at the prod site. Use a bandwidth figure that suits your own link.
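
The CLI equivalent for an IP partnership looks roughly like the sketch below; the management IP and system name are placeholders, the bandwidth figures match those above, and for FC links mkfcpartnership is used instead (flags depend on firmware level):

# Create the IP partnership from the production system (repeat the equivalent command on the DR system)
mkippartnership -type ipv4 -clusterip <DR management IP> -linkbandwidthmbits 2048 -backgroundcopyrate 50

# After the DR array is relocated, reduce the link bandwidth
chpartnership -linkbandwidthmbits 1024 DR_V3700

# Confirm the partnership reports fully_configured on both sides
lspartnership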

Log on to DR V3700>Copy Services>Partnerships>Create Partnership>Add NetBIOS Name and Management IP of Prod SAN

clip_image002

Step4: Create Relationships between Volumes

Log on to Prod SAN>Copy Services>Remote Copy>Add Relationship>Select Metro Mirror

image

image

image

Specify the DR SAN as the Auxiliary system, where the DR volume is located. IBM could have used clearer wording such as "master/slave" or "prod/DR" systems.

image

Add identical volume>Click Next.
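
Behind the wizard, each volume pair is a remote copy relationship. A hedged CLI sketch for one pair, reusing the hypothetical names from earlier:

# Create the Metro Mirror relationship (production volume is the master, DR volume the auxiliary)
mkrcrelationship -master PROD_VOL01 -aux DR_VOL01 -cluster DR_V3700 -name MM_VOL01

# Start the initial copy, then watch its state and progress
startrcrelationship MM_VOL01
lsrcrelationship MM_VOL01
lsrcrelationshipprogress MM_VOL01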

image

image

Step5: Monitor Performance and Copy Services

image

image

Consistency group states

Consistent (synchronized) – The primary volumes are accessible for read and write I/O operations. The secondary volumes are accessible for read-only I/O operations.

Inconsistent (copying) – The primary volumes are accessible for read and write I/O operations, but the secondary volumes are not accessible for either operation. This state is entered after the startrcconsistgrp command is issued to a consistency group in the InconsistentStopped state. This state is also entered when the startrcconsistgrp command is issued, with the force option, to a consistency group in the Idling or ConsistentStopped state.
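
If several volumes must stay mutually consistent (for example an application's data and log volumes), put their relationships into a consistency group and manage them as one unit. A minimal sketch with hypothetical names:

# Create a consistency group against the DR system, add an existing relationship, then start the group
mkrcconsistgrp -cluster DR_V3700 -name APP1_CG
chrcrelationship -consistgrp APP1_CG MM_VOL01
startrcconsistgrp APP1_CG

# Check the group state (consistent_synchronized, inconsistent_copying, and so on)
lsrcconsistgrp APP1_CG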

The background copy bandwidth can affect foreground I/O latency in one of three ways (a CLI sketch for tuning the rate follows this list):

  • If the background copy bandwidth is set too high for the intersystem link capacity, the following results can occur:
    • The intersystem link is not able to process the background copy I/Os fast enough, and the I/Os can back up (accumulate).
    • For Metro Mirror, there is a delay in the synchronous secondary write operations of foreground I/Os.
    • For Global Mirror, the work is backlogged, which delays the processing of write operations and causes the relationship to stop. For Global Mirror in multiple-cycling mode, a backlog in the intersystem link can congest the local fabric and cause delays to data transfers.
    • The foreground I/O latency increases as detected by applications.
  • If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os.
  • If the background copy bandwidth is set too high for the storage at the secondary site, background copy write operations at the secondary overload the secondary storage and again delay the synchronous secondary write operations of foreground I/Os.
    • For Global Mirror without cycling mode, the work is backlogged and again the relationship is stopped.
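
The background copy rate is a property of the partnership, expressed as a percentage of the configured link bandwidth, so it can be turned down if the link or either array is being overwhelmed. A hedged one-liner (confirm the flag name for your code level):

# Limit background copy to 25% of the link so foreground writes keep priority
chpartnership -backgroundcopyrate 25 DR_V3700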

image

image

Once volumes are synchronized you are ready to integrate storage with System Center 2012 R2.

Further Readings:

SAN Replication based Enterprise Grade Disaster Recovery with ASR and System Center

What’s New in 2012 R2: Cloud-integrated Disaster Recovery

Understanding IT Disaster Recovery Plan

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

How to Connect and Configure Virtual Fibre Channel, FC Storage and FC Tape Library from within a Virtual Machine in Hyper-v Server 2012 R2

Windows Server 2012 R2 with Hyper-v Role provides Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage and also allows you to cluster guest operating systems leveraging Fibre Channel, and provides an important new storage option for servers hosted in your virtual infrastructure.

Benefits:

  • Leverage existing Fibre Channel investments to support virtualized workloads.
  • Connect a Fibre Channel tape library from within a guest operating system.
  • Support for many related features, such as virtual SANs, live migration, and MPIO.
  • Create an MSCS cluster of guest operating systems in a Hyper-V cluster.

Limitation:

  • Live Migration will not work if SAN zoning isn’t configured correctly.
  • Live Migration will not work if LUN mismatch detected by Hyper-v cluster.
  • The virtual workload is tied to a single Hyper-V host, making it a single point of failure, if only a single HBA is used.
  • Virtual Fibre Channel logical units cannot be used as boot media.

Prerequisites:

  • Windows Server 2012 or 2012 R2 with the Hyper-V role.
  • Hyper-V requires a computer with processor support for hardware virtualization. See details in BIOS setup of server hardware.
  • A computer with one or more Fibre Channel host bus adapters (HBAs) that have an updated HBA driver that supports virtual Fibre Channel.
  • An NPIV-enabled fabric, HBA and FC SAN. Almost all current-generation Brocade fabrics and storage arrays support this feature. NPIV is disabled on the HBA by default.
  • Virtual machines configured to use a virtual Fibre Channel adapter, which must use Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 or Windows Server 2012 R2 as the guest operating system. A maximum of four virtual FC adapters are supported per guest.
  • Storage accessed through a virtual Fibre Channel supports devices that present logical units.
  • MPIO Feature installed in Windows Server.
  • Microsoft Hotfix KB2894032
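
Several of these prerequisites can be verified quickly from an elevated PowerShell prompt on each Hyper-V host; a sketch using in-box cmdlets:

# Confirm the Hyper-V role and MPIO feature are installed
Get-WindowsFeature Hyper-V, Multipath-IO

# List physical Fibre Channel initiator ports and their WWPNs; empty output means no usable FC HBA was found
Get-InitiatorPort | Where-Object ConnectionType -eq 'Fibre Channel' | Format-Table PortAddress, NodeAddress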

Before I elaborate the steps involved in configuring virtual Fibre Channel, I assume you have physical connectivity in place and physical multipathing configured and connected as per vendor best practice. In this example configuration, I will present storage and an FC tape library to a virtualized backup server. I used the following hardware.

  • 2X Brocade 300 series Fabric
  • 1X FC SAN
  • 1X FC Tape Library
  • 2X Windows Server 2012 R2 with the Hyper-V role installed and configured as a cluster. Each host is connected to both fabrics using a dual-port HBA.

Step1: Update Firmware of all Fabric.

Use this LINK to update firmware.

Step2: Update Firmware of FC SAN

See OEM or vendor installation guide. See this LINK for IBM guide.

Step3: Enable hardware virtualization in Server BIOS

See OEM or Vendor Guidelines

Step4: Update Firmware of Server

See OEM or Vendor Guidelines. See Example of Dell Firmware Upgrade

Step5: Install MPIO driver in Hyper-v Host

See OEM or Vendor Guidelines

Step6: Physically Connect FC Tape Library, FC Storage and Servers to correct FC Zone

Step7: Configure Correct Zone and NPIV in Fabric

SSH to the fabric and type the following command to verify NPIV.

Fabric:root>portcfgshow 0

If NPIV is enabled, it will show NPIV ON.

To enable NPIV on a specific port, type portCfgNPIVPort 0 1 (where 0 is the port number and 1 is the mode: 1=enable, 0=disable).

Open the Brocade fabric and configure aliases. The items marked in red are the virtual HBAs and the FC tape library as seen by the fabric. Note that you must place the FC tape library, Hyper-V host(s), virtual machine and FC SAN in the same zone, otherwise it will not work.
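
If you prefer the Fabric OS CLI to the GUI, the alias/zone/config steps look roughly like the sketch below. The alias names, WWPNs and config name are placeholders for illustration; substitute the WWPNs you recorded from your hosts, virtual FC adapters, tape library and storage:

# Create aliases for a virtual HBA and the tape library (placeholder WWPNs)
alicreate "HyperV1_vFC1", "c0:03:ff:xx:xx:xx:xx:xx"
alicreate "FC_TAPE", "50:01:10:xx:xx:xx:xx:xx"

# Zone them together, add the zone to the existing config and enable it
zonecreate "Z_Backup_Tape", "HyperV1_vFC1; FC_TAPE"
cfgadd "Prod_Cfg", "Z_Backup_Tape"
cfgenable "Prod_Cfg"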

image

Configure correct Zone as shown below.

image

Configure correct Zone Config as shown below.

image

Once you have configured the correct zones in the fabric, you will see the FC tape library appear in the Windows Server 2012 R2 host where the Hyper-V role is installed. Do not install the tape driver on the Hyper-V host, as the guest virtual machine will act as the backup server and that is where the correct tape driver is needed.

image

Step8: Configure Virtual Fibre Channel

Open Hyper-V Manager, click Virtual SAN Manager>Create new Fibre Channel SAN.

image

Type a name for the virtual Fibre Channel SAN>Apply>OK.

image

Repeat the process to create multiple virtual Fibre Channel SANs for MPIO and live migration purposes. Remember that the physical HBA ports must be connected to the two Brocade fabrics.

For the vFC configuration, keep the naming convention identical on both hosts. If you have two physical HBA ports, configure two vFCs on each Hyper-V host, for example VFC1 and VFC2. Create two vFCs on the other host with the identical names VFC1 and VFC2, and assign both vFCs to the virtual machines.
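
The same virtual SANs can be created with PowerShell on each host. A sketch assuming the names VFC1/VFC2 above and that Get-InitiatorPort returns exactly the two physical HBA ports you want to use (adjust the selection to your hardware):

# Collect the physical FC HBA ports on this host
$hba = @(Get-InitiatorPort | Where-Object ConnectionType -eq 'Fibre Channel')

# Create one virtual SAN per physical HBA port / fabric
New-VMSan -Name VFC1 -WorldWideNodeName $hba[0].NodeAddress -WorldWidePortName $hba[0].PortAddress
New-VMSan -Name VFC2 -WorldWideNodeName $hba[1].NodeAddress -WorldWidePortName $hba[1].PortAddress

Run the same commands on the second cluster node so the virtual SAN names match on both hosts.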

Step9: Attach Virtual Fibre Channel Adapter on to virtual Machine.

Open Failover Cluster Manager, select the virtual machine to which the FC tape library will be presented>Shut down the virtual machine.

Go to Settings of the virtual machine>Add Fibre Channel Adapter>Apply>Ok.

image

Record the WWPNs from the virtual Fibre Channel adapter.

image

Power on the virtual Machine.

Repeat the process to add both virtual Fibre Channel adapters, VFC1 and VFC2, to the virtual machine.
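
The PowerShell equivalent, assuming a hypothetical virtual machine named BACKUP01 and the virtual SANs VFC1 and VFC2 from Step 8 (the VM must be off while adapters are added):

Stop-VM -Name BACKUP01

# Attach one virtual FC adapter per virtual SAN, then read back the generated WWPNs for zoning
Add-VMFibreChannelHba -VMName BACKUP01 -SanName VFC1
Add-VMFibreChannelHba -VMName BACKUP01 -SanName VFC2
Get-VMFibreChannelHba -VMName BACKUP01 | Format-Table SanName, WorldWidePortNameSetA, WorldWidePortNameSetB

Start-VM -Name BACKUP01

Zone both the Set A and Set B WWPNs; Hyper-V alternates between the two sets during live migration.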

Step10: Present Storage

Log on to the FC storage>Add Host. The WWPNs shown here must match the WWPNs of the virtual Fibre Channel adapter.

image

Map the volume or LUN to the virtual server.

image

Step11: Install MPIO Driver in Guest Operating Systems

Open Server Manager>Add Role & Feature>Add MPIO Feature.
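
The same can be done from PowerShell inside the guest; a short sketch (the mpclaim step is only needed if you rely on the in-box Microsoft DSM rather than a vendor driver):

# Install the MPIO feature in the guest operating system (a restart may be required)
Install-WindowsFeature -Name Multipath-IO

# Optionally claim all MPIO-capable devices with the Microsoft DSM (reboots automatically if required)
mpclaim.exe -r -i -a ""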

image

Download the storage manufacturer's MPIO driver. The MPIO driver must be the correct and latest version to function properly.

image

Now you have FC SAN storage visible in your virtual machine.

image

image

Step12: Install Correct FC Tape Library Driver in Guest Operating Systems.

Download the correct FC tape driver and install it in the virtual backup server.

Now the correct FC tape library is available in the virtual machine.

image

The backup software can now see the tape library and inventory the tapes.

image

Further Readings:

Brocade Fabric with Virtual FC in Hyper-v

Hyper-V Virtual Fibre Channel Overview

Clustered virtual machine cannot access LUNs over a Synthetic Fibre Channel after you perform live migration on Windows Server 2012 or Windows Server 2012 R2-based Hyper-V hosts

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step
