Gartner’s verdict on mid-range and enterprise class storage arrays

Previously I wrote an article on how to select a SAN based on your requirements. Now let's look at Gartner's verdict on storage. Gartner scores storage arrays in two categories: mid-range and enterprise class. Here are the details of the Gartner scores.

Mid-Range Storage

Mid-range storage arrays are scored on manageability; reliability, availability and serviceability (RAS); performance; snapshot and replication; scalability; ecosystem; multi-tenancy and security; and storage efficiency.

Figure: Product Rating

Figure: Storage Capabilities

Figure: Product Capabilities

Figure: Total Score

Enterprise Class Storage

Enterprise class storage is scored on performance, reliability, scalability, serviceability, manageability, RAS, snapshot and replication, ecosystem, multi-tenancy, security, and storage efficiency. Vendor reputation carries more weight in these criteria. Product types include clustered, scale-out, scale-up and high-end (monolithic) arrays as well as federated architectures. EMC, Hitachi, HP, Huawei, Fujitsu, DDN, and Oracle arrays can all cluster across more than two controllers. These vendors provide the functionality, performance, RAS and scalability required to be considered in this class.

Figure: Product Ratings (Source: Gartner)

Where does Dell Compellent Stand?

There are known disadvantages in the Dell Compellent storage array. Users with more than two nodes must carefully plan firmware upgrades during a time of low I/O activity or during periods of planned downtime. Dell Compellent is advertised as a flash-cached array with read SSD and write SSD tiers, storage tiering and snapshots, but in reality it does its own thing in the background that most customers aren't aware of. Dell Compellent runs a RAID scrub every day whether you like it or not, which generates huge IOPS across all tiers, both SSD and SATA disks, and you will experience poor I/O performance while the scrub runs. When the write SSD tier is full, the Compellent controller automatically triggers on-demand storage tiering during business hours, forcing data to be written permanently to tier 3 disks, which will cripple virtualization, VDI and file system workloads. Storage tiering combined with the RAID scrub sends storage latency through the roof. If you are a big virtualization and VDI shop, you are left with no option but to live with this poor performance and let the RAID scrub and tiering finish at a snail's pace. If you have terabytes of data to back up every night, you will experience an extended backup window and unachievable RPO and RTO, regardless of whether changed block tracking (CBT) is enabled in your backup products.

If you are a Compellent customer wondering why Gartner didn't include Dell Compellent in the enterprise class, now you know: Dell Compellent is excluded from the enterprise class matrix because it doesn't meet the functionality and capability requirements to be considered enterprise class. Another factor may worry existing Dell EqualLogic customers: no direct migration or upgrade path has been communicated to on-premises storage customers for when the OEM relationship between Dell and EMC ends. Dell ProSupport and the partner channel have confirmed that Dell will no longer sell SAS drives, which means I/O-intensive arrays will lose storage performance. These situations put users of co-branded Dell/EMC CX systems in the difficult position of having to choose between changing storage system technologies or changing storage vendors altogether.

How to configure SAN replication between IBM Storwize V3700 systems

The Metro Mirror and Global Mirror Copy Services features enable you to set up synchronous and asynchronous replication between two volumes on two separate IBM storage systems, so that updates made by an application to a volume on the storage system in the production site are mirrored to the corresponding volume on the storage system in the DR site.

  • The Metro Mirror feature provides synchronous replication. When a host writes to the primary volume, it does not receive confirmation of I/O completion until the write operation has completed for the copy on both the primary volume and the secondary volume. This ensures that the secondary volume is always up to date with the primary volume in the event that a failover operation must be performed. However, the host is limited by the latency and bandwidth of the communication link to the secondary volume.
  • The Global Mirror feature provides asynchronous replication. When a host writes to the primary volume, confirmation of I/O completion is received before the write operation has completed for the copy on the secondary volume. If a failover operation is performed, the application must recover and apply any updates that were not committed to the secondary volume. If I/O operations on the primary volume are paused for a small length of time, the secondary volume can become an exact match of the primary volume. (A CLI sketch of both relationship types follows this list.)
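
For reference, these relationships can also be created from the Storwize CLI instead of the GUI. This is a minimal sketch; the volume name Vol01, the relationship names and the remote system name DR_V3700 are placeholders, and options can vary slightly by firmware level.

Creating a Metro Mirror (synchronous) relationship

svctask mkrcrelationship -master Vol01 -aux Vol01 -cluster DR_V3700 -name MM_Vol01

Creating a Global Mirror (asynchronous) relationship

svctask mkrcrelationship -master Vol01 -aux Vol01 -cluster DR_V3700 -global -name GM_Vol01

Starting the initial background copy

svctask startrcrelationship MM_Vol01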

Prerequisites:

  1. Both systems are connected via dark fibre, L2 MPLS or IP VPN for replication over IP.
  2. Both systems are connected via fibre for replication over FC.
  3. Both systems are running the latest firmware.
  4. Easy Tier and SSDs are installed in both systems.
  5. The Remote Copy license is activated on both systems.
  6. Volumes are identical in the Prod and DR SAN.

Configure Metro Mirror in IBM v3700 Systems

Step1: Activate License

Log on to IBM V3700>Settings>System>Licensing

Activate Remote Copy and Easy Tier License.

    image

Step2: Configure Ethernet Ports & iSCSI in Production SAN and DR SAN

Both systems communicate via the management network, but volumes are replicated via Ethernet if remote copy is configured to use replication over Ethernet. This step is necessary for Metro Mirror over Ethernet; skip it if you are using FC.

Log on to the production IBM V3700 system. Settings>Network>Ethernet Ports. Right-click Node1 Port 2>Configure Copy Group 1 and Copy Group 2. Assign an IP address, enable iSCSI, and select Copy Group 1. Repeat to create Copy Group 2.

image

image

Repeat the steps to configure the Copy Groups in the DR SAN.  
Note: The TCP/IP addresses assigned in the DR SAN can be from the same subnet as the production SAN or from a different subnet, as long as both subnets can communicate with each other.  
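
If you prefer the CLI, the same port configuration can be done with cfgportip. This is a sketch only, assuming a firmware level that supports IP replication; the addresses are examples and the -remotecopy port group option is my reading of the syntax, so verify it against the command reference for your code level.

Assigning an IP address to node 1, Ethernet port 2 and placing it in remote copy port group 1

svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -remotecopy 1 2
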
Step3: Create Partnership in Prod & DR SAN
Log on to Production V3700>Copy Services>Partnerships>Create Partnership>Add NetBIOS Name and Management IP of DR SAN


Fully Configured indicates that the partnership is defined on the local and remote systems and has been started.

image 

The initial synchronization bandwidth is 2048MBps, but once I move the DR storage to the DR site I will change it to 1024MBps. The initial synchronization takes place in the production site. You can use your own bandwidth specification.

Log on to DR V3700>Copy Services>Partnerships>Create Partnership>Add NetBIOS Name and Management IP of Prod SAN
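
The partnership can also be created and adjusted from the CLI. A minimal sketch, assuming an FC partnership and a remote system named DR_V3700 (for replication over IP, mkippartnership is used instead); exact command names depend on the firmware level, and older releases use mkpartnership.

Creating the partnership and setting link bandwidth and background copy rate

svctask mkfcpartnership -linkbandwidthmbits 2048 -backgroundcopyrate 50 DR_V3700

Reducing the replication bandwidth later, once the DR system is at the DR site

svctask chpartnership -linkbandwidthmbits 1024 DR_V3700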


Step4: Create Relationships between Volume

Log on to Prod SAN>Copy Services>Remote Copy>Add Relationship>Select Metro Mirror

image

image

image

Specify the DR SAN as the auxiliary system, which is where the DR volume is located. IBM could have used clearer terms such as "Master/Slave" or "Prod/DR systems".

image

Add identical volume>Click Next.

image

image

Step5: Monitor Performance and Copy Services

image

image

Consistency group states

Consistent (synchronized) – The primary volumes are accessible for read and write I/O operations. The secondary volumes are accessible for read-only I/O operations.

Inconsistent (copying) – The primary volumes are accessible for read and write I/O operations, but the secondary volumes are not accessible for either operation. This state is entered after the startrcconsistgrp command is issued to a consistency group in the InconsistentStopped state. This state is also entered when the startrcconsistgrp command is issued, with the force option, to a consistency group in the Idling or ConsistentStopped state.
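
If you place several relationships in a consistency group so that they start, stop and fail over together, the equivalent CLI operations look like this (the group and relationship names are placeholders):

Creating a consistency group that spans the local and remote systems

svctask mkrcconsistgrp -cluster DR_V3700 -name CG_Prod_DR

Adding an existing relationship to the group

svctask chrcrelationship -consistgrp CG_Prod_DR MM_Vol01

Starting and monitoring the group

svctask startrcconsistgrp CG_Prod_DR
svcinfo lsrcconsistgrp CG_Prod_DR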

The background copy bandwidth can affect foreground I/O latency in one of three ways:

  • If the background copy bandwidth is set too high for the intersystem link capacity, the following results can occur:
    • The intersystem link is not able to process the background copy I/Os fast enough, and the I/Os can back up (accumulate).
    • For Metro Mirror, there is a delay in the synchronous secondary write operations of foreground I/Os.
    • For Global Mirror, the work is backlogged, which delays the processing of write operations and causes the relationship to stop. For Global Mirror in multiple-cycling mode, a backlog in the intersystem link can congest the local fabric and cause delays to data transfers.
    • The foreground I/O latency increases as detected by applications.
  • If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os.
  • If the background copy bandwidth is set too high for the storage at the secondary site, background copy write operations at the secondary overload the secondary storage and again delay the synchronous secondary write operations of foreground I/Os.
    • For Global Mirror without cycling mode, the work is backlogged and again the relationship is stopped.

image

image

Once volumes are synchronized you are ready to integrate storage with System Center 2012 R2.

Further Readings:

SAN Replication based Enterprise Grade Disaster Recovery with ASR and System Center

What’s New in 2012 R2: Cloud-integrated Disaster Recovery

Understanding IT Disaster Recovery Plan

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

How to Connect and Configure Virtual Fibre Channel, FC Storage and FC Tape Library from within a Virtual Machine in Hyper-v Server 2012 R2

Windows Server 2012 R2 with the Hyper-V role provides Fibre Channel ports within the guest operating system, which allows you to connect to Fibre Channel storage directly from within virtual machines. This feature enables you to virtualize workloads that use direct FC storage, allows you to cluster guest operating systems over Fibre Channel, and provides an important new storage option for servers hosted in your virtual infrastructure.

Benefits:

  • Use existing Fibre Channel investments to support virtualized workloads.
  • Connect to a Fibre Channel tape library from within a guest operating system.
  • Support for many related features, such as virtual SANs, live migration, and MPIO.
  • Create MSCS clusters of guest operating systems in a Hyper-V cluster.

Limitations:

  • Live Migration will not work if SAN zoning isn’t configured correctly.
  • Live Migration will not work if a LUN mismatch is detected by the Hyper-V cluster.
  • The virtual workload is tied to a single Hyper-V host, making that host a single point of failure, if only a single HBA is used.
  • Virtual Fibre Channel logical units cannot be used as boot media.

Prerequisites:

  • Windows Server 2012 or 2012 R2 with the Hyper-V role.
  • Hyper-V requires a computer with processor support for hardware virtualization. See details in BIOS setup of server hardware.
  • A computer with one or more Fibre Channel host bus adapters (HBAs) that have an updated HBA driver that supports virtual Fibre Channel.
  • An NPIV-enabled fabric, HBA and FC SAN. Almost all current-generation Brocade fabrics and storage arrays support this feature. NPIV is disabled on the HBA by default.
  • Virtual machines configured to use a virtual Fibre Channel adapter, which must use Windows Server 2008, Windows Server 2008 R2, Windows Server 2012 or Windows Server 2012 R2 as the guest operating system. A maximum of four vFC adapters is supported per guest OS.
  • Storage accessed through a virtual Fibre Channel supports devices that present logical units.
  • MPIO Feature installed in Windows Server.
  • Microsoft Hotfix KB2894032

Before I elaborate the steps involved in configuring virtual Fibre Channel, I assume you already have physical connectivity in place and physical multipathing configured as per vendor best practice. In this example configuration, I will be presenting storage and an FC tape library to a virtualized backup server. I used the following hardware.

  • 2X Brocade 300 series Fabric
  • 1X FC SAN
  • 1X FC Tape Library
  • 2X Windows Server 2012 R2 with the Hyper-V role installed and configured as a cluster. Each host is connected to both fabrics using dual HBA ports.

Step1: Update Firmware of all Fabric.

Use this LINK to update firmware.

Step2: Update Firmware of FC SAN

See OEM or vendor installation guide. See this LINK for IBM guide.

Step3: Enable hardware virtualization in Server BIOS

See OEM or Vendor Guidelines

Step4: Update Firmware of Server

See OEM or Vendor Guidelines. See Example of Dell Firmware Upgrade

Step5: Install MPIO driver in Hyper-v Host

See OEM or Vendor Guidelines

Step6: Physically Connect FC Tape Library, FC Storage and Servers to correct FC Zone

Step7: Configure Correct Zone and NPIV in Fabric

SSH to the fabric and type the following command to verify NPIV.

Fabric:root>portcfgshow 0

If NPIV is enabled, it will show NPIV ON.

To enable NPIV on a specific port, type portCfgNPIVPort 0 1 (where 0 is the port number and 1 is the mode: 1=enable, 0=disable).

Open the Brocade fabric and configure aliases. The items marked in red are the virtual HBA and FC tape shown in the fabric. Note that you must place the FC tape, the Hyper-V host(s), the virtual machine and the FC SAN in the same zone, otherwise it will not work.

image

Configure correct Zone as shown below.

image

Configure correct Zone Config as shown below.

image
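
If you prefer the Fabric OS CLI over the GUI, an equivalent zone can be built with commands along these lines. The alias, zone and config names are examples and the WWPNs are placeholders; substitute the values recorded from your HBAs, virtual Fibre Channel adapters, tape library and SAN.

alicreate "Hyperv_Host1_HBA1", "10:00:00:05:1e:00:00:01"
alicreate "BackupVM_VFC1", "c0:03:ff:00:00:00:00:01"
alicreate "FC_Tape", "50:01:10:a0:00:00:00:01"
alicreate "FC_SAN_Port1", "50:05:07:68:00:00:00:01"
zonecreate "Backup_Zone", "Hyperv_Host1_HBA1; BackupVM_VFC1; FC_Tape; FC_SAN_Port1"
cfgcreate "Fabric1_Cfg", "Backup_Zone"
cfgsave
cfgenable "Fabric1_Cfg"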

Once you have configured the correct zone in the fabric, you will see the FC tape appear in the Windows Server 2012 R2 host where the Hyper-V role is installed. Do not update the tape driver on the Hyper-V host; we will use the guest virtual machine as the backup server, and that is where the correct tape driver is needed.

image

Step8: Configure Virtual Fibre Channel

Open Hyper-V Manager, click Virtual SAN Manager>Create new Fibre Channel SAN.

image

Type Name of the Fibre Channel> Apply>Ok.

image

Repeat the process to create multiple vFCs for MPIO and Live Migration purposes. Remember that the physical HBAs must be connected to the two Brocade fabrics.

For the vFC configuration, keep the naming convention identical on both hosts. If you have two physical HBAs, configure two vFCs on each Hyper-V host, for example VFC1 and VFC2, and create two vFCs with the identical names VFC1 and VFC2 on the other host. Assign both vFCs to the virtual machines.
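
The same virtual SANs can be created with PowerShell on each host. A minimal sketch, assuming the SAN names VFC1 and VFC2; the WWNs shown are placeholders and should be replaced with the values reported by Get-InitiatorPort for your physical HBA ports.

# List the physical HBA ports and note their WWNN/WWPN
Get-InitiatorPort | Format-Table NodeAddress, PortAddress, ConnectionType

# Create one virtual SAN per physical HBA port (WWNs are placeholders)
New-VMSan -Name VFC1 -WorldWideNodeName 20000025B5000001 -WorldWidePortName 20000025B5000011
New-VMSan -Name VFC2 -WorldWideNodeName 20000025B5000002 -WorldWidePortName 20000025B5000012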

Step9: Attach Virtual Fibre Channel Adapters to the Virtual Machine

Open Failover Cluster Manager, select the virtual machine where the FC tape will be visible>Shut down the virtual machine.

Go to Settings of the virtual machine>Add Fibre Channel Adapter>Apply>Ok.

image

Record WWPN from the Virtual Fibre Channel.

image

Power on the virtual Machine.

Repeat the process to add both vFCs, VFC1 and VFC2, to the virtual machine.
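
The adapters can also be added and their WWPNs recorded with PowerShell. A sketch only; the VM name BackupVM and the SAN names VFC1/VFC2 are placeholders, and the VM must be powered off.

# Add one virtual Fibre Channel adapter per virtual SAN
Add-VMFibreChannelHba -VMName BackupVM -SanName VFC1
Add-VMFibreChannelHba -VMName BackupVM -SanName VFC2

# Record the generated WWPNs for zoning and for host mapping on the storage
Get-VMFibreChannelHba -VMName BackupVM | Format-Table SanName, WorldWidePortNameSetA, WorldWidePortNameSetB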

Step10: Present Storage

Log on to the FC storage>Add a host on the storage. The WWPN shown here must match the WWPN of the virtual Fibre Channel adapter.

image

Map the volume or LUN to the virtual server.

image

Step11: Install MPIO Driver in Guest Operating Systems

Open Server Manager>Add Role & Feature>Add MPIO Feature.

image
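
The MPIO feature can also be installed from PowerShell inside the guest. A sketch only; the mpclaim step claims devices for the in-box Microsoft DSM and is optional, since many arrays ship their own DSM (use the vendor driver where one exists, as noted below).

# Install the Multipath I/O feature in the guest operating system
Install-WindowsFeature -Name Multipath-IO

# Optionally claim all eligible devices for the Microsoft DSM (this reboots the server)
mpclaim.exe -r -i -a ""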

Download the manufacturer's MPIO driver (DSM) for the storage. The MPIO driver must be the correct and latest version to function properly.

image

Now you have the FC SAN visible in your virtual machine.

image

image

Step12: Install Correct FC Tape Library Driver in Guest Operating Systems.

Download the correct FC tape driver and install it in the virtual backup server.

Now you have correct FC Tape library in virtual machine.

image

Backup software can see Tape Library and inventory tapes.

image

Further Readings:

Brocade Fabric with Virtual FC in Hyper-v

Hyper-V Virtual Fibre Channel Overview

Clustered virtual machine cannot access LUNs over a Synthetic Fibre Channel after you perform live migration on Windows Server 2012 or Windows Server 2012 R2-based Hyper-V hosts

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Step1: Hardware Installation

Follow the official IBM video tutorial to rack and stack IBM V3700.

 

image

image

image

Cabling V3700, ESX Host and Fabric.

  • Connect each canister of the V3700 storage to both fabrics: Canister 1 FC Port 1—>Fabric1 and Canister 1 FC Port 2—>Fabric2; Canister 2 FC Port 1—>Fabric1 and Canister 2 FC Port 2—>Fabric2.
  • Connect the two HBAs of each host to the two fabrics.
  • Connect the disk enclosure to both canisters.
  • Connect both power cables of each device to separate power feeds.

Step2: Initial Setup

Management IP connection: Once you have racked the storage, connect Canister 1 Port 1 and Canister 2 Port 1 to two Gigabit Ethernet ports in the same VLAN. Make sure LACP is configured on both ports on your switch.

image

Follow the official IBM video tutorial to setup management IP or download Redbook Implementation Guide and follow section 2.4.1

 

Once the initial setup is complete, log on to the V3700 storage using the management IP.

Click Monitoring>System>Enclosure>Canister. Record serial number, part number and machine signature.

image

Step3: Configure Event Notification

Click Settings>Event Notification>Edit. Add the correct email address. You must add the management IP address of the V3700 to your Exchange relay connector. Now click the Test button to check that you received the email.

image

image

Step4: Licensing Features

Click Settings>General>Licensing>Actions>Automatic>Add>Type Activation Key>Activate. Repeat the step for all licensed features.

image

IBM V3700 At a Glance

image

Simple Definition:

MDisk: A bunch of disks configured in a preset or user-defined RAID level. For example, if you have 4 SSDs and 20 SAS hard drives, you can choose automatic configuration and let the wizard configure the storage, or you can choose to have one RAID5 (single parity) MDisk for the SSDs and two RAID5 MDisks for the 20 SAS hard drives. In that case you will have three MDisks in your controller. Simply put, an MDisk is a RAID group.

Pool: A storage pool acts as a container for MDisks and presents the available capacity ready to be provisioned.

Volume: Volumes are logical segregations of a pool. A volume is presented as a LUN, ready to be mapped or connected to an ESX host or Hyper-V host.
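
You can see this MDisk, pool and volume hierarchy from the CLI as well, using the standard listing commands:

Listing MDisks (the RAID arrays)

svcinfo lsmdisk

Listing storage pools and their capacity

svcinfo lsmdiskgrp

Listing volumes and the pool each one belongs to

svcinfo lsvdisk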

Step5: Configure Storage

Click Pools>Internal Storage>Configure Storage>Select recommended or user defined. Follow the wizard to complete MDisk.

image

image

Step6: Configure Pool

image

Click Pools>Click Volumes by Pool>Click New Volume>Generic>Select Quantity, Size, Pool then Click Create.

image

image

image

Repeat the steps to create multiple volumes for Hyper-v Host or ESXI host.

Important! In the IBM V3700, if you configure storage using the GUI, the console automatically selects default settings, for example the number of MDisks; you do not get the option to select the disks and MDisks you want. For example, you may want to create one large MDisk containing 15 disks, but the GUI will create 3 MDisks, which means you will lose a lot of capacity. To avoid this, you can create the MDisks, RAID arrays and pool using the command line.

Telnet or SSH to your storage using PuTTY. Log in with the username superuser and your password, then type the following commands.
Creating RAID5 arrays and a pool

svctask mkmdiskgrp -ext 1024 -guiid 0 -name Pool01 -warning 80%

Creating a spare with drive 6

svctask chdrive -use spare 6

Creating a RAID5 array from the SAS drives

svctask mkarray -drive 4:11:5:9:3:7:8:10:12:13:14 -level raid5 -sparegoal 1 0

Creating an SSD RAID5 array with Easy Tier (drives 0, 1 and 2 are SSDs)

svctask mkarray -drive 1:2:0 -level raid5 -sparegoal 0 Pool01
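
Volumes can also be created in the same CLI session once the pool exists. A sketch only; the volume name and size are examples.

Creating a 500 GB generic volume in Pool01

svctask mkvdisk -mdiskgrp Pool01 -iogrp 0 -size 500 -unit gb -name ESX_DS01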

Now go back to the GUI and check that you have two MDisks and a pool with Easy Tier activated.

Step7: Configure Fabric

Once you have racked and stacked the Brocade 300B switches, connect both of them to your Ethernet switch using CAT6 cable. This could be your core switch or an access switch. Connect your laptop to the network as well.

References are here. Quick Start Guide and Setup Guide

Default Passwords

username: root password: fibranne
username: admin password: password

There are two ways to do the initial setup. You can connect the switch to the network and browse to http://10.77.77.77 (give your laptop an IP address of 10.77.77.1/24), then log in with the username root and the password fibranne.

Alternatively, connect the console cable provided in the Brocade box to your laptop, insert the EZSetup CD into your laptop's CD-ROM drive and run the EZSetup wizard. Select English > OK. At the Welcome screen > Click Next > Click Next > accept the EULA > Install > Done > Select Serial Cable > Click Next > Click Next (make sure HyperTerminal is NOT running or it will fail). It should find the switch > set its IP details.

image

Next > Follow Instructions.

Install Java on your laptop. Open a browser and type the IP address of the Brocade switch; you can then connect to the switch via its web console. Log in using the default username root and password fibranne.

image

Click Switch Admin>Type DNS Server, Domain Name, Apply. Click License>Add new License, Apply.

image

image

Click Zone Admin>Click Alias>New Alias>Type the name of the alias. Click OK. Expand Switch Port, select the WWNN>Add Members. Repeat for Canister Node 1, Canister Node 2, ESX Host 1, ESX Host 2, ESX Host 3 and so on.

image

image

Select the Zone tab>New Zone>Type the name of the zone, for example vSphere or Hyper_V. Select the aliases>Click Add Members>Add all aliases.

image

Select the Zone Config tab>New Zone Config>Type the name of the zone config>Select the vSphere or Hyper_V zone>Add Members.

image

Click Save Config. Click Enable Config.

image

image

Repeat the above steps on all fabrics.

Step8: Configure Hosts in V3700

Click Hosts>New Host>Fibre Channel Host>Generic>Type the Name of the Host>Select Port>Create Host.

image

image

image

image

Right Click on each host>Map Volumes.

image
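
Hosts and volume mappings can also be created from the V3700 CLI. A sketch only; the host name, WWPNs and volume name are placeholders.

Creating the host object with its two HBA WWPNs

svctask mkhost -name ESX01 -fcwwpn 2100001B321A2B3C:2101001B321A2B3C

Mapping a volume to the host

svctask mkvdiskhostmap -host ESX01 ESX_DS01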

Step9: Configure ESXi Host

Right-click the ESX cluster>Rescan for Datastores.

image

image

Click the ESX host>Configuration>Storage>Add Storage>Click Next>Select the correct and matching UID>Type a name matching the volume name within the IBM V3700. Click Next>Select VMFS5>Finish.

image

Rescan for datastores on all hosts again; you will see the same datastore appear on all hosts.

Finding HBA, WWNN

Select ESX Host>Configuration>Storage Adapters

image

image

Verifying Multipathing

Right-click the storage>Properties>Manage Paths.

image

image
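
The same checks can be done from the ESXi shell. A sketch only, using the standard esxcli namespaces.

# List FC adapters and their WWNs
esxcli storage core adapter list

# Rescan all adapters after mapping new volumes
esxcli storage core adapter rescan --all

# Show multipathing details (paths and path selection policy) for each device
esxcli storage nmp device list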

Finding and Matching UID

Log on to IBM V3700 Storage>Volume>Volumes

image

Click ESX Host>Configuration>Storage>Devices

image