Backup VMware Server Workloads to Azure Backup Server

In my previous article, I explained how to install and configure Azure Backup Server. This article explains how to configure Azure Backup Server to help protect VMware Server workloads. I am assuming that you already have Azure Backup Server installed. Azure Backup Server can back up, or help protect, VMware vCenter Server version 5.5 and later.

Step 1: Create a secure connection to the vCenter Server

By default, Azure Backup Server communicates with each vCenter Server via an HTTPS channel. To turn on secure communication, we recommend that you install the VMware Certificate Authority (CA) certificate on Azure Backup Server.

To avoid certificate errors and create a secure connection, download the trusted root CA certificates.

  1. In the browser on Azure Backup Server, enter the URL to the vSphere Web Client, for example, https://vcenter.domain.com. The vSphere Web Client login page appears.

At the bottom of the information for administrators and developers, locate the Download trusted root CA certificates link.

  2. Click Download trusted root CA certificates.

The vCenter Server downloads a file, named download, to your local computer. Depending on your browser, you receive a message that asks whether to open or save the file.

  3. Save the file to a location on Azure Backup Server. When you save the file, add the .zip file name extension. The file is a .zip archive that contains the certificate information; with the .zip extension, you can use the extraction tools.
  4. Right-click download.zip, and then select Extract All to extract the contents. The CRL file has an extension that begins with a sequence like .r0 or .r1. The CRL file is associated with a certificate.
  5. In the certs folder, right-click the root certificate file, and then click Rename. Change the root certificate's extension to .crt. When you're asked if you're sure you want to change the extension, click Yes or OK. Right-click the root certificate and, from the pop-up menu, select Install Certificate. The Certificate Import Wizard dialog box appears.
  6. In the Certificate Import Wizard dialog box, select Local Machine as the destination for the certificate, and then click Next to continue.

If you're asked if you want to allow changes to the computer, click Yes or OK to allow the changes.

  7. On the Certificate Store page, select Place all certificates in the following store, and then click Browse to choose the certificate store.

The Select Certificate Store dialog box appears.

  8. Select Trusted Root Certification Authorities as the destination folder for the certificates, and then click OK. The Trusted Root Certification Authorities folder is confirmed as the certificate store. Click Next.
  9. On the Completing the Certificate Import Wizard page, verify that the certificate is in the desired folder, and then click Finish.
  10. Sign in to the vCenter Server to confirm that your connection is secure.
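
If you prefer to script the certificate import, the PKI module's Import-Certificate cmdlet can place the renamed root certificate straight into the Local Machine trusted root store. A minimal PowerShell sketch, assuming the extracted certificate was saved to the hypothetical path C:\certs\vcenter-root.crt:

# Run from an elevated PowerShell prompt on Azure Backup Server.
# Import the renamed root certificate into the trusted root store.
Import-Certificate -FilePath "C:\certs\vcenter-root.crt" -CertStoreLocation Cert:\LocalMachine\Root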

If you have secure boundaries within your organization and don't want to turn on the HTTPS protocol, use the following procedure to disable secure communication.

Step 2: Disable secure communication protocol

If your organization doesn't require the HTTPS protocol, use the following steps to disable HTTPS. To override the default behavior, create a registry entry that tells Azure Backup Server to ignore certificate validation.

  1. Copy and paste the following text into a .txt file.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Data Protection Manager\VMWare]

"IgnoreCertificateValidation"=dword:00000001

  2. Save the file to your Azure Backup Server computer. For the file name, use DisableSecureAuthentication.reg.
  3. Double-click the file to activate the registry entry.
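
If you would rather not distribute a .reg file, the same value can be created from an elevated PowerShell prompt; a minimal sketch using the key path and value from the .reg text above:

# Create the key if it does not exist, then add the DWORD value that
# tells Azure Backup Server to skip certificate validation for VMware.
$key = "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\VMWare"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "IgnoreCertificateValidation" -Value 1 -PropertyType DWord -Force | Out-Null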

Step 3: Create a role and user account on the vCenter Server

To establish the necessary user credentials to back up the vCenter Server workloads, create a role with specific privileges, and then associate the user account with the role.

Azure Backup Server uses a username and password to authenticate with the vCenter Server. Azure Backup Server uses these credentials as authentication for all backup operations.

To add a vCenter Server role and its privileges for a backup administrator:

  1. Sign in to the vCenter Server, and then in the vCenter Server Navigator panel, click Administration.
  2. In Administration, select Roles, and then in the Roles panel click the add role icon (the + symbol). The Create Role dialog box appears.
  3. In the Create Role dialog box, in the Role name box, enter BackupAdminRole. The role name can be whatever you like, but it should be recognizable for the role's purpose.
  4. Select the privileges for the appropriate version of vCenter, and then click OK. The following table identifies the required privileges for vCenter 6.0 and vCenter 5.5.

When you select the privileges, click the icon next to the parent label to expand the parent and view the child privileges. To select the VirtualMachine privileges, you need to go several levels into the parent-child hierarchy. You don't need to select all child privileges within a parent privilege. After you click OK, the new role appears in the list on the Roles panel.

| Privileges for vCenter 6.0 | Privileges for vCenter 5.5 |
| --- | --- |
| Datastore.AllocateSpace | Datastore.AllocateSpace |
| Global.ManageCustomFields | Global.ManageCustomerFields |
| Global.SetCustomFields | |
| Host.Local.CreateVM | Network.Assign |
| Network.Assign | |
| Resource.AssignVMToPool | |
| VirtualMachine.Config.AddNewDisk | VirtualMachine.Config.AddNewDisk |
| VirtualMachine.Config.AdvanceConfig | VirtualMachine.Config.AdvancedConfig |
| VirtualMachine.Config.ChangeTracking | VirtualMachine.Config.ChangeTracking |
| VirtualMachine.Config.HostUSBDevice | |
| VirtualMachine.Config.QueryUnownedFiles | |
| VirtualMachine.Config.SwapPlacement | VirtualMachine.Config.SwapPlacement |
| VirtualMachine.Interact.PowerOff | VirtualMachine.Interact.PowerOff |
| VirtualMachine.Inventory.Create | VirtualMachine.Inventory.Create |
| VirtualMachine.Provisioning.DiskRandomAccess | |
| VirtualMachine.Provisioning.DiskRandomRead | VirtualMachine.Provisioning.DiskRandomRead |
| VirtualMachine.State.CreateSnapshot | VirtualMachine.State.CreateSnapshot |
| VirtualMachine.State.RemoveSnapshot | VirtualMachine.State.RemoveSnapshot |

Step 4: Create a vCenter Server user account and permissions

After the role with privileges is set up, create a user account. The user account has a name and password, which provide the credentials that are used for authentication.

  1. To create a user account, in the vCenter Server Navigator panel, click Users and Groups. The vCenter Users and Groups panel appears.
  2. In the vCenter Users and Groups panel, select the Users tab, and then click the add users icon (the + symbol). The New User dialog box appears.
  3. In the New User dialog box, add the user's information and then click OK. In this procedure, the username is BackupAdmin. The new user account appears in the list.
  4. To associate the user account with the role, in the Navigator panel, click Global Permissions. In the Global Permissions panel, select the Manage tab, and then click the add icon (the + symbol). The Global Permissions Root – Add Permission dialog box appears.
  5. In the Global Permission Root – Add Permission dialog box, click Add to choose the user or group. The Select Users/Groups dialog box appears.
  6. In the Select Users/Groups dialog box, choose BackupAdmin and then click Add. In Users, the domain\username format is used for the user account. If you want to use a different domain, choose it from the Domain drop-down list. Click OK to add the selected users to the Add Permission dialog box.
  7. Now that you've identified the user, assign the user to the role. In Assigned Role, from the drop-down list, select BackupAdminRole, and then click OK. On the Manage tab in the Global Permissions panel, the new user account and the associated role appear in the list.

Step 5: Establish vCenter Server credentials on Azure Backup Server

  1. To open Azure Backup Server, double-click the icon on the Azure Backup Server desktop.
  2. In the Azure Backup Server console, click Management, click Production Servers, and then on the tool ribbon, click Manage VMware. The Manage Credentials dialog box appears.
  3. In the Manage Credentials dialog box, click Add to open the Add Credential dialog box.
  4. In the Add Credential dialog box, enter a name and a description for the new credential, and then specify the username and password. The name Contoso Vcenter credential is used to identify the credential in the next procedure. Use the same username and password that are used for the vCenter Server. If the vCenter Server and Azure Backup Server are not in the same domain, specify the domain in User name.

Click Add to add the new credential to Azure Backup Server. The new credential appears in the list in the Manage Credentials dialog box.

  5. To close the Manage Credentials dialog box, click the X in the upper-right corner.

Step 6: Add the vCenter Server to Azure Backup Server

The Production Server Addition Wizard is used to add the vCenter Server to Azure Backup Server. To open the wizard, complete the following procedure:

  1. In the Azure Backup Server console, click Management, click Production Servers, and then click Add. The Production Server Addition Wizard dialog box appears.
  2. On the Select Production Server type page, select VMware Servers, and then click Next.
  3. In Server Name/IP Address, specify the fully qualified domain name (FQDN) or IP address of the VMware server. If all the ESXi servers are managed by the same vCenter, you can use the vCenter name.
  4. In SSL Port, enter the port that is used to communicate with the VMware server. Use port 443, which is the default, unless you know that a different port is required.
  5. In Specify Credential, select the credential that you created earlier.
  6. Click Add to add the VMware server to the list of Added VMware Servers, and then click Next to move to the next page in the wizard.
  7. On the Summary page, click Add to add the specified VMware server to Azure Backup Server. The VMware server backup is an agentless backup, and the new server is added immediately. The Finish page shows you the results.

After you add the vCenter Server to Azure Backup Server, the next step is to create a protection group. The protection group specifies the various details for short-term or long-term retention, and it is where you define and apply the backup policy. The backup policy is the schedule for when backups occur and what is backed up.

Step 7: Configure a protection group

After you check that you have proper storage, use the Create New Protection Group wizard to add VMware virtual machines.

  1. In the Azure Backup Server console, click Protection, and in the tool ribbon, click New to open the Create New Protection Group wizard.

The Create New Protection Group wizard dialog box appears. Click Next to advance to the Select protection group type page.

  2. On the Select protection group type page, select Servers and then click Next. The Select group members page appears.
  3. On the Select group members page, the available members and the selected members appear. Select the members that you want to protect, and then click Next.

When you select a member, if you select a folder that contains other folders or VMs, those folders and VMs are also selected. The inclusion of the folders and VMs in the parent folder is called folder-level protection. To remove a folder or VM, clear the check box.

  4. On the Select Data Protection Method page, enter a name for the protection group. Short-term protection (to disk) and online protection are selected. If you want to use online protection (to Azure), you must use short-term protection to disk. Click Next to proceed to the short-term protection range.
  5. On the Specify Short-Term Goals page, for Retention Range, specify the number of days that you want to retain recovery points that are stored to disk. If you want to change the time and days when recovery points are taken, click Modify. The short-term recovery points are full backups, not incremental backups. When you are satisfied with the short-term goals, click Next.
  6. On the Review Disk Allocation page, review and, if necessary, modify the disk space for the VMs. The recommended disk allocations are based on the retention range that is specified on the Specify Short-Term Goals page, the type of workload, and the size of the protected data (identified in step 3).
    • Data size: Size of the data in the protection group.
    • Disk space: The recommended amount of disk space for the protection group. If you want to modify this setting, allocate total space that is slightly larger than the amount that you estimate each data source will grow to.
    • Colocate data: If you turn on colocation, multiple data sources in the protection group can map to a single replica and recovery point volume. Colocation isn't supported for all workloads.
    • Automatically grow: If you turn on this setting and data in the protected group outgrows the initial allocation, System Center Data Protection Manager tries to increase the disk size by 25 percent.
    • Storage pool details: Shows the status of the storage pool, including total and remaining disk size.

When you are satisfied with the space allocation, click Next.

  7. On the Choose Replica Creation Method page, specify how you want to generate the initial copy, or replica, of the protected data on Azure Backup Server.

The default is Automatically over the network and Now. If you use the default, we recommend that you choose Later and specify an off-peak day and time. For large amounts of data or less-than-optimal network conditions, consider replicating the data offline by using removable media. After you have made your choices, click Next.

  8. On the Consistency Check Options page, select how and when to automate the consistency checks. You can run consistency checks when replica data becomes inconsistent, or on a set schedule. If you don't want to configure automatic consistency checks, you can run a manual check: in the protection area of the Azure Backup Server console, right-click the protection group and then select Perform Consistency Check. Click Next to move to the next page.
  9. On the Specify Online Protection Data page, select one or more data sources that you want to protect. You can select the members individually, or click Select All to choose all members. After you choose the members, click Next.
  10. On the Specify Online Backup Schedule page, specify the schedule to generate recovery points from the disk backup. After a recovery point is generated, it is transferred to the Recovery Services vault in Azure. When you are satisfied with the online backup schedule, click Next.
  11. On the Specify Online Retention Policy page, indicate how long you want to retain the backup data in Azure. After the policy is defined, click Next.
  12. On the Summary page, review the details for your protection group members and settings, and then click Create Group.

Now you are ready to back up VMware VMs using Azure Backup Server v2.

Azure Backup Server v2

Azure Backup is used for backups and DR, and it works with managed disks as well as unmanaged disks. You can create a backup job with time-based backups, easy VM restoration, and backup retention policies.

Azure Backup for VMware

The following table summarizes the solutions available for DR.

| Scenario | Automatic replication | DR solution |
| --- | --- | --- |
| Premium SSD disks, managed disks | Local (locally redundant storage), cross region (read-access geo-redundant storage) | Azure Backup, Azure Backup Server |
| Unmanaged LRS and GRS disks | Local (locally redundant storage), cross region (geo-redundant storage) | Azure Backup, Azure Backup Server |

This article illustrates how to use Azure Backup Server v2 to back up on-premises and Azure workloads. Though Azure Backup Server shares much of the same functionality as DPM, it does not back up to tape, nor does it integrate with System Center. Azure Backup Server is a dedicated role; do not run any other application or role alongside Azure Backup Server.

You can deploy Azure Backup Server from the Azure Marketplace or on an on-premises server. To deploy Azure Backup Server on on-premises infrastructure, you need one of the following operating systems.

| Operating System | Platform | SKU |
| --- | --- | --- |
| Windows Server 2016 and latest SPs | 64-bit | Standard, Datacenter |
| Windows Server 2012/R2 and latest SPs | 64-bit | Standard, Datacenter |

Microsoft recommends that you start with a gallery image of Windows Server 2012 R2 Datacenter or Windows Server 2016 Datacenter to create an Azure Backup Server. Here are the steps you need to go through to deploy Azure Backup Server.

Step 1: Install Windows Virtual Machine from the Marketplace

  1. Sign in to the Azure portal at https://portal.azure.com.
  2. Choose Create a resource in the upper left-hand corner of the Azure portal.
  3. In the search box above the list of Azure Marketplace resources, search for and select Windows Server 2016 Datacenter, then choose Create.
  4. Provide a VM name, such as myVM, leave the disk type as SSD, then provide a username, such as azureuser. The password must be at least 12 characters long and meet the defined complexity requirements.
  5. Choose to Create new resource group, then provide a name, such as myResourceGroup. Choose your Location, then select OK.
  6. Select a size for the VM. You can filter by Compute type or Disk type, for example. A suggested VM size is D2s_v3. Click Select after you have chosen a size.
  7. On the Settings page, in Network > Network Security Group > Select public inbound ports, select HTTP and RDP (3389) from the drop-down. Leave the rest of the defaults and select OK.
  8. On the summary page, select Create to start the VM deployment.
  9. The VM is pinned to the Azure portal dashboard. Once the deployment has completed, the VM summary automatically opens.
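
If you prefer scripting the deployment, the Az PowerShell module's simplified New-AzVM parameter set can create a comparable VM. A minimal sketch, reusing the example names myResourceGroup and myVM from the steps above (the location and size are assumptions):

# Prompt for the local administrator credential for the new VM
$cred = Get-Credential

# Create the resource group, then a Windows Server 2016 VM with HTTP and RDP open
New-AzResourceGroup -Name "myResourceGroup" -Location "EastUS"
New-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" -Location "EastUS" `
    -Image "Win2016Datacenter" -Size "Standard_D2s_v3" -Credential $cred `
    -OpenPorts 80,3389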

Step 2: Create a Recovery Services Vault

  1. Sign in to your subscription in the Azure portal.
  2. In the left-hand menu, select All services.
  3. In the All services dialog, type Recovery Services. As you begin typing, your input filters the list of resources. Once you see it, select Recovery Services vaults.
  4. On the Recovery Services vaults menu, select Add. The Recovery Services vault menu opens and prompts you to provide information for Name, Subscription, Resource group, and Location.
  5. When you are ready to create the Recovery Services vault, click Create.
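
The vault can also be created with the Az.RecoveryServices PowerShell module; a minimal sketch, with myVault as a hypothetical vault name:

# Create a Recovery Services vault in the existing resource group
New-AzRecoveryServicesVault -Name "myVault" -ResourceGroupName "myResourceGroup" -Location "EastUS"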

Step 3: Select Appropriate Storage Type

  1. Select your vault to open the vault dashboard and the Settings menu. If the Settings menu doesn't open, click All settings in the vault dashboard.
  2. On the Settings menu, click Backup Infrastructure > Backup Configuration to open the Backup Configuration menu. On the Backup Configuration menu, choose the storage replication option for your vault.
  3. Select LRS or GRS type storage.
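
Storage redundancy can likewise be set from PowerShell, which is useful because it should be chosen before any items are protected; a sketch using the hypothetical vault from the previous step:

# Set the vault's backup storage redundancy (LocallyRedundant or GeoRedundant)
$vault = Get-AzRecoveryServicesVault -Name "myVault" -ResourceGroupName "myResourceGroup"
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy GeoRedundant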

Step 4: Download Backup Software

  1. Sign in to the Azure portal.
  2. Click Browse. In the list of resources, type Recovery Services.
  3. As you begin typing, the list filters based on your input. When you see Recovery Services vaults, select it.
  4. From the list of Recovery Services vaults, select a vault.
  5. The Settings blade opens by default. If it is closed, click Settings to open the settings blade.
  6. Click Backup to open the Getting Started wizard. In the Getting Started with backup blade that opens, Backup Goals is auto-selected.
  7. In the Backup Goal blade, from the Where is your workload running? menu, select On-premises.
  8. From the What do you want to backup? drop-down menu, select the workloads you want to protect using Azure Backup Server, and then click OK.
  9. In the Prepare infrastructure blade that opens, click the Download links for Install Azure Backup Server and Download vault credentials. You use the vault credentials during registration of Azure Backup Server to the Recovery Services vault. The links take you to the Download Center, where the software package can be downloaded.
  10. Select all the files and click Next. Download all the files from the Microsoft Azure Backup download page, and place all the files in the same folder.

Step 5: Extract Software Package

After you've downloaded all the files, click MicrosoftAzureBackupInstaller.exe. This starts the Microsoft Azure Backup Setup Wizard, which extracts the setup files to a location that you specify. Continue through the wizard and click the Extract button to begin the extraction process.

Step 6: Install Software Package

  1. Click Microsoft Azure Backup to launch the setup wizard.
  2. On the Welcome screen, click Next. This takes you to the Prerequisite Checks section. On this screen, click Check to determine whether the hardware and software prerequisites for Azure Backup Server have been met. If all prerequisites are met, you will see a message indicating that the machine meets the requirements. Click Next.
  3. Microsoft Azure Backup Server requires SQL Server Standard, and the Azure Backup Server installation package comes bundled with the appropriate SQL Server binaries if you do not wish to use your own SQL. When starting with a new Azure Backup Server installation, pick the option Install new Instance of SQL Server with this Setup and click Check and Install. Once the prerequisites are successfully installed, click Next.
  4. Provide a location for the installation of Microsoft Azure Backup Server files and click Next.
  5. Provide a strong password for restricted local user accounts and click Next.
  6. Select whether you want to use Microsoft Update to check for updates, and click Next.
  7. Review the Summary of Settings and click Install.
  8. The installation happens in phases. In the first phase, the Microsoft Azure Recovery Services Agent is installed on the server. The wizard also checks for Internet connectivity. If Internet connectivity is available, you can proceed with the installation; if not, you need to provide proxy details to connect to the Internet.
  9. Once registration of the Microsoft Azure Backup Server successfully completes, the overall setup wizard proceeds to the installation and configuration of SQL Server and the Azure Backup Server components. Once the SQL Server component installation completes, the Azure Backup Server components are installed.
  10. When the installation step has completed, the product's desktop icons will have been created as well. Just double-click the icon to launch the product.

Step 7: Add a Data Disk to Azure Backup Server

  1. Log on to the Azure portal. In the menu on the left, click Virtual Machines.
  2. Select the virtual machine from the list.
  3. On the virtual machine page, click Disks.
  4. On the Disks page, click + Add data disk.
  5. In the drop-down for the new disk, select Create disk.
  6. On the Create managed disk page, type in a name for the disk and adjust the other settings as necessary. When you are done, click Create.
  7. On the Disks page, click Save to save the new disk configuration for the VM.
  8. After Azure creates the disk and attaches it to the virtual machine, the new disk is listed in the virtual machine's disk settings under Data disks.
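
The disk can also be attached with Az PowerShell; a minimal sketch, reusing the hypothetical myResourceGroup and myVM names (the disk name, size, and LUN are assumptions):

# Attach a new empty 1 TB managed data disk to the VM
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
Add-AzVMDataDisk -VM $vm -Name "myDataDisk" -CreateOption Empty -DiskSizeInGB 1024 -Lun 0
Update-AzVM -ResourceGroupName "myResourceGroup" -VM $vm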

Step 8: Initialise the Disk of the Azure Backup Server

  1. Connect to the VM.
  2. Click the Start menu inside the VM, type diskmgmt.msc, and press Enter. The Disk Management snap-in opens.
  3. Disk Management recognizes that you have a new, uninitialised disk, and the Initialize Disk window pops up.
  4. Make sure the new disk is selected and click OK to initialise it.
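
Inside the VM, the Storage module can initialise and format the new disk in one pipeline; a sketch that assumes the new disk is the only RAW disk on the server and that drive letter F is free (both assumptions):

# Initialise the first RAW disk with GPT, create a partition, and format it
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "BackupDisk"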

Step 9: Create a Storage Pool for Azure Backup Server

  1. Log on to your server as a user with administrative privileges, launch Server Manager, and navigate to the Storage Pools page within the File and Storage Services role. Click the Refresh button to refresh the UI.
  2. Right-click the Primordial pool (listed under Available Disks) for the Storage Spaces subsystem, or use the TASKS list, to launch the New Storage Pool Wizard.
  3. In the New Storage Pool Wizard, enter the desired pool name and an optional description. Make sure that you have selected the Primordial pool for the Storage Spaces subsystem.
  4. Select the number of disks needed for pool creation. If you want to designate a physical disk as a hot spare, select the "Hot Spare" allocation type.
  5. Confirm the selected settings and initiate pool creation by selecting "Create" on the "Confirm selections" page.
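
An equivalent pool can be created with the Storage module; a minimal sketch that pools every eligible disk into a hypothetical pool named BackupPool:

# Create a storage pool from all disks that are eligible for pooling
New-StoragePool -FriendlyName "BackupPool" -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)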

Step 10: Create a Virtual Disk

  1. Right-click the concrete pool that you just created (the pool whose type value is Storage Pool), and then launch the New Virtual Disk Wizard.
  2. In the New Virtual Disk Wizard, make sure that you have selected the appropriate pool. Enter the desired virtual disk name and an optional description.
  3. Select the desired storage layout and provisioning scheme as per your storage requirements.
  4. On the "Specify the size of the virtual disk" page, enter the desired size for the new virtual disk or pick the "Maximum size" option.
  • If you pick the "Maximum size" option, the system will try to create the largest virtual disk possible.
  • If you select the check box for "Create the largest virtual disk possible, up to the specified size" while specifying the size, the system will try to create the largest virtual disk possible up to the requested size.
  • Note that the value shown as the storage pool free space (in our example, 43.8 GB) is the actual free allocation the pool has overall. For fixed provisioning of a non-simple storage layout such as Mirror or Parity, when defining the size of the virtual disk, you have to take into account the overhead of storage needed to create the extra copies of the virtual disk's extents for resiliency. As a basic example, with 43.8 GB of free space in the pool, creating a 30 GB mirrored virtual disk is not possible, since it would take at least 60 GB of free space in the pool to hold the two copies of the mirrored data.
  5. Confirm the settings and initiate virtual disk creation by selecting "Create" on the "Confirm selections" page.
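
The wizard settings map directly onto New-VirtualDisk; a sketch that creates a fixed, mirrored virtual disk in the hypothetical BackupPool. As discussed above, a 30 GB two-way mirror consumes roughly 60 GB of pool capacity:

# Create a 30 GB fixed two-way mirror (consumes ~60 GB from the pool)
New-VirtualDisk -StoragePoolFriendlyName "BackupPool" -FriendlyName "BackupVDisk" `
    -ResiliencySettingName Mirror -Size 30GB -ProvisioningType Fixed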

Step 11: Add Disk to Azure Backup Server

  1. Launch Azure Backup Server, locate Disk Storage, and click Add. Select the disk that you want to add.
  2. Once added, the disk is formatted with the ReFS file system, and the storage is available to Azure Backup Server.

Now you are ready to use Azure Backup Server. In my next blog, I will explain how to back up VMware VMs using Azure Backup Server.

Microsoft Software Defined Storage AKA Scale-out File Server (SOFS)

Business Challenges:

  • $/IOPS and $/TB
  • Continuous Availability
  • Fault Tolerance
  • Storage Performance
  • Segregation of production, development and disaster recovery storage
  • De-duplication of unstructured data
  • Segregation of data between production site and disaster recovery site
  • Continuous break-fix of Distributed File System (DFS) and file servers
  • Continuously extending storage on the DFS servers
  • Single point of failure
  • File systems are not always available
  • Security of file systems is a constant concern
  • Proprietary, non-scalable storage
  • Management of physical storage
  • Vendor lock-in contract for physical storage
  • Migration path from single vendor to multi vendor storage provider
  • Management overhead of unstructured data
  • Comprehensive management of storage platform

Solutions:

Microsoft Software Defined Storage, AKA Scale-Out File Server, is a feature that is designed to provide scale-out file shares that are continuously available for file-based server application storage. Scale-out file shares provide the ability to share the same folder from multiple nodes of the same cluster. The following table compares Microsoft Software Defined Storage offerings with third-party offerings:

| Storage feature | Third-party NAS/SAN | Microsoft Software-Defined Storage |
| --- | --- | --- |
| Fabric | Block protocol | File protocol |
| Network | Low latency network with FC | Low latency with SMB3 Direct |
| Management | Management of LUNs | Management of file shares |
| Data de-duplication | Data de-duplication | Data de-duplication |
| Resiliency | RAID resiliency groups | Flexible resiliency options |
| Pooling | Pooling of disks | Pooling of disks |
| Availability | High | Continuous (via redundancy) |
| Copy offload, snapshots | Copy offloads, snapshots | SMB copy offload, snapshots |
| Tiering | Storage tiering | Performance with tiering |
| Persistent write-back cache | Persistent write-back cache | Persistent write-back cache |
| Scale up | Scale up | Automatic scale-out rebalancing |
| Storage Quality of Service (QoS) | Storage QoS | Storage QoS (Windows Server 2016) |
| Replication | Replication | Storage Replica (Windows Server 2016) |
| Updates | Firmware updates | Rolling cluster upgrades (Windows Server 2016) |
| | | Storage Spaces Direct (Windows Server 2016) |
| | | Azure-consistent storage (Windows Server 2016) |

Functional use of Microsoft Scale-Out File Servers:

1. Application Workloads

  • Microsoft Hyper-V Cluster
  • Microsoft SQL Server Cluster
  • Microsoft SharePoint
  • Microsoft Exchange Server
  • Microsoft Dynamics
  • Microsoft System Center DPM Storage Target
  • Veeam Backup Repository

2. Disaster Recovery Solution

  • Backup Target
  • Object storage
  • Encrypted storage target
  • Hyper-V Replica
  • System Center DPM

3. Unstructured Data

  • Continuously Available File Shares
  • DFS Namespace folder target server
  • Microsoft Data de-duplication
  • Roaming user Profiles
  • Home Directories
  • Citrix User Profiles
  • Outlook Cached location for Citrix XenApp Session Server

4. Management

  • Single Management Point for all Scale-out File Servers
  • Provide wizard driven tools for storage related tasks
  • Integrated with Microsoft System Center

Business Values:

  • Scalability
  • Load balancing
  • Fault tolerance
  • Ease of installation
  • Ease of management/operations
  • Flexibility
  • Security
  • High performance
  • Compliance & Certification

SOFS Architecture:

Microsoft Scale-out File Server (SOFS) is considered a Software Defined Storage (SDS) solution. Microsoft SOFS is independent of the hardware vendor, as long as the compute and storage are certified by Microsoft Corporation. The following figures show a Microsoft Hyper-V cluster, SQL cluster, and object storage on SOFS.

Figure: Microsoft Software Defined Storage (SDS) Architecture

Figure: Microsoft Scale-out File Server (SOFS) Architecture

Figure: Microsoft SDS Components

Figure: Unified Storage Management (See Reference)

Microsoft Software Defined Storage AKA Scale-out File Server Benefits:

SOFS:

  • Continuous availability file stores for Hyper-V and SQL Server
  • Load-balanced IO across all nodes
  • Distributed access across all nodes
  • VSS support
  • Transparent failover and client redirection
  • Continuous availability at a share level versus a server level

De-duplication:

  • Identifies duplicate chunks of data and only stores one copy
  • Provides up to 90% reduction in storage required for OS VHD files
  • Reduces CPU and Memory pressure
  • Offers excellent reliability and integrity
  • Outperforms Single Instance Storage (SIS) or NTFS compression.

SMB Multichannel:

  • Automatic detection of SMB Multi-Path networks
  • Resilience against path failures
  • Transparent failover with recovery
  • Improved throughput
  • Automatic configuration with little administrative overhead

SMB Direct:

  • The Microsoft implementation of RDMA.
  • The ability to direct data transfers from a storage location to an application.
  • Higher performance and lower latency through CPU offloading
  • High-speed network utilization (including InfiniBand and iWARP)
  • Remote storage at the speed of local storage
  • A transfer rate of approximately 50Gbps on a single NIC port
  • Compatibility with SMB Multichannel for load balancing and failover

VHDX Virtual Disk:

  • Online VHDX Resize
  • Storage QoS (Quality of Service)

Live Migration:

  • Easy migration of virtual machine into a cluster while the virtual machine is running
  • Improved virtual machine mobility
  • Flexible placement of virtual machine storage based on demand
  • Migration of virtual machine storage to shared storage without downtime

Storage Protocol:

  • SAN discovery (FCP, SAS, iSCSI, e.g. EMC VNX, EMC VMAX)
  • NAS discovery (self-contained NAS, NAS head, e.g. NetApp ONTAP)
  • File server discovery (Microsoft Scale-Out File Server, Unified Storage)

Unified Management:

  • A new architecture provides ~10x faster disk/partition enumeration operations
  • Remote and cluster-awareness capabilities
  • SM-API exposes new Windows Server 2012 R2 features (Tiering, Write-back cache, and so on)
  • SM-API features added to System Center VMM
  • End-to-end storage high availability space provisioning in minutes in VMM console
  • More Windows PowerShell

ReFS:

  • More resilience to power failures
  • Highest levels of system availability
  • Larger volumes with better durability
  • Scalable to petabyte size volumes

Storage Replica:

  • Hardware agnostic storage configuration
  • Provide a DR solution for planned and unplanned outages of mission critical workloads.
  • Use SMB3 transport with proven reliability, scalability, and performance.
  • Stretched failover clusters within metropolitan distances.
  • Manage end to end storage and clustering for Hyper-V, Storage Replica, Storage Spaces, Scale-Out File Server, SMB3, Deduplication, and ReFS/NTFS using Microsoft software
  • Reduce downtime, and increase reliability and productivity intrinsic to Windows.

Cloud Integration:

  • Cloud-based storage service for online backups
  • Windows PowerShell instrumented
  • Simple, reliable Disaster Recovery solution for applications and data
  • Supports System Center 2012 R2 DPM

Implementing Scale-out File Server

Scale-out File Server Recommended Configuration:

  1. Gather all virtual servers' IOPS requirements*
  2. Gather applications' IOPS requirements
  3. Total IOPS of all applications and virtual machines must be less than the available IOPS of the physical storage
  4. Keep latency below 3 ms at all times for high performance
  5. Gather required capacity + potential growth + best practice
  6. N+1 compute, network, and storage hardware
  7. Use low latency, high throughput networks
  8. Segregate the storage network from the data network using a logical network (VLAN) or Fibre Channel
  9. Use the workload analysis and performance tools listed later in this article

*Not all virtual servers are the same: a DHCP server generates few IOPS, while SQL Server and Exchange can generate thousands of IOPS.

*Do not place SQL Server on the same logical volume (LUN) as Exchange Server, Microsoft Dynamics, or a backup server.

*Isolate high-IO workloads to a separate logical volume, or even a separate storage pool if possible.

Prerequisites for Scale-Out File Server

  1. Install the File and Storage Services server role and the Failover Clustering feature on the cluster nodes.
  2. Configure Microsoft failover clusters using this article: Windows Server 2012: Failover Clustering Deep Dive Part II.
  3. Add a Cluster Shared Volume (a PowerShell sketch follows this list):
  • Log on to the server as a member of the local Administrators group.
  • Open Server Manager, click Tools, and then click Failover Cluster Manager.
  • Click Storage, right-click the disk that you want to add to the cluster shared volumes, and then click Add to Cluster Shared Volumes > Add Storage Presented to this cluster.
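
The same prerequisites can be staged from PowerShell on each node; a minimal sketch (the disk name Cluster Disk 1 is whatever Failover Cluster Manager reports for your disk):

# Install the File Server role service and Failover Clustering, then add a CSV
Install-WindowsFeature -Name FS-FileServer, Failover-Clustering -IncludeManagementTools
Add-ClusterSharedVolume -Name "Cluster Disk 1"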

Configure Scale-out File Server

  1. Open Failover Cluster Manager, right-click the name of the cluster, and then click Configure Role.
  2. On the Before You Begin page, click Next.
  3. On the Select Role page, click File Server, and then click Next.
  4. On the File Server Type page, select the Scale-Out File Server for application data option, and then click Next.
  5. On the Client Access Point page, in the Name box, type the NetBIOS name of the Scale-Out File Server, and then click Next.
  6. On the Confirmation page, confirm your settings, and then click Next.
  7. On the Summary page, click Finish.
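
The same role can be created in one line with the FailoverClusters module; SOFS here is the hypothetical client access point name used in the examples that follow:

# Create the Scale-Out File Server clustered role
Add-ClusterScaleOutFileServerRole -Name "SOFS"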

Create Continuously Available File Share

  1. Open Failover Cluster Manager, expand the cluster, and then click Roles.
  2. Right-click the file server role, and then click Add File Share.
  3. On the Select the profile for this share page, click SMB Share – Applications, and then click Next.
  4. On the Select the server and path for this share page, click the name of the cluster shared volume, and then click Next.
  5. On the Specify share name page, in the Share name box, type a name for the file share, and then click Next.
  6. On the Configure share settings page, ensure that the Continuously Available check box is selected, and then click Next.
  7. On the Specify permissions to control access page, click Customize permissions, grant the following permissions, and then click Next:
  • To use the Scale-Out File Server file share for Hyper-V: all Hyper-V computer accounts, the SYSTEM account, the cluster computer account for any Hyper-V clusters, and all Hyper-V administrators must be granted full control on the share and the file system.
  • To use the Scale-Out File Server for Microsoft SQL Server: the SQL Server service account must be granted full control on the share and the file system.

  8. On the Confirm selections page, click Create. On the View results page, click Close.
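
A continuously available share can also be created with the SmbShare module; a sketch with a hypothetical CSV path, share name, and accounts, granting full access as described above:

# Create a continuously available SMB share for Hyper-V on a CSV path
New-SmbShare -Name "VHDShare" -Path "C:\ClusterStorage\Volume1\VHD" `
    -ContinuouslyAvailable $true -FullAccess "CONTOSO\HyperVHost1$", "CONTOSO\HyperVAdmins"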

Use SOFS for Hyper-V Server VHDX Store:

  1. Open Hyper-V Manager: click Start, and then click Hyper-V Manager.
  2. Open Hyper-V Settings > Virtual Hard Disks, and specify the location of the store as \\SOFS\VHDShare\. Then specify the location of the virtual machine configuration as \\SOFS\VHDCShare.
  3. Click OK.
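
The same host defaults can be set with the Hyper-V PowerShell module; a sketch using the share paths above:

# Point the host's default VHD and VM configuration paths at the SOFS shares
Set-VMHost -VirtualHardDiskPath "\\SOFS\VHDShare" -VirtualMachinePath "\\SOFS\VHDCShare"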

Use SOFS in System Center VMM: 

  1. Add Windows File Server in VMM
  2. Assign SOFS Share to Fabric & Hosts

Use SOFS for SQL Database Store:

1. Assign the SQL Service Account full permission to the SOFS Share

  • Open Windows Explorer and navigate to the scale-out file share.
  • Right-click the folder, and then click Properties.
  • Click the Sharing tab, click Advanced Sharing, and then click Permissions.
  • Ensure that the SQL Server service account has full-control permissions.
  • Click OK twice.
  • Click the Security tab. Ensure that the SQL Server service account has full-control permissions.

2. In SQL Server 2012, you can choose to store all database files in a scale-out file share during installation.  

3. In step 20 of the SQL Setup Wizard, provide a location on the Scale-Out File Server, such as \\SOFS\SQLData and \\SOFS\SQLLogs

4. Alternatively, create a database on the SOFS share on an existing SQL Server using a SQL script:

CREATE DATABASE [TestDB]
ON PRIMARY
( NAME = N'TestDB', FILENAME = N'\\SOFS\SQLDB\TestDB.mdf' )
LOG ON
( NAME = N'TestDBLog', FILENAME = N'\\SOFS\SQLDBLog\TestDBLogs.ldf' )
GO

Use Backup & Recovery:

System Center Data Protection Manager 2012 R2

Configure and add a dedupe storage target into DPM 2012 R2. DPM 2012 R2 will not back up SOFS itself, but it will back up VHDX files stored on SOFS. Follow the Deduplicate DPM storage and protection for virtual machines with SMB storage guide to back up virtual machines.

Veeam Availability Suite

  1. Log on to the Veeam Availability Console, click Backup Repository, and right-click New Backup Repository.
  2. Select Shared Folder on the Type tab.
  3. Add the SMB backup target \\SOFS\Repository.
  4. Follow the wizard. Make sure the Veeam service account has full access permission to the \\SOFS\Repository share.
  5. Click Scale-out Repositories, right-click Add Scale-out backup repository, and type the name.
  6. Select the backup repository you created previously, and follow the wizard to complete the tasks.

References:

Microsoft Storage Architecture

Storage Spaces Physical Disk Validation Script

Validate Hardware

Deploy Clustered Storage Spaces

Storage Spaces Tiering in Windows Server 2012 R2

SMB Transparent Failover

Cluster Shared Volume (CSV) Inside Out

Storage Spaces – Designing for Performance

Related Articles:

Scale-Out File Server Cluster using Azure VMs

Microsoft Multi-Site Failover Cluster for DR & Business Continuity

Understanding Software Defined Storage (SDS)

Software defined storage is an evolution of storage technology in the cloud era. It is a deployment of storage technology without any dependencies on storage hardware. Software defined storage (SDS) eliminates all the traditional aspects of storage, such as managing storage policy, security, provisioning, upgrading, and scaling, without the headache of the hardware layer. Software defined storage (SDS) is a completely software-based product instead of a hardware-based product. A software defined storage solution must have the following characteristics.

Characteristics of SDS

  • Management of complete stack of storage using software
  • Automation-policy driven storage provisioning with SLA
  • Ability to run private, public or hybrid cloud platform
  • Creation of usage metering and billing in a control panel
  • Logical storage services and capabilities eliminating dependence on the underlying physical storage systems
  • Creation of logical storage pool
  • Creation of logical tiering of storage volumes
  • Aggregation of various physical storage into one or multiple logical pools
  • Storage virtualization
  • Thin provisioning of volume from logical pool of storage
  • Scale out storage architecture such as Microsoft Scale out File Servers
  • Virtual volumes (vVols), a proposal from VMware for a more transparent mapping between large volumes and the VM disk images within them
  • Parallel NFS (pNFS), a specific implementation which evolved within the NFS
  • OpenStack APIs for storage interaction which have been applied to open-source projects as well as to vendor products.
  • Independent of underlying storage hardware

A software defined storage solution must not have the following limitations.

  • Glorified hardware which juggles between network and disk, e.g. Dell Compellent
  • Dependent systems between hardware and software e.g. Dell Compellent
  • High latency and low IOPS for production VMs
  • Active-passive management controller
  • Repetitive hardware and software maintenance
  • Administrative and management overhead
  • Cost of retaining hardware and software e.g. life cycle management
  • Factory defined limitation e.g. can’t do situation
  • Production downtime for maintenance work e.g. Dell Compellent maintenance

The following vendors provide various software defined storage products in the current market.

Software Only vendor

  • Atlantis Computing
  • DataCore Software
  • SANBOLIC
  • Nexenta
  • Maxta
  • CloudByte
  • VMware
  • Microsoft

Mainstream Storage vendor

  • EMC ViPR
  • HP StoreVirtual
  • Hitachi
  • IBM SmartCloud Virtual Storage Center
  • NetApp Data ONTAP

Storage Appliance vendor

  • Tintri
  • Nimble
  • Solidfire
  • Nutanix
  • Zadara Storage

Hyper Converged Appliance

  • Cisco (Starting price from $59K for Hyperflex systems+1 year support inclusive)
  • Nutanix
  • VCE (Starting price from $60K for RXRAIL systems+support)
  • Simplivity Corporation
  • Maxta
  • Pivot3 Inc.
  • Scale Computing Inc
  • EMC Corporation
  • VMware Inc

Ultimately, SDS should and will provide businesses with worry-free management of storage without the limitations of hardware. There are compelling use cases of software defined storage for an enterprise to adopt.

Relevant Articles

Dell Compellent: A Poor Man’s SAN

I have been deploying storage area networks for almost ten years of my 18-year Information Technology career. I have deployed various traditional, software defined, and converged SANs manufactured by global vendors like IBM, EMC, NetApp, HP, Dell, etc. I was tasked with the deployment of Dell Compellent in my previous role for several clients. I was excited about the opportunity, then paused after reading the documentation presented to me. I could not correlate the implementation of the SAN with the outcome desired by the customers. When a wild sales pitch with high promises is sold to businesses, there will always be hidden risks that come with it.

  • Lesson number one: never trust someone blindly, even if they have a very decent track record; resellers are often after a quick sale.
  • Lesson number two: make sure you know whom to trust as your partner in the transition to a new SAN.

Decide what to procure based on your business case, TCO, workload analysis, capacity planning, lessons learnt, and the outcome of requirement analysis. Consider the current technology trend, where you are now, your technology roadmap, and where you want to be in the future, e.g. Google Cloud, AWS or Azure. Capital investment can be a one-off exercise these days before you pull the plug on the on-premises infrastructure and fork-lift to Azure, Amazon or Google Cloud. Consider aligning the technology stream with the business you do.

I have written this article to share my experience and disclose everything I learnt through my engagement on Dell Compellent deployment projects, so that you can make the call yourself. I will elaborate on each feature of Dell Compellent and what that feature actually does when you deploy a Compellent.

For the record, I have no beef with Dell. Let’s start now… “Marketing/sales pitch” vs “practical implication.”

Target Market: Small Business

Let's not go into detail; that will be a different topic for another day. Please read Dell's business proposition: "Ideally suited to smaller deployments across a variety of workloads, the SC Series products are easy to use and value optimized. We will continue to optimize the SC Series for value and server-attach."

Management Interface: Dell Compellent Storage Center has a GUI that is allegedly designed for ease of use. Wizards cover a few everyday tasks, such as allocation, configuration, and administration functions. Compellent Storage Center monitoring tools provide very little insight into how the storage backend is doing; you have to engage Dell remote support for diagnostics, and for monitoring tools with alert and notification services. Storage Center is not as granular as the competing NetApp and EMC tools, and it has little information on storage performance, bottlenecks, and backend storage issues. Compellent is by design thin-provisioned storage; there is no option in the management center to assign a thick-provisioned volume. The IOPS and latency figures calculated at the volume level and at the disk level are far different from the real IOPS. You may see little IOPS on a volume, but click through to the drive-level IOPS and you will see the storage controller struggling to cope. The management center provides no clues as to what is generating that much IOPS.

Contact technical support and they will say a RAID scrub is killing your storage. Make the standard request that they stop the RAID scrub during business hours: "You cannot do it" is another classic reply from tech support. If you go through the Compellent management center, you will find nothing that can schedule or stop a RAID scrub.

Data Progression: In theory, Data Progression is an automated tiering technology that should optimise the location of data, both on a schedule and on demand as prompted by a storage profile. Compellent's tiering profiles streamline policy administration by assigning tier attributes based on the profile. On-demand data progression during business hours will drive the Compellent crazy. If Citrix VDI is your mainstream workload, it is pretty much dead until data progression is complete.

A side effect of this technology is that the storage controller struggles to maintain on-demand data progression and IO requests at the same time; hence there will be deeper queue depths and longer seek times in the backend storage. In this situation, storage seek time is higher than normal.

Storage Profile: A storage profile, in layman's terms, segregates expensive and cheap disks and profiles them into tier 1 (SSD RAID 10), tier 2 (15K Fibre Channel RAID 10, RAID 5, RAID 6), and tier 3 (7.2K SATA RAID 5, RAID 6). The storage profile determines how the system reads and writes data to disk for each volume (as they are known in Compellent terms), and how the data ages over time, a feature called Data Progression. For example, a random read request goes to tier 1, where you keep hot data, and a year-old email goes to tier 3.

Storage Profiles are supposed to allow the administrator to manage both writable blocks and Replay blocks for a volume. It is fundamentally tiering of storage in a controlled way; in theory, it stays controlled. In reality, however, it adds extra workload to the Dell Compellent controller. Let's say you have tiered your storage according to your read- and write-intense IO. What happens when a READ- and WRITE-intense volume gets full? The storage controller automatically triggers on-demand data progression from the upper tier to the lower tier to store data. Hence a WRITE-intense IO is generated in the lower tier, which is exactly what you wanted to avoid in the first place; that is why you profiled or tiered your storage. Mixing data progression with storage tiering defeats the whole purpose of storage profiling.

Compellent Replay: A Replay is essentially a storage snapshot in Dell terms. Dell Compellent Data Instant Replay software creates point-in-time copies called Replays, at any time interval, with minimal storage capacity. But here is the catch: you will most likely run storage Replays during the daily backup window. Backup generates lots of READ IOPS, and Replays generate lots of READ and WRITE IOPS at the same time, in the same daily backup window. Hence your backup is going to be dead slow. You will run out of backup window and never finish the backup before business hours. It will be a nightmare to fulfil data retention SLAs and restores of any file systems and sensitive applications.

IOPS & Latency: Input/output operations per second (IOPS) is a measurement unit of any hard disk and storage area network (SAN). This is a key performance metric of a SAN regardless of manufacturer, and this metric remains unchanged. If you are to measure a SAN, this is where you begin. Never think that you have a bunch of virtual machines and it's okay to buy a SAN without IOPS consideration. There is a difference between a virtualised DHCP server and a virtualised SQL server. A DHCP server may generate 20 IOPS, but a SQL server can generate 5000 IOPS, depending on what you are running on that SQL server. Every query you send to a SQL server, or to an application that depends on the SQL server, generates both read and write IOPS. For a Citrix XenApp and XenDesktop customer, take into consideration that every time you launch a VDI session and open a Word document, you generate read IOPS; once you click the save button on a Word document, you generate write IOPS. Now multiply the IOPS of each VDI session by the number of users, the number of applications, the number of VDIs, and the user inputs to estimate your real IOPS.

Now think about latency. In plain English, latency is the number of seconds or milliseconds you wait to retrieve information from a hard disk drive. It is calculated as the round-trip between your request and the hard disk serving your request. Now consider that millions of requests are bombarding the storage area network. A SAN must sustain those requests and serve application requests; again, it depends on what sort of workload you are running on the SAN. For example, file servers, Citrix profiles, Citrix VDI, Exchange Server, and SQL Server need a low latency SAN.

In Dell Compellent, you may see volume IOPS of, say, 2000, but if you view the disks hosting the same volume, you might see 5000 IOPS. Then you must ask the question: how come 5000 - 2000 = 3000 IOPS are generated automatically? Does Compellent have any tools to interrogate the storage controller to see how the additional workloads are generated? No, it doesn't. Your only bet is Dell support telling you the truth, if you are lucky. The answer is that the automated RAID scrub is generating the extra workload on the storage, i.e. 3000 IOPS which could have been used for real workloads.

To correlate this analysis with an all-flash array, e.g. Dell Compellent: the SAN must be able to offer you the major benefits of a storage area network. If the storage cannot provide low latency and high IO throughput for sensitive applications and workloads, then you need to go back to the drawing board, or hire a consultant who can analyse your requirements and recommend the options that match your needs and budget. For further reading, find the Citrix validated solutions and the storage best practices recommended by VMware and Microsoft. There are many tools available in the market for you to analyse workloads on applications and on a virtual or physical infrastructure.

RAID Scrub: Data scrubbing is an error correction technique that uses a background task to inspect storage for errors periodically, and then correct detected errors using redundant data in the form of different checksums or copies of data. Data scrubbing reduces the likelihood that single correctable errors will accumulate, leading to reduced risks of uncorrectable errors.

In NetApp, you can schedule a RAID scrub at a time that suits you; in Dell Compellent, however, you cannot schedule a RAID scrub through the GUI or the command line. Dell technical support advised that this is an automated process that takes place every day to correct RAID groups in Dell Compellent. There is a major side effect of running an automatic RAID scrub: it will drive your storage to an insane IOPS level, and latency will peak so high that production volumes suffer and underperform. Virtualisation performance will be degraded so badly that the production environment will struggle to serve IO requests. Dell advised that they can do nothing about the RAID scrub because, in the SCOS operating system, it is an automated process.

Compellent Multipathing: By implementing an MPIO solution, you eliminate any single point of failure in the physical and logical paths among components such as adapters, cables, fabric switches, servers, and storage. If one or more of these elements fails, causing the path to fail, multipathing logic uses an alternate path for I/O so that applications can still access their data. Each network interface card (in the iSCSI case) or HBA should be connected by using redundant switch infrastructures to provide continued access to storage in the event of a failure in a storage fabric component. This is the fundamental concept of any storage area network, AKA SAN.

New generation SANs are integrated with multipath I/O (MPIO) support. Both the Microsoft and VMware virtualisation architectures support iSCSI, Fibre Channel, and serial attached SCSI (SAS) SAN connectivity by establishing multiple sessions or connections to the storage array. Failover times may vary by storage vendor and can be configured in various ways, but the logic of MPIO remains unchanged.

New MPIO features in Windows Server include a Device Specific Module (DSM) designed to work with storage arrays that support the asymmetric logical unit access (ALUA) controller model (as defined in SPC-3), as well as storage arrays that follow the Active/Active controller model.

The Microsoft DSM provides the following load balancing policies. Microsoft load balance policies are generally dependent on the controller design (ALUA or true Active/Active) of the storage array attached to Windows-based computers.

  • Failover
  • Failback
  • Round-robin
  • Round-robin with a subset of paths
  • Dynamic Least Queue Depth
  • Weighted Path

VMware-based systems also provide Fixed Path, Most Recently Used (MRU), and Round-Robin configurations; Round-Robin is the most optimal configuration for a VMware virtual infrastructure.

To explain ALUA in simple terms: a server can see a LUN via both storage processors (or controllers, or NAS heads) as active, but only one of these storage processors "owns" the LUN. Both storage processors can view the logical activities of the storage using a physical connection, either via a SAN switch to the server or via direct SAS cable connections. A Hyper-V or vSphere ESXi server knows which processor owns which LUNs and preferably sends traffic directly to the owner. In case of a controller, processor, or NAS head failure, the Hyper-V or vSphere server automatically sends traffic to an active processor without loss of productivity. This is an essential feature of EMC, NetApp, and HP products.

Let's look at Dell Compellent now. Dell Compellent does not offer true Active/Active controllers for any storage. Reference from the Dell forum (a Dell-verified answer):

“In the Compellent Architecture, both controllers are active. Failover is done at either the port or controller level depending on how the system was installed. Volumes are “owned” by a particular controller for mapping to servers. Changing the owning controller can be done – but it does take a volume down.”

I can confirm that this is exactly what Dell customer support advised me when I called them. Dell Compellent can take 60 to 90 seconds to fail over from one controller to another, which means the entire virtual environment will go offline for a while and then come back online. To update firmware or to replace a controller, you have to bring everything down and then bring everything back online, which will cause a major outage and productivity loss for the entire organisation.

Multipathing options supported by the Host Utilities

Multipath I/O Overview

Multipathing Considerations

Dell Compellent is not an ALUA Storage Array

Performance Issues: To identify a Dell Compellent bottleneck for a virtualisation platform hosted on Compellent, run the following counters in Windows perfmon, in a virtual machine or a physical machine where a volume of Compellent storage is presented via HBA or iSCSI initiator. Using Windows perfmon, create a data collector set with the attributes below and generate a report using the PAL tool. Extract seek time, latency, IOPS, and queue depth for the Compellent storage. You will see a bottleneck in every area of storage you can expect. Read further on Windows Performance Monitoring Tools.

\LogicalDisk\Avg. Disk Sec/Read

\LogicalDisk\Avg. Disk Sec/Write

\LogicalDisk\Disk Bytes/Sec

\LogicalDisk\Disk Reads/Sec

\LogicalDisk\Disk Writes/Sec

\LogicalDisk\Split IO/sec

\LogicalDisk\Disk Transfers/sec
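
Before building a full data collector set, the same counters can be sampled ad hoc with PowerShell's Get-Counter; a minimal sketch:

# Sample key LogicalDisk counters every 5 seconds, 12 times (one minute)
Get-Counter -Counter "\LogicalDisk(*)\Avg. Disk sec/Read",
                     "\LogicalDisk(*)\Avg. Disk sec/Write",
                     "\LogicalDisk(*)\Disk Transfers/sec" `
            -SampleInterval 5 -MaxSamples 12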

Use the following Tools to analyse workloads and storage performance in your storage area network: 

Capacity planning & workload analysis tools

Multi-vendor storage performance and capacity monitoring

RVTools 

Windows Perfmon

PAL Analysis Tools

Storage load generator / performance test tool

Dell EqualLogic Storage Management Pack Suite for SCOM

Monitoring EMC storage using SCOM 2012 SP1 with ESI Management Packs

IBM Storage Management Pack for Microsoft System Center Operations Manager

Cost Comparison:

The cost of each gigabyte of storage is declining rapidly in every segment of the market. Enterprise storage today costs what desktop storage did less than a decade ago. So why are your overall costs increasing when buying storage? Let's make it simple. Ask yourself these questions:

How much will the storage cost? How much will the SAN cost to implement? How much will the SAN cost to operate? Now use the tools below to calculate the real cost of owning the black box:

Amazon EBS Pricing

Google Cloud Platform Pricing Calculator

Azure Storage Pricing

IBM TCO Calculator

vSAN Hybrid TCO and Sizing Calculator

HPE Business Value Calculator

Microsoft TCO Calculator

So what should you be looking for in a SAN?

  • Lower TCO
  • Storage Performance
  • Scale-to-Fit
  • Quality of Service
  • Uncompromised Availability and uptime
  • Cloud Ready
  • Reduction of Data (de-duplication)
  • Reduction of backup
  • Analytics and automation
  • Reduction of Data Centre footprint 

Summary: Dell Compellent makes a compelling argument for all-flash performance tiers. But this argument lives in the sales pitch, not in reality. A price-conscious poor man who needs just any SAN and has a low-IO environment can have Compellent. For mainstream enterprise storage, Dell Compellent is a bad experience and can bring disaster to a corporate storage area network (SAN).

I have no doubt that when Compellent introduced all-flash arrays it was innovative, but Compellent's best days are gone. Just shop around; you will find better all-flash, converged, hybrid, and virtual arrays which are built on better software, controllers, and SSDs. There are flash arrays in the market which run clever code and algorithms within the software to produce high IO, low latency, and performance for sensitive applications.

Related Articles: 

EMC Unity Hybrid Storage for Azure Cloud Integration

Pro Tips For Storage Performance Testing

Storage Top 10 Best Practices

SQLIO download page

SQLIOSim tool KB article

SQL Server Storage Engine Team Blog

SQL Server I/O Basics

SQL Server I/O Basics – Chapter 2

Predeployment I/O Best Practices

Disk Partition Alignment Best Practices for SQL Server

EMC Symmetrix DMX-4 Enterprise Flash Drives with Microsoft SQL Server Databases

Implementing EMC CLARiiON CX4 with Enterprise Flash Drives for Microsoft SQL Server 2008 Databases

Microsoft. "Lab Validation Report: Microsoft Windows Server 2012 with Hyper-V and Exchange 2013."

Microsoft TechNet. “Exchange 2013 Virtualization.”

Microsoft TechNet. “Failover Clustering Hardware Requirements and Storage Options.” Aug 2012.