
Microsoft Software Defined Storage AKA Scale-out File Server (SOFS)

Business Challenges:

  • $/IOPS and $/TB
  • Continuous Availability
  • Fault Tolerance
  • Storage Performance
  • Segregation of production, development and disaster recovery storage
  • De-duplication of unstructured data
  • Segregation of data between production site and disaster recovery site
  • Continual break/fix of Distributed File System (DFS) and file servers
  • Continually extending storage on the DFS servers
  • Single point of failure
  • File systems are not always available
  • Security of file systems is a constant concern
  • Proprietary, non-scalable storage
  • Management of physical storage
  • Vendor lock-in contract for physical storage
  • Migration path from single vendor to multi vendor storage provider
  • Management overhead of unstructured data
  • Comprehensive management of storage platform

Solutions:

Microsoft Software Defined Storage, AKA Scale-Out File Server, is a feature designed to provide scale-out file shares that are continuously available for file-based server application storage. Scale-out file shares provide the ability to share the same folder from multiple nodes of the same cluster. The table below compares the Microsoft Software-Defined Storage offering with third-party offerings:

Storage feature | Third-party NAS/SAN | Microsoft Software-Defined Storage
--- | --- | ---
Fabric | Block protocol | File protocol
Network | Low-latency network with FC | Low latency with SMB3 Direct
Management | Management of LUNs | Management of file shares
Data de-duplication | Data de-duplication | Data de-duplication
Resiliency | RAID resiliency groups | Flexible resiliency options
Pooling | Pooling of disks | Pooling of disks
Availability | High | Continuous (via redundancy)
Copy offload, snapshots | Copy offload, snapshots | SMB copy offload, snapshots
Tiering | Storage tiering | Performance with tiering
Persistent write-back cache | Persistent write-back cache | Persistent write-back cache
Scale-up | Scale-up | Automatic scale-out rebalancing
Storage Quality of Service (QoS) | Storage QoS | Storage QoS (Windows Server 2016)
Replication | Replication | Storage Replica (Windows Server 2016)
Updates | Firmware updates | Rolling cluster upgrades (Windows Server 2016)
Storage Spaces Direct | N/A | Storage Spaces Direct (Windows Server 2016)
Azure-consistent storage | N/A | Azure-consistent storage (Windows Server 2016)

 Functional use of Microsoft Scale-Out File Servers:

1. Application Workloads

  • Microsoft Hyper-V Cluster
  • Microsoft SQL Server Cluster
  • Microsoft SharePoint
  • Microsoft Exchange Server
  • Microsoft Dynamics
  • Microsoft System Center DPM Storage Target
  • Veeam Backup Repository

2. Disaster Recovery Solution

  • Backup Target
  • Object storage
  • Encrypted storage target
  • Hyper-V Replica
  • System Center DPM

3. Unstructured Data

  • Continuously Available File Shares
  • DFS Namespace folder target server
  • Microsoft Data de-duplication
  • Roaming User Profiles
  • Home Directories
  • Citrix User Profiles
  • Outlook Cached location for Citrix XenApp Session Server

4. Management

  • Single Management Point for all Scale-out File Servers
  • Provides wizard-driven tools for storage-related tasks
  • Integrated with Microsoft System Center

Business Values:

  • Scalability
  • Load balancing
  • Fault tolerance
  • Ease of installation
  • Ease of management/operations
  • Flexibility
  • Security
  • High performance
  • Compliance & Certification

SOFS Architecture:

Microsoft Scale-out File Server (SOFS) is considered Software-Defined Storage (SDS). Microsoft SOFS is independent of the hardware vendor as long as the compute and storage are certified by Microsoft Corporation. The following figures show a Microsoft Hyper-V cluster, SQL cluster and object storage on the SOFS.

Figure: Microsoft Software Defined Storage (SDS) Architecture

Figure: Microsoft Scale-out File Server (SOFS) Architecture

Figure: Microsoft SDS Components

Figure: Unified Storage Management (See Reference)

Microsoft Software Defined Storage AKA Scale-out File Server Benefits:

SOFS:

  • Continuously available file stores for Hyper-V and SQL Server
  • Load-balanced IO across all nodes
  • Distributed access across all nodes
  • VSS support
  • Transparent failover and client redirection
  • Continuous availability at a share level versus a server level

De-duplication:

  • Identifies duplicate chunks of data and only stores one copy
  • Provides up to 90% reduction in storage required for OS VHD files
  • Reduces CPU and Memory pressure
  • Offers excellent reliability and integrity
  • Outperforms Single Instance Storage (SIS) or NTFS compression.
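As a minimal sketch, deduplication can be enabled on the volume behind a share with the Deduplication PowerShell cmdlets; the volume letter D: is a placeholder, and -UsageType HyperV (supported on Windows Server 2012 R2 and later) is shown only as an illustration for volumes holding VHDX files:

# Install the deduplication role service on the file server node
Install-WindowsFeature FS-Data-Deduplication

# Enable deduplication on the volume backing the share;
# -UsageType HyperV tunes it for open VHDX files (2012 R2 and later)
Enable-DedupVolume -Volume "D:" -UsageType HyperV

# Start an optimization job now instead of waiting for the schedule
Start-DedupJob -Volume "D:" -Type Optimization

# Review space savings once the job completes
Get-DedupStatus -Volume "D:" | Format-List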

SMB Multichannel:

  • Automatic detection of SMB Multi-Path networks
  • Resilience against path failures
  • Transparent failover with recovery
  • Improved throughput
  • Automatic configuration with little administrative overhead
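SMB Multichannel needs no configuration, but it can be verified from PowerShell; a quick sketch, run on a client that has an active connection to the SOFS:

# List active multichannel connections from this client
Get-SmbMultichannelConnection

# Show the client NICs SMB can use, including RSS and RDMA capability
Get-SmbClientNetworkInterface

# Multichannel is enabled by default; confirm the server-side switch
Get-SmbServerConfiguration | Select-Object EnableMultiChannel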

SMB Direct:

  • The Microsoft implementation of Remote Direct Memory Access (RDMA).
  • The ability to direct data transfers from a storage location to an application.
  • Higher performance and lower latency through CPU offloading
  • High-speed network utilization (including InfiniBand and iWARP)
  • Remote storage at the speed of local storage
  • A transfer rate of approximately 50Gbps on a single NIC port
  • Compatibility with SMB Multichannel for load balancing and failover
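A quick way to confirm the SMB Direct prerequisites is to query the RDMA state of the adapters; a sketch, assuming RDMA-capable NICs are already installed:

# Adapters with RDMA enabled (required for SMB Direct)
Get-NetAdapterRdma

# Interfaces the SMB client considers RDMA-capable
Get-SmbClientNetworkInterface | Where-Object { $_.RdmaCapable }

# Once traffic is flowing, multichannel connections confirm RDMA in use
Get-SmbMultichannelConnection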

VHDX Virtual Disk:

  • Online VHDX Resize
  • Storage QoS (Quality of Service)
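Both capabilities can be exercised from the Hyper-V PowerShell module; a sketch with hypothetical VM and path names (online resize requires the VHDX to be attached to a SCSI controller):

# Grow a VHDX while the VM is running (disk must be on a SCSI controller)
Resize-VHD -Path "\\SOFS\VHDShare\TestVM\TestVM.vhdx" -SizeBytes 200GB

# Storage QoS: cap the disk at 500 IOPS and reserve a floor of 100 IOPS
Set-VMHardDiskDrive -VMName "TestVM" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MaximumIOPS 500 -MinimumIOPS 100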

Live Migration:

  • Easy migration of virtual machines into a cluster while they are running
  • Improved virtual machine mobility
  • Flexible placement of virtual machine storage based on demand
  • Migration of virtual machine storage to shared storage without downtime
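As a sketch of the storage-migration scenario above, with hypothetical VM and node names:

# Move a running VM's disks and configuration onto the SOFS share
Move-VMStorage -VMName "TestVM" -DestinationStoragePath "\\SOFS\VHDShare\TestVM"

# Live-migrate the VM to another cluster node while its files stay on SOFS
Move-ClusterVirtualMachineRole -Name "TestVM" -Node "HV02" -MigrationType Live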

Storage Protocol:

  • SAN discovery (FCP, SAS, iSCSI, e.g. EMC VNX, EMC VMAX)
  • NAS discovery (self-contained NAS, NAS head, e.g. NetApp ONTAP)
  • File server discovery (Microsoft Scale-Out File Server, unified storage)

Unified Management:

  • A new architecture provides ~10x faster disk/partition enumeration operations
  • Remote and cluster-awareness capabilities
  • SM-API exposes new Windows Server 2012 R2 features (Tiering, Write-back cache, and so on)
  • SM-API features added to System Center VMM
  • End-to-end highly available storage provisioning in minutes from the VMM console
  • Broader Windows PowerShell coverage

ReFS:

  • More resilience to power failures
  • Highest levels of system availability
  • Larger volumes with better durability
  • Scalable to petabyte size volumes

Storage Replica:

  • Hardware-agnostic storage configuration
  • Provides a DR solution for planned and unplanned outages of mission-critical workloads
  • Uses SMB3 transport with proven reliability, scalability, and performance
  • Stretches failover clusters within metropolitan distances
  • Manages end-to-end storage and clustering for Hyper-V, Storage Replica, Storage Spaces, Scale-Out File Server, SMB3, Deduplication, and ReFS/NTFS using Microsoft software
  • Reduces downtime, and increases the reliability and productivity intrinsic to Windows
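On Windows Server 2016 a replica partnership is created with the StorageReplica cmdlets; a minimal sketch in which the server, replication group and volume names are placeholders:

# Create a server-to-server partnership (synchronous by default)
New-SRPartnership -SourceComputerName "SOFS-N1" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "DR-N1" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"

# Verify replication direction and mode
Get-SRPartnership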

Cloud Integration:

  • Cloud-based storage service for online backups
  • Windows PowerShell instrumented
  • Simple, reliable Disaster Recovery solution for applications and data
  • Supports System Center 2012 R2 DPM

Implementing Scale-out File Server

Scale-out File Server Recommended Configuration:

  1. Gather all virtual servers' IOPS requirements*
  2. Gather applications' IOPS requirements
  3. The total IOPS of all applications and virtual machines must be less than the available IOPS of the physical storage
  4. Keep latency below 3 ms at all times for high performance
  5. Gather required capacity + potential growth + best practice
  6. N+1 compute, network and storage hardware
  7. Use low-latency, high-throughput networks
  8. Segregate the storage network from the data network using a logical network (VLAN) or Fibre Channel
  9. Tools to be used

*Not all virtual servers are the same: a DHCP server generates few IOPS, while SQL Server and Exchange can generate thousands of IOPS.

*Do not place SQL Server on the same logical volume (LUN) as Exchange Server, Microsoft Dynamics or a backup server.

*Isolate high-I/O workloads on a separate logical volume, or even a separate storage pool if possible.
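Rules 3 and 4 above can be sanity-checked with simple arithmetic and the PhysicalDisk performance counters; a sketch, where the workload numbers are assumptions for illustration only:

# IOPS budget: total workload demand must stay below what the pool delivers
$vmIops  = 200 * 30      # e.g. 200 general-purpose VMs at ~30 IOPS each
$sqlIops = 5000          # SQL Server estimate
$exIops  = 2000          # Exchange estimate
"Required IOPS: $($vmIops + $sqlIops + $exIops)"   # must be < pool IOPS

# Latency check: Avg. Disk sec/Transfer should stay below ~0.003 (3 ms)
Get-Counter -Counter "\PhysicalDisk(*)\Avg. Disk sec/Transfer" `
    -SampleInterval 5 -MaxSamples 12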

Prerequisites for Scale-Out File Server

  1. Install the File and Storage Services server role and the Failover Clustering feature on the cluster nodes
  2. Configure Microsoft failover clustering using this article: Windows Server 2012: Failover Clustering Deep Dive Part II
  3. Add a Cluster Shared Volume
  • Log on to the server as a member of the local Administrators group.
  • Open Server Manager> Click Tools, and then click Failover Cluster Manager.
  • Click Storage, right-click the disk that you want to add to the cluster shared volume, and then click Add to Cluster Shared Volumes> Add Storage Presented to this cluster.
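The same prerequisites can be scripted end to end; a minimal sketch, with node names, cluster name and IP address as placeholders:

# On every node: install the file server role and the clustering feature
Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools

# Validate, then create the cluster
Test-Cluster -Node "SOFS-N1", "SOFS-N2"
New-Cluster -Name "SOFSCLU" -Node "SOFS-N1", "SOFS-N2" -StaticAddress 10.0.0.50

# Add an available disk to the cluster, then promote it to a CSV
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"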

Configure Scale-out File Server

  1. Open Failover Cluster Manager> Right-click the name of the cluster, and then click Configure Role.
  2. On the Before You Begin page, click Next.
  3. On the Select Role page, click File Server, and then click Next.
  4. On the File Server Type page, select the Scale-Out File Server for application data option, and then click Next.
  5. On the Client Access Point page, in the Name box, type a NetBIOS name for the Scale-Out File Server, and then click Next.
  6. On the Confirmation page, confirm your settings, and then click Next.
  7. On the Summary page, click Finish.
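The wizard steps above map to a single cmdlet; a sketch, assuming the client access point is named SOFS:

# Create the Scale-Out File Server role with the client access point "SOFS"
Add-ClusterScaleOutFileServerRole -Name "SOFS"

# Confirm the role came online
Get-ClusterGroup -Name "SOFS"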

Create Continuously Available File Share

  1. Open Failover Cluster Manager>Expand the cluster, and then click Roles.
  2. Right-click the file server role, and then click Add File Share.
  3. On the Select the profile for this share page, click SMB Share – Applications, and then click Next.
  4. On the Select the server and path for this share page, click the name of the cluster shared volume, and then click Next.
  5. On the Specify share name page, in the Share name box, type a name for the file share, and then click Next.
  6. On the Configure share settings page, ensure that the Continuously Available check box is selected, and then click Next.
  7. On the Specify permissions to control access page, click Customize permissions, grant the following permissions, and then click Next:
  • To use Scale-Out File Server file share for Hyper-V: All Hyper-V computer accounts, the SYSTEM account, cluster computer account for any Hyper-V clusters, and all Hyper-V administrators must be granted full control on the share and the file system.
  • To use Scale-Out File Server on Microsoft SQL Server: The SQL Server service account must be granted full control on the share and the file system

  8. On the Confirm selections page, click Create. On the View results page, click Close.
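The share-creation wizard corresponds to New-SmbShare; a sketch for a Hyper-V share, where the CSV path, domain name and computer accounts are placeholders:

# The folder must exist on a Cluster Shared Volume before it is shared
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\VHDShare"

# Continuously available share with full access for Hyper-V hosts and admins
New-SmbShare -Name "VHDShare" -Path "C:\ClusterStorage\Volume1\VHDShare" `
    -ContinuouslyAvailable:$true `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\HVCLU$", "CONTOSO\Hyper-V Admins"

# Mirror the share permissions onto the NTFS ACL, as the wizard does
Set-SmbPathAcl -ShareName "VHDShare"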

Use SOFS for Hyper-V Server VHDX Store:

  1. Open Hyper-V Manager. Click Start, and then click Hyper-V Manager.
  2. Open Hyper-V Settings > Virtual Hard Disks > specify the location of the store as \\SOFS\VHDShare, and specify the location of the virtual machine configuration as \\SOFS\VHDCShare.
  3. Click OK.
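The same default paths can be set without the GUI; a sketch using the Hyper-V module:

# Point new virtual hard disks and VM configurations at the SOFS shares
Set-VMHost -VirtualHardDiskPath "\\SOFS\VHDShare" -VirtualMachinePath "\\SOFS\VHDCShare"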

Use SOFS in System Center VMM: 

  1. Add Windows File Server in VMM
  2. Assign SOFS Share to Fabric & Hosts

Use SOFS for SQL Database Store:

1. Assign SQL Service Account Full permission to SOFS Share

  • Open Windows Explorer and navigate to the scale-out file share.
  • Right-click the folder, and then click Properties.
  • Click the Sharing tab, click Advanced Sharing, and then click Permissions.
  • Ensure that the SQL Server service account has full-control permissions.
  • Click OK twice.
  • Click the Security tab. Ensure that the SQL Server service account has full-control permissions.
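The same grants can be scripted with the SmbShare cmdlets; a sketch in which the share name (SQLData) and the service account (CONTOSO\svc-sql) are placeholders:

# Grant the SQL Server service account full control on the share...
Grant-SmbShareAccess -Name "SQLData" -AccountName "CONTOSO\svc-sql" `
    -AccessRight Full -Force

# ...and mirror the share permissions onto the file system ACL
Set-SmbPathAcl -ShareName "SQLData"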

2. In SQL Server 2012, you can choose to store all database files in a scale-out file share during installation.  

3. On step 20 of the SQL Server Setup wizard, provide the Scale-Out File Server locations \\SOFS\SQLData and \\SOFS\SQLLogs

4. Create a database on the SOFS share, on the existing SQL Server, using a SQL script:

CREATE DATABASE [TestDB]
ON PRIMARY
( NAME = N'TestDB', FILENAME = N'\\SOFS\SQLDB\TestDB.mdf' )
LOG ON
( NAME = N'TestDBLog', FILENAME = N'\\SOFS\SQLDBLog\TestDBLogs.ldf' )
GO

Use Backup & Recovery:

System Center Data Protection Manager 2012 R2

Configure and add a deduplicated storage target in DPM 2012 R2. DPM 2012 R2 will not back up SOFS itself, but it will back up VHDX files stored on SOFS. Follow the Deduplicate DPM storage and protection for virtual machines with SMB storage guide to back up virtual machines.

Veeam Availability Suite

  1. Log on to the Veeam Availability Console > click Backup Repository > right-click New Backup Repository
  2. Select Shared Folder on the Type tab
  3. Add the SMB backup target \\SOFS\Repository
  4. Follow the wizard. Make sure the Veeam service account has full access permission to the \\SOFS\Repository share.
  5. Click Scale-out Repositories > right-click Add Scale-out Backup Repository > type the name
  6. Select the backup repository you created previously, then follow the wizard to complete the task.

References:

Microsoft Storage Architecture

Storage Spaces Physical Disk Validation Script

Validate Hardware

Deploy Clustered Storage Spaces

Storage Spaces Tiering in Windows Server 2012 R2

SMB Transparent Failover

Cluster Shared Volume (CSV) Inside Out

Storage Spaces – Designing for Performance

Related Articles:

Scale-Out File Server Cluster using Azure VMs

Microsoft Multi-Site Failover Cluster for DR & Business Continuity

Dell Compellent: A Poor Man’s SAN


Buying a SAN? How to select a SAN for your business?

A storage area network (SAN) is any high-performance network whose primary purpose is to enable storage devices to communicate with computer systems and with each other. With a SAN, the concept of a single host computer that owns data or storage isn’t meaningful. A SAN moves storage resources off the common user network and reorganizes them into an independent, high-performance network. This allows each server to access shared storage as if it were a drive directly attached to the server. When a host wants to access a storage device on the SAN, it sends out a block-based access request for the storage device.

A storage area network is typically assembled using three principal components: cabling, host bus adapters (HBAs) and switches. Each switch and storage system on the SAN must be interconnected, and the physical interconnections must support bandwidth levels that can adequately handle peak data activities.

Good SAN

A good SAN provides the following functionality to the business.

High availability: A single SAN connecting all computers to all storage puts a lot of enterprise information-accessibility eggs into one basket. The SAN had better be pretty indestructible, or the enterprise could literally be out of business. A good SAN implementation will have built-in protection against just about any kind of failure imaginable: not only must the links and switches composing the SAN infrastructure be able to survive component failures, but the storage devices, their interfaces to the SAN, and the computers themselves must all have built-in strategies for surviving and recovering from failures as well.

Performance:

If a SAN interconnects a lot of computers and a lot of storage, it had better be able to deliver the performance they all need to do their respective jobs simultaneously. A good SAN delivers both high data transfer rates and low I/O request latency. Moreover, the SAN’s performance must be able to grow as the organization’s information storage and processing needs grow. As with other enterprise networks, it just isn’t practical to replace a SAN very often.

On the positive side, a SAN that does scale provides an extra application performance boost by separating high-volume I/O traffic from client/server message traffic, giving each a path that is optimal for its characteristics and eliminating cross talk between them.

The investment required to implement a SAN is high, both in terms of direct capital cost and in terms of the time and energy required to learn the technology and to design, deploy, tune, and manage the SAN. Any well-managed enterprise will do a cost-benefit analysis before deciding to implement storage networking. The results of such an analysis will almost certainly indicate that the biggest payback comes from using a SAN to connect the enterprise’s most important data to the computers that run its most critical applications.

An enterprise's most critical data is the data it can least afford to be without. Together, the natural desire for maximum return on investment and the criticality of operational data lead to Rule 1 of storage networking.

Great SAN

A great SAN provides additional business benefits and features, depending on the product and manufacturer: the features of storage networking, such as universal connectivity, high availability, high performance and advanced function, and the benefits of storage networking that support larger organizational goals, such as reduced cost and improved quality of service.

  • SAN connectivity enables the grouping of computers into cooperative clusters that can recover quickly from equipment or application failures and allow data processing to continue 24 hours a day, every day of the year.
  • With long-distance storage networking, 24 × 7 access to important data can be extended across metropolitan areas and indeed, with some implementations, around the world. Not only does this help protect access to information against disasters; it can also keep primary data close to where it’s used on a round-the-clock basis.
  • SANs remove high-intensity I/O traffic from the LAN used to service clients. This can sharply reduce the occurrence of unpredictable, long application response times, enabling new applications to be implemented or allowing existing distributed applications to evolve in ways that would not be possible if the LAN were also carting I/O traffic.
  • A dedicated backup server on a SAN can make more frequent backups possible because it reduces the impact of backup on application servers to almost nothing. More frequent backups mean more up-to-date restores that require less time to execute.

Replication and disaster recovery

With so much data stored on a SAN, your client will likely want you to build disaster recovery into the system. SANs can be set up to automatically mirror data to another site, which could be a failsafe SAN a few meters away or a disaster recovery (DR) site hundreds or thousands of miles away.

If your client wants to build mirroring into the storage area network design, one of the first considerations is whether to replicate synchronously or asynchronously. Synchronous mirroring means that as data is written to the primary SAN, each change is sent to the secondary and must be acknowledged before the next write can happen.

The alternative is to mirror changes to the secondary site asynchronously. You can configure this replication to happen as quickly as every second, or every few minutes or hours. Because this means your client could permanently lose some data if the primary SAN goes down before it has a chance to copy its data to the secondary, your client should calculate how often it needs to mirror based on its recovery point objective (RPO).
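The RPO calculation is simple arithmetic; a sketch with assumed numbers, shown only to illustrate the shape of the calculation:

# Worst-case data loss for async replication = change rate x interval
$changeRateMBps  = 2         # average write rate to the mirrored volume (MB/s)
$intervalSeconds = 300       # async replication interval (5 minutes)
$worstCaseLossMB = $changeRateMBps * $intervalSeconds   # 600 MB at risk

# The interval must keep worst-case loss inside the RPO, and the WAN link
# must sustain the change rate: 2 MB/s is roughly 16 Mbps of bandwidth
"Worst-case loss: $worstCaseLossMB MB"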

Security

With several servers able to share the same physical hardware, it should be no surprise that security plays an important role in a storage area network design. Your client will want to know that servers can only access data if they’re specifically allowed to. If your client is using iSCSI, which runs on a standard Ethernet network, it’s also crucial to make sure outside parties won’t be able to hack into the network and have raw access to the SAN.

Capacity and scalability

A good storage area network design should not only accommodate your client's current storage needs, but it should also be scalable so that your client can upgrade the SAN as needed throughout the expected lifespan of the system. Because a SAN's switch connects storage devices on one side and servers on the other, its number of ports can affect both storage capacity and speed. By allowing enough ports to support multiple, simultaneous connections to each server, switches can multiply the bandwidth to servers. On the storage device side, you should make sure you have enough ports for redundant connections to existing storage units, as well as units your client may want to add later.

Uptime and availability

Because several servers will rely on a SAN for all of their data, it's important to make the system very reliable and eliminate any single points of failure. Most SAN hardware vendors offer redundancy within each unit, such as dual power supplies, internal controllers and emergency batteries, but you should make sure that redundancy extends all the way to the server. Availability and redundancy can be extended to multiple systems and across datacentres, which comes with its own cost-benefit analysis and specific business requirements. If your business requires a zero-downtime policy, then data should be replicated to a disaster recovery site using a SAN identical to production, and appropriate software should be used to manage the replicated SANs.

Software and Hardware Capability

Great SAN management software delivers all the capabilities of the SAN hardware to the devices connected to the SAN. It's very reasonable to expect to share a SAN-attached tape drive among several servers, because tape drives are expensive and they're only actually in use while backups are occurring. If a tape drive is connected to computers through a SAN, different computers can use it at different times. All the computers get backed up, the tape drive investment is used efficiently, and capital expenditure stays low.

A SAN provides fully redundant, high-performance and highly available hardware and software that deliver application and business data to compute resources. Intelligent storage also provides data-movement capabilities between devices.

Best OR Cheap

No vendor has ever developed all the components required to build a complete SAN but most vendors are engaged in partnerships to qualify and offer complete SANs consisting of the partner’s products.

A best-in-class SAN provides totally different performance and attributes to the business; a cheap SAN provides a SAN over the existing Ethernet network. You should ask yourself the following questions and find the answers to determine what you need: best or cheap?

  1. Is this SAN capable of delivering business benefits?
  2. Is this SAN capable of managing your corporate workloads?
  3. Are you getting the correct I/O for your workloads?
  4. Are you getting the correct performance metrics for your applications, file systems and virtual infrastructure?
  5. Are you getting value for money?
  6. Do you have growth potential?
  7. Would your next data migration and software upgrade be seamless?
  8. Is this SAN a heterogeneous solution for you?

Storage as a Service vs on-premises

Many vendors provide storage as a service with lucrative pricing models. However, you should consider the following before choosing storage as a service.

  1. Is this vendor a partner of a recognised storage manufacturer?
  2. Does this vendor have a certified and experienced engineering team to look after your data?
  3. Does this vendor provide 24x7x365 support?
  4. Does this vendor provide true storage tiering?
  5. What is the geographic distance between the provider's data center and your infrastructure, and how much would WAN connectivity cost you?
  6. What would the storage latency and I/O be?
  7. Are you buying one-off capacity or a long-term corporate storage solution?

If the answers to these questions favour your business, then I would recommend buying storage as a service; otherwise, on-premises is best for you.

NAS OR FC SAN OR iSCSI SAN OR Unified Storage

A NAS device provides file access to clients to which it connects using file access protocols (primarily CIFS and NFS) transported on Ethernet and TCP/IP.

An FC SAN device is a block-access device (i.e. it is a disk, or it emulates one or more disks) that connects to its clients using Fibre Channel and a block data-access protocol such as SCSI.

iSCSI, which stands for Internet Small Computer System Interface, works on top of the Transmission Control Protocol (TCP) and allows SCSI commands to be sent end-to-end over local-area networks (LANs), wide-area networks (WANs) or the Internet.

You have to know your business before you can answer the question: NAS, FC SAN, iSCSI SAN or unified? If you want to maximise the benefit from the same investment, you are looking for a unified storage solution such as NetApp or EMC ISILON. If you are looking for enterprise-class, high-performance storage that isolates storage traffic from your Ethernet, reduces backup time and minimises RPO and RTO, then an FC SAN is best for you, for example EMC VNX or NetApp OnCommand Cluster. If your intention is to use the existing Ethernet and have shared storage, then you are looking for an iSCSI SAN, for example Nimble Storage or the Dell SC series. Having said that, you also need to consider your structured corporate data, unstructured corporate data and application performance before making a judgement call.

Decision Making Process

Let’s make a decision matrix as follows. Just fill the blanks and see the outcome.

Workload | I/O | Capacity requirement (TB) | Storage protocol (FC, iSCSI, NFS, CIFS)
--- | --- | --- | ---
Virtualization | | |
Unstructured data | | |
Structured data | | |
Messaging systems | | |
Application virtualization | | |
Collaboration application | | |
Business application | | |

Functionality Matrix

Option | Rating (1 = high, 3 = medium, 5 = low) | Requirement
--- | --- | ---
Redundancy | |
Uptime | |
Capacity | |
Data movement | |
Management | |

Risk Assessment

Risk type | Rating (Low, Medium, High)
--- | ---
Loss of productivity |
Loss of redundancy |
Reduced capacity |
Uptime |
Limited upgrade capacity |
Disruptive migration path |

Service Data – SLA

Service type | SLA target
--- | ---
Hardware replacement |
Uptime |
Vendor support |

Rate the storage options against the Gartner Magic Quadrant. The Gartner Magic Quadrant leaders are (as of October 2015):

  1. EMC
  2. HP
  3. Hitachi
  4. Dell
  5. NetApp
  6. IBM
  7. Nimble Storage

To make your decision easy, select storage that enables you to manage large and rapidly growing data cost-effectively. Choose storage that is built for agility and simplicity, that provides both a tiered approach for specialized needs and the ability to unify all digital content into a single high-performance, highly scalable shared pool of storage, and that accelerates productivity and reduces capital and operational expenditure while seamlessly scaling with the growth of mission-critical data.