Nimble Hybrid Storage for Azure VM

Microsoft Azure can be integrated with Nimble Cloud-Connected Storage, built on the Nimble Storage Predictive Flash platform, via Microsoft Azure ExpressRoute or Equinix Cloud Exchange connectivity solutions.

The Nimble storage is located in Equinix colocation facilities in close proximity to Azure data centres to deliver fast, low-latency performance.

Key Features:

  • 99.9997% measured uptime and reliability across thousands of systems deployed in production.
  • Triple-parity RAID protection improves data durability by over 1,000x compared to traditional RAID 6 protection.
  • Accelerates performance and optimises capacity via ExpressRoute and Equinix Cloud Exchange.
  • On-demand, pay-for-what-you-use pricing model; Cloud Volumes pricing starts at $0.10/GB/month (see the cost sketch after this list).
  • Data mobility between the Azure cloud and Nimble Storage.
  • Nimble Cloud Volumes (NCV) store block data for use by Azure compute instances.
  • Data protection using Veeam Availability Suite or Veritas NetBackup.
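
To put the pay-for-what-you-use model in concrete terms, here is a minimal cost sketch. The $0.10/GB/month figure is the advertised starting rate quoted above; the volume sizes are made-up examples.

```python
# Rough monthly-cost estimate for pay-per-use cloud volumes.
# The $0.10/GB/month figure is the advertised starting rate quoted above;
# actual pricing will vary by capacity, performance tier and region.

RATE_PER_GB_MONTH = 0.10  # USD, assumed starting rate

def monthly_cost(provisioned_gb: float) -> float:
    """Return the estimated monthly charge for a volume of the given size."""
    return provisioned_gb * RATE_PER_GB_MONTH

for size_gb in (500, 2_000, 10_240):  # example volume sizes (assumed)
    print(f"{size_gb:>6} GB -> ${monthly_cost(size_gb):,.2f}/month")
```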

Direct Connectivity to Azure

Azure virtual machines connect directly to block storage volumes running on Nimble arrays. This provides access to secure, feature-rich and high-performance storage over a fast and low-latency connection.

Equinix Cloud Exchange adds further flexibility to Azure and Nimble storage connectivity with self-service, on-demand provisioning and switchable virtual connections in the colocation facility. You can achieve this functionality using Nimble native tooling.

Hybrid Cloud Model

For hybrid clouds, where you do need to move data from your on-premises storage to your cloud-connected storage, Nimble's efficient data replication ensures all data is compressed and only changed data is sent, minimising bandwidth requirements.

Nimble's efficient data replication lets you gain efficiency, reduce data transfer times and cut network costs by avoiding massive data migrations between your on-premises storage or private cloud and the public cloud.
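
As a back-of-the-envelope check on those bandwidth savings, the sketch below estimates the link speed needed when only compressed, changed data crosses the wire. The change rate, compression ratio and transfer window are assumptions to replace with figures measured on your own arrays.

```python
# Back-of-the-envelope bandwidth estimate for compressed, changed-data-only
# replication. The change rate and compression ratio are assumptions; plug in
# figures measured on your own arrays.

def replication_mbps(dataset_tb: float, daily_change_pct: float,
                     compression_ratio: float, window_hours: float) -> float:
    """Average link speed (Mbit/s) needed to replicate one day's changed,
    compressed data within the given transfer window."""
    changed_gb = dataset_tb * 1024 * (daily_change_pct / 100)
    compressed_gb = changed_gb / compression_ratio
    return compressed_gb * 8 * 1024 / (window_hours * 3600)  # GB -> Mbit

# Example: 50 TB dataset, 2% daily change, 2:1 compression, 8-hour window.
print(f"{replication_mbps(50, 2, 2.0, 8):.0f} Mbit/s")
```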

Regulatory Compliance

Break down one of the top barriers to cloud adoption. You always own and control your data when you use Nimble Cloud-Connected Storage, allowing you to address data security as well as corporate compliance and governance requirements.

Low-Cost Disaster Recovery Solution

Pay for disaster recovery only when you need it instead of keeping fully operational secondary servers up at all times. Leverage the ability to quickly power on Azure virtual machines to enable your DR site for drills and actual failures, and turn them off when you are done. All the while, Nimble's efficient data replication ensures your DR data is up to date and secure.
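
A minimal sketch of the "turn it on only when you need it" idea, driving the standard Azure CLI (`az vm start` / `az vm deallocate`) from Python; the resource group and VM names are placeholders for your own environment. Deallocating, rather than merely stopping, is what halts compute billing.

```python
# Minimal sketch: power DR virtual machines on for a drill or failover and
# deallocate them afterwards so compute billing stops. Uses the standard
# Azure CLI ("az vm start" / "az vm deallocate"); the resource group and VM
# names below are placeholders.
import subprocess

RESOURCE_GROUP = "dr-resource-group"               # placeholder
DR_VMS = ["dr-sql-01", "dr-app-01", "dr-web-01"]   # placeholders

def set_dr_site(power_on: bool) -> None:
    action = "start" if power_on else "deallocate"
    for vm in DR_VMS:
        subprocess.run(
            ["az", "vm", action, "--resource-group", RESOURCE_GROUP,
             "--name", vm],
            check=True,
        )

set_dr_site(True)   # begin the DR drill or failover
# ... run the drill against the replicated Nimble volumes ...
set_dr_site(False)  # power off and stop paying for compute
```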

Dev/Test Environments

If your production environment is on-premises, it is difficult to leverage the cloud for dev/test, since you need to move data back and forth to the cloud. With Nimble Cloud-Connected Storage, instant snapshots are taken of your production environment, and zero-copy clones of that data are immediately available to Azure virtual machines that can be spun up quickly for dev/test.
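
The dev/test workflow looks roughly like the sketch below. Note that `NimbleClient` and every method on it are hypothetical stand-ins for your array's actual tooling or REST API, used here only to show the shape of the snapshot-clone-attach sequence.

```python
# Illustrative only: snapshot a production volume, then present a zero-copy
# clone to a dev/test Azure VM. NimbleClient and all of its methods are
# hypothetical stand-ins, NOT the real Nimble API; use your array's actual
# tooling or REST API for the real calls.

class NimbleClient:
    """Hypothetical stand-in for the array's management interface."""
    def snapshot(self, volume, name):
        print(f"instant snapshot of {volume} -> {name}")
        return name
    def clone(self, snapshot, name):
        print(f"zero-copy clone of {snapshot} -> {name}")  # no data copied
        return name
    def grant_access(self, volume, initiator):
        print(f"granting {initiator} access to {volume}")

def provision_devtest(client, prod_volume, azure_vm_iqn):
    snap = client.snapshot(prod_volume, name=f"{prod_volume}-devtest-snap")
    clone = client.clone(snap, name=f"{prod_volume}-devtest-clone")
    client.grant_access(clone, initiator=azure_vm_iqn)  # iSCSI initiator IQN
    return clone  # attach this volume from the Azure VM and start testing

provision_devtest(NimbleClient(), "prod-sql-vol",
                  "iqn.1991-05.com.microsoft:devtest-vm01")
```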

Secure Private Storage for Public Cloud Apps

Stop debating which applications can move to the cloud due to concerns about security, privacy, performance and reliability. With Nimble Cloud-Connected Storage, you always control your data while taking advantage of Azure virtual machines for cloud compute.

Other use cases, such as big data analytics and application cloud bursting, can leverage Nimble Cloud-Connected Storage to gain agility and improve performance while maintaining sovereignty and ownership of your data.


Veeam integrates with EMC and NetApp Storage Snapshots!

Taking VMware snapshots and Hyper-V checkpoints can impose a serious workload on VM performance, and it can take considerable effort from a sysadmin to overcome this technical challenge and meet the required service-level agreement. Most Veeam users run their backup and replication jobs after hours, considering the impact on the production environment, but this can't be your only backup strategy. What if the storage itself goes down, or gets corrupted? Even with storage-based replication, you need to take your data out of the single fault domain. This is why many customers prefer to additionally make true backups stored on different storage: never store production data and backups on the same storage.

Figure (Source: Veeam)

Now you can take advantage of storage snapshots. Veeam decided to work with storage vendors such as EMC and NetApp to integrate with production storage, leveraging storage snapshot functionality to reduce the impact on the environment of snapshot/checkpoint removal during backup and replication.

Supported Storage

  • EMC VNX/VNXe
  • NetApp FAS
  • NetApp FlexArray (V-Series)
  • NetApp Data ONTAP Edge VSA
  • HP 3PAR StoreServ
  • HP StoreVirtual
  • HP StoreVirtual VSA
  • IBM N series

Unsupported Storage

  • Dell Compellent

NOTE: My own experience with HP StoreVirtual and HP 3PAR has been awful. I had to remove HP StoreVirtual from production and introduce another Fibre Channel array to cope with the workload. Even though Veeam has tested its snapshot mechanism with HP, I would recommend avoiding HP StoreVirtual if you have a high-I/O workload.

Benefits

Veeam suggests that you can achieve lower RPOs and lower RTOs with Backup from Storage Snapshots and Veeam Explorer for Storage Snapshots.

Veeam and EMC together allow you to:

  • Minimize impact on production VMs
  • Rapidly create backups from EMC VNX or VNXe storage snapshots up to 20 times faster than the competition
  • Easily recover individual items in two minutes or less, without staging or intermediate steps

As a result of integrating Veeam with EMC, you can back up 20 times faster and restore faster using Veeam Explorer. Users can therefore achieve much lower recovery point objectives (RPOs) and recovery time objectives (RTOs) with minimal impact on production VMs.

How it works

Veeam Backup & Replication works with EMC and NetApp storage, along with VMware, to create backups and replicas from storage snapshots in the following way.

Figure (Source: Veeam)

The backup and replication job:

  1. Analyzes which VMs in the job have disks on supported storage.
  2. Triggers a vSphere snapshot for all VMs located on the same storage volume. (As part of the vSphere snapshot, Veeam's application-aware processing of each VM is performed normally.)
  3. Triggers a snapshot of that storage volume once all VM snapshots have been created.
  4. Retrieves the CBT information for the VM snapshots created in step 2.
  5. Immediately triggers the removal of the vSphere snapshots on the production VMs.
  6. Mounts the storage snapshot to one of the backup proxies connected to the storage fabric.
  7. Reads new and changed virtual disk data blocks directly from the storage snapshot and transports them to the backup repository or replica VM.
  8. Triggers the removal of the storage snapshot once all VMs have been backed up.

VMs run off snapshots for the shortest possible time (subject to the storage array; EMC works better), while jobs obtain data from the VM snapshot files preserved in the storage snapshot. As a result, VM snapshots never get a chance to grow large and can be committed very quickly, without overloading production storage with the extended merge procedure seen in classic techniques for backing up from VM snapshots.
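
The sequence above can be restated as pseudocode. This is an illustrative sketch of the published steps, not Veeam's implementation; every function here is a stand-in.

```python
# Illustrative pseudocode of the job sequence above. Every function here is a
# stand-in for Veeam/vSphere/array behaviour, not a real API.

def vsphere_snapshot(vm):           return f"{vm}-vmsnap"          # step 2
def remove_vsphere_snapshot(snap):  print(f"removed {snap}")       # step 5
def storage_snapshot(volume):       return f"{volume}-arraysnap"   # step 3
def remove_storage_snapshot(snap):  print(f"removed {snap}")       # step 8
def cbt_changed_blocks(vm):         return [0, 42, 1337]           # step 4 (fake)
def mount_on_proxy(array_snap):     return f"proxy:{array_snap}"   # step 6
def copy_changed_blocks(proxy, blocks):                            # step 7
    print(f"{proxy}: copied {len(blocks)} changed blocks")

def backup_from_storage_snapshot(vms_on_volume, volume):
    # Step 1 (working out which VMs sit on supported storage) happens
    # before this call; vms_on_volume are the VMs sharing this volume.
    vm_snaps = [vsphere_snapshot(vm) for vm in vms_on_volume]
    array_snap = storage_snapshot(volume)
    changed = {vm: cbt_changed_blocks(vm) for vm in vms_on_volume}
    for snap in vm_snaps:              # VM snapshots are removed immediately,
        remove_vsphere_snapshot(snap)  # so they never grow large
    proxy = mount_on_proxy(array_snap)
    for vm in vms_on_volume:
        copy_changed_blocks(proxy, changed[vm])
    remove_storage_snapshot(array_snap)

backup_from_storage_snapshot(["sql-01", "app-01"], "volume-A")
```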

Integration with EMC storage will bring great benefits to customers who want to take advantage of their storage array. Veeam Availability Suite v9 provides the chance to reduce I/O on your storage array and bring your SLAs under control.

References:

Backup from storage snapshots

Integration with EMC storage snapshots

Veeam integrates with EMC snapshots

New Veeam Availability Suite version 9


Gartner's verdict on mid-range and enterprise-class storage arrays

Previously I wrote an article on how to select a SAN based on your requirements. Now let's see what Gartner's verdict on storage is. Gartner scores storage arrays in the mid-range and enterprise classes. Here are the details of the Gartner scores.

Mid-Range Storage

Mid-range storage arrays are scored on manageability; reliability, availability and serviceability (RAS); performance; snapshot and replication; scalability; ecosystem; multi-tenancy and security; and storage efficiency.

Figure: Product Rating

Figure: Storage Capabilities

Figure: Product Capabilities

Figure: Total Score

Enterprise Class Storage

Enterprise-class storage is scored on performance, reliability, scalability, serviceability, manageability, RAS, snapshot and replication, ecosystem, multi-tenancy, security, and storage efficiency. Vendor reputation is more important in this class. Product types include clustered, scale-out, scale-up, high-end (monolithic) arrays and federated architectures. EMC, Hitachi, HP, Huawei, Fujitsu, DDN and Oracle arrays can all cluster across more than two controllers. These vendors provide the functionality, performance, RAS and scalability to be considered in this class.

Figure: Product Ratings (Source: Gartner)

Where does Dell Compellent Stand?

There are known disadvantages in the Dell Compellent storage array: users with more than two nodes must carefully plan firmware upgrades for a time of low I/O activity or for periods of planned downtime. Dell Compellent is advertised as flash-cached, with read SSD and write SSD tiers, storage tiering and snapshots; in reality, Compellent does its own thing in the background, which most customers aren't aware of. Compellent runs a RAID scrub every day, whether you like it or not, which generates huge IOPS across all tiers of the array, both SSD and SATA disks, and you will experience poor I/O performance while the scrub runs. When the write SSD tier is full, the Compellent controller automatically triggers on-demand storage tiering during business hours, forcing data to be written permanently to tier 3 disks, which will literally kill virtualization, VDI and file systems.

Storage tiering and the RAID scrub will send storage latency through the roof. If you are a big virtualization and VDI shop, you are left with no option but to endure the poor performance and let the RAID scrub and tiering finish at a snail's pace. If you have terabytes of data to back up every night, you will experience an extended backup window and unachievable RPOs and RTOs, regardless of whether change block tracking (CBT) is enabled in your backup products.

If you are a Compellent customer wondering why Gartner didn't include Dell Compellent in the enterprise class, now you know: Dell Compellent doesn't meet the functionality and capability requirements to be considered enterprise class. Another factor may worry existing Dell EqualLogic customers: no direct migration or upgrade path has been communicated to on-premises storage customers for when the OEM relationship between Dell and EMC ends. Dell ProSupport and the partner channel confirm that Dell will no longer sell SAS drives, which means I/O-intense arrays will lose storage performance. These situations put users of co-branded Dell:EMC CX systems in the difficult position of having to choose between changing storage system technologies or changing storage vendors altogether.

Buying a SAN? How to select a SAN for your business?

A storage area network (SAN) is any high-performance network whose primary purpose is to enable storage devices to communicate with computer systems and with each other. With a SAN, the concept of a single host computer that owns data or storage isn’t meaningful. A SAN moves storage resources off the common user network and reorganizes them into an independent, high-performance network. This allows each server to access shared storage as if it were a drive directly attached to the server. When a host wants to access a storage device on the SAN, it sends out a block-based access request for the storage device.

A storage-area network is typically assembled using three principal components: cabling, host bus adapters (HBAs) and switches. Each switch and storage system on the SAN must be interconnected, and the physical interconnections must support bandwidth levels that can adequately handle peak data activities.

Good SAN

A good SAN provides the following functionality to the business.

High availability: A single SAN connecting all computers to all storage puts a lot of enterprise information-accessibility eggs into one basket. The SAN had better be pretty indestructible, or the enterprise could literally be out of business. A good SAN implementation will have built-in protection against just about any kind of failure imaginable. This means that not only must the links and switches composing the SAN infrastructure be able to survive component failures, but the storage devices, their interfaces to the SAN, and the computers themselves must all have built-in strategies for surviving and recovering from failures as well.

Performance: If a SAN interconnects a lot of computers and a lot of storage, it had better be able to deliver the performance they all need to do their respective jobs simultaneously. A good SAN delivers both high data transfer rates and low I/O request latency. Moreover, the SAN's performance must be able to grow as the organization's information storage and processing needs grow. As with other enterprise networks, it just isn't practical to replace a SAN very often.

On the positive side, a SAN that does scale provides an extra application performance boost by separating high-volume I/O traffic from client/server message traffic, giving each a path that is optimal for its characteristics and eliminating cross talk between them.

The investment required to implement a SAN is high, both in terms of direct capital cost and in terms of the time and energy required to learn the technology and to design, deploy, tune, and manage the SAN. Any well-managed enterprise will do a cost-benefit analysis before deciding to implement storage networking. The results of such an analysis will almost certainly indicate that the biggest payback comes from using a SAN to connect the enterprise’s most important data to the computers that run its most critical applications.

An enterprise's most critical data is the data it can least afford to be without. Together, the natural desire for maximum return on investment and the criticality of operational data lead to Rule 1 of storage networking.

Great SAN

A great SAN provides additional business benefits and features, depending on the product and manufacturer: the features of storage networking, such as universal connectivity, high availability, high performance and advanced function, and the benefits that support larger organizational goals, such as reduced cost and improved quality of service.

  • SAN connectivity enables the grouping of computers into cooperative clusters that can recover quickly from equipment or application failures and allow data processing to continue 24 hours a day, every day of the year.
  • With long-distance storage networking, 24 × 7 access to important data can be extended across metropolitan areas and indeed, with some implementations, around the world. Not only does this help protect access to information against disasters; it can also keep primary data close to where it’s used on a round-the-clock basis.
  • SANs remove high-intensity I/O traffic from the LAN used to service clients. This can sharply reduce the occurrence of unpredictable, long application response times, enabling new applications to be implemented or allowing existing distributed applications to evolve in ways that would not be possible if the LAN were also carrying I/O traffic.
  • A dedicated backup server on a SAN can make more frequent backups possible because it reduces the impact of backup on application servers to almost nothing. More frequent backups mean more up-to-date restores that require less time to execute.

Replication and disaster recovery

With so much data stored on a SAN, your client will likely want you to build disaster recovery into the system. SANs can be set up to automatically mirror data to another site, which could be a failsafe SAN a few meters away or a disaster recovery (DR) site hundreds or thousands of miles away.

If your client wants to build mirroring into the storage area network design, one of the first considerations is whether to replicate synchronously or asynchronously. Synchronous mirroring means that as data is written to the primary SAN, each change is sent to the secondary and must be acknowledged before the next write can happen.

The alternative is to mirror changes to the secondary site asynchronously. You can configure this replication to happen as quickly as every second, or every few minutes or hours, Schulz said. This means your client could permanently lose some data if the primary SAN goes down before it has a chance to copy its data to the secondary, so your client should make calculations based on its recovery point objective (RPO) to determine how often it needs to mirror.
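
A quick way to sanity-check a replication interval against an RPO target is to bound the worst-case loss window, as in this sketch; the numbers are examples only.

```python
# Sanity-check an asynchronous replication schedule against an RPO target.
# Worst case, you lose everything written since the last completed replication
# cycle, so the interval itself bounds the data-loss window.

def max_data_loss_minutes(replication_interval_min: float,
                          transfer_time_min: float) -> float:
    """Worst-case age of lost data if the primary fails mid-cycle."""
    return replication_interval_min + transfer_time_min

RPO_MINUTES = 15            # example target (assumed)
interval, transfer = 10, 3  # replicate every 10 min, ~3 min to ship deltas

loss = max_data_loss_minutes(interval, transfer)
print(f"worst-case loss window: {loss} min "
      f"({'meets' if loss <= RPO_MINUTES else 'violates'} {RPO_MINUTES}-min RPO)")
```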

Security

With several servers able to share the same physical hardware, it should be no surprise that security plays an important role in a storage area network design. Your client will want to know that servers can only access data if they’re specifically allowed to. If your client is using iSCSI, which runs on a standard Ethernet network, it’s also crucial to make sure outside parties won’t be able to hack into the network and have raw access to the SAN.

Capacity and scalability

A good storage area network design should not only accommodate your client’s current storage needs, but it should also be scalable so that your client can upgrade the SAN as needed throughout the expected lifespan of the system. Because a SAN’s switch connects storage devices on one side and servers on the other, its number of ports can affect both storage capacity and speed, Schulz said. By allowing enough ports to support multiple, simultaneous connections to each server, switches can multiply the bandwidth to servers. On the storage device side, you should make sure you have enough ports for redundant connections to existing storage units, as well as units your client may want to add later.
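
A rough port budget along those lines can be sketched as below; all the counts are example assumptions to replace with your client's real server, array and ISL numbers.

```python
# Rough fabric port budget for a dual-fabric SAN design. All counts are
# example assumptions; substitute your own server, array and ISL numbers.

servers = 20            # hosts, each dual-pathed (one HBA port per fabric)
array_ports = 4         # storage ports per fabric for redundant connections
isl_ports = 2           # inter-switch links per fabric
growth_headroom = 0.25  # keep ~25% of ports free for future servers/arrays

ports_needed = servers + array_ports + isl_ports
with_headroom = ports_needed * (1 + growth_headroom)
print(f"per fabric: {ports_needed} ports now, "
      f"size switches for ~{with_headroom:.0f}")
```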

Uptime and availability

Because several servers will rely on a SAN for all of their data, it's important to make the system very reliable and eliminate any single points of failure. Most SAN hardware vendors offer redundancy within each unit, like dual power supplies, internal controllers and emergency batteries, but you should make sure that redundancy extends all the way to the server. Availability and redundancy can be extended to multiple systems across data centres, which comes with its own cost-benefit analysis and specific business requirements. If your business drives you to a zero-downtime policy, then data should be replicated to a disaster recovery site using a SAN identical to production, and appropriate software used to manage the replicated SANs.

Software and Hardware Capability

Great SAN management software delivers all the capabilities of the SAN hardware to the devices connected to the SAN. It's very reasonable to expect to share a SAN-attached tape drive among several servers, because tape drives are expensive and they're only actually in use while backups are occurring. If a tape drive is connected to computers through a SAN, different computers can use it at different times. All the computers get backed up, the tape drive investment is used efficiently, and capital expenditure stays low.

A SAN provides fully redundant, high-performance and highly available hardware and software to deliver application and business data to compute resources. Intelligent storage also provides data movement capabilities between devices.

Best OR Cheap

No vendor has ever developed all the components required to build a complete SAN, but most vendors are engaged in partnerships to qualify and offer complete SANs consisting of the partners' products.

A best-in-class SAN provides totally different performance and attributes to the business. A cheap SAN would run over your existing Ethernet network. However, you should ask yourself the following questions to determine what you need: best or cheap?

  1. Is this SAN capable of delivering business benefits?
  2. Is this SAN capable of managing your corporate workloads?
  3. Are you getting the correct I/O for your workloads?
  4. Are you getting the correct performance metrics for your applications, file systems and virtual infrastructure?
  5. Are you getting value for money?
  6. Do you have growth potential?
  7. Would your next data migration and software upgrade be seamless?
  8. Is this SAN a heterogeneous solution for you?

Storage as a Service vs on-premises

There are many vendors who provide storage as a service with lucrative pricing models. However, you should consider the following before choosing storage as a service.

  1. Is this vendor a partner of a recognised storage manufacturer?
  2. Does this vendor have a certified and experienced engineering team to look after your data?
  3. Does this vendor provide 24x7x365 support?
  4. Does this vendor provide true storage tiering?
  5. What is the geographic distance between the provider's data centre and your infrastructure, and how much would WAN connectivity cost you?
  6. What would the storage latency and I/O be?
  7. Are you buying one-off capacity or a long-term corporate storage solution?

If the answers to these questions favour your business, then I would recommend buying storage as a service; otherwise, on-premises is best for you.

NAS, FC SAN, iSCSI SAN or Unified Storage?

A NAS device provides file access to clients to which it connects using file access protocols (primarily CIFS and NFS) transported on Ethernet and TCP/IP.

An FC SAN device is a block-access device (i.e. it is a disk or it emulates one or more disks) that connects to its clients using Fibre Channel and a block data access protocol such as SCSI.

iSCSI, which stands for Internet Small Computer System Interface, works on top of the Transmission Control Protocol (TCP) and allows SCSI commands to be sent end-to-end over local-area networks (LANs), wide-area networks (WANs) or the Internet.

You have to know your business before you can answer the question: NAS, FC SAN, iSCSI SAN or unified? Would you like to maximise the benefit from the same investment? Then you are looking for a unified storage solution like NetApp or EMC Isilon. If you are looking for enterprise-class, high-performance storage that isolates storage traffic from your Ethernet, reduces backup time and minimises RPO and RTO, then an FC SAN is best for you, for example EMC VNX or NetApp OnCommand Cluster. If your intention is to use your existing Ethernet and have shared storage, then you are looking for an iSCSI SAN, for example Nimble Storage or Dell SC Series storage. Having said that, you also need to consider your structured corporate data, unstructured corporate data and application performance before making a judgement call.

Decision Making Process

Let's make a decision matrix as follows. Just fill in the blanks and see the outcome.

Workloads                  | I/O | Capacity Requirement (TB) | Storage Protocol (FC, iSCSI, NFS, CIFS)
Virtualization             |     |                           |
Unstructured Data          |     |                           |
Structured Data            |     |                           |
Messaging Systems          |     |                           |
Application virtualization |     |                           |
Collaboration application  |     |                           |
Business Application       |     |                           |

Functionality Matrix

Option        | Rating | Requirement (1 = High, 3 = Medium, 5 = Low)
Redundancy    |        |
Uptime        |        |
Capacity      |        |
Data movement |        |
Management    |        |

Risk Assessment

Risk Type                 | Rating (Low, Medium, High)
Loss of productivity      |
Loss of redundancy        |
Reduced Capacity          |
Uptime                    |
Limited upgrade capacity  |
Disruptive migration path |

Service Data – SLA

Service Type         | SLA Target
Hardware Replacement |
Uptime               |
Vendor Support       |
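
One way to turn the matrices above into a comparable number is a weighted score: rate each candidate per criterion and weight by the requirement rating, so 1 (high) counts most. This is only an illustrative sketch; the criteria weights and vendor scores are placeholders.

```python
# Turn the functionality matrix above into a comparable score. Criteria use
# the same 1 = high / 3 = medium / 5 = low requirement scale, inverted into a
# weight so high-requirement criteria count most. All numbers are placeholders.

requirements = {  # criterion -> requirement rating (1=high, 3=medium, 5=low)
    "Redundancy": 1, "Uptime": 1, "Capacity": 3,
    "Data movement": 3, "Management": 5,
}

candidates = {  # criterion scores per option, 1 (poor) to 5 (excellent)
    "SAN A": {"Redundancy": 5, "Uptime": 4, "Capacity": 3,
              "Data movement": 4, "Management": 3},
    "SAN B": {"Redundancy": 3, "Uptime": 3, "Capacity": 5,
              "Data movement": 2, "Management": 5},
}

def weighted_score(scores):
    # invert the rating so 1 (high requirement) becomes the largest weight
    return sum(scores[c] * (6 - r) for c, r in requirements.items())

for name, scores in sorted(candidates.items(),
                           key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores)}")
```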

Rate storage via the Gartner Magic Quadrant. The Gartner Magic Quadrant leaders (as of October 2015) are:

  1. EMC
  2. HP
  3. Hitachi
  4. Dell
  5. NetApp
  6. IBM
  7. Nimble Storage

To make your decision easy, select storage that enables you to manage large and rapidly growing data cost-effectively; storage that is built for agility and simplicity, providing both a tiered storage approach for specialized needs and the ability to unify all digital content into a single, high-performance, highly scalable shared pool of storage; storage that accelerates productivity and reduces capital and operational expenditure while seamlessly scaling with the growth of mission-critical data.