If your organisation has placed workloads both on-premises and in the cloud, forming a “hybrid cloud” model, you probably need on-premises storage that meets the requirements of hybrid workloads. EMC’s Unity hybrid flash storage series may be worth a look.
Software defined storage (SDS) is an evolution of storage technology in the cloud era: storage deployed without any dependency on specific storage hardware. SDS handles the traditional aspects of storage, such as storage policy, security, provisioning, upgrading and scaling, without the headache of managing the hardware layer. It is delivered entirely as software rather than as a hardware product. A software defined storage platform should have the following characteristics.
Characteristics of SDS
- Management of the complete storage stack through software
- Automated, policy-driven storage provisioning with SLAs
- Ability to run on private, public or hybrid cloud platforms
- Creation of usage metering and billing in the control panel
- Logical storage services and capabilities that eliminate dependence on the underlying physical storage systems
- Creation of logical storage pools
- Creation of logical tiering of storage volumes
- Aggregation of various physical storage into one or more logical pools
- Storage virtualization
- Thin provisioning of volumes from a logical pool of storage
- Scale-out storage architecture, such as Microsoft Scale-Out File Server
- Virtual Volumes (vVols), a proposal from VMware for a more transparent mapping between large volumes and the VM disk images within them
- Parallel NFS (pNFS), a specific implementation which evolved within the NFS protocol
- OpenStack APIs for storage interaction, which have been applied to open-source projects as well as vendor products
- Independence from the underlying storage hardware
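Several of these characteristics, pooling, aggregation and thin provisioning, can be illustrated with a toy model. This is a minimal sketch under my own assumptions; the `LogicalPool` class and its methods are hypothetical, not any vendor's API:

```python
# Hypothetical model of how an SDS layer aggregates physical devices into a
# logical pool and thin-provisions volumes from it. Names are illustrative.

class LogicalPool:
    def __init__(self, physical_devices_gb):
        # Aggregate heterogeneous physical devices into one logical pool.
        self.raw_capacity = sum(physical_devices_gb)
        self.volumes = {}   # volume name -> provisioned (logical) size in GB
        self.consumed = 0   # GB actually written to physical media

    def create_thin_volume(self, name, logical_size_gb):
        # Thin provisioning: logical size may exceed free physical space;
        # physical capacity is only consumed as data is written.
        self.volumes[name] = logical_size_gb

    def write(self, name, gb):
        if self.consumed + gb > self.raw_capacity:
            raise RuntimeError("physical pool exhausted")
        self.consumed += gb

    @property
    def overcommit_ratio(self):
        return sum(self.volumes.values()) / self.raw_capacity


pool = LogicalPool([800, 800, 400])          # three devices, 2000 GB raw
pool.create_thin_volume("vm-datastore", 1500)
pool.create_thin_volume("file-share", 1500)  # 3000 GB logical > 2000 GB raw
pool.write("vm-datastore", 300)
print(pool.overcommit_ratio)                 # 1.5 -- logical space overcommitted
print(pool.raw_capacity - pool.consumed)     # 1700 GB physical space still free
```

The point of the sketch is that consumers see logical volumes and pools, while the software layer alone tracks the physical devices underneath, which is exactly the hardware independence the list above describes.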
A software defined storage platform should be free of the following limitations.
- Glorified hardware that juggles between network and disk, e.g. Dell Compellent
- Tight dependencies between hardware and software, e.g. Dell Compellent
- High latency and low IOPS for production VMs
- Active-passive management controllers
- Repetitive hardware and software maintenance
- Administrative and management overhead
- Cost of retaining hardware and software, e.g. life-cycle management
- Factory-defined limitations, e.g. “can’t do” situations
- Production downtime for maintenance work, e.g. Dell Compellent maintenance
The following vendors provide software defined storage products in the current market.
Software-only vendors
- Atlantis Computing
- DataCore Software
Mainstream storage vendors
- EMC ViPR
- HP StoreVirtual
- IBM SmartCloud Virtual Storage Center
- NetApp Data ONTAP
Storage appliance vendors
- Zadara Storage
Hyper-converged appliance vendors
- Cisco (starting from $59K for HyperFlex systems, one year of support inclusive)
- VCE (starting from $60K for VxRail systems, support inclusive)
- Simplivity Corporation
- Pivot3 Inc.
- Scale Computing Inc
- EMC Corporation
- VMware Inc
Ultimately, SDS should, and will, provide businesses with worry-free management of storage without the limitations of hardware. There are compelling use cases for an enterprise to adopt software defined storage.
I have been deploying Storage Area Networks for almost ten years of my 18-year Information Technology career. I have deployed various traditional, software defined and converged SANs manufactured by global vendors such as IBM, EMC, NetApp, HP and Dell.
Dell is buying EMC. This is old news; you already know it. There are many business reasons for the deal: EMC is the number one storage vendor and a big cat of the NASDAQ. One key justification is to get into the enterprise market with enterprise-class product lines; the second big reason is to break into the cloud market utilising EMC’s dominant presence. Have you formed an opinion on what the Dell storage product line is likely to be once the merger is complete? There are many arguments for and against the various combinations of storage lines Dell could come up with. Let’s look at the current product lines of Dell and EMC.
Dell Current Product Line:
- Network Attached storage based on Dell 2U rack servers.
- Direct Attached Storage
- iSCSI and FCoE SAN solution such as PowerVault MD, EqualLogic, Compellent
EMC Product Line:
- EMC XtremIO – the XtremIO all-flash array—ideal for virtual desktop infrastructure (VDI), virtual server, and database
- EMC VMAX enterprise-class storage – mission-critical storage for hyper-consolidation and delivering IT as a service.
- EMC VNX/VNXe – hybrid flash storage platform, optimized for virtual applications.
Software Defined Storage
- Software defined storage such as Dell XC Series powered by Nutanix
- EMC Isilon – High-performance, clustered network-attached storage (NAS) that scales to your performance and capacity requirements.
- EMC ScaleIO – Hyper-converged solution that uses your existing servers, and turns them into a software defined SAN with massive scalability, 10X better performance and 60% lower cost than traditional storage.
- EMC Elastic Cloud Storage (ECS) – a universal cloud-scale platform with a geo-federated namespace, multi-tenant security and in-place analytics.
- EMC ViPR Controller – deliver automation and management insights from your multivendor storage.
- EMC Service Assurance Suite – delivers service-aware software defined network management that optimizes your physical and virtual networks, increases operational efficiency by ensuring SLAs, and reduces cost by maximizing resources.
- EMC ViPR SRM – optimize your multivendor block, file and object storage tiers to application service levels; maximize resources, reduce costs and improve your return on investment.
Other Partnership and Products of EMC
EMC Vblock Systems – VCE is a technology partnership in which EMC plays a major role, delivering converged cloud solutions for midrange to enterprise clients. Combining virtualization, server, network, storage and backup, VCE converged infrastructure solutions simplify all aspects of IT.
EMC Hybrid Cloud – the Federation Enterprise Hybrid Cloud for delivering IT-as-a-Service. With thousands of engineering hours, the Federation brings together best-in-class components from EMC, VCE, and VMware to create a fully integrated, enterprise-ready solution.
VMware Partnerships-EMC Corp plans to keep its majority stake in VMware Inc. EMC, which owns about 80% of VMware, bought the company in 2004 for $700 million. VMware accounted for about 22 percent of EMC’s revenue of $23.2 billion in 2013. EMC and VMware share a cloud vision. Through joint product development, solutions, and services, EMC is the number one choice for VMware customers for storage, backup, security, and management solutions.
RSA Information Security division- RSA info security offers data protection and identity management.
Pivotal – an EMC and VMware partnership to develop software and big data solutions.
Virtustream – a joint $1.2B acquisition by EMC and VMware to provide public cloud services.
Dell to discontinue Compellent after the merger – it makes sense
There are too many eggs in the basket already. Would Dell continue to sell identical products under different names, or streamline all products? Dell is after streamlining. It is well known to loyal Dell customers that EqualLogic will disappear from the Dell product line by 2018; we learnt that at a Dell partner conference. The question then remains: what will happen to Compellent? In the current marketplace, VNX competes directly with Dell Compellent, but VNX has a larger customer base. VNX has been in the market for almost 20 years and is still growing fast. Compellent has been reshaped into the SC series product line, but it has frustrated its customers with poor performance, poor customer support, and very poor explanations and guidance from the Dell presales team on how to align Dell storage with business requirements.
Dell can offer customers both VNX and Compellent, knowing Compellent did not work from the beginning, or streamline its products and kill Compellent altogether, then promote VNX, which has worked for the past 20 years and has a proven track record. Killing Compellent will upset a few already unhappy customers who simply wanted cheap SATA disks, but killing VNX would upset a wide range of customers and annoy them once and for all. The consequence of that would be losing customers to HP and NetApp, which Dell desperately wants to avoid as it seeks control of the storage market. Keeping VNX lets Dell-EMC retain EMC’s undisputed title as the number one storage vendor. This makes sense to anyone, IT-savvy or not. I am certain Dell will discontinue the Compellent series altogether: protecting a $67 billion acquisition of EMC is more important than protecting a $960 million acquisition of Compellent. It would obviously make sense for Michael Dell to kill Compellent and promote VNX as the sole mid-range storage line.
Taking a VMware snapshot or Hyper-V checkpoint can place a serious load on VM performance, and it can take considerable effort by a sysadmin to overcome this technical challenge and meet the required service level agreement. Most Veeam users run backup and replication after hours to limit the impact on the production environment, but this can’t be your only backup strategy. What if the storage itself goes down, or gets corrupted? Even with storage-based replication, you need to take your data out of the single fault domain. This is why many customers prefer to additionally make true backups stored on different storage. Never store production and backup on the same storage.
Now you can take advantage of storage snapshots. Veeam has worked with storage vendors such as EMC and NetApp to integrate with production storage, leveraging storage snapshot functionality to reduce the impact of snapshot/checkpoint removal during backup and replication. Supported arrays include:
- EMC VNX/VNXe
- NetApp FAS
- NetApp FlexArray (V-Series)
- NetApp Data ONTAP Edge VSA
- HP 3PAR StoreServ
- HP StoreVirtual
- HP StoreVirtual VSA
- IBM N series
- Dell Compellent
NOTE: My own experience with HP StoreVirtual and HP 3PAR has been awful. I had to remove HP StoreVirtual from production and introduce another fibre channel array to cope with the workload. Even though Veeam has tested its snapshot mechanism with HP, I would recommend avoiding HP StoreVirtual if you have a high-I/O workload.
Veeam suggests that you can achieve lower RPOs and RTOs with Backup from Storage Snapshots and Veeam Explorer for Storage Snapshots.
Veeam and EMC together allow you to:
- Minimize impact on production VMs
- Rapidly create backups from EMC VNX or VNXe storage snapshots up to 20 times faster than the competition
- Easily recover individual items in two minutes or less, without staging or intermediate steps
As a result of integrating Veeam with EMC, you can back up 20 times faster and restore faster using Veeam Explorer. Users can therefore achieve much lower recovery point objectives (RPOs) and recovery time objectives (RTOs) with minimal impact on production VMs.
How it works
Veeam Backup & Replication works with EMC and NetApp storage, along with VMware, to create backups and replicas from storage snapshots in the following way.
The backup and replication job:
- Analyzes which VMs in the job have disks on supported storage.
- Triggers a vSphere snapshot for all VMs located on the same storage volume. (As a part of a vSphere snapshot, Veeam’s application-aware processing of each VM is performed normally.)
- Triggers a snapshot of said storage volume once all VM snapshots have been created.
- Retrieves the CBT information for the VM snapshots created in step 2.
- Immediately triggers the removal of the vSphere snapshots on the production VMs.
- Mounts the storage snapshot to one of the backup proxies connected into the storage fabric.
- Reads new and changed virtual disk data blocks directly from the storage snapshot and transports them to the backup repository or replica VM.
- Triggers the removal of the storage snapshot once all VMs have been backed up.
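The ordering of these steps is the whole trick, so here is the sequence sketched as orchestration logic. This is an illustrative stand-in, not Veeam's actual API; the storage and hypervisor calls are simply recorded in an event log:

```python
# Illustrative sketch of the backup-from-storage-snapshot sequence described
# above. All operations are hypothetical stand-ins, recorded as log entries.

def backup_from_storage_snapshot(vms, call):
    call("filter supported VMs")               # step 1
    for vm in vms:
        call(f"vsphere snapshot {vm}")         # step 2 (app-aware processing)
    call("storage snapshot volume")            # step 3
    for vm in vms:
        call(f"collect CBT {vm}")              # step 4
    for vm in vms:
        call(f"remove vsphere snapshot {vm}")  # step 5: VM snapshots stay tiny
    call("mount storage snapshot on proxy")    # step 6
    for vm in vms:
        call(f"copy changed blocks {vm}")      # step 7
    call("remove storage snapshot")            # step 8

log = []
backup_from_storage_snapshot(["vm1", "vm2"], log.append)
# The VM-level snapshots are removed *before* any data is copied:
assert log.index("remove vsphere snapshot vm1") < log.index("copy changed blocks vm1")
```

The key design choice visible here is that the slow work (copying blocks) happens against the storage snapshot, after the vSphere snapshots are already gone, which is why they never grow large.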
VMs run off their snapshots for the shortest possible time (subject to the storage array; EMC works better), while jobs obtain data from VM snapshot files preserved in the storage snapshot. As a result, VM snapshots never get a chance to grow large and can be committed very quickly, without overloading production storage with the extended merge procedure typical of classic techniques for backing up from VM snapshots.
Integration with EMC storage will bring great benefit to customers who want to take advantage of their storage array. Veeam Availability Suite v9 provides the chance to reduce I/O on your storage array and bring your SLAs under control.
Previously I wrote an article on how to select a SAN based on your requirements. Let’s look at Gartner’s verdict on storage. Gartner scores storage arrays in the mid-range and enterprise classes. Here are the details of the Gartner scores.
Mid-range storage arrays are scored on manageability; reliability, availability and serviceability (RAS); performance; snapshot and replication; scalability; ecosystem; multi-tenancy and security; and storage efficiency.
Figure: Product Rating
Figure: Storage Capabilities
Figure: Product Capabilities
Figure: Total Score
Enterprise Class Storage
Enterprise-class storage is scored on performance, reliability, scalability, serviceability, manageability, RAS, snapshot and replication, ecosystem, multi-tenancy, security, and storage efficiency. Vendor reputation is more important in this class. Product types are clustered, scale-out, scale-up, high-end (monolithic) arrays and federated architectures. EMC, Hitachi, HP, Huawei, Fujitsu, DDN, and Oracle arrays can all cluster across more than two controllers. These vendors provide the functionality, performance, RAS and scalability to be considered in this class.
Figure: Product Ratings (Source: Gartner)
Where does Dell Compellent Stand?
There are known disadvantages in the Dell Compellent storage array. Users with more than two nodes must carefully plan firmware upgrades for a time of low I/O activity or a period of planned downtime. Compellent is advertised as flash-cached, with read SSD and write SSD tiers, storage tiering and snapshots; in reality it does its own thing in the background, which most customers aren’t aware of. Compellent runs a RAID scrub every day, whether you like it or not, which generates huge IOPS across all tiers, SSD and SATA disks alike, and you will experience poor I/O performance while the scrub runs. When the write SSD tier is full, the Compellent controller automatically triggers on-demand storage tiering during business hours, forcing data to be written permanently to tier 3 disks, which will literally kill virtualization, VDI and file systems. Storage tiering and RAID scrubs together send storage latency through the roof. If you are a big virtualization and VDI shop, you are left with no option but to endure this poor performance and let the scrub and tiering finish at a snail’s pace. If you have terabytes of data to back up every night, you will experience extended backup windows and unachievable RPOs and RTOs, regardless of whether changed block tracking (CBT) is enabled in your backup product.
If you are a Compellent customer wondering why Gartner didn’t include Dell Compellent in the enterprise class, now you know: Dell Compellent is excluded from the enterprise-class matrix because it doesn’t meet the functionality and capability requirements to be considered enterprise class. Another factor may worry existing Dell EqualLogic customers: no direct migration or upgrade path has been communicated to on-premises storage customers for when the OEM relationship between Dell and EMC ends. Dell ProSupport and the partner channel confirm that Dell will no longer sell SAS drives, which means I/O-intense arrays will lose storage performance. These situations put users of co-branded Dell:EMC CX systems in the difficult position of having to choose between changing storage system technologies or changing storage vendors altogether.
A storage area network (SAN) is any high-performance network whose primary purpose is to enable storage devices to communicate with computer systems and with each other. With a SAN, the concept of a single host computer that owns data or storage isn’t meaningful. A SAN moves storage resources off the common user network and reorganizes them into an independent, high-performance network. This allows each server to access shared storage as if it were a drive directly attached to the server. When a host wants to access a storage device on the SAN, it sends out a block-based access request for the storage device.
A storage-area network is typically assembled using three principal components: cabling, host bus adapters (HBAs) and switches. Each switch and storage system on the SAN must be interconnected, and the physical interconnections must support bandwidth levels that can adequately handle peak data activities.
A good SAN provides the following functionality to the business.
High availability: A single SAN connecting all computers to all storage puts a lot of enterprise information-accessibility eggs into one basket. The SAN had better be pretty indestructible, or the enterprise could literally be out of business. A good SAN implementation will have built-in protection against just about any kind of failure imaginable. This means that not only must the links and switches composing the SAN infrastructure survive component failures, but the storage devices, their interfaces to the SAN, and the computers themselves must all have built-in strategies for surviving and recovering from failures as well.
High performance: If a SAN interconnects a lot of computers and a lot of storage, it had better be able to deliver the performance they all need to do their respective jobs simultaneously. A good SAN delivers both high data transfer rates and low I/O request latency. Moreover, the SAN’s performance must be able to grow as the organization’s information storage and processing needs grow. As with other enterprise networks, it just isn’t practical to replace a SAN very often.
On the positive side, a SAN that does scale provides an extra application performance boost by separating high-volume I/O traffic from client/server message traffic, giving each a path that is optimal for its characteristics and eliminating cross talk between them.
The investment required to implement a SAN is high, both in terms of direct capital cost and in terms of the time and energy required to learn the technology and to design, deploy, tune, and manage the SAN. Any well-managed enterprise will do a cost-benefit analysis before deciding to implement storage networking. The results of such an analysis will almost certainly indicate that the biggest payback comes from using a SAN to connect the enterprise’s most important data to the computers that run its most critical applications.
An enterprise’s most critical data is the data it can least afford to be without. Together, the natural desire for maximum return on investment and the criticality of operational data lead to Rule 1 of storage networking.
A great SAN provides additional business benefits and features, depending on the product and manufacturer. Storage networking offers features such as universal connectivity, high availability, high performance, and advanced functions, along with benefits that support larger organizational goals, such as reduced cost and improved quality of service:
- SAN connectivity enables the grouping of computers into cooperative clusters that can recover quickly from equipment or application failures and allow data processing to continue 24 hours a day, every day of the year.
- With long-distance storage networking, 24 × 7 access to important data can be extended across metropolitan areas and indeed, with some implementations, around the world. Not only does this help protect access to information against disasters; it can also keep primary data close to where it’s used on a round-the-clock basis.
- SANs remove high-intensity I/O traffic from the LAN used to service clients. This can sharply reduce the occurrence of unpredictable, long application response times, enabling new applications to be implemented or allowing existing distributed applications to evolve in ways that would not be possible if the LAN were also carrying I/O traffic.
- A dedicated backup server on a SAN can make more frequent backups possible because it reduces the impact of backup on application servers to almost nothing. More frequent backups mean more up-to-date restores that require less time to execute.
Replication and disaster recovery
With so much data stored on a SAN, your client will likely want you to build disaster recovery into the system. SANs can be set up to automatically mirror data to another site, which could be a fail-safe SAN a few meters away or a disaster recovery (DR) site hundreds or thousands of miles away.
If your client wants to build mirroring into the storage area network design, one of the first considerations is whether to replicate synchronously or asynchronously. Synchronous mirroring means that as data is written to the primary SAN, each change is sent to the secondary and must be acknowledged before the next write can happen.
The alternative is to mirror changes asynchronously to the secondary site. You can configure this replication to happen as quickly as every second, or every few minutes or hours, Schulz said. This means your client could permanently lose some data if the primary SAN goes down before it has a chance to copy its data to the secondary, so your client should make calculations based on its recovery point objective (RPO) to determine how often it needs to mirror.
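That RPO calculation is simple arithmetic, and a worked example makes it concrete. The numbers below are purely illustrative assumptions, not from any vendor's sizing guide:

```python
# Illustrative arithmetic linking the asynchronous replication interval to
# the recovery point objective (RPO). All numbers are hypothetical.

def worst_case_data_loss_mb(change_rate_mb_per_min, interval_min):
    # With async mirroring, anything written since the last completed
    # replication cycle can be lost if the primary fails just before the
    # next cycle, so worst-case loss = change rate x replication interval.
    return change_rate_mb_per_min * interval_min

# 200 MB/min of writes, replicating every 5 minutes:
print(worst_case_data_loss_mb(200, 5))   # 1000 MB at risk between cycles

# To honour the RPO, the replication interval must not exceed it:
rpo_minutes = 15
interval_minutes = 5
assert interval_minutes <= rpo_minutes   # 5-minute cycles satisfy a 15-min RPO
```

Synchronous mirroring is the degenerate case where the interval is effectively zero, at the cost of every write waiting on the secondary's acknowledgement.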
Security
With several servers able to share the same physical hardware, it should be no surprise that security plays an important role in a storage area network design. Your client will want to know that servers can only access data if they’re specifically allowed to. If your client is using iSCSI, which runs on a standard Ethernet network, it’s also crucial to make sure outside parties won’t be able to hack into the network and have raw access to the SAN.
Capacity and scalability
A good storage area network design should not only accommodate your client’s current storage needs, but it should also be scalable so that your client can upgrade the SAN as needed throughout the expected lifespan of the system. Because a SAN’s switch connects storage devices on one side and servers on the other, its number of ports can affect both storage capacity and speed, Schulz said. By allowing enough ports to support multiple, simultaneous connections to each server, switches can multiply the bandwidth to servers. On the storage device side, you should make sure you have enough ports for redundant connections to existing storage units, as well as units your client may want to add later.
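A rough port count for the switch fabric can be estimated from the server and storage sides of that description. The formula and figures below are my own back-of-envelope assumptions, not a vendor sizing rule:

```python
import math

# Back-of-envelope SAN switch port sizing (illustrative formula and numbers).

def ports_needed(servers, paths_per_server, storage_units,
                 ports_per_storage_unit, growth_factor=1.25):
    # Redundant paths per server plus storage-side ports for each array,
    # with headroom for the units the client may add later.
    base = servers * paths_per_server + storage_units * ports_per_storage_unit
    return math.ceil(base * growth_factor)

# 10 dual-pathed servers and 2 arrays with 4 redundant ports each:
print(ports_needed(10, 2, 2, 4))   # 35 ports including 25% growth headroom
```

Multiple paths per server are what let the switch multiply bandwidth to each host, so the `paths_per_server` term matters for speed as well as for redundancy.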
Uptime and availability
Because several servers will rely on a SAN for all of their data, it’s important to make the system very reliable and eliminate any single points of failure. Most SAN hardware vendors offer redundancy within each unit, like dual power supplies, internal controllers and emergency batteries, but you should make sure that redundancy extends all the way to the server. Availability and redundancy can also be extended to multiple systems and across data centres, subject to a cost-benefit analysis and specific business requirements. If your business drives you to a zero-downtime policy, then data should be replicated to a disaster recovery site using a SAN identical to production, managed by appropriate replication software.
Software and Hardware Capability
Great SAN management software delivers all the capabilities of the SAN hardware to the devices connected to the SAN. It’s very reasonable to expect to share a SAN-attached tape drive among several servers, because tape drives are expensive and are only actually in use while backups are occurring. If a tape drive is connected to computers through a SAN, different computers can use it at different times: all the computers get backed up, the tape drive investment is used efficiently, and capital expenditure stays low.
A SAN provides fully redundant, high-performance and highly available hardware and software, serving application and business data to compute resources. Intelligent storage also provides data-movement capabilities between devices.
Best OR Cheap
No vendor has ever developed all the components required to build a complete SAN but most vendors are engaged in partnerships to qualify and offer complete SANs consisting of the partner’s products.
A best-in-class SAN provides totally different performance and attributes to the business; a cheap SAN can be built on the existing Ethernet network. You should ask yourself the following questions to determine what you need: best or cheap?
- Is this SAN capable of delivering business benefits?
- Is this SAN capable of managing your corporate workloads?
- Are you getting the correct I/O for your workloads?
- Are you getting the correct performance metrics for your applications, file systems and virtual infrastructure?
- Are you getting value for money?
- Do you have growth potential?
- Would your next data migration and software upgrade be seamless?
- Is this SAN a heterogeneous solution for you?
Storage as a Service vs on-premises
Many vendors provide storage as a service with lucrative pricing models. However, you should consider the following before choosing storage as a service.
- Is this vendor a partner of a recognised storage manufacturer?
- Does this vendor have a certified and experienced engineering team to look after your data?
- Does this vendor provide 24x7x365 support?
- Does this vendor provide true storage tiering?
- What is the geographic distance between the provider’s data center and your infrastructure, and how much would the WAN connectivity cost you?
- What would the storage latency and I/O be?
- Are you buying one-off capacity or a long-term corporate storage solution?
If the answers to these questions favour your business, I would recommend buying storage as a service; otherwise, on-premises is best for you.
NAS OR FC SAN OR iSCSI SAN OR Unified Storage
A NAS device provides file access to clients to which it connects using file access protocols (primarily CIFS and NFS) transported on Ethernet and TCP/IP.
An FC SAN device is a block-access device (i.e. it is a disk, or it emulates one or more disks) that connects to its clients using Fibre Channel and a block data access protocol such as SCSI.
iSCSI, which stands for Internet Small Computer System Interface, works on top of the Transmission Control Protocol (TCP) and allows SCSI commands to be sent end-to-end over local-area networks (LANs), wide-area networks (WANs) or the Internet.
You have to know your business before you can answer the question: NAS, FC SAN, iSCSI SAN or unified? If you want to maximise the benefit of a single investment, you are looking for a unified storage solution such as NetApp or EMC Isilon. If you are looking for enterprise-class, high-performance storage, want to isolate your Ethernet from storage traffic, reduce backup time and minimise RPO and RTO, then an FC SAN is best for you, for example EMC VNX or a NetApp OnCommand cluster. If your intention is to use the existing Ethernet and have shared storage, you are looking at an iSCSI SAN, for example Nimble Storage or the Dell SC series. Having said that, you also need to consider your structured corporate data, unstructured corporate data and application performance before making a judgement call.
Decision Making Process
Let’s build a decision matrix as follows. Just fill in the blanks and see the outcome.
|Workloads|I/O|Capacity Requirement (TB)|Storage Protocol (FC, iSCSI, NFS, CIFS)|
|---|---|---|---|
| | | | |

|Option|Rating Requirement (1=High, 3=Medium, 5=Low)|
|---|---|
| | |

|Risk Type|Rating (Low, Medium, High)|
|---|---|
|Loss of productivity| |
|Loss of redundancy| |
|Limited upgrade capacity| |
|Disruptive migration path| |

Service data – SLA

|Service Type|SLA Target|
|---|---|
| | |
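Once the blanks are filled in, the matrix can be turned into a weighted score. This is a minimal sketch; the weights, candidate options and ratings below are placeholder assumptions, so substitute your own requirements:

```python
# Minimal weighted-scoring sketch for the decision matrix above.
# Weights, options and ratings are illustrative placeholders.

def score(option, weights):
    # Ratings use 1=High, 3=Medium, 5=Low priority, so invert (6 - rating)
    # to make a better rating contribute a higher score.
    return sum(w * (6 - option[criterion]) for criterion, w in weights.items())

weights = {"performance": 3, "capacity": 2, "risk": 2, "cost": 1}

options = {
    "FC SAN":    {"performance": 1, "capacity": 2, "risk": 2, "cost": 5},
    "iSCSI SAN": {"performance": 3, "capacity": 2, "risk": 3, "cost": 2},
}

best = max(options, key=lambda name: score(options[name], weights))
print(best)   # FC SAN -- performance-heavy weights outvote its higher cost
```

With performance weighted heavily, the FC SAN wins despite its worse cost rating; shift the weights toward cost and the outcome flips, which is exactly the judgement call the matrix is meant to expose.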
Finally, rate the storage via the Gartner Magic Quadrant. The Gartner Magic Quadrant leaders (as of October 2015) include:
- Nimble Storage
To make your decision easy, select storage that lets you manage large and rapidly growing data cost-effectively; storage built for agility and simplicity, providing both a tiered approach for specialized needs and the ability to unify all digital content into a single, high-performance, highly scalable shared pool; storage that accelerates productivity and reduces capital and operational expenditure while seamlessly scaling with the growth of mission-critical data.