EMC Unity Hybrid Storage for Azure Cloud Integration

If you have placed workloads both on-premises and in the cloud, forming a “hybrid cloud” model for your organisation, you probably need on-premises storage that meets the requirements of hybrid workloads. EMC’s Unity hybrid flash storage series may be the answer to your business-critical problem. This unified storage array is designed for organisations from the midmarket to the enterprise and covers the broadest range of workloads, both SAN and NAS. Unity has been designed around workloads, rather than being a tin sitting in your data centre running up power and cooling bills while you call it a SAN; after all, that was the traditional tin-based SAN solution.

Previously I wrote an article about Dell Compellent. I received an overwhelming response from Compellent users, and I have been asked on many occasions what other options we have if not Compellent storage.

To answer the question, I would choose from EMC Unity Hybrid Storage, Nimble or NetApp storage, subject to an in-depth analysis of workloads, case studies and business requirements. But again, this is a “subject to x, y, z” question. Tin-based storage does not fulfil modern business requirements; I would personally rather use Azure or AWS than procure any tin and pay for power, cooling and racks.

EMC Unity

The Unity midrange storage, with flash and rich data services based on dense SSD technology, helps provide outstanding TCO. Unity provides intelligent insight into SAN health with CloudIQ, which delivers cloud-based proactive monitoring and predictive analytics. Additionally, ongoing operation is simple, with proactive assistance and automated remote support.

What I like about Unity is the Unity software, most notably CloudIQ, AppSync and the Cloud Tiering Appliance. Unity’s capabilities include point-in-time snapshots, local and remote data replication, built-in encryption, and deep integration with the VMware, Microsoft applications, Hyper-V, Azure Blob, AWS S3 and OpenStack ecosystems. Unity provides automated tiering and flash caching, so the most active data is served from flash.

Management

Unity provides one of the most user-friendly GUI management interfaces. After installing and powering on the purpose-built Dell EMC Unity system for the first time, the operating environment boots. The interfaces are well defined and highlight areas of interest – drive faults, network link failures, etc. Within Unisphere are several options for support, including Unisphere Online Help and the Support page, where FAQs, videos, white papers, chat sessions and more are available.

Provisioning Storage

The EMC Unity offers both block and file provisioning in the same enclosure. Disk drives are provisioned into pools that can host both block and file data. Block connectivity is offered over both iSCSI and Fibre Channel, while file data is served over Ethernet. You can provision LUNs, consistency groups, thin clones, VMware datastores (VMFS), and VMware Virtual Volumes.
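As a quick illustration, Unity can be driven through the Unisphere REST API as well as the GUI. The following PowerShell sketch lists the configured pools; the management IP is a placeholder, and the endpoint layout is my assumption about the Unisphere REST API, so verify it against your array’s API documentation before relying on it.

# Sketch only: list Unity storage pools via the Unisphere REST API.
# Assumes PowerShell 7+ (for -SkipCertificateCheck and -Authentication)
# and a reachable management IP (placeholder below).
$cred = Get-Credential                        # Unisphere admin credentials
$headers = @{ 'X-EMC-REST-CLIENT' = 'true' }  # header the Unity API expects

$pools = Invoke-RestMethod -Method Get `
    -Uri 'https://10.0.0.10/api/types/pool/instances?fields=name,sizeTotal,sizeFree' `
    -Headers $headers -Credential $cred -Authentication Basic -SkipCertificateCheck

# Instances come back under 'entries[].content'; sizes are in bytes.
$pools.entries.content | Select-Object name,
    @{n='TotalTB';e={[math]::Round($_.sizeTotal / 1TB, 2)}},
    @{n='FreeTB'; e={[math]::Round($_.sizeFree / 1TB, 2)}}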

Fast VP

FAST VP (Fully Automated Storage Tiering for Virtual Pools) is a very smart solution for dynamically matching storage provisioning with changes in the frequency of data access. FAST VP segregates disk drives into three tiers:

  • Extreme Performance Tier – SSD
  • Performance tier – SAS
  • Capacity Tier – NL-SAS

FAST VP Policies – FAST VP is an automated feature, but it provides controls to set up user-defined tiering policies so you get the best performance in various environments. FAST VP uses an algorithm to make data relocation decisions based on the activity level of each slice:

  • Highest Available Tier
  • Auto-Tier
  • Start High then Auto-Tier
  • Lowest Available Tier
  • No Data Movement

Cloud Tiering Appliance (CTA)

If you are an organisation with a hybrid cloud and you would like to move data from on-premises to Azure or AWS S3, then the Cloud Tiering Appliance (CTA) is the best solution for you: it moves data to the cloud based on user-configured policies. The reverse is also true, meaning you can return your data to on-premises using the same appliance.

Why do you need this appliance? If you run out of storage or need to free up space, you can do so on the fly without capital expenditure. This ability optimises primary storage usage, dramatically improves storage efficiency, shortens the time required to back up data, and reduces overall TCO for primary storage. It also reduces your data centre footprint. You can move both file and block data to Azure or AWS S3 using CTA.

EMC CloudIQ

Another cool feature is CloudIQ. CloudIQ provides operational insights and overall health scores for EMC midrange storage, delivering central monitoring, predictive analytics and health monitoring.

CloudIQ is a no-cost SaaS application that non-disruptively provides overall health scores for Unity systems through cloud-based proactive monitoring and intelligent, predictive analytics.

AppSync Data Protection

Your priority is your workloads: you must protect them and simplify their management. AppSync empowers you to satisfy copy demand for data repurposing, operational recovery, and disaster recovery.

AppSync simplifies, orchestrates, and automates the process of generating and consuming copies of production data. You can integrate AppSync with Oracle, Microsoft SQL Server, and Microsoft Exchange for application-consistent copy management. AppSync provides a single user interface and delivers VM-consistent copies of datastores and individual VM recovery for VMware environments.

RecoverPoint

EMC RecoverPoint provides continuous data protection with multiple recovery points, so applications can be restored instantly to a specific point in time. RecoverPoint protects applications with bidirectional synchronous and asynchronous replication for recovery of physical, virtual, and cloud infrastructures. Unique bandwidth compression and deduplication minimise network utilisation, significantly reducing the amount of replicated data sent over the network.

RecoverPoint is a software-only solution that manages disaster recovery provisioning and controls replication policies and recovery, ensuring that VM service levels are met.

EMC Storage Analytics

EMC Storage Analytics (ESA) lets you extend VMware vRealize Operations analytics to supported EMC storage platforms, so you can optimise performance and diagnose issues across physical storage and virtual machines.

Storage Analytics provides dashboard-based visual tools that give deep visibility into EMC infrastructure. Actionable capacity and performance analysis helps you troubleshoot, identify, and act on issues fast.

Encryption

EMC Unity encrypts user data as it is written to the backend drives and decrypts it as it is read back. Because encryption and decryption are handled by a dedicated piece of hardware on the SAS interface, there is minimal impact on Unity performance. The system also supports external key management through the Key Management Interoperability Protocol (KMIP).

Conclusion

Unity Hybrid Storage reduces the cost, data centre footprint, complexity and management overhead of your SAN systems while maintaining workload performance and protection, and it provides a path to migrate data to Azure or AWS.

Understanding Software Defined Storage (SDS)

Software defined storage is an evolution of storage technology in the cloud era: storage deployed without dependencies on particular storage hardware. Software defined storage (SDS) lifts all the traditional aspects of storage, such as managing storage policy, security, provisioning, upgrading and scaling, away from the headaches of the hardware layer. SDS is a purely software-based product rather than a hardware-based one. A software defined storage product should have the following characteristics.

Characteristics of SDS

  • Management of the complete storage stack in software
  • Automated, policy-driven storage provisioning with SLAs
  • Ability to run on private, public or hybrid cloud platforms
  • Creation of usage metering and billing in a control panel
  • Logical storage services and capabilities, eliminating dependence on the underlying physical storage systems
  • Creation of logical storage pools
  • Creation of logical tiering of storage volumes
  • Aggregation of various physical storage into one or more logical pools
  • Storage virtualization
  • Thin provisioning of volumes from a logical pool of storage
  • Scale-out storage architecture, such as Microsoft Scale-Out File Server
  • Virtual Volumes (vVols), a proposal from VMware for a more transparent mapping between large volumes and the VM disk images within them
  • Parallel NFS (pNFS), a specific implementation which evolved within the NFS standard
  • OpenStack APIs for storage interaction, which have been applied to open-source projects as well as to vendor products
  • Independence from the underlying storage hardware
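Several of these characteristics, notably pooling, thin provisioning and hardware independence, are easy to demonstrate with Microsoft Storage Spaces, which the list above already names. A minimal PowerShell sketch, assuming a Windows Server host with unallocated physical disks (the pool and volume names are illustrative):

# Sketch: create a logical pool from raw disks, then carve a
# thin-provisioned virtual disk from it. Names and sizes are examples.
$disks = Get-PhysicalDisk -CanPool $true      # only disks not yet pooled

New-StoragePool -FriendlyName 'SDSPool01' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName 'SDSPool01' `
    -FriendlyName 'ThinVol01' -Size 2TB `
    -ProvisioningType Thin -ResiliencySettingName Mirror

The virtual disk consumes pool capacity only as data is written, which is exactly the thin-provisioning behaviour described in the list above.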

A software defined storage product must not have the following limitations.

  • Glorified hardware which juggles between network and disk, e.g. Dell Compellent
  • Dependency between specific hardware and software, e.g. Dell Compellent
  • High latency and low IOPS for production VMs
  • An active-passive management controller
  • Repetitive hardware and software maintenance
  • Administrative and management overhead
  • Cost of retaining hardware and software, e.g. life cycle management
  • Factory-defined limitations, e.g. “can’t do” situations
  • Production downtime for maintenance work, e.g. Dell Compellent maintenance

The following vendors provide various software defined storage products in the current market.

Software-Only Vendors

  • Atlantis Computing
  • DataCore Software
  • SANBOLIC
  • Nexenta
  • Maxta
  • CloudByte
  • VMware
  • Microsoft

Mainstream Storage Vendors

  • EMC ViPR
  • HP StoreVirtual
  • Hitachi
  • IBM SmartCloud Virtual Storage Center
  • NetApp Data ONTAP

Storage Appliance Vendors

  • Tintri
  • Nimble
  • Solidfire
  • Nutanix
  • Zadara Storage

Hyper-Converged Appliance Vendors

  • Cisco (starting price from $59K for HyperFlex systems, 1 year of support inclusive)
  • Nutanix
  • VCE (starting price from $60K for VxRail systems, support inclusive)
  • Simplivity Corporation
  • Maxta
  • Pivot3 Inc.
  • Scale Computing Inc
  • EMC Corporation
  • VMware Inc

Ultimately, SDS should and will provide businesses with worry-free management of storage, without the limitations of hardware. There are compelling use cases for an enterprise to adopt software defined storage.

Relevant Articles

Gartner’s verdict on mid-range and enterprise class storage arrays

Previously I wrote an article on how to select a SAN based on your requirements. Let’s learn Gartner’s verdict on storage. Gartner scores storage arrays in the mid-range and enterprise classes. Here are the details of Gartner’s scores.

Mid-Range Storage

Mid-range storage arrays are scored on manageability; reliability, availability and serviceability (RAS); performance; snapshots and replication; scalability; ecosystem; multi-tenancy and security; and storage efficiency.

[Figure: Product Rating]

[Figure: Storage Capabilities]

[Figure: Product Capabilities]

[Figure: Total Score]

Enterprise Class Storage

Enterprise class storage is scored on performance, reliability, scalability, serviceability, manageability, RAS, snapshots and replication, ecosystem, multi-tenancy, security, and storage efficiency. Vendor reputation carries more weight in this class. Product types are clustered, scale-out, scale-up, high-end (monolithic) arrays and federated architectures. EMC, Hitachi, HP, Huawei, Fujitsu, DDN, and Oracle arrays can all cluster across more than two controllers. These vendors provide the functionality, performance, RAS and scalability to be considered in this class.

[Figure: Product Ratings (Source: Gartner)]

Where does Dell Compellent Stand?

There are known disadvantages in the Dell Compellent storage array: users with more than two nodes must carefully plan firmware upgrades during a time of low I/O activity or during periods of planned downtime. Dell Compellent is advertised as flash-cached, with read SSD and write SSD tiers, storage tiering and snapshots; in reality, Compellent does its own thing in the background, which most customers aren’t aware of. Compellent runs a RAID scrub every day, whether you like it or not, which generates huge IOPS across all tiers of the array, both SSD and SATA disks, and you will experience poor I/O performance during the RAID scrub. When the write SSD tier is full, the Compellent controller automatically triggers on-demand storage tiering during business hours, forcing data to be written permanently to tier 3 disks, which will literally kill virtualization, VDI and file systems. Storage tiering and RAID scrub together will send storage latency through the roof. If you are a big virtualization and VDI shop, you are left with no option but to endure this poor performance and let the RAID scrub and tiering finish at a snail’s pace. If you have terabytes of data to back up every night, you will experience extended backup windows and unachievable RPOs and RTOs, regardless of change block tracking (CBT) being enabled in your backup products.

If you are a Compellent customer wondering why Gartner didn’t include Dell Compellent in the enterprise class, now you know why: Dell Compellent doesn’t meet the functionality and capability requirements to be considered enterprise class. There is another factor that may worry existing Dell EqualLogic customers: no direct migration or upgrade path has been communicated to on-premises storage customers for when the OEM relationship between Dell and EMC ends. Dell Pro Support and the partner channel confirm that Dell will no longer sell SAS drives, which means I/O-intense arrays will lose storage performance. These situations put users of co-branded Dell:EMC CX systems in the difficult position of having to choose between changing storage system technologies or changing storage vendors altogether.

Buying a SAN? How to select a SAN for your business?

A storage area network (SAN) is any high-performance network whose primary purpose is to enable storage devices to communicate with computer systems and with each other. With a SAN, the concept of a single host computer that owns data or storage isn’t meaningful. A SAN moves storage resources off the common user network and reorganizes them into an independent, high-performance network. This allows each server to access shared storage as if it were a drive directly attached to the server. When a host wants to access a storage device on the SAN, it sends out a block-based access request for the storage device.

A storage area network is typically assembled using three principal components: cabling, host bus adapters (HBAs) and switches. Each switch and storage system on the SAN must be interconnected, and the physical interconnections must support bandwidth levels that can adequately handle peak data activities.

Good SAN

A good SAN provides the following functionality to the business.

High availability: A single SAN connecting all computers to all storage puts a lot of enterprise information-accessibility eggs into one basket. The SAN had better be pretty indestructible, or the enterprise could literally be out of business. A good SAN implementation will have built-in protection against just about any kind of failure imaginable. This means that not only must the links and switches composing the SAN infrastructure be able to survive component failures, but the storage devices, their interfaces to the SAN, and the computers themselves must all have built-in strategies for surviving and recovering from failures as well.

Performance:

If a SAN interconnects a lot of computers and a lot of storage, it had better be able to deliver the performance they all need to do their respective jobs simultaneously. A good SAN delivers both high data transfer rates and low I/O request latency. Moreover, the SAN’s performance must be able to grow as the organization’s information storage and processing needs grow. As with other enterprise networks, it just isn’t practical to replace a SAN very often.

On the positive side, a SAN that does scale provides an extra application performance boost by separating high-volume I/O traffic from client/server message traffic, giving each a path that is optimal for its characteristics and eliminating cross talk between them.

The investment required to implement a SAN is high, both in terms of direct capital cost and in terms of the time and energy required to learn the technology and to design, deploy, tune, and manage the SAN. Any well-managed enterprise will do a cost-benefit analysis before deciding to implement storage networking. The results of such an analysis will almost certainly indicate that the biggest payback comes from using a SAN to connect the enterprise’s most important data to the computers that run its most critical applications.

An enterprise’s most critical data is the data it can least afford to be without. Together, the natural desire for maximum return on investment and the criticality of operational data lead to Rule 1 of storage networking.

Great SAN

A great SAN provides additional business benefits and features, depending on the product and manufacturer: the features of storage networking, such as universal connectivity, high availability, high performance, and advanced function, and the benefits of storage networking that support larger organizational goals, such as reduced cost and improved quality of service.

  • SAN connectivity enables the grouping of computers into cooperative clusters that can recover quickly from equipment or application failures and allow data processing to continue 24 hours a day, every day of the year.
  • With long-distance storage networking, 24 × 7 access to important data can be extended across metropolitan areas and indeed, with some implementations, around the world. Not only does this help protect access to information against disasters; it can also keep primary data close to where it’s used on a round-the-clock basis.
  • SANs remove high-intensity I/O traffic from the LAN used to service clients. This can sharply reduce the occurrence of unpredictable, long application response times, enabling new applications to be implemented or allowing existing distributed applications to evolve in ways that would not be possible if the LAN were also carting I/O traffic.
  • A dedicated backup server on a SAN can make more frequent backups possible because it reduces the impact of backup on application servers to almost nothing. More frequent backups means more up-to-date restores that require less time to execute.

Replication and disaster recovery

With so much data stored on a SAN, your client will likely want you to build disaster recovery into the system. SANs can be set up to automatically mirror data to another site, which could be a failsafe SAN a few metres away or a disaster recovery (DR) site hundreds or thousands of miles away.

If your client wants to build mirroring into the storage area network design, one of the first considerations is whether to replicate synchronously or asynchronously. Synchronous mirroring means that as data is written to the primary SAN, each change is sent to the secondary and must be acknowledged before the next write can happen.

The alternative is to asynchronously mirror changes to the secondary site. You can configure this replication to happen as quickly as every second, or every few minutes or hours. Because this means your client could permanently lose some data if the primary SAN goes down before it has a chance to copy its data to the secondary, your client should make calculations based on its recovery point objective (RPO) to determine how often it needs to mirror.
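As a rough worked example (all numbers are illustrative), you can sanity-check a proposed replication interval against the RPO and the link’s ability to drain the change rate:

# Illustrative RPO sanity check for asynchronous mirroring.
$rpoMinutes       = 15     # tolerated data loss window
$changeRateGBHour = 120    # measured write churn on the primary
$linkMbps         = 1000   # replication link bandwidth

$gbPerWindow  = $changeRateGBHour / 60 * $rpoMinutes        # data per RPO window
$drainMinutes = ($gbPerWindow * 8 * 1024) / $linkMbps / 60  # time to ship it

"{0} GB per {1}-minute window; about {2:N1} minutes to replicate" -f `
    $gbPerWindow, $rpoMinutes, $drainMinutes
# If the drain time approaches the RPO window, the link cannot keep up
# and the effective RPO will be worse than the target.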

Security

With several servers able to share the same physical hardware, it should be no surprise that security plays an important role in a storage area network design. Your client will want to know that servers can only access data if they’re specifically allowed to. If your client is using iSCSI, which runs on a standard Ethernet network, it’s also crucial to make sure outside parties won’t be able to hack into the network and have raw access to the SAN.

Capacity and scalability

A good storage area network design should not only accommodate your client’s current storage needs, but should also be scalable so that your client can upgrade the SAN as needed throughout the expected lifespan of the system. Because a SAN’s switch connects storage devices on one side and servers on the other, its number of ports can affect both storage capacity and speed. By allowing enough ports to support multiple, simultaneous connections to each server, switches can multiply the bandwidth to servers. On the storage device side, you should make sure you have enough ports for redundant connections to existing storage units, as well as units your client may want to add later.

Uptime and availability

Because several servers will rely on a SAN for all of their data, it’s important to make the system very reliable and eliminate any single points of failure. Most SAN hardware vendors offer redundancy within each unit, like dual power supplies, internal controllers and emergency batteries, but you should make sure that redundancy extends all the way to the servers. Availability and redundancy can be extended to multiple systems and across data centres, which comes with its own cost-benefit analysis and specific business requirements. If your business drives you to a zero-downtime policy, then data should be replicated to a disaster recovery site using a SAN identical to production, with appropriate software used to manage the replicated SANs.

Software and Hardware Capability

Great SAN management software delivers all the capabilities of the SAN hardware to the devices connected to the SAN. It’s very reasonable to expect to share a SAN-attached tape drive among several servers, because tape drives are expensive and they’re only actually in use while backups are occurring. If a tape drive is connected to computers through a SAN, different computers can use it at different times: all the computers get backed up, the tape drive investment is used efficiently, and capital expenditure stays low.

A SAN provides fully redundant, high-performance and highly available hardware and software, delivering application and business data to compute resources. Intelligent storage also provides data movement capabilities between devices.

Best OR Cheap

No vendor has ever developed all the components required to build a complete SAN, but most vendors are engaged in partnerships to qualify and offer complete SANs consisting of the partners’ products.

A best-in-class SAN delivers totally different performance and attributes to the business. A cheap SAN would provide a SAN over your existing Ethernet network. You should ask yourself the following questions, and find the answers, to determine what you need: best or cheap?

  1. Is this SAN capable of delivering business benefits?
  2. Is this SAN capable of managing your corporate workloads?
  3. Are you getting the correct I/O for your workloads?
  4. Are you getting the correct performance metrics for your applications, file systems and virtual infrastructure?
  5. Are you getting value for money?
  6. Do you have growth potential?
  7. Will your next data migration and software upgrade be seamless?
  8. Is this SAN a heterogeneous solution for you?

Storage as a Service vs on-premises

There are many vendors who provide storage as a service with lucrative pricing models. However, you should consider the following before choosing storage as a service.

  1. Is this vendor a partner of a recognised storage manufacturer?
  2. Does this vendor have a certified and experienced engineering team to look after your data?
  3. Does this vendor provide 24x7x365 support?
  4. Does this vendor provide true storage tiering?
  5. What is the geographic distance between the provider’s data center and your infrastructure, and how much would the WAN connectivity cost you?
  6. What will the storage latency and I/O be?
  7. Are you buying one-off capacity or a long-term corporate storage solution?

If the answers to these questions favour your business, then I would recommend you buy storage as a service; otherwise, on-premises is best for you.

NAS OR FC SAN OR iSCSI SAN OR Unified Storage

A NAS device provides file access to clients to which it connects using file access protocols (primarily CIFS and NFS) transported on Ethernet and TCP/IP.

An FC SAN device is a block-access device (i.e. it is a disk, or it emulates one or more disks) that connects to its clients using Fibre Channel and a block data access protocol such as SCSI.

iSCSI, which stands for Internet Small Computer System Interface, works on top of the Transmission Control Protocol (TCP) and allows SCSI commands to be sent end-to-end over local-area networks (LANs), wide-area networks (WANs) or the Internet.
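To make this concrete, here is a minimal sketch of connecting a Windows host to an iSCSI target with the built-in initiator cmdlets; the portal address, IQN and CHAP values are placeholders:

# Sketch: attach a Windows host to an iSCSI target (placeholder values).
New-IscsiTargetPortal -TargetPortalAddress '192.168.10.50'

Get-IscsiTarget   # discover target IQNs advertised by the portal

Connect-IscsiTarget -NodeAddress 'iqn.1992-04.com.example:target01' `
    -AuthenticationType ONEWAYCHAP `
    -ChapUsername 'host01' -ChapSecret 'Sup3rS3cretChap1' `
    -IsPersistent $true   # reconnect automatically after reboot

One-way CHAP, as shown, is the minimum you should run on a shared Ethernet segment, for the security reasons discussed in the Security section above.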

You have to know your business before you can answer the question: NAS, FC SAN, iSCSI SAN or unified? Would you like to maximise the benefit from the same investment? Then you are looking for a unified storage solution such as NetApp or EMC Isilon. If you are looking for enterprise-class, high-performance storage that isolates your Ethernet from storage traffic, reduces backup time and minimises RPO and RTO, then an FC SAN is best for you, for example EMC VNX or NetApp OnCommand Cluster. If your intention is to use the existing Ethernet and have shared storage, then you are looking for an iSCSI SAN, for example Nimble storage or Dell SC series storage. Having said that, you also need to consider your structured corporate data, unstructured corporate data and application performance before making a judgement call.

Decision Making Process

Let’s make a decision matrix as follows. Just fill in the blanks and see the outcome.

Workloads                  | I/O | Capacity Requirement (in TB) | Storage Protocol (FC, iSCSI, NFS, CIFS)
---------------------------|-----|------------------------------|----------------------------------------
Virtualization             |     |                              |
Unstructured Data          |     |                              |
Structured Data            |     |                              |
Messaging Systems          |     |                              |
Application virtualization |     |                              |
Collaboration application  |     |                              |
Business Application       |     |                              |

Functionality Matrix

Option        | Rating Requirement (1=High, 3=Medium, 5=Low)
--------------|---------------------------------------------
Redundancy    |
Uptime        |
Capacity      |
Data movement |
Management    |

Risk Assessment

Risk Type                 | Rating (Low, Medium, High)
--------------------------|---------------------------
Loss of productivity      |
Loss of redundancy        |
Reduced Capacity          |
Uptime                    |
Limited upgrade capacity  |
Disruptive migration path |

Service Data – SLA

Service Type         | SLA Target
---------------------|-----------
Hardware Replacement |
Uptime               |
Vendor Support       |

Rate storage via the Gartner Magic Quadrant. The Gartner Magic Quadrant leaders are (as of October 2015):

  1. EMC
  2. HP
  3. Hitachi
  4. Dell
  5. NetApp
  6. IBM
  7. Nimble Storage

To make your decision easy, select storage that enables you to manage large and rapidly growing data cost-effectively; storage that is built for agility and simplicity, providing both a tiered storage approach for specialized needs and the ability to unify all digital content into a single high-performance, highly scalable shared pool of storage; storage that accelerates productivity and reduces capital and operational expenditure while seamlessly scaling with the growth of mission-critical data.

How to configure SAN replication between IBM Storwize V3700 systems

The Metro Mirror and Global Mirror Copy Services features enable you to set up synchronous and asynchronous replication between two volumes on two separate IBM storage systems, so that updates made by an application to a volume on the storage system in the production site are mirrored to the matching volume on the storage system in the DR site.

  • The Metro Mirror feature provides synchronous replication. When a host writes to the primary volume, it does not receive confirmation of I/O completion until the write operation has completed for the copy on both the primary volume and the secondary volume. This ensures that the secondary volume is always up to date with the primary volume in the event that a failover operation must be performed. However, the host is limited by the latency and bandwidth of the communication link to the secondary volume.
  • The Global Mirror feature provides asynchronous replication. When a host writes to the primary volume, confirmation of I/O completion is received before the write operation has completed for the copy on the secondary volume. If a failover operation is performed, the application must recover and apply any updates that were not committed to the secondary volume. If I/O operations on the primary volume are paused for a short time, the secondary volume can become an exact match of the primary volume.
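The GUI steps below have CLI equivalents. Here is a hedged sketch of creating the same Metro Mirror relationship over ssh; the system, volume and relationship names are placeholders, and the exact command options vary by firmware level, so verify them against your Storwize CLI reference:

# Hedged sketch: partnership plus Metro Mirror relationship via the CLI.
# Names are placeholders; verify option spelling for your firmware level.
ssh superuser@prod-v3700 "mkpartnership -linkbandwidthmbits 2048 DR-V3700"
ssh superuser@dr-v3700 "mkpartnership -linkbandwidthmbits 2048 PROD-V3700"

# Omit -global for Metro Mirror; add it for Global Mirror
ssh superuser@prod-v3700 "mkrcrelationship -master ProdVol01 -aux DRVol01 -cluster DR-V3700 -name MMRel01"
ssh superuser@prod-v3700 "startrcrelationship MMRel01"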

Prerequisites:

  1. Both systems are connected via dark fibre, L2 MPLS or IP VPN for replication over IP
  2. Both systems are connected via fibre for replication over FC
  3. Both systems have the latest, up-to-date firmware
  4. Easy Tier and SSDs are installed in both systems
  5. The Remote Copy license is activated on both systems
  6. Volumes are identical in the prod and DR SANs

Configure Metro Mirror in IBM v3700 Systems

Step1: Activate License

Log on to IBM V3700>Settings>System>Licensing

Activate Remote Copy and Easy Tier License.


Step2: Configure Ethernet Ports & iSCSI in Production SAN and DR SAN

Both systems will communicate via the management network, but volumes will be replicated via Ethernet if remote copy is configured to use replication over Ethernet. This step is necessary for Metro Mirror over Ethernet; skip it if you are using FC.

Log on to the production IBM V3700 system. Settings>Network>Ethernet Ports. Right-click Node 1 Port 2>Configure Copy Group 1 and Copy Group 2. Assign an IP address, enable iSCSI and select Copy Group 1. Repeat to create Copy Group 2.


Repeat the step to configure Copy Groups in the DR SAN.
Note: The TCP/IP addresses assigned in the DR SAN can be from the same subnet as the production SAN or from a different subnet, as long as the two subnets can communicate with each other.
Step3: Create Partnership in Prod & DR SAN
Log on to Production V3700>Copy Services>Partnerships>Create Partnership>Add NetBIOS Name and Management IP of DR SAN


Fully Configured indicates that the partnership is defined on the local and remote systems and has started.


The initial synchronization bandwidth is 2048MBps, but once I take the DR storage to the DR site I will modify that to 1024MBps. The initial synchronization will take place in the prod site. You can use your own bandwidth specification.
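The same change can be made from the CLI once the partnership exists; this is a hedged sketch with placeholder names, and the bandwidth parameter name should be verified against your firmware’s chpartnership documentation:

# Hedged sketch: lower replication bandwidth after relocating the DR system.
ssh superuser@prod-v3700 "chpartnership -linkbandwidthmbits 1024 DR-V3700"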

Log on to DR V3700>Copy Services>Partnerships>Create Partnership>Add NetBIOS Name and Management IP of Prod SAN


Step4: Create Relationships between Volumes

Log on to Prod SAN>Copy Services>Remote Copy>Add Relationship>Select Metro Mirror


Specify the DR SAN as the Auxiliary system, where the DR volume is located. IBM should have used the wording “Master/Slave” or “Prod/DR systems”.


Add the identical volume and click Next.


Step5: Monitor Performance and Copy Services


Consistency group states

Consistent (synchronized) – The primary volumes are accessible for read and write I/O operations. The secondary volumes are accessible for read-only I/O operations.

Inconsistent (copying) – The primary volumes are accessible for read and write I/O operations, but the secondary volumes are not accessible for either operation. This state is entered after the startrcconsistgrp command is issued to a consistency group in the InconsistentStopped state. This state is also entered when the startrcconsistgrp command is issued, with the force option, to a consistency group in the Idling or ConsistentStopped state.

The background copy bandwidth can affect foreground I/O latency in one of three ways:

  • If the background copy bandwidth is set too high for the intersystem link capacity, the following results can occur:
    • The intersystem link is not able to process the background copy I/Os fast enough, and the I/Os can back up (accumulate).
    • For Metro Mirror, there is a delay in the synchronous secondary write operations of foreground I/Os.
    • For Global Mirror, the work is backlogged, which delays the processing of write operations and causes the relationship to stop. For Global Mirror in multiple-cycling mode, a backlog in the intersystem link can congest the local fabric and cause delays to data transfers.
    • The foreground I/O latency increases as detected by applications.
  • If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os.
  • If the background copy bandwidth is set too high for the storage at the secondary site, background copy write operations at the secondary overload the secondary storage and again delay the synchronous secondary write operations of foreground I/Os.
    • For Global Mirror without cycling mode, the work is backlogged and again the relationship is stopped.


Once volumes are synchronized, you are ready to integrate the storage with System Center 2012 R2.

Further Readings:

SAN Replication based Enterprise Grade Disaster Recovery with ASR and System Center

What’s New in 2012 R2: Cloud-integrated Disaster Recovery

Understanding IT Disaster Recovery Plan

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Multi-Site Hyper-v Cluster for High Availability and Disaster Recovery

For most SMB customers, the nodes of the cluster that reside at the primary data center provide access to the clustered service or application, with failover occurring only between clustered nodes. For an enterprise customer, however, failure of a business-critical application is not an option. In this case, disaster recovery and high availability are bound together, so that when both or all nodes at the primary site are lost, the nodes at the secondary site begin providing service automatically, or with minimal intervention.

The maximum availability of any service or application depends on how you design the platform that hosts it. It is important to follow best practices in compute, network and storage infrastructure to maximize uptime and maintain SLAs.

The following diagram shows a multi-site failover cluster that uses four nodes and supports a clustered service or application.


The following rack diagram shows the identical compute, storage and networking infrastructure in both sites.


Physical Infrastructure

  • Primary and secondary sites are connected via 2x10Gbps dark fibre
  • Storage vendor-specific replication software, such as EMC RecoverPoint
  • Storage must have redundant storage processors
  • There must be redundant switches for networking and storage
  • Each server must be connected to redundant switches, with redundant NICs for each purpose
  • Each Hyper-V server must have a minimum of dual Host Bus Adapter (HBA) ports connected to redundant MDS switches
  • Each network must be connected to dual NICs from server to switches
  • Only iLO/DRAC will have a single connection
  • Each site must have redundant power supplies

Storage Requirements

Since I am talking about a highly available and redundant systems design, this sort of design must consist of replicated or clustered storage presented to the multi-site Hyper-V cluster nodes. Replication of data between sites is very important in a multi-site cluster, and it is accomplished in different ways by different hardware vendors. You will achieve higher performance through hardware or block-level replication than through software replication. You should contact your storage vendor to come up with a solution that provides replicated or clustered storage.

Examples are:

StarWind Software

SteelEye DataKeeper

EMC RecoverPoint

HP Storage

Network Requirements:

A multi-site cluster running Windows Server 2008 can contain nodes that are in different subnets; however, as a best practice, you should configure the Hyper-V cluster in the same subnet. Your applications and services can reside in separate subnets. To avoid any conflict, you should use a dark fibre connection or an MPLS network between the sites that allows VLANs.

Note that you must configure Hyper-V with static IP addresses. In a multi-site cluster, you might want to tune the “heartbeat” settings; see http://go.microsoft.com/fwlink/?LinkId=130588 for details.
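On recent Windows Server releases the heartbeat tolerances are common cluster properties, so they can be tuned from PowerShell. The values below are illustrative only, not recommendations:

# Sketch: relax heartbeat tolerances for a stretched cluster (example values).
Import-Module FailoverClusters
$cluster = Get-Cluster

$cluster.SameSubnetDelay      = 1000   # ms between heartbeats, same subnet
$cluster.SameSubnetThreshold  = 10     # missed heartbeats before a node is down
$cluster.CrossSubnetDelay     = 2000   # ms between heartbeats across subnets
$cluster.CrossSubnetThreshold = 20     # missed heartbeats across subnets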

Network Configuration Spread Sheet

Network                                     | VLAN ID | NICs and Switch Port Speed
--------------------------------------------|---------|---------------------------
iLO/DRAC                                    | 10      | 1Gbps
MGMT                                        | 20      | 2x1Gbps
Live Migration                              | 30      | 2x10Gbps
Storage Migration                           | 40      | 2x10Gbps
Virtual Machine                             | 50,60   | 4x10Gbps
iSCSI Network                               | 70      | 4x10Gbps
Heartbeat network                           | 80      | 2x1Gbps
Storage Replication (separate from Hyper-V) | 90      | Dark fibre, 2x10Gbps

Note that the iSCSI network is only required if you are using IP storage instead of Fibre Channel (FC) storage.

Cluster Selection: Node and File Share Majority (For Cluster with Special Configurations)

Quorum Selection: Since you will be configuring a Node and File Share Majority cluster, you have the option to place the quorum files on a shared folder. Where do you place this shared folder? Since we are talking about a fully redundant and highly available Hyper-V cluster, we have several options for placing the quorum shared folder.

Option1: Secondary Site

Option 2: Third Site

Visit http://technet.microsoft.com/en-us/library/cc770620%28WS.10%29.aspx for more details on quorum.
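Whichever site hosts the share, the witness can be configured with the FailoverClusters PowerShell module; the UNC path below is a placeholder:

# Sketch: set a file share witness for Node and File Share Majority.
Import-Module FailoverClusters
Set-ClusterQuorum -NodeAndFileShareMajority '\\witness-server\HVClusterQuorum'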

Storage Configuration:

Visit http://www.starwindsoftware.com/images/content/technical_papers/StarWind_HA_Hyper-V_6.0.pdf , http://docs.us.sios.com/ and http://us.sios.com/wp-content/uploads/sios-datakeeper-replication-multi-site-clustering-windows-servers-enterprise.pdf for clustered storage configuration for Hyper-v.

Hyper-v Cluster Configuration:

Visit http://microsoftguru.com.au/2013/06/04/windows-server-2012-failover-clustering-deep-dive/ for detailed cluster configuration guide.

Replace Common Name (CN) and SAN Certificates with Wild Card Certificate— Step by Step

If you have a Common Name certificate or a Subject Alternative Name certificate on Exchange webmail or another website, and you would like to change it to a wildcard certificate to consolidate your certificate use across a wide variety of infrastructure and save money, you can do so safely, with minor downtime and little or no loss of productivity.

Microsoft accepts certified SSL providers, which are recorded at this URL: http://support.microsoft.com/kb/929395/en-us

Here is a guideline for how to accomplish this objective.

Step1: Check Current Exchange SSL Certificate

Open the Exchange Management Shell and issue the Get-ExchangeCertificate command. Record the information for future reference.

Step2: Record Proposed Exchange SSL Wildcard Certificate

  • Common Name: *.yourdomain.com.au
  • SAN: N/A
  • Organisation: Your Company
  • Department: ICT
  • City: Perth
  • State: WA
  • Country: Australia
  • Key Size: 2048

Step3: Generate a wildcard certificate request

You can use https://www.digicert.com/easy-csr/exchange2007.htm to generate a certificate request command for Exchange server.

New-ExchangeCertificate -GenerateRequest -Path c:\star_your_company.csr -KeySize 2048 -SubjectName "c=AU, s=Western Australia, l=Perth, o=Your Company, ou=ICT, cn=*.yourdomain.com.au" -PrivateKeyExportable $True

Step4: Sign the certificate request and download SSL certificate in PKCS#7 format

For more information, go to your certificate provider’s help pages. In this example I am using RapidSSL. Reference: https://knowledge.rapidssl.com/support/ssl-certificate-support/index?page=content&id=SO14293&actp=search&viewlocale=en_US&searchid=1380764656808

1. Click https://products.geotrust.com/geocenter/reissuance/reissue.do

2. Provide the common name, the technical contact e-mail address associated with the SSL order, and the image number generated from the GeoTrust User Authentication page.

3. Select Request Access against the correct order ID. An e-mail will be sent to the technical contact e-mail address specified above.

4. Click the link in the e-mail to enter the User Portal. Click View Certificate Information. Select the appropriate PKCS#7 or X.509 format from the drop-down menu, depending on the server requirements. NOTE: Microsoft IIS users should select PKCS#7 format and save the file with a .p7b extension.

5. Save the certificate locally and install it per the server software.

Step5: Locate and Disable the Existing CA certificate

Note that this step is disruptive for webmail. You must do it after hours.

1. Create a Certificate Snap-In in Microsoft Management Console (MMC) by following the steps from this link: SO14292

2. With the MMC and the Certificates snap-in open, expand the Trusted Root Certification Authorities folder on the left and select the Certificates sub-folder.

3. Locate the existing CA certificate in the MMC. If the certificate is present, it must be disabled. Right-click the certificate and select Properties.

4. In the Certificate purposes section, select Disable all purposes for this certificate. Click OK, then close the MMC without saving the console settings.

Step6: Install Certificate

To install an SSL certificate onto Microsoft Exchange, you will need to use the Exchange Management Shell (EMS). Microsoft reference: http://technet.microsoft.com/en-us/library/bb851505(v=exchg.80).aspx

1. Copy the SSL certificate file, for example newcert.p7b, and save it to C:\ on your Exchange server.

2. Run the Import-ExchangeCertificate and Enable-ExchangeCertificate commands together. For example:

Import-ExchangeCertificate -Path C:\newcert.p7b | Enable-ExchangeCertificate -Services "SMTP, IMAP, POP, IIS"

3. Verify that your certificate is enabled by running the Get-ExchangeCertificate command.

For example: Get-ExchangeCertificate -DomainName yourdomain.com.au

4. In the Services column, the letters S, I, P and W stand for SMTP, IMAP, POP3 and Web (IIS). If your certificate isn’t properly enabled, you can re-run the Enable-ExchangeCertificate command, pasting the thumbprint of your certificate as the -ThumbPrint argument, such as: Enable-ExchangeCertificate -ThumbPrint [paste] -Services "IIS"
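If you prefer not to paste the thumbprint by hand, a short EMS sketch can look it up; the subject filter is a placeholder for your own domain:

# Sketch: find the wildcard certificate and enable services against it.
$cert = Get-ExchangeCertificate | Where-Object { $_.Subject -like '*yourdomain.com.au*' }
Enable-ExchangeCertificate -Thumbprint $cert.Thumbprint -Services 'SMTP, IMAP, POP, IIS'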

Step7: Configure Outlook settings

Microsoft reference http://technet.microsoft.com/en-us/library/cc535023(v=exchg.80).aspx

Use the Exchange Management Shell to configure Autodiscover settings with the Set-OutlookProvider cmdlet if you are using Exchange 2007.

Set-OutlookProvider -Identity EXPR -CertPrincipalName msstd:*.yourdomain.com.au

To change Outlook 2007 connection settings to resolve a certificate error

1. In Outlook 2007, on the Tools menu, click Account Settings.

2. Select your e-mail address listed under Name, and then click Change.

3. Click More Settings. On the Connection tab, click Exchange Proxy Settings.

4. Select the Connect using SSL only check box.

5. Select the Only connect to proxy servers that have this principal name in their certificate: check box, and then, in the box that follows, enter msstd:*.yourdomain.com.au.

6. Click OK, and then click OK again.

7. Click Next. Click Finish. Click Close.

8. The new setting will take effect after you exit Outlook and open it again.

Step8: Export Certificate from Exchange in .pfx format

The following Step 8 to Step 10 are for Forefront TMG 2010 configuration only. If you are using a different method to publish Exchange, you don’t need to follow these steps; use the help file of your firewall/edge product to configure SSL.

Open Exchange Management Shell, run

Export-ExchangeCertificate -Thumbprint D6AF8C39D409B015A273571AE4AD8F48769C61DB -BinaryEncoded:$true -Path c:\certificates\export.pfx -Password:(Get-Credential).password

Step9: Import certificate in TMG 2010

1. Click Start, select Run and type mmc
2. Click on the File menu and select Add/Remove Snap-in
3. Click Add, select Certificates from the list of Standalone Snap-ins and click Add
4. Choose Computer Account and click Next
5. Choose Local Computer and click Finish
6. Close the window and click OK on the upper window
7. Go to Personal, then Certificates
8. Right-click, choose All Tasks, then Import
9. A wizard opens. Select the file holding the certificate you want to import.
10. Validate the default choices
11. Make sure your certificate appears in the list and that the intermediary and root certificates are in their respective stores. If not, place them in the appropriate store and replace existing certificates if needed.

Step10: Replace Certificate in Web Listener

1. Click Start and open the Forefront Threat Management Gateway console. The Forefront TMG console starts.

2. In the console tree, expand the name of your Security Server, and then click Firewall Policy.

3. In the results pane, double-click Remote Web Workplace Publishing Rule.

4. In Remote Web Workplace Publishing Rule Properties, click the Listener tab.

5. Select External Web Listener from the list, and then click Properties.

6. In External Web Listener Properties, click the Certificates tab.

7. Select Use a single certificate for this Web listener or Assign a certificate for each IP address, and then click Select Certificate.

8. In the Select Certificate dialog box, click a certificate in the list of available certificates, and then click Select. Click OK twice to close the Properties dialog boxes.

9. To save changes and update the configuration, in the results pane, click Apply.

Step11: Test OWA from external and internal network

On a mobile phone, open the browser, type webmail.yourdomain.com.au and log in using your credentials.

Make sure no certificate warning shows in IE.

Use the RapidSSL Installation Checker https://knowledge.rapidssl.com/support/ssl-certificate-support/index?page=content&actp=CROSSLINK&id=SO9556 to verify your certificate.
 

Relevant References

Request an Internet Server Certificate (IIS 7)

Using wildcard certificates