Understanding Software Defined Storage (SDS)

Software-defined storage (SDS) is an evolution of storage technology in the cloud era: a deployment of storage capability without any dependency on specific storage hardware. SDS handles the traditional aspects of storage, such as storage policy management, security, provisioning, upgrading and scaling, without the headache of the hardware layer. It is a software-based product rather than a hardware-based one. A software-defined storage product should have the following characteristics.

Characteristics of SDS

  • Management of the complete storage stack in software
  • Automated, policy-driven storage provisioning with SLAs
  • Ability to run on private, public or hybrid cloud platforms
  • Creation of usage metering and billing in a control panel
  • Logical storage services and capabilities that eliminate dependence on the underlying physical storage systems
  • Creation of logical storage pools
  • Creation of logical tiering of storage volumes
  • Aggregation of various physical storage into one or more logical pools
  • Storage virtualization
  • Thin provisioning of volumes from a logical pool of storage
  • Scale-out storage architecture, such as Microsoft Scale-Out File Server
  • Virtual Volumes (vVols), a proposal from VMware for a more transparent mapping between large volumes and the VM disk images within them
  • Parallel NFS (pNFS), a specific implementation which evolved within the NFS standard
  • OpenStack APIs for storage interaction, which have been applied to open-source projects as well as to vendor products
  • Independence from the underlying storage hardware
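
Two of these characteristics, pooling and thin provisioning, can be illustrated with a short sketch. The following Python model is purely illustrative (not any vendor's API): it aggregates physical disks into one logical pool and thin-provisions a volume, which advertises its full size but consumes pool capacity only as data is written.

```python
# Illustrative sketch: logical pooling and thin provisioning (capacities in GB).

class StoragePool:
    """Aggregates physical disks into one logical pool."""

    def __init__(self, disks):
        self.capacity = sum(disks)          # raw aggregated capacity
        self.volumes = {}                   # name -> {"provisioned", "used"}

    def create_thin_volume(self, name, provisioned_gb):
        # Thin provisioning: advertise full size, consume nothing up front.
        self.volumes[name] = {"provisioned": provisioned_gb, "used": 0}

    def write(self, name, gb):
        vol = self.volumes[name]
        vol["used"] = min(vol["provisioned"], vol["used"] + gb)

    def used(self):
        return sum(v["used"] for v in self.volumes.values())

pool = StoragePool([2000, 2000, 4000])      # three physical disks -> one pool
pool.create_thin_volume("vm-datastore", 6000)
pool.write("vm-datastore", 500)
print(pool.capacity, pool.used())           # 8000 500
```

Note that the thin volume (6000 GB) is larger than any single physical disk, yet only 500 GB of pool capacity is actually consumed.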

A software-defined storage product should not have the following limitations.

  • Glorified hardware that juggles between network and disk, e.g. Dell Compellent
  • Interdependence between hardware and software, e.g. Dell Compellent
  • High latency and low IOPS for production VMs
  • Active-passive management controllers
  • Repetitive hardware and software maintenance
  • Administrative and management overhead
  • The cost of retaining hardware and software, e.g. life-cycle management
  • Factory-defined limitations, e.g. "can't do" situations
  • Production downtime for maintenance work, e.g. Dell Compellent maintenance

The following vendors provide software-defined storage products in the current market.

Software-Only Vendors

  • Atlantis Computing
  • DataCore Software
  • SANBOLIC
  • Nexenta
  • Maxta
  • CloudByte
  • VMware
  • Microsoft

Mainstream Storage Vendors

  • EMC ViPR
  • HP StoreVirtual
  • Hitachi
  • IBM SmartCloud Virtual Storage Center
  • NetApp Data ONTAP

Storage Appliance Vendors

  • Tintri
  • Nimble
  • Solidfire
  • Nutanix
  • Zadara Storage

Hyper-Converged Appliance Vendors

  • Cisco (starting price $59K for HyperFlex systems, 1 year of support included)
  • Nutanix
  • VCE (starting price $60K for VxRail systems, support included)
  • Simplivity Corporation
  • Maxta
  • Pivot3 Inc.
  • Scale Computing Inc
  • EMC Corporation
  • VMware Inc

Ultimately, SDS should, and will, provide businesses with worry-free management of storage without the limitations of hardware. There are compelling use cases for an enterprise to adopt software-defined storage.

Relevant Articles

VMware Increases Price Again

VMware has increased its prices again. As per the VMware pricing FAQ, the following pricing model takes effect on April 1, 2016.

  • vSphere with Operations Management Enterprise Plus: from US$4,245/CPU to US$4,395/CPU
  • VMware vCenter Server™ Standard: from US$4,995/instance to US$5,995/instance

vSphere with Operations Management Enterprise Plus now includes enhancements to Workload Placement, and vCenter Server™ Standard now includes 25 Operating System Instances of VMware vRealize® Log Insight™ for vCenter.

vSphere Enterprise and vSphere with Operations Management Enterprise customers are also entitled to a 50% discount on an optional upgrade to vSphere Enterprise Plus and vSphere with Operations Management Enterprise Plus. This offer is valid until June 25, 2016.
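
For context, the increases work out as follows; this is a quick Python check of the arithmetic, using only the list prices quoted above.

```python
# Percentage increase for each affected SKU (list prices quoted above).

old = {"vSOM Enterprise Plus (per CPU)": 4245,
       "vCenter Server Standard (per instance)": 4995}
new = {"vSOM Enterprise Plus (per CPU)": 4395,
       "vCenter Server Standard (per instance)": 5995}

for sku in old:
    pct = (new[sku] - old[sku]) / old[sku] * 100
    print(f"{sku}: +${new[sku] - old[sku]} ({pct:.1f}%)")
# vSOM Enterprise Plus (per CPU): +$150 (3.5%)
# vCenter Server Standard (per instance): +$1000 (20.0%)
```

So the vCenter Server Standard increase is by far the larger one in relative terms.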

Relevant Information

VMware Licensing FAQ

Hyper-v Server 2016 licensing FAQ

Windows Server 2016 datasheet

Windows Server 2016

Understanding Software Defined Networking (SDN) and Network Virtualization

The evolution of virtualization has led to a wide range of virtualized technologies, including the key building block of a data center: the network. A traditional network used to be a wired mesh of physical switches and devices, and a network administrator faced the nightmare of making one configuration change while risking breaking another configuration in the process. Putting together a massive data center was an expensive venture and a lengthy project. With virtualization and cloud services on the horizon, almost anything can now be offered as a service, and almost anything can be virtualized and software defined.

Since the development of Microsoft SCVMM and VMware NSX, network functions virtualization (NFV), network virtualization (NV) and software-defined networking (SDN) have been making a bold statement to on-premises customers and cloud service providers alike. Of all the benefits of a software-defined network, two stand out: easy provisioning of a network, and easy change control of that network. You don't have to fiddle with the physical layer of the network, and you certainly don't have to modify a virtual host to provision a complete network with a few mouse clicks. How does it work?

Software Defined Networking: Software-defined networking (SDN) is a dynamic, manageable, cost-effective and adaptable high-bandwidth, agile, open architecture. SDN architectures decouple network control and forwarding functions, enabling network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services. Examples of Cisco software-defined networking are here.

The fundamental building blocks of SDN are:

  • Programmable: Network control is directly programmable because it is decoupled from forwarding functions.
  • Agile: Abstracting control from forwarding lets administrators dynamically adjust network-wide traffic flow to meet changing needs.
  • Centrally managed: Network intelligence is (logically) centralized in software-based SDN controllers that maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Programmatically configured: SDN lets network managers configure, manage, secure, and optimize network resources very quickly via dynamic, automated SDN programs, which they can write themselves because the programs do not depend on proprietary software.
  • Open standards-based and vendor-neutral: When implemented through open standards, SDN simplifies network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols.
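
The control/forwarding split described above can be sketched in a few lines of Python. This is an illustrative toy, not a real controller API such as OpenFlow: a logically centralized controller holds the network-wide view and programs each switch's flow table, while the switch itself only forwards.

```python
# Illustrative sketch of SDN's control-plane / data-plane decoupling.

class Switch:
    """Data plane: forwards packets purely from its programmed flow table."""

    def __init__(self, name):
        self.name = name
        self.flow_table = {}                # dst address -> out port

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: global view, pushes flow rules down to switches."""

    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def install_policy(self, switch_name, dst, out_port):
        # A control-plane decision, programmed onto the data plane.
        self.switches[switch_name].flow_table[dst] = out_port

ctrl = Controller()
s1 = Switch("s1")
ctrl.register(s1)
ctrl.install_policy("s1", "10.0.0.5", "port2")
print(s1.forward("10.0.0.5"))               # port2
print(s1.forward("10.0.0.9"))               # drop
```

The switch never makes a decision of its own; change the policy in one place (the controller) and every registered switch can be reprogrammed.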

Cisco SDN Capable Switches

Modular Switches

Cisco Nexus 9516
Cisco Nexus 9508
Cisco Nexus 9504

Fixed Switches

Cisco Nexus 9396PX
Cisco Nexus 9396TX
Cisco Nexus 93128TX
Cisco Nexus 9372PX
Cisco Nexus 9372TX
Cisco Nexus 9336PQ ACI Spine Switch
Cisco Nexus 9332PQ

Network Virtualization: A virtualized network simply partitions an existing physical network into multiple logical networks. Network virtualization creates logical segments in an existing network by dividing it logically at the flow level. The end goal is to allow multiple virtual machines in the same logical segment, or a private portion of the network allocated to a business. In a physical network you cannot have the same IP address range within the same network and manage traffic for two different kinds of services and applications, but in a virtual world you can have the same IP range segregated into logical networks.

Let's say two different businesses/tenants both use the 10.124.3.x/24 IP address scheme in their internal networks, and both decide to migrate to the Microsoft Azure platform and bring their own IP address scheme (10.124.3.x/24) with them. It is absolutely possible for them to retain their own IP addresses and migrate to Microsoft Azure. You will not see the conflict within the Azure portal, and you will not even know that another organisation has the same internal IP address scheme and is possibly hosted on the same Hyper-V host. It is programmatically and logically managed by Azure Stack and SCVMM network virtualization technology.
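
The tenant isolation just described can be sketched as follows. This illustrative Python snippet is not the actual Azure/SCVMM implementation, but it shows the core idea behind NVGRE's VSID (or VXLAN's VNI): every lookup is qualified by a tenant ID, so two tenants can reuse the same 10.124.3.x/24 addresses without collision.

```python
# Illustrative sketch: tenant-qualified addressing on shared infrastructure.
# Tenant IDs (5001, 5002) and VM names are made up for the example.

fabric = {}                                  # (tenant_id, customer_ip) -> VM

def attach_vm(tenant_id, customer_ip, vm_name):
    # The same customer IP can appear once per tenant.
    fabric[(tenant_id, customer_ip)] = vm_name

def deliver(tenant_id, customer_ip):
    # Delivery is always keyed by tenant ID, never by IP alone.
    return fabric[(tenant_id, customer_ip)]

attach_vm(5001, "10.124.3.10", "contoso-web01")   # tenant A
attach_vm(5002, "10.124.3.10", "fabrikam-db01")   # tenant B, same IP

print(deliver(5001, "10.124.3.10"))  # contoso-web01
print(deliver(5002, "10.124.3.10"))  # fabrikam-db01
```

Because the key is the (tenant, IP) pair, neither tenant ever sees or disturbs the other's traffic, even on the same host.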

Network Functions Virtualization: Network functions virtualization virtualizes layers 4 to 7 of the OSI model in a software-defined network. NFV runs on high-performance x86 platforms, and it enables users to turn up functions on selected tunnels in the network. The end goal is to allow an administrator to create a service profile for a VM, create a logical workflow within the network (the tunnel), and then build virtual services on that specific logical environment. NFV saves a lot of time in provisioning and managing the application level of the network. Functions like IDS, firewalls and load balancers can be virtualized in Microsoft SCVMM and VMware NSX.
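
A service chain of virtualized network functions can be sketched like this. The functions and rules below are illustrative, not NSX or SCVMM APIs: traffic on a tunnel passes through a firewall, an IDS and a load balancer, all implemented in software.

```python
# Illustrative NFV service chain: firewall -> IDS -> load balancer.
# Rules, ports and backend names are made up for the example.

def firewall(packet):
    return packet if packet["port"] in (80, 443) else None   # allow web only

def ids(packet):
    if "attack" in packet.get("payload", ""):
        return None                                          # drop suspicious
    return packet

backends = ["web01", "web02"]
def load_balancer(packet, counter=[0]):
    packet["backend"] = backends[counter[0] % len(backends)] # round robin
    counter[0] += 1
    return packet

def service_chain(packet):
    # Apply each virtual network function in order; a None drops the packet.
    for vnf in (firewall, ids, load_balancer):
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

print(service_chain({"port": 80, "payload": "GET /"}))   # passes, gets a backend
print(service_chain({"port": 23, "payload": "telnet"}))  # None (blocked)
```

Adding, removing or reordering a function is a one-line change to the chain, which is exactly the provisioning-speed argument made above.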

Here are some Cisco NFV products.

IOS-XRv Virtual Router: Scale your network when and where you need with this carrier-class router.

Network Service Virtualization: Network Service Virtualization (NSV) virtualizes a network service, for example a firewall module or an IPS software instance, by dividing the software image so that it may be accessed independently by different applications, all from a common hardware base. NSV eliminates the cost of acquiring separate hardware for a single purpose; instead, the same hardware serves a different purpose every time the network is accessed or a service is requested. It also opens the door for service providers to offer security as a service to various customers.

Network security appliances are now bundled as a set of security functions within one appliance. For example, firewalls were offered on special purpose hardware as were IPS (Intrusion Protection System), Web Filter, Content Filter, VPN (Virtual Private Network), NBAD (Network-Based Anomaly Detection) and other security products. This integration allows for greater software collaboration between security elements, lowers cost of acquisition and streamlines operations.

The following Cisco virtualized network services are available on the Cisco Catalyst 6500 series platform.

Network security virtualization

  • Virtual firewall contexts, also called security contexts
  • Up to 250 mixed-mode virtual firewalls
  • Routed firewalls (Layer 3)
  • Transparent firewalls (Layer 2, or stealth)
  • Mixed-mode firewalls: a combination of Layer 2 and Layer 3 firewalls coexisting on the same physical firewall

Virtual Routing and Forwarding (VRF) network services

  • NetFlow on VRF interfaces
  • VRF-aware syslog
  • VRF-aware TACACS
  • VRF-aware Telnet
  • Virtualized address management policies using VRF-aware DHCP
  • Optimized traffic redirection using PBR-set VRF

Finally, you can have all of this in one basket without incurring the cost of each separate component once you have System Center Virtual Machine Manager or Microsoft Azure Stack implemented in your on-premises infrastructure, or if you choose to migrate to the Microsoft Azure platform.

Relevant Articles

Comparing VMware vSwitch with SCVMM Network Virtualization

Understanding Network Virtualization in SCVMM 2012 R2

Cisco Nexus 1000V Switch for Microsoft Hyper-V

How to implement hardware load balancer in SCVMM

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Comparing VMware vSwitch with SCVMM Network Virtualization

| Feature | vSphere Standard vSwitch | vSphere DV Switch | System Center VMM 2012 R2 |
|---|---|---|---|
| Switch Features | Yes | Yes | Yes |
| Layer 2 Forwarding | Yes | Yes | Yes |
| IEEE 802.1Q VLAN Tagging | Yes | Yes | Yes |
| Multicast Support | Yes | Yes | Yes |
| Network Policy | – | Yes | Yes |
| Network Migration | – | Yes | Yes |
| NVGRE/VXLAN | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes |
| L3 Network Support | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes |
| Network Virtualization | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes |
| NIC Teaming | Yes | Yes | Yes |
| Network Load Balancing | Procure NSX or Cisco appliance | Procure NSX or Cisco appliance | Yes |
| Virtual Switch Extension | – | Yes | Yes |
| Physical Switch Connectivity | | | |
| EtherChannel | Yes | Yes | Yes |
| Load Balancing Algorithms | | | |
| Port Monitoring | Yes | Yes | Yes |
| Third-party Hardware Load Balancing | – | Yes | Yes |
| Traffic Management Features | | | |
| Bandwidth Limiting | – | Yes | Yes |
| Traffic Monitoring | – | Yes | Yes |
| Security Features | | | |
| Port Security | Yes | Yes | Yes |
| Private VLANs | – | Yes | Yes |
| Management Features | | | |
| Manageability | Yes | Yes | Yes |
| Third Party APIs | – | Yes | Yes |
| Port Policy | Yes | Yes | Yes |
| NetFlow | Yes* | Yes* | Yes |
| Syslog | Yes** | Yes** | Yes |
| SNMP | Yes | Yes | Yes |

* Experimental Support

** Virtual switch network syslog information is exported and included with VMware ESX events.

References:

VMware Distributed Switch

VMware NSX

Microsoft System Center Features 

Related Articles:

Understanding Network Virtualization in SCVMM 2012 R2

Cisco Nexus 1000V Switch for Microsoft Hyper-V

How to implement hardware load balancer in SCVMM

Understanding VLAN, Trunk, NIC Teaming, Virtual Switch Configuration in Hyper-v Server 2012 R2

Install and Configure IBM V3700, Brocade 300B Fabric and ESXi Host Step by Step

Step 1: Hardware Installation

Follow the official IBM video tutorial to rack and stack IBM V3700.

 


Cabling V3700, ESX Host and Fabric.

  • Connect each canister of the V3700 storage to the two fabrics: Canister 1 FC Port 1 —> Fabric 1 and Canister 1 FC Port 2 —> Fabric 2; Canister 2 FC Port 1 —> Fabric 1 and Canister 2 FC Port 2 —> Fabric 2.
  • Connect the two HBAs of each host to the two fabrics.
  • Connect the disk enclosure to both canisters.
  • Connect both power cables of each device to separate power supplies.

Step 2: Initial Setup

Management IP Connection: Once you have racked the storage, connect Canister 1 Port 1 and Canister 2 Port 1 to two Gigabit Ethernet ports in the same VLAN. Make sure LACP is configured on both ports of your switch.


Follow the official IBM video tutorial to set up the management IP, or download the Redbook implementation guide and follow section 2.4.1.

 

Once initial setup is complete, log on to the V3700 storage using the management IP.

Click Monitoring>System>Enclosure>Canister. Record the serial number, part number and machine signature.


Step 3: Configure Event Notification

Click Settings>Event Notification>Edit. Add the correct email address. You must add the management IP address of the V3700 to your Exchange relay connector. Now click the Test button to check that you receive the email.


Step 4: Licensing Features

Click Settings>General>Licensing>Actions>Automatic>Add>Type the activation key>Activate. Repeat this step for all licensed features.


IBM V3700 At a Glance


Simple Definitions:

MDisk: A bunch of disks configured in a preset or user-defined RAID. For example, say you have 4 SSD drives and 20 SAS drives. You can choose automatic configuration of storage, which the wizard will set up for you. Alternatively, you can choose to have one RAID 5 MDisk for the SSDs and two RAID 5 MDisks with parity for the 20 SAS drives. In that case you will have three MDisks in your controller. Simply put, an MDisk is a RAID group.

Pool: A storage pool acts as a container for MDisks, with available capacity ready to be provisioned.

Volume: Volumes are logical segregations of pools. A volume can be defined as a LUN and mapped or connected to an ESXi or Hyper-V host.
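
The MDisk/Pool/Volume relationship can be sketched in a few lines. This is an illustrative Python model of the object hierarchy, not the V3700's CLI or API; it uses the simplified RAID 5 usable-capacity formula (n - 1) x drive size, and the drive counts are example values.

```python
# Illustrative sketch: drives -> MDisks (RAID groups) -> pool -> volumes (LUNs).

def mdisk_capacity(drive_count, drive_size_gb, raid_level="raid5"):
    if raid_level == "raid5":
        return (drive_count - 1) * drive_size_gb     # one drive's worth of parity
    return drive_count * drive_size_gb

class Pool:
    """Container for MDisks; volumes are carved from its capacity."""

    def __init__(self, name):
        self.name = name
        self.capacity = 0
        self.volumes = {}

    def add_mdisk(self, capacity_gb):
        self.capacity += capacity_gb

    def create_volume(self, name, size_gb):
        allocated = sum(self.volumes.values())
        if allocated + size_gb > self.capacity:
            raise ValueError("pool exhausted")
        self.volumes[name] = size_gb                 # ready to map to a host

pool = Pool("Pool01")
pool.add_mdisk(mdisk_capacity(10, 900))              # 10x900GB SAS, RAID 5
pool.add_mdisk(mdisk_capacity(3, 400))               # 3x400GB SSD, RAID 5
pool.create_volume("ESX_DS01", 2000)
print(pool.capacity)                                 # 8900
```

Real arrays subtract spares, metadata and extent rounding as well, but the containment hierarchy is the same one the GUI wizard walks you through below.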

Step 5: Configure Storage

Click Pools>Internal Storage>Configure Storage>Select recommended or user-defined. Follow the wizard to complete the MDisk.


Step 6: Configure Pool


Click Pools>Volumes by Pool>New Volume>Generic>Select the quantity, size and pool, then click Create.


Repeat the steps to create multiple volumes for Hyper-V or ESXi hosts.

Important! In the IBM V3700, if you configure storage using the GUI, the console automatically selects default settings, for example the number of MDisks; you will not have the option to select the disks and MDisks you want. For example, you may want to create one large MDisk containing 15 disks, but the GUI will create three MDisks instead, which means you lose a lot of capacity. To avoid this, you can create the MDisks, RAID arrays and pool using the command line.

Telnet to your storage using PuTTY. Log in with the username superuser and your password, then run the following commands.

Create the pool:

svctask mkmdiskgrp -ext 1024 -guiid 0 -name Pool01 -warning 80%

Create a spare with drive 6:

svctask chdrive -use spare 6

Create a RAID 5 group (12 disks):

svctask mkarray -drive 4:11:5:9:3:7:8:10:12:13:14 -level raid5 -sparegoal 1 0

Create an SSD RAID 5 array with Easy Tier (disk 0, disk 1 and disk 2 are SSDs):

svctask mkarray -drive 1:2:0 -level raid5 -sparegoal 0 Pool01

Now go back to the GUI and verify that you have created two MDisks and a pool with Easy Tier activated.

Step 7: Configure Fabric

Once you have racked and stacked the Brocade 300B switches, connect both Brocade switches to your Ethernet switch using CAT6 cables. This could be your core switch or an access switch. Connect your laptop to the network as well.

References are here. Quick Start Guide and Setup Guide

Default Passwords

username: root password: fibranne
username: admin password: password

Connect the console cable provided in the Brocade box to your PC, insert the EZSetup CD into your CD-ROM drive, and launch the EZSetup wizard. Select English > OK.

Alternatively, you can connect the switch to the network and connect to it via http://10.77.77.77 (give yourself an IP address of 10.77.77.1/24), using the username root and the password fibranne.

At the Welcome screen, click Next > Next > accept the EULA > Install > Done > Select Serial Cable > Next > Next (make sure HyperTerminal is NOT open or it will fail). The wizard should find the switch; set its IP details.


Click Next and follow the instructions.

Install Java on your laptop. Open a browser and type the IP address of the Brocade switch to connect to its web console. Log in using the default username root and password fibranne.


Click Switch Admin>Type the DNS server and domain name>Apply. Click License>Add New License>Apply.


Click Zone Admin>Alias>New Alias>Type the name of the alias>OK. Expand Switch Port, select the WWNN and click Add Members. Repeat for Canister Node 1, Canister Node 2, ESX Host 1, ESX Host 2, ESX Host 3…


Select the Zone tab>New Zone>Type the name of the zone, for example vSphere or Hyper_V. Select the aliases and click Add Members to add all aliases.


Select the Zone Config tab>New Zone Config>Type the name of the zone config>Select the vSphere or Hyper_V zone>Add Members.


Click Save Config, then click Enable Config.


Repeat the above steps for all fabrics.
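
The zoning objects used in the steps above form a simple hierarchy: aliases name WWNs, zones group aliases, and a zone config groups zones. The sketch below models that hierarchy in Python; the WWNs and names are illustrative placeholders, not real values from any switch.

```python
# Illustrative model of Brocade zoning: alias -> zone -> zone config.
# All WWNs and names below are placeholders.

aliases = {
    "V3700_Canister1": "50:05:07:68:03:00:00:01",
    "ESX_Host1_HBA1":  "21:00:00:24:ff:00:00:01",
}

zones = {
    "vSphere": ["V3700_Canister1", "ESX_Host1_HBA1"],
}

zone_config = {"Prod_Config": ["vSphere"]}

def effective_members(config_name):
    """Resolve a zone config down to the WWNs that are allowed to talk."""
    wwns = []
    for zone in zone_config[config_name]:
        for alias in zones[zone]:
            wwns.append(aliases[alias])
    return wwns

print(effective_members("Prod_Config"))
```

Saving and enabling the config in the GUI is effectively pushing this resolved membership list to the fabric; only WWNs that share a zone in the enabled config can see each other.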

Step 8: Configure Hosts in V3700

Click Hosts>New Host>Fibre Channel Host>Generic>Type the name of the host>Select the port>Create Host.


Right-click on each host>Map Volumes.


Step 9: Configure ESXi Host

Right-click on the ESX cluster>Rescan for datastores.


Click ESX Host>Configuration>Storage>Add Storage>Next>Select the correct matching UID>Type a name matching the volume name in the IBM V3700>Next>Select VMFS5>Finish.


Rescan datastores on all hosts again; you will see the same datastore appear on all hosts.

Finding HBA, WWNN

Select ESX Host>Configuration>Storage Adapters


Verifying Multipathing

Right-click on the storage>Properties>Manage Paths.


Finding and Matching UID

Log on to IBM V3700 Storage>Volume>Volumes


Click ESX Host>Configuration>Storage>Devices

image