Buying a SAN? How to select a SAN for your business?

A storage area network (SAN) is any high-performance network whose primary purpose is to enable storage devices to communicate with computer systems and with each other. With a SAN, the concept of a single host computer that owns data or storage isn’t meaningful. A SAN moves storage resources off the common user network and reorganizes them into an independent, high-performance network. This allows each server to access shared storage as if it were a drive directly attached to the server. When a host wants to access a storage device on the SAN, it sends out a block-based access request for the storage device.

A storage-area network is typically assembled using three principal components: cabling, host bus adapters (HBAs) and switches. Each switch and storage system on the SAN must be interconnected, and the physical interconnections must support bandwidth levels that can adequately handle peak data activities.

Good SAN

A good SAN provides the following functionality to the business.

High availability: A single SAN connecting all computers to all storage puts a lot of enterprise information accessibility eggs into one basket. The SAN had better be pretty indestructible or the enterprise could literally be out of business. A good SAN implementation will have built-in protection against just about any kind of failure imaginable. As we will see in later chapters, this means that not only must the links and switches composing the SAN infrastructure be able to survive component failures, but the storage devices, their interfaces to the SAN, and the computers themselves must all have built-in strategies for surviving and recovering from failures as well.


High performance: If a SAN interconnects a lot of computers and a lot of storage, it had better be able to deliver the performance they all need to do their respective jobs simultaneously. A good SAN delivers both high data transfer rates and low I/O request latency. Moreover, the SAN’s performance must be able to grow as the organization’s information storage and processing needs grow. As with other enterprise networks, it just isn’t practical to replace a SAN very often.

On the positive side, a SAN that does scale provides an extra application performance boost by separating high-volume I/O traffic from client/server message traffic, giving each a path that is optimal for its characteristics and eliminating cross talk between them.

The investment required to implement a SAN is high, both in terms of direct capital cost and in terms of the time and energy required to learn the technology and to design, deploy, tune, and manage the SAN. Any well-managed enterprise will do a cost-benefit analysis before deciding to implement storage networking. The results of such an analysis will almost certainly indicate that the biggest payback comes from using a SAN to connect the enterprise’s most important data to the computers that run its most critical applications.

An enterprise’s most critical data is the data it can least afford to be without. Together, the natural desire for maximum return on investment and the criticality of operational data lead to Rule 1 of storage networking: connect your most important data to the computers that run your most critical applications first.

Great SAN

A great SAN provides additional business benefits, plus additional features depending on the product and manufacturer. Storage networking offers features such as universal connectivity, high availability, high performance, and advanced functionality, along with benefits that support larger organizational goals, such as reduced cost and improved quality of service.

  • SAN connectivity enables the grouping of computers into cooperative clusters that can recover quickly from equipment or application failures and allow data processing to continue 24 hours a day, every day of the year.
  • With long-distance storage networking, 24 × 7 access to important data can be extended across metropolitan areas and indeed, with some implementations, around the world. Not only does this help protect access to information against disasters; it can also keep primary data close to where it’s used on a round-the-clock basis.
  • SANs remove high-intensity I/O traffic from the LAN used to service clients. This can sharply reduce the occurrence of unpredictable, long application response times, enabling new applications to be implemented or allowing existing distributed applications to evolve in ways that would not be possible if the LAN were also carrying I/O traffic.
  • A dedicated backup server on a SAN can make more frequent backups possible because it reduces the impact of backup on application servers to almost nothing. More frequent backups mean more up-to-date restores that require less time to execute.

Replication and disaster recovery

With so much data stored on a SAN, your client will likely want you to build disaster recovery into the system. SANs can be set up to automatically mirror data to another site, which could be a failover SAN a few meters away or a disaster recovery (DR) site hundreds or thousands of miles away.

If your client wants to build mirroring into the storage area network design, one of the first considerations is whether to replicate synchronously or asynchronously. Synchronous mirroring means that as data is written to the primary SAN, each change is sent to the secondary and must be acknowledged before the next write can happen.

The alternative is to asynchronously mirror changes to the secondary site. You can configure this replication to happen as quickly as every second, or every few minutes or hours, Schulz said. While this means that your client could permanently lose some data if the primary SAN goes down before it has a chance to copy its data to the secondary, your client can make calculations based on its recovery point objective (RPO) to determine how often it needs to mirror.
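As a rough illustration of that RPO calculation, the longest safe asynchronous replication interval can be derived from the RPO. This is a hedged sketch only; the safety factor and transfer-time figures are assumptions, not vendor guidance.

```python
# Illustrative sketch: pick an asynchronous replication interval from an RPO.
# With async mirroring, the worst-case data loss window is roughly the time
# since the last completed replication cycle, so the interval must not
# exceed the RPO (minus a margin for the time it takes to ship each delta).

def max_async_interval(rpo_seconds: float, transfer_seconds: float = 0.0,
                       safety_factor: float = 0.8) -> float:
    """Return the longest replication interval (seconds) that still meets the RPO."""
    if transfer_seconds >= rpo_seconds:
        raise ValueError("Transfer time alone already exceeds the RPO")
    return (rpo_seconds - transfer_seconds) * safety_factor

# Example: a 15-minute RPO with roughly 60 s to ship each delta
interval = max_async_interval(rpo_seconds=15 * 60, transfer_seconds=60)
print(f"Replicate at least every {interval / 60:.1f} minutes")
```

If the calculated interval is shorter than the link can sustain, that is the signal to consider synchronous mirroring or a bigger WAN pipe.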


Security

With several servers able to share the same physical hardware, it should be no surprise that security plays an important role in a storage area network design. Your client will want to know that servers can only access data if they’re specifically allowed to. If your client is using iSCSI, which runs on a standard Ethernet network, it’s also crucial to make sure outside parties won’t be able to hack into the network and have raw access to the SAN.

Capacity and scalability

A good storage area network design should not only accommodate your client’s current storage needs, but it should also be scalable so that your client can upgrade the SAN as needed throughout the expected lifespan of the system. Because a SAN’s switch connects storage devices on one side and servers on the other, its number of ports can affect both storage capacity and speed, Schulz said. By allowing enough ports to support multiple, simultaneous connections to each server, switches can multiply the bandwidth to servers. On the storage device side, you should make sure you have enough ports for redundant connections to existing storage units, as well as units your client may want to add later.
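A back-of-the-envelope port count for such a design might look like the following. This is a hypothetical sizing sketch; the dual-path assumption and the 25% growth headroom are illustrative figures, not a vendor formula.

```python
import math

# Hypothetical sizing sketch: estimate switch ports needed per fabric for a
# SAN where every server and storage array gets redundant connections.
# All figures below are illustrative assumptions.

def ports_needed(servers: int, arrays: int, paths_per_device: int = 2,
                 growth_headroom: float = 0.25) -> int:
    """Ports required per fabric, including headroom for future devices."""
    base = (servers + arrays) * paths_per_device
    return math.ceil(base * (1 + growth_headroom))

print(ports_needed(servers=20, arrays=2))  # 55 ports for 20 servers, 2 arrays
```

Running the numbers before buying switches avoids discovering mid-deployment that the fabric has no free ports for the next storage unit.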

Uptime and availability

Because several servers will rely on a SAN for all of their data, it’s important to make the system very reliable and eliminate any single points of failure. Most SAN hardware vendors offer redundancy within each unit — like dual power supplies, internal controllers and emergency batteries — but you should make sure that redundancy extends all the way to the server. Availability and redundancy can be extended across multiple systems and datacentres, subject to a cost-benefit analysis and specific business requirements. If your business requires a zero-downtime policy, then data should be replicated to a disaster recovery site using a SAN identical to production, managed with appropriate replication software.

Software and Hardware Capability

Great SAN management software delivers all the capabilities of the SAN hardware to the devices connected to the SAN. It’s very reasonable to expect to share a SAN-attached tape drive among several servers, because tape drives are expensive and they’re only actually in use while backups are occurring. If a tape drive is connected to computers through a SAN, different computers can use it at different times. All the computers get backed up, the tape drive investment is used efficiently, and capital expenditure stays low.

A SAN provides fully redundant, high-performance, highly available hardware and software to deliver application and business data to compute resources. Intelligent storage also provides data movement capabilities between devices.

Best OR Cheap

No vendor has ever developed all the components required to build a complete SAN, but most vendors are engaged in partnerships to qualify and offer complete SANs consisting of their partners’ products.

A best-in-class SAN provides totally different performance and attributes to the business. A cheap SAN would provide a SAN over your existing Ethernet network. Ask yourself the following questions and find the answers to determine what you need: best or cheap?

  1. Is this SAN capable of delivering business benefits?
  2. Is this SAN capable of managing your corporate workloads?
  3. Are you getting the correct I/O for your workloads?
  4. Are you getting the correct performance metrics for your applications, file systems and virtual infrastructure?
  5. Are you getting value for money?
  6. Do you have growth potential?
  7. Would your next data migration and software upgrade be seamless?
  8. Is this SAN a heterogeneous solution for you?

Storage as a Service vs on-premises

There are many vendors who provide storage as a service with lucrative pricing models. However, you should consider the following before choosing storage as a service.

  1. Is this vendor a partner of a recognised storage manufacturer?
  2. Does this vendor have a certified and experienced engineering team to look after your data?
  3. Does this vendor provide 24x7x365 support?
  4. Does this vendor provide true storage tiering?
  5. What is the geographic distance between the provider’s data center and your infrastructure, and how much would WAN connectivity cost you?
  6. What would the storage latency and I/O performance be?
  7. Are you buying one-off capacity or a long-term corporate storage solution?

If the answers to these questions favour your business, then I would recommend buying storage as a service; otherwise, on-premises is best for you.


A NAS device provides file access to clients to which it connects using file access protocols (primarily CIFS and NFS) transported on Ethernet and TCP/IP.

An FC SAN device is a block-access device (i.e. it is a disk or it emulates one or more disks) that connects to its clients using Fibre Channel and a block data access protocol such as SCSI.

iSCSI, which stands for Internet Small Computer System Interface, works on top of the Transmission Control Protocol (TCP) and allows SCSI commands to be sent end-to-end over local-area networks (LANs), wide-area networks (WANs) or the Internet.

You have to know your business before you can answer the question: NAS, FC SAN, iSCSI SAN or unified? If you would like to maximise the benefits from the same investment, then you are looking for a unified storage solution such as NetApp or EMC Isilon. If you are looking for enterprise-class, high-performance storage, want to isolate your Ethernet from storage traffic, reduce backup time and minimise RPO and RTO, then an FC SAN is best for you (for example, EMC VNX or NetApp OnCommand Cluster). If your intention is to use your existing Ethernet and have shared storage, then you are looking for an iSCSI SAN (for example, Nimble Storage or Dell SC series). Having said that, you also need to consider your structured corporate data, unstructured corporate data and application performance before making a judgement call.

Decision Making Process

Let’s make a decision matrix as follows. Just fill in the blanks and see the outcome.

Workloads | I/O | Capacity Requirement (in TB) | Storage Protocol
Unstructured Data | | |
Structured Data | | |
Messaging Systems | | |
Application virtualization | | |
Collaboration application | | |
Business Application | | |

Functionality Matrix

Option | Rating | Requirement (1=High, 3=Medium, 5=Low)
Data movement | |

Risk Assessment

Risk Type | Rating (Low, Medium, High)
Loss of productivity |
Loss of redundancy |
Reduced Capacity |
Limited upgrade capacity |
Disruptive migration path |

Service Data – SLA

Service Type | SLA Target
Hardware Replacement |
Vendor Support |
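One way to turn the matrices above into an outcome is a simple weighted score. This is an illustrative sketch only; the options, ratings and weights below are made-up examples, not recommendations.

```python
# Illustrative scorer for the decision matrices above. Ratings use the
# 1=High, 3=Medium, 5=Low scale from the functionality matrix, so the
# option with the LOWEST weighted total fits the requirements best.

def score_option(ratings: dict, weights: dict) -> float:
    """Weighted total of requirement ratings; lower is better."""
    return sum(ratings[req] * weights.get(req, 1.0) for req in ratings)

# Hypothetical example ratings for two candidate designs
fc_san = {"Data movement": 1, "Scalability": 1, "Cost": 5}
iscsi_san = {"Data movement": 3, "Scalability": 3, "Cost": 1}
weights = {"Cost": 2.0}  # in this example, cost matters twice as much

best = min([("FC SAN", fc_san), ("iSCSI SAN", iscsi_san)],
           key=lambda kv: score_option(kv[1], weights))
print(best[0])  # prints "iSCSI SAN" with these example numbers
```

Change the weights to match your own priorities and the outcome can flip, which is exactly the point of filling in the matrix before talking to vendors.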

Rate storage via the Gartner Magic Quadrant. The Gartner Magic Quadrant leaders are (as of October 2015):

  1. EMC
  2. HP
  3. Hitachi
  4. Dell
  5. NetApp
  6. IBM
  7. Nimble Storage

To make your decision easy, select storage that enables you to manage large and rapidly growing data cost-effectively; that is built for agility and simplicity; that provides both a tiered storage approach for specialized needs and the ability to unify all digital content into a single high-performance, highly scalable shared pool of storage; and that accelerates productivity and reduces capital and operational expenditure while seamlessly scaling with the growth of mission-critical data.


Understanding Dynamic Quorum in a Microsoft Failover Cluster

Windows Server 2012: Failover Clustering Deep Dive

Microsoft introduced an advanced quorum configuration option in Windows Server 2012/R2: you can choose to enable dynamic quorum management by the cluster. There are major benefits to having dynamic quorum in any Microsoft cluster, whether for an Exchange DAG, a SQL cluster, a Hyper-V cluster or a file server cluster. When you configure dynamic quorum, the cluster dynamically manages the vote assignment to nodes based on the state of each node. Votes are automatically removed from nodes that leave active cluster membership, and a vote is automatically assigned when a node re-joins the cluster. Dynamic quorum removes the dependency on a quorum disk in Hyper-V and also enables multi-site clusters in geographically diverse locations without a shared common disk.


Benefits:

  • With dynamic quorum management, it is also possible for a cluster to run on the last surviving cluster node.
  • By dynamically adjusting the quorum majority requirement, the cluster can sustain sequential node shutdowns down to a single node.
  • The cluster software automatically configures the quorum for a new cluster, based on the number of nodes configured and the availability of shared storage.


Limitations:

  • Dynamic quorum management does not allow the cluster to sustain a simultaneous failure of a majority of voting members. To continue running, the cluster must always have a quorum majority at the time of a node shutdown or failure.
  • If you have explicitly removed the vote of a node, the cluster cannot dynamically add or remove that vote.
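The contrast between sequential and simultaneous failures can be sketched with a toy vote counter. This is a simplified conceptual model only; real clusters also weigh witness votes and apply tie-breaking, which are ignored here.

```python
# Toy model of dynamic quorum (conceptual sketch, not the real algorithm:
# witness votes and tie-breaking are deliberately ignored). Sequential
# shutdowns succeed because the vote count follows active membership;
# losing a majority of current votes at once still stops the cluster.

def can_survive_sequential(nodes: int, dynamic_quorum: bool) -> int:
    """Count how many one-at-a-time node losses the cluster survives."""
    votes, alive, survived = nodes, nodes, 0
    while alive > 1:
        alive -= 1
        if dynamic_quorum:
            votes = alive            # dynamic quorum recalculates votes
        if alive * 2 <= votes:       # majority of current votes lost
            break
        survived += 1
    return survived

print(can_survive_sequential(5, dynamic_quorum=False))  # 2: stops at 3 of 5 nodes
print(can_survive_sequential(5, dynamic_quorum=True))   # 4: runs down to the last node
```

The model mirrors the bullets above: vote recalculation lets the cluster shrink gracefully, but it cannot rescue a simultaneous majority failure because the recalculation happens only after a clean membership change.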

How to configure a dynamic quorum?

Configure a standard cluster as you normally would in a Microsoft environment. Then use the Quorum Configuration Wizard in Failover Cluster Manager to configure the advanced quorum.

  1. In Failover Cluster Manager, select the cluster that you want to change.
  2. With the cluster selected>under Actions>click More Actions>and then click Configure Cluster Quorum Settings>Click Next.
  3. On the Select Quorum Configuration Option page>click Advanced quorum configuration and witness selection>Click Next.
  4. On the Select Voting Configuration page>select an option to assign votes to nodes. By default, all nodes are assigned a vote>Click Next.
  5. On the Configure Quorum Management page>enable Allow cluster to dynamically manage the assignment of node votes>Click Next.
  6. On the Select Quorum Witness page>select Do not configure a quorum witness>and then complete the wizard.
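The same configuration can be sketched in PowerShell on a cluster node. The cluster name below is an example, and note that dynamic quorum is enabled by default on Windows Server 2012 R2; treat this as a hedged sketch rather than a definitive runbook.

```powershell
# Enable dynamic quorum and use node majority with no witness
# ("Cluster01" is an example cluster name).
(Get-Cluster -Name "Cluster01").DynamicQuorum = 1
Set-ClusterQuorum -Cluster "Cluster01" -NodeMajority

# Inspect how votes are currently assigned to each node
Get-ClusterNode -Cluster "Cluster01" | Format-Table Name, State, DynamicWeight
```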

Once the quorum is reconfigured, run the Validate Quorum Configuration test to verify the updated quorum settings. Follow these steps to validate the quorum.

  1. In Failover Cluster Manager, select the cluster> run the Validate Quorum Configuration test to verify the updated quorum settings.

Design and Build Microsoft Distributed File System (DFS)


  • Windows and DFS Replication support folder paths with up to 32 thousand characters.
  • DFS Replication is not limited to folder paths of 260 characters.
  • Replication groups can span across domains within a single forest
  • VSS with DFS is supported.

Scalability on Windows Server 2012 R2

  • Size of all replicated files on a server: 100 terabytes.
  • Number of replicated files on a volume: 70 million.
  • Maximum file size: 250 gigabytes.
  • Files can be staged with a minimum size ranging from 16 KB to 1 MB. The default is 64 KB when RDC is enabled, and 256 KB on the sending member when RDC is disabled.
  • A namespace supports up to 5,000 folders with targets in Windows 2000 Server mode, and up to 50,000 folders with targets in Windows Server 2008 mode.

Scalability on Windows Server 2008 R2

  • Size of all replicated files on a server: 10 terabytes.
  • Number of replicated files on a volume: 11 million.
  • Maximum file size: 64 gigabytes.


The following are not supported:

  • Cross-forest replication
  • Using NTBackup to remotely back up DFS folders
  • DFS Replication in a workgroup environment

Determining Time Zone in DFS

Coordinated Universal Time (UTC). This option causes the receiving member to treat the schedule as an absolute clock. For example, a schedule that begins at 0800 UTC is the same for any location, regardless of time zone or whether daylight saving time is in effect for a receiving member. For example, assume that you set replication to begin at 0800 UTC. A receiving member in Eastern Standard Time would begin replicating at 3:00 A.M. local time (UTC – 5), and a receiving member in Rome would begin replicating at 9:00 A.M. local time (UTC + 1). Note that the UTC offset shifts when daylight saving time is in effect for a particular location.

Local time of receiving member. This option causes the receiving member to use its local time to start and stop replication. Local time is determined by the time zone and daylight savings time status of the receiving member. For example, a schedule that begins at 8:00 A.M. will cause every receiving member to begin replicating when the local time is 8:00 A.M. Note that daylight savings time does not cause the schedule to shift. If replication starts at 9 A.M. before daylight savings time, replication will still start at 9 A.M. when daylight savings time is in effect.
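The UTC example above can be checked with a few lines of Python. Fixed offsets are used, deliberately ignoring daylight saving time, just as the example in the text does.

```python
# Verify the UTC schedule example: 0800 UTC maps to different local start
# times depending on the receiving member's offset (DST ignored here).
from datetime import datetime, timedelta, timezone

schedule_start = datetime(2015, 10, 1, 8, 0, tzinfo=timezone.utc)  # 0800 UTC

eastern = timezone(timedelta(hours=-5))  # Eastern Standard Time (UTC - 5)
rome = timezone(timedelta(hours=1))      # Rome standard time (UTC + 1)

print(schedule_start.astimezone(eastern).strftime("%H:%M"))  # 03:00 local
print(schedule_start.astimezone(rome).strftime("%H:%M"))     # 09:00 local
```

The same arithmetic explains why a "local time" schedule does not shift with DST while a UTC schedule effectively does, from the local observer's point of view.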

Determine AD Forest

  • The forest uses the Windows Server 2003 or higher forest functional level.
  • The domain uses the Windows Server 2008 or higher domain functional level.
  • All namespace servers are running Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, or Windows Server 2008.

Using RDC:

Remote differential compression (RDC) is a client-server protocol that can be used to efficiently update files over a limited-bandwidth network. RDC detects insertions, removals, and rearrangements of data in files, enabling DFS Replication to replicate only the changes when files are updated. RDC is used only for files that are 64 KB or larger by default. RDC can use an older version of a file with the same name in the replicated folder or in the DfsrPrivate\ConflictandDeleted folder (located under the local path of the replicated folder).

RDC is used when the file exceeds a minimum size threshold. This size threshold is 64 KB by default. After a file exceeding that threshold has been replicated, updated versions of the file always use RDC, unless a large portion of the file is changed or RDC is disabled.

  • RDC is available in Windows Server 2008 R2 Enterprise and Datacenter Editions.
  • RDC is available in Windows Server 2012/R2 Standard and Datacenter Editions.
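What RDC buys you can be illustrated with a toy block-level diff. This is a simplification: real RDC uses content-defined chunk boundaries and recursive signatures rather than the fixed 64 KB blocks assumed here.

```python
# Conceptual sketch of what RDC achieves: transfer only the chunks that
# changed between two versions of a file. Fixed 64 KB blocks are a
# simplification; real RDC uses content-defined boundaries so that
# insertions do not shift every subsequent block.
import hashlib

BLOCK = 64 * 1024

def signatures(data: bytes) -> list:
    """Hash each fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Indices of blocks in 'new' whose signature differs from 'old'."""
    old_sigs, new_sigs = signatures(old), signatures(new)
    return [i for i, sig in enumerate(new_sigs)
            if i >= len(old_sigs) or sig != old_sigs[i]]

old = bytes(256 * 1024)                  # four 64 KB blocks of zeros
new = bytearray(old)
new[BLOCK:BLOCK + 4] = b"edit"           # touch only the second block
print(changed_blocks(old, bytes(new)))   # [1]: resend one block, not four
```

For a 256 KB file with a 4-byte edit, only one 64 KB block goes over the wire instead of the whole file, which is the bandwidth saving DFS Replication gets from RDC on branch-office links.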

DFS Namespaces Settings and Features

A referral is an ordered list of targets, transparent to the user, that a client receives from a domain controller or namespace server when the user accesses the namespace root or a folder with targets in the namespace. The client caches the referral for a configurable period of time.

Targets in the client’s Active Directory site are listed first in a referral. (Targets given the target priority “first among all targets” will be listed before targets in the client’s site.) The order in which targets outside of the client’s site appear in a referral is determined by one of the following referral ordering methods:

Lowest cost, Random order, Exclude targets outside of the client’s site

Design the Replication Topology

To publish data, you will likely use a hub-and-spoke topology, where one or more hub servers are located in data centers, and servers in branch offices will connect to one or more hub servers. To prevent the hub servers from becoming overloaded, we recommend that fewer than 100 spoke members replicate with the hub server at any given time. If you need more than 100 spoke members to replicate with a hub server, set up a staggered replication schedule to balance the replication load of the hub server.

The lowest cost ordering method works properly for all targets only if the Bridge all site links option in Active Directory is enabled. (This option, as well as site link costs, are available in the Active Directory Sites and Services snap-in.) An Inter-site Topology Generator that is running Windows Server 2003 relies on the Bridge all site links option being enabled to generate the inter-site cost matrix that the Distributed File System service requires for its site-costing functionality. If the Bridge all site links option is enabled, the servers in a referral are listed in the following order:

  1. The server in the branch site.
  2. The server in regional data center site 1. (Cost = 10)
  3. The server in regional data center site 2. (Cost = 30)
  4. The server in regional data center site 3. (Cost = 50)

A domain-based namespace can be hosted by multiple namespace servers to increase the availability of the namespace. Putting a namespace server in remote or branch offices also allows clients to contact a namespace server and receive referrals without having to cross expensive WAN connections.


Namespace server. A namespace server hosts a namespace. The namespace server can be a member server or a domain controller.

Namespace root. The namespace root is the starting point of the namespace. In the previous figure, the name of the root is Public, and the namespace path is \\Contoso\Public. This type of namespace is a domain-based namespace because it begins with a domain name (for example, Contoso) and its metadata is stored in Active Directory Domain Services (AD DS). Although a single namespace server is shown in the previous figure, a domain-based namespace can be hosted on multiple namespace servers to increase the availability of the namespace.

Folder. Folders without folder targets add structure and hierarchy to the namespace, and folders with folder targets provide users with actual content. When users browse a folder that has folder targets in the namespace, the client computer receives a referral that transparently redirects the client computer to one of the folder targets.

Folder targets. A folder target is the UNC path of a shared folder or another namespace that is associated with a folder in a namespace. The folder target is where data and content is stored. In the previous figure, the folder named Tools has two folder targets, one in London and one in New York, and the folder named Training Guides has a single folder target in New York. A user who browses to \\Contoso\Public\Software\Tools is transparently redirected to the shared folder \\server1\Tools or \\server2\Tools, depending on which site the user is currently located in.

By default, DFS replication between two members is bidirectional. Bidirectional connections occur in both directions and include two one-way connections. If you desire only a one-way connection, you can disable one of the connections or use share permissions to prevent the replication process from updating files on certain member servers.

Step1: Organise folder structure on multiple servers in geographically diverse locations


Server1 in Perth

Server2 in Melbourne

Step2: Install DFS on Server

Before setting up replication between servers, the DFS Replication role needs to be installed on each server that is going to participate in the replication group. Open Server Manager by clicking the Server Manager icon on the taskbar.

  1. On the Welcome Tile, under Quick Start, click on Add roles and features to start the Add Roles and Features Wizard. If there’s no Welcome Tile, it might be hidden. Click View on the menu bar and click Show Welcome Tile.
  2. Click Next.
  3. Select Role-based or feature-based installation and click Next.
  4. Select a server from the server pool and select the server on which you want to install DFS Replication. Click Next.
  5. Under Roles, expand File and Storage Services, expand File and iSCSI Services, select DFS Replication and click Next.
  6. If you have not already installed the features required for DFS Replication, the following box will pop up explaining which features and roles will be installed along with DFS Replication.
  7. Click Add Features.
  8. Back to the Select server roles dialog. It should now show DFS Replication as checked along with the other roles required for DFS Replication.
  9. Click Next.
  10. The Select features dialog shows the features that will be added along with the DFS Replication role.
  11. Click Next.
  12. Click Install.
  13. Click Close when the installation completes.
  14. You will notice a new DFS management icon.

Step3: Create New Namespace

  1. Double click on this icon to open the DFS Management MMC.
  2. In the DFS Management console, right click on Namespaces and select new namespace. In the New Namespace Wizard, select the server that will host the namespace (the DFS server) and click next to continue.
  3. Give your DFS namespace an easy-to-understand name and click next.
  4. The next step asks whether you want to use a domain-based namespace or a stand-alone namespace. Select domain-based DFS namespace and click next, then create.
  5. Once finished, you will see the newly created namespace in the namespace section of the DFS Manager along with its UNC path. This is the path you will use to access the DFS share.
  6. Now that we have created the namespace, it’s time to add some folders. In DFS, you can access multiple shared folders using a single drive letter. Add the required folders to the DFS namespace.
  7. Right click on the DFS namespace and select new folder.
  8. In the new folder window, create a folder named X, then click on the add button and locate the folder on the required server. When finished, click OK.
  9. Repeat the process to add the other shared folders.
  10. To test – Open a browser and type the UNC path of your DFS namespace. All folders appear in a single share.
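If you prefer the command line, the namespace steps above can be sketched in PowerShell using the DFS Namespaces cmdlets. Server, domain and share names below are examples only; treat this as a hedged sketch of the same workflow.

```powershell
# Install the DFS Namespace role, then create a domain-based (2008 mode)
# namespace root and a folder with a target. Names are placeholders.
Install-WindowsFeature FS-DFS-Namespace -IncludeManagementTools

New-DfsnRoot -Path "\\contoso.com\Public" -Type DomainV2 `
    -TargetPath "\\Server1\Public"

New-DfsnFolder -Path "\\contoso.com\Public\Tools" `
    -TargetPath "\\Server1\Tools"
```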

Step5: Replicate Folders

  1. In the DFS Management console, double click on the folder to view its path.
  2. Log in to server 2 and create a folder named admin as well.
  3. Right click on the folder and select add folder target.
  4. Enter the UNC path of the folder located on the second server and click OK.
  5. You will be prompted to create a replication group. Click yes.
  6. Follow the wizard to configure the replication parameters.
  • Primary Member: This is the server that has the initial copy of the files you want to replicate.
  • Topology: This dictates in what fashion the replication will occur.
  • Bandwidth and Schedule: How much bandwidth to allocate and when to synchronize.
  7. Once you have finished, click create. Any file that you create, modify or delete when using the namespace UNC path will be almost immediately copied to both replicating folders.

Step6: Manually creating a replication group if you didn’t create one in the previous step

  1. In the console tree of the DFS Management snap-in, right-click the Replication node, and then click New Replication Group.
  2. Follow the steps in the New Replication Group Wizard and supply the information in the following table.
  3. Select Multipurpose replication group>Type the name of the replication group> Click Add to select at least two servers that will participate in replication. The servers must have the DFS Replication Service installed.
  4. Select Full Mesh> Select Replicate continuously using the specified bandwidth.> Select the member that has the most up-to-date content that you want to replicate to the other member.
  5. Click Add to enter the local path of the Data folder you created earlier on the first server. Use the name Data for the replicated folder name.
  6. On this page, you specify the location of the Data folder on the other members of the replication group. To specify the path, click Edit, and then in the Edit dialog box, click Enabled, and then type the local path of the Data folder.
  7. On this page, you specify the location of the Antivirus Signatures folder on the other members of the replication group. To specify the path, click Edit, and then in the Edit dialog box, click Enabled, and then type the local path of the Antivirus Signatures folder.
  8. Click Create to create the replication group.
  9. Click Close to close the wizard. Click OK to close the dialog box that warns you about the delay in initial replication.
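The same replication group can be sketched with the DFS Replication PowerShell cmdlets. Group, server and path names below are examples; adjust them for your environment.

```powershell
# Create a replication group, add two members, connect them (connections
# are bidirectional by default) and point both at a local Data folder.
New-DfsReplicationGroup -GroupName "RG-Data"
New-DfsReplicatedFolder -GroupName "RG-Data" -FolderName "Data"
Add-DfsrMember -GroupName "RG-Data" -ComputerName "Server1","Server2"
Add-DfsrConnection -GroupName "RG-Data" `
    -SourceComputerName "Server1" -DestinationComputerName "Server2"

# Server1 holds the authoritative initial copy (the "primary member")
Set-DfsrMembership -GroupName "RG-Data" -FolderName "Data" `
    -ComputerName "Server1" -ContentPath "D:\Data" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "RG-Data" -FolderName "Data" `
    -ComputerName "Server2" -ContentPath "D:\Data" -Force
```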

How to Configure Wild Card Certificate in Exchange Server 2013

You may experience certificate warnings when using OWA and Outlook after you install a wildcard certificate in your Exchange organization. There are resolutions available if you search Bing. Examples:

Certificate error message when you start Outlook or create an Outlook profile

SSL/TLS communication problems after you install KB 931125

“The name on the security certificate is invalid or does not match the name of the site”

This certificate with thumbprint 855951C368ECA4FF16AAAA82298E81B3F001BDED and subject ‘*’ cannot be used for IMAP SSL/TLS connections because the subject is not a Fully Qualified Domain Name (FQDN). Use the command Set-IMAPSettings to set X509CertificateName to the FQDN of the service.

This certificate with thumbprint 855951C368ECA4FF16A33D82298E81B3F001BDED and subject ‘*’ cannot be used for POP SSL/TLS connections because the subject is not a Fully Qualified Domain Name (FQDN). Use the command Set-POPSettings to set X509CertificateName to the FQDN of the service.

But the root cause is not addressed in these articles. You are using a wildcard certificate (*) or an incorrect certificate SAN in your Exchange server. You have to configure Autodiscover, OWA and OAB correctly to address these issues. If you are using an incorrect SAN, then you have to regenerate the CSR, re-issue the certificate and reconfigure the Exchange certificate in the Exchange EAC.

Check your DNS records. You must have the following DNS records internally and externally for Autodiscover to function correctly.

Internal record

If your internal domain is domain.local, then you must create a DNS zone within your DNS server. DNS must be set to round-robin. For example (hostnames here are placeholders):

Name | Type | Value
mail.domain.local | Host (A) | 10.143.8.x
mail2.domain.local | Host (A) | 10.143.8.y
autodiscover.domain.local | CNAME | mail.domain.local
owa.domain.local | CNAME | mail.domain.local

External records (again, hostnames are placeholders):

Name | Type | Value
mail.domain.com | Host (A) | 203.17.18.x
domain.com | MX (lowest priority) | mail.domain.com
autodiscover.domain.com | CNAME | mail.domain.com
owa.domain.com | CNAME | mail.domain.com

Let’s assume you have imported certificates in Exchange Administration Center. Now go to Exchange EAC>Click Servers>Click Certificates>Select Wild card certificate>Click Edit (Pen)>Services>Select IIS and SMTP>Click Save.

Now open the Exchange Management Shell using Run as administrator. Copy the following cmdlets, amend them for your domain (mail.domain.com in the examples below is a placeholder), and run them.

Step1: Setup OWA

Set-OwaVirtualDirectory -Identity "ServerName\owa (Default Web Site)" -InternalUrl https://mail.domain.com/owa -ExternalUrl https://mail.domain.com/owa

Step2: Setup ActiveSync

Set-ActiveSyncVirtualDirectory -Identity "ServerName\Microsoft-Server-ActiveSync (Default Web Site)" -InternalUrl https://mail.domain.com/Microsoft-Server-ActiveSync -ExternalUrl https://mail.domain.com/Microsoft-Server-ActiveSync

Step3: Setup Outlook Anywhere

Set-OutlookAnywhere -Identity "ServerName\Rpc (Default Web Site)" -InternalHostname mail.domain.com -ExternalHostname mail.domain.com -ExternalClientAuthenticationMethod Basic -IISAuthenticationMethods Basic,NTLM


Set-OutlookAnywhere –Identity “ServerName\Rpc (Default Web Site)” –InternalHostname –ExternalHostName

Set-OutlookAnywhere –Identity “ServerName\Rpc (Default Web Site)” –ExternalClientAuthenticationMethod Basic

Set-OutlookAnywhere –Identity “ServerName\Rpc (Default Web Site)” –IISAuthenticationMethods Basic,NTLM

Step4: Setup Web Services Virtual Directory

Set-WebServicesVirtualDirectory –Identity “ServerName\EWS (Default Web Site)” –InternalURL –ExternalURL -BasicAuthentication $true

Step5: Setup Client Access URL

Set-ClientAccessServer –Identity ServerName –AutoDiscoverServiceInternalUri

OR depending on DNS record

Set-ClientAccessServer –Identity ServerName –AutoDiscoverServiceInternalUri

Step6: Setup ECP URL

Set-EcpVirtualDirectory –Identity “ServerName\ecp (Default Web Site)” –InternalURL –ExternalURL

Step7: Setup OAB

Set-OabVirtualDirectory -Identity “SERVERNAME\OAB (Default Web Site)” -ExternalUrl

Step8: Setup Certificate principal name for outlook

Set-OutlookProvider EXCH -CertPrincipalName msstd:*

Step9: Setup POP and IMAP with FQDN/CNAME of Mail Server

set-POPSettings -X509CertificateName

set-IMAPSettings -X509CertificateName
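As a sketch, here are the same commands populated with a hypothetical namespace of mail.domain.com and autodiscover.domain.com; every hostname below is an assumption, so substitute the names that actually appear on your certificate:

```powershell
# All hostnames here are placeholders; use the names on your certificate's SAN
Set-OwaVirtualDirectory -Identity "ServerName\owa (Default Web Site)" -InternalUrl https://mail.domain.com/owa -ExternalUrl https://mail.domain.com/owa
Set-ActiveSyncVirtualDirectory -Identity "ServerName\Microsoft-Server-ActiveSync (Default Web Site)" -InternalUrl https://mail.domain.com/Microsoft-Server-ActiveSync -ExternalUrl https://mail.domain.com/Microsoft-Server-ActiveSync
Set-OutlookAnywhere -Identity "ServerName\Rpc (Default Web Site)" -InternalHostname mail.domain.com -ExternalHostname mail.domain.com -ExternalClientAuthenticationMethod Basic -IISAuthenticationMethods Basic,NTLM
Set-WebServicesVirtualDirectory -Identity "ServerName\EWS (Default Web Site)" -InternalUrl https://mail.domain.com/EWS/Exchange.asmx -ExternalUrl https://mail.domain.com/EWS/Exchange.asmx -BasicAuthentication $true
Set-ClientAccessServer -Identity ServerName -AutoDiscoverServiceInternalUri https://autodiscover.domain.com/Autodiscover/Autodiscover.xml
Set-EcpVirtualDirectory -Identity "ServerName\ecp (Default Web Site)" -InternalUrl https://mail.domain.com/ecp -ExternalUrl https://mail.domain.com/ecp
Set-OabVirtualDirectory -Identity "ServerName\OAB (Default Web Site)" -ExternalUrl https://mail.domain.com/OAB
Set-OutlookProvider EXCH -CertPrincipalName msstd:*.domain.com
Set-POPSettings -X509CertificateName mail.domain.com
Set-IMAPSettings -X509CertificateName mail.domain.com
```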

Now validate your settings. Issue the following cmdlets and check that the FQDNs and URLs match what you set earlier.

Get-WebServicesVirtualDirectory | Select InternalUrl, BasicAuthentication, ExternalUrl, Identity | Format-List

Get-OabVirtualDirectory | Select InternalUrl, ExternalUrl, Identity | Format-List

Get-ActiveSyncVirtualDirectory | Select InternalUrl, ExternalUrl, Identity | Format-List

Get-ClientAccessServer | Select Fqdn, AutoDiscoverServiceInternalUri, Identity | Format-List

Now recycle the app pool. Open IIS Manager>Expand Application Pools>Select MSExchangeAutodiscoverAppPool>Right-click and Recycle

Reboot the Exchange server, or issue the iisreset command on the Exchange server to restart services. I restarted my server; I prefer a restart after these modifications.

Client-side test:

  • Delete the Outlook profile
  • Make sure you use autodiscover to configure the mail client
  • Do not configure Outlook manually
  • Close IE, then reopen and test OWA

Last but not least, update all Exchange servers with the latest Microsoft Windows patches, Exchange service pack and Exchange update rollups.

Posted in Exchange Server

Migrate WSUS Server from Server 2008/R2 to Server 2012/R2

The following procedure applies if you have an existing WSUS server installed on Windows Server 2008 R2 with SQL Express and you wish to migrate to a Windows Server 2012 R2 WSUS server with a separate backend database server.

Step1: Backup SQL DB of Old WSUS Server

Log on to the existing WSUS server. Open SQL Management Studio>Connect to the DB>Right-click SUSDB>Back up the full database.
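If you prefer the shell, a minimal sketch using Invoke-Sqlcmd; the instance name and backup path below are assumptions, so adjust them to your installation:

```powershell
# Assumed SQL Express named instance and local backup folder
Invoke-Sqlcmd -ServerInstance ".\SQLEXPRESS" -Query @"
BACKUP DATABASE [SUSDB] TO DISK = N'C:\Backup\SUSDB.bak' WITH INIT;
"@
```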


Step2: Export metadata from old WSUS Server

The WSUS Setup program copies WSUSutil.exe to the file system of the WSUS server during installation. You must be a member of the local Administrators group and WSUS Administrator Group on the WSUS server to export or import metadata. Both operations can only be run from the WSUS server itself and during the import or export process, the Update Service is shut down.

Open a command prompt as an administrator>Go to C:\Program Files\Update Services\Tools

Issue the command: wsusutil.exe export c:\export.xml.gz c:\export.log

Move the export package you just created to the new Microsoft WSUS Server.


If you have .NET Framework v2 or v4 installed but not configured in the IIS application pool, then the above command will most likely fail and give you some grief. Here is a solution.

Verify that WSUS is configured to use the .NET 4 libraries in IIS>Application Pools


Create a file named wsusutil.exe.config in C:\Program Files\Update Services\Tools

Edit the file and add the following:

<configuration><startup><supportedRuntime version="v4.0.30319" /></startup></configuration>

If the issue persists, try unapproving KB3020369 in the WSUS console, then try again.

Re-run the wsusutil command, but instead of making a CAB file make a .xml.gz file, and all should be well.





Step3: Build New WSUS Server

Virtualize a new Windows Server 2012 R2 server. Set up a static IP and join the server to the domain. Install .NET Framework 4 on the new server. Do not configure WSUS at this stage; go to Step4.


Step4: Restore SQL DB in New SQL Server (Remote and/or Local )

Log on to the SQL server. Open SQL Management Studio>Create a database named SUSDB

Restore the old SUSDB backup over the new SUSDB with the overwrite option.

Assign the sysadmin and setupadmin roles to the account that will install the WSUS role on the new WSUS server.
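The restore and role assignment can be sketched in the shell as follows; the server name, backup path and login below are assumptions:

```powershell
# Assumed SQL server name, backup path and installing account
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query @"
RESTORE DATABASE [SUSDB] FROM DISK = N'C:\Backup\SUSDB.bak' WITH REPLACE;
ALTER SERVER ROLE [sysadmin] ADD MEMBER [DOMAIN\wsusadmin];
ALTER SERVER ROLE [setupadmin] ADD MEMBER [DOMAIN\wsusadmin];
"@
```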





Step5: Install WSUS Role & Run Initial Configuration Wizard.

Installation of WSUS

  • Log on to the server on which you plan to install the WSUS server role by using an account that is a member of the local Administrators group.

  • In Server Manager, click Manage, and then click Add Roles and Features.

  • On the Before you begin page, click Next.

  • On the Select installation type page, confirm that the Role-based or feature-based installation option is selected and click Next.

  • On the Select destination server page, choose where the server is located (from a server pool or from a virtual hard disk). After you select the location, choose the server on which you want to install the WSUS server role, and then click Next.

  • On the Select server roles page, select Windows Server Update Services. Add features that are required for Windows Server Update Services opens. Click Add Features, and then click Next.

  • On the Select features page, retain the default selections, and then click Next.

  • On the Windows Server Update Services page, click Next.

  • On the Select Role Services page, select Windows Server Update Services and Database, and then click Next.

  • On the Content location selection page, type a valid location to store the updates. For example, type E:\WSUS as the valid location.

  • Click Next. The Web Server Role (IIS) page opens. Review the information, and then click Next. In Select the role services to install for Web Server (IIS), retain the defaults, and then click Next.

  • On the Confirm installation selections page, review the selected options, and then click Install. The WSUS installation wizard runs. This might take several minutes to complete.

  • Once the WSUS installation is complete, in the summary window on the Installation progress page, click Launch Post-Installation tasks. The text changes, requesting: Please wait while your server is configured. When the task has finished, the text changes to: Configuration successfully completed. Click Close.

  • In Server Manager, verify whether a notification appears to inform you that a restart is required. This can vary according to the installed server role. If it requires a restart, make sure to restart the server to complete the installation.
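The same role installation can be scripted; a sketch assuming the E:\WSUS content path used in this article and a hypothetical SQL server name:

```powershell
# Install WSUS with the SQL database role service (not the Windows Internal Database)
Install-WindowsFeature -Name UpdateServices-Services, UpdateServices-DB -IncludeManagementTools

# Run the post-installation tasks; SQLSERVER01 and E:\WSUS are assumptions
& "C:\Program Files\Update Services\Tools\WsusUtil.exe" postinstall SQL_INSTANCE_NAME="SQLSERVER01" CONTENT_DIR="E:\WSUS"
```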


Post Configuration

Open Server Manager>Notifications. It will offer the remaining tasks from the earlier installation. Launch the post-configuration wizard.

  • On the Welcome page, click Next.

  • On the Installation Mode Selection page, select the Full server installation including Administration Console check box, and then click Next.

  • Read the terms of the license agreement carefully. Click I accept the terms of the License agreement, and then click Next.

On the Select Update Source page, you can specify where client computers get updates. If you select the Store updates locally check box, updates are stored on WSUS, and you can select a location (E:\WSUS) in the file system where updates should be stored. If you do not store updates locally, client computers connect to Microsoft Update to get approved updates.

Make your selection, and then click Next.

On the Database Options page, you select the software used to manage the WSUS database. Type <serverName>\<instanceName>, where serverName is the name of the server and instanceName is the name of the SQL instance, or simply type the remote or local SQL server name, and then click Next.

On the Web Site Selection page, you specify the Web site that WSUS will use to point client computers to WSUS. If you wish to use the default IIS Web site on port 80, select the first option. If you already have a Web site on port 80, you can create an alternate site on port 8530 by selecting the second option. Make your selection, and then click Next.

  • On the Ready to Install Windows Server Update Services page, review your choices, and then click Next.

  • The final page of the installation wizard will tell you whether or not the WSUS installation completed successfully. After you click Finish, the configuration wizard will be launched.


Step6: Match the Advanced Options on the old WSUS Server & the new WSUS Server

Ensure that the advanced synchronization options for express installation files and languages on the old server match the settings on the new server by following the steps below:

  1. In the WSUS console of the old WSUS server, click the Options tab, and then click Advanced in the Update Files and Languages section.
  2. In the Advanced Synchronization Settings dialog box, check the status of the settings for Download express installation files and Languages options.
  3. In the WSUS console of the new server, click the Options tab, and then click Advanced in the Update Files and Languages section.
  4. In the Advanced Synchronization Settings dialog box, make sure the settings for Download express installation files and Languages options match the selections on the old server.

Step7: Copy Updates from File System of the old WSUS Server to the new WSUS server

To back up updates from the file system of the old WSUS server to a file, follow these steps:

  1. On your old WSUS server, click Start, and then click Run.
  2. In the Run dialog box, type ntbackup. The Backup or Restore Wizard starts by default, unless it is disabled. You can use this wizard or click the link to work in Advanced Mode and use the following steps.
  3. Click the Backup tab, and then specify the folder where updates are stored on the old WSUS server. By default, WSUS stores updates at WSUSInstallationDrive:\WSUS\WSUSContent\.
  4. In Backup media or file name, type a path and file name for the backup (.bkf) file.
  5. Click Start Backup. The Backup Job Information dialog box appears.
  6. Click Advanced. Under Backup Type, click Incremental.
  7. From the Backup Job Information dialog box, click Start Backup to start the backup operation.
  8. Once completed, move the backup file you just created to the new WSUS server.

To restore updates from a file to the file system of the new server, follow these steps:

  1. On your new WSUS server, click Start, and then click Run.
  2. In the Run dialog box, type ntbackup. The Backup or Restore Wizard starts by default, unless it is disabled. You can use this wizard or click the link to work in Advanced Mode and use the following steps.
  3. Click the Restore and Manage Media tab, and select the backup file you created on the old WSUS server. If the file does not appear, right-click File, and then click Catalog File to add the location of the file.
  4. In Restore files to, click Alternate location. This option preserves the folder structure of the updates; all folders and subfolders will appear in the folder you designate. You must maintain the directory structure for all folders under \WSUSContent.
  5. Under Alternate location, specify the folder where updates are stored on the new WSUS server. By default, WSUS stores updates at WSUSInstallationDrive:\WSUS\WSUSContent\. Updates must appear in the folder on the new WSUS server designated to hold updates; this is typically done during installation.
  6. Click Start Restore. When the Confirm Restore dialog box appears, click OK to start the restore operation.

An alternative option would be to use the FastCopy software: copy and paste the WSUS content from the old server to the new server.
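Robocopy works just as well for this; a sketch assuming a default content path on the old server and a hypothetical new server name:

```powershell
# Mirror the update content to the new server, preserving the folder structure;
# NEWWSUS and both paths are assumptions
robocopy "D:\WSUS\WSUSContent" "\\NEWWSUS\E$\WSUS\WSUSContent" /E /Z /COPY:DAT /R:2 /W:5
```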

Step8: Copy Metadata from the Database on the old WSUS Server to the new WSUS Server

To import metadata into the database of the new Microsoft Windows Server Update Services server, follow these steps:

Copy the export.xml.gz file from the old server to the new server using copy/paste or the FastCopy software.

Note: It can take from 3 to 4 hours for the database to validate content that has just been imported.

At a command prompt on the new WSUS server, navigate to the directory that contains WSUSutil.exe. Type the following: wsusutil.exe import packagename logfile (for example: wsusutil.exe import export.xml.gz import.log)

Step9: Point your Clients to the new WSUS Server

Next you need to change the Group Policy and make it point to the new server. To redirect Automatic Updates to a WSUS server, follow these steps:

  1. In Group Policy Object Editor, expand Computer Configuration, expand Administrative Templates, expand Windows Components, and then click Windows Update.
  2. In the details pane, click Specify Intranet Microsoft update service location.
  3. Fill in the Set the intranet update service for detecting updates box and the Set the intranet statistics server box with the new server details and port. For example, type http(s)://newservername:port in both boxes.
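The policy above writes two registry values on the clients, which is a quick way to verify the redirection (or to set it on unmanaged machines). The server name below is an assumption:

```powershell
# The GPO sets these values under the Windows Update policy key
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name WUServer       -Value "http://newwsus.domain.local:8530"
Set-ItemProperty -Path $key -Name WUStatusServer -Value "http://newwsus.domain.local:8530"
```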

Step10: Invoke GPUpdate

Open a PowerShell prompt as an administrator on any computer. Run Invoke-GPUpdate -Computer ServerName to synchronise the server with the new WSUS server.
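To refresh policy on many machines at once, a sketch using the ActiveDirectory and GroupPolicy modules; the OU path below is an assumption:

```powershell
Import-Module ActiveDirectory, GroupPolicy
# Force a remote Group Policy refresh on every computer in a hypothetical OU
Get-ADComputer -SearchBase "OU=Servers,DC=domain,DC=local" -Filter * |
    ForEach-Object { Invoke-GPUpdate -Computer $_.Name -Force }
```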

Posted in Windows Server

Bulk Migration of Printer from Windows Server 2008/R2 to Windows Server 2012/R2


The following steps are for those who would like to migrate a print server from legacy Server 2008/R2 to Windows Server 2012/R2. These steps bring in fresh drivers and avoid carrying old, corrupt drivers and configuration into the new system. If you use the Print Migration Wizard, you may bring legacy corrupt drivers into the new system. These steps are also helpful if you are using the Citrix Universal Print Driver.

Step1: Download the correct and latest generic/universal/global print driver. HP calls it Universal; other manufacturers may call it a global or generic driver. Help yourself from Bing.

Step2: Install Generic Driver.

Open Server Manager>Print Management>Print Servers>Server Name>Drivers.

Right-click and add the x64 & x86 drivers.

Step3: Extract Legacy print Configuration.

Open PowerShell as an administrator. Run the following commands.

$printserver = ""   # fill in the name of the old print server

Get-WMIObject -Class Win32_Printer -Computer $printserver | Select Name,DriverName,PortName,ShareName,Location,Comment | Export-CSV -Path 'C:\printers.csv'

Step4: Create a CSV file, as shown below, from the CSV file extracted in Step3.

Create a CSV file and store it as c:\printers.csv on the new Windows Server 2012 R2 server.

The header row must contain the column names the Step5 script reads; add a row for each printer:

Printserver,Printername,Driver,Portname,IPAddress,Sharename,Location,Comment


Step5: Create a PowerShell script as below.

Open Notepad. Copy the script below and paste it in. Save it as CreatePrinter.ps1.

function CreatePrinter {
    $server = $args[0]
    $print = ([WMICLASS]"\\$server\ROOT\cimv2:Win32_Printer").createInstance()
    $print.drivername = $args[1]
    $print.PortName = $args[2]
    $print.Shared = $true
    $print.Sharename = $args[3]
    $print.Location = $args[4]
    $print.Comment = $args[5]
    $print.DeviceID = $args[6]
    $print.Put()    # commit the new printer to WMI
}

function CreatePrinterPort {
    $server = $args[0]
    $port = ([WMICLASS]"\\$server\ROOT\cimv2:Win32_TCPIPPrinterPort").createInstance()
    $port.Name = $args[1]
    $port.Protocol = 1    # RAW printing protocol
    $port.HostAddress = $args[2]
    $port.Put()    # commit the new port to WMI
}

$printers = Import-Csv c:\printers.csv
foreach ($printer in $printers) {
    CreatePrinterPort $printer.Printserver $printer.Portname $printer.IPAddress
    CreatePrinter $printer.Printserver $printer.Driver $printer.Portname $printer.Sharename $printer.Location $printer.Comment $printer.Printername
}

Step6: Run the script

Log on to the new Server 2012/R2 print server. Open PowerShell as an administrator. Run the above script. You will have to tweak a little, such as adding extra drivers and amending print properties, but this is far less effort than recreating the entire print server manually.

Further reading:

install unsigned drivers

Posted in Windows Server

Hyper-v Server 2016 What’s New

Changed and upgraded functionality of Hyper-v Server 2016.

  1. Hyper-V cluster with mixed Hyper-V versions
  • Join a Windows Server 2016 Hyper-V node to a cluster with Windows Server 2012 R2 Hyper-V nodes
  • The cluster functional level remains Windows Server 2012 R2
  • Manage the cluster, Hyper-V, and virtual machines from a node running Windows Server 2016 or Windows 10
  • Use new Hyper-V features once all of the nodes are migrated to the Windows Server 2016 cluster functional level
  • The virtual machine configuration version for existing virtual machines isn't upgraded automatically
  • Upgrade the configuration version after you upgrade the cluster functional level, using the Update-VMVersion vmname cmdlet
  • New virtual machines created on Windows Server 2016 remain backward compatible
  • When the Hyper-V role is enabled on a computer that uses the Always On/Always Connected (AOAC) power model, the Connected Standby power state is now available
  2. Production checkpoints
  • With production checkpoints, the Volume Snapshot Service (VSS) is used inside Windows virtual machines
  • Linux virtual machines flush their file system buffers to create a file-system-consistent checkpoint
  • Checkpoints no longer use saved-state technology
  3. Hot add and remove for network adapters, virtual hard drives and memory
  • Add or remove a network adapter while the virtual machine is running, for both Windows and Linux machines
  • Adjust the memory of a running virtual machine even if you haven't enabled dynamic memory
  4. Integration Services delivered through Windows Update
  • Windows Update will distribute the integration services
  • The ISO image file vmguest.iso is no longer needed to update integration components
  5. Storage quality of service (QoS)
  • Create storage QoS policies on a Scale-Out File Server and assign them to one or more virtual disks
  • Hyper-V automatically adjusts virtual disk performance as the assigned storage policies change
  6. Virtual machine improvements
  • Import a virtual machine with an older configuration version, update it later, and live migrate it across any host
  • After you upgrade the virtual machine configuration version, you can't move the virtual machine to a server that runs Windows Server 2012 R2
  • You can't downgrade the virtual machine configuration version back from version 6 to version 5
  • Turn off the virtual machine to upgrade the virtual machine configuration
  • The Update-VMVersion cmdlet is blocked on a Hyper-V cluster when the cluster functional level is Windows Server 2012 R2
  • After the upgrade, the virtual machine will use the new configuration file format
  • The new configuration files use the .VMCX file extension for virtual machine configuration data and the .VMRS file extension for runtime state data
  • Ubuntu 14.04 and later, and SUSE Linux Enterprise Server 12, support secure boot using the Set-VMFirmware vmname -SecureBootTemplate MicrosoftUEFICertificateAuthority cmdlet
  7. Hyper-V Manager improvements
  • Support for alternative credentials
  • Down-level management of Hyper-V running on Windows Server 2012, Windows 8, Windows Server 2012 R2 and Windows 8.1
  • Connect to Hyper-V using the WS-MAN protocol with Kerberos or NTLM authentication
  8. Guest OS support
  • Any server operating system from Windows Server 2008 to Windows Server 2016
  • Any desktop operating system from Windows Vista SP2 to Windows 10
  • FreeBSD, Ubuntu, SUSE Linux Enterprise, CentOS, Debian, Fedora and Red Hat

9. ReFS Accelerated VHDX 

  • Create a fixed size VHDX on a ReFS volume instantly.
  • Faster backup operations and checkpoint merges

10. Nested Virtualization

  • Run Hyper-V Server as a guest OS inside Hyper-V

11. Shared VHDX format

  • Host Based Backup of Shared VHDX files
  • Online Resize of Shared VHDX
  • Some usability change in the UI
  • Shared VHDX files are now a new type of VHD called .vhds files.

12. Stretched Hyper-V Cluster 

  • A stretched cluster allows you to configure Hyper-V hosts and storage in a single stretch cluster, where two nodes share one set of storage and two nodes share another set of storage, and synchronous replication keeps both sets of storage mirrored in the cluster to allow immediate failover.
  • These nodes and their storage should be located in separate physical sites, although it is not required.
  • The stretch cluster will run a Hyper-V Compute workload.



Hyper-V on Windows 10 doesn’t support failover clustering
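The configuration version upgrade described in the virtual machine improvements above can be sketched as follows; the cluster and VM names are assumptions:

```powershell
# Only after every node runs Windows Server 2016; this step is irreversible
Update-ClusterFunctionalLevel -Cluster HVCluster01

# The VM must be off, and the version upgrade can't be undone;
# the VM will no longer run on Windows Server 2012 R2 hosts afterwards
Stop-VM -Name VM01
Update-VMVersion -Name VM01
Start-VM -Name VM01
```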

Posted in Virtualization