Migrate a SQL Server database to Azure SQL Database

Azure Database Migration Service works together with the Data Migration Assistant (DMA) to migrate existing on-premises SQL Server, Oracle, and MySQL databases to Azure SQL Database, Azure SQL Database Managed Instance, or SQL Server on Azure virtual machines.

 

[Image: Azure SQL Migration (Source: Microsoft Corp)]

 

Moving a SQL Server database to Microsoft Azure SQL Database with Data Migration Assistant is a three-part process:

  1. Prepare a database on a SQL Server instance for migration to Azure SQL Database using the Data Migration Assistant (DMA).
  2. Export the database to a BACPAC file.
  3. Import the BACPAC file into an Azure SQL Database.

Using Microsoft Data Migration Assistant

Step 1: Prepare for migration

Complete these prerequisites:

  • Install the newest version of Microsoft SQL Server Management Studio (SSMS). Installing SSMS also installs the newest version of SQLPackage, a command-line utility that can be used to automate a range of database development tasks.
  • Download and install the Microsoft Data Migration Assistant (DMA).
  • Identify and have access to a database to migrate.

Follow these steps to use Data Migration Assistant to assess the readiness of your database for migration to Azure SQL Database:

  1. Open the Microsoft Data Migration Assistant. You can run DMA on any computer with connectivity to the SQL Server instance containing the database that you plan to migrate; you do not need to install it on the computer hosting the SQL Server instance.
  2. In the left-hand menu, click New to create an Assessment project. Fill in the form with a Project name (all other values should be left at their default values), and then click Create.
  3. On the Options page, click Next.
  4. On the Select sources page, enter the name of the SQL Server instance containing the database you plan to migrate. Change the other values on this page if necessary, and then click Connect.
  5. In the Add sources portion of the Select sources page, select the checkboxes for the databases to be tested for compatibility, and then click Add.
  6. Click Start Assessment.
  7. When the assessment completes, look for the checkmark in the green circle to see if the database is sufficiently compatible to migrate.
  8. Review the SQL Server feature parity results. Specifically, review the information about unsupported and partially supported features, and the recommended actions.
  9. Review the Compatibility issues by clicking that option in the upper left. Specifically, review the information about migration blockers, behavior changes, and deprecated features for each compatibility level. For the AdventureWorks2008R2 database, review the changes to Full-Text Search since SQL Server 2008 and the changes to SERVERPROPERTY('LCID') since SQL Server 2000; links are provided for more information about these changes. Many search options and settings for Full-Text Search have changed.
  10. Optionally, click Export report to save the report as a JSON file.
  11. Close the Data Migration Assistant.

Step 2: Export to BACPAC file

Follow these steps to use the SQLPackage command-line utility to export the AdventureWorks2008R2 database to local storage.

  1. Open a Windows command prompt and change your directory to a folder in which you have the 130 version of SQLPackage, such as C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin.
  2. Execute the following SQLPackage command at the command prompt to export the AdventureWorks2008R2 database from localhost to AdventureWorks2008R2.bacpac. Change any of these values as appropriate to your environment.


sqlpackage.exe /Action:Export /ssn:localhost /sdn:AdventureWorks2008R2 /tf:AdventureWorks2008R2.bacpac

Once the execution is complete, the generated BACPAC file is stored in the directory where the sqlpackage executable is located. In this example, C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin.
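
If the source instance uses SQL authentication rather than Windows authentication, the export command also accepts source user and password switches. A minimal sketch with placeholder credentials; adjust the values for your environment:

sqlpackage.exe /Action:Export /ssn:localhost /sdn:AdventureWorks2008R2 /tf:AdventureWorks2008R2.bacpac /su:<your_sql_login> /sp:<your_password>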

  1. Log in to the Azure portal.
  2. Create a SQL Server logical server

A SQL Server logical server acts as a central administrative point for multiple databases. Follow these steps to create a logical server to contain the migrated AdventureWorks OLTP SQL Server database.

  1. Click the New button found on the upper left-hand corner of the Azure portal.
  2. Type sql server in the search window on the New page, and select SQL server (logical server) from the filtered list.
  3. Click Create, and enter the properties for the new SQL Server (logical server).
  4. Complete the SQL server (logical server) form, providing the server name, server admin login, password, subscription, resource group, and location.
  5. Click Create to provision the logical server. Provisioning takes a few minutes.
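
If you prefer scripting, the same logical server can be provisioned with the AzureRM PowerShell module used later in this article. This is a minimal sketch; the resource group name, server name, and location are placeholder values to replace with your own:

# Create a resource group and a SQL Server logical server (placeholder names and location)
New-AzureRmResourceGroup -Name "SqlMigrationRG" -Location "Australia East"
$cred = Get-Credential -Message "Server admin login and password for the new logical server"
New-AzureRmSqlServer -ResourceGroupName "SqlMigrationRG" -ServerName "mynewserver20170403" -Location "Australia East" -SqlAdministratorCredentials $cred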

Create a server-level firewall rule

  1. Click All resources from the left-hand menu, and click the new server on the All resources page. The overview page for the server opens and provides options for further configuration.
  2. Click Firewall in the left-hand menu under Settings on the overview page.
  3. Click Add client IP on the toolbar to add the IP address of the computer you are currently using, and then click Save. This creates a server-level firewall rule for this IP address.
  4. Click OK.
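
The same rule can be scripted; a short sketch using the placeholder server from the example above and an example client IP address (replace both with your own values):

# Allow a single client IP address through the server-level firewall (placeholder values)
New-AzureRmSqlServerFirewallRule -ResourceGroupName "SqlMigrationRG" -ServerName "mynewserver20170403" -FirewallRuleName "AllowMyClientIP" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"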

Step 3: Import a BACPAC file to Azure SQL Database

The SQLPackage command-line utility is the preferred method to import your BACPAC database to Azure SQL Database for most production environments.

SqlPackage.exe /a:import /tcs:"Data Source=<your_server_name>.database.windows.net;Initial Catalog=<your_new_database_name>;User Id=<change_to_your_admin_user_account>;Password=<change_to_your_password>" /sf:AdventureWorks2008R2.bacpac /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P6

Connect using SQL Server Management Studio (SSMS)

  1. Open SQL Server Management Studio.
  2. In the Connect to Server dialog box, enter this information.
  • Server type: Specify Database engine
  • Server name: Enter your fully qualified server name, such as mynewserver20170403.database.windows.net
  • Authentication: Specify SQL Server Authentication
  • Login: Enter your server admin account
  • Password: Enter the password for your server admin account
  3. Click Connect.
  4. In Object Explorer, expand Databases, and then expand myMigratedDatabase to view the objects in the sample database.

Using Azure Database Migration Service

Azure Database Migration Service (ADMS), now in limited preview, can help you migrate existing on-premises SQL Server, Oracle, and MySQL databases to Azure SQL Database, Azure SQL Database Managed Instance, or SQL Server on an Azure Virtual Machine.

ADMS is designed to simplify the complex workflows you can encounter when migrating various database types to databases in Azure.

  1. In the Azure portal, select Data Migration Service, and then click New Migration Project.
  2. In New Migration Project, enter a unique project name, a source server type, and a target server type.
  3. Click Start.
  4. Provide all options under Migration target details, and then click Save.
  5. Provide all options under Migration source details, and then click Save.
  6. In the Select source databases list, select each source database you want to migrate, and then click Save.
  7. Review the details summary, and then click Run Migration to start the migration. The amount of time the migration will run depends on a variety of factors including size and complexity of the database, source disk speed, and network speed.
  8. Once the migration is finished, a Completed status will be displayed in the SQL Migration dashboard.

Migrating VMware Virtual Workloads to Microsoft Azure Cloud

Overview

Migrating to the cloud doesn’t have to be difficult, but many organizations struggle to get started. Before they can showcase the cost benefits of moving to the cloud or determine whether their workloads will lift and shift without effort, they need deep visibility into their own environment and the tight interdependencies between applications, workloads, and data. Azure Migrate, Azure Database Migration Service, and Azure Cost Management provide a frictionless approach to moving VMware VMs to Azure.

[Image: VMware to Azure]

Microsoft – Cloud Security Certification

Microsoft Azure has been certified by the Australian Signals Directorate (ASD), Department of Defence. If you have regulatory compliance requirements, check your region to verify Azure certification by the relevant regulator.

  • Microsoft has undergone an Information Security Registered Assessors Program (IRAP) assessment by the Australian Signals Directorate (ASD), and Azure, Dynamics 365, and Office 365 have been certified on the Certified Cloud Services List (CCSL) by ASD
  • Microsoft Azure has been awarded the PROTECTED classification level by the Australian Signals Directorate (ASD); Azure is the first global cloud provider to be awarded PROTECTED
  • Azure, Cloud App Security, Intune, Office 365, Dynamics 365, and Power BI have been certified by the Cloud Security Alliance (CSA) after rigorous independent assessments of cloud providers
  • Azure, Cloud App Security, Intune, Office 365, Dynamics 365, and Power BI hold ISO/IEC 27001 certification, meeting the criteria specified in that standard

Licensing Cost & Azure Hybrid Benefit

  • Azure Hybrid Benefit allows customers with Software Assurance to run Windows Server VMs on Azure at a lower rate
  • Save up to 40 percent on Windows Server VMs
  • Use existing SQL Server licenses toward SQL Database managed instances
  • Use Azure Reserved Virtual Machine Instances to further reduce costs, up to 72 percent compared with pay-as-you-go (PAYG) prices, on one-year or three-year terms for both Windows and Linux virtual machines
  • Pay only for the underlying compute and storage for SQL Server VMs
  • Up to 82 percent savings over PAYG rates on Azure, and up to 67 percent compared to AWS RIs for Windows VMs
  • Around 49 percent cost savings estimated using the Azure TCO calculator when comparing against on-premises VMware VMs. Actual savings may vary based on region, instance type, and usage (reference: Nucleus Research)
  • You can specify whether you’re enrolled in Software Assurance and can use the Azure Hybrid Use Benefit


Migration Path

Microsoft offers an end-to-end solution to provide you with a proven framework and tools to migrate your first workload and give you a complete roadmap for discovery, migration, and continual optimization, including better insights and strategies for running your entire datacenter portfolio on Azure. Migrating to Azure is a simple three-stage process that focuses on identifying the virtual machines, applications, and data that can easily be moved to the cloud.

[Image: Hybrid Cloud]

Supported Platform

  • Virtual machines managed by VMware vCenter Server 5.5 or 6.0 and later versions
  • Any on-premises storage (vSAN, FC SAN, NFS, or iSCSI)
  • Appliance-based, agentless, and non-intrusive discovery of on-premises virtual machines
  • Currently, Azure Migrate supports only locally redundant storage (LRS). However, once you have migrated to Azure, you can use geo-redundant storage.
  • Lift-and-shift migration to Azure IaaS
  • Azure Migrate will recommend the use of Azure Database Migration Service for database migrations
  • Use Azure Site Recovery to migrate business-critical and large VMs to Azure

Stage 1 – Assess Your VMware vSphere Environment

Use these four steps to discover and assess your on-premises workloads for migration to Azure.

  1. Prepare your environment.
  2. Discover virtual machines.
  3. Group virtual machines.
  4. Assess the groups of virtual machines.

Step 1: Prepare your environment

  1. To get started with Azure Migrate, you need a Microsoft Azure account or the free trial.
  2. Assess VMware Virtual machines located on vSphere ESXi hosts that are managed with a vCenter server running version 5.5 or 6.0.
  3. The ESXi host or cluster on which the Collector VM (version 8.0) runs must be running version 5.0 or later.
  4. To discover virtual machines, Azure Migrate needs an account with read-only administrator credentials for the vCenter server.
  5. Download the Collector appliance (.ova file) and import it to the vCenter server to create the Collector virtual machine. The virtual machine must be able to connect to the internet to send metadata to Azure.
  6. Set the statistics settings for the vCenter server to statistics level 2. The default level 1 will work, but Azure Migrate won’t be able to collect the data needed for performance-based sizing of storage.

Tag your virtual machines in vCenter (optional)

Use these steps to tag your virtual machines in vCenter server.

  1. In the VMware vSphere Web Client, navigate to the vCenter server instance.
  2. To review current tags, click Tags.
  3. To tag a virtual machine, click Related Objects > Virtual Machines, and select the virtual machine.
  4. In Summary > Tags, click Assign.
  5. Click New Tag, and specify a tag name and description.
  6. To create a category for the tag, select New Category in the drop-down list.
  7. Specify a category name and description and the cardinality, and click OK.

Step 2: Discover virtual machines

Using Azure Migrate to discover on-premises workloads involves these steps.

  1. Create a Project.
  2. Download the Collector appliance.
  3. Create the Collector virtual machine.
  4. Run the Collector to discover virtual machines.
  5. Verify discovered virtual machines in the portal.

Create a Project

Azure Migrate projects hold the metadata of your on-premises machines and enable you to assess migration suitability. Use these steps to create a project.

  1. Log on to the Azure portal and click New.
  2. Search for Azure Migrate in the search box, select the Azure Migrate (preview) service in the search results, and then click Create.
  3. Specify a name for the new project.
  4. Select the subscription you want the project to be associated with.
  5. Create a new resource group, or select an existing one.
  6. Specify an Azure location.
  7. To quickly access the project from the Dashboard, select Pin to dashboard.
  8. Click Create. The new project appears on the Dashboard, under All resources, and in the Projects blade.

Download the Collector appliance

  1. Select the project, and click Discover & Assess on the Overview blade.
  2. Click Discover Machines, and then click Download.
  3. Copy the Project ID and project key values to use when you configure the Collector.

Deploy the Collector virtual machine

In the vCenter Server, import the Collector appliance as a virtual machine using the Deploy OVF Template wizard.

  1. In vSphere Client console, click File > Deploy OVF Template.
  2. In the Deploy OVF Template Wizard > Source, specify the location for the .ovf file.
  3. In Name and Location, specify a friendly name for the Collector virtual machine and the inventory object in which the virtual machine will be hosted.
  4. In Host/Cluster, specify the host or cluster on which the Collector virtual machine will run.
  5. In Storage, specify the storage destination for the Collector virtual machine.
  6. In Disk Format, specify the disk type and size.
  7. In Network Mapping, specify the network to which the Collector virtual machine will connect. The network must be connected to the internet to send metadata to Azure.
  8. Review and confirm the settings, and then click Finish.

Run the Collector to discover virtual machines

  1. In the vSphere Client console, right-click the virtual machine > Open Console.
  2. Provide the language, time zone, and password preferences for the appliance.
  3. In the Azure Migrate Collector, open Set Up Prerequisites, and then do the following:

o Accept the license terms, and read the third-party information.

o The Collector checks that the virtual machine has internet access. If the virtual machine accesses the internet via a proxy, click Proxy settings, and specify the proxy address and listening port. Specify credentials if proxy access needs authentication.

o The Collector checks that the Windows profiler service is running. The service is installed by default on the Collector virtual machine.

o Select to download and install the VMware PowerCLI.

  4. In Discover Machines, do the following:

o Specify the name (FQDN) or IP address of the vCenter server and the read-only account the Collector will use to discover virtual machines on the vCenter server.

o Select a scope for virtual machine discovery. The Collector can only discover virtual machines within the specified scope. Scope can be set to a specific folder, datacenter, or cluster, but it shouldn’t contain more than 1000 virtual machines.

o If you’re using tagging on the vCenter server, select tag categories for virtual machine grouping. Azure Migrate automatically groups virtual machines based on tag values in the category. If you’re not using tagging, you can group virtual machines in the Azure portal.

  5. In Select Project, specify the Azure Migrate project ID and key you copied from the Azure portal. If you didn’t copy them, open the Azure portal in a browser from the Collector virtual machine; on the project Overview page, click Discover Machines, and copy the values.
  6. In Complete Discovery, you can monitor the discovery status and check that metadata is collected from the virtual machines in scope. The Collector provides an approximate discovery time.

Verify discovered virtual machines in the portal

  1. In the migration project, click Manage > Machines.
  2. Check that the virtual machines you want to discover appear in the portal.

Step 3: Group virtual machines

Enterprises typically migrate virtual machines with dependencies together at the same time to ensure their functionality after migration to Azure. Azure Migrate allows you to categorize the virtual machines by group so you can assess all the virtual machines in a group.

  • If you provided a tag category—which was an optional step while configuring the Collector—groups will be automatically created for the workloads based on the tag values.
  • If a tag category is not provided while configuring the Collector, you can create groups of virtual machines in the Azure Migrate portal.

Optional: Assess machine dependencies before adding them to a group

  1. In Manage > Machines, search for the machine for which you want to view dependencies.
  2. In the Dependencies column for the machine, click Install agent.
  3. To calculate dependencies, download and install these agents on the machine:

o Microsoft Monitoring agent

o Dependency agent

  4. Copy the workspace ID and key to use later when you install the Microsoft Monitoring agent on a machine.
  5. After you install the agents on the machine, return to the portal and click Machines. This time the Dependencies column for the machine should contain the text View dependencies. Click View dependencies.
  6. By default, the dependency time range is an hour. Click the time range to shorten it, specify start and end dates, or change the duration. Press Ctrl + Click to select multiple machines on the map, and then click Group machines.
  7. In Group machines, specify a group name. Verify that the machines you added have the dependency agents installed and have been discovered by Azure Migrate. Machines must be discovered to assess them. We recommend that you install the dependency agents to complete dependency mapping.
  8. Click OK to save the group settings. Alternatively, you can add machines to an existing group.

Create a Group

You can create groups of virtual machines from the Machines blade or from the Groups blade, using a similar process.

Create a group from the Machines blade

  1. Navigate to the Dashboard of a project and click the Machines tile.
  2. Click Group Machines.
  3. Specify a name for the group in the Name box, and then select the machines that you want to add to the group.
  4. Click Create.

Add or remove machines from an existing group if required

  1. Navigate to the dashboard of a project and click the Groups tile.
  2. Select the Group you want to add/remove machines to/from.
  3. Click Add Machines or Remove Machines.
  4. Select the machines that you want to add/remove to/from the group.
  5. Click Add or Remove.

Step 4: Assess groups of virtual machines

Create an assessment

Follow these steps to generate an assessment for the group.

  1. Select the project you want under Project.
  2. On the project dashboard, click Groups.
  3. Create a new group or select an existing group to assess under Group.
  4. Click Create Assessment to create a new assessment for the group.

The assessment includes these details.

  • A summary of the number of machines suitable for Azure, referred to as Azure readiness.
  • A monthly cost estimate for running the machines in Azure after migration.
  • A monthly cost estimate for storage.

Assessment calculation

Azure Migrate performs three checks on virtual machines in this order:

  1. Azure Suitability Analysis
  2. Performance-based sizing
  3. Monthly cost estimate

Stage 2: Migrate virtual machines using Azure Site Recovery

Before you start deployment, review the architecture and make sure you understand all the components you need to deploy.

Next, make sure you understand the prerequisites and limitations for a Microsoft Azure account, Azure networks, and storage accounts. You also need:

  • On-premises Site Recovery components
  • On-premises VMware prerequisites
  • Mobility service component installed on the virtual machine you want to replicate.

These are the general steps to migrate:

  1. Set up Azure services such as virtual networks, availability sets, load balancers, address spaces, subnets, resource groups, storage accounts, and public IPs.
  2. Connect to VMware servers.
  3. Set up the target environment.
  4. Complete migration.

I assume you have completed step 1, so I am moving on to step 2.

Create a Recovery Services vault

  1. Sign in to the Azure portal > Recovery Services.
  2. Click New > Monitoring & Management > Backup and Site Recovery.
  3. In Name, specify a friendly name to identify the vault. If you have more than one subscription, select one of them.
  4. Create a resource group, or select an existing one. Specify an Azure region. To check supported regions, see geographic availability in Azure Site Recovery Pricing Details.
  5. If you want to quickly access the vault from the dashboard, click Pin to dashboard, and then click Create.
  6. The new vault will appear on Dashboard > All resources and on the main Recovery Services vaults blade.

Select a protection goal

In this task, select what you want to replicate, and where you want to replicate to.

  1. Click Recovery Services vaults > vault.
  2. In the Resource Menu, click Site Recovery > Prepare Infrastructure > Protection goal.
  3. In Protection goal, select To Azure > Yes, with VMware vSphere Hypervisor.

Set up the source environment

In this task, set up the configuration server, register it in the vault, and discover virtual machines.

  1. Click Site Recovery > Step 1: Prepare Infrastructure > Source.
  2. If you don’t have a configuration server, click Configuration server.
  3. In Add Server, check that Configuration Server appears in Server type.
  4. Download the Site Recovery Unified Setup installation file.
  5. Download the vault registration key. You need this when you run Unified Setup. The key is valid for five days after you generate it.

Register the configuration server in the vault

The next task requires you to run Unified Setup to install the configuration server, the process server, and the master target server. First however, do these three steps.

  1. On the configuration server virtual machine, make sure that the system clock is synchronized with a time server. If it is more than 15 minutes ahead or behind, setup might fail.
  2. Run setup as a Local Administrator on the configuration server virtual machine.
  3. Make sure TLS 1.0 is enabled on the virtual machine.

Now you are ready to run Setup.

  1. Run the Unified Setup installation file.
  2. In Before You Begin, select Install the configuration server and process server.
  3. From the Third-Party Software License screen, click I Accept to download and install MySQL.
  4. From the Registration screen, select the registration key you downloaded from the vault, and then click Next.
  5. From the Internet Settings screen, specify how the Provider running on the configuration server connects to Azure Site Recovery over the Internet.
  6. If you want to connect with the proxy that’s currently set up on the machine, select Connect to Azure Site Recovery using a proxy server.
  7. If you want the Provider to connect directly, select Connect directly to Azure Site Recovery without a proxy server.
  8. If the existing proxy requires authentication, or if you want to use a custom proxy for the Provider connection, select Connect with custom proxy settings. If you use a custom proxy, you need to specify the address, port, and credentials.
  9. From the Prerequisites Check screen, run a check to make sure that installation can run. If a warning appears about the Global time sync check, verify that the time on the system clock (Date and Time settings) matches the time zone setting.
  10. In the MySQL Configuration screen, create credentials for logging on to the MySQL server instance that is installed.
  11. From the Environment Details screen, select whether to replicate VMware virtual machines. If you will, Setup checks that PowerCLI 6.0 is installed.
  12. From the Install Location screen, select where you want to install the binaries and store the cache. The drive you select must have at least 5 GB of disk space available, but we recommend a cache drive with at least 600 GB of available space.
  13. From the Network Selection screen, specify the listener (network adapter and SSL port) on which the configuration server sends and receives replication data. Port 9443 is the default port used for sending and receiving replication traffic, but you can modify this port number to suit your environment’s requirements. In addition to the port 9443, we also open port 443, which is used by a web server to orchestrate replication operations. Do not use port 443 for sending or receiving replication traffic.
  14. In the Summary screen, review the information and click Install. When installation finishes, a passphrase is generated. You will need this when you enable replication, so copy it and keep it in a secure location. After registration finishes, the server is displayed on the Settings > Servers in the vault.

Step 2: Connect to VMware servers

To allow Azure Site Recovery to discover virtual machines running in your on-premises environment, you need to connect your VMware vCenter Server or vSphere ESXi hosts with Site Recovery. Note the following before you start:

  • If you add the vCenter server or vSphere hosts to Site Recovery with an account without administrator privileges on the server, the account needs these privileges enabled:

o Datacenter, Datastore, Folder, Host, Network, Resource, Virtual machine, vSphere Distributed Switch.

o The vCenter server needs Storage views permissions.

  • When you add VMware servers to Site Recovery, it can take 15 minutes or longer for them to appear in the portal.

Step 3: Set up the target environment

Before you set up the target environment, make sure you have an Azure storage account and a virtual network set up.

  1. Click Prepare infrastructure > Target, and select the Azure subscription you want to use.
  2. Specify whether your target deployment model is Resource Manager-based, or classic.
  3. Site Recovery verifies that you have one or more compatible Azure storage accounts and networks.

Create replication policy

You need a replication policy to automate the replication to Azure.

  1. To create a new replication policy, click Site Recovery infrastructure > Replication Policies > Replication Policy.
  2. Under RPO threshold, specify the RPO limit. This value specifies how often data recovery points are created. An alert is generated if continuous replication exceeds this limit.
  3. Under Recovery point retention, specify (in hours) how long the retention window is for each recovery point. Replicated virtual machines can be recovered to any point in a window. Up to 24 hours retention is supported for machines replicated to premium storage, and 72 hours for standard storage.
  4. Under App-consistent snapshot frequency, specify how often (in minutes) recovery points containing application-consistent snapshots will be created.
  5. Click OK to create the policy.
  6. When you create a new policy it’s automatically associated with the configuration server. By default, a matching policy is automatically created for failback. For example, if the replication policy is rep-policy then the failback policy will be rep-policy-failback. The failback policy isn’t used until you initiate a failback from Azure.

Prepare for push installation of the Mobility service

The Mobility service must be installed on all virtual machines you want to replicate. There are several ways to install the service, including manual installation, push installation from the Site Recovery process server, and installation using methods such as System Center Configuration Manager. Here you can review prerequisites and installation methods for the Mobility Service.

If you want to use push installation from the Azure Site Recovery process server, you need to prepare an account that Azure Site Recovery can use to access the virtual machine.

The following describes the options:

  • You can use a domain or local account

For Windows, if you’re not using a domain account, you need to disable Remote User Access control on the local machine. To do this, add the DWORD entry LocalAccountTokenFilterPolicy, with a value of 1, under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System in the registry (a PowerShell equivalent is shown after this list).

  • If you want to add the registry entry for Windows from a CLI, type: REG ADD HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1.
  • For Linux, the account should be root on the source Linux server.
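
For reference, a PowerShell equivalent of the registry change above (a sketch; run it elevated on the source Windows machine):

# Disable Remote User Account Control filtering for local accounts (same effect as the REG ADD command above)
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" -Name "LocalAccountTokenFilterPolicy" -Value 1 -Type DWord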

Install Mobility Service manually by using the GUI

  1. Copy the installer executable to the virtual machine that is being migrated to Azure, and then open the installer.
  2. On the Installation Option pane, select Install Mobility Service.
  3. Select the install location and click Install to begin the installation procedure.
  4. You can use the Installation Progress page to monitor the installer’s progress.
  5. Once installation is complete, click the Proceed to Configuration button to register the Mobility Service with your Configuration server.
  6. Click on the Register button to complete the registration.

Configure replication

After you have installed and configured both the Process Server and the Mobility Service agents, continue configuring replication in Azure.

  1. In the Azure portal, navigate to Site Recovery > Step1: Replicate Application > Enable Replication, and then click Step 1: Source Configure > Source.
  2. In Source, select On-Premises.
  3. In Source location, select your Configuration Server.
  4. In Machine type, select Virtual Machines.
  5. In vCenter/vSphere Hypervisor, select the vCenter server that manages the vSphere host, or select the host.
  6. Select the process server or the configuration server if you haven’t created any additional process servers, and then click OK.
  7. In Target, select the subscription and the resource group in which you want to create the migrated virtual machines. Choose the deployment model for the migrated virtual machines that you want to use in Azure (classic or resource manager).
  8. Select the Azure storage account you want to use for replicating data. If you don’t want to use an account you’ve already set up, you can create a new one.
  9. Select the Azure network and subnet to which Azure Virtual Machines will connect when they’re created after migration. Select Configure now for selected machines to apply the network setting to all machines you select for protection, or select Configure later to select the Azure network per virtual machine.
  10. Point to Virtual Machines > Select, select each enabled machine you want to replicate, and then click OK.
  11. In Properties > Configure properties, select the process server account that will automatically install the Mobility service on the machine.
  12. By default, all disks are replicated. Click All Disks and clear any disks you don’t want to replicate, and then click OK. You can set additional virtual machine disk properties later if needed.
  13. In Replication settings > Configure replication settings, verify that the correct replication policy is selected. If you modify a policy, changes will be applied to the replicating machine and to new machines.
  14. Enable Multi-VM consistency if you want to gather machines into a replication group, specify a name for the group, and then click OK.
  15. Click Enable Replication. You can track progress of the Enable Protection job in Settings > Jobs > Site Recovery Jobs. After the Finalize Protection job runs, the machine is ready for failover.

Step 4: Complete migration

Because migration is different from failover, it is important to configure Site Recovery for a migration.

For migration, you don’t need to commit a failover or delete machines. Instead, select the Complete Migration option for each machine you want to migrate.

  1. In Replicated Items, right-click the virtual machine, and then click Complete Migration.
  2. Click OK to complete the migration.

You can track progress in the virtual machine properties by monitoring the Complete Migration job in Site Recovery jobs. The Complete Migration action completes the migration process, removes replication for the machine, and stops Site Recovery billing for the machine.

At this point, your virtual machine has been migrated to Azure and you can begin using the IP addresses you set up in Networking. If you must migrate a database, the next section outlines migrating SQL Server databases using Data Migration Assistant and Azure Database Migration Service. Otherwise, the migration process continues with Stage 3: Optimize migrated workloads.

Stage 3: Optimize migrated workloads

Cloudyn helps ensure migrated virtual machines continue to deliver targeted resource utilization and best cost by recommending changes. Track costs against budget using spending reports that help identify which virtual machine types are consuming budget and support decisions on how to modify the Azure environment to maximize ROI. Cloudyn benefits include:

  • Visibility into resource costs
  • Visibility into application and departmental costs
  • Budgeting
  • Cost optimization with right-sizing guidance

As organizations move on-premises virtual machines to Azure, a best practice is to move workloads through three stages: discover, migrate, and optimize. Microsoft and its partners offer tools to help increase the efficiency and reduce the complexity of those stages.

Office 365 MailFlow Scenarios and Best Practices

Microsoft Office 365 gives you the flexibility to configure mail flow based on your requirements and use cases to deliver email to your organisation’s mailboxes. The simplest way to configure mail flow is to allow Microsoft Exchange Online Protection (EOP) to handle spam filtering and mail flow for your organisation. However, you may have already invested in infrastructure that handles mail flow; Microsoft also accommodates this situation and allows you to use your own spam filter.

The scenarios and use cases below will help you determine how to configure mail flow for your organisation.

Scenario 1 – Mailboxes in Office 365, mail flow entry point Office 365

  • Use cases: use Microsoft EOP; demote or migrate all mailboxes to Office 365; use Office 365 mailboxes
  • Recommended configuration: MX record pointed to Office 365
  • Example MX record: domain-com.mail.protection.outlook.com
  • SPF: v=spf1 include:spf.protection.outlook.com -all

Scenario 2 – Mailboxes on-premises, mail flow entry point on-premises

  • Use cases: prepare the on-premises environment to be cloud ready; build and sync AAD Connect; build an ADFS farm
  • Recommended configuration: MX record pointed to on-premises
  • Example MX record: MX1.domain.com
  • SPF: v=spf1 include:MX1.domain.com include:spf.protection.outlook.com -all

Scenario 3 – Mailboxes in a third-party cloud (for example, G Suite), mail flow entry point both the third party and Office 365

  • Use cases: prepare to migrate to Office 365; stage mailbox data; mail flow co-existence
  • Recommended configuration: MX record pointed to the third-party cloud; MX record pointed to on-premises
  • Example MX record: in.hes.trendmicro.com
  • SPF: v=spf1 include:spf.protection.outlook.com include:in.hes.trendmicro.com include:ASPMX.L.GOOGLE.COM -all

Scenario 4 – Mailboxes split between on-premises and Office 365, mail flow entry point on-premises

  • Use cases: hybrid environment; staged mailbox migration; mail flow co-existence
  • Recommended configuration: MX record pointed to the on-premises spam filter; MX record pointed to on-premises
  • Example MX record: MX1.domain.com
  • SPF: v=spf1 include:MX1.domain.com include:spf.protection.outlook.com -all

Scenario 5 – Mailboxes split between on-premises and Office 365, mail flow entry point third-party cloud spam filter

  • Use cases: hybrid environment; staged mailbox migration; mail flow co-existence
  • Recommended configuration: MX record pointed to the third-party cloud spam filter; MX record pointed to the third-party cloud; MX record pointed to on-premises
  • Example MX record: in.hes.trendmicro.com
  • SPF: v=spf1 include:spf.protection.outlook.com include:in.hes.trendmicro.com -all

MailFlow Configuration Prerequisites:

  1. Make sure that your email server (also called the on-premises mail server) is set up and capable of sending and receiving mail to and from the internet.
  2. Check that your on-premises email server has Transport Layer Security (TLS) enabled, with a valid certificate signed by a public certification authority (CA).
  3. Make a note of the name or IP address of your external-facing email server. If you’re using Exchange, this will be the fully qualified domain name (FQDN) of the Edge Transport server or Client Access server that will receive email from Office 365.
  4. Open port 25 on your firewall so that Office 365 can connect to your email servers (see the connectivity check after this list).
  5. Make sure your firewall accepts connections from all Office 365 IP addresses. See Exchange Online Protection IP addresses for the published IP address ranges.
  6. Make a note of an email address for each domain in your organisation. You’ll need this later to test that your connector is working correctly.
  7. Make sure you add all Office 365 datacenter IP addresses to the receive connector of your on-premises Exchange server.
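
As a quick sanity check before configuring connectors, you can verify SMTP connectivity with PowerShell. A minimal sketch, assuming mail.domain.com is a placeholder for your external-facing mail server:

# Test that TCP port 25 is reachable on the external-facing mail server (placeholder host name)
Test-NetConnection -ComputerName "mail.domain.com" -Port 25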

Configure mail to flow from Office 365 to your email server and vice-versa. There are three steps for this:

  1. Configure your Office 365 environment.
  2. Set up a connector from Office 365 to your email server.
  3. Change your MX record to redirect your mail flow from the Internet to Office 365.

Note: If you used the Exchange Hybrid Configuration Wizard, connectors that deliver mail between Office 365 and Exchange Server will already be set up and listed here. You don’t need to set them up again, but you can edit them here if you need to.

  1. To create a connector in Office 365, click Admin, and then click Exchange to go to the Exchange admin center. Next, click mail flow, and then click connectors.
  2. To start the wizard, click the plus symbol (+). On the first screen, choose the appropriate options to create mail flow from Office 365 to the on-premises server.
  3. Click Next, and follow the instructions in the wizard.
  4. Repeat the steps to create mail flow from on-premises to Office 365.
  5. To redirect email flow to Office 365, change the MX (mail exchange) record for your domain to Microsoft EOP, for example domain-com.mail.protection.outlook.com. You can verify the change with the lookup shown below.
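
Once the MX record change has propagated, you can confirm what external senders will see with PowerShell. A sketch, assuming domain.com is a placeholder for your email domain:

# Check the published MX record and the SPF (TXT) record for the domain (placeholder domain name)
Resolve-DnsName -Name "domain.com" -Type MX
Resolve-DnsName -Name "domain.com" -Type TXT | Where-Object Strings -Match "v=spf1"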

Relevant Articles:

Mailflow Co-existence between G-Suite and Office 365 during IMAP Migration

Office 365 Hybrid Deployment with Exchange 2016 Step by Step

Centralized MailFlow: NDR Remote Server returned ‘550 5.7.1 Unable to relay’

Azure Site-to-Site IPSec VPN connection with Citrix NetScaler (CloudBridge)

An Azure Site-to-Site VPN gateway connection is used to connect an on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it.

In this example, I am going to use the Citrix CloudBridge feature of a NetScaler. Citrix CloudBridge works in a pair, one at each end of a link, to accelerate traffic over the link; the transformations done by the sender are reversed by the receiver. One CB virtual appliance can handle many links, so you do not have to dedicate a pair to each connection. You need just one CB virtual appliance per site to handle traffic between the Azure datacenter and the on-premises datacenter. In a Citrix CloudBridge Connector tunnel, IPSec ensures:

  • Data integrity
  • Data origin authentication
  • Data confidentiality (encryption)
  • Protection against replay attacks

The exercise below creates an IPsec tunnel between 66.128.x.x (on-premises) and 168.63.x.x (Azure).

Basic Requirements:

  • Make sure that the public IPv4 address for your VPN device is not located behind a NAT firewall.
  • Make sure the correct NSG rules are configured so that you can access on-premises VMs from Azure VMs and vice versa.

IP Address Requirements:

  • IP address of the CloudBridge Connector tunnel endpoint (CB appliance) on the on-premises side: 66.128.x.x
  • IP address of the CloudBridge Connector tunnel endpoint in the Azure VPN gateway: 168.63.x.x
  • Datacenter subnet whose traffic is to traverse the CloudBridge Connector tunnel: 10.120.0.0/23
  • Azure subnet whose traffic is to traverse the CloudBridge Connector tunnel: 10.10.0.0/22

Citrix NetScaler Settings

  • IPSec profile (CB_Azure_IPSec_Profile): IKE version = v1; encryption algorithm = AES; hash algorithm = HMAC SHA1
  • CloudBridge Connector tunnel (CB_Azure_Tunnel): remote IP = 168.63.x.x; local IP = 66.128.x.x (SNIP); tunnel protocol = IPSec; IPSec profile = CB_Azure_IPSec_Profile
  • Policy-based route (CB_Azure_Pbr): source IP range = subnet in the datacenter = 10.120.0.0-10.120.1.254; destination IP range = subnet in Azure = 10.10.0.1-10.10.3.254; IP tunnel = CB_Azure_Tunnel

Azure VPN Gateway Settings

  • Public IP address of the Azure VPN gateway: 168.63.x.x
  • Local network (on-premises network): VPN device IP address = 66.128.x.x (SNIP); on-premises subnet = 10.120.0.0/24
  • Virtual network (CloudBridge tunnel on the Azure side): address space of the Azure vNET = 10.10.0.0/22; trusted subnet within the vNET = 10.10.0.0/24; untrusted subnet within the vNET = 10.10.1.0/24; gateway subnet = 10.10.2.0/24
  • Region: Australia East
  • VPN type: Route-based
  • Connection type: Site-to-site (IPsec)
  • Gateway type: VPN
  • Shared key (sample): DkiMgMdcbqvYREEuIvxsbKkW0FOyDiLM

Configuration of Citrix NetScaler CloudBridge Feature

Step 1: Create the IPSec profile

add ipsec profile CB_Azure_IPSec_Profile -psk DkiMgMdcbqvYREEuIvxsbKkW0FOyDiLM -ikeVersion v1 -lifetime 31536000

Note: DkiMgMdcbqvYREEuIvxsbKkW0FOyDiLM is also used in the Azure VPN connection.

Step 2: Create the IPSec tunnel

add iptunnel CB_Azure_Tunnel 168.63.x.x 255.255.255.255 66.128.x.x -protocol IPSEC -ipsecProfileName CB_Azure_IPSec_Profile

Step 3: Create the PBR rule

add pbr CB_Azure_Pbr -srcIP 10.120.0.0-10.120.1.255 -destIP 10.10.0.0-10.10.3.255 -ipTunnel CB_Azure_Tunnel

Step 4: Apply the settings

apply pbrs

You can configure NetScaler using the GUI as well. Here is an example.

  1. Access the configuration utility by using a web browser to connect to the IP address of the NetScaler appliance in the datacenter.
  2. Navigate to System > CloudBridge Connector.
  3. In the right pane, under Getting Started, click Create/Monitor CloudBridge.
  4. Click Get Started> In the CloudBridge Setup pane, click Microsoft Windows Azure.
  5. In the Azure Settings pane, in the Gateway IP Address* field, type the IP address of the Azure gateway. The CloudBridge Connector tunnel is then set up between the NetScaler appliance and the gateway. In the Subnet (IP Range)* text boxes, specify a subnet range (in Azure cloud), the traffic of which is to traverse the CloudBridge Connector tunnel. Click Continue.
  6. In the NetScaler Settings pane, from the Local Subnet IP* drop-down list, select a publicly accessible SNIP address configured on the NetScaler appliance. In Subnet (IP Range)* text boxes, specify a local subnet range, the traffic of which is to traverse the CloudBridge Connector tunnel. Click Continue.
  7. In the CloudBridge Setting pane, in the CloudBridge Name text box, type a name for the CloudBridge that you want to create.
  8. From the Encryption Algorithm and Hash Algorithm drop-down lists, select the AES and HMAC_SHA1 algorithms, respectively. In the Pre Shared Security Key text box, type the security key.
  9. Click Done.

Configuration of an IPSec Site-to-Site VPN in the Azure Subscription 

Step 1: Connect to the Azure subscription

Login-AzureRmAccount

Get-AzureRmSubscription

Select-AzureRmSubscription -SubscriptionName "99ebd-649c-466a-a670-f1a611841"

Step 2: Create an Azure resource group in your region

New-AzureRmResourceGroup -Name TestRG1 -Location "Australia East"

Step 3: Create the vNET and subnets

$subnet1 = New-AzureRmVirtualNetworkSubnetConfig -Name "Trusted" -AddressPrefix 10.10.0.0/24

$subnet2 = New-AzureRmVirtualNetworkSubnetConfig -Name "Untrusted" -AddressPrefix 10.10.1.0/24

$subnet3 = New-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix 10.10.2.0/24

$vnet = New-AzureRmVirtualNetwork -Name TestVNet1 -ResourceGroupName TestRG1 -Location "Australia East" -AddressPrefix 10.10.0.0/22 -Subnet $subnet1, $subnet2, $subnet3

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

Step 4: Create the local network gateway (on-premises network)

New-AzureRmLocalNetworkGateway -Name Site2 -ResourceGroupName TestRG1 -Location "Australia East" -GatewayIpAddress "66.128.x.x" -AddressPrefix "10.120.0.0/24"

If you need to declare more than one on-premises address prefix, pass an array instead, for example -AddressPrefix @("10.120.0.0/24","10.120.1.0/24").

Step 5: Request a public IP address

$gwpip = New-AzureRmPublicIpAddress -Name gwpip -ResourceGroupName TestRG1 -Location "Australia East" -AllocationMethod Dynamic

Step 6: Create the gateway IP configuration

$vnet = Get-AzureRmVirtualNetwork -Name TestVNet1 -ResourceGroupName TestRG1

$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

$gwipconfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name gwipconfig1 -SubnetId $subnet.Id -PublicIpAddressId $gwpip.Id

Step 7: Create the VPN gateway

New-AzureRmVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 -Location "Australia East" -IpConfigurations $gwipconfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

Step 8: Extract the public IP address of the VPN gateway

Get-AzureRmPublicIpAddress -Name gwpip -ResourceGroupName TestRG1

Step 9: Create the VPN connection

$gateway1 = Get-AzureRmVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1

$local = Get-AzureRmLocalNetworkGateway -Name Site2 -ResourceGroupName TestRG1

New-AzureRmVirtualNetworkGatewayConnection -Name VNet1toSite2 -ResourceGroupName TestRG1 -Location "Australia East" -VirtualNetworkGateway1 $gateway1 -LocalNetworkGateway2 $local -ConnectionType IPsec -RoutingWeight 10 -SharedKey "DkiMgMdcbqvYREEuIvxsbKkW0FOyDiLM"

Step 10: Verify the connection

Get-AzureRmVirtualNetworkGatewayConnection -Name VNet1toSite2 -ResourceGroupName TestRG1
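
To confirm the tunnel is actually established rather than just provisioned, inspect the ConnectionStatus property; a small sketch using the names from the steps above:

# A status of "Connected" indicates the IPsec tunnel to the NetScaler is up
(Get-AzureRmVirtualNetworkGatewayConnection -Name VNet1toSite2 -ResourceGroupName TestRG1).ConnectionStatus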

Deploy Work Folder in Azure Cloud

The concept of Work Folders is to store users’ data in a convenient location. Users can access their work folders from BYOD and corporate SOE devices from anywhere. Work Folders facilitates flexible, secure use of corporate information from supported devices, and can be deployed on-premises or in the Azure cloud. In this article, I will demonstrate how to deploy Work Folders in Azure. Before that, let’s start with the applications of Work Folders.

Applications of Work Folder in Corporate Environment

  • Provide a single point of access to work files from a user’s work and personal devices
  • Access work files online and offline; data modified offline is synced back to the sync server when the device connects to the internet or intranet again
  • Deploy with existing deployments of Folder Redirection, Offline Files, and home folders
  • Use Windows File Server, SMB shares, and other CIFS shares (for example, NetApp CIFS shares)
  • Use file classification and folder quotas to manage user data
  • Apply security policies to encrypt Work Folders and require a lock screen password
  • Use Microsoft Failover Clustering with Work Folders to provide a high-availability solution

Enhanced Functionality:

  • Azure AD Application Proxy support
  • Faster change replication
  • Integrated with Windows Information Protection (WIP)
  • Microsoft Office integration

Supported Environment:

  • NetApp CIFS, Windows File Server or Windows SMB Storage as the UNC path of Sync Share
  • Windows Server 2012 R2 or Windows Server 2016 for hosting sync shares with user files
  • A public certificate, or an internal certificate for domain-joined computers
  • Windows Server 2012 R2 level AD DS Schema
  • Windows 10 version 1703
  • Android 4.4 KitKat and later
  • iOS 10.2 and later

Internal DNS records (CNAME records)

  • workfolders.domain.com pointed to syncserver1.domain.com and syncserver2.domain.com
  • sts.domain.com pointed to the ADFS servers
  • enterpriseregistration.domain.com pointed to the ADFS servers

Internal DNS records (Host A Record)

  • syncserver1.domain.com
  • syncserver2.domain.com
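
If your internal DNS runs on Windows Server, these records can be scripted with the DnsServer module. A sketch, assuming the domain.com zone, placeholder IP addresses for the sync servers, and adfs1.domain.com as a placeholder ADFS host:

# Host (A) records for the sync servers (placeholder IP addresses)
Add-DnsServerResourceRecordA -ZoneName "domain.com" -Name "syncserver1" -IPv4Address "10.0.0.11"
Add-DnsServerResourceRecordA -ZoneName "domain.com" -Name "syncserver2" -IPv4Address "10.0.0.12"

# CNAME records for Work Folders and ADFS
Add-DnsServerResourceRecordCName -ZoneName "domain.com" -Name "workfolders" -HostNameAlias "syncserver1.domain.com"
Add-DnsServerResourceRecordCName -ZoneName "domain.com" -Name "sts" -HostNameAlias "adfs1.domain.com"
Add-DnsServerResourceRecordCName -ZoneName "domain.com" -Name "enterpriseregistration" -HostNameAlias "adfs1.domain.com"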

Publishing Work Folder for mobile workforce

  • Access from Internet or use Azure Credentials
  • Web Application Proxy
  • Active Directory Federation Services (AD FS) with public DNS record sts.domain.com and enterpriseregistration.domain.com
  • A public DNS record i.e. CNAME = workfolders.domain.com
  • A public certificate from a public CA, for example CN=workfolders.domain.com with SAN=syncserver1.domain.com, syncserver2.domain.com. There must be a private key associated with the certificate, which means the certificate must be in .pfx format before it is imported into the sync servers.

Deploy Work Folder Server

  1. Log on to the Azure portal and deploy a Windows Server 2016 VM from the Azure Marketplace. Since we will be using this VM for the sync share, I would recommend selecting an L-series VM, which is storage optimised.
  2. Once the VM is provisioned, attach a premium data disk for high-I/O, low-latency file storage.
  3. Build the Windows Server 2016 VM, configure TCP/IP, and join the server to the domain.
  4. Remote into the server using domain admin credentials. Open the Add Roles and Features Wizard.
  5. On the Select installation type page, choose Role-based or feature-based deployment.
  6. On the Select destination server page, select the server on which you want to install Work Folders.
  7. On the Select server roles page, expand File and Storage Services, expand File and iSCSI Services, and then select Work Folders.
  8. When asked if you want to install IIS Hostable Web Core, click Ok to install the minimal version of Internet Information Services (IIS) required by Work Folders.
  9. Click Next until you have completed the wizard.
  10. Repeat the steps for all Work Folder Servers.

Install Certificate on the Work Folder Server

  1. On the Windows server 2016 where you want to install the SSL certificate, open the Console.
  2. In the Windows start menu, type mmc and open it.
  3. In the Console window, in the top menu, click File > Add/Remove Snap-in.
  4. In the Add or Remove Snap-ins window, in the Available snap-ins pane (left side), select Certificates and then click Add
  5. In the Certificate snap-in window, select Computer account and then click Next
  6. In the Select Computer window, select Local computer: (the computer this console is running on), and then click Finish
  7. In the Add or Remove Snap-ins window, click OK.
  8. In the Console window, in the Console Root pane (left side), expand Certificates (Local Computer), right-click on the Web Hosting folder, and then click All Tasks > Import.
  9. In the Certificate Import Wizard, on the Welcome to the Certificate Import Wizard page, click Next.
  10. On the File to Import page, browse to and select the file that you want import and then, click Next.
  11. Note: In the File Explorer window, in the file type drop-down, make sure to select All Files (*.*). By default, it is set to search for X.509 Certificate (*.cer;*.crt) file types only.
  12. On the Private key protection page, provide the password you used when you exported the certificate, check Mark this key as exportable for future use, and check Include all extended properties.
  13. On the Certificate Store page, select Place all certificates in the following store, click Browse, and then click Next after choosing the store.
  14. In the Select Certificate Store window, select Web Hosting and click OK.
  15. On the Completing the Certificate Import Wizard page, verify that the settings are correct and then, click Finish.
  16. Repeat the steps for all Work Folder Servers.
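
If you prefer to script the import (for example, to repeat it on each sync server), a minimal PowerShell sketch; the .pfx file path is a placeholder:

# Import the .pfx certificate (with its private key) into the local machine's Web Hosting store
$pfxPassword = Read-Host -Prompt "PFX password" -AsSecureString
Import-PfxCertificate -FilePath "C:\Certs\workfolders.pfx" -CertStoreLocation "Cert:\LocalMachine\WebHosting" -Password $pfxPassword -Exportable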

Bind the Certificate:

  1. Log on to a jump box where the IIS Management Console is installed, open the IIS Management Console, and connect to the Work Folders server. Select the Default Web Site for that server. The Default Web Site will appear disabled, but you can still edit the bindings for the site and select the certificate to bind it to that web site.
  2. Use the netsh command to bind the certificate to the Default Web Site https interface. The command is as follows:

netsh http add sslcert ipport=<IP address of Sync Share Server>:443 certhash=<Cert thumbprint> appid={CE66697B-3AA0-49D1-BDBD-A25C8359FD5D} certstorename=MY
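
The certhash value is the thumbprint of the certificate you imported. If you don't have it handy, a quick way to look it up in the Web Hosting store (assuming the subject name used in this article):

# List the Work Folders certificate in the Web Hosting store and show its thumbprint
Get-ChildItem -Path Cert:\LocalMachine\WebHosting | Where-Object Subject -Like "*workfolders.domain.com*" | Select-Object Subject, Thumbprint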

Create Active Directory Security Group

  1. You need a minimum of two AD security groups for Work Folders: one for Work Folders admins and another for Work Folders sync share users. For this article, let’s assume we have one sync share. We will create two security groups named FS-HRShareUser-SG and FS-HRShareAdmin-SG.
  2. Make sure the scope of these security groups is Global, not Universal. In the Members section, click Add; the Select Users, Contacts, Computers, Service Accounts or Groups dialog box appears, where you can add the appropriate users.

Create a Sync Share

  1. In Server Manager, click File and Storage Services, and then click Work Folders.
  2. A list of any existing sync shares is visible at the top of the details pane. To create a new sync share, from the Tasks menu choose New Sync Share…. The New Sync Share Wizard appears.
  3. On the Select the server and path page, specify where to store the sync share. If you already have a file share created for this user data, you can choose that share. Alternatively you can create a new folder.
  4. On the Specify the structure for user folders page, choose a naming convention for user folders within the sync share. Select either User alias or User alias@domain
  5. On the Enter the sync share name page, specify a name and a description for the sync share. This is not advertised on the network but is visible in Server Manager
  6. On the Grant sync access to groups page, specify the group that you created that lists the users allowed to use this sync share.
  7. On the Specify device policies page, specify whether to request any security restrictions on client PCs and devices. Select either Automatically lock screen, and require a password or Encrypt Work Folders based on your requirements.
  8. Review your selections and complete the wizard to create the sync share.
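
The same sync share can be created with the SyncShare PowerShell module that is installed with the Work Folders role. A sketch, assuming a placeholder path of D:\HRShare and the FS-HRShareUser-SG group created earlier:

# Create a sync share for the HR group and require device encryption and an automatic lock screen password
New-SyncShare -Name "HRShare" -Path "D:\HRShare" -User "domain\FS-HRShareUser-SG" -RequireEncryption $true -RequirePasswordAutoLock $true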

Setup a Tech Support Email Address

  1. In Server Manager, click File and Storage Services, and then click Servers.
  2. Right-click the sync server, and then click Work Folders Settings. The Work Folders Settings window appears.
  3. In the navigation pane, click Support Email and then type the email address or addresses that users should use when emailing for help with Work Folders. Click OK when you’re finished.
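
This setting can also be applied with the SyncShare module; a minimal sketch with a placeholder support address:

# Set the support email address shown to Work Folders users
Set-SyncServerSetting -AdminEmailAddress @("workfolders-support@domain.com")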

Publish Work Folder using ADFS Server

You can set up and configure the relying party trust for Work Folders, even though Work Folders hasn’t been set up yet. The relying party trust must be set up to enable Work Folders to use AD FS. Because you’re in the process of setting up AD FS, now is a good time to do this step.

To set up the relying party trust:

  1. Log on to ADFS Server. Open Server Manager, on the Tools menu, select AD FS Management.
  2. In the right-hand pane, under Actions, click Add Relying Party Trust.
  3. On the Welcome page, select Claims aware and click Start.
  4. On the Select Data Source page, select Enter data about the relying party manually, and then click Next.
  5. In the Display name field, enter WorkFolders, and then click Next.
  6. On the Configure Certificate page, click Next.
  7. On the Configure URL page, click Next.
  8. On the Configure Identifiers page, add the following identifier: https://workfolders.domain.com/V1. This identifier is a hard-coded value used by Work Folders, and is sent by the Work Folders service when it is communicating with AD FS. Click Next.
  9. On the Choose Access Control Policy page, select Permit Everyone, and then click Next.
  10. On the Ready to Add Trust page, click Next.
  11. After the configuration is finished, the last page of the wizard indicates that the configuration was successful. Select the checkbox for editing the claims rules, and click Close.
  12. In the AD FS snap-in, select the WorkFolders relying party trust and click Edit Claim Issuance Policy under Actions.
  13. The Edit Claim Issuance Policy for WorkFolders window opens. Click Add rule.
  14. In the Claim rule template drop-down list, select Send LDAP Attributes as Claims, and click Next.
  15. On the Configure Claim Rule page, in the Claim rule name field, enter WorkFolders.
  16. In the Attribute store drop-down list, select Active Directory.
  17. In the mapping table, enter these values:
    • User-Principal-Name: UPN
    • Display Name: Name
    • Surname: Surname
    • Given-Name: Given Name
  18. Click Finish. You'll see the WorkFolders rule listed on the Issuance Transform Rules tab; click OK.
  19. In the AD FS snap-in, select the WorkFolders relying party trust and open its properties. On the Encryption tab, remove the encryption certificate.
  20. On the Signature tab, make sure the Work Folders certificate has been imported.
  21. Click Apply, and then click OK.

Set relying party trust options

These commands set options that are needed for Work Folders to communicate successfully with AD FS, and can’t be set through the UI. These options are:

  • Enable the use of JSON web tokens (JWTs)
  • Disable encrypted claims
  • Enable auto-update
  • Set the issuing of OAuth refresh tokens to All Devices.
  • Grant clients access to the relying party trust

Set-ADFSRelyingPartyTrust -TargetIdentifier "https://workfolders.domain.com/V1" -EnableJWT $true

Set-ADFSRelyingPartyTrust -TargetIdentifier "https://workfolders.domain.com/V1" -EncryptClaims $false

Set-ADFSRelyingPartyTrust -TargetIdentifier "https://workfolders.domain.com/V1" -AutoUpdateEnabled $true

Set-ADFSRelyingPartyTrust -TargetIdentifier "https://workfolders.domain.com/V1" -IssueOAuthRefreshTokensTo AllDevices

Grant-AdfsApplicationPermission -ServerRoleIdentifier "https://workfolders.domain.com/V1" -AllowAllRegisteredClients

Enable Workplace Join

To enable device registration for Workplace Join, you must run the following Windows PowerShell commands, which will configure device registration and set the global authentication policy:

Initialize-ADDeviceRegistration -ServiceAccountName domain\svc-adfsservices$

Set-ADFSGlobalAuthenticationPolicy -DeviceAuthenticationEnabled $true

Set up AD FS authentication

To configure Work Folders to use AD FS for authentication, follow these steps:

  1. Log on to Sync Share Server. Open Server Manager.
  2. Click Servers, and then select your Work Folders server in the list.
  3. Right-click the server name, and click Work Folders Settings.
  4. In the Work Folders Settings window, select Active Directory Federation Services, and type in the ADFS URL. Click Apply. In the test example, the URL is https://sts.domain.com.

Publish the Work Folders web application

The next step is to publish a web application that will make Work Folders available to clients. To publish the Work Folders web application, follow these steps:

  1. Import the Work Folders certificate into the Web Application Proxy (WAP) servers.
  2. Open Server Manager, and on the Tools menu, click Remote Access Management to open the Remote Access Management Console.
  3. Under Configuration, click Web Application Proxy.
  4. Under Tasks, click Publish. The Publish New Application Wizard opens.
  5. On the Welcome page, click Next.
  6. On the Preauthentication page, select Active Directory Federation Services (AD FS), and click Next.
  7. On the Support Clients page, select OAuth2, and click Next.
  8. On the Relying Party page, select Work Folders, and then click Next. This list is published to the Web Application Proxy from AD FS.
  9. On the Publishing Settings page, enter the publishing values for your environment, and then click Next. (A scripted sketch of this step appears after the list.)
  10. The confirmation page shows the Windows PowerShell command that will run to publish the application. Click Publish.
  11. On the Results page, verify that the application was published successfully.
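The publish step corresponds roughly to the Add-WebApplicationProxyApplication command shown on the confirmation page. A hedged sketch follows; the URLs and thumbprint are placeholders that must be replaced with your own values:

Add-WebApplicationProxyApplication -Name "WorkFolders" `
    -ExternalPreauthentication ADFS `
    -ADFSRelyingPartyName "WorkFolders" `
    -ExternalUrl "https://workfolders.domain.com/" `
    -ExternalCertificateThumbprint "<thumbprint of the Work Folders certificate>" `
    -BackendServerUrl "https://workfolders.domain.com/"    # placeholder URLs and thumbprint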

Configure Work Folders on the client

To configure Work Folders on a non-domain-joined client machine, follow these steps:

  1. On the client machine, open Control Panel and click Work Folders.
  2. Click Set up Work Folders.
  3. On the Enter your work email address page, enter either the user's email address (for example, user@domain.com) or the Work Folders URL (in the test example, https://workfolders.domain.com), and then click Next.
  4. If the user is connected to the corporate network, authentication is performed by Windows Integrated Authentication. If the user is not connected to the corporate network, authentication is performed by ADFS (OAuth) and the user will be prompted for credentials. Enter your credentials and click OK.
  5. After you have authenticated, click Next.
  6. The Security Policies page lists the security policies that you set up for Work Folders. Click Next.
  7. A message is displayed stating that Work Folders has started syncing with your PC. Click Close.
  8. The Manage Work Folders page shows the amount of space available on the server, sync status, and so on. If necessary, you can re-enter your credentials here. Close the window.
  9. Your Work Folders folder opens automatically. You can add content to this folder to sync between your devices.

To configure Work Folders on a domain-joined client machine, follow these steps:

  1. Configure Work Folders using Group Policy: go to User Configuration > Administrative Templates > Windows Components > Work Folders > Specify Work Folders settings.
  2. Specify the Work Folders URL as https://workfolders.domain.com. (A hedged way to verify the applied setting on a client follows this list.)
  3. Apply the GPO to the selected OU.
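To confirm on a client that the policy has applied, you can inspect the policy registry key the GPO writes. The key path and value below are assumptions based on the Work Folders policy template; verify them in your environment before relying on them:

Get-ItemProperty -Path "HKCU:\Software\Policies\Microsoft\Windows\WorkFolders" -ErrorAction SilentlyContinue    # assumed policy key path; SyncUrl should show the configured URL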

Relevant Articles:

Work Folder FAQ

NetApp CIFS shares not mounting to Windows Server 2012


ADFS 4.0 Step by Step Guide: Federating with Splunk Cloud

To integrate On-Premises SSO with Splunk Cloud, you need the following items:

  • An administrative account in your ADFS
  • An administrative account in your Windows Active Directory
  • An administrative account for your Splunk Cloud instance or tenant.

Step 1: Create Security Groups

  1. Sign in to the Domain Controller.
  2. Open Active Directory Users and Computers.
  3. Create two security groups named SG-SplunkAdmin and SG-SplunkUsers.

Step 2: Download IdP (ADFS 2016) Metadata

  1. Log into the ADFS 2016 server or an admin PC.
  2. Open a browser and go to the metadata URL: https://ADFSServer1.domain.com/federationmetadata/2007-06/federationmetadata.xml
  3. Download and save the file as the IdP metadata.
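If you prefer the command line, the federation metadata document can also be pulled down with Invoke-WebRequest; the output path is just an example:

Invoke-WebRequest -Uri "https://ADFSServer1.domain.com/federationmetadata/2007-06/federationmetadata.xml" -OutFile "C:\Temp\IdPMetadata.xml"    # example output path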

Step 3: Download Splunk Metadata

  1. Log in to your Splunk Cloud instance using administrator credentials.
  2. While logged in to a session as an admin role user, download the metadata from your Splunk Cloud instance by browsing to https://yourinstance.splunkcloud.com/saml/spmetadata
  3. Download and save the file as the SP metadata.

Step 4: Extract the Splunk Certificate from the Metadata

  1. Open the Splunk metadata XML file in Notepad and search for "X509Certificate". Copy everything between the '<ds:X509Certificate>' and '</ds:X509Certificate>' tags.
  2. Open a new Notepad window and paste the content. Add a row above the certificate with the text -----BEGIN CERTIFICATE----- and a row below it with the text -----END CERTIFICATE-----
  3. Save the file with a .cer extension.
  4. The file will look like the following, but with more base64 characters:

-----BEGIN CERTIFICATE-----
MIIEsjCCA5qgAwIBAgIQFofWiG3iMAaFIz2/Eb9llzANBgkqhkiG9w0BAQsFADCB
sjFuz4DliAc2UXu6Ya9tjSNbNKOVvKIxf/L157fo78S1JzLp955pxyvovrsMqufq
YBLqJop4
-----END CERTIFICATE-----
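The copy-and-paste steps above can also be scripted. The sketch below assumes the SP metadata was saved to C:\Temp\SPMetadata.xml (a placeholder path) and extracts the first X509Certificate element into a .cer file:

[xml]$spMetadata = Get-Content -Path "C:\Temp\SPMetadata.xml"                                        # placeholder input path
$certNode = $spMetadata.GetElementsByTagName("X509Certificate", "http://www.w3.org/2000/09/xmldsig#") | Select-Object -First 1
$pem = "-----BEGIN CERTIFICATE-----`r`n" + $certNode.InnerText.Trim() + "`r`n-----END CERTIFICATE-----"
Set-Content -Path "C:\Temp\SplunkCloud.cer" -Value $pem -Encoding Ascii                              # placeholder output path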

Step 5: Create a Relying Party Trust

  1. Log into the ADFS 2016 server and open the management console.
  2. In the console tree, select Relying Party Trusts, and then click Add Relying Party Trust in the Actions pane (top right of the window).
  3. Click Claims aware, and then click Start.
  4. Select Import data about the relying party from a file.
  5. Browse to the location where you saved the Splunk metadata, select the file, and click Next.
  6. Type the display name as SplunkRP, and click Next.
  7. Ensure I do not want to configure multi-factor authentication […] is chosen, and click Next.
  8. Permit all users to access this relying party.
  9. Click Next, and clear the check box that opens the claims issuance policy editor when the wizard finishes.
  10. Click Close. The new relying party trust appears in the window.
  11. Right-click the relying party trust and select Properties.
  12. In the properties, on the Encryption tab, remove the encryption certificate.
  13. On the Signature tab, make sure the Splunk certificate was imported.
  14. On the Advanced tab, set the secure hash algorithm to SHA-1.
  15. On the Identifiers tab, the default relying party identifier for Splunk comes in from the metadata file as 'splunkEntityId'. Remove the default identifier and add a new entity ID: splunk-yourinstance.
  16. On the Endpoints tab, make sure the SAML Assertion Consumer endpoint is https://yourinstance.splunkcloud.com/saml/acs with a POST binding and index 0.
  17. Also on the Endpoints tab, make sure the SAML logout endpoint is https://yourinstance.splunkcloud.com/saml/logout with a POST binding.
  18. Click Apply, and then click OK. (A scripted sketch of steps 14 and 15 follows this list.)
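Steps 14 and 15 can also be applied from PowerShell if you prefer scripting. This is a sketch only; the SHA-1 algorithm URI below is the standard value, but confirm it against your ADFS version:

Set-AdfsRelyingPartyTrust -TargetName "SplunkRP" -Identifier "splunk-yourinstance" -SignatureAlgorithm "http://www.w3.org/2000/09/xmldsig#rsa-sha1"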

Step 6: Add Claim Rules for the Relying Party

  1. Log into the ADFS server and open the management console.
  2. Right-click on the Splunk relying party trust and select Edit Claim Rules.
  3. Click the Issuance Transform Rules tab.
  4. Click Add Rule.
  5. Ensure Send LDAP Attributes as Claims is selected, and click Next.
  6. On the Configure Claim Rule page, enter the following details (a scripted sketch of both rules follows this list):

  • Claim Rule Name = Rule1
  • Attribute Store = Active Directory
  • LDAP Attribute to Outgoing Claim Type mappings:
    • Display-Name: realName
    • Token-Groups – Unqualified Names: Role
    • E-Mail-Addresses: mail
  7. Click Finish, and then click Apply.
  8. Click Add Rule again.
  9. Ensure Transform an Incoming Claim is selected, and click Next.
  10. On the Configure Claim Rule page, enter the following details:

  • Claim Rule Name = Rule2
  • Incoming claim type = UPN
  • Incoming name ID format = Unspecified
  • Outgoing claim type = Name ID
  • Outgoing name ID format = Transient Identifier

  11. Click Finish, and then click Apply.
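If you would rather script both rules than use the wizard, a hedged sketch in the ADFS claim rule language follows. It mirrors the attribute and claim-type choices listed above (realName, Role, mail, and a transient Name ID built from the UPN); review and test it before applying it to the SplunkRP trust:

$rules = @'
@RuleTemplate = "LdapClaims"
@RuleName = "Rule1"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory", types = ("realName", "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", "mail"), query = ";displayName,tokenGroups,mail;{0}", param = c.Value);

@RuleTemplate = "MapClaims"
@RuleName = "Rule2"
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:2.0:nameid-format:transient");
'@

Set-AdfsRelyingPartyTrust -TargetName "SplunkRP" -IssuanceTransformRules $rules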

Step 7: Import the Splunk Certificate into the ADFS Servers

  1. Sign in to the ADFS server, open a command prompt as an administrator, and type MMC.exe.
  2. Click File, and then click Add/Remove Snap-in.
  3. Add the Certificates snap-in for the Computer account.
  4. Right-click Trusted People, and select All Tasks > Import.
  5. Browse to the location of the certificate and import it.
  6. Close the MMC.
  7. Repeat these steps on all ADFS servers in your farm.
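The import can also be done from an elevated PowerShell prompt on each ADFS server; the file path below is a placeholder for wherever you saved the .cer file in Step 4:

Import-Certificate -FilePath "C:\Temp\SplunkCloud.cer" -CertStoreLocation Cert:\LocalMachine\TrustedPeople    # placeholder path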

Step 8: Set SigningCertificateRevocationCheck to None

Sign in to the primary ADFS server, open PowerShell as an administrator, and run the following command:

Set-ADFSRelyingPartyTrust -TargetName "SplunkRP" -SigningCertificateRevocationCheck None

Step 9: Configure SAML in Your Splunk Cloud Instance

  1. On the Splunk instance, as an admin user, choose Settings > Access Controls > Authentication Method. Choose SAML, and then click the Configure Splunk to use SAML button. On the SAML Groups setup page, click the SAML Configuration button in the upper right corner.
  2. The SAML Configuration popup window appears. Click Select File to import the XML metadata file (or copy and paste its contents into the Metadata Contents textbox), and click Apply.
  3. The following fields should be automatically populated by the metadata:
    • Single Sign On (SSO) URL
    • Single Log Out (SLO) URL
    • IdP's certificate file
    • Sign AuthnRequest (checked)
    • Sign SAML response (checked)
    Enter the Entity ID as splunk-yourinstance, the same value used for the relying party identifier in the ADFS configuration.
  4. Scroll down to the Advanced Settings section:
    • Enter the Fully Qualified Domain Name (FQDN) of the Splunk Cloud instance: https://yourinstance.splunkcloud.com
    • Enter 0 (zero) for the Redirect port (the load balancer's port).
    • Set the Attribute Alias Role to http://schemas.microsoft.com/ws/2008/06/identity/claims/role
    • It may also be necessary to set an Attribute Alias for Real Name and Mail, but not all implementations require these settings. Click Save to save the configuration.
  5. The next step is to set up the SAML groups. On the Settings > Access Controls > Authentication Method > SAML Settings page, click the green New Group button.
  6. Enter a group name that matches the Active Directory group names passed by ADFS. For example:

Group Name (type this name in the New Group properties) | Splunk Role (select from Available Roles) | Active Directory Security Group
SG-SplunkAdmin | Admin | SG-SplunkAdmin
SG-SplunkUsers | User | SG-SplunkUsers

  7. Click Save.

Step 10: Testing SSO

  1. To test SSO, visit https://yourinstance.splunkcloud.com/en-US/account/login?loginType=splunk. You will be redirected to the ADFS sign-in page. Enter your on-premises email address and password as the credentials. You should be redirected back to Splunk Cloud.
  2. Also test logging out of Splunk; you should be redirected to the Splunk SAML logout page.


ADFS 4.0 Step by Step Guide: Federating With Google Apps

To integrate On-Premises SSO with Google Apps, you need the following items:

  • An administrative account in your ADFS
  • An administrative account in your Windows Active Directory
  • An administrative account for your Google Apps (Google Admin console) tenant

Step 1: Export the ADFS Token Signing Certificate

  1. Log into the ADFS 2016 server and open the management console.
  2. In the console tree, expand Service and click Certificates.
  3. Right-click the token-signing certificate and select View Certificate.
  4. Select the Details tab.
  5. Click Copy to File. The Certificate Export Wizard opens.
  6. Select Next. Ensure the No, do not export the private key option is selected, and then click Next.
  7. Select DER encoded binary X.509 (.cer), and then click Next.
  8. Select where you want to save the file and give it a name. Click Next.
  9. Select Finish.

Step 2: Download the Google Certificate

  1. Log in to the Google Admin console with administrator permission to add new apps.
  2. Go to Apps > SAML Apps and click "+" at the bottom right of the page to add a new SAML IdP ("Enable SSO for SAML Application").
  3. Select Setup my own custom app at the bottom of the window. You will see the Google IdP Information page. Click the Download button to retrieve the Google certificate.

Step 3: Create a Relying Party Trust

  1. Log into the ADFS 2016 server and open the management console.
  2. In the console tree, select Relying Party Trusts, and then click Add Relying Party Trust in the Actions pane.
  3. Click Claims aware, and then click Start.
  4. Select Enter data about the relying party manually, and click Next.
  5. Give it a display name such as GoogleApps, and then click Next twice.
  6. On the Configure URL page, check Enable support for the SAML 2.0 WebSSO protocol, type https://www.google.com/a/domain.com/acs, and click Next.
  7. On the Configure Identifiers page, type the identifier google.com/a/domain.com, and click Add.
  8. Ensure I do not want to configure multi-factor authentication […] is chosen, and click Next.
  9. Permit all users to access this relying party.
  10. Click Next, and clear the check box that opens the claims issuance policy editor when the wizard finishes.
  11. Click Close. The new relying party trust appears in the window.
  12. Right-click the relying party trust and select Properties.
  13. On the Advanced tab, set the secure hash algorithm to SHA-256.
  14. On the Endpoints tab, click Add SAML and add a SAML Logout endpoint with a POST binding and a URL of https://sts.domain.com/adfs/ls/?wa=wsignout1.0
  15. On the Signature tab, click Add and import the Google certificate that you downloaded from the Google Admin console. Click Apply, and then click OK. (A scripted sketch of this trust follows this list.)
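For reference, a roughly equivalent scripted creation of this trust is sketched below. The certificate path is a placeholder, domain.com stands for your tenant, and the access control policy name assumes the default ADFS 2016 policies; verify each value before running it:

$acsEndpoint = New-AdfsSamlEndpoint -Binding "POST" -Protocol "SAMLAssertionConsumer" -Uri "https://www.google.com/a/domain.com/acs" -Index 0
$logoutEndpoint = New-AdfsSamlEndpoint -Binding "POST" -Protocol "SAMLLogout" -Uri "https://sts.domain.com/adfs/ls/?wa=wsignout1.0"
$googleCert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Temp\GoogleIdP.pem")    # placeholder path to the certificate downloaded in Step 2
Add-AdfsRelyingPartyTrust -Name "GoogleApps" `
    -Identifier "google.com/a/domain.com" `
    -SamlEndpoint $acsEndpoint, $logoutEndpoint `
    -RequestSigningCertificate $googleCert `
    -SignatureAlgorithm "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256" `
    -AccessControlPolicyName "Permit everyone"    # assumed default policy name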

Step 4: Add a Claim Rule for the Relying Party

  1. Log into the ADFS server and open the management console.
  2. Right-click on the GoogleApps relying party trust and select Edit Claim Rules.
  3. Click the Issuance Transform Rules tab.
  4. Click Add Rule.
  5. Ensure Send LDAP Attributes as Claims is selected, and click Next.
  6. On the Configure Claim Rule page, enter the following details:
  • Claim Rule Name = Send Email Address As NameID
  • Attribute Store = Active Directory
  • LDAP Attribute = E-mail-Addresses
  • Outgoing Claim Type = Name ID
  7. Click Finish, and then click Apply.

Step 5: Configure Google Apps in the Admin Console

  1. Sign in to the Google Apps Admin Console using your administrator account.
  2. Click Security. If you don't see the link, it may be hidden under the More Controls menu at the bottom of the screen.
  3. On the Security page, click Setup single sign-on (SSO).
  4. Perform the following configuration changes: for the Verification certificate, replace and upload the ADFS token-signing certificate that you exported from ADFS earlier.
  5. Click Save Changes.

Step 6: Testing SSO

To test SSO, visit http://mail.google.com/a/domain.com. You will be redirected to the ADFS sign-in page. Enter your on-premises email address and password as the credentials. You should be redirected back to Google Apps and arrive at your mailbox.