Chapter 6 Implement and Manage Storage

This chapter covers the following topic lessons:

  • Azure Storage Introduction
  • Blob Storage
  • Azure File Storage
  • Azure File Sync
  • Azure Import/Export service
  • Export from Azure
  • Import to Azure
  • Azure Data Box
  • Content Delivery Networks (CDN)

This chapter covers the following lab exercises:

  • Create Blob Storage Container and upload a File
  • Blob Storage Tiering
  • Create Blob Storage Container using Storage Explorer
  • Create and Mount File Share
  • Deploying Azure File Sync in 4 Steps
  • Demonstrating Export Job Creation
  • Demonstrating Data Box Order through Azure Portal
  • Implementing Azure CDN using Azure Portal
  • Enabling or Disabling Compression
  • Changing Optimization type
  • Changing Caching Rules
  • Allow or Block CDN in Specific Countries

Chapter Topology

In this chapter we will add Blob Storage, File Storage, the Azure File Sync service, CDN and an Import/Export job to the topology. We will also demonstrate how to create an Azure Data Box order.

Screenshot_246

We will install the Azure File Sync agent on VM VMAD.

Screenshot_247

This diagram is shown separately because of space constraints in the top diagram.

Azure Storage Introduction

Azure Storage is a managed cloud storage solution that is highly available and massively scalable. Azure provides five types of storage: Blob, Table, Queue, Files and Virtual Machine Disk storage (Page Blobs).

Azure Blobs: A massively scalable object store for unstructured data.

Azure Files: Managed file shares for cloud or on-premises deployments. File Storage provides shared storage for Azure/on-premises VMs using SMB protocol.

Azure Queues: A messaging store for reliable messaging between application components.

Azure Tables: A NoSQL store for schemaless storage of structured data.

The figure below shows the five types of Azure Storage services.

Screenshot_248

Comparing Different Azure Storage Service Types

Screenshot_249

Features of Azure Storage

Durable and highly available: Redundancy ensures that your data is safe in the event of transient hardware failures. You can also opt to replicate data across datacenters or geographical regions for additional protection from local catastrophe or natural disaster. Data replicated in this way remains highly available in the event of an unexpected outage.

Secure: All data written to Azure Storage is encrypted by the service. Azure Storage provides you with fine-grained control over who has access to your data.

Scalable: Azure Storage is designed to be massively scalable to meet the data storage and performance needs of today's applications.

Managed: Microsoft Azure handles maintenance and any critical problems for you.

Accessible: Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft provides SDKs for Azure Storage in a variety of languages -- .NET, Java, Node.js, Python, PHP, Ruby, Go, and others -- as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your data.

Blob Storage

Azure Blob storage stores unstructured data in the cloud as objects/blobs. Azure Blob storage is massively scalable, highly redundant and secure object storage with URL/HTTP-based access, which allows it to be accessed from within or outside Azure. Though Azure objects are regionally scoped, you can access them from anywhere in the world.

Azure Blob Storage is a managed service that stores large amounts of unstructured data in the cloud as objects/blobs. Blob storage can store any type of text or binary data, such as a document, media file or application installer, that can be accessed anywhere in the world via HTTP or HTTPS.

Blob storage is also referred to as object storage.

Blobs are basically files like those that you store on your computer. They can be pictures, Excel files, HTML files, virtual hard disks (VHDs), log files, database backups, etc. Blobs are stored in containers, which are similar to folders. Containers are created under a storage account.

You can access Blob storage from anywhere in the world using URLs, the REST interface, or one of the Azure SDK storage client libraries. Storage client libraries are available for multiple languages, including Node.js, Java, PHP, Ruby, Python, and .NET.

You can create Blob storage in 3 ways: with a General Purpose v1 storage account, a General Purpose v2 storage account or a Blob storage account.
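
To show what this looks like outside the portal, here is a minimal Azure CLI sketch that creates a GPv2 account. The account and resource group names reuse sastdcloud and RGCloud from this book's labs; the location and SKU are assumptions for illustration.

  # Create a General Purpose v2 storage account.
  # Use --kind Storage for GPv1, or --kind BlobStorage (plus --access-tier)
  # for a Blob storage account.
  az storage account create \
      --name sastdcloud \
      --resource-group RGCloud \
      --location eastus2 \
      --sku Standard_LRS \
      --kind StorageV2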

Common Use cases for Blob Object Storage

For users with large amounts of unstructured object data to store in the cloud, Blob storage offers a cost-effective and scalable solution. Common scenarios include:

  • Serving images or documents directly to a browser.
  • Storing files for distributed access.
  • Streaming video and audio.
  • Storing data for backup and restore, disaster recovery, and archiving.

Blob Storage Service Components

Blob Service contains 3 components.

Screenshot_250

Storage Account

All access to Azure Storage is done through a storage account. This storage account can be a General-purpose v1 & v2 or a Blob storage account which is specialized for storing objects/blobs.

Containers

A container is like a folder that stores blob files. A container provides a grouping of a set of blobs. All blobs must be in a container. An account can contain an unlimited number of containers. A container can store an unlimited number of blobs. Container names must be lowercase.

Blob

A file of any type and size.

Azure Storage offers three types of blobs: block, page and append blobs. Blob Storage tiering is available with GPv2 Account and Blob Storage Account.

Types of Blob Storage in Azure Cloud

Blob storage offers three types of blobs - Block Blobs, Page Blobs and Append Blobs.

Block blobs are optimized for storing cloud objects and streaming content, and are a good choice for storing documents, media files, backups, etc. They are backed by HDD.

Append blobs are similar to block blobs, but are optimized for append operations. An append blob can be updated only by adding a new block to the end. Append blobs are a good choice for scenarios such as logging, where new data needs to be written only to the end of the blob. They are backed by magnetic HDD.

Page blobs are used for storing virtual machine disks (OS and data disks). Page blobs can use both HDD and SSD and were covered in the Compute chapter. Page blobs can be up to 8 TB in size and are more efficient for frequent read/write operations.

Comparing 3 types of Blob Storage

Screenshot_251

In this chapter we will focus only on block blobs and append blobs, as we already covered page blobs in the Azure Compute chapter.

Azure Blob Storage Tiering: Hot, Cool & Archive Storage tiers

Azure Blob Storage tiering is available with General Purpose v2 accounts and Blob storage accounts. A General Purpose v1 account does not offer blob storage tiering. Microsoft recommends using GPv2 accounts instead of Blob storage accounts for tiering.

General Purpose v2 accounts and Blob storage accounts expose the Access Tier attribute, which allows you to specify the storage tier as Hot or Cool. The Archive tier is only available at the blob level and not at the storage account level.

Hot Storage Tier

The Azure hot storage tier is optimized for storing data that is frequently accessed, at lower access cost but higher storage cost.

Cool Storage Tier

The Azure cool storage tier is optimized for storing data that is infrequently accessed at lower storage cost but at higher access cost.

Archive Storage Tier

Archive storage tier is optimized for storing data that is rarely accessed and has the lowest storage cost and highest data retrieval costs compared to hot and cool storage. The archive tier can only be applied at the blob level.

Blob rehydration (Important Concept)

Data is offline in Archive Storage Tier. To read data in archive storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and can take up to 15 hours to complete.

If there is a change in the usage pattern of your data, you can also switch between these storage tiers at any time.

Data in hot storage tier has slightly higher availability (99.9%) than cool storage tier (99%). Availability is not applicable for Archive tier as data is offline.

Hot Tier Use Case

  1. Data that is in active use or expected to be accessed frequently.

  2. Data that is staged for processing and eventual migration to the cool storage tier.

Cool Tier Use Case

  1. Short-term backup and disaster recovery datasets

  2. Older media content not viewed frequently anymore but is expected to be available immediately when accessed.

  3. Large data sets that need to be stored cost effectively while more data is being gathered for future processing.

Archive Tier Use Case

  1. Long-term backup, archival, and disaster recovery datasets.

  2. Original (raw) data that must be preserved, even after it has been processed into final usable form.

  3. Compliance and archival data that needs to be stored for a long time and is hardly ever accessed (for example, security camera footage, old X-rays/MRIs for healthcare organizations, audio recordings, and transcripts of customer calls for financial services).

Comparison of the storage tiers

Screenshot_252

Options to make blob data available to users

Private access: Only the owner of the storage account can access the blob data.

Anonymous access: You can make a container or its blobs publicly available for anonymous access.

Shared access signatures: A shared access signature (SAS) provides delegated access to a resource in your storage account, with permissions that you specify and for an interval that you specify, without having to share your account access keys.

Anonymous read access to containers and blobs

By default, a container and any blobs within it may be accessed only by the owner of the storage account (public access level: Private). To give anonymous users read permissions to a container and its blobs, you can set the container's public access level to Container or Blob.

Container (anonymous read access for containers and blobs): Container and blob data can be read via anonymous request. Clients can enumerate blobs within the container via anonymous request, but cannot enumerate containers within the storage account.

Blob (anonymous read access for blobs only): Blob data within this container can be read via anonymous request, but container data is not available. Clients cannot enumerate blobs within the container via anonymous request.
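
The access level can also be switched from the command line. A minimal Azure CLI sketch, assuming container hk410 in storage account sastdcloud (created in Exercise 64 later in this chapter) and the account key exported in the AZURE_STORAGE_KEY environment variable:

  # Blob: anonymous read for blobs only (no blob enumeration)
  az storage container set-permission --name hk410 --account-name sastdcloud --public-access blob

  # Container: full public read access, including blob enumeration
  az storage container set-permission --name hk410 --account-name sastdcloud --public-access container

  # Private: no anonymous access
  az storage container set-permission --name hk410 --account-name sastdcloud --public-access off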

HTTP access to blob data using DNS names

By default, the blob data in your storage account is accessible only to the storage account owner because of the default Private (no anonymous access) policy. Authenticating requests against Blob storage requires the account access key.

Using DNS names, you can access the blob endpoint over HTTP if anonymous access is configured.

https://mystorageaccount.blob.core.windows.net/mycontainer/myblob

Here mystorageaccount is the storage account name, mycontainer is the container name and myblob is the uploaded file name.
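
Because access is plain HTTP/HTTPS, any client can fetch an anonymously readable blob. For example, with curl and the placeholder names above:

  # Succeeds only if the container's access level is Blob or Container;
  # with Private, an anonymous request returns HTTP 404 (ResourceNotFound).
  curl -i https://mystorageaccount.blob.core.windows.net/mycontainer/myblob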

Controlling access to blob data using Shared access signatures (SAS)

Anybody with access to a storage account key has unlimited access to the storage account.

A shared access signature (SAS) provides delegated access to resources in your storage account without having to share your account access keys. SAS is a secure way to share your storage resources without compromising your account keys.

A shared access signature (SAS) is a URI that grants restricted access rights to Azure Storage resources. You can provide a shared access signature to clients who should not be trusted with your storage account key but whom you wish to delegate access to certain storage account resources. By distributing a shared access signature URI to these clients, you grant them access to a resource for a specified period of time.

SAS granular control features

  1. The interval over which the SAS is valid, including the start time and the expiry time.
  2. The permissions granted by the SAS. For example, a SAS on a blob might grant a user read and write permissions to that blob, but not delete permissions.
  3. An optional IP address or range of IP addresses from which Azure Storage will accept the SAS.
  4. The protocol over which Azure Storage will accept the SAS. You can use this optional parameter to restrict access to clients using HTTPS.

Types of shared access signatures (SAS)

A service SAS delegates access to a resource in just one of the storage services: the Blob, Queue, Table, or File service.

An account-level SAS can delegate access to multiple storage services (i.e. Blob, File, Queue, Table).
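
Both kinds of SAS can be generated with the Azure CLI. The sketch below is illustrative only: it assumes blob HelloWorld.txt in container hk410 from Exercise 64, the account key exported in AZURE_STORAGE_KEY, and an arbitrary expiry date.

  # Service SAS: read-only access to a single blob until the expiry time (UTC)
  az storage blob generate-sas \
      --account-name sastdcloud \
      --container-name hk410 \
      --name HelloWorld.txt \
      --permissions r \
      --expiry 2020-12-31T23:59Z \
      --https-only \
      --output tsv

  # Account-level SAS: read/list access across the Blob and File services
  az storage account generate-sas \
      --account-name sastdcloud \
      --services bf \
      --resource-types sco \
      --permissions rl \
      --expiry 2020-12-31T23:59Z \
      --https-only \
      --output tsv

Append the returned token to the resource URL after a question mark (?) to use it.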

Note: Exercises 62 & 63 in Chapter 5 show how to create a SAS and use it.

Exercise 64: Create Blob Storage Container and upload a File

In this exercise we will create blob storage container hk410 in storage account sastdcloud in resource group RGCloud, upload a file to container hk410 and access it over the internet. We will then change the permission to private and try to access the file over the internet again.

  1. Go to Storage Account sastdcloud Dashboard > Click Blobs under services>Blob Dashboard opens> In Right pane click +container>Create New Container blade opens>Enter name hk410 and select access level Blob and click ok.

    Screenshot_253
  2. Container is created as shown below.

    Screenshot_254
  3. Upload a file from your desktop. I have created a helloworld.txt file on my desktop. I also added Hello World as content in the file. Click the container hk410>Container hk410 dashboard opens>Click upload>Upload blob blade opens>Click the file button to upload the HelloWorld.txt file from the desktop>Keep all other values as default>Click upload>Close the upload pane.

    Screenshot_255
  4. File is uploaded as shown below. I also clicked … at the extreme right to see the options available.

    Screenshot_256
  5. Double click HelloWorld.txt in the container pane>Blob dashboard opens>Copy the URL of the file.

    Screenshot_257
  6. Open a browser and paste the URL copied in step 5. We were able to open the file as we had chosen Blob anonymous read permission.

    Screenshot_258
  7. Change the permission to private. In the Blob dashboard select container hk410 and click Change Access Level>Change Access Level blade opens>Select Private from the drop-down box and click OK.

    Screenshot_259
  8. Open a browser and paste the URL copied in step 5. We were not able to open the file as we had chosen Private (no anonymous access) permission.

    Screenshot_260
  9. Change the permission back to Blob as we need it for other exercises. In Blob dashboard select container hk410 and click Change Access Level>Change Access Level blade opens>Select Blob from drop down box and click ok.

    Screenshot_261
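
The same create/upload/access flow can be scripted. A minimal Azure CLI sketch, assuming the account key is exported in AZURE_STORAGE_KEY and HelloWorld.txt is in the current directory:

  # Create container hk410 with anonymous read access for blobs
  az storage container create --name hk410 --account-name sastdcloud --public-access blob

  # Upload the file as blob HelloWorld.txt
  az storage blob upload --account-name sastdcloud --container-name hk410 \
      --name HelloWorld.txt --file HelloWorld.txt

  # Print the URL to paste into the browser (step 6)
  az storage blob url --account-name sastdcloud --container-name hk410 \
      --name HelloWorld.txt --output tsv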

Exercise 65: Blob Storage Tiering

In this exercise we will just demonstrate how to move Blob Object HelloWorld.txt from Hot Access tier to Cool or Archive Tier using Azure Portal. We will not actually move it.

  • Go to Blob Container hk410 dashboard>Click HelloWorld.txt in the right pane>Blob properties pane opens>Scroll down and under Access tier select the Cool or Archive tier.

    Screenshot_262
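
If you did want to move the blob, each tier change is a single CLI call. A sketch, assuming the same blob and account as above:

  # Move the blob to the Cool tier
  az storage blob set-tier --account-name sastdcloud --container-name hk410 \
      --name HelloWorld.txt --tier Cool

  # Move it to Archive; the data goes offline until rehydrated
  az storage blob set-tier --account-name sastdcloud --container-name hk410 \
      --name HelloWorld.txt --tier Archive

  # Setting Hot (or Cool) on an archived blob starts rehydration,
  # which can take up to 15 hours as noted earlier
  az storage blob set-tier --account-name sastdcloud --container-name hk410 \
      --name HelloWorld.txt --tier Hot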

Exercise 66: Create Blob Storage Container using Storage Explorer

In this exercise we will create Blob Storage Container test410 in Storage Account sastdcloud. We will then upload a text file HelloWorld.txt to the Blob Container.

  1. In Storage Explorer dashboard expand Storage Account sastdcloud under the Pay-as-you-go subscription.

    Screenshot_263
  2. Right click Blob Containers under sastdcloud>Dialog box opens. Click Create Blob Container>In the dialog box type test410 and press Enter. Container test410 is created as shown below with Private (no anonymous access). You can also see container hk410 created in Exercise 64.

    Screenshot_264
  3. Create HelloWorld.txt file with contents Hello World on your desktop. Click upload in right pane and select upload files>Upload File Blade opens>Click … and select HelloWorld.txt from your desktop>Click upload.

    Screenshot_265
  4. Figure below shows HelloWorld.txt file uploaded.

    Screenshot_266
  5. You can change the public access level by right clicking container test410 and clicking Set Public Access Level>Public Access Level blade opens. You can change the access level as per your requirement.

    Screenshot_267

File Storage

Azure File Storage offers fully managed file shares in the cloud that are accessible via the industry-standard Server Message Block (SMB 3.0) protocol (also known as Common Internet File System or CIFS). Azure File shares can be mounted concurrently by cloud or on-premises deployments of Windows, macOS, and Linux instances.

Figure below shows Multiple Virtual Machines accessing Azure File share.

Screenshot_268

Azure File share Use case

  • Azure Files can be used to completely replace or supplement traditional on-premises file servers or NAS devices.
  • Developers can leverage their existing code and skills to migrate existing applications that rely on file shares to Azure quickly and without costly rewrites.
  • An Azure File share is a convenient place for cloud applications to write their logs, metrics, and crash dumps.
  • When developers or administrators are working on VMs in the cloud, they often need a set of tools or utilities. Copying such utilities and tools to each VM can be a time-consuming exercise. By mounting an Azure File share locally on the VMs, developers and administrators can quickly access their tools and utilities, no copying required.

File Service Architecture and components

Figure below shows the architecture of File share. File share is mounted as a drive on Virtual Machine and is accessed over the network.

Screenshot_269

File Service contains 3 components: Storage Account, File Shares and Files.

Screenshot_270

Storage Account: This storage account can be a General-purpose v1 or v2 storage account. It supports only Standard Storage for File service.

Share: Share stores the files. Azure File shares can be mounted and accessed concurrently by cloud or on-premises deployments of Windows, Linux, and macOS. A share can store an unlimited number of files.

Directory: Directory is optional. Directory is like a folder for files.

File: A file of any type with max size of 1 TB.

Exercise 67: Create and Mount File Share

In this Exercise we will create File Share fsaz103 and mount it to Windows VM VMFE1. Creating and Mounting File share is a 2 step process:

  1. Create File Share and upload a file.

  2. Mount the file share on a server instance in the cloud or on-prem. In this exercise we will mount the file share on Windows VM VMFE1 created in Ex 18, Chapter 2.

Creating File Share

  1. Go to Storage Account sastdcloud Dashboard.

    Screenshot_271
  2. Click Files in right pane>File Service dashboard opens>Click + File share> Create File share blade opens>Enter name fsaz103 and 1 GB in Quota and click create.

    Screenshot_272
  3. Click File share fsaz103 in File service pane> File share fsaz103 dashboard opens. Click upload in right pane> Upload File Blade open> Upload a file from your desktop and click upload. After file is uploaded close the upload blade.

    Screenshot_273
  4. In File share fsaz103 dashboard click Connect in the right pane>Connect blade opens. In the Connect pane go to the second rectangular box and scroll down. Copy the path appended to net use Z:. In this case it is \\sastdcloud.file.core.windows.net\fsaz103

    Screenshot_274

    Also read the explanation under the box, which says: when connecting from a computer outside Azure, remember to open outbound TCP port 445 in your local network. In our case we are connecting from VMFE1 inside Azure.

  5. Go to Storage Account sastdcloud dashboard>Click Access keys in the left pane>In the right pane copy key1.

    Screenshot_275
  6. Connect to Azure VM VMFE1 using RDP> Open File Explorer and click This PC in left pane.

    Screenshot_276
  7. Click the icon on the right side to open the ribbon items. Note the Map network drive option.

    Screenshot_277
  8. Click Map Network Drive>Map Network Drive dialog opens>In Folder enter the path copied in step 4: \\sastdcloud.file.core.windows.net\fsaz103>Click Finish.

    Screenshot_278
  9. Enter Credential dialog box opens>In Username enter the storage account name prepended with AZURE\ (AZURE\sastdcloud)>In Password enter the storage account key>Click OK.

    Screenshot_279
  10. After VMFE1 connects you can see File share fsaz103 mounted to VMFE1.

    Screenshot_280
  11. Click on File share fsaz103 and you can see HelloWorld.txt file.

    Screenshot_281
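
The share-creation half of this exercise can also be scripted. A minimal Azure CLI sketch, assuming AZURE_STORAGE_KEY holds the key copied in step 5 and HelloWorld.txt is in the current directory:

  # Create file share fsaz103 with a 1 GiB quota
  az storage share create --name fsaz103 --account-name sastdcloud --quota 1

  # Upload a file into the share
  az storage file upload --account-name sastdcloud --share-name fsaz103 \
      --source HelloWorld.txt

Mounting is still done on the Windows side with net use, exactly as in steps 8-9 above.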

Azure File Sync Service

Azure File Sync enables synchronization or replication of file data between on-premises file servers and Azure File shares while maintaining local access to your data. It is a two-way synchronization.

Screenshot_282

Benefits of Azure File Sync

  1. By synchronizing on-premises file share data to an Azure file share, you can eliminate the on-premises backup and DR requirement. This reduces both the cost and the administrative overhead of managing backup and DR.

  2. With Azure File Sync you have the option to eliminate the on-premises file server. Users and on-premises application servers can access data in the Azure File share.

  3. With Azure File Sync, branch offices can access head office file share data in Azure File shares without requiring any complex setup to integrate branch and head office file servers.

Azure File Sync Cloud Tiering Option

Cloud tiering is an optional feature of Azure File Sync in which infrequently used or accessed files greater than 64 KiB in size are moved or tiered to Azure File shares.

When a user opens a tiered file, Azure File Sync seamlessly recalls the file data from Azure Files without the user needing to know that the file is not stored locally on the system.

Components of Azure File Sync Solution

Azure Storage Sync Service

On-premises Windows Server, also known as the registered server.

Azure File Sync Agent: The Azure File Sync agent is installed on the on-premises server and enables Windows Server to be synced with an Azure file share.

Server Endpoint: A server endpoint represents a specific location on a registered Windows server, such as a folder on a server volume. Multiple server endpoints can exist on the same volume if their namespaces do not overlap (for example F:\sync1 and F:\sync2). You can configure cloud tiering policies individually for each server endpoint.

Cloud Endpoint: A cloud endpoint is a pointer to an Azure file share. An Azure file share can be a member of only one sync group. All server endpoints sync with the cloud endpoint, making the cloud endpoint the hub.

Sync Group: A sync group has one cloud endpoint, which represents an Azure file share, and one or more server endpoints, each of which represents a path on a Windows server. Endpoints within a sync group are kept in sync with each other.

Cloud tiering (optional): Cloud tiering is an optional feature of Azure File Sync in which infrequently used or accessed files greater than 64 KiB in size can be tiered to Azure Files. When a file is tiered, the Azure File Sync file system filter (StorageSync.sys) replaces the file locally with a pointer, or reparse point. The reparse point represents a URL to the file in Azure Files. A tiered file has the "offline" attribute set in NTFS so third-party applications can identify tiered files. When a user opens a tiered file, Azure File Sync seamlessly recalls the file data from Azure Files without the user needing to know that the file is not stored locally on the system.

Design Nugget: The Storage Sync Service should be in the same region and resource group as the storage account. The file share should be in the same storage account.
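
For reference, much of the portal deployment in Exercise 68 below can be approximated with the storagesync CLI extension. Treat the sketch below as an assumption-laden outline (command and parameter names may vary by CLI version; CloudEndpoint1 is a hypothetical name); it reuses SScloud, SGCloud, sastdcloud and fsaz103 from the exercises. Agent installation and server registration (Step 2) still happen on the server itself.

  # The storagesync commands ship as a CLI extension
  az extension add --name storagesync

  # Step 1: create the Storage Sync Service
  az storagesync create --resource-group RGCloud --name SScloud

  # Step 3: create a sync group and its cloud endpoint (the Azure file share)
  az storagesync sync-group create --resource-group RGCloud \
      --storage-sync-service SScloud --name SGCloud
  az storagesync sync-group cloud-endpoint create --resource-group RGCloud \
      --storage-sync-service SScloud --sync-group-name SGCloud \
      --name CloudEndpoint1 --storage-account sastdcloud \
      --azure-file-share-name fsaz103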

Exercise 68: Deploying Azure File Sync in 4 Steps

Pre-Requisites

  1. Use Storage Account sastdcloud created in Chapter 5, Exercise 54.

  2. Use File Share fsaz103 created in this Chapter Exercise 67.

  3. For Registered Server Use VM VMAD created in Chapter 2, Exercise 32.

Step 1: Create File Sync Service

  1. In Azure Portal click Create a resource>Storage>Azure File Sync>Deploy Storage Sync blade opens>Enter a name>Select RGCloud in Resource Group>Click create (Not Shown).

    Screenshot_283
  2. Figure below shows dashboard of Storage Sync Service.

    Screenshot_284

Step 2: Download, Install & Register Azure File Sync Agent on VM VMAD

  1. RDP to Windows VM VMAD.

  2. Open Internet Explorer and download the Azure File Sync agent for Windows Server 2016 from the following link (disable IE Enhanced Security settings if the download is blocked):
    https://www.microsoft.com/en-us/download/details.aspx?id=57159

  3. Click the agent file to start the installation. After the installation is complete, the Server Registration screen opens automatically.

    Screenshot_285
  4. Click Sign in and an authentication box pops up>Enter the MS account used for subscription registration and the following screen opens>Select your subscription, Resource Group RGCloud and the Storage Sync Service created in step 1.

    Screenshot_286
  5. Click Register and authentication box pops up>Enter your Microsoft Account used for Subscription registration and password>Registration successful message pops up.

  6. Go to the Storage Sync Service SScloud dashboard (created in step 1)>Click Registered Servers in the left pane>In the right pane you can see VMAD is registered and online.

    Screenshot_287
  7. On registered server VMAD I created a folder named Public under the C drive. In the Public folder I created 2 text files: Test1 and Test2.

    Screenshot_288

Step 3: Create a Sync group and add File share

In this step we will add Storage Account sastdcloud and File Share fsaz103. Storage Account sastdcloud was created in Exercise 54, Chapter 5 and File Share fsaz103 was created in Exercise 67 in this chapter.

  1. Go to the Storage Sync Service SScloud dashboard (created in step 1)>Click Sync group in the left pane>In the right pane click +Sync Group>Create Sync group blade opens>Enter a name>Select storage account sastdcloud created in Exercise 54 and file share fsaz103 created in Exercise 67>Click create.

    Screenshot_289

Step 4: Add Server Endpoint (Registered Server) to the sync Group

In this step we will add VM VMAD, on which we installed the Azure File Sync agent in step 2, to the sync group. In step 2 we also registered the VM with the Storage Sync Service. VM VMAD was created in Chapter 2, Exercise 32.

  1. Go to the Storage Sync Service SScloud dashboard (created in step 1)>You can see the sync group created in the previous step.

    Screenshot_290
  2. In the right pane click the sync group SGCloud>Sync group pane opens>Click Add Server Endpoint>Add Server Endpoint blade opens>Select your registered server from the drop-down box>In Path enter C:\Public (the Public folder was created in step 2)>Click Enabled in Cloud Tiering>Click Create.

    Screenshot_291
  3. Sync Group pane now shows both Cloud Endpoint and Server endpoint. It will take 5-10 minutes for health status of Server Endpoint to get updated.

    Screenshot_292

Step 5: Check whether files from the Public folder on the registered server are synchronized to the file share and vice versa.

  1. In Azure Portal go to Storage Account sastdcloud dashboard>Click Files in the right pane>Click the file share fsaz103>File share pane opens>You can see the text files from the Public folder on registered server VMAD are synchronized to file share fsaz103.

    Screenshot_293
  2. Go to the Public folder on the registered server. HelloWorld.txt from the file share is synchronized to the Public folder on the registered server.

    Screenshot_294

    Note: After the exercise is completed, stop the VM VMAD. We will next require this VM in the Azure AD Connect lab.

Azure Import/Export service

Azure Import/Export service is used to securely import/export large amounts of data to Azure storage.

Azure Import service is used to import data to Azure Blob storage and Azure Files by shipping disk drives to an Azure Datacentre. Azure Export service is used to export data from Azure Blob storage to disk drives and ship them to your on-premises sites.

Important Point: In the Azure Import/Export service, the customer provides the disks, whereas in the Azure Data Box scenario the disks are provided by Microsoft.

Azure Import/Export use cases

Consider using Azure Import/Export service when uploading or downloading data over the network is too slow, or getting additional network bandwidth is cost-prohibitive. Use this service in the following scenarios:

  1. Data migration to the cloud: Move large amounts of data to Azure quickly and cost effectively.

  2. Content distribution: Quickly send data to your customer sites.

  3. Backup: Take backups of your on-premises data to store in Azure Storage.

  4. Data recovery: Recover large amounts of data stored in storage and have it delivered to your on-premises location.

Import/Export Service components

  1. Import/Export service: This service, available in the Azure portal, helps the user create and track data import (upload) and export (download) jobs.

  2. WAImportExport tool: This is a command-line tool that does the following:

    1. Prepares your disk drives that are shipped for import.
    2. Facilitates copying your data to the drive.
    3. Encrypts the data on the drive with BitLocker.
    4. Generates the drive journal files used during import creation.
    5. Helps identify numbers of drives needed for export jobs.
  3. Disk Drives: You can ship Solid-state drives (SSDs) or Hard disk drives (HDDs) to the Azure Datacentre. When creating an import job, you ship disk drives containing your data. When creating an export job, you ship empty drives.

Export from Azure Job

Azure Export service is used to transfer data from Azure Blob storage to disk drives and ship them to your on-premises sites. When creating an export job, you ship empty drives to the Azure Datacentre. You can ship up to 10 disk drives per job.

Export Job Working in brief

  1. Determine the data to be exported, number of drives you need, source blobs or container paths of your data in Blob storage.
  2. Create an export job in your source storage account in Azure portal.
  3. Specify source blobs or container paths for the data to be exported.
  4. Provide the return address and carrier account number for shipping the drives back.
  5. Ship the disk drives to the shipping address provided during job creation.
  6. Update the delivery tracking number in the export job and submit the export job.
  7. The drives are received and processed at the Azure Datacentre.
  8. The drives are encrypted with BitLocker and the keys are available via the Azure portal.
  9. The drives are shipped using your carrier account to the return address provided in the export job.
Screenshot_295

Import to Azure Job

Azure Import service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives containing your data to an Azure Datacentre.

Import Job Working in brief

  1. Determine data to be imported, number of drives you need, destination blob location for your data in Azure storage.
  2. Use the WAImportExport tool to copy data to disk drives. Encrypt the disk drives with BitLocker.
  3. Create an import job in your target storage account in Azure portal. Upload the drive journal files.
  4. Provide the return address and carrier account number for shipping the drives back.
  5. Ship the disk drives to the shipping address provided during job creation.
  6. Update the delivery tracking number in the import details and submit the import job.
  7. The drives are received and processed at the Azure Datacentre.
  8. The drives are shipped using your carrier account to the return address provided in the import job.

Exercise 69: Demonstrating Export Job Creation

In this exercise we will demonstrate how to create an export job in resource group RGCloud. We will export blob file HelloWorld.txt. In Exercise 64, HelloWorld.txt was uploaded to container hk410, which is in storage account sastdcloud.

  1. Click All Services in the left pane>In the right pane the All services blade opens>Scroll down to the Storage section. Note the Import/Export jobs option.

    Screenshot_297
  2. Click import/export jobs in storage section>Import/Export Jobs pane opens>click +Add>Create import/export job blade opens>Select export, enter a name and select RGCloud as resource group and click ok (Not Shown).

    Screenshot_298
  3. In Job detail pane select Storage Account sastdcloud> Click Ok (Not Shown).

    Screenshot_299
  4. In return shipping information select your carrier, enter carrier account number (I entered Dummy num) and return address and click Ok (Not Shown).

    Screenshot_300
  5. Summary pane will show you Export Job summary and MS Azure Datacenter address where you will ship your drives>Click Ok.

    Screenshot_301
  6. In All Import/Export job pane you can see the job listed.

    Screenshot_302
  7. Ship the disk drives to Microsoft Azure Datacenter using the address provided in summary pane (Step 5).

  8. In the export job Blobexport dashboard, update that the drives have been shipped. Some information is missing in the dashboard as I have not provided a proper carrier account number.

    Screenshot_303
  9. Once MS receives the disks, it will update the information in the dashboard. The disks are then shipped to you and the tracking number for the shipment is available on the portal.

  10. You will receive the disks in encrypted format. You need the BitLocker keys to unlock the drives. Go to the export job dashboard and click BitLocker keys in the left pane and copy the keys to unlock the drives.

Azure Data Box

Azure Data Box transfers on-premises data to Azure Cloud.

Azure Data Box is a secure, tamper-proof and ruggedized appliance, as shown below. It is provided by Microsoft, whereas in the Import/Export service the disks are provided by the customer.

Screenshot_304

Azure Data Box is used to transfer large amounts of data which would otherwise take days, months or years to transfer over an Internet or ExpressRoute connection.

Each storage device has a maximum usable storage capacity of 80 TB. Data Box can store a maximum of 500 million files.

Ordering, Setup & Working

Data Box is ordered through the Azure portal.

Connect the Data Box to your existing network. Assign an IP directly or through DHCP (default). To access the web UI of the Data Box, connect a laptop to the management port of the Data Box and browse to https://192.168.100.10. Sign in using the password generated from the Azure portal.

Load your data onto the Data Box using standard NAS protocols (SMB/CIFS). Your data is automatically protected using AES-256 encryption. The Data Box is returned to the Azure Datacentre to be uploaded to Azure. After the data is uploaded the device is securely erased.

The entire process is tracked end-to-end by the Data Box service in the Azure portal.

Figure below shows the Azure Data Box setup.
Screenshot_305

Azure Data Box Use Cases

Data Box is ideally suited to transferring data sizes larger than 40 TB in scenarios with no or limited network connectivity. The data movement can be one-time, periodic, or an initial bulk data transfer followed by periodic transfers.

One-time migration - when a large amount of on-premises data is moved to Azure.

Initial bulk transfer - when an initial bulk transfer is done using Data Box, followed by incremental transfers over the network. For example, backup solutions can use Data Box to move the initial large backup to Azure. Once complete, the incremental data is transferred via the network to Azure storage.

Periodic uploads - when a large amount of data is generated periodically and needs to be moved to Azure. For example, in energy exploration, where video content is generated on oil rigs and windmill farms.

Exercise 70: Demonstrating Data Box Order through Azure Portal

In Azure Portal click + Create a Resource>Storage>Azure Data Box>Select Your Azure Data Box Blade opens>Select your subscription>Transfer type>Source Country and Destination Azure region>Click Apply>Data Box options open> Select as per your requirement.

Screenshot_306

Note: To order a Data Box you require an Enterprise Agreement (EA), CSP or Microsoft Partner Network subscription.

Azure Content Delivery Networks (CDN)

A content delivery network (CDN) is a distributed network of servers that deliver web content to users faster than the origin server. The Azure Content Delivery Network (CDN) caches web content from origin server at strategically placed locations to provide maximum throughput for delivering content to users.

Figure below shows a cached image being delivered to users by a CDN server, which is faster than delivery from the origin server.

Screenshot_307

Use Cases

  1. Azure CDNs are typically used to deliver static content such as images, style sheets, documents, client-side scripts, and HTML pages.
  2. Streaming Video benefits from the low latency offered by CDN servers. Additionally Microsoft Azure Media Services (AMS) integrates with Azure CDN to deliver content directly to the CDN for further distribution.

Benefits of Azure CDN

  1. CDN provides lower latency and faster delivery of content to users.
  2. CDNs help to reduce load on a web application, because the application does not have to service requests for the content that is hosted in the CDN.
  3. CDN helps to cope with peaks and surges in demand without requiring the application to scale, avoiding the consequent increased running costs.
  4. Improved experience for users, especially those located far from the datacentre hosting the application.

Azure CDN Working

Figure below shows the working of Content Delivery Networks.

Screenshot_308
  1. User Alice requests a file using a URL (<endpoint name>.azureedge.net) in a browser. DNS routes the request to the CDN edge server point-of-presence (POP) location that is geographically closest to the user.
  2. If the edge servers in the POP have the file in their cache, they return the file to user Alice.
  3. If the edge servers in the POP do not have the file in their cache, the edge server requests the file from the origin server. The origin server returns the file to the edge server, including optional HTTP headers describing the file's Time-to-Live (TTL). The edge server caches the file and returns it to user Alice. The file remains cached on the edge server until the TTL expires. If the origin didn't specify a TTL, the default TTL is seven days.
  4. Additional users who request the same file as user Alice and are geographically closest to the same POP will get the file from the cache of the edge server instead of the origin server.
  5. The above process results in a faster, more responsive user experience.

Azure CDN Architecture

Azure CDN Architecture consists of Origin Server, CDN Profile and CDN endpoints.

Origin Server

Origin server holds the web content which is cached by CDN Endpoints geographically closest to the user based on caching policy configured in CDN endpoint.

Origin Server type can be one of the following:

Storage
Web App
Cloud Service
Publicly Accessible Web Server

CDN Profile

A CDN profile is a collection of CDN endpoints with the same pricing tier. CDN pricing is applied at the CDN profile level. Therefore, to use a mix of Azure CDN pricing tiers, you must create multiple CDN profiles.

CDN Endpoints

CDN Endpoint caches the web content from the origin server. It delivers cached content to end users faster than the origin server and is located geographically closest to the user. CDN Endpoints are distributed across the world.

The CDN Endpoint is exposed using the URL format <endpoint name>.azureedge.net by default, but custom domains can also be used.

A CDN endpoint is an entity within a CDN profile containing configuration information regarding caching behaviour and the origin server. Every CDN endpoint represents a specific configuration of content delivery behaviour and access.

Azure CDN Tiers

Azure CDN comes in Standard and Premium tiers. Azure CDN Standard Tier comes from Microsoft, Akamai and Verizon. Azure Premium Tier is from Verizon. Table below shows comparison between Standard and Premium Tiers.

Screenshot_309

Note 1: MS and Verizon support delivering large files and media directly via the general web delivery optimization.

Dynamic Site Acceleration (DSA) or Acceleration Data Transfer

Dynamic Site Acceleration (DSA) accelerates web content that is not cacheable, such as shopping carts, search results, and other dynamic content.

Traditional CDN mainly uses caching to improve website and download performance. DSA accelerates delivery of dynamic content by optimising routing and networking between requester and content origin.

DSA configuration option can be selected during endpoint creation.

DSA Optimization Techniques

DSA speeds up delivery of dynamic assets using the following techniques:

Route optimization chooses the most optimal and the fastest path to the origin server.

TCP Optimizations: TCP connections take several requests back and forth in a handshake to establish a new connection. This results in a delay in setting up the network connection.

Azure CDN solves this problem by optimizing the following three areas:

  • Eliminating slow start
  • Leveraging persistent connections
  • Tuning TCP packet parameters (Akamai only)

Object Prefetch (Akamai only) : Prefetch is a technique to retrieve images and scripts embedded in the HTML page while the HTML is served to the browser, and before the browser even makes these object requests. When the client makes the requests for the linked assets, the CDN edge server already has the requested objects and can serve them immediately without a round trip to the origin.

Adaptive Image Compression (Akamai only) : End users experience slower network speeds from time to time. In these scenarios, it is more beneficial for the user to receive smaller images in their webpage more quickly rather than waiting a long time for full resolution images. This feature automatically monitors network quality, and employs standard JPEG compression methods when network speeds are slower to improve delivery time.

Exercise 71: Implementing Azure CDN using Azure Portal

Implementing Azure CDN is a 2 step process - Create CDN profile and Add CDN endpoints to the profile.

In this exercise CDN Profile will be created in Resource Group RGCloud. We will add VM VMFE1 to CDN endpoint. CDN endpoint will cache default website on VM VMFE1. VM VMFE1 was created in Exercise 18, Chapter 2.

Create CDN Profile: In Azure portal click Create a resource>Web>CDN>Create CDN profile blade opens>Enter a name, select resource group RGCloud, select a pricing tier and click create. We have the option to add a CDN endpoint here, but we will add it later.

Screenshot_310

Note: In the next exercise readers are advised to look at the options available in the Optimized for drop-down box.

Figure below shows the CDN profile dashboard.

Screenshot_311

ADD CDN Endpoint: In CDN Profile dashboard click +Endpoint>Add an Endpoint Blade opens>Enter a name, Select Custom origin and in Origin hostname enter Public IP of VM VMFE1>Select HTTP as protocol>Click Add.

Screenshot_312

Figure below shows the dashboard of CDN endpoint vmfe1.

Screenshot_313

Access the default website of VM VMFE1 using the CDN endpoint address: From the CDN endpoint dashboard copy the endpoint address http://vmfe1.azureedge.net.

Screenshot_314

Virtual Machine VMFE1's default site is located in the US East 2 region. I am accessing the default site from India. The CDN endpoint in the Indian region will cache the default website. Subsequent access to the website will happen through the CDN endpoint.
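
The same two steps can be done from the CLI. A sketch, assuming a hypothetical profile name CDNPortal and VMFE1's public IP (or DNS name) as the origin:

  # Step 1: create the CDN profile (the pricing tier is the SKU)
  az cdn profile create --name CDNPortal --resource-group RGCloud \
      --sku Standard_Verizon

  # Step 2: create endpoint vmfe1 pointing at the origin
  az cdn endpoint create --name vmfe1 --profile-name CDNPortal \
      --resource-group RGCloud --origin <VMFE1-public-IP-or-DNS>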

CDN Endpoint Compression Functionality

Compression is used to reduce the bandwidth used to deliver and receive an object. By enabling compression directly on the CDN edge servers, CDN compresses the files and serves them to end users.

Compression is enabled by default.

Note that files are only compressed on the fly by the CDN if they are served from the CDN cache. Files already compressed by the origin can still be delivered compressed to the client without being cached.

Exercise 72: Enabling or Disabling Compression

In CDN Endpoint Dashboard Click Compression in left pane> Compression pane opens>Click On or Off to Enable or Disable Compression.

Screenshot_315

CDN Endpoint Optimization Functionality

Azure Content Delivery Network (CDN) can optimize the delivery experience based on the type of content you have. The content can be a website, a live stream, a video, or a large file for download. When you create a CDN endpoint, you specify optimization type. Your choice determines which optimization is applied to the content delivered from the CDN endpoint.

Optimization Types

General Web Delivery: It is designed for general web content optimization, such as webpages and web applications. This optimization also can be used for file and video downloads.

General media streaming: It is designed for live streaming and video-on-demand streaming. Media streaming is time-sensitive, because packets that arrive late at the client can cause a degraded viewing experience. Media streaming optimization reduces the latency of media content delivery and provides a smooth streaming experience for users.

Video-on-demand Streaming: This optimization improves the delivery of video-on-demand streaming content. It reduces the latency of media content delivery and provides a smooth streaming experience for users.

Large File Download: This optimizes large file downloads (files larger than 10 MB). If your average file sizes are consistently larger than 10 MB, it might be more efficient to create a separate endpoint for large files.

Dynamic site acceleration (DSA) : DSA use optimization techniques such as route, network and TCP optimization to improve the latency and performance of dynamic content or non-cacheable content.

Note: Azure CDN Standard from Microsoft, Azure CDN Standard from Verizon, and Azure CDN Premium from Verizon use the general web delivery optimization type to deliver general streaming media content, video-on-demand media streaming and large file downloads.

Azure CDN optimization supported by various providers

Important Note: MS & Verizon support media streaming, video-on-demand streaming and large file download using the general web delivery optimization.

Screenshot_316

Exercise 73: Changing Optimization type

Go to CDN endpoint Dashboard>Click Optimization in left pane> Select optimized type from the drop down box>Click Save.

Screenshot_317

CDN Endpoint Caching Rules Functionality

Caching rules control how the CDN caches your content, including the caching duration and how unique query strings are handled.

Default Caching behaviour

The following table describes the default caching behaviour for the Azure CDN products and their optimizations.

Screenshot_318

Honor origin: Specifies whether to honor the supported cache-directive headers if they exist in the HTTP response from the origin server.

CDN cache duration: Specifies the amount of time for which a resource is cached on the Azure CDN. If Honor origin is Yes and the HTTP response from the origin server includes the cache-directive header Expires or Cache-Control: max-age, Azure CDN uses the duration value specified by the header instead.

Control Azure CDN Caching behaviour with Query Strings

Before going into caching behaviour with query strings, let's discuss what a query string is.

In a web request with a query string, the query string is the portion of the request that occurs after a question mark (?). A query string can contain one or more key-value pairs, in which the field name and its value are separated by an equals sign (=). Each key-value pair is separated by an ampersand (&). For example, http://www.contoso.com/content.mov?field1=value1&field2=value2.

With Azure Content Delivery Network (CDN), you can control how files are cached for a web request that contains a query string. The following three query string modes are available:

Ignore query strings: This is the default mode. In this mode, the CDN point-of-presence (POP) node passes the query strings from the requestor to the origin server on the first request and caches the asset. All subsequent requests for the asset are served from the POP until the cached asset expires.

Bypass caching for query strings: In this mode requests with query strings are not cached at the CDN POP node. The POP node retrieves the asset directly from the origin server and passes it to the requestor with each request.

Cache every unique URL: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache.

You can change query string caching settings for standard CDN profiles from the Caching rules option in the CDN endpoint dashboard.
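
On standard profiles the same setting is exposed as a CLI parameter on the endpoint. A sketch, reusing endpoint vmfe1 and the hypothetical profile CDNPortal from earlier:

  # Switch query string caching to bypass mode; other accepted values are
  # IgnoreQueryString (the default) and UseQueryString (cache every unique URL)
  az cdn endpoint update --name vmfe1 --profile-name CDNPortal \
      --resource-group RGCloud \
      --query-string-caching-behavior BypassCaching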

Exercise 74: Demonstrating Changing Caching Rules

In CDN Endpoint Dashboard click Caching Rules in left pane> In right pane select the Query String caching option from Drop down box>Click save.

Screenshot_319

Geo-Filtering

By creating geo-filtering rules you can block or allow CDN content in the selected countries.

Exercise 75: Demonstrating Allow or Block CDN in Specific Countries

In CDN Endpoint Dashboard click Geo-Filtering in the left pane>In the right pane select an action (Allow or Block) from the drop-down box and select the countries to which the action will apply>Click save.

Screenshot_320