Chapter 3 Develop for Azure storage

All applications work with information or data. Applications create, transform, model, or operate with that information. Regardless of the type or volume of the data that your application uses, sooner or later, you will need to save it persistently so that it can be used later.

Storing data is not a simple task and designing storage systems for that purpose is even more complicated. Perhaps your application needs to deal with terabytes of information, or you may work with an application that needs to be accessed from different countries, and you need to minimize the time required to access it. Also, cost efficiency is a requirement in any project. In general, there are many requirements that make designing and maintaining storage systems difficult.

Microsoft Azure offers different storage solutions in the cloud for satisfying your application storage requirements. Azure offers solutions for making your storage cost-effective and minimizing latency.

Skills covered in this chapter:

Skill 3.1: Develop solutions that use storage tables

Skill 3.2: Develop solutions that use Cosmos DB storage

Skill 3.3: Develop solutions that use a relational database

Skill 3.4: Develop solutions that use blob storage

Skill 3.1: Develop solutions that use storage tables

Storage tables allow you to store NoSQL data in the cloud. It’s a schemaless storage design in which your data is stored using key/attribute pairs. A schemaless storage design means that you can change the structure of your data as your application requirements evolve. Another advantage of using storage tables is that they are more cost effective than using the same amount of storage on traditional SQL systems.

Azure offers two types of storage tables services: Azure Table Storage and Azure Cosmos DB Table. Azure Cosmos DB is a premium service that offers low latency, higher throughput, global distribution, and other improved capabilities. In this section, we review how to work with Azure Table storage using the Azure Cosmos DB Table API.

Azure Table storage can manage a large amount of structured, nonrelational data. You can authenticate the calls that you make to the Storage Account. You can use OData and LINQ queries with the WCF Data Service .NET libraries for accessing your data store in Azure Table Storage.

This skill covers how to:

  • Design and implement policies for tables
  • Query Table Storage by using code
  • Implement partitioning schemes

Design and implement policies for tables

Before we can start designing and implementing policies for tables, we need to review some essential concepts for working with Table Storage. Azure Table storage is included in and managed by the Azure Storage Account service. All scalability and performance features that apply to an Azure Storage Account also apply to Azure Table Storage. (For simplicity, we refer to “Azure Storage Account” as a “Storage Account” and “Azure Table storage” as “Table Storage.”)

You can think of a Storage Account as the parent namespace of Table Storage. The Storage account also provides the authentication mechanism to the Table Storage for protecting access to the data. You can create as many tables as you need inside a Storage account as long as the table names are unique inside the Storage Account.

Once you have created the Storage Account, you need to deal with the following elements that are part of the table model:

Tables Tables are the storage containers for entities. You can create as many entities as you need inside a table. When you create a table, you need to bear in mind the following restrictions when naming your new table:

The table name must be unique inside the Storage Account.

The table name may only contain alphanumeric characters.

The table name cannot start with a number.

Table names are case-insensitive. The table preserves the case of the name you used when you created it, but the name is treated as case-insensitive when you use the table.

Table names must be between 3 and 63 characters long.

You cannot use reserved table names, such as “tables.”

Entities You can think of an entity as a row in a table of a relational database. It has a primary key and a group of properties. Each entity in Azure Table Storage can be as big as 1MB and can have up to 252 properties to store data. When you create a new entity, the Table storage service automatically adds three system properties:

Partition key The partition key defines a group of entities that can be queried more quickly.

Row key The row key is a unique identifier inside the partition.

Timestamp The Timestamp property records the date and time when the entity was created or last modified.

Properties These are pairs of key/values related to an entity. You can think of properties as the columns or fields that define the structure of a table in a relational database and are part of each row. When you create a property, remember that property names are case-sensitive and can be up to 255 characters long. When you are naming a property, you should follow the naming rules for C# identifiers. You can review these naming rules at https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/naming-guidelines.

Need More Review?: Table Service Data Model

When you are working with tables, entities, and properties, you should review all types and details that apply to these elements. You can review a detailed description of each element in the Microsoft Docs article “Understanding the Table Service Data Model” at https://docs.microsoft.com/en-us/rest/api/storageservices/understanding-the-table-service-data-model.

When you are working with storage, you need to control who can access your data and for how long a process, person, or application can access it. Azure Table storage allows you to control this access at several levels of protection. Because Table storage is a child of Azure Storage, these authorization mechanisms are provided by Azure Storage:

Shared Key Authorization You use one of the two access keys configured at the Storage Account level to construct the correct request for accessing the Storage Account resources. You need to use the Authorization Header for using the access key in your request. The access key provides access to the entire Storage Account and all its containers, such as blobs, files, queues, and tables. You can consider Storage Account keys to be like the root password of the Storage Account.

Shared Access Signatures You use Shared Access Signatures (SAS) for narrowing the access to specific containers inside the Storage Account. The advantage of using SAS is that you don’t need to share the Storage account’s access keys. You can also configure a higher level of granularity when setting access to your data.

The drawback of using shared access keys is that if either of the two access keys is exposed, the Storage Account and all the containers and data in it are also exposed. The access keys also allow whoever holds them to create or delete elements in the Storage Account.

Shared Access Signatures provide you with a mechanism for sharing access with clients or applications to your Storage Account without exposing the entire Storage account. You can configure each SAS with different levels of access to each of the following:

Services You can configure SAS for granting access only to the services that you require, such as blob, file, queue, or table.

Resource types You can configure the access to a service, container, or object. For the Table service, this means that you can configure access to API calls at the service level, such as listing tables. If you configure the SAS token at the container level, you can make API calls like creating or deleting tables. If you decide to configure the access at the object level, you can make API calls like creating or updating entities in the table.

Permissions Configure the action or actions that the user is allowed to perform in the configured resources and services.

Date expiration You can configure the period for which the configured SAS is valid for accessing the data.

IP addresses You can configure a single IP address or range of IP addresses that are allowed to access your storage.

Protocols You can configure whether the access to your storage is performed using HTTPS-only or HTTP and HTTPS protocols. You cannot grant access to the HTTP-only protocol.

Azure Storage uses the values of the previous parameters for constructing the signature that grants access to your storage. You can configure two different types of SAS:

Account SAS Account SAS controls access to the entire Storage Account.

Service SAS Service SAS delegates access to only specific services inside the Storage Account.

Regardless of the SAS type that you configure, you need to construct an SAS token and append it to the URL that you use for accessing your storage resource. You construct the SAS by configuring a policy, that is, by providing the needed values for the SAS URI (or SAS token) that you attach to your URL request.
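
To make the mechanics concrete, the following minimal sketch shows how an SAS token is simply appended as the query string of the resource URL. The account name and the SAS token value are placeholders that you need to replace with your own values; this is an illustration, not production code.

	
//C# .NET Core. Minimal sketch: appending an account SAS token to the Table
//service endpoint and listing the tables. The account name and SAS token
//below are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;
namespace SasUrlDemo
{
    class Program
    {
        static async Task Main()
        {
            string accountName = "<your_storage_account_name>";      //placeholder
            string sasToken = "?sv=...&ss=t&srt=sco&sp=rl&sig=...";  //placeholder account SAS
            //The SAS token travels as the query string of the request URL.
            string url = $"https://{accountName}.table.core.windows.net/Tables{sasToken}";
            using (HttpClient client = new HttpClient())
            {
                HttpResponseMessage response = await client.GetAsync(url);
                Console.WriteLine($"Status: {response.StatusCode}");
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }
}
	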

Use the following procedure for constructing and testing your own account SAS token:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the search box at the top of the Azure portal, type the name of your Storage Account.
  3. On the Storage Account blade, click Shared Access Signature in the Settings section.
  4. On the Shared Access Signature panel, deselect the Blob, File, and Queue checkboxes under Allowed Services, as shown in Figure 3-1. Leave the Table checkbox selected.
  5. Figure 3-1 Configuring the Account SAS policy
  6. Ensure that all options in Allowed Resource Types and Allowed Permissions are checked, as shown in Figure 3-1.
  7. Ensure that Allowed IP addresses have no value in the text box and HTTPS Only is selected in the Allowed Protocols section.
  8. In the Signing Key drop-down menu, make sure that you have selected the Key1 value.
  9. Click the Generate SAS And Connection String button at the bottom of the panel.
  10. Copy the Table Service SAS URL. Now you can test your SAS token using a tool such as Postman, curl, or a web browser.
  11. Open a web browser.
  12. Paste the Table Service SAS URL in the address bar. Don’t press Enter at this point.
  13. In the Table Service SAS URL—after your Storage Account domain and before your SAS Token—add the bold text below. Your URL should look like this:
  14. 	
    https://az203storagedemo.table.core.windows.ne
    co&sp=rwdlacup&se=2019-04-02T22:32:20Z&st=2019 FxnmdOEyu%2FQJLQyYs1npP65o0No2u1KbrsGfd4%3D
    	
    
  15. Press Enter to navigate to the URL.
  16. Confirm that you get an XML document with the list of existing tables in your Storage Account.

If you need to narrow the access to your resources and limit it only to tables or entities, you can create a Service SAS. This type of SAS token is quite similar to an Account SAS; you need to create a URI that you append to the URL that you use to request your Table storage service. Account and Service SAS share most of the URI parameters, although some parameters are specific to the service, and you need to take them into consideration when creating your Service SAS token.

You can generate a Service SAS token by using the code shown in Listing 3-1. If you need to generate an SAS token and you don’t want to write your own code for creating the token, you can create new Shared Access Signatures from the Azure Portal, using the Storage Explorer in the Storage account. On the Storage Explorer, navigate to the table for which you need to create the new SAS, right-click the name of the table, and click Get Shared Access Signature.

Listing 3-1 Generate a Service SAS token

	
//C# .NET Core.
//you need to install WindowsAzure.Storage NuGet Package
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Table;
namespace ch3_1_1
{
class Program
{
static void Main(string[] args)
{
string tablename = "az203tabledemo";
string accountname = "az203storagedemo";
string key = "<your_primary_or_secondary_access_key>";
string connectionString = $"DefaultEndpointsProtocol=https;
AccountName={accountname};AccountKey={key}";
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference(tablename);
SharedAccessTablePolicy tablePolicyDefinition = new SharedAccessTablePolicy();
tablePolicyDefinition.SharedAccessStartTime = DateTimeOffset.UtcNow;
tablePolicyDefinition.SharedAccessExpiryTime = DateTimeOffset.UtcNow.
AddHours(24);
tablePolicyDefinition.Permissions = SharedAccessTablePermissions.
Query | SharedAccessTablePermissions.Add | SharedAccessTablePermissions.Delete |
SharedAccessTablePermissions.Update;
string SASToken = table.GetSharedAccessSignature(tablePolicyDefinition);
Console.WriteLine("Generated SAS token:");
Console.WriteLine(SASToken);
}
}
}
	

One drawback of using Service SAS tokens is that if the URL is exposed, an unauthorized user could use it to access your data for as long as the token is valid. Stored Access Policies allow you to define access policies that are associated with and stored alongside the table that you want to protect. When you define a Stored Access Policy, you provide an identifier for the policy. You then use this identifier when you construct the Service SAS token, and it becomes part of the signature that authenticates the token.

The advantage of using a Stored Access Policy is that you can control the validity and expiration of the policy without needing to modify the Service SAS token. You can associate up to five different stored access policies with a single table. Listing 3-2 shows the code for creating a Stored Access Policy. Because this code is quite similar to Listing 3-1, we have highlighted the lines that create the Stored Access Policy.

Listing 3-2 Generate a Stored Access Policy

	
//C# .NET Core.
static void Main(string[] args)
{
string tablename = "az203tabledemo";
string accountname = "az203storagedemo";
string key = "<your_primary_or_secondary_access_key>";
string connectionString = $"DefaultEndpointsProtocol=https;
AccountName={accountname};AccountKey={key}"
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference(tablename);
SharedAccessTablePolicy tablePolicyDefinition = new SharedAccessTablePolicy();
tablePolicyDefinition.SharedAccessStartTime = DateTimeOffset.UtcNow;
tablePolicyDefinition.SharedAccessExpiryTime = DateTimeOffset.UtcNow.
AddHours(24);
tablePolicyDefinition.Permissions = SharedAccessTablePermissions.
Query | SharedAccessTablePermissions.Add | SharedAccessTablePermissions.Delete |
SharedAccessTablePermissions.Update;
TablePermissions tablePermissions = Task.Run(async () => await table.
GetPermissionsAsync()).ConfigureAwait(false).GetAwaiter().GetResult();
SharedAccessTablePolicies policies = tablePermissions.SharedAccessPolicies;
policies.Add(new KeyValuePair<string, SharedAccessTablePolicy>(tablename +
"_all", tablePolicyDefinition));
Task.Run(async () => await table.SetPermissionsAsync(tablePermissions)).Wait();
}
	

Query table storage by using code

When you need to work with tables in Azure, you can choose between Azure Table Storage and the Table API for Azure Cosmos DB storage. Although Microsoft offers two different services for working with tables, depending on the language that you use for your code, you can access both services using the same library. You can work with tables by using .NET Framework and Standard, Java, Node.js, PHP, Python, PowerShell, C++, or the REST API.

If you decide to use .NET Standard for your code, you need to use Azure Cosmos DB .NET SDK. This SDK allows you to work with Azure Table Storage and Table API for Azure Cosmos DB. You need to use the NuGet package Microsoft.Azure.Cosmos.Table. This package provides you with the needed classes for working with tables, entities, keys, and values:

CloudStorageAccount You use this class for working with the Storage Account where you want to create or access your tables. You need to provide the appropriate connection string to your instance of this class for connecting with your Storage Account. You can use SAS tokens when creating the instance of this class for authenticating to your Storage Account.

CloudTableClient Use an instance of this class for interacting with the Table service. You need to use this object for getting references for existing tables or creating new references. Once you get the table reference, you need to create an instance of the CloudTable class for working with the table.

CloudTable This class represents a table in the Table service. You use an instance of this class for performing operations on the table, such as inserting or deleting entities. You need to use a TableOperation object for performing any action on the table’s entities.

TableOperation Define an operation that you want to perform in your table’s entities. You construct a TableOperation object for defining the insert, merge, replace, delete, or retrieve operations. Then you pass this TableOperation object to the CloudTable object for executing the operation.

TableResult This object contains the result of executing a TableOperation by a CloudTable object.

TableEntity Any document that you add to your table needs to be represented by a child of the TableEntity class. You define your model with the fields that match the keys of your document in your table.
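
The following minimal sketch shows how these classes typically fit together when you insert an entity. The connection string, table name, and the DemoEntity class are placeholders for this illustration; the full, working example follows in Listings 3-3 to 3-7.

	
//C# .NET Core. Minimal sketch using the Microsoft.Azure.Cosmos.Table package.
//The connection string, table name, and DemoEntity class are placeholders.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos.Table;

public class DemoEntity : TableEntity
{
    public DemoEntity() { }
    public DemoEntity(string partitionKey, string rowKey) : base(partitionKey, rowKey) { }
    public string Notes { get; set; }
}

class ClassFlowDemo
{
    static async Task Main()
    {
        //CloudStorageAccount: connect to the Storage Account.
        CloudStorageAccount account = CloudStorageAccount.Parse("<your_connection_string>");
        //CloudTableClient: entry point to the Table service.
        CloudTableClient client = account.CreateCloudTableClient();
        //CloudTable: reference to a concrete table.
        CloudTable table = client.GetTableReference("demotable");
        await table.CreateIfNotExistsAsync();
        //TableOperation: describes the action; CloudTable executes it.
        DemoEntity entity = new DemoEntity("partition1", "row1") { Notes = "hello" };
        TableOperation insert = TableOperation.InsertOrMerge(entity);
        //TableResult: holds the outcome of the executed operation.
        TableResult result = await table.ExecuteAsync(insert);
        Console.WriteLine($"Insert status: {result.HttpStatusCode}");
    }
}
	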

Important Nuget Packages

Depending on the .NET version that you are using, you need to install different packages:

.NET Framework Use the NuGet package Microsoft.Azure.CosmosDB.Table.

.NET Standard Use the NuGet package Microsoft.Azure.Cosmos.Table. If you use .NET Core, you also need to install the NuGet package Microsoft.Azure.Storage.Common.

You can use the following steps and Listings 3-3 to 3-8 to review a working example, written in C# for .NET Core, of how to work with Table Storage using the Azure Cosmos DB Table Library for .NET. In this example, we use Visual Studio Code with the C# for Visual Studio Code extension installed, but you can use your preferred editor:

  1. Open Visual Studio Code and open a folder on your computer where you want to store the files associated with this project.
  2. Open a new Terminal by clicking Terminal > New Terminal.
  3. Create a new folder named az203TablesCodeDemo. Change the location to this new folder.
  4. Create a new project from the predefined Console Application template. Use the following command:
  5. 	
    dotnet new console
    	
    
  6. Install the required NuGet packages by using the following commands:
  7. 	
    dotnet add az203TablesCodeDemo.csproj package Microsoft.Azure.Cosmos.Table
    dotnet add az203TablesCodeDemo.csproj package Microsoft.Azure.Storage.Common
    dotnet add az203TablesCodeDemo.csproj package Microsoft.Extensions.Configuration
    dotnet add az203TablesCodeDemo.csproj package Microsoft.Extensions.Configuration.Json
    dotnet add az203TablesCodeDemo.csproj package Microsoft.Extensions.Configuration.Binder
    	
    
  8. In the Visual Studio Code window, create a new C# class file named AppSettings.cs. This helper class stores all needed information for your application.
  9. Replace the content of the AppSettings.cs file with content in Listing 3-3.
  10. Listing 3-3 The AppSettings.cs file

    	
    //C# .NET Core.
    using Microsoft.Extensions.Configuration;
    namespace az203TablesCodeDemo
    {
    public class AppSettings
    {
    public string SASToken { get; set; }
    public string StorageAccountName { get; set; }
    public static AppSettings LoadAppSettings()
    {
    IConfigurationRoot configRoot = new ConfigurationBuilder()
    .AddJsonFile("AppSettings.json")
    .Build();
    AppSettings appSettings = configRoot.Get<AppSettings>();
    return appSettings;
    }
    }
    }
    	
    
  11. In the Visual Studio Code window, create a new file named AppSettings.json. Add the following content to the file:
  12. 	
    {
    "SASToken": "<Your_SAS_Token>",
    "StorageAccountName": "<Your_Azure_Storage_Account_Name>"
    }
    
    	
    
  13. Create a new C# class file and give it the name Common.cs. This class contains basic shared operations that you need for the rest of the examples, such as creating the example table or the Storage Account object.
  14. Replace the contents of the Common.cs file with the contents of Listing 3-4. In this example, you are using an SAS token that you previously generated on your Azure Storage account. When you use an SAS token for authenticating the operations to your Table storage service, you also need to provide the account name.
  15. Listing 3-4 The Common.cs file

    	
    //C# .NET Core.
    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.Cosmos.Table;
    namespace az203TablesCodeDemo
    {
    public class Common
    {
    public static CloudStorageAccount CreateStorageAccountFromSASToken(string SASToken,
    string accountName)
    {
    CloudStorageAccount storageAccount;
    try
    {
    //We require that the communication with the storage service uses HTTPS.
    bool useHttps = true;
    StorageCredentials storageCredentials = new StorageCredentials(SASToken);
    storageAccount = new CloudStorageAccount(storageCredentials, accountName, null, useHttps);
    }
    catch (FormatException)
    {
    Console.WriteLine("Invalid Storage Account information provided. Please
    confirm the SAS Token is valid and did not expire");
    throw;
    }
    catch (ArgumentException)
    {
    Console.WriteLine("Invalid Storage Account information provided. Please
    confirm the SAS Token is valid and did not expire");
    Console.ReadLine();
    throw;
    }
    return storageAccount;
    }
    public static async Task<CloudTable> CreateTableAsync(string tableName)
    {
    AppSettings appSettings = AppSettings.LoadAppSettings();
    string storageConnectionString = appSettings.SASToken;
    string accountName = appSettings.StorageAccountName;
    CloudStorageAccount storageAccount = CreateStorageAccountFromSASToken
    (storageConnectionString, accountName);
    // Create a table client for interacting with the table service
    CloudTableClient tableClient = storageAccount.CreateCloudTableClient(new
    TableClientConfiguration());
    Console.WriteLine($"Creating the table {tableName}");
    // Get a reference to the table
    CloudTable table = tableClient.GetTableReference(tableName);
    if (await table.CreateIfNotExistsAsync())
    {
    Console.WriteLine($"Created Table named: {tableName}");
    }
    else
    {
    Console.WriteLine($"Table {tableName} already exists");
    }
    Console.WriteLine();
    return table;
    }
    }
    }
    	
    
  16. In your project’s folder, create a new folder named Model.
  17. Create a new C# class file named PersonEntity.cs inside the folder Model.
  18. Replace the contents of the PersonEntity.cs file with the contents of Listing 3-5.
  19. Listing 3-5 PersonEntity.cs file. The PersonEntity class that inherits from TableEntity is the C# representation of a document in your table. Pay particular attention to the PartitionKey and RowKey properties. These are the only two required properties that any entity needs to provide to the table. We will review these two properties in more detail in the next section.

    	
    //C# .NET Core.
    using Microsoft.Azure.Cosmos.Table;
    namespace az203TablesCodeDemo.Model
    {
    public class PersonEntity: TableEntity
    {
    public PersonEntity() {}
    public PersonEntity(string lastName, string firstName)
    {
    PartitionKey = lastName;
    RowKey = firstName;
    }
    public string Email { get; set; }
    public string PhoneNumber { get; set; }
    }
    }
    	
    
  20. Create a new C# Class file named TableUtils.cs in your project folder. This class contains some basic operations that you need to work with entities in a table.
  21. Replace the TableUtils.cs contents with the contents of Listing 3-6.
  22. Listing 3-6 TableUtils.cs file

    	
    //C# .NET Core.
    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using az203TablesCodeDemo.Model;
    using Microsoft.Azure.Cosmos.Table;
    namespace az203TablesCodeDemo
    {
    public class TableUtils
    {
    public static async Task<PersonEntity> InsertOrMergeEntityAsync(CloudTable table,
    PersonEntity entity)
    {
    if (entity == null)
    {
    throw new ArgumentNullException("entity");
    }
    try
    {
    // Create the InsertOrReplace table operation
    TableOperation insertOrMergeOperation = TableOperation.
    InsertOrMerge(entity);
    // Execute the operation.
    TableResult result = await table.ExecuteAsync(insertOrMergeOperation);
    PersonEntity insertedCustomer = result.Result as PersonEntity;
    return insertedCustomer;
    }
    catch (StorageException e)
    {
    Console.WriteLine(e.Message);
    Console.ReadLine();
    throw;
    }
    }
    public static async Task<TableBatchResult> BatchInsertOrMergeEntityAsync
    (CloudTable table, IList<PersonEntity> people)
    {
    if (people == null)
    {
    throw new ArgumentNullException("people");
    }
    try
    {
    TableBatchOperation tableBatchOperation = new TableBatchOperation();
    foreach (PersonEntity person in people)
    {
    tableBatchOperation.InsertOrMerge(person);
    }
    TableBatchResult tableBatchResult = await table.ExecuteBatchAsync
    (tableBatchOperation);
    return tableBatchResult;
    }
    catch (StorageException e)
    {
    Console.WriteLine(e.Message);
    Console.WriteLine();
    throw;
    }
    }
    public static async Task<PersonEntity> RetrieveEntityUsingPointQueryAsync
    (CloudTable table, string partitionKey, string rowKey)
    {
    try
    {
    TableOperation retrieveOperation = TableOperation.Retrieve<PersonEntity>
    (partitionKey, rowKey);
    TableResult result = await table.ExecuteAsync(retrieveOperation);
    PersonEntity person = result.Result as PersonEntity;
    if (person != null)
    {
    Console.WriteLine($"Last Name: t{person.PartitionKey}n" +
    $"First Name:t{person.RowKey}n" +
    $"Email:t{person.Email}n" +
    $"Phone Number:t{person.PhoneNumber}");
    }
    return person;
    }
    catch (StorageException e)
    {
    Console.WriteLine(e.Message);
    Console.ReadLine();
    throw;
    }
    }
    public static async Task DeleteEntityAsync(CloudTable table,
    PersonEntity deleteEntity)
    {
    try
    {
    if (deleteEntity == null)
    {
    throw new ArgumentNullException("deleteEntity");
    }
    TableOperation deleteOperation = TableOperation.Delete(deleteEntity);
    TableResult result = await table.ExecuteAsync(deleteOperation);
    }
    catch (StorageException e)
    {
    Console.WriteLine(e.Message);
    Console.ReadLine();
    throw;
    }
    }
    }
    }
    	
    
  23. Now that you have all the code that you need for interacting with the Azure Table Storage service, you can add some code to your Program.cs file to test it. Listing 3-7 shows how to create a new testing table and then create three entities in the new table. You create the last two entities by using a batch operation, which allows you to create several entities at the same time. You need to ensure that the entities in the same batch operation share the same partition key. You also modify one of the entities, and then you retrieve an entity from your table.
  24. Listing 3-7 The Program.cs file

    	
    //C# .NET Core.
    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using az203TablesCodeDemo.Model;
    using Microsoft.Azure.Cosmos.Table;
    namespace az203TablesCodeDemo
    {
    class Program
    {
    static void Main(string[] args)
    {
    string tableName = "az203TableDemo" + Guid.NewGuid().ToString().Substring(0, 5);
    // Create or reference an existing table
    CloudTable table = Task.Run(async () => await Common.CreateTableAsync
    (tableName)).GetAwaiter().GetResult();
    try
    {
    // Demonstrate basic CRUD functionality
    Task.Run(async () => await CreateDemoDataAsync(table)).Wait();
    }
    finally
    {
    // Delete the table
    // await table.DeleteIfExistsAsync();
    }
    }
    private static async Task CreateDemoDataAsync(CloudTable table)
    {
    // Create an instance of a person entity. See Model/PersonEntity.cs for
    //a description of the entity.
    PersonEntity person = new PersonEntity("Fernández", "Mike")
    {
    Email = "Mike.Nikolo@contoso.com",
    PhoneNumber = "123-555-0101"
    };
    // Demonstrate how to insert the entity
    Console.WriteLine($"Inserting person: {person.PartitionKey}, {person.RowKey}");
    person = await TableUtils.InsertOrMergeEntityAsync(table, person);
    // Demonstrate how to Update the entity by changing the phone number
    Console.WriteLine("Update an existing Entity using the InsertOrMerge Upsert
    Operation.");
    person.PhoneNumber = "123-555-0105";
    await TableUtils.InsertOrMergeEntityAsync(table, person);
    Console.WriteLine();
    //Insert new people with same partition keys.
    //If you try to use a batch operation for inserting entities with different
    //partition keys you get an exception.
    var people = new List<PersonEntity>();
    person = new PersonEntity("Smith", "John")
    {
    Email = "john.smith@contoso.com",
    PhoneNumber = "123-555-1111"
    };
    people.Add(person);
    person = new PersonEntity("Smith", "Sammuel")
    {
    Email = "sammuel.smith@contoso.com",
    PhoneNumber = "123-555-2222"
    };
    people.Add(person);
    TableBatchResult insertedPeopleResult = new TableBatchResult();
    insertedPeopleResult = await TableUtils.BatchInsertOrMergeEntityAsync(table,
    people);
    foreach (var res in insertedPeopleResult)
    {
    PersonEntity batchPerson = res.Result as PersonEntity;
    Console.WriteLine($"Inserted person in a batch operation:
    {batchPerson.PartitionKey}, {batchPerson.RowKey}");
    }
    // Demonstrate how to Read the updated entity using a point query
    Console.WriteLine("Reading the updated Entity.");
    person = await TableUtils.RetrieveEntityUsingPointQueryAsync(table,
    "Fernández", "Mike");
    Console.WriteLine();
    // Demonstrate how to Delete an entity
    //Console.WriteLine("Delete the entity. ");
    //await TableUtils.DeleteEntityAsync(table, person);
    //Console.WriteLine();
    }
    }
    }
    	
    
  25. Before you can run your code, you need to ensure that compilation includes the AppSettings.json file. Edit your az203TablesCodeDemo.csproj file and add the following code inside the ItemGroup section:
  26. 	
    <None Update="AppSettings.json">     <CopyToOutputDirectory>PreserveNewest</Cop
    </None>
    	
    
  27. In the Visual Studio Code window, start debugging your code by pressing F5.
  28. Review the output of your code in the terminal window and ensure that all operations completed successfully. You can also check to ensure that the table and entities have been created successfully using Azure Portal Storage Explorer or Microsoft Azure Storage Explorer desktop app.

In the previous example, you retrieved a single entity by using the Retrieve<>() method of the TableOperation class. It is also quite common to need to retrieve a batch of entities stored in your tables. You can get a batch of different entities from your table by using OData or LINQ queries. You can use a TableQuery object for constructing your query. You can add the code in Listing 3-8 to the previous example to review how TableQuery works.

Listing 3-8 TableQuery example

	
//C# .NET Core.
//Add this code to the CreateDemoDataAsync method in Listing 3-7
TableQuery<PersonEntity> query = new TableQuery<PersonEntity>()
.Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal,
"Smith"));
TableContinuationToken token = null;
do
{
TableQuerySegment<PersonEntity> resultSegment = await table.ExecuteQuerySegmentedAsync
<PersonEntity>(query, token);
token = resultSegment.ContinuationToken;
foreach (PersonEntity personSegment in resultSegment.Results)
{
Console.WriteLine($"Last Name: t{personSegment.PartitionKey}n" +
$"First Name:t{personSegment.RowKey}n" +
$"Email:t{personSegment.Email}n" +
$"Phone Number:t{personSegment.PhoneNumber}");
Console.WriteLine();
}
} while (token != null);
	
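
Listing 3-8 builds the filter with the fluent TableQuery syntax. As an alternative, the same lookup can be expressed with LINQ through CloudTable.CreateQuery<T>(). The following short sketch assumes the table variable and the PersonEntity class from the previous listings, plus a using System.Linq; directive.

	
//C# .NET Core.
//LINQ alternative to the fluent filter used in Listing 3-8. Assumes the
//CloudTable instance (table) and the PersonEntity class from the previous
//listings, and a using System.Linq; directive.
IQueryable<PersonEntity> linqQuery = table.CreateQuery<PersonEntity>()
    .Where(p => p.PartitionKey == "Smith");
foreach (PersonEntity personResult in linqQuery)
{
    Console.WriteLine($"{personResult.PartitionKey}, {personResult.RowKey}: {personResult.Email}");
}
	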

Implement partitioning schemes

Every entity that you create in your table has three system properties:

PartitionKey The PartitionKey defines the partition to which the entity belongs. A partition stores entities with the same PartitionKey.

RowKey The RowKey parameter uniquely identifies the entity inside the partition.

Timestamp The Timestamp property is the date and time when you create or modify the entity and is automatically provided for you by the system.

Because PartitionKey and RowKey are string parameters, entities are ordered using lexical comparison. With this type of comparison, an entity with a RowKey value of 232 appears before an entity with a RowKey value of 4.
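
A common way to deal with this lexical ordering is to pad numeric key values with leading zeros so that the lexical order matches the numeric order. The following lines are a hypothetical illustration of that technique:

	
//C# .NET Core. Hypothetical illustration: padding numeric RowKey values so
//that lexical ordering matches numeric ordering.
int orderNumber = 4;
string unpadded = orderNumber.ToString();      // "4" sorts after "232"
string padded = orderNumber.ToString("D8");    // "00000004" sorts before "00000232"
Console.WriteLine($"Unpadded: {unpadded}, padded: {padded}");
	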

Understanding how partitions store your data is essential because the PartitionKey and RowKey that you define for your data affect the performance when inserting, modifying, or retrieving data from your table. PartitionKey and RowKey define a clustered index that speeds up entity searches in the table. Bear in mind that this is the only index that exists in your table; you cannot define any new indexes. If you need additional indexes, you need to move to the Cosmos DB premium storage service using the Table API.

Each Table Storage service node services one or more partitions. The service can scale up and down dynamically by load balancing partitions across different nodes. When a node is under a heavy load, the storage service splits the range of partitions served by a particular node between several other nodes that have a lower load, as shown in Figure 3-2. When the pressure on the primary node lowers, the storage service merges the different parts of the partition into the original node.

Figure 3-2 Load balancing partitions across storage nodes

Another essential aspect that you need to consider when implementing your partitioning scheme is the Entity Group Transactions (EGT) built-in transaction mechanism. When you need to perform atomic operations across a batch of entities, the EGT built-in transaction mechanism ensures that all modifications happen consistently. The Entity Group Transaction can only operate on entities that share the same PartitionKey; that is, the entities need to be stored on the same partition. You need to carefully evaluate this feature because using fewer partitions means that you can use EGT more frequently, but it also means that you are decreasing the scalability of your application.

As we mentioned before in this section, the PartitionKey selection you make dramatically impacts the scalability and the performance of your solution. You can go from creating all your entities on a single partition to creating a partition for every single entity. Saving all entities in a single partition allows you to use entity group transactions on all your entities. If you put a significant number of entities in this single partition, you can prevent the Table Storage service from being able to scale and balance the partition efficiently. This happens because the node is reaching the scalability target of 500 entities per second too quickly. When the node reaches this limit, the Table service tries to load balance the partition to another node that has a lower load. Each time the Table service relocates your partition, it is disconnected and reconnected on the new node. Although this reconnection operation is quite fast, your application still can suffer timeouts and server busy errors.

When you choose your PartitionKey, you need to find the best balance between being able to use batch transactions or EGT and ensuring that your Table Storage solution is scalable. Another important aspect that you should take into consideration when designing and implementing your partition scheme is the type of operation that your application needs to perform. Depending on how you update, insert, or delete entities in your table, you should select the appropriate PartitionKey. Independently of which modification operation your application uses most frequently, you should also carefully consider the different query types that you can use for locating your entities, because they have very different performance characteristics. You can use the following query types for working with your entities:

Point Query You set the PartitionKey and RowKey of your query. You should use these types of queries as often as possible because they provide better performance when looking up entities. In this case, you take advantage of the indexed values of PartitionKey and RowKey for getting a single entity. For example, you can use $filter=(PartitionKey eq ‘Volvo’) and (RowKey eq ‘1234BCD’).

Range Query In this type of query, you set the PartitionKey to select a partition, and you filter a range of values using the RowKey. This type of query returns more than one entity. For example, you construct the filter as $filter=PartitionKey eq ‘Volvo’ and RowKey ge ‘1111BCD’ and RowKey lt ‘9999BCD’. This filter returns all entities representing Volvo cars in which the plate number is greater than or equal to ‘1111BCD’ and less than ‘9999BCD’. This type of query performs better than partition and table scans but worse than point queries. (See the code sketch after this list for how these filters can be built programmatically.)

Partition Scan In this case, you set the PartitionKey but use properties other than RowKeys for filtering the entities in the partition and returning more than one entity. For example, you can use this type of filter: $filter=PartitionKey eq ‘Volvo’ and color eq ‘blue’. In this case, you look for entities representing blue Volvo cars. This type of query performs worse than range and point queries but better than Table Scans.

Table Scan You don’t set the PartitionKey. This means that you don’t take advantage of the indexed values in these system properties. Because you don’t set the PartitionKey, it doesn’t matter if you use RowKey; the Table Storage service performs a table scan lookup. This kind of query returns more than one entity. This type of query has the worst performance compared with partition scans and range and point queries. For example, you use the filter $filter=Color eq ‘Blue’ for getting all entities that represent blue cars.
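
The $filter expressions above are OData syntax. If you build the same queries in code with the Microsoft.Azure.Cosmos.Table package, you can construct the filter strings with the TableQuery helper methods, as the following sketch shows; the table, property names, and values are hypothetical examples.

	
//C# .NET Core. Building the filter strings for the query types described above.
//The property names and values are hypothetical examples.
using Microsoft.Azure.Cosmos.Table;

//Point query: PartitionKey and RowKey fully specified.
string pointFilter = TableQuery.CombineFilters(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Volvo"),
    TableOperators.And,
    TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.Equal, "1234BCD"));

//Range query: PartitionKey plus a range of RowKey values.
string rangeFilter = TableQuery.CombineFilters(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Volvo"),
    TableOperators.And,
    TableQuery.CombineFilters(
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, "1111BCD"),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThan, "9999BCD")));

//Partition scan: PartitionKey plus a non-key property.
string partitionScanFilter = TableQuery.CombineFilters(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Volvo"),
    TableOperators.And,
    TableQuery.GenerateFilterCondition("Color", QueryComparisons.Equal, "Blue"));

//Table scan: no PartitionKey, so the whole table is scanned.
string tableScanFilter =
    TableQuery.GenerateFilterCondition("Color", QueryComparisons.Equal, "Blue");
	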

Bearing in mind the types of queries and the characteristics and features of partition definition and management, you can use different Table design patterns. These patterns allow you to address some of the limitations, such as having only a single clustered index or working with transactions between partition boundaries:

Intra-partition secondary index pattern You can enable efficient lookups on different RowKey values with different sort orders by creating multiple copies of the same entity in the same partition and using a different RowKey value for each copy of the entity (see the code sketch after this list of patterns).

Inter-partition secondary index pattern You create multiple copies of the same entity in different partitions or even different tables using different RowKey values for setting different sort orders.

Eventually consistent transactions pattern Using Azure queues, you can bypass the limitation of using entity group transactions only inside the same partition.

Index entities pattern You use external storage, such as blob storage, for creating lists of indexed entities by using non-unique properties. For example, you have entities that represent employees in a company, and you use the department name as the PartitionKey and the Employee ID as the RowKey. You can create virtual indexes by creating a list for each value of the non-unique attribute that you want to index. This means you can have a list called Smith that contains all employee IDs for employees that have the Smith surname. You can also use the intra-partition and inter-partition secondary index patterns in conjunction with this pattern.

Denormalization pattern Because Table Storage is a schemaless NoSQL database, you are not restricted to normalization rules. This means you can store detailed information in the same entity that you should store in separate entities if you were using an SQL database.

Data series pattern You can minimize the number of requests that you need to make for getting a data series related to an entity. For example, consider an entity that stores the temperature of an industrial oven every hour. If you need to make a graph for showing the evolution of the temperature, you need to make 24 requests, one each hour. Using this pattern, you create a copy of the entity that stores the temperature information for each hour in a 24-hour day, and you reduce the number of needed requests to create the graph from 24 to 1.

Compound key pattern You use the values from the RowKey of two different types of entities for creating a new type of entity in which the RowKey is the join of the RowKeys of each entity type.

Log tail pattern By default, values returned by the Table Storage service are sorted by PartitionKey and RowKey. This pattern returns the n most recently added entities to the table by using the RowKey.

High-volume delete pattern You can delete a high volume of data by storing all the entities that you want to delete in a separate table. Then you only need to delete the table that contains the entities that you want to delete.

Wide entities pattern If you need to create entities with more than 252 properties, you can split logical entities between several physical entities.

Large entities pattern If you need to store entities that are bigger than 1MB, you can use the blob storage service for storing the extra information and save a pointer to the blob as a property of the entity.
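
As an illustration of the intra-partition secondary index pattern, the following minimal sketch stores two copies of a hypothetical employee entity in the same partition, one keyed by employee ID and one keyed by surname, so that both lookups become efficient key-based queries. Because both copies share the PartitionKey, a single entity group transaction keeps them consistent. The EmployeeEntity class and the RowKey formats are assumptions made for this example.

	
//C# .NET Core. Minimal sketch of the intra-partition secondary index pattern.
//The EmployeeEntity class and the RowKey formats are hypothetical examples.
using Microsoft.Azure.Cosmos.Table;

public class EmployeeEntity : TableEntity
{
    public EmployeeEntity() { }
    public EmployeeEntity(string department, string rowKey) : base(department, rowKey) { }
    public string EmployeeId { get; set; }
    public string Surname { get; set; }
}

public static class EmployeeIndexing
{
    //Both copies live in the same partition (the department), so one entity
    //group transaction keeps them consistent.
    public static TableBatchOperation BuildIndexedInsert(string department,
        string employeeId, string surname)
    {
        var byId = new EmployeeEntity(department, $"empid_{employeeId}")
        {
            EmployeeId = employeeId,
            Surname = surname
        };
        var bySurname = new EmployeeEntity(department, $"surname_{surname}_{employeeId}")
        {
            EmployeeId = employeeId,
            Surname = surname
        };
        TableBatchOperation batch = new TableBatchOperation();
        batch.InsertOrMerge(byId);
        batch.InsertOrMerge(bySurname);
        return batch; //Execute with table.ExecuteBatchAsync(batch);
    }
}
	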

Need More Review?: Table Design Patterns

You can find detailed information about how to implement table design patterns in the Table Design Patterns section of the article “Azure Storage Table Design Guide: Designing Scalable and Performant Tables” at https://docs.microsoft.com/en-us/azure/cosmos-db/table-storage-design-guide#table-design-patterns.

Skill 3.2: Develop solutions that use Cosmos DB storage

Cosmos DB is a premium storage service that Azure provides for satisfying your need for a globally distributed, low-latency, highly responsive, and always-online database service. Cosmos DB has been designed with scalability and throughput in mind. One of the most significant differences between Cosmos DB and other storage services offered by Azure is how easily you can scale your Cosmos DB solution across the globe by simply clicking a button and adding a new region to your database.

Another essential feature that you should consider when evaluating this type of storage service is how you can access this service from your code and how hard it would be to migrate your existing code to a Cosmos DB–based storage solution. The good news is that Cosmos DB offers different APIs for accessing the service. The best API for you depends on the type of data that you want to store in your Cosmos DB database. You store your data using Key-Value, Column-Family, Documents, or Graph approaches. Each of the different APIs that Cosmos DB offers allows you to store your data with different schemas. Currently, you can access Cosmos DB using SQL, Cassandra, Table, Gremlin, and MongoDB APIs.

This skill covers how to:

  • Create, read, update, and delete data by using the appropriate APIs
  • Implement partitioning schemes
  • Set the appropriate consistency level for operations

Create, read, update, and delete data by using the appropriate APIs

When you are planning how to store the information that your application needs to work, you need to consider the structure that you need to use for storing that information. You will find that some parts of your application need to store information using a Key-Value structure, while others may need a more flexible, schemaless structure in which you need to save the information into documents. Maybe one fundamental characteristic of your application is that you need to store the relationship between entities, and you need to use a graph structure for storing your data.

Cosmos DB offers a variety of APIs for storing and accessing your data, depending on the requirements that your application has:

SQL This is the core and default API for accessing your data in your Cosmos DB account. This core API allows you to query JSON objects using SQL syntax, which means you don’t need to learn another query language. Under the hood, the SQL API uses the JavaScript programming model for expression evaluation, function invocations, and typing system. You use this API when you need to use a data structure based on documents.

Table You can think of the Table API as the evolution of the Azure Table Storage service. This API benefits from the high performance, low latency, and high scalability features of Cosmos DB. You can migrate from your current Azure Table Storage service with no code modification in your application. Another critical difference between Table API for Cosmos DB and Azure Table Storage is that you can define your own indexes in your tables. In the same way that you did with the Table Storage service, Table API allows you to store information in your Cosmos DB account using a data structure based on documents.

Cassandra Cosmos DB implements the wire protocol for the Apache Cassandra database as one of the options for storing and accessing data in the Cosmos DB database. This allows you to forget about operations and performance-management tasks related to managing Cassandra databases. In most situations, you can migrate your application from your current Cassandra database to Cosmos DB using the Cassandra API by simply changing the connection string. Cassandra is a column-based database that stores information using a key-value approach.

MongoDB You can access your Cosmos DB account by using the MongoDB API. This NoSQL database allows you to store the information for your application in a document-based structure. Cosmos DB implements the wire protocol compatible with MongoDB 3.2. This means that any MongoDB 3.2 client driver that implements and understands this protocol definition can connect seamlessly with your Cosmos DB database using the MongoDB API.

Gremlin Based on the Apache TinkerPop graph traversal language, Gremlin, this API allows you to store information in Cosmos DB using a graph structure. This means that instead of storing only entities, you store:

Vertices You can think of a vertex as an entity in other information structures. In a typical graph structure, a vertex could be a person, a device, or an event.

Edges These are the relationships between vertices. A person can know another person, a person might own a type of device, or a person may attend an event.

Properties These are each of the attributes that you can assign to a vertex or an edge.

Beware that you cannot mix these APIs in a single Cosmos DB account. You need to define the API that you want to use for accessing your Cosmos DB account when you are creating the account. Once you have created the account, you won’t be able to change the API for accessing it.

Azure offers SDKs for working with the different APIs that you can use for connecting to Cosmos DB. Supported languages are .NET, Java, Node.js, and Python. Depending on the API that you want to use for working with Cosmos DB, you can also use other languages like Xamarin, Golang, or PHP. In this section, you can review an example of each API and learn how to create, read, update, and delete data using the different APIs.

Before starting with the examples, you need to create a Cosmos DB account for storing your data. The following procedure shows how to create a Cosmos DB account with the SQL API. You can use this same procedure for creating accounts with the other APIs we have reviewed in this skill:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the top left corner in the Azure Portal, click Create A Resource.
  3. On the New panel, under the Azure Marketplace column, click Databases. On the Featured column, click Azure Cosmos DB.
  4. On the Create Azure Cosmos DB Account blade, in the Resource Group dropdown, click the Create New link below the drop-down menu. In the pop-up dialog, type a name for the new Resource Group. Alternatively, you can select an existing Resource Group from the drop-down menu.
  5. In the Instance Details section, type an Account Name.
  6. In the API drop-down menu, ensure that you have selected the option Core (SQL), as shown in Figure 3-3.
  7. Figure 3-3 Selecting a Cosmos DB API
  8. On the Location drop down menu, select the region most appropriate for you. If you are using App Services or virtual machines, you should select the same region in which you deployed those services.
  9. Leave Geo-Redundancy and Multi-Region Write disabled.
  10. In the bottom-left corner of the Create Azure Cosmos DB Account blade, click the Review + Create button.
  11. In the bottom-left corner of the Review + Create tab, click the Create button to start the deployment of your Cosmos DB account.

Note: Azure Cosmos DB emulator

You can use the Azure Cosmos DB emulator during the development stage of your application. You should bear in mind that there are some limitations when working with the emulator instead of a real Cosmos DB account. The emulator is only supported on Windows platforms. You can review all characteristics of the Cosmos DB emulator at https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator

Once you have your Cosmos DB account ready, you can start working with the different examples. The following example shows how to create a console application using .NET Core. The first example uses Cosmos DB SQL API for creating, updating, and deleting some elements in the Cosmos DB account:

  1. Open Visual Studio Code and create a directory for storing the example project.
  2. Open the Terminal, switch to the project’s directory, and type the following command:
  3. 		
    dotnet new console
    		
    	
  4. Install the NuGet package for interacting with your Cosmos DB account using the SQL API. Type the following command in the Terminal:
  5. 		
    dotnet add package Microsoft.Azure.DocumentDB.Core
    		
    	
  6. Change the content of the Program.cs file using the content provided in Listing 3-9. You need to change the namespace according to your project’s name.
  7. Sign in to the management portal (http://portal.azure.com).
  8. In the Search box at the top of the Azure Portal, type the name of your Cosmos DB account and click the name of the account.
  9. On your Cosmos DB Account blade, in the Settings section, click Keys.
  10. On the Keys panel, copy the URI and Primary Key values from the Read-Write Keys tab. You need to provide these values to the EndpointUri and Key constants in the code shown in Listing 3-9. (The most important parts of the code are shown in bold format.)

Listing 3-9 Cosmos DB SQL API example

	
//C# .NET Core. Program.cs
using System.Collections.Immutable;
using System.Xml.Linq;
using System.Diagnostics;
using System.Runtime.CompilerServices;
using System;
using System.Linq;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using System.Threading.Tasks;
using ch3_2_2_SQL.Model;
using System.Net;
namespace ch3_2_2_SQL
{
class Program
{
private const string EndpointUri = "<INSERT_YOUR_COSMOS_DB_URI_HERE>";
private const string Key = "<INSERT_YOUR_KEY_HERE>";
private DocumentClient client;
static void Main(string[] args)
{
try
{
Program demo = new Program();
demo.StartDemo().Wait();
}
catch (DocumentClientException dce)
{
Exception baseException = dce.GetBaseException();
System.Console.WriteLine($"{dce.StatusCode} error ocurred: {dce.Message},
Message: {baseException.Message}");
}
catch (Exception ex)
{
Exception baseException = ex.GetBaseException();
System.Console.WriteLine($"Error ocurred: {ex.Message}, Message:
{baseException.Message}");
}
}
private async Task StartDemo()
{
Console.WriteLine("Starting Cosmos DB SQL API Demo!");
//Create a new demo database
string databaseName = "demoDB_" + Guid.NewGuid().ToString().Substring(0, 5);
this.SendMessageToConsoleAndWait($"Creating database {databaseName}...");
this.client = new DocumentClient(new Uri(EndpointUri), Key);
Database database = new Database { Id = databaseName };
await this.client.CreateDatabaseIfNotExistsAsync(database);
//Create a new demo collection inside the demo database.
//This creates a collection with a reserved throughput.
//This operation has pricing implications.
string collectionName = "collection_" + Guid.NewGuid().ToString().
Substring(0, 5);
this.SendMessageToConsoleAndWait($"Creating collection demo
{collectionName}...");
DocumentCollection documentCollection = new DocumentCollection { Id =
www.examsnap.com ExamSnap - IT Certification Exam Dumps and Practice Test Questions
collectionName };
Uri databaseUri = UriFactory.CreateDatabaseUri(databaseName);
await this.client.CreateDocumentCollectionIfNotExistsAsync(databaseUri,
documentCollection);
//Create some documents in the collection
Person person1 = new Person
{
Id = "Person.1",
FirstName = "Mike",
LastName = "Nikolo",
Devices = new Device[]
{
new Device { OperatingSystem = "iOS", CameraMegaPixels = 7,
Ram = 16, Usage = "Personal"},
new Device { OperatingSystem = "Android", CameraMegaPixels = 12,
Ram = 64, Usage = "Work"}
},
Gender = "Male",
Address = new Address
{
City = "Seville",
Country = "Spain",
PostalCode = "28973",
Street = "Diagonal",
State = "Andalucia"
},
IsRegistered = true
};
await this.CreateDocumentIfNotExistsAsync(databaseName, collectionName,
person1);
Person person2 = new Person
{
Id = "Person.2",
FirstName = "Agatha",
LastName = "Smith",
Devices = new Device[]
{
new Device { OperatingSystem = "iOS", CameraMegaPixels = 12,
Ram = 32, Usage = "Work"},
new Device { OperatingSystem = "Windows", CameraMegaPixels = 12,
Ram = 64, Usage = "Personal"}
},
Gender = "Female",
Address = new Address
{
City = "Laguna Beach",
Country = "United States",
PostalCode = "12345",
Street = "Main",
State = "CA"
},
IsRegistered = true
};
await this.CreateDocumentIfNotExistsAsync(databaseName, collectionName,
person2);
//Make some queries to the collection
this.SendMessageToConsoleAndWait($"Getting documents from the collection
{collectionName}...");
FeedOptions queryOptions = new FeedOptions { MaxItemCount = -1 };
Uri documentCollectionUri = UriFactory.CreateDocumentCollectionUri
(databaseName, collectionName);
//Find documents using LINQ
IQueryable<Person> personQuery = this.client.CreateDocumentQuery<Person>
(documentCollectionUri, queryOptions)
.Where(p => p.Gender == "Male");
System.Console.WriteLine("Running LINQ query for finding people...");
foreach (Person foundPerson in personQuery)
{
System.Console.WriteLine($"tPerson: {foundPerson}");
}
//Find documents using SQL
IQueryable<Person> personSQLQuery = this.client.CreateDocumentQuery<Person>
(documentCollectionUri,
"SELECT * FROM Person WHERE Person.Gender = 'Female'",
queryOptions);
System.Console.WriteLine("Running SQL query for finding people...");
foreach (Person foundPerson in personSQLQuery)
{
System.Console.WriteLine($"tPerson: {foundPerson}");
}
Console.WriteLine("Press any key to continue...");
Console.ReadKey();
//Update documents in a collection
this.SendMessageToConsoleAndWait($"Updating documents in the collection
{collectionName}...");
person2.FirstName = "Mathew";
person2.Gender = "Male";
Uri documentUri = UriFactory.CreateDocumentUri(databaseName, collectionName,
person2.Id);
await this.client.ReplaceDocumentAsync(documentUri, person2);
this.SendMessageToConsoleAndWait($"Document modified {person2}");
//Delete a single document from the collection
this.SendMessageToConsoleAndWait($"Deleting documents from the collection
{collectionName}...");
documentUri = UriFactory.CreateDocumentUri(databaseName, collectionName,
person1.Id);
await this.client.DeleteDocumentAsync(documentUri);
this.SendMessageToConsoleAndWait($"Document deleted {person1}");
//Delete created demo database and all its children elements
this.SendMessageToConsoleAndWait("Cleaning-up your Cosmos DB account...");
await this.client.DeleteDatabaseAsync(databaseUri);
}
private void SendMessageToConsoleAndWait(string message)
{
Console.WriteLine(message);
Console.WriteLine("Press any key to continue...");
Console.ReadKey();
}
private async Task CreateDocumentIfNotExistsAsync(string database,
string collection, Person person)
{
try
{
Uri documentUri = UriFactory.CreateDocumentUri(database, collection,
person.Id);
await this.client.ReadDocumentAsync(documentUri);
this.SendMessageToConsoleAndWait($"Document {person.Id} already exists
in collection {collection}");
}
catch (DocumentClientException dce)
{
if (dce.StatusCode == HttpStatusCode.NotFound)
{
Uri collectionUri = UriFactory.CreateDocumentCollectionUri(database, collection);
await this.client.CreateDocumentAsync(collectionUri, person);
this.SendMessageToConsoleAndWait($"Created new document {person.Id}
in collection {collection}");
}
}
}
}
}
	

When you work with the SQL API, notice that you need to construct the correct URI for accessing the element that you want to work with. The SQL API provides the UriFactory class for creating the correct URI for each object type. When you need to create a Database or a Document Collection, you can use CreateDatabaseIfNotExistsAsync or CreateDocumentCollectionIfNotExistsAsync, respectively. These IfNotExists methods automatically check whether the Document Collection or Database exists in your Cosmos DB account; if it doesn't exist, the method automatically creates the Document Collection or the Database. However, when you need to create a new document in the database, there is no such IfNotExists method available, so you need to check whether the document already exists in the collection. If the document doesn't exist, you create the actual document, as shown in the following fragment from Listing 3-9. (The code in bold shows the methods that you need to use for creating a document and getting the URI for a Document Collection.)

	
try
{
Uri documentUri = UriFactory.CreateDocumentUri(database, collection, person.Id);
await this.client.ReadDocumentAsync(documentUri);
this.SendMessageToConsoleAndWait($"Document {person.Id} already exists in collection {collection}");
}
catch (DocumentClientException dce)
{
if (dce.StatusCode == HttpStatusCode.NotFound)
{
Uri collectionUri = UriFactory.CreateDocumentCollectionUri(database, collection);
await this.client.CreateDocumentAsync(collectionUri, person);
...
	

You need to do this verification because you will get a DocumentClientException with StatusCode 409 (Conflict) if you try to create a document with the same Id as an already existing document in the collection. Similarly, you get a DocumentClientException with StatusCode 404 (Not Found) if you try to delete a document that doesn't exist in the collection using the DeleteDocumentAsync method or if you try to replace a document that doesn't exist in the collection using the ReplaceDocumentAsync method.
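
For instance, if you call CreateDocumentAsync directly instead of going through the CreateDocumentIfNotExistsAsync helper from Listing 3-9, you could guard against duplicate Ids by catching the Conflict status code. The following is a minimal sketch that reuses the client, databaseName, collectionName, and person1 variables from the listing; the handling inside the catch block is only illustrative:

	
try
{
Uri collectionUri = UriFactory.CreateDocumentCollectionUri(databaseName, collectionName);
await this.client.CreateDocumentAsync(collectionUri, person1);
}
catch (DocumentClientException dce) when (dce.StatusCode == HttpStatusCode.Conflict)
{
//A document with the same Id already exists in the collection.
this.SendMessageToConsoleAndWait($"Document {person1.Id} already exists");
}
	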

When you create a document, you need to provide an Id property of type string to your document. This property needs to uniquely identify your document inside the collection. If you don't provide this property, Cosmos DB automatically adds it to the document for you, using a GUID string.

As you can see in the example code in Listing 3-9, you can query your documents using LINQ or SQL queries. In this example, I have used a pretty simple SQL query for getting the documents that represent a male person. However, you can construct more complex statements, such as a query that returns all the people who live in a specific country, using the WHERE Address.Country = 'Spain' expression, or the people who have an Android device, using the WHERE ARRAY_CONTAINS(Person.Devices, { 'OperatingSystem': 'Android' }, true) expression.
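
The following fragments show how those two expressions could be used with the same client, documentCollectionUri, and queryOptions variables defined in Listing 3-9; they are illustrative sketches rather than part of the original listing:

	
//People who live in Spain
IQueryable<Person> spanishPeople = this.client.CreateDocumentQuery<Person>(
documentCollectionUri,
"SELECT * FROM Person WHERE Person.Address.Country = 'Spain'",
queryOptions);
//People who own at least one Android device. The third parameter (true)
//enables partial matching against the elements of the Devices array.
IQueryable<Person> androidOwners = this.client.CreateDocumentQuery<Person>(
documentCollectionUri,
"SELECT * FROM Person WHERE ARRAY_CONTAINS(Person.Devices, { 'OperatingSystem': 'Android' }, true)",
queryOptions);
	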

Need More Review?: Sql Queries With Cosmos DB

You can review all the capabilities and features of the SQL language that Cosmos DB implements by reviewing these articles:

SQL Language Reference for Azure Cosmos DB

https://docs.microsoft.com/en-us/azure/cosmos-db/sql-api-query-reference

Once you have modified the Program.cs file, you need to create some additional classes that you use in the main program for managing documents. You can find these new classes in Listings 3-10 to 3-12.

  1. In the Visual Studio Code window, create a new folder named Model in the project folder.
  2. Create a new C# class file in the Model folder and name it Person.cs.
  3. Replace the content of the Person.cs file with the content of Listing 3-10. Change the namespace as needed for your project.
  4. Create a new C# class file in the Model folder and name it Device.cs.
  5. Replace the content of the Device.cs file with the content of Listing 3-11. Change the namespace as needed for your project.
  6. Create a new C# class file in the Model folder and name it Address.cs.
  7. Replace the content of the Address.cs file with the content of Listing 3-12. Change the namespace as needed for your project.
  8. At this point, you can run the project by pressing F5 in the Visual Studio Code window. Check to see how your code is creating and modifying the different databases, document collections, and documents in your Cosmos DB account. You can review the changes in your Cosmos DB account using the Data Explorer tool in your Cosmos DB account in the Azure Portal.

Listing 3-10 Cosmos DB SQL API example: Person.cs

	
//C# .NET Core.
using Newtonsoft.Json;
namespace ch3_2_2_SQL.Model
{
public class Person
{
[JsonProperty(PropertyName="id")]
public string Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public Device[] Devices { get; set; }
public Address Address { get; set; }
public string Gender { get; set; }
public bool IsRegistered { get; set; }
public override string ToString()
{
return JsonConvert.SerializeObject(this);
}
}
}
	

Listing 3-11 Cosmos DB SQL API example: Device.cs

	
//C# .NET Core.
namespace ch3_2_2_SQL.Model
{
public class Device
{
public int Ram { get; set; }
public string OperatingSystem { get; set; }
public int CameraMegaPixels { get; set; }
public string Usage { get; set; }
}
}
	

Listing 3-12 Cosmos DB SQL API example: Address.cs

	
//C# .NET Core.
namespace ch3_2_2_SQL.Model
{
public class Address
{
public string City { get; set; }
public string State { get; set; }
public string PostalCode { get; set; }
public string Country { get; set; }
public string Street { get; set; }
}
}
	

Working with Cosmos DB using the Table API is not too different from what you already did in the Azure Table Storage example in Listings 3-3 to 3-7. You need only make some slight modifications to make that example run using the Cosmos DB Table API instead of Azure Table Storage. Remember, to run this example, you need to create an Azure Cosmos DB account using the Table API. Once you have created the account, copy the Connection String found in the Connection String panel under the Settings section of your Cosmos DB account blade.

Use the following steps to adapt the examples in Listings 3-3 to 3-7 to work with the Cosmos DB Table API:

  1. Make a copy of your Azure Table Storage project folder example and rename it.
  2. Change the content in the AppSettings.json file with the following content:
  3. 		
    {
    "ConnectionString": "<PUT_YOUR_CONNECTION_STRING_HERE>"
    }
    		
    	
  4. In the AppSettings.cs file, remove the SASToken and StorageAccountName properties and add a new property using the following code:
  5. 		
    public string ConnectionString { get; set; }
    		
    	
  6. In the Common.cs file, change the method CreateStorageAccountFromSASToken to:
  7. 		
    public static CloudStorageAccount CreateStorageAccountFromConnectionString(string
    connectionString)
    		
    	
  8. In the Common.cs file, in the CreateStorageAccountFromConnectionString method, change
  9. 		
    try
    {
    //We require that the communication with the Storage Account uses HTTPS.
    bool useHttps = true;
    StorageCredentials storageCredentials = new StorageCredentials(SASToken);
    storageAccount = new CloudStorageAccount(storageCredentials, accountName,
    null, useHttps);
    }

    to

    try
    {
    storageAccount = CloudStorageAccount.Parse(connectionString);
    }
    		
    	
  10. In the Common.cs file, in the CreateTableAsync method, change
  11. 		
    AppSettings appSettings = AppSettings.LoadAppSettings();
    string storageConnectionString = appSettings.SASToken;
    string accountName = appSettings.StorageAccountName;
    CloudStorageAccount storageAccount = CreateStorageAccountFromSASToken(
    storageConnectionString, accountName);
    		
    	

    to

    		
    AppSettings appSettings = AppSettings.LoadAppSettings();
    string cosmosDBConnectionString = appSettings.ConnectionString;
    CloudStorageAccount storageAccount = CreateStorageAccountFromConnectionString
    (cosmosDBConnectionString);
    		
    	
  12. In the Visual Studio Code window, press F5 to run the project. You can connect to your Cosmos DB account and review the tables and entities in your account.

As you can see in this example, you can migrate from the Azure Table Storage service to a Cosmos DB Table API account by making minimal changes to your code. You can migrate your data from Azure Table Storage to Cosmos DB using azCopy. See https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy

Working with the MongoDB API for Cosmos DB is as easy as working with any other MongoDB library. You only need to use the connection string that you can find in the Connection String panel under the Settings section in your Azure Cosmos DB account.

The following example shows how to use Cosmos DB in your MongoDB project. For this example, you are going to use MERN (MongoDB, Express, React, and Node), which is a full-stack framework for working with MongoDB and NodeJS. Also, you need to meet the following requirements:

You must have the latest version of NodeJS installed on your computer.

You must have an Azure Cosmos DB account configured for using the MongoDB API. Remember that you can use the same procedure we saw earlier for creating a Cosmos DB account with the SQL API to create an Azure Cosmos DB account with the MongoDB API. You only need to select the correct API when you are creating your Cosmos DB account.

You need one of the connection strings that you can find in the Connection String panel in your Azure Cosmos DB account in the Azure Portal. You need to copy one of these connection strings because you need to use it later in the code.

Use the following steps to connect a MERN project with Cosmos DB using the MongoDB API:

  1. Create a new folder for your project.
  2. Open the terminal and run the following commands:
  3. 	
    git clone https://github.com/Hashnode/mern-starter
    cd mern-starter
    npm install
    	
    
  4. Open your preferred editor and open the mern-starter folder. Don’t close the terminal window that you opened before.
  5. In the mern-starter folder, in the server subfolder, open the config.js file and replace the content of the file with the following code:
  6. 	
    const config = {
    mongoURL: process.env.MONGO_URL || '<YOUR_COSMOSDB_CONNECTION_STRING>',
    port: process.env.PORT || 8000,
    };
    export default config;
    	
    
  7. On the terminal window, run the command npm start. This command starts the NodeJS project and creates a Node server listening on port 8000.
  8. Open a web browser and navigate to http://localhost:8000. This opens the MERN web project.
  9. Open a new browser window, navigate to the Azure Portal, and open the Data Explorer browser in your Azure Cosmos DB account.
  10. In the MERN project, create, modify, or delete some posts. Review how the document is created, modified, and deleted from your Cosmos DB account.

Need More Review?: Gremlin and Cassandra Examples

As you can see in the previous examples, integrating your existing code with Cosmos DB doesn’t require too much effort or changes to your code. For the sake of brevity, we decided to omit the examples of how to connect your Cassandra or Gremlin applications with Cosmos DB. You can learn how to do these integrations by reviewing the following articles:

Quickstart: Build a .NET Framework or Core application Using the Azure Cosmos DB Gremlin API account https://docs.microsoft.com/en-us/azure/cosmos-db/create-graph-dotnet

Quickstart: Build a Cassandra App with .NET SDK and Azure Cosmos DB https://docs.microsoft.com/en-us/azure/cosmos-db/create-cassandra-dotnet

Implement partitioning schemes

When you save data to your Cosmos DB account—independently of the API that you decide to use for accessing your data—Azure places the data in different servers to accommodate the performance and throughput that you require from a premium storage service like Cosmos DB. The storage services use partitions to distribute the data. Cosmos DB slices your data into smaller pieces called partitions that are placed on the storage server. There are two different types of partitions when working with Cosmos DB:

Logical You can divide a Cosmos DB container into smaller pieces based on your criteria. Each of these smaller pieces is a logical partition.

Physical These partitions are a group of replicas of your data that are physically stored on the servers. Azure automatically manages this group of replicas or replica sets. A physical partition can contain one or more logical partitions.

By default, any logical partition has a limit of 10 GB for storing data. When you are configuring a new collection, as shown in Figure 3-4, you need to decide whether you want your collection to be stored in a single logical partition and keep it under the limit of 10 GB or allow it to grow over that limit and span across different logical partitions. If you need your container to split over several partitions, Cosmos DB needs some way to know how to distribute your data across the different logical partitions. This is where the partition key comes into play. Use the following procedure to create a new collection in your Cosmos DB account. This procedure could be slightly different depending on the API that you use for your Cosmos DB account. In this procedure, you use a Cosmos DB account configured with the MongoDB API:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the Search box at the top of the Azure Portal, type the name of your Cosmos DB account and click the name of the account.
  3. On your Cosmos DB account blade, click Data Explorer.
  4. On the Data Explorer blade, click the New Container icon in the top-left corner of the blade.
  5. On the Add Container panel, shown in Figure 3-4, provide a name for the new database. If you want to add a container to an existing database, you can select the database by clicking the Use Existing radio button.
    Figure 3-4 Creating a new Collection
  6. Provide a name for the collection.
  7. Select the Storage Capacity by selecting Fixed (10GB) or Unlimited. Bear in mind that the partition key only makes sense for partitions bigger than 10GB.
  8. Enter a Shard Key. This is the partition key that Cosmos DB uses for distributing your data across different partitions.
  9. Enter a Throughput Limit and click OK.

Note: Partition Size

At the time of this writing, Microsoft is removing the ability to select a partition size of 10GB or Unlimited for some APIs. You still can create partitions limited to 10GB programmatically for any Cosmos DB API.

One of the main differences between Azure Table Storage and Table API for Cosmos DB is whether you can choose the partition key. Azure Table Storage allows you to use the PartitionKey system property for selecting the partition key; Cosmos DB allows you to select the attribute that you want to use as the partition key, as shown in Figure 3-4. Bear in mind that this partition key is immutable, which means you cannot change the property that you want to use as the partition key once you have selected it.
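
For APIs where the portal no longer exposes the size choice, you can also define the partition key programmatically. The following is a minimal sketch using the same SQL API DocumentClient from the earlier example; the /Address/Country path and the collection name are assumptions for illustration only:

	
//Create a container that spans logical partitions, using /Address/Country
//as the partition key.
DocumentCollection partitionedCollection = new DocumentCollection { Id = "PartitionedPeople" };
partitionedCollection.PartitionKey.Paths.Add("/Address/Country");
await this.client.CreateDocumentCollectionIfNotExistsAsync(
UriFactory.CreateDatabaseUri(databaseName),
partitionedCollection,
new RequestOptions { OfferThroughput = 400 });
	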

Selecting the appropriate partition key for your data is crucial because of the effect it has on the performance of your application. If you select a partition key that has many possible values, you end up with many partitions, and each partition might contain only a few documents. This configuration can be beneficial when your application usually performs read workloads and uses parallelization techniques for getting the data. On the other hand, if you select a partition key with just a few possible values, you can end up with “hot” partitions. A “hot” partition is a partition that receives most of the requests when working with your data. The main implication of these “hot” partitions is that they usually reach the throughput limit for the partition, which means you need to provision more throughput. Another potential drawback is that you can reach the limit of 10GB for a single logical partition. Because a logical partition is the scope for efficient multi-document transactions, selecting a partition key with a few possible values allows you to execute transactions on many documents inside the same partition.

Use the following guidelines when selecting your partition key:

The storage limit for a single logical partition is 10GB. If you foresee that your data would require more space for each value of the partition key, you should select another partition key.

The requests to a single logical partition cannot exceed the throughput limit for that partition. If your requests reach that limit, they are throttled to avoid exceeding the limit. If you reach this limit frequently, you should select another partition key because there is a good chance that you have a “hot” partition.

Choose partition keys with a wide range of values and access patterns that can evenly distribute requests across logical partitions. This allows you to achieve the right balance between being able to execute cross-document transactions and scalability. Timestamp-based values are usually a poor choice for a partition key.

Review your workload requirements. The partition key that you choose should allow your application to perform well on reading and writing workloads.

The parameters that you usually use on your requests are good candidates for a partition key.
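
As an illustration of why access patterns matter, the SQL API SDK used earlier in this chapter lets you target a single logical partition when you know the partition key value, and it requires you to explicitly enable fan-out when you don't. The following sketch assumes the /Address/Country partition key and the client, documentCollectionUri, and Person types from the previous examples:

	
//Single-partition query: the partition key value is supplied, so only one
//logical partition needs to be read.
FeedOptions singlePartitionOptions = new FeedOptions
{
PartitionKey = new PartitionKey("Spain")
};
IQueryable<Person> spanishPeople = this.client.CreateDocumentQuery<Person>(
documentCollectionUri,
"SELECT * FROM Person WHERE Person.Address.Country = 'Spain'",
singlePartitionOptions);
//Cross-partition query: no partition key value is supplied, so the SDK must
//be allowed to fan the query out across the logical partitions.
FeedOptions crossPartitionOptions = new FeedOptions
{
EnableCrossPartitionQuery = true
};
IQueryable<Person> registeredPeople = this.client.CreateDocumentQuery<Person>(
documentCollectionUri,
"SELECT * FROM Person WHERE Person.IsRegistered = true",
crossPartitionOptions);
	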

Need More Review?: Partitioning

You can review more information about how partitioning works by viewing the following video at https://azure.microsoft.com/en-us/resources/videos/azure-documentdb-elastic-scale-partitioning/

Set the appropriate consistency level for operations

One of the main benefits offered by Cosmos DB is the ability to have your data distributed across the globe with low latency when accessing the data. This means that you can configure Cosmos DB for replicating your data between any of the available Azure regions while achieving minimal latency when your application accesses the data from the nearest region. If you need to replicate your data to an additional region, you only need to add that region to the list of regions in which your data should be available.

This replication across the different regions has a drawback—the consistency of your data. To avoid corruption, your data needs to be consistent between all copies of your database. Fortunately, the Cosmos DB replication protocol offers five consistency levels. Ranging from stronger consistency to better performance, you can select how the replication protocol behaves when copying your data between all the replicas that are configured across the globe. These consistency levels are region agnostic, which means that neither the region that started the read or write operation nor the number of regions associated with your Cosmos DB account matters, even if you configured a single region for your account. You configure this consistency level at the Cosmos DB account level, and it applies to all databases, collections, and documents stored inside the same account. You can choose between the consistency levels shown in Figure 3-5. Use the following procedure to select the consistency level:

Figure 3-5 Selecting the consistency level
  1. Sign in to the management portal (http://portal.azure.com).
  2. In the Search box at the top of the Azure Portal, type the name of your Cosmos DB account and click the name of the account.
  3. On your Cosmos DB account blade, click Default Consistency in the Settings section.
  4. On the Default Consistency blade, select the desired consistency level. Your choices are Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual.
  5. Click the Save icon in the top-left corner of the Default Consistency blade.

Strong The read operations are guaranteed to return the most recently committed version of an element; that is, the user always reads the latest committed write. This consistency level is the only one that offers a linearizability guarantee, but this guarantee comes at a price. It has higher latency because of the time needed to confirm write operations, and the availability can be affected during failures.

Bounded Staleness The reads are guaranteed to be consistent within a pre-configured lag. This lag can consist of a number of the most recent (K) versions or a time interval (T). This means that if you make write operations, the read of these operations happens in the same order but with a maximum delay of K versions of the written data or T seconds since you wrote the data in the database. For reading operations that happen within a region that accepts writes, the consistency level is identical to the Strong consistency level. This level is also known as “time-delayed linearizability guarantee.”

Session Scoped to a client session, this consistency level offers the best balance between a strong consistency level and the performance provided by the eventual consistency level. It best fits applications in which write operations occur in the context of a user session.

Consistent Prefix This level guarantees that you always read data in the same order that you wrote the data, but there’s no guarantee that you can read all the data. This means that if you write “A, B, C” you can read “A”, “A, B” or “A, B, C” but never “A, C” or “B, A, C.”

Eventual There is no guarantee for the order in which you read the data. In the absence of further write operations, the replicas eventually converge. This consistency level offers better performance at the cost of more complex programming. Use this consistency level if the order of the data is not essential for your application.

Note: Custom Synchronization

If none of the consistency levels shown in this section fit your needs, you can create a custom consistency level by implementing a custom synchronization mechanism. You can review how to implement custom synchronization by reading the article "How to Implement Custom Synchronization to Optimize for Higher Availability and Performance" at https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-custom-synchronization

The best consistency level choice depends on your application and the API that you want to use to store data. As you can see in the different consistency levels, your application’s requirements regarding data read consistency versus availability, latency, and throughput are critical factors that you need to consider when making your selection.

You should consider the following points when you use SQL or Table API for your Cosmos DB account:

The recommended option for most applications is the level of session consistency.

If you are considering the strong consistency level, we recommend that you use the bounded staleness consistency level instead because it provides a linearizability guarantee with a configurable delay.

If you are considering the eventual consistency level, we recommend that you use the consistent prefix consistency level because it provides comparable levels of availability and latency with the advantage of guaranteed read orders.

Carefully evaluate the strong and eventual consistency levels because they are the most extreme options. In most situations, other consistency levels can provide a better balance between performance, latency, and data consistency.
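
In addition to the account-level default, the SQL API .NET SDK lets a client request the same or a weaker consistency level for its own operations. The following is a minimal sketch; the endpoint and key placeholders are assumptions standing in for the values from your Cosmos DB account:

	
//Requires the Microsoft.Azure.Documents and Microsoft.Azure.Documents.Client
//namespaces from the Microsoft.Azure.DocumentDB.Core NuGet package.
//This client requests Session consistency; the account default still applies
//to any client that does not override it.
DocumentClient sessionClient = new DocumentClient(
new Uri("<your_cosmos_db_endpoint>"),
"<your_cosmos_db_key>",
new ConnectionPolicy(),
ConsistencyLevel.Session);
	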

Need More Review?: Consistency Levels Tradeoff

Each consistency level comes at a price. You can review the implications of choosing each consistency level by reading the article "Consistency, Availability, and Performance Tradeoffs" at https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels-tradeoffs

When you use the Cassandra or MongoDB APIs, Cosmos DB maps the consistency levels offered by Cassandra and MongoDB to the consistency levels offered by Cosmos DB. Cosmos DB does this because, when you use these APIs, neither Cassandra nor MongoDB offers a well-defined consistency level. Instead, Cassandra provides write or read consistency levels that map to the Cosmos DB consistency levels in the following ways:

Cassandra write consistency level This level maps to the default Cosmos DB account consistency level.

Cassandra read consistency level Cosmos DB dynamically maps the consistency level specified by the Cassandra driver client to one of the Cosmos DB consistency levels.

On the other hand, MongoDB allows you to configure the following consistency levels: Write Concern, Read Concern, and Master Directive. Similar to the mapping of Cassandra consistency levels, Cosmos DB consistency levels map to MongoDB consistency levels in the following ways:

MongoDB write concern consistency level This level maps to the default Cosmos DB account consistency level.

MongoDB read concern consistency level Cosmos DB dynamically maps the consistency level specified by the MongoDB driver client to one of the Cosmos DB consistency levels.

Configuring a master region You can configure a region as the MongoDB “master” by configuring the region as the first writable region.

Need More Review?: Cassandra and MongoDB Consistency Level Mappings

You can review how the different Cassandra and MongoDB consistency levels map to Cosmos DB consistency levels in the article "Consistency Levels and Azure Cosmos DB APIs" at https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels-across-apis

Skill 3.3: Develop solutions that use a relational database

NoSQL databases have many desirable features that make them an excellent tool for some scenarios, but there is no one-size-fits-all tool for solving all your data needs. There might be times when you need to use relational databases, such as when you need to migrate an on-premises application to the cloud or write an application that requires strong enforcement of data integrity and transaction support.

Azure offers the SQL Database service, which is a database-as-a-service (DBaaS) platform that allows you to implement your relational database requirements directly in the cloud without worrying about the details of installing, configuring, and managing your instance of SQL Server. Azure SQL Database also provides out-of-the-box high availability, scalability, monitoring, and tuning features.

This skill covers how to:

  • Provision and configure relational databases
  • Configure elastic pools for Azure SQL Database
  • Create, read, update, and delete data tables by using code

Provision and configure relational databases

Azure SQL Database offers different deployment models or deployment options to better fit your needs. The base code for all these deployment options is the same Microsoft SQL Server database engine that you can find in any installation of the latest version of SQL Server. Because of its cloud-first strategy, Microsoft includes the newest capabilities and features in the SQL Database cloud version first and then adds them to SQL Server itself.

As we mentioned, you can choose between the following three deployment options:

Single database You assign a group of resources to your database. The SQL Database service manages these resources for you, and you only need to connect to your database for working with your data. A single database is similar to contained databases in SQL Server 2017.

Elastic pool You configure a group of resources that the SQL Database service manages for you. You can deploy several databases that share these resources. This deployment option is appropriate when you have several databases that have an unpredictable workload or whose workload is predictable but not always stable (peak and valley usage patterns). You can move single databases in and out of elastic pools.

Managed instance This is similar to installing your SQL Server in an Azure Virtual Machine with the advantage of not needing to provision or manage the VM. This deployment option is appropriate when you need to move an on-premises environment to the cloud and your application depends on features that are available only at the SQL Server instance level. Bear in mind that this deployment option is different from deploying SQL Server on Azure Virtual Machines.

Need More Review?: Azure SQL Versus SQL Server

You can review a side-by-side comparison of features available in each version of the SQL service by consulting the article "Feature Comparison: Azure SQL Database versus SQL Server," at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-features

When you are planning to deploy an SQL Database, you should consider the purchasing model that best fits your needs. Azure SQL Database offers two purchasing models:

DTU-based Resources are grouped into bundles or service tiers—basic, standard, and premium. A DTU (Database Transaction Unit) is a grouping of computing, storage, and IO resources assigned to the database that allows you to measure the resources assigned to a single database. If you need to work with elastic pools, then you need to apply a similar concept—the elastic Database Transaction Unit or eDTU. You cannot use DTU or eDTU with managed instances.

vCore-based You have more fine-grained control over the resources that you want to assign to your single database, elastic pool, or managed instance. You can choose the hardware generation that you want to use with your databases. This pricing model offers two service tiers—general purpose and business critical. For single databases, you can also choose the additional service tier, hyperscale. When you use this pricing model, you pay for the following:

Compute You configure the service tier, the number of vCores, amount of memory, and the hardware generation.

Data You configure the amount of space reserved for your databases and log information.

Backup storage You can configure Read Access Geo-Redundant Storage.

In general, if your single database or elastic pool is consuming more than 300 DTU, you should consider moving to a vCore pricing model. You can make this pricing model conversion with no downtime by using the Azure Portal or any of the available management APIs.

Need More Review?: SQL Database Pricing Model

If you want to know more about the Azure SQL Database pricing model, you can review the Microsoft Docs article "Azure SQL Database Purchasing Models" at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-purchase-models.

When you deploy a single database or an elastic pool, you need to use an Azure SQL Database server. This server is the parent resource for any single database or elastic pool, and it is the entry point for your databases. The SQL Database server controls important aspects, such as user logins, firewall and auditing rules, threat detection policies, and failover groups. When you create your first database or elastic pool, you need to create an SQL Database server and provide an admin username and password for managing the server. This first administrator user has control over the master database on the server and all new databases created on the server.

You should not confuse this SQL Database server with an on-premises server or a managed instance server. The SQL Database server does not provide any instance-level features to you. It also does not guarantee the location of the databases that the server manages. This means that you can have your databases or elastic pools located in a region different from the region in which you deployed your SQL Database server. Also, this SQL Database server is different from a managed instance server because, in the case of a managed instance, all databases are located in the same region in which you deployed the managed instance.

Use the following procedure for creating a single database with a new SQL Database server:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the top-left corner of the Azure Portal, click Create A Resource.
  3. On the New panel, under the Azure Marketplace column, click Databases. In the Featured column, click SQL Database.
  4. On the Create SQL Database blade, in the Resource Group, click the Create New Link below the drop-down menu. In the pop-up dialog, type a name for the new Resource Group. Alternatively, you can select an existing Resource Group from the drop-down menu.
  5. In the Database Details section, type a name for the new database in the Database Name text box.
  6. In the Server Option section, click the Create New link below the Select A Server drop-down menu to open the New Server panel.
  7. In the New Server panel, shown in Figure 3-6, provide a name for the new SQL Database Server.
    Figure 3-6 Creating a new SQL Database server
  8. Leave the Allow Azure Services To Access Server option selected.
  9. In the New Server panel, provide values for the Admin Username, Password, and Confirm Password fields.
  10. Leave the Location drop-down menu with the default value.
  11. At the bottom of the New Server panel, click the Select button.
  12. On the Create SQL Database blade, in the Database Details section, ensure that the Want To Use SQL Elastic Pool? option is set to No. You only use this option when configuring an elastic pool.
  13. In the Compute + Storage section, select the Standard S0 service tier.
  14. Click the Next: Additional Settings >> button at the bottom of the Create SQL Database blade.
  15. On the Additional Settings tab, in the Data Source section, ensure that the None option is selected. Alternatively, you can create a database from an existing backup in your subscription. You can also create the AdventureWorksLT database with sample data.
  16. In the Database Collation section, leave the SQL_Latin1_General_CP1_CI_AS option selected. In this section, you can configure any other collation that your database or application may require.
  17. Click the Review + Create button at the bottom of the Create SQL Database blade.
  18. On the Review + Create tab, make sure that all settings are correct, and click the Create button at the bottom of the blade.

Once you have created your database, you need to configure a server-level firewall rule. You need to do this because when you create a SQL Database server, Azure doesn’t allow any external clients to connect to the server. You need to allow the connections from your computer’s IP to access your database. Because you kept the Allow Azure Services To Access Server option selected during the Azure SQL Database server creation, you don’t need to explicitly grant access to any Azure service, such as App Services or Azure Functions that may need to access this database. You can create the appropriate client-side firewall rule using Azure Portal or Azure Data Studio. Use the following procedure for creating a server-side firewall rule for any network or host address:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the Search box at the top of the Azure Portal, type the name of your SQL Database.
  3. On the Overview panel of your SQL Database, click the Set Server Firewall button on the top side of the blade.
  4. On the Firewall Settings blade, click the Add Client IP button located on the top bar of the blade to add the IP address from which you are connected to the Azure Portal.
  5. Click the Save button at the top of the blade.

Once you have created the server-side firewall rule, you can connect to your SQL Database, using your preferred SQL management tool or IDE, such as Azure Data Studio, SQL Server Management Studio, Visual Studio, or Visual Studio Code.

Another exciting feature that you should consider when working with the SQL Database service is the ability to create backups of your databases automatically. The retention period of these backups goes from 7 to 35 days, depending on the purchase model and the service tier that you choose for your database. You can configure this retention policy at the SQL Database server level. Use the following procedure for configuring the retention policy for your SQL Database server:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the Search box at the top of the Azure Portal, type the name of your SQL Database server.
  3. In the Settings section, click the Manage Backups option.
  4. On the Manage Backups panel, click the Configure Retention button in the top-left corner of the panel.
  5. On the Configure Policies panel, in the Point In Time Restore Configuration drop-down menu, select the number of days that you want to keep your Point In Time Restore (PITR) backups.
  6. Click the Apply button at the bottom of the Configure Policies panel.

Need More Review?: Long-Term Backups

The retention time offered by the automatically created point-in-time restore backups may not be sufficient for your company. In those situations, you can configure long-term retention policies for storing full backups of the databases in a separate Storage Account for up to ten years. You can review how to configure these long-term backups by consulting the article "Store Azure SQL Database backups for up to 10 years" at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-long-term-retention

Need More Review?: Restore a Database

You can use the point-in-time restore backups for restoring a previous version of your databases or for restoring a database that you deleted by accident. You can review the different options for restoring your databases in the article "Recover an Azure SQL Database Using Automated Database Backups" at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-recovery-using-backups

Configure elastic pools for Azure SQL Database

When you configure a single database in the SQL Database service, you reserve a group of resources for your database. If you deploy several databases, you reserve independent groups of resources, one for each database. This approach could lead to a waste of resources if the workload for your database is unpredictable. If you provide too many DTUs for the peak usage of your database, you can waste resources when your database is in the valley usage periods. If you decide to provide fewer resources to your database, you can face a situation in which your database doesn’t have enough resources to perform correctly.

A solution to this problem is to use elastic pools. This deployment option allows you to allocate elastic Database Transaction Units (eDTUs) or vCore and to put several databases in the elastic pool. When you add a database to the elastic pool, you configure a minimum and maximum amount of resources for the database. The advantage of using a database in an elastic pool is that the database consumes resources based on its real usage. This means that if the database is under a heavy load, it can consume more resources; if the database is under a low load, it consumes fewer resources. This also means that if the database is not being used at all, it doesn’t consume any resources. The real advantage of the elastic pool comes when you put more than one database in the elastic pool.

You can add or remove resources for your database in an elastic pool with no downtime of the database (unless you need to add more resources to the elastic pool). If you need to change service tiers, then you might experience a little downtime because SQL Database service creates a new instance on the new service tier. Then the SQL Database service copies all the data to the new instance. Once the data is completely synced, the service switches the routing of the connections from the old instance to the new one.

You can use the following procedure for creating a new elastic pool and adding an existing database to your elastic pool:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the top-left corner of the Azure Portal, click Create A Resource.
  3. On the New panel, in the Search The Marketplace text box, type sql elastic and select SQL Elastic Database Pool from the list of results.
  4. On the SQL Elastic Database Pool panel, click the Create button.
  5. On the Elastic Pool panel, type a name for the elastic pool in the Name text box.
  6. In the Resource Group property drop-down control, click the Create New link. In the pop-up dialog, type a name for the new Resource Group. Alternatively, you can select an existing Resource Group from the drop-down menu.
  7. From the Server Property drop-down menu, select the SQL Database server that you created in the previous procedure.
  8. Click the Configure Pool property to open the Configure Panel, as shown in Figure 3-7, where you can select the service tier for this elastic pool.
    Figure 3-7 Configuring elastic pool service tier
  9. Click the Databases tab and then click the Add Databases button.
  10. In the Add Databases panel, select the database that you want to add to this elastic pool. The database that you created in the previous procedure should appear in the list of databases.
  11. Click the Apply button at the bottom of the Add Databases panel.
  12. Click the Apply button at the bottom of the Configure panel.
  13. Click the Create button at the bottom of the Elastic Pool panel.

Once you have created your elastic pool, you can configure the upper and lower limits of the resources that you want to assign to each database in the elastic pool. You must configure this limit for all databases together; you may not configure this limit per database. This means that if you set a lower limit of 10 DTUs and an upper limit of 20 DTUs, these limits apply to all databases in the elastic pool. Use the following procedure for configuring the per-database limit in your elastic pool:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the Search box at the top of the Azure Portal, type the name of your SQL Elastic Database Pool.
  3. On the SQL Elastic Pool blade, click the Configure option under the Settings section.
  4. On the Configure panel, click the Per Database Settings tab.
  5. Move the slider control to adjust your Per Database limits. Move the left side to adjust the Minimum limit and use the right side to select the Maximum limit.
  6. Click the Save button on the top-left side of the Configure panel.

Need More Review?: Configure Limits

When you are configuring the per-database settings, you cannot use arbitrary values for these limits. The values that you can use for these limits depend on the resources that you configure in the service tier. You can review the full list of available limit values in these articles:

"Resource Limits for Elastic Pools Using the DTU-Based Purchasing Model" at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-dtu-resource-limits-elastic-pools

“Resource Limits for Elastic Pools Using the vCore-Based

Purchasing Model Limits” at https://docs.microsoft.com/enus/azure/sql-database/sql-database-vcore-resource-limits-elasticpools

Create, read, update, and delete data tables by using code

Working with an SQL Database from your code works the same way as working with any other database hosted on an on-premises SQL Server instance. You can use the following drivers for accessing the database from your code: ADO.NET, ODBC, JDBC, and PDO (PHP). You can also work with Object-Relational Mapping frameworks like Entity Framework/Entity Framework Core or Java Hibernate. The following example shows how to connect to your SQL Database from your .NET Core code using ADO.NET. To run this example, you need:

Visual Studio 2017 installation You can also use a Visual Studio Code installation.

SQL Database You can use a single database or a database included in an elastic pool.

For the sake of simplicity, this example uses ADO.NET. Using other ORM frameworks like Entity Framework doesn’t require any special consideration. You only need to use the connection string copied from your SQL Database:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the Search box at the top of the Azure Portal, type the name of your SQL Database.
  3. Click the Properties option in the Settings section.
  4. On the Properties panel, click the Show Databases Connection Strings link.
  5. On the Databases Connection Strings panel, copy the connection string that appears on the ADO.NET tab. You will use this connection string later in the code.
  6. Open your Visual Studio 2017 installation.
  7. Click File > New > Project.
  8. On the New Project window, on the navigation tree at the left, select Installed > Visual C# > .NET Core.
  9. In the template list in the middle of the window, select Console App (.NET Core).
  10. At the bottom of the window, type a name for the project and a name for the solution.
  11. Select the location in which you want to save your solution.
  12. Click the OK button in the bottom-right corner of the window.
  13. On the Visual Studio window, click Tools > NuGet Package Manager > Manage NuGet Packages For Solution.
  14. On the NuGet tab, click the Browse tab.
  15. In the Search box, type Data.SqlClient and press the Enter key to install the System.Data.SqlClient NuGet package.
  16. Replace the contents of the Program.cs file with the contents of Listing 3-13.

Listing 3-13 Connect to your SQL Database. Program.cs

	
//C# .NET Core.
using System;
using System.Data.SqlClient;
namespace ch3_3_3
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("SQL Database connection Demo!");
try
{
Program p = new Program();
p.StartADOConnectionDemo();
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
}
private void StartADOConnectionDemo()
{
try
{
string your_username = "<your_db_admin_username>";
string your_password = "<your_db_admin_password>";
string connectionString = $"<your_ADO.NET_connection_string>";
using (var connection = new SqlConnection(connectionString))
{
connection.Open();
Console.WriteLine("Creating tables with ADO");
using (var command = new SqlCommand(ADO_CreateTables(), connection))
{
int rowsAffected = command.ExecuteNonQuery();
Console.WriteLine($"Number of rows affected: {rowsAffected}");
}
Console.WriteLine("========================");
Console.WriteLine("Press any key to continue");
Console.ReadLine();
Console.WriteLine("Adding data to the tables with ADO");
using (var command = new SqlCommand(ADO_Inserts(), connection))
{
int rowsAffected = command.ExecuteNonQuery();
Console.WriteLine($"Number of rows affected: {rowsAffected}");
}
Console.WriteLine("========================");
Console.WriteLine("Press any key to continue");
Console.ReadLine();
Console.WriteLine("Updating data with ADO");
using (var command = new SqlCommand(ADO_UpdateJoin(), connection))
{
command.Parameters.AddWithValue("@csharpParmDepartmentName",
"Accounting");
int rowsAffected = command.ExecuteNonQuery();
Console.WriteLine($"Number of rows affected: {rowsAffected}");
}
Console.WriteLine("========================");
Console.WriteLine("Press any key to continue");
Console.ReadLine();
Console.WriteLine("Deleting data from tables with ADO");
using (var command = new SqlCommand(ADO_DeleteJoin(), connection))
{
command.Parameters.AddWithValue("@csharpParmDepartmentName",
"Legal");
int rowsAffected = command.ExecuteNonQuery();
Console.WriteLine($"Number of rows affected: {rowsAffected}");
}
Console.WriteLine("========================");
Console.WriteLine("Press any key to continue");
Console.ReadLine();
Console.WriteLine("Reading data from tables with ADO");
using (var command = new SqlCommand(ADO_SelectEmployees(),
connection))
{
using (SqlDataReader reader = command.ExecuteReader())
{
while (reader.Read())
{
Console.WriteLine($"{reader.GetGuid(0)} , " +
$"{reader.GetString(1)} , " +
$"{reader.GetInt32(2)} , " +
$"{reader?.GetString(3)} ," +
$"{reader?.GetString(4)}");
}
}
}
Console.WriteLine("========================");
Console.WriteLine("Press any key to continue");
Console.ReadLine();
}
}
catch (SqlException ex)
{
Console.WriteLine(ex.ToString());
}
}
static string ADO_CreateTables()
{
return @"
DROP TABLE IF EXISTS tabEmployee;
DROP TABLE IF EXISTS tabDepartment; -- Drop parent table last.
CREATE TABLE tabDepartment
(
DepartmentCode nchar(4) not null PRIMARY KEY,
DepartmentName nvarchar(128) not null
);
CREATE TABLE tabEmployee
(
EmployeeGuid uniqueIdentifier not null default NewId()
PRIMARY KEY,
EmployeeName nvarchar(128) not null,
EmployeeLevel int not null,
DepartmentCode nchar(4) null
REFERENCES tabDepartment (DepartmentCode) -- (REFERENCES would be disallowed on temporary tables.)
);
";
}
static string ADO_Inserts()
{
return @"
-- The company has these departments.
INSERT INTO tabDepartment (DepartmentCode, DepartmentName)
VALUES
('acct', 'Accounting'),
('hres', 'Human Resources'),
('legl', 'Legal');
-- The company has these employees, each in one department.
INSERT INTO tabEmployee (EmployeeName, EmployeeLevel, DepartmentCode)
VALUES
('Alison' , 19, 'acct'),
('Barbara' , 17, 'hres'),
('Carol' , 21, 'acct'),
('Deborah' , 24, 'legl'),
('Elle' , 15, null);
";
}
static string ADO_UpdateJoin()
{
return @"
DECLARE @DName1 nvarchar(128) = @csharpParmDepartmentName;
--'Accounting';
-- Promote everyone in one department (see @parm...).
UPDATE empl
SET
empl.EmployeeLevel += 1
FROM
tabEmployee as empl
INNER JOIN
tabDepartment as dept ON dept.DepartmentCode = empl.DepartmentCode
WHERE
dept.DepartmentName = @DName1;
";
}
static string ADO_DeleteJoin()
{
return @"
DECLARE @DName2 nvarchar(128);
SET @DName2 = @csharpParmDepartmentName; --'Legal';
-- Right size the Legal department.
DELETE empl
FROM
tabEmployee as empl
INNER JOIN
tabDepartment as dept ON dept.DepartmentCode = empl.DepartmentCode
WHERE
dept.DepartmentName = @DName2;
-- Disband the Legal department.
DELETE tabDepartment
WHERE DepartmentName = @DName2;
";
}
static string ADO_SelectEmployees()
{
return @"
-- Look at all the final Employees.
SELECT
empl.EmployeeGuid,
empl.EmployeeName,
empl.EmployeeLevel,
empl.DepartmentCode,
dept.DepartmentName
FROM
tabEmployee as empl
LEFT OUTER JOIN
tabDepartment as dept ON dept.DepartmentCode = empl.DepartmentCode
ORDER BY
EmployeeName;
";
}
}
}
	

As you can see in this example, the code that you use for connecting to your Azure SQL Database is the same that you use for connecting to a database hosted in an on-premises SQL Server instance. Migrating your code for connecting from your on-premises database to a cloud-based SQL database is a straightforward process.
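
The same is true if you prefer an ORM: switching to Entity Framework Core mainly means reusing the ADO.NET connection string that you copied from the Azure Portal. The following is a minimal, illustrative sketch rather than part of the chapter's example; the entity and context names are assumptions, and it assumes that the Microsoft.EntityFrameworkCore.SqlServer NuGet package is installed:

	
//C# .NET Core. Requires the Microsoft.EntityFrameworkCore.SqlServer NuGet package.
using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using Microsoft.EntityFrameworkCore;
namespace ch3_3_3
{
[Table("tabEmployee")]
public class Employee
{
[Key]
public Guid EmployeeGuid { get; set; }
public string EmployeeName { get; set; }
public int EmployeeLevel { get; set; }
public string DepartmentCode { get; set; }
}
public class CompanyContext : DbContext
{
public DbSet<Employee> Employees { get; set; }
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
//Reuse the same ADO.NET connection string that you copied from the Azure Portal.
optionsBuilder.UseSqlServer("<your_ADO.NET_connection_string>");
}
}
}
	

With this context in place, a call such as new CompanyContext().Employees.ToList() issues the same kind of SELECT statement that the ADO.NET example builds by hand.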

Need More Review?: Authentication

In the previous example, you connected to your Azure SQL Database using SQL authentication. The Azure SQL Database service also allows you to authenticate using Azure Active Directory users. You can review how to configure the integration between Azure SQL Database and Azure Active Directory by consulting the Microsoft Docs tutorial at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-security-tutorial

Skill 3.4: Develop solutions that use blob storage

Storing information in SQL or NoSQL databases is a great way to save that information when you need to save schemaless documents or if you need to guarantee the integrity of the data. The drawback of these services is that they are relatively expensive for storing data that doesn’t have such requirements.

Azure Blob Storage allows you to store in the cloud the information that doesn't fit the characteristics of SQL and NoSQL storage. This information can be images, videos, office documents, and more. Azure Blob Storage still provides high availability features that make it an ideal service for storing a large amount of data, but at a lower price compared to the other data storage solutions that we reviewed earlier in this chapter.

This skill covers how to:

  • Move blob storage items between Storage Accounts or containers
  • Set and retrieve properties and metadata
  • Implement blob leasing
  • Implement data archiving and retention

Move items in Blob storage between Storage Accounts or containers

When you are working with Azure Blob storage, there can be situations in which you may need to move blobs from one Storage Account to another or between containers. For particular situations, you can use the azCopy command-line tool for performing these tasks. This tool is ideal for doing incremental copy scenarios or copying an entire account into another account. You can use the following command for copying blob items between containers in different Storage Accounts:

	
azcopy copy <URL_Source_Item><Source_SASToken> <URL_Destination_Item><Destination_SASToken>
	

Although using the azCopy command may be appropriate for some situations, you may need more fine-grained control over the items that you move between containers or even Storage Accounts. The following example, written in .NET Core, shows how to move a blob item between two containers in the same Storage Account and how to move a blob item between two containers in different Storage Accounts. Before you can run this example, you need to create two Storage Accounts with two blob containers. For the sake of simplicity, you should create the two containers with the same name in the two different Storage Accounts. Also, you need to upload two control files as blob items to one of the containers in one Storage Account:

  1. Open Visual Studio Code and create a folder for your project.
  2. In the Visual Studio Code Window, open a new terminal.
  3. Use the following command to install NuGet packages:
  4. 		
    			dotnet add package <NuGet_package_name>
    		
    	
  5. Install the following NuGet packages:
    • Microsoft.Azure.Storage.Blob
    • Microsoft.Azure.Storage.Common
    • Microsoft.Extensions.Configuration
    • Microsoft.Extensions.Configuration.Binder
    • Microsoft.Extensions.Configuration.Json
  6. In the project folder, create a new JSON file and name it AppSettings.json. Copy the content from Listing 3-14 to the JSON file.
  7. Create a C# class file and name it AppSettings.cs.
  8. Replace the contents of the AppSettings.cs file with the contents of Listing 3-15. Change the name of the namespace to match your project’s name.
  9. Create a C# class file and name it Common.cs.
  10. Replace the contents of the Common.cs file with the contents of Listing 3-16.
  11. Change the name of the namespace to match your project’s name.
  12. Replace the contents of the Program.cs file with the contents of Listing 3-17. Change the name of the namespace to match your project's name.
  13. Edit your .csproj project file and add the following code inside the ItemGroup section:
  14. 		
    <None Update="AppSettings.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    		
    	
  15. At this point, you can set some breakpoints in the Program.cs file to see, step by step, how the code moves the blob items between the different containers and Storage Accounts.
  16. In the Visual Studio Code window, press F5 to build and run your code. You can use the Azure Portal or the Microsoft Azure Storage Explorer desktop application to review how your blob items change their locations.

Listing 3-14 AppSettings.json configuration file

	
{
"SourceSASToken": "<SASToken_from_your_first_storage_account>",
"SourceAccountName": "<name_of_your_first_storage_account>",
"SourceContainerName": "<source_container_name>",
"DestinationSASToken": "<SASToken_from_your_second_storage_account>",
"DestinationAccountName": "<name_of_your_second_storage_account>",
"DestinationContainerName": "<destination_container_name>"
}
	

Listing 3-15 AppSettings.cs C# class

	
//C# .NET Core
using Microsoft.Extensions.Configuration;
namespace ch3_4_1
{
public class AppSettings
{
public string SourceSASToken { get; set; }
public string SourceAccountName { get; set; }
public string SourceContainerName { get; set; }
public string DestinationSASToken { get; set; }
public string DestinationAccountName { get; set; }
public string DestinationContainerName { get; set; }
public static AppSettings LoadAppSettings()
{
IConfigurationRoot configRoot = new ConfigurationBuilder()
.AddJsonFile("AppSettings.json",false)
.Build();
AppSettings appSettings = configRoot.Get<AppSettings>();
return appSettings;
}
}
}
	

Listing 3-16 Common.cs C# class

	
//C# .NET Core
using System;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Auth;
using Microsoft.Azure.Storage.Blob;

namespace ch3_4_1
{
    public class Common
    {
        public static CloudBlobClient CreateBlobClientStorageFromSAS(string SAStoken, string accountName)
        {
            CloudStorageAccount storageAccount;
            CloudBlobClient blobClient;
            try
            {
                bool useHttps = true;
                StorageCredentials storageCredentials = new StorageCredentials(SAStoken);
                storageAccount = new CloudStorageAccount(storageCredentials, accountName, null, useHttps);
                blobClient = storageAccount.CreateCloudBlobClient();
            }
            catch (System.Exception)
            {
                throw;
            }
            return blobClient;
        }
    }
}
	

In the following listing, the portions of the code that are significant to the process of working with the Azure Blob Storage service are shown in bold.

Listing 3-17 Program.cs C# class

	
//C# .NET Core
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

namespace ch3_4_1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Move items between Containers Demo!");
            Task.Run(async () => await StartContainersDemo()).Wait();
            Console.WriteLine("Move items between Storage Accounts Demo!");
            Task.Run(async () => await StartAccountDemo()).Wait();
        }

        public static async Task StartContainersDemo()
        {
            string sourceBlobFileName = "<first_control_filename>";
            AppSettings appSettings = AppSettings.LoadAppSettings();

            //Get a cloud client for the source Storage Account
            CloudBlobClient sourceClient = Common.CreateBlobClientStorageFromSAS(
                appSettings.SourceSASToken, appSettings.SourceAccountName);

            //Get a reference for each container
            var sourceContainerReference = sourceClient.GetContainerReference(
                appSettings.SourceContainerName);
            var destinationContainerReference = sourceClient.GetContainerReference(
                appSettings.DestinationContainerName);

            //Get a reference for the source blob
            var sourceBlobReference = sourceContainerReference.GetBlockBlobReference(
                sourceBlobFileName);
            var destinationBlobReference = destinationContainerReference.GetBlockBlobReference(
                sourceBlobFileName);

            //Move the blob from the source container to the destination container
            await destinationBlobReference.StartCopyAsync(sourceBlobReference);
            await sourceBlobReference.DeleteAsync();
        }

        public static async Task StartAccountDemo()
        {
            string sourceBlobFileName = "<second_control_filename>";
            AppSettings appSettings = AppSettings.LoadAppSettings();

            //Get a cloud client for the source Storage Account
            CloudBlobClient sourceClient = Common.CreateBlobClientStorageFromSAS(
                appSettings.SourceSASToken, appSettings.SourceAccountName);

            //Get a cloud client for the destination Storage Account
            CloudBlobClient destinationClient = Common.CreateBlobClientStorageFromSAS(
                appSettings.DestinationSASToken, appSettings.DestinationAccountName);

            //Get a reference for each container
            var sourceContainerReference = sourceClient.GetContainerReference(
                appSettings.SourceContainerName);
            var destinationContainerReference = destinationClient.GetContainerReference(
                appSettings.DestinationContainerName);

            //Get a reference for the source blob
            var sourceBlobReference = sourceContainerReference.GetBlockBlobReference(
                sourceBlobFileName);
            var destinationBlobReference = destinationContainerReference.GetBlockBlobReference(
                sourceBlobFileName);

            //Move the blob from the source container to the destination container
            await destinationBlobReference.StartCopyAsync(sourceBlobReference);
            await sourceBlobReference.DeleteAsync();
        }
    }
}
	

In this example, you made two different movements: one between containers in the same Storage Account, and another between containers in different Storage Accounts. As you can see in the code shown in Listing 3-17, the high-level procedure for moving blob items between containers is as follows:

  1. Create a CloudBlobClient instance for each Storage Account that is involved in the blob item movement.
  2. Create a reference for each container. If you need to move a blob item between containers in a different Storage Account, you need to use the CloudBlobClient object that represents each Storage Account.
  3. Create a reference for each blob item. You need a reference to the source blob item because this is the item that you are going to move. You use the destination blob item reference for performing the actual copy operation.
  4. Once you are done with the copy, you can delete the source blob item by using the DeleteAsync() method.

Although this code is quite straightforward, it has a critical problem that we solve in the following sections: if someone else modifies the source blob item while the copy operation is pending, the copy operation fails with an HTTP status code 412.

Need More Review?: Cross-Account Blob Copy

You can review the details of how the asynchronous copy between Storage Accounts works by reading the MSDN article “Introducing Asynchronous Cross-Account Copy Blob” at https://blogs.msdn.microsoft.com/windowsazurestorage/2012/06/12/introducing-asynchronous-cross-account-copy-blob/

Set and retrieve properties and metadata

When you work with Azure Storage services, you can work with some additional information assigned to your blobs. This additional information is stored in the form of system properties and user-defined metadata:

System properties This is information that the Storage service automatically adds to each storage resource. You can modify some of these system properties, while others are read-only. Some of these system properties correspond to certain HTTP headers. You don’t need to worry about maintaining these system properties because the Azure Storage client libraries automatically make any needed modifications for you.

User-defined metadata You can assign key-value pairs to an Azure Storage resource. This metadata is for your own purposes and doesn’t affect the behavior of the Azure Storage service. You are responsible for updating the values of this metadata according to your needs.

Listing 3-18 shows how to create a new container and get a list of some system properties assigned automatically to the container when you create it.

Listing 3-18 Getting system properties from a storage resource

	
//C# .NET Core
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Storage.Blob;

namespace ch3_4_2
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Getting System properties Demo!");
            AppSettings appSettings = AppSettings.LoadAppSettings();

            //Create a CloudBlobClient for working with the Storage Account
            CloudBlobClient blobClient = Common.CreateBlobClientStorageFromSAS(
                appSettings.SASToken, appSettings.AccountName);

            //Get a container reference for the new container
            CloudBlobContainer container = blobClient.GetContainerReference("container-demo");

            //Create the container if it does not already exist, and wait for the
            //operation to finish before reading the container attributes
            container.CreateIfNotExistsAsync().Wait();

            //You need to fetch the container attributes before getting their values
            container.FetchAttributes();
            Console.WriteLine($"Properties for container {container.StorageUri.PrimaryUri.ToString()}");
            System.Console.WriteLine($"ETag: {container.Properties.ETag}");
            System.Console.WriteLine($"LastModifiedUTC: {container.Properties.LastModified.ToString()}");
            System.Console.WriteLine($"Lease status: {container.Properties.LeaseStatus.ToString()}");
            System.Console.WriteLine();
        }
    }
}
	

As you can see in the previous code in Listing 3-18, you need to call the FetchAttributes() or FetchAttributesAsync() method before you can read the properties of the container, which are stored in the Properties property of the CloudBlobContainer or CloudBlockBlob object. If you get null values for system properties, ensure that you called the FetchAttributes() method before accessing the system property.

Working with user-defined metadata is quite similar to working with system properties. The main difference is that you can add your own custom key-value pairs to the storage resource. This user-defined metadata is stored in the Metadata property of the storage resource. Listing 3-19 extends the example in Listing 3-18 and shows how to set and read user-defined metadata in the container that you created in Listing 3-18.

Listing 3-19 Setting user-defined metadata

	
//C# .NET Core
//Add some metadata to the container that we created before
container.Metadata.Add("department", "Technical");
container.Metadata["category"] = "Knowledge Base";
container.Metadata.Add("docType", "pdfDocuments");

//Save the container's metadata in Azure
container.SetMetadata();

//List the newly added metadata. We need to fetch all attributes before
//reading them; otherwise, we could get null or stale values.
container.FetchAttributes();
System.Console.WriteLine("Container's metadata:");
foreach (var item in container.Metadata)
{
    System.Console.WriteLine($"\tKey: {item.Key}");
    System.Console.WriteLine($"\tValue: {item.Value}");
}
	

You can find a complete list of system properties in the Microsoft.Azure.Storage.Blob .NET client reference at https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.storage.blob.blobcontainerproperties. The BlobContainerProperties and BlobProperties classes are responsible for storing the system properties of the storage resources in a Blob Storage account.

You can also view and edit system properties and user-defined metadata by using the Azure Portal: use the Properties and Metadata sections under the Settings section of your container, or click the ellipsis next to a blob item and select Blob Properties in the contextual menu.
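Setting and reading properties and metadata at the blob level follows the same pattern as at the container level. The following short sketch is not part of the book’s listings; it extends Listing 3-19 and assumes that a blob with the hypothetical name report.pdf already exists in the container-demo container created in Listing 3-18:

//C# .NET Core
//Hypothetical blob name, used only for illustration
CloudBlockBlob blob = container.GetBlockBlobReference("report.pdf");

//Read the current system properties and metadata of the blob
blob.FetchAttributes();
System.Console.WriteLine($"Content type: {blob.Properties.ContentType}");

//Add or update user-defined metadata and persist it in Azure
blob.Metadata["reviewed"] = "true";
blob.SetMetadata();

As with containers, if you skip the FetchAttributes() call, the Properties and Metadata collections of the blob may contain null or stale values.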

Implement blob leasing

When you are working with the Blob Storage service, several users or processes can access the same Storage Account simultaneously, so you can face a problem when two of them try to access the same blob. Azure provides a leasing mechanism for solving this kind of situation. A lease is a lock that the Blob service places on a blob or container to grant exclusive access to that item. When you acquire a lease on a blob, you get exclusive write and delete access to that blob. If you acquire a lease on a container, you get exclusive delete access to the container.

When you acquire a lease on a storage item, you need to include the active lease ID with each write operation that you want to perform on the leased blob. You choose the duration of the lease when you request it; the duration can be between 15 and 60 seconds, or infinite. Each lease can be in one of the following five states:

Available The lease is unlocked, and you can acquire a new lease.

Leased There is a lease granted to the resource and the lease is locked. You can acquire a new lease if you use the same ID that you got when you created the lease. You can also release, change, renew, or break the lease when it is in this status.

Expired The duration configured for the lease has expired. When the lease is in this state, you can acquire, renew, release, or break the lease.

Breaking You have broken the lease, but it’s still locked until the break period expires. In this state, you can release or break the lease.

Broken The break period has expired, and the lease has been broken. In this state, you can acquire, release, and break a lease. You need to break a lease when the process that acquired the lease finishes unexpectedly, such as when network connectivity issues or any other condition results in the lease not being released correctly. In these situations, you may end up with an orphaned lease, and you cannot write to or delete the blob that holds the orphaned lease. In this situation, the only solution is to break the lease. You may also want to break a lease when you need to force the release of the lease manually.
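The following is a minimal sketch, not part of the original listings, of how you might break an orphaned lease from code. It assumes that sourceBlobReference is a CloudBlockBlob obtained as in Listing 3-17, that the blob has a lease that was never released, and that the code runs inside an async method:

//C# .NET Core
//Check the current lease state of the blob
await sourceBlobReference.FetchAttributesAsync();
Console.WriteLine($"Lease state: {sourceBlobReference.Properties.LeaseState}");

//Break the lease immediately (a break period of zero seconds).
//Once the break period expires, the blob can be written to or deleted again.
await sourceBlobReference.BreakLeaseAsync(TimeSpan.Zero);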

You can use the Azure Portal to manage the lease status of a container or blob item, or you can manage it programmatically with the Azure Blob Storage client SDK. In the example shown in Listings 3-14 to 3-17, in which we reviewed how to move items between containers or Storage Accounts, we saw that if some other process or user modifies the blob while our process is copying the data, we get an error. You can avoid that situation by acquiring a lease for the blob that you want to move. Listing 3-20 shows the modifications that you need to make to the code in Listing 3-17 so that you acquire a lease for the blob item.

Listing 3-20 Program.cs modification

	
//C# .NET Core
//Add the lines related to the lease to the StartContainersDemo method from Listing 3-17
public static async Task StartContainersDemo()
{
    string sourceBlobFileName = "prueba.pdf";
    AppSettings appSettings = AppSettings.LoadAppSettings();

    //Get a cloud client for the source Storage Account
    CloudBlobClient sourceClient = Common.CreateBlobClientStorageFromSAS(
        appSettings.SourceSASToken, appSettings.SourceAccountName);

    //Get a reference for each container
    var sourceContainerReference = sourceClient.GetContainerReference(
        appSettings.SourceContainerName);
    var destinationContainerReference = sourceClient.GetContainerReference(
        appSettings.DestinationContainerName);

    //Get a reference for the source blob
    var sourceBlobReference = sourceContainerReference.GetBlockBlobReference(
        sourceBlobFileName);
    var destinationBlobReference = destinationContainerReference.GetBlockBlobReference(
        sourceBlobFileName);

    //Get the lease status of the source blob
    await sourceBlobReference.FetchAttributesAsync();
    System.Console.WriteLine($"Lease status: {sourceBlobReference.Properties.LeaseStatus}" +
        $"\tstate: {sourceBlobReference.Properties.LeaseState}" +
        $"\tduration: {sourceBlobReference.Properties.LeaseDuration}");

    //Acquire an infinite lease. If you want to set a duration for the lease, use
    //TimeSpan.FromSeconds(seconds). Remember that seconds should be a value
    //between 15 and 60.
    //We propose our own lease ID and save it so that we can release the lease later.
    string leaseID = Guid.NewGuid().ToString();
    await sourceBlobReference.AcquireLeaseAsync(null, leaseID);

    await sourceBlobReference.FetchAttributesAsync();
    System.Console.WriteLine($"Lease status: {sourceBlobReference.Properties.LeaseStatus}" +
        $"\tstate: {sourceBlobReference.Properties.LeaseState}" +
        $"\tduration: {sourceBlobReference.Properties.LeaseDuration}");

    //Copy the blob from the source container to the destination container
    await destinationBlobReference.StartCopyAsync(sourceBlobReference);

    //Release the infinite lease before deleting the source blob; a leased blob
    //cannot be deleted without providing the active lease ID.
    await sourceBlobReference.ReleaseLeaseAsync(AccessCondition.GenerateLeaseCondition(leaseID));

    await sourceBlobReference.FetchAttributesAsync();
    System.Console.WriteLine($"Lease status: {sourceBlobReference.Properties.LeaseStatus}" +
        $"\tstate: {sourceBlobReference.Properties.LeaseState}" +
        $"\tduration: {sourceBlobReference.Properties.LeaseDuration}");

    //Delete the source blob to complete the move
    await sourceBlobReference.DeleteAsync();
}
	

As you can see in the previous example, you use the AcquireLeaseAsync() method to acquire the lease for the blob. In this case, you create an infinite lease, so you need to release it explicitly by using the ReleaseLeaseAsync() method before you delete the source blob.
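If you prefer a timed lease instead of an infinite one, you can keep it alive by renewing it before it expires. The following is a minimal sketch, not part of the original listings; it assumes that sourceBlobReference is a CloudBlockBlob with no active lease, that leaseID is a proposed lease ID string as in Listing 3-20, and that the code runs inside an async method:

//C# .NET Core
//Acquire a 60-second lease using our proposed lease ID
await sourceBlobReference.AcquireLeaseAsync(TimeSpan.FromSeconds(60), leaseID);

//...perform write operations that include the lease ID...

//Renew the lease before it expires; renewal requires the active lease ID
await sourceBlobReference.RenewLeaseAsync(AccessCondition.GenerateLeaseCondition(leaseID));

//Release the lease when you have finished working with the blob
await sourceBlobReference.ReleaseLeaseAsync(AccessCondition.GenerateLeaseCondition(leaseID));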

Need More Review?: Leasing Blobs and Containers

You can review the details of how leasing works for blobs and containers by consulting the following articles:

Lease Blob https://docs.microsoft.com/en-us/rest/api/storageservices/lease-blob

Lease Container https://docs.microsoft.com/en-us/rest/api/storageservices/lease-container

Implement data archiving and retention

When you are working with data, the requirements for accessing that data change during its lifetime. Data that has recently been placed on your storage system is usually accessed more frequently and requires faster access than older data. If you use the same type of storage for all your data, you are paying for high-performance storage, such as SSD-based storage, to hold data that is rarely accessed. A solution to this situation is to move less frequently accessed data to a cheaper storage system. The drawback of this solution is that you need to implement a system for tracking the last time data was accessed and for moving it to the right storage system.

Azure Blob Storage provides you with the ability to set different levels of access to your data. These different access levels, or tiers, provide different levels of performance when accessing the data, and each access tier has a different price. The following are the available access tiers:

Hot You use this tier for data that you need to access more frequently. This is the default tier that you use when you create a new Storage Account.

Cool You can use this tier for data that is less frequently accessed and is stored for at least 30 days.

Archive You use this tier for storing data that is rarely accessed and is stored for at least 180 days. This access tier is available only at the blob level. You cannot configure a Storage Account with this access tier.

The different access tiers have the following performance and pricing implications:

The cool tier provides slightly lower availability, which is reflected in the service-level agreement (SLA), and lower storage costs; however, it has higher access costs.

Hot and cool tiers have similar characteristics in terms of time-to-access and throughput.

Archive storage is offline storage. It has the lowest storage cost rates but has higher access costs.

The lower the storage costs, the higher the access costs.

You can use storage tiering only on General Purpose v2 (GPv2) Storage Accounts.

If you want to use storage tiering with a General Purpose v1 (GPv1) Storage Account, you need to convert it to a GPv2 Storage Account.

Moving between the different access tiers is a transparent process for the user, but it has some pricing implications. In general, when you move from a warmer tier to a cooler tier (hot to cool or hot to archive), you are charged for the write operations to the destination tier. When you move from a cooler tier to a warmer tier (from archive to cool or from cool to hot), you are charged for the read operations from the source tier. Another essential thing to bear in mind is how the data is moved when you change a blob's tier from archive to any other access tier. Because data in the archive tier is saved in offline storage, when you move data out of the archive tier, the storage service needs to move the data back to online storage. This process is known as blob rehydration, and it can take up to 15 hours.
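You can also change the access tier of an individual block blob from code. The following is a minimal sketch, not part of the original listings; it assumes the Microsoft.Azure.Storage.Blob client library used in the earlier listings, a CloudBlockBlob variable with the hypothetical name blobReference, and that the code runs inside an async method:

//C# .NET Core
//Check the current access tier of the blob
await blobReference.FetchAttributesAsync();
Console.WriteLine($"Current tier: {blobReference.Properties.StandardBlobTier}");

//Move the blob to the cool tier. Moving it to archive and back out later
//would trigger blob rehydration, which can take several hours.
await blobReference.SetStandardBlobTierAsync(StandardBlobTier.Cool);

Because setting the tier is a write operation on the blob, the pricing considerations described in the previous paragraph apply to this call as well.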

If you don’t manually configure the access tier for a blob, it inherits the access tier from its container or Storage Account. Although you can change the access tier manually by using the Azure Portal, this process creates administrative overhead that could also lead to human error. Instead of manually monitoring the different criteria for moving a blob from one tier to another, you can implement policies that make that movement based on the criteria that you define. You use these policies for defining the lifecycle management of your data. You can create these lifecycle management policies by using the Azure Portal, Azure PowerShell, Azure CLI, or the REST API.

A lifecycle management policy is a JSON document in which you define several rules that you want to apply to the different containers or blob types. Each rule consists of a filter set and an action set.

Filter set The filter set limits the actions to only a group of items that match the filter criteria.

Action set The action sets define the actions that are performed on the items that matched the filter.

The following procedure shows how to add a new policy by using the Azure Portal:

  1. Sign in to the Azure Portal (https://portal.azure.com).
  2. In the Search box at the top of the Azure Portal, type the name of your Storage Account.
  3. In the Blob Service section, click Lifecycle Management.
  4. Copy the content from Listing 3-21 and paste it into the Lifecycle Management panel.
  5. Click the Save button on the top-left corner of the panel.

Listing 3-21 Lifecycle management policy definition

	
{
  "rules": [
    {
      "enabled": true,
      "name": "rule1",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToCool": {
              "daysAfterModificationGreaterThan": 30
            },
            "tierToArchive": {
              "daysAfterModificationGreaterThan": 90
            },
            "delete": {
              "daysAfterModificationGreaterThan": 2555
            }
          },
          "snapshot": {
            "delete": {
              "daysAfterCreationGreaterThan": 90
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "container-a"
          ]
        }
      }
    }
  ]
}
	

The previous policy applies to all blobs under the container named container-a, as stated by the prefixMatch filter in the filters section. In the actions section, you can see the following:

Blobs that have not been modified for 30 days or more will be moved to the cool tier.

Blobs that have not been modified for 90 days or more will be moved to the archive tier.

Blobs that have not been modified for 2,555 days or more will be deleted from the Storage Account.

Snapshots that are older than 90 days will also be deleted.

The lifecycle management engine processes the policies every 24 hours, which means that you might not see your changes reflected in your Storage Account until several hours after you made them.

Need More Review?: Storage Access Tiers and Lifecycle Management Policies

You can extend your knowledge about storage access tiers and lifecycle management by reviewing the following articles from Microsoft Docs:

Azure Blob Storage: Hot, Cool, and Archive Access Tiers at https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers

Manage the Azure Blob Storage Lifecycle at https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts

Chapter summary

Azure Table Storage provides a NoSQL schemaless storage system, which allows your applications to work with documents called entities.

Each entity can have up to 255 properties, including the PartitionKey, RowKey, and Timestamp system properties.

Entities are stored in tables.

You can use the Table API for Cosmos DB for accessing Azure Table Storage and Cosmos DB services.

The PartitionKey system property defines the partition where the entity will be stored.

Choosing the correct PartitionKey is critical for achieving the right performance level.

You cannot create additional indexes in an Azure Table.

You can work around the custom index limitation by implementing some partitioning patterns.

Cosmos DB is a premium storage service that provides low-latency access to data distributed across the globe.

You can access Cosmos DB using different APIs: SQL, Table, Gremlin (Graph), MongoDB, and Cassandra.

You can create your custom indexes in Cosmos DB.

You can choose the property that is used as the partition key.

You should avoid selecting partition keys that create too many or too few logical partitions.

A logical partition has a limit of 10GB of storage.

Consistency levels define how the data is replicated between the different regions in a Cosmos DB account.

There are five consistency levels: strong, bounded staleness, session, consistent prefix, and eventual.

Strong consistency level provides a higher level of consistency but also has a higher latency.

Eventual consistency level provides lower latency and lower data consistency.

Azure SQL Database offers a managed relational storage solution for your application.

You can purchase resources for your database using two different purchase models: DTU-based and vCore-based.

You can save money by putting several databases with unpredictable workloads into an elastic pool.

If you need to migrate an on-premises SQL Server database, you can achieve a seamless migration using SQL Database–managed instances.

You can move blob items between containers in the same Storage Account or between containers in different Storage Accounts.

The Azure Blob Storage service offers three access tiers with different prices for storing and accessing data.

You can move less frequently accessed data to cool or archive access tiers to save money.

You can automatically manage the movement between access tiers by implementing lifecycle management policies.

Thought experiment

In this thought experiment, you can demonstrate your skills and knowledge about the topics covered in this chapter. You can find the answers to this thought experiment in the next section.

You are developing a web application that needs to work with information whose structure can change during the development process. You need to query this information using different criteria, and you need to ensure that your application returns the results of those queries as fast as possible. You don’t require your application to be globally available.

With this information in mind, answer the following questions:

  1. Which technology should you use? You should select the most cost-effective solution.
  2. How many indexes should you create in your storage system?

Thought experiment answers

This section contains the solutions to the thought experiment.

  1. You should use Azure Table Storage for your application. This is a cost-effective solution that allows you to work with schemaless documents, so you can change the structure of your documents during the development process while minimizing the impact on your data.
  2. Because you should use Azure Table Storage, you cannot define your custom indexes. You can bypass this limitation by using partitioning and implementing the Index Entities pattern.