Chapter 5. Develop for the cloud and for Azure Storage

The Microsoft Azure platform provides a rich set of storage options for microservices-based distributed application architectures, along with built-in capabilities to monitor and autoscale cloud-hosted applications.

As an Azure architect, and for success on the AZ-300 certification exam, you need to understand the available storage solutions and messaging services and know how to choose among them for a given application scenario. You also need to understand the design aspects of developing reliable and resilient cloud applications.

Skills covered in this chapter:

Skill 5.1: Develop solutions that use Cosmos DB Storage

Skill 5.2: Develop solutions that use a relational database

Skill 5.3: Configure a message-based integration architecture

Skill 5.4: Develop for autoscaling

Skill 5.1: Develop solutions that use Cosmos DB Storage

In today's world, many global businesses want to deploy globally distributed applications to achieve low latency, higher throughput, and high availability by putting application instances and databases close to the geographic locations of their user base. Deploying an application across multiple datacenters, however, adds complexity; one example is the burden of upgrading databases without affecting production traffic.

NoSQL databases help alleviate the complexity of database schema management during upgrades. Additionally, for global applications, you need to consider the scalability and availability of the databases. Selecting the right database service based on the nature of your application is a critical design decision.

This skill covers how to:

  • Create and manage Azure Cosmos DB account
  • Manage scalability and implement partitioning schemes for Cosmos DB
  • Set appropriate consistency level for operations
  • Create, read, update, and delete data by appropriate APIs

Create and manage Azure Cosmos DB account

In this section, you learn how to set up an Azure Cosmos DB account and configure its advanced features such as security, business continuity, and disaster recovery.

What is Cosmos DB?

Azure Cosmos DB is Microsoft's globally distributed, multi-model database. Azure Cosmos DB enables you to elastically and independently scale throughput and storage across the globe with guaranteed throughput, latency, availability, and consistency.

The Cosmos DB offers the following benefits:

Guaranteed throughput Cosmos DB guarantees throughput and performance at peak load. The performance level of Cosmos DB can be scaled elastically by setting Request Units (RUs).

Global distribution and Always On With the ability to have multimaster replicas globally and the built-in capability to programmatically (or manually) invoke failover, Cosmos DB enables 99.999% read/write availability around the world. The Cosmos DB multihoming API is an additional feature that lets you configure the application to point to the closest datacenter for low latency and better performance (see the sketch after this list).

Multiple query model or API Cosmos DB supports many APIs to work with the data stored in your Cosmos database. By default, you can use SQL (the default API) for querying your Cosmos database. Cosmos DB also implements APIs for Cassandra, MongoDB, Gremlin, and Azure Table Storage.

Choices of consistency modes The Azure Cosmos DB replication protocol offers five well-defined, practical, and intuitive consistency models. Each model has a trade-off between consistency and performance.

No schema or index management The database engine is entirely schema agnostic. Cosmos DB automatically indexes all data for faster query responses.
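
For example, the multihoming behavior can be configured through the SDK's connection policy. The following is a minimal C# sketch, assuming the Microsoft.Azure.DocumentDB SDK that is used later in this chapter; the endpoint, key, and region choices are placeholders:

	
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Connection policy with preferred (multihoming) locations; the SDK sends requests
// to the first available region in this list and fails over automatically.
var connectionPolicy = new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Direct,
    ConnectionProtocol = Protocol.Tcp
};
connectionPolicy.PreferredLocations.Add(LocationNames.CentralUS);   // closest region first
connectionPolicy.PreferredLocations.Add(LocationNames.NorthEurope); // fallback region

// The endpoint and key values are placeholders for your Cosmos account.
var client = new DocumentClient(
    new Uri("https://<your-account>.documents.azure.com:443/"),
    "<your-account-key>",
    connectionPolicy);
	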

Security and compliance in Azure Cosmos DB

Security has always been a shared responsibility between the customer and a cloud provider. In the case of a platform-as-a-service (PaaS) database offering such as Azure Cosmos DB, the customer’s responsibility to keep the data secure shrinks to some extent as the cloud provider takes on more responsibility. In addition to keeping your data secure in the cloud, the cloud provider also helps customers meet their compliance obligations with the product offerings.

Security Azure Cosmos DB by default provides encryption at rest and in transit for documents and backups in all Azure regions without requiring any configuration from you. The AZ-300 exam expects you to know the ways to secure your data stored in Cosmos DB.

Inbound request filtering The first defense you can turn on is IP address-based access control. By default, all requests are allowed provided they carry a valid authorization token. You can add a single client IP, IP ranges in Classless Inter-Domain Routing (CIDR) notation, or a subnet of a VNet to allow access to the Cosmos DB account. Figure 5-1 shows the Azure portal Cosmos DB blade and how to configure IP-based security for Cosmos DB.

Figure 5-1 Setting up Cosmos account security
Screenshot_127

Fine-Grained Access Azure Cosmos DB uses a hash-based message authentication code (HMAC) to authorize access at the account level or even at the resource level, such as a database, container, or item. Access to the account and resources is granted by either the master key or a resource token. The master key grants full administrative access; the resource token approach follows the fine-grained role-based access control (RBAC) security principle.
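
To illustrate the resource-token approach, the following minimal C# sketch (assuming the Microsoft.Azure.DocumentDB SDK used later in this chapter; the database, container, user, and permission names are hypothetical) creates a user, grants it read-only permission on a single container, and returns the resource token that a client application can use in place of the master key:

	
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static async Task<string> CreateReadOnlyResourceTokenAsync(DocumentClient adminClient)
{
    // The client passed in is assumed to have been created with the master key.
    // Create a user inside the database (this throws if the user already exists;
    // UpsertUserAsync can be used instead in real code).
    await adminClient.CreateUserAsync(
        UriFactory.CreateDatabaseUri("databaseaz300"),
        new User { Id = "reservation-reader" });

    // Look up the container and grant the user read-only access to it.
    DocumentCollection container = await adminClient.ReadDocumentCollectionAsync(
        UriFactory.CreateDocumentCollectionUri("databaseaz300", "FlightReservation"));

    Permission permission = await adminClient.CreatePermissionAsync(
        UriFactory.CreateUserUri("databaseaz300", "reservation-reader"),
        new Permission
        {
            Id = "read-reservations",
            PermissionMode = PermissionMode.Read,
            ResourceLink = container.SelfLink
        });

    // The returned token is valid for a limited time and can be handed to a client app,
    // which then constructs a DocumentClient with it instead of the master key.
    return permission.Token;
}
	

A client created with that token can read the single container it was scoped to but cannot perform administrative operations on the account.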

Need More Review?

Data Security in Azure Cosmos DB

To learn more about the options to set up secure access to Cosmos DB and secure your data, visit the Microsoft docs article "Security in Azure Cosmos DB - overview" at https://docs.microsoft.com/en-us/azure/cosmos-db/database-security/.

Compliance Azure Cosmos DB has major industry certifications to help customers meet their compliance obligations across regulated industries and markets worldwide.

Need More Review?

Compliance Certification

To learn more about the compliance certification of Cosmos DB, please visit the Microsoft docs “Compliance in Azure Cosmos DB” located at https://docs.microsoft.com/en-us/azure/cosmos-db/compliance

Understand the Cosmos account

An Azure Cosmos account is a logical construct that has a globally unique DNS name. For high availability, you can add or remove regions to your Cosmos account at any time. You can also set up multiple master (write) replicas across different regions in your Cosmos account.

You can manage your Cosmos account in an Azure subscription by using the Azure portal, the Azure CLI, or the Az PowerShell module, or you can use the language-specific SDKs. This section describes the fundamental concepts and mechanics of an Azure Cosmos account.

As of the writing of this book, you can create up to 100 Azure Cosmos accounts under one Azure subscription. Under a Cosmos account, you can create one or more Cosmos databases, and within a Cosmos database, you can create one or more containers. The container is where you put your data. A Cosmos DB container is the fundamental unit of scalability: a logical resource composed of one or more partitions.

Figure 5-2 gives you the visual view of what we’ve shared about Cosmos account thus far.

Figure 5-2 Azure Cosmos account entities
Screenshot_128

Create a Cosmos account

To set up a Cosmos account using the Azure portal, use the following steps:

  1. Sign into Azure portal (https://portal.azure.com).
  2. Under your subscription, on the upper-left corner, select Create A Resource and search for Cosmos DB.
  3. Click Create (see Figure 5-3).
  4. Figure 5-3 Creating an Azure Cosmos account
    Screenshot_129
  5. On the create Cosmos DB account page, supply the basic mandatory information, as shown in Figure 5-4.
  6. Subscription The Azure subscription you need to create an account under.

    Resource Group Select or create a resource group.

    Account Name Enter the name of the new Cosmos account. Azure appends documents.azure.com to the name you provide to construct a unique URI.

    API The API determines the type of account to create. Azure Cosmos DB provides five APIs: Core (SQL) and MongoDB for document data, Gremlin for graph data, Azure Table, and Cassandra. You must create a separate Cosmos account for each API. Core (SQL) creates a document database and supports querying by using SQL syntax.

    Location Choose the geographic location you need to host your Cosmos account.

    Figure 5-4 Create an Azure Cosmos Account wizard
    Screenshot_130
  7. You can skip the Network and Tags sections and click Review + Create. It takes a few minutes for the deployment to complete. You can see the Cosmos DB account created under the resource group resources.

Global distribution and multiple write replicas

To set up the global distribution of your Cosmos databases and enable multiple write replicas across regions, use the following steps:

  1. Go to your Cosmos account and open the Replicate Data Globally menu (see Figure 5-5). You can either add a region by selecting the hexagon icon of your desired region on the map or choose a region from the drop-down menu after you click +Add Region on the right side.
  2. Figure 5-5 Turnkey, Global Distribution
    Screenshot_131
  3. To remove regions, clear one or more regions from the map by selecting the blue hexagons with check marks.
  4. Click Save to commit the changes.

Important

Azure Cosmos DB Multi-Write Replicas

You cannot turn off or disable multi-region writes after the feature is enabled.

Business continuity and disaster recovery

Business continuity and disaster recovery are critical factors when moving to the cloud. Cosmos DB global distribution, with the ability to have multiple read/write replicas, gives you the option to either automate failover to a secondary database in case of a regional outage or perform it manually when needed. Cosmos DB also automatically backs up your database every four hours and stores the backup in GRS blob storage for disaster recovery (DR). At any given time, at least the last two backups are retained for 30 days.

Need More Review?

Azure Cosmos DB Backup and Restore

To learn more about the backup and restore procedure, please visit the Microsoft docs article "Online backup and on-demand data restore in Azure Cosmos DB" located at https://docs.microsoft.com/en-us/azure/cosmos-db/online-backup-and-restore#options-to-manage-your-own-backups.

Manage scalability and implement partitioning schemes for Cosmos DB

Azure Cosmos DB uses horizontal partitioning to scale individual containers, not only in terms of storage but also in terms of throughput. As the throughput and storage requirements of an application increase, Azure Cosmos DB transparently moves partitions to automatically spread the load across a larger number of physical servers to satisfy the scalability and performance needs of a container in the database.

Azure Cosmos DB uses hash-based partitioning to spread logical partitions across physical partitions. Queries that access data within a single logical partition are more cost-effective than queries that access multiple partitions. You must be mindful when choosing a partition key so that you can query data efficiently and avoid "hot spots" within a partition.

Following are the key considerations for a partition key:

As of the writing of this book, a single partition has an upper limit of 10 GB of storage.

Azure Cosmos DB containers have a minimum throughput of 400 request units per second (RU/s). RUs are a blended measure of computational cost (CPU, memory, disk I/O, network I/O); 1 RU corresponds to the throughput of reading a 1 KB document. All requests on the partition are charged in the form of RUs. If a request exceeds the provisioned RUs on the partition, Cosmos DB throws a RequestRateTooLargeException with HTTP status code 429 (a retry sketch follows this list).

Choose a partition key that has a wide range of values and access patterns that are evenly spread across logical partitions.

Candidates for partition keys might include properties that frequently appear as a filter in your queries.
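
When a request is throttled with status code 429, the response includes a retry-after interval. The SDK retries throttled requests automatically, but the following minimal C# sketch (assuming the Microsoft.Azure.DocumentDB SDK; the method and its parameters are hypothetical) makes the pattern explicit:

	
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static async Task WriteWithRetryAsync(DocumentClient client, Uri collectionUri, object item)
{
    while (true)
    {
        try
        {
            await client.CreateDocumentAsync(collectionUri, item);
            return;
        }
        catch (DocumentClientException ex) when (ex.StatusCode == (HttpStatusCode)429)
        {
            // The exception carries the interval the service asks us to wait before retrying.
            await Task.Delay(ex.RetryAfter);
        }
    }
}
	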

Need More Review?

Best Practices to Choose a Partition Key

To learn more about best practices to choose partition key, visit the Microsoft docs “Create a synthetic partition key” at https://docs.microsoft.com/en-us/azure/cosmos-db/synthetic-partition-keys

Cross-partition query

Azure Cosmos DB automatically handles queries against a single partition when the partition key is supplied with the request. For example, the following pseudo query is routed to the userID partition that holds all the documents corresponding to the partition key value ID-1.

	
IQueryable<Employee> query = client.CreateDocumentQuery<Employee>(
UriFactory.CreateDocumentCollectionUri("myDatabaseName", "myCollectionName"))
.Where(m => m.name == "singh" && m.userID == "ID-1");
	

The second pseudo query doesn't have a filter on the partition key (userID), so Azure Cosmos DB fans out the query across partitions. The fan-out is done by issuing individual queries to all the partitions, and it's not the default behavior: you have to explicitly enable it in the FeedOptions by setting the EnableCrossPartitionQuery property to true.

	
IQueryable<Employee> crossPartitionQuery = client.CreateDocumentQuery<Employee>(
UriFactory.CreateDocumentCollectionUri("myDatabaseName", "myCollectionName"),
new FeedOptions { EnableCrossPartitionQuery = true })
.Where(m => m.FirstName == "Guru" && m.LastName == "jot");
	

Setting Request Units (RUs) and the partition key for containers using the Azure portal

The throughput and performance of Cosmos DB depend on the Request Units (RUs) and the partition key you specify when creating a container. RUs are a blended measure of CPU, IOPS, and memory required to perform a database operation. Use the following steps to create a container with the required RUs and a partition key (a programmatic equivalent is sketched after these steps).

  1. Log in to the Azure portal and navigate to your Cosmos account under the Resource Group.
  2. On the Data Explorer pane, select New Container (see Figure 5-6).
  3. Figure 5-6 Cosmos DB Data Explorer blade
    Screenshot_132
  4. The screen shown in Figure 5-7 appears; enter the container name and database name.
  5. Figure 5-7 Create a new Cosmos DB wizard
    Screenshot_133
  6. Check the box for Provision Database Throughput and specify RUs according to your scalability need.
  7. Specify the Partition Key (for example, /state/city/zip).
  8. Click OK.
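
If you prefer to script the same setup instead of using the portal, here is a minimal C# sketch, assuming the Microsoft.Azure.DocumentDB SDK; the database name, container name, partition key path, and throughput value are placeholders matching the example used later in this chapter:

	
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static async Task CreateContainerAsync(DocumentClient client)
{
    // Create the database if it doesn't already exist.
    await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "databaseaz300" });

    // Define the container and its partition key path.
    var collection = new DocumentCollection { Id = "FlightReservation" };
    collection.PartitionKey.Paths.Add("/PartitionKey");

    // Provision 400 RU/s, the minimum throughput, on the container.
    await client.CreateDocumentCollectionIfNotExistsAsync(
        UriFactory.CreateDatabaseUri("databaseaz300"),
        collection,
        new RequestOptions { OfferThroughput = 400 });
}
	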

Note

Define a Partition Key for a Container

You can only define a partition key for a container during its creation; it cannot be changed after the container is created, so be very thoughtful when choosing a partition key.

Set appropriate consistency level for operations

In geo-distributed databases, it's likely that you're reading data that isn't the latest version, which is called a dirty read. Data consistency, latency, and performance don't show much of a difference within a single datacenter, because data replication is fast and takes only about a millisecond. However, in a geo-distributed scenario, where data replication can take several hundred milliseconds, the story is different, and the chances of dirty reads increase. Cosmos DB provides the following data consistency options to choose from, with trade-offs among consistency, latency, availability, and performance:

Strong A strong consistency level ensures there are no dirty reads; the client always reads the latest version of committed data across the read replicas in single or multiple regions. The trade-off of the strong consistency option is performance: when you write to the database, Cosmos DB serves the latest write only after it has been committed across all read replicas.

Bounded Staleness The bounded staleness option lets you decide how much data staleness an application can tolerate. You can specify the lag either as a number of versions (updates) of an item by which reads lag behind writes, or as a time interval T. The typical use case for bounded staleness is to guarantee low latency for writes in globally distributed applications.

Session Session consistency ensures that there are no dirty reads in the write region. A session is scoped to a client session, and the client can read what it wrote instead of having to wait for the data to be globally committed.

Consistent Prefix Consistent prefix guarantees that reads are never out of order with respect to writes. For example, if an item in the database was updated three times with versions V1, V2, and V3, the client would always see V1, V1V2, or V1V2V3. The client would never see an out-of-order sequence like V2, V1V3, or V2V1V3.

Eventual You would probably use eventual consistency when you're least worried about the freshness of data across the read replicas and the order of writes over time but need the highest level of availability and the lowest latency.
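
The default consistency is set at the account level (as shown in the steps that follow), but the SDK also lets a client opt into a weaker level than the account default. A minimal C# sketch, assuming the Microsoft.Azure.DocumentDB SDK and placeholder credentials:

	
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

// Request Session consistency for all operations performed by this client.
// A client can only relax (never strengthen) the account's default consistency level.
var client = new DocumentClient(
    new Uri("https://<your-account>.documents.azure.com:443/"),
    "<your-account-key>",
    new ConnectionPolicy(),
    ConsistencyLevel.Session);
	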

To set the desired consistency on the Cosmos DB account, use the following steps:

  1. Log in to the Azure portal and navigate to your Cosmos account under the Resource Group.
  2. On the Data Consistency pane (see Figure 5-8), select the desired consistency from the five consistency levels.
  3. Figure 5-8 Setting Cosmos DB consistency
    Screenshot_134
  4. For bounded staleness, define the lag in time or operations an application can tolerate.
  5. Click Save.

Note

Business Continuity and Disaster Recovery

For high availability, it's recommended that you configure Cosmos DB with multi-region writes (at least two regions). In the event of a regional disruption, the failover is instantaneous, and the application doesn't have to undergo any change; it happens transparently behind the scenes. Also, if you're using the default consistency level of strong, there will not be any data loss before or after the failover. For bounded staleness, you may encounter potential data loss up to the lag (time or operations) you've configured. For the Session, Consistent Prefix, and Eventual consistency options, the data loss could be up to a maximum of five seconds.

Create, read, update, and delete data by appropriate APIs

As mentioned previously, Azure Cosmos DB currently provides five APIs (see Figure 5-9): Core (SQL) and MongoDB for document data, Cassandra, Azure Table, and Gremlin (graph). As of the writing of this book, you can use only one API per Cosmos account.

Figure 5-9 Cosmos DB APIs
Screenshot_135

The choice of which API to use ultimately depends on your use case. You're probably better off selecting the SQL API if your team already has T-SQL skills and you're moving from a relational to a nonrelational database. If you're migrating an existing application that uses MongoDB, you don't need to make any code changes and can continue to use the MongoDB API; the same is true for the Cassandra API. Similarly, use the Table API if you're using Azure Table storage and want to take advantage of better performance, turnkey global distribution, and automatic indexing. The Gremlin API is used for graph modeling between entities.

The next section looks at the programming model, using the .NET SDK to interact with Cosmos DB through the SQL API mentioned in the previous section.

Note

Azure Cosmos DB Local Development Emulator

For development purposes, you can download the Cosmos DB local emulator from https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator, which gives you the same flavor of the Cosmos DB service running in the cloud.

Exam Tip

The AZ-300 exam doesn't expect you to know the service limits by heart, but it's worthwhile to know them in case you get into the weeds during solution design. Please visit the Microsoft docs article "Azure Cosmos DB service quotas" at https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits.

SQL API

Structured Query Language (SQL) is the most popular API adopted by the industry to access and interact with Cosmos DB data using existing SQL skills. When using the SQL API or the Gremlin API, Cosmos DB also gives you the ability to write server-side code using stored procedures, user-defined functions (UDFs), and triggers, as shown in Figure 5-10. These are essentially JavaScript functions written within the Cosmos DB database and scoped at the container level.

Figure 5-10 SQL API
Screenshot_136

Following are the key considerations when you choose to write server-side code with Cosmos DB:

Stored procedures and triggers are scoped to a partition key and must be supplied with an input parameter for the partition key, whereas UDFs do not require one.

Stored procedures and triggers guarantee atomicity (ACID) like in any relational database. Transactions are automatically rolled back by Cosmos DB in case of any exception; otherwise, they’re committed to the database as a single unit of work.

Queries using stored procedures and triggers are always executed on the primary replica because these are intended for write operations and must guarantee strong consistency for the secondary replicas, whereas UDFs, being read-only, can be executed against the primary or a secondary replica.

The server-side code must complete within the specified timeout threshold, or you must implement a continuation batch model for long-running code. If the code doesn't complete within the time limit, Cosmos DB rolls back the whole transaction automatically.

There are two types of triggers you can set up:

Pre-triggers As the name implies, pre-triggers let you invoke logic on the database container before items are created, updated, or deleted.

Post-triggers Like pre-triggers, these are also tied to an operation on the database; however, post-triggers are run after the data is written or updated in the database.
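
To make the server-side model concrete, the following hedged C# sketch registers a trivial JavaScript stored procedure on a container and executes it, supplying the partition key it is scoped to. It assumes the Microsoft.Azure.DocumentDB SDK and reuses the placeholder database, container, and partition key names from this chapter:

	
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static async Task RunStoredProcedureAsync(DocumentClient client)
{
    Uri collectionUri = UriFactory.CreateDocumentCollectionUri("databaseaz300", "FlightReservation");

    // A minimal JavaScript stored procedure that echoes a greeting back to the caller.
    var sproc = new StoredProcedure
    {
        Id = "helloSproc",
        Body = @"function hello(name) {
                     getContext().getResponse().setBody('Hello ' + name);
                 }"
    };
    await client.CreateStoredProcedureAsync(collectionUri, sproc);

    // Stored procedures run inside a single partition, so the partition key is required.
    string result = await client.ExecuteStoredProcedureAsync<string>(
        UriFactory.CreateStoredProcedureUri("databaseaz300", "FlightReservation", "helloSproc"),
        new RequestOptions { PartitionKey = new PartitionKey("AZ300ExamCod") },
        "AZ-300 reader");

    Console.WriteLine(result); // Hello AZ-300 reader
}
	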

Create, read, update, and delete data in Cosmos DB using .NET SQL API SDK

In this section, we programmatically interact with Cosmos DB and perform CRUD operations on it using the .NET SQL API SDK. You can use any of the supported languages and their respective SDKs, as discussed previously.

For the example, we use Cosmos DB that we previously created using the Azure portal. Following are the prerequisites to get started:

You need Visual Studio 2017 or later, either the licensed or Community edition, with the Azure development workload installed.

Azure subscription or free Cosmos DB trial account.

If you would like to use local Cosmos DB emulator, install the local emulator as mentioned in the previous section.

Under the Azure Cosmos account, create a database named databaseaz300 and a container named FlightReservation using the Azure portal.

After you have your environment ready, you can get right into the code. Use the following steps:

  • Create a Visual Studio project.
  • Select the Windows Application and Console .NET application template and name the project AZ_300_Exam_Prep_Code.
  • After you have named and created a project, open the NuGet package manager.
  • Install Microsoft.Azure.DocumentDB from https://www.nuget.org/packages/microsoft.azure.documentdb.
  • You need referencing to the following libraries in your code:
  • 	
    using System;
    using System.Net;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;
    using Newtonsoft.Json;
    	
    
  • Log in to the Azure portal and capture the Cosmos DB REST API endpoint and the keys from your Cosmos account, as shown in Figure 5-11, to be referenced in the code. Here we take the read-write keys because we will perform both read and write operations.
  • Figure 5-11 Cosmos DB keys
    Screenshot_137
  • In your console application, under the namespace AZ_300_Exam_Prep_Code, replace Program.cs with FlightReservation.cs, as shown in the following code snippet.

    In the following code, constant variables reference the Cosmos DB account endpoint and key, and other constants represent the database name, the container name, and the partition you need to write to:

    	
    namespace AZ_300_Exam_Prep_Code {
    class FlightReservation {
    private const string EndpointUrl = "https://cosmosdxxxxxx.documents.azure.com:443/";
    private const string PrimaryKey =
    "9i3Y0K2j2A8aOYxxxxxxxxxxxxxxxxxxxxxxxxxQLSRiFiPF4n5vLzVA==";
    // The client is static because it is created and used from the static Main method.
    private static DocumentClient client;
    private static string cbDatabaseName = "databasxxxxxeaz300";
    private static string cbContainerName = "FlightReservation";
    private static string PartitionKey = "AZ300ExamCod";
    }
    }
    	
    
  • Add the following POCO (plain old CLR object) classes TravellerInfo, Itinerary, and Address under the namespace AZ_300_Exam_Prep_Code. The object is serialized when it is written to the Cosmos DB database. You want to make sure that all reservation data goes under the same partition for better performance and throughput, so here we're setting a partition key value (PartitionKey) in the code logic. Cosmos DB automatically puts the reservations that have a matching partition key property under the same partition of a container:
  • 	
    // TravellerInfo class , that holds properties of a traveler and itinerary information
    public class TravellerInfo {
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }
    public string LastName { get; set; }
    public string FirstName { get; set; }
    public string DOB { get; set; }
    public Itinerary[] TravelItinerary { get; set; }
    public Address Address { get; set; }
    public bool IsRegistered { get; set; }
    public override string ToString() {
    return JsonConvert.SerializeObject(this);
    }
    public string PartitionKey { get; set; }
    }
    public class Itinerary {
    public string SourceAirport { get; set; }
    public string DestinationAirport { get; set; }
    public DateTime DepartureDate { get; set; }
    public DateTime? ReturnDate { get; set; }
    public bool IsRoundTrip { get; set; }
    }
    public class Address {
    public string State { get; set; }
    public string County { get; set; }
    public string City { get; set; }
    }
    	
    
  • To interact with Cosmos DB programmatically, initiate a connection with the Cosmos DB REST APIs using the DocumentClient class of the SDK, as shown here in the Main method:
  • 	
    static void Main(string[] args) {
    client = new DocumentClient(new Uri(EndpointUrl), PrimaryKey);
    }
    	
    
  • Now that you have a solution ready to make a connection with Cosmos DB, add a function CreateReservationDocumentIfNotExists to the FlightReservation class and call it from the Main method, as shown in the following code snippet. All you're doing here is initializing the connection to Cosmos DB using the SQL API SDK. Then you initialize a new FlightReservation object and create a TravellerInfo object that holds information about a traveler and his or her itinerary. Finally, you call the function CreateReservationDocumentIfNotExists.
  • 	
    static void Main(string[] args) {
    client = new DocumentClient(new Uri(EndpointUrl), PrimaryKey);
    FlightReservation reservation = new FlightReservation();
    // GetTravellerInfo is a helper (not shown here) that builds a sample TravellerInfo object.
    var travellerInfo = reservation.GetTravellerInfo(DateTime.Now.AddDays(10));
    reservation.CreateReservationDocumentIfNotExists(cbDatabaseName, cbContainerName,
    travellerInfo).Wait();
    }
    // Function to create a reservation in Cosmos DB
    private async Task CreateReservationDocumentIfNotExists(string databaseName,
    string collectionName, TravellerInfo travellers) {
    try {
    // PartitionKey here refers to the static field defined earlier in the class.
    await client.ReadDocumentAsync(
    UriFactory.CreateDocumentUri(databaseName, collectionName, travellers.Id),
    new RequestOptions { PartitionKey = new PartitionKey(PartitionKey) });
    Console.WriteLine($"Found {travellers.Id}");
    }
    catch (DocumentClientException de) {
    if (de.StatusCode == HttpStatusCode.NotFound)
    {
    await client.CreateDocumentAsync(
    UriFactory.CreateDocumentCollectionUri(databaseName, collectionName), travellers);
    Console.WriteLine($"Created reservation {travellers.Id}");
    }
    else
    {
    throw;
    }
    }
    catch (Exception) {
    Console.WriteLine($"An error occurred in the reservation {travellers.Id}");
    }
    }
    	
    

    After compiling your solution, run it by pressing F5 or using the Start button in Visual Studio. Then log in to the Azure portal and navigate to your Cosmos DB account. Click Data Explorer and then an item in the database, as shown in Figure 5-12.

    Figure 5-12 Cosmos DB Data Explorer
    Screenshot_138
  • As shown in the following code snippet, to update or delete a specific document or reservation in the database, you use the ReplaceDocumentAsync or DeleteDocumentAsync functions that are part of the SDK. The ReplaceDocumentAsync function requires the ID of the document to be updated along with the new object, and the DeleteDocumentAsync function requires you to supply the ID of the document to be deleted:
  • 	
    await client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(databaseName,
    collectionName, documentID), traveller);
    await client.DeleteDocumentAsync(UriFactory.CreateDocumentUri(databaseName,
    collectionName, documentID));
    	
    
  • The following code snippet shows an example of running a read query against the Cosmos DB database, where you search the reservation database for a traveler with LastName == "Joe" within the partition you created earlier.
  • 	
    // Set some common query options.
    FeedOptions queryOptions = new FeedOptions { EnableCrossPartitionQuery = false };
    IQueryable<TravellerInfo> travellers = client.CreateDocumentQuery<TravellerInfo>(
    UriFactory.CreateDocumentCollectionUri(databaseName, collectionName), queryOptions)
    .Where(r => r.LastName == "Joe" && r.PartitionKey == PartitionKey);
    	
    

    Cross-partition query

    Querying a document from Cosmos DB is not that complex. All you need is to specify the database name and the container name after you're authenticated and have a valid authorization token to read the data. As you can see in the following LINQ query, we haven't specified a partition key as part of the read query, so you need to enable the cross-partition query option. The default is false; otherwise, you will receive an error asking you to enable it.

    	
    FeedOptions queryOptions = new FeedOptions { EnableCrossPartitionQuery = true };
    IQueryable<TravellerInfo> travellers = client.CreateDocumentQuery<TravellerInfo>(
    UriFactory.CreateDocumentCollectionUri(databaseName, collectionName), queryOptions)
    .Where(r => r.LastName == "Joe");
    	
    

    Need More Review?

    SQL Query Reference Guide for Cosmos DB

    To learn more about SQL query examples and operators, visit the Microsoft doc "Getting started with SQL queries" at https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started#GettingStarted

    Exam Tip

    Azure Cosmos DB was formerly known as DocumentDB; therefore, the commands in the Azure CLI refer to the default SQL API as DocumentDB. It's likely that in the exam you may get a question to check your knowledge of Azure CLI commands to create and manage Azure Cosmos accounts and resources. That said, it's recommended that you visit "Azure CLI samples for Azure Cosmos DB" at https://docs.microsoft.com/bs-latn-ba/azure/cosmos-db/cli-samples.

    MongoDB API

    You can switch from MongoDB to Cosmos DB and take advantage of excellent service features such as scalability, turnkey global distribution, various consistency levels, automatic backups, and indexing without having to change your application code. All you need to do is create a Cosmos DB account for the MongoDB API (see Figure 5-13). As of the writing of this book, Cosmos DB's MongoDB API supports MongoDB server version 3.2, and you can use existing tooling, libraries, and open-source MongoDB client drivers to interact with Cosmos DB.

    Figure 5-13 Cosmos supported APIs
    Screenshot_139

    Table API

    Similar to MongoDB API, applications that are originally written for Azure Table storage can seamlessly be migrated to Cosmos DB without having to change the application code. In this case, you would create a Cosmos DB for Azure Table from the API options.

    Client SDKs in .NET, Java, Python, and Node.js are available for the Table API. Migrating from Azure Table storage to Cosmos DB provides you with the service's premium capabilities, as we've been discussing since the start of this chapter.

    Cassandra API

    You can switch from Apache Cassandra to Cosmos DB and take advantage of enterprise-grade features such as scalability, turnkey global distribution, various consistency levels, automatic backups, and indexing without having to change your application code. As of the writing of this book, Cosmos DB's Cassandra API supports Cassandra Query Language (CQL) v4, and you can use existing tooling, libraries, and open-source Cassandra client drivers to communicate with Cosmos DB.

    Gremlin API

    The Gremlin API is used for generating and visualizing a graph between data entities. Cosmos DB fully supports an open-source graph computing framework called Apache TinkerPop. You use this API when you would like to present complex relationships between entities in graphical form. The underlying mechanics of data storage are similar to what you have learned in the previous sections for the other APIs, such as SQL or Table. That being said, your graph data gets the same level of

    Scalability

    Performance and throughput

    Auto-indexing

    Global distribution and guaranteed high availability.

    The critical components of any Graph database are the following:

    Vertices Vertices denote discrete objects such as a person, a place, or an event. Taking the analogy of the airline reservation system that we discussed in the SQL API example, a traveler is a vertex.

    Edges Edges denote relationships between vertices. A relationship can be uni- or bidirectional. In our analogy, an airline carrier is also a vertex, and the relationship between the traveler and the airline that captures which airline the traveler flew with in a given year is an edge.

    Properties Properties hold the information associated with vertices and edges. For example, the properties of a traveler include his or her name, date of birth, address, and so on. The properties of the edge (airline) could be the airline name, travel routes, and so on.

    The Gremlin API is widely used for solving problems that involve complex relationship models, such as social networking, geospatial analysis, and recommendations in retail, science, and other businesses.

    Here's a quick look at the airline reservation analogy and how to create vertices and edges using the Azure portal. You can also do this programmatically using the SDKs available in .NET and other languages; a sketch using one such driver follows.
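
    The following is a hedged C# sketch using the open-source Gremlin.Net driver (one of several client options for the Gremlin API); the host name, database, graph, and key values are placeholders:

    	
    using System;
    using System.Threading.Tasks;
    using Gremlin.Net.Driver;
    using Gremlin.Net.Structure.IO.GraphSON;

    static async Task SubmitGremlinAsync()
    {
        // Placeholder host, database, graph, and key values for a Gremlin-API Cosmos account.
        var server = new GremlinServer(
            "<your-account>.gremlin.cosmosdb.azure.com", 443, enableSsl: true,
            username: "/dbs/<database>/colls/<graph>",
            password: "<your-account-key>");

        using (var gremlinClient = new GremlinClient(
            server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
        {
            // Add a vertex (including the partition key property), then read it back.
            await gremlinClient.SubmitAsync<dynamic>(
                "g.addV('traveller').property('id', 'thomas').property('graphdb', 'az300')");
            var results = await gremlinClient.SubmitAsync<dynamic>("g.V('thomas')");
            foreach (var vertex in results)
            {
                Console.WriteLine(vertex);
            }
        }
    }
    	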

    CREATING VERTICES

    Use the following steps to create a vertex traveler in the Graph database:

    1. Log in to the Azure portal and navigate to your Cosmos DB account that you created for the Gremlin API.
    2. On the Data Explorer blade, create a new Graph database by specifying the name, storage capacity, throughput, and a partition key for the database. You have to provide the value for the partition key that you define. In our example, the partition key is graphdb and its value is az300 when creating vertices.
    3. After the database is created, navigate to the Graph Query window, as shown in Figure 5-14, and run the following commands to create vertices, edges, and several properties for travelers and airlines:
    	
    g.addV('traveller').property('id', 'thomas').property('FirstName', 'Thomas').
    property('LastName', 'Joe').property('Address', 'Ohio').property('Travel Year',
    2018).property('graphdb', 'az300')
    g.addV('traveller').property('id', 'Gurvinder').property('FirstName',
    'Gurvinder').property('LastName', 'Singh').property('Address', 'Chicago').
    property('Travel Year', 2018).property('graphdb', 'az300')
    g.addV('Airline Company').property('id', 'United Airlines').
    property('CompanyName', 'United Airlines').property('Route 1', 'Chicago').
    property('Route 2', 'Ohio').property('graphdb', 'az300')
    g.addV('Airline Company').property('id', 'American Airlines').
    property('CompanyName', 'American Airlines').property('Route 1', 'California').
    property('Route 2', 'Chicago').property('graphdb', 'az300')
    g.addV('Airline Company').property('id', 'Southwest Airlines').
    property('CompanyName', 'Southwest Airlines').property('Route 1', 'Chicago').
    property('Route 2', 'California').property('graphdb', 'az300')
    g.addV('Airline Company').property('id', 'Delta Airlines').property('CompanyName',
    'Delta Airlines').property('Route 1', 'Chicago').property('Route 2', 'Ohio').
    property('graphdb', 'az300')
    	
    

    In the preceding Gremlin commands, g represents your graph, g.addV() is used to add vertices, and property() is used to associate properties with vertices.

    CREATING EDGES

    Now that you’ve added vertices for travelers and airlines, you need to define the relationship in a way that explains which airline a traveler has traveled with in a given year and if travelers know each other.

    Create an edge on the vertex ‘traveler’ that you created previously. As you created vertices in step 3 in the preceding section, follow the same method and run the following commands on the graph window to create edges (see Figure 5-14):

    	
    g.V('thomas').addE('travelyear').to(g.V('Delta Airlines'))
    g.V('thomas').addE('travelyear').to(g.V('American Airlines'))
    g.V('thomas').addE('travelyear').to(g.V('United Airlines'))
    g.V('Gurvinder').addE('travelyear').to(g.V('Delta Airlines'))
    g.V('Gurvinder').addE('travelyear').to(g.V('United Airlines'))
    g.V('thomas').addE('know').to(g.V('Gurvinder'))
    	
    
    Figure 5-14 Gremlin API
    Screenshot_140

    In this example, addE() is used to define a relationship between a traveler vertex and an airline vertex, each referenced using g.V(). After you run the preceding commands, you can see the relationship between entities on the graph using the Azure portal, as shown in Figure 5-15.

    Figure 5-15 Gremlin API Graph
    Screenshot_141

    Skill 5.2: Develop solutions that use a relational database

    Small, medium, and large enterprises have been using relational databases for decades as the preferred way to store data for their small- or large-scale applications. In a relational database, the data is stored as a collection of data items with predefined relationships between them, organized in rows and columns. Each row of a table has a unique key, represents a collection of values associated with an entity, and can be associated with rows of other tables in the database, which defines the relationship between entities. Each column of a row holds a value of an attribute of the entity. In addition, relational databases come with built-in capabilities for managing data integrity, transactional consistency, and ACID (Atomicity, Consistency, Isolation, and Durability) compliance.

    As part of the Azure platform's PaaS suite of relational database offerings, Microsoft provides the following databases to choose from for your application needs:

    Azure SQL database Azure SQL database is a Microsoft core product and the most popular relational database in the cloud. It's meant to be a replacement for SQL Server on-premises.

    Azure SQL Data Warehouse A relational database for big data solutions with the ability to process data in a massively parallel fashion.

    Azure database for MySQL Azure database for MySQL is a fully managed database as a service where Microsoft runs and manages all mechanics of MySQL Community Edition database in the cloud.

    Azure database for PostgreSQL Like MySQL, this is a fully managed database-as-a-service offering based on the open-source Postgres database engine.

    Azure database for MariaDB Azure database for MariaDB is also a managed, highly available, and scalable database as a service based on the open-source MariaDB server engine.

    Regardless of the database you select for your application needs, Microsoft provides the following key characteristics with each of these managed service offerings:

    High availability and on-demand scale

    Business continuity

    Automatic backups

    Enterprise-grade security and compliance

    This skill covers how to:

    • Provision and configure relational databases
    • Create elastic pools for Azure SQL databases
    • Create, read, update, and delete data tables by using code

    Provision and configure relational databases

    In this section, we dive into the critical aspects of how you set up a relational database in the cloud and configure the cloud-native features that come with the service offering.

    Azure SQL database

    Azure SQL database is Microsoft's core and most popular relational database service. The service has the following flavors of database offerings:

    Single database With a single database, you assign preallocated compute and storage to the database.

    Elastic pools With elastic pools, you create a database inside of the pool of databases, and they share the same resources to meet unpredictable usage demand.

    Managed Instance Microsoft recently launched a Managed Instance flavor of the service that gives close to 100% compatibility with SQL Server Enterprise Edition with additional security features.

    Note

    Regional Availability of SQL Azure Service Types

    Although Exam AZ-300 does not expect you to get into the weeds of regional availability of the Azure SQL database service, as an architect it is crucial that you know this part. Please visit the Microsoft docs "Products available by region" at https://azure.microsoft.com/en-us/global-infrastructure/services/?products=sql-database&regions=all.

    Now that we have looked at the different types of databases you can create with the Azure SQL database offering, it is crucial that you understand the available purchasing models, which help you choose the right service tier for your application needs. Azure SQL database comes with the following two purchasing models:

    DTU (Database Transaction Unit) model DTUs are a blend of compute, storage, and IO resources that you preallocate when you create a database on the logical server. For a single database, capacity is measured in DTUs; for elastic databases, capacity is measured in eDTUs. Microsoft offers three service tiers (Basic, Standard, and Premium) for single and elastic pool databases. Each of the tiers has its own differentiated range of compute, storage, fixed retention, backup options, and pricing levels.

    vCore (Virtual Core) model The vCore-based model is the Microsoft-recommended purchasing model, where you get the flexibility of independently choosing compute and storage to meet your application needs. Additionally, you get the option to use your existing SQL Server license to save up to 55% of the cost. The vCore purchasing model provides three service tiers: General Purpose, Hyperscale, and Business Critical. Each has its own range of compute sizes, types and sizes of storage, latency, and I/O ranges. In the vCore model, you can create a single, elastic pool, or managed instance database.

    Exam Tip

    Database migration is a crucial part of any cloud migration project. It’s likely that in the Exam AZ-300, Microsoft checks your knowledge of different database migration strategies and options available. Please check the Microsoft docs Azure Database Migration Guide at https://datamigration.microsoft.com/.

    Create a SQL Azure single database using Azure portal

    The databases (single or pooled) reside on a logical SQL database server, and the server must exist before you can create a database on it. The security firewall setting, auditing, and other threat protection policies on the server automatically apply to all the databases on the server. The databases are always in the same region as their logical server. Use the following steps to create a database server and database:

    1. Log in to Azure portal.
    2. On the Navigation blade on the left side of the portal, click Create A Resource and search for SQL Database.
    3. Select the SQL Database as shown in Figure 5-16.
    4. Figure 5-16 Search for a new resource
      Screenshot_142
    5. On the Create Database screen (see Figure 5-17), provide the database name and select the subscription, resource group, and server. Make sure Want To Use Elastic Database is set to No. If the SQL server doesn't exist, you have to provide the details for the new server along with the database request (see Figure 5-18).
    6. Figure 5-17 Create a SQL database
      Screenshot_143
      Figure 5-18 Create a SQL Server
      Screenshot_144

      The Allow Azure services to access server option, as shown in Figure 5-18, is checked by default; it enables other Azure IP addresses and subnets to be able to connect to the SQL Azure server.

    7. Ignore the Additional Settings tab and click Review + Create. In this example, we're creating a single blank database for demonstration purposes.
    8. On the review screen, you can review your configuration and click Create to initiate the database deployment, as shown in Figure 5-19.
    9. Figure 5-19 Review and submit a request
      Screenshot_145

    After the database is created, you need to set up firewall rules (see Figure 5-20) to allow inbound connections to the database. A rule can be set up for a single IP address or for a range in CIDR notation so that clients can connect to the SQL Azure database from outside of Azure. By default, all inbound connections from the internet are blocked by the SQL database firewall. Firewall rules can be set at the server level or the individual database level. Server-level rules apply to all the databases on the server.

    Figure 5-20 Setting SQL firewall rules
    Screenshot_146

    Another option to control access to the SQL Azure database is to use virtual network rules. This is specifically for implementing fine-grained control, as opposed to the Allow Access To Azure Services option, which allows access to the database from all Azure IP addresses or Azure subnets, including those that may not be owned by you.

    Exam Tip

    Unlike a single or pooled database, the managed instance of a SQL Azure database doesn't have a public endpoint. Microsoft provides two ways to connect to Managed Instance databases securely. Please visit the Microsoft docs at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-quickstart-guide to check out the options.

    Geo-replication and automatic backups

    One of the appealing features of Azure SQL database (single or pooled) is the ability to spin up to four read-only copies of the database (secondary databases) in the same or different regions. The feature is specifically designed for business continuity solutions where you have the option to fail over (manually or via automation) to a secondary database in the event of a regional disruption or large-scale outage. In active geo-replication, data is replicated to the secondary databases immediately after the transactions are committed on the primary database. The replication happens automatically and asynchronously. If you're using SQL Azure Managed Instance, you use the auto-failover group feature for business continuity. You can initiate geo-replication and create up to four read-only secondary databases from the pane shown in Figure 5-21.

    Figure 5-21 Geo-replication
    Screenshot_147

    Regardless of the service tier you choose, Azure automatically takes backups of your databases as part of a disaster recovery solution. The backups are retained automatically for between 7 and 35 days based on your service tier (Basic or higher) and do not incur any cost. If your application requires you to maintain backups beyond 35 days, Azure provides a long-term backup retention option to keep the backups for up to 10 years.

    Create elastic pools for Azure SQL databases

    As mentioned previously, elastic database pools are best suited for applications that have unpredictable usage patterns. Choosing an elastic database pool is the best bet to save cost when you have many databases and the overall average utilization of DTUs or vCores is low. The more databases you add to the pool, the more cost it saves you, because otherwise unused shared eDTUs and vCores are used efficiently across the databases.

    1. Log in to Azure portal.
    2. On the navigation pane on the left side of the portal, click Create A Resource.
    3. Search for SQL Elastic Database Pool from the marketplace; the screen appears (see Figure 5-22).
    4. Figure 5-22 Elastic database resource
      Screenshot_148
    5. Click Create.
    6. After you have clicked Create, you will be navigated to another screen called Elastic Pool (see Figure 5-23). Provide the pool name, choose the resource group, service tiers, and SQL server, and click Create.
    7. Figure 5-23 Elastic Pool create form
      Screenshot_149

      After the pool is created, you can create a new database and add it to the pool. You can also add an existing database on the same server to the pool or remove databases from the pool.

    8. To add a new database to the pool, navigate to the elastic database pool you created in step 5, and click Create A Database (see Figure 5-24).
    9. Figure 5-24 Create a database in the pool
      Screenshot_150
    10. The screen to create a database appears (see Figure 5-25). Provide the database name and click OK.
    11. Figure 5-25 Create new database form
      Screenshot_151
    12. To add an existing database from the same server to the pool or remove a database from the pool, go to the Configure tab on the left pane of the screen shown in Figure 5-26.
    13. Figure 5-26 Elastic database scale setting
      Screenshot_152

    Create, read, update, and delete data tables by using code

    In this section, we look at some programming aspects of interacting with an Azure SQL database and performing CRUD operations on the database tables using ADO.NET and the Visual Studio IDE. Before we jump into the code, there are a few prerequisites you must ensure are in place:

    You have created a database server and a database.

    You have set up firewall rules on the server to allow your client computer to connect to the database.

    You have Visual Studio 2017 or higher installed. You need the community or licensed version.

    In the previous section, we already explained the process of creating and enabling a connection with the SQL Azure database, so we leverage the same database in this section. Figure 5-27 shows the Entity Relationship (ER) diagram that we use to demonstrate the goal of this section. Here we are creating two tables, tblCustomer and tblOrder, where the parent table tblCustomer has a one-to-many relationship with its child table, tblOrder.

    Figure 5-27 ER diagram
    Screenshot_153

    You need to open Visual Studio and create a new console application (.NET framework).

    After you have created a console app using Visual Studio, add the following code snippet to the Main method in the Program.cs file. In the following code, you have to refer to your SQL Azure database and its credentials:

    	
    //.NET Framework - requires: using System.Data.SqlClient;
    // Create a new instance of SqlConnectionStringBuilder to populate the database
    // endpoint and credentials for the SQL connection.
    var cb = new SqlConnectionStringBuilder();
    cb.DataSource = "sqxxxxxxaz300.database.windows.net";
    cb.UserID = "az300exxxxxxxn";
    cb.Password = "gxxxxxxxxx";
    cb.InitialCatalog = "az300exaxxxxxxb";
    	
    

    The next step is to write TSQL commands to create the database tables and perform CRUD (create, read, update, and delete) operations on them. The next code snippet shows the functions and their respective TSQL command statements.

    The TSQLS class encapsulates all the TSQL operations that you'll execute on the database. The class has a total of seven methods:

    TSQL_CreateTables Returns the TSQL to drop and create tables in the database

    TSQL_InsertCustomerandOrder Returns the TSQL to insert sample records for the customers and their order history

    TSQL_UpdateCustomerBillingAddress Returns the TSQL to perform the update operation on the tables

    TSQL_DeleteCustomerByID() Returns the TSQL to execute the delete statement on the tables to remove a customer and its order history by customer ID

    TSQL_SelectCustomerandItsOrder Returns TSQL to perform the read operation on the tables and displays all customers and their related orders

    ExecuteCommand Uses the supplied connection to run the create, insert, update, and delete commands on the database

    ExecuteQuery Runs the read command on the database and displays all customers and their orders on the console

    	
    /// <summary>
    /// static class that exposes various methods to perform database operation
    /// </summary>
    public class TSQLS {
    /// <summary>
    /// A Function that return TSQL to drop and create table
    /// </summary>
    /// <returns></returns>
    public static string TSQL_CreateTables() {
    return @"
    DROP TABLE IF EXISTS tblOrder;
    DROP TABLE IF EXISTS tblCustomer;
    CREATE TABLE tblCustomer
    (
    ID int not null identity (1,1) primary key,
    [Name] nvarchar(50) null ,
    BillingAddress nvarchar(255) null
    )
    CREATE TABLE tblOrder
    (
    ID int not null identity (1,1) primary key,
    ProductName nvarchar(128) not null,
    CustomerID int null
    REFERENCES tblCustomer (ID)
    );
    ";
    }
    /// <summary>
    /// A Function that returns a TSQL to create sample customer and their orders
    /// </summary>
    /// <returns></returns>
    public static string TSQL_InsertCustomerandOrder() {
    return @"
    -- Three customers exist for your online business.
    INSERT INTO tblCustomer (Name, BillingAddress)
    VALUES
    ('Gurvinder', 'chicago, IL lombard'),
    ('Mike', 'Phoenix'),
    ('Amit', 'San Jose');
    -- Each customer has bought some products from your online store.
    INSERT INTO tblOrder (ProductName, CustomerID)
    VALUES
    ('Apple Phone case' , 1),
    ('Google Pixel Phone case' , 2),
    ('Google PixelXL Phone case' , 3) ";
    }
    /// <summary>
    /// A Function that returns a TSQL to update Customer billing address by name
    /// </summary>
    /// <returns></returns>
    public static string TSQL_UpdateCustomerBillingAddress() {
    return @"
    DECLARE @CustomerName nvarchar(128) = @paramCustomerName; -- for example, Gurvinder
    -- update the billing address of the customer by name
    UPDATE c
    SET
    c.BillingAddress ='lombard'
    FROM
    tblCustomer as c
    WHERE
    c.Name = @CustomerName; ";
    }
    /// <summary>
    /// A function that returns a TSQL to delete customer and his/her order history
    /// by customerID
    /// </summary>
    /// <returns></returns>
    public static string TSQL_DeleteCustomerByID() {
    return @"
    DECLARE @cusID int;
    SET @cusID = @paramCusID;
    DELETE o
    FROM
    tblOrder as o
    INNER JOIN
    tblCustomer as c ON o.CustomerID = c.ID
    WHERE
    c.id = @cusID
    DELETE tblCustomer
    WHERE ID = @cusID; ";
    }
    /// <summary>
    /// A Function that Returns the list of customers and their order history from
    /// the database
    /// </summary>
    /// <returns></returns>
    public static string TSQL_SelectCustomerandItsOrder() {
    return @"
    -- Look at all the customer and their order history
    SELECT
    c.*,o.*
    FROM
    tblCustomer as c
    JOIN
    tblOrder as o ON c.id = o.CustomerID
    ORDER BY
    c.name; ";
    }
    /// <summary>
    /// A function to create tables and run CRUD operations on the database tables
    /// </summary>
    /// <param name="sqlConnection"></param>
    /// <param name="databaseOperationName"></param>
    /// <param name="sqlCommand"></param>
    /// <param name="parameterName"></param>
    /// <param name="parameterValue"></param>
    public static void ExecuteCommand(SqlConnection sqlConnection, string
    databaseOperationName, string sqlCommand, string parameterName = null, string
    parameterValue = null) {
    Console.WriteLine();
    Console.WriteLine("=================================");
    Console.WriteLine("DB Operation to {0}...", databaseOperationName);
    using (var command = new SqlCommand(sqlCommand, sqlConnection))
    {
    if (parameterName != null)
    {
    command.Parameters.AddWithValue(
    parameterName,
    parameterValue);
    }
    int rowsAffected = command.ExecuteNonQuery();
    Console.WriteLine(rowsAffected + " = rows affected.");
    }
    }
    /// <summary>
    /// A Function that runs the read operation on the database.
    /// </summary>
    /// <param name="sqlConnection"></param>
    /// <param name="tSQLquery"></param>
    public static void ExecuteQuery(SqlConnection sqlConnection, string tSQLquery) {
    Console.WriteLine();
    Console.WriteLine("=================================");
    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("Displaying, Customers and their order history...");
    Console.ForegroundColor = ConsoleColor.White;
    Console.WriteLine();
    Console.WriteLine("=================================");
    using (var query = new SqlCommand(tSQLquery,sqlConnection))
    {
    using (SqlDataReader reader = query.ExecuteReader())
    {
    while (reader.Read())
    {
    Console.WriteLine("{0} , {1} , {2} , {3} , {4},{5}",
    reader.GetInt32(0),
    reader.GetString(1),
    reader.GetString(2),
    reader.GetInt32(3),
    reader.GetString(4),
    reader.GetInt32(5));
    }
    }
    }
    Console.WriteLine();
    Console.WriteLine("=================================");
    }
    }
    	
    

    Now that we’ve looked at the TSQLS class and its encapsulated methods, it’s time to call them and see all the operations in action. In the Main method of your Program.cs file, place the following code snippet and compile the complete solution. After you have compiled the solution, press F5, or click Start on the top menu of Visual Studio. The output of the program is shown in Figure 5-28.

    	
    using (var connection = new SqlConnection(cb.ConnectionString)) {
        connection.Open();
        TSQLS.ExecuteCommand(connection, "1 - Create Tables", TSQLS.TSQL_CreateTables());
        TSQLS.ExecuteCommand(connection, "2 - Insert Customer and Orders", TSQLS.TSQL_InsertCustomerandOrder());
        TSQLS.ExecuteCommand(connection, "3 - Update Customers", TSQLS.TSQL_UpdateCustomerBillingAddress(), "@paramCustomerName", "Gurvinder");
        TSQLS.ExecuteCommand(connection, "4 - Delete Customer and Its Order History", TSQLS.TSQL_DeleteCustomerByID(), "@paramCusID", "1");
        TSQLS.ExecuteQuery(connection, TSQLS.TSQL_SelectCustomerandItsOrder());
    }
    	
    
    Figure 5-28 The console window
    Screenshot_154

    Finally, you can clean up the resources you created in the Azure portal by deleting the resource group.

    Skill 5.3: Configure a message-based integration architecture

    In today’s world of distributed application development that embraces microservices architecture, messaging systems play a vital role in designing a reliable, resilient, and scalable application. Creating a successful distributed microservices-based application involves a lot of complexity and requires you to think of a robust way of establishing communication and networking between the autonomous, loosely coupled components of the application.

    To address this problem, you need a messaging-based architecture that allows applications to communicate in a loosely coupled manner. Microsoft provides the Azure Integration suite of services to leverage in your microservices-based solution, which we discuss in detail later in this chapter.

    The messaging solution provides the following key benefits for developing loosely coupled distributed applications:

    Messaging allows loosely coupled applications to communicate with each other without direct integration.

    Messaging allows applications to scale independently. The compute-intensive tasks of the application can be handled asynchronously by a background job that can scale independently of the lightweight client or GUI. You can trigger the job with a message in the queue.

    Messaging supports several communication patterns that cater to a variety of business use cases, such as one-to-one, one-to-many, and many-to-many.

    Advanced messaging technologies facilitate designing a solution when the order of workflows between discrete application components is critical and duplicate processing cannot be tolerated.

    This skill covers how to:

    • Configure an app or service to send emails, Event Grid, and the Azure Relay Service
    • Create and configure Notification Hubs, Event Hub, and Service Bus
    • Configure queries across multiple products

    Configure an app or service to send emails, Event Grid, and the Azure Relay Service

    Integrating a distributed application often requires you to orchestrate workflows and automate business processes. For example, if you’re creating a resource group in an Azure subscription and adding a contributor to manage resources, you may want to have some governance in place to let the resource group owner or the subscription owner know via email or SMS when new resources are created or updated in the resource group. Azure Logic Apps, one of the services from the Azure Integration suite of services, allows you to automate such workflows.

    Azure Logic Apps

    Logic Apps allows you to define workflows and processes without having to write any code and facilitates application and service integration across enterprises and organizations. Logic Apps is called a designer-first serverless integration service because you use the Azure portal to visually design and build workflows.

    As of the writing of this book, Logic Apps supports more than 200 managed connectors that provide triggers and actions to access cloud-based SaaS services, the Azure native suite of services, and on-premises services using a Logic Apps gateway agent. An overview of Logic Apps is shown in Figure 5-29.

    Figure 5-29 Azure Logic Apps overview
    Screenshot_155

    Azure Event Grid

    Some integration scenarios require you to respond to events in real time as opposed to using a standard message polling mechanism. In such a scenario, the source system where some condition or state change happens may need the subscribers to be notified so that they can take some action. One use case we briefly discussed previously is sending an email notification to the Azure subscription owner or resource group owner when a resource is updated (the event) by resource group contributors.

    The Azure Event Grid service is best suited for these use cases: it acts as a message broker that ties one or more subscribers to discrete event notifications coming from event publishers. The service is fully managed, uses a serverless computing model, and is massively scalable.

    Figure 5-30 shows that Event Grid has built-in support for integrating Azure services, where you define your event publishers and event handlers. Event publishers emit events and send them to Event Grid using an endpoint called a topic; you can also create custom topics for your custom events. Event Grid then pushes those events instantaneously to event handlers and guarantees at-least-once delivery of the event messages for each subscription.

    Figure 5-30 The Azure Event Grid service
    Screenshot_156

    To receive an event and act on it, you define an event handler and tell Event Grid which events on the topic should be routed to which handler. This is done using an event subscription.
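
    For custom events, your own code can publish to a custom topic as well. The following is a minimal sketch of publishing a custom event using the Azure.Messaging.EventGrid .NET SDK; the topic endpoint, access key, subject, and event type shown here are hypothetical placeholders, not values from this chapter's walkthrough.

    	
    using System;
    using System.Threading.Tasks;
    using Azure;
    using Azure.Messaging.EventGrid;

    class PublishCustomEvent
    {
        static async Task Main()
        {
            // Hypothetical custom topic endpoint and access key, taken from the topic's
            // Overview and Access Keys blades in the Azure portal.
            var topicEndpoint = new Uri("https://az300-custom-topic.westus2-1.eventgrid.azure.net/api/events");
            var credential = new AzureKeyCredential("<topic-access-key>");

            var client = new EventGridPublisherClient(topicEndpoint, credential);

            // Build a custom event; subscriptions filter on subject and event type.
            var egEvent = new EventGridEvent(
                "orders/12345",                     // subject
                "Contoso.Orders.OrderCreated",      // event type
                "1.0",                              // data version
                new { OrderId = 12345, Status = "Created" });

            // Send the event to the custom topic; Event Grid pushes it to all matching subscriptions.
            await client.SendEventAsync(egEvent);
            Console.WriteLine("Event published.");
        }
    }
    	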

    Send an email using SendGrid, Logic Apps, and Azure Event Grid

    SendGrid is a third-party SaaS application for sending emails. With an Azure Logic App, Event Grid, and the SendGrid connector, you can define a workflow to send an email. Event Grid here acts as the trigger for the Azure Logic App to invoke the email workflow.

    The following steps describe how to set up a workflow to send email using the SendGrid connector. This section assumes you have an Azure Storage account and a SendGrid account already created. In this example, we create a logic app and set up a workflow to send an email notification on changes to the Azure Storage account.

    1. Log into the Azure portal and go to the Create Resource blade to search for Logic App. Click Create (see Figure 5-31).
    2. Figure 5-31 Create a logic app
      Screenshot_157
    3. On the Create screen, provide the logic app name; choose a subscription, resource group, and location; and click Create.
    4. After the logic app is created, navigate to the Logic Apps Designer and choose Event Grid as your trigger (see Figure 5-32). You’re prompted to sign in with your Azure credentials so that Logic Apps can connect to Event Grid.
    5. Figure 5-32 Logic Apps Designer
      Screenshot_158
    6. Set up an event publisher for your logic app. In this example, we select the Resource Type Microsoft.Resources.ResourceGroups and the Event Type Microsoft.Resources.ResourceActionSuccess (see Figure 5-33).
    7. Figure 5-33 Logic app event publisher
      Screenshot_159
    8. The next step is to add a condition and an action to the Logic App workflow (see Figure 5-34). In the expression editor on the left side of the condition, add triggerBody()?['data']['operationName'] and click OK. Keep the middle operator set to Is Equal To. On the right side of the equation, add Microsoft.Storage/storageAccounts/write.
    9. Figure 5-34 Logic app condition dialog box
      Screenshot_160
    10. Add an action to send an email using SendGrid. In the Add Action dialog box, search for SendGrid (see Figure 5-35). You have to provide the SendGrid account key for Logic Apps to connect to it.
    11. Figure 5-35 A logic app action dialog box
      Screenshot_161
    12. Define the email template and add a recipient to receive the email, as shown in Figure 5-36.
    13. Figure 5-36 Set up an email recipient
      Screenshot_162

      After you’ve added conditions and actions to the Logic Apps Designer, it looks like Figure 5-37.

      Figure 5-37 A logic app workflow
      Screenshot_163
    14. To see this in action, go to your storage account and make some updates. You will see that an email is sent to the recipient after the changes are successful.

    Exam Tip

    There is a very high likelihood that you will get a use case in which a logic app workflow needs to react to changes in an on-premises data source; for example, data updates in an on-premises SQL Server database. Azure Logic Apps gives you the ability to connect to a variety of data sources sitting behind the organization’s firewall using a gateway installer. Please check out the Microsoft documentation for supported on-premises connectors at https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-gateway-install.

    Azure Relay services

    The Azure Relay service allows you to establish a secure connection to services running behind the corporate network without opening any firewall ports.

    The Relay service has two types:

    Hybrid Connections Hybrid Connections are based on the standard HTTP and WebSocket protocols and hence can be used on any platform and with any language.

    WCF Relay WCF Relay is a legacy Relay offering that works only for .NET Framework Windows Communication Foundation (WCF) endpoints.
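
    To give a sense of how a Hybrid Connection is consumed from code, the following is a minimal sketch of an on-premises listener that accepts a relayed connection using the Microsoft.Azure.Relay .NET package; the namespace, connection name, and SAS key values are hypothetical placeholders and assume the hybrid connection already exists.

    	
    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.Azure.Relay;

    class HybridConnectionEcho
    {
        // Hypothetical Relay namespace, hybrid connection name, and SAS policy values.
        const string RelayNamespace = "az300relayns.servicebus.windows.net";
        const string ConnectionName = "az300hybridconnection";
        const string KeyName = "RootManageSharedAccessKey";
        const string Key = "<shared-access-key>";

        static async Task Main()
        {
            var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(KeyName, Key);
            var listener = new HybridConnectionListener(
                new Uri($"sb://{RelayNamespace}/{ConnectionName}"), tokenProvider);

            // The connection to Azure Relay is outbound, so no inbound firewall
            // ports need to be opened on-premises.
            await listener.OpenAsync();
            Console.WriteLine("Listening for relayed connections...");

            // Accept one relayed connection and echo a line of text back to the sender.
            HybridConnectionStream relayStream = await listener.AcceptConnectionAsync();
            using (var reader = new StreamReader(relayStream))
            using (var writer = new StreamWriter(relayStream) { AutoFlush = true })
            {
                string line = await reader.ReadLineAsync();
                await writer.WriteLineAsync($"Echo: {line}");
            }

            await listener.CloseAsync();
        }
    }
    	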

    Use the following steps to create a Relay Service namespace and configure Hybrid Connections or WCFRelay on the Azure portal:

    1. Log in to the Azure portal and navigate to create a resource from the left pane. Search for Relay.
    2. Click Create (see Figure 5-38).
    3. Figure 5-38 Create a Relay Service
      Screenshot_164
    4. Create a namespace by providing the name and selecting the subscription, resource group, and location, as shown in Figure 5-39. Click Create.
    5. Figure 5-39 Create a Relay Service namespace
      Screenshot_165
    6. After the namespace has been created, navigate to the resource. On the Overview tab, you see an option to configure a Hybrid Connection or a WCF Relay (see Figure 5-40).
    7. Figure 5-40 Configure a Hybrid Connection or a WCF Relay
      Screenshot_166

    Create and configure Notification Hubs, Event Hubs, and Service Bus

    The Microsoft Azure platform provides a variety of options for applications that use messaging-based architecture and need an optimal way to process or consume messages or push notifications to other services. The following sections look at the use cases to help you choose the service that is the best fit.

    Azure Notification Hubs

    In today’s digital world, the use of mobile and handheld devices is growing faster than ever. One of the critical factors that helps you grow your business is keeping customers engaged and notifying them of offers and the latest events as soon as they happen. The diversity of customers, their choices about what they should be notified of, and the types of mobile device platforms they use are vast. This is where Azure Notification Hubs comes in.

    Azure Notification Hubs allows you to send massively scaled push notifications to devices from back-end services running either in the cloud or on premises. Notification Hubs is platform agnostic and can send push notifications to any platform (iOS, Android, Windows, Kindle, Baidu).
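
    As an illustration, the following is a minimal sketch of a back-end service sending a native notification to Android devices through a hub using the Microsoft.Azure.NotificationHubs .NET SDK. The connection string, hub name, and payload are hypothetical, and the exact send method depends on the platforms registered with your hub and the SDK version (older releases expose a GCM rather than FCM method).

    	
    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.NotificationHubs;

    class SendPushNotification
    {
        static async Task Main()
        {
            // Hypothetical hub connection string (full access policy) and hub name.
            const string connectionString = "<notification-hub-connection-string>";
            const string hubName = "az300notificationhub";

            NotificationHubClient hub =
                NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);

            // FCM (Android) notification payload; other platforms (APNs, WNS) use their
            // own payload formats and corresponding send methods.
            string fcmPayload = "{\"data\":{\"message\":\"Class schedule has changed.\"}}";
            await hub.SendFcmNativeNotificationAsync(fcmPayload);

            Console.WriteLine("Notification sent to registered Android devices.");
        }
    }
    	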

    Use the following steps to create and configure Azure Notification Hubs:

    1. Log in to the Azure portal and navigate to create a resource on the left pane. Search for Notification Hub. You can create the namespace first or create it at the same time as you create the hub.
    2. Click Create as shown in Figure 5-41.
    3. Figure 5-41 Create an Azure Notification Hub
      Screenshot_167
    4. On the Create screen shown in Figure 5-42, provide the names for the Notification Hub and the Notification Hub namespace; select the location, subscription, resource group, and pricing tier; and click Create.
    5. Figure 5-42 Create a Notification Hub
      Screenshot_168
    6. After the Notification Hub is created, you can set up a push notification service using any one of the supported providers (see Figure 5-43).
    7. Figure 5-43 Setting up push notification service
      Screenshot_169

      Need More Review?

      Azure Notification Hubs and Google Firebase Cloud Messaging (FCM)

    Microsoft has very comprehensive documentation on registering platform notification services (PNS) with Notification Hubs. For detailed information, please see the Microsoft docs article “Push notifications to Android devices by using Azure Notification Hubs and Google Firebase Cloud Messaging” at https://docs.microsoft.com/en-us/azure/notification-hubs/notification-hubs-android-push-notification-google-fcm-get-started.

      Exam Tip

    It’s very likely that if you’re using an existing push notification mechanism, you may wonder how you can switch to Azure Notification Hubs seamlessly. Azure Notification Hubs supports bulk import of device registrations. Please take a look at how to export and import Azure Notification Hubs registrations in bulk at https://azure.microsoft.com/en-us/pricing/details/appservice/plans.

      Azure Event Hubs

    Event Hubs is a big-data pipeline meant to take a massive real-time stream of event data from various event producers. Unlike Event Grid, Event Hubs allows you to capture, retain, and replay event data to a variety of stream-processing systems and analytics services, as shown in Figure 5-44.

      Figure 5-44 Azure Event Hubs
      Screenshot_170

    Event Hubs provides a distributed stream-processing platform with low latency and seamless integration with other ecosystems such as Apache Kafka, which can publish events to Event Hubs without intrusive configuration or code changes. It supports advanced messaging protocols such as HTTP, AMQP 1.0, and Kafka 1.0, and major industry languages (.NET, Java, Python, Go, Node.js). The partitioned consumer model of Event Hubs makes it massively scalable: it allows you to spread the big data stream of events among different partitions, enabling parallel processing and giving each consumer its own partitioned event stream.
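
    As a concrete illustration, the following is a minimal sketch of an event producer that publishes a small batch of events to an event hub using the Azure.Messaging.EventHubs .NET SDK; the connection string and event hub name are hypothetical placeholders.

    	
    using System;
    using System.Text;
    using System.Threading.Tasks;
    using Azure.Messaging.EventHubs;
    using Azure.Messaging.EventHubs.Producer;

    class EventHubProducerSample
    {
        static async Task Main()
        {
            // Hypothetical Event Hubs namespace connection string and event hub name.
            const string connectionString = "<event-hubs-namespace-connection-string>";
            const string eventHubName = "az300eventhub";

            await using var producer = new EventHubProducerClient(connectionString, eventHubName);

            // Batch a few telemetry events; events in a batch are routed to a partition together.
            using EventDataBatch batch = await producer.CreateBatchAsync();
            for (int i = 0; i < 3; i++)
            {
                var eventData = new EventData(
                    Encoding.UTF8.GetBytes($"{{\"deviceId\":\"sensor-{i}\",\"reading\":{20 + i}}}"));
                if (!batch.TryAdd(eventData))
                {
                    throw new InvalidOperationException("Event is too large for the batch.");
                }
            }

            // Publish the batch to the event hub; consumers read it from their partitions.
            await producer.SendAsync(batch);
            Console.WriteLine("Batch of events published.");
        }
    }
    	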

      Create an event hub

    To create an event hub, you first need to create a container called a namespace, just as we created namespaces for the other messaging services earlier.

      Use the following steps to create and configure Event Hubs:

    1. Log in to the Azure portal and navigate to create a resource on the left pane. Search for Event Hubs. Click Create as shown in Figure 5-45.
    2. Figure 5-45 Create an Event Hubs namespace
      Screenshot_171
    3. The next screen that opens is shown in Figure 5-46. On the screen, do the following:
    4. A. Enter a name for the namespace.

    B. Choose the pricing tier (Basic or Standard). If you need message retention customization, choose Standard.

      C. Select the subscription, resource group, and a location.

      D. Choose the desired throughput.

      E. Click Create.

      Figure 5-46 The Event Hubs namespace Create form
      Screenshot_172
    5. After the namespace has been created, navigate to the resource. Now you can click the +Event Hub icon to create an event hub under it (see Figure 5-47).
    6. Figure 5-47 The Event Hub namespace blade
      Screenshot_173
    7. Provide the name of the event hub and choose the retention and partition settings as needed. Click Create. (See Figure 5-48.)
    8. Figure 5-48 The Create Event Hub screen
      Screenshot_174

    Note

    Event Grid Versus Event Hubs

    Event Grid and Event Hubs both offer some similar capabilities, but each is designed to address a specific business scenario. Event Grid isn’t meant for queuing data or storing it for later use. Instead, because of its integration with Function Apps and Logic Apps, it’s meant for distributing events instantaneously and triggering application logic to react to those events and take action.

    Azure Service Bus

    Like Event Hubs and Event Grid, Azure Service Bus offers messaging capability at enterprise scale, enabling loosely coupled applications to connect asynchronously and scale independently.

    Service Bus provides enterprise messaging capabilities, including queuing, publish/subscribe, and advanced integration patterns, for applications hosted in the cloud or on-premises. It has the following key features:

    Message persistence and duplicate detection

    First-in-first-out order of message delivery

    Poison message handling

    High availability, geo-replication, and built-in disaster recovery

    Transactional integrity; the ability to read or write queued messages as part of a single transaction

    Support for advanced messaging protocols such as HTTP and AMQP 1.0, and major industry languages (.NET, Java, Python, Go, Node.js, and Ruby)

    Service Bus allows you to implement a publish/subscribe model using topics and subscriptions. You create one or more topics in a Service Bus namespace, send messages to those topics, and have subscribers receive the messages from the topics they have subscribed to. See Figure 5-49.

    Figure 5-49 Service Bus topics and subscriptions
    Screenshot_175
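
    To show how a publisher and a subscriber interact with a topic in code, here is a minimal sketch using the Azure.Messaging.ServiceBus .NET SDK; the connection string, topic name, and subscription name are hypothetical placeholders and assume the topic and subscription already exist.

    	
    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    class TopicPubSubSample
    {
        static async Task Main()
        {
            // Hypothetical namespace connection string, topic, and subscription names.
            const string connectionString = "<service-bus-namespace-connection-string>";
            const string topicName = "az300topic";
            const string subscriptionName = "az300subscription";

            await using var client = new ServiceBusClient(connectionString);

            // Publisher: send a message to the topic.
            ServiceBusSender sender = client.CreateSender(topicName);
            await sender.SendMessageAsync(new ServiceBusMessage("Order 12345 created"));

            // Subscriber: receive the message from the subscription and complete it
            // so that it is removed from the subscription.
            ServiceBusReceiver receiver = client.CreateReceiver(topicName, subscriptionName);
            ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(10));
            if (message != null)
            {
                Console.WriteLine($"Received: {message.Body}");
                await receiver.CompleteMessageAsync(message);
            }
        }
    }
    	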

    Use the following steps to create and configure the Azure Service Bus, topics, and subscriptions. Like any other messaging service, you have to create a container called namespace first.

    1. Log in to the Azure portal and navigate to create a resource on the left pane. Search for Service Bus. Click Create (see Figure 5-50).
    2. Figure 5-50 Service Bus resource
      Screenshot_176
    3. The next screen that opens is shown in Figure 5-51. On the screen, do the following:
    4. A. Enter a name for the namespace.

    B. Choose the pricing tier (Basic or Standard). Select at least the Standard pricing tier if you need topics.

      C. Select the subscription and resource group in which you want to create the namespace.

      D. Select a location for the namespace.

      E. Select Create.

      Figure 5-51 Create a Service Bus namespace
      Screenshot_177
    5. After the namespace has been created, navigate to the resource. Here you can create queues or topics under the namespace. See Figure 5-52.
    6. Figure 5-52 The Service Bus Resource blade
      Screenshot_178

    Exam Tip

    As an architect, the AZ-300 exam expects you to make the right decisions to solve complex problems and choose the appropriate service among the variety of messaging options. Please take a look at the advanced features available in the Service Bus queue offering, listed in the Microsoft documentation at https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview#advanced-features.

    Configure queries across multiple products

    Monitoring, collecting, and analyzing telemetry data across products and services is a crucial part of any platform. It helps you maximize performance and availability by identifying issues and correcting them before they affect production users. This is where Azure Monitor comes in.

    Azure Log Analytics is the primary data store for telemetry data emitted from a variety of data sources, including Azure services, on-premises datacenters, and Application Insights custom tracing, which emits application metric data as part of performance and health monitoring. Azure Monitor now integrates with Log Analytics and Application Insights, which enables you to query and analyze data across multiple Log Analytics workspaces and Application Insights apps within or across Azure subscriptions.

    Querying telemetry and log data across Log Analytics workspaces and Application Insights

    To query across Log Analytics workspaces, the identifier workspace() is used. To query across Application Insights apps, the identifier app() is used.

    The following example (see Figure 5-53) queries records across two separate Application Insights resources, 'AZ300appinsights-1' and 'AZ300appinsights-2', against the requests table in the two different apps. It counts the total number of records, regardless of which application holds each record.

    	
    union app('AZ300appinsights-1').requests, app('AZ300appinsights-2').requests, requests
    | summarize count() by bin(timestamp, 1h)
    	
    
    Figure 5-53 Queries across Application Insights using Azure Monitor
    Screenshot_179

    In a similar way, the following query shows an example of querying data across two Log Analytics workspaces. It counts the total records of Heartbeat, regardless of which workspace holds each record:

    	
    union Heartbeat, workspace("AZ300ExamRefLAWorkspace").Heartbeat
    | summarize count()
    	
    

    Skill 5.4: Develop for autoscaling

    The key benefits of cloud computing are agility and elasticity: you can provision resources on demand to keep the desired performance of the application intact as load grows at any given time. When the load goes down, or there is no longer a need for additional resources, you can remove or deallocate them to minimize cost.

    Azure provides you with built-in capability for most of its services to dynamically scale them as the need arises.

    Scaling can be achieved in the following two ways:

    Horizontal Scaling (also called scale out or scale in) You add or remove resources dynamically without affecting the availability of your application or workload. An example of horizontal scaling is a virtual machine scale set: you’re running two VM instances behind a load balancer, and when load increases, you add two more instances to spread the load among four VM instances instead of two. Scaling out doesn’t require downtime or affect availability because the load balancer automatically routes traffic to new instances when they are in a ready state. Conversely, the additional instances are gracefully removed automatically as load stabilizes.

    Vertical Scaling You add or remove capacity on existing resources in terms of compute and storage. For example, you move an existing VM from one tier (say, general purpose) to a compute-optimized tier. Vertical scaling often requires the VM to be redeployed; hence, it may cause temporary unavailability of the service while the upgrade is happening. Therefore, it’s a less common approach to scaling.

    This skill covers how to:

    • Implement autoscaling rules and patterns
    • Implement code that addresses transient state

    Implement autoscaling rules and patterns

    Azure provides a built-in autoscaling feature for the majority of its platform services, based on demand or metrics. Azure Monitor gives you a common platform to schedule and configure autoscaling for the supported services. The following services use Azure autoscaling:

    Azure Virtual Machines Azure VMs use virtual machine scale sets, a set of identical virtual machines grouped together for autoscaling. Scaling rules can be configured either based on metrics, such as CPU usage, memory usage, and disk I/O, or based on a schedule to trigger scale out to meet a service level agreement (SLA) on performance.

    Azure App Service Azure App Service comes with a built-in mechanism to configure autoscaling rules based on resource metrics such as CPU usage, memory demand, and HTTP queue length, or on specific schedules. The rules are set on an App Service plan and apply to all apps hosted on it.

    Service Fabric Like virtual machine scaling, Service Fabric also supports autoscaling using virtual machine scale sets.

    Cloud Services This is Microsoft’s legacy PaaS offering, but it does support autoscaling at the individual role level (web or worker).

    Azure Functions Azure Functions is Microsoft’s serverless compute option. The autoscaling options depend on the hosting plan you choose. If you select an App Service plan, scaling works the same as discussed for Azure App Service. However, if you choose the on-demand Consumption plan, you don’t have to configure any autoscaling rules; by the nature of the service, it allocates the required compute on demand as your code runs.

    In addition to configuring autoscaling rules using Azure Monitor, you can set up custom metrics using Application Insights and define a custom autoscaling solution on top of them. A custom autoscaling solution requires careful thought and is warranted only if none of the platform-provided rules meets your application’s scaling requirements.

    Note

    Service Limits, Quotas, and Constraints

    You must pay careful attention to limits when designing a scalable solution on Azure. There are constraints and limits on services per region and at the subscription level. Please visit the article “Azure subscription and service limits, quotas, and constraints” at https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits.

    Code design considerations and best practices for autoscaling

    The Azure platform-level capabilities for resource autoscaling may not be fruitful if the application isn’t designed to scale. Consider the following points to make the best use of the autoscaling features:

    The application must be designed to support horizontal scaling. Always develop services to be stateless so that requests can be spread evenly across healthy instances, and avoid using session affinity or sticky sessions. Consider using a queue-based load leveling pattern in which the application posts requests as messages in a queue, and the messages are picked up by any background worker instance for processing (a minimal sketch follows this list).

    For better resource utilization and cost-effectiveness, avoid long-running tasks in a single monolithic application; break them up to run on separate instances using a queue-based mechanism. This approach lets you independently scale only the application components that require high compute power, as opposed to scaling everything.

    Because autoscaling (scale out and scale in) is not an immediate process, it takes time for the system to react to the autoscaling rules and bring additional instances to a ready state. Consider a throttling pattern to reject requests that exceed the defined threshold limit.

    Always configure a scale-in rule in combination with a scale-out rule. Having only one rule ends up scaling in only one direction (out or in) until it reaches the maximum or minimum instance count, which is not an optimal approach.

    Always keep an adequate margin between the minimum and maximum instance counts. For example, if your rule sets the minimum instance count to 2, the maximum to 2, and the default to 2, autoscaling will never be triggered.

    Keep an adequate margin between the threshold values of autoscale metrics, with a legitimate cool-down period. For example, ideal scale-out and scale-in values for a CPU metric would be

    Increase instances by two counts when CPU% >= 90 over 10 minutes

    Decrease instances by two counts when CPU% <= 60 over 15 minutes

    Setting the metric threshold values too close to each other would produce undesired results.
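
    The following is a minimal sketch of the queue-based load leveling idea mentioned in the list above, using the Azure.Storage.Queues .NET SDK: the front end posts work items as queue messages, and any number of background worker instances drain the queue at their own pace. The connection string and queue name are hypothetical placeholders.

    	
    using System;
    using System.Threading.Tasks;
    using Azure.Storage.Queues;
    using Azure.Storage.Queues.Models;

    class QueueLoadLeveling
    {
        const string ConnectionString = "<storage-account-connection-string>"; // hypothetical
        const string QueueName = "workitems";                                  // hypothetical

        // Front end: enqueue a work item instead of doing the heavy work inline.
        public static async Task EnqueueWorkItemAsync(string payload)
        {
            var queue = new QueueClient(ConnectionString, QueueName);
            await queue.CreateIfNotExistsAsync();
            await queue.SendMessageAsync(payload);
        }

        // Background worker: any instance can pick up and process messages,
        // so workers scale out independently of the front end.
        public static async Task ProcessWorkItemsAsync()
        {
            var queue = new QueueClient(ConnectionString, QueueName);
            QueueMessage[] messages = await queue.ReceiveMessagesAsync(maxMessages: 10);
            foreach (QueueMessage message in messages)
            {
                Console.WriteLine($"Processing: {message.Body}");
                // Delete the message once it has been processed successfully.
                await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
            }
        }
    }
    	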

    Common autoscaling metrics

    As noted in the previous section, Azure Monitor allows you to set up autoscaling rules based on built-in platform metrics for Azure App Service, virtual machine scale sets, Cloud Services, and API Management services.

    Scale based on metrics The standard metrics used for autoscaling are built-in host metrics available on VM instances, such as the following:

    CPU usage

    Memory Demand

    Disk read/writes

    Autoscaling rules also use the metrics from one of the following sources:

    Application Insights

    Service Bus Queue

    Storage account

    Scale based on schedule Sometimes you may want to configure scaling (in or out) based on a schedule. This makes sense when you have predictable usage patterns and want the system ready to meet demand proactively, as opposed to reactive scaling based on metrics.

    Custom metrics Custom metrics enable you to leverage Application Insights to meet the scaling demands of complex scenarios when none of the platform-provided scaling options meets your requirements.

    Need More Review?

    Autoscaling Guidance and Best Practices

    You can find additional information about the autoscaling options and best practices at https://docs.microsoft.com/en-us/azure/azure-monitor/platform/autoscale-best-practices.

    To set up autoscaling rules based on metrics for an Azure App Service plan, use the following steps. This section assumes you have an App Service plan already created:

    1. Log in to the Azure portal and navigate to Azure Monitor.
    2. On the left menu, click Autoscale, as shown in Figure 5-54.
    3. Figure 5-54 Azure Monitor
      Screenshot_180
    4. On the right side, filter and select the required resource for autoscaling configuration.
    5. Set up autoscale rules (Scale out and Scale In), as shown in Figure 5-55.
    Figure 5-55 Autoscaling rules
    Screenshot_181

    Exam Tip

    The AZ-300 exam expects that you know what is supported in different tiers of the app service plans. To understand what is supported in regard to autoscaling and other features, please look at the Microsoft documentation located at https://azure.microsoft.com/en-us/pricing/details/appservice/plans/.

    Autoscaling and singleton application

    Azure platform offerings such as Azure WebJobs and Azure Functions enable you to run your code as a background job and leverage the on-demand horizontal autoscaling capability of the platform. By default, your code runs on all instances of the WebJob or function app. For Azure WebJobs, you can configure your code to run on a single instance by using the setting is_singleton: true in the settings.job configuration file or by adding the Singleton attribute that comes with the Azure WebJobs SDK, as shown in the following code snippet:

    	
    [Singleton]
    public static async Task ProcessJob([BlobTrigger("file")] Stream bytearray)
    {
    // Process the file.
    }
    	
    

    Similarly, with Durable Azure Functions, you can configure a singleton background-job orchestration by specifying the instance ID of an orchestrator when you create it.
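
    As a rough illustration of that approach, the following sketch uses an HTTP-triggered starter function with the Durable Functions 2.x extension. It starts the orchestrator function named BackgroundJobOrchestration (a hypothetical name) only if no instance with the fixed instance ID is already running.

    	
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.DurableTask;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Extensions.Logging;

    public static class SingletonOrchestrationStarter
    {
        [FunctionName("SingletonStarter")]
        public static async Task<HttpResponseMessage> Run(
            [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
            [DurableClient] IDurableOrchestrationClient starter,
            ILogger log)
        {
            // A fixed instance ID ensures that only one orchestration instance runs at a time.
            const string instanceId = "SingletonBackgroundJob";

            DurableOrchestrationStatus existingInstance = await starter.GetStatusAsync(instanceId);
            if (existingInstance == null
                || existingInstance.RuntimeStatus == OrchestrationRuntimeStatus.Completed
                || existingInstance.RuntimeStatus == OrchestrationRuntimeStatus.Failed
                || existingInstance.RuntimeStatus == OrchestrationRuntimeStatus.Terminated)
            {
                // No instance is running, so start a new one with the well-known instance ID.
                // "BackgroundJobOrchestration" is a hypothetical orchestrator function name.
                await starter.StartNewAsync("BackgroundJobOrchestration", instanceId);
                log.LogInformation("Started orchestration with ID = '{instanceId}'.", instanceId);
                return starter.CreateCheckStatusResponse(req, instanceId);
            }

            // An instance is already running; reject the request.
            return new HttpResponseMessage(HttpStatusCode.Conflict)
            {
                Content = new StringContent($"An instance with ID '{instanceId}' already exists.")
            };
        }
    }
    	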

    Singleton Implementation for session persistence

    A singleton pattern allows only one instance of a class to be created and shared among all callers. The following code snippet shows a straightforward implementation of a singleton class in which the instance is created and stored in the local memory of the VM. That creates a bottleneck for scalability because the state will not be persisted when a new VM instance is created, unless you store it in some external system such as Azure Storage, an external database, or a caching mechanism such as Redis cache.

    	
    public sealed class SingletonAz300 {
        private static SingletonAz300 instance = null;
        private static readonly object threadlock = new object();

        // A private constructor prevents callers from creating additional instances.
        private SingletonAz300() { }

        public static SingletonAz300 Instance {
            get {
                lock (threadlock) {
                    if (instance == null) {
                        instance = new SingletonAz300();
                    }
                    return instance;
                }
            }
        }
    }
    	
    

    Storing state using external providers requires additional development effort and may not suit your performance requirements, because you have to interact with an external service to save and retrieve state, which comes at a latency and throughput cost.

    Azure Service Fabric Stateful Service allows you to maintain the state reliably across the nodes of the service instance locally. Stateful service comes with a built-in state manager called Reliable Collections that enables you to write highly scalable, low-latency applications.

    When you create a stateful service using the Visual Studio (2017 or later) Service Fabric Application template, the template wires up the state provider by default in the service’s entry-point method RunAsync, as shown in the following code snippet:

    	
    protected override async Task RunAsync(CancellationToken cancellationToken) {
        var myDictionary = await this.StateManager
            .GetOrAddAsync<IReliableDictionary<string, long>>("myDictionary");
        while (true) {
            cancellationToken.ThrowIfCancellationRequested();
            using (var tx = this.StateManager.CreateTransaction()) {
                var result = await myDictionary.TryGetValueAsync(tx, "Counter");
                ServiceEventSource.Current.ServiceMessage(this.Context,
                    "Current Counter Value: {0}",
                    result.HasValue ? result.Value.ToString() : "Value does not exist.");
                await myDictionary.AddOrUpdateAsync(tx, "Counter", 0, (key, value) => ++value);
                // If an exception is thrown before calling CommitAsync, the transaction aborts,
                // all changes are discarded, and nothing is saved to the secondary replicas.
                await tx.CommitAsync();
            }
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
    	
    

    Reliable Collections can store any .NET type, including custom types. The data stored in Reliable Collections must be serializable because the data is persisted to the local disk of the Service Fabric replicas.

    Implement code that addresses the transient state

    When you’re designing an application for the cloud, the recommended approach is to design it to handle failures and errors gracefully. You can’t prevent failures from happening, and things can go wrong; for example, a temporary network outage in the datacenter or a temporary service interruption. Transient faults that leave a service temporarily unstable are not uncommon in any distributed application. Here are some examples of transient faults:

    The service is temporarily unavailable because of network connectivity issues.

    The service is busy and returns a timeout error.

    Degraded performance of the service.

    The following design patterns are recommended for handling transient faults:

    Retry Pattern In this pattern, when a call to the remote service fails, the following strategies are considered to handle the failure.

    Cancel When the fault is not transient and the call to the remote service is likely to fail again, the application immediately returns an exception instead of retrying the call. An example of such a fault is an authentication failure caused by incorrect credentials.

    Retry If the reported fault is unusual or rare, the failure might have been caused by exceptional circumstances, and the application can retry the failed request because it may well succeed. An example of such a fault is a database timeout error caused by long-running queries or deadlocks on the tables.

    Retry After Delay Using back-off logic to add a delay between subsequent retry attempts is recommended when the fault is likely to happen again immediately but could succeed at a later point in time.

    Note

    Design Guidance and Best Practices for Retry

    Microsoft provides built-in functionality as part of the service SDKs to implement a retry mechanism for transient faults. For more information, please visit the Microsoft documentation “Retry guidance for specific services” at https://docs.microsoft.com/en-us/azure/architecture/best-practices/retry-service-specific.

    The following code snippet shows a custom C# routine that handles transient faults using retry logic with a delay. The transient errors vary based on the service you’re using. The RemoteServiceCall function calls the remote service; if an exception happens, the catch block in the following code checks whether the error is transient. If the error is temporary in nature, the program retries the call gracefully.

    	
    public class RetryAZ300 {
        private readonly int retryCount = 3;
        private readonly TimeSpan delay = TimeSpan.FromSeconds(5);

        public async Task RetryWithDelay() {
            int currentRetry = 0;
            for (;;) {
                try {
                    // Call the remote service.
                    await RemoteServiceCall();
                    // The call succeeded, so break out of the retry loop.
                    break;
                }
                catch (Exception ex) {
                    currentRetry++;
                    // Check whether the exception thrown was a transient exception
                    // based on the logic in the error detection strategy.
                    // Determine whether to retry the operation, as well as how
                    // long to wait, based on the retry strategy.
                    if (currentRetry > retryCount || !IsTransientInNature(ex)) {
                        // If this is not transient, or the retries are exhausted,
                        // do not retry; rethrow the exception.
                        throw;
                    }
                }
                // Wait before retrying the operation.
                await Task.Delay(delay);
            }
        }
    }
    	
    

    Circuit Breaker Pattern The Circuit Breaker pattern is used for transient faults that are long lasting, where it’s not worthwhile to retry an operation that’s most likely to fail. The Circuit Breaker pattern differs from Retry in that a retry attempt assumes the operation will succeed, whereas the Circuit Breaker prevents the application from making an attempt that is likely to fail. The pattern is called Circuit Breaker because it resembles an electrical circuit breaker. A circuit breaker has three states:

    Closed In the Closed state, which is the default, requests to the service succeed. If there is a transient failure, the circuit breaker increments the failure count, and as soon as it exceeds the threshold value within a given period, the circuit breaker changes its state from Closed to Open.

    Open In the Open state, an exception is returned immediately, before any connection request is made to the remote service.

    Half-Open The circuit breaker starts a timer as soon as the state changes from Closed to Open. When the timer expires, based on the value you define, the circuit breaker enters the Half-Open state and makes a limited number of requests to the remote service to see whether it has been restored. If the requests are successful, the circuit breaker switches to the Closed state; otherwise, it goes back to the Open state and restarts the timer.

    The following pseudocode shows an example of a Circuit Breaker implementation using C#:

    	
    /// <summary>
    /// A sample implementation of the Circuit Breaker pattern.
    /// </summary>
    public class CircuitBreakerForAZ300 {
        // CircuitBreakerStateEnum enum, used for the three states of a circuit breaker.
        enum CircuitBreakerStateEnum {
            Closed = 0, Open = 1, HalfOpen = 2
        }
        private CircuitBreakerStateEnum State { get; set; }
        private readonly object halfOpenSyncObject = new object();
        private bool IsClosed { get; set; }
        private bool IsOpen { get; set; }

        // The default constructor of the CircuitBreakerForAZ300 class sets the default
        // state and configuration.
        public CircuitBreakerForAZ300(Action remoteServiceCall) {
            IsClosed = true;
            State = CircuitBreakerStateEnum.Closed;
            OpenToHalfOpenWaitTime = TimeSpan.FromMinutes(3);
            Action = remoteServiceCall;
        }

        // The Action denotes the remote service API call.
        private Action Action { get; set; } // Call to the remote service
        private DateTime LastStateChangedDateTimeUTC { get; set; }

        // Threshold configuration used to switch states.
        // The following properties hold the timer and threshold values used to switch
        // from one state to another in the circuit breaker.
        private TimeSpan OpenToHalfOpenWaitTime { get; set; }
        private Exception LastKnownException { get; set; }
        private int MaxAllowedFailedAttempts { get; set; }
        private int FailedAttempts { get; set; }
        private int SuccessfulAttempts { get; set; }

        // The following public RemoteServiceCall method is invoked from the client.
        // It checks whether the state is Open; if the timer for the Open state has expired,
        // it switches to HalfOpen() and makes an attempt against the remote service.
        public void RemoteServiceCall(Action action) {
            if (IsOpen) {
                if (LastStateChangedDateTimeUTC + OpenToHalfOpenWaitTime < DateTime.UtcNow) {
                    bool lockTaken = false;
                    try {
                        Monitor.TryEnter(halfOpenSyncObject, ref lockTaken);
                        if (lockTaken) {
                            // Set the circuit breaker state to HalfOpen.
                            HalfOpen();
                            // Attempt the operation.
                            action();
                            // If this action succeeds, close the state and allow other
                            // operations. In an ideal case, instead of immediately returning
                            // to the Closed state, a counter is recommended to check the
                            // number of successful attempts and then switch the circuit
                            // breaker to the Closed state.
                            ClosedState();
                            return;
                        }
                    }
                    catch (Exception ex) {
                        // If there is an exception in the request made in the HalfOpen
                        // state, switch to the Open state immediately.
                        OpenState(ex);
                        throw;
                    }
                    finally {
                        // Release the lock taken for the HalfOpen probe.
                        if (lockTaken) {
                            Monitor.Exit(halfOpenSyncObject);
                        }
                    }
                }
                // The Open timeout hasn't yet expired. Throw the last known exception.
                throw new CircuitBreakerOpenException(LastKnownException);
            }
            // If the state is already Closed, this code executes; the remote service is healthy.
            try {
                action();
            }
            catch (Exception ex) {
                // Log the exception, track the failure count, and switch to the Open state
                // when the failure count exceeds MaxAllowedFailedAttempts.
                throw;
            }
        }
    }
    	
    

    Chapter summary

    Azure Cosmos DB is a massively scalable NoSQL database with turnkey global distribution for high availability and disaster recovery.

    Azure Cosmos DB automatically protects your data with encryption at rest and in transit and offers different ways to restrict access to database resources, such as the network firewall and users and permissions.

    Azure Cosmos DB has major industry certifications to comply with compliance obligations.

    Azure Cosmos DB has five ways of setting up the consistency level for data read and write operations to meet your business scenarios.

    Azure Cosmos DB supports the native SQL API, MongoDB API, Table API, Cassandra API, and Gremlin API, making it easy to migrate to Cosmos DB.

    Azure has a variety of options to run your relational database workload using its core products: Azure SQL Database, Azure SQL Data Warehouse, Azure Database for MySQL, Azure Database for PostgreSQL, and Azure Database for MariaDB.

    Azure SQL Database has three types of database offerings: single database, elastic pool, and managed instance. Each offering has its own security, performance, and redundancy options.

    Azure SQL Database comes with two purchasing models: DTU and vCore. Each has its own scalability, performance, backup, and restore capabilities for business continuity and disaster recovery.

    Azure SQL Database allows you to create up to four readable secondary copies of a database in the same or different datacenters.

    Elastic pools in Azure SQL Database allow you to achieve high performance cost-effectively by sharing unused DTUs among the databases in the pool.

    The Azure Integration suite of services provides massively scalable, performant, and highly resilient services for messaging- and event-based architectures.

    Azure Logic Apps, a designer-first serverless service, allows you to define workflows that trigger actions based on events by connecting services using more than 200 connectors as well as custom connectors.

    Azure Event Grid is an event routing service that allows a publisher to send an event and deliver it to one or more event handlers in real time.

    Azure Relay provides the ability to securely expose services running on-premises or behind the corporate network to services running in the cloud.

    Azure Notification Hubs gives an ability to send push notifications to a variety of devices on different platforms at massive scale.

    Azure Event Hubs is a big data pipeline solution that ingests a real-time stream of data from different devices and platforms. It then performs aggregation and exposes the data to different stream analytics or storage services for data modeling and reporting.

    Azure Service Bus is a scalable and highly available message broker service that provides enterprise messaging capabilities. It offers a queueing mechanism with the unique ability to deliver messages in the order they are received, as well as a publish/subscribe mechanism to send a message to one or many subscribers.

    Azure Monitor provides one common platform to monitor application telemetry data for performance and availability and gives you the ability to query data across services.

    Azure provides built-in options to automatically scale an application horizontally. The scaling methods vary from service to service.

    Azure App Service has a built-in scaling mechanism. Azure VMs and Service Fabric can be scaled using virtual machine scale sets.

    Azure Functions can be configured to scale automatically with no configuration using the Consumption plan, or it can be scaled using an App Service plan with a specific scaling configuration.

    Horizontal scaling rules can be either configured based on metrics like CPU or memory consumption or on a schedule.

    Azure Service Fabric allows you to configure a service as either stateless or stateful. A stateful service automatically manages its state using Reliable Collections locally on each cluster node.

    The Retry and Circuit Breaker patterns allow you to handle transient faults across Azure services elegantly.

    Thought experiment

    In this thought experiment, demonstrate your knowledge and skills that you have acquired throughout the chapter. The answers to the thought experiment are given in the next section.

    You’re an architect for an online education institution. The institution has its own IT and software development department. The institution has a student base across the world; it provides online study courses on various subjects, conducts exams, and awards degrees upon successful completion of a course. The course content is made available online during the weekdays, and instructor-led training is held over the weekend. The online applications of the institution have a NoSQL back end and are hosted in the United States. The institution is facing several challenges. First, students around the world report that the applications for online courses and practical exams crash and work slowly at times, and that the machines given to them on the weekend for hands-on work run very slowly. Second, students are sometimes not notified about class schedule changes. The management of the institution wants to leverage cloud technologies to address these challenges and approaches you to answer the following questions:

    1. How can the web application be made available close to the geo-location of the students and scaled based on the number of unique concurrent student logins during exams?
    2. How can students be notified of any changes in the courses and exam schedules in real time?
    3. How can administrators of the institution be notified when new VMs are created and torn down during weekend classes?

    Thought experiment answers

    This section contains solutions to the thought experiment.

    1. You would consider building the application using Azure Web Apps with Cosmos DB as its back-end database. The application can be hosted in different datacenters behind Traffic Manager, and the back-end Cosmos DB can be configured with geo-distribution based on regional requirements. The Cosmos database and the application can be configured with the desired throughput and autoscaling to meet the demand of unique concurrent user logins.
    2. Use Azure Notification Hubs to send push notifications in real time for any changes in the exam schedules or courses. Notification Hubs can send platform-agnostic notifications across different mobile platforms.
    3. Using Azure Logic Apps, a workflow can be configured to send emails to administrators when VMs are created during weekend classes and shut down after hours.