Chapter 5 Monitor, troubleshoot, and optimize Azure solutions

Providing a good experience to your users is one of the key factors for the success of your application. Several factors affect the user's experience, such as good user interface design, ease of use, good performance, and a low failure rate. You can ensure that your application performs well by assigning more resources to it, but if usage is low, you might be wasting resources and money.

To ensure that your application is working correctly, you need to deploy a monitoring mechanism that helps you to get information about your application’s behavior. This is especially important during peak usage periods or failures. Azure provides several tools that help you to monitor, troubleshoot, and improve the performance of your application.

Skills covered in this chapter:

Skill 5.1: Develop code to support scalability of apps and services

Skill 5.2: Integrate caching and content delivery within solutions

Skill 5.3: Instrument solutions to support monitoring and logging

Skill 5.1: Develop code to support scalability of apps and services

Azure provides several out-of-the-box high-availability and fault-tolerance features. Although these Azure features improve the resiliency of your application, you need to understand when taking advantage of these features is necessary. Just adding more resources to an application doesn’t mean that your application will perform better, and adding more resources won’t increase performance or resiliency if the application is not aware of these changes.

You usually add more resources when your application is at peak usage or you remove resources from your application when the usage is low. To be efficient, you can configure Azure resources to automatically add or remove resources based on schedules or conditions that you can configure.

This skill covers how to:

  • Implement autoscaling rules and patterns
  • Implement code that handles transient faults

Implement autoscaling rules and patterns

One of the biggest challenges you face when deploying your application in a production environment is ensuring that you provide enough resources so that your application performs as expected. Determining the number of resources you should allocate is the big question when it comes to configuring the resources for your application. If you allocate too many resources, your application will perform well during usage peaks, but you are potentially wasting resources. If you allocate too few resources, you save money, but your application may not perform well during usage peaks. Also, anticipating heavy usage peaks is very difficult. This is especially true for applications that have unpredictable usage patterns.

Fortunately, Azure allows you to dynamically assign more resources to your application when you need them. Autoscaling is the action of automatically adding or removing resources to an Azure service and providing needed computing power for your application in each situation. An application can scale in two different ways:

Vertically You add more computing power by adding more memory, CPU resources, and IOPS to the application. At the end of the day, your application runs on a virtual machine. It doesn't matter whether you use an IaaS (Infrastructure as a Service) virtual machine, Azure App Service, or Azure Service Fabric; you are using virtual machines under the hood. Vertically scaling an application means moving from a smaller VM size to a larger one with more memory, CPU, and IOPS. Vertical scaling requires stopping the system while the VM is resizing. This type of scaling is also known as "scaling up and down."

Horizontally You can also scale your application by creating or removing instances of your application. Each instance of your application is executed in a virtual machine that is part of a virtual machine scale set. The corresponding Azure service automatically manages the virtual machine scale set for you. All these instances of your application work together to provide the same service. The advantage of scaling horizontally is that the availability of your application is not affected because there is no need to reboot all the instances of your application that provide the service. This type of scaling is also known as “scaling out and in.”

When we talk about autoscaling, we refer to horizontal scaling because vertical scaling requires a service interruption while Azure Resource Manager changes the size of the virtual machine. For that reason, vertical scaling is not suitable for autoscaling.

You configure autoscaling based on criteria your application should meet to provide a good performance level. You configure these criteria in Azure by using autoscaling rules. A rule defines which metric Azure Monitor should use to perform the autoscaling. When that metric reaches the configured condition, Azure automatically performs the action configured for that rule. In addition to adding or removing virtual machines from the scale set, the rule can perform other actions, such as sending an email or making an HTTP request to a webhook. You can configure three different types of rules when working with the autoscaling rules:

Time-based Azure Monitor executes the autoscaling rule based on a schedule. For example, if your application requires more resources during the first week of the month, you can add more instances and reduce the number of resources for the rest of the month.

Metric-based You configure the threshold for standard metrics, such as CPU usage, the length of the HTTP queue, or the percentage of memory usage, as shown in Figure 5-1.

Figure 5-1 Configuring a metric-based autoscale rule

Custom-based You can create your own metrics in your application, expose them using Application Insights, and use them for autoscaling rules.
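As a sketch of how an application might expose a custom metric for the custom-based rules described above, the following hedged example uses the Application Insights TelemetryClient (it assumes the Microsoft.ApplicationInsights NuGet package; the class name and the "OrdersInQueue" metric name are invented for illustration):

```csharp
// Hypothetical sketch: emitting a custom metric to Application Insights so
// that an autoscale rule can later be configured against it. The metric name
// "OrdersInQueue" is an invented example.
using Microsoft.ApplicationInsights;

public class OrderProcessor
{
    private readonly TelemetryClient telemetryClient = new TelemetryClient();

    public void ReportQueueDepth(int ordersInQueue)
    {
        // GetMetric() pre-aggregates values locally before sending them to
        // Application Insights, where Azure Monitor can evaluate them.
        telemetryClient.GetMetric("OrdersInQueue").TrackValue(ordersInQueue);
    }
}
```

Once the metric arrives in Application Insights, you can select it as the metric source when you create the autoscale rule.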

You can only use the built-in autoscaling mechanism with a limited group of Azure resource types:

Azure virtual machines You can apply autoscaling by using virtual machine scale sets. All the virtual machines in a scale set are treated as a group. By using autoscaling, you can add virtual machines to the scale set or remove virtual machines from it.

Azure Service Fabric When you create an Azure Service Fabric cluster, you define different node types. A different virtual machine scale set supports each node type that you define in an Azure Service Fabric cluster. You can apply the same type of autoscaling rules that you use in a standard virtual machine scale set.

Azure App Service This service has built-in autoscaling capabilities that you can use to add instances to or remove instances from the Azure App Service. The autoscale rules apply to all apps inside the App Service plan.

Azure Cloud Services This service has built-in autoscaling capabilities that allow you to add or remove resources to or from the roles in Azure Cloud Service.

When you work with the autoscale feature in one of the supported Azure services, you define a profile condition. A profile condition defines the rule that you configure to add or remove resources. You can also define the default, minimum, and maximum allowed instances for this profile. When you define a minimum and maximum, your service cannot shrink below or grow beyond those limits. Also, you can configure the profile for scaling based on a schedule or based on the values of built-in or custom metrics. Use the following procedure to add a metric-based autoscaling rule to an Azure App Service. This rule adds an instance to the Azure App Service plan when the average percentage of CPU usage is over 80 percent for more than 10 minutes:

  1. Open the Azure Portal (https://portal.azure.com).
  2. In the search text box at the top of the Azure Portal, type the name of your Azure App Service.
  3. Click the name of your Azure App Service in the results list.
  4. On the Azure App Service blade, in the navigation menu on the left side of the blade, click the Scale-out (App Service Plan) option in the Settings section.
  5. On the Scale-Out (App Service Plan) blade, on the Configure tab, click the Enable Autoscale button. Autoscale rules are available only for the App Service plans that are Standard size or bigger.
  6. On the Scale-Out (App Service Plan) blade, on the Configure tab, in the Default Auto Created Scale Condition window shown in Figure 5-2, click the Add A Rule link.
    Figure 5-2 Configuring a metric-based autoscale rule
  7. On the Scale Rule panel, in the Criteria section, ensure that CPU Percentage is selected in the Metric Name drop-down menu.
  8. Ensure that the Greater Than value is selected from the Operator drop-down menu.
  9. Type the value 80 in the Threshold text box.
  10. In the Action section, ensure that the Instance Count value is set to 1.
  11. Click the Add button at the bottom of the panel.
  12. On the Scale-Out (App Service Plan) blade, in the Default profile condition, set the Maximum Instance Limit to 3.
  13. Click the Save button at the top-left corner of the blade.

Note: Scale-Out / Scale-In

The previous procedure shows how to add an instance to the App Service plan (scaling out the App Service plan), but it doesn't remove the additional instance once the CPU percentage falls below the configured threshold. You should add a scale-in rule to remove the additional instances once they are no longer needed. You configure a scale-in rule in the same way as the scale-out rule; just set the Operation drop-down menu to the Decrease Count To value.

You can use different common autoscale patterns, based on the settings that we have reviewed so far:

Scale Based On CPU You scale your service (Azure App Service, VM Scale Set, or Cloud Service) based on your CPU. You need to configure a Scale-Out and a Scale-In rule for adding and removing instances to the service. In this pattern, you also set a minimum and a maximum number of instances.

Scale Differently On Weekdays vs. Weekends You use this pattern when you expect that the main usage of your application will happen on weekdays. You configure the default profile condition with a fixed number of instances, and then you configure another profile condition that reduces the number of instances on weekends.

Scale Differently During Holidays You use the Scale Based On CPU pattern, but you add a profile condition to add additional instances during holidays or days that are important to your business.

Scale Based On Custom Metrics You use this pattern with a web application composed of three tiers: front end, back end, and API. The front-end and API tiers communicate with the back-end tier. You define custom metrics in the web application and expose them to Azure Monitor by using Application Insights. Then you can use these custom metrics to add more resources to any of the three tiers.

Exam Tip

Autoscaling allows you to assign resources to your application efficiently. Autoscale rules that add more instances to your application do not remove those instances when the rule condition is not satisfied. As a best practice, if you create a scale-out rule to add instances to a service, you should create the opposite scale-in rule to remove the instance. This ensures that the resources are assigned to your application efficiently.

Need More Review? Autoscale Best Practices

You can find more information about best practices for configuring autoscale rules by reviewing the article at https://docs.microsoft.com/en-us/azure/azure-monitor/platform/autoscale-best-practices

Need More Review? Application Design Considerations

Simply adding more resources to your application doesn’t guarantee that your application will perform well. Your application needs to be aware of the new resources to take advantage of them. You can review some application-design considerations in the article at https://docs.microsoft.com/en-us/azure/architecture/best-practices/autoscaling#related-patterns-and-guidance

Implement code that handles transient faults

Developing applications for the cloud means that your application depends on the resources in the cloud to run your code. As we already reviewed in previous chapters, these resources provide out-of-the-box high availability and fault-tolerance features that make your application more resilient. Azure services use redundant hardware and load balancers. Although these features protect you against big breakdowns, there can be situations that temporarily affect your application, such as automatic failovers or load-balancing operations. Usually, recovery from that kind of transient situation is as simple as retrying the operation your application was performing. For example, if your application was reading a record from a database and you get a timeout error because of a temporary overload of the database, you can retry the read operation to get the needed information.

Dealing with these transient faults leads you to deal with some interesting challenges. Your application needs to respond to these challenges to ensure that it offers a reliable experience to your users. These challenges are:

Detect and classify faults Not all the faults that may happen during application execution are transient. Your application needs to identify whether the fault is transient, long-lasting, or a terminal failure. Even the term "long-lasting failure" depends on the logic of your application, because the amount of time that you consider "long-lasting" depends on the types of operations your application performs. Your application also needs to deal with the different responses that come from different service types. An error that occurs while reading data from a storage system is different from an error that occurs while writing data.

Retry the operation when appropriate Once your application determines that it’s dealing with a transient fault, the application needs to retry the operation. It also needs to keep track of the number of retries of the faulting operation.

Implement an appropriate retry strategy Indefinitely retrying the operation could lead to other problems, such as performance degradation or the resource being blocked. To avoid those performance problems, your application needs to set a retry strategy that defines the number of retries, sets the delay between each retry, and sets the actions that your application should take after a failed attempt. Setting the correct number of retries and the delay between them is a complex task that depends on factors such as the type of resources, the operating conditions, and the application itself.

You can use the following guidelines when implementing a suitable transient fault mechanism in your application:

Use existing built-in retry mechanisms When working with SDKs for specific services, the SDK usually provides a built-in retry mechanism. Before implementing your own retry mechanism, you should review the SDK that you are using to access the services on which your application depends and use its built-in retry mechanism. These built-in retry mechanisms are tailored to the specific features and requirements of the target service. If you still need to implement your own retry mechanism for a service—such as a storage service or a service bus—you should carefully review the requirements of each service to ensure that you correctly manage the faulting responses.

Determine whether the operation is suitable for retrying When an error is raised, it usually indicates the nature of the error. You can use this information to determine whether the error is a transient fault. Once you determine your application is dealing with a transient fault, you need to determine whether retrying the operation can succeed. You should not retry operations that indicate an invalid request, such as calling a service that has suffered a fatal error or looking for an item after receiving an error indicating that the item does not exist in the database. You should implement operation retries if the following conditions are met:

You can determine the full effect of the operation.

You fully understand the conditions of the retry.

You can validate these conditions.

Use the appropriate retry count and interval Setting the wrong retry count could lead your application to fail or could lock resources, which can affect the health of the application. If you set the retry count too low, your application may not have enough time to recover from the transient fault and will fail. If you set the retry count too high or the retry interval too short, you can lock resources that your application is using, such as threads, connections, or memory. This high resource consumption can affect the health of your application. When choosing the appropriate retry count and interval, you need to consider the type of operation that suffered the transient fault. For example, if the transient fault happens during an operation that is part of a user interaction, you should use a short retry interval and count, which avoids having your user wait too long for your application to recover from the transient fault. On the other hand, if the fault happens during an operation that is part of a critical workflow, setting a longer retry count and interval makes sense if restarting the workflow is time-consuming or expensive. Following are some of the most common strategies for choosing the retry interval:

Exponential back-off You use a short time interval for the first retry, and then you exponentially increase the interval time for subsequent retries. For example, you set the initial interval to 3 seconds and then use 9, 27, 81 for the subsequent retries.

Incremental intervals You set a short time interval for the first retry, then you incrementally increase the interval time for the subsequent retries. For example, you set the initial interval to 3 seconds and then use 5, 8, 13, 21 for the subsequent retries.

Regular intervals You use the same time interval for each retry. This strategy is not appropriate in most cases. You should avoid using this strategy when accessing services or resources in Azure. In those cases, you should use the exponential back-off strategy with a circuit breaker pattern.

Immediate retry You retry as soon as the transient fault happens. You should not use this type of retry more than once. The immediate retries are suitable for peak faults, such as network packet collisions or spikes in hardware components. If the immediate retry doesn't recover from the transient fault, you should switch to another retry strategy.

Randomization If your application executes several retries in parallel—regardless of the retry strategy—using the same retry values for all the retries can negatively affect your application. In general, you should use random starting retry interval values with any of the previous strategies. This allows you to minimize the probability that two different application threads start the retry mechanism at the same time in the event of a transient fault.
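The exponential back-off and randomization strategies above can be computed with a few lines of code. The following is a minimal sketch; the class and method names are invented for illustration, and an incremental-interval variant would follow the same shape:

```csharp
// Sketch of retry-interval strategies (invented names, for illustration only).
using System;
using System.Collections.Generic;

public static class RetryIntervals
{
    // Exponential back-off: for an initial interval of 3 seconds this
    // produces 3, 9, 27, 81 seconds, matching the example in the text.
    public static List<TimeSpan> ExponentialBackoff(TimeSpan initial, int retries)
    {
        var intervals = new List<TimeSpan>();
        for (int i = 0; i < retries; i++)
        {
            intervals.Add(TimeSpan.FromSeconds(initial.TotalSeconds * Math.Pow(3, i)));
        }
        return intervals;
    }

    // Randomization: perturb an interval by up to ±20 percent so that
    // parallel retries don't all fire at the same instant.
    public static TimeSpan WithJitter(TimeSpan interval, Random random)
    {
        double factor = 0.8 + (random.NextDouble() * 0.4);
        return TimeSpan.FromSeconds(interval.TotalSeconds * factor);
    }
}
```

In a real application you would combine both: take the next back-off interval and then apply jitter before sleeping.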

Avoid anti-patterns When implementing your retry mechanism, there are some patterns you should avoid:

Avoid implementing duplicated layers of retries. If your operation is made of several requests to several services, you should avoid implementing retries on every stage of the operation.

Never implement endless retry mechanisms. If your application never stops retrying in the event of a transient fault, the application can cause resource exhaustion or connection throttling. You should use the circuit breaker pattern or a finite number of retries.

Never use immediate retry more than once.

Test the retry strategy and implementation Because of the difficulties when selecting the correct retry count and interval values, you should thoroughly test your retry strategy and implementation. You should pay special attention to heavy load and high-concurrency scenarios. You should test this by injecting transient and non-transient faults into your application.

Manage retry policy configuration When you are implementing your retry mechanism, you should not hardcode the values for the retry count and intervals. Instead, you can define a retry policy that contains the retry count and interval as well as the mechanism that determines whether a fault is transient or non-transient. You should store this retry policy in configuration files so that you can fine-tune the policy. You should also implement this retry policy configuration so that your application stores the values in memory instead of continuously rereading the configuration file. If you are using Azure App Service, you should consider using the service configuration shown in Figure 5-3.

Figure 5-3 Storing retry policy settings in the Azure App Service configuration

Log transient and non-transient faults You should include a log mechanism in your application for every transient or non-transient fault that happens. A single transient fault doesn't indicate an error in your application, but if the number of transient faults is increasing, this can be an indicator of a bigger potential failure or a sign that you should increase the resources assigned to the faulting service. You should log transient faults as warning messages instead of errors; using the Error log level could trigger false alerts in your monitoring system. You should also consider measuring and logging the overall time taken by your retry mechanism when recovering from a faulty operation. This allows you to measure the overall impact of transient faults on user response times, process latency, and the efficiency of the application.
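The guidelines above can be sketched as a small retry helper: the caller supplies the fault-classification logic, the retry count is finite, the delay grows exponentially, and transient faults are logged as warnings. This is a hedged illustration, not a production implementation; all names are invented, and a real application would use a logging framework instead of Console.WriteLine:

```csharp
// Sketch of a transient-fault retry helper (invented names, for illustration).
using System;
using System.Threading;

public static class TransientRetry
{
    public static T Execute<T>(
        Func<T> operation,
        Func<Exception, bool> isTransient,   // fault classification supplied by the caller
        int maxRetries = 3,
        double initialDelaySeconds = 1)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception ex) when (isTransient(ex) && attempt < maxRetries)
            {
                // Log as a warning, not an error: a single transient fault is expected.
                Console.WriteLine($"WARN: transient fault (attempt {attempt + 1}): {ex.Message}");
                // Exponential back-off between retries.
                double delay = initialDelaySeconds * Math.Pow(2, attempt);
                Thread.Sleep(TimeSpan.FromSeconds(delay));
            }
        }
    }
}
```

A non-transient fault, or a transient fault that persists past maxRetries, propagates to the caller unchanged, where your application's normal error handling takes over.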

Need More Review? Managing Transient Faults

You can review some general guidelines for implementing a transient fault-handling mechanism by reviewing the following articles:

https://docs.microsoft.com/en-us/azure/architecture/best-practices/transient-faults

https://docs.microsoft.com/en-us/aspnet/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/transient-fault-handling

Need More Review? Useful Patterns

When implementing your retry mechanism, you can use the following patterns:

Retry pattern You can review the details and examples of how to implement the pattern by reading the article at https://docs.microsoft.com/en-us/azure/architecture/patterns/retry.

Circuit Breaker pattern You can review the details and examples of how to implement the pattern by reading the article at https://docs.microsoft.com/en-us/azure/architecture/patterns/circuit-breaker.
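The circuit breaker pattern referenced above can be sketched as a small state machine: after a configurable number of consecutive failures the circuit "opens" and rejects calls immediately, and after a timeout it "half-opens" to allow one trial call. This is a minimal single-threaded illustration with invented names; the Azure Architecture Center article covers the production concerns (thread safety, failure-rate windows) that this sketch omits:

```csharp
// Minimal circuit breaker state machine (invented names, for illustration).
using System;

public class CircuitBreaker
{
    private enum State { Closed, Open, HalfOpen }

    private readonly int failureThreshold;
    private readonly TimeSpan openTimeout;
    private int consecutiveFailures;
    private State state = State.Closed;
    private DateTime openedAt;

    public CircuitBreaker(int failureThreshold, TimeSpan openTimeout)
    {
        this.failureThreshold = failureThreshold;
        this.openTimeout = openTimeout;
    }

    public T Execute<T>(Func<T> operation)
    {
        if (state == State.Open)
        {
            // While open, fail fast instead of hammering the faulting service.
            if (DateTime.UtcNow - openedAt < openTimeout)
                throw new InvalidOperationException("Circuit is open; failing fast.");
            state = State.HalfOpen; // timeout elapsed: allow one trial call
        }
        try
        {
            T result = operation();
            consecutiveFailures = 0;
            state = State.Closed;   // success closes the circuit again
            return result;
        }
        catch
        {
            consecutiveFailures++;
            if (state == State.HalfOpen || consecutiveFailures >= failureThreshold)
            {
                state = State.Open;
                openedAt = DateTime.UtcNow;
            }
            throw;
        }
    }
}
```

Combined with the retry strategies above, the breaker prevents an endless retry loop from exhausting resources when a dependency is down for a long period.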

Skill 5.2: Integrate caching and content delivery within solutions

Any web application that you implement delivers two types of content: dynamic and static.

Dynamic content is the type of content that changes depending on user interaction. An example of dynamic content would be a dashboard with several graphs or a list of user movements in a banking application.

Static content is the same for all application users. Images and PDFs are examples of static content (as long as they are not dynamically generated) that users can download from your application.

If the users of your application access it from several locations across the globe, you can improve the performance of the application by delivering the content from the location nearest to the user. For static content, you can improve the performance by copying the content to different cache servers distributed across the globe. Using this technique, users can retrieve the static content from the nearest location with lower latency, which improves the performance of your application.

For dynamic content, you can use cache software to store the most frequently accessed data. This means your application can return information from the cache, which is faster than reprocessing the data or retrieving it from the storage system.

This skill covers how to:

  • Store and retrieve data in Azure Redis Cache
  • Develop code to implement CDNs in solutions
  • Invalidate cache content (CDN or Redis)

Store and retrieve data in Azure Redis Cache

Redis is an open-source cache system that can work as an in-memory data structure store, database cache, or message broker. Azure Cache for Redis (formerly Azure Redis Cache) is a Redis implementation managed by Microsoft. Azure Cache for Redis has three pricing tiers that provide different levels of features:

Basic This is the tier with the fewest features, the lowest throughput, and the highest latency. You should use this tier only for development or testing purposes. There is no Service Level Agreement (SLA) associated with the Basic tier.

Standard This tier offers a two-node, primary-secondary replicated Redis cache that is managed by Microsoft. This tier has an associated high-availability SLA of 99.9 percent.

Premium This is an enterprise-grade Redis cluster managed by Microsoft. This tier offers the complete group of features with the highest throughput and the lowest latencies. The Redis cluster is also deployed on more powerful hardware. This tier has a high-availability SLA of 99.9 percent.

Note Scaling the Azure Redis Cache Service

You can scale up your existing Azure Redis cache service to a higher tier, but you cannot scale down your current tier to a lower one.

When you are working with Azure Cache for Redis, you can use different implementation patterns that solve different issues, depending on the architecture of your application:

Cache-Aside In most situations, your application stores the data that it manages in a database. Accessing data in a database is a relatively slow operation because it depends on the time needed to access the disk storage system. A solution would be to load the database into memory, but this approach is costly, and in most cases, the database simply doesn't fit in the available memory. One solution to improve the performance of your application in these scenarios is to store the most-accessed data in the cache. When the back-end system changes the data in the database, the same system can also update the data in the cache, which makes the change available to all clients.

Content caching Most web applications use web page templates that use common elements, such as headers, footers, toolbars, menus, stylesheets, images, and so on. These template elements are static elements (or at least don’t change often). Storing these elements in Azure Cache for Redis relieves your web servers from serving these elements and improves the time your servers need to generate dynamic content.

User session caching This pattern is a good idea if your application needs to store a lot of information about user history or data that you need to associate with cookies. Storing too much information in a session cookie hurts the performance of your application. You can save part of that information in your database and store a pointer or index in the session cookie that points to the user's information in the database. If you use an in-memory database, such as Azure Cache for Redis, instead of a traditional database, your application will benefit from the faster access times to the data stored in memory.

Job and message queuing You can use Azure Cache for Redis to implement a distributed queue that executes long-lasting tasks that may negatively affect the performance of your application.

Distributed transactions A transaction is a group of commands that need to complete or fail together. Any transaction needs to ensure that the data is always in a stable state. If your application needs to execute transactions, you can use Azure Cache for Redis for implementing these transactions.
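The cache-aside pattern described above can be sketched with the StackExchange.Redis IDatabase API: try the cache first, fall back to the system of record on a miss, then populate the cache. The repository class, key format, and LoadProductFromDatabaseAsync placeholder are invented for illustration; only StringGetAsync and StringSetAsync are real StackExchange.Redis calls:

```csharp
// Hedged cache-aside sketch using StackExchange.Redis. The class, key format,
// and database-loading placeholder are invented for illustration.
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class ProductRepository
{
    private readonly IDatabase cache;

    public ProductRepository(IDatabase cache) => this.cache = cache;

    public async Task<string> GetProductAsync(string productId)
    {
        string key = $"product:{productId}";

        // 1. Try the cache first.
        RedisValue cached = await cache.StringGetAsync(key);
        if (cached.HasValue)
            return cached;

        // 2. On a miss, read from the system of record...
        string product = await LoadProductFromDatabaseAsync(productId);

        // 3. ...and populate the cache with an expiration so stale entries age out.
        await cache.StringSetAsync(key, product, TimeSpan.FromMinutes(10));
        return product;
    }

    // Invented placeholder for your real data access code.
    private Task<string> LoadProductFromDatabaseAsync(string productId) =>
        Task.FromResult($"product-{productId}");
}
```

Setting an expiration on each cached entry is the simplest way to bound staleness when the back-end system cannot update the cache on every write.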

You can work with Azure Cache for Redis using different languages, such as ASP.NET, .NET, .NET Core, Node.js, Java, or Python. Before you can add caching features to your code using Azure Redis Cache, you need to create your Azure Cache for Redis database using the following procedure:

  1. Open the Azure Portal (https://portal.azure.com).
  2. On the navigation menu at the left side of the Azure Portal, click Create A Resource.
  3. On the New blade, click Databases on the navigation menu on the left side of the blade.
  4. In the list of Database services, shown in Figure 5-4, click the Azure Cache For Redis item.
    Figure 5-4 Creating a new Azure Cache for Redis resource
  5. On the New Redis Cache blade, type a DNS Name for your Redis resource.
  6. Select the Subscription, Resource Group, and Location from the appropriate drop-down menus that best fit your needs.
  7. In the Pricing Tier drop-down menu, select the Basic C0 tier.
  8. Click the Create button at the bottom of the New Redis Cache blade. The deployment of your new Azure Cache for Redis takes a few minutes to complete. Once the deployment is complete, you need to get the access keys for your instance of the Azure Cache for Redis. You use this information in your code to connect to the Redis service in Azure.

If you are using any of the .NET languages, you can use the StackExchange.Redis client for accessing your Azure Cache for Redis resource. You can also use this Redis client for accessing other Redis implementations. When reading or writing values in the Azure Cache for Redis, you need to create a ConnectionMultiplexer object. This object creates a connection to your Redis server. The ConnectionMultiplexer class is designed to be reused as much as possible.

For this reason, you should store this object and reuse it across your code wherever possible. Creating a connection is a costly operation, so you should not create a ConnectionMultiplexer object for each read or write operation to the Redis cache. Once you have created your ConnectionMultiplexer object, you can use any of the available operations in the StackExchange.Redis package. Following are the basic operations that you can use with Redis:

Use Redis as a database You get a database from Redis, using the GetDatabase() method, for writing and reading values from the database. You use the StringSet() or StringGet() methods for writing and reading.

Use Redis as a messaging queue You get a subscriber object from the Redis client, using the GetSubscriber() method. Then you can publish messages to a queue, using the Publish() method, and read messages from a queue, using the Subscribe() method. Queues in Redis are known as “channels.”
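The two basic operations above can be sketched as follows with the StackExchange.Redis client. The connection string placeholder, the "redis-channel" channel name, and the message contents are invented examples; in practice the connection string comes from your cache's access keys, as shown in the procedure below:

```csharp
// Hedged sketch of basic StackExchange.Redis operations. The connection
// string, channel name, and messages are invented placeholders.
using System;
using StackExchange.Redis;

class RedisBasics
{
    static void Main()
    {
        ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
            "<your_cache>.redis.cache.windows.net:6380,password=<access_key>,ssl=True");

        // Redis as a database: simple string set/get.
        IDatabase db = connection.GetDatabase();
        db.StringSet("Message", "Hello from Azure Cache for Redis");
        Console.WriteLine(db.StringGet("Message"));

        // Redis as a messaging queue: publish/subscribe on a channel.
        ISubscriber subscriber = connection.GetSubscriber();
        subscriber.Subscribe("redis-channel", (channel, message) =>
        {
            Console.WriteLine($"Received: {message}");
        });
        subscriber.Publish("redis-channel", "A test message");
    }
}
```

Note that Subscribe() registers a handler that fires as messages arrive on the channel, so the subscriber and publisher normally live in different processes.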

The following procedure shows how to connect to an Azure Cache for Redis database and read and write data to and from the database using an ASP.NET application:

  1. Open Visual Studio 2017.
  2. In the Visual Studio 2017 window, click File > New > Project.
  3. On the New Project window, in the navigation menu on the left side of the window, select Installed > Visual C# > Cloud.
  4. Select the ASP.NET Web Application (.NET Framework) template.
  5. Type a Name and Solution Name for your project.
  6. Select the Location for your project.
  7. Click OK.
  8. On the New ASP.NET Web Application, select the MVC template.
  9. Click OK.
  10. On the Visual Studio 2017 window, click Tools > NuGet Package Manager > Manage NuGet Packages For Solution.
  11. On the NuGet – Solution tab, click Browse.
  12. In the Search text box, type StackExchange.Redis.
  13. Select the StackExchange.Redis NuGet package.
  14. On the right side of the NuGet tab, select the checkbox beside your project’s name and click the Install button.
  15. On the Preview Changes window, click OK.
  16. Open the Azure Portal (https://portal.azure.com).
  17. On the search text box in the top-middle of the portal, type the name of your Azure Cache for Redis that you created in the previous example.
  18. Click your Azure Cache for Redis in the results list.
  19. On the Azure Cache for Redis blade, click Access Keys in the Settings section in the navigation menu on the left side of the blade.
  20. On the Access Keys blade, copy the value of the Primary Connection String (Redis). You need this value in the next steps.
  21. In the Visual Studio 2017 window, open the Web.config file.
  22. In the <appSettings> section, add the following code, replacing the placeholder with the connection string value that you copied in step 20:
    	
    		<add key="CacheConnection" value="<value_copied_in_step_20>"/>
    	
    

    Note Security Best Practice

    In real-world development, you should avoid putting connection strings and secrets in files that could be checked in with the rest of your code. To avoid this, you can put the <appSettings> section with the keys containing the sensitive secrets or connection strings in a separate file outside the source control folder. Then add the file attribute to the <appSettings> tag, pointing to the external appSettings file path.
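    For example, the Web.config side would look like this (the file name AppSettingsSecrets.config and its path are illustrative; the external file must itself have an <appSettings> root element, and its keys are merged with, and override, the inline ones):

```xml
<!-- Web.config: sensitive keys live in an external file excluded from source control. -->
<appSettings file="..\..\AppSettingsSecrets.config">
  <!-- Non-sensitive settings can remain here. -->
</appSettings>
```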

  23. Open the HomeController.cs file in the Controllers folder.
  24. Add the following using statements to the HomeController.cs file:
  25. 	
    using System.Configuration;
    using StackExchange.Redis;
    	
    

  26. Add the code in Listing 5-1 to the HomeController class.
  27. Listing 5-1 HomeController RedisCache method

    	
    // C#. ASP.NET.
    public ActionResult RedisCache()
    {
        ViewBag.Message = "A simple example with Azure Cache for Redis on ASP.NET.";
        // You need to create a ConnectionMultiplexer object for accessing the Redis
        // cache. Then you can get an instance of a database.
        var lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
        {
            string cacheConnection =
                ConfigurationManager.AppSettings["CacheConnection"].ToString();
            return ConnectionMultiplexer.Connect(cacheConnection);
        });
        IDatabase cache = lazyConnection.Value.GetDatabase();
        // Perform cache operations using the cache object...
        // Run a simple Redis command.
        ViewBag.command1 = "PING";
        ViewBag.command1Result = cache.Execute(ViewBag.command1).ToString();
        // Simple get and put of integral data types into the cache.
        ViewBag.command2 = "GET Message";
        ViewBag.command2Result = cache.StringGet("Message").ToString();
        // Write a new value to the database.
        ViewBag.command3 = "SET Message \"Hello! The cache is working from ASP.NET!\"";
        ViewBag.command3Result = cache.StringSet("Message",
            "Hello! The cache is working from ASP.NET!").ToString();
        // Get the message that we wrote in the previous step.
        ViewBag.command4 = "GET Message";
        ViewBag.command4Result = cache.StringGet("Message").ToString();
        // Get the client list; useful to see whether the connection list is growing...
        ViewBag.command5 = "CLIENT LIST";
        ViewBag.command5Result = cache.Execute("CLIENT", "LIST").ToString()
            .Replace("id=", "\rid=");
        lazyConnection.Value.Dispose();
        return View();
    }
    	
    
  28. On the Solution Explorer, right-click the Views > Home folder and click Add > View on the contextual menu.
  29. On the Add View window, type RedisCache for the View Name.
  30. Click the Add button.
  31. Open the RedisCache.cshtml file.
  32. Replace the content of the RedisCache.cshtml file with the content of Listing 5-2.
  33. Listing 5-2 RedisCache View

    	
    @* Razor. ASP.NET. *@
    @{
        ViewBag.Title = "Azure Cache for Redis Test";
    }
    <h2>@ViewBag.Title.</h2>
    <h3>@ViewBag.Message</h3>
    <br /><br />
    <table border="1" cellpadding="10">
        <tr>
            <th>Command</th>
            <th>Result</th>
        </tr>
        <tr>
            <td>@ViewBag.command1</td>
            <td><pre>@ViewBag.command1Result</pre></td>
        </tr>
        <tr>
            <td>@ViewBag.command2</td>
            <td><pre>@ViewBag.command2Result</pre></td>
        </tr>
        <tr>
            <td>@ViewBag.command3</td>
            <td><pre>@ViewBag.command3Result</pre></td>
        </tr>
        <tr>
            <td>@ViewBag.command4</td>
            <td><pre>@ViewBag.command4Result</pre></td>
        </tr>
        <tr>
            <td>@ViewBag.command5</td>
            <td><pre>@ViewBag.command5Result</pre></td>
        </tr>
    </table>
    	
    
  34. Now press F5 to run your project locally.
  35. In the web browser running your project, append the /Home/RedisCache URI to the URL. Your result should look like Figure 5-5.
Figure 5-5 Example results
Screenshot_48
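Note that Listing 5-1 creates the ConnectionMultiplexer inside the action method and disposes it at the end of the request, which keeps the listing short. Following the guidance earlier in this skill, a production application would typically share one lazily created connection instead. A sketch, assuming the same CacheConnection app setting as Listing 5-1:

```csharp
using System;
using System.Configuration;
using StackExchange.Redis;

// Sketch: a shared, lazily created ConnectionMultiplexer reused across
// requests, instead of creating and disposing one per action.
public static class RedisConnection
{
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                ConfigurationManager.AppSettings["CacheConnection"]));

    // The first caller triggers the connection; later callers reuse it.
    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}
```

Any controller can then call RedisConnection.Connection.GetDatabase() without paying the connection cost on every request.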

Exam Tip

You can use Azure Cache for Redis for static content and for the most-accessed dynamic data. You can also use it as an in-memory database or as a message queue using a publish/subscribe pattern.

Need More Review? More Details About Redis

You can review features, patterns, and transactions of the Redis cache system by reading the following articles:

https://stackexchange.github.io/StackExchange.Redis/Basics

https://stackexchange.github.io/StackExchange.Redis/Transactions

https://stackexchange.github.io/StackExchange.Redis/KeysValues

Develop code to implement CDNs in solutions

A Content Delivery Network (CDN) is a group of servers distributed in different locations across the globe that can deliver web content to users. Because the CDN has servers distributed in several locations, when a user makes a request to the CDN, the CDN delivers the content from the nearest server to the user.

The main advantage of using Azure CDN with your application is that Azure CDN caches your application’s static content. When a user makes a request to your application, the CDN stores the static content, such as images, documents, and stylesheet files. When a second user from the same location as the first user accesses your application, the CDN delivers the cached content, relieving your web server from delivering the static content. You can use third-party CDN solutions such as Verizon or Akamai with Azure CDN.

To use Azure CDN with your solution, you need to configure a profile. This profile contains the list of endpoints in your application that would be included in the CDN. The profile also configures the behavior of content delivery and access of each configured endpoint. When you configure an Azure CDN profile, you need to choose between using Microsoft’s CDN or using CDNs from Verizon or Akamai.

You can configure as many profiles as you need for grouping your endpoints based on different criteria, such as Internet domain, web application, or any other criteria. Bear in mind that Azure CDN pricing tiers are applied at the profile level, so you can configure different profiles with different pricing characteristics. The following procedure shows how to create an Azure CDN profile with one endpoint for caching content from a web application:

  1. Open the Azure Portal (https://portal.azure.com).
  2. In the navigation menu on the left side of the portal, click Create A Resource.
  3. On the New blade, in the Search The Marketplace text box, type CDN.
  4. In the result list, click CDN.
  5. On the CDN blade, click the Create button.
  6. On the CDN profile blade, type a Name for the profile.
  7. Select an existing Resource Group in the drop-down menu. Alternatively, you can create a new resource group by clicking the Create New link below the Resource Group drop-down menu.
  8. In the Pricing Tier drop-down menu, select Standard Microsoft.
  9. Click the Create button at the bottom of the CDN profile blade.
  10. In the search text box on the middle-top side of the Azure Portal, type the name for your CDN profile.
  11. In the result list, click the name of your CDN profile.
  12. On the CDN profile blade, shown in Figure 5-6, click the Endpoint button.
  13. Figure 5-6 CDN profile
    Screenshot_49
  14. The Add An Endpoint panel opens.
  15. In the Add An Endpoint panel, type a Name for the endpoint. Bear in mind that this name needs to be globally unique.
  16. In the Origin Type drop-down menu, select Web App.
  17. In the Origin Hostname drop-down menu, select the name of your web application.
  18. In the Origin Path text box, type the path to the application you need to include in the CDN.
  19. Leave the Origin Host header value as is. The Origin Host header value should match the Origin Hostname value.
  20. Leave the other options as is.
  21. Click the Add button.

The propagation of the content through the CDN depends on the type of CDN that you configured. For Standard Microsoft CDN, the propagation usually completes in 10 minutes. Once the propagation of the CDN completes, you can access your web application by using the endpoint that you configured in the previous procedure: https://<your_endpoint’s_name>.azureedge.net

Once you have configured the endpoint, you can apply some advanced options to adjust the CDN to your needs:

Custom DNS domain By default, when using the CDN, your users access your application by using the URL https://<your_endpoint’s_name>.azureedge.net. This URL would not be appropriate for your application. You can assign more appropriate DNS domains to the CDN endpoint, such as https://app.contoso.com, which allows your users to access your web application using a URL related to your business and your DNS domain name.

Compression You can configure the CDN endpoint to compress some MIME types. This compression is made on the fly by the CDN when the content is delivered from the cache. Compressing the content allows you to deliver smaller files, improving the overall performance of the application.

Caching rules You can control how the content is stored in the cache by setting different rules for different paths or content types. By configuring a caching rule, you can modify the cache expiration time, depending on the conditions you configure. Caching rules are available only for Azure CDN Standard from Verizon and Azure CDN Standard from Akamai profiles.

Geo-filtering You can block or allow a web application’s content to certain countries across the globe.

Optimization You can configure the CDN for optimizing the delivery of different types of content. Depending on the type of profile, you can optimize your endpoint for:

General web delivery

Dynamic site acceleration

General media streaming

Video-on-demand media streaming

Large file downloads

Note Dynamic Site Acceleration

Although Dynamic Site Acceleration is part of the features provided by the Azure CDN, this is not strictly a cache solution. If you need to use Dynamic Site Acceleration with Microsoft Azure services, you should use Azure Front Door Service instead of Azure CDN.

If you need to dynamically create new CDN profiles and endpoints, Microsoft provides the Azure CDN Library for .NET and Azure CDN Library for Node.js. Using these libraries, you can automate most of the operations that we reviewed in this section.

Need More Review? How Caching Works

Caching web content involves working with HTTP headers, setting the appropriate expiration times, and deciding which files should be included in the cache. You can review the details of how caching works by reading https://docs.microsoft.com/en-us/azure/cdn/cdn-how-caching-works.

Exam Tip

Content Delivery Networks (CDN) are appropriate for caching static content that changes infrequently. Although Azure CDN from Akamai and Azure CDN from Verizon include Dynamic Site Acceleration (DSA), this feature is not the same as a cache system. You should not confuse Azure CDN DSA optimization with Azure CDN cache.

Invalidate cache content (CDN or Redis)

When you work with cached content, you need to control the lifetime or validity of that content. Although static content usually has a low rate of change, this kind of content can change. For example, if you are caching the logo of your company and the logo is changed, your users won’t see the change in the application until the new logo is loaded in the cache. In this scenario, you can simply purge or remove the old logo from the cache, and the new image will be loaded into the cache as soon as the first user accesses the application.

This mechanism of manually purging the cache could be appropriate for a very specific scenario, but in general terms, you should consider using an automatic mechanism for having the freshest content in your cache system.

When you add content to a CDN cache, the system automatically assigns a Time to Live (TTL) value to the content file instead of continuously comparing the file in the cache with the original content on the web server. The cache system checks whether the content's expiration time has been reached. If the expiration time is later than the current time, the CDN considers the content to be fresh and keeps the content in the cache. If the TTL expires, the CDN marks the content as stale or invalid. When the next user tries to access the invalid content file, the CDN compares the cached file with the content on the web server. If both files match, the CDN updates the version of the cached file and makes the file valid again by resetting the expiration time. If the files in the cache and on the web server don't match, the CDN removes the file from the cache and updates the cache with the freshest content file from the web server.
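The decision rule just described can be summarized in a short sketch. This is an illustration of the rule only, not Azure CDN's actual implementation:

```csharp
using System;

// Illustrative sketch of the CDN freshness rule described above.
public static class CdnFreshness
{
    // Decides what the CDN does for a cached file on the next request.
    public static string Evaluate(DateTime now, DateTime expiresAt, bool matchesOrigin)
    {
        if (now < expiresAt)
            return "fresh";        // expiration not reached: serve from cache
        if (matchesOrigin)
            return "revalidated";  // stale, but unchanged at origin: reset expiration
        return "refetched";        // stale and changed: replace the cached copy
    }
}
```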

The cached content can become invalid when the content is deleted from the cache or when the expiration time is reached. You can configure the default TTL associated with a site by using the Cache-Control HTTP header. You can set the value for this header in different ways:

Default CDN configuration If you don’t configure any value for the TTL, the Azure CDN automatically configures a default value of seven days.

Caching rules You can configure TTL values globally or by using custom matching rules. Global caching rules affect all content in the CDN. Custom caching rules control the TTL for different paths or files in your web application. You can even disable the caching for some parts of your web application.

Web.config files You use the web.config file to set the expiration time of the content in a folder. You can even configure web.config files for different folders, setting different TTL values for each. Use the following XML code to set the TTL:

	
<configuration>
<system.webServer>
<staticContent>
<clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="3.00:00:00" />
</staticContent>
</system.webServer>
</configuration>
	

Programmatically If you work with ASP.NET, you can control the CDN caching behavior by setting the HttpResponse.Cache property.

You can use the following code to set the expiration time of the content to five hours:

	
// Set the caching parameters.
Response.Cache.SetExpires(DateTime.Now.AddHours(5));
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetLastModified(DateTime.Now);
	

Use the following procedure to create caching rules in your Azure CDN. Bear in mind that you can only configure caching rules for Azure CDN for Verizon and Azure CDN for Akamai profiles:

  1. Open the Azure Portal (https://portal.azure.com).
  2. In the navigation menu on the left side of the portal, click Create A Resource.
  3. On the New blade, in the Search The Marketplace text box, type CDN.
  4. In the result list, click CDN.
  5. On the CDN blade, click the Create button.
  6. On the CDN profile blade, type a Name for the profile.
  7. Select an existing Resource Group from the drop-down menu. Alternatively, you can create a new resource group by clicking the Create New link below the Resource Group drop-down menu.
  8. On the Pricing Tier drop-down menu, select Standard Akamai.
  9. Check the Create A New CDN Endpoint Now check box.
  10. Type a name for the endpoint in the CDN Endpoint Name text box.
  11. In the Origin Type drop-down menu, select Web App.
  12. In the Origin Hostname drop-down menu, select the name of your web application.
  13. Click the Create button at the bottom of the CDN Profile blade.
  14. In the search text box on the middle-top side of the Azure Portal, type the name of your CDN profile.
  15. In the result list, click your CDN profile’s name.
  16. On the Overview panel, on the CDN profile blade, in the Endpoints list, click the existing endpoint.
  17. On the Endpoint blade, click Caching Rules in the Settings section of the navigation menu.
  18. On the Caching Rules panel, shown in Figure 5-7, set the Caching Behavior drop-down menu to Override in the Global Caching Rules section.
  19. Figure 5-7 Configuring Caching Rules
    Screenshot_50
  20. Set the Cache Expiration Duration to 15 days.
  21. On the Custom Caching Rules list, create a new custom rule. Set the Match Condition drop-down menu to File Extension(s).
  22. In the Match Value(s) text box, type png.
  23. In the Caching Behaviour drop-down menu, select Override.
  24. In the Days column, type 4.
  25. In the top-left corner of the panel, click the Save button.

When you work with Azure Cache for Redis, you can also set the TTL for the different values stored in the in-memory database. If you don't set a TTL for a key/value pair, the entry in the cache won't expire. When you create a new entry in the in-memory database, you set the TTL value as a parameter of the StringSet() method. The following code snippet shows how to set a TTL of five hours on a String value:

	
_cache.StringSet(key, Serialize(value), new TimeSpan(5, 0, 0));
	

Apart from invalidating cached content through expiration, you can manually invalidate content by removing it directly from the CDN or from Azure Cache for Redis. To remove keys from the Azure Cache for Redis in-memory database, you can use the following methods:

KeyDelete() method Use this method for removing a single key from the database. You need to use this method with a database instance.

FlushAllDatabases() method Use this method to remove all keys from all databases in the Azure Cache for Redis. You call this method on a server object (IServer) rather than on a database, and it requires the connection to be created with the allowAdmin=true option.
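As a sketch, both removal methods can be called as follows. The connection-string parameter is an assumption, and, as noted above, FlushAllDatabases() requires a connection created with allowAdmin=true:

```csharp
using System;
using StackExchange.Redis;

// Sketch: manually invalidating entries in Azure Cache for Redis.
public static class RedisInvalidation
{
    // Pure helper: appends the admin option required by FlushAllDatabases().
    public static string WithAdmin(string connectionString) =>
        connectionString + ",allowAdmin=true";

    public static void Run(string connectionString)
    {
        using (var connection =
            ConnectionMultiplexer.Connect(WithAdmin(connectionString)))
        {
            // Remove a single key from the database.
            IDatabase db = connection.GetDatabase();
            db.KeyDelete("Message");

            // Remove all keys from all databases on every configured server.
            foreach (var endpoint in connection.GetEndPoints())
            {
                connection.GetServer(endpoint).FlushAllDatabases();
            }
        }
    }
}
```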

For Azure CDN, you can invalidate part or the entire content of the CDN profile by using the Purge option available in the Azure Portal. Use the following procedure for purging content from your Azure CDN profile:

  1. Open the Azure Portal (https://portal.azure.com).
  2. In the search text box on the middle-top side of the Azure Portal, type the name of your CDN profile.
  3. On the Overview panel, in your CDN profile blade, click the Purge button.
  4. On the Purge panel, shown in Figure 5-8, select the Endpoint you want to purge from the drop-down menu control.
  5. Figure 5-8 Purging content from the cache
    Screenshot_51
  6. In the Content Path text box, type the path that you want to purge from the cache. If you want to purge all the content from the cache, you need to check the Purge All checkbox.

Note Purge All and Wildcards in Azure CDN from Akamai

At the time of this writing, the Purge All and Wildcard options are not available for Akamai CDNs.

Skill 5.3: Instrument solutions to support monitoring and logging

Knowing how your application behaves during normal operation is important, especially for production environments. You need to get information about the number of users, resource consumption, transactions, and other metrics that can help you to troubleshoot your application if an error happens. Adding custom metrics to your application is also important when creating alerts that warn you when your application is not behaving as expected.

Azure provides features for monitoring the consumption of resources assigned to your application. Also, you can monitor the transactions and any other metrics that you may need, which allows you to fully understand how your application behaves under conditions that are usually difficult to simulate or test. You can also use these metrics for efficiently creating autoscale rules to improve the performance of your application, as we reviewed in Skill 5.1.

This skill covers how to:

  • Configure instrumentation in an app or service by using Application Insights
  • Analyze and troubleshoot solutions by using Azure Monitor
  • Implement Application Insights Web Test and Alerts

Configure instrumentation in an app or service by using Application Insights

Microsoft provides you with the ability to monitor your application while it is running by using Application Insights. This tool integrates with your code, allowing you to monitor what is happening inside your code while it is executing in a cloud, on-premises, or hybrid environment. You can also enable Application Insights for applications that are already deployed in Azure without modifying the already deployed code.

By adding a small instrumentation package, you can measure several aspects of your application. These measurements, known as telemetry, are automatically sent to the Application Insights component deployed in Azure. Based on the telemetry streams sent from your application to the Azure Portal, you can analyze your application’s performance and create alerts and dashboards, which help you better understand how your application is behaving. Although Application Insights needs to be deployed in the Azure Portal, your application can be executed in Azure, in other public clouds, or in your on-premises infrastructure. When you deploy the Application Insights instrumentation in your application, it monitors the following points:

Request rates, response times, and failure rates You can view which pages your users request more frequently, distributed across time. You may find that your users tend to visit certain pages at the beginning of the day while others are more visited at the end of the day. You can also monitor the time that your server takes for delivering the requested page or even if there were failures when delivering the page. You should monitor the failure rates and response times to ensure that your application is performing correctly and your users have a good experience.

Dependency rates, response times, and failure rates If your application depends on external services (such as Azure Storage Accounts), Google or Twitter security services for authenticating your users, or any other external service, you can monitor how these external services are performing and how they are affecting your application.

Exceptions The instrumentation keeps track of the exceptions raised by servers and browsers while your application is executing. You can review the details of the stack trace for each exception via the Azure Portal. You can also view statistics about exceptions that arise during your application’s execution.

Page views and load performance Measuring the performance of your server’s page delivery is only part of the equation. Using Application Insights, you can also get information about the page views and load performance reported from the browser’s side.

AJAX calls This measures the time taken by AJAX calls made from your application’s web pages. It also measures the failure rates and response time.

User and session counts You can keep track of the number of users who are connected to your application. Just as the same user can initiate multiple sessions, you can track the number of sessions connected to your application. This allows you to clearly measure the threshold of concurrent users supported by your application.

Performance counters You can get information about the performance counters of the server machine (CPU, memory, and network usage) from which your code is executing.

Host diagnostics You can get diagnostic information from your application when it is deployed in a Docker or Azure environment.

Diagnostic trace logs Trace log messages can be used to correlate trace events with the requests made to the application by your users.

Custom events and metrics Although the out-of-the-box instrumentation offered by Application Insights offers a lot of information, there are some metrics that are too specific to your application that cannot be generalized and included in the general telemetry. For those cases, you can create custom metrics to monitor your server and client code. This allows you to monitor user actions, such as shopping cart checkouts or game scoring.

Application Insights is not limited to .NET languages. There are instrumentation libraries available for other languages, such as Java, JavaScript, and Node.js, as well as libraries for platforms like Android and iOS. You can use the following procedure to add Application Insights instrumentation to your ASP.NET application. To run this example, you need to meet these prerequisites:

An Azure Subscription.

Visual Studio 2017/2019. If you don’t have Visual Studio, you can download the Community edition for free from https://visualstudio.microsoft.com/free-developer-offers/

Install the following workloads in Visual Studio:

ASP.NET and web development, including the optional components.

Azure development.

For this example, we are going to create a new MVC application from a template, and then we will add the Application Insight instrumentation. You can use the same procedure to add instrumentation to any of your existing ASP.NET applications:

  1. Open Visual Studio 2019.
  2. In the home window in Visual Studio, click the Create A New Project button in the section Get Started on the right side of the window.
  3. In the Create A New Project window, on the search box, type MVC.
  4. Select the ASP.NET Web Application (.NET Framework) template.
  5. Click the Next button in the bottom-right corner of the window.
  6. Type a name for your project and solution in the Project Name and Solution Name boxes, respectively.
  7. Select the Location where your project will be stored.
  8. Click the Create button at the bottom-right corner of the window.
  9. On the Create A New ASP.NET Web Application window, select the MVC template.
  10. Click the Create button at the bottom-right corner of the window.
  11. In the Solution Explorer window, right-click the name of your project.
  12. In the contextual menu, shown in Figure 5-9, click Add > Application Insights Telemetry.
  13. Figure 5-9 Adding Application Insights Telemetry
    Screenshot_52
  14. On the Application Insights Configuration page, click the Get Started button at the bottom of the page.
  15. On the Register Your App With Application Insights page, ensure that the correct Azure Account and Azure Subscription are selected in the drop-down menus.
  16. Click the Configure Settings link below the Resource drop-down menu.
  17. In the Application Insights Configuration dialog box, select the Resource Group and Location where you want to create the new Application Insight resource.
  18. Click the Register button.
  19. On the Application Insights Configuration tab, click the Collect Traces From System.Diagnostics button at the bottom of the tab. Enabling this option allows you to send a log message directly to Application Insights.

At this point, Visual Studio starts adding the needed packages and dependencies to your project. Visual Studio also automatically configures the Instrumentation Key, which allows your application to connect to the Application Insights resource created in Azure. Now your project is connected to the instance of Application Insights deployed in Azure. As soon as you run your project, the Application Insights instrumentation starts sending information to Azure. You can review this information in the Azure Portal or in Visual Studio. Use the following steps to access Application Insights from Visual Studio and the Azure Portal:

  1. From the Visual Studio window, in the Solution Explorer window, navigate to your project’s name and choose Connected Services > Application Insights.
  2. Right-click Application Insights.
  3. On the contextual menu, click Search Live Telemetry. The Application Insights Search tab will appear in Visual Studio.
  4. To open the Application Insights resource in the Azure Portal from Visual Studio, right-click Application Insights in the Solution Explorer again.
  5. On the contextual menu, click Open Application Insights Portal.

Apart from the standard metrics that come out of the box with the default Application Insights instrumentation, you can also add custom events and metrics to your code. Using custom events and metrics, you can analyze and troubleshoot logic and workflows that are specific to your application. The following example shows how to modify the MVC application that you created in the previous example to add custom events and metrics:

  1. Open the project that you created in the previous example.
  2. Open the HomeController.cs file.
  3. Add the following using statement at the beginning of the file:
  4. 	
    using Microsoft.ApplicationInsights;
    using System.Diagnostics;
    	
    
  5. Replace the content of the HomeController class in the HomeController.cs file with the content in Listing 5-3.
  6. Listing 5-3 HomeController class

    	
    // C#. ASP.NET.
    public class HomeController : Controller
    {
        private TelemetryClient telemetry;
        private double indexLoadCounter;

        public HomeController()
        {
            // Create a TelemetryClient that can be used during the life of the
            // Controller.
            telemetry = new TelemetryClient();
            // Initialize some counters for the custom metrics.
            // This is a fake metric just for demo purposes.
            indexLoadCounter = new Random().Next(1000);
        }

        public ActionResult Index()
        {
            // This example is trivial, as Application Insights already registers
            // the load of the page.
            // You can use this example for tracking different events in the
            // application.
            telemetry.TrackEvent("Loading the Index page");
            // Before you can submit a custom metric, you need to use the
            // GetMetric method.
            telemetry.GetMetric("CountOfIndexPageLoads").TrackValue(indexLoadCounter);
            // This trivial example shows how to track exceptions using
            // Application Insights.
            // You can also send trace messages to Application Insights.
            try
            {
                Trace.TraceInformation("Raising a trivial exception");
                throw new System.Exception("Trivial Exception for testing Tracking " +
                    "Exception feature in Application Insights");
            }
            catch (System.Exception ex)
            {
                Trace.TraceError("Capturing and managing the trivial exception");
                telemetry.TrackException(ex);
            }
            // You need to instruct the TelemetryClient to send all in-memory data
            // to Application Insights.
            telemetry.Flush();
            return View();
        }

        public ActionResult About()
        {
            ViewBag.Message = "Your application description page.";
            // You can use this example for tracking different events in the
            // application.
            telemetry.TrackEvent("Loading the About page");
            return View();
        }

        public ActionResult Contact()
        {
            ViewBag.Message = "Your contact page.";
            // You can use this example for tracking different events in the
            // application.
            telemetry.TrackEvent("Loading the Contact page");
            return View();
        }
    }
    	
    
  7. In the Solution Explorer, open the ApplicationInsights.config file.
  8. In the <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector"> XML item, add the following child XML item:
	
<EnableIISExpressPerformanceCounters>true</EnableIISExpressPerformanceCounters>
	

Note Controllers Constructors

In the previous example, we created and initialized a TelemetryClient object in the controller's constructor and stored it in a private field. In a real-world application, you should use dependency injection techniques for properly initializing the Controller class. There are several frameworks, like Unity, Autofac, or Ninject, that can help you implement the dependency injection pattern in your code.
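As an illustration of the pattern, the controller can receive its telemetry dependency through the constructor instead of creating it. The ITelemetry and ConsoleTelemetry types below are hypothetical stand-ins for the real TelemetryClient, used only so the example stays self-contained; in practice a DI container would construct the controller and supply the dependency.

```csharp
using System;

// Hypothetical abstraction standing in for TelemetryClient.
public interface ITelemetry
{
    void TrackEvent(string name);
}

public class ConsoleTelemetry : ITelemetry
{
    public void TrackEvent(string name) => Console.WriteLine($"Event: {name}");
}

public class DiHomeController
{
    private readonly ITelemetry telemetry;

    // The dependency is received, not created, which makes the controller
    // easy to test with a fake ITelemetry implementation.
    public DiHomeController(ITelemetry telemetry)
    {
        this.telemetry = telemetry ?? throw new ArgumentNullException(nameof(telemetry));
    }

    public string Index()
    {
        telemetry.TrackEvent("Loading the Index page");
        return "Index";
    }
}
```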

At this point, you can press F5 to run your project and see how your application sends information to Application Insights. If you review the Application Insights Search tab, you can see the messages, shown in Figure 5-10, that your application is sending to Application Insights.

Figure 5-10 Application Insights messages
Screenshot_53

You send messages to Application Insights by using the TelemetryClient class. This class provides the appropriate methods for sending the different types of messages to Application Insights. You can send custom events by using the TrackEvent() method. You use this method for tracking events that are meaningful to your application, such as a user creating a new shopping cart in an eCommerce web application or a user winning a game in a mobile app.

If you need to keep track of the value of certain variables or properties in your code, you can use the combination of the GetMetric() and TrackValue() methods. The GetMetric() method retrieves a metric from the azure.applicationinsights namespace. If the metric doesn't exist in the namespace, the Application Insights library automatically creates a new one.
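As a sketch of this pattern, assuming the CountOfIndexPageLoad metric name used elsewhere in this section:

```csharp
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();

// GetMetric() returns a handle to the metric, creating it if it does
// not exist yet. TrackValue() records one measurement; the SDK
// pre-aggregates values locally before sending them to Azure.
var pageLoads = telemetry.GetMetric("CountOfIndexPageLoad");
pageLoads.TrackValue(1);
```

Because the SDK aggregates locally, calling TrackValue() frequently is cheap compared with sending each value as a separate message.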

Once you have a reference to the correct metric, you can use the TrackValue() method to add a value to that metric. You can use these custom metrics for setting alerts or autoscale rules. Use the following steps for viewing the custom metrics in the Azure Portal:

  1. From the Visual Studio window, in the Solution Explorer window, navigate to your project’s name and choose Connected Services > Application Insights.
  2. Right-click Application Insights.
  3. In the contextual menu, click Open Application Insights Portal.
  4. On the Application Insights blade, click Metrics in the Monitoring section of the navigation menu on the left side of the blade.
  5. On the Metrics blade, on the toolbar above the empty graph, on the Metric Namespace drop-down menu, select azure.applicationinsights.
  6. On the Metric drop-down menu, select CountOfIndexPageLoad. This is the custom metric that we defined in the previous example.
  7. On the Aggregation drop-down menu, select Count. The values for your graph will be different but should look similar to Figure 5-11.

Figure 5-11 Custom metric graph

You can also send log messages to Application Insights by using the integration between System.Diagnostics and Application Insights. Any message sent to the diagnostics system using the Trace class appears in Application Insights as a Trace message. Along the same lines, use the TrackException() method for sending the stack trace and the exception to Application Insights. The advantage of doing this is that you can easily correlate exceptions with the operations your code was performing when the exception happened.
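A minimal sketch of this pattern; the exception message deliberately matches the "Raising a trivial exception" message referenced in the query example later in this section, while the trace text is invented:

```csharp
using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();

try
{
    // Messages sent through System.Diagnostics.Trace surface in
    // Application Insights when the trace listener integration is enabled.
    Trace.TraceInformation("Starting the checkout operation");
    throw new InvalidOperationException("Raising a trivial exception");
}
catch (Exception ex)
{
    // TrackException() sends the exception and its stack trace so it
    // can be correlated with the surrounding telemetry.
    telemetry.TrackException(ex);
}
```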

Exam Tip

Remember that Application Insights is a solution for monitoring the behavior of an application on different platforms, written in different languages. You can use Application Insights with web, native, or mobile applications written in .NET, Java, JavaScript, or Node.js. There is no requirement to run your application in Azure. You only need Azure for deploying the Application Insights resource that you use for analyzing the information sent by your application.

Need More Review?: Creating Custom Events and Metrics

You can create more complex metrics and events than the one that we reviewed here. For complex operations, you can track all the actions inside an operation for correctly correlating all the messages generated during the execution of the operation. You can learn more about how to create custom events and metrics by reading the article at https://docs.microsoft.com/en-us/azure/azure-monitor/app/api-customevents-metrics

Analyze and troubleshoot solutions by using Azure Monitor

Azure Monitor is a tool composed of several elements that help you monitor and better understand the behavior of your solutions. Application Insights is a tool for collecting information from your solutions. Once you have the collected information, you can use the analysis tools for reviewing the data and troubleshooting your application. Depending on the information that you need to analyze, you can use Metric Analytics or Log Analytics.

You can use Metric Analytics for reviewing the standard and custom metrics sent from your application. A metric is a numeric value that is related to some aspect at a particular point in time of your solution. CPU usage, free memory, and number of requests are all examples of metrics; also, you can create your own custom metrics. Because metrics are lightweight, you can use them to monitor scenarios in near real-time. You analyze metric data by representing the values of the metrics in a time interval using different types of graphs. Use the following steps for reviewing graphs:

  1. Open the Azure Portal (https://portal.azure.com).
  2. On the navigation menu on the left side of the Azure Portal, click Monitor.
  3. On the Monitor blade, click Metrics on the navigation menu on the left side of the blade.
  4. On the Metrics blade, click the Select A Resource button.
  5. On the Select A Resource panel, on the Resource Group drop-down menu, select all the resource groups that contain the Azure App Service containing the metrics you want to add to the graph.
  6. In the Resource Type drop-down menu, select only the App Service Plans and App Service resource types.
  7. On the list of filtered resources, click the resource that you want to add to the graph.
  8. Click the Apply button at the bottom of the panel.
  9. On the Metrics blade, select the Average Response Time metric in the Metric drop-down menu.
  10. Click the Add Metric button at the top of the graph. You can add several metrics to the same graph, which means you can analyze related metrics together.
  11. Repeat steps 4 to 10 for adding the Connections metric. Figure 5-12 shows the metrics added to the graph.

Figure 5-12 Configuring metrics for a graph.

You use Log Analytics for analyzing the trace, logs, events, exceptions, and any other message sent from your application. Log messages are more complex than metrics because they can contain much more information than a simple numeric value. You can analyze log messages by using queries for retrieving, consolidating, and analyzing the collected data. Log Analytics for Azure Monitor uses a version of the Kusto query language. You can construct your queries to get information from the data stored in Azure Monitor. To do so, complete the following steps:

  1. Open the Azure Portal (https://portal.azure.com).
  2. In the navigation menu on the left side of the Azure Portal, click Monitor.
  3. On the Monitor blade, click Logs in the navigation menu on the left side of the blade.
  4. On the Logs blade, type Event | search "error" in the text area.
  5. Click the Run button.
  6. You can review the result of your query in the section below the query text area.

This simple query returns all error events stored in the default workspace. You can use more complex queries for getting more information about your solution. The available fields for the queries depend on the data loaded in the workspace. These fields are managed by the data schema. Figure 5-13 shows the schema associated with a workspace that stores data from Application Insights.

Figure 5-13 Workspace schema

Once you get the results from a query, you can easily refine the results of the query by adding where clauses to the query. The easiest way to add new filtering criteria is to expand one of the records in the table view in the results section below the query text area. If you move your mouse over each of the fields in a record, you can see two small plus and minus sign icons. If you click the plus sign, you add the value of the field as an inclusive where clause. If you click the minus sign beside the name of the field, you add the value of the field as an exclusive where clause. Based on the example that we reviewed in the previous section, the following query would get all traces sent from the application except those with the message “Raising a trivial exception”.

	traces | where message != "Raising a trivial exception"

You can review the results of this query in both table and chart formats. Using the different visualization formats, you can get a different insight into the data. Figure 5-14 shows how the results from the previous query are plotted into a pie chart.
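For example, a pie chart like the one in Figure 5-14 can be produced by extending the query with the summarize and render operators. This is a sketch; the exact query behind the figure is not shown in the book:

```
traces
| where message != "Raising a trivial exception"
| summarize count() by message
| render piechart
```

The summarize operator groups the remaining traces by message text, and render tells Log Analytics which visualization to use for the result set.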

Figure 5-14 Rendering query results

Need More Review?: Creating Log Queries

Creating the appropriate query for your need greatly depends on the details of your solution. You can review the details about the Kusto query language and how to create complex queries by reviewing the following articles:

Kusto Query Language: https://docs.microsoft.com/en-us/azure/kusto/query/

Azure Monitor log queries: https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/query-language

Implement Application Insights Web Test and Alerts

As a result of analyzing the data sent from your application to the Azure Monitor using Application Insights, you may find some situations that you need to monitor more carefully. Using Azure Monitor, you can set alerts based on the value of different metrics or logs. For example, you can create an alert to receive a notification when your application generates an HTTP return code 502.

You can also configure Application Insights for monitoring the availability of your web application. You can configure different types of tests for checking the availability of your web application:

URL ping test This is a simple test for checking whether your application is available by making a request to a single URL for your application.

Multi-step web test Using Visual Studio Enterprise, you can record the steps that you want to use as the verification for your application. You use this type of test for checking complex scenarios. The process of recording the steps in a web application generates a file with the recorded steps. Using this generated file, you can create a web test in Application Insights; then you upload the recording file.

Custom Track Availability Test You can create your own availability test in your code using the TrackAvailability() method.
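A sketch of a custom availability test using TrackAvailability(); the test name, run location, and the check itself are illustrative assumptions:

```csharp
using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

var telemetry = new TelemetryClient();
var timer = Stopwatch.StartNew();
bool success = false;

try
{
    // Replace this placeholder with the real check against
    // your application (an HTTP request, a login flow, and so on).
    success = true;
}
finally
{
    timer.Stop();
    // TrackAvailability() reports one availability result: test name,
    // timestamp, duration, run location, and the success flag.
    telemetry.TrackAvailability(
        "CheckoutPageTest", DateTimeOffset.UtcNow,
        timer.Elapsed, "Custom runner", success);
}
```

You typically run code like this on a schedule, for example from an Azure Function with a timer trigger.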

When creating a URL ping test, you can check not only the HTTP response code but also the content returned by the server. This way, you can minimize the possibility of false positives. These false positives can happen if the server returns a valid HTTP response code, but the content is different because of configuration errors. Use the following procedure for creating a URL ping test on your Application Insights resource that checks the availability of your web application:

  1. Open the Azure Portal (https://portal.azure.com).
  2. In the navigation menu on the left side of the Azure Portal, click Monitor.
  3. On the Monitor blade, click Applications in the Insights section.
  4. On the Applications blade, click the Application Insight resource where you want to configure the alert.
  5. On the Applications Insights blade, click Availability in the Investigate section of the navigation menu on the left side of the blade.
  6. On the Availability blade, click Add Test on the top-left corner of the blade.
  7. On the Create Test blade, shown in Figure 5-15, type a name for the test in the Test Name text box.
Figure 5-15 Creating a URL test

  8. Ensure that URL Ping Test is selected in the Test Type drop-down menu.
  9. In the URL text box, type the URL of the application you want to test.
  10. Expand the Test Location section. Select the locations from which you want to perform the URL ping test.
  11. Leave the other options as is.
  12. Click the Create button at the bottom of the panel.
When you configure the URL ping test, you cannot configure the alert directly during the creation process. You need to finish the creation of the test, and then you can edit the alert to define the actions that you want to perform when the alert fires. Use the following procedure for configuring an alert associated with the URL ping test that you configured previously:

  1. On the Availability blade, click the ellipsis beside the newly created alert.
  2. In the contextual menu, click Edit Alert.
  3. On the alert blade, in the Actions section, click the Add button. You are going to configure an action for sending an email when the URL ping test fails.
  4. On the Configured Actions panel, click the Create Action Group button.
  5. On the Add Action Group panel, type a name in the Action Group Name text box. Bear in mind that an action group appears as a resource in the resource group, which means the name that you choose for this action group needs to be unique within the resource group.
  6. Type a name in the Short Name text box. This name is used in email and SMS communications for identifying the source action group that sent the message.
  7. Select a resource group in the Resource Group drop-down menu.
  8. In the Actions section, type a name in the Action Name text box.
  9. In the Action Type drop-down menu, select Email/SMS/Push/Voice.
  10. On the Email/SMS/Push/Voice panel, select the Email checkbox. Type an email address in the text box below the Email checkbox.
  11. Click the OK button at the bottom of the panel.
  12. On the Add Action Group panel, click the OK button at the bottom of the panel.
  13. On the Configured Actions panel, click the Done button at the bottom of the panel.
  14. On the alert blade, click the Save button on the top-left corner of the blade.

Now you can test whether the URL ping test is working correctly by temporarily shutting down your testing application. After five minutes, you should receive an email message at the email address you configured in the alert action associated with the URL ping test.

Exam Tip

Remember that you need a Visual Studio Enterprise license for creating multi-step web tests. You use Visual Studio Enterprise for defining the steps that are part of the test, and then you upload the test definition to Azure Application Insights.

Need More Review?: Azure Monitor Alerts

Apart from creating alerts when a web test fails, you can also create alerts based on other conditions that depend on the event information stored in Application Insights. You can review the details about how to create log alerts by reading the article at https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-log

Chapter summary

Horizontal scaling, or in-out scaling, is the process of adding or removing instances of an application.

Vertical scaling, or up-down scaling, is the process of adding or removing resources to the same virtual machine that hosts your application.

Scaling in/out doesn't affect the availability of the application.

Vertical scaling affects the availability of the application because the application needs to be deployed in a virtual machine with the new resource assignment.

You can add and remove resources for your applications by using autoscale rules.

You can apply autoscale only to some Azure resource types.

Autoscale depends on Azure virtual machine scale sets.

Your application needs to be aware of the changes in resource assignment.

Your application needs to be able to manage transient faults.

You need to determine the type of fault before retrying the operation.

You should not use immediate retry more than once.

You should use random starting values for the retry periods.

You should use built-in SDK mechanisms when available.

You should test your retry count and interval strategy.

You should log transient and non-transient faults.

You can improve the performance of your application by adding caching to your application.

Azure Cache for Redis allows the caching of dynamic content.

Using Azure Cache for Redis, you can create in-memory databases to cache the most-used values.

Azure Cache for Redis allows you to use messaging queue patterns.

Content Delivery Networks (CDNs) store and distribute static content in servers distributed across the globe.

CDNs reduce latency by serving the content from the server nearest to the user.

You can invalidate the content of the cache by setting a low TTL (Time-To-Live).

You can invalidate the content of the cache by removing all or part of the content from the cache.

Application Insights gets information from your application and sends it to Azure.

You can use Application Insights with different platforms and languages.

Application Insights is part of the Azure Monitor service.

Application Insights generates two types of information: metrics and logs.

You can use Log Analytics and Metric Analytics to troubleshoot your application.

Application Insights allows you to create web tests to monitor the availability of your application.

You can configure alerts and trigger different actions associated with web tests.

Thought experiment

In this thought experiment, demonstrate your skills and knowledge of the topics covered in this chapter. You can find answers to this thought experiment in the next section.

Your company has a Line-of-Business (LOB) application that has been developed by your team. This LOB application is an eCommerce application that has more usage during holiday periods. This application is deployed on several Azure virtual machine scale sets. You are receiving some complaints about the stability and the performance of the application. Answer the following questions about the troubleshooting and the performance of the application:

1. You need to ensure that the application has enough resources for providing good performance. You decide to configure autoscaling rules. Which type of autoscale rules should you configure?
2. After reviewing the metrics of your application in Azure Monitor, you find that you don't have enough detail about the performance of the internal application workflows. What should you do to get information about the internal workflows?
3. You need to ensure that the purchase process is working correctly. You decide to configure a web test in Application Insights. Which type of test should you configure?

Thought experiment answers

This section contains the solutions to the thought experiment. Each answer explains why the answer choice is correct.

1. You should configure schedule-based and metric-based autoscale rules. You configure the schedule-based rule for ensuring that the application has enough resources during the holiday period. You also need to configure a metric-based autoscale rule to ensure that you assign more resources if the application goes over a certain threshold of CPU or memory usage that could affect the performance of the application.
2. You should integrate Application Insights instrumentation with your code. Once you integrate Application Insights with your code, you can track custom events in your code. You can define operations inside your code to track complex operations composed of several tasks. This allows you to get more information about the internal workflows executed in the application. Performing Application Insights agent-based monitoring doesn't provide enough information.
3. The process of a purchase in a web application is a complex testing scenario. In this scenario, you need to use a multi-step web test. Using Visual Studio Enterprise, you need to record the steps needed for performing a purchase in your web application. Once you have generated the file with the recorded steps, you can create a web test in Application Insights to monitor the purchase process.