Chapter 2 Develop Azure Platform as a Service compute solution

Traditionally, the deployment of any software has required not only the planning of the architecture from the development point of view but also the planning of the infrastructure that would support that software. Networking, load balancing, fault-tolerance, and highly available configurations are some of the things that any new enterprise-level software deployment must manage.

However, once the deployment in the production environment has finished, you need to maintain it. This maintenance means that you also need to allocate the budget for the infrastructure’s maintenance, and you must have trained staff for conducting this maintenance.

Thanks to cloud technologies, you can drastically reduce these infrastructure planning and deployment requirements by deploying your software on a managed service known as Platform as a Service (PaaS). Doing so means you only need to worry about your code and how it interacts with other services in Azure. Platform as a Service products such as Azure App Service or Azure Functions release you from worrying about highly-available or fault-tolerant configurations because these things are already managed by the service provided by Azure.

In this chapter, we review the PaaS solutions that Azure provides, which allow you to focus on your code and forget about the underlying infrastructure.

Skills covered in this chapter:

Skill 2.1: Create Azure App Service web apps

Skill 2.2: Create Azure App Service mobile apps

Skill 2.3: Create Azure App Service API apps

Skill 2.4: Implement Azure Functions

Skill 2.1: Create Azure App Service web apps

Azure App Service is a Platform as a Service (PaaS) solution that Microsoft offers for developing your web applications, mobile app back ends, or REST APIs without worrying about the underlying infrastructure.

You can use most of the popular programming languages—.NET, .NET Core, Java, Ruby, Node.js, PHP, or Python—on top of your preferred platform (Linux or Windows). Azure App Service provides you with enterprise-level infrastructure capabilities, such as load balancing, security, autoscaling, and automated management. You can also include Azure App Service in your continuous deployment life cycle, thanks to its integration with GitHub, Docker Hub, and Azure DevOps.

This skill covers how to:

  • Create an Azure App Service web app
  • Create an Azure App Service background task by using WebJobs
  • Enable diagnostics logging

Create an Azure App Service web app

When you plan to create an Azure App Service, there are some concepts about how your application performs that you need to understand. Every App Service needs resources to execute your code. Virtual machines are the base of these resources. Although the low-level configuration for running these virtual machines is automatically provided by Azure, you still need to provide some high-level information. The group of virtual machines that host your web application is managed by an App Service plan.

You can think of an App Service plan as a server farm running in a cloud environment. This also means that you are not limited to running a single App Service in an App Service plan; several App Services can run in the same plan and share the same computing resources.

When you create a new App Service plan, you need to provide the following information:

Region This is the region where your App Service plan is deployed. Any App Service in this App Service plan is placed in the same region as the App Service plan.

Number Of Instances This is the number of VMs that are added to your App Service plan. Bear in mind that the maximum number of instances that you can configure for your App Service plan depends on the pricing tier that you select. You can scale the number of instances manually or automatically.

Size Of The Instances You configure the size of the VM that is used in the App Service plan.

Operating System Platform This controls whether your web application runs on Linux or Windows VMs. Depending on the operating system, you have access to different pricing tiers. Beware that once you have selected the operating system platform, you cannot change the OS for the App Service without recreating the App Service.

Pricing Tier This sets the features and capabilities available for your App Service plan and how much you pay for the plan. For Windows VMs, there are two pricing tiers that use shared VMs—F1 and D1. When you use these shared tiers, your code runs alongside other Azure customers' code.

When you run an App Service in an App Service plan, all instances configured in the plan execute the app. This means that if your plan has five virtual machines, any app in the plan runs on all five VMs. Other operations related to the App Service, such as additional deployment slots, diagnostic logs, backups, or WebJobs, are also executed using the resources of each virtual machine in the App Service plan.

The following procedure shows how to create an App Service plan and upload an elementary web application based on .NET Core using Visual Studio 2017. Ensure that you have installed the ASP.NET and web development workload and you have installed the latest updates.

  1. Open Visual Studio 2017 on your computer.
  2. Click the Tools menu and choose Get Tools And Features. Verify that ASP.NET And Web Development, in the Web & Cloud section, is checked.
  3. In the Visual Studio 2017 window, click File > New > Project to open the New Project window.
  4. In the New Project window, on the tree structure on the left side, expand the Installed node, expand the Visual C# node, and click the Web node.
  5. In the list of templates in the center of the window, select ASP.NET Core Web Application.
  6. In the Properties of the project at the bottom of the page, complete the following steps:
    1. Select a Name for the project.
    2. Enter a path for the Location of the solution.
    3. In the Solution drop-down menu, select Create A New Solution.
    4. Enter a Name for the solution.
  7. Click the OK button in the bottom-right corner of the New Project window. This opens the New ASP.NET Core Web Application window.
  8. In the New ASP.NET Core Web Application window, ensure that the following values are selected in the two drop-down menus on the top-left side of the window:
    1. .NET Core
    2. ASP.NET Core 2.1
  9. Select Web Application from the Project Templates area in the center of the window.
  10. Uncheck the option Configure For HTTPS on the bottom-left side of the window.
  11. Click the OK button in the bottom-right corner of the New ASP.NET Core Web Application window.

At this point, you have created an elementary ASP.NET Core web application. You can run this application in your local environment to ensure that the application is running correctly before you publish the application to Azure.

Now we need to create the Resource Group and App Service plan that hosts the App Service in Azure:

  1. In your Visual Studio 2017 window, ensure that you have opened the solution of the web application that you want to publish to Azure.
  2. On the right side of the Visual Studio window, on the Solution Explorer window, right-click the project’s name.
  3. In the contextual menu, click Publish. This opens the Pick A Publish Target window.
  4. In the Pick A Publish Target window, make sure that App Service is selected from the list of Available Targets on the left side of the window.
  5. In the Azure App Service section, on the right side of the window, ensure that the Create New Option is selected.
  6. In the bottom-right corner of the window, click Publish button, which opens the Create App Service window.
  7. In the Create App Service window, add a new Azure account. This account needs to have enough privileges in the subscription for creating new resource groups, app services, and an App Service plan.
  8. Once you have added a valid account, you can configure the settings for publishing your web application, as shown in Figure 2-1.
  9. Figure 2-1 Creating an App Service
  10. In the App Name text box, enter a name for the App Service. By default, this name matches the name that you gave to your project.
  11. In the Subscription drop-down menu, select the subscription in which you want to create the App Service.
  12. In the Resource Group drop-down menu, select the resource group in which you want to create the App Service and the App Service plan. If you need to create a new resource group, you can do so by clicking the New link on the right side of the drop-down menu.
  13. To the right of the Hosting Plan drop-down menu, click the New link to open the Configure Hosting Plan window.
  14. In the Configure Hosting Plan window, type a name for the App Service plan in the App Service Plan text box.
  15. Select a region from the Location drop-down menu.
  16. Select a virtual machine size from the Size drop-down menu.
  17. Click the OK button in the bottom-right corner of the window. This closes the Configure Hosting Plan window.
  18. On the bottom-right corner of the Create App Service window, click the Create button. This starts the creation of the needed resources and the upload of the code to the App Service.
  19. Once the publishing process has finished, Visual Studio opens your default web browser with the URL of the newly deployed App Service. This URL will have the structure https://<your_app_service_name>.azurewebsites.net.

Depending on the pricing tier that you selected, some features are enabled, such as configuring custom domains or configuring SSL connections for your web applications. For production deployments, you should use the Standard or Premium pricing tiers. As your feature needs change, you can choose different pricing tiers. You can start by using the free tier, F1, in the early stages of your deployment and then move to an S1 or P1 tier if you need to make backups of your web application or need to use deployment slots.

Even if the Premium pricing tiers do not fit your compute requirements, you can still deploy a dedicated and isolated environment by using the Isolated pricing tier. This tier provides you with dedicated VMs running on top of dedicated virtual networks, where you can achieve the maximum level of scale-out capabilities. Bear in mind that Linux cannot be used with the F1 and D1 tiers.

When you are developing your web application, you need to test your code both in your local environment and in development or testing environments that are similar to the production environment. Starting with the Standard pricing tier, Azure App Service provides you with deployment slots.

These slots are deployments of your web application that reside in the same App Service as your production slot. A deployment slot has its own configuration and hostname. You can use these additional deployment slots for testing your code before moving it to the production slot. The main benefit of using deployment slots is that you can swap two slots without any downtime. You can even configure an automated swap of the slots by using Auto Swap.

When you plan for deploying your web application into an App Service, Azure offers you several options:

ZIP or WAR files When you want to deploy your application, you can package all your files into a ZIP or WAR package. Using the Kudu service, you can deploy your code to the App Service.

FTP You can copy your application files directly to the App Service using the FTP/S endpoint configured by default in the App Service.

Cloud synchronization Powered by the Kudu deployment engine, this method allows you to have your code in a OneDrive or Dropbox folder, and it syncs that folder with the App Service.

Continuous deployment Azure can integrate with GitHub, Bitbucket, or Azure DevOps Services for deploying the most recent updates of your application to the App Service. Depending on the service, you can use the Kudu build server, Azure Pipelines, or Azure DevOps Services for implementing a continuous delivery process. You can also configure the integration manually with other cloud repositories, like GitLab.

Your local Git repository You can configure your App Service as a remote repository for your local Git repository and push your code to Azure. Then the Kudu build server automatically compiles your code for you and deploys to the App Service.

ARM Template You can use Visual Studio and an ARM template for deploying your code into an App Service.

Note: Kudu

Kudu is the platform that is in charge of the Git deployments in Azure App Service. You can find more detailed information on its GitHub site at https://github.com/projectkudu/kudu/wiki.

Azure App Service also provides you with the ability to integrate authentication and authorization into your web application, REST API, mobile app back end, or even Azure Functions. You can use well-known authentication providers, such as Azure Active Directory, Microsoft, Google, Facebook, and Twitter, for authenticating users in your application. You can also use other authentication and authorization mechanisms in your applications. However, by using this security module, you can provide a reasonable level of security to your application with minimal or even no code changes.

There are situations when your application may require access to resources on your on-premises infrastructure, and App Service provides you with two different approaches:

VNet Integration This option is available only for Standard, Premium, or PremiumV2 pricing tiers. This integration allows your web app to access resources in your virtual network. If you create a site-to-site VPN with your on-premises infrastructure, you can access your private resources from your web app.

Hybrid connections This option depends on the Azure Service Bus Relay and creates a network connection between the App Service and an application endpoint. This means that hybrid connections enable the traffic between specific TCP host and port combinations.

Once you have created your App Service application, you can manage the different settings that may affect your application. You can access these settings in the Configuration menu, in the Settings section of the App Service blade:

General Settings These settings are related to the environment and platform in which your app runs. You can control the following items:

Framework Versions This setting controls which languages and versions are available to your application. You can enable or disable languages that will or won’t be used by the App Service.

Platform This setting controls whether your application runs on a 32- or 64-bit platform.

Web Sockets If your application uses SignalR or socket.io, you need to enable web sockets.

Always On Enabling this setting means your app is always loaded. By default, App Service unloads the application after it has been idle for 20 minutes.

Managed Pipeline Version Only for IIS, this setting controls the pipeline mode.

HTTP Version This setting enables the HTTP/2 protocol.

ARR Affinity Enabling this setting ensures that client requests are routed to the same instance for the life of the session. This setting is useful for stateful applications but can negatively affect stateless applications.

Auto Swap Used in conjunction with deployment slots. If you enable this option on a deployment slot, that slot is automatically swapped into the production slot whenever you push an update to the slot.

Debugging Enable remote debugging options for Visual Studio so that it can connect directly to your app.

App Settings You can load your custom settings into your application during startup. You use a key/value pair for each of your custom settings. These settings are always encrypted at rest; that is, they are encrypted when they are stored.

Connection Strings This setting stores the configuration that allows your application to connect to databases.

Default Documents This setting configures which web page is displayed at the root URL of your app.

Handler Mappings You can configure custom script processors for different file extensions.

Virtual Applications And Directories This setting allows you to add additional virtual directories or applications to your App Service.
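At run time, App Settings and Connection Strings surface inside your application as environment variables: each App Setting keeps its name, while each Connection String gets a prefix that reflects its type (for example, CUSTOMCONNSTR_ for a Custom connection string, SQLCONNSTR_ for SQL Server). The following sketch illustrates the lookup; the setting name MyDb and its value are hypothetical:

```csharp
using System;

class AppServiceSettingsDemo
{
    // App Service exposes each Connection String as an environment variable
    // whose name is the connection string name plus a type prefix:
    // SQLCONNSTR_, SQLAZURECONNSTR_, MYSQLCONNSTR_, or CUSTOMCONNSTR_.
    static string GetCustomConnectionString(string name) =>
        Environment.GetEnvironmentVariable("CUSTOMCONNSTR_" + name);

    static void Main()
    {
        // Simulate locally the variable that App Service would inject for a
        // Custom connection string named "MyDb" (a hypothetical name).
        Environment.SetEnvironmentVariable(
            "CUSTOMCONNSTR_MyDb", "Server=example;Database=demo;");

        Console.WriteLine(GetCustomConnectionString("MyDb"));
    }
}
```

In an ASP.NET Core app, the same values are also available through the standard IConfiguration abstraction, so you rarely need to read the variables directly.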

Create an Azure App Service background task by using WebJobs

Your web application may require you to run specific types of tasks that do not require interaction with the user. These tasks can usually be executed on background threads without affecting the user interface.

Azure App Service provides you with the capability to execute these background jobs by using WebJobs. WebJobs are executed in the same context as the web application, using the resources available in the App Service plan where your app is running. WebJobs can be either executables or scripts that you upload to the App Service using the Azure Portal, or you can program your own custom WebJob using the WebJobs SDK and include it in your web app project. Bear in mind that you cannot run WebJobs in an App Service running on Linux.

Note: Running Background Tasks

Azure provides you with different services—Microsoft Flow, Azure Logic Apps, Azure Functions, and WebJobs—that can be used for automating business processes and solving integration problems. The overlap between these services can lead to some confusion. You can review the differences between them at https://docs.microsoft.com/en-us/azure/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.

When you are working with WebJobs, you need to think about how many times your job should be executed and the circumstances under which it should be executed. Depending on your requirements, you should configure one of the two available WebJobs types:

Continuous This type of job starts as soon as you create the WebJob. This job type runs inside every instance in which the web app is running. The job runs in an endless loop. Continuous jobs can be remotely debugged.

Triggered This type of job is executed based on a schedule that you define. These jobs can also run when you manually fire a trigger. The job is executed in a single instance, selected by Azure from among all the instances of the web app running in the App Service plan. You cannot remotely debug this kind of job.

When you work with WebJobs, you should enable the Always On setting so that the web app does not stop when it becomes idle. This setting is found in the App Service application’s General Settings. Use the following procedure for creating a scheduled WebJob using the Azure Portal:

  1. Sign in to the management portal at https://portal.azure.com.
  2. In the Search box at the top of the Azure Portal, type the name of your App Service.
  3. On the left side of the App Service blade, click the WebJobs item under the Settings section, as shown in Figure 2-2. This will open the WebJobs area on the center and right sections of the App Service blade.
  4. Figure 2-2 App Service settings
  5. In the WebJobs area in the top-left corner, click the Add button to open the Add WebJob dialog box on the right side of the screen.
  6. Type a name for the WebJob in the Name text box.
  7. Click the folder icon for the File Upload control to open a file browser dialog box for uploading the executable or script that you want to use with this WebJob. Supported files are .cmd, .bat, .exe, .ps1, .sh, .php, .py, .js, .jar, and zip files. If you use a zip file, you can only add supported file types to it.
  8. From the Type drop-down menu, select Triggered.
  9. From the Triggers drop-down, select Scheduled.
  10. In the CRON Expression text box, write the expression that represents the schedule for the execution of your job.
  11. Click OK at the bottom of the Add WebJob dialog to create the WebJob and add it to the list in the WebJobs area.

Need More Review?: Using CRON Expressions

A CRON expression is a single-line string that represents the schedule for the execution of your job. Each expression is composed of six fields that represent time attributes, in this order, from left to right: seconds, minutes, hours, days of the month, months, and days of the week. All six fields need to be present in a CRON expression. If you don't need to provide a value for a field, use the asterisk character. A CRON expression uses the following format:

	
<seconds> <minutes> <hours> <day of month> <month> <day of week>
	

For example, the CRON expression 0 15 10 * * 6 runs at 10:15 AM every Saturday. (In these expressions, the days of the week are numbered starting with 0 for Sunday, so 6 means Saturday.)

For a detailed description of each field, as well as syntax and examples, see https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer#cron-expressions.
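To make the six-field layout concrete, the following sketch (an illustration only; Azure parses these expressions for you) splits a CRON expression into its named fields:

```csharp
using System;

class CronFields
{
    // The six NCRONTAB fields, in left-to-right order.
    static readonly string[] FieldNames =
        { "seconds", "minutes", "hours", "day of month", "month", "day of week" };

    // Pair each field value with its positional meaning.
    // "*" means "every value" for that field.
    static string Describe(string expression)
    {
        var parts = expression.Split(' ');
        var lines = new string[parts.Length];
        for (int i = 0; i < parts.Length; i++)
            lines[i] = $"{FieldNames[i]}: {parts[i]}";
        return string.Join(Environment.NewLine, lines);
    }

    static void Main()
    {
        // 0 15 10 * * 6 → second 0, minute 15, hour 10, any day of the month,
        // any month, day-of-week 6 (Saturday, because Sunday is 0).
        Console.WriteLine(Describe("0 15 10 * * 6"));
    }
}
```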

Using executables or external scripts is the simplest way to work with WebJobs, but you can also program your own WebJob using the WebJobs .NET SDK. This SDK allows you to create a console application that you can include in your solution and upload directly to your Azure App Service. Using the WebJobs .NET SDK, you can create background tasks that integrate with other Azure services, such as Queue storage, Event Hubs, Blob storage, and so on. A console application programmed with the WebJobs .NET SDK is not limited to running as a WebJob; although pairing the SDK with WebJobs is the most natural fit, you are not constrained to this configuration.

Depending on the SDK version you use, the WebJobs .NET SDK allows you to create .NET Core or .NET Framework console applications. You should use version 3.0 for creating .NET Core console apps and version 2.0 for .NET Framework apps. There are significant differences between the two versions that you should bear in mind when planning your WebJob. Some of the key differences are:

In version 3.0, you need to add a storage binding extension by installing the NuGet package Microsoft.Azure.WebJobs.Extensions.Storage. In version 2.0, this extension is available by default. Only version 3.0 supports .NET Core.

The Visual Studio tooling is different for version 2.0 and version 3.0. You cannot automatically deploy a .NET Core WebJob project alongside a web application project; you can link only existing or new .NET Framework WebJob projects to a web application.

When you are programming your own WebJob, you need to know about some concepts that you will use in your code:

host This is the runtime container in which your functions execute. The host listens for the configured triggers and calls the appropriate function. When using SDK version 3.0, you need to build an IHost object; when using version 2.0, you create a new instance of the JobHost class.

trigger This represents the different event types that fire the execution of the function that you program to perform a task. You can program two types of triggers:

Automatic This type of trigger calls a function in response to an event of a particular type. For example, you could add a trigger that calls a function every time you put a message in an Azure Queue.

Manual Using this type of trigger, you need to manually call the WebJob from your host or from the Azure Portal.

binding This is how your job interacts with the external world. Bindings provide input and output connectivity with Azure and other third-party services. You use input bindings to get information and data from external services and output bindings to update data in external services. How you install and configure the different binding types in your code depends on the SDK version that you are using.
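As a point of comparison for the host concept described above, this is a minimal sketch of a WebJobs SDK version 2.0 host—a .NET Framework console app; it requires the Microsoft.Azure.WebJobs 2.x NuGet package and a configured storage connection string, so treat it as illustrative rather than standalone:

```csharp
// C# .NET Framework. WebJobs SDK v2.0 (sketch).
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // In v2.0 you configure and instantiate a JobHost directly instead
        // of building an IHost.
        var config = new JobHostConfiguration();

        // UseDevelopmentSettings() shortens polling intervals while you are
        // testing locally.
        if (config.IsDevelopment)
        {
            config.UseDevelopmentSettings();
        }

        var host = new JobHost(config);
        // RunAndBlock() starts listening for triggers and blocks the main
        // thread, mirroring host.Run() in the v3.0 listing later in this
        // section.
        host.RunAndBlock();
    }
}
```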

The following steps show how to program a console application using WebJob SDK 3.0. In this example, you will use .NET Core to program the application:

Note: Requirements

To run this example in your local environment, you will need to install Visual Studio 2017 with the Azure Development workload and have an Azure account to publish your WebJob.

  1. Open Visual Studio and create a new project by clicking File > New > Project. This will open the New Project window.
  2. In the New Project window, on the tree structure on the left side of the window, select Installed > Visual C# > .NET Core.
  3. Select the Console App (.NET Core) template project.
  4. At the bottom of the Project Properties window, provide values for the Name, Location, Solution, and Solution Name. Then click the OK button.
  5. You need to add some NuGet packages by clicking Tools > NuGet Package Manager > Manage NuGet Packages For Solution.
  6. On the Manage NuGet Packages for Solution tab, click the Browse tab.
  7. Install the following NuGet packages:
    • Microsoft.Azure.WebJobs
    • Microsoft.Azure.WebJobs.Extensions
    • Microsoft.Azure.WebJobs.Extensions.Storage
    • Microsoft.Extensions.Logging.Console
  8. In the Solution Explorer window, click the Program.cs file.
  9. From the Main method, remove all the code and add the code shown in Listing 2-1.

Listing 2-1 Configuring a .NET Core Generic Host

	
//C# .NET Core. WebJobs SDK v3.0
//You need to add the Microsoft.Extensions.Hosting and
//Microsoft.Extensions.Logging namespaces to your code.
var builder = new HostBuilder();
builder.UseEnvironment("development");
builder.ConfigureWebJobs(wj =>
{
    wj.AddAzureStorageCoreServices();
    wj.AddAzureStorage();
});
builder.ConfigureLogging((context, b) =>
{
    b.AddConsole();
});
var host = builder.Build();
using (host)
{
    host.Run();
}
	

The first thing you need to do to create a .NET Core Generic Host container is to create a HostBuilder object. This object will perform all the configuration needed before creating the actual host. This process is typical for any other .NET Core application that doesn’t need to deal with HTTP. The generic host deals with the lifetime of the application.

You apply the configuration for the new host by using the ConfigureWebJobs() method. This method automatically adds the appsettings.json file and environment variables as configuration sources. Inside this method, you configure the bindings that listen for the events that you want to monitor. In this example, you want to take some action when a new message arrives in the Azure Queue, which means you need to configure the Storage binding extension. You do so by calling the extension method AddAzureStorage() on your HostBuilder instance. The AddAzureStorageCoreServices() method is used by Azure WebJobs for saving log data that is shown on the WebJobs dashboard.

When you are happy with your configuration, you create the actual host by calling the Build() method on your HostBuilder instance. Then you only need to start the host's lifecycle by calling the Run() method on the host instance.

  1. In the Solution Explorer window, add a new C# class. Right-click the name of your project. On the contextual menu, click Add > New Item.
  2. In the New Item window, select Class. Type a name for your new class and click the Add button. The new class will contain the triggers that will be listening to the events on the Azure Queue.
  3. In the new class, add the method shown in Listing 2-2.

Listing 2-2 New message queue trigger

	
//C# .NET Core. WebJobs SDK v3.0
//You need to add the Microsoft.Azure.WebJobs and
//Microsoft.Extensions.Logging namespaces to your code.
public static void NewMessageQueueTrigger(
    [QueueTrigger("<put_your_queue_name_here>")] string message,
    ILogger logger)
{
    logger.LogInformation($"New message from queue: {message}");
}
	

You can create an automatic trigger associated with a binding by creating a public static function with the appropriate parameter attributes. In this case, the QueueTrigger parameter attribute configures the Azure Queue that the function listens to for new messages. When a new message arrives, it is passed to the function through the message string parameter. The parameter attributes and types depend on the triggers and bindings that you want to use in your code.

Need More Review?: Queue Storage Binding

You can review the full details about the available triggers and outputs associated with the storage binding by reviewing the online Microsoft Docs at https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue.

Now that you are done with your code, you need to configure the connection string that you will use for connecting with your storage account:

  1. In the Azure Portal, create a storage account. You can also use an existing storage account for this example.
  2. On the Storage Account blade, click the Access Keys item in the Settings section. Copy the Connection String under the key1 section. You will need this value in an upcoming step.
  3. In the Solution Explorer window, right-click the name of your project. On the contextual menu, click Add > New Item.
  4. In the New Item window, select JavaScript JSON Configuration File. Type appsettings.json as the name for the new file and click the Add button.
  5. In the Solution Explorer window, right-click the appsettings.json file and choose Properties.
  6. In the Properties for the appsettings.json file, change the Copy To Output Directory option from Do Not Copy to Copy If Newer.
  7. Replace the content of the appsettings.json file with the following string:
		
{
    "AzureWebJobsStorage": "<put_your_storage_account_connection_string_here>"
}
		
	

At this point, you can start testing your application locally:

  1. In the Visual Studio window, click Debug > Start Without Debugging.
  2. In Visual Studio, open the Cloud Explorer window by clicking View > Cloud Explorer.
  3. In the Cloud Explorer window, connect to your Azure Subscription.
  4. Click the user icon at the top center of the Cloud Explorer window, which will open the Account Management section.
  5. Click the Manage Accounts link, which will open the Account Settings window.
  6. In the Account Settings window, click the Sign In button.
  7. Once you have logged into your account, ensure that your Azure subscription appears on the Cloud Explorer window and click the Apply button to close the Account Management section.
  8. In the Cloud Explorer, navigate to your storage account.
  9. Expand your storage account node, right-click Queues and click Create Queue. You need to use the same queue name as the one you used for your code in the QueueTrigger parameter attribute.
  10. Click the queue that you created in the previous step to open a new tab in Visual Studio with your queue’s name.
  11. On your queue’s tab, create a new message by clicking the Add Message button (see Figure 2-3). This will open the Add Message window.
  12. Figure 2-3 Creating a new message
  13. In the Add Message window shown in Figure 2-4, write a message and configure an expiration value using the Expires In setting.
  14. Figure 2-4 Adding a new message to the queue
  15. In the console application window, ensure that the new message that you published to your Azure Queue has appeared. If you refresh your Azure Queue, you will see that the message has disappeared; messages are removed from the queue once they are processed.

At this point, you have tested your console application to confirm that it can connect to Azure and monitor an Azure Queue for new messages. You have also verified that when a new message arrives in the configured Azure Queue, your console application writes the message to the console. You used the WebJobs .NET SDK 3.0 to conduct the test, by using the QueueTrigger from the Queue storage extension.

The last part of this example is to publish your console application as a WebJob in an App Service. You have two options:

Publish your .NET Core application as a console application, package all binaries in a zip file, and create a run.cmd file that runs the application by using the command dotnet run. Then you can upload this zip file by using the procedure explained at the beginning of this section.

Publish your .NET Core application directly from Visual Studio.
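The first of these options can be sketched with a few shell commands. This is a hedged example: the project name MyWebJob and the zip file name are placeholders, and the published output is started through its dll (an alternative to running dotnet run against the project sources):

```shell
# Sketch: package a published .NET Core console app as a WebJob zip.
# MyWebJob is a placeholder project/assembly name.
dotnet publish -c Release -o ./publish
# run.cmd is the entry point that App Service executes for the WebJob.
echo dotnet MyWebJob.dll > ./publish/run.cmd
# Zip the published binaries together with run.cmd for upload.
cd ./publish && zip -r ../MyWebJob-webjob.zip . && cd ..
```

You can then upload the resulting zip file through the WebJobs section of your App Service, as described earlier in this section.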

The following procedure shows how to publish your .NET Core application from Visual Studio:

  1. In Visual Studio, right-click the name of your project.
  2. Click Publish. This will open the Pick A Publish Target window.
  3. On the Pick A Publish Target window, ensure that the Microsoft Azure App Service option on the left side of the window is selected.
  4. Ensure that the Create New option is selected. If you need to publish your WebJob to an existing App Service, click the Select Existing option.
  5. Click the Publish button in the bottom-right corner of the window to open the Create App Service window.
  6. In the Create App Service window, provide the following information: App Name, Subscription, Resource Group, and Hosting Plan. This is the same procedure that you use for creating a new App Service from Visual Studio.
  7. Once the publishing process has finished, your WebJob is ready to perform the tasks you have programmed.

In this example, you won’t be able to see any results because you cannot access the console of the instance in which the WebJob is running. When you run your application in your local environment, any new messages that you receive from the queue are written to the console. When you publish your WebJob to Azure, you need to use an alternate method for viewing these results. You can use output bindings and write the message to a blob file or an alternate queue. The preferred way of visualizing these log messages is to integrate your WebJob with Application Insights.
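As a hedged sketch of the Application Insights option, the following shows how a WebJobs SDK 3.0 host could be wired up for logging. It assumes the Microsoft.Azure.WebJobs.Logging.ApplicationInsights NuGet package and an APPINSIGHTS_INSTRUMENTATIONKEY application setting, neither of which appears in the original example:

```csharp
// Sketch: sending WebJobs SDK 3.0 host logs to Application Insights.
// Assumes the Microsoft.Azure.WebJobs.Extensions.Storage and
// Microsoft.Azure.WebJobs.Logging.ApplicationInsights packages.
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

class Program
{
    static async System.Threading.Tasks.Task Main()
    {
        var builder = new HostBuilder()
            .ConfigureWebJobs(b => b.AddAzureStorage())   // queue/blob triggers
            .ConfigureLogging((context, b) =>
            {
                b.AddConsole();
                // Send logs to Application Insights when a key is configured.
                string key = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"];
                if (!string.IsNullOrEmpty(key))
                {
                    b.AddApplicationInsightsWebJobs(o => o.InstrumentationKey = key);
                }
            });

        using (var host = builder.Build())
        {
            await host.RunAsync();
        }
    }
}
```

With this wiring in place, the messages that your triggered functions write through their ILogger appear in Application Insights instead of being lost with the instance console.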

Need More Review?: Using WebJobs SDK

You can learn more about how to use WebJobs SDK by reviewing Microsoft Docs at https://docs.microsoft.com/en-us/azure/appservice/webjobs-sdk-how-to

You can also review how to add Application Insight logging support to your WebJob SDK application by reviewing the Microsoft Docs example at https://docs.microsoft.com/en-us/azure/app-service/webjobs-sdk-getstarted

Enable diagnostics logging

Troubleshooting and diagnosing the behavior of an application is a fundamental operation in the lifecycle of every application. This is especially true if you are developing your own application. Azure App Service provides you with some mechanisms for enabling diagnostics logging at different levels that can affect your application:

Web Server Diagnostics These are message logs generated from the web server itself. You can enable three different types of logs:

Detailed Error Logging This log contains detailed information for any request that results in an HTTP status code 400 or greater. When such an error happens, a new HTML file is generated containing all the information about the error; a separate HTML file is generated for each error. These files are stored in the file system of the instance in which the web app is running. A maximum of 50 error files can be stored; when this limit is reached, the oldest 26 files are automatically deleted from the file system.

Failed Request Tracing This log contains detailed information about failed requests to the server. This information contains a trace of the IIS components that were involved in processing the request. It also contains the time taken by each IIS component. These logs are stored in the file system. The system creates a new folder for each new error, applying the same retention policies as for detailed error logging.

Web Server Logging This log registers the HTTP transaction information for the requests made to the web server. The information is stored using the W3C extended log file format. You can configure custom retention policies for these log files. By default, these diagnostic logs are never deleted, but they are restricted by the space they can use in the file system. The default space quota is 35 MB.

Application diagnostics You can send a log message directly from your code to the log system. You use the System.Diagnostics.Trace class for writing information in the application diagnostics logs. This is different from Application Insights because Application diagnostics are just logged information that you register from your application. If you want your application to send logs to Application Insights, you need to add the Application Insights SDK to your application.

Deployment diagnostics This log is automatically enabled for you, and it gathers all information related to the deployment of your application. Typically, you use this log for troubleshooting failures during the deployment process, especially if you are using custom deployment scripts.
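The application diagnostics entry above relies on calls to System.Diagnostics.Trace, such as the following sketch (the class and the message text are illustrative, not from the original text):

```csharp
// Sketch: writing entries to the App Service application diagnostics log
// with System.Diagnostics.Trace. Names and messages are illustrative.
using System.Diagnostics;

public class OrderProcessor
{
    public void Process(int orderId)
    {
        Trace.TraceInformation("Started processing order {0}", orderId);
        try
        {
            // ... business logic for the order would go here ...
            Trace.TraceWarning("Order {0} has no shipping address", orderId);
        }
        catch (System.Exception ex)
        {
            Trace.TraceError("Order {0} failed: {1}", orderId, ex.Message);
        }
    }
}
```

Which of these messages actually reaches the log files depends on the logging level that you configure, as described next.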

You can enable the different diagnostics logs, shown in Figure 2-5, using the Azure Portal. When you enable Application Logging, you can select the level of log messages that will be registered in the files. These error levels are:

Figure 2-5 Enabling diagnostics logging
Screenshot_11

Disabled No errors are registered.

Error Critical and Error categories are registered.

Warning Registers Warning, Error, and Critical categories.

Information Registers Info, Warning, Error, and Critical log categories.

Verbose Registers all log categories (Trace, Debug, Info, Warning, Error, and Critical).

When you configure application logging, you can configure where the log files will be saved. You can choose between saving the logs in the file system or using blob storage. Storing application logs in the file system is intended for debugging purposes. If you enable this option, it will be automatically disabled after 12 hours. If you need to enable the application logging for a more extended period, you need to save the log files in blob storage. When you configure application logging for storing the log files in blob storage, you can also provide a retention period in days. When log files become older than the value that you configure for the retention period, the files are automatically deleted. By default, there is no retention period configured. You can configure the web server logging in the same way that you configure the storage for your application logging.
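The same settings can also be scripted with the Azure CLI. This is a hedged example: the resource group and app names are placeholders, and the exact flag values vary slightly between CLI versions:

```shell
# Sketch: enable application logging (Information level) and
# web server logging, both to the file system. Names are placeholders.
az webapp log config \
    --resource-group myResourceGroup \
    --name myWebApp \
    --application-logging filesystem \
    --level information \
    --web-server-logging filesystem
```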

If you configure application or web server logging for storing the log files in the file system, the system creates the following structure for the log files:

/LogFiles/Application/ This folder contains the log files from the application logging.

/LogFiles/W3SVC#########/ This folder contains the files from the Failed Request Traces. The folder contains an XSL file and several XML files. The XML files contain the actual tracing information, while the XSL file provides the formatting and filtering functionality for the content stored in the XML files.

/LogFiles/DetailedErrors/ This folder contains the *.htm files related to the Detailed Error Logs.

/LogFiles/http/RawLogs/ This folder contains the Web Server logs in W3C extended log format.

/LogFiles/Git This folder contains the logs generated during the deployment of the application. You can also find deployment files in the folder D:\home\site\deployments.

You will need this folder structure when you want to download the log files. You can use two different mechanisms for downloading the log files: FTP/S or Azure CLI. The following command shows how to download log files to the current working directory:

	
az webapp log download --resource-group <resource-group-name> --name <app-name>
	

The logs for the application <App name> will be automatically compressed into a file named webapp_logs.zip, which is downloaded to the directory where you executed the command. You can use the optional parameter --log-file to download the logs to a different zip file in a different path.

There are situations in which you may need to view the logs for your application in near-real time. For these situations, App Service provides you with Log Streams. Using streaming, you can see the log messages as they are being saved to the log files. Any text file stored in the D:\home\LogFiles folder will also be displayed on the log stream. You can view log streams by using the embedded viewer in the Azure Portal, on the Log Stream item under the Monitoring section of your App Service. You can also use the following Azure CLI command for viewing your application or web server logs as a stream:

	
az webapp log tail --resource-group <resource-group-name> --name <app-name>
	

Skill 2.2: Create Azure App Service mobile apps

From the most basic to the most complex, mobile applications are an excellent vehicle for exchanging information with the user. Mobile apps are usually the interface that the user employs for purchasing products, playing music, sending and receiving messages, playing games, and many other activities. All these mobile apps need to communicate with the services that provide the actual functionality to the user. The mobile apps receive requests or information from users, and they return responses with information such as confirmation of a purchase, a stream with the song that the user wants to hear, a message from a friend, or the position of other players in a game.

Azure App Service provides you with the capabilities of programming these back-end services that will make your mobile app work correctly. It also provides you with the ability to remotely monitor your mobile apps to ensure that they are running correctly, and it gathers all the information that you may need when troubleshooting.

This skill covers how to:

  • Add push notifications for mobile apps
  • Enable offline sync for mobile apps
  • Implement a remote instrumentation strategy for mobile devices

Add push notifications for mobile apps

When you develop a mobile app, there is a high probability that you will need to send information to your users when they are not using the app. To do so, you use push notifications. This asynchronous communication mechanism allows you to interact with your users even when they are offline. Several key players take part in this asynchronous communication:

The mobile app client This is your actual mobile app, which runs on your user’s device. The device must register with the Platform Notification System (PNS) to receive notifications. This registration generates a PNS handler that is stored in the mobile app back end and used for sending notifications.

The mobile app back end This is the back end for your app client, and it stores the PNS handler that the client received from the PNS. Using this handler, your back end service can send push notifications to all registered users.

A Platform Notification System (PNS) These platforms deliver the actual notification to the user’s device. Platform Notification Systems are platform dependent, and each vendor has its own PNS. Apple has the Apple Push Notification Service, Google uses Firebase Cloud Messaging, and Microsoft uses the Windows Notification Service.

Even if your mobile app targets a single platform, implementing push notifications requires a good amount of effort. This is because some Platform Notification Systems focus only on delivering the notification to the user’s device but don’t deal with requirements like targeted or broadcast notifications. Another requirement for most Platform Notification Systems is that device tokens need to be refreshed every time you release a new version of your app. This operation requires your back end to deal with a large amount of traffic and database updates simply to keep device tokens updated. If you need to support different mobile platforms, these tasks become even more complicated.

Microsoft provides you with the Azure Notification Hub. This service provides cross-platform push notifications for your mobile app back end, creating an abstraction for managing each Platform Notification System and providing a consistent API for interacting with the Notification Hub. When you need to add push notifications to your mobile app, you integrate the Notification Hub service with your back-end service hosted on the Mobile App Service. Figure 2-6 shows the workflow for sending push notifications to users using the Notification Hub.

Figure 2-6 Push notification workflow using Notification Hub
Screenshot_12

Note: Notification Hub Integration

Microsoft also provides an SDK for easing the direct integration between your native (iOS, Android, or Windows) or cross-platform (Xamarin or Cordova) code and Azure Notification Hub—without using your back end. The drawback to this approach is that the Mobile Apps Client SDK removes all tags that you can associate with the device for security purposes. If you need these tags for performing segmented notifications, you should register your users’ devices using the back end.

The interaction between your back-end Mobile App and the Notification Hub is performed using the Mobile App SDK for ASP.NET or Node.js web applications. Before your back-end application can send push notifications, you need to connect your App Service with your Notification Hub. Use the following procedure to make this connection:

  1. Sign in to the management portal (https://portal.azure.com).
  2. On the left side of the portal, click Create A Resource.
  3. On the New blade, in the Search the Marketplace text box, type Mobile App and press return.
  4. On the Mobile App blade, click the Create button.
  5. On the Create Mobile App blade, type a name for your mobile back-end application.
  6. Type a name for the New Resource Group.
  7. Click App Service Plan, and then click Create New.
  8. In the New App Service Plan panel, in the App Service Plan text box, type a name for the App Service Plan.
  9. Select the Location and Pricing Tier for the App Service Plan.
  10. In the New App Service Plan panel, click OK.
  11. In the Mobile App panel, click the Create button.
  12. Once the Mobile App has been created, type the name of your new Mobile App in the Search Resources, Services, And Docs text box at the top of the Azure Portal.
  13. On the App Service blade, on the left side of the blade, click Push in the Settings section. This will open the Push area on the center and right sections of the App Service blade.
  14. In the top-left corner of the Push area, click the Connect button to open the Notification Hub panel.
  15. In the Notification Hub panel, click the Notification Hub plus sign, which will open the New Notification Hub panel.
  16. In the New Notification Hub panel, type a name for the Notification Hub.
  17. In the Namespace section, click the Or Create New link to create a new namespace. A Notification Hub namespace is a group of hubs in the same region.
  18. Leave the Pricing Tier set to Free.
  19. Click the OK button to close the New Notification Hub panel.
  20. The newly created Notification Hub should now be connected to your App Service. At this point, you can configure the integration of the Notification Hub with each Platform Notification System that you want to use for sending notifications.

The next step is to modify your back-end code to integrate with the Notification Hub. Listing 2-3 shows a piece of code for sending notifications from your back end to the Notification Hub.

Listing 2-3 Sending notifications from the ASP.NET back end

	
// C# ASP.NET Framework. Mobile App SDK
// Add the following using statements to your code:
// using System.Collections.Generic;
// using Microsoft.Azure.NotificationHubs;
// using Microsoft.Azure.Mobile.Server.Config;

// We need the configuration for writing to the logs.
HttpConfiguration config = this.Configuration;

// We get the mobile app settings for reading the Notification Hub configuration.
MobileAppSettingsDictionary mobileSettings = this.Configuration
    .GetMobileAppSettingsProvider().GetMobileAppSettings();

// Get the Notification Hub name and connection string for creating
// a Notification Hub client.
string notificationHubName = mobileSettings.NotificationHubName;
string notificationHubConnection = mobileSettings
    .Connections[MobileAppSettingsKeys.NotificationHubConnectionString]
    .ConnectionString;

// Create a new Notification Hub client that will perform the operations
// against the Notification Hub.
NotificationHubClient hubClient = NotificationHubClient
    .CreateClientFromConnectionString(notificationHubConnection, notificationHubName);

// We want to send notifications to all registered templates that contain the
// "messageParam" parameter. This includes templates for Apple, Google, and Windows.
Dictionary<string, string> templateParams = new Dictionary<string, string>();
templateParams["messageParam"] = item.Text + " was processed.";

try
{
    // Send the actual push notification.
    var result = await hubClient.SendTemplateNotificationAsync(templateParams);
    // We register that the notification was sent successfully.
    config.Services.GetTraceWriter().Info(result.State.ToString());
}
catch (System.Exception ex)
{
    // There were some issues that we need to register in the logs.
    config.Services.GetTraceWriter()
        .Error(ex.Message, null, "Push.SendAsync Error");
}
	

The last step is to make the needed modifications to your mobile app client so that it registers the device with its corresponding Platform Notification System and registers with your Notification Hub through your back end. The details of this implementation depend on the platform that you are using for your mobile app client.

More Info: Notification Hub Examples

You can review the details for implementing Notification Hub integration for your mobile app in Microsoft Docs:

  • iOS: https://docs.microsoft.com/en-us/azure/notificationhubs/notification-hubs-ios-apple-push-notification-apns-get-started
  • Android: https://docs.microsoft.com/en-us/azure/notificationhubs/notification-hubs-android-push-notification-google-fcm-getstarted
  • Windows Universal: https://docs.microsoft.com/en- us/azure/notification-hubs/notification-hubs-windows-store-dotnet-getstarted-wns-push-notification

Enable offline sync for mobile apps

When you plan and design any mobile app, you need to consider situations in which the user won’t have access to a data network. Perhaps the user is in a zone with no coverage, or is on a plane and has enabled airplane mode; whatever the reason, users sometimes cannot access a data network.

To deal with these offline scenarios, the Azure Mobile Apps client and server SDKs allow your application to remain functional even when the user has no access to a network. While your app is in offline mode, the Mobile Apps SDKs allow your application to create, delete, or modify data. This modified data is saved to a local store, and when the user has access to a network again, the SDK synchronizes the changes with your back end. The SDK also deals with situations in which there are conflicts between the data on the server and the data in the application’s local storage, allowing you to handle these conflicts on either the client or the server side.

When you work with the Azure Mobile App SDK, your client uses the /tables endpoint for performing CRUD (Create, Read, Update, Delete) operations on the data models used by your back end. The calls to this endpoint fail if the client application doesn’t have a network connection. To support offline scenarios, the client SDK provides the sync table interfaces IMobileServiceSyncTable (for Windows and Xamarin) and MSSyncTable (for iOS). When you use these interfaces, you can still perform any CRUD operation that you performed before with the online version, but the data is automatically read from or written to local storage.

The local store is a data-persistence layer provided by the Azure Mobile App client SDK. On the Windows, Xamarin, and Android platforms, this persistence layer is based on SQLite; on iOS, it is based on Core Data. The local store needs to be initialized before your client application can use it with the sync tables. This initialization consists of calling the method IMobileServiceSyncContext.InitializeAsync(localstore).

The Mobile App client SDK tracks the changes made to the local store by using a sync context. This sync context consists of an ordered list, or operation queue, of all CUD (Create, Update, Delete) operations made with sync tables. These CUD operations are sent to the server later, when the client pushes the local changes. When you work with the sync context, you can perform the following operations:

Push All the CUD changes will be sent to the server. This is an incremental action, which means only changes from the last push operation will be sent to the server. To ensure that the operations are sent in the correct order, you cannot send changes for individual tables.

Pull This operation downloads the data in a table to the local storage. Pull operations are made on a per-table basis. By default, all records in the table are downloaded, although you can use customized queries for getting only a subset of the data. If you perform a pull operation on a table that has pending changes to be sent to the server, then the pull operation performs an implicit push to synchronize the data with the server. This allows the SDK to minimize the possibility of conflicts.

Incremental Sync When you perform a pull operation, you can add a query name to the call. This query name is used only on the client side. You need to ensure that the query name is unique for each logical operation; otherwise, a different pull operation could return incorrect results. When you use a query name, Azure performs an incremental sync, retrieving only the records that have changed since the last successful sync. This behavior depends on the updatedAt field in the table.

Purge This operation clears the contents of your local storage. If there is a CUD operation in the sync context pending to be uploaded to the server, the purge operation will fail and throw an exception. In this situation, you can still purge the data from your local store by setting the force parameter to true in the PurgeAsync() method.
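A minimal sketch of these operations with the managed client SDK follows. It assumes an already-constructed MobileServiceClient named client and a model class TodoItem; the table name, query name, and item text are illustrative, not from the original text:

```csharp
// Sketch: initializing the local store and using a sync table.
// Assumes the Microsoft.Azure.Mobile.Client and
// Microsoft.Azure.Mobile.Client.SQLiteStore NuGet packages.
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.SQLiteStore;
using Microsoft.WindowsAzure.MobileServices.Sync;

var store = new MobileServiceSQLiteStore("localstore.db");
store.DefineTable<TodoItem>();                    // mirror the server table locally
await client.SyncContext.InitializeAsync(store);  // initialize the local store

IMobileServiceSyncTable<TodoItem> table = client.GetSyncTable<TodoItem>();
await table.InsertAsync(new TodoItem { Text = "Buy milk" });  // queued in the sync context
await client.SyncContext.PushAsync();                         // push pending CUD operations
await table.PullAsync("allTodoItems", table.CreateQuery());   // incremental sync by query name
```

Note that the PullAsync call uses a query name, so it performs the incremental sync described above; passing null instead would pull all records every time.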

Implement a remote instrumentation strategy for mobile devices

Once you have deployed your mobile app to any of the different distribution marketplaces such as Apple Store, Google Play, or Microsoft Store, it becomes difficult to get information about how your app is performing on the user’s device.

Visual Studio App Center is a cloud tool that provides remote instrumentation for your mobile apps. Using App Center, you can get information about your mobile app’s problems while it runs on users’ devices. You can also monitor the usage statistics for your apps.

You can integrate App Center with your mobile app using the App Center SDK. This SDK is available for most popular mobile platforms and programming languages, such as Android, iOS, React Native, Universal Windows Platform, Xamarin, and Apache Cordova.

The App Center SDK is a modularized SDK in which each module corresponds with the different services offered by App Center:

Analytics This service allows you to analyze users’ behavior and customer engagement. It offers information about the operating system version, session count, device model properties, application updates, and the number of times the user comes back to your application. Also, you can create your own custom events for measuring meaningful things for your business, such as whether a user played a video or started a purchase transaction and then decided to cancel it.

Diagnostics (Crashes) When your application crashes, App Center automatically generates a crash report including a stack trace of the execution at the moment of the crash, which module threw the exception that caused the crash, and other useful information for troubleshooting the crash. You can also integrate App Center with your favorite bug tracker, like Jira, VSTS, Azure DevOps, or GitHub, and it can automatically create a ticket or incident report on your bug-tracking platform.

Distribute During the testing phase, you can distribute your application to a group of users before publishing your app to the Apple Store, Google Play, or Microsoft Store. You can also use this distribution mechanism if you plan to use your mobile app as an internal corporate app that won’t be publicly available.

Push App Center allows you to send push notifications to your users directly from the App Center portal.

The following procedure shows how to integrate an iOS Xamarin App with App Center:

  1. Sign in to the App Center Management Portal (https://appcenter.ms).
  2. In the top-right corner of the App Center Management Portal, click the Add New drop-down, and then click Add New App.
  3. On the Add New App panel, type the name of your application in the App Name text box.
  4. In the OS section, select iOS.
  5. In the Platform section, select Xamarin.
  6. Click the Add New App button in the bottom-right corner of the Add New App panel.
  7. Open Visual Studio for Mac.
  8. Click File > New Solution.
  9. On the Choose Template For Your New Project window, click Multiplatform > App on the left side of the window.
  10. Select the Native App (iOS, Android) project template.
  11. Click the Next button in the bottom-right corner of the window.
  12. Type a name for your application.
  13. Click the Next button at the bottom-right corner of the window.
  14. Select the location in which your project will be created.
  15. Click the Create button at the bottom-right corner of the window.
  16. In the Solution Explorer, right-click your iOS project, and then click Add > Add NuGet Packages.
  17. On the NuGet packages manager window, type App Center in the search box in the top-right corner of the window.
  18. Select the following packages:
    • Microsoft.AppCenter
    • Microsoft.AppCenter.Crashes
    • Microsoft.AppCenter.Analytics
  19. Click the Add Packages button.
  20. Accept the license terms.
  21. Open the AppDelegate.cs file and add the following using statements:

    using Microsoft.AppCenter;
    using Microsoft.AppCenter.Analytics;
    using Microsoft.AppCenter.Crashes;

  22. Add the following statement to the FinishedLaunching() method in the AppDelegate.cs file:

    AppCenter.Start("<your_app_center_key>", typeof(Analytics), typeof(Crashes));

  23. You can get your App Center key from the Overview page of your app in the App Center Management Portal.
  24. Now you can see your active users, events, and diagnostics information in the App Center Management Portal using the Diagnostics and Analytics modules, as shown in Figure 2-7.
Figure 2-7 App Center modules
Screenshot_13
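To illustrate the custom events mentioned for the Analytics module, the following is a hedged sketch; the event name and its properties are invented for the example:

```csharp
// Sketch: tracking a custom event with the App Center SDK.
// The event name and properties are illustrative.
using System.Collections.Generic;
using Microsoft.AppCenter.Analytics;

Analytics.TrackEvent("video_played", new Dictionary<string, string>
{
    { "videoId", "42" },
    { "quality", "hd" }
});
```

Events tracked this way appear in the Analytics module of the App Center Management Portal alongside the automatically collected session and device information.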

Skill 2.3: Create Azure App Service API apps

When you are designing the architecture of a web application, one of the layers that you will need is an API that allows the different layers of your architecture to communicate with each other. Regardless of the architecture of your application, there is a good chance that you will use a RESTful API to make that intercommunication happen.

In the same way that you can use Azure App Service for hosting your web application, you can use it for hosting your RESTful API. This allows you to take advantage of all the security, scalability, and serverless capabilities that we reviewed in previous skills.

This skill covers how to:

  • Create an Azure App Service API app
  • Create documentation for the API by using open-source and other tools

Create an Azure App Service API app

Creating an Azure App Service API app is quite similar to creating a regular web application deployed in an App Service. You have the same options available for your API app that you have for a web app. This means that you need to create a new App Service Plan or assign your API app to an existing one. The following procedure shows how to create a new App Service API app, including an ASP.NET demo API, using the Azure Portal:

  1. Sign in to the management portal (https://portal.azure.com).
  2. Click the Create A Resource link in the upper-left corner, and then select Web > API App. This will open the Create API App panel.
  3. On the Create API App panel, type a name for your API.
  4. In the subscription drop-down menu, select the subscription for which you want to create this API app.
  5. In the Resource Group section, type a name for a new resource group. You can also use an existing resource group.
  6. Click the App Service Plan/Location setting, which will open the App Service Plan pane.
  7. On the App Service Plan pane, click Create New.
  8. On the New App Service Plan pane, shown in Figure 2-8, type a name for the App Service Plan.

Figure 2-8 Creating a New App Service Plan
Screenshot_14

  9. In the Location drop-down menu, select the region in which you want to create the App Service Plan.
  10. In the Pricing Tier control, select the F1 pricing tier under the Dev / Test tab.
  11. Click the OK button to close the New App Service Plan pane, and then click the Create button on the Create API App panel.
Once you have created your App Service API app, you can create your demo API app:

  1. In the newly created API app, click Quickstart from the Deployment section.
  2. On the Quickstart pane, click ASP.NET from the General section.
  3. On the ASP.NET Get Started pane, check the option I Acknowledge That This Will Overwrite All Site Contents.
  4. Click the blank icon below the check box. This will deploy the example project to your App Service.
  5. Click the Download button to download the code to your local environment. If desired, you can redeploy it to the App Service later.

At this point, you should be able to make requests to your API published in Azure. Your API is exposed using HTTP and HTTPS protocols. HTTP is not suitable for production environments, so you should secure your API access by enabling the HTTPS Only option on the SSL Settings pane in the Settings section. In this section, you can also configure SSL bindings for any additional fully qualified domain name (FQDN) added to your API app.
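The HTTPS Only setting can also be flipped from the Azure CLI instead of the portal. This is a hedged sketch; the resource group and app names are placeholders:

```shell
# Sketch: force HTTPS for the API app. Names are placeholders.
az webapp update \
    --resource-group myResourceGroup \
    --name myApiApp \
    --set httpsOnly=true
```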

Another important security-related topic to consider is that, by default, applications do not need to authenticate to interact with your API. Depending on your security requirements, you might want to authenticate all or some of the requests made to your API. Azure App Service allows you to secure access to your API by requiring authentication before users or applications make requests to it. You can add this security layer to your API without making any modifications to your code. This security layer is provided by different authentication providers:

Azure Active Directory You can authenticate any user within your organization or any other organization with Office 365 deployed.

Facebook A user with a valid Facebook account could access your API.

Google A user with a valid Google account could access your API.

Twitter A user with a valid Twitter account could access your API.

Microsoft For Microsoft accounts, such as outlook.com, Xbox Live, MSN, OneDrive, and Bing.

Use the following procedure to enable Authentication for your API app using Azure Active Directory:

  1. Sign in to the management portal (https://portal.azure.com).
  2. In the search box at the top of the Azure portal, type the name of your App Service API app.
  3. In the Settings section, click Authentication / Authorization.
  4. In the Authentication / Authorization pane, set the App Service Authentication switch to On.
  5. In the Action To Take When Request Is Not Authenticated drop-down menu, select Log In With Azure Active Directory.
  6. In the Authentication Providers section, shown in Figure 2-9, click Azure Active Directory to configure this authentication provider.

Figure 2-9 Authentication providers
Screenshot_15

  7. On the Azure Active Directory Settings pane, choose Express for the Management Mode.
  8. In the Current Active Directory section, click Create New AD App.
  9. Type a name in the Create App text box. This will be the name of the application that will be created and registered on your Azure Active Directory domain. By default, only the users from your domain will be able to authenticate to your API.

Need More Review?: Authenticate on Behalf of Your Users

The API is not a service that will typically be used directly by users. It will be consumed by front-end web, desktop, or mobile applications. If you require your users to authenticate to your front-end or mobile application, it is quite usual to pass these credentials to your back-end API. The following article shows how to enable and configure authentication between a front-end web application and an API app. See https://docs.microsoft.com/en-us/azure/app-service/app-service-webtutorial-auth-aad#configure-auth.

One additional security layer that you might want to add to your application is Cross-Origin Resource Sharing (CORS) protection. With this protection, a service tells the web browser which external origins are allowed to use its resources. This scenario usually happens when your web application needs to make a request to your API back end using JavaScript. By default, web browsers don't allow requests from sources that don't match your web application's domain, port, and protocol. This means that if you published your web application at https://app.contoso.com/ and some JavaScript code in your application makes a request to https://images.contoso.com, the request will fail because of the CORS protection; in this example, the domain name doesn't match. You need to configure the Access-Control-Allow-Origin header to allow the web application, app.contoso.com, to access resources from other trusted sources, such as images.contoso.com.

You can configure this CORS protection in your own code, but Azure App Service also provides this protection without requiring any changes to your code. Use the following procedure to enable CORS protection for your API app:

  1. Sign in to the management portal (http://portal.azure.com).
  2. In the search box at the top of the Azure portal, type the name of your App Service API app.
  3. Click the CORS item in the API section.
  4. On the CORS pane, in the Allowed Origins section, type the URL for your web application or any other URL that you want to allow to access your API programmatically. For example, if your API is published at https://api.contoso.com, and your web application is published at https://webapp.contoso.com, you should add https://webapp.contoso.com to the list of allowed origins in the API app's CORS configuration.

Bear in mind that you should not mix your own CORS configuration with Azure's built-in CORS configuration. You can use either of them, built-in or your own, but you cannot mix them. The built-in configuration takes precedence over your own configuration, so if you mix both types of configurations, your own configuration won't be effective.
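The check behind CORS can be illustrated with a short sketch. The following Node.js snippet is a hypothetical, simplified version of the server-side logic: it compares the request's Origin header against a configured list of allowed origins and, on a match, emits the Access-Control-Allow-Origin header. The function name and the origins are illustrative and are not part of any Azure SDK.

```javascript
// Hypothetical sketch of the check behind CORS protection:
// compare the request's Origin header against the allowed origins
// and, on a match, return the Access-Control-Allow-Origin header.
function corsHeadersFor(requestOrigin, allowedOrigins) {
    // Origins must match on scheme, host, and port exactly.
    if (allowedOrigins.includes(requestOrigin)) {
        return { 'Access-Control-Allow-Origin': requestOrigin };
    }
    // No header returned: the browser will block the cross-origin response.
    return {};
}

const allowed = ['https://webapp.contoso.com'];
console.log(corsHeadersFor('https://webapp.contoso.com', allowed));
// → { 'Access-Control-Allow-Origin': 'https://webapp.contoso.com' }
console.log(corsHeadersFor('https://evil.example.com', allowed));
// → {}
```

Note that the decision is made per origin, which is why each front-end URL must appear in the Allowed Origins list.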

Create documentation for the API by using open-source and other tools

One crucial part of the lifecycle of every project is the documentation of the project. This is especially important for API software projects in which the functionality and details of each API endpoint need to be clearly identified for other developers to be able to use the API without accessing your code.

Swagger is an open-source tool that helps you document REST APIs. Swagger is language-agnostic, which means that you can use it with any language that you use for programming your REST API. The Swagger specification was donated to the OpenAPI Initiative, a Linux Foundation project, in 2015. This means that you can also refer to Swagger as OpenAPI, which is the preferred name for the specification.

OpenAPI not only helps you document your REST API; it also helps with generating client SDKs and with API discovery. Another advantage of using OpenAPI is that its tools allow you to generate interactive documentation.

When you work with OpenAPI, you need to create a JSON or YAML file. Fortunately, you don’t need to manually create this swagger.json or swagger.yaml file. There are tools that help you create these files. The appropriate tool depends on whether you need to document an existing REST API or you want to create your API from scratch. Regardless of whether you create the API documentation from scratch or you use the documentation from an existing API, you can use Swagger UI to view the API documentation interactively.

Swagger UI is a web-based UI, as shown in Figure 2-10, which provides interactive access to the information contained in the Swagger file. You can also embed this web-based UI in your web project, which allows you to have your project and documentation accessible from the same place. Using Swagger UI, you can even test the endpoint or function directly from the web UI without using external tools, such as Postman.

Figure 2-10 Swagger UI provides interactive access to the Swagger file
Screenshot_16

You can create your API documentation from scratch using Swagger Editor, an online, free tool that allows you to create your own API definition. Once you are done with the definition of your API, you can generate server and client code for that definition.

If you need to document an existing REST API, you can use tools like these:

Swashbuckle This tool integrates with your ASP.NET project. It consists of three components that generate, store, and display the information obtained dynamically from your routes, controllers, and models. The tool publishes an endpoint that provides the swagger.json used by Swagger UI or other external tools.

NSwag This tool also integrates with your ASP.NET project. It dynamically generates the documentation from your API, based on the routes, controllers, and models. It also provides an embedded version of Swagger UI. The main advantage of this tool is that it can also generate new code based on the definition of the API in the swagger.json file.

Note: OpenAPI Tools

Swashbuckle and NSwag are not the only tools for documenting your API. You can review a more complete list at https://swagger.io/tools/open-source/open-source-integrations/.

Use the following procedure to add OpenAPI documentation to your API project using NSwag. In this procedure, we assume that you are using ASP.NET Core for your REST API.

  1. Open your API project.
  2. Install the NuGet package NSwag.AspNetCore using the following command:
  3. 	
    dotnet add <Your_project_name_here>.csproj package NSwag.AspNetCore
    	
    
  4. In the Startup class, import the following namespaces:
    • NJsonSchema
    • NSwag.AspNetCore
  5. In the ConfigureServices method, add the following code to register the required OpenAPI services:
  6. 	
    services.AddOpenApiDocument();
    	
    
  7. In the Configure method of your Startup class, enable the middleware to serve the swagger.json file as an endpoint. This is useful for integration with third parties and API discovery:
  8. 	
    app.UseSwagger();
    app.UseSwaggerUi3();
    	
    
  9. Run your project to ensure that you have enabled OpenAPI correctly. From your web browser, navigate to the URL of your local project and append the /swagger URI. You should have something like http://localhost:5000/swagger.

Listing 2-4 shows the code before making the modifications for adding OpenAPI documentation.

Listing 2-5 shows how your code should look after you make the modifications explained in the preceding set of steps.

Listing 2-4 The Startup.cs file before adding OpenAPI documentation

	
// C# ASP.NET Core. Startup.cs file
using Microsoft.AspNetCore.Builder;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using TodoApi.Models;

namespace TodoApi
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddDbContext<TodoContext>(opt =>
                opt.UseInMemoryDatabase("TodoList"));
            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseDefaultFiles();
            app.UseStaticFiles();
            app.UseMvc();
        }
    }
}
	

Listing 2-5 Adding NSwag for OpenAPI documentation

	
// C# ASP.NET Core. Startup.cs file
using Microsoft.AspNetCore.Builder;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using TodoApi.Models;
using NJsonSchema;
using NSwag.AspNetCore;

namespace TodoApi
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddDbContext<TodoContext>(opt =>
                opt.UseInMemoryDatabase("TodoList"));
            services.AddMvc();
            // Register the Swagger generator, defining one OpenAPI document
            services.AddOpenApiDocument();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseDefaultFiles();
            app.UseStaticFiles();
            // Enable middleware to serve the generated Swagger document and the Swagger UI
            app.UseSwagger();
            app.UseSwaggerUi3();
            app.UseMvc();
        }
    }
}
	

Skill 2.4: Implement Azure Functions

Based on Azure App Service, Azure Functions allows you to run small pieces of code that solve particular problems within a larger application. You use these functions in the same way that you might use a class or a function inside your code: your function gets some input, executes its piece of code, and provides an output.

The big difference between Azure Functions and other App Service models is that with Azure Functions (using the Consumption pricing tier), you are charged per second only while your code is running. With App Service, you are charged hourly while the App Service Plan is running, even if no code is executing. Because Azure Functions is based on App Service, you can also decide to run your Azure Functions in your App Service Plan if you already have other app services running.

This skill covers how to:

  • Implement input and output bindings for a function
  • Implement function triggers by using data operations, timers, and webhooks
  • Implement Azure Durable Functions
  • Create Azure Function apps by using Visual Studio

Implement input and output bindings for a function

When you are writing a function in your code, that function may require data as input for doing its job. The function can also produce some output information as the result of the operations performed inside the function. When you work with Azure Functions, you may also need these input and output flows of data.

Azure Functions uses bindings to connect your function with the external world without hard-coding the connection to external resources. An Azure Function can have a mix of input and output bindings, or it can have no bindings at all. Bindings pass data to the function as parameters.

Although triggers and bindings are closely related, you should not confuse them. Triggers are the events that cause the function to start its execution; bindings are like the connection to the data needed for the function. You can see the difference in this example:

A publisher service sends an event to Event Grid indicating that a new image has been uploaded to blob storage. Your function needs to read this image, process it, and place some information in a CosmosDB document. When the image has been processed, your function also sends a notification to the user interface using SignalR.

In this example, you can find one trigger, one input binding, and two output bindings:

Trigger The Event Grid should be configured as the trigger for the Azure Function.

Input binding Your function needs to read the image that has been uploaded to the blob storage. In this case, you need to use blob storage as an input binding.

Output bindings Your function needs to write a CosmosDB document with the results of processing the image. You need to use the CosmosDB output binding. Your function also needs to send a notification to the user interface using the SignalR output binding.

Depending on the language that you use for programming your Azure Function, the way you declare a binding changes:

C# You declare bindings and triggers by decorating methods and parameters.

Other languages You declare bindings and triggers by updating the function.json configuration file.

When defining a binding for non-C# language functions, you need to define your binding using the following minimal required attributes:

type This string represents the binding type. For example, you would use eventHub when using an output binding for Event Hub.

direction The only allowed values are in for input bindings and out for output bindings. Some bindings also support the special direction inout.

name This attribute sets the name that the function uses to access the binding's data. For example, in JavaScript, this name is the key that you use on the context.bindings object.

Depending on the specific binding that you are configuring, there could be some additional attributes that should be defined.
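Putting these minimal attributes together, a binding definition in function.json might look like the following sketch. This is a hypothetical queue storage output binding; the queue name and the connection setting name are illustrative.

```json
{
  "type": "queue",
  "direction": "out",
  "name": "outputQueueItem",
  "queueName": "processed-items",
  "connection": "AzureWebJobsStorage"
}
```

Here, queueName and connection are examples of the additional attributes that a specific binding type requires on top of the three minimal ones.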

Note: Supported Bindings

For a complete list of supported bindings, please refer to this article at https://docs.microsoft.com/en-us/azure/azure-functions/functions-triggersbindings#supported-bindings.

Before you can use a binding in your code, you need to register it. If you are using C# for your functions, you can do this by installing the appropriate NuGet package. For other languages, you need to install the package with the extension code using the func command-line utility. The following example installs the Service Bus extension in your local environment for non-C# projects:

	
func extensions install --package Microsoft.Azure.WebJobs.Extensions.ServiceBus
	

If you are developing your Azure Function using the Azure Portal, you can add the bindings in the Integrate section of your function. When you add a binding that is not installed in your environment, you will see the warning message shown in Figure 2-11. You can install the extension by clicking the Install link.

Figure 2-11 Missing extension warning message
Screenshot_17

Need More Review?: Manually Install Binding Extensions from the Azure Portal

When you develop your Azure Function using the Azure Portal, you can use the standard editor or the advanced editor. When you use the advanced editor, you can directly edit the function.json configuration file. If you add new bindings using the advanced editor, you will need to manually install any new binding extensions that you added to the function.json. You can review the following article for manually installing binding extensions from the Azure Portal at https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-register.

If you decide to program your Azure Function using C#, the configuration of the bindings is made using decorators for function and parameters. The function.json file is automatically constructed based on the information that you provide in your code. Listing 2-6 shows how to configure input and output bindings using parameter decorators.

Listing 2-6 Configuring input and output bindings

	
// C# ASP.NET Core
using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Azure.EventGrid.Models;
using System.Threading.Tasks;

namespace Company.Functions
{
    public static class BlobTriggerCSharp
    {
        [FunctionName("BlobTriggerCSharp")]
        public static Task Run(
            [EventGridTrigger] EventGridEvent eventGridEvent,
            [Blob("{data.url}", FileAccess.Read, Connection = "ImagesBlobStorage")] Stream imageBlob,
            [CosmosDB(
                databaseName: "GIS",
                collectionName: "Processed_images",
                ConnectionStringSetting = "CosmosDBConnection")] out dynamic document,
            [SignalR(HubName = "notifications")] IAsyncCollector<SignalRMessage> signalRMessages,
            ILogger log)
        {
            document = new { Description = eventGridEvent.Topic };
            log.LogInformation($"C# Blob trigger function processed an event. Topic: {eventGridEvent.Topic} Subject: {eventGridEvent.Subject}");
            return signalRMessages.AddAsync(
                new SignalRMessage
                {
                    Target = "newMessage",
                    Arguments = new[] { eventGridEvent.Subject }
                });
        }
    }
}
	

Let’s review the portions of Listing 2-6 that are related to the binding configuration. In this example, we configured one input binding and two output bindings. The parameter imageBlob is configured as an input binding. We have decorated the parameter with the Blob attribute, which takes the following parameters:

Path The value {data.url} configures the path of the blobs that will be passed to the function. In this case, we are using a binding expression that resolves to the full path of the blob in the blob storage.

Blob access mode In this example, you will access the blob in read-only mode.

Connection This sets the connection string to the storage account where the blobs are stored. This parameter sets the app setting name that contains the actual connection string.

We have also configured two output bindings, though we have configured them differently. The first output binding is configured using the keyword out in the parameter definition. Just as we did with the input parameter, we configured the output parameter document by using a parameter attribute. In this case, we used the CosmosDB attribute. We use the following parameters for configuring this output binding:

databaseName Sets the database in which we will save the document that we will create during the execution of the function.

collectionName Sets the collection in which we will save the generated document.

ConnectionStringSetting Sets the name of the app setting variable that contains the actual connection string for the database. You should not put the actual connection string here.

Setting a value for this output binding is as simple as assigning a value to the parameter document. We can also configure output bindings by using the return statement of the function. In our example, we configure the second output binding this way.

The function parameter signalRMessages is our second output binding. As you can see in Listing 2-6, we didn't add the out keyword to this parameter because we can return multiple output values. When you need to return multiple output values, you need to use the ICollector or IAsyncCollector types for the output binding parameter, as we did with signalRMessages. Inside our function, we add the needed values to the signalRMessages collection and use this collection as the return value of the function. We used the SignalR parameter attribute for configuring this output binding. In this case, we used only one parameter for configuring the output binding:

HubName This is the name of the SignalR hub where you will send your messages.

ConnectionStringSetting In this case, we didn't use this parameter, so it takes its default value, AzureSignalRConnectionString. As we saw with the other bindings, this parameter sets the name of the app setting variable that contains the actual connection string for SignalR.

When you are configuring bindings or triggers, there will be situations when you need to map the trigger or binding to a dynamically generated path or element. In these situations, you can use binding expressions. You define a binding expression by wrapping your expression in curly braces. You can see an example of a binding expression shown previously in Listing 2-6. The path that we configure for the input binding contains the binding expression {data.url}, which resolves to the full path of the blob in the blob storage. In this case, EventGridTrigger sends a JSON payload to the input binding that contains the data.url attribute.
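To illustrate how a binding expression such as {data.url} is resolved, the following Node.js sketch walks a JSON payload using the dotted path inside the curly braces. This is a simplified illustration of the concept, not the actual Azure Functions runtime code; the function name and the sample payload are hypothetical.

```javascript
// Hypothetical sketch: resolve a binding expression such as {data.url}
// against the JSON payload delivered by the trigger.
function resolveBindingExpression(path, payload) {
    const match = path.match(/\{([^}]+)\}/);
    if (!match) return path;                      // no expression to resolve
    const value = match[1]
        .split('.')                               // 'data.url' -> ['data', 'url']
        .reduce((obj, key) => obj[key], payload); // walk the payload
    return path.replace(match[0], value);
}

// A minimal Event Grid payload for a blob-created event.
const eventGridEvent = {
    subject: '/blobServices/default/containers/images/blobs/photo.jpg',
    data: { url: 'https://account.blob.core.windows.net/images/photo.jpg' }
};
console.log(resolveBindingExpression('{data.url}', eventGridEvent));
// → https://account.blob.core.windows.net/images/photo.jpg
```

This is why the Blob input binding in Listing 2-6 receives the exact blob that triggered the event: the expression is resolved per event, against that event's payload.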

Need More Review?: Binding Expression Patterns

You can learn about more binding expression patterns by reviewing this article about Azure Functions binding expression patterns in Microsoft Docs at https://docs.microsoft.com/en-us/azure/azure-functions/functionsbindings-expressions-patterns

The way you configure the bindings for your code depends on the language that you use for your Azure Function. In the previous example, we reviewed how to configure input and output bindings using C# and parameter decorations. If you use any of the other supported languages, the way you configure input and output bindings changes.

The first step when configuring bindings in non-C# languages is to modify the function.json configuration file. Listing 2-7 shows the equivalent function.json for the binding configuration made in Listing 2-6. Once you have configured your bindings, you can write your code to access the bindings that you configured. Listing 2-8 shows an example written in JavaScript for using bindings in your code.

Listing 2-7 Configuring input and output bindings in function.json

	
{
"disabled": false,
"bindings": [
{
"name": "eventGridEvent",
"type": "eventGridTrigger",
"direction": "in"
},
{
"name": "imageBlob",
"type": "blob",
"connection": "ImagesBlobStorage",
"direction": "in",
"path": "{data.url}"
},
{
"name": "document",
"type": "cosmosDB",
"direction": "out",
"databaseName": "GIS",
"collectionName": "Processed_images",
"connectionStringSetting": "CosmosDBConnect
"createIfNotExists": true
},
{
"name": "signalRMessages",
"type": "signalR",
"direction": "out",
"hubName": "notifications"
}
]
}
	

Listing 2-8 Using bindings in JavaScript

	
// NodeJS. Index.js
const uuid = require('uuid/v4');
module.exports = async function (context, eventGridEvent) {
    context.log('JavaScript Event Grid trigger function processed an event.');
    context.log("Subject: " + eventGridEvent.subject);
    context.log("Time: " + eventGridEvent.eventTime);
    context.log("Data: " + JSON.stringify(eventGridEvent.data));
    context.bindings.document = JSON.stringify({
        id: uuid(),
        Description: eventGridEvent.topic
    });
    context.bindings.signalRMessages = [{
        "target": "newMessage",
        "arguments": [ eventGridEvent.subject ]
    }];
    context.done();
};
	

Listings 2-7 and 2-8 represent the equivalent code in JavaScript to the code in the C# code shown in Listing 2-6. Most important is that name attributes in the binding definitions shown in Listing 2-7 correspond to the properties of the context object shown in Listing 2-8. For example, we created a cosmosDB output binding and assigned the value document to the name attribute in the binding definition in Listing 2-7. In your JavaScript code, you access this output binding by using context.bindings.document.

Remember that you need to install the extensions in your local environment before you can use bindings or triggers. You can use the func command-line tool from the Azure Functions Core Tools to do so.

Implement function triggers by using data operations, timers, and webhooks

When you create an Azure Function, that function will be executed based on events that happen in the external world. Some examples include

Executing a function periodically

Executing a function when some other process uploads a file to blob storage or sends a message to a queue storage

Executing a function when an email arrives in Outlook

All these events are programmatically managed by triggers.

You can configure function triggers in the same way that you configure input or output bindings, but you need to pay attention to some additional details when dealing with triggers. You configure a trigger for listening to specific events. When an event happens, the trigger object can send data and information to the function.

You can configure three different types of triggers:

data operation The trigger starts based on new data that is created, updated, or added to a system. Supported systems are CosmosDB, Event Grid, Event Hub, Blob Storage, Queue Storage, and Service Bus.

timers You use this kind of trigger when you need to run your function based on a schedule.

webhooks You use HTTP or webhook triggers when you need to run your function based on an HTTP request.

Triggers send data to the function with information about the event that caused the trigger to start. This information depends on the type of trigger. Listing 2-9 shows how to configure a data operation trigger for CosmosDB.

Listing 2-9 Configuring a CosmosDB trigger

	
// C# ASP.NET Core
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

namespace Company.Function
{
    public static class CosmosDBTriggerCSharp
    {
        [FunctionName("CosmosDBTriggerCSharp")]
        public static void Run([CosmosDBTrigger(
            databaseName: "databaseName",
            collectionName: "collectionName",
            ConnectionStringSetting = "AzureWebJobsStorage",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> input,
            ILogger log)
        {
            if (input != null && input.Count > 0)
            {
                log.LogInformation("Documents modified " + input.Count);
                log.LogInformation("First document Id " + input[0].Id);
                log.LogInformation("Modified document: " + input[0]);
            }
        }
    }
}
	

Important: Working with Leases Collection

At the time of this writing, the Cosmos DB trigger does not support working with a partitioned lease collection. Microsoft is removing the ability to create a non-partitioned collection using the Azure Portal, but you can still create non-partitioned collections using the SDKs. The Cosmos DB trigger requires a second collection to store leases over partitions. Both collections, the lease collection and the collection that you want to monitor, need to exist before your code runs. To ensure that the lease collection is correctly created as a non-partitioned collection, don't create it using the Azure Portal; instead, set the trigger parameter CreateLeaseCollectionIfNotExists to true.

Just as with bindings, you need to install the corresponding NuGet package with the appropriate extension for working with triggers. In this case, you need to install the package Microsoft.Azure.WebJobs.Extensions.CosmosDB. We used the CosmosDBTrigger parameter attribute for configuring our trigger with the following parameters:

databaseName This is the name of the database that contains the collection this trigger should monitor.

collectionName This is the name of the collection that this trigger should monitor. This collection needs to exist before your function runs.

ConnectionStringSetting This is the name of the app setting variable that contains the connection string to the CosmosDB database. If you want to debug your function in your local environment, you should configure this variable in the local.settings.json file and assign it the connection string of your development CosmosDB database. The local.settings.json file is used by Azure Functions Core Tools to locally store app settings and connection strings; it won't be automatically uploaded to Azure when you publish your Azure Function.

LeaseCollectionName This is the name of the collection that will be used for storing leases over partitions. By default, this collection will be stored in the same database as the collectionName. If you need to store this collection in a separate database, use the parameter leaseDatabaseName or leaseConnectionStringSetting if you need to store the database in a separate CosmosDB account.

CreateLeaseCollectionIfNotExists This creates the lease collection set by the LeaseCollectionName parameter if it does not exist in the database. Lease collection should be a non-partitioned collection and needs to exist before your function runs.

This trigger monitors for new or updated documents in the database that you configure in the parameters of the trigger. Once the trigger detects a change, it passes the detected changes to the function using an IReadOnlyList<Document>. Once we have the information provided by the trigger in the input list, we can process it inside our function. If you have enabled Application Insights integration, you can see the log messages from your function, as shown in Figure 2-12.

Figure 2-12 Viewing Azure Function logs in Application Insights
Screenshot_18

Note: Version 1.0 Versus Version 2.0

When you work with Azure Functions, you can choose between versions 1.0 and 2.0 of the runtime. The main difference between the two versions is that you can develop and host Azure Functions 1.0 only in the Azure Portal or on Windows computers, while Functions 2.0 can be developed and hosted on all platforms supported by .NET Core. The runtime version you use affects the extension packages that you need to install when configuring triggers and bindings. Review the following overview of Azure Functions runtime versions. See https://docs.microsoft.com/en-us/azure/azurefunctions/functions-versions.

When you work with timer and webhook triggers, the main difference from data operation triggers is that you do not need to explicitly install the extension package that supports the trigger.

Timer triggers execute your function based on a schedule. This schedule is configured using a CRON expression that is interpreted by the NCronTab library. A CRON expression is a string composed of six fields with this structure:

		
{second} {minute} {hour} {day} {month} {day-of-week}
		
	

Each field can have numeric values that are meaningful for the field:

second Represents the seconds in a minute. You can assign values from 0 to 59.

minute Represents the minutes in an hour. You can assign values from 0 to 59.

hour Represents the hours in a day. You can assign values from 0 to 23.

day Represents the days in a month. You can assign values from 1 to 31.

month Represents the months in a year. You can assign values from 1 to 12. You can also use names in English, such as January, or abbreviations of the name in English, such as Jan. Names are case-insensitive.

day-of-week Represents the days of the week. You can assign values from 0 to 6 where 0 is Sunday. You can also use names in English, such as Monday, or you can use abbreviations of the name in English, such as Mon. Names are case-insensitive.

All fields need to be present in a CRON expression. If you don't want to provide a specific value for a field, you can use the asterisk character (*), which means the expression uses all available values for that field. For example, the CRON expression * * * * * * means that the trigger executes every second of every minute of every hour of every day of every month of the year. You can also use some operators with the allowed values in the fields:

Range of values Use the dash operator (-) to represent all the values available between two limits. For example, the expression 0 10-12 * * * * means that the function will be executed at hh:10:00, hh:11:00, and hh:12:00, where hh means every hour. That is, it will be executed three times every hour.

Set of values Use the comma operator (,) to represent a set of values. For example, the expression 0 0 11,12,13 * * * means that the function will be executed three times a day, every day: at 11:00:00, at 12:00:00, and at 13:00:00.

Interval of values Use the forward slash operator (/) to represent an interval of values. The function is executed when the value of the field is divisible by the value on the right side of the operator. For example, the expression */5 * * * * * executes the function every five seconds.
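The operators above can be sketched in a few lines of code. The following Node.js snippet is a simplified illustration of how a single CRON field matches a value; it is not the NCronTab library, and it covers only *, ranges, sets, and step intervals.

```javascript
// Simplified sketch (not NCronTab) of matching one CRON field against
// a value, covering *, ranges (10-12), sets (11,12,13), and steps (*/5).
function fieldMatches(field, value) {
    return field.split(',').some(part => {
        if (part === '*') return true;
        const step = part.match(/^\*\/(\d+)$/);      // interval, e.g. */5
        if (step) return value % Number(step[1]) === 0;
        const range = part.match(/^(\d+)-(\d+)$/);   // range, e.g. 10-12
        if (range) return value >= Number(range[1]) && value <= Number(range[2]);
        return Number(part) === value;               // single value
    });
}

console.log(fieldMatches('*/5', 10));      // true: 10 is divisible by 5
console.log(fieldMatches('10-12', 11));    // true: 11 is inside the range
console.log(fieldMatches('11,12,13', 14)); // false: 14 is not in the set
```

A full CRON evaluator would apply this check to each of the six fields and fire only when all of them match the current time.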

Listings 2-10 and 2-11 show how to configure a timer trigger and how to use the trigger with JavaScript code.

Listing 2-10 Configuring a timer trigger in function.json

	
{
"disabled": false,
"bindings": [
{
"name": "myTimer",
"type": "timerTrigger",
"direction": "in",
"schedule": "0 */5 * * * *",
"useMonitor": true,
"runOnStartup": true
}
]
}
	

Listing 2-11 Using a timer trigger with JavaScript

	
//NodeJS. Index.js file
module.exports = async function (context, myTimer) {
    var timeStamp = new Date().toISOString();
    if (myTimer.isPastDue)
    {
        context.log('JavaScript is running late!');
    }
    context.log('JavaScript timer trigger Last execution: ' + myTimer.scheduleStatus.last);
    context.log('JavaScript timer trigger Next execution: ' + myTimer.scheduleStatus.next);
};
	

Just as we did when we configured bindings in the previous section, when you configure a trigger for non-C# languages, you need to add them to the function.json configuration file. You configure your triggers in the bindings section. Listing 2-10 shows the appropriate properties for configuring a timer trigger:

name This is the name of the variable that you will use on your JavaScript code for accessing the information from the trigger.

type This is the type of trigger that we are configuring. In this example, the value for the timer trigger is timerTrigger.

direction For a trigger, the direction is always in.

schedule This is the CRON expression used for configuring the execution scheduling of your function. You can also use a TimeSpan expression.

useMonitor Set this property to true to monitor the schedule even if the function app instance is restarted. The default value is true for every schedule with a recurrence greater than one minute. Monitoring the schedule occurrences ensures that the schedule is maintained correctly.

runOnStartup This indicates that the function should be invoked as soon as the runtime starts. The function will also be executed after the function app wakes up from going idle because of inactivity, or when the function app restarts because of changes in the function. Setting this parameter to true is not recommended in production environments because it can lead to unpredictable execution times for your function.

Note: Troubleshooting Functions on Your Local Environment

While you are developing your Azure Functions, you need to troubleshoot your code in your local environment. If you are using non-HTTP triggers, you need to provide a valid value for the AzureWebJobsStorage attribute in the local.settings.json file.

TimeSpan expressions specify the time interval between invocations of the function. If the function execution takes longer than the specified interval, the function is invoked again immediately after the previous invocation finishes. TimeSpan expressions are strings with the format hh:mm:ss, where hh represents hours, mm represents minutes, and ss represents seconds; the hours value needs to be less than 24. For example, the TimeSpan expression 02:00:00 means the function is invoked every two hours. You can use TimeSpan expressions only on Azure Functions that run on App Service Plans. That is, you cannot use TimeSpan expressions when you are using the Consumption pricing tier.
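As a sketch, a binding equivalent to the one in Listing 2-10 but using a TimeSpan schedule might look like this (assuming the function runs on an App Service Plan; the two-hour interval is illustrative):

```json
{
    "bindings": [
        {
            "name": "myTimer",
            "type": "timerTrigger",
            "direction": "in",
            "schedule": "02:00:00"
        }
    ]
}
```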

You use HTTP triggers for running your Azure Function when an external process makes an HTTP request. This HTTP request can be a regular request using any of the available HTTP methods, or it can be a webhook. A web callback or webhook is an HTTP request made by a third-party system or external web application, usually as a result of an event generated in that external system. For example, if you are using GitHub as your code repository, GitHub can send a webhook to your Azure Function each time a new pull request is opened. Webhooks are available only for version 1.x of the Azure Functions runtime.

When you create an Azure Function using HTTP triggers, the runtime automatically publishes an endpoint with the following structure:

	
http://<your_function_app>.azurewebsites.net/api/<your_function_name>
	

This is the URL or endpoint that you need to use when calling your function using a regular HTTP request or when you configure an external webhook for invoking your function. You can customize the route of this endpoint by using the appropriate configuration properties, which means you can also implement serverless APIs using HTTP triggers. You can even protect access to your function's endpoints by requiring authorization for any request made to your API using the App Service Authentication / Authorization. Listing 2-12 shows how to configure an HTTP trigger with a custom endpoint.

Listing 2-12 Configuring an HTTP trigger

	
// C# ASP.NET Core
using System.Security.Claims;
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace Company.Function
{
    public static class HttpTriggerCSharp
    {
        [FunctionName("HttpTriggerCSharp")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post",
                Route = "devices/{id:int?}")] HttpRequest req,
            int? id,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");
            //We access the parameter in the address by adding a function parameter
            //with the same name
            log.LogInformation($"Requesting information for device {id}");
            //If you enable Authentication / Authorization, information
            //about the authenticated user is automatically provided
            ClaimsPrincipal identities = req.HttpContext.User;
            string username = identities.Identity?.Name;
            log.LogInformation($"Request made by user {username}");
            string name = req.Query["name"];
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;
            //We customize the output binding
            return name != null
                ? (ActionResult)new JsonResult(new { message = $"Hello, {name}",
                    username = username, device = id })
                : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
        }
    }
}
	

The example in Listing 2-12 shows the following points when working with HTTP triggers:

How to work with authentication.

How to work with the authorization level.

How to customize the function endpoint, using route parameters.

How to customize the output binding.

HTTP triggers are automatically provided to you out-of-the-box with the function runtime. There is no need to install a specific NuGet package for working with this extension. You use the HttpTrigger parameter attribute for configuring the HTTP trigger. This trigger accepts the following parameters:

AuthLevel This parameter configures the authorization key that you should use for accessing the function. Allowed values are

anonymous No key is required.

function This is the default value. You need to provide a function-specific key.

admin You need to provide the master key.

Methods You can configure the HTTP methods that your function will accept. By default, the function runtime accepts all HTTP methods. Listing 2-12 reduces the accepted HTTP methods to get and post. Don't use this parameter if you set the WebHookType parameter.

Route You can customize the route of the endpoint used for the function to listen to a new request. The default route is http://<your_function_app>.azurewebsites.net/api/<your_function_name>

WebHookType This parameter is available only for version 1.x runtime functions. You should not use the Methods and WebHookType parameters together. This parameter sets the WebHook type for a specific provider. Allowed values are

Generic This value is used for non-specific providers.

github This value is used for interacting with GitHub webhooks.

slack This value is used for interacting with Slack webhooks.

When you declare the variable type that your function uses as the input from the trigger, you can use HttpRequest or a custom type. If you use a custom type, the runtime tries to parse the request body as a JSON object to get the information needed for setting your custom type's properties. If you use HttpRequest as the type of the trigger input parameter, you get full access to the request object.

Every Azure Function App that you deploy automatically exposes a group of admin endpoints that you can use for programmatically accessing some aspects of your app, such as the status of the host. These endpoints look like

https://<your_function_app>.azurewebsites.net/admin/host/status

By default, these endpoints are protected by an access code or authentication key that you can manage from your Function App in the Azure Portal, as shown in Figure 2-13.

Figure 2-13 Managing host keys for a Function App
Screenshot_19

When you use the HTTP trigger, any endpoint that you publish will also be protected by the same mechanism, although the keys that you will use for protecting those endpoints will be different. You can configure two types of authorization keys:

host These keys are shared by all functions deployed in the Function App. This type of key allows access to any function in the host.

function These keys protect only the function where they are defined.

When you define a new key, you assign a name to the key. If you have a host key and a function key with the same name, the function key takes precedence. There are also two default keys, one per type, that you can also use for accessing your endpoints. These default keys take precedence over any other key that you created. If you need access to the admin endpoints mentioned earlier, you need to use a particular host key called _master. You also need to use this administrative key when you set the AuthLevel trigger configuration parameter to admin. You can provide the appropriate key when you make a request to your API by using the code query string parameter or the x-functions-key HTTP header.
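As a sketch, the two ways of passing a key can be expressed in Node.js as follows; note that the header name expected by the Functions runtime is x-functions-key, and the function URL and key value below are placeholders, not real credentials:

```javascript
// Hypothetical function endpoint and key; replace with your own values.
const functionUrl = "https://myfunctionapp.azurewebsites.net/api/HttpTriggerCSharp";
const functionKey = "aBcD1234ExampleKey";

// Option 1: pass the key in the 'code' query string parameter.
const urlWithCode = `${functionUrl}?code=${encodeURIComponent(functionKey)}`;

// Option 2: pass the key in the 'x-functions-key' HTTP header.
// These options could then be handed to fetch() or https.request().
const requestOptions = {
    method: "GET",
    headers: { "x-functions-key": functionKey }
};
```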

Protecting your endpoints using the authorization keys is not a recommended practice for production environments. You should only use authorization keys on testing or development environments for controlling the access to your API. For a production environment, you should use one of the following approaches:

Enable Function App Authorization / Authentication This will integrate your API with Azure Active Directory or other third-party identity providers to authenticate clients.

Use Azure API Management (APIM) This allows you to secure the incoming requests to your API, for example, by filtering by IP address or using authentication based on certificates.

Deploy your function in an App Service Environment (ASE) ASEs provide dedicated hosting environments that allow you to configure a single front-end gateway that can authenticate all incoming requests.

If you decide to use any of the previous security methods, you need to ensure that you configure the AuthLevel as anonymous. You can see this configuration in Listing 2-12 in this line:

	
HttpTrigger(AuthorizationLevel.Anonymous...
	

When you enable the App Service Authentication / Authorization, you can access the information about the authenticated users by using ClaimsPrincipal. You can only access this information if you are running the Azure Functions 2.x runtime and only with .NET languages. You can use ClaimsPrincipal as an additional parameter of your function signature or from the code, using the request context, as shown previously in Listing 2-12.

	
ClaimsPrincipal identities = req.HttpContext.User;
string username = identities.Identity?.Name;
	

As we saw earlier in this section, Azure Functions runtime exposes your function by default using the following URL schema:

	
http://<your_function_app_name>.azurewebsites.net/api/<your_function_name>
	

You can customize the endpoint by using the Route HttpTrigger parameter. In Listing 2-12, we set the Route parameter to devices/{id:int?}. This means that your endpoint will look like this:

	
http://<your_function_app_name>.azurewebsites.net/api/devices/{id:int?}
	

When you customize the route for your function, you can also add parameters to the route, which will be accessible to your code by adding them as parameters of your function's signature. You can use any Web API Route Constraint (see https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2#constraints) that you may use when defining a route using Web API 2.
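For non-C# functions, the same customization goes in the httpTrigger binding of the function.json file. A sketch mirroring the route used in Listing 2-12 might look like this:

```json
{
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "route": "devices/{id:int?}",
            "methods": [ "get", "post" ]
        }
    ]
}
```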

By default, when you make a request to a function that uses an HTTP trigger, the response will be an empty body with these status codes:

	
HTTP 200 OK in the case of the Functions 1.x runtime
HTTP 204 No Content in the case of the Functions 2.x runtime
	

If you need to customize the response of your function, you need to configure an output binding. You can use either of the two types of output bindings: the return statement or a function parameter. Listing 2-12 shows how to configure the output binding for returning a JSON object with some information.

It is important to remember the limits associated with the function when you plan to deploy your function in a production environment. These limits are

Maximum request length The HTTP request should not be larger than 100MB.

Maximum URL length Your custom URL is limited to 4096 bytes.

Execution timeout Your function should return a value in less than 2.5 minutes. Your function can take more time to execute, but if it doesn't return anything before that time, the gateway times out with an HTTP 502 error. If your function needs more time to execute, you should follow an async pattern and expose a status endpoint that allows the caller to ask for the status of the execution of your function.
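A minimal sketch of that async pattern follows, using plain JavaScript objects to stand in for the HTTP request and response; the status URL, job store, and queueing logic are illustrative assumptions, not a real Azure API:

```javascript
// In-memory store of job statuses; a real implementation would use
// durable storage such as a queue plus a table or database.
const jobs = {};

// Kicks off a long-running job and immediately returns HTTP 202 with a
// status URL, instead of blocking until the work finishes.
function startLongRunningJob(jobId, work) {
    jobs[jobId] = { status: "Running" };
    // Fire and forget: the work completes after the response is returned.
    Promise.resolve().then(work).then(() => { jobs[jobId].status = "Completed"; });
    return {
        status: 202,
        headers: { Location: `/api/status/${jobId}` }
    };
}

// The caller polls this endpoint until the status is "Completed".
function getJobStatus(jobId) {
    return { status: 200, body: jobs[jobId] };
}
```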

Need More Review?: Host Properties

You can also make some adjustments to the host where your function is running by using the host.json file. For a review of all the properties available in the host.json file, visit https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook#trigger---hostjson-properties

Implement Azure Durable Functions

One crucial characteristic of Azure Functions is that they are stateless. This means that the function runtime does not maintain the state of the objects that you create during the execution of the function if the host process or the virtual machine where the function is running is recycled or rebooted.

Azure Durable Functions are an extension of the Azure Functions that provide stateful workflow capabilities in a serverless environment. These stateful workflow capabilities allow you to

Chain function calls together This chaining means that a function can call other functions while the state is maintained between calls. These calls can be synchronous or asynchronous.

Define workflow by code You don’t need to create JSON workflow definitions or use external tools.

Ensure that the status of the workflow is always consistent When a function or activity on a workflow needs to wait for other functions or activities, the workflow engine automatically creates checkpoints for saving the status of the activity.

The main advantage of using Azure Durable Functions is that they ease the implementation of complex stateful coordination requirements in serverless scenarios. Although Azure Durable Functions are an extension of Azure Functions, at the time of this writing, they don't support all languages supported by Azure Functions. The following languages are supported:

C# Both precompiled class libraries and C# script are supported.

F# Precompiled class libraries and F# script are supported. F# script is available only for Azure Functions runtime 1.x.

JavaScript Supported only for the Azure Functions runtime version 2.x. Version 1.7.0 or later of the Azure Durable Functions extension is required.

Durable Functions are billed using the same rules that apply to Azure Functions. That is, you are charged only for the time that your functions are running.

Working with Durable Functions means that you need to deal with different kinds of functions. Each type of function plays a different role in the execution of the workflow. These roles are:

Activity These are the functions that do the real work. An activity is a job that you need your workflow to do. For example, you may need your code to send a document to a content reviewer before other activity can publish the document, or you need to create a shipment order to send products to a client.

Orchestrator Any workflow executes activity functions in a particular order. Orchestrator functions define the actions that a workflow executes. These actions can be activity functions, timers, or waiting for external events or sub-orchestrations. Each instance of an orchestrator function has an instance identifier. You can generate this identifier manually or leave the Durable Function framework to generate it dynamically.

Client This is the entry point of a workflow. Instances of a client function are created by a trigger from any source, such as HTTP, queue, or event triggers. Client functions create instances of orchestrator functions by sending an orchestrator trigger.

In the same way that Azure Functions use triggers and bindings for sending and receiving information from functions, you need to use triggers and bindings for setting up the communication between the different types of durable functions. Durable Functions add two new triggers to control the execution of orchestrator and activity functions:

Orchestration triggers These allow you to work with orchestration functions by creating new instances of the function or resuming instances that are waiting for a task. The most important characteristic of these triggers is that they are single-threaded. When you use orchestration triggers, you need to ensure that your code does not perform async calls—other than waiting for durable function tasks—or I/O operations. This ensures that the orchestration function is focused on calling activity functions in the correct order and waiting for the correct events or functions.

Activity trigger This is the type of trigger that you need to use when writing your activity functions. These triggers allow communications between orchestration functions and activity functions. They are multithreaded and don’t have any restriction related to threading or I/O operations.

The following example shows how the different types of functions and triggers work together for processing and saving a hypothetical order generated from an external application and saved to a CosmosDB database. Although the example is quite simple, it shows how the different functions interact. Figure 2-14 shows a diagram of the workflow implemented on the functions shown in Listings 2-13 to 2-20. For running this example, you need to meet the following requirements:

Figure 2-14 Durable function workflow
Screenshot_20

An Azure subscription.

An Azure Storage Account. The orchestration function needs an Azure Storage Account for saving the status of each durable function instance during the execution of the workflow.

An Azure CosmosDB database.

Install the following dependencies using this command:

	
func extensions install -p <package_name> -v <version>
	

CosmosDB:

Package name: Microsoft.Azure.WebJobs.Extensions.CosmosDB Version: 3.0.3

Durable Functions extension:

Package name: Microsoft.Azure.WebJobs.Extensions.DurableTask Version: 1.8.0

You can run this example using your favorite Integrated Development Environment (IDE). Visual Studio and Visual Studio Code offer several tools that make working with Azure projects more comfortable. Use the following steps for configuring Visual Studio Code and creating the durable functions:

  1. Open your Visual Studio Code.
  2. Click the Extensions icon on the left side of the window.
  3. On the Extensions panel, on the Search Extensions In Marketplace text box, type Azure Functions.
  4. In the result list, on the Azure Function extension, click the Install button. Depending on your Visual Studio Code version, you may need to restart Visual Studio Code.
  5. Click the Azure icon on the left side of the Visual Studio Code window.
  6. In the Functions section, click Sign In To Azure to log into Azure.
  7. In the Functions section, click the lightning bolt icon, which creates a new Azure Function.
  8. In the Create New Project dialog, select JavaScript.
  9. In the Select A Template For Your Project’s First Function dialog box, select HTTP Trigger.
  10. For the Provide A Function Name option, type HTTPTriggerDurable. This creates the first function that you need for this example.
  11. Select Anonymous for the Authorization Level.
  12. Select Open In Current Window to open the project that you just created.

Repeat steps 5 to 12 for all the durable functions that you need for this example. It is important to save all the functions you need in the same folder.

Listings 2-13 and 2-14 show the JavaScript code and the JSON configuration file that you need to create the client function that will call the orchestration function.

Listing 2-13 Azure Durable Functions client function code

	
// NodeJS. HTTPTriggerDurable/index.js
const df = require("durable-functions");

module.exports = async function (context, req) {
    context.log('JavaScript Durable Functions example');
    const client = df.getClient(context);
    const instanceId = await client.startNew(req.params.functionName, undefined, req.body);
    context.log(`Started orchestration with ID = '${instanceId}'.`);
    return client.createCheckStatusResponse(context.bindingData.req, instanceId);
};
	

Listing 2-14 Durable functions. Client function JSON configuration file

	
{
    "disabled": false,
    "bindings": [
        {
            "authLevel": "anonymous",
            "type": "httpTrigger",
            "direction": "in",
            "name": "req",
            "route": "orchestrators/{functionName}",
            "methods": [
                "get",
                "post"
            ]
        },
        {
            "type": "http",
            "direction": "out",
            "name": "$return"
        },
        {
            "name": "context",
            "type": "orchestrationClient",
            "direction": "in"
        }
    ]
}
	

Listings 2-15 and 2-16 show the JavaScript code and the JSON configuration file that you need to create the Orchestration function that invokes, in the correct order, all the other activity functions. This function also returns to the client the results of the execution of the different activity functions.

Listing 2-15 Azure Durable Functions Orchestrator function code

	
// NodeJS. OrchestratorFunction/index.js
const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
    context.log("Starting workflow: chain example");
    const order = yield context.df.callActivity("GetOrder");
    const savedOrder = yield context.df.callActivity("SaveOrder", order);
    return savedOrder;
});
	

Listing 2-16 Durable functions. Orchestrator function JSON configuration file

	
{
    "disabled": false,
    "bindings": [
        {
            "type": "orchestrationTrigger",
            "direction": "in",
            "name": "context"
        }
    ]
}
	

Listings 2-17 and 2-18 show the JavaScript code and the JSON configuration file that you need to create the activity function GetOrder. In this example, this function is in charge of constructing the information that will be used in the SaveOrder function. In a more complex scenario, this function could get the user's shopping cart information from an e-commerce system or any other potential source.

Listing 2-17 Azure Durable Functions activity function code

	
// NodeJS. GetOrder/index.js
module.exports = async function (context) {
    //Create a mock order for testing
    var order = {
        "id": Math.floor(Math.random() * 1000),
        "name": "Customer",
        "date": new Date().toJSON()
    };
    context.log(order);
    return order;
};
	

Listing 2-18 Azure Durable Functions activity function JSON configuration file

	
{
    "disabled": false,
    "bindings": [
        {
            "type": "activityTrigger",
            "direction": "in",
            "name": "name"
        }
    ]
}
	

Listings 2-19 and 2-20 show the JavaScript code and the JSON configuration file that you need to create the activity function that will save the order in a CosmosDB database. In a much more complex scenario, you could use this function to insert the order into your ERP system or send it to another activity function that could do further analysis or processing.

Listing 2-19 Azure Durable Functions activity function code

	
// NodeJS. SaveOrder/index.js
module.exports = async function (context) {
    //Saves the order object received from the previous activity function
    context.bindings.orderDocument = JSON.stringify({
        "id": `${context.bindings.order.id}`,
        "customerName": context.bindings.order.name,
        "orderDate": context.bindings.order.date,
        "cosmosDate": new Date().toJSON()
    });
    context.done();
};
	

Listing 2-20 Azure Durable Functions activity function JSON configuration file

	
{
    "disabled": false,
    "bindings": [
        {
            "type": "activityTrigger",
            "direction": "in",
            "name": "order"
        },
        {
            "name": "orderDocument",
            "type": "cosmosDB",
            "databaseName": "ERP_Database",
            "collectionName": "Orders",
            "createIfNotExists": true,
            "connectionStringSetting": "CosmosDBStorage",
            "direction": "out"
        }
    ]
}
	

The entry point in any workflow implemented using Durable Functions is always a client function. This function uses the orchestration client for calling the orchestrator function. Listing 2-14 shows how to configure the output binding.

	
{
    "name": "context",
    "type": "orchestrationClient",
    "direction": "in"
}
	

When you are using JavaScript for programming your client function, the orchestrator client output binding is not directly exposed using the value of the name attribute set in the function.json configuration file. In this case, you need to extract the actual client from the context variable using the getClient() function declared in the durable-functions package, as shown in Listing 2-13.

	
const client = df.getClient(context);
	

Once you have the correct reference to the orchestrator client output binding, you can use the method startNew() for creating a new instance of the orchestrator function. The parameters for this method are:

Name of the orchestrator function In our example, we get this name from the HTTP request, using the URL parameter functionName, as previously shown in Listings 2-13 and 2-14.

InstanceId Sets the Id assigned to the new instance of the orchestration function. If you don’t provide a value to this parameter, then the method creates a random Id. In general, you should use the autogenerated random Id.

Input This is where you place any data that your orchestration function may need. You need to use JSON-serializable data for this parameter.
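The call shape of startNew() can be sketched with a stub client that simply records its arguments; only the method signature mirrors the real durable-functions client, everything else here is illustrative:

```javascript
// Stub standing in for the orchestration client returned by df.getClient().
const calls = [];
const client = {
    startNew(functionName, instanceId, input) {
        calls.push({ functionName, instanceId, input });
        // The real client returns a promise resolving to the instance id;
        // when instanceId is undefined, the runtime generates a random one.
        return Promise.resolve(instanceId || "generated-id");
    }
};

// Orchestrator name, auto-generated instance id, JSON-serializable input.
client.startNew("OrchestratorFunction", undefined, { customer: "Customer" });
```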

Once you have created the instance of the orchestration function and saved the Id associated with the instance, the client function returns a data structure with several useful HTTP endpoints. You can use these endpoints to review the status of the execution of the workflow, terminate the workflow, or send external events to the workflow during the execution. The following is an example of the workflow management endpoints for the execution of our example in a local environment:

	
{
    "id": "789e7eb945a04ab78e74e9216870af28",
    "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/789e7eb945a04ab78e74e9216870af28?taskHub=DurableFunctionsHub&connection=Storage&code=AZNSvCSecL4w0RIRzPxLqbey1uJlThcwRE42UNuJavVIozMJhrNOzw==",
    "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/789e7eb945a04ab78e74e9216870af28/raiseEvent/{eventName}?taskHub=DurableFunctionsHub&connection=Storage&code=AZNSvCSecL4w0RIRzPxLqbey1uJlThcwRE42UNuJavVIozMJhrNOzw==",
    "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/789e7eb945a04ab78e74e9216870af28/terminate?reason={text}&taskHub=DurableFunctionsHub&connection=Storage&code=AZNSvCSecL4w0RIRzPxLqbey1uJlThcwRE42UNuJavVIozMJhrNOzw==",
    "rewindPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/789e7eb945a04ab78e74e9216870af28/rewind?reason={text}&taskHub=DurableFunctionsHub&connection=Storage&code=AZNSvCSecL4w0RIRzPxLqbey1uJlThcwRE42UNuJavVIozMJhrNOzw==",
    "purgeHistoryDeleteUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/789e7eb945a04ab78e74e9216870af28?taskHub=DurableFunctionsHub&connection=Storage&code=AZNSvCSecL4w0RIRzPxLqbey1uJlThcwRE42UNuJavVIozMJhrNOzw=="
}
	

In this example, we used an Azure Function based on an HTTP trigger, but your client function is not limited to using this trigger. You can use any of the triggers available in the Azure Functions framework.

Once you have created the instance of the orchestrator function, this function calls the activity functions in the order defined in the code, as previously shown in Listing 2-15.

	
const order = yield context.df.callActivity("GetOrder");
const savedOrder = yield context.df.callActivity("SaveOrder", order);
	

The orchestrator function uses an orchestration trigger for getting the information that the client function sends when it creates the instance. The orchestrator function creates the instances of the different activity functions by using the callActivity() method of the durable-functions package. This method takes two parameters:

Name of the activity function

Input You put here any JSON-serializable data that you want to send to the activity function.

In our example, we execute the activity function GetOrder, previously shown in Listing 2-17, for getting the order object that we use as the input parameter for the next activity function SaveOrder, previously shown in Listing 2-19, for saving the information in the CosmosDB database configured in Listing 2-20.

You can test this example on your local computer by running the functions that we have reviewed in this section, in the same way that you test any other Azure Function. Once you have your functions running, you can test them by using curl or Postman. You should make a GET or POST HTTP request to this URL: http://localhost:7071/api/orchestrators/OrchestratorFunction.

Notice that the functionName parameter of the URL matches the name of our orchestrator function. Our client function allows us to call different orchestrator functions just by providing the correct orchestrator function name.

You can use different patterns when you are programming the orchestration function, and the way it calls the activity functions:

Chaining The activity functions are executed in a specific order, where the output of one activity function is the input of the next one. This is the pattern that we used in our example.

Fan out/fan in Your orchestrator function executes multiple activity functions in parallel. The results of these parallel activity functions are processed and aggregated by a final aggregation activity function.

Async HTTP APIs This pattern coordinates the state of long-running operations with external clients.

Monitor This pattern allows you to create recurrent tasks using flexible time intervals.

Human Interaction Use this pattern when you need to run activity functions based on events that a person can trigger. An example of this type of pattern is the document approval workflow, where publishing a document depends on the approval of a person.
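To illustrate the fan-out/fan-in idea without the Durable Functions runtime, the following toy sketch drives a generator-based "orchestrator" by hand; none of these names belong to the real durable-functions API, and the doubling "activity" is a stand-in for real work:

```javascript
// Generator orchestrator: fans out one "activity" task per work item,
// then fans in by aggregating all the parallel results.
function* fanOutFanIn(workItems) {
    const tasks = workItems.map(item => ({ activity: "Process", input: item }));
    // Yield the whole batch and wait for all parallel results at once.
    const results = yield { all: tasks };
    return results.reduce((sum, r) => sum + r, 0);
}

// Toy driver mimicking how a durable framework replays yielded task batches.
function run(orchestrator, input) {
    const it = orchestrator(input);
    let step = it.next();
    while (!step.done) {
        // Fake "Process" activity: double each input.
        const results = step.value.all.map(t => t.input * 2);
        step = it.next(results);
    }
    return step.value;
}
```

Calling run(fanOutFanIn, [1, 2, 3]) fans out three tasks, collects [2, 4, 6], and fans in to their sum.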

Need More Review?: Durable Function Patterns

You can get more information about durable function patterns by reviewing the article "Patterns and Concepts" in Microsoft Docs at

https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-concepts

Create Azure Function apps by using Visual Studio

When you are developing Azure Functions, if you decide to use C# for creating your functions, your natural choice for the Integrated Development Environment (IDE) is Visual Studio. Any function that you create needs to run in an Azure Function App.

You can create Function Apps either using the Azure Portal or Visual Studio. The following procedure shows how to create an Azure Function project and an Azure Function App in Azure using Visual Studio 2017:

  1. On your local computer, open Visual Studio 2017.
  2. Click Tools > Get Tools And Features. Check the Azure Development Workload option in the Web & Cloud section.
  3. Click the Modify button on the bottom-right corner of the window.
  4. Click Tools > Extensions And Updates.
  5. In the navigation tree on the left side of the window, click Installed > Tools.
  6. In the installed tools, look for Azure Functions And Web Jobs Tools. Ensure that you have installed the latest version as listed in the release notes at https://docs.microsoft.com/en-us/azure/azurefunctions/durable/durable-functions-concepts
  7. Create a new Azure Function project. Click File > New > Project.
  8. On the left side of the New Project window, click Installed > Visual C# > Cloud.
  9. On the New Project window, in the template area, click Azure Functions, as shown in Figure 2-15.
  10. Figure 2-15 Cloud project templates
    Screenshot_20
  11. On the bottom of the New Project window, provide a Name, Location, and Solution Name for your project.
  12. Click OK.
  13. On the New Project <your_project_name> window, select Azure Functions v2 Preview (.NET Standard) from the Azure Function templates drop-down menu.
  14. In the Azure Function Templates area, select HTTP Trigger.
  15. Click OK.
  16. Make the modifications that you need on the new project.
  17. In the Solution Explorer, right-click the Azure Function project name.
  18. Click Publish, which opens the Pick A Publish Target window.
  19. On the Pick A Publish Target window, click Publish. This opens the Create App Service window.
  20. In the top-right corner of the Create App Service window, click the Add An Account Button For Connecting To Your Azure Subscription.
  21. On the Create App Service window that is connected to your Azure Subscription, type the name of your function app in the App Name field, as shown in Figure 2-16.
  22. Figure 2-16 Creating a Function App
    Screenshot_21
  23. On the Resource Group drop-down menu, click on the New link next to the drop-down menu.
  24. Type a name for the new Resource Group and click OK.
  25. Click the New link next to the Hosting Plan drop-down menu.
  26. On the Configure Hosting Plan window, provide a name for the App Service Plan.
  27. From the drop-down menu Location, select the location where your app service plan will be created.
  28. Ensure that the Consumption size option is selected in the last drop-down menu.
  29. Click OK to close the Configure Hosting Plan window.
  30. In the Create App Service window, click on the New link next to the Storage Account drop-down menu.
  31. Type a name for the new Storage Account and click OK.
  32. In the bottom-right corner of the Create App Service window, click OK. This will create the Function app in Azure and deploy your code in the new function app.

Chapter summary

Azure provides you with the services needed for deploying serverless solutions, allowing you to focus on your code and forget about the infrastructure.

Azure App Service is the base of the serverless offering. On top of App Service, you can deploy web apps, mobile app back ends, REST APIs, Azure Functions, and Azure Durable Functions.

When you work with Azure Functions on a Consumption plan, you are charged only for the time your code is running.

App Service runs on top of an App Service Plan.

An App Service Plan provides the resources and virtual machines needed for running your App Service code.

You can run more than one App Service on top of a single App Service Plan.

You can run non-interactive tasks in the background by using WebJobs tasks.

WebJobs can be executed based on a schedule or triggers.
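
For example, a triggered WebJob can be run on a schedule by including a settings.job file alongside the WebJob's executable; the six-field CRON expression below, which runs the job every 15 minutes, is only illustrative:

```json
{
  "schedule": "0 */15 * * * *"
}
```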

When troubleshooting your App Service application, you can use several types of diagnostics logging: web server logging, detailed error messages, failed request tracing, application diagnostics, and deployment diagnostics.

Diagnostic logs are stored locally on the VM where the instance of your application is running.

You can add push notifications to your mobile back-end app by connecting it with Azure Notification Hub.

Azure Notification Hub offers a platform-independent way of managing and sending notifications to all mobile platforms to which you have deployed your mobile app.

Offline sync for mobile apps allows your mobile app to create, update, and delete data while the user doesn’t have access to a mobile network. Once the user is online again, the Azure Mobile Apps client SDK syncs the changes with your mobile app back end.
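
A sketch of the client-side pattern using the Azure Mobile Apps client SDK (the Microsoft.Azure.Mobile.Client and SQLiteStore NuGet packages; the back-end URL, table name, and TodoItem type are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.SQLiteStore;
using Microsoft.WindowsAzure.MobileServices.Sync;

public class TodoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
}

public class OfflineSyncExample
{
    public static async Task SyncAsync()
    {
        // Hypothetical mobile app back end deployed in App Service
        var client = new MobileServiceClient("https://myapp.azurewebsites.net");

        // Local SQLite store that records changes while the device is offline
        var store = new MobileServiceSQLiteStore("localstore.db");
        store.DefineTable<TodoItem>();
        await client.SyncContext.InitializeAsync(store);

        IMobileServiceSyncTable<TodoItem> table = client.GetSyncTable<TodoItem>();

        // This insert is recorded in the local store, even without connectivity
        await table.InsertAsync(new TodoItem { Text = "Review chapter" });

        // When the device is back online, push local changes and pull remote ones
        await client.SyncContext.PushAsync();
        await table.PullAsync("allTodoItems", table.CreateQuery());
    }
}
```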

You can track the activity of your mobile users and get automatically generated crash reports from your users’ mobile apps by using Visual Studio App Center.

You can configure CORS in your own code or at App Service level.

If you configure CORS both in your code and at the App Service level, the configuration in your code won't take effect because the App Service–level configuration takes precedence.
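
At the App Service level, CORS can be configured from the portal or with the Azure CLI; a sketch with hypothetical resource and origin names:

```shell
# Allow a specific origin to call the API hosted in App Service
az webapp cors add \
    --resource-group myResourceGroup \
    --name my-api-app \
    --allowed-origins "https://www.contoso.com"

# Review the currently allowed origins
az webapp cors show --resource-group myResourceGroup --name my-api-app
```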

You can secure access to your REST APIs deployed in App Service by enabling the built-in Authentication / Authorization feature without modifying your code.

Swagger is a set of open-source tools, built around the OpenAPI Specification, for documenting your APIs.

Swagger UI is the tool for visualizing the interactive API documentation generated by Swagger.

You can use tools like Swashbuckle or NSwag to dynamically generate OpenAPI documentation based on the definition of the routes on your code.
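
As a sketch of the Swashbuckle approach in an ASP.NET Core 2.x project (assuming the Swashbuckle.AspNetCore NuGet package is installed; note that newer Swashbuckle versions replace the `Info` type with `OpenApiInfo`, and the API title shown is illustrative):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Swashbuckle.AspNetCore.Swagger;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // Register the Swagger generator, defining one OpenAPI document
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseSwagger();        // serves /swagger/v1/swagger.json

        // Serves the interactive Swagger UI at /swagger
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API v1");
        });

        app.UseMvc();
    }
}
```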

Azure Functions is the evolution of WebJobs.

Azure Functions uses triggers and bindings for creating instances of Azure functions and sending or receiving data to or from external services, such as Queue Storage or Event Hub.
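
As a sketch of how bindings are declared, a function.json file for a function that is triggered by a queue message and writes its result to a blob might look like this (the queue name, container path, and connection setting name are illustrative):

```json
{
  "bindings": [
    {
      "name": "queueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "incoming-orders",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputBlob",
      "type": "blob",
      "direction": "out",
      "path": "processed/{rand-guid}.json",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

In C# class library projects, the same information is expressed with attributes such as `[QueueTrigger]` and `[Blob]` instead of a function.json file.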

There are two versions of Azure Functions. Version 1.0 only supports .NET Framework and Windows environments. Version 2.0 supports .NET Core and Windows and Linux environments.

When you work with triggers and bindings, you need to install the appropriate NuGet extension package that contains those triggers or bindings.

The Azure Functions runtime already includes extensions for timer and HTTP triggers, so you don't need to install specific packages to use them.

Triggers that create function instances can be based on data operations, timers, or webhooks.

Azure Durable Functions is the evolution of Azure Functions that allows you to create workflows in which the state of the instances is preserved in the event of VM restart or function host process respawn.

Orchestration functions define the activities and the order of execution of the functions that do the work.

Activity functions contain the code that makes the action that you need for a step in the workflow, such as sending an email, saving a document, or inserting information in a database.

Client functions create the instance of the orchestration function using an orchestration client.
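
A minimal sketch of the three roles, using the Durable Functions 1.x API (the Microsoft.Azure.WebJobs.Extensions.DurableTask NuGet package; all function names, routes, and the e-mail address are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class PublicationWorkflow
{
    // Client function: creates an instance of the orchestration
    [FunctionName("HttpStart")]
    public static async Task HttpStart(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [OrchestrationClient] DurableOrchestrationClient starter,
        ILogger log)
    {
        string instanceId = await starter.StartNewAsync("Orchestrator", null);
        log.LogInformation($"Started orchestration with ID {instanceId}");
    }

    // Orchestration function: defines which activities run, and in what order
    [FunctionName("Orchestrator")]
    public static async Task Orchestrator(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        await context.CallActivityAsync("SendEmail", "editor@contoso.com");
    }

    // Activity function: performs one step of the workflow
    [FunctionName("SendEmail")]
    public static void SendEmail([ActivityTrigger] string recipient, ILogger log)
    {
        log.LogInformation($"Sending mail to {recipient}");
    }
}
```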

An Azure Function App provides the resources needed for running Azure Functions and Durable Functions.

Thought experiment

In this Thought Experiment, you can demonstrate your skills and knowledge about the topics covered in this chapter. You can find the answers to this Thought Experiment in the next section.

You are developing a web application that is composed of several pieces that communicate with each other. The web application has a REST API and a front-end application. This web application also needs to start a document publication workflow every time a document is saved in a Blob Storage account.

With this information in mind, answer the following questions:

  1. Which serverless technology should you use for implementing the REST API?
  2. Which trigger should you use for starting the document publication workflow?
  3. Which is the appropriate function type for using the trigger in the previous question?

Thought experiment answers

This section contains the solutions to the Thought Experiment.

  1. You can use App Service API apps or Azure Functions to implement the REST API of your web application. If you use an App Service API app, you can use ASP.NET Web API 2 for implementing your API with C#. You can also use other languages, like JavaScript or Python, for this task. Alternatively, you can use Azure Functions for your REST API, using HTTP triggers and customizing the route of each function so that it listens on the appropriate endpoint.
  2. Because you need to start the workflow when a new document is saved to a Blob Storage account, you should use a Blob Trigger in your Azure Durable Function. You should use Durable functions because your workflow depends on humans for triggering some events, such as approving or declining the document. You should implement the Human Interaction pattern.
  3. The appropriate function type for using the Blob trigger is the client function. This function waits for the new-document-saved event and then creates a new instance of the orchestration function that manages the document publication workflow. You need to implement the Human Interaction pattern in the orchestration function.