AI-900 Microsoft Azure AI Fundamentals – Describe features of computer vision workloads on Azure

  1. Lab – Setting up Visual Studio 2019

Hi, and welcome back. Now in this chapter, I want to go ahead and show you how to download and install Visual Studio 2019, the Community Edition. I want to go ahead and show you some examples of how you can use a .NET program to go ahead and interact with Azure Cognitive Services. This is just to give you a better idea of how to use cognitive services such as Azure Computer Vision. In no way are you expected to go ahead and understand any sort of development when it comes to working with computer vision. But I'd like to show you this just to give you an idea of how you can use these services, and for any developers out there who are interested in using .NET to work with these Azure-based services. So the first thing that we are going to do is go ahead and install Visual Studio 2019 on a brand new Windows 10 machine. Now again, you might be working on a different machine, and the installation might be a little bit different.

So please keep this in mind. Over here, I'm just showing you a very simple example of how you can go ahead and install Visual Studio 2019. So I'm going to go ahead and search for "Visual Studio 2019" so we can actually go ahead and download the Community Edition. So over here, this is basically free of charge. So let me go ahead and hit on "free download." Again, you could also go ahead and use Visual Studio Code. Let me go ahead and hit "Run." So this will go ahead and download and run the executable for installing Visual Studio 2019. So over here in the Visual Studio installer, let me go ahead and hit on "continue" so it will go ahead and download some further installation files just to go ahead and start the installer itself. Now, once the installer has started, you can go ahead and choose the workloads that you want to install in Visual Studio. So let's say you want to learn ASP.NET and web development. You can go ahead and choose that. If you want to go ahead and do Azure development, you can go ahead and choose that. You could also go ahead and install individual components if you want to.

So I can go on to workloads. So currently, I'm just going to go ahead and choose those two components only. So that's ASP.NET and web development, and Azure development. Let me go ahead and hit install, right, so that this will go ahead and install it. The installation might take around 15 to 20 minutes. Again, this depends on your Internet speed. It depends on the processing power of the machine itself. So let's come back once the installation is complete. Now, once the installation is complete, if you want, you can sign in, or you could go ahead and do it later. So we could go ahead and just choose the option of later. You can go ahead and choose the environment. This is the general development setting. I can go ahead and choose the default and start Visual Studio. This is the first step in preparing Visual Studio for use. And now over here, you have the pane of Visual Studio 2019, where you can actually go ahead and create a new project, or you can just go ahead and continue without code. So now over here, you have the integrated development environment in place. I just wanted to go ahead and show you how you can install Visual Studio 2019.

  2. Lab – Computer Vision – Basic Object Detection – Visual Studio 2019

Now in this chapter, let’s start with working with computer vision. Now I’m going to go ahead and show you how you can work with the Computer Vision API from a .NET program. So you can go ahead and invoke the API for Computer Vision from a variety of sources. So in another tab, over here I have the quick start for using the Computer Vision client library because, in the end, you have to go ahead and invoke this service to basically give you some output.

So you can go ahead and invoke the service using any of the programming languages listed here. You can also go ahead and use the REST API. Now I’m going to show you how to use the Computer Vision service from .NET, as well as examples of how to use the Postman tool. So Postman is another tool that you can actually use for invoking APIs. So again, for the purpose of the exam, I’m just reiterating that you don’t need to have any sort of development experience. So you will not get any sort of, let’s say, programming question on the exam itself. The entire idea of these labs is to show you what the service can actually do. So the first thing we’ll do is create a Computer Vision resource in Azure. We can then go ahead and make use of that resource when it comes to working with the service itself.
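Just to make the shape of such a call concrete: the course's own code is in C#, but a rough Python sketch of how an "Analyze Image" request to the Computer Vision REST API is put together might look like this. The endpoint, key, and image URL below are placeholders, not real values; you would use the ones from your own resource.

```python
import json

# v3.2 is one published version of the Computer Vision REST API.
API_VERSION = "v3.2"

def build_analyze_request(endpoint, subscription_key, image_url,
                          features=("Categories", "Description")):
    """Return the URL, query params, headers, and JSON body for an
    Analyze Image call. Nothing is sent over the network here."""
    url = f"{endpoint.rstrip('/')}/vision/{API_VERSION}/analyze"
    params = {"visualFeatures": ",".join(features)}
    headers = {
        # This header carries the subscription key that authorises the call.
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({"url": image_url})
    return url, params, headers, body

# Placeholder values for illustration only.
url, params, headers, body = build_analyze_request(
    "https://westeurope.api.cognitive.microsoft.com",
    "<your-key-here>",
    "https://mystorageaccount.blob.core.windows.net/images/scanimage.jpg",
)
print(url)
```

You would then POST `body` to `url` with those headers using any HTTP client; the point is simply that every call needs the endpoint, the key, and the image location.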

So in all resources, let me go ahead and click on “Add” over here. Let me go ahead and search for computer vision. So let me go ahead and choose that, and then let me go ahead and hit on Create. I’ll go ahead and choose my resource group over here. Let me go ahead and choose a region. We have to give a unique name for the Computer Vision service. Now we have two pricing tiers. I’m going to go ahead and use the free pricing tier. So this is a demo over here. Basically, we can make just 20 calls per minute. That should be fine for us. So let me go ahead and use the free pricing tier. I’ll go on to Next for tags; leave everything as it is. I’ll go on to review and create, and let me go ahead and hit on create. Now we’re going to go ahead and actually submit an image to the Computer Vision service. As you can see, it’s already been deployed, so we can go on to the resource.

So as I said, what we’re going to do is go ahead and submit an image via a .NET program to this Computer Vision service, and we’re going to use that service to give us some basic information about the image itself. So remember, if you were to submit an image to any normal machine, that machine would not be able to go ahead and understand anything about the image unless it has been trained to understand different aspects of an image. But with the Computer Vision service, which is available as part of Azure Cognitive Services, this feature is already in place. Now I want to go ahead and store the image somewhere in Azure; your image could also be on your local machine. Remember, earlier on in one of our earlier chapters, we actually went ahead and created an Azure storage account? So let me go ahead and open all resources in a new tab. So this is from the beginning, when we started working on Azure.

In this particular course, I showed you how to create an Azure storage account. So let’s make use of that same storage account. If you want, you can actually go on to storage accounts; if I go and expand this, you can go on to storage accounts. And over here, you can see a storage account. This is the one that we created earlier on. So let me go on to it. So remember, we had gone ahead and added the Blob service, and we had created a container, so we can go on to the container. Now I’m going to go ahead and delete the files that I have over here, so I can go ahead and choose the file and hit delete. Let me go ahead and upload an image from my local machine. So I’m just uploading a simple image. Let me go ahead and hit upload, right, so we’ve got the image in place. If I go on to the image and hit edit, this is the image that is currently in place. So there are some objects in this particular image.

So I want to go ahead and submit this image to the Computer Vision service. And I want to understand what sort of information the Computer Vision service can actually give us. Now over here, I’ve gone ahead and launched Visual Studio 2019. Now let me go ahead and click on “Create a new project.” Over here, I want to go ahead and search for creating a console-based application. So I’ll go ahead and select Console App (.NET Framework). Let me go ahead and hit next. So I’ll just choose a temporary location. Let me go ahead and give the project a name. I’ll choose the framework as it is; you might have a different version of the framework. Let me go ahead and hit Create. So console-based applications are kind of the easiest applications we can actually go ahead and start with because, as I said, we are not trying to learn programming over here. And I’m not going to go ahead and actually show you all aspects of the program itself. I just want to show you what this Computer Vision service is capable of. This is just for those students who are actually well versed in programming, if they want to follow along. Now, in order for this .NET program to go ahead and work with the Computer Vision API, we have to go ahead and install the Computer Vision package.

For this, we must navigate to Tools and then to NuGet Package Manager. Go ahead and browse for the solution by clicking on Manage NuGet Packages for Solution. And then I’m going to go ahead and search for the Computer Vision package, which is Microsoft.Azure.CognitiveServices.Vision.ComputerVision, right? I’m going to go ahead and choose that. I’m going to choose my project, and I’m going to go ahead and hit Install. This just takes a couple of minutes. Now it might give you a prompt window. You can go ahead and hit OK and then on Accept. So this will go ahead and install the required packages for your particular project solution. Now, once this is done, I’m going to go ahead and click on Program.cs. So over here, we have the program in place. Now I’m going to go ahead and place some code over here. Please keep in mind that all of this code is also available as a zip file, which is directly linked as a resource to this chapter.

Right, so I’ve already copied and pasted various parts of this code. Now I just have some red squiggly lines over here. So over here, I can go ahead and choose the light bulb, and I can say, “Please go ahead and use the package that we just installed.” I can also go on to the others and do the same thing. So again, if you are trying to follow along, I have to make sure that I have no errors over here. Now let me go ahead and just quickly explain what we are trying to do over here. So we’re trying to go ahead and use the Computer Vision service to go ahead and process our image and give us some information about the image. Now, in order for this program to go ahead and authorise itself to use our Computer Vision service, we have to go ahead and paste in the subscription key and the endpoint. The best thing about services in Azure is that they all have some security aspect in place so that no one can actually just go ahead and start invoking your Computer Vision service. Only someone who has this secure key in place can actually go ahead and invoke your Computer Vision service.

So how do we get the subscription key and the endpoint? You must return to your Computer Vision service, then to keys and endpoints. So over here, we have two keys in place. We have key one, and we have key two. We can use either key. So you could go ahead and show the keys and see the information about the keys. You can then hide it. You can go ahead and copy any one of the keys onto the clipboard, and we can go ahead and paste it in our program over here. Now the next thing is our endpoint. So for the endpoint, let’s go ahead and come back over here and copy this endpoint. So let me go and hit copy onto the clipboard, go back onto Visual Studio, and replace it over here. Next, we have to go ahead and determine the location of our image. So, once again, if you go to our image in the other tab, that’s very simple, right? So if you go onto the overview, we can go ahead and copy the URL onto the clipboard, go back onto our program, and replace it over here. And now this program is going to use the ComputerVisionClient class that is already in place to invoke our Computer Vision API. Now over here, I’m saying please go ahead and get the categories and the description of the image.

So, whatever categories and descriptions the Computer Vision service can discover about the image, I want to know about them, and I want it to pass those details on to me. So if I go ahead and scroll down in the console-based application, I’m just going ahead and getting all the captions for the description itself, basically showing what the text is and what the confidence level is. So for everything that Azure Cognitive Services actually detects, it tries to give you a confidence level. What confidence does it have that it’s giving you accurate information about the image itself? I’m also going to go ahead and add a statement that just reads an input key from the user so that we can see the output of the console-based application. Now let me go ahead and hit on “Run” or click on “Start.” So it is analysing the image. And here you can see the caption for the image itself. So there’s a laptop and a tablet on a table. And over here, it gives a confidence level ranging from 0 to 1. If it’s closer to one, that means it has more confidence in the information it’s actually giving you. But you can see that this particular service now has the ability to go ahead and look at the image and give you information about the image. This is just the beginning of the abilities that are actually available with the Computer Vision service. Bye.
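To give you an idea of what the program is looping over when it prints the captions: the analysis comes back as JSON with a `description` section containing captions and confidence scores. The sample below is a trimmed, illustrative response (the caption text matches the lab, but the exact confidence numbers are made up), with a small Python helper that formats each caption the way the console app does.

```python
# Trimmed, illustrative Analyze Image response; real responses contain
# more fields, and the confidence values here are made up for the example.
sample_response = {
    "description": {
        "tags": ["laptop", "table", "computer"],
        "captions": [
            {"text": "a laptop and a tablet on a table", "confidence": 0.92},
        ],
    },
}

def caption_lines(analysis):
    """Format each caption with its 0-to-1 confidence score."""
    return [
        f"{c['text']} (confidence: {c['confidence']:.2f})"
        for c in analysis.get("description", {}).get("captions", [])
    ]

for line in caption_lines(sample_response):
    print(line)
```

The closer the confidence is to 1, the more certain the service is about the caption, which is exactly what the console output in the lab shows.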

  3. Lab – Computer Vision – Restrictions example

Hi, and welcome back. Now, in this chapter, I just want to give a quick note when it comes to a restriction that applies to images that are submitted to the Computer Vision service. And here, I just want to go ahead and make a reference to the size. I’ve uploaded the same image again, but in a higher resolution. So over here, you can see that the size of this image is roughly around 24 megabytes. And the original image we had was just in the order of kilobytes. So let me go ahead and click on this image.

Let me go ahead and take the URL. So in Visual Studio, let me go ahead and place the URL over here, and let’s go ahead and submit this image for processing. So I’ll go ahead and run the program, and actually what’s going to happen is that it’s going to go ahead and generate an error. So we are going to be redirected back to Visual Studio. There’ll be an exception that is generated in the program itself. So if I go on to view details over here, I’ll just click on it. So if I go ahead and just expand this a little bit, I want to go on to the body of the exception. And over here, it says invalid image size: the input image is too large. So over here, basically, there’s a restriction that the image can be no larger than four megabytes. So again, there are some restrictions when it comes to what you can actually submit to the Computer Vision service. Over here, I just wanted to give an example of one of those restrictions.
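If you wanted to catch this before ever calling the service, you could check the image size locally first. Here is a small hedged Python sketch of that idea (the 4 MB limit is the one the error message in this lab reports; the function name is just for illustration):

```python
# 4 MB limit, as reported by the "invalid image size" error in this lab.
MAX_IMAGE_BYTES = 4 * 1024 * 1024

def check_image_size(image_bytes):
    """Raise ValueError before submission if the image exceeds the limit."""
    if len(image_bytes) > MAX_IMAGE_BYTES:
        raise ValueError(
            f"Invalid image size: {len(image_bytes)} bytes exceeds "
            f"the {MAX_IMAGE_BYTES}-byte limit"
        )
    return True

# A kilobyte-sized image passes; a ~24 MB image would raise, like in the lab.
check_image_size(b"x" * 100_000)
```

A check like this saves a round trip to the service and gives you a friendlier error than the raised exception we saw in Visual Studio.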

  4. Lab – Computer Vision – Object Bounding Coordinates – Visual Studio 2019

And welcome back. Now, in the previous chapter, I demonstrated, using a .NET program, how we could use the Computer Vision service to take an image stored in an Azure storage account and have it go ahead and provide us with a description of the image. That means Computer Vision was able to understand various aspects of the image and then provide us with a description of the image itself. Now we’re going to go ahead and use the same program. I’ve made some modifications because I want to go ahead and take in more information on what the Computer Vision service can actually give us. So over here, in addition to the categories, the description, and the tags, I’m also asking it to go ahead and return all of the objects that it detects in the picture itself. Then, for each of the objects, I’m interested in what name it has given to the object, how confident it is, and what the bounding coordinates are. So what is the X coordinate? What is the Y coordinate? So where exactly is the object in the picture itself, and what is the width and what is the height? So let me go ahead and run this program, right?

So over here, it now has the ability to go ahead and detect different objects in the picture. So it detected a cup, another cup, a laptop, a book, et cetera. So now it has the ability to go ahead and detect different objects within a picture. If you look at the laptop, for example, it has an X coordinate of 353 and a Y coordinate of 0. So if I go on to the picture, this is the laptop in question, right? So it’s gone ahead and detected this particular object. So over here, the Y coordinate is zero because the object starts right from the top of the picture, and the X coordinate is where the object starts from the left. And then you have the height and the width. Similarly, it has actually gone ahead and detected other objects within this picture as well. So now Computer Vision has this ability to go ahead and get information about an image. Just think that if you were to go ahead and train your own model or your own machine to go ahead and decipher different objects in an image, this would be really quite difficult. The service already has built-in intelligence to go ahead and detect different aspects of your image.
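To see how the four numbers pin down an object, here is a small Python sketch that turns a bounding rectangle (the `x`, `y`, `w`, `h` fields the object detection feature returns) into its corner coordinates. The laptop's x and y come from the lab; its width and height below are made-up values for illustration.

```python
def bounding_box_corners(rect):
    """Compute the four corners of a bounding rectangle.
    The origin (0, 0) is the top-left of the image; x grows rightward
    and y grows downward."""
    x, y = rect["x"], rect["y"]
    w, h = rect["w"], rect["h"]
    return {
        "top_left": (x, y),
        "top_right": (x + w, y),
        "bottom_left": (x, y + h),
        "bottom_right": (x + w, y + h),
    }

# The laptop from the lab: x=353, y=0 as reported; w and h are
# hypothetical values just to show the arithmetic.
laptop = {"x": 353, "y": 0, "w": 400, "h": 300}
print(bounding_box_corners(laptop))
```

So y = 0 simply means the laptop touches the top edge of the picture, and adding the width and height to (x, y) gives you the opposite corner of the box.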

  5. Lab – Computer Vision – Brand Image – Visual Studio 2019

Hi, and welcome back. Now, in this chapter, I want to showcase another feature that is available from the Computer Vision service when it comes to the detection of objects in an image. So in this particular program, I’ve actually gone ahead and made modifications so that, using the Computer Vision service and a new image, which I’ve uploaded onto the Azure Storage account, it has the ability to go ahead and detect any sort of brand logos that are actually present in the image itself. So in the Azure Storage account, I went ahead and uploaded another image, this time a JPEG file. If I go on to the image itself and hit edit, I have the image over here and the Apple logo over here. So, in my program, I went ahead and changed the URL to point to this second JPEG image. Over here, again, it’s the same service and the same key. Scrolling down now in terms of the features, I’m saying, “Please go ahead and look at the brands that get detected.” And then I’m going ahead and saying, “Please give the name of the brand; give the level of confidence.” So let me go ahead and run this program, and you can see that it has detected the Apple brand logo, and over here, it’s given the confidence level. So this is another feature that is actually available when it comes to the Computer Vision service, which is available in Azure.
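The brand results come back in the same style as the objects: a `brands` list with a name, a confidence, and a bounding rectangle. A minimal Python sketch of pulling those out (the confidence and rectangle numbers below are invented for illustration; only the brand name matches the lab):

```python
# Illustrative response fragment; the numbers are made up for the example.
sample_response = {
    "brands": [
        {
            "name": "Apple",
            "confidence": 0.87,
            "rectangle": {"x": 120, "y": 40, "w": 60, "h": 60},
        },
    ]
}

def detected_brands(analysis):
    """Return (brand name, confidence) pairs from a brand-detection result."""
    return [(b["name"], b["confidence"]) for b in analysis.get("brands", [])]

print(detected_brands(sample_response))
```

So just like object detection, brand detection tells you not only what logo it found but how sure it is and where in the image the logo sits.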

  6. Lab – Computer Vision – Via the POSTMAN tool

Hi, and welcome back. Now, we have been seeing how to invoke the Computer Vision API, which is basically the service, via a .NET program. Now, I’m going to go ahead and show you how you can invoke the Computer Vision API via the Postman tool. So over here, I have the Postman tool in place. So, remember how we used this tool earlier to invoke an endpoint that was available when it came to Azure Machine Learning? Now, we can also go ahead and do the same thing when it comes to the APIs that are available with Azure Cognitive Services.

So this makes it much easier to go ahead and see what the services can offer without actually having any sort of programming knowledge. So if you’re not well versed in .NET, if you don’t know how to use Visual Studio, if you don’t know how to use C#, you can actually go ahead and just use the Postman tool and go ahead and invoke the APIs. So let’s see how we can go about doing this. So firstly, in the documentation itself, when you go to the REST API section, if you go ahead and scroll all the way down, over here, you can go ahead and click on “Explore the Computer Vision API.” So this will take you to another link. Now, over here in the link, it basically tells you that if you want to go ahead and analyse an image, if you want to go ahead and detect objects, if you want to go ahead and describe an image, or if you want to go ahead and basically use OCR, then you can actually go ahead and use the documentation that’s available over here.

So if you want to go ahead and describe the image or if you want to go ahead and detect the objects, right, you’ll get all of this information over here, and this information will actually tell you how you can actually invoke the API. So let’s go ahead and understand how we can actually use the information that we have over here. So firstly, when it comes to invoking the API, you have the request URL. So let me go ahead and take this URL. So I just go ahead and open Notepad, right? So I have the URL over here. Now, what we have to replace is this endpoint. We can go ahead and take the endpoint from anywhere here. So if I go ahead and take the endpoint of North Europe, let me go ahead and copy this and we can place it over here, right? So this basically becomes our endpoint. So if I come over here, this is it. This is the endpoint. So let me go ahead and copy this onto the Postman tool, right? So I’ve got this in place. Now, next, if I just go ahead and scroll up, the method is a POST method.

So we have to make sure that we change that as well. So, if we go ahead and select it as a POST method, let’s go back; let’s scroll down. Now it’s saying in the request headers that we have to go ahead and set the content type, which is fine; we don’t have to do that. But you have to go ahead and have this subscription key. So this basically authorises the use of your cognitive services. So let me go ahead and copy this. Go on to the Postman tool, go on to headers over here, and add that key and the value. So we already have Azure Cognitive Services in place, so we can go on to our Computer Vision resource. We can go ahead and take a key from here, go on to the Postman tool, and place it as the value. Now, next, we can go on to the body. Over here, we can go ahead and choose the binary format, and over here, we can go ahead and select the image file from the local system. So let me go ahead and select the file.

So I’m selecting that file, which basically has the images of the laptop and the cups, et cetera. So if I go on to the image, this is the image. So this is the same image. The only difference is that instead of taking the image from the storage account, I’ve just gone ahead and browsed for the image from my local system. That’s it. Now let me go ahead and hit “Send.” And now you can see that you’re getting the output. So over here, you’re getting the objects. So over here, you have the cup. It has a high confidence level. And then you have the bounding rectangle. Again, you have the other cup. So all of this information about the objects is now being returned to you. So now we have another way of invoking the API when it comes to computer vision. So I’m going to go ahead and show you a lot of examples when it comes to using the Postman tool for invoking APIs because not everybody will be well versed in using .NET and Visual Studio.
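What Postman is doing here can also be sketched in a few lines of Python. With a binary body, the only differences from the JSON case are the content type and the fact that the body is the raw bytes of the local file. The endpoint below is a placeholder; the `detect` path is the object-detection operation this lab is exercising.

```python
def build_binary_detect_request(endpoint, subscription_key):
    """Build the URL and headers for an object-detection call
    with a binary (local file) body, like the Postman setup."""
    url = f"{endpoint.rstrip('/')}/vision/v3.2/detect"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        # For a binary body, the content type is an octet stream
        # instead of application/json.
        "Content-Type": "application/octet-stream",
    }
    return url, headers

url, headers = build_binary_detect_request(
    "https://northeurope.api.cognitive.microsoft.com", "<your-key-here>"
)
# The body would then simply be the raw bytes of the local image file:
# with open("scanimage.jpg", "rb") as f:
#     body = f.read()
print(url)
```

So whether you submit a URL in a JSON body or raw bytes as a binary body, the endpoint, the method, and the subscription key header stay the same.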

  7. The benefits of the Cognitive services

So in the last set of chapters, we’ve seen quite a lot of benefits when it comes to the usage of the cognitive services that are available in Azure. So we’ve seen the Computer Vision service in action. Over here, we could go ahead and invoke this service, either from a .NET program or from a tool known as Postman, and we could go ahead and submit images onto the service. And the service had the ability to go ahead and detect objects in the image. It had the ability to go ahead and understand what the image was all about. So over here, you didn’t have to go ahead and use machine learning to go ahead and train a model. Normally, you would have to go ahead and train a model by feeding it different images.

The model would then go ahead and understand the images, and you could then go ahead and invoke the model and see the output based on a submitted image. But over here, you didn’t have to do any of that. You didn’t have to train or test a model. In fact, everything is already available to you. So via cognitive services, a developer does not need to have any sort of knowledge when it comes to machine learning because they don’t need to go ahead and train and test a model. All they need to do is go ahead and invoke the service based on what they want, based on the platform, and get the desired result. This is one of the core benefits of using Azure Cognitive Services.

  8. Another example on Computer Vision – Bounding Coordinates

Hi, and welcome back. Now, just to give you an idea when it comes to the bounding box, which is given by the object detection feature in the Computer Vision API, I’ve set up a .NET program. This is based on the Windows Presentation Foundation framework. Now, I’m not going to go into any details about this program at all. I actually want to go ahead and run the program because I want to show you what it can actually do. So this is making use of the Computer Vision API.

So here I have my subscription key and the endpoint. Let me go ahead and just run the program. So over here, I want to go ahead and browse for an image on my local system, right? So I have an image of fruits over here. Now I want to go ahead and tell the computer vision service to go ahead and identify the objects in this image and just give me a bounding box. So a bounding box like this basically identifies, “Okay, this is one object, then this is another object, et cetera.” So when I go ahead and actually click on “Analyze Image” over here, which takes a couple of seconds, it’s actually going to the computer vision service.

And now you can see the bounding boxes for the objects within the image. So why am I showing this? Just to understand the concept that this is actually possible. Because this is important from an exam perspective to understand that you do get these bounding coordinates, right? So you get the X coordinate, the Y coordinate, the width, and the height of the object. And using all of that, you can actually get a bounding box for the object itself. It aids in distinguishing one object’s position from another. So just to showcase this particular feature that is available, I just want to go ahead and show you this program.

  9. Lab – Computer Vision – Optical Character Recognition

Hi, and welcome back. Now in this chapter, we’ll look at another feature that is available with the Computer Vision service, and that is the Read API. So this service can be used to extract printed and handwritten text from images. So here it uses the capabilities of optical character recognition. It supports the extraction of printed text in several languages. It supports the extraction of handwritten text, which is currently supported for the English language only. It can go ahead and extract text from a variety of file formats. So let’s go ahead and see how to work with the Read API. Now, for the purpose of this lab, I’m going to be using this image. So this is basically a screenshot of my PowerPoint slide. So over here, I want to go ahead and use the OCR service to go ahead and read the text within this image itself.

So we’re going to go ahead and use this image for the purpose of this lab, so we are going to head on over to the Computer Vision API. We’re going to be using the Postman tool. So over here, we’re going to go ahead and use the OCR features, which are the optical character recognition features. So over here, if you go ahead and scroll down, it tells you that this has the ability to go ahead and extract the text, basically detect and extract the text that is available in an image. So over here, if you go ahead and scroll down, let’s go ahead and start working with this API call. So firstly, let me go ahead and take the request URL again, so I’m pasting it over here. So, one step at a time. Now, where is the location? So, once again, I’ll take North Europe and place it over here, right, so we have this as our URL. Now we also have the ability to go ahead and specify a language or even detect the orientation. So if I go ahead and scroll down, you will actually see the supported languages. Over here, you can see this is optional. You do not need to specify a language, and the orientation detection is also optional.

So I’m not going to go ahead and actually add any one of these parameters. However, if we wanted to add a parameter solely for the purpose of defaulting to English, we could do so by adding language=en and leaving out the orientation parameter. So this now forms our URL. Now again, just to confirm, this should also be a POST request. So in the Postman tool, let’s go ahead and create a new request. I’m going to go with POST. I’ll put in the URL, and then I’ll go on to headers. So this remains the same for all of your requests when it comes to the Azure Cognitive Services. So we have to ensure that we have that subscription key. So let me go ahead and copy this. Let me add the key name and the value. Let me go on to our Computer Vision service. I’ll go onto the keys, I’ll copy the key, and I’ll paste it over here. Now, in terms of the body of the request, let me go ahead and choose binary again, and let me go ahead and select the file that has my content, my image. So I’ve selected my image; now let me go ahead and hit send. So now over here, when you scroll down, you can see all of the words that have been recognized. So it’s basically detected each word as a separate object over here, along with the bounding boxes. So, if you were to call this, say, from a .NET program, you could get the entire text back as a single string. But again, over here, you have seen that this service has the ability to go ahead and read text from an image.
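That last point, getting the entire text as a string, is just a matter of walking the OCR response: it comes back as regions, which contain lines, which contain words. A small Python sketch of flattening that structure (the sample words below are invented; a real response from the slide image would contain the slide's text):

```python
# Trimmed, illustrative OCR response: regions -> lines -> words.
sample_ocr = {
    "language": "en",
    "regions": [
        {
            "lines": [
                {"words": [{"text": "Computer"}, {"text": "Vision"}]},
                {"words": [{"text": "Read"}, {"text": "API"}]},
            ]
        }
    ],
}

def extract_text(ocr_result):
    """Join the per-word results back into full lines of text."""
    lines = []
    for region in ocr_result.get("regions", []):
        for line in region.get("lines", []):
            lines.append(" ".join(w["text"] for w in line.get("words", [])))
    return "\n".join(lines)

print(extract_text(sample_ocr))
```

So even though the service reports one object per word, reassembling the full text is just a couple of nested loops.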

  10. Face API

Hi, and welcome back. Now, in the next set of chapters, we’ll go through the facial recognition features that are available with the Azure Cognitive Services. So this is available with the Computer Vision API. And there is a separate API known as the Face API. Now, when you use the facial recognition features, which are part of the Computer Vision API, you can go ahead and submit an image that contains a face to the Computer Vision API. And over here, you’ll get the coordinates of the face itself. And it also has the ability to go ahead and predict the age and gender of the person itself. But there is also the Face API, which is available as part of Azure Cognitive Services.

So the Face API, in addition to giving you the coordinates of the face and also the age and gender, can also give you additional face attributes as well. So this API has the ability to go ahead and detect the head pose, the emotion of the person, and whether the person is wearing glasses or not. So the Face API provides additional capabilities. It has built-in intelligence. In addition to face detection, you have another operation known as the Verify API. This can be used to check if two faces belong to the same person. So again, when it comes to facial recognition, there are a lot of features that are available with Azure Cognitive Services. So let’s go ahead and see them.
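To make the extra attributes concrete before we dive into the labs: a Face API detection result includes a face rectangle plus a `faceAttributes` section with things like age, gender, glasses, and per-emotion scores. The sample below is illustrative (all the values are made up), with a small Python helper that summarises one detected face, including picking the strongest emotion.

```python
# Illustrative Face API detection result; every value here is made up.
sample_faces = [
    {
        "faceRectangle": {"top": 60, "left": 100, "width": 80, "height": 80},
        "faceAttributes": {
            "age": 30.0,
            "gender": "male",
            "glasses": "ReadingGlasses",
            "emotion": {"happiness": 0.92, "neutral": 0.07, "anger": 0.01},
        },
    }
]

def summarize_face(face):
    """Pull out the attributes this chapter talks about, and pick the
    emotion with the highest score as the dominant one."""
    attrs = face["faceAttributes"]
    top_emotion = max(attrs["emotion"], key=attrs["emotion"].get)
    return {
        "age": attrs["age"],
        "gender": attrs["gender"],
        "glasses": attrs["glasses"],
        "emotion": top_emotion,
    }

print(summarize_face(sample_faces[0]))
```

This is the kind of extra detail the Face API gives you over the basic face detection in the Computer Vision API.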
