AI-900 Microsoft Azure AI Fundamentals – Describe features of computer vision workloads on Azure Part 3

  1. Lab – Face Find Similar API – Using the Postman tool

Hi, and welcome back. In this chapter, I want to go through another feature that is available with the Face API: the Find Similar feature. The entire purpose of this feature is to search for similar-looking faces.

So, given a query face, this feature can search for similar faces within a list of faces managed by the Face API. To see how the Find Similar API actually works, I'm going to use this set of images. The first thing we are going to do is create something known as a "face list." We are then going to upload three images, or three faces, onto this face list. Finally, we're going to use this fourth image to query the face list and see whether this face is similar to any of the faces in the list. So let's go ahead. To create the face list, I'll go onto the face list API, where there is a method for creating a new face list.

So over here, these are the details of the request. Please note that this is a PUT method. I'm going to go on to the Postman tool, create a new request, and make sure it's a PUT request. Over here, I'll enter the URL to create a face list and give a name for the list in the URL. Then I'll go on to Headers. Over here, we add the subscription key: for my Face resource, let me copy the key name, place it over here, and then take the key value and add it to the request header. We've got that in place. Now, for the body of the request, I have to choose raw and set it to JSON. Over here, we need to give the name of the list, so I can copy the sample body, paste it over here, keep just the "name" field, and delete everything else. Let me go ahead and hit Send. We receive no response body, but we do have our list in place because the status is 200. Okay, that means it has been a successful request.
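As a rough equivalent of these Postman steps, here is a minimal Python sketch that creates a face list using the Face API v1.0 REST path; the endpoint, key, and face list ID are placeholders you would replace with your own values.

```python
import requests

# Placeholders - replace with your own Face resource values
ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"
FACE_LIST_ID = "my-face-list"  # lowercase letters, digits and dashes

# Create (or overwrite) a face list - note this is a PUT request
resp = requests.put(
    f"{ENDPOINT}/face/v1.0/facelists/{FACE_LIST_ID}",
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"name": "My face list"},
)
print(resp.status_code)  # 200 with an empty body means the list was created
```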

Now we must add our photos, or images, to the face list. This is going to be a POST request. Going back over here, we have to go on to the method for adding a face to a face list, so that's what we're going to do. Let me go on to the Postman tool; this is going to be the endpoint. I'm adding a face to my face list, so over here I need to give the name of my list in the URL. Now I have to go back onto Headers again and ensure that I have the key: place the key name over here and the value over here. Now I'll go onto the Body, choose binary, and select my first image. I'll select my first image, hit Open over here, and then go on to Headers again, ensuring that the Content-Type is octet-stream, so let me choose that. Let me hit Send. So we've got this persisted face ID; let me just copy it. This is the first face ID. Let me go back onto the Body, select the second face, then click Open, then Send. Let me copy the face ID again from here, and similarly I'll do it for the third one as well: choose the third image, click Open, click Send, take the face ID and copy it over here. So now all three faces are added to the face list.
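Continuing the sketch above, adding the three faces looks something like this in Python; the image file names are placeholders for whichever face images you are using.

```python
# Add three local images to the face list; each call returns a persisted face ID
persisted_face_ids = []
for path in ["face1.jpg", "face2.jpg", "face3.jpg"]:
    with open(path, "rb") as f:
        resp = requests.post(
            f"{ENDPOINT}/face/v1.0/facelists/{FACE_LIST_ID}/persistedfaces",
            headers={
                "Ocp-Apim-Subscription-Key": KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    resp.raise_for_status()
    persisted_face_ids.append(resp.json()["persistedFaceId"])

print(persisted_face_ids)
```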

We'll open a new tab now. We have to get the face ID of the face that we want to use in our query, so we have to use the Detect feature of the Face API for this. I'm going to use the Detect endpoint again; let me take the subscription key over here and place it with its value in the headers. Now on to the Body, and this will be binary once more. Let me select the file, since this is the face I want to query across the list of faces. Let me hit Open. I can go onto the Headers, remove the Content-Type, and choose the content type again as octet-stream. Let me hit Send. Hmm, in the headers it seems like I don't have the key, so let me quickly copy the key over here, copy the value, and hit Send. Let me take the face ID from the response; this is the face that we want to query on.
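In the same hedged sketch, the Detect call would look something like this. Detect returns a temporary faceId (valid for a limited time, which is why we capture it right away); "query_face.jpg" is a placeholder file name.

```python
# Detect the face in the query image to obtain a (temporary) faceId
with open("query_face.jpg", "rb") as f:
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=f.read(),
    )
resp.raise_for_status()
query_face_id = resp.json()[0]["faceId"]  # first (and only) detected face
print(query_face_id)
```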

Now let's use the Find Similar API. I'm going to create a POST request to find the similar faces. So if you go on to our APIs over here, go to the Face API and click on Find Similar: it is a POST request, and this is the endpoint. Again, let me quickly add the key. Now, for the body of the request, I need to choose raw, choose JSON, and search for that face ID in my face list. So let me change the list name, take the face ID, replace it over here, and hit Send. And now you can see the response with a confidence value. This confidence level tells you which persisted face ID from that list of faces it matched; if you do a comparison, it's matching with our first face ID, which is correct. So over here, it's now possible to find a face that is similar to one in a list of faces. Even though this is a long process, it's important to understand what the Find Similar feature does when it comes to the Face API.
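To round off the sketch, here is how the same Find Similar call might look in Python, using the temporary faceId from Detect and the face list created earlier; the maxNumOfCandidatesReturned value is just an illustrative choice.

```python
# Search the face list for faces similar to the detected query face
resp = requests.post(
    f"{ENDPOINT}/face/v1.0/findsimilars",
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={
        "faceId": query_face_id,
        "faceListId": FACE_LIST_ID,
        "maxNumOfCandidatesReturned": 3,
    },
)
resp.raise_for_status()
# Each candidate has a persistedFaceId from the face list and a confidence score
for candidate in resp.json():
    print(candidate["persistedFaceId"], candidate["confidence"])
```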

  1. Lab – Custom Vision

Hi, and welcome back. In this chapter, I want to go through the Custom Vision service. This service allows you to build and deploy your own image identifiers. We'll go through a lab right now and see how to work with the Custom Vision service, so you'll get a better idea of what the service is all about. The service uses machine learning algorithms to analyse your image files. You can upload images and tag them accordingly, train a model based on those uploaded images, and create projects in the Custom Vision service based on either image classification or object detection. So just to get a better idea of the Custom Vision service, let's jump onto Azure. Now, in order to make use of the Custom Vision service, in All resources we first have to add a resource based on Custom Vision. So I need to search for Custom Vision and choose that service.

Let me go ahead and hit Create. Over here, I want this service to be used both for training purposes and for prediction purposes. So over here, we are trying to create our own sort of vision service. Remember that the Computer Vision service, as an example, has certain features attached to it: you can submit images, and that service can, let's say, detect the objects within an image and give you all sorts of information. But that is built on a prebuilt model that is already available on Azure. Now let's say that you want to create your own Custom Vision service and your own Custom Vision model. In that case, you'll upload images to the model, the model will be trained, and then you can use that trained model to predict something about the images that you submit to it. So that's the entire purpose of the Custom Vision service.

Over here, you're going to train a model based on images, and then you can use it for prediction as well. So let me choose my subscription and my resource group, and let me just give a name that is available. I can choose my region, and for the pricing tier I can choose the free tier; that should be okay. Because this covers both training and prediction, it creates two resources: one for training and one for prediction. For the prediction resource, let me select North Europe again, and I can select the free tier here as well. Let me continue on to tags, go on to Review and Create, and hit Create. Let's come back once we have the resource in place. Now that you have the service in place, let's move on to the resource. Over here, in order to start using the Custom Vision service, we can go on to the Custom Vision portal. We don't need to have any sort of programming experience, et cetera; it's basically a service that provides a web interface where you can train your model and also make predictions. So let's go ahead and sign in.

I'll sign in with my same Azure account. It should log in automatically because I've already logged into Azure. So now, over here, I'm in the Custom Vision web interface. The first thing I'm going to do is create a new project. This is a project wherein I want to train my own model based on images that I submit to the model itself. Let me just give a project name; it's going to use our Custom Vision service. Now, over here, you have two project types: either you can do classification of images or you can do object detection. Please keep in mind that classification can be done using multiple tags or a single tag per image. And then you have these different domains. Over here, I'm going to choose "Object detection." Let me go ahead and hit Create.

For this object detection project, what I'm going to do is submit pictures of cats over here. I want to train my model to have the ability to detect cats if an image is given to it. For that, we first have to train our model, and to train our model, we first have to provide different images of cats to the model, which learns from those images and gains the ability to predict a cat object within an image. In order to actually train the model, we need at least 15 images. Also, each image must be in one of the supported formats, and there is a maximum size of 6 MB per image. So now that we're over here in my project, let me add those images. I'm going to select all of the images except for the last image; that will be our test image, the one we keep aside for prediction purposes. Let me open up all of the images, or basically add all of them.

So let me upload the 15 images. Let's come back once we have this in place; this takes a minute. Once the upload is complete, let me hit Done. Now, all the images are currently untagged, so let me move on to the first image. In the image detail over here, let me click on the object so that it detects a boundary within that particular image, and let me add a tag of "cat." I'm going to do this for all of the images. I'll go on to the next image, where I can again tag the object; once again, I'll go with the "cat" tag. Again, I'll go on to the next image. Let me ensure that I add the tag for each and every image, and then let's come back after that. Now, once I've tagged all of the images, under "Untagged" I don't have any images, and if I switch over to "Tagged" over here, I can see all of my tagged images. With my images tagged, let me go ahead and hit Train.

Now I want to train the model. We can do quick training, which is what I'll do; if you proceed to advanced training, you can decide how long you want to run the training for. I'm going to choose quick training and hit Train. It's going to go through the training process, which will take some time, so let's come back once this is complete. Once the training is complete, you can see the results. Now we can do a quick test. In the test, I can browse for a local file, so I'm going to browse for the file which we did not include in the training. It analyses the image, and now, over here, you can see it has detected the object, that is a cat, from the image itself, and it's also showing its level of confidence. Again, if you give more and more different types of images to the training process, the model will be in a better position and more confident to identify those objects within an image. So in this chapter, I just wanted to show you how you can use the Custom Vision service to train your own model.
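The same quick test can also be done programmatically once you publish the trained iteration. Below is a minimal Python sketch against the Custom Vision prediction REST endpoint (assuming the v3.0 path); the project ID, published iteration name, prediction key, prediction endpoint, and test image file name are all placeholders for your own values.

```python
import requests

# Placeholders - taken from the Custom Vision portal / prediction resource
PREDICTION_ENDPOINT = "https://<your-prediction-resource>.cognitiveservices.azure.com"
PREDICTION_KEY = "<your-prediction-key>"
PROJECT_ID = "<your-project-id>"
PUBLISHED_NAME = "<your-published-iteration-name>"

# Send the held-out test image to the object detection prediction endpoint
with open("cat_test.jpg", "rb") as f:
    resp = requests.post(
        f"{PREDICTION_ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
        f"/detect/iterations/{PUBLISHED_NAME}/image",
        headers={
            "Prediction-Key": PREDICTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=f.read(),
    )
resp.raise_for_status()

# Each prediction has a tag name, a probability and a bounding box
for p in resp.json()["predictions"]:
    print(p["tagName"], round(p["probability"], 2), p["boundingBox"])
```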


  1. Lab – Form Recognizer

Hi, and welcome back. In this chapter, let's go through the Form Recognizer service. This is a service that helps identify and extract text, key-value pairs, selection marks, tables, and structure from documents. You can submit documents to the service, and it has the ability to extract different types of information. It also contains prebuilt models.

So let's say that you submit an invoice to the Form Recognizer service: it has the ability to detect the different parts of an invoice, and similarly for receipts and for business cards as well. The following features are available as part of Form Recognizer. First, you have the Layout API, which can extract text, selection marks, and table structures, along with their bounding box coordinates, from the document itself. Then you have custom models: you can do the same thing when it comes to extracting text, but here the models are trained with your own data. And, as I said, you have prebuilt models when it comes to invoices, sales receipts, and business cards. So let's see how to make use of this service. Now, here we are in Azure. In order to use the Form Recognizer service, we first have to create a resource based on the service, so let me click on Add in All resources.

Over here, let me search for Form Recognizer, choose it, and hit Create. Let me choose one of my existing resource groups, choose the region, and give a name; it needs to be unique. Again, I can choose the free pricing tier. I'll go on to Next for tags, go on to Review and Create, and let's create a resource based on the service. Now, we're going to be using API calls via the Postman tool, and we're going to submit this invoice. So this is one of my invoices. Over here, you can see that you have the invoice number and other information as well: you can see the usage charges, you can see the tax, and you can see the total amount. I'm going to submit this invoice to the Form Recognizer service. The entire purpose of this service is to recognise this entire invoice as a form. It should have the ability to extract the invoice number, because it knows that this is an invoice; based on the prebuilt model of an invoice, it can extract the invoice number.

It can also extract the different charges as well. So let's see how we can make use of this particular service. Right, we've already created the service in the Azure portal. Over here, I am on the API page for the Form Recognizer API, and we can use the Analyze Invoice API. We also have one for receipts, we have custom forms, we have business cards, et cetera. If you scroll down, you'll find all of the information about what this particular API can do. Note that this is a POST method. If you scroll down here, you can see the request URL, and you can also see the request body, which is what you can actually submit: over here, you have PDF, you have JPEG, et cetera. Here, you can also look at any limitations, if applicable. So all of this information is now in place; let's use the service to analyse our invoice. The first thing we're going to do is grab the request URL and paste it over here. Now, I only want to analyse the invoice, so I don't want any other information for now.

Then, for the region, I can take the North Europe region. Even if you don't have the region specified over here on top, you can always take the region from the previous APIs which we have used, right? Let's take this to the Postman tool. I'll go on to the Postman tool and just close whatever existing requests I had. This should also be a POST request, so I'll enter the request URL. Let's scroll down; what do we have again? We have the subscription key, so let's take that. I'll go on to the Headers and add the key. For the value, I have to go on to the Form Recognizer service in the portal, then on to Keys and Endpoint. Over here, let me take a key and place it over here. What else do we need now? For the request body, we can actually attach the document, so let's do that. I'll come over here, go onto the Body, choose Binary, and select that invoice from my local system. I'm selecting the invoice PDF. Let me go ahead and hit Send.

Now, over here, we haven't got any information back in the body, but you can see that the request status is "202 Accepted." This is basically a request that has been made to the Form Recognizer service: the service has taken your invoice and is now processing it on the back end. If you want to see the output information, you should go to the headers in this response, because the response also has header information over here, including something known as the Operation-Location. Let's copy this operation location and go on to a new tab; this will be a GET request. Now, if you want to see the output information, you have to take this URL and submit it back to the Form Recognizer service. In the back end, as I said, the processing will be taking place; it just takes some time, so this operation will check whether it is already done and then give you back the required information.
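As a hedged Python sketch of what Postman is doing here, assuming the Form Recognizer v2.1 prebuilt invoice endpoint (the resource endpoint, key, and file name are placeholders):

```python
import requests

# Placeholders - replace with your Form Recognizer resource values
FR_ENDPOINT = "https://<your-form-recognizer>.cognitiveservices.azure.com"
FR_KEY = "<your-subscription-key>"

# Submit the invoice PDF for analysis - the call is asynchronous
with open("invoice.pdf", "rb") as f:
    submit = requests.post(
        f"{FR_ENDPOINT}/formrecognizer/v2.1/prebuilt/invoice/analyze",
        headers={
            "Ocp-Apim-Subscription-Key": FR_KEY,
            "Content-Type": "application/pdf",
        },
        data=f.read(),
    )

print(submit.status_code)  # expect 202 Accepted, with no body
# The URL to fetch the result from is returned in a response header
operation_url = submit.headers["Operation-Location"]
```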

Now, over here, we still have to go onto Headers and ensure that we have the header information for the key. Let me take that subscription key: I'll copy the name over here, take the value, and paste it over here. Let me hit Send, and you can see it is successful, and now you can see all of the information. If I just scroll down, you can see the outstanding balance that has been detected automatically, and you can see the current charges. If I scroll down further, we can see the usage charges over here, and we can see the pretax charges. If I scroll down even more, you can see, again, that it's using the prebuilt invoice document type. Over here, you can see the billing address; that's something that has also been detected in the invoice. And if I scroll down even more, over here it has also detected the invoice ID. So, based on that built-in template of an invoice, it has the ability to detect the different parts of an invoice. So if you want to extract this information and, let's say, keep it in a database, you can make use of the Form Recognizer service.
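Continuing the sketch above, retrieving the result is a GET on the operation URL with the same key, polled until the status turns to "succeeded." The field names shown ("InvoiceId," "BillingAddress," "InvoiceTotal") follow the v2.1 prebuilt invoice model, so treat them as illustrative rather than guaranteed for your API version.

```python
import time

# Poll the Operation-Location URL until the analysis has finished
while True:
    result = requests.get(
        operation_url,
        headers={"Ocp-Apim-Subscription-Key": FR_KEY},
    ).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)  # give the back end a couple of seconds before retrying

# For the prebuilt invoice model, extracted fields sit under documentResults
if result["status"] == "succeeded":
    fields = result["analyzeResult"]["documentResults"][0]["fields"]
    for name in ("InvoiceId", "BillingAddress", "InvoiceTotal"):
        if name in fields:
            print(name, "->", fields[name].get("text"))
```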
