Microsoft Azure AI AI-102 Exam Dumps, Practice Test Questions

100% Latest & Updated Microsoft Azure AI AI-102 Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!

Microsoft AI-102 Premium Bundle
$69.97
$49.99

AI-102 Premium Bundle

  • Premium File: 156 Questions & Answers. Last update: Jan 28, 2023
  • Training Course: 74 Video Lectures
  • Study Guide: 741 Pages
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates


Download Free AI-102 Exam Questions

File Name                                                        Size       Downloads   Votes
microsoft.test-king.ai-102.v2022-12-18.by.elliot.67q.vce         2.69 MB    52          1
microsoft.testkings.ai-102.v2021-10-13.by.dominic.57q.vce        983.61 KB  486         1
microsoft.testking.ai-102.v2021-10-09.by.angel.41q.vce           1.14 MB    497         1
microsoft.testking.ai-102.v2021-07-09.by.angel.80q.vce           814.27 KB  590         1
microsoft.test4prep.ai-102.v2021-05-06.by.cameron.32q.vce        792.02 KB  649         2
microsoft.examcollection.ai-102.v2021-05-05.by.archie.14q.vce    566.78 KB  653         2

Microsoft AI-102 Practice Test Questions, Microsoft AI-102 Exam Dumps

ExamSnap's complete exam preparation package covers the Microsoft AI-102 Practice Test Questions and Answers; the study guide and video training course are included in the premium bundle. Microsoft AI-102 Exam Dumps and Practice Test Questions come in the VCE format to provide you with an exam testing environment and boost your confidence.

Computer Vision Text and Form Detection

1. Extracting Text from Images (Printed and Handwritten)

The next set of computer vision APIs that we're interested in is extracting text from images. Now, we've got two types of text that we can recognise here. One is computer-generated text, read through optical character recognition (OCR); it can extract text from images or PDFs using this technique. The other type of text we'll be looking at is handwritten text, handled by something called an ink recognizer. And so these two things have different ways of extracting text from images.

If we go to GitHub, we're going to switch to a different directory under the computer vision project. It's called extracting text from images. And the first thing we're going to look at is printed text. Now, this is all pretty much the same: we're setting up the computer vision client with the Cognitive Services credentials and basically creating that. What we're going to be dealing with, scrolling past the local section into the remote section, is a function called recognize_printed_text. So for OCR, for computer-printed text in an image, you pass in the image URL and it's going to recognise the text. We'll see here that it's going to return not only the words that it finds, but also the location on the image in which the words exist, if that's helpful for you.

Now, handwritten text, which we'll see in a second as well, uses a completely different technique. We're going to be using the read command, and this is an asynchronous operation. Obviously, it's a harder task for handwritten text to be detected. So you pass in the image and then you have to go and wait for the results. Once the results are available, you get them and fill in the object with that. If the operation has not started, you'll wait. Finally, when you get to a successful state, you can check on those results, and we will print out the text and the bounding box in a similar way.
So the handwritten text is obviously a more complicated operation than the printed text, which is a little bit simpler. Let's head over to PyCharm and have a look at both of these in action. By the way, here's the image we'll be analysing for printed text, and it's a little bit challenging. You can see that it's at an angle, there are a lot of words on the screen, and some of them get a little bit blurry; there's obviously a blurring effect. Let's see how well the computer vision service does at reading the text on the screen. If we look at the handwritten text, this is a handwritten sample with the classic "quick brown fox jumps over the lazy dog" example. Okay, so let's switch over now. Alright, let's run the code and we can see the results coming through. We're not going to look at every letter. There are a couple of mistakes here; clearly some of the letters have gotten cut off because the image was designed that way, but it's determined "nutrition facts" and "serving size" and "saturation", et cetera. The handwritten text also did pretty well: "the quick brown fox jumps over the lazy dog". We can also see the bounding box X and Y coordinates for how that text appears in the image.

2. Form Recognizer: Forms and Receipts

So there's a related service to computer vision called Form Recognizer, and it uses a different set of APIs. You can see it at the top in Python: this is the azure.ai.formrecognizer package, and we're going to use the FormRecognizerClient to access that service. So it's the same Cognitive Services key and endpoint, but with the Form Recognizer client. Now, there are two things we're going to cover in this video. One is recognising the contents of forms; the other is recognising the contents of receipts. Here's what we mean by forms. This is a purchase order, but it could be an invoice or any other preformatted form. It has data on it: the vendor's name, ship-to and ship-from, the name of the company, quantities and products. So you might need to import this information into a database, and this API is going to help you do that. Another thing the API can do is recognise receipts. This one is from a fictitious Microsoft company, Contoso, and it's basically a Surface Pro receipt. Our Form Recognizer API is going to be able to pull the data out of that receipt, and again, we can put that into a database or do whatever we want with it. So let's try running this code. We're going to use the begin_recognize_content API for the form, and we'll use the begin_recognize_receipts API for the receipts. Now, we can see that it's found a table. It's going to point out the subtotal and its value, and there are X and Y coordinates. If you know that you're looking for the subtotal, the tax, the ship-from or the ship-to, you're going to be able to pull that information out of the results of this API. Now we're also looking at the receipt. Again, there's a confidence score for these things. It's going to have the merchant's address, the merchant's phone number, and the transaction date. You can see it's actually put this data into usable columns, and we can even see the items on that receipt.
So this can be very helpful if you've got to process a lot of images of receipts and invoices.

3. Reading Handwriting with OCR in .NET

So let's turn our attention back to .NET Core and C# and see if the Computer Vision service, and particularly optical character recognition, can read the text from this person's handwriting. Again, this is a Creative Commons image from Flickr. I've uploaded that handwriting into my Azure Blob account and set a constant with that URL. Now, we're going to use the same client, but instead of calling the analyze image URL method, we're going to call a method that we'll create called ReadFileUrl, and it's going to extract the text from this URL. But we still need to create this method. I'm going to minimise this method and instead create a new method called ReadFileUrl. Again, it takes the computer vision client and the link to the image. Now, it's pretty straightforward. We're just going to call the ReadAsync method of the computer vision client, and that's going to return a variable we're calling textHeaders. As you can expect, a method such as this is going to take a couple of seconds to perform the optical character recognition. So we're going to take the location where the results are going to be stored, put it in a variable called location, and make the thread sleep for two seconds. Now, what we're going to do is call another method to extract the results. We're going to create a variable called results and call GetReadResultAsync, passing in the operation ID that comes back from the first asynchronous call, to get the results. And so this will come back into this results variable here. We're going to wait until the status is not Running and not NotStarted; it's either Completed or Failed. The last element of this is just to output the results. The results are returned to a variable; we get the read results, which is a sub-variable of this, and then we're going to loop through each read result and each line.
I'm going to set a breakpoint here, and we're going to make sure it's properly bracketed here. Try to say "properly bracketed" three times fast. Then we're going to press F5, debug this, and see if the computer vision service can actually read the text on that image. I'm not even sure if it can, because I haven't tested it before now, but I'm just going to run it anyway. So let's see how the computer vision service handles that image. Remember, we did put that two-second delay in there. You can see it says "I believe you can tell a lot about someone from their handwriting." Most of the words are correct. There is one clearly incorrect word and a missing piece of punctuation, but the computer vision service did a pretty good job of extracting the text from this image. And that's as easy as it is to call the optical character recognition service from Azure now. I can remember, going back ten or fifteen years, having to purchase an optical character recognition library, and how difficult that was. It is now infinitely easier to detect handwriting in images.

Extract Facial Information from Images

1. Detect and Match Faces in an Image

Well, we're moving on to the next section of the exam, which has to do with extracting facial information from images. We're still dealing with computer vision, but in this section of the course, and on the exam, we are going to drill down past Cognitive Services into something called the Face API. It's important to note that Cognitive Services has a very basic ability to detect faces: it can say there's a face and give you the borders of it. But if you really want to dig into the attributes of the face, the poses, which direction the person is looking, or their emotions, you're going to have to use what's called the Face API. We can see here in the Microsoft documentation that it's pretty detailed. It can discern 27 landmarks of your face, starting from where your nose starts on each side, the tip of your nose, and various attributes of your mouth, your eyes, and your eyebrows. Obviously, for each individual there are going to be slight differences, and this goes into how it can tell different people apart. But you can see that the Face API can also estimate the age of the person in the picture, how blurry the photo is, and whether they're showing an emotion, such as happiness, sadness, or anger. That's pretty cool. It can tell whether they've got facial hair, estimate their gender based on their face, tell whether they're wearing glasses, and detect things like how they're posing their head, whether they're looking to the left or right, up or down, and whether they're wearing makeup or not. So there are lots of things you can discern from this. Now, in the case of the code, if we go over to the AI-102 files GitHub repository, we go into computer vision, then the facial information subdirectory, and we're looking at detecting and matching faces. Like I said, we're not using the Cognitive Services API for this; we're using the Face API. This is azure.cognitiveservices.vision.face.
It's basically a subset of Cognitive Services. We still need a Cognitive Services key and endpoint to create our FaceClient. What we can do then is pass in an image of a person and use the detect_with_url method to determine that there's a face in this image. Here's the picture that we've chosen: a famous American president, John F. Kennedy. Now, at this stage, it's not noting that this is John F. Kennedy; it's not like it has celebrity recognition. We are recognising that this is a face, and these are the eyes, nose, ears, and mouth, along with the coordinates of those elements. So we're using the detect command, we're going to pass in the URL of a single face, and we do get to specify a specific API model for that. If there's no face detected, then we'll get no face returned. Now, one of the unique things here is that Azure will assign what's called a face ID to this face. It's not telling us that it's John F. Kennedy; it's giving us a globally unique identifier for this person. What we can do with that is give it a picture of multiple people and detect that individual there. This is the Find Similar face feature. In this case, we are still detecting the single portrait, and then we're using it to find a similar face in a family portrait. In the family portrait, you can see that the President is looking over his right shoulder; we're seeing a very strong angle to his face, unlike the facing-forward portrait. So let's see if Microsoft Azure can detect which of these people is John F. Kennedy, if any. We're going to use the find_similar method. Again, it's the Face API, not Cognitive Services. It will tell us if it found any similarities, and then we'll be able to determine which of the faces in this image matches the former president's. So let's switch over to PyCharm, run this code, and see the output.
So I'm going to hit the run command in PyCharm for this particular script. What we're going to find when we run this is that it found a face in the original image and gave it a unique face ID, and it found another face in the family portrait, gave it a unique ID, and said that they are a match. And so we have the location of both faces, even though the second face is at a different angle. What we've seen is that we can use the Microsoft Azure Face API to detect the location of faces, learn about attributes such as age, gender, and other details, and also find the same face in other images.

2. Recognize Faces in an Image

So the next step beyond just detecting that a face exists and matching similar faces is actually recognising the individuals in images. The way that we can do that using the Azure Face API is to use what is called a person group, and to train the model based on your own data. Again, we're using the azure.cognitiveservices.vision.face package and importing the FaceClient, but in this particular case, we're also working with models. So we're going to create our FaceClient, and the first thing we're going to do is create what is called a person group. We need to train a machine learning model based on a number of images that are our own. Let's imagine you have a computer full of photos from your family vacations and you want to train this computer model to recognise you, your spouse, and each of your three children. You can do this in code. Basically, you set up this person group, which is your own group, and then you identify yourself, your spouse, and each of your children as a person. This is called a "person group person", and you create one of these people for each individual you want to recognise. The next thing you need to do is train the computer model based on your known images of those people. In this case, we've chosen some images. I guess they are stock models, but we have the same gentleman three times here, and we're going to train this computer model based on these three images, so that if we had another image of him, it would say, "Hey, that's this guy." We're training for the cases of a woman, a man, which we just saw, and a child. So we're basically uploading those JPEGs into the model and setting off the training. The person_group.train command is going to set the model off to go do the work of training. And once you've kicked off the training, you basically wait until it's complete.
So there's a looping operation going on, waiting for the training status to come back as succeeded. Once you have a trained model, you can upload a test image and compare it against your model to see if you can detect who that is. So now we're back to the face detect command; in this case, we're detecting from a stream, not with a URL, and our client already has the trained model in it. Finally, we want to call the identify command. So face.identify, and we're going to figure out whether the person in the test image exists within our trained model. We are basically identifying that person: either it is him or it isn't. And we're going to identify all of the groups in which he may or may not have some type of identity. In code it is a little bit complicated, so I do recommend that you go to the AI-102 files GitHub repository; this is the recognize faces Python file, and you can go through it and get an understanding of what it's trying to do. So we've got this gentleman; we've trained a model; now we're going to send a test image to it to see if we can detect him. We also identified his fictional or not-fictional wife. If I go back here, and the images are also in GitHub, we can see another woman whom we're also training the model to recognise. And then we're going to upload a test image; this is the test image here. We'll see if our computer model can identify the people that we've trained on from this image that it has never seen before. So we're going to switch back over to PyCharm and run the code. Now remember, what it's doing is creating a person group, and now it's going through the images one by one and uploading them. There are three images of the woman, three images of the man, and three images of a child. This is now building the model that you're going to see in a second. It's going to pop up and say "training the person group", and the training status is running.
Now, if you've been involved with Microsoft machine learning at all on the Machine Learning Studio side, you'll know that training a model can actually take some time. It could take 15 minutes, an hour, or several hours, depending on the size of the data. Luckily, this doesn't take that long; I can actually just talk while it's running. And you can see the training status has succeeded. So now we're going to take the test image and compare it against our data set. We've identified two faces in the test image from our data set, and that is, of course, the man and woman that we trained on, with 92 and 93 percent confidence. So this is pretty cool. We're able to take images, train a model based on our own images, and then recognise those same people in images that the Azure Face API has never seen before.

ExamSnap's Microsoft AI-102 Practice Test Questions and Exam Dumps, study guide, and video training course are included in the premium bundle. The exam updates are monitored by industry-leading IT trainers with over 15 years of experience. Microsoft AI-102 Exam Dumps and Practice Test Questions cover all the exam objectives to make sure you pass your exam easily.


