Microsoft Data Science DP-100 Exam Dumps, Practice Test Questions

100% Latest & Updated Microsoft Data Science DP-100 Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!

Microsoft DP-100 Premium Bundle
$69.97
$49.99

DP-100 Premium Bundle

  • Premium File: 472 Questions & Answers. Last update: Mar 6, 2024
  • Training Course: 80 Video Lectures
  • Study Guide: 608 Pages
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates


Download Free DP-100 Exam Questions

File Name | Size | Downloads | Votes
microsoft.passguide.dp-100.v2024-02-08.by.jack.176q.vce | 5.11 MB | 85 | 1
microsoft.examcollection.dp-100.v2021-10-14.by.jeremiah.166q.vce | 4.56 MB | 922 | 1
microsoft.actualtests.dp-100.v2021-10-05.by.christopher.141q.vce | 3.95 MB | 921 | 1
microsoft.examcollection.dp-100.v2021-08-30.by.layla.129q.vce | 3.78 MB | 960 | 1
microsoft.selftestengine.dp-100.v2021-05-25.by.santiago.130q.vce | 3.66 MB | 1058 | 1
microsoft.test4prep.dp-100.v2021-02-19.by.louis.129q.vce | 4.4 MB | 1165 | 2

Microsoft DP-100 Practice Test Questions, Microsoft DP-100 Exam Dumps

ExamSnap's complete exam preparation package for the Microsoft DP-100 exam includes Practice Test Questions and Answers, a study guide, and a video training course in the premium bundle. Microsoft DP-100 Exam Dumps and Practice Test Questions come in VCE format to provide you with an exam testing environment and boost your confidence.

Classification

6. Logistic Regression - Build Multi-Class Wine Quality Prediction Model

Hello and welcome. In the previous lectures, we have seen how two-class logistic regression can predict a binary outcome, or an outcome with only two possible values. We also saw the various parameters, interpreted the results, and discussed the confusion matrix. We have also discussed the impact of stratification on the overall accuracy and AUC. Today we will learn how to predict an outcome that can have multiple possible values. For this, we are going to use the Wine Quality data set. As you may know, there are various characteristics or physiochemical properties of wine that may affect its quality. It could be the acidity, the citric acid, the residual sugar, its density, and so on. Researchers from Portugal proposed a data mining approach to predict human wine taste preferences, using a large data set of white and red wines from Portugal. So let's go to the Azure ML Studio and view the data set. You may want to pause the video so that you can upload the data set and create the experiment along with me. All right, let me search for this data set and drag and drop it here. Next, let's visualise the data set. As you can see, there are twelve columns in it, namely fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulphur dioxide, density, pH, and so on. We also have an output variable called quality, which has a score between 0 and 10, but in this data set we only have values from three to eight. As you can see here, for the column quality, we have six unique values. However, the number of records for three, four, seven, and eight is very low. So what am I going to do? The purpose of this lecture is to convert quality into low, average, and high categories. Please remember that this is not necessarily the best approach. You may want to apply additional techniques where the number of records is low; for example, there are oversampling techniques such as SMOTE that we could use. But for the time being, we are going to group the records into low, average, and high quality. All right, so let's use the SQL transformation to do this, which has already been explained in detail during the data transformation section. For now, I'm simply going to copy and paste the SQL script here. We have done this in the past during the data processing section, so let's run it. Alright, this has run successfully, so let's visualise the output. As you can see, it has added a new column called "Wine Category" with values of low, average, and high. Great. Let me close this and take the further steps. Now, we do not need the column quality for prediction purposes. If we keep it, it may become one of the independent variables. Hence, let's remove it from the selection. So I'm going to look for the Select Columns in Dataset module. There it is. Drag and drop it onto the canvas, make the right connections here, and let's launch the column selector. Let's select all the columns except quality and click OK. The next step is to split the data into training and test sets. So drag and drop the Split Data module and let's connect it. All right, let's use 0.6 as the fraction for splitting and a random seed of 123. And let's also do the stratified split on the column Wine Category. Hence, let's launch the column selector and select the column Wine Category. There it is, and let's click on "Okay." Next, we are going to predict multiple outcomes, so let's search for the Multiclass Logistic Regression module. Let's drag and drop it here. Let's keep the default parameters for now.
With a random seed of 123, we now need to train this model. So we take the Train Model module, which we have already seen in the two-class logistic regression, and drag and drop it onto the canvas, making the right connections for the model and for the training data set. We also need to tell this module the outcome variable. So let's launch the column selector, select the column Wine Category, which holds the values of low, average, and high quality wines, and click OK. Next, we need to score it, so let's drag and drop the Score Model module. All right, connect the trained model output and the test data set to the Score Model module, and let's finish with the final step of evaluating it by searching for evaluate and dragging and dropping the Evaluate Model module. Make the right connections here and let's run it. All right, it has finished running; let's visualise the output. The confusion matrix here shows all the values of the wine category. As you can see, it did a very good job of predicting average wines, but not so well at predicting high and low quality wines. One reason could be the number of observations available for training purposes. You can experiment with increasing the split for the training data set, as well as providing lower values for the L1 and L2 regularisation weights, to see what happens. That concludes the section on logistic regression using Azure ML. In this section we learnt what logistic regression is and what the various parameters are in it. We also now know how to interpret the results and the impact of changes to various parameters, as well as the impact of training and test fractions. And finally, we also learned about multiclass logistic regression. That is what we saw in this class. So let's cover a very important topic of classification called the decision tree in the next class. Thank you so much for joining, and until then, have a great time.
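For readers who want to reproduce the idea outside Azure ML Studio, here is a minimal scikit-learn sketch of the same experiment. The 0.6 training fraction, the seed of 123, the stratification on the wine category and the dropped quality column follow the lecture; the file name, the score cut-offs for the three categories and the model defaults are assumptions, since the lecture does not show the SQL script or the module settings.

```python
# A minimal sketch of the wine-category experiment outside Azure ML Studio.
# File name and score cut-offs are assumptions; split settings follow the lecture.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

wine = pd.read_csv("winequality.csv")  # hypothetical file name

# Equivalent of the Apply SQL Transformation step: bucket the 3-8 quality
# score into low / average / high (cut-offs assumed)
wine["wine category"] = pd.cut(
    wine["quality"], bins=[0, 4, 6, 10], labels=["low", "average", "high"]
).astype(str)

# Equivalent of Select Columns in Dataset: drop quality so it is not a feature
X = wine.drop(columns=["quality", "wine category"])
y = wine["wine category"]

# Split Data module: 60/40 stratified split on the wine category, seed 123
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, stratify=y, random_state=123
)

# Multiclass Logistic Regression, Train Model, Score Model, Evaluate Model
model = LogisticRegression(max_iter=1000)  # handles the three classes directly
model.fit(X_train, y_train)
pred = model.predict(X_test)

print(confusion_matrix(y_test, pred, labels=["low", "average", "high"]))
print("accuracy:", accuracy_score(y_test, pred))
```

The confusion matrix printed at the end plays the same role as the Evaluate Model visualisation: it makes it easy to see whether the minority low and high categories are being predicted well.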

7. Decision Tree - What is Decision Tree?

Hello and welcome. I'm sure by now you are comfortable using the Azure ML Studio. We covered logistic regression in the previous section, and in this one, we will learn about various models using decision trees. Let's try to understand what a decision tree is. Well, at a high level, it is one of the supervised learning methods. It's a decision support tool that uses a tree-like graph, or a model of decisions and their possible consequences, which in turn helps us in making predictions. It has various variations, such as Boosted Decision Tree, Decision Forest, Decision Jungle, and so on. We will go through them during the lab. Decision trees and their variations can be used for categorical as well as continuous variables. All right, let's try to understand what a decision tree is and how it works using an example. Let's say you are trying to predict whether a loan application will be approved or not. Now, if you only had one parameter, such as income level, based on which you needed to predict, it would have been easy. However, you see variations there as well: not all applications with a medium income level have been approved. Let's bring in another feature called Credit Score and see if that helps us. Well, here we also see variations in the approval process. The income level and credit score may have a cumulative impact on the overall approval. But what if we add another one called Employment Type? And it can go on and on. Hence, it will be very difficult to simply analyse the applications across so many variables. Imagine if you had 40 or 50 such variables, and all of them gave you different types of results, each having a different impact. It would be almost impossible for a human being to predict. If we assume this is the historic data and a new record comes in, will we be able to predict it? Well, it would take a huge amount of time, as we mentioned earlier. So how can we answer that efficiently using machines? Well, that's where something like a decision tree helps us. Let's try to build a decision tree for this data set. Okay, so here we go. We start with the income level and we ask ourselves a question about every parameter. So, depending upon the income level, will the loan be approved or not? We have three income levels: high, low, and medium. So we divide our observations according to the income level and, as you can see, all applications with high income levels were approved. That was pretty easy. Remember, the goal of the decision tree in classification is to come up with pure subsets. And what do we mean by a pure subset? A pure subset is one where you have only one outcome. And in this case, we have all observations with only one outcome, that is, yes. So we do not split this particular node further. Similarly, for low income, we see that there is only one outcome. Hence, we have reached a pure subset and we don't split it further. However, if you look at the medium income level, there are some approved and some not approved applications, so this is clearly not a pure subset, and we decide to split it further. So let's split it on the credit score. We have two values of credit score here: low and high. So let's see what the observations look like when we split them on the credit score. Okay, as you can see, applications with a medium income level and a high credit score always get approved. So we have reached a pure subset here. However, with a low credit score and a medium income level, we still have not reached a pure subset.
So we decide to split it further. Let us now divide it by the third variable, employment type. All right? So we have two values for employment type: salaried and self-employed. Now, when we split the data set, we do get pure subsets. So whenever a new application comes in, we can traverse through these nodes and predict whether the loan will be approved or not. This is precisely how decision trees work. As you can see, it looks like an inverted tree, hence the name decision tree. Let's now get ourselves familiar with some of the common terms for decision trees. Well, the root node is the first node, which represents the entire sample data set. The process of dividing a node into two or more sub-nodes is called "splitting." When a sub-node splits into further sub-nodes, it is called a decision node. All right? And if we cannot or do not want to split a node further, it is called a leaf or a terminal node. I hope that's clear. And the last one that we need to remember is a subsection of this entire tree: when a subsection has one or more decision nodes and two or more leaves, it is called a branch or a sub-tree. And I hope that explains what a decision tree is. In the next lecture, we will try to understand the additional concepts of bagging and boosting decision trees before we start creating our Azure machine learning models. Thank you so much for joining me on this one, and I will see you in the next class. Until then, have a great time.
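To make the pure-subset idea concrete, here is a toy scikit-learn sketch of the loan-approval example described above. The handful of records, the column names and the encoding choice are made up for illustration; they are not taken from any data set used in the course.

```python
# Toy decision tree for the loan-approval example; the data is invented for illustration.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "income_level": ["high", "high", "low", "low", "medium", "medium", "medium", "medium"],
    "credit_score": ["low", "high", "low", "high", "high", "high", "low", "low"],
    "employment":   ["salaried", "self", "salaried", "self", "salaried", "self", "salaried", "self"],
    "approved":     ["yes", "yes", "no", "no", "yes", "yes", "yes", "no"],
})

features = ["income_level", "credit_score", "employment"]
X = OrdinalEncoder().fit_transform(data[features])
y = data["approved"]

# The tree keeps splitting until every leaf is a pure subset, as in the lecture
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=features))
```

Printing the fitted tree shows the same inverted-tree structure of root node, decision nodes and leaves that the lecture describes.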

8. Decision Tree - Ensemble Learning - Bagging and Boosting

Hello and welcome. In this lecture we are going to cover what is known as ensemble learning. Ensemble modelling is a very powerful way to improve the performance of your model. Before we get into the technical details, let's try to understand ensemble learning using a day-to-day example, just to make things a little bit clearer. Let's say you have decided to purchase a house. You went to see the seller, and he gave you a price. But before you make a final decision, some doubts and questions come up in your mind, such as: Is this price fair? Is the location appropriate? Will I get a good appreciation in price? Is this the right neighbourhood for me? What about the construction quality, and how can I be sure about it? And so on. So now you have a real dilemma, as you are not an expert in many of these things. So what do you do? You go and ask the experts, or those with the knowledge, to get an answer. Well, you may ask a broker or visit a real estate portal to check the fair price and price appreciation. You may ask a friend or colleague who stays nearby, or has stayed in the neighbourhood in the past, about the construction quality. You may even get an inspection done by an architect for quality checks and structural defects. There is no single expert here who can answer all the questions, but from multiple experts you get all of them answered. Perhaps not all of the answers were to your satisfaction. So what do you do? Well, either you go by the majority, or you take a weighted average in terms of which questions are more important to you and accordingly take a call. Or you can weigh the different types of decision makers by how accurate their predictions have been in the past. Ensemble learning works in exactly the same way. So what is ensemble learning? As we know by now, all algorithms have some form of error, and our aim is to minimise the errors as much as possible. We also know that collective wisdom is higher than individual intelligence. For example, we often have a panel of judges for many crucial judgments rather than relying on the single best judge in the country. In ensemble learning, we generate a group of base learners and combine their results to get higher accuracy. Each of these base learners can use different parameters, different sequences, as well as different training sets. There are two major ensemble learning methods, bagging and boosting, which we will cover in the subsequent slides. All right, so let's have a look at what we mean by bagging. In bagging, we build various models in parallel using bags of data; that is, we split the data into bags, and each of the base learners is trained on one of them. Then all the models vote to give a final prediction. In this particular case, we have four base learners or models, each of which predicts an outcome. Here, for a particular observation, we have three Ys and one N. As a result, the final prediction by majority vote will be a Y. Let's now see what we mean by boosting. In this type of ensemble learning, we train decision trees in a sequence. Each tree learns from the previous tree by focusing on the incorrectly predicted observations, and we build a new model with a higher weight for the incorrect observations from the previous sequence. Let's try to understand that with an example. Let's say we have a data set with observations spread in this particular fashion. So the first learner would try to create a boundary somewhere over here.
Now it has identified these four triangles and these four circles correctly. However, it made a mistake in identifying these observations. So in the next sequence, we increase the weight for these four observations and create a new decision boundary to identify them. Now we have these two observations identified correctly, but not the two circles. So what do we do? We go for another sequence and create a decision boundary that will identify these two observations correctly. All right, so we have run the decision tree three times and tried to improve it in every sequence. Now is the time to combine, or ensemble, what we have learned. And there we have our final decision tree that can identify all the triangles and circles correctly. That is nothing but boosting. All right, so in this lecture we covered what ensemble learning is and what we mean by the bagging and boosting methods. That brings us to the end of this lecture on ensemble learning. In the next lecture, let's build a boosted decision tree using Azure ML Studio. Thank you so much for joining me in this class, and I'll see you in the next one. Until then, have a great time.
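As a rough illustration of the two methods, here is a small scikit-learn sketch that compares a bagged ensemble and a boosted ensemble of decision trees on a synthetic data set. The data set, the number of estimators and the seeds are illustrative assumptions only, not anything prescribed by the course.

```python
# Bagging vs. boosting on synthetic data; all settings here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Bagging: many trees trained in parallel on bootstrap "bags" of the data,
# combined by majority vote
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=42)

# Boosting: trees trained in sequence, each one giving more weight to the
# observations the previous trees got wrong
boosting = AdaBoostClassifier(n_estimators=50, random_state=42)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Both ensembles typically outperform a single unconstrained tree on this kind of data, which is exactly the "collective wisdom" argument made above.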

9. Decision Tree - Parameters - Two Class Boosted Decision Tree

Hi. Before we start working and creating models using boosted decision trees, let's see the parameters they require. In the lecture on logistic regression parameters, we saw what the Create trainer mode option looks like with a single parameter and with a parameter range. It performs exactly the same function here as well. The random number seed and the allow unknown categorical values options are also exactly the same as we discussed in the earlier lecture. Let's see what the maximum number of leaves per tree is. Well, as we have seen, the leaves are nothing but the terminal nodes, which cannot be divided further. This parameter specifies the maximum limit for such terminal nodes or leaves that we can create in a model. Too few leaves and you risk underfitting, while too many can lead to overfitting of your model. The next parameter is the minimum number of samples per leaf. As we have seen, we go on splitting the nodes until we find a pure subset. This can become a problem, and a tree may become huge, if we do not limit how many samples we should have per leaf. This parameter also determines how the split of a node should occur. In this example, if we had set the minimum number of samples per leaf to three, we would have stopped splitting further despite the fact that we did not have a pure subset. Whereas if it was less than three, let's say one, we would have gone ahead, split it further, and created these additional branches. However, if we had specified two, again it would not have split, because this particular node after splitting would have had only one sample. I hope that makes it clear what this parameter is all about. The next parameter a boosted decision tree requires is the learning rate. We saw in the previous lecture on logistic regression how a model converges on the optimum solution with the aim of minimising errors or costs. However, at what rate should our model descend so that it reaches convergence quickly, as well as with the best possible accuracy? The learning rate defines the step size the model should take while it is trying to find the bottom, or local minimum. A larger step size means it will reach convergence faster, but it may not end up very close to the bottom; it might stop somewhere here, as it will converge to this particular point. That also means you may have a less accurate model if the learning rate is large. With a smaller learning rate, the step size is small. Hence, it will take more time, or a greater number of steps, to reach the bottom of the local minimum. However, it will be much closer to the bottom. So a smaller step size takes more time to reach convergence, but it is much more accurate. I hope this clarifies how to choose the size of your learning rate. It's very difficult to come up with the right combination of all of these parameters, and hence we use the parameter range option with the Tune Model Hyperparameters module; more on that in the respective lecture. For now, let's try to understand another parameter, which is the number of trees constructed. If you recall from the lecture on ensemble learning, boosting is one of the ensemble learning methods that trains decision trees in a sequence. So we go on building multiple trees, and each tree learns from the previous tree by focusing on the incorrect observations. Finally, we get a model that's much more accurate than a single decision tree.
The number of trees constructed parameter specifies the limit of how many trees should be built during the boosting process. Please remember that the algorithm checks the limits of each of these parameters while constructing the trees, and even if just one of the parameters reaches its limit, it stops. I hope that explains the parameters required for the two-class boosted decision tree. In the next lecture, let's create an experiment using this module. Thank you so much for joining me on this one, and enjoy your time.
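To make these four knobs concrete, here is a rough mapping onto scikit-learn's gradient boosting classifier. This is not the same implementation as Azure ML's boosted decision tree module, only an analogous one, and the values shown are illustrative assumptions rather than settings taken from the lecture.

```python
# Analogues of the Two-Class Boosted Decision Tree parameters in scikit-learn.
# The values are illustrative assumptions, not recommendations from the lecture.
from sklearn.ensemble import GradientBoostingClassifier

model = GradientBoostingClassifier(
    max_leaf_nodes=20,     # maximum number of leaves per tree
    min_samples_leaf=10,   # minimum number of samples per leaf node
    learning_rate=0.2,     # step size taken towards the minimum at each boosting step
    n_estimators=100,      # number of trees constructed in the boosting sequence
    random_state=123,      # random number seed
)
```

Raising max_leaf_nodes or n_estimators generally trades longer training time for a better fit, which is the same trade-off described for the Azure module.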

10. Two-Class Boosted Decision Tree - Build Bank Telemarketing Prediction

Hello and welcome. In the previous lectures, we saw what a decision tree is and what ensemble learning is, along with the two most commonly used ensemble methods, boosting and bagging. Today we are going to build a model based on the two-class boosted decision tree. But before we do that, let's first try to understand the business problem that we are going to solve today. The data here is related to the direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls, and often more than one contact with the same client was required. This was done to find out if the product would be subscribed to or not. The classification goal is to predict if a client will subscribe to a term deposit. So this is a supervised learning problem, and it is of the two-class classification type. There are various features in this data set, and we will go through the details when we visualise it in Azure ML. The data set is also publicly available and can be found on the UCI website. You can search for it as the "bank marketing data set" of UCI, and it should appear among the first few links. In the previous lecture, we saw what boosting is and how it ensembles the results. Boosted decision trees are based on this boosting method of ensemble learning. That means the next tree corrects for the errors of the previous tree, and predictions are based on the entire ensemble of trees. This is among the easiest methods with which to get top performance. However, as it builds hundreds of trees, it is very memory-intensive. In fact, the current implementation holds everything in memory, so it may not be suitable for very large data sets. All right, let's go to the Azure ML Studio and build our first boosted decision tree model. Here I am, and I have already uploaded the data set as well as dragged and dropped it onto the canvas. So let's visualise the data set and try to understand all these variables one by one. It is often worth spending some time analysing and visualising the data, as many patterns, as well as the quality of the data, can be seen. The first column contains the customer's age. It is a numeric feature and does not have any missing values. Great. The second feature is job, which is a string feature and has twelve unique values, such as management, technician, blue collar, and so on. It also does not have any missing values, and that does help us. The next one is the marital column, also a string-based feature with four distinct values, followed by education with eight unique values. The default column indicates whether the client has credit in default, so it contains only the values yes, no, and unknown. You can go through every column, or you can also get the bank-names text file from the UCI website to understand the columns and their values. Alright, let me close this first. Because we do not have any missing values, we are going to jump straight to the Split Data module. Let me search for it and drag and drop it here. Provide the connections and let's do a 70/30 split, so our split fraction will be 0.7. Let the random seed be 123, and let's do the stratified split on the column y. So let's launch the column selector, select column y, and click okay. As the outcome here is binary, that is, yes or no, we will select a two-class model. So let's now apply the Two-Class Boosted Decision Tree model to this data set. Let me search for the Two-Class Boosted Decision Tree module.
There it is, and I am going to drag and drop it onto the canvas. Let's look at the various parameters it accepts. I hope you recall the concepts from our first lecture on what a decision tree is; if you do, then understanding these parameters will not be difficult at all. As with previous models, it asks for the trainer mode, and we are going to continue with the single parameter mode. Next is the maximum number of leaves we want per tree. As you know, leaves, or terminal nodes, are the nodes that we cannot or do not want to split further. By increasing this value, you potentially increase the size of the tree and get better precision. The better precision here comes at the risk of overfitting and a longer training time. The minimum number of samples per leaf node indicates the number of cases or observations required to create any terminal node or leaf in a tree. By increasing this value, you raise the threshold for creating new rules. For example, with a value of five, the training data would have to contain at least five cases that meet the same conditions before a split can be made. All right. The learning rate determines how fast or slowly the learner converges on the optimal solution. If the step size is too big, you might overshoot the optimal solution, and if the step size is too small, training may take longer to arrive at the best solution. The number of trees constructed indicates the total number of decision trees to be created. By creating more decision trees, you can potentially get better coverage, but the training time will increase. We already know what the random number seed is, and we specify 123 there. Then we check the option to allow unknown categorical values. Our decision tree model has now been set up. Let's first train this model using the Train Model module. All right. Let's also score this model on the data set, making sure that at every step we have all the right connections. Let's also add the Evaluate Model module so that we have everything ready. There it is, and with the right connections made, we are ready to run it. Believe me, it may take some time depending on which region you are running it in. I assume you have followed along and are now ready to run your first two-class boosted decision tree. So get yourself a cup of coffee and pause the video while it runs. All right, all the steps have completed successfully, so let's now visualise the results. And congratulations! We have just achieved an AUC of more than 0.9, and our accuracy is also more than 90%. You should know these terms by now. If you are still not clear about how to interpret these results, I suggest you go through the class on understanding the results, where we have explained this in great detail. That concludes our session on the two-class boosted decision tree. In this class, we used the bank's telemarketing data and made predictions with very high accuracy. And what did we predict? We predicted whether the prospect would subscribe to the term deposit or not. In the next class, we will cover the two-class decision forest and try to predict the same outcome as in this class. So I'll see you in the next class, and enjoy your time.
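For readers who want to try the same experiment outside the Studio, here is a hedged scikit-learn sketch on the public UCI bank marketing data. The 70/30 stratified split on column y and the seed of 123 follow the lecture; the file name, the one-hot encoding step and the use of GradientBoostingClassifier as a stand-in for the Azure module are assumptions.

```python
# Two-class boosted trees on the UCI bank marketing data; the file name, the
# encoding and the choice of GradientBoostingClassifier are assumptions, while
# the split settings follow the lecture.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

bank = pd.read_csv("bank-additional-full.csv", sep=";")  # UCI file uses ';' separators
X = pd.get_dummies(bank.drop(columns=["y"]))             # one-hot encode the string features
y = (bank["y"] == "yes").astype(int)

# 70/30 stratified split on the outcome column y, random seed 123
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=123
)

model = GradientBoostingClassifier(random_state=123)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, proba))
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The exact numbers will differ from the Studio run, but the shape of the experiment (stratified split, boosted trees, then AUC and accuracy) is the same.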

ExamSnap's Microsoft DP-100 Practice Test Questions and Exam Dumps, study guide, and video training course are included in the premium bundle. Exam updates are monitored by industry-leading IT trainers with over 15 years of experience, and the Microsoft DP-100 Exam Dumps and Practice Test Questions cover all the exam objectives to make sure you pass your exam easily.

Comments (0)

Add Comment

Please post your comments about Microsoft Exams. Don't share your email address asking for DP-100 braindumps or DP-100 exam pdf files.

Add Comment

Purchase Individually

  • DP-100 Premium File: 472 Q&A. $43.99 $39.99
  • DP-100 Training Course: 80 Lectures. $16.49 $14.99
  • DP-100 Study Guide: 608 Pages. $16.49 $14.99



Download Free Demo of VCE Exam Simulator

Experience Avanset VCE Exam Simulator for yourself.

Simply submit your e-mail address below to get started with our interactive software demo of your free trial.

Free Demo Limits: In the demo version, you will be able to access only the first 5 questions from the exam.