Use VCE Exam Simulator to open VCE files

100% Latest & Updated Microsoft Azure Database DP-300 Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!
DP-300 Premium Bundle
Download Free DP-300 Exam Questions
| File Name | Size | Downloads | Votes |
|---|---|---|---|
| microsoft.pass4sure.dp-300.v2023-09-12.by.matthew.114q.vce | 3.12 MB | 76 | 1 |
| microsoft.selftesttraining.dp-300.v2021-12-02.by.erin.102q.vce | 3.36 MB | 718 | 1 |
| microsoft.passit4sure.dp-300.v2021-07-30.by.angel.105q.vce | 3.1 MB | 812 | 1 |
| microsoft.test-inside.dp-300.v2021-06-08.by.teddy.84q.vce | 1.74 MB | 873 | 1 |
| microsoft.braindumps.dp-300.v2021-02-26.by.elijah.49q.vce | 1.58 MB | 988 | 2 |
Microsoft DP-300 Practice Test Questions, Microsoft DP-300 Exam Dumps
Examsnap's complete exam preparation package covers the Microsoft DP-300 Practice Test Questions and Answers; a study guide and video training course are also included in the premium bundle. Microsoft DP-300 Exam Dumps and Practice Test Questions come in VCE format to provide you with an exam testing environment and boost your confidence.
In this section, we'll be looking at how to configure an Azure SQL database for scale and performance. So let's go to Azure SQL and create a new SQL database. There are three different deployment options. First, a single database, which contains one database. It's a great fit for modern cloud-born applications: if you're starting from scratch and everything is going to live in the cloud, this is where you can store the information. The storage goes from around half a terabyte up to four terabytes depending on how you provision it, unless you go to Hyperscale, where you can go up to 100 terabytes and maybe beyond. You have the option of serverless compute or provisioned compute, whichever one you want, and it's fairly easy to manage. Now, if you've got multiple databases, then you might want to consider an elastic pool; we'll have a look in a couple of videos' time at when you would use an elastic pool, what the requirements are, and when it's a good idea. There is also a fairly new option on the block called a database server, used to manage groups of single databases and elastic pools. I won't actually be getting into that, as I don't believe the DP-300 certification has any requirements about it.

So we're going to have a single database, and I'm going to click Create. You can set up your resource group: this could be an existing resource group, or you can create a new one just here. A resource group is just a container for all of the resources for a particular project, or maybe multiple projects. The advantage of having a resource group is that if you don't want the project any more, you just delete the resource group and it deletes everything associated with the project. You can also give the database a name, and you can see it has to have at least one character and a maximum of 128 characters, and must not contain reserved words or certain special patterns. The database name needs to be unique on the server; I previously set up a server, and this was the dialog box for setting one up, if you remember. You can also set up an elastic pool at this point; as I said, we'll talk about that more in a couple of videos' time.

And you can also set up your storage redundancy: how do you want your backups to be stored? Do you want them to be locally redundant, zone-redundant, or geo-redundant? You can see from the preview labels that these options are relatively new. Geo-redundant backup storage is the default, and the backups will be geo-replicated to the paired region. So what's that? Well, we've got all of these regions across the world, and the paired region is simply a specific partner region. For instance, East US is paired with West US, and vice versa. So what's the reason for this? Suppose there is a major disaster and multiple regions go offline at the same time. Rather than bringing resources up everywhere at once, Microsoft might decide that East US comes up first, because the paired region, West US, will probably have much of the same infrastructure; so they reconstruct one of the pair, followed by the other. The aim is to get one of each pair up to begin with. Nearly everywhere, a region's pair is in the same sort of geography: China pairs with China, France with France, Europe with Europe. There is one that isn't, and that is Brazil South, whose regional pair is South Central US.
You may be wondering why they don't use Brazil Southeast. At the point of recording, that is a very new region; I've recorded previous courses about Brazil when Brazil Southeast wasn't an option, so the only South American region was Brazil South, and it had to have a pair somewhere. So that is where your backup data goes. Now, you can also configure the amount of compute and storage. Compute is the actual working power, and storage is the amount of data that can be retained in the configured database. I suggest you do configure this, because the default is around $400 to $450 a month, and that might be more than you want. So in the next video, we're going to talk about all of these various options that you can see on the screen here.
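The video does all of this through the portal, but the same create-with-backup-redundancy step can be scripted. Here is a minimal T-SQL sketch, run against the logical server's master database; the database name, service objective, and max size are illustrative assumptions, not values from the video:

```sql
-- Create a small General Purpose database and choose where its backups
-- are stored: 'LOCAL', 'ZONE', or 'GEO'. Geo-redundant is the default
-- and is replicated to the paired region discussed above.
CREATE DATABASE [DP300Db]
(
    EDITION = 'GeneralPurpose',
    SERVICE_OBJECTIVE = 'GP_Gen5_2',
    MAXSIZE = 32 GB
)
WITH BACKUP_STORAGE_REDUNDANCY = 'GEO';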
Now, in this video we're going to have a look at the service and compute tiers, and you can see that there are six different tiers you can have. On the DTU side, there's Basic, Standard, and Premium; then there are the vCore-based General Purpose, Business Critical, and Hyperscale tiers. There is a link if you want some idea of what the differences are, but I didn't find it very easy initially, so let's just break this down into vCore and DTU.

With vCore, we can specify the number of vCores, or virtual cores; we can specify the memory; and we can specify the amount and speed of storage. So I can have, as you can see, anywhere from two to 80 vCores. Now, notice how the estimated cost goes up and down: it starts off at about $450 per month for this General Purpose two-core configuration, then you can see it doubles as soon as I get to four cores, and by the time I get to 80 cores, it's around $16,000. You've also got a database maximum size. Notice how it goes up as the vCores go up on the left; the reason is that the maximum size is partly dependent on the number of vCores. It is also dependent on the hardware configuration. The standard hardware configuration nowadays is called Gen5. There are some alternatives, but stick with Gen5 as the balanced memory-and-compute version; there is a version which is more focused on compute rather than balance, but Gen5 is the standard name that you will hear. So with two vCores, we can get up to 1024 GB, but if I go all the way up to 80 vCores, then we can get up to 4096 GB. And you'll notice the amount of log space allocated is directly proportional to the maximum data size; in fact, it's 30%. While you can't necessarily configure them separately, you've also got differences in IOPS (input/output operations per second), in the maximum number of concurrent workers (requests), and in the backup retention as well.

So you can have a maximum of 80 vCores on the Gen5 configuration and a maximum data size of four terabytes. If you want more, then instead of looking at General Purpose, you'll be looking at Hyperscale. Note that changing from Hyperscale to something else is not supported; but if you've got Hyperscale, then you can go up to 100 terabytes as standard, and again up to 80 vCores. Business Critical is the one in the middle: it's for when you need a high transaction rate and high resiliency. As you can see, you can go up to 80 vCores there and up to four terabytes.

So which of these three should you use? Well, General Purpose is scalable compute and storage, and that's for most business workloads. The storage latency, the amount of time it takes to actually retrieve the data, is about five to ten milliseconds; to put that in perspective, that's the same as SQL Server on a virtual machine. However, if you want a higher transaction rate and higher resiliency, then you've got Business Critical, but you can see the difference in the cost: General Purpose starts off at about $450, whereas Business Critical starts at about $1,100 US dollars. So use Business Critical when you need low-latency input and output, not five to ten milliseconds but one to two milliseconds, or when there are frequent communications between the app and the database.
You could also use Business Critical when there's a large number of updates or long-running transactions that modify data. You've got high resiliency, high availability, fast geo-recovery and recovery from failures, as well as advanced data corruption protection. And you also get a free-of-charge secondary read-only replica. So let's say I was in the West US: I can get a free replica in the East US, and then when I wanted to read information, I could go to the East US replica, as opposed to constantly going to the West US database, which is also handling the writes.

Hyperscale is for when you need more than four terabytes, up to around 100 terabytes. The advantage of Hyperscale is that it's the same rate as the standard Azure SQL database: as you can see, the estimated compute cost was between $800 and $900, and storage costs about thirteen cents per gigabyte. Now, that is the vCore model, and as we saw earlier, if I just go back to Business Critical, you can see we can have geo-redundant, zone-redundant, and locally redundant backup storage. The latter two are cheaper, but choose them only if you need a single region's data resiliency; in other words, if you can tolerate something going wrong in a particular region, you're not going to pay extra money to have backups in more than one region. So those are the top three in the vCore model: General Purpose, Business Critical, and Hyperscale.

On the screen, you can see some of the various hardware configurations available in the vCore model. It's interesting to note that there is a direct correlation between the number of vCores that you're allowed and the tempdb maximum data size. Tempdb is where SQL Server stores temporary things, and the correlation is that there is an additional 32 GB tempdb file for every vCore. So when there's one vCore, we have one 32 GB file; when there are two, we have two files totalling 64 GB; and so on up it goes. If we have 14 vCores, then our tempdb maximum data size will be 448 GB. That's the case for the General Purpose tier (you can see two vCores, 64 GB) and the same for the Business Critical tier as well.

You'll also notice that there are increases in memory: again, it goes up in line with the number of vCores, as does the In-Memory OLTP storage size, that is, what it can retain in memory rather than having to go out to an SSD, a solid-state drive. The IOPS (input/output operations per second) also increase, as do the maximum concurrent workers (requests) and the maximum number of logins. Some things do remain constant. But generally, while we saw some limitations on storage earlier at the General Purpose level, when we're talking about tempdb, it goes up; when we're talking about log size, it goes up; and when we're talking about the maximum data size, it goes up as well. So an increase in the number of vCores also increases other things. Now notice the I/O latency, which I referred to earlier: for General Purpose, we have an I/O latency of between five and seven milliseconds for writes and five to ten milliseconds for reads, but for Business Critical, this is reduced to one to two milliseconds.
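As a hedged sketch of the scaling the last two segments describe (the database name [DP300Db] and the target objective are illustrative, not from the video), you can check and change a database's vCore tier in T-SQL as well as in the portal:

```sql
-- What edition and service objective is this database on right now?
SELECT DATABASEPROPERTYEX('DP300Db', 'Edition')          AS edition,
       DATABASEPROPERTYEX('DP300Db', 'ServiceObjective') AS service_objective;

-- Scale to General Purpose, Gen5 hardware, 4 vCores, 1 TB max data size.
ALTER DATABASE [DP300Db]
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_4', MAXSIZE = 1024 GB);
```

To read from the free Business Critical secondary replica, you'd add ApplicationIntent=ReadOnly to the connection string; there's nothing to do in T-SQL itself.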
So if that is the vCore-based purchasing model, what is the DTU-based purchasing model? Well, DTUs, or Database Transaction Units, are bundles of compute, memory, and input/output resources, with a maximum of each per class. So whereas with vCore you could say, "Well, I'm really more interested in the storage than the compute," here with the DTU model you have these packages and you say, "I want this package."

So if I go back to Basic: this is for less demanding workloads. You can see that my maximum data size goes from zero to two gigabytes. The tier itself is just five DTUs, nothing more, nothing less, and I'm not charged based on my maximum data size; you can see the price remains at about $7 per month. That is actually quite a bargain if you just want to have your own database online, and you can still use all the rest of the Azure functionality and features as well.

Now, going up to Standard, this is for more typical performance requirements. Here you can see the models have a standard number of DTUs, but they also have a name next to them: S0, S1, S2, and so on. Now, there are some features that aren't available in the lower models. In Basic, S0, and S1, database files are stored on standard storage hard disk drives (HDDs), which perform much worse than SSDs, the solid-state drives. So consider Basic, S0, or S1 only for development, testing, and infrequently accessed workloads. Later on, we'll be using something called Change Data Capture, or CDC, and this cannot be used while you've got less than the equivalent of one vCore; therefore, you can't use it in Basic, S0, S1, or even S2. You have to go up to S3, which is 100 DTUs, before you can actually use it. Now, you can go all the way up to 3,000 DTUs, but if you're using more than about 300, consider changing to the vCore model: it might reduce costs, and there's no downtime when you're converting. So that's Standard, for typical performance.

If you've got more input/output-intensive workloads, then you might want to have a look at the Premium model. The Premium model, as you can see, starts at about $650, whereas Standard started at $20, and we can take the DTUs all the way up to 4,000 at a cost of about $19,000. Again, you can see there's no separate charge for the maximum data size; it is included, because the DTUs are a bundle of compute, memory, and input/output resources.

If you want to know how many DTUs you might need, there is a website, dtucalculator.azurewebsites.net, which is a good calculator for how much such a thing will cost. For this, you need to run a few traces on your existing on-prem server, and then it will give you some sort of recommendation. So if you're content with standard bundles that are simply preconfigured, go for the DTU model. As I said, it goes all the way down to Basic, and remember that General Purpose starts at around $450; so if you're below that on the DTU model, then you've got something cheaper, as long as it's actually usable for what you want.
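Since the paragraph above mentions the S3 threshold for Change Data Capture, here is a minimal, hedged T-SQL sketch of moving a database to a DTU objective; the database name [DP300Db] is illustrative:

```sql
-- Move a database to Standard S3 (100 DTUs), the lowest DTU tier
-- that supports Change Data Capture, per the discussion above.
ALTER DATABASE [DP300Db]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
```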
Now, you can change your service tier on demand. If I go to my current Azure SQL server, I've got a database on it, but at the server level you can't see anything about the pricing tier; it's a per-database setting, and you can see the pricing tier in the database's Overview section. If I click on that, I can change the database to any of these vCore or DTU tiers. It used to be that you couldn't move into Hyperscale from elsewhere; it looks like they've now solved that, but you still can't move out of Hyperscale. Just one word of warning: I wouldn't change this while you've actually got a long job running. I would pick a point when the database is less likely to be in use and then change it. If you use DMVs, the dynamic management views that we had a look at earlier, then for them to have accurate figures it's possible you may need to flush what's called the Query Store (which we'll be looking at in future sections) before you rescale. If you want to do that, you would issue the stored procedure sp_query_store_flush_db.

So this is the range of options that you have: General Purpose, Business Critical, and Hyperscale in the vCore-based purchasing model, and then Basic, Standard, and Premium in the DTU-based purchasing model.

In terms of tempdb size: for the Basic service level, you've got one tempdb file of about 13.9 GB, and that's the same at the start of Standard, from S0 to S2, while you've got less than one vCore's worth of DTUs. When we get to S3, we start having 32 GB file sizes, and once you progress all the way through to S12, we have twelve tempdb files at about 384 GB in total; that's 32 times twelve. You can see the number of tempdb files is roughly half the service-level number (S4 has two, S6 has three), until you get to S7, where it really jumps. Now, we'll be looking at pools later; there's a similar sort of relationship between the DTUs you can have in a pool, which are called eDTUs, and the number of tempdb files, but in each case the maximum data size per file is 32 GB. And then finally, in the Premium model, we have a maximum size of about 13.9 GB per tempdb file, and we have twelve of them, which gives a tempdb maximum data size of 166.7 GB.
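Both housekeeping items from this segment can be done in T-SQL. The flush procedure (sp_query_store_flush_db) is a real stored procedure, and the tempdb query below is a standard catalog view, so you can verify the file counts and sizes quoted above on whatever tier you're running:

```sql
-- Persist Query Store's in-memory statistics to disk before rescaling,
-- so the DMV figures remain accurate across the tier change.
EXEC sp_query_store_flush_db;

-- Inspect tempdb's data files: how many, and each file's maximum size.
SELECT name,
       max_size * 8 / 1024 AS max_size_mb  -- max_size is in 8 KB pages
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';
```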
Returning to compute and storage, we can see that we are currently at the Basic level, with a very small configuration; we can't really change anything. As we go up through the DTU-based purchasing model, we can change more and more things. Now, when we get to General Purpose, and only General Purpose, we can actually change the compute tier from provisioned to serverless. The advantage of serverless is that you are billed by the second. After the configured period of inactivity, the billing stops, as does the ability to use the database; it restarts when there's any database activity, though there may be a small delay in reconnecting. Not much, to be honest, but there may be a little bit. You can set the auto-pause delay from one hour all the way up to seven days, but no less than one hour.

Now, if you're using provisioned compute, then you do have the option of saving money by using the Azure Hybrid Benefit: if you've already got a SQL Server licence on-prem, then you can use that licence here. The savings vary: this example is in the Central US region, and while the portal says you can save up to 55%, it's only saving here about 35%. And notice what it says at the bottom: your actual savings will vary; however, this may significantly reduce the cost. So, provisioned or serverless for General Purpose: the choice is yours. When you get to Business Critical, you can't do that, and the same for Hyperscale. It only really makes sense for General Purpose anyway, because if the database is business-critical, you need to have access to it all the time.

Now, the next question is: do you want to use a SQL elastic pool? And what is that? Well, let's suppose that you have more than one database. Here we have an example database, and you can see the compute requirements for this particular database: it peaks from 12:00 to 01:00 and then peaks again from 04:00 to 05:00. So we need to have sufficient DTUs, or sufficient vCores, to cover these compute requirements. (These figures are completely made up, by the way.) Now, suppose we have a second database with exactly the same requirements, except the timing is different: it peaks between 01:00 and 02:00, and there's still a peak up to 05:00. Let's add a third database; again the timings differ, so it peaks between 02:00 and 03:00 and between 05:00 and 06:00. And then a fourth database, which, as you can see, peaks between 03:00 and 04:00 and between 05:00 and 06:00. Notice that none of these compute requirements goes above this figure of 20, which, as I said, is purely fictitious. So we could have four databases, each with a maximum compute requirement of 20. But that would not be very good: when we look at an individual database, you can see we are not using much of the compute requirement in most time periods, so a threshold of 20 per database, sized to accommodate the peaks, would waste a lot of money.

So here are these compute requirements again. Here's database one, but let's now put database two on top of it, and you can see we peak near 40. Add database three and database four, and you can see that the total peak is around 52 or 53. So we were previously talking about four databases, each with an allocation of 20; here we've got four databases with a total peak of 52. If we provisioned each database separately, we'd have to provision for four peaks of 20, that is, 4 × 20 = 80.
Here, instead, we can provision for a peak of around 53 or so, and that will be fine for all of the databases. That is what an elastic pool is: it is a pool of resources, and we can create a new pool. So this is my elastic pool here, and we can configure this elastic pool with whatever purchasing model we want, except that we can't choose Hyperscale. You'll notice that when we get to the DTU-based purchasing model, there's a small difference in terminology: it's not DTUs, it's eDTUs, where the "e" stands for elastic. So I could say, for example, 500 eDTUs for the pool, with up to 75 DTUs per individual database; or I could leave the per-database limits open and allow every database access to the full 500 eDTUs as necessary.

So that is what an elastic pool is all about: the ability to provision more efficiently when all of these bumps happen at different times. If they all happened at the same time, I would have four lots of 20 all at once, and I wouldn't actually save money by having an elastic pool. In fact, it might well cost me money in the DTU-based purchasing model, because the unit price for eDTUs in a pool is an extra 50%; at the vCore level, the unit price is the same. So in this video, we've had a look at provisioned and serverless compute, and we've had a look at elastic pools. In the next video, we're going to have a look at some of the other things that we need to consider while provisioning and deploying Azure SQL Database.
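Finally, for reference, moving an existing database into a pool, or onto a serverless objective, can also be scripted. A minimal sketch, assuming a pool named [DP300Pool] already exists and reusing the illustrative database from earlier; note that creating the pool itself is done through the portal, CLI, or an ARM template rather than T-SQL, and the serverless auto-pause delay is likewise configured outside T-SQL:

```sql
-- Move the database into an existing elastic pool so it shares
-- the pool's eDTUs or vCores with its neighbours.
ALTER DATABASE [DP300Db]
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = [DP300Pool]));

-- Or switch it to serverless General Purpose instead:
-- GP_S_Gen5_2 = General Purpose, Serverless, Gen5, max 2 vCores.
ALTER DATABASE [DP300Db]
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_2');
```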
ExamSnap's Microsoft DP-300 Practice Test Questions and Exam Dumps, study guide, and video training course are included in the premium bundle. The exam updates are monitored by industry-leading IT trainers with over 15 years of experience. Microsoft DP-300 Exam Dumps and Practice Test Questions cover all the exam objectives to make sure you pass your exam easily.