CompTIA Cloud+ CV0-003 – Domain 4.0 Management

  1. Storage Management

Let’s talk about storage. Storage is one of those things in the cloud that you either get or you don’t, and you’ll figure out quickly which camp you’re in, especially if your cloud bill is increasing by 30% every quarter and you have no idea why. When it comes to storage, you must understand that there are various approaches, use cases, costing methods, and technologies to consider. So with that said, just be aware that you’re going to need to understand the right type of storage for the right situation. Take object storage, for example: with Google, Cloud Storage is your object storage service. It may be a good fit for your staging areas, for your big data services, for ingesting data, or perhaps even for your production data if you don’t have tight RPO and RTO requirements.

Now, let’s say you need SAN storage or DAS (direct-attached) storage instead. Again, match the right solution to the right scenario. For example, if you want to spin up a VM and have a local SSD drive attached directly to it, you can do that with a lot of the providers. But realise that the use case needs to match up with the cost you can accept. I like to think of storage as a puzzle in the cloud because there’s so much more to it than the high-level picture. Many people believe it is simple, but it is not, especially once compliance requirements, performance requirements, and costing models are factored in. You’ve also got other technologies that can enhance your applications, and other capabilities that could help you reduce costs. The question is, do you see storage as more than just a place to store data? Let’s go ahead and talk about different storage types. When it comes to cloud storage, just be aware that not everything is created equal.

Base your decision on several factors: cost, features such as tiering or thin provisioning, replication, and compliance requirements. For example, if you have HIPAA, SOX, or PCI data, be aware that it will likely sit on a different storage tier than some of your other production data. You may not need to access that data very often, so it may be possible to keep it in an archive tier instead of on an SSD drive. Whatever the use case is, know it. High availability: do you want that data fully redundant? Do you want that data clustered? Do you want it to be as efficient and accessible as possible? Again, we need to know the use case. Do you use Coldline? Do you use Nearline? Whatever the situation is, just remember that cost typically goes up with performance, and it goes up with availability as well. IOPS and throughput could be a factor too. Many cloud providers offer guaranteed-IOPS storage, and some provide a burst of IOPS if needed; with GCP, for example, certain tiers can burst thousands of additional IOPS. If you don’t want to pay additional fees, use the right platform for the right purpose. Security is another big deal too, right? Again, look at the details.

We’ll talk more about storage security in the security module. Storage falls into primary and secondary storage. Primary storage is your production storage, of course; secondary storage is where you keep your archives and backups. Typically, we have different types of storage, and remember that cost generally goes up with higher performance requirements. There may not always be a direct correlation, but generally there is. Object storage is data that’s stored without a hierarchy. In other words, there’s no file structure. If you use Google Cloud Storage and expect a file structure like on a NAS, you’ll need a third-party tool to accomplish that. File storage is data with hierarchical identifiers, of course. And block storage is essentially raw sequences of bytes in fixed block sizes; this is your SAN storage. Generally, use the right type of storage for the right use case. If you have stringent performance requirements, you may need block storage, such as Amazon EBS. High availability, business continuity, replication, clustering, provisioning, dedup, and compression are all possible storage features. Each provider supports them differently, and the costs differ significantly as well. Once again, if you use cloud storage, great; but if you also need that storage replicated between regions, you could expect the cost to go up, right? Just be aware of that. So on this exam, I want you to be aware of some of the cost considerations. Storage costs money; it is one of your top costs in the cloud, along with virtual machines.

Typically, you’ll want to understand what could drive the cost up in your cloud organization, and again, I won’t read those to you. There’s one term I want to make sure you understand, and I’m sure most of you have taken Cloud Essentials or other comparable courses. You’ve probably seen LUN masking before, but you’ll probably see it again on this test, so that’s why I’m covering it. LUN masking is an authorization process that makes a LUN available to some hosts and unavailable to others. It’s basically a level of volume security. On the exam, you’ll likely get a question that asks you about the use case for LUN masking. Or perhaps they’ll give you a scenario; I like to call them the paragraph scenarios, where you’ll need to solve the problem. What would you use to solve it? Could it be LUN masking? Could it be SAN zoning? Could it be ACLs, or whatever the other answer choices are? So just be aware: you want to know the use case for LUN masking versus LUN zoning. LUN zoning, again, is a method of configuring your storage fabric so that the appropriate devices have access to the appropriate LUNs.

Availability zones. So what’s an availability zone? This exam does not specifically test you on Amazon, Google, Microsoft, Rackspace, Bluemix, and so on. However, it will test you on areas that are, I guess, generally accepted across the providers. So even though a question won’t say it’s AWS, it will sort of infer that it’s AWS, if you can appreciate what I’m saying. You’ll need to know that an availability zone is essentially an isolated data centre within an AWS region, used by EC2. As long as you know that it’s an AWS data center, you’re in good shape. One of the AWS specifics is what’s called “elastic” IP addresses. These are your public IP addresses, and you map them between the different resources; this is essentially how you will handle networking in AWS. With replication, as a managed service, you will essentially duplicate data, typically across a SAN, although it doesn’t have to be over a SAN. But just for this definition, I want you to be aware of that.

For example, you could replicate object storage as well, and so on and so forth. Replication could be synchronous or asynchronous, and we’ll discuss the differences between regional, multiregional, and cross-regional replication too. Let’s make sure you understand the difference between synchronous and asynchronous. One of the things you may see on the exam is a test of your knowledge about the right type of replication to use. Synchronous replication writes the data to both locations at the same time. You’ll use synchronous replication for the highest availability and the lowest recovery point objective; whether that ends up being 5 minutes or 15 minutes really depends on the vendor. My opinion is that if you need really tight RPO or RTO granularity, you have to use synchronous replication. If you’re migrating or replicating production data, you’re probably going to want to use synchronous, especially if you have a clustered approach where one site may fail over to another. This is very expensive: you will of course need to pay for the high-quality links to do it, as well as the additional services from the different vendors, which is why it’s typically used only in higher-end organisations. Asynchronous replication is where you replicate data with a delay; this is more of a journaled approach. Some of the vendors that support both of these would be EMC and Hitachi, for example. Now, when we talk about the cloud generally, you need to look at the right use case between, say, AWS and Google; they’re going to support these in different ways. Storage replication: there are different ways to replicate data. In the cloud, you could replicate regionally, multiregionally, or cross-regionally. Again, the question is how accessible and geographically dispersed you require that data to be.
Now, regional: let’s say, for example, you want to replicate within Northern Virginia. Regional means replicating an instance from one zone to another zone in AWS. It’s likely going to be nearby in the same geographic area; perhaps it’s on the other side of the data center, perhaps it’s the next facility over. Whatever it is, it’s a separate instance that’s available. Multiregional is where you typically replicate from one region to another. It could be anywhere from Northern Virginia to, say, California, or wherever the regions are.
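The synchronous-versus-asynchronous distinction above can be sketched in a few lines of code. This is a toy model, not how any cloud provider actually implements replication: the synchronous write returns only after every replica holds the data (tightest RPO), while the asynchronous write returns immediately and a background worker drains a journal-style queue with a delay.

```python
import queue
import threading

class Replicator:
    """Toy model contrasting synchronous and asynchronous replication."""

    def __init__(self, replicas):
        self.replicas = replicas          # list of dicts standing in for replica stores
        self.journal = queue.Queue()      # pending asynchronous writes ("journal")
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def write_sync(self, key, value):
        # Synchronous: every replica is updated before the call returns,
        # so a failover at any moment loses nothing (lowest RPO, highest cost).
        for replica in self.replicas:
            replica[key] = value

    def write_async(self, key, value):
        # Asynchronous: return immediately; replicas catch up with a delay,
        # so the RPO is however far behind the journal is.
        self.journal.put((key, value))

    def _drain(self):
        while True:
            key, value = self.journal.get()
            for replica in self.replicas:
                replica[key] = value
            self.journal.task_done()

    def flush(self):
        # Wait for the journal to empty (replicas fully caught up).
        self.journal.join()
```

After `write_sync`, every replica is consistent immediately; after `write_async`, the replicas are only guaranteed consistent once the journal has drained, which is exactly the recovery-point gap the lecture describes.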

Cross-regional is a little different because you’re typically going between geographies, and this is sometimes known in the storage world as “multi-tiered replication.” When it comes to storage technologies, we have deduplication and compression. Once again, I’m not here to explain the basics to you, but make sure you know what dedupe is and what compression is. For example, there was a question that asked whether you dedupe before compressing or compress before deduping when replicating over a link; it was very similar to that. I won’t tell you exactly how the question was structured, but it was similar to that approach. You want to understand that compression is typically used in situations where bandwidth is tight, and you’ll also want to use it on the right types of files, because every type of file benefits differently from compression: a JPEG is going to behave very differently than a Word document or a plain text file. Dedupe, again, is going to remove duplicate instances of the same data. So understand the use cases of compression and dedupe. Cloning. What’s a clone? Well, good question. This is a copy; you’re making a copy. One of the things I want you to be aware of with cloning is that there is typically a parent-child relationship.
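To make the dedupe-versus-compression distinction concrete, here is a minimal sketch using only the standard library. The chunk size and sample data are arbitrary illustrations: dedupe collapses identical chunks by hashing them, while compression shrinks each stream, and how well either works depends entirely on the data, just as the lecture says.

```python
import hashlib
import zlib

def dedupe_chunks(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks, storing each unique chunk only once.

    Returns (store, recipe): store maps a chunk's SHA-256 digest to its bytes;
    recipe is the ordered list of digests needed to rebuild the original data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks are stored once
        recipe.append(digest)
    return store, recipe

def compression_ratio(data: bytes) -> float:
    """Original size divided by zlib-compressed size (higher = compresses better)."""
    return len(data) / len(zlib.compress(data))
```

Highly repetitive data (logs, database exports) shows a large win from both techniques, while already-compressed formats such as JPEG gain almost nothing, which is why you apply dedupe and compression selectively, per data type.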

Generally, whether you want to recover from a parent or a child depends on the situation. So what do I mean by that? When we talk about snapshots and cloning more later in the course, you’ll understand that the parent is essentially the top copy, and a child is a sub-copy. So, for example, you’ll take a clone of a resource, and then you’ll make other clones from that clone. Just be aware of that, okay? You also want to be aware that there are different storage types; once again, we talked about these, and I just want to make sure you know the right storage type for the right cloud use case. Here’s an exam tip: LUN masking is a process that makes a LUN available to some hosts while making it unavailable to others. Don’t get confused between zoning and masking; this is a very common situation in which people mix the two up. Make sure you understand the difference.

  2. Storage Performance

Let’s go ahead and talk about storage performance. Again, when it comes to storage, there are numerous factors that can influence performance. Let’s discuss a few of the areas that could certainly be a performance concern. One thing to keep in mind is that storage comes in a variety of forms and pricing methods. Once again, object storage has different options, just like file and SAN storage. Block storage performance and cost are usually directly proportional: once again, the higher the performance, the higher the cost. Generally, you want to use the right solution to provide the right service level. Make sure that the SLA correlates to what you’re expecting from a service-level performance standpoint, and we’ll talk more about KPIs, for example, in other modules.

You want to enable features only as needed. For example, if you don’t need replication or snapshots enabled, don’t enable them. Those could definitely affect your cost, for sure, but they could affect your performance as well. Don’t use resources that aren’t needed. When it comes to the basics, one of the things you may see on the exam is IOPS. IOPS is the number of I/O operations per second: the amount of reading and writing that can be done in one second’s time. Average I/O size times IOPS gives you the throughput. Just keep in mind that most cloud providers do have some kind of burst capability around IOPS. There are also IOPS templates that you can use to guarantee specific levels of performance. But again, you need to know what you really need versus what you think you need; this is where, again, cost and performance are relative. As for throughput, this is essentially the maximum amount of data delivered, expressed in megabytes per second. Throughput in megabytes per second is calculated by multiplying average I/O size by IOPS.

You don’t need to know the formulas for this exam, but we just want you to be aware of how you would figure them out. One thing you may want to understand, though, is that the I/O size is the transfer size. For example, if you send a bunch of 4 KB blocks versus 32 KB blocks, what do you think is going to take more work? Also be aware that the chunk size — and again, every vendor has a slightly different take on this — is in reality the way the packets are chunked up and sent over the cloud services, the network, and so on. That could definitely have an impact on your performance; databases, for example, are very dependent on some of these factors. Be aware of what you need. Once again, we’ve seen this chart before: make sure that the costs are what you’re expecting for the performance that you need. Some of the features we spoke about in the previous module would be high availability, business continuity, replication, clustering, provisioning, dedup, and compression. I won’t go ahead and explain all of this again; you should remember some of it, hopefully. But the main point I want to get across here is to make sure you understand how storage features could impact performance. That is a given. Okay. When it comes to cost, just be aware that cost could also drive your performance as well. Again, if you have compliance requirements, do you need to have that data on SSD drives, or can you get away with lower-end spinning disks like SAS or SATA?
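The IOPS/throughput relationship described above is simple enough to express directly. This is a minimal sketch of the standard formula (throughput = IOPS × average I/O size); the 3,000-IOPS and 16 KB figures in the example are illustrative numbers, not any provider’s quoted tier.

```python
def throughput_mb_per_s(iops: float, avg_io_size_kb: float) -> float:
    """Throughput (MB/s) = IOPS x average I/O size (using 1 MB = 1024 KB)."""
    return iops * avg_io_size_kb / 1024

def iops_from_throughput(mb_per_s: float, avg_io_size_kb: float) -> float:
    """Invert the formula to recover IOPS from a measured throughput."""
    return mb_per_s * 1024 / avg_io_size_kb

# Illustrative numbers: 3,000 IOPS at a 16 KB average I/O size
# works out to 46.875 MB/s of throughput.
print(throughput_mb_per_s(3000, 16))
```

Note how the same IOPS figure yields very different throughput at 4 KB versus 32 KB I/O sizes — which is exactly why the transfer size matters when comparing storage tiers.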

Right. Other cost considerations, such as those mentioned earlier, can also have an impact. Once again, dedupe, compression, and encryption can certainly affect storage performance. One thing I recommend before you enable anything in a storage environment is to validate whether you really need any of these features. They’re nice to have. Compression is good, especially for specific data types; files that compress easily can benefit from it. Encryption is overhead — that’s the way I like to explain it. It provides additional layers of security and privacy, but there’s a cost to that, not only from a price perspective but from a performance perspective as well. RAID can definitely have an impact too. Be aware that the different RAID levels can affect your performance and cost. I like to map RAID levels to availability levels, but also to performance levels. One factor that I think customers have traditionally overlooked: it’s nice to have the benefits of RAID 5, but do you need the benefits of RAID 5 for archives? Maybe not. It really depends.

Maybe you get away with RAID 6; who knows? Once again, I’m not here to teach you RAID — that knowledge should be expected if you’re taking this course. I do hope, though, that you understand the differences between RAID 0 and RAID 5, or RAID 1 and RAID 5. The Cloud+ exam will test you only to the extent that the objectives cover RAID; I didn’t see any RAID questions myself, but I’m covering it because I want to make sure all the objectives are addressed. Storage tiering, now, is a great way to address different performance requirements. Most storage arrays include storage tiering as an optional feature, and a lot of the cloud providers offer some of the same services. However, storage tiering in the cloud differs from what you would typically do in a private cloud, and those on private clouds will almost certainly have some form of storage tiering, whether that’s an EMC solution, Hitachi, or whatever it is. This may be the right way to address different performance requirements. Remember: average I/O size times IOPS equals throughput. Know the difference between throughput and IOPS. I did see one question that asked about IOPS; I didn’t see any on throughput, but that doesn’t mean your test will be exactly the same as mine. The objectives clearly state these areas, so make sure you know the difference. Also, understand the way throughput is expressed, which is megabytes per second.
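Since the RAID discussion above compares levels by availability, performance, and cost, here is a small sketch of the usable-capacity side of that trade-off. It assumes identical-size disks and covers only the levels the lecture names (RAID 0, 1, 5, 6); real arrays add hot spares, nested levels, and other wrinkles.

```python
def usable_capacity_gb(level: int, disks: int, disk_size_gb: float) -> float:
    """Usable capacity for common RAID levels, assuming identical-size disks.

    RAID 0: striping, no redundancy   -> n * size
    RAID 1: mirroring (2-disk mirror) -> size
    RAID 5: striping + single parity  -> (n - 1) * size, survives 1 disk loss
    RAID 6: striping + dual parity    -> (n - 2) * size, survives 2 disk losses
    """
    if level == 0:
        return disks * disk_size_gb
    if level == 1:
        return disk_size_gb
    if level == 5:
        if disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disks - 1) * disk_size_gb
    if level == 6:
        if disks < 4:
            raise ValueError("RAID 6 needs at least 4 disks")
        return (disks - 2) * disk_size_gb
    raise ValueError(f"unsupported RAID level: {level}")
```

The capacity penalty is one disk for RAID 5 and two for RAID 6, which is the cost you pay for the extra fault tolerance — and why paying that penalty for rarely-read archives may not be worth it.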

  3. GCP Storage Demo

So I’m over in the console, and as you can see, I’m at the home page under the same group project. What we want to do now is add some storage under one of these projects. I get, I think, three projects — and yes, I do, I’m working with three projects: boot camp and the IME group project among them. Okay, so now let’s go over to the sidebar, and you can see that there’s an area called Storage. I’m going to highlight this, because when you first look at it, you don’t think of Bigtable as being storage; at least the name doesn’t imply that. To clarify, Bigtable is aimed at Hadoop and NoSQL workloads, and we could create an instance there if we so chose. But since the goal of this lesson is storage, let’s go over there. So this is Cloud Storage.

Now, under this project, I have no storage, so as part of this project we’ll create some storage buckets. And just to show you: if I go over to the boot camp project, I’ve already created some buckets over there, and you can see that they’re under certain regions. You can see that lifecycle is enabled on one, but not the others, and then these are the labels attached to that storage. If I click on one of these buckets, you can see that the default storage class is regional, whereas these others are multiregional. If you ever take the cloud architect exam, you’ll need to understand the distinction between these. Then I’ll go over here to see which one I last created, which I believe was this one. As you can see, I’ve uploaded a significant number of files. These are exports — essentially the bills that I export every week or so. Actually, I think it’s every day. Yes, it’s every day, so you can see that this is the CSV file that gets exported daily. If I go back to buckets and then over to this regional bucket here, you’d see that there are no objects in it.

So let’s just go through it: let’s go to the blank project and create a bucket from scratch. If you haven’t created a bucket yet, I encourage you to log into your free account with Google, get your login set up and your credit applied, and play around with this so that you understand how to create a bucket and what to look for when you do. And remember, Standard is going to be the default storage class. I’m bringing this up because, once again, if you take the architect exam or the data engineer exam, both of those exams will expect you to know some of the differences and will quiz you on them. So I’m going to spend a little more time than usual on this, because I want to make sure you understand storage; it makes up at least 10% or 12% of the architect exam, so it is a good chunk of the test. Okay? Now remember: a bucket is object storage. So for my AWS friends and fans out there, this is essentially Google’s counterpart to S3. You can see here that I have a lot of choices, right? So what exactly is the right choice? One strategy I would highly recommend is to understand why you’re creating the bucket in the first place. Is it to keep log files, or is it for content delivery? If it’s for content delivery, you probably need to go with multiregional, assuming it’s important to your organization that people can access the content from the region most appropriate for them. So think of it from that perspective: if you have content that is going to be downloaded routinely, maybe you want to think about multiregional. Regional, as one would expect, is proportionally lower in cost. Regional data basically means you’re going to keep it in, say, the US, and you don’t need another region in Europe or Asia, for example.
Nearline, basically, is like an archive — that’s exactly what it is. It’s for infrequently accessed data. The key point here is that it’s very easy to get confused on the test between Nearline and Coldline. Keep in mind that Coldline means essentially nothing is happening; this is your deep archive. If you’re not going to access the data even once every 30 days, it’s best to put it on ice, because the cost is substantially different, at least in most regions. Now, you can see that when I select Coldline, it again lets you place the redundancy at the nearest location; the location choices update with the storage class. So if you do take the exam, you need to know this inside and out, and I’m spending time on it because I want to make sure those who take the exam get it. Now, the storage class is just one of the choices you make, and we haven’t even gotten to the type of disk you’re going to use.

The storage class is basically how the objects are going to be stored, not the underlying SSDs or anything like that per se; we’ll talk more about hardware specifics in a minute. In this case, I just want to do regional, again, just to show you how to create a bucket, and I’m probably going to use US East. Then labels. A label is going to be important, especially if you’re going to create a fair number of buckets and want to find specific keywords quickly. For example, let’s say you want a bucket to store information about a specific application — in this case, SQL files — and remember, labels have to be lowercase. As you can see, I’m just going to call them SQL files to keep things simple. There are a couple of ways you could do this; for me, I like to use numbers, but you don’t have to. For example, when you create a bucket, maybe you want one for specific types of files that are in production or development; you can do that. I like to add a value, so let’s say the key is development and the value is SQL files. Again, you could do this many different ways. If you’re a larger company, I’d recommend putting something related to the region in the key, so you can cross-reference the region with the files you want to find. And if you hover over this area, help text should come up — it actually isn’t appearing for some reason, but if you did want it, it’s there.

The other thing I noticed is that sometimes, even though I’m using Chrome, you may get a different response with IE or a different version of Chrome. So the help may or may not appear, but it should occasionally pop up and tell you what to put in for the value. For example, let’s do an experiment. What should have happened is that a hint popped up, but it didn’t, and that’s fine. I’m just going to put a value in there; there’s no right or wrong answer, so I’ll leave it like that. Let’s sum up what we’re doing. I need to name this as well, so I’m going to call it boot camp — and I have this horrible habit; it will send me warning messages until the name is unique and meets their requirements. And which region am I in? I’m going to say US East. Okay, so it seems that name was acceptable for that bucket. If you’re in a large production or development environment, it would be very wise to think of a naming schema that makes sense, because when you have hundreds of buckets with thousands of files, it can be a very hard process to find what you need. So think of a naming scheme. Okay, so I think we get what we just did. To sum it up: you name the bucket and pick the storage class. If you have questions on storage classes, you may want to read up on them; you can see they’re linked right here.
What I like is that they give you the page you want to go to, and if you go over to that page, it explains everything you really need to know, from APIs to pricing to availability. One of the things I would do before the exam is go to this page and spend 30 minutes reviewing it top to bottom, because a lot of the questions focus on which class has the lowest cost per gigabyte and what the minimum storage duration is. Coldline, for example, has a minimum storage duration of 90 days. Also pay attention to geo-redundancy: the only class that’s geo-redundant is multiregional, so that might be easy for you to remember.
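Everything we just did in the console can also be done from the command line with gsutil, which the console documentation shows alongside each task. This is a hedged sketch, not a definitive walkthrough: the bucket name, label key/value, and file name below are hypothetical placeholders, and it assumes you have the Cloud SDK installed and are authenticated to a project with billing enabled.

```shell
# Create a regional bucket in US East (hypothetical bucket name).
gsutil mb -c regional -l us-east1 gs://bootcamp-sql-files-123

# Attach the label discussed above (key "development", value "sql-files").
gsutil label ch -l development:sql-files gs://bootcamp-sql-files-123

# Upload a file under a folder-like prefix, then inspect the bucket.
gsutil cp ./schema.sql gs://bootcamp-sql-files-123/home-files/
gsutil ls -L gs://bootcamp-sql-files-123
```

The `-c` flag sets the storage class (standard, regional, multi_regional, nearline, coldline) and `-l` sets the location, which mirrors the two choices the console walks you through.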

Regional, once again, is a single region. Nearline has a 30-day minimum, essentially, and Coldline has a 90-day minimum. And again, on both exams, the data engineer’s and the cloud architect’s, you’re going to get a fair number of questions on these. Then there’s the bucket’s default storage class, which I’d like to go over. If you don’t select one, it’ll assign the default storage class based on what you’re doing. So, once again, a bucket assigned Standard will behave as the equivalent of either multiregional or regional, depending on the location; legacy classes like reduced-redundancy storage correlate more closely to Nearline and Coldline than to regional. So do pay attention to that. Then, one thing Google does well, I must say, is provide visual instructions. If you haven’t noticed, they give you the instructions to complete the task in the console, with gsutil on the command line, or via an API call if you want; they also have JSON and XML APIs. So it’s fairly easy to figure out how to use Google Cloud once you actually read the instructions and go through the pages. Now, I’ll admit that the most difficult part will be when you get into the development areas. As a developer, you need to really think like a developer.

You need to understand how the processes work and how APIs work, and some of that area can be fuzzy, to be honest. With that said, let’s go back to the console and create this. I’m going to leave it as regional, and it’ll create that bucket. You can see that it has been created, but there are no objects. Let’s go to the buckets: you can see that the storage class is regional, the location is US East, and lifecycle is set to none. And then there’s the label that I assigned; you can see it listed. This is extremely helpful when you’re scanning console pages or, as typically happens, CLI dumps for the files you’re looking for. So I go here, and you can see the options up top: do I want to upload files, or create a folder? I can create a folder here and call it, say, home files, and click create; note that there aren’t a lot of restrictions on file names, or folder names for that matter. Then I could upload into the folder here, or directly into the bucket — whatever I want to do, it doesn’t matter. But let’s go over to Settings so you can see that it has project access.

You could go ahead and use the REST API as well, and you’re going to need this; your developers, at least, are required to be able to access the content, especially with the services you might want to tie in for interoperability. Then there’s Transfer. This is actually a cool feature, and I cover it when I talk about storage transfer — the migration piece, actually. You can create a transfer from AWS to Google, for example. So let’s go back to the browser, back to the bucket, and into the folder. Now I want to upload files, so I click “upload files,” and I’m just going to pick this picture; I’ll take snapshots of what I have here. You can see that it’s been uploaded, and it’s pretty darn quick to upload to buckets, especially if you have a decent connection at home. So it’s done. You can see that I went to the bucket, then to the folder, my home files, and then I could share the file publicly by going here. Now, a word of advice for you security-conscious individuals: Google has done a good thing by not enabling sharing immediately; you’ve got to enable it yourself. So be very cautious when you do, because once you do, someone could figure out the link, or you could share that link, and you can see that it brings them right to that file. Again, whatever you want to do: I can go over here and edit permissions, rename it, or whatever I’d like. So that’s how you upload files after creating a bucket. It’s extremely simple. Now let’s go back to, let’s see, back to Storage. Here’s one more thing; if we go over here, yes, right there. Storage — you see, I moved too quickly.

So you’ve got the other types of storage available: SQL, cloud storage, Datastore, for example, and Bigtable. If I go to SQL, I have nothing set up in this instance. For the purposes of this class, we won’t go over every option in detail — just enough to give you an idea. You create an instance, and I could choose either generation. And remember, there’s a cost to this; I’m using my free credits, and this will eat through them if I forget about it. So, if I go to MySQL, I can select the generation: first or second. The major differences are highlighted here. You only want to use the first generation of MySQL if the application you’ll be reporting on requires it; there’s no good reason to use it other than that, because if you need any HA or anything, it’s best to stick with the current generation. Then, for the instance ID, you’d call it something like “GCP test 123.” I could set a password here. Again, you don’t want to do this with no password: even if it’s just being kept in the GCP cloud, it’s still a best practice to set one, and if you go here to generate a password, it will generate one for you. Now, location: be very cautious with the location. If you’re a national company and the majority of your customers are on or near the east coast, you should use East in most cases. Tools like CloudHarmony are available to help you determine whether Central is preferable to, say, East or West, depending on your location.

Again, there are four geos for the four time zones in the United States, so you could see Central 1, East 1, and West 1. Now, you've probably noticed that these numbers don't match and that a few things are missing. Well, that's because in MySQL, this specific capability is only supported in certain zones and certain regions. That's another thing to be cautious of. So, say I go to the east coast of the United States and intend to set up shop there; if the majority of my customers are in California, I should consider whether to use Washington or Oregon instead. Also, if you go to the configuration options, there are numerous options to consider. In this case, I don't really care; there's no rush, and we're not teaching you SQL right now. We just want to give you an idea of how to get up and running, and you can see that it is bringing up that SQL instance. So if I go back, you can see that this can take up to a minute; it really just depends on the latency and the workloads. This is a very small instance, so it's not going to consume too many resources. So let's go back and take a quick look at, actually, let's go to Cloud Spanner and see what we can find. It says I have to enable the Cloud Spanner API for my project.
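The region decision above boils down to "pick the region closest to your users." Here's a toy sketch of that logic; the latency numbers are invented for illustration, not measured values, so in real life you'd plug in results from a tool like CloudHarmony or your own ping tests:

```python
# Hypothetical round-trip latencies (ms) from a customer base in California.
# These numbers are made up for illustration, not real measurements.
latency_ms = {
    "us-west1": 15,     # Oregon
    "us-central1": 45,  # Iowa
    "us-east1": 75,     # South Carolina
}

def closest_region(latencies: dict) -> str:
    # Choose the region with the lowest latency to your users.
    return min(latencies, key=latencies.get)

print(closest_region(latency_ms))  # us-west1
```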

So what do we have to do? This is a good exercise. Let's go ahead and create an instance, and you can see the cost. I've got $79 in credit, so I probably don't want to do this right now, but play around with it; it's all free if you want. And then the cost of storage is an additional charge, so take a look at that as well. So that's Cloud Spanner, and it's essentially what you want to use for your SQL databases when you need a database service with transactional capabilities. You can go here to "Learn more" if you want; I don't want to harp on that. And then lastly, let's just go back to SQL. It'll take a little bit of time, but it'll come up, and once it's up, you can select it and take a look at the configuration. Then you can see "Create instance" if you want to create another instance. And if you notice this icon here, that spinning circle is essentially telling you that it's still doing its thing, so you don't have to keep watching over here. So again, lots of great capabilities exist with cloud storage. You really need a whole day to get into it deeply, but for what we need to know, this is a good start for you.

  1. Sizing Cloud Deployments

Let's go ahead and talk about cloud deployments and trying to get them right the first time. Now, one of the things I like to point out about the cloud is that when you're deploying a new service, you can be as technical and as analytical as possible, and the truth is, you're unlikely to get it exactly right anyway. So I like to say that sizing cloud deployments is more of an art than a science. And you're probably asking what I mean by that. Well, let's take it from this perspective: generally, when you deploy any kind of new enterprise application or service, that service needs to be deployed in a manner that's going to accommodate what could be an unknown number of users in an unknown number of situations.

So with that said, you need to size your cloud deployment. You need to understand that every enterprise application is going to have a different use case. So, of course, you don't want to size that email application the same way you're going to size a CRM application. You need to be aware that each application has a different use case and therefore will likely have different requirements when it comes to different migration approaches. Just understand once again that some data may be transactional, whereas some of it may not. SQL databases are going to have a different import, export, or migration strategy than perhaps a NoSQL database in some cases. Depending on how you scale, right-sizing is important to consider in your planning. When it comes to sizing your cloud deployment, one of the things you want to do is make sure you map the application to the proper provider. So again, if you're just taking, for example, your Microsoft Office and Exchange solutions, the use case to go to Office 365 is pretty easy to make. On the other hand, if you're using Google Mail, Gmail, and you go directly to Office 365, maybe the use case isn't there because of the amount of work needed to accomplish what you're trying to accomplish.

What you want to think about as well is that you need to match the application to the correct SLA terms. Remember that the SLA is a service level agreement. If you require anything more than four nines of uptime, then don't go to the cloud. In fact, I tell my customers that if they expect more than three and a half nines, they're still gambling on something they can't control. And that's just my opinion; everyone has their own opinions on what belongs in the cloud and what doesn't. For this exam, what we do want you to be aware of is that you want to match your application to the correct service and deployment models. Once again, if you're just deploying infrastructure, you may just want to deploy infrastructure as a service. But let's say you want developers to manage their own VMs. Do you go to a platform as a service, or do you deploy a platform as a service on top of your infrastructure as a service? And, once again, there may or may not be a proper use case for doing so.
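To make those "nines" concrete, you can convert an SLA's availability percentage into allowed downtime per year. This is standard arithmetic, sketched here in Python:

```python
# Sketch: converting an SLA's availability percentage into allowed
# downtime per year. 365 days * 24 hours * 60 minutes = 525,600 minutes.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.95, 99.99):
    print(f"{pct}% -> {downtime_minutes_per_year(pct):.1f} min/year")
```

So three nines (99.9%) allows roughly 525.6 minutes of downtime a year, while four nines (99.99%) allows only about 52.6 minutes, which is why promising more than that on infrastructure you don't control is a gamble.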

Now, when you're sizing the deployment model, just be aware that each of these will have their own sizing requirements, performance requirements, and SLA requirements. For example, if you require a higher level of security, you may need to go with the private cloud because the public cloud may not offer it. When it comes to service models, once again, use the right service model for the right deployment approach. If you're migrating email to Office 365, you'll most likely use a platform or software as a service, depending on the use case; it could be either, but in general, just be aware of that. On the other hand, if you want to deploy VMs and then deploy Exchange on top of that, you may want to go with infrastructure as a service and then deploy a PaaS or SaaS on top of that, depending on what you're trying to accomplish. When you're comparing service models, just be aware that the SaaS provider is pretty much going to handle everything; with platform as a service, the customer and provider share some management; and with infrastructure as a service, the customer and provider share management as well. One thing you should consider is where the responsibilities for customer and provider management begin and end. So here's a nice chart that depicts where, typically, in most cases, that responsibility starts and stops. Take a look at that and see where your service will most likely fit. Again, this is not always the case, but it is a good example. When it comes to scaling, we typically have horizontal scaling, vertical scaling, and diagonal scaling. Horizontal scaling is typically used when you're going to scale the number of virtual machines. So, for example, you could start with two virtual machines and scale out to four. Vertical scaling is where you typically just add resources. So, for example, you may have two cores and add additional cores to that virtual machine configuration.
So just be aware that vertical scaling is typically done when it comes to applications that just require more horsepower, more memory, et cetera. Horizontal scaling, on the other hand, requires more distributed workload management.
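The difference between the two approaches can be sketched as a toy model; the function names and numbers here are illustrative only:

```python
# Toy model of the two scaling approaches. Names and numbers are
# illustrative, not tied to any particular provider's API.
def scale_horizontally(vm_count: int, factor: int = 2) -> int:
    """Scale out: add more VM instances to distribute the workload."""
    return vm_count * factor

def scale_vertically(cores_per_vm: int, extra_cores: int = 2) -> int:
    """Scale up: add resources (e.g. cores) to an existing VM."""
    return cores_per_vm + extra_cores

print(scale_horizontally(2))  # 2 VMs -> 4 VMs
print(scale_vertically(2))    # 2 cores -> 4 cores
```

Note that scaling out only helps if the application can actually distribute work across instances, which is exactly the distributed-workload-management point above.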

For example, horizontal scaling would be more of a focus for a distributed workload, let's say. And then we have what's called diagonal scaling. This is where you're going to scale out, typically using both of the approaches, or scaling out directly to the cloud. Again, what I've seen is that some vendor definitions are a little bit off, or actually not exact, so everyone's got their own take on this. But for this exam, I just want to make sure that you're aware of the three types of scaling cloud resources: horizontal, vertical, and diagonal. What is cloud bursting? Well, cloud bursting is an application deployment model in which your application normally runs in a private cloud or data center and then bursts into the public cloud when it needs additional capacity. This may or may not address the capacity issues, but in most cases, hopefully, it does; I've seen instances where that's not always the case. I like to say that when it comes to cloud bursting, one of the challenges is: are you able to accommodate the right network load at the right time? So if you have a "skinny link," as I like to call it, don't attempt this. When it comes to scaling to the cloud, be aware that you also need to understand APIs. An API is an application programming interface. You don't need to know all the details about APIs; that's for developers and testers, for example. But you want to be aware that APIs are generally what connect your application to the cloud service. These can scale as well, and they can cause performance issues, believe it or not.
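The cloud-bursting decision described above amounts to: serve demand on-premises until private capacity is exhausted, then send the overflow to the public cloud. A minimal sketch, with made-up capacity units:

```python
# Minimal sketch of a cloud-bursting placement decision.
# "Units" of demand and capacity are illustrative abstractions.
def place_workload(demand_units: int, private_capacity: int) -> dict:
    """Split demand between the private cloud and a public-cloud burst."""
    on_prem = min(demand_units, private_capacity)
    burst = max(0, demand_units - private_capacity)
    return {"private": on_prem, "public_burst": burst}

print(place_workload(80, 100))   # fits entirely on-premises
print(place_workload(130, 100))  # 30 units burst to the public cloud
```

Remember the caveat from above: even if the math works, a "skinny link" between the private and public environments can make the burst itself the bottleneck.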

Again, sometimes things are misprogrammed, and sometimes things are not connected. Sometimes there are different versions rolled out, and maybe that version is a revision behind; maybe you have to upgrade it, who knows? So just be aware of APIs; they could be a concern when it comes to scaling. When it comes to scaling as well, be aware that you want to check the provider ports and protocols that are supported. Most of the providers use commonly used ports and protocols, such as DNS on port 53 and HTTPS on port 443. Also, when we talk about security, we'll talk a lot more about federation, but just be aware that you need to integrate those cloud services. This is more of a hybrid cloud approach in which you federate your security and IAM solutions with your in-house LDAP solution, typically with a touch of orchestration. We will talk more about federation in the modules ahead. When sizing in the cloud, it is important to understand that the cloud service and deployment models scale differently and have different management responsibilities. Here's an exam tip: make sure you are aware of what cloud bursting is, but also know that there are three ways to scale in the cloud: horizontal, vertical, and diagonal.

