Now we're getting into the concept of auto scaling groups. So basically, in real life, your websites and applications will change and they will have different loads. The more users you have, the more popular you'll be, and the more load you'll have. In the cloud, as we've seen, we can create and get rid of servers very quickly, and there's one thing an auto scaling group (ASG) does very well: scaling out, which means adding EC2 instances to match an increased load, but also scaling in, which means removing EC2 instances to match a decreased load. Finally, we can ensure that the number of EC2 instances can only grow or shrink within certain bounds, by defining a minimum and a maximum number of machines running in an ASG.
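These minimum, desired, and maximum sizes map directly onto parameters of the Auto Scaling API. Here is a minimal sketch, using boto3's parameter names, of what you would pass to the `create_auto_scaling_group` call; the group and template names are placeholders, not values from the lecture.

```python
# Sketch of the parameters for
# autoscaling.create_auto_scaling_group(**asg_params) in boto3.
# All names below are placeholders for illustration.
asg_params = {
    "AutoScalingGroupName": "my-first-asg",
    "MinSize": 1,          # never fewer than 1 instance
    "DesiredCapacity": 1,  # instances running right now
    "MaxSize": 3,          # ceiling for scale out
    "LaunchTemplate": {
        "LaunchTemplateName": "my-first-template",
        "Version": "$Latest",
    },
}

# Invariant the ASG enforces at all times:
assert asg_params["MinSize"] <= asg_params["DesiredCapacity"] <= asg_params["MaxSize"]
```

The desired capacity is the value scaling policies adjust at runtime; it always stays between the minimum and maximum.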
Finally, we have a super useful ASG feature that automatically registers new instances to a load balancer. In the previous lecture we registered instances manually, but obviously there's always some automation we can do in AWS. So what does it look like in a diagram? Well, here is our beautiful auto scaling group. The minimum size, for example one here, is the number of EC2 instances you'll have for sure running in this auto scaling group. The actual size, or desired capacity, parameter is the number of EC2 instances running at the current moment in your ASG. And then you have the maximum size, which is how many instances can be added during a scale out if needed when the load goes up, so that's super useful.
What you need to know about are the minimum size, desired capacity, and maximum size parameters, because they will come up very often. Also, notice that scale out means adding instances, while scale in means removing instances. So now, what does it look like with a load balancer? Well, here is our load balancer, and web traffic goes straight through it. We have an auto scaling group at the bottom, and basically the load balancer will know how to connect to these ASG instances, so it will direct the traffic to these three instances.
But if our auto scaling group scales out and we add two instances, then the load balancer will also register these new targets, obviously perform health checks on them, and route traffic to them directly. So, in AWS, load balancers and auto scaling groups go hand in hand. ASGs have the following attributes (we'll be creating one in the next lecture during the hands-on): the launch configuration has an AMI and an instance type, EC2 user data, EBS volumes if you need them, security groups, and an SSH key pair. As you can see, these are quite the same things we set before when we launched an instance manually.
Obviously, they're very close. You also set the minimum size, the maximum size, and the initial capacity, which is the desired capacity. We can define the network and the subnets in which our ASG will be able to create instances, and we'll define load balancer information or target group information, based on which load balancer we use. Finally, when we create an ASG, we'll be able to define scaling policies: what will trigger a scale out, and what will trigger a scale in? So now we're getting to the auto scaling part of auto scaling groups, which is the alarms. Basically, it's possible to scale your auto scaling groups based on CloudWatch alarms. We haven't seen what CloudWatch is yet, but as I said, Amazon is kind of a spaghetti ball, so don't worry, please follow me.
So a CloudWatch alarm is something that monitors a metric, and when the alarm goes off, say when the metric goes up, you say, okay, you should scale out, you should add instances. And when the alarm goes back down, or there's another alarm saying the metric is too low, then we can scale in. So basically, the ASG will scale based on the alarms, and the alarms can monitor any metric you want, such as the average CPU. The metrics are computed as an average over all the instances: okay, it doesn't look at the minimum or the maximum, it looks at the average of these metrics. Based on the alarms, we can create scale-out and scale-in policies, as I said. There are older rules and newer rules for auto scaling, and we'll be seeing them in the hands-on. But for now, you can basically say, "Okay, I want to have a target average CPU usage in my auto scaling group," and it will scale in and scale out based on your load to meet that target CPU usage.
You can also have a rule based on the number of requests on the ELB per instance, or the average network in, or the average network out. So really, whatever you think the best scaling policy is for your application, you can use it. These newer rules are easier to set up than the previous ones, and they can make more sense to reason about: saying "I want to have a thousand requests per instance from my ELB" is easier to reason about than "I want my CPU usage to be 40% on average." Now, you can also auto scale according to a custom metric. For example, we could define a custom metric for the number of connected users to our application.
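Both of the targets described above are "target tracking" policies in the Auto Scaling API. As a sketch, here are the parameters you might hand to boto3's `put_scaling_policy` for the CPU target and the requests-per-instance target; the group name, policy names, and the target-group resource label are placeholders.

```python
# Target tracking on average CPU, as put_scaling_policy(**cpu_policy).
cpu_policy = {
    "AutoScalingGroupName": "my-first-asg",
    "PolicyName": "target-40-percent-cpu",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 40.0,  # keep average CPU around 40%
    },
}

# Target tracking on ALB requests per target.
requests_policy = {
    "AutoScalingGroupName": "my-first-asg",
    "PolicyName": "target-1000-requests",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            # This metric needs a resource label of the form
            # "<alb-arn-suffix>/targetgroup/<tg-arn-suffix>" (placeholder here).
            "ResourceLabel": "app/my-alb/1234567890/targetgroup/my-tg/0987654321",
        },
        "TargetValue": 1000.0,  # ~1000 requests per instance
    },
}
```

With target tracking, Auto Scaling creates and manages the CloudWatch alarms for you; you only state the target.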
To do this, we'll compute that custom metric from our application and send it to CloudWatch using the PutMetricData API, and then we'll create a CloudWatch alarm to react to low or high values of that metric. These alarms will then trigger the scaling policies for the ASG. So what you should know is that auto scaling groups aren't tied to the metrics AWS exposes; they can scale on any metric you want, including a custom metric. So here's a small brain dump of things to know about your ASG. First of all, you can have scaling policies for your ASG and they can be anything you want: CPU, network, a custom metric you define, or even based on a schedule. If you know in advance how your visitors are going to visit your website, for example if you know they log in very early at 09:00 A.M., you can be proactive and add more instances before the users arrive so they have a better experience.
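As a sketch of that custom-metric flow, here is the payload your application could publish via boto3's CloudWatch `put_metric_data` call; the namespace, metric name, and value are made up for illustration.

```python
# Sketch: publishing a custom "ConnectedUsers" metric with
# cloudwatch.put_metric_data(**metric_payload) in boto3.
connected_users = 42  # in a real app, read this from your session store

metric_payload = {
    "Namespace": "MyApp",  # custom metrics live in your own namespace
    "MetricData": [
        {
            "MetricName": "ConnectedUsers",
            "Value": float(connected_users),
            "Unit": "Count",
        },
    ],
}
```

A CloudWatch alarm on `MyApp/ConnectedUsers` can then trigger the ASG scaling policy exactly like an alarm on a built-in metric would.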
ASGs can use launch configurations or launch templates. Launch templates are the newer version of launch configurations, and they are recommended going forward. If you want to update an auto scaling group, what you need to do is provide a new version of the launch configuration or launch template, and then your underlying EC2 instances can be replaced over time. If you attach an IAM role to your auto scaling group, that IAM role will automatically be assigned to the EC2 instances it launches, okay? And the auto scaling group itself is free; the only thing you're going to pay for is the underlying resources being launched, so your EC2 instances with attached EBS volumes, etc.
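That update flow ("provide a new version, then instances get replaced over time") can be sketched as two boto3 Auto Scaling / EC2 calls. The template name, source version, and overridden instance type below are placeholders.

```python
# Step 1: create a new launch template version, copying version 1 and
# overriding one field: ec2.create_launch_template_version(**new_version_params).
new_version_params = {
    "LaunchTemplateName": "my-first-template",
    "SourceVersion": "1",
    "LaunchTemplateData": {"InstanceType": "t2.small"},  # the one change
}

# Step 2: point the ASG at the newest version:
# autoscaling.update_auto_scaling_group(**update_params).
update_params = {
    "AutoScalingGroupName": "my-first-asg",
    "LaunchTemplate": {
        "LaunchTemplateName": "my-first-template",
        "Version": "$Latest",  # new launches use the newest version
    },
}
```

Note that existing instances are not replaced immediately; they pick up the new version as they are replaced over time (or via an instance refresh).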
But if you have an instance under an ASG, the beautiful thing is that if somehow the instance gets terminated, the ASG will realize that your instance has been terminated and will automatically create a new instance as a replacement. And that is the whole purpose of having an ASG: you really do have that extra safety of knowing that your instances will be automatically recreated in case something goes wrong. When would an instance be terminated? Well, instances can be terminated, for example, if they are marked unhealthy by a load balancer. So the ASG says, okay, the load balancer thinks this instance is unhealthy, and the better thing to do is to terminate the instance and replace it by creating a new one. Okay? So remember that an ASG creates new instances automatically: it doesn't restart or stop your instances, it just terminates them and creates new ones as replacements. So that's it. I hope you liked it and I will see you in the next lecture.
So let's have a play with auto scaling groups. But before we do so, make sure to terminate all your instances. I'm going to go ahead and terminate all the EC2 instances that I've created before, and also, if you go to your load balancers, make sure you only have your Application Load Balancer available, and if you have a CLB or an NLB, please make sure to delete them as well. So the idea now is that if we go to our target group, this one should have zero targets, and if we go to the load balancer right now and try out the DNS, we should get something like a 503 because we don't have any instances to serve any traffic. So this is perfect.
Now let's go and create our first auto scaling group. We are on the auto scaling group page and we have to create our first auto scaling group. There is a new design, and I'm going to use the new design; this way the video will look like it's using the new redesigned auto scaling group console, and the video will look updated. So let's do it. I'm going to create an auto scaling group, and we can choose a launch template or a launch configuration; they're very similar. Launch configuration is the older version of launch templates, and launch templates are the more recent one: launch templates allow you to use a fleet of instances, including Spot, while launch configurations only allow you to specify one instance type. Because we want to be more geared towards the future, we'll use launch templates, but if you see launch configuration or launch template in the exam, they're almost the same thing. So I'll just name it "my first ASG", and then I can use a launch template. Now we can switch to a configuration or create a launch template very quickly.
The launch template describes how to create EC2 instances, so let's just go ahead and create one very quickly. Do we want auto scaling guidance? No, this is fine. Then we need to scroll down and select an AMI, so we'll say the Amazon Linux 2 AMI. For the instance type, we want to stay within the free tier, so we choose t2.micro. Excellent. I'll use "EC2Tutorial" as the key pair, which is the one I have from before, and for the networking platform I'll use VPC. For the security group I will use my first security group, so that the EC2 instances launched through my launch template, using Amazon Linux 2 on t2.micro, will inherit the same security group as before. Then for storage we'll have a root EBS volume. For tags we don't have anything, but you can tag them if you want. For network interfaces, we're not going to use any special ones besides the primary ones, and advanced details would allow us to specify more stuff, such as the instance profile if we wanted to.
And the most important thing is actually at the very, very bottom, which is the user data. So we have to go and get our user data: you can copy the entire script right here into the EC2 user data, as I said, to make it a little bit simpler. When you have everything from the #!/bin/bash at the very beginning all the way to the final echo, then you're good to go. Click on Create Launch Template, and it's successful. So I will go back to my ASG console and refresh, and here I can select "my first template" as the launch template, so we can review it. It looks like everything is good: we have the security group IDs, the launch template is here, the key pair is the one we have from before, and so on. Click on Next, and the purchase options and instance types appear. So here's the cool thing: with launch templates, we can have On-Demand or Spot instances, or we can have a combination of On-Demand and Spot. From an architectural point of view, we could have a base capacity of On-Demand and then some Spot capacity on top, which is very helpful if you want to have a hybrid fleet. But in our case, we'll just adhere to the launch template and create On-Demand only. For the subnets, we'll select three subnets to launch our EC2 instances into: eu-west-3a, eu-west-3b, and eu-west-3c.
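As a sketch, here is what the launch template's data could look like through the API. The AMI and security group IDs are placeholders, and the user-data script is only a stand-in for the course's script (it writes a "Hello World" page, nothing more); note the API expects user data base64-encoded.

```python
import base64

# Placeholder user-data script, a stand-in for the one used in the lecture.
user_data = """#!/bin/bash
echo "Hello World from $(hostname -f)" > /var/www/html/index.html
"""

# Sketch of LaunchTemplateData for ec2.create_launch_template(...).
launch_template_data = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder Amazon Linux 2 AMI ID
    "InstanceType": "t2.micro",          # free-tier eligible
    "KeyName": "EC2Tutorial",
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # placeholder
    # The EC2 API requires user data to be base64-encoded:
    "UserData": base64.b64encode(user_data.encode()).decode(),
}
```

The console does the base64 encoding for you; it only matters when you create templates programmatically.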
So the EC2 instances will be launched into three different AZs. Click on Next, and now we have to specify load balancing and health checks. This is extremely important if we want the load balancer we had before to serve the traffic in front of our EC2 instances. So yes, we do want load balancing, so I'll enable it. And is it an ALB or an NLB, or is it a CLB? In our case, it's an ALB, but you could definitely use a CLB if you want to. With an ALB, we have to specify a target group; in this case, it's my first target group. That means that, automatically (let me just go back to the EC2 console and go to my target groups), when an instance comes up, it will be registered as a target within this target group. This is fantastic. And the health checks are optional, but we have two types of health checks. The first one is EC2 health checks, which means that if the EC2 instance itself fails, then it will be replaced. But we can also have an ELB health check, and this is the one performed from within the target group.
That means that if the ELB health check from within the target group doesn't pass, then the auto scaling group will terminate that instance automatically and recreate a new one. This is definitely the kind of behaviour we want, and we'll click on Next. Okay, now we have group size and scaling policies. We'll have a very big lecture on scaling policies, but for now, we'll just consider group size. So the desired capacity is one, which is how many EC2 instances we want. The minimum is one, which is the minimum capacity, and the maximum I'll set to three. We can change these as we go along, and we'll see scaling policies in the hands-on; for now, I will set the scaling policy to none. The whole idea behind auto scaling groups is that they scale automatically, so we will be setting a scaling policy later on, but I want to cover that in a separate lecture. Okay? Scale-in protection: we will not enable it, as we don't need it for now. So we'll click on Next, and tags are fine, so I'll click on Next as well. Everything looks good. This is the new way of creating an auto scaling group.
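The two console choices just made, attaching the target group and using ELB health checks, correspond to two Auto Scaling API calls. A sketch with boto3 parameter names; the group name and target group ARN are placeholders.

```python
# Sketch: autoscaling.attach_load_balancer_target_groups(**attach_params)
# registers new ASG instances into the ALB's target group automatically.
attach_params = {
    "AutoScalingGroupName": "my-first-asg",
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:eu-west-3:111122223333:targetgroup/my-tg/0987654321",
    ],
}

# Sketch: autoscaling.update_auto_scaling_group(**health_check_params)
# switches from plain EC2 status checks to ELB health checks.
health_check_params = {
    "AutoScalingGroupName": "my-first-asg",
    "HealthCheckType": "ELB",       # replace instances failing ALB health checks
    "HealthCheckGracePeriod": 300,  # seconds before health checking starts
}
```

With `HealthCheckType` set to `ELB`, an instance failing the target group's health check gets terminated and replaced, exactly the behaviour described above.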
So I click on Create Auto Scaling Group and we're good to go. This is the new console, and right now our auto scaling group has zero instances, although the status is being updated and the desired number of instances is one. So what I'll do is just wait a little bit, and here we go. Let me click on this one so we can see some activity. As I click on this ASG, I can see the desired capacity is one, the minimum capacity is one, and the maximum is three. These are all the things we set before, and we have a launch template, which is great, and the load balancing is set up properly. If I scroll all the way up and look at the activity history under Activity, I will see what is happening in my auto scaling group. As you can see, right now it was successful and it launched a new EC2 instance. It was launched because the desired capacity was one and the actual capacity was zero; therefore, a new instance was created. That was very quick. And if I go to Instance Management, I see now that I have one instance in my auto scaling group, which is perfect. If I look at this instance itself in my EC2 management console, we can see it's appearing here and it's working. Now the beautiful thing is that if I go back to my target group and refresh, I should see the instance appearing right here. So the target is registered, and right now it's unhealthy, but hopefully it will become healthy very, very soon. Let's just wait a little bit to see if the health check passes.
And thankfully, my instance is now healthy. So now if I go back to my ASG, we can see here that the instance is registered, and on the right-hand side we can see it's also healthy, so it shows the right health status. And because it is registered to the target group, I should be able to go to my load balancer and open my ALB. So I'll open the DNS name and, here we go, we get the "Hello World". This happened completely automatically: our ASG, our auto scaling group, created this instance. And the cool thing we can do now is go to Edit on this configuration and say the desired capacity is now two. So what will this do? I'll just go all the way down and click Update. What it should do is tell the ASG to increase the capacity by one, because the desired capacity is more than our actual capacity. So I'll go to Activity and look at the activity history, and hopefully very, very soon it will start to create a new EC2 instance. As we can see now, a new instance has been launched and is in the pre-in-service state, because the group was asked to change from capacity one to capacity two. So I'll refresh this and see if it's successful.
The instance has now been launched. So if I go to Instance Management, I should see two instances, and they're both healthy. If I go back to my ALB and refresh now, I should be seeing, hopefully very soon, two instances. That may take a little bit of time, and let's make sure that stickiness is not enabled here. So this looks fine: our target group, as well as the description, looks fine, and stickiness is disabled. One target is still unhealthy, so I need to wait a bit for the health check to pass; once this is done, this will be okay. And my target is now deemed healthy. So going back to my ALB, I refresh, and now I see my two IPs alternating over time, so everything's working great. Finally, for our auto scaling group, if we were to change the capacity from two back to one, this is called a scale in, because we have to remove an instance. So I'll update this, and what will happen is that now we have two instances but the desired capacity is one, and therefore in the activity history it should start telling me very, very soon that it wants to terminate an EC2 instance. Let's refresh again. Here we go: we have a new activity history entry where it is waiting for the ELB connection draining, and then it will terminate our instance accordingly. All right, that's it for this lecture. I hope you liked it and I will see you in the next lecture.
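The console edits in this hands-on (desired capacity 1 to 2, then back to 1) map onto a single API call. A sketch of the parameters for boto3's `set_desired_capacity`; the group name is a placeholder.

```python
# Sketch: autoscaling.set_desired_capacity(**scale_in_params).
# Dropping desired capacity from 2 to 1 is a "scale in": the ASG drains
# connections on one instance (via the ALB) and then terminates it.
scale_in_params = {
    "AutoScalingGroupName": "my-first-asg",
    "DesiredCapacity": 1,
    "HonorCooldown": True,  # don't allow another scaling activity until the cooldown expires
}
```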
So now let's talk about scaling policies, because that is the core of your auto scaling groups. There are three things you need to know. Number one is called target tracking scaling, and it is the simplest and easiest to set up. For example, you would say to your ASG, "I want the average CPU of all instances within the ASG to stay at around 40%." Very, very simple. That means that if you're over that CPU target, it will provision more instances in your ASG, and if you're under it, it will start scaling in, terminating instances, in order to maintain that 40% average CPU. Number two is simple / step scaling, which is not quite as simple.
With step scaling, you would set up something called a CloudWatch alarm, and that alarm would be triggered, for example, when the average CPU of your entire group goes over 70%. Then you would say, okay, if that alarm is triggered, then add two units, so you have a bit more control over how many instances are added through this scaling policy. And you would set up a second CloudWatch alarm to be triggered, for example, when the CPU is less than 30%, to remove one instance. As a result, you have a lot more control over the scaling events: when capacity units are added and when capacity units are removed. Okay? Finally, if you know in advance about some patterns in your application, there is something called a scheduled action, which anticipates scaling based on known usage patterns. So, for example, you know that at 05:00 p.m. on Fridays you should increase the capacity to ten instances, because you know your auto scaling group will require at least ten instances.
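The step scaling pair just described can be sketched as two `put_scaling_policy` parameter sets; each policy is then attached to its own CloudWatch alarm (CPU > 70% for the first, CPU < 30% for the second). Names are placeholders.

```python
# Scale-out step policy: when the "CPU > 70%" alarm fires, add 2 instances.
scale_out_policy = {
    "AutoScalingGroupName": "my-first-asg",
    "PolicyName": "cpu-high-add-two",
    "PolicyType": "StepScaling",
    "AdjustmentType": "ChangeInCapacity",
    "StepAdjustments": [
        # Bounds are relative to the alarm threshold (70%): from it upward, +2.
        {"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2},
    ],
}

# Scale-in step policy: when the "CPU < 30%" alarm fires, remove 1 instance.
scale_in_policy = {
    "AutoScalingGroupName": "my-first-asg",
    "PolicyName": "cpu-low-remove-one",
    "PolicyType": "StepScaling",
    "AdjustmentType": "ChangeInCapacity",
    "StepAdjustments": [
        # From the threshold (30%) downward: -1.
        {"MetricIntervalUpperBound": 0.0, "ScalingAdjustment": -1},
    ],
}
```

Unlike target tracking, here you create the alarms yourself and decide exactly how many units each breach adds or removes.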
So this is the kind of schedule you would have. Another thing you need to know going into the exam is scaling cooldowns. There's a lot of text here, but I'll read it out to you. There's a cooldown period, and it helps ensure that your auto scaling group doesn't launch or terminate additional instances before the previous scaling activity takes effect. Okay? So the cooldown allows your auto scaling group to kind of settle before there's a new scaling activity. In addition to the default cooldown, you can create a cooldown that applies only to a specific simple scaling policy, and a scaling-specific cooldown period overrides the default cooldown period. That makes sense. One common use for a scaling-specific cooldown is with a scaling policy that terminates instances: because it terminates instances, Amazon EC2 Auto Scaling needs less time to determine whether to terminate additional instances, and so this is where you may want to override the cooldown.
So, as a rule of thumb, if the default cooldown period of 300 seconds is too long, you can reduce your costs by applying a scaling-specific cooldown period of, say, 180 seconds to the scaling policy that terminates instances, so they are removed a little bit faster. And if your application is scaling up and down multiple times each hour, make sure to modify the cooldown timers and the CloudWatch alarm period that triggers the scaling. So this is something you can look at in the documentation, and the question is: is there a scaling action happening? If you're within the cooldown period, then the action is ignored; if you're not, then instances can be launched or terminated. Okay, so just a subtlety you may need to know going into the exam. Now let's have a look at the scaling policies directly in the console. Let's go to the Automatic Scaling tab. Here we can define either a scaling policy or a scheduled action. Let's start with a scaling policy. We can click on Add Policy, and we have three policy types: target tracking scaling, and then step and simple scaling, which are in the same kind of zone. So target tracking is saying, okay, we want to track maybe the average CPU utilization or the average request count per target for the ALB. And we want to, for example, choose CPU.
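The cooldown override above can be sketched as a simple scaling policy whose own `Cooldown` beats the ASG default of 300 seconds. These are `put_scaling_policy` parameters; the names are placeholders.

```python
# Sketch: a simple scaling policy with a scaling-specific cooldown of 180 s,
# overriding the ASG's default 300 s so terminations can follow each other faster.
fast_scale_in = {
    "AutoScalingGroupName": "my-first-asg",
    "PolicyName": "terminate-faster",
    "PolicyType": "SimpleScaling",
    "AdjustmentType": "ChangeInCapacity",
    "ScalingAdjustment": -1,  # remove one instance per alarm trigger
    "Cooldown": 180,          # policy-specific cooldown, overrides the default 300 s
}
```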
The target value is going to be 50; it could be 40, whatever you want. And the warm-up setting specifies how long instances need to warm up before being included in the metrics; 300 seconds is the default. Then we can disable scale in if we want to only create a scale-out policy, but in this case we want to scale in and out, so I will not tick that box. So I will create this, and this is our target tracking policy. We can actually edit it a little bit just to make the demo faster: I'll say instances need about 10 seconds to warm up before being included in the metric, and save this change. Let's see it in action. We have a CPU utilization of obviously zero because nothing is happening; if we look at the CPU utilization for EC2 here, we see it is really, really low. So what I'm going to do now is go to Details, change the desired capacity to two, and see what happens. I'm going to update this, and this will result in the creation of a second EC2 instance. So we will have two EC2 instances, and the average CPU utilization will be very low, about 0%.
So because of this target tracking policy, it'll say, "Wait a minute, you look like you have too many instances for your needs," and it should automatically remove that instance. Let me pause the video so I can show you the entire result at the end. So the first thing that happened is that an instance was launched, because the desired capacity was going from one to two. Now, in Instance Management, we can see two instances, and I hope to see the target tracking policy automatically terminate one instance very, very soon. So wait and see. If you want to know what's happening behind the scenes of this target tracking policy, we're going to look at CloudWatch. We haven't seen CloudWatch just yet, but let's have a quick look at it to see how it works; it'll be a fun way to get started. So your CloudWatch alarms are going to be right here, and right now I have three, but this one is probably not relevant to you.
So I have two alarms right here, and they're linked to my ASG. As you can see, they were automatically created. The first one is called "TargetTracking my-first-asg AlarmHigh" and the second one is "TargetTracking my-first-asg AlarmLow". They say that if the CPU utilization is less than 28 for 15 data points within 15 minutes, then the low alarm will trigger, and if it's more than 40 for 3 data points within 3 minutes, then the high one will trigger. Okay? Right now, the one we want to trigger is AlarmLow, but it's not happening; we've been waiting for a long time because it takes 15 minutes for this alarm to go off. If we look at the alarm itself, we can see the CPU utilization right here has been under 28 since 19:00, and right now I have to wait a little bit.
So right now it is 19:13, so I need to wait an extra 3 minutes for this alarm to go off; then it will be in the alarm state and I will be able to see the scaling happening. So now I need to wait an extra 2 minutes. But I want to show you exactly what's behind the scenes: if you go to your alarms and type "my first ASG", it should show you just the ones you're interested in. Here we go, the two alarms. Now I will wait a bit for the alarm to be triggered, and this should trigger a scale in on my auto scaling group. Okay, so to speed this up, I'm just going to click on my alarm and edit the condition. I'm going to say: 28 is great, but you only need three data points out of three under the threshold to be in the alarm state. So I've just updated my alarm to make it trigger a little bit faster, because I want to make sure I can go on with my hands-on. So I need to wait a little bit, and this time the alarm should go off very soon. And now, thankfully, my alarm is in the alarm state. So what this will do is that if I go back to my ASG and go to Activity, I should see a scaling activity very, very soon. Let's wait just a little bit to see it happen. I'll be honest, it took a lot of time.
So if you're doing the hands-on with me, be prepared to wait a long time. But right now it says yes, there is a terminating-instance activity happening, because the alarm "TargetTracking my-first-asg AlarmLow" was in the alarm state, and therefore the desired capacity changed from two to one. If we go back to the details here and I refresh this page, we can see now that the desired capacity is one. This is why there is a scale in happening, and the group will go from two instances to one. As you can see, the second instance is now in the terminating state. Okay? So this is the target tracking policy. Now I can go ahead and delete it, but I also have the option to add a simple or step scaling policy, so let's use step scaling. You name it, say "my step scaling policy", and you would need to create CloudWatch alarms for this, which we don't have. Then the action would be: if this alarm is triggered, then add, remove, or set to, and then you tell it how many capacity units, or what percentage of the group, you want. So, for example, we say, okay, if this alarm is breached, then add two capacity units, and if another alarm is breached, then remove one capacity unit. Very helpful, right? So this is step scaling, and simple scaling is even simpler.
You don't have as many options, okay? You just add, remove, or set to whatever capacity units or percentage of the group you desire, and then you wait out the cooldown before allowing another scaling activity. So that's the general idea. And now on to scheduled actions. As you can see, we can create a schedule; we'll call it "Scale at 6 PM". And we're saying, okay, the start time is, whatever, tomorrow at 18:00 UTC. Then set the minimum capacity to ten and click on Create. And here we go: tomorrow at 18:00, it will set the minimum capacity to ten. Very, very easy. These scheduled actions allow you to plan things ahead of time, and they can happen only once or on a recurring basis, so they're very helpful when you have more predictable capacity scaling patterns in your ASG. But we'll go ahead and remove it, and that will be it for our auto scaling group scaling policies. Okay, that's it. I will see you in the next lecture.
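The recurring "Fridays at 17:00, at least ten instances" example from earlier can be sketched as parameters for boto3's `put_scheduled_update_group_action`; the group and action names are placeholders.

```python
# Sketch: autoscaling.put_scheduled_update_group_action(**scheduled_action).
scheduled_action = {
    "AutoScalingGroupName": "my-first-asg",
    "ScheduledActionName": "friday-5pm-scale-up",
    # Recurring actions use a UTC cron expression: minute 0, hour 17, Fridays.
    "Recurrence": "0 17 * * 5",
    "MinSize": 10,
    "DesiredCapacity": 10,
}
```

For a one-off action like the one in the console demo, you would pass a `StartTime` timestamp instead of a `Recurrence` expression.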
Two more things you should know about ASGs as a Solutions Architect. The first one is that there is a rule regarding how your instances get terminated. By default, there is an ASG default termination policy, and here's a simplified version: first, find the AZ which has the largest number of instances, and then, if there are multiple instances in that AZ to choose from, delete the one with the oldest launch configuration or launch template. So if we look on the right-hand side of our auto scaling group, we have two availability zones. In the first AZ we have four instances, a mix created from launch configuration V1 and V2, and V1 is the oldest launch configuration, as we can see; in the other AZ we have three instances. The rule is that the auto scaling group will choose the first AZ as the place to delete an instance, because it is the one with the most instances.
So in this case, that AZ will be selected, and then, within the four instances of that availability zone, the ASG will choose a V1 instance, because it has the oldest launch configuration. The idea is that the ASG, using the default termination policy, will by default try to balance the number of instances across AZs, and this is something you should know. So in this example, that V1 instance is the one that will be terminated. Next, lifecycle hooks: this is another feature of ASGs. By default, as soon as an instance is launched in an ASG, it's in service, but there is a whole list of steps that happen when you launch an instance. When your instance is launched, it goes into a Pending state, and if you define a lifecycle hook on the Pending state, the instance will go into a Pending:Wait state, where you have the option to configure that instance to do a lot of things; then, when you're ready, you move it into Pending:Proceed.
After Pending:Proceed, it will go into In Service. Obviously, if there is no lifecycle hook, it goes directly from Pending to In Service. The idea here is that you have the option to install extra software and do extra checks before your instance is declared in service. Similarly, when there is a scale-in event and an instance gets terminated, it goes into a Terminating state, and if you define a lifecycle hook on the Terminating state, it will go into Terminating:Wait, then Terminating:Proceed, and finally Terminated. So why would we have a termination lifecycle hook? Well, we would have one, for example, if we wanted to extract information, such as logs or files, out of an EC2 instance before it is completely terminated. So this is the use case for lifecycle hooks, and this is something you have to know before going to the exam. Okay, let's now talk about the difference between launch templates and launch configurations. Both launch templates and launch configurations allow you to specify the AMI ID, the EC2 instance type, the key pair you want to attach, security groups, and other parameters you may want, such as tags, EC2 user data, and so on.
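The terminating hook described above can be sketched as parameters for boto3's `put_lifecycle_hook`; the hook and group names are placeholders.

```python
# Sketch: autoscaling.put_lifecycle_hook(**terminate_hook) pauses each
# terminating instance in Terminating:Wait so, e.g., logs can be copied off.
terminate_hook = {
    "AutoScalingGroupName": "my-first-asg",
    "LifecycleHookName": "grab-logs-before-terminate",
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_TERMINATING",
    "HeartbeatTimeout": 300,      # seconds to stay in Terminating:Wait
    "DefaultResult": "CONTINUE",  # proceed with termination if the timeout expires
}
```

Once your tooling has pulled the logs off the instance, it calls `complete_lifecycle_hook` to move the instance from Terminating:Wait to Terminating:Proceed; a hook on `autoscaling:EC2_INSTANCE_LAUNCHING` covers the Pending:Wait side the same way.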
So both of these allow you to define how your EC2 instances, as part of your ASG, should be launched. But launch configurations are considered legacy; they're old, because they must be recreated every time you want to update any single parameter in them. Launch templates are the newer capability, and this is what AWS recommends you use going forward. The reasons: launch templates can have multiple versions, so they can be versioned; you can create parameter subsets, so it's possible to define a partial configuration that can be reused and inherited across multiple templates; they allow you to provision a mix of On-Demand and Spot instances, to optimize and get a better cost structure than with a launch configuration; and you can also use the T2 Unlimited burst feature. As I said, launch templates are recommended by AWS going forward, so any time in the exam you see a question, it would probably lean more towards using a launch template than a launch configuration. I don't see any reason why you would have to use a launch configuration now that it is legacy and completely replaced; launch templates are much better, newer, and shinier. Okay, so those are the things you should know about launch templates and launch configurations, and that's it for this lecture. I hope you liked it and I will see you in the next lecture. Bye.