Amazon AWS-SysOps Exam Dumps, Practice Test Questions

100% Latest & Updated Amazon AWS-SysOps Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!

Amazon AWS-SysOps Premium Bundle
$69.97
$49.99

AWS-SysOps Premium Bundle

  • Premium File: 932 Questions & Answers. Last update: Dec 5, 2022
  • Training Course: 219 Video Lectures
  • Study Guide: 775 Pages
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates


Download Free AWS-SysOps Exam Questions

File Name                                                        Size     Downloads  Votes
amazon.test-inside.aws-sysops.v2022-10-27.by.violet.537q.vce     1.62 MB  52         1
amazon.pass4sureexam.aws-sysops.v2021-06-04.by.elliot.574q.vce   1.86 MB  561        1
amazon.examcollection.aws-sysops.v2021-04-26.by.violet.537q.vce  1.48 MB  606        2
amazon.examlabs.aws-sysops.v2020-12-24.by.clara.532q.vce         1.81 MB  739        2

Amazon AWS-SysOps Practice Test Questions, Amazon AWS-SysOps Exam Dumps

Examsnap's complete exam preparation package covers the Amazon AWS-SysOps Practice Test Questions and Answers; a study guide and a video training course are also included in the premium bundle. Amazon AWS-SysOps Exam Dumps and Practice Test Questions come in VCE format to provide you with an exam testing environment and boost your confidence.

EC2 High Availability and Scalability

1. Section Introduction

The next topics for us to visit are EC2 high availability and scalability. That means load balancers and auto-scaling groups. Now, this section may seem a little bit basic, but we'll see it from a SysOps perspective. So you already know about load balancers and auto-scaling groups, but what do they mean from a SysOps perspective? For load balancers, we'll look at troubleshooting, advanced options, logging, and CloudWatch integrations. Basically, these topics come up a lot in this course because this is what the SysOps exam is about. For auto scaling, you can expect the exact same thing: we'll look at troubleshooting, advanced options, logging, and CloudWatch integrations.

2. What is High Availability and Scalability?

So this is just a quick lecture to touch base on what scalability and high availability are. This is quite beginner level, so if you feel confident about these concepts, feel free to skip this lecture. Scalability means that your application or system can handle a greater load by adapting. And there are two kinds of scalability: vertical scalability and horizontal scalability, also called elasticity. Scalability differs from high availability. They're linked, but different. So what I want to do is deep dive into this distinction, and we'll use a call centre as a fun example to really put into practise how things work. So let's talk about vertical scalability. Vertical scalability means that you increase the size of your instance. Let's take a phone operator, for example. We have a junior operator and we just hired him. He's great, but he can only take five calls per minute. Now we have a senior operator and he's much better. He can take up to ten calls per minute. So we've basically scaled up our junior operator into a senior operator. It means faster and better. This is vertical scalability. As you can see, it goes up. So, for example, in EC2, our application runs on a t2.micro and we want to scale up the application. That means maybe we want to run it on a t2.large. So when do we use vertical scalability? Well, it's very common when you have a non-distributed system, such as a database. So it's quite common for a database, for example on RDS or ElastiCache. These are services that you can scale vertically by upgrading the underlying instance type. There are usually limits to how much you can vertically scale, and that's a hardware limit. But still, vertical scalability is fine for a lot of use cases. Now let's talk about horizontal scalability. Horizontal scalability means that you increase the number of instances or systems for your application. So let's take our call centre again.
We have an operator and he is being overloaded. I don't want to vertically scale him; I want to hire a second operator, and I've just doubled my capacity. Actually, I'll hire a third operator. I'll hire six operators. I've horizontally scaled my call center. So when you have horizontal scaling, that implies you have a distributed system, and this is quite common when you have a web application or a modern application. But remember that not every application can be a distributed system. And I think it's easy nowadays to horizontally scale thanks to cloud offerings such as Amazon EC2, because we just right-click on the web page and boom, all of a sudden we have a new EC2 instance, and we can horizontally scale our application. Now, let's talk about high availability. High availability is often associated with horizontal scaling, but that is not always the case. It means that you're running your application or system in at least two data centers, or two Availability Zones in AWS. And the goal of high availability is to be able to survive a data centre loss. So in case one data centre goes down, we're still running. So let's talk about our phone operators. Maybe I'll have three of my phone operators in a first building in New York, and three of my phone operators in a second building on the other side of the United States, in San Francisco. If my building in New York loses its Internet connection or its phone connection, then okay, those operators can't work. But my second building in San Francisco is still fine and those operators can still take phone calls. So in that case, my call centre is highly available. Now, high availability can also be passive. For example, with RDS Multi-AZ we have a passive kind of high availability. But it can also be active, and this is when we have horizontal scaling. This is where, for example, I have my phone operators in both buildings, all taking calls at the same time.
So for EC2, what does that mean? Vertical scaling increases the instance size; it's called scaling up or down. For example, the smallest kind of instance you can get in AWS today is a t2.nano, with 0.5 GB of RAM and one vCPU, and one of the biggest is the u-12tb1.metal, which has 12.3 TB of RAM and 448 vCPUs. So this is a huge range, and I'm sure these instances will get bigger as time goes along. You can vertically scale from something very, very small to something extremely large. Horizontal scaling means you increase the number of instances you have, and in AWS terms it's called scale out or scale in: out when you increase the number of instances, in when you decrease the number of instances. This is used with auto-scaling groups and load balancers. And then finally, high availability is when you run instances of the same application across multiple AZs. This is, for example, an auto-scaling group or a load balancer with multi-AZ enabled. So that's it, just a quick rundown. We've now covered the terms "high availability" and "scalability". They're necessary for you to understand when you look at the questions, because they can trick you sometimes. So make sure you're very confident with those. They're pretty easy when you think about it. Keep the call centre in mind when you have these questions. Okay, that's good. I will see you in the next lecture.
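The scale-up versus scale-out distinction above can be sketched in a few lines of plain Python (this is an illustration of the call-centre analogy, not AWS code; the call rates of 5 and 10 calls per minute come straight from the lecture):

```python
# Illustrative sketch: scaling up swaps in a more capable unit,
# scaling out adds more units of the original size.

JUNIOR_RATE = 5   # calls per minute a junior operator handles
SENIOR_RATE = 10  # calls per minute a senior operator handles

def scale_up(operators: int) -> int:
    """Vertical scaling: same number of operators, each one more capable."""
    return operators * SENIOR_RATE

def scale_out(operators: int) -> int:
    """Horizontal scaling: more operators of the original (junior) size."""
    return operators * JUNIOR_RATE

print(scale_up(1))   # one senior operator: 10 calls/minute
print(scale_out(6))  # six junior operators: 30 calls/minute
```

The same arithmetic carries over to EC2: a t2.micro to t2.large move is `scale_up`, while adding t2.micro instances behind a load balancer is `scale_out`.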

3. Load Balancer Overview

Okay, so now we're going to learn about load balancing. You may be asking, what is load balancing? Well, basically a load balancer is a server that fronts your application and forwards internet traffic to the downstream instances of your application. So this is what it looks like. We have our EC2 instances running our application, and a load balancer in front of them. Now when the users connect, they don't connect directly to the EC2 instances; they connect to the load balancer. So we have user one connecting, and the load balancer redirects that traffic to an EC2 instance. The EC2 instance comes back with a response to the load balancer, and the load balancer gives the response back to user one. Similarly, user two will connect to another instance, and user three to yet another instance. It's called a load balancer because, as you can see, users one, two, and three do not go to the same EC2 instance in the backend: the load is being balanced. So that's the whole idea behind the load balancer. Why would you use one? Well, you want to spread the load across multiple downstream instances. You can scale your instances downstream but still expose only a single point of access to your application. Similarly, if your downstream instances fail, we don't want our clients and our users to see that failure, right? We want the load balancer to keep working. And so the load balancer will do health checks on our instances to make sure they're working fine, and if they're not working properly, it will redirect traffic to the instances that are. Also, the load balancer can provide SSL termination (HTTPS) for your websites. That means that the encrypted connection is terminated between the client and the ELB, and then the ELB, or Elastic Load Balancer, talks to the instances over plain HTTP.
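The balancing behaviour described above, where users one, two, and three land on different backend instances, can be sketched as a simple round-robin rotation (a minimal illustration only; real ELBs use more sophisticated routing, and the instance IDs here are made up):

```python
# Minimal round-robin sketch: each incoming request is forwarded to the
# next instance in rotation, so consecutive users hit different backends.
import itertools

instances = ["i-0aaa", "i-0bbb", "i-0ccc"]
rotation = itertools.cycle(instances)

def route_request() -> str:
    """Forward the next incoming request to the next instance in rotation."""
    return next(rotation)

# Three users' requests are spread across three different instances:
print([route_request() for _ in range(3)])  # ['i-0aaa', 'i-0bbb', 'i-0ccc']
```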
It can also enforce stickiness with cookies. That means that the user keeps talking to the same instance over time. And there is also the concept of high availability across zones. Your load balancer can run across several Availability Zones, and so do your instances, and that allows your application to be highly available in case an availability zone fails. Finally, the last benefit is the ability to separate public traffic from private traffic. Don't worry, we're going to do a deep dive on all these concepts right now. So which one would you use? Well, there is a managed load balancer in AWS called ELB (Elastic Load Balancer), and AWS guarantees that it will work, that they will do upgrades and maintenance, and that they will maintain its high availability. It also provides a few configuration knobs for you to tune things, and it's really nice because everything is managed from the UI. Overall, even though it costs less to set up your own load balancer, it will be a lot more effort on your end and, in the end, the total cost of ownership will be much higher. So it's very common to use the ELB from Amazon. Finally, it is integrated with many AWS offerings and services, for monitoring and for compute. Now, there are different types of load balancer on AWS, and that can be confusing. There are three kinds. There is the Classic Load Balancer, or v1, the old generation; this was created in 2009, so it's quite old. Then there is the Application Load Balancer, v2 new generation, from 2016, and the Network Load Balancer, v2 new generation, from 2017. Overall, it is now recommended by Amazon to use the newer v2 load balancers, as they provide more features. We'll see how the features relate to one another, but basically, the Application Load Balancer and the Network Load Balancer are what is mainly going to be asked in the exam.
Although it's still important for you to know how a Classic Load Balancer works. Overall, depending on whether you want your application to be exposed externally or internally, you can configure the load balancer as an internal load balancer or an external load balancer. So let's first talk about health checks. Health checks are really crucial for load balancers, because they allow the load balancer to redirect traffic to instances that are healthy. They enable us to know which instances are available to reply to a request. And the health check is super easy: you just give it a port and a route. For example, /health is very common, and if the instance replies with a 200 status code then the instance is healthy; otherwise, if it's not 200, the instance is deemed unhealthy. So here's what it looks like. We have a load balancer and instances in the back end, and it will perform a health check on any port you specify. So 4567, but it could be port 80 or whatever you want, and it will ask for the route /health; it's you who sets these things up. If the EC2 instance replies OK, then it's deemed healthy. If not, it's unhealthy, and the load balancer will stop sending traffic to that instance. So, Application Load Balancers are v2, and they're called layer 7 because they work at the HTTP level. Application Load Balancers allow you to handle multiple HTTP applications across machines, and we group these applications into target groups. On the next slide, I have a diagram to show you. They also allow you to load balance across multiple applications running on the same machine, for example containers. Finally, you can route based on the path in the URL, or based on the hostname in the URL, so they allow for greater flexibility.
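The health-check decision described above can be sketched as a tiny state machine (an illustration only; the threshold values of 5 consecutive passes to become healthy and 2 consecutive failures to become unhealthy mirror the console defaults discussed later in the hands-on):

```python
# Sketch of a load-balancer health check: only an HTTP 200 counts as a
# pass; consecutive passes/failures flip the instance state.

HEALTHY_THRESHOLD = 5    # consecutive 200s needed to be deemed healthy
UNHEALTHY_THRESHOLD = 2  # consecutive failures needed to be deemed unhealthy

def evaluate(responses):
    """Walk through successive health-check status codes and report state."""
    passes = failures = 0
    state = "initial"
    for status in responses:
        if status == 200:
            passes, failures = passes + 1, 0
            if passes >= HEALTHY_THRESHOLD:
                state = "healthy"
        else:
            passes, failures = 0, failures + 1
            if failures >= UNHEALTHY_THRESHOLD:
                state = "unhealthy"
    return state

print(evaluate([200] * 5))              # healthy
print(evaluate([200] * 5 + [500] * 2))  # unhealthy: two consecutive failures
```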
Overall, I think Application Load Balancers are a great fit for microservices and container-based applications, for example Docker and Amazon ECS, but also for Amazon EC2, as we see in this lecture. Finally, there is a port-mapping feature: the load balancer can redirect to any dynamic port in the backend, and this is what allows the load balancer to redirect to multiple instances of the same application running on the same machine. In comparison, if you wanted to run five microservices using Classic Load Balancers, you would need five Classic Load Balancers, and that was very expensive and inefficient. Now, with an Application Load Balancer, if you wanted to, you could use one load balancer to front ten applications, and it would work perfectly. So as a diagram, what does it look like? Well, we have external traffic going into our external Application Load Balancer v2. Behind it, there's going to be a target group; maybe our first application is the users application. So we have our target group with two instances, and we also define a health check to make sure these instances are healthy. The load balancer routes the HTTP traffic based on the path. So if a user comes and requests the /user route, the load balancer says okay, I'm going to redirect to that target group and to the instances in it that are healthy. Additionally, we can set up a second target group, maybe for the search application, with the same concept: many instances, with health checks defined on them to know whether they're healthy or not. And if the user requests the /search route, the Application Load Balancer will redirect to the search target group. So that's the idea. You can have as many target groups as you want behind your ALB, as many instances as you want in each target group, and health checks to check the status of those instances. So what's good to know for the ALB?
Well, you can enable stickiness at the target-group level, which means that the same user always goes to the same instance. The cookie is generated by the ALB itself, at the ALB level, not by the application; your application has nothing to do. The ALB supports the HTTP, HTTPS, and WebSocket protocols, and the application servers don't see the IP of the client directly. That's a very common and popular question on the exam as well. Basically, the true IP of the client is inserted into a header called X-Forwarded-For. And if you also want the port and the protocol, you can read X-Forwarded-Port and X-Forwarded-Proto. So you may be asking what that looks like. Well, we have a client with IP 12.34.56.78, and it talks to our ALB. What happens is that the ALB does something called connection termination: it establishes a new connection to your EC2 instances. So your EC2 instance sees the private IP of your load balancer, but it doesn't see the IP of the client. For your EC2 instance to see the IP of the client, so 12.34.56.78, it needs to look at the X-Forwarded-For header. This is how you get it, and it is a very popular question in the exam. Now, if we look at Network Load Balancers, they're layer 4, so they're a bit lower level, and they're for TCP traffic, okay? Before, it was layer 7 HTTP traffic; now it's layer 4 TCP traffic. Network Load Balancers are advertised as super high performance: they can handle millions of requests per second. They support static and Elastic IPs, and they have less latency: about 100 milliseconds versus maybe 400 milliseconds for the ALB. Those are ballpark numbers. They're most commonly used for extreme performance and probably won't be the default load balancer you'll choose, but overall the setup is the exact same process as the Application Load Balancer.
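Reading the forwarded headers described above can be sketched as follows (illustration only; the header values are made up, and real requests behind several proxies carry a comma-separated chain in X-Forwarded-For, of which the left-most entry is the originating client):

```python
# Sketch: recovering the client IP behind an ALB. The ALB terminates the
# connection, so the instance sees the ALB's private IP; the original
# client IP arrives in the X-Forwarded-For header.

def client_ip(headers: dict) -> str:
    """Return the originating client IP from the X-Forwarded-For header."""
    xff = headers.get("X-Forwarded-For", "")
    return xff.split(",")[0].strip() if xff else ""

headers = {
    "X-Forwarded-For": "12.34.56.78",
    "X-Forwarded-Proto": "https",
    "X-Forwarded-Port": "443",
}
print(client_ip(headers))  # 12.34.56.78
```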
And so if we look at the diagram, we get the exact same diagram as before, but instead of HTTP traffic, we're talking about TCP traffic. And that's it: the TCP traffic will be routed to different target groups. So that's about it for Network Load Balancers. Just remember they're higher performance. Now, good to know: the Classic Load Balancer, as I said, is deprecated. You want to use the Application Load Balancer for HTTP, HTTPS, and WebSockets, and the Network Load Balancer for TCP. The CLB and ALB support SSL certificates and offer SSL termination. All load balancers have a health-check capability. The ALB can route based on the hostname and the path, and the ALB is a great fit with ECS or Docker. Also, all these load balancers, Classic Load Balancer, Application Load Balancer, and Network Load Balancer, have a static host name. That means that, as we'll see in the hands-on in the next lecture, we get a URL, and that is the URL our application should use from then on. We should not resolve that URL and use the underlying IP. That is also a very popular question. Load balancers can scale, but not instantaneously. So if you expect a massive load, like you know a lot of traffic is coming, you should contact AWS for them to warm up your load balancer so it can scale. The NLB, as opposed to the ALB and CLB, passes the client IP straight through to the application side, so there are no X-Forwarded-For or X-Forwarded-Proto headers with a Network Load Balancer. And then you should know that 4xx types of errors are client-induced errors and 5xx types of errors are application-induced errors. If you get a 503, that means that your ELB has no more capacity or no registered targets. And if the load balancer cannot connect to your application, you should check your security groups.
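The status-code rule of thumb just given can be captured in a short helper (a sketch for exam revision, not an AWS API):

```python
# Sketch of the error-code rule of thumb: 4xx errors are client-induced,
# 5xx are application-induced, and a 503 specifically means the ELB has
# no capacity or no registered targets.

def classify(status: int) -> str:
    if status == 503:
        return "elb: no capacity or no registered targets"
    if 400 <= status < 500:
        return "client error"
    if 500 <= status < 600:
        return "application error"
    return "ok"

print(classify(404))  # client error
print(classify(503))  # elb: no capacity or no registered targets
print(classify(500))  # application error
```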
So I'm aware that was a very heavy lecture with lots of knowledge right here. Don't worry, we're going to go into a hands-on lab in the next lecture to practise this. But just remember, we have three types of load balancers, one of which (the CLB) is deprecated, and the most common one you're going to see on the exam is the ALB, the Application Load Balancer. Don't forget about stickiness, don't forget about health checks and target groups. Okay? I hope that was helpful. I will see you in the next lecture.

4. Load Balancer Hands On using SSM

Okay, so in this lecture we're going to launch a load balancer and three EC2 instances that we're going to manage using SSM, just for fun. So let's go ahead to the EC2 console. I type EC2, and the first thing to do is launch instances. We'll launch three of them, one in each Availability Zone. To do this quickly, I'll select Amazon Linux 2, click Select, choose t2.micro, click Configure Instance Details, and here under Subnet, instead of "no preference", the first one is going to be in eu-west-1a. The second thing I want to do is assign an IAM role called AmazonEC2RoleforSSM, which will allow us to manage our instances from SSM, just for fun. Again, we don't have to do it, but I think it's a good way of tying in with the previous section. Then I click Next: Add Storage, this is fine; Add Tags, this is fine as well; and then Security Group. Here I will create a new security group and call it My Web App. Very simple, and it was just created. In this security group I will not add an SSH rule, because now we don't need SSH: we have SSM. I will just add an HTTP rule allowing traffic from anywhere, so that on port 80 we can talk to our instances and run a web server. Then Review and Launch, and finally Launch. I do have my AWS course key pair, so I'm fine. I click on Launch Instances and here we go: our instance is now pending. What I'm going to do is launch the others like this: I'll right-click it and click Launch More Like This, which is a shortcut that prefills the same parameters for you. So the only thing I'm going to change here is, scrolling down to the instance details, this one is going to launch in eu-west-1b; then click Review and Launch. And as you can see on this page, everything is the same.
So t2.micro, the same security group called My Web App, the same rules, the same instance details; we have the IAM role attached and so on. So I click Launch and acknowledge. I'll do this one last time: right-click on this one, click Launch More Like This, and then scroll down to the instance details. This one goes in eu-west-1c. Excellent, launch, and here we go. Okay, so we have launched three instances, and now we should just wait for them to come up, and then we're going to install the Apache web server on them. Before we do this, let's go to the target groups. On the left-hand side panel, if you scroll down under Load Balancing, there's Target Groups, and this is where we're going to create the target group for our instances. We could create the load balancer first, but I like to create the target group beforehand. It makes it a little bit cleaner and I think it separates the steps very well. So I'll create a target group and call it My Apache App. The target type is Instance, so we're going to direct traffic to specific instances, on the HTTP protocol, port 80, in the VPC where we've launched all our instances. Then, for the health check, the protocol is HTTP and the path is slash (/). That means we're just going to health check the root of our web server. The health check settings are kind of cool. If you go to Advanced, we can see that the check goes to the traffic port, so port 80, and an instance has to pass the check five consecutive times to be considered healthy. If it fails twice in a row, it will be considered unhealthy. There's a timeout of 5 seconds and an interval of 30 seconds between checks. I'm just going to change the interval to 10 seconds, to be a little bit quicker. Click Create, and here we go: we have created a target group. It's not too difficult.
Right now, under the target group, we need to attach targets. So click on Targets, and here we can edit it and register targets. We've just created three instances, so we're going to register them as targets. As we can see, the zones are 1a, 1b, and 1c. Excellent. Click on Save, and here we go. Now we have three registered targets, and the current status is "unused" because the target group is not configured to receive traffic from a load balancer. So why don't we go ahead and launch a load balancer? We click on Load Balancers and Create Load Balancer. As we can see, and we know this already, there's an Application Load Balancer, a Network Load Balancer, and a Classic Load Balancer; this last one is deprecated, the previous generation. We're going to use an Application Load Balancer because we want HTTP traffic. If we had a super high-performance type of application (this is a common exam question), then the Network Load Balancer would be preferred. But because our application is normal performance, we'll use an Application Load Balancer. Click Create. We're going to name it My Apache LB, and it's going to be internet-facing. The address type is going to be IPv4, not dual stack; we don't need to support IPv6 right now. Then it's going to listen on port 80, which sounds fine. In terms of the Availability Zones it's going to be deployed into, we're going to select all three, so that we have a highly available setup. It is very important to understand how to make a load balancer highly available: you have to launch it into several Availability Zones. Okay, next is the security group setting. We click Configure Security Groups, create a new security group, and call it My Load Balancer. It will allow port 80 from the outside, so from my computer I should be able to access my load balancer.
Click on Configure Routing, and here we could create a new target group, but this is why we wanted to create one beforehand. So we'll select the existing target group, My Apache App, and from here we can review the settings; they look fine. Click on Register Targets and, as you can see, all three targets are registered because we registered them before. Next, review, and everything looks good. Click Create. Now all these things will be created and provisioned, and the load balancer may take a little bit of time. Right now the state is "provisioning", so I'll just pause the video while it happens. Okay, excellent. My load balancer is active and, as we can see under Listeners, it is forwarding to the My Apache App target group. If we look at this target group, we can see that the three registered instances are unhealthy. That's because we haven't deployed our web app yet. But I wanted to show you that if we don't deploy a web app, the targets stay unhealthy. So let's go ahead into SSM. We'll open Systems Manager and use the Run Command document we created before to install Apache. I click on Run Command, filter the documents by owner (the owner being me), and select Install and Configure Apache. The default document version is fine. We're going to say Hello World, and then we specify the three instances, which appear in SSM thanks to the IAM role we attached before, to have these commands run on them. Okay, in terms of rate control, we're going to do it all at once, so we leave this at 50 targets at a time, and we're not going to write the output to an S3 bucket or CloudWatch, but we could. Click Run, and here we go. My command is running, and right now my three targets will get Apache installed on them. I just waited a minute and here we go. Success! That was really quick.
But now my three instances have Apache installed on them, and hopefully within a minute, because the health checks need some time to happen, we should see the health checks become healthy. So I'll just wait a little bit. So I just refreshed, and now all three of them are healthy, which is awesome. Let's go to our load balancer and see if everything works. We'll go to the description, and here is the DNS name of my load balancer. I'll copy it, open a new tab, paste it, press Enter, and here we go: we're getting the Hello World. The cool thing is that, because it's a load balancer, if I refresh, I get redirected to a different instance every time. So you see 271-3833. Here our load balancer is shifting traffic between our three instances, which is really, really awesome. Now, in a perfect world, we also want security to be super tight, and so we don't want to be able to access our instances directly from their IP, right? Right now, if I do this, I get the Hello World directly from the public IP. What we want is to allow only the load balancer to access my EC2 instances, not me. For this, I'll return to Security Groups and edit the inbound rules of My Web App. I will still allow port 80, but only from the security group of my load balancer. So we only allow traffic on port 80 from the load balancer, and that makes our setup way more secure. So here we go. Now if we go to my load balancer and refresh, we should still be able to access our instances, and indeed, it's working. But if we go to the instance IP directly and refresh, now we see a long loading: it's timing out, and because it's timing out, that means the security group firewall is blocking us from making the request, which is exactly what we wanted. So by modifying the security groups, we have a more secure setup. That's it.
What I really like about this hands-on is that we do launch three instances in a highly available setup. We have a load balancer redirecting traffic to these three instances, and on top of that, we've used SSM to configure the instances, and I think that's quite awesome. That's quite a nice way of applying SSM and configuring three instances at a time in less than a second. OK, so I hope you like this. I will see you in the next lecture. Bye.

ExamSnap's Amazon AWS-SysOps Practice Test Questions and Exam Dumps, study guide, and video training course are included in the premium bundle. The exam updates are monitored by industry-leading IT trainers with over 15 years of experience. Amazon AWS-SysOps Exam Dumps and Practice Test Questions cover all the exam objectives to make sure you pass your exam easily.
