Amazon AWS DevOps Engineer Professional – Configuration Management and Infrastructure Part 12

  1. ECR – Part II

So let’s go to task definitions, to my Httpd, and I’m going to create a new revision. So I click on this one, “Create new revision.” And the task role will be the same as before. Network mode: EC2. All great. We continue to scroll down. All great. And this time, the container will be different. So now, as you can see, the custom format needs to be the full image name. So how do we get the full image name? Well, if we go back to ECR, basically, this is the full image name.

So here what I’m going to say is: okay, the image we’re going to pull is from ECR, and it’s actually going to be very smart and use IAM to pull that image. So here is my full image name: it comes from ECR, it’s the demo image, and the tag is latest. Okay, that sounds good. The rest is exactly the same, so I’ll just go ahead and click Update. The difference now is that our HTTPD container will no longer be pulled from Docker Hub, but rather from ECR. Okay, click on “Create.” And here we go, we have a third revision of our task definition. Let’s go to the cluster’s demo service, and this time we can update the service.
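The full image name follows a fixed pattern, so here is a small sketch of how it is composed. The account ID, region, repository name, and tag below are made-up placeholders for illustration.

```python
# Sketch: how a full ECR image URI is composed.
# All values here are placeholders, not real account details.
def ecr_image_uri(account_id: str, region: str, repo: str, tag: str = "latest") -> str:
    # ECR image URIs follow <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag>
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

uri = ecr_image_uri("123456789012", "us-east-1", "demo", "latest")
print(uri)  # 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest
```

This is the string you paste into the “Image” field of the container definition instead of a bare Docker Hub name like `httpd`.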

So I’m going to click on Update, and the revision is going to be three. Excellent. I click on Next Step, Next Step, and Next Step, and everything looks great. Click on “Update Service.” And now we’re basically asking our EC2 instances to pull the image from ECR and run it. So let’s wait and see what happens. Right now, all my tasks for task definition revision two are running. So let’s just wait a little bit until they deregister from my ALB and start rolling out an update. So I just clicked on the refresh button right here. And as we can see right now, we have a mix of task definition revisions two and three. That’s because there is a rolling update happening, and so new tasks for revision three are being launched and the ones from revision two are being shut down. So we have some tasks on revision three and some on revision two. If I refresh it now, we have a bit more of revision three. So I’ll just wait a little bit longer, maybe until all these tasks are on revision three. So I’ll just wait a minute. And while this happens, I want to show you what’s happening behind the scenes. So if I go to EC2 and go to my ALB, this is my target group, and I go to Targets. As we can see, two are healthy.

So these two are the new task definition, and four tasks are draining. When a task is draining, that means it’s being deregistered from the ALB, and that can take a lot of time. And the reason it can take a lot of time is that if you go to the description, scroll down, and look at “deregistration delay,” you can see there’s a 300-second grace period for the draining to happen. So we have to wait about five minutes, basically, for these targets to be drained. And you could speed that up if you wanted to, if you were a bit brutal, by just stopping individual tasks. So this is what I’m going to do because I’m a bit hurried. So I’m going to go to Tasks, and I’ll choose the tasks with task definition revision two, and I’ll click on Stop. I’m a little harsh, and here we go, Stop. But if I had waited five minutes, it would have done the rolling update very nicely, just like I would expect it to. So I’ll just wait a little bit until the service basically spins up four tasks, because it’s going to realise that, yes, it’s missing some tasks. So let’s go to Tasks and just wait a little bit here, maybe a minute. Okay, so this took a little bit of time, but now my four tasks are running on my instances. But now let’s test with the ALB and see if it really works.
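The 300-second draining window is a target-group attribute, so instead of brutally stopping tasks you could lower it. Here is a sketch of the attribute involved; the target group ARN is a placeholder, and with boto3 you would pass these parameters to `elbv2.modify_target_group_attributes()` (not executed here).

```python
# Sketch: the target-group attribute that controls how long the ALB drains
# (deregisters) a target. The ARN below is a made-up placeholder.
params = {
    "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/demo/abc123",
    "Attributes": [
        # 300 seconds is the default; a lower value speeds up rolling updates
        {"Key": "deregistration_delay.timeout_seconds", "Value": "60"},
    ],
}
print(params["Attributes"][0])
```

Shortening the delay trades faster deployments for a higher chance of cutting off in-flight requests to the old tasks.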

So I’m going to go to my load balancer and get the URL again. Here we go, the DNS opens in a new tab, and here we go: we are getting “Hello World” from our custom Docker image. The image is running on ECS, and here is some information about this container and task. This is a small script I wrote to display this information, so we can see that the image is from ECR, the image ID, the ports information, some labels, the CPU and memory limits, when it was created, and the Docker ID. And the really cool thing is that you can see how everything changes by refreshing this page. That’s because our load balancer is actually doing load balancing across our different tasks, and this is why the task, for example, is always changing. So this was a really, really cool example to show you basically how to do a rolling restart, how to use our custom Docker image, and how to display some important information in there. And what I wanted to call your attention to is that our EC2 instance was able to pull the image for these tasks.

It was able to pull the image from ECR, and the reason it was able to do it is because it has the right IAM permissions. So, to wrap up this lecture, let’s take a quick look at the IAM permissions. If we go to Instances, go to our instance, and look at the IAM role, this ECS instance role is actually very, very privileged. As we can see in the policy summary, for the Elastic Container Registry it’s able to read all resources, and this is why it was able to pull our Docker image from ECR. If you basically can’t pull from ECR and you get an error, it is most likely due to missing IAM permissions, and that’s also a very common exam question. So hopefully that puts things into perspective. I hope you understand how ECR works. This is just scratching the surface, but hopefully it’s enough for you to get a better idea of how to use it in real life. I’ll see you in the next lecture.
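To make the exam point concrete, here is a sketch of a minimal hand-written IAM policy that would allow an instance to pull from ECR. The AWS-managed AmazonEC2ContainerServiceforEC2Role policy includes these ECR actions among others; this reduced version is for illustration only.

```python
import json

# Sketch: a minimal IAM policy allowing an EC2 instance to pull ECR images.
# If these actions are missing from the instance role, image pulls fail.
ecr_pull_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",       # log in to the registry
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",      # pull image layers
                "ecr:BatchGetImage",               # pull the image manifest
            ],
            "Resource": "*",
        }
    ],
}
print(json.dumps(ecr_pull_policy, indent=2))
```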

  1. Fargate

Okay, so finally we talk about one of the services I’m most excited about, which is Fargate. So ECS has been here for a long time, and when I first used ECS, there was no Fargate. And so when you wanted to launch an ECS cluster, you had to create your own EC2 instances, basically what we did in the previous hands-on. And then, if we needed to scale, we needed to add more EC2 instances and then scale our service. It was really hard, and we had to manage infrastructure. And then one day AWS came up with this revolutionary service called Fargate, saying it’s all serverless and you don’t need to create EC2 instances anymore. So you don’t provision EC2 instances anymore.

And AWS basically stated: you will simply define the task, and we will run the container for you, which is fantastic. I don’t have to be concerned about EC2 instances. And so if I want to scale my service, I just increase the number of tasks I want. There’s no need to worry about EC2 anymore. And so Fargate is very revolutionary, because now if you want to run a Docker container in the cloud, think no more, think Fargate, and don’t even worry about provisioning EC2 instances. AWS manages everything behind the scenes, and you don’t need to worry about anything. So let’s go and have a quick look at how Fargate works. All right, so let’s go ahead and use Fargate.

So Fargate is the serverless offering, and for this, we’re going to create a new cluster. We could use the same cluster as before, but I really want to separate things to show you how things work. So we’re going to create a new cluster, and this one is going to be “Networking only.” As you can see, we’re going to create a cluster, the VPC and subnets are optional, and everything will be powered by Fargate. And you’ll see how much simpler that is. So forget everything we’ve done on creating EC2 instances and so on. Now this is Fargate, and this is going to be serverless and so much easier. So click on “Next step.” The cluster name is going to be “Fargate demo.” And we don’t even need to create a new VPC for this one; we’re going to use what we already have. Click on “Create,” and here we go, we’ve created our first cluster. How simple was that? No crazy stuff, right? So if you go back to the clusters UI, you can see there’s a Fargate demo and a cluster demo. Let’s open the Fargate demo. Excellent. As we can see, there are no container instances, because it’s a Fargate cluster.

And so far, we have zero Fargate tasks, and so on. But let’s create our first service. So we create a service, and the launch type this time is Fargate. And as we can see, the task definition we have is incompatible with the launch type. So we need to create a new task definition that’s going to be compatible with Fargate. Okay, so we got that. So let’s go to Task Definitions, create a new task definition, and this time we’re going to select a launch type compatibility of Fargate. Perfect. Click on Next Step, and the task definition is going to be called “Fargate Task Definition Demo.” Okay, the task role, again, is if you want to assign a role to the tasks themselves, but we won’t do that right now. And we scroll down. Task memory is allocated in increments: 0.5 GB, 1 GB, 2 GB, 3 GB, and so on. So you define how much memory you want your task to have; we’ll choose 0.5 GB. Task vCPU is how many vCPUs you want your task to have; we’ll choose 0.25, which is basically going to be a very, very small container, because we don’t need much right now anyway. And now, for the container definition, we’ll go ahead and add a container, naming it HTTPD and providing the image URL. Again, I’m going to copy the image URL from ECR.

We pull it from ECR, paste it, and then we’re just going to provide a hard limit of 512 megabytes, which is what we requested. For port mapping, we’re going to add a container port of 80, and that will be enough. There’s no host port mapping when you’re in Fargate, because we don’t need to do that; it’s intelligent enough, so we’ll let it handle the routing on its own. Okay, so we have our HTTPD container, and this is excellent. It’s created, and everything looks good. And the cool thing we see about Fargate is that we define tasks as an amount of RAM and CPU, and that’s it. From there, AWS will automatically provision our task for us. So I think that’s really cool, because we don’t need to worry about EC2 anymore. So everything has been created; we view the task definition, and we’re good. Let’s go back to our cluster, the Fargate demo, and here we can create a service. Let’s create a service. Fargate will be the launch type. The platform version is going to be the latest. The cluster is the Fargate demo.
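The console choices above map onto a task definition document. Here is a sketch of the shape of a Fargate-compatible `register_task_definition` payload matching what we selected (0.25 vCPU, 0.5 GB memory); the image URI is a placeholder.

```python
# Sketch: a Fargate-compatible task definition payload.
# The ECR image URI is a made-up placeholder.
task_def = {
    "family": "fargate-task-definition-demo",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # mandatory for Fargate tasks
    "cpu": "256",              # 0.25 vCPU, expressed in CPU units
    "memory": "512",           # 0.5 GB, expressed in MiB
    "containerDefinitions": [
        {
            "name": "httpd",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest",
            "memory": 512,     # container hard limit in MiB
            # only a containerPort — no hostPort mapping needed on Fargate
            "portMappings": [{"containerPort": 80}],
        }
    ],
}
```

Note how task-level `cpu` and `memory` are required strings for Fargate, while the EC2 launch type leaves them optional.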

The service name is “Fargate Service Demo.” The service type is a replica. Let’s set the number of tasks to two, just to avoid going overboard with timing and pricing. The minimum healthy percent is 100 and the maximum percent is 200. Again, don’t worry about it too much. And then the deployment type is going to be rolling updates. We could also choose blue/green, but for now rolling updates are fine. Click on “Next step.” And here we have to select the VPC, and we must specify which subnets we want our tasks to run in. So I’ll select my three subnets, and for the security group, a new one would suffice, but actually I can reuse a security group I already have, basically the one from the EC2 Container Service cluster that was already created. So here we go, save. We’re using the same security group as before, with auto-assign public IP enabled or disabled depending on whether you want your tasks to be private or not. We’ll just leave it enabled, and then you can add the service to a load balancer, which we will do. So we’ll add it to our application load balancer, and basically the load balancer name is what we have from before, which is excellent, and we’ll add this container to it.
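Because there is no host to bind to, a Fargate service carries its networking in an `awsvpc` configuration. Here is a sketch of a `create_service`-style payload matching the choices above; subnet and security-group IDs are placeholders.

```python
# Sketch: the parameters a Fargate service needs (IDs are placeholders).
service = {
    "cluster": "fargate-demo",
    "serviceName": "fargate-service-demo",
    "launchType": "FARGATE",
    "desiredCount": 2,                       # two replica tasks
    "deploymentConfiguration": {
        "minimumHealthyPercent": 100,        # never drop below desiredCount
        "maximumPercent": 200,               # allow doubling during rollout
    },
    # each task gets its own ENI in these subnets, guarded by this SG
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",     # DISABLED for private tasks
        }
    },
}
```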

We’ll use the same port as before, and the target group is going to be a new target group, which is this one. So we’re going to have a little bit of trouble here, because if we use a path pattern of slash, it’s going to say that the path pattern is already in use for this listener. So we actually have to go back really quickly to EC2 and go to Load Balancers, and in the load balancer, we’re just going to do a quick hack to get rid of this error, because it’s otherwise too complicated. So, under Listeners, we’re going to change the rules, and this one we can just delete. So I’m just going to delete that rule, which will basically render the other service not in use anymore. Okay, so now the rule has been deleted, and we just have this default rule. So, if we go back in here, refresh my load balancers, and add to the load balancer, it should work this time. So let’s go to the settings. The target group is new, and the path pattern is slash, so hopefully that works. And there is only one evaluation order. Here we go, now it works. So we’re basically replacing our old ECS listener rule with this one, okay?

And the health check path will be just slash, and we will disable service discovery. Click on “Next step.” We can set up auto-scaling, but we’re not going to do it. And now here we go, we’re ready to create the service, and hopefully, if everything works, everything gets created. And the really cool thing is that here we haven’t provisioned any EC2 instances; there is no EC2 instance in a Fargate service, but automatically AWS will, behind the scenes, create some magic Docker containers for us. So this is running Docker containers in a serverless fashion, without us doing anything or provisioning any EC2 instances, which I think is really, really awesome. So now the task status is “pending,” so let’s just wait a little bit until the tasks go from pending to running. Very cool, our tasks are running. So if we go back to our cluster, we have two tasks that are running. They’re not attached to any container instance. Remember, it’s serverless, so there is no container instance we can look at. But now our service is ready. So, if I return to my DNS and simply refresh this page, as we can see, I still get the answers from my containers, and this time the Docker name is an ECS Fargate type of definition, which is really, really cool. So now it’s actually running on Fargate, and we know it’s running on Fargate because the network mode in here is awsvpc. So this was a cool way of showing you: here’s how we can do everything we’ve done before, but so quickly, with Fargate, without running any EC2 instances or an auto-scaling group or whatever, just using a Fargate service and a Fargate cluster. So that’s it for this lecture. I hope you enjoyed it, and I will see you in the next lecture.

  1. ECS & Multi Docker Beanstalk

Another topic that comes up at the exam more and more now is running Docker containers through Elastic Beanstalk and ECS. So there is this option that we haven’t seen yet, but we’ll see it as a hands-on, which is that you can run your Elastic Beanstalk application either in single Docker container mode or multi-Docker container mode. The only difference is that if you run in multi-Docker container mode, you can run multiple containers in your Elastic Beanstalk environment, per EC2 instance. So what will this single- or multi-container mode do for you?

Well, this will create an ECS cluster, and your EC2 instances will be configured to use that ECS cluster. It will create your load balancer if you select, obviously, the high-availability mode. And it will create the task definitions and ensure that they are executed correctly. The only thing you need to provide is this file called Dockerrun.aws.json, and it needs to be at the root of your source code. So remember that. Overall, your Docker images need to be created in advance (Beanstalk doesn’t create them for you), and maybe you want to store them in ECR, as we’ve seen in this course. Okay, so from this slide, you need to remember that there is a single and a multi-Docker container mode, and that there is this file named Dockerrun.aws.json that you need to have at the root of your source code. Now, in terms of a diagram, what does it look like? When you create your Beanstalk environment, it creates a load balancer. It will also create an auto-scaling group and an ECS cluster within it. Your EC2 instances will be created and automatically registered to the ECS cluster. Maybe you’ll have three containers. So this is a multi-Docker-container type of deployment with microservices.
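To make the file concrete, here is a sketch of a minimal Dockerrun.aws.json for multi-container mode (version 2 of the format), built and printed from Python. The container names, images, and memory values are illustrative, not taken from the lecture’s environment.

```python
import json

# Sketch: a minimal multi-container Dockerrun.aws.json (version 2).
# Container names, images, and memory sizes are illustrative placeholders.
dockerrun = {
    "AWSEBDockerrunVersion": 2,          # version 2 = multi-container mode
    "containerDefinitions": [
        {
            "name": "php-app",
            "image": "php:apache",       # pre-built image, e.g. from ECR
            "essential": True,
            "memory": 128,
        },
        {
            "name": "nginx-proxy",
            "image": "nginx",
            "essential": True,
            "memory": 128,
            # expose the proxy on host port 80 behind the load balancer
            "portMappings": [{"hostPort": 80, "containerPort": 80}],
        },
    ],
}
# This file must sit at the root of the source bundle you upload to Beanstalk.
print(json.dumps(dockerrun, indent=2))
```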

Such as PHP and NGINX, and maybe another container. They will be replicated on each EC2 instance, and through the load balancer, maybe on port 80, traffic will reach your NGINX container. That means that you can access your Beanstalk application through port 80 on the load balancer. But maybe there is also port 1234 on some other container, and you could access that as well through the Beanstalk URL on port 1234. So that’s the whole idea behind running multiple Docker containers in Beanstalk. Let’s go through the steps one by one to see how that works. So in my Beanstalk console, I’m going to create a new environment, and it will be a web server environment. And I will just name it like this; it’s fine. And I’ll go with a pre-configured platform. And here we can see that we can select a single Docker container, which is simply called Docker, or Multi-container Docker. And this is the mode we’re interested in, because it’s the more interesting one where we can run multiple Docker containers per EC2 instance. So we’ll do this, and we’ll select a sample application, because we don’t want to bother with anything else. And I will configure more options. I will go for the more expensive route just to show you how things work.

So I’ll select high availability in terms of capacity. I will modify it, and I want at least two EC2 instances. I’ll save this, and I will have a load balancer, which is great. So this is high-availability mode, with at least two EC2 instances. And this is just a custom configuration. Okay, so we’re good. I will create the environment, and it will go ahead and create all the resources for me. So I’ll just wait until this is done so we can check what was created and how things work. My environment has now been created, and if I go to the URL, I get welcomed, and it says that our Docker container is now running in Elastic Beanstalk. So that’s perfect. We’ve done our first multi-Docker deployment, but that was too easy. Let’s have a look at what happened under the hood. So, let’s go to ECS and also to EC2. All right, so under EC2, if I go to Auto Scaling Groups, I should find that there’s a new auto-scaling group. This is my Elastic Beanstalk running in multi-Docker container mode, and the minimum capacity is two. So we should have two EC2 instances being created. There’s also a load balancer that should have been created. Now, I’m not sure which one, but one of these is going to be pointing to my application, which we already know because we’ve used Beanstalk before, but now it’s going through ECS. And in ECS, there is this new cluster that was created for us.

So, perfect. Now in this cluster, as we can see, we have no services defined. So Beanstalk does not define a service; instead, it defines tasks. And we can see that two tasks are running and we have two ECS container instances. They’re basically our EC2 instances registered to our cluster. Now, if we look at these tasks, they both point to the same task definition. So if I click on the task definition, we can see that Beanstalk did indeed create a task definition. And if we look at the JSON of it, we can see that there are a lot of parameters. And if we scroll all the way down, we can see that we have two containers. But I’m going to show you this in a much better format right here. So here, if you look at the container definitions, we can see that there is a PHP application that is running and an NGINX proxy. And that NGINX proxy is the thing that maps to port 80. We have a PHP application and an NGINX proxy, which is fantastic. And if we go back to Amazon ECS now and click on a task, we can see that this task does indeed run two containers. Hence the name “multi-Docker container Beanstalk.” That’s all I want to show you. This is everything that was created for us. As we can see, we run two containers per EC2 instance, and we have two EC2 instances, so we’ll have four containers in total. And Beanstalk provisioned everything for us so that we could deploy Docker containers directly through Beanstalk, with ECS underlying it. All right, that’s it for this lecture. I’ll go ahead and destroy my environment, but I hope you liked it. I will see you in the next lecture.

  1. ECS – IAM Roles

So this is me with my DevOps videos now, and I have also created Fargate and ECS classic clusters, so we are probably in the same kind of setup. Anyway, this is fine. I want to draw your attention to IAM roles. So, in the case of ECS Classic, we had created ECS container instances, which are simply EC2 instances. And these instances have IAM roles attached to them. This is known as an ECS instance role. This IAM role is able to talk to the ECS API using this AWS-managed policy called AmazonEC2ContainerServiceforEC2Role. So here we’re able to, for example, create a cluster, deregister a container instance, register a container instance, and so on. We can also get images directly from ECR to download the Docker images we require. So this is an IAM policy that gets attached to your EC2 instance, and as such, it provides your EC2 instance with the ability to interact with the ECS service, right? But when we launch services, and these services launch tasks, the tasks themselves have a task definition. And the task definition (not this revision, but maybe revision two) can have a task role assigned to it. And a task role is also an IAM role. This time, however, the trust relationship is with ECS tasks. So this is a role that can only be assumed by ECS tasks.

And this will give the Docker container the ability to do things with AWS. So in this example, I have created an ECS task role and attached the policy AmazonS3ReadOnlyAccess. As a result, my Docker containers that are running now have the ability to issue API calls to S3. So for this example, my EC2 instance has a role, and it’s allowing the agent to make calls against the ECS service. And then each of my Docker containers running within my instance also has a role attached to it, and it’s an ECS task role. So you must understand the distinction between these two roles before taking the exam. In ECS Classic, when we have EC2 instances, you have two kinds of roles: the EC2 instance role and the task role. But in the Fargate case, because there is no EC2 instance being managed, only the tasks have a role, and that makes the model a little bit simpler. But going into the exam, you need to understand that there are two different kinds of IAM roles: one for the EC2 instance and one for the task. Okay, I’ve said it enough. I will see you in the next lecture.
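What makes a role a “task role” is its trust policy. Here is a sketch of the trust relationship that distinguishes the two kinds of roles discussed above; the document shape is standard IAM, shown for illustration.

```python
# Sketch: the trust policy that makes an IAM role an ECS *task* role —
# only ECS tasks may assume it.
task_role_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ecs-tasks.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
# An EC2 instance role, by contrast, trusts "ec2.amazonaws.com" instead,
# so the same role cannot serve both purposes.
```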
