Amazon AWS DevOps Engineer Professional – Configuration Management and Infrastructure Part 3
CloudFormation – Parameters from SSM
Okay, so our first topic for CloudFormation for DevOps is going to be understanding how we can use parameters from SSM, or Systems Manager, in the Parameter Store and inject them into our CloudFormation templates. So let’s have a look at what it looks like, and then we’ll be discussing use cases. So here is our CloudFormation template, and here you have a Parameters section, and we have an instance type parameter. But now the type is not a String as we’re used to knowing it; it’s an SSM parameter value of a String: AWS::SSM::Parameter::Value<String>. And the default value of this is the name of the EC2 instance type parameter. And then for ImageId, we have something similar, where the type is AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>, defaulting to the name of the EC2 image ID parameter. So this is something new, and it means this parameter should be fetched from that value in the Parameter Store.
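To make this concrete, here is a minimal sketch of what such a Parameters section could look like. The parameter names /ec2/instance-type and /ec2/image-id are placeholders — use whatever names you create in the Parameter Store:

```yaml
Parameters:
  InstanceType:
    # Fetch a plain string from the SSM Parameter Store
    Type: AWS::SSM::Parameter::Value<String>
    Default: /ec2/instance-type   # placeholder parameter name
  ImageId:
    # Fetch an AMI ID, validated as an EC2 image ID
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /ec2/image-id        # placeholder parameter name
```

The Default here is the *name* of the SSM parameter, not its value; CloudFormation resolves the actual value at stack creation or update time.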
And you may ask me, “Where does this value exist?” Well, let’s just go about it and create it. So let’s go to SSM, which is Systems Manager, and then I will scroll down all the way to the bottom left and go to Parameter Store — this is where we store parameters in AWS. So I can go ahead and create a parameter, and for the name, I can enter the EC2 instance type parameter name, and for the description, I will call it the default instance type for my EC2 instances, and why not? Then I will scroll down, and in terms of value, I will enter t2.micro, and this will be a String, not a StringList or a SecureString. Okay, I will create this parameter, and the first parameter has been created.
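The same parameter could also be created from the CLI instead of the console — a quick sketch, again assuming the placeholder name /ec2/instance-type:

```shell
# Create a String parameter holding the default instance type
# (the name /ec2/instance-type is a placeholder)
aws ssm put-parameter \
  --name /ec2/instance-type \
  --type String \
  --value t2.micro
```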
Now, if you remember, there was a second parameter, for the AMI ID, that we’ll go ahead and create, which represents the default AMI I want to have for my EC2 instances. So I’ll describe it as the latest AMI for my EC2 instances. Think about your company: maybe you’ll have a default AMI that you’ll update from time to time. Then you want to store the reference to that AMI ID within the Parameter Store, and you can name it however you want. I just chose to name it this way.
So it is a String, and for the value, well, let me go and launch an EC2 instance just so I can get an AMI ID. So I’ll go and launch an instance, and I’ll choose Amazon Linux 1, and I’ll copy this AMI ID — I’ll show you why afterwards. So I’ll go and paste this AMI ID right here, and I will create this parameter. So I have two parameters created: the EC2 instance type and the EC2 AMI ID.
So now let’s get back to our template. These parameters will retrieve their value from the Parameter Store, and that is the power of this integration with CloudFormation. And then for the resources, we have an instance, and it’s just referencing the image ID and the instance type, the two parameters we have from before. So nothing changes from here; everything we know remains the same. So let’s take this template and experiment with it to see what happens. So we go to CloudFormation, and we’ll go and create a stack; the template is ready; we’ll upload a file, and that file is going to be the 10-parameters-SSM one. Then I click on “Next,” and I’ll call it demo-parameter-ssm. Then I can see that the values here are prefilled, because I entered default values. So I’ll just use those and say, “Okay, the image ID should come from this parameter, and the instance type should come from this parameter,” from within the Systems Manager Parameter Store.
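The Resources section consumes the two parameters with ordinary !Ref functions — nothing SSM-specific here. A sketch, with an assumed logical name:

```yaml
Resources:
  MyEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      # CloudFormation has already resolved these from the Parameter Store
      ImageId: !Ref ImageId
      InstanceType: !Ref InstanceType
```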
So I click on Next, and for tags, I will put nothing. I will just scroll all the way down, click on Next, and the template is ready. I will scroll and verify the parameters are correct, and then click on Create Stack. So my stack is being created, but I want to click on this Parameters tab here, so let me click on it, and I’ll just close this. So we have two of them in Parameters: the first is the image ID, and the value is the parameter name we entered, but there is also a resolved value. So this AMI right here was retrieved by CloudFormation directly from the value held within the Parameter Store, which was right here. This is the last value, and we could see a history of values if we wanted to. And similarly, if we go to the instance type, that value has been resolved to t2.micro, which was coming from the value of t2.micro that we did enter in the Parameter Store. So this is really cool, because now we can go to the actual EC2 instance — let’s go to EC2 and look up the instance that our CloudFormation template created. Here it is, this one. Well, it is a t2.micro. And if you look at the AMI, it is running the AMI that we selected. So everything is working fine. And our CloudFormation stack, if we go to the events, should say that now the creation is complete. So you may ask me if this is any different from a normal parameter like we had before, and I would say yes, it is, because imagine that through some process we start releasing a new AMI,
okay? And so what we want to do is go to Systems Manager, and I’ll go to my Parameter Store and update this value of the AMI ID. So I’ll go and edit it, and I need to put a new value for it. So for a new value, I will find the Amazon Linux 2 AMI, and I’ll just copy this entire value here and save my changes. So now, if we go to this parameter and look at the history, we have two values: the one that was used before, and now this one. Okay, so now what happens if we rerun our template? So let’s update this stack and say we want to use the current template; we’ll use the same parameters; click on Next, then click on Next, and here I can see that the instance is going to be modified. I’m going to view the change set, and I’ll go to the JSON changes, and here we see that my instance is going to be replaced. And the reason it is going to be replaced is that the image ID has changed — indeed, CloudFormation has resolved the new value from the Parameter Store, this one, and thus my EC2 instance should be updated. So I’ll go and execute this.
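From the CLI, this update flow would look roughly like this (the parameter name, AMI ID, and stack name are placeholders matching the demo):

```shell
# Overwrite the existing parameter with the new AMI ID
aws ssm put-parameter \
  --name /ec2/image-id \
  --type String \
  --value ami-0123456789abcdef0 \
  --overwrite

# Re-run the stack so CloudFormation resolves the new value;
# the template and parameter names are unchanged
aws cloudformation update-stack \
  --stack-name demo-parameter-ssm \
  --use-previous-template \
  --parameters ParameterKey=ImageId,UsePreviousValue=true \
               ParameterKey=InstanceType,UsePreviousValue=true
```

Note that nothing in the stack itself changed — only the value behind the SSM parameter name — which is why the update must be triggered manually.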
Okay? So using the Parameter Store, we are able to centralize the latest value of all the parameters that matter for our CloudFormation templates, so that when they’re run, if a value has been changed in here, the CloudFormation stack will not automatically get updated, but it will fetch the new values and then apply them upon an update. So this is not something that’s automatic — as you saw, we had to manually run an update for our stack — but it did pick up the new value, and that’s the power of SSM parameters. Now, this is something that can come up at the exam, so you need to understand the value of it, and one of the use cases for sure is to have the AMI ID somewhere central in the Parameter Store, referenced within CloudFormation templates. So let me just refresh this, and now if we go to instances in EC2 and refresh, we see that this instance has been created, and this instance is now running Amazon Linux 2. So we did pick up that latest AMI parameter. So that’s it for this lecture. I hope you liked it. Just remember to go ahead and delete this stack when you’re done, and I will see you in the next lecture.
So let me show you another trick. Let’s open the 11-parameters-SSM-hierarchy file. So in the previous lecture, I showed you that we could use SSM to store an image ID, but what I didn’t tell you is that there are some public values for these AMIs. Amazon has their own parameters — public parameters — that you can reference in your CloudFormation stack. So, for example, this is the latest Amazon Linux AMI ID, whose value is this, and this is a parameter from the Parameter Store that works for Windows, too. So if we use the /aws/service/ami-windows-latest path, then we get the latest AMI for that image. And so, how do we get the list of all these public parameters? If we run this command right here in the CLI, which we will do, we will get a list of all the AMIs that are made publicly available by Amazon under ami-amazon-linux-latest.
And so we get the Amazon Linux AMI, and we get Amazon Linux 2, because there are two here, and so on. So we have a bunch of these AMIs that we can read from, right? And this works just as well for Windows. So if you run this command again for Windows — and this time I did not use the query parameter — it’s going to have a lot more output. We see the list of all the different Windows parameters that we can have, and their values as well. So we have, for example, the fact that this Windows Server 2016 English Full SQL Standard image has this AMI value, is version 7, and was last modified on this date. So with these, we are able to not maintain our own list of AMIs if we just use the base AMIs from AWS, and instead leverage the ones that are maintained by AWS. So let’s go ahead and run this template real quick.
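The command used here is along these lines — the /aws/service/... paths are public namespaces maintained by AWS:

```shell
# List the public Amazon Linux AMI parameters
aws ssm get-parameters-by-path \
  --path /aws/service/ami-amazon-linux-latest \
  --query "Parameters[].Name"

# Same thing for Windows — much more output, no --query filter
aws ssm get-parameters-by-path \
  --path /aws/service/ami-windows-latest
```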
So I’ll create a stack, I’ll upload a template file, and this template file is going to be the parameters-SSM-hierarchy one. Click on Next, and the stack name is going to be demo-public-parameters-from-ssm. And so click on Next, then click on Next again, and then click on Create Stack. So, if we look at the parameters on this side, we can see that this value right here was resolved to an AMI, and this value right here was resolved to another AMI. And you can verify that I did not create these parameters myself: if we go to Systems Manager and then to Parameter Store, we see that I only have three parameters. So these /aws/service/... parameters are maintained by AWS for us, and we can leverage them in our CloudFormation templates to resolve, for example, the value of the latest AMI at any point in time. So that’s it. That’s all I wanted to show you. I’m just going to go ahead and delete the stack, and I will see you in the next lecture.
Okay, so next let’s learn about DependsOn, and we’ll open the file 12-depends-on. So this file is interesting. Let’s get started. So we have a mapping, and we’ve seen mappings before; mappings map, for example, regions to AMIs. So this is kind of the old way of mapping a region name to an AMI. So, for example, if you run this template from us-east-1, you’ll resolve this value; if you run it from us-west-1, you resolve that value; and so on. And then we have a resource that’s an EC2 instance, and we have the image ID. So, for the AMI ID, we use a Fn::FindInMap function, and we’re saying, “Okay, in the mapping named AWSRegionArch2AMI — which is the name of this one — you need to use the reference AWS::Region.” So this is a pseudo parameter, and then the key HVM64. So if I run this within eu-west-1, it’s going to look in here, then it’s going to go down to eu-west-1, then it’s going to go down to the key HVM64, and the value is going to be this. So this is going to be the result of this FindInMap function.
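Sketched out, the mapping and lookup look like this (the AMI IDs are placeholders):

```yaml
Mappings:
  AWSRegionArch2AMI:
    us-east-1:
      HVM64: ami-11111111111111111   # placeholder
    eu-west-1:
      HVM64: ami-22222222222222222   # placeholder

Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      # Look up the AMI for the current region under the HVM64 key
      ImageId: !FindInMap [AWSRegionArch2AMI, !Ref "AWS::Region", HVM64]
      InstanceType: t2.micro
```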
So we have an EC2 instance, and it’s going to pick up an AMI ID from this mapping. And then there is this thing called DependsOn, and this is what I want to draw your attention to in this lecture. So DependsOn is a way of saying that this resource, this EC2 instance, should not be created until my database is ready — that’s what DependsOn does. If I removed it, for example, the EC2 instance and the database would be created at the same time. And if I had an application running on my EC2 instance that tried to connect to the database, well, it would not work, right? So using this mechanism, we tell CloudFormation that it should only create this EC2 instance after MyDB, the RDS DB instance, has been created successfully. So let’s try it out and see how that works. So we’ll go to stacks, create a stack, upload a template, and we’ll use template number 12, depends-on, and then I’ll click on Next.
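The DependsOn relationship can be sketched like this — DB properties trimmed to a minimum, values are placeholders:

```yaml
Resources:
  MyDB:
    Type: AWS::RDS::DBInstance
    Properties:
      AllocatedStorage: "20"
      DBInstanceClass: db.t2.micro
      Engine: mysql
      MasterUsername: admin           # placeholder
      MasterUserPassword: changeme1   # placeholder
  EC2Instance:
    Type: AWS::EC2::Instance
    DependsOn: MyDB   # wait until the database has been created first
    Properties:
      ImageId: !FindInMap [AWSRegionArch2AMI, !Ref "AWS::Region", HVM64]
      InstanceType: t2.micro
```

On deletion the order is reversed: the EC2 instance is deleted before the database it depends on.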
I’ll call it demo-depends-on. It can take a lot of time, because creating a database is very, very long, but we’ll just observe the beginning of the behavior. So let’s go ahead and create this stack. And now this stack is in create in progress, right? Then I’ll refresh, and the first thing that gets created is MyDB. So, as I refresh, MyDB has a “resource creation initiated” event, but my EC2 instance is not being created at all. And so I have to wait — actually, I have to wait a very long time — for MyDB to be created so that the EC2 instance can be created. And similarly, if I waited for the entire thing to happen and then deleted my resources, then the EC2 instance would be deleted first and the database would be deleted second. So, by introducing a relationship between the EC2 instance and MyDB, we can tell CloudFormation which one we want to happen first. So that’s something you need to remember if it comes up at the exam. All right. If I wait, this stack will take an eternity to complete, but as you can see, the EC2 instance is still not being created. So I’ll go ahead and delete it right away. All right, that’s it. I will see you in the next lecture.
So next, let’s learn about how we can create Lambda functions using CloudFormation. So if we go to the 13-lambda-cloudformation folder, you will see that there are two directories there: one for the inline way and one for the zip-to-S3 way. So let’s start with the inline one. So Lambda functions can be defined inline. First, let’s look at the Resources section. So the Resources section has a Lambda execution role, which represents an IAM role. And then we have the AssumeRolePolicyDocument, which has the trust policy saying that this role can be assumed by the Lambda service.
Okay? And then the policy itself is defined using YAML. So it’s a JSON document, but it’s defined using YAML. And it gives the function permission to do s3:* on any resource — so that’s pretty permissive for S3 — and then allows the Lambda function to publish logs and create log events in CloudWatch Logs, which is required for the Lambda function to publish its logs. Then for the function itself, it’s called the list-S3-buckets Lambda. It’s an AWS::Lambda::Function, by the way. And in terms of properties, you can see the handler is called index.handler: index because this is an inline function, and handler because we have a function defined here called handler in our Lambda code. Its role is the Lambda execution role ARN, which we have defined here.
So that’s perfect. And then the runtime is going to be Python 3.7, and for the code itself, the code block is using the ZipFile argument, and then there is a vertical pipe saying that everything after it is one giant string — the vertical pipe, and then the code of the function itself. So we have the code defined inline; that’s why it’s called lambda-inline. Some restrictions on this are that we cannot have dependencies, and the code itself is limited to, I think, 4,000 characters. So after 4,000 characters, you cannot have more inline content. This is fine for small Lambda functions like this one, which returns a list of S3 buckets, but it may become more difficult if you add dependencies or have very, very long functions. Still, this should work. So let’s go into CloudFormation. And then I’m going to create a stack. My template is ready, and I’m going to upload a template file, and this one is going to be my Lambda function that’s defined inline. And then I will click on “Next,” I will call it “lambda-inline,” and then I will click on “Next,” scroll all the way down, click on “Next,” and then finally create the stack.
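Putting the pieces together, the inline approach looks roughly like this — the logical names and policy details are a sketch, not the course’s exact template:

```yaml
Resources:
  LambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: lambda.amazonaws.com }  # trust the Lambda service
            Action: sts:AssumeRole
      Policies:
        - PolicyName: lambda-s3-and-logs
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: ["s3:*", "logs:*"]   # permissive, for the demo only
                Resource: "*"
  ListS3BucketsLambda:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler          # "index" module, "handler" function
      Role: !GetAtt LambdaExecutionRole.Arn
      Runtime: python3.7
      Code:
        ZipFile: |
          import boto3
          def handler(event, context):
              s3 = boto3.client("s3")
              return [b["Name"] for b in s3.list_buckets()["Buckets"]]
```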
So if I create the stack right now, it says it requires capabilities: CAPABILITY_IAM. So this is an error message, and we’ll see it in greater detail later, but for now, let’s scroll down and tick that box. That’s saying yes, we are creating an IAM role, and by ticking that box we give CloudFormation the capability to create that IAM role. Okay, let’s click on “Create stack” and then see what happens. So the create is in progress, and let’s go into the Lambda console now to see what’s happening there. So the resource is being created, and we should wait a little bit, and very soon, yes, the Lambda function is now in create complete. So if we go to the Lambda console, we can find that our lambda-inline function has been created right here, and it’s Python 3.7. So that’s perfect. We’ll click on it, and here we get a nice information message saying this function belongs to the CloudFormation stack lambda-inline. So Lambda knows that this Lambda function is being managed by CloudFormation. Okay, we scroll down, and yes, the code is what we expect. This is the Python code that we have.
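From the CLI, the equivalent of ticking that box is the --capabilities flag (stack and file names here are placeholders):

```shell
aws cloudformation create-stack \
  --stack-name lambda-inline \
  --template-body file://lambda-inline.yaml \
  --capabilities CAPABILITY_IAM   # acknowledge that IAM resources will be created
```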
And so if we just test our function by clicking on “Test” — hopefully this will work. So, let’s call this hello, then click on Create, then Test. The function is being executed, and it returns the list of all the S3 buckets in our account. So the function works. So this was the first way of defining our Lambda function, and that was inline. Now let’s go about it the second way. The second way is to use a zip from S3. So we have our function, index.py — this is the exact same as before — and what we have to do is zip up this entire file and its dependencies, if we had dependencies (we currently do not have any), into a single zip file. So this is my Lambda function zip file that I have here. And then we have to upload that zip file to S3. So let’s go to the S3 service, and in S3, I’ll select a bucket — for example, I will choose my AWS DevOps course bucket — and I’ll create a folder. I’ll call it lambda. I’ll enter the lambda folder, and I’m going to upload a file, and the file that I’m going to upload is going to be this zip file that I’ve created, called lambda-function.zip. And you can name that file however you want; you don’t have to name it lambda-function.zip. So I’ll click on upload, and here we go.
The file has been uploaded. So now we need to reference that file in our CloudFormation template. So let’s go back to our CloudFormation templates, and the first one is this one. So we’ve got two parameters here: an S3 bucket parameter and an S3 key parameter. And for resources, we still have our Lambda execution role — nothing has changed. So we’ll scroll down, and the list-S3-buckets Lambda function has the same handler, role, and runtime. But now, for the code, as you can see, there is an S3Bucket property that references the S3 bucket parameter with a !Ref, just the way you’re used to, and a shorthand reference function for the S3 key as well. So these two properties, bucket and key, are referencing the parameters from the very, very top. Okay? So that means that we’ll have to enter a little bit of information into our CloudFormation template, and in this case, it is stating that in order to get the code of the Lambda function, CloudFormation must go to this bucket and retrieve this key. So let’s try it out.
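The relevant parts of that template could be sketched as follows (parameter and logical names are assumptions; the execution role is the same one shown earlier):

```yaml
Parameters:
  S3BucketParam:
    Type: String   # bucket that holds the zip file
  S3KeyParam:
    Type: String   # key of the zip file within the bucket

Resources:
  ListS3BucketsLambda:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Runtime: python3.7
      Code:
        # CloudFormation fetches the code from S3 at deploy time
        S3Bucket: !Ref S3BucketParam
        S3Key: !Ref S3KeyParam
```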
We’ll go to CloudFormation, and we’ll create a stack, and we’ll upload a template file, and we’ll choose the lambda-from-S3 YAML one. Press the “Next” button, and I’ll call it lambda-from-s3. And now we need to enter the bucket parameter, so we’ll put the AWS DevOps course bucket name. So here we go. And as far as the key is concerned, we need to enter the entire name, so it’s the lambda-function.zip file within the lambda folder, and that should be all right. Let’s click on “Next” and “Next.” Then we’ll scroll down and tick that box for the IAM capability, and create this stack, and here we go. Our Lambda function from S3 is being created. So remember, here CloudFormation will retrieve that file from the bucket before creating the Lambda function. So let’s scroll down; we’ll wait a little bit for this to be done. And our Lambda function is now being created. So if we go to the Lambda console and look at our functions, we should now see a new function, lambda-from-s3, that has been created. So the code will be exactly the same, because Lambda will unzip our zip file automatically and give us back our index.py file. So excellent. This looks good, right?
So what if we wanted to use S3 to update this Lambda? What if we replaced that lambda-function.zip with another file? Then there would be a problem, right? Because our CloudFormation template would not change — we’d still refer to the same S3 bucket and S3 key. As such, there would be no update from the CloudFormation side, because it doesn’t know that we have updated a file in S3. So for this, you have a couple of solutions. Number one, you could upload your updated function to another S3 bucket; then this value would change and the update to Lambda would happen, but that would be pretty tiring. You could also update your S3 key — so instead of naming our file lambda-function.zip, we’d name it lambda-function-2.zip — and that would work as well, but that would also be pretty tiring. What if we wanted to update that file in place, the same lambda-function.zip? Well, for this, thankfully, we have a third template, and this is an update over the first one: we have a new parameter, the S3 object version parameter, and if we scroll down, here we have a new property called S3ObjectVersion. This stems from the fact that if your S3 bucket is versioned, you can also refer to a specific S3 object version. So, let’s return to S3 and look at the properties of my bucket. I’m going to enable versioning — and actually, versioning has already been enabled, so I’ll say okay, this is fine. Go to overview; go to the lambda folder. And so I’m going to update this lambda-function.zip: I’m actually going to upload the very same file, but that doesn’t matter, because it will create a new version. So I’ll upload the same file and click on “Upload.” I go to versions and click on “Show,” and now we see that we have two versions of lambda-function.zip: the one from before and the one that I just uploaded.
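The third template adds one parameter and one property to the Code block — a sketch, with the same assumed names as before:

```yaml
Parameters:
  S3BucketParam:
    Type: String
  S3KeyParam:
    Type: String
  S3ObjectVersionParam:
    Type: String   # version ID of the zip object in the versioned bucket

Resources:
  ListS3BucketsLambda:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Role: !GetAtt LambdaExecutionRole.Arn
      Runtime: python3.7
      Code:
        S3Bucket: !Ref S3BucketParam
        S3Key: !Ref S3KeyParam
        # Changing this value is what triggers the update in place
        S3ObjectVersion: !Ref S3ObjectVersionParam
```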
So if I click on the version ID and copy it, this is what I will use in my new CloudFormation template. So let’s try it out. I’m going to my CloudFormation stack, and I’m going to do an update, replace the current template, and upload a new one.
And the one I’m going to upload is the one that has the S3 object version parameter in it. After you click Next, you’ll see the S3 object version parameter. Now I’m going to add this, which is the latest version that has been uploaded to S3 — so let’s go back to S3 here; this is the latest version that has been uploaded. Okay, I’ll click on “Next,” click again on “Next,” scroll down, and acknowledge. And so we can see here that our S3 parameters now form a combination that will be unique every time we upload a new Lambda function, regardless of whether we use the same file name or not. And then I’ll click on “Update,” and here we go. Now we have triggered an update, because I have uploaded a new file and I referenced that very specific object version ID in my CloudFormation template. I’m showing you this in great detail because it is something that can come up in the exam, so you need to know how to use CloudFormation to update your Lambda functions. And now the update is complete, and we have succeeded with our goal. So that’s it for this lecture. Just remember to delete all your stacks when you’re done, and I will see you in the next lecture. Bye.
Let’s create our stack and tick the box. Okay, the stack is being created, so let’s wait for the Lambda function to be created. My Lambda function has now been created. So I can click on this link here, and this should take me directly into my function. We can look at the code and see what happens. So we have a handler function that will examine the resource and the event that comes from CloudFormation. And if the event is of type Delete, it should try to list all the objects in our S3 bucket and then delete them. So only when the event request type is Delete will we delete the S3 objects, and then, when done, we’ll send a response back to CloudFormation with a result of SUCCESS. If an exception occurs, we will print it and send FAILED to CloudFormation. And so this is how CloudFormation will know whether our custom resource was a success or a failure.
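A handler along those lines can be sketched in Python like this — build_response assembles the JSON document that gets sent back to CloudFormation; all names and the exact structure are assumptions, not the course’s actual code:

```python
import json


def build_response(event, context_log_stream, status, reason=""):
    # Build the JSON document CloudFormation expects at the pre-signed ResponseURL.
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason or f"See log stream: {context_log_stream}",
        "PhysicalResourceId": context_log_stream,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }


def handler(event, context):
    # Imported here so build_response stays usable without AWS libraries.
    import boto3
    import urllib.request

    try:
        if event["RequestType"] == "Delete":
            # Empty the bucket so CloudFormation can delete it afterwards.
            bucket = event["ResourceProperties"]["BucketName"]
            boto3.resource("s3").Bucket(bucket).objects.all().delete()
        body = build_response(event, context.log_stream_name, "SUCCESS")
    except Exception as e:
        print(e)
        body = build_response(event, context.log_stream_name, "FAILED", str(e))

    # PUT the response to the pre-signed URL CloudFormation provided.
    req = urllib.request.Request(
        event["ResponseURL"],
        data=json.dumps(body).encode(),
        method="PUT",
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(req)
```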
And then what is this send-response-to-CloudFormation function? Well, it just sends a big JSON document back to CloudFormation — basically to the pre-signed URL that we saw before. So without further ado, let’s go ahead and create our custom resource. So what is our template going to be? Looking at 15-custom-resource, we can see that we first create a bucket called MyBucketResource, of type AWS::S3::Bucket, and then we create this custom resource.
So there’s this custom type here, which I’ll refer to as the custom cleanup-bucket type, and what does it use? It’s using a bucket name as a property, which is referencing the bucket that we are creating right here. Also included is a ServiceToken, which specifies which Lambda function should be called for this custom type. And so the service token is an import value of empty-s3-bucket-lambda. Why do we have an import value? Well, if you go back to the other template and scroll all the way down — I forgot to show you the Outputs section — the Outputs section states that the Lambda function ARN is exported as empty-s3-bucket-lambda. So this is something we can verify by going to the outputs.
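The custom resource side of the template can be sketched like this — the type name Custom::CleanupBucket is an assumption, and empty-s3-bucket-lambda is the export name used in the lecture:

```yaml
Resources:
  MyBucketResource:
    Type: AWS::S3::Bucket
  CleanupBucketOnDelete:
    Type: Custom::CleanupBucket
    Properties:
      # ARN of the Lambda backing this custom resource,
      # exported by the other stack as empty-s3-bucket-lambda
      ServiceToken: !ImportValue empty-s3-bucket-lambda
      # Passed to the Lambda in event["ResourceProperties"]
      BucketName: !Ref MyBucketResource
```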
And on the right-hand side, we have the output that is exported with the name empty-s3-bucket-lambda. Okay, so let’s go ahead and create our stack, and we’ll upload this template file, and that template file is going to be number 15, the custom resource. I’ll click Next, name it lambda-s3-with-cleanup-lambda, and then click Next again. Then click on Next, and then click on Create Stack. So now our stack is being created, and in terms of resources, we’ll have two: we’ll have the S3 bucket, and we will also have the cleanup custom resource backed by AWS Lambda. So let’s wait for this to be done. Okay, so our stack is complete, and in terms of resources, we have two: we have the S3 bucket — so I’ll click on this to go to the S3 bucket — and we also have a custom resource that is backed by Lambda. If we go to our Lambda function and look at the monitoring tab, we can see from the CloudWatch metrics that the Lambda function should have been invoked once. So we need to wait a little bit for the data to appear in the dashboard. And yes, now we see that the invocation has been made; it took about 358 milliseconds to run; the success count is one; and the errors are zero. So this worked. Now let’s go to our S3 bucket, and we’ll just add some files. So we’re going to upload some files, and I’m just going to upload a random file in here. Okay, upload. And now I’m going to go ahead and delete my stack.
And so, by deleting my stack, it will delete my custom resource backed by my Lambda function, and it should invoke the Lambda function again. And that Lambda function should clean up the bucket, and then CloudFormation will be able to delete the MyBucketResource altogether. So let’s try it out. We delete the stack — yes — and that’s been initiated. So what it should do is start invoking that Lambda function, and then that Lambda function should clean up that bucket. When the bucket has been cleaned up, CloudFormation will attempt to delete the MyBucketResource and should be successful. So let’s refresh this and wait for a minute. And, as we can see, the delete has been completed, which we can confirm by returning to our Lambda function and refreshing. And we should see very soon that our function has been called yet again. So let’s wait a second, and yes, now we see from this line that our function has been invoked again, and successfully. So if we go to our S3 console and refresh now, this bucket should be empty. And actually, it’s not even empty — the bucket does not exist anymore, because it has already been deleted by CloudFormation. So everything has been working just fine. So that’s it for custom resources. You can do anything you want with them, but remember the use cases I showed: emptying an S3 bucket’s content before deleting it, fetching an AMI, managing on-premises resources, and so on — doing whatever you want with them. All right, I will see you in the next lecture.