Amazon AWS DevOps Engineer Professional – Configuration Management and Infrastructure, Part 7

  1. Elastic Beanstalk – Worker Environments

So, by the way, you could go ahead and terminate this environment. So I’m just going to enter the name of the environment and terminate it. So I enter the environment name and click on “Terminate.” Or you could have run eb terminate in the CLI to terminate this.

So if you run eb terminate, this would have done it as well. Okay, so now let’s go ahead and create a new environment. So I’ll create a new environment, and this one will be a worker environment. So in this lecture, we’ll understand why we need a worker environment. Imagine a web server is doing something really long; maybe it’s encoding videos. So we have a video service, people upload videos to a web server, and then the web server does encoding that’s very CPU intensive. Maybe that’s not well suited for a web server environment; maybe it’s something we want to do over time, asynchronously, and therefore we want to have a dedicated environment for this. And this environment tier is going to be a worker environment. So a worker environment is an application that processes long-running workloads, such as encoding a video on demand, or that performs tasks on a schedule.

So it’s very important to remember both: it can run long-running workloads, or perform tasks on a schedule using the cron.yaml file. So I’ll select this tier and I’ll just name it “WorkerEnv,” and we can use a preconfigured platform, for example Node.js, and use the sample application, and we’ll go ahead and create the environment. So this is a worker environment, and we’ll see what it does in a second, but for now, we’ll let it start, and it has already started, with a “View Queue” option right here on the screen. It’s not available just yet, but what we’ll do is go to CloudFormation. So let’s go to CloudFormation and see what is being created. So there is a stack in progress here. Here we go. And that is my worker environment. And in terms of resources, we can see what is being created, and one of those is an SQS queue. Two of those are SQS queues, actually: there is a worker queue, which is our SQS queue, and it gets created by Beanstalk.

And then there is a dead letter queue that also gets created by Beanstalk. So the way it works with worker environments is that the worker environment will pull from the SQS queue and try to perform the work, and if the processing fails, the message will be pushed into the dead letter queue. So if we go to SQS now to look at the Simple Queue Service, we can see here that two queues have been created: the worker queue and the dead letter queue for our worker. So the environment is still being created, but we now know that our worker queue will be there to decouple our web tier from our worker tier. So that’s really important. And so the idea now is that the application can perform the very, very long-running task on our worker tier. So let me just go back to the environment and wait for everything to be configured. Okay, so our environment has been successfully created, and now if I click on “View Queue,” it takes me directly into SQS. So, perfect. So our application is created, and if we go to its configuration, we can look at all the configuration options for it. One of these is the configuration of the worker environment, so I can modify it.
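Now, before we look at that screen, a quick aside on how the worker actually receives its messages: under the hood, each worker instance runs a daemon (sqsd) that pulls from the SQS queue and POSTs every message to your application over local HTTP. Here is a minimal sketch of what such a worker application could look like; I’m using Python and Flask purely for illustration, and the endpoint path and payload fields are my own assumptions, not something Beanstalk mandates:

```python
# Minimal sketch of a Beanstalk worker application (Flask assumed for illustration).
# The worker daemon (sqsd) pulls messages from the SQS queue and POSTs
# each message body to the application over local HTTP.
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_task():
    payload = request.get_json(force=True)  # the SQS message body
    encode_video(payload["video_id"])       # hypothetical long-running work
    return "", 200  # a 2xx response tells the daemon the message is processed

def encode_video(video_id):
    # placeholder for the CPU-intensive encoding job
    print(f"Encoding video {video_id}...")

if __name__ == "__main__":
    app.run(port=8080)
```

And if the handler returns an error or times out, the message goes back to the queue for a retry, and after enough failures it ends up in the dead letter queue we just saw. Okay, back to the configuration.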

And here, for the worker queue URL, we can use an auto-generated queue, or we could have specified a specific queue to read from. So the SQS queue doesn’t necessarily have to be packaged with our environment: it is possible for us to create that SQS queue externally and then use it instead in our deployment. So, back to the dashboard. What I need you to remember is that we have two environment tiers here: web server and worker. And those are the only two tiers you can have on Elastic Beanstalk. So remember, web server and worker. Okay? Now, the last thing I want you to look at is that in the worker environment, there is this thing called cron.yaml, and you need to remember this. cron.yaml is a file that you can use to define cron jobs that will run on your worker environment from time to time (I’ll leave a small reference example at the end of this lecture). So here in the logs there is an info message: successfully loaded one scheduled task from cron.yaml. So now we have to remember that the worker environment can do two things. Number one, it can pull from a queue and do jobs that get submitted through that queue. And number two, it can schedule tasks using cron.yaml. And that is a feature that is only available in the worker environment. Okay? That concludes the high-level overview of worker environments. So, remember for the exam: there are two environment tiers in Elastic Beanstalk. One serves web content, while the other handles long-running jobs. And the worker environment also allows us to have cron jobs defined in a cron.yaml file. Okay? So that’s it. I will see you in the next lecture.
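As promised, here is that cron.yaml reference. A minimal file could look like this; the task name, URL, and schedule are made up for illustration, and Beanstalk POSTs a message to the given URL on every schedule tick:

```yaml
# cron.yaml: placed at the root of the application source bundle.
version: 1
cron:
  - name: "nightly-cleanup"         # hypothetical task name (must be unique)
    url: "/tasks/nightly-cleanup"   # path the worker daemon POSTs to on schedule
    schedule: "0 2 * * *"           # standard cron syntax: every day at 02:00 UTC
```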

  2. Elastic Beanstalk – Multi Docker Integration

So the last thing that’s good to know is that we can create a new environment, and we’ll create a web server environment. This time I’ll call it DockerEnv. So we can use Docker to create Beanstalk environments. I’ll choose a preconfigured platform, and as you can see, we have two of those: Docker, or Multicontainer Docker. Docker is the simpler one; we can just run one Docker container. With Multicontainer Docker, we can run multiple Docker containers within the same EC2 instance, and this is the more recent option. So we’ll use Multicontainer Docker, we’ll use a sample application, and we’ll create the environment.

As a result, we will have a Docker-enabled environment. You need to know this, and I will show you what it creates in the end. And so our environment has launched, and we can go to this URL and see that, yes, this environment was launched using a Docker container. So this is really cool. And one of the reasons why we want to use Docker on Beanstalk is to be able to standardize our deployments. So we could use Java, Python, Go, or whatever, but with Docker, we’re able to standardize our deployment for any kind of language or any kind of application that we want. So this would be a good reason to use Docker on Elastic Beanstalk. If we go to CloudFormation and look at all the resources that were created, there are seven of them. We have a security group, we have Beanstalk metadata, we have a launch configuration, and so on. And this is pretty great. Let’s go to Services and then ECS.

And we can see here that we have an Elastic Beanstalk Docker environment that has been created for us. So a cluster was created by Beanstalk, and it’s an ECS cluster, and it’s running tasks. And tasks are what will actually run our Docker containers. So tasks have task definitions and so on, and they’re running on ECS instances for us. So this is perfect. It’s just a quick overview, but we can see that Docker on Beanstalk is creating an ECS cluster for us and so on. Finally, I’d like to show you the Dockerrun.aws.json file. This is the one file you need for the multi-Docker container setup. The Dockerrun.aws.json file is an Elastic Beanstalk-specific JSON file that describes how to deploy a set of Docker containers as an Elastic Beanstalk application. So it’s just something you would have to create if you wanted to have a very complex type of application, and there are some examples here. Going into the exam, you don’t need to know how this file is structured; you just need to know that its name is Dockerrun.aws.json and that it corresponds to a multi-Docker container configuration on Beanstalk (I’ll leave a trimmed example at the end of this lecture). All right, that’s it. Now we’ll simply terminate the environment by clicking here, under Actions, sorry, and delete the application, entering its name to confirm. And this will delete all the stuff that we’ve created. All right, that’s it for everything on Beanstalk. I will see you in the next lecture.
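And as promised, here is a trimmed sketch of what a multicontainer Dockerrun.aws.json could look like; the container names and images are made up, and again, the exam only needs the file name:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "my-account/my-web-app:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ]
    },
    {
      "name": "worker",
      "image": "my-account/my-worker:latest",
      "essential": false,
      "memory": 128
    }
  ]
}
```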

  3. Lambda – Overview

So now let’s talk about AWS Lambda. Lambda is not used as a standalone service in the exam; it’s usually something that acts as the glue between two different services. So far in this course, we’ve seen how to use Lambda with CodeCommit, CodePipeline, CloudWatch Events, and so on. So Lambda can be used with so many different services, and we won’t go over all of them. But you need to remember that Lambda provides the customization that is needed to make other services work. For example, we saw in CloudFormation that by using Lambda, we could define custom resources. So what I want to do in this lecture is just create a dummy Lambda function to review the general options we have for AWS Lambda. So let’s create a function and author it from scratch; I’ll call it lambda-dummy. And for the runtime, we’ll use Python 3.7 because I think it’s an easy language to read. And for permissions, we will create a new role with basic Lambda permissions. So let’s go ahead and create that function. So our function is created, and I’m going to go over the different bits in this UI, just so you know the features that we need to know for now.

So this Lambda function has been created, and it’s linked to CloudWatch Logs, because in IAM we have created a role that gives our Lambda function access to CloudWatch. So you can start logging different things that are happening within the Lambda function into CloudWatch Logs. So if we go to IAM, look at the roles, and type “lambda,” we have a lambda-dummy role. And if you look at the policy itself, the basic execution role gives the function the ability to create log groups, create log streams, and put log events. So it’s fairly simple, right? Let’s go back to Lambda, and we’ll scroll down. Here is our function code, and the function code is extremely simple in this case. This is a lambda handler that just returns a status code of 200, and the body is json.dumps of “Hello from Lambda”. So it’s just going to return some JSON, and we could change that code if we wanted to. Something to notice is that the Lambda function file is called lambda_function, and the handler, the function that will respond to Lambda events, is called lambda_handler.
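For reference, the generated sample looks like this; it’s the standard hello-world that the console creates for the Python runtime:

```python
import json

def lambda_handler(event, context):
    # 'event' carries the invocation payload; 'context' carries runtime metadata
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
```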

As a result, the handler setting is lambda_function.lambda_handler: lambda_function is the file name, and lambda_handler is the function name within that file. So: file name, dot, handler name. Now, we could edit the code in place, but we could also upload a zip file, or a file from Amazon S3. If we started uploading our Lambda code to S3, the same way we did it with CloudFormation, uploading it to S3 does give us the ability to pack in more dependencies, whereas editing it inline just allows us to quickly change the code for a quick one-off function. So if we wanted to test this function right away, I could create a new test event, call it “hello,” and we would have a Hello World event that we could modify if we wanted to. It is possible for us to choose any kind of template we want.

If we needed to do some rapid development, for example if we wanted to simulate an event from SQS, we would choose the event template from Amazon SQS, and this quickly gives us a much more appropriate type of event for our Lambda function. For now, we’ll just go to the bottom and choose “Hello World,” because that’s an easy one. Let’s click on Create, and then we’ll test this Lambda function. The Lambda function has succeeded, and the logs get written to CloudWatch Logs. So we are able to view the logs in this log stream, and everything seems to have worked. If we look at the details, we can see that the result is status code 200 and the body is “Hello from Lambda”. So it’s fairly easy; our function works. Okay, let’s scroll down. So the function ran for about 18 or 19 milliseconds, and because we get billed in 100-millisecond increments, the billed duration for this invocation is 100 milliseconds. The memory size was 128 megabytes, and we’ve used 55 megabytes of it. So it does give us some idea of how much memory we’re using, in case we need to increase the function’s memory. Okay, so everything looks good.
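One more note before we move on, coming back to those event templates: the Amazon SQS template, for instance, produces a payload shaped roughly like this trimmed sketch (fields abbreviated, values made up):

```json
{
  "Records": [
    {
      "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
      "body": "Hello from SQS!",
      "eventSource": "aws:sqs",
      "eventSourceARN": "arn:aws:sqs:us-east-1:123456789012:MyQueue",
      "awsRegion": "us-east-1"
    }
  ]
}
```

Okay, let’s scroll down.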

We are able to define environment variables for our Lambda function, and we are also able to encrypt them. I’ll be going over this in a future lecture, so for now, let’s not worry about it. We are able to tag our function using these tags, if we wanted to have some permission management, for example, or cost management over our Lambda functions. In terms of the execution role, we are using an existing role, and the service role is the one we created before, so we could view that role in IAM. As with any role connected to Lambda, it’s always important to look at the trust relationship. The trust relationship says that it’s a Lambda role, because the identity provider trusted for this role is lambda.amazonaws.com. And if you click on Edit Trust Relationship, you’ll see a policy document that represents this trust, and this is something you should be comfortable reading and understanding. The service is lambda.amazonaws.com, and the action is sts:AssumeRole. And so this policy document says that Lambda functions can assume this IAM role.
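Concretely, the document reads:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```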

Okay, we’re good here. Let’s go back into Lambda, and we can look at the basic settings, where we could set a description for the function: this is a dummy function. And we are able to set the memory. So the memory is at its lowest setting right now, at 128 megabytes, and we are able to assign up to 3,008 megabytes for now. Okay, the more memory you assign, the more expensive it’s going to be, because your function requires more resources, but you get more CPU, proportional to the memory configured. So if you had, say, 1,024 megabytes, you would get a certain amount of CPU, and if you went to twice that amount, 2,048 megabytes, you would get twice the amount of CPU. So it’s really up to you to choose how much memory you want to have. But you need to know that the more memory you assign, the more CPU you get, and the faster the execution is going to be if it is CPU-bound. For now, we’ll just keep it at 128 megabytes. Now there’s the timeout. The timeout is how long the function is allowed to run, and by default it is 3 seconds. We can increase the seconds, and we are also able to increase the minutes, and the maximum you will get is 15 minutes. So you need to remember that a function can run for up to 15 minutes; that is the upper bound for Lambda functions.

This is something that’s super important to understand and remember, because if you need a job that runs for an hour, well, Lambda is not a great service for it; maybe it’s going to be something like AWS Batch. Or if you need to coordinate functions with one another, maybe Lambda alone is not good for this either; you would use Step Functions, and we’ll see this later on. So keep that 15-minute timeout in mind as you make decisions for the automations you’re creating as a DevOps engineer. Okay, let’s scroll down. You are able to launch your function within a VPC, and a VPC is a virtual private cloud. You will need to do this if your function needs to access resources that live within your VPC; think, for example, of an RDS database that is private. So in that case, you would launch your function in your VPC, choose a bunch of subnets to assign to it, and this would allow your function to access your VPC. Now, if you go with the VPC option, you should assign a security group to your function, and that security group can be very helpful in allowing it to access other resources, such as your RDS database, through security-group-to-security-group rules.

Okay, I’m just going to cancel this, though. So this is fine; we’re not going to use a VPC, but this is a very important fact to know. Right now, our Lambda function gets launched within Amazon’s cloud and not our VPC, so it cannot access our own databases. But if we needed to access RDS, we would need to launch it into a VPC. By the way, as of this writing, they’ve just released a feature around this: it used to take about 15 seconds for a Lambda function to launch in a VPC, and it was using a lot of network interfaces, and they’ve improved this as of today. You don’t need to worry too much about what they did; nothing changes for you. But they brought the startup time down to around 1 second, so it’s about 15 times faster, and you can go and look it up by searching for VPC Lambda. And let’s go to the blog. Here we go; that was 20 hours ago. The blog post is called “Announcing Improved VPC Networking for AWS Lambda Functions.” It has no bearing on the exam but is useful for your knowledge, and I always like to give you some real-world knowledge.

So now we’ve seen how much faster it is to launch Lambda functions in a VPC. Anyway, let’s move on to the rest. We can do debugging and error handling. So if it’s an AWS service that asynchronously invokes your Lambda function, we can use a DLQ if it fails three times. So if your Lambda function fails three times, it is possible for us to send the event payload to Amazon SNS or to Amazon SQS as a dead letter queue, and this will ensure that the event is not lost, and we could deal with it later on if we wanted to, or troubleshoot manually. So this debugging option is really good. And also, we have AWS X-Ray integration if we want to get active tracing of the function, to record timing and error information for a subset of invocations. If you don’t know what X-Ray is, I’d recommend you take a look at it; it’s in the developer course, and to me, this is knowledge you should have. X-Ray is a tracing service that allows you to decompose how your functions behave and how long each API call takes, so you can identify bottlenecks in your infrastructure. Finally, concurrency refers to how many copies of this function can run at the same time.

So we have a maximum concurrency of 1,000, but we could reserve concurrency and say, okay, ten copies of this function at most can run concurrently. So if we invoke it a lot, maybe tons of them run concurrently, and this is where our account concurrency limits come into play. So if you need to run 10,000 Lambdas at the same time, you would need to open a service limit request and justify why you need to do so. CloudTrail can log these function invocations for operational and risk auditing. So using CloudTrail, we can see every API call to a Lambda function as an audit trail. Okay, so let’s go back to the timeout of 3 seconds: one, two, three, and I’m going to save, though I don’t think I changed anything. We can now access the monitoring tab as well, and the monitoring tab gives us some information from the CloudWatch metrics. So we can see how many invocations were made of our function and how long each of them took. We’re able to see the billed duration for our function, and the error count and success rate of our function, so we could set up CloudWatch alarms for errors if we wanted to.
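By the way, that reserved concurrency setting from a minute ago can also be applied programmatically; here is a minimal boto3 sketch, where the function name is just the dummy one from this lecture:

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve at most 10 concurrent executions for this function.
# These 10 are carved out of the account-level limit (1,000 by default).
lambda_client.put_function_concurrency(
    FunctionName="lambda-dummy",
    ReservedConcurrentExecutions=10,
)
```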

We can also see throttles for the function: for example, if it runs too many times and we go over that reserved concurrency, then we start getting some throttle errors. And then we can see how many dead letter errors we get, in case our function gets invoked asynchronously and still cannot succeed after three tries. Okay, CloudWatch Logs Insights provides you with queries to get access to your log streams. So if I click on this, I get taken directly into the CloudWatch Logs UI, and I can look at the logs for all my Lambda executions, which is quite handy, to be honest. Finally, there’s this cool new feature called most expensive invocations (in GB-seconds). It tells us how many times our function was invoked, and then it surfaces the most expensive invocations. And so if one were to take, for example, 10 seconds, we could look at the log stream and understand better what happened in that invocation. Maybe there’s a bug in our function, or at least we can drill down into it to save some cost. So that’s the end of this Lambda overview. All of this should be a review of stuff you already know, but it’s good to see it once again. And in the next lectures, we’ll look at the aspects of Lambda that are very important for the DevOps exam. So, until the next lecture.

  4. Lambda – Sources and Use Cases

So I’m going to go over all the triggers for a Lambda function and just discuss the types of integrations you can create, because I think they’re quite important to see at a high level. Now, as you can see, there are a lot of them, so I’m probably not going to talk about all of them, but it’s good for me to give you an idea of everything that can interact with Lambda and the types of integrations. As I previously stated, I usually show you the integration with Lambda as we go through each service in this course, but let’s just summarise them now.

So the first is API Gateway, which allows you to create an API, an API that is exposed to the outside world, that can invoke your Lambda function. And so this is how you can build a serverless API, and this is quite a common pattern: without it, when you build a serverless application, your Lambda functions can only be invoked using the SDK. And so that’s why API Gateway provides an HTTP interface to your Lambda functions. Then there are IoT and Alexa integrations; we can just skip these. For the Application Load Balancer: it is now actually possible, instead of using an API Gateway, to use an Application Load Balancer in front of your Lambda functions. So when do we use one or the other? Well, API Gateway gives you more flexibility around doing authentication, rate controls, security, and so on, whereas the Application Load Balancer simply provides a straight-up HTTP or HTTPS front end to your Lambda functions; it does not add functionality such as authentication, rate limiting, and so on. So it’s good to know that both exist, though. CloudWatch Events, now, will be the glue that holds all of our DevOps operations together. CloudWatch Events allows us, for example, to react to any event in our infrastructure: we create an event rule and assign a Lambda function to it.

And so our Lambda function can basically react to any event happening in our cloud, which gives it infinite possibilities. One very common pattern, though, is to use CloudWatch Events scheduling to create a cron: we schedule, for example, an event that gets created every hour, and then that event will trigger a Lambda function, effectively creating a serverless cron script. So we can have a Lambda function invoked every hour using CloudWatch Events. We could also react to CloudWatch Logs using CloudWatch Logs subscriptions, and that lets our Lambda function analyse the logs in real time, look for any patterns we should be looking at, and maybe create some alerts out of them. For CodeCommit, we can react to hooks happening in CodeCommit. So whenever someone commits code, for example, we could have a Lambda function look at the code being committed and ensure that no credentials were committed as part of it, or otherwise send a notification, for example, to an SNS topic. Then, for DynamoDB, this is actually DynamoDB Streams. So if we enable a DynamoDB stream on a table and have a Lambda function at the end of it, that means our Lambda function can react in real time to the changes in DynamoDB. And one use case for this, for example, is that if you have a users table and a user signs up, then a Lambda function can react to the user signup event and send an email saying hello and welcome to our application.
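Coming back to the serverless cron pattern for a second, here is a minimal boto3 sketch of that wiring; the rule name, function name, and ARNs are placeholders:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:my-func"  # placeholder

# 1. A rule that fires every hour.
rule = events.put_rule(
    Name="invoke-my-func-hourly",
    ScheduleExpression="rate(1 hour)",
)

# 2. Point the rule at the Lambda function.
events.put_targets(
    Rule="invoke-my-func-hourly",
    Targets=[{"Id": "1", "Arn": FUNCTION_ARN}],
)

# 3. Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName="my-func",
    StatementId="allow-cloudwatch-events",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```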

Next, Kinesis is going to involve real-time processing of data. So we can have Kinesis Analytics or Kinesis Streams feed into Lambda, and our Lambda can react in real time to these events. S3 is going to be a very common integration: we have S3 events, so, for example, whenever someone puts a new object in our S3 bucket, the Lambda function could trigger a workflow, or could trigger something such as creating a quick thumbnail of that object. SNS is a notification service, and our Lambda functions can react to SNS, and this is the kind of asynchronous invocation we mentioned: SNS and S3 are both asynchronous invocations, and so if our Lambda function reacts to an SNS message but doesn’t succeed after three tries, then the message will be put into the dead letter queue. And finally, SQS is also an integration for Lambda, allowing our Lambda function to process queues. There are messages within our queue, and remember that if a message doesn’t get processed by Lambda, then the message gets put back into the queue so that another Lambda function or application can process it. So that’s perfect; that concludes the integrations. I know it’s a pretty boring overview, but it’s useful to see all of the integrations that we can do with AWS Lambda. All right, so that’s it. I will see you in the next lecture.

  5. Lambda – Security, Environment Variables, KMS and SSM

Okay, so back in our function, let’s scroll down and look at the environment variables. We are able to define environment variables as key-value pairs that are accessible from our code. For example, if I had a DB_URL with a value like mysql://some-url:3306, that would be a good candidate for an environment variable. But we can also have encryption for our environment variables: we can enable helpers for encryption in transit. So let’s click on this. And we need to first create an AWS KMS key in this region.

So let’s go ahead and create one using the console. And for the alias, I will say “kms,” and then click on Next, click on Next, click on Next, click again on Next, and click on Finish. So now our key has been created, excellent. And so we return to Lambda. Let’s click on “Enable helpers for encryption in transit.” Here we go, excellent. And we can use the KMS key to encrypt this, for example, my KMS key. And so now, if we had a DB_PASSWORD and the value was “super secret,” okay, we could go ahead and encrypt this. And now this value has been encrypted, and we need to decrypt it at runtime within our Lambda function to make this work. So let’s look at the code for this; the code looks like this. Let’s copy this into our code and adapt it a little bit. So, importing json was something that I forgot. Here we go. So we now have the environment variable DB_PASSWORD, and this will be an encrypted environment variable. Therefore, we have to decrypt that encrypted environment variable using the boto3 KMS client to obtain our decrypted DB password; this is the DB password result.
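Put together, the adapted code looks roughly like this; it’s a sketch along the lines of the console’s decryption helper, with variable names of my own choosing:

```python
import json
import os
from base64 import b64decode

import boto3

# DB_PASSWORD is stored encrypted with KMS; decrypt it once, outside the
# handler, so the KMS call is not repeated on every invocation.
ENCRYPTED = os.environ["DB_PASSWORD"]
DB_PASSWORD = boto3.client("kms").decrypt(
    CiphertextBlob=b64decode(ENCRYPTED)
)["Plaintext"].decode("utf-8")

def lambda_handler(event, context):
    print(DB_PASSWORD)           # decrypted at runtime, never stored in plain text
    print(os.environ["DB_URL"])  # DB_URL is not encrypted, so read it directly
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda!")
    }
```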

And here in the lambda handler, what I’m going to do is a quick print of the DB password so you can see it. And then for our DB URL, you can just type os.environ and then “DB_URL,” and this one does not need to be decrypted, because it is not encrypted. So I also print the DB URL, save our function, and we’ll test it, and the function runs and fails. So it probably failed because it timed out. And why did it time out? I’m not sure. So let’s scroll down and see what happened. We are going to give it a greater timeout: we’ll give it 10 seconds, click on Save, and test the function again. And now the function didn’t work because we get an access denied exception, which goes right into my second topic: the Lambda function requires access to KMS, which it currently lacks. So let’s go all the way down. We can find the role right here. Let’s click on “View role,” and in here we need to attach a policy, and we’ll add an inline policy. So we’ll choose a service; the service is going to be KMS, and we want to give write access, namely the Decrypt operation. So we select the Decrypt operation in here, and that’s perfect. And for the resources, we need to select a specific resource and add the ARN of the key. And we need to find that key ARN. So, if we go to KMS and click on this key, we should be able to find the ARN.

So I can just add it and click on Review policy, and I’ll call it AllowDecryptFromKMS, create the policy, and now we should be good. So let’s retry our Lambda function to see if it works this time, and it still fails. So let’s wait a little bit and test again. And now the execution has succeeded, and we get this “Hello from Lambda.” And if we look at the logs, we can see that the DB URL, the mysql URL, and the super secret password are all printed. So this worked. So definitely, using environment variables is a great way to use Lambda, and remember that you need to encrypt an environment variable if you want to pass in something really secret. Another really good way to deal with Lambda and secrets is to use SSM, the Systems Manager. So if we go in here to Systems Manager and scroll down to Parameter Store, remember that we can have parameters in here, so we’re able to create a secret parameter.

We already have something called prod-db-password, so we could just use that, and we’ll look at this parameter in there. So we’ll go back to the Parameter Store. This prod-db-password is a SecureString, which means that it’s encrypted, and we can see the value here: my super secret password. And so, using the AWS SDK, we should be able to also retrieve this database password, if we are authorised in IAM to decrypt it, and read it into Lambda (I’ll leave a small sketch of that call at the end of this lecture). And that’s another way of importing secrets or common values into Lambda. So the topic of this lecture was Lambda security. Remember, we have environment variables, and we have KMS encryption for those variables, and we must ensure that the IAM role has sufficient permissions for your Lambda function to work properly. And then finally, you can use the SSM Parameter Store to store some secrets, or just normal values, for your Lambda function in case you need them. Finally, there is another place to store secrets, known as Secrets Manager, and we’ll see this later on in this course. Just keep in mind that you can also retrieve secrets from AWS Secrets Manager using the SDK. Okay? So that’s it for this lecture on Lambda security. I will see you in the next lecture.
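And as promised, here is the retrieval sketch: with boto3, reading that SecureString parameter is just a couple of lines. The parameter name is the one from this lecture, and note, as an assumption, that the function’s IAM role would need ssm:GetParameter plus kms:Decrypt on the key:

```python
import boto3

ssm = boto3.client("ssm")

# WithDecryption=True makes SSM decrypt the SecureString with KMS for us.
response = ssm.get_parameter(Name="prod-db-password", WithDecryption=True)
db_password = response["Parameter"]["Value"]
print(db_password)
```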
