Amazon AWS DevOps Engineer Professional – Configuration Management and Infrastructure Part8

  1. Lambda – Versions, Aliases and Canary Routing

So let’s talk about AWS Lambda versions, because as soon as you start working on Lambda functions and putting them into dev, test, and production, the versions and aliases that we see in the next slides become really, really important. When you work on a function, you work on the version called “latest” (formally $LATEST), and that makes sense: what you change is the most recent version, and latest is mutable. Mutable means that we can change it as we want.

When we’re ready to publish this function and are happy with its state, we can publish it and create a version. So the first version we create will be V1, for example, and that version is immutable. That means we can never change it; it’s a snapshot, and versions get increasing version numbers, so after V1 there will be V2. Now, each of the versions gets its own ARN, the Amazon Resource Name. So, depending on the architecture, we can address V1, V2, or latest. A version is the code itself plus the configuration, so environment variables, timeout, et cetera; nothing can be changed, everything is immutable. And each version of the Lambda function, including latest, can be accessed whenever you want using the correct ARN. So in our workflow, when we work and we’re happy with the state of a Lambda function, we say okay, we’re going to snapshot latest, and we’re going to create a new version out of it.
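The qualified ARN scheme can be sketched in a few lines. The region, account ID, and function name below are placeholders, not values from the course:

```python
# Sketch of how Lambda qualifies ARNs for versions and aliases.
# Region, account ID, and function name are placeholders.

def qualified_arn(region, account_id, function_name, qualifier):
    """Build a qualified Lambda ARN for a version number or an alias name."""
    base = f"arn:aws:lambda:{region}:{account_id}:function:{function_name}"
    return f"{base}:{qualifier}"

# A version ARN ends with the version number...
v1_arn = qualified_arn("us-east-1", "123456789012", "my-function", "1")
# ...while an alias ARN ends with the alias name.
prod_arn = qualified_arn("us-east-1", "123456789012", "my-function", "prod")

print(v1_arn)   # arn:aws:lambda:us-east-1:123456789012:function:my-function:1
print(prod_arn) # arn:aws:lambda:us-east-1:123456789012:function:my-function:prod
```

The only difference between addressing a version and addressing an alias is the qualifier at the end, which is why rewiring an alias never changes what the callers see.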

Why would we do this? Well, we create versions in order to have a development environment, a test environment, and a production environment, right? However, if we create new versions all the time, we must constantly rewire everything in dev, test, and production, which can be extremely painful. Instead, we can use Lambda aliases: aliases are pointers to Lambda function versions, so versions are immutable and aliases are not. We can define, for example, dev, test, and prod aliases and point them to different Lambda versions, and we can change these pointers over time because they’re mutable. So here is our latest Lambda function version; it’s mutable, but we also have versions 1 and 2. Now we can create a dev alias that represents our development environment, which most likely is the latest function we have. As a result, we’ll point the dev alias at latest.

Now, when our users interact with our function, they don’t directly interact with the latest version; they interact with the dev alias. We also have a test alias, and say we’re actually testing version 2 of our function, so the test alias may point to V2, and if we have a prod environment, the prod alias may point directly to V1. So as you can see, all the aliases are mutable, whereas the versions are immutable. And if we want to do a blue-green deployment, updating our production environment to migrate it from version 1 to version 2 because it turns out V2 was great, we can do a deployment and assign weights: we can say that 95% of the prod alias is going to point to V1 and 5% of it is going to point to V2. This way, we’re going to test with a little bit of traffic how the V2 function is doing, and using this, we can do blue-green deployments. Our users don’t know what’s happening at all, because the only thing they interact with is the aliases. So for the users, nothing changes, and we’re able to basically iterate in the backend while providing a stable configuration for our users, our event triggers, and our destinations. We can change all we want and rewire the aliases, but for the users, the inputs are the same, and that’s because aliases have their own ARNs.
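The 95/5 weighted split can be illustrated with a small simulation. This is not how AWS implements it internally, just a sketch of what weighted alias routing means for incoming invocations:

```python
import random

def route(weights, rng):
    """Pick a version according to alias routing weights.

    weights maps version -> fraction of traffic, e.g. {"1": 0.95, "2": 0.05}.
    """
    r = rng.random()
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return version
    return version  # float rounding fallback: last version wins

rng = random.Random(42)  # fixed seed so the simulation is repeatable
counts = {"1": 0, "2": 0}
for _ in range(10_000):
    counts[route({"1": 0.95, "2": 0.05}, rng)] += 1

# Roughly 95% of invocations hit version 1, roughly 5% hit version 2.
print(counts)
```

Each individual invocation still lands on exactly one version; only the aggregate traffic follows the 95/5 split, which is why a small weight is enough to observe how V2 behaves under real traffic.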

So I hope that makes sense. We’ll go right into a hands-on now to give it a bit more clarity. So let’s play with these versions and aliases. Let’s modify our function a little bit to make it simpler: I’m going to remove everything about the environment variables, just import json, and make the return something very simple. I’m going to greet the world. Okay, so this is our function; it returns “Hello World”. Let me save it, and everything works fine. And if we test it, it goes ahead and says hello to the world. Excellent. Now let’s look at qualifiers. We have our versions and then we have aliases, and both are saying “latest”. What we should do now is publish our first Lambda version. We are very happy with this function and with what it does, so I’m going to go ahead and click Actions, publish a new version, describe it as “version one”, and publish. Here we go. My first version, numbered 1, with the description “version one”, was created 31 seconds ago, and we’re very happy with it. This is our version 1, and it is immutable. So I’m trying right now to change the code, but I cannot. Okay, so once you publish a version, the version cannot be changed at all.
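The simplified handler from this hands-on looks roughly like the sketch below. The exact body string in the lecture isn’t shown on screen here, so treat the literal as illustrative; the point is that publishing version 1 freezes whatever this code is:

```python
import json

def lambda_handler(event, context):
    # Once published as version 1, this exact code is snapshotted
    # and can never be changed; only $LATEST stays editable.
    return {
        "statusCode": 200,
        "body": json.dumps("Hello World"),
    }

# Invoking the handler locally with an empty event:
print(lambda_handler({}, None))  # {'statusCode': 200, 'body': '"Hello World"'}
```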

So now version 1 of our function contains this code and cannot ever be changed. To change the code, we go to the most recent version: we switch to latest, and here we’re able to change the code, so maybe I’ll say goodbye world and save this function. Now that we have saved latest, to publish this new version I’ll click on Publish new version, describe it as the updated Lambda function that says goodbye, okay? And click on Publish. So now we have two Lambda versions: version 1 says “hello world”, and version 2 says “goodbye world”. We can put it to the test and, yes, this one says goodbye to the world. Excellent. So now, how do we go about creating aliases? This is our development cycle: we keep on creating new versions, but what we’d like to have is a dev and a prod alias, right? So let’s go to Actions and create an alias. The first one is going to be called “dev”, with a description like “dev alias”, which makes sense: dev is the latest version of our Lambda function, and the version it is going to point to is latest. Okay, so click Create, and now whenever we refer to the dev alias, it will point to the most recent version. So if we make some modifications to latest, let’s go ahead and make some: I’m going to type 42, save it, and then test this function.

So this function returns 42, and if you click on the dev alias and test the function, it’s also going to return 42, because our dev alias is pointing to latest. But we’re going to make another alias, the prod alias, because this represents our most stable version of our Lambda function, and the version it will point to is version 1, so let’s click Create. Now we have the prod alias, and the prod alias right now, as you can see, is pointing to version 1. The idea with aliases is visible in the ARNs: the ARN of a version has the version number at the very end, while the ARNs of the aliases have “dev” or “prod” at the end. So now we can point our triggers not only to versions but also to aliases, and the aliases can point to versions, just as we saw in the slides. Okay, so this is good, right? But what if you want to release a new version? We’re pleased with how development is progressing and plan to put version 2 into production. To do this, I’m going to scroll down, and we can see the alias configuration here. We are viewing the configuration for the prod alias, and we could manage the configuration for version 1 by clicking on that link, but we’ll stay on this page. So prod is our most stable alias for our Lambda function, and currently it is pointing to version 1. But maybe we want to also point it toward version 2, and we’ll say that 10% of the traffic should go to version 2.

So now what we have is version 1 serving 90% of the traffic and version 2 serving 10% of the traffic, based on the weights we assigned. This is how we do some blue-green testing: we’re trying to see if the green version, the newer version, is working just fine. So I click on Save, and now anytime we invoke prod, 90% of the time we’ll get version 1 and 10% of the time we’ll get version 2. And if we’re happy, we can switch this all the way to 100 if we want to move entirely to version 2, right? Or we can just select version 2 here and be done with it. Or we could say, for example, that after 10% I want to test with a little more load, so I’m going to say 50/50. I enter 50; oops, I cannot enter 50 directly, but I can enter the five and then the zero. Here we go, I’m able to enter 50. And so in here we have 50/50 weights, and now 50% of the time version 1 will be invoked and 50% of the time version 2 will be invoked. So let’s save this and let’s test our prod alias.

So I’ll click on Test, and here we get Hello World, and I’ll test again, and we get Goodbye World. So this is excellent. And in the log output, we can see which version was invoked. So here it is, version 2. But if I test again and again, we should eventually see Hello World. Here we go, Hello World, and that was version 1. So here, using these weights, we’re able to shift traffic to a new Lambda version. And when we’re ready to promote version 2, we’ll select version 2, set the additional version to none, and click Save. Here we go, so we just have version 2 in here. This has been saved, so prod is pointing only to version 2, and now when I use my function, all I get is goodbye world. So using aliases, we’re able to do weighted deployments for our Lambda functions, and that’s really, really good. And does that remind you of something? Perhaps you’ve been waiting for me to say it all along, and I hope this served some purpose. Remember the CodeDeploy deployment configurations for Lambda functions: for Lambda, we had AllAtOnce, where we updated the Lambda function all at once.

However, we also had Lambda linear 10% every 1 minute, and this, under the hood, is using this very same feature of percentage weights. So that CodeDeploy deployment configuration is actually adding 10% every minute to this weight configuration, and there is also Lambda linear 10% every 2 minutes, and so on. Remember, linear means that we start at 10%, then go to 20%, then 30%, and so on, until 100%. And when we have a canary deployment, for example 10% for 15 minutes, that means we’ll be at 10% for 15 minutes, and right after, we’ll be at 100%, because a canary deployment is a two-step thing, okay? Whereas linear is gradual. It is super important to see how the CodeDeploy deployment configurations tie into this feature of aliases and versions. So that’s it for this lecture. I hope you liked it, and I will see you in the next lecture.
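The linear versus canary schedules described above can be sketched as a function of elapsed time. This is a simplification of what CodeDeploy does with alias weights, not its actual implementation:

```python
def traffic_to_new_version(strategy, step_percent, interval_minutes, elapsed_minutes):
    """Percentage of traffic on the new Lambda version after `elapsed_minutes`.

    'linear' adds step_percent every interval until 100%; 'canary' holds
    step_percent for one interval, then jumps straight to 100%.
    """
    if strategy == "linear":
        steps = elapsed_minutes // interval_minutes + 1
        return min(100, steps * step_percent)
    if strategy == "canary":
        return step_percent if elapsed_minutes < interval_minutes else 100
    raise ValueError(strategy)

# Linear10PercentEvery1Minute: 10, 20, 30, ... up to 100.
print([traffic_to_new_version("linear", 10, 1, m) for m in range(0, 5)])  # [10, 20, 30, 40, 50]
# Canary10Percent15Minutes: 10% for 15 minutes, then 100%.
print(traffic_to_new_version("canary", 10, 15, 0))   # 10
print(traffic_to_new_version("canary", 10, 15, 15))  # 100
```

The two-step nature of canary versus the staircase of linear is exactly the difference the exam expects you to know.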

  1. Lambda – SAM Framework

So now let’s learn how we can use AWS SAM, the Serverless Application Model, to deploy applications onto Lambda using code. SAM provides a command-line tool, the SAM CLI, to easily create and manage serverless applications. I recommend you go to this link and install it for Linux, Windows, or Mac, and just follow the instructions; for me on macOS, at the very bottom, all I have to do is run the brew commands `brew tap aws/tap` and `brew install aws-sam-cli`. Once you’re ready, run `sam --version`. Here we go, this is the SAM CLI version; you’ve probably got a newer version than me. Okay? So next, what we have to do is create our first SAM application. I’ll run `sam init --runtime python3.7`, and SAM will create a project for me based on Python 3.7.

So it generated a folder called sam-app, and within this folder we’ll be able to do a lot of things. For starters, I can change into the sam-app directory and clear the screen. So clear, and look at all of this. Let’s use VS Code for this; it’s going to be simpler. In the sam-app folder, we have a lot of files. The first one is template.yaml, and this is the most important file for SAM. As you can see, this looks like CloudFormation, but not exactly. It is definitely based on CloudFormation, but there is a transform here, and this transform says AWS::Serverless-2016-10-31, a version string that hasn’t changed since 2016. This is saying that this is a CloudFormation template, but it needs to be run through the serverless transform first to become a valid CloudFormation template. And this is what the SAM CLI will do for us: it will transform this template into real CloudFormation that we can apply on AWS to deploy our application. Here we get some information about our function and some global parameters that we can set; for example, the function has a timeout of 3 seconds. And we are defining resources: here, the type is AWS::Serverless::Function, and it turns out that this is not a CloudFormation type, this is a SAM type. Thanks to the transform, it will be turned into a CloudFormation Lambda function at some point. The properties say that the code lives in the hello_world directory, so we have a hello_world directory here that contains our code, and app.lambda_handler is the handler; we’ll see the app in a second. Okay, the runtime is Python 3.7, and the events to invoke our function are of type Api, so this is going to be an API Gateway, and to reach our function we have to go to the path /hello with the method GET. So this is a way, in one block, to define one Lambda function and also an API Gateway resource.
If we were using raw CloudFormation, we would have to do this in two steps.
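Assuming the Python 3.7 quickstart, the generated template.yaml looks roughly like this sketch; resource names and comments are abbreviated, so treat it as illustrative rather than the exact file:

```yaml
# Sketch of the template.yaml that `sam init` generates.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # turns SAM resources into CloudFormation

Globals:
  Function:
    Timeout: 3          # the 3-second timeout mentioned above

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function     # a SAM type, not a raw CloudFormation one
    Properties:
      CodeUri: hello_world/             # the code lives in this directory
      Handler: app.lambda_handler       # file app.py, function lambda_handler
      Runtime: python3.7
      Events:
        HelloWorld:
          Type: Api                     # also creates an API Gateway resource
          Properties:
            Path: /hello
            Method: get
```

One short resource block here expands, after the transform, into the Lambda function, the API Gateway REST API, permissions, and an IAM role.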

So this is a much more intuitive way of defining our Lambda function. In terms of outputs, what will the CloudFormation stack output? Well, it will output the hello world API URL, the hello world function ARN, and the IAM role that was created for this function. So this template.yaml file is really, really important. Looking through the remaining files in this directory, we find app.py, which is a very simple Lambda function that returns just a hello world; it says “hello world” at the very bottom. Simple, right? requirements.txt is a dependency file, and it says that you need to install the requests package to package this application. This is something that the SAM CLI will take care of when we do the sam build. Okay, so let’s go into events. The events folder shows us the type of events that get passed to our Lambda function, so we can use the SAM CLI to test our Lambda function locally, and we’ll see this in a second. And finally, there are some tests if we want to test our application before it gets deployed. So this is it, and now all we want to do is deploy it. You could read the README.md file if you need to understand everything, but let’s go ahead and run a few commands.

So the first command we’re going to run is `sam build`. In here, I’m going to run sam build, and this is going to build our Lambda function. The first thing is to resolve the dependencies, and there were some requests dependencies to resolve, and then copy the source. The build artifacts are in the .aws-sam/build directory, and we have the built template.yaml, which is the exact same template as we had before, and the hello world function. But this time, it’s not just app.py; there are also all the packages that were installed by the sam build command, and those will get uploaded to AWS Lambda with our function. So, what are our options now? We can definitely test our function locally, but this is something you can only do if you have Docker installed, so go online to see how to install Docker if you don’t have it. I’ve cleared my screen, and now we can run the `sam local invoke` command to invoke our function locally. We’re saying: invoke the HelloWorldFunction using the event in the JSON document here. Press Enter, and this will start Docker. Docker will be fetching the Python 3.7 Lambda Docker image, so this can take a little bit of time.
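What `sam local invoke` does, conceptually, is feed an API Gateway-style event (like the one in events/event.json) to the handler. A purely local sketch of that round trip, with a trimmed, illustrative event and a default-style handler (both are assumptions, not the exact generated code):

```python
import json

# A trimmed-down API Gateway proxy event, similar in shape to events/event.json.
event = {
    "httpMethod": "GET",
    "path": "/hello",
    "queryStringParameters": None,
    "body": None,
}

def lambda_handler(event, context):
    # The quickstart handler returns a proxy-style response like this.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello world"}),
    }

response = lambda_handler(event, None)
print(response["statusCode"])                   # 200
print(json.loads(response["body"])["message"])  # hello world
```

The SAM CLI does the same invocation inside a Lambda-like Docker container, which is why the local result matches what API Gateway returns after deployment.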

So I’ll wait until this is done, and this is just a one-time thing. And after a painfully long time, we have received status code 200 and the body message hello world, which is the result of our Lambda function. So we are able to run our Lambda functions locally using the SAM CLI, which is great for developers. Obviously, we are also able to start the entire API with `sam local start-api`, which will start a local API Gateway and our Lambda function. And now, if we go to this URL, we can test our Lambda function from here. So let’s go to this URL, and hopefully, if things work, we get the message “hello world”. So we are able to have a local API Gateway as well, which I think is quite handy when you are a developer. This is for testing our function, and the SAM framework definitely helps us with that. But what about packaging and deploying it? That is step four, and this is step five. We package our function using the `sam package` command, and out of it we’ll get a packaged.yaml template file, which will be a valid CloudFormation file. In addition, the code will be uploaded to my S3 bucket for this course, and you should obviously change this to your own bucket.

The region will be an EU region because that’s where I am, and I’ll be uploading using my AWS profile for DevOps. So let’s run the `sam package` command, and now we see that the package has been successfully uploaded. If we go into the S3 console, to my bucket, and refresh, I can see that the SAM framework has uploaded this package, and hopefully this package can be deployed to AWS Lambda. So to do this, what do I do? Well, I run `sam deploy`, and here we go: we’ll deploy this, and hopefully our function gets created correctly. So a change set has been created, and we’ve specified CAPABILITY_IAM as part of our command, because we’re creating an IAM role for our Lambda function and we must grant these IAM capabilities; otherwise, we’ll get an InsufficientCapabilities exception, remember, from the CloudFormation section? And so now we’re waiting for the stack to be created or updated. If we go very quickly to CloudFormation, we see that the aws-sam-getting-started stack is being processed, and we can have a look at the template in here and see that the transform was AWS::Serverless.

So CloudFormation is transforming the template, and to view the processed template, we can toggle this. Here we get more information about all the stuff that gets created for us. So, as you can see, it was processed, and then we have a Lambda permission, an IAM role, an API Gateway stage, an API Gateway deployment, another Lambda permission, the REST API, and the Lambda function at the very end. So the idea with the SAM framework is that we upload a very simple template, and when it gets processed by CloudFormation, it becomes a much more complicated template that would have been much harder to write in the first place. This is the whole power of the SAM framework. Let’s go into the events: the creation is complete. So now in Lambda, I’m able to refresh, and I have my aws-sam-getting-started function right here. We could test it, but it also belongs to an API Gateway, so if I click on the API endpoint here and open it, we get “hello world” executed directly through API Gateway on AWS. So that’s perfect. Everything is working, and we’ve deployed our first application using SAM, which I think is pretty cool. I will see you in the next lecture.

  1. Lambda – SAM and CodeDeploy

So I want to show you how everything ties together. When we use the SAM CLI to deploy our application, it includes built-in support for CodeDeploy, ensuring safe Lambda deployments. So everything I’ve been telling you about AWS ties together, and as a DevOps engineer, you should really understand these integrations. We’ll have a demo of CodeDeploy using a deployment configuration, thanks to the SAM framework. With just a few lines of configuration, SAM does the following for you: it deploys new versions of the Lambda function and automatically creates aliases that point to the new version (we’ve seen aliases and versions, so we definitely understand this sentence); it gradually shifts customer traffic to the new version until you’re satisfied that it’s working as expected, or you roll back; and you can have pre- and post-traffic test functions.

These hooks verify that the newly deployed code is configured correctly and that your application operates as expected, which ties back to the CodeDeploy appspec.yml pre- and post-traffic hooks. And finally, there’s a rollback if a CloudWatch alarm is triggered, which maps to the CodeDeploy feature that says: if a CloudWatch alarm is triggered, roll back the deployment to the last known good version. So this is good, right? Let’s see how we can make it work. And for this, we just need to add a snippet of configuration to our Lambda function.

So let’s take a look at what it does. We’ll go in here, to template.yaml, scroll down, and next to the events listed here, let me add some code. Paste it; this should be indented one more level. Here we go. So, with AutoPublishAlias: live, every time we deploy, SAM publishes a new version and points the live alias at it, and the deployment preference is Canary 10% for ten minutes. We know what that is: 10% of the traffic goes to the new Lambda function, and after ten minutes, the entire traffic goes to the new Lambda function. So this is the ten-minute Canary 10%, and this comes straight out of CodeDeploy, so we can choose any type of deployment configuration we want here. The Alarms section is for monitoring alarms and rolling back our deployments; we don’t need it, so I’m going to delete it. And then the Hooks are the validation Lambda functions that are run before and after traffic shifting, if you wanted to test, for example, communication with a database or schema changes or whatever.
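The snippet in question looks roughly like the sketch below, trimmed to the two properties kept in the demo (the Alarms and Hooks sections are the ones being deleted); indentation and resource names are illustrative:

```yaml
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      # ...existing properties (CodeUri, Handler, Runtime, Events)...
      AutoPublishAlias: live             # publish a new version on each deploy
                                         # and repoint the "live" alias to it
      DeploymentPreference:
        Type: Canary10Percent10Minutes   # shift 10% for 10 minutes, then 100%
```

Any of the Lambda deployment configuration names from CodeDeploy (linear or canary variants, or AllAtOnce) can go in the Type field.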

So I’m going to delete this as well, because we don’t need it. The only thing we need is the deployment preference of type Canary 10% for ten minutes. Next, let’s go into our function app.py and change the word from “hello world” to “goodbye world”. This way, we have an update, and now we need to run the SAM commands to deploy. So we first build our application: we run `sam build`. Here we go. Next, it resolves the dependencies and creates new artifacts. Then I need to package my function, so I run `sam package`. Here we go; the package command is now running, and it’s uploading the new file into S3. Excellent, we have a new revision in S3. Finally, `sam deploy` will apply the new revision through CloudFormation. So now the change set is being created, and it will be applied; the stack update is happening. So let’s go in here, to S3 first: as we can see in S3, we now have two revisions of our Lambda function uploaded. And in CloudFormation, if we refresh, we have an update in progress.

And if I click on the stack name and look at the resources being created, we now have a CodeDeploy service role being created, and then, I guess, a CodeDeploy application will be created for us. So if I search for CodeDeploy, we see that we have a CodeDeploy application being created, and if we go to the CodeDeploy console, we should be able to see our application very soon. CodeDeploy, indeed; this one requires some getting used to. Perfect, we have a deployment group that has been created for us as well. And now we should look at deployments: when the deployment occurs, it will appear right here. So let’s wait a little bit. And it turns out that no deployment has happened; I looked into CloudFormation, and the reason was that the old Lambda function was being deleted. So we probably need to run this one more time to make it work. Let’s do one more change in our application: I’ll change this back to “hello world”. Okay. And I’m going to run the same commands: `sam build` as a first step, then `sam package`, and then `sam deploy`. So I’ll copy this and paste it. Here we go.

So now it should push yet another new revision into S3 and deploy it using the `sam deploy` command, and hopefully, this will be the first time we see a deployment in CodeDeploy. So let’s hold our breath and see if it actually happens. And yes, here we go: we have a blue-green deployment happening within our CodeDeploy Lambda application. Really cool. It’s in progress, so I click on the deployment ID to get some more information about what’s happening. We have the pre-deployment validation hook step, but there were no hooks configured, so none ran; that’s perfect. Then there is traffic shifting: 90% of the traffic goes to the original Lambda function version and 10% goes to the new version, and this will happen for ten minutes. This is due to the canary strategy we selected: it was Canary 10% for ten minutes, okay? And then, after ten minutes, the post-deployment validation should happen, and all the traffic should shift onto the new version. So, how do we verify this in Lambda as well?

So we’re going to Lambda. I’m going to refresh this page, and in here, if we look at the function versions (they’re under qualifiers), we have version 1 and version 2, and for now the live alias was pointing to version 1. So let’s click on “live”, and as we can see, live is now a weighted alias, with 90% on version 1 and 10% on version 2. So I really, really like this lecture because it ties a lot of services together: there’s the SAM framework, there is a Lambda function, there is CloudFormation, and there is CodeDeploy doing a blue-green deployment for us, and the Lambda function versions and alias are working nicely altogether. That demonstrates the power of the SAM CLI, because we can simply write a template in a YAML file, upload our Lambda functions, update them, and deploy them using CloudFormation behind the scenes, leveraging CodeDeploy to do a very gradual release of our Lambda functions. So I hope you enjoyed this lecture. I’m really excited that everything worked, and I will see you in the next lecture.

  1. Step Functions – Overview

So let’s talk about AWS Step Functions, and we’ll start with the use cases. Step functions are used when you have a very complex workflow and you need to isolate the steps of your workflow and probably make it more visual and intuitive. So let’s go through the use cases. For example, transcoding a media file: imagine someone uploads something into an Amazon S3 bucket, then a Lambda function is triggered and it starts an execution of the step function.

Now, the step function can be used to coordinate between a lot of different services and create this entire workflow. For example, the workflow can be that Lambda invokes the Amazon Rekognition API to extract information from the image, then another Lambda function extracts the metadata from the stored object, and all this information goes into an Amazon DynamoDB table. Now, you could code this in one giant Lambda function, but it would be very difficult to do error handling, see what’s going on, trace the performance, and decouple things. Step functions help bring this flow together by defining a workflow, saying all these things need to happen, and having them happen independently. Another use case is sequencing batch processing jobs.

So imagine you have a lot of batch jobs using AWS Batch, and you say batch job one needs to happen, then batch job two, and then batch job three; to sequence them, you create a workflow saying that one, then two, then three happens. Then you can use Step Functions again, because Step Functions can transition you from the first batch job to the second batch job to the third batch job. So anytime there’s a workflow, Step Functions are a great candidate. You couldn’t use a Lambda function for this because, first and foremost, you’d be paying for the Lambda function to wait while each batch job finishes.

And if the batch jobs run over 15 minutes, then your Lambda function would time out. As a result, Lambda does not work in this case; a step function is a much better fit for orchestrating different batch jobs. Another example would be sending messages from automated workflows. Here we have an Amazon API Gateway that triggers a Lambda function, and that Lambda function starts a step function, which runs a whole process around having admin verification, sending data to an SQS queue, having a consumer look at that queue, and doing something with S3. So the step function can coordinate all the manual approvals and data flow along the way. You get a lot of these different examples in here.

So I invite you to go to this page, Step Functions use cases, and have a look through it. Whenever you have a workflow, even a very complex one, Step Functions can be used for it. So we can go to Step Functions in the management console. Step Functions, as you can see, are used to coordinate distributed applications using visual workflows. So let’s get started. Here we go. We’ll create a hello-world example, and this is our state machine definition: a JSON document that is quite easy to read, and that JSON document can be translated into something a bit more visual. So here we get the entire visual representation of the workflow defined on the left-hand side. And the idea is that you can definitely talk to a business user and say: look, this is the workflow on the right-hand side that we’re defining as code on the left-hand side. And all along the way in this workflow, you can do different things: you can pass some data, you can invoke some Lambda functions, and you can wait for an arbitrary amount of time. Step function executions can run for up to one year, so you have a lot of time to transition between all these events. It could be a payment process, for example, or a machine learning process, or orchestrating batch jobs, or whatever.

So this all looks great. Let’s click on Next. Then we’ll create an IAM role for this step function to execute with and create the state machine. So here we go. Our first state machine has been created, and now we can start an execution of that state machine; that means passing data through the state machine and seeing what happens. So what we’ll do is set the hello-world example input to true and start the execution. Now that the execution is underway, we can examine the path it takes. It went through the “is hello world example” choice: yes. Then it waited for 3 seconds, did two things in parallel (saying “hello” and “world”), and at the very end, it created Hello World, and the output of that went into the end state. So let’s go to the end. The user interface is not always easy to use; here we go, the end is here. So this was the path that was taken for this execution of the workflow. Okay, but we could create another execution, so I’ll create a new one, and this time I’ll set the hello-world example input to false and see what happens. In this execution, the choice went directly into the “no” branch: there was a branching, testing for true and false, and it ended up in “no”.
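The branching just described can be sketched in Amazon States Language. This is a simplified version of the console’s hello-world sample; the variable and state names here are illustrative, and the real sample adds a Parallel state for the “hello” and “world” steps:

```json
{
  "Comment": "Simplified sketch of the hello-world sample's Choice branching",
  "StartAt": "IsHelloWorldExample",
  "States": {
    "IsHelloWorldExample": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.IsHelloWorldExample", "BooleanEquals": true, "Next": "Yes" }
      ],
      "Default": "No"
    },
    "Yes": { "Type": "Pass", "Next": "Wait3Seconds" },
    "No": { "Type": "Fail", "Cause": "Not Hello World" },
    "Wait3Seconds": { "Type": "Wait", "Seconds": 3, "End": true }
  }
}
```

With an input of `{"IsHelloWorldExample": true}` the execution flows through Yes and the Wait state; with `false` it hits the Default branch and the Fail state, which is exactly the failed execution shown above.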

And the No branch resulted in a failure. And we went to the end of the execution. And so here we get some information around the execution, the event history, and it says, “Okay, at the very end, the execution has failed with the cause ‘Not Hello World.’” Okay, so going back into our step function, we can see all the executions that have happened. So this is really nice because we can audit why something failed or succeeded. So for the one that succeeded, for example, we can see all the steps and click on Yes, seeing what was the input, what was the output, and what was the exception, if any. So we can retrace all the event history in our step function with the elapsed time, the timestamps, and so on. And so this is the real power of state machines. Now, state machines can be a lot more complicated, and we can create a new state machine. For example, we have code snippets and we can create our own state machines in here, and we have some examples here, such as invoking a lambda function and so on. But you can also use a sample project if you want to start with one and just understand how a few things work, for example to manage a job queue, a container task, a batch job, and so on.
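Because every execution keeps a full event history, you can audit failures programmatically too. A hedged sketch: the event shapes below are simplified stand-ins loosely modeled on what Step Functions’ GetExecutionHistory API returns, and the sample data is invented for illustration:

```python
# Sketch: scanning an execution's event history for failure events.
# The event dictionaries are simplified stand-ins for the entries that
# Step Functions' GetExecutionHistory API would return.
sample_history = [
    {"id": 1, "type": "ExecutionStarted"},
    {"id": 2, "type": "ChoiceStateEntered"},
    {"id": 3, "type": "FailStateEntered"},
    {"id": 4, "type": "ExecutionFailed"},
]

def failure_events(events):
    # Keep only events whose type indicates something failed.
    return [e for e in events if "Failed" in e["type"]]

print(failure_events(sample_history))  # → [{'id': 4, 'type': 'ExecutionFailed'}]
```

This kind of filtering is what lets you retrace exactly where and why an execution failed.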

Or we can use templates to see how things work, for example “hello world,” “wait,” “retry,” “parallel,” “choice state,” “catch failure,” and so on. So there’s a lot to play with, but this is not the focus from an exam perspective. You need to understand that step functions are used to orchestrate things as workflows, and as such, if you have a complex workflow happening in your DevOps process, it would be a good idea to invoke a step function as part of it. Okay, so what about step functions and integration with CloudWatch, for example? I’m glad you asked. So if we use CloudWatch Events, we can get a lot out of them. For example, using CloudWatch Events, we can create a rule, and the rule would be for step functions: the event type would be “Step Functions Execution Status Change,” and the status we’re looking for is something like FAILED. So, for example, anytime a step function fails, maybe we want to add a target that’s a lambda function, and that lambda function would send a message to our Slack channel. So anytime our step function fails, we are notified of it.
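The rule described above matches on an event pattern. Here is a sketch of that pattern as a Python dict, using the real source (`aws.states`) and detail-type (“Step Functions Execution Status Change”) that Step Functions publishes for execution status changes:

```python
import json

# CloudWatch Events / EventBridge pattern for a rule that fires whenever
# any Step Functions execution ends with status FAILED. A Lambda target
# on this rule could then post a notification to Slack, for example.
event_pattern = {
    "source": ["aws.states"],
    "detail-type": ["Step Functions Execution Status Change"],
    "detail": {"status": ["FAILED"]},
}

print(json.dumps(event_pattern, indent=2))
```

Dropping the `"detail"` key (or listing more statuses, like `"SUCCEEDED"` and `"TIMED_OUT"`) widens the rule to match more execution outcomes.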

So that’s a great integration. Or, if we select all statuses and all state machines, then we can send the events for all our step functions to the same lambda function. So we can start making some really cool automations here. Or you could have a schedule, for example, and say that every day you should start our step function. So let’s go here: step function, state machine, and say okay, every day you should start the Hello World example step function and run through it, and you can pass in some data. For example, you can say okay, here is the data you want to pass, and so on. So this is great. You are also able to create schedules for your step functions using CloudWatch Events. And so, through all these integrations, you can start building some really, really cool automations. So that’s it for step functions. I’m not going to go deep into it, because we don’t need to, but you need to understand, from an architectural point of view, going into the DevOps exam, what it’s used for and what it can do. So I hope to see you at the next lecture. And make sure to review the Step Functions use cases page when you get the chance. Okay, see you in the next lecture.
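For the scheduled trigger, CloudWatch Events accepts either rate or cron schedule expressions, and the rule’s target can carry a constant JSON input for the execution. A small sketch (the input key follows the Hello World sample; the 08:00 UTC time is just an example):

```python
# CloudWatch Events schedule expressions that could start the state machine
# once per day; both forms below use valid CloudWatch syntax.
daily_rate = "rate(1 day)"
daily_cron = "cron(0 8 * * ? *)"  # every day at 08:00 UTC

# Constant JSON input the rule's target would pass to each execution
# (the key name matches the Hello World sample's expected input).
target_input = '{"IsHelloWorldExample": true}'

print(daily_rate, daily_cron, target_input)
```

Note that CloudWatch cron expressions have six fields (minutes, hours, day-of-month, month, day-of-week, year), unlike standard five-field Unix cron.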
