Amazon AWS DevOps Engineer Professional – Configuration Management and Infrastructure Part 9

  1. API Gateway – Overview

So now let’s deal with API Gateway. API Gateway isn’t going to be a central service of your AWS DevOps exam, but it’s still really important to understand how it works. And you may have a few questions about it regarding canary deployments and so on. So I’d like to give you a quick overview of how it works, and we’ll view a few cool features as well. So I went to API Gateway, and as you can see, we already have an API that was created by the SAM framework. So we’ll just leave that one alone for the time being. But first, let’s go ahead and create an API.

So when you create an API in API Gateway, you need to know that it could be either a REST API or a WebSocket API. We’re not going to use WebSockets right now; instead, we’ll use REST. So to create a new API, we have four options. We can start with the example API, which uses this file. And this file is a Swagger file. So we could import an API from Swagger or OpenAPI. The idea is that you enter this giant JSON, which represents how your API looks, and that goes according to the specs of OpenAPI or Swagger. And then API Gateway will create that API for you. We’re not going to deal with the example API for now, and we’re not going to import a Swagger file. And we could clone an existing API if we wanted to, but we’ll just go ahead and create a new API. So I’ll call my API “demo API.”

Here we go. And in terms of the endpoint type, we have three options. The first one is regional. That means that we’ll deploy this API within the region I’m in right now, which is Ireland. But we could also have an edge-optimized endpoint. That means that this API will get deployed to all the CloudFront edge locations around the globe. So this is great if you want to reduce the latency of your API by deploying your API Gateway API alongside all the CloudFront edge locations. Then there are private APIs, which are only accessible via a VPC endpoint for API Gateway.

And so that means that this API gets deployed within your VPC, and it can only be accessed from within your VPC. So you can create a lot of private APIs this way. It just requires you to create a VPC endpoint for API Gateway first. And so you need to know about these three types of endpoints going into the exam. For now, we want to do something easy and public. So we’ll use the regional endpoint. Okay, let’s go ahead and create our API. So our API has now been created, and the first thing we have to do is create a resource. Then we’ll create a method, which will be a GET. And so, okay, here we go.

And the GET method can have multiple types of integrations. We have Lambda function, HTTP, Mock, AWS Service, and VPC Link. So let’s go over them one by one. You can integrate with a Lambda function in your account using the Lambda function integration. And that’s the one we’ll be using in this demo. You could use HTTP to proxy a request to another HTTP endpoint that you own. For example, suppose you already have an application built on your own on-premises infrastructure and want API Gateway to serve as the front end for that API. Then HTTP would be a great integration for this. You have Mock if you just want to mock an API, and AWS Service if you want to front an AWS service. And we’ll go over this in greater detail in a later lecture.

So, I’m not going to go into this right now. Finally, if you wanted to link to a specific type of endpoint in your VPC, you could use a VPC Link. So let’s go with the Lambda function. And we have to create a Lambda function for this. So let’s just go ahead and go to the Lambda console and create a Lambda function. So let’s create a function, and I’ll call this one “lambda-api-gateway,” and we’ll author it from scratch. Python 3.7 or 3.6 will suffice. Here we go. And we’ll create a role with basic Lambda permissions. Let’s go ahead and create this function. So my function has now been created, and what it does is just return a status code of 200 and a body saying “Hello from Lambda.” So that seems pretty easy. Let’s go into API Gateway, and let’s refresh this page. So I’ll refresh it, and under GET, I’m going to choose a Lambda integration. And the Lambda function I want to use is going to be lambda-api-gateway. And we’ll use the default timeout, but we could set a custom timeout of, say, 5,000 milliseconds, i.e., 5 seconds, if we wanted to.

By default, the integration timeout is 29 seconds overall. So we have the option to do Lambda proxy integration, but I won’t tick this right now; we’ll see it with the SAM getting-started API. So let’s click on “Save.” And we’re about to give API Gateway the permission to invoke our Lambda function. That makes sense. Let’s press OK. And here we go. So we have created our first GET method, and we can easily test it by clicking on “Test” in here and clicking on “Test.” And here we go. We get the response directly from Lambda: the status code is 200, and “Hello from Lambda” is the body. Fairly easy, right? And that was our first test.
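For reference, the function behind this test boils down to a handler like the following. This is a minimal sketch of what the console wizard generates; the exact wording of the body is taken from the demo:

```python
import json

# Minimal sketch of the Lambda handler used in the demo. API Gateway
# expects a dict with "statusCode" and "body" keys in the response
# (the body must be a string, hence json.dumps).
def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda"),
    }
```

Anything beyond these two keys (headers, isBase64Encoded, etc.) is optional for a simple demo like this one.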

So let’s go back to the method execution. Let’s go back to our API, and we’re all good to deploy this API. What we can do is click on Actions and then Deploy API, and the deployment stage will be a new stage. For now, we’ll call this stage “dev.” And let’s click on “Deploy.” So, this is my first deployment. The API won’t be usable until you do a deployment and create a stage. So we’ll deploy it, and here we go, we are on the Stages page. On the left-hand side, we have a stage that’s been created. Now we have an invoke URL for this stage. So if I click on it, I go to a public URL right here. And the API URL ends in /dev, because this represents the stage we have, and the result we get is status code 200 with a body of “Hello from Lambda.” So our API is now live and is actually invoking our Lambda function. So, before we go any further, let’s take a look at a few things. Number one is that you can create an API key.
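The invoke URL you get after deploying follows a predictable pattern. A small sketch (the API ID below is a made-up placeholder, not a real one):

```python
# A stage's invoke URL is built from the API ID, the region, and the
# stage name. "abc123defg" is a placeholder API ID for illustration.
api_id = "abc123defg"
region = "eu-west-1"  # Ireland, the region used in the demo
stage = "dev"
invoke_url = f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}"
```

This is why the URL ends in /dev: the last path segment is the stage name, and each stage you deploy gets its own URL.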

So if you wanted to publish your API and give access to people, you could create API keys and hand them out. The second thing to note is that if you wanted to publish logs from your API gateway to CloudWatch Logs, you would need to create a service role to do so. So that makes a lot of sense. Then we’ll take a look at the SAM getting-started API. So this had a resource called hello with a GET. And the GET was calling the Lambda function we had written.

And this integration request was of type Lambda proxy. We’ll see the difference between Lambda proxy and the plain Lambda integration we’ve used in a future lecture. So let’s go back here. And then the API itself was being deployed as a stage. And so we had two stages: Stage and Prod. That’s how it is with SAM. So two stages were created, and we could go ahead and invoke our Lambda function by going to this URL and getting the “hello world” from SAM. So, that’s a small and quick overview of API Gateway. We could go a little further and see that we can have an authorizer. So it’s possible for us to add authentication to our API gateway and use Cognito user pools or Lambda functions to make sure that users are authenticated before they are able to access our APIs. And lastly, I want to show you this article introducing Amazon API Gateway private endpoints.

So this is something I already discussed, which is that you can have a private API gateway deployed within your VPC. But what I really like about this article is that it describes a lot of the features of API Gateway visually. So let’s go over it right now. We can see here, for example, that all of our internet-accessible applications can use our Amazon API gateway, which can be deployed with CloudFront. We can also see that the API gateway is able to invoke Lambda functions as well as endpoints on EC2 or any other service. We’ll see this in a future lecture. It can also reach any publicly accessible endpoint. So that was through the HTTP integration. So an API gateway can just be used as a stand-alone service; it doesn’t have to be used with Lambda. Then we’ll scroll down. It’s also able to access Lambda functions that are within your VPC. So it’s definitely possible to have this sort of integration. It is also possible to gain access to other resources within your VPC, for example, endpoints on Amazon EC2. So it has more capability: not just Lambda functions in your VPC but also endpoints on EC2. And finally, private endpoints. You can deploy your Amazon API gateway within your VPC. And so services in your VPC are able to access your microservices or your APIs within your VPC. So we did everything in private. So I really like this blog because it shows a few diagrams that we can use to better understand Amazon API Gateway. So that’s it for this lecture. Just a quick overview, but don’t worry, we’ll see a lot more of API Gateway in future lectures. So I’ll see you in the next lecture.

  1. API Gateway – Integration with Lambda

So now let’s have a deeper look at how the integrations work with Lambda and API Gateway. So, if we look at the SAM getting-started API and go to the hello GET method, we can see that this is a Lambda proxy integration. The term Lambda proxy refers to the fact that the entire request is proxied over to the Lambda function. And so if I click on the Lambda function, as you can see, it points to the alias named live. So this is the Lambda function and its live alias. So let’s look at its code. Let’s go to version two, for example. Here we go. So the code for version two needs to handle the event format. So this is where we look at the event, and it will be in the proxy format, passed in as a dictionary. Here is the documentation for the proxy format. So this is a bit more involved when you do proxying, to deal with the transformations in the Lambda function. But what you need to remember here is that the integration request is pointing directly to the live alias. So it is definitely possible to point an API endpoint to an alias. And why am I saying this? Well, because if we go to that alias again, let’s go back to the Lambda console and go to the alias.

As you can see here, I was able to do some canary testing. I’m able to add a version and do some weighting. So it is possible to do some A/B testing directly using the Lambda alias and keeping the API gateway as is. So it is one way of doing blue-green testing. As you can see here, requests will go through the API gateway, and they will get passed on to the Lambda alias. And the Lambda alias will split the traffic, for example 90% and 10%. So one way to perform an A/B or blue-green test with API Gateway is to perform the canary shifting directly on the Lambda alias. So that’s one way of doing things. It is also definitely possible to point the API directly to a specific version. So instead of “live,” I could say “2.” And this would point the integration to version two of my Lambda here. But version two is immutable. So there’s no way of doing any kind of canary testing if you point directly to a version instead of pointing to an alias. So that’s one way of doing things. And this works really, really well. The other option is to use the Lambda integration, not Lambda proxy, and integrate more deeply.
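The weighted alias can be scripted as well. A hedged sketch of the routing configuration for the live alias, shaped like the parameters you would pass to boto3’s `update_alias` call (the function name and version numbers are illustrative; the actual AWS call is left as a comment so the snippet stays runnable offline):

```python
# Sketch: 90/10 traffic split on a Lambda alias named "live".
# With boto3 you would run: boto3.client("lambda").update_alias(**params)
# Function name and version numbers below are assumptions for the demo.
params = {
    "FunctionName": "lambda-api-gateway",
    "Name": "live",
    "FunctionVersion": "1",  # the stable version gets the remainder (90%)
    "RoutingConfig": {
        # 10% of invocations are routed to version 2 (the canary)
        "AdditionalVersionWeights": {"2": 0.10}
    },
}
```

Because the API Gateway integration keeps pointing at the alias, shifting this weight is all it takes to move traffic between versions.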

So let’s go to the other API that we have. And here we have an integration request of type Lambda. And this integration request is directly pointing to the lambda-api-gateway function. It is absolutely possible to publish a version of that function. So let’s go ahead and publish a version. So we’ll go to the lambda-api-gateway function and publish a new version. And this is version 1. So it is entirely possible to include version one here and have our demo API point to a specific version as well. So this is absolutely fine. We can definitely do this. And I’ll just save this and say, “Okay, here we are pointing to version one of our function.” But the idea with the Lambda integration request is that we are able to do a bit more involved work here. So if we go back to the integration request, it is possible for us to use something called a mapping template. And a mapping template is a way for us to modify the content and the body that get passed to the function. So I won’t go into detail, but it is possible to have one for the content type application/json, for example. It is possible for us to create a specific mapping template. And we have a few templates to start from here, but it is possible to change the request using a scripting language called VTL (Velocity Template Language). For example, you could add a field to a JSON document, or you could remove one, or you could edit one. So it is possible to have the API gateway actually do some transformation on the content of the payload before passing it to the Lambda function.

And that could be really helpful to deal with compatibility issues, for example. So going into the exam, remember that mapping templates can be used to modify the content before it goes to a Lambda function. And also, it is possible for the response to be modified using a mapping template. So it is definitely possible for you to change both the request going to the Lambda function and the response coming from the Lambda function. So, in this case, we can change things at the API Gateway level while leaving the Lambda function alone. So it is possible for us to evolve the API while keeping the Lambda function static. So, that’s yet another way of doing things. This is all I wanted to show you. And obviously, if you make any changes here in the Resources section and you want to publish them, you will need to do an API deployment, choose a stage again, say, “We’ve changed XXX,” and then deploy your API for the changes to become live. Okay, so that’s it for the integration of API Gateway with Lambda. I’ll see you in the next lecture.
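To make the mapping-template idea a little more concrete: a template is just VTL text registered per content type on the integration request. A hedged sketch of what adding a field to the incoming JSON could look like (the template body and the “source” field are purely illustrative, not from the demo):

```python
# Illustrative VTL mapping template, keyed by content type, as API
# Gateway stores it. This one wraps the incoming body and adds a
# "source" field before the payload reaches the Lambda function.
# $input.json("$") is VTL for "the whole request body as JSON".
request_templates = {
    "application/json": (
        '#set($body = $input.json("$"))\n'
        '{"source": "api-gateway", "original": $body}'
    )
}
```

A response mapping template works the same way in the other direction, which is what lets you evolve the API contract without touching the function.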

  1. API Gateway – Stages and Deployments

So we have created our API, designed it, and decided which methods and resources we wanted. Now we need to deploy our API. And here is a common misconception: just because you make changes in API Gateway, it doesn’t mean they’re effective right away. For this, we need to make a deployment, and the changes only take effect after we deploy them. And so it’s a very common source of confusion, and I believe the exam asks about it quite a bit. Now when we deploy a change, we deploy these changes to what are called “stages,” and we can have as many stages as we want and use whatever naming we want for the stages.

So we could have dev, test, prod, beta, whatever you want. You’re free to name your stages, and you can have as many stages as you want. Each stage will have its own configuration parameters; each stage will be independent from the others; and the stages will keep a history of all the deployments made to them. So you can roll back a deployment if it doesn’t work out. So, now that we’ve established that we have dev, test, and prod stages, we can deploy to them. Now, why do we use stages? We use stages because we can have stage variables. And these stage variables are going to be like environment variables for our API gateway. And so by changing these stage variables, we can drive changes into configuration values. So it’s a bit abstract, but they can be used in Lambda function ARNs, HTTP endpoints, parameter mapping, templates, and so on.

So, use cases for stage variables: the main one will be shown in the next slide, which is that we can configure the HTTP endpoint that our stages communicate with. So, for example, if you have a dev stage, then you can talk to the dev HTTP API. If we have a test stage, then we can talk to the test HTTP API, and so on. We can also pass configuration parameters down to the Lambda functions through mapping templates. So the stage can be passed down all the way to the Lambda function. And so the Lambda function knows if it’s in dev, test, or production. And, for example, the stage variables will be accessible in the context object in AWS Lambda as well. So there are several ways in Lambda to retrieve these stage variables. So how do we use them? A very common pattern for using the stage variables is to use them with Lambda aliases. So, since Lambda aliases point to Lambda versions, we can use a stage variable to point to a specific Lambda alias. And so our API gateway will automatically invoke the right Lambda function based on the alias the stage variable points to. So let’s make it concrete with a diagram. We have our Lambda function, and it has seven versions, right? Six versions have been snapshotted, and we are working on the latest version. We’ve created aliases. Just like before, we have dev, test, and prod.

So dev will point to the latest version, test will point to version five, and prod will point to version two. Now, we want to expose all these aliases through an API. So we’ll have an API gateway, and it will have three stages. The stage variables are shown in parentheses. So the dev stage will have a variable called “alias.” And it’s just me defining that a variable named alias equals dev. And so by doing this and configuring my API gateway correctly, we can have my dev stage point to my dev Lambda alias. And then we can have my test stage, using alias equals test, point to my test Lambda alias, and so on for prod. And so, using these stages and these stage variables, we’re really able to modularize how we want our stages to behave and which Lambda functions we want them to invoke. So it is a very common exam question: the use cases of stage variables. Just understand that they’re like environment variables. And just as environment variables can change the code’s behavior, stage variables can change what the API gateway does or points to. Finally, when we deploy an API, we want to test the improvements to our API a little bit before going all in. And this is called a canary deployment. And canary deployments can be used on any stage, but usually they’re done when you go to production. And so you choose the percentage of traffic the canary deployment receives.
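Concretely, the alias trick works by putting the `${stageVariables.alias}` placeholder in the Lambda function reference of the integration. A sketch of the resulting integration URI (region, account ID, and function name are placeholders, not from the demo):

```python
# The integration references the function as
# "function-name:${stageVariables.alias}". API Gateway substitutes the
# calling stage's own "alias" variable at request time, so the dev
# stage invokes the dev alias, test invokes test, prod invokes prod.
# Account ID and function name below are illustrative placeholders.
function_ref = "lambda-api-gateway:${stageVariables.alias}"
uri = (
    "arn:aws:apigateway:eu-west-1:lambda:path/2015-03-31/functions/"
    "arn:aws:lambda:eu-west-1:123456789012:function:"
    + function_ref
    + "/invocations"
)
```

One integration definition then serves every stage; only the stage variable differs.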

So here’s an example: our API comes in version one and version two, and each version can communicate with your back end in whatever way it wishes. But we may direct 95% of our client traffic to the V1 API and only 5% to our canary. And so using this, we’re able to have just 5% of the requests go through the canary, and we’re able to have separate logs and metrics for it. And so, using these logs and metrics, we can ensure that our API gateway is not having any error responses, that our clients are happy, and that everything works great with our version 2 API. And when we’re ready, we can promote the canary and shift all the traffic to V2. And so that’s the purpose of canary deployments. You’re able to test two versions at the same time. Now it is possible for you in the canary to override stage variables, so you’re really free to customise as you want. And so this is sort of a blue-green deployment with AWS Lambda and API Gateway, because you’re able to test two versions at the same time and compare how things work. As for deployments, I’ll try to make it as concrete as possible in the next lecture with some hands-on.
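Under the hood, a canary is just extra settings attached to the stage. A hedged sketch of those settings, shaped like the `canarySettings` structure the API Gateway API uses, with the 95/5 split from the example (the stage-variable override is illustrative):

```python
# Sketch of canary settings on a stage: 5% of traffic goes to the
# canary deployment; stage variables can be overridden just for the
# canary (the "alias" override below is an illustrative example).
canary_settings = {
    "percentTraffic": 5.0,
    "stageVariableOverrides": {"alias": "v2"},
    "useStageCache": False,
}
```

Promoting the canary later means making the canary deployment the stage’s main deployment and dropping this percentage back to zero.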

  1. API Gateway – Deployments and Canary Testing

Okay, so now let’s talk about deployments and stages. So we can see in the resources that we can do things like add a few actions, modify the request, and so on. And then, when we’re ready, we can deploy the API, so I can create a new stage, for example, and name that stage “prod.” And effectively, what this does is create a new stage with a new URL, and it’s /prod this time. And then I get access to it. It said forbidden at first because it took a little bit of time to deploy, but now it’s working. It says “Hello from Lambda.” So now we have different stages. And for example, if I went to resources, let’s assume I was modifying something, we could go to Actions and deploy this API. Perhaps I should start with dev. I’ll say “made some changes,” and this is great.

We’ve just deployed it, and we’ll go ahead and test the dev environment. So the dev environment is working just fine. And now we want to publish and promote that API to production. So we go to Resources, Actions, Deploy API, and deploy to prod this time. Okay, this is one way of doing things, and it works great. But as I said before, we have support for canary deployments. So let’s go to production, and let’s say we want to do a canary deployment. So even though we really like what’s in dev, we don’t want to switch all the traffic on prod to the new version right away. So let’s go to the Canary settings, and we can create a canary. Now, what we did was nothing yet. It just created the canary, but nothing effectively happened. Let’s say now, for example, that we want to switch about 10% of the requests to the canary. So now 10% of our requests will go to the canary, and 90% will go to prod. But currently nothing is happening, and we haven’t done any canary deployments. So how does the canary work? Well, let’s go to Resources, and this time we’re actually going to change this method. So now let’s say the integration request is going to be a Mock.

So we’re just changing the integration request altogether. And if we test this API now, let’s go in here and we’ll test this API; click on “Test.” The result we get is no data. Okay? So, excellent. Now we’re happy with this method. So we’ll go to Actions, and we’ll deploy our API. So we’ll deploy it to dev and go ahead and test our API there. So if I go to this URL now, it shouldn’t show anything. So let us wait here and see what happens. Okay? And we’re very happy with this new version that shows us nothing. So we want to promote it to production. So we’ll return to Resources, select Actions, and Deploy API. And this time we’re going to prod. However, prod is now canary-enabled. So we are deploying this API only to the canary. And so as such, when we click on deploy and then go to our production stage, what we can see is that 90% of our traffic will go to the current production and 10% will go to the canary.

And so we can test this by going to the invoke URL in here. And, as you can see, it’s still saying “Hello from Lambda.” But if I refresh enough, hopefully at some point it should show me an empty response, and that’s coming from the canary. And so it should show me an empty response only 10% of the time. So I can refresh, refresh, refresh. And it still shows “Hello from Lambda”... and now it shows the canary’s empty response. So about 10% of the time, I get an empty response. Maybe then we want to switch a bit more traffic to it and say, “Okay, we’re happy with this version; let’s go 50/50 now.” So now, 50% of the time, it should redirect me to the canary. So let’s refresh. And now you see that half the time it shows me a blank response and half the time it shows me this “Hello from Lambda.” So, excellent. And when we’re very happy with this, we can promote the canary. And what that will do is make the canary our main stage and set the canary percentage to zero. So we’ll say, “Okay, update.” And now if I go to my API and keep on refreshing, very soon I should only see blanks. Yes, now I only see blanks. It was taking some time to deploy the API, but now every time I click refresh, I get blanks.

And so this is the whole power of canaries. We’re able to have a second canary deployment and test it alongside our main production API using the same URL. So to summarize, we have two ways of doing canary deployments with the Amazon API gateway. The first way is to use this Canary feature, which works great with any kind of deployment; your API gateway can be integrated with anything. The second way applies if your API gateway is actually integrated with Lambda functions. So if it’s integrated with a Lambda function, it’s possible to use, for example, a live alias. If you had an alias for this function, you could do the canary deployment on the alias itself, like we’ve seen. So let’s go to the SAM function, where we had the live alias. And we are able to do a canary deployment here as well by shifting traffic between version two and version one and assigning weights to them. So that’s two ways to do a deployment with API Gateway that’s going to be blue-green, or A/B testing, or canary, whatever you want to call it. Understanding the differences between these two before taking the exam is critical, as the exam will ask you questions about them. So I’m happy to have shown this to you, and I will see you in the next lecture.

  1. API Gateway – Throttles

So I need to talk to you about API gateway throttling. API Gateway serves as a front door for your APIs, so it can receive a lot of traffic. And by default, if you look at the account-level throttling, you can see that API Gateway limits steady-state requests to 10,000 requests per second. That limit is across all the APIs within an AWS account, per region.

Okay, this is obviously something you can increase using a service limit increase. But it is good to know that you have this API gateway account-level throttle at 10,000 requests per second. And this is obviously to avoid some costs in case you’re being DDoSed, for example. Here in the API gateway, on top of that throttle, we can define a usage plan. And a usage plan would be to say, “Okay, for client A, I want to define that they can only have 100 requests per second.” So this is it. And maybe a burst of 200 requests, and maybe they can have only 20,000 requests per month. And click on “Next,” and here we go. We can associate this with an API stage, for example, with this one, the stage prod, and a method, and we can say, “Okay, the method throttling is going to be on the GET method, and this one could be 50 requests per second with a burst of 100,” and we click on the green tick. So we can define some pretty sophisticated throttles in this area. And we can associate these usage plans with an API key. So if we associate it with an API key, then we give the API key to the customer, and they will be limited by this usage plan. So this is definitely a really good way of doing things.
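As a hedged sketch, here is the usage plan from this walkthrough expressed as the parameters you would pass to a `create_usage_plan` call: 100 requests/second steady state, bursts of 200, 20,000 requests per month, plus a per-method override. The API ID, stage name, plan name, and the exact per-method key format are illustrative:

```python
# Sketch of the usage plan described above. You would pass this to an
# API Gateway create_usage_plan call. API ID, plan name, and the
# per-method throttle key are illustrative placeholders.
usage_plan = {
    "name": "client-a-plan",
    # steady-state rate and burst limits for the whole plan
    "throttle": {"rateLimit": 100.0, "burstLimit": 200},
    # hard monthly quota on top of the rate limits
    "quota": {"limit": 20000, "period": "MONTH"},
    "apiStages": [
        {
            "apiId": "abc123defg",
            "stage": "prod",
            # tighter limit for one specific method of the stage
            "throttle": {"/GET": {"rateLimit": 50.0, "burstLimit": 100}},
        }
    ],
}
```

Attaching an API key to this plan is what actually binds a customer to these limits.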

Okay, and that works well. And you need to know that you have throttles at the API gateway account level, at the usage plan level, and also, if your API gateway is invoking Lambda functions, at the Lambda function level. So remember, for the Lambda function, when we go to, for example, this one and scroll all the way down, we see that we have an unreserved account concurrency of 1,000 executions. That means only 1,000 Lambda functions can be executed concurrently. So it is possible to be throttled at the API Gateway account level, the usage plan level, or the Lambda level. And the reason I’m saying this is that when a question asks you about throttling on the DevOps exam, you need to understand what kind of throttle is involved. Is it a Lambda function throttle? Is it a usage plan throttle or an API gateway account-level throttle? The answer obviously depends on the context, but hopefully this helps you understand, when the question comes up, what throttling is and what it means for API Gateway. All right, that’s it. I will see you in the next lecture.

  1. API Gateway – Fronting Step Functions

So you remember how we had step functions? Let’s return to our step function and examine it. So our step function was here, the Hello World example. And this was something within our account. And if we wanted to invoke the state machine, we would need to start an execution and pass in some data, for example, some input JSON. Okay? And then we start the execution. And the execution would go into the history. But what if you want to externally expose this state machine as an API? Well, for this, we could use an API gateway. So let’s play a little bit with it. Let’s go to our demo API, and I’ll create a resource called “stepfunction-invoke.” And you can name this whatever you want. And this is great. We’ll create this resource. Okay, now we need to define a method. And that method will be a POST because we want to pass in data to our step function. The same way here, when we start an execution, we have to define a JSON document.

This JSON document will be passed on by the API gateway to our step function. And so now the integration type could be a Lambda function, and that Lambda function could invoke the step function. But we can actually bypass that and just say that our API gateway should invoke an AWS service directly, and that’s really cool. So we go to the region and say “eu-west-1.” And we could choose any kind of service here. So the API gateway could be in front of any of your AWS services, which is extremely exciting in terms of the possibilities. For this, we’ll use Step Functions, and then we’ll choose the HTTP method POST. The action type is going to be “Use action name.” And then the action itself is going to be called “StartExecution,” which is the Step Functions API call used to start an execution. Now we need an execution role.

So the execution role for this is something we need to create. So let’s go into Services, let’s go into IAM. Here we go, and we’re going to create a role. So, Roles, Create role. And this role is going to be for API Gateway. So here we go. Click “Next: Permissions,” and we can attach permissions to it. And this is great. Click “Next: Tags,” then “Next: Review.” And we’ll call this “api-gateway-invoke-stepfunction,” followed by Create role. Okay, excellent. So now that this role has been created, we can copy the role ARN. Let’s return to the API Gateway page and paste the execution role ARN. The content handling is passthrough, and we’ll use the default timeout. And finally, let’s click on “Save.” Okay, so we have this. This is working. And let’s go ahead and try this out. So we’re going to test this method, and we need to pass in a body. So let’s pass in the body. And the body format comes from the documentation. So this is what it’s supposed to look like. And it’s supposed to have an input, so we could pass in some data if we wanted to. And then we need to name the execution.

So, “MyExecution.” And then we need to pass the state machine ARN, so we can go to Step Functions, get the state machine ARN right here, and paste it in so we have the correct one. Now let’s try this out. Click on “Test” and see what we get: the role is not authorized to perform StartExecution. So that makes a lot of sense. Okay, so I created an IAM role, and the IAM role is being used by our API gateway, but it does not have the right to invoke our step function. So let’s just click “Attach policy” and type “step function.” It’s always good to have a real understanding of how things work. And when we see an error, we know what happened right away. So this is an expected error, and we’ll attach this policy. So here we go. Now the policy has been attached to our role. So if we go back to our API gateway now, it can invoke Step Functions, and it also has the capability to push logs to CloudWatch Logs. So let’s try this one out again. We’ll test it, and this time, yes, we get a response saying that the execution is starting, along with the start date. And so if we go to the Step Functions console now and refresh the executions, we can see that we have an execution that has started right here, and it has failed because we don’t have the necessary payload, I guess.

But it has worked in terms of the invocation: the input that was passed was empty, and the output at the very end says that it failed. And the reason for this is that the execution didn’t get the fields the state machine expects in its input. So we can correct this by passing the appropriate payload to our step function. So if we look at what the right payload is, it looks like this. So let’s go ahead and pass in this payload as we should. So we’ll paste this in, and we’ll have to escape the quotes to make sure that this is correct. So let’s fix this really quickly. There, here we go, and we’ll close this off. Okay, so this should work, and we’ll have the comma in the right place. So here we go. This looks good. And then we can try this one again and click on “Test.” And now hopefully, if we go to our step function executions, we should have a new execution being started very soon. Oh, and we can’t name it MyExecution again, because execution names must be unique. So we have to name it MyExecutionTwo, for example, and test it. Here we go. Now, this should work. In Step Functions, refresh the executions; it is actually running. And if I click on it, very soon I should see that it has succeeded. Yes. Excellent.
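The quote-escaping step is worth spelling out: in the StartExecution request body, the "input" field must be a *string* containing JSON, which is why inner quotes need backslashes when typed by hand. A hedged sketch (the inner payload, state machine ARN, and account ID are illustrative placeholders, not the demo’s actual values):

```python
import json

# Sketch of a StartExecution request body. json.dumps on the inner
# payload produces the escaped JSON-in-a-string that the "input"
# field requires. Execution names must be unique per state machine,
# hence "MyExecutionTwo". ARN and payload below are illustrative.
inner_input = {"IsHelloWorldExample": True}
body = {
    "input": json.dumps(inner_input),  # a JSON string, not a JSON object
    "name": "MyExecutionTwo",
    "stateMachineArn": (
        "arn:aws:states:eu-west-1:123456789012:"
        "stateMachine:HelloWorldStateMachine"
    ),
}
```

Serializing the whole body then yields the doubly-encoded document you would paste into the API Gateway test console.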

So we were able to invoke our step functions directly from our API gateway. And something I want you to notice is that the result of the step function invocation is not returned to the API gateway. The only thing that’s returned is the execution ARN and the start date, meaning that the step function has started. But for us, there’s no way to know when the execution will end. It could take a year for a step function to run. And that’s why the API gateway just returns the fact that the execution has started. But here’s a pretty cool thing: we are able, using a POST payload, to invoke our step function in our account with a public API gateway that we could secure with some authentication and so on. So, overall, I think this is a really, really nice and fun example. All right, that’s it. I will see you in the next lecture.