SCS-C01 Amazon AWS Certified Security Specialty – Domain 5 – Data Protection part 9

  1. Glacier Vault and Vault Lock

Hey everyone and welcome back. In today’s video, we will be discussing the Glacier vault. Now, if you remember, AWS Glacier is basically an extremely low-cost storage service which allows us to store data securely as well as in a durable fashion. Now, Glacier is similar to S3, but in terms of retrieval time, S3 is much faster, and for use cases like hosting a website, S3 really provides great solutions there. However, Glacier is much cheaper and is typically used for data which does not need to be frequently accessed, so it is recommended for archival data. Now, with respect to security, there are a few important parts to remember as far as Glacier is concerned. One is that access to the data in Glacier can be controlled with IAM.

And the data in Glacier is also encrypted using server-side encryption. Now, customers who do not want server-side encryption and want to manage their own keys can encrypt the data on their side before uploading it to Glacier. Now, in terms of Glacier terminology, vault is one part that you need to understand. So basically, in Glacier the data is stored in terms of archives, and the archives are grouped together in terms of vaults. So a vault is basically a way in which the archives are grouped together in Glacier. Now, the vault becomes much more important because it contains the group of data, so access control on it is also recommended. Now, we can control access to the vault with vault-level access policies, and we can also have IAM policies attached.

We can also have the vault-level policy, so it is similar to S3: we can control access to S3 through IAM, and we also have the option to have resource-level policies in S3. All right, now one very important part. So before we continue further, let’s look into how we can create our own vault. So I’m in my AWS management console and let’s go to S3 Glacier so we can go ahead and create a vault. Let’s click on Create Vault and you’ll have to give the name for the vault. Let me give it as KP Labs vault. I’ll do a Next; you have an option to set an event notification. I’ll just leave it as default for the time being. Let’s click on Next and we’ll click on Submit. So currently you see you have a vault which is created. We don’t really have any data here and we don’t really have any archives here.
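For those who prefer to script the same thing, a vault-level access policy like the one described above can be built and attached with boto3. This is only a sketch: the vault name, account ID, role ARN, and statement contents below are made-up examples, not values from the demo.

```python
import json

def vault_access_policy(vault_arn, reader_role_arn):
    # Resource-level vault policy (analogous to an S3 bucket policy)
    # allowing a single role to initiate retrieval jobs and read the output.
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "allow-archive-reads",
            "Effect": "Allow",
            "Principal": {"AWS": reader_role_arn},
            "Action": ["glacier:InitiateJob", "glacier:GetJobOutput"],
            "Resource": vault_arn,
        }],
    }

policy = vault_access_policy(
    "arn:aws:glacier:ap-south-1:123456789012:vaults/kplabs-vault",
    "arn:aws:iam::123456789012:role/archive-reader",
)

# Attaching it would look like this (needs AWS credentials, so left commented;
# accountId="-" means "the account that owns the credentials"):
# import boto3
# boto3.client("glacier").set_vault_access_policy(
#     accountId="-", vaultName="kplabs-vault",
#     policy={"Policy": json.dumps(policy)})

print(json.dumps(policy, indent=2))
```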

So if you click on the vault, it will basically give you details like the ARN and the number of archives. You also have options for enabling notifications, and you have an option for permissions. So this is like a resource-level policy that you can attach, very similar to the S3 bucket policies that we do. Then you have vault lock. Vault lock is something which is very important for the exam; you might get a few questions related to it. So let’s look into what vault lock is all about. Now, the Glacier vault lock basically allows us to deploy and enforce the various compliance controls that your organization might have at an individual Glacier vault with a vault lock policy. Now, one great thing about the vault lock policy is that it is immutable: once you create a vault lock policy, you cannot make any further changes to it.

So let’s take an example. Let’s say you want to have a control which says write once, read many (WORM) in a vault; that you can do with the vault lock policy. Because many times what happens is that you have very sensitive data, like logs, that you must store, and you must ensure that they do not get deleted and cannot get edited. Now, a lot of compliance requirements apply here: if you are an organization which is dealing with extremely sensitive data, like defense, then you need to make sure that whatever logs come from the sensitive system handling the defense data, those logs cannot be edited and cannot be deleted either. And for such a requirement, the vault lock policy is the best one. Because once you create a vault lock policy saying that archived logs cannot be deleted, that policy itself cannot be edited.

Furthermore, let’s look into how exactly this would look. So I’m in my Glacier console; let’s create a vault lock policy. So let’s say that we want a vault lock policy in such a way that whatever data is present within this vault cannot be deleted, all right? So there cannot be any deletion of the data which gets stored in this vault. So let’s create a policy. So I’ll click on the button which says create a Vault Lock policy. Now, here you need to put a policy, so you can click on add a permission. So the effect here would be Deny. For the principal, let’s say everybody, because I don’t want anyone, not even an administrator, to be able to delete the logs within the vault. So now for the actions, let’s go a bit down and I’ll select Delete Archive.

All right? You can select all the actions, or just the actions that suit the use cases your organization might have. And I’ll click on add permission. So this is how the IAM policy looks: the action is glacier:DeleteArchive and the effect is Deny, so deletion of archives should not be possible. Once you have this vault lock policy in place, you can go ahead and initiate the vault lock. Now, once you have done that, you see, it gives you a unique lock ID. This is a very important part, so just copy it down, and once you have copied it, you can close it. So what happens is basically once you have written the policy, the policy is still in progress; the vault has not been locked yet, so you will have to complete the vault lock process. Now, we already know that once the vault lock is completed, you will never be able to edit any of these policies.

So this becomes immutable, and hence they have that additional layer of verification. So in order to do that, just click on Complete Vault Lock and specify the lock ID that you have copied. Click on the acknowledgement that the vault lock is configured, and it says that completing this will make the process irreversible. Once you have done this, click on Complete Vault Lock, and once you have done that, you see all the options of edit or delete went away and this is immutable. Now you will not be able to change this specific vault lock policy. And this helps whenever an auditor comes: you can show them the policy and say that this is an immutable policy which from now on can never be edited. And hence they can be assured that any archive that gets stored can never be deleted.
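The same two-step flow can be sketched in code. The policy below mirrors the console demo (Effect Deny on glacier:DeleteArchive for every principal); the commented boto3 calls show the initiate/complete lock steps, which need real credentials. The vault name and account ID are made-up examples. One thing worth knowing: AWS gives you 24 hours to complete or abort an initiated vault lock before it expires.

```python
import json

# Vault Lock policy denying archive deletion for everyone (WORM-style),
# matching what we built in the console demo.
lock_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-archive-deletion",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:ap-south-1:123456789012:vaults/kplabs-vault",
    }],
}

# The two-step lock flow (requires credentials, so left commented):
# import boto3
# glacier = boto3.client("glacier")
# resp = glacier.initiate_vault_lock(
#     accountId="-", vaultName="kplabs-vault",
#     policy={"Policy": json.dumps(lock_policy)})
# lock_id = resp["lockId"]                  # vault lock state: InProgress
# glacier.complete_vault_lock(
#     accountId="-", vaultName="kplabs-vault", lockId=lock_id)
#                                           # vault lock state: Locked (immutable)

print(json.dumps(lock_policy, indent=2))
```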

  1. DynamoDB Encryption

Hey, everyone, and welcome back. In today’s video, we will be discussing DynamoDB encryption. Now, typically, if your organization is processing some sensitive data, you need to make sure that the data remains encrypted. Now, during the earlier phases of DynamoDB, you did not really have server-side encryption. However, you did have an option to encrypt the data at the client level. So, basically, what we can do is make use of the DynamoDB Encryption Client. So this is the term that you need to remember. The DynamoDB Encryption Client is used to protect the data in the table before we send it to DynamoDB. So the data gets protected, or encrypted, before it goes to the DynamoDB table. Now, the DynamoDB Encryption Client can be used with AWS KMS or even CloudHSM, or you can use the library on its own.

So it is not like you need to stick with KMS or CloudHSM. The library itself can support custom crypto keys if you have them, and you can manage them by yourself. Now, recently, AWS also released the feature for server-side encryption in DynamoDB. So you can make use of server-side encryption along with KMS for DynamoDB. So let’s quickly look into both of them. So, if you look here, this is available on GitHub. So this is the library, which is aws-dynamodb-encryption. This library can be used for client-side encryption; that means the encryption happens at the client side before the data goes to DynamoDB. Now, for this, you can use your own custom crypto keys as well; you have that flexibility. Now, if you go ahead and create a table in DynamoDB, this is one of the recent features.

If I just deselect the default settings and go a bit down, you have the option for encryption at rest, where you have server-side encryption, and you also have encryption with KMS. So you can go with either of them. Now, one important part to remember is that in the exam, the question might say that you want to encrypt the data at the origin. When it comes to encrypting at the origin, then the DynamoDB Encryption Client is the option for you, all right? However, if it says that you want to have the data encrypted at rest, then this option of DynamoDB server-side encryption with KMS can be suitable for you. So this is a short video related to DynamoDB encryption. There can be certain questions which might come related to this topic. So with this, we’ll conclude this video. I hope this has been informative for you, and I look forward to seeing you in the next video.
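If you were creating the table from code rather than the console, the encryption-at-rest choice is just a parameter on create_table. A minimal sketch, with a made-up table name and key schema; omitting KMSMasterKeyId means the AWS-managed key for DynamoDB is used:

```python
# Parameters for a new table with server-side encryption via KMS.
# Table and attribute names here are made up for the example.
table_params = {
    "TableName": "kplabs-orders",
    "AttributeDefinitions": [
        {"AttributeName": "order_id", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "order_id", "KeyType": "HASH"},
    ],
    "BillingMode": "PAY_PER_REQUEST",
    # SSEType "KMS" encrypts at rest with an AWS KMS key; pass
    # "KMSMasterKeyId" here if you want a customer-managed key instead.
    "SSESpecification": {"Enabled": True, "SSEType": "KMS"},
}

# Creating it would look like this (needs credentials, so left commented):
# import boto3
# boto3.client("dynamodb").create_table(**table_params)

print(table_params["SSESpecification"])
```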

  1. Overview of AWS Secrets Manager

Hey everyone and welcome back. In today’s video, we’ll be discussing AWS Secrets Manager. Now, AWS Secrets Manager enables customers to rotate, manage and retrieve database credentials, API keys and any other secrets you might have, throughout their lifecycle. Now, generally in organizations, you might have seen a lot of developers store their secrets in plain text, or, if you speak about the DevOps team, they might add the secret as an environment variable. Now, this is again not a best practice and it creates a lot of security risk. Now, when you discuss compliance, various standards like PCI DSS do mandate that credentials should be rotated at a predefined interval.

So a lot of organizations need a service which helps them keep the secrets. Now, HashiCorp Vault is one of them, which a lot of organizations have been using. But again, you will have to manage such services yourself. Now, the great thing about Secrets Manager is that it is a managed service, so you don’t really have to worry about it going down or responding slowly or similar issues. Now, there are certain great features of Secrets Manager, like the built-in integration for rotating MySQL, PostgreSQL and Aurora on RDS. So this is one of my favorite features, and we’ll look into how exactly this works in the next video once we understand the basics of Secrets Manager. Now, along with its built-in integration for various databases, it also has versioning capability so that applications do not break when the secrets are rotated at the predefined interval that you set in Secrets Manager.

Now, along with that, you can have fine-grained access control on who can access the secret. Let’s say for a user, you can define that only if he logs in from a specific corporate network and has multi-factor authentication in place should he be able to access the secrets. So all of the fine-grained access controls which IAM supports, you can have in Secrets Manager, along with resource-based policies. So, this is it from the theoretical perspective. Let’s jump into the practical and look into how exactly Secrets Manager looks. So this is the AWS Secrets Manager console. Now, one part to remember is that it is a charged service. They do offer a 30-day free trial, post which you will be charged at $0.40 per secret per month and $0.05 per 10,000 API calls.
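As a sketch of the fine-grained control just described, a resource-based policy on a secret could combine an MFA condition with a source-IP condition. The principal ARN and the corporate CIDR below are made-up examples; the condition keys (aws:MultiFactorAuthPresent, aws:SourceIp) are standard IAM global condition keys:

```python
import json

# Resource-based policy for a secret: allow GetSecretValue only when the
# caller authenticated with MFA AND is coming from the corporate network.
# "Resource": "*" in a secret's resource policy refers to the secret itself.
secret_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "*",
        "Condition": {
            "Bool": {"aws:MultiFactorAuthPresent": "true"},
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
        },
    }],
}

print(json.dumps(secret_policy, indent=2))
```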

So, this is one thing to remember. So make sure that if you’re doing it for a demo, go ahead and delete it after your practical completes. So let’s go ahead and click on store a new secret. And there are three secret types: one is credentials for an RDS database, another is credentials for other database, and the third is other type of secrets. So this other type of secrets would be an SSH key or any API keys that you might have. We will discuss credentials for an RDS database in the next video, because this is a great feature and I wanted to dedicate a separate video to it. So here let’s click on other type of secrets. So here you need to give a key-value pair. So for example purposes, what I have done is I have created a new key.

I have named it as real demo key, and this is the value associated with it. So let’s go ahead and store this in Secrets Manager. So for the key here, I’ll put the key name; let’s just edit it with a hyphen. And here you need to put the actual value; I’ll copy it and paste the value in. Now you can go ahead and encrypt this data with a specific KMS key that you might have; I’ll just leave it as the default encryption key. Let’s click on next. Now you need to give it a secret name; let’s say I’ll call it real SSH key. And you can even give it a tag; I’ll just ignore this for now. Let’s click on next. Now here you have the option for automatic rotation. So let’s say various compliance standards state that after 90 days you should rotate your keys; for that you have the option for automatic rotation.

We do not want to rotate our SSH key, so I’ll just disable it for now. Let’s click on next. It gives you a review, and it also gives you sample code. Now, this is great, because let’s say that you have stored an API key. So if you look into the Python code, basically here you are importing boto3 as well as base64, and then you are giving the secret name as the SSH key secret we just created, with the region as ap-south-1. Basically, what you want is the value associated with that secret name. So this is the entire code. If you have a Python application, you can just put it there and you’ll be able to retrieve the secret associated with that name. Now, once you have done that, you can go ahead and store the credentials. So this is the first secret which got created.
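The console’s sample code follows this pattern. Here is a sketch along the same lines: the parsing helper runs offline, while the actual GetSecretValue call (commented) needs credentials. The secret name and region are the ones from this demo; a GetSecretValue response carries string secrets in SecretString and binary secrets base64-encoded in SecretBinary, which is why the sample imports base64.

```python
import base64
import json

def extract_secret(response):
    """Return the payload from a GetSecretValue response.

    String secrets arrive in 'SecretString'; binary secrets arrive
    base64-encoded in 'SecretBinary'.
    """
    if "SecretString" in response:
        return response["SecretString"]
    return base64.b64decode(response["SecretBinary"]).decode("utf-8")

# The live call would look like this (needs credentials, so left commented):
# import boto3
# client = boto3.client("secretsmanager", region_name="ap-south-1")
# response = client.get_secret_value(SecretId="real-ssh-key")
# print(extract_secret(response))

# Offline demonstration with a fabricated response:
sample = {"SecretString": json.dumps({"demo-key": "demo-value"})}
print(extract_secret(sample))
```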

Now, if you click over here and you want to retrieve the secret value again, you need to have the appropriate permission for that. So when you click on retrieve secret value, you will be able to retrieve the key over here. Now, since I am an administrator, I do have access to this. You can just click on this button and you’ll be able to see it. But any other user who does not have permission will not be able to retrieve the secret value over here. Now, in case an application wants to retrieve the secret value, you can go ahead and add the IAM policy to the role attached to the EC2 instance, and then the application will be able to retrieve the secret. So this is the high-level overview of AWS Secrets Manager. I hope this video has been informative for you, and I look forward to seeing you in the next video.

  1. RDS Integration with AWS Secrets Manager

Hey everyone, and welcome back. Now, in the earlier video, we were discussing the basics of Secrets Manager. Now, one of the great features, in fact one of my personal favorites in Secrets Manager, is its built-in integration for rotating MySQL, PostgreSQL and Aurora on RDS. So typically what happens is that, let’s say, a database team creates a database. Now the application team wants to integrate their application with that database, so the DB team would typically give them the DB username and password. What the application team will then do is store that username and password within their code, and the code will be able to connect. Now, again, the problem is that you are hard-coding the values within your application code in plain text. And that again is a security risk.

So that is the first part. The second part is when you want to rotate the credentials; let’s say every 30 days you want to rotate your database credentials. What would happen is the DB team would have to create new credentials, then hand them over to the application team; the application team would change the application code to put in the new credentials and deploy it to the production environment. So there are a lot of hassles over here. What you can do instead is store the credentials in Secrets Manager and let the running application contact Secrets Manager for the database credentials. And in case of rotation, Secrets Manager can automatically rotate the credentials for the database, and the application team doesn’t really have to do anything.

They can just fetch the latest credentials. So, this is a great feature; let’s try it out and look into how exactly it works. Now, in order to do that, the first thing that we will be doing is creating our sample RDS database. Let’s click on Create Database. I’ll be picking MySQL for our testing, and here you can select only enable options eligible for the RDS free tier usage. So let’s go a bit down. We are good with t2.micro; 20 GB of storage is also sufficient. For the DB instance identifier, I’ll say KP labs secret DB. The master username, I’ll give as KP admin. And for the password, I already have a sample password here; we’ll just copy and paste it. Once you have done that, we’ll leave things as default. Publicly accessible: yes, because we don’t really have any VPN right now.

And for the database name that we want, let’s say I’ll just call it Kplabs, and I’ll leave everything else as default. And we can go ahead and click on Create database. Now, before we do that, just disable the deletion protection, because this is a test database and we do not really want that. Great. So our database instance is being created; let’s quickly wait a moment for it to finish. All right, so our database is now created. You see, it is available now. So what we can do is go ahead and verify whether we are actually able to connect to this specific endpoint. So let’s try it out. I’ll do a mysql -h, I’ll specify the user as KP admin, followed by the password. I’ll copy the password from a text file and paste it over here.

Great. So now if you quickly do a show databases, you should see that the Kplabs database that we had configured is available over here. Now, the next thing that you need to do is change the security group. Again, this would depend on your setup. So basically what happens is that whenever you create a new secret, let’s say this time a secret based on an RDS database, in the back end it creates a Lambda function. Now, if that Lambda function is outside of the VPC, then you need to provide the security group rules accordingly. So for our demo purposes, I’ll just allow 0.0.0.0/0. Now, you should not be doing this in your production environment.

This is just for demo purposes; I wanted to show you how things work. Great. So once you have done that, let’s go ahead and create a secret for the RDS database. We’ll select this as the secret type. Here we’ll quickly give the username and the password so that the Lambda function which Secrets Manager will create can go ahead and connect to it. So for the password, I’ll just copy and paste it; we’ll leave things as default. Here it says select which RDS database this secret will access; I’ll just select our RDS database, which is KP labs secret DB. Once done, click on Next. Let’s give it a secret name; I’ll say Kplabs RDS secret manager. Once you’ve done that, click on Next, and here by default the automatic rotation is disabled.

Let’s click on enable here. There are two options: you can either use an existing Lambda function or create a new one. So we’ll click on create a new Lambda function; let’s call it KP Labs lambda, and we’ll leave everything else as default. So once you enable the automatic rotation, what will happen is that Secrets Manager will rotate the credentials. So currently, if you see, these are our credentials. The first time the secret gets configured with automatic rotation, Secrets Manager will rotate the credentials and give you the new password at that time itself. So you’ll be able to verify for sure that the rotation is working perfectly, and your application team can also verify it from their end.

So once you have done that, let’s go ahead and click on Next, and this will give you an overview page. We’ll go ahead and click on Store. Great. So currently the secret is being configured; it will take around two minutes. So let’s quickly wait a moment here. Great. So once your secret is ready, you will see that the blue banner changed to green, and now it says that your secret is ready. So let’s go ahead and click on our secret. And now if you click on Retrieve secret value, you should see that it is giving you a different value altogether. That means that the secret has been rotated. So let’s quickly find out whether this actually works or not.

So what I’ll do is copy this secret and try to log in to the database through the CLI. So this was our earlier command; let’s try it again. I’ll copy-paste it, and now you see I am able to log in. So, if you quickly do a show databases, things work as expected. That means Secrets Manager has actually rotated the credentials for our database. Great. So let’s explore a few more things. Here, if you go under the rotation configuration, you should see that there is a Lambda function. So this is the Lambda function which is actually responsible for the rotation. Let’s look into how exactly this looks. So this is our Lambda function. Now, if you look into this function, you have your Lambda over here and it is calling AWS CloudWatch Logs.
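An application consuming this rotated secret would fetch and parse the secret JSON each time it connects, rather than caching a password. The sketch below assumes the standard key names (username, password, host, port, dbname) that Secrets Manager uses for RDS-type secrets; the secret ID and the pymysql usage in the comment are illustrative, not from the demo.

```python
import json

def connection_params(secret_string):
    """Map an RDS-type secret JSON string to MySQL connection arguments.

    Key names follow the JSON structure Secrets Manager uses for
    RDS secrets: username, password, host, port, dbname.
    """
    secret = json.loads(secret_string)
    return {
        "host": secret["host"],
        "port": int(secret.get("port", 3306)),
        "user": secret["username"],
        "password": secret["password"],
        "database": secret.get("dbname", ""),
    }

# After each rotation, fetching the secret again returns the new password,
# so the application never hard-codes credentials (needs credentials,
# so left commented):
# import boto3, pymysql
# raw = boto3.client("secretsmanager").get_secret_value(
#     SecretId="kplabs-rds-secret-manager")["SecretString"]
# conn = pymysql.connect(**connection_params(raw))

# Offline demonstration with a fabricated secret:
sample = json.dumps({"username": "kpadmin", "password": "s3cret",
                     "host": "db.example.com", "port": 3306,
                     "dbname": "kplabs"})
print(connection_params(sample))
```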

So basically what is happening is that the logs of your Lambda function go to CloudWatch. Now, if you go a bit down here, under the network section, currently it is not inside a VPC. In case your RDS database is inside a private subnet, you need to put your Lambda function inside the VPC so that it will be able to access the RDS instance. Otherwise, if it is not inside the VPC and your RDS is inside the private subnet and not publicly accessible, then your Lambda function will not work. Now, let’s go to CloudWatch; let me also quickly show you. So under CloudWatch we’ll go to logs, and you should see that there is one log group which is created under /aws/lambda/ with the name of our Secrets Manager rotation function for kplabs.

If you click over here, this is the log stream, and you will be able to see when the function was run and whether there was any error or not. So currently here you see the create secret step: it says it successfully put the secret for this specific ARN. So here everything seems to be working correctly. In case things are not working, you can go ahead and look into the CloudWatch log group for any errors. So, this is the high-level overview of Secrets Manager. Let me actually show you one more thing before we conclude. So, if I click on Store a new secret, there were three options. We explored the first one and the third one. However, if you click on credentials for other database over here, this is the option where you can actually specify the server address, database name and the database port.

Let’s say that your database is hosted elsewhere, for example in a different VPC; then you can use credentials for other database and specify the server address, database name and port yourself. And one of the differences that you will see between the first option and this second option is that the Lambda function will not get created when you click on credentials for other database, so you will have to create a rotation Lambda function manually. The second difference, again, is that here we have to specify the server address, database name and database port ourselves. So that’s the high-level overview of Secrets Manager. I hope this has been informative for you, and I look forward to seeing you in the next video.

 

 
