Amazon AWS Certified Solutions Architect - Professional Practice Test Questions, Amazon AWS Certified Solutions Architect - Professional Exam Dumps
Hey everyone, and welcome back. In today's video, we will be discussing the KMS architecture, and later in the course we will also be discussing envelope encryption. So the overall KMS architecture consists of three major components. The first is the KMS interface. The KMS interface is what we interact with, whether via the console, the CLI, or the SDK; all of these terminate at the KMS interface level. The KMS interface is connected to the KMS hosts, and the KMS hosts at the back end are connected to the HSMs. So even though the HSM sits at the back end, KMS uses it very extensively. Now, what really happens is this: let's say you have some data to encrypt; say the data is the text "this is kplabs". You want to encrypt that data, so you send it to the KMS interface. The KMS interface at the back end interacts with the KMS host; you have the Customer Master Key (CMK), which is associated with the KMS host, and you have the HSM. Together, these components form the KMS back end. From there you get the encrypted data. The encrypted data is encoded in base64 and sent back to you. Now, this whole process of sending data over the network brings certain disadvantages, because you have the data and you're sending it over the network. Along with that, one important point to remember is that KMS only accepts TLS connections; there can't be a plaintext connection, only TLS over the network. And since a network is involved, there are certain drawbacks; one of them is definitely latency. Now, let's look into some of the caveats. The first is that we can encrypt a maximum of 4 KB of data directly with a CMK, so if you want to encrypt a large amount of data, there is an alternate method. The second point, which we already discussed, is that since the data travels over the network, latency is introduced.
And this is the reason why AWS suggests using the CMK (Customer Master Key) plus data key approach. Let's look into how exactly this works. This is also referred to as "envelope encryption." What happens here is that first we generate a Customer Master Key. Once our CMK is generated, we can go ahead and generate data keys. When we send the request to generate a data key, AWS returns both a plaintext and a ciphertext version of the data key. So you get two data keys back: one is the plaintext key and the second is the ciphertext key. Let's assume the green one is the plaintext key and the grey one is the ciphertext key. Do remember that both of these keys are directly associated with the CMK. Now, we can use the plaintext data key to encrypt the files on the server. So you encrypt your plaintext data with the plaintext data key, and what you get is ciphertext data. Now you can store the ciphertext data together with the ciphertext key. Once you have stored both of them together, you can go ahead and delete the plaintext data and the plaintext key. All right, so this is the encryption part. After you have completed the encryption process, the output is the encrypted data plus the encrypted key. At this point, even if your data is stolen by an attacker, all he has is encrypted data and an encrypted key, so he will not be able to do anything with it. Now let's look into the decryption steps. In order to decrypt the data, you have to call the decrypt interface, because the key is still encrypted, and to decrypt the data you need the decrypted key.
So you call the decrypt interface, KMS sends you back the plaintext key, and you use the plaintext key to decrypt your data on your server or at the endpoint. That is the high-level overview of envelope encryption, which makes use of the Customer Master Key plus data key approach. Let me quickly show you what this might look like. I'm on the KMS CLI documentation page. If we scroll down a bit, these are the available commands. We are most interested in generate-data-key here, because this is what we were discussing: once we send the GenerateDataKey API call, it gives us the plaintext and the ciphertext version of the key. Let's look into the synopsis. For generate-data-key, we have to specify the key ID, and along with that we also have to specify the key spec. So let's do one thing: I'll copy the key ID and try it out. I'll run aws kms generate-data-key, specify the key ID, and also specify the key spec, which can typically be AES_256 or AES_128. All right. So what has happened here? It has returned the plaintext version and the ciphertext version of the key. Now, once you have both of them, you can go ahead and encrypt your data with the plaintext version of the key. Once your data is encrypted, you store the encrypted data and the encrypted key in your database or storage location, and that's about it. At decryption time, you call the decrypt interface again, get back the plaintext key, and with it you can decrypt the data.
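The envelope-encryption flow above can be sketched in a few lines of Python. This is a toy stand-in for KMS, not the real service: the XOR "cipher" is purely illustrative (real KMS uses AES inside an HSM, and you would call `generate_data_key` via boto3 or `aws kms generate-data-key` on the CLI). The point is the shape of the flow: the CMK never leaves "KMS", only wrapped and unwrapped data keys travel, and you delete the plaintext key after encrypting.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" -- a stand-in for AES; never use this for real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyKMS:
    """Hypothetical stand-in for KMS: the CMK never leaves this class."""
    def __init__(self):
        self._cmk = os.urandom(32)  # the "Customer Master Key"

    def generate_data_key(self):
        plaintext_key = os.urandom(32)
        ciphertext_key = xor_bytes(plaintext_key, self._cmk)  # key wrapped by the CMK
        return plaintext_key, ciphertext_key

    def decrypt_key(self, ciphertext_key: bytes) -> bytes:
        return xor_bytes(ciphertext_key, self._cmk)  # unwrap with the CMK

kms = ToyKMS()

# Encryption side: encrypt locally, then discard the plaintext key.
plaintext_key, ciphertext_key = kms.generate_data_key()
encrypted_data = xor_bytes(b"this is kplabs", plaintext_key)
del plaintext_key  # only encrypted data + encrypted (wrapped) key are stored

# Decryption side: ask "KMS" to unwrap the key, then decrypt locally.
recovered_key = kms.decrypt_key(ciphertext_key)
assert xor_bytes(encrypted_data, recovered_key) == b"this is kplabs"
```

Notice that only the small data key round-trips through "KMS", which is exactly how envelope encryption sidesteps the 4 KB direct-encryption limit and the latency of shipping large payloads over the network.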
Hey everyone, and welcome back. In today's video we will be discussing the network ACL. Let's look into some of the important pointers for network ACLs. The first one is that network ACLs are stateless in nature. The second is that they operate at the subnet level instead of the instance level like the security group. This can be understood with the diagram here. You have an EC2 instance, and the security group is associated with the network interface card that is attached to the EC2 instance; so the security group operates at the instance level. The network ACL, however, does not operate at the instance level; it operates at the subnet level. Now, a specific subnet can contain hundreds of instances, and one rule in a network ACL will affect all hundred instances associated with that subnet. The third important pointer is that all subnets in a VPC must be associated with a network ACL. Generally, whenever you create a new VPC, it will automatically create a default network ACL for you. And the fourth one is that, by default, the default network ACL allows all inbound and outbound traffic. These are the default network ACL rules that get generated when you create a VPC. So let's go ahead and understand a use case of why the network ACL proves to be important. Let's say you have a company XYZ and it is getting a lot of attacks from one specific random IP. The company has more than 500 servers, and the security team has decided to block that specific IP in the firewall for all the servers. Now, how can you achieve that goal? Generally, if an organization has internet-accessible applications, you'll typically have something like port 443 or 80 allowed to everyone. In such cases, you cannot block one specific IP; that is not possible with a security group.
Like, if you allow all the traffic and you want to block one specific IP, it is not really possible with a security group. And even if it were possible, it would be painful because, as you see, there are 500 servers and you don't want to add this blacklisted IP to all 500 security groups associated with those servers. So the better way would be the network ACL. What you do is, if those 500 servers are within the same subnet, you block that IP at the network ACL level, and that's about it. So let me quickly show you what the network ACL looks like and how you can configure the rules there. This is the EC2 console, and I have one EC2 instance here; we'll be doing a demo based on this instance. If you look into this EC2 instance, it has a public IP, and within the firewall rules it has ports 80 and 22 allowed. Let's say that I want to block only one IP from accessing port 80 on this EC2 instance. That is not really possible with the help of a security group, so we need to take the route of the network ACL instead. In order to access network ACLs, we need to go to the VPC console; let's select a VPC. This is the VPC where our EC2 instance is currently created. Now, within Security, if you go a bit down, you have Network ACLs, and there is one network ACL which is created with a value of Default = Yes. That means this is the default network ACL. And this network ACL is associated with the six subnets which are part of the VPC; so a single network ACL can be associated with multiple subnets within a VPC. Now, within the network ACL you have the inbound rules and the outbound rules. Within the inbound rules you see there are two rules present, and within the outbound rules there are also two rules present. And there is a rule number associated with each of them.
So the first one has a rule number of 100 and allows all traffic, and the second one has a rule number of * (asterisk) and denies all traffic. One important thing to remember is that in a network ACL, the lower the rule number, the higher the priority. So if a packet matches a specific rule, the network ACL will either allow or deny it, depending on the configuration you have set there. If the traffic matches a rule, it will not look into the rules below; it will just follow what is present in that rule. And this is why, since 100 is the lower number, all the traffic is allowed. The same goes for the outbound rules: 100 is the lower number there as well, and this is why all the traffic is allowed. So let's verify whether what I am saying is correct. Let's do one thing: I'll copy the public IP. I'm in my CLI, and let's try to ping this specific EC2 instance. Currently, you see, you are getting the ping reply back, which means connectivity is present. So now let's modify the inbound rules. I'll add one more rule: I'll put the rule number as 99, select all traffic similar to the first rule, set the action to Deny this time, and click on Save. All right. So now you have two rules which are almost identical; one is a deny and the other is an allow, and the rule priority, in terms of rule number, is different. Now, if you try to ping, you'll see that the ping is blocked. It has been blocked because there is a rule here which is denying all the traffic, and that rule has a priority of 99. As soon as the network ACL receives traffic from my network, it evaluates it against the rules that you have set. Since we have specified all traffic, this rule matches, and since this rule matches, the network ACL blocks the traffic.
It will not look into the next rule at all. All right. So now let's do one thing: I'll change the rule numbers so that this time the allow rule has the higher priority, and click on Save. And here you see, you are able to get the reply back. Great. So I hope you understand at a high level what a network ACL is all about. One important point to remember is that a single network ACL can be associated with multiple subnets, and you can also create rules similar to this. Let me do one thing. Currently, this is my IP address. Since we were discussing use cases, let's say you have something like this: you are allowing traffic from everyone, and there is this one IP address that is trying to attack you all the time. You can create a rule: put the rule number as 99, specify the source IP with a /32, and set it to Deny. Now what will happen is that when a packet originates from this source address, it matches rule number 99, and at rule number 99 it is blocked. Any other source, other than the specific one we have specified here, does not match rule 99, so the network ACL will look into the next rule. The next rule says "allow all," which means the traffic will be allowed. So this is the high-level overview of the network ACL. Do remember that the default network ACL allows all traffic by default; however, when you create a custom network ACL, it behaves differently. Let me actually show you this. Let's go ahead and create a network ACL; I'll call it kplabs-custom. You associate the network ACL with a VPC; let me click on Create. All right, so this is the custom network ACL, and within the custom network ACL the only rule is Deny by default. The same is true for both inbound and outbound traffic. Whereas the default network ACL, the one you have not created as a custom one, allows all traffic.
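The rule-evaluation order described above can be sketched as a small function: rules are checked in ascending rule-number order and the first match decides, with the implicit asterisk rule denying anything left over. The rule dictionaries and the source-matching here are illustrative simplifications, not the AWS API, and the IPs are documentation addresses.

```python
# Minimal sketch of network ACL evaluation: rules are checked in
# ascending rule-number order, and the FIRST match wins.

def evaluate(rules, source_ip):
    """Return 'ALLOW' or 'DENY' for a packet from source_ip."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        if rule["source"] == "0.0.0.0/0" or rule["source"] == source_ip + "/32":
            return rule["action"]          # first matching rule decides
    return "DENY"                          # the implicit '*' rule denies the rest

rules = [
    {"number": 100, "source": "0.0.0.0/0",      "action": "ALLOW"},
    {"number": 99,  "source": "203.0.113.7/32", "action": "DENY"},  # blacklist one IP
]

print(evaluate(rules, "203.0.113.7"))   # attacker: rule 99 matches first -> DENY
print(evaluate(rules, "198.51.100.9"))  # everyone else: falls through to 100 -> ALLOW
```

This is exactly the company-XYZ use case: one deny rule with a lower number blocks the attacker, while everyone else falls through to the allow-all rule.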
So that's the high-level overview of the network ACL. I hope this video has been informative for you and I look forward to seeing you in the next video.
Hey everyone, and welcome back. In today's video we will be discussing the EC2 auto recovery feature. Many times, it might happen that the system checks of your EC2 instance fail. Typically, the system checks of an EC2 instance fail for various reasons, like loss of network connectivity or loss of system power. It can be a software issue, like a bug in the underlying virtualization software, or it might be a hardware issue itself. In the past, whenever a system check failed, we had to manually stop and start the instance so that it would migrate to a different host. Now, AWS has allowed us to automate that part with the auto recovery feature: if the system checks fail, we can automatically recover the specific instance. Let me quickly show you what this looks like. I'm in my EC2 console, and if we look into the status checks, there are two status checks available: one is the system status check, and the second is the instance status check. The system status checks are the ones we were discussing; they can fail due to loss of network connectivity, loss of system power, and similar causes. The instance status check, however, has more to do with the instance itself, such as exhausted memory, incompatible kernels, network configuration issues, and so on. For now, the EC2 auto recovery feature is only supported for the system status check; it is not supported at the instance status check level. So let's do one thing: let's click on Create Status Check Alarm, and you have the option to send a notification to an email if you need it. Let's deselect that for now, but for production it's better to configure it. Now there is an option to take an action. Within Take Action, you have recover instance, stop instance, terminate instance, and reboot instance.
What recover instance basically does is that whenever the system status check fails, it automatically stops and starts the EC2 instance. Generally, what happens when you stop and start an EC2 instance is this: let's say this is the physical host where the instance is currently running, and it is facing some system check issue. When you stop and start the EC2 instance, it will not come back on the same physical host; it will be migrated to a different physical host. And this is why stopping and starting the instance is the best approach; if you merely reboot the instance, it stays on the same physical host. This feature of recovering the instance is what is referred to as EC2 auto recovery. It may appear to be a big feature, but it is really just a simple stop and start. Now, to show you a few more things: if you look into the "Whenever" condition, there are two options. One is the instance status check; the second is the system status check. If you select Instance Status Check, click on Take Action, and select Recover This Instance, you will see it automatically changes to System Status Check, because the instance-level status check is not supported for EC2 auto recovery. So let's do one thing; let me show you a quick demo. I'll set the action to Stop the Instance, and I'll set the condition to at least one consecutive period of 1 minute. I'll go ahead and click on "Create Alarm." Now, if you click on the alarm, it will usually tell you that there is insufficient data, owing to the fact that the metric data points have not yet been generated; so it is showing the state as Insufficient Data. Let's wait for a moment, and after it collects sufficient data points, the state should change to OK. So it has been around 2 minutes, and after a refresh you see the status has changed from Insufficient Data to OK.
Now, what we can do is manually set the state to Alarm and see exactly what happens. This can be easily achieved with the help of the CLI. This is a simple CLI command; if you look at it, it is associated with the CloudWatch service. It sets an alarm state for the alarm named "demo", with a state value of ALARM. We can also specify the reason for the change, as well as the region the alarm is associated with. So let me copy this alarm name, and I'll replace "demo" with the name of the alarm that we created. What would typically happen is that once we run this command, the alarm state changes from OK to ALARM, and then, hopefully, the action that we set, which was to stop the instance, should be executed. So let me press Enter here; everything seems to be working fine. Within the console, if I quickly do a refresh, you will see the state changed to Alarm. Within the history, you will now see various entries, such as the state changing from Insufficient Data to OK and the alarm being updated from OK to Alarm. After that, there is an action that was taken, and that action was to stop the instance. If you want to verify, let's click on Refresh; here you see the EC2 instance is stopping. So this is the high-level overview of the actions you can take automatically based on a CloudWatch alarm, as well as a high-level overview of what the EC2 auto recovery feature is all about. So, with that, I'll conclude this video and look forward to seeing you in the next one.
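The alarm lifecycle walked through above can be modeled as a tiny state machine. This is a local toy, not the CloudWatch API: the real transition is triggered with `aws cloudwatch set-alarm-state --alarm-name <name> --state-value ALARM --state-reason "..."`, while here a callback simply stands in for the "stop the instance" action. The alarm name "demo" matches the one used in the demo.

```python
# Toy model of the CloudWatch alarm lifecycle described above:
# INSUFFICIENT_DATA -> OK once data points arrive, and OK -> ALARM
# fires the configured action (here, "stop the instance").

class ToyAlarm:
    def __init__(self, name, action):
        self.name = name
        self.state = "INSUFFICIENT_DATA"   # a new alarm starts with no data
        self.action = action
        self.history = [self.state]

    def receive_datapoints(self):
        # Enough metric data points collected: state becomes OK.
        self.state = "OK"
        self.history.append(self.state)

    def set_alarm_state(self, value, reason=""):
        # Mirrors `aws cloudwatch set-alarm-state ... --state-value ALARM`.
        self.state = value
        self.history.append(value)
        if value == "ALARM":
            self.action()                  # execute the configured action

actions_taken = []
alarm = ToyAlarm("demo", lambda: actions_taken.append("stop-instance"))
alarm.receive_datapoints()                      # Insufficient Data -> OK
alarm.set_alarm_state("ALARM", "manual test")   # OK -> ALARM, action fires

print(alarm.history)    # ['INSUFFICIENT_DATA', 'OK', 'ALARM']
print(actions_taken)    # ['stop-instance']
```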
Hey everyone, how is it going? In today's video we will be discussing the AWS Personal Health Dashboard. The Personal Health Dashboard basically gives you notifications, as well as an overview, if there are any errors or anomalies going on in the AWS services. Now, we also have something called the AWS Service Health Dashboard, and I'm sure many of you have used it; if not, the Service Health Dashboard basically gives you an overview of whether each service is working correctly or not. For the past five years, I have typically worked in enterprises that use AWS, and there are numerous instances where a production service goes down due to a back-end AWS issue. So it's not the case that if you're using AWS you are 100% safe; AWS also has its own issues. So, whenever your production goes down or there's a networking issue, you need to verify whether it's an AWS issue or a problem on your end. That was typically done with the help of the Service Health Dashboard. There you get information related to various services, and you also have details on whether each service is operating normally or not, typically per region: you have North America, South America, Europe, and Asia Pacific. However, this is a very high-level overview, and it is not a personalized dashboard; it is a very global dashboard that AWS provides. And since it is global, there are a lot of things, like event-driven operations, that you cannot perform with it. Because of that, AWS really recommends getting a personalized view of the AWS services with the help of the Personal Health Dashboard. So let's go ahead and understand more about that. The AWS Personal Health Dashboard displays issues that are impacting your resources, are going to potentially impact a service, or are already impacting a service that you are using in your AWS account. This is what the dashboard looks like.
And you see there are things like open issues, scheduled changes, and other notifications; it gives you the list of open issues that are related to your environment within AWS. So let's go ahead and look into what exactly this looks like in the management console. I'm in my management console, and here you have something called a bell icon. If you click on the bell icon, you will be taken to a list of alerts: you have open issues, scheduled changes, and other notifications. Typically, you do not really have an orange mark here; if there are any notifications, you will be able to see them from here. So let's click here and select the "view all alerts" button. Within this section, you can now see a list of open issues, a list of scheduled changes, and other notifications. Let's assume that you are running a production server and suddenly you are facing certain networking issues. You're not sure if it was because of the change you made last evening or because of something on the AWS side. The first thing you would typically check is your Personal Health Dashboard, to see whether the open issues show certain AWS-side failures due to which your component might be down. That is something you will be able to see here. Now, there is also something called an event log. The dashboard shows only the past seven days, whereas the event log can contain data from as far back as 90 days, so you will be able to see much more information there. All of the events you see here fall under the category of "issue." So let's click on an EC2 operational issue, and it will tell you exactly what the issue was, in which region it occurred, what the start time was, and what the end time was. And typically, within the affected resources, it says no affected entities found.
But if any of your EC2 instances are affected, you will be able to see the list of instances within your environment that are affected by that specific issue. So it becomes really simple for the SysOps or SRE team to quickly figure out which resources are affected by an AWS operational issue. Now, one more powerful thing about the Personal Health Dashboard is notifications. On the top right, you have "Set up notifications with CloudWatch Events." Let's click here and look into what exactly this is. This is the CloudWatch Rules page. Now, let's assume you want to get an email. You have three teams: an SRE team, a SysOps team, and a security team. Whenever a security-related service has an issue, you want the security team to be alerted; whenever there is a compute-related issue, you want the SysOps team to get the alert; and if something like API Gateway or a similar service has a problem, you want the SRE team to be notified. All of this is possible with CloudWatch. Now, from the service name we need to select "Health," and instead of all events, you can specify particular Health events. So let's say there is an issue related to IAM. IAM is typically managed by the security team. Within IAM, you can specify the event category; say I only want events in the "issue" category, so that whenever there is an issue related to the IAM service, the security team is notified. You also have event type codes here, with various codes related to API operational issues, SAML-related issues, or federation-related issues. I'll say any event type code and any resources. Once you have selected that, on the right-hand side you can set a target. If I click on a target here, there can be various targets; it can be an SNS topic.
Now, SNS topics can in turn be integrated with email functionality, where within the SNS topic you specify a topic name that is associated with your security team. Or you can even have a Lambda function here, which becomes quite powerful: you have a Lambda function and you can put some logic within it. Let's say there is an EC2 instance which is marked for host failure; you can have a Lambda function which automatically stops and starts that EC2 instance so that it migrates from the potentially failing host to a completely new host within that Availability Zone. So Lambda is one of the important targets provided by the CloudWatch Events source. Typically, within the organizations I've been working with, we have various teams, and depending on the use case, any issue related to services owned by the security team is handled through a separate SNS topic. That SNS topic is integrated with the email address of the security team, so whenever there is a health issue related to security services, only the security team gets an alert; if there is a health issue related to a compute service, only the SysOps team gets an alert, and so on. This is why the Personal Health Dashboard is so important; the Service Health Dashboard does not provide the same level of flexibility that the Personal Health Dashboard does.
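The per-team routing described above can be sketched as a small lookup: a Health event for a given service is routed to that team's SNS topic. The topic ARNs, the account ID, and the service-to-team mapping below are all illustrative assumptions, and the event dict is a simplified shape of an AWS Health event, not the full schema.

```python
# Sketch of routing AWS Health events to per-team (hypothetical) SNS topics.

TEAM_TOPICS = {
    "security": "arn:aws:sns:us-east-1:111122223333:security-team",
    "sysops":   "arn:aws:sns:us-east-1:111122223333:sysops-team",
    "sre":      "arn:aws:sns:us-east-1:111122223333:sre-team",
}

SERVICE_OWNER = {
    "IAM": "security",      # identity issues -> security team
    "EC2": "sysops",        # compute issues  -> SysOps team
    "APIGATEWAY": "sre",    # API issues      -> SRE team
}

def route_health_event(event):
    """Pick the SNS topic ARN for a (simplified) AWS Health event dict."""
    service = event["detail"]["service"]
    team = SERVICE_OWNER.get(service, "sre")  # unknown services default to SRE
    return TEAM_TOPICS[team]

event = {"detail": {"service": "IAM", "eventTypeCategory": "issue"}}
print(route_health_event(event))  # -> the security team's topic ARN
```

In a real setup, the CloudWatch Events rule performs this dispatch declaratively (one rule per service filter, each with its own SNS target), so no routing code is needed; the sketch just makes the mapping explicit.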
Hey everyone, and welcome back to the video series. In today's lecture, we will primarily be speaking about streaming data. We have been looking into the overview of databases, and when we talk about databases, they generally hold conventional, static data. However, there are a lot of applications where streaming data is actually required. So let's go ahead and understand more about it. Streaming data access basically implies that instead of reading data as packets or chunks, data is read continuously, with a constant bit rate. We will discuss this point in just a moment. The second point states that an application starts reading data from the start of a file and continues reading it in sequence, without any random seeks. So what happens with streaming data is that the data is continuously streamed. Before we talk more about it, let's look into what streaming data really means. I hope most of you can relate to Uber. What really happens in Uber? Let me show you. When you open your Uber application, it shows you the cars that are available. Along with that, it actually shows you if a car is making a left turn, a right turn, or a U-turn; it shows you all of this. So what is actually happening is that this is a kind of streaming data set: as the car moves forward, you can see the car moving forward in your application. This cannot be accomplished directly through the use of databases; this is where a special kind of tool is required. A few more examples I can give you: there's a nice site called PubNub which contains examples of streaming data. Let's open up two of them; I'll open one, and I'll open one more. The first is a Twitter stream.
So, as people tweet, the tweets are shown in this particular stream. What you are seeing is new tweets continuously coming up; every second there are two or three new tweets. This is streaming data: you see, the data is streamed. So this is one example of streaming data. The second example I can give is market orders; if you look at this data, you can see that it is also streaming. There are a lot of interesting examples that you will find on this specific website. I'll attach the link along with the video, so you can also check it out. So we looked into the examples where real-time streaming data is actually required. Having an application or a tool that can work with streaming data is very important, and again, directly using a database will not help here; you need some kind of tool that will help in this specific situation. There are various tools available for streaming data. One of them is called Apache Kafka; if you look it up, it is described as a distributed streaming platform. It is a very nice tool that is used in a lot of organizations, so it is something you should look into. But in today's lecture we will be exploring a tool called AWS Kinesis, which does very similar things to Kafka and is used for handling streaming data. When you talk about AWS Kinesis, there are three entities generally involved in this particular use case of streaming data: one is the producer, the second is the stream store, and the third is the consumer. You have a producer here; the producer constantly sends data to the stream store. From the stream store, the consumers can pull the data, or the data can be pushed to the consumers; this is how it really works. So we're concentrating our efforts on this particular middleware aspect.
This middleware can be Kafka, it can be AWS Kinesis, and it can be various other platforms as well. So this is the basics that I really wanted to show in this lecture. In the upcoming lecture, we will actually create this middleware, push data from producers to it, and then see how the consumers can pull the data in near real time. So that's it for this lecture; I hope you got a basic understanding of what streaming data is all about and why a database cannot be used in this kind of situation.
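The producer, stream store, and consumer roles above can be sketched with a toy in-memory stream. This is purely illustrative: real Kinesis has shards, sequence numbers, retention windows, and a boto3 API (`put_record` / `get_records` against a stream name), all of which are simplified away here. The car positions are made-up coordinates.

```python
# Toy in-memory "stream store" illustrating producer -> stream -> consumer.
from collections import deque

class ToyStream:
    def __init__(self):
        self._records = deque()

    def put_record(self, data):          # what a producer calls
        self._records.append(data)

    def get_records(self, limit=10):     # what a consumer polls
        out = []
        while self._records and len(out) < limit:
            out.append(self._records.popleft())  # records come out in order
        return out

stream = ToyStream()

# Producer side: continuously emits position updates (e.g. a moving car).
for position in [(12.97, 77.59), (12.98, 77.60), (12.99, 77.61)]:
    stream.put_record({"car": "cab-42", "pos": position})

# Consumer side: polls the stream and processes records in arrival order.
batch = stream.get_records()
print([r["pos"] for r in batch])
```

The key property, which a conventional database read does not give you, is that the consumer sees records continuously and in order as the producer emits them, rather than issuing ad-hoc queries against static data.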