Hey everyone, and welcome back. In today's video, we will be discussing one of the new S3 storage classes: One Zone-Infrequent Access. As we have already discussed, Amazon S3 has come up with various new storage class offerings like Intelligent-Tiering, One Zone-Infrequent Access, and Glacier Deep Archive. Intelligent-Tiering is something we have already covered, so in today's video we'll focus on the One Zone-IA storage class. Storage classes such as S3 Standard and S3 Standard-Infrequent Access store data in a minimum of three Availability Zones. Do note that in this type of architecture, the overall cost of storage increases. In the architecture diagram, there are three Availability Zones, and data is replicated across all three. If you store the data across three Availability Zones, the storage pricing will undoubtedly rise. Because it stores data in a single Availability Zone, the S3 One Zone-IA storage class costs 20% less than the S3 Standard-IA storage class. This type of storage class is a good choice for storing secondary backup copies of on-premises data, or data that can easily be re-created.
Now, this is very important. Since you are storing the data inside a single Availability Zone, if that Availability Zone is destroyed for some reason, your data is lost. This is the reason why this type of storage is recommended only for a secondary backup copy of on-premises data, or for data which can easily be re-created. So you have to make a choice: if you already have a primary backup somewhere and you want to store a secondary backup, One Zone-IA is something you can choose, provided the single-zone risk is acceptable to you. Now, in terms of pricing, as we said, S3 One Zone-IA costs 20% less than S3 Standard-IA. S3 Standard-IA costs around $12.50 per TB per month.
One Zone-IA costs around $10 per TB for the same amount of data, so there is a clear pricing difference between Standard-IA and One Zone-IA. Now let's go back to the S3 console. Click on Upload, select one random file, and click Next. Within the storage class options, if you look at One Zone-IA, you will see there is also an Availability Zones column, and for One Zone-IA the value is ≥1. If you look at the other tiers, Standard is ≥3, Intelligent-Tiering and Standard-IA are ≥3, and so on. The only storage class with ≥1 is One Zone-IA, and this is why it is referred to as "One Zone": the data is stored in a single Availability Zone. Whatever data you store here should be infrequently accessed; if you need to store data that is frequently accessed, this is not the best option.
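If you prefer to set the storage class programmatically rather than through the console, here is a minimal boto3 sketch of the same upload. The bucket and key names are placeholder values for illustration, not from the demo:

    import boto3

    s3 = boto3.client("s3")

    # Upload a secondary backup copy directly into One Zone-IA.
    # "my-demo-bucket" and the key are hypothetical names.
    with open("app-backup.tar.gz", "rb") as body:
        s3.put_object(
            Bucket="my-demo-bucket",
            Key="secondary-backups/app-backup.tar.gz",
            Body=body,
            StorageClass="ONEZONE_IA",  # single AZ, ~20% cheaper than STANDARD_IA
        )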
Hey everyone, and welcome back. In today's video, we will be discussing yet another new S3 storage class: Glacier Deep Archive. S3 Glacier Deep Archive is basically Amazon S3's lowest-cost storage class, and it supports long-term retention and digital preservation for data that may be accessed once or twice a year. This storage class is designed specifically for data that you know will be accessed very infrequently. So if you want to access the data maybe once or twice a year, Glacier Deep Archive could be a good solution. Data which you store in S3 Glacier Deep Archive can be restored within 12 hours. It is not the case that you can have some data in Glacier Deep Archive and just recover it within a few minutes.
The restoration takes hours. Now, alongside Glacier Deep Archive, you also have the plain Glacier storage class, so let me show you both: you have the Glacier storage class and you have the Glacier Deep Archive storage class. The Glacier storage class is ideal for archives where data is retrieved on a regular basis and some of the data may need to be retrieved within a few minutes. That is the primary difference between them. In terms of cost, one TB of data stored in Glacier costs around $4 per month, while one TB stored in Glacier Deep Archive costs around $1 per month. As a result, Glacier Deep Archive is one of the most affordable storage options available on AWS.
Now if you look into the S3 console, you again have the storage class list here, and this is the class we are discussing: Glacier Deep Archive. If you look at the minimum storage duration, you will see that it is 180 days, compared to Glacier's 90 days. The descriptions are fairly self-explanatory: Glacier is designed for archiving data, with retrieval times ranging from minutes to hours, while Glacier Deep Archive is for archive data that rarely, if ever, needs to be accessed, with a retrieval time of hours. As a result, data in Glacier Deep Archive cannot be retrieved in minutes; you will need to wait hours for it to be retrieved.
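Because objects in Glacier Deep Archive cannot be read directly, a restore request has to be issued first. Here is a minimal boto3 sketch of such a request; the bucket and key names are hypothetical, and the Standard retrieval tier is the one that completes within about 12 hours:

    import boto3

    s3 = boto3.client("s3")

    # Request a temporary restored copy of a Deep Archive object.
    # Bucket/key are placeholders for this sketch.
    s3.restore_object(
        Bucket="my-archive-bucket",
        Key="archives/2019-yearly-backup.tar.gz",
        RestoreRequest={
            "Days": 7,  # keep the restored copy available for 7 days
            "GlacierJobParameters": {"Tier": "Standard"},  # up to ~12 hours
        },
    )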
Hey everyone, and welcome back to the Knowledge Portal video series. We have started to explore the true power of S3 through its features, and today we will be talking about yet another feature, called Lifecycle Policies. Let's look into what this is with a use case scenario. Company XYZ is an e-commerce organization. They are storing the access logs of their web servers to track which country their users are coming from. The access log of a web server essentially contains the IP address, and when you map the IP address to a GeoIP location, you can see which country the users are coming from. As the company also deals with customer credit card data, one requirement from the Chief Information Security Officer is to store the logs for at least five years.
Now, this is pretty interesting. There is a bit of cost cutting going on within the organization, and as a solutions architect you have been given this particular use case and asked how you would handle this kind of scenario. In exams, you will not really get questions like "What is a Lifecycle Policy? Select one of the four options." Instead, you'll get a scenario very similar to this one, around five to six lines long, and you will be asked to select the best approach from the answer choices. So, to solve this kind of scenario, let's look into how Lifecycle Policies help. We already understand that there are various storage classes offered by S3. Each storage class offers different availability characteristics, and depending on the storage class you choose, the pricing will differ. So what we have to do is select the storage class which is durable but also affordable. With the help of a Lifecycle Policy, you can store logs in the Amazon S3 Standard storage class for the first three months. Once the logs are three months old, they can be moved to S3 Standard-Infrequent Access, which is significantly less expensive. So all logs older than three months are moved to Standard-IA, and all logs older than one year are moved to Glacier. What we are doing is moving the logs from Standard to Standard-IA, and from Standard-IA to Glacier, depending on the age of the logs. So let's do one thing: let's go to the S3 console and see how we can do that. If you click on the bucket, you can select Properties. Ideally, you want this part to be automated; you do not want to manually select each object and move it to Standard-IA. Let me show you what I mean.
If I click on a specific object name and then click on More, you can see I can actually change the storage class from Standard to Standard-IA, to Reduced Redundancy Storage, et cetera. However, you don't really want to come back to the S3 console after three months and manually move objects to a different storage class. You want everything to be automated, and that is where the true power really lies: automation. So in order to do that, click on the bucket name, select Properties, go to Management, and here under Lifecycle, click on Add Lifecycle Rule. Basically, what this does is automate the transition of your objects across storage classes. I'll click on Add Lifecycle Rule, and now I can give the rule a name; I'll say kplabs-lifecycle. I'll click on Next. Now, since versioning is enabled, it is asking me whether I would like the rule to apply to both the current and previous versions.
I'll select both of them for this lifecycle policy. Now, if you see, there is a button called Add Transition; I'll click it. This is for the current version. The first transition will be to Standard-IA after three months, so I'll select 90 days. Then I'll add one more transition: I'll select Glacier, and after one year I want objects to be moved to Glacier. So, basically, whenever I upload an object to S3, it will be stored in the S3 Standard storage class. After three months, which is 90 days, the object will be moved to Standard-Infrequent Access, and after 365 days it will automatically be moved to Amazon Glacier. So this is the lifecycle policy for the current version. You would need to do the same thing for the previous versions if you intend to cover them, but we'll skip that for now. I'll click Next. There's no need to configure expiration for now.
I'll select Next, and it will give you an overview of what exactly is going to happen. The scope is the entire bucket, and it shows you the lifecycle rule name. As you can see here, the objects are going to transition to Standard-IA after 90 days automatically, and after 365 days they will be moved to Amazon Glacier. So this is it. I'll click on Save, and now, if you see, I have a lifecycle policy added. This is quite easy, and from now on, the S3 lifecycle policy will automatically take care of moving the objects across storage classes. This is quite important to understand: when you talk about long-term storage of around five years, having this lifecycle is not only efficient in terms of cost savings, but it also ensures you don't waste resources unnecessarily. So this is the basic idea of lifecycle policies.
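The same rule the console wizard builds can also be expressed through the API. Here is a minimal boto3 sketch of the 90-day and 365-day transitions shown above; the bucket name is a placeholder:

    import boto3

    s3 = boto3.client("s3")

    # Apply the lifecycle rule to the whole bucket (empty prefix filter),
    # mirroring the console wizard: Standard -> Standard-IA -> Glacier.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-log-bucket",  # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "kplabs-lifecycle",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # entire bucket
                    "Transitions": [
                        {"Days": 90, "StorageClass": "STANDARD_IA"},
                        {"Days": 365, "StorageClass": "GLACIER"},
                    ],
                }
            ]
        },
    )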
Hey everyone, and welcome back. Today we can finally begin our database primer. In today's lecture we will have a very high-level overview of the need for databases. When it comes to databases, they are something that you will find in almost every organization, and it is for this reason that knowledge of databases is essential. So let's go ahead and start with the lecture. Before we go and understand more about databases, let's start from scratch and discuss flat files. Flat files are essentially a method of storing data that lacks structure. When you look at the first line here, it says HIJ, 26, Astronomy, and Chicago. So there are four columns: this is the first column, this is the second, this is the third, and this is the fourth. When you go down to the second line, you have Z, 26, KPLabs, Spirituality, and Mumbai: one, two, three, four, five columns. So there is no predefined structure in this kind of file; you can consider this a flat file. And one of the characteristics of a flat file is that each row is intended to stand on its own. This is the first row, this is the second row, this is the third row, and so on; there is no direct relationship between rows within this file.
So this is why each row is intended to stand on its own. Perfect. Before we go ahead, let me show you this in the terminal. I have a file called flatfile.txt, which contains the data we were seeing in the PowerPoint presentation. Now, there are certain challenges if you want to store your data in a flat file. Let me show you. Let's assume there is a requirement where I want to print only the third column. What you can do here is cat the flatfile.txt and pipe it through a command like awk, printing a specific column. Let me print the first column: I press Enter, and you see the names. Similarly, if I want to print the second column, which is the age column, I can print the ages as well. However, in the third column, you will see a bit of inconsistency: in one row the third column holds an interest, but in another row you find a company name. So there is a lot of inconsistency between rows in a flat file, and this is one of the challenges.
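For readers who prefer a script over a one-liner, here is a rough Python equivalent of the awk command used above, assuming a whitespace-separated flatfile.txt as in the demo:

    # Rough Python equivalent of: awk '{print $1}' flatfile.txt
    # Assumes a whitespace-separated flat file, as in the demo.
    with open("flatfile.txt") as f:
        for line in f:
            fields = line.split()
            if fields:
                # Change the index to print another column. Note that because
                # rows have inconsistent lengths, fields[2] may mean different
                # things in different rows -- exactly the problem described above.
                print(fields[0])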
Now let's assume another challenge: I want to update a specific value, say KPLabs to Knowledge Portal. We can do that with sed, but this becomes very challenging with this kind of unstructured data. This is why flat files may be fine for a smaller amount of data, but if you have a large amount of data which needs a lot of querying, a flat file does not really help. As an example of a flat file, consider /etc/passwd. This is a classic example of a flat file, although you will find that it has a specific structure: each line starts with a username and contains fields such as the user's home directory and shell. So that is more structured data. Anyway, coming back to the topic, we have had an overview of what a flat file is, so let's go ahead and understand more about databases. The first purpose of a database is to store information; this is the same for flat files as well as databases. The second, very important purpose is to provide an organizational structure for the data.
Whatever data you are storing in a database needs to be structured; this is very important. And the third purpose, which makes databases so important, is that they provide a mechanism to query, create, update, and delete data. When we take the flat data on the left-hand side and convert it into a properly structured, database-backed form, you can see that it looks much better, and because it is more structured, operations like querying, creation, deletion, and updates become much simpler. One more very important purpose, specific to the relational database model, is the entity relationship. This is quite important because, like Amazon or Flipkart, you might have many customers, each with various orders associated with them. Having a relationship between a customer and an order is very important, and while having a relationship between multiple data sets is not possible in a flat file, it is in a database.
We will be looking into this in the relevant sections, but let's first understand the basic structure. In a database table, there are two important structures that you have to remember: rows and columns. The horizontal direction is the row, and the vertical is the column. So you have a name column, an age column, an interest column, and a city column, while on the row side you have row 1, row 2, row 3, and row 4. A column represents the type of information that will be stored. The first column is the name column, so you know what type of information will be stored there: names. In the second column, which is age, you know the type of information that will be stored: some kind of age.
So that column will contain integers; then you have an interest column and a city column. Rows, on the other hand, represent the information itself. If you see, you have the actual information: the name, the age, the interest, and the city name. Columns, in contrast, describe the type of information that will be stored. Okay, so there are various advantages that you will find in databases, but before we get to those, let's go ahead and do something practical so that this lecture becomes much more interesting.
So what I'll do is SSH into one of the servers. Perfect. I have MySQL running; I'll just do a systemctl status mysqld to verify, and since it is running, I can connect. Let me connect. Perfect. I'll do a quick SHOW DATABASES, and it shows me the list of databases available. For our case, I'll use the kplabs database, so I'll do USE kplabs and then SHOW TABLES. Don't worry, we'll be discussing this in the upcoming lectures. I have one table here, so I'll run a SELECT query. What you find here is something we already discussed in the earlier lecture: rows and columns. If you look here, these are the columns: a user ID column, a username column, an age column, an interest column, and a city column. Then you have the rows: row one, row two, row three, and so on. This is what we call structured data, where the data has a proper structure. This can be refined even further by specifying that the only data allowed in, say, the age column is of the integer type; you cannot put a line of text there. That is the kind of granularity you get.
Let me show you what I mean by this. When I do DESCRIBE users, you will find the columns that are present, described here. The user ID column has type int. The username column has type varchar, which is basically a character string. The third column, age, is int; interest is varchar; and city is varchar. What this basically means is that any data which is going to be stored in these specific columns needs to be of that specific type. For age, you don't want users to type some random text; you want a specific type, like a numerical value such as 26, and so that type is specified for each and every column. For age, the specified type is integer. Along with that, there is also a Null attribute, which indicates whether it is okay for a user to not enter a value for that column, or whether a value must be present.
Anyway, we will go into more detail about databases later, but for this lecture, I think it is good enough to end with the last slide. Since the data is structured, it supports proper querying functionality, which is basically represented by CRUD: create, read, update, and delete. So this kind of functionality is present. We also have relationships between multiple data sets, and we will talk about more features in detail when we actually do a hands-on.
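To make CRUD concrete, here is a small self-contained Python sketch using the built-in sqlite3 module as a stand-in for the MySQL demo; the column names mirror the users table from the lecture:

    import sqlite3

    # In-memory database as a stand-in for the MySQL demo table.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Columns declare the type of data stored; rows hold the actual data.
    cur.execute("""
        CREATE TABLE users (
            user_id  INTEGER PRIMARY KEY,
            username VARCHAR(50) NOT NULL,
            age      INTEGER,
            interest VARCHAR(50),
            city     VARCHAR(50)
        )
    """)

    # Create (the C in CRUD)
    cur.execute(
        "INSERT INTO users (username, age, interest, city) VALUES (?, ?, ?, ?)",
        ("kplabs", 26, "spirituality", "Mumbai"),
    )

    # Read: selecting one typed column, which was awkward with the flat file
    for (username,) in cur.execute("SELECT username FROM users"):
        print(username)

    # Update
    cur.execute("UPDATE users SET username = ? WHERE username = ?",
                ("knowledge-portal", "kplabs"))

    # Delete
    cur.execute("DELETE FROM users WHERE age IS NULL")
    conn.commit()
    conn.close()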
Hey everyone, and welcome back. In today's video we will be discussing the basics of the Amazon Relational Database Service. AWS RDS is basically a managed relational database service in the cloud. "Managed" means that AWS handles the underlying hardware, the operating system, the security, the software patching, and automated failure detection and recovery for you. Typically, if you install a database on your own EC2 instances, you have to manage the operating system, the security, and even the failover in case the master DB goes down, among various other things. With RDS, all of those aspects are managed by AWS.
In terms of AWS RDS, we connect directly to the database itself; we do not have direct access to the operating system. Since AWS is managing the operating system for you, you will not have SSH access or any direct OS-level access. One of the great things about RDS is that a lot of features, like provisioning, Multi-AZ deployments, and read replicas, can all be enabled with just the click of a button. You don't really have to modify any configuration settings; with just a click you can enable Multi-AZ, read replicas, and various other features. A lot of organizations, in fact most of the organizations I've seen that have migrated to AWS, use RDS primarily because AWS handles things like Multi-AZ and automated failure detection and recovery, so you don't really have to worry much about that.
From the exam perspective, you need to remember the difference between hosting your database in RDS and hosting it on EC2. For instance, you need to understand the benefits you get when you run your database in RDS. Basically, with your database in RDS, within a few clicks you get automated minor version updates, Multi-AZ deployments, automated recovery in the event of failure, and automated backups. And one pain point that goes away is that you no longer have to manage the underlying operating system and its security. These are some of the benefits of running your database in RDS.
Now, since this is a managed solution, you will not have 100% control over everything, so there are certain trade-offs you will have to work around. First, you will not be able to tune MySQL, or whatever database engine you are using, to its fullest level; there will always be a gap you have to work around, and, for example, you will not be able to build your own MySQL clusters if you use MySQL. The same applies to the other engines as well, depending on whether the feature is built into RDS or not. So that is the theoretical aspect of RDS. Let's go to the AWS console and look at how we can create our own RDS instance. I'm in my AWS Management Console; I'll type RDS into the services search and click on RDS. Great. This is the RDS console. Within it, we'll go to Databases and click on Create Database. There are a lot of engines you can use, and definitely not all of them come under the free tier. For our testing purposes, we'll be making use of MySQL as the engine for our demo. When you select MySQL, you can go down and click on Next.
There are three use cases that you see. One is Production, for which they recommend Aurora. Then you have Production MySQL, which enables features like Multi-AZ deployments and faster storage. And you have Dev/Test MySQL. I'll be using Dev/Test MySQL for our testing purposes, and we'll click on Next. Once you have done that, you can choose which MySQL version you intend to use for your application; there are a lot of MySQL versions available, and you can select according to what your application supports. The next important thing is the DB instance class. We definitely do not need a server with 30 GB of RAM. One interesting thing you will typically see is that once you select a class and scroll down, it gives you a monthly cost; this specific instance class would have a monthly cost of $352, and since we are using it for testing, we do not really need something so expensive.
So let's go with db.t2.micro for our testing purposes, and now you see the estimated monthly cost has gone down to around $14. Along with that, for the storage type, you can select General Purpose or Provisioned IOPS. Because this is a dev/test instance, we don't need Provisioned IOPS, which is much faster; we'll go with General Purpose. For the allocated storage, you'll see the minimum is 20 GB; if you try to enter something like 10 GB, it will give you an error saying that MySQL supports allocated storage from 20 GB to 16 TB. So we'll just select 20 GB for our testing. The next thing is the DB instance identifier; we'll call it kplabs-db. When you have multiple databases, you need a way to identify them, and that is what the identifier is for. Next is the master username, which I'll give as kpadmin, and then you can give the password. Once you have entered your password, you can click on Next, and you will be asked which VPC you want to launch your DB instance into.
You can select any VPC that you might have for your setup; for simplicity, I'll just use the default VPC here. The important configuration here is public accessibility. Make sure you do not enable public accessibility, even when you are running this in your organization; you should ideally have a VPN or a bastion host and set Public Accessibility to No. For the Availability Zone, we'll leave it at No Preference. For the security group, we'll create a new one and just leave it at the default, and for the database name, let's say kplabs. Going down to IAM DB Authentication, let's keep it simple for the time being. A bit lower, you'll see the backup retention period: if backups are taken, this is the duration for which they will be retained. You also have an option for log exports, where you can specify whether you want to send your audit logs or slow query logs to CloudWatch; this is one interesting feature that they have launched. Within Maintenance, you will see an option that says Enable Auto Minor Version Upgrade.
All of this flexibility comes because this is a managed service; if it were not, all of these things would have to be taken care of by you or the DB administrator your organization might have. You can also select Deletion Protection; we'll just ignore it for now and go ahead and create the database. Perfect. It now says that your DB instance is being created, and if we click on View DB Instance Details, this is the database being created for us. One interesting thing you will see is that once your database gets created, you will have a lot of CloudWatch metrics related to database connections, read IOPS, free storage, and various others. I'm just quickly refreshing the page; it typically takes a certain amount of time for the RDS instance to be created. And now you see the instance has been created, but if you go a bit further down, the endpoint has still not been populated, which means it is still in the creation stage. Once the RDS instance is fully created, you will have an endpoint through which you will be able to connect to it. With this, we'll conclude this video. I hope it has been informative for you, and I look forward to seeing you in the next video.
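The same instance can also be created through the API. Here is a hedged boto3 sketch mirroring the console walkthrough above; the identifier, username, and password are example values, and a real password should come from a secret store rather than source code:

    import boto3

    rds = boto3.client("rds")

    # Mirrors the console demo: MySQL on db.t2.micro with 20 GB gp2 storage.
    # Identifier and credentials are example values; never hard-code real passwords.
    rds.create_db_instance(
        DBInstanceIdentifier="kplabs-db",
        Engine="mysql",
        DBInstanceClass="db.t2.micro",
        AllocatedStorage=20,              # GB; 20 is the minimum for MySQL
        StorageType="gp2",                # General Purpose SSD
        MasterUsername="kpadmin",
        MasterUserPassword="REPLACE_ME",  # placeholder
        DBName="kplabs",
        PubliclyAccessible=False,         # keep the instance private
        BackupRetentionPeriod=7,          # days to retain automated backups
        AutoMinorVersionUpgrade=True,
    )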