Use VCE Exam Simulator to open VCE files

100% Latest & Updated Amazon AWS Certified Data Analytics - Specialty Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!
AWS Certified Data Analytics - Specialty Premium Bundle
Download Free AWS Certified Data Analytics - Specialty Exam Questions
File Name | Size | Downloads | Votes
---|---|---|---
amazon.real-exams.aws certified data analytics - specialty.v2023-08-20.by.eleanor.96q.vce | 285.1 KB | 59 | 1
amazon.selftesttraining.aws certified data analytics - specialty.v2021-10-01.by.florence.78q.vce | 221.71 KB | 752 | 1
amazon.test-king.aws certified data analytics - specialty.v2021-05-15.by.cameron.61q.vce | 177.04 KB | 887 | 1
amazon.passit4sure.aws certified data analytics - specialty.v2021-04-30.by.lucia.57q.vce | 172.26 KB | 899 | 2
Amazon AWS Certified Data Analytics - Specialty Practice Test Questions, Amazon AWS Certified Data Analytics - Specialty Exam Dumps
Examsnap's complete exam preparation package covers the Amazon AWS Certified Data Analytics - Specialty Practice Test Questions and Answers; the study guide and video training course are included in the premium bundle. Amazon AWS Certified Data Analytics - Specialty Exam Dumps and Practice Test Questions come in VCE format to provide you with an exam testing environment and boost your confidence.
So let's get started learning Amazon S3. We may already know what Amazon S3 is; I just want to give you a quick reminder, or at least highlight the options you need to know to pass the AWS big data exam.

First, let's talk about buckets at a high level. We store files, or objects, in things called buckets, which are basically top-level directories. Buckets must have a globally unique name: when I create a bucket, I cannot give it the same name as yours, even though we have different accounts. Buckets are defined at the region level, although in the UI they appear globally. There is a naming convention: no uppercase, no underscores, between 3 and 63 characters long, not an IP address, and the name must start with a lowercase letter or a number. Overall, very basic. We'll be creating a bucket very shortly.

What about objects? Objects are files, and each must have what's called a key, which represents the full path of that object. For example, under my bucket name I could have my_file.txt, or, if the file is nested within different folders, something like my_folder1/another_folder/my_file.txt. That is the full path to the file. There is no real concept of directories or folders within buckets, even though the slashes make it look that way; it's just a UI trick. We simply have long keys that can contain slashes, so the two keys above are at the same level but are different keys: one is my_folder1/another_folder/my_file.txt.

The object then has a value (its content), and the maximum size of an object is five terabytes. If you do big data, think about what that means for your data: the maximum per object is five terabytes. And if you upload more than 5 GB, you must use multipart upload; that's something you should know. Even if your file is, say, 100 megabytes, you will probably gain some advantage by using multipart upload. So if it's more than 5 GB you must use it, and if it's more than 100 MB you should use it. You also have metadata on top of your objects: a list of key/value pairs, either system or user metadata. Then you have tags: you can tag your objects and buckets, for example for security or lifecycle purposes. Finally, your object can have a version ID, if versioning is enabled.

The last thing you should know about S3 is the consistency model, which is very interesting because it will come up at the exam, especially when used with EMR. The first rule is that you have read-after-write consistency for PUTs of new objects. What does that complicated sentence mean? It means that as soon as you write a new object, you can retrieve it: if you PUT a new object into S3 that did not exist before and then do a GET to retrieve it, you will get a 200, meaning you can retrieve your object. That is true except if you did a GET beforehand to check whether the object existed. If you GET something that doesn't exist, you get a 404 (it doesn't exist); then you PUT it because you want it to exist (PUT 200); and if you do a GET immediately afterwards, you may still get a 404. That's because it's eventually consistent: the 404 result, even though you did insert the object, comes from the fact that the previous GET returned 404. You need to wait a second or two for consistency to happen, and then you will get a 200.
You get the same eventual-consistency behavior when you do DELETEs or overwrite PUTs. For PUTs of existing objects, that means that if you read an object right after updating it, you might get the older version: PUT 200, then another PUT 200 to update it, then GET 200, and you might still get the older version of that object. Or, if you delete an object, you might still be able to retrieve it for a very short time: DELETE 200, and then a GET may still return 200 even though the object is not there anymore. That's again because of the eventual consistency model. So that's it; let's have a quick look at how S3 works now.

Let's type S3, and we are taken to the S3 UI. We're going to create a bucket, and I'll just call it "the bucket of Stefan big data"; I'm sure this name will not be taken, and basically I need to choose a globally unique name. If I just choose "bigdata", let's see whether it works: bigdata, Create... no, the bucket name already exists. You cannot use a bucket name that someone else already chose, and someone already chose "bigdata". So I'm going to go with "bucket-of-stefan-big-data", and I'm pretty sure this will be globally unique.

Region-wise, you can select the region you want your bucket to be in, ideally something close to you; I'll choose US East (Ohio). Then you click Next and can configure options for your bucket: versioning, server access logging, object-level logging, and so on. Default encryption we will be seeing later in this section as well, so don't worry too much right now; we will not check anything. Click Next. Finally, there are public access settings for this bucket. These are enhanced security settings to make sure that when you put files in a bucket, they are not publicly accessible by default. We'll leave them as is, and then we can review all the settings. We haven't modified anything, so we could have just clicked Create right away. Anyway, we create the bucket now, and our bucket is created. Excellent.

When you click on that bucket, you can upload files, either by dragging and dropping them in there or by clicking Add Files. So let me click Add Files and choose the online retail extract CSV; it's just a small file that Frank will play with later, a CSV file that contains our retail data. We upload it onto S3, and here it is; it is now on S3, as you can see. We could do a lot of analysis on it — obviously we're not there yet, we're just learning how to store data — but we've been able to store a CSV file in S3, which is already something.

We could also have folders. For example, it could be a store ID, so we could have something like "stores"... actually, instead of doing a store ID right away, I'll delete this. Here we go. So I can create a directory in S3, and this is how you would organize data: you would have your "stores" folder, and within it you would have all your different IDs, so "store_1", which is my first store, and within this store, maybe you would also have a date.
Who knows: 10/22/2018, just a random date, right? And whether or not that date format is familiar, you could have subfolders; you could do whatever you want. In there, I could go ahead and upload the very same file. So now we can start organizing data within S3 to make sure it is well organized: I know that within "stores" I will get all my stores; within "store_1" I will get all the dates of data, maybe because it's a daily extract that I'm going to put there; and within each date folder I will find the extract for that day. It's just a way of organizing data. But it also shows you that you can have top-level keys, which is quite rare to encounter in big data; most likely your data is going to be organized in different folders. Here I could have a "store_2" as well, and within each folder you'd have more and more folders, basically to partition further down, and when the partitioning is done, you find your data in CSV format or other formats. S3 supports any format: you can upload any file — images, CSV files, binary files, whatever you want. Hopefully you get the idea. Don't worry for now about how to organize your S3 folders; this is something we'll see in depth, and when we do extracts, maybe from EMR or other technologies, the directory and folder structure will be created for us. That's it for this quick S3 overview on the big data side of things; I will see you in the next lecture.
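If you prefer to script these steps instead of using the console, here is a minimal boto3 (AWS SDK for Python) sketch of the same ideas; the bucket name, region, key, and file name are placeholders chosen for illustration, not values from the course:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3", region_name="us-east-2")

# Bucket names must be globally unique: 3-63 characters, lowercase letters,
# numbers and hyphens only (no uppercase, no underscores, not an IP address).
bucket = "bucket-of-stefan-big-data"  # placeholder; pick your own unique name

# Outside us-east-1, the region is passed as a LocationConstraint.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "us-east-2"},
)

# Keys look like paths, but there are no real folders -- just long keys
# that happen to contain slashes.
key = "stores/store_1/2018-10-22/online-retail-extract.csv"

# Multipart upload is mandatory above 5 GB and recommended above ~100 MB;
# TransferConfig makes upload_file switch to multipart past this threshold.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024)
s3.upload_file("online-retail-extract.csv", bucket, key, Config=config)
```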
In the exam, you need to know all the S3 storage classes and understand which one is best adapted to which use case. So in this lecture, which is going to be quite long, I want to describe all the different storage classes.

The first one is the one we've been using so far, Amazon S3 Standard, which is for general purpose. But there are more optimized classes depending on your workload. The first is S3 Infrequent Access, also called S3 IA, for files that are going to be infrequently accessed (we'll have a deep dive on all of them, by the way). There is S3 One Zone-IA, for data we can recreate. There is S3 Intelligent-Tiering, which moves data between storage classes intelligently. Then there is Amazon Glacier for archives, and Amazon Glacier Deep Archive for the archives you don't need right away. Finally, there is one last class called Amazon S3 Reduced Redundancy Storage, which is deprecated, so I will not be describing it in detail in this lesson.

Okay, so S3 Standard, general purpose: we have very high durability, called eleven nines (99.999999999%), of objects across multiple AZs. If you store 10 million objects with Amazon S3 general purpose, you can expect to lose one object once every 10,000 years on average. The bottom line is that you should not lose objects on S3 Standard. There is 99.99% availability over a given year. All these numbers, by the way, you don't have to remember; they're just indicative, to give you some perspective. You don't need to memorize the exact numbers going into the exam, just understand the general idea of each storage class. S3 Standard can sustain two concurrent facility failures, so it's really resistant to AZ disasters. The use cases for general purpose are big data analytics, mobile and gaming applications, and content distribution — basically anything we've been doing so far.

Next we have S3 Standard-Infrequent Access, or IA, which is suitable for data that, as the name indicates, is less frequently accessed but requires rapid access when needed. We get the same durability across multiple AZs, but one nine less availability, and it is lower cost compared to Amazon S3 Standard: the idea is that if you access your data less often, you don't pay as much. It can sustain two concurrent facility failures, and the use cases are a data store for disaster recovery, backups, or any files that you expect to access much less frequently.

Then we have S3 One Zone-IA (infrequent access). This is the same as IA, but the data is stored in a single Availability Zone. Before, it was stored in multiple Availability Zones, which made sure the data was still available in case an AZ went down. So we have the same durability within a single AZ, but if that AZ is somehow destroyed — imagine an explosion or something like that — then you would lose your data. You also have less availability: 99.5%. You still get the low latency and high throughput performance you would expect from S3, it supports SSL for encryption, and it's going to be lower cost compared to Standard-IA by about 20%. The use case for One Zone-IA is to store secondary backup copies of on-premises data, or to store any type of data we can recreate. So what type of data can we recreate?
Well, for example, we can recreate thumbnails from an image: we can store the image on S3 Standard (general purpose) and store the thumbnail on S3 One Zone-IA. If we lose that thumbnail and need to recreate it, we can easily do so from the main image.

Then we have S3 Intelligent-Tiering. It has the same low latency and high throughput as S3 Standard, but there is a small monthly monitoring and auto-tiering fee. What it does is automatically move objects between access tiers based on the access pattern, so it will move objects between S3 Standard (general purpose) and S3 Standard-IA. It chooses for you whether your object is less frequently accessed or not, and you pay a fee to S3 for that monitoring. The durability is the same, eleven nines, it's designed for 99.9% availability, and it can resist an event that impacts an entire Availability Zone. So it's quite available. Okay, so that's it for the general-purpose S3 storage tiers.

Then we have Amazon Glacier. Glacier is going to be more about archives, so think cold: cold archive. It's a low-cost object storage system, really for archiving and backups, where the data needs to be retained for a very long time; we're talking about tens of years. It's an alternative to on-premises magnetic tape storage, where you would store data on magnetic tapes and put those tapes away, and if you wanted to retrieve the data from those tapes, you would have to find the tape manually, mount it somewhere, and then restore the data from it. We still have the eleven nines of durability, so we don't lose objects, and the cost per GB is really, really low — about $0.004 per gigabyte per month — plus a retrieval cost, and we'll see that cost in a second. Each item, which in S3 terms is called an object, is called an archive in Glacier, and each archive can be a file of up to 40 terabytes. Archives are not stored in buckets; they're stored in vaults. It's a very similar concept, though.

There are two tiers within Amazon Glacier that we need to know about. The first one is Amazon Glacier, the basic one, and it has three retrieval options that are very important to understand: Expedited, which is one to five minutes, so you request your file and between one and five minutes you get it back; Standard, which is three to five hours, so you wait a much longer time; and Bulk, when you request multiple files at the same time, which takes between five and twelve hours to give you back your files. So, as we can see here, Amazon Glacier is really for retrieving files without any urgency around them. If you're very, very much in a rush, you can use Expedited, but it's going to be a lot more expensive than using Standard or Bulk. The minimum storage duration for Glacier is 90 days, so files that go here are there for the longer term. And we have an even deeper storage tier for Glacier called Deep Archive, for super long-term storage, and it's going to be even cheaper. However, the retrieval options are Standard at 12 hours this time, so you will not be able to retrieve a file in less than 12 hours, and Bulk, when you have multiple files and can wait up to 48 hours, which is going to be even cheaper. So Deep Archive is for files that you really don't need to retrieve urgently.
Even for these archives there is a minimum storage duration: for Deep Archive it's going to be 180 days. You have to remember these numbers at a high level, because going into the exam there will be questions asking you to decide between Glacier and Glacier Deep Archive. For example, if the file is going to be stored for less than 180 days, you have to use Glacier; if you need to retrieve a file very quickly, within three to five hours, that's Glacier as well. But if it's a file to be retrieved within 72 hours and it's going to stay one year in your vault, then Deep Archive is going to give you the best cost savings.

So let's compare everything we've seen: S3 Standard, Intelligent-Tiering, Standard-IA, One Zone-IA, Glacier, and Glacier Deep Archive. For durability, they're all eleven nines, which means you don't lose any objects. For availability, the ones to look at are the IA classes: because the data is infrequently accessed, we have a little bit less availability, and with One Zone-IA it's even less because you only have one Availability Zone. That makes sense. As for the SLA, this is what Amazon guarantees and will reimburse you against; it's not something you need to know for the exam, but I put it in the chart in case you need it in real life. The number of AZs your data is stored in is going to be three everywhere except One Zone-IA which, as the name indicates, uses only one zone. Then there is a minimum capacity charge per object: with S3 Standard or Intelligent-Tiering there is none, but with the IA classes objects are billed for at least 128 KB, and for Glacier at least 40 KB. The minimum storage duration is 30 days for Standard-IA, 30 days for One Zone-IA, 90 days for Glacier, and 180 days for Glacier Deep Archive. And finally, is there a retrieval fee? For the first two classes, there isn't any. But with Standard-IA, because it's rarely accessed, you're going to be charged a fee any time you retrieve the data, and for Glacier and Glacier Deep Archive there's a fee based on the number of gigabytes you retrieve and the speed at which you want to retrieve them. You don't need to know all the numbers, but they should make sense given what each storage tier really means.

And for those who like numbers, here's a chart you can look at in your own time. What it shows is that the cost of S3 Standard is about $0.023 per GB per month, which is the highest, and if we go all the way to the right, Glacier Deep Archive is about $0.00099 per GB per month, which is a lot cheaper. Intelligent-Tiering is going to be between roughly $0.0125 and $0.023 depending on the tier the object sits in, Standard-IA is around $0.0125, One Zone-IA is even cheaper, and so on. It also shows the retrieval cost: an Expedited retrieval from Glacier costs about $10 per 1,000 requests, whereas Standard or Bulk cost a lot less, and the same goes for Glacier Deep Archive. Finally, for Intelligent-Tiering there is a cost for monitoring objects, because they can be moved between the frequent and infrequent access tiers on your behalf; the cost is quite small, about $0.0025 per 1,000 objects monitored per month. Okay, that's it. Let's go into the hands-on to see how we can use these tiers.
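Before the hands-on, here is a minimal boto3 sketch of how you might request a restore of an archived object using the retrieval tiers discussed above; the bucket and key names are placeholders, not values from the course:

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to restore a temporary copy of an object archived in Glacier.
# Tier can be "Expedited" (1-5 min), "Standard" (3-5 h) or "Bulk" (5-12 h);
# objects in Deep Archive only support "Standard" (~12 h) and "Bulk" (~48 h).
s3.restore_object(
    Bucket="bucket-of-stefan-big-data",        # placeholder bucket
    Key="archives/online-retail-extract.csv",  # placeholder key
    RestoreRequest={
        "Days": 7,  # how long to keep the restored copy available
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```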
I'm going to create a bucket, and I'll call it "stefan-s3-storage-class-demo". I'll click Next, and I will not set up anything special, so Next, Next again, and Create Bucket. Okay, excellent: my bucket is created; I'll just find it, here we go, and go inside it. Next, I'm going to upload a file: I'll click Add Files, and that file is going to be my coffee.jpg. Click Next, this is fine, click Next again, and for the properties, this is the interesting part: we can set the storage class. As I told you, there are a lot of storage classes: Standard, Intelligent-Tiering, Standard-IA, One Zone-IA, Glacier, Glacier Deep Archive, and Reduced Redundancy, which is not recommended because it's deprecated. On this screen there is a table that summarizes what we have already learned: we choose Standard for frequently accessed data; Intelligent-Tiering when we don't know the access patterns in advance — whether the data will be accessed frequently or not — and we want Amazon to make that decision for us; Standard-IA when it's going to be infrequently accessed; One Zone-IA when we can recreate it, so non-critical data; Glacier for archives with retrieval from minutes to hours; and Glacier Deep Archive for data that's going to be rarely accessed, if ever, where we can wait 12 or even up to 48 hours to retrieve it.

Okay, so let's do an example. We'll use Standard as the class and click Upload, and our coffee.jpg has been uploaded. As we can see on the right-hand side, the storage class says Standard. But what you can do is click on Properties, click on Storage Class, and move the object to another storage class, for example Standard-IA. So let me save this, and now our object is in the Standard-IA storage class; we've just moved it, very simple. If I refresh, it should show the storage class STANDARD_IA. Likewise, if we wanted to change the storage class again, we could go back to the properties of the object itself: Standard-IA, I click on it and say, okay, now I want you to be Glacier. It warns me that if I put the object into Glacier, it's going to be billed for a minimum of 90 days; I save it, and here we go, my file is now in Glacier, so it's an archive. It's really easy, as you can see, and the UI tells you exactly which file belongs to which class. So, based on your access patterns and the applications you're using, you can choose the class that gives you the best cost savings and the best performance. All right, that's it. I will see you in the next lecture.
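The same storage-class operations from this console hands-on can also be scripted. Here is a minimal boto3 sketch (bucket and key names are placeholders) that uploads an object directly into Standard-IA and then moves it to Glacier by copying it onto itself:

```python
import boto3

s3 = boto3.client("s3")
bucket = "stefan-s3-storage-class-demo"  # placeholder bucket name

# Upload an object directly into a non-default storage class.
with open("coffee.jpg", "rb") as f:
    s3.put_object(Bucket=bucket, Key="coffee.jpg", Body=f, StorageClass="STANDARD_IA")

# Change the class of an existing object by copying it onto itself.
s3.copy_object(
    Bucket=bucket,
    Key="coffee.jpg",
    CopySource={"Bucket": bucket, "Key": "coffee.jpg"},
    StorageClass="GLACIER",    # the 90-day minimum storage duration applies
    MetadataDirective="COPY",  # keep the existing object metadata
)
```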
So you can transition objects between storage classes, as we've seen in the previous hands-on. How can we do it? There is a giant graph on the AWS website that describes the possible transitions, so it looks pretty complicated, but as you can see, from Standard-IA you can go to Intelligent-Tiering, One Zone-IA, and then Glacier and Deep Archive; it just shows the possible transitions. As you can also see, from Glacier you cannot go back to Standard-IA: you have to restore the object and then copy that restored copy into IA if you want to. So, for infrequently accessed objects, move them to Standard-IA; for archived objects that you don't need in real time, the general rule is to move them to Glacier or Deep Archive. Moving all these objects around between classes can be done manually, but it can also be done automatically using something called a lifecycle configuration, and configuring those is something you're expected to know going into the exam.

So, lifecycle rules: what are they? You can define transition actions, which are helpful when you want to transition your objects from one storage class to another. For example: move objects to the Standard-IA class 60 days after creation, and to Glacier for archiving six months later. Fairly easy and fairly natural. There are also expiration actions, which delete an object after some time. For example, maybe you don't need your access log files after a year, so after a year you would say: hey, all my files that are over a year old, please delete them, please expire them. Expiration can also be used to delete old versions of a file: if you have versioning enabled and you keep overwriting a file, and you know you won't need the previous versions after, say, 60 days, you can configure an expiration action to expire the old versions after 60 days. It can also be used to clean up incomplete multipart uploads: if some parts have been hanging around for 30 days and you know they will never be completed, you would set up an expiration action to remove those parts. Rules can be applied to a specific prefix: if you have all your MP3 files under the "mp3" prefix, or "folder", you can set a lifecycle rule just for that prefix, and you can have many lifecycle rules based on many prefixes in your bucket. That makes sense. You can also have rules scoped to certain object tags, so if you want a rule that applies only to objects tagged Department: Finance, you can do that.

The exam will ask you some scenario questions, and here is one; think about it with me. Your application on EC2 creates image thumbnails after profile photos are uploaded to Amazon S3. The thumbnails can be easily recreated and only need to be kept for 45 days. The source images should be immediately retrievable during these 45 days, and afterwards the user can wait up to 6 hours. How would you design this solution? I'll let you think for a second; please pause the video, and then we'll get to the solution. So: the source images can be in the Standard class, and you can set up a lifecycle configuration to transition them to Glacier after 45 days. Why? Because they need to be archived afterwards, and we can wait up to 6 hours to retrieve them. And the thumbnails can be in One Zone-IA. Why? Because we can recreate them. We can also set up a lifecycle configuration to expire them, that is, delete them, after 45 days. So that makes sense, right?
We don't need the thumbnails after 45 days, so let's just delete them, and let's move the source images to Glacier. The thumbnails go to One Zone-IA because it's cheaper, and if we lose an entire AZ in AWS, we can easily recreate all the thumbnails from the source images. So this gives you the most cost-effective rules for your S3 bucket.

In the second scenario, there's a rule in your company that states that you should be able to recover your deleted S3 objects immediately for 15 days, although this may happen rarely; after that, and up to one year, deleted objects should be recoverable within 48 hours. How would you design this to make it cost effective? Okay, let's do it. You must enable S3 versioning, right? Because we want to be able to delete files but also recover them. With S3 versioning, we're going to have object versions, and deleted objects are going to be hidden behind a delete marker, so they can be easily recovered. But we're also going to have noncurrent versions, basically the object versions from before, and we want to transition these noncurrent versions into S3 Standard-IA, because it's very unlikely that these old object versions are going to be accessed, but if they are accessed we need to be able to recover them immediately. Then, after this 15-day grace period for recovering noncurrent versions, you can transition them into Deep Archive, where, for the remainder of the 365 days, they would be recoverable within 48 hours. Why don't we just use Glacier? Because Glacier is a little more expensive, and since we have a 48-hour timeline we can go all the way down to Deep Archive and save even more money. These are the kinds of exam questions you will get, and it's really important for you to understand what the question is asking, which storage class corresponds best to it, and which lifecycle rule corresponds best to it as well.

So let's go into the hands-on just to set up a lifecycle rule. I am in my bucket and, under Management, I have Lifecycle, and I can create a lifecycle rule. I'll say this is my first lifecycle rule, and then I can add a filter by tag or by prefix for these files. As I said before, it could be "mp3", or it could be a tag if you prefer, so you can set up multiple lifecycle rules based on prefixes or tags. For now, I want to apply it to my entire bucket, so I will not add a prefix or a tag filter. Okay, here we go. Next is the storage class transition: do we want it to apply to the current object version (if we have versioning enabled) or to the noncurrent object versions, that is, the previous versions? This first part is just for the current versions of the object, and we can add transitions: for example, transition to Standard-IA after 30 days and then transition to Glacier after 60 days. It says, by the way, that if you transition small objects to Glacier or Deep Archive it will increase the cost; I acknowledge this and I'm fine with it. And we can add one last transition, which will put the object into Deep Archive after 150 days. Okay, this looks great. You can also add transitions for the previous, noncurrent, object versions: if we scroll down, we can add a transition saying that when the object becomes a previous version, transition it to Standard-IA after 30 days, and then transition it into Deep Archive after 365 days. So here we go.
I click the acknowledge box and I'm done with this. Finally, how about expiration: do we want to delete objects after a while? Maybe, yes. For the current version, I want to expire the current objects after 515 days; that makes sense. For the previous versions, maybe 730 days. And do we want to clean up incomplete multipart uploads? Yes, okay, that makes sense. I'll click Next, we can review this entire policy, and click Save. And here we go, we have created our first lifecycle rule, which is showing up right here. Excellent. You can create multiple rules if you have multiple filters, multiple prefixes or tags, and so on, based on the actions you want. As you can see, it's really powerful, and you can set more than one lifecycle rule per bucket. All right, that's it for this lecture. I will see you in the next lecture.
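For reference, here is a minimal boto3 sketch of a lifecycle configuration roughly matching the rule built in the console above; the bucket name is a placeholder and the day counts are only illustrative:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="stefan-s3-storage-class-demo",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "my-first-lifecycle-rule",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                # Transitions for the current object version.
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 60, "StorageClass": "GLACIER"},
                    {"Days": 150, "StorageClass": "DEEP_ARCHIVE"},
                ],
                # Expire the current version after 515 days.
                "Expiration": {"Days": 515},
                # Transitions and expiration for previous (noncurrent) versions.
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 30, "StorageClass": "STANDARD_IA"},
                    {"NoncurrentDays": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 730},
                # Clean up incomplete multipart uploads.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 30},
            }
        ]
    },
)
```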
So now let's talk about S3 versioning. It is important to be able to version your files in Amazon S3. Versioning is enabled at the bucket level, and basically, any time you overwrite a file it will increment the version: one, two, three. The version IDs aren't literally one-two-three, but you get the idea. It is best practice to version your buckets because it protects you against unintended deletion: you're able to restore a previously deleted file, or you can easily roll back to an old version of a file just in case. Any file that existed before versioning was enabled will get the version "null", by the way, and if you want to stop versioning, you can suspend it. In a big data context, you would enable versioning basically to protect against unintended deletes and that kind of thing. If you have a workload that overwrites a file, versioning can also be a really good reason to version your bucket, because if your big data job fails or doesn't produce the correct output, you're always able to revert that file back to what it was before.

So let's have a look at versioning in S3. Back in my bucket, I'm going to click on Properties, then Versioning, and I can enable or suspend versioning. I'll enable it, click Save, go to Overview, and now I can click on Versions: Show. So you see, now I will see the versions of my file. Because this file existed before I enabled versioning, its version ID is null. What I'm going to do now is upload the same file again, just assuming it's a new version. Click Upload, and now it's an overwritten upload, and as you can see, my file has a new version ID. So if you look at this file, it has two versions: the version from before is the null version, because it was added before versioning was enabled, and then I added a new file and that one has a proper version ID. Excellent. If I hide the versions, I only see the latest version of my file.

Similarly, if I click on this file and do Actions, Delete, I'm going to remove it, right? My file is now gone from this view, but what actually happened is that if I click on Show, you can see the file is still here. The only thing that happened is that on March 25 a delete marker, which has its own version ID, was added to this file, so my file is actually still there. What I can do now is click on this delete marker and do Actions, Delete, and by deleting the marker, if I hide the versions and refresh my bucket very quickly, here we go, my online retail extract CSV is back. So basically, versioning allows me to roll back to the version I want. I could even roll back to the very earliest version by deleting the newer file version, and here we go, I'm back to my null version. So it's interesting to see how versioning can be used, and it's good to see this behavior; remember how version IDs and delete markers behave. That's it for this lecture, and I will see you in the next lecture.
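Here is a minimal boto3 sketch of the same versioning workflow: enabling versioning, listing versions, and "undeleting" a file by removing its delete marker. The bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")
bucket = "bucket-of-stefan-big-data"  # placeholder bucket name

# Enable versioning at the bucket level.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# List every version and delete marker for a given key.
listing = s3.list_object_versions(Bucket=bucket, Prefix="online-retail-extract.csv")
for version in listing.get("Versions", []):
    print(version["Key"], version["VersionId"], version["IsLatest"])

# "Undelete" the file by removing its delete marker, if one is current.
for marker in listing.get("DeleteMarkers", []):
    if marker["IsLatest"]:
        s3.delete_object(Bucket=bucket, Key=marker["Key"], VersionId=marker["VersionId"])
```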
ExamSnap's Amazon AWS Certified Data Analytics - Specialty Practice Test Questions and Exam Dumps, study guide, and video training course are included in the premium bundle. The exam updates are monitored by industry-leading IT trainers with over 15 years of experience, and the Amazon AWS Certified Data Analytics - Specialty Exam Dumps and Practice Test Questions cover all the exam objectives to make sure you pass your exam easily.