Training Video Course

AWS Certified Data Analytics - Specialty (DAS-C01)

PDFs and exam guides are not so efficient, right? Prepare for your Amazon examination with our training course. The AWS Certified Data Analytics - Specialty course contains a complete batch of videos that will provide you with profound and thorough knowledge related to the Amazon certification exam. Pass the Amazon AWS Certified Data Analytics - Specialty test with flying colors.

Rating: 4.36
Students: 133
Duration: 12:15:00 h

Curriculum for AWS Certified Data Analytics - Specialty Certification Video Course

Name of Video Time
Collection Section Introduction 1:00
Kinesis Data Streams Overview 7:00
Kinesis Producers 9:00
Kinesis Consumers 8:00
Kinesis Enhanced Fan Out 4:00
Kinesis Scaling 5:00
Kinesis Security 1:00
Kinesis Data Firehose 8:00
[Exercise] Kinesis Firehose, Part 1 6:00
[Exercise] Kinesis Firehose, Part 2 7:00
[Exercise] Kinesis Firehose, Part 3 9:00
[Exercise] Kinesis Data Streams 7:00
SQS Overview 7:00
Kinesis Data Streams vs SQS 5:00
IoT Overview 9:00
IoT Components Deep Dive 7:00
Database Migration Service (DMS) 7:00
Direct Connect 4:00
Snowball 6:00
MSK: Managed Streaming for Apache Kafka 9:00
Name of Video Time
S3 Overview 8:00
S3 Storage Tiers 12:00
S3 Lifecycle Rules 8:00
S3 Versioning 3:00
S3 Cross Region Replication 5:00
S3 ETags 3:00
S3 Performance 6:00
S3 Encryption 8:00
S3 Security 5:00
Glacier & Vault Lock Policies 3:00
S3 & Glacier Select 2:00
DynamoDB Overview 7:00
DynamoDB RCU & WCU 9:00
DynamoDB Partitions 3:00
DynamoDB APIs 9:00
DynamoDB Indexes: LSI & GSI 5:00
DynamoDB DAX 3:00
DynamoDB Streams 2:00
DynamoDB TTL 4:00
DynamoDB Security 1:00
DynamoDB: Storing Large Objects 4:00
[Exercise] DynamoDB 9:00
ElastiCache Overview 2:00
Name of Video Time
What is AWS Lambda? 5:00
Lambda Integration - Part 1 5:00
Lambda Integration - Part 2 6:00
Lambda Costs, Promises, and Anti-Patterns 4:00
[Exercise] AWS Lambda 8:00
What is Glue? + Partitioning your Data Lake 5:00
Glue, Hive, and ETL 2:00
Glue ETL: Developer Endpoints, Running ETL Jobs with Bookmarks 7:00
Glue Costs and Anti-Patterns 2:00
Elastic MapReduce (EMR) Architecture and Usage 6:00
EMR, AWS integration, and Storage 7:00
EMR Promises; Intro to Hadoop 4:00
Intro to Apache Spark 7:00
Spark Integration with Kinesis and Redshift 4:00
Hive on EMR 8:00
Pig on EMR 2:00
HBase on EMR 4:00
Presto on EMR 3:00
Zeppelin and EMR Notebooks 5:00
Hue, Splunk, and Flume 4:00
S3DistCP and Other Services 5:00
EMR Security and Instance Types 6:00
[Exercise] Elastic MapReduce, Part 1 10:00
[Exercise] Elastic MapReduce, Part 2 11:00
AWS Data Pipeline 5:00
AWS Step Functions 4:00
Name of Video Time
Intro to Kinesis Analytics 4:00
Kinesis Analytics Costs; RANDOM_CUT_FOREST 2:00
[Exercise] Kinesis Analytics, Part 1 10:00
[Exercise] Kinesis Analytics, Part 2 10:00
Intro to Elasticsearch 9:00
Amazon Elasticsearch Service 7:00
[Exercise] Amazon Elasticsearch Service, Part 1 11:00
[Exercise] Amazon Elasticsearch Service, Part 2 9:00
[Exercise] Amazon Elasticsearch Service, Part 3 6:00
Intro to Athena 5:00
Athena and Glue, Costs, and Security 6:00
[Exercise] AWS Glue and Athena 9:00
Redshift Intro and Architecture 9:00
Redshift Spectrum and Performance Tuning 5:00
Redshift Durability and Scaling 4:00
Redshift Distribution Styles 3:00
Redshift Sort Keys 3:00
Redshift Data Flows and the COPY command 8:00
Redshift Integration / WLM / Vacuum / Anti-Patterns 11:00
Redshift Resizing (elastic vs. classic) and new Redshift features in 2020 4:00
[Exercise] Redshift Spectrum, Pt. 1 8:00
[Exercise] Redshift Spectrum, Pt. 2 6:00
Amazon Relational Database Service (RDS) and Aurora 4:00
Name of Video Time
Intro to Amazon Quicksight 7:00
Quicksight Pricing and Dashboards; ML Insights 5:00
Choosing Visualization Types 13:00
[Exercise] Amazon Quicksight 10:00
Other Visualization Tools (HighCharts, D3, etc) 3:00
Name of Video Time
Encryption 101 6:00
S3 Encryption (Reminder) 8:00
KMS Overview 6:00
Cloud HSM Overview 2:00
AWS Services Security Deep Dive (1/3) 6:00
AWS Services Security Deep Dive (2/3) 5:00
AWS Services Security Deep Dive (3/3) 9:00
STS and Cross Account Access 2:00
Identity Federation 10:00
Policies - Advanced 6:00
CloudTrail 6:00
VPC Endpoints 3:00
Name of Video Time
AWS Services Integrations 11:00
Instance Types for Big Data 3:00
EC2 for Big Data 2:00
Name of Video Time
Exam Tips 9:00
State of Learning Checkpoint 6:00
Exam Walkthrough and Signup 4:00
Save 50% on your AWS Exam Cost! 2:00
Get an Extra 30 Minutes on your AWS Exam - Non Native English Speakers only 1:00
Name of Video Time
Machine Learning 101 7:00
Classification Models 6:00
Amazon ML Service 6:00
SageMaker 8:00
Deep Learning 101 10:00
[Exercise] Amazon Machine Learning, Part 1 8:00
[Exercise] Amazon Machine Learning, Part 2 6:00

Amazon AWS Certified Data Analytics - Specialty Exam Dumps, Practice Test Questions

100% Latest & Updated Amazon AWS Certified Data Analytics - Specialty Practice Test Questions, Exam Dumps & Verified Answers!
30 Days Free Updates, Instant Download!

Amazon AWS Certified Data Analytics - Specialty  Premium File
$43.99
$39.99

AWS Certified Data Analytics - Specialty Premium File

  • Premium File: 233 Questions & Answers. Last update: Jul 15, 2024
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates

Amazon AWS Certified Data Analytics - Specialty  Study Guide
$16.49
$14.99

AWS Certified Data Analytics - Specialty Study Guide

  • Study Guide: 557 Pages
  • Latest Questions
  • 100% Accurate Answers
  • Fast Exam Updates


Free AWS Certified Data Analytics - Specialty Exam Questions & AWS Certified Data Analytics - Specialty Dumps

File Name | Size | Votes
amazon.real-exams.aws certified data analytics - specialty.v2024-01-27.by.eleanor.96q.vce | 285.1 KB | 1
amazon.selftesttraining.aws certified data analytics - specialty.v2021-10-01.by.florence.78q.vce | 221.71 KB | 1
amazon.test-king.aws certified data analytics - specialty.v2021-05-15.by.cameron.61q.vce | 177.04 KB | 1
amazon.passit4sure.aws certified data analytics - specialty.v2021-04-30.by.lucia.57q.vce | 172.26 KB | 2

Amazon AWS Certified Data Analytics - Specialty Training Course

Want verified and proven knowledge for AWS Certified Data Analytics - Specialty (DAS-C01)? It's easy when you have ExamSnap's AWS Certified Data Analytics - Specialty (DAS-C01) certification video training course by your side, which, along with our Amazon AWS Certified Data Analytics - Specialty Exam Dumps & Practice Test questions, provides a complete solution to pass your exam.

Domain 2: Storage

5. S3 Cross Region Replication

So now let's talk about S3 cross-region replication. As the name indicates, it allows us to take a bucket, for example in the EU West region, and replicate it asynchronously into another bucket in a different region, for example on the east coast of the US. Amazon S3 does the copying for us: we need to enable the feature, but we don't perform the replication ourselves.

For this to work, we need to enable versioning on both the source and the destination bucket, and the buckets must obviously be in different AWS regions. The buckets can also be in different accounts, so it could be a really cool way to, for example, replicate all your data from one bucket into another account. The copying is asynchronous: when you put a file into eu-west-1, S3 does not wait for it to be replicated into us-east-1; the copy happens in the background. And obviously, for it to work, the S3 service that performs the copy needs the proper IAM permissions. The use cases for this are compliance, low-latency access to your data, and replication across accounts, all that kind of stuff. Now let's see how we can enable it real quick. Going back into the S3 console, I can create another bucket, with the same name as my big data bucket plus a "replica" suffix. Here we go.
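If you prefer scripting this setup rather than clicking through the console, the same steps can be expressed with boto3. This is only a minimal sketch: the bucket names, regions, and role ARN below are placeholders rather than values from the lecture, and the replication IAM role must already exist with permission to read the source and write to the destination.

```python
import boto3

s3 = boto3.client("s3")

SOURCE_BUCKET = "my-big-data-bucket"            # hypothetical source bucket (e.g. us-east-2)
REPLICA_BUCKET = "my-big-data-bucket-replica"   # hypothetical destination bucket (e.g. ap-southeast-1)
ROLE_ARN = "arn:aws:iam::123456789012:role/s3-replication-role"  # placeholder role ARN

# Cross-region replication requires versioning to be enabled on BOTH buckets.
for bucket in (SOURCE_BUCKET, REPLICA_BUCKET):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate the whole source bucket asynchronously into the replica bucket.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": ROLE_ARN,
        "Rules": [
            {
                "ID": "replicate-entire-bucket",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = every object in the bucket
                "Destination": {"Bucket": f"arn:aws:s3:::{REPLICA_BUCKET}"},
            }
        ],
    },
)
```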

Now, the region where I created my first bucket is Ohio, so let me create this one, for example, in Singapore, in Asia Pacific, and I'll click on Next. In terms of options, I will enable versioning; remember, cross-region replication requires versioning to be enabled on both buckets. Okay, that sounds good. Now click on Next for permissions; I'll keep the defaults, review, and everything looks good, so I'll create my bucket. OK, my bucket has been created. Now I need to make sure that I'm able to replicate data from this bucket to the other bucket, so I'll click on my big data bucket and then Management. Here we already had a lifecycle rule, but next to it we can define a replication rule, so let's click on Replication and then Add Rule. Now, the rule can set the source as the entire bucket, or a prefix, or tags; so remember, we could also choose to replicate only a subset of our objects. For now, I'll just choose the entire bucket. Then the replication criteria: do we want to also replicate objects encrypted with AWS KMS? We'll see about encryption later on, but this is an option. Click on Next. Now we need to select a destination bucket.

It could be a bucket in this account or a bucket in another account, in which case we need to enter the account ID and the bucket name. For this use case, we'll choose a bucket in this account, and I will choose the big data replica bucket. Okay, do we want to change the storage class for the replicated objects? Maybe the replica will not have its data accessed very often, so maybe we want to change the storage class and move it to something like Glacier. That could be a way of doing it, right? Who knows? Also, we can change the object ownership to the destination bucket owner if we want to change the ownership. Great. Finally, in terms of options, what do we do? Well, we need to select an IAM role for this. I'll create a new role and call it "s3-replication". The status of this replication rule is enabled. Click on Next, review, everything looks good, and I'll save it. As you can see, a role has been created for me; its name starts with "s3crr" and all that stuff. Okay, excellent. So this replication is enabled and active right now. Now let's go look into Amazon S3, and I'm going to go back to my buckets.

If I open the replica bucket, nothing has been replicated yet; as you can see, that's because replication only applies to new files from the moment you enable it. So what I have to do is upload a file. I'll upload my online retail extract CSV again, and now it's been uploaded. So it's in US East (Ohio), in my big data bucket. And if I go to my other bucket, the replica bucket in Singapore, and I refresh it, hopefully within a second... here we go, I see my online retail extract CSV that has been created. So you can see the use case now for cross-region replication: maybe my team in Singapore wants to do big data analysis on the data closest to them, so replicating the data into Singapore would be a really good idea. Alternatively, I could use a replication rule to archive all of my data in Singapore into Glacier. So it gives you a lot of ideas around what you could be doing, obviously, and the possibilities of it. Now, remember, it's under Management and then Replication. Okay, so that's it for this lecture. I hope you liked it, and I will see you in the next lecture.

6. S3 ETags

A little-known feature of S3 is called ETags, or entity tags, and they allow us to verify if a file has already been uploaded to S3 and if the content of the file is what we expect. Checking the name works if you just want to see whether a file already exists, but how do we ensure that the content of the file is exactly the same? For that we can use ETags. The ETag is computed with a formula; just remember that for files of less than 5 GB it's basically the MD5 hash. MD5 is a hashing technique used to obtain a signature of the file, which hopefully will be unique. So for simple uploads of less than 5 GB, AWS will use the MD5 hash as the ETag, and for multipart or bigger uploads it's just way more complicated.

You don't need to know the algorithm; just know that using ETags we can ensure the integrity of the files when we upload them and ensure that their content is exactly what we expect it to be. Alright, let's take a look at ETags and see where they're hidden in the S3 UI. The use case here is that I would like to make sure that my online retail extract CSV file, this one, is exactly the same as the one I had stored in my other bucket, the one right here. So how do we know this? Well, when we click on this file and look at the overview on the right-hand side, there's an ETag for it: 89a8 and so on. Let's just remember the first few characters: eight, nine, A, eight. So this file has this ETag. And if I go back up and click on the other one, the ETag is exactly the same: eight, nine, A, eight. That's because these files are exactly the same. So this is where the ETag can really come in handy. The ETag right here is the MD5 hash that S3 computed for your online retail extract CSV file, and it lets you check that it is the same as the file you have locally.

So how can we verify this? Well, we can compute the MD5 hash of our file locally. For this, let me open the command line and run the md5 command on my file. Now, this works on my Mac; I'm not sure if it will work on Windows. This is not something you have to do, just look at it; it's not something you'd do very often. But if I run md5 on this file, which is less than 5 GB, and press Enter, you can see that the MD5 I get is eight, nine, A, eight, which is exactly the same value as the one we obtained from the ETag in S3. So the really cool thing here is that we can know right away whether that file was modified or not. For example, I just want to show you that the MD5 does change in the event that we modify this file. Say, for example, that I echo "hello" into this file; I'll just add something at the end of it.

This is something I'm doing on my Mac. So now, if you look at the content of that file, it has changed: there is a "hello" at the very bottom, which I will remove afterwards. But now, if you compute the MD5 again for that file, we obtain something completely different: eight, four, eight, B, whereas before we had eight, nine, A, eight. So here we are able to use MD5 and the ETags in S3 to basically compare the contents of files without even downloading them, just to know if their hash is the same as the ones we have locally on our computer. So that's it. I hope you liked it, I hope that makes sense to you, and I will see you in the next lecture.
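The same check can be scripted instead of eyeballing hashes in the console. Below is a small sketch using Python's hashlib and boto3; the bucket and key names are placeholders, and the comparison is only valid for single-part uploads under 5 GB, where the ETag is the plain MD5 hash.

```python
import hashlib
import boto3

s3 = boto3.client("s3")

BUCKET = "my-big-data-bucket"          # hypothetical bucket name
KEY = "online-retail-extract.csv"      # hypothetical object key
LOCAL_FILE = "online-retail-extract.csv"

# MD5 of the local copy of the file.
with open(LOCAL_FILE, "rb") as f:
    local_md5 = hashlib.md5(f.read()).hexdigest()

# ETag that S3 computed at upload time (returned wrapped in double quotes).
remote_etag = s3.head_object(Bucket=BUCKET, Key=KEY)["ETag"].strip('"')

if local_md5 == remote_etag:
    print("Local file matches the object in S3")
else:
    print("Contents differ (or the object was a multipart upload)")
```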

8. S3 Encryption

Now we are getting into the fascinating topic of encryption for S3. Just so you know, the exam loves to ask questions about S3 encryption, so I require you to pay really close attention here. I know encryption is not an easy topic, so I really tried my best to explain in simple terms how encryption works in S3 and what your different options are. There are four methods of encryption for objects in S3: SSE-S3, which encrypts S3 objects using keys that are handled and managed by AWS; SSE-KMS, which is the exact same thing except that AWS will use KMS to manage the keys used to encrypt your data; SSE-C, which is when you provide your own encryption keys and Amazon S3 encrypts your data with them; and client-side encryption, where you encrypt your data yourself on the client side. Don't worry, I have diagrams for all of those, just so you get a better idea of how they work. But just so you get an idea, there are four methods of encryption for S3.

It's super important for you to know which method is adapted to which scenario in the exam. So, SSE-S3: this is the one where the encryption keys are handled and managed by Amazon S3; you actually don't even see them. The object will be encrypted server-side, and the encryption type is AES-256. Remember this. To make it work, you must set a header when you send your data to Amazon S3, which is this very long header; just remember the form: it's "x-amz-server-side-encryption: AES256", which makes sense because we're requesting Amazon to perform server-side encryption for us with the AES-256 algorithm. OK, so here's what it looks like in a diagram. We have our object, and we want to put it into our Amazon S3 bucket, but we want to encrypt it with SSE-S3. So the first thing I'm going to do is make an HTTP or HTTPS request, and I'm going to add that header, x-amz-server-side-encryption: AES256.
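In case you want to see that header in action outside the console, here is a sketch of the same SSE-S3 upload with boto3; the bucket and key names are placeholders. When you pass ServerSideEncryption, boto3 sets the x-amz-server-side-encryption header for you.

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: ask S3 to encrypt the object server-side with keys it manages (AES-256).
with open("online-retail-extract.csv", "rb") as body:
    s3.put_object(
        Bucket="my-big-data-bucket",       # hypothetical bucket
        Key="online-retail-extract.csv",   # hypothetical key
        Body=body,
        ServerSideEncryption="AES256",     # becomes x-amz-server-side-encryption: AES256
    )
```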

That header must be set. What happens now is that Amazon S3 receives our object, and then, because we requested server-side encryption, it's also going to create a managed key and a managed data key, both managed by S3. Using these, the encryption takes place, and after encryption the data is put into the Amazon S3 bucket. Makes sense? So the one thing you notice here is that the encryption happens server-side: it happens on the Amazon S3 side, and Amazon S3 provides the encryption key. Now, if you use KMS, the encryption also happens server-side, except this time the data key will be managed by KMS. The advantage of using KMS is that you get more control over the rotation of the key and you can get an audit trail of how that key is used. The object will be encrypted server-side, and you must set a header. The header is exactly the same: x-amz-server-side-encryption.

But the value this time is "aws:kms". So if we look at an example and a diagram again, we have the object and we have Amazon S3, and what we do is transfer the object using HTTP or HTTPS with the header we set before. The object is now in S3, and this time the key that is used is a KMS Customer Master Key, or CMK. That's the only difference: the encryption still happens and the data is put in the bucket. So the difference between SSE-S3 and SSE-KMS is that this time the key that is used is a KMS Customer Master Key that you can manage over time. If you use SSE-C, then it's server-side encryption using data keys that are fully managed by you, outside of AWS; Amazon will not store the encryption key you provide. HTTPS must be used in this case, and the encryption key must be provided in the HTTP headers of every request made. So that's a lot of information. What does that look like? Because I think a diagram makes it easier to explain: we have the object, we have Amazon S3, and we provide and generate a client-side data key. Now, over HTTPS only, okay, not HTTP, HTTPS only, because it has to go over a secure connection, we provide the object and we also provide the data key in a header.

The exam doesn't ask which header it is; just know that the data key goes in one of these headers. So now we transfer both things, the object and the client-provided data key, to Amazon S3. Amazon S3 performs the encryption using the object and the client-provided data key, the encrypted object goes into the bucket, and then Amazon throws away the client-provided data key. So in this example, you see that the clients themselves have provided the key to encrypt the data; Amazon just does the encryption but discards the key right away. Finally, there is client-side encryption, and for this you can use a library such as the Amazon S3 Encryption Client just to make it a bit easier. The idea is that now the clients must encrypt the data themselves before sending it to S3.
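Before moving on to client-side encryption, here is how the two other server-side variants just described might look with boto3. This is a sketch with placeholder names: for SSE-KMS you pass aws:kms plus an optional CMK ID or alias, and for SSE-C you supply your own 256-bit key, which boto3 sends (over HTTPS) in the SSE-C headers on your behalf.

```python
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "my-big-data-bucket"  # hypothetical bucket

# SSE-KMS: server-side encryption with a KMS Customer Master Key you control.
s3.put_object(
    Bucket=BUCKET,
    Key="kms-encrypted.csv",
    Body=b"some data",
    ServerSideEncryption="aws:kms",     # x-amz-server-side-encryption: aws:kms
    SSEKMSKeyId="alias/my-data-key",    # placeholder CMK alias
)

# SSE-C: you provide the data key; S3 encrypts the object, then discards the key.
customer_key = os.urandom(32)  # 256-bit key, managed entirely by you outside AWS
s3.put_object(
    Bucket=BUCKET,
    Key="ssec-encrypted.csv",
    Body=b"some data",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,   # must be supplied again on every GET/HEAD of this object
)
```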

With client-side encryption, the clients must also decrypt the data themselves when they retrieve it from S3; the customer fully manages the keys and the encryption cycle. So what does it look like? We have Amazon S3 on the right-hand side and the client on the left-hand side, and using the S3 encryption SDK we generate a client-side data key. Together with the object, we encrypt that data client-side, which is why it's called client-side encryption. After this, we get an encrypted object, and that object is transferred over to the bucket. So you see the difference here: now our clients are performing the encryption and also the decryption. Okay? So these are the four methods; hopefully the diagrams make them a bit clearer. Finally, you may get questions about encryption in transit. Encryption in transit means that Amazon exposes HTTP endpoints for non-encrypted traffic and HTTPS endpoints where you have encryption in flight, meaning the data exchanged between the two sides is encrypted on the wire. You're free to use the endpoint you want, but overall HTTPS is going to be the recommended method.

And if you paid a bit of attention: if you use SSE-C, you have to use HTTPS, OK? Because you also transfer the data key over the network. Encryption in flight is also called SSL/TLS in the exam. OK, so that's all for encryption: server-side, client-side, and in transit. Now let's just go and do a quick hands-on to get an idea of how things work. Let's try to upload a file; I'll upload the same file as before, my online retail extract. Click on Next, click on Next again, and now in Properties, if I scroll down, there are the encryption properties, and I'm able to choose no encryption, the Amazon S3 master key (SSE-S3), or the AWS KMS master key (SSE-KMS), and for KMS I can select the key: either the default aws/s3 master key or my own custom KMS key ARN. So the idea here is that we cannot do SSE-C and we also cannot do client-side encryption from the UI, but it's possible to do both programmatically. Here, for example, maybe I want to use the Amazon S3 master key and let Amazon manage all the keys for me.

Or maybe I want to have some control over who uses which keys for which files, so maybe I'll use the AWS KMS master key. Okay, I'll click on Next and Upload, and now my file is being uploaded, and behind the scenes AWS will automatically encrypt it for me. How do we make sure? Well, by clicking on this file and going to Properties, we can see that the encryption is set to AWS KMS, and we can also change it afterwards to whatever we want. The other thing we can do is go back to the bucket properties and set a default encryption mechanism, to store all files with some encryption by default: we can say AES-256, which is SSE-S3, or AWS KMS, and here again we can specify a key. Click on Save, and now any file uploaded to my S3 bucket will automatically be encrypted with AWS KMS. So I hope that shows you all the encryption mechanisms you can use in S3. That's really helpful if you want to protect your data, obviously, and for compliance reasons. I hope you liked it, and I will see you in the next lecture.
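The default-encryption setting shown in the console can also be applied programmatically. Here is a minimal boto3 sketch with placeholder names; use SSEAlgorithm "AES256" and drop KMSMasterKeyID if you want SSE-S3 instead of SSE-KMS as the default.

```python
import boto3

s3 = boto3.client("s3")

# Make SSE-KMS the default encryption for every new object in the bucket.
s3.put_bucket_encryption(
    Bucket="my-big-data-bucket",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/my-data-key",  # placeholder CMK alias
                }
            }
        ]
    },
)
```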

9. S3 Security

Now let's talk about S3 security at a high level; we'll visit several topics. The first one is S3 CORS, or cross-origin resource sharing. For example, if you're visiting a website and that website requests data from another website, which could be a picture, then on that other website you need to enable something called CORS, cross-origin resource sharing. It allows you to limit which websites can request files in your S3 buckets, which can also limit your costs. So you can say "my S3 bucket can only serve its images to these websites that I own", and then other websites cannot link directly to your bucket. It's a very popular exam question. For example, the exam will ask you: this website works when we access it online, but when you run the website on your localhost it doesn't work; why? Well, the answer is, of course, CORS.

So let's have a look at the diagram to understand how it works. Here is our client, and we have mywebsite.com, and all the website images are in my image bucket. The client connects to the website and says, "I would like to get the index.html from mywebsite.com", and then we load the HTML file. The HTML file asks us to load a picture directly from the image bucket in S3. So the client will issue, for example, "I want to receive the coffee.jpg image, and by the way, here's the origin website I am on", and the Chrome browser, for example, will add that Origin header saying, "by the way, I'm on mywebsite.com". Now, the image bucket looks at the origin of the request, mywebsite.com, and compares it to its CORS configuration. If the CORS configuration allows it and contains mywebsite.com, it says, "yes, you're fine, you can definitely request that file, here it is". And if it doesn't, then it will say, "no, you're not authorised to look at that file", and it will not serve the file. So, using CORS, you are really able to limit who can access a file in your bucket when people use their web browser to navigate your websites.
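To make the diagram concrete, here is a sketch of a CORS configuration that only allows mywebsite.com to fetch objects from the image bucket, applied with boto3; the bucket name and origin are placeholders rather than values from the lecture.

```python
import boto3

s3 = boto3.client("s3")

# Only requests whose Origin header is https://mywebsite.com may GET objects.
s3.put_bucket_cors(
    Bucket="my-image-bucket",  # hypothetical bucket that serves the images
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://mywebsite.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```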

All right, the next topic is S3 access logs. For audit purposes, you may want to look at all the requests that have been issued against your S3 buckets, so you want to log all access to them. Any request made to S3, authorised or denied, will be logged into another S3 bucket. We can then take that data and analyse it with a data analysis tool, do some big data analysis, or use Amazon Athena, as we'll see later in this course. Using this, we can draw conclusions as to whether someone is trying to gain access to files they're not authorised to see. The log format is described at this link to the documentation; it's not very important. But what does it look like?

Well, our users are going to make requests against my bucket, and my bucket has S3 access logs enabled, so all the requests will be logged into the logging bucket. Using the data in the logging bucket, we're able to do some analysis and figure out whether some users are trying to do nasty stuff or not. OK, so now you know about S3 access logs. Now, as for security overall, it is user-based: you have IAM policies that control which users can take which actions against your S3 buckets. But it is also resource-based: you can apply security through bucket policies, which are bucket-wide rules set from the S3 console and allow things like cross-account access, or through ACLs directly on the objects for finer-grained control. Finally, there are bucket access control lists, which are way less common. So bucket policies are definitely going to be the most common resource-based security you'll see on the exam.
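Circling back to the access-log setup described above, server access logging can be enabled with a single boto3 call. This is a sketch with placeholder bucket names; the logging bucket must separately grant S3's log delivery service permission to write into it, which is not shown here.

```python
import boto3

s3 = boto3.client("s3")

# Send access logs for my-big-data-bucket into a separate logging bucket.
s3.put_bucket_logging(
    Bucket="my-big-data-bucket",  # hypothetical bucket being audited
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-logging-bucket",  # hypothetical logging bucket
            "TargetPrefix": "access-logs/",
        }
    },
)
```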

So let's talk about bucket policies for a while. A bucket policy is a JSON-based policy: the resources are buckets and objects, the actions are a set of APIs to allow or deny, the effect is allow or deny, and then there is the principal, the account or user the policy applies to. You would use an S3 bucket policy, for example, to grant public access to the bucket, to force objects to be encrypted at upload if you haven't enabled default encryption, or to grant access to another account (cross-account access). Talking about S3 bucket policies: which should you use, default encryption or bucket policies? Well, the old way of ensuring that everything was encrypted in your bucket was to use bucket policies, and there is a typical kind of bucket policy you would use to ensure that everything is encrypted. But the new way is to really use the default encryption option in S3.
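As an illustration of the "old way" just mentioned, here is a hedged sketch of a bucket policy that denies uploads missing the x-amz-server-side-encryption header, applied with boto3; the bucket name is a placeholder and the statement is a common pattern rather than a policy taken from the lecture.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-big-data-bucket"  # hypothetical bucket

# Deny any PutObject request that does not ask for server-side encryption.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```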

With default encryption, you're sure that every object uploaded to the bucket will be encrypted. You're still free to use one or even both mechanisms, and note that bucket policies are evaluated before the default encryption setting, so if you use both, the bucket policy is evaluated first. The other thing to know about S3 security is networking: S3 supports VPC endpoints, so you can access S3 from within your private VPC without going through the public internet. Then there is logging and audit.

You can use S3 access logs, as we said, to store access data into an S3 bucket, and API calls can also be logged using CloudTrail; we'll see this in depth in the security section. For user security, we can require multi-factor authentication on versioned buckets when people delete objects, so we get extra protection against objects being deleted accidentally. And we can use pre-signed URLs if we want to create a URL for someone: for example, we have a premium video service and we just want to give a user a URL that will work for five minutes so they can download the file using that URL. So that's it for security. I know there's a lot, but don't worry, we'll review everything again in the security section; I just wanted to give you a quick heads-up as to what's going to happen overall. Just replay this a couple of times and you'll be fine. So I hope you liked this lecture, and I will see you in the next lecture.
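Following up on the pre-signed URL idea above (the premium-video example), here is a minimal boto3 sketch that produces a download link valid for five minutes; the bucket and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Generate a URL that lets anyone holding it download the object for 5 minutes.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "my-premium-videos", "Key": "episode-01.mp4"},  # hypothetical names
    ExpiresIn=300,  # seconds
)
print(url)
```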

Prepared by top experts and IT trainers, the ExamSnap AWS Certified Data Analytics - Specialty (DAS-C01) certification video training course goes in line with the corresponding Amazon AWS Certified Data Analytics - Specialty exam dumps, study guide, and practice test questions & answers, so you can count on it for your IT exam prep.
