AWS Certified Cloud Practitioner CLF-C02 Amazon Practice Test Questions and Exam Dumps

Question No 1:

A company plans to use an Amazon Snowball Edge device to transfer files to the AWS Cloud. Which activities related to a Snowball Edge device are available to the company at no cost?

A. Use of the Snowball Edge appliance for a 10-day period
B. The transfer of data out of Amazon S3 and to the Snowball Edge appliance
C. The transfer of data from the Snowball Edge appliance into Amazon S3
D. Daily use of the Snowball Edge appliance after 10 days

Answer: A. Use of the Snowball Edge appliance for a 10-day period

Explanation:

AWS Snowball Edge is a physical device designed for transferring data to and from the AWS Cloud, especially when dealing with large volumes of data. The service can save time and bandwidth when uploading large data sets because AWS ships the appliance directly to the company. The cost model for Snowball Edge involves several components, but some activities are available at no additional cost.

  • Option A: Use of the Snowball Edge appliance for a 10-day period
    When a company uses the Snowball Edge appliance, the first 10 days of use are provided at no charge. This allows the company to load data onto the device or perform any related activities without incurring extra costs for that initial period. However, once the 10-day period expires, there is an additional charge for daily usage beyond this free period.

  • Option B: The transfer of data out of Amazon S3 and to the Snowball Edge appliance
    Transferring data out of Amazon S3 to the Snowball Edge appliance typically involves data transfer fees. These fees are not included in the free usage, so the company would incur charges for moving data out of S3.

  • Option C: The transfer of data from the Snowball Edge appliance into Amazon S3
    Loading data from the appliance into Amazon S3 is part of the import job. AWS does not bill a separate per-GB fee for data transferred into S3; the import is covered by the job's overall service fee. Even so, the explicitly free allowance in Snowball Edge pricing is the initial 10-day usage period, which is why this option is not the keyed answer.

  • Option D: Daily use of the Snowball Edge appliance after 10 days
    After the initial 10-day period, any additional usage of the appliance beyond the free window is billed on a daily basis.

To summarize, the first 10 days of on-site usage are included at no charge, but usage beyond that window, as well as data transferred out of AWS, will incur charges.
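To make the 10-day allowance concrete, the billing logic can be sketched as a small function. The per-extra-day rate below is a made-up placeholder for illustration, not real AWS pricing; check the Snowball pricing page for current figures.

```python
def snowball_edge_days_charge(days_on_site, free_days=10, per_extra_day_fee=30.0):
    """Illustrative Snowball Edge daily-usage charge.

    The first `free_days` on site are included in the job's service fee;
    each additional day is billed at `per_extra_day_fee`.
    NOTE: the 30.0/day rate is a placeholder, not actual AWS pricing.
    """
    extra_days = max(0, days_on_site - free_days)
    return extra_days * per_extra_day_fee

print(snowball_edge_days_charge(8))   # within the free window -> 0.0
print(snowball_edge_days_charge(14))  # 4 extra days at the placeholder rate -> 120.0
```

The shape of the calculation is what matters for the exam: days 1 through 10 never generate a daily charge, and only the overage is billed.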

Question No 2:

A company has deployed applications on Amazon EC2 instances. The company needs to assess application vulnerabilities and must identify infrastructure deployments that do not meet best practices. 

Which AWS service can the company use to meet these requirements?

A. AWS Trusted Advisor
B. Amazon Inspector
C. AWS Config
D. Amazon GuardDuty

Answer: B. Amazon Inspector

Explanation:

When a company needs to assess security vulnerabilities in its infrastructure, AWS provides several tools to help identify potential risks and misconfigurations. Among these services, Amazon Inspector is the best-suited option for identifying application vulnerabilities.

  • Option A: AWS Trusted Advisor
    AWS Trusted Advisor is a service that provides recommendations on best practices for a wide variety of AWS services, including cost optimization, performance, security, and fault tolerance. However, while Trusted Advisor offers insights into security best practices, it does not specifically focus on vulnerabilities in applications or EC2 instances.

  • Option B: Amazon Inspector
    Amazon Inspector is a service designed to automatically assess applications for security vulnerabilities and deviations from best practices. It analyzes EC2 instances and installed applications to identify potential security issues. The tool helps to evaluate infrastructure based on predefined security rules and also provides actionable insights on how to resolve any identified vulnerabilities. This makes Amazon Inspector the ideal tool for identifying vulnerabilities within applications running on EC2 instances.

  • Option C: AWS Config
    AWS Config is a service that tracks configuration changes to AWS resources, enabling compliance auditing and security analysis. While AWS Config is excellent for tracking resource configuration and ensuring compliance, it does not specifically identify vulnerabilities within the applications themselves.

  • Option D: Amazon GuardDuty
    Amazon GuardDuty is a threat detection service that monitors for malicious or unauthorized behavior. While it can detect suspicious activity, it does not focus specifically on vulnerability scanning or identifying infrastructure deployments that do not meet best practices.

In conclusion, Amazon Inspector is the most effective tool for identifying application vulnerabilities and misconfigurations in EC2 instances and infrastructure.

Question No 3: 

A company has a centralized group of users with large file storage requirements that have exceeded the space available on-premises. The company wants to extend its file storage capabilities for this group while retaining the performance benefit of sharing content locally. 

What is the MOST operationally efficient AWS solution for this scenario?

A. Create an Amazon S3 bucket for each user. Mount each bucket by using an S3 file system mounting utility.
B. Configure and deploy an AWS Storage Gateway file gateway. Connect each user’s workstation to the file gateway.
C. Move each user’s working environment to Amazon WorkSpaces. Set up an Amazon WorkDocs account for each user.
D. Deploy an Amazon EC2 instance and attach an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume. Share the EBS volume directly with the users.

Answer:

B. Configure and deploy an AWS Storage Gateway file gateway. Connect each user’s workstation to the file gateway.

Explanation:

In this scenario, the company needs to extend its file storage capabilities while maintaining local performance benefits for users. The most operationally efficient solution involves connecting on-premises users to scalable cloud storage in a way that is seamless and efficient.

  • Option A: Create an Amazon S3 bucket for each user. Mount each bucket by using an S3 file system mounting utility
    While this solution allows users to access data stored in Amazon S3, S3 is an object storage service, and mounting S3 buckets as a file system can be complex and introduce latency. It's also not optimized for local access performance, making it less suitable for a group of users with large file storage requirements.

  • Option B: Configure and deploy an AWS Storage Gateway file gateway. Connect each user’s workstation to the file gateway
    AWS Storage Gateway with the File Gateway configuration provides a hybrid cloud storage solution. It allows users to access cloud-based file storage through local file protocols (e.g., NFS, SMB) while leveraging the scalability of Amazon S3. The file gateway maintains local caching of frequently accessed files, ensuring low-latency access and offering a seamless experience for users. This solution is both scalable and efficient, meeting the company's requirements for operational efficiency.

  • Option C: Move each user’s working environment to Amazon WorkSpaces. Set up an Amazon WorkDocs account for each user
    Amazon WorkSpaces and WorkDocs are virtual desktop and file storage solutions, but they may not be necessary in this case, as they involve moving the entire working environment to the cloud. This can add complexity and costs that may not be needed if the goal is to extend file storage capabilities.

  • Option D: Deploy an Amazon EC2 instance and attach an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume
    EBS volumes are excellent for high-performance storage, but using them for multiple users is inefficient and costly, as the storage would be limited to a single EC2 instance. Sharing an EBS volume across users is not operationally efficient and would create significant management overhead.

In conclusion, AWS Storage Gateway provides the most operationally efficient solution for extending file storage capabilities while maintaining local performance benefits for users.

Question No 4:

According to security best practices, how should an Amazon EC2 instance be given access to an Amazon S3 bucket?

A. Hard code an IAM user’s secret key and access key directly in the application, and upload the file.
B. Store the IAM user’s secret key and access key in a text file on the EC2 instance, read the keys, then upload the file.
C. Have the EC2 instance assume a role to obtain the privileges to upload the file.
D. Modify the S3 bucket policy so that any service can upload to it at any time.

Answer: C. Have the EC2 instance assume a role to obtain the privileges to upload the file.

Explanation:

In Amazon Web Services (AWS), security best practices recommend using roles and policies to manage access permissions. When an Amazon EC2 instance needs to access an S3 bucket, the most secure and scalable approach is to use IAM roles for EC2 instances. This approach avoids the need for directly embedding sensitive credentials in the application or server.

  • Option A: Hard code an IAM user’s secret key and access key directly in the application, and upload the file
    This is a bad practice because hardcoding sensitive IAM credentials (e.g., access keys and secret keys) directly in the application code creates a significant security risk. If the application is compromised, the credentials can be extracted and misused. AWS recommends never hardcoding credentials in applications.

  • Option B: Store the IAM user’s secret key and access key in a text file on the EC2 instance, read the keys, then upload the file
    Storing sensitive credentials such as access keys and secret keys in text files on an EC2 instance is also insecure. If the EC2 instance is compromised, the keys can be easily accessed. This violates best practices, as it leads to the potential exposure of credentials.

  • Option C: Have the EC2 instance assume a role to obtain the privileges to upload the file
    This is the recommended approach. By attaching an IAM role to the EC2 instance (via an instance profile), the instance obtains the permissions needed to access the S3 bucket. No long-term credentials are stored or hardcoded on the instance; instead, AWS delivers short-lived temporary credentials through the instance metadata service and rotates them automatically.

  • Option D: Modify the S3 bucket policy so that any service can upload to it at any time
    Modifying the S3 bucket policy to allow all services to upload to it compromises security. Open permissions can lead to unauthorized access to the bucket and its contents, which is not recommended.

Thus, the most secure and recommended method is to have the EC2 instance assume a role with the necessary S3 permissions.
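As a sketch of what such a role looks like, the two policy documents below are expressed as Python dictionaries: a trust policy that lets the EC2 service assume the role, and a least-privilege permissions policy. The bucket name is a hypothetical placeholder.

```python
import json

# Trust policy: allows the EC2 service to assume the role on the
# instance's behalf (this is what makes the role attachable to EC2).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: least privilege -- only PutObject on one bucket.
# "example-upload-bucket" is a placeholder, not a real bucket.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-upload-bucket/*",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

Once the role is created from these documents and attached to the instance, the AWS SDKs on that instance pick up the temporary credentials automatically; no access keys ever appear in code or on disk.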

Question No 5: 

Which option is a customer responsibility when using Amazon DynamoDB under the AWS Shared Responsibility Model?

A. Physical security of DynamoDB
B. Patching of DynamoDB
C. Access to DynamoDB tables
D. Encryption of data at rest in DynamoDB

Answer: C. Access to DynamoDB tables

Explanation:

The AWS Shared Responsibility Model defines the division of security tasks between AWS and its customers. AWS manages the security of the cloud, meaning the underlying infrastructure, hardware, and network security, while customers are responsible for managing the security in the cloud—for example, access control, data protection, and application security.

  • Option A: Physical security of DynamoDB
    This is AWS’s responsibility. AWS is responsible for securing the physical infrastructure, including hardware and data centers. This ensures that the physical devices running services like DynamoDB are protected from unauthorized access, theft, or damage.

  • Option B: Patching of DynamoDB
    AWS is responsible for patching and maintaining the underlying systems running DynamoDB. As a managed service, DynamoDB abstracts away the underlying infrastructure management, including patching, from the customer. Customers do not need to worry about database maintenance tasks like patching, as this is managed by AWS.

  • Option C: Access to DynamoDB tables
    This is the customer’s responsibility. The customer controls who can access DynamoDB tables and how the data is accessed. This includes setting up appropriate IAM policies for users, roles, or services that need access to DynamoDB. Customers are also responsible for managing permissions and configuring access control mechanisms such as AWS Identity and Access Management (IAM), resource-based policies, and access keys.

  • Option D: Encryption of data at rest in DynamoDB
    DynamoDB encrypts all data at rest by default using AWS Key Management Service (AWS KMS), so the baseline encryption itself is handled by AWS. Customers can optionally choose which type of KMS key is used (AWS owned, AWS managed, or customer managed), but the encryption mechanism is not something the customer must implement.

In conclusion, while AWS handles the underlying security infrastructure, the customer is responsible for managing access control to their DynamoDB tables, ensuring only authorized users or applications can interact with the data.
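Controlling table access in practice means writing scoped IAM policies. The sketch below shows a read-only policy limited to a single table; the region, account ID, and table name "Orders" are placeholders for illustration.

```python
import json

# Least-privilege policy granting read-only access to one DynamoDB table.
# Account ID 123456789012 and table "Orders" are hypothetical placeholders.
readonly_table_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:BatchGetItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}

print(json.dumps(readonly_table_policy, indent=2))
```

Note what is absent as much as what is present: no `dynamodb:PutItem` or `dynamodb:DeleteItem`, and the `Resource` names one table rather than `*`. That scoping is exactly the customer-side responsibility the question tests.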

Question No 6: 

Which option is a perspective that includes foundational capabilities of the AWS Cloud Adoption Framework (AWS CAF)?

A. Sustainability
B. Performance efficiency
C. Governance
D. Reliability

Answer: C. Governance

Explanation:

The AWS Cloud Adoption Framework (AWS CAF) is designed to help organizations transition to the AWS Cloud. It includes six perspectives that provide a structured approach to help plan and execute a cloud adoption strategy. These perspectives are:

  1. Business

  2. People

  3. Governance

  4. Platform

  5. Security

  6. Operations

Each of these perspectives outlines a set of capabilities and considerations that organizations need to focus on during the adoption process. One of the core perspectives of the AWS CAF is Governance.

  • Governance in the context of the AWS CAF includes establishing policies, processes, and controls to ensure that the use of cloud resources aligns with business goals and regulatory requirements. It involves managing and optimizing cloud usage, ensuring compliance, and establishing frameworks to handle decision-making processes regarding cloud adoption.

  • Option A: Sustainability
    Sustainability is a pillar of the AWS Well-Architected Framework, not one of the six AWS CAF perspectives. It focuses on making architectures environmentally responsible, which is a design concern rather than a cloud-adoption capability.

  • Option B: Performance Efficiency
    Performance efficiency is likewise a Well-Architected Framework pillar. It addresses how resources are utilized and optimized for performance, but it is not a core perspective in the AWS CAF.

  • Option D: Reliability
    Reliability is another Well-Architected Framework pillar. While it is important when designing cloud architectures on AWS, it is not one of the six AWS CAF perspectives.

In conclusion, Governance directly addresses the foundational capabilities necessary to adopt and manage AWS services, making it the correct perspective in the AWS CAF.

Question No 7: 

A company is running and managing its own Docker environment on Amazon EC2 instances. The company wants an alternative to help manage cluster size, scheduling, and environment maintenance. Which AWS service meets these requirements?

Options:

A. AWS Lambda
B. Amazon RDS
C. AWS Fargate
D. Amazon Athena

Answer: C. AWS Fargate

Explanation:

Managing Docker containers manually on Amazon EC2 instances can be complex and resource-intensive. The company may face challenges related to scaling the environment, managing the underlying infrastructure, and ensuring optimal resource utilization. A better alternative would be to use a fully managed service that handles these tasks, and AWS Fargate is an ideal choice for such requirements.

AWS Fargate is a serverless compute engine that allows users to run containers without managing the underlying servers or clusters. With Fargate, there is no need to provision or manage EC2 instances, as it automatically provisions the compute resources required for containers.

  • Option A: AWS Lambda
    AWS Lambda is a serverless compute service that runs code in response to events but is typically used for functions or microservices, not for managing containerized environments like Docker. Lambda is best suited for running lightweight, event-driven code rather than managing a Docker environment.

  • Option B: Amazon RDS
    Amazon RDS (Relational Database Service) is used for managing relational databases, not containerized environments. It does not help with managing Docker clusters or maintaining containerized applications.

  • Option D: Amazon Athena
    Amazon Athena is an interactive query service that allows users to analyze data in Amazon S3 using standard SQL. It is not designed for container management.

AWS Fargate removes the need to provision, patch, and scale the underlying compute, while the orchestrator it runs under, Amazon ECS (Elastic Container Service) or Amazon EKS (Elastic Kubernetes Service), handles container scheduling and cluster management. Together they are an excellent choice for companies looking to move away from manually managing their Docker environments on EC2 instances.
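To show what "no servers to manage" looks like in configuration, here is a minimal sketch of an ECS task definition targeting Fargate, written as a Python dict. The family name, image, and sizes are illustrative placeholders; in practice this document would be registered with ECS or defined in infrastructure-as-code.

```python
# Minimal sketch of an ECS task definition that targets Fargate.
# Values are placeholders; note there is no EC2 instance or AMI anywhere --
# you declare CPU/memory and Fargate provisions the compute.
fargate_task_definition = {
    "family": "example-web-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",        # required network mode for Fargate tasks
    "cpu": "256",                   # 0.25 vCPU
    "memory": "512",                # 512 MiB
    "containerDefinitions": [{
        "name": "web",
        "image": "public.ecr.aws/docker/library/nginx:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
}

print(fargate_task_definition["requiresCompatibilities"])
```

The contrast with the self-managed EC2 approach is the point: the company declares resource requirements at the task level and never touches cluster capacity, AMIs, or instance patching.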

Question No 8: 

A company wants to run a NoSQL database on Amazon EC2 instances. Which task is the responsibility of AWS in this scenario?

Options:

A. Update the guest operating system of the EC2 instances.
B. Maintain high availability at the database layer.
C. Patch the physical infrastructure that hosts the EC2 instances.
D. Configure the security group firewall.

Answer: C. Patch the physical infrastructure that hosts the EC2 instances.

Explanation:

When using Amazon EC2 instances, the AWS Shared Responsibility Model outlines the division of responsibilities between AWS and the customer. In the context of running a NoSQL database on EC2, AWS is responsible for managing and securing the underlying infrastructure, while customers are responsible for managing the operating system and database layer.

  • Option A: Update the guest operating system of the EC2 instances
    This is the customer’s responsibility. The customer must manage the operating system (OS) on the EC2 instance, including applying patches, updates, and security fixes.

  • Option B: Maintain high availability at the database layer
    This is also the customer’s responsibility. The customer is responsible for ensuring the NoSQL database is highly available by implementing replication, clustering, and other mechanisms. AWS offers services like Amazon DynamoDB for fully managed NoSQL databases, which take care of availability automatically.

  • Option C: Patch the physical infrastructure that hosts the EC2 instances
    This is AWS’s responsibility. AWS manages the physical infrastructure, including the underlying servers, storage, networking, and data center security. AWS is responsible for ensuring that the hardware and physical network components are patched and maintained.

  • Option D: Configure the security group firewall
    This is the customer’s responsibility. The customer is responsible for configuring security groups to control inbound and outbound traffic to and from the EC2 instance, which includes ensuring proper network security.

In conclusion, AWS is responsible for the physical infrastructure of the EC2 instances, including patching and maintaining the servers that run the instances. The customer is responsible for managing the database, operating system, and network configuration.

Question No 9: 

Which AWS services or tools can help identify rightsizing opportunities for Amazon EC2 instances? (Choose two.)

Options:

A. AWS Cost Explorer
B. AWS Billing Conductor
C. Amazon CodeGuru
D. Amazon SageMaker
E. AWS Compute Optimizer

Answer:

A. AWS Cost Explorer
E. AWS Compute Optimizer

Explanation:

Rightsizing EC2 instances is an essential strategy for optimizing cloud costs. It involves adjusting instance types or sizes based on the actual usage and performance needs of applications, ensuring that an organization isn't over-provisioning (which leads to unnecessary costs) or under-provisioning (which affects performance). In this context, there are specific AWS tools designed to help with identifying rightsizing opportunities.

  • Option A: AWS Cost Explorer
    AWS Cost Explorer is a tool that allows users to visualize and analyze their AWS costs and usage. It provides insights into cost trends and can identify opportunities to optimize instance usage based on actual resource consumption. By analyzing your EC2 usage patterns, Cost Explorer can help pinpoint underutilized instances that can be downsized or instances that are over-provisioned, thereby supporting efficient cost management.

  • Option E: AWS Compute Optimizer
    AWS Compute Optimizer is specifically designed to analyze the performance metrics of EC2 instances and recommend the most appropriate instance type based on the workload. It uses machine learning to evaluate historical utilization data and recommends optimal instance sizes to achieve cost savings without compromising performance. It is highly effective in identifying rightsizing opportunities for EC2 instances, ensuring that businesses are running instances that best match their performance and cost requirements.

  • Option B: AWS Billing Conductor
    While AWS Billing Conductor is useful for managing AWS cost allocation and creating custom billing reports, it does not directly provide rightsizing recommendations for EC2 instances.

  • Option C: Amazon CodeGuru
    Amazon CodeGuru is a developer tool that provides automated code reviews and performance suggestions, but it is not designed to analyze infrastructure or recommend rightsizing for EC2 instances.

  • Option D: Amazon SageMaker
    Amazon SageMaker is a machine learning service, and while it can optimize machine learning models, it is not involved in rightsizing EC2 instances.

In summary, AWS Cost Explorer and AWS Compute Optimizer are the tools specifically designed to help identify rightsizing opportunities, ultimately contributing to cost savings and improved resource management for EC2 instances.
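The idea behind rightsizing can be illustrated with a toy heuristic: if peak CPU over an observation window stays well below capacity, step down to the next smaller size in the instance family. This is deliberately simplified and is not Compute Optimizer's actual algorithm, which weighs many metrics with machine learning.

```python
# Toy rightsizing heuristic (NOT Compute Optimizer's real algorithm):
# if peak CPU over the window stays under a threshold, suggest the
# next smaller size in the family ladder.
SIZE_LADDER = ["m5.large", "m5.xlarge", "m5.2xlarge"]  # illustrative family

def rightsize(instance_type, cpu_samples, peak_threshold=40.0):
    """Return a smaller suggested type, or the current one if usage is healthy."""
    idx = SIZE_LADDER.index(instance_type)
    if max(cpu_samples) < peak_threshold and idx > 0:
        return SIZE_LADDER[idx - 1]
    return instance_type

print(rightsize("m5.xlarge", [12.0, 18.5, 9.3]))   # underutilized -> m5.large
print(rightsize("m5.xlarge", [55.0, 71.2, 48.9]))  # busy -> keep m5.xlarge
```

Cost Explorer surfaces the usage data that feeds this kind of decision, while Compute Optimizer automates the recommendation itself, which is why those two are the correct pair.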

Question No 10: 

Which of the following are benefits of using AWS Trusted Advisor? (Choose two.)

A. Providing high-performance container orchestration
B. Creating and rotating encryption keys
C. Detecting underutilized resources to save costs
D. Improving security by proactively monitoring the AWS environment
E. Implementing enforced tagging across AWS resources

Answer:

C. Detecting underutilized resources to save costs
D. Improving security by proactively monitoring the AWS environment

Explanation:

AWS Trusted Advisor is a powerful service that provides real-time guidance to help users provision resources according to AWS best practices. It delivers valuable insights and recommendations for optimizing AWS accounts across cost, performance, security, fault tolerance, and service limits. Let’s break down the benefits of using AWS Trusted Advisor:

  • Option C: Detecting underutilized resources to save costs
    One of the primary features of AWS Trusted Advisor is its ability to identify underutilized resources, such as EC2 instances, RDS databases, and other services. By flagging instances that are not being fully utilized, Trusted Advisor helps organizations avoid over-provisioning and unnecessary expenditures. This can result in significant cost savings, as users can resize or terminate underused resources, ensuring they only pay for what they need.

  • Option D: Improving security by proactively monitoring the AWS environment
    AWS Trusted Advisor continuously monitors an account’s security posture and provides recommendations to improve security. It checks for potential security vulnerabilities, such as unused security groups or open ports, and offers suggestions to mitigate risks. For example, Trusted Advisor can flag S3 buckets with open access or advise on the use of multi-factor authentication (MFA), thus enhancing the security of your AWS environment.

  • Option A: Providing high-performance container orchestration
    This is not part of AWS Trusted Advisor's capabilities. Container orchestration is handled by services such as Amazon ECS (Elastic Container Service) or Amazon EKS (Elastic Kubernetes Service), which runs Kubernetes, not by Trusted Advisor.

  • Option B: Creating and rotating encryption keys
    Key management and encryption tasks are typically handled by AWS Key Management Service (KMS), not AWS Trusted Advisor.

  • Option E: Implementing enforced tagging across AWS resources
    While AWS Trusted Advisor can give suggestions for better resource management, enforcing tagging is primarily done through tools like AWS Organizations or AWS Config rather than Trusted Advisor.

In conclusion, AWS Trusted Advisor offers essential benefits for both cost optimization and security enhancement, helping users align their AWS infrastructure with best practices, detect underutilized resources, and proactively secure their environment.
