
AWS Certified DevOps Engineer - Professional DOP-C02 Amazon Practice Test Questions and Exam Dumps
Question No 1:
A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB). The ALB routes requests to an AWS Lambda function. Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users. The version of the application is defined in the user-agent header that is sent with all requests to the API.
After a series of recent changes to the API, the company has observed issues with the application. The company needs to gather a metric for each API operation by response code for each version of the application that is in use. A DevOps engineer has modified the Lambda function to extract the API operation name, version information from the user-agent header, and response code.
Which additional set of actions should the DevOps engineer take to gather the required metrics?
A. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
B. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric.
C. Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
D. Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.
Correct Answer: A
Explanation:
To gather the required metrics (API operation name, response code, and version number), a good approach would be to utilize Amazon CloudWatch Logs with metric filters. Here's why:
Writing to CloudWatch Logs: The Lambda function can write logs with the API operation name, response code, and version number as a log line. These logs will be stored in an Amazon CloudWatch Logs log group.
CloudWatch Logs Metric Filter: After the logs are written, CloudWatch Logs Metric Filters can be configured to extract the data and increment a metric for each API operation name.
Dimensions: You can specify response code and application version as dimensions for the metric. Dimensions allow you to break down the metric data for further analysis.
This approach is easy to configure, integrates seamlessly with CloudWatch, and will give you granular metrics for the different versions of the application.
CloudWatch Logs Insights: While CloudWatch Logs Insights can be used to query the log data, it’s typically used for log analysis rather than direct metric generation. This means you'd need an additional layer of querying and manual intervention to generate metrics from the logs.
This approach could work for ad-hoc queries, but it does not automate the metric creation process, which is what is required in this case.
ALB Access Logs: While ALB access logs provide useful information about requests, using them means you would be processing additional log data (from the ALB) rather than directly capturing the necessary data (from the Lambda function). The Lambda function already has the required data (API operation name, response code, and version), so writing that data directly from Lambda is more efficient.
Lambda Response Metadata: Modifying the Lambda function to send response metadata back to ALB does not address the issue of gathering the metrics in CloudWatch effectively.
AWS X-Ray Integration: While X-Ray is great for tracing and monitoring the performance of Lambda functions, it's not designed specifically for generating granular metrics like the ones required here (metrics for each API operation by response code and version).
This approach would involve extra overhead related to tracing and configuring X-Ray, and may not provide the simplicity and directness of using CloudWatch Logs and metric filters for metric generation.
Option A is the most straightforward and efficient approach to gather metrics on API operations, response codes, and application versions by leveraging CloudWatch Logs and metric filters. This solution directly addresses the need for automated, real-time metrics without the extra complexity of using X-Ray or querying with CloudWatch Logs Insights.
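As a rough illustration of option A, the following Python (boto3) sketch creates a metric filter with dimensions on the Lambda function's log group. The log group name, JSON field names (operation, statusCode, appVersion), and metric namespace are assumptions about how the Lambda function writes its log lines, not details given in the question; CloudWatch then aggregates the resulting metric by the ResponseCode and AppVersion dimensions.

import boto3

logs = boto3.client("logs")

# Assumes the Lambda function writes one JSON log line per request, for example:
# {"operation": "GetOrder", "statusCode": 200, "appVersion": "2.4.1"}
logs.put_metric_filter(
    logGroupName="/aws/lambda/api-handler",  # hypothetical log group name
    filterName="api-operations-by-version",
    # Numeric comparison on statusCode matches every valid request log line.
    filterPattern="{ $.statusCode >= 100 }",
    metricTransformations=[
        {
            "metricName": "ApiOperationCount",
            "metricNamespace": "MobileApp/API",  # assumed namespace
            "metricValue": "1",                  # increment by 1 per matching log line
            "dimensions": {
                "Operation": "$.operation",
                "ResponseCode": "$.statusCode",
                "AppVersion": "$.appVersion",
            },
        }
    ],
)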
Question No 2:
A company provides an application to customers. The application has an Amazon API Gateway REST API that invokes an AWS Lambda function. On initialization, the Lambda function loads a large amount of data from an Amazon DynamoDB table. The data load process results in long cold-start times of 8-10 seconds. The DynamoDB table has DynamoDB Accelerator (DAX) configured.
Customers report that the application intermittently takes a long time to respond to requests. The application receives thousands of requests throughout the day. In the middle of the day, the application experiences 10 times more requests than at any other time of the day. Near the end of the day, the application's request volume decreases to 10% of its normal total.
A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day. Which solution will meet these requirements?
A. Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.
B. Configure reserved concurrency on the Lambda function with a concurrency value of 0.
C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
D. Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.
Answer: C
Explanation:
The main issue here is long cold-start times for the Lambda function, which are exacerbated by high request volumes and large data loading from DynamoDB. The goal is to reduce latency, ensuring the Lambda function can scale efficiently to handle variable traffic levels, while minimizing the cold-start delay.
Let's break down the options:
Option A: Incorrect.
While provisioned concurrency can help reduce cold-start latency by keeping instances of the Lambda function "warm," setting the concurrency value to 1 is not sufficient for handling the high traffic spikes during the day. Additionally, removing the DAX cluster would likely worsen performance when the Lambda function loads data from DynamoDB, because DAX is a caching layer that speeds up read operations.
Option B: Incorrect.
Reserved concurrency guarantees the function a set amount of execution capacity, but setting it to 0 would effectively disable the Lambda function. This would prevent any requests from being processed, which is clearly not a solution for reducing latency or handling increased traffic.
Option C: Correct.
Provisioned concurrency is designed to eliminate cold starts by ensuring that a specified number of Lambda function instances are always initialized and ready to handle requests. This can drastically reduce cold-start latency. By configuring AWS Application Auto Scaling with a range of 1 to 100 for the provisioned concurrency, you can ensure that the Lambda function scales to handle traffic fluctuations effectively. During times of high request volume (e.g., the middle of the day), the Lambda function will scale up, and during lower request times, it will scale down, ensuring cost efficiency while keeping latency low.
Option D: Incorrect.
While reserved concurrency is useful for controlling the maximum number of concurrent executions for a Lambda function, it doesn't eliminate cold-start times or offer the scaling flexibility needed to handle dynamic traffic patterns efficiently. Additionally, API Gateway does not use reserved concurrency in this context, and scaling on API Gateway alone wouldn't address the cold-start issue for the Lambda function itself.
The best approach is to use provisioned concurrency to keep the Lambda function warm and reduce cold-start latency, while also enabling AWS Application Auto Scaling to automatically adjust concurrency levels to handle variable traffic volumes efficiently. Option C provides the most effective solution for both reducing latency and scaling to meet demand.
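A minimal boto3 sketch of option C follows: it registers the Lambda alias as an Application Auto Scaling scalable target for provisioned concurrency (minimum 1, maximum 100) and attaches a target-tracking policy. The function name, alias, and target utilization value are assumptions.

import boto3

autoscaling = boto3.client("application-autoscaling")

FUNCTION_NAME = "api-data-loader"  # hypothetical function name
ALIAS = "live"                     # provisioned concurrency must target a version or alias

# Register the alias as a scalable target for provisioned concurrency (min 1, max 100).
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=f"function:{FUNCTION_NAME}:{ALIAS}",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# Target-tracking policy: keep provisioned concurrency utilization near 70%.
autoscaling.put_scaling_policy(
    PolicyName="provisioned-concurrency-tracking",
    ServiceNamespace="lambda",
    ResourceId=f"function:{FUNCTION_NAME}:{ALIAS}",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.7,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)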
Question No 3:
A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache Webserver. The development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application. After completion, the team will create additional deployment groups for staging and production. The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when the deployment occurs, so that they can set different log level configurations depending on the deployment group without having a different application revision for each group.
How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?
A. Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.
B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
C. Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.
D. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.
Answer: B
Explanation:
To meet the requirement of dynamically changing the log level configuration based on the deployment group while minimizing management overhead, the solution should focus on leveraging the CodeDeploy environment variables and lifecycle hooks in a way that avoids requiring different application revisions for each group. Let's evaluate each option:
Option A: Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.
While this solution could work, it introduces unnecessary complexity. The metadata service and EC2 API calls are extra steps that increase the management overhead, particularly when the desired solution can be achieved directly with CodeDeploy environment variables. Moreover, AfterInstall is typically used for tasks after the application is installed, while the desired dynamic change of the log level should ideally happen earlier in the deployment process. This solution is more complex and not the most efficient.
Option B: Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
This is the best solution. CodeDeploy provides a built-in environment variable called DEPLOYMENT_GROUP_NAME, which indicates which deployment group the instance belongs to. By referencing this environment variable in a script, you can dynamically configure the log level settings based on the deployment group without needing separate scripts or revisions. Additionally, placing the script in the BeforeInstall lifecycle hook ensures that the log level is configured before the application is installed, which aligns with the goal of adjusting the log settings before the application runs. This solution has the least management overhead.
Option C: Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.
This option involves creating custom environment variables for each environment. While it is feasible, it adds additional management overhead by requiring manual configuration of custom variables. Furthermore, the ValidateService lifecycle hook is typically used for validating whether the application is working as expected, not for configuring the log level, so this hook is not appropriate for this task.
Option D: Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.
DEPLOYMENT_GROUP_ID is in fact a standard CodeDeploy environment variable, but it returns the deployment group's ID rather than its human-readable name, which makes the per-group logic in the script harder to write and maintain than with DEPLOYMENT_GROUP_NAME. More importantly, the Install lifecycle event is reserved for the CodeDeploy agent to copy the revision files to the instance, so scripts cannot be attached to it in the appspec.yml file. This makes Option D unsuitable for this requirement.
Conclusion:
The most effective and simple approach is Option B, which utilizes the DEPLOYMENT_GROUP_NAME environment variable provided by CodeDeploy and configures the log level in the BeforeInstall lifecycle hook. This ensures a dynamic configuration of the log level based on the deployment group with minimal management overhead.
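A hedged sketch of option B is shown below as a Python hook script; the deployment group names, the Apache configuration path, and the log-level mapping are illustrative. The appspec.yml file would reference this script under the BeforeInstall hooks section.

#!/usr/bin/env python3
"""BeforeInstall hook: set the Apache log level based on the CodeDeploy deployment group.

Referenced from appspec.yml under the BeforeInstall hooks section. The deployment
group names and the Apache config path below are illustrative.
"""
import os

# CodeDeploy exposes DEPLOYMENT_GROUP_NAME to lifecycle hook scripts.
group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")

log_levels = {
    "developer-env": "debug",
    "staging-env": "info",
    "production-env": "warn",
}
log_level = log_levels.get(group, "warn")  # default to the least verbose level

conf_path = "/etc/httpd/conf.d/loglevel.conf"  # hypothetical drop-in config file
with open(conf_path, "w") as conf:
    conf.write(f"LogLevel {log_level}\n")

print(f"Configured Apache LogLevel={log_level} for deployment group '{group}'")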
Question No 4:
A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an account to indicate a desired backup frequency. This requirement includes EBS volumes that do not require backups. The company uses custom tags named Backup_Frequency that have values of none, daily, or weekly that correspond to the desired backup frequency. An audit finds that developers are occasionally not tagging the EBS volumes. A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so that the company can perform backups at least weekly unless a different value is specified.
Which solution will meet these requirements?
A. Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
C. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
D. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
Answer: B
Explanation:
The company needs to enforce the tagging of Backup_Frequency for all Amazon EBS volumes and apply a default tag value of weekly for those that do not have a tag. The solution needs to automatically enforce this policy without requiring manual tagging by developers.
A. Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
This option would work in principle, but it requires writing and maintaining custom rule logic and scopes the check to all Amazon EC2 resources, which is broader than necessary. A managed rule scoped to the EC2::Volume resource type achieves the same result with less effort, so this is not the best choice.
B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
This option is the best solution because it uses an AWS Config managed rule (the required-tags rule) scoped to EC2::Volume resources, which is exactly the resource type in question. The managed rule automatically checks for the presence of the Backup_Frequency tag, and the AWS Systems Manager Automation runbook can be triggered as a remediation action to apply the weekly tag if it is missing. This ensures that the EBS volumes are always tagged as required, which aligns with the company's policy.
C. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
This option uses EventBridge to react to EBS CreateVolume events, which is a viable solution. However, it does not address the case of volumes that may be modified after creation or volumes that were initially created without the required tag. Therefore, it only partially solves the problem.
D. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
While this option broadens the scope by including ModifyVolume events, it still does not ensure comprehensive compliance like the AWS Config solution does. EventBridge rules for both creation and modification would react to the events but would require extra configuration, and this approach is not as streamlined or reliable as using AWS Config for ongoing compliance monitoring.
Thus, the most appropriate solution for ensuring that Backup_Frequency tags are applied to all EBS volumes, and that non-compliant volumes are remediated automatically, is B.
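The sketch below (Python/boto3) illustrates option B under stated assumptions: it creates the required-tags managed rule scoped to AWS::EC2::Volume and wires an automatic remediation to a custom SSM Automation runbook. The runbook name, its parameter names, and the IAM role ARN are hypothetical.

import json
import boto3

config = boto3.client("config")

# Managed required-tags rule, scoped to EBS volumes only.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-backup-frequency-tag",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        "InputParameters": json.dumps({"tag1Key": "Backup_Frequency"}),
    }
)

# Automatic remediation: run a custom SSM Automation runbook that tags the
# non-compliant volume with Backup_Frequency=weekly. The runbook name, its
# parameter names, and the role ARN are assumptions.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "ebs-backup-frequency-tag",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "TagEbsVolumeBackupFrequency",
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                "VolumeId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
                "TagValue": {"StaticValue": {"Values": ["weekly"]}},
                "AutomationAssumeRole": {
                    "StaticValue": {
                        "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]
                    }
                },
            },
        }
    ]
)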
Question No 5:
What should a DevOps engineer do to meet these requirements?
A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads.
B. Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
C. Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.
D. Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
Answer: A
Explanation:
The goal is to ensure that the Aurora cluster remains available with minimal interruption during the upcoming maintenance window. The best solution should address availability during maintenance and provide a way for both read and write operations to continue without significant downtime.
Option A is the correct answer because adding a reader instance to the Aurora cluster provides read scalability and failover support. By using the cluster endpoint for write operations and the reader endpoint for read operations, the application can perform read and write operations on separate instances, ensuring that even if the main DB instance undergoes maintenance or fails over, read operations can continue without disruption. The reader instance ensures that read traffic is directed to a healthy instance while write operations continue on the primary instance, making the solution resilient with minimal interruption.
Option B is incorrect because creating a custom ANY endpoint does not provide the flexibility needed to separate read and write traffic. The ANY endpoint directs both read and write operations to any available instance in the cluster, which could lead to writes being directed to a reader instance during maintenance or failover. This would not provide the desired level of availability during maintenance windows, as the application should direct write traffic only to the primary instance.
Option C is incorrect because enabling the Multi-AZ option provides automatic failover in case of an instance failure, but it doesn't improve read scalability during the maintenance window. With Multi-AZ, there would still be a risk of performance degradation during maintenance, as the application would rely on the failover process, which might introduce brief downtime or delays.
Option D is incorrect for similar reasons as Option C. While enabling Multi-AZ provides failover capabilities, creating a custom ANY endpoint would still direct both reads and writes to the same set of instances, which could affect the overall performance and availability during the maintenance window.
Thus, Option A is the most effective solution because it optimizes both availability and performance during the maintenance window by leveraging the cluster endpoint for writes and the reader endpoint for reads. This minimizes the risk of downtime while maintaining operational continuity.
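A minimal boto3 sketch of option A, assuming an existing Aurora MySQL cluster; the cluster identifier, instance identifier, and instance class are illustrative.

import boto3

rds = boto3.client("rds")
CLUSTER_ID = "app-aurora-cluster"  # hypothetical cluster identifier

# Add a reader: a DB instance created inside an existing Aurora cluster that
# already has a writer joins the cluster as an Aurora Replica (reader).
rds.create_db_instance(
    DBInstanceIdentifier=f"{CLUSTER_ID}-reader-1",
    DBClusterIdentifier=CLUSTER_ID,
    DBInstanceClass="db.r6g.large",  # instance class is an assumption
    Engine="aurora-mysql",           # must match the cluster's engine
)

# The application then uses the cluster (writer) endpoint for writes and the
# reader endpoint for reads.
cluster = rds.describe_db_clusters(DBClusterIdentifier=CLUSTER_ID)["DBClusters"][0]
print("Writer endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])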
Question No 6:
A company must encrypt all AMIs that the company shares across accounts. A DevOps engineer has access to a source account where an unencrypted custom AMI has been built. The DevOps engineer also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2 instances from the AMI. The DevOps engineer must share the AMI with the target account.
The company has created an AWS Key Management Service (AWS KMS) key in the source account.
Which additional steps should the DevOps engineer perform to meet the requirements? (Choose three.)
A. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.
B. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the default Amazon Elastic Block Store (Amazon EBS) encryption key in the copy action.
C. In the source account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role in the target account.
D. In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.
E. In the source account, share the unencrypted AMI with the target account.
F. In the source account, share the encrypted AMI with the target account.
Correct Answer: A, C, F
Explanation:
To meet the requirement of encrypting the AMIs that are shared across accounts, the DevOps engineer will need to follow a series of steps. Here's a breakdown of why options A, C, and F are correct:
Copy the unencrypted AMI to an encrypted AMI: The first step is to copy the unencrypted AMI and specify the KMS key for encryption in the copy process. This ensures that the AMI is encrypted with the specified KMS key.
Why not B: Using the default Amazon EBS encryption key would not meet the requirement to encrypt the AMI with the customer managed KMS key that the company created in the source account. In addition, the default key is typically the AWS managed aws/ebs key, and snapshots encrypted with an AWS managed key cannot be shared with another account, so the resulting AMI could not be used by the target account.
KMS Grant: To allow the Auto Scaling group in the target account to launch instances from the encrypted AMI, the DevOps engineer must create a KMS grant in the source account. This grant will delegate the necessary permissions for the Auto Scaling service-linked role in the target account to use the KMS key for decryption.
Why not D: Modifying the key policy to allow the target account to create a grant is unnecessary in this case. The appropriate step is to create the grant in the source account, as specified in C. Creating a grant in the target account is redundant and complicates the solution.
Share the encrypted AMI: After encrypting the AMI in the source account, the engineer can share the encrypted AMI with the target account. This step ensures that the target account can use the AMI to launch EC2 instances from the Auto Scaling group.
Why not E: Sharing the unencrypted AMI does not meet the requirement to encrypt all shared AMIs. The company policy requires encryption before sharing the AMI with another account.
Default EBS Encryption Key: While specifying the default Amazon EBS encryption key might be a valid option for some scenarios, in this case, the company requires the use of a custom KMS key, not the default encryption key. This does not meet the security and compliance requirements for encryption using a specific KMS key.
Grant Delegation: The key policy modification and KMS grant creation in the target account (option D) are not required. The correct approach is to create the grant in the source account and allow the Auto Scaling service-linked role in the target account to use the KMS key via this grant. Modifying the key policy and creating grants in both accounts adds unnecessary complexity.
The correct set of steps for the DevOps engineer to perform are A (copy the unencrypted AMI to an encrypted AMI using the KMS key), C (create a KMS grant in the source account to delegate permissions to the Auto Scaling group service-linked role), and F (share the encrypted AMI with the target account). These actions ensure that the AMI is encrypted and can be used by the target account’s Auto Scaling group as per the requirements.
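The boto3 sketch below walks through steps A, F, and C under stated assumptions: the AMI ID, KMS key ARN, account IDs, and region are placeholders, and the grant is created in the source account for the target account's Auto Scaling service-linked role, following the answer chosen above.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
kms = boto3.client("kms", region_name="us-east-1")

SOURCE_AMI_ID = "ami-0123456789abcdef0"  # hypothetical unencrypted AMI
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111111111111:key/EXAMPLE-KEY-ID"
TARGET_ACCOUNT_ID = "222222222222"

# Step A: copy the unencrypted AMI into an encrypted AMI using the customer managed key.
copy = ec2.copy_image(
    Name="app-ami-encrypted",
    SourceImageId=SOURCE_AMI_ID,
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId=KMS_KEY_ARN,
)
encrypted_ami_id = copy["ImageId"]

# Wait for the copy to finish before sharing it.
ec2.get_waiter("image_available").wait(ImageIds=[encrypted_ami_id])

# Step F: share the encrypted AMI with the target account.
# (The underlying encrypted snapshots must also be shared with the target account.)
ec2.modify_image_attribute(
    ImageId=encrypted_ami_id,
    LaunchPermission={"Add": [{"UserId": TARGET_ACCOUNT_ID}]},
)

# Step C: grant the target account's Auto Scaling service-linked role the KMS
# permissions it needs to decrypt the AMI's snapshots when launching instances.
kms.create_grant(
    KeyId=KMS_KEY_ARN,
    GranteePrincipal=(
        f"arn:aws:iam::{TARGET_ACCOUNT_ID}:role/aws-service-role/"
        "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
    ),
    Operations=[
        "Decrypt",
        "GenerateDataKeyWithoutPlaintext",
        "ReEncryptFrom",
        "ReEncryptTo",
        "CreateGrant",
        "DescribeKey",
    ],
)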
Question No 7:
A company uses AWS CodePipeline pipelines to automate releases of its application. A typical pipeline consists of three stages: build, test, and deployment. The company has been using a separate AWS CodeBuild project to run scripts for each stage. However, the company now wants to use AWS CodeDeploy to handle the deployment stage of the pipelines.
The company has packaged the application as an RPM package and must deploy the application to a fleet of Amazon EC2 instances. The EC2 instances are in an EC2 Auto Scaling group and are launched from a common AMI.
Which combination of steps should a DevOps engineer perform to meet these requirements? (Choose two.)
A. Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy.
B. Create a new version of the common AMI with the CodeDeploy agent installed. Create an AppSpec file that contains application deployment scripts and grants access to CodeDeploy.
C. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Add a step to the CodePipeline pipeline to use EC2 Image Builder to create a new AMI. Configure CodeDeploy to deploy the newly created AMI.
D. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.
E. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the EC2 instances that are launched from the common AMI as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.
Answer: B, D
Explanation:
The task is to integrate AWS CodeDeploy for the deployment stage of the pipeline, leveraging EC2 Auto Scaling groups, which require the deployment of the application (packaged as an RPM) to a fleet of EC2 instances. The requirements include having the EC2 instances prepared for use with CodeDeploy, specifying how the CodeDeploy deployment will take place, and ensuring integration with the CodePipeline pipeline.
Let's break down the options to determine the most appropriate solution:
Option A: Incorrect.
While installing the CodeDeploy agent on the AMI and ensuring proper IAM permissions are critical steps, this option doesn't address how the application should be deployed with CodeDeploy. The IAM role is important, but an AppSpec file is still needed to define the deployment behavior, so this option is incomplete on its own.
Option B: Correct.
This option outlines the required steps to prepare the EC2 instances to work with CodeDeploy. Creating a new version of the AMI ensures that the CodeDeploy agent is available for deployment tasks. Additionally, creating the AppSpec file is necessary for defining how the deployment should occur (e.g., the scripts that install the RPM package and the file locations). The AppSpec file guides CodeDeploy through the deployment process.
Option C: Incorrect.
This option introduces EC2 Image Builder and suggests using it to create a new AMI, but it doesn't align with the core requirement, which is to deploy the packaged application (RPM) using CodeDeploy. The focus should be on deploying the application, not on creating a new AMI with EC2 Image Builder. The Auto Scaling group should be the deployment target, but adding an Image Builder step introduces unnecessary complexity.
Option D: Correct.
This option correctly configures CodeDeploy to handle the in-place deployment of the RPM package to EC2 instances in the Auto Scaling group. The in-place deployment type means the application is deployed directly onto existing instances, which is the desired approach for this scenario. The CodePipeline CodeDeploy action then automates the deployment stage of the pipeline. This is the key step to integrating CodeDeploy with CodePipeline.
Option E: Incorrect.
This option suggests specifying the EC2 instances directly as the deployment target, but this approach is not ideal when using an Auto Scaling group. The better approach is to target the Auto Scaling group as a whole, since instances can scale in or out dynamically. The Auto Scaling group, not individual EC2 instances, should be the deployment target.
To meet the requirements, the steps need to ensure the EC2 instances are properly prepared with the CodeDeploy agent, the AppSpec file is created to define the deployment behavior, and the CodePipeline pipeline uses CodeDeploy for the deployment stage. Therefore, the best combination is Option B (preparing the EC2 instances and creating the AppSpec file) and Option D (configuring CodeDeploy with Auto Scaling group as the deployment target and integrating with CodePipeline).
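A boto3 sketch of option D, assuming hypothetical application, deployment group, service role, and Auto Scaling group names; the CodePipeline deploy stage would then reference this application and deployment group in its CodeDeploy action.

import boto3

codedeploy = boto3.client("codedeploy")

# Application and deployment group targeting the Auto Scaling group (in-place).
codedeploy.create_application(
    applicationName="tomcat-app",  # hypothetical names throughout
    computePlatform="Server",
)

codedeploy.create_deployment_group(
    applicationName="tomcat-app",
    deploymentGroupName="tomcat-app-prod",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    autoScalingGroups=["tomcat-app-asg"],
    deploymentStyle={
        "deploymentType": "IN_PLACE",
        "deploymentOption": "WITHOUT_TRAFFIC_CONTROL",
    },
)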
Question No 8:
A company’s security team requires that all external Application Load Balancers (ALBs) and Amazon API Gateway APIs are associated with AWS WAF web ACLs. The company has hundreds of AWS accounts, all of which are included in a single organization in AWS Organizations. The company has configured AWS Config for the organization. During an audit, the company finds some externally facing ALBs that are not associated with AWS WAF web ACLs.
Which combination of steps should a DevOps engineer take to prevent future violations? (Choose two.)
A. Delegate AWS Firewall Manager to a security account.
B. Delegate Amazon GuardDuty to a security account.
C. Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
D. Create an Amazon GuardDuty policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
E. Configure an AWS Config managed rule to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
Answer: A, C
Explanation:
To prevent future violations and ensure that all external ALBs and API Gateway APIs are consistently associated with AWS WAF web ACLs across a large number of AWS accounts in the organization, you can leverage AWS Firewall Manager and AWS Config.
Let’s break down the options:
Option A: Delegate AWS Firewall Manager to a security account.
AWS Firewall Manager allows for centralized management of security rules across multiple AWS accounts in AWS Organizations. By delegating AWS Firewall Manager to a security account, the company can enforce security policies across its entire organization, including ensuring that all Application Load Balancers (ALBs) and API Gateway APIs are associated with AWS WAF web ACLs. This is essential for automating the enforcement of web ACL attachment, especially in a multi-account environment. It ensures that future external-facing ALBs and API Gateway APIs will be configured in compliance with the security team’s requirements. This step is crucial and should be part of the solution.
Option B: Delegate Amazon GuardDuty to a security account.
Amazon GuardDuty is a threat detection service that identifies malicious activity and unauthorized behavior. While GuardDuty is excellent for detecting security threats, it does not have the ability to enforce security configurations like attaching AWS WAF web ACLs to ALBs or API Gateway APIs. GuardDuty does not directly contribute to preventing violations related to AWS WAF associations. Therefore, this option is not relevant to the goal.
Option C: Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
Once AWS Firewall Manager is delegated to a security account, you can create a policy that automatically attaches AWS WAF web ACLs to all newly created ALBs and API Gateway APIs across the organization. This will ensure that future ALBs and APIs are created in compliance with the security requirements, preventing any future violations. The policy can be configured to enforce the attachment of web ACLs to external-facing ALBs and APIs. This step is directly aligned with the requirement to prevent future violations.
Option D: Create an Amazon GuardDuty policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
As mentioned earlier, Amazon GuardDuty is focused on security monitoring and threat detection, not on enforcing configuration policies for AWS resources. GuardDuty cannot be used to create a policy to attach AWS WAF web ACLs to ALBs or API Gateway APIs. Therefore, this option is not applicable to the problem.
Option E: Configure an AWS Config managed rule to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
While AWS Config is excellent for monitoring and assessing the configuration of AWS resources, it does not directly enforce configuration changes. However, you can configure AWS Config rules to detect violations (i.e., when ALBs or API Gateway APIs are created without AWS WAF web ACLs) and trigger notifications or remediation actions. Although this is a useful monitoring tool, it is not as proactive in preventing violations as AWS Firewall Manager policies, which can directly enforce compliance. Thus, this option could be helpful for monitoring but doesn't entirely prevent future violations on its own.
Conclusion:
To prevent future violations with the least management overhead, the correct steps involve delegating AWS Firewall Manager to a security account to centrally manage policies across accounts and creating an AWS Firewall Manager policy to automatically attach AWS WAF web ACLs to new ALBs and API Gateway APIs.
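An illustrative boto3 sketch of options A and C follows, under stated assumptions: the account IDs are placeholders, the two calls run under different credentials (the Organizations management account and the delegated administrator), and the ManagedServiceData structure is abbreviated; consult the Firewall Manager documentation for the full WAFV2 policy schema.

import json
import boto3

fms = boto3.client("fms")

# Option A: from the AWS Organizations management account, delegate Firewall
# Manager administration to the security account (account ID is a placeholder).
fms.associate_admin_account(AdminAccount="333333333333")

# Option C: as the delegated Firewall Manager administrator, create a WAF policy
# that attaches a web ACL to ALBs and API Gateway stages across the organization.
fms.put_policy(
    Policy={
        "PolicyName": "require-waf-on-alb-and-apigw",
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            # Abbreviated/illustrative managed service data.
            "ManagedServiceData": json.dumps({
                "type": "WAFV2",
                "preProcessRuleGroups": [],
                "postProcessRuleGroups": [],
                "defaultAction": {"type": "ALLOW"},
                "overrideCustomerWebACLAssociation": False,
            }),
        },
        "ResourceType": "ResourceTypeList",
        "ResourceTypeList": [
            "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "AWS::ApiGateway::Stage",
        ],
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,
    }
)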
Question No 9:
A company uses AWS Key Management Service (AWS KMS) keys and manual key rotation to meet regulatory compliance requirements. The security team wants to be notified when any keys have not been rotated after 90 days. Which solution will accomplish this?
A. Configure AWS KMS to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
B. Configure an Amazon EventBridge event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon Simple Notification Service (Amazon SNS) topic.
C. Develop an AWS Config custom rule that publishes to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
D. Configure AWS Security Hub to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
Answer: C
Explanation:
The goal is to ensure that the security team is notified if any AWS KMS keys have not been rotated after 90 days. AWS Config is the best service to monitor resource compliance over time, including the key rotation policies for AWS KMS keys.
A. Configure AWS KMS to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
AWS KMS itself does not have a built-in feature to send notifications based on the age of keys or rotation schedules. It doesn't support direct integration for this type of monitoring, so this option won't meet the requirement.
B. Configure an Amazon EventBridge event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon Simple Notification Service (Amazon SNS) topic.
While EventBridge and Lambda can be used to trigger notifications, AWS Trusted Advisor does not directly provide key rotation information in a way that would trigger an alert for KMS key age. This solution is more complex than necessary, and AWS Config provides a more direct method for monitoring key rotation.
C. Develop an AWS Config custom rule that publishes to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
This option is the best solution. AWS Config allows you to create custom rules that evaluate resource compliance over time. A custom rule can be developed to check for the rotation status of AWS KMS keys and ensure that they are rotated at least every 90 days. If any keys have not been rotated within this time frame, the rule can trigger an Amazon SNS notification to alert the security team. This solution is scalable, efficient, and tailored to the specific requirement.
D. Configure AWS Security Hub to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
AWS Security Hub provides security best practice checks and aggregates findings, but it does not specifically monitor the rotation status of AWS KMS keys. This option would not directly meet the goal of checking key rotation on a 90-day basis.
Therefore, the most appropriate and straightforward solution is C, using AWS Config to create a custom rule that checks the age of KMS keys and notifies the security team if any keys are overdue for rotation.
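A rough sketch of option C as the Lambda handler for a periodic AWS Config custom rule (Python/boto3). Because rotation is manual, the example assumes a team convention of recording the last rotation date in a LastRotatedDate tag on each key; keys without the tag fall back to their creation date. That convention is an assumption, not something stated in the question. An SNS notification can then be wired to the rule's compliance-change events.

"""Periodic AWS Config custom rule: flag KMS keys not rotated within 90 days."""
import datetime
import boto3

kms = boto3.client("kms")
config = boto3.client("config")

MAX_AGE_DAYS = 90


def lambda_handler(event, context):
    now = datetime.datetime.now(datetime.timezone.utc)
    evaluations = []

    for page in kms.get_paginator("list_keys").paginate():
        for entry in page["Keys"]:
            key_id = entry["KeyId"]
            meta = kms.describe_key(KeyId=key_id)["KeyMetadata"]
            if meta["KeyManager"] != "CUSTOMER":
                continue  # skip AWS managed keys

            # Assumed convention: a LastRotatedDate tag (YYYY-MM-DD) records the
            # last manual rotation; fall back to the key's creation date.
            last_rotated = meta["CreationDate"]
            for tag in kms.list_resource_tags(KeyId=key_id).get("Tags", []):
                if tag["TagKey"] == "LastRotatedDate":
                    last_rotated = datetime.datetime.strptime(
                        tag["TagValue"], "%Y-%m-%d"
                    ).replace(tzinfo=datetime.timezone.utc)

            age_days = (now - last_rotated).days
            evaluations.append({
                "ComplianceResourceType": "AWS::KMS::Key",
                "ComplianceResourceId": key_id,
                "ComplianceType": "NON_COMPLIANT" if age_days > MAX_AGE_DAYS else "COMPLIANT",
                "OrderingTimestamp": now,
            })

    # For brevity this ignores the 100-evaluation-per-call limit of put_evaluations.
    config.put_evaluations(Evaluations=evaluations, ResultToken=event["resultToken"])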
Question No 10:
How can this issue be corrected in the MOST secure manner?
A. Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.
B. Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.
C. Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.
D. Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.
Answer: C
Explanation:
To correct the issue securely, we need to ensure that the AWS CodeBuild project is able to download the database population script from the S3 bucket, but only through authenticated access. The most secure solution involves removing unauthenticated access from the S3 bucket and modifying the CodeBuild project to authenticate properly via its service role.
Option A is incorrect because CodeBuild does not have an AllowedBuckets project setting, and even if it did, restricting the project to a known bucket would not remove the bucket's unauthenticated access or enforce IAM-based authentication for the download.
Option B is incorrect because Amazon S3 does not support HTTPS basic authentication with tokens; access to S3 is controlled through IAM policies, bucket policies, and presigned URLs. Passing a token with cURL would also require manual management of that token, which complicates access control and is less secure than IAM-based access.
Option C is the best choice. Removing unauthenticated access from the S3 bucket with a bucket policy ensures that only authenticated requests can access the resources. Then, modifying the CodeBuild service role to include the appropriate S3 permissions (such as s3:GetObject) ensures that the CodeBuild project can securely access the bucket. Using the AWS CLI to download the script is the correct approach for interacting with AWS resources in a secure and authorized manner. The AWS CLI will automatically use the IAM role associated with the CodeBuild project, ensuring that access is properly authenticated and authorized.
Option D is incorrect because it suggests using an IAM access key and secret access key directly for the download, which introduces security risks. Hardcoding or manually managing credentials is not recommended, as it could lead to accidental exposure of sensitive keys. Using the CodeBuild service role is a more secure, scalable, and manageable approach for authentication.
Therefore, Option C is the most secure and appropriate solution.
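A boto3 sketch of option C under assumptions: the bucket name, object key, and CodeBuild service role name are placeholders. The build spec would then download the script with the AWS CLI (for example, aws s3 cp), authenticating automatically through the project's service role.

import json
import boto3

s3 = boto3.client("s3")
iam = boto3.client("iam")

BUCKET = "build-artifacts-example"                  # hypothetical bucket name
CODEBUILD_ROLE = "codebuild-project-service-role"   # hypothetical service role name

# Remove unauthenticated (public) access to the bucket.
s3.delete_bucket_policy(Bucket=BUCKET)  # drop any policy that allowed anonymous reads
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Allow the CodeBuild service role to read the script with its own credentials;
# the AWS CLI inside the build then authenticates automatically via this role.
iam.put_role_policy(
    RoleName=CODEBUILD_ROLE,
    PolicyName="AllowScriptDownload",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/scripts/populate_db.sql",  # illustrative key
        }],
    }),
)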