AWS Certified Security - Specialty SCS-C02 Amazon Practice Test Questions and Exam Dumps
Question No 1:
A company needs to use HTTPS when users connect to its website, example.com. Which type of SSL/TLS certificate should the company use to meet this requirement?
A. Custom SSL certificate that is stored in AWS Key Management Service (AWS KMS)
B. Default SSL certificate that is stored in Amazon CloudFront
C. Custom SSL certificate that is stored in AWS Certificate Manager (ACM)
D. Default SSL certificate that is stored in Amazon S3
Answer: C. Custom SSL certificate that is stored in AWS Certificate Manager (ACM)
Explanation:
To secure communications between a website and its users, SSL/TLS certificates are used to enable HTTPS. The SSL/TLS certificate ensures the authenticity and confidentiality of data transmitted between the web server and the user's browser.
The correct answer is C: "Custom SSL certificate that is stored in AWS Certificate Manager (ACM)." AWS Certificate Manager (ACM) is a service specifically designed to handle the provisioning, management, and deployment of SSL/TLS certificates. It can automatically renew certificates, and it integrates directly with services such as Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon API Gateway, simplifying the process of securing websites. This makes it the most appropriate and recommended option for storing SSL/TLS certificates for a website such as example.com.
Here’s why the other options are less appropriate:
A. Custom SSL certificate that is stored in AWS Key Management Service (AWS KMS): AWS KMS is primarily a service for managing encryption keys, not for storing SSL/TLS certificates. While KMS can be used in conjunction with ACM to manage encryption keys, KMS itself is not intended for storing SSL/TLS certificates.
B. Default SSL certificate that is stored in Amazon CloudFront: CloudFront supports HTTPS, but its default certificate covers only the *.cloudfront.net domain name, so it cannot secure a custom domain such as example.com. For custom domains, CloudFront relies on a certificate provisioned in or imported into ACM; CloudFront itself is not a certificate store.
D. Default SSL certificate that is stored in Amazon S3: S3 is an object storage service, not a certificate management service. There is no "default SSL certificate" in S3, and placing a certificate file in a bucket does not make it usable for HTTPS termination.
Thus, AWS Certificate Manager (ACM) is the most suitable choice for managing SSL/TLS certificates for securing HTTPS communications with example.com.
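For illustration, a public certificate for example.com could be requested through ACM with the AWS SDK for Python (boto3). This is a minimal sketch; the DNS validation method, the www subdomain, and the region are assumptions (CloudFront, for example, requires ACM certificates in us-east-1):

import boto3

# Request a public SSL/TLS certificate for example.com in ACM.
acm = boto3.client("acm", region_name="us-east-1")
response = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["www.example.com"],  # assumption: www alias also needed
    ValidationMethod="DNS",  # assumption: DNS validation (EMAIL is also supported)
)
print("Certificate ARN:", response["CertificateArn"])

Once the certificate is validated and in use, ACM handles renewal automatically.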
Question No 2:
A security engineer is responding to a security incident involving a compromised Amazon EC2 instance. The incident response requirements are as follows:
A compromised EC2 instance's volatile memory and non-volatile memory must be preserved for forensic purposes.
A compromised EC2 instance's metadata must be updated with corresponding incident ticket information.
A compromised EC2 instance must remain online during the investigation but must be isolated to prevent the spread of malware.
Any investigative activity during the collection of volatile data must be captured as part of the process.
Which combination of steps should the security engineer take to meet these requirements with the LEAST operational overhead? (Choose three.)
A. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Isolate the instance by updating the instance's security groups to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
B. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Move the instance to an isolation subnet that denies all source and destination traffic. Associate the instance with the subnet to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
C. Use Systems Manager Run Command to invoke scripts that collect volatile data.
D. Establish a Linux SSH or Windows Remote Desktop Protocol (RDP) session to the compromised EC2 instance to invoke scripts that collect volatile data.
E. Create a snapshot of the compromised EC2 instance's EBS volume for follow-up investigations. Tag the instance with any relevant metadata and incident ticket information.
F. Create a Systems Manager State Manager association to generate an EBS volume snapshot of the compromised EC2 instance. Tag the instance with any relevant metadata and incident ticket information.
Answer: A, C, E
Explanation:
When responding to a security event on an EC2 instance, the security engineer needs to ensure that the compromised instance is properly isolated, and critical data is preserved for forensic analysis while minimizing operational overhead. The goal is to meet AWS security best practices while addressing the incident efficiently.
Let’s break down the best options:
A. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Isolate the instance by updating the instance's security groups to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
This step ensures that the instance is isolated by restricting network access via security group updates and preventing automatic termination via termination protection. It also removes the instance from scaling groups and load balancers to avoid spreading any potential compromise further.
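As a sketch, these isolation steps could be automated with boto3; every identifier below is hypothetical:

import boto3

instance_id = "i-0123456789abcdef0"    # hypothetical compromised instance
isolation_sg = "sg-0123456789abcdef0"  # hypothetical security group with no inbound/outbound rules
ec2 = boto3.client("ec2")

# Prevent accidental termination during the investigation.
ec2.modify_instance_attribute(InstanceId=instance_id, DisableApiTermination={"Value": True})

# Isolate the instance by replacing its security groups with the restrictive group.
ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[isolation_sg])

# Detach the instance from its Auto Scaling group (hypothetical group name).
boto3.client("autoscaling").detach_instances(
    InstanceIds=[instance_id],
    AutoScalingGroupName="app-asg",
    ShouldDecrementDesiredCapacity=False,
)

# Deregister the instance from its load balancer target group (hypothetical ARN).
boto3.client("elbv2").deregister_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/abc123",
    Targets=[{"Id": instance_id}],
)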
C. Use Systems Manager Run Command to invoke scripts that collect volatile data.
Systems Manager Run Command is an efficient way to execute scripts remotely on EC2 instances without needing direct SSH or RDP access. This allows the security engineer to gather volatile data (such as memory or running processes) without compromising the integrity of the compromised instance or requiring complex manual intervention.
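A minimal sketch of collecting volatile data with Run Command follows; the instance ID, S3 bucket, and the specific commands are illustrative, not a complete forensic script:

import boto3

ssm = boto3.client("ssm")
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical instance ID
    DocumentName="AWS-RunShellScript",    # AWS managed document for Linux shell commands
    Parameters={"commands": [
        "ps aux > /tmp/processes.txt",           # running processes
        "netstat -antp > /tmp/connections.txt",  # active network connections
    ]},
    # Writing command output to S3 records the investigative activity,
    # addressing the requirement to capture the collection process.
    OutputS3BucketName="forensics-evidence-bucket",  # hypothetical bucket
)
print("Command ID:", response["Command"]["CommandId"])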
E. Create a snapshot of the compromised EC2 instance's EBS volume for follow-up investigations. Tag the instance with any relevant metadata and incident ticket information.
Creating a snapshot of the instance’s EBS volume is a critical step in preserving the state of non-volatile data (disk data) for forensic purposes. The snapshot can be analyzed later without risking further changes to the compromised instance.
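For example, the snapshot and tagging could be performed with boto3 (the volume ID, instance ID, and ticket number are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Preserve the non-volatile (disk) state for follow-up analysis.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Forensic snapshot - incident INC-1234",
)

# Record the incident ticket in the instance's metadata via tags.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "IncidentTicket", "Value": "INC-1234"},
        {"Key": "Status", "Value": "UnderInvestigation"},
    ],
)
print("Snapshot ID:", snapshot["SnapshotId"])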
The other options are not ideal:
B. Move the instance to an isolation subnet that denies all source and destination traffic: While subnet isolation can help, it adds more complexity to the process and may result in unnecessary operational overhead. Security groups and termination protection offer a simpler and quicker method to isolate the instance.
D. Establish a Linux SSH or Windows Remote Desktop Protocol (RDP) session to the compromised EC2 instance to invoke scripts: This step introduces risks of contamination (e.g., malware spreading to the investigator's machine) and operational overhead, as it requires manual access to the instance.
F. Create a Systems Manager State Manager association to generate an EBS volume snapshot: State Manager is built to maintain a desired, ongoing configuration state across instances, not to perform a one-time forensic capture. Creating the snapshot directly (or through Run Command) is simpler and carries less operational overhead in this case.
Thus, the combination of A, C, and E provides the most efficient approach to meet the requirements with the least operational overhead.
Question No 3:
A company is using AWS Organizations and wants to implement AWS CloudFormation StackSets to deploy various infrastructure components such as Amazon EC2 instances, Elastic Load Balancers (ELBs), Amazon RDS databases, and Amazon Elastic Kubernetes Service (EKS) or Amazon Elastic Container Service (ECS) clusters across its environments. Currently, developers are responsible for creating their own CloudFormation stacks, which helps speed up the delivery process. A centralized CI/CD pipeline in a shared services AWS account deploys these stacks.
The company's security team has already established internal standards for resource configurations. If any resources are found to be non-compliant, the security team should be notified so they can take appropriate action. However, it is crucial that the notification solution does not hinder the developers' ability to maintain their current delivery speed.
Which solution would be the most operationally efficient while meeting these requirements?
A. Create an Amazon Simple Notification Service (SNS) topic. Subscribe the security team's email addresses to the SNS topic. Develop a custom AWS Lambda function that runs the aws cloudformation validate-template AWS CLI command to validate CloudFormation templates before the build stage in the CI/CD pipeline. Configure the CI/CD pipeline to send notifications to the SNS topic if validation issues are found.
B. Create an Amazon Simple Notification Service (SNS) topic. Subscribe the security team's email addresses to the SNS topic. Create custom rules in CloudFormation Guard to define resource configuration standards. In the CI/CD pipeline, create a Docker image to run the cfn-guard command on the CloudFormation template before the build stage. Configure the pipeline to send a notification to the SNS topic if there are any compliance issues.
C. Create an Amazon Simple Notification Service (SNS) topic and an Amazon Simple Queue Service (SQS) queue. Subscribe the security team's email addresses to the SNS topic. Set up an Amazon S3 bucket in the shared services AWS account, and configure an event notification to send updates to the SQS queue when new objects are added to the S3 bucket. Require developers to upload CloudFormation templates to the S3 bucket. Launch EC2 instances that automatically scale based on SQS queue depth, and configure the EC2 instances to use CloudFormation Guard to validate and deploy the templates if they meet the security standards. Configure the CI/CD pipeline to notify the SNS topic if any issues are found.
D. Create a centralized CloudFormation StackSet that includes a standard set of resources for developers to deploy in each AWS account. Ensure all CloudFormation templates adhere to security requirements. For any new resources or configurations, update the templates, send them to the security team for review, and upon approval, add the new templates to the repository for developers to use.
Answer: B
Explanation:
The question is focused on maintaining operational efficiency while ensuring that the security team's compliance standards are met for CloudFormation templates used by the developers. The goal is to ensure that security issues are identified and addressed without slowing down development and deployment.
Solution B is the most operationally efficient option, and here's why:
Custom CloudFormation Guard Rules: CloudFormation Guard (cfn-guard) is an open-source, policy-as-code tool that validates CloudFormation templates against rules you define for resource configurations. This solution integrates the compliance checks directly into the CI/CD pipeline, before the build stage, ensuring that templates are validated against the internal standards before they are deployed.
Automation and Speed: Developers can continue to create CloudFormation templates as they currently do. The validation process is automated by integrating CloudFormation Guard into the CI/CD pipeline, ensuring that any non-compliant template is flagged automatically. The notification to the SNS topic can then inform the security team about any issues without requiring manual intervention.
Operational Efficiency: Since CloudFormation Guard can be directly integrated into the CI/CD pipeline using a Docker image, this approach eliminates the need for complex manual reviews or separate infrastructure (like EC2 instances or SQS queues), which would add unnecessary overhead. The simplicity of the pipeline's integration with CloudFormation Guard makes it highly efficient and scalable, ensuring both compliance and developer speed are maintained.
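As a sketch of how that pipeline stage might work, the following Python driver writes one hypothetical Guard rule (enforcing S3 bucket encryption as an example standard) and runs the cfn-guard CLI against a template; the file names and the rule itself are assumptions:

import subprocess

# A hypothetical Guard rule: every S3 bucket in the template must
# declare encryption. Written in the Guard 2.x rules DSL.
GUARD_RULES = """
rule s3_buckets_encrypted {
    Resources.*[ Type == 'AWS::S3::Bucket' ] {
        Properties.BucketEncryption exists
    }
}
"""

with open("rules.guard", "w") as f:
    f.write(GUARD_RULES)

# cfn-guard exits non-zero on rule violations, which fails the pipeline
# stage and triggers the SNS notification to the security team.
result = subprocess.run(
    ["cfn-guard", "validate", "--rules", "rules.guard", "--data", "template.yaml"]
)
if result.returncode != 0:
    print("Template is non-compliant; notify the security team via SNS.")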
Option A runs the aws cloudformation validate-template command, but that command checks only whether a template is syntactically valid; it cannot evaluate resource configurations against the security team's standards. The custom Lambda function also adds a component to build and maintain without delivering the required compliance checks.
Option C introduces unnecessary infrastructure (SQS, EC2 instances) and adds more operational complexity with event-driven architecture. This solution requires developers to upload their templates to an S3 bucket, which is not as efficient as directly running compliance checks within the CI/CD pipeline using CloudFormation Guard. It also requires additional resources that would need to be maintained, increasing overhead.
Option D suggests using a centralized CloudFormation StackSet, but this is less flexible and does not address the need for automated notifications or compliance validation within the pipeline. Developers would still need to rely on a manual review process for new resources or configurations, which could slow down delivery speeds.
In conclusion, Option B provides a streamlined, scalable solution that integrates seamlessly into the developers' existing CI/CD process, ensuring compliance checks are performed efficiently without impeding the speed of deployment.
Question No 4:
A company is migrating a legacy system from an on-premises data center to AWS. The application server will be moved to AWS, but for compliance reasons, the database must remain on-premises. The database is highly sensitive to network latency. Additionally, the data transferred between the on-premises data center and AWS must be encrypted using IPsec.
Which two AWS solutions will meet these requirements? (Choose two.)
A. AWS Site-to-Site VPN
B. AWS Direct Connect
C. AWS VPN CloudHub
D. VPC peering
E. NAT gateway
Answer: A, B
Explanation:
To meet the company's requirements, two AWS solutions should be chosen based on network latency sensitivity and the need for IPsec encryption.
AWS Site-to-Site VPN (Option A): This solution is designed to securely connect an on-premises network to an AWS Virtual Private Cloud (VPC) using an encrypted IPsec VPN tunnel. Since the company requires IPsec encryption for the data in transit between the on-premises data center and AWS, this solution is ideal. However, the latency could be a concern because of the use of the public internet for communication. Still, it can meet the security and compliance requirements.
AWS Direct Connect (Option B): AWS Direct Connect establishes a dedicated, private network connection from the on-premises data center to AWS, bypassing the public internet. This offers lower latency and more consistent performance than a Site-to-Site VPN alone, which matters for the latency-sensitive database. Because Direct Connect by itself does not encrypt traffic, the IPsec requirement is met by running an AWS Site-to-Site VPN over the Direct Connect link.
Why not the other options?
AWS VPN CloudHub (Option C): VPN CloudHub provides hub-and-spoke VPN connectivity between multiple remote sites through a single virtual private gateway. It is designed for connecting multiple branch offices to one another and to AWS, not for a single, latency-sensitive on-premises-to-AWS link with IPsec encryption.
VPC Peering (Option D): VPC peering allows communication between two VPCs but does not provide a secure, encrypted connection from an on-premises data center to AWS. It is not suitable for connecting an on-premises network to AWS.
NAT Gateway (Option E): A NAT gateway is used to allow instances in a private subnet to access the internet. It does not provide the necessary secure and encrypted connectivity between an on-premises data center and AWS.
Thus, AWS Site-to-Site VPN and AWS Direct Connect are the most appropriate solutions for meeting the company's requirements.
Question No 5:
A company has an application that utilizes multiple Amazon DynamoDB tables to store data. During an audit, it was found that the tables are not in compliance with the company’s data protection policy. According to the company’s retention policy, all data must be backed up twice each month: once at midnight on the 15th day and once at midnight on the 25th day of each month. The company also requires the backups to be retained for three months.
Which combination of actions should a security engineer take to meet these requirements? (Choose two.)
A. Use DynamoDB’s on-demand backup feature to create a backup plan. Set up a lifecycle policy to expire backups after three months.
B. Use AWS DataSync to create a backup plan. Add a backup rule that includes a retention period of three months.
C. Use AWS Backup to create a backup plan. Add a backup rule that includes a retention period of three months.
D. Set the backup frequency using a cron schedule expression. Assign each DynamoDB table to the backup plan.
E. Set the backup frequency using a rate schedule expression. Assign each DynamoDB table to the backup plan.
Answer: C, D
Explanation:
AWS Backup (Option C): AWS Backup is a fully managed backup service that lets you centrally configure and automate backup policies for AWS resources, including DynamoDB tables. Using AWS Backup, you create a backup plan that defines when backups occur and attach a backup rule with a retention period of three months. This satisfies the requirement to back up the tables twice per month and retain the backups for three months, while keeping schedule and retention management in one place.
Cron schedule (Option D): Within the backup plan, the backup frequency is set with a cron schedule expression such as cron(0 0 15,25 * ? *), which runs at midnight on the 15th and 25th day of each month; each DynamoDB table is then assigned to the backup plan so every table is covered. A cron expression is needed because the backups must run on specific days of the month, which interval-based scheduling cannot express.
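A minimal boto3 sketch of such a plan follows; the plan name, vault, IAM role, and table ARN are hypothetical, and three months is approximated as 90 days:

import boto3

backup = boto3.client("backup")

# Backup plan: run at midnight UTC on the 15th and 25th, retain for ~3 months.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-retention-policy",
        "Rules": [{
            "RuleName": "TwiceMonthly",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 0 15,25 * ? *)",  # midnight on the 15th and 25th
            "Lifecycle": {"DeleteAfterDays": 90},           # assumption: 3 months ~ 90 days
        }],
    }
)

# Assign the DynamoDB tables to the plan (one table shown).
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "dynamodb-tables",
        "IamRoleArn": "arn:aws:iam::111122223333:role/BackupRole",
        "Resources": ["arn:aws:dynamodb:us-east-1:111122223333:table/Orders"],
    },
)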
Why not the other options?
DynamoDB on-demand backups (Option A): On-demand backups are created manually and are not organized into a backup plan, and DynamoDB provides no lifecycle policy to expire them automatically. Enforcing the three-month retention this way would require extra custom automation (for example, scheduled AWS Lambda functions), adding operational overhead.
AWS DataSync (Option B): AWS DataSync is a data transfer service for moving large amounts of data between on-premises storage and AWS storage services. It is not designed for scheduling or managing backups of AWS resources such as DynamoDB tables.
Rate schedule (Option E): A rate expression triggers at fixed intervals (for example, every 30 days) and cannot target specific days of the month, so it cannot produce backups at midnight on the 15th and 25th.
In conclusion, AWS Backup (Option C) with a cron schedule expression and table assignments (Option D) automates the backup process, enforces the retention policy, and keeps backup management simple and centralized.
Question No 6:
A company needs to implement a scalable solution for multi-account authentication and authorization. The solution should minimize the introduction of additional user-managed architectural components and leverage AWS's native features as much as possible. The company has already set up AWS Organizations with all features activated and AWS IAM Identity Center (AWS Single Sign-On) enabled.
What additional steps should the security engineer take to complete the task?
A. Use AD Connector to create users and groups for all employees that require access to AWS accounts. Assign AD Connector groups to AWS accounts and link them to IAM roles based on employees' job functions and access requirements. Instruct employees to access AWS accounts via the AWS Directory Service user portal.
B. Use an IAM Identity Center default directory to create users and groups for all employees that require access to AWS accounts. Assign groups to AWS accounts and link them to permission sets based on employees' job functions and access requirements. Instruct employees to access AWS accounts via the IAM Identity Center user portal.
C. Use an IAM Identity Center default directory to create users and groups for all employees that require access to AWS accounts. Link IAM Identity Center groups to IAM users in all accounts to inherit existing permissions. Instruct employees to access AWS accounts via the IAM Identity Center user portal.
D. Use AWS Directory Service for Microsoft Active Directory to create users and groups for all employees that require access to AWS accounts. Enable AWS Management Console access in the created directory and specify IAM Identity Center as the source for integrated accounts and permission sets. Instruct employees to access AWS accounts via the AWS Directory Service user portal.
Answer: B
Explanation:
The best solution in this case is to leverage IAM Identity Center (AWS Single Sign-On) to manage multi-account authentication and authorization without introducing additional user-managed architectural components.
IAM Identity Center Default Directory (Option B): AWS IAM Identity Center (formerly AWS Single Sign-On) allows you to centralize authentication and authorization for multiple AWS accounts. By using the IAM Identity Center default directory, the security engineer can easily create and manage users and groups within AWS. The groups can then be assigned to specific AWS accounts and linked to permission sets, which define the level of access for each user based on their job functions. This approach eliminates the need for additional complex user management systems and leverages AWS's built-in features for centralized identity management. The users can access the AWS accounts through the IAM Identity Center user portal, which simplifies access management.
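To make this concrete, here is a hedged boto3 sketch of creating a permission set and assigning a group to an account; the instance ARN, group ID, and account ID are hypothetical placeholders:

import boto3

sso_admin = boto3.client("sso-admin")
instance_arn = "arn:aws:sso:::instance/ssoins-1234567890abcdef"  # hypothetical
group_id = "a1b2c3d4-5678-90ab-cdef-111122223333"                # hypothetical group
account_id = "111122223333"                                      # hypothetical account

# Create a permission set reflecting a job function.
permission_set = sso_admin.create_permission_set(
    InstanceArn=instance_arn,
    Name="DeveloperAccess",
    SessionDuration="PT8H",  # assumption: 8-hour sessions
)["PermissionSet"]

# Assign the group to the AWS account with that permission set.
sso_admin.create_account_assignment(
    InstanceArn=instance_arn,
    TargetId=account_id,
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=permission_set["PermissionSetArn"],
    PrincipalType="GROUP",
    PrincipalId=group_id,
)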
Why not the other options?
AD Connector (Option A): While AD Connector can be used for integrating on-premises Active Directory with AWS, it introduces an additional architectural component, which the company seeks to avoid. It also requires maintaining an Active Directory infrastructure, which is more complex and goes against the goal of minimizing user-managed components.
Linking IAM Identity Center to IAM Users (Option C): This option suggests linking IAM Identity Center groups to IAM users in all accounts, which can result in managing IAM users individually in each account. This approach complicates user management and is not the best use of IAM Identity Center, which is designed to centralize access control rather than relying on individual IAM user configurations across multiple accounts.
AWS Directory Service (Option D): AWS Directory Service can be used for integrating Microsoft Active Directory with AWS, but this option is more complex and requires maintaining a separate directory service. This would add overhead and complexity, contrary to the goal of using AWS-native features to simplify the solution.
In conclusion, Option B, using the IAM Identity Center default directory, is the most efficient and scalable approach for multi-account authentication and authorization. It leverages AWS's native identity and access management capabilities, providing a simpler and more streamlined solution.
Question No 7:
A company has deployed Amazon GuardDuty and wants to implement automation for mitigating potential threats. The company has decided to start with RDP brute-force attacks originating from Amazon EC2 instances within its AWS environment. A security engineer needs to implement a solution that blocks the detected communication from any suspicious EC2 instance until investigation and potential remediation can take place.
Which solution will fulfill these requirements?
A. Configure GuardDuty to send the event to an Amazon Kinesis data stream. Process the event using an Amazon Kinesis Data Analytics for Apache Flink application that sends a notification through Amazon Simple Notification Service (SNS). Add rules to the network ACL to block traffic to and from the suspicious instance.
B. Configure GuardDuty to send the event to Amazon EventBridge. Deploy an AWS WAF web ACL. Process the event with an AWS Lambda function that sends a notification through Amazon SNS and adds a web ACL rule to block traffic to and from the suspicious instance.
C. Enable AWS Security Hub to ingest GuardDuty findings and send the event to Amazon EventBridge. Deploy AWS Network Firewall. Process the event with an AWS Lambda function that adds a rule to a Network Firewall firewall policy to block traffic to and from the suspicious instance.
D. Enable AWS Security Hub to ingest GuardDuty findings. Configure an Amazon Kinesis data stream as an event destination for Security Hub. Process the event with an AWS Lambda function that replaces the security group of the suspicious instance with a security group that does not allow any connections.
Correct Answer: D. Enable AWS Security Hub to ingest GuardDuty findings. Configure an Amazon Kinesis data stream as an event destination for Security Hub. Process the event with an AWS Lambda function that replaces the security group of the suspicious instance with a security group that does not allow any connections.
Explanation:
In this scenario, the key requirement is to block traffic from the suspicious EC2 instance quickly, without affecting the ongoing operations of other instances. The most appropriate solution would be to leverage AWS Security Hub in combination with GuardDuty findings, Kinesis, and AWS Lambda.
AWS Security Hub is integrated with GuardDuty, which allows for centralized management and visibility of security findings.
By sending GuardDuty findings to Kinesis, events can be processed in real-time and the corresponding actions taken, such as modifying the security group of the suspicious instance.
AWS Lambda is the ideal tool for processing these events and dynamically altering the security group associated with the suspicious instance, denying it any further inbound or outbound communication until the situation is investigated.
This approach ensures the response is automated, scalable, and minimally disruptive to other resources in the environment.
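A minimal sketch of the Lambda remediation logic follows; the event field path shown matches GuardDuty's finding format, but the exact payload shape depends on how the finding is delivered, and the isolation security group is hypothetical:

import boto3

ec2 = boto3.client("ec2")
ISOLATION_SG = "sg-0123456789abcdef0"  # hypothetical security group with no rules

def handler(event, context):
    # Assumed path to the offending instance in the finding payload.
    instance_id = event["detail"]["resource"]["instanceDetails"]["instanceId"]
    # Replace the instance's security groups with the deny-all group,
    # blocking traffic until investigation and remediation complete.
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[ISOLATION_SG])
    return {"isolated": instance_id}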
Question No 8:
A company has an AWS account hosting a production application. The company receives an email notification that Amazon GuardDuty has detected an "Impact:IAMUser/AnomalousBehavior" finding in the account. A security engineer must follow the investigation playbook for this security incident and collect and analyze information without affecting the application.
Which solution will meet these requirements most quickly?
A. Log in to the AWS account using read-only credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
B. Log in to the AWS account using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use Amazon Detective to review the API calls in context.
C. Log in to the AWS account using administrator credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
D. Log in to the AWS account using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use AWS CloudTrail Insights and AWS CloudTrail Lake to review the API calls in context.
Correct Answer: B. Log in to the AWS account using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use Amazon Detective to review the API calls in context.
Explanation:
The best approach for this situation is to leverage Amazon Detective, a service designed to help security teams investigate security findings. It allows you to explore and visualize data from GuardDuty and AWS CloudTrail in a way that is simple to understand and allows for faster identification of anomalous behavior.
Read-only credentials ensure that the investigation does not interfere with ongoing operations, thus preserving the integrity of the production application.
Amazon Detective can help you explore the GuardDuty finding in context, providing a detailed view of the IAM user’s activity and making it easier to trace the API calls leading to the security incident. This method avoids any disruptions to the production environment while gathering essential information for the investigation.
This solution is not only quick but also non-invasive, which is critical when dealing with live production systems.
Question No 9:
Company A has an AWS account named Account A. Company A recently acquired Company B, which has an AWS account named Account B. Company B stores files in an Amazon S3 bucket. The administrators need to give a user from Account A full access to the S3 bucket in Account B. After the administrators adjust the IAM permissions for the user in Account A to access the S3 bucket in Account B, the user still cannot access any files in the S3 bucket.
Which solution will resolve this issue?
A. In Account B, create a bucket ACL to allow the user from Account A to access the S3 bucket in Account B.
B. In Account B, create an object ACL to allow the user from Account A to access all the objects in the S3 bucket in Account B.
C. In Account B, create a bucket policy to allow the user from Account A to access the S3 bucket in Account B.
D. In Account B, create a user policy to allow the user from Account A to access the S3 bucket in Account B.
Correct Answer: C. In Account B, create a bucket policy to allow the user from Account A to access the S3 bucket in Account B.
Explanation:
The problem here is that the user from Account A has been granted IAM permissions, but the access still isn't working because the necessary permissions haven't been configured on the S3 bucket in Account B.
The correct solution is to use an S3 bucket policy in Account B to explicitly allow access for the user from Account A. A bucket policy can define permissions at the bucket level, ensuring that the user from Account A has the required access to read/write to the objects in the S3 bucket.
ACLs (Access Control Lists) are a legacy access-control mechanism. A bucket or object ACL can grant access only to another AWS account as a whole (by canonical user ID), not to a specific IAM user in that account, and AWS recommends disabling ACLs and using bucket policies for access control.
By configuring a bucket policy, you ensure that the access rights are enforced at the S3 level, regardless of the IAM policies in Account A. This is the most effective and secure way to enable cross-account access.
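For illustration, a bucket policy granting the Account A user full access could be applied from Account B with boto3; the bucket name, account ID, and user name are hypothetical:

import json
import boto3

bucket = "company-b-files"                           # hypothetical bucket in Account B
user_arn = "arn:aws:iam::111111111111:user/analyst"  # hypothetical user in Account A

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAccountAUserFullAccess",
        "Effect": "Allow",
        "Principal": {"AWS": user_arn},
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket}",    # bucket-level actions (e.g., ListBucket)
            f"arn:aws:s3:::{bucket}/*",  # object-level actions (e.g., GetObject)
        ],
    }],
}

# Run by an administrator in Account B.
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))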