AWS-SysOps Amazon Practice Test Questions and Exam Dumps

Question 1

You are currently hosting multiple applications in a VPC and have logged numerous port scans coming in from a specific IP address block. Your security team has requested that all access from the offending IP address block be denied for the next 24 hours. 

Which of the following is the best method to quickly and temporarily deny access from the specified IP address block?

A. Create an AD policy to modify Windows Firewall settings on all hosts in the VPC to deny access from the IP address block
B. Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block
C. Add a rule to all of the VPC’s Security Groups to deny access from the IP address block
D. Modify the Windows Firewall settings on all Amazon Machine Images (AMIs) that your organization uses in that VPC to deny access from the IP address block

Correct answer: B

Explanation:
When you need to quickly and effectively block traffic from a specific IP address block across multiple resources within a Virtual Private Cloud (VPC), the best approach is to block traffic at the Network Access Control List (Network ACL or NACL) level. Network ACLs operate at the subnet level and are stateless, meaning they evaluate traffic both as it enters and leaves the subnet. They also support both allow and deny rules, unlike Security Groups, which support only allow rules.

In this scenario, you are experiencing port scans—a behavior typically indicative of a potential probing attack or reconnaissance phase by a malicious actor. These types of scans can target many IPs and ports and need to be handled at a level that ensures all potential targets are covered.

Let’s analyze each option:

A. Create an AD policy to modify Windows Firewall settings on all hosts in the VPC to deny access from the IP address block:
This method is not scalable or timely. Group Policy updates may not take effect immediately, and they apply only to Windows-based instances managed by Active Directory. The approach also does nothing for Linux instances or other services exposed to the internet.

B. Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP address block:
This is the best option. Network ACLs are designed to control traffic at the subnet boundary. They can quickly block both inbound and outbound traffic from the offending IP address block. Since NACLs are stateless, rules must be added for both directions if desired, but they are ideal for quickly applying a blanket block across an entire subnet. Furthermore, they do not require any reconfiguration of individual instances.

C. Add a rule to all of the VPC’s Security Groups to deny access from the IP address block:
This is not feasible because Security Groups support only allow rules; they cannot contain deny rules. Therefore, you cannot use Security Groups to explicitly block traffic from specific IPs.

D. Modify the Windows Firewall settings on all Amazon Machine Images (AMIs) that your organization uses in that VPC to deny access from the IP address block:
Modifying AMIs affects only newly launched instances, not existing ones. Also, this method would require deploying new instances with the updated AMIs and does not protect existing resources immediately. It is also platform-specific and not effective across all operating systems or services.

In conclusion, the fastest and most effective temporary solution to deny access from a specific IP address block is to modify the Network ACLs associated with the public subnets. This allows you to block traffic regardless of the instance type, operating system, or application, fulfilling the security team's request in a scalable and timely manner.

Question 2

When preparing for a compliance assessment of your system hosted in AWS, which three best practices should you follow to properly prepare for the audit? (Choose three.)

A. Gather evidence of your IT operational controls
B. Request and obtain applicable third-party audited AWS compliance reports and certifications
C. Request and obtain a compliance and security tour of an AWS data center for a pre-assessment security review
D. Request and obtain approval from AWS to perform relevant network scans and in-depth penetration tests of your system's Instances and endpoints
E. Schedule meetings with AWS's third-party auditors to provide evidence of AWS compliance that maps to your control objectives

Answer: A, B, D

Explanation:

Preparing for a compliance audit in an AWS environment requires careful planning, documentation, and alignment with both your organization’s responsibilities and AWS’s shared responsibility model. Among the many potential steps, three are considered best practices:

A. Gather evidence of your IT operational controls
This is a foundational step. Auditors will require you to provide evidence that your internal processes and controls meet the compliance requirements (e.g., access controls, encryption, logging, patch management). These operational controls are your responsibility, and AWS will not provide them. You should collect logs, policies, procedures, configurations, and change management records. This evidence is critical for demonstrating compliance with security frameworks such as ISO 27001, SOC 2, or HIPAA.

B. Request and obtain applicable third-party audited AWS compliance reports and certifications
AWS provides compliance documentation through the AWS Artifact service. This portal includes downloadable reports such as SOC 1, SOC 2, PCI DSS, and ISO certifications that auditors often require. These reports validate that AWS’s infrastructure and services meet specific compliance standards. While you are responsible for securing your application and data, AWS is responsible for the physical infrastructure, so obtaining their audited certifications is vital to prove that the underlying cloud services meet the standards expected by your auditors.

D. Request and obtain approval from AWS to perform relevant network scans and in-depth penetration tests of your system's Instances and endpoints
AWS requires that customers request authorization before conducting penetration tests or vulnerability scans on their infrastructure. This is to ensure that testing doesn't interfere with AWS operations or violate terms of service. Obtaining this approval is part of being audit-ready, especially if your compliance framework (such as PCI DSS or FedRAMP) requires that such security testing is performed on a regular basis. Therefore, requesting and receiving approval from AWS to conduct these tests is a recognized and important best practice.

Now, let’s consider why the other options are not best practices:

C. Request and obtain a compliance and security tour of an AWS data center for a pre-assessment security review
This is not possible. AWS does not permit public access to its data centers for security and confidentiality reasons. Physical security is part of AWS’s responsibility in the shared responsibility model, and their compliance reports already address this area. Asking for a data center tour is not practical or aligned with AWS’s policies.

E. Schedule meetings with AWS's third-party auditors to provide evidence of AWS compliance that maps to your control objectives
This is also not feasible. AWS does not arrange meetings between its third-party auditors and customers. Instead, they provide standardized documentation through AWS Artifact. You are responsible for mapping this evidence to your specific control objectives. AWS’s third-party auditors are not made available to customers for individual consultation or compliance validation.

In conclusion, the three best practices for preparing for an audit in AWS are: collecting internal control evidence (A), obtaining AWS compliance reports (B), and securing AWS approval for penetration tests (D). These steps align with AWS’s shared responsibility model and ensure that both your application and its underlying infrastructure are properly accounted for in the compliance process.

The correct answers are A, B, D.

Question 3

You’ve recently started a new role and are evaluating your organization’s AWS infrastructure. You discover that one web application uses an Elastic Load Balancer (ELB) placed in front of instances managed by an Auto Scaling Group. CloudWatch metrics show four healthy instances in Availability Zone (AZ) A and none in AZ B, with zero unhealthy instances overall. 

What should be adjusted to ensure instance distribution is balanced across both AZs?

A. Set the ELB to only be attached to another AZ
B. Make sure Auto Scaling is configured to launch in both AZs
C. Make sure your AMI is available in both AZs
D. Make sure the maximum size of the Auto Scaling Group is greater than 4

Answer: B

Explanation:
This question revolves around the concept of high availability and fault tolerance in AWS by distributing instances across multiple Availability Zones (AZs). The scenario highlights an imbalance: four healthy instances are deployed in AZ A, but none in AZ B. The ELB is functioning correctly, and there are no unhealthy instances. Therefore, the issue lies in the deployment of instances, not the health or functionality of existing ones.

The root of this problem is almost certainly in the Auto Scaling Group (ASG) configuration. When you configure an ASG in AWS, you have the option to specify which Availability Zones it should use to launch instances. If the ASG is only configured to deploy instances into AZ A, then even if the ELB is configured to span multiple AZs, no traffic will be routed to AZ B because no instances exist there.

Option B is correct because the Auto Scaling Group needs to be explicitly configured to launch instances in both Availability Zones. This means during ASG setup, or when modifying it, you need to select both AZ A and AZ B in the configuration. AWS will then balance instances across the selected AZs according to the scaling policies and the desired capacity. Once instances exist in AZ B, and the ELB is registered with both AZs, traffic will be routed appropriately to maintain availability and load distribution.
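A rough sketch of the balancing behavior (illustrative only; the actual placement logic is AWS-internal): with one AZ enabled, all capacity lands there, while enabling a second AZ lets the same desired capacity be split evenly:

```python
def distribute_capacity(desired, azs):
    """Spread the desired capacity as evenly as possible across the
    enabled AZs, giving any remainder to the earliest AZs listed."""
    base, extra = divmod(desired, len(azs))
    return {az: base + (1 if i < extra else 0) for i, az in enumerate(azs)}

only_a = distribute_capacity(4, ["us-east-1a"])                # all 4 in AZ A
balanced = distribute_capacity(4, ["us-east-1a", "us-east-1b"])  # 2 and 2
```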

Option A is incorrect because removing AZ A and only attaching the ELB to another AZ doesn't solve the actual deployment issue — it would potentially disconnect users from functioning instances rather than add redundancy. It would also still leave AZ B empty of instances unless the ASG is modified.

Option C refers to AMI availability across AZs, which is not a relevant problem here. AMIs are region-specific and can be used in all AZs within the same region, so unless you're crossing region boundaries (which isn’t mentioned), AMI distribution is not the issue.

Option D mentions the maximum size of the ASG being greater than 4. While this is a good consideration if scaling is constrained, it does not directly explain or fix the unbalanced distribution across AZs. If the ASG is allowed to scale but is only launching in one AZ, the maximum size won’t force it to balance across zones.

To summarize, to achieve balanced deployment across Availability Zones, the Auto Scaling Group must be configured to launch instances in both AZ A and AZ B. This will ensure resilience, improved fault tolerance, and a proper distribution of traffic through the ELB.

Therefore, the correct answer is B.

Question 4

You have been asked to leverage Amazon VPC, EC2, and SQS to implement an application that submits and receives millions of messages per second through a message queue. You want to ensure your application has sufficient bandwidth between your EC2 instances and SQS.

Which option will provide the most scalable solution for communicating between the application and SQS?

A. Ensure the application instances are properly configured with an Elastic Load Balancer
B. Ensure the application instances are launched in private subnets with the EBS-optimized option enabled
C. Ensure the application instances are launched in public subnets with the associate-public-IP-address=true option enabled
D. Launch application instances in private subnets with an Auto Scaling group and Auto Scaling triggers configured to watch the SQS queue size

Correct answer: D

Explanation:
The problem presented involves an application using Amazon EC2 instances to communicate with Amazon SQS (Simple Queue Service) at a very high scale—specifically millions of messages per second. The key objective is to ensure scalability and sufficient bandwidth between the EC2 instances and the SQS queue.

Let’s evaluate each of the answer choices in this context:

A. Ensure the application instances are properly configured with an Elastic Load Balancer:
Elastic Load Balancers (ELBs) are used for distributing incoming traffic across multiple EC2 instances. However, SQS is a fully managed message queuing service, and the communication here is outbound from EC2 to SQS, not inbound to EC2. Therefore, an ELB doesn’t serve any purpose for SQS communication. This option does not address bandwidth or scalability in the EC2-to-SQS direction.

B. Ensure the application instances are launched in private subnets with the EBS-optimized option enabled:
EBS optimization enhances the throughput and performance of Amazon Elastic Block Store (EBS) volumes attached to EC2 instances. However, this is completely unrelated to SQS, which is a network-based service accessed via HTTPS APIs. EBS optimization won’t help with bandwidth between the EC2 application and the SQS endpoint.

C. Ensure the application instances are launched in public subnets with the associate-public-IP-address=true option enabled:
Launching instances in public subnets with public IPs allows them to directly access the internet, including SQS endpoints. However, using public IPs is not necessary when instances in private subnets can use NAT Gateways or VPC endpoints to reach AWS services. Moreover, relying on public IPs doesn’t inherently scale better or improve bandwidth. It may also incur additional data transfer costs and security complexities.

D. Launch application instances in private subnets with an Auto Scaling group and Auto Scaling triggers configured to watch the SQS queue size:
This is the most correct and scalable solution. Auto Scaling allows the number of EC2 instances to increase or decrease automatically in response to changes in workload—in this case, based on SQS queue depth. When message traffic surges and the queue length grows, Auto Scaling can launch more EC2 instances to consume messages, thus increasing processing bandwidth. When traffic subsides, unnecessary instances can be terminated to save costs.

Using private subnets with proper routing (such as VPC endpoints for SQS or NAT Gateways) ensures secure and efficient access to SQS without exposing EC2 instances to the internet. AWS also provides interface VPC endpoints for SQS, allowing private, secure, and highly performant access to the SQS service without going over the public internet.

In conclusion, the most scalable and bandwidth-appropriate design for this use case is to deploy EC2 instances in private subnets, utilize Auto Scaling to handle fluctuations in message volume, and configure scaling policies based on SQS queue size. This approach ensures high availability, elasticity, and secure communication with SQS.

Question 5

You have determined that network throughput is limiting performance on your m1.small EC2 instance when uploading data to Amazon S3 in the same region. What is the best way to resolve this issue?

A. Add an additional ENI
B. Change to a larger Instance
C. Use Direct Connect between EC2 and S3
D. Use EBS PIOPS on the local volume

Answer: B

Explanation:

When you are experiencing network throughput limitations on an EC2 instance such as an m1.small, the root cause is usually the instance type itself. AWS defines network performance levels based on instance size and generation, and smaller or older instances inherently come with lower network capabilities. Therefore, if the instance’s network bandwidth is inadequate, the most effective solution is to upgrade to a larger or newer instance type that offers better network performance.

Let’s examine why B is the correct answer in detail and why the other options are not suitable:

Option B: Change to a larger instance
This is the most direct and effective approach. Larger EC2 instances come with higher baseline and burst network throughput capabilities. For example, moving from an m1.small to a newer and larger instance like t3.large or m5.large would provide significantly better network performance. Newer generation instances also support enhanced networking features like Elastic Network Adapter (ENA), which further improves throughput and lowers latency. Therefore, upgrading the instance type directly addresses the network bottleneck and is aligned with AWS best practices for scaling.
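The sizing decision can be sketched as a simple lookup over a catalog of instance bandwidths; the figures below are hypothetical placeholders for illustration, not published AWS numbers:

```python
def pick_instance(required_gbps, catalog):
    """Return the first (smallest) instance type in the catalog whose
    baseline network bandwidth meets the requirement."""
    for name, gbps in catalog:  # catalog assumed ordered small -> large
        if gbps >= required_gbps:
            return name
    raise ValueError("no catalog entry meets the bandwidth requirement")

# Hypothetical baseline figures for illustration only:
catalog = [("m1.small", 0.3), ("m5.large", 10.0), ("m5.4xlarge", 25.0)]
```

If the uploads need roughly 1 Gbps, the m1.small falls short and the next size up is selected.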

Option A: Add an additional ENI (Elastic Network Interface)
Adding a second ENI does not increase the total available network throughput of the instance. ENIs are useful for multi-homed instances or segregating traffic across subnets and security groups, but they do not enhance the bandwidth limit set by the instance type. The total throughput is a property of the instance size, not the number of ENIs attached.

Option C: Use DirectConnect between EC2 and S3
AWS Direct Connect is a service that provides a dedicated network connection between your on-premises infrastructure and AWS. However, it is not used within AWS itself. Since your EC2 instance and the S3 bucket are in the same region, Direct Connect does not apply. Moreover, even if you had a Direct Connect connection, it would route traffic from your on-premises location, not between EC2 and S3 within AWS. This option is therefore irrelevant in this context.

Option D: Use EBS PIOPS on the local volume
Provisioned IOPS (PIOPS) applies to EBS volume performance, not to network throughput. This setting affects how fast data can be read from or written to disk. If your issue is related to uploading data to S3, then disk I/O is not the bottleneck—network bandwidth is. Therefore, enhancing EBS performance will not improve the situation.

In summary, the primary cause of your throughput issue is the limitation imposed by the m1.small instance’s network capacity. AWS instance types have specific thresholds for network performance, and the only viable solution in this scenario is to change to a larger instance with better networking capabilities. This resolves the bottleneck directly by providing more bandwidth for your data uploads to Amazon S3.

The correct answer is B.

Question 6

When an Amazon VPC is in use, which two components are responsible for enabling communication with external networks? (Choose two.)

A. Elastic IPs (EIP)
B. NAT Gateway (NAT)
C. Internet Gateway (IGW)
D. Virtual Private Gateway (VGW)

Answer: B, C

Explanation:
To enable a Virtual Private Cloud (VPC) in AWS to communicate with external networks, such as the public internet or external data centers, specific components must be in place and correctly configured. Let’s evaluate each option to determine which two are essential for enabling external connectivity.

Option A: Elastic IPs (EIP)
Elastic IPs are static, public IPv4 addresses that AWS allows you to allocate and associate with instances. While they provide a way for an individual instance to be reachable over the internet, they do not by themselves establish connectivity. They must be used in conjunction with an Internet Gateway and correct route table configurations. Without those supporting components, simply assigning an EIP doesn’t enable network communication. Therefore, Elastic IP is not one of the two required components — it is helpful but not independently sufficient.

Option B: NAT Gateway (NAT)
A NAT Gateway allows private instances in a VPC to initiate outbound connections to the internet (for example, for software updates or API access) without allowing inbound connections. This is especially useful in architectures where some resources (like databases or internal services) need access to the internet but must remain inaccessible from it. NAT Gateways are placed in public subnets and used to route internet-bound traffic from private subnets. Hence, this is a correct choice for external connectivity from private instances.

Option C: Internet Gateway (IGW)
An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It is a crucial component for any internet-bound or internet-facing application in AWS. When you want public instances to access or be accessed from the internet, you must attach an IGW to your VPC and configure route tables appropriately. This makes the Internet Gateway a correct answer.

Option D: Virtual Private Gateway (VGW)
A Virtual Private Gateway is the component that connects your VPC to a remote network via a VPN or AWS Direct Connect. It is not used for internet access but rather for private, secure communication with on-premises infrastructure or other VPCs. While VGWs do provide external connectivity, it is not to the public internet — instead, it is to your own external private networks. Therefore, it is not one of the components that enable general external internet access.

To summarize:

  • NAT Gateway enables private instances to access the internet.

  • Internet Gateway allows general internet communication for public instances.

  • Elastic IPs are only helpful when paired with other infrastructure.

  • Virtual Private Gateway is used for VPNs or private links, not general internet access.
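The way these components fit together is visible in the subnets' route tables: a public subnet's default route points at the IGW, a private subnet's at the NAT. A sketch of longest-prefix route resolution (the gateway IDs are hypothetical):

```python
import ipaddress

def route_lookup(route_table, dest_ip):
    """Return the target of the most specific (longest-prefix) route
    matching the destination, mimicking VPC route resolution."""
    ip = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in route_table:
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

public_rt = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "igw-0abc")]
private_rt = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "nat-0def")]
```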

The correct answers are B and C.

Question 7

Your application currently leverages AWS Auto Scaling to grow and shrink as load increases/decreases and has been performing well. Your marketing team expects a steady ramp-up in traffic to follow an upcoming campaign that will result in a 20x growth in traffic over 4 weeks. Your forecast for the approximate number of Amazon EC2 instances necessary to meet the peak demand is 175. 

What should you do to avoid potential service disruptions during the ramp-up in traffic?

A. Ensure that you have pre-allocated 175 Elastic IP addresses so that each server will be able to obtain one as it launches
B. Check the service limits in Trusted Advisor and adjust as necessary so the forecasted count remains within limits
C. Change your Auto Scaling configuration to set a desired capacity of 175 prior to the launch of the marketing campaign
D. Pre-warm your Elastic Load Balancer to match the requests per second anticipated during peak demand prior to the marketing campaign

Correct answer: B

Explanation:
This question deals with scaling infrastructure to meet anticipated traffic growth, specifically a 20x increase in demand. The forecasted number of required Amazon EC2 instances is 175, and the goal is to ensure that this growth does not cause service disruptions due to resource limits.

Let’s evaluate each answer choice:

A. Ensure that you have pre-allocated 175 Elastic IP addresses so that each server will be able to obtain one as it launches:
This is unnecessary and impractical. First, Elastic IPs are a scarce resource, and AWS discourages allocating large numbers unless you have a specific use case. Also, most EC2 instances do not require dedicated Elastic IPs; they can use private IPs and NAT Gateways or VPC endpoints to reach the internet. Unless every instance must be publicly reachable (which is rare), this is not relevant or scalable.

B. Check the service limits in Trusted Advisor and adjust as necessary so the forecasted count remains within limits:
This is the most accurate and necessary step. Each AWS account has soft service limits (quotas) for EC2 instance types, total instances per region, and Auto Scaling group configurations. If your forecast requires 175 EC2 instances, but your current EC2 limit in a region is, say, 100 instances, the Auto Scaling group will fail to launch additional instances beyond that limit, leading to service disruptions.

Using AWS Trusted Advisor or the Service Quotas console, you can review current limits for EC2, EBS, Auto Scaling Groups, and more. You can then request limit increases well ahead of the campaign to ensure that the infrastructure can scale without restriction.
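A back-of-the-envelope version of that check, with an assumed 20% safety buffer on top of the forecast:

```python
import math

def quota_increase_needed(forecast, current_quota, buffer_pct=0.20):
    """Return how much of a quota increase to request (0 if the current
    limit already covers the forecast plus a safety buffer)."""
    needed = forecast + math.ceil(forecast * buffer_pct)
    return max(0, needed - current_quota)

# Forecast of 175 instances against a hypothetical 100-instance limit:
request = quota_increase_needed(175, 100)  # -> request a 110-instance increase
```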

C. Change your Auto Scaling configuration to set a desired capacity of 175 prior to the launch of the marketing campaign:
This would provision 175 instances immediately, which is not cost-effective or necessary if traffic will ramp gradually over four weeks. Auto Scaling is designed to handle growth dynamically. Instead of pre-launching all instances, you should verify that your configuration (scaling policies, launch templates, and quotas) allows the system to grow gradually and reliably. Simply setting a desired capacity early may unnecessarily consume resources and costs.

D. Pre-warm your Elastic Load Balancer to match the requests per second anticipated during peak demand prior to the marketing campaign:
Pre-warming is largely obsolete for modern Application Load Balancers (ALB) and Network Load Balancers (NLB), which can scale automatically with demand. Pre-warming used to be necessary for Classic Load Balancers (CLB), but AWS has since improved load balancer elasticity. Unless you're using CLBs and expecting a sudden and massive spike in traffic, pre-warming is not required in this use case—especially with a 4-week gradual ramp.

In conclusion, the best step to take in preparation for a large, predictable scale-up is to check and increase your AWS service limits in advance. This ensures that when Auto Scaling attempts to launch additional instances, it won’t be blocked by quota constraints, allowing for a smooth, uninterrupted ramp-up.

Question 8

You are using an Auto Scaling group linked to an Elastic Load Balancer (ELB), and you've observed that instances identified as unhealthy by the ELB are not being terminated. 

What should you do to ensure that these unhealthy instances are properly terminated and replaced?

A. Change the thresholds set on the Auto Scaling group health check
B. Add an Elastic Load Balancing health check to your Auto Scaling group
C. Increase the value for the Health check interval set on the Elastic Load Balancer
D. Change the health check set on the Elastic Load Balancer to use TCP rather than HTTP checks

Answer: B

Explanation:

When using Auto Scaling with an Elastic Load Balancer (ELB), it is critical to ensure that your Auto Scaling group is properly configured to use the ELB’s health checks. By default, Auto Scaling only checks EC2 instance status, and unless it is explicitly told to consider ELB health checks, it will ignore whether the load balancer thinks an instance is healthy or not. This is why, in your scenario, unhealthy instances are identified by the ELB but not being terminated or replaced—the Auto Scaling group isn’t configured to act on ELB health feedback.

Let’s examine each option and see why B is correct.

Option B: Add an Elastic Load Balancing health check to your Auto Scaling group
This is the correct action. Auto Scaling allows you to define the health check type as either EC2 or ELB. If it's set to EC2, then Auto Scaling will only respond to the instance’s status reported by the EC2 service itself. However, if you set it to ELB, Auto Scaling will consider the health checks reported by the ELB as well. In your situation, you need to configure the Auto Scaling group to use ELB health checks by setting the health check type to ELB in the Auto Scaling group configuration. This ensures that instances reported as unhealthy by the ELB will be marked for replacement by the Auto Scaling policy.
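The difference between the two health check types can be sketched as a small decision function (illustrative, not the actual Auto Scaling logic):

```python
def should_replace(ec2_status_ok, elb_healthy, health_check_type="EC2"):
    """With type 'EC2' only the EC2 status checks matter; with type
    'ELB' a failing signal from either source marks the instance
    unhealthy and triggers replacement."""
    if health_check_type == "EC2":
        return not ec2_status_ok
    return not (ec2_status_ok and elb_healthy)

# The scenario in this question: EC2 says healthy, ELB says unhealthy.
should_replace(True, False, "EC2")  # False -> the instance lingers
should_replace(True, False, "ELB")  # True  -> the instance is replaced
```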

Option A: Change the thresholds set on the Auto Scaling group health check
This would only affect how sensitive the Auto Scaling group is to the EC2 health checks—not ELB health checks. If the Auto Scaling group is not using ELB health checks, changing thresholds won’t help address the core issue of ELB-based health check failures being ignored. Therefore, this option does not resolve the problem.

Option C: Increase the value for the Health check interval set on the Elastic Load Balancer
Changing the health check interval might reduce the frequency of ELB checks or delay marking instances as unhealthy, but it does not change the behavior of the Auto Scaling group. If the Auto Scaling group is not listening to ELB health check results, modifying ELB settings alone won’t cause unhealthy instances to be terminated. This option may only mask the issue, not solve it.

Option D: Change the health check set on the Elastic Load Balancer to use TCP rather than HTTP checks
Switching from HTTP to TCP may reduce false negatives if your application layer is unstable but the server is still reachable. However, this also does not address the root problem—whether the Auto Scaling group is configured to act on ELB health data. Regardless of whether you use HTTP or TCP health checks, if the Auto Scaling group is only using EC2 status checks, unhealthy instances from ELB's point of view will still be ignored. So, this option does not resolve the underlying configuration gap.

In conclusion, the issue lies in the Auto Scaling group not being set up to take ELB health check results into account. The solution is to explicitly configure the Auto Scaling group to use ELB health checks, which is exactly what option B proposes.

The correct answer is B.

Question 9

Which two AWS services offer built-in user-configurable options for automatic backups and backup rotation features without requiring additional tools? (Choose two.)

A. Amazon S3
B. Amazon RDS
C. Amazon EBS
D. Amazon Redshift

Answer: B, D

Explanation:
AWS provides various services with native features for automating backups and handling backup lifecycle management. The focus of this question is to identify two services that offer out-of-the-box support for user-configurable automatic backups and backup rotation — meaning, services that don’t need external scripting or third-party integrations to enable and manage these functions.

Let’s evaluate each option individually:

Option A: Amazon S3
Amazon S3 is primarily an object storage service. While it provides features like versioning, Cross-Region Replication (CRR), lifecycle rules, and intelligent tiering, it does not offer traditional backup-as-a-service features out of the box in the way that databases or volumes do. Lifecycle policies can be used to transition or expire objects, but these are not traditional backup mechanisms with features like snapshotting or point-in-time recovery. There’s also no automatic backup rotation mechanism specific to S3; instead, administrators use versioning or manual tools to maintain backups. Therefore, S3 does not qualify for this question.

Option B: Amazon RDS
Amazon RDS (Relational Database Service) is one of the most prominent AWS services that provides automatic backups out-of-the-box. When you create a DB instance, you can enable automatic backups and configure the retention period from 1 to 35 days. These backups allow point-in-time recovery, and you can also create manual snapshots for longer retention. The backup window and rotation are completely configurable through the console or API, making RDS a perfect match for this question. So, this option is correct.
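A sketch of those knobs (parameter names modeled on the RDS API; the validation mirrors the documented 0-35 day range, where 0 disables automated backups):

```python
def backup_settings(retention_days, window="03:00-04:00"):
    """Build the automated-backup parameters for a create/modify DB
    instance call: retention period (0-35 days) and backup window."""
    if not 0 <= retention_days <= 35:
        raise ValueError("retention must be between 0 and 35 days")
    return {"BackupRetentionPeriod": retention_days,
            "PreferredBackupWindow": window}
```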

Option C: Amazon EBS
Amazon EBS (Elastic Block Store) does support snapshotting volumes, but automatic backup and rotation are not fully handled natively by EBS in the traditional sense. While you can create snapshots manually or through AWS Backup or custom Lambda functions, EBS itself does not offer an out-of-the-box, user-configurable backup schedule and retention policy directly tied to the service. To get automatic backup rotation, you'd typically need to integrate with AWS Backup, which is a separate service. Therefore, EBS on its own does not qualify.

Option D: Amazon Redshift
Amazon Redshift offers automatic snapshots that are created at regular intervals and retained for a user-defined number of days — all without needing to set up any additional tools. Users can specify the backup retention period (from 1 to 35 days) and also create manual snapshots. Redshift even enables cross-region snapshot copying and automated snapshot deletion after the retention period ends. This makes Redshift a valid answer.
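The retention period for Redshift's automated snapshots is likewise a single setting on the cluster. A sketch using the AWS CLI (the cluster identifier is a placeholder):

```shell
# Keep automated snapshots for 7 days; Redshift deletes them
# automatically once the retention period ends.
aws redshift modify-cluster \
    --cluster-identifier my-cluster \
    --automated-snapshot-retention-period 7
```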

To conclude:

  • Amazon RDS has out-of-the-box support for automatic backups with configurable retention and rotation.

  • Amazon Redshift also supports similar backup capabilities natively.

  • Amazon S3 and EBS do not provide backup-as-a-service with rotation natively; they require additional services or configurations.

The correct answers are B and D.

Question 10

An organization has configured a VPC with an Internet Gateway (IGW), pairs of public and private subnets (each with one subnet per Availability Zone), and an Elastic Load Balancer (ELB) configured to use the public subnets. The application’s web tier leverages the ELB. Auto Scaling and a multi-AZ RDS database instance are also in place. The organization would like to eliminate any potential single points of failure in this design. 

What step should you take to achieve this organization's objective?

A. Nothing, there are no single points of failure in this architecture.
B. Create and attach a second IGW to provide redundant internet connectivity.
C. Create and configure a second Elastic Load Balancer to provide a redundant load balancer.
D. Create a second multi-AZ RDS instance in another Availability Zone and configure replication to provide a redundant database.

Correct answer: D

Explanation:
This question focuses on identifying and addressing potential single points of failure (SPOF) in an architecture that uses a VPC, an Internet Gateway (IGW), an Elastic Load Balancer (ELB), Auto Scaling, and a multi-AZ RDS instance. The objective is to ensure high availability and redundancy for the organization’s infrastructure.

Let’s evaluate each of the options to address the goal:

A. Nothing, there are no single points of failure in this architecture:
This is incorrect in the context of this question. Although the design already includes Auto Scaling and a multi-AZ RDS instance, the question treats the database tier as the remaining weak point: if the database becomes unavailable without a redundant standby ready to take over, the application would suffer downtime. The architecture can therefore still be hardened to achieve full redundancy and high availability.

B. Create and attach a second IGW to provide redundant internet connectivity:
While adding a second Internet Gateway (IGW) seems to provide redundancy for internet connectivity, this is not supported by AWS, as a VPC can have only one IGW attached at a time. Therefore, this solution does not address the issue of single points of failure in the design. An IGW does not directly impact the redundancy of your compute resources or database.

C. Create and configure a second Elastic Load Balancer to provide a redundant load balancer:
Creating a second Elastic Load Balancer (ELB) for redundancy is unnecessary. An ELB is not a single point of failure: AWS automatically runs redundant load balancer nodes in each Availability Zone the ELB is enabled in, and if an AZ fails, the ELB routes traffic to healthy instances in the remaining AZs. You therefore do not need to configure multiple ELBs manually for redundancy in this case.

D. Create a second multi-AZ RDS instance in another Availability Zone and configure replication to provide a redundant database:
This is the best choice among the options for eliminating the database as a single point of failure. RDS multi-AZ deployments synchronously replicate data to a standby instance in a different Availability Zone and fail over to it automatically if the primary becomes unavailable, so the database remains operational through an instance or AZ failure. Deploying the redundant, replicated database described in this option therefore ensures high availability for the database layer.
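For reference, Multi-AZ is a single flag on the DB instance. A sketch using the AWS CLI to convert an existing instance (the identifier is a placeholder):

```shell
# Convert a DB instance to a Multi-AZ deployment so RDS maintains
# a synchronous standby in another AZ and fails over automatically.
aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --multi-az \
    --apply-immediately
```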

To achieve redundancy and eliminate potential single points of failure, the correct step is to create a second multi-AZ RDS instance in another Availability Zone and configure replication. This ensures the database layer is highly available and fault-tolerant, achieving the desired high availability for the application.
