
AWS Certified Solutions Architect - Professional SAP-C02 Amazon Practice Test Questions and Exam Dumps
A company is designing a hybrid DNS solution using Amazon Route 53 for the domain. The solution will support DNS resolution for resources stored within VPCs as well as on-premises systems. The company has the following requirements:
On-premises systems must be able to resolve and connect to the domain.
All VPCs must be able to resolve the domain.
The company already has an AWS Direct Connect connection between its on-premises network and AWS Transit Gateway.
Which architecture should the company use to meet these requirements with the HIGHEST performance?
A. Associate the private hosted zone with all the VPCs. Create a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for the domain that point to the inbound resolver.
B. Associate the private hosted zone with all the VPCs. Deploy an Amazon EC2 conditional forwarder in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for the domain that point to the conditional forwarder.
C. Associate the private hosted zone with the shared services VPC. Create a Route 53 outbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for the domain that point to the outbound resolver.
D. Associate the private hosted zone with the shared services VPC. Create a Route 53 inbound resolver in the shared services VPC. Attach the shared services VPC to the transit gateway and create forwarding rules in the on-premises DNS server for the domain that point to the inbound resolver.
Explanation:
In this scenario, the company needs to architect a hybrid DNS solution with high performance that supports both on-premises systems and resources inside AWS VPCs.
Option A provides the optimal solution by associating the private hosted zone with all the VPCs. This enables every VPC to resolve the domain.
A Route 53 inbound resolver is set up in the shared services VPC, allowing DNS queries to be forwarded from on-premises systems to AWS resources.
Forwarding rules are created in the on-premises DNS server to forward queries for the domain to the inbound resolver.
AWS Transit Gateway enables seamless communication between the VPCs and on-premises systems, ensuring that the DNS resolution works for all connected environments. This architecture provides a high-performance DNS resolution setup as it uses native AWS services for DNS forwarding and resolution.
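As a concrete illustration, here is a minimal boto3 sketch of creating the inbound Resolver endpoint in the shared services VPC. The subnet IDs, security group ID, and names are placeholders, not values from the question:

```python
import boto3

resolver = boto3.client("route53resolver")

endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="hybrid-dns-inbound-001",   # idempotency token
    Name="shared-services-inbound",
    Direction="INBOUND",                         # accepts queries from on premises
    SecurityGroupIds=["sg-0123456789abcdef0"],   # must allow DNS on TCP/UDP 53
    IpAddresses=[                                # one IP per AZ for redundancy
        {"SubnetId": "subnet-0aaa1111bbb22222c"},
        {"SubnetId": "subnet-0ddd3333eee44444f"},
    ],
)
# The on-premises DNS server forwards queries for the domain to the
# endpoint's IP addresses over the Direct Connect / transit gateway path.
print(endpoint["ResolverEndpoint"]["Id"])
```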
Option B replaces the managed inbound resolver with an EC2-based conditional forwarder, which adds operational overhead and a potential performance bottleneck. Option C uses an outbound resolver, which forwards queries from VPCs to on-premises servers rather than accepting queries from on premises. Option D associates the private hosted zone with only the shared services VPC, so the other VPCs cannot resolve the domain.
A company provides weather data through a REST-based API hosted by Amazon API Gateway. The API integrates with various AWS Lambda functions for different operations. The company uses Amazon Route 53 for DNS, with a record for the API's domain name, and stores data for the API in Amazon DynamoDB tables.
The company needs a solution that allows the API to fail over to a different AWS Region in case of failure. Which solution will meet these requirements?
A. Deploy a new set of Lambda functions in a new Region. Update the API Gateway API to use an edge-optimized API endpoint with Lambda functions from both Regions as targets. Convert the DynamoDB tables to global tables.
B. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.
C. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.
D. Deploy a new API Gateway API in a new Region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.
Explanation:
To achieve failover between AWS regions, the solution must ensure high availability and automatic DNS routing in the event of failure. Here’s the breakdown:
Option C is the optimal solution for failover:
Deploy a new API Gateway API and Lambda functions in a new Region to provide redundancy. Update the Route 53 DNS record to a failover record. A failover routing policy allows Route 53 to direct traffic to a secondary Region when the primary Region becomes unhealthy.
Enable target health monitoring so that Route 53 can detect if the primary Region's API is down and trigger the failover to the secondary Region. Convert the DynamoDB tables to global tables for cross-Region data synchronization, ensuring the data remains consistent across both Regions.
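For illustration, a minimal boto3 sketch of the failover routing setup described above; the hosted zone ID, domain name, and API Gateway hostnames are placeholders:

```python
import boto3

r53 = boto3.client("route53")

# Health check against the primary Region's API (hypothetical values).
hc = r53.create_health_check(
    CallerReference="weather-api-failover-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "api-primary.example.com",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# PRIMARY/SECONDARY failover records pointing at the two Regional APIs.
r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={"Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "CNAME",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "abc123.execute-api.us-east-1.amazonaws.com"}
                ],
                "HealthCheckId": hc["HealthCheck"]["Id"],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "CNAME",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "def456.execute-api.us-west-2.amazonaws.com"}
                ],
            },
        },
    ]},
)
```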
Option B involves a multivalue answer routing policy, which returns several healthy records at random instead of designating an explicit primary and secondary, so it doesn't provide the deterministic failover of the failover record in Option C.
Option A relies on an edge-optimized endpoint, which improves global latency but does not provide Regional failover for the API itself.
Option D introduces "global Lambda functions," which are not an AWS feature, so the approach is not viable.
Question 1: The correct answer is A, which provides the most efficient architecture for hybrid DNS resolution between on-premises systems and AWS VPCs.
Question 2: The correct answer is C, which ensures that the weather data API can fail over between regions using Route 53 failover routing and DynamoDB global tables for data consistency.
A company uses AWS Organizations with a single Organizational Unit (OU) called Production to manage multiple AWS accounts. All accounts in the organization are part of the Production OU, and the organization’s administrators manage restricted services through deny list Service Control Policies (SCPs) at the root level. Recently, the company acquired a new business unit and invited the business unit’s existing AWS account to join the organization.
Once the new business unit’s account was onboarded, the administrators discovered that they were unable to update existing AWS Config rules to meet the company’s policies, due to restrictions from the root SCPs.
Which option will allow administrators of the new business unit to make changes to AWS Config, while maintaining current policies without introducing additional long-term maintenance?
A. Remove the organization’s root SCPs that limit access to AWS Config. Create AWS Service Catalog products for the company’s standard AWS Config rules and deploy them throughout the organization, including the new account.
B. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the new account to the Production OU when adjustments to AWS Config are complete.
C. Convert the organization’s root SCPs from deny list SCPs to allow list SCPs to allow only required services. Temporarily apply an SCP to the organization’s root that allows AWS Config actions for principals in the new account.
D. Create a temporary OU named Onboarding for the new account. Apply an SCP to the Onboarding OU to allow AWS Config actions. Move the organization’s root SCP to the Production OU. Move the new account to the Production OU when adjustments to AWS Config are complete.
Explanation:
In AWS Organizations, Service Control Policies (SCPs) are used to set permission boundaries for accounts in an organization. In this case, the root SCPs are used to restrict access to certain services, including AWS Config. After onboarding a new business unit, administrators in the new account cannot update AWS Config rules due to these restrictions.
Option B allows for a temporary workaround without making permanent changes to the organization’s structure or SCPs. By creating a temporary Onboarding OU for the new account, the administrators can apply a specific SCP that temporarily allows access to AWS Config for that account. Once the necessary updates to the AWS Config rules are made, the new account can be moved to the Production OU. This option ensures that the existing deny list SCPs in the root of the organization remain intact for all other accounts, maintaining current security policies.
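For illustration only, a sketch of the Organizations API calls that create and attach such an SCP to the Onboarding OU; the policy content, names, and OU ID are assumptions. Note that SCPs bound permissions rather than grant them, so the root deny statements must not cover the Config actions for this to take effect:

```python
import json
import boto3

orgs = boto3.client("organizations")

# Hypothetical SCP permitting the AWS Config actions the new account needs.
onboarding_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "config:*", "Resource": "*"}
    ],
}

policy = orgs.create_policy(
    Name="OnboardingAllowConfig",
    Description="Temporary SCP for accounts adjusting AWS Config rules",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(onboarding_scp),
)

# Attach the SCP to the temporary Onboarding OU (placeholder OU ID),
# then move the account to the Production OU once the work is done.
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-ab12-34567890",
)
```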
A. Removing the root SCPs would create a significant change in the organization’s security posture, potentially allowing access to other restricted services, which is not advisable.
C. Changing the root SCPs to an allow list and temporarily allowing access for the new account would be a drastic and potentially risky approach that could open up other unintended services to the new account.
D. Moving the root SCP to the Production OU is unnecessary and introduces complexity. SCPs at the root level should not be modified frequently, as this could lead to management challenges.
A company runs a two-tier web application in its on-premises data center. The application consists of a stateful application server and a PostgreSQL database hosted on separate servers. The company expects a significant increase in the user base, so it is migrating both the application and database to AWS using Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing (ELB).
Which solution will ensure a consistent user experience and allow the application and database tiers to scale efficiently?
A. Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
B. Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.
C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.
D. Enable Aurora Auto Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.
Explanation:
To ensure that both the application tier and the database tier scale efficiently with the growing user base, several factors need to be considered:
Aurora Auto Scaling for Replicas: Aurora can scale read operations efficiently by using read replicas. Enabling Auto Scaling for Aurora Replicas ensures that additional read replicas are automatically added as the workload increases, providing better read scalability.
Application Load Balancer (ALB): The ALB is designed to handle HTTP and HTTPS traffic and provides round robin routing by default, which is ideal for distributing traffic evenly across multiple application servers. Additionally, sticky sessions can be enabled to ensure that user sessions are maintained, which is essential for stateful applications.
Scalability and Consistency: This combination ensures that both the application layer and database layer can scale horizontally, providing a consistent user experience even as the user base grows.
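A minimal boto3 sketch of the two pieces of configuration described above, assuming a cluster named my-aurora-cluster and a placeholder target group ARN:

```python
import boto3

aas = boto3.client("application-autoscaling")
elbv2 = boto3.client("elbv2")

# Register the Aurora cluster's replica count as a scalable target and
# track average reader CPU (cluster name and limits are placeholders).
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,
)

aas.put_scaling_policy(
    PolicyName="reader-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)

# Enable duration-based sticky sessions on the ALB target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app-tg/0123456789abcdef",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
```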
A. A Network Load Balancer (NLB) operates at Layer 4 and is not suited to HTTP-based traffic; while NLBs excel at high throughput and low latency, the ALB is better suited for web applications.
B. Aurora Auto Scaling adjusts the number of Aurora Replicas, not writers; write operations are handled by the single primary writer instance, and scaling reads through replicas is what matters here.
D. Combines the problems of options A and B: an NLB is inappropriate for managing web traffic and handling HTTP-based requests efficiently, and Aurora writers are not auto scaled.
A company uses a service to collect metadata from applications that it hosts on-premises. Consumer devices such as smart TVs and internet radios access these applications. Many of the older devices do not support certain HTTP headers and show errors when these headers are present in responses. The company has set up an on-premises load balancer to strip out the unsupported headers from responses sent to these older devices, based on the User-Agent headers that identify the devices.
The company wants to migrate this service to AWS, adopt serverless technologies, and retain the ability to support the older devices. The company has already migrated the applications into a set of AWS Lambda functions.
Which solution will allow the company to support older devices while maintaining the ability to process metadata and use serverless technologies?
A. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.
B. Create an Amazon API Gateway REST API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Modify the default gateway responses to remove the problematic headers based on the value of the User-Agent header.
C. Create an Amazon API Gateway HTTP API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Create a response mapping template to remove the problematic headers based on the value of the User-Agent. Associate the response data mapping with the HTTP API.
D. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header.
Explanation:
To solve this problem, the company needs to implement a solution that can handle metadata collection while supporting older devices by removing unsupported HTTP headers based on the User-Agent header. The solution should utilize serverless technologies, which are ideal for scaling and maintaining cost efficiency.
CloudFront Distribution: Amazon CloudFront can act as a Content Delivery Network (CDN) to efficiently distribute the metadata service globally, reducing latency for end-users.
Lambda@Edge: By using Lambda@Edge, the company can run code closer to the viewer, which allows them to modify HTTP responses on the edge. Specifically, it can remove the problematic HTTP headers based on the User-Agent header before they reach the older devices.
Application Load Balancer (ALB): The ALB will direct the requests to the appropriate Lambda functions for processing the metadata, based on the incoming request.
This solution is optimal because Lambda@Edge can process responses at the edge, allowing real-time modification of headers without introducing latency at the origin. It also integrates seamlessly with CloudFront and ALB for scalability and ease of management.
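A minimal sketch of what such a Lambda@Edge response handler could look like in Python; the legacy device markers and header names are assumptions for illustration, and CloudFront normalizes header keys to lowercase:

```python
# Hypothetical Lambda@Edge response handler (Python runtime).
LEGACY_AGENTS = ("SmartTV-2012", "InternetRadio")
PROBLEM_HEADERS = ("strict-transport-security", "x-frame-options")

def handler(event, context):
    record = event["Records"][0]["cf"]
    request = record["request"]
    response = record["response"]

    ua_values = request["headers"].get("user-agent", [])
    user_agent = ua_values[0]["value"] if ua_values else ""

    # Strip the unsupported headers only for responses going to old devices.
    if any(marker in user_agent for marker in LEGACY_AGENTS):
        for header in PROBLEM_HEADERS:
            response["headers"].pop(header, None)

    return response
```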
A. While CloudFront can be used, it is not necessary to configure it to forward requests to an ALB and remove headers through a CloudFront function. Lambda@Edge is more suited for handling headers directly at the edge.
B. API Gateway REST API is not the best solution for this use case since the company wants to handle the request at the edge before reaching the backend.
C. Using API Gateway HTTP API and response mapping templates is a valid option, but it does not provide as efficient handling of HTTP headers at the edge as Lambda@Edge does.
A company is currently running a traditional web application on Amazon EC2 instances. The company needs to refactor the application to a microservices architecture that runs on containers. The application will have separate versions for two distinct environments: production and testing. The load for the application is variable, but both the minimum and maximum loads are known. The company seeks to design the updated application with a serverless architecture that minimizes operational complexity.
Which solution will meet these requirements MOST cost-effectively?
A. Upload the container images to AWS Lambda as functions. Configure a concurrency limit for the associated Lambda functions to handle the expected peak load. Configure two separate Lambda integrations within Amazon API Gateway: one for production and one for testing.
B. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto-scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the ECS clusters.
C. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto-scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the EKS clusters.
D. Upload the container images to AWS Elastic Beanstalk. In Elastic Beanstalk, create separate environments and deployments for production and testing. Configure two separate Application Load Balancers to direct traffic to the Elastic Beanstalk deployments.
Explanation:
To design a serverless and cost-effective architecture for refactoring the application into microservices on containers, let’s examine each of the options.
AWS Lambda is a serverless compute service that can run code in response to events without provisioning or managing servers. However, Lambda functions are designed to execute individual pieces of code in response to events and are not ideal for running containerized microservices.
Uploading container images to Lambda is possible, but the approach is better suited for small, short-lived workloads. Additionally, Lambda has memory and timeout limits that may not be ideal for handling the varying load of a web application, especially if the load peaks significantly.
While Lambda is serverless, the overhead of managing concurrency limits, along with Lambda's inherent constraints, could complicate the design and incur higher costs for managing long-running, stateful microservices.
Amazon Elastic Container Service (ECS) with Fargate is an ideal choice for running containerized microservices in a serverless manner. ECS abstracts the underlying infrastructure management, while Fargate manages container scaling automatically.
Amazon ECR will store the container images, and ECS tasks can be deployed from there. ECS will handle auto-scaling based on demand, and you only pay for the resources consumed by the tasks.
Using two separate ECS clusters for production and testing environments ensures that the environments are isolated. Additionally, Application Load Balancers (ALB) will route traffic to the correct clusters, ensuring scalability and fault tolerance. This approach minimizes operational complexity while being cost-effective because of the auto-scaling nature of ECS and Fargate.
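A minimal boto3 sketch of one environment's Fargate service with Service Auto Scaling bounded by the known minimum and maximum loads; the cluster, task definition, network, capacity, and target group values are placeholders (repeat with different values for testing):

```python
import boto3

ecs = boto3.client("ecs")
aas = boto3.client("application-autoscaling")

ecs.create_service(
    cluster="production",
    serviceName="web-api",
    taskDefinition="web-api:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0aaa1111bbb22222c"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/prod-tg/0123456789abcdef",
        "containerName": "web-api",
        "containerPort": 80,
    }],
)

# Because the minimum and maximum loads are known, bound Service Auto
# Scaling between them.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/production/web-api",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,    # known minimum load
    MaxCapacity=10,   # known maximum load
)
```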
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service, which is a more complex and feature-rich solution compared to ECS. While EKS also supports Fargate, it requires more setup, configuration, and ongoing management.
Kubernetes is better suited for complex orchestration and requires expertise in managing clusters and deploying microservices. Given that the company is looking to minimize operational complexity, EKS would involve more operational overhead than ECS.
Elastic Beanstalk is a platform-as-a-service (PaaS) solution that automates application deployment but is better suited for traditional applications rather than containerized microservices. While it can handle Docker containers, it is not as tailored for containerized microservices as ECS with Fargate.
Elastic Beanstalk environments are more rigid in their management compared to ECS, and you may face difficulties when scaling out complex containerized applications.
Option B is the most cost-effective and efficient solution for refactoring the web application into a containerized, serverless architecture. By using Amazon ECS with Fargate, the company can leverage the containerized microservices architecture, ensuring auto-scaling, isolated environments for production and testing, and minimal operational complexity. This approach balances cost-effectiveness with the flexibility needed for managing variable loads while keeping the architecture serverless.
A company operates a multi-tier web application on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances are managed by an Auto Scaling group, and both the ALB and Auto Scaling group are replicated in a backup AWS Region. The minimum and maximum values for the Auto Scaling group are set to zero. The application’s data is stored in an Amazon RDS Multi-AZ DB instance, and a read replica of the DB instance exists in the backup Region. The application presents an endpoint to users via an Amazon Route 53 record.
The company needs to reduce its Recovery Time Objective (RTO) to less than 15 minutes by allowing the application to automatically fail over to the backup Region. The company’s budget is not large enough for an active-active strategy.
What should a solutions architect recommend to meet these requirements?
A. Reconfigure the application’s Route 53 record with a latency-based routing policy that balances traffic between the two ALBs. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Set up an Amazon CloudWatch alarm based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region, and configure the CloudWatch alarm to invoke the Lambda function.
B. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Set up a Route 53 health check that monitors the web application, and use an Amazon SNS notification to invoke the Lambda function when the health check status is unhealthy. Update the Route 53 record with a failover policy to route traffic to the ALB in the backup Region in case of a health check failure.
C. Configure the Auto Scaling group in the backup Region to match the primary region's settings. Reconfigure the Route 53 record with a latency-based routing policy to load balance traffic between the two ALBs. Remove the read replica and replace it with a standalone RDS instance. Enable Cross-Region Replication between the RDS DB instances using snapshots and Amazon S3.
D. Set up an AWS Global Accelerator endpoint with both ALBs as equal-weighted targets. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Set up a CloudWatch alarm based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region, and configure the CloudWatch alarm to invoke the Lambda function.
Explanation:
To meet the company's requirement of reducing the Recovery Time Objective (RTO) to less than 15 minutes and enabling automatic failover to the backup Region, the solution must be cost-effective while ensuring minimal operational overhead.
Option A: Reconfiguring the Route 53 record with latency-based routing balances traffic between the two ALBs, but latency-based routing is designed to optimize routing for performance, not to provide failover. The Lambda function in the backup Region that promotes the read replica and modifies the Auto Scaling group values is a valid mechanism, but a 5XX-count CloudWatch alarm is a less direct failure signal than a health check.
Option B: This approach provides a cost-effective solution that minimizes operational complexity. Using Route 53 health checks ensures automatic failover when the primary Region becomes unhealthy. The SNS notification triggers the Lambda function in the backup Region, which promotes the read replica and modifies the Auto Scaling group. This failover strategy is both quick (under 15 minutes) and reliable, since it triggers failover based on actual service health rather than on traffic patterns.
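A sketch of what the failover Lambda function in the backup Region might contain; the database and Auto Scaling group identifiers are placeholders:

```python
import boto3

# Hypothetical failover Lambda deployed in the backup Region, invoked via
# SNS when the Route 53 health check reports the application unhealthy.
rds = boto3.client("rds")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    # Promote the cross-Region read replica to a standalone, writable primary.
    rds.promote_read_replica(DBInstanceIdentifier="app-db-replica")

    # Scale the dormant Auto Scaling group up from zero so the backup
    # Region can serve traffic.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="app-asg-backup",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
    )
```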
Option C: This solution removes the read replica and replaces it with a standalone instance kept in sync through snapshots and Amazon S3. Although this keeps data in both Regions, it introduces more complexity, higher costs, and slower failover, making it less ideal than Option B.
Option D: AWS Global Accelerator is a powerful tool for routing traffic globally, but in this case it adds unnecessary complexity without improving failover speed. Equal-weighted targets also send traffic to both Regions at once, which resembles the active-active strategy the budget rules out.
Option B offers the simplest and most cost-effective solution for automatic failover by using Route 53 health checks and SNS notifications to trigger failover actions. This approach ensures that the company meets its RTO requirements of less than 15 minutes while minimizing operational complexity.
A company is hosting a critical application on a single Amazon EC2 instance. The application relies on the following infrastructure:
Amazon ElastiCache for Redis: A single-node cluster for in-memory data storage.
Amazon RDS for MariaDB: A relational database instance.
For the application to remain operational, all infrastructure components must remain healthy and in an active state. The solutions architect is tasked with improving the architecture to ensure that the infrastructure can automatically recover from failures with minimal downtime.
Which combination of steps should the solutions architect take to meet the requirements of improving infrastructure reliability and reducing downtime in case of failure? (Choose three.)
A. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances.
B. Use an Elastic Load Balancer to distribute traffic across multiple EC2 instances. Ensure that the EC2 instances are configured in unlimited mode.
C. Modify the DB instance to create a read replica in the same Availability Zone. Promote the read replica to be the primary DB instance in failure scenarios.
D. Modify the DB instance to create a Multi-AZ deployment that extends across two Availability Zones.
E. Create a replication group for the ElastiCache for Redis cluster. Configure the cluster to use an Auto Scaling group that has a minimum capacity of two instances.
F. Create a replication group for the ElastiCache for Redis cluster. Enable Multi-AZ on the cluster.
Explanation:
To improve the reliability and fault tolerance of the application while minimizing downtime, the solutions architect needs to address redundancy and failover for both the compute and data storage layers.
Option A: Using an Elastic Load Balancer (ELB) to distribute traffic across multiple EC2 instances is a key step in ensuring that the application remains highly available. If one instance fails, the traffic will be automatically redirected to healthy instances. Additionally, configuring the Auto Scaling group with a minimum capacity of two instances ensures that there will always be at least one healthy EC2 instance available to handle traffic, even in the case of instance failure.
Option D: Creating a Multi-AZ deployment for the RDS for MariaDB instance extends the database across two Availability Zones, providing automatic failover if one AZ becomes unavailable. This improves availability and ensures that the database can continue functioning during an AZ failure, significantly reducing the risk of downtime.
Option F: Configuring Multi-AZ replication for ElastiCache for Redis ensures that the Redis cluster is replicated to a secondary AZ. In the event of a failure in the primary node, the secondary node can take over, ensuring that the in-memory data store remains available. This setup provides both high availability and disaster recovery for the Redis cluster.
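A minimal boto3 sketch of the two data-tier changes (options D and F); the instance and replication group identifiers and node type are placeholders:

```python
import boto3

rds = boto3.client("rds")
elasticache = boto3.client("elasticache")

# Option D: convert the existing MariaDB instance to Multi-AZ
# (placeholder instance identifier).
rds.modify_db_instance(
    DBInstanceIdentifier="app-mariadb",
    MultiAZ=True,
    ApplyImmediately=True,
)

# Option F: replace the single-node Redis cluster with a Multi-AZ
# replication group.
elasticache.create_replication_group(
    ReplicationGroupId="app-redis",
    ReplicationGroupDescription="HA Redis for the application",
    Engine="redis",
    CacheNodeType="cache.m6g.large",
    NumCacheClusters=2,              # primary plus one replica
    AutomaticFailoverEnabled=True,   # prerequisite for Multi-AZ
    MultiAZEnabled=True,
)
```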
Option B: While using an Elastic Load Balancer is important, configuring EC2 instances in unlimited mode does not address failure recovery. Unlimited mode governs CPU credit bursting on burstable instance types; it does not improve fault tolerance.
Option C: Creating a read replica in the same Availability Zone does not provide sufficient failover capabilities. If the AZ goes down, the read replica will not be able to take over, so a Multi-AZ deployment for RDS is preferred.
Option E: While Auto Scaling for ElastiCache is a good approach for scaling, it does not inherently provide high availability across multiple AZs. Using Multi-AZ replication for ElastiCache provides true fault tolerance.
By combining these steps, the solution improves the availability and fault tolerance of the application, ensuring that critical services like EC2, RDS, and ElastiCache can recover automatically from failures with minimal downtime.
A retail company operates its e-commerce application on AWS. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB), and the company uses an Amazon RDS DB instance for database management. The application leverages Amazon CloudFront with one origin pointing to the ALB, caching static content. Amazon Route 53 manages the public DNS zones for the domain.
After updating the application, the ALB occasionally returns a 502 status code (Bad Gateway) error. The error is caused by malformed HTTP headers returned to the ALB, but the webpage loads correctly if it is reloaded immediately after the error occurs. While the company works on fixing the issue, the solutions architect needs to display a custom error page instead of the default ALB error page when the error occurs.
Which combination of steps will meet this requirement with the LEAST amount of operational overhead? (Choose two.)
A. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.
B. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Target.FailedHealthChecks is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
C. Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target if the health check fails. Modify DNS records to point to a publicly accessible webpage.
D. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
E. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.
Explanation:
To meet the requirement of providing a custom error page instead of the default ALB error page, the solution needs to handle the 502 errors while ensuring low operational overhead. Here’s why options A and E are the most suitable:
Option A: Creating an Amazon S3 bucket to host static webpages is a straightforward and cost-effective solution. The S3 bucket can serve as a static file server for custom error pages, and configuring the bucket for static website hosting will allow the application to display custom error pages.
Uploading the custom error pages to S3 will ensure that the error page is always available, even during application errors or downtime.
This option doesn't require complex automation or additional infrastructure, providing a low operational overhead.
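A minimal boto3 sketch of the S3 setup described above, with a hypothetical bucket name and error page key:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-custom-error-pages"   # hypothetical bucket name

# Enable static website hosting on the bucket and upload the error page.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "errors/502.html"},
    },
)

s3.upload_file(
    "502.html", bucket, "errors/502.html",
    ExtraArgs={"ContentType": "text/html"},
)
```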
Option E: By configuring CloudFront custom error pages, you can have CloudFront handle errors for the application. CloudFront can serve custom error pages directly from the S3 bucket when a 502 error occurs, thereby offloading the error handling from the ALB.
This solution makes use of CloudFront’s caching and error-handling capabilities, which makes it efficient and minimizes overhead compared to implementing additional monitoring, alarms, or Lambda functions.
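And a sketch of adding the custom error response to the existing CloudFront distribution; the distribution ID and page path are placeholders, and the path must resolve against an origin (such as the S3 bucket) attached to the distribution:

```python
import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "EDFDVBD6EXAMPLE"   # hypothetical distribution ID

# Fetch the current configuration, add a custom error response for 502,
# and push the update back with the required ETag.
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]

config["CustomErrorResponses"] = {
    "Quantity": 1,
    "Items": [{
        "ErrorCode": 502,
        "ResponsePagePath": "/errors/502.html",  # served from an attached origin
        "ResponseCode": "502",
        "ErrorCachingMinTTL": 10,
    }],
}

cloudfront.update_distribution(
    Id=dist_id,
    DistributionConfig=config,
    IfMatch=current["ETag"],
)
```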
Option B and Option D involve configuring CloudWatch alarms and AWS Lambda functions to modify ALB forwarding rules. This approach adds unnecessary complexity and operational overhead because you would need to constantly manage and monitor the Lambda function and alarms.
Option C involves modifying Route 53 records with health checks and fallback targets, which is more suitable for traffic routing based on health, not for customizing error pages.
The best approach with the least operational overhead is to use Amazon S3 to host custom error pages and configure CloudFront to serve those pages during errors. These solutions are both simple and effective, ensuring the application can handle error situations gracefully with minimal ongoing management.