AWS Certified Advanced Networking - Specialty ANS-C01 Amazon Practice Test Questions and Exam Dumps


Question No 1:

Which solution will meet these requirements?

A. Install the AWS Load Balancer Controller for Kubernetes. Using that controller, configure a Network Load Balancer with a TCP listener on port 443 to forward traffic to the IP addresses of the backend service Pods.
B. Install the AWS Load Balancer Controller for Kubernetes. Using that controller, configure an Application Load Balancer with an HTTPS listener on port 443 to forward traffic to the IP addresses of the backend service Pods.
C. Create a target group. Add the EKS managed node group's Auto Scaling group as a target. Create an Application Load Balancer with an HTTPS listener on port 443 to forward traffic to the target group.
D. Create a target group. Add the EKS managed node group’s Auto Scaling group as a target. Create a Network Load Balancer with a TLS listener on port 443 to forward traffic to the target group.

Correct Answer: A

Explanation:

To meet the requirements for mutual TLS (mTLS) encryption in transit, you need a solution that keeps traffic encrypted between the client and the backend service, without the load balancer decrypting it. Additionally, it needs to support mutual TLS for two-way authentication and work effectively with the scaling features of Amazon EKS, such as the Kubernetes Cluster Autoscaler and the Horizontal Pod Autoscaler.

Here’s a breakdown of the solutions:

A. Install the AWS Load Balancer Controller for Kubernetes. Using that controller, configure a Network Load Balancer with a TCP listener on port 443 to forward traffic to the IP addresses of the backend service Pods.
This solution is correct because the Network Load Balancer (NLB) supports TLS passthrough and TCP traffic. With this setup, the traffic is encrypted in transit and forwarded directly to the backend service Pods without being decrypted at the load balancer. The mutual TLS (mTLS) can be handled by the backend service itself, ensuring that it performs the authentication and encryption between the client and the backend. This is a suitable choice for gRPC traffic over port 443.

B. Install the AWS Load Balancer Controller for Kubernetes. Using that controller, configure an Application Load Balancer with an HTTPS listener on port 443 to forward traffic to the IP addresses of the backend service Pods.
This solution uses an Application Load Balancer (ALB), which is not the best fit for TLS passthrough. The ALB would terminate the TLS connection (decrypting the traffic), which contradicts the requirement of not decrypting traffic between the client and the backend. This is not appropriate for mTLS because the traffic would be decrypted at the load balancer.

C. Create a target group. Add the EKS managed node group's Auto Scaling group as a target. Create an Application Load Balancer with an HTTPS listener on port 443 to forward traffic to the target group.
Similar to option B, this solution uses an ALB, which terminates the TLS connection, violating the requirement for encryption in transit without decryption at the load balancer. The backend service would need to handle mTLS itself if you were to use this solution, but this setup does not align with the need for TLS passthrough.

D. Create a target group. Add the EKS managed node group’s Auto Scaling group as a target. Create a Network Load Balancer with a TLS listener on port 443 to forward traffic to the target group.
While the Network Load Balancer (NLB) with a TLS listener does support encrypted traffic, the configuration here is less ideal than option A because the NLB would terminate the TLS connection, which is not the intended behavior for this scenario. The correct approach would be to use TCP passthrough (as in A) to ensure that encryption is maintained.

In summary, the most suitable solution is A, as it allows traffic to remain encrypted with mTLS, forwarded directly to the backend Pods without decryption at the load balancer, and supports scaling with Amazon EKS.
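
As an illustration only, here is a minimal sketch of the kind of Kubernetes Service that option A describes, created with the official Kubernetes Python client. The annotation keys are the ones commonly used by the AWS Load Balancer Controller (verify them against the controller version you run), and the service name, namespace, labels, and ports are hypothetical placeholders.

```python
# Sketch: a Service that asks the AWS Load Balancer Controller to provision an
# internet-facing NLB with a TCP listener on 443 and Pod IP targets (TLS passthrough,
# so mTLS is negotiated and terminated on the backend Pods themselves).
from kubernetes import client, config

service_manifest = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "grpc-backend",          # hypothetical name
        "namespace": "default",
        "annotations": {
            # Annotation keys read by the AWS Load Balancer Controller.
            "service.beta.kubernetes.io/aws-load-balancer-type": "external",
            "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
            "service.beta.kubernetes.io/aws-load-balancer-scheme": "internet-facing",
        },
    },
    "spec": {
        "type": "LoadBalancer",
        "selector": {"app": "grpc-backend"},  # must match the backend Pods' labels
        "ports": [{"name": "tls", "port": 443, "targetPort": 8443, "protocol": "TCP"}],
    },
}

config.load_kube_config()  # or load_incluster_config() when running inside the cluster
client.CoreV1Api().create_namespaced_service(namespace="default", body=service_manifest)
```

Because the listener is plain TCP, the load balancer never decrypts the traffic; the Pods present their server certificates and validate client certificates, which is where the mutual authentication happens.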

Question No 2:

Which solution will meet these requirements?

A. Deploy an Application Load Balancer with an HTTPS listener. Use path-based routing rules to forward the traffic to the correct target group. Include the X-Forwarded-For request header with traffic to the targets.
B. Deploy an Application Load Balancer with an HTTPS listener for each domain. Use host-based routing rules to forward the traffic to the correct target group for each domain. Include the X-Forwarded-For request header with traffic to the targets.
C. Deploy a Network Load Balancer with a TLS listener. Use path-based routing rules to forward the traffic to the correct target group. Configure client IP address preservation for traffic to the targets.
D. Deploy a Network Load Balancer with a TLS listener for each domain. Use host-based routing rules to forward the traffic to the correct target group for each domain. Configure client IP address preservation for traffic to the targets.

Correct Answer: B

Explanation:

In this scenario, the company has specific requirements regarding HTTPS traffic, routing based on URL, and logging of the user’s IP address for security purposes. Let’s analyze the options to see which best fits the requirements:

A. Application Load Balancer with HTTPS listener (Path-based routing)

  • HTTPS listener: This option would indeed use HTTPS for secure communication, and TLS termination can be done at the load balancer. This matches the requirement that all traffic must use HTTPS, and TLS processing should be offloaded to the load balancer.

  • Path-based routing: This routing method forwards traffic to different target groups based on the URL path in the request. However, the scenario calls for routing requests for different domains to different target groups, which path-based rules alone do not address.

  • X-Forwarded-For header: This will allow the web server to see the original client IP address, as the load balancer includes the X-Forwarded-For header with the client's IP in each request. This is critical for accurate logging of the user's IP.

  • Conclusion: While path-based routing works, the requirement for routing based on domains makes this option less ideal. However, it's still a valid choice if domain-based routing is not a strict requirement.

B. Application Load Balancer with HTTPS listener (Host-based routing)

  • HTTPS listener: As before, the HTTPS listener and TLS offloading meet the requirement for HTTPS traffic and offloading of TLS processing.

  • Host-based routing: This allows routing based on the domain name (hostname) in the request, which directly addresses the requirement to route traffic based on the URL. This is a better solution than path-based routing when you need routing based on specific domains.

  • X-Forwarded-For header: The inclusion of the X-Forwarded-For header ensures that the web server receives the original client's IP address, allowing for accurate logging for security purposes.

  • Conclusion: This solution directly meets all the stated requirements: HTTPS traffic, domain-based routing, and IP address preservation.

C. Network Load Balancer with TLS listener (Path-based routing)

  • TLS listener: A Network Load Balancer (NLB) can handle TLS termination, but unlike the Application Load Balancer it does not inspect HTTP requests. It operates at Layer 4 only, so it cannot make routing decisions based on URL paths or hostnames.

  • Path-based routing: An NLB cannot perform path-based routing at all, because path-based rules require Layer 7 inspection of the request. In addition, the scenario calls for domain-based routing, which this option does not provide either.

  • Client IP address preservation: The NLB preserves the original client IP, so this part of the requirement is met.

  • Conclusion: While the NLB can preserve the client IP and is capable of TLS termination, it lacks Layer 7 routing (both path-based and host-based), making it unsuitable for this scenario.

D. Network Load Balancer with TLS listener (Host-based routing)

  • TLS listener: Again, the NLB can terminate TLS, but it lacks advanced Layer 7 capabilities, such as host-based routing.

  • Host-based routing: An NLB cannot route based on the domain name in the request. It handles only Layer 4 traffic and does not provide the HTTP-aware routing flexibility of an Application Load Balancer.

  • Client IP address preservation: This requirement is met by the NLB’s ability to preserve the client IP address.

  • Conclusion: This solution isn’t ideal because the NLB lacks Layer 7 functionality, including HTTP routing based on the hostname, URL path, or headers, which this use case requires.

Conclusion:

Option B is the best solution because it meets all the requirements: HTTPS traffic, TLS offloading, domain-based routing, and client IP address preservation for accurate logging.
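
For illustration, here is a minimal boto3 sketch of the host-based routing rule that option B relies on. The listener ARN, target group ARN, and domain name are placeholders; the ALB adds the X-Forwarded-For header to forwarded requests by default, so the web servers can log the original client IP address.

```python
# Sketch: add a host-based rule to an existing ALB HTTPS listener so that requests
# for one domain are forwarded to that domain's target group.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/example/123/456",  # placeholder
    Priority=10,
    Conditions=[
        {
            "Field": "host-header",
            "HostHeaderConfig": {"Values": ["app.example.com"]},  # route by domain name
        }
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/app-example/789",  # placeholder
        }
    ],
)
```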

Question No 3:

Which solution will meet these requirements?

A. Configure the ALB in a private subnet of the VPC. Attach an internet gateway without adding routes in the subnet route tables to point to the internet gateway. Configure the accelerator with endpoint groups that include the ALB endpoint. Configure the ALB’s security group to only allow inbound traffic from the internet on the ALB listener port.
B. Configure the ALB in a private subnet of the VPC. Configure the accelerator with endpoint groups that include the ALB endpoint. Configure the ALB's security group to only allow inbound traffic from the internet on the ALB listener port.
C. Configure the ALB in a public subnet of the VPC. Attach an internet gateway. Add routes in the subnet route tables to point to the internet gateway. Configure the accelerator with endpoint groups that include the ALB endpoint. Configure the ALB's security group to only allow inbound traffic from the accelerator's IP addresses on the ALB listener port.
D. Configure the ALB in a private subnet of the VPC. Attach an internet gateway. Add routes in the subnet route tables to point to the internet gateway. Configure the accelerator with endpoint groups that include the ALB endpoint. Configure the ALB's security group to only allow inbound traffic from the accelerator's IP addresses on the ALB listener port.

Correct Answer: D

Explanation:

The requirement is that the application should only be accessible through the AWS Global Accelerator and not through a direct connection over the internet to the ALB. The solution should prevent direct access to the ALB from the internet.

Key considerations:

  • ALB in a private subnet: The ALB should be in a private subnet to ensure it is not directly accessible over the public internet. Private subnets do not have direct access to the internet unless explicitly routed through a NAT gateway or other service.

  • Internet gateway: In the selected answer, an internet gateway is attached to the VPC and the subnet route tables include routes to it. The routing alone does not stop direct access to the ALB; what enforces access only through the accelerator is the security group restriction described below.

  • Global Accelerator: The AWS Global Accelerator is designed to direct traffic through the closest edge location to the end user, providing improved performance and availability. You need to configure the Global Accelerator to point to the ALB endpoint via endpoint groups.

  • Security Group for ALB: The security group attached to the ALB should be configured to only accept traffic from the IP addresses of the AWS Global Accelerator (i.e., only allow traffic from the accelerator and not from the internet directly). This guarantees that only traffic routed via the accelerator can reach the ALB.

Explanation of why other options are incorrect:

  • A. Although this option keeps the subnet route tables free of internet gateway routes, the ALB’s security group allows inbound traffic from the entire internet rather than only from the accelerator’s IP addresses, so it does not guarantee that the ALB is reachable solely through the accelerator.

  • B. While the ALB is in a private subnet, the solution does not involve the appropriate configuration for limiting inbound traffic to only the Global Accelerator IP addresses.

  • C. The ALB should be placed in a private subnet for security, not a public subnet. A public subnet would make the ALB accessible directly via the internet, which is against the requirements.

Thus, D is the correct answer because it satisfies all the conditions: keeping the ALB in a private subnet, preventing direct internet access via correct route table configuration, and configuring the security group to accept traffic only from the accelerator’s IP addresses.
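
As an illustration, here is a minimal boto3 sketch of the two pieces option D adds on top of the ALB: the accelerator endpoint group and the security group rule. All ARNs, IDs, and the accelerator source CIDR are placeholders; you would look up the accelerator’s actual IP ranges yourself before allowing them.

```python
# Sketch: register the ALB as a Global Accelerator endpoint, then restrict the
# ALB's security group so only traffic arriving via the accelerator is allowed.
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")  # the Global Accelerator API is homed in us-west-2
ec2 = boto3.client("ec2", region_name="us-east-1")

ga.create_endpoint_group(
    ListenerArn="arn:aws:globalaccelerator::111122223333:accelerator/abc/listener/def",  # placeholder
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:...:loadbalancer/app/example/123",  # ALB ARN (placeholder)
            "ClientIPPreservationEnabled": False,  # this choice affects which source addresses the ALB sees
        }
    ],
)

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # ALB security group (placeholder)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "198.51.100.0/24",  # placeholder for the accelerator's source range
                 "Description": "Allow AWS Global Accelerator only"}
            ],
        }
    ],
)
```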

Question No 4:

A global delivery company is modernizing its fleet management system. The company has several business units. Each business unit designs and maintains applications that are hosted in its own AWS account in separate application VPCs in the same AWS Region. Each business unit's applications are designed to get data from a central shared services VPC. The company wants the network connectivity architecture to provide granular security controls. The architecture also must be able to scale as more business units consume data from the central shared services VPC in the future. 

Which solution will meet these requirements in the MOST secure manner?

A. Create a central transit gateway. Create a VPC attachment to each application VPC. Provide full mesh connectivity between all the VPCs by using the transit gateway.
B. Create VPC peering connections between the central shared services VPC and each application VPC in each business unit's AWS account.
C. Create VPC endpoint services powered by AWS PrivateLink in the central shared services VPC. Create VPC endpoints in each application VPC.
D. Create a central transit VPC with a VPN appliance from AWS Marketplace. Create a VPN attachment from each VPC to the transit VPC. Provide full mesh connectivity among all the VPCs.

Correct Answer: A

Explanation:

To meet the requirements of granular security controls, scalability, and efficient management, option A, which uses a central transit gateway, is the most appropriate choice.

Here’s a breakdown of the solution options and why A is the best:

  1. A. Create a central transit gateway. Create a VPC attachment to each application VPC. Provide full mesh connectivity between all the VPCs by using the transit gateway:
    This solution provides a central, scalable, and secure architecture. AWS Transit Gateway simplifies network management by enabling full mesh connectivity between all the VPCs. It allows you to attach VPCs to a central hub, reducing the need for point-to-point VPC peering connections. The transit gateway also supports granular security controls, such as routing policies, and can scale easily as new VPCs are added. It is the most scalable and secure solution because it simplifies management and optimizes routing policies for complex network architectures.

  2. B. Create VPC peering connections between the central shared services VPC and each application VPC in each business unit's AWS account:
    While VPC peering is a viable option for direct VPC-to-VPC connectivity, it does not scale well. Each VPC would need a separate peering connection, and a full mesh of peering connections between every VPC becomes difficult to manage as the number of VPCs increases. This would not provide the same level of flexibility, security, or scalability as using a transit gateway.

  3. C. Create VPC endpoint services powered by AWS PrivateLink in the central shared services VPC. Create VPC endpoints in each application VPC:
    AWS PrivateLink allows private connectivity to services across VPCs, but it is primarily used for exposing services securely rather than providing full network connectivity between VPCs. While PrivateLink can provide secure access to the shared services, it may not be the most scalable solution when you need to manage multiple VPCs and scale the architecture over time.

  4. D. Create a central transit VPC with a VPN appliance from AWS Marketplace. Create a VPN attachment from each VPC to the transit VPC. Provide full mesh connectivity among all the VPCs:
    Using a VPN appliance adds complexity and overhead in terms of managing the VPN infrastructure. VPN appliances from AWS Marketplace can be used for certain use cases, but this solution is less efficient and scalable than a transit gateway, which is designed for VPC-to-VPC connectivity without the additional complexity of managing VPN appliances.

The AWS Transit Gateway solution (Option A) provides the most secure, scalable, and manageable architecture, aligning well with the company's need to grow and secure its network as more business units consume data from the central shared services VPC.
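
For illustration, here is a minimal boto3 sketch of the hub-and-spoke setup behind option A. IDs are placeholders, and in the multi-account scenario the transit gateway would also be shared with each business unit through AWS RAM before they attach their VPCs.

```python
# Sketch: create a central transit gateway with custom route tables (for granular,
# per-business-unit routing control) and attach a VPC to it.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

tgw = ec2.create_transit_gateway(
    Description="Central hub for shared services and application VPCs",
    Options={
        "DefaultRouteTableAssociation": "disable",  # attachments are associated with custom route tables
        "DefaultRouteTablePropagation": "disable",  # so each business unit only sees the routes it needs
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Repeated for the shared services VPC and each application VPC.
# (In practice, wait for the transit gateway to reach the "available" state first.)
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0abc123",                              # placeholder
    SubnetIds=["subnet-0aaa111", "subnet-0bbb222"],   # one subnet per Availability Zone in use
)
```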

Question No 5:

Which solution will meet these requirements?

A. Review the Amazon CloudWatch metrics for VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress to determine which VIF is sending the highest throughput during the period in which slowness is observed. Create a new 10 Gbps dedicated connection. Shift traffic from the existing dedicated connection to the new dedicated connection.
B. Review the Amazon CloudWatch metrics for VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress to determine which VIF is sending the highest throughput during the period in which slowness is observed. Upgrade the bandwidth of the existing dedicated connection to 10 Gbps.
C. Review the Amazon CloudWatch metrics for ConnectionBpsIngress and ConnectionPpsEgress to determine which VIF is sending the highest throughput during the period in which slowness is observed. Upgrade the existing dedicated connection to a 5 Gbps hosted connection.
D. Review the Amazon CloudWatch metrics for ConnectionBpsIngress and ConnectionPpsEgress to determine which VIF is sending the highest throughput during the period in which slowness is observed. Create a new 10 Gbps dedicated connection. Shift traffic from the existing dedicated connection to the new dedicated connection.

Correct Answer: B

Explanation:

In this scenario, the network engineer needs to identify which Virtual Interface (VIF) is sending the highest throughput during the periods of slowness and then implement a solution to resolve the problem of Direct Connect saturation. Let's evaluate the options:

Option A:

This option suggests reviewing VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress metrics in CloudWatch to identify which VIF is causing the slowness, which is the right starting point. However, it then creates a new 10 Gbps dedicated connection and shifts traffic to it. While shifting traffic to another connection is viable, it adds a second connection to procure and manage when increasing the capacity of the existing connection would resolve the saturation more directly.

Option B:

This option also suggests reviewing VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress metrics to identify the problematic VIF. This is a correct approach because CloudWatch metrics provide the necessary insight into traffic patterns across the VIFs, and identifying the high-throughput VIF allows for more targeted troubleshooting. The solution to upgrade the bandwidth of the existing connection to 10 Gbps addresses the issue directly. This approach is more efficient than creating a new connection because it resolves the problem of saturation by increasing the available bandwidth to meet demand.

Option C:

This option suggests reviewing ConnectionBpsIngress and ConnectionPpsEgress metrics. While these metrics are related to the overall connection throughput and packet processing, they are less specific to individual VIFs, which are tied to specific business units and their traffic. In this case, the VirtualInterfaceBps metrics are the correct choice because they reflect the throughput at the VIF level, which is directly related to the business units' traffic. Additionally, upgrading to a 5 Gbps hosted connection does not directly address the throughput issue when a 10 Gbps dedicated connection is likely required for full resolution.

Option D:

This option also suggests reviewing ConnectionBpsIngress and ConnectionPpsEgress metrics, which are less specific to identifying the VIF with high throughput. Creating a new 10 Gbps dedicated connection could be an additional solution, but it is not as effective as upgrading the existing connection to 10 Gbps to directly resolve the saturation issue. Shifting traffic from the current dedicated connection might introduce complexity without solving the bandwidth problem of the original connection.

Conclusion:

Option B is the most effective solution. By reviewing the correct CloudWatch metrics (VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress) and upgrading the existing connection to 10 Gbps, the company will address the throughput saturation issue directly without introducing unnecessary complexity.
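
As an illustration, here is a small boto3 sketch of pulling the per-VIF metrics mentioned above from the AWS/DX CloudWatch namespace. The connection and virtual interface IDs are placeholders; you would repeat the call for each VIF on the connection and compare the results against the window in which the slowness was observed.

```python
# Sketch: fetch VirtualInterfaceBpsEgress/Ingress for one VIF over the last hour.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

for metric_name in ("VirtualInterfaceBpsEgress", "VirtualInterfaceBpsIngress"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/DX",
        MetricName=metric_name,
        Dimensions=[
            {"Name": "ConnectionId", "Value": "dxcon-ffffffff"},        # placeholder
            {"Name": "VirtualInterfaceId", "Value": "dxvif-ffffffff"},  # placeholder, repeat per VIF
        ],
        StartTime=datetime.utcnow() - timedelta(hours=1),  # align with the period of slowness
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average", "Maximum"],
    )
    peak = max((dp["Maximum"] for dp in stats["Datapoints"]), default=0)
    print(f"{metric_name}: peak {peak:.0f} bps")
```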

Question No 6:

To address the requirements outlined in the scenario, where there is IP address overlap and customers cannot share their internal IP addresses or connect over the internet, the solution must focus on private, secure connectivity that never exposes traffic to the internet.

Let's analyze the options:

A. Deploy the SaaS service endpoint behind a Network Load Balancer.

  • Explanation: A Network Load Balancer (NLB) operates at Layer 4 (TCP/UDP) and is a suitable option for high-performance, low-latency, and network-level load balancing. It supports private IP addresses and can be used to direct traffic from private networks without exposing it to the internet.

  • Correct Option: This step fits because a Network Load Balancer is the load balancer type used in front of an AWS PrivateLink endpoint service, providing a private path between the customer and the SaaS provider that works even with overlapping IP addresses and avoids internet traffic.

B. Configure an endpoint service, and grant the customers permission to create a connection to the endpoint service.

  • Explanation: An endpoint service is a private connection method provided by AWS PrivateLink, which enables customers to securely connect to a SaaS provider’s service without traversing the public internet. The service is accessed over private IPs, which is ideal for environments with overlapping IP addresses.

  • Correct Option: This is a key part of the solution because it allows private communication between the customers and the provider without sharing internal IPs or exposing the traffic over the internet.

C. Deploy the SaaS service endpoint behind an Application Load Balancer.

  • Explanation: An Application Load Balancer (ALB) operates at Layer 7 (HTTP/HTTPS), which is typically used for web-based traffic. While ALBs support SSL/TLS termination and routing based on HTTP headers, they are not the best option when dealing with IP address overlap and do not provide the same level of private connectivity as other options like NLB or PrivateLink.

  • Incorrect Option: This does not meet the requirements; in this design the PrivateLink endpoint service is fronted by a Network Load Balancer, and an ALB on its own does not address the IP address overlap or the need for private connectivity.

D. Configure a VPC peering connection to the customer VPCs. Route traffic through NAT gateways.

  • Explanation: VPC peering allows private connectivity between two VPCs, but it cannot work with overlapping IP address ranges. Moreover, routing traffic through NAT gateways involves internet traffic, which is explicitly to be avoided in this scenario.

  • Incorrect Option: This solution does not resolve the issue of overlapping IPs and is not suitable because it involves internet traffic.

E. Deploy an AWS Transit Gateway, and connect the SaaS VPC to it. Share the transit gateway with the customers. Configure routing on the transit gateway.

  • Explanation: An AWS Transit Gateway is a highly scalable solution that connects multiple VPCs, even with overlapping IP address ranges. Transit Gateway supports private, secure connectivity, and customers can connect to the provider’s VPC through the shared Transit Gateway. It avoids internet traffic and resolves IP overlap issues.

  • Correct Option: This is an ideal solution for managing secure, private connections while handling overlapping IP address ranges.

Final Answer:

The correct combination of steps is:

  • A. Deploy the SaaS service endpoint behind a Network Load Balancer.

  • B. Configure an endpoint service, and grant the customers permission to create a connection to the endpoint service.

  • E. Deploy an AWS Transit Gateway, and connect the SaaS VPC to it. Share the transit gateway with the customers. Configure routing on the transit gateway.

These steps provide a secure, scalable, and private connectivity solution that addresses both the IP address overlap and the requirement to avoid internet traffic.
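
For illustration, here is a minimal boto3 sketch of the endpoint service piece (options A and B together): publishing the service behind the Network Load Balancer and granting a customer account permission to connect. The NLB ARN and customer account ID are placeholders.

```python
# Sketch: expose the SaaS service over AWS PrivateLink and allow one customer to connect.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:...:loadbalancer/net/saas-nlb/123"  # placeholder
    ],
    AcceptanceRequired=True,  # the provider approves each customer connection explicitly
)
service_id = svc["ServiceConfiguration"]["ServiceId"]

ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service_id,
    AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],  # placeholder customer account
)
```

Each customer then creates an interface endpoint in its own VPC that targets this service, so neither side ever needs a route to the other’s CIDR ranges, which is why overlapping addresses are not a problem for the PrivateLink path.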

Question No 7:

Which solution will meet these requirements?

A. Create an Amazon S3 bucket. Create an AWS Lambda function to load logs into the Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster. Enable Amazon Simple Notification Service (Amazon SNS) notifications on the S3 bucket to invoke the Lambda function. Configure flow logs for the firewall. Set the S3 bucket as the destination.
B. Create an Amazon Kinesis Data Firehose delivery stream that includes the Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster as the destination. Configure flow logs for the firewall. Set the Kinesis Data Firehose delivery stream as the destination for the Network Firewall flow logs.
C. Configure flow logs for the firewall. Set the Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster as the destination for the Network Firewall flow logs.
D. Create an Amazon Kinesis data stream that includes the Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster as the destination. Configure flow logs for the firewall. Set the Kinesis data stream as the destination for the Network Firewall flow logs.

Correct Answer: B

Explanation:

In this case, the network engineer needs to deliver Network Firewall flow logs to Amazon OpenSearch Service (formerly Amazon Elasticsearch Service) as quickly as possible. Let’s analyze the different options:

Option A:

This option suggests using an Amazon S3 bucket to store logs and then using an AWS Lambda function to load the logs into Amazon OpenSearch Service. While this approach could work, it introduces unnecessary complexity and delays due to the intermediate step of storing logs in S3 and invoking a Lambda function to move the logs into OpenSearch. This is not the fastest or most efficient solution for real-time log delivery.

Option B:

This option recommends using an Amazon Kinesis Data Firehose delivery stream as the destination for the Network Firewall flow logs. Kinesis Data Firehose can deliver logs to Amazon OpenSearch Service directly and in near real-time. It is an efficient and fully managed service designed for high-throughput log delivery, ensuring the shortest possible time for log delivery to OpenSearch. This is the ideal solution for meeting the requirements of the task.

Option C:

This option suggests directly setting the Amazon OpenSearch Service as the destination for the flow logs. However, as of the current AWS capabilities, Amazon OpenSearch Service cannot be set directly as the destination for AWS Network Firewall flow logs. Therefore, this solution is not feasible.

Option D:

This option involves using a Kinesis data stream instead of Kinesis Data Firehose. Network Firewall does not support Kinesis data streams as a log destination (the supported destinations are Amazon S3, CloudWatch Logs, and Kinesis Data Firehose), and even if it did, a data stream would require you to build and manage consumers to move the logs into Amazon OpenSearch Service. Kinesis Data Firehose is the streamlined, fully managed option for this use case.

Conclusion:

The best solution to meet the requirement of delivering the Network Firewall flow logs to Amazon OpenSearch Service in the shortest possible time is Option B, which uses Amazon Kinesis Data Firehose to stream the logs directly to OpenSearch with minimal delay and complexity.
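
Here is a minimal boto3 sketch of the final wiring in option B: pointing the firewall’s flow logs at a Kinesis Data Firehose delivery stream that already has the OpenSearch Service domain as its destination. The firewall ARN and stream name are placeholders, and the exact field names should be checked against the current Network Firewall API.

```python
# Sketch: send Network Firewall flow logs to an existing Firehose delivery stream.
import boto3

network_firewall = boto3.client("network-firewall", region_name="us-east-1")

network_firewall.update_logging_configuration(
    FirewallArn="arn:aws:network-firewall:us-east-1:111122223333:firewall/example",  # placeholder
    LoggingConfiguration={
        "LogDestinationConfigs": [
            {
                "LogType": "FLOW",
                "LogDestinationType": "KinesisDataFirehose",
                "LogDestination": {"deliveryStream": "flow-logs-to-opensearch"},  # placeholder stream name
            }
        ]
    },
)
```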
