Amazon AWS Certified Advanced Networking – Specialty ANS-C01 Exam Dumps and Practice Test Questions Set 5 Q81-100

Visit here for our full Amazon AWS Certified Advanced Networking – Specialty ANS-C01 exam dumps and practice test questions.

Question 81 

A company wants to deploy a hybrid cloud architecture with encrypted, low-latency connectivity to multiple AWS Regions. Which solution should they choose?

A) AWS Transit Gateway inter-region peering
B) Site-to-Site VPN over the public internet
C) VPC Peering
D) NAT Gateway

Answer: A)

Explanation: 

AWS Transit Gateway inter-region peering enables private, encrypted connectivity between Transit Gateways in different AWS Regions over AWS’s high-performance global backbone. It provides low-latency traffic flows between VPCs in different Regions without routing over the public internet. By using inter-region peering, organizations can simplify management, scale to hundreds of VPCs, and keep traffic private across Regions while ensuring encryption in transit. Routing between Regions is configured with static routes in the Transit Gateway route tables that point at the peering attachment, which is still far simpler than managing a mesh of VPN tunnels or individual peering connections.

Site-to-Site VPN over the public internet provides encrypted connectivity but is subject to variable latency, jitter, and throughput limitations. It is less reliable for hybrid applications requiring consistent low-latency communication.

VPC Peering provides direct one-to-one connectivity between VPCs, including across Regions, but each connection links exactly two VPCs and is non-transitive, so it must be created pair by pair. It does not scale well to hundreds of VPCs and offers no centralized, managed routing hub for multi-Region architectures.

NAT Gateway provides IPv4 translation for outbound traffic but does not provide inter-VPC connectivity or encryption between regions. It is irrelevant for hybrid multi-region architectures.

Therefore, AWS Transit Gateway inter-region peering is the correct solution for private, low-latency, encrypted multi-region connectivity.
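
As a minimal illustration, the boto3 sketch below creates and accepts an inter-Region peering attachment and adds a static route toward it; the Transit Gateway IDs, account number, Regions, and CIDR are placeholders rather than values from this scenario.

```python
import boto3

# Requester side: peer a TGW in us-east-1 with a TGW in eu-west-1.
ec2_use1 = boto3.client("ec2", region_name="us-east-1")

peering = ec2_use1.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    PeerTransitGatewayId="tgw-0fedcba9876543210",
    PeerAccountId="111122223333",
    PeerRegion="eu-west-1",
)
attachment_id = peering["TransitGatewayPeeringAttachment"]["TransitGatewayAttachmentId"]

# Accepter side: the peer Region/account accepts the attachment
# (in practice, wait for it to reach the pendingAcceptance state first).
ec2_euw1 = boto3.client("ec2", region_name="eu-west-1")
ec2_euw1.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=attachment_id
)

# Peering attachments use static routes: point the remote CIDR at the attachment.
ec2_use1.create_transit_gateway_route(
    DestinationCidrBlock="10.20.0.0/16",
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",
    TransitGatewayAttachmentId=attachment_id,
)
```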

Question 82 

A company wants to centralize DNS query logging and block access to malicious domains for all VPCs in multiple AWS accounts. Which service should they use?

A) Route 53 Resolver DNS Firewall with query logging
B) NAT Gateway with CloudWatch logs
C) Security groups with outbound rules
D) VPC Flow Logs

Answer: A)

Explanation: 

Route 53 Resolver DNS Firewall allows organizations to define domain-level filtering across VPCs and accounts. It supports firewall rule groups, which can block, allow, or redirect queries based on domain names. Query logging captures DNS activity centrally, sending logs to CloudWatch Logs or S3 for analysis. By associating firewall rule groups and query logging with multiple VPCs, administrators achieve centralized management, consistent security policies, and detailed visibility into DNS activity, including potential malicious requests.

NAT Gateway only provides IPv4 NAT services for outbound traffic. It does not filter traffic based on domains, and its logging (CloudWatch metrics plus connection metadata via VPC Flow Logs) does not capture DNS-level activity.

Security groups control L3/L4 access by IP addresses, protocols, and ports but cannot inspect or block based on DNS names. They cannot provide centralized DNS security.

VPC Flow Logs capture metadata about network traffic, including source/destination IPs and ports, but not DNS query contents or domain-level filtering. They are useful for network-level monitoring but insufficient for domain-based enforcement.

Therefore, Route 53 Resolver DNS Firewall with query logging is the correct solution for centralized DNS inspection and malicious domain blocking.
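
The boto3 sketch below shows one way such a setup might be assembled: a domain list, a BLOCK rule inside a rule group, and an association with one VPC. The domain names, IDs, and request tokens are placeholders; the rule group could then be shared to other accounts with AWS RAM, and query logging is configured separately (see Question 88).

```python
import boto3

r53r = boto3.client("route53resolver", region_name="us-east-1")

# Domain list containing the domains to block.
domain_list = r53r.create_firewall_domain_list(
    CreatorRequestId="blocklist-2024-01",
    Name="malicious-domains",
)
list_id = domain_list["FirewallDomainList"]["Id"]

r53r.update_firewall_domains(
    FirewallDomainListId=list_id,
    Operation="ADD",
    Domains=["bad-domain.example.", "*.malware.example."],
)

# Rule group with a BLOCK rule that returns NXDOMAIN for matching queries.
rule_group = r53r.create_firewall_rule_group(
    CreatorRequestId="rg-2024-01",
    Name="central-dns-blocklist",
)
group_id = rule_group["FirewallRuleGroup"]["Id"]

r53r.create_firewall_rule(
    CreatorRequestId="rule-2024-01",
    FirewallRuleGroupId=group_id,
    FirewallDomainListId=list_id,
    Priority=100,
    Action="BLOCK",
    BlockResponse="NXDOMAIN",
    Name="block-malicious",
)

# Associate the rule group with each VPC that should enforce the policy.
r53r.associate_firewall_rule_group(
    CreatorRequestId="assoc-2024-01",
    FirewallRuleGroupId=group_id,
    VpcId="vpc-0123456789abcdef0",
    Priority=101,
    Name="vpc-a-blocklist",
)
```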

Question 83 

A company wants to capture packet-level data for security and performance monitoring from EC2 instances across multiple VPCs. Which solution should they use?

A) VPC Traffic Mirroring
B) VPC Flow Logs
C) AWS CloudTrail
D) GuardDuty

Answer: A)

Explanation: 

VPC Traffic Mirroring is a highly specialized AWS feature designed to capture packet-level data directly from Elastic Network Interfaces (ENIs) of EC2 instances. It enables organizations to monitor, analyze, and secure network traffic with a level of granularity that metadata-based solutions cannot provide. By creating mirror sessions, administrators can replicate inbound and outbound traffic from one or more source ENIs to a target appliance, such as an intrusion detection system, network analyzer, or packet-capturing tool. This capability allows for deep packet inspection, detection of anomalies, forensic investigations, and performance monitoring at the application layer, providing visibility into not just connection details but the actual payload of network communications.

Traffic Mirroring is especially powerful in multi-VPC architectures because it supports centralized collection of traffic from multiple VPCs. Using features like Transit Gateway or inter-VPC routing, mirrored traffic from distributed EC2 instances can be aggregated into a single monitoring environment, facilitating holistic network analysis. The mirrored traffic can include both east-west traffic, which flows between VPCs or within a VPC, and north-south traffic, which flows to or from the internet. This comprehensive coverage is essential for organizations that require detailed insight into their cloud network, whether for performance optimization, compliance auditing, or threat detection.

Other AWS services do not provide this level of granularity. VPC Flow Logs capture metadata such as source and destination IP addresses, ports, protocols, and packet/byte counts, but they do not capture the content of packets, making them unsuitable for in-depth payload inspection or application-level analysis. AWS CloudTrail records API calls made to AWS services and is primarily used for governance, compliance, and auditing; it does not capture network traffic at all. Amazon GuardDuty performs threat detection by analyzing VPC Flow Logs, DNS logs, and CloudTrail events, providing alerts about suspicious behavior, but it does not offer raw packet-level data for custom monitoring or forensic analysis.

Because of its ability to deliver real-time, granular, and centralized visibility into network traffic across multiple VPCs, VPC Traffic Mirroring is the correct solution for organizations needing packet-level inspection, detailed threat analysis, or advanced performance monitoring of EC2 instances. It ensures operational and security teams have the full context of network communications for proactive and reactive measures.
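
A minimal boto3 sketch of the moving parts, assuming the monitoring appliances sit behind a Network Load Balancer; the ARN, ENI IDs, and rule values below are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Mirror target: a Network Load Balancer fronting the analysis appliances.
target = ec2.create_traffic_mirror_target(
    NetworkLoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/ids-fleet/0123456789abcdef"
    ),
    Description="Central IDS fleet",
)
target_id = target["TrafficMirrorTarget"]["TrafficMirrorTargetId"]

# Filter: accept all TCP traffic in both directions.
flt = ec2.create_traffic_mirror_filter(Description="Mirror all TCP")
filter_id = flt["TrafficMirrorFilter"]["TrafficMirrorFilterId"]
for direction in ("ingress", "egress"):
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=filter_id,
        TrafficDirection=direction,
        RuleNumber=100,
        RuleAction="accept",
        Protocol=6,  # TCP
        SourceCidrBlock="0.0.0.0/0",
        DestinationCidrBlock="0.0.0.0/0",
    )

# Session: replicate packets from a source instance ENI to the target.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0123456789abcdef0",
    TrafficMirrorTargetId=target_id,
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)
```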

Question 84 

A company needs to enforce encrypted service-to-service communication between VPCs in multiple AWS accounts without managing TLS certificates manually. Which AWS service should they use?

A) AWS VPC Lattice
B) VPC Peering
C) Transit Gateway
D) AWS PrivateLink

Answer: A)

Explanation: 

AWS VPC Lattice is a fully managed service designed to facilitate secure, service-to-service communication across multiple VPCs and AWS accounts while minimizing operational complexity. One of its key advantages is that it automatically encrypts all traffic at the transport layer, providing end-to-end protection between services without requiring administrators or developers to manually manage TLS certificates. This automatic encryption ensures compliance with security best practices while reducing the risk of misconfigurations that could expose sensitive data.

In addition to encryption, VPC Lattice offers centralized authentication and access control mechanisms. Organizations can define service-level policies that govern which services are allowed to communicate with each other across accounts and VPC boundaries. This policy enforcement, combined with automatic service discovery, simplifies the management of complex multi-account architectures, enabling teams to connect applications securely and consistently without worrying about network-level routing, firewall rules, or certificate rotation. These capabilities are particularly valuable in environments with multiple teams, development pipelines, and dynamically changing service endpoints.

Other connectivity options in AWS provide partial functionality but lack the fully managed encryption and authentication that VPC Lattice provides. VPC Peering establishes network-level connectivity between VPCs but does not enforce service-to-service encryption or centralized authentication; each application must implement and manage TLS independently. Transit Gateway centralizes routing across multiple VPCs but similarly does not provide automatic encryption or identity-based access controls between services, meaning that application teams must configure TLS and certificate management manually. AWS PrivateLink allows private access to specific services within or across VPCs, but it requires individual services to manage TLS certificates and does not offer automatic authentication between communicating services.

By leveraging VPC Lattice, organizations can achieve secure, cross-account service-to-service communication with minimal operational overhead. It ensures traffic is encrypted, authenticated, and discoverable without the manual processes traditionally associated with TLS certificate provisioning, distribution, or renewal. This makes it the ideal solution for modern multi-account architectures where security, compliance, and ease of management are priorities.
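
As a hedged sketch only, the boto3 calls below outline how a shared service network with IAM-based authentication might be assembled; the names, IDs, and security group are placeholders, cross-account sharing of the service network itself is done through AWS RAM, and the exact client and field names should be checked against current boto3 documentation for the vpc-lattice service.

```python
import boto3

lattice = boto3.client("vpc-lattice", region_name="us-east-1")

# Service network shared by the participating accounts; AWS_IAM requires
# signed, IAM-authorized requests between services.
network = lattice.create_service_network(
    name="shared-services",
    authType="AWS_IAM",
)

# A service owned by one account, published into the shared network.
service = lattice.create_service(name="orders-api", authType="AWS_IAM")

lattice.create_service_network_service_association(
    serviceNetworkIdentifier=network["id"],
    serviceIdentifier=service["id"],
)

# Consumer VPCs attach to the service network to reach the service privately.
lattice.create_service_network_vpc_association(
    serviceNetworkIdentifier=network["id"],
    vpcIdentifier="vpc-0123456789abcdef0",
    securityGroupIds=["sg-0123456789abcdef0"],
)
```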

Question 85 

A company wants to accelerate uploads of large files to Amazon S3 from multiple global locations. Which service provides optimized network paths?

A) S3 Transfer Acceleration
B) DataSync
C) Snowball Edge
D) CloudFront

Answer: A)

Explanation: 

Amazon S3 Transfer Acceleration is a service designed to significantly increase the speed of data transfers to S3 buckets, particularly for large files uploaded from geographically distant locations. The primary mechanism behind Transfer Acceleration is its use of AWS’s extensive global network of edge locations, which are part of the Amazon CloudFront content delivery network. When a client uploads a file to an S3 bucket using Transfer Acceleration, the data first travels over the AWS edge network to the nearest edge location. 

From there, it is routed through AWS’s private, highly optimized backbone network to the S3 bucket, bypassing the slower public internet paths. This approach reduces the number of network hops, avoids congestion points, and leverages the low-latency AWS backbone, thereby improving upload speed and consistency. Transfer Acceleration is especially beneficial for companies with users distributed across multiple continents, where traditional internet routes might introduce unpredictable latency, packet loss, or bandwidth throttling. 

It also does not require any complex setup like VPNs, Direct Connect, or custom routing configurations, making it operationally simple and cost-effective. In contrast, AWS DataSync is a service designed to automate large-scale data transfers between on-premises storage and AWS, or between AWS storage services, but it is optimized for bulk transfer rather than latency-sensitive, geographically dispersed uploads. Snowball Edge, a physical appliance, is intended for offline data migration or edge compute workloads; it requires shipping the device to AWS data centers, making it unsuitable for continuous real-time uploads. 

Amazon CloudFront is primarily a content delivery network for distributing content from S3 to end users with low latency, but it is optimized for downloads, not uploads. Transfer Acceleration also provides measurable improvements using a speed comparison tool, allowing organizations to evaluate benefits before adoption. 

Therefore, for global-scale, high-speed, low-latency uploads to S3, S3 Transfer Acceleration is the most suitable solution.
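
A short boto3 sketch of both halves, enabling acceleration on the bucket and then uploading through the accelerate endpoint; the bucket and file names are placeholders.

```python
import boto3
from botocore.config import Config

# One-time, bucket-owner action: enable Transfer Acceleration on the bucket.
s3 = boto3.client("s3", region_name="us-east-1")
s3.put_bucket_accelerate_configuration(
    Bucket="my-global-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients upload via the accelerate endpoint
# (<bucket>.s3-accelerate.amazonaws.com) by enabling the config flag below.
accel_s3 = boto3.client(
    "s3",
    config=Config(s3={"use_accelerate_endpoint": True}),
)
accel_s3.upload_file(
    "large-dataset.tar.gz",
    "my-global-uploads",
    "uploads/large-dataset.tar.gz",
)
```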

Question 86 

A company requires low-latency edge compute for 5G applications that need near real-time processing. Which service should they deploy?

A) AWS Wavelength Zones
B) Local Zones
C) Outposts
D) Snowball Edge

Answer: A)

Explanation: 

AWS Wavelength Zones are specifically designed to extend AWS compute and storage services to the edge of 5G networks, colocating resources in telecom provider data centers near base stations. This architecture dramatically reduces latency by minimizing the physical distance between end-user devices and the compute infrastructure. Low latency, typically in the single-digit millisecond range, is critical for applications requiring near real-time responses, such as augmented and virtual reality, live video streaming, autonomous vehicles, IoT sensors, and mobile gaming. 

Wavelength integrates seamlessly with standard AWS services like EC2, ECS, EKS, and VPC, allowing developers to deploy edge workloads while maintaining a consistent cloud operational model. Additionally, Wavelength supports the same IAM policies, networking, monitoring, and security features as traditional AWS services, enabling enterprises to maintain governance and compliance. Compared to Local Zones, which also extend AWS infrastructure closer to metropolitan areas, Wavelength provides even lower latency for mobile and 5G users because Local Zones are generally located in traditional data centers and not directly in telecom networks. 

AWS Outposts delivers AWS services on-premises for low-latency access within enterprise facilities, but it does not colocate resources with telecom providers, so the benefits for mobile 5G users are limited. Snowball Edge can provide compute at the edge for disconnected or remote locations, but it is a physical appliance and is not suitable for continuous, near-real-time processing for highly mobile workloads. 

Therefore, AWS Wavelength Zones are the ideal solution for low-latency edge computing in 5G applications, providing fast, scalable, and fully managed cloud capabilities directly adjacent to mobile users.
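
The boto3 sketch below illustrates the typical steps of extending a VPC into a Wavelength Zone: opting in to the zone group, creating a subnet in the zone, and attaching a carrier gateway. The zone and group names shown are examples only, and the VPC ID and CIDR are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Opt in to the Wavelength Zone group (example group name).
ec2.modify_availability_zone_group(
    GroupName="us-east-1-wl1",
    OptInStatus="opted-in",
)

# Extend the VPC with a subnet in the Wavelength Zone (example zone name).
ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.64.0/24",
    AvailabilityZone="us-east-1-wl1-bos-wlz-1",
)

# Carrier gateway provides ingress/egress to the carrier's 5G network,
# playing a role analogous to an internet gateway for the Wavelength subnet.
ec2.create_carrier_gateway(VpcId="vpc-0123456789abcdef0")
```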

Question 87 

A company wants to inspect encrypted traffic for threats without modifying client devices. Which AWS service supports this pattern at scale?

A) Gateway Load Balancer with inspection appliances
B) Classic Load Balancer with SSL termination
C) NAT Gateway
D) Security groups

Answer: A)

Explanation: 

AWS Gateway Load Balancer (GWLB) is a service that enables transparent deployment of third-party virtual appliances for traffic inspection at scale, including next-generation firewalls, intrusion detection/prevention systems, and deep packet inspection tools. It works by receiving traffic from VPCs or on-premises networks and seamlessly forwarding it to inspection appliances deployed in a highly available, autoscaling manner. A key feature of GWLB is its ability to handle encrypted traffic without requiring modifications to client devices. 

This is achieved through the integration of appliances that can terminate and inspect SSL/TLS traffic and then forward it securely to the destination, all while preserving source IP addresses and connection context. This centralized inspection model simplifies security management, reduces operational overhead, and supports enterprise-scale deployments with multiple VPCs, accounts, and regions. In contrast, Classic Load Balancer with SSL termination only decrypts HTTP/S traffic and is limited to Layer 7 inspection; it cannot inspect arbitrary protocols or scale efficiently across multiple VPCs. 

NAT Gateways provide network address translation for outbound traffic but do not inspect packets or perform threat detection. Security groups operate at Layer 3 and Layer 4 to allow or deny traffic based on IP addresses and ports, but they cannot inspect packet content or handle encrypted traffic. GWLB also integrates with AWS Transit Gateway and VPC routing, enabling inspection of inter-VPC, intra-VPC, and hybrid traffic with minimal configuration. 

The combination of transparent routing, scalability, and appliance integration makes GWLB the optimal solution for inspecting encrypted traffic at scale without client-side modifications, ensuring both security and operational simplicity.
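
For orientation, the boto3 sketch below creates the inspection-VPC side of this pattern: a Gateway Load Balancer, a GENEVE target group for the appliances, and an endpoint service that spoke VPCs can consume. All names, subnets, and IDs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway Load Balancer in the inspection VPC.
glb = elbv2.create_load_balancer(
    Name="inspection-gwlb",
    Type="gateway",
    Subnets=["subnet-0aaa1111bbbb2222c"],
)
glb_arn = glb["LoadBalancers"][0]["LoadBalancerArn"]

# Appliances register into a GENEVE target group on port 6081.
tg = elbv2.create_target_group(
    Name="inspection-appliances",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0inspection0000000",
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="80",
)
elbv2.create_listener(
    LoadBalancerArn=glb_arn,
    DefaultActions=[
        {"Type": "forward", "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}
    ],
)

# Publish the GWLB as an endpoint service so spoke VPCs can create
# Gateway Load Balancer endpoints that send traffic to the appliances.
ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[glb_arn],
    AcceptanceRequired=False,
)
```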

Question 88 

A company wants centralized logging of DNS queries across multiple VPCs and accounts. Which solution meets this requirement?

A) Route 53 Resolver query logging
B) CloudTrail
C) VPC Flow Logs
D) Security groups

Answer: A)

Explanation: 

Route 53 Resolver query logging provides a centralized mechanism to capture and store all DNS queries originating within Amazon VPCs. This service is particularly valuable for organizations that need to monitor, audit, and secure DNS activity across multiple VPCs and AWS accounts. Query logs include metadata such as the queried domain, response, timestamp, source IP, and VPC information, which can be sent to Amazon S3, CloudWatch Logs, or Kinesis Data Firehose for storage, analytics, and real-time processing. 

By aggregating DNS activity from multiple VPCs and accounts, enterprises gain visibility into potential security threats, misconfigurations, or anomalous behavior, enabling proactive detection of malware, exfiltration attempts, or unauthorized domain access. Unlike AWS CloudTrail, which captures API activity and changes to resources, Route 53 Resolver query logging focuses specifically on DNS queries, which are critical for network security monitoring and threat intelligence. VPC Flow Logs, while useful for capturing network-level metadata like IP addresses, ports, and traffic volume, do not capture domain names or DNS request/response details, making them unsuitable for detailed DNS analysis. 

Security groups operate at Layer 3 and Layer 4 and enforce access control but do not log or inspect DNS queries. Resolver query logging can also integrate with DNS Firewall for policy enforcement, providing a dual benefit of monitoring and protective controls. This centralized, scalable, and flexible logging solution is ideal for multi-account AWS environments that require comprehensive DNS visibility. 

Therefore, Route 53 Resolver query logging is the recommended solution for centralized logging of DNS queries across multiple VPCs and accounts, supporting both security monitoring and compliance objectives.
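
A minimal boto3 sketch, assuming logs are sent to a central CloudWatch Logs group; the log-group ARN and VPC ID are placeholders, and sharing the configuration to other accounts would be done through AWS RAM.

```python
import boto3

r53r = boto3.client("route53resolver", region_name="us-east-1")

# Central query log configuration pointing at a CloudWatch Logs group.
config = r53r.create_resolver_query_log_config(
    Name="org-dns-query-logs",
    DestinationArn="arn:aws:logs:us-east-1:111122223333:log-group:/dns/resolver-queries",
    CreatorRequestId="qlog-2024-01",
)
config_id = config["ResolverQueryLogConfig"]["Id"]

# Associate the configuration with each VPC whose queries should be captured.
r53r.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=config_id,
    ResourceId="vpc-0123456789abcdef0",
)
```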

Question 89 

A company wants to implement zero-trust connectivity between application services across multiple AWS accounts. Which AWS service should they use?

A) AWS VPC Lattice
B) VPC Peering
C) Transit Gateway
D) PrivateLink

Answer: A)

Explanation: 

AWS VPC Lattice is a fully managed service that enables secure, service-to-service communication across multiple VPCs and AWS accounts, supporting the principles of zero-trust networking. In a zero-trust model, services do not inherently trust each other based on network location; instead, each service must authenticate, authorize, and encrypt traffic individually. VPC Lattice abstracts the network complexity and enforces these security controls at the service layer rather than relying on network-level trust. It provides built-in service discovery, traffic encryption, and granular access policies that define which services can communicate, effectively implementing authentication and authorization by default. 

This reduces administrative overhead while improving security posture. Traditional VPC Peering establishes full network connectivity between VPCs but relies on implicit network trust, without enforcing service-level access policies or authentication, making it incompatible with strict zero-trust requirements. 

Transit Gateway simplifies inter-VPC routing at scale but does not provide authentication, authorization, or encryption controls for individual services, leaving security enforcement at the network layer. AWS PrivateLink enables private connectivity to services in other VPCs or accounts but does not natively provide zero-trust policy enforcement or service discovery across multiple services. VPC Lattice simplifies cross-account service connectivity while maintaining security, scalability, and observability. It integrates with AWS IAM and monitoring tools for auditing and compliance. Organizations adopting VPC Lattice can enforce zero-trust policies consistently, ensuring that only authorized services communicate, and all traffic is encrypted in transit. 

Therefore, AWS VPC Lattice is the most suitable service for implementing zero-trust connectivity across multiple AWS accounts, combining security, scalability, and operational simplicity.
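
As an illustrative sketch only, an auth policy like the one below could be attached to a Lattice service (with authType set to AWS_IAM) so that only a specific IAM principal may invoke it; the ARNs and service identifier are placeholders, and the boto3 parameter names should be verified against current documentation.

```python
import json
import boto3

lattice = boto3.client("vpc-lattice", region_name="us-east-1")

# Only the named consumer-account role may invoke this Lattice service.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222233334444:role/checkout-service"},
            "Action": "vpc-lattice-svcs:Invoke",
            "Resource": "*",
        }
    ],
}

lattice.put_auth_policy(
    resourceIdentifier="svc-0123456789abcdef0",  # Lattice service ID or ARN
    policy=json.dumps(policy),
)
```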

Question 90 

A company wants to control outbound traffic by domain names from multiple VPCs while maintaining hybrid connectivity. Which AWS service supports this?

A) Route 53 Resolver DNS Firewall
B) NAT Gateway
C) Internet Gateway
D) Network ACLs

Answer: A)

Explanation: 

AWS Route 53 Resolver DNS Firewall enables centralized control of outbound DNS queries at the domain level across multiple VPCs. It allows organizations to define rule groups that explicitly allow or block queries for specific domains, enabling granular security and policy enforcement. Resolver endpoints can extend this functionality to on-premises networks, supporting hybrid cloud scenarios where DNS queries traverse both AWS and local environments. 

The firewall provides visibility and auditing through logging, which can be directed to CloudWatch Logs, S3, or Kinesis Data Firehose for monitoring, analytics, and compliance reporting. By associating firewall rule groups with multiple VPCs, organizations can enforce consistent outbound policies without managing distributed firewall configurations or endpoint-specific rules. This centralized model simplifies administration and ensures uniform security standards across accounts and regions. NAT Gateways, Internet Gateways, and Network ACLs operate at the network level, controlling IP and port access rather than domain-level DNS queries. 

NAT Gateways perform IP translation, Internet Gateways provide public internet access, and Network ACLs filter traffic at Layer 3/4, none of which can inspect or enforce rules based on domain names. Route 53 Resolver DNS Firewall is therefore uniquely positioned to manage outbound domain filtering while supporting hybrid connectivity, centralized management, and detailed auditing. Organizations can prevent access to malicious domains, enforce corporate policies, and ensure regulatory compliance for DNS traffic. The service scales seamlessly with the number of VPCs and accounts, providing a robust and secure method to control DNS-based access at scale. 

Consequently, Route 53 Resolver DNS Firewall is the correct solution for domain-based outbound traffic control across multiple VPCs, combining policy enforcement, hybrid support, and operational efficiency.
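
For the hybrid piece specifically, a Resolver inbound endpoint gives on-premises DNS servers VPC-internal IP addresses to forward queries to, so the firewall rule groups described above can apply to that traffic as well; the sketch below uses placeholder subnet and security group IDs.

```python
import boto3

r53r = boto3.client("route53resolver", region_name="us-east-1")

# Inbound endpoint with an IP in two subnets for availability; on-premises
# DNS servers forward queries to these addresses.
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="inbound-2024-01",
    Name="hybrid-inbound",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    Direction="INBOUND",
    IpAddresses=[
        {"SubnetId": "subnet-0aaa1111bbbb2222c"},
        {"SubnetId": "subnet-0ddd3333eeee4444f"},
    ],
)
print(endpoint["ResolverEndpoint"]["Id"])
```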

Question 91 

A company wants to securely connect multiple VPCs in different AWS accounts while allowing overlapping CIDR ranges. Which AWS service should they use?

A) AWS PrivateLink
B) VPC Peering
C) Transit Gateway
D) Direct Connect gateway

Answer: A)

Explanation: 

AWS PrivateLink enables private service-to-service connectivity across VPCs, accounts, and regions without requiring non-overlapping IP address ranges. It leverages interface endpoints and the AWS private network to ensure traffic never traverses the public internet. Because PrivateLink operates at the service level rather than the network level, overlapping CIDR blocks do not pose routing conflicts. Access policies and endpoint policies allow granular control over which services or accounts can access a given endpoint, providing both security and flexibility for multi-account deployments.

VPC Peering requires non-overlapping CIDR ranges. Overlapping IP addresses in peered VPCs create routing conflicts and make this option unsuitable for networks with overlapping address spaces.

Transit Gateway simplifies inter-VPC connectivity across accounts but also requires non-overlapping CIDR ranges to correctly route traffic between VPCs. While Transit Gateway supports multi-account management, it cannot resolve overlapping IP conflicts without additional NAT solutions.

Direct Connect gateway provides hybrid connectivity from on-premises networks to multiple VPCs but does not solve overlapping CIDR issues between VPCs in AWS. It primarily handles on-premises-to-cloud connectivity and cannot facilitate service-level cross-account access.

Thus, AWS PrivateLink is the correct choice for securely connecting multiple VPCs with overlapping CIDR ranges and enabling private, cross-account communication.
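
A boto3 sketch of both sides of a PrivateLink connection, provider and consumer; each client would run with that account's credentials, and all ARNs, principals, and IDs are placeholders.

```python
import boto3

# Provider account: expose a service fronted by a Network Load Balancer.
provider_ec2 = boto3.client("ec2", region_name="us-east-1")
svc = provider_ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/orders/0123456789abcdef"
    ],
    AcceptanceRequired=True,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Allow a specific consumer account to connect to the endpoint service.
provider_ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=svc["ServiceConfiguration"]["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::222233334444:root"],
)

# Consumer account: create an interface endpoint in its own (possibly
# overlapping) address space; traffic flows through the endpoint ENIs, so
# the provider's CIDR never appears in the consumer's route tables.
consumer_ec2 = boto3.client("ec2", region_name="us-east-1")
consumer_ec2.create_vpc_endpoint(
    VpcId="vpc-0consumer00000000",
    ServiceName=service_name,
    VpcEndpointType="Interface",
    SubnetIds=["subnet-0consumer0000000a"],
    SecurityGroupIds=["sg-0consumer00000000"],
)
```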

Question 92

A company wants to route users to the AWS Region with the lowest latency while ensuring automatic failover for unhealthy endpoints. Which Route 53 routing policy should they use?

A) Latency-based routing with health checks
B) Weighted routing
C) Geolocation routing
D) Simple routing

Answer: A)

Explanation: 

Latency-based routing directs users to the AWS Region that provides the lowest network latency from the user’s location. When combined with health checks, Route 53 can automatically remove unhealthy endpoints from DNS responses, ensuring failover and high availability. This is ideal for global applications requiring optimal performance and resiliency. Latency-based routing leverages AWS’s global network infrastructure and continuously monitors latency metrics, dynamically adjusting DNS responses to maintain minimal response times for end users.

Weighted routing allows distribution of traffic based on predefined percentages, which is useful for testing or gradual rollouts but does not consider latency or automatically route users to the fastest endpoint.

Geolocation routing routes users based on their geographic location rather than latency. While it can direct traffic to specific regions, it does not guarantee the lowest latency path and is less responsive to changing network conditions.

Simple routing is a basic DNS approach that returns a single IP address without considering latency, health, or load balancing. It cannot provide failover or latency optimization.

Therefore, latency-based routing with health checks is the correct choice for ensuring users are routed to the lowest-latency, healthy endpoints.
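
The boto3 sketch below creates a health check and a latency record for one Region; a second record with its own SetIdentifier, Region, and health check would be added the same way. The hosted zone ID, domain names, and IP address are placeholders.

```python
import boto3

r53 = boto3.client("route53")

# Health check for the us-east-1 endpoint.
hc = r53.create_health_check(
    CallerReference="use1-hc-2024-01",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "use1.app.example.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Latency-based record for us-east-1, tied to the health check above.
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGH",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1",
                    "Region": "us-east-1",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": hc["HealthCheck"]["Id"],
                },
            }
        ]
    },
)
```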

Question 93 

A company wants to enforce centralized inspection of east-west traffic between hundreds of VPCs. Which architecture is most appropriate?

A) Transit Gateway with appliance mode and Gateway Load Balancer
B) Multiple VPC peering connections
C) Internet Gateway with security groups
D) Direct Connect with VPN

Answer: A)

Explanation:

Transit Gateway (TGW) in appliance mode allows centralized routing of traffic across multiple VPCs while keeping both directions of a flow in the same Availability Zone, so stateful inspection appliances see symmetric traffic and do not drop return packets. The Gateway Load Balancer (GWLB) distributes traffic across multiple firewall or intrusion detection appliances in a scalable and highly available manner. This architecture supports east-west traffic inspection between hundreds of VPCs, ensuring consistent security policy enforcement without manually configuring hundreds of peering connections. TGW with appliance mode and GWLB simplifies management, supports scaling, and integrates with monitoring tools for centralized security operations.

Multiple VPC peering connections can connect VPCs but are not scalable for hundreds of VPCs. Peering lacks centralized routing and cannot enforce traffic inspection at a single point.

Internet Gateways with security groups provide basic north-south filtering but cannot inspect east-west VPC-to-VPC traffic centrally. Security groups operate at L3/L4 only and are managed at the instance level, making centralized inspection impractical.

Direct Connect with VPN focuses on hybrid connectivity between on-premises networks and AWS, not inter-VPC inspection. It does not centralize inspection or provide traffic distribution to multiple appliances.

Thus, Transit Gateway with appliance mode and Gateway Load Balancer is the correct solution for centralized inspection of east-west VPC traffic at scale.
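
Two of the key API calls in this design are shown below as a sketch: enabling appliance mode on the inspection VPC's attachment and pointing a spoke route table at it. The attachment and route table IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable appliance mode on the inspection VPC attachment so both directions
# of a flow stay in the same Availability Zone (and the same appliance).
ec2.modify_transit_gateway_vpc_attachment(
    TransitGatewayAttachmentId="tgw-attach-0inspection000000",
    Options={"ApplianceModeSupport": "enable"},
)

# Spoke TGW route table: send inter-VPC traffic to the inspection attachment;
# the inspection VPC's route tables then forward it to the GWLB endpoint.
ec2.create_transit_gateway_route(
    TransitGatewayRouteTableId="tgw-rtb-0spokes0000000000",
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayAttachmentId="tgw-attach-0inspection000000",
)
```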

Question 94 

A company wants to inspect encrypted traffic for threats using a scalable solution without modifying client devices. Which AWS service combination should they use?

A) Gateway Load Balancer with inspection appliances
B) Classic Load Balancer with SSL termination
C) NAT Gateway
D) Security groups

Answer: A)

Explanation: 

AWS Gateway Load Balancer (GWLB) is a fully managed service designed to route traffic through inspection appliances such as firewalls or intrusion detection systems without requiring changes to client devices. When integrated with Transit Gateway or VPC routing, GWLB allows both north-south and east-west traffic to be inspected transparently. This is especially useful for encrypted traffic, as appliances connected to GWLB can terminate, inspect, and re-encrypt traffic automatically. This eliminates the operational complexity of manually configuring SSL/TLS certificates on client devices or applications.

GWLB provides high availability and scalability, distributing traffic across multiple appliances. The autoscaling capabilities ensure that inspection throughput can grow dynamically as network load increases. Enterprises benefit from centralized security management, simplified deployment, and minimal maintenance overhead. This approach is ideal for organizations with complex network architectures, including multi-VPC, multi-account setups, or hybrid cloud environments where centralized threat detection is critical.

Other options such as Classic Load Balancer with SSL termination are limited to decrypting HTTP/S traffic and cannot inspect arbitrary protocols or traffic flows at scale. NAT Gateways provide only outbound IPv4 translation and do not perform packet inspection. Security groups operate at Layer 3 and Layer 4, allowing or denying traffic based on IPs and ports but cannot inspect payloads or encrypted flows.

By combining GWLB with inspection appliances, organizations achieve a fully managed, scalable solution for inspecting encrypted traffic across AWS environments. It ensures consistent security enforcement while reducing operational complexity and maintaining minimal impact on client devices, making it the optimal choice for enterprise-scale deployments.
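
On the spoke side, inspection is wired in by creating a Gateway Load Balancer endpoint for the published endpoint service and routing traffic to it, as in the placeholder-based sketch below.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway Load Balancer endpoint for the inspection endpoint service
# (the service name, VPC, and subnet are placeholders).
gwlbe = ec2.create_vpc_endpoint(
    VpcId="vpc-0spoke00000000000",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    VpcEndpointType="GatewayLoadBalancer",
    SubnetIds=["subnet-0spokeinspect0000"],
)
endpoint_id = gwlbe["VpcEndpoint"]["VpcEndpointId"]

# Edge/ingress route table: steer traffic destined to the workload subnet
# through the GWLB endpoint so it is inspected before delivery.
ec2.create_route(
    RouteTableId="rtb-0spokeedge0000000",
    DestinationCidrBlock="10.1.10.0/24",
    VpcEndpointId=endpoint_id,
)
```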

Question 95

A company wants to analyze hybrid network performance, including on-premises to AWS connections, and visualize metrics in a global network view. Which AWS service should they use?

A) Transit Gateway Network Manager with CloudWatch
B) VPC Flow Logs
C) GuardDuty
D) AWS Config

Answer: A)

Explanation: 

Transit Gateway Network Manager is an AWS service that provides centralized visibility into global network performance. It allows enterprises to monitor and visualize traffic across multiple AWS regions, VPCs, and on-premises networks connected via Direct Connect or VPN. Network Manager collects critical performance metrics such as latency, jitter, packet loss, and throughput, providing a comprehensive view of the health and performance of hybrid networks.

By integrating with Amazon CloudWatch, organizations can collect, analyze, and visualize real-time data, enabling proactive troubleshooting and capacity planning. Dashboards display connectivity between VPCs, regions, and on-premises sites, simplifying operational oversight and compliance monitoring. Network Manager also allows organizations to set alerts for performance thresholds, helping identify degraded network paths or misconfigurations before they impact applications.

Alternative solutions have limitations. VPC Flow Logs capture metadata such as source/destination IPs, ports, and protocols but do not provide performance metrics or end-to-end visibility. GuardDuty focuses on threat detection and anomaly detection rather than performance monitoring. AWS Config audits resource configurations but does not measure network latency, jitter, or throughput.

Thus, Transit Gateway Network Manager combined with CloudWatch delivers a scalable, centralized solution for analyzing and visualizing hybrid network performance. It provides enterprises with actionable insights, operational control, and the ability to optimize connectivity between AWS and on-premises resources.
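
A brief boto3 sketch of the registration flow: create a global network, register a Transit Gateway, and optionally model an on-premises site for the topology view. The ARN, location details, and descriptions are placeholders; Network Manager's API is served from its us-west-2 home Region.

```python
import boto3

nm = boto3.client("networkmanager", region_name="us-west-2")

global_network = nm.create_global_network(
    Description="Hybrid network: AWS Regions plus on-premises sites"
)
gn_id = global_network["GlobalNetwork"]["GlobalNetworkId"]

# Register each Transit Gateway; its VPN and Direct Connect gateway
# attachments then appear in the topology and CloudWatch metrics.
nm.register_transit_gateway(
    GlobalNetworkId=gn_id,
    TransitGatewayArn=(
        "arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0123456789abcdef0"
    ),
)

# Optionally model an on-premises site for the global view.
nm.create_site(
    GlobalNetworkId=gn_id,
    Description="HQ data center",
    Location={"Address": "1 Main St", "Latitude": "47.61", "Longitude": "-122.33"},
)
```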

Question 96 

A company wants to ensure VPC-to-VPC traffic remains encrypted and authenticated between accounts without managing TLS certificates. Which AWS service supports this?

A) AWS VPC Lattice
B) VPC Peering
C) Transit Gateway
D) AWS PrivateLink

Answer: A)

Explanation: 

AWS VPC Lattice enables secure, service-to-service communication across VPCs and AWS accounts. It provides automatic transport-layer encryption and authentication, ensuring that all communication between services is encrypted without requiring manual management of TLS certificates. This built-in encryption simplifies operations for organizations managing multiple accounts or cross-VPC architectures.

VPC Lattice also provides service discovery and centralized access control policies, allowing administrators to define which services can communicate. This supports a zero-trust security model where trust is not assumed based on network topology, and all communication is explicitly authorized. Lattice eliminates the operational burden of configuring individual TLS certificates for each service, which can be error-prone and difficult to maintain at scale.

Other options, such as VPC Peering, provide network-level connectivity but do not enforce encryption or authentication. Transit Gateway centralizes routing but lacks application-layer security. AWS PrivateLink provides private access but does not automatically handle TLS or service-level authentication.

Therefore, VPC Lattice is the ideal choice for secure, encrypted service-to-service communication across accounts and VPCs. It simplifies management, enforces zero-trust principles, and ensures operational efficiency while maintaining robust security without requiring manual certificate operations.

Question 97 

A company wants to accelerate large uploads to S3 from global locations with optimized network paths. Which AWS service should they use?

A) S3 Transfer Acceleration
B) DataSync
C) Snowball Edge
D) CloudFront

Answer: A)

Explanation: 

S3 Transfer Acceleration improves upload speeds to Amazon S3 by leveraging AWS edge locations to route data over the AWS global network backbone. When a client uploads a file, it first reaches the nearest edge location and is then routed through AWS’s optimized network to the destination bucket. This reduces latency, increases throughput, and avoids congestion on public internet paths.

This solution is ideal for geographically distributed users who need to upload large datasets such as media files, logs, or backups efficiently. It requires minimal client configuration changes, as it works with existing S3 APIs. DataSync automates bulk transfers but does not provide network-level acceleration via edge locations, making it less suitable for latency-sensitive uploads.

Snowball Edge is designed for offline migrations or edge computing workloads, not real-time uploads. CloudFront is optimized for content delivery to end users (downloads) and cannot accelerate uploads to S3.

S3 Transfer Acceleration is the optimal solution for enterprises requiring fast, global-scale upload performance. It leverages AWS infrastructure, reduces latency, and enables reliable transfer of large files from anywhere in the world without the operational complexity of VPNs or dedicated networking.

Question 98 

A company wants to route users to the nearest region with automatic failover for unhealthy endpoints. Which routing policy should they implement in Route 53?

A) Latency-based routing with health checks
B) Weighted routing
C) Geolocation routing
D) Simple routing

Answer: A)

Explanation: 

Latency-based routing in Amazon Route 53 is a powerful DNS routing policy designed to optimize application performance by directing end users to the AWS region that provides the lowest network latency. The fundamental goal of this routing strategy is to minimize the time it takes for a client’s DNS query to resolve and for the corresponding application response to reach the user. By measuring the network latency between users and multiple AWS regions, Route 53 can dynamically respond to changes in network performance, ensuring that each user is consistently routed to the region where they will experience the fastest response times.

When latency-based routing is combined with health checks, Route 53 adds a critical layer of resilience to global applications. Health checks continuously monitor the availability and performance of resources associated with a DNS record, such as web servers or application endpoints. If an endpoint is detected as unhealthy or unresponsive, Route 53 automatically removes it from DNS responses. This mechanism ensures that traffic is not sent to failing endpoints, effectively providing automatic failover and maintaining application availability. In other words, users are dynamically routed away from unhealthy endpoints without requiring manual intervention, which reduces downtime and prevents degraded user experience.

This routing policy is particularly suitable for global applications with multiple AWS regions because it balances performance optimization and high availability. The combination of latency awareness and real-time health monitoring allows organizations to deliver a seamless experience to users regardless of their geographical location.

Other Route 53 routing policies are less suited for this scenario. Weighted routing allows traffic distribution according to predefined percentages but does not account for latency or the health of endpoints, meaning users could be sent to slower or unhealthy resources. Geolocation routing directs traffic based solely on the user’s geographic location, which does not guarantee that the selected endpoint provides the fastest network performance. Simple routing returns a single IP address without any health check or optimization capabilities, offering no failover or latency benefits.

Therefore, latency-based routing with health checks is the optimal solution for organizations seeking both high performance and high availability in a multi-region deployment. It ensures users are routed to the nearest healthy endpoint, providing automatic failover while continuously optimizing network latency, which is essential for global applications that demand resilient, low-latency connectivity.

Question 99 

A company wants to centralize inspection of east-west VPC traffic across multiple accounts. Which AWS architecture should they deploy?

A) Transit Gateway in appliance mode with Gateway Load Balancer
B) VPC Peering
C) Internet Gateway with security groups
D) Direct Connect

Answer: A)

Explanation: 

Centralized inspection of east-west traffic in Amazon Web Services requires an architecture that can scale across multiple VPCs and accounts while providing robust traffic inspection and policy enforcement. The recommended solution is a Transit Gateway deployed in appliance mode combined with a Gateway Load Balancer (GWLB). Transit Gateway acts as a hub for inter-VPC traffic, simplifying routing and connectivity by consolidating traffic flows into a single, manageable interface. Appliance mode keeps both directions of a flow in the same Availability Zone, preventing the asymmetric routing that would otherwise cause stateful inspection appliances to drop return traffic. This guarantees that security appliances, such as firewalls or intrusion detection systems, can inspect all traffic flowing between VPCs.

The Gateway Load Balancer complements this setup by providing scalable distribution of traffic across multiple inspection appliances. GWLB ensures that inspection devices are not overwhelmed by high traffic volumes and can provide high availability. It allows administrators to easily add or remove appliances as needed without disrupting traffic flow, which is critical for dynamic and growing cloud environments. Together, Transit Gateway in appliance mode and GWLB provide a robust architecture that supports centralized inspection across multiple VPCs and accounts while maintaining consistent security policies.

This design also simplifies management. Administrators can apply security rules and monitoring configurations centrally rather than configuring each VPC individually. Integration with monitoring and Security Information and Event Management (SIEM) systems further enhances visibility into east-west traffic, enabling rapid detection of anomalies or malicious activity. It is especially effective in large-scale, multi-account AWS environments where manual inspection configuration for each VPC would be cumbersome and error-prone.

Alternative approaches do not meet the same requirements. VPC Peering connects individual VPCs directly but lacks centralized traffic inspection and does not scale well across numerous VPCs. Internet Gateways with security groups provide traffic filtering at Layer 3 and Layer 4 but cannot inspect internal VPC-to-VPC traffic centrally. Direct Connect provides connectivity to on-premises networks but does not facilitate centralized inspection of cloud-native VPC traffic.

Therefore, the combination of Transit Gateway in appliance mode with Gateway Load Balancer is the optimal solution. It provides scalable, centralized inspection of east-west traffic, ensures high availability, and simplifies management, making it the recommended architecture for organizations that need robust security enforcement across multiple VPCs and accounts.

Question 100 

A company wants to enforce outbound domain filtering across multiple VPCs and hybrid networks. Which AWS service should they use?

A) Route 53 Resolver DNS Firewall
B) NAT Gateway
C) Internet Gateway
D) Network ACLs

Answer: A)

Explanation: 

Amazon Route 53 Resolver DNS Firewall is a managed service that enables organizations to implement centralized, domain-based outbound DNS filtering across multiple Amazon Virtual Private Clouds (VPCs). By leveraging DNS Firewall, administrators can define rules that explicitly allow or block DNS queries for specific domains, effectively enforcing corporate policies, preventing access to malicious or unauthorized sites, and ensuring compliance with organizational security standards. Unlike traditional network controls that operate at the IP or transport layer, DNS Firewall provides filtering at the domain name level, enabling far more granular control over outbound DNS traffic.

Firewall rule groups are the core component of DNS Firewall. They contain a set of rules that can either block queries to known malicious or non-compliant domains or allow queries to approved domains. These rule groups can be associated with one or more VPCs, allowing consistent policy enforcement across multiple environments. This centralized model significantly reduces administrative overhead, as changes to a rule group automatically propagate to all associated VPCs, eliminating the need for repetitive configuration on a per-VPC basis.

Route 53 Resolver DNS Firewall also integrates seamlessly with hybrid cloud architectures. Through the use of inbound and outbound Resolver endpoints, on-premises networks can forward DNS queries to the cloud, enabling the same domain filtering policies to apply to both cloud-based and on-premises workloads. This ensures consistent security policies across the organization’s entire network, regardless of whether resources reside in AWS or on-premises.

In addition to filtering capabilities, DNS Firewall provides robust query logging functionality. Administrators can capture detailed logs of all allowed and blocked queries, which supports auditing, operational monitoring, and compliance reporting. These logs enable security teams to detect unusual patterns, investigate potential threats, and maintain visibility into DNS activity across the organization.

It is important to note that traditional network constructs like NAT Gateways, Internet Gateways, and Network ACLs cannot perform domain-based filtering. NAT Gateways only provide IP address translation for outbound traffic, Internet Gateways enable internet access without filtering capabilities, and Network ACLs operate at Layer 3 and Layer 4, making them incapable of inspecting DNS queries by domain name.

By implementing Route 53 Resolver DNS Firewall, organizations achieve centralized, scalable control over outbound DNS traffic. It ensures security, compliance, and operational visibility across both cloud and hybrid environments, making it the recommended solution for domain-based outbound DNS filtering in AWS.
