Amazon AWS Certified Advanced Networking – Specialty ANS-C01 Exam Dumps and Practice Test Questions Set 6 Q101-120

Visit here for our full Amazon AWS Certified Advanced Networking – Specialty ANS-C01 exam dumps and practice test questions.

Question 101 

A company wants to implement zero-trust connectivity between application services across multiple AWS accounts. Which AWS service should they use?

A) AWS VPC Lattice
B) VPC Peering
C) Transit Gateway
D) AWS PrivateLink

Answer: A)

Explanation: 

AWS VPC Lattice provides secure service-to-service connectivity across multiple VPCs and AWS accounts. It enforces zero-trust principles by enabling authentication, authorization, and transport-layer encryption without relying on network-level trust. Lattice also supports centralized service discovery and access policies, allowing administrators to control which services can communicate. Traffic is automatically encrypted and authenticated, eliminating the need for manual TLS certificate management. This solution is ideal for multi-account enterprises needing secure, scalable service-level connectivity.
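As a rough illustration, the following boto3 sketch creates a service network that requires IAM (SigV4) authentication, associates a consumer VPC, and attaches an organization-scoped auth policy. All names, IDs, and the organization ID are hypothetical placeholders, and the exact parameter casing should be verified against the current vpc-lattice API.

```python
import json
import boto3

# Sketch: zero-trust service network with IAM authentication enforced for every request.
lattice = boto3.client("vpc-lattice")

network = lattice.create_service_network(
    name="shared-services",
    authType="AWS_IAM",          # require SigV4 authentication for callers
)

lattice.create_service_network_vpc_association(
    serviceNetworkIdentifier=network["id"],
    vpcIdentifier="vpc-0123456789abcdef0",   # hypothetical consumer VPC
)

# Allow only principals from the organization to invoke services in this network.
lattice.put_auth_policy(
    resourceIdentifier=network["id"],
    policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "vpc-lattice-svcs:Invoke",
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        }],
    }),
)
```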

VPC Peering allows network-level connectivity between VPCs but does not enforce service-level authentication or encryption. Peered networks rely on implicit trust, which does not meet zero-trust principles. TLS must be managed at the application level manually, increasing operational overhead.

Transit Gateway centralizes routing between multiple VPCs and accounts but operates at the network level. It cannot enforce service-level zero-trust controls, and applications must still handle authentication and encryption independently.

AWS PrivateLink provides private access to services across VPCs and accounts. While it keeps traffic on the AWS network, it does not automatically enforce service-to-service authentication or zero-trust access policies. Certificates and trust management remain the responsibility of the user.

Thus, AWS VPC Lattice is the correct solution for zero-trust service communication across multiple accounts with centralized security enforcement.

Question 102 

A company wants to capture detailed packet-level data from EC2 instances across multiple VPCs for security and compliance monitoring. Which AWS service should they use?

A) VPC Traffic Mirroring
B) VPC Flow Logs
C) GuardDuty
D) CloudTrail

Answer: A)

Explanation: 

AWS VPC Traffic Mirroring is a powerful service that enables organizations to capture packet-level traffic from Elastic Network Interfaces (ENIs) of EC2 instances. Unlike traditional logging solutions that only provide high-level metadata, Traffic Mirroring captures the full packet payload, offering deep visibility into network traffic for advanced security monitoring, performance analysis, and compliance auditing. This makes it an essential tool for enterprises that need detailed insights into application behavior, network interactions, and potential security threats.

Traffic Mirroring supports centralized collection across multiple VPCs, enabling organizations to monitor both east-west traffic (communication between instances within or across VPCs) and north-south traffic (traffic flowing to or from the internet). Mirrored traffic can be sent to monitoring or security appliances, intrusion detection systems (IDS), firewalls, or Security Information and Event Management (SIEM) tools for real-time analysis. This allows teams to inspect payloads for malicious content, detect anomalous behavior, perform forensic investigations, or analyze application-level performance metrics.

One of the key benefits of VPC Traffic Mirroring is its configurability. Administrators can define filtering rules to capture only specific network flows, such as traffic from certain IP addresses, ports, or protocols, thereby minimizing unnecessary data collection, reducing operational overhead, and controlling storage costs. This selective capture ensures that high-value traffic is analyzed without overwhelming monitoring systems with extraneous data.
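A minimal boto3 sketch of this setup might mirror only inbound TCP/443 from a workload ENI to a Network Load Balancer fronting an IDS fleet; the ARN, ENI ID, and rule values below are hypothetical examples.

```python
import boto3

# Sketch: Traffic Mirroring target, filter, and session for selective packet capture.
ec2 = boto3.client("ec2")

target = ec2.create_traffic_mirror_target(
    NetworkLoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/ids-nlb/abc123",
    Description="IDS appliance fleet",
)["TrafficMirrorTarget"]

mirror_filter = ec2.create_traffic_mirror_filter(
    Description="Inbound HTTPS only",
)["TrafficMirrorFilter"]

ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,                                   # TCP
    DestinationPortRange={"FromPort": 443, "ToPort": 443},
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0123456789abcdef0",   # source ENI to mirror
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
    SessionNumber=1,
)
```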

Alternative AWS services do not provide the same level of detail. VPC Flow Logs, for instance, capture only metadata such as source and destination IP addresses, ports, protocols, and the volume of data transferred. While useful for understanding traffic patterns or troubleshooting, Flow Logs do not include payload-level information, making them insufficient for detailed security inspections or regulatory compliance requirements. Similarly, AWS GuardDuty is a threat detection service that analyzes VPC Flow Logs, DNS logs, and CloudTrail events to identify suspicious activity. While GuardDuty can alert administrators to potential threats, it does not provide raw packet-level traffic necessary for deep inspection or forensic analysis. AWS CloudTrail, on the other hand, records API activity and management-plane operations, which is valuable for auditing and compliance at the control plane level but does not capture network traffic or application-level interactions.

VPC Traffic Mirroring is the ideal solution for organizations requiring comprehensive, packet-level visibility into network traffic. It enables centralized monitoring across multiple VPCs, supports real-time threat detection, aids performance analysis, and satisfies compliance requirements for network auditing. Its ability to capture full payloads and selectively filter traffic makes it highly flexible and efficient, distinguishing it from other AWS logging or security services that lack packet-level detail.

Question 103

A company wants to centrally inspect and filter encrypted traffic without modifying client applications. Which AWS service should they deploy?

A) Gateway Load Balancer with inspection appliances
B) Classic Load Balancer with SSL termination
C) NAT Gateway
D) Security groups

Answer: A)

Explanation: 

AWS Gateway Load Balancer (GWLB) is a fully managed service designed to enable centralized, scalable, and transparent inspection of network traffic, including encrypted flows, without requiring modifications to client applications. GWLB acts as a single entry and exit point for network traffic, seamlessly routing it to inspection appliances such as firewalls, intrusion detection systems (IDS), intrusion prevention systems (IPS), or malware inspection tools. By combining GWLB with Transit Gateway or standard VPC routing, organizations can consolidate traffic from multiple VPCs and accounts, ensuring consistent policy enforcement and centralized security visibility across their AWS environment.

One of the key advantages of GWLB is its ability to handle encrypted traffic. Inspection appliances connected to the GWLB can decrypt SSL/TLS flows, perform deep packet inspection, and then re-encrypt traffic before forwarding it to the destination. This allows enterprises to enforce security policies, detect threats, and monitor sensitive traffic without requiring any changes to applications, client devices, or end-user workflows. For large-scale deployments, GWLB provides automatic scaling, distributing traffic across multiple appliances for high availability and reliability. This ensures that inspection keeps pace with growing network traffic while maintaining performance and operational efficiency. Redundancy is also built in, allowing failover between appliances to prevent single points of failure.
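For orientation, a boto3 sketch of the inspection-VPC side might look like the following; the subnet, VPC ID, and names are hypothetical placeholders, and the appliance fleet itself (firewall AMIs, registration of targets) is assumed to exist separately.

```python
import boto3

# Sketch: Gateway Load Balancer plus a GENEVE target group for inspection appliances.
elbv2 = boto3.client("elbv2")

gwlb = elbv2.create_load_balancer(
    Name="inspection-gwlb",
    Type="gateway",
    Subnets=["subnet-0aaa1111bbbb2222c"],
)["LoadBalancers"][0]

appliances = elbv2.create_target_group(
    Name="firewall-appliances",
    Protocol="GENEVE",
    Port=6081,                      # GENEVE encapsulation port used by GWLB
    VpcId="vpc-0inspection1234567",
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="80",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=gwlb["LoadBalancerArn"],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": appliances["TargetGroupArn"]}],
)
```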

Alternative AWS services do not provide the same level of centralized, encrypted traffic inspection. Classic Load Balancer with SSL termination can decrypt HTTPS traffic, but it is limited to HTTP and HTTPS protocols, cannot inspect arbitrary network traffic, and does not scale efficiently across multiple VPCs or accounts; certificates must still be provisioned and rotated on the load balancer itself, adding operational overhead. NAT Gateway only performs outbound IPv4 network address translation and provides no ability to inspect, decrypt, or analyze traffic payloads. Security groups operate at the network and transport layers (L3/L4), filtering traffic based on IP addresses, ports, or protocols. They are unable to inspect encrypted payloads or enforce detailed application-level security policies.

By leveraging GWLB with inspection appliances, organizations gain a centralized, scalable, and fully managed solution for inspecting both encrypted and unencrypted network traffic. This architecture provides enterprise-grade security for multi-VPC and multi-account environments, simplifies operational management, and ensures comprehensive visibility and threat detection without requiring client-side modifications. For companies needing advanced, automated, and centrally managed inspection of encrypted traffic, Gateway Load Balancer combined with inspection appliances represents the most effective solution within AWS.

Question 104 

A company wants to analyze hybrid network performance and visualize global connectivity metrics for on-premises and AWS VPCs. Which AWS service should they use?

A) Transit Gateway Network Manager with CloudWatch
B) VPC Flow Logs
C) GuardDuty
D) AWS Config

Answer: A)

Explanation: 

Transit Gateway Network Manager provides centralized monitoring and visualization of global network performance across VPCs, Direct Connect links, and VPN connections. Integrated with CloudWatch, it collects metrics such as latency, jitter, packet loss, and throughput, enabling proactive troubleshooting and performance optimization. Network Manager allows visualization of end-to-end hybrid connectivity, including multiple regions and on-premises locations. This service is ideal for enterprises that require real-time performance monitoring and historical trend analysis for hybrid networks.
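A brief boto3 sketch, assuming an existing Transit Gateway and hypothetical ARNs and IDs, shows how a global network might be created, the Transit Gateway registered into it, and a CloudWatch metric for that gateway retrieved for trend analysis.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Sketch: register a Transit Gateway with Network Manager and query a CloudWatch metric.
nm = boto3.client("networkmanager")
cw = boto3.client("cloudwatch")

global_network = nm.create_global_network(
    Description="Hybrid WAN - global view",
)["GlobalNetwork"]

nm.register_transit_gateway(
    GlobalNetworkId=global_network["GlobalNetworkId"],
    TransitGatewayArn="arn:aws:ec2:us-east-1:111122223333:transit-gateway/tgw-0abc1234def567890",
)

# Example: bytes received by the Transit Gateway over the last hour.
stats = cw.get_metric_statistics(
    Namespace="AWS/TransitGateway",
    MetricName="BytesIn",
    Dimensions=[{"Name": "TransitGateway", "Value": "tgw-0abc1234def567890"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Sum"],
)
print(stats["Datapoints"])
```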

VPC Flow Logs capture network metadata within a VPC, including source/destination IPs, ports, and protocols. They provide basic network visibility but do not measure latency, packet loss, or other performance metrics across hybrid links.

GuardDuty is a threat detection service that analyzes network logs and API activity to identify security threats. It does not provide network performance monitoring or visualization of hybrid connectivity.

AWS Config audits configuration changes and ensures compliance but does not monitor or measure network performance or traffic flows.

Therefore, Transit Gateway Network Manager with CloudWatch is the correct solution for comprehensive hybrid network performance monitoring and visualization.

Question 105 

A company wants to route end users to the AWS Region with the lowest latency while automatically failing over unhealthy endpoints. Which Route 53 routing policy should they implement?

A) Latency-based routing with health checks
B) Weighted routing
C) Geolocation routing
D) Simple routing

Answer: A)

Explanation: 

Latency-based routing directs DNS queries to the AWS Region that provides the lowest network latency from the user’s location. When combined with health checks, Route 53 automatically removes unhealthy endpoints from DNS responses, providing seamless failover and high availability. This approach optimizes performance for global users and ensures applications remain accessible even during regional failures. Latency metrics are continuously monitored, and DNS responses are dynamically adjusted to minimize response times.
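A minimal boto3 sketch, using a hypothetical hosted zone ID, domain name, and endpoint IPs, illustrates how one latency record per Region can be created and tied to its own health check:

```python
import uuid
import boto3

# Sketch: latency-based records with health checks for automatic failover.
r53 = boto3.client("route53")

def latency_record(region: str, ip: str) -> None:
    hc = r53.create_health_check(
        CallerReference=str(uuid.uuid4()),
        HealthCheckConfig={
            "Type": "HTTPS",
            "FullyQualifiedDomainName": f"app-{region}.example.com",
            "Port": 443,
            "ResourcePath": "/health",
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )["HealthCheck"]

    r53.change_resource_record_sets(
        HostedZoneId="Z0HYPOTHETICALZONE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": region,       # one record set per Region
                "Region": region,              # enables latency-based routing
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
                "HealthCheckId": hc["Id"],     # unhealthy records are withdrawn
            },
        }]},
    )

latency_record("us-east-1", "203.0.113.10")
latency_record("eu-west-1", "203.0.113.20")
```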

Weighted routing allows traffic to be distributed based on predefined percentages. It is typically used for A/B testing or gradual deployments rather than latency optimization or failover.

Geolocation routing routes users based on geographic location rather than latency. While useful for regulatory compliance or regional content delivery, it does not guarantee routing to the lowest-latency endpoints.

Simple routing is a basic DNS setup that returns a single IP address without considering latency or endpoint health. It cannot provide automated failover.

Thus, latency-based routing with health checks is the correct choice for directing users to the lowest-latency, healthy endpoints.

Question 106 

A company wants to centrally log and block malicious outbound DNS requests across multiple VPCs and hybrid networks. Which AWS service should they use?

A) Route 53 Resolver DNS Firewall
B) NAT Gateway
C) Internet Gateway
D) Security groups

Answer: A)

Explanation: 

AWS Route 53 Resolver DNS Firewall provides a robust and scalable solution for managing DNS traffic across multiple VPCs and hybrid environments. It enables centralized domain-level filtering by allowing administrators to define firewall rules in rule groups that can block, allow, or redirect outbound DNS queries. These rule groups can be associated with one or more VPCs, making it possible to enforce consistent security policies across a large, distributed cloud architecture without manually configuring DNS filtering per instance or per VPC.

For hybrid architectures that include on-premises data centers, Resolver endpoints can be deployed to allow DNS queries from on-premises networks to traverse AWS, ensuring that all DNS requests, whether originating in the cloud or on-premises, adhere to the same filtering policies. Query logging is a critical feature that provides detailed visibility into DNS activity. Logs capture information about the domain names queried, the VPC or account from which the request originated, and whether the request was allowed or blocked. This visibility is essential for security monitoring, compliance auditing, and proactive threat detection, as it allows teams to identify attempts to contact malicious domains and respond appropriately.
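As a concrete sketch, the boto3 calls below create a domain blocklist, wrap it in a rule group, and associate that rule group with a VPC; the domain names and VPC ID are hypothetical examples.

```python
import uuid
import boto3

# Sketch: DNS Firewall blocklist applied to a workload VPC.
resolver = boto3.client("route53resolver")

domains = resolver.create_firewall_domain_list(
    CreatorRequestId=str(uuid.uuid4()),
    Name="known-bad-domains",
)["FirewallDomainList"]

resolver.update_firewall_domains(
    FirewallDomainListId=domains["Id"],
    Operation="ADD",
    Domains=["malware.example.", "*.phishing.example."],
)

rule_group = resolver.create_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    Name="outbound-dns-blocklist",
)["FirewallRuleGroup"]

resolver.create_firewall_rule(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["Id"],
    FirewallDomainListId=domains["Id"],
    Priority=100,
    Action="BLOCK",
    BlockResponse="NODATA",          # answer blocked queries with an empty response
    Name="block-known-bad",
)

resolver.associate_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["Id"],
    VpcId="vpc-0123456789abcdef0",
    Priority=101,
    Name="workload-vpc-association",
)
```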

Other AWS network components do not provide this level of DNS-specific control. For example, a NAT Gateway performs network address translation, enabling private resources to access the internet, but it cannot inspect, log, or filter DNS queries. Similarly, Internet Gateways provide VPC connectivity to the internet but lack the capability to apply domain-based filtering or central logging. Security groups operate at the network and transport layers (L3/L4) and filter traffic based on IP addresses, ports, and protocols, but they do not understand or filter traffic based on domain names, nor do they provide detailed query logging.

Route 53 Resolver DNS Firewall addresses the unique challenge of centrally enforcing DNS security policies and monitoring malicious activity across multi-VPC and hybrid environments. It combines the flexibility of VPC associations, the visibility of query logging, and the enforcement of domain-level filtering to protect workloads from malicious external and internal domains without requiring client-side configuration changes. This makes it the ideal choice for companies seeking centralized, scalable DNS security and operational oversight.

Question 107 

A company wants low-latency, edge compute for 5G applications requiring near real-time processing. Which AWS service should they deploy?

A) AWS Wavelength Zones
B) Local Zones
C) Outposts
D) Snowball Edge

Answer: A)

Explanation: 

AWS Wavelength Zones extend AWS infrastructure to the edge of telecom provider networks, placing compute and storage resources physically close to 5G base stations. This architecture is specifically designed for ultra-low-latency applications, with round-trip latency often in the single-digit millisecond range. By co-locating workloads with 5G networks, Wavelength Zones ensure that applications such as augmented reality (AR), virtual reality (VR), real-time gaming, autonomous vehicle systems, IoT telemetry processing, and industrial automation can operate with near-instantaneous responsiveness, which is impossible with standard cloud-only deployments.

Wavelength integrates seamlessly with existing AWS services such as Amazon EC2, Amazon ECS, Amazon EKS, and Amazon S3, enabling developers to deploy edge applications using familiar AWS APIs and orchestration tools. This integration allows traffic to stay on the AWS network backbone as much as possible, further minimizing latency and avoiding the variability of public internet routing. AWS handles network provisioning, scaling, and management, so developers can focus on application logic instead of edge network complexities.
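A boto3 sketch of the networking prerequisites, assuming an example Wavelength Zone name and hypothetical VPC and route table IDs, might opt in to the Zone group, create a subnet in the Zone, and route its traffic through a carrier gateway:

```python
import boto3

# Sketch: Wavelength Zone subnet with a carrier gateway for mobile-network egress.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_availability_zone_group(
    GroupName="us-east-1-wl1",          # Wavelength Zone group (opt-in required)
    OptInStatus="opted-in",
)

subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.64.0/24",
    AvailabilityZone="us-east-1-wl1-bos-wlz-1",   # example Wavelength Zone name
)["Subnet"]

carrier_gw = ec2.create_carrier_gateway(
    VpcId="vpc-0123456789abcdef0",
)["CarrierGateway"]

ec2.create_route(
    RouteTableId="rtb-0fedcba9876543210",
    DestinationCidrBlock="0.0.0.0/0",
    CarrierGatewayId=carrier_gw["CarrierGatewayId"],   # egress via the carrier network
)
```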

Other options do not meet the stringent latency requirements of 5G applications. AWS Local Zones place compute closer to large metropolitan areas but typically still reside in traditional data centers, leading to higher latency than Wavelength for mobile network traffic. Outposts provide AWS infrastructure on-premises for workloads that need local compute and storage, but they are not designed to integrate with 5G mobile networks, limiting real-time responsiveness for edge applications. Snowball Edge is a physical, portable device primarily for offline data transfer or local edge compute; it is not suitable for continuously running low-latency 5G applications.

Therefore, for applications requiring near real-time processing at the network edge, AWS Wavelength Zones provide the most effective, fully managed solution. By leveraging Wavelength, companies can deploy latency-sensitive 5G applications that deliver high performance, reliability, and integration with the broader AWS ecosystem, without the need for complex custom edge infrastructure or extensive network engineering.

Question 108 

A company wants to centrally monitor and analyze packet-level traffic from multiple VPCs for security and compliance. Which AWS service should they use?

A) VPC Traffic Mirroring
B) VPC Flow Logs
C) CloudTrail
D) GuardDuty

Answer: A)

Explanation: 

VPC Traffic Mirroring is a powerful AWS service designed to provide deep visibility into network traffic at the packet level. It allows administrators to capture copies of inbound and outbound network traffic from Elastic Network Interfaces (ENIs) on EC2 instances and send this mirrored traffic to monitoring appliances, security appliances, intrusion detection systems, or SIEM solutions for inspection, analysis, and compliance auditing. This capability is critical for organizations that require detailed understanding of network activity, including payload content, session patterns, and protocol usage.

Traffic Mirroring supports centralized monitoring across multiple VPCs, enabling a security team to analyze east-west traffic (between instances within VPCs) and north-south traffic (traffic entering or leaving the VPC) without impacting production workloads. Administrators can configure selective mirroring based on IP addresses, ports, or protocols, ensuring that only relevant traffic is mirrored to reduce overhead and optimize monitoring performance. This selective approach is important in large environments with high traffic volumes.

While VPC Flow Logs provide metadata about traffic, such as source/destination IP addresses, ports, and protocols, they do not provide visibility into packet payloads, which is necessary for deep inspection and compliance auditing. CloudTrail focuses on API-level activity and does not capture network-level traffic. GuardDuty provides threat detection based on existing logs and machine learning but does not deliver raw packet-level traffic for detailed analysis.

By using VPC Traffic Mirroring, organizations gain the ability to perform detailed forensic analysis, detect anomalies, investigate security incidents, and meet regulatory compliance requirements. It offers the flexibility to centralize monitoring and integrates with third-party tools or custom solutions to analyze traffic in real time. VPC Traffic Mirroring therefore serves as the definitive solution for centralized, packet-level monitoring and analysis across multi-VPC architectures, providing security, operational visibility, and compliance assurance.

Question 109 

A company wants to accelerate uploads of large files to S3 from globally distributed clients. Which AWS service should they use?

A) S3 Transfer Acceleration
B) DataSync
C) Snowball Edge
D) CloudFront

Answer: A)

Explanation: 

Amazon S3 Transfer Acceleration (TA) is designed to speed up uploads and downloads to and from S3 for clients distributed across the globe. It accomplishes this by routing client traffic through the nearest Amazon CloudFront edge location, where it is then transferred over the highly optimized and secure AWS global network backbone to the destination S3 bucket. This reduces latency caused by long-distance internet routing and network congestion, significantly improving throughput for large files and enhancing end-user experience for geographically dispersed clients.

Transfer Acceleration is fully compatible with standard S3 APIs, so developers do not need to change client-side logic or rewrite applications. It supports both PUT and multipart uploads, which allows efficient handling of large objects by breaking them into smaller parts that can be uploaded in parallel, further reducing total transfer time. Additionally, Transfer Acceleration is fully managed by AWS, removing the complexity of configuring specialized networking or caching solutions for global performance improvements.
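In practice this can be as simple as the following boto3 sketch, which enables acceleration on a hypothetical bucket and uploads a file through the accelerate endpoint; the bucket name and file path are placeholders.

```python
import boto3
from botocore.config import Config

# Sketch: enable Transfer Acceleration and upload via the accelerate endpoint.
s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="global-uploads-example",
    AccelerateConfiguration={"Status": "Enabled"},
)

# A second client that sends requests to the s3-accelerate endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file(
    Filename="/tmp/video-master.mov",          # hypothetical local file
    Bucket="global-uploads-example",
    Key="ingest/video-master.mov",
)
```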

Alternative solutions are less effective for this scenario. AWS DataSync is intended for large-scale, automated transfers between on-premises storage and S3 or between AWS storage services, but it does not optimize transfers over the public internet for globally distributed clients. Snowball Edge is a physical appliance for offline transfer or edge compute and is impractical for real-time, continuous uploads. CloudFront accelerates content delivery for downloads but does not improve upload performance.

By deploying S3 Transfer Acceleration, companies can ensure faster, more reliable uploads of large files to S3 from clients worldwide. This improves workflow efficiency, reduces time-to-availability, and provides a seamless experience for distributed users. TA leverages AWS’s global network, edge locations, and infrastructure to optimize performance while maintaining compatibility with existing S3 operations, making it the optimal choice for globally accelerated S3 uploads.

Question 110 

A company wants to centrally inspect encrypted traffic across multiple VPCs without modifying client applications. Which AWS service should they deploy?

A) Gateway Load Balancer with inspection appliances
B) Classic Load Balancer with SSL termination
C) NAT Gateway
D) Security groups

Answer: A)

Explanation: 

AWS Gateway Load Balancer (GWLB) enables centralized inspection of traffic, including encrypted flows, without requiring modifications to client applications. GWLB allows traffic to be transparently routed to third-party security or inspection appliances that can decrypt, inspect, and re-encrypt traffic before it reaches its destination. This is critical for organizations that need enterprise-level security, compliance, and monitoring across multiple VPCs, as it ensures consistent inspection for both east-west (within VPCs) and north-south (internet-bound or incoming) traffic.

GWLB is designed for high availability and scalability. It can distribute traffic across multiple inspection appliances and automatically scale throughput to handle large volumes of encrypted traffic. When integrated with AWS Transit Gateway or VPC routing, GWLB allows traffic from multiple VPCs to flow through a centralized inspection point, reducing complexity compared to deploying inspection appliances individually in each VPC. Administrators can also maintain source IP visibility and traffic context, which is important for logging, security analytics, and compliance reporting.
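Complementing the load balancer itself, the spoke-side plumbing is typically a Gateway Load Balancer endpoint backed by an endpoint service. The boto3 sketch below, using hypothetical ARNs, IDs, and a placeholder service name, shows that wiring and the route that steers spoke traffic into the inspection path.

```python
import boto3

# Sketch: GWLB endpoint service, a GWLB endpoint in a spoke VPC, and a route through it.
ec2 = boto3.client("ec2")

svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=["arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/gwy/inspection-gwlb/abc123"],
    AcceptanceRequired=False,
)["ServiceConfiguration"]

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    ServiceName=svc["ServiceName"],
    VpcId="vpc-0spoke0123456789a",
    SubnetIds=["subnet-0spoke111122223333"],
)["VpcEndpoint"]

# Send the spoke VPC's outbound traffic through the GWLB endpoint for inspection.
ec2.create_route(
    RouteTableId="rtb-0spoke9876543210f",
    DestinationCidrBlock="0.0.0.0/0",
    VpcEndpointId=endpoint["VpcEndpointId"],
)
```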

Other options do not provide the same level of capability. Classic Load Balancers with SSL termination can decrypt traffic, but only for HTTP/S protocols, and certificate provisioning and rotation must still be managed on the load balancer. NAT Gateways perform network address translation but cannot inspect encrypted payloads. Security groups operate at L3/L4 and cannot inspect application layer payloads or encrypted data.

By deploying GWLB with inspection appliances, organizations can achieve centralized, seamless traffic inspection, including encrypted traffic, across multiple VPCs and hybrid networks. This approach ensures that security policies are consistently enforced, potential threats are detected in real time, and client applications remain unchanged. It provides a scalable, transparent, and fully managed solution for enterprise-level network security and compliance, making it the optimal choice for encrypted traffic inspection.

Question 111 

A company wants to route users to the AWS Region with the lowest network latency while automatically failing over unhealthy endpoints. Which Route 53 routing policy should they use?

A) Latency-based routing with health checks
B) Weighted routing
C) Geolocation routing
D) Simple routing

Answer: A)

Explanation: 

Latency-based routing in Amazon Route 53 is designed to direct user traffic to the AWS region that can provide the fastest response time based on network latency measurements. It works by continuously assessing the latency between a user’s location and AWS endpoints, then dynamically returning DNS responses that direct clients to the region with the lowest latency. When combined with Route 53 health checks, this policy ensures automatic failover: any unhealthy endpoint is removed from DNS responses, and traffic is rerouted to the next lowest-latency healthy region.

This combination of latency optimization and health monitoring is crucial for latency-sensitive applications, such as real-time gaming, streaming, financial trading, or IoT services. Users experience minimal delay regardless of their geographic location, while service continuity is maintained even in case of outages or degraded regional performance. Additionally, latency-based routing automatically adapts to changing network conditions, providing a dynamic solution that optimizes global performance without manual intervention.

Alternative routing policies do not address both latency and failover. Weighted routing distributes traffic according to predefined percentages, which is useful for A/B testing or staged deployments, but it does not consider network latency or automatically handle endpoint failures. Geolocation routing directs traffic based on the user’s geographic location, which can ensure compliance with regional requirements but may not route users to the fastest or healthiest endpoint. Simple routing is the most basic option, returning a single IP address regardless of latency or health status, providing no optimization or automatic failover.

Question 112 

A company wants to enforce domain-level outbound DNS filtering across multiple VPCs and hybrid networks. Which AWS service should they use?

A) Route 53 Resolver DNS Firewall
B) NAT Gateway
C) Internet Gateway
D) Security groups

Answer: A)

Explanation: 

AWS Route 53 Resolver DNS Firewall provides centralized control over outbound DNS queries, enabling organizations to enforce domain-level filtering across multiple VPCs and accounts, as well as hybrid architectures that include on-premises environments. Firewall rule groups allow administrators to define rules to block, allow, or redirect specific domains. These rule groups can be associated with multiple VPCs, ensuring consistent enforcement of security policies across distributed environments.

Hybrid environments can leverage Resolver endpoints to route on-premises DNS queries through the firewall, ensuring that all queries adhere to the same security standards. Additionally, query logging captures detailed metadata, including the queried domain, originating VPC or account, and whether the query was allowed or blocked. This provides visibility for security monitoring, compliance auditing, and threat detection. DNS Firewall protects workloads from malicious domains without requiring client-side changes or configuration, enabling seamless integration and policy enforcement.
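The logging side can be sketched in boto3 as follows, sending Resolver query logs (which record the DNS Firewall action taken for each query) to a CloudWatch Logs group; the log-group ARN and VPC ID are hypothetical placeholders.

```python
import uuid
import boto3

# Sketch: Resolver query logging for a VPC protected by DNS Firewall rules.
resolver = boto3.client("route53resolver")

log_config = resolver.create_resolver_query_log_config(
    Name="dns-query-logs",
    DestinationArn="arn:aws:logs:us-east-1:111122223333:log-group:/dns/queries",
    CreatorRequestId=str(uuid.uuid4()),
)["ResolverQueryLogConfig"]

resolver.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=log_config["Id"],
    ResourceId="vpc-0123456789abcdef0",   # every query from this VPC is logged
)
```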

Other AWS components are insufficient for domain-level filtering. NAT Gateways perform IP address translation for outbound traffic but do not inspect DNS queries, offering only basic network-level logging. Internet Gateways provide public internet access but cannot filter or log DNS queries by domain. Security groups operate at the network and transport layers (L3/L4), filtering by IP, port, or protocol, but they cannot analyze DNS payloads or enforce domain-based rules.

By deploying Route 53 Resolver DNS Firewall, organizations gain a scalable, centrally managed solution for enforcing outbound DNS policies. It ensures protection from malicious or unauthorized domains, provides comprehensive visibility and logging for auditing and monitoring, and supports hybrid networks. This approach simplifies administration while maintaining consistent security across multi-VPC and hybrid environments, making it the optimal solution for domain-level DNS filtering.

Question 113 

A company wants centralized inspection of east-west traffic between hundreds of VPCs in multiple accounts. Which architecture is best?

A) Transit Gateway in appliance mode with Gateway Load Balancer
B) Multiple VPC peering connections
C) Internet Gateway with security groups
D) Direct Connect

Answer: A)

Explanation: 

AWS Transit Gateway (TGW) in appliance mode enables centralized routing and inspection of VPC-to-VPC (east-west) traffic at scale. In appliance mode, traffic is routed through inspection appliances such as firewalls or intrusion detection systems before reaching its destination. Gateway Load Balancer (GWLB) integrates seamlessly with TGW to distribute traffic across multiple inspection appliances, providing scalability, high availability, and automatic load balancing.

This architecture supports centralized policy enforcement, logging, and auditing across multiple VPCs and accounts. It eliminates the complexity and operational overhead associated with establishing hundreds of individual VPC peering connections. Appliance mode also keeps flows symmetric: both directions of a connection are hashed to the same appliance Availability Zone, so stateful inspection works correctly and source IP visibility is preserved. Security and compliance teams can use this centralized design to monitor, detect threats, and enforce consistent security policies.
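The key knob is the appliance-mode option on the inspection VPC's Transit Gateway attachment, shown in this boto3 sketch with hypothetical IDs:

```python
import boto3

# Sketch: attach the inspection VPC to the Transit Gateway with appliance mode enabled,
# so both directions of a flow hash to the same appliance Availability Zone.
ec2 = boto3.client("ec2")

ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0abc1234def567890",
    VpcId="vpc-0inspection1234567",
    SubnetIds=["subnet-0aaa1111bbbb2222c", "subnet-0ddd4444eeee5555f"],
    Options={"ApplianceModeSupport": "enable"},   # keep flows symmetric through the GWLB appliances
)
```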

Other approaches are less scalable. Multiple VPC peering connections become unmanageable in large-scale environments, lack centralized inspection, and require manual routing management. Internet Gateways combined with security groups only provide L3/L4 filtering for north-south traffic and cannot centrally inspect intra-VPC traffic. AWS Direct Connect is intended for hybrid connectivity between on-premises networks and AWS VPCs and does not provide centralized VPC-to-VPC inspection.

Therefore, using Transit Gateway in appliance mode with GWLB is the most effective and scalable solution for centralized inspection of east-west traffic across hundreds of VPCs. It provides visibility, security enforcement, and simplified management while supporting multi-account, large-scale AWS environments.

Question 114 

A company wants to capture packet-level traffic from EC2 instances for security compliance. Which AWS service should they use?

A) VPC Traffic Mirroring
B) VPC Flow Logs
C) GuardDuty
D) CloudTrail

Answer: A)

Explanation: 

VPC Traffic Mirroring enables organizations to capture full packet-level network traffic from Elastic Network Interfaces (ENIs) of EC2 instances. Mirrored traffic can be sent to security appliances, intrusion detection systems, or SIEM tools for deep inspection, analysis, or compliance auditing. It supports both east-west traffic between instances and north-south traffic entering or leaving the VPC, providing comprehensive visibility into network activity.

Administrators can configure selective mirroring based on IP addresses, ports, or protocols, reducing unnecessary data duplication and storage costs while maintaining critical visibility for monitoring and analysis. Capturing full packet data allows organizations to analyze payload content, detect advanced threats, and perform forensic investigations, which is essential for regulatory compliance and enterprise security programs.
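As an example of such selective mirroring, the boto3 sketch below adds two rules to an existing filter: a lower-numbered reject rule that excludes a high-volume backup CIDR, followed by an accept-all rule for the remaining egress traffic. The filter ID and CIDR are hypothetical placeholders.

```python
import boto3

# Sketch: selective Traffic Mirroring rules (lower RuleNumber is evaluated first).
ec2 = boto3.client("ec2")
filter_id = "tmf-0123456789abcdef0"

ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filter_id,
    TrafficDirection="egress",
    RuleNumber=10,
    RuleAction="reject",                       # exclude backup traffic from the mirror
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="10.200.0.0/16",      # backup subnet (example)
)

ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filter_id,
    TrafficDirection="egress",
    RuleNumber=100,
    RuleAction="accept",                       # mirror everything else
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)
```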

VPC Flow Logs provide only metadata, such as IP addresses, ports, and protocol information, and cannot capture payloads. GuardDuty analyzes logs to detect security anomalies but does not provide raw packet-level traffic. CloudTrail records API activity at the management plane level and cannot inspect network flows.

By leveraging VPC Traffic Mirroring, organizations can achieve granular, real-time visibility into all traffic traversing their EC2 instances. This enables proactive security monitoring, detailed threat analysis, and compliance verification. Traffic Mirroring is fully managed, scalable, and configurable, making it the ideal solution for enterprises needing packet-level inspection and auditing across their AWS workloads.

Question 115 

A company wants low-latency, edge computing for 5G applications requiring real-time processing. Which AWS service should they deploy?

A) AWS Wavelength Zones
B) Local Zones
C) Outposts
D) Snowball Edge

Answer: A)

Explanation: 

AWS Wavelength Zones extend AWS compute and storage services directly to telecom provider networks near 5G base stations. This co-location reduces network latency to single-digit milliseconds, enabling real-time processing for applications such as AR/VR, gaming, IoT telemetry, autonomous vehicles, and industrial automation. By deploying workloads in Wavelength Zones, traffic remains on the AWS network backbone, bypassing the public internet and reducing latency, jitter, and packet loss.

Wavelength integrates seamlessly with standard AWS services like EC2, ECS, and S3, allowing developers to orchestrate and deploy edge applications using familiar AWS APIs. This simplifies development, operations, and scaling while providing consistent connectivity and security.

Local Zones bring AWS services closer to metropolitan users but are not embedded within telecom networks, offering less latency improvement for 5G mobile applications. Outposts deliver AWS infrastructure on-premises, suitable for private datacenters but not for low-latency, mobile edge scenarios. Snowball Edge is a physical appliance for offline data transfer or limited edge compute and cannot support continuous, real-time low-latency workloads.

By leveraging AWS Wavelength Zones, organizations can deploy applications that require ultra-low latency and high reliability at the edge, enabling near real-time responsiveness for 5G-enabled workloads. Wavelength provides a fully managed, high-performance solution for edge computing in telecom networks, making it the optimal choice for low-latency, real-time applications.

Question 116 

A company wants to accelerate large file uploads to S3 from global clients. Which AWS service should they use?

A) S3 Transfer Acceleration
B) DataSync
C) Snowball Edge
D) CloudFront

Answer: A)

Explanation: 

Amazon S3 Transfer Acceleration (TA) is designed specifically to optimize file uploads to S3 from geographically distributed clients by leveraging AWS’s global network of edge locations. Normally, data uploads over the public internet can experience high latency and variable throughput due to distance, network congestion, and routing inefficiencies. Transfer Acceleration mitigates these limitations by routing client traffic first to the nearest AWS edge location. From there, data travels over the AWS global backbone network, which is optimized for high-speed, low-latency transport to the target S3 bucket.

This architecture is particularly beneficial for workloads that involve large datasets, such as media content, backups, or bulk ingestion pipelines. TA is fully compatible with existing S3 APIs, so applications require minimal modifications—clients simply enable the transfer acceleration endpoint. Additionally, it supports multipart uploads, which breaks large objects into smaller chunks that can be uploaded in parallel, further improving throughput and reducing total upload time.
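The parallel multipart behavior can be tuned on the client. The boto3 sketch below combines the accelerate endpoint with a transfer configuration; the bucket name, file path, and tuning values are hypothetical examples.

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Sketch: parallel multipart upload over the S3 accelerate endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

transfer_cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MiB
    multipart_chunksize=64 * 1024 * 1024,   # 64 MiB parts uploaded in parallel
    max_concurrency=16,
)

s3.upload_file(
    Filename="/data/genomics-batch-001.tar",
    Bucket="global-uploads-example",
    Key="ingest/genomics-batch-001.tar",
    Config=transfer_cfg,
)
```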

Alternative AWS solutions are less suitable for globally accelerated uploads. AWS DataSync is effective for transferring large volumes of data between on-premises storage and S3 but does not optimize transfer speeds over the public internet or make use of edge locations for latency reduction. Snowball Edge is a physical, portable appliance designed for offline data transfer and edge computation, making it impractical for continuous or real-time uploads. Amazon CloudFront is a content delivery network optimized for fast content delivery to end users, but it primarily accelerates downloads rather than uploads.

By implementing S3 Transfer Acceleration, companies can ensure consistent, high-performance uploads from any geographic location. This reduces bottlenecks for distributed teams, improves productivity, and provides a seamless experience for users interacting with applications requiring frequent large data transfers. In addition to performance gains, using TA can improve operational predictability, as throughput and latency become more consistent compared to standard internet transfers. Therefore, for any globally distributed workflow involving large S3 uploads, S3 Transfer Acceleration is the most effective, fully managed solution.

Question 117 

A company wants to enforce service-to-service encryption across multiple AWS accounts without manual TLS management. Which AWS service should they use?

A) AWS VPC Lattice
B) VPC Peering
C) Transit Gateway
D) PrivateLink

Answer: A)

Explanation:

AWS VPC Lattice provides a fully managed service-to-service connectivity solution that enables secure communication between services across multiple VPCs and AWS accounts. One of its key advantages is that it automatically handles transport-layer encryption and authentication, removing the need for manual TLS certificate management or configuration at the application level. This is particularly valuable for organizations operating in a multi-account or multi-VPC architecture, as it simplifies operations while enforcing security standards consistently across services.

Lattice supports service discovery and access policies, allowing administrators to define which services can communicate and under what conditions. All traffic between services is encrypted in transit, ensuring that sensitive data is protected and zero-trust security principles are maintained. This is critical for organizations with stringent compliance requirements or those that want to reduce operational overhead associated with managing TLS certificates, rotating keys, and ensuring end-to-end encryption.
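To make this concrete, the boto3 sketch below registers a service with IAM authentication, associates it with an existing service network, and attaches a per-service auth policy allowing a single caller role. The service network ID, role ARN, and names are hypothetical placeholders, and the parameter casing should be checked against the current vpc-lattice API.

```python
import json
import boto3

# Sketch: a Lattice service with IAM auth and a caller-scoped auth policy.
lattice = boto3.client("vpc-lattice")

service = lattice.create_service(
    name="payments-api",
    authType="AWS_IAM",              # Lattice authenticates and encrypts each request
)

lattice.create_service_network_service_association(
    serviceIdentifier=service["id"],
    serviceNetworkIdentifier="sn-0123456789abcdef0",
)

lattice.put_auth_policy(
    resourceIdentifier=service["id"],
    policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222233334444:role/orders-service"},
            "Action": "vpc-lattice-svcs:Invoke",
            "Resource": "*",
        }],
    }),
)
```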

Other connectivity options do not provide the same integrated security. VPC Peering establishes network-level connectivity but does not inherently enforce encryption or service authentication; TLS must be implemented and managed at the application layer. Transit Gateway centralizes routing between VPCs but does not manage encryption or authentication between individual services. AWS PrivateLink allows private connectivity to services but does not automatically enforce TLS or provide service-to-service authentication.

By using AWS VPC Lattice, organizations can achieve secure, encrypted communication across VPCs and accounts with minimal operational complexity. It simplifies multi-account networking, reduces human error in managing certificates, and ensures that service-to-service communication adheres to enterprise-grade security and compliance requirements. Lattice is therefore the optimal solution for automated, zero-trust encryption across services and accounts.

Question 118 

A company wants to route users to the nearest healthy AWS region with automatic failover. Which Route 53 policy should they use?

A) Latency-based routing with health checks
B) Weighted routing
C) Geolocation routing
D) Simple routing

Answer: A)

Explanation: 

Amazon Route 53 latency-based routing combined with health checks is designed to direct users to the AWS region that offers the lowest network latency while automatically avoiding unhealthy endpoints. Latency-based routing measures the round-trip time from a user’s location to AWS regions and returns DNS responses that direct traffic to the region with the shortest latency, improving application performance and user experience globally.

Health checks provide continuous monitoring of the availability and health of each endpoint. If an endpoint becomes unhealthy due to service disruption, latency-based routing automatically redirects traffic to the next lowest-latency healthy region. This combination ensures both optimal performance and high availability for global applications, enabling seamless failover without manual intervention. It is suitable for mission-critical workloads that require minimal downtime and consistent response times.

Weighted routing, while useful for traffic testing or phased rollouts, does not inherently optimize for latency or provide automated failover. Geolocation routing directs users based on geographic location, which may not correspond to the lowest-latency endpoint and does not provide automatic failover. Simple routing returns a single IP address without considering health or performance, offering no resilience or optimization for global users.

Latency-based routing with health checks thus delivers a dynamic, performance-driven, and highly available solution for directing users to the optimal AWS region. It improves both reliability and end-user experience while reducing operational complexity.

Question 119 

A company wants to inspect encrypted traffic centrally without modifying client devices. Which AWS service should they deploy?

A) Gateway Load Balancer with inspection appliances
B) Classic Load Balancer with SSL termination
C) NAT Gateway
D) Security groups

Answer: A)

Explanation: 

AWS Gateway Load Balancer (GWLB) provides a centralized mechanism for inspecting network traffic, including encrypted flows, without requiring changes to client devices or applications. GWLB can transparently route traffic to inspection appliances that decrypt and analyze traffic for security threats such as malware, intrusion attempts, or policy violations. This is critical for enterprises needing a scalable, centralized solution for monitoring multi-VPC or multi-account traffic while maintaining security compliance.

GWLB integrates with AWS Transit Gateway or VPC routing to support inspection across multiple VPCs, enabling organizations to consolidate their security stack and reduce operational complexity. It supports automatic scaling and high availability, distributing traffic across multiple appliances and ensuring consistent performance even under high network load. Inspection appliances maintain source IP visibility and logging, which is essential for auditing and compliance reporting.

Classic Load Balancers with SSL termination can handle HTTP/S traffic decryption but are limited to those protocols, cannot inspect arbitrary network traffic, and still require certificates to be managed on the load balancer. NAT Gateways perform IP translation but cannot inspect encrypted payloads. Security groups operate at L3/L4 and cannot inspect packet contents or encrypted flows.

GWLB with inspection appliances is therefore the ideal solution for enterprises requiring centralized inspection of encrypted traffic, offering scalability, operational efficiency, and comprehensive visibility without disrupting applications.

Question 120 

A company wants to enforce domain-level filtering for outbound DNS traffic across multiple VPCs. Which service should they use?

A) Route 53 Resolver DNS Firewall
B) NAT Gateway
C) Internet Gateway
D) Security groups

Answer: A)

Explanation: 

AWS Route 53 Resolver DNS Firewall enables centralized control over outbound DNS queries at the domain level across multiple VPCs and accounts. Administrators create firewall rule groups to define allowed or blocked domains, which can then be associated with multiple VPCs. This ensures consistent enforcement of security policies, protecting workloads from accessing malicious or unauthorized domains.

Resolver endpoints extend this capability to hybrid architectures, allowing on-premises DNS traffic to traverse the DNS firewall. Query logging captures detailed metadata for auditing, compliance, and threat detection, including information about the queried domain, the VPC, and whether the request was blocked or allowed.

Alternative AWS services do not provide domain-level DNS filtering. NAT Gateways handle IP address translation without DNS-level inspection. Internet Gateways provide general internet access but cannot filter or log domain queries. Security groups operate at L3/L4 and cannot inspect DNS payloads or enforce domain-based policies.

By leveraging Route 53 Resolver DNS Firewall, organizations achieve centralized, granular control over DNS traffic without modifying client applications, ensuring both security and operational consistency across multi-VPC and hybrid environments.
