Amazon AWS Certified Advanced Networking – Specialty ANS-C01 Exam Dumps and Practice Test Questions Set 1 Q1-20

Question 1: 

A company needs to connect two VPCs in different AWS Regions using an encrypted, highly available solution that supports transitive routing between on-premises networks and both VPCs. Which AWS service or design is the BEST fit?

A) AWS Site-to-Site VPN between on-prem and each VPC plus VPC peering between the VPCs
B) AWS Transit Gateway with inter-Region peering and Site-to-Site VPN attachments
C) VPC peering between the VPCs and AWS Direct Connect from on-prem to only one VPC
D) AWS Client VPN configured into each VPC with a shared directory

Answer: B) AWS Transit Gateway with inter-Region peering and Site-to-Site VPN attachments

Explanation: 

A) AWS Site-to-Site VPN between on-prem and each VPC plus VPC peering between the VPCs — Site-to-Site VPN provides encrypted connections from on-prem to VPCs and can connect multiple VPCs individually. VPC peering connects VPCs but does not support transitive routing: peered VPCs cannot route traffic to on-prem via a VPN attached to another VPC. Managing many VPNs and peering relationships becomes operationally heavy and lacks the centralized routing and policy control needed for large-scale transitive topologies. While encrypted, this design does not meet the transitive routing requirement cleanly.

B) AWS Transit Gateway with inter-Region peering and Site-to-Site VPN attachments — Transit Gateway is specifically designed for centralizing connectivity: it acts as a hub for VPCs, VPNs, and Direct Connect attachments and supports route tables to control routing and enable transitive flows. Transit Gateway supports inter-Region peering so separate regional TGWs can route between regions, and Site-to-Site VPN attachments to each TGW allow on-prem networks to access all attached VPCs. This provides encryption, high availability (managed by AWS), simplified management, and transitive routing—so it satisfies the stated requirements best.

C) VPC peering between the VPCs and AWS Direct Connect from on-prem to only one VPC — VPC peering is simple point-to-point connectivity but does not support transitive routing. If Direct Connect terminates into one VPC, the other peered VPC cannot route to on-prem via that Direct Connect. Direct Connect plus Transit VPC patterns can be used but require more components; pure peering plus a single Direct Connect fails the transitive requirement and is operationally limited.

D) AWS Client VPN configured into each VPC with a shared directory — Client VPN is intended for remote user access over TLS; it is not designed for site-to-site topology or high-throughput inter-VPC routing and would not be an appropriate mechanism to connect data center networks and VPCs for transitive routing. It lacks the centralized routing capabilities and throughput characteristics required.

Therefore, Transit Gateway with inter-Region peering and VPN attachments is the most appropriate architecture for encrypted, highly available connectivity that supports transitive routing between on-premises and multiple VPCs.
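
For concreteness, a minimal boto3 sketch of this hub-and-spoke design might look like the following; the customer gateway ID, peer Transit Gateway ID, account number, and Regions are all hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Regional hub: one Transit Gateway per Region
tgw = ec2.create_transit_gateway(Description="us-east-1 hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Encrypted on-prem connectivity: Site-to-Site VPN attached to the TGW
# (cgw-0abc... is a hypothetical, pre-created customer gateway)
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0abc1234567890def",
    TransitGatewayId=tgw_id,
)

# Inter-Region peering to the second Region's TGW (hypothetical IDs)
ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId=tgw_id,
    PeerTransitGatewayId="tgw-0fed0987654321cba",
    PeerAccountId="111122223333",
    PeerRegion="eu-west-1",
)
```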

Question 2: 

A security team requires that all cross-account traffic to a central VPC must traverse an inspection appliance deployed in a security account before reaching other VPCs. Which design enforces this with the least management overhead and supports scale?

A) Central VPC with Network Firewall and route tables redirecting traffic via a NAT gateway
B) AWS Transit Gateway with centralized inspection VPC attachment and Transit Gateway route tables
C) VPC peering with route propagation pointing to the inspection appliance ENI
D) Configure Security Groups to force traffic through the appliance

Answer: B) AWS Transit Gateway with centralized inspection VPC attachment and Transit Gateway route tables

Explanation: 

A) Central VPC with Network Firewall and route tables redirecting traffic via a NAT gateway — Deploying Network Firewall inside a central VPC is viable for inspection, and Network Firewall integrates with VPC route tables via endpoint-like attachments. Using a NAT gateway does not itself perform inspection and is not appropriate for forcing cross-VPC transit through an inspection appliance. Managing custom route table rules across many VPCs becomes cumbersome compared with a hub architecture.

B) AWS Transit Gateway with centralized inspection VPC attachment and Transit Gateway route tables — Transit Gateway supports centralized inspection by attaching a security/inspection VPC to the TGW and using TGW route tables to steer traffic from spoke VPCs through the inspection VPC before reaching other spokes or egress. This model scales well because route tables and attachments are centrally managed, supporting thousands of VPCs. It reduces per-VPC configuration and is the recommended pattern for centrally enforcing inspection and segmentation across accounts.

C) VPC peering with route propagation pointing to the inspection appliance ENI — VPC peering is a direct mesh and doesn’t support transitive routing. You cannot route traffic from one peered VPC to another via a third VPC or an appliance in another VPC. Attempting to use an ENI to forward traffic across peering is not supported by AWS routing. Hence it fails the core requirement.

D) Configure Security Groups to force traffic through the appliance — Security Groups are stateful filters controlling allowed traffic to resources; they cannot force routing or traffic traversal through an appliance. They do not influence packet forwarding paths and thus cannot implement centralized inspection enforcement.

Reasoning about the correct answer: Transit Gateway’s centralized route control and the ability to attach an inspection VPC (or use Network Firewall integrated with TGW) makes it the optimal choice. It supports scalable attachment of many VPCs across accounts and regions with clear routing policies, enabling the security team to mandate that cross-account traffic traverse the inspection appliance prior to reaching destination VPCs. This minimizes distributed management overhead and is designed for large scale, multi-account AWS environments.
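
A rough sketch of the route-table steering, assuming the Transit Gateway and its spoke and inspection attachments already exist (all IDs below are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs for the TGW and its attachments
tgw_id = "tgw-0123456789abcdef0"
spoke_attachment = "tgw-attach-0aaa1111bbbb2222c"
inspection_attachment = "tgw-attach-0ddd3333eeee4444f"

# Route table that spoke VPC attachments are associated with
spoke_rt = ec2.create_transit_gateway_route_table(TransitGatewayId=tgw_id)
spoke_rt_id = spoke_rt["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=spoke_rt_id,
    TransitGatewayAttachmentId=spoke_attachment,
)

# Default route from the spokes points at the inspection VPC attachment,
# so all inter-VPC traffic is steered through the appliance first
ec2.create_transit_gateway_route(
    DestinationCidrBlock="0.0.0.0/0",
    TransitGatewayRouteTableId=spoke_rt_id,
    TransitGatewayAttachmentId=inspection_attachment,
)
```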

Question 3: 

An application requires very low latency and high packets-per-second (PPS) throughput between EC2 instances within the same Region. Which networking feature or instance configuration best achieves this?

A) Use standard elastic network interfaces with instances in different AZs without placement control
B) Deploy instances in the same subnet using placement groups (cluster placement group) and enhanced networking (ENA) enabled instances
C) Use VPC peering across AZs with jumbo frames enabled but without ENA
D) Place instances in different Regions and use AWS Global Accelerator for reduced latency

Answer: B) Deploy instances in the same subnet using placement groups (cluster placement group) and enhanced networking (ENA) enabled instances

Explanation: 

A) Use standard elastic network interfaces with instances in different AZs without placement control — While VPC networking offers cross-AZ connectivity, placing instances without placement control across AZs increases latency relative to tightly colocated instances. Standard ENIs on instances without enhanced networking have higher CPU overhead and lower PPS capability. For the lowest latency and maximum PPS, more advanced options are required.

B) Deploy instances in the same subnet using placement groups (cluster placement group) and enhanced networking (ENA) enabled instances — Cluster placement groups place EC2 instances physically close within a single Availability Zone to provide low-latency, high-throughput networking ideal for HPC and tightly coupled workloads. Enhanced Networking with the Elastic Network Adapter (ENA) reduces latency, improves throughput, and increases PPS. Combining cluster placement (note: a cluster placement group is confined to a single AZ) with ENA yields the best intra-AZ performance. If multi-AZ placement is required for resilience, weigh the latency tradeoff of spreading across AZs; for minimal latency and highest PPS, cluster placement in a single AZ with ENA is best.

C) Use VPC peering across AZs with jumbo frames enabled but without ENA — VPC peering is used for routing between VPCs, not for AZ-level performance tuning. Jumbo frames (MTU 9001) can reduce CPU overhead for large payloads but do not increase PPS or reduce latency significantly for small-packet, high-PPS workloads. Without enhanced networking, high PPS demands will not be met.

D) Place instances in different Regions and use AWS Global Accelerator for reduced latency — Cross-Region deployments inherently introduce higher latency due to distance and cannot beat intra-Region AZ proximity. Global Accelerator optimizes global routing for end users but does not make cross-Region instance-to-instance latency better than well-placed intra-Region instances.

Reasoning about the correct answer: For extremely low latency and high PPS, the best practice is to colocate instances within the same AZ using cluster placement groups and enable enhanced networking (ENA) on instance types that support it. This minimizes network hops, lowers CPU overhead for networking, and maximizes throughput and packet processing capacity. Cross-AZ or cross-Region approaches inherently add latency and cannot match the performance of tightly coupled, colocated instances with ENA.
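
A minimal sketch of launching a tightly coupled pair into a cluster placement group, assuming an ENA-capable instance type such as c5n.large; the AMI and subnet IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together in one AZ
ec2.create_placement_group(GroupName="low-latency-pg", Strategy="cluster")

# Launch ENA-capable instances into the placement group
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.large",
    MinCount=2,
    MaxCount=2,
    SubnetId="subnet-0aaa1111bbbb2222c",
    Placement={"GroupName": "low-latency-pg"},
)
```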

Question 4: 

A network engineer must ensure that packets from a specific on-premises subnet are routed to only a subset of VPCs attached to a Transit Gateway while other on-prem subnets can reach all VPCs. Which Transit Gateway feature accomplishes this?

A) Transit Gateway route propagation from VPN attachments only
B) Transit Gateway route tables with explicit association and propagation controls
C) AWS Resource Access Manager (RAM) to share selective routes
D) VPC route tables with static routes for each on-prem subnet

Answer: B) Transit Gateway route tables with explicit association and propagation controls

Explanation: 

A) Transit Gateway route propagation from VPN attachments only — While propagation determines which routes are learned into a TGW route table from attachments, relying solely on propagation without association control cannot selectively control which spokes are reachable by particular on-prem subnets. Propagation is a component of the solution but not sufficient by itself without route table association.

B) Transit Gateway route tables with explicit association and propagation controls — Transit Gateway supports multiple route tables per TGW and allows each attachment (VPC, VPN, DX) to be associated with a specific route table and to have its routes propagated selectively. By creating route tables that only propagate routes from the desired on-prem subnet attachment into a subset of VPC attachment associations, you can enforce that only specific VPCs receive routes for that subnet while other on-prem subnets have different propagation/association scopes. This feature is explicitly designed for fine-grained routing control and segmentation.

C) AWS Resource Access Manager (RAM) to share selective routes — RAM is used for sharing AWS resources across accounts but does not control Transit Gateway route distribution or per-subnet routing policies. It cannot selectively distribute specific routes from an attachment to a subset of VPCs.

D) VPC route tables with static routes for each on-prem subnet — VPC route tables control traffic leaving the VPC but cannot directly control which routes the Transit Gateway advertises or learns. Managing static routes at VPC level with many on-prem subnets and VPCs is operationally complex and does not provide the centralized associative/propagation controls TGW route tables provide.

Reasoning about the correct answer: Transit Gateway route tables with explicit association and propagation controls are designed for this exact use case: segmenting traffic and controlling which attachments see which routes. By defining multiple TGW route tables and carefully associating attachments and choosing which attachments propagate routes into each route table, you can ensure that a particular on-prem subnet’s routes are visible only to the intended subset of VPCs while other on-prem networks are handled differently. This is scalable and manageable.
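
To make the association/propagation distinction concrete, a minimal boto3 sketch; the attachment and route table IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: a VPN attachment carrying the restricted on-prem
# subnet, one VPC attachment allowed to reach it, and a dedicated
# TGW route table for that subset
restricted_vpn_attach = "tgw-attach-0vpn1111aaaa2222b"
allowed_vpc_attach = "tgw-attach-0vpc3333cccc4444d"
subset_rt_id = "tgw-rtb-0rt5555eeee6666f7"

# Associate only the allowed VPC attachment with the subset route table,
# so its outbound lookups use this table
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=subset_rt_id,
    TransitGatewayAttachmentId=allowed_vpc_attach,
)

# Propagate the restricted on-prem routes only into that route table;
# other TGW route tables never learn them
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId=subset_rt_id,
    TransitGatewayAttachmentId=restricted_vpn_attach,
)
```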

Question 5: 

You need to transfer large datasets (many terabytes) from on-premises to AWS over a private connection with predictable throughput and lower cost than using internet VPN. Which AWS service combination is most appropriate?

A) AWS Snowball for offline transfer only
B) AWS Direct Connect dedicated connection with AWS DataSync over Direct Connect for incremental transfers
C) Site-to-Site VPN with multiple tunnels aggregated for throughput
D) Amazon S3 Transfer Acceleration over public internet

Answer: B) AWS Direct Connect dedicated connection with AWS DataSync over Direct Connect for incremental transfers

Explanation: 

A) AWS Snowball for offline transfer only — Snowball devices are excellent for large one-time bulk transfers where network transfer would be too slow or costly. Snowball is offline physical transfer; it’s a valid pattern for initial bulk ingestion but may not be suitable if ongoing incremental transfers and predictable private connectivity are required. The question implies private connection with predictable throughput and ongoing transfers; Snowball alone does not fully satisfy that.

B) AWS Direct Connect dedicated connection with AWS DataSync over Direct Connect for incremental transfers — Direct Connect provides a private, dedicated network link between on-prem and AWS with predictable throughput and lower egress costs compared to internet. AWS DataSync is optimized for automated, accelerated, and secure transfer of large datasets and supports transferring data over Direct Connect. Using Direct Connect for continuous/private connectivity plus DataSync for efficient, incremental, and parallel transfers gives predictable performance and cost benefits for many terabytes and ongoing sync needs.

C) Site-to-Site VPN with multiple tunnels aggregated for throughput — VPN over the internet is encrypted but subject to public internet variability and often does not provide predictable throughput at the scale of many terabytes. Aggregating tunnels increases complexity and still suffers from jitter and variable latency, making it inferior to Direct Connect for predictable, high-throughput bulk data transfer.

D) Amazon S3 Transfer Acceleration over public internet — Transfer Acceleration uses AWS edge locations to accelerate uploads to S3 over the internet. It’s useful for global client uploads but still traverses the public internet and incurs acceleration fees; for very large datasets many terabytes and private network requirements, Transfer Acceleration is less appropriate than Direct Connect + DataSync.

Reasoning about the correct answer: For large-scale, repeated or ongoing data transfers requiring private connectivity and predictable throughput, a Direct Connect dedicated link combined with DataSync for managed incremental transfers provides the best balance of performance, security, and cost. Snowball is a complementary option for initial seeding if offline transfer is desired, but the question’s emphasis on private, predictable, and incremental transfers points to Direct Connect + DataSync as the optimal solution.
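
A minimal DataSync sketch for the incremental-transfer piece, assuming the source and destination locations have already been created; the location ARNs are hypothetical:

```python
import boto3

datasync = boto3.client("datasync")

# Hypothetical location ARNs: an on-prem share reached over the Direct
# Connect link, and an S3 destination
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-0src",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-0dst",
    Name="incremental-dataset-sync",
)

# Each execution transfers only changed data, giving incremental syncs
datasync.start_task_execution(TaskArn=task["TaskArn"])
```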

Question 6: 

A multi-account AWS environment requires centralized DHCP option management so all VPCs use the same DNS and NTP settings. What’s the most manageable approach?

A) Create a single VPC and share it across accounts using VPC sharing for all workloads
B) Use AWS Organizations SCPs to enforce DHCP options at the account level
C) Use AWS Resource Access Manager (RAM) to share a centrally managed VPC that contains the DHCP options set and use VPC sharing for subnets
D) Create identical DHCP options sets in each account manually

Answer: C) Use AWS Resource Access Manager (RAM) to share a centrally managed VPC that contains the DHCP options set and use VPC sharing for subnets

Explanation: 

A) Create a single VPC and share it across accounts using VPC sharing for all workloads — Sharing a VPC via Resource Access Manager is possible and can centralize DHCP options, but running all workloads in a single VPC may create limits and blast radius concerns. VPC sharing is intended for shared network resources like DNS and centralized services, not necessarily to host all workloads in one VPC.

B) Use AWS Organizations SCPs to enforce DHCP options at the account level — Service Control Policies (SCPs) constrain API actions but do not provision or enforce specific DHCP option sets across VPCs. They cannot set network configuration parameters like DNS or NTP.

C) Use AWS Resource Access Manager (RAM) to share a centrally managed VPC that contains the DHCP options set and use VPC sharing for subnets — RAM combined with VPC sharing allows multiple accounts to consume subnets from a centrally managed VPC. A DHCP options set is associated with the VPC itself and applies to every subnet in it, so centralizing the VPC and sharing its subnets ensures consistent DHCP behavior (DNS, NTP) across all participants. This approach centralizes management and reduces configuration drift.

D) Create identical DHCP options sets in each account manually — Manually replicating DHCP option sets across many accounts is error prone and increases operational overhead. It does not prevent divergence over time and is not the most manageable solution.

Reasoning about the correct answer: While DHCP option sets are VPC-scoped, using AWS RAM and VPC sharing to centralize network resources lets you maintain a single authoritative DHCP configuration and expose subnets to other accounts. This minimizes drift, provides consistent settings for DNS and NTP, and reduces administrative overhead compared to manual replication.
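
A minimal sketch of the central pattern, assuming a network account that owns the shared VPC; the VPC ID, subnet ARN, and participant account number are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# One authoritative DHCP options set in the network account
opts = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name-servers", "Values": ["10.0.0.2"]},
        {"Key": "ntp-servers", "Values": ["10.0.0.123"]},
    ]
)
ec2.associate_dhcp_options(
    DhcpOptionsId=opts["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-0123456789abcdef0",  # the centrally managed, shared VPC
)

# Share the VPC's subnets to participant accounts via RAM
ram.create_resource_share(
    name="shared-network",
    resourceArns=["arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0aaa1111"],
    principals=["444455556666"],
)
```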

Question 7: 

An application requires end-to-end TLS with mutual authentication between services in different AWS accounts. Which approach implements this most securely and with minimal operational complexity?

A) Terminate TLS at Network Load Balancer (NLB) with client certificate verification at backend EC2 instances
B) Use AWS Private CA to issue client and server certificates and deploy mTLS in the application or via an Envoy sidecar (or Gateway) for mutual authentication
C) Use IAM roles for service authentication and skip TLS since IAM provides authorization
D) Use Security Groups to restrict which source IPs can connect

Answer: B) Use AWS Private CA to issue client and server certificates and deploy mTLS in the application or via an Envoy sidecar (or Gateway) for mutual authentication

Explanation: 

A) Terminate TLS at Network Load Balancer (NLB) with client certificate verification at backend EC2 instances — NLB TLS listeners can terminate TLS, but they do not validate client certificates, so mutual authentication would have to be performed at the backend instances (typically by passing TCP through and terminating TLS there). Offloading TLS while performing mutual auth at each backend adds operational complexity and certificate distribution challenges.

B) Use AWS Private CA to issue client and server certificates and deploy mTLS in the application or via an Envoy sidecar (or Gateway) for mutual authentication — AWS Private Certificate Authority (Private CA) allows centralized issuance of private X.509 certificates across accounts. Implementing mutual TLS (mTLS) either at the application level or with an Envoy sidecar/gateway enforces strong cryptographic mutual authentication. Sidecars or a service mesh can centralize cert rotation, revocation, and policy enforcement, reducing operational complexity while ensuring end-to-end security across accounts.

C) Use IAM roles for service authentication and skip TLS since IAM provides authorization — IAM is for API-level authentication and authorization, not for securing transport between services at the network/TLS layer. Skipping TLS would expose traffic to interception and would not satisfy a requirement for mutual TLS encryption and client certificate validation.

D) Use Security Groups to restrict which source IPs can connect — Security Groups provide coarse network-level access control but offer no cryptographic authentication, no confidentiality, and cannot implement mutual identity verification. They are helpful as defense in depth but insufficient alone for mTLS.

Reasoning about the correct answer: AWS Private CA combined with mTLS implemented either in the application or via a sidecar/gateway provides centralized, manageable certificate issuance and automated mutual authentication enforcement. This solution scales across accounts, supports certificate lifecycle operations, and delivers true end-to-end encryption plus mutual identity verification with manageable operational overhead.
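
As a small illustration, a private server certificate can be requested from a shared Private CA through ACM; the domain name and CA ARN below are hypothetical, and client certificates for mTLS are typically issued directly from the Private CA and distributed to workloads or sidecars:

```python
import boto3

acm = boto3.client("acm")

# Request a server certificate signed by a Private CA shared across
# accounts (domain and CA ARN are hypothetical placeholders)
acm.request_certificate(
    DomainName="orders.internal.example.com",
    CertificateAuthorityArn=(
        "arn:aws:acm-pca:us-east-1:111122223333:"
        "certificate-authority/11111111-2222-3333-4444-555555555555"
    ),
)
```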

Question 8: 

You must design routing so that specific prefixes learned from an on-prem BGP speaker are advertised only to a subset of VPCs attached to a Transit Gateway. How can this be achieved?

A) Use BGP communities at the on-prem router and configure Transit Gateway route table propagation filters accordingly
B) Rely on VPC route tables to filter received routes automatically
C) Use ECMP at the Transit Gateway to balance between VPCs and implicitly limit advertisement
D) Use AWS WAF to block prefixes in undesired VPCs

Answer: A) Use BGP communities at the on-prem router and configure Transit Gateway route table propagation filters accordingly

Explanation: 

A) Use BGP communities at the on-prem router and configure Transit Gateway route table propagation filters accordingly — BGP communities can tag routes originating from on-prem, enabling the Transit Gateway (and upstream routers) to apply selective propagation logic. On the TGW side, you configure route tables and control propagation/association so that only specific route tables receive routes with certain community tags. This combination enables selective advertisement to a subset of VPC attachments.

B) Rely on VPC route tables to filter received routes automatically — VPC route tables are local to the VPC and control egress; they don’t filter which routes the TGW propagates to other attachments. They can only control where traffic goes from within the VPC, not what the TGW advertises.

C) Use ECMP at the Transit Gateway to balance between VPCs and implicitly limit advertisement — ECMP is for load balancing across multiple equal-cost paths and does not control route advertisement or selective propagation to attachments. It does not provide policy enforcement for which prefixes are visible to which VPCs.

D) Use AWS WAF to block prefixes in undesired VPCs — AWS WAF operates at Layer 7 for HTTP/S traffic on supported services and cannot filter IP prefixes at the network routing level. It’s irrelevant for controlling BGP route advertisement.

Reasoning about the correct answer: Combining BGP community tagging with Transit Gateway route table propagation and association provides powerful, scalable control over which on-prem prefixes are propagated to which VPC attachments. Communities let the on-prem BGP speaker mark routes, and TGW route tables and propagation controls enforce selective distribution to the desired subset of VPCs.

Question 9: 

A requirement exists to ensure high availability for Direct Connect connectivity between on-prem and AWS across multiple Regions with active-active traffic. Which architecture fulfills this?

A) Single Direct Connect connection to one Region and rely on AWS backbone to route to other Regions
B) Multiple Direct Connect connections to Direct Connect locations in multiple Regions with BGP over DX and Transit Gateway inter-Region peering or Direct Connect Gateway
C) Site-to-Site VPN fallback only and no Direct Connect redundancy
D) Use S3 Cross-Region Replication to distribute traffic between Regions

Answer: B) Multiple Direct Connect connections to Direct Connect locations in multiple Regions with BGP over DX and Transit Gateway inter-Region peering or Direct Connect Gateway

Explanation: 

A) Single Direct Connect connection to one Region and rely on AWS backbone to route to other Regions — While AWS backbone can route traffic between Regions, relying on a single DX connection is a single point of failure for on-prem connectivity. For high availability and active-active traffic, redundant DX connections across multiple locations/Regions are recommended.

B) Multiple Direct Connect connections to Direct Connect locations in multiple Regions with BGP over DX and Transit Gateway inter-Region peering or Direct Connect Gateway — Establishing multiple Direct Connect connections to different DX locations and Regions, using BGP for dynamic routing and leveraging Direct Connect Gateway or Transit Gateway with inter-Region peering, provides active-active connectivity. BGP enables failover and path selection; Direct Connect Gateway allows you to access VPCs across Regions, and Transit Gateway inter-Region peering enables scalable cross-Region VPC connectivity. This design supports redundancy and active traffic distribution.

C) Site-to-Site VPN fallback only and no Direct Connect redundancy — VPN fallback provides resilience but does not provide the predictable performance and lower cost of redundant Direct Connect links. Relying solely on a single DX plus VPN fallback is lower availability compared to multiple active DX connections.

D) Use S3 Cross-Region Replication to distribute traffic between Regions — S3 replication handles object data replication at the S3 level and is irrelevant to on-prem network connectivity and active-active Direct Connect architecture.

Reasoning about the correct answer: High availability and active-active traffic patterns require redundant physical connections and dynamic routing. Multiple Direct Connect connections across locations/Regions with BGP, combined with Direct Connect Gateway or Transit Gateway inter-Region peering, enable resilient, low-latency, and high-bandwidth network design that satisfies the active-active requirement.
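
A minimal sketch of the Direct Connect Gateway piece, assuming the physical connections and virtual interfaces are provisioned separately; the ASN, advertised prefix, and Transit Gateway ID are hypothetical:

```python
import boto3

dx = boto3.client("directconnect")

# A Direct Connect gateway gives the physical DX connections a
# Region-agnostic attachment point (64512 is a hypothetical private ASN)
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dxgw",
    amazonSideAsn=64512,
)

# Associate the DX gateway with a Transit Gateway (hypothetical ID) and
# list the VPC prefixes on-prem should learn over BGP
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGateway"]["directConnectGatewayId"],
    gatewayId="tgw-0123456789abcdef0",
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.10.0.0/16"}],
)
```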

Question 10: 

Your company wants to capture and analyze all VPC traffic metadata for security auditing without impacting application performance. Which AWS feature combination meets this?

A) VPC Flow Logs delivered to CloudWatch Logs, powered by an agent on each instance
B) VPC Traffic Mirroring to an EC2-based inspection instance only
C) VPC Flow Logs (S3 or CloudWatch) for metadata plus Traffic Mirroring selectively for packet-level inspection when needed
D) Enable Packet Capture in Security Groups

Answer: C) VPC Flow Logs (S3 or CloudWatch) for metadata plus Traffic Mirroring selectively for packet-level inspection when needed

Explanation: 

A) VPC Flow Logs delivered to CloudWatch Logs, powered by an agent on each instance — VPC Flow Logs capture network flow metadata at the ENI level and can be delivered to CloudWatch Logs or S3. They do not require agents on instances — they are captured by the VPC infrastructure. However, flow logs only provide metadata (source/destination IP, ports, bytes, packets), not full packet payloads, and may not be sufficient for deep packet inspection. Relying solely on flow logs may miss payload-level indicators.

B) VPC Traffic Mirroring to an EC2-based inspection instance only — Traffic Mirroring enables packet-level capture by mirroring traffic from ENIs to an inspection target. While powerful, mirroring all traffic across many ENIs can significantly impact bandwidth and may introduce cost and performance concerns if overused. Using mirroring alone for all traffic metadata is overkill and operationally heavy compared to flow logs.

C) VPC Flow Logs (S3 or CloudWatch) for metadata plus Traffic Mirroring selectively for packet-level inspection when needed — This combined approach is best: use VPC Flow Logs for scalable, low-impact collection of metadata across the VPC for continuous auditing and analytics. When deeper analysis is required (e.g., suspicious flows identified in flow logs), selectively enable Traffic Mirroring for those ENIs or flows to capture full packets for forensic inspection. This minimizes performance impact and cost while providing both broad visibility and deep inspection capabilities when needed.

D) Enable Packet Capture in Security Groups — Security Groups do not have packet capture capabilities; they only define allowed/denied traffic rules. They cannot capture or export traffic data for analysis.

Reasoning about the correct answer: For non-intrusive, comprehensive metadata capture VPC Flow Logs are the standard. For deeper packet inspection when needed, selective Traffic Mirroring complements flow logs without the heavy overhead of mirroring all traffic. Combining both provides scalable auditing, detection, and forensic capability while limiting performance impact.
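
A minimal sketch of the combined approach, assuming an existing Traffic Mirror target and filter; all resource IDs and the bucket name are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Broad, low-impact metadata collection: flow logs for the whole VPC to S3
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # hypothetical VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::audit-flow-logs-bucket",
)

# Deep inspection only when needed: mirror one suspicious ENI to an
# existing mirror target/filter (hypothetical IDs)
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0aaa1111bbbb2222c",
    TrafficMirrorTargetId="tmt-0ddd3333eeee4444f",
    TrafficMirrorFilterId="tmf-0ggg5555hhhh6666i",
    SessionNumber=1,
)
```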

Question 11: 

A team plans to deploy an application across multiple VPCs and wants DNS names under a single namespace to resolve to different records per VPC (split-view). What is the recommended AWS solution?

A) Use Route 53 public hosted zone and update records per VPC
B) Use Route 53 Resolver inbound and outbound endpoints with conditional forwarding and Route 53 private hosted zones associated per VPC
C) Use EC2 instance as DNS forwarder in each VPC and configure split-DNS manually
D) Use Elastic IPs and hardcode DNS entries in instance hosts file

Answer: B) Use Route 53 Resolver inbound and outbound endpoints with conditional forwarding and Route 53 private hosted zones associated per VPC

Explanation: 

A) Use Route 53 public hosted zone and update records per VPC — Public hosted zones are globally visible and do not provide per-VPC split-view behavior. They expose records to the public internet and cannot implement private, VPC-specific DNS resolution.

B) Use Route 53 Resolver inbound and outbound endpoints with conditional forwarding and Route 53 private hosted zones associated per VPC — Route 53 private hosted zones allow creating private DNS namespaces that are associated with one or more VPCs; the same domain can have different records in different private hosted zones if designed carefully, enabling split-view behavior. Route 53 Resolver endpoints support conditional forwarding between on-prem and VPCs or between VPCs in different accounts, allowing a flexible split-DNS architecture. This is the AWS-native and managed approach for split-view DNS across multi-VPC and multi-account environments.

C) Use EC2 instance as DNS forwarder in each VPC and configure split-DNS manually — Running custom DNS forwarders on EC2 is possible but operationally heavier, less scalable, and introduces single points of failure and management burden compared to Route 53 Resolver and private hosted zones.

D) Use Elastic IPs and hardcode DNS entries in instance hosts file — Hardcoding hosts files is brittle, not scalable, and error-prone. It does not provide a manageable split-DNS solution.

Reasoning about the correct answer: Route 53 private hosted zones associated per VPC, combined with Route 53 Resolver endpoints for conditional forwarding and cross-VPC resolution, provide a managed, scalable way to implement split-view DNS where different VPCs can resolve the same domain to different records while keeping the namespace private and controllable.
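
A minimal sketch, assuming an outbound Resolver endpoint already exists; the zone name, VPC ID, on-prem resolver IP, and endpoint ID are hypothetical:

```python
import boto3
import uuid

r53 = boto3.client("route53")
resolver = boto3.client("route53resolver")

# Private hosted zone for the namespace, associated with one VPC; a
# second zone with the same name can be associated with a different VPC
# to give each VPC its own view of the namespace
r53.create_hosted_zone(
    Name="app.internal.example.com",
    CallerReference=str(uuid.uuid4()),
    HostedZoneConfig={"PrivateZone": True},
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
)

# Conditional forwarding of an on-prem domain via an outbound endpoint
resolver.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),
    RuleType="FORWARD",
    DomainName="corp.example.com",
    TargetIps=[{"Ip": "10.20.0.53", "Port": 53}],
    ResolverEndpointId="rslvr-out-0123456789abcdef",
)
```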

Question 12: 

You require network encryption between VPCs in the same AWS Region without using the public internet. Which approach provides encryption while keeping traffic within the AWS backbone?

A) VPC peering with IPsec tunnels between ENIs
B) Transit Gateway VPN attachments carrying IPsec tunnels that traverse the AWS backbone rather than the public internet
C) Rely on default AWS backbone being encrypted for all VPC peering traffic automatically
D) Use public internet with TLS between services

Answer: B) Transit Gateway VPN attachments carrying IPsec tunnels that traverse the AWS backbone rather than the public internet

Explanation: 

A) VPC peering with IPsec tunnels between ENIs — VPC peering provides private connectivity over the AWS network but does not support custom IPsec tunnels between ENIs natively. Implementing IPsec between EC2 instances is possible but increases management overhead and may traverse host networking not optimized for high performance. It’s not the simplest or most managed way to encrypt VPC-to-VPC traffic.

B) Transit Gateway VPN attachments carrying IPsec tunnels that traverse the AWS backbone rather than the public internet — Transit Gateway supports VPN attachments whose IPsec tunnels can be kept off the public internet, for example by running the VPN over Direct Connect or by using AWS global network features managed through AWS Network Manager. Transit Gateway also integrates with Direct Connect Gateway for private connectivity over AWS infrastructure. Using Transit Gateway with managed VPN attachments gives a scalable, managed way to apply explicit IPsec encryption to inter-VPC traffic while keeping routing centralized and traffic on AWS infrastructure.

C) Rely on default AWS backbone being encrypted for all VPC peering traffic automatically — AWS does not guarantee that the default backbone is encrypted end-to-end at the application layer for all internal traffic. While AWS secures its control plane and infrastructure, customers requiring encryption in transit should implement explicit encryption (TLS, IPsec) as needed for compliance and security controls.

D) Use public internet with TLS between services — Using the public internet increases exposure and variability and is explicitly ruled out by the requirement to stay within AWS backbone and avoid public internet.

Reasoning about the correct answer: For explicit encryption while keeping traffic private within AWS, using Transit Gateway with managed VPN attachments (or combining Direct Connect and VPN) is the managed, scalable solution. It allows IPsec encryption with routing control while leveraging AWS networking to stay off the public internet.

Question 13: 

You must implement a high throughput egress path from multiple VPCs to a single on-premises firewall for inspection. Which design provides scalable, predictable throughput and central inspection?

A) VPC peering to a central VPC that contains a NAT gateway and an EC2 firewall
B) Transit Gateway central egress with a Direct Connect Gateway and EC2-based firewall or an inspection VPC attached to the Transit Gateway
C) Internet Gateway in each VPC directing traffic to on-prem via public internet
D) Use S3 Transfer Acceleration for egress inspection

Answer: B) Transit Gateway central egress with a Direct Connect Gateway and EC2-based firewall or an inspection VPC attached to the Transit Gateway

Explanation: 

A) VPC peering to a central VPC that contains a NAT gateway and an EC2 firewall — VPC peering does not support transitive routing, so a peered VPC cannot route traffic through a central VPC to on-prem. Therefore, peering to centralize egress and inspection is not feasible for multiple VPCs without complex architectures.

B) Transit Gateway central egress with a Direct Connect Gateway and EC2-based firewall or an inspection VPC attached to the Transit Gateway — Using Transit Gateway as a hub, attach an inspection VPC that contains the firewall appliances and connect TGW to on-prem via Direct Connect (Direct Connect Gateway). Transit Gateway route tables can steer egress traffic through the inspection VPC, enabling centralized inspection with scalable throughput. Direct Connect provides predictable bandwidth while TGW supports large scale attachments and route management. This pattern is a common enterprise design for centralized inspection at scale.

C) Internet Gateway in each VPC directing traffic to on-prem via public internet — Sending egress over the public internet to reach on-prem firewalls is insecure and inconsistent with the requirement for predictable throughput and central inspection. Internet Gateways are for public internet access, not centralized on-prem inspection.

D) Use S3 Transfer Acceleration for egress inspection — Transfer Acceleration is for accelerating uploads to S3 via edge locations and is irrelevant to generic network egress inspection and routing.

Reasoning about the correct answer: Transit Gateway central egress with a Direct Connect Gateway and an inspection VPC enables centralization, predictable bandwidth via Direct Connect, and scalable routing via Transit Gateway. It is the recommended enterprise pattern to achieve central inspection for multiple VPCs while maintaining performance and manageability.

Question 14: 

A compliance requirement mandates packet capture retention for 90 days for specific workloads. Capturing packets at scale in AWS should minimize performance impact. Which approach satisfies this?

A) Enable VPC Flow Logs and store logs for 90 days

B) Use Traffic Mirroring to mirror only the workload ENIs to an autoscaling fleet of capture appliances that write PCAPs to S3 with lifecycle policies retaining 90 days

C) Install tcpdump on every instance and push PCAPs to S3 via cron jobs

D) Use CloudTrail to capture network packets

Answer: B) Use Traffic Mirroring to mirror only the workload ENIs to an autoscaling fleet of capture appliances that write PCAPs to S3 with lifecycle policies retaining 90 days

Explanation: 

A) Enable VPC Flow Logs and store logs for 90 days — VPC Flow Logs are excellent for metadata (IPs, ports, bytes) and are low overhead for long retention, but they do not provide packet payloads required for true packet capture and forensic analysis. If regulation requires full packet capture, flow logs alone are insufficient.

B) Use Traffic Mirroring to mirror only the workload ENIs to an autoscaling fleet of capture appliances that write PCAPs to S3 with lifecycle policies retaining 90 days — Traffic Mirroring allows selective mirroring of specific ENIs or traffic filters and sends mirrored traffic to monitoring appliances (EC2 or NLBs integrated with packet collectors). Using an autoscaling capture fleet ensures scalability and resilience; writing PCAPs to S3 with lifecycle rules enables retention control and cost management. This approach minimizes impact by mirroring only targeted traffic and offloading storage to S3.

C) Install tcpdump on every instance and push PCAPs to S3 via cron jobs — While possible, installing packet capture tooling on application instances burdens those instances, risking performance degradation and complexity across updates and scaling. Centralized mirroring to dedicated capture appliances is preferable.

D) Use CloudTrail to capture network packets — CloudTrail logs AWS API activity; it does not capture network packets. It cannot meet packet capture compliance requirements.

Reasoning about the correct answer: Traffic Mirroring combined with a scalable capture pipeline writing to S3 with lifecycle retention achieves selective packet capture at scale while minimizing impact on application instances. It satisfies retention requirements and centralizes management and storage.
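
The retention piece can be enforced with an S3 lifecycle rule on the capture bucket; a minimal sketch with a hypothetical bucket name and key prefix:

```python
import boto3

s3 = boto3.client("s3")

# Expire PCAP objects after the mandated 90 days
s3.put_bucket_lifecycle_configuration(
    Bucket="pcap-capture-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-pcaps-90d",
                "Status": "Enabled",
                "Filter": {"Prefix": "pcaps/"},
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```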

Question 15: 

An application in multiple VPCs needs to access a shared on-premises LDAP server with low latency and high availability. The on-premises team can support BGP. Which AWS architecture is best?

A) Each VPC uses Site-to-Site VPN to on-prem with static routes pointing to the LDAP server
B) Use Direct Connect with a Direct Connect Gateway and Transit Gateway in AWS to expose the LDAP server to all VPCs with BGP for dynamic routing and redundancy
C) Replicate LDAP to AWS and stop using on-prem entirely
D) Use API Gateway to proxy LDAP requests over HTTPS

Answer: B) Use Direct Connect with a Direct Connect Gateway and Transit Gateway in AWS to expose the LDAP server to all VPCs with BGP for dynamic routing and redundancy

Explanation: 

A) Each VPC uses Site-to-Site VPN to on-prem with static routes pointing to the LDAP server — While VPNs are viable, managing multiple VPNs and static routes across many VPCs creates operational complexity. VPNs over the internet may not provide the low latency and SLA desired compared to Direct Connect.

B) Use Direct Connect with a Direct Connect Gateway and Transit Gateway in AWS to expose the LDAP server to all VPCs with BGP for dynamic routing and redundancy — Direct Connect provides a private, low-latency, high-bandwidth connection ideal for LDAP traffic that may be latency sensitive. Using Direct Connect Gateway (to span multiple regions if needed) and Transit Gateway to attach multiple VPCs centrally lets you expose the on-prem LDAP server to all VPCs efficiently. BGP provides dynamic route exchange and redundancy, enabling failover and simplified routing management. This pattern supports HA and low latency.

C) Replicate LDAP to AWS and stop using on-prem entirely — Replication may be an architectural alternative, but it requires LDAP replication design, data consistency handling, and possibly application changes. If the requirement is to access the on-prem LDAP server specifically, replication may not be acceptable.

D) Use API Gateway to proxy LDAP requests over HTTPS — LDAP is a directory protocol over TCP (often LDAPS). Proxying through API Gateway is not a native fit for LDAP and would require middleware translating protocols, adding latency and complexity.

Reasoning about the correct answer: Direct Connect + Direct Connect Gateway + Transit Gateway with BGP provides dedicated, low-latency, highly available connectivity for LDAP access from multiple VPCs, with centralized routing and redundancy. It minimizes latency and operational complexity compared to many point-to-point VPNs.

Question 16: 

You want to centralize egress internet access for audit and cost control. Which AWS pattern centralizes egress and allows per-account visibility and control?

A) Configure Internet Gateway in each VPC and use NAT Gateways per VPC with CloudWatch billing tags
B) Central egress VPC with Transit Gateway, an egress proxy/NGFW, and route tables steering egress through the central VPC; use VPC Flow Logs and AWS Traffic Mirroring for visibility
C) Use VPC peering and route all traffic through a single NAT instance in one VPC
D) Rely on default route tables to manage egress centrally

Answer: B) Central egress VPC with Transit Gateway, an egress proxy/NGFW, and route tables steering egress through the central VPC; use VPC Flow Logs and AWS Traffic Mirroring for visibility

Explanation: 

A) Configure Internet Gateway in each VPC and use NAT Gateways per VPC with CloudWatch billing tags — This is decentralized; while NAT Gateways provide egress, managing auditing and control per account across many NATs is harder. Billing tags help cost allocation but don't centralize policy enforcement or provide a single inspection point.

B) Central egress VPC with Transit Gateway, an egress proxy/NGFW, and route tables steering egress through the central VPC; use VPC Flow Logs and AWS Traffic Mirroring for visibility — A centralized egress VPC attached to Transit Gateway allows routing all outbound traffic through a controlled inspection/proxy/firewall environment. Transit Gateway route tables steer traffic, and VPC Flow Logs plus selective Traffic Mirroring provide per-account visibility and packet inspection as needed. This provides centralized control, auditing, and cost management while enabling consistent policy enforcement.

C) Use VPC peering and route all traffic through a single NAT instance in one VPC — VPC peering is not transitive and cannot easily centralize egress across many VPCs. A NAT instance is also a single point of failure and less scalable than managed NAT gateways or centralized inspection via TGW.

D) Rely on default route tables to manage egress centrally — Default route tables are local constructs and do not enable centralized control across accounts or VPCs. They cannot enforce cross-VPC egress routing without additional constructs.

Reasoning about the correct answer: The Transit Gateway central egress model provides a managed, scalable way to funnel and inspect outbound traffic centrally, enabling centralized audit, egress policy enforcement, and per-account visibility when combined with logging and mirroring.

Question 17: 

An application needs deterministic routing between on-prem and AWS for certain prefixes and preference for Direct Connect over VPN. Which Border Gateway Protocol (BGP) mechanism can influence path selection for preferred routes?

A) BGP local preference and AS path prepending combined with route maps/policies on on-prem routers and AWS side (if supported)
B) Use ECMP on Direct Connect to prefer VPN paths
C) Rely on static routes in Transit Gateway to influence BGP path selection
D) Modify Security Group rules to change path selection

Answer: A) BGP local preference and AS path prepending combined with route maps/policies on on-prem routers and AWS side (if supported)

Explanation: 

A) BGP local preference and AS path prepending combined with route maps/policies on on-prem routers and AWS side (if supported) — BGP provides mechanisms to influence path selection such as local preference (higher preferred), AS path prepending (longer AS path is less preferred by peers), MED, and route policies. Setting local preference for routes learned via Direct Connect and using AS path prepending for VPN routes can influence inbound and outbound path selection deterministically. While some adjustments are performed on on-prem routers, AWS Direct Connect and Transit Gateway support BGP attributes for route exchange, so coordinating policies across both sides achieves the desired preference.

B) Use ECMP on Direct Connect to prefer VPN paths — ECMP is for load balancing across equal-cost paths, not for preferring one transport over another. It won’t deterministically prefer Direct Connect over VPN without BGP policy changes.

C) Rely on static routes in Transit Gateway to influence BGP path selection — TGW static routes control local routing but do not directly alter BGP attributes across AS boundaries. Deterministic path selection across BGP requires BGP attribute manipulation.

D) Modify Security Group rules to change path selection — Security Groups only filter traffic; they cannot change route selection or influence BGP.

Reasoning about the correct answer: BGP attributes like local preference and AS path prepending, combined with well-designed route maps/policies, are the standard way to influence deterministic path selection in multi-path, multi-transport network designs. For preferring Direct Connect over VPN, setting higher local preference on Direct Connect-learned routes and using prepending on VPN paths is effective.

Question 18: 

A service requires preserving source IP addresses when traffic goes through a load balancer to backend EC2 instances. Which AWS load balancing option preserves client source IP?

A) Application Load Balancer (ALB) preserves source IP by default
B) Network Load Balancer (NLB) with TCP/UDP target type preserves the original source IP for instances in the target group when using instance targets or IP targets appropriately
C) Classic Load Balancer always rewrites source IP and cannot preserve it
D) Use NAT Gateways to preserve source IP

Answer: B) Network Load Balancer (NLB) with TCP/UDP target type preserves the original source IP for instances in the target group when using instance targets or IP targets appropriately

Explanation: 

A) Application Load Balancer (ALB) preserves source IP by default — ALB operates at Layer 7 and terminates the client connection; it does not preserve the original source IP to the backend TCP connection. Instead, ALB adds X-Forwarded-For headers containing the client IP for HTTP/HTTPS traffic so applications can learn the original IP at the application layer, but the EC2 instance’s socket sees the ALB’s IP.

B) Network Load Balancer (NLB) with TCP/UDP target type preserves the original source IP for instances in the target group when using instance targets or IP targets appropriately — NLB is a Layer 4 load balancer that, in its default mode for instance targets or with appropriate target configurations, preserves the client’s source IP at the backend instance’s TCP/IP socket. This is the correct option when backend applications require the original client IP at the OS/network layer.

C) Classic Load Balancer always rewrites source IP and cannot preserve it — The Classic Load Balancer proxies connections, so backends see the load balancer's address; in TCP mode it can convey the client IP only via Proxy Protocol headers. It is legacy and less flexible than NLB for preserving original source IPs in modern designs.

D) Use NAT Gateways to preserve source IP — NAT Gateways perform source NAT and rewrite source addresses to the NAT gateway’s elastic IP; they do not preserve the original client source IP to backends.

Reasoning about the correct answer: For scenarios requiring the original client source IP at the socket level (for logging, IP-based access control, or geolocation), the Network Load Balancer is the AWS-recommended solution because it operates at Layer 4 and can forward the original source IP without termination. ALB provides application headers for HTTP but not raw source IP preservation at the network layer.
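
A minimal sketch of an NLB target group with source IP preservation; the VPC ID is hypothetical. The preserve_client_ip attribute matters mainly for IP targets, since instance targets preserve the client IP by default:

```python
import boto3

elbv2 = boto3.client("elbv2")

# TCP target group for a Network Load Balancer. With instance targets,
# the client source IP is preserved by default.
tg = elbv2.create_target_group(
    Name="tcp-backends",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# For IP targets, preservation is an explicit target group attribute
elbv2.modify_target_group_attributes(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    Attributes=[{"Key": "preserve_client_ip.enabled", "Value": "true"}],
)
```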

Question 19: 

You must enforce that traffic between two subnets always traverses a virtual appliance (insertion) for inspection regardless of which AZ the instances are in. Which AWS feature supports enforcing this across AZs?

A) Use NACLs to redirect traffic to the appliance ENI
B) Use Transit Gateway with appliance VPC and route tables that steer traffic from spoke VPCs/subnets through the appliance, ensuring per-AZ routing via TGW attachments and route propagation
C) Use security groups to force traffic via appliance
D) Use Elastic Load Balancer with sticky sessions to force traversal

Answer: B) Use Transit Gateway with appliance VPC and route tables that steer traffic from spoke VPCs/subnets through the appliance, ensuring per-AZ routing via TGW attachments and route propagation

Explanation: 

A) Use NACLs to redirect traffic to the appliance ENI — Network ACLs are stateless packet filters and cannot perform packet forwarding or redirection to specific ENIs. They cannot enforce traversal through an appliance.

B) Use Transit Gateway with appliance VPC and route tables that steer traffic from spoke VPCs/subnets through the appliance, ensuring per-AZ routing via TGW attachments and route propagation — Transit Gateway supports centralized routing where attachments are associated with route tables and propagation can be controlled. By attaching an inspection/appliance VPC and using TGW route tables to direct traffic, you can ensure traffic from any AZ/spoke traverses the appliance. The TGW architecture supports this enforcement uniformly across AZs.

C) Use security groups to force traffic via appliance — Security Groups control permitted traffic but do not influence routing. They cannot cause traffic redirection or ensure traversal through an appliance.

D) Use Elastic Load Balancer with sticky sessions to force traversal — Load balancers distribute traffic but are not designed to enforce that arbitrary subnet-to-subnet traffic flows through an appliance. They are for client-to-service load distribution, not for steering inter-subnet traffic.

Reasoning about the correct answer: Transit Gateway route tables and an appliance VPC provide a robust method to enforce traffic inspection insertion across AZ boundaries. Proper TGW association and propagation policies guarantee that traffic between subnets will be routed through the inspection appliances regardless of AZ placement.

Question 20: 

A customer needs to reduce cross-AZ data transfer charges for high east-west traffic between VPCs in the same Region while maintaining isolation. What approaches can help reduce costs?

A) Consolidate services into fewer AZs using cluster placement groups to remove cross-AZ transfers entirely (tradeoff with AZ resilience)
B) Use a single VPC with multiple subnets in the same AZs and leverage ENIs and local traffic where possible, or use Transit Gateway with intra-AZ routing optimizations (note some patterns may not eliminate cross-AZ charges)
C) Use VPC peering and move frequently communicating services into the same AZ where possible; use PrivateLink for service endpoints to avoid cross-AZ traffic when clients and endpoints are in same AZ
D) All of the above combined with evaluating architecture tradeoffs

Answer: D) All of the above combined with evaluating architecture tradeoffs

Explanation: 

A) Consolidate services into fewer AZs using cluster placement groups to remove cross-AZ transfers entirely (tradeoff with AZ resilience) — Moving interacting services into the same AZ reduces cross-AZ data charges because traffic that stays within a single AZ typically avoids cross-AZ transfer billing. However, this reduces availability and resiliency since an AZ failure could affect all services. It's a tradeoff between cost and fault tolerance.

B) Use a single VPC with multiple subnets in the same AZs and leverage ENIs and local traffic where possible, or use Transit Gateway with intra-AZ routing optimizations (note some patterns may not eliminate cross-AZ charges) — Consolidating into a single VPC and designing to keep traffic local to an AZ reduces cross-AZ billing. Transit Gateway and other AWS services have specific billing models; review TGW data transfer pricing and design to minimize inter-AZ hops. Some TGW patterns still incur inter-AZ charges, so careful design is required.

C) Use VPC peering and move frequently communicating services into the same AZ where possible; use PrivateLink for service endpoints to avoid cross-AZ traffic when clients and endpoints are in same AZ — VPC peering keeps traffic on AWS backbone and cross-AZ charges may still apply depending on AZ placement. PrivateLink creates ENIs in consumer VPCs in the same AZ as the client, enabling traffic to remain local if consumer and endpoint are in the same AZ, reducing cross-AZ costs.

D) All of the above combined with evaluating architecture tradeoffs — A combination of these approaches—architectural consolidation where acceptable, using PrivateLink for service endpoints, placing high-bandwidth peers in the same AZ, and reviewing Transit Gateway/VPC peering patterns—yields the best cost reduction while balancing availability and isolation requirements. Each option has tradeoffs and should be evaluated against business requirements.

Reasoning about the correct answer: Reducing cross-AZ data transfer costs requires architectural choices—co-locating high-bandwidth peers, leveraging PrivateLink, careful use of TGW and peering, and understanding pricing. No single approach fits all cases; combining techniques while weighing resiliency tradeoffs is the practical path to cost optimization.
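
As one concrete cost lever, an interface endpoint (PrivateLink) can be created in the same subnet/AZ as its clients so that consumer-to-endpoint traffic stays intra-AZ; the service name and resource IDs below are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Interface endpoint placed in the clients' subnet/AZ, so traffic from
# consumers to the endpoint ENI can remain local to that AZ
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0aaa1111bbbb2222c",
    SubnetIds=["subnet-0aaa1111bbbb2222c"],  # same AZ as the clients
    SecurityGroupIds=["sg-0ddd3333eeee4444f"],
)
```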
