
300-420 Cisco Practice Test Questions and Exam Dumps
Question No 1:
Which two BGP features will result in successful route exchanges between eBGP neighbors sharing the same AS number? (Choose two.)
A advertise-best-external
B bestpath as-path ignore
C client-to-client reflection
D as-override
E allow-as-in
Correct Answer: D, E
Explanation:
BGP (Border Gateway Protocol) is a critical protocol used for exchanging routing information between different networks, typically across the internet. However, eBGP (External BGP) typically assumes that each neighbor belongs to a different Autonomous System (AS). When eBGP neighbors share the same AS number, special BGP features are necessary to facilitate proper route exchanges. Let’s go over the two key BGP features in the provided options that allow such exchanges to occur.
D. as-override
This feature addresses the case where the AS number of the receiving neighbor already appears in the AS path. By default, a BGP router rejects any route whose AS path contains its own AS number, because that looks like a routing loop. With as-override configured, the advertising router replaces occurrences of the neighbor's AS number in the AS path with its own AS number before sending the update. The receiving router then no longer sees its own AS in the path and accepts the route, which allows prefixes to be exchanged between peers or sites that share the same AS number.
E. allow-as-in
The allow-as-in feature (allowas-in in IOS syntax) lets a router accept routes whose AS path already contains its own AS number. By default, BGP discards such routes as part of its loop-prevention mechanism. When neighbors legitimately share an AS number, for example in a hub-and-spoke MPLS VPN design where several customer sites use the same AS, allowas-in tells the receiving router to permit a configurable number of occurrences of its own AS in the path and to install and propagate the route normally.
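As a hedged illustration of where each command sits, a minimal IOS sketch is shown below. The AS numbers, neighbor addresses, and the VRF name CUSTOMER are hypothetical, and the as-override side assumes the common MPLS VPN case where a provider router advertises routes between customer sites that share an AS:

  ! Advertising side (e.g., PE router): rewrite the neighbor's AS in outgoing updates
  router bgp 65000
   address-family ipv4 vrf CUSTOMER
    neighbor 192.0.2.1 remote-as 65100
    neighbor 192.0.2.1 activate
    neighbor 192.0.2.1 as-override

  ! Receiving side (e.g., CE router): accept routes containing the local AS once
  router bgp 65100
   neighbor 192.0.2.2 remote-as 65000
   neighbor 192.0.2.2 allowas-in 1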
The other options are not directly relevant to resolving the issue of route exchanges between eBGP neighbors sharing the same AS number:
A. advertise-best-external
This feature causes a router to advertise its best external path to iBGP peers even when its overall best path is an internal route; it is used to improve convergence, not to resolve AS-number conflicts between eBGP peers, so it is not relevant to this question.
B. bestpath as-path ignore
This feature allows BGP to ignore the AS path when determining the best path. While useful in certain scenarios, it does not directly impact the ability of eBGP peers with the same AS number to exchange routes. It is more focused on influencing the path selection process rather than enabling eBGP route exchanges between peers with the same AS number.
C. client-to-client reflection
This feature is related to route reflectors and defines how a route reflector handles routes from one client to another. This is not directly relevant to eBGP neighbors sharing the same AS number. It is more concerned with the internal BGP (iBGP) topology and client configurations rather than eBGP operations.
In conclusion, the correct features that enable successful route exchanges between eBGP neighbors sharing the same AS number are as-override and allow-as-in because they specifically allow BGP to handle the unique case of neighbors with identical AS numbers.
Question No 2:
A customer has an IPv4-only network and wants to enable IPv6 connectivity while preserving the current IPv4 topology. The customer plans to migrate IPv4 services to the IPv6 network and eventually decommission the IPv4 topology.
Which network topology supports these requirements?
A dual stack
B 6VPE
C 6to4
D NAT64
Correct answer: A
Explanation:
In this scenario, the customer seeks to implement IPv6 connectivity while maintaining the existing IPv4 services temporarily, enabling a smooth migration to IPv6. The solution must allow both IPv4 and IPv6 to coexist until the customer completes the migration, after which the IPv4 infrastructure can be decommissioned. This scenario requires a solution that supports both IPv4 and IPv6 running simultaneously.
The dual stack topology is the most appropriate for this use case. Dual stack enables both IPv4 and IPv6 to run on the same network infrastructure. In a dual-stack network, devices and routers are configured to handle both IPv4 and IPv6 addresses, meaning they can support communication over either protocol depending on the situation. This solution allows the customer to implement IPv6 connectivity while still utilizing the existing IPv4 network, making it ideal for gradual migration to IPv6. Over time, IPv6 traffic can be increased while IPv4 services are gradually decommissioned.
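As a point of reference, a minimal dual-stack interface sketch on IOS might look like the following; the interface name and addresses are hypothetical and simply show IPv4 and IPv6 coexisting on the same link:

  ipv6 unicast-routing
  !
  interface GigabitEthernet0/0
   ip address 192.168.10.1 255.255.255.0
   ipv6 address 2001:DB8:10::1/64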
Let’s analyze the other options:
B. 6VPE (IPv6 VPN Provider Edge): This option extends IPv6 connectivity across an MPLS VPN backbone. While 6VPE allows IPv6 traffic to be carried over an IPv4/MPLS core, it is a specialized solution aimed at service providers offering IPv6 VPN services on existing MPLS infrastructure. It is not the best fit for a customer who simply wants a straightforward migration from IPv4 to IPv6 across their own network.
C. 6to4: The 6to4 tunneling mechanism carries IPv6 packets over an IPv4 network. It is useful when isolated IPv6 sites need to reach each other across an IPv4-only backbone, but it is a transitional tunnel rather than a long-term dual-stack strategy. It does not fit a plan to run both protocols side by side on the existing topology and then retire IPv4.
D. NAT64: NAT64 is a technique used to translate IPv6 addresses to IPv4 addresses and vice versa. It’s often used in environments where IPv6-only clients need to access IPv4-only servers. However, NAT64 is a solution for specific scenarios where there is already IPv6-only or IPv4-only communication. It does not provide the full dual-stack functionality required for a gradual migration from IPv4 to IPv6 across the entire network.
In summary, the dual stack topology is the best solution for this customer because it allows both IPv4 and IPv6 to operate simultaneously, enabling a gradual migration while still preserving IPv4 services during the transition. This aligns perfectly with the customer’s goal of migrating to IPv6 and eventually decommissioning the IPv4 network.
Question No 3:
A company is running BGP on a single router that has two connections to the same ISP. Which BGP feature ensures that traffic is distributed across both links to the ISP for load balancing?
A Multihop
B Multipath Load Sharing
C Next-Hop Address Tracking
D AS-Path Prepending
Correct Answer: B
Explanation:
To achieve load balancing across two links to the same ISP using BGP, the router must be able to use both links simultaneously for traffic, distributing the traffic across them in a balanced manner. The BGP feature that accomplishes this is Multipath Load Sharing.
Multipath Load Sharing allows a router to install multiple equal-cost BGP paths to the same destination. When several routes learned from the same neighboring AS have matching attributes (weight, local preference, AS path length, origin, and MED), enabling multipath with the maximum-paths command lets BGP place more than one of them in the routing table and distribute traffic across those links, rather than sending everything over a single best path.
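A minimal IOS sketch of this feature is shown below; the AS numbers and neighbor addresses are hypothetical, and maximum-paths 2 assumes exactly two links to the same ISP whose path attributes match:

  router bgp 65001
   neighbor 198.51.100.1 remote-as 65500
   neighbor 198.51.100.5 remote-as 65500
   maximum-paths 2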
Here's a breakdown of the other options and why they are incorrect:
A (Multihop) refers to a method where BGP peers are more than one hop away from each other. This is typically used when BGP peers are not directly connected, such as in scenarios where the BGP session is established between routers that are not directly adjacent. While useful for some BGP configurations, it does not directly contribute to load balancing.
C (Next-Hop Address Tracking) is a feature that monitors the reachability of BGP next-hop addresses in the routing table so that BGP can react quickly when a next hop changes or becomes unreachable. It speeds up convergence, but it does not perform load balancing across multiple paths.
D (AS-Path Prepending) is a technique used to influence the BGP routing decision by artificially increasing the length of the AS Path for specific routes. This can make certain paths less attractive to other BGP routers, but it does not perform load balancing across multiple links. Instead, it is used to control outbound routing decisions.
The correct answer is B, Multipath Load Sharing, which enables BGP to distribute traffic across multiple equal-cost paths, ensuring that both links to the ISP are used for load balancing. This feature is key for optimizing bandwidth utilization and redundancy when multiple connections to the same ISP exist.
Question No 4:
Company A has recently acquired another company, and users of the newly acquired company need access to a server located on Company A's network. However, both companies use overlapping IP address ranges.
Which action would conserve IP address space and provide access to the server?
A Use a single IP address to create overload NAT
B Use a single IP address to create a static NAT entry
C Build one-to-one NAT translation for every user that needs access
D Re-IP overlapping address space in the acquired company
Correct answer: A
Explanation:
In situations where two networks use overlapping IP address ranges — such as in the case of a merger or acquisition — direct access between the networks can become problematic because routing and address conflicts prevent communication. To resolve this while conserving IP address space, Network Address Translation (NAT) is often used.
Let’s analyze each option:
Option A: Use a single IP address to create overload NAT
This option refers to PAT (Port Address Translation), a form of NAT where multiple devices on a private network can share a single public IP address, differentiating them by their port numbers. In this scenario, users from the acquired company, who have overlapping IP addresses with Company A’s network, can be mapped to a single IP address, thus preserving IP address space while allowing access to Company A’s server. This solution is efficient and minimizes the need for extensive changes or a large pool of new IP addresses. Overload NAT works well for conserving IP address space, making it the best option for this situation.
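For illustration, a minimal overload NAT (PAT) sketch on IOS is shown below; the access list, interface names, and addresses are hypothetical, and the overload keyword is what allows many inside hosts to share the single outside address:

  access-list 10 permit 10.10.0.0 0.0.255.255
  !
  interface GigabitEthernet0/0
   ip nat inside
  !
  interface GigabitEthernet0/1
   ip address 203.0.113.2 255.255.255.0
   ip nat outside
  !
  ip nat inside source list 10 interface GigabitEthernet0/1 overload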
Option B: Use a single IP address to create a static NAT entry
A static NAT mapping would involve mapping an internal IP address to a specific external IP address on a one-to-one basis. While this option would allow access to a specific server on Company A’s network, it does not scale well if multiple users from the acquired company need to access multiple resources or servers. This approach also consumes IP address space more quickly, as each internal IP address would need a corresponding external IP address. Hence, it is less efficient than Option A.
Option C: Build one-to-one NAT translation for every user that needs access
This option implies a separate one-to-one NAT translation for each user, mapping each overlapping internal IP address to a unique external IP address. This would consume a large amount of IP address space, as each user would require a dedicated NAT translation. This approach is not ideal for conserving IP address space, as it would require a significant number of IP addresses for each user needing access.
Option D: Re-IP overlapping address space in the acquired company
Re-IPing the acquired company’s entire network would involve reassigning new, non-overlapping IP addresses to every device and user on that network. While this would eliminate the overlap issue, it would be a time-consuming and costly process. Additionally, it could cause disruptions in services and require extensive reconfiguration across both networks. This solution is effective but not the most efficient or cost-effective, especially compared to using NAT to resolve the issue.
Thus, Option A is the most efficient solution. By using a single IP address to create overload NAT, you can allow users from the acquired company to access resources on Company A’s network without requiring extensive changes to the IP address space, preserving both the network’s integrity and its available address space. This solution is scalable and provides a quick, less disruptive fix.
Question No 5:
Which design consideration should be observed when EIGRP is configured on Data Center switches?
A Perform manual summarization on all Layer 3 interfaces to minimize the size of the routing table.
B Prevent unnecessary EIGRP neighborships from forming across switch virtual interfaces.
C Lower EIGRP hello and hold timers to their minimum settings to ensure rapid route reconvergence.
D Configure multiple EIGRP autonomous systems to segment Data Center services and applications.
Correct answer: B
Explanation:
When configuring EIGRP (Enhanced Interior Gateway Routing Protocol) on Data Center switches, the main consideration should focus on preventing unnecessary EIGRP neighborships from forming across switch virtual interfaces (SVIs). This is crucial for maintaining a clean and efficient routing topology, as unnecessary neighborships could result in redundant routing updates, increased CPU utilization, and unnecessary network traffic.
In a Data Center environment, switches often have multiple Layer 3 interfaces, especially if SVIs (Switch Virtual Interfaces) are used for routing between VLANs. If EIGRP is configured across all interfaces indiscriminately, neighborships might form between interfaces that are not meant to participate in the same EIGRP domain. This can lead to inefficient routing and can cause issues like unnecessary EIGRP updates or even loops. Therefore, the design recommendation is to control which interfaces are allowed to form EIGRP neighborships using interface-level configuration or by applying filtering techniques to limit unnecessary neighbor relationships.
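One common way to implement this on IOS is the passive-interface command, as in the hedged sketch below; the autonomous system number, network statement, and VLAN interfaces are hypothetical, and only the SVIs that should form adjacencies are re-enabled:

  router eigrp 100
   network 10.0.0.0
   passive-interface default
   no passive-interface Vlan10
   no passive-interface Vlan20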
Let’s review the other options:
A – Perform manual summarization on all Layer 3 interfaces to minimize the size of the routing table:
While manual summarization is a good practice in large networks to reduce routing table size and speed up convergence, it is not the primary design consideration in this context. Summarizing on every Layer 3 interface is rarely feasible or desirable, especially in a Data Center where subnets are spread across many switches and aggressive summarization can hide more specific routes. The primary concern in this question is limiting the formation of unnecessary neighborships, not the routing table size.
C – Lower EIGRP hello and hold timers to their minimum settings to ensure rapid route reconvergence:
Although hello and hold timers can influence the speed of EIGRP’s route reconvergence, lowering them to the minimum settings is generally not recommended in Data Center environments. This could result in increased overhead, as the protocol would be more sensitive to network instability or transient issues. EIGRP’s default timers are typically sufficient for most Data Center environments, and reducing them too much could lead to unnecessary flapping or instability in the network.
D – Configure multiple EIGRP autonomous systems to segment Data Center services and applications:
While using multiple EIGRP autonomous systems could theoretically segment routing for different services or applications, this approach is not ideal in a Data Center environment. Maintaining multiple EIGRP autonomous systems can add unnecessary complexity to the network design and could make it more difficult to troubleshoot or manage. In practice, a single EIGRP autonomous system is usually sufficient to handle Data Center routing, especially when combined with techniques like route filtering and summarization to control routing information flow.
In summary, the correct design consideration when configuring EIGRP on Data Center switches is to prevent unnecessary EIGRP neighborships from forming across switch virtual interfaces (SVIs), as it helps in maintaining a clean, efficient, and manageable routing environment. Therefore, the correct answer is B.
Question No 6:
Which design consideration must be made when using IPv6 overlay tunnels?
A Overlay tunnels that connect isolated IPv6 networks are considered a final IPv6 network architecture.
B Overlay tunnels should only be considered as a transition technique toward a permanent solution.
C Overlay tunnels should be configured only between border devices and require only the IPv6 protocol stack.
D Overlay tunneling encapsulates IPv4 packets in IPv6 packets for delivery across an IPv6 infrastructure.
Correct Answer: B
Explanation:
When designing networks with IPv6 overlay tunnels, it is essential to recognize that these tunnels are typically not considered a permanent, long-term solution. B is the correct answer because overlay tunnels are generally used as a transitional mechanism to help organizations move from IPv4 to IPv6 networks during a migration period. These tunnels allow IPv6 packets to be transmitted over an IPv4 network by encapsulating the IPv6 packets within IPv4 packets. This method provides a bridge while the infrastructure is transitioning toward full IPv6 adoption.
The key consideration when using IPv6 overlay tunnels is that they are not a final architecture but rather a temporary solution that helps ensure IPv6 connectivity between networks or devices that may not yet support IPv6 natively. Over time, the goal is for the network to migrate to a pure IPv6 infrastructure, eliminating the need for these tunnels.
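For illustration, a manually configured IPv6-over-IPv4 overlay tunnel on IOS might look like the sketch below; the tunnel number, IPv6 prefix, and IPv4 endpoint addresses are hypothetical:

  interface Tunnel0
   ipv6 address 2001:DB8:FFFF::1/64
   tunnel source 192.0.2.10
   tunnel destination 198.51.100.20
   tunnel mode ipv6ip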
Now, let's review the other options:
A: Overlay tunnels are not considered a final architecture for IPv6 networks. While they help bridge the gap between IPv4 and IPv6, they are part of the transition process rather than the ultimate solution.
C: Overlay tunnels are not restricted to border devices; they can be configured between border routers, between a border router and a host, or between two hosts. More importantly, the tunnel endpoints must run both protocol stacks (dual stack) because IPv6 packets are carried inside IPv4, so the claim that only the IPv6 protocol stack is required is incorrect.
D: This option reverses the encapsulation. IPv6 overlay tunneling encapsulates IPv6 packets inside IPv4 packets for delivery across an IPv4 infrastructure, not IPv4 packets inside IPv6 packets, so the statement as written is incorrect.
In conclusion, B is the best design consideration for using IPv6 overlay tunnels, as they are primarily used during the transition phase of IPv6 adoption.
Question No 7:
Which two circuit types are supported when a network is designed using IS-IS? (Choose two.)
A. nonbroadcast multiaccess
B. multiaccess
C. point-to-multipoint
D. nonbroadcast
E. point-to-point
Correct answer: D, E
Explanation:
The IS-IS (Intermediate System to Intermediate System) protocol supports several types of circuit types based on the topology and characteristics of the network. IS-IS is often used in large, complex networks, including service provider environments, and can operate in both LAN and WAN environments.
Point-to-point (option E) is one of the primary circuit types in IS-IS. A point-to-point circuit is used between two directly connected routers, typically representing a link between two routers without any intermediate devices. This is a basic and efficient configuration used in IS-IS routing because it minimizes the complexity and number of nodes involved in the link.
Nonbroadcast (option D) is another supported circuit type in IS-IS. In nonbroadcast networks, there is no automatic broadcasting of link-state information, and the routers are required to configure their neighbors manually. This type of circuit is common in environments like Frame Relay or X.25 networks, where the underlying infrastructure does not natively support broadcast communication.
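As a hedged illustration of the point-to-point case, the IOS sketch below forces a point-to-point IS-IS circuit on an Ethernet link; the process tag, NET, interface, and addressing are hypothetical:

  router isis CORE
   net 49.0001.0000.0000.0001.00
  !
  interface GigabitEthernet0/1
   ip address 10.1.1.1 255.255.255.252
   ip router isis CORE
   isis network point-to-point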
On the other hand, the other options are less suited for IS-IS:
Nonbroadcast multiaccess (option A) typically applies to protocols like OSPF, which use a specific method of handling broadcast and non-broadcast multiaccess networks. However, this term does not directly apply to IS-IS in the same way it does for OSPF.
Multiaccess (option B) is a general term and may apply to Ethernet networks, but IS-IS treats multiaccess networks in a different way. IS-IS is more specific in how it handles point-to-point and nonbroadcast circuits.
Point-to-multipoint (option C) is not a direct match for IS-IS’s common configurations, which focus more on point-to-point and nonbroadcast types of circuits for efficient routing.
Thus, the two circuit types supported by IS-IS are nonbroadcast and point-to-point, making options D and E the correct answers.
Question No 8:
Which Cisco proprietary BGP path attribute will influence outbound traffic flow?
A. Local Preference
B. MED
C. Weight
D. AS Path
E. Community
Correct Answer: C
Explanation:
When designing a network solution for a company that connects to multiple Internet service providers (ISPs), controlling the flow of outbound traffic is critical. In Border Gateway Protocol (BGP), various path attributes are used to influence routing decisions. However, some attributes are specifically relevant for managing outbound traffic, particularly when there are multiple ISPs involved.
A. Local Preference is a BGP attribute used to prefer a particular exit point when there are multiple choices for outgoing traffic. It is exchanged between iBGP peers and affects path selection within the local AS. Although it does influence outbound routing decisions, it is a standard BGP attribute rather than a Cisco proprietary one, so it does not match the question.
B. MED (Multi-Exit Discriminator) is used to inform external neighbors about the preferred path when multiple links exist between two different autonomous systems. It is used by the receiving AS to make decisions about which path to take. While it can affect the inbound traffic flow to the AS, it doesn't directly influence outbound traffic within the AS.
C. Weight is a Cisco proprietary BGP attribute that influences outbound traffic flow. Weight is local to the router and is not propagated to other routers. It is used to influence the decision of which path to take for outbound traffic. The router will prefer routes with a higher weight, making this attribute particularly useful for controlling outbound traffic flow to different ISPs.
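A minimal IOS sketch of per-neighbor weight is shown below; the AS numbers, neighbor addresses, and weight values are hypothetical, with the higher weight making the first ISP link the preferred outbound path on this router only:

  router bgp 65001
   neighbor 203.0.113.1 remote-as 65500
   neighbor 203.0.113.1 weight 200
   neighbor 198.51.100.1 remote-as 65501
   neighbor 198.51.100.1 weight 100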
D. AS Path is a BGP path attribute that lists the ASs a route has traversed. It is primarily used for loop prevention and path selection. While it helps in routing decisions, it doesn’t directly influence outbound traffic flow in the context of a network with multiple ISPs.
E. Community is a BGP attribute used to group routes and apply policies. While communities can be used to influence routing decisions, they don’t directly impact outbound traffic flow in a manner that’s as direct as the weight attribute.
Thus, the correct answer is Weight, as it is a Cisco proprietary attribute that directly influences outbound traffic flow by giving preference to specific routes on a local router.
Question No 9:
Refer to the exhibit. EIGRP has been configured on all links. The spoke nodes have been configured as EIGRP stubs, and the WAN links to R3 have higher bandwidth and lower delay than the WAN links to R4.
When a link failure occurs at the R1-R2 link, what happens to traffic on R1 that is destined for a subnet attached to R2?
A. R1 has no route to R2 and drops the traffic
B. R1 load-balances across the paths through R3 and R4 to reach R2
C. R1 forwards the traffic to R3, but R3 drops the traffic
D. R1 forwards the traffic to R3 in order to reach R2
Correct Answer: D
Explanation:
In this scenario, we are dealing with EIGRP (Enhanced Interior Gateway Routing Protocol), an advanced distance-vector routing protocol. The question specifies that the spoke nodes (the remote-site routers) are configured as EIGRP stubs. A stub router advertises only connected and summary routes by default, is not used as a transit path by the hub routers, and is excluded from the query process when a route is lost, which limits the number of routing updates and queries it has to process.
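For reference, marking a spoke as an EIGRP stub on IOS is a one-line addition, as in the hedged sketch below; the autonomous system number and network statement are hypothetical, and connected summary reflects the default stub behavior of advertising only connected and summary routes:

  router eigrp 100
   network 10.0.0.0
   eigrp stub connected summary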
When the R1-R2 link fails, R1 needs to find an alternate path to reach the subnet attached to R2. According to the setup, R3 has higher bandwidth and lower delay compared to R4, meaning R1 is likely to prefer the link through R3 to reach R2, if it can.
Let's analyze the options:
A. R1 has no route to R2 and drops the traffic: This option would be true only if R1 had no other way to reach the R2 subnet, which isn't the case here. R3 is available as an alternate path to reach R2, so R1 will still have a route to R2 via R3.
B. R1 load-balances across the paths through R3 and R4 to reach R2: By default, EIGRP load-balances only across equal-cost paths. Because the links to R3 have higher bandwidth and lower delay, the path through R3 has a better metric than the path through R4, so R1 does not split traffic between them; it simply prefers R3.
C. R1 forwards the traffic to R3, but R3 drops the traffic: This would happen if R3 were somehow incapable of forwarding traffic to R2, but the configuration doesn't indicate this. R3 is likely capable of forwarding the traffic, so this option doesn't fit.
D. R1 forwards the traffic to R3 in order to reach R2: This is the most likely scenario. Since R1 prefers R3 due to the higher bandwidth and lower delay, and R3 is capable of forwarding traffic to R2, R1 will forward the traffic to R3, which will continue the routing process to reach the subnet attached to R2.
Thus, the correct answer is D, as R1 will forward the traffic to R3, which can then take it toward R2.