350-401 Cisco Practice Test Questions and Exam Dumps

Question No 1:  

What is the difference between a RIB and a FIB?

A. The FIB is populated based on RIB content.
B. The RIB maintains a mirror image of the FIB.
C. The RIB is used to make IP source prefix-based switching decisions.
D. The FIB is where all IP routing information is stored.

Answer: A. The FIB is populated based on RIB content.

Explanation:

In computer networking, the Routing Information Base (RIB) and Forwarding Information Base (FIB) are both crucial elements of routing and forwarding decisions, but they serve different purposes.

  • The Routing Information Base (RIB) is a table that contains all the routing information gathered by a router from various routing protocols (like OSPF, BGP, or RIP). The RIB stores a comprehensive collection of routes and their associated metrics (e.g., next-hop addresses, cost, etc.), and it is used by the router to make routing decisions. It’s essentially a database for all known routes.

  • The Forwarding Information Base (FIB), on the other hand, is a subset of the RIB, specifically designed for efficient packet forwarding. The FIB contains a smaller set of data that is used for the actual forwarding of packets. The entries in the FIB are optimized to ensure quick lookups, which is crucial for high-performance networking. The FIB is essentially a table of best paths that the router uses to forward packets toward their destination.

The FIB is populated based on the RIB content: the router selects the best routes from the RIB based on criteria like route preference and installs them into the FIB for quick packet forwarding.

  • Option B is incorrect because the RIB does not mirror the FIB. The RIB is more comprehensive, containing all routes, whereas the FIB is specifically for forwarding.

  • Option C is incorrect because the RIB is not used to make switching decisions at all; forwarding lookups are performed by the FIB, and they are based on the destination prefix of a packet, not its source prefix. The RIB's role is to collect routes from routing protocols and select the best ones based on the network topology.

  • Option D is also incorrect because the FIB does not store all IP routing information; it only stores the best routes selected from the RIB for packet forwarding.
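As an illustrative sketch (a toy model, not actual IOS code), the RIB-to-FIB relationship can be shown as selecting, per prefix, the route with the lowest administrative distance (then lowest metric) and installing only that best route into the FIB:

```python
# Toy model of RIB -> FIB population (not actual IOS code).
# The RIB holds every learned route; the FIB keeps only the best
# route per prefix, chosen by administrative distance, then metric.

RIB = [
    # (prefix, protocol, admin_distance, metric, next_hop)
    ("10.0.0.0/24", "OSPF", 110, 20, "192.168.1.1"),
    ("10.0.0.0/24", "RIP",  120, 2,  "192.168.2.1"),
    ("10.0.0.0/24", "BGP",  20,  0,  "192.168.3.1"),
    ("10.1.0.0/16", "OSPF", 110, 30, "192.168.1.1"),
]

def build_fib(rib):
    best = {}
    for prefix, proto, ad, metric, nh in rib:
        candidate = (ad, metric, nh, proto)
        # Install the route with the lowest (AD, metric) per prefix.
        if prefix not in best or candidate < best[prefix]:
            best[prefix] = candidate
    # FIB entries keep only what forwarding needs: prefix -> next hop.
    return {p: v[2] for p, v in best.items()}

fib = build_fib(RIB)
# eBGP (AD 20) wins over OSPF (110) and RIP (120) for 10.0.0.0/24.
print(fib)  # {'10.0.0.0/24': '192.168.3.1', '10.1.0.0/16': '192.168.1.1'}
```

The route table values here are invented for illustration; only the selection principle (lowest administrative distance wins, and only the winner is installed into the FIB) reflects real router behavior.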

Question No 2: 

Which QoS component alters a packet to change the way that traffic is treated in the network?

A. Policing
B. Classification
C. Marking
D. Shaping

Answer: C. Marking

Explanation:

Quality of Service (QoS) is essential for managing traffic in networks, ensuring that data flows smoothly, especially in environments with limited bandwidth or when handling sensitive applications like VoIP or video conferencing. There are different QoS components, each with specific roles in altering how traffic is treated.

  • Marking refers to the process of tagging packets with specific values that indicate how the packets should be treated by the network. These values are typically the Differentiated Services Code Point (DSCP) in the IP header or the Class of Service (CoS) bits in the 802.1Q Ethernet header, and they help to prioritize traffic or apply specific policies (e.g., low latency for voice packets, higher priority for critical data). Marking allows downstream devices to recognize how traffic should be handled (e.g., low priority or high priority).

  • Policing involves monitoring traffic to ensure that it adheres to a defined traffic profile. If traffic exceeds the acceptable limits, policing might drop or remark the packets to enforce traffic conformance. While policing can influence how traffic is treated, it does not directly alter the packet in the same way marking does.

  • Classification involves categorizing packets based on certain attributes (e.g., source/destination address, protocol). Classification is part of the process of determining how traffic will be marked or treated but does not directly alter the packet content.

  • Shaping is the process of controlling the flow of traffic to match a defined profile over time. It typically buffers or delays packets to smooth out traffic flows, ensuring that traffic conforms to a set rate, but does not directly modify packet content like marking does.

Thus, marking is the correct QoS component that alters a packet to specify how it should be treated in the network.
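As a concrete sketch of what marking means at the bit level: the DSCP value occupies the upper six bits of the IP header's ToS/Traffic Class byte (the lower two bits carry ECN). On Linux, an application can request a marking per socket; the network may still re-mark or ignore it at trust boundaries. This is an illustrative host-side example, not router configuration:

```python
import socket

# DSCP occupies the upper 6 bits of the IP header's ToS / Traffic
# Class byte; the lower 2 bits are ECN. EF (Expedited Forwarding,
# DSCP 46) is the standard marking for voice traffic.
DSCP_EF = 46
tos_byte = DSCP_EF << 2          # 46 << 2 = 184 (0xb8)
print(hex(tos_byte))             # 0xb8

# On Linux, an application can request this marking per socket:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
s.close()
```

In practice, routers and switches apply or rewrite these markings according to configured trust and classification policy, rather than relying on end hosts.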

Question No 3: 

Which statement about Cisco Express Forwarding is true?

A. The CPU of a router becomes directly involved with packet-switching decisions.
B. It uses a fast cache that is maintained in a router data plane.
C. It maintains two tables in the data plane: the FIB and adjacency table.
D. It makes forwarding decisions by a process that is scheduled through the IOS scheduler.

Answer: C. It maintains two tables in the data plane: the FIB and adjacency table.

Explanation:

Cisco Express Forwarding (CEF) is a highly efficient packet-forwarding mechanism used by Cisco routers. CEF improves packet forwarding performance by enabling high-speed switching and reducing the involvement of the CPU in routine packet forwarding tasks.

  • Option A (Incorrect): Cisco Express Forwarding minimizes the involvement of the router’s CPU in packet-switching decisions. It does not require the CPU to be directly involved in forwarding packets. Instead, the forwarding decision is made through the pre-built Forwarding Information Base (FIB) and adjacency table. The CPU is only involved in managing the tables and performing network management tasks.

  • Option B (Incorrect): A fast cache maintained in the data plane describes fast switching, the legacy route-caching mechanism that CEF replaced. A fast cache is built on demand, after the first packet to a destination is process-switched. CEF instead pre-builds its two tables (the FIB and adjacency table) from the routing table and ARP information, so a forwarding entry exists before the first packet arrives.

  • Option C (Correct): Cisco Express Forwarding maintains two tables in the data plane: the Forwarding Information Base (FIB) and the adjacency table.

The FIB stores the best routes for packet forwarding, while the adjacency table stores information about the next-hop router interfaces and Layer 2 addresses needed to forward packets correctly. This dual-table approach allows CEF to make fast, efficient forwarding decisions without needing to consult the main routing table in the control plane.

  • Option D (Incorrect): CEF does not make forwarding decisions through the IOS scheduler. Instead, it uses the FIB and adjacency table in the data plane to make fast, hardware-accelerated forwarding decisions. The IOS scheduler is responsible for managing various system tasks and processes but is not involved directly in packet forwarding when CEF is used.

Thus, the correct answer is Option C, as it accurately describes the two primary tables in Cisco Express Forwarding.
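The two-table design can be sketched as follows: a longest-prefix match against the FIB yields the next hop, and the adjacency table supplies the precomputed Layer 2 rewrite for that next hop. This is a toy model for illustration (the addresses, interfaces, and MACs are invented), not actual IOS internals:

```python
import ipaddress

# Toy model of CEF's two data plane tables (values invented):
# the FIB maps prefixes to next hops, and the adjacency table maps
# next hops to the Layer 2 rewrite (egress interface + MAC address).
FIB = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.2.1",
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.9.1",
}
ADJACENCY = {
    "192.168.1.1": ("Gi0/0", "aa:bb:cc:00:00:01"),
    "192.168.2.1": ("Gi0/1", "aa:bb:cc:00:00:02"),
    "192.168.9.1": ("Gi0/2", "aa:bb:cc:00:00:09"),
}

def forward(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    # Longest-prefix match against the FIB.
    matches = [net for net in FIB if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    next_hop = FIB[best]
    # The adjacency table supplies the L2 rewrite directly, so no
    # ARP resolution is needed in the forwarding path.
    interface, mac = ADJACENCY[next_hop]
    return next_hop, interface, mac

print(forward("10.1.2.3"))  # ('192.168.2.1', 'Gi0/1', 'aa:bb:cc:00:00:02')
```

Note how 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16, and the longer /16 prefix wins; real CEF implementations achieve this with specialized trie structures rather than a linear scan.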

Question No 4: 

What is a benefit of deploying an on-premises infrastructure versus a cloud infrastructure deployment?

A. Ability to quickly increase compute power without the need to install additional hardware.
B. Less power and cooling resources needed to run infrastructure on-premises.
C. Faster deployment times because additional infrastructure does not need to be purchased.
D. Lower latency between systems that are physically located near each other.

Answer: D. Lower latency between systems that are physically located near each other.

Explanation:

When comparing on-premises infrastructure to cloud infrastructure, the differences typically focus on flexibility, scalability, and physical proximity of systems, among other factors. Let’s evaluate each option:

  • Option A (Incorrect): One of the main benefits of cloud infrastructure is the ability to quickly increase compute power. Cloud providers offer elasticity, allowing organizations to scale resources on-demand without needing to purchase or install additional hardware. On-premises infrastructure, on the other hand, requires organizations to manually acquire and install new hardware, which takes more time.

  • Option B (Incorrect): On-premises infrastructure generally requires more power and cooling resources than cloud environments. This is because the company is responsible for the physical servers, data centers, and the overhead associated with running them. In contrast, cloud providers typically have data centers optimized for energy efficiency, which can reduce overall energy consumption and cooling requirements.

  • Option C (Incorrect): Cloud infrastructure enables faster deployment times because it allows organizations to provision resources dynamically without the need to physically purchase and deploy hardware. On-premises infrastructure usually requires a longer timeline for procurement, setup, and deployment of hardware.

  • Option D (Correct): One of the key advantages of on-premises infrastructure is the lower latency that can result from systems being physically located near each other. When servers and systems are co-located within the same data center or facility, the data transfer times are reduced, and latency is minimized. In contrast, cloud infrastructure can introduce higher latency due to the need to route data through the internet and potentially across different regions or data centers.

Thus, Option D is the correct answer, as on-premises deployments provide the benefit of lower latency between systems that are physically located close to each other.

Question No 5: 

How does QoS traffic shaping alleviate network congestion?

A. It drops packets when traffic exceeds a certain bitrate.
B. It buffers and queues packets above the committed rate.
C. It fragments large packets and queues them for delivery.
D. It drops packets randomly from lower priority queues.

Answer: B. It buffers and queues packets above the committed rate.

Explanation:

Quality of Service (QoS) is a crucial concept in networking, designed to manage and optimize the performance of traffic across a network. One of the primary goals of QoS is to ensure that critical applications or services receive the necessary bandwidth and low latency, while less important traffic can be delayed or throttled to avoid congestion. Traffic shaping is one of the key QoS techniques used to alleviate network congestion and ensure that data is delivered efficiently, especially in networks with limited bandwidth.

Traffic Shaping is the process of controlling the amount and the rate of traffic sent to the network. It involves buffering and queuing excess traffic to smooth out bursty transmissions, thus preventing sudden congestion that might overwhelm the network.

Let’s analyze each option in detail:

  • Option A (Incorrect): Dropping packets when traffic exceeds a certain bitrate is typically associated with traffic policing, not traffic shaping. Traffic policing involves enforcing a traffic profile, and when the traffic exceeds the defined rate, the excess packets can be dropped or marked for discard. This method is more aggressive and doesn't alleviate congestion as effectively as traffic shaping.

  • Option B (Correct): Traffic shaping buffers and queues packets that exceed the committed rate and sends them out at a smoother rate, which helps prevent congestion. Essentially, it ensures that the network doesn’t get overloaded by controlling the rate at which packets are sent. Excess traffic that exceeds the committed rate is held in a buffer, and when the network becomes less congested, those packets are forwarded, preventing sudden spikes in bandwidth usage. This smoothens the overall traffic flow and reduces the likelihood of congestion.

  • Option C (Incorrect): While traffic shaping can manage the rate of data flow, it does not involve fragmenting large packets. The fragmentation of packets is a separate concept that may occur for transmission over networks that have Maximum Transmission Unit (MTU) restrictions. Traffic shaping deals with the timing and rate of packet delivery, not the fragmentation of large packets.

  • Option D (Incorrect): Dropping packets randomly from lower-priority queues is more aligned with traffic policing or random early detection (RED) mechanisms, which attempt to mitigate congestion by dropping packets selectively. However, traffic shaping generally aims to avoid dropping packets by smoothing out traffic and ensuring efficient data flow, which helps alleviate congestion rather than exacerbating it.

Thus, the correct answer is Option B because traffic shaping involves buffering and queuing excess traffic above the committed rate, allowing it to be transmitted in a controlled manner and helping alleviate network congestion by smoothing traffic bursts.
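The buffering behavior can be illustrated with a simple token-bucket model using simulated time. This is an explanatory sketch (the rates and sizes are invented), not vendor code: packets above the committed rate are queued rather than dropped, and drain as tokens accumulate.

```python
from collections import deque

# Toy token-bucket shaper with simulated time (values invented).
# Traffic above the committed rate is buffered, not dropped, and is
# released later as tokens replenish -- smoothing out bursts.
class Shaper:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # committed rate in bytes per second
        self.tokens = burst_bytes    # bucket starts full
        self.burst = burst_bytes
        self.queue = deque()         # excess traffic waits here
        self.sent = []

    def enqueue(self, size_bytes):
        self.queue.append(size_bytes)

    def tick(self, seconds):
        # Replenish tokens at the committed rate, capped at burst size.
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)
        # Drain queued packets while tokens allow.
        while self.queue and self.queue[0] <= self.tokens:
            size = self.queue.popleft()
            self.tokens -= size
            self.sent.append(size)

shaper = Shaper(rate_bps=8000, burst_bytes=1500)  # 1000 bytes/s
for _ in range(3):
    shaper.enqueue(1000)       # a 3000-byte burst arrives at once
shaper.tick(0)                 # only the burst allowance drains now
print(len(shaper.sent), len(shaper.queue))  # 1 2
shaper.tick(2)                 # tokens cap at the burst size
print(len(shaper.sent), len(shaper.queue))  # 2 1
shaper.tick(1)                 # the remainder drains over time
print(len(shaper.sent), len(shaper.queue))  # 3 0
```

The key point the model shows: nothing is dropped. The 3000-byte burst is spread out over time, which is exactly how shaping trades delay for reduced congestion.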

Question No 6: 

An engineer is describing QoS to a client. Which two facts apply to traffic policing? (Choose two.)

A. Policing should be performed as close to the source as possible.
B. Policing adapts to network congestion by queuing excess traffic.
C. Policing should be performed as close to the destination as possible.
D. Policing drops traffic that exceeds the defined rate.
E. Policing typically delays the traffic, rather than drops it.

Answer: A. Policing should be performed as close to the source as possible; D. Policing drops traffic that exceeds the defined rate.

Explanation:

Traffic policing is a key component of Quality of Service (QoS) used to monitor and manage the rate of traffic flow across a network. It enforces traffic profiles by ensuring that data transmitted does not exceed a predefined rate. When traffic exceeds this rate, traffic policing can either drop the excess packets or mark them for discard. The goal of traffic policing is to maintain network performance and ensure that no single flow consumes more than its fair share of bandwidth.

Let’s break down each option:

  • Option A (Correct): Traffic policing is typically performed as close to the source as possible. This is because policing helps to manage the traffic right at the point where it originates. By performing policing near the source, it prevents congestion from building up as traffic traverses the network, ensuring that only compliant traffic reaches the network core or destinations. This helps avoid network-wide congestion and minimizes the impact on downstream devices.

  • Option B (Incorrect): Traffic policing does not adapt to network congestion by queuing excess traffic. Rather, policing involves monitoring the rate of traffic and dropping or marking for discard any traffic that exceeds the allowed threshold. This is in contrast to traffic shaping, which buffers excess traffic and sends it at a controlled rate. Policing is more about enforcing rate limits, not buffering or delaying traffic.

  • Option C (Incorrect): Traffic policing should ideally be performed as close to the source as possible to avoid overloading network resources, not at the destination. Performing policing at the destination would not effectively prevent congestion and would not allow early control of traffic flow. Policing near the source ensures compliance with bandwidth requirements before it affects the entire network.

  • Option D (Correct): A key aspect of traffic policing is that it drops traffic that exceeds the defined rate. When traffic exceeds the configured rate, the excess traffic is dropped or marked for discard. This method prevents congestion from worsening by actively limiting traffic that is outside the defined bandwidth limits. By dropping excess traffic, it ensures that the network remains stable and that critical traffic is not delayed or interrupted.

  • Option E (Incorrect): Policing typically drops traffic, rather than delaying it. Unlike traffic shaping, which queues and delays excess traffic, traffic policing does not allow for delays. Its goal is to enforce the traffic profile by dropping or marking traffic that does not conform to the rate limit.

Thus, the correct answers are Option A and Option D, as traffic policing is designed to be applied close to the source and to drop traffic that exceeds the defined rate, preventing network congestion.
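The contrast with shaping can be made concrete with a token-bucket policer sketch (simulated time, invented values, not vendor code): when the bucket has insufficient tokens, the packet is dropped immediately; nothing is ever queued.

```python
# Toy single-rate policer with simulated time (values invented).
# Traffic above the committed rate is dropped immediately, never
# buffered -- the defining contrast with traffic shaping.
class Policer:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # committed rate in bytes per second
        self.tokens = burst_bytes    # bucket starts full
        self.burst = burst_bytes
        self.conformed = 0
        self.dropped = 0

    def packet(self, size_bytes, elapsed):
        # Replenish tokens for the time elapsed since the last packet.
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed)
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            self.conformed += 1   # transmit (or re-mark) the packet
        else:
            self.dropped += 1     # excess traffic is dropped, not queued

policer = Policer(rate_bps=8000, burst_bytes=1500)  # 1000 bytes/s
for _ in range(3):
    policer.packet(1000, elapsed=0)  # a 3000-byte burst arrives at once
print(policer.conformed, policer.dropped)  # 1 2
```

The same 3000-byte burst that a shaper would delay and eventually deliver is cut down to one conforming packet here, which is why policing is applied near the source: non-conforming traffic is discarded before it can consume bandwidth deeper in the network.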

Question No 7: 

Which component handles the orchestration plane of the Cisco SD-WAN?

A. vBond
B. vSmart
C. vManage
D. WAN Edge

Answer: A. vBond

Explanation:

Cisco SD-WAN (Software-Defined Wide Area Network) applies software-defined principles to WAN management, separating the network into four planes: the orchestration plane, the management plane, the control plane, and the data plane. Each plane is handled by a distinct component, and keeping these roles straight is essential for this question.

In the Cisco SD-WAN architecture, the orchestration plane is responsible for the initial authentication and onboarding of every device that joins the overlay fabric, ensuring that all components can discover each other and establish secure connections.

  • Option A: vBond (Correct)
    vBond is the orchestrator of the Cisco SD-WAN fabric and handles the orchestration plane. It authenticates every device that attempts to join the overlay, distributes the addresses of the vManage and vSmart controllers to WAN Edge routers, and facilitates NAT traversal so that devices sitting behind NAT can still join the fabric. Because vBond performs this initial discovery and secure onboarding of all other components, it is the component responsible for orchestration.

  • Option B: vSmart (Incorrect)
    vSmart handles the control plane. It centralizes routing and policy decisions, distributing routes and data plane policies to WAN Edge devices over the Overlay Management Protocol (OMP). It does not perform device onboarding or orchestration.

  • Option C: vManage (Incorrect)
    vManage handles the management plane, not the orchestration plane. It provides the centralized GUI and NMS for configuring, monitoring, and troubleshooting the SD-WAN environment: administrators use it to define and push policies, build device templates, and view real-time metrics on network performance, security, and health. Although vManage centralizes day-to-day management, the orchestration function of authenticating and onboarding devices belongs to vBond.

  • Option D: WAN Edge (Incorrect)
    WAN Edge devices are the physical or virtual routers deployed at the edge of the network. They form the data plane, forwarding traffic between branch offices, remote locations, and the central network, but they are not responsible for orchestrating the fabric.

Thus, the correct answer is vBond, as it handles the orchestration plane within Cisco SD-WAN.

Question No 8: 

What are two device roles in Cisco SD-Access fabric? (Choose two.)

A. edge node
B. vBond controller
C. access switch
D. core switch
E. border node

Answer: A. edge node, E. border node

Explanation:

Cisco SD-Access is a solution designed for Software-Defined Networking (SDN) within enterprise networks. It simplifies the network by using automation, segmentation, and centralized control to streamline operations and enhance security. The SD-Access fabric is made up of several key device roles that help to structure and manage the network's connectivity and traffic flow.

Here’s a breakdown of the device roles in Cisco SD-Access fabric:

  • Option A: edge node (Correct)
    An edge node is a device at the boundary of the SD-Access fabric. These nodes are typically responsible for connecting end-user devices, such as computers, phones, and other endpoints, to the network. Edge nodes are the first point of interaction for user devices, handling tasks such as traffic forwarding and enforcing policies related to user access. In an SD-Access environment, edge nodes typically operate at the access layer and are responsible for connecting users and devices to the network infrastructure.

  • Option B: vBond controller (Incorrect)
    vBond is part of the Cisco SD-WAN solution, not SD-Access. It plays a role in authenticating and establishing secure connections between SD-WAN components, including vManage and vSmart, but does not serve as a device within the SD-Access fabric. It is not involved in the SD-Access network roles.

  • Option C: access switch (Incorrect)
    While access switches are important in traditional networking for connecting endpoint devices to the network, "access switch" is not a fabric role name in Cisco SD-Access. A switch deployed at the access layer takes on the edge node role when it joins the fabric, so the SD-Access role corresponding to this function is the edge node, not the access switch.

  • Option D: core switch (Incorrect)
    Core switches are part of the traditional networking architecture and act as the backbone of the network. They provide high-speed connectivity between different network segments. However, in Cisco SD-Access, the focus is on automating and managing the network through centralized policies, with the edge and border nodes playing more critical roles.

  • Option E: border node (Correct)
    A border node serves as the gateway between the SD-Access fabric and external networks, such as the internet or other branches of an organization. The border node facilitates inter-fabric communication and handles the egress and ingress of traffic that is either leaving or entering the SD-Access environment. This device plays a crucial role in connecting the SD-Access fabric to other parts of the organization’s network or the outside world.

Thus, the correct answers are edge node and border node, as both roles are critical to the SD-Access fabric’s functionality and connectivity to other networks.
