Cisco CCNP Enterprise 300-420 ENSLD – Designing Layer 2 Campus Part 2

  1. EtherChannel

EtherChannel is the Cisco term for bundling two or more physical Ethernet links for the purpose of aggregating available bandwidth. A Layer 2 EtherChannel combines the bandwidth of multiple Layer 2 links and changes STP behavior: all links in the bundle forward and are treated as one logical link. A Layer 3 EtherChannel aggregates the bandwidth of multiple Layer 3 links, and there is only one routing neighbor relationship per switch interconnect. In both cases, an EtherChannel bundles individual Ethernet links into a single logical link that provides the aggregated bandwidth of the physical links. EtherChannel can be deployed on any bundle of links in your campus network.

The aim is to increase bandwidth and availability. Port aggregation considerations: an EtherChannel can be established using one of three mechanisms: LACP, PAgP, and static persistence. LACP is a standards-based negotiation protocol defined in IEEE 802.3ad. It helps protect against Layer 2 loops that are caused by misconfiguration. One downside is that it introduces overhead and delay when setting up the bundle. An interface in active mode actively tries to negotiate an EtherChannel; an interface in passive mode only responds to LACP requests. PAgP is the Cisco proprietary negotiation protocol.

You should probably disqualify PAgP as a legacy proprietary protocol unless you have older hardware that does not support LACP. PAgP modes are similar to those of LACP; instead of active and passive modes, desirable and auto modes are available. Static configuration does not impose the overhead that LACP does. The only problem with static configuration is that it can cause problems if it is not configured properly. Depending on the platform, capabilities such as load balancing across the members of the EtherChannel also differ: LACP uses a combination of source and destination port load balancing, if available on the switch platform, which offers the most granular load balancing across the bundle.
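A minimal sketch of an LACP-negotiated bundle, following the active/passive modes described above (interface and port-channel numbers are assumptions for illustration):

```
! Switch A: bundle two uplinks into Port-channel 1 using LACP active mode
interface range GigabitEthernet1/0/1 - 2
 channel-protocol lacp
 channel-group 1 mode active
!
! The resulting logical interface carries the aggregate configuration
interface Port-channel1
 switchport mode trunk
```

The peer side may run active or passive mode, but at least one side must be active for the bundle to form.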

Bundling a number of links that is a power of two (two, four, eight) results in optimal load balancing. Frames are forwarded over a particular EtherChannel member link based on the results of a hashing algorithm. The options that the switch can use to create the hash vary from platform to platform and across software versions. A common set of options includes the following: destination IP address, destination MAC address, and a combination of source and destination IP addresses. It is not possible to have different load balancing methods for different EtherChannels on one switch; if the load balancing method is changed, it is applied to all EtherChannels. If most of the traffic is IP, it makes sense to load balance according to IP addresses or port numbers.
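Because the hash method is global, it is set once per switch; a sketch (the exact keywords available vary by platform and software version):

```
! Select the global EtherChannel hash input (applies to every bundle)
Switch(config)# port-channel load-balance src-dst-ip
!
! Verify which method is currently in use
Switch# show etherchannel load-balance
```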

If you load balance traffic per IP address, what happens with non-IP traffic? In that case, the switch load balances those frames according to MAC addresses. To achieve optimal traffic distribution, a bundle must have a minimum of two links. You cannot control the port that a particular flow uses; you can only influence load balancing by choosing the frame distribution method that results in the greatest variety. VSS considerations: the Cisco Virtual Switching System (VSS) allows the clustering of two physical chassis together into a single logical unit. VSS enables high availability, scalability, and simplified management and maintenance. From a design perspective, the most important benefit of a VSS is that it becomes possible to build an EtherChannel bundle whose links are physically terminated on two separate chassis. This is also referred to as a multichassis EtherChannel (MEC), because the two chassis form a single logical entity. The devices on the other end of the MEC link use standard EtherChannel technology to connect to the VSS.

As a result, the VSS solution is entirely transparent to EtherChannel peer devices such as access switches or servers. VSS and MEC can be utilized to build a logical star topology while still maintaining full redundancy in the underlying physical topology. VSS can be deployed in the access, distribution, or core layers; most commonly, VSS is used in the distribution layer. Use at least two links in a MEC between switches, terminate the links on different line cards for maximum availability, and do not configure switch preemption within VSS. Within VSS, one chassis is designated as the active virtual switch, and the other is designated as the standby virtual switch. The active supervisor engine of the active virtual switch chassis centrally manages all control plane functions.

From the perspective of the data plane, both switches in a VSS actively forward traffic. Supervisors and specific modules can be placed in a modular switch chassis: the line card modules are where the data plane resides, and the supervisor modules are where the switch's management and control planes reside. The benefit of the modular system is that you can swap individual pieces and make upgrades. Because the two chassis are combined into a single logical node, special signaling and control information must be exchanged between the two chassis in a timely manner. The link used for this is referred to as the virtual switch link (VSL). The VSL, formed as a Cisco EtherChannel interface, can comprise from one to eight member ports. These links carry two types of traffic: the VSS control traffic and normal data traffic. To make sure that control traffic gets the highest priority across the VSL, a special bit is set on all VSL control frames. It is recommended that you have at least two physical links in the VSL EtherChannel between the switches.

To minimize the possibility of connection failure, you need to ensure sufficient redundancy for the VSL. If the VSL fails, the standby supervisor assumes that the other supervisor has been lost and takes on the active role. However, the other switch did not fail; only the VSL failed. Having active supervisors on both switches (a dual-active scenario) is something that you need to prevent. In this way, you ensure maximum availability. Configuring switch preemption is not recommended: the problem is that switch preemption can cause an unnecessary switch reboot. In most cases, there is no benefit in having one supervisor active instead of the other, so there is no benefit in switch preemption. It is recommended that you connect at least one of the VSL links directly to a supervisor port on the chassis. If all VSL links are connected through line cards, it takes much longer for the chassis to begin VSS operation. Cisco NX-OS vPC (virtual port channel) and the Cisco Catalyst Virtual Switching System are similar Cisco technologies.
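To guard against the dual-active scenario described above, Catalyst VSS platforms support dual-active detection mechanisms; a sketch of the fast-hello option (the domain number and interface are assumptions for illustration):

```
! Enable fast-hello dual-active detection for the virtual switch domain
Switch(config)# switch virtual domain 100
Switch(config-vs-domain)# dual-active detection fast-hello
Switch(config-vs-domain)# exit
!
! Dedicate a direct link between the chassis to carry the fast hellos
Switch(config)# interface GigabitEthernet1/5/1
Switch(config-if)# dual-active fast-hello
```

If the VSL fails, the fast-hello link lets the old active chassis detect the dual-active condition and shut down its interfaces until the VSL recovers.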

Both provide multichassis EtherChannel capability, and the term multichassis EtherChannel (MEC) is used for either technology interchangeably. Virtual port channels (vPCs) allow links that are physically connected to two Cisco Nexus switches to appear, to a third downstream device, to be coming from a single device and to be part of a single port channel. Cisco StackWise: the figure shows four stacked switches with hot-swappable fans, StackPower cables, and, on the right side, redundant power supplies. StackWise provides a method to join multiple physical switches into a single logical switching unit. The switches are united by special interconnect cables.

A master switch is elected. The stack is managed as a single object and has a single management IP address. Special stack interconnect cables that form a bidirectional closed-loop path connect the switches into a single logical unit. This bidirectional path acts as a switch fabric for all the connected switches. The network topology and routing information are updated continuously through the stack interconnect cables. All stack members have full access to the stack interconnect bandwidth. The stack is managed as a single unit by the master switch, which is elected from one of the stack members. Up to nine separate switches can be linked together.

Each stack of switches has a single IP address and is managed as a single object. This single IP management applies to activities such as VLAN creation and modification, security, and QoS controls. Each stack has only one configuration file, which is distributed to each member of the stack. This also allows any member to become the master, should the master ever fail. Examples of stacking technologies are StackWise and StackWise Plus: the Cisco Catalyst 3750 and 3750-X series switches support StackWise and StackWise Plus. StackWise Plus is an evolution of StackWise. StackWise Plus supports local switching, so locally destined packets do not need to traverse the stack ring. Additionally, it supports spatial reuse and can therefore more efficiently utilize the stack interconnect, further improving its throughput performance.

The Catalyst 3850 switches support StackWise-480, with 480 Gbps of stacking bandwidth. The Catalyst 2960-S switches support FlexStack, a StackWise-based feature that is tailored for Layer 2 switches. Cisco StackPower technology is an innovative feature that aggregates all the available power in a stack of switches and manages it as one common power pool for the entire stack. Stacking multiple access switches in the same rack reduces management overhead, and multiple stacked switches can form a cross-stack EtherChannel connection.

Stacking considerations: stacking typically unites access switches that are mounted in the same rack, where multiple switches are used to provide enough access ports. The stack, containing up to nine switches, is managed as a single unit, reducing the number of units you have to manage. Switches can be added to and removed from a working stack without affecting the stack's performance. When a new switch is added, the master switch automatically configures the unit with the currently running IOS image and the configuration of the stack; you do not have to do anything before the new switch is ready to operate. Switches are united into a single logical unit by using special stack interconnect cables that create a bidirectional closed-loop path. This bidirectional path acts as a switch fabric for all the connected switches. When a break is detected in a cable, the traffic is immediately wrapped back across the remaining path to continue forwarding.
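Master election within a stack can be influenced by assigning a member priority; a sketch on a Catalyst 3750-style stack (the member number and priority value are assumptions for illustration):

```
! Give stack member 1 the highest priority (1-15) so it is preferred as master
Switch(config)# switch 1 priority 15
!
! Verify member roles, priorities, and stack ring status
Switch# show switch
```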

  1. First Hop Redundancy

Default gateway redundancy, or first hop redundancy, enables the network to recover from the failure of the device acting as the default gateway for end hosts. First hop redundancy protocols (FHRPs) are needed only if the access layer is Layer 2; they are not necessary with a Layer 3 (routed) access design. HSRP is a popular FHRP choice; the alternatives are the IETF-standard Virtual Router Redundancy Protocol (VRRP) and the Cisco proprietary Gateway Load Balancing Protocol (GLBP). If you have a Layer 2 access layer, the distribution switch serves as the default gateway for the entire Layer 2 domain. If you have a routed access layer, you do not need a first hop redundancy protocol: the access switch itself acts as the default gateway. The same is true if you use VSS in the distribution layer.

With a VSS in the distribution layer, default gateway redundancy is taken care of through the combination of MEC and the single logical node. For end devices, subsecond failover is only possible on networks running the rapid version of spanning tree. HSRP/VRRP tuning: when tuning HSRP behavior, take preemption into account. Preemption is the desired behavior, because the RSTP root should be on the same device as the HSRP active router for a given VLAN. Otherwise, the link between the distribution switches becomes a transit link, and traffic takes one more hop to its final destination. With HSRP, preemption is disabled by default and needs to be enabled; with VRRP, preemption is enabled by default. Enable preemption to ensure alignment of the HSRP/VRRP active (master) router and the STP root bridge for a given VLAN, which yields optimal traffic paths. A preemption delay should be configured to ensure that there is full connectivity from the distribution layer to the core before preemption is performed. Preemption must not occur before the primary distribution layer switch has Layer 3 connectivity to the core; otherwise, traffic from the clients will be dropped until the connectivity to the core is established.
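A minimal HSRP sketch with preemption and a preemption delay, as recommended above (the addresses, group number, and 180-second delay are assumptions for illustration):

```
! Distribution switch 1: HSRP active for VLAN 10
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 ! Wait 180 s after boot before preempting, so core routing can converge
 standby 10 preempt delay minimum 180
```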

HSRP/VRRP load sharing: HSRP and VRRP do not support load sharing as part of the protocol specification. However, load sharing can be achieved through the configuration of different HSRP or VRRP groups for different VLANs. In other words, HSRP and VRRP do not load share with a default configuration, but load sharing can be achieved by implementing multiple first hop redundancy protocol (FHRP) groups with different active (master) first hop devices. In the figure, two HSRP-enabled Layer 3 switches serve two separate VLANs over IEEE 802.1Q trunks. If you leave the default HSRP priority values, a single switch will likely become the active gateway for both VLANs. You can instead configure different HSRP groups for different VLANs so that both uplinks toward the core network are used.

Group 10 is configured for VLAN 10, and group 20 is configured for VLAN 20. For group 10, switch one is configured with a higher priority to become the active gateway, and switch two becomes the standby gateway. For group 20, switch two is configured with a higher priority to become the active gateway, and switch one becomes the standby router. Now both uplinks toward the core are utilized, one with VLAN 10 traffic and one with VLAN 20 traffic. Note that this kind of configuration will not necessarily result in optimal link utilization for the uplinks from the access layer: the uplinks are equally loaded only if the traffic volume is the same in both VLANs (VLANs 10 and 20). Load sharing for VRRP and HSRP can also be configured in a flat network, that is, when all end devices are members of the same VLAN.
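The per-VLAN load-sharing scheme above can be sketched as follows on switch one (addresses are assumptions for illustration; switch two mirrors it with the raised priority on group 20 instead):

```
! Switch 1: active for VLAN 10, standby for VLAN 20
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
interface Vlan20
 ip address 10.1.20.2 255.255.255.0
 standby 20 ip 10.1.20.1
 standby 20 preempt
```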

Two or more HSRP groups with different virtual IP addresses can be configured on a single interface, serving different clients in a single VLAN. HSRP and VRRP tracking: in the figure, R1 is the VRRP master router for VLAN 10. Interface tracking is utilized to prevent traffic from taking a suboptimal path after a failover occurs. To achieve an optimal traffic path after R1's uplink failure, R1 needs to be configured to track its uplink. When the uplink fails, the VRRP priority needs to be decremented so that it is lower than the priority of R2. Then R2 becomes the VRRP master router for VLAN 10. VLAN 10 clients now use R2 as the default gateway, and the traffic path is optimal. Object tracking is an independent process that manages creating, monitoring, and removing tracked objects. To configure interface tracking using tracking objects, you first need to configure an object that tracks the line-protocol status of the R1 uplink: track 1 interface Ethernet0/0 line-protocol. Then you need to tell the VRRP process on R1 that, in case the defined object fails, it should decrement the VRRP priority.

HSRP additionally supports native interface tracking, for example, standby 10 track Ethernet0/0 20, which decrements the priority by 20 when Ethernet0/0 goes down. The effect is the same as if you configure interface tracking through object tracking; only the configuration is simpler through the native mechanism. Also note that preemption must be enabled on both FHRP routers; if preemption is not enabled, the switchover will not happen. GLBP: GLBP is a Cisco proprietary first hop redundancy protocol that allows packet load sharing among a group of redundant routers. When you have more than two first hop devices and more than a few VLANs on the access layer, HSRP or VRRP can get really complex from the perspective of configuration, maintenance, and management. As opposed to HSRP and VRRP, with GLBP all routers within a group forward traffic.
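The VRRP object-tracking steps described above can be sketched as follows (the LAN interface and addresses are assumptions; the tracked object, group, and decrement value follow the example in the text):

```
! Track the line-protocol state of the R1 uplink
track 1 interface Ethernet0/0 line-protocol
!
! R1 LAN interface: decrement VRRP priority by 20 if the tracked object fails
interface Ethernet0/1
 ip address 10.1.10.2 255.255.255.0
 vrrp 10 ip 10.1.10.1
 vrrp 10 priority 110
 vrrp 10 preempt
 vrrp 10 track 1 decrement 20
```

With a decrement of 20, R1's priority drops from 110 to 90 on uplink failure, below R2's default of 100, so R2 preempts and becomes master.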

By default, GLBP provides automatic load sharing within a single group and with no administrative overhead. With HSRP, a single virtual MAC address is given to all endpoints when they resolve their default gateway with the Address Resolution Protocol (ARP). GLBP instead allows a group of routers to function as one virtual router by sharing one virtual IP address while using multiple virtual MAC addresses for traffic forwarding. When an endpoint sends an ARP request for its default gateway, GLBP's active virtual gateway (AVG) hands out the virtual MAC addresses. In the example, R1 was elected the AVG; it gives out virtual MAC addresses in a round-robin fashion. Gateways that assume responsibility for forwarding packets sent to a virtual MAC address are called active virtual forwarders (AVFs). In the example, R1 and R2 are both virtual forwarders: R1 was given the active forwarding role for forwarder 1, and R2 was given the active role for forwarder 2.
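A minimal GLBP sketch for one of the two routers (addresses and group number are assumptions; the round-robin method shown is the default, made explicit here):

```
! R1: higher priority plus preempt makes it the AVG for group 1
interface GigabitEthernet0/1
 ip address 10.1.10.2 255.255.255.0
 glbp 1 ip 10.1.10.1
 glbp 1 priority 110
 glbp 1 preempt
 glbp 1 load-balancing round-robin
```

R2 would carry the same group with the default priority; both routers then forward traffic as AVFs while R1 alone answers ARP requests as the AVG.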

PC1 and PC3 got the virtual MAC address of forwarder 1 through the ARP process, so they both use R1 as the default gateway. PC2 got the virtual MAC address of forwarder 2 through the ARP process, so it uses R2 as its gateway. Suppose that R1 fails. In that case, R2 takes over as the AVG for the group. R2 is now the only operational GLBP-enabled device in group 1, and it is also designated as AVF 1 and AVF 2, so R2 now forwards traffic for all clients: PC1, PC2, and PC3. The AVG maintains two different timers for this purpose. The redirect timer determines when the AVG will stop using the old virtual MAC address in ARP replies; the AVF that owns it continues to act as a gateway for anyone who still uses it. When the timeout timer expires, the old virtual MAC address and the virtual forwarder are flushed from all GLBP peers, and the AVG assumes that the old AVF will not return to service.
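These two timers can be tuned per group; a sketch that makes the default values explicit (values in seconds; the interface is an assumption for illustration):

```
! redirect = 600 s (10 minutes), timeout = 14400 s (4 hours)
interface GigabitEthernet0/1
 glbp 1 timers redirect 600 14400
```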

Clients still using an old virtual MAC address must refresh the entry to a new virtual MAC address when the old one is reclaimed. By default, the redirect timer is set to ten minutes, and the timeout timer is set to four hours. A case against GLBP: in the figure, because the same VLAN is on multiple access switches, some uplinks get blocked by spanning tree to prevent Layer 2 loops. You might therefore end up with situations where traffic paths are suboptimal; in cases like this, stick to VRRP or HSRP. In the example, R1 was elected the active virtual gateway, R1 is the AVF for group 1, and R2 is the AVF for group 2. Coincidentally, the ARP process got PC1 the MAC address of the virtual forwarder for group 2, and PC2 the MAC address of the virtual forwarder for group 1. It is not recommended to span VLANs across the access layer; however, if you stick with such a design, you should use HSRP or VRRP and not GLBP.

  1. Describe Network Requirements of Applications

Different applications have different network requirements. Being aware of the most common traffic flows within an organisation helps you incorporate application needs into the network design. You can categorise network applications into types such as peer-to-peer. Typical peer-to-peer applications include IP phone calls, video conferencing systems, instant messaging, and file sharing applications. Most traffic that is caused by such applications flows from one end device, such as a PC or IP phone, to another end device through the organisational network. If the network is not oversubscribed, fulfilling the demands of voice and video applications should not be a hard task; otherwise, QoS must be deployed to fulfil latency, jitter, and bandwidth demands. Client-local server: a typical application is a local server placed very close to its clients, which receives very little traffic from other network segments. Client-data center applications are mail servers, common file servers, database servers, and organisational applications running in the data center.

As most business applications become centralised in a common data center, the network infrastructure needs to be highly resilient and redundant, providing adequate throughput for the clients. Client-enterprise edge: applications on the enterprise edge exchange data between an organisation and its public servers. Examples of these applications include external mail servers and public web servers. As users adopt cloud applications, more traffic is headed toward the Internet. The most important requirements are network security and resiliency: business applications that are reachable over the Internet are constantly exposed to security threats and denial of service attacks. Client-server traffic considerations: historically, clients and servers were mostly attached to a network device in a single LAN segment. However, this trend has changed in recent years. The 80/20 rule for client-server applications indicated that 80 percent of the traffic was local and 20 percent left the LAN segment.

With increased traffic volumes on the network and a relatively fixed location for users, an organisation would have split the network into several isolated segments with distributed servers for each specialised application. Large organisations require users to have fast, reliable, and controlled access to critical applications. To fulfil these requirements while limiting administrative costs, the solution is to place the servers in a data center. The use of data centers requires a network infrastructure that is highly resilient and redundant and that provides adequate throughput. Within the data center, dedicated high-end data center switches are typically deployed, which are also optimised for high density and high availability.

Intrabuilding structure considerations: an intrabuilding campus network structure provides connectivity for all end nodes that are located in the same building and provides them with access to network resources. It provides connectivity within the building and is constructed with the building access and building distribution layers. Transmission options are copper (1 G, 10 G), optical fiber (1 G, 10 G), and wireless; 10 G over copper is available only on selected enterprise switches. User workstations are typically attached via twisted-pair copper wiring to the building access switches in floor wiring closets. WLANs can also provide intrabuilding connectivity, either as a primary or as a complementary access method. The access switches in the wiring closets are normally connected to the building distribution switches over optical fiber.

Optical fiber enables you to connect devices over longer distances and is less sensitive to environmental disturbances than copper media. Until recently, optical fiber was also the only affordable option to provide 10 G connectivity; in recent years, support for the 10GBASE-T standard has been added to enterprise switches. Depending on the number of network nodes, the building access switches may be connected to the building distribution switches or directly to the collapsed campus core switches. The C3KX-NM-10GT is an example of a network module that you install into Cisco Catalyst 3560-X or 3750-X switches to get support for two 10GBASE-T copper interfaces. Interbuilding structure considerations: interbuilding networks provide connectivity between the central switches of individual campus buildings. These buildings are usually located close to each other, only a few hundred meters to a few kilometers apart.

The interbuilding structure provides connectivity between campus buildings. Optical fiber transmission typically covers the distances between buildings of a few kilometers: short-range 10GBASE-SR reaches up to 400 meters over multimode fiber, long-range 10GBASE-LR reaches up to 10 kilometers over single-mode fiber, and extended-range optics are available for longer runs. Media selection considerations: the achievable bandwidth, the allowable physical distance, the cost, and the sensitivity to interference are the main factors when choosing a transmission medium, and each candidate has its own benefits and drawbacks. The allowable distance also depends on the quality of the installed cable. When buildings are within a few hundred meters of each other, deciding which type of campus cabling infrastructure to deploy requires slightly different considerations than connecting end users.

Typically, there is no existing cabling between buildings, so new cable must be pulled. Single-mode fiber supports the longest distances and the highest bandwidth, so deploying it reduces the risk of disappointment later; when deploying new cables, you should also provision spare capacity. The installation of optical cabling, on the other hand, requires care: even a small deviation when terminating the optical connectors can result in high loss. In an environment where cables cannot be installed at all, for example between buildings separated by property you do not control, wireless links may be the only option available.

Long-range transceivers are rather expensive in comparison with multimode short-range transceivers. The recommended alternative is to first aggregate the floor switches within the building and connect only a single switch or a pair of switches to the other building using a pair of single-mode long-range (LR) transceivers. How would you then connect the floor switches to the aggregation switches to fulfil the uplink requirements? In general, you can select between multimode fiber-optic short-range transceivers and copper-based 10GBASE-T Ethernet. When evaluating these two options, keep in mind that UTP is limited to a maximum length of 100 meters, and that 10GBASE-T support is still uncommon in many installations today.
