Cisco CCNA 200-301 – EtherChannel

  1. Introduction

You’ll learn about EtherChannel, which can be used to bundle multiple physical interfaces into a single logical interface. I’ll cover why we need EtherChannel, and spoiler alert, it’s because of spanning tree. I’ll also cover how EtherChannel load balancing and redundancy work. You’ll learn about the different protocols that are available for EtherChannel, and how to configure and verify them.

And then, finally, we’ll talk about another problem: if your switch is uplinking to two different switches for redundancy, even if you’re using EtherChannel, spanning tree is going to cut your available bandwidth in half. Cisco do have some technologies that avoid that, though: StackWise, VSS and vPC. I’ll cover an overview of those multi-chassis EtherChannel options in the last lecture.

  2. Why we have EtherChannel

In this lecture you’ll learn about why we have EtherChannel, and we’ll start off by having a very quick review of the campus design model again. Our end hosts, like our PCs, get plugged into our access layer switches, our access layer switches uplink to the distribution layer switches, and those uplink to the core layer switches. End hosts do not constantly send traffic onto the network; most of the time their network connection is sitting idle.

If you think about what you’re doing when you’re sitting at a PC, if you’re working on a Word document or an Excel spreadsheet or something like that, there’s no traffic actually going over the network. Because of this, you can connect fewer uplinks to each higher layer than the number of hosts you have and still maintain acceptable network performance, because you don’t need to support all of the possible bandwidth that your hosts have; they’re not all going to be using it at the same time.

But if I go back a slide, you see here we’ve got our two buildings, with four access layer switches in each building for the example. Let’s say that they are 48-port switches and I’ve got 40 end hosts plugged into each switch. So that would be four times 40, 160 hosts in the main building, and 160 hosts in Building 1 as well. They’re uplinking to a pair of distribution switches in both buildings. So I’ve got 160 devices in each building, but I don’t have 160 uplinks going from the access layer to the distribution layer.

Also, I don’t have that many uplinks going from the distribution layer up to the core layer. I don’t need to put that many in, because I know that my PCs are not all going to be transmitting at the same time; they don’t actually need that much bandwidth. A starting rule of thumb recommendation for how much oversubscription you should have in your campus LAN is 20 to one from the access layer to the distribution layer.

Meaning, if you had 20 PCs connected with one gigabit per second network cards at the access layer, you would provision a single one gig uplink to the distribution layer to support their traffic. The recommendation is four to one for the distribution to core layer links. Bear in mind that those are general values. You should analyze the traffic on your network to verify that your links are not congested, because it depends on the particular traffic patterns in your network, what applications you’re running, et cetera, what a good oversubscription ratio will be for you. But these are good ballpark figures. Switches often have dedicated uplink ports which have higher bandwidth than their access ports, for example a 48-port one gigabit switch with a pair of ten gig uplinks, and that can help with the oversubscription ratio. For example, if you’ve got 48 one gig clients plugged into that switch, then the total possible bandwidth there would be 48 gigabits per second.
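Just to make the arithmetic explicit (this formula is my own framing, it isn’t on the slide), the oversubscription ratio is simply the total access-side bandwidth divided by the total uplink bandwidth. For the 20 PC example above:

    \text{oversubscription ratio}
      = \frac{N_{\text{access ports}} \times BW_{\text{access}}}{N_{\text{uplinks}} \times BW_{\text{uplink}}}
      = \frac{20 \times 1\,\text{Gbps}}{1 \times 1\,\text{Gbps}}
      = 20:1

The same formula gives the figures for the 48-port switch example we’re about to work through.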

You’ve got your two 10 gig uplinks, so that’s 20 gig of uplink bandwidth, which gives an oversubscription ratio of 2.4 to one. If we didn’t have those ten gig uplinks, if the uplinks were also one gig as well, the oversubscription ratio would be 24 to one. Obviously not as good. So normally, when you do have switches with higher bandwidth uplinks, oversubscription is not going to be a problem.

However, we do have a problem when we want to connect two uplinks, and that problem is spanning tree, because spanning tree provides redundancy but it does not provide load balancing. Spanning tree always selects the one best path to avoid loops.

So if a switch has got multiple equal cost paths via the same neighbor switch towards the root bridge, it will select just one of those ports, the one which has got the lowest port ID. It’s not going to load balance across all of them. So in our example here with the diagram, we’ve got uplinks from our access layer Access1 switch going to the Distribution1 switch, and we’ve got two 10 Gigabit Ethernet interfaces, Te0/1 and Te0/2.

Te0/1 will be selected as the root port as it has got the lowest port ID, and Te0/2 is blocking. So even though we physically connected two ten gigabit Ethernet uplinks, we only get ten gigs worth of uplink bandwidth, not the 20 gig, because spanning tree is going to block one of those links. So that’s the problem. We don’t get all of our available physically connected uplink bandwidth.

The solution is EtherChannel. EtherChannel groups multiple physical interfaces into a single logical interface, and spanning tree then sees that EtherChannel as a single interface, so it doesn’t block any ports. We now get the full 20 gigs worth of bandwidth. If you look back on the previous slide, when we weren’t using EtherChannel, spanning tree sees that as a possible loop, because traffic could go up Te0/1, then back down Te0/2, and then back up Te0/1 again. So we’ve got a potential loop there when we don’t have EtherChannel. But when we do configure EtherChannel, for spanning tree it counts as a single link, a single interface on both sides.

So spanning tree does not see it as a potential loop, and now we get the full 20 gigs worth of bandwidth. Traffic will be load balanced across all the links that are in the EtherChannel. So traffic from my PCs going upstream is going to be load balanced across all the links, and the same for the traffic coming back down in the other direction. It doesn’t just provide load balancing, it provides redundancy as well. If an interface goes down, its traffic will fail over to the remaining links.

So that was EtherChannel on our switches. We can do basically the same thing on our servers as well with NIC teaming. Going back a slide, EtherChannel is where we can bundle multiple physical ports into a single logical port on our inter-switch links. On our servers, with NIC teaming, we can bundle multiple physical network cards into a single logical interface. The benefit we get from this is we get the load balancing and the redundancy again. And because the operating system sees it as a single interface, we just have one IP address on there, which makes things much more convenient and simple to configure.
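Just as a quick preview of what this looks like on the switch side (we’ll configure and verify EtherChannel properly later in the section), here’s a minimal sketch. The interface numbers, the port channel number and the choice of LACP with channel-group mode active are just assumptions for the Access1 to Distribution1 uplinks in the example:

    ! On both Access1 and Distribution1: bundle the two 10 Gig uplinks
    ! into port channel 1 (mode active means LACP - the protocols are covered later)
    interface range TenGigabitEthernet0/1 - 2
     channel-group 1 mode active
    !
    ! Verify the bundle came up and which member ports are in it
    show etherchannel summary

Once the bundle forms, spanning tree treats the port channel as a single interface on both sides, which is exactly why it no longer blocks one of the uplinks.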

I’m putting this information in here as well because I wanted to explain the terminology to you and let you know that there are several different names for what’s basically the same thing. EtherChannel on our switches is also known as a port channel. In fact, when you hear me talking about it during the section, you’ll probably hear me call it a port channel more often than an EtherChannel; people in industry tend to call it that more often. It can also be known as a LAG, which stands for link aggregation, or a link bundle. When we bundle our physical interfaces on our servers, we’ll usually call it NIC teaming. It can also be called bonding, NIC balancing and, again, link aggregation. So that was why we have EtherChannel. It gets us past that problem with spanning tree, and you got a quick overview of the terminology as well. See you in the next lecture.

  3. EtherChannel Load Balancing

In this lecture you’ll learn about how EtherChannel load balancing works, and I’m going to use the diagram that you see on the slide here throughout this lecture. I’ve got two switches which have four links between them that have been grouped into an EtherChannel, and each of those four links is a Gigabit Ethernet interface, starting with GigabitEthernet0/1 on the left, going to GigabitEthernet0/4 on the right. In the bottom switch I’ve got some PCs plugged in, PC1 and PC2, and in the top switch I’ve got some servers, Server1 and Server2. So in this lecture we’re going to cover how EtherChannel load balances the different flows that are going across the links between the switches.

A flow is communication from a client to a server using a particular application. If PC1 in our example opens a web session to Server1 and PC2 opens an FTP session to Server2, we’d have two flows going through our switches. With EtherChannel, a single flow is load balanced onto a single port channel member interface. For example, all packets in the flow from PC1 to Server1 always go over interface Gig0/1, and all packets in the flow from PC2 to Server2 always go over interface Gig0/2. So looking at that with an animation, the first packet in the flow from PC1 to Server1 hits the first switch.

The switch decides which interface it’s going to load balance it over. It chooses Gig0/1 in our example, and then that goes to the server. The next packet in the flow will also go over the same interface, so it comes into the switch, the switch load balances it to the same interface again, and then it goes up to the server. For the second flow, from PC2 to Server2, when it comes into the switch, the switch will use its algorithm to decide which interface to load balance it onto, Gig0/2 in our example, and then it goes to the server.

When the second and the third and fourth and so on packets come in from that flow, they’ll all be load balanced onto the same interface. Packets from the same flow are always load balanced onto the same interface; we don’t load balance round robin across all the interfaces in the port channel. For example, we don’t load balance the first packet from PC1 to Server1 onto interface Gig0/1 and then the second packet in that same flow onto Gig0/2. The reason for that is that round robin load balancing could cause packets to arrive out of order at the destination, and that would break some applications. So the switch makes sure that doesn’t happen.

We always load balance packets from the same flow onto the same interface, so they’re always going to arrive in order. So this does not happen: you see here the first packet in the flow going over interface Gig0/1 and the second packet in the same flow over Gig0/2. We don’t do that. Because of the way this works, because a single flow always gets load balanced onto the same interface, the maximum bandwidth any single flow receives is the bandwidth of a single link in the port channel. That’s a maximum of one gigabit per second per flow in our example, where we were using one gig links between our switches, but there’s an aggregate bandwidth of four gig across all the flows.
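By the way, the fields the switch hashes on to pick the member link are configurable globally. As a rough sketch of the kind of commands involved (the exact default method and the available options depend on the switch platform, so treat these as assumptions for a Catalyst switch):

    ! Hash on source and destination IP address when choosing the member link
    port-channel load-balance src-dst-ip
    !
    ! Check which load balancing method the switch is currently using
    show etherchannel load-balance

Whichever method is used, packets from one flow still hash to the same link, so the per-flow maximum stays at the speed of a single member link.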

So you can think of a port channel as a multi-lane motorway. The cars always stay in their own single lane, but because there are multiple lanes, the overall traffic gets there quicker. Obviously, in our example, if we only had one uplink rather than four, we wouldn’t have so much overall bandwidth available between the switches. EtherChannel provides redundancy as well as the load balancing. If a link fails, its flows will be load balanced to the remaining links. Okay, so that’s how the load balancing and the redundancy work. See you in the next lecture for more EtherChannel.
