CompTIA Network+ N10-008 – Ethernet Fundamentals

  1. Ethernet Fundamentals

Ethernet fundamentals. So in early computer networks, there were so many different network technologies out there, and each of them was competing for a piece of the market share. There wasn’t a lot of standardisation, so we had things like Ethernet, Token Ring, Fiber Distributed Data Interface (FDDI), and others that were all fighting to be the dominant market leader. Well, currently we have one market leader, and that is Ethernet. It is the dominant technology for local area networks. And so in this entire section, we’re going to focus specifically on Ethernet because it is so important. Ethernet has gotten so popular that if you don’t understand Ethernet, you really don’t understand how networks work today. Ethernet was originally run over coaxial cables using those BNC connectors and vampire taps. These were the 10BASE5 and 10BASE2 networks.

Those are no longer covered on the Network+ exam as of version seven, so you don’t have to memorise 10BASE5 or 10BASE2 facts. But over time, we migrated from those coax networks to copper twisted pair networks, using UTP and STP like we talked about in the last section. 10BASE-T runs over unshielded twisted pair, or Cat 3, cabling. It has a maximum speed of ten megabits per second and can travel up to 100 meters. Ten megabits per second isn’t a lot of speed compared with our networks nowadays, but back in the early 80s, that was sufficient for almost all uses.

Now, how should devices access the network? This is really one of the core questions for Ethernet: should access be deterministic or contention-based? Now, what does that mean? Well, determinism is very organized and orderly. You need some sort of electronic token to transmit, and technologies like Token Ring used this approach. Think about a classroom setting with 20 of us in the room: if everyone talked at the same time, no one would be heard.

And so the way we would do this in a classroom is we would say you have to raise your hand, and when I call on you, you can then talk. That’s deterministic. I am the one who determines who gets to talk because I’m the instructor. Now, the second way is what’s called contention-based, and contention-based is very chaotic. It’s more like when you go to the pub with five of your friends, and you all just kind of listen and go, “Oh, I hear a gap in the conversation,” so you start talking. It’s chaotic, because what if two of you start talking at the same time? You step on each other, right? So you can transmit whenever you want, but people won’t always be able to hear you. Well, that’s how Ethernet networks actually work. They are contention-based. There is no electronic token to transmit.

It is not the way that token-based networks like Token Ring worked. And the reason for that is that, overall, we’ve come up with ways around having a very deterministic and dedicated way of deciding who talks, and we’re going to go into that here in this lecture. Now, the way Ethernet approaches this is by using something called CSMA/CD, which stands for Carrier Sense Multiple Access with Collision Detection. Now what does that really mean? Well, let’s go back to our pub example. What usually happens if there are five of us sitting around the bar and I start talking, and then you start talking on top of me? We’ll go, “Oh, I’m sorry, you go first,” right? That’s the idea here. We’ve detected there was a collision, and then we negotiate who’s going to go next. Ethernet devices do the same thing.

They do this by listening to the wire, and if it’s not busy, they’re going to start talking. That’s the carrier sense part. Multiple access means that we all can start talking at any time we want, but only one of us should talk at a time. Collision detection means that if we detect a collision, we’re going to back off, wait a random time, and then try again. Have you ever been walking down a hallway at work and someone’s coming from the other direction, and it’s a small hallway, and as you walk up to each other, you kind of do that little dance of “I’ll go left, you’ll go right,” and you don’t really say anything; you just kind of figure it out?

And what I usually do is kind of what Ethernet does: I just stop and wait for the other person to walk around me. Otherwise, if I go right, they might also go right, and there will be a problem, right? So that’s the idea here with carrier sense multiple access with collision detection. Now consider the following example. We’ve got six devices on the network, and we’re using a bus network here; we’re all sharing the same wire. There is no problem if Four wants to talk to Five over the segment, because no one else is talking. And if Two then wants to talk to One, again the wire is clear; there is no issue. But what happens if Three and Five try talking at the same time? Oh no, we have a collision, marked by this red X. We now have a pause while each of them chooses a random wait time. In this case, Three decided to wait 30 milliseconds, and Five chose 150 milliseconds.

Now what happens 30 milliseconds later? Three starts communicating. Five is still waiting, because it still has another 120 milliseconds to go. And so the traffic just starts flowing again with no issues, and 120 milliseconds later, Five will make its communication, again with no issues. Now, all of this takes into account something called collision domains. On that diagram, those six machines were all connected by the same cable, so they were all in the same collision domain. Every device on the same cable or the same hub shares a collision domain; that’s what comprises a collision domain. Devices operate in half-duplex when they’re connected to a hub, because they have to listen and then talk; they can’t talk and listen at the same time. Devices must listen before they transmit to ensure they don’t cause a collision, and they have to detect if a collision occurs and then back off and wait. So a large collision domain can really slow down your network. In the case of this hub, I have four machines on it; that’s probably not going to be a big issue. If we’re at the bar with four people, we can make that conversation work. But if we have 20 people, we’re going to have a lot more collisions and a lot more issues. And so we want to break those collision domains down into smaller chunks to get more efficiency out of our networks.
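
The collide-and-back-off behaviour described above can be sketched in a few lines of Python. The station numbers and wait times mirror the lecture’s example; real 802.3 uses truncated binary exponential backoff, so treat this purely as an illustration:

```python
import random

def resolve_collision(station_ids, rng=None):
    """After a collision, each station waits a random time in
    milliseconds; whoever picked the shortest wait transmits first.
    (Simplified sketch, not the real 802.3 exponential backoff.)"""
    rng = rng or random.Random()
    waits = {sid: rng.randint(1, 200) for sid in station_ids}
    order = sorted(waits, key=waits.get)
    return waits, order

# Mirror the lecture example: station 3 waits 30 ms, station 5 waits 150 ms,
# so station 3 transmits first and station 5 follows 120 ms later.
waits = {3: 30, 5: 150}
print(sorted(waits, key=waits.get))  # [3, 5]
```

Because each station picks its wait independently at random, ties are possible; real Ethernet handles a repeat collision by widening the random range and trying again.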

Now, when we add switches to the network, Ethernet switches are going to increase our scalability by creating lots of collision domains, and that’s actually a good thing. When you look at a switch, every single switch port is its own collision domain. So here in the diagram, you’ll see my switch in the center, and I have four collision domains, one between each computer and the switch, whereas with the hub, it was all one domain for all four devices. This is going to increase the speed. And because nobody else is talking on that switch port, the switch port can operate in full-duplex mode. If the segment is just me and the switch, I can talk to it all day and there will never be a collision. So I don’t have to do that listening anymore, and I can operate in full duplex, speaking faster and getting more bandwidth out. Now, when we talk about speed limitations, each type of Ethernet has its own limitations on speed, and as a network guy, I want the most speed possible. Standard Ethernet operates at ten megabits per second over what we call a Cat 3 cable. Fast Ethernet, over Cat 5, operates at 100 megabits per second. Gigabit Ethernet is one gigabit, or 1000 megabits per second; going back to our chart a couple of lessons ago, that would be Cat 5e or Cat 6. Then I have ten-gigabit Ethernet, which operates at ten gigabits per second over Cat 6A or Cat 7. And then we have 100-gigabit Ethernet, which only runs on fibre networks. And you get the idea here. Bandwidth is measured in bits per second, which is how many bits the network can transmit in one second. When we get to megabits, that’s millions of bits per second; a gigabit is one billion bits per second.
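
Since bandwidth units use decimal prefixes, the conversions above are easy to check with a small helper (the function name is mine, purely for illustration):

```python
def to_bits_per_second(value, unit):
    """Convert a bandwidth figure to bits per second.
    Network bandwidth uses decimal prefixes: 1 Mbps = 1,000,000 bps
    and 1 Gbps = 1,000,000,000 bps."""
    factors = {"bps": 1, "Kbps": 10**3, "Mbps": 10**6, "Gbps": 10**9}
    return value * factors[unit]

print(to_bits_per_second(10, "Mbps"))   # Ethernet: 10,000,000 bps
print(to_bits_per_second(100, "Mbps"))  # Fast Ethernet: 100,000,000 bps
print(to_bits_per_second(1, "Gbps"))    # Gigabit Ethernet: 1,000,000,000 bps
```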

And the type of cable is going to determine the bandwidth capacity of your network. Like I mentioned, going from Cat 3 to Cat 7 increases your bandwidth by a lot. Now, the category cables we talked about, Cat 3 through Cat 7, were all copper cables, and they are all limited to 100 meters. But what about fiber? Well, there are a couple you need to know about, and those are the 1000BASE-SX, LX, and ZX standards. Those can be either multimode or single-mode fibers. Now, you don’t need to memorise their exact distance limitations, whether they’re 220 metres or 550 meters, but you do need to understand they are higher than 100 meters, right? 100 meters is the limit for copper cables. The other way we designate cables, besides calling them Cat 3 or Cat 5, is by using the Ethernet standard, and you’ll see that in the left column here. 10BASE-T is ten megabits per second; the T means twisted pair, and that is a Category 3 cable.

100BASE-TX refers to 100 megabits per second over twisted pair cable. 1000BASE-T is 1000 megabits per second, or one gigabit, over twisted pair. And then we get into single-mode and multimode fibers, and they are all on my screen here, standards like 1000BASE-SX, 1000BASE-LX, and 1000BASE-ZX. Now, how do you know or memorise multimode versus single mode? Well, this is where I have a little saying. The saying is “S is not single.” Okay? S is not single. Now why do I say that? When you look at something like 1000BASE-SX, there’s an S in there. So that means it’s not single-mode; it’s multimode fiber. Now, if I look at the other ones on there, you’ll see I have LX and ZX, for example. Those don’t have an S, and if there’s no S, that means they’re single-mode, because S is not single. So if you see an S in it, remember that’s multimode; all the other ones will be single-mode. So on the exam, you may get a question that says, “Which of these is a multimode fibre: 1000BASE-T, 1000BASE-SX, or 1000BASE-ZX?” And you’ll have to pick the right one, right? S is not single. So this is a helpful chart. It’s a good summary of everything we’ve discussed about these connections’ speeds, distance limitations, media types, and Ethernet standards.
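
The “S is not single” mnemonic is mechanical enough to express as code. This hypothetical helper applies it to the media designator of a fibre Ethernet standard; note the mnemonic only applies to fibre standards, not copper ones like 1000BASE-T:

```python
def fiber_mode(standard):
    """Apply the "S is not single" mnemonic: an S in the media
    designator (e.g. SX) means multimode fibre; no S (LX, ZX) means
    single-mode. Exam-recall heuristic for fibre standards only,
    not an exhaustive standards table."""
    designator = standard.upper().split("BASE-")[-1]
    return "multimode" if "S" in designator else "single-mode"

print(fiber_mode("1000BASE-SX"))  # multimode
print(fiber_mode("1000BASE-LX"))  # single-mode
print(fiber_mode("1000BASE-ZX"))  # single-mode
```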

  2. Network Infrastructure Devices (Overview)

Network infrastructure devices. So for the Network+ exam, you have to be able to identify network infrastructure devices, both by recognising their icons and by knowing what they do, which broadcast domains they break up, and which collision domains they break up. And we’re going to talk about that in this lecture. The primary devices we use in our networks today are routers and switches, but they evolved from bridges and hubs. So we’re going to start with hubs and work our way up through bridges and switches to routers. Hubs are layer 1 devices. They’re used to connect multiple network devices and workstations together, and you can identify them by a square icon with an arrow pointing in both directions.

These are known as multiport repeaters, and there are three basic types of hubs: passive hubs, active hubs, and smart hubs. Now, a passive hub is going to repeat the signal, but with no amplification. So if I have an eight-port hub and something comes in on port one, it’s going to pass it out on ports two through eight. An active hub does the same thing, but it boosts the signal back up. Where this is useful is if I have a long, long hallway; say my office building is 300 metres in length. Well, I can only go 100 metres with a Cat 5 cable. So I might go 60 or 70 metres, put an active hub there, go another 60 or 70 metres, put an active hub there, and then go another 60 or 70 metres to the end. Every time I have an active hub, it restarts that 100-metre limit. With a passive hub, all those segment lengths would have been added together, and three 60-metre runs adds up to 180 metres, which is not going to work. So I have to have an active hub in there. Now, a smart hub is an active hub that also has enhanced features, like Simple Network Management Protocol (SNMP) support, so I can actively control and configure that hub from a distance. It’s not just a dumb device; it adds a little bit of intelligence. However, hubs are almost never used in modern networks. We use switches instead, and I’ll show you why in just a few minutes. The next thing we need to talk about here is collision domains.
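
The hallway arithmetic above generalises: each active hub regenerates the signal and restarts the 100-metre copper limit. A quick sketch (the function name is mine, and it ignores real-world repeater-count rules such as 5-4-3):

```python
import math

def active_hubs_needed(total_distance_m, segment_limit_m=100):
    """Minimum number of active hubs (repeaters) to span a copper run,
    given a per-segment limit of 100 m. Hubs sit between segments, so
    a run of N segments needs N - 1 hubs. (Illustrative only.)"""
    segments = math.ceil(total_distance_m / segment_limit_m)
    return max(segments - 1, 0)

print(active_hubs_needed(300))  # 3 segments of <=100 m -> 2 active hubs
print(active_hubs_needed(80))   # fits in one segment -> 0 hubs
```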

Now, a hub is a layer 1 device, like we said, and it’s used to connect multiple network segments together. We just said we wanted to take five or six computers and make them talk; well, we can use hubs to do that. But each LAN segment built on hubs becomes one big collision domain, because hubs don’t break up collision domains. So if I have a diagram, like I do here on the screen, where I have two four-port hubs, with three machines on the left side talking to their hub and two on the right side talking to their hub, and the hubs are connected together, it’s as if they were all on one cable. It’s all treated as one large collision domain. And that can become a big issue as we get into larger networks, like a 24-port or 48-port hub, because too many machines trying to talk at the same time will cause too many collisions. So how do I fix that?

Well, that introduces the bridge. A bridge is going to analyse the source MAC address in each frame and populate an internal MAC address table. Based on that table, it’s going to make forwarding decisions using the destination MAC address in those frames. So in our earlier example of those six machines with the two hubs, I can now put a bridge in between and break them into two pieces. The information will still make it across, but only when it needs to. So if the PC in the upper right corner wants to talk to the server, it will go through the bridge. But if it wants to talk just to the PC on the hub that it’s sharing, that traffic never has to cross the bridge, and the three machines on the left will never hear that communication. This adds security and efficiency to our networks. Now, if I take a hub and a bridge and marry them together, what I get is a switch. A switch is a layer 2 device, just like a bridge, and it’s used to connect multiple network segments together, just like a hub.

Essentially, this is a multiport bridge. Every single port acts as if it had a bridge attached to it. It’s going to learn MAC addresses and make forwarding decisions based on those addresses, just like a bridge: it analyses the source MAC address and then decides where to send the frame based on its internal table. So how does this operate in the real world? I’ll show you in just a second. First, let’s talk about the layer 2 switch. Every port represents an individual collision domain, and all the ports are in the same broadcast domain. So if we go back to the way we had our bridge set up, we have now combined those two hubs and the bridge into one device, and all those machines are going to share it. Now, let’s look at how switches can boost network performance. I am sitting at PC One, and I want to take remote control of the server by using SSH, or Secure Shell. How can I do that? Well, PC One has a MAC address of twelve Bs, and I want to talk to a server that has a MAC address of twelve Cs. I’m going to refer to PC One’s MAC address as BB and the server’s MAC address as CC, just for simplicity’s sake.

Now, notice I have the switch tables at the bottom, and right now they’re empty; the switches don’t know who is connected to them. But when PC One, with its MAC address of BB, wants to talk to Server CC, it sends out an ARP packet. That ARP packet gets to Switch One, which checks its table and goes, “I don’t know how to get to CC, so I’m going to push that ARP packet out every other port I have to see if I can find it. But before I do that, I do know that you’re coming from BB, so I can populate my table: that port on my switch is BB.” So it populates its table and pushes out the ARP packet to everybody else. PC Two says, “I’m not CC,” so it ignores it. PC Five says, “I’m not CC,” so it ignores it. But Switch Two says, “I don’t know who CC is either. It’s not in my MAC address table. So what am I going to do? I’m going to rebroadcast that ARP to my broadcast domain,” which is PC Three, PC Four, and the server. As it sends the ARP packet out, Switch Two also learns that BB is reachable through the port facing Switch One, so it notes that in its MAC address table as well. And as those ARPs go out to all of the other devices, the server sees one and goes, “Hey, I’m CC.” So it responds with an ARP reply back to Switch Two and says, “Hey, that’s me.”

So what does Switch Two do? It populates its table with CC being on port two and forwards the reply back to the requester, which was Switch One. When Switch One gets it, it populates its table and pushes it back to its requester, PC One. So, at this point, everyone on the network was asked, “Who is CC?” That was a lot of traffic to figure out where the server was. But now that we know who it is, PC One sends an SSH packet out and says, “Let’s talk.” It gets to Switch One, and instead of bugging PC Two and PC Five, Switch One only sends it out where CC is, which is its port two. That brings it to Switch Two, which sends it out its own port two, and it gets to the server. And so now we have two-way communication via SSH going between the server and the PC, and PCs Two, Three, Four, and Five don’t have to hear it and can operate on their own without dealing with this SSH traffic.
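
The learn-then-forward behaviour walked through above (learn the source MAC on the ingress port, forward when the destination is known, flood otherwise) can be sketched as a tiny Python class. The class and method names are hypothetical, and real switches also age out table entries:

```python
class LearningSwitch:
    """Minimal sketch of transparent-bridge learning: record each
    frame's source MAC against its ingress port; forward out one port
    when the destination MAC is known, otherwise flood out every
    other port."""
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the source
        if dst_mac in self.mac_table:            # known destination
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "BB", "CC"))  # CC unknown: flood out ports [2, 3, 4]
print(sw.receive(2, "CC", "BB"))  # BB was learned on port 1: forward to [1]
print(sw.receive(1, "BB", "CC"))  # CC now known on port 2: forward to [2]
```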

Now, this is why switches improve network performance and security. At this point, Switch One and Switch Two are only sending the traffic between PC One and the server across that one path, so PCs Two, Three, Four, and Five never hear any of it. And if PC Two and PC Three wanted to talk, they could be talking at the same time, because these switches support full duplex. This is where our switching efficiency comes into play. Now, the next device we have is a router, and this is going to be used to connect two dissimilar networks, like an internal network and an external network. Routers make forwarding decisions based on logical network address information, such as IPv4 or IPv6 addresses. Whereas switches were all about layer 2 and MAC addresses, routers are all about layer 3 and IP addresses. Routers are more feature-rich, and they support a broader range of interface types. A router may have a serial port, a copper RJ-45 port, an ST fibre connector, and you may be using GBICs or SFPs on it, and it may have multiples of these connectors, whereas a switch is typically all copper or all fiber. Now, routers have one distinct advantage over switches, and that is that they separate broadcast domains.

So going back to our earlier example, I have three PCs on the left and two on the right, and they’re talking to those switches. If the router wasn’t there, that would be one broadcast domain with five collision domains. But because I put the router in there, I separated that into two broadcast domains, and that is going to reduce the traffic and reduce the noise. This is going to lead to efficiency in our networks. Now, there are devices called layer 3 switches, and this tends to confuse some students, because we said switches are layer 2 and routers are layer 3. Well, just like we took hubs and bridges and combined them to make a switch, if you took a switch and a router and combined them, you’d get a layer 3 switch. Layer 3 switches are layer 3 devices that are used to connect multiple networks together and perform routing functions. They can make routing decisions just like a router, and they can connect network segments just like a switch. Because they act like a router, each of their ports can act as its own broadcast domain as well as its own collision domain. So this is an efficient way to do things on an internal network. Now, if you have a very, very large network, I would not recommend using layer 3 switches as your router, because they’re not as efficient at routing as a dedicated router would be. But if you’re in a small office or home office environment of 20 or 30 machines, replacing a router and a switch with a single layer 3 switch can be a useful way to save some money, because you’re only having to buy one device instead of two. Now, lastly, I have here on the screen a nice little summary chart for you that shows the five types of devices we just talked about: hubs, bridges, switches, multilayer switches, and routers.

And it will show you the collision domains and broadcast domains each one creates. Remember, a hub is just like one shared cable: one collision domain and one broadcast domain. A bridge adds one collision domain per port but still has one broadcast domain. A switch is just like a bridge: one collision domain per port and one broadcast domain. Routers and multilayer switches operate the same way as each other: each port represents one collision domain and one broadcast domain. And you can see the layer of operation over on the right side: hubs are at layer 1; bridges and switches are at layer 2; and multilayer switches and routers are at layer 3. Now, one last word about multilayer switches. On the Network+ exam, when they mention a switch, they are almost always talking about a layer 2 switch. So I want you to think of switches as layer 2 devices that are focused on MAC addresses, and routers as layer 3 devices that are focused on IP addresses. The only exception is if the test specifically writes the words “multilayer switch.” If they say “multilayer switch,” I want you to treat it like a router; it would then be a layer 3 device. In every other case, treat switches as layer 2 devices.
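
The summary chart lends itself to a lookup table. This sketch reproduces the collision and broadcast domain counts per device type (assuming no VLANs, which would change the broadcast numbers):

```python
def domain_counts(device, ports):
    """Collision/broadcast domain counts from the summary chart:
    hub = 1 and 1; bridge or switch = one collision domain per port
    but one broadcast domain; router or multilayer switch = one of
    each per port. (Assumes no VLANs are configured.)"""
    table = {
        "hub":               (1,     1),
        "bridge":            (ports, 1),
        "switch":            (ports, 1),
        "multilayer switch": (ports, ports),
        "router":            (ports, ports),
    }
    collision, broadcast = table[device]
    return {"collision": collision, "broadcast": broadcast}

print(domain_counts("switch", 24))  # {'collision': 24, 'broadcast': 1}
print(domain_counts("router", 4))   # {'collision': 4, 'broadcast': 4}
print(domain_counts("hub", 8))      # {'collision': 1, 'broadcast': 1}
```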

  3. Additional Ethernet Switch Features

Additional Ethernet switch features. So we’ve covered the basics of Ethernet, with the cabling and the cable types and some of the devices like routers and switches and bridges and hubs. But there’s a lot more to Ethernet out there, and we’re going to dive into that in this lesson. Now, when we talk about additional features of Ethernet, these features are there to enhance network performance, redundancy, security, management, flexibility, and scalability. All of these are great things, and we use different features and different devices to give us these abilities. Some of the common switch features are virtual LANs, or VLANs; trunking; Spanning Tree Protocol, or STP; link aggregation; Power over Ethernet; port monitoring; and user authentication. Now, the first three of these (VLANs, trunking, and STP) are a little bit more in depth, so we’ll cover each of them in its own lesson over the next several videos. For this video, we’re going to focus on link aggregation, Power over Ethernet, port monitoring, and user authentication. So, link aggregation. If you’re taking notes as we go, I want you to write down “link aggregation, 802.3ad,” because you’re going to see questions on the test where the answer is listed as a number like 802.3ad, or they might ask you, “What is 802.3ad?” And you need to be able to match those standard numbers to features like link aggregation, Power over Ethernet, and port monitoring.

So it’s going to be important as we go through this process to write these down and remember them. Now, with link aggregation, we have a problem in our networks, and that’s that congestion can occur when all the ports operate at the same speed. So if you have a 100-megabit-per-second switch, every port on that switch operates at 100 megabits per second, which is not a problem if everyone’s taking their turn. But if you remember from the last lesson, switches are full duplex, meaning every port can operate at 100 megabits per second at the same time. So if I have three ports, PC One, PC Two, and PC Three, all sending data in at 100 megabits per second, I need to send 300 megabits per second out the uplink. However, that uplink port can only handle 100 megabits per second, resulting in a bottleneck where traffic can be dropped, as shown in the image. To solve that, we use what’s called link aggregation, which combines multiple physical connections into a single logical connection. So if I have a 24-port switch, for instance, I can use 20 ports to serve 20 machines, and then take four ports and combine them together into a 400-megabit-per-second uplink. That helps alleviate the congestion by increasing the bandwidth available for the uplink. Now, if I have four connections going out and 20 connections coming in, is there a possibility there’s still going to be a backup? Well, yes, but it’s not necessarily going to happen all the time, because it’s very rare that every PC on the network is using all 100 megabits per second of its capacity at the same time.
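
The uplink arithmetic above is simple enough to sketch. This hypothetical helper computes how many physical links an 802.3ad aggregation group would need to absorb worst-case simultaneous ingress:

```python
import math

def links_needed(access_ports, port_speed_mbps, link_speed_mbps=100):
    """How many physical links must be aggregated so the logical
    uplink matches worst-case simultaneous ingress from the access
    ports? (Back-of-the-envelope sizing, not a LACP implementation.)"""
    worst_case_mbps = access_ports * port_speed_mbps
    return math.ceil(worst_case_mbps / link_speed_mbps)

print(links_needed(3, 100))  # three PCs at 100 Mbps each -> 3 links
```

In the lecture’s 24-port example, four aggregated 100 Mbps links give a 400 Mbps uplink, which is well short of the 2,000 Mbps worst case from 20 access ports; that is usually acceptable because all 20 PCs rarely transmit at full rate at once.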

Now, the next one we have is Power over Ethernet, and there are two variants of this: Power over Ethernet and Power over Ethernet Plus. Power over Ethernet is 802.3af. Power over Ethernet Plus is 802.3at. Again, I would write both of these down as part of your memorization guide. The whole purpose of these standards is to provide electrical power over the Ethernet cable. The benefit is that if I’m using a Category 5 or higher cable, I only need one cable to carry both data and power, as opposed to a separate power cable and data cable. Power over Ethernet can provide up to 15.4 watts of power to the device. Power over Ethernet Plus supports a higher wattage, going up to 25.5 watts. Both of those are numbers I would add to your memorization sheet: Power over Ethernet is 15.4 watts; Power over Ethernet Plus is 25.5 watts. Now, there are two types of devices out there. There is power sourcing equipment, which would be the switch providing the power, and there are powered devices, things like your VoIP phone, your laptop, or your wireless access point; they are considered the powered device if they are receiving power over Ethernet. And all of this occurs over an RJ-45 connector.
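
Those two wattage figures are worth committing to memory, and a tiny check function makes the distinction concrete (the function name is mine, for illustration only):

```python
def poe_budget_ok(device_watts, standard="802.3af"):
    """Can the power sourcing equipment supply this powered device?
    802.3af (PoE) delivers up to 15.4 W; 802.3at (PoE+) up to 25.5 W."""
    limits = {"802.3af": 15.4, "802.3at": 25.5}
    return device_watts <= limits[standard]

print(poe_budget_ok(12.0, "802.3af"))  # True: within the PoE budget
print(poe_budget_ok(20.0, "802.3af"))  # False: too much for plain PoE
print(poe_budget_ok(20.0, "802.3at"))  # True: PoE+ can handle it
```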

We’re still using pins 1, 2, 3, and 6, which are our data pins, but they’re also providing power. And again, that means half of the wires in this RJ-45 are simply not being used; they’re just there for future growth and expansion. The next feature we have is port monitoring, or port mirroring. Now, there’s not a standard number you have to memorise for this one, but you need to understand the concept. Port monitoring is helpful for analysing packet flow over the network, because each switch port is its own collision domain; you can’t just listen in on traffic between PC One and PC Two. If you wanted to listen to that traffic, you’d have to connect a network sniffer to a hub, where you’d be able to hear everything, or you can set up port monitoring or port mirroring. For example, if you have a 24-port switch, you could say that all traffic from ports 1 through 23 will be mirrored out to port 24, and that’s where the interface for the network analyzer machine would be. The switch requires that port mirroring be enabled for you to be able to see that traffic. So in the case of this envelope that we want to send from PC One to PC Two, the switch makes a copy and sends it over to the network analyzer machine as well, via that port mirroring, so we can analyse it with Wireshark or another network analysis tool. Following that, we have user authentication, 802.1X.

For security purposes, switches can require users to authenticate themselves before they get access to the network, and 802.1X enables us to do that. Once you’re authenticated, a key is generated and shared between the supplicant, which is the device requesting access, like your laptop or desktop, and the switch, which we call the authenticator. So how this works is, as you can see here on the screen, the supplicant, PC One, is going to talk to the switch first and ask for permission, and the switch is going to pass that straight through to the authentication server. The authentication server is going to check the supplicant’s credentials and create a key. That key is then used to encrypt the traffic between the switch and the client. And you can see that here, with the key distribution going from the authentication server to the authenticator, and then the key management going from the authenticator down to the PC. At that point, both the switch and the PC have the same key, and we can create a symmetric encryption tunnel that secures our data. Next, we have management access and authentication. To configure and manage your switches, you can use two different options: SSH to do it remotely, or a console port to do it locally. SSH, or Secure Shell, operates on port 22, and it is a remote administration programme that allows you to connect to the switch over your network. So anywhere I’m sitting on the network, I can SSH into that switch and remotely manage it. Now, with a console port, I have to use a rollover cable, with an RS-232 serial connector on one end and an RJ-45 on the other, and we will plug that into the console port of the switch. So I’ll physically connect my laptop to the switch, and then I’ll be able to access it locally using that laptop and that rollover cable.

Now, this is good for things like out-of-band management. Out-of-band management can also be accomplished by putting your management traffic on a separate network from your data traffic. So the way this works is, for instance, if I have that 24-port switch, I might dedicate port 23 to be connected to an out-of-band network for management access only. That way, all my management devices are on one network, and all my data transfer is on another network. It’s a way to add security and make sure your configurations aren’t touchable by end users, only by your system administrators. Next, we have something called first hop redundancy, and this has to do with layer 3 switches and routers. With first hop redundancy, we can use a protocol like HSRP, the Hot Standby Router Protocol, to create a virtual IP address and MAC address and designate an active and a standby router. So in this case, on the screen, you’ll see that I have three routers displayed. I have an active router, which is the dot one; a standby router, which is the dot two; and a virtual router, which is the dot three. In the real world, if I wanted to touch these routers, there would be two physically sitting there: the active and the standby. However, my PC is only set up to see one router, the virtual router, the dot three. So, when it looks for its gateway, it will go to dot three, the virtual router. The virtual router will know, based on which router is currently up, which one to send the traffic to, and HSRP is the protocol that makes that work. Now, we’re going to go much more in depth into first hop redundancy later on, but for right now, I just wanted to introduce you to the idea. HSRP is not the only first hop redundancy protocol.
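
The first hop redundancy idea (hosts point at a virtual gateway; the protocol maps it to whichever physical router currently holds the active role) can be sketched like this. The addresses and data layout are hypothetical, purely to illustrate the failover concept:

```python
def gateway_for_hosts(routers):
    """Hosts are configured with the virtual router's address; the
    first hop redundancy protocol forwards their traffic to whichever
    real router is currently active. (Concept sketch only, not HSRP.)"""
    return next(r["ip"] for r in routers if r["state"] == "active")

routers = [
    {"ip": "10.0.0.1", "state": "active"},   # the "dot one" active router
    {"ip": "10.0.0.2", "state": "standby"},  # the "dot two" standby
]
print(gateway_for_hosts(routers))  # traffic flows through 10.0.0.1

# If the active router fails, the standby takes over; hosts notice nothing,
# because they were always pointed at the virtual address.
routers[0]["state"], routers[1]["state"] = "failed", "active"
print(gateway_for_hosts(routers))  # now 10.0.0.2
```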

There’s also the Gateway Load Balancing Protocol, or GLBP; the Virtual Router Redundancy Protocol, or VRRP; and the Common Address Redundancy Protocol, or CARP. But HSRP, the Hot Standby Router Protocol, is the most popular one in use in most networks, so that’s the one I would remember and be able to recognise among these others as first hop redundancy protocols. When we get into routing later on, we’ll talk about this a little bit more in depth. Next, we have MAC filtering. MAC filtering is the process of allowing or denying traffic based on a device’s MAC address, and it can be used to help improve security. It’s one of many layers of security we can add. Honestly, it’s not really that strong, but it is one that we will use, and according to Network+, you should know it. So how does MAC filtering work? Well, here on the screen, you’ll see I have a wireless access point, and we have a wired desktop, a wireless desktop, and a wireless printer. If I wanted to ensure that only the wired desktop could communicate with that printer, I could block the wireless desktop by its MAC address. So we can say that if traffic comes from MAC address A, it will be allowed, but if it comes from MAC address B, block that traffic; and that’s what it will look like. Next, we have traffic filtering. Traffic filtering is kind of like MAC filtering, except that we do it at a multilayer switch, so it can filter based on an IP address or a port number. We are now talking about layers 3 and 4. So if I have PC One trying to talk to PC Two, as in the top example, I can block it at the multilayer switch by saying anything coming from PC One’s IP address is not allowed, but anything coming from PC Two’s is allowed. In the bottom example, I can say that things coming over port 25 are allowed because that’s mail traffic, but things coming over port 5353 are not allowed, and I’ll block them.
So I can block traffic based on either the IP address or the port number; either one is a way to do it. And this is configured using an access control list, or ACL, which we’ll talk about much later in the course.
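
A first-match ACL, as described above, is easy to sketch: rules are evaluated top to bottom, the first matching rule decides, and an implicit deny catches everything else. The field names here are hypothetical:

```python
def acl_permits(packet, rules):
    """First-match ACL evaluation: each rule matches one or more
    packet fields; the first matching rule's action wins, and an
    implicit deny ends the list. (Simplified sketch.)"""
    for rule in rules:
        if all(packet.get(field) == value
               for field, value in rule["match"].items()):
            return rule["action"] == "permit"
    return False  # implicit deny at the end of every ACL

rules = [
    {"match": {"dst_port": 25},   "action": "permit"},  # allow mail traffic
    {"match": {"dst_port": 5353}, "action": "deny"},    # block port 5353
]
print(acl_permits({"dst_port": 25}, rules))    # True
print(acl_permits({"dst_port": 5353}, rules))  # False
print(acl_permits({"dst_port": 80}, rules))    # False (implicit deny)
```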

Lastly, we have quality of service, and quality of service is going to forward your traffic based on priority markings. So we have this multilayer switch again, and I have three devices connected to it: PC One, PC Two, and a phone. Because phones are dealing with UDP voice traffic, I want to make sure the phone gets high priority, so that if I pick up the phone and start talking, packets aren’t dropped. Now, with PC One and PC Two, I can make those lower priorities, and so they’ll get a lower level of service, because if their packets are dropped and they’re using TCP, those packets can be retransmitted. In this example, PC One has a higher priority than PC Two, and the phone has a higher priority than both of them. Later on in the course, we’re going to dive deep into quality of service, spending at least two or three videos on it, because it is a very important concept. But for right now, I just want you to understand that you can tell a switch that this traffic is more important than that traffic, and that’s the idea of quality of service.
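
Priority-based forwarding can be sketched with a heap: frames drain strictly by priority marking, first-in-first-out within a class. This is only a toy scheduler to illustrate the idea, not how a real switch implements QoS:

```python
import heapq

def drain_by_priority(frames):
    """Forward queued frames strictly by priority (lower number =
    higher priority), first-in-first-out within the same class."""
    heap = [(prio, seq, name) for seq, (prio, name) in enumerate(frames)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# Voice (priority 0) beats PC One (1), which beats PC Two (2).
queued = [(2, "pc2-frame"), (0, "voice-frame"), (1, "pc1-frame")]
print(drain_by_priority(queued))  # ['voice-frame', 'pc1-frame', 'pc2-frame']
```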