Cisco CCNA 200-301 – HSRP – Hot Standby Router Protocol

  1. Introduction

In this section, you’ll learn about HSRP, the Hot Standby Router Protocol. We’ll start off with a discussion of network redundancy in general. This is where we put extra devices into the network so that we don’t have any single points of failure. If any single one of our network infrastructure devices, like a router or a switch, fails, there’s another one sitting there that can take over its load. This is great, it makes our network more resilient, but it brings its own set of issues as well. Namely, how are we going to control the paths that traffic is going to take over all these redundant links? And if a device does go down, how are we going to control failover to the spare? We’ll talk about how that works with our layer three routing in the first lecture as well.

We’ll move on from there to talk about FHRP, our First Hop Redundancy Protocols. When we do have redundant devices in the network, we’re going to have redundant default gateways for our end devices. If their main default gateway goes down, we’re going to have another default gateway that they can fail over to. I’ll talk about the different FHRP protocols that are available in that first FHRP lecture. Now, the most popular of the FHRP protocols in a Cisco environment is HSRP, the Hot Standby Router Protocol, and that’s going to be the focus of the final two lectures. In this section, I’ll explain all the theory of how it works, and I’ll also show you how to configure it as well. Okay, so let’s get into HSRP.

  2. Network Redundancy

In this lecture you’ll learn about network redundancy. In the diagram on the left of this slide we’ve got an example network, and there is no redundancy at all in it. Everything is a single point of failure. Looking at it from the enterprise point of view, you can see where I’ve put in the red line showing the demarcation point between the enterprise and the service provider. The SP router is the service provider router.

R1 is the enterprise’s WAN (wide area network) edge router. From R1 we’ve got a single connection going to a single service provider router. We’ve only got that one edge router, R1, we’ve got a single core/distribution layer switch, and we’ve got single access switches. So if any one of those devices goes down, our PCs are going to lose connectivity; there’s no redundancy there. That’s quite common though for small branch offices, where the cost of adding redundant devices wouldn’t be justified. In larger offices the cost will be justified, because the cost of an outage would be more expensive.

So we’re going to want to have redundancy there. The point of redundancy is to eliminate single points of failure, so if any single device goes down, there’s another device already in place which will take over. In the example you see on the slide now, we’ve got two WAN edge routers, R1 and R2, which are connected to two service provider routers, SP1 and SP2, with separate links. We’ve got a pair of core/distribution layer switches, and our access layer switches have got dual uplinks going to them. So if any of the service provider routers, the WAN edge routers or the core/distribution layer switches goes down, or any of the links there go down, we still maintain connectivity. Now, you might have noticed that we don’t have redundancy at the access layer. You see access layer switch 3 here: if that goes down, then the PCs that are connected to it will lose connectivity. This is normal, because desktop PCs typically just have one network card, so they can only connect into one switch anyway. An exception to this would be servers, which will often have redundant NICs, so for your servers you’ll usually put in redundant access layer switches. But if an access layer switch goes down, it’s only the PCs that are connected to that one switch which will lose connectivity.

All of the other PCs in the building that are connected to different switches will still maintain connectivity. Compare that with the first slide, where if, say, R1 or our core/distribution layer switch went down, everything in the building loses connectivity. With the second example here, where we have built redundancy into the solution, we’ve got full redundancy on our service provider links, our WAN edge routers and our core/distribution layer switches. In the example topology I gave you, we’ve got a clear demarcation point between layer three and layer two.

The links between our WAN edge routers, R1 and R2, and going up to the service provider are all layer three links, meaning the interfaces have got IP addresses on them and we’re going to be running routing. The devices downstream from R1 and R2 are layer two devices; the core/distribution layer switches in this example are going to be layer two only. The reason I’ve done this is because it makes it easier to explain and to understand if I’ve got a clear boundary between layer three and layer two. If this was a real world deployment, we probably would have deployed layer three switches for our core/distribution layer switches. For everything I’m going to teach you during this section, it doesn’t really make any difference, everything still applies. It’s just easier to understand what’s going on with the routing and switching when we’ve got that clear layer three and layer two boundary.

Okay, so in the example here, how are we going to configure the connectivity from our WAN edge routers going upstream and also down to our PCs? Well, redundancy and failover are relatively easy to implement for layer three routing. If you look at our routes on R1 here, it’s got a direct connection to the service provider router SP1, so we’ll have a default static route pointing upstream there. Our route is ip route 0.0.0.0 0.0.0.0 203.0.113.1; the next hop address is the SP1 router at 203.0.113.1. Now, we want redundancy in case the SP1 router or the link to that router goes down, so we have a backup route which is going to point to R2. If the connectivity to SP1 goes down from R1, we can send traffic to R2, which will then send the traffic up to SP2. Our backup route is also going to be a static default route, and the next hop address is R2 at 10.10.2.2. On this backup route we give it a higher administrative distance of 5, because I don’t want to load balance my Internet bound traffic to SP1 and to R2.
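As a minimal sketch, assuming the addresses from this example (SP1 at 203.0.113.1 and R2 reachable at 10.10.2.2), those two default static routes on R1 would look something like this:

    ! Preferred default route via the directly connected SP1 router (static route AD defaults to 1)
    ip route 0.0.0.0 0.0.0.0 203.0.113.1
    ! Floating static default route via R2, with AD 5 so it only takes effect if the route above is lost
    ip route 0.0.0.0 0.0.0.0 10.10.2.2 5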

When I send it from R1, I want it to always go up the direct link to SP1, unless that goes down, and then I want to send it to R2. So I need to make the route to SP1 a more preferred route than the route to R2, and the way I do that is by manipulating the administrative distance. On the first route to SP1 I don’t specify an administrative distance; it’s a static route, so the administrative distance will be 1. On the backup route going to R2, I specify an administrative distance of 5. The lower the administrative distance the better, so the second route with an AD of 5 is only going to come into effect if the first one goes down. If the link to SP1 goes down, then the router will automatically fail over to using that backup route. For traffic going downstream to the PCs on the 10.10.10.0 network, R1 has got an interface on the inside, GigabitEthernet0/1, which has got IP address 10.10.10.2, so it’s already got an interface that is in that subnet and I don’t need to configure a route. However, if interface Gig 0/1 goes down, I want to have a backup route going down to the PCs.

So that’s why I have ip route 10.10.10.0 255.255.255.0 with a next hop address of 10.10.2.2, again pointing over to R2. I don’t need to specify an administrative distance on the connected route, because the default administrative distance of a connected route is 0, so it’s always going to be the most preferred. The default AD on a static route is going to be 1, so this is going to be the backup route anyway, even without having to change the AD.
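A sketch of that downstream backup route on R1, again assuming the 10.10.10.0/24 PC subnet and R2 at 10.10.2.2 from this example:

    ! Backup route to the PC subnet via R2. The connected route on Gig 0/1 (AD 0) is preferred
    ! while the interface is up; this static route (AD 1) takes over if Gig 0/1 goes down
    ip route 10.10.10.0 255.255.255.0 10.10.2.2

You can verify which routes are currently in the routing table with show ip route.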

So that’s how we configure our routes and our backup routes on R1. On R2 we’re going to be doing the same configuration, except R2 will use SP2 as its preferred route out to the Internet, and R1 will be used for the backup routes. Okay, so that was our layer three redundancy information. But we also need to worry about how the PCs are going to send their traffic upstream and out to the Internet; we’re going to need to have redundancy configured there as well. That’s what we’re going to discuss in the next lecture.

  3. FHRP First Hop Redundancy Protocols

In this lecture you’ll learn about FHRP, First Hop Redundancy Protocols. I’ll start with just a really quick review of the routing from the last lecture. Looking at the network topology, R1 and R2 are the default gateways for our PCs. On R1 we’ve got a default static route pointing up to the service provider router SP1, and it’s directly connected to the 10.10.10.0 network going downstream. If either the upstream link to SP1 or the downstream link to core/distribution switch 1 goes down, we’ve got backup routes pointing at R2, so it can fail over around that outage in the network. And on R2 we’ve got a similar configuration, where its preferred route up to the Internet goes to SP2, it’s directly connected to core/distribution switch 2 for downstream traffic, and if either of those links goes down it will fail over to R1. So you see the configuration on the slide here; we covered it in the last lecture.
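R2’s side of that configuration mirrors R1’s. A rough sketch is below; the SP2 and R1 next hop addresses are placeholders for illustration only, as the exact values weren’t stated in the lecture:

    ! Preferred default route via SP2 (placeholder next hop address)
    ip route 0.0.0.0 0.0.0.0 203.0.113.5
    ! Floating static default route via R1 (placeholder address), AD 5
    ip route 0.0.0.0 0.0.0.0 10.10.2.1 5
    ! Backup route to the PC subnet via R1, used if R2's LAN interface goes down
    ip route 10.10.10.0 255.255.255.0 10.10.2.1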

That was quite simple. Looking downstream from the R1 and R2 WAN edge routers, we’ve got our core/distribution layer switches and our access layer switches, with redundant links between them as well. They are layer two only devices, so we don’t need to worry about configuring IP addresses or routing there. But when we look down at the bottom at the PCs, they do have IP addresses, so they do need layer three configuration, and things get a little bit more messy at that point. Looking at the network from the point of view of the PCs, there are redundant gateways: R1 has got IP address 10.10.10.2 and R2 has got IP address 10.10.10.3.

R1 and R2 are going to function as the default gateway for the PCs. So how are we going to configure this? Well, we could set up half of our PCs to use R1 at 10.10.10.2 as their default gateway, and the other half of the PCs could use R2 at 10.10.10.3 as their default gateway. But it would be really inconvenient to set up half of our PCs to use one gateway and the other half to use the other gateway. An even bigger problem is if, say, R1 went down: all the PCs that were using 10.10.10.2 as their default gateway would need to be manually reconfigured to use 10.10.10.3 instead. You saw when we did the routing configuration on R1 and R2, we’d got the backup routes there, and if a link goes down it will automatically fail over to using the other path. We don’t want to have to manually reconfigure our PCs if a router or the path to the router goes down, because that’s going to be very inconvenient and very time consuming.

So we want a better solution than that, and that is where FHRP, First Hop Redundancy Protocols, comes in. With FHRP, the default gateway routers, R1 and R2 in our example, have a virtual IP address which is negotiated between the two of them. R1 and R2 both run a First Hop Redundancy Protocol, they talk to each other, and they agree on what their virtual IP address is going to be. There’s also an associated virtual MAC address as well, and they negotiate which router is going to be answering on that particular IP address and MAC address. So now the PCs, rather than having to use IP address 10.10.10.2 or 10.10.10.3 as their default gateway, use the virtual IP address of 10.10.10.1. So say that we’ve got PC1 and it is currently using R1 as its default gateway with the virtual IP address 10.10.10.1, and router R1 goes down. Well, R2 will detect that and it will automatically take over the virtual IP address of 10.10.10.1.

So PC1’s default gateway address does not change, and it will automatically fail over to using R2 without having to reconfigure anything. On to the different FHRP First Hop Redundancy Protocols that we have. The first one is HSRP, the Hot Standby Router Protocol. This is Cisco proprietary, and HSRP is deployed in an active/standby pair. So looking back a slide, with HSRP, if R1 is the active, R2 will be standby only, so all traffic always goes through R1. If R1 fails, it will then fail over to R2. So HSRP is an active/standby configuration. HSRP, if you’re in a Cisco environment, is the most commonly used First Hop Redundancy Protocol.
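As a preview of the configuration we’ll cover in the next lecture, a minimal HSRP sketch on R1’s LAN interface might look like this, using the example 10.10.10.0/24 addressing (the group number and priority are illustrative values, not taken from the slide):

    interface GigabitEthernet0/1
     ip address 10.10.10.2 255.255.255.0
     ! HSRP group 1 with the shared virtual IP the PCs use as their default gateway
     standby 1 ip 10.10.10.1
     ! Higher priority makes R1 the active router; preempt lets it take the active role back when it recovers
     standby 1 priority 110
     standby 1 preempt

R2 would get the same standby 1 ip 10.10.10.1 under its own interface address of 10.10.10.3, and you can check which router is active with show standby brief.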

The next one that we have available is VRRP, the Virtual Router Redundancy Protocol. This is very, very similar to HSRP. It’s also deployed in an active/standby pair, but VRRP is an open standard, so it’s not just supported on Cisco routers. It’s so similar, actually, that if you look at the configuration between HSRP and VRRP, it’s nearly exactly the same, apart from HSRP uses the keyword standby and VRRP uses the keyword vrrp.
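To show how close the two are, the same gateway configured with VRRP instead of HSRP might look roughly like this (same illustrative values as the HSRP sketch above):

    interface GigabitEthernet0/1
     ip address 10.10.10.2 255.255.255.0
     ! Same idea as HSRP, just the vrrp keyword instead of standby
     vrrp 1 ip 10.10.10.1
     vrrp 1 priority 110
     ! Preemption is enabled by default with VRRP, so no preempt command is needed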

The last option that we have available is GLBP, the Gateway Load Balancing Protocol. This, like HSRP, is also Cisco proprietary. GLBP supports active load balancing across multiple routers, so rather than just being active/standby and not doing load balancing like HSRP does, you can have it doing load balancing between the two routers for the same IP subnet. GLBP is a little bit more complicated to set up and troubleshoot though, so HSRP is the one that’s more commonly used, and HSRP is the one that’s covered in the CCNA exam. So that’s what we’ll be covering in the next lecture.
