F5 101 – Application Delivery Controller (ADC)

  1. Configuring Load Balancing in ADC Part 2

Before we talk about load balancing concepts, methods, and configuration, let's first cover the terminology we'll be using. We have three servers with IP addresses 172.16.21.1, .2, and .3. These three servers are connected to the internal VLAN, and the BIG-IP is also connected to the internal VLAN. Now, in the F5 world, the IP address of a server is what we call a node. Again, if we're just talking about IP addresses, we call them nodes. If the IP address is combined with a port or service number, as in this case, where all three IP addresses 172.16.21.1, .2, and .3 are listening on port 80, that is what we call a pool member. Again, IP address plus port: that's a pool member. Sometimes we just say "member" for short, but a pool member is defined as an IP address and port. We also have the pool. This is where we add our pool members, and it is safe to say that a pool is a container of pool members. You don't add pool members individually; you can only add pool members when you are creating a pool.
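To keep the terminology straight, here is a small illustrative Python sketch (a mental model, not F5 code; the class names are my own) showing how a node, a pool member, and a pool relate, using the lab's addresses:

```python
# Mental model only: a node is an IP address, a pool member is an
# IP address plus a service port, and a pool is a container of members.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    ip: str                  # node = IP address only

@dataclass(frozen=True)
class PoolMember:
    ip: str
    port: int                # pool member = IP address + port

@dataclass
class Pool:
    name: str
    members: list            # pool = container of pool members

# The three lab servers, all listening on port 80
members = [PoolMember(f"172.16.21.{i}", 80) for i in (1, 2, 3)]
http_pool = Pool("http_pool", members)

# Creating the pool members implicitly defines the nodes
nodes = {Node(m.ip) for m in http_pool.members}
print(len(http_pool.members), len(nodes))   # 3 pool members, 3 nodes
```

This mirrors what the GUI does later in the lab: adding three members to a pool automatically creates the three corresponding nodes.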

Okay, next we have virtual servers, sometimes abbreviated as VS. The virtual server is the listener. Like a pool member, it is a combination of an IP address and a port. This IP address is usually exposed on the outside network, because the virtual server is the listener and is what clients contact. Again, the virtual server is the listener, and most configuration objects, such as iRules, profiles, and persistence, are associated with the virtual server.

But the most commonly used association is the pool association to the VS. Why is that? Because this is how you enable load balancing. As you can see, we have a pool named http_pool associated with the virtual server. It is also worth highlighting that without a pool associated with the VS, the VS can still be reached by the client, but it will not be able to forward traffic to the pool members, or to the servers, and again, there is no load balancing. Now in this case, the client will start communicating with the virtual server. Again, our virtual server has an IP address of 10.10.1.100, listening on port 80.

The client sends a request to the virtual server. And again, http_pool is associated with our virtual server to enable load balancing. So here's the client sending to the VS. As the virtual server receives the traffic, it checks the existing configuration. There are many objects, such as iRules, profiles, and persistence, but the most common here is the pool association. It will check: I have a pool associated, and inside that pool we have three pool members.

And this is where it starts forwarding traffic, not just to one pool member, but to the other pool members as well, using load balancing methods. I am back in our F5 BIG-IP GUI. What I have here is a fresh configuration of application objects. If I go under Local Traffic and click Nodes, there are no nodes added yet. If I hit Pools, there are none, and the same goes for virtual servers.
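The flow just described, client hits the VS, the VS consults its associated pool, and connections rotate across the members, can be sketched with a minimal round-robin picker (illustrative Python, not BIG-IP code; the object names are made up):

```python
import itertools

# Hypothetical stand-ins for the lab's pool members behind http_pool
pool_members = ["172.16.21.1:80", "172.16.21.2:80", "172.16.21.3:80"]

def round_robin(members):
    """Yield pool members in rotation, giving an even distribution."""
    return itertools.cycle(members)

picker = round_robin(pool_members)
# Six client requests arriving at the virtual server 10.10.1.100:80
# land on members 1, 2, 3, 1, 2, 3 in turn:
for _ in range(6):
    print(next(picker))
```

This is the behavior the browser refresh test demonstrates later: server one, then two, then three, and back around.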

Now we’re going to start adding pool under local traffic. I will click pulls again and I will hit create. I’m going to name our pool http underscore pool. I’m not going to add help monitors yet. I will go straight to the new members here and I’m going to add 170, 216, 21 and with the server port or service port 80, I’m going to click add. This is our first full member. I am just going to change the fourth Octane to two. The service port is still here. I am going to click add and again change the third or fourth Octet to three. Click add.

And now we are almost done adding our three pool members and the pool. I'm about to hit Finish, but before I do, I almost forgot: as you can see, I added three pool members, and when I hit Finish it will not only create the three pool members, it will also automatically create three nodes. We've already verified that there were no nodes added yet.

I'm going to hit Finish now. Okay, we have our pool, and if I click the pool and go to Members, you will see three pool members. As I mentioned, you don't add pool members separately; they are always created and added when you are creating a pool. Now, if we verify under Nodes, it has automatically added three nodes with the IP addresses 172.16.21.1, .2, and .3. Next we're going to create a virtual server. Under Virtual Servers there are none, and we are about to create one. I'm going to name our virtual server http_vs, with an IP address of 10.10.1.100 listening on port 80.

Okay, we're going to leave everything at default except for the default pool. As you can see, it has None selected so far. If I hit the select box arrow here, it shows only one pool, the pool we just added, http_pool. If I select it, we have just associated our pool with our VS. And take note, there's also a plus sign button here. This plus sign automatically takes you to the page for creating a pool.

But we don't need to do that, because we just created the pool separately. I'm going to hit Finish now. There you go, we just created our first virtual server. Now if we verify all of these application objects with the Network Map, you will see that we have three pool members residing in http_pool, and http_pool is associated with our http_vs. Now, I'm here on our client PC.

I open a browser to test not only the virtual server's reachability, but also whether the application is working properly. So I'm going to enter 10.10.1.100, the IP address we configured on our virtual server. Since this is a web browser, it will automatically send the HTTP request on port 80. I'm going to hit Enter now, and as you can see, the web application is working. We have some useful information here: our virtual server is 10.10.1.100, and it has selected server number one. I'm going to highlight this here. Server number one, also known as our pool member 172.16.21.1, listening on port 80. And you see our client IP address is 10.10.1.30. Now, some of the images show different numbers: for example, this one is number two, then number one, number two, number three, number two, and the background here is number three.

This shows us that the traffic is distributed across all three pool members. Now if I hit Refresh, you will see that the connected server changed to server number two. If I hit Refresh again, it is now connected to server number three. If you don't want to be confused by all the images, you have the option to click the source IP address link here. Okay? And as you see, the source IP address is 10.10.1.30, the IP address of our client PC. We also have the virtual server, 10.10.1.100, and the selected node, which is the IP address of our pool member. If I hit Refresh, it selects the second node again; now it's selecting the first server, the second server, and the third server. So we just tested the virtual server, the pool, and the pool members, and everything is working properly.

  1. Configuring Load Balancing in ADC Part 3

Now let's set priority group activation. Under Load Balancing Method, I'm going to change this to Ratio (Member), and I'm going to enable Priority Group Activation with Less than 2 available members. I'm going to click Update now. Now for the three pool members, we still need to configure the values, since we're using priority group activation. Okay, I need to click every single pool member and add a priority group value. For the first pool member, I'm going to leave the ratio value at one and set the priority group value to two. I'm going to click Update. There you go. Let's change the second pool member: I'll keep the ratio value of one and set the priority group to three, then click Update. Okay, now for the third pool member, I will set the priority group value to one and the ratio to a value of four. There you go. Now let's review. The load balancing method is Ratio (Member); we set Less than 2 available members for priority group activation; and for pool member one, we have a ratio value of one

and priority group two. For pool member two: a ratio value of one, priority group three. And for the third pool member: a ratio value of four, priority group only one. Now guys, I want you to pause this video, think, analyze, and let me know what's going to happen next. Now let's discuss using the whiteboard. The load balancing method we're currently using is Ratio (Member). We also set priority group activation to Less than 2 available members.

So I'm going to write this as two. We also set a priority group value on each pool member, and I'm going to use red. For the first pool member, the priority group value we set is two; for the second pool member, we used three; and for the third, we set it to one. The ratio values are one, one, and, for the third, four. Now, if the virtual server receives a request from the client and the pool is associated with that VS, the VS will start establishing connections to one or more of these pool members. Let's first think about the concept of the priority group. The BIG-IP will choose the highest priority group to forward the traffic to. What is the highest priority group among these three pool members? It's three, right? This one, this is the highest. So the BIG-IP will start establishing connections to the second pool member. We know that, very easy. Now, compare that to our setting of Less than 2 available members: does pool member two alone satisfy the requirement? Well, one compared to two is not enough. Okay, so we need another pool member.

So the next highest value after three is priority group two, which is pool member one. So the BIG-IP will not only establish connections to the second pool member, but also to the first pool member. How many pool members do we have now? Two. And does that satisfy our requirement? Yes, two pool members equals the available members we set for priority group activation. So we know that the BIG-IP will establish connections to these two pool members. The second question is: how is the load balancing going to work? Well, the load balancing method we selected is Ratio (Member). What are the ratio values for pool members one and two? Aren't they one and one? Pool members one and two have the same ratio value of one, which is the default.

This means that the load balancing between pool members one and two will work like round robin, and the BIG-IP will distribute the traffic evenly between them. How about pool member three? Well, it has the lowest priority group value and it is not needed. It will only be needed if one of the other pool members goes offline. For our current setup, only pool members one and two will receive the traffic, and they will receive it evenly. Now we're back on the Statistics page, and I have already reset the statistics counters. We're going to access the Windows client and test SSH connectivity. As we discussed, the connections will be distributed only to pool members one and two. We're now on the Windows client, and I will create multiple SSH connections. Okay, I think that's enough SSH. I will click Refresh, and we currently have a total of 24 connections. And do you see that it only load balanced to pool members one and two? We have an equal number of active connections: twelve for pool member one and twelve for pool member two. So our theory for priority group activation and Ratio (Member) load balancing is working properly.
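The reasoning above can be simulated in a few lines (my own sketch, not TMOS internals): with priority group activation set to Less than 2, members are activated from the highest priority group downward until at least two are available, and traffic is then split by member ratio among the active set.

```python
def active_members(members, min_available):
    """members: list of (name, priority_group, ratio) tuples.
    Activate priority groups from highest to lowest until at least
    min_available members are active (priority group activation)."""
    active = []
    for prio in sorted({m[1] for m in members}, reverse=True):
        if len(active) >= min_available:
            break
        active += [m for m in members if m[1] == prio]
    return active

# Lab values: member1 prio 2 ratio 1, member2 prio 3 ratio 1, member3 prio 1 ratio 4
members = [("member1", 2, 1), ("member2", 3, 1), ("member3", 1, 4)]
active = active_members(members, min_available=2)
print([m[0] for m in active])   # member2 (prio 3) first, then member1 (prio 2)

# Ratio (Member) over the active set: equal ratios of 1 behave like
# round robin, so 24 connections split 12 and 12.
total_ratio = sum(m[2] for m in active)
print({m[0]: 24 * m[2] // total_ratio for m in active})
```

Member three never appears in the active set, which matches the statistics page: twelve connections each for members one and two, zero for member three.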

  1. Configuring Load Balancing in ADC Part 4

Let's have one more load balancing exercise. I am going to change some parameters here. For the load balancing method, we'll be using Ratio (Node). For priority group activation, I'm going to change the value to Less than 3 available members. I'm going to click Update. And guys, I want you to pause the video, think, analyze, and tell me what's going to happen next. Now let's talk about what's going to happen. We have three pool members here. The first pool member has a priority group value of two, the second pool member still has a priority group value of three, and the third pool member has a priority group value of one. What we changed is the load balancing method, we are now using Ratio (Node), and we changed priority group activation from Less than 2 to Less than 3 available members. Okay, so this is our current configuration. We know that once the BIG-IP VS receives traffic, it will load balance it to one, two, or more of these three pool members. The priority will be pool member two. Why? Because it has the highest priority group.

It has three; we already know that. The second question is: which other pool members will receive connections from the BIG-IP? Let's think. We already have one pool member; does that satisfy the requirement of three? No. So we should also forward the traffic to the second highest priority group value, which is two, and the pool member with a priority group value of two is pool member one. So we know that priority groups three and two will receive the traffic. That's two pool members. But wait, it's not enough; we still aren't satisfying the requirement of three. We need one more pool member. How about the last pool member? It has a priority group value of one, and since that's the only priority group left available,

the BIG-IP will also include it in our load balancing scheme. So the BIG-IP will forward traffic and create connections to all three pool members. Now the second question is: how will the BIG-IP forward the traffic, and how does the load balancing work in this setup? Our load balancing method is Ratio (Node). So how will the traffic be distributed? Is it something like this: the first connection goes to the first pool member, the second to the second pool member, the third to the third pool member, and then the fourth, fifth, and sixth all go to pool member three? Is this correct? No, it is not. Why? Because the values I showed you, ratio one for pool members one and two and ratio four for pool member three,

are the ratio values of the pool members. Our load balancing configuration specifies that we should get the ratio values from the node configuration. So what is the node ratio value? Well, by default it is one, so I'm going to write it here: ratio value equals one. If a ratio value of one is configured on all nodes, the load balancing distribution operates like round robin, so the fourth, fifth, and sixth connections should be distributed evenly across all three pool members. I'm back on the Statistics page, and I just reset the statistics for both the HTTP and SSH pools. You see we have the three pool members, and the connection counts are all zero. So now we're going to test it. I'll go to my Windows client and initiate multiple SSH connections. I'm going to initiate them now, and as we're expecting, if I refresh the statistics, we'll see the active SSH connections. We have three, four, and two. It's pretty close, but four versus two is twice the difference.

So what I'm going to do next is go back to my Windows client and initiate more SSH connections. What we expect is that the SSH connections to the pool members will be evenly distributed. So I'll open more SSH sessions.

Let's go back to our Statistics page. I will hit Refresh, and let's see: pool members one and two both have an equal total of 14 connections, while pool member three has only twelve. That's fine, they're still close. Now I'm going to open more SSH connections, and this will be the last time I refresh. There you go. So we have around 18 to 19, a difference of just one connection. I'm pretty sure that if we opened more connections, we'd see evenly distributed traffic to all pool members.
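The key point of this exercise, that Ratio (Node) reads the ratio from the node, where every value is still the default of one, while the pool member ratios (1, 1, 4) are ignored, can be sketched like this (illustrative Python, not TMOS code):

```python
def distribute(connections, ratios):
    """Split a connection count proportionally to the given ratios,
    the ideal steady-state result of ratio load balancing."""
    total = sum(ratios.values())
    return {name: connections * r // total for name, r in ratios.items()}

member_ratios = {"server1": 1, "server2": 1, "server3": 4}  # set on pool members
node_ratios   = {"server1": 1, "server2": 1, "server3": 1}  # node default is 1

# Ratio (Node) uses the node ratios: 42 connections land evenly, 14 each
print(distribute(42, node_ratios))

# Ratio (Member) would have used the member ratios instead: 7, 7, and 28
print(distribute(42, member_ratios))
```

This explains why the statistics converge toward an even split even though pool member three still carries a member ratio of four.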

Now maybe you're thinking: if the ratio values are seen and configured on this page, all I need to do is click a pool member, and I can see and change the ratio value, which is currently four. I can change it to anything I want. But these pool members are under the pool configuration; if I click Properties, I see the name of the pool and its other values and parameters.

But the question now is: how do you verify and change the ratio value of nodes? It's pretty easy. All you need to do is go to Nodes under Local Traffic. As you can see, we have the three nodes with the IP addresses 172.16.21.1, .2, and .3. If you want to change the ratio value, all you need to do is click the node link. Okay, you will be taken to this page, and as you can see, the ratio value of node one, with an IP address of 172.16.21.1, is one, which is the default. Same with the second node, and also the third. All three nodes have the default ratio value of one.

  1. Health Monitors Part 1

Health monitors. In our previous lab, we created configuration objects. We have our virtual server, and under the virtual server we associated a pool with three pool members. If the client sends requests to our VS, the BIG-IP will load balance the connections: it will send them to pool member one, then pool member two, then pool member three. But the BIG-IP has no idea of the real status of these servers. They can be online; they can also be offline. Now, the disadvantage of having no monitor configuration is what happens if a server is down: the BIG-IP will still forward traffic to that server, and the client will experience downtime or an inaccessible application. So wouldn't it be good if we configured health monitors on our resources, such as the pool, so that the BIG-IP can track the health of the servers? And as you can see, we have a green circle.

This indicates that the server is online, or available. Now, what a health monitor does is track a specific resource and check for an expected response from the application, at a predefined time interval. The BIG-IP forwards traffic per the load balancing method, and it forwards because it knows the servers are online: it forwards traffic to servers one, two, and three. Now, if the BIG-IP does not receive a successful response from a specific pool member or node, it will mark that node or pool member offline. So in this case, pool member two is offline, and the BIG-IP will not forward traffic to that pool member. Why? Because it's offline, and we don't want our clients to experience inaccessibility to the application.

Health monitors can be applied to different types of resources. First, we have nodes, and under nodes you can apply your health monitors as the node default or on a specific node. It is most common to apply health monitors to pools, and all the pool members added to that pool will automatically inherit the pool's health monitor configuration. You can also apply a health monitor to a specific pool member. And lastly, links are not part of LTM, but in BIG-IP DNS and Link Controller there is a type of resource called links where you can apply health monitors as well. Health monitors use different kinds of checks. First, we have the address check monitor, and the best example of this is ICMP. The BIG-IP sends an ICMP echo request to the pool member or node, and if that node or member responds successfully with an ICMP echo reply, that server has tested healthy.

Okay? It determines the availability of a device using ICMP, and if the member or node doesn't respond within the timeout period, the BIG-IP marks the server offline. We also have the service check monitor. The best example of the service check monitor is a TCP-based check, which uses an open connection to the pool member, also known as the service, to determine the availability of that service. So the BIG-IP initiates a SYN request to the server, and the server replies with a SYN-ACK. The BIG-IP then terminates this TCP three-way handshake with a reset flag, because we're not exchanging any application data, and marks the server online. If the BIG-IP sends a TCP SYN to a pool member and gets no response, it marks that pool member offline.

We also have the content check, and the best example of this is an HTTP application. The content check doesn't just check reachability or the service; it also opens the connection, sends a command, and examines the response. So here's an example: the BIG-IP sends a TCP SYN, completes the TCP three-way handshake, and moves up to the application layer. The BIG-IP starts sending data; since this is an HTTP-based application, it sends an HTTP GET request. The pool member replies, and as the BIG-IP receives the response, it examines it and compares it with the monitor configuration. If the response matches the configuration, the BIG-IP marks the pool member online, or available. But if the returned HTTP response doesn't match the configuration, the BIG-IP marks the pool member offline. Okay, so again, it determines the availability of a service and the appropriate content.
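The decision the content check makes, compare what came back against what the monitor was configured to expect, can be sketched in a couple of lines (my own illustration, not the BIG-IP monitor engine; the expected string is a hypothetical example):

```python
def content_check(http_response: str, expected: str) -> bool:
    """Content check: the member is considered online only if the
    expected string from the monitor configuration appears in the
    HTTP response; otherwise it is marked offline."""
    return expected in http_response

# Hypothetical monitor configured to expect a 200 status line
expected = "HTTP/1.1 200 OK"

healthy   = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>up</html>"
unhealthy = "HTTP/1.1 503 Service Unavailable\r\n\r\n"

print(content_check(healthy, expected))    # True  -> member marked online
print(content_check(unhealthy, expected))  # False -> member marked offline
```

Notice that the second server is reachable and speaking HTTP, yet still fails the check; that is exactly what distinguishes a content check from an address or service check.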

  1. Health Monitors Part 2

Every health monitor has both interval and timeout values. The interval is the number of seconds between each test, and by default it is 5 seconds. The moment you apply a health monitor to a type of resource, such as a pool or a pool member, the monitor starts testing. If a test is successful, the interval timer resets to zero, and after another 5 seconds it tests again. But what if the device doesn't respond, or isn't responding successfully? The BIG-IP will give the device another 5 seconds, and if it still hasn't responded, the BIG-IP gives it an ultimatum: one last 5 seconds. If it still doesn't reply, the device is marked offline after 1 second more.

Okay, this is what we call the timeout: how long before the device is marked unavailable if there is no successful test. By default, the timeout value is 16 seconds. Now, you can always change the interval and timeout values, but it is preferred to use this formula: the interval value times three, plus one, should be the timeout value. So if I change the interval value to three, the timeout should be three times three plus one, a total of 10 seconds.
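The recommended relationship is simple arithmetic, and writing it out makes the two examples above easy to verify:

```python
def recommended_timeout(interval: int) -> int:
    """Suggested rule of thumb from the lesson:
    timeout = (3 * interval) + 1 seconds."""
    return 3 * interval + 1

print(recommended_timeout(5))  # 16 seconds, the default interval/timeout pairing
print(recommended_timeout(3))  # 10 seconds, as in the example above
```

The "times three" term covers the initial test plus two retries; the "+ 1" gives the final reply one extra second to arrive before the device is marked offline.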

So we've already discussed the types of health monitors. We have the address check, where we use ICMP echo requests to test our servers. We have the service check, which uses the TCP three-way handshake. We also have the content check, and the best example is an HTTP application, where the BIG-IP receives the HTTP response, examines it, and compares it with our health monitor configuration. We also talked about the monitor interval and timeout settings, where the default interval is 5 seconds and the default timeout is 15 plus 1, or 16 seconds. Next, we're going to build a custom HTTP monitor in the lab, as well as test a monitor before applying it to our resource types, such as pools and pool members. We're also going to demonstrate how to assign health monitors at the pool level, to a specific pool member, as the node default, and to a specific node.