Chapter 3 Azure Load Balancer

This chapter covers the following topics:

  • Load Balancers in Azure
  • Azure Load Balancer
  • Internet or Public Facing Load Balancer
  • Internal Load Balancer
  • Azure Load Balancer Types
  • Traffic Distribution Mode for Azure Load Balancer
  • Load Balancer Health Probes
  • Idle timeout settings for Azure Basic Load Balancer
  • Outbound connections of Load Balanced VMs
  • Port Forwarding in Azure Load Balancer

This chapter covers the following lab exercises:

  • Create Internet facing Azure Load Balancer
  • Create Backend Address Pool and Add Endpoints (VMs)
  • Create Health Probe
  • Create Load Balancer Rule
  • Access the Websites on Load Balanced VMs

Chapter Topology

In this chapter we will add an Azure Load Balancer to the topology. Virtual machines VMFE1 and VMFE2 will be added as endpoints to the Azure Load Balancer. We will then access the default website on VMFE1 and the custom website on VMFE2 using the public IP of the Azure Load Balancer.

Screenshot_176

Load Balancers in Azure

Load balancing distributes traffic across multiple computing resources.

Microsoft Azure offers three types of load balancers: Azure Load Balancer, Application Gateway, and Traffic Manager.

Azure Load Balancer is a Layer 4 (TCP, UDP) load balancer that distributes incoming traffic among healthy instances defined in a load-balanced set.

Application Gateway works at the application layer (Layer 7). Application Gateway deals with web traffic only (HTTP/HTTPS/WebSocket). It acts as a reverse-proxy service, terminating the client connection and forwarding requests to back-end endpoints.

Traffic Manager works at the DNS level. It uses DNS responses to direct end-user traffic to globally distributed endpoints. Clients then connect to those endpoints directly.

Comparing Different Types of Azure Load Balancers

Screenshot_177

Azure Load Balancer

Azure Load Balancer is a managed Layer 4 (TCP, UDP) load balancer that distributes incoming traffic among healthy instances of services defined in a load-balanced set.

Azure Load Balancer is available in two types: Basic and Standard.

Azure Load Balancer can be configured as Internet/Public Facing Load Balancer or Internal Load Balancer.

Internet or Public Facing Load Balancer

An Internet or public facing Load Balancer distributes incoming Internet traffic to virtual machines. The figure below shows Internet traffic being distributed between virtual machines. The load balancer has a public IP address and a DNS name.

Screenshot_178

Internal Load Balancer

In multi-tier applications, an Internal Load Balancer distributes traffic coming from the Internet/web tier to virtual machines in back-end tiers that are not Internet-facing.

Internal Load Balancers can also distribute traffic from an application tier that is not Internet-facing to a database tier that is also not Internet-facing.

An internal load balancer is configured in a virtual network.

Screenshot_179

Important Point: An Internal Load Balancer can also direct traffic to on-premises VMs that are connected to Azure through a VPN gateway.

An internal Load Balancer enables the following types of load balancing:

  1. Within a virtual network: Load balancing from VMs in the virtual network to a set of VMs that reside within the same virtual network.
  2. For a cross-premises virtual network: Load balancing from on-premises computers to a set of VMs that reside within the same virtual network.
  3. For multi-tier applications: Load balancing for Internet-facing multi-tier applications where the backend tiers are not Internet-facing. The backend tiers require traffic load balancing from the Internet-facing tier (see the figure on the previous page).

Azure Load Balancer Types

Azure Load Balancer comes in two types: Basic and Standard. The Basic Load Balancer is free of charge, whereas the Standard Load Balancer is charged.

Standard includes all the functionality of the Basic Load Balancer and provides additional capabilities.

Azure Load Balancer Standard and Public IP Standard together enable additional capabilities such as multi-zone architectures, low latency, high throughput, and scalability for millions of flows for all TCP and UDP applications.

Additional Features in Standard Load Balancer

Enterprise scale: With Standard Load Balancer you can design a virtual data center that supports up to 1000 virtual machine instances.

Cross-zone load balancing: With Standard Load Balancer you can load balance virtual machines in a backend pool spread across Availability Zones. Note that Availability Zones are also in preview.

Resilient virtual IPs (VIP): A single front-end IP address assigned to Standard Load Balancer is automatically zone-redundant. Zone-redundancy in Azure does not require multiple IP addresses and DNS records.

Improved monitoring: Standard Load Balancer is integrated with Azure Monitor (Preview), which provides new metrics for improved monitoring. You can monitor data from the front end to the VMs, endpoint health probes, TCP connection attempts, and outbound connections. New metrics include VIP availability, DIP availability, SYN packets, SNAT connections, byte counters, and packet counters.

New SNAT: Load Balancer Standard provides outbound connections for VMs using a new port-masquerading Source Network Address Translation (SNAT) model that provides greater resiliency and scale. When outbound connections are used with a zone-redundant frontend, the connections are also zone-redundant and SNAT port allocations survive zone failure.

Traffic Distribution Mode for Azure Load Balancer

The traffic distribution mode determines how the Load Balancer will distribute client traffic to the load-balanced set.

Traffic Distribution mode is selected in Load Balancing Rules.

Hash-based distribution mode

It uses a 5-tuple hash of source IP, source port, destination IP, destination port, and protocol type to map client traffic to the available load-balanced servers.

It provides stickiness only within a transport session. Packets in the same session will be directed to the same datacenter IP (DIP) instance behind the load-balanced endpoint.

When the client starts a new session from the same source IP, the source port changes and causes the traffic to go to a different DIP endpoint.
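
Below is a minimal Python sketch of this idea. It is illustrative only, not Azure's actual implementation, and the backend names are hypothetical: hashing the 5-tuple keeps a session on one backend, while a new source port can land on a different one.

import hashlib

backends = ["VMFE1", "VMFE2"]   # hypothetical backend pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol):
    # Hash the full 5-tuple and map it onto the list of healthy backends.
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{protocol}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

# Same session (same source port) -> same backend; a new source port may pick another one.
print(pick_backend("203.0.113.10", 50001, "104.208.234.10", 80, "TCP"))
print(pick_backend("203.0.113.10", 50002, "104.208.234.10", 80, "TCP"))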

Screenshot_180

Source IP affinity distribution mode

Source IP affinity, also known as session affinity or client IP affinity, uses a 2-tuple (source IP, destination IP) or 3-tuple (source IP, destination IP, protocol) to map traffic to the available servers.

By using Source IP affinity, connections initiated from the same client IP go to the same datacenter IP (DIP) instance.

Screenshot_181

Source IP Affinity distribution method provides session affinity based on Client IP address.

The Source IP affinity distribution method can result in uneven traffic distribution when many clients connect from behind a proxy and therefore share the same source IP address.
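
For comparison, here is a 2-tuple version of the same illustrative sketch (again with hypothetical backend names); because the source port and protocol are ignored, every connection from one client IP maps to the same backend.

import hashlib

backends = ["VMFE1", "VMFE2"]   # hypothetical backend pool

def pick_backend_affinity(src_ip, dst_ip):
    # Source port and protocol are ignored, so one client IP always maps the same way.
    digest = int(hashlib.sha256(f"{src_ip}:{dst_ip}".encode()).hexdigest(), 16)
    return backends[digest % len(backends)]

# Both calls use the same client IP, so both connections go to the same backend.
print(pick_backend_affinity("203.0.113.10", "104.208.234.10"))
print(pick_backend_affinity("203.0.113.10", "104.208.234.10"))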

Load Balancer Health Probes

Load Balancer uses health probes to determine the health of instances in the backend pool.

When a probe fails to respond, the Load Balancer stops sending new connections to the unhealthy instances. Existing connections are not affected, and they continue until the application terminates the flow, an idle timeout occurs, or the VM is shut down.

Health Probes timeout and interval values

Timeout and interval values are used to determine whether an instance is marked as up or down.

Interval: Interval is the number of seconds between probe attempts.

Unhealthy threshold: This value is the number of consecutive probe failures that occur before a VM is considered unhealthy. For example, with an interval of 5 seconds and an unhealthy threshold of 2, an unresponsive instance can be marked unhealthy roughly 10 seconds after it stops answering.

Timeout and interval values are specified when you create health probes.

Health Probe Types

Azure Load Balancer supports three probe types, depending on the Load Balancer type.

Screenshot_182

TCP Probe

TCP probes initiate a connection by performing a three-way open TCP handshake with the defined port.

The minimum probe interval is 5 seconds and the minimum number of unhealthy responses is 2. You can change these values when you are creating Health Probes.
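
The sketch below is a rough Python approximation of what a TCP probe does (the backend address is a placeholder): a completed handshake counts as healthy, while a timeout or reset counts as a failure.

import socket

def tcp_probe(host, port, timeout=5.0):
    # A completed three-way handshake means the instance is healthy.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False   # timeout or TCP reset counts as a probe failure

print(tcp_probe("10.0.0.4", 80))   # placeholder backend VM address and probe port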

TCP probe Failure

  1. The TCP listener on the instance doesn't respond at all during the timeout period. The probe is marked down after the configured number of consecutive probe requests go unanswered.
  2. The probe receives a TCP reset from the instance.

HTTP/HTTPS Probe

HTTP and HTTPS probes build on the TCP probe and issue an HTTP GET request with the specified path. An HTTPS probe is the same as an HTTP probe, with the addition of a Transport Layer Security (TLS, formerly known as SSL) wrapper.

HTTP/HTTPS probes can also be used if you want to implement your own logic to remove instances from load balancer rotation. For example, you might decide to remove an instance if it is above 90% CPU and return a non-200 HTTP status (see the sketch at the end of this section).

The health probe is marked up when the instance responds with an HTTP status 200 within the timeout period.

An HTTP/HTTPS probe fails when:

  1. The probe endpoint returns an HTTP status code other than 200 (for example, 403, 404, or 500). This marks the health probe down immediately.
  2. The probe endpoint doesn't respond at all during the 31-second timeout period. Multiple probe requests might go unanswered before the probe is marked as not running and until the sum of all timeout intervals has been reached.
  3. The probe endpoint closes the connection via a TCP reset.
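
As an illustration of the "own logic" idea mentioned above, the following sketch assumes Python's standard library on a Unix-like backend VM, uses os.getloadavg() as a crude stand-in for CPU usage, and assumes /health as the probe's request path; it returns a non-200 status to take the instance out of rotation when the machine is under heavy load.

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

LOAD_LIMIT = 4.0   # arbitrary threshold for this illustration

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Only answer the path configured on the health probe (assumed here to be /health).
        if self.path != "/health":
            self.send_response(404)
            self.end_headers()
            return
        load_1min = os.getloadavg()[0]          # crude CPU-pressure signal (Unix only)
        status = 200 if load_1min < LOAD_LIMIT else 503
        self.send_response(status)              # any non-200 answer marks the probe down
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProbeHandler).serve_forever()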

Idle timeout settings for Azure Basic Load Balancer

In its default configuration, Azure Load Balancer has an idle timeout setting of 4 minutes. If a period of inactivity is longer than the timeout value, there's no guarantee that the TCP or HTTP session is maintained between the client and your cloud service.

A common practice is to use a TCP keep-alive so that the connection is active for a longer period. With keep-alive enabled, packets are sent during periods of inactivity on the connection. These keep-alive packets ensure that the idle timeout value is never reached and the connection is maintained for a long period.
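
As an example, the Python sketch below enables TCP keep-alive on a client socket. The tuning options shown are Linux-specific, and the target address is a placeholder for your load balancer's public IP.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)        # turn keep-alive on

# Linux-specific tuning: start probing after 60 seconds of idle time, then every 60 seconds.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
if hasattr(socket, "TCP_KEEPINTVL"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)

sock.connect(("104.208.234.10", 80))   # placeholder: your load balancer's public IP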

Idle timeout is configured in load balancing rules. The TCP idle timeout can also be configured on a virtual machine's public IP address.

Outbound connections of Load Balanced VMs

Load-balanced VM with no instance-level public IP address: Azure translates the private source IP address of the outbound flow to the public IP address of the public Load Balancer frontend.

Azure uses Source Network Address Translation (SNAT) to perform this function. Ephemeral ports of the Load Balancer's public IP address are used to distinguish individual flows originated by the VM. SNAT dynamically allocates ephemeral ports when outbound flows are created.
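
Conceptually (this is an illustration only, not Azure code), port-masquerading SNAT keeps a per-flow table that maps each private source to the frontend public IP plus a unique ephemeral port:

import itertools

PUBLIC_IP = "104.208.234.10"              # load balancer frontend IP used in this chapter
ephemeral_ports = itertools.count(1024)   # simplified allocator; Azure preallocates port ranges
snat_table = {}

def snat(private_ip, private_port, destination):
    # Return the (public IP, ephemeral port) pair that represents this outbound flow.
    flow = (private_ip, private_port, destination)
    if flow not in snat_table:
        snat_table[flow] = (PUBLIC_IP, next(ephemeral_ports))
    return snat_table[flow]

print(snat("10.0.0.4", 50000, ("93.184.216.34", 443)))   # ('104.208.234.10', 1024)
print(snat("10.0.0.5", 50000, ("93.184.216.34", 443)))   # ('104.208.234.10', 1025)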

Load-balanced VM with an instance-level public IP address (ILPIP): When an ILPIP is used, Source Network Address Translation (SNAT) is not used. The VM uses the ILPIP for all outbound flows.

Port Forwarding in Azure Load Balancer

With Port forwarding you can connect to virtual machines (VMs) in an Azure virtual network by using an Azure Load Balancer public IP address and port number.

This option is commonly used to connect to Azure VMs that have only private IP addresses assigned.

Port forwarding is enabled by creating Load Balancer inbound NAT rules, which forward traffic from a specific port of the front-end IP address to a specific port of a back-end VM.
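
As a sketch only (the rule name, ports, and resource IDs below are assumptions, not values from this book), an inbound NAT rule defined through the azure-mgmt-network Python SDK could look like the dictionary below; it would be appended to the load balancer's inbound_nat_rules list (a fuller load balancer sketch appears after the creation steps below).

# Hypothetical inbound NAT rule: forward port 50001 on the load balancer's public IP
# to RDP (3389) on one backend VM. Names, ports, and IDs here are placeholders.
frontend_ip_id = (
    "/subscriptions/<subscription-id>/resourceGroups/RGCloud/providers/"
    "Microsoft.Network/loadBalancers/<lb-name>/frontendIPConfigurations/<frontend-name>"
)

inbound_nat_rule = {
    "name": "rdp-vmfe1",                                   # hypothetical rule name
    "protocol": "Tcp",
    "frontend_port": 50001,                                # port on the load balancer public IP
    "backend_port": 3389,                                  # RDP port on the backend VM
    "frontend_ip_configuration": {"id": frontend_ip_id},   # existing frontend IP configuration
}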

Steps to create an Internet or public facing Azure Basic Load Balancer

  1. Create the Load Balancer with the Public option.
  2. Create a backend address pool and add endpoints (VMs) to it.
  3. Create a probe for monitoring the endpoints (VMs).
  4. Create load balancing rules (add the backend address pool and probe created in steps 2 and 3 respectively, and choose a session persistence method). A scripted equivalent of these steps is sketched after this list.
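
The same four steps could also be scripted with the Azure SDK for Python. The sketch below is an outline under stated assumptions, not the book's procedure: it assumes the azure-identity and azure-mgmt-network packages, a public IP created beforehand, and placeholder names and IDs, and exact method and property names can vary between SDK versions.

# Sketch: create a public Basic load balancer with a backend pool, HTTP probe and
# load balancing rule. All names, IDs and the subscription below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB_ID = "<subscription-id>"
RG, LB_NAME, LOCATION = "RGCloud", "lbcloud", "eastus2"
LB_ID = (f"/subscriptions/{SUB_ID}/resourceGroups/{RG}"
         f"/providers/Microsoft.Network/loadBalancers/{LB_NAME}")

client = NetworkManagementClient(DefaultAzureCredential(), SUB_ID)

poller = client.load_balancers.begin_create_or_update(
    RG,
    LB_NAME,
    {
        "location": LOCATION,
        "sku": {"name": "Basic"},
        # Step 1: public frontend (the public IP is assumed to exist already)
        "frontend_ip_configurations": [{
            "name": "lbfrontend",
            "public_ip_address": {"id": "<public-ip-resource-id>"},
        }],
        # Step 2: backend address pool (VM NICs are associated with it separately)
        "backend_address_pools": [{"name": "lbbackendpool"}],
        # Step 3: HTTP health probe on port 80
        "probes": [{
            "name": "httpprobe",
            "protocol": "Http",
            "port": 80,
            "request_path": "/",
            "interval_in_seconds": 5,
            "number_of_probes": 2,
        }],
        # Step 4: load balancing rule tying frontend, backend pool and probe together
        "load_balancing_rules": [{
            "name": "lbrule",
            "protocol": "Tcp",
            "frontend_port": 80,
            "backend_port": 80,
            "frontend_ip_configuration": {"id": f"{LB_ID}/frontendIPConfigurations/lbfrontend"},
            "backend_address_pool": {"id": f"{LB_ID}/backendAddressPools/lbbackendpool"},
            "probe": {"id": f"{LB_ID}/probes/httpprobe"},
        }],
    },
)
print(poller.result().provisioning_state)

Associating the VM NICs with the backend pool is a separate update on each network interface; the portal exercises later in this chapter do this for you.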

Steps to create an internal Azure Basic Load Balancer

  1. Create the Load Balancer with the Internal option.
  2. Create a backend address pool and add endpoints to it.
  3. Create a probe for monitoring the endpoints.
  4. Create load balancing rules (add the backend address pool and health probe created in steps 2 and 3 respectively, and choose a session persistence method).

Exercise 45: Create Internet facing Azure Load Balancer

In this exercise we will create a Basic Azure Load Balancer in Resource Group RGCloud, in the East US 2 region.

In the Azure Portal click +Create a Resource > Networking > Load Balancer > the Create Load Balancer blade opens > select Resource Group RGCloud, enter a name, for Location select East US 2, for Type select Public, and for SKU select Basic > for IP address select Create new, enter a name, and select Dynamic > click Review + Create > after validation passes, click Create.

Screenshot_183

Note: We have chosen a Dynamic IP address to save on Azure credits.

The figure below shows the Load Balancer dashboard.

Screenshot_184

Exercise 46: Create Backend Address Pool and Add Endpoints (VMs)

The backend address pool will include the VMs to be load balanced (VMFE1 and VMFE2). VMFE1 hosts the default website and VMFE2 hosts a custom website.

Click Backend pools in the left pane of the Load Balancer dashboard > click +Add > the Add backend pool blade opens > give a name > select Availability set from the drop-down box > in Availability set select ASCloud > add virtual machines VMFE1 and VMFE2 by clicking +Add a target network IP configuration > click OK.

Screenshot_185

It will take 2-3 minutes to add both VMs to the backend pool. Proceed to the next step after both VMs have been added.

Exercise 47: Create Health Probe

Health probes are used to check the availability of virtual machine instances in the back-end address pool. When a probe fails to respond, the Load Balancer stops sending new connections to the unhealthy instance. Probe behavior depends on:

  1. The number of successful probes that allow an instance to be labeled as up.
  2. The number of failed probes that cause an instance to be labeled as down.
  3. The timeout and frequency values set in SuccessFailCount, which determine whether an instance is confirmed to be running or not running.

Go to the Load Balancer dashboard > click Health probes in the left pane > +Add > the Add health probe blade opens > enter a name > select HTTP as the Protocol > click OK.

Screenshot_186

Exercise 48: Create Load Balancer Rule

A load balancer rule defines how traffic is distributed to the VMs. You define the front-end IP configuration for incoming traffic and the back-end IP pool to receive the traffic, along with the required source and destination ports, health probe, session persistence, and TCP idle timeout.

Go to the Load Balancer dashboard and click Load balancing rules in the left pane > +Add > the Add load balancing rule blade opens > enter a name > select the backend pool and health probe created in the previous exercises > leave the rest at the default values > click OK.

Screenshot_187

Note: The Session persistence and Idle timeout settings are also present but not shown in the figure above. You just need to scroll down to see them.

Exercise 49: Access the Websites on Load Balanced VMs

  1. Go to the Load Balancer dashboard > from the right pane copy the public IP address (104.208.234.10) > open a browser and go to http://104.208.234.10 > the custom website on VMFE2 opens.

    Screenshot_188
  2. Press F5 a couple of times to refresh the browser > the default website on VMFE1 opens as shown below.

    Screenshot_189

    Note: In the load balancing rule we chose Session persistence as None, which is why refreshing the page can return either VM's website. The small script below shows the same check from the command line.
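
The following script is illustrative only and not part of the lab; the IP address is the example from this exercise, so replace it with your own load balancer's public IP.

# Fetch the load balancer's public IP a few times and print the start of each response,
# so you can see requests being answered by VMFE1 (default site) or VMFE2 (custom site).
import urllib.request

LB_URL = "http://104.208.234.10/"   # example IP from this exercise; yours will differ

for attempt in range(1, 6):
    with urllib.request.urlopen(LB_URL, timeout=10) as response:
        body = response.read().decode(errors="replace")
    first_line = body.splitlines()[0] if body else "<empty response>"
    print(f"attempt {attempt}: {first_line[:80]}")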

Azure Basic Load Balancer Pricing

Azure Basic Load Balancer is free of charge.

Azure Standard Load Balancer Pricing

Pricing for the Standard Load Balancer is based on the number of rules configured (load balancing rules and NAT rules) and the data processed for inbound originated flows.
