Solution for Load balancing and Traffic routing methods

Go back to AZ-304 Tutorials

The AZ-304 exam has been retired. The AZ-305 exam is available as its replacement.

In this tutorial, we will learn about the different components of Azure load balancing that distribute traffic and provide high availability. We will walk through creating an Azure load balancer, configuring load balancer traffic rules, and more. We will also look at the Azure Traffic Manager routing methods that determine how network traffic is routed to the various service endpoints.

Overview of Azure load balancing

Azure Load Balancer is a Layer-4 (TCP, UDP) load balancer that provides high availability by distributing incoming traffic among healthy VMs. It monitors a given port on each VM and distributes traffic only to operational VMs. You define a front-end IP configuration that contains one or more public IP addresses; this front-end IP configuration allows your load balancer and applications to be accessible over the Internet.

Creating Azure load balancer

First, we will create and configure each component of the load balancer. We start by creating a resource group with New-AzResourceGroup. The example below creates a resource group named myResourceGroupLoadBalancer in the EastUS location:

Azure PowerShell

New-AzResourceGroup `
  -ResourceGroupName "myResourceGroupLoadBalancer" `
  -Location "EastUS"

Creating a public IP address

To access your app over the Internet, you need a public IP address for the load balancer. Create a public IP address with New-AzPublicIpAddress. The example below creates a public IP address named myPublicIP in the myResourceGroupLoadBalancer resource group:

Azure PowerShell

$publicIP = New-AzPublicIpAddress `
  -ResourceGroupName "myResourceGroupLoadBalancer" `
  -Location "EastUS" `
  -AllocationMethod "Static" `
  -Name "myPublicIP"

Creating a load balancer

  • Firstly, let’s create a frontend IP pool with New-AzLoadBalancerFrontendIpConfig. The example creates a frontend IP pool named myFrontEndPool:

Azure PowerShell

$frontendIP = New-AzLoadBalancerFrontendIpConfig `
  -Name "myFrontEndPool" `
  -PublicIpAddress $publicIP

  • Next, we will create a backend address pool with New-AzLoadBalancerBackendAddressPoolConfig. The example creates a backend pool named myBackEndPool:

Azure PowerShell

$backendPool = New-AzLoadBalancerBackendAddressPoolConfig `
  -Name "myBackEndPool"

  • Lastly, we will create the load balancer with New-AzLoadBalancer.

Azure PowerShell

$lb = New-AzLoadBalancer `
  -ResourceGroupName "myResourceGroupLoadBalancer" `
  -Name "myLoadBalancer" `
  -Location "EastUS" `
  -FrontendIpConfiguration $frontendIP `
  -BackendAddressPool $backendPool

Creating a load balancer rule

A load balancer rule defines how traffic is distributed to the VMs. You define the front-end IP configuration for the incoming traffic and the back-end IP pool to receive the traffic, along with the required source and destination ports. To make sure only healthy VMs receive traffic, you also define the health probe to use.
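
The rule below looks up an existing health probe named myHealthProbe, but the steps above do not create one. Assuming the load balancer object from the previous step is still in $lb, a minimal sketch of adding a TCP health probe on port 80 with Add-AzLoadBalancerProbeConfig might look like this:

Azure PowerShell

# Sketch: add a TCP health probe to the in-memory load balancer object ($lb).
# The name "myHealthProbe" matches the probe referenced by the rule below;
# the change is pushed to Azure when Set-AzLoadBalancer is run (shown later).
Add-AzLoadBalancerProbeConfig `
  -LoadBalancer $lb `
  -Name "myHealthProbe" `
  -Protocol Tcp `
  -Port 80 `
  -IntervalInSeconds 15 `
  -ProbeCount 2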

So, let’s create a load balancer rule with Add-AzLoadBalancerRuleConfig. 

Azure PowerShell

$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "myHealthProbe"

Add-AzLoadBalancerRuleConfig `
  -Name "myLoadBalancerRule" `
  -LoadBalancer $lb `
  -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
  -BackendAddressPool $lb.BackendAddressPools[0] `
  -Protocol Tcp `
  -FrontendPort 80 `
  -BackendPort 80 `
  -Probe $probe
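
Note that Add-AzLoadBalancerProbeConfig and Add-AzLoadBalancerRuleConfig only update the local $lb object. As a minimal follow-up, the updated configuration can be applied to Azure with Set-AzLoadBalancer:

Azure PowerShell

# Apply the probe and rule added above to the load balancer in Azure
Set-AzLoadBalancer -LoadBalancer $lb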

Creating network resources

Next, we will create a virtual network with New-AzVirtualNetwork.

Azure PowerShell

# Create subnet config
$subnetConfig = New-AzVirtualNetworkSubnetConfig `
  -Name "mySubnet" `
  -AddressPrefix 192.168.1.0/24

# Create the virtual network
$vnet = New-AzVirtualNetwork `
  -ResourceGroupName "myResourceGroupLoadBalancer" `
  -Location "EastUS" `
  -Name "myVnet" `
  -AddressPrefix 192.168.0.0/16 `
  -Subnet $subnetConfig
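
The later sections remove and add VMs from the load balancer, but this tutorial does not show the backend VMs or their network interfaces being created. As an illustrative sketch (the NIC name myNicVM1 is an assumption, not a value from this tutorial), a NIC can be created in the subnet above and placed in the load balancer's backend pool with New-AzNetworkInterface; a VM created with this NIC would then receive traffic from the load balancer:

Azure PowerShell

# Sketch: create a NIC in mySubnet and attach it to the backend pool of $lb
$nicVM1 = New-AzNetworkInterface `
  -ResourceGroupName "myResourceGroupLoadBalancer" `
  -Location "EastUS" `
  -Name "myNicVM1" `
  -Subnet $vnet.Subnets[0] `
  -LoadBalancerBackendAddressPool $lb.BackendAddressPools[0]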

Testing the load balancer

To test the load balancer, first obtain its public IP address with Get-AzPublicIPAddress. For example:

Azure PowerShell

Get-AzPublicIPAddress `
  -ResourceGroupName "myResourceGroupLoadBalancer" `
  -Name "myPublicIP" | select IpAddress
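
You can then enter the returned IP address in a web browser. Alternatively, as a minimal sketch (assuming a web server is already running on port 80 on the backend VMs), you can request the page from PowerShell:

Azure PowerShell

# Sketch: request the page served through the load balancer's public IP
$publicIP = Get-AzPublicIPAddress `
  -ResourceGroupName "myResourceGroupLoadBalancer" `
  -Name "myPublicIP"
Invoke-WebRequest -Uri ("http://{0}" -f $publicIP.IpAddress)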

Adding and removing VMs

You may need to perform maintenance on the VMs running your app, such as installing OS updates. To deal with increased traffic, you may also need to add more VMs. The sections below show how to remove a VM from, or add a VM to, the load balancer.

Removing a VM from the load balancer

Get the network interface card with Get-AzNetworkInterface, then set the LoadBalancerBackendAddressPools property of the virtual NIC to $null. Lastly, update the virtual NIC:

Azure PowerShell

$nic = Get-AzNetworkInterface `
    -ResourceGroupName "myResourceGroupLoadBalancer" `
    -Name "myVM2"

$nic.Ipconfigurations[0].LoadBalancerBackendAddressPools = $null

Set-AzNetworkInterface -NetworkInterface $nic

To see the load balancer distribute traffic across the remaining VMs running your app, you can force-refresh your web browser. You can now perform maintenance on the removed VM, such as installing OS updates or rebooting it.

Adding a VM to the load balancer

After performing VM maintenance, or if you need to expand capacity, set the LoadBalancerBackendAddressPools property of the virtual NIC back to the BackendAddressPool obtained from Get-AzLoadBalancer. For example,

Get the load balancer:

Azure PowerShell

$lb = Get-AzLoadBalancer `
    -ResourceGroupName myResourceGroupLoadBalancer `
    -Name myLoadBalancer

$nic.IpConfigurations[0].LoadBalancerBackendAddressPools = $lb.BackendAddressPools[0]

Set-AzNetworkInterface -NetworkInterface $nic

Traffic Manager routing methods

Azure Traffic Manager supports six traffic-routing methods for determining how to route network traffic to the various service endpoints. For any profile, Traffic Manager applies the traffic-routing method associated with it to each DNS query it receives. The routing method is chosen when the profile is created (a PowerShell sketch follows the list below). The methods in Traffic Manager are:

  • Firstly, select Priority when you want to use a primary service endpoint for all traffic, with backups available in case the primary or the backup endpoints are unavailable.
  • Secondly, select Weighted when you want to distribute traffic across a set of endpoints, either evenly or according to weights, which you define.
  • Thirdly, select Performance when you want end users to use the “closest” endpoint in terms of the lowest network latency.
  • After that, select Geographic so that users are directed to specific endpoints based on which geographic location their DNS query originates from. 
  • Then, select MultiValue for Traffic Manager profiles that can only have IPv4/IPv6 addresses as endpoints. 
  • Lastly, select the Subnet traffic-routing method to map sets of end-user IP address ranges to a specific endpoint within a Traffic Manager profile. 
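
As an illustrative sketch (the profile name, resource group, and DNS prefix below are assumptions, not values from this tutorial), the routing method is chosen with the -TrafficRoutingMethod parameter when creating a profile with New-AzTrafficManagerProfile:

Azure PowerShell

# Sketch: create a Traffic Manager profile that uses the Priority routing method.
# Other valid -TrafficRoutingMethod values are Weighted, Performance, Geographic,
# MultiValue and Subnet. The -RelativeDnsName must be globally unique.
$tmProfile = New-AzTrafficManagerProfile `
  -Name "myTrafficManagerProfile" `
  -ResourceGroupName "myResourceGroupTM" `
  -TrafficRoutingMethod Priority `
  -RelativeDnsName "myapp-demo-dns" `
  -Ttl 30 `
  -MonitorProtocol HTTP `
  -MonitorPort 80 `
  -MonitorPath "/"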

Priority traffic-routing method

Often, an organization wants to provide reliability for its services by deploying one or more backup services in case its primary service goes down. The ‘Priority’ traffic-routing method allows Azure customers to easily implement this failover pattern.

The Traffic Manager profile contains a prioritized list of service endpoints. By default, Traffic Manager sends all traffic to the primary (highest-priority) endpoint. If the primary endpoint is not available, Traffic Manager routes the traffic to the second endpoint, and so on down the list.
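
For example, a primary and a backup endpoint could be added to the Priority profile sketched above with New-AzTrafficManagerEndpoint (the endpoint names and target hostnames are hypothetical):

Azure PowerShell

# Sketch: lower -Priority values are preferred; traffic fails over to the
# backup endpoint only if the primary endpoint is unavailable.
New-AzTrafficManagerEndpoint `
  -Name "primaryEndpoint" `
  -ProfileName "myTrafficManagerProfile" `
  -ResourceGroupName "myResourceGroupTM" `
  -Type ExternalEndpoints `
  -Target "primary.example.com" `
  -EndpointStatus Enabled `
  -Priority 1

New-AzTrafficManagerEndpoint `
  -Name "backupEndpoint" `
  -ProfileName "myTrafficManagerProfile" `
  -ResourceGroupName "myResourceGroupTM" `
  -Type ExternalEndpoints `
  -Target "backup.example.com" `
  -EndpointStatus Enabled `
  -Priority 2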

Weighted traffic-routing method

The ‘Weighted’ traffic-routing method allows you to distribute traffic evenly or according to pre-defined weights (a sketch follows the list below). The weighted method enables some useful scenarios:

  • Firstly, Gradual application upgrade: allocate a percentage of traffic to route to a new endpoint, and gradually increase that traffic over time to 100%.
  • Secondly, Application migration to Azure: create a profile with both Azure and external endpoints. 
  • Lastly, Cloud-bursting for additional capacity: This expands an on-premises deployment into the cloud by putting it behind a Traffic Manager profile.
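
As a sketch of the weighted pattern (again with hypothetical names, and assuming a profile created with -TrafficRoutingMethod Weighted), a weight is assigned to each endpoint:

Azure PowerShell

# Sketch: roughly 90% of DNS responses point to the current endpoint and
# roughly 10% to the new endpoint; the weights can be adjusted over time.
New-AzTrafficManagerEndpoint `
  -Name "currentEndpoint" `
  -ProfileName "myWeightedProfile" `
  -ResourceGroupName "myResourceGroupTM" `
  -Type ExternalEndpoints `
  -Target "current.example.com" `
  -EndpointStatus Enabled `
  -Weight 90

New-AzTrafficManagerEndpoint `
  -Name "newEndpoint" `
  -ProfileName "myWeightedProfile" `
  -ResourceGroupName "myResourceGroupTM" `
  -Type ExternalEndpoints `
  -Target "new.example.com" `
  -EndpointStatus Enabled `
  -Weight 10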

Performance traffic-routing method

Deploying endpoints in two or more locations across the globe can improve the responsiveness of many applications by routing traffic to the closest location. The ‘Performance’ traffic-routing method provides this capability. The ‘closest’ endpoint is not necessarily the closest as measured by geographic distance; instead, the ‘Performance’ method determines the closest endpoint by measuring network latency. Traffic Manager looks up the source IP address of the incoming DNS request in its Internet Latency Table and then selects an available endpoint in the Azure datacenter with the lowest latency for that IP address range.

Geographic traffic-routing method

Traffic Manager profiles can be configured to use the Geographic routing method, so that users are directed to specific endpoints based on the geographic location their DNS query originates from. This empowers Traffic Manager customers to enable scenarios where knowing a user’s geographic region, and routing them based on it, is important. When a profile is configured for geographic routing, each endpoint associated with that profile needs to have a set of geographic regions assigned to it (see the sketch after the list below). A geographic region can be specified at the following levels of granularity:

  • Firstly, World– any region
  • Secondly, Regional Grouping – for example, Africa, Middle East, Australia/Pacific etc.
  • Thirdly, Country/Region – for example, Ireland, Peru, Hong Kong SAR etc.
  • Lastly, State/Province – for example, USA-California, Australia-Queensland, Canada-Alberta etc. 
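
As an illustrative sketch (the names and targets are hypothetical; the -GeoMapping codes come from Traffic Manager's geographic hierarchy, such as GEO-EU for Europe and WORLD for any region), the regions are assigned per endpoint:

Azure PowerShell

# Sketch: direct DNS queries originating in Europe to a European endpoint...
New-AzTrafficManagerEndpoint `
  -Name "europeEndpoint" `
  -ProfileName "myGeographicProfile" `
  -ResourceGroupName "myResourceGroupTM" `
  -Type ExternalEndpoints `
  -Target "eu.example.com" `
  -EndpointStatus Enabled `
  -GeoMapping "GEO-EU"

# ...and send all remaining queries to a catch-all endpoint
New-AzTrafficManagerEndpoint `
  -Name "worldEndpoint" `
  -ProfileName "myGeographicProfile" `
  -ResourceGroupName "myResourceGroupTM" `
  -Type ExternalEndpoints `
  -Target "www.example.com" `
  -EndpointStatus Enabled `
  -GeoMapping "WORLD"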

Reference: Microsoft Documentation, Documentation 2

Go back to AZ-304 Tutorials
