Load balancing is a well-known solution for overloaded websites when a single server cannot support the entire workload. You can apply load balancing to any network topology, whether running in an on-premises data center, in a hybrid scenario, or on a fully public cloud platform like Microsoft Azure.
Azure provides organizations with data center capabilities, running in a shared hosting environment in 64 locations worldwide. The service enables you to run traditional computing services like virtual machines (VMs), storage, and networking. Furthermore, you can run application services, serverless microservice architectures, and container-based solutions.
Let’s take a closer look at load balancing to discover how it works and explore some common use cases that benefit from it. Then, we’ll examine Azure’s load balancing services more closely and compare three popular options.
Load balancing needs differ between organizations and between applications, and they are shaped by customer requirements. Three primary factors influence the choice of solution: global presence, local presence, and the network protocol layer.
One reason to implement load balancing is to optimize traffic routing. For example, you can redirect a US-based user to a local Azure region where the required application is running. Furthermore, distributing your workload across Azure regions ensures greater availability, so server failures rarely affect service quality and your app maintains a consistent user experience.
Imagine a website providing a global e-commerce solution. You can deploy the web and database components in different locations (North America, Europe, and Asia). While each region hosts a local version of the application at a local web address, the end users can connect to a uniform webshop.company.com URL.
Depending on the configuration, the load balancer redirects users to the closest version of the web application. Or, if the application in one location is down, it sends users to another available region.

Fig. 1: Load balancing global architecture
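The routing decision described above can be sketched in a few lines of Python. This is a minimal, hypothetical model of a global load balancer's logic, not any Azure service's actual implementation; the region names and endpoint hostnames are illustrative, borrowed from the webshop.company.com example.

```python
# Hypothetical sketch of global load-balancer routing: send the user to the
# nearest healthy region, or fail over to any other healthy region.

REGIONS = {
    "north-america": "na.webshop.company.com",
    "europe": "eu.webshop.company.com",
    "asia": "asia.webshop.company.com",
}

def resolve(user_region: str, healthy: set) -> str:
    """Return the endpoint a user in user_region should connect to."""
    if user_region in REGIONS and user_region in healthy:
        return REGIONS[user_region]      # closest region is up
    for region, endpoint in REGIONS.items():
        if region in healthy:
            return endpoint              # fail over to another healthy region
    raise RuntimeError("no healthy region available")
```

A real DNS-based or anycast-based service also weighs latency measurements and traffic weights, but the closest-healthy-else-failover principle is the same.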
While redirecting traffic is a common implementation, you can also use load balancing within the same location or data center (on-premises).
Imagine a website running in a data center spread across three servers. Users connect to a unified URL, which forwards the request to the load balancer. The load balancer redirects the user to one of the available servers. If a server endpoint becomes unavailable, the load balancer bypasses this endpoint until it returns to a healthy state.
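The in-data-center scenario above, rotating requests across servers while skipping unhealthy ones, is classic round-robin balancing with health checks. Here is a minimal sketch; the class and server names are illustrative, not part of any real product.

```python
# Minimal round-robin balancer with health awareness: rotate through servers,
# skipping any that are marked unhealthy until they recover.

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.unhealthy = set()
        self._i = 0  # position in the rotation

    def mark_down(self, server):
        self.unhealthy.add(server)

    def mark_up(self, server):
        self.unhealthy.discard(server)

    def next_server(self):
        """Return the next healthy server in the rotation."""
        for _ in range(len(self.servers)):
            server = self.servers[self._i % len(self.servers)]
            self._i += 1
            if server not in self.unhealthy:
                return server
        raise RuntimeError("no healthy servers available")
```

Production load balancers layer active probes (periodic HTTP or TCP checks) on top of this, calling the equivalent of `mark_down` and `mark_up` automatically.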
Typically, load balancers use an algorithm to decide the route from the end user to the application endpoint. However, other network components, such as the Domain Name System (DNS), reliable network routing, and solid performance between the load balancer and the endpoint, are also crucial. If any of these components fails, load balancing suffers.
Alongside the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, the backbone of the modern internet, the IT industry established a standardized model known as the Open Systems Interconnection (OSI) model. This model splits a network traffic stream into layers.
The network cabling is the physical layer (Layer 1), the transport layer (Layer 4) handles end-to-end connections such as TCP and UDP sessions, and the layer closest to the user is the application layer (Layer 7).
Layer 7 load balancers are active on the application layer, closest to the user. Typical protocols here are the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Server Message Block (SMB).
Layer 4 supports TCP and the User Datagram Protocol (UDP). This support makes it a viable option to load balance any network traffic, such as the Remote Desktop Protocol (RDP) on port 3389, the Secure Shell Protocol (SSH) on port 22, FTP on port 21, DNS on port 53, SMTP on port 25, and database server ports, for example, Microsoft SQL Server on port 1433, MySQL on port 3306, and so on. But you can also use it to load balance web traffic on port 80/443.
Organizations are moving to public cloud environments, such as Microsoft Azure, AWS, and GCP, and running more workloads there than ever. The load balancing requirements of your application workloads must meet your customers’ expectations for uptime and performance, and these requirements remain the same when transitioning your services to a cloud environment.
Azure offers five different load-balancing options to ensure your applications remain at optimal performance: Azure Application Gateway, Azure Front Door, Azure Load Balancer, Azure Traffic Manager, and network virtual appliances (NVAs).
Azure Traffic Manager is a DNS-based load balancer, which works slightly differently from the dedicated load-balancing services, while network virtual appliances are supported third-party load-balancing solutions running inside an Azure VM.
This article focuses on the Application Gateway, Front Door, and Load Balancer. Overall, the options available have many similarities and overlapping capabilities, but each offers distinct features that make them more suitable for some use cases than others. The article compares the capabilities, focusing on the specific features that make them ideal for particular use cases.
Azure Application Gateway is based on Layer 7 load balancing and only supports traffic load balancing using the HTTP or HTTPS protocol. This makes it the perfect candidate for web applications running in Azure App Services, Azure VMs, or Kubernetes containers.
Thanks to its deep integration with the HTTP protocol, Azure Application Gateway offers features like cookie session affinity, Secure Sockets Layer (SSL) and Transport Layer Security (TLS) certificate termination, and URL redirection.
Examples of redirection include sending incoming HTTP requests to HTTPS, or routing requests for https://webshop.company.com/products to the pool of web servers running the products part of the application while requests for https://webshop.company.com/discounts go to the discounts pool of web servers.
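The path-based routing just described can be modeled with a simple prefix lookup. This is a sketch of the general technique, not Application Gateway's URL path map syntax; the pool and server names are made up for the webshop example.

```python
# Sketch of URL path-based routing: map a URL path prefix to a back-end pool.
# Pool contents are illustrative placeholders.

POOLS = {
    "/products": ["products-vm-1", "products-vm-2"],
    "/discounts": ["discounts-vm-1", "discounts-vm-2"],
}

def route(path: str, default_pool=("web-vm-1",)):
    """Return the back-end pool that should serve this request path."""
    for prefix, pool in POOLS.items():
        if path.startswith(prefix):
            return pool
    return list(default_pool)  # no prefix matched: fall back to default pool
```

In Application Gateway, the same idea is expressed declaratively as a URL path map attached to a listener, with each path rule pointing at a back-end pool.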
Another exciting feature of Application Gateway is the optional extension with a web application firewall (WAF). WAF protects your web application from common vulnerabilities and exploits, using the Open Web Application Security Project® (OWASP) foundation’s detection patterns as a starting point to which you can add custom rules.
Remember, like most other Azure resources, you deploy Azure Application Gateway in a specific Azure region. While this is more than sufficient for small-to-midsize applications, a global, high-availability architecture would require deploying Application Gateway instances in multiple Azure regions, combined with a global load balancing layer, to meet reliability requirements. (Application Gateway does not offer global load balancing on its own.)
Similar to App Gateway, Azure Front Door is a Layer 7 load balancing service. It shares many similarities with App Gateway, such as URL redirection capability, SSL termination, and optional support for the Web Application Firewall (WAF) protection mechanism.
That said, Azure Front Door also has several unique features that differentiate it from Application Gateway.
Global service: You deploy Front Door as a worldwide service, meaning you do not link it to a specific Azure region. This reveals its primary use case as a global load balancer. If your application must remain available during a regional outage, Front Door is the better solution.
CDN caching: A load balancer acts as a gateway between the end user and the web application to which the user connects. The more traffic and data you can cache, the better the performance. You can configure Front Door as a content delivery network (CDN) endpoint, caching larger files that it can retrieve for subsequent users. This dramatically improves the end user’s experience and offloads performance demand from back-end web applications. Connecting to Microsoft’s global network of more than 100 points of presence (POPs) ensures a fast, reliable connection to a location near the user.
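The caching behavior described here, serve from the edge when possible and fall back to the origin otherwise, can be sketched as follows. This is a generic edge-cache model under simplified assumptions (no TTLs, no eviction); it is not Front Door's implementation, and the names are hypothetical.

```python
# Minimal edge-cache sketch: a POP serves cached content on repeat requests
# and only contacts the origin on a cache miss.

class EdgeCache:
    def __init__(self, fetch_from_origin):
        self.fetch = fetch_from_origin  # callable that hits the origin server
        self.store = {}                 # url -> cached body
        self.hits = 0                   # count of requests served from cache

    def get(self, url):
        if url in self.store:
            self.hits += 1
            return self.store[url]      # served from the POP, origin untouched
        body = self.fetch(url)          # cache miss: go to the origin
        self.store[url] = body
        return body
```

A real CDN adds cache-control headers, TTL-based expiry, and eviction, but the performance win comes from exactly this hit path: repeat requests never leave the POP.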
Routing rules: Apart from the standard URL redirection in App Gateway, Front Door provides a more granular and enhanced rule engine, allowing for more complex redirection options using regular expressions and custom variables.
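To illustrate what a regex-driven rule engine enables beyond simple prefix redirects, here is a sketch of ordered pattern/action rules. The rules and paths are invented examples, and this is not Front Door's actual rule syntax, just the underlying idea.

```python
import re

# Hypothetical ordered rule list: each rule pairs a compiled regex with an
# action that rewrites the matched path using captured groups.
RULES = [
    # Rewrite legacy URLs to the v2 path, preserving the remainder.
    (re.compile(r"^/legacy/(?P<rest>.*)$"),
     lambda m: "/v2/" + m.group("rest")),
    # Turn a two-letter country prefix into a query parameter.
    (re.compile(r"^/(?P<country>[a-z]{2})/shop$"),
     lambda m: "/shop?country=" + m.group("country")),
]

def rewrite(path: str) -> str:
    """Apply the first matching rule; leave the path unchanged otherwise."""
    for pattern, action in RULES:
        match = pattern.match(path)
        if match:
            return action(match)
    return path
```

The value of a richer engine is exactly this: captured groups and custom variables let one rule handle whole families of URLs that plain prefix redirection cannot express.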
Compared to the solutions above, which load balance web traffic on Layer 7, Azure Load Balancer operates on Layer 4. It can be deployed as an external (public internet-facing) or Azure-internal IP address load balancer. However, you cannot combine these implementations to serve the same resource. So, you would have to deploy a public-facing Azure Load Balancer to load balance front-end application traffic and an internal-facing load balancer for back-end database traffic.
You can deploy this load balancing scenario in combination with Azure VM workloads, such as web server clusters, database server clusters, and virtual desktop environments. And you can integrate it with Azure VM Scale Sets. It is also the default load balancing option when deploying a container platform with Azure Kubernetes Service (AKS).
Azure Load Balancer comes in two flavors: basic and standard. The basic option is a free service, whereas the standard option incurs a monthly consumption cost. It is a best practice to use the standard SKU in a production environment, ensuring regional load balancing is available in your location.
| | App Gateway | Front Door | Load Balancer |
| --- | --- | --- | --- |
| Protocol support | HTTP/HTTPS only | HTTP/HTTPS only | All protocols (TCP/UDP) |
| OSI layer | Layer 7 | Layer 7 | Layer 4 |
| Scope | Regional | Global | Regional |
| WAF support | Yes | Yes | No |
Microsoft offers several Azure reference architectures in its documentation, including a decision tree to help you choose the right load balancing option.
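The core of that decision can be condensed into a few branches. The sketch below encodes only the simplified logic this article describes; Microsoft's actual decision tree weighs additional factors (SLA, pricing tier, internet-facing vs. internal, and so on).

```python
def choose_load_balancer(web_traffic: bool, multi_region: bool) -> str:
    """Simplified service choice, following the guidance in this article.

    web_traffic:  True if the workload is HTTP/HTTPS.
    multi_region: True if the workload spans multiple Azure regions.
    """
    if not web_traffic:
        return "Azure Load Balancer"       # Layer 4: any TCP/UDP protocol
    if multi_region:
        return "Azure Front Door"          # global Layer 7 load balancing
    return "Azure Application Gateway"     # regional Layer 7 load balancing
```

Treat this as a starting point: real architectures often combine services, for example Front Door globally in front of per-region Application Gateways.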
Whether running workloads on VMs or native cloud services, you still need a load balancer to provide scale and high availability for your workloads. When running a web application, the recommendation is to use Azure Application Gateway for a single region and Azure Front Door (geo-load balancing) for multiple regions. Both scenarios support advanced security through a WAF extension.
Azure Load Balancer supports all TCP/UDP protocols, making it suitable for web applications and even more so for non-web protocol applications, such as RDP, DNS, and SSH.
As developers and DevOps engineers, you must assess each solution’s unique capabilities and characteristics to select the most effective one for your workloads. No single solution covers all load balancing needs. Realistically, you will likely integrate different solutions as part of your overall workload architecture. To keep track of all these moving parts in one place, you might consider a solution like Site24x7, which provides an all-in-one dashboard for performance monitoring.