- learning
- file-uploader
- #load-balancing
What Is Load Balancing?
Load balancing is the process of distributing incoming network traffic across a group of backend servers so that no single server becomes a bottleneck and latency stays low. A load balancer sits in front of the servers and routes each incoming request to one of them based on a chosen strategy, such as the servers' current load.
In other words, a load balancer is the component that spreads the workload across machines based on their resource usage or requirements.
Benefits of Load Balancing
Load balancing can help with performance issues and is a common term when talking about server farms. When a single application server is too busy, users have to wait in a queue until that server becomes available again. This is where load balancing helps: it relieves an overloaded server by sending requests to servers that have capacity, keeping the application responsive and reducing downtime.
Disadvantages of Load Balancing
Load balancing requires that all of the backend servers serve the same code and data. This means that if one server needs to be upgraded, all of them need to be upgraded in step. Servers that must serve different code or data sets cannot simply be placed behind the same load balancer.
Load Balancing Algorithms
There are two broad types of load balancing. Dynamic load balancing methods use algorithms that take the current state of each server into account and distribute traffic accordingly. Static load balancing makes no such adjustments: it sends an equal number of requests to each server in a group, either in a specified order or at random.
Static load balancing algorithms
Round Robin
Round Robin is a very simple way to balance application requests: it hands them to the servers in a fixed rotation, in the order they arrive. It knows nothing about the computing power, availability, or current load of the servers it passes requests to; it only knows that each is one of many servers in the pool.
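As a rough illustration, here is a minimal round-robin selector in Python. The backend addresses and the `pick_backend` helper are hypothetical, not part of any particular product.

```python
from itertools import cycle

# Hypothetical backend pool; the addresses are placeholders for illustration.
BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

_rotation = cycle(BACKENDS)

def pick_backend() -> str:
    """Return the next backend in a fixed rotation, ignoring load or capacity."""
    return next(_rotation)

# Example: six requests are spread evenly, one server after another.
for _ in range(6):
    print(pick_backend())
```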
Weighted Round Robin
Weighted round robin is a load balancing algorithm that favors servers according to their capabilities. Unlike standard round robin, which cycles through every machine in turn with no distinction between them, weighted round robin assigns more requests within each round to the machines that can handle more.
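A naive sketch of the idea, with made-up weights: each backend appears in the rotation as many times as its weight, so higher-capacity servers receive proportionally more requests per round.

```python
from itertools import cycle

# Hypothetical weights; in practice they would reflect CPU, memory, or measured capacity.
WEIGHTED_BACKENDS = {"10.0.0.1:8080": 5, "10.0.0.2:8080": 3, "10.0.0.3:8080": 1}

# Expand each backend according to its weight, then rotate over the expanded list.
_expanded = [addr for addr, weight in WEIGHTED_BACKENDS.items() for _ in range(weight)]
_rotation = cycle(_expanded)

def pick_backend() -> str:
    """Return the next backend; the server with weight 5 gets 5 of every 9 requests."""
    return next(_rotation)
```

Note that this simple expansion sends a burst of consecutive requests to the heaviest server; production implementations usually interleave the picks (so-called smooth weighted round robin) while keeping the same per-round ratios.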
Random
Opportunistic/randomized algorithms do not consider the current server load when assigning tasks. They work well for small tasks, but performance degrades as task size increases, because a machine that is already heavily loaded can receive a new task at any time, which makes the problem worse.
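The corresponding sketch is one line of selection logic; again the pool is hypothetical.

```python
import random

# Hypothetical backend pool.
BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

def pick_backend() -> str:
    """Pick a backend uniformly at random, with no knowledge of current load."""
    return random.choice(BACKENDS)
```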
Dynamic load balancing algorithms
Dynamic load balancing algorithms look for the fastest and least loaded servers and distribute the workload to them, which maximizes performance. Here are the main types of dynamic load balancing algorithms:
Resource-based
Here, a specialized software agent on each server reports its CPU and memory availability (and often its active connections) at a given time, and the software load balancer routes end-user traffic to the servers with the most free resources.
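A minimal sketch of resource-based selection, assuming a hypothetical agent that reports the fraction of free CPU and memory on each server:

```python
# Simulated agent reports: fraction of CPU and memory currently free (illustrative numbers).
agent_reports = {
    "10.0.0.1:8080": {"cpu_free": 0.20, "mem_free": 0.35},
    "10.0.0.2:8080": {"cpu_free": 0.70, "mem_free": 0.60},
    "10.0.0.3:8080": {"cpu_free": 0.45, "mem_free": 0.50},
}

def pick_backend() -> str:
    """Choose the server with the most headroom: maximize the scarcer of free CPU and free memory."""
    return max(
        agent_reports,
        key=lambda addr: min(agent_reports[addr]["cpu_free"], agent_reports[addr]["mem_free"]),
    )
```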
Weighted response time
Weighted response time averages the response time of each server and uses that information to decide where to send traffic. The servers with the fastest response times receive the largest share of requests, which ultimately gives users faster service and keeps the service highly available.
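One way to sketch this, assuming we keep a moving average of response times per backend (the numbers and helper names below are illustrative):

```python
import random

# Moving average of recent response times per backend, in milliseconds (illustrative values).
avg_response_ms = {"10.0.0.1:8080": 40.0, "10.0.0.2:8080": 120.0, "10.0.0.3:8080": 80.0}

def pick_backend() -> str:
    """Weight each backend by the inverse of its average response time: faster servers get more traffic."""
    backends = list(avg_response_ms)
    weights = [1.0 / avg_response_ms[b] for b in backends]
    return random.choices(backends, weights=weights, k=1)[0]

def record_response(backend: str, elapsed_ms: float, alpha: float = 0.2) -> None:
    """Update the moving average after each request completes."""
    avg_response_ms[backend] = (1 - alpha) * avg_response_ms[backend] + alpha * elapsed_ms
```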
Least connection
Directs each new request to the server with the fewest active connections. It assumes that all servers process requests at roughly the same speed, so the server with the fewest open connections is the one with the most spare capacity.
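A minimal least-connection sketch, tracking open connections in a plain dictionary (the pool and helper names are hypothetical):

```python
# Active connection count per backend.
active_connections = {"10.0.0.1:8080": 0, "10.0.0.2:8080": 0, "10.0.0.3:8080": 0}

def pick_backend() -> str:
    """Send the new request to the backend with the fewest open connections."""
    backend = min(active_connections, key=active_connections.get)
    active_connections[backend] += 1   # connection opened
    return backend

def release(backend: str) -> None:
    """Call when the connection finishes so the count stays accurate."""
    active_connections[backend] -= 1   # connection closed
```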
Weighted least connection
This algorithm builds on least connection by also taking each server's assigned weight into account: a new request goes to the server with the lowest ratio of active connections to weight, so larger servers handle proportionally more concurrent connections.
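A sketch of that ratio-based selection, again with placeholder weights:

```python
# Hypothetical capacity weights and per-backend connection counts.
weights = {"10.0.0.1:8080": 5, "10.0.0.2:8080": 3, "10.0.0.3:8080": 1}
active_connections = {addr: 0 for addr in weights}

def pick_backend() -> str:
    """Pick the backend with the lowest active-connections-to-weight ratio."""
    backend = min(weights, key=lambda addr: active_connections[addr] / weights[addr])
    active_connections[backend] += 1
    return backend
```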
Cloud load balancing
Load balancing with HTTP
HTTP(S) load balancing not only balances HTTP and HTTPS traffic across multiple backend instances, it also makes your application reachable at a single global IP address. There is no need for DNS-based redirection, and the load balancer can terminate SSL with its own certificates while balancing traffic.
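To make the HTTP-level mechanics concrete, here is a toy reverse proxy in Python that accepts requests on one address and forwards each GET to a backend chosen round-robin. The backend addresses are placeholders, and real HTTP(S) load balancers add health checks, TLS termination, header forwarding, and connection reuse that this sketch omits.

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from itertools import cycle
from urllib.request import urlopen

# Hypothetical backends; each would run a copy of the application.
BACKENDS = cycle(["http://127.0.0.1:9001", "http://127.0.0.1:9002"])

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)
        # Forward the request path to the chosen backend and relay its response.
        with urlopen(backend + self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```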
Load balancing with TCP/SSL
TCP/SSL load balancing shares traffic across servers at the connection level. It is scalable, does not require pre-warming, and health checks ensure that only healthy instances receive traffic. An SSL proxy lets you terminate SSL connections at the load balancer instead of on every backend.
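At the TCP level the balancer just moves bytes between the client and a chosen backend. A bare-bones pass-through sketch, with placeholder backend addresses and no health checking:

```python
import socket
import threading
from itertools import cycle

# Hypothetical backends to rotate over.
BACKENDS = cycle([("127.0.0.1", 9001), ("127.0.0.1", 9002)])

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until the connection closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def serve(listen_port: int = 8443) -> None:
    listener = socket.create_server(("0.0.0.0", listen_port))
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(next(BACKENDS))
        # Relay traffic in both directions on separate threads.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```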
DNS-Based Load Balancing
Packet-filtering routers use a simple routing table to forward packets from one interface to another based on the destination address in each packet header. Usually this is enough, but sometimes traffic needs to be steered using information other than the destination address alone. DNS-based load balancing offers a solution. It takes advantage of the fact that a DNS response can carry more than a single hostname-to-address mapping: a DNS server can return several IP addresses for the same hostname, or vary which address it returns from one query to the next, so different clients end up connecting to different hosts.
Clients and resolvers that use these responses spread traffic among multiple hosts without any change on the client side. This type of load balancing is called DNS-based because the DNS servers themselves are what distribute the traffic across multiple servers.
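From the client's point of view this looks like a hostname that resolves to several addresses. A small sketch of that client-side view (the hostname in the example is illustrative):

```python
import random
import socket

def resolve_all(hostname: str, port: int = 443) -> list[str]:
    """Return every IPv4 address the DNS server hands back for the hostname."""
    infos = socket.getaddrinfo(hostname, port, family=socket.AF_INET, type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def pick_address(hostname: str) -> str:
    """Pick one of the returned addresses; many resolvers already rotate the order themselves."""
    return random.choice(resolve_all(hostname))

# Example usage: print(resolve_all("example.com"))
```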
How SSL offload reduces security risks
With SSL offload, certificates are managed centrally and traffic is encrypted and decrypted at the load balancer rather than on every backend. Automating this keeps certificates consistent across the fleet, which provides a baseline of security while maintaining backend performance.
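A minimal sketch of the offload pattern, assuming hypothetical certificate files and a single plaintext backend; responses from the backend to the client are omitted for brevity.

```python
import socket
import ssl
import threading

# The load balancer holds the certificate and private key (placeholder file names).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="lb-cert.pem", keyfile="lb-key.pem")

BACKEND = ("127.0.0.1", 9001)  # backend receives already-decrypted traffic

def handle(tls_client: ssl.SSLSocket) -> None:
    upstream = socket.create_connection(BACKEND)
    while data := tls_client.recv(4096):  # decrypted by the TLS layer
        upstream.sendall(data)            # forwarded onward in plaintext
    upstream.close()

def serve(port: int = 443) -> None:
    listener = socket.create_server(("0.0.0.0", port))
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            client, _ = tls_listener.accept()  # TLS handshake happens here
            threading.Thread(target=handle, args=(client,), daemon=True).start()
```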