Load Balancer Algorithms: What are they and how do they work?


An essential triage point for internet traffic with practical security features, a load balancer is the gatekeeper that directs web traffic to the best available servers for optimal application efficiency. This process relies on different algorithms, each with its own strengths. In this post, you will read all about them so you can make the most of our balancers.

Round Robin

This is the most common algorithm, where all available servers form a queue. When a new request comes in, the load balancer forwards it to the first server in the queue. Upon the next request, the balancer distributes the traffic to the next server in the list.

The diagram below gives you a picture of how this works. Say we have an environment with three available servers: the first client’s request (1) received by the balancer is assigned to server 1. The next request (2) is then assigned to the next server in turn, namely server 2. When the balancer finishes routing the third request and reaches the bottom of the server list, it directs the next client (4) to the first on the list again, which is server 1. And the cycle continues.

LayerStack Load Balancers Algorithm: how does round robin work

It is the simplest and easiest algorithm to implement: each server handles a similar amount of workload, ensuring no server is overloaded or starved of requests.
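As a rough mental model, the round-robin rotation can be sketched in a few lines of Python (the server names here are purely illustrative):

```python
from itertools import cycle

# Hypothetical backend pool for illustration.
servers = ["server1", "server2", "server3"]

# cycle() walks the list forever, wrapping back to the start
# after the last server -- exactly the round-robin queue.
rotation = cycle(servers)

def route(request_id):
    """Assign the next server in the rotation to this request."""
    return next(rotation)

# Requests 1-4 mirror the diagram: servers 1, 2, 3 in turn,
# then request 4 wraps around to server 1 again.
assignments = {i: route(i) for i in range(1, 5)}
```

A production balancer also skips servers that are down, but the wrap-around queue is the heart of the algorithm.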

Least Connections

The name says it all – the balancer monitors the current capacity of each available server and assigns new requests to the one with the fewest active connections.

In the diagram below, servers 1 and 2 are handling a higher volume of requests. Therefore, when client 1 comes in, the request is directed to server 4, as it is currently idle. The next client (2) is also assigned to server 4, as it is one of the two servers (3 and 4) with the fewest connections. Now, with server 4 holding two connections while server 3 holds just one, the third incoming client request is routed to server 3 – the one with the fewest active connections.

LayerStack Load Balancers Algorithm: how does least connection work

This intelligent mechanism ensures all requests are handled in the most effective manner possible, and is more resilient to heavy traffic and demanding sessions.
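A minimal sketch of the same idea in Python, assuming hypothetical per-server connection counters (note that a tie between servers may break differently than in the diagram above):

```python
# Hypothetical per-server active-connection counts; a real balancer
# updates these live as connections open and close.
active = {"server1": 5, "server2": 4, "server3": 1, "server4": 0}

def route():
    """Send the new request to the server with the fewest active connections."""
    target = min(active, key=active.get)
    active[target] += 1  # the new request opens a connection
    return target

first = route()   # server4 is idle, so it wins
second = route()  # servers 3 and 4 are now tied; min() picks one of them
```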


Source

Similar in nature to least connections, the source-based algorithm pairs requests with the client’s IP address. Once you set up the rules in the LayerPanel, our load balancers will route the workloads accordingly.

For instance, the balancer recognizes an IP address that you have previously specified and autonomously directs requests from that specific client to a specific server – server 2 in the diagram below. When the same client returns a few days later with a new request, the balancer recognizes its IP address and distributes the request to the same server.

LayerStack Load Balancers Algorithm: how does source work

This algorithm gives you the flexibility to group certain application-specific tasks together or tailor the environment to best process specific requests. This allows your application to handle requests with the right resources and reach more predictable results.
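The behaviour can be sketched as follows; the rule table and IP addresses are hypothetical stand-ins for rules you would actually configure in LayerPanel, and the hash fallback is one common way of keeping unknown clients sticky:

```python
import zlib

# A hypothetical pre-configured rule: this client always goes to server 2.
rules = {"203.0.113.7": "server2"}
servers = ["server1", "server2", "server3"]

def route(client_ip):
    """Source-based routing: known IPs follow their configured rule,
    and any other client is hashed to a consistent server."""
    if client_ip in rules:
        return rules[client_ip]
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]
```

Because the mapping depends only on the IP, the same client reaches the same server on every visit, even days apart.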

Want more details on how to configure Load Balancers for LayerStack’s cloud servers? Read our tutorials and product docs.


If you have any ideas for improving our products or want to vote on other ideas so they get prioritized, please submit your feedback on our Community platform. Feel free to pop by our community.

LayerStack’s cloud solutions have it all – and it’s free


Why choose when you can have it all? LayerStack’s cloud server solutions are not only high-performing, reliable and versatile, they also come with a slew of features and services – all at no extra charge! LayerStack promises stress-free, one-stop solutions, and that is exactly what you get.

Cloud Control Panel

Manage every aspect of your cloud with our foolproof and intuitive Control Panel that generates insightful analytics and tracking reports. Build, configure, and scale your infrastructure with just a few clicks.


Cloud Firewalls

Safeguard your business with customizable Firewalls that secure your network traffic. Users can fine-tune firewall rules for security tailored to their specific business needs. Pre-defined templates are available so that setting up firewalls across multiple servers is easy and fuss-free.


API

Personalize your configurations with our powerful API so you can make the most of the server’s capabilities to fit your most specific requirements.

Free Migration

Migrating your applications and workloads across environments is sweat-free because our specialists will do it for you – after evaluating the migration process and careful planning, of course – all on the house!

High-Performance Architecture

LayerStack holds a wide range of innovative cloud server options in our arsenal, all as powerful and efficient as you can expect. Our Memory-Optimized and Compute-Optimized servers are equipped with 100% dedicated AMD EPYC vCPUs for unbeatable CPU power and NVMe SSDs that offer superior storage capability and speed, while the General Purpose servers are perfectly competent at day-to-day operations. Regardless of what you are looking for, we have just what the doctor ordered to drive your business.

Cloud Managed Service

From server configuration and operating system installation to bandwidth usage monitoring and SSL certificate installation with Plesk/cPanel, our comprehensive cloud management support always has your back so you can focus on your business.

Additional Benefits

Need more convincing? On top of the benefits of having data centers around the globe, most cloud server plans at LayerStack offer unlimited traffic and are bound by a service-level agreement that guarantees a server uptime of 99.95% (click here to check the status of our servers). Human technical support is up and ready 24/7/365, while an extensive Documentation Library is fully accessible whenever an issue presents itself.

LayerStack understands that your time should be spent on your business growth instead of wading through technical matters. Let our world-class cloud experts do the heavy lifting while you sit back and watch how our solution drives your business.

Got questions about our cloud service? Click here to arrange a free consultation with our cloud experts now!

LayerStack Load Balancers support Global Private Networking and DDoS Protection


Load Balancers are LayerStack’s latest product, maximizing the capabilities of your applications by distributing traffic across multiple cloud servers regionally and globally. Whether you are running high-traffic websites, performing disaster recovery or maintaining multiple sites that require high availability, our Load Balancers are a valuable player in preventing any single server from overloading, so your applications can run at optimal speed and capacity.

Intelligent traffic direction, however, is just a fraction of what makes our Load Balancers amazing. They support various features that make your cloud journey stress-free. Setting up is a cinch and can be done with just a few clicks in the LayerStack cloud panel or LayerStack API.

Global Private Networking

By configuring Load Balancers with Global Private Networking, all data is transmitted via a low-latency, isolated network without compromising speed or security.

How do Load Balancers support the Global Private Networking?

To further enhance the security of your cloud, you can combine Global Private Networking with DDoS Protection.

DDoS Protection

A distributed denial-of-service (DDoS) attack is one of the greatest cyber threats of recent decades: attackers overwhelm a network with a flood of internet traffic, preventing your applications from serving your genuine customers.

Sitting in front of the Load Balancers, the DDoS Protection mechanism protects both the balancers and the cloud servers behind them.

How do Load Balancers support the DDoS Protection?

Putting it all Together

With the combination of these Load Balancers’ features, you can create stable and secure configurations for enhanced availability and performance. For example, you may route web traffic through an isolated private network to transmit data securely while still handling a swarm of simultaneous requests, ensuring your website runs smoothly.

How do LayerStack Load Balancers support Global Private Networking and DDoS Protection

Available today

LayerStack’s Load Balancers improve the availability, performance and scalability of your applications.  You can deploy Load Balancers now in LayerPanel with ease and minimal configuration.  In the Panel, you can also perform custom health checks, choose a preferred load balancing algorithm, set up sticky sessions, proxy protocol and SSL certificates, as well as activate DDoS protection and Global Private Networking.

Our Load Balancers are available to all cloud servers by LayerStack, including General Purpose Cloud Servers, Memory Optimized Cloud Servers and Compute Optimized Cloud Servers in Hong Kong, Singapore and Tokyo.

As always, we tirelessly improve our solutions while also developing new features – there will be more exciting news! Check out the LayerStack Community and keep an eye on our social media for more announcements.

LayerStack Load Balancers Use Cases

We are pleased to announce that LayerStack Load Balancers are now available in Hong Kong, Singapore and Tokyo.  With our Load Balancers, you can maximize the capabilities of your applications by distributing traffic among multiple cloud servers regionally and globally.

Why do you need load balancers?

Over the course of my career, I’ve learned that one tenet of work efficiency is sensible delegation. The same applies to cloud servers.

At its core, a load balancer is a traffic controller that distributes the incoming flow of data to a pool of servers. A spread workload means no one server bears too much traffic at any given time, providing the best insurance against unwanted slowdowns or service downtime.

This is, however, just the beginning. Tactful traffic direction opens doors to countless possibilities. Maintaining high service availability and scaling across regions are just a few of many common use cases, and we will get to them in a minute.

Use case 1: Spread workload for scaling

The dynamic traffic routing by the load balancers creates a distributed system that handles varying workloads at maximum efficiency. The balancer directs inbound flow to available cloud servers for stable, responsive web performance, making it ideal for quick horizontal scaling – whether in response to sudden traffic surges or deliberate business expansions.

In the LayerStack control panel, you can choose from three algorithms with which the load balancers decide how to route the traffic, each coming with its own benefits:

Round-robin: The most common algorithm, where available servers form a queue. When a new request comes in, it is handled by the server at the top of the queue. Upon the next request, the balancer goes down the server list and assigns it to the next server in the queue. When it reaches the bottom of the list, the next request is directed to the first on the list again. It is the simplest and easiest algorithm to implement.

Least connection: Just like when you need help, you won’t reach out to that one colleague with the most pressing deadlines to meet (hopefully). Under this algorithm, the balancer keeps track of the current capacity of each available server and assigns new requests to the one with the fewest active connections. This mechanism is more resilient to heavy traffic and demanding sessions.

Source-based: Workloads are distributed according to the IP address of the incoming requests, giving you the flexibility to group certain application-specific tasks together or tailor the environment to best suit certain requests.

Use case 2: High availability with health check

While the load balancers do a fantastic job of withstanding volatile traffic patterns, they are equally good at mitigating system failures.

The load balancers perform periodic health checks on all available servers and route requests only to the healthy ones until any issue is resolved. This naturally provides a failover mechanism that prepares you for the worst.

Customizing the health checks to your liking is quick and easy – simply tweak the parameters in the Settings section of the Load Balancers in the control panel.
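Under the hood, a health check is conceptually very simple. The sketch below uses a plain TCP connection test with hypothetical parameters; the actual checks and their settings are the ones exposed in the panel:

```python
import socket

TIMEOUT_SECONDS = 3  # hypothetical check timeout

def is_healthy(host, port, timeout=TIMEOUT_SECONDS):
    """A server counts as healthy if it accepts a TCP connection
    on the checked port within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_pool(servers):
    """Route only to servers that pass the check; unhealthy ones are
    skipped until they recover -- the failover behaviour."""
    return [(host, port) for host, port in servers if is_healthy(host, port)]
```

Run on a schedule, this is all a failover mechanism needs: the routing pool simply shrinks and grows as servers fail and recover.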

Use case 3: Distribute traffic across multi-regions

Similar to how failover works, load balancers are a great way to scale your applications geographically, given that they can direct incoming data flow to various cloud servers across different regions. All you need to do is have your infrastructure – web servers, databases, load balancers – and private networks replicated and set up in different locations. Load balancers will distribute inbound requests to servers in the corresponding locations for optimal performance, all while carrying out scheduled health checks to achieve overall stability.

Let nothing stop you!

These are just a fraction of possible examples where load balancers can be helpful. Get creative and explore more use cases that fit your specific needs!

Setting up load balancers is pretty foolproof, but we also understand that a detailed step-by-step tutorial always comes in handy. Please click here for more details.


Shared vs. dedicated bandwidth – Which one is right for you?

Shared or dedicated – two words that you may come across on multiple occasions while looking for the perfect cloud server plan for your business. From the actual cloud server and memory to the internet connection (a.k.a. the bandwidth), it’s not uncommon that you need to pick one of the two options. What is the difference between them? Why are there such price differences? In this week’s post, let’s dive in and learn all about shared and dedicated bandwidth.

What is bandwidth?

Bandwidth, predominantly measured in Mbps (megabits per second), is the maximum volume of data that can be transmitted in one second. The more bandwidth, the more data can be sent and received at a given time, which essentially means a faster connection.
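The bits-versus-bytes distinction is worth a quick sanity check. The snippet below converts a link speed to megabytes per second and estimates a best-case transfer time (the numbers are illustrative):

```python
def mbps_to_mb_per_sec(mbps):
    """Mbps is megaBITS per second; divide by 8 to get megaBYTES per second."""
    return mbps / 8

def transfer_seconds(file_size_mb, bandwidth_mbps):
    """Best-case time to move a file of file_size_mb megabytes."""
    return file_size_mb / mbps_to_mb_per_sec(bandwidth_mbps)

# A 100 Mbps link moves at most 12.5 MB/s, so a 500 MB file
# takes at least 40 seconds even under ideal conditions.
peak = mbps_to_mb_per_sec(100)
eta = transfer_seconds(500, 100)
```

Real-world throughput is lower still once protocol overhead and congestion are factored in, so treat these as upper bounds.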

Shared bandwidth

When a cloud server plan offers a shared bandwidth, it means several users are using the same internet connection, and everyone essentially gets a fraction of the bandwidth.

In many cases, especially when traffic is light and your business does not rely on data-heavy applications, overall server performance rarely suffers. Also, when other users are inactive, you have the potential to enjoy the benefits of the full bandwidth.

One big plus of a shared bandwidth plan is its price. As you are splitting the cost with other users, it is a more economical option for businesses that need the cloud server for small to medium databases and everyday back-office operations.

In fact, you can spend the money you save on a plan with a shared but higher bandwidth for a faster connection.

Dedicated bandwidth

Dedicated bandwidth means you have every ounce of the guaranteed bandwidth at your disposal. This means your connection is more resilient to peak traffic because it is independent of other users.

While plans with dedicated bandwidth are generally more expensive, you enjoy the benefits of solid uptime and stability.

Workloads that involve constant uploading, downloading or transferring of large files or amounts of data, as well as time-sensitive functions like e-commerce services, can take advantage of a bandwidth solely devoted to you. It is also a good option for businesses that have a sizable workforce sharing the same internet connection.

Which one is your right choice?

At the end of the day, the choice eventually comes down to your needs and budget. It is best to assess your own circumstances before jumping to a decision.

Yes, it’s easier said than done. That’s why LayerStack goes one step further and provides you with a free consultation on which setup is best for you. Simply reach out to our solution specialists upon signing up.

Latest AMD 3rd Generation EPYC CPU to join LayerStack lineup

It might be obvious, but server processors pull a lot of weight when it comes to compute-intensive tasks in the cloud. AI, machine learning, data analytics – you name it.

For this exact reason, LayerStack is equipping our infrastructure with the latest AMD 3rd Generation EPYC CPU – following its debut earlier in March – and offering the performance standards that our users expect, while retaining our core services, global availability and competitive pricing that you know and love.

Our coming AMD-based offerings feature EPYC™ 7003 Series server CPUs, the world’s highest-performing server processors by the leading semiconductor manufacturer. Courtesy of its remarkable memory and I/O capacity, the redesigned processing core takes the speed of application performance to the next level and helps you drive business outcomes.

What is special about the AMD 3rd Gen EPYC processor?

A solid upgrade with impressive benchmarks

AMD has been producing top-quality products since it first introduced the EPYC chip back in 2017, with the first two generations of EPYC earning the company massive shares in the high-performance computing market. Bound to be the cornerstone of clouds, data centers and supercomputers, AMD’s 3rd Gen EPYC x86 processors bring substantial leaps in performance and tackle workload-intensive applications across the board with more speed and economy.

Highly performant architecture

The new 7003 Series is built on AMD’s Zen 3 core architecture and promises a 19% boost in instructions per clock (IPC) and a doubled L3 cache. Coupled with Infinity Fabric™ Technology, the upgraded core delivers a two-fold improvement in x86 performance, as well as twice the throughput for AI inference and INT8 performance over the previous generation. These improvements mean users will see lower latency in the most demanding workloads.

Enhanced cloud security

Another highlight of the new generation of processors is additional security. Known as AMD Infinity Guard, this robust set of end-to-end security features creates an isolated execution environment and prepares the Zen architecture to defend against malicious hypervisor-based attacks.

To celebrate the inclusion of AMD’s EPYC™ 7003 Series in our lineup, LayerStack is bringing you promotional offers to selected plans so you can enjoy the new generation processor and what it has to offer. Stay tuned for more details and don’t miss out!

Rocky Linux as the next clone of CentOS

Misery may love company, but the right company heals misery, too. Red Hat’s controversial move of forgoing the development of CentOS towards the end of this year has upset many. But in times of crisis like this, the Linux community always shows its ability to adapt and selflessly make changes for the better.

In March came AlmaLinux, a one-for-one open-source CentOS clone developed by the creator of CloudLinux. Getting its name from the Latin word for “soul”, AlmaLinux pulls together the collective wisdom of the Linux community and delivers robust and stable performance that is praised by all accounts. That’s why we, too, jumped on the bandwagon and have started offering AlmaLinux as one of our operating systems and ISO templates.

And the good news does not stop there: a beta version of yet another replacement has joined the party. On the last day of April, Rocky Linux launched a release candidate – Rocky Linux 8.3 Release Candidate 1, a community enterprise operating system believed to be a reliable substitute for CentOS with complete bug-for-bug compatibility.

The intelligent minds behind the creation are led by one of the founders of the CentOS project, Gregory Kurtzer. Similar to AlmaLinux, this new Linux distribution is developed and supported by the joint effort of the Linux community. The final release of the actual operating system is yet to be announced, but the release candidate is now available and serves as a testing ground for IT professionals to dip their toes in the waters of Rocky Linux, trying out features, validating functionality and reporting issues before the official launch.

According to Kurtzer, the system is intentionally built to resemble CentOS. Everything from installation to actual operation should be instantly familiar to users of the old platform, making it as much of an easy swap as possible.

If you are also interested in this new kid on the block, visit the download page here and give it a go!

Follow us on Facebook/Twitter/LinkedIn to get the latest updates!

Latest Product and Feature Updates: May 2021

LayerStack is consistently committed to improving and providing our valued customers with superior cloud computing services. We have revamped the plans and offers on LayerPanel (LayerStack’s new generation Control Panel) as well as introduced a new OS. The highlights are listed below:

China Direct CN2 Route: Expanded our CN2 GIA network in Asia Pacific.  It is now available in Hong Kong, Singapore and Tokyo regions.

Updates to Standard Cloud Servers plans: R008-HK has been removed, while the new R001-HK has been added with 3-month and 12-month billing cycles. For more details, please view our price plan.

New OS & ISO template: AlmaLinux 8 has been newly added as an option in our OS and ISO templates. If you are interested in migrating from CentOS 8 to AlmaLinux seamlessly and painlessly, please click here.

You can check out the release notes for up-to-date information about product updates, and read about updates from the previous month here.


LayerStack officially includes AlmaLinux as CentOS replacement

When Red Hat decided to “shift investment” away from CentOS towards the end of last year, the tech world was not happy (Dude, seriously? Isn’t the pandemic crazy enough?!). After what felt like an eternity, AlmaLinux – a one-for-one open-source replacement for the soon-to-be-defunct CentOS – was released in late March and, at long last, ended the anger, confusion and drama among all the fuming users.

It’s a huge deal for CentOS users – and LayerStack, too. LayerStack now officially includes AlmaLinux OS among our operating system options and ISO templates for all your cloud computing needs. Check LayerPanel (LayerStack’s new generation Control Panel) and our API for this stable yet license-free operating system.

Waiting is painful, and we make sure that waiting is the only hard part. Check out our detailed tutorial, where you can find the prerequisites and a step-by-step guide for migrating from CentOS 8, to ensure a painless and seamless setup and migration.

Still running into errors? Visit our Community platform, where you can find answers to common problems or submit your feedback.


AlmaLinux to join LayerStack’s Cloud Servers Family

April is the month of rebirth. It is when Easter falls. Astrologically, it is when the sun enters Aries – the first zodiac sign, marking the beginning of the new year (happy birthday to all the Aries!). It’s also the start of a new fiscal year, and schools in Japan start in April.

All I’m trying to say is, this is the perfect time for something new – AlmaLinux, the new enterprise-grade Linux distribution. Following the beta release last month, the new operating system has now officially launched as CentOS’s replacement.

The mastermind behind it is the creator of CloudLinux, which we have known for years and which is loved by over 4,000 tech companies worldwide. With such a solid foundation of expertise, stability and reliability, the license-free AlmaLinux OS is poised to follow CloudLinux’s success in filling the gap left by the CentOS EOL.

Named after the Latin word for “soul”, AlmaLinux taps into the collective wisdom of the Linux community for its creation and future maintenance. This means the Linux distribution will be owned and governed by the community. Its open-source nature allows IT professionals to tackle any issues, as well as explore and contribute to the AlmaLinux community.

In the month of rebirth, souls reincarnate. To help you embark on this exciting journey, LayerStack is excited to announce that we will include AlmaLinux as part of our cloud server options. Most importantly, we are committed to a simple yet seamless transition from CentOS to AlmaLinux with zero downtime for existing users.
