It’s no surprise that the world of operating systems moves at a fast clip. As a leading cloud server provider, LayerStack keeps close tabs on the latest trends and updates our comprehensive library of operating systems so you won’t miss out.
Recently added to our OS selection is Rocky Linux 8.4, which has reached General Availability for x86_64 and aarch64. It was created by one of the founders of the CentOS project as CentOS’s successor.
Not a Rocky user? No problem, we have plenty more where that came from. From Ubuntu, Fedora and Debian to AlmaLinux and Windows, LayerStack goes out of its way to maintain a comprehensive library where you can find multiple versions of operating systems of almost every description.
For those special snowflakes out there, if you need an environment outside our official image list, don’t feel left out – because you are not! LayerStack allows you to upload your customized VM images, meaning you can enjoy the flexibility of bringing your own operating system images on our servers to fit your specific needs.
Got some ideas from our operating system library? Share them in our Community and stay posted for more of our latest releases on our social media.
From your apartment furniture, to your cell phone data, to your entire application workload, moving things to a new environment can make you sweat bullets, but it doesn’t have to. Our tech experts at LayerStack will do the heavy lifting for you – all free of charge. On top of that, you can enjoy a slew of benefits from the cloud packages in which we take pride.
Why migrate to LayerStack clouds?
We let the quality of our cloud infrastructure speak for itself. Snatching top spots in various benchmark evaluations, our cloud solutions excel in web performance, CPU power, stability, disk I/O performance, network performance and many other areas, beating some of the biggest names in the field.
Our intuitive LayerPanel allows you to control the smallest details from a single portal. Build, manage and monitor your environment and receive insightful, broken-down analytics so you can make well-informed decisions that drive your business.
And we don’t stop there. LayerStack’s development team works tirelessly and rolls out new features to further improve your cloud journey with us. Global Private Networking, Load Balancers and API are just the start – the sky is only the limit, not our cloud.
There’s no better news than being able to keep everything the way it was after a transition. We promise an easy and professional migration with minimal disruption, and that’s what we deliver.
Our templates let you import your original VM images to preserve the software and previous configurations in your customized environment. You can install multiple instances from the same image and keep everything you know and love, while enjoying the new benefits that LayerStack’s remarkable features offer.
2. Fuss-free migration delivered by experts
If all you want is a simple migration, you can have it too – for free! Our experienced experts have executed migrations of all sizes, and they will put together the migration strategy best suited to your situation for a smooth, simplified and swift migration with minimum downtime. Throughout the process, all data will be encrypted and transmitted over a secure channel, as security has always been our priority.
An essential triage point for internet traffic with practical security features, the load balancer is the gatekeeper that directs web-based traffic to the best available servers for optimal application efficiency. This process can involve different algorithms, each with its own pluses. In this post, you will read all about them so you can make the most of our balancers.
This is the most common algorithm, in which all available servers form a queue. When a new request comes in, the load balancer forwards it to the first server in the queue. Upon the next request, the balancer distributes the traffic to the next server in the list.
The diagram below gives you a picture of how this works. Say we have an environment with three available servers: the first client’s request (1) received by the balancer is assigned to server 1. The next request (2) is then assigned to the next server in turn, namely server 2. When the balancer finishes routing the third request and reaches the bottom of the server list, it directs the next client (4) to the first on the list again, which is server 1. And the cycle continues.
It is the simplest and easiest algorithm to implement: each server handles a similar amount of workload, making sure there is no overload or starvation of server resources.
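The cycling described above can be sketched in a few lines of Python. This is an illustrative model of the round-robin idea, not LayerStack’s actual implementation; the server names are placeholders:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand each incoming request to the next server in a fixed rotation."""

    def __init__(self, servers):
        self._rotation = cycle(servers)

    def route(self, request):
        # Each call returns the next server in the queue, wrapping
        # back to the first server after the last one.
        return next(self._rotation)

balancer = RoundRobinBalancer(["server1", "server2", "server3"])
assignments = [balancer.route(req) for req in (1, 2, 3, 4)]
print(assignments)  # → ['server1', 'server2', 'server3', 'server1']
```

Note how the fourth request wraps around to server 1, exactly as in the three-server example above.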
The name says it all – the balancer monitors the current capacity of each available server, and assigns new requests to the one with the fewest active connections.
In the diagram below, servers 1 and 2 are serving a higher demand of requests. Therefore, when client 1 comes in, their request is directed to server 4 as it is currently idle. The next client (2) is also assigned to server 4 as it is now one of the two servers (3 and 4) with the fewest connections. Now, with server 4 holding two connections while server 3 holds just one, the third incoming client request is routed to server 3 – the one with the fewest active connections.
This intelligent mechanism ensures all requests are handled in the most effective manner possible, and is more resilient to heavy traffic and demanding sessions.
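The bookkeeping behind this mechanism can be sketched as follows. Again, this is an illustrative model rather than LayerStack’s implementation; the starting connection counts mimic a scenario like the diagram’s, and ties here break by listing order:

```python
class LeastConnectionBalancer:
    """Route each request to the server with the fewest active connections."""

    def __init__(self, servers):
        self.active = {server: 0 for server in servers}

    def route(self, request):
        # Pick the least-loaded server (ties break by listing order)
        # and count the new connection against it.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Call when a connection closes so the counts stay accurate.
        self.active[server] -= 1

# A starting state similar to the diagram: servers 1 and 2 are busy,
# server 3 holds one connection, server 4 is idle.
lb = LeastConnectionBalancer(["server1", "server2", "server3", "server4"])
lb.active.update({"server1": 3, "server2": 3, "server3": 1, "server4": 0})
print([lb.route(req) for req in (1, 2, 3)])  # → ['server4', 'server3', 'server4']
```

The `release` hook matters in practice: without decrementing counts when connections close, the balancer would drift toward treating every server as busy.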
Taking a different approach, the source-based algorithm pairs requests with the client’s IP address. Once you set up the rules in LayerPanel, our load balancers will route the workloads accordingly.
For instance, the balancer recognizes the IP address that you have previously specified, and autonomously directs the requests from that specific client to a specific server – server 2 – in the diagram below. When the same client returns a few days later with a new request, the balancer recognizes its IP address and will distribute the request to the same server.
This algorithm gives you the flexibility to group certain application-specific tasks together, or to tailor an environment to best process specific requests. Your application can then handle requests with the right resources and reach more predictable results.
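One common way to implement source-based stickiness is to hash the client address, so the same IP always maps to the same server. This is a generic illustration of that idea (LayerStack’s actual rules are configured in LayerPanel, and the addresses here are documentation examples):

```python
import hashlib

def source_based_route(client_ip, servers):
    """Map a client IP to a stable server choice by hashing the address."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["server1", "server2", "server3"]

# The same client lands on the same server every time, even days later.
first_visit = source_based_route("203.0.113.7", servers)
return_visit = source_based_route("203.0.113.7", servers)
assert first_visit == return_visit
```

Because the mapping depends only on the IP and the server list, no per-client state needs to be stored between visits.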
Want more details on how to configure Load Balancers for LayerStack’s cloud servers? Read our tutorials and product docs.
Why choose when you can have it all? LayerStack’s cloud server solutions are not only high-performing, reliable and versatile, they also come with a slew of features and services – all without extra charge! LayerStack promises stress-free, one-stop solutions, and those are exactly what you get.
Cloud Control Panel
Manage every aspect of your cloud with our foolproof and intuitive Control Panel that generates insightful analytics and tracking reports. Build, configure, and scale your infrastructure with just a few clicks.
Safeguard your business with customizable Firewalls that secure your network traffic. You can fine-tune firewall rules for security optimal to your specific business needs. Pre-defined templates are provided so that setting up firewalls across multiple servers is easy and fuss-free.
Personalize your configurations with our powerful API so you can make the most of the server’s capabilities to fit your most specific requirements.
Migrating your applications and workloads across environments is sweat-free because our specialists will do it for you, after evaluating the migration process and careful planning of course – all free of charge!
LayerStack holds a wide range of innovative cloud server options in our arsenal, all of which are as powerful and efficient as you can expect. Our Memory-Optimized servers and Compute-Optimized servers are equipped with 100% dedicated AMD EPYC vCPU for unbeatable CPU power and NVMe SSD that offers superior storage capability and speed, while the General Purpose servers are perfectly competent at day-to-day operations. Regardless of what you are looking for, we have just what the doctor ordered to drive your business.
Cloud Managed Service
From server configuration and operating system installation to bandwidth usage monitoring and SSL certificate installation with Plesk/cPanel, our comprehensive cloud management support always has your back so you can focus on your business.
Need more convincing? On top of the benefits of having data centers around the globe, most cloud server plans at LayerStack offer unlimited traffic and are backed by a service-level agreement that guarantees a server uptime of 99.95% (click here to check the status of our servers). Human technical support is up and ready 24/7/365, while an extensive Documentation Library is fully accessible whenever an issue presents itself.
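To put a 99.95% guarantee in concrete terms, a quick back-of-the-envelope calculation turns the percentage into a monthly downtime budget:

```python
uptime_guarantee = 0.9995
minutes_per_month = 30 * 24 * 60  # 43,200 minutes in a 30-day month

# Whatever fraction is not guaranteed up is the allowable downtime.
downtime_budget = (1 - uptime_guarantee) * minutes_per_month
print(f"{downtime_budget:.1f} minutes of downtime per month")  # → 21.6 minutes
```

In other words, a 99.95% uptime guarantee allows roughly 22 minutes of downtime in a 30-day month.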
LayerStack understands that your time should be spent on your business growth instead of wading through technical matters. Let our world-class cloud experts do the heavy-lifting while you sit back and watch how our solution drives your business.
Got questions about our cloud service? Click here to arrange a free consultation with our cloud experts now!
Load Balancers are LayerStack’s latest product that maximizes the capabilities of your applications by distributing traffic across multiple cloud servers regionally and globally. Whether you are running high traffic websites, performing disaster recovery or maintaining multiple sites that require high availability, our Load Balancers can be a valuable player to avoid overloading of any single server, so your applications can run at optimal speed and capabilities.
Intelligent traffic direction, however, is just a fraction of what makes our Load Balancers amazing. They support various features that make your cloud journey stress-free. Setting up is a cinch and can be done with just a few clicks in the LayerStack cloud panel or LayerStack API.
Global Private Networking
By configuring Load Balancers with Global Private Networking, all data is transmitted via a low-latency isolated network without compromising speed or security.
To further enhance the security of your cloud, you can combine Global Private Networking with DDoS Protection.
Sitting in front of the Load Balancers, the DDoS Protection mechanism protects both the balancers and the cloud servers behind them.
Putting it all Together
With these Load Balancer features combined, you can create stable and secure configurations for enhanced availability and performance. For example, you may isolate web traffic on a private network to transmit data securely while still handling a swarm of simultaneous requests, ensuring the smooth running of your website.
LayerStack’s Load Balancers improve the availability, performance and scalability of your applications. You can deploy Load Balancers now in LayerPanel with ease and minimal configuration. In the Panel, you can also perform custom health checks, choose a preferred load balancing algorithm, set up sticky sessions, proxy protocol and SSL certificates, as well as activate DDoS protection and Global Private Networking.
Our Load Balancers are available to all cloud servers by LayerStack, including General Purpose Cloud Servers, Memory Optimized Cloud Servers and Compute Optimized Cloud Servers in Hong Kong, Singapore and Tokyo.
As always, we tirelessly improve our solutions while also developing new features – there will be more exciting news! Check out the LayerStack Community and keep an eye on our social media for more announcements.
We are pleased to announce that LayerStack Load Balancers are now available in Hong Kong, Singapore and Tokyo. With our Load Balancers, you can maximize the capabilities of your applications by distributing traffic among multiple cloud servers regionally and globally.
Why do you need load balancers?
Over the course of my career, I’ve learned that one tenet of work efficiency is sensible delegation. The same applies to cloud servers.
At its core, a load balancer is a traffic controller that distributes the incoming flow of data to a pool of servers. Spreading the workload means no single server bears too much traffic at any given time, providing the best insurance against unwanted slowdowns or service downtime.
This is, however, just the beginning. Tactful traffic direction opens doors to countless possibilities. Maintaining high service availability and scaling across regions are just a few of many common use cases, and we will get to them in a minute.
Use case 1: Spread workload for scaling
The dynamic traffic routing by the load balancers creates a distributive system that handles varying workloads at maximum efficiency. The balancer directs inbound flow to available cloud servers for stable, responsive web performance, making it ideal for quick horizontal scaling – whether in response to sudden traffic surges or deliberate business expansions.
In the LayerStack control panel, you can choose from three algorithms with which the load balancers decide how to route the traffic, each coming with its own benefits:
Round-robin: The most common algorithm, in which the available servers form a queue. When a new request comes in, it is handled by the server at the top of the queue. Upon the next request, the balancer goes down the server list and assigns it to the next server in the queue. When it reaches the bottom of the list, the next request is directed to the first server again. It is the simplest and easiest algorithm to implement.
Least connection: Just like when you need help, you won’t reach out to that one colleague with the most pressing deadlines to meet (hopefully). Under this algorithm, the balancer keeps track of the current capacity of each available server and assigns new requests to the one with the fewest active connections. This mechanism is more resilient to heavy traffic and demanding sessions.
Source-based: Workloads are distributed according to the IP address of the incoming requests, giving you the flexibility to group certain application-specific tasks together, or to tailor the environment to best serve certain requests.
Use case 2: High availability with health check
While load balancers do a fantastic job of withstanding volatile traffic patterns, they are equally good at mitigating system failures.
The load balancers perform periodic health checks on all available servers and route requests only to the healthy ones until any issue is resolved. This naturally provides a failover mechanism that prepares you for the worst.
Customizing the health checks to your liking is quick and easy. Simply tweak the parameters in the Settings section of the Load Balancers in the control panel:
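The idea behind a health check is easy to sketch. The snippet below assumes each server exposes an HTTP endpoint at a path like `/health`; both the path and the check logic are illustrative, not LayerStack’s actual mechanism:

```python
import urllib.request

def is_healthy(server_url, timeout=2):
    """Treat a server as healthy if its /health endpoint answers HTTP 200."""
    try:
        with urllib.request.urlopen(f"{server_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Timeouts, refused connections and HTTP errors all count as unhealthy.
        return False

def healthy_pool(servers):
    # Requests are routed only to servers that pass the check; a failed
    # server rejoins the pool automatically once it recovers.
    return [server for server in servers if is_healthy(server)]
```

Run on a schedule, this is exactly the failover loop described above: unhealthy servers simply drop out of the routing pool until their checks pass again.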
Use case 3: Distribute traffic across multi-regions
Similar to how failover works, load balancers are a great way to scale your applications geographically, given they can direct incoming data flow to cloud servers across different regions. All you need to do is have your infrastructure – web servers, databases, load balancers – and private networks replicated and set up in different locations. Load balancers will distribute inbound requests to servers in the corresponding locations for optimal performance, all while carrying out scheduled health checks to achieve overall stability.
Let nothing stop you!
These are just a fraction of possible examples where load balancers can be helpful. Get creative and explore more use cases that fit your specific needs!
Setting up load balancers is pretty foolproof, but we also understand that a detailed step-by-step tutorial always comes in handy. Please click here for more details.
If you have any ideas for improving our products, or want to vote on other ideas so they get prioritized, feel free to pop by our Community platform and submit your feedback.
Shared or dedicated – two words that you may come across on multiple occasions while looking for the perfect cloud server plan for your business. From the cloud server itself and its memory to the internet connection (a.k.a. the bandwidth), it’s not uncommon to have to pick one of the two options. What is the difference between them? Why are there such price differences? In this week’s post, let’s dive in and learn all about shared and dedicated bandwidth.
What is bandwidth?
Bandwidth, predominantly measured in Mbps (megabits per second), is the maximum volume of data that can be transmitted in one second. The more bandwidth, the more data can be sent and received in a given time, essentially meaning a faster connection.
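Because Mbps counts bits while file sizes are usually quoted in bytes (8 bits each), a quick conversion shows what a given bandwidth means in practice. A small illustrative helper:

```python
def transfer_seconds(file_megabytes, bandwidth_mbps):
    """Best-case time to move a file: convert megabytes to megabits first."""
    megabits = file_megabytes * 8  # 1 byte = 8 bits
    return megabits / bandwidth_mbps

# A 100 MB file over a 100 Mbps link takes about 8 seconds at full speed.
print(transfer_seconds(100, 100))  # → 8.0
```

Real-world transfers add protocol overhead and contention on top of this, so treat the result as a theoretical floor rather than a promise.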
When a cloud server plan offers a shared bandwidth, it means several users are using the same internet connection, and everyone essentially gets a fraction of the bandwidth.
In many cases, especially when traffic is light and your business does not rely on data-heavy applications, overall server performance rarely suffers. Also, when other users are inactive, you can potentially enjoy the full bandwidth.
One big plus of a shared bandwidth plan is its price. As you are also splitting the cost among other users, it is a more economical option for businesses that need the cloud server for small to medium databases and everyday back-office operations.
In fact, you can spend the money you save on a plan with a shared but higher bandwidth for a faster connection.
Dedicated bandwidth means you have every ounce of the guaranteed bandwidth at your disposal. This means your connection is more resilient to peak traffic because it is independent of other users.
While plans with dedicated bandwidth are generally more expensive, you enjoy the benefits of solid uptime and stability.
Workloads that involve constant upload, download, or transfer of large files or amounts of data, as well as time-sensitive functions like e-commerce service can take advantage of a bandwidth solely devoted to you. It is also a good option for businesses that have a sizable workforce sharing the same internet connection.
Which one is your right choice?
At the end of the day, the choice eventually comes down to your needs and budget. It is best to assess your own circumstances before jumping to a decision.
Yes, it’s easier said than done. That’s why LayerStack goes one step further and provides you with a free consultation on which setup is best for you. Simply reach out to our solution specialists upon signing up.
It might be obvious, but server processors pull a lot of weight when it comes to compute-intensive tasks in the cloud. AI, machine learning, data analytics – you name it.
For this exact reason, LayerStack is equipping our infrastructure with the latest AMD 3rd Generation EPYC CPU – following its debut earlier in March – and offering the performance standards that our users expect, while retaining our core services, global availability and competitive pricing that you know and love.
Our upcoming AMD-based offerings feature EPYC™ 7003 Series server CPUs, the world’s highest-performing server processors by the leading semiconductor manufacturer. Courtesy of their remarkable memory and I/O capacity, the redesigned processing cores take application performance to the next level and help you drive business outcomes.
What is special about the AMD 3rd Gen EPYC processor?
A solid upgrade with impressive benchmarks
AMD has been producing top-quality products since it introduced the first EPYC chip back in 2017, with the first two generations of EPYC earning the company massive shares in the high-performance computing market. Bound to be a cornerstone in clouds, datacenters and supercomputers, AMD’s 3rd Gen EPYC x86 processors bring substantial leaps in performance and tackle workload-intensive applications across the board with more speed and economy.
Highly performant architecture
The new 7003 Series is built on AMD’s Zen 3 core architecture, and promises a 19% boost in instructions per clock (IPC) and a doubled L3 cache. Coupled with Infinity Fabric™ technology, the upgraded core delivers a two-fold improvement in x86 performance, as well as twice the throughput for AI inference and INT8 performance over the previous generation. These improvements mean users will see lower latency in even the most demanding workloads.
Enhanced cloud security
Another highlight the new generation of processors brings is additional security. Known as AMD Infinity Guard, this robust set of end-to-end security features creates an isolated execution environment and prepares the Zen architecture to defend against malicious hypervisor-based attacks.
To celebrate the inclusion of AMD’s EPYC™ 7003 Series in our lineup, LayerStack is bringing you promotional offers on selected plans so you can enjoy the new-generation processor and what it has to offer. Stay tuned for more details and don’t miss out!
Misery may love company, but the right company heals misery, too. Red Hat’s controversial move of forgoing the development of CentOS towards the end of this year has upset many. But in times of crisis like this, the Linux community always shows their ability to adapt and selflessly make change for the better.
And the good news does not stop there: a beta version of yet another replacement has joined the party. On the last day of April, Rocky Linux launched a release candidate – Rocky Linux 8.3 Release Candidate 1, a community enterprise operating system intended as a reliable substitute for CentOS with complete bug-for-bug compatibility.
The intelligent minds behind the creation are led by one of the founders of the CentOS project, Gregory Kurtzer. Similar to AlmaLinux, this new Linux distribution is developed and supported by the joint efforts of the Linux community. The final release of the operating system is yet to be announced, but the release candidate is now available and serves as a testing ground for IT professionals to dip their toes in the waters of Rocky Linux, trying out features, validating functionality and reporting issues before the official launch.
According to Kurtzer, the system is intentionally built to resemble CentOS. Everything from installation to actual operation should be instantly familiar to users of the old platform, making it as much of an easy swap as possible.
If you are also interested in this new kid on the block, visit the download page here and give it a go!
LayerStack is consistently committed to improving and providing our valued customers with superior cloud computing services. We have revamped the plans and offers on LayerPanel (LayerStack’s new generation Control Panel) and introduced a new OS; the highlights are listed below:
China Direct CN2 Route: We have expanded our CN2 GIA network in Asia Pacific. It is now available in the Hong Kong, Singapore and Tokyo regions.
Updates to Standard Cloud Server plans: R008-HK has been removed, while the new R001-HK has been added with 3-month and 12-month billing cycles. For more details, please view our price plan.
New OS & ISO template: AlmaLinux 8 has been newly added as an option in the OS and ISO templates. If you are interested in migrating from CentOS 8 to AlmaLinux seamlessly and painlessly, please click here.