New Invention for More Climate-Friendly Data Servers

Reading Time: 5 minutes

There’s no debating that a higher population means higher power consumption. In the same way, greater demands on data servers from so many more people using them, even indirectly, are unavoidable too, and the amount of power data centers already use is not good for the environment. Denmark is nowhere near being one of the most populated countries in Europe, yet even there a single large data center can consume the equivalent of four percent of the country's total electricity consumption.

That’s according to the Danish Council on Climate Change, and once you consider what that means you can imagine what the numbers look like for much more heavily populated countries around the world. The growth of IoT and SaaS applications is only going to increase this consumption, and in the big picture that’s not good. Here at 4GoodHosting, the nature of what we do as a quality Canadian web hosting provider means we can absolutely relate to anything connected to operating large-scale data centers.

Plus, we prefer good news to news that is not as good, and that’s why this particular piece of news out of Denmark made the cut for our blog entry this week. Let’s get into it, and it might make us all feel a little more at ease about our own small but significant contributions to power usage.

A+ Algorithm

What’s behind all of this is a new algorithm developed by Danish researchers that can deliver major reductions in the resource consumption of the world's computer servers. Keep in mind that computer servers are every bit as taxing on the climate as all global airline traffic combined, which is why green transitions in IT are so important.

So why exactly is this such an issue?

The world’s increasing internet usage has a very real and profound impact on the climate because of the immense amount of electricity computer servers demand. Current CO2 emissions from data centers are already high, and unfortunately they are expected to double within just a few years. Studies have indicated that global data centers consume more than 400 terawatt-hours of electricity each year, accounting for approximately 2% of the world's total greenhouse gas emissions.

Credit for developing this algorithm goes to Professor Mikkel Thorup and his team of researchers. They previously found a way to streamline computer server workflows that saved considerable energy and resources, and it was promptly incorporated by Google and Vimeo, among other tech giants.

Vimeo in particular stated that using this algorithm had cut its bandwidth usage by a factor of eight.

What the team has now done is build on that algorithm, and in a big way. It is now capable of addressing a fundamental problem in computer systems: the way some servers become overloaded while others have capacity to spare.

Overload Stopper

The consensus is that this version of the algorithm is many times better and cuts resource usage about as far as it can be cut. What’s also hugely beneficial is that it is being made freely available to anyone who wants to use it. With worldwide internet traffic continuing to soar, the new algorithm developed by Thorup and his team addresses the problem of servers receiving more client requests than they can handle.

Look no further than how many people are streaming content through Netflix or something similar every night. When servers get swamped, systems commonly have to shift clients around so that the load is distributed evenly across servers. That is challenging to do, as a single system can contain up to a billion servers.

What usually results is congestion and server breakdowns, and all that congestion and overload of requests consumes a LOT of power and other resources. Internet traffic is projected to triple between 2017 and 2022, and as it continues to increase the problem will keep growing, but the algorithm offers a scalable solution.
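To picture the kind of system involved, it helps to look at the classic consistent-hashing setup many load balancers and caches use: servers and clients are hashed onto a ring, and each client is handled by the next server clockwise from its position. The sketch below is our own illustration of that general idea, not the researchers' code, and the names in it (ring_hash, HashRing, server_for) are hypothetical.

```python
import hashlib
from bisect import bisect_right

def ring_hash(key: str) -> int:
    """Map a string key to a position on a fixed-size hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class HashRing:
    """A bare-bones consistent-hash ring: clients go to the next server clockwise."""

    def __init__(self, servers):
        # Each server occupies one point on the ring (real systems use many virtual points).
        self.points = sorted((ring_hash(s), s) for s in servers)
        self.keys = [p for p, _ in self.points]

    def server_for(self, client: str) -> str:
        """Return the server whose ring position follows the client's hash."""
        i = bisect_right(self.keys, ring_hash(client)) % len(self.points)
        return self.points[i][1]

# With plain consistent hashing, nothing stops one server from attracting
# far more clients than another -- exactly the imbalance this research targets.
ring = HashRing([f"server-{n}" for n in range(10)])
loads = {}
for c in range(10_000):
    s = ring.server_for(f"client-{c}")
    loads[s] = loads.get(s, 0) + 1
print(sorted(loads.values()))  # typically shows a wide spread between servers
```

Run something like this and you will usually see a handful of servers carrying far more clients than the rest, which is the overload problem the new algorithm is built to stop.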

Minimal Movement

Up to this point these types of fixes have always involved a lot of steps, but this new algorithm isn’t like that, and that’s another reason it’s being heralded the way it is. It ensures that clients are distributed as evenly as possible among servers by moving them around as little as possible, and by making content retrieval as local as possible.

Keeping the distribution balanced so that no server is more than 10% more heavily burdened than the others used to mean that previous fixes might move a client 100 times or more when dealing with an update. Now the same thing can be done in about 10 moves or less, even when the number of clients and servers in the system reaches a billion.
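We haven't seen the researchers' implementation, but the general idea behind this line of research, known as consistent hashing with bounded loads, can be sketched: cap every server at roughly 10% above the average load, and when a client hashes to a full server, walk forward on the ring to the next server with spare room. The standalone sketch below is our own toy version under those assumptions; the function names and the capacity formula are ours, for illustration only.

```python
import hashlib
import math
from bisect import bisect_right

def ring_hash(key: str) -> int:
    """Map a string key to a position on a fixed-size hash ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

def assign_with_cap(servers, clients, slack=0.1):
    """Place clients on a consistent-hash ring, capping each server
    at roughly (1 + slack) times the average load."""
    points = sorted((ring_hash(s), s) for s in servers)
    keys = [p for p, _ in points]
    cap = math.ceil((1 + slack) * len(clients) / len(servers))  # the 10% slack from the article
    assignment, loads = {}, {s: 0 for s in servers}
    for c in clients:
        i = bisect_right(keys, ring_hash(c)) % len(points)
        # Walk clockwise past full servers until one has spare capacity.
        while loads[points[i][1]] >= cap:
            i = (i + 1) % len(points)
        server = points[i][1]
        assignment[c] = server
        loads[server] += 1
    return assignment

clients = [f"client-{c}" for c in range(10_000)]
before = assign_with_cap([f"server-{n}" for n in range(10)], clients)
after = assign_with_cap([f"server-{n}" for n in range(11)], clients)  # one server added
moved = sum(before[c] != after[c] for c in clients)
print(f"{moved} of {len(clients)} clients changed servers")
```

The point of the toy example is the last two lines: even after a server is added, the vast majority of clients stay exactly where they were, while no server ever exceeds the 10% cap.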
