Containerisation, or Not: How to Choose for Your Hosting

Reading Time: 5 minutes

There’s an expression that goes ‘nothing stays simple for long’, and gosh darn if that isn’t true for nearly everything in the world of economics and commerce. As is nearly always the case, things naturally develop interconnections and interdependencies, and what was once fairly basic eventually becomes at least somewhat complicated.

So it is with server hosting too: what used to be just fine for a particular company or organization’s website may no longer be the obvious choice. Here at 4GoodHosting, we’re like any other Canadian web hosting provider at the forefront of the industry in that this is one of the more front and centre issues for us, as it pertains to providing our customers with the type of web hosting service that actually suits them best.

All of this leads to what we’ll discuss here today, and the term is ‘containerisation.’ The term itself means to break up a mass of objects or material and separate it into a number of containers. What exactly those containers are could be any of thousands of different possibilities, but all of them will have some type of exterior on at least 3 of 4 sides to create a barrier that ‘contains’ the ‘contents’ exactly as desired.

Decisions, Decisions

As far as servers go, it was in fact a simple choice once upon a time – dedicated, or shared. That’s often still the basic decision, but for ever greater numbers of customers there are additional considerations about what’s going to accommodate their website best.

So what is containerisation, and would it be a good fit for you? Is there a specific type of server hosting required to run containers? And how about the term ‘serverless’ – what needs to be known there?

Let’s have a look at all of this today.

Functionalities and Options

If there are no other external considerations, an application would be run via a web hosting package or dedicated server with an operating system and a complete software stack. But now, there are other options.

Operating-system-level virtualisation is the far-too-long and awkward term that containerisation replaced. Containerisation uses a platform like Docker to run isolated instances, which make up the containers. But what exactly is a container, then?

It’s a package of software that includes everything needed to run a specific application, allowing it to operate like a separate server environment. Because they share a single OS kernel, multiple containers can run on one server or virtual machine (VM) without interfering with each other. Most users think of a container as its own unique environment, irrespective of the host infrastructure.
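To make that concrete, here’s a minimal sketch of what such a package might look like. The Dockerfile below assumes a hypothetical Python web app with an app.py and a requirements.txt; the point is simply that the base image, dependencies, code and start command all travel together inside the container image.

```dockerfile
# Hypothetical example: everything a small Python web app needs
# is declared in one place and baked into the image.
FROM python:3.12-slim

WORKDIR /app

# Install the app's dependencies inside the image, independent of
# whatever happens to be installed on the host server.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how the container starts.
COPY . .
CMD ["python", "app.py"]
```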

Containers are able to perform tasks that would otherwise require a whole server or VM, and they also have the benefit of consuming far fewer resources. Being lightweight and agile allows them to be deployed, shut down and restarted at a moment’s notice, and they can also be transferred easily across hardware and environments.

Containers are also standalone packages, and that means they behave reliably and consistently regardless of the local configuration.
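As a rough illustration (the image name and ports here are made up), the same image built once can be run, stopped and restarted with a handful of commands, whether on a laptop, a shared host or a dedicated server:

```bash
# Build the image once, then run it anywhere Docker is available.
docker build -t myapp:1.0 .
docker run -d --name myapp -p 8080:8000 myapp:1.0

# Containers stop and restart in seconds, which is part of what
# makes them so easy to move between environments.
docker stop myapp
docker start myapp
```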

Orchestrator Needs

It’s safe to say Kubernetes is the most popular choice of container orchestrator. There are several out there, but Kubernetes gets the highest marks for anyone running large numbers of containers in a production environment, because it automates the deployment, scheduling and management of containerised applications. Automatically scaling containers across multiple nodes (servers or VMs) to meet current demand and performing rollouts seamlessly is a huge plus, and it also provides a degree of self-healing for containerised applications – if a node fails, Kubernetes restarts, replaces or reschedules its containers as needed.
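A hedged sketch of what that looks like in practice: the hypothetical Deployment below asks Kubernetes for three replicas of a container image, and Kubernetes takes care of scheduling them across nodes and replacing any that fail.

```yaml
# Hypothetical Kubernetes Deployment: declare the desired state and
# let the orchestrator keep reality matching it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3          # run three copies across the cluster's nodes
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0
          ports:
            - containerPort: 8000
```

If a node goes down, the containers it was running are rescheduled onto healthy nodes so the declared replica count is maintained.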

Working Considerations

Traditional web hosting solutions let you choose whether to run your containers in a shared environment, which is usually the best value for the money if you have relatively small workloads that don’t need the resources of a whole cluster of nodes (VMs or servers). Those with larger workloads or regulatory obligations to meet may find that a dedicated server environment – perhaps with their own cluster – is required.

Serverless computing goes a step further: the orchestrator automatically stops, starts and scales the container on whichever infrastructure is best placed to handle demand at that moment. The benefit of this is that the developer has even less to be concerned about; code runs automatically, and there’s no need to manually configure the infrastructure. Costs are kept down because all instances of a container automatically shut down when demand for it trickles off or ends entirely.
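One way this plays out with containers is a platform such as Knative running on top of Kubernetes, which creates container instances as traffic arrives and scales them back down to zero when it stops. The sketch below is only illustrative and reuses the hypothetical myapp image from earlier.

```yaml
# Hypothetical Knative Service: instances are started on demand and
# scaled to zero when there is no traffic, so you only pay for use.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp
spec:
  template:
    spec:
      containers:
        - image: myapp:1.0
          ports:
            - containerPort: 8000
```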

Another term often used when discussing containers is microservices. A traditional application is built as one big block, with a single file system, shared databases and one common language across its various functions. A microservices application, by contrast, reveals itself behind the scenes, where those functions are broken down into individual components.

Examples could be a product service, a payment service, or a customer review service. Containerisation technologies like Docker and Kubernetes provide the platforms and management tools for implementing them, allowing microservices to stay lightweight and run anywhere they’re needed. Microservices can technically be built on traditional server hosting, but in practice a container platform like Docker and an orchestration tool like Kubernetes quickly become integral parts of making the whole operation work as intended, as sketched below.
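As a simple sketch of the idea (service names and images are hypothetical), a Docker Compose file can describe each function as its own small container, each one independently deployable and replaceable:

```yaml
# Hypothetical docker-compose.yml: each business function is its own
# container, talking to the others over the network instead of
# sharing one codebase.
services:
  product-service:
    image: shop/product-service:1.0
    ports:
      - "8001:8000"
  payment-service:
    image: shop/payment-service:1.0
    ports:
      - "8002:8000"
  review-service:
    image: shop/review-service:1.0
    ports:
      - "8003:8000"
```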
