5G Download Speed in Canada Is 2nd Best in World

Reading Time: 5 minutes

We’ve all heard so much fanfare, excitement, and anticipation about the arrival of 5G network technology and what we can expect once it’s the new normal. There’s been some trepidation about it too, most notably around who we’ll allow to build the 5G network in Canada. We’re going to steer well clear of that topic of discussion, but what we will do is have a look at a recent survey that found that 5G downloads in Canada are fairly darn speedy in comparison to elsewhere.

Here at 4GoodHosting, like any good Canadian web hosting provider, we have a fairly inherent understanding of all the potential that’s going to come with 5G and how pleasing it’s going to be to enjoy open-throttle operating speeds. However, the aspect that’s likely the most promising is probably the one people are least enthusiastic about – the IoT (Internet of Things, for anyone not familiar with the acronym).

So back to the topic, what went into this determination, and what does all this suggest in the big picture once the rollout of 5G Networks is complete?

All About the Signal

We’ve been told all sorts of things about what 5G wireless technology may become, but not what it is exactly. Unless you’re a mega tech-savvy person, there might be a need to start from the start. 5G networks are the next generation of mobile internet connectivity, and they promise connections that are much, much faster and more reliable than what was offered with previous mobile technology.

You may not be familiar with what a full 1 gigabit-per-second download speed entails, but trust us when we say it’s fast – a LOT faster than what most of us have enjoyed as a standard on 4G. And the good news is that 1Gbps (or darn close to it) speeds are set to become the new standard.

Provided, that is, that you’re running on a good strong signal.

What a 5G network is able to offer will depend in large part on what signal your 5G is running on, and there are three categories of signal band: high-band, mid-band, and low-band. And before you jump to conclusions about low-band signals, you might want to know that they’re better at penetrating walls, which makes them a better choice for condos, basement suites and the like.

Considering how many Canadians in major metro areas live in these types of homes, that’s going to be a good thing. We can imagine sales of Wi-Fi extenders – bought by people who get home to find the things do little if anything – are going to go down considerably.

Mid-band is ideal for connectivity in the city, but not in the country. High-band is impressively fast, but it can be unreliable, especially when you’re indoors and have other local factors affecting the signal.

And even while 5G technology is being trumpeted in the most favourable of lights pretty much everywhere, the technology does have its detractors. An article in Scientific American last year highlighted how more than 240 scientists signed the International EMF Scientist Appeal, expressing their concern about nonionizing radiation attributable to 5G.

5G will use millimeter waves in addition to the microwaves that have been in use for older cellular technologies, from 2G all the way through the current 4G. The issue is that 5G will require cell antennas every 100 to 200 metres or so, and that’s going to ramp up exposure in a big way. 5G also employs new technologies which pose unique challenges for measuring that exposure.

The best known of these are active antennas capable of beam-forming phased arrays, and massive multiple-input, multiple-output systems, or MIMO as they’re called.

That’s a very legit concern, but the old expression ‘you can’t stop progress’ probably applies here. The potential for good (at least as determined by what people want) outweighs the potential for bad – at least in the court of public opinion.

Pretty Darn Speedy

Alright, enough of the related and background information. People who read the title almost certainly want to know more about Canada coming in second for 5G network speeds.

It’s true, as a company that tests the performance of mobile networks recently analyzed users’ real-world 5G experiences in 10+ different countries to determine who’s enjoying the best 5G network speeds.

The evaluation took in users’ average 5G and 4G download speeds measured across various mobile operators, while also weighing the time spent connected to each generation of wireless technology.

So we’ve already established that Canada has the second-fastest 5G network speeds on the planet, but by this point you’re probably wondering when we’re going to say who got top spot.

Any guesses?

KSA #1

We’re going to go ahead and imagine none of you envisioned the correct answer being Saudi Arabia here, but it’s true. Right there smack dab in the middle of the Middle East they were enjoying 144.5Mbps (megabits per second). Even if that figure means little to you, trust us when we say that’s pretty much screaming fast.

And while Canada came second, the truth is that we came in a distant second. Canada’s average was 90.4Mbps – a gap of nearly 55Mbps, which pretty much qualifies it as a distant second.

Now we DO imagine that a lot of you would have guessed South Korea, based on the fact it’s regarded as the most wired country in the world AND it has the highest adoption rate for 5G networks so far. It did make the top 5, but what’s also surprising is that the country that came in with the worst score (32.6Mbps) wasn’t a developing country or anything of the like.

It was the UK!

However, the study did find that if they were only examining 5G speeds rather than both 5G and 4G, South Korea moved ahead into second place at 312.7 Mbps and the Saudis retained the top spot with 414.2 Mbps. We Canadians slid back to 5th spot at 178.1 Mbps, trailing Australia (215.7 Mbps) and Taiwan (210.2 Mbps).

And to continue with our trend of surprises here, it was actually the USA that came dead last when looking at 5G speeds exclusively. 50.9 Mbps.

Keep in mind, though, that these less-than-impressive 5G download speeds in the U.S. are down to a combination of factors: only a limited amount of new mid-band 5G spectrum is available, and carriers continue to lean on low-band spectrum, which has excellent availability and reach but lower average speeds than the 3.5GHz mid-band spectrum used as the main 5G band in nearly every country outside the U.S.

6 Best Practices for Quickly Scaling Apps to Meet Demand

Reading Time: 6 minutes

It’s been said that you can never predict what the future holds, and in the same way you can’t predict what the future will hold for you – or, for that matter, what it will demand of you. There’s likely no better sphere for this reality to be put on display than the digital one, and one corner of that world where it’s ever so true is with applications.

Some apps make quite a splash upon introduction, while others sink like a stone. The majority of them fall somewhere in between, but for those that splash it’s not uncommon to see the demand for your app exceed even your expectations. That’s the rosy part of it all, and the app developer will be just basking in their success and relishing the demand.

When the demand is more related to what the app’s users – particularly if they’re paying ones – expect of it, however, that’s when the receptiveness is often mixed with some degree of ‘how am I going to accommodate these demands?’

Now here at 4GoodHosting we’re well established as a quality Canadian web hosting provider, but we’re the furthest thing from web app developers, and that’s probably a good thing considering our expertise is in what we do and a developer’s expertise is in what they do. That’s as it should be, but one thing we do know is that everyone is going to be all ears when it comes to learning what they can do to be better at what they do.

Which leads us to today’s topic – what are the best ways to scale apps rapidly when a developer simply doesn’t have the time they’d like to accommodate demand for expanded capabilities right now?

We’ll preface here by saying we’ve taken this entirely from SMEs (subject matter experts, if you’re not familiar with the acronym) who are on top of everything related to the world of app development, but we can say it checks out as legitimately good information.

Pandemic Spiking Demand

The COVID-19 pandemic continues on, and many companies in e-commerce, logistics, online learning, food delivery, online business collaboration, and other sectors are seeing big-time spikes in demand for their products and services. Many of these companies are also seeing evolving usage patterns, with shelter-in-place and lockdown orders creating surges in business and specifically in demand for their products.

These surges have pushed many an application to its limits, often resulting in frustrating outages and delays for customers. So how do you best and most effectively accommodate application loads?

What’s needed is the best, quickest, and most cost-effective way to increase the performance and scalability of applications so as to offer a satisfactory customer experience, without taking on excessive costs in doing so.

Here are 6 of the best ways to do that:

Tip 1: Understand the full challenge

Addressing only part of the problem is almost certainly not going to be sufficient to remedy these new shortcomings. Be sure to consider all of the following.

  • Technical issues – Application performance under load (and how end users experience it) is determined by the interplay between latency and concurrency. Latency is the time required for a specific operation, or more simply how long it takes for a website to respond to a user request.
  • Concurrency – The number of simultaneous requests a system can handle is its concurrency. When concurrency doesn’t scale, a significant increase in demand causes an increase in latency, because the system can’t respond to all requests as quickly as they are received. The usual outcome is a poor customer experience, as response times climb exponentially and reflect badly on your app. So while ensuring low latency for a single request may be essential, it may not solve the challenge created by surging concurrency on its own – you need to be aware of this and make the right moves to counter it (see the sketch just below).
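
To make that interplay a bit more concrete, here’s a minimal, self-contained Java sketch (our own illustration, not taken from any particular vendor’s tooling). A semaphore stands in for a backend that can only serve a fixed number of requests at once; each request is quick once it’s actually being served, but as the number of simultaneous requests pushed at the backend climbs past its capacity, the worst-case response time balloons anyway.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

public class LatencyUnderLoad {

    // A stand-in "backend" that can serve only CAPACITY requests at once,
    // each taking SERVICE_MS once it is actually being served.
    static final int CAPACITY = 10;
    static final long SERVICE_MS = 100;
    static final Semaphore backend = new Semaphore(CAPACITY);

    // One simulated request: wait for a free slot, hold it, return total latency in ms.
    static long handleRequest() throws InterruptedException {
        long start = System.nanoTime();
        backend.acquire();
        try {
            Thread.sleep(SERVICE_MS);
        } finally {
            backend.release();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        Callable<Long> request = LatencyUnderLoad::handleRequest;
        for (int concurrent : new int[] {10, 50, 200}) {
            ExecutorService pool = Executors.newFixedThreadPool(concurrent);
            List<Future<Long>> inFlight = new ArrayList<>();
            for (int i = 0; i < concurrent; i++) {
                inFlight.add(pool.submit(request));
            }
            long worst = 0;
            for (Future<Long> f : inFlight) {
                worst = Math.max(worst, f.get());
            }
            pool.shutdown();
            System.out.printf("%d concurrent requests -> worst-case latency ~%d ms%n",
                    concurrent, worst);
        }
    }
}
```

The numbers are arbitrary, but the shape of the result is the point: past the capacity ceiling, latency grows with the depth of the queue rather than with the work itself.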

It’s imperative that you find a way to scale the number of concurrent users while simultaneously maintaining the required response time. It’s equally true that applications must be able to scale seamlessly across hybrid environments, and often ones that span multiple cloud providers and on-premises servers.

  • Timing – Full-blown strategies take years to implement, like when you’re rebuilding an application from scratch, and they aren’t helpful for addressing immediate needs. The solution you adopt should enable you to begin scaling in weeks or months.
  • Cost – Budget restrictions are a reality for nearly every team dealing with these issues, so a strategy that minimizes upfront investment and keeps increased operational costs down is going to be immeasurably beneficial, and it’s something you need to have in place before you get into the nitty gritty of what your expanded scaling is going to involve.

Tip 2: Plan both short and long term

So even as you’re smack dab in the middle of addressing the challenge of increasing concurrency while keeping latency in check, it’s never a good idea to rush into a short-term fix that may lead to a dead end due to the haste with which it was incorporated. If a complete redesign of the application isn’t planned or feasible, then you should adopt a strategy that will enable the existing infrastructure to scale to whatever extent it’s needed.

Tip 3: Choose the right technology

The consensus on the most cost-effective way to rapidly scale up system concurrency while maintaining or even improving latency is open source in-memory computing. Apache Ignite, for example, is a distributed in-memory computing solution which is deployed on a cluster of commodity servers. It consolidates the CPUs and RAM of the cluster and distributes data and compute to the individual nodes. Whether deployed on-premises, in a public or private cloud, or in a hybrid environment, Ignite can be slotted in as an in-memory data grid (IMDG) between the existing application and data layers, requiring no major modifications to either component. Ignite also supports ANSI-99 SQL and ACID transactions.
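
As a rough illustration of how little application code is involved, here’s a minimal Java sketch of that arrangement (the cache name and key/value types are our own, purely for illustration; wiring the cache through to an existing database for read-through/write-through is configured separately via a CacheStore and is omitted here):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class IgniteCacheSketch {
    public static void main(String[] args) {
        // Starts (or joins) an Ignite node using the default configuration.
        try (Ignite ignite = Ignition.start()) {
            // A key-value cache whose entries live in the cluster's combined RAM.
            IgniteCache<Long, String> sessions = ignite.getOrCreateCache("sessions");

            // Reads and writes hit memory rather than a disk-based data store.
            sessions.put(42L, "user-42-session-data");
            System.out.println(sessions.get(42L));
        }
        // In a real deployment the cache would typically be backed by the existing
        // database via a CacheStore (read-through/write-through), omitted here.
    }
}
```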

Relevant data from the database is cached in the RAM of this new cluster when an Apache Ignite in-memory data grid is in place. It is then available for processing free of the delays caused by normal reads and writes to a disk-based data store. The Ignite IMDG uses a MapReduce approach and runs application code on the cluster nodes to execute massively parallel processing (MPP) across the cluster with minimal data movement across the network. Between the in-memory data caching, the compute being sent to the cluster nodes, and the MPP, concurrency increases dramatically and latency drops – adding up to as much as a 1,000-times increase in application performance compared to applications built on a disk-based database.
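
And here’s an equally minimal sketch of the ‘send the compute to the data’ side of that. Ignite’s affinityRun() runs a closure on whichever node owns a given key, so the lookup happens locally on that node instead of pulling the data across the network (again, the cache name and key are illustrative assumptions, not anything prescribed):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class ColocatedComputeSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> orders = ignite.getOrCreateCache("orders");
            orders.put(1001, "order-1001-payload");

            // Runs the closure on whichever cluster node owns key 1001 in the "orders"
            // cache, so the read below is a local, in-memory lookup on that node
            // instead of a fetch across the network.
            ignite.compute().affinityRun("orders", 1001, () -> {
                Ignite local = Ignition.localIgnite();
                Object value = local.cache("orders").localPeek(1001);
                System.out.println("Processed where the data lives: " + value);
            });
        }
    }
}
```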

The distributed architecture of Ignite makes it possible to increase the compute power and RAM of the cluster simply by adding new nodes; Ignite automatically detects the additional nodes and redistributes data across all nodes in the cluster. This means optimal use of the combined CPU and RAM, along with massive scalability to support rapid growth.

We only have so much space to work with here, but a Digital Integration Hub (DIH) and hybrid transactional/analytical processing (HTAP) get honourable mentions as other really smart choices for scaling up apps. Look into them too.

Tip 4: Consider open source stacks

You need to identify which other proven open-source solutions make the grade for allowing you to create a cost-effective, rapidly scalable infrastructure, and here are 3 of the best:

Apache Kafka or Apache Flink for building real-time data pipelines that deliver data from streaming sources, such as stock quotes or IoT devices, into the Apache Ignite in-memory data grid (a rough sketch of such a pipeline follows below).

Kubernetes for automating the deployment and management of applications previously containerized in Docker or other container solutions. Putting applications in containers and automating the management of them is becoming the norm for successfully building real-time, end-to-end business processes in our new distributed, hybrid, multi-cloud world.

Apache Spark for taking large amounts of data and processing and analyzing it efficiently. Spark can take advantage of the Ignite in-memory computing platform to more effectively train machine learning models using the huge amounts of data being ingested via a Kafka or Flink streaming pipeline.
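
To tie the first of those back to Tip 3, here’s a rough Java sketch of what a Kafka-to-Ignite ingestion loop can look like. The topic, cache name, and broker address are assumptions on our part, and a production pipeline would more likely lean on Kafka Connect or Ignite’s own Kafka streamer module, but the shape of the thing is the same:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class QuotesPipelineSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "quotes-loader");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (Ignite ignite = Ignition.start();
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {

            ignite.getOrCreateCache("quotes");               // make sure the target cache exists
            consumer.subscribe(Collections.singletonList("stock-quotes"));

            // IgniteDataStreamer batches writes into the grid for high-throughput ingestion.
            try (IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("quotes")) {
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        streamer.addData(record.key(), record.value());
                    }
                }
            }
        }
    }
}
```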

Tip 5: Build, Deploy, and Maintain Correctly

The need to deploy these solutions in an accelerated timeframe is clear, and serious consequences for delays are usually a standard part of the scenario too. For both reasons it is necessary to make a realistic assessment of the in-house resources and expertise available for the project. If you and your team are lacking in either regard, you shouldn’t hesitate to consult with third-party experts. You can easily obtain support for all these open source solutions on a contract basis, making it possible to gain the required expertise without the cost and time required to expand your in-house team.

Tip 6: Keep Learning More

There are plenty of online resources available to help you get up to speed on the potential of these technologies and garner strategies that fit your organization and what is being demanded of you. Start by establishing whether your goal is to ensure an optimal customer experience in the face of surging business activity, or to start planning for growth in a (hopefully) coming economic recovery. Then determine whether either aim could be served by an open source infrastructure stack powered by in-memory computing as your cost-effective path to combining unprecedented speed with scalability that isn’t hemmed in by constraints and can be rolled out without taxing you and your people too much.

COVID Lockdowns Putting Strain on Broadband Infrastructure Around the Globe

Reading Time: 5 minutes

Safe to say there won’t be anyone who’s even slightly enamoured with all the different fallouts from the global pandemic, and if your discontent is particularly strong then you had best buckle down, as projections of the 2nd wave arriving imminently are looking to be pretty darn accurate (at least here in Vancouver, where the general disregard for protocols is pretty much absolute in public). One wrinkle in all of this – albeit a pretty big wrinkle – is that we’re leaning on the World Wide Web more heavily than ever before, it seems.

This was especially true in the early spring, when the stay-at-home messaging was still being well received and people were either working online or keeping themselves entertained indoors. Since then the nature of demand has shifted, but we’re not sufficiently in the know to say exactly how it’s all worked. The long and short of it is that collectively we’re putting demand strains on broadband infrastructure like never before, and in a lot of ways it’s buckling under the weight of those demands.

We’re like any quality Canadian web hosting provider here at 4GoodHosting in that this is likely more readily apparent to us than to most. We know from extensive second-hand experience how much people get up in arms over the struggles that come with a lack of bandwidth, and the nature of what we do (and know accordingly) makes us all too aware of how big a problem this has the potential to become – particularly with 5G network use set to become ubiquitous around the globe.

All this said, let’s use today’s blog to have a more detailed look at this ‘constriction’ and the significance of it.

Only So Much Width to the Tube

Not the most natural of analogies for this phenomenon, but bear with us. A map was recently created in Australia, and while we’re not able to show it here due to copyright restrictions, it’s quite telling. It’s been referred to as a ‘global internet pressure’ map, and what it does is show the extent to which the coronavirus pandemic is putting constrictions on internet services around the world.

Now as you might guess, the #1 causes of such bandwidth-intense activity are high definition (HD) video streaming and online gaming, and it’s true these are among the leading contributors to the congestion. No matter how you might wish it were otherwise, more and more people either working from home or lounging at home means much more in the way of big bandwidth appetites.

So here’s where we get our tube analogy from. The way this works isn’t functionally much different from a very large group of overweight children trying to make their way through a crowded subway tunnel. The streaming video or video upload during teleconferencing is made up of packets of information that can be far from small depending on what’s contained within them. When too many of these packets are trying to make their way down copper and fiber-optic cables across vast distances, it’s inevitable that some aren’t going to arrive when they’re expected to.

Internet Use Through Lockdowns

Researchers have been looking at how each nation’s internet was performing from the time people started to stay at home and use it for both work and home-based entertainment through until now. Also tracked were changes in internet latency that emerged between March 12 and 13, which coincided with several countries – including France, Spain and Italy – beginning enforcement of government-imposed lockdowns aimed at stopping the spread of the coronavirus.

A point was made to differentiate between the first days of the lockdown period and the baseline period in early February, and then to find a median starting point for genuine internet pressure – where marked latency or speed issues started to affect millions of internet users across certain regions. They then looked at those as a collective whole, but that information is best left to readers who have a look at the map for themselves.

The long and short of it is this – current Internet bandwidth infrastructure is sufficient only at the very best of times, and even without a global pandemic we’re very likely nearing the end of the realistic and practical working life of the existing infrastructure as it is. Without major investments in upgrades all the ‘progress’ we’ve prided ourselves in being able to offer one another is about to hit some serious snags.

3 – 7% – Much Bigger Numbers in Reality

The values for increased usage may seem relatively small – like the 3 to 7 percent that is fairly standard for many of the regions indicated – but it’s actually quite a jump, far from normal, and a difference that indicates many users are quite likely experiencing bandwidth congestion.

What has been seen in this is that the highest levels of pressure on internet networks are in countries like Italy, Spain, Sweden, Iran and Malaysia. That’s not to suggest residents in other countries aren’t experiencing the same difficulties, it’s just that they’re not on the leaderboard yet.

Now, yes there’s been all sorts of jokes about fully grown men spending long stretches of days playing online games. As funny and somewhat pathetically accurate as the truth of that might be, it’s not just men playing a whole lot of online games and eating up plenty of bandwidth while they slay dragons or whatever it is they do.

However, it turns out that entertainment streaming is a whole lot more gluttonously consumptive when it comes to available bandwidth. Verizon reporting a 75 percent increase in gaming traffic during peak hours is just one of many stats and observed behaviours speaking to how much more is being asked of networks right now.

There’s More To It

It might then seem a legitimate default conclusion that gaming is the primary source of the increase in internet use. However, that’s not entirely true. The overall bandwidth used by the medium pales in comparison to that of others: a study comparing how much bandwidth gaming consumed versus online video streaming services found that gamers consumed an average of only 300 megabytes per hour.

In comparison, HD content streamers consumed 3,000 megabytes per hour, and that jumped up to 7,000 megabytes per hour for 4K video. While it’s true streaming companies are trying to limit bandwidth use, there’s really only so much that can be done in that regard – and who’s going to give up Netflix n’ chill, right?
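
For a quick back-of-the-envelope sense of that gap, here’s the arithmetic using those same figures (the three-hours-a-day household at the end is our own assumption, purely for illustration):

```java
public class BandwidthComparison {
    public static void main(String[] args) {
        double gamingMegabytesPerHour = 300;      // online gaming, per the study cited above
        double hdMegabytesPerHour = 3_000;        // HD video streaming
        double fourKMegabytesPerHour = 7_000;     // 4K video streaming

        System.out.printf("1 hour of HD streaming ~= %.0f hours of gaming%n",
                hdMegabytesPerHour / gamingMegabytesPerHour);
        System.out.printf("1 hour of 4K streaming ~= %.1f hours of gaming%n",
                fourKMegabytesPerHour / gamingMegabytesPerHour);

        // A household streaming 3 hours of 4K a day (our own assumption) for a month:
        double monthlyGb = 3 * fourKMegabytesPerHour * 30 / 1_000;
        System.out.printf("3 h/day of 4K for 30 days ~= %.0f GB%n", monthlyGb);   // ~630 GB
    }
}
```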

There are some helpful efforts being made though. A number of video streaming companies are now implementing measures to decrease their bandwidth use. Streaming giant Netflix recently stated that they would work to reduce traffic on networks by around 25%.

Baby steps, but progress needs to start somewhere if collectively we’re going to have the infrastructure in place to handle our ever-growing insatiable thirst for Internet-based whatever-it-is we can’t go without at any given time.

If you’d like to see this map, you can click here.

Steering Clear of New-Found Risks Related to Adware

Reading Time: 4 minutes

There are a lot of people who’ve decried the way the Internet has gone from being a genuine open-source information and content resource to, by and large, a massive, seething marketing tool. That’s a fair enough complaint, but if you’re making it you had better be an average joe and not someone with business (or income) interests in it serving that purpose. Truth is it was going to happen one way or another anyway, and so we’re all well advised to get used to it.

The explosion of advertising that has come with that, assaulting your screen at every opportunity, is something you need to tolerate, but with it has come a concurrent explosion of adware that can do all sorts of nefarious things to your web-browsing devices without you being aware of it. These web security threats tend not to rest on their laurels, and these days we’re seeing the threats related to adware morph into different and sneakier forms.

Now of course here at 4GoodHosting, being a premier Canadian web hosting provider puts us fairly front and centre for watching all of this transpire, in the same way it would for anyone working in the industry. All of this does beg the question of what exactly the interests are of those who put time and effort into building and deploying this adware, but that’s a whole different discussion.

Anyways, back to the topic at hand – there are both emerging and expanding problems related to adware these days, and keeping yourself insulated from them requires more than it used to. So let’s have a look at all of that today and hopefully put you more in the know about what you need to do to stay safe.

Adware – What is It Exactly?

Adware is lurking-in-the-background software that works to display ads on computers and mobile devices. At times it’s referred to as an adware virus or a potentially unwanted program (PUP), and nearly all the time it’s installed without the user okaying any such addition. Adware is quite the troublemaker – it interferes with browsing experiences, displays excessive amounts of unwelcome pop-ups, banners, text link ads, and even sometimes auto-plays video commercials that have absolutely no business being wherever it is you are on the Web.

And to what aim, you ask? Well, the goal of any adware is income generation for its creator by displaying all those excessive ads. With that basic understanding in place, we can now look at different types of adware. There are two main types, and they’re differentiated based on their ‘penetration’ method:

With Type 1, the adware infiltrates the device by means of freeware, while Type 2 breaks in via exposure to infected websites. The reason this sketchy behaviour occurs is that developers want to fund the development and distribution of these free programs, so they monetize them by adding ‘additional’ programs to the installation files. The type of adware that comes hidden in free software usually isn’t the malicious type, but it sure can be annoying.

Not the Same as Spyware

Adware should not be confused with spyware. For starters, spyware is a separate program, but it too gets downloaded without the user’s knowledge. Spyware tracks a user’s browsing actions on the Internet to display targeted advertisements, and with this comes the collection of all sorts of information about the users exposed to it.

Infected-website adware is often associated with web browser hijacking, where users visit an infected website loaded with malicious scripts that promote unauthorized adware installation. Once infected, users browsing those sites are actively shown ads on an ongoing basis. They might think this is just ‘the way it is’, but in reality the ads are being shown as a result of the adware that was installed on the device.

Adware Red Flags

It’s good to be in the know about signs that may indicate your web browsing device has been infected with adware. Here are some of the more common ones:

  • A web browser’s home page has been changed without the user’s permission
  • Advertisements appear where they ordinarily would not
  • Websites start redirecting you to unanticipated pages
  • Web page layouts are displayed differently each time users visit the web page
  • Web browsers are inexplicably slow and may also malfunction
  • Unwanted programs get installed automatically
  • New plugins, toolbars, or extensions appear without user consent
  • PC resource consumption (CPU usage, for example) is unsteady and jumps without any reasonable explanation

The Extent of Risk from Adware

It is true that most adware pieces are more of an annoyance than a legit danger. Annoying activities include text ads, banners, and pop-ups that appear inside the browser window while users dig for information. You may also have random pages or bookmarks open unexpectedly or see strange malfunctions occurring with the device.

But there are also more serious and legitimate threats when adware collects the user’s data. This usually involves the developer trying to sell the user’s ad profile along with their browsing history – including IP address, performed searches, and visited websites. As you can imagine, no good is going to come of that.

Preventing Adware from Infecting Devices

The best and most direct way to prevent adware is to exercise caution when visiting websites that look suspicious. Be wary of downloading free software, and download all programs only from trusted sources. While downloading freeware, the installation wizard may display small pre-checked checkboxes that indicate your agreement to the installation of additional ‘bundled’ software – uncheck these before proceeding.

Another good general suggestion is to not click any ads, alerts, or notifications when browsing the Internet. The old ‘Your PC is infected with serious viruses, and an antivirus scan is strongly recommended’ is a classic ploy here. A lot of people fall victim to this cunning deception and then install adware without any idea that’s what they’ve done.

Also ensure that your operating system and other software are regularly updated. Non-updated software is vulnerable to many types of hacker attacks, with malware exploiting its security holes.

Certain security settings available on Windows, Apple, and other devices can be enabled to protect users from inadvertently downloading adware. Configuring a web browser to block all pop-ups is a good start. It is also particularly important to carefully check each file that gets downloaded from the Internet, and the best (read: not free) antiviruses will also provide real-time web protection.

Removing Adware

Unless you’re a webmaster wizard or anything else of the sort, the recommendation here is going to be to use special antimalware solutions like Malwarebytes to get rid of Adware that’s taken up residence inside your device. And then be much more wary of these threats in the future and be smarter about what you can do to avoid exposure to them.

Getting to Know the Next Big aaS – Container As a Service

Reading Time: 4 minutes

If you’re a layperson like most and haven’t heard of Kubernetes, or container orchestration systems in the bigger picture of things, then your unawareness is excusable. If you’re a web developer and you haven’t heard of either, however, you almost certainly have been living under a rock as the expression goes. These days the digital world is doing everything on the grandest of scales, and with that size of operations comes the need to be able to consolidate and store data and everything else that makes that world tick as much as it needs to tick.

Now even the most ordinary of you will have heard of SaaS – software as a service – and it’s likely a good many of you are even taking advantage of it at this very time. One of the most common instances is how, for example, you’re paying a monthly fee to enjoy the Microsoft Office suite on your desktop or notebook rather than having had to fork over big bucks all at once for a box containing a disk and a need for you to install the software. The SaaS train continues to pick up speed, and truth be told that’s a good thing – no one needs excess packaging OR having to spend time installing anything if there’s a legitimate alternative to it.

With SaaS, there is. Here at 4GoodHosting we’re not unlike any other good Canadian web hosting provider in that we’re benefitting from all these ‘aaS’ developments too, and we’re almost certainly just seeing the tip of the iceberg when it comes to how much of what we previously ‘bought’ hard copies of is now available as a service provided through the web.

CaaS – containers as a service – is just the latest and shiniest offering in this regard. So what’s it all about, and who is it going to be relevant for? Let’s get into that here today.

Brief Background

Global enterprises are more and more eager to make use of containers, with 65 percent of organizations stating they use Docker containers, and 58 percent using the Kubernetes orchestration system in some way or another according to a recent report.

However, as appealing and mega-functional as they are, the primary challenges for new converts are a lack of resources and insufficient expertise in using containers to build and maintain applications. This is the primary reason why containers-as-a-service (CaaS) offerings are proving to be very welcome conveniences as soon as they’re made available.

Containers-as-a-Service Defined

When cloud vendors provide a hosted container orchestration engine – typically based on the super-popular Kubernetes open source project, which originated at Google – the appeal of the CaaS option that goes along with it is in the ability to deploy and run containers, manage clusters, automate scaling and failure management, and allow easier maintenance of the common infrastructure layer. Governance and security are included in this.

The entirety of networking, load balancing, monitoring, logging, authentication, security, autoscaling, and all continuous integration and delivery (CI/CD) functions are handled by the CaaS platform, making it an excellent task consolidator and handler.

CaaS allows organizations to take the benefits of their cloud infrastructure and best leverage them, while helping to avoid the vendor lock-in that commonly comes along with platform-as-a-service (PaaS). The containers are very portable across various environments, and this makes them even more versatile and multi-functional.

For most it will be helpful to know the difference between CaaS and running on classic infrastructure-as-a-service (IaaS). In large part it comes down to whether the organization has the resources and skills to implement and manage a specific container orchestration layer itself, or whether leaving that to a cloud provider would be a better choice. That will often depend on whether your container environment must span multiple clouds and/or on-prem environments. CaaS platforms that can be deployed either on-prem or in the cloud are offered by a number of vendors these days.

To summarize, the choice is between managing things at the infrastructure level and setting up the orchestrator yourself, or using a container platform that handles the underlying infrastructure and provides a pre-installed orchestrator that is ready for you to deploy and scale your containers on.

CaaS Benefits

Running containers on CaaS is very much like running your virtual machines on IaaS. Speed of deployment and ease of use are the primary benefits, along with the simplicity of the pay-as-you-go cloud model and the ability to avoid the vendor lock-in we mentioned previously.

Leaving your container infrastructure to a cloud vendor means you can get up and running without investing in your own hardware and without needing to build or run your own container orchestration system(s). In addition, by containerizing applications you’re able to migrate applications into different environments or vendor ecosystems more easily, giving greater flexibility and scalability options.

Cost efficiencies are definitely a part of the appeal too, as containers are better equipped to scale horizontally as demand dictates, making it so that organizations pay only for the cloud resources they use. Containers are also nowhere near as heavy as VMs, meaning they’re less resource-intensive, which usually means better speeds and general operating cost reductions.

Another benefit comes with consistency of instrumentation and logging, as isolating individual services in containers can allow for more effective log aggregation and centralized monitoring through the popular sidecar deployment model.

After a long string of pluses, we do have to share one minus. Migrating traditional apps to containers is a hurdle for some who are interested in making the switch. It’s common to have to break down monolithic applications into microservices when migrating to containers, and for larger, older organizations that can sometimes be too drastic a change to be expected of them all at once.