Right or wrong, we're only accommodating and understanding of something or someone as long as basic expectations are still being met. Most of you reading this blog in the first place will know what a bounce rate is, and even if we don't realize it we all have an inner clock that dictates how long we're willing to wait for a page to load.
Page loads and page speeds are different things, but both point to what's already well known in the digital world: there's only so much waiting a person can be expected to do. That reality has led to ongoing efforts to minimize loss and latency with cloud-based apps.
The success those efforts have had is what we'll talk about in our blog entry this week. Cloud-based technology has been integral to the impressive functionality of many of the newest apps, and even if you're not the savviest person about it, you're probably benefiting from it in ways you're not even aware of thanks to that mini PC or Mac masquerading as a 'phone' in your pocket.
Having so many developers catering to public cloud IaaS platforms like AWS and Azure, along with PaaS and SaaS solutions, is made possible by the simplicity of consuming the services, at least to the extent that you're able to connect securely over the public internet and start spinning up resources.
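To give a sense of how simple that consumption can be, here's a minimal sketch of spinning up a resource on AWS with Python's boto3 library. The region, AMI ID, and instance type are placeholder choices rather than recommendations, and it assumes credentials are already configured:

```python
# A minimal sketch of "spinning up a resource" on a public cloud IaaS
# platform, here an EC2 instance via boto3. Assumes AWS credentials are
# configured; the AMI ID, region, and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```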
This is the kind of thing that's always on the radar for good Canadian web hosting providers like us here at 4GoodHosting, as it's definitely within our sphere.
So let's have a look at what's known about the best ways to minimize loss and latency with cloud-based apps.
VPN And Go
The default starting point for any challenge that needs to be addressed or choice that needs to be made is to use the internet to connect to the enterprise's virtual private clouds (VPCs) or their equivalent from company data centers, branches, or other clouds, preferably over a VPN. But doing so doesn't guarantee an absence of problems for modern applications that depend on lots of network communication among different services and microservices.
Quite often the people using those applications run into performance problems, and more often than not the problems relate to latency and packet loss. That connection is logical enough, but there's more to it, specifically their magnitude and variability. Loss and latency are a bigger deal on internet links than across internal networks. Loss results in more retransmits for TCP applications, or artifacts from missing packets for UDP applications, and too much latency means slower responses to requests.
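One rough way to get a feel for both numbers from your own vantage point is to repeatedly time TCP handshakes to a service and treat failures as loss. This is a simple sketch rather than a substitute for proper network tooling, and the target host, port, and sample count are arbitrary:

```python
# Rough latency/loss probe: time repeated TCP handshakes to a host and
# count timeouts/refusals as failures. Host, port, and sample count are
# arbitrary illustrative choices.
import socket
import time

def probe(host, port=443, samples=20, timeout=2.0):
    rtts, failures = [], 0
    for _ in range(samples):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.monotonic() - start) * 1000)  # ms
        except OSError:
            failures += 1
    return rtts, failures

SAMPLES = 20
rtts, failures = probe("example.com", samples=SAMPLES)
if rtts:
    print(f"min/avg/max: {min(rtts):.1f}/{sum(rtts)/len(rtts):.1f}/{max(rtts):.1f} ms")
print(f"failed connections: {failures}/{SAMPLES}")
```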
If that's the scenario and service or microservice calls are being made across the network, this is where loss and latency are most going to hamper performance and take away from user satisfaction in a big way. Values that might be tolerable when there are only a handful of back-and-forths can become wholly intolerable when modern application architecture multiplies them into dozens or hundreds of round trips per request.
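Some back-of-the-envelope arithmetic shows why. The round-trip figures below are illustrative assumptions rather than measurements, but the compounding effect is the point:

```python
# How per-call latency compounds across sequential service-to-service
# calls. Both round-trip figures are illustrative assumptions.
rtt_internal_ms = 1.0    # plausible round trip on an internal network
rtt_internet_ms = 40.0   # plausible round trip over the internet

for calls in (3, 30, 100):
    print(f"{calls:>3} sequential calls: "
          f"internal {calls * rtt_internal_ms:6.0f} ms, "
          f"internet {calls * rtt_internet_ms:6.0f} ms")
```

Three calls at 40ms each is barely noticeable; a hundred of them is a four-second wait that no user will tolerate.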
Varying Experiences
More variability in latency (jitter) and packet loss on internet connections increases the chance that any given user gets a widely varying application experience, one that may be great, absolutely terrible, or anywhere in between. That unpredictability is as big an issue as the slow responses or glitchy video and audio some users get some of the time.
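Jitter is easy to quantify once you have round-trip samples, for instance from the probe sketch earlier. Here the sample values are made up, and standard deviation is used as a simple jitter proxy:

```python
# Quantifying jitter from round-trip samples (ms). The sample values are
# made up; standard deviation serves as a simple jitter proxy.
import statistics

rtts = [38.2, 41.0, 39.5, 112.7, 40.3, 38.9, 95.1, 40.8]

print(f"mean RTT: {statistics.mean(rtts):.1f} ms")
print(f"jitter (stdev): {statistics.stdev(rtts):.1f} ms")
```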
Three specific cloud-based resources come to the forefront as solutions to these problems: direct connection, exchanges, and cloud networking.
A dedicated connection to the cloud is the first one we'll look at. This is where the customer's private network is directly connected to the cloud provider's network, which usually involves placing a customer switch or router in a meet-me facility. A cable then connects it to the cloud service provider's network-edge infrastructure there, so packets can travel directly from the client network to the cloud network with no need to traverse the internet.
The only potential hangup is WAN latency, but as long as the latency to the meet-me facility is acceptable, performance should be comparable to an inside-to-inside connection. If there's a potential downside, it's that direct connects are expensive compared to simple internet connectivity, and they tend to come only in large-denomination bandwidths; finding something smaller than 1Gbps is unlikely.
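AWS's productized version of this model is Direct Connect, and a minimal boto3 sketch of requesting one looks like the following. The location code and connection name are placeholders, and the physical cross-connect still has to be arranged at the meet-me facility separately:

```python
# Sketch of requesting a dedicated AWS Direct Connect connection with
# boto3. Location code and connection name are placeholders; the physical
# cross-connect is still ordered through the meet-me facility.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# List colocation facilities where a cross-connect can be made.
for loc in dx.describe_locations()["locations"]:
    print(loc["locationCode"], loc["locationName"])

connection = dx.create_connection(
    location="EqDC2",             # placeholder location code
    bandwidth="1Gbps",            # dedicated connects come in large denominations
    connectionName="example-dx",  # placeholder name
)
print(connection["connectionId"])
```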
Multiple CSPs with Exchanges
Big pipes are always an advantage, and that's true in any context you can use the term. Exchange providers with big pipes to the cloud service providers (CSPs) are able to take those large physical connections and divide them into smaller virtual connections at a broad range of bandwidths, including under 100Mbps. The enterprise makes a single direct physical connection to the exchange, and can then make virtual direct connections over it to reach multiple CSPs through the exchange.
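On AWS, this arrangement maps to hosted connections sold through Direct Connect partners, which is what many exchanges are. Below is a sketch of the partner-side call that carves a small virtual connection out of the big physical pipe; every ID, the account number, and the VLAN here are placeholders:

```python
# Sketch of an exchange/partner carving a small virtual connection out of
# its big physical pipe, using AWS's hosted-connection model. All IDs,
# the account number, and the VLAN are placeholders.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

hosted = dx.allocate_hosted_connection(
    connectionId="dxcon-example123",  # the partner's large physical connection
    ownerAccount="123456789012",      # the customer's AWS account (placeholder)
    bandwidth="50Mbps",               # far below any dedicated connect's minimum
    connectionName="example-hosted",
    vlan=101,
)
print(hosted["connectionId"])
```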
The next consideration here is internet-based exchanges that maintain direct connects to CSPs but still leave customers free to connect to the exchange over the internet. The provider typically offers plenty of onboarding locations plus a wide network of points of presence at its edge, so customer traffic spends as little time as possible moving around the internet before making the important exit into the private network, which keeps the latency and loss it experiences to a minimum.