Network traffic behaves a lot like the motor vehicle traffic we most immediately associate with the term. Build it and they will come, and the concept of induced demand really does work in exactly the same way: if space is created, demand will be created to fill it. That’s not so good when it comes to trying to build enough road infrastructure to accommodate traffic, and servers are struggling in the same way, on a smaller scale, to accommodate ever-growing data traffic demands.
The advantages of cloud computing have compounded the problem, with so many more users demanding cloud storage space, and increasingly there are cloud-based apps that require bandwidth to the point that without it they won’t function properly. That’s bad news for app developers who want people using their app to not be impeded in any way. Performance of cloud-based apps that create lots of network traffic can be hurt by network loss and latency, and the best ways of dealing with that are what we’ll look at this week.
It’s a topic that will be of interest to any good Canadian web hosting provider, and that certainly applies to us here at 4GoodHosting. We have large data centers of our own too, but we wouldn’t be able to accommodate even 1/1000th of the demand created by cloud storage at all times. There are ways to minimize loss and latency for cloud-based apps and SaaS resources, so let’s get onto that.
Mass Adoption
A big part of the popularity of adopting public cloud IaaS, PaaS, and SaaS platforms has come from the simplicity of consuming the services. Connecting securely over the public internet and then accessing and utilizing resources creates strong demands on infrastructure, and there are big challenges associated with private communication between users and those resources.
Using an Internet VPN is always going to be the simplest solution if your aim is to connect to the enterprise’s virtual private clouds (VPCs) or their equivalent from company data centers, branches, or other clouds. But there are problems that can come with relying on the internet when modern applications depend heavily on extensive network communications. It is also very common for people using those applications to run into performance problems because of latency and packet loss.
It is the magnitude and variability of this latency and packet loss that are the primary concerns here, and the issue is more acute over internet links than across internal networks. Loss results in more retransmits for TCP applications, or artifacts due to missing packets for UDP applications, while latency means slower responses to requests.
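To see how loss and round-trip time interact for TCP, it can help to plug numbers into the well-known Mathis et al. approximation, which bounds steady-state TCP throughput at roughly MSS / (RTT × √loss). The figures below are illustrative, not measurements — a rough sketch of why the same loss rate hurts far more on a high-latency internet path:

```python
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Rough TCP throughput ceiling per the Mathis model:
    throughput <= MSS / (RTT * sqrt(loss))."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = mss_bytes / (rtt_s * math.sqrt(loss_rate))
    return bytes_per_s * 8 / 1e6

# Same 0.1% loss rate, two hypothetical paths:
print(tcp_throughput_mbps(1460, 80.0, 0.001))  # 80 ms internet path: a few Mbps
print(tcp_throughput_mbps(1460, 5.0, 0.001))   # 5 ms direct link: an order of magnitude more
```

The model is a simplification (it ignores slow start, window limits, and modern congestion-control variants), but it captures the core point: cutting RTT or loss raises the throughput ceiling dramatically.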
Every service or microservice call that crosses the network is an opportunity for loss and latency to hurt performance. Modern application architectures can make those calls explode in number, and when a single operation fans out into hundreds of back-and-forth requests, the accumulated delays quickly become unbearable.
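The multiplication effect is easy to put in concrete terms. A minimal sketch, with made-up but plausible numbers, showing how the same call chain that is harmless on an internal network becomes painful over an internet path:

```python
def request_time_ms(per_call_rtt_ms: float, sequential_calls: int) -> float:
    # Each sequential cross-network call pays the full round-trip time.
    return per_call_rtt_ms * sequential_calls

# 50 sequential backend calls: tolerable on a 0.5 ms internal network,
# but the same fan-out over a 40 ms internet path adds two full seconds.
internal = request_time_ms(0.5, 50)   # 25 ms
internet = request_time_ms(40.0, 50)  # 2000 ms
```

Real architectures parallelize some calls, but any sequential chain of dependent requests pays this per-hop tax in full.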
Need to Reduce Jitter
Jitter is the variability in latency, and for cloud apps that variability is closely related to packet loss on internet connections. What this does is increase the chance that any given user gets a widely varying application experience that may be great, or may be awful, at different times. That unpredictability is sometimes as big an issue for users as the slow responses or glitchy video or audio.
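Jitter can be quantified from a series of latency samples. The sketch below uses the smoothed interarrival-jitter estimator in the style of RFC 3550 (the RTP spec), where each new delay difference moves the estimate by 1/16 of the gap; the sample values are invented for illustration:

```python
def interarrival_jitter_ms(latency_samples_ms):
    """Smoothed jitter estimate in the style of RFC 3550:
    J += (|D| - J) / 16 for each successive latency difference D."""
    j = 0.0
    for prev, cur in zip(latency_samples_ms, latency_samples_ms[1:]):
        j += (abs(cur - prev) - j) / 16.0
    return j

steady = interarrival_jitter_ms([20.0] * 20)               # 0.0 -- perfectly consistent path
bursty = interarrival_jitter_ms([20, 95, 22, 140, 25] * 4) # well above zero
```

A steady 20 ms path scores zero jitter even though it has latency, which is exactly the distinction that matters: a consistent path is far easier for applications to tolerate than a wildly variable one.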
Dedicated connections to the cloud are what needs to happen, and the advantages of connecting a customer’s private network directly to the cloud provider’s network are considerable. This usually involves the customer placing switching or routing equipment in a meet-me facility where the cloud service provider also has network-edge infrastructure. The cabled connection means packets travel directly from the client network to the cloud network, with no need to traverse the internet.
Direct connects will darn near guarantee that loss and jitter don’t occur. As long as WAN latency is favorable, performance gets as close as possible to an inside-to-inside connection. The downsides are that direct connects are pricey compared to simple internet connectivity, and are typically only available in large-denomination bandwidths of 1Gbps or higher.
Exchanges for Multiple CSPs
Separating big physical connections into smaller virtual connections, at a broad range of bandwidths all under 100Mbps, is possible now and extremely effective at cutting the cost of direct cloud connectivity. It becomes possible for a single enterprise client to make one direct physical connection to the exchange, and provision virtual direct connections over it to reach multiple CSPs through the exchange. A single physical connection for multiple cloud destinations is now all that’s needed.
Most enterprises use multiple cloud providers, not just one, and most add more all the time. Many will never be 100% migrated to the cloud, even if a good portion of their workloads already are. This makes closing the gap between on-premises resources and cloud resources part of the ongoing challenge as well, but fortunately the options for addressing it have evolved and improved quite notably in recent years.