Leveraging Private Edge Computing and Networking Services for Better Scaled VPNs

It's fairly common for data storage and management needs to outgrow what you originally set up to accommodate them. What that outgrowing means varies widely from one person or organization to the next, depending on what they do and how extensive their data needs have become. For a long time now, the default suggestion for anyone in such a situation has been to move to a Virtual Private Network (VPN).

But here we are again, collectively struggling to keep up with changing needs and realities, and if we were to list every reason a VPN might not be cutting it the way it used to, that list would fill a blog entry of its own. Still, VPNs remain deeply entrenched as a critical enabling tool for today's distributed organizations and internet users. Roughly one-third of all internet users now use a VPN to protect personal data, and that's a number that gets our attention here at 4GoodHosting, just as it would for any good Canadian web hosting provider.

Then there's the fact that plenty stands ready to push this trend even further, especially with rampant cybercrime and privacy concerns likely to be front and center in the coming years. The pressure this puts on VPN providers is to find reliable ways to accommodate this surging demand quickly, efficiently, and cost-effectively. The need is even more acute in high-growth emerging markets with massive potential: Indonesia, China, Thailand, India, and the UAE, to name the most notable ones.

The most recent industry consensus is that the best way to do this is to leverage private edge computing and networking services as a means of scaling VPNs more effectively, and that's what we're going to look at in this week's blog entry.

Difficult, but Doable

Let's start with what makes this difficult. Heavy regulatory barriers, lacking infrastructure, gaps in connectivity, and high operating costs mean that reaching customers in these markets can prove challenging. Scaling a VPN service is difficult in its own right, and much of that is because until now there have only really been two approaches: scaling vertically or scaling horizontally.

When you scale up vertically, it is almost always necessary to upgrade servers by replacing them. Expensive? Absolutely, and prohibitively so for many of the organizations that would need to eat those costs. But optimal performance per server is a must, so if you're going to scale vertically, these high hardware replacement costs are pretty much unavoidable.

Scaling out horizontally presents its own set of reasons for decision makers to be dissuaded. Adding more servers to your current infrastructure to accommodate peak user loads is expensive and time consuming. Putting together a private, high-performing global network capable of spanning geographical distances can seem like a daunting task given how long it will take and how much it will likely cost, and that's without even mentioning the additional maintenance costs.

Private Edge Solution

What's needed are infrastructure providers that offer global, private edge computing and networking services, but who has the means of stepping up and making that available to those who need it? Another option exists for VPN providers that don't find cost efficiencies in scaling horizontally or vertically.

That's to work with a 3rd-party infrastructure enabler that has private, high-quality compute and networking services available at the edge of the network. The key part of being at the edge is proximity to end users in strategic markets. That takes the distance problem out of the equation, and by outsourcing network and compute operations these providers can instantly scale into global markets and serve new VPN customers.

Immediate benefits:

  • Improved performance, with more assured stability in overseas markets
  • Fewer long-distance data transmissions, resulting in faster data transfers and far fewer performance issues (latency / jitter)
  • Better security, since 3rd-party infrastructure providers can grant access to premium bare metal and virtual machines (VMs) for enhanced VPN security and safer scaling
  • Less maintenance, by avoiding constricted VPN setups with many servers spread across multiple locations
  • Lower operating costs, since outsourcing operations lets you leverage flexible pricing models and pay less for the bandwidth you need

Last but not least, aggregate bandwidth pricing makes it possible to evaluate the balance between underutilized and overutilized servers. You can then reduce bandwidth waste and make the most of your bandwidth spend.
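To make the aggregate-pricing idea concrete, here's a minimal sketch of the comparison. All server names, bandwidth commitments, and the 20% headroom figure are hypothetical, and real pricing models are far more nuanced; the point is simply that a single aggregate commitment can be sized to total usage rather than to each server's peak.

```python
# Hypothetical fleet: per-server committed bandwidth vs. average actual usage.
servers = {
    "edge-tokyo":     {"committed_mbps": 1000, "avg_used_mbps": 920},  # near its cap
    "edge-frankfurt": {"committed_mbps": 1000, "avg_used_mbps": 310},  # underutilized
    "edge-toronto":   {"committed_mbps": 1000, "avg_used_mbps": 540},
}

def per_server_waste(fleet):
    """Unused committed bandwidth when each server is billed separately."""
    return sum(s["committed_mbps"] - s["avg_used_mbps"] for s in fleet.values())

def aggregate_commit_needed(fleet, headroom=0.2):
    """One aggregate commitment sized to total usage plus a safety margin."""
    total_used = sum(s["avg_used_mbps"] for s in fleet.values())
    return total_used * (1 + headroom)

print(f"Unused per-server commitment: {per_server_waste(servers)} Mbps")
print(f"Aggregate commitment needed:  {aggregate_commit_needed(servers):.0f} Mbps")
```

With these made-up numbers, 1,230 Mbps of separately billed commitment sits idle, while an aggregate commitment of about 2,124 Mbps would cover the whole fleet with headroom to spare.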

Minimizing Loss and Latency with Cloud-Based Apps

Be it right or wrong, being accommodating and understanding of something or someone only happens if basic expectations are still being met. Most of you reading this blog will know what a bounce rate is, and even if we don't realize it, we all have an inner clock that dictates how long we'll wait for a page to load.

Page loads and page speeds are different things, but all of this just highlights what's already well known in the digital world. There's only so much waiting a person can be expected to do, and that has led to efforts to minimize loss and latency with cloud-based apps.

The success of those efforts is what we'll talk about in this week's blog entry. Cloud-based technology has been integral to the impressive functionality of many of the newest apps, and even if you're not the savviest about it, you're probably benefiting from it in ways you're not even aware of, courtesy of that mini PC or Mac masquerading as a 'phone' in your pocket.

Having so many developers catering to public cloud IaaS platforms like AWS and Azure, plus PaaS and SaaS solutions, is made possible by the simplicity of consuming the services; to a large extent, you can simply connect securely over the public internet and start spinning up resources.

This is something on the horizon for good Canadian web hosting providers like us here at 4GoodHosting, as it's definitely within our sphere.

So let's have a look at what's known about the best ways to minimize loss and latency with cloud-based apps.

VPN And Go

The default starting point for any challenge that needs to be addressed or choice that needs to be made is to use the internet to connect to the enterprise's virtual private clouds (VPCs) or their equivalent from company data centers, branches, or other clouds, preferably with a VPN. But doing so doesn't guarantee an absence of problems for modern applications that depend on lots of network communication among different services and microservices.

Quite often the people using those applications run into performance problems, and more often than not these are related to latency and packet loss. That's a logical enough connection to make, but there's more to it, specifically their magnitude and variability. Loss and latency problems will be a bigger deal over internet links than across internal networks. Loss results in more retransmits for TCP applications, or artifacts due to missing packets for UDP applications, and too much latency means slower responses to requests.

If service or microservice calls cross the network in that scenario, then this is where loss and latency are most going to hamper performance and take away from user satisfaction in a big way. Values that might be tolerable when there's only a handful of back-and-forths can become wholly intolerable when modern application architectures multiply those round trips many times over.
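A back-of-the-envelope model shows how this amplification works. This is a deliberately simplified sketch: it assumes each chained call costs one round trip and that a lost packet adds, on average, one retransmit timeout. The RTT, loss rate, and timeout figures are illustrative, not measurements.

```python
def end_to_end_latency_ms(rtt_ms, loss_rate, sequential_calls, retransmit_timeout_ms=200):
    """Expected latency when each call in a chain needs one round trip and a
    lost packet adds an average retransmit delay (simplified TCP-style model)."""
    expected_per_call = rtt_ms + loss_rate * retransmit_timeout_ms
    return expected_per_call * sequential_calls

# Internal network: ~1 ms RTT, negligible loss, 5 chained calls
internal = end_to_end_latency_ms(rtt_ms=1, loss_rate=0.0, sequential_calls=5)
# Internet link: 80 ms RTT, 1% loss, the same 5 chained calls
internet = end_to_end_latency_ms(rtt_ms=80, loss_rate=0.01, sequential_calls=5)
# A chattier microservice architecture with 50 chained calls on that link
chatty = end_to_end_latency_ms(rtt_ms=80, loss_rate=0.01, sequential_calls=50)

print(internal, internet, chatty)
```

With these assumed numbers, 5 ms internally becomes roughly 410 ms over the internet for the same five calls, and over four seconds once the call count grows to fifty, which is exactly the kind of multiplication that makes tolerable per-hop numbers intolerable in aggregate.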

Varying Experiences

Greater variability in latency (jitter) and packet loss on internet connections increases the chance that any given user gets a widely varying application experience: one that may be great, absolutely terrible, or anywhere in between. That unpredictability is as big an issue as slow responses or glitchy video or audio for some users some of the time.

Three specific cloud-based resources come to the forefront as solutions to these problems: direct connection, exchanges, and cloud networking.

A dedicated connection to the cloud is the first one we'll look at. This is where the customer's private network is directly connected to the cloud provider's network. It will usually involve placing a customer switch or router in a meet-me facility where the cloud service provider also has network-edge infrastructure. The two are then connected with a cable so packets can travel directly from the client network to the cloud network, with no need to traverse the internet.

The only potential hangup is WAN latency, but as long as latency to the meet-me facility is acceptable, performance should be comparable to an inside-to-inside connection. If there's a downside, it's that direct connects are expensive compared to simple internet connectivity, and they tend to come only in large-denomination bandwidths. Finding something smaller than 1Gbps is unlikely.

Multiple CSPs with Exchanges

Big pipes are always an advantage, and that's true in any context where you can use the term. Cloud service providers (CSPs) with big pipes can take large physical connections and separate them into smaller virtual connections at a broad range of bandwidths under 100Mbps. The enterprise user benefits by making a single direct physical connection to the exchange and then provisioning virtual direct connections over it to reach multiple CSPs through the exchange.

The next consideration is internet-based exchanges that maintain direct connects to CSPs but still leave customers free to connect to the exchange over the internet. The provider typically offers more on-ramp locations plus a wide network of points of presence at its edge. This means customer traffic spends minimal time on the public internet before making the important exit onto the private network, avoiding most latency and loss.

Artificial Intelligence Now Able to Crack Most Passwords in Under 60 Seconds

Some people have more long-term memory ability than short-term, and while that may sound good it comes with its own set of problems. Ideally you have a balance of both, and that balance is especially helpful if you're the type who has a devil of a time remembering passwords. But it's never been a good idea to create simple passwords, and it's even less of a good idea now with the news that recent rapid advances in AI mean artificial intelligence is almost certainly going to be able to figure those passwords out.

The fact that most of us use password apps on our phones attests to two things. First, how many passwords we need given the ever more digital nature of our world. And second, just how many of us don't have the memory to remember them organically. So if you're not good with memory but you've resisted putting one of these apps on your phone, you may want to now. This is a topic of interest for us here at 4GoodHosting, as like any good Canadian web hosting provider we can relate to the proliferation of passwords we all have these days.

Some of you may be familiar with RockYou, and if you are you'll know it was a hugely popular widget found on MySpace and Facebook in the early years of social media. There's a direct connection between that widget and where we're going with AI being able to hack passwords in less than a minute, so let's start there with our web hosting topic blog entry this week.

Password Mimicker

Here's part of the reason that now is a good time to update your password. Experts have found AI systems are able to crack almost all passwords easily, just one more example of how the capabilities of artificial intelligence are expanding in leaps and bounds these days. In 2009, RockYou was the victim of a big-time cyber attack, and 32 million passwords that were stored in plaintext were leaked to the dark web.

From that dataset, the researchers took 15.6 million passwords and fed them into PassGAN; leaked password sets like this one are now often used to train AI tools. The significance is that PassGAN is a password generator based on a Generative Adversarial Network (GAN), and it creates fake passwords that mimic real ones found on the web.

It has two neural networks: the first is a generator, and the second is a discriminator. The generator builds candidate passwords, which the discriminator evaluates before feedback is sent back to the generator. Both networks improve their results through this constant back-and-forth interaction.

More than Half

Passwords shorter than 4 characters are not common, and neither are ones longer than 18, so those were the minimum and maximum; anything outside that range was excluded from the research. The findings were that 51% of passwords that could be considered 'common' could be cracked in less than a minute by the AI, and 65% of them were cracked in less than an hour.

More than 80% of the passwords had been deciphered by the AI within a month; only a minority held strong longer than that. The average 7-character password was AI-broken within six minutes, and even faster if it contained a sequence like 1-2-3 or 3-2-1. Beyond that, no other combination of numbers, upper- or lower-case characters, or symbols made much difference in a password's relative strength when squared up against AI.
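The underlying arithmetic of why short passwords fall so fast is simple exhaustive search. The guess rate below is an assumption for illustration only (real AI-assisted or GPU rigs vary enormously, and tools like PassGAN guess far more cleverly than brute force), but it shows how the search space, and therefore the crack time, explodes with length and character variety.

```python
def worst_case_crack_seconds(length, charset_size, guesses_per_second):
    """Time to exhaust every possible password of this length and charset."""
    return charset_size ** length / guesses_per_second

GUESS_RATE = 1e10  # assumed guesses per second; purely illustrative

# 7 lowercase letters: 26^7 possibilities, gone in under a second
seven_lowercase = worst_case_crack_seconds(7, 26, GUESS_RATE)
# 15 characters drawn from ~94 printable characters: astronomically larger
fifteen_full = worst_case_crack_seconds(15, 94, GUESS_RATE)

print(f"7 lowercase chars: {seven_lowercase:.1f} seconds")
print(f"15 mixed chars:    {fifteen_full / (3600 * 24 * 365):.1e} years")
```

Under this assumed rate, every 7-character lowercase password can be exhausted in under a second, while the 15-character mixed space would take on the order of a trillion years, which is the gulf the researchers' 15-character recommendation is pointing at.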

Go 15 or Longer from Now On

The consensus now is that to have AI-proof passwords you should be creating ones with 15 characters or more. Researchers suggest that lower- and upper-case letters, numbers, and symbols should all be mandatory. Going with a password that is as unique as possible and updating it regularly is recommended too, particularly considering that, like everything else, AI is going to get better at this.
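That guidance is easy to turn into an automatic check. The sketch below is a minimal illustration of the rules described above (15+ characters, all four character classes), not a complete password-strength audit; real checkers also screen for dictionary words and known-breach lists.

```python
import string

def meets_policy(password):
    """Check the 15-character, four-character-class guidance described above."""
    checks = {
        "length_15_plus": len(password) >= 15,
        "has_lower": any(c in string.ascii_lowercase for c in password),
        "has_upper": any(c in string.ascii_uppercase for c in password),
        "has_digit": any(c in string.digits for c in password),
        "has_symbol": any(c in string.punctuation for c in password),
    }
    return all(checks.values()), checks

ok, detail = meets_policy("password123")
print(ok, detail)  # fails: too short, no upper-case, no symbol
```

Returning the per-rule breakdown alongside the overall verdict makes it easy to tell a user exactly which requirement their password missed.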

Minimizing Loss and Latency for Cloud-Based Apps 

Network traffic is like the motor vehicle traffic we most immediately associate with the term. Build it and they will come: the concept of induced demand works in exactly the same way. If space is created, demand will arise to fill it. That's not so good when it comes to building enough road infrastructure to accommodate traffic, and servers struggle in the same way, at a smaller scale, to accommodate growing data traffic demands.

The advantages of cloud computing have compounded the problem, with so many more users demanding cloud storage space, and increasingly there are cloud-based apps that require bandwidth to the point that without it they won't function properly. That's bad news for app developers who don't want users of their app impeded in any way. Performance of cloud-based apps that create lots of network traffic can be hurt by network loss and latency, and the best ways of dealing with that are what we'll look at this week.

It's a topic that will be of interest to any good Canadian web hosting provider, and that certainly applies to us here at 4GoodHosting. We have large data centers of our own too, but we couldn't accommodate even 1/1000th of the demand created by cloud storage at any given time. There are ways to minimize loss and latency for cloud-based apps and SaaS resources, so let's get to it.

Mass Adoption

A big part of the popularity of adopting public cloud IaaS platforms, along with PaaS and SaaS, has come from the simplicity of consuming the services. But connecting securely over the public internet and then accessing and utilizing those resources places strong demands on infrastructure, and there are big challenges associated with private communication between users and those resources.

Using an internet VPN is always going to be the simplest solution if your aim is to connect to the enterprise's virtual private clouds (VPCs) or their equivalent from company data centers, branches, or other clouds. But problems can come with relying on the internet when modern applications depend heavily on extensive network communications, and it is very common for people using those applications to run into performance problems because of latency and packet loss.

It is the magnitude and variability of this latency and packet loss that are the notable aspects here, and the issue is more acute over internet links than across internal networks. Loss results in more retransmits for TCP applications, or artifacts due to missing packets for UDP applications, while latency brings slower responses to requests.

Every service or microservice call across the network is an opportunity for loss and latency to hurt performance. A few tolerable back-and-forths can turn into hundreds of additional requests, and the delays quickly become unbearable when modern application architectures make those calls explode in number simply as a function of how the operations run.

Need to Reduce Jitter

Jitter is the variability in latency, and for cloud apps it goes hand in hand with packet loss on internet connections. Together they increase the chance that any given user gets a widely varying application experience: great at one moment, awful at another. That unpredictability is sometimes as big an issue for users as slow responses or glitchy video or audio.
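Jitter is easy to quantify from a series of latency measurements. The sketch below uses the mean absolute difference between consecutive samples, similar in spirit to the interarrival jitter estimate in RFC 3550; the two sample traces are made-up numbers chosen to contrast a stable path with a variable one.

```python
from statistics import mean

def jitter_ms(latency_samples_ms):
    """Mean absolute difference between consecutive latency samples (ms)."""
    diffs = [abs(b - a) for a, b in zip(latency_samples_ms, latency_samples_ms[1:])]
    return mean(diffs)

direct_connect = [80, 81, 79, 80, 82, 80]       # hypothetical stable path
public_internet = [60, 140, 75, 190, 65, 160]   # hypothetical variable path

print(jitter_ms(direct_connect), jitter_ms(public_internet))
```

With these illustrative traces, the stable path shows under 2 ms of jitter while the variable one shows nearly 100 ms, even though individual samples on the variable path are sometimes faster, which is exactly the unpredictability users experience as an application that is great one moment and awful the next.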

Dedicated connections to the cloud are what needs to happen, and the advantages of connecting a customer's private network to the cloud provider's network are considerable. This usually involves placing a customer switch or router in a meet-me facility where the cloud service provider also has network-edge infrastructure. The cabled connection means packets can travel directly from the client network to the cloud network, with no need to traverse the internet.

Direct connects all but guarantee that loss and jitter don't occur. As long as WAN latency is favorable, performance gets as close as possible to an inside-to-inside connection. The only downsides are that direct connects are pricey compared to simple internet connectivity, and usually only large-denomination bandwidths of 1Gbps or higher are available.

Exchanges for Multiple CSPs

Separating big physical connections into smaller virtual connections at a broad range of bandwidths under 100Mbps is now possible, and it's an extremely effective, wide-reaching means of cutting back on connectivity costs. A single enterprise client can make one direct physical connection to the exchange and provision virtual direct connections over it to reach multiple CSPs. A single physical connection for multiple cloud destinations is now all that's needed.

Most enterprises use multiple cloud providers, not just one, and many add more all the time; many will also never be 100% migrated to the cloud even if a good portion of their workloads already are. That makes closing the gap between on-premises resources and cloud resources part of the ongoing challenge, but fortunately the options for addressing it have evolved and improved notably in recent years.