Minimizing Loss and Latency with Cloud-Based Apps

Reading Time: 4 minutes

Right or wrong, being accommodating and understanding of something or someone only happens if basic expectations are still being met. Most of you reading this blog in the first place will know what a bounce rate is, and even if we don’t realize it, we all have an inner clock that dictates how long we’re willing to wait for a page to load.

Page loads and page speeds aren’t the same thing, but all of this just highlights what’s already well known in the digital world. There’s only so much waiting a person can be expected to do, and that has led to efforts to minimize loss and latency with cloud-based apps.

The success they’ve had with doing that is what we’ll talk about with our blog entry here this week. Cloud-based technology has been integral to how many of the newest apps have the impressive functionality they do, and even if you’re not the savviest person about it, you’re probably benefiting from it in ways you’re not even aware of, courtesy of that mini PC or Mac masquerading as a ‘phone’ in your pocket.

Having so many developers catering to public cloud IaaS platforms like AWS and Azure, plus PaaS and SaaS solutions too, is made possible by the simplicity of consuming the services, at least when you’re able to connect securely over the public internet and start spinning up resources.

This is something that shows up on the horizon for good Canadian web hosting providers like us here at 4GoodHosting, as it’s definitely within our sphere.

So let’s have a look at what’s known with the best ways to minimize loss and latency with cloud-based apps.

VPN And Go

The default starting point for any challenge that needs to be addressed or choice that needs to be made is to use the internet to connect to the enterprise’s virtual private clouds (VPCs) or their equivalent from company data centers, branches, or other clouds, preferably with a VPN. But doing so doesn’t guarantee an absence of problems for modern applications that depend on lots of network communication among different services and microservices.

Quite often the people using those applications run into problems with performance, and more often than not it’s related to latency and packet loss. That connection is logical enough, but there’s more to it, specifically their magnitude and variability. Loss and latency are a bigger deal on internet links than across internal networks. Loss results in more retransmits for TCP applications or artifacts from missing packets for UDP applications, and too much latency means slower responses to requests.

If that’s the scenario and there are service or microservice calls across the network, this is where loss and latency are most going to hamper performance and take away from user satisfaction in a big way. Values that might be tolerable when there’s only a handful of back-and-forths can become wholly intolerable when modern application architectures multiply them into hundreds or thousands per request.
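
To make that concrete, here is a rough little Python sketch of how per-call latency, jitter, and the occasional retransmit stack up as a single user request fans out into more and more sequential service calls. The numbers in it are made up purely for illustration; it is not a benchmark of any real application.

```python
# Toy model: one user request that fans out into N sequential service calls.
# All figures (20 ms base latency, 1% loss, 200 ms retransmit penalty) are
# hypothetical, chosen only to show how totals grow with call count.
import random

def simulate_request(calls, base_ms=20, jitter_ms=10, loss_rate=0.01, retry_penalty_ms=200):
    total = 0.0
    for _ in range(calls):
        latency = base_ms + random.uniform(0, jitter_ms)   # per-call latency plus jitter
        if random.random() < loss_rate:                    # a lost packet forces a retransmit
            latency += retry_penalty_ms
        total += latency
    return total

random.seed(42)
for calls in (3, 30, 300):
    samples = [simulate_request(calls) for _ in range(1000)]
    print(f"{calls:>3} calls: avg {sum(samples)/len(samples):.0f} ms, worst {max(samples):.0f} ms")
```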

Varying Experiences

More variability in latency (jitter) and packet loss on internet connections increases the chance that any given user gets a widely varying application experience, one that may be great, absolutely terrible, or anywhere in between. For some users some of the time, that unpredictability is as big an issue as the slow responses or glitchy video or audio.

3 specific cloud-based resources come to the forefront as solutions to these problems: direct connection, exchanges, and cloud networking.

A dedicated connection to the cloud is the first one we’ll look at. This is where the customer’s private network is directly connected to the cloud provider’s network. This will usually involve placing a customer switch or router in a meet-me facility. The cloud service provider’s network-edge infrastructure there is then connected with a cable so packets can travel directly from the client network to the cloud network, with no need to traverse the Internet.

The only potential hangup is with WAN latency. But as long as latency to the meet-me facility is acceptable, performance should be comparable to an inside-to-inside connection. If there’s a downside, it’s that direct connects are expensive compared to simple internet connectivity. They also tend to come in large-denomination bandwidths only; finding something smaller than 1Gbps is unlikely.

Multiple CSPs with Exchanges

Big pipes are always an advantage, and that’s true in any context where you can use the term. Exchanges with big pipes to the cloud service providers (CSPs) are able to take large physical connections and separate them into smaller virtual connections at a broad range of bandwidths under 100Mbps. The enterprise user makes a single direct physical connection to the exchange and then provisions virtual direct connections over it to reach multiple CSPs through the exchange.

The next consideration here is for internet-based exchanges that maintain direct connects to CSPs but still leave customers free to connect to the exchange over the internet. The provider typically offers a wide choice of onboarding locations plus a broad network of points of presence at its edge. This way customer traffic doesn’t have to travel far across the internet before making the important exit into the private network, which keeps latency and loss to a minimum.

Artificial Intelligence Now Able to Crack Most Passwords < 60 Seconds

Reading Time: 3 minutes

There are some people who have more in the way of long-term memory ability than short-term memory, and while that may sound good it does come with its own set of problems. Ideally you have a balance of short- and long-term memory, and that’s especially helpful if you’re the type who has a devil of a time remembering passwords. But it’s never been a good idea to create simple passwords, and it’s even less of a good idea now that rapid advances in AI mean artificial intelligence is almost certainly going to be able to figure those passwords out.

The fact that most of us use password apps on our phones attests to two things. First, how many passwords we need to have given the ever more digital nature of our world. And second, just how many of us don’t have the memory to be able to remember them organically. So if you’re not good with memory but you’ve resisted putting one of these apps on your phone then you may want to now. This is a topic that will be of interest for us here at 4GoodHosting as like any good Canadian web hosting provider we can relate to the proliferation of passwords we all have these days.

Some of you may be familiar with RockYou, and if you are you’ll know that it was a super popular widget found on MySpace and Facebook in the early years of social media. There’s a direct connection between that widget and where we’re going with AI being able to crack passwords in less than a minute, so let’s start there with our web hosting topic blog entry this week.

Password Mimicker

Here’s how it’s part of the reason that now is a good time to update your password. Experts have found AI systems are able to crack almost all passwords easily, and that’s just one more example of how the capabilities of artificial intelligence are expanding in leaps and bounds these days. Back in 2009 RockYou was the victim of a big-time cyber attack, and 32 million passwords that were stored in plaintext were leaked to the dark web.

From that dataset, the researchers took 15.6 million passwords and fed them into PassGAN, and leaked sets like this are now often used to train AI tools. The significance is that PassGAN is a password generator based on a Generative Adversarial Network (GAN), and it creates fake passwords that mimic real ones genuinely found on the web.

It has two neural networks. The first is a generator, and the second is a discriminator. The generator builds candidate passwords, the discriminator scores how realistic they look, and that feedback is sent back to the generator. Both networks improve their results through this constant back-and-forth interaction.
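
For the more technically inclined, here is a very rough PyTorch sketch of that generator-versus-discriminator back and forth. The layer sizes, character vocabulary, and placeholder ‘real password’ data are our own assumptions for illustration only and are not the actual PassGAN design.

```python
# Minimal, illustrative GAN loop: the generator proposes fake "passwords",
# the discriminator scores real vs. fake, and both improve from the feedback.
# Sizes and the random stand-in for real passwords are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MAX_LEN, NOISE = 64, 10, 128        # assumed character set, length, noise size

generator = nn.Sequential(nn.Linear(NOISE, 256), nn.ReLU(), nn.Linear(256, MAX_LEN * VOCAB))
discriminator = nn.Sequential(nn.Linear(MAX_LEN * VOCAB, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

# Stand-in for real leaked passwords, one-hot encoded.
real = F.one_hot(torch.randint(0, VOCAB, (32, MAX_LEN)), VOCAB).float().flatten(1)

for step in range(100):
    # Generator turns random noise into a batch of fake password distributions.
    fake = torch.softmax(generator(torch.randn(32, NOISE)).view(32, MAX_LEN, VOCAB), dim=-1).flatten(1)

    # Discriminator learns to score real samples high and fakes low.
    d_loss = F.binary_cross_entropy_with_logits(discriminator(real), torch.ones(32, 1)) + \
             F.binary_cross_entropy_with_logits(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator is updated so its fakes fool the discriminator, the constant
    # back-and-forth described above.
    g_loss = F.binary_cross_entropy_with_logits(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```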

More than Half

Passwords shorter than 4 characters are not common and neither are ones longer than 18, so anything outside that range was excluded from consideration in the research. The findings were that 51% of passwords that could be considered ‘common’ could be cracked in less than a minute by the AI, and 65% of them were cracked in less than an hour.

More than 80% of them were unable to hold strong for even a month, with that many passwords deciphered by AI within that time frame. The average for passwords with 7 characters was to have them AI-broken within six minutes, and even less if the password had any combination of 1-2-3 or 3-2-1 in it. Any other combination of numbers, upper- or lower-case characters, or symbols made no difference in the relative strength of the password when squared up against AI.

Go 15 or Longer from Now On

The consensus now is that to have AI-proof passwords you should be creating ones with 15 characters or more. Researchers suggest going with at least 15 characters and making lower-case letters, upper-case letters, numbers, and symbols all mandatory. Going with one that is as unique as possible and updating or changing it regularly is recommended too, particularly considering that, like everything, AI is going to keep getting better at this.
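
If you’d rather not dream one up yourself, here is a quick sketch using Python’s standard secrets module that assembles a 15+ character password with all four character classes. The helper name and default length are just our own choices for illustration.

```python
# Build a random password of at least 15 characters with lower-case,
# upper-case, digit, and symbol characters all guaranteed to appear.
import secrets
import string

def make_password(length=16):
    if length < 15:
        raise ValueError("use at least 15 characters")
    classes = [string.ascii_lowercase, string.ascii_uppercase, string.digits, string.punctuation]
    chars = [secrets.choice(c) for c in classes]                   # one from each class
    pool = "".join(classes)
    chars += [secrets.choice(pool) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)                          # mix the guaranteed ones in
    return "".join(chars)

print(make_password())
```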

Minimizing Loss and Latency for Cloud-Based Apps

Reading Time: 4 minutes

Network traffic is a lot like the motor vehicle traffic we most immediately connect with the term. Build it and they will come; the concept of induced demand works in exactly the same way. If space is created, demand will be created to fill it. That’s not so good when it comes to trying to build enough infrastructure to accommodate traffic, and servers struggle in the same smaller-scale way with accommodating ever-growing data traffic demands.

The advantages of cloud computing have compounded the problem with so many more users demanding cloud storage space, and increasingly there are apps that are cloud-based and require bandwidth access to the point that without it they won’t function properly. That’s bad news for app developers who want people using their app to not be impeded in any way. Performance of cloud-based apps that create lots of network traffic can be hurt by network loss and latency, and the best ways of dealing with that are what we’ll look at here this week.

It’s a topic that will be of interest to any good Canadian web hosting provider, and that certainly applies to us here at 4GoodHosting. We have large data centers of our own too, but wouldn’t be able to accommodate even 1/1000th of the demand created by cloud storage at all times. There are ways to minimize loss and latency for cloud-based apps and SaaS resources and so let’s get onto that.

Mass Adoption

A big part of the popularity of adopting public cloud IaaS platforms, along with PaaS and SaaS, has come from the simplicity of consuming the services. The means of connecting securely over the public internet and then accessing and utilizing resources creates strong demands on infrastructure, and there are big challenges associated with private communication between users and those resources.

Using an Internet VPN is always going to be the simplest solution if your aim is to connect to the enterprise’s virtual private clouds (VPCs) or their equivalent from company data centers, branches, or other clouds. But there are problems that can come with relying on the internet when modern applications depend heavily on extensive network communications. It is also very common for people using those applications to be running into problems with performance because of latency and packet loss.

It is the magnitude and variability of this latency and packet loss that are the primary notable aspects here, and the issue is more acute when they are experienced over internet links rather than across internal networks. Loss results in more retransmits for TCP applications or artifacts due to missing packets for UDP applications, while latency means slower responses to requests.

Every service or microservice call across the network is an opportunity for loss and latency to hurt performance. Latency and loss values that are tolerable across a handful of back-and-forths can quickly become unbearable when modern application architectures make those calls explode into the hundreds for a single operation.

Need to Reduce Jitter

Variation in latency is referred to as jitter, and greater jitter combined with packet loss on internet connections increases the chance that any given user gets a widely varying application experience, one that may be great one day and awful the next. That unpredictability is sometimes as big an issue for users as the slow responses or glitchy video or audio.
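
For anyone curious how that variability gets quantified, here is a small Python sketch that estimates average latency, jitter, and loss from a handful of round-trip-time samples. The sample values are made up purely for illustration.

```python
# Estimate latency, jitter, and loss from hypothetical round-trip-time probes.
from statistics import mean, pstdev

rtt_ms = [24, 26, 25, 90, 27, 25, None, 24, 85, 26]   # None = a lost probe

received = [r for r in rtt_ms if r is not None]
loss_pct = 100 * (len(rtt_ms) - len(received)) / len(rtt_ms)
avg_latency = mean(received)
jitter = pstdev(received)                              # variability around the average

print(f"avg latency {avg_latency:.1f} ms, jitter {jitter:.1f} ms, loss {loss_pct:.0f}%")
```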

Dedicated connections to the cloud are what needs to happen, and the advantages of connecting a customer’s private network to the cloud provider’s network are considerable. This usually involves placing a customer switch or router in a meet-me facility where the cloud service provider also has network-edge infrastructure. The cabled connection means packets are able to travel directly from the client network to the cloud network with no need to traverse the Internet.

Direct connects will darn near guarantee that loss and jitter don’t occur. As long as WAN latency is favorable, performance gets as close as possible to an inside-to-inside connection. The only downside is that direct connects are pricey compared to simple internet connectivity, and typically only come in large-denomination bandwidths of 1Gbps or higher.

Exchanges for Multiple CSPs

Exchanges can now separate big physical connections into smaller virtual connections at a broad range of bandwidths under 100Mbps, and that is extremely effective as a wide-reaching means of cutting back on the number of physical connections needed. A single enterprise client can make one direct physical connection to the exchange and provision virtual direct connections over it to reach multiple CSPs through the exchange. A single physical connection for multiple cloud destinations is now all that’s needed.

Most enterprises use multiple cloud providers, not just one, and most add more all the time; many will never be 100% migrated to the cloud even if a good portion of their workloads already are. That makes closing the gap between on-premises resources and cloud resources part of the ongoing challenge as well, but fortunately the options for addressing it have evolved and improved quite notably in recent years.

Stalkerware an Ever Increasing Threat for Mobile Devices

Reading Time: 3 minutes

It has been quite a while since we made any type of cybersecurity threat the focus for our blog entry, but one that is really prominent and increasingly so these days is mobile Stalkerware. Stalkers are different from spies, and Stalkerware is different from spyware. As the name suggests, what it involves is uninvited following and tracking of your activities and whereabouts. It can be just as disconcerting digitally as it is in real life, and more specifically these apps record conversations, location, and pretty much everything you type.

So yes, they pretty much nix any aspect of privacy you might have, and expect to have, with the operation of your mobile device. And the problem is that all of this occurs while you have no idea you’re being ‘stalked’. These types of malware often come disguised as calendar or calculator apps, and ‘Flash Keylogger’ is the most infamous one that was busted for it and is fittingly nowhere to be found these days.

Passing on information that their clients can use to keep themselves cyber-safe is going to be agreeable for any reputable Canadian web hosting provider, and that certainly applies for us here at 4GoodHosting too. So we’re taking this entry to talk about why Stalkerware is even more of a problem now than before, and then we’ll conclude by talking a little bit about what you can do to get rid of it.

3x The Risk Now

Apparently there is 3x the risk of being a victim of Stalkerware as compared to what it was three years ago. The possibility of encountering this form of mobile malware has gone up 329% since 2020. These attacks involve the attacker stealing the physical and online freedom of the targeted person, and doing that by tracking their location and monitoring their smartphone activity without consent or the victims being aware it’s going on.

One of the biggest risks can be with valuable information that may be exchanged in a text message, for example. Stalkerware may also be installed secretly on mobile phones by people who have grudges or ill will towards a person, and there have even been instances where concerned parents are the ones behind a Stalkerware infection on a device. This is not only about stealing personal data; there are also tangible implications for the safety of the individual targeted.

As mentioned above, Stalkerware commonly imitates benign apps such as notes, calculators, or ones similar to these. This allows them to stay hidden in plain sight, with the victims seeing the apps every day on their phone and not thinking much of them. Sometimes they are advertised as apps used to keep a close eye on children and other people that are unable to take care of themselves.

Detection & Removal

The most reliable way to make sure your devices aren’t carrying Stalkerware is to go through all of the apps installed on the device and make sure they all work as intended. A phone that suddenly drops in performance, or starts crashing and freezing for no apparent reason may have a Stalkerware app installed on it. Another indicator can be if suddenly you have a new browser homepage, new icons on your desktop, or a different default search engine.

There are 3 best ways to get rid of Stalkerware on your phone. First is to conduct a factory reset of the device. That’s not something you should do unless you’re aware of what you stand to lose, but if you do decide to go that route, it’s important to first back up all important data on your phone: your videos, contacts, photos, etc. You can do this using your phone’s default cloud service, or use something like Google Drive to back up your data.

Your next choice would be to update your operating system. Some Stalkerware is only designed for older operating system versions and so an OS update might disable the stalkerware installed on your device. The Stalkerware may still continue to operate even after an OS update though. If your device has an OS update available, you’ve really got nothing to lose by trying this way.

Last up is using a malware removal app specifically designed for stalkerware. There are lots of good ones, including Norton, McAfee, Bitdefender, and Avira. However, be aware that you’ll need to pay for them.

Use of Sovereign Clouds for Country-Specific Data

Reading Time: 4 minutes

Try to travel outside your country of citizenship without a passport and see how far you get. Likely no further than the airport or terminal of either sort, and that is the way it should be considering there are some people – and some things – that aren’t allowed to leave the country. It’s best that they stay in, or it’s best that they aren’t to be elsewhere in the world. The last part of that would apply to anyone who’s a risk to others, but the first part of it can apply to big data too.

There are companies that need to give federal governments the assurance that some of the data they collect from customers or investors doesn’t go beyond their borders. A good example would be manufacturers who work with the Department of Defence, or others holding patents on manufactured goods where the government has a vested stake in keeping that technology under wraps and away from countries where patent laws aren’t adhered to the same way they are here.

All of this comes at a time when the ongoing shift to cloud computing is as strong as ever, and where avoiding the bulk and expense of physical data storage is a huge plus for companies of all sorts. This is the aspect we can relate to here at 4GoodHosting, being a good Canadian web hosting provider that can see the bigger picture with anything digital despite being a mere bit player whose job is to ensure that a company’s website is always there and open for business, or at the very least connection, on the World Wide Web.

The need for cloud storage while staying inside laws around domestic data control has led to what are called ‘Sovereign’ Clouds, and that’s what we are going to look at with this week’s blog entry.

Protecting Interests

Banking is another prominent example here of an industry that has been very eager to adopt cloud computing where certain data segments need to be kept within the country. Insurance, healthcare, and the public sector are other ones where complying with laws and requirements within specific regions is important. Sovereign clouds are the type of cloud architecture that has been developed to meet this need.

They are semi-public cloud services that are owned, controlled, and operated by a particular country or region. In some instances that controller may be a cloud provider serving a smaller nation or region. They may be owned by the local government outright or by a consortium of private and public organizations, or owned by private companies that work closely with the government.

The objective of sovereign clouds is to provide computing infrastructure that can support specific government services, most notably protecting sensitive data and complying with laws and regulations specific to a country or region. Until not long ago mega cloud providers that served all countries were the standard, and even with the introduction of sovereign clouds and the shift to them, we will likely continue to need the hyperscalers for systems that are less cost-effective to run on sovereign clouds.

It’s always going to be the case that sovereign clouds are part of multi-cloud deployments, and having the acceptance of multi-cloud and its flexibility driving new interest in sovereign clouds is what we’re seeing today.

Sovereign Cloud Benefits

Increased control and ownership of data is far and away the most prominent advantage to having big data in a sovereign cloud. It ensures that data is stored and managed in compliance with local regulations and laws, including keeping data in specific countries or regions. Use of public clouds might put you at risk of having your data made available outside of the country, and it might not be a situation where anyone or any group has done something ‘wrong’ to allow that to happen.

Sovereign clouds take that possible risk out of the equation since they physically exist in the country they support. Enhanced security measures are another big part of the appeal of sovereign clouds. They offer encryption, access controls, and network segmentation that may be tailored to specific countries or regions. Larger public clouds may provide the same or better services, but the way sovereign cloud security systems are purpose-built for a specific country’s laws and regulations makes them superior for supporting data security measures in that country.
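
As a small illustration of the data residency idea, here is a sketch using boto3 that pins a storage bucket to a Canadian region. To be clear, a public cloud region on its own is not a sovereign cloud, and the bucket name here is hypothetical, but region constraints like this are one of the building blocks of keeping data inside a country.

```python
# Create a bucket constrained to the ca-central-1 region so the stored
# objects stay in Canada. Bucket name is a hypothetical placeholder.
import boto3

s3 = boto3.client("s3", region_name="ca-central-1")
s3.create_bucket(
    Bucket="example-data-residency-bucket",
    CreateBucketConfiguration={"LocationConstraint": "ca-central-1"},
)
```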

Other benefits:

  • Higher service availability and reliability levels in comparison to commercial cloud providers
  • Customizability for meeting the specific needs of a country or organization, including compliance requirements, data storage, and processing capabilities
  • Creation of jobs and supporting local economic development

Sovereign Cloud Drawbacks

It is always possible that a sovereign cloud may not be compatible with other cloud infrastructures, resulting in the chance of interoperability and data exchange challenges. Data privacy concerns are legit too, as there have been plenty of instances across history where governments have taken advantage of having this type of control. Many companies prefer to use global public cloud providers if they believe that their local sovereign cloud could be compromised by the government.

Sovereign clouds may not have the same capacity for speedy adoption of new technologies and services compared to global cloud providers. This might limit their ability to innovate and remain competitive. Sovereign clouds won’t likely be able to offer the same types of services, considering that they don’t have billions to spend on R&D like the larger providers. Flexibility and autonomy may be lessened or compromised too, as organizations that rely on a sovereign cloud end up dependent on the government or consortium operating it.

Needed Space: Server Colocation Services

Reading Time: 4 minutes

There’s always going to be the need to make distinctions between wants and needs, but when it comes to space and resources there are many times when having more of them is much more of a need than a want. That is obviously true for people, and the best example would be how a married couple with 3 kids can’t make do with a 1-bedroom condo even though it’s in the most desirable part of town. But it can also be true of objects and inanimate items of all sorts, and it’s true for servers too and the data centers they reside in.

Businesses, organizations, and ventures never have uniformly identical data management needs, and for those with especially large ones, server colocation often makes a lot of sense. This is where a server owned by the customer is deployed and hosted within a managed service facility or environment, usually an existing data center or IT facility. The customer retains control of server services, the operating system, and applications, while the MSP provides the physical space, power, and network resources.

Obviously with 4GoodHosting being one of the best Canadian web hosting providers this is a topic that we’re very familiar with, even though a hosting provider is going to have their own expansive data centers with all the elbow room needed for clients’ data storage needs to begin with.

We’ve got two excellent ones full of capacity in both Vancouver and Toronto, but that’s all we’ll say about that and instead let’s move to discussing what sort of server colocation services you will want to be aware of if your data management needs are much bigger than they used to be.

  1. Tower Server Colocation

Colocation facilities are sometimes also referred to as ‘carrier hotels’, and maximum safety for your server within a facility is best achieved with tower server colocation. It provides your servers with an extra layer of security that wouldn’t be available with other types of server colocation arrangements. Server colocation pricing and things of that nature also need to be taken into consideration, but identifying the type of server colocation service you need comes first.

Tower server colocation is a wise choice for those who have very basic needs. With it there will typically only be space on one shelf for one tower server. If you can make do with this arrangement it’s a very efficient and usually very cost-effective one. Another big plus is that there is unmetered incoming bandwidth made available and you are free to use any one of several IP addresses most of the time.

  2. Full Rack Colocation Space

Greater server space needs will require a bigger fit, and that is a full-rack colocation space. This is the situation if you need a lot of space to protect several servers at once, all of them essential to whatever it is you are doing online, whether that’s a business, e-commerce, or something else entirely. This is the setup that will benefit you the most, and as the name suggests, it provides an entire rack of space for your servers.

 

This certainly is the most expensive option, but if it’s what you need then the cost becomes one you’re willing to pay as it is fairly standard to have upwards of 50 IP addresses along with something like 20 AMPs of power made available to your servers. It is one of the most advanced colocation options on the market, and quite often it is the most ideal fit if you need a setup with plenty of power and security.

Pricier for sure, but for so many clients it is simply part of the cost of doing business and especially in the digital age where so much data is in the cloud and there’s so much operational efficiency and reliability that comes with that.

  3. 1U – 4U Colocation Space

For a mix of affordability and more capacity / performance, a smaller U colocation space may be your ideal server colocation arrangement. It allows you to take your 1U / 2U / 3U / 4U colocation server and put it safely on a single rack in a very secure colocation facility. Along with that you’ll have somewhere around 2-5 IP addresses to use while still being able to have unmetered incoming bandwidth.

This option is usually quite affordable even though it is a functional step up from a simple tower server colocation. The basic concept for how much you’d pay is actually quite simple: the more rack space you use, the less available space the colocation center will have left, and the more power the data center will consume as a whole.

Rack space is measured first in terms of single tray units (U), which is the amount of space occupied by a single tray in a server rack enclosure, and then by the actual racks themselves where your physical server will sit. These two units form the backbone of most colocation center pricing scales, but there are also more precise measurements of space that can be used to further tailor your colocation space to best fit your business and budget.
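
To give a rough sense of how that math shakes out, here is a simple Python estimator. Every rate in it is invented purely for illustration, as actual colocation quotes vary widely by facility.

```python
# Hypothetical monthly colocation estimate driven by rack units, power, and IPs.
def monthly_colo_estimate(rack_units, amps, ip_addresses, per_u=40.0, per_amp=25.0, per_ip=2.0):
    return rack_units * per_u + amps * per_amp + ip_addresses * per_ip

# Roughly a 1U slot, a 4U slot, and a full 42U rack (made-up figures).
for units, amps, ips in [(1, 2, 2), (4, 5, 5), (42, 20, 50)]:
    print(f"{units:>2}U / {amps}A / {ips} IPs: ${monthly_colo_estimate(units, amps, ips):,.2f}/month")
```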

Advantages of Dedicated Hosting for Headless Commerce

Reading Time: 4 minutes

People may think of Abraham Van Brunt or the Legend of Sleepy Hollow when they hear the word headless, but in the context we’re about to lay out here we’re not talking about horsemen. We’re talking about headless commerce, a separation of the front end and back end of an e-commerce application that leaves brands freer to build whatever and however they want. It’s one of the many offshoots of conventional e-commerce since it came around some 25 years ago.

 

Headless commerce has gained attention as a method businesses can employ to address some of the challenges of the evolving eCommerce market. It can deliver a substantial competitive advantage when the presentation layer and the backend of the application are built and operated separately from each other, and continue to stand apart as the business gains traction.
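
In practical terms, the ‘headless’ split usually means the backend exposes commerce data through an API only and leaves rendering entirely to whatever frontend a brand chooses to build. Here is a minimal sketch of that idea using Python and Flask, with a hypothetical endpoint and product data.

```python
# Headless-style backend: data comes out as JSON, and any frontend
# (web, mobile app, kiosk) decides how to present it.
from flask import Flask, jsonify

app = Flask(__name__)

PRODUCTS = [
    {"id": 1, "name": "Example Widget", "price_cad": 19.99},
    {"id": 2, "name": "Example Gadget", "price_cad": 34.50},
]

@app.route("/api/products")
def list_products():
    return jsonify(PRODUCTS)   # no HTML here; presentation is the frontend's job

if __name__ == "__main__":
    app.run(port=5000)
```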

 

And of course a primary resource for any e-commerce business, venture, or other commercial enterprise online – headless or not – is going to be the website. And as a top Canadian web hosting provider, 4GoodHosting is always all ears when it comes to whatever it may be that might be relevant for the small / smallish businesses that have their websites hosted by us.

 

Dedicated hosting is often a better choice than shared hosting for many reasons, but it is especially so for headless commerce interests and that’s what we’re going to look at with our entry for this week.

 

Challenges With Headless Commerce

 

There are common challenges that need to be overcome by organizations implementing a headless commerce environment. What is quite common is that these challenges aren’t overcome sufficiently (or easily) and the company then decides to deploy a traditional e-commerce solution instead. But if they were to move to a dedicated server arrangement with their Canadian web hosting provider then at least a PART of why that is would be taken out of the equation.

 

These are the most common challenges:

 

Cost – There are often substantial initial costs to implementing a headless commerce architecture, and they may not fit with a company’s projected budget. Weighing costs against the projected benefits and increased revenue generated by the new system is often what transpires.

 

Technical expertise – Considerable technical expertise is often required to correctly implement headless commerce. Changing existing processes is rarely straightforward and failure to get it right can expose a company to failed systems, downtime, and lost revenue.

 

Data integration and connectivity – Integrating the diverse data streams and sales channels is often a heavy challenge, and very time consuming. Faulty integrations will lead to dissatisfied customers and mounds of work for a development team.

 

Dedicated Web Hosting Benefits

 

Shared servers are always the least expensive option, but will not always deliver the desired level of performance or reliability. A virtual private server (VPS) addresses some of the shortcomings of a shared server but will still not offer the performance achievable with dedicated physical hardware. Costs can be controlled with options such as renting a dedicated server from a 3rd-party vendor rather than purchasing it and housing it in an on-premises data center.

 

Consider as well that partnering with a 3rd-party hosting provider can reduce or eliminate any concerns about the availability of adequate technical resources, and 3rd-party tools that help integrate and automate data intake are essential when implementing a headless commerce environment.

 

Superior customization potential is a benefit too. A dedicated server provides full control over the deployment of operating systems and software applications. Both frontend and backend resources can be fine-tuned and customized to provide an improved user experience. The same goes for reliability, as a dedicated server enables a company to use in-house technical resources to make changes or respond to an emergency or unexpected outage.

 

Next up is flexibility. When they own the hardware, a company can quickly make changes as soon as they become necessary. Shared tenants or limits on partitioned resources are not part of the picture, so you won’t be disadvantaged that way. A dedicated server for headless commerce applications also gives transparency, as companies know exactly what software is running. There will be no surprises with unexpected compatibility issues.

 

Choosing Implementation

 

After you’ve decided that a dedicated hosting environment is going to be the best working fit for your headless commerce architecture, you then need to choose how it will be implemented. As with any website, a company can elect to host it using internal or third-party resources, and the following standard considerations influence an organization’s choice:

 

Financial status – Setting up a dedicated server in an internal data center is not inexpensive. You may want to think about finding a reliable provider and renting your hardware from them. This is one way to minimize the cost of going with a headless solution.

 

Technical resources available – Lacking the in-house technical resources to adequately handle the responsibilities of maintaining a dedicated server is quite common. A reliable vendor can supply those resources, allowing a company to implement headless commerce with a limited staff.

 

Datacenter space – Dedicated servers need to be housed somewhere, and a 3rd-party vendor will already have space allocated. Companies that do not have on-premises space available can still go headless by paying to use a provider’s space and servers.

Industry Cloud Architecture Best Practices

Reading Time: 4 minutes

Businesses more centrally attached to a certain industry tend not to have as much of their success invested in e-commerce, or in simpler terms, they are less reliant on having a certain type of web presence. Countering that is the fact that they often have more of their investment tied to profit-generating capacity related to the infrastructure of their operations.

Industry clouds are a good example of that, and we’ll skip the W5 overview of them and go right to saying that industry clouds are nearly always vertical for obvious reasons. They also need to offer a more agile way to manage workloads and accelerate change against the particular business, data, compliance, or other needs of their segment.

The last part of that is important to note, as business compliance needs are a characteristic of operations for these types of very industry-connected businesses in a way that is not seen at all for most of them that are strictly commercial in their operations.

So yes, the vast majority of businesses operating commercially and nearly all in online retail will be the types that make the services of a good Canadian web hosting provider part of their monthly operating budget. You’re basically invisible without a quality website and developed online presence and identity these days, and providers like us are just conduits that make your website up and visible on the information superhighway.

Data Fabrics Factor

Industry-aware data fabrics are in many ways the biggest part of how these clouds differ from conventional or community clouds. Innovative technologies and approaches are a close runner-up, but one constant is that using industry-specific services will add cost and complexity. There can be more value returned to the business, but it’s not a simple or straightforward equation to work out exactly what to use and how best to make that happen.

Investment in industry clouds is really taking off now as companies seek higher returns on their cloud computing investments, and these are investments they’ve had no choice but to make. As industry-related technology becomes better and more available, enterprises that climb on the industry cloud bandwagon today will be better positioned for noticeable successes in the future.

Many major public cloud providers do not have industry-specific expertise but are partnering with professional services firms and leaders in banking, retail, manufacturing, healthcare, and other industries. The result is a collaboration between people who understand industry-specific processes and data and people who understand how to build scalable cloud services.

Best Practices

A. Understand the Complexities of Service Integration, and Costs Attached to Them

For the longest time IT was dominated by service-oriented architecture concepts that are systemic to today’s clouds. Industry-specific services or APIs that could save us from writing those services ourselves weren’t ideal, but they were readily available. Programmableweb.com is a good example of where many went to find these APIs.

Today you’re more likely to be weighing whether or not industry-specific services should be leveraged at all. This is the ‘build-versus-buy’ decision that people talk about in relation to this. The considerations are the cost to integrate and operate a service versus the cost to build and operate it yourself. Using OPC code is what most people opt for, but that choice can come with unanticipated costs and much more complexity than you planned on.

To master this best practice, just ask the questions and do the math. You’ll find that the cost and complexity usually bring back more value to the business. Not always though.
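
As a simple illustration of doing that math, here is a sketch of a build-versus-buy comparison. Every figure in it is a made-up placeholder; the point is the shape of the calculation, not the numbers.

```python
# Compare total cost of integrating a vendor's industry-specific service
# against building and operating the equivalent yourself. All figures are
# hypothetical placeholders.
def total_cost(upfront, monthly_ops, months):
    return upfront + monthly_ops * months

buy = total_cost(upfront=15_000, monthly_ops=3_000, months=36)      # integrate and operate a service
build = total_cost(upfront=120_000, monthly_ops=1_500, months=36)   # build and operate it yourself

print(f"buy/integrate: ${buy:,}  vs  build/operate: ${build:,}")
```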

B. Ensure Systemic Security Across the Board

Sufficient security with industry-specific clouds is never to be assumed. Those sold by the larger cloud providers may be secure as stand-alone services but then turn into a security vulnerability when integrated and operated directly within your solution.

The best practice here is to build and design security into your custom applications that leverage industry clouds, doing so with an eye to ideal integration so no new vulnerabilities are opened. The best approach is to take 2 things that are assumed to be secure independently, and then add dependencies as you see fit to change or improve the security profile.

C. Seek Out Multiple Industry-Specific Services & Compare

It is fairly common for platforms to be built with use of industry-specific cloud services from just one provider. That may be the easy way to move forward and often you’ll feel fairly confident in your decision based on your research or referrals. But just as often the best option is on another cloud or perhaps from an independent industry cloud provider that decided to go it alone.

It’s good advice to say you shouldn’t limit the industry-specific services that you are considering. As time goes on, there will be dozens of services to do tasks such as risk analytics. You will be best served by going through long and detailed evaluations of which one is the best fit based on your structure top-to-bottom, as well as taking your operation dynamics into consideration too.

DRL: Deep Reinforcement Learning for Better Cybersecurity Defences

Reading Time: 4 minutes

Needs usually diminish over time, and that’s the way it goes the majority of the time for whatever reason. But as so much more of people’s work and personal worlds has gone digital and an ever greater amount of everything is in the Cloud, there is that much more opportunity for cyber attackers to go after and attempt to acquire valuable data and information. From malware to ransomware and all the wares in between, they’re out there and they’re becoming more complex right in step with the digital world’s own daily advances.

Here at 4GoodHosting, like any other good Canadian web hosting provider, we have SSL certificates that can secure a website for basic e-commerce purposes. But that’s the extent of what folks like us are able to offer with regards to web security. Cybersecurity is a much larger umbrella, and a more daunting one if it’s possible for an umbrella to be daunting. But fortunately there are much bigger players hard at work on defences so the good guys still have a chance of staying intact in the face of ever-greater cybersecurity threats.

One of the more promising recent developments there is Deep Reinforcement Learning (DRL), an offshoot of sorts from other artificial intelligence aims where researchers found cross-purpose applications for what they’d been working on. So let’s use this week’s blog entry to look at it, as these days nearly everyone has some degree of interest in cybersecurity, if not an outright need for it.

Smarter & More Preemptive

Deep reinforcement learning offers smarter cybersecurity, the ability for earlier detection of changes in the cyber landscape, and the opportunity to take preemptive steps to scuttle a cyber attack. In recent and thorough testing with realistic and widespread threats, deep reinforcement learning was effective at stopping cyber threats and rendering them inept up to 95% of the time. The performance of deep reinforcement learning algorithms is definitely promising.

It is emerging as a powerful decision-support tool for cybersecurity experts and one that has the ability to learn, adapt to quickly changing circumstances, and make decisions autonomously. In comparison to other forms of AI that will detect intrusions or filter spam messages, deep reinforcement learning expands defenders’ abilities to orchestrate sequential decision-making plans so that defensive moves against cyberattacks are undertaken more ‘on the fly’ and in more immediate response to threats that are changing as they happen.

This technology has been built with the understanding that an effective AI agent for cybersecurity needs to sense, perceive, act and adapt, based on the information it can gather and on the results of decisions that it enacts. Deep reinforcement learning has been crafted with that need very much in mind, combining reinforcement learning and deep learning so that it is entirely agile and adept in situations where a series of decisions in a complex environment need to be made.

Incorporating Positive Reinforcement

Another noteworthy functionality of DRL is how good decisions leading to desirable results are reinforced with a positive reward expressed as a numeric value, while bad choices leading to undesirable outcomes come with a negative cost. This part of DRL has strong fundamental A.I. underpinnings as it is similar to how people learn tasks. Children learn at a young age that if they do something well and it leads to a favorable outcome in the eyes of the people expecting it of them, they will benefit from it in some way.

The same thing of sorts occurs with DRL here in deciphering cybersecurity threats and then disabling them. The agent can choose from a set of actions. With each action comes feedback, good or bad, that becomes part of its memory. There’s an interplay between exploring new opportunities and exploiting past experiences, and working through it all builds memory as to what works well and what doesn’t.
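
For a feel of how that reward loop works at its very simplest, here is a tiny tabular Q-learning sketch in Python. The five-state toy environment and its rewards are invented for illustration and are a long way from a real DRL cybersecurity agent.

```python
# Tabular Q-learning: good actions earn a positive reward, bad ones a negative
# cost, and the Q-table is the agent's growing "memory" of what works.
import random

N_STATES, ACTIONS = 5, ["defend", "ignore"]
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2          # learning rate, discount, exploration rate

def step(state, action):
    # Hypothetical dynamics: "defend" at the threat state (state 4) pays off.
    if state == 4:
        return 0, (10.0 if action == "defend" else -10.0)
    return state + 1, -0.1                     # small cost for every step that passes

random.seed(0)
for episode in range(500):
    state = 0
    for _ in range(10):
        # Explore sometimes, otherwise exploit the best-known action.
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: q_table[(state, a)]))
        next_state, reward = step(state, action)
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = next_state

print({a: round(q_table[(4, a)], 2) for a in ACTIONS})   # "defend" should score higher
```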

4 Primary Algorithms

Recent advances with DRL that have taken it to the next level and put it on the radar for the cybersecurity world as a promising new A.I.-based technology have been based on four deep reinforcement learning algorithms – DQN (Deep Q-Network) and three variations of what’s known as the actor-critic approach. Here is an overview of what was seen in the trials:

  • Least sophisticated attacks: DQN stopped 79% of attacks midway through attack stages and 93% by the final stage
  • Moderately sophisticated attacks: DQN stopped 82% of attacks midway and 95% by the final stage
  • Most sophisticated attacks: DQN stopped 57% of attacks midway and 84% by the final stage. This was notable as it was far higher than the other 3 algorithms

While DRL for cybersecurity looks promising and may someday be a well-known acronym in the world of web technology and online business, the reality is that for now at least it will need to be working in conjunction with humans. A.I. can be good at defending against a specific strategy but isn’t as adept at understanding all the approaches an adversary might take, and it is not ready to completely usurp human cybersecurity analysts yet.

Average World Broadband Monthly Usage Nears 600GB

Reading Time: 3 minutes

Clipping can have all sorts of different meanings for different people, but the only time it has a positive context is if you’re talking about scrapbooking or something similar. When the maximum speed limits for broadband internet connectivity are reached you are going to experience something called broadband speed clipping. This happens very often with video streaming, conferencing, gaming and other bandwidth-hungry pursuits.

To put in perspective how much of a problem this is becoming, a little more than a year ago there was a report that the number of U.S. broadband users who regularly push the upper limits of their provisioned internet speed around 9 p.m. at night had increased 400% from just one year earlier. That makes sense when you consider how many people are streaming content at that time of night in a country of 330+ million people, and the only reason it doesn’t happen in Canada to the same extent is that we have only around 10% of that population.

All of this leads to the inevitable reality that the entire world is stretching broadband networks to their limit like never before, and for us here at 4GoodHosting this is something that any reputable Canadian web hosting provider will take some interest in given the nature of what we do, and how connectivity speed and the simple availability of sufficient bandwidth is quite front and center for a lot of the businesses and other ventures for whom we provide reliable web hosting.

Hybrid Infrastructure Strain

Where we are now is that the percentage of subscribers pushing against the upper limits of their broadband networks’ speed tiers has increased dramatically over the past few years, putting massive strain on hybrid infrastructures, and data consumption within those infrastructures has rocketed right alongside it.

All of this was measured with a suite of broadband management tools and used to pinpoint usage patterns, especially the differences between two key categories: subscribers on flat-rate billing (FRB) plans that offer unlimited data usage, and those on usage-based billing (UBB) plans where subscribers are billed based on their bandwidth consumption.

The results for the first 10 months of 2022 showed that average broadband consumption approached a new high of nearly 600GB per month by that point and the percentage of subscribers on gigabit speed tiers had gone up 2x over the course of the previous 12 months. Average per-subscriber consumption was 586.7GB at the end of 2022, and that’s a nearly 10% increase from 2021. The percentage of subscribers provisioned for gigabit speeds rose to 26% over that same time frame.

That’s more than double the 12.2% figure reported for the fourth quarter of 2021. Nearly 35% of surveyed subscribers were receiving gigabit network speeds, its own increase of 13% from a year ago and 2.5 times the percentage of FRB gigabit subscribers. Year-over-year upstream and downstream bandwidth growth remained relatively even for Q4 2022 – 9.4% and 10.1% respectively.

Monthly 1TB+ Usage More Common

The 586.7GB average data usage number for that Q4 was up 9.4% from its Q4 2021 equivalent of 536.3GB. This shows the year-on-year pace had slowed since its peak of 40.3% growth (to 482.6GB) in Q4 2020. Along with this, the percentage of power users consuming 1TB or more per month was 18.7% for Q4 2022, which equates to a year-over-year increase of 16% and 10 times the percentage seen just five years ago.
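
As a quick sanity check on that year-over-year figure, the math works out like this:

```python
# Year-over-year growth from the average-usage figures quoted above.
q4_2021, q4_2022 = 536.3, 586.7                      # average GB per subscriber
growth = (q4_2022 - q4_2021) / q4_2021 * 100
print(f"Q4 2021 -> Q4 2022: {growth:.1f}% growth")   # roughly 9.4%
```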

This is a very indicative reflection of the tremendous extent to which more people are going really heavy on bandwidth with streaming and the like these days. ‘Superpower’ users are defined as anyone who consumes 2 terabytes or more a month, and the number of these super users increased by 25% in Q4 2022, a significant jump from 2.7% to 3.4% of subscribers and a 30x increase over the previous 5 years.

Another relevant consideration is the way that as migration to faster speed tiers continued, the percentage of subscribers in tiers under 200Mbps went down by 43% for that same 4th quarter 2022. Median usage for the cross-sectioned ‘standard’ users was 531.9GB, more than 34% higher than the 396.6GB recorded by all subscribers.

The biggest single-day aberration for higher-than-average usage was Christmas Day. On December 25th there was significantly higher average usage beginning in the mid-morning hours and continuing into the afternoon. Clearly demand for greater internet speed continues to increase, and network planning needs to be done around this ever-present reality. Here in Canada there is an ongoing progression towards more rural communities having high-speed internet, and this will need to be a consideration for network providers as well.