Industry Cloud Architecture Best Practices

Reading Time: 4 minutes

Businesses that are closely tied to a particular industry tend to have less of their success invested in e-commerce, or put more simply, to be less reliant on a certain type of web presence. Countering that is the fact that they often have more of their profit-generating capacity tied to the infrastructure of their operations.

Industry clouds are a good example of that, and we’ll skip the W5 overview and go right to saying that industry clouds are nearly always vertical, for obvious reasons. They also need to offer a more agile way to manage workloads and accelerate change against the particular business, data, compliance or other needs of their segment.

The last part of that is important to note, as compliance needs are a defining characteristic of operations for these very industry-connected businesses in a way that simply isn’t seen for most businesses that are strictly commercial in their operations.

So yes, the vast majority of businesses operating commercially, and nearly all in online retail, will be the types that make the services of a good Canadian web hosting provider part of their monthly operating budget. You’re basically invisible without a quality website and a developed online presence and identity these days, and providers like us are the conduits that put your website up and keep it visible on the information superhighway.

Data Fabrics Factor

Industry-aware data fabrics are in many ways the biggest part of how these clouds differ from conventional or community clouds. Innovative technologies and approaches are a close runner-up, but one constant is that using industry-specific services will add cost and complexity. More value should be returned to the business, but there is no simple or straightforward equation for exactly what to adopt and how best to make that happen.

Investment in industry clouds is really taking off now as companies seek higher returns on their cloud computing investments, and these are investments they’ve had no choice but to make. As industry-related technology becomes better and more available, enterprises that climb on the industry cloud bandwagon today will be better positioned for noticeable successes in the future.

Many major public cloud providers do not have industry-specific expertise but are partnering with professional services firms and leaders in banking, retail, manufacturing, healthcare, and other industries. The result is a collaboration between people who understand industry-specific processes and data and people who understand how to build scalable cloud services.

Best Practices

A. Understand the Complexities of Service Integration, and Costs Attached to Them

For the longest time IT was dominated by service-oriented architecture concepts that are systemic to today’s clouds. Industry-specific services or APIs that could save us from writing those services ourselves weren’t ideal, but they were readily available. Programmableweb.com is a good example of where many went to find these APIs.

Today you’re more likely to be weighing whether an industry-specific service should be leveraged at all. This is the ‘build-versus-buy’ decision people talk about, and the considerations are the cost to integrate and operate a service versus the cost to build and operate it yourself. Using OPC (other people’s code) is what most people opt for, but that choice can come with unanticipated costs and much more complexity than you planned on.

To master this best practice, ask the questions and do the math. You’ll usually find that the added cost and complexity bring back more value to the business, but not always.
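
As a very rough illustration of ‘doing the math’, the comparison can be reduced to total cost of ownership over the period you expect to run the service. This is a minimal sketch; every dollar figure below is a placeholder assumption, not a benchmark.

```python
# Rough build-versus-buy comparison for an industry-specific service.
# All dollar figures are illustrative placeholders, not real benchmarks.

def total_cost(upfront: float, monthly: float, months: int) -> float:
    """Total cost of ownership over a planning horizon."""
    return upfront + monthly * months

HORIZON_MONTHS = 36  # how long you expect to run the service

# "Buy": integrate and operate a third-party industry cloud service
buy = total_cost(upfront=40_000,      # integration effort
                 monthly=6_000,       # subscription + operations
                 months=HORIZON_MONTHS)

# "Build": write and operate the equivalent service yourself
build = total_cost(upfront=250_000,   # development effort
                   monthly=3_000,     # hosting + maintenance
                   months=HORIZON_MONTHS)

print(f"Buy:   ${buy:,.0f} over {HORIZON_MONTHS} months")
print(f"Build: ${build:,.0f} over {HORIZON_MONTHS} months")
print("Cheaper option:", "buy" if buy < build else "build")
```

The real exercise also has to weigh the value returned, which is harder to put a number on, but even a sketch like this forces the question into the open.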

B. Ensure Systemic Security Across the Board

Sufficient security with industry-specific clouds is never to be assumed. Those sold by the larger cloud providers may be secure as stand-alone services but then turn into a security vulnerability when integrated and operated directly within your solution.

The best practice here is to design and build security into the custom applications that leverage industry clouds, with an eye to integration so that no new vulnerabilities are opened. The sound approach is to take two things that are assumed to be secure independently, and then add dependencies deliberately, watching how each one changes or improves the overall security profile.

C. Seek Out Multiple Industry-Specific Services & Compare

It is fairly common for platforms to be built using industry-specific cloud services from just one provider. That may be the easy way to move forward, and often you’ll feel fairly confident in the decision based on your research or referrals. But just as often the best option is on another cloud, or perhaps from an independent industry cloud provider that decided to go it alone.

It’s good advice to say you shouldn’t limit the industry-specific services you are considering. As time goes on, there will be dozens of services for tasks such as risk analytics. You will be best served by going through detailed evaluations of which one is the best fit for your structure top-to-bottom, taking your operational dynamics into consideration as well.

Deep Reinforcement Learning (DRL) for Better Cybersecurity Defences

Reading Time: 4 minutes

Most needs diminish over time, but not this one. As so much more of people’s work and personal worlds has gone digital, and an ever greater amount of everything sits in the cloud, there is that much more opportunity for cyber attackers to go after valuable data and information. From malware to ransomware and all the wares in between, they’re out there and they’re becoming more complex right in step with the digital world’s own daily advances.

Here at 4GoodHosting, like any other good Canadian web hosting provider, we have SSL certificates that can secure a website for basic e-commerce purposes. But that’s the extent of what folks like us are able to offer with regard to web security. Cybersecurity is a much larger umbrella, and a more daunting one if it’s possible for an umbrella to be daunting. Fortunately there are much bigger players working on defences, so the good guys still have a chance of staying intact in the face of ever-greater cybersecurity threats.

One of the more promising recent developments is deep reinforcement learning, an offshoot of sorts from other artificial intelligence aims where researchers found cross-purpose applications for what they had been working on. So let’s use this week’s blog entry to look at it, since nearly everyone these days has some degree of interest in cybersecurity, if not an outright need for it.

Smarter & More Preemptive

Deep reinforcement learning offers smarter cybersecurity, earlier detection of changes in the cyber landscape, and the opportunity to take preemptive steps to scuttle a cyber attack. In recent and thorough testing against realistic, widespread threats, deep reinforcement learning was effective at stopping cyber threats and rendering them inept up to 95% of the time. The performance of these algorithms is definitely promising.

It is emerging as a powerful decision-support tool for cybersecurity experts and one that has the ability to learn, adapt to quickly changing circumstances, and make decisions autonomously. In comparison to other forms of AI that will detect intrusions or filter spam messages, deep reinforcement learning expands defenders’ abilities to orchestrate sequential decision-making plans so that defensive moves against cyberattacks are undertaken more ‘on the fly’ and in more immediate response to threats that are changing as they happen.

This technology has been built with the understanding that an effective AI agent for cybersecurity needs to sense, perceive, act and adapt, based on the information it can gather and on the results of the decisions it enacts. Deep reinforcement learning has been crafted with that need very much in mind, combining reinforcement learning and deep learning so that it is agile and adept in situations where a series of decisions need to be made in a complex environment.

Incorporating Positive Reinforcement

Another noteworthy functionality of DRL is how good decisions leading to desirable results are reinforced with a positive reward expressed as a numeric value, while bad choices leading to undesirable outcomes come with a negative cost. This part of DRL has strong fundamental A.I. underpinnings, as it is similar to how people learn tasks. Children learn at a young age that if they do something well, and it leads to an outcome the people around them see as favorable, they will benefit from it in some way.

The same thing, of sorts, occurs with DRL here in deciphering cybersecurity threats and then disabling them. The agent can choose from a set of actions, and with each action comes feedback, good or bad, that becomes part of its memory. There’s an interplay between exploring new opportunities and exploiting past experiences, and working through it all builds up a memory of what works well and what doesn’t.
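
To make the reward signal and the explore-versus-exploit interplay concrete, here is a minimal, bandit-style reward-learning sketch. It is a toy illustration of the general idea described above, not the DQN or actor-critic implementations used in the actual trials; the threat states, actions and rewards are all invented for the example.

```python
import random

# Toy setting: states are threat levels, actions are defensive moves.
# Rewards are invented: +1 if the action neutralizes the threat, -1 if not.
# This is NOT the DQN used in the cited trials, just the general idea.
STATES = ["low", "medium", "high"]
ACTIONS = ["monitor", "isolate_host", "block_traffic"]
BEST = {"low": "monitor", "medium": "isolate_host", "high": "block_traffic"}

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # the agent's "memory"
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

for episode in range(5000):
    state = random.choice(STATES)
    # Explore a new action sometimes, otherwise exploit past experience.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward = 1.0 if action == BEST[state] else -1.0   # feedback, good or bad
    q[(state, action)] += alpha * (reward - q[(state, action)])

for s in STATES:
    learned = max(ACTIONS, key=lambda a: q[(s, a)])
    print(f"threat={s:6s} -> learned response: {learned}")
```

After enough episodes the stored values steer the agent toward the responses that earned positive rewards, which is the same reinforcement loop DRL applies at a much larger scale.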

4 Primary Algorithms

Recent advances that have taken DRL to the next level and put it on the cybersecurity world’s radar as a promising new A.I.-based technology have been based on four deep reinforcement learning algorithms: DQN (Deep Q-Network) and three variations of what’s known as the actor-critic approach. Here is an overview of what was seen in the trials:

  • Least sophisticated attacks: DQN stopped 79% of attacks midway through attack stages and 93% by the final stage
  • Moderately sophisticated attacks: DQN stopped 82% of attacks midway and 95% by the final stage
  • Most sophisticated attacks: DQN stopped 57% of attacks midway and 84% by the final stage. This was notable as it was far higher than the other 3 algorithms

While DRL for cybersecurity looks promising and may someday be a well-known acronym in the world of web technology and online business, the reality is that for now at least it will need to work in conjunction with humans. A.I. can be good at defending against a specific strategy but isn’t as adept at understanding all the approaches an adversary might take, and it is not ready to replace human cybersecurity analysts yet.

Average World Broadband Monthly Usage Nears 600GB

Reading Time: 3 minutes

Clipping can have all sorts of different meanings for different people, but the only time it has a positive context is if you’re talking about scrapbooking or something similar. When the maximum speed limits for broadband internet connectivity are reached you are going to experience something called broadband speed clipping. This happens very often with video streaming, conferencing, gaming and other bandwidth-hungry pursuits.

To put in perspective how much of a problem this is becoming: a little more than a year ago a report found that the number of U.S. broadband users who regularly push the upper limits of their provisioned internet speed around 9 p.m. had increased 400% from just one year earlier. That makes sense when you consider how many people are streaming content at that time of night in a country of 330+ million people, and the only reason the same thing doesn’t happen in Canada to the same extent is that we have only about 10% of that population.

All of this leads to the inevitable reality that the entire world is stretching broadband networks to their limit like never before. For us here at 4GoodHosting, this is something any reputable Canadian web hosting provider will take an interest in, given the nature of what we do and how connectivity speed and the simple availability of sufficient bandwidth are front and center for a lot of the businesses and other ventures we provide reliable web hosting for.

Hybrid Infrastructure Strain

Where we are now is that the percentage of subscribers pushing against the upper limits of their broadband networks’ speed tiers has increased dramatically over the past few years, putting massive strain on hybrid infrastructures, with data consumption within those infrastructures rocketing upward right alongside it.

All of this was measured with a suite of broadband management tools and used to pinpoint usage patterns, especially the differences between two key categories: subscribers on flat-rate billing (FRB) plans that offer unlimited data usage, and those on usage-based billing (UBB) plans where subscribers are billed based on their bandwidth consumption.

The results for the first 10 months of 2022 showed that average broadband consumption had approached a new high of nearly 600GB per month by that point, and the percentage of subscribers on gigabit speed tiers had doubled over the previous 12 months. Average per-subscriber consumption was 586.7GB at the end of 2022, a nearly 10% increase from 2021, and the percentage of subscribers provisioned for gigabit speeds rose to 26% over that same time frame.

That’s more than double the 12.2% reported for the fourth quarter of 2021. Nearly 35% of surveyed subscribers were receiving gigabit network speeds, itself an increase of 13% from a year earlier and 2.5 times the percentage of FRB gigabit subscribers. Year-over-year upstream and downstream bandwidth growth remained relatively even for Q4 2022, at 9.4% and 10.1% respectively.

Monthly 1TB+ Usage More Common

The 586.7GB average data usage number for that Q4 was up 9.4% from its Q4 2021 equivalent of 536.3GB. This shows the year-over-year pace had slowed since its peak of 40.3% growth, to 482.6GB, in Q4 2020. Along with this, the percentage of power users consuming 1TB or more per month was 18.7% for Q4 2022, which equates to a year-over-year increase of 16% and 10 times the share seen just five years ago.
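
For anyone wanting to double-check figures like these, the percentages are simple growth-rate calculations on the reported numbers. A quick sketch using the values quoted above:

```python
# Year-over-year growth rates from the consumption figures quoted above.
def growth(new: float, old: float) -> float:
    """Percentage change between two values."""
    return (new - old) / old * 100

avg_2022, avg_2021 = 586.7, 536.3          # GB per subscriber per month
print(f"Average usage growth: {growth(avg_2022, avg_2021):.1f}%")       # ~9.4%

gigabit_2022, gigabit_2021 = 26.0, 12.2    # % of subscribers on gigabit tiers
print(f"Gigabit tier growth:  {growth(gigabit_2022, gigabit_2021):.0f}%")  # more than doubled
```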

This is a very clear reflection of the extent to which more people are going really heavy on bandwidth with streaming and the like these days. ‘Superpower’ users are defined as anyone consuming 2 terabytes or more a month, and the number of these super users increased by 25% in Q4 2022, a significant jump from 2.7% to 3.4% of subscribers and roughly a 30x increase over the previous 5 years.

Another relevant consideration is that as migration to faster speed tiers continued, the percentage of subscribers in tiers under 200Mbps went down by 43% in that same fourth quarter of 2022. Median usage for the cross-section of ‘standard’ users was 531.9GB, more than 34% higher than the 396.6GB recorded across all subscribers.

The biggest single-day aberration for higher-than-average usage was Christmas Day, with significantly higher average usage beginning in the mid-morning hours and continuing into the afternoon. Clearly demand for greater internet speed continues to increase, and network planning needs to be done around this ever-present reality. Here in Canada there is an ongoing progression towards more rural communities having high-speed internet, and this will need to be a consideration for network providers as well.

Risk of Exploitation for Widely-Used WordPress Plugins

Reading Time: 3 minutes

WordPress is a big deal around here at 4GoodHosting, and like other Canadian web hosting providers we’ve recently debuted our Managed Canadian WordPress hosting. It’s optimized for WordPress sites, and the reason it’s been worth the time and effort to put together is that WordPress powers more sites than any other platform in the world. It’s certainly come a long way from its humble beginnings as a means of putting your blog on the web.

But its popularity is also based on the thousands of plugins that users can choose from to customize their pages. That popularity is the reason these plugins have become the target of SQL injection attacks recently, and with many of our web hosting in Canada customers having WP sites it makes sense for us to use this week’s blog entry to discuss this and make anyone who needs the information aware of the risk.

This is because a little less than 2 months ago (December 19, 2022) a critical security alert was issued for users of multiple WordPress plugins. Their failure to properly verify request parameters was increasing the risk of SQL injection attacks.

The threat factor was assumed to be magnified even more by the fact that many people have so many plugins in use within their website that they may not even be able to identify whether or not they’re at risk. These types of attacks can give an attacker the ability to access sensitive information, prompt the deletion or modification of data, or even take control of the entire website.

Input Validation Issue

The biggest of the discovered vulnerabilities relates to the lack of proper input validation on the ‘code’ parameter of the /pmpro/v1/order REST route. The result is an unauthenticated SQL injection vulnerability, possible because the parameter was not properly escaped before being used in a SQL statement.
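
The underlying pattern is language-agnostic. The affected plugins are PHP, but this Python sketch shows the same contrast between dropping a request parameter straight into a SQL string and using a parameterized query; the table, column and payload here are invented for illustration and are not the plugin’s actual code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, code TEXT, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 'ABC123', 49.99)")

user_supplied = "ABC123' OR '1'='1"   # a classic injection payload

# UNSAFE: the parameter is concatenated into the SQL statement unescaped,
# the same class of flaw described above.
unsafe_sql = f"SELECT * FROM orders WHERE code = '{user_supplied}'"
print("unsafe query returns:", conn.execute(unsafe_sql).fetchall())  # leaks every row

# SAFE: a parameterized query lets the driver handle escaping, so the
# payload is treated as a literal value rather than as SQL.
safe_rows = conn.execute(
    "SELECT * FROM orders WHERE code = ?", (user_supplied,)
).fetchall()
print("parameterized query returns:", safe_rows)  # no match, nothing leaked
```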

The next serious vulnerability relates to the lack of proper input validation on the ‘s’ parameter of the ‘edd_download_search’ action. This one is said to stem from the ‘edd_ajax_download_search()‘ function located in the ‘./includes/ajax-functions.php’ file.

The third of these significant vulnerabilities relates to the lack of proper input validation on the ‘surveys_ids’ parameter of the ‘ays_surveys_export_json’ action. In this case an attacker needs to be authenticated, but administrator privileges are not required; it can be exploited, for example, by an account with only a ‘subscriber’ privilege level.

Explicit Control

From there the values are inserted into SQL queries with little or no modification, making them vulnerable to classic SQL injection attacks. As mentioned, the attacker may then have the ability to access sensitive information, delete or modify data, or even take control of the entire website.

These vulnerabilities were found in widely used plugins, so it is likely a significant number of websites are at risk. Anyone using these plugins is strongly advised to update their software immediately to protect their websites from potential exploitation. WordPress IS aware of the issue, and the teams behind these plugins are working quickly to address the vulnerabilities and release updates.

Addressing the Issue

At the time of this release, the three vulnerabilities have been assigned CVE identifiers, but they are still pending approval. This means they are currently being evaluated by the relevant authorities to determine their severity and potential impact. Three WAF rules have been issued for user reference in response to these security vulnerabilities:

  • CVE-2023-23488 -> 406016
  • CVE-2023-23489 -> 406017
  • CVE-2023-23490 -> 406018

There will be a need to continuously monitor the results and any false positive rates.

Challenges of Data Gravity

Reading Time: 4 minutes

Unless something is entirely weightless it has the potential to be pulled earthward by gravity, and the term is applied figuratively to data when people talk about data gravity. As you likely know from being here, we do like to talk about data in our blog, and given that 4GoodHosting is a Canadian web hosting provider that shouldn’t come as much of a surprise. But anyone and everyone operating in the digital space has more demands than ever before when it comes to managing and warehousing data, so if you’re not familiar with data gravity that’s where we’ll start this week.

The clinical definition of data gravity is the ‘propensity for bodies of data to draw an expanding swath of applications and services into closer proximity’. It affects an enterprise’s ability to innovate, secure customer experiences, and even deliver financial results on a global scale. As such it’s an issue that major businesses will have to deal with if they’re going to keep hitting the same benchmarks they’ve operated against for likely more than two decades now, if not longer.

No one anywhere is taking their foot off the gas even a bit with making progress, so data storage and management challenges continue to be magnified. Data gravity is certainly one of them, so let’s look at it in more detail with this week’s entry.

Heavy Sets

When data piles up, applications and services are always going to move closer to the data sets, and this is data gravity at work. It has already been identified as a key megatrend for certain industries, and its prominence is expected to double in the next couple of years. It may mean problems down the road for some organizations and their IT infrastructure.

Data should be openly accessible to its related applications and services, and easily managed, analyzed, and activated regardless of location. For that to happen, traffic must flow easily across a company’s entire network footprint, including, among other point-to-point routes, from the public internet to every private point of presence for the business.

The problem is that the gravity of massive data sets can lock applications and services in place within one particular operational location. When the stored data is trapped there it can’t be made useful anywhere else, and this problem related to inevitable centralization can affect every other aspect of the system as a whole.

The fix is to make sure that no particular data set becomes uncontrollable by overwhelming IT capacity with excessive data volumes. But how is that done?

Agile Architecture

The starting consideration in evaluating that approach has to be the volume of data being generated and consumed. The number and type of places where data is stored and used, the way data is distributed across those places, and the speed at which the data is transmitted also need to be taken into account.

But managing data gravity effectively can become a competitive differentiator, and if you have solid and well-built infrastructure you’ll be more ahead of the curve. Data gravity affects a company’s IT ability to be innovative and agile, and whether or not that’s a big deal depends on what you’re doing with your venture.

The main thrust of addressing data gravity has to be two-fold. Teams need to start by maintaining multiple centers where data processing takes place. From there they must design an architecture where applications, compute and storage resources can move efficiently within and among those centers. But the cost and time involved in moving data around once IT elements are decentralized is often considerable and underestimated. Unanswered questions around scale can cause transaction and egress fees to pile up, and vendor lock-in causing headaches is common too.
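
Those transaction and egress fees are easy to underestimate precisely because they scale with volume. Here is a back-of-the-envelope sketch; the per-GB rate is a placeholder assumption, not any provider’s actual pricing.

```python
# Rough egress cost estimate for moving data between decentralized centers.
# The rate below is a placeholder assumption, not real provider pricing.
EGRESS_RATE_PER_GB = 0.09          # assumed USD per GB transferred out

def monthly_egress_cost(tb_moved_per_month: float) -> float:
    """Estimated monthly egress bill for a given volume of data moved."""
    return tb_moved_per_month * 1024 * EGRESS_RATE_PER_GB

for tb in (1, 10, 100, 500):
    print(f"{tb:>4} TB/month  ->  ~${monthly_egress_cost(tb):,.0f}/month")
```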

Colocation Fix

Colocation data centers can be one fix when hybrid or multi-cloud setups are being utilized. If they’re located near a cloud location, they are able to facilitate solutions from multiple clouds. This eliminates data duplication and reduces latency at the same time. The right colocation provider can provide cross-connects, private circuit options and hyperscale-ready onramps. The only hang-up can be geographical distribution.

Doubling down on major urban centers to create an emphasis on expanding interconnected ecosystems around existing data gravity isn’t always such a straightforward solution. But one newer approach being seen is organizations differentiating themselves by focusing on data hubs in multi-tenant data centers near cloud campuses at the edge. By placing data closer to the end user they solve much of the latency problem. Further, the ability to process some data close to cloud computing applications can solve the problem of data storage being too dense to move.

Edge Data Servers to Meet Need

This leads to the seemingly most ideal approach to dealing with data gravity: using edge data centers where the architecture is prepped for the needed hybrid or multi-cloud solutions. The best ones will work closely alongside cloud-based models, where storage capacity at the edge can reduce the size of otherwise centralized data sets. This is done by discarding unneeded data and compressing the vital stuff.

These facilities will also need to be set up for increased bandwidth, as more and more processing will take place in the one data storage location. If configured properly, edge data servers can serve as the first stop for processing before data moves to the cloud, and it is predicted that within 3 years 75% of data will be processed at the edge, including 30% of total workloads.
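
As a minimal sketch of the ‘discard the unneeded data, compress the vital stuff’ step an edge node might perform before anything is shipped to a central cloud, consider the following; the field names and the filtering rule are invented purely for illustration.

```python
import gzip, json

# Raw readings as they might arrive at an edge data center; the schema and
# the "keep only anomalies" rule are hypothetical, purely for illustration.
readings = [{"sensor": i, "value": (i * 7) % 100} for i in range(10_000)]

vital = [r for r in readings if r["value"] > 90]      # discard unneeded data
payload = gzip.compress(json.dumps(vital).encode())   # compress what's left

raw_size = len(json.dumps(readings).encode())
print(f"raw: {raw_size:,} bytes -> shipped to cloud: {len(payload):,} bytes")
```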

Looking into an industry crystal ball, it’s hard to predict which ways data gravity will influence the ways networks work and look. That said, solving data gravity in the future will involve an intermeshed collaboration between parties such as content providers, edge data centers and enterprise IT teams.

DDR5 Penetration for Servers Set to be a Trend for 2023

Reading Time: 3 minutes

People with computing know-how won’t need an introduction to DDR5 RAM, in large part because the upgrade from DDR4 has been rolled out and widely adopted for about a year and a half now. What also doesn’t need a whole lot of explanation is how Random Access Memory is what allows everything to happen at the most elemental level in digital devices; for a layman’s understanding, look no further than how that 8GB smartphone you used to have needed to be replaced much sooner than you thought.

In comparison to DDR4, DDR5 RAM has better speeds, better power management and efficiency, and more RAM in the same physical package. The reason DDR5 is so noteworthy here is that it has a lot of significance for servers too. We’ve talked about it at length and are definitely qualified to do so, but servers are really being pushed to the limit these days and that’s going to be an ongoing reality for the long-foreseeable future.

DDR4 memory bandwidth per CPU core has declined, and just as that means DDR4 has reached its limit for next-gen CPUs, the same type of ceiling has been hit for servers. Staying ahead of the resulting lag and lessened performance is something every good Canadian web hosting provider can relate to, and that certainly applies to us here at 4GoodHosting too.

Exceed 50% for First Half

The way DDR5 prices continue to fall for downstream manufacturers has created an opportunity to upgrade product iterations, and that’s what is being seen with Intel, AMD and other manufacturers offering conversions, to the point that the DDR5 penetration rate will be further enhanced. On the server side, new CPUs that support 12 DDR5 memory channels are right in line with what servers need now, and all web hosting providers in Canada are looking to the Sapphire Rapids release for the data center sector coming very soon this year.

The penetration rate of DDR5 server memory modules will continue to rise as mainstream server CPUs supporting DDR5 ship on a larger scale. This should mean that those of us with large data centers, and reworked servers in place for them, will see more affordable prices for putting those modules in place. We do know that high-end DDR5 7000MHz specification products don’t stay on shelves for any time at all as it is right now, and should DDR5 prices continue to fall, web hosting providers will be looking to upgrade as well.

Less Power Usage Appeal Too

Data centers have increasing power demands, and that goes along with ever-increasing power needed to cool them, which creates something of a vicious circle. Greater adoption of DDR5 RAM in web hosting and data center management promises to have benefits here too. DDR5 runs at a lower voltage, down to 1.1 volts instead of the standard 1.2 volts. You might think that as speeds go higher the voltage would have to increase, but DDR5 shows how RAM manufacturers are able to produce super-fast RAM at lower voltages.

SK Hynix chips are among the best in the business, and the company has developed super speedy DDR5-8400 RAM with superior performance while not needing more than that 1.1 volt mark. DDR5 also has the server-side appeal of handling voltage regulation on the modules themselves, as opposed to requiring the motherboard to do it. Last but not least, DDR5 RAM can have on-die error correction code, which helps detect and correct memory errors on the RAM itself, and the way this benefits web hosting data centers needs no explanation either.

Boosted Signaling and Bandwidth

The biggest appeal for the industry with DDR5 is the way system bandwidth has the potential to be nearly doubled, from a 33.6GB/s average for DDR4 to 69.2GB/s for DDR5. That headroom is certainly needed given the data collection and concentration demands we talked about earlier. Less system latency is a big plus too, but another factor people aren’t talking about to the same extent is better bank group enabling.

There is the potential for a near doubling here too when moving from DDR4 to DDR5 RAM, and this means improved memory channel efficiency. Data storage goes hand in hand with the need for efficient data retrieval, and this of course is central to website function as it relates to web hosting. The DDR5 standard’s figure here being 64, as compared to 32 for DDR4, is another big part of why web hosting providers can’t get DDR5 into their hardware configurations fast enough.
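
For those curious where bandwidth figures like the ones above come from: per-module peak bandwidth is simply the transfer rate multiplied by the 64-bit bus width, and system bandwidth then scales with the number of channels populated. A quick sketch using common JEDEC speed grades; the channel counts are illustrative, and the 33.6/69.2 GB/s averages quoted above depend on the specific configurations that were measured.

```python
# Peak theoretical bandwidth for a single 64-bit DIMM:
#   bandwidth (GB/s) = transfer rate (MT/s) * 64 bits / 8 / 1000
def dimm_bandwidth_gbs(mt_per_s: int) -> float:
    return mt_per_s * 64 / 8 / 1000

print(f"DDR4-3200: {dimm_bandwidth_gbs(3200):.1f} GB/s per module")  # 25.6
print(f"DDR5-4800: {dimm_bandwidth_gbs(4800):.1f} GB/s per module")  # 38.4
print(f"DDR5-6400: {dimm_bandwidth_gbs(6400):.1f} GB/s per module")  # 51.2

# System bandwidth scales with channel count, which is why 12-channel
# DDR5 server CPUs are such a big step up (channel counts illustrative).
print(f"8  x DDR4-3200 channels: {8 * dimm_bandwidth_gbs(3200):.0f} GB/s")
print(f"12 x DDR5-4800 channels: {12 * dimm_bandwidth_gbs(4800):.0f} GB/s")
```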

Progressing Towards Powering Devices with Ocean Energy

Reading Time: 3 minutes

Fair enough to think of the World Wide Web as the most vast expanse you can think of, but in the literal natural world there are no vaster expanses than the world’s oceans. Look no further than the Pacific Ocean, with a size that exceeds that of all the continents on earth put together and 46% of the water surface on earth. It also holds the greatest stretch of open water with absolutely nothing standing in between two points: roughly 19,000km between the Colombian coast and the Malay Peninsula.

Enough about ocean bodies of water for now. The Web is a vast expanse in its own right, and the number of devices in the world that rely on it is a fairly mammoth number too. Perhaps the two are coming together now, with news that researchers are well on their way to finding a way to harness the energy in the oceans to power devices needed out there.

It’s been said that there’s no stopping the tides, and when you think about the way the tides move based on lunar cycles and the all-powerful nature of all that, it’s really no surprise this is being considered as a potential supreme power source.

We’re like any other reputable Canadian web hosting provider here at 4GoodHosting in that we’re the furthest thing from scientists, but the prospect of anything that can provide solutions to the world’s growing power needs is something we’ll take an interest in right away. So this is definitely interesting, and as such it’s our blog entry topic for this week.

Utilizing TENGs

In a world of global warming and resultant wilder weather there is even more of a need to stay on top of tsunamis, hurricanes, and maritime weather in general. There are sensors and other devices on platforms in the ocean to help keep coastal communities safe but they need a consistent and stable power supply like any other type of device.

Those ocean sensors are required to collect critical wave and weather data, and if that can’t be guaranteed there are safety concerns for coastal communities that rely on accurate maritime weather information. As you’d guess, replacing batteries at sea is also expensive, so what if this could all be avoided by powering devices indefinitely from the energy in ocean waves?

There are researchers working to make this a reality with the development of TENGs: triboelectric nanogenerators, small powerhouses that convert wave energy into electricity to power devices at sea. Developed at a larger scale, these TENGs may be able to power ocean observation and communications systems with acoustic and satellite telemetry.

The good news to that end is that they are low cost, lightweight, and can efficiently convert slow, uniform or random waves into power. This makes them especially well suited to powering devices in the open ocean, where monitoring and access are going to be a challenge and likely come with a lot of cost too.

Converter Magnets

TENGs work by means of carefully placed magnets that convert energy more efficiently than other cylindrical versions of the same technology, making them better at transforming slow, uniform waves into electricity. More on how they work relates to the triboelectric effect, and for most of us the best way to conceptualize this is to think of all the times we’ve received a static electric shock from clothing that’s fresh out of the dryer.

A cylindrical TENG is made up of two nested cylinders with the inner cylinder rotating freely. Between the two cylinders are strips of artificial fur, aluminum electrodes, and a material similar to Teflon called fluorinated ethylene propylene (FEP). With the device rolling along the surface of an ocean wave, the artificial fur and aluminum electrodes on one cylinder rub against the FEP material on the other cylinder. The static electricity that results can be converted into power.

More movement means more energy, and researchers have been positioning magnets to stop the inner cylinder from rotating until the device reaches the crest of a wave, allowing it to build up ever increasing amounts of potential energy. Near the crest of the wave the magnets release, and the internal cylinder starts rolling down the wave very quickly. The faster movement produces electricity more efficiently, generating more energy from a slower wave.

The FMC-TENG is lightweight and can be used in both free-floating devices and moored platforms. Eventually it may be able to power integrated buoys with sensor arrays to track open ocean water, wind, and climate data entirely using renewable ocean energy.

The Inevitability of Network Congestion

Reading Time: 3 minutes

Slow page load speeds are one thing, but the frustration people have with them is only a small part of what grows out of digital network congestion. In the same way motor vehicle traffic becomes even more of a problem as cities become more populated, a network only has so much capacity. And when that capacity is exceeded, performance, namely the speed at which requests are handled, starts to suffer. Interpersonal communications are way down the list of issues seen as urgent in relation to this, though, and that doesn’t need explanation.

Excess latency resulting from congestion can be a big problem when major operations rely on those networks. This is even more of a looming issue with the way healthcare is increasingly relying on 5G network connectivity, and that’s one area where they especially can’t have lapses or downtime because of network congestion. Interestingly, what is being seen with this congestion issue is that some key algorithms designed to control these delays on computer networks are actually allowing some users to take most of the bandwidth while others get essentially nothing.

Network speeds are of interest to a lot of service providers because of their own operations, and here at 4GoodHosting that applies to us as a Canadian web hosting provider like any other. This is definitely a worthy topic of discussion, because every one of us with a smartphone is relying on some network functioning as it should every day. So what’s to be made of increasing network congestion?

Average Algorithms / Jitters

A better understanding of how networks work may be the place to start. Computers and other devices that send data over the internet break it into smaller packets, and special algorithms decide how fast those packets should be sent. These congestion-control algorithms aim to discover and exploit all the available network capacity while sharing it with other users on the same network.

As mentioned above, though, these congestion-control algorithms don’t always work very well. A user’s computer does not know how fast to send data packets because it lacks knowledge about the network. Sending packets too slowly makes poor use of the available bandwidth, while sending them too quickly may overwhelm the network and mean packets get dropped.

Congestion-control algorithms use packet losses and delays as details to infer congestion and to decide how quickly data packets should be sent. But packets can get lost and delayed for reasons other than network congestion, and one common way that is occurring now more than ever is what the industry calls ‘jitter’. This is where data may be held up and then released in a burst with other packets, and inevitably some of them have to be delayed in sending since they can’t all go at once.
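
Most congestion-control schemes boil down to some variant of additive-increase/multiplicative-decrease (AIMD): grow the sending window gradually while things look fine, and cut it sharply when loss or excessive delay suggests congestion. Here is a minimal sketch of that general idea, with a faked network rather than any specific production algorithm.

```python
import random

# Additive-increase / multiplicative-decrease (AIMD), the classic idea
# behind many congestion-control algorithms. The "network" here is faked:
# a loss is signalled whenever the window exceeds a hidden capacity,
# with some jitter-like noise added to the signal.
HIDDEN_CAPACITY = 50.0   # packets in flight the path can really sustain
window = 1.0             # congestion window (packets in flight)

for rtt in range(40):
    noise = random.uniform(-5, 5)               # noise in the loss/delay signal
    congested = window + noise > HIDDEN_CAPACITY
    if congested:
        window = max(1.0, window / 2)           # multiplicative decrease
    else:
        window += 1.0                           # additive increase
    print(f"RTT {rtt:2d}: window = {window:5.1f}  {'LOSS' if congested else ''}")
```

The noise term is the crux of the problem described above: when the signal is ambiguous, different senders back off at different times and rates, which is how unequal sharing and eventual starvation can emerge.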

More Predictable Performance

Congestion-control algorithms are not able to distinguish between delays caused by congestion and delays caused by jitter. This can be problematic because delays caused by jitter are unpredictable, and the resulting ambiguity confuses senders so that they estimate delays differently and send packets at unequal rates. The researchers found this eventually leads to what they call ‘starvation’, the term for the situation described above where some users get most of the bandwidth and many get next to nothing.

Even in tests of new and better packet control and sending algorithms, there were always scenarios where some people got all the bandwidth and at least one person got basically nothing. Researchers found that all existing congestion-control algorithms designed to curb delays are delay-convergent, which means starvation continues to be a possibility.

Finding a fix for this is going to be essential if huge growth in network users is the reality, and of course it is going to be, given the way the world’s going and with population growth. The need is for better algorithms that can enable predictable performance at a reduced cost and, in the bigger picture, for systems with predictable performance, which is important since we rely on computers for increasingly critical things.

Understanding Relevance of CBRS for 5G Network Advancements

Reading Time: 4 minutes

Here we are in a brand-new year the same way we were at this time last year, and one thing we can likely all agree on is that they sure do go by quickly. As is always the case, a lot is being made of what to expect in the digital communication world for the coming year, and in many ways the list features a lot of the usual suspects, just with even more in the way of advances. We specialize in web hosting in Canada here at 4GoodHosting, and we took the chance a few blog entries back to talk about what might be seen with advances in web hosting for 2023.

But as always we like to talk about the industry at large and beyond with our blog, and that’s what we will be doing again here, considering the ongoing shift to 5G continues to be a forefront newsworthy topic as we all look at what might be part of the coming year. Every person who has major newfound success nearly always has some behind-the-scenes individuals who were integral to that success, and in the same way, any time a new digital technology or profound tech advancement reorients the landscape there are buttresses underneath it that not a lot of people talk about.

One of these with 5G is CBRS, and it is something that will be of interest to us in the same way it will be for any good Canadian web hosting provider, and for anyone who likes to know all of what’s contributing to people being able to make better use of Web 3 technology, and in doing so get more out of the websites we make available on the World Wide Web.

So let’s get into it, and happy New Year 2023 to all of you.

Definition

CBRS is Citizens Broadband Radio Service, a band (band 48) of radio frequency spectrum from 3.55 GHz to 3.7 GHz with applications for incumbent users, priority access licensees, and general authorized access cases, the most common of which would be the thousands of different potential instances where spectrum is being accessed and utilized by unlicensed users.

This band was originally reserved for use by the U.S. Department of Defense, and for U.S. Navy radar systems in particular. More than 7 years ago the Federal Communications Commission (FCC) named the 3.5 GHz band the ‘innovation band’ and earmarked it to be opened up to new mobile users. It has since evolved into CBRS.

What it has the potential to do now is create an opportunity for unlicensed users and enterprise organizations who want to use 5G, LTE, or other 3GPP spectrum to establish their own private mobile networks. This has led the Googles, Qualcomms, Intels and Federated Wireless of the world to band together to form the OnGo Alliance, supporting CBRS implementers and adopters with the development, commercialization, and adoption of LTE solutions for CBRS.

The OnGo technology, specifications, and certifications ensure interoperability of network components, and with them businesses have more of an ability to create services and run applications on 4G LTE and 5G wireless networks. The greater relevance of all this is in how entirely new industries could sprout from this greater access to, and interoperability within, the best new broadband technologies.

How it Works

CBRS Band 48 is a total of 150MHz of spectrum ranging from 3.55 to 3.7 GHz. CBRS can be used for 4G LTE or for fixed or mobile 5G NR. The entire system is reliant on a series of CBRS standards that were set up and put in place by over 300 engineers and 60 different organizations working in conjunction with the FCC.

Contained within them are security measures, licensing details, and protocols that have been tested and determined to be most suitable, performing at a high level for communicating with devices. Certification programs were developed to help establish standards for proper CBRS deployments that follow the guidelines for identifying themselves, as well as communicating with the necessary FCC databases for operation.

The architecture of this is very noteworthy. Each CBRS domain features a Spectrum Access System (SAS) that connects to Federal Communications Commission (FCC) databases and incumbent reporting systems. The SAS will also bounce back and forth info with Environmental Sensing Capability (ESC) systems that automatically detect radar use in the area.

The component that supports a CBRS antenna or antenna array is the Citizens Broadband Radio Service Device (CBSD). CBSDs register with an SAS and request spectrum grants, and they also pass along their unique geolocation, height, indoor or outdoor status, along with a unique call sign registered with the FCC. All of this is done over HTTPS, with messages encoded in JavaScript Object Notation (JSON).
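
To give a feel for that registration exchange, here is a sketch of a CBSD registration message being prepared as JSON before it would be POSTed to a SAS over HTTPS. The field names follow the general shape of the WInnForum SAS-CBSD messages, but treat them as illustrative rather than a verified schema; the endpoint URL and identifiers are made up.

```python
import json

# Illustrative CBSD registration payload; field names approximate the
# WInnForum SAS-CBSD protocol and the values/URL are entirely fictional.
registration_request = {
    "registrationRequest": [{
        "fccId": "EXAMPLE-FCC-ID",
        "userId": "example-operator",
        "cbsdSerialNumber": "SN-000123",
        "callSign": "EXAMPLE1",          # call sign registered with the FCC
        "cbsdCategory": "B",
        "installationParam": {
            "latitude": 49.2827,         # unique geolocation
            "longitude": -123.1207,
            "height": 12.0,              # metres
            "heightType": "AGL",
            "indoorDeployment": False,   # indoor or outdoor status
        },
    }]
}

body = json.dumps(registration_request, indent=2)
print(body)
# In a real deployment this body would be sent over HTTPS, e.g.:
#   POST https://sas.example.com/v1.2/registration   (fictional endpoint)
```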

Major Advantages

As we have stated, CBRS enables enterprise organizations to establish their own private LTE or 5G networks, and what this does is create an ‘express lane’ of sorts: wireless connectivity for enterprise applications that require wider coverage, interference-free wireless spectrum, and guaranteed service level agreements (SLAs) for network performance metrics such as latency and throughput, with those needs met to the extent they need to be.

The most prominent CBRS benefits, at this point, look likely to be the ability to:

  • Deliver up to 10x wider coverage, indoors or outdoors
  • Offer superior SIM-based authentication that relies on centralized encryption by default
  • Enable mobile device handover between access points at unnoticeable speeds
  • Scale digital automation initiatives better as organizations invest in a new generation of use cases with computer vision sensors, automated mobile robots (AMRs), and voice and video communication tools that require real-time exchanges to make computations and provide data that can be relied on for making major decisions

A.I. and Ongoing Impact on Web Hosting Industry / Technology

Reading Time: 5 minutes

We are nearing the end of another calendar year, and we imagine many of you are like us in thinking that this one has flown by just like the last one and the one before that. Apparently that’s the way it goes as you get older, but we prefer to believe it’s just because we are all busy at work most of the time and that’s why those 12 months go by as quickly as they do. And if you’re reading this, it’s likely that what’s keeping you preoccupied is some type of business or personal venture that involves a website.

You don’t get far at all these days in business without one, and it’s probably fair to say you don’t actually get anywhere at all without one. We’re fully in the information age now, and if you want to be reliably visible to the greatest volume of prospective customers you need to be discoverable via their smartphone or notebook. Simple as that, and we won’t go on at length about it. But as mentioned, 2022 is coming to a close, so we thought we’d take our last blog entry of the year and center it around one of the most newsworthy topics of all for the digital world in 2022: A.I.

More specifically, how artificial intelligence is changing the parameters of what we do here as a good Canadian web hosting provider: providing web hosting with reliable uptime in Canada and the best in affordable Canadian web hosting too. We’re not the only ones who can attest to how artificial intelligence is changing the web hosting industry, or who can discuss it in some detail. So here goes.

Functionally Influential

Major advances in computing and internet technology over the last 10+ years in particular can be attributed to AI. The different uses and applications of AI have given it the upper hand over virtual reality and augmented reality (AR), and it’s been quite the influential force in the relatively short time it’s been a factor in the big picture of technology trends.

AI is factoring in across so many areas, from human resources to business operations to advanced marketing, security, and all types of innovative technology development being built to improve on our existing digital technologies. That’s without even mentioning all the new potential uses and applications for medical care, education, security, and retail. Even web hosting is an industry being revolutionized by A.I., and that starts to point us in the direction we’re going here.

One of the ways A.I. is poised to really factor into better web hosting in the immediate future is by detecting malware and other cybersecurity risks more capably and reliably. Giving web hosting providers a means of incorporating these advances into a product that can be offered along with Canadian web hosting packages is something we’ll be keen to see, and the demand from people who have e-commerce websites (especially larger ones for larger businesses) goes without saying.

A.I. may also better enable web service providers to keep track of outstanding tasks for their clients, and to do so much more quickly than would otherwise be possible. Anticipating customers’ needs and staying on top of their expectations will have obvious benefits for providers too, although it doesn’t have the same product-incorporation appeal that AI-driven malware detection via your web hosting provider would.

More on Better Safety

We all know how the number of cyberattacks and digital assaults has increased exponentially over the last few years, and webmasters definitely have to be more aware, more vigilant and more concerned overall. Most malware and other threats take aim at sites via algorithms, but the simple truth in explaining why A.I. has huge potential in this regard is that, quite plainly, the machine is always smarter than the man. A.I. has the ability to outsmart the malware makers and reliably stay one step ahead of them at all times.

Pairing machine learning technologies with A.I. is something we can expect to see implemented into the web hosting industry too when it comes to offering better cybersecurity through web hosting. As is always the case, the key will be in anticipating attacks more intelligently and then making better decisions about the best way to deploy defenses.

Aiming for Increased Accuracy

Any and all tedious tasks required of webmasters or developers when it comes to cyber defenses are made simpler by AI, with the assignments still carried out with the utmost precision. Regardless of the volume of traffic or sudden changes to the site, AI helps ensure uninterrupted web page performance.

Better AI will be used to do things like send pre-programmed messages, respond to visitors, and so on. We can also look forward to A.I. doing more of the work that human programmers have had no choice but to do until this point, and that freeing up of time and human resources is another offshoot benefit of how A.I. is going to help the web hosting industry in the near future.

Better Data Reports and Improved Domain Name Performance

Data generation in web hosting will benefit from A.I. too, by having reports better analyzed over a long period, with the data received and sent helping to clarify any adjustments that need to be made or changes in direction across any number of different metrics. Better analysis of purchase and repetition rates, the cost of procurement, and much more will be improved by utilizing artificial intelligence.

Improved domain name performance will be part of this too, with better research and intuition on how well domain names will perform later by observing traffic and conversion rates. Other aspects, such as the composition of the content itself, will undoubtedly influence site performance, but this information will help determine which approach yields the best results for the target audience.

Even Better Uptime Guarantees

Artificial intelligence (AI) can help service providers like us in this way too, improving web hosting uptime reliability by intelligently suggesting what is needed for optimal system redesigns, recognizing any pattern within a framework and recalling it at any time or place as needed, and taking pretty much all guesswork out of the equation.

One absolutely huge factor that we can by and large count on at this point is A.I. better anticipating increases in website traffic during peak hours. That alone is going to be HUGE for improving uptime reliability all across the board.
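
As a toy illustration of the kind of peak-hour anticipation being described, a host could forecast the next hour’s request volume from a smoothed history and scale capacity when the forecast crosses a threshold. The traffic numbers, smoothing factor and threshold below are all invented for the sketch.

```python
# Toy peak-hour anticipation: exponentially smoothed forecast of hourly
# request volume. The traffic numbers and threshold are invented.
hourly_requests = [900, 950, 1100, 1800, 2600, 3100, 2900, 1400]  # sample day
ALPHA = 0.5              # smoothing factor
SCALE_UP_AT = 2000       # requests/hour that triggers extra capacity

forecast = hourly_requests[0]
for hour, actual in enumerate(hourly_requests):
    forecast = ALPHA * actual + (1 - ALPHA) * forecast
    action = "scale up" if forecast > SCALE_UP_AT else "steady"
    print(f"hour {hour}: actual={actual:5d}  forecast={forecast:7.1f}  -> {action}")
```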

Automated Self-Maintenance

A.I. is also going to factor strongly into web hosting services by providing smarter and more focused improvements to website infrastructure and by optimizing protocols for how computerized data is used. It will help fix and maintain the structure on its own, and this ‘self-healing’ will allow hosts to check the framework for issues before they arise and take preventative measures as needed.

We can look forward to A.I. enhancing security, automating system preparation, improving thwarting of malware and viruses, and overall improving web hosting services in Canada and everywhere else in the world as well.

4GoodHosting wishes all of you Happy Holidays and a Happy New Year and we’ll see you next week for our first entry of 2023.