Cloud Infrastructure Growth Fueled by Server and Storage Price Hikes

Reading Time: 3 minutes

In art, "abstract" means created outside of any conventions or norms that apply to the subject, but abstraction in technology is entirely different. It is the technique by which a programmer hides everything except the relevant data about an object, and the aim is to reduce complexity. Abstraction has been integral to the development of cloud computing, and of course we don't need to go on about how wholly cloud computing has changed the landscape of the digital world and the business done within it.

With regards to cloud infrastructure, virtualization is a key part of how it is possible to set up a cloud environment and have it function the way it does. Virtualization is an abstraction technology itself, separating resources from physical hardware and pooling them into clouds. From there, the software that takes direction of those resources is known as a hypervisor, and it is the hypervisor that virtualizes the machine's CPU power, memory, and storage. For the early years of cloud computing it was almost unheard of for hypervisors to be maxed out. Not anymore.

This leads to a different angle on why cloud infrastructure growth continues full force even though it is becoming more challenging in relation to the expense of it. This is a topic that any good Canadian web hosting provider is going to take an interest in, and that's the case for those of us here at 4GoodHosting too. Servers are part of that hardware of course, and the way virtualization can connect two servers together without any literal physical connection at all is at the very center of what makes cloud storage so great.

The mania surrounding AI as well as the impact of inflation have pushed cloud spending even more, and the strong contributing factors to that are what we’re going to lay out here today.

Componentry Differences

Spending on computer and storage infrastructure products increased to $21.5 billion in the first quarter of this year, and spending on cloud infrastructure continues to outpace the non-cloud segment, which declined 0.9% in 1Q23 to $13.8 billion. Unit demand went down 11.4%, but average selling prices grew 29.7%.
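As a quick sanity check on those two figures, the net revenue effect of falling unit demand and rising prices can be computed directly. This is illustrative arithmetic only, using the percentages quoted above:

```python
# Net revenue change when unit demand falls 11.4% but
# average selling prices rise 29.7%.
unit_change = -0.114   # unit demand down 11.4%
asp_change = 0.297     # average selling price up 29.7%

# Revenue scales with units * price, so the net factor is the product.
net_factor = (1 + unit_change) * (1 + asp_change)
net_change_pct = (net_factor - 1) * 100

print(f"Net revenue change: {net_change_pct:+.1f}%")  # roughly +14.9%
```

In other words, prices rose fast enough to more than offset the drop in units shipped, which is consistent with overall spending growing.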

The explanation for these gains seems to be that the soaring prices come from a combination of inflationary pressure and a higher concentration of more expensive, GPU-accelerated systems being deployed by cloud service providers. AI is factoring in too, with unit sales for servers down for the first time in almost two decades and prices up due to the arrival of dedicated AI servers with expensive GPUs in them.

The $15.7 billion spent on shared cloud infrastructure in the first quarter of 2023 is a gain of 22.5% compared to a year ago. Continuing strong demand for shared cloud infrastructure is expected, and it is predicted to surpass non-cloud infrastructure in spending within this year. So we can look for the cloud market to expand while the non-cloud segment contracts, with enterprise customers shifting towards capital preservation.

Super Mega

A dip in the sales of servers and storage for hosting under rental/lease programs is notable here too. That segment declined 1.5% to $5.8 billion, but the fact that sales of gear for dedicated cloud use have gone up more than 18% over the previous 12 months makes it fairly clear that was an aberration. The increasing migration of services to the cloud is also reflected in how on-premises sales continue to slow while cloud sales increase.

Spending on cloud infrastructure is expected to have a compound annual growth rate (CAGR) in the vicinity of 11% over the 2022-2027 forecast period, with estimates that it will reach $153 billion in 2027; if so, it would make up 69% of the total spent on computer and storage infrastructure. We'll conclude for this week by mentioning again just how front and center AI is in all of this. Its extremely compute- and storage-intensive nature makes it expensive, and many firms now have AI-ready implementation as a top priority. A survey found that 47% of companies are making AI their top technology spending area over the next calendar year.
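To see how an 11% CAGR squares with the $153 billion estimate for 2027, the standard CAGR formula can be run in reverse to recover the implied 2022 base. This is a sketch with the article's round figures, not the analyst's exact model:

```python
# CAGR relates a start value, an end value, and a number of years:
#   end = start * (1 + cagr) ** years
cagr = 0.11          # ~11% compound annual growth rate
end_2027 = 153.0     # $153B forecast for 2027 (in billions)
years = 5            # the 2022-2027 forecast period

implied_2022 = end_2027 / (1 + cagr) ** years
print(f"Implied 2022 cloud infrastructure spend: ${implied_2022:.1f}B")
```

Running the numbers gives a base of roughly $91 billion for 2022, which is the scale of spending the forecast assumes as its starting point.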

Continued Growth of Ethernet as Tech Turns 50

Reading Time: 3 minutes

Wired connections will have some people immediately thinking of dial-up modems and the like from the early days of Internet connectivity, but that is really not how it should be, considering that Ethernet has in no way gone the way of the Dodo bird. Or AOL for that matter. What we're setting up here is a discussion where we explain how Ethernet connectivity is still entirely relevant, even if maybe not as much as when it made its functional arrival 50 years ago.

That's right, it took quite some time before applications of the technology became commonplace the way they did in the early to mid-1990s, and some of us are old enough to remember a time when making the physical connection was the only option. It's entirely true to say that doing so continues to have some very specific advantages, and that can segue easily into a similar discussion about how large cloud data centers rely so completely on the newest variations of Ethernet technology.

Both topics are always going to be in line with what we take interest in here at 4GoodHosting, given we're one of the many good Canadian web hosting providers. We've had previous entries where we've talked about Wi-Fi 6 and other emerging technologies, so now is an ideal time to talk about just how integral Ethernet technology advances have been for cloud computing.

Targeted Consolidation

Ethernet was invented in 1973, and since then it has continuously been expanded and adapted to become the go-to Layer 2 protocol in computer networking across industries. There is real universality to it as it has been deployed everywhere from under the oceans to out in space. Ethernet use cases also continue to expand with new physical layers, and high-speed Ethernet for cameras in vehicles is one of a few good examples.

But where Ethernet likely has the most impact right now is with large cloud data centers. The way growth there has included interconnecting AI/ML clusters that are ramping up quickly adds to the fanfare that Ethernet connectivity is enjoying. And it has a wide array of other potential applications and co-benefits too.

Flexibility and adaptability are important characteristics of the technology, and in many ways it has become the default answer for any communication network. Whether that is for connecting devices or computers, in nearly all cases inventing yet another network is not going to be required.

Ethernet also continues to be a central functioning component for distributed workforces, something that has had more of an emphasis on it since Covid. Communication service providers were and continue to be under pressure to make more bandwidth available, and the way Ethernet serves as the foundational technology for the internet, enabling individuals to carry out a variety of tasks efficiently from the comfort of their own homes, is something we took note of.

Protocol Fits

Ethernet is also a more capable replacement for legacy Controller Area Network (CAN) and Local Interconnect Network (LIN) protocols, and for that reason it has become the backbone of in-vehicle networks implemented in cars and drones. Ethernet also grew to replace storage protocols, and the world’s fastest supercomputers continue to be backed by Ethernet nearly exclusively. Bus units for communication across all industries are being replaced by Ethernet, and a lot of that has to do with the simplicity of cabling.

Ethernet is also faster, cheaper, and easier to troubleshoot because of embedded NICs in motherboards, Ethernet switches that can be of any size or speed, jumbo-frame Gigabit Ethernet NIC cards, and smart features like EtherChannel. The ever-increasing top speed of Ethernet does demand a lot of attention, but there is also focus on the development and enhancement of slower-speed 2.5Gbps, 5Gbps, and 25Gbps Ethernet, and even the expansion of wireless networks will require more use of Ethernet. Remember that wireless doesn't exist without wired, and wireless access points require a wired infrastructure. Each massive-scale data center powering the cloud, AI, and other technologies of the future is connected together by wires and fiber originating from Ethernet switches.

More to Know About Load Balancers for Applications

Reading Time: 4 minutes

The concept of induced demand is a very real one for motor vehicle traffic, but it doesn't apply to internet traffic quite the same way. You may not be able to build your way out of traffic congestion on the roads, but along the information superhighway it is much more of a doable proposition. And a darn good thing it is, because load speeds for web applications are a make-or-break factor in whether or not people will continue to use them. Patience may be a virtue, but users have little to none of it, and that's not going to change.

The interest of maintaining or improving these speeds is the reason that load balancers exist, and it's also why they are more in demand than ever before. Researchers at Google have said that a load speed should never be longer than 3 seconds, and having it nearer to 2 is what all should be aiming for. To be entirely honest, it is the scalability of any web application that is going to do the most for this interest in the long term, but balancing load does have a whole lot of here-and-now value despite that.

All of this is going to be a topic that we'll take interest in here at 4GoodHosting in the same way it would be for any good Canadian web hosting provider. Anything related to web speeds is going to qualify around here, and we know that there will be more than a few here who do have production interests in web applications. Chances are they'd have heard of load balancers already, but if not we're going to cover them in greater detail here this week.

Smart Distributor

A load balancer is a crucial component of any web app's cloud infrastructure, with the way it distributes incoming traffic across multiple servers or resources. The sum of all its functions is to ensure efficient utilization, improved performance, and the highest possible availability for web applications at all times. The lack of one may mean that traffic distribution becomes uneven, and this is a standard precursor to server overload and major drops in performance.

In something of the way we alluded to at the start here, the load balancer works as a traffic manager and directs traffic with a measure of authoritative control that would never be even remotely possible with the type of traffic that infuriates most everyday people. The load balancer evenly distributes the workload, and this stops any single server from becoming overwhelmed.

The versatility of them is very much on display too, as they can operate at different layers of the network stack, including Layer 7 (application layer) and Layer 4 (transport layer). The algorithms they use, like round robin, source IP hash, and URL hash, distribute traffic effectively based on any number of incidental factors that may be in play at the time. This is exactly what you want for consistently fast load times, and that is going to be true whether you have VPS hosting or another type of dedicated server setup.
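To make those algorithms concrete, here is a minimal sketch of two of the strategies named above, round robin and source-IP hashing. The server names are hypothetical placeholders, and a real balancer would layer health checks and weights on top of this:

```python
import hashlib
from itertools import cycle

servers = ["app-1", "app-2", "app-3"]  # hypothetical backend pool

# Round robin: hand out servers in a fixed rotation.
rotation = cycle(servers)

def round_robin() -> str:
    return next(rotation)

# Source IP hash: the same client IP always maps to the same server,
# preserving session affinity without any shared state.
def ip_hash(client_ip: str) -> str:
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print([round_robin() for _ in range(4)])  # ['app-1', 'app-2', 'app-3', 'app-1']
print(ip_hash("203.0.113.7") == ip_hash("203.0.113.7"))  # True
```

Round robin spreads load evenly when requests are uniform, while the hash variant trades perfect evenness for client stickiness; most products let you pick per listener.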

Those who put a load balancer in place often quickly come to see how effectively they ensure optimal performance, efficient resource utilization, and a seamless user experience for web applications.

3 Types

There are 3 types of web application load balancers:

  • Application Load Balancer (ALB)

This is the Toyota Corolla of load balancers in modern web applications, microservices architectures, and containerized environments. Application load balancers operate at the application layer of the network stack. Incoming traffic is distributed by the ALB depending on advanced criteria like URL paths, HTTP headers, or cookies.

  • Network Load Balancer (NLB)

This type of load balancer works at the transport layer and is designed for distributing traffic based on network factors, including IP addresses and destination ports. Network load balancers will not take content type, cookie data, headers, locations, or application behavior into consideration when regulating load. TCP/UDP-based (Transmission Control Protocol/User Datagram Protocol) applications are where you'll find these most commonly.

  • Global Server Load Balancer (GSLB)

This one promotes more optimal performance by distributing traffic across multiple data centers or geographically dispersed locations. It is usually the best fit for globally distributed applications, content delivery networks (CDNs), and multi-data center setups. Location, server health, and network conditions are the key factors taken into account when a GSLB is making load balancing decisions.
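The difference between the three types is easiest to see at the ALB level, where routing decisions use Layer 7 data like the URL path. A toy sketch of path-based routing follows; the prefixes and backend pool names are invented for illustration:

```python
# Route requests to backend pools by URL path prefix, the way an
# application load balancer (ALB) inspects Layer 7 request data.
routes = {
    "/api/": "api-pool",       # API traffic to the API servers
    "/images/": "static-pool", # static assets to a static-file pool
}
DEFAULT_POOL = "web-pool"      # everything else to the web servers

def route(path: str) -> str:
    for prefix, pool in routes.items():
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(route("/api/users"))  # api-pool
print(route("/checkout"))   # web-pool
```

An NLB, by contrast, would make this decision using only the connection's addresses and ports, never looking at the path at all.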

Why They Are Needed

Load balancers are the most capable when it comes to the optimum performance of web applications. The first common consideration where they tend to fit perfectly is the one we talked about earlier – scalability. When demand for your application goes up, load balancers allocate the workload or traffic appropriately across different servers so no single one becomes overwhelmed or fails.

Next will be the need for high availability. With load balancers preventing a single server from being overwhelmed, the reliability and availability of your application is improved. They can also route your traffic to available servers in case one server becomes unavailable due to hardware failure or maintenance. Performance optimization is made possible by evenly distributing incoming requests. Directing traffic to servers that have lower utilization or are geographically closer to the user reduces latency, and this is a good example of the type of 'smart' rerouting that we're talking about here.

Hyperscale Cloud Data Centers Becoming Pillars in Enterprise Infrastructure Investment

Reading Time: 3 minutes

It takes a certain type of person to be aware of how the rhetoric around cloud storage has shifted. Whether you noticed the narrative move from one where the technology would quickly replace the entire need for physical storage to one that now promotes smarter and more capable physical data storage would depend on what you do for work or where your interests lie. We have talked about data center colocation in a number of previous blog entries here, so we don't need to go on too much more about its role in the revamping of cloud data infrastructure.

As is the case with everything, budgetary constraints have factored into this as so many businesses and organizations came to terms with just how much it was going to cost to move ALL of their data into the cloud, no matter how reliable or safe the procedure was going to be. This and many other factors were the ones that came together to push advancement and investments into data center colocation, and in truth most people would say that – currently at least – the mix between fully-cloud storage and new and improved physical data centers is just about right.

This leads to our look at the newest of these cloud storage technologies that is starting to cement itself in the industry, and we’re talking about hyperscale cloud data centers. It’s naturally a topic of interest for us here at 4GoodHosting in the same way it would be for any good Canadian web hosting provider, and we likely said the same with the entry from last year when we discussed colocation data centers.

Shifting Landscape

As of now, hyperscale cloud data centers make up a little less than 40% of all data centers around the world. An estimated 900+ of these facilities globally reinforces the major impact cloud computing continues to have on enterprise infrastructure investment. Of this number of hyperscale cloud data centers, about half are owned and operated by data center operators, and colocation sites are where the remainder of them are located.

And as non-hyperscale colocation capacity makes up another 23% of capacity, that leaves on-premise data centers with just 40% of the total. Now if you skip back half a decade or so, the share for on-premise data centers was quite a bit larger, working out to nearly 60% of total capacity. But now the big surge in enterprise spending on data centers suggests a majorly shifting landscape.

The fact that companies were once investing over $80 billion annually in their data centers while spending on cloud infrastructure services was around $10 billion supports that. And when you consider that cloud services expenditure surged to $227 billion by the end of 2022 while data center spending has grown modestly at an average rate of 2% per year, it's even more in line with the attestation that hyperscale cloud data centers are increasingly where the industry is gravitating.

Onwards and Upwards

Over the next five years it is predicted that hyperscale operators will make up more than 50% of all capacity, with on-premise data centers declining to under 30% over that same time frame. But let's be clear – on-premise data centers are not going to completely disappear. Rather, they will maintain a fairly steady capacity and still be extensively utilized despite the overall decline. A similar expectation is that colocation's share of total capacity will remain mostly stable during this period too.

And so it is that, amidst all of the excitement over the growth of hyperscale operators and the big push towards enterprises outsourcing data center facilities, on-premise data centers will still be utilized, and there will still be sufficient demand for them to the extent that investment will still be made. The total capacity of on-premise data centers will remain reasonably steady over the next five years, declining but barely, going down by an average of just a fraction of 1% each year.

More notably for all of us in the hosting business will be continuing to see the rise of hyperscale data centers being driven by the increasing popularity of consumer-oriented digital services. Front and center among those are social networking, e-commerce, and online gaming, and they are just some of the many leading to a transformative shift in enterprise IT investments.

Introducing Li-Fi: Light Wi-Fi is Near Ready to Go

Reading Time: 3 minutes

It is getting on to darn near 30 years since humans were 'untethered' when it came to being able to access the Internet. Now being entirely unconnected to anything is the norm for web-browsing devices of any type, and there are even plenty of desktop computers that would laugh at the idea of an Ethernet cable. Wi-Fi has been the way, and when Wi-Fi 6 came around a few years back it was definitely a big deal.

But what we're on the verge of here may be the biggest deal in Internet connectivity to come along since the information superhighway was first paved. We're talking about light-based communication, and the advantages of Wi-Fi are about to be undone in a big way by Li-Fi, an emerging wireless technology that relies on infrared light instead of radio waves. To go down every avenue with all the potential advantages of this and how it's stealing the thunder of Wi-Fi 7 would require a whole lot of typing, but let's start with the one that everyone will like to hear – speed.

They may be fewer and farther between, but some people still do have latency concerns based on what they are doing online and whatever hardware they're doing it with. You want wildly faster internet speeds? Li-Fi is going to be a godsend for you then, as the estimates right now are that Li-Fi could offer speeds 100x faster than what current Wi-Fi networks are able to provide.

No need for any explanation as to why this is going to be noteworthy stuff for any good Canadian web hosting provider, and that’s going to be the case for us here at 4GoodHosting too. So we’re taking this week’s entry to give you a brief overview into Li-Fi, because it may be that Wi-Fi is about to become archaic technology fast.

Utilize Light

Wi-Fi has made connecting to the internet wirelessly possible by using radio waves, but now it appears there's a better way. Li-Fi was recently given its own standard – IEEE 802.11bb – and with it your connection is created with the power of light. Although Li-Fi technically belongs to the same family of standards Wi-Fi lives in, it is very different.

Li-Fi uses light as its source of electromagnetic radiation instead. What is of note here is the way LED lights already turn on and off many times a second to save energy; Li-Fi does the same, but modulates that flicker in a way a receiver can interpret to transfer data. It works with visible, infrared, and ultraviolet light, so there isn't necessarily going to be a need to have visible light in the room either.
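The on/off flicker described above is, in its simplest form, on-off keying: a lit LED is a 1 and an unlit LED is a 0. A toy sketch of how bits could map to light pulses and back follows; this is purely illustrative and not the actual 802.11bb modulation scheme, which is far more sophisticated:

```python
# On-off keying sketch: 1 = LED on, 0 = LED off.
def encode(data: bytes) -> list[int]:
    """Turn bytes into a stream of light pulses, most significant bit first."""
    return [(byte >> bit) & 1 for byte in data for bit in range(7, -1, -1)]

def decode(pulses: list[int]) -> bytes:
    """Reassemble 8-pulse groups back into bytes."""
    out = bytearray()
    for i in range(0, len(pulses), 8):
        byte = 0
        for bit in pulses[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

pulses = encode(b"Li-Fi")
assert decode(pulses) == b"Li-Fi"  # round trip through the "light" channel
```

The real standard adds clock recovery, error correction, and much higher symbol rates, but the core idea of data riding on rapid light modulation is the same.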

And less light bouncing off walls and more being confined to individual rooms means there is less interference and higher bandwidth, and traffic is harder to intercept from outside. Another big advantage is that Li-Fi antennas are small enough to be integrated into smartphone frames, functioning in a way that is similar to IR blasters.

Addition To, Not Wi-Fi Replacement

The concept behind Li-Fi is pretty simple and has been around for some time, but adoption has faced several challenges along the way, the lack of an official standard among them; that hurdle is now cleared with IEEE 802.11bb in place. It's good to understand as well that Li-Fi isn't intended as a Wi-Fi replacement, but rather an option that can be utilized when a Wi-Fi network connection is the weaker alternative or simply not an option at all.

There should be no shortage of those instances too, and they include places where Wi-Fi's radio waves can interfere with equipment, everywhere from hospitals to airplanes to operations in and around military bases. Li-Fi will also be able to co-exist with your home Wi-Fi networks, and having devices seamlessly switch between networks automatically based on needs and available resources is going to be a real plus.

One example might be having your phone stay connected to Wi-Fi while it's in your pocket, but then jump over onto faster and more interference-free Li-Fi when it moves into your hand and is exposed to light. One thing is for sure, the idea of light-based internet is definitely exciting, especially if it means super-fast network speeds and, in many cases, leaving Wi-Fi for more IoT purposes and the like.

Clustering Servers for Maximum Performance Delivery

Reading Time: 3 minutes

Strength in numbers is often enhanced in a big way when those numbers of whatever it is are in close proximity to one another, and there are all sorts of examples of that. In some of them it's more about providing shared resources, even if the collective aim isn't the same right across the board. The nature of what people do with Internet connectivity is as varied as 6-digit number combinations, and it's only going to keep on growing out from here.

Again, much of that is made possible by shared resources, even if those in possession of the resources may not even be aware they're sharing them. It may be in a more indirect way, but the herring in the innermost area of a bait ball benefit from the fish at the edge of it, as those outer fish are the ones most clearly at risk of being eaten and are thus protecting the rest. That dynamic is what keeps the herring ball in a constant state of flux as the competition continues without stopping.

This type of strength in numbers can relate to servers too. With the demand for server speed and reliability increasing, there is a need to implement a reliable server cluster for maximum performance. An integrated cluster of multiple servers working in tandem often provides more resilient, consistent, and uninterrupted performance. Here at 4GoodHosting we are a good Canadian web hosting provider that sees the value in relating what goes into decisions in the industry with regard to how you get better performance from your website and, in the bigger picture, more traction for your online presence.

Better Availability / Lower Costs

Server clusters are conducive to better business service availability while controlling costs at the same time. A server cluster is the term for a group of servers all tied to the same IP address, providing access to files, printers, messages and emails, or database records. Node is the name given to each server in the cluster, and each node can run independently, as it has its own CPU and RAM and either independent or shared data storage. Below are some of the key benefits that come with utilizing one.

The foremost argument for server clustering is better uptime through redundancy. In the event a node in the cluster fails, the others have the ability to pick up the slack almost instantly. User access is essentially uninterrupted, and as long as the server cluster was not already substantially under-resourced, the expected user load shouldn't cause performance shortcomings.
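That failover behavior can be sketched in a few lines. The node names and the boolean health flags here are stand-ins for whatever heartbeat mechanism a real cluster manager would use:

```python
# A toy cluster: each node reports healthy (True) or failed (False),
# and requests always go to the first healthy node in priority order.
cluster = {"node-a": True, "node-b": True, "node-c": True}

def pick_node() -> str:
    for node, healthy in cluster.items():
        if healthy:
            return node
    raise RuntimeError("all nodes down")

print(pick_node())         # node-a serves while healthy
cluster["node-a"] = False  # node-a fails...
print(pick_node())         # ...node-b picks up the slack immediately
```

From the user's point of view nothing changed between the two calls; that near-instant handoff is the redundancy argument in miniature.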

Many different hosting environments will have their own specific benefits attached to server clustering. Server cluster advantages are not exclusive to mission-critical applications, but the one that will extend to all of them is the way they are not subject to a service interruption from a single server node failure.

Traditional or Shared-Nothing

Operating a backup server in the same way has benefits too, but there is almost always a significant interruption of service while transferring to the backup. In these instances the possibility of data loss is high, and if the server is not backed up continually that risk increases. That is likely the only real detractor point when discussing server clusters, but most organizations will not have large-scale data backup needs of a size that will make this an issue.

The key server cluster benefits are always going to be reliability and availability, and there are essentially two types of server clustering strategies – the traditional strategy and the shared-nothing strategy.

Traditional server clustering involves multiple redundant server nodes accessing the same shared storage or SAN resource. When a server node fails or experiences downtime, the next node picks up the slack immediately, and because it is drawing from the same storage, you shouldn't expect any data loss to occur.

Shared-nothing server clustering involves each node having a completely independent data store, essentially making each node its own hard drive. These drives are generally synchronized at the block level and function identically from moment to moment. Any failure occurring anywhere in the cluster will be immediately remedied by another node taking over in full from its own hard drive.
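The shared-nothing idea can be caricatured as writing every update to every node's independent store, so any node can take over from its own copy. This is a deliberate simplification of real block-level replication protocols, with invented names throughout:

```python
# Each node owns an independent data store; writes go to all of them
# so the stores stay identical from moment to moment.
class SharedNothingCluster:
    def __init__(self, node_names):
        self.stores = {name: {} for name in node_names}

    def write(self, key, value):
        for store in self.stores.values():  # synchronize every replica
            store[key] = value

    def read(self, key, from_node):
        return self.stores[from_node][key]

cluster = SharedNothingCluster(["node-a", "node-b"])
cluster.write("invoice-42", "paid")
# If node-a fails, node-b serves the same data from its own store:
print(cluster.read("invoice-42", from_node="node-b"))  # paid
```

The contrast with traditional clustering is that there is no shared SAN here; durability comes from every node holding a full, synchronized copy.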

Security Considerations

Despite the long list of benefits, all servers are potentially vulnerable. We'll conclude our entry here this week by getting right down to what you'd need to know about server cluster security and listing out what you should have in place:

  • Good firewall
  • Updated OS
  • Strong authentication procedure
  • Physically secured servers
  • Strong file system encryptions

There are HPC (high-performance clustered) storage setups with top-of-the-line hardware in each node enabling the fastest interconnects available. These are ideal, but with others you will need to take all of these security recommendations into greater consideration.

Advantageousness of Cloud Computing Increasingly in Question

Reading Time: 4 minutes

The appeal of removing the need for physical storage was profound and immediate when cloud computing first made its appearance on the scene, and there are other reasons why its advantages made it one of the most revolutionary developments in computing seen to this point. We are like many others in the way we've gone on about it at length, with a focus on how it's had an equally profound effect on the nature of web hosting. Even the most lay of people will have had their digital world altered by it, even if they don't quite understand the significance of how their Microsoft OneDrive works or something similar.

Cloud computing has indeed had quite a shine to it for many years. But here we are now at a point where perhaps the luster is wearing off, and more than just a little. This isn't primarily because the cloud has performance shortcomings. Some might say that it does, depending on the nature of what they do related to business or any other type of venture moved online, but the primary reason that cloud computing is not regarded as the obvious choice anymore is the price required to utilize it.

That is not to say that cloud computing is too expensive, and it really isn't when you look at it solely from the perspective of storing data in the cloud. What is increasingly the issue is that traditional data centers are more affordable than they were, offering cost savings to go along with a greater capacity for data storage.

This is going to be a subject of interest for any good Canadian web hosting provider in the same way it is for us at 4GoodHosting, and before we get into the topic in more detail we'll mention that the core server banks in our Vancouver and Toronto data centers have their server capacity expanded regularly, and we've designed them so that they have this ability to grow. For some it definitely may be the most cost-effective choice to go with traditional data storage means via your web hosting provider.

Upped Affordability

A PPI (producer price index) report has shown that there's been a month-over-month decline in the cost of host computers and servers of about 4%. At the same time, cloud services saw price increases of around 2.3% starting in the 3rd quarter of 2022. The overall PPI declined 0.3% in May as prices for goods dropped 1.6% and service fees increased 0.2%. This begins to indicate how cloud storage options are becoming more expensive while physical data storage options are becoming more affordable.

So what we have now is some companies pressing the reset button for some systems already on cloud platforms and relocating them back to traditional data centers because of the more appealing lower costs. The limitations of physical data storage are going to mean that's not a possibility for some that have more extensive needs, but for any individual or organization without such large-scale needs, the ongoing change in the price differential between traditional and cloud service may well have them reconsidering too.

It is a marked change, because even just 10 years ago the rationale for moving to a public cloud consumption model was wholly convincing for nearly all. Much of that was based on avoiding the costs of hardware and software, the pain and expense of platform maintenance, and having leveraged computing and storage being a utility. Operational cost savings were a big part of the sell too, and the foremost of them was the way cloud storage allowed for avoiding many capital expenses (capex versus opex). Add the benefits of agility and speed to deployment and it was a solid sell.

Reduced Hardware Costs

One very relevant development in all of this is the way prices for data center hardware have come down considerably, while cloud computing service costs have increased a good bit during the same time. This has led many CFOs to stop and reconsider any decision to move all IT assets to the cloud, especially if there is no significant cost advantage to outweigh the migration risks. In relation to this, let's keep in mind that today's business cases are more complex than they were previously.

Weighing advantages and disadvantages can be a complex equation, and decision makers will definitely need to look at more than just the costs for each deployment model. New lower prices for hardware will likely factor in, but you need to look at the bigger picture of value to the business. There will be larger strategic forces to consider too, especially with applications and data sets featuring repeatable patterns of computing and storage usage.
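To make that weighing concrete, here is a minimal sketch of a capex-versus-opex comparison. All dollar figures and function names are hypothetical placeholders, not real vendor pricing:

```python
# Illustrative capex-vs-opex comparison for one workload's storage/compute.
# All dollar figures are made-up placeholders, not real pricing.

def on_prem_cost(years, hardware_capex, annual_maintenance):
    """Cumulative cost of owned hardware: up-front capex plus yearly upkeep."""
    return hardware_capex + annual_maintenance * years

def cloud_cost(years, monthly_fee):
    """Cumulative cost of a pay-as-you-go cloud subscription (pure opex)."""
    return monthly_fee * 12 * years

# Hypothetical numbers: $50,000 up front plus $5,000/year maintenance,
# versus $1,500/month in the cloud.
for years in range(1, 6):
    print(years, on_prem_cost(years, 50_000, 5_000), cloud_cost(years, 1_500))
```

In this toy example the cloud stays cheaper for the first few years before the lines cross; the point is only that falling hardware prices shift where that crossover lands, which is exactly what has CFOs re-running the numbers.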

You'll also be inclined to ask how likely it is that you will need to rapidly scale up or down. Traditional servers are more capable of accommodating that now, and it's reflected in those increasingly lower prices for traditional data storage that you can find here with our Canadian web hosting service and elsewhere.

Not all applications and data sets fall into this category though. The more dynamic the storage and computing requirements for an application or set of applications, the more likely it is that a public cloud is the better option. The scalability and seamless integration offered by cloud services are critical for these types of data and computing, and being able to quickly expand and build on public cloud services with other native services is going to be important too.

Cloud computing should never be a default solution. Consider your business requirements and make it a priority to question the advantages of any technological solution. As prices for traditional servers decrease, it is important to review your objectives for utilizing these systems and reevaluate whether the cloud continues to be the best choice with regards to the value / cost proposition.

Here at 4GoodHosting we’re always available if you’d like to discuss big data storage needs, and we’ll be happy to hear from you if so.

Private vs Public Cloud Considerations for Organizations

Reading Time: 4 minutes

Exclusivity is wonderful, especially when it comes to access to the resources that mean productivity and profitability for your organization. The cost of that exclusive access, however, is often what tempers the enthusiasm and willingness to jump on opportunities to have it. Laying sole claim to resources in the cloud can bring your organization all the advantages of cloud computing, but can you afford it?

That is always going to be the most basic but foremost question when weighing public cloud computing against private. Public cloud computing is decidedly affordable, and it will meet the functional needs of the majority of organizations and businesses. Ask people where they think the biggest shortcoming will be and they'll guess it will be with speed, and in particular with server requests around data.

Not so though, and here at 4GoodHosting we have something of an intermediary interest in this. As a good Canadian web hosting provider we know the direct interest business web hosting customers will have as they weigh their options with cloud computing nowadays too. So this will be the subject of our interest this week – standard considerations for decision makers when it comes to public vs private cloud.

Mass Migrations

The industry expectation is that a minimum of 80% of organizations will migrate to the cloud by 2025. But easy access to and management of all your data isn't guaranteed, so organizational leaders will need to be very judicious about how they make their move. Different types of cloud services are there to choose from, but the vast majority will be opting for some variant of either the public or private cloud.

There are pros and cons to each, so let’s get right to evaluating all of them.

The public cloud provides cloud computing services to the public over the internet, with off-site hosting managed by a service provider that retains control over infrastructure, back-end architecture, software, and other essential functions, all provided at a very reasonable cost. This option appeals to single users and businesses drawn to the 'pay-as-you-go' billing model given the limited scope of their business operations.

The private cloud is also referred to as the 'enterprise cloud', and both terms fit the arrangement where cloud services are provided over a private IT infrastructure made available to a single organization. All management is handled internally, and with a popular VPC (virtual private cloud) setup over a 3rd-party cloud provider's infrastructure you pretty much have the best cloud computing arrangement when it comes to autonomy, reliability, and having infrastructure challenges addressed in a timely manner. Plus, services are hidden behind a firewall and only accessible to a single organization.

Pros for Each

The biggest difference is definitely in growth rates, and those rates are to some extent a reflection of adoption preferences among people and companies, and of why one or the other is a better fit for greater numbers of them. Public cloud spending has continued to steam ahead at a rate of around 25% over the past couple of years, while private cloud adoption has grown at just around 10% over that time. And that second number continues to go down.

The pros for Public Clouds are that they offer a large array of services, and the pay-as-you-go system with no maintenance costs appeals to decision makers for a lot of reasons. Most public cloud hosting providers will be able to offer enterprise-level security and support, and you'll also benefit from faster upgrades and speedier integrations that make for better scalability.

The pro for Private Clouds is singular and not plural, and it is simply in the fact that you have such a massively greater level of accessibility and control over your data and the infrastructure you choose to have in place for it.

Cons for Each

Drawbacks of public cloud solutions are that your options for customization and improving infrastructure are always limited, and this is even more of a shortcoming if you're looking to integrate legacy platforms. And some may find that the affordability of the pay-as-you-go system is countered by the difficulty of working it into an operating budget, since you may not know what payment is required until the end of the month.

The public cloud will also require businesses to rely on their cloud hosting company for security, configuration, and more. Being unable to see beyond front-end interfaces will be a problem for some, and others won't be entirely okay with legal and industry rules that can make sticking to compliance regulations a bit of a headache. Security is always going to be weaker with the public cloud too, although that won't come as a surprise to anyone. You're also likely to have less service reliability.

Private cloud drawbacks are not as extensive, despite the increased complexity. As we touched on at the beginning, a quality private cloud setup is definitely going to cost you considerably more, and there are higher start-up and maintenance costs too. One drawback that is less talked about but still very relevant for many is the way gaps in IT staff knowledge can put data at risk, and for larger businesses this is even more of a concern.

The extent to which you have remote access may also be compromised. Slower technology integration and upgrades take away from scalability too, but for many the security and reliability of a private cloud make them willing to look past these shortcomings and make adaptations on their end to work with them.

Need for Attack Surface Management for Businesses Online

Reading Time: 4 minutes

Look back across history and you'll see there have been plenty of empires, but even the longest-lasting of them eventually came to an end. When we talk about larger businesses operating online and taking advantage of new web-based business technologies, no one is going to compare any of them to empires, perhaps with the exception of Google. But to continue on that tangent briefly, there is no better example of an empire failing because it ended up spread too thin than the Mongol empire.

The reason we mention it as our segue to this week's blog topic is that nowadays, as businesses expand in the digital space, they naturally assume more of a surface, or what you might call the 'expanse' of their business in cyberspace, to the extent they've wanted or needed to move it there. With all that expansion comes greater risk of cyber-attacks, and that leads us right into discussing attack surface management. So what is that exactly? Let us explain.

An attack surface is every internet-facing asset an organization has that might be exploited as an entry point in a cyber-attack. These could be anything from websites, subdomains, hardware, and applications to cloud resources or IP addresses. Social media accounts or even vendor infrastructure can also be part of the 'vulnerabilities', depending on the size of your surface.

All of which is of interest to us here at 4GoodHosting as quality Canadian web hosting providers, given how web hosting is very much part of the foundation of these businesses' online presence. So let's dig further into this topic as it relates to cloud security for businesses.

Rapid Expansions

We only touched on what an attack surface can include above. Attack surfaces are rapidly expanding and can now include any IT asset connected to the internet, so we can add IoT devices, Kubernetes clusters, and cloud platforms to the list of potential spots where threat actors could infiltrate and initiate an attack. External network vulnerabilities creating an environment that can prompt a potential breach are an issue too.

It's for these reasons that attack surface management has become a bit of a buzzword in cyber security circles, and those tasked with keeping businesses' digital assets secure have likely already become very familiar with it. The key is first identifying all external assets with the aim of discovering vulnerabilities or exposures before attackers do. Vulnerabilities are also prioritized based on risk so that remediation efforts can focus on the most critical exposures.

Logically then, attack surface management needs to be based on continuous, ongoing reviews of potential vulnerabilities as new, more sophisticated threats emerge and attack surfaces expand. It's interesting that the term was being bandied about as early as 2014, but it is only recent developments and trends that have pushed it further to the forefront of cyber security.

6 Primaries

Here are the trends in business nowadays that are increasing the risk posed by expanded attack surfaces.

  1. Hybrid Work – Facilitating remote work inherently creates an environment where companies are more dependent on technology while less affected by any limitations based on location. But the benefits are accompanied by an expanded attack surface and the potential for increased exposures.
  2. Cloud Computing – The speed and enthusiasm with which businesses have adopted cloud computing has also spread out the attack surface at a speed that cyber security platforms haven’t been able to keep up with. This frequently results in technical debt or insecure configurations.
  3. Shadow IT – It is quite common now for employees to be using their own devices and services to work with company data as needed, and how 'shadow IT' expands attack surface risks is fairly self-explanatory.
  4. Connected Devices – Internet-connected devices have exploded in numbers over recent years, and their implementation in business environments has created a new kind of high-risk attack surface, one that's directly connected to the insecurity of many IoT devices.
  5. Digital Transformation – The way companies are digitizing as broadly, deeply, and quickly as possible to stay competitive means they're creating new attack surface layers while altering the layers that already exist.
  6. Development Expectations – Constantly launching new features and products is an expectation for many businesses, and this has factored into how quickly technologies go to market. There is pressure to meet these demands, and that pressure may lead to new lines of code being hastily written. Again, fairly self-explanatory in relation to growing attack surfaces.

The attack surface has become significantly more widespread and more difficult to keep contained as organizations grow their IT infrastructure. And this growth often occurs despite resource shortages that come at a bad time, with a record-breaking 146 billion cyber threats reported for 2022 and likely much the same when this year is tallied up.

It's for all these reasons that attack surface management is even more of a priority for organizations as they take on key challenges on the frontline of cybersecurity.

New Optical Data Transmission World Record of 1.8 Petabit per Second Established

Reading Time: 3 minutes

Speed has been and always will be the name of the game when it comes to data transmission as part of utilizing web-based technologies. That is true for the smallest of them in the same way it is for the biggest, and it’s not going to come as a surprise that with the advances in those technologies comes a need to handle much more data, and handle the increased volume of it faster at the same time. Add the fact that global network use continues to grow explosively all the time and there’s a lot of gain – both functional and financial – to be had for those who can introduce means of moving data faster.

And that's just what's happened, courtesy of Danish and Swedish researchers who have succeeded in setting a new benchmark speed for optical data transmission. Many of you will be aware of what a Tbps (terabits-per-second) score indicates about speed in this context, but if you've heard of a Petabit in the same context then consider us impressed. It's been nearly 3 years since the previous data transmission speed record was set, and you're entirely excused if you'd only heard of a Terabit back then.

The 178 Tbps record set in August 2020 was quite remarkable for the time, but not anymore. And it certainly would have been for us here at 4GoodHosting, in the same way it would have been for any good Canadian web hosting provider, based on the fact that we have a roundabout association with this through what people do with the sites we may be hosting for them. But enough about that for now; let's get right to discussing the new data transmission world speed record and what it may mean for all of us.

Doubling Current Global Internet Traffic Speeds

This mammoth jump in speed capacity is made possible by a novel technique that leverages a single laser and a single, custom-designed optical chip that makes throughputs of 1.8Pbps (Petabits per second) possible. That works out to double today’s global internet traffic, and that highlights just how much of a game changer this has the potential to be.

The 2020 record speed we talked about earlier is only around 10% of today's maximum throughput announcement, which equates to improving the technology tenfold in less than 3 years. A proprietary optical chip is playing a big part in this too. It works by taking input from a single infrared laser and creating a spectrum of many colors, with each color representing a frequency, resembling the teeth of a comb.
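A quick unit-conversion check on those figures, just arithmetic on the numbers quoted above:

```python
# Sanity-check the claim that the 2020 record is about 10% of the new one.

TBPS_2020_RECORD = 178        # terabits per second, August 2020 record
PBPS_NEW_RECORD = 1.8         # petabits per second, new record

new_record_tbps = PBPS_NEW_RECORD * 1000   # 1 petabit = 1,000 terabits
ratio = TBPS_2020_RECORD / new_record_tbps
print(f"old record is {ratio:.1%} of the new one")
```

That ratio works out to just under 10%, consistent with the roughly tenfold improvement described above.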

Each frequency is perfectly and equally distinguishable from the others, mimicking the way humans distinguish colors by detecting the different frequencies of light reflected off materials toward us. And because there is a set separation between each frequency, information can be transmitted across each of them. A greater variety of colors, frequencies, and channels means that ever greater volumes of data can be sent.

High Optical Power / Broad Bandwidth

The current optical technology we have now would need around 1,000 different lasers to produce the same amount of wavelengths capable of transmitting all of this information. The issue with that is that each additional laser adds to the amount of energy required. Further, it also means multiplying the number of failure points and making the setup more difficult to manage.

That is one of the final research hurdles that needs to be overcome before this data transmission technology can be considered fully viable. But the combination of high optical power and a design that covers a broad bandwidth within the spectral region is already a huge accomplishment, and one that couldn't come at a better time given the way big data is increasingly a reality in our world.

For now let's just look forward to seeing all that may become possible with Petabit speeds like this, capable of carrying roughly double the flow of traffic currently accommodated as a maximum by today's internet.