Why Ransomware Tends to Avoid Cloud Environments


Ransomware attacks can be headaches of the highest order. In many instances they have disastrous repercussions, to say nothing of the major expenses that come with them no matter how the problem ends up being resolved. When Japanese automaker giant Toyota had to shut down 14 factories across the country for one day, it was staggering to see just how much financial loss could come from a mere 24 hours of dealing with an attack.

Most businesses will be of a much smaller scale, but there have also been plenty of instances of data breaches at businesses no one would necessarily be familiar with. No matter the size of the operation, if you have sensitive data stored in digital format – and who doesn’t nowadays – you will want to make sure all the defences are in place and ready to do their job if and when they’re needed. Ransomware attacks are increasing; between 2019 and the end of 2021 they rose well over 200% worldwide, and again it’s not only the big fish being attacked.

It likely goes without saying that data management and data security are two aspects of operation we can relate to here at 4GoodHosting, and that will almost certainly be true for any other quality Canadian web hosting provider with the same solid operating principles. Like most, we’re also enthusiastic and bullish about the ever-evolving potential of cloud computing, which leads us to our topic of discussion for this entry: why do ransomware attacks tend to look past cloud computing environments when weighing potential victims?

Franchised Ransomware

A recent advisory from the Cybersecurity and Infrastructure Security Agency (CISA), the FBI, and the NSA reveals that the latest trend is ransomware as a service, where gangs of malicious hackers essentially ‘franchise’ their ransomware tools and techniques, making them available to less organized or less skilled hackers. The result is many more attacks, even if some of them are less sophisticated than others for the same reason.

The long and short of it is that protecting against ransomware must be part of any organization’s holistic cybersecurity strategy, and that is especially true if you are still operating data center infrastructure rather than cloud infrastructure. Hardening data centers and endpoints against ransomware becomes more necessary every year, while cloud infrastructure faces a different kind of threat.

To be clear – if your organization is all in the cloud, ransomware can be less of a worry.

What, and Why?

First and foremost, you shouldn’t mistake ransomware attacks for simple data breaches. A data breach only means data has been exposed; it doesn’t even necessarily mean data has been taken. Ransomware isn’t primarily about ‘stealing’ either, and the aim is not necessarily to steal your data. Instead, the aim is usually to take control of the systems that house your data, encrypt it, and prevent you from accessing it unless you pay to have that access re-established.

The reason why ransomware attacks are not being carried out against cloud environments has everything to do with fundamental differences between cloud infrastructure and data center infrastructure.

For starters, a cloud environment is not simply a remote replica of an onsite data center and its IT systems. Cloud computing is entirely software driven by APIs (application programming interfaces), which function as middlemen that allow different applications to interact with each other. The control plane is the API surface that configures and operates the cloud, and it can be used to build a virtual server, modify a network route, or gain access to data in databases or snapshots of databases.
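
As a rough illustration of what ‘operating the cloud through APIs’ means in practice, here is a minimal sketch assuming an AWS account, the boto3 library, and credentials already configured; the image ID is a placeholder, not a real AMI:

```python
# A minimal sketch of control-plane calls, assuming AWS + boto3 with
# credentials configured. Every operation here goes through an API,
# not through the servers themselves.
import boto3

ec2 = boto3.client("ec2", region_name="ca-central-1")

# Build a virtual server (a control-plane call, not a console or SSH session).
# The image ID below is a hypothetical placeholder for illustration only.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

# List snapshots owned by this account -- the same API surface an attacker
# would need to reach in order to get at copies of your data.
snapshots = ec2.describe_snapshots(OwnerIds=["self"])
for snap in snapshots["Snapshots"]:
    print(snap["SnapshotId"], snap["StartTime"])
```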

Key Resilience

Cloud platform providers have built their services around the understanding that paying customers expect data to be robust and resilient. Keep in mind that replicating data in the cloud is both easy and cheap, and a well-architected cloud environment ensures multiple backups of data are taken regularly. That is the key means by which an attacker’s ability to use ransomware is impeded: frequently taking multiple copies of your data leaves them far less able to lock you out. Should an attacker manage to encrypt your data and demand a ransom, you can take all of their leverage away by simply reverting to the latest version of the data backed up prior to the encryption.
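
Here is a minimal sketch of that backup-and-revert idea, assuming AWS EBS volumes and boto3 with credentials configured; the volume ID is a placeholder, and a real environment would lean on lifecycle policies or a managed backup service rather than a hand-rolled script:

```python
# Sketch of scheduled snapshots plus a revert-to-clean-copy path, assuming
# AWS EBS + boto3. The volume ID and availability zone are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ca-central-1")
VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical volume ID

def take_snapshot():
    """One more copy of the data -- cheap, and it removes the attacker's leverage."""
    return ec2.create_snapshot(
        VolumeId=VOLUME_ID,
        Description="scheduled pre-incident backup",
    )

def revert_to_latest_clean_snapshot(incident_time):
    """Build a fresh volume from the newest snapshot taken before the incident.

    incident_time should be a timezone-aware datetime marking when the
    encryption was detected.
    """
    snaps = ec2.describe_snapshots(
        Filters=[{"Name": "volume-id", "Values": [VOLUME_ID]}],
        OwnerIds=["self"],
    )["Snapshots"]
    clean = [s for s in snaps if s["StartTime"] < incident_time]
    latest = max(clean, key=lambda s: s["StartTime"])
    # A new volume built from this snapshot replaces the encrypted one.
    return ec2.create_volume(
        SnapshotId=latest["SnapshotId"],
        AvailabilityZone="ca-central-1a",
    )
```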

Effective security in the cloud is the result of good design and architecture rather than reactive intrusion detection and security analysis. Hackers have little choice but to try to exploit cloud misconfigurations that let them operate against your cloud control plane APIs and steal your data, and to this point very few of them have had much success with that.

Automation is Best

Having cloud security protocols work automatically is best, as the number of cloud services keeps growing along with the number of deployments most of you will have. Add in all the expanding resources and you can see why there is a need to avoid manually monitoring for misconfigurations, and to enable developers to write code that stays flexible for future revisions. Hardening your cloud security ‘posture’ helps too: know your operating environment and its weak points on an ongoing basis, and continuously survey your cloud environment to maintain situational awareness at all times. A sketch of what that kind of automated check can look like follows below.
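
As one hedged example of automating that kind of check, the sketch below flags S3 buckets with missing or incomplete public access blocks, one common class of misconfiguration; it assumes boto3 and configured AWS credentials:

```python
# A minimal sketch of the "automate, don't manually monitor" idea, assuming
# AWS S3 + boto3. Flags buckets whose public access block is missing or
# incomplete so a human (or pipeline) can review them.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_with_weak_public_access_settings():
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)
            settings = config["PublicAccessBlockConfiguration"]
            # All four block/ignore/restrict flags should be True.
            if not all(settings.values()):
                flagged.append(name)
        except ClientError:
            # No public access block configured at all.
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    for name in buckets_with_weak_public_access_settings():
        print(f"Review bucket configuration: {name}")
```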

Successful organizations evaluate all the time to know where they stand, where they’re going, and to quantify the progress they’ve made and are making towards addressing vulnerabilities and the security incidents that have or may result from them.

Toronto First to Get 3 Gbps Internet from Bell in Canada

Patience may well be a virtue, but few of us have much of it when we’re doing whatever it is we’re doing online. We all remember those ads from years ago, when broadband internet was new, comparing the previous super-slow speeds to sucking a milkshake through the narrowest of straws. We’re a long way from that, and while the standard 1.5 gigabit-per-second speeds have been quite nice, if there are improvements to be made then let’s get right to them.

Bell is one of the big players in home broadband, and they are the first to make 3 Gbps internet speeds available to Canadian consumers. We don’t need to explain how this will appeal to people, and the need for ever-better download and upload speeds is something those of us here at 4GoodHosting relate to in the same way any good Canadian web hosting provider would. We are tops for VPS virtual private server web hosting in Canada, and we know it is sites like these, with dynamic content, that will be among the many better served this way.

People with hosted sites are going to want uptime guarantees for sure, but let’s not forget that these faster internet speeds in Canada are also going to pair exceptionally well with the new 5G networks. What is most noteworthy there is what that will do for Internet of Things applications, and all those are set to do for the welfare of humanity and more.

Toronto is Canada’s most populous urban area, so it likely makes sense that the new 3-Gig internet speeds are going to be made available there first. Exciting news for anyone in Hogtown, so let’s use this week’s entry to look into this in greater detail.

Sub-30 Seconds

Downloading a 10GB file might not have been daunting in the past, but you certainly understood you wouldn’t be viewing anything anytime soon. With this new type of high-speed internet, it is a whole different story. Bell announced the new plan on April 6th, and as mentioned, the fanfare expected for doubling existing internet speeds to a full 3 Gbps needs no explanation. That applies to both download and upload speeds, and we haven’t even touched on what this will do for gaming.

At peak performance, users can download a 10GB file in under 30 seconds, and the plan is ideal for users like content creators who regularly need to move very large amounts of data to and from the cloud. Downloading games is going to benefit in a big way too; downloads of tens or even hundreds of gigabytes won’t seem like such a big deal. Small businesses and home offices may also see higher productivity levels resulting from the extra bandwidth.
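
To see where that figure comes from, here is the quick arithmetic; it assumes the theoretical peak of 3 Gbps with no protocol overhead, so real-world times will run a little longer:

```python
# 10 gigabytes is 80 gigabits; at a theoretical peak of 3 Gbps that is
# under 27 seconds, which lines up with the "under 30 seconds" claim.
file_size_gigabits = 10 * 8          # 10 GB expressed in gigabits
link_speed_gbps = 3                  # Bell's new peak speed
seconds = file_size_gigabits / link_speed_gbps
print(f"Theoretical best case: {seconds:.1f} seconds")   # ~26.7 seconds
```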

Start in TO, but Expansion Soon

This is all part of making broadband and higher-speed internet available to all, and Bell is only one of many providers taking part in the initiative. They are in the midst of their most aggressive fibre buildout ever in 2022, aiming to connect 900,000 more homes and businesses across much of Canada with direct fibre connections and to make these upload and download speeds available to all of them.

Bell’s 2-year capital expenditure program has put almost $10 billion into this, and it is now in its second year where the focus is on an efficient and timely rollout of its broadband fibre, 5G and rural networks.

All of this stands to be fairly reasonably priced too – currently this new internet package from Bell is available to GTA area residents for $150 a month, and they are at this time offering the Home Hub 4000 router-modem with Wi-Fi 6 capabilities. If the highest of high-speed internet is music to your ears then get set to be as pleased as can be.

Human Side of Artificial Intelligence


If there has ever been a time in human history when technological advances in computing have come as fast and furious as they have recently, we’re not aware of it, and likely you aren’t either. What comes with so much promise also comes with some level of trepidation, but that has always been the case with any game-changing advance in human society. AI is a great example: we wouldn’t even think about slowing the pace of change it promises, but at the same time most of us hope this doesn’t go sideways in any regard.

But one aspect of this where the potential positive ramifications absolutely must be encouraged is making better use of data. This is something any Canadian web hosting provider will be able to relate to, and here at 4GoodHosting we are no different. We understand the magnitude and potential of data management, but we can also relate to a lot of shortcomings in how that data is understood and utilized.

Which leads us to the topic at hand today. Advances in AI research have made the use of computer algorithms to separate patterns from noise in data entirely possible, and we now have open-source software and many data scientists streaming into the field. All of this leads to the belief that computer science, statistics, and information technology can lead to a successful AI project with useful outcomes. Teaching humans to think like AI will have value, but what will have even more value is teaching AI to understand the value of humans.

Deep – and Deeper – Learning for Neural Networks

One of the most notable ways deep learning neural networks are being put to use is in healthcare, specifically in predicting health outcomes more accurately. What we are seeing now are state-of-the-art AI algorithms that predict health outcomes fairly accurately. This could be whether a patient should be readmitted to hospital following surgery, or any number of other very significant decisions that can be better informed by AI and its modeling than by what a doctor might advise on their own.

While it is true that the deep learning models performed better than some standard clinical models, they still came up short against logistic regression, a widely used statistical method. This suggests that AI learning does have limitations, but the consensus seems to be that those limitations are a) not significant enough to scale back deployment, and b) likely to be addressed by future advances in the technology. A rough sense of what that kind of comparison looks like is sketched below.
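
As a hedged illustration of that kind of head-to-head, the sketch below compares logistic regression against a small neural network using scikit-learn on synthetic data standing in for real clinical records (which the original studies used and which obviously are not reproduced here):

```python
# Minimal sketch comparing a widely used statistical method (logistic
# regression) against a small neural network on synthetic data. Requires
# scikit-learn; the dataset is generated, not clinical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_train, y_train)

for name, model in [("logistic regression", logreg), ("neural network", mlp)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```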

Use of AI in healthcare is a great example of how the technology projects forward, so we’ll stick with it for now. There are indeed limitations, and to address them the early days of this rollout had researchers harmonizing all the data and feeding it into a deep learning algorithm before endeavouring to make sense of it. Some factors that may not be useful for clinical intervention weren’t included, primarily because they can’t be changed.

The factors prioritized for incorporation into the neural network architecture were the ones that would improve the performance and interpretability of the resulting predictive models. Still, until recently there was less of a focus on how all of this would work at the human end of the equation, and that’s where the focus needs to be now.

Advantage to Human Side of AI

Human involvement with an AI project needs to start with a programmer or engineer formulating the question the AI is to address. Let’s say right off the bat that AI will not be able to master this anytime soon. It requires depth, breadth, and synthesis of knowledge of different kinds, and AI simply isn’t there yet. Imagining what is missing or what is wrong in what is known is – at least for now – very difficult for modern AIs to do.

Humans are also needed for knowledge engineering. This part has been important in the AI field for decades, and the current focus is on supplying domain-specific knowledge in the right format to the AI so that it doesn’t need to start from scratch when solving a problem. As powerful as AI can be, we should remember that humans have an ability to synthesize knowledge that far exceeds what any computer algorithm can do. That is the crux of why valuable human input and reciprocal learning are going to be absolutely essential if AI is to do what we hope it will.

As AI moves ever forward in being a big part of our lives, it is important to remember that users and those who stand to benefit from it have a key role to play in the data science process attached to the development of it. Done right it will improve the results of an AI implementation and reduce the potential for harm or misuse.

Insufficient Wi-Fi Connectivity Dampening WFH Prospects for Many

Two years ago this month was the first time many of us spent more than a handful of days working remotely from home, and as we’ve learned since then, a whole lot of people have remarked on how it’s done very positive things for their lives. At the same time, a lot of employers have said they haven’t seen any dip in productivity, so – why not? Granted, a whole lot of us have returned to the office for at least a portion of the week, but if you are able to continue working from home, then a strong and reliable Wi-Fi connection is going to be a must.

A report from Cisco that came out early this year indicates that for a lot of people, the lack of a reliable internet connection is forcing them to reconsider the home office. There’s no getting around the fact that it’s a deal breaker for most people if the connection can’t be counted on, although there are ways to improve your internet connection that may be doable, provided you aren’t too far from a major city centre. The problem is that some people have relocated to these quieter locales only to find the internet connection just isn’t good enough.

Here at 4GoodHosting we’re like any Canadian web hosting provider in that we can certainly relate to how important that is. Many of our customers have websites offering creative services and the like, and they are among the thousands of Canadians nationwide who are 100% reliant on their internet to be able to work from home.

So let’s look at everything this report had to say, and perhaps get a sense of how we’re maybe not entirely ready to have so many people working from home and demanding more than what can be provided when it comes to reliable Wi-Fi.

Broadband Indexing

The report is called Cisco’s Broadband Index, and it surveyed 60,000 workers in 30 countries who provided feedback on their home broadband access, quality, and usage. It indicates that people today value access to the internet more than ever and believe access to a fast, reliable connection is universally a key to economic and societal growth.

Plus, we now have the hybrid-office and remote-work business models that have grown out of the COVID-19 pandemic, where employees rely heavily on the internet connections they’re able to access. Around 84% of survey respondents stated they actively use broadband at home for longer than 4 hours a day, and 75% of them said broadband services need significant upgrades to support the number of people who are now able to work from home and want to do so.

The challenge is that these internet connections are under much more strain now, and white collar workers who were confined to their homes during the last two years are a big part of that strain, along with all the streaming people of all sorts are doing these days. Here’s another consideration – 60% of survey respondents live in households where more than three people use the internet at the same time.

Many Ready to Upgrade Service

Estimates are that nearly half of the world’s workforce now relies on home internet to work or run a business, and 43% of respondents stated their intention to upgrade their service in the next 12 months to stay in front of the additional demands being placed on their broadband connection.

As mentioned, there is also a large percentage of workers who are still at home for a significant portion of the work week. Some of them have said they’d rather look for a new job than lose the chance to work from home. So we can see that secure, high-quality, reliable internet is essential, especially if hybrid work models are to continue without interrupting the parts of a business that are working well.

Tackling Digital Divide

We may still want to be thankful for how good we have it with connectivity, considering 40% of the world remains unconnected. One thing industry experts agree on is that failing to connect those 3-point-something billion people over the next 10 years will likely widen the digital divide. Shortcomings with infrastructure are a significant factor in limited internet around the world. Rural and remote areas are more likely to be offline or insufficiently online, usually because costs are much higher there than in urban areas.

The Broadband Index went further in providing data that shines even more of a light on concerns about the digital divide. 65% of respondents said access to affordable and reliable broadband will become a major issue, particularly if connectivity becomes increasingly vital for job and educational opportunities. Another 58% of those surveyed said there were any number of factors blocking them from access to critical services such as online medical appointments, online education, social care, and utility services, all resulting from an unreliable broadband connection.

Stay Protected with Web Filtering and SafeDNS

It’s likely that no one with business interests online is unaware of the size of the risk posed by cyberattacks these days. It seems like there’s a new high-profile, large-scale one every month or so now, and it shows that even though major corporations are spending plenty on cybersecurity, it continues to not be enough depending on the nature of the threat and how advanced it may be. Another aspect the average person may not grasp as readily is that these attacks often have repercussions that go far beyond the leaked data itself.

The one that maybe doesn’t get talked about enough is reputation; if you’re big enough to be newsworthy when a cyberattack occurs, then you’re big enough to have real interests in how your company is regarded in the big picture. There is definitely a risk to your reputation if a cyberattack of any magnitude occurs and you’re left needing to explain how you didn’t have the right level of security in place. Here at 4GoodHosting we’re like any reputable Canadian web hosting provider in that we fully relate to the need for security, and for businesses like ours that has everything to do with servers.

One newer solution that businesses can consider is web filtering, using the SafeDNS service, which promises to reinforce your web security efforts in a big way. That’s what we’ll look at here today, going into greater detail about why it is so highly recommended for anyone who would have a lot to lose if a cyberattack made its way through their current defenses.

Thwarting Malware

Security has to be a foremost concern for businesses of all sizes, especially with malware and other risks becoming more present as a business grows and expands. The number of these threats is growing all the time, and their increasing sophistication means keeping your business protected online has never been as challenging as it is now. However, there are smart moves you can make.

Moves you should make, if we’re going to be plain about it. As we touched on briefly, being hit by a cyberattack may not just damage data; it may have significant financial and reputational effects, as it did for Target, Equifax, and SolarWinds, to name just three of the high-profile breach cases of recent years. It is estimated that over 80% of businesses have been victimized by ransomware, and some of them report cyberattack attempts occurring up to 4 times a day. Consider also the average ransomware demand for a US business: it’s well over $6 million.

Why Web Filtering

Web filtering may sound like an insignificant contribution when you consider the size and magnitude of the threats and the security landscape in its entirety, but it is a technology that can be very beneficial in making sure your business is never put at risk. This is where we’ll take the opportunity to introduce SafeDNS, a comprehensive platform designed to protect organizations from online threats by means of its web filtering technology.

The primary way web filtering keeps your business safe is by monitoring internet traffic for risks such as malware and phishing scams. It can also restrict access to unsuitable or dangerous websites, which makes it less likely an employee will create a breach unintentionally by visiting an external site they simply don’t know enough about to avoid. It is an entirely cloud-based service, and the appeal of that, of course, is that there is no bulky hardware taking up valuable office space, and it can be set up in very little time too.
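
To make the mechanism concrete, here is a minimal sketch of the DNS-filtering idea in general, not SafeDNS’s actual implementation; the blocklist entries are hypothetical placeholders:

```python
# A tiny resolver shim that refuses to look up domains found on a blocklist.
# Real DNS filtering services do this at the resolver level for whole
# categories of sites; this sketch just shows the shape of the idea.
import socket

BLOCKLIST = {
    "malware-example.test",     # hypothetical malware distribution domain
    "phishing-example.test",    # hypothetical phishing domain
}

def filtered_lookup(hostname: str) -> str:
    """Resolve a hostname only if it is not on the blocklist."""
    if hostname.lower().strip(".") in BLOCKLIST:
        raise PermissionError(f"{hostname} is blocked by filtering policy")
    return socket.gethostbyname(hostname)

# Example usage:
#   filtered_lookup("example.com")            -> returns an IP address
#   filtered_lookup("malware-example.test")   -> raises PermissionError
```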

Installation is not complicated in the slightest, and there will be no deployments requiring expensive call-outs. But maybe its most noteworthy advantage is that the platform runs on powerful AI technology that incorporates machine learning and big data services to keep internet traffic safe. And if you need any other reinforcement of the fact that cyberthreats are a pressing concern, SafeDNS reports that it blocks some 33 million of them every day.

Other Pros

SafeDNS is also built on a network of 13 global data centers and currently boasts 109 million sites in its database, sorted into 61 different content categories. The DNS-based filtering blocks unwanted sites before anyone has the chance to access them and create the immediate risk of the device being infected with one of the many types of malware.

This allows you to create customized policies for your workers, and you can also take advantage of traffic monitoring services and a detailed service dashboard that keeps you very much on top of your new and improved cybersecurity defenses. Should your business grow, your policies can expand too: there is no limit on the number of users, and the filtering is extensible.

New 4-Way Consortium Coming Together for More Consistent Web Standard

Uniformity tends to be a good thing the majority of the time, no matter what it is we’re talking about. The biggest reason that is true is because it allows people to have expectations about what they’re going to experience, and to be able to rely on that experience with every interaction. When it comes to web browsing, you don’t even have to be a savvy individual to pick up on how not every page displays or behaves the same way depending on what browser you’re using.

Now of course that depends on you using a different browser, so if you use only one exclusively then this may be something you don’t pick up on. But most of us move between them for whatever reason, and when we do we notice that it’s rare for pages to look the same regardless of browser choice. This has been the norm for well over 20 years now, and while it’s not a deal breaker, there would be something to say for a more consistent web standard.

Here at 4GoodHosting we are like any Canadian web hosting provider that can see the appeal of that, based on simple visual comfort levels alone. Even if we are not aware of it, there is something calming and soothing about seeing what we expect to see each time, and we can also understand that if any new exploration is required because a page is displaying or behaving differently, that is definitely undesirable too.

So the reason that we’re making this newsworthy is because there is a new 4-way effort underway to establish a more consistent and better web standard.

New Standard

Apple is working with the other browser developers – Google, Microsoft, and Mozilla – to make web design technologies more consistent, independent of which browser people are using. The problem is that browsers have different built-in ways of handling web technologies, so in this sense there isn’t a standard of any sort for the web. We then have developers attempting to create consistent web interfaces across platforms, products, and elsewhere, while a particular browser has the potential to undo all of that.

These 4 make up the Interop 2022 alliance, and the aim, as stated, is to align how web standards are implemented by the different vendors. Some of this builds on what came out of the Compat 2021 effort.

The bigger-picture aim of the project is to make web applications based on these standards work and look the same no matter the device, platform, or operating system. The hope is that web developers will eventually be able to be more confident that the end-user experiences they deliver will be experienced in the way they intend.

The further focus is on moving toward a future where making these areas interoperable is entirely possible, with constant updates to the relevant web standards and extensive but quickly undertaken evaluation of their effectiveness.

Tests of 15 Web Platforms

15 web platform specifications have been tested so far, along with three capabilities that have not been fully developed yet. The tests are for Cascade Layers, Color Spaces, CSS color functions, Scrolling and more. We can be sure that developers, users, and platform operators alike will all welcome improvements in this area.

Another thing that needs to be pointed out is that this new consortium is digging deeper than you might expect for a group simply seeking general interoperability. Browser code isn’t where they are looking for the most part, though; instead, Interop 2022 is focusing on the finer details of experience and design. Part of the reason this approach is being taken is that browser developers won’t want to unlock access to core functionality for competitors, for obvious reasons.

Some are saying this showcases the limitations of WebKit in iOS development. The complaint is that developers of other browsers on iOS have no choice but to use WebKit rather than their own engines. It’s fair to assume Apple will not change that. Sure, doing so might point out Safari’s limitations, but it also could diminish hardware performance, security, and battery life.

Collective Criticisms

The goal of making the web as interoperable as it can be is an admirable one, and creating this reality shouldn’t take away from any of these 4 big players with regards to their primary interests as competitors. There are some saying Apple hasn’t accelerated implementation of some web APIs that might help developers create web apps to compete against native iOS apps. This is very likely true, but we can almost certainly say the same about Google at the very least.

All in all, this is a commendable and very potentially beneficial step. A more uniform web standard stands to be a plus for all of us, from developers right down to everyday web browsers and simple site visitors.

Aspect of New Windows 11 Earns A+ as Sustainability Initiative

With growing populations come growing demands for resources, and energy is far and away the one humanity is struggling most to generate and allocate fairly here in the 21st century. Much has been made of how power grids are under real strain, particularly in dense urban areas. We know full well there’s no stopping progress, so the electricity demands of the digital age need to be accommodated in step with that progress.

There are plenty of things your devices – and the data centres behind them – do on a daily basis that add to a larger ecological footprint because of the amount of power required for them. The data centre part of that is front and centre in how those of us here at 4GoodHosting relate to all of this, in the same way any other good Canadian web hosting provider would. We certainly know how power-intensive data centres can be, and how hard it is to cut back on power usage when so much of what we rely on comes out of those data centres.

Which is why this news about Windows Insider build 22567 is noteworthy enough for us to make a blog entry about it. System updates are well known as major power eaters, and there’s a reason you are prompted to have your notebook plugged in before beginning one. What this new Windows 11 update will do is schedule updates for times when the local power grid in your area is drawing energy from cleaner sources, and avoid times when it is drawing from more traditionally harmful ones.

It definitely qualifies as ‘smart’ technology, and it is in line with what most people place a lot of importance on these days, so let’s look at it in greater detail.

Informed Decisions

This is made possible by Microsoft taking regional data on carbon intensity from sources that publish large-scale electricity use-rate data, and all of it only applies if your laptop or PC is plugged into an outlet. It can be overruled by choosing to install updates immediately, by navigating to Settings > Windows Update and then selecting to check for updates.

The reason it’s important to use sustainable power sources as much as possible is that many electrical grids are still powered by fossil fuels. Windows 11 now prioritizes update installs to proceed at times when it detects larger amounts of clean energy are available – wind, solar, hydro, it could be any of them. A rough sketch of the idea follows below.
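
Here is a rough sketch of carbon-aware scheduling in the spirit of what the update describes; the API endpoint, response field, and threshold are all hypothetical stand-ins for a real published carbon-intensity feed:

```python
# Sketch of carbon-aware scheduling: check grid carbon intensity, then run
# a heavy job now or defer it. Endpoint, field name, and threshold below are
# hypothetical placeholders, not Microsoft's actual data source.
import requests

CARBON_API = "https://example.com/api/carbon-intensity?region=ontario"  # hypothetical
THRESHOLD_G_PER_KWH = 200  # arbitrary illustration value

def grid_is_clean_enough() -> bool:
    data = requests.get(CARBON_API, timeout=10).json()
    return data["carbon_intensity_g_per_kwh"] < THRESHOLD_G_PER_KWH

def maybe_run_update(install_update, defer_update):
    # Run the power-hungry work now if clean energy dominates; otherwise wait.
    if grid_is_clean_enough():
        install_update()
    else:
        defer_update()
```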

What you can look for is a small message in the Windows Update section of your settings reading ‘Windows Update is committed to reducing carbon emissions’. If you see it, that means your PC is able to make the determination with information provided through the web and the various monitoring resources available. In some instances, though, the carbon data may not be available to the device for any number of reasons.

Microsoft has said there will be ongoing efforts to promote better accessibility for that, including working with new partners as needed.

Positive Step Towards Curbing Power Consumption

As mentioned, there is no getting around the fact that modern technology uses a lot of power. Our drive to constantly improve and build upon previous technology means this is a difficult reality to get around. Technology is here to stay, and so any initiative to build environmentally conscious policies and features into the products these manufacturers develop is definitely a show of good faith.

Having an OS that is geared for sustainability on its own, without user input required, is a huge plus, and hopefully this is the start of a larger trend that will be seen right across the computer manufacturing industry. Let’s keep in mind that the number of people who will use Windows 11 is only going to grow as older operating systems start to be phased out and become obsolete.

In many ways Microsoft may deserve more props than other Big Tech manufacturers, as this new effort comes on top of many other notable pledges toward sustainability and ethical practices. Examples include using recycled marine plastics to create peripherals like mice, and the adaptive controllers for the Xbox console series. The implementation of inclusive features across its entire hardware and software range to help the disabled community is another example of that commitment.

We can all do our part too, and one thing I’ve been saying to many people lately is to consider turning down the brightness on your display. Many people will be shocked to see how bright it was, and will enjoy the less intense (and power-saving) setting a lot more once they notice how much more comfortable it is for their eyes. Win-win.

Pros to a USB-C Hub Monitor

There have been plenty of instances where we’ve joined the consensus and stated that you can’t stop progress, and truth be told we’re of the mindset you wouldn’t want to anyway most of the time. Especially when it comes to anything that can make our workday better and up our productivity – something that is important whether you’re working for yourself or for somebody else as an employee. And for so many of us that involves working in the digital realm on personal computing devices.

So it’s here that we are looking at the benefits of a USB-C hub monitor, and of course the primary and most obvious reason they are a practical choice is that nowadays so many of us need to run peripherals from our main device. External power bricks may have been the norm for a while now, but we’re willing to guess they’re going to start making their way toward obsolescence. And that’s a good thing, as less is always more and preferable when it comes to any desktop workstation.

All this resonates with us here at 4GoodHosting in the same way it would with any Canadian web hosting provider, given the way our work puts the same premium on efficiency and getting a good job done as quickly but as thoroughly as possible. So let’s spend this entry looking at what makes USB-C hub monitors such a good choice for anyone who wants their time at the desk to count.

Smart Upgrade

A USB-C hub monitor is every bit a smart upgrade for your home office. It removes the need for numerous cords on your desk; rather than connecting peripherals to your PC, you can instead connect them directly to the monitor, which connects to your PC over USB-C. Desktops get added functionality too, but it is laptops that benefit most from USB-C. A single cable can act as both a video cable and a power cable, which means you don’t have to have the device’s standard power adapter resting on the desktop or hanging to the floor beside it.

Most who own a USB-C compatible laptop will need a USB-C hub or dock anyway, so by bundling it with the monitor you won’t need to find space for a separate hub or dock. The best of them include Ethernet, multiple USB-A ports, and support for daisy-chained displays over the unit’s DisplayPort. A USB-C monitor that can handle numerous peripherals at once is quite the treat when you put that versatility to work.

For the ultimate clean setup you can’t beat buying a monitor arm with a laptop stand and using wire clips to route wires behind the monitor arm. You’ll remove all wires from sight, and only the USB-C cord connecting the laptop to the monitor will be visible.

Be ‘Hub’ Specific & Demanding

It’s important to know that ‘USB-C hub’ is not a standardized term, so you need to make sure you’re getting what you need and not something that might be misleadingly named. A USB ‘hub’ is simply a term used to describe devices that extend USB connectivity. That’s it, and this is why you should pay attention to a monitor’s specifications: the benefits of using many of them as a USB-C hub are limited.

The best USB-C hub monitors are ones featuring at least four additional USB ports in a mix of USB-A and USB-C, and it is even more preferable if they come with an Ethernet port, since many notebooks and laptops no longer have one.

Display & Power Interests

No USB version includes a video standard in its base specification. What you will see is that USB-C devices handle video via an optional addition called DisplayPort Alternate Mode. USB-C monitors will list the version of DisplayPort supported by the USB-C port in the monitor’s specifications, and this is also something you will want to look for.

The version of DisplayPort used by the monitor isn’t something you need to worry about, as any monitor that supports video over USB-C will use a DisplayPort version sufficient for driving it at its native resolution and refresh rate. But do check that a USB-C port with DisplayPort Alternate Mode is available on the PC you plan to connect to the monitor. Not all PCs with a USB-C port support this, and the DisplayPort version will matter if you connect to a device with a high-refresh display.

Most USB-C monitors support USB Power Delivery, the standard for supplying power over USB-C. The power available varies quite a bit between devices, though – anywhere from a few watts to 100W or more. Determining whether the USB-C monitor can provide enough power for your devices is another consideration.

That can be done by checking the monitor’s specifications to find out how much power it can send over USB-C, then comparing that number to the rated wattage of your laptop’s power adapter. The monitor should support USB Power Delivery at a level equal to the wattage supplied by the laptop’s power adapter. A trivial sketch of that check follows below.
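
As a trivial illustration of that comparison, here is a quick sketch; the wattage figures are placeholders, not specs for any particular monitor or laptop:

```python
# Compare the monitor's USB Power Delivery rating against the laptop
# adapter's rated wattage. The example numbers are placeholders; read the
# real figures off the spec sheet and the power adapter label.
def monitor_can_power_laptop(monitor_pd_watts: int, adapter_watts: int) -> bool:
    return monitor_pd_watts >= adapter_watts

print(monitor_can_power_laptop(monitor_pd_watts=90, adapter_watts=65))   # True
print(monitor_can_power_laptop(monitor_pd_watts=65, adapter_watts=100))  # False
```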

Thunderbolt is Best

Thunderbolt hubs are similar to USB-C ones, but they have some distinct advantages. For starters, they have a higher minimum data rate than USB: Thunderbolt 3 and Thunderbolt 4 must support a data rate of 40Gbps, while USB 3.1 supports a minimum data rate of 10Gbps and USB4 a minimum of 20Gbps.

The higher data rate you get with a Thunderbolt hub could be important if you want to connect multiple high-speed storage devices or implement external graphics. Power delivery stays variable too, so you still need to check that the power delivered by a Thunderbolt hub monitor is enough for the device you want to connect.

The last mention we’ll make here is for USB4, the latest USB standard. It stands ready to increase the minimum data rate for USB to 20Gbps and support a maximum of 40Gbps, which will pair especially well with the latest version of USB Power Delivery supporting up to 240 watts of power. This is going to allow easy, single-cable connections with many laptops that are currently incompatible for that reason.

Now the bad news: Compatible devices are rare. Only a few USB-C hubs or docks are available and none of the USB-C monitors shown at CES 2022 announced support for USB4 or the 240-watt Power Delivery standard.

USB4 makes Power Delivery a standard inclusion and has the option to support the latest version’s 240 watts of power, but it doesn’t increase the required minimum wattage. You’ll still need to keep a close eye on exactly how much power a USB-C monitor provides.

Conclusion

USB-C hub monitors can be a bit confusing, but deciphering the details is worth the effort. You’ll need to do the same mental gymnastics to buy the right USB-C hub or dock, anyway. Choosing a USB-C hub monitor over a standalone dock will offer the same benefits and save space on your desk.

Intel and Their Aim to Dominate Data Chips

To lean on something means to rely on it, and with that understood it is fair to say companies are leaning on big data more and more by the day. All of this was foreseen in the way digital connectivity was revolutionizing business 20+ years ago, but here we are now with data centre capacity more of an issue than ever before. The demand for chips that can meet those capacity needs has been negatively impacted by the worldwide chip shortage going on right now, and that’s a reflection of just how much of private, public, and municipal life has gone digital.

This is something any web hosting provider in Canada can relate to, as most will be like us at 4GoodHosting in that we’ve invested in data centre expansions so that we have them in different major regions of the country – BC and Ontario in our case. It’s an essential part of being able to offer the uptime guarantees we do with our Canadian web hosting packages. It’s elementary that we have an understanding of data and colocation centres, and as such, recent news of Intel’s new Xeon roadmap is definitely something, given what the chipmaker giant is aiming for.

Intel has revealed ambitious plans to establish a degree of dominance in the data centre chip landscape in the immediate future. They recently revealed their multiyear Xeon roadmap for the next few years, including the heralded Sapphire Rapids, a groundbreaking new chip built on the Intel 7 process that is estimated to boost AI performance by a massive 30x.

Better-serving tech always sounds really good around here, so let’s spend this week’s entry looking at the new Intel data center chip and all that it may offer to improve user experiences.

Rapid Expansions

This new chip is going to be an introductory offering that will be followed by Emerald Rapids. The follow-up Emerald chip is also built on Intel 7 and should arrive in 2023, and then comes Sierra Forest, built on Intel 3 and set to come out in 2024. We’re also seeing Intel planning to move its Granite Rapids offering from Intel 4 to Intel 3 on the same timeframe.

The various Rapids versions are aimed directly at the top of the CPU market, and their most immediate appeal is the performance leaps across AI and ML workloads. The way people have taken to the cloud is a factor here, as Sapphire Rapids is especially focused on data centres, which lines up with one of the most relevant topics in computing these days.

Intel states that it offers better performance across a multitude of workloads useful to data centres, and the processor class is set to ship next month. Any timeframe for the others is yet to be determined, and of course that depends on just how long next-generation technology takes to properly develop.

The reality, though, is that the competition is nearly keeping pace and the company has challengers, especially Amazon, which has recently begun exploring the development of its own silicon for data centres.

Focus on Developer Ecosystems

The Xeon roadmap is expected to be industry leading, and industry experts laud it as a product portfolio developed according to the user realities and diverse needs of the majority of Intel customers, aligned to their timelines, and designed to be conducive to developer ecosystems and real innovation within them.

The chip will be the first Intel server processor to use extreme ultraviolet lithography, a key technology if Intel is to catch up with TSMC and other top chip manufacturers. Intel officials said on Thursday that by 2025 the company plans to reach 10% annual sales growth, but with moderate revenue growth this year, and that it is entering an investment phase in which it expects at least $1 billion in negative free cash flow in 2022 alongside capital spending increases.

Better Patching Making For Better Online Security

There’s seemingly no stopping the trend that each new day brings the greatest risks ever seen when it comes to being online, and that’s why cybersecurity is an ongoing big deal for both individuals and organizations. No one likes to be the recipient of malware or to have their private information exposed while just going about their day-to-day online, but it’s a very real possibility.

Companies that make digital products should be proactive in making sure those products are safe to use when connected to the web, but that is something that has been slow to come around on a grander scale. Fortunately it now is, and major players like Google, Apple, and Microsoft have become much better about finding the right preventative fixes and making them available to people in a timely manner.

Here at 4GoodHosting we are like any quality Canadian web hosting provider in that we can relate to the importance of security patches being made available, especially for products that have no choice but to handle sensitive data provided by customers or business associates. Nowadays, as cybersecurity threats become more pronounced and far-reaching, the means of addressing and thwarting them are advancing in step, better than at any time before.

93.4% Fix Rate

New research from Google indicates companies are getting much better at fixing security vulnerabilities found in their products. Many firms are also now taking less time to address issues and going past their established deadlines for patch fixes less frequently than in previous years.

Project Zero is Google’s team of security analysts tasked with finding zero-day vulnerabilities – unknown or unpatched flaws that can be abused through malware. The team recently published a blog post detailing 376 issues it found between 2019 and 2021, how vendors responded to the findings, and what the overall success of those responses meant for cybersecurity in the digital realm.

Of those 376 issues, 351 (93.4%) have been fixed and only 14 (3.7%) have not had any type of fix applied. The remaining 11 (2.9%) are still active, with 8 of those classified as having already passed their 90-day deadline.
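
For anyone wanting to sanity-check those figures, the percentages work out as follows from the counts quoted above:

```python
# Quick check of the percentages quoted above (376 issues total).
total = 376
fixed, unfixed, active = 351, 14, 11
print(f"fixed:   {fixed / total:.1%}")    # ~93.4%
print(f"unfixed: {unfixed / total:.1%}")  # ~3.7%
print(f"active:  {active / total:.1%}")   # ~2.9%
```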

Google, Microsoft, and Apple Doing Best

Roughly two-thirds of all these vulnerabilities (65%) are attributable to these 3 major companies: Microsoft had 96 (26%), Apple 85 (23%), and Google 60 (16%). In the evaluation, 90 days was the deadline for a vendor to fix an issue and ship an improved version to its customers’ endpoints, with a 14-day grace period available if the vendor asked for it while still promising to deliver a patch fix.

Apple did best with the reported vulnerabilities, fixing 87% of them within that 90-day window. Microsoft came in second at 76%, and then Google with 53% fixed. Microsoft had the most patches issued during the grace period (15 flaws, or 19%). Google was the fastest at seeing to them – an average of 44 days to fix a problem, compared to Apple’s 69 days or Microsoft’s 83 days.

These numbers are more significant when compared to how long it took these three to achieve the same thing in previous years. It took Apple 71 days on average to fix an issue in 2019 and 63 in 2020. It took Microsoft 85 days in 2019, rising to 87 in 2020. Google didn’t move much either way, but overall these companies have been consistently cutting down on the time required to address vulnerabilities.

The good news is that vendors are now fixing almost all of the bugs reported to them, and doing it relatively quickly. The past three years have seen accelerated patch delivery too, as vendors have learned best practices from each other and benefited from increasing transparency in the industry.

Paid Rewards

Google has a Vulnerability Reward Program (VRP), and through 2021 Google and the wider cybersecurity community discovered thousands of vulnerabilities, some of which were fixed by those outside the company for paid rewards. The sum of those rewards is apparently in the vicinity of $800k. Nearly 700 researchers have been paid out for their hard work in discovering new bugs, with the highest reward being $157,000, which went to a researcher who discovered an exploit chain in Android.

The Android VRP paid out twice what it did last year, rising to almost $3 million. A total of 115 Chrome VRP researchers were rewarded for 333 unique security bugs found, and payouts for those totalled into the millions.