New AI Breakthrough Has Big Potential for Engineers

None of us were alive when Greek mythology was in its heyday, but if you were you’d certainly know that Cerberus was the 3-headed dog that guarded the gates to Hades. Sure, there are a few history buffs alive today who know of this mythological creature, but it really is a shame that generally speaking humans don’t read books like they used to. But enough about that; this blog is about anything and everything related to web hosting and / or computer development, so it’s fair to ask where we’re going with this.

No one at Cerebras Systems has more than one head, and none of them have ever been anywhere near an entrance to the Underworld. But Cerebras does have the distinction of being the maker of the world’s largest processor. And if you’re an engineer who works in AI development then you may well think they’ve outdone themselves with their newest offering to the development world. Do a little research into AI and you’ll learn that a model’s capability is largely determined by how many parameters it can be trained with.

20 billion is a big number, and the reason the news around the CS-2 system and WSE-2 chip is so big is that this newest offering from Cerebras is able to train AI models with up to 20 billion parameters on a single device. This is the ultimate in optimization at the software level, and that’s why it is something that a top Canadian web hosting provider like those of us here at 4GoodHosting are going to take an interest in. Like most, we have a nice front row seat for everything that major advances in computing technology promise to do for us.

So let’s use this week’s entry to take a much deeper look at these CS-2 devices. It will be a long while before AI technology becomes commonplace in darn near everything, of course, but it’s good to be early rather than late.

No More Mass Partitioning

What this all promises to do is resolve one of the most frustrating problems for AI engineers: the need to partition large-scale models across thousands of GPUs in order to facilitate full cross-compatibility. What the CS-2 promises to do is drastically reduce the time it takes to develop and train new models.

Natural Language Processing has undeniable benefits, but the degree of functionality for these models is entirely dependent on the number of parameters that can be accommodated. To date the way it has worked is that the performance of a model correlates in a linear fashion with the number of parameters. Larger models provide better results, but these days the development of large-scale AI products traditionally requires a large number of GPUs or accelerators, with the models spread across them.

The wheels fall off when there are too many parameters to be housed within memory, or when compute performance is incapable of handling training workloads. Compounding the problem is that the partitioning process is unique to each network and compute cluster pair, so each cluster has to be catered to individually, and that makes the whole thing much more of a drawn-out process. That’s time many of these engineers will be displeased to be losing.
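
To put rough numbers on why memory becomes the wall, here is a quick back-of-the-envelope sketch in Python. The bytes-per-parameter figure is a common rule of thumb for mixed-precision training with Adam-style optimizers, not a number published by Cerebras, and the 80 GB per GPU is simply an assumption about a typical high-end accelerator.

```python
# Rough, illustrative estimate of the memory needed to train a large model.
# Assumes ~16 bytes per parameter once weights, gradients, and optimizer
# state are counted -- a common rule of thumb, not a Cerebras figure.

def training_memory_gb(num_params: float, bytes_per_param: int = 16) -> float:
    """Approximate training memory footprint in gigabytes."""
    return num_params * bytes_per_param / 1e9

for params in (1.3e9, 6e9, 20e9):
    need = training_memory_gb(params)
    gpus = need / 80  # assuming 80 GB of memory per high-end GPU
    print(f"{params / 1e9:>5.1f}B params -> ~{need:,.0f} GB, "
          f"roughly {gpus:.0f} GPUs' worth of memory")
```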

Bottleneck Buster

It is true that the most complex models consist of many more than 20 billion parameters, but the ability to train relatively large-scale AI models on a single CS-2 device may well do away with these bottlenecks, majorly accelerate development for existing players, and increase access for those previously unable to participate in the space. More cooks in the kitchen is a plus when it comes to high-tech development much of the time.

The consensus seems to be that the CS-2 may well have the ability to bring large language models to the masses in a cost-efficient way, and this greater ease of access may well be the stimulus needed for a new era in AI where big new steps are made and, in the long run, society benefits from it in many different ways.

There’s potentially more to this too, as according to Cerebras the CS-2 system may be able to accommodate even bigger models down the line, perhaps even capable of trillions of parameters. If that’s true, chaining together multiple CS-2 systems could start us down the path to AI networks that are more expansive than the human brain.

Tips for Protecting GPUs and CPUs from Heatwaves

If you’re from these parts of North America you won’t soon forget how the weather was exactly a year ago. Late June of 2021 saw the West Coast hit with a heatwave that put every other one EVER before it to shame with just how incredibly hot it got. The mercury soared to well over 40 degrees in most places, and one location not too far from here recorded the hottest temperature in Canadian history. To say it was sweltering hot would be an understatement, and unfortunately it seems that we’re all living in a much, much warmer world than we used to.

We are not going to segue into discussing climate change and / or global warming here, but what we will say is that it is not just us humans and the animals around us that stand to be worse for wear because of extreme temperatures. Turns out they can be plenty harmful for the digital devices a lot of us rely on for work, and the same ones that nearly all of us will use for entertainment even if we don’t spend our workday in front of monitors and a keyboard. There’s a reason most GPUs come with their own radiator and cooling fan, and we’ve all heard our CPU or notebook fan whirring when it needs to cool itself down.

This is a topic of interest for us here at 4GoodHosting, and we are like any reliable Canadian web hosting provider in that, just like many of you, we are a mix of both those scenarios, and if it’s not the desktop or notebook that is being put through its paces it is the mobile. Such is the way with modern life for a lot of us, and anything that puts the operations of that in jeopardy is going to have a target on its back. There’s all sorts of buzz around malware and DDoS attacks and the like these days, but what about temperature extremes?

Components Kept Cool

Processors and graphics cards are very sensitive to heat, which is why ensuring they have the ability to stay cool when the temperature around them rises drastically is important. This isn’t a new requirement, and most will come with some form of cooling solution, with better ones the more premium the product is. In high-strain environments you may also find water-cooling blocks and AIO cooling systems that use a pump, radiator, and fans to move liquid coolant over the hot surface. The thing is, these solutions depend on having cool air around them if they’re to be effective.

As it is now the system generally only cools in relation to the ambient temperature of its surroundings, and this puts additional strain on our computers and laptops. But how hot is too hot here exactly? This isn’t our area of expertise, but we’ve dug up that typical safe GPU temperatures are usually between 150° and 185° Fahrenheit (65° to 85° Celsius).

For CPUs the same values are around 140° and 160° Fahrenheit (60° to 70° Celsius), but anything even near that may be putting your system in jeopardy of overheating to the point of failure. Some of that is within your power to control, but what if ambient temperatures soar and there’s nothing you can do to remedy the situation?
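
If you want to keep an eye on those numbers yourself, here is a minimal sketch using the third-party psutil library. Temperature sensors are only exposed on some platforms (notably Linux), and the 70°C warning threshold is just our conservative reading of the ranges above.

```python
import psutil  # third-party: pip install psutil

WARN_C = 70  # conservative warning threshold in Celsius, based on the ranges above

# sensors_temperatures() only exists on some platforms (notably Linux),
# so fall back to an empty dict rather than crashing elsewhere.
temps = getattr(psutil, "sensors_temperatures", lambda: {})()
if not temps:
    print("No temperature sensors exposed on this platform.")
for chip, entries in temps.items():
    for reading in entries:
        current = reading.current or 0
        flag = "  <-- running hot, check cooling" if current >= WARN_C else ""
        print(f"{chip:<12} {reading.label or 'sensor':<16} {current}°C{flag}")
```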

Stay Cool Tips for Users

So here’s what you can do to safeguard your components if you’re aware that a heatwave is on its way.

  1. Clean Your Device and Make Sure Vents Are Clear

The buildup of dust and fluff is likely occurring in every desktop PC much faster than the owner is aware of, and this may mean your system is already deprived of airflow. The smart move is to use a can of compressed air to gently dust away any buildup every once in a while. If you have a PC case that can be opened up by removing a panel then you should also give your case fans a blast of air to clean them, and make sure that the inside of your system is clear of any dust build-up.

  2. Overclock Less Often

Many PC enthusiasts aim to maximize performance with overclocking. This not only draws additional power, it also increases the average operating temperature and sometimes quite significantly. Unless you absolutely need your system to be overclocked at all times, consider pulling back on the days when the weather is hot and likely going to get hotter.

  3. Get Fanning

Case fans are integral to keeping your CPU and GPU cool. With some units you may have a controller that allows you to bump your fan speed up to 100%, but if not you will need to control this directly on the system by heading into the BIOS. This is time consuming, so a good option is to install 3rd-party software like SpeedTemp, but be aware that some CPU fans won’t be compatible with it.

A case upgrade may be in order if you have those fans maxed out through SpeedTemp and the unit is still running too hot.

  4. Replace Thermal Paste / Thermal Pads

This one may be well beyond the comfort level of a lot of people, and quite likely so if you’ve never opened up a computer before in any capacity. If your build has a few years behind it, consider cleaning off the old thermal paste from the CPU and replacing it with a fresh batch. This may help to bring the heat down.

Some units may not have the best thermal pads to begin with either. We’ll let you look into this one further on your own as we don’t want to go on too long here. There are good tutorial videos easily found on YouTube.

Machine Learning for Google Chrome Set to Make Web Browsing Much Better

It’s pretty rare to be entirely satisfied with something; there are usually little things that irk you about whatever it is, even if you’re fairly satisfied with the experience overall. When it comes to web browsing, it is likely that most people will say it’s the pop-ups that annoy them most, and for people who have health concerns connected to optic nerve overstimulation they can be more than just a nuisance.

There’s no getting around the need for revenue, so unfortunately pop-ups and the like aren’t going anywhere. But the web development giants of the world have made efforts to improve the experience of using their web browsers over the years. Real improvements haven’t been fast to arrive, and in many ways that’s because the imperfections of browsers like Edge, Firefox, and Chrome are tolerable in the big picture of everything.

So with that understood we can assume that anything that does make news when it comes to an improved web browsing experience has to be something rather big and significant. And it is here, with Google making it clear that they are now making real strides implementing machine learning into the function of their flagship Chrome browser. A browser that – as many of you will know – is far and away the browser of choice for most people around the world.

And here at 4GoodHosting we’re like most good Canadian web hosting providers in that we feel similarly, and we’re like anyone anywhere for whom a more enjoyable and tailored web browsing experience is going to sound plenty good. So let’s dig into this here today as it’s something that likely all of us are going to benefit from over the long term.

Specific Task Focuses

It was in a recent blog post that Google announced the rolling out of new features to Chrome via ‘on device’ machine learning. Improving a person’s browsing experience is the goal, and it will be made possible by adding several new ML (machine learning) models that each focus on different tasks. For starters, ‘web notifications’ will be handled differently, and we may well be presented with a better and more adaptive tool bar too. Google states these new features will promote a ‘safer, more accessible and more personalized browsing experience’ and one that is better for all different types of users.

Another big plus is that having models run (and stay) on your device instead of in the cloud will promote better privacy, and maybe we can go so far as to think it may eventually keep certain people from feeling they have to go with DuckDuckGo or something similar.

But let’s stick with the indicated changes for web notifications first. What we’re going to see in upcoming versions of Chrome is an on-device machine learning structure that will examine how you interact with notifications. If and when it finds you are routinely denying permissions to certain types of notifications then it will put a stop to you receiving that type in the future. Nearly everyone will be thankful to not have to click ‘dismiss’ every time they pop up, and as a lot of us know they pop up way too often.

Good intentions, sure, but it will be nice if the browser is able to see the pattern, realize that permission is never going to be approved, and stop requesting it. However, you’re still able to override Google’s prediction if you’d rather keep making that choice for yourself.
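
We obviously don’t have access to Chrome’s actual on-device model, but the general idea of learning from past permission interactions can be sketched with a toy classifier. The feature columns and the training data below are invented purely for illustration; they are not what Google uses.

```python
# Toy sketch of "learn from past notification interactions" -- NOT Chrome's
# actual model. Features and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [prompts_seen, times_denied, times_granted] for a hypothetical site.
X = np.array([
    [10, 9, 1],    # almost always denied
    [12, 11, 0],
    [6, 6, 0],
    [8, 1, 7],     # usually granted
    [15, 2, 13],
    [9, 0, 9],
])
# Label: 1 = user is expected to deny the next prompt, 0 = expected to allow it.
y = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X, y)

new_site = np.array([[7, 6, 0]])  # a site whose prompts keep getting dismissed
if model.predict(new_site)[0] == 1:
    print("Quiet this permission prompt automatically (user can still override).")
else:
    print("Show the permission prompt as usual.")
```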

Responsive to Behaviour Patterns

Another aim that they’ve focused on here is to have Chrome change what the tool bar does based on your past behavior. One example would be where people like to use voice search in the morning while on public transit, or where a person is prone to sharing a lot of links. For either scenario Chrome will then anticipate your needs and add either a microphone button or ‘share’ icon in the tool bar to simplify the process for the user.

Customizing it manually will be possible as well, but as of now whether or not this functionality will appear on other platforms is not known. We do know that any time proprietary interests aren’t involved, most other developers will follow suit if the changes to the technology end up being well received.

Google has also been quick to tout the work machine learning is already doing for Chrome users. Even with the current version, when you arrive at a web page that page is scanned and compared to a database of known phishing or malicious sites. In the event of a match you are provided with a warning, and you may already be familiar with it – a full-page, all-red page block that has been standard since March of this year. Google states that Chrome is now able to detect 2.5 times more malicious sites than it could before.
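
For a rough sense of how a ‘known bad site’ lookup works, here is a heavily simplified sketch. The real Safe Browsing check relies on hashed URL prefixes matched against a continuously updated database; the hostnames below are placeholders we made up, not real malicious domains.

```python
# Greatly simplified flavour of a "known bad site" lookup. The entries are
# invented placeholders; real Safe Browsing is far more involved.
import hashlib
from urllib.parse import urlparse

def host_hash(url: str) -> str:
    """Hash just the hostname and keep a short prefix, blocklist style."""
    host = urlparse(url).hostname or ""
    return hashlib.sha256(host.encode("utf-8")).hexdigest()[:8]

# Pretend local database of hash prefixes for known phishing hosts.
KNOWN_BAD_PREFIXES = {
    host_hash("http://example-phish.test"),
    host_hash("http://fake-bank-login.test"),
}

def check(url: str) -> None:
    verdict = "BLOCK" if host_hash(url) in KNOWN_BAD_PREFIXES else "OK"
    print(f"{verdict:<5} {url}")

check("http://example-phish.test/login")
check("https://www.4goodhosting.com/")
```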

Some – Not All – In Next Release

Apparently the smart silencing of notifications will be part of the next release of Chrome, but we may have to wait for certain other offerings that are part of Google’s greater incorporation of machine learning into the flagship web browser. Another question people are asking is whether these improvements will be for mobile only, and some people are wondering if overall browser performance might decline because of this. Google has said that the adaptive toolbar is one feature that won’t be in the next rollout.

This will be an interesting development to keep an eye on, and in large part because as mentioned when improvements are proven positive and well received other tech giants tend to follow suit with regards to their products. And given how much time nearly all of us spend web browsing it’s something that most will have a good level of interest in.

Power-Providing Clothing for Wearable Devices Soon to be a Thing

Most people won’t have heard of power dressing, or if they have they’ll assume it’s a term related to style choices for people who want to make the right impression in business. We will freely admit we’ve never heard of it in either context, but it seems as if the clothing that can power devices is in fact going to be joining the many rather stunning technological advances that have been made over recent years. Indeed, portable power banks in purses have been extremely helpful for people who tend to really put certain devices through their paces during the day, but can we think that they might be replaced en masse by garments?

Well, apparently they may be, and what we are talking about here is a flexible waterproof fabric that is built to convert body movement into electrical energy for powering those devices. So if you’re wondering if we’re talking about a pair of pants that powers you up the more you walk – that’s exactly what we are talking about here! It really is quite the eye-opener to dig up all the technological advancements that are occurring in the world, and what we do here each week puts us directly in line to do that.

This is the sort of news that is intriguing for any good Canadian web hosting provider, and obviously that’s because it is going to have an immediate appeal for everyone and not just those who are especially digitally and web savvy, or even a person who relies on someone like us for reliable web hosting. If motion can mean recharging, who’s not going to see the upside in that?

Press on Polymer

If what we’ve read about this is correct, this really is some cool technology in the making, and as the researchers in Singapore have noted it is set up to be particularly ideal for people with wearable smart devices that have more in the way of ongoing power needs and / or aren’t quite as easy to recharge regularly.

The key ingredient in this powering fabric is a polymer that when pressed or squeezed takes vibrations produced by the smallest of motions and then is able to convert them into an electric charge. The material is made with a spandex base layer and reinforced with a rubber-like component. When conducting an experiment with it researchers found that when they tapped a 4cm piece of the material it immediately generated enough electrical energy to light up 100 LED bulbs.

Again, they are referring to this as Power Dressing, and the concept had actually been analyzed for more than 20 years before it picked up enough steam to be where it is today, getting closer to realization. One thing that they are finding, however, is that most electricity-producing fabrics don’t hold up to long-term use. They also don’t take very well to being cleaned in washing machines, and that is a hurdle that absolutely has to be overcome.

Work in Progress

So obviously the immediate focus was getting over that roadblock, and developing something that does not degrade in function after being washed and maintains the same level of electrical output over time. Consider it done, as the development team in Singapore has now done just that and is reporting their revised conductive material doesn’t lose anything when washed, folded, or crumpled. What’s more, they are estimating that it will maintain stable electrical output for five months of wear, keeping in mind no one’s going to be wearing these garments every day.

The prototype is set to be woven into garments, but it may also eventually be compatible with footwear, which is where it would most likely get the most bang-for-buck with charging based on intensity of motion.

It is capable of producing 2.34 watts per square metre in one of 2 ways: either by pressing or squashing, the same way standard piezoelectricity is created, or when it comes into contact with or generates any measure of friction against other materials, including skin or certain other fabrics that would promote it. When attached to an arm, leg, hand, elbow, or even to the insole of a shoe it will be able to harness energy from a range of human movements, including running, playing sports, or roaring through the park trying to keep up with your dog.
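
Taking the researchers’ 2.34 watts per square metre figure at face value, a quick bit of arithmetic shows how output would scale with the area of fabric involved. The garment areas below are rough guesses of ours for illustration, not measurements from the study.

```python
POWER_DENSITY_W_PER_M2 = 2.34  # figure reported by the researchers

# Rough, made-up garment areas in square metres, purely for illustration.
areas_m2 = {
    "shoe insole": 0.02,
    "elbow patch": 0.01,
    "full sleeve": 0.10,
}

for name, area in areas_m2.items():
    milliwatts = POWER_DENSITY_W_PER_M2 * area * 1000
    print(f"{name:<12} ~{milliwatts:.0f} mW")
```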

The Coming Significance of Platform Engineering in Web Development

A cobbler is the person who will repair a shoe, although it is increasingly rare that anyone gets a pair of shoes repaired these days. In fact most shoes aren’t even made with any real possibility of reattaching a sole or anything similar. In the 21st century you can even say it’s more likely that someone will be putting those same energies into some type of digitally-built tool or resource, one that people will likely get a lot of mileage out of as compared to an inexpensive pair of sneakers. ‘Cobbling’ in that sense is putting together a collection of disparate tools and then making them work as well as can be expected when working with web development.

That’s our roundabout way of introducing platform engineering, which if you haven’t heard of it is likely to be the next big thing in development, and a means of bringing technologies together with more functional compatibility for builds that come together with a) much more speed, and b) a whole lot more in the way of getting to a ‘finished’ product sooner. What this stands to replace is the way the different home-grown self-service frameworks have – for the most part – shown themselves to be brittle, high maintenance, and often way too expensive when the entirety of everything is taken into account.

We’ll get into the meat of what more may be possible with this here, but it goes without saying that here at 4GoodHosting we’re like any other quality Canadian web hosting provider in seeing how this may be a very relevant technological development for the people who do the work behind the scenes that makes our digital world what it is and everything it’s capable of doing. Interesting stuff for sure.

Engineering Made Common

More succinctly, platform engineering is the practice of building and operating a common platform that internal development teams can share and use to accelerate software releases. The belief is that it will bridge the gap between software and hardware, and that platform engineers will enable application developers to release more innovative software in less time and with much more in the way of efficiency.

Not only is this super relevant for platform engineering, it is potentially big for the hybrid cloud too. The aim is to have self-service as both a hallmark of DevOps maturity and a key platform engineering attribute, one that is proven to support developer-driven provisioning of both applications and any underlying infrastructure required along with them. The value in that should be self-evident, as today application teams working in hybrid and multi-cloud environments require different workflows, tools, and skillsets across different clouds.

This means complexity, and often too much of it for things to get done right. The need is for getting to a final product asap, and the challenge shines a spotlight on the growing demand for a unified approach to platform engineering and developer self-service.

Ideal for Scaling DevOps Initiatives

There are estimates that upwards of 85% of enterprises will come up short with their efforts to scale DevOps initiatives if no type of self-service platform approach is made available for them by the end of next year (2023). To counter that the recommendation is that infrastructure and operations leaders begin appointing platform owners and establishing platform engineering teams. But the important point is that they be building in self-service infrastructure capabilities that are in line with developer needs at the same time.

Similar estimates suggest that by 2025, 75% of organisations with platform teams will provide self-service developer portals as a means of improving developer experience and giving product innovation a real boost, given how much more quickly results are seen and how developers get a better understanding of where the current development path is likely to take them.

Compelling Benefits

Accelerated software development is the big carrot at the end of the stick leading the investment into platform engineering. Pursuing it with such enthusiasm ensures application development teams bring that productivity to all aspects of the software delivery cycle. Ongoing examination of the entire software development life cycle, from source code through test and development and on into provisioning and production operations, is the best way to get the back half of the development equation right.

From there what it promotes is much better processes and platforms that enable application developers to rapidly provision and release software. Teams will probably be using infrastructure automation and configuration management tools like Ansible, Chef, or Puppet. These tools are conducive to continuous automation that extends processes used in software development to infrastructure engineering. Look for infrastructure-as-code (IaC) tools such as Terraform to work well for codifying the tasks required to provision new resources and to continue playing a key role in driving the growth of platform engineering and platform engineering teams.
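
As one hedged illustration of what a self-service provisioning step might look like when it wraps an IaC tool, the sketch below drives the Terraform CLI from Python. The module path and variable name are assumptions about a hypothetical internal platform, not a prescribed pattern.

```python
# Minimal sketch of a self-service provisioning step wrapping the Terraform
# CLI. The module directory and variables are hypothetical; a real internal
# platform would add auth, policy checks, state management, and approvals.
import subprocess

def provision(module_dir: str, env: str) -> None:
    """Run init/plan/apply for a Terraform module on behalf of a developer."""
    base = ["terraform", f"-chdir={module_dir}"]
    subprocess.run(base + ["init", "-input=false"], check=True)
    subprocess.run(base + ["plan", "-input=false",
                           f"-var=environment={env}", "-out=plan.tfplan"],
                   check=True)
    # A real internal platform would put an approval / policy gate here.
    subprocess.run(base + ["apply", "-input=false", "plan.tfplan"], check=True)

if __name__ == "__main__":
    provision("./modules/dev-web-stack", env="dev")  # hypothetical module path
```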

Most notable at this time is Morpheus Data. It is a platform engineered for platform engineers and is set up explicitly to allow self-service provisioning of application services into any private or public cloud. When combined with Dell VxRail hyperconverged infrastructure you will see digital transformations sped up impressively and you’ll also be improving on your cloud visibility at the same time.

FAXID For Much Faster Ransomware Detection

Long gone are the days when you actually had to have a captive individual in order to demand a ransom. Nowadays that would even be very uncommon, as much more often it is digital property rather than a person that’s been captured and the takers are looking to get paid if that property is to be released. We’ve gone on here at some length about how costly it can be for some companies when they choose to be lax about cybersecurity, and especially nowadays. The age old dance with all of this remains the same – security improves, threats evolve, security improves to counter those evolutions, and then threats evolve again.

And of course it’s the bigger fish that need to be concerned about frying. If you’re on the smaller side of the scale when it comes to running a business you probably won’t get targeted, but there’s still no guarantee you won’t be. We don’t claim to be web security experts, but here at 4GoodHosting we’re like any good Canadian web hosting provider in that we can point you in the direction of one of them if that’s what you need. We do have an understanding of the basics on the subject and that’s part of the reason why we’re fairly keen to share any news related to it here, especially when it means even better means of avoiding a ransom situation.

So what is newsworthy here is a new technology that is in the process of proving itself to be MUCH faster at identifying ransomware attacks and detecting them early enough that countermeasures can be implemented – something that will be part of a complete cybersecurity plan, which is a must for any business of a sufficient size that there’s potentially serious loss if data is accessed and then taken for ransom.

Malware Meeting Its Match?

A new approach for implementing ransomware detection techniques has been developed by researchers, and the appeal of it is that it is able to detect a broad range of ransomware far more quickly than previous systems. We will at this point assume we don’t need to provide much of an explanation of what ransomware is, but if we do, it is a type of malware and when ransomware infiltrates a system, it encrypts that system’s data, which becomes immediately inaccessible to users.

What will follow next are the demands; the people responsible for the ransomware make it clear to the system’s operators that if they want access to their own data they had better be sending money. And this type of digital threat has already proved plenty expensive. The FBI says they received 3,729 ransomware complaints in 2021, with around $49 million paid out in ransoms. That’s a lot of money, and it makes clear why the attackers are going to the lengths they are to improve on the sneakiness of their ransomware before putting it out there.

We do know that computing systems already make use of a variety of security tools that monitor incoming traffic with an eye to detecting potential malware and preventing it from breaching the system, and new ransomware detection approaches are being evaluated all the time by many different interest groups and developers. A lot of it is very effective IF it can be implemented in a timely way.

The challenge here is detecting ransomware quickly enough to prevent it from fully establishing itself in the system. File encryption begins as soon as ransomware enters the system, so if the countermeasures can be triggered right away then that is going to be very beneficial.

FAXID Pairs with XGBoost

What’s getting buzz these days, and why we are on this topic, is a machine-learning algorithm called XGBoost. It has been proven effective for detecting ransomware for some time, but up until now when systems run XGBoost as software on a CPU or GPU it doesn’t run quickly enough. Add to that, attempts to incorporate XGBoost into hardware systems haven’t gone as well as hoped because of a lack of flexibility.
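
FAXID itself is a hardware implementation, but the software side of the idea, XGBoost classifying system activity as ransomware-like or benign, can be sketched in a few lines. The features (write rate, write entropy, rename rate) and the synthetic data are our own invention for illustration; they are not FAXID’s actual feature set.

```python
# Software-side sketch of XGBoost-based ransomware detection. The features
# and synthetic data below are invented for illustration only.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Benign activity: modest write rates, mid-range write entropy, few renames.
benign = np.column_stack([
    rng.normal(20, 5, 500),       # file writes per second
    rng.normal(0.45, 0.10, 500),  # average entropy of written data (0-1)
    rng.normal(1, 0.5, 500),      # file renames per second
])
# Ransomware-like activity: bursts of high-entropy writes and mass renames.
malicious = np.column_stack([
    rng.normal(200, 40, 500),
    rng.normal(0.95, 0.02, 500),
    rng.normal(50, 10, 500),
])

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
clf.fit(X, y)

sample = np.array([[180, 0.93, 42]])  # a suspicious burst of activity
print("ransomware probability:", clf.predict_proba(sample)[0, 1])
```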

Because those implementations focus on very specific challenges, it becomes difficult or impossible for them to stay on top of the full range of ransomware attack types and identify them as quickly as needed.

But this new FAXID technology is a hardware-based approach that allows XGBoost to monitor for a wide range of ransomware attacks and do so much more quickly than with the existing software approaches.

Not only is FAXID just as accurate as software-based approaches at detecting ransomware, the speed at which it can do so is drastically faster. FAXID was up to 65.8x faster than software running XGBoost on a CPU and up to 5.3x faster than software running XGBoost on a GPU.

FAXID is also getting high marks for the way it allows problems to be run in parallel; rather than allocating all of the security hardware’s computing power to a single problem, you could devote some amount of the hardware to ransomware detection and another percentage of it to another challenge like fraud detection or some other identified threat that may be present in unison.

This has a lot of potential for cybersecurity as a whole given the current atmosphere where ransomware attacks are becoming much more sophisticated. People in business should be thankful these types of advances are being made, as they may contribute to preventing quite the expensive headache in the future.

3rd-Party Web Trackers Logging Pre-Submission Information Entered

Anyone and everyone is going to be extra mindful of what information is shared digitally these days, and even most kids are aware of the fact that you can’t be entirely at ease about what you type into submission fields and then press ‘Enter’. You need to be mindful of what you share, but it turns out you need to be the same way before you even press the enter button at all. Many people may think they’ve smartly avoided any potential problems by backspacing over something they’ve typed and were about to submit, but it turns out the damage may already be done.

We’ll get to what exactly is at issue here, but before we do we should make clear that ‘leaks’ don’t always happen on purpose. Many times information is exposed not because someone is choosing to expose it, but rather because the information is contained in a location that doesn’t actually have the security protocols owners / users think it does. The truth of the matter is it’s nearly impossible to be airtight with this stuff 100% of the time.

Here at 4GoodHosting we’re like any other good Canadian web hosting provider in that we like to share information with our customers anytime we find an example of it that we know will have real significance for them. This is one of those scenarios, as nearly everyone is going to be voluntarily providing information about themselves when asked to do so online. Any way you can be more in the know about the dos and don’ts when it comes to this is going to be helpful, so here we are for this week.

Made Available

A recent study that looked into the top 100k ranking websites indicates that many are leaking information you enter in site forms to third-party trackers, and that this may be happening even before you press submit. The data that is being leaked may include personal identifiers, email addresses, usernames, and passwords, along with messages that were entered into forms but deleted and never actually submitted.

This type of data leak is sneaky because until now internet users would assume that the information they type on websites isn’t available unless they submit it. That IS true most of the time, but for almost 3% of all tested sites, once information is typed out it may already have been made available, and that’s the reality even if you don’t actually submit it.

A crawler based on DuckDuckGo’s Tracker Radar Collector tool was used to monitor exfiltration activities, and the results confirm that this is very much a possibility. There’s also not much, if anything, that could serve as a tip-off to users to indicate when this risk is present and where information should ideally not be entered into a field at all.

Nearly 19k Sites

The crawler was equipped with a pre-trained machine-learning classifier that detected email and password fields, as well as making access to those fields interceptable. The test then covered 2.8 million pages found on the top 100,000 highest ranking sites in the world, and found that of those 100k, 1,844 websites let trackers exfiltrate email addresses before submission when visited from Europe. That is not such a high percentage, but for the same ratio in America it’s an entirely different story.

When visiting those same websites from the US, there were a full 2,950 sites collecting information before submission and in addition researchers determined 52 websites to be collecting passwords in the same way. It should be mentioned that some of them did make changes and efforts to improve security after being made aware of the research findings and informed that they were leaking.

But the logical next question here is who is receiving the data? We know that website trackers serve to monitor visitor activity, derive data points related to preferences, log interactions, and create an ID for each user, one that is – supposedly – anonymous. Trackers are used by the sites to give a more personalized online experience to their users, and the value for them is having advertisers serve targeted ads to their visitors with an eye to increasing monetary gains.

Keystroke Monitoring

The bulk of these 3rd-party trackers use scripts that monitor for keystrokes inside a form. When this happens they save the content, collecting it even before the user has pressed that submit button. The fallout is that data entered into forms gets logged and the supposed anonymity of the tracker IDs is undermined, pushing up privacy and security risks big time.

There are not a lot of these trackers out there, and most of the ones that are in operation are known by name. 662 sites were found to have LiveRamp’s trackers, 383 had Taboola, and Adobe’s Bizible was running on 191 of them. Further, Verizon was collecting data from 255 sites. All of this is paired with the understanding that the problem stems from a small number of trackers that are prevalent on the web.

So what is a person or organization to do? The consensus is the best way to deal with this problem is to block all 3rd-party trackers using your browser’s internal blocker. A built-in blocker is standard for nearly all web browsers, and it is usually found in the privacy section of the settings menu.

Private email relay services are a smart choice too because they give users the capacity to generate pseudonymous email addresses. In the event someone does get their hands on one, identification won’t be possible. And for those who want to be maximally proactive there is a browser add-on named Leak Inspector that monitors exfiltration events on any site and provides warnings to users when there is a need for them.

Edge Now Second to Only Chrome as Web Browser of Choice

We can go right ahead and assume that there are plenty of Mac users who opt to use Chrome as their browser rather than the Safari their device came with. We say that with confidence because we’re one of them, and it is a fact that Google’s offering continues to be the de facto choice as a web browser for the majority of people all around the world. There’s plenty of reasons for that, although at the same time we will be like most people and say that both Safari and Firefox aren’t bad per se. Internet Explorer on the other hand is an entirely different story.

Now to be fair, if IE hadn’t been left to wither on the vine that might not be the case, but the fact it was played a part in why the Edge browser has made the inroads into the market that it has. But as always choice is a good thing, and if anything it puts the pressure on the runners-up to get better to reclaim whatever user share they’ve lost. So competition joins choice as a good thing. This is one topic that everyone can relate to, and it’s been a topic of discussion in nearly every office here in North America and likely elsewhere around the globe.

Like any good Canadian web hosting provider we’re no different here at 4GoodHosting, and you can know that those of us around here have the same strong opinions about which web browser is best and why. Likely you have much the same going on around your places of productivity, so this is the topic for our blog entry this week.

Closed the Gap

February of this year had Microsoft Edge on the cusp of catching Safari with less than a half percentage point separating the 2 browsers in terms of popularity among desktop users. Estimates are now that Edge is used on 10.07% of desktop computers worldwide, and that is 0.46% ahead of Safari who has now dipped down to 9.61%.

Google Chrome is still far and away the top dog though, being the browser of choice for 66.58% of all desktop users. Mozilla’s Firefox isn’t doing nearly as well as either of them, currently with just 7.87% of the share. That’s quite the drop from the 9.18% share it had just a few months ago.

Edge’s lead on other browsers, however, needs to be qualified depending on location. If we are to look at just the US, Edge trails Safari by quite a bit with only 12.55% of market share as compared to Safari’s 17.1%. In contrast Edge long ago passed Safari on the other side of the pond, with 11.73% and 9.36% respectively in Europe.

And for Firefox it’s not looking promising at all, despite it being what we consider a very functional browser that doesn’t really come up short in comparison to others if you look at it strictly from the performance angle. Yes, it doesn’t have the marketing clout of either Microsoft or Google and that means brand recognition won’t be the same.

Long Ago in January 2021

As the default Windows 11 browser, the popularity of Edge has gone up quite a bit. We talked about February of this year, but let’s go back one year and one month further to the start of 2021. There were concrete signs that Edge would be passing Safari for 2nd place in user popularity, and at that time the estimate was that it was being used on 9.54% of desktops globally. But back in January 2021 Safari was in possession of a 10.38% market share, and so what we are seeing is a gradual decline in popularity over the last year plus.

Chrome continues to move forward with speed though, even if it’s not ‘pulling away’ at all. It has seen its user base increase ever so slightly over that time, but at the same time Firefox has been losing users since the beginning of the year. And that is true even though Firefox hasn’t been at rest at all and has made regular updates and improvements to their browser.

So perhaps Apple and Safari can take some consolation in the fact they’re holding on to third place quite well, but the reality is they have lost 0.23% of market share since February. However, we should keep in mind that Apple has hinted that it may be making sweeping changes to the way Safari functions in macOS 13 towards the end of 2022.

Different for Mobile

It’s a different story for mobile platforms, and that can be directly attributed to Microsoft’s lack of a mobile operating system since Windows Mobile was abandoned. In this same market analysis Edge doesn’t even crack the top 6 browsers for mobile, while Chrome has 62.87% of usage share and Safari on iPhones and iPads comes in at 25.35% for a comfortable second place. Samsung Internet comes 3rd with 4.9%.

Overall statistics for desktop and mobile – Chrome 64.36%, Safari 19.13%, Edge 4.07%, Firefox 3.41%, Samsung Internet 2.84%, and Opera 2.07%.

It is true that Safari for desktop has received complaints from users recently because of bugs, user experience, and matters related to website compatibility. Apple’s Safari team responded to that by asking for feedback on improvements, and to be fair it did lead to a radical redesign of the browser. Many of those changes were rolled back before the final version was publicly released in September.

New ‘Declaration of the Future of the Internet’ Signed Onto by More than 60 Countries

Go back some 30 years and those of us who were anywhere past adolescence by that time would be dumbfounded to learn just how life-changing this new ‘Internet’ thing would become, along with being impressed with dial-up modems in a way that would seem bizarre nowadays considering where that technology has gone in such a short time. As with anything there’s been growing pains with the Internet too, and like any influential and game-changing technology it has been used for ill in the same way it’s provided good for society.

It’s also become an integral internationally shared resource, and that goes beyond just the sphere of business. The interconnectivity of the modern world is increasingly dependent on the submarine cables laid across entire ocean floors so that the globe can be connected by the Internet, and here at 4GoodHosting we are like any good Canadian web hosting provider in that it’s something that is near and dear to our hearts given the nature of what we do for people and the service we provide.

This is in part connected to the need to safeguard the future of the Internet, as there are so many complexities to it that didn’t exist previously and no doubt there will be more of them in the future. This is why the recently-signed Declaration of the Future of the Internet is such a big deal and more than worthy of being the subject for this week’s blog entry here.

Protecting Democracy & More

One of the ways that the Internet has most notably been abused is to threaten democratic institutions, with attacks on the legitimacy of election results and the like, and there’s no doubt that there are anti-North American interest groups in other parts of the world that are using the Web as a means of infiltrating and being subversive within democratic institutions. The belief is that if no efforts are made to nip this in the bud or counter it now then it may become too big to rein in in the future.

This is why there was such a push to get countries onboard for this declaration now, and it seems there was enough enthusiasm and resolve to see it through. The Declaration of the Future of the Internet is meant to strengthen democracy online, as the countries that have agreed to its terms have promised they will not undermine elections by running online misinformation campaigns or illegally spying on people. At least this is according to the White House.

More specifically what the declaration does is commit to the promotion of safety and the equitable use of the internet, with countries signed on agreeing to refrain from imposing government-led shutdowns and also committing to providing both affordable and reliable internet services for their populace. This declaration isn’t legally binding, but countries signed on have been told that if they back out they will get a very disapproving finger wag from Sleepy Joe at the very least.

Bigger Picture Aim

What this declaration is more accurately aiming to do is have the principles set forth within it serve as a reference for public policy makers, businesses, citizens and civil society organizations. The White House put out a fact sheet where it provided further insight on how the US and other partners will collaborate to safeguard the future of the internet, saying they and their partners will work together to promote this vision and its principles globally, but with respect for each other’s regulatory autonomy within their own jurisdictions and in accordance with their respective domestic laws and international legal obligations.

60 and Counting

So far 60 countries have committed to the declaration and there is the possibility of more doing so in the next little while. Russia, China and India were the notable absentees, and while India is a bit of a surprise the other 2 are not, considering the reasons they might have for interfering in democratic processes and the web being among the most effective means of making that happen. Google is among the US-based tech giants endorsing the declaration, and their assertion is that the private sector must also play an important role in furthering internet standards.

What is likely is that something similar will be required every couple of decades or so moving forward, and particularly if the web is to make even deeper inroads into life beyond a shallow level. It certainly has shown it has the potential for that, and that potential is likely growing all the time.

Tape Storage’s Resurgence for Unstructured Data

It’s not necessarily devolution when you choose to go back to an outdated technology, although many people will be absolute in their thinking that it is. But there are plenty of instances where the way it used to be ends up working better, and often when new operating realities change the game. This can be true if we look at it from the perspective of data storage where companies are doing what everyone else is doing – that is, choosing to locate most of that data storage in the cloud. Now if we were to collectively lose our minds and revert back entirely to physical storage, that would be devolution.

Where we’re going with this is that some physical storage means are making a comeback, and for plenty good reasons. Tape storage for unstructured data is one example here that’s really at the forefront these days, and naturally anything related to data storage will be relatable for us here at 4GoodHosting or for any good Canadian web hosting provider. We’re always attuned to the need for data storage security, and it’s a priority for us in the same way it is for many of you.

So that is why we see tape storage’s ongoing resurgence as being at least somewhat newsworthy and we’re making it the topic for this week’s entry. Let’s get into it.

Obsolete? Not so Fast

The fact that a record 148 exabytes of tape was shipped last year definitely indicates that tape storage has not become obsolete at all. In fact, a recent report shows that LTO tape saw an impressive 40% growth rate for 2021. The reason for this is that many organizations are attempting to cut costs related to cloud storage when archiving their unstructured data. And while only 105EB of total tape capacity was shipped during the pandemic in 2020, the amount that was ordered for 2021 set a new record.

What we’re seeing here is organizations returning to tape technology, seeking out storage solutions that have the promise of higher capacities, reliability, long term data archiving and stronger data protection measures that have what is needed to counter ever-changing and expanding cybersecurity risks.

Increasing prevalence and complexity of malware is a contributing factor too. Irreparable harm can come from an organization having its systems infected with malware and the potential for harm is nearly the same as when data is locked following a ransomware attack. It’s true there are many well-established ways a company can be proactive in defending against the latest cyberthreats, but at the same time tape storage prevents sensitive files and documents from being online to begin with.

Air Gap is Key

We’re also seeing many businesses and organizations turning to LTO tape technology for increased data protection in response to surging ransomware attacks. The key component in it that makes it superior and effective is an air-gap which denies cybercriminals the physical connectivity needed to access, encrypt, or delete data.

Also of significance in all of this is the 3-2-1-1 backup rule. You’ve likely never heard of that, so let’s lay out what it is: make at least three copies of your data, store them on 2 different storage mediums, keep one copy off site, and keep another one offline. LTO-9 tape also makes it easier for businesses to store more data on a single tape because of its increased storage capacity, which can be as high as 45 terabytes when compressed.
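
As a small illustration of the 3-2-1-1 rule in practice, the sketch below checks a made-up inventory of backup copies against its four conditions. The inventory entries are placeholders, not a recommendation of any particular product mix.

```python
# Toy check of a backup inventory against the 3-2-1-1 rule:
# >=3 copies, >=2 media types, >=1 offsite copy, >=1 offline (air-gapped) copy.
# The inventory entries are invented placeholders for illustration.
copies = [
    {"name": "primary NAS",   "medium": "disk",  "offsite": False, "offline": False},
    {"name": "cloud archive", "medium": "cloud", "offsite": True,  "offline": False},
    {"name": "LTO-9 tape",    "medium": "tape",  "offsite": True,  "offline": True},
]

checks = {
    "at least 3 copies":       len(copies) >= 3,
    "at least 2 media types":  len({c["medium"] for c in copies}) >= 2,
    "at least 1 offsite copy": any(c["offsite"] for c in copies),
    "at least 1 offline copy": any(c["offline"] for c in copies),
}

for rule, ok in checks.items():
    print(f"{'PASS' if ok else 'FAIL'}  {rule}")
```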

As a last note promoting this type of storage for unstructured data, this medium also has the advantage of being backward compatible with LTO-8 cartridges in the event that an organization still needs to work with existing tape storage. It certainly is nice to have options, and sometimes what is now old may be a better fit than what is newer and has replaced it.