Maximizing Your Use of Google Cloud Free Tier

Some people will say you can never be too frugal or thrifty, and in honesty even if you can afford to spend more there’s really no good reason you should if there’s an alternative. It’s something a lot of people can relate to. If you’re the webmaster or similarly titled individual at the helm of a company or organization and need to find savings in your web operations, then the Google Cloud Free Tier offers just that, provided you take full advantage of what cloud storage offers these days.

The appeal of cloud storage of course needs no explanation. Physical storage is just that – physical – and that means space demands. As our lives and businesses become increasingly digital every day, it’s more and more impractical to house data in physical data centers. Cloud storage has been a godsend in this regard, and we’re very fortunate to have it now.

Here at 4GoodHosting, we’re like every other Canadian web hosting provider in that our data centers are usually working at near capacity, and we also take every opportunity to utilize cloud storage for internal storage needs related to our business and day-to-day operations. Google has had its Free Tier for Google Cloud for a while now, and while it’s great for the individual or smaller-scale operation, there’s just not enough capacity for the likes of us.

For the average individual or smaller business, however, it’s a great resource to have at your disposal. Successful businesses grow, and you wouldn’t want it any other way. But that may mean stretching your storage needs beyond the Google Cloud Free Tier maximum.

But it also may not, particularly if you find ways to maximize your use of the Google Cloud Free Tier. Here’s how you can do that.

Store Only What’s Needed

The included free databases, Firestore and Cloud Storage, are flexible tools that let you stash away key-value documents and objects respectively. Google Cloud’s always-free tier allows no-cost storage of your first 1GB (Firestore) and 10GB (Cloud Storage). For apps that keep more details, however, it’s common to go through those free gigabytes fairly quickly. So quit saving information unless you absolutely need it. This means no obsessive collection of data just in case you need it for debugging later. Skip extra timestamps, and be judicious about which large caches of data you really need to keep in this resource.
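One simple way to put this into practice is to strip records down to an explicit allow-list of fields before writing them to storage. This is a minimal sketch; the field names and record shape are hypothetical, not anything Firestore itself prescribes.

```python
# Keep only the fields the app actually reads back; drop 'just in case'
# debug extras before anything is written to storage.
KEEP_FIELDS = {"user_id", "order_total", "created_at"}

def prune(record: dict) -> dict:
    """Return a copy of record containing only the fields worth storing."""
    return {k: v for k, v in record.items() if k in KEEP_FIELDS}

raw = {
    "user_id": "u123",
    "order_total": 49.99,
    "created_at": "2020-11-02",
    "debug_trace": "stacktrace...",   # only useful for one-off debugging
    "client_fingerprint": "abc123",   # speculative 'maybe later' data
}

slim = prune(raw)  # this is what would actually be written to the database
```

Applied consistently at the write path, a filter like this keeps speculative data from ever reaching your free gigabytes.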

Make Use of Compression

There’s no shortage of code libraries for adding a layer of compression on the client side. Rather than storing fat blocks of JSON, the client code can run the data through an algorithm like LZW or Gzip before sending it over the wire to your server for unpacking and storage. That means faster responses, fewer bandwidth issues, and significantly less demand on your free monthly data storage quota. Do be aware though that small data packets can actually get bigger once the compression format’s overhead is factored in.
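Both points are easy to see with the standard library’s gzip module. The sketch below compresses a repetitive JSON payload (big win) and a two-byte payload (net loss, because gzip adds roughly 20 bytes of header and trailer); the payload contents are invented for illustration.

```python
import gzip
import json

# A repetitive JSON payload, the kind that compresses very well.
payload = {"readings": [{"t": i, "v": 20.0} for i in range(500)]}
raw = json.dumps(payload).encode("utf-8")
packed = gzip.compress(raw)

# A tiny payload: gzip's fixed overhead makes it *larger* than the original.
tiny_raw = b"ok"
tiny_packed = gzip.compress(tiny_raw)

# The server side simply reverses the step before storing.
restored = gzip.decompress(packed)
```

So compress the bulky, repetitive payloads, and leave the tiny ones alone.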

Serverless is the Way to Go

When it comes to intermittent compute services that are billed per request, Google is already fairly accommodating. Cloud Run will boot up and run a stateless container that answers two million requests each month, and at no cost to you. Cloud Functions will fire up your function in response to another two million requests. That averages out to more than 100,000 different operations per day. So writing your code to the serverless model can often be a good choice.
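The essence of the serverless model is a small, stateless handler: no local state survives between invocations, so each request carries everything the function needs. The sketch below uses a simplified dict-in, tuple-out signature purely for illustration; real Cloud Functions handlers receive a request object from the framework, so treat the interface here as an assumption.

```python
import json

def handle(event: dict) -> tuple:
    """A stateless handler in the serverless style: everything it needs
    arrives in the event, and it holds nothing between invocations.
    (Real Cloud Functions receive a framework request object; this
    dict-based signature is simplified for the sketch.)"""
    name = event.get("name", "world")
    body = json.dumps({"greeting": f"hello, {name}"})
    return body, 200  # response body and HTTP-style status code

body, status = handle({"name": "dev"})
```

Because the handler is a pure function of its input, it can be booted on demand, billed per request, and scaled to zero between calls.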

Make Use of the App Engine

Spinning up a web application using Google’s App Engine is an excellent choice if you want to do it without fussing over all of the details of how it will be deployed or scaled. Almost everything is automated, so increases in load will automatically result in new deployments. 28 ‘instance hours’ are included each day with the App Engine, allowing your app to run free for 24 hours per day and to scale up free for four hours if there’s a surge in demand.

Consolidate Service Calls

You can also find flexibility for adding extras if you’re careful. The limits on serverless invocations are on the number of individual requests, and not on the complexity. Packing more action and more results into each exchange by bundling all of the data operations into one bigger packet is entirely possible here. Just keep in mind that Google counts the memory used and the compute time, and that your functions can’t exceed 400k GB-seconds memory and 200k GHz-seconds of compute time.
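A sketch of the idea: instead of one invocation per tiny operation, the client bundles many operations into a single request body and the server unpacks and applies them all in one go. The wire format and operation names below are invented for illustration, not any Google API.

```python
import json

def make_batch(operations: list) -> str:
    """Client side: bundle many small data operations into one request
    body, so they count as a single invocation against the free quota."""
    return json.dumps({"ops": operations})

def handle_batch(body: str) -> list:
    """Server side: unpack the bundle and apply each operation.
    (Here we just acknowledge each one; a real handler would hit storage.)"""
    ops = json.loads(body)["ops"]
    return [{"op": op["op"], "status": "done"} for op in ops]

# Three logical operations, one billable invocation.
batch = make_batch([
    {"op": "insert", "key": "a", "value": 1},
    {"op": "update", "key": "b", "value": 2},
    {"op": "delete", "key": "c"},
])
results = handle_batch(batch)
```

The trade-off is exactly the one noted above: each bundled request runs longer and uses more memory, which is what the GB-second and GHz-second limits meter.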

Go with Local Storage

A number of suitable places to store information can be found in the modern web APIs. You can go with the usually-fine cookie that’s limited to 4 kilobytes. For a document-based key-value system, the Web Storage API can cache at least 5 megabytes of data, and some browsers can retain up to 10 megabytes. IndexedDB offers more, with database cursors and indices that have the capacity to process mounds of data, often with no fixed storage limit.

The more data you store locally on your user’s machine, the less valuable server-side storage you need. This can also mean faster responses and much less bandwidth devoted to ferrying endless copies of the data back to your server. This is a fairly basic fix to implement, and it’s going to be doable for the majority of users.

Uncover Hidden Bargains

Dig enough and you’ll find Google has a helpful page that puts all of their free products in one place, but dig a little further and you’ll find plenty of free services that aren’t even listed. Take Google Maps, for instance. It offers $200 of free monthly usage, and then Google Docs and a few of the other APIs are always free as well.

Use G Suite

Many G Suite products like Docs, Sheets, and Drive come free with a Gmail account, or the business pays for them as a suite. Rather than creating an app with built-in reporting, just write the data to a spreadsheet and share the sheet. Build a web app and you’ll need to burn your compute and data quotas to handle the interactive requests. Create a Google Doc for your report and you’re then delegating most of the work to Google’s machines.

Ignore Gimmicks

Unfortunately, superfluous features of modern web applications abound in a big way. Banks don’t need to include stock quotes with their applications, but they might add them. Local times or temperatures probably don’t need to be there. Embedding the latest tweets or Instagram photos probably isn’t needed either. Doing away with all of these extras means a lot fewer calls to your server machines, and those calls consume your free limits.

Be Wary of New Options

There are some new and fancy tools for building A.I. services into your stack that are well suited for experimenting. The AutoML Video service allows you to train your machine learning model on video feeds for 40 hours each month for free. The service for tabular data will grind through your rows and rows of information on a node free for six hours. This is all fine and dandy, but be careful, as automating the process so that every user could trigger a big machine learning job comes with real risks.

New CMS-Based Botnet Cyber Attack a Real Doozy

If there’s one thing those of us who do content and communications exclusively will know like the back of our hands, it’s a CMS of one sort or another. If you don’t know what that abbreviation stands for, it’s Content Management System. Even if you’ve never used WordPress, you’ll almost certainly still have heard of it; it’s pretty much the original CMS and is the one most used by people all over the world. And that’s not just for blogs like this one.

Here at 4GoodHosting, our expertise is in web hosting in the same way it will be for any good Canadian web hosting provider, but any and all of us will also know how integral content is to SERPs and the like. That’s why the KashmirBlack botnet is such a newsmaker in the digital world today, and for good reason.

Now at this point you’re probably saying ‘what?’, and that’s to be expected given the exotic name given to this malicious little critter. Name aside, you may even be asking what exactly is a botnet? We can answer that. A botnet is a type of malicious attack where a series of connected computers are utilized to attack or promote failure of a network, network device, website or IT environment, and usually done with the intention to disrupt normal working operations or degrade the system’s service capacities.

Now with this new KashmirBlack botnet, we shouldn’t assume that it originated in India, or that those who created it are huge fans of the classic Led Zeppelin song. Really it’s just a name. What is worth talking about, though, is what this is and why it’s showing itself to be so problematic.

Mining, Malicious Redirects, and Defacing

So let’s get into what you might need to know about this if you’re the person behind a website, any website and one being utilized for whatever aims. Imperva is a web security research organization that’s fairly reputable and held in high regard in the digital community worldwide, and they’re the ones who have discovered and tracked the KashmirBlack botnet.

Their research has indicated that this botnet is responsible for infecting hundreds of thousands of websites, and does so by going after their content management system (CMS) platforms.

It’s believed this botnet has been in operation since November of last year. It wasn’t much more than a blip in the beginning, but since then it’s really grown and expanded its reach. The consensus is now that it has evolved into a sophisticated operation that has the capacity to attack thousands of sites every day.

How exactly it works, and why it does what it does, can be summarized this way: the botnet’s main purpose is to infect websites in order to use their servers for one or more of the following illicit aims:

  1. to mine cryptocurrency
  2. to redirect legitimate web traffic to spam pages
  3. to display web defacements

Which then naturally leads to the question of which CMS are most at risk. This botnet has already had success infiltrating a wide variety of popular CMS platforms including WordPress, Joomla!, PrestaShop, Magento, Drupal, vBulletin, osCommerce, OpenCart and Yeager, and then targeting vulnerabilities within them that may be unique to one CMS in comparison to another.

Then, as you might guess, a ‘one-size-fits-all’ solution becomes less likely, as all these different CMS are configured and built differently, and have vulnerabilities that may be unique to them.

Vulnerability Finder

The KashmirBlack botnet mainly infects popular CMS platforms. It makes use of dozens of known vulnerabilities on its victims’ servers, and performs millions of attacks per day on average. Identified victims of KashmirBlack are spread across more than 30 different countries around the world.

To explain more, it has a complex operation managed by one specific command and control server and uses in excess of 60 servers as part of its infrastructure. Hundreds of bots are handled and dispersed when opportunities are identified, with each one then communicating with the C&C to receive new targets, carry out brute-force attacks, install backdoors, and expand the botnet’s size and capacities accordingly.

The size expansion part of it is done by expanding searches for sites with outdated software. When it finds one, its operators use exploits for known vulnerabilities to infect both the vulnerable site and its underlying server. That’s happened some 16 different times in the last year, and it’s picking up speed; CMS like Joomla!, Magento, Yeager, WordPress, and vBulletin are most at risk, particularly when running on outdated software.

This is really where you can identify yourself as a more likely potential victim if your CMS is operating on outdated software.

To conclude here today, for what it’s worth, it’s believed that an Indonesian hacking group called ‘PhantomGhost’ is behind KashmirBlack.

Reasons You Should Be Wary of Elasticsearch Servers

If you’ve never heard of Elasticsearch, you can certainly be excused. Here at 4GoodHosting we’ve got some pretty smart cookies around, but as a whole we’re a Canadian web hosting provider who’ll never claim to be entirely full of digital wherewithal. Truth be told, I hadn’t heard of it either until recently, and no one had ever suggested to me that I should give a second thought to whether or not I’d trust it as a base for searching online.

Right then, get right to the definition, you say. Elasticsearch is an open source search and analytics engine and data store developed by Elastic. The appeal of it has always been in the way it allows searching through huge amounts of data within reasonable timeframes, and running calculations on the resulting data in the blink of an eye.

However, recent news indicates that there’s a potential downside to using Elasticsearch, and sharing what we know about that is going to be the subject of this week’s entry here.

Legit Associations

Elasticsearch has been all over the headlines – well, industry headlines at least – recently, and not in a good way. It seems like each new week brings along a new story about a breached Elasticsearch server resulting in troves of data being exposed. But why is this happening with Elasticsearch buckets as predominantly as it has been, and is it legit to associate Elasticsearch with an ever-present risk of this happening?

The question then further becomes: can businesses leveraging this otherwise very helpful technology do so to the full extent while still avoiding data leaks?

Organizations have been using this platform en masse to store information in repositories (aka ‘buckets’), the contents of which can be emails, spreadsheets, social media posts, files, and all manner of raw data in the form of text, numbers, or geospatial data.

The problem for Elastic is that it’s now beyond debate that their storage option is leaving massive amounts of data unprotected and potentially exposed online. Sometimes this leak is disastrous, and the number of high-profile breaches attributed to use of Elasticsearch continues to grow.

Where There’s Smoke..

During 2020 alone, there have been a few doozies related to Elasticsearch. Cosmetics giant Avon had 19 million records leaked from an Elasticsearch database. Another misconfigured bucket saw Family Tree Maker, an online genealogy service, expose over 25GB of sensitive data. Sports giant Decathlon was bitten too, with 123 million records leaked. Despite more than a few insistences to the contrary from the people at Elastic, it’s clear that there’s a fundamental risk factor here and people should be made aware of it.

At Issue

Those who choose to use cloud-based databases must be aware of the inherent risks that come with that, as well as performing the necessary due diligence to configure and secure every corner of the system. Shared research indicates this necessity is often being overlooked or just plain ignored, so we can say that the problem with Elasticsearch in part has to do with the shortcomings of some of those using it.

One contributing security researcher even determined how long it would take for hackers to locate, attack, and exploit an unprotected Elasticsearch server purposely left exposed online. That took just eight hours. Not a short period of time, but not a long one either, especially when there’s something significant in it for the attacker.

Cloud storage technology is going to continue to be eagerly adopted, and it’s safe to say by this point that nothing is going to curb that eagerness. While cloud technologies certainly have their benefits, improper use of them has very negative consequences. Failing or refusing to understand the security ramifications of this technology can have very serious fallouts, and we’re seeing that now.

As it relates to Elasticsearch, just because a product is freely available and highly scalable doesn’t mean skipping the basic security recommendations and configurations is advisable. In fact, it’s not advisable at all. The problem is that some organizations are putting less of a priority on data privacy and security and more of one on profit as they aim to capitalize on the data gold rush.

Multiple Breach Methods

Is there only one attack vector for a server to be breached? Not really. In truth, there are a variety of different ways for the contents of a server to be leaked – a password being stolen, hackers infiltrating systems, or even the threat of an insider breaching from within the protected environment itself. The most common, however, occurs when a database is left online without any security (even lacking a password), leaving it open for anyone to access the data.

A lot of what we’re seeing here, if we’re going to be plain about it, is attributable to a poor understanding of the Elasticsearch security features and of what is expected from organizations when protecting sensitive customer data. The assumption that data security automatically falls to the cloud service provider simply isn’t true.

More often than not any attempt at that results in misconfigured or under-protected servers. Cloud security is – and should be – a shared responsibility between the organization’s security team and the cloud service provider.

What we can say is that the organization using the technology owns the responsibility to perform the necessary due diligence to configure and secure every corner of the system properly to mitigate any potential risks.

To effectively avoid Elasticsearch (or similar) data breaches, a different mindset to data security is required and one that allows data to be a) protected wherever it may exist, and b) by whomever may be managing it on their behalf. This is why a data-centric security model is more appropriate, as it allows a company to secure data and use it while it is protected for analytics and data sharing on cloud-based resources.

Standard encryption-based security is one way to do this, but encryption methods can be a headache and the farthest thing from straightforward. Also, weak or outdated encryption algorithms can be cracked. Tokenization is the better choice, and that’s really what should be seen here if the product manufacturer is seriously interested in rectifying this situation.

Tokenization is a data-centric security method that replaces sensitive information with innocuous representational tokens. So even if the data falls into the wrong hands, no clear meaning can be derived from the tokens. Sensitive information remains protected, and malicious actors have no means of capitalizing on the breach by helping themselves to the undeciphered data.
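The mechanics can be sketched in a few lines: sensitive values are swapped for opaque random tokens, and the token-to-value mapping is held in a separate store (the vault), which in a real deployment lives behind its own hardened access controls. This is a minimal illustration of the concept, with an in-memory dict standing in for the vault; it is not a production design.

```python
import secrets

class Tokenizer:
    """Minimal tokenization sketch: swap sensitive values for opaque
    tokens. Here the vault is an in-memory dict; a real system keeps
    it in a separately secured, access-controlled store."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, value: str) -> str:
        """Return a random token that stands in for the sensitive value."""
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Recover the original value; only code with vault access can."""
        return self._vault[token]

tk = Tokenizer()
token = tk.tokenize("4111-1111-1111-1111")  # a card number leaves as a token
```

A breached bucket full of `tok_…` strings is worthless to an attacker without the vault, which is exactly the property the paragraph above describes.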

Don’t sour on cloud storage just yet, but if you’re putting sensitive data into the cloud, and doing so at a large scale, then do be sure to do your homework and be explicitly in the know about what can (and needs) to be done to minimize the risks of data leaks.

Chrome to Debut Truncated URLs to Combat Phishing

Soft consonant constructions are devilishly hard for people who are new to English to understand, and the colloquial form of fishing as ‘phishing’ to describe underhanded and fraudulent information requests on the web is a good example. But if we are to expand on that, many people of any first language will be confused as to why anyone would go to the trouble of ‘phishing’ in the first place.

There’s always going to be people with bad intentions in any walk of life, and yes, it does require a significant input of time and effort to set up, test, and then roll out a series of phishing emails or something similar. The reason they go to all of these efforts is – quite plainly – that there’s money to be made illicitly when they do find someone who’s gullible enough to click through or do whatever else it is that the phishing email requests of them.

Most younger people who are increasingly more web savvy will be aware enough to avoid falling into the trap, but for others who aren’t that way and have still – like everyone – been forced to exist in an increasingly digital world it is actually a real risk. As a rule, anything that looks amiss with any type of web communication should be a red flag and reason to discard it.

The same goes for any communication that seems ‘odd’ as to why the sender would be sending to you, whether it’s an unsolicited communication or one where it simply seems strange that they would be sending it to you. Here at 4GoodHosting, we can assure you that like any quality Canadian web hosting provider we’ve gone ‘fishing’ many times, but the interest was only ever in catching dinner and enjoying a quiet day on the lake. Obtaining info for fraudulent aims was never part of the equation!

But in all seriousness, this is an ever-bigger issue, and in response to it Google is introducing a wrinkle for its nearly-ubiquitous Chrome web browser that’s going to make it more difficult for ‘phishers’ to get anyone on the hook.

October’s Here

The Internet giant announced this would debut in October, and here we are on the day after Canadian Thanksgiving so we can safely assume this is going to be arriving soon. But what exactly are we talking about here?

Well, Google will run a trial with the new Chrome 86 browser on its way this month that will hide much of a site’s URL as a way to foil phishing attacks. By experimenting with how URLs are shown in the address bar on desktop platforms, the belief is that real-world usage will show that displaying URLs this way helps users realize when they’re visiting a malicious website, protecting them from phishing and social engineering attacks.

Participants for the trial phase are going to be chosen randomly. The exact number of Chrome users who’ll see the address bar pilot isn’t known, and Enterprise-enrolled devices aren’t going to be included in this Chrome 86 experiment.

Strategic Condensing

Instead of displaying the entire URL in Chrome’s address bar, the browser will automatically condense it into what will from here on out be referred to as the ‘registrable domain,’ or what they are claiming will be the ‘most significant’ part of the domain name. Right, so what’s the criteria for what is or isn’t ‘most significant’ there?

If the full URL for, say, a National Post article is https://www.nationalpost.com/article/3571224/government-to-extend-pandemic-financial-assistance-measures.html then the registrable domain would be nationalpost.com.
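The reduction can be sketched with a naive version of the rule: keep the last two labels of the hostname. This is only an approximation for illustration; a correct implementation (and what Chrome actually does) must consult the Public Suffix List, since suffixes like ‘co.uk’ span two labels and the naive rule would get them wrong.

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Naive sketch: keep the last two labels of the hostname.
    A production version must consult the Public Suffix List,
    because suffixes like 'co.uk' span two labels."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

domain = registrable_domain(
    "https://www.nationalpost.com/article/3571224/story.html")
```

For the National Post example above, the long article URL collapses to just `nationalpost.com` in the address bar.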

The belief here is that by showing only the truncated ‘registrable’ domain, it will be more natural for users to look at the address bar and immediately determine whether they’re in the right place, rather than being put off from looking at a long and detailed URL at the top of their browser and ending up somewhere they wouldn’t choose to be.

Which is fair enough, as most people are in fact naturally inclined to be put off by a long string of letters and characters that they usually see in URLs that are a departure from the home page or something similar.

The idea is that this will ensure they have a means of determining if they’re still at the right place, and not at a malicious site they’d been tricked into visiting. This is important because there are so many different ways that attackers can manipulate URLs to confuse users about a website’s identity, leading to rampant phishing, social engineering and scams.

How to Work With This

For anyone who sees one of these truncated URLs but still has concerns, you can view the complete URL by simply moving the pointer atop the address bar and letting it hover there a moment. This will prompt the Chrome browser to restore the URL to its full form. Chrome will also be adding a new item to the right-click menu – ‘Always show full URLs’ – and activating it will set the address bar to show the whole URL for all sites.

How to ‘Spot’ a ‘Bot’, and Steer Clear of Them

If you’re one of the many people who enjoy Twitter feeds or even the comments section at your favourite news websites, then you may already be well aware that some of the contributors aren’t exactly sitting or standing somewhere with a mobile device or notebook in front of them like you are. What we’re talking about here is ‘bots’, and what that means is a fabricated identity created in the digital space and armed with AI to be able to participate in conversations and the like to further the interests of whatever interest group might be behind it.

‘Russian’ bots are the flavour of the day these days, and it’s believed that many of these non-animate opinion swayers come from Russia. Truth is, however, bots come from all over the place, and these days they are all too commonplace. And they’re likely not going away anytime soon, so it’s good to know what these bots are, what they get up to, and – perhaps most importantly – how you can identify bots and put a whole lot less significance on what they have to say.

Now it needs to be said that here at 4GoodHosting we’re like any other reputable Canadian web hosting provider in that we’ve never created a bot, and in truth, despite our familiarity with all things web hosting, we wouldn’t even know how to if we tried. We imagine there are at least a few of our web hosting customers in Canada who have these malevolent means, but that’s neither here nor there.

Let’s spend today talking about what everyday, average individuals like you can actually do to distinguish between a bot and a legit, human contributor.

Looks and Sounds Legit, But…

Sophisticated bots look and act like human users, and it’s true that most bot activity is indistinguishable from human activity to the naked eye. Even the majority of bot detection software struggles to identify all of them. This is a problem, and here’s why – with the ability to look like a million different humans at any time, you could do a lot that wouldn’t be possible if you only had your one actual identity to work with online.

Among other examples, you could ‘listen’ to a song or ‘watch’ a video as often as necessary to push it to the top of the charts and quickly create the impression that something is popular or trending. Then there’s the trend of upvoting comments or retweeting content to further political aims – something we’re almost certain to be seeing right now with the upcoming presidential election in the US.

Successful bot-related cybercrime requires two elements for unquestioned success. First is a valuable demographic, and second is the technology to go undetected by intended victims. Getting back to the ‘Russian’ front with this, as has been noted, Russian interference in UK politics displayed how powerful bots can be in influencing public sentiment.

The current COVID-19 pandemic has this very much on display too. As lockdowns became a reality in the spring and people were increasingly forced to live their lives digitally, cybercriminals were presented with the perfect opportunity, and bots have been the perfect tool for all the disruptiveness they’re aiming for.

So how does the less tech-savvy majority of us here in North America even have a clue as to who might be a bot, and who is definitely NOT a bot? And what about ‘good’ bots?

Yes, they exist. So let’s compare the two before getting to ways to identify bots online.

Good Bots / Bad Bots

As we just suggested, bots aren’t always bad. Bots – in their most basic identifiable form – are merely software scripts living on computers, and we should keep in mind that many everyday internet tasks are taken on by bots all the time and we all benefit from that.

These little digital ‘critters’ are essential to search engines and anti-virus companies being able to crawl, analyse, and catalogue data from web servers. It’s only at the other end of things, when bots are used by cybercriminals, that the whole thing becomes malicious. We’re talking about stealing login credentials, hacking accounts, spreading disinformation, and so on.

Get thousands of the critters out there working in unison with each other and you have what’s called a botnet. Cybercriminals get a lot of mileage out of these botnets, and that’s not a good thing.

Start with Fraudulent Apps

One of the ways cybercriminals have been making the most of people’s changing behaviour is via fraudulent apps, as recent research has determined. Looking at apps critically is a good place to start for learning how to identify bots. So how is a person supposed to know if an app is legit?

Here are common identifiers that should put up red flags for you:

Do reviews relate that ads pop up all the time? Even while on the Android homepage?

Do they talk about the app disappearing from the drawer and being unable to uninstall it?

Are they full of complaints that the app doesn’t work?

Is this the only app the app ‘publisher’ has to offer?

Find that the answer is yes to any of the above, and it might be an app full of bots and one that you should consider taking a pass on.

Bots & Account Takeovers

Account takeovers are another good indicator of bots being on the scene. Bots have the ability to use your credentials to log into your accounts, such as banking, ticketing sites, social media platforms and online stores, without ever being detected.

Sure, CAPTCHA security protocols exist, but sometimes they’re not strong enough to distinguish a sophisticated bot from a human. This makes clear how human-like these bots can be. Sophisticated cybercriminal operations even have people working for them to crack CAPTCHA forms for ease of entry. It results in sophisticated bots being able to use your data and your personal information to assume your identity and cause mayhem with your personal accounts.

Examples could be transferring money to themselves via your online banking account, or asking friends and family members to do the same via social media. There’s a lot of possibilities here, and they’re all bad.

Ways to Keep your Accounts Safe

Follow these suggestions and you’ll be better protected against being infected with bots:

  • Avoid using the same password for multiple accounts. Going with a password manager to generate, store and autofill strong passwords is a much better – and safer – choice
  • Skip clicking on any links from suspicious emails or text messages. They could lead to phishing sites or cause you to accidentally download malware
  • Put in place 2-step verification or 2-factor authentication wherever possible. Third-party apps that help you do this are out there
  • Shop online only with reputable brands, and choose NOT to store your credit card information with any of them

On public Wi-Fi? Use a VPN

Dreamcatcher: Using AI to Analyze Dreams May Soon Be Reality

2020 has been a roaring disappointment for all of us, but that’s coming at it from a more socio-cultural and global perspective. A lot of the world has been hampered by the global pandemic, but on the plus side (if there is one) the tech world hasn’t slowed down much, if at all, when it comes to advancing what the digital world is capable of offering us. The just-around-the-corner advent of 5G is getting the bulk of the hype, but major advances in AI (artificial intelligence) really need to be receiving some fanfare too.

It’s fairly natural that 5G is going to be trumpeted most loudly by quality Canadian web hosting providers, and here at 4GoodHosting we fit the bill pretty nicely in that regard. When nearly everything you do is related to the workings of the Information Superhighway, it’s nearly impossible not to be wholly excited about what 5G has the potential to do for all of us. That’s likely why 5G has been a recurring theme in our content here, but today we can’t help but go in an entirely different direction when it comes to being excited about something.

It’s always been interesting to us that a bad dream is known as a ‘nightmare’, but there’s no exact term used for a good dream. Thankfully, most of us have good dreams more often than we have bad dreams, but most of the time our dreams are of the ordinary, unexceptional, forgotten by the time we wake up type.

Sometimes, however, we have a dream we just can’t get our minds off of. Now imagine if there were a technological means of understanding what you can read into the dream you just had.

It may be a reality before long, and that’s because of Dreamcatcher.

What’s All This?

Just a few months back researchers from Nokia Bell Labs in Cambridge, U.K. made the announcement that they’ve created a tool called ‘Dreamcatcher’ that is able to use the latest NLP (Natural Language Processing) algorithms to identify themes from dreams as they are reported by the people who’ve experienced them.

All of this is based on an approach to dream analysis called the continuity hypothesis. It’s supported by strong evidence from decades of research into dreams, and by the general consensus that a person’s dreams are reflections of their everyday concerns and ideas.

That might sound like a very elementary conclusion, but in truth it’s a different way of thinking about dreams than the deeply complex interpretations of Freud and Jung devotees who simply repeat what was taught to them in Uni, viewing dreams as windows into hidden desires and drawing very conceptual conclusions from noted brain activity during sleep.

An Automatic Dream Analyzer

This dream analyzer A.I. tool works by parsing written descriptions of dreams and then scoring them according to an established dream analysis inventory. That inventory is known as the Hall-Van De Castle scale.

It is made up of a set of scores that measure the extent to which different elements featured in the dream are more or less frequent than normative values established by previous research on dreams. Taken into account are positive or negative emotions, aggressive interactions between characters, presence of imaginary characters, and other responses. What this means is that it is not an actual interpretation of the dream; it’s more about quantifying interesting or anomalous aspects of it.
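To make the quantification idea concrete, here’s a minimal sketch in Python of scoring one dream report against normative element frequencies. The element list and the norm values here are invented for illustration; the real Hall-Van De Castle inventory is far more detailed and its norms come from published dream research.

```python
# Hypothetical norms: expected mentions per 100 words of a report.
# These numbers are made up for illustration only.
NORMS = {"friend": 1.2, "stranger": 0.8, "animal": 0.4, "anger": 0.3}

def score_report(text):
    """Compare element frequencies in a dream report against norms.

    Returns each element's observed rate minus its normative rate, so a
    positive value means 'more frequent than typical' - quantifying the
    dream rather than interpreting it, as the scale does.
    """
    words = text.lower().split()
    per_100 = 100 / max(len(words), 1)
    return {
        element: round(words.count(element) * per_100 - norm, 2)
        for element, norm in NORMS.items()
    }
```

A report that never mentions anger would score slightly below the norm for that element, while one dense with friends would score well above it.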

Built on Existing Manual Data

Integral to the building of Dreamcatcher were some 24,000 dream report records from DreamBank, the largest public collection of English-language dream reports available. The algorithm created from them is capable of pulling these reports apart and reassembling them in a way that makes sense to the system.

By way of the person recording the dream and its content, Dreamcatcher is able to make assumptions based on the description and automatically extract various insights. The developers have stated that some of these insights were expected, while others were the furthest thing from the connections they expected to make. One of the more interesting findings as they went through this process is that blind people’s dreams feature more imaginary characters than the norm. This suggests that our senses influence the way we dream.

Manual vs Digital Psychoanalysis

With this kind of analysis possible, surely Dreamcatcher is something that psychology professionals are going to take an interest in. Whether or not they will, it is exciting to witness the growing ability of NLP to capture increasingly complex and intangible aspects of language, and to look beyond what is possible with manual dream annotation.

So the question then becomes: can an app or digital development match or exceed what these psychotherapy professionals are capable of when it comes to making sense of people’s dreams and helping them use their dreams to better their mental well-being or life as a whole?

Some research has already been done on this, and when Dreamcatcher was matched up against scores calculated by psychologists, the A.I. algorithm matched them 76% of the time.

Keep an Eye on This One

It goes without saying that the potential of a mood-tracking app that asks users to record their dreams, and then pulls out recurrent thought processes and deeply seated inclinations or spiritual shortcomings, is pretty huge.

Yep, on-the-fly legit and implementable dream analysis would be huge! But in the bigger picture a kind of large-scale dream-tracking project that could map the world’s dreams onto real events to see how one informs the other may have even greater potential applications as it regards psychological and spiritual well being. Especially when combined and cross-referenced with other real-world data.

6 Types of Content Your Site Should Do Without, and Why

It’s always said there are many different cogs in the machine, and that can be true of pretty much any machine. If you want to look at your online presence in this way, it’s a very conceptual understanding but it’s still there to be had. Your web designer or webmaster may be the primary cog in it all, but there’s a whole lot of other valuable people too. Now I may be biased in this way, being a copy and content writer here at 4GoodHosting, but I imagine that everyone who’s in a similar role with any good Canadian web hosting provider is going to agree.

Agree with what exactly? Well, that good, smart, and – perhaps most importantly – strategic content is very much an integral component in the success of your business or venture being online. In fact we’re pretty confident in saying that, as quality content goes perhaps the longest way in getting you the SERP rankings that you’re after. Alright, maybe that’s a s t r e t c h to say the longest, but we certainly can’t just take a ‘whatever’ approach to what we write for the sites.

Alright, enough patting ourselves on the back, but you get the point and it’s a very valid one. While there is an incredible volume of different content varieties that you can use for your website content – including blogs like this one – there are also a few types of content that might make sense given the context of your communications, but you should still avoid them.

Here they are, and here’s why you need to avoid them.

Content is king, marketers say – but not all of it. Indeed, some content can harm your website, slowing its performance, irritating visitors and badly affecting its search engine ranking. The way to deal with this is to check your site regularly and get rid of the damaging material. Here, we’ll explain what to look for.

1 – Heavy Images

While it’s true that images are important elements of a website, and they do have a positive impact on user engagement, you really need to be choosy about which ones you incorporate into your site. Some are data-heavy and can slow down the loading time, which detracts from SEO and diminishes user-friendliness. No one’s suggesting you go deleting your images, but optimizing them for your site is a must.

Here are the basic guidelines:

  • Use PNG files at 72 dpi, which are much lighter and load more quickly than larger files
  • Use an image-optimizing plugin that will take existing images and create light versions at the right dimensions for your theme
  • Consider speeding up your site even more with lazy loading or a content delivery network
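As a rough illustration of the resizing step those plugins perform, here’s a sketch of the dimension arithmetic, assuming a hypothetical 800px theme content width (substitute your own layout’s actual width):

```python
def target_dimensions(width, height, max_width=800):
    """Scale an image down to a theme's content width, keeping aspect ratio.

    max_width=800 is a made-up example; an image already narrow enough
    is left alone, since upscaling would only add bytes without quality.
    """
    if width <= max_width:
        return width, height
    scale = max_width / width
    return max_width, round(height * scale)

# A 1600x1200 photo destined for an 800px column:
print(target_dimensions(1600, 1200))  # → (800, 600)
```

Serving the image at the dimensions the theme actually displays is usually the single biggest weight saving, ahead of format or compression tweaks.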

2 – Pop Ups

It’s also true that popups can increase conversion rates, but you know what they do far more often and much more reliably? They annoy site visitors and in many cases actually make them resentful. I’ve heard that for some people prone to seizures, pop ups can actually be the cause of a very bad and dangerous situation.

Use multiple pop ups and you may well be contributing to your bounce rate in a big way. If you don’t need them, take them down. If you do, ensure you use them lightly and judiciously and you should also make it so that closing them is easy and that they don’t appear on every page.

Keep in mind as well that a pop up adds an additional script to your website. This WILL affect its performance and impact SEO, and not in a good way for either.

3 – Overly Enthusiastic Cookie Consent Pop Up

You’ve probably noticed how all websites are now required to ask for the visitor’s consent to use cookies. However, users end up seriously annoyed at having to click ‘accept’ every time they visit. So this is something you should try to avoid, and that is possible.

You can start by replacing page-dominating cookie popups with less obtrusive methods that aren’t as disruptive for viewers who’d rather just keep reading. The next thing you should do is set the cookie consent form to appear at the same frequency as your shortest cookie life. Once you have permission to store cookies on a user’s device, asking for it again isn’t required unless you start collecting new cookie information or change the length of the cookie.
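That frequency rule can be sketched as a tiny scheduling function. The cookie lifetimes below are hypothetical examples; the point is simply that the next prompt date follows the shortest-lived cookie rather than every visit:

```python
from datetime import date, timedelta

def next_consent_prompt(consent_date, cookie_lifetimes_days):
    """Date to re-ask for cookie consent: when the shortest-lived
    cookie from the consented set expires. Re-asking earlier just
    annoys visitors; re-asking later means storing without consent."""
    return consent_date + timedelta(days=min(cookie_lifetimes_days))

# Consent given Sept 1 for a 30-day analytics cookie and a 365-day prefs cookie:
print(next_consent_prompt(date(2020, 9, 1), [30, 365]))  # → 2020-10-01
```

Introducing a new cookie or lengthening an existing one would reset this schedule, per the rule above.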

4 – Broken Links

Links are hugely valuable for both user experience and SEO. Internal links help users find the content they are looking for more quickly, and enable search engine crawlers to discover and index content on your site. Search engines see outbound links as adding value to your content, and they improve your SEO in turn.

Working links are great and helpful; broken ones are not, and are totally detrimental. Users will have little tolerance if they click on a link to a page that doesn’t exist anymore, and they may be so disillusioned that they ‘split’ altogether. That poor experience is noted by search engines when they follow your links, and it can also lead to the pages they appear on (YOURS) being downranked too.

The good news is that there are many link-checking plugins that can make sure the links you’re linking out to are up and operational, and not broken.
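If you’d rather script a quick check than install a plugin, a minimal sketch in Python might look like this. Any 4xx/5xx response, or a failed connection, is treated as a broken link:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def is_broken_status(status):
    """Any client-error (4xx) or server-error (5xx) code counts as broken."""
    return status >= 400

def check_link(url):
    """Return True if the link looks broken.

    Uses a HEAD request to avoid downloading the page body; connection
    failures (DNS errors, timeouts) also count as broken.
    """
    try:
        with urlopen(Request(url, method="HEAD"), timeout=5) as resp:
            return is_broken_status(resp.status)
    except HTTPError as err:
        return is_broken_status(err.code)
    except URLError:
        return True
```

Run it over the URLs extracted from your posts and you have the core of what those plugins do; they mostly add scheduling and reporting on top.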

5 – Out of Date Content

One of the biggest trends that those of us in the copy business have been made wise to is ‘evergreen’ content. This means content that doesn’t have any degree of timeliness to it, and can be as contextually relevant years from now as it is today. But there’s also the date on which the content was published, stored in the metadata of your web pages. It’s not visible to your visitors, but it is visible to search engines, which use it to understand how up-to-date your content is.

Long story short, search engines prefer fresher content.

Plus, users themselves want the latest information – someone searching for ‘Best shoe stores in Brampton’, for example, would be disappointed if they found a page containing a list of shops that weren’t even open anymore.

So what this requires of you is regularly going through your content, deleting pages and posts which are completely out of date and updating outdated information on those that still have some relevance. For companies with product and service pages where there has been no change to what’s on offer, it may seem there is no need to make changes, but sometimes even little tweaks can make a significant difference.
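That regular review can be semi-automated. Here’s a sketch that flags pages whose recorded publish/update dates pass a cutoff; the one-year threshold is an arbitrary example, and the page records are hypothetical:

```python
from datetime import date

def stale_pages(pages, today, max_age_days=365):
    """List page URLs whose last publish/update date exceeds the cutoff.

    pages: dict mapping URL -> date of last update. Evergreen pages can
    be exempted by bumping their date whenever they're reviewed.
    """
    return sorted(
        url for url, updated in pages.items()
        if (today - updated).days > max_age_days
    )

site = {"/best-shoe-stores": date(2018, 1, 1), "/about-us": date(2020, 6, 1)}
print(stale_pages(site, date(2020, 9, 1)))  # → ['/best-shoe-stores']
```

The output is just a review queue; a human still decides whether each flagged page gets updated, merged, or deleted.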

6 – Third-Party Ads

Many sites display adverts from third parties like Google or Bing, with image-led links to content on other websites, and that can be a nice source of income. Usually this is okay in moderation, but some sites go overboard with it, and you should make sure yours doesn’t. Too many ads can slow down the loading time of certain pages in a big way and become a major obstacle to reader retention, with a serious impact on SEO, user engagement and conversion rates.

Testing how the loading time of your website is affected by ads is a must if you’re going to use more than just one or two of them.

No one is going to be okay with website pages that load slower, are less relevant or provide a poor user experience, all of which can make your site perform worse in search engine results. And if visitors abandon your site, well that’s not going to be good for anyone. High bounce rates definitely aren’t cool.

Promising Quantum Computing Algorithms May Soon Solve Unsolvable Problems

Lost to some extent in all the timely buzz and hubbub these days about 5G, Edge Computing and the like is the ongoing realization of quantum computing being put to work for the benefit of humanity. And considering what’s gone on so far in 2020, we could certainly use as much benefit as we can get right about now. We can go ahead and assume that most of you who’d be reading this blog have enough industry wherewithal to know what quantum computing is, but in case that’s not you we’ll go over that briefly before we proceed.

And proceed to what? Well, it would appear that quantum computing is going to turn a whole host of ‘beyond our abilities’ large-scale computing stumbling blocks into much smaller ones for countries and societies that are trying to get the very most out of technological advances to make life better for all of us. We’re like any Canadian web hosting provider here at 4GoodHosting in that we can relate to just how big a deal this can be.

But we promised a brief intro to quantum computing –

What is Quantum Computing?

Why don’t we go with old faithful and take a definition directly from Wikipedia – Quantum computing is the use of quantum phenomena such as superposition and entanglement to perform computation. By superpositioning quantum bits, computer scientists are able to encode multiple values at the same time. For the last 40+ years they’ve only been able to encode single values one by one, and this has hampered the speed with which certain equations can be put into use.

That’s a very brief and perfunctory look at it, but it will suffice for now.

To add a little more though, what computer scientists are looking to quantum computing to do is provide answers as to how certain inapplicable solutions or equations can become applicable. Another relevant term here is constructive quantum interference, which is where quantum computers amplify certain signals over others to provide clues as to finding these answers.

It’s been suggested that quantum computers may well have the ability to counter climate change, provide a cure for cancer, and provide solutions to civic and global issues of all sorts. While that may be possible, it’s wishful thinking for now, and it really remains to be seen whether those are realistic expectations.

The Promising Algorithms

Some of you may not even be aware of it, but it’s probably at least once or twice a day that you’re thwarted in your attempt to do something based on the failings of the device you’re asking to do it. Nine times out of 10 that has less to do with the device itself and more to do with the framework it’s working with. For example, in reference to the first algorithm we’re going to look at below – imagine being asked to find a listing in an unordered list that’s not regulated from top to bottom by any sort of criteria. No alphabetical means of reference, no numerical – nothing.

  1. Grover’s algorithm

That would be quite a task unless the list was a short one of, say, 50 or fewer entries. Imagine it being a thousand-plus entries, and you’re expected to find that one listing in less than a minute. Those are the types of demands that will be put on quantum computers if they’re to be trusted with very important tasks like – for example – the ‘smart’ traffic lights that are supposedly not far away and sound oh so good for everyone who hates how horrific traffic is in major cities.

Ok, enough about that and onto Grover’s algorithm.

We wish we could make a legit reference to Sesame Street here, but afraid not. This one is named after the man who developed it in 1996. What it does is find the inverse of a function in O(√N) steps, and it can also be used to search an unordered list. It provides a quadratic speedup over classical methods, which need O(N) steps.

A lot of tech speak there for sure, but that’s the nature of what’s being discussed here.

Other applications include estimating the mean and median of a set of numbers, solving the collision problem, and reverse-engineering cryptographic hash functions. Because of the cryptographic application, researchers recommend doubling symmetric key lengths to protect against future quantum attacks.
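To make the quadratic speedup a little less abstract, here’s a classical statevector simulation of Grover’s search in Python. It’s only a toy – a real quantum computer wouldn’t store the amplitudes explicitly – but it shows the two steps of each round: the oracle’s phase flip on the marked item, and the ‘inversion about the mean’ diffusion step that concentrates probability on it:

```python
import math

def grover_search(n_items, marked, iterations=None):
    """Classically simulate Grover's algorithm over n_items amplitudes.

    Start in a uniform superposition, then repeat: oracle flips the sign
    of the marked amplitude; diffusion reflects every amplitude about
    the mean. After about (pi/4) * sqrt(N) rounds, nearly all the
    probability sits on the marked index.
    """
    amps = [1 / math.sqrt(n_items)] * n_items
    if iterations is None:
        iterations = round(math.pi / 4 * math.sqrt(n_items))
    for _ in range(iterations):
        amps[marked] = -amps[marked]         # oracle: phase flip
        mean = sum(amps) / n_items           # diffusion: inversion
        amps = [2 * mean - a for a in amps]  # about the mean
    return max(range(n_items), key=lambda i: amps[i] ** 2)

# 8 unordered items, item 5 marked: found after just 2 rounds
print(grover_search(8, 5))  # → 5
```

For 8 items a classical scan needs up to 8 probes; the simulation lands on the marked item after round(π/4·√8) = 2 rounds, which is the √N advantage in miniature.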

  2. Shor’s algorithm

This one is also named after its creator, and what it does is find the prime factors of an integer. It runs in polynomial time in log(N), which makes it far speedier than the standard number field sieve. Public-key cryptography schemes such as RSA are not so safe if there are quantum computers with a sufficient number of qubits, and building them based on Shor’s algorithm makes that more of a possibility.

However, whether quantum computers ever become big and reliable enough to run Shor’s algorithm successfully against the sort of large integers used in RSA encryption remains to be seen. If they do, there would be fallout in any industry relying on encryption, like banking, as it would put those powers in the hands of those who’d use them for illicit aims too.
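The quantum part of Shor’s algorithm is the period finding; the surrounding number theory is classical. Here’s a sketch that does the period finding by brute force, which walks through the same post-processing steps a quantum computer would feed, just without the speedup:

```python
from math import gcd

def multiplicative_order(a, n):
    """Smallest r > 0 with a**r ≡ 1 (mod n). This is the step a quantum
    computer does in polynomial time; brute force stands in for it here."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Walk through Shor's classical post-processing for base a."""
    g = gcd(a, n)
    if g > 1:
        return g                     # a already shares a factor with n
    r = multiplicative_order(a, n)
    if r % 2:
        raise ValueError("odd order - try a different base")
    y = pow(a, r // 2, n)
    factor = gcd(y - 1, n)
    if factor in (1, n):
        factor = gcd(y + 1, n)
    return factor

# 7 has order 4 mod 15, so gcd(7**2 - 1, 15) = 3 splits 15 into 3 x 5
print(shor_factor(15, 7))  # → 3
```

For RSA-sized integers the `multiplicative_order` loop is hopeless classically, which is exactly the gap the quantum period-finding circuit is meant to close.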

Much of this has been put to the test with the Quantum Learning Machine, which behaves as though it has 30 to 40 qubits. The hardware/software package includes a quantum assembly programming language and a Python-based high-level hybrid language. A few national labs and technical universities are using it, and it’s been quite a success at giving a look into how this all might work.

Google, Microsoft, Intel, and IBM All in on This

Google AI is focusing its research on superconducting qubits with chip-based scalable architecture targeting two-qubit gate error to build a framework capable of implementing a quantum neural network on near-term processors, and on quantum supremacy. Two years ago there was a whole lot of fanfare around its introduction of the Bristlecone superconducting chip, and it’s likely we haven’t heard the last of it.

IBM has given us its Q systems, offering three ways to access its quantum computers and quantum simulators, and late last year (2019) it debuted a new generation of IBM Q system sporting a full 53 qubits at the IBM Quantum Computation Center in New York.

Intel’s contribution to all of this has been Tangle Lake, a superconducting quantum processor that incorporates 49 qubits, scaling up from the 17 qubits of its predecessor.

And of course, Microsoft is in on the action too, having researched the development and application of quantum computing since before the turn of the century. Between their Q# programming language and their QDK kit, it may not be long before quantum computers are available as co-processors in the Azure cloud.

And if you’re wondering, here’s 10 areas where Quantum Computing may be making life better for humans:

  • Cybersecurity
  • More accurate and practical AI (artificial intelligence)
  • Weather Forecasting and Mitigating Climate Change
  • Drug Development
  • Financial Modeling
  • Better Batteries
  • Cleaner Fertilization
  • Traffic Optimization
  • Solar Capture
  • Electronic Materials Discovery

It’s all genuinely quite exciting, so we don’t know about you but we’ll be keeping an eye on all this in a big way.

Tips for Making Siri More Useful When Working From Home

In the middle of March, when all the international craziness of COVID started, it’s very likely that most of the people who then started working from home didn’t expect that arrangement to last long. But here we are nearly 6 months later, and a good many of those folks are still working from home. The truth is that those of us with professions that allow us to work remotely are fortunate to be able to do so, and of course it’s a good time to note how important the many people who haven’t been able to have been to keeping society and the economy functioning.

One of the things we know here at 4GoodHosting as a Canadian web hosting provider is that many of the people who choose us to host their websites will be among those who haven’t seen the office in many months. Now we’ll go ahead and assume you’re fairly good at adapting and finding new ways to maintain your productivity and satisfaction with new working arrangements, but surely you’re open to any suggestion that makes any aspect of that better.

We’re living in the digital world, and among the many technological advances that have come with that is the virtual assistant. Whether yours is named Siri or Google Assistant depends on whether you’re an iOS or Android person, and whatever preference you have is fine, as both are pretty darn good in the big picture of things.

Today though what we’re going to do is deliver for iOS users who like to get the most with workday productivity via their iPhones, iPads, or Macs, and so here are a number of ways you can get more out of Siri as you go about that workday. It’s a good topic because if you’re an iOS device owner who still hasn’t figured out everything your super device is capable of, well – join the club!

Opening Apps

It couldn’t be any easier to open any application on any Apple platform that supports Siri. All you need to do is ask her to ‘Open [app name].’ It is that simple.

Send Messages, Texts or Emails

It’s possible to ask Siri to send an email to a named person, provided they are listed among your contacts. All that’s required is to say ‘Send an email to [person] saying [your message].’ Texts can be sent too. Just say, ‘Send a text….’

It’s also possible to send short messages the same way; just replace the word ‘email’ with ‘message.’ Siri will inform you if you have new emails and read the name and subject line for you too if preferred, but we’ve read that many people find that takes more time than finding this information manually.

The last thing is that if you’ve paired your Mac with your iPhone, you can also get Siri to make a call via your Mac.

Identify a Caller

Siri is also capable of telling you who is calling if they are included within your iPhone’s Contacts. Again, super simple – Set this up in Settings>Phone>Announce Calls.

FaceTime Calling or Scheduling Zoom Meetings

Tell Siri to start a FaceTime call with (name of the person). You can also request that Siri ‘schedule a Zoom meeting on Friday at 3pm with [name].’

Take a Note

It’s true that many people don’t make much use of Notes, but if you do then Siri will let you open and create a new Note. All that’s needed is to say ‘Create a note that says….’ Some people find this to be a very useful tool when trying to stay focused on work with one application while making a note of an idea or something you’ve remembered quickly and easily.

Organize Meetings

You can ‘Add a meeting’ on a dedicated day or schedule a meeting for a certain date and time, and in the same instruction tell Siri to inform contacts about the proposed meeting. Simply say ‘Add a meeting Tuesday at 4 and tell [name] and [name].’ As long as the named parties are in your Contacts, Siri will proceed to message them and inform them of the meeting.

Manage Meetings

Take advantage of Siri’s meeting management tools to stay on top of your day-to-day, asking questions like ‘When is my next meeting?’ or ‘When is my meeting next Monday?’ Provided the information has been entered into your device’s calendar, Siri will happily let you know.

Siri can also help you reschedule a meeting. Just say ‘Move my 11AM meeting to 12.30PM’ for example.

Getting Correct Times

Some professionals hold meetings across time zones, and Siri has every international time zone and the correct time in every place on earth ready to go. Want to know the time in Moscow? Ask Siri. How about Samoa? Siri knows that too. You can quickly and easily find out the time anywhere by asking ‘what time is it in [location]?’ and Siri will have that for you in seconds.

Siri can also convert currency, figure out percentages, and complete almost any calculation for you.

Remembering Passwords

If you’re like us and have a devil of a time remembering passwords on your own then the way Siri can take care of that for you is pretty darn great. You can open System Preferences/Settings just by asking Siri to do this for you. You can also ask Siri to access your passwords by saying, for example, ‘Hey Siri, show my enterprise VPN password.’ You’ll need to authorize that it is you asking of course (good thing), but after that you’ll be presented with what you need.

Dictations

With any app on an iPhone or iPad, you can have Siri write out what you’re saying by just tapping the microphone icon on your keyboard and asking Siri to dictate for you.

Siri’s Talents for Mac

Siri is capable of opening Mission Control, searching for stuff on your Mac, opening named items, opening System Preferences, or showing you specified documents and images. Plus she can provide you with useful system information, like how much memory you’ve got to work with on the device. Ask pretty much anything related to your MacBook or iMac and ye shall receive.

5G Download Speed in Canada Is 2nd Best in World

We’ve all heard so much fanfare, excitement, and anticipation about the arrival of 5G network technology and what we can expect once it’s the new normal. There’s been some trepidation about it too, and most notably in who we’ll allow to build the 5G network in Canada. We’re going to steer well clear of that topic of discussion, but what we will do is have a look at a recent survey that found that 5G downloads in Canada are fairly darn speedy in comparison to elsewhere.

Here at 4GoodHosting, that’s quite promising for any good Canadian web hosting provider with a fairly inherent understanding of all the potential that’s going to come with 5G and how pleasing it’s going to be to enjoy it at open-throttle operating speeds. However, the one thing that’s likely the most promising is probably the one aspect people are least enthusiastic about – the IoT (Internet of Things, for anyone not familiar with the acronym).

So back to the topic, what went into this determination, and what does all this suggest in the big picture once the rollout of 5G Networks is complete?

All About the Signal

We’ve been told all sorts about what 5G wireless technology may become, but not what it is exactly. Unless you’re a mega tech-savvy person, there might be a need to start from the start. 5G networks are the next generation of mobile internet connectivity, promising connections that are much faster and more reliable than what was offered with previous mobile technology.

You may not be familiar with what 1 gigabit-per-second download speeds entail, but trust us when we say it’s fast, and a LOT faster than what most of us have enjoyed as a standard on 4G. And the good news is that 1Gbps (or darn close to it) speeds are set to become the new standard.

Provided, that is, that you’re running on a good strong signal.

What a 5G network is able to offer will depend in large part on what signal your 5G is running on, and there are three categories of signal bands. You’ll be working with either high-band, mid-band, or low-band signal bands. And before you jump to conclusions about low-band signal bands, you might want to know that they’re better for penetrating walls, which makes them a better choice for condos, basement suites and the like.

Considering how many Canadians in major metro areas live in these types of homes, that’s going to be a good thing. We can imagine sales of Wi-Fi extenders to people who get home only to find the devices do little if anything are going to go down considerably.

Mid-band is ideal for connectivity in the city, but not in the country. High-band is impressively fast, but it can be unreliable, especially when you’re indoors or have other local factors affecting the signal.

And even while 5G technology is being trumpeted in the most favourable of lights pretty much everywhere, the technology does have its detractors. An article in Scientific American last year highlighted how more than 240 scientists signed the International EMF Scientist Appeal, expressing their concern about nonionizing radiation attributable to 5G.

5G will use millimeter waves in addition to the microwaves that have been in use for older cellular technologies, from 2G all the way through the current 4G. The issue with 5G in this regard is that it will require cell antennas every 100 to 200 metres or so, and that’s going to ramp up radiation exposure in a big way. 5G also employs new technologies which pose unique challenges for measuring exposures.

The most well known of these are active antennas capable of beam-forming via phased arrays, and massive multiple-input, multiple-output, or massive MIMO as it’s called.

While that’s a very legitimate concern, the old expression ‘you can’t stop progress’ probably applies here. The potential for good (at least as determined by what people want) outweighs the potential for bad – at least in the court of public opinion.

Pretty Darn Speedy

Alright, enough of the related background information. People who read the title almost certainly want to know more about Canada coming in second for 5G network speeds.

It’s true: a company that tests the performance of mobile networks recently analyzed users’ real-world 5G experiences in 10+ different countries to determine who’s enjoying the best 5G network speeds.

The evaluation took in users’ average 5G and 4G download speeds as measured across various mobile operators, while also weighing the time users spent connected to each generation of wireless technology.
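The study doesn’t spell out its exact formula, but the basic idea of weighting speeds by time spent on each network generation can be sketched as follows. All names and figures here are illustrative assumptions, not the study’s actual numbers:

```python
def blended_speed(speed_5g: float, speed_4g: float, share_5g: float) -> float:
    """Average download speed weighted by the share of time spent on 5G vs. 4G.

    share_5g is the fraction of connected time spent on 5G (0.0 to 1.0);
    the remainder is assumed to be spent on 4G.
    """
    return speed_5g * share_5g + speed_4g * (1 - share_5g)

# Illustrative only: a user on 5G 20% of the time still averages well
# below the headline 5G figure.
print(blended_speed(speed_5g=300.0, speed_4g=45.0, share_5g=0.2))
```

This is why a country’s blended result can look modest even when its pure 5G speeds are excellent: time spent falling back to 4G drags the average down.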

So we’ve already established that Canada has the second fastest 5G network speeds on the planet, but by this point you’re probably wondering when we’re going to say who got top spot.

Any guesses?

KSA #1

We’re going to go ahead and imagine none of you envisioned the correct answer being Saudi Arabia here, but it’s true. Right there smack dab in the middle of the Middle East, they were enjoying 144.5Mbps (megabits per second). Even if that figure means little to you, trust us when we say that’s pretty much screaming fast.

And with Canada coming second, the truth is that we came in a distant second. Canada did come second with 90.4Mbps, but that’s a difference of some 54Mbps, and that pretty much makes it qualify as a distant second.

Now we DO imagine that a lot of you would have guessed South Korea, based on the fact that it’s regarded as the most wired country in the world AND it has the highest adoption rate for 5G networks so far. It did come in the top 5, but what’s also surprising is that the country that came in with the worst score (32.6Mbps) wasn’t a developing country or anything of the sort.

It was the UK!

However, the study did find that if they were only examining 5G speeds rather than both 5G and 4G, South Korea moved ahead into second place at 312.7 Mbps and the Saudis retained the top spot with 414.2 Mbps. We Canadians slid back to 5th spot at 178.1 Mbps, trailing Australia (215.7 Mbps) and Taiwan (210.2 Mbps).
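For anyone who wants to see the 5G-only standings at a glance, the figures quoted above can be lined up with a quick sort (speeds are the Mbps values as reported in this article):

```python
# 5G-only average download speeds (Mbps) as quoted in the article.
speeds_5g = {
    "Saudi Arabia": 414.2,
    "South Korea": 312.7,
    "Australia": 215.7,
    "Taiwan": 210.2,
    "Canada": 178.1,
    "USA": 50.9,
}

# Sort countries fastest-first and print a simple ranking.
ranking = sorted(speeds_5g.items(), key=lambda kv: kv[1], reverse=True)
for place, (country, mbps) in enumerate(ranking, start=1):
    print(f"{place}. {country}: {mbps} Mbps")
```

Sorting the quoted numbers this way confirms Canada landing in 5th on the 5G-only measure, just as the study reported.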

And to continue our trend of surprises here, it was actually the USA that came dead last when looking at 5G speeds exclusively, at just 50.9 Mbps.

Keep in mind, though, that these less-than-impressive 5G download speeds in the U.S. come down to a combination of factors: only a limited amount of new mid-band 5G spectrum is available there, and low-band spectrum remains popular thanks to its excellent availability and reach, despite average speeds lower than those of the 3.5GHz mid-band spectrum used as the main 5G band in every country outside of the U.S.