Chrome 80: Everything You Need to Know

It shouldn’t come as much of a surprise that Google Chrome continues to be the world’s most popular web browser, and there doesn’t seem to be much risk of it relinquishing that title anytime soon. Sure, plenty of iPhone users will be perfectly fine with Safari when browsing on their mobile devices, but even most of them will probably spend more than a little time using Chrome on their notebook or desktop. One thing’s for sure: both browsers (along with Firefox) have definitely left the now-obsolete Internet Explorer in the dust.


Which is the way it should be, but it’s still true that even Google’s super-popular web browser hasn’t avoided a few glitches as it’s been progressively rolled out. Here at 4GoodHosting, we imagine we’re just the same as any good Canadian web hosting provider in that we understand a person’s browser of choice is very relevant to how well they experience websites and other dynamic multimedia content, including the sites our own clients put online. It’s for that reason we’ve decided that a brief overview of the extensive Chrome 80 update is a worthwhile topic for this week’s blog.


So let’s get to it.


Ambitious and Extensive Offering


Chrome 80 arrived a week and some back, and it’s been promoted most notably as putting the clamps on cookies while patching 56 vulnerabilities at the same time. Google reportedly paid out about $48,000 in bug bounties to the researchers who reported these vulnerabilities, with 10 of them prioritized as ‘high risk’. Half of those 10 were submitted by engineers on Google’s own Project Zero team.


Chrome updates in the background, so most users can complete the upgrade simply by relaunching their browser. If a manual update is needed, select ‘About Google Chrome’ from the Help menu under the vertical ellipsis at the upper right. You’ll then see a tab showing either that the browser is up to date or the download’s progress, followed by a “Relaunch” button.


Limiting Function of ‘Cookies’


This is a huge part of what makes the Chrome 80 update such a big deal, and especially for anyone who feels a little put off about how their computer seems to ‘know so much about them.’


Google had already promised it would find a way to restrict cookies. For those of you who may not know, a ‘cookie’ is a small bit of code websites rely on to identify individual users. The restriction is being done using the SameSite standard, which was designed to give web developers a way to control which cookies can be sent by a browser, and under which conditions.


With the Chrome 80 update, Google will begin enforcing SameSite: cookies distributed from a third-party source – ones not initiated by the site the user is currently visiting – must be correctly set and will now only be accessible over secure connections. Enforcement of the new cookie classification system is reported to commence later in February, and we should remember that Google generally prefers to roll out new features and other changes in stages, to verify things are working as expected before making them available to its enormous pool of users. The company has stated the week of Feb. 17 is when the SameSite switch will be flipped, so we may get confirmation of that today or tomorrow.


Another aspect of Chrome 80 is that cookies without a SameSite definition will be treated as first-party only by default; third-party cookies – ones from an external ad distributor tracking users as they wander the web – won’t be sent.
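To picture what this means for a site operator, here is a minimal sketch using Python’s standard-library `http.cookies` module (the `samesite` attribute requires Python 3.8+). The cookie names and values are ours for illustration only:

```python
from http.cookies import SimpleCookie

# Under Chrome 80, a third-party cookie must be explicitly marked
# SameSite=None AND Secure, or the browser will refuse to send it
# in cross-site requests.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["samesite"] = "None"
cookie["session_id"]["secure"] = True

# A cookie with no SameSite attribute at all is treated as
# first-party only by default and won't travel cross-site.
legacy = SimpleCookie()
legacy["tracker"] = "xyz789"

print(cookie["session_id"].OutputString())
print(legacy["tracker"].OutputString())
```

The first line printed carries the `SameSite=None; Secure` attributes Chrome 80 now insists on; the second is a bare cookie that will be restricted to first-party contexts.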


It’s believed that the idea behind this is an aggressive push by Google to motivate site makers and other cookie distributors to get behind the SameSite standard, something made advisable by Google Chrome’s position as the industry leader among web browsers. Keep in mind that SameSite is not Google’s answer to the increasingly anti-tracking stances taken by rivals like Mozilla and Microsoft. However, Google is quick to tout SameSite’s security benefits, especially for preventing cross-site request forgery (CSRF) attacks.


An End to Notification Nagging


Chrome 80 also implements the quieter notifications that Google promised last month. Instead of letting sites place pop-ups on the page requesting permission to send notifications, following the Chrome 80 update you’ll instead see an alarm bell icon with a strike-through near the right edge of the address bar. We’re among the many who’ve found notification pop-ups to be very annoying, so this is likely to be extremely well received.


Users will be able to manually engage the new notification UI using an option in Settings > Advanced > Privacy and security > Site Settings > Notifications. Toggle the “Use quieter messaging (blocks notification prompts from interrupting you)” switch and you’ll immediately have activated the pop-up blocker.


Google has said it will also automatically enable the quieter UI for some: users who repeatedly deny notification requests will be auto-enrolled in it. Google will automatically silence some sites too, targeting ones that fish especially hard for notification enrolments.


Tab groups are also expected to debut in Chrome 80, but as of this writing that feature has yet to be entirely rolled out. For those of you eager to see it, here’s how you can enable it:


  • The option to turn it on is behind chrome://flags: Search for Tab Groups, change the setting at the right to Enabled, and relaunch the browser


Google is claiming that the feature should begin rolling out to users with Chrome 80, but it may not be in final form until March’s Chrome 81, which is scheduled to arrive on March 17, 2020. When it does, users should be able to right-click tabs and choose new menu items to create groups, assign tabs to them, or remove tabs from those groups.


One last thing to note about the Chrome 80 update is that it allows administrators to effectively block employees from installing external add-ons. Administrators can call on the BlockExternalExtensions policy to stop the practice.
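As a sketch of what that looks like in practice: on Linux, Chrome reads managed policies from JSON files under `/etc/opt/chrome/policies/managed/` (on Windows the same policy is typically set via Group Policy or the registry). A minimal policy file enabling this block might look like the following; the filename and deployment path are up to the administrator:

```json
{
  "BlockExternalExtensions": true
}
```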

Security Risks Increasing Considerably When Moving Sensitive Data to Enterprise Cloud

Stick your head around pretty much any corner and there’s bound to be something about the ever-wider reach of cloud computing and what it promises for the future of the digital world. The ability to utilize non-physical storage and then share data without requiring physical access to that storage has been a real game changer. Now, with the good usually comes at least a little not-so-good, and – surprise, surprise – cloud computing is no exception. But if there were a ‘do over’ button, would anyone press it and go back to the times of exclusively physical-location storage and access?

Not a chance.

Cloud computing is going to be one of the centerpieces of modern computing technology for the foreseeable future, so we are going to need to accept and overcome a few bumps in the road along the way. Increased security risks are at the forefront of everyone’s mind in the digital realm these days, and here at 4GoodHosting we’re like any reputable Canadian web hosting provider in that we’re making enterprise-level security measures standard with most of our web hosting packages.

And while we’re huge fans of cloud computing, our expertise is in web hosting and we don’t claim to know much, if anything, about security risks related to cloud computing. However, research is something we ARE very proficient with, and as such we’re always happy to dig into topics that our customers are likely to find relevant to what they do on a day-to-day basis on the World Wide Web.

Cloud with Caution

And so here we are in a brand new decade, and no one will be surprised that enterprises continue to feed their clouds with increasingly sensitive information. However, it would seem doing so is increasingly risky, and decision makers are being urged to move forward with caution. A recent study of anonymized data from 30 million enterprise cloud users found that roughly 26 percent of files analysed in the cloud now contain sensitive data, a figure that has been increasing some 23% year over year.

This becomes potentially problematic when you consider that 91% of cloud services do not encrypt data upon entering cloud storage. That means that of every 10 or so entries, more than 9 aren’t well guarded – if guarded at all – while sitting in the cloud.

Now, to be fair, data loss prevention (DLP) software does exist, and a lot of it is quite good and reasonably effective. However, it’s also estimated that only 37% of cloud service providers say they are utilising DLP. Add to that that nearly 80% of users access enterprise-approved cloud services from personal devices, and – perhaps more alarmingly – a quarter of companies report having had sensitive data downloaded from the cloud to an unmanaged personal device.
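To make the idea concrete, the core of what DLP tooling does is scan content for sensitive patterns before it leaves a managed environment. Here is a deliberately minimal sketch in Python; the regular expressions and sample text are illustrative only, and real DLP products use far more sophisticated detection (checksums, classifiers, context analysis):

```python
import re

# Illustrative patterns only -- not production-grade detection.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(text):
    """Return the names of sensitive patterns found in a block of text."""
    return sorted(name for name, pat in SENSITIVE_PATTERNS.items()
                  if pat.search(text))

sample = "Contact jane@example.com, card 4111 1111 1111 1111."
print(scan_text(sample))
```

A scanner like this would run on files before upload, flagging anything that should be encrypted or held back rather than dropped into cloud storage in the clear.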

Spotty Security and Risk Management

It’s not that the current infrastructures in place are bad, and more that they’re insufficient and spotty with how and where they’ve been rolled out. Gaps in data visibility and shielding continue to mean that certain networks look very inviting to breach attempts and non-compliance.

A recent survey found that 93% of cloud storage providers agree that the responsibility to secure data in the cloud is theirs. However, many of these same respondents say there simply aren’t enough individuals with the skills required to put the right infrastructure in place and maintain it. SaaS (software as a service) is new, but it’s not that new, and to some degree it’s hard to believe this assertion.

It IS fair to say, however, that technology and training continues to be outpaced by cloud’s aggressive enterprise growth. The expression ‘growing pains’ may be very appropriate here.

Smart Reactionary / Precautionary Measures

So what are the recommendations for anyone with above-average concerns about sensitive data of theirs being stored in the cloud?

Here are 3 things you can – and should – do to increase security of cloud-stored data:

  1. Evaluate your data protection strategy for devices and the cloud

Consider the difference between a disparate set of technologies at each control point and the advantages of merging them into a single set of policies, workflows, and results.

  2. Investigate the breadth and risk of shadow IT

Determine your scope of cloud use, and put a primary focus on high-risk services; then move to enabling your approved services and restricting access to any that have the potential of putting data at risk

  3. Plan for the future with unified security for your data

Context about devices improves cloud data security, and context about the risk of cloud services improves access policy through the web. Many more efficiencies will exist, while some are yet to be discovered. The smart merging of all these control points will be what will deliver the future of data security when it comes to utilizing all the advantages of cloud storage and access.

In conclusion, one last consideration is to look a little longer at which sensitive files would be fine – and better off – in physical storage, with all the inherent security that comes with it. Never look at cloud storage as something to be used just because it’s there. If a particular set of files doesn’t need the easy accessibility the cloud provides, and doesn’t demand much space, then perhaps it’s just fine staying stored where it is.


New Windows 10 Patch More of a Problem Creator than Problem Solver

It’s not often we choose to use relevant recent software news as the subject for our weekly blog post, and the reason for that is not only because there’s usually plenty more noteworthy news out there, but also because oftentimes these software shortcomings don’t affect a large swath of people. However, any time it’s anything related to a Windows OS issue, the sheer number of people that rely on that particular operating system makes it worthy of mention. We’re certain that the software engineers who put out these patches are qualified and have the best of intentions, but we all get it wrong sometimes.

Here at 4GoodHosting, we’re just the same as any quality Canadian web hosting provider in that we see the value in putting certain news on the billboard – if you will – so long as it’s welcome information for a good many of our customers. Now, we’re fairly sure more than a few of you are sitting with a Windows OS device in front of you, so that’s why we’ve decided to make the shortcomings of the new Windows patch our topic of discussion this week.

Admittedly it’s not the most engrossing stuff. But if it leads even a few of you to avoid major headaches by skipping this patch and ‘leaving well enough alone’, as the expression goes, then we will have done something for the collective good.

Alright, let’s get to it.

A Not-So-Good Fix for Search Function Bugs?

Windows 10 recently issued an update promoted as the cure for the long-standing bugs in the search function that have been a real thorn in the side of Microsoft Windows OS users. To get right to the meat of it: in finding a working fix for the search bugs (which was accomplished), this patch seems to have tampered with other parts of the OS and introduced a whole manner of new issues.

Hate to be overly critical, but sometimes you just have to call it as it is – this is really quite the mess for Windows 10 users who were simply looking to get past the Search hang up. If you haven’t downloaded the newest Windows 10 patch yet, you might want to avoid doing that altogether.

And here’s that worst-case scenario we were talking about – more than a few people have reported installing update KB4532695 and then receiving a ‘blue screen of death’ for their troubles, meaning their PC is totally locked up and probably needs a trip to a computer repairman, unless you’re something of a computer repair tech yourself.

The bad continues; if a thread on Microsoft’s help forum is to be believed, the patch isn’t done there when it comes to undesirable outcomes: reported boot failures, disabled audio and sound cards, Bluetooth rendered useless, and connecting to the Internet made an impossibility – even after reboots.

And even PCs that still boot fine may be sluggish and annoyingly slow. Some people described being stuck at the splash screen for a good 5 minutes at least, and only uninstalling the update fixed this for them.

Glitches Too

KB4532695 is something of a failure for other reasons too; while it’s true not everyone is going to experience ALL of this, it’s still expected to be a huge nuisance for a number of people to the extent that the onus is definitely on Microsoft to fix this, and fix it without too much delay.

Fortunately, uninstalling the update IS possible, and it may well be your best choice to do so, put up with poor search functionality for the time being, and wait until a better and more wholesome patch is issued from Redmond.

Search Fix, Any Fix?

We’ve established that this patch does little to solve the search problems with File Explorer, despite this being the reason for its creation. We will give credit where it’s due and say the KB4532695 patch DOES resolve issues with right-clicking and the search bar being unresponsive. However, there are still bugs affecting the bar even after installing this patch. So that’s a negative too.

Users have reported having to left-click twice to get the cursor to appear in the place where they’re clicking inside the search bar. Apparently you need to left-click first before a right-click on the search bar has any function.

Obviously this wasn’t intentional, and I’m sure certain individuals have been told to get back to the drawing board without delay – but one thing’s for sure, this new Microsoft search bar patch is something of a dud. Not recommended, especially if you can make do until a PROPER and FUNCTIONAL successor patch arrives.


A Reminder About the Relationship Between VPNs and SEO

It’s likely fair to say that Virtual Private Networks have been enthusiastically promoted by web hosting providers in Canada, at least from a consumer’s perspective. But it’s also fair to say that if your reasons for having an online presence include SEO and search engine page rankings, then the benefits of a virtual private network can’t be overstated. We may have touched on this before, but in a new decade where the digital sphere is set to become even more prevalent in the business world, it needs to be said again.

So that’s what we’ll talk about here today. Part of being a leading Canadian web hosting provider here at 4GoodHosting is being attuned to what’s important to people who trust us to ensure their website is up and open 24/7 and 365. We don’t need to take a survey to know that for the vast majority who have business interests with their website, ranking well in SERPs is going to be well up there. That people aren’t inclined to sift past the end of the first page – if they even get that far – isn’t likely ever going to change. And so the importance of 1st page search rankings isn’t going to change either.

So let’s take a short but thorough look at it for anyone who’s open to anything that can improve their website’s SEO and search engine rankings.

Brief VPN Introduction

Virtual private network services, or VPNs, are different from shared networks in that the user creates a new path for their online activities to connect to the Internet. Rather than connecting directly to the web, a virtual private network directs the traffic via one of its own servers before sending it on to the resource. Data is also encrypted in this process, and that’s a big part of the appeal of VPNs too, but we’ll leave that for another discussion.

We will say that when an end-user connects to the Internet this way, there are a lot of security benefits: their identity, IP address, and what they’re accessing online are all kept entirely private and masked from view. This functionality has plenty of practicality for people working in SEO, though what that is may not be immediately clear.
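The routing idea can be sketched at the application level. A real VPN relays *all* of a machine’s traffic at the operating system level, but the concept is similar to directing one program’s requests through an intermediary server, which Python’s standard library supports via `urllib`. The proxy address below is hypothetical:

```python
import urllib.request

# Hypothetical relay endpoint -- a VPN does this for all traffic
# at the OS level, not just for one application's HTTP requests.
proxy = urllib.request.ProxyHandler({
    "http": "http://vpn-exit-node.example:8080",
    "https": "http://vpn-exit-node.example:8080",
})
opener = urllib.request.build_opener(proxy)

# Requests made through `opener` would be relayed via the proxy,
# so the destination site sees the relay's IP address, not yours.
# opener.open("https://example.com")  # not executed in this sketch
```

The destination only ever sees the relay’s address, which is exactly the masking effect described above.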

Local SEO Results

Anyone working on SEO projects for global clients will be aware that achieving good organic results and paid ads on search engines can be quite challenging. The difference with a VPN is that since it allows people to choose their country of connection, they are then able to search via the VPN to get a clear view of how competitive it is to rank for certain keywords. Being able to easily switch between countries makes it a lot easier to handle multiple clients, especially when those clients are a geographically diverse bunch.

Better Visibility with Domestic Searches

Unless you’re extremely SEO savvy, you likely won’t know that the search results Google shows a person are based strongly on their search history and location, plus other markers the search engine uses to identify the individual. As a result, simply checking rankings to see whether you’ve made progress with a client’s site isn’t a very accurate representation of the strength of its SEO. You’d be viewing a skewed representation that may not reflect how frequently, or with what phrasing, people are searching for those keywords. However, when a user’s identity is masked, the VPN takes out the identity-based adjustments Google makes and gives the webmaster (or anyone else) a much more accurate representation.

They can then report this information to their client and give them a more smartly designed plan to improve their search engine rankings. By being able to see the competition and exactly how far they have to go, they can create a plan that best addresses the reality of their search engine ranking deficiencies.

Protection of Sensitive SEO Data

Effective SEO strategies and targeted keywords will be something you will want to keep to yourself, not only for a competitive advantage but also because this can be sensitive information. You won’t want the competition or another unauthorized entity to get their hands on it. Malware exists in many forms and hackers are quite sophisticated in their strategic approches these days, so anything extra an SEO pro can do to protect this information when it’s moving from their computer to the client is going to be helpful.

A VPN offers this to the user, with strong encryption built into the system that makes it fairly unlikely a hacker would be able to extract usable information, even if they manage to intercept the transmission.

It’s also true that some VPN services have infrastructure in place to block ads and protect the user against malware and phishing attempts automatically. That’s always going to be a big plus, no matter what you’re doing and how vigorously you’re working to improve your SEO.

Stay Safer on Public Wi-Fi

SEO experts do enjoy the freedom to be a digital nomad as one of the perks that comes with the nature of their work. Working remotely in different locations is almost always going to involve public Wi-Fi connections now and then. They’re great, but they put data and systems at risk. The potential risks of public Wi-Fi networks are nothing new, but some people have more to lose than others.

The good news in this regard is that a VPN encrypts all of the traffic going from the person’s computer through the VPN server. Plus, not needing to tether to a mobile data-reliant device to get the same level of safety is advantageous for obvious reasons too.

Advanced Understanding of Google Ads

The benefits of a VPN don’t end there. It’s also a better choice for anyone who makes regular use of Google Ads to promote their business. Using a VPN for paid ad placement lets you see whether an ad is showing up in the position you expect, and you can see the geographic disparities as you move between servers. Information learned from this process can prompt smart strategic changes to your SEO strategy and other improvements.

No Geo-Restrictions

Some countries have geo-restricted content, and for some people that impedes their efforts to do their job. A VPN gets around these limitations since that traffic will look like it’s coming from a different country. Firewalls aren’t much of a wall at all when you’re working from a VPN.


Choosing the Right VPN

A good place to start is by looking for ones that have a variety of supported countries. Another important consideration is the type of encryption that the service uses. Look for VPNs that have military-grade or bank-grade levels of encryption. If someone has the means of getting through this type of security, they’re probably the type who would be getting through no matter what you had in place to lock them out.

Most quality VPNs will also have a kill switch that stops all Internet usage if the VPN connection goes down. Price is also a consideration, but those who opt for a free VPN will almost certainly find its bare-bones nature isn’t to their liking. Typical restrictions for free services include a small number of supported devices and connections, and a data cap. Premium services relax or eliminate these limits to make life easier for their users.


“Fleeceware” – What is It, and What’s the Risk?

It’s likely accurate to say that most people put a lot more priority on security measures for their desktops or notebooks than they do for their phones. While it is true that most mobile operating systems will have anti-virus features to some extent, it’s becoming increasingly clear that nowadays that’s not going to be sufficient much of the time.

Incidences of phones becoming infected with malware are increasingly common, and there are going to be very few people who aren’t familiar with that term.

However, one newer variety of ‘ware’ that isn’t as universally well known as malware is going to be the subject of our post here today. Here at 4GoodHosting, we may be a quality Canadian web hosting provider, but our higher level of web-savviness doesn’t make us any less at risk of these bugs messing with our mobile devices than the average citizen is. The difference is we’re in a position to always be made aware of new threats as they come along, whereas most of you likely aren’t.

That’s why we always make a point of sharing this type of information. Who wouldn’t be especially displeased to find out their phone has been compromised? So here it is – a discussion of the newest type of malware to arrive on the scene: what exactly is ‘fleeceware’, and what can we do about it?

Perils of Free Trial Periods

Before we discuss this new type of malware, iPhone users can breathe easier and see themselves out. Fleeceware is making victims of Android users exclusively, at least for now, and its de facto delivery method is actually the Google Play Store. Obviously this is one of the most visited digital storefronts in the world, and if a recent research survey is to be believed, these fleeceware apps have been unwittingly downloaded and installed by over 600 million Android users making purchases through Google Play.

Now, for those of you who don’t have the most expansive vocabulary, fleece – when used as a verb – is ‘to strip of money or property by fraud or extortion’ (credit to the good folks behind Merriam-Webster’s excellent online dictionary). So that gives you an idea of what’s going on here.

It was last September when this term was coined, after a new type of financial fraud was discovered taking place on the Google Play Store. The term refers to apps that abuse the ability to offer trial periods before users’ accounts are charged, so if signing up for an Android app’s ‘trial period’ is something you’re considering, be forewarned that you need to proceed with caution.

How it Happens

Here’s how this plays out, both nefariously and all too discreetly: when a user signs up for an Android app trial period, they must manually cancel the trial to avoid being charged. Most users instead simply uninstall apps they don’t like, and most app developers take that as an indication the user wishes to cancel the trial period without being charged.

It was only recently discovered that some app developers made no such cancellation when an app was uninstalled during the trial period. Rather, they kept charging users despite the fact that they were no longer using the app.

They were ‘fleecing’ these former free trial-period users, and doing so in a way that didn’t allow these individuals any way of knowing they were still ‘on the hook’ for the app even though they’d deleted it from their devices before the free trial period ended.

More Than a Few Fleeceware Apps

Industry watchdogs discovered 24 Android apps charging high fees – between $100 and $240 per year on average – for simple apps such as QR readers and calculators. And again, the charges continued after the trial periods had ended, independent of whether or not the person had deleted the app from their phone.

Plus, it’s also been revealed that another set of Android fleeceware apps has been unwittingly downloaded by people through the Google Play Store with no reason for suspicion. The good news is that many of these dark-sided apps have telltale signs that indicate a possible fleeceware app:

  • Unprofessional design and ‘cheap’ appearance and / or UI (user interface)
  • Abnormal number of 4 or 5-star reviews that do not have any commentary attached to them, or very little and vague wording (aka ‘sockpuppet reviews’)
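The second sign lends itself to a simple automated check. Here is a toy heuristic in Python that flags an app when most of its reviews are maximum-star ratings with little or no accompanying text; the review data, field names, and thresholds are entirely hypothetical, chosen only to illustrate the idea:

```python
# Toy heuristic: flag an app when most of its reviews are high-star
# ratings with little or no accompanying text ("sockpuppet reviews").
def looks_like_sockpuppets(reviews, star_threshold=4, min_ratio=0.8):
    suspicious = [r for r in reviews
                  if r["stars"] >= star_threshold and len(r["text"].strip()) < 10]
    return len(suspicious) / max(len(reviews), 1) >= min_ratio

reviews = [
    {"stars": 5, "text": ""},
    {"stars": 5, "text": "great"},
    {"stars": 5, "text": ""},
    {"stars": 4, "text": "ok"},
    {"stars": 2, "text": "Charged me after I uninstalled the trial!"},
]
print(looks_like_sockpuppets(reviews))
```

A human reading the review page can apply the same rule of thumb: a wall of 5-star ratings with no commentary deserves suspicion.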

The industry consensus seems to be that while fleeceware apps are being scrutinized and put in the public spotlight more, there’s still less focus on them as compared to ‘debilitating’ types of malware that affect the function of the device more directly. It’s a problem that Google will have to deal with for their Play Store, and it would be nice to see them move more quickly in response.

What can you do to protect yourself? For starters, and quite basically, you should think twice about signing up for any trial period, and especially for any app that meets the criteria listed above for possible fleeceware ones. Next, be sure to actually cancel any trial periods rather than opting to simply delete the app.

More and more folks are choosing an anti-malware software for use with their mobile devices, and it’s really a smart call these days. Here’s hoping all of you who frequent the Google Play Store are more informed when it comes to shopping safely these days.

Understanding UWB (Ultra Wideband) and Its Significance for the Internet of Things

Most people will need to look no further than their video doorbells, wireless thermostats, and the like to know to just what extent the IoT (Internet of Things) is increasingly integral to the conveniences of modern life. It was inevitable that consumer goods and appliance manufacturers would utilize the power of the Internet to make these products more personalized AND powerful. The majority of us are quite enthused with the development, although there are some concerns about privacy.

All in all though, the ongoing development of the IoT is a big benefit for consumers whose everyday lives take up nearly all of their available attention; anything that can make the tasks that come along with them easier is going to be especially welcome. Here at 4GoodHosting, we’re like any good Canadian web hosting provider in that these types of trends and technological developments are especially interesting to us given the digital nature of what we do here.

While it would be a challenge to find anyone who’s not at least somewhat familiar with IoT, it’s understandable if you haven’t heard of UWB (Ultra Wideband) connectivity. That’s what we’re going to discuss in our entry here today, and the first thing to know is that it’s going to be an integral part of expanding the IoT to even greater functional capacities.

Getting to Know UWB

The new line of iPhones need no introduction, but one interesting point about them is they contain a chip called a U1 chip that provides Ultra Wideband connectivity. These chips are said to provide ‘spatial awareness’ – the ability for your phone to recognize its surroundings and the objects in it. In more example-based terms, it’s what allows one iPhone 11 user to point their phone at another iPhone 11 and transfer a file or photo.

UWB is a short-range, wireless communication protocol that is similar to Bluetooth or Wi-Fi in that it uses radio waves, but it’s different in that it operates at a very high frequency. It also uses a wide spectrum of several GHz. 15+ years ago, UWB was used for military radars and covert communications and for some medical imaging applications too.

Today, however, engineers are aiming to utilize it for location discovery and device ranging. UWB is more precise and accurate than Wi-Fi and Bluetooth when locating other devices and connecting to them. It also uses less power and should offer a lower price point.

All of the world’s largest smartphone makers are involved in UWB projects, including chip and antenna production. Apple has beaten the others to the punch when it comes to actually deploying it in a phone, though.

How Does UWB Work?

UWB transmitters work by sending billions of pulses across the wide spectrum frequency before a corresponding receiver then translates the pulses into data by identifying a familiar pulse sequence sent by the transmitter. One pulse will go out about every two nanoseconds, and this is how UWB is able to offer real-time accuracy.


UWB is extremely low power, but the high bandwidth (500MHz) makes relaying extensive quantities of data from a host device to other devices much more feasible, at distances of up to roughly 30 feet. It is true, however, that UWB struggles with transmitting through walls.

But as long as there’s a direct ‘line of sight’, if you will, it has some seriously impressive data transfer capabilities. Increasing UWB’s range and reception reliability is made possible by a MIMO (multiple-input and multiple-output), distributed antenna system added to the standard that enables short-range networks. When embedded into a smartphone or other devices such as a wristband or smart key, it creates a whole new ballgame as far as connectivity and data transfer are concerned.

Superior Receptiveness and Responsiveness

With use cases like asset tracking or device localization, one of the UWB devices calculates the precise location of another UWB-enabled object. For example, a UWB-enabled device can be used to unlock a car like a key fob or enable entrance to a secure area within a building. Or, a UWB-enabled smart phone or watch could establish secure access to a bank account via an ATM.
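The precise locating described above ultimately rests on time-of-flight measurement. As a rough sketch (not a real UWB stack), distance falls out of the speed of light and the measured round-trip time between two devices:

```typescript
// Minimal time-of-flight ranging sketch (illustrative only):
// distance = speed of light x one-way travel time.
const SPEED_OF_LIGHT_M_PER_S = 299_792_458;

// roundTripSeconds: measured time for a pulse to reach the other
// UWB device and come back.
function distanceFromRoundTrip(roundTripSeconds: number): number {
  const oneWaySeconds = roundTripSeconds / 2;
  return SPEED_OF_LIGHT_M_PER_S * oneWaySeconds;
}

// Light covers about 30 cm in one nanosecond, which is why
// nanosecond-scale pulse timing yields such fine-grained ranging.
const metres = distanceFromRoundTrip(60e-9); // 60 ns round trip
console.log(`~${metres.toFixed(2)} m apart`); // ≈ 8.99 m
```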

Security is always a popular topic these days, and UWB can augment that too. It could be a way to thwart relay or man-in-the-middle attacks, where bad actors monitor an area like a parking lot in an attempt to intercept and then store authentication messages between two devices, such as a key fob and a car. The UWB device’s signal would ignore all other devices in an area that aren’t themselves authenticated as UWB-ready devices.

What is UWB Capable of Doing with the iPhone 11?

Apple’s self-developed U1 chip has enabled all three models of the iPhone 11 to transmit data using the AirDrop file transfer service, and it will work at distances similar to those of Bluetooth.

While Bluetooth and Wi-Fi are indeterminate about direction and distance, the Apple-developed U1 chip enables real specificity. To put that differently, you have more physical control over which device(s) have access to yours based on where you position it. This makes us believe it’s almost certain that Apple will enable the iPhone as a type of vehicle key fob sometime in the not-too-distant future.


6 Skills for Web and App Developers to Learn for 2020

The new year – and a new decade – are upon us now, and for many of us our New Year’s resolution will be to further our careers. Sure, there are going to be many other resolutions too, but no matter what industry you work in there’s always something to be said for expanding the horizons of what you’re able to do. For those who work in IT, it’s probably fair to say that there’s even more pressure to continually be mastering more given the extremely competitive nature of the industry and the willingness of companies to jettison employees in favour of more capable ones if and when they present themselves.

That certainly is the nature of that industry, but every one of us that enjoys digital connectivity and everything that comes along with it needs to be thankful for what web and app developers do for us. Here at 4GoodHosting, being a leading Canadian web hosting provider doesn’t qualify us for such appreciation, but we do sit in a position that gives us a more insider’s view into what goes on in the IT industry, and how there’s never a day when these developers can rest on their laurels. It’s entirely true that if you’re not moving forward and learning and exploring all the time, you’re likely not long for the profession.

As long as user behaviour and user needs are changing, then web and application development will always be changing similarly. Fail to keep up with that and it’s going to be impossible to keep your products and projects relevant. The answer of course is to expand your skill set so that you’re able to catch the balls no matter which direction they fly at you from.

With that understood, here are 6 key skill areas where web and app developers would be wise to put their focuses in 2020.

  • Artificial intelligence

AI’s importance in application development can’t be stressed strongly enough at the moment. Most people who are in this business will know the extent to which it’s become ubiquitous. Many users won’t even realise they’re interacting with AI or machine learning systems, but the fact that they are offers major benefits to both the inputter and receiver of the information.

The ways in which AI can be used by web and app developers are extensive and constantly growing. Personalized recommendations are likely the most obvious example, but chatbots and augmented reality are where the boundaries are really being pushed with what’s possible with AI in the development field.

For some developers, AI might still be a little intimidating. And that’s fine, but be aware that you don’t need a computer science or math degree to use it effectively. Platforms and tools that make it possible to use machine learning technology out of the box are coming out quite regularly now, from Azure’s Cognitive Services and Amazon’s Rekognition to ML Kit, built by Google for mobile developers.

  • Azure Cognitive Services for Developers

Azure Cognitive Services lets you build smart, AI-backed applications with impressively intuitive and user-friendly design. It does require a basic degree of web development know-how, but what it primarily does is make processes quicker, saving you time that you can then use for more challenging parts of the development process.

  • New Programming Languages

If you’re not familiar with the term ‘polyglot’, it refers to a person who has a natural gift for learning multiple languages and speaking all of them with real fluency. When a programmer is similarly ‘fluent’ with different programming languages, they’re more able to choose the right language to solve tough engineering problems.

Web and app developers responsible for building increasingly complex applications and websites won’t need to be reminded of this fact. The emergence of languages like TypeScript and Kotlin attest to the importance of expanding on your programming proficiencies every chance you get. However popular core languages like JavaScript and Java may be, there are now some tasks that they’re just not capable of dealing with.

Learning a new language (or 3) is a great way to build your skill set. Use your time wisely.

  • Accessibility

Real, functional web accessibility cannot be overlooked any longer. With increasing pressure to deliver quality, bug-free software on time, thinking about the consequences of specific design decisions on different types of users is almost certainly going to be pushed to the bottom of developers’ priorities. Nowadays a two-pronged approach is the much wiser choice, and developers will do well to commit to learning web accessibility themselves, along with actively communicating its importance to non-technical team members.

More from the developer’s own perspective, though, it will also help developers to become more aware and well-rounded in their design decisions. That’s going to keep you in good stead no matter where and how you’re working.
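As a concrete illustration of building accessibility into your workflow, here’s a toy audit over a simplified element tree. The `PageElement` shape and the two rules are hypothetical stand-ins for what dedicated tooling does far more thoroughly:

```typescript
// Toy accessibility check (illustrative sketch, not a real audit
// tool): flag images without alt text and inputs without labels.
interface PageElement {
  tag: string;
  attrs: Record<string, string>;
}

function auditAccessibility(elements: PageElement[]): string[] {
  const issues: string[] = [];
  for (const el of elements) {
    if (el.tag === "img" && !("alt" in el.attrs)) {
      issues.push("img is missing alt text");
    }
    if (el.tag === "input" && !("aria-label" in el.attrs) && !("id" in el.attrs)) {
      issues.push("input has no label or aria-label");
    }
  }
  return issues;
}

const issues = auditAccessibility([
  { tag: "img", attrs: { src: "hero.png" } },              // flagged
  { tag: "img", attrs: { src: "logo.png", alt: "Logo" } }, // fine
  { tag: "input", attrs: {} },                             // flagged
]);
console.log(issues.length); // 2
```

Even a simple habit like this – asking of every element “could a screen reader make sense of this?” – goes a long way toward the awareness described above.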

  • JAMStack and Static Websites

Traditional content management systems like WordPress can be a major headache for developers if you want to build something that is more customized than a standard offering. This is why JAMStack – a smartly designed combo of JavaScript, APIs, and markup – offers web developers a way to build secure, performance-oriented websites very quickly.

Some have said that JAMStack is the ticket to producing the next generation of static websites, but it’s important to know that JAMStack sites aren’t exactly static, as they call data from the server side through APIs. Developers then put templated markup to work, usually in the form of static site generators (like Gatsby.js) or build tools that serve as a pre-built front end.

Among all the benefits that can come with learning JAMStack, likely the most important is that it offers a really great developer experience. It allows you to build with the tools that you want to use, integrate with services you might already be using, and minimize the level of complexity that can come with some development approaches.
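To make the JAMStack flow concrete, here’s a minimal sketch of the build step: data pulled at build time (stubbed here in place of a real API call to a headless CMS), run through a markup template, and emitted as a static HTML string ready to serve from a CDN. All names and data are hypothetical:

```typescript
// Minimal JAMStack-style build step: fetch -> template -> static HTML.
interface Post {
  title: string;
  body: string;
}

// Stand-in for a build-time API fetch (e.g. from a headless CMS).
function fetchPosts(): Post[] {
  return [{ title: "Hello JAMStack", body: "Pre-rendered at build time." }];
}

// Pure templating: turn data into markup, no server needed at request time.
function renderPage(posts: Post[]): string {
  const items = posts
    .map((p) => `<article><h2>${p.title}</h2><p>${p.body}</p></article>`)
    .join("\n");
  return `<!doctype html><main>${items}</main>`;
}

// A generator like Gatsby does this for every page at build time,
// so the host only ever ships static files.
const html = renderPage(fetchPosts());
console.log(html.includes("Hello JAMStack")); // true
```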

  • State Management

State management is a bit of a buzzword these days, and we’re sure many of you who are developers have noted that. Among the many strengths related to its use is that it’s especially well suited for accommodating increasing app complexity.


It’s highly advisable for developers to learn some of the design patterns and approaches for managing application states that have come to the forefront over recent years. The two most popular are Flux and Redux, and both are very closely associated with React. And if you’re a Vue developer then Vuex is well worth learning.
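The core of the Flux/Redux pattern can be sketched in a few lines. This is an illustration of the idea – a single state value changed only through a pure reducer given dispatched actions – and not the actual Redux API:

```typescript
// Stripped-down Redux-style store: state changes only via the reducer.
type Action = { type: "increment" } | { type: "add"; amount: number };

// A pure function of (state, action) -> new state.
function reducer(state: number, action: Action): number {
  switch (action.type) {
    case "increment":
      return state + 1;
    case "add":
      return state + action.amount;
  }
}

function createStore(initial: number) {
  let state = initial;
  return {
    getState: () => state,
    dispatch: (action: Action) => {
      state = reducer(state, action); // the ONLY way state changes
    },
  };
}

const store = createStore(0);
store.dispatch({ type: "increment" });
store.dispatch({ type: "add", amount: 4 });
console.log(store.getState()); // 5
```

Because every change flows through one predictable function, app state stays debuggable even as complexity grows – which is exactly the strength noted above.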

We’d like to take the chance to do the same thing we did this time last week in wishing you all well, but this time for a Happy and Prosperous New Year, whether you’re a web or app developer or whether you choose (perhaps wisely) not to sit in front of multiple monitors for 8 hours as part of your work day.

6 Website Navigation Improvement Tips for E-Commerce Websites

There’s all sorts of different people, and all sorts of different prerogatives for each of them. But for those who are in business for themselves, one wish for the New Year that will be shared between all of them is the wish for greater business success, and the profitability that comes along with that. Now we could go on about the new economic realities of the digital world here in the 21st century, but we’re sure you don’t need to be informed of what you already know.

Offering customers the option to buy online is an absolute must for nearly any business, particularly one that’s in retail. It can even extend to services, albeit to a lesser extent. There’s a darn good reason why so many businesses spare no expense when it comes to building their online identity – something that extends way beyond simply having a good website.

Here at 4GoodHosting, one inherent attribute that comes with being a quality Canadian web hosting provider is really understanding the value of doing this. We do have a free website builder for those who purchase our advanced level web hosting packages, but there’s so much to be said for paid web design, as web developers understand that navigation and general structure are every bit as important as visual appeal.

Today we’re going to close out 2019 with 6 improvements you can – and should – make to your website to improve navigation and therefore have more visitors ending up in the checkout rather than ‘bouncing’.

For those who don’t know, your ‘bounce rate’ is the percentage of visitors who leave a site within a short time of arriving on it. And believe us when we tell you that they WILL ‘bounce’ if they find your site to be even the slightest bit user-unfriendly.

Right then, no more about that. Here are 6 very doable tips for you.

  1. Slim Menus

Navigation is key to everything that is important about your website. Trying to force all of your navigation into one menu can have negative results when it comes to users and how pleased they are with how the site is laid out and made navigable.

As a general rule, you should have no more than seven menu items in your navigation scheme. The reality is a menu loaded with options can have a big negative impact. Navigation that shows your main services or products and is descriptive as well as concise, works best.

  2. Descriptive Menu Items

As Google or another search engine crawls your site, your descriptive menu items will be indexed. Sites that only use general or generic terms will have their pages lumped into the mix with everyone else’s. It is preferable to create terms that are as specific to your product or message as possible. They will index more effectively and work to drive the most desirable traffic to your website. When product and service terms are too general they will apply to way too many types of businesses, and you won’t be distinguishing yourself on the web like you need to be – and like many of your competitors WILL be doing.

Making your navigation terms descriptive will help with limiting bad clicks and lowering bounce rates.

  3. Pitfalls of Dropdown Menus

When the entirety of a site’s categories are listed in a dropdown menu, there’s a very real chance that a visitor won’t mouse over it. If that happens then they may bounce because they aren’t able to see what they want offered quickly enough. A simple navigation menu with descriptive terms will direct your user to a page where you can present more sub items. Done properly, this will promote more engagement and not put the visitor at risk of mistakenly concluding that something is missing from your site when it’s really there.

  4. Importance of Order

It’s well understood that optimized navigation is based on a sound understanding that the items appearing at the beginning and at the end of your menu are the ones that are most effective for retaining site visitors. Your most important items need to be at the beginning and end of the menu, while the least important items should be located in the middle.

  5. Include Search

When a menu fails and site dropdowns are not sufficiently visible, a determined user will then proceed to find and use the search bar. A readily visible and easily located search bar is an essential component of a well designed site, especially an eCommerce one. A working search bar should be included near the top of every page on your website.

  6. Content and Social Media Items

The saying ‘Content is King’ really is accurate, as a blog and social media links can be especially beneficial to improving your conversion rates. Engaging your audience helps build lasting relationships, and over time they start to contribute to business and traffic growth. Visitors and shoppers that find your social media and web content to be agreeable / informative / interesting are more likely to see you as an authority on the subject matter of your business, and as such they’ll be more inclined to buy from you.

Links to these areas should be an integral part of your site navigation. Don’t have the link to your blog sitting as part of the footer navigation options on your site. Have it clearly and readily accessible at the top of the home page.

All in all, navigation needs to be sharp to create an easy-to-use environment on your site while still maintaining the search engine ranking needed to drive traffic. Check your analytics before and then again after making navigation adjustments to your site. Follow your instincts regarding maximum simplicity, and know that when you simply tweak a few fundamental areas you can watch your traffic convert and see your name gain real traction in the e-commerce realm for your chosen industry.

With that said, we do wish you the very success you wish for in 2020, and more simply we’ll take this chance to wish a Very Happy New Year to all of you!

Your Need-to-Know for Thunderbolt 3

High-speed data transfer has been an absolute godsend for business computing interests since the USB cable arrived on the scene many, many years ago. Since then what was speedy has become downright fast, and when Apple and Intel introduced Thunderbolt technology nearly 10 years ago it was definitely a game changer for rapid charging and transferring large quantities of data between devices without having to twiddle your thumbs for a long time while that task was completed.

It would seem that the speedy transfer revolution isn’t done yet, as instead of competing with the USB-C port, the Thunderbolt 3 port that’s been on the scene since late 2015 has joined forces with it to offer the newest and best choice for lightning-fast data transfer and charging capabilities.


Here at 4GoodHosting, we’re just the same as any other good Canadian web hosting provider in that we know that even the most non-technical individual is going to have large-scale data transfer tasks from time to time, if not more often. Thunderbolt 3 is definitely going to be the solution of choice for both Mac and PC users, and for people who are in business or the creative arts it’s going to be a game changer for sure.


Big Boom Data Transfer & Recharging


As mentioned, Thunderbolt technology has been around since the late 2000s, but once Thunderbolt 3 showed up in late 2015, times had changed. USB-C had stolen the thunder and emerged as the latest version of the standard – an updated and powerful USB connector that could deliver up to 15 watts of power for devices and up to 100 watts for charging compatible laptops or similar devices. This was a major advancement for USB, and the future of many common computer connections was changed for good.


The team behind Thunderbolt’s development at Intel had an objective decision to make, and rather than facing off against USB-C, they chose to join with it. Thunderbolt 3 ditched the old Mini DisplayPort connection base and went with a USB-C connection instead. Combining the two technologies has resulted in one particularly powerful hybrid.


By going with USB-C, Thunderbolt 3 was able to make the leap from Apple devices to other PCs and laptops, a process that for many people was long overdue. The only downside has been the issue of compatibility. If you’re using Thunderbolt or Thunderbolt 2, the new USB-C connection isn’t compatible with yours unless you purchase a pricey adapter.


What can you do with a USB-C Thunderbolt 3 port today? Here are some of the most common examples:


  • transmit data at a rate of 40Gbps
  • output video to two 4K monitors at 60 Hz
  • charge smartphones and most laptops with up to 100 watts of power
  • connect to an external GPU (unless it’s been blocked by the manufacturer)
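That headline 40Gbps figure is easy to put in context with a little arithmetic; real-world throughput will be somewhat lower once protocol overhead is accounted for:

```typescript
// Time to move a file at Thunderbolt 3's theoretical 40 Gbps peak.
const GBPS = 40;
const bytesPerSecond = (GBPS * 1e9) / 8; // 5 GB/s

function secondsToTransfer(gigabytes: number): number {
  return (gigabytes * 1e9) / bytesPerSecond;
}

// Moving a 50 GB video project at peak rate:
console.log(secondsToTransfer(50)); // 10 (seconds)
```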


Some may not know which type of port they have. To determine whether or not your USB-C port is actually Thunderbolt 3, look for the little lightning bolt symbol near it; this symbol is usually what differentiates it from a conventional USB-C port.

The Thunderbolt Port’s History


Thunderbolt technology got its start in the late 2000s as an Intel project called Light Peak. The aim of this project was to add optical data transfer to the traditional data transfer used with computer peripherals – by and large, combining copper wire with fibre optics. Engineers soon came to understand that prototypes using the old copper wiring standard were already achieving satisfactory results at a much lower cost.


Thunderbolt then made its debut in the early 2010s, and at first was available only on Apple devices. It was seen as very promising by designers and engineers who were using laptops but still needed high-powered connections to external storage, high-resolution displays, and so on.


However, as the first Thunderbolt release was only available for Macs for the first year or so, it didn’t have the wide reaching usability and appeal that its creators had in mind for it. In addition to limited availability, this new tech required unique Thunderbolt cables, and at around $50 they were expensive.


In June of 2013 Thunderbolt 2 was released with the big promise of being enabled for simultaneous 4K video file transfer and display. Thunderbolt 3 arrived just a couple of years later in June 2015, and devices using it came out in December of that year.


Thunderbolt 2 combined the two 10 Gbps bi-directional channels of the first cable into a single 20 Gbps bi-directional channel that could provide a lot more power to peripherals when necessary. It offered higher speeds than any other popular peripheral cable of the day, but the most relevant difference was its 4K compatibility. Users who depended on Thunderbolt connections could now know that the highest resolutions would be supported when necessary.


The new Thunderbolt standard meant that, with a little extra hardware working alongside the updated USB-C port, even USB-C’s own data transfer and powering capabilities were outdone. Dedicated Thunderbolt ports were now a thing of the past, which is why USB-C ports can come with or without Thunderbolt 3 capabilities, and USB-C cables may or may not be compatible with Thunderbolt 3.


Thunderbolt Developments


Charging devices using USB-C Thunderbolt connections has become more common. Compatibility has been expanded to include the USB 3.1 cable standard, but USB is moving to a more complicated standard setup with 3.2, which is set to appear on the market soon and will include several different varieties with different capabilities.


It should also be mentioned that the USB 4 standard is on its way, and it is reported to have speeds that can rival Thunderbolt 3. According to official data, USB 4 will offer two-lane data transfer with total speeds up to 40Gbps, and its more open nature will encourage more manufacturers to use it.


Consider security threats as well. Security experts recently warned of the Thunderclap vulnerability on Macs and PCs. This vulnerability allows hackers to use the Thunderbolt port to access and steal files on a computer via a device loaded with the right malware, bypassing Thunderbolt security measures in seconds. It’s something to be very aware of, and a reminder that you should never use these ports with unfamiliar devices.


Merry Christmas to Everyone from all of us here at 4GoodHosting!

Interesting Possibilities: Blockchain and IoT could come together to Disempower Food Fraud Interests within 5 Years

We’re well aware that most of the time we’re discussing something more explicitly related to web hosting, computing, or online business interests with our blog here. In a few instances we’ve discussed Blockchain technology and the Internet of Things, but we’ve never discussed them together before, and certainly not within the context of how the two could be coming together in the future to ensure that what you’re putting on your dinner plate IS what you think you’re putting on your dinner plate.

Recent industry news is highlighting this exact possibility, and we’ll get into explaining just how exactly this may work. Here at 4GoodHosting, we’re like all other quality Canadian web hosting providers in that we very much enjoy hearing of how digital technology advances have the ability to not only make life better for many, but also to make it a lot more difficult for those who purposefully choose to be deceitful in the interest of making a dirty dollar.

So let’s get right to it, in the interest of any and all who’ve been sold farmed Tilapia that’s labelled as ocean-caught wild cod or something similar.

The Full Food Journey – Detailed Entirely

If what we are to understand here is correct, the way blockchain and IoT tracking technology will be able to trace any grown, raised, or cultivated food along its entire journey from farms to grocery store shelves will change the food industry in very revolutionary ways. Most notably, it may actually LOWER your grocery bill, by reducing retailers’ costs through streamlined supply chains and simplified regulatory compliance. All of this is according to a new study by Juniper Research in the United Kingdom.

The primary key to all of this would be blockchain’s immutable ledger, which when combined with IoT sensors and trackers, would create a supremely efficient food recall process.
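A few lines of code can illustrate why such a ledger is effectively immutable: each entry’s hash covers the previous entry’s hash, so altering any earlier record invalidates everything after it. This single-process sketch only shows the chaining idea – a real supply-chain ledger is distributed and consensus-driven – and the entries here are hypothetical:

```typescript
import { createHash } from "crypto";

// Toy append-only ledger: each entry's hash covers the previous
// entry's hash, so tampering with any record breaks the chain.
interface Entry {
  data: string; // e.g. an IoT sensor reading or shipment event
  prevHash: string;
  hash: string;
}

function appendEntry(chain: Entry[], data: string): Entry[] {
  const prevHash = chain.at(-1)?.hash ?? "genesis";
  const hash = createHash("sha256").update(prevHash + data).digest("hex");
  return [...chain, { data, prevHash, hash }];
}

function verifyChain(chain: Entry[]): boolean {
  let prevHash = "genesis";
  for (const entry of chain) {
    const expected = createHash("sha256").update(prevHash + entry.data).digest("hex");
    if (entry.prevHash !== prevHash || entry.hash !== expected) return false;
    prevHash = entry.hash;
  }
  return true;
}

let ledger: Entry[] = [];
ledger = appendEntry(ledger, "olive oil lot #7: pressed in Crete");
ledger = appendEntry(ledger, "lot #7: shipped, held at 14C");
console.log(verifyChain(ledger)); // true
ledger[0].data = "lot #7: pressed somewhere else"; // tamper with history
console.log(verifyChain(ledger)); // false
```

Pair each entry with IoT sensor data at the point of origin and every later reseller can verify, rather than trust, the claims on the label.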

But how would it take on the huge problem that is food fraud? For those not familiar, food fraud is when food or ingredients are mislabelled, diluted, or substituted.

Let’s take an example, any example. How about extra virgin olive oil? It may be on your local grocery’s shelf labelled as originating from Greece when it actually originated somewhere nowhere even close to the Aegean region. Blockchain’s ledger will be able to make end retailers aware of these fraudulent claims, which of course are almost exclusively used to raise prices on products to create greater profits for manufacturers and middlemen in the food industry.

Big Time Billions in Savings from Fraud

Ever greater adoption of blockchain and IoT tracking technologies in the supply chain industry is forecast to create $31 billion in food fraud savings globally by 2024. This will be made possible by tracking food across the supply chain, and tracking it so accurately that the information will be functionally indisputable. Substantial savings in food fraud will be realized as early as 2021, and compliance costs are expected to be reduced by 30% within 4 or so years from now.

As it is now, food tracking systems rely far too much on paper trails to manually track assets throughout the supply chain. These inefficient systems make it so that records can be lost or left unreconciled. Newer, more irrefutable records could instead be shared by all supply chain users, which would promote the overall visibility of the supply chain.

The Starbucks Example

Starbucks and Microsoft are taking the lead on this, creating a mobile ‘bean to cup’ tracking app that incorporates this blockchain / IoT merger perfectly and enables customers to see where their coffee was grown, and the full journey it took before filling their cup (ideally not a disposable one).

It’s helpful to look at this further and understand that companies often have to rely on intermediaries to perform these tracking tasks. The drawback is that this adds a level of complexity to the supply chain, resulting in increased inefficiency, fraud and waste. New blockchain and IoT supply chain tracking technology would eliminate nearly all of this very nicely.

Further, private or ‘permissioned’ blockchains can be created within a company’s four walls, or between trusted partners. Being able to administer them centrally while still retaining control over who has access to information on the network (in order to oversee anyone who may try to falsify it) is going to be hugely beneficial.

It is true that IoT devices shipped with goods link the physical and digital worlds primarily via location tracking sensors, along with temperature and humidity monitoring. Add blockchain to the equation and you’ve now got a place where the data can be stored and accessed by everyone on the ledger. Ledger users can also be segmented so sensitive business data isn’t exposed to competitors, the report said.

This new distributed ledger technology’s innate capabilities have not been lost on enterprises either. More pilots and proofs of concept are sprouting up all across the grocery and food industry. Look no further than the fact that 20% of the top 10 global grocers will be using blockchain by 2025, and it’s expected that the widespread utilization of this – and belief in its effectiveness – will increase consumer confidence and help with customer trust and loyalty.

Transparency in product nature and sourcing is going to be HUGE for customer satisfaction and return business in a multi-billion-dollar-a-year industry in North America.

Talking about Starbucks again, they’ve partnered with Microsoft to build a blockchain supply chain aimed at tracking coffee beans from farms to stores. Not stopping there, they’re also planning to create a mobile app that lets customers track the full supply chain journey of the beans that are the source of the coffee they’re enjoying at that very moment.

GrainChain, a blockchain-based supply chain service based in McAllen, Texas, is also being an early bird with all of this. Their new service is being piloted by roughly 10% of Honduras coffee growers – some 12,000 farmers – with an eye on going into full production around April 2020.

So the next time you bite into a food of any sort and question whether you’ve got what you’ve paid for, take some solace in the fact that new digital technologies may be making this kind of deception a thing of the past. That’s going to be good for EVERYONE, as we all spend a lot of our hard-earned money at the grocery store year in and year out, and especially if you’ve got children at home.