Getting Ready for Wi-Fi 6: What to Expect

Reading Time: 5 minutes

Most people aren’t familiar with Wi-Fi beyond understanding that it means a wireless internet connection. Those same people won’t be aware that over the last decade the digital world has moved from Wi-Fi 4 to Wi-Fi 5, and now Wi-Fi 5 is set to be replaced by Wi-Fi 6. What’s to be made of all this for the average person, who only knows that the Wi-Fi networks in their home and office are essential parts of their connected day-to-day, and that the Wi-Fi at Starbucks is pretty darn convenient as well?

The numeric chain that identifies a Wi-Fi standard is something they may well recognize, though. 802.11 is the underlying standard, but the Wi-Fi 4 you had from 2009 to 2014 is a different 802.11 from the one you’ve had with Wi-Fi 5 since then. What’s to come later this year with Wi-Fi 6 will be a different 802.11 again. Right, we get you – what’s the difference, exactly?

Here at 4GoodHosting, we’re like any quality Canadian web hosting provider in that the nature of our work and interests makes it so that we pick up on these things, if for no other reason than we’re exposed to and working with them on a regular basis. Much of the time these little particulars related to computing, web hosting, and digital connectivity aren’t worth discussing in great detail.

However, because Wi-Fi is such an essential and much-appreciated resource for all of us, we thought we’d take a look today at the ‘new’ Wi-Fi set to arrive later this year.

Wi-Fi 6: Problem Solver

When the average person looks at ‘802.11ac’, they won’t get the significance of it. The fact is, however, they should – and the generational naming arriving with Wi-Fi 6 is designed to be a solution to that problem.

What we’re going to see is the beginning of generational Wi-Fi labels.

Let’s make you aware that there is a collective body known as the Wi-Fi Alliance. They are in charge of deciding, developing, and designating Wi-Fi standards. As devices become more complex and internet connections evolve, the process of delivering wireless connections changes along with them.

As a result, Wi-Fi standards – the technical specifications manufacturers follow to create Wi-Fi products – need to be updated from time to time so that new technology can flourish and compatibility extends to nearly the entire range of devices out there.

As mentioned though, the naming of Wi-Fi standards is totally foreign to the average person if they ever try to figure out what that numeric 802-something chain stands for. The Wi-Fi Alliance’s response is to now simply refer to the number of the generation. Not only will this apply to the upcoming Wi-Fi 6, but it will also apply retroactively to older standards. For example:

  • 802.11n (2009) – Wi-Fi 4
  • 802.11ac (2014) – Wi-Fi 5
  • 802.11ax (expected late 2019) – Wi-Fi 6

It’s easy to see how this is a better classification approach, but there’s likely going to be a period of confusion where some products are labeled with the old code and some are just called Wi-Fi 4 or Wi-Fi 5, even though they’re functionally interchangeable as far as ‘type’ is concerned. Eventually, however, this should be resolved as older product labeling is phased out and everyone – or most people at least – becomes familiar with the new Wi-Fi classifications. In all honesty, if you pay even the slightest amount of attention you’ll begin to notice the difference without having to put much thought into it.
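
For anyone who wants to translate the old codes into the new labels programmatically, the mapping is small enough to hard-code. A trivial TypeScript sketch (the lookup itself is our own illustration, though the labels are the Wi-Fi Alliance’s):

```typescript
// The Wi-Fi Alliance's retroactive generation labels for recent standards.
const wifiGenerations: Record<string, string> = {
  "802.11n": "Wi-Fi 4",  // 2009
  "802.11ac": "Wi-Fi 5", // 2014
  "802.11ax": "Wi-Fi 6", // expected late 2019
};

console.log(wifiGenerations["802.11ac"]); // "Wi-Fi 5"
```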

How Wi-Fi 6 Will Be Different – And Better

The biggest impetus to create Wi-Fi 6 was to better accommodate the many new Wi-Fi technologies that have been emerging, and Wi-Fi 6 helps standardize them. Here are the most relevant developments, and exactly what they should mean for your wireless network.

Lower Latency

Lower latency is a BIG plus that’s going to come with Wi-Fi 6, and you’ll probably notice it right away. Reduced latency means shorter or no delay times as data is sent – similar to what ping rate and other such measurements capture. Low-latency connections improve load times and prevent disconnects and other issues more effectively. Wi-Fi 6 lowers latency compared to older Wi-Fi standards using more advanced technology like OFDMA (orthogonal frequency-division multiple access). Long story short, it’s going to pack data into a signal much more completely and reliably.

Speed

Wi-Fi 6 will also be faster – considerably faster compared to Wi-Fi 5. By offering full support for technologies like MU-MIMO, connection quality will improve for compatible mobile devices in a big way, and content delivery should be sped up accordingly. These improvements aren’t as dependent on your internet connection speed as you might think, either: they can and likely will improve the speed of your Wi-Fi data and let you receive more information, more quickly.

Now a question we imagine will come up for most of you – will all routers be able to work with the new 802.11ax standard? No, they won’t. If your router is especially dated, you should happily accept the fact it’s time to get a newer model. It will be 100% worth it, don’t have any doubts about that.

Wi-Fi 6 is also going to mean fewer dead zones, as a result of expanded beamforming capabilities being built into it. ‘Beamforming’, you say? That’s the name for the trick your router uses to focus signals on a particular device, and that’s quite important if the device is having difficulty working with a connection. The new WiFi 6 802.11ax standard expands the range of beamforming and improves its capabilities. Long story short again, ‘dead zones’ in your home are going to be MUCH less likely.

Improved Battery Life

Wi-Fi 6 is going to mean better battery life, and we’ll go right ahead and assume that’s going to be most appealing for those of you who are away from home for long stretches of the day and taking advantage of Wi-Fi connectivity fairly often while you’re out.

One of the new technologies that Wi-Fi 6 is set up to work with is called TWT, or target wake time. It assists connected devices in customizing when and how they ‘wake up’ to receive data signals from Wi-Fi. Devices are able to ‘sleep’ while waiting for the next necessary Wi-Fi transmission, and battery drain is reduced as a result. Your phone itself doesn’t sleep at all; only the parts of it that handle Wi-Fi do.

Everybody will like the idea of more battery life and less time spent plugging in to recharge.

Keep an Eye Out for the Wi-Fi 6 Label

How will you know if a router, phone or other device works with the new 802.11ax standard? Simply look for the phrase ‘Wi-Fi 6’ on packaging, advertisements, labels or elsewhere. Look up the brand and model # online if for some reason you don’t see it on the packaging. The Wi-Fi Alliance has also suggested using icons to show the Wi-Fi generation. These icons appear as Wi-Fi signals with a circled number within the signal.

Identifying these icons should help you pick out the right device. If not, you can of course always ask the person behind the till, and they should be knowledgeable about this (if they work there, you’d have to assume they would be).

Keep in mind that it’s devices from around 2020 onward that are mostly expected to be Wi-Fi 6, so we’ll have to wait a year or so before they start to populate the market.

 

Project Pathfinder for an ‘Even Smarter’ SIRI

Reading Time: 4 minutes

AI continues to be one of the most game-changing developments in computing technology these days, and it’s hard to argue there’s a more commonplace example of AI than the digital assistants that have nearly become household names – Apple’s Siri and Amazon’s Alexa. Even a decade ago many people would have stated their disbelief at the notion that it might be possible to make spoken queries to a digital device and then have it provide a to-the-minute accurate reply.

The convenience and practicality of AI has been a hit, and what’s noteworthy about it is the way that folks of all ages have taken to it. After all, it doesn’t require even the slightest bit of digital know-how to address Siri or Alexa and rattle off a question. Indeed, both tech giants have done a great job building the technology for their digital assistants. With regards to Siri in particular, however, it appears that Apple is teaming up with a company that’s made a name for itself developing chatbots for enterprise clients.

Why? To make Siri an even better digital assistant, and even more so the beacon of AI made accessible for everyday people.

Here at 4GoodHosting, like most Canadian web hosting providers we have the same level of profound interest in major developments in the computing, web hosting, and digital worlds that many of our customers do. This zeal for ‘what’s next’ is very much a part of what makes us tick, and this coming-soon improvement to Siri makes the cut as something worth discussing in our blog here today.

Proven Partnership

The aim is to make Siri much better at analyzing and understanding real-world conversations, and to develop AI models capable of handling their context and complexity. In order to do that, Apple has chosen to work with a developer they have a track record of success with. That’s Nuance, an established major player in conversation-based user interfaces. They collaborated with Apple on the original Siri, and so this is round 2.

As mentioned, Nuance’s present business is focused on developing chatbots for enterprise clients, and so they’re ideally set up to hit the ground running with Project Pathfinder.

Project Pathfinder

The focus of Project Pathfinder came from Apple’s belief that machine learning and AI can automate the creation of dialog models by learning from logs of actual, natural human conversations.

Pathfinder is able to mine huge collections of conversational transcripts between agents and customers, build dialog models from them, and use those models to inform two-way conversations between virtual assistants and consumers. Conversation designers are then better able to develop smarter chatbots. Anomalies in the conversation flow are tracked, and problems in the script can then be identified and addressed.
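
To make the idea concrete, here’s a deliberately simplified TypeScript sketch of what ‘learning dialog models from transcript logs’ can look like. The Turn type, intent labels, and transition-counting approach are our own illustration, not Nuance’s actual pipeline:

```typescript
// One turn of a transcribed conversation, already tagged with an intent.
type Turn = { speaker: "agent" | "customer"; intent: string };

// Count intent-to-intent transitions across a corpus of conversations.
function buildDialogModel(transcripts: Turn[][]): Map<string, Map<string, number>> {
  const transitions = new Map<string, Map<string, number>>();
  for (const conversation of transcripts) {
    for (let i = 0; i + 1 < conversation.length; i++) {
      const from = conversation[i].intent;
      const to = conversation[i + 1].intent;
      const row = transitions.get(from) ?? new Map<string, number>();
      row.set(to, (row.get(to) ?? 0) + 1);
      transitions.set(from, row);
    }
  }
  return transitions;
}

// Predict the most frequent follow-up intent observed in real conversations.
function nextIntent(model: Map<string, Map<string, number>>, current: string): string | undefined {
  const row = model.get(current);
  if (!row) return undefined;
  return [...row.entries()].sort((a, b) => b[1] - a[1])[0]?.[0];
}
```

A real system would use far richer statistical models, but even this toy version shows how logged conversations can suggest where a dialog should go next.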

Conversation Building

Voice assistants like Siri and Alexa have inner workings where your speech interacts with reference models. The models then try to resolve the intent of your question, and accurate replies depend on conversation designers doing two things: first, having learned from subject matter experts, and second, having learned from a LOT of trial and error related to query behavior.

As far as Apple’s concerned, giving the nod to Nuance and their conversation designers was the best way to go.

Pathfinder empowers them to build on their existing knowledge base with deep insights gathered from real conversational interactions that have taken place inside call centers. More to the point, however, the software doesn’t only learn what people are discussing, but it also makes determinations on how human agents guide users through the transactions.

Adding more intelligence to voice assistants/chatbots is made possible with this information, and so Siri is primed to build on her IQ in the same way. It certainly sounds promising!

Self-Learning Conversation Analytics

All you need to do is spend a short period of time with Siri or Alexa and you’ll quickly find that they definitely do have limitations. That’s a reflection of the fact that they are built for the mass market, and so they must handle much more diverse requests than chatbots that are primarily built for business. This means they come with a lack of focus, and it’s more difficult to design AI that can respond sensibly to spoken queries on the thousands of different topics around the globe. Then you have follow-up queries too.

The queries posed to virtual assistants are open-ended human questions 95+% of the time, and as such they’re less focused and less predictable. So how do you build AI that’s more capable of handling the kind of complex enquiries that characterize human/machine interactions in the real world?

The answer to that is to start with call center chatbots, and that’s what the Pathfinder Project is doing. It will accelerate development of spoken word interfaces for more narrow vertical intents – like navigation, weather information, or call center conversation – and by doing so it should also speed up the development of more complex conversational models.

It will make these machines capable of handling more complex conversations. It will, however, take some time to come to realization (projected for summer 2019). Assuming it’s successful, it will show how conversational analytics, data analysis, and AI have the ability to empower next-generation voice interfaces. And with this we’ll also be able to have much more sophisticated human/computer interactions with our virtual assistants.

Seeing the power of AI unlocked with understood conversational context and intent – rather than primarily asking Siri or Alexa to turn the lights off – promises to be really helpful and a very welcome advance in AI for all of us.

 

Chromium Manifest V3 Updates May Disable Ad Blockers

Reading Time: 4 minutes

It’s likely that a good many of you are among the thousands upon thousands of people who have an ad blocker installed for your web browser of choice. Some people use them simply to avoid the nuisance of having to watch ad after ad, and it’s people like these that have led some sites to insist you ‘whitelist’ them before you can proceed into the site. That’s perfectly understandable, as those paying advertisers are how the website generates income for the individual or business.

For others of us, however, who spend a great deal of the working day researching and referencing online, having to watch ads before getting to the content we need is a real drain on productivity. For us, an ad blocker is much more a tool of necessity than of convenience. Still, we get caught up in more than a few sites that insist on being whitelisted too. For me, my ad blocker is a godsend, and I don’t whitelist any website or disable my ad blocker for any of them.

Here at 4GoodHosting, part of what makes us a good Canadian web hosting provider is having built up an insight into what really matters to our customers. The bulk of them are people who use the Information Superhighway as a production resource rather than web ‘surfers’ for whom it’s more of an entertainment one. That’s why today’s news is sure to be very relevant for most of our customers.

Weakened WebRequest APIs

Some of you may not know how your ad blocker works, and that’s perfectly normal – as long as it does its job, you don’t really need to know. Chromium is the open-source browser project that underpins Google Chrome, and changes made to it flow into what has become most people’s web browser of choice.

However, Chromium developers have shared in the last few weeks that among the updates they are planning in Manifest V3 is one that will restrict the blocking version of the webRequest API. The alternative they’re introducing is called the declarativeNetRequest API.

After becoming aware of it, many ad blocker developers expressed their belief that the introduction of the declarativeNetRequest API will mean many already existing ad blockers won’t be ‘blocking’ much of anything anymore.

One industry expert stated on the subject, “If this limited declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two existing and popular content blockers like uBO and uMatrix will cease to be functional.”

What is the Manifest V3 Version?

It’s basically a mechanism through which specific capabilities can be restricted to a certain class of extensions. These restrictions are indicated in the form of either a minimum or maximum version.

Why the Update?

Currently, the webRequest API allows extensions to intercept requests and then modify, redirect, or block them. The basic flow of handling a request using this API is as follows (a rough sketch of the blocking form appears after the list):

  • Chromium receives the request / queries the extension / receives the result
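
As a rough sketch of that blocking form (assuming the standard Chrome extension API, with a stand-in filter hostname), a Manifest V2 ad blocker looks something like this in TypeScript:

```typescript
// Manifest V2 style: the extension sees every request and answers
// synchronously whether to block it. "ads.example.com" is a stand-in filter.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Returning { cancel: true } blocks the request outright.
    return { cancel: details.url.includes("ads.example.com") };
  },
  { urls: ["<all_urls>"] }, // intercept every request
  ["blocking"]              // the blocking form Manifest V3 restricts
);
```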

However, in Manifest V3 the blocking form of this API will be limited quite significantly. The non-blocking form of the API, which permits extensions to observe network requests but not modify, redirect, or block them, will not be restricted. In addition, the exact limitations to be placed on the webRequest API have yet to be determined.

Manifest V3 is set to make the declarativeNetRequest API the primary content-blocking API in extensions. This API allows extensions to tell Chromium ahead of time what to do with a given request, instead of Chromium forwarding the request to the extension. This enables Chromium to handle a request synchronously. Google insists this API is overall a better performer and provides better privacy guarantees to users – the latter of which is of course very important these days.
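
Under the declarative model, rules are registered up front and Chromium applies them itself. A minimal sketch of what a rule can look like (the exact schema was still being finalized at the time of writing, and the filter contents here are hypothetical):

```typescript
// Manifest V3 style: the rule is declared ahead of time (e.g. in a rules file
// referenced from the manifest), and Chromium evaluates it per request
// without ever consulting the extension's code.
const blockAdsRule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: {
    urlFilter: "ads.example.com",       // stand-in filter pattern
    resourceTypes: ["script", "image"], // only these request types
  },
};
```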

Consensus Among Ad Blocker Developers and Maintainers?

When informed about this coming update, many developers were concerned that the change would end up completely disabling all ad blockers. The concern was that the proposed declarativeNetRequest API would make it impossible to develop new and functional filtering-engine designs. This is because the declarativeNetRequest API is no more than the implementation of one specific filtering engine, and some ad blocker developers have commented that it’s very limited in its scope.

It’s also believed that with the declarativeNetRequest API, developers will be unable to implement other features, such as blocking media elements that are larger than a set size or disabling JavaScript execution through the injection of CSP directives, among others.

Others are making the comparison to Safari’s content-blocking API, which essentially puts limits on the number of admissible rules. Safari introduced that API fairly recently, and the belief is that this is the same direction Google is now taking. Many seem to think that extensions written against that API are usable, but still fall well short of the full power of uBlock Origin. The hope is that this first version of the API won’t be its final form.

Dedicated IP Addresses and SEO

Reading Time: 5 minutes

Even the most layman of web endeavourers will be familiar with the acronym SEO. We imagine further that there are very few if any individuals anywhere who don’t know it stands for search engine optimization, or fail to understand just how integral SEO is to success in digital marketing. Most people with a small business that relies on its website for maximum visibility with prospective customers will hire an SEO professional to optimize their site. That continues to be highly recommended, and for 9 out of 10 people it is NOT something you can do effectively on your own, no matter how much you’ve read online or how many YouTube videos you’ve watched.

Here at 4GoodHosting, we are like any other top Canadian web hosting provider in that we offer SEO optimization services for our clients. Some people will think that choosing the best keywords and having them at the ideal density is most integral to good SEO, and that’s true, by and large. But there are a number of smaller yet still significant factors that influence SEO, and they’ll be beyond the wherewithal of most people.

Whether websites benefit from a Dedicated IP address rather than a Shared IP address isn’t something you’ll hear discussed regularly. When you learn that the answer is yes, they do – and exactly why – it’s a switch many people will want to consider if they currently have a Shared IP address. Let’s have a look at why that is today.

What Exactly Is an IP address?

For some, we may need to start at the very beginning with all of this, so let’s begin by defining what exactly an IP address is. Any device connected to the Internet has a unique IP address, and that’s true whether it’s a PC, laptop, mobile device, or your web host’s server. It’s made up of a string of four numbers, each ranging from 0 to 255. Here’s an example of one:

1.25.255.255

This numerical code makes the machine you are using known. Once it’s identified – and it has to be – the Internet is then able to send data to it, and you can access the hundreds of thousands of websites along the Information Superhighway.
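
If you’re curious which IP address a given site resolves to, it’s a quick check from Node.js (the hostname below is just a placeholder):

```typescript
import { lookup } from "node:dns/promises";

// Resolve a hostname to the IP address the internet uses to reach it.
// On shared hosting, many unrelated sites resolve to this same address.
const { address } = await lookup("example.com");
console.log(address); // e.g. "93.184.216.34"
```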

What’s a Shared IP address?

In most instances, the server your web host uses to host your site will be a single machine with a matching single IP address. For most people – and nearly all who go with the most basic hosting package without giving it much thought – you’ll be set up in an arrangement where the server is hosting thousands of websites like yours. It’s not ‘dedicated’ to you and your site exclusively.

Instead, all of the websites hosted on it will be represented by the single IP address allocated to the web host’s server. Now if your website is more of a personal venture or hobby and it’s NOT going to be a leverage point in trying to secure more business, shared hosting will probably be fine. Alternately, if page rankings are a priority for you, then shared hosting may be putting you at a disadvantage.

The solution? A dedicated IP address for your Canadian website. If you need one, we can take care of that for you quickly and fairly easily. But we imagine you’ll need more convincing, so let’s move now to explaining what constitutes a Dedicated IP address.

The Dedicated IP Address

A dedicated IP address often involves having your own server, with that server hosting only one website – yours. It is common, however, for more than one site to reside on a given server. A Dedicated IP address is an IP address allocated to a single website, instead of one being assigned to the server and representing every website hosted there by default.

The Purpose of Dedicated IP Addresses

The primary appeal of Dedicated IP addresses is that they make large ecommerce operations more secure, particularly as regards sensitive data like credit card numbers and the like. On a more individual scale, though, a dedicated IP address is superior for SEO interests as well.

Why is that? Let’s list all of the reasons here:

1. Speed

When you share space, you share resources, and as far as shared web hosting and shared IP addresses are concerned, that means you are sharing bandwidth. The long and short of it is that all those other sites on the same server will be slowing yours down. That might be a problem in itself, but if it isn’t, the way slow site speeds push you further down Google’s rankings will be.

Adding a unique IP address to your site will not automatically mean it loads faster, but migrating to a Dedicated Server with a Dedicated IP address definitely will. Sites with a Dedicated IP address are faster, more reliable, and more secure, and that’s a big deal.

2. SSL

For nearly 5 years now Google has been giving preference to websites that have added an SSL certificate with a 2048-bit key. The easiest way to see whether that’s been done is to see whether the site’s URL has changed from HTTP to HTTPS. SSL sites typically utilize unique IP addresses. Google continues to insist that SSL impacts less than 1% of searches, but it’s a factor nonetheless, and is another benefit of a Dedicated IP address.

SSL protects your website’s traffic as it crosses public networks, and it can make websites operate marginally faster as well, since browsers support newer, quicker protocols like HTTP/2 only over encrypted connections – meaning visitors get a faster response from the website. The majority of ecommerce sites with a Dedicated IP address will also have an SSL cert.
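
If you want to confirm which certificate a site is actually serving, one quick way from Node.js looks something like this (the hostname is again a placeholder):

```typescript
import * as tls from "node:tls";

// Open a TLS connection and print the certificate the server presents.
const socket = tls.connect(443, "example.com", { servername: "example.com" }, () => {
  const cert = socket.getPeerCertificate();
  console.log(cert.subject, cert.valid_to); // who it's issued to, and expiry
  socket.end();
});
```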

3. Malware

Malware is software that’s designed and disseminated for the explicit purpose of throwing wrenches into the gears of a working web system. Unfortunately, the thousands of websites that may be on a shared server drastically increase the risk of being exposed to malware if you’re one of them. Further, when you share an IP address with any site that’s been infected with malware, your site is actually penalized despite the fact it’s not you who’s been infected.

In these cases, you’ll be best served by going with a Dedicated IP address and choosing a more reliable Canadian web hosting provider that has measures in place to prevent malware from making its way into the servers in the first place. A dedicated IP means you’re standing alone, and you’re regarded accordingly.

How Do I Get a Dedicated IP Address?

If you’re with us here at 4GoodHosting, all you need to do is ask. We’ve been setting our customers up with Dedicated IP addresses for quite some time now, and you’ll find that when you do so through us it’s not nearly as pricey as you had expected it to be.

It’s highly recommended for any ecommerce site, or any site utilized for strategic business aims, and it’s fair to say that you really can’t go wrong moving to a dedicated server if you’ve made the commitment to do anything and everything to protect your SEO and enjoy the same page rankings moving forward. The vast majority of people see it as a wise investment, and of course you always have the option of switching back to a shared hosting arrangement if over time you don’t see any real difference or benefit for you.

Google Chrome Solution for ‘History Manipulation’ On Its Way

Reading Time: 3 minutes

No one will need to be convinced of the fact that there’s a massive number of shady websites out there designed to ensnare you for any number of no-good purposes. Usually you’re rerouted to them when you take a seemingly harmless action, and then often you’re unable to ‘back’ yourself out of the site once you’ve unwillingly landed on it. Nobody wants to be on these spammy or malicious pages, and you’re stressing out every second longer that you’re there.

The well-being of web surfers who also happen to be customers or friends here at 4GoodHosting is important to us, and being proactive in sharing all our wisdom about anything and everything related to the web is a part of what makes us one of the best Canadian web hosting providers.

It’s that aim that has us sharing this news with you here today – that Google understands the unpleasantness that comes with being locked into a website like this, and has plans to make it remediable pretty quickly.

The first time something like this occurs you’ll almost certainly be clicking on the back button repeatedly before realizing it’s got no function. Eventually you’ll come to realize that you’ve got no other recourse than to close the browser, and oftentimes you’ll quit Chrome altogether ASAP and then launch it again for fear of inheriting a virus or something of the sort from the nefarious site.

How History Manipulation Works, and what Google is Doing About It

You’ll be pleased to hear the Chrome browser will soon be armed with specific protection measures to prevent this from happening. The way the ‘back’ button is broken here is something the Chrome team calls ‘history manipulation’. What it involves is the malicious site stacking dummy pages onto your browsing history, and these work to fast-forward you back to the unintended destination page you were trying to get away from.
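
The underlying trick relies on the standard History API that any page can call. A simplified sketch of what an abusive page effectively does (real offenders are sneakier about it):

```typescript
// Runs in the malicious page. Each pushState call adds a history entry
// without navigating, so the Back button has to chew through all of them –
// and each step back just lands on another entry for the same page.
for (let i = 0; i < 50; i++) {
  history.pushState({ dummy: i }, "", `#trap-${i}`);
}

// Some pages go further and re-arm the trap whenever you try to leave.
window.addEventListener("popstate", () => {
  history.pushState({ dummy: "again" }, "", "#trap");
});
```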

Fortunately, Chrome developers aren’t letting this slide. There are upcoming changes to Chromium’s code which will facilitate the detection of these dummy history entries and then flag sites that use them.

The aim is to allow Chrome to ignore the entirety of these false history entries to make it so that you’re not buried in a site that you had no intention of landing on and the back button functions just as you expect it to.

This development is still in its formative stages, and we should be aware that these countermeasures aren’t even in the pre-release test versions of Chrome yet. However, industry insiders report that testing should begin within the next few weeks or so, and all signs point towards the new feature being part of the full release version of the web browser.

In addition, this being a change to the Chromium engine means it may eventually benefit other browsers based on it. Most notable of these is Microsoft Edge, meaning the frustrations of a paralyzed back button should become a thing of the past for either popular web browser. So far there’s no industry talk of Apple doing the same for Safari, but one can imagine they’ll be on top of this in much the same way.

Merry Christmas from 4GoodHosting

Given it’s the 24th of December, we of course would like to take this opportunity to wish a Merry Christmas to one and all. We hope you are enjoying the holidays with your family and that this last week of 2018 is an especially good one. We can reflect on 2018, and look forward to an even more prosperous year in 2019.

Happy Holidays and best wishes, from all of us to all of you!

Cloudflare is changing the game

Reading Time: 2 minutes

In a world where Google, Amazon and Facebook dominate the tech space, Cloudflare has stolen away the headlines for the betterment of the internet with its recent announcement. The company announced on its 8th birthday that it would be launching a domain registrar, and it is unlike any we have seen before.

Cloudflare, to the shock of many in the industry, has decided not to charge anything above the wholesale cost it pays to register a domain. That is right; this multi-billion dollar company has chosen to not make a single penny off of your domain registration. In a world where the average Canadian spends between $10 and $15 per domain, this is remarkable.

Cloudflare is not a small company – in terms of the volume of web traffic it handles, it plays in the same league as the giants. Its core business sees it acting as a content distribution platform and secure infrastructure vendor for millions of clients across the globe. It has also recently announced it is on a path to an IPO, and has raised hundreds of millions of dollars in preparation for this. So why do this?

Cloudflare is a unique company in the tech and capital markets, as it is doing two things differently from any other major brand. First, the company does not see the internet as a property you can corner, and instead looks to promote a free, equal and open internet, much like the values of Internet 1.0. Secondly, the company is doing things for the good of the internet, and although this might ultimately fail once the company scales, it is still a refreshing stance from a larger company in the tech space.

This does leave one important question for consumers: what does this mean for the cost and registration of their domain? Well, it is a little up in the air. The Cloudflare system is still being tested and should be live within the month, but it looks to be set up similarly to every other registrar system. If you are up for renewal, it might be time to take a look around and see if you can benefit from using this new system. As well, for those who are operating hosting or other third-party services, your overall cost to get a website up should start to drop if you choose Cloudflare as your registrar option.

However, this does still leave some questions. Will the other registrar companies like GoDaddy also drop their prices, or will they continue with the same old pricing going forward? As well, if you are looking for other countries’ TLDs or other domain names, will Cloudflare offer those? Finally, will Cloudflare provide an easy-to-use transfer option? These are all tough questions, and we will need to wait and see how Cloudflare’s announcement changes the industry over the next few short weeks.

What are your thoughts? Is this just a bump in the road for the major registry options on the web, or the start of more competitive space for those looking to register domains?

The Dangers of Abandoned Domain Names

Reading Time: 3 minutes

Many people will have a domain name they once owned that eventually lost its value and was discarded. Most of those folks won’t have given much thought to it after declining to renew it with their web hosting provider, and 9 times out of 10 it’s true that nothing more will come of it. However, cyber security experts are now letting people know that an abandoned domain name can allow cybercriminals to gain access to email addresses of the company or individual that previously owned it.

Here at 4GoodHosting, we’re like any other Canadian web hosting provider in the way we claim domain names for clients across hundreds of different industries. Many of them will have that same domain name to this day, but some will have abandoned one or more because they found something better or simply because the domain name wasn’t required anymore for whatever reason.

Here’s what happens when a domain name expires. It goes into a reserved state for a certain time, during which the most recent owner has the ability to reclaim it. If and when that time expires, it becomes available for re-registration by anyone, with no additional cost or any identity or ownership verification. Now while it is true that SEO professionals and spam trap operators are good at keeping track of abandoned domain names for various purposes, many former owners will not know those names are a potential security risk. So let’s discuss this here today.

Insider Access Information

Look no further for a pressing concern than the fact that the new owner of the domain name can take control of the email addresses of the former owner. The email services can then be configured to receive any number of email correspondences that are sensitive in nature. These accounts can then be used to reset passwords to online services requiring sensitive info like personal details, financial details, client-legal privileged information, and a lot more.

Recently this has been more in the news because of research performed on domain names abandoned by law firms in Australia, cast off as a result of different mergers and acquisitions between companies. These law firms had stored and processed massive amounts of confidential data, and when the domain names were abandoned they still left breadcrumbs that could possibly lead the new owners of those domains to sensitive information.

The possibility of this being VERY problematic should be easy to understand. Email is an essential service in every business, and if a company lost control of its email it could be devastating, especially considering sensitive information and documents are often exchanged over email between clients, colleagues, vendors and service providers due to the simple convenience of doing so.

The study Down Under found that an average of nearly a thousand ‘.au’ domain names (the country-code TLD for Australia) expire every day, and we can assume that number is considerably larger here in North America. Further, the list of expiring domain names is typically published in a simple CSV file format, accessible to anyone who wants to see which domain names have expired.
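
That openness cuts both ways: the same list lets a security-minded admin watch for their own cast-off domains before someone else claims them. A rough sketch (the file name and CSV layout here are hypothetical):

```typescript
import { readFileSync } from "node:fs";

// Domains our organization once owned and has since let go.
const ourOldDomains = new Set(["old-brand.com.au", "merged-firm.com.au"]);

// Hypothetical layout: one expiring domain per line, name in the first column.
const expiring = readFileSync("expiring-domains.csv", "utf8")
  .split("\n")
  .map((line) => line.split(",")[0]?.trim().toLowerCase())
  .filter(Boolean);

// Flag any of our abandoned names on the public expiry list, since whoever
// re-registers them can start receiving email sent to our old addresses.
for (const domain of expiring) {
  if (ourOldDomains.has(domain)) {
    console.warn(`About to become re-registrable: ${domain}`);
  }
}
```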

Communications stored in the cloud are especially at risk. If all the messages aren’t deleted from these cloud platforms, they may remain accessible to the new owner of the domain, and then you have the potential for a leak of sensitive info.

Of further concern is the fact that if that email address has been used to sign up for an account on social media platforms like Facebook, Twitter, or LinkedIn, etc. then the domain’s new owner can reset the passwords and gain access to those accounts.

To avoid this scenario, companies should ensure that a domain name remains registered indefinitely, even if it has been abandoned for active use. Any email notifications that may contain confidential information should also be unsubscribed from before the domain is let go.

In addition, disconnecting or closing the accounts that were created using business emails is recommended. Enable two-factor authentication for all the online services that allow it as well, and be sure to do this as soon as possible and leave it in place indefinitely. This is good advice not only for businesses or ventures that make use of multiple domains and have moved on from plenty in the past; it’s good advice for anyone in today’s day and age of cyber threats.

Linux or Windows

Reading Time: 4 minutes

The vast majority of websites are hosted on either Linux or Windows OS servers, and the market share is now shifting towards Linux according to a recent report from W3Techs. Surveys indicate that Unix-family servers make up some 66% of all web servers, while Windows accounts for just over 33%. For most people this isn’t going to be something they’ll give any consideration to, and it’s true that websites with standard HTML pages will be served equally well by either OS.

These days greater numbers of websites have been ‘revamped’ since their inception and now feature dynamic design elements that enhance the UX for viewers. If you are planning to design or redesign your website to be much more engaging – working with forms and executing web applications – both systems will serve your needs.

Linux and Windows are pretty much neck and neck when it comes to functionality. Each works with a number of frameworks and front end programming languages, and have impressive features when it comes to hosting. Linux and Windows handle data in the same way too, and both sport easy, convenient and fast FTP tools to serve a wide range of file management functions.

Nine times out of 10 you’ll be at your best with either, and at 4GoodHosting our Linux and Windows web hosting specs make us one of the best Canadian web hosting providers with data centers in both Eastern and Western Canada.

Our standard web hosting is via ultra-fast, dual-parallel processing Hexa Core Xeon Linux-based web servers with the latest server software installations, and our Windows hosting includes full support for the entire spectrum of frameworks and languages: ASP.NET, Frontpage, Choice of MySQL, MSSQL 2000 or 2005 DB, ATLAS, Silverlight, ASP 3.0, PHP4 & PHP5, and Plesk.

Let’s have a look at the differences between the two.

Price

The most significant difference between Linux and Windows web hosting is the core operating system on which the server(s) and user interface run. Linux uses some form of the Linux kernel, and these distributions are usually free. There are some paid distributions, Red Hat being a good one, which come with a number of special features aimed at better server performance. With Windows you’ll have a licensing fee, because Microsoft develops and owns its OS, and hardware upgrades can be a necessity too. We like Linux because over its lifespan a Linux server generally costs significantly less than a similar Windows-based one.

Software Support

Before choosing an OS, you’ll also have to consider the script languages and database applications required to host the website on it. If your website needs Windows-based scripts or database applications to display correctly, then a Windows web hosting platform is probably best for you. Sites developed with Microsoft ASP.NET, ASP Classic, MSSQL, MS Access, or SharePoint technologies will also head over to the Windows side.

Conversely, if your website requires Linux-based script or database software, then a Linux-based web hosting platform is going to be your best choice. Plus, anyone planning to use Apache modules, NGINX or development tools like Perl, PHP, or Python with a MySQL database will enjoy the large support structure for these formats found with Linux.

Control Panel And Dev Tools

Another consideration with these two web hosting options is that Linux offers control panels like cPanel or WHM, while Windows uses Plesk. There are fundamental differences between them. cPanel has a simple, user-friendly interface, and users can download applications such as WordPress, phpBB, Drupal, Joomla, and more with super simple one-click installs. Creating and managing MySQL databases and configuring PHP is easy, and cPanel automatically updates software packages too. Plus, we like our Linux-hosted websites for people new to the web: cPanel makes it easy for even people with no coding knowledge to create websites, blogs, and wiki pages. You can get tasks done faster without having to learn the details of every package installed.

Plesk is very versatile in that it can help you run the Windows equivalent of the Linux, Apache, MySQL, and PHP stack. Plesk also supports Docker, Git, and other advanced security extensions. Windows servers have many unique and practical tools available as well, such as the Microsoft Web Platform Installer (Web PI) for speedier installation of the IIS (Internet Information Services) web server, MSSQL, and the ASP.NET stack.

Because it’s been in the field longer, there are loads of open-source Linux applications available online. Windows hosting has fewer apps to choose from, but you have the security of knowing they are all from vetted, licensed providers, which can speed up database deployment.

Performance And Security

A reputable Canadian web host can be expected to secure your website within its data centres, but online attacks on Windows servers over the last few years suggest they may be more of a red flag here than Linux servers. That’s not to say that Linux – or any OS that has been or ever will be developed – won’t have any security issues. Solid security is a product of good passwords, applying necessary patches, and sound server administration.

Further, Linux servers are pretty much universally considered superior to Windows for stability and reliability. They rarely need to be rebooted, and configuration changes rarely require a restart. Running multiple database and file servers on Windows can make it unstable. Another small difference is that Linux file names are case-sensitive and Windows file names are not.
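
That last difference is easy to demonstrate with a quick Node.js check (the file names are arbitrary):

```typescript
import { existsSync, writeFileSync } from "node:fs";

// On a Linux filesystem these are two different names; on a default
// Windows (NTFS) volume the second check also returns true.
writeFileSync("readme.txt", "hello");
console.log(existsSync("readme.txt")); // true on both systems
console.log(existsSync("README.TXT")); // false on Linux, true on Windows
```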

Penguin for the Win

Your choice of server should be dictated by the features & database application needed for the proper functioning of your hosting or website development project. Those of you working on your own external-facing site and looking for a combination of flexibility and stability will be set up perfectly with Linux and cPanel. Those working in a complex IT environment with existing databases and legacy applications running on Windows servers will be best served being hosted on a Windows OS server.

What’s in a ‘Tweet’: Understanding the Engagement-Focused Nature of Twitter’s Algorithm

Reading Time: 3 minutes

It would seem that of all the social media platforms, Twitter is the one businesses struggle with most in understanding just how to harness it for effective promotion. The common assumption is that any shortcomings are related to your use of the ever-ubiquitous #hashtag, but in fact hashtags aren’t nearly as pivotal as you might think.

Here at 4GoodHosting, we’ve done well in establishing ourselves as a premier digital marketing agency in Canada and a part of that is sharing insights on how to get more out of your online marketing efforts. Social media is of course a big part of that, and as such we think more than a few of you will welcome tips on how to ‘up’ your Twitter game.

It’s easy to forget that these social media platforms have algorithms working behind them, and working quite extensively. What’s going on behind the screen controls and narrows down what you actually see on your timeline.

For example, let’s say you have specific political affiliations. The algorithms ensure that the majority of the tweets you’ll see will be linked to that party’s views. Or perhaps you’re especially into sports. If so, plenty of sports news sources will be all over your timeline. Oppositely, if you dislike something then that theme will slowly end up disappearing over the course of the week or beyond.

All of this is a reflection of how ALL social media platforms, Twitter included, are using more and more complex algorithms to satisfy their user base and deliver content each user is likely to find favourable.

So this is what you’ll need to know about Twitter’s algorithms, and the best ways to use them to your advantage.

Keep Your Eyes Peeled For These

There’s no disputing the fact that Twitter has faded quite considerably in popularity and the strength of its reach. In response, Twitter is really narrowing its focus onto engagement, and a key way to increase engagement is to increase the relevance of the posts users see.

Directly from Twitter’s engineering blog, here are a few of the factors that decide whether a Tweet is sufficiently engaging and thus worthy of ‘appearances’ (a toy scoring sketch follows the list):

  • The recency of the tweet, its likes and retweets, and other things such as attached media
  • Whether you have previously liked or retweeted the author of the tweet
  • Your previous positive interactions with certain types of tweets
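
Twitter hasn’t published an exact formula, but a toy scoring sketch shows the general shape of an engagement-ranking function. Every feature name and weight below is our own invention for illustration:

```typescript
// Hypothetical relevance score for ordering a timeline. The inputs mirror
// the factors from Twitter's engineering blog; the weights are made up.
type TweetFeatures = {
  hoursOld: number;           // recency
  likes: number;
  retweets: number;
  hasMedia: boolean;          // attached media
  engagedWithAuthor: boolean; // you've liked/retweeted this author before
  matchesLikedTypes: boolean; // prior positive interaction with this type
};

function relevanceScore(t: TweetFeatures): number {
  let score = t.likes * 1.0 + t.retweets * 2.0;
  if (t.hasMedia) score += 5;
  if (t.engagedWithAuthor) score *= 1.5;
  if (t.matchesLikedTypes) score *= 1.3;
  // Decay older tweets so fresher content floats upward.
  return score / (1 + t.hoursOld);
}
```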

Twitter will then recommend tweets and accounts for you to engage with over the next couple of days. Depending on your responses to those recommendations, it will adjust the content you see to better reflect how it gauges your preferences.

What’s easy to conclude is that users themselves play a predominant factor in what’s going to be seen on their timelines. Liking or using the “I don’t like this” button once or twice goes a long way in this regard.

By this point it begs the question: is Twitter’s algorithm perhaps a little too simple? It is definitely not as complex as those of other media platforms such as Facebook, but the benefit in that is that it is easier to work to your advantage. One example is the way smaller companies may tag a larger brand or company in a tweet that is not directly associated with them; Twitter’s algorithm allows this to be a very effective means of getting increased exposure.

Gain Your Advantage

Generating engagement with your tweets is a reliable way to boost exposure and put yourself on top of the algorithm game. Engaging your audience and boosting exposure keeps you ‘in’ the conversation, and seeing to it that you’re using the correct hashtags will ensure you’re being talked about.

Smaller companies can benefit from tagging large companies in their tweets to gain exposure, and that’s especially advisable if the company relates to what you’re talking about. Sure, it only works to a certain degree, but gaining followers by any means possible is always a plus.

Putting all this talk about engagement into perspective, it’s important to understand how to spark the right sorts of conversation. Asking random questions will make it look forced, while if you don’t interact at all you may see a dip in exposure. Find a way to be genuine in your responses, and adhere faithfully to what you’ve defined as your brand’s voice.

Federal Government Taking Canada.ca Out of Country for Web Hosting

Reading Time: 3 minutes

The top dogs in the world of web hosting all reside south of the 49th parallel, and their sway over consumers and the way they command the lion’s share of web hosting business is well established down in America. Recent news from the Canadian government, however, suggests that their influence may be making perhaps the biggest of inroads up here in Canada too.

Here at 4GoodHosting, in addition to being a quality Canadian web hosting provider, we’re also keenly interested in developments that are both related to web hosting AND tied directly to any of the different offshoots of the business as it pertains to Canada as a whole. As such, the Canadian Government’s announcement last month – that it was moving web hosting for the departmental and agency websites under the Canada.ca domain to Amazon Web Services in the U.S. – definitely caught our attention.

March of 2015 saw the government grant a contract to Adobe Corp. for a fully hosted service with a content delivery network, analytics, and hosting environments. Adobe then contracted Amazon Web Services in the U.S. to handle all of the government’s website data.

That contract has been extended by one year, and the value of it has grown exponentially – to $9.2 million.

It would seem that Canada.ca is now no longer as Canadian as it sounds. With all the reputable and reliable web hosting providers in Canada that would have no problem accommodating such a busy client, it’s worth taking a look at why the Federal Government would make this move.

Related to the Cloud & ‘Unclassified’

The Government recently produced a draft plan for cloud computing that recommended that data deemed to be “unclassified” by the government — meaning it’s seen as being of no potential harm on a national or personal level — can be stored on servers outside of Canada.

There is, however, some debate as to whose responsibility it is to determine what information should be considered sensitive. Further, when information is deemed sensitive, it remains unclear how and where that data will be stored. Of course, this raises some obvious questions on the part of Canadians who want to know that personal data is always entirely secure.

Spokespersons have reported that no sensitive information is being stored on the American servers, adding further that as more departments join the Canada.ca website – Canada Revenue Agency being the most notable – there will need to be workarounds implemented to ensure that sensitive data is not relayed on to the American servers.

Cloud Makes $ and Sense for Canada

The appeal of cloud computing for the Canadian government is that it will help them get better value for taxpayers’ dollars, become more streamlined in its operations, as well as better meet the evolving needs of Canadians.

Managed Web Services will be used solely to deliver non-sensitive information and services to visitors. Similarly, secure systems such as a person’s ‘My CRA’ Account will continue to be hosted on separate platforms and servers within the current GC network.

The previous Conservative government spearheaded the move to Canada.ca in 2013, and it was regarded as being a key part of the federal government’s technological transformation. The idea was to help the government save money and become more efficient by creating better efficiencies between the technological needs of the 90 departments and agencies that will be a part of Canada.ca very soon. Prior to all of this, each of the entities had their own website that came with a URL that the majority of people found very hard to remember.

All departments have until December 2017 to move themselves over to the new Canada.ca website.