Windows 7 End Time Reminders On Their Way for PCs Starting Next Month

It would appear that a good many personal computers out there are still running Windows 7. If they weren’t, we can assume Microsoft wouldn’t need to take the action it will be taking soon – sending out reminders to PC users still running this admittedly archaic OS that the end is nigh. Microsoft is calling them ‘courtesy reminders’, and while the message doesn’t go so far as to spell out what’s really being said – update your operating system or your device will become by and large inoperative – it certainly implies as much.

Now admittedly, as a leading Canadian web hosting provider we’re the type to update our operating systems just as soon as the opportunity presents itself, but we can also imagine that many of our clients will have friends or family members who don’t need to be equipped with the latest and greatest in computing technology. As such, this might be a prompt to tell those people not to ignore anything that pops up on their screen talking about the end of Windows 7.

So what’s all this going to involve? Not a whole lot really, but it’s worthwhile to take something of a longer look at why this is necessary and what PC users can expect if they’re still rocking Windows 7.

Friendly, yet Persistent Reminders

Microsoft has stated that starting in April, Windows 7 users can expect to see a notification appear on their Windows 7 PCs a number of times over the months ahead. The hope is that one or more of them will be all it takes to make you aware that Windows 7 will officially be unsupported as of January 14, 2020.

As you might expect, users will be able to reject future notifications by selecting a ‘do not notify me again’ option, or if they’d prefer to know a little bit more about why their favourite OS (we have to assume there’s a reason they’ve resisted updating for so many years) is going the way of the Dodo Bird, there’ll also be a ‘learn more’ button.

FWIW, the same thing happened with Windows XP a few years back. That OS went extinct fairly smoothly, so the expectation is that the same thing will happen here. Just in case that’s not the way it goes, however, Microsoft is trying to be proactive. The Windows 7 notices will appear eight months earlier than those XP warnings.

One big difference: it was only in March of 2014, just a month before XP’s expiration, that Microsoft began placing on-screen reminders of the impending date, and after that they came monthly. Should Microsoft follow the same schedule and cadence, it should begin pushing notices to Windows 7 PCs on April 14 and then repeat them on the 14th of each month following.

Accelerated Schedule

The reason behind this sped-up schedule is that – believe it or not – Windows 7 is still surprisingly relevant. Check out this stat from Computerworld; it’s estimated that Windows 7 will still be powering more than 40% of all Windows personal computers at the end of January 2020.

If that’s correct, that number is quite a bit higher – by about 35%, in relative terms – than the share Windows XP held when it was coming to the end of its working life. It would seem that Microsoft’s logic in starting to send out these reminders earlier is that it will shrink that larger fraction of Windows 7 systems before support ends.

As recently as 5 years ago, Microsoft pushed on-screen alerts only to systems maintained using Windows Update, working with the knowledge that most small businesses and the like would be utilizing that resource. Windows XP PCs managed by enterprise IT staff using Windows Server Update Services (WSUS) had no such reminder delivered. Administrators were also able to remove and/or prevent the warning by modifying the Windows registry, or by setting a group policy.

We can likely expect that similar options will exist for the Windows 7 notices. As the saying goes, all good things come to an end. We’ll try to pacify anyone who’ll be sad to see Windows 7 go by saying that by putting these OSes out to pasture the developers are able to put more of their energies towards improving existing and future ones, and that’s better in the big picture of things.

We’ll conclude here today by leaving you with a Windows 7 to Windows 10 migration guide.

5G Networks: What to Expect

We don’t know about you, but for those of us here it doesn’t seem like it was that long ago that 3G Internet speeds were being revelled in as the latest and greatest. Things obviously change fast, as 3G has been in the rear view mirror for a long time now, and the reality is that the newest latest and greatest – 4G – is about to join it there.

Here at 4GoodHosting, the fact we’re a leading Canadian web host makes us as keen to learn more about what the new 5G networks have in store for us as anyone else who’s in the digital space day in and out. It appears that we’re in for quite a treat, although there are some who suggest tempering expectations. That’s to be expected anytime wholesale changes to infrastructure key to big-picture operations are forthcoming.

Nonetheless, we’re supposed to be immersed in the 5G world before the end of next year. Mobile 5G is expected to start making appearances in cities around North America this year, with much more extensive rollouts expected in 2020, so a discussion of what we can all expect from 5G is definitely in order. Let’s do it.

What is 5G, and How’s It Going to Work?

To cut right to it, 5G is the next generation of mobile broadband that will augment 4G LTE connections for now before eventually replacing them. 5G is promising to deliver exponentially faster download and upload speeds along with drastically reduced latency – the time it takes devices to communicate with each other across wireless networks. Right, that alone is worthy of some serious fanfare, but fortunately there’s even more to this.

But before getting into additional benefits expected to be seen with 5G networks, let’s have a look at what makes them different from 4G ones and how exactly these new super networks are predicted to function.

Spectrum-Specific Band Function

It’s important to start with an understanding of the fact that unlike LTE, 5G is going to operate on three different spectrum bands. The lowest is the sub-1GHz spectrum, what’s known as low-band spectrum, and it’s the one used for LTE by most carriers in North America. This spectrum is quite literally running out of steam, so it’s ready to be replaced. It does provide great area coverage and signal penetration, but peak data speeds never exceed 100Mbps and often you’re not anywhere close to even that.

Mid-band spectrum provides faster speeds and lower latency, but the long-standing complaint related to it is that it fails to penetrate buildings, and peak speeds top out at around 1Gbps.

High-band spectrum (aka mmWave) is what most people think of when they think of 5G, and it can offer peak speeds up to 10 Gbps along with impressively low latency most of the time. The major drawback here though? Coverage area is small and building penetration is poor.

It appears that most carriers are going to start out by piggybacking 5G on top of their 4G LTE networks, and then nationwide 5G-exclusive networks will be built. Providers are very aware that small cells are going to be required so that these souped-up 4G LTE networks don’t have their 5G appeal diminished by poor penetration rates and intermittently average download speeds.

In this regard, we all stand to benefit from the industry being cautious about not rolling out 5G on its own and then having growing pains with these networks.

Right, some people may not be familiar with small cells. They’re low-power base stations covering small geographic areas, and they allow carriers using mmWave for 5G to offer better overall coverage. Beamforming will be used to improve 5G service on the mid-band by sending a single focused signal to each and every user in the cell, while systems using it monitor each user to make sure they have a consistent signal.

With small cells and beamforming working together within 5G-enabled 4G LTE networks, latency promises to be nearly, if not entirely, unnoticeable.

Examples of How 5G SHOULD Make Things Better

  1. Improved broadband

The reality today is that carriers are running out of LTE capacity in many major metropolitan areas. In some spots, users are already experiencing noticeable slowdowns during busy times of day. 5G will add huge amounts of spectrum in bands that have not been dedicated for commercial broadband traffic.

  2. Autonomous vehicles

Uber may have a devil of a time getting a foothold in Vancouver, but you can likely expect to see autonomous vehicles made possible by ubiquitous 5G deployment. The belief is that it will make it possible for your vehicle to communicate with other vehicles on the road, provide information to other vehicles regarding road conditions, and share performance information with both drivers and automakers.

This application has a TON of promise, and it’s definitely one to keep an eye on.

  3. Public Infrastructure & Safety

It’s also predicted that 5G will allow cities and other municipalities to operate with greater efficiency. All sorts of civic maintenance processes will be made more efficient by means of 5G networks.

  4. Remote Device Control

The remarkably low levels of latency expected with 5G make it so that remote control of heavy machinery may become possible. This means fewer actual people in hazardous environments, and it will also allow technicians with specialized skills to control machinery from any location around the globe.

  5. Health Care

5G and its super low latency may also be huge for health care applications. Since URLLC (ultra-reliable low-latency communication) reduces 5G latency even further than what you’ll see with enhanced mobile broadband, we may see big improvements in telemedicine, remote recovery and physical therapy via AR, precision surgery, and even remote surgery in the very near future once 5G becomes the norm.

One of the most beneficial potential advances that may come with 5G as it concerns healthcare is that hospitals may be able to create massive sensor networks to monitor patients, allow physicians to prescribe smart pills to track compliance, and let insurers monitor subscribers to determine appropriate treatments and processes.

  6. IoT

Last but certainly not least is the way 5G will benefit the Internet of Things. As it is now, sensors that can communicate with each other tend to require a lot of resources and really drain LTE data capacity.

With 5G and its fast speeds and low latencies, the IoT will be powered by communications among sensors and smart devices. These devices will require fewer resources than the ones currently in use, and there are huge efficiencies to be had in connecting to a single base station.

It’s interesting to think that one day 5G will probably be as long-gone and forgotten as 3G is now, despite the fanfare we all gave it many years ago. You can’t stop progress in the digital world, and it’s fair to say that 99% of us wouldn’t want to even if we could.

 

What’s Best? Sleep, Hibernate, or Shut Down Your Computer at Night


Most people are perfectly fine with prompting their desktop or notebook to ‘nod off’ at the end of a day, especially those who work on their device and will be back in front of the screen first thing the next morning. It’s true that computers can go into a low-power mode, with no light coming from the screen and illuminating the room once you tell your computer to go to sleep. Others who aren’t going to be using theirs as regularly may instead choose to shut it down and be perfectly all right with the time it takes to get it booted up and running again when they next want to use it.

The majority won’t really give it much more thought than that, and here at 4GoodHosting we’re like any other Canadian web hosting provider with a good reputation in that we’ve got our minds on much more detailed and relevant aspects of what’s going on in the digital world. But like any of you, we’ve got desktops and notebooks at home too. That’s why we found a certain article on this topic to be informative in just the type of way we aim to offer our weekly blog content, and so here it is for you too!

Let’s have a look at this, and try to come to a consensus on what’s the best choice for you when you’re done using your computer – put it to sleep, have it hibernate, or shut it down entirely.

Popular Thinking

The standard belief is that choosing not to turn your computer off at night is preferable, because shut downs and start ups tax the computer and lead to some of its components wearing out more quickly. Alternately, leaving it on does the same for other components that never get to rest while the computer keeps running, even if it has long since gone to sleep.

There’s some truth to each of them, so the question then becomes which is the better of the two choices. Here’s the skinny on all of that.

The Issue

It’s easy to understand the belief that cutting the power without shutting down properly has the potential to damage your computer’s hardware. But can frequent shutdowns and restarts do the same? And how does turning the device off compare to leaving it on in a low-power ‘sleep’ or ‘hibernate’ state when not in use?

The source turned to for a definitive answer in this case was Best Buy’s Geek Squad, and here’s what they had to say on a topic that most would agree they’re very well qualified to comment on. They were asked very plainly – is it best to leave my computer on and let it go to sleep and eventually hibernate when I’m done using it, or is it best to shut it down and restart it later?

The Verdict, and Reasoning

According to the knowledgeable guys and gals at Geek Squad, the answer as to which choice is best depends on how often you use your computer. Those who use it more than a few times every day are best to leave it on and let it drift off to sleep. Alternately, those who use it for just an hour or two a day, here and there, should go ahead and turn it off between uses.

The long and short explanation for this – and the most relevant piece of information regarding resultant wear & tear on the device – is that leaving a computer on indefinitely is less stressful overall than turning it on and off, especially if you were to do that several times a day.

Every time a computer turns on, the surge of power required for the boot up isn’t harmful in itself, but over years the repeating of that power surge can shorten the computer’s lifespan. These risks are of course greater for an older computer, and in particular for ones that have a traditional hard disk drive with moving parts rather than a solid state drive that’s more robust.

That said, all mechanical parts will fail eventually, and using them constantly will inevitably wear them down. There are drawbacks to leaving devices on too; computers heat up more and more as they work, and certain processes continue even when the device is asleep. Heat is detrimental to all components, and with computers left on you have a steady supply of it at varying moderate levels.

However, the heat and gear-grinding that goes on with start up IS more detrimental over the long term. The exception to this would be LCD panel displays, if they weren’t set to go dark after a certain period of inactivity. If that weren’t the case, leaving your computer on would be much more problematic – not to mention the nuisance of never-ending illumination of your workspace area.

Batteries and hard drives also have a limited life cycle. Allowing them to turn off (or sleep) and spin down when not being used will extend the life of these components, and especially if you’re only restarting the computer once or twice in a week if at all.

Even Better Reasoning

Some people will aim to refute this belief, stating that the very concept that shut downs and start ups make for damaging stress on components is a very dated way of looking at things. There are arguments to be made for both sides.

Reasons to leave it on

  • Using the PC as a server means you want to be able to remotely access it.
  • Background updates, virus scans, or other activities are welcome to go ahead while you’re away.
  • Long waits during start ups are unacceptable.

Reasons to turn it off

  • Conserving electricity – leaving it on can slightly increase your power bill.
  • Wishing to not be disturbed by notifications or fan noise.
  • Rebooting does inherently improve computer performance.

Having It Sleep, Or Hibernate?

Sleep puts a computer into a low-power state without turning it completely off, while hibernation stops using power altogether and resumes where you left off when you power it back on. Overall, the consensus seems to be that sleep mode is preferable to hibernation, because hibernation produces wear and tear similar to a start and stop.

The recommendation is that if you’re going to leave it on all the time, make sure that you have the right sleep options set up in the Shut down menu. That way, saving a lot of power with no real downside becomes possible.

Surge Protectors a Must

We’re going a little off topic here to wrap this up, but it really is worth relating the importance of using a surge protector between your computer and the wall outlet. Unless you actually like the idea of having expensive componentry fried by an electrical spike that arrives without warning, a surge protector is going to be a nice defense that hopefully you never need.

The best choice is to get an uninterruptible power supply (UPS), which is basically a battery backed-up surge protector. These help condition power to even it out, and smooth out power spikes that can do irreparable damage to your computer’s components.

Lastly, keep your computer clean. Spend some time now and then to open it up and get rid of dust and debris. Uninstalling old software and cleaning up old files and processes is recommended too.

The Final Decision

Here it is – if you use your computer more than once a day, leave it on at least all day. If you use it only briefly during the morning and at night, leaving it on overnight is probably best. Those who use their computer for only a few hours once a day, or even less than that, should go ahead and turn it off when they’re done.

 

Distractions Begone: Introducing Google Chrome’s Focus Mode

It’s been said that in the 21st century we’ve never had more distractions pulling at our attention day in and day out. This is especially true when we’re in front of a screen, and we imagine not many of you need any convincing of that. Distractions aren’t particularly problematic when you’re only web surfing or the like; more often than not they’re what you might call an irresistible nuisance in those situations.

When you’re on your computer for productive purposes, however, all those distractions can add up to a considerable amount of lost time. That’s where people might wish there was something to be done about them… and it appears as if now there is.

Here at 4GoodHosting, we’re like any industrious Canadian web hosting provider in the way we have our eyes and ears peeled for developments in the computing world. Most of them aren’t worthy of discussing at large here, but considering that nearly everyone has had difficulty staying on task while using the Internet, this one definitely is.

Google Introduces Focus Mode for Chrome Browser

As if the Chrome browser needed any more assistance in being the nearly ubiquitous web browser of choice these days. Google is set to introduce Focus Mode, and while they haven’t actually announced the new feature as of yet, tech insiders have found a new flag that indicates whether or not ‘focus mode’ is on.

It should be mentioned that they’re not broaching uncharted territory here. Different applications have attempted to take on the problem of getting people to focus while working on a computer, and there have been software solutions available for both Mac and PC that have arrived with little or no fanfare. It goes without saying, however, that no power player commands the attention that Google does these days.

At this time little is known about the Focus Mode feature, aside from the fact it will soon be implemented in the world’s most popular web browser. The flag reportedly indicates that if ‘#focus-mode’ is enabled, it allows a user to switch to Focus Mode.

What, and How?

We bet nearly all of you will be saying right, right – but how exactly is Focus Mode going to work? At the moment, we can only speculate on the features the new option might offer to users. We think it’s safe to assume that Focus Mode will restrict specific websites or applications from being accessed. For example, Focus Mode may stop a user from browsing sites such as YouTube, Reddit, and Facebook (likely the most necessary for most people!). Other industry insiders have suggested that the mode may integrate with Windows 10‘s Focus Assist when working in conjunction with a PC’s operating system.

That last part there is important, as it appears that – at least initially – Focus Mode will be available on PCs running Windows 10, and it’s believed that it will allow users to silence notifications and other distracting pop-ups. We’re prone to wonder if Focus Mode will also work with Windows 10 to stop websites from screaming out for your attention, or restricting those pop-up announcements that are way too common and explicitly designed to take your attention elsewhere.

Patience, Grasshopper

As mentioned, Focus Mode isn’t quite here yet, but for those who are distracted way too easily (and you can certainly count us among them) when time is a valuable commodity for getting needed tasks done, this really has a lot of potential.

We can most likely expect to see Focus Mode in a test build such as Chrome Canary before it becomes a mainstream feature available to one and all with Google Chrome. We’ll be following these developments keenly, and we imagine that now a good many of you will be too.

 

Getting Ready for Wi-Fi 6: What to Expect

Most people aren’t familiar with Wi-Fi beyond understanding that it means a wireless internet connection. Those same people won’t be aware that in the last near decade the digital world has moved from Wi-Fi 4 to Wi-Fi 5, and now Wi-Fi 5 is set to be replaced by Wi-Fi 6. What’s to be made of all of this for the average person, who only knows that the Wi-Fi networks in their home and office are essential parts of their connected day-to-day, and that the Wi-Fi in Starbucks is pretty darn convenient as well?

The numeric chain that identifies a Wi-Fi standard is something they may well recognize, though. 802.11 is the standard, but the Wi-Fi 4 you had from 2009 to 2014 is a different 802.11 than the Wi-Fi 5 you’ve had from then until now. What’s to come later this year with Wi-Fi 6 will be a different 802.11 again. Right, we get you – so what’s the difference, exactly?

Here at 4GoodHosting, we’re like any quality Canadian web hosting provider in that the nature of our work and interests makes it so that we pick up on these things, if for no other reason than we’re exposed to and working with them on a regular basis. Much of the time these little particulars related to computing, web hosting, and digital connectivity aren’t worth discussing in great detail.

However, because Wi-Fi is such an essential and much-appreciated resource for all of us we thought we’d look at the ‘new’ Wi-Fi set to arrive later this year here today.

Wi-Fi 6: Problem Solver

When we look at ‘802.11ac’, the average person won’t get the significance of it. The fact is, however, they shouldn’t have to – and the naming arriving with Wi-Fi 6 is designed to be a solution to that problem.

What we’re going to see is the beginning of generational Wi-Fi labels.

Let’s make you aware that there is a collective body known as the Wi-Fi Alliance. They are in charge of deciding, developing, and designating Wi-Fi standards. We are all aware of how devices are becoming more complex and internet connections evolve, and when they do the process of delivering wireless connections also changes.

As a result, Wi-Fi standards — the technical specifications that manufacturers follow to create Wi-Fi products — need to be updated from time to time so that new technology can flourish and compatibility extends to nearly the entirety of devices out there.

As mentioned though, the naming of Wi-Fi standards is totally foreign to the average person if they ever try to figure out what that numeric 802-something chain stands for. The Wi-Fi Alliance’s response is now to simply refer to the number of the generation. Not only will this apply to the upcoming Wi-Fi 6, but it will also be retroactive and thus apply to older standards. For example:

802.11n (2009) – Wi-Fi 4

802.11ac (2014) – Wi-Fi 5

802.11ax (expected late 2019) – Wi-Fi 6

It’s easier to see how this is a better classification approach, but there’s likely going to be a period of confusion where some products are labeled with the old code and some are just called Wi-Fi 4 or Wi-Fi 5, even though they’re functionally interchangeable as far as ‘type’ is concerned. Eventually, however, this should be resolved as older product labeling is phased out and everyone – or most people at least – become familiar with the new Wi-Fi classifications. In all honesty, for most people, if you just pay even the slightest amount of attention you’ll begin to notice the difference without having to put much thought into it.

How Wi-Fi 6 Will Be Different – And Better

The biggest impetus to create Wi-Fi 6 was to better accommodate all the many new Wi-Fi technologies that have been emerging. Wi-Fi 6 helps standardize them. Here are the most relevant developments, and exactly what they should mean for your wireless network.

Lower Latency

Lower latency is a BIG plus that’s going to come with Wi-Fi 6, and you’ll probably notice it right quick. Reduced latency means shorter or no delay times as data is sent – very similar to what ping rate and other such measurements capture. Low-latency connections improve load times and prevent disconnects and other issues more effectively. Wi-Fi 6 lowers latency compared to older Wi-Fi standards, and it does so using more advanced technology like OFDMA (orthogonal frequency division multiple access). Long story short, it’s going to pack data into a signal much more completely and reliably.
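
If you want a feel for what latency actually looks like on your own connection, here’s a rough Python sketch that times how long a basic TCP connection takes to a given host. It measures your whole network path rather than just the Wi-Fi hop, and the hostname is only a placeholder, so treat the numbers as a ballpark illustration rather than a proper benchmark:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average time (in ms) to open a TCP connection to host:port."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # Opening the connection is enough; we only care about handshake time.
        with socket.create_connection((host, port), timeout=3):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

if __name__ == "__main__":
    # Placeholder host – substitute any server you like.
    print(f"Average connect time: {tcp_rtt_ms('example.com'):.1f} ms")
```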

Speed

Wi-Fi 6 will also be faster – considerably faster compared to Wi-Fi 5. By offering full support for technologies like MU-MIMO, connection quality will improve for compatible mobile devices in a big way, and content delivery should be sped up accordingly. These improvements aren’t as dependent on your Internet connection speed as you might think, either. They can and likely will improve the speed of your Wi-Fi data and let you receive more information, more quickly.

Now a question we imagine will come up for most of you – will all routers be able to work with the new 802.11ax standard? No, they won’t. If your router is especially dated, you should happily accept the fact it’s time to get a newer model. It will be 100% worth it, don’t have any doubts about that.

Wi-Fi 6 is also going to mean fewer dead zones, as a result of expanded beamforming capabilities being built into it. ‘Beamforming’, you say? That’s the name for the trick your router uses to focus signals on a particular device, and that’s quite important if the device is having difficulty working with a connection. The new WiFi 6 802.11ax standard expands the range of beamforming and improves its capabilities. Long story short again, ‘dead zones’ in your home are going to be MUCH less likely.

Improved Battery Life

Wi-Fi 6 is going to mean better battery life, and we’ll go right ahead and assume that’s going to be most appealing for a lot of you who are away from home for long periods of the day and taking advantage of Wi-Fi connectivity fairly often throughout.

One of the new technologies that Wi-Fi 6 is set up to work with is called TWT, or target wake time. It assists connected devices in customizing when and how they ‘wake up’ to receive data signals from Wi-Fi. Devices are able to ‘sleep’ while waiting for the next necessary Wi-Fi transmission, and battery drain is reduced as a result. Your phone doesn’t sleep entirely itself, only the parts of it that are operating with Wi-Fi.

Everybody will like the idea of more battery life and less time spent plugging in to recharge.

Keep an Eye Out for the Wi-Fi 6 Label

How will you know if a router, phone or other device works with the new 802.11ax standard? Simply look for the phrase ‘Wi-Fi 6’ on packaging, advertisements, labels or elsewhere. Look up the brand and model # online if for some reason you don’t see it on the packaging. The Wi-Fi Alliance has also suggested using icons to show the Wi-Fi generation. These icons appear as Wi-Fi signals with a circled number within the signal.

Identifying these icons should help you pick out the right device. If not, you can of course always ask the person behind the till, and they should be knowledgeable regarding this (if they work there you’d have to assume they would be).

Keep in mind that most of the devices around 2020 and later are expected to be Wi-Fi 6, and so we’ll have to wait a year or so before they start to populate the market.

 

Project Pathfinder for an ‘Even Smarter’ SIRI

AI continues to be one of the most game-changing developments in computing technology these days, and it’s hard to argue there’s a more commonplace example of AI than the digital assistants that have nearly become household names – Apple’s Siri and Amazon’s Alexa. Even a decade ago many people would have stated their disbelief at the notion that it might be possible to make spoken queries to a digital device and have it provide a to-the-minute accurate reply.

The convenience and practicality of AI has been a hit, and what’s noteworthy about it is the way that folks of all ages have taken to it. After all, it doesn’t even require the slightest bit of digital know-how to address Siri or Alexa and rattle off a question. Indeed, both tech giants have done a great job building the technology for their digital assistants. With regards to Siri in particular, however, it appears that Apple is teaming up with a company that’s made a name for themselves developing chatbots for enterprise clients.

Why? To make Siri an even better digital assistant, and even more so the beacon of AI made accessible to everyday people.

Here at 4GoodHosting, like most Canadian web hosting providers we have the same level of profound interest in major developments in the computing, web hosting, and digital worlds that many of our customers do. This zeal for ‘what’s next’ is very much a part of what makes us tick, and this coming-soon improvement to Siri makes the cut as something worth discussing in our blog here today.

Proven Partnership

The aim is to make it so that Siri gets much better at analyzing and understanding real-world conversations and developing AI models capable of handling their context and complexity. In order to do that, they’ve chosen to work with a developer they have a track record of success with. That’s Nuance, an established major player in conversation-based user interfaces. They collaborated with Apple on Siri to begin with, and so this is round 2.

As mentioned, Nuance’s present business is focused on developing chatbots for enterprise clients, and so they’re ideally set up to hit the ground running with Project Pathfinder.

Project Pathfinder

The focus of Project Pathfinder came from Apple’s belief that machine learning and AI can automate the creation of dialog models by learning from logs of actual, natural human conversations.

Pathfinder is able to mine huge collections of conversational transcripts between agents and customers before building dialog models from them and using those models to inform two-way conversations between virtual assistants and consumers. Conversation designers are then more able to develop smarter chatbots. Anomalies in the conversation flow are tracked, and problems in the script can then be identified and addressed.

Conversation Building

Voice assistants like Siri and Alexa have inner workings that make it so that your speech interacts with reference models. The models then try to resolve the intent of your question, and accurate replies depend on conversation designers doing two things: 1) having learned from subject matter experts, and 2) having done the same through a LOT of trial and error related to query behaviour.

As far as Apple’s concerned, giving the nod to Nuance and their conversation designers was the best way to go.

Pathfinder empowers them to build on their existing knowledge base with deep insights gathered from real conversational interactions that have taken place inside call centers. More to the point, however, the software doesn’t only learn what people are discussing, but it also makes determinations on how human agents guide users through the transactions.

Adding more intelligence to voice assistants/chatbots is made possible with this information, and so Siri is primed to build on her IQ in the same way. It certainly sounds promising!

Self-Learning Conversation Analytics

All you need to do is spend a short period of time with Siri or Alexa and you’ll quickly find that they definitely do have limitations. That’s a reflection of the fact that they are built for the mass market, and they must field much more diverse requests than chatbots that are primarily built for business. This means they come with a lack of focus, and it’s more difficult to design AI that can respond sensibly to spoken queries on all the thousands of different topics around the globe. Then you have follow-up queries too.

The upshot is that the queries posed to virtual assistants are rooted in open-ended human questions 95+% of the time, and as such they’re less focused and less predictable. So how do you build AI that’s more capable of handling the kind of complex enquiries that characterize human/machine interactions in the real world?

The answer to that is to start with call center chatbots, and that’s what the Pathfinder Project is doing. It will accelerate development of spoken word interfaces for more narrow vertical intents – like navigation, weather information, or call center conversation – and by doing so it should also speed up the development of more complex conversational models.

It will make these machines capable of handling more complex conversations. It will, however, take some time to come to realization (projected for summer 2019). Assuming it’s successful, it will show how conversational analytics, data analysis and AI have the ability to empower next-generation voice interfaces. And with this we’ll also be able to have much more sophisticated human/computer interactions with our virtual assistants.

Seeing the power of AI unlocked with an understanding of conversational context and intent – rather than primarily asking Siri or Alexa to turn the lights off and the like – promises to be a really helpful and very welcome advance in AI for all of us.

 

DNS Flag Day This Past Friday: What You Need to Know About Your Domain

We’re a few days late getting to this, but we’ve chosen to make DNS Flag Day our topic this week, as the ramifications of it will be of ongoing significance for pretty much anyone who has interests in digital marketing and the World Wide Web as a whole. Those who do will very likely be familiar with DNS and what the abbreviation stands for, but for any who don’t, DNS stands for Domain Name System.

DNS has been an integral part of the information superhighway’s infrastructure for nearly as long as the Internet itself has been in existence. So what’s its significance? Well, in the Internet’s early days there wasn’t a perceived need for the levels of security that we know are very much required these days. There was much more in the way of trust and less in the way of pressing concerns. There weren’t a whole lot of people using it, and as such DNS as a core service didn’t receive much focus and wasn’t developed with much urgency.
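
For anyone who’d like to see DNS doing its everyday job, here’s a tiny Python sketch of the lookup that happens behind the scenes whenever you type an address into a browser. The domain shown is just a placeholder; substitute any hostname you like:

```python
import socket

def resolve(name: str) -> str:
    """Ask the system's DNS resolver for the IPv4 address behind a hostname."""
    return socket.gethostbyname(name)

# Placeholder domain – swap in your own to see where its name points.
print(resolve("example.com"))
```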

Any Canadian web hosting provider will be on the front lines of any developments regarding web security measures, and here at 4GoodHosting we’re no exception. Offering customers the best in products and services that make their website less vulnerable is always going to be a priority. Creating informed customers is something we believe in too, and that’s why we’re choosing to get you in the know regarding DNS Flag Day.

What Exactly is this ‘Flag Day’?

The long and short of this is that this past Friday, February 1, 2019, was the official DNS Flag Day. So, for the last 3 days, some organisations may now have a non-functioning domain. Not likely many of them, but many will see their domains now being unable to support the latest security features – making them an easier target for network attackers.

How and why? Well, a little bit of background info is needed. These days DNS has a wide-spread complexity, which is ever more necessary because cyber criminals are launching ever more complex and disruptive distributed denial of service (DDoS) attacks aimed at a domain’s DNS. They’ve been having more success, and when they do, it works out that no functioning DNS = no website.

Developers have done their part to counter these threats quite admirably, most notably with many workarounds put in place to guarantee that DNS can continue to function as part of a rapidly growing internet.

The situation as it’s become over recent years is one where a combination of protocol and product evolution have made it so that DNS is being pushed and pulled in all sorts of different directions. This naturally means complications, and technology implementers typically have to weigh these ever-growing numbers of changes against the associated risks.

Cutting to the chase a bit again, the workarounds have ended up allowing legacy behaviours and slowing down DNS performance for everyone.

To address these problems, as of last Friday, vendors of DNS software – as well as large public DNS providers – have removed certain DNS workarounds that many people have been consciously or unconsciously relying on to protect their domains.

Flag’s Up

The reason this move had to be made is because broken implementations and protocol violations have resulted in delayed response times, far too much complexity, and difficulty upgrading to new features. DNS Flag Day has now put an end to mass support for many of these workarounds.

The change will affect sites with software that doesn’t follow published standards. For starters, domain timeouts will now be identified as a sign of a network or server problem. Moving forward, DNS servers that do not respond to extension mechanisms for DNS (EDNS) queries will be regarded as inactive servers, and requests from browsers will go unanswered.

Test Your Domain

If you’re the type to be proactive about these things, then here’s what you can do. You can test your domain and your DNS servers with the extension mechanism compliance tester. You’ll receive a detailed technical report that will indicate whether your test failed, partially failed, or was successful.

Failures in these tests are caused by broken DNS software or broken firewall configuration, which can be remediated by upgrading DNS software to the latest stable version and re-testing. If the tests still fail, organisations will need to look further into their firewall configuration.
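
For the do-it-yourself crowd, here’s a very rough Python sketch (using the third-party dnspython library) that sends a single EDNS-enabled query to a nameserver and reports whether it answers cleanly. It’s only a quick sanity check built on our own assumptions – the official compliance tester runs a far more thorough battery of tests – and the nameserver IP and zone below are placeholders you’d swap for your own:

```python
# pip install dnspython
import dns.exception
import dns.message
import dns.query
import dns.rcode

def answers_edns_query(nameserver_ip: str, zone: str) -> bool:
    """Send a basic EDNS(0) SOA query and report whether the server responds cleanly."""
    query = dns.message.make_query(zone, "SOA", use_edns=0)
    try:
        response = dns.query.udp(query, nameserver_ip, timeout=5)
    except dns.exception.Timeout:
        return False
    return response.rcode() == dns.rcode.NOERROR

if __name__ == "__main__":
    # Placeholder values – use your own zone and its authoritative nameserver's IP.
    print(answers_edns_query("192.0.2.53", "example.com"))
```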

In addition to the initial testing, it’s recommended that businesses that rely on their online presence (which really is every one of them these days) use the next three months to make sure their domain meets what’s required of it now. Organizations with multiple domains that are clustered on a single network in a shared server arrangement may well find there’s an increased chance of being caught up in a DDoS attack on another domain sitting near to theirs.

Also, if you’re using a third-party DNS provider, most attacks on the network won’t be aimed at you, but you’re still at risk due to being on shared hosting. VPS hosting does eliminate this risk, and VPS web hosting Canada is already a better choice for sites that need a little more ‘elbow room’ when it comes to bandwidth and such. If VPS is something that interests you, 4GoodHosting has some of the best prices on VPS hosting packages and we’ll be happy to set you up. Just ask!

DNS Amplification and DNS Flood Risks

We’re now going to see more weak domains spanning the internet than ever before, and this makes it so that there is even more opportunity for cyber criminals to exploit vulnerable DNS servers through any number of different DDoS attacks.

DNS amplification is one of them, and it involves attackers using DNS to respond to small look-up queries with a fake, artificial IP of the target. The target is then overloaded with large DNS responses – more than it’s able to handle. The result is that legitimate DNS queries are blocked and the organization’s network is hopelessly backed up.

Another one is DNS floods, which involve waves of requests being aimed at the DNS servers hosting specific websites. They tie up server-side assets like memory or CPU with a barrage of UDP requests generated by scripts running on compromised botnet machines.

Layer 7 (application layer) attacks will almost certainly be on the rise now too, and including those targeting DNS services with HTTP and HTTPS requests. These attacks are built to target applications with requests that look like legitimate ones, which can make them particularly difficult to detect.

What’s Next

Cyber-attacks will continue, and continue to evolve. Organizations will continue to spend time, money and resources on security. As regards DNS, it’s now possible to corrupt and take advantage of what was once the fail-safe means of web security. The measures taken with DNS Flag Day have been put in place to address this problem, and it’s important that you know that your domain meets the new requirements. Again, use the compliance tester mentioned above to check yours.

There’s going to be a bit of a rough patch for some, but this is a positive step in the right direction. DNS is an essential part of the wider internet infrastructure. Entering or leaving a network is going to be less of a simple process now, but it’s the way it has to be.

Chromium Manifest V3 Updates May Disable Ad Blockers

It’s likely that a good many of you are among the thousands upon thousands of people who have an ad blocker installed for their web browser of choice. Some people use them simply to avoid the nuisance of having to watch ad after ad, and it’s people like these who have led some sites to insist that you ‘whitelist’ them in order to proceed into the website you want to visit. That’s perfectly understandable, as those paying advertisers are the way the website generates income for the individual or business.

For others, however, who spend a great deal of the working day researching and referencing online, having to watch ads before getting to the content we need gets in the way of our work. For us, an ad blocker is much more a tool of necessity than of convenience. Still, we get caught up in more than a few sites that insist on being whitelisted too. For me, my ad blocker is a godsend and I don’t whitelist any website or disable my ad blocker for any of them.

Here at 4GoodHosting, part of what makes us a good Canadian web hosting provider is having built up an insight into what really matters to our customers. The bulk of them are people who use the Information Superhighway as a production resource rather than web ‘surfers’ for whom it’s more of an entertainment one. That’s why today’s news is some that’s sure to be very relevant for most of our customers.

Weakened WebRequest APIs

Some of you may not know how your ad blocker works, and that’s perfectly normal. As long as it does its job, you don’t really need to know. Chromium is the open-source browser project that Google’s Chrome is built on, and changes made there make their way into what has become most people’s web browser of choice.

However, Chromium developers in the last few weeks have shared that among the updates they are planning for Manifest V3 is one that will restrict the blocking version of the webRequest API. The alternative they’re introducing is called the declarativeNetRequest API.

After becoming aware of it, many ad blocker developers expressed their belief that the introduction of the declarativeNetRequest API will mean many already existing ad blockers won’t be ‘blocking’ much of anything anymore.

One industry expert stated on the subject, “If this limited declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two existing and popular content blockers like uBO and uMatrix will cease to be functional.”

What is the Manifest V3 Version?

It’s basically a mechanism through which specific capabilities can be restricted to a certain class of extensions. These restrictions are indicated in the form of either a minimum, or maximum, version.

Why the Update?

Currently, the webRequest API allows extensions to intercept requests and then modify, redirect, or block them. The basic flow of handling a request using this API is as follows,

  • Chromium receives the request / queries the extension / receives the result

However, in Manifest V3 the use of this API will have its blocking form limited quite significantly. The non-blocking form of the API, which permits extensions to observe network requests but not modify, redirect, or block them, will not be discouraged. In addition, the exact limitations they are going to put on the webRequest API have yet to be determined.

Manifest V3 is set to make the declarativeNetRequest API the primary content-blocking API in extensions. This API will then allow extensions to tell Chrome what to do with a given request, instead of Chromium forwarding the request to the extension. This will enable Chromium to handle a request synchronously. Google insists this API is overall a better performer and provides better privacy guarantees to users – the latter of which is of course very important these days.

Consensus Among Ad Blocker Developers and Maintainers?

When informed about this coming update many developers were concerned that the change will end up completely disabling all ad blockers. The concern was that the proposed declarativeNetRequest API will result in it being impossible to develop new and functional filtering engine designs. This is because the declarativeNetRequest API is no more than the implementation of one specific filtering engine, and some ad blocker developers have commented that it’s very limited in its scope.

It’s also believed that with the declarativeNetRequest API, developers will be unable to implement other features, such as blocking media elements that are larger than a set size and disabling JavaScript execution through the injection of CSP directives, among other features.

Others are making the comparison to Safari’s content blocking APIs, which essentially put limits on the number of admissible rules. Safari introduced a similar API recently, and the belief is that’s the reason Google has gone in this direction too. Many seem to think that extensions written against that API are more usable, but still fall well short of the full power of uBlock Origin. The hope is that this API won’t be the last iteration of it in the foreseeable future.

Dedicated IP Addresses and SEO

Even the most layman of web endeavourers will be familiar with the acronym SEO. We imagine further there are very few individuals anywhere who don’t know it stands for search engine optimization, and understand just how integral SEO is to having success in digital marketing. Most people with a small business that relies on its website for maximum visibility with prospective customers will hire an SEO professional to optimize their site. That continues to be highly recommended, and for 9 out of 10 people it is NOT something you can do effectively on your own, no matter how much you’ve read online or how many YouTube videos you’ve watched.

Here at 4GoodHosting, we are like any other top Canadian web hosting provider in that we offer SEO optimization services for our clients. Some people will think that choosing the best keywords and having them at the ideal density is most integral to having good SEO, and that’s true, by and large. But there are a number of smaller yet still significant factors that influence SEO, and they’ll be beyond the wherewithal of most people.

Whether websites benefit from a Dedicated IP address rather than a Shared IP address isn’t something you’ll hear discussed regularly. When you learn that the answer is yes, they do, and exactly why, however, it’s a switch many people will want to consider if they currently have a Shared IP address. Let’s have a look at why that is today.

What Exactly Is an IP address?

For some, we may need to start at the very beginning with all of this, so let’s begin by defining what exactly an IP address is. Any device connected to the Internet has a unique IP address, and that’s true whether it’s a PC, laptop, mobile device, or your web host’s server. It’s made up of a string of four numbers, each of which can range from 0 up to 255. Here’s an example of one:

1.25.255.255

This numerical code makes the machine you are using known. Once it’s identified – and it has to be – the Internet is then able to send data to it. You can now access the hundreds of thousands of websites along the Information Superhighway.
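
For the curious, here’s a small Python sketch using the standard library’s ipaddress module that checks whether a string is a valid IP address and tells you a little about it. The addresses below are just illustrative examples:

```python
import ipaddress

def describe(address: str) -> None:
    """Parse an address string and print a few of its basic properties."""
    ip = ipaddress.ip_address(address)  # raises ValueError if the string isn't valid
    print(f"{ip} is a valid IPv{ip.version} address")
    print(f"  private (LAN-only): {ip.is_private}")
    print(f"  publicly routable:  {ip.is_global}")

describe("1.25.255.255")   # the example from above
describe("192.168.0.10")   # a typical private home-network address
```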

What’s a Shared IP address?

In most instances, the server your web host uses to host your site will be a single machine with a matching single IP address. For most people – and nearly all who go with the most basic hosting package without giving it much thought – you’ll be set up in an arrangement where the server is hosting thousands of websites like yours. It’s not ‘dedicated’ to you and your site exclusively.

Instead, all of the websites hosted it will be represented by the single IP address allocated to the web host’s server. Now if your website is utilized for more of a personal venture or hobby and it’s NOT going to be a leverage point in trying to secure more business, shared hosting will probably be fine. Alternately, if page rankings are a priority for you then shared hosting may be putting you at a disadvantage.

The solution? A Dedicated IP address for your Canadian website. If you need one, we can take care of that for you quickly and fairly easily. But we imagine you’ll need more convincing, so let’s move now to explaining what constitutes a Dedicated IP address.

The Dedicated IP Address

A Dedicated IP address involves you having your own server, and that server only has one website on it – yours. It is common, however, for more than one site to reside on a specific server. A Dedicated IP address is an IP address that is allocated to a single website, instead of one being assigned to the server and representing every website hosted there by default.

The Purpose of Dedicated IP Addresses

The primary appeal of Dedicated IP addresses is that they help larger ecommerce operations be more secure, particularly as regards sensitive data like credit card numbers. On a more individual scale, though, a Dedicated IP address is superior for SEO interests as well.

Why is that? Let’s list all of the reasons here:

1. Speed

When you share space, you share resources and in as far as shared web hosting and shared IP addresses are concerned that means you are sharing bandwidth. The long and short of it is all those other sites on the same server will be slowing yours down. That might be a problem in itself, but if it isn’t then the way slow site speeds push you further down Google’s rankings will be.

Adding a unique IP address to your site will not automatically mean it loads faster, but migrating to a Dedicated Server with a Dedicated IP address definitely will. Sites with a Dedicated IP address are faster, more reliable, and more secure, and that’s a big deal.

2. SSL

For nearly 5 years now Google has been giving preference to websites that have added an SSL 2048-bit key certificate. The easiest way to see whether that’s been done or not is seeing the site’s URL change from HTTP to HTTPS. SSL sites typically utilize unique IP addresses. Google continues to insist that SSL impacts less than 1% of searches, but it’s a factor nonetheless and is another benefit of a Dedicated IP address.

SSL can make your website more visible through public networks and can make websites operate marginally faster, and the benefit of this is in the way visitors get a faster response from the website, because it isn’t pushed down in Google’s results the way it would be without an SSL cert. The majority of ecommerce sites with a Dedicated IP address will also have an SSL cert.
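
If you’re unsure whether a site is already serving a valid certificate, a quick look at the padlock in your browser will tell you, but here’s a short Python sketch that does the same check programmatically. It simply opens a TLS connection and reads back the certificate’s subject and expiry date; the hostname is a placeholder:

```python
import socket
import ssl

def certificate_summary(hostname: str, port: int = 443) -> dict:
    """Connect over TLS and return the subject and expiry date of the served certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return {
        "subject": dict(item[0] for item in cert["subject"]),
        "expires": cert["notAfter"],
    }

if __name__ == "__main__":
    # Placeholder hostname – substitute the site you want to check.
    print(certificate_summary("example.com"))
```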

3. Malware

Malware is software that’s designed and disseminated for the explicit purpose of throwing wrenches into the gears of a working web system. Unfortunately, the thousands of websites that may be on a shared server drastically increases the risk of being exposed to malware if you’re one of them. Further, when you share an IP address with any site that’s been infected with malware then your site is actually penalized despite the fact it’s not you who’s been infected.

In these cases, you’ll be best served by going with a Dedicated IP address and choosing a more reliable Canadian web hosting provider that has measures in place to prevent malware from making its way into the servers in the first place. A Dedicated IP means you’re standing alone, and you’re regarded accordingly.

How Do I Get a Dedicated IP Address?

If you’re with us here at 4GoodHosting, all you need to do is ask. We’ve been setting our customers up with Dedicated IP addresses for quite some time now, and you’ll find that when you do so through us it’s not nearly as pricey as you had expected it to be.

It’s highly recommended for any ecommerce site, or one that’s utilized for very business-strategic aims, and it’s fair to say that you really can’t go wrong moving to a dedicated server if you’ve made the commitment to do anything and everything to protect your SEO and enjoy the same page rankings moving forward. The vast majority of people see it as a wise investment, and of course you always have the option of switching back to a shared hosting arrangement if over time you don’t see any real difference or benefits for you.

Global Environmental Sustainability with Data Centers

Last week we talked about key trends for software development expected for 2019, and today we’ll discuss another trend for the coming year that’s a bit more of a given. That being that datacenters will have even more demands placed on their capacities as we continue to become more of a digital working world all the time.

Indeed, datacenters have grown to be key partners for enterprises, rather than being just an external service utilized for storing data and business operation models. Even the smallest of issues in datacenter operations can impact business.

While datacenters are certainly the lifeblood of every business, they also have global impacts, particularly as it relates to energy consumption. Somewhere in the vicinity of 3% of total electricity consumption worldwide goes to datacenters, and to put that in perspective, that’s more than the entire power consumption of the UK.

Datacenters also account for 2% of global greenhouse gas emissions, and 2% of electronic waste (aka e-waste). Many people aren’t aware of the extent to which our increasingly digital world impacts the natural one so directly, but it really does.

Like any good Canadian web hosting provider who provides the service for thousands of customers, we have extensive datacenter requirements ourselves. Most will make efforts to ensure their datacenters operate as energy-efficiently as possible, and that goes along with the primary aim – making sure those data centers are rock-solid reliable AND as secure as possible.

Let’s take a look today at what’s being done around the globe to promote environmental sustainability with data centers.

Lack of Environmental Policies

Super Micro Computer recently put out a report entitled ‘Data Centers and the Environment’, and it stated that 43% of organizations don’t have an environmental policy, with another 50% having no plans to develop any such policy anytime soon. Reasons why? High costs (29%), lack of resources or understanding (27%), and for another 14%, environmental issues simply aren’t a priority.

The aim of the report was to help datacenter managers better understand the environmental impact of datacenters, provide quantitative comparisons of other companies, and then in time help them reduce this impact.

Key Findings

28% of businesses take environmental issues into consideration when choosing datacenter technology

Priorities that came before it for most companies surveyed were security, performance, and connectivity. However, 9% of companies considered ‘green’ technology to be the foremost priority. When it comes to actual datacenter design, however, the number of companies who put a priority on energy efficiency jumps up to 59%.

The Average PUE for a Datacenter is 1.89

Power Usage Effectiveness (PUE) is the ratio of the total energy consumed by a datacenter to the energy delivered to its IT equipment. The report found the average datacenter PUE is approximately 1.89, but many (over 2/3) of enterprise datacenters come in with a PUE over 2.03.

Further, it seems some 58% of companies are unaware of their datacenter PUE. Only a meagre 6% come in at the ideal range between 1.0 and 1.19.
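
To make the metric concrete, here’s a one-function Python sketch of the PUE calculation, with purely illustrative numbers chosen to land on the report’s 1.89 average:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy (1.0 is ideal)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers only: a facility drawing 1,890 kWh to deliver
# 1,000 kWh to its IT equipment has a PUE of 1.89.
print(round(pue(1890, 1000), 2))  # 1.89
```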

24.6 Degrees C is the Average Datacenter Temperature

It’s common for companies to run datacenters at higher temperatures to reduce strain on HVAC systems and increase savings on energy consumption and related costs. The report found 43% of the datacenters have temperatures ranging between 21 degrees C and 24 degrees C.

The primary reasons indicated for not running datacenters at higher temperatures are concerns about reliability and performance. Hopefully these operators will soon come to learn that recent advancements in server technology have optimized thermal designs, and newer datacenter designs make use of free-air cooling. With them, they can run datacenters at ambient temperatures up to 40 degrees C and see no decrease in reliability or performance. It also helps improve PUE and save costs.

Another trend in data center technology is immersion cooling, where server hardware is cooled by being immersed in a non-conductive liquid. We can expect to see more of this type of datacenter technology rolled out this year too.

3/4 of Datacenters Have System Refreshes Within 5 Years

Datacenters and their energy consumption can be optimized with regular updates of their systems and the addition of modern technologies that consume less power. The report found that approximately 45% of data center operators refresh their systems within every 3 years, while 28% of them do it every four to five years. It also seems that the larger the company, the more likely they are to do these refreshes.

8% Increase in Datacenter E-Waste Expected Each Year

It’s inevitable that electronic waste (e-waste) is created when datacenters dispose of server, storage, and networking equipment. It’s a bit of a staggering statistic when you learn that around 20 to 50 million metric tons of e-waste is disposed of every year around the world, and the main reason it’s so problematic is that e-waste deposits heavy metals and other hazardous waste into landfills. If left unchecked, and we continue to produce it as we have, e-waste disposal will increase by 8% each year.
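
An 8% annual increase compounds quickly, and a couple of lines of Python make the trajectory easy to see. The starting figure below is purely illustrative, taken from the top of the 20-to-50-million-ton range above:

```python
def projected_ewaste(current_tons_millions: float, annual_growth: float, years: int) -> float:
    """Project e-waste volume forward assuming a fixed annual growth rate."""
    return current_tons_millions * (1 + annual_growth) ** years

# Illustrative only: 50 million tons growing at 8% per year roughly doubles in 9 years.
for years in (1, 5, 9):
    print(years, "years:", round(projected_ewaste(50.0, 0.08, years), 1), "million tons")
```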

Some companies partner with recycling companies to dispose of e-waste, and some repurpose their hardware in any one of a number of different ways. The report found that some 12% of companies don’t have a recycling or repurposing program in place, typically because it’s costly, partners or providers are difficult to find in their area, or there’s a lack of proper planning.

On a more positive note, many companies are adopting policies to address the environmental issues that stem from their datacenter operations. Around 58% of companies already have an environmental policy in place or are developing one.

We can all agree that datacenters are an invaluable resource and absolutely essential for the digital connectivity of our modern world. However, they are ‘power pigs’, as the expression goes, and it’s unavoidable that they are, given the sheer volume of activity that goes on within them every day. We’ve seen how they’ve become marginally more energy efficient, and in the year to come we will hopefully see more energy efficiency technology applied to them.