What to Expect From Next Month’s Windows 10 Update from Microsoft

Windows continues to be the most popular and ubiquitous operating system for desktops and notebooks around the world, and while there are those who will have nothing to do with it (see Mac devotees), that fact is a testament to the enduring appeal of what is ‘old faithful’ for the most part when it comes to computer operating systems.

Here at 4GoodHosting, we’re a Canadian web hosting provider that’s in the position to see the value of both Mac and PC operating systems, and it’s true that both have their strengths and weaknesses – which is of course true of pretty much everything. One thing that Microsoft has benefited from for decades now is that it was first to the party, and that’s meant that many people will always choose a Windows OS device because it’s especially familiar for them.

And so it is that the next version of Windows 10 — scheduled for a May 2019 update release — is now just around the corner. This is not going to be a massive overhaul of the OS by any means, but as it approaches its 4-year anniversary there are some nice tweaks to make it fresher and better suited to user preferences. Foremost among these are a new light theme and changes to the search experience, Cortana, and more.

Let’s have a look at the most recent update to Windows 10 here today.

On the House

We’ll start by stating for anyone who might be unaware that Windows 10 updates are always free. The May 2019 Update via Windows Update will be provided at no charge for existing Windows 10 users on any device deemed compatible with the update. The noteworthy difference here, however, is with the rollout method – it is no longer automatically downloaded to your PC.

What you’ll get instead is a notification in Windows Update that the May 2019 update is available. From there you’ll have the option of downloading or installing it. However, only those running a version of Windows 10 that is close to end of support will receive the update automatically. Just as with prior releases, rollouts of major Windows 10 updates are gradual to ensure the best quality experience. For this reason you might not see the May 2019 update right away.

Further, as regards the timing of this, let’s not forget Microsoft’s troubles with releasing previous Windows 10 versions. Don’t count on this update arriving exactly when it’s expected.

Improvements

Let’s shift to the meat of all of this, and detail all of the improvements to be seen in the new Windows 10.

  • Light Theme & Improved Start Menu

Microsoft debuted a dark mode for Windows 10 in 2018, and a new light theme is being introduced with this update to improve overall contrast within the operating system. Users will see that the taskbar, Start menu, and Action Center are a brighter and lighter white color. Some icons in the system tray and taskbar are now also tailored to match the new theme — including both OneDrive and File Explorer.

A new and improved Start menu is part of this too. Installing the May 2019 update will give users a single-column layout and fewer preinstalled apps and live tiles. Plus, they can also now remove more of the stock Windows 10 apps that aren’t used much, including 3D Viewer, Calculator, Calendar, Mail, Movies & TV, Paint 3D, Snip & Sketch, Sticky Notes, and Voice Recorder.

  • Cortana & Search

The separation of Cortana and Search in the Windows 10 taskbar is one of the most notable changes coming with next month’s update. With previous releases they were integrated with each other, but now the search box in the taskbar will only launch searches for files and documents, and the circular Cortana icon will summon the digital assistant when clicked. Some people have already surmised that this may mean the end of Cortana before long, but that’s likely a bit presumptive at this point.

The search experience will also be changing: Windows will now index and search all folders and drives, rather than limiting itself to the default documents, pictures, and videos folders. Along with a new search interface featuring landing pages for Apps, Documents, Email, and Web, users can now expect faster and more accurate searches when aiming to dig up important files.

  • Reserved Space for Windows Update

It’s true that Windows updates can cause bugs, data loss, and failures, and there’s been no shortage of people eager to point that out every chance they get. This May 2019 update, however, is going to enable all Windows 10 users to pause updates for up to 35 days – something that was available to Windows 10 Enterprise and Professional users only up until now.

Having more time to read up and decide on when to install Microsoft’s monthly updates is going to be a nice freedom for many users.

The fact that the May 2019 update will also reserve 7GB of disk space for installing general updates promises to be a more contentious point. The move has been made to keep your PC secure, and the reason this new space cannot be removed from Windows 10 is that it makes future OS updates more efficient.

The space is also intended for apps, temporary files, and system caches undertaken as your PC sees fit. The size of the reserve will depend on your system, so removing unnecessary files on your hard drive in advance of the update might be a good idea.

  • Sandbox Integrated Feature

Last but not least regarding the Windows update for 2019, we have Windows Sandbox. This integrated feature for Windows 10 Pro and Enterprise lets users create a secure desktop environment that is able to isolate and run untrusted and sketchy apps while keeping them separate from the rest of your system. Thus the term ‘sandbox’ – when a Windows Sandbox is closed, all of the software, along with all of its files and state, is permanently deleted.
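Windows Sandbox itself relies on hardware virtualization to achieve this isolation, but the discard-everything-on-close idea is easy to illustrate. The following is a toy Python sketch of that concept only, not the actual feature: untrusted work gets a throwaway directory, and every trace of it is wiped the moment the workspace closes.

```python
import os
import shutil
import tempfile

class DisposableWorkspace:
    """Toy analogue of the sandbox idea: do work in a throwaway
    directory, then permanently delete every trace of it on exit."""

    def __enter__(self):
        self.path = tempfile.mkdtemp(prefix="sandbox_")
        return self.path

    def __exit__(self, exc_type, exc, tb):
        # Discard all files and state created inside the workspace
        shutil.rmtree(self.path, ignore_errors=True)
        return False

with DisposableWorkspace() as ws:
    scratch = os.path.join(ws, "untrusted_output.txt")
    with open(scratch, "w") as f:
        f.write("temporary state")
    print(os.path.exists(scratch))   # → True while the workspace is open

print(os.path.exists(scratch))       # → False once it has closed
```

The real feature goes much further (a fresh kernel-isolated Windows instance each time), but the user-visible contract is the same: nothing done inside survives closing it.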

In our opinion, this is the best and most well-thought-out feature added to this Windows 10 2019 update, especially considering all the different well-disguised threats out there these days. It might not be the most exciting feature for your average user, but you can be sure developers are going to be plenty impressed with it.

It will be interesting to see how well received this update is, and it appears we won’t have to wait long to find out.

Protecting a VPN From Data Leaks

One thing that certainly hasn’t changed from previous years as we move towards the quarter pole for 2019 is that hackers are keeping IT security teams on their toes as much as ever. That shouldn’t come as much of a surprise given the cat and mouse game that’s been going on in cyberspace between the two sides for a long time now. Cyber threats are as sophisticated as ever, and for everyday individuals the biggest concern is always that the privacy of sensitive data will be compromised.

One of the most common responses to enhanced and more enabled threats is to go with a Virtual Private Network and all the enhanced security features that come with one. Here at 4GoodHosting, we’ve been promoting them for our customers very actively, likely in the same way every other Canadian web hosting provider has. There’s merit to the suggestion, as VPN connections protect online privacy by creating a secure tunnel between the client – who typically uses a personal computing device to connect to the internet – and the Internet.

Nowadays, however, VPN networks aren’t as automatic as they once were when it comes to trusting in secure connections and knowing that there won’t be data leaks. The good news is that even people with the most average levels of digital understanding can be proactive in protecting their VPN from data leaks. Let’s look at how that’s done here today.

Workings of VPN

A reliable VPN connection disguises the user’s geographical location by giving it a different IP address. There is also architecture in place to encrypt data transmitted during sessions and provide a form of anonymous browsing. As it is with almost all internet tools, however, VPN connections can also face certain vulnerabilities that weaken their reliability. Data leaks are a concern amongst information security researchers who focus on VPN technology, and it’s these issues that are most commonly front and centre among them:

  1. WebRTC Leaks

Web Real-Time Communication (WebRTC) is an evolution of VoIP (Voice over Internet Protocol) for online communications. VoIP is the technology behind popular mobile apps such as Skype and WhatsApp, and it’s been the leading force behind making legacy PBX telephone systems at many businesses entirely obsolete.

WebRTC is also extremely valuable with the way that it allows companies to hire the best personnel. Applicants can be directed to a website for online job interviews with no need for Skype or anything similar installed.

Everything would be perfect, except for the fact that the IP addresses of users can be leaked – even through a VPN connection.

  2. DNS Hijacking

It’s fair to say that hijacking domain name system (DNS) servers is one of the most tried-and-true hacking strategies, and interestingly a large portion of that has been made possible by well-intentioned efforts to enact internet censorship. The biggest DNS hijacking operation on the planet is conducted by Chinese telecom regulators through the Great Firewall, put in place with the aim of restricting access to certain websites and internet services.

DNS hijacking encompasses a series of attacks on DNS servers, but arguably the most common one involves taking over a router, server or even an internet connection with the aim of redirecting traffic. By doing so hackers are able to impersonate websites; your intention was to check CBC News, but instead you’ll be directed to a page that may resemble it but actually uses code to steal passwords, compromise your identity, or leave you with malware on your device.

Oftentimes WebRTC and DNS hijacking work in conjunction with each other: a malware attack known as DNSChanger can be injected into a system by means of JavaScript execution, followed by a WebRTC call that you’re unaware of. Done successfully, it can obtain your IP address.

Other lesser-known vulnerabilities associated with VPN networks involve public IP addresses, torrents, and geolocation.

How to Test for Leaks

It might be best to cut right to the chase here: the easiest way to determine if you’ve got a leak is to visit IPLeak.net with your VPN turned off. This site is a very handy resource. Once you’ve noted the results, turn your VPN back on and repeat the test.

Then, you compare results.

The torrents and geolocation tests available are fairly worthwhile themselves, but probably not as telling an indicator as the DNS test. Navigating the internet is done by your device communicating with DNS servers that translate web URLs into numeric IP addresses. In the bulk of those instances, you’ll have defaulted to your ISP’s servers, and unfortunately these servers tend to be very leaky to begin with.
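In essence, the comparison you’d do with those two IPLeak.net results comes down to simple set arithmetic: note the DNS resolvers reported without the VPN, note them again with it, and treat any overlap as a red flag. A minimal Python sketch, with made-up addresses drawn from the reserved documentation ranges:

```python
def dns_leak_suspected(baseline_servers, vpn_servers):
    """Return any DNS resolvers that appear both without and with the
    VPN active; overlap suggests queries are escaping the tunnel."""
    return sorted(set(baseline_servers) & set(vpn_servers))

# Hypothetical results from two IPLeak-style test runs
without_vpn = ["203.0.113.5", "203.0.113.6"]   # ISP's resolvers
with_vpn = ["198.51.100.9", "203.0.113.5"]     # one ISP resolver still visible

print(dns_leak_suspected(without_vpn, with_vpn))          # → ['203.0.113.5']
print(dns_leak_suspected(without_vpn, ["198.51.100.9"]))  # → [] (no leak)
```

If the list comes back non-empty, some of your DNS traffic is still going through your ISP rather than the VPN provider’s resolvers.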

Leakage through your local servers can serve up your physical location to those with bad intentions, even with a VPN set up and utilized. VPN services route their customers through servers separate from their ISP in an effort to counter these actions.

Once you determine your data is leaking, what can you do to stop it? Read on.

Preventing Leaks and Choosing the Right VPN

A good suggestion is to disable WebRTC in your browser, and to do so even before installing a VPN solution. Some developers have made this the default configuration, while most of the better ones will at least offer it as an option you can enable.

Search ‘WebRTC’ in the help file of your browser and you may be able to find instructions on how to modify the flags or .config file. Do so with caution, however, and don’t take actions until you’re 100% certain they’re the correct ones or you may risk creating quite a mess for yourself.

Other good preventative measures include:

  • Going with the servers suggested when configuring your VPN – typically not those of your Internet service provider (ISP), but ones maintained by the VPN provider. Not all providers maintain their own, though.
  • Aiming to have a VPN with upgraded protocols making it compatible with the new IPv6 address naming system. Without one, you’ll have a much greater risk of leaks. If you’re about to move to a VPN, this should be one of your primary determinations.
  • Making sure your VPN uses the newest version of the OpenVPN protocol, especially if you’re on a Windows 10 OS device (it has a very problematic default setting where the fastest DNS server is chosen automatically; OpenVPN prevents this).
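For those comfortable editing their VPN configuration directly, parts of that checklist translate into OpenVPN client directives. The fragment below is a sketch only: the server hostname is hypothetical, and `block-outside-dns` is a Windows-specific directive aimed at exactly the leaky default DNS behaviour mentioned above.

```
client
dev tun
proto udp
# hypothetical VPN provider endpoint
remote vpn.example.com 1194
# route all traffic through the tunnel
redirect-gateway def1
# Windows only: stop DNS queries from escaping the tunnel
block-outside-dns
```

Your VPN provider’s own supplied configuration file should always take precedence over a hand-rolled one like this.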

Overall, the security of tunneled connections is going to be compromised big time by a leaky VPN. If the security of your data is a priority for you, then you should be evaluating VPN products, reading their guides, and learning about the best ways to secure your system against accidental leaks.

Keep in mind as well this isn’t a ‘set it and forget it’ scenario either. You need to check for leakage from time to time to ensure nothing has changed with your system. Last but not least, make sure the VPN you use has a kill-switch feature that will cut off your connection immediately if a data leak is detected.

New Epic Quickly Becoming Browser Of-Choice for Those Big on Privacy

Things change quickly in the digital world, and what was barely even on the radar can become a front and centre issue overnight in some cases. Go back 10 years and the issue of privacy in web browsing wasn’t something the vast majority of people paid even the slightest bit of attention to. Nowadays, however, it’s definitely a hot-button topic given all the news that’s come out about web browsing histories and the like being tracked, monitored, and then made available to whoever doesn’t mind paying for information about what people like YOU search for online.

Some people don’t have a problem with that. Other people have quite a significant problem with that. If you’re part of the second group then you may have already switched over to using a privacy-focused option like DuckDuckGo or something similar. It’s a fine privacy-promoting tool in itself, but it’s a bit of a generalist in that it works suitably well across the board but not especially well for any one framework.

And that’s where and why Epic coming onto the scene is as noteworthy as it is. It is a Chromium-based browser designed to ensure privacy without giving up anything in speed or functionality. It blocks ads as well as prevents user tracking, and also includes built-in protection against a wide range of surveillance methods, cryptocurrency mining scripts among them.

It promises to be just what the Doctor ordered for those who think these types of overwatch activities are unacceptable, and here at 4GoodHosting we’re like any other quality Canadian web hosting provider in that we agree with you wholeheartedly. Let’s take a look at what makes this new no-tracking web browser such a good fit and why it promises to be especially well received.

Surfers 1 / Watchers 0

It’s fair to say that it’s really a shame that the innocence and carefreeness of using the world wide web to gain information is gone now, and that government agencies, corporations, and malicious hackers lurking in the shadows and taking notes is entirely unacceptable. Even those who aren’t overly incensed at having their privacy violated will almost certainly choose to stay ‘incognito’ if the opportunity to do so exists.

Epic’s creator, Alok Bhardwaj, attributes much of his need to build such a resource to coming to understand that, on average, there are some 10 or so trackers on pretty much every website you visit. For some sites, there are up to 30 or 40 companies logging your visit.

Fortunately, his new Epic browser includes built-in protection against a wide range of surveillance tactics, and without any of the BS like what was seen in 2015 in the States with AT&T’s policy where subscribers had to pay up to 50% more to secure a reasonable level of privacy.

The original version of Epic has been around since August of 2018, but the Chromium-based version of it is still new to the scene. It allows users to enjoy private browsing without sacrificing speed or functionality, and also blocks ultrasound signal tracking and cryptocurrency mining scripts. Plus, with a new mobile browser on the way, Epic continues to take actions that support the company’s belief in a free internet.

 

Sight for Sore Eyes: Privacy-Focused Web Browser

U.S. President Donald Trump’s 2017 decision to scrap internet privacy rules as passed by the Federal Communications Commission in the previous year put an effective end to internet users having more rights concerning what service providers can do with their data. Here in Canada we certainly haven’t been immune to the increasingly grey areas of what can and can’t be done as far as monitoring a web browser user’s history.

Likely no one needs convincing that relying on governmental agencies to solve data privacy issues will result in little if anything being done. So we’re left to take matters into our own hands as much as we can. Good news on that front, as Epic is an exceptionally private browsing experience that’s also fast and intuitive, and based on Google’s open-source Chromium project for long-term practicality in the bigger picture of things.

That perspective was very important in the development of this new browser, according to Bhardwaj. Microsoft announced that the company would build their next browser on Chromium, and so the decision was made to build a browsing experience that’s very private, but just as fast as using Google Chrome.

Mission Accomplished

We’d say it is – Epic is one of the simplest, most private, and fastest browsers on the market today, and it’s really raised the bar that was set by the original private browser, Tor (which is still a great browser FWIW, still doing very well and also offering an extremely anonymous service).

One area where Epic meets a need that Tor can’t, however, is with malicious cryptocurrency activities. Hackers have used Tor to steal cryptocurrency from users, and fairly recently too.

Long story short, Epic is the only private browser out there that just works out of the box with a high level of privacy and speed, and it doesn’t have any of the issues where advanced security protocols render certain websites undeliverable. In the event that one won’t load, Epic lets you turn off the proxy and ad blocking features for a particular website if needed.

Other appealing features:

  • Free VPN
  • 1-click encrypted proxy
  • Blocks fingerprinting and ultrasound signaling
  • Locally stored database of the top 10,000 websites in the world

Coming to Mobile Soon

Epic is expected to launch the company’s mobile browser before long. They expect their mobile browsers to be even more significant than the desktop browsers, given the scale that mobile’s going to operate on. With the extent to which most of us use our smartphones for internet search queries, there’s no doubt that this mobile browser release will put Epic even more in the spotlight in the near future.

5G Networks: What to Expect

We don’t know about you, but for those of us here it doesn’t seem like it was that long ago that 3G Internet speeds were being revelled in as the latest and greatest. Things obviously change fast, as 3G has been in the rear view mirror for a long time now, and the reality is that the newest latest and greatest – 4G – is about to join it there.

Here at 4GoodHosting, the fact we’re a leading Canadian web host makes us as keen to learn more about what the new 5G networks have in store for us as anyone else who’s in the digital space day in and out. It appears that we’re in for quite a treat, although there are some who suggest tempering expectations. That’s to be expected anytime wholesale changes to infrastructure key to big-picture operations are forthcoming.

Nonetheless, we’re supposed to be immersed in the 5G world before the end of next year. Mobile 5G is expected to start making appearances in cities around North America this year, with much more extensive rollouts expected in 2020, so a discussion of what we can all expect from 5G is definitely in order. Let’s do it.

What is 5G, and How’s It Going to Work?

To cut right to it, 5G is the next generation of mobile broadband that will augment 4G LTE connections for now before eventually replacing them. 5G is promising to deliver exponentially faster download and upload speeds along with drastically reduced latency – the time it takes devices to communicate with each other across wireless networks. Right, that alone is worthy of some serious fanfare, but fortunately there’s even more to this.

But before getting into additional benefits expected to be seen with 5G networks, let’s have a look at what makes them different from 4G ones and how exactly these new super networks are predicted to function.

Spectrum-Specific Band Function

It’s important to start with an understanding of the fact that unlike LTE, 5G is going to operate on three different spectrum bands. The lowest will be the sub-1GHz spectrum bands. These are what’s known as low-band spectrum, and they’re the ones used for LTE by most carriers in North America. This spectrum is quite literally running out of steam, so it’s ready to be replaced. It does provide great area coverage and signal penetration, but peak data speeds never exceed 100Mbps, and often you’re not even anywhere close to that.

Mid-band spectrum provides faster speeds and lower latency, but the long-standing complaint related to it is that it fails to penetrate buildings, and peak speeds top out at around 1Gbps.

High-band spectrum (aka mmWave) is what most people think of when they think of 5G, and it can offer peak speeds up to 10Gbps along with impressively low latency most of the time. The major drawback here, though? It has a low coverage area, and building penetration is poor.
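Some back-of-the-envelope arithmetic puts the gap between those quoted peaks in perspective. Here is a small Python sketch using the 100Mbps low-band ceiling and the 10Gbps mmWave figure (decimal units, as carriers quote them, with real-world overhead ignored):

```python
def download_seconds(size_gb, speed_mbps):
    """Idealized transfer time: 1 GB = 8,000 megabits, overhead ignored."""
    return (size_gb * 8000) / speed_mbps

# A 5 GB file at each quoted peak rate
print(download_seconds(5, 100))      # low-band ceiling → 400.0 seconds
print(download_seconds(5, 10_000))   # mmWave 5G peak → 4.0 seconds
```

In other words, a download that takes the better part of seven minutes at the low-band ceiling finishes in a few seconds at the mmWave peak; real-world speeds will of course fall somewhere well inside those ideals.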

It appears that most carriers are going to start out by piggybacking 5G on top of their 4G LTE networks, and then nationwide 5G-exclusive networks will be built. Providers are very aware that small cells are going to be required so that these souped-up 4G LTE networks don’t have their 5G appeal diminished by poor penetration rates and intermittently average download speeds.

In this regard, we all stand to benefit from the industry being cautious about not rolling out 5G on its own and then having growing pains with these networks.

Right, some people may not be familiar with small cells. They’re low-power base stations that cover small geographic areas that allow carriers using mmWave for 5G to offer better overall coverage area. Beamforming will be used to improve 5G service on the mid-band by sending a single focused signal to each and every user in the cell, while systems using it monitor each user to make sure they have a consistent signal.

Latency promises to be nearly if not entirely non-existent between the small cells and beamforming within 5G-enabled 4G LTE networks.

Examples of How 5G SHOULD Make Things Better

  1. Improved broadband

The reality today is that carriers are running out of LTE capacity in many major metropolitan areas. In some spots, users are already experiencing noticeable slowdowns during busy times of day. 5G will add huge amounts of spectrum in bands that have not been dedicated for commercial broadband traffic.

  2. Autonomous vehicles

Uber may have a devil of a time getting a foothold in Vancouver, but you can likely expect to see autonomous vehicles made possible with ubiquitous 5G deployment. The belief is that it will make it possible for your vehicle to communicate with other vehicles on the road, provide information to other vehicles regarding road conditions, and share performance information with both drivers and automakers.

This application has a TON of promise, and it’s definitely one to keep an eye on.

  3. Public Infrastructure & Safety

It’s also predicted that 5G will allow cities and other municipalities to operate with greater efficiency. All sorts of civic maintenance processes will be made more efficient by means of 5G networks.

  4. Remote Device Control

The remarkably low levels of latency expected with 5G make it so that remote control of heavy machinery may become possible. This means fewer actual people in hazardous environments, and it will also allow technicians with specialized skills to control machinery from any location around the globe.

  5. Health Care

5G and its super low latency may also be huge for health care applications. Since URLLC (ultra-reliable low-latency communication) reduces 5G latency even further than what you’ll see with enhanced mobile broadband, we may see big improvements in telemedicine, remote recovery and physical therapy via AR, precision surgery, and even remote surgery in the very near future once 5G becomes the norm.

One of the most beneficial potential advances that may come with 5G as it concerns healthcare is that hospitals may be able to create massive sensor networks to monitor patients, allow physicians to prescribe smart pills to track compliance, and let insurers monitor subscribers to determine appropriate treatments and processes.

  6. IoT

Last but certainly not least is the way 5G will benefit the Internet of Things. As it is now, sensors that can communicate with each other tend to require a lot of resources and really drain LTE data capacity.

With 5G and its fast speeds and low latencies, the IoT will be powered by communications among sensors and smart devices. These devices will require fewer resources than ones that are currently in use, and there are huge efficiencies to be had with connecting to a single base station.

It’s interesting to think that one day 5G will probably be as long-gone and forgotten as 3G is now, despite the fanfare we all gave it many years ago. You can’t stop progress in the digital world, and it’s fair to say that 99% of us wouldn’t want to even if we could.

 

What’s Best? Sleep, Hibernate, or Shut Down Your Computer at Night


Most people are perfectly fine with prompting their desktop or notebook to ‘nod off’ at the end of a day, especially those who work on their device and will be back in front of the screen first thing tomorrow morning. It’s true that the machine can go into a low-power mode with no light coming from the screen and illuminating the room once you tell your computer to go to sleep. Others who aren’t going to be using theirs as regularly may instead choose to shut it down, and be perfectly all right with the time it takes to get it booted up and running again when they next want to use it.

The majority won’t really give it much more thought than that, and here at 4GoodHosting we’re like any other Canadian web hosting provider with a good reputation in that we’ve got our minds on much more detailed and relevant aspects of what’s going on in the digital world. But like any of you we’ve got desktops and notebooks at home too. That’s why we found a certain article on this topic to be informative in just the type of way we aim to offer our weekly blog content, and so here it is for you too!

Let’s have a look at this, and try to come to a consensus on what’s the best choice for you when you’re done using your computer – put it to sleep, have it hibernate, or shut it down entirely.

Popular Thinking

The standard belief is that choosing not to turn your computer off at night is preferable, because shut downs and start ups tax the computer and lead to some of its components wearing out more quickly. Alternately, leaving it on does the same for other ones that never get to rest when the computer is still running, and even if it’s long since asleep.

There’s some truth to each of them, so the question then becomes which is the better of the two choices. Here’s the skinny on all of that.

The Issue

It’s easy to understand why people believe that cutting the power without shutting down properly has the potential to do damage to your computer’s hardware. But can frequent shutdowns and restarts do the same? And what’s the comparison between turning the device off and leaving it on but in a low-power ‘sleep’ or ‘hibernate’ state when not in use?

The source turned to for a definitive answer in this case was Best Buy’s Geek Squad, and here’s what they had to say on a topic that most would agree they’re very well qualified to comment on. They were asked very plainly: is it best to leave my computer on and let it go to sleep and eventually hibernate when I’m done using it, or is it best to shut it down and then restart it later?

The Verdict, and Reasoning

According to the knowledgeable guys and gals at Geek Squad, the answer as to which choice is best depends on how often you use your computer. Those who use it more than a few times every day are best to leave it on and let it drift off to sleep. Alternately, those who use it for an hour or two here and there should go ahead and turn it off between usages.

The long and short explanation for this – and the most relevant piece of information regarding resultant wear & tear on the device – is that leaving a computer on indefinitely is less stressful overall than turning it on and off, especially if you were to do that several times a day.

Every time a computer turns on, the surge of power required for the boot up isn’t harmful in itself, but over years the repeating of that power surge can shorten the computer’s lifespan. These risks are of course greater for an older computer, and in particular for ones that have a traditional hard disk drive with moving parts rather than a solid state drive that’s more robust.

That said, all mechanical parts will fail eventually, and using them constantly will inevitably wear them down. There are drawbacks to leaving devices on too; computers heat up more and more as they work, and certain processes continue even when the device is asleep. Heat is detrimental to all components, and with computers left on you have a steady supply of it at varying moderate levels.

However, the heat and gear grinding that goes on with start up IS more detrimental long term. The exception to this would be LCD panel displays, were they not set to go dark after a certain period of inactivity. If they weren’t, leaving your computer on would be much more problematic – not to mention the nuisance of the never-ending illumination of your workspace area.

Batteries and hard drives also have a limited life cycle. Allowing them to turn off (or sleep) and spin down when not being used will extend the life of these components, and especially if you’re only restarting the computer once or twice in a week if at all.

Even Better Reasoning

Some people will aim to refute this belief, stating that the very concept that shut downs and start ups make for damaging stress on components is a very dated way of looking at things. There are arguments to be made for both sides.

Reasons to leave it on

  • Using the PC as a server means you want to be able to remotely access it.
  • Background updates, virus scans, or other activities are welcome to go ahead while you’re away.
  • Long waits during start ups are unacceptable.

Reasons to turn it off

  • Conserving electricity, which can slightly reduce your power bill.
  • Wishing to not be disturbed by notifications or fan noise.
  • Rebooting periodically does inherently improve computer performance.

Having It Sleep, Or Hibernate?

Sleep puts a computer into a low power state without turning it completely off, while when hibernating your computer stops using power and resumes where it was when you put it in that mode. Overall, the consensus seems to be that sleep mode is preferable to hibernate because hibernate produces wear and tear that is similar to start and stop.

The recommendation is that if you're going to leave it on all the time, make sure you have the right sleep options set up in the Shut down menu. That way you can save a lot of power with no real downside.

Surge Protectors a Must

We’re going a little off topic here to wrap this up, but it really is worth relating the importance of using a surge protector between your computer and the wall outlet. Unless you actually like the idea of having expensive componentry fried by an electrical spike that arrives without warning, a surge protector is going to be a nice defense that hopefully you never need.

The best choice is to get an uninterruptible power supply (UPS), which is basically a battery-backed surge protector. These help condition power to even it out, and absorb power spikes that can do irreparable damage to your computer's components.

Lastly, keep your computer clean. Spend some time now and then to open it up and get rid of dust and debris. Uninstalling old software and cleaning up old files and processes is recommended too.

The Final Decision

Here it is – if you use your computer more than once a day, leave it on at least all day. If you use it only briefly in the morning and at night, leaving it on overnight is probably best. Those who use their computer for only a few hours once a day, or even less than that, should go ahead and turn it off when they're done.

 

Distractions Begone: Introducing Google Chrome’s Focus Mode

It’s been said that here in the 21st century we’ve never had more distractions pulling at our attention day in and day out. This is especially true when we’re in front of a screen, and we imagine not many of you need convincing of that. Distractions aren’t particularly problematic when you’re only web surfing or the like; more often than not they’re what you might call an irresistible nuisance in those situations.

When you’re on your computer for productive purposes, however, all those distractions can add up to a considerable amount of lost time. That’s where people might wish there was something to be done about them… and it appears as if now there is.

Here at 4GoodHosting, we’re like any industrious Canadian web hosting provider in the way we keep our eyes and ears open for developments in the computing world. Most of them aren’t worthy of discussing at length here, but considering that nearly everyone has had difficulty staying on task when using the Internet, this one definitely is.

Google Introduces Focus Mode for Chrome Browser

As if the Chrome browser needed any more assistance in being the nearly ubiquitous web browser of choice these days. While Google hasn’t actually announced this new feature as of yet, tech insiders have found a new flag that indicates whether or not ‘focus mode’ is on.

It should be mentioned that they’re not broaching uncharted territory here. Different applications have attempted to take on the problem of getting people to focus while working on a computer, and there have been software solutions available for both Mac and PC that arrived with little or no fanfare. It goes without saying, however, that no power player commands the attention that Google does these days.

At this time little is known about the Focus Mode feature, aside from the fact that it will soon be implemented in the world’s most popular web browser. The flag reportedly indicates that if ‘#focus-mode’ is enabled, a user can switch to Focus Mode.

What, and How?

We bet nearly all of you will be saying right, right – but how exactly is Focus Mode going to work? At the moment, we can only speculate on the features the new option might offer to users. We think it’s safe to assume that Focus Mode will restrict specific websites or applications from being accessed. For example, Focus Mode may stop a user from browsing sites such as YouTube, Reddit, and Facebook (likely the biggest culprits for most people!). Other industry insiders have suggested that the mode may integrate with Windows 10‘s Focus Assist when working in conjunction with a PC’s operating system.

That last part there is important, as it appears that – at least initially – Focus Mode will be available on PCs running Windows 10, and it’s believed that it will allow users to silence notifications and other distracting pop-ups. We’re prone to wonder if Focus Mode will also work with Windows 10 to stop websites from screaming out for your attention, or restricting those pop-up announcements that are way too common and explicitly designed to take your attention elsewhere.

Patience, Grasshopper

As mentioned, Focus Mode isn’t quite here yet, but for those who are distracted way too easily (and you can certainly count us among them), when time is a valuable commodity for getting needed tasks done, this really has a lot of potential.

We can most likely expect to see Focus Mode in a test build such as Chrome Canary before it becomes a mainstream feature available to one and all with Google Chrome. We’ll be following these developments keenly, and we imagine that now a good many of you will be too.

 

Getting Ready for Wi-Fi 6: What to Expect

Most people aren’t familiar with Wi-Fi beyond understanding that it means a wireless internet connection. Those same people won’t be aware that over the last near-decade the digital world has moved from Wi-Fi 4 to Wi-Fi 5, and that now Wi-Fi 5 is set to be replaced by Wi-Fi 6. What’s to be made of all this for the average person, who only knows that the Wi-Fi networks in their home and office are essential parts of their connected day-to-day, and that the Wi-Fi in Starbucks is pretty darn convenient as well?

The numeric chain that identifies a Wi-Fi standard is something they may well recognize, though. 802.11 is the standard, but the Wi-Fi 4 you had from 2009 to 2014 is a different 802.11 from the Wi-Fi 5 you’ve had since then, and what’s to come later this year with Wi-Fi 6 will be a different 802.11 again. Right, we get you – what’s the difference exactly?

Here at 4GoodHosting, we’re like any quality Canadian web hosting provider in that the nature of our work and interests makes it so that we pick up on these things, if for no other reason than we’re exposed to and working with them on a regular basis. Much of the time these little particulars related to computing, web hosting, and digital connectivity aren’t worth discussing in great detail.

However, because Wi-Fi is such an essential and much-appreciated resource for all of us we thought we’d look at the ‘new’ Wi-Fi set to arrive later this year here today.

Wi-Fi 6: Problem Solver

When we look at ‘802.11ac’, the average person won’t get the significance of it. The fact is, however, they should – and Wi-Fi 6 is being designed in part as a solution to that problem.

What we’re going to see is the beginning of generational Wi-Fi labels.

Let’s make you aware that there is a collective body known as the Wi-Fi Alliance, which is in charge of deciding, developing, and designating Wi-Fi standards. We are all aware of how devices are becoming more complex and internet connections are evolving, and as they do, the process of delivering wireless connections also changes.

As a result, Wi-Fi standards — the technical specifications manufacturers follow to create Wi-Fi devices — need to be updated from time to time so that new technology can flourish and compatibility extends to nearly the entire range of devices out there.

As mentioned though, the naming of Wi-Fi standards is totally foreign to the average person if they ever try to figure out what that numeric 802-something chain stands for. The Wi-Fi Alliance’s response is now to simply refer to the number of the generation. Not only will this apply to the upcoming Wi-Fi 6, but it will also be retroactive and thus apply to older standards. For example:

802.11n (2009) – Wi-Fi 4

802.11ac (2014) – Wi-Fi 5

802.11ax (expected late 2019) – Wi-Fi 6
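The new naming scheme amounts to a simple lookup table. Here is a minimal Python sketch of that mapping – the labels themselves come from the Wi-Fi Alliance list above, but the function name and fallback behaviour are our own illustration:

```python
# Retroactive generational labels announced by the Wi-Fi Alliance.
WIFI_GENERATIONS = {
    "802.11n": "Wi-Fi 4",   # 2009
    "802.11ac": "Wi-Fi 5",  # 2014
    "802.11ax": "Wi-Fi 6",  # expected late 2019
}

def generation_label(standard: str) -> str:
    """Translate an IEEE 802.11 standard name into its generational label.

    Older standards that never received a generational label (e.g. 802.11g)
    simply fall back to their raw name.
    """
    return WIFI_GENERATIONS.get(standard, standard)

print(generation_label("802.11ac"))  # → Wi-Fi 5
```

As the fallback suggests, pre-2009 standards were never given a number, which is part of why the old and new labels will coexist on shelves for a while.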

It’s easy to see how this is a better classification approach, but there’s likely going to be a period of confusion where some products are labeled with the old code and some are just called Wi-Fi 4 or Wi-Fi 5, even though they’re functionally interchangeable as far as ‘type’ is concerned. Eventually, however, this should be resolved as older product labeling is phased out and everyone – or most people at least – becomes familiar with the new Wi-Fi classifications. In all honesty, if you pay even the slightest amount of attention you’ll begin to notice the difference without having to put much thought into it.

How Wi-Fi 6 Will Be Different – And Better

The biggest impetus for creating Wi-Fi 6 was to better accommodate the many new Wi-Fi technologies that have been emerging, and Wi-Fi 6 helps standardize them. Here are the most relevant developments, and exactly what they should mean for your wireless network.

Lower Latency

Lower latency is a BIG plus coming with Wi-Fi 6, and you’ll probably notice it right away. Reduced latency means shorter or no delays as data is sent – similar to ping rate and other such measurements. Low-latency connections improve load times and prevent disconnects and other issues more effectively. Wi-Fi 6 lowers latency compared to older Wi-Fi standards using more advanced technology like OFDMA (orthogonal frequency-division multiple access). Long story short, it packs data into a signal much more completely and reliably.

Speed

Wi-Fi 6 will also be faster – considerably faster compared to Wi-Fi 5. By offering full support for technologies like MU-MIMO, connection quality will improve for compatible mobile devices in a big way, and content delivery should be sped up accordingly. These improvements aren’t as dependent on your Internet speed as you might think, either: they can and likely will improve the speed of your Wi-Fi data and let you receive more information, more quickly.

Now a question we imagine will come up for most of you – will all routers be able to work with the new 802.11ax standard? No, they won’t. If your router is especially dated, you should happily accept the fact it’s time to get a newer model. It will be 100% worth it, don’t have any doubts about that.

Wi-Fi 6 is also going to mean fewer dead zones, as a result of expanded beamforming capabilities being built into it. ‘Beamforming’, you say? That’s the name for the trick your router uses to focus signals on a particular device, which is quite important if the device is having difficulty holding a connection. The new Wi-Fi 6 802.11ax standard expands the range of beamforming and improves its capabilities. Long story short again, ‘dead zones’ in your home are going to be MUCH less likely.

Improved Battery Life

Wi-Fi 6 is going to mean better battery life, and we’ll go right ahead and assume that’s going to be most appealing for a lot of you who are away from home for long periods of the day and taking advantage of Wi-Fi connectivity fairly often throughout.

One of the new technologies Wi-Fi 6 is set up to work with is called ‘TWT’, or target wake time. It assists connected devices with customizing when and how they ‘wake up’ to receive data signals over Wi-Fi. Devices are able to ‘sleep’ while waiting for the next necessary Wi-Fi transmission, and battery drain is reduced as a result. Your phone itself doesn’t sleep; only the parts of it that handle Wi-Fi do.

Everybody will like the idea of more battery life and less time spent plugging in to recharge.

Keep an Eye Out for the Wi-Fi 6 Label

How will you know if a router, phone or other device works with the new 802.11ax standard? Simply look for the phrase ‘Wi-Fi 6’ on packaging, advertisements, labels or elsewhere. Look up the brand and model # online if for some reason you don’t see it on the packaging. The Wi-Fi Alliance has also suggested using icons to show the Wi-Fi generation. These icons appear as Wi-Fi signals with a circled number within the signal.

Identifying these icons should help you pick out the right device. If not, you can of course always ask the person behind the till, and they should be knowledgeable regarding this (if they work there you’d have to assume they would be).

Keep in mind that most devices made around 2020 and later are expected to be Wi-Fi 6, so we’ll have to wait a year or so before they start to populate the market.

 

Project Pathfinder for an ‘Even Smarter’ SIRI

AI continues to be one of the most game-changing developments in computing technology these days, and it’s hard to argue that there’s a more commonplace example of AI than the digital assistants that have nearly become household names – Apple’s Siri and Amazon’s Alexa. Even a decade ago many people would have stated their disbelief at the notion that it might be possible to make spoken queries to a digital device and then have it provide a to-the-minute accurate reply.

The convenience and practicality of AI has been a hit, and what’s noteworthy about it is the way that folks of all ages have taken to it. After all, it doesn’t require even the slightest bit of digital know-how to address Siri or Alexa and rattle off a question. Indeed, both tech giants have done a great job building the technology for their digital assistants. With regards to Siri in particular, however, it appears that Apple is teaming up with a company that’s made a name for themselves developing chatbots for enterprise clients.

Why? To make Siri an even better digital assistant, and even more so a beacon of AI made accessible to everyday people.

Here at 4GoodHosting, like most Canadian web hosting providers we have the same level of profound interest in major developments in the computing, web hosting, and digital worlds that many of our customers do. This zeal for ‘what’s next’ is very much a part of what makes us tick, and this coming-soon improvement to Siri makes the cut as something worth discussing in our blog here today.

Proven Partnership

The aim is to make Siri much better at analyzing and understanding real-world conversations, and at developing AI models capable of handling their context and complexity. To do that, Apple has chosen to work with a developer they have a track record of success with: Nuance, an established major player in conversation-based user interfaces. Nuance collaborated with Apple on the original Siri, so this is round two.

As mentioned, Nuance’s present business is focused on developing chatbots for enterprise clients, and so they’re ideally set up to hit the ground running with Project Pathfinder.

Project Pathfinder

The focus of Project Pathfinder came from Apple’s belief that machine learning and AI can automate the creation of dialog models by learning from logs of actual, natural human conversations.

Pathfinder is able to mine huge collections of conversational transcripts between agents and customers before building dialog models from them and using those models to inform two-way conversations between virtual assistants and consumers. Conversation designers are then more able to develop smarter chatbots. Anomalies in the conversation flow are tracked, and problems in the script can then be identified and addressed.

Conversation Building

Voice assistants like Siri and Alexa have inner workings where your speech interacts with reference models. The models then try to resolve the intent of your question, and accurate replies depend on conversation designers doing two things: first, learning from subject matter experts, and second, a LOT of trial and error related to query behavior.

As far as Apple’s concerned, giving the nod to Nuance and their conversation designers was the best way to go.

Pathfinder empowers them to build on their existing knowledge base with deep insights gathered from real conversational interactions that have taken place inside call centers. More to the point, however, the software doesn’t only learn what people are discussing, but it also makes determinations on how human agents guide users through the transactions.

Adding more intelligence to voice assistants/chatbots is made possible with this information, and so Siri is primed to build on her IQ in the same way. It certainly sounds promising!

Self-Learning Conversation Analytics

All you need to do is spend a short period of time with Siri or Alexa and you’ll quickly find that they definitely have limitations. That’s a reflection of the fact that they’re built for the mass market, so they must handle much more diverse requests than chatbots built primarily for business. This means they come with a lack of focus, and it’s more difficult to design AI that can respond sensibly to spoken queries on the thousands of different topics out there. Then you have follow-up queries too.

Simply put, the queries posed to virtual assistants are open-ended human questions 95+% of the time, and as such they’re less focused and less predictable. So how do you build AI that’s more capable of handling the kind of complex enquiries that characterize human/machine interactions in the real world?

The answer to that is to start with call center chatbots, and that’s what the Pathfinder Project is doing. It will accelerate development of spoken word interfaces for more narrow vertical intents – like navigation, weather information, or call center conversation – and by doing so it should also speed up the development of more complex conversational models.

It will make these machines capable of handling more complex conversations. It will, however, take some time to come to fruition (projected for summer 2019). Assuming it’s successful, it will show how conversational analytics, data analysis and AI have the ability to empower next-generation voice interfaces. And with this we’ll also be able to have much more sophisticated human/computer interactions with our virtual assistants.

Seeing the power of AI unlocked with understood conversational context and intent – rather than primarily asking Siri or Alexa to turn the lights off and the like – promises to be a really helpful and very welcome advance in AI for all of us.

 

Chromium Manifest V3 Updates May Disable Ad Blockers

It’s likely that a good many of you are among the thousands upon thousands of people who have an ad blocker installed for your web browser of choice. Some people use them simply to avoid the nuisance of having to watch ad after ad, and it’s people like these who have led some sites to insist that you ‘whitelist’ them in order to proceed into the website. That’s perfectly understandable, as those paying advertisers are how the website generates income for the individual or business.

For others of us, however, who spend a great deal of the working day researching and referencing online, having to watch ads before getting to needed content makes an ad blocker much more a tool of necessity than of convenience. Still, we get caught up in more than a few sites that insist on being whitelisted too. For me, my ad blocker is a godsend and I don’t whitelist any website or disable my ad blocker for any of them.

Here at 4GoodHosting, part of what makes us a good Canadian web hosting provider is having built up an insight into what really matters to our customers. The bulk of them are people who use the Information Superhighway as a production resource rather than web ‘surfers’ for whom it’s more of an entertainment one. That’s why today’s news is sure to be very relevant for most of our customers.

Weakened WebRequest APIs

Some of you may not know how your ad blocker works, and that’s perfectly normal – as long as it does its job, you don’t really need to know. Chromium is the open-source browser project that underpins Google’s Chrome, and changes made there can be expected to flow into the browsers most people use day to day.

However, Chromium developers have shared in the last few weeks that among the updates they are planning for Manifest V3 is one that will restrict the blocking version of the webRequest API. The alternative they’re introducing is called the declarativeNetRequest API.

After becoming aware of it, many ad blocker developers expressed their belief that the introduction of the declarativeNetRequest API will mean many already existing ad blockers won’t be ‘blocking’ much of anything anymore.

One industry expert stated on the subject, “If this limited declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two existing and popular content blockers like uBO and uMatrix will cease to be functional.”

What is the Manifest V3 Version?

It’s basically a mechanism through which specific capabilities can be restricted to a certain class of extensions. These restrictions are indicated in the form of either a minimum or a maximum version.

Why the Update?

Currently, the webRequest API allows extensions to intercept requests and then modify, redirect, or block them. The basic flow of handling a request with this API is as follows:

  • Chromium receives the request → queries the extension → receives the result

However, in Manifest V3 the blocking form of this API will be limited quite significantly. The non-blocking form of the API, which permits extensions to observe network requests but not modify, redirect, or block them, will not be discouraged. In addition, the exact limitations to be placed on the webRequest API have yet to be determined.

Manifest V3 is set to make the declarativeNetRequest API the primary content-blocking API in extensions. This API allows extensions to tell Chrome what to do with a given request, instead of Chromium forwarding the request to the extension, which enables Chromium to handle a request synchronously. Google insists this API is overall a better performer and provides better privacy guarantees to users – the latter of which is of course very important these days.
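To illustrate the difference in rough terms: under the declarative model, the extension hands the browser a static rule list up front, and the browser evaluates it itself rather than calling back into the extension for each request. Here is a toy Python model of that idea – the rule fields loosely echo declarativeNetRequest’s urlFilter/action shape, but this is our own simplified sketch, not the actual API:

```python
# Simplified model of declarative blocking: the "extension" declares its
# rules once, and the "browser" evaluates them synchronously on its own,
# with no per-request callback into extension code.
RULES = [
    {"urlFilter": "ads.example.com", "action": "block"},
    {"urlFilter": "tracker.", "action": "block"},
]

def handle_request(url: str) -> str:
    """Return the action of the first matching rule, or 'allow' if none match."""
    for rule in RULES:
        if rule["urlFilter"] in url:  # real urlFilter matching is richer than substring
            return rule["action"]
    return "allow"

print(handle_request("https://ads.example.com/banner.js"))  # → block
print(handle_request("https://example.com/index.html"))     # → allow
```

The ad blocker developers’ complaint, in these terms, is that the browser now owns the matching loop: an extension can only supply rules in the one format the browser understands, not its own filtering engine.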

Consensus Among Ad Blocker Developers and Maintainers?

When informed about this coming update, many developers were concerned that the change will end up completely disabling all ad blockers. The concern is that the proposed declarativeNetRequest API will make it impossible to develop new and functional filtering-engine designs, because the declarativeNetRequest API is no more than the implementation of one specific filtering engine, and some ad blocker developers have commented that it’s very limited in its scope.

It’s also believed that with the declarativeNetRequest API, developers will be unable to implement other features, such as blocking media elements that are larger than a set size and disabling JavaScript execution through the injection of CSP directives, among others.

Others are making comparisons to Safari’s content-blocking API, which essentially puts limits on the number of admissible rules. Safari introduced that similar API recently, and the belief is that this is why Google has gone in the same direction. Many seem to think that extensions written against that API are more usable, but still fall well short of the full power of uBlock Origin. The hope is that this won’t be the final form of the API.

Dedicated IP Addresses and SEO

Even the most layman of web endeavourers will be familiar with the acronym SEO. We imagine, further, that there are very few if any individuals anywhere who don’t know it stands for search engine optimization, and who don’t understand just how integral SEO is to success in digital marketing. Most people with a small business that relies on its website for maximum visibility with prospective customers will hire a professional to optimize their site for SEO. That continues to be highly recommended, and for 9 out of 10 people it is NOT something you can do effectively on your own, no matter how much you’ve read online or how many YouTube videos you’ve watched.

Here at 4GoodHosting, we are like any other top Canadian web hosting provider in that we offer SEO optimization services for our clients. Some people think that choosing the best keywords and using them at the ideal density is most integral to good SEO, and that’s true by and large. But there are a number of smaller yet still significant factors that influence SEO, and they’ll be beyond the wherewithal of most people.

Whether websites benefit from a Dedicated IP address rather than a Shared IP address isn’t something you’ll hear discussed regularly. When you learn that the answer is yes, they do – and exactly why – it’s a switch many people will want to consider if they currently have a Shared IP address. Let’s have a look at why that is today.

What Exactly Is an IP address?

For some, we may need to start at the very beginning, so let’s begin by defining what exactly an IP address is. Any device connected to the Internet has a unique IP address, whether it’s a PC, laptop, mobile device, or your web host’s server. It’s made up of a string of four numbers, each ranging from 0 to 255. Here’s an example of one:

1.25.255.255

This numerical string identifies the machine you are using. Once it’s identified – and it has to be – the Internet is able to send data to it, and you can access the hundreds of thousands of websites along the Information Superhighway.
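If you’d like to see the four-numbers-from-0-to-255 rule in action, Python’s standard ipaddress module makes it easy to check for yourself:

```python
import ipaddress

# Parse the example address from above; each of its four numbers
# must fall between 0 and 255 for it to be a valid IPv4 address.
addr = ipaddress.ip_address("1.25.255.255")
print(addr.version)  # → 4

# A value outside the 0-255 range is rejected outright.
try:
    ipaddress.ip_address("1.25.255.256")
except ValueError:
    print("not a valid IP address")
```

Nothing about hosting depends on you running this, of course – it’s just a quick way to confirm the format described above.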

What’s a Shared IP address?

In most instances, the server your web host uses to host your site will be a single machine with a single matching IP address. For most people – and nearly all who go with the most basic hosting package without giving it much thought – you’ll be set up in an arrangement where that server is hosting thousands of websites like yours. It’s not ‘dedicated’ to you and your site exclusively.

Instead, all of the websites hosted on it will be represented by the single IP address allocated to the web host’s server. Now if your website is more of a personal venture or hobby and it’s NOT going to be a leverage point in trying to secure more business, shared hosting will probably be fine. Alternately, if page rankings are a priority for you, shared hosting may be putting you at a disadvantage.

The solution? A dedicated IP address for your Canadian website. If you need one, we can take care of that for you quickly and fairly easily. But we imagine you’ll need more convincing, so let’s move on to explaining what constitutes a Dedicated IP address.

The Dedicated IP Address

It is common for more than one site to reside on a given server, with every one of them represented by default by the single IP address assigned to that server. A Dedicated IP address, by contrast, is an IP address allocated to a single website – yours – rather than shared across everything hosted there.

The Purpose of Dedicated IP Addresses

The primary appeal of Dedicated IP addresses is that they make large ecommerce operations more secure, in particular as regards sensitive data like credit card numbers. On a more individual scale, though, a dedicated IP address is superior for SEO interests as well.

Why is that? Let’s list all of the reasons here:

1. Speed

When you share space, you share resources, and as far as shared web hosting and shared IP addresses are concerned that means you are sharing bandwidth. The long and short of it is that all those other sites on the same server will be slowing yours down. That might be a problem in itself, but if it isn’t, the way slow site speeds push you further down Google’s rankings will be.

While adding a unique IP address to your site will not automatically mean it loads faster, migrating to a Dedicated Server with a Dedicated IP address definitely will. Sites with a Dedicated IP address are faster, more reliable, and more secure, and that’s a big deal.

2. SSL

For nearly 5 years now Google has been giving preference to websites that have added an SSL certificate with a 2048-bit key. The easiest way to see whether that’s been done is the site’s URL changing from HTTP to HTTPS. SSL sites typically utilize unique IP addresses. Google continues to insist that SSL impacts less than 1% of searches, but it’s a factor nonetheless, and is another benefit of a Dedicated IP address.

SSL can make your website more trusted on public networks and can make it operate marginally faster, and the benefit is that visitors get a faster response from the website, since it isn’t held back in Google’s rankings the way it would be without an SSL cert. The majority of ecommerce sites with a Dedicated IP address will also have an SSL cert.

3. Malware

Malware is software that’s designed and disseminated for the explicit purpose of throwing wrenches into the gears of a working web system. Unfortunately, the thousands of websites that may be on a shared server drastically increase the risk of exposure to malware if you’re one of them. Further, when you share an IP address with any site that’s been infected with malware, your site is actually penalized despite the fact that it’s not you who’s been infected.

In these cases, you’ll be best served by going with a Dedicated IP address and choosing a more reliable Canadian web hosting provider that has measures in place to prevent malware from making its way into the servers in the first place. A dedicated IP means you’re standing alone, and you’re regarded accordingly.

How Do I Get a Dedicated IP Address?

If you’re with us here at 4GoodHosting, all you need to do is ask. We’ve been setting our customers up with Dedicated IP addresses for quite some time now, and you’ll find that when you do so through us it’s not nearly as pricey as you had expected it to be.

It’s highly recommended for any ecommerce site, or one that’s utilized for strategic business aims, and it’s fair to say you really can’t go wrong moving to a dedicated server if you’ve committed to doing anything and everything to protect your SEO and maintain your page rankings moving forward. The vast majority of people see it as a wise investment, and of course you always have the option of switching back to a shared hosting arrangement if, over time, you don’t see any real difference or benefit for you.