.Inc Domains Now Available for Business Websites

For the longest time the .com domain extension was the one and only in the world of domains. In the early days of the Internet that wasn’t an issue, but as ever-greater numbers of sites – many thousands of them – came online, the need arose for alternative domain extensions. With a quick nod of acknowledgement to the .org and .net domains of the world, the most noteworthy development was the creation of country-specific domain extensions. For example, if you’re here in Canada you’ll know that .ca domains are pretty much as numerous as .com ones.

All of this is why news like this is going to be of interest to any leading Canadian web hosting provider, whether they’re way out west like us here at 4GoodHosting or anywhere between here and St. John’s. And to get right to it, that news is that a new top-level domain is now available for registration.

Introducing .inc domains!

Most will be aware that inc. is short for incorporated, which, without going into great detail, means that a business acts and exists independently of its owners.

Back to the relevant information, however – the appeal of these new .inc domains is obviously that they’ll be an immediate indicator of a website being a business one. It’s likely that many decision makers will also attach a greater sense of authority to a .inc domain. This new option follows Google making the .dev domain available for developers.

Specifically for Businesses

The new .inc TLD will be operated by Intercap Registry Inc., and their belief is that any business whose title ends with ‘Incorporated’ will be quite keen to have a website address that ends the same way. And it’s not just a select few who’ll be able to go this route if they’d like to. The .inc domain name will be available to register in the official language of more than 190 countries and can be used by any business – from start-ups to established major players – in many different industries.

It’s safe to say that having a .inc at the end of a web address can help businesses gain credibility and, as mentioned, add a certain level of inherent authority to their website.

Bonuses

If that’s not incentive enough for incorporated businesses all over the globe, there are some perks available for those ready to make the switch – free member benefits worth $2,500 from leading brands. Among others, they are:

  • $1,000 in free transaction fee credits from Square
  • Free press release on GlobeNewsWire to announce the new .inc website
  • Free $100 credit for sponsored job listings on Indeed
  • $150 ad spending match for Google Ads

Those making the switch will also be encouraged to know that website migration to a .inc domain will not involve downtime or negatively impact search engine optimization (SEO).

Harder to Cybersquat

Now here’s a term that may be unfamiliar to many of you. Cybersquatting is registering, selling, or using a domain name with the intent of profiting from the goodwill of another person or organization’s trademark. It’s generally done by buying up domain names that use the names of existing businesses, with the intent of selling them for a profit later on.

It is also hoped that these .inc TLDs will deter cybersquatting, allowing businesses to avoid having to deal with pre-registered domains related to their business name being held for ransom. The .inc TLDs are now available for priority trademark registration until April 30, and can be registered here.

After April 30 and until May 7 they will be available for priority public registration, and after that for global public registration. Priority trademark registration is expected to set businesses back around $3,500. Definitely not cheap for the average individual, but no doubt a good many businesses will see that as money reasonably well spent, all things considered.

If you’d like to know more about these domains, or have any questions about domain names and domain name extensions in general, we’d be happy to answer them for you. Contact us anytime.

What to Expect From Next Month’s Windows 10 Update from Microsoft

Windows continues to be the most popular and ubiquitous operating system for desktops and notebooks around the world, and while there are those who will have nothing to do with it (see: Mac devotees), that fact is a testament to the enduring popularity of what is, for the most part, ‘old faithful’ when it comes to computer operating systems.

Here at 4GoodHosting, we’re a Canadian web hosting provider that’s in the position to see the value of both Mac and PC operating systems, and it’s true that both have their strengths and weaknesses – which is of course true of pretty much everything. One thing that Microsoft has benefited from for decades now is that it was first to the party, and that’s meant that many people will always choose a Windows OS device because it’s especially familiar for them.

And so it is that the next version of Windows 10 – scheduled for a May 2019 release – is now just around the corner. This is not going to be a massive overhaul of the OS by any means, but as it approaches its 4-year anniversary there are some nice tweaks to make it fresher and better suited to user preferences. Foremost among these are a new light theme and changes to the search experience, Cortana, and more.

Let’s have a look at the most recent update to Windows 10 here today.

On the House

We’ll start by stating for anyone who might be unaware that Windows 10 updates are always free. The May 2019 Update via Windows Update will be provided at no charge for existing Windows 10 users on any device deemed compatible with the update. The noteworthy difference here, however, is with the rollout method – it is no longer automatically downloaded to your PC.

What you’ll get instead is a notification in Windows Update that the May 2019 update is available. From there you’ll have the option of downloading or installing it. However, only those running a version of Windows 10 that is close to end of support will receive the update automatically. Just as with prior releases, rollouts of major Windows 10 updates are gradual to ensure the best quality experience. For this reason you might not see the May 2019 update right away.

Further, as regards the timing of this, let’s not forget Microsoft’s troubles releasing previous Windows 10 versions. Don’t count on this update arriving exactly when it’s expected.

Improvements

Let’s shift to the meat of all of this, and detail all of the improvements to be seen in the new Windows 10.

  • Light Theme & Improved Start Menu

Microsoft debuted a dark mode for Windows 10 in 2018, and a new light theme is being introduced with this update to improve overall contrast within the operating system. Users will see that the taskbar, start menu, and Action Center are a brighter and lighter white color. Some icons in the system tray and taskbar are now also tailored to match the new theme – including both OneDrive and File Explorer.

A new and improved start menu is part of this too. Installing the May 2019 update will give users a single-column layout with fewer preinstalled apps and live tiles. Plus, they can also now remove more of the stock Windows 10 apps that aren’t used much, including 3D Viewer, Calculator, Calendar, Mail, Movies & TV, Paint 3D, Snip & Sketch, Sticky Notes, and Voice Recorder.

  • Cortana & Search

The separation of Cortana and Search in the Windows 10 taskbar is one of the most notable changes coming with next month’s update. With previous releases they were integrated with each other, but now the search box in the taskbar will only launch searches for files and documents, and the circular Cortana icon will summon the digital assistant when clicked. Some people have already surmised that this may mean the end of Cortana before long, but that’s likely a bit presumptive at this point.

The search experience will also be changing: Windows will now index and search all folders and drives, rather than limiting itself to the default documents, pictures, and videos folders. Along with a new search interface featuring landing pages for Apps, Documents, Email, and Web, users can now expect faster and more accurate searches when aiming to dig up important files.

  • Reserved Space for Windows Update

It’s true that Windows updates can cause bugs, data loss, and failures, and there’s been no shortage of people eager to point that out every chance they get. The May 2019 update, however, is going to enable all Windows 10 users to pause updates for up to 35 days – something that was previously available only to Windows 10 Enterprise and Professional users.

Having more time to read up and decide on when to install Microsoft’s monthly updates is going to be a nice freedom for many users.

The fact that the May 2019 update will also reserve 7GB of disk space for installing general updates promises to be a more contentious point. The move has been made to keep your PC secure, and the reason this reserved space cannot be removed from Windows 10 is that it makes future OS updates more efficient.

The space is also intended for apps, temporary files, and system caches undertaken as your PC sees fit. The size of the reserve will depend on your system, so removing unnecessary files on your hard drive in advance of the update might be a good idea.

  • Sandbox Integrated Feature

Last but not least regarding this Windows 10 update, we have Windows Sandbox. This integrated feature for Windows 10 Pro and Enterprise lets users create a secure desktop environment that can isolate and run untrusted and sketchy apps while keeping them separate from the rest of your system. Thus the term ‘sandbox’ – when a Windows Sandbox is closed, all the software, along with all its files and state, is permanently deleted.

In our opinion, this is the best and most well-thought-out feature added in this Windows 10 update, especially considering all the different well-disguised threats out there these days. It might not be the most exciting feature for your average user, but you can be sure developers are going to be plenty impressed with it.

It will be interesting to see how well received this update is, and it appears we won’t have to wait long to find out.

Protecting a VPN From Data Leaks

One thing that certainly hasn’t changed from previous years as we move toward the quarter pole of 2019 is that hackers are keeping IT security teams on their toes as much as ever. That shouldn’t come as much of a surprise given the cat-and-mouse game that’s been going on in cyberspace between the two sides for a long time now. Cyber threats are as sophisticated as ever, and for everyday individuals the biggest concern is always that the privacy of sensitive data will be compromised.

One of the most common responses to enhanced and more capable threats is to go with a Virtual Private Network (VPN) and all the enhanced security features that come with one. Here at 4GoodHosting, we’ve been promoting them to our customers very actively, likely in the same way every other Canadian web hosting provider has. There’s merit to the suggestion, as VPN connections protect online privacy by creating a secure tunnel between the client – who typically uses a personal computing device to connect to the internet – and the Internet.

Nowadays, however, a VPN connection isn’t the guarantee it once was that your connection is secure and free of data leaks. The good news is that even people with the most average levels of digital understanding can be proactive in protecting their VPN from data leaks. Let’s look at how that’s done here today.

Workings of VPN

A reliable VPN connection disguises the user’s geographical location by giving it a different IP address. There is also architecture in place to encrypt data transmitted during sessions and provide a form of anonymous browsing. As it is with almost all internet tools, however, VPN connections can also face certain vulnerabilities that weaken their reliability. Data leaks are a concern amongst information security researchers who focus on VPN technology, and it’s these issues that are most commonly front and centre among them:

  1. WebRTC Leaks

Web Real-Time Communication (WebRTC) is an evolution of VoIP (Voice over Internet Protocol) for online communications. VoIP is the technology behind popular mobile apps such as Skype and WhatsApp, and it’s been the leading force behind making legacy PBX telephone systems at many businesses entirely obsolete.

WebRTC is also extremely valuable with the way that it allows companies to hire the best personnel. Applicants can be directed to a website for online job interviews with no need for Skype or anything similar installed.

Everything would be perfect, except for the fact that the IP addresses of users can be leaked, and even through a VPN connection.

  2. DNS Hijacking

It’s fair to say that hijacking domain name system (DNS) servers is one of the most tried-and-true hacking strategies, and interestingly a large portion of that has been made possible by well-intentioned efforts to enact internet censorship. The biggest DNS hijacking operation on the planet is conducted by Chinese telecom regulators through the Great Firewall, put in place with the aim of restricting access to certain websites and internet services.

DNS hijacking encompasses a series of attacks on DNS servers, but arguably the most common one involves taking over a router, server, or even an internet connection with the aim of redirecting traffic. By doing so hackers are able to impersonate websites; your intention was to check CBC News, but instead you’ll be directed to a page that may resemble it but actually uses code to steal passwords, compromise your identity, or leave you with malware on your device.

Oftentimes WebRTC and DNS hijacking work in conjunction with each other: a malware attack known as DNSChanger can be injected into a system by means of JavaScript execution, followed by a WebRTC call that you’re unaware of. Done successfully, this can expose your IP address.

Other lesser-known vulnerabilities associated with VPN networks involve public IP addresses, torrents, and geolocation.

How to Test for Leaks

It might be best to cut right to the chase here: the easiest way to determine if you’ve got a leak is to visit IPLeak.net with your VPN turned off. The site is a very handy resource. Once you’ve noted the results, turn your VPN back on and repeat the test.

Then, you compare results.
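If you’d rather script the public-IP half of that comparison than eyeball it, a few lines of Python can do it. The sketch below is ours, not IPLeak.net’s: it assumes the api.ipify.org echo service (one real, free service that simply returns your public address), and the helper names are made up for illustration.

```python
import urllib.request

def public_ip(timeout=5):
    """Return the public IP address an external echo service sees for us."""
    with urllib.request.urlopen("https://api.ipify.org", timeout=timeout) as resp:
        return resp.read().decode().strip()

def looks_leaky(ip_without_vpn, ip_with_vpn):
    """If the address is unchanged with the VPN up, traffic isn't being tunneled."""
    return ip_without_vpn == ip_with_vpn
```

Call public_ip() once with the VPN off, note the result, connect the VPN, call it again, and feed both values to looks_leaky(). Keep in mind this covers only the public-IP check; the DNS, WebRTC, and geolocation tests on IPLeak.net go further.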

The torrents and geolocation tests available there are fairly worthwhile themselves, but probably not as telling an indicator as the DNS test. Navigating the internet involves your device communicating with DNS servers that translate web URLs into numeric IP addresses. In the bulk of those instances you’ll have defaulted to your ISP’s servers, and unfortunately those servers tend to be quite leaky to begin with.

Leakage through your local servers can serve up your physical location to those with bad intentions, even with a VPN set up and utilized. VPN services route their customers through servers separate from their ISP in an effort to counter these actions.

Once you determine your data is leaking, what can you do to stop it? Read on.

Preventing Leaks and Choosing the Right VPN

A good suggestion is to disable WebRTC in your browser, ideally even before installing a VPN solution. Some browsers have it disabled by default, while most of the better ones at least offer it as a configurable option.

Search ‘WebRTC’ in the help file of your browser and you may be able to find instructions on how to modify the flags or .config file. Do so with caution, however, and don’t take actions until you’re 100% certain they’re the correct ones or you may risk creating quite a mess for yourself.
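As one concrete example, Firefox puts WebRTC behind a single about:config preference; the line below, dropped into a user.js file in your Firefox profile folder, switches it off. This is a Firefox-specific sketch – Chromium-based browsers have no equivalent flag and generally need an extension instead.

```javascript
// Firefox user.js: disable WebRTC peer connections entirely.
// Caution: this also breaks legitimate WebRTC uses, e.g. in-browser video calls.
user_pref("media.peerconnection.enabled", false);
```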

Other good preventative measures include:

  • Going with the servers suggested when configuring your VPN – typically not those of your Internet service provider (ISP) but ones maintained by the VPN provider. Not all providers maintain their own, though
  • Aiming for a VPN with upgraded protocols compatible with the new IPv6 address naming system. Without one, you’ll run a much greater risk of leaks. If you’re about to move to a VPN, this should be one of your primary considerations
  • Making sure your VPN uses the newest version of the OpenVPN protocol, especially if you’re on a Windows 10 device (it has a very problematic default setting where the fastest DNS server is chosen automatically; OpenVPN prevents this)

Overall, the security of tunneled connections is going to be compromised big time by a leaky VPN. If the security of your data is a priority for you, then you should be evaluating VPN products, reading their guides and learning about best ways to secure your system against accidental leaks.

Keep in mind as well this isn’t a ‘set it and forget it’ scenario either. You need to check for leakage from time to time to ensure nothing has changed with your system. Last but not least, make sure the VPN you use has a kill-switch feature that will cut off your connection immediately if a data leak is detected.

Windows 7 End Time Reminders On Their Way for PCs Starting Next Month

It would appear that a good many personal computers out there are still running Windows 7. If they weren’t, then we can assume there wouldn’t be a need for Microsoft to take the action they’ll be taking soon – sending out reminders to PC users still running this admittedly archaic OS that the end is nigh. Microsoft is calling them ‘courtesy reminders’, and while the message doesn’t go so far as to spell out what’s really being said – update your operating system or your device will become by and large inoperative – it certainly implies as much.

Now, admittedly, as a leading Canadian web hosting provider we’re the type to update our OSes just as soon as the opportunity presents itself, but we can also imagine that many of our clients will have friends or family members who don’t need to be equipped with the latest and greatest in computing technology. As such, this might be a prompt to tell those people not to ignore anything that pops up on their screen about the end of Windows 7.

So what’s all this going to involve? Not a whole lot, really, but it’s worthwhile to take something of a longer glance at why this is necessary and what PC users can expect if they’re still rocking Windows 7.

Friendly, yet Persistent Reminders

Microsoft has stated that starting in April if you are a Windows 7 user you can expect to see a notification appear on your Windows 7 PC a number of times over the next month. The hope is that one or more of them will be all it takes to make you aware that Windows 7 will officially be unsupported as of January 14, 2020.

As you might expect, users will be able to reject future notifications by selecting a ‘do not notify me again’ option, or if they’d prefer to know a little more about why their favourite OS (we have to assume there’s a reason they’ve resisted updating for so many years) is going the way of the Dodo Bird, there’ll also be a ‘learn more’ button.

FWIW, the same thing happened with Windows XP a few years back. That OS went extinct fairly smoothly, so the expectation is that the same thing will happen here. Just in case that’s not the way it goes, however, Microsoft is trying to be proactive. The Windows 7 notices will appear eight months earlier than those XP warnings.

One big difference: it was only in March of 2014, just a month before XP’s expiration, that Microsoft began placing on-screen reminders of the impending date. After that, they came monthly. Should Microsoft follow the same schedule and cadence, it should begin pushing notices to Windows 7 on April 14 and repeat them on the 14th of each month following.

Accelerated Schedule

The reason for this sped-up schedule is that – believe it or not – Windows 7 is still surprisingly relevant. Check out this stat from Computerworld: it’s estimated that Windows 7 will still be powering more than 40% of all Windows personal computers at the end of January 2020.

If that’s correct, that number is quite a bit higher than the roughly 35% attached to Windows XP when it was coming to the end of its working life. It would seem that Microsoft’s logic in starting to send out these reminders earlier is that it will reduce the fraction of Windows 7 systems still running before support ends.

As recently as five years ago, Microsoft pushed on-screen alerts only to systems maintained using Windows Update, working on the knowledge that most small businesses and the like would be utilizing that resource. Windows 7 PCs managed by enterprise IT staff using Windows Server Update Services (WSUS) had no such reminder delivered. Administrators were also able to remove and/or prevent the warning by modifying the Windows registry or by setting a group policy.

We can likely expect that similar options will exist for the Windows 7 notices. As the saying goes, all good things come to an end. We’ll try to pacify anyone who’ll be sad to see Windows 7 go by saying that by putting these older OSes out to pasture, the developers are able to put more of their energies toward improving existing and future ones, and that’s better in the big picture of things.

We’ll conclude here today by leaving you with a Windows 7 to Windows 10 migration guide.

5G Networks: What to Expect

We don’t know about you, but for those of us here it doesn’t seem like it was that long ago that 3G Internet speeds were being revelled in as the latest and greatest. Things obviously change fast, as 3G has been in the rear view mirror for a long time now, and the reality is that the newest latest and greatest – 4G – is about to join it there.

Here at 4GoodHosting, the fact we’re a leading Canadian web host makes us as keen to learn more about what the new 5G networks have in store for us as anyone else who’s in the digital space day in and out. It appears that we’re in for quite a treat, although there are some who suggest tempering expectations. That’s to be expected anytime wholesale changes to infrastructure key to big-picture operations are forthcoming.

Nonetheless, we’re supposed to be immersed in the 5G world before the end of next year. Mobile 5G is expected to start making appearances in cities around North America this year, with much more extensive rollouts expected in 2020, so a discussion of what we can all expect from 5G is definitely in order. Let’s do it.

What is 5G, and How’s It Going to Work?

To cut right to it, 5G is the next generation of mobile broadband that will augment 4G LTE connections for now before eventually replacing them. 5G is promising to deliver exponentially faster download and upload speeds along with drastically reduced latency – the time it takes devices to communicate with each other across wireless networks. Right, that alone is worthy of some serious fanfare, but fortunately there’s even more to this.

But before getting into additional benefits expected to be seen with 5G networks, let’s have a look at what makes them different from 4G ones and how exactly these new super networks are predicted to function.

Spectrum-Specific Band Function

It’s important to start with an understanding that, unlike LTE, 5G is going to operate on three different spectrum bands. The lowest is the sub-1GHz spectrum, known as low-band, and it’s what most carriers in North America use for LTE today. This spectrum is quite literally running out of steam, so it’s ready to be supplemented. It does provide great area coverage and signal penetration, but peak data speeds never exceed 100Mbps, and often you’re nowhere close to even that.

Mid-band spectrum provides faster speeds and lower latency than low-band, but the long-standing complaint is that it fails to penetrate buildings, and peak speeds top out at around 1Gbps.

High-band spectrum (aka mmWave) is what most people think of when they think of 5G, offering peak speeds of up to 10Gbps along with impressively low latency most of the time. The major drawbacks? Coverage area is small and building penetration is poor.
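Those throughput figures are easy to put into perspective with a little arithmetic. The sketch below uses the decimal units carriers advertise (1 GB = 8,000 megabits) and ignores protocol overhead, so the numbers are idealized rather than real-world measurements.

```python
def transfer_seconds(size_gb, link_mbps):
    # 1 gigabyte = 8,000 megabits in the decimal units carriers advertise.
    return size_gb * 8000 / link_mbps

# A 5 GB file on a 100 Mbps low-band link vs a 10 Gbps (10,000 Mbps) mmWave peak.
lte = transfer_seconds(5, 100)       # 400.0 seconds, i.e. over six minutes
mmwave = transfer_seconds(5, 10000)  # 4.0 seconds
```

That hundredfold gap is the headline promise; in practice, congestion, coverage, and overhead will land real speeds somewhere in between.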

It appears that most carriers are going to start out by piggybacking 5G on top of their 4G LTE networks, with nationwide 5G-exclusive networks built later. Providers are very aware that small cells are going to be required so that these souped-up 4G LTE networks don’t have their 5G appeal diminished by poor penetration rates and intermittently average download speeds.

In this regard, we all stand to benefit from the industry being cautious about not rolling out 5G on its own and then having growing pains with these networks.

Right – some people may not be familiar with small cells. They’re low-power base stations covering small geographic areas that allow carriers using mmWave for 5G to offer better overall coverage. Beamforming will be used to improve 5G service on the mid-band by sending a focused signal to each and every user in the cell, while systems using it monitor each user to make sure they have a consistent signal.

Latency promises to be nearly if not entirely non-existent between the small cells and beamforming within 5G-enabled 4G LTE networks.

Examples of How 5G SHOULD Make Things Better

  1. Improved broadband

The reality today is that carriers are running out of LTE capacity in many major metropolitan areas. In some spots, users are already experiencing noticeable slowdowns during busy times of day. 5G will add huge amounts of spectrum in bands that have not previously been dedicated to commercial broadband traffic.

  2. Autonomous vehicles

Uber may have a devil of a time getting a foothold in Vancouver, but you can likely expect to see autonomous vehicles made possible with ubiquitous 5G deployment. The belief is that it will make it possible for your vehicle to communicate with other vehicles on the road, provide information to other vehicles regarding road conditions, and share performance information with both drivers and automakers.

This application has a TON of promise, and it’s definitely one to keep an eye on.

  3. Public Infrastructure & Safety

It’s also predicted that 5G will allow cities and other municipalities to operate with greater efficiency. All sorts of civic maintenance processes will be made more efficient by means of 5G networks.

  4. Remote Device Control

The remarkably low levels of latency expected with 5G make it so that remote control of heavy machinery may become possible. This means fewer actual people in hazardous environments, and it will also allow technicians with specialized skills to control machinery from any location around the globe.

  5. Health Care

5G and its super-low latency may also be huge for health care applications. Since ultra-reliable low-latency communication (URLLC) reduces 5G latency even further than what you’ll see with enhanced mobile broadband, we may see big improvements in telemedicine, remote recovery and physical therapy via AR, precision surgery, and even remote surgery in the very near future once 5G becomes the norm.

One of the most beneficial potential advances that may come with 5G as it concerns healthcare is that hospitals may be able to create massive sensor networks to monitor patients, allow physicians to prescribe smart pills to track compliance, and let insurers monitor subscribers to determine appropriate treatments and processes.

  6. IoT

Last but certainly not least is the way 5G will benefit the Internet of Things. As it is now, sensors that can communicate with each other tend to require a lot of resources and really drain LTE data capacity.

With 5G and its fast speeds and low latencies, the IoT will be powered by communications among sensors and smart devices. These devices will require fewer resources than the ones currently in use, and there are huge efficiencies to be had in connecting to a single base station.

It’s interesting to think that one day 5G will probably be as long-gone and forgotten as 3G is now, despite the fanfare we all gave it many years ago. You can’t stop progress in the digital world, and it’s fair to say that 99% of us wouldn’t want to even if we could.

 

What’s Best? Sleep, Hibernate, or Shut Down Your Computer at Night


Most people are perfectly fine with prompting their desktop or notebook to ‘nod off’ at the end of the day, especially those who work on their device and will be back in front of the screen first thing tomorrow morning. It’s true that computers can go into a low-power mode, with no light coming from the screen to illuminate the room, once you tell them to go to sleep. Others who aren’t using theirs as regularly may instead choose to shut them down, and be perfectly all right with the time it takes to boot up again when they next want to use them.

The majority won’t really give it much more thought than that, and here at 4GoodHosting we’re like any other Canadian web hosting provider with a good reputation in that we’ve got our minds on much more detailed and relevant aspects of what’s going on in the digital world. But like any of you, we’ve got desktops and notebooks at home too. That’s why we found a certain article on this topic to be informative in just the type of way we aim for with our weekly blog content, and so here it is for you too!

Let’s have a look at this, and try to come to a consensus on what’s the best choice for you when you’re done using your computer – put it to sleep, have it hibernate, or shut it down entirely.

Popular Thinking

The standard belief is that choosing not to turn your computer off at night is preferable, because shutdowns and startups tax the computer and lead to some of its components wearing out more quickly. Alternately, leaving it on does the same to other components that never get to rest while the computer is running, even if it’s long since gone to sleep.

There’s some truth to each of them, so the question then becomes which is the better of the two choices. Here’s the skinny on all of that.

The Issue

It’s easy to understand the belief that cutting the power without shutting down properly has the potential to damage your computer’s hardware. But can frequent shutdowns and restarts do the same? And how does turning the device off compare with leaving it on in a low-power ‘sleep’ or ‘hibernate’ state when not in use?

The source turned to for a definitive answer in this case was Best Buy’s Geek Squad, and here’s what they had to say on a topic most would agree they’re very well qualified to comment on. They were asked very plainly: is it best to leave my computer on and let it go to sleep and eventually hibernate when I’m done using it, or is it best to shut it down and restart it later?

The Verdict, and Reasoning

According to the knowledgeable guys and gals at Geek Squad, the answer depends on how often you use your computer. Those who use it more than a few times every day are best to leave it on and let it drift off to sleep. Alternately, those who use it for only an hour or two a day, here and there, should go ahead and turn it off between usages.

The long and short of it – and the most relevant piece of information regarding resultant wear and tear on the device – is that leaving a computer on indefinitely is less stressful overall than turning it on and off, especially if you were to do that several times a day.

Every time a computer turns on, the surge of power required for the boot up isn’t harmful in itself, but over years the repeating of that power surge can shorten the computer’s lifespan. These risks are of course greater for an older computer, and in particular for ones that have a traditional hard disk drive with moving parts rather than a solid state drive that’s more robust.

That said, all mechanical parts will fail eventually, and using them constantly will inevitably wear them down. There are drawbacks to leaving devices on too: computers heat up more and more as they work, and certain processes continue even when the device is asleep. Heat is detrimental to all components, and with computers left on you have a steady supply of it at varying moderate levels.

However, the heat and mechanical strain that go along with startup IS more detrimental long term. The exception to this would be LCD panel displays, if they weren’t set to go dark after a certain period of inactivity. If they weren’t, leaving your computer on would be much more problematic – not to mention the nuisance of the never-ending illumination of your workspace.

Batteries and hard drives also have a limited life cycle. Allowing them to turn off (or sleep) and spin down when not being used will extend the life of these components, especially if you’re only restarting the computer once or twice a week, if at all.

Even Better Reasoning

Some people will aim to refute this belief, arguing that the very idea that shutdowns and startups place damaging stress on components is a dated way of looking at things. There are arguments to be made for both sides.

Reasons to leave it on

  • Using the PC as a server means you want to be able to remotely access it.
  • Background updates, virus scans, or other activities are welcome to go ahead while you’re away.
  • Long waits during start ups are unacceptable.

Reasons to turn it off

  • Conserving electricity – leaving it on can slightly increase your power bill.
  • Wishing to not be disturbed by notifications or fan noise.
  • Rebooting periodically does improve computer performance.

Having It Sleep, Or Hibernate?

Sleep puts a computer into a low-power state without turning it completely off, while hibernate saves the machine’s state to disk and stops using power entirely, resuming where you left off when powered back on. Overall, the consensus seems to be that sleep mode is preferable to hibernate, because hibernate produces wear and tear similar to a full stop and start.

The recommendation is that if you’re going to leave it on all the time, make sure you have the right sleep options set up in your power settings. That way you can save a lot of power with no real downside.

Surge Protectors a Must

We’re going a little off topic here to wrap this up, but it really is worth relating the importance of using a surge protector between your computer and the wall outlet. Unless you actually like the idea of having expensive componentry fried by an electrical spike that arrives without warning, a surge protector is going to be a nice defense that hopefully you never need.

The best choice is to get an uninterruptible power supply (UPS), which is basically a battery-backed surge protector. These help condition power to even it out, smoothing out power spikes that can otherwise do irreparable damage to your computer’s components.

Lastly, keep your computer clean. Spend some time now and then to open it up and get rid of dust and debris. Uninstalling old software and cleaning up old files and processes is recommended too.

The Final Decision

Here it is – if you use your computer more than once a day, leave it on at least all day. If you use it only briefly during the morning and at night, leaving it on overnight is probably best. Those who use their computer for only a few hours once a day, or even less than that, should go ahead and turn it off when they’re done.

 

Distractions Begone: Introducing Google Chrome’s Focus Mode

It’s been said that here and now in the 21st century we’ve never had more distractions pulling at our attention day in and day out than we do now. This is especially true when we’re in front of a screen, and we imagine not many of you need any convincing of that. Distractions aren’t particularly problematic when you’re only web surfing or the like; more often than not they’re what you might call an irresistible nuisance in those situations.

When you’re on your computer for productive purposes, however, all those distractions can add up to a considerable amount of lost time. That’s where people might wish there was something to be done about them… and it appears as if now there is.

Here at 4GoodHosting, we’re like any industrious Canadian web hosting provider in the way we keep our eyes and ears peeled for developments in the computing world. Most of them aren’t worthy of discussing at length here, but considering that nearly everyone has had difficulty staying on task when making use of the Internet, this one definitely is.

Google Introduces Focus Mode for Chrome Browser

As if the Chrome browser needed any more assistance in being the nearly ubiquitous web browser of choice these days. Google hasn’t actually announced the new feature as of yet, but tech insiders have found a new flag in the browser that indicates whether or not ‘Focus Mode’ is on.

It should be mentioned that they’re not broaching uncharted territory here. Different applications have attempted to take on the problem of getting people to focus while working on a computer, and there have been software solutions available for both Mac and PC that have arrived with little or no fanfare. It goes without saying, however, that no power player commands the attention that Google does these days.

At this time little is known about the Focus Mode feature, aside from the fact that it will soon be implemented in the world’s most popular web browser. The flag reportedly indicates that if ‘#focus-mode’ is enabled, a user is allowed to switch to Focus Mode.

What, and How?

We bet nearly all of you will be saying right, right – but how exactly is Focus Mode going to work? At the moment, we can only speculate on the features the new option might offer to users. We think it’s safe to assume that Focus Mode will restrict specific websites or applications from being accessed. For example, Focus Mode may stop a user from browsing sites such as YouTube, Reddit, and Facebook (likely the most necessary blocks for most people!). Other industry insiders have suggested that the mode may integrate with Windows 10’s Focus Assist when working in conjunction with a PC’s operating system.

That last part there is important, as it appears that – at least initially – Focus Mode will be available on PCs running Windows 10, and it’s believed that it will allow users to silence notifications and other distracting pop-ups. We’re prone to wonder if Focus Mode will also work with Windows 10 to stop websites from screaming out for your attention, or restricting those pop-up announcements that are way too common and explicitly designed to take your attention elsewhere.

Patience, Grasshopper

As mentioned, Focus Mode isn’t quite here yet, but for those who are distracted far too easily (and you can certainly count us among them) when time is a valuable commodity, this really has a lot of potential.

We can most likely expect to see Focus Mode in a test build such as Chrome Canary before it becomes a mainstream feature available to one and all with Google Chrome. We’ll be following these developments keenly, and we imagine that now a good many of you will be too.

 

Project Pathfinder for an ‘Even Smarter’ SIRI

AI continues to be one of the most game-changing developments in computing technology these days, and it’s hard to argue that there’s a more commonplace example of AI than the digital assistants that have nearly become household names – Apple’s Siri and Amazon’s Alexa. Even a decade ago many people would have stated their disbelief at the notion that it might be possible to make spoken queries to a digital device and have it provide a to-the-minute accurate reply.

The convenience and practicality of AI has been a hit, and what’s noteworthy about it is the way that folks of all ages have taken to it. After all, it doesn’t even require the slightest bit of digital know-how to address Siri or Alexa and rattle off a question. Indeed, both tech giants have done a great job building the technology for their digital assistants. With regards to Siri in particular, however, it appears that Apple is teaming up with a company that’s made a name for themselves developing chatbots for enterprise clients.

Why? To make Siri an even better digital assistant, and even more so the beacon of AI for everyday people.

Here at 4GoodHosting, like most Canadian web hosting providers we have the same level of profound interest in major developments in the computing, web hosting, and digital worlds that many of our customers do. This zeal for ‘what’s next’ is very much a part of what makes us tick, and this coming-soon improvement to Siri makes the cut as something worth discussing in our blog here today.

Proven Partnership

The aim is to make it so that Siri gets much better at analyzing and understanding real-world conversations, and at developing AI models capable of handling their context and complexity. In order to do that, they’ve chosen to work with a developer they have a track record of success with. That’s Nuance, an established major player in conversation-based user interfaces. They collaborated with Apple on the original Siri, so this is round two.

As mentioned, Nuance’s present business is focused on developing chatbots for enterprise clients, and so they’re ideally set up to hit the ground running with Project Pathfinder.

Project Pathfinder

The focus of Project Pathfinder came from Apple’s belief that machine learning and AI can automate the creation of dialog models by learning from logs of actual, natural human conversations.

Pathfinder is able to mine huge collections of conversational transcripts between agents and customers before building dialog models from them and using those models to inform two-way conversations between virtual assistants and consumers. Conversation designers are then more able to develop smarter chatbots. Anomalies in the conversation flow are tracked, and problems in the script can then be identified and addressed.
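As a toy illustration of the transcript-mining idea – our own sketch, not Nuance’s actual method, with made-up intent labels – transitions between conversational intents observed in labelled transcripts can be counted and normalized into a simple dialog-flow model:

```python
from collections import defaultdict

def build_dialog_model(transcripts):
    """Build a simple dialog-flow model from transcripts.

    transcripts: list of conversations, each a list of intent labels
    (hypothetical labels for illustration). Returns, for each intent,
    the observed probability of each next intent.
    """
    transitions = defaultdict(lambda: defaultdict(int))
    for convo in transcripts:
        # Count each adjacent (current intent -> next intent) pair
        for cur, nxt in zip(convo, convo[1:]):
            transitions[cur][nxt] += 1
    # Normalize raw counts into next-intent probabilities
    model = {}
    for cur, nxts in transitions.items():
        total = sum(nxts.values())
        model[cur] = {nxt: n / total for nxt, n in nxts.items()}
    return model

# Two tiny example "call center" conversations
transcripts = [
    ["greeting", "billing_question", "resolution"],
    ["greeting", "billing_question", "escalation"],
]
model = build_dialog_model(transcripts)
```

A real system would of course work from raw text rather than pre-labelled intents, but the shape of the output – likely conversational paths, plus anomalies where observed flows diverge from the script – is the same kind of artifact Pathfinder is described as producing.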

Conversation Building

Voice assistants like Siri and Alexa have inner workings that make it so that your speech interacts with reference models. The models then try to find a solution to the intent of your question, and accurate replies depend on conversation designers doing two things: 1) learning from subject matter experts, and 2) learning from a LOT of trial and error related to query behaviour.

As far as Apple’s concerned, giving the nod to Nuance and their conversation designers was the best way to go.

Pathfinder empowers them to build on their existing knowledge base with deep insights gathered from real conversational interactions that have taken place inside call centers. More to the point, however, the software doesn’t only learn what people are discussing, but it also makes determinations on how human agents guide users through the transactions.

Adding more intelligence to voice assistants/chatbots is made possible with this information, and so Siri is primed to build on her IQ in the same way. It certainly sounds promising!

Self-Learning Conversation Analytics

All you need to do is spend a short period of time with Siri or Alexa and you’ll quickly find that they definitely do have limitations. That’s a reflection of the fact that they are built for the mass market, and so they must handle much more diverse requests than chatbots built primarily for business. This means they come with a lack of focus, and it’s more difficult to design AI that can respond sensibly to spoken queries on thousands of different topics from around the globe. Then you have follow-up queries too.

The upshot is that 95+% of the queries posed to virtual assistants are open-ended human questions, and as such they’re less focused and less predictable. So how do you build AI that’s more capable of handling the kind of complex enquiries that characterize human/machine interactions in the real world?

The answer to that is to start with call center chatbots, and that’s what the Pathfinder Project is doing. It will accelerate development of spoken word interfaces for more narrow vertical intents – like navigation, weather information, or call center conversation – and by doing so it should also speed up the development of more complex conversational models.

It will make these machines capable of handling more complex conversations. It will, however, take some time to come to fruition (projected for summer 2019). Assuming it’s successful, it will show how conversational analytics, data analysis and AI have the ability to empower next-generation voice interfaces. And with this we’ll also be able to have much more sophisticated human/computer interactions with our virtual assistants.

Unlocking AI that understands the context and intent of a conversation – rather than being limited to requests like asking Siri or Alexa to turn the lights off, etc. – promises to be really helpful and a very welcome advance in AI for all of us.

 

DNS Flag Day This Past Friday: What You Need to Know About Your Domain

We’re a few days late getting to this, but we’ve chosen to make DNS Flag Day our topic this week, as the ramifications of what’s to come of it will be of ongoing significance for pretty much anyone who has interests in digital marketing and the World Wide Web as a whole. Those that do will very likely be familiar with DNS and what the abbreviation stands for, but for any who aren’t, DNS is the Domain Name System.

DNS has been an integral part of the information superhighway’s infrastructure for nearly as long as the Internet itself has existed. So what’s its significance? Well, in the Internet’s early days there wasn’t a perceived need for the levels of security that we know are very much required these days. There was much more in the way of trust and less in the way of pressing concerns. There weren’t a whole lot of people using it, and as such the importance of DNS as a core service didn’t receive much focus, and it wasn’t developed with much urgency.

Any Canadian web hosting provider will be on the front lines of any developments regarding web security measures, and here at 4GoodHosting we’re no exception. Offering customers the best in products and services that make their website less vulnerable is always going to be a priority. Creating informed customers is something we believe in too, and that’s why we’re choosing to get you in the know regarding DNS Flag Day.

What Exactly is this ‘Flag Day’?

The long and short of this is that this past Friday, February 1, 2019, was the official DNS Flag Day. So, for the last three days, some organisations may have had a non-functioning domain. Likely not many of them, but many will see their domains now being unable to support the latest security features – making them an easier target for network attackers.

How and why? Well, a little bit of background info is needed. These days DNS has a widespread complexity, which is ever more necessary because cyber criminals are launching ever more complex and disruptive distributed denial of service (DDoS) attacks aimed at a domain’s DNS. They’ve been having more success, and when they do, it works out that no functioning DNS = no website.

Developers have done their part to counter these threats quite admirably, most notably with many workarounds put in place to guarantee that DNS can continue to function as part of a rapidly growing internet.

The situation as it’s become over recent years is one where a combination of protocol and product evolution have made it so that DNS is being pushed and pulled in all sorts of different directions. This naturally means complications, and technology implementers typically have to weigh these ever-growing numbers of changes against the associated risks.

Cutting to the chase a bit again, the workarounds have ended up allowing legacy behaviours and slowing down DNS performance for everyone.

To address these problems, as of last Friday, vendors of DNS software – as well as large public DNS providers – have removed certain DNS workarounds that many people have been consciously or unconsciously relying on to protect their domains.

Flag’s Up

The reason this move had to be made is because broken implementations and protocol violations have resulted in delayed response times, far too much complexity and difficulty with upgrading to new features. DNS Flag Day has now put an end to the mass backing of many workarounds.

The change will affect sites with software that doesn’t follow published standards. For starters, domain timeouts will now be identified as a sign of a network or server problem. Moving forward, DNS servers that do not respond to extension mechanisms for DNS (EDNS) queries will be regarded as dead servers, and requests from browsers won’t be routed to them.
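As a concrete illustration of what’s being tested, here’s a minimal sketch (our own, not part of any official tooling) of the wire format of an EDNS query: a plain DNS question plus the OPT pseudo-record defined in RFC 6891. Name servers that fail to answer queries shaped like this are the ones DNS Flag Day now treats as dead:

```python
import struct

def build_edns_query(name: str, udp_payload: int = 4096) -> bytes:
    """Build a minimal DNS 'A' query carrying an EDNS0 OPT pseudo-record (RFC 6891)."""
    # Header: transaction id, flags (recursion desired), 1 question,
    # 0 answers, 0 authority records, 1 additional record (the OPT)
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    # Question: length-prefixed labels ending in a zero byte,
    # then QTYPE=A (1) and QCLASS=IN (1)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)
    # OPT pseudo-record: root name (0x00), TYPE=41; the CLASS field is
    # repurposed as the advertised UDP payload size, the TTL field holds
    # EDNS flags (all zero here), and RDATA is empty
    opt = b"\x00" + struct.pack("!HHIH", 41, udp_payload, 0, 0)
    return header + question + opt

query = build_edns_query("example.com")
```

Sending that packet over UDP to port 53 of a name server and checking for a response is essentially what the compliance testers automate, along with many more protocol edge cases.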

Test Your Domain

If you’re the type to be proactive about these things, here’s what you can do: test your domain and your DNS servers with the extension mechanism compliance tester. You’ll receive a detailed technical report indicating whether your test failed, partially failed, or was successful.

Failures in these tests are caused by broken DNS software or broken firewall configuration, which can be remediated by upgrading DNS software to the latest stable version and re-testing. If the tests still fail, organisations will need to look further into their firewall configuration.

In addition to the initial testing, it’s recommended that businesses that rely on their online presence (which really is every one of them these days) use the next three months to make sure their domain meets what’s required of it now. Organizations with multiple domains clustered on a single network in a shared server arrangement may well find there’s an increased chance of being caught up in a DDoS attack on another domain sitting near theirs.

Also, if you’re using a third-party DNS provider, most attacks on the network won’t be aimed at you, but you’re still at risk due to being on shared hosting. VPS hosting does eliminate this risk, and VPS web hosting Canada is already a better choice for sites that need a little more ‘elbow room’ when it comes to bandwidth and such. If VPS is something that interests you, 4GoodHosting has some of the best prices on VPS hosting packages and we’ll be happy to set you up. Just ask!

DNS Amplification and DNS Flood Risks

We’re now going to see more weak domains spanning the internet than ever before, and this makes it so that there is even more opportunity for cyber criminals to exploit vulnerable DNS servers through any number of different DDoS attacks.

DNS amplification is one of them, and it involves attackers sending small look-up queries with a spoofed source IP – that of the target. The target is then overloaded with large DNS responses, more than it’s able to handle. The result is that legitimate DNS queries are blocked and the organization’s network is hopelessly backed up.
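To put a number on why this attack pays off for the attacker, the ‘amplification factor’ is simply the ratio of response size to query size. The figures below are illustrative, not measurements from any specific incident:

```python
def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Bytes of attack traffic delivered to the victim per byte the attacker sends."""
    return response_bytes / query_bytes

# Illustrative sizes: a ~60-byte spoofed query eliciting a ~3,000-byte response
factor = amplification_factor(60, 3000)  # 50x amplification
```

This is why large DNS records are attractive to attackers: a modest stream of spoofed queries is multiplied many times over by the answering servers before it reaches the victim.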

Another one is the DNS flood, which involves waves of queries being aimed at the DNS servers hosting specific websites. These attacks tie up server-side assets like memory and CPU with a barrage of UDP requests generated by scripts running on compromised botnet machines.

Layer 7 (application layer) attacks will almost certainly be on the rise now too, including those targeting DNS services with HTTP and HTTPS requests. These attacks are built to target applications with requests that look like legitimate ones, which can make them particularly difficult to detect.

What’s Next

Cyber-attacks will continue, as well as continue to evolve, and organizations will continue to spend time, money and resources on security. As regards DNS, it’s now possible to corrupt and take advantage of what was once a fail-safe means of web security. The measures taken on DNS Flag Day have been put in place to address this problem, and it’s important that you now confirm that your domain meets the new requirements. Again, test yours as described above.

There’s going to be a bit of a rough patch for some, but this is a positive step in the right direction. DNS is an essential part of the wider internet infrastructure. Entering or leaving a network is going to be less of a simple process now, but it’s the way it has to be.

Global Environmental Sustainability with Data Centers

Last week we talked about key trends for software development expected for 2019, and today we’ll discuss another trend for the coming year that’s a bit more of a given: datacenters will have even more demands placed on their capacities as the working world continues to become ever more digital.

Indeed, datacenters have grown to be key partners for enterprises, rather than being just an external service utilized for storing data and business operation models. Even the smallest of issues in datacenter operations can impact business.

While datacenters are certainly the lifeblood of every business, they also have global impacts, particularly when it comes to energy consumption. Somewhere in the vicinity of 3% of total electricity consumption worldwide goes to datacenters, and to put that in perspective, that’s more than the entire power consumption of the UK.

Datacenters also account for 2% of global greenhouse gas emissions and 2% of electronic waste (aka e-waste). Many people aren’t aware of the extent to which our increasingly digital world impacts the natural one so directly, but it really does.

Like any good Canadian web hosting provider who provides the service for thousands of customers, we have extensive datacenter requirements ourselves. Most will make efforts to ensure their datacenters operate as energy-efficiently as possible, and that goes along with the primary aim – making sure those data centers are rock-solid reliable AND as secure as possible.

Let’s take a look today at what’s being done around the globe to promote environmental sustainability with data centers.

Lack of Environmental Policies

Super Micro Computer recently put out a report entitled ‘Data Centers and the Environment’, which stated that 43% of organizations don’t have an environmental policy, and another 50% have no plans to develop any such policy anytime soon. The reasons why? High costs (29%), lack of resources or understanding (27%), while another 14% simply don’t make environmental issues a priority.

The aim of the report was to help datacenter managers better understand the environmental impact of datacenters, provide quantitative comparisons of other companies, and then in time help them reduce this impact.

Key Findings

28% of businesses take environmental issues into consideration when choosing datacenter technology

Priorities that came before it for most companies surveyed were security, performance, and connectivity, though 9% of companies considered ‘green’ technology to be the foremost priority. When it comes to actual datacenter design, however, the share of companies who put a priority on energy efficiency jumps up to 59%.

The Average PUE for a Datacenter is 1.89

Power Usage Effectiveness (PUE) is the ratio of the total energy consumed by a datacenter to the energy delivered to its IT equipment. The report found the average datacenter PUE is approximately 1.89, and many (over two-thirds) of enterprise datacenters come in with a PUE over 2.03.

Further, it seems some 58% of companies are unaware of their datacenter PUE. Only a meagre 6% come in at the near-ideal range between 1.0 and 1.19.
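Expressed as a formula, PUE is just total facility power divided by the power reaching IT equipment, so a perfect facility scores 1.0 and anything above that is overhead (cooling, lighting, power conversion losses). A quick sketch with hypothetical numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over power delivered to IT gear."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,890 kW total draw, of which 1,000 kW reaches IT equipment
example = pue(1890.0, 1000.0)  # 1.89 – i.e. 890 kW of overhead
```

Read this way, the report’s 1.89 average means that for every watt doing computing work, nearly another whole watt is being spent on everything around it.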

24.6 Degrees C is the Average Datacenter Temperature

It’s common for companies to run datacenters at higher temperatures to reduce strain on HVAC systems and increase savings on energy consumption and related costs. The report found 43% of the datacenters have temperatures ranging between 21 degrees C and 24 degrees C.

The primary reasons indicated for keeping datacenters at lower temperatures are reliability and performance. Hopefully these operators will soon come to learn that recent advancements in server technology have optimized thermal designs, and that newer datacenter designs make use of free-air cooling. With them, datacenters can be run at ambient temperatures of up to 40 degrees C with no decrease in reliability or performance, which also helps improve PUE and save on costs.

Another trend in data center technology is immersion cooling, where server hardware is cooled by being entirely immersed in a non-conductive liquid. We can expect to see more of this type of datacenter technology rolled out this year too.

3/4 of Datacenters Have System Refreshes Within 5 Years

Datacenters and their energy consumption can be optimized with regular system refreshes and the addition of modern technologies that consume less power. The report found that approximately 45% of data center operators refresh their systems every three years, while 28% of them do it every four to five years. It also seems that the larger the company, the more likely they are to do these refreshes.

8% Increase in Datacenter E-Waste Expected Each Year

It’s inevitable that electronic waste (e-waste) is created when datacenters dispose of server, storage, and networking equipment. It’s a bit of a staggering statistic when you learn that around 20 to 50 million metric tons of e-waste is disposed of every year around the world, and the main reason it’s so problematic is that e-waste deposits heavy metals and other hazardous materials into landfills. If left unchecked, and we continue to produce it as we have, e-waste disposal will increase by 8% each year.
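An 8% annual increase compounds quickly. A small sketch of the projection, using an illustrative starting figure rather than a measured one:

```python
def projected_ewaste(current_tons: float, years: int, annual_growth: float = 0.08) -> float:
    """Project e-waste volume assuming steady compound annual growth."""
    return current_tons * (1 + annual_growth) ** years

# Starting from an illustrative 50 million tons, 8% annual growth roughly
# doubles the yearly volume in about 9 years
nine_years_out = projected_ewaste(50e6, 9)
```

That doubling time is the real message of the 8% figure: it isn’t a slow drift but a volume that compounds on itself year after year.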

Some companies partner with recycling companies to dispose of e-waste, and some repurpose their hardware in any one of a number of different ways. The report found that some 12% of companies don’t have a recycling or repurposing program in place, typically because it’s costly, recycling partners or providers are difficult to find in their area, or proper planning is lacking.

On a more positive note, many companies are adopting policies to address the environmental issues that stem from their datacenter operation. Around 58% of companies already have an environmental policy in place or are developing one.

We can all agree that datacenters are an invaluable resource and absolutely essential for the digital connectivity of our modern world. However, they are ‘power pigs’, as the expression goes, and that’s unavoidable given the sheer volume of activity that goes on within them every day. We’ve seen how they’ve become marginally more energy efficient, and in the year to come we will hopefully see more energy-efficiency technology applied to them.