Project Pathfinder for an ‘Even Smarter’ Siri

AI continues to be one of the most game-changing developments in computing technology these days, and it’s hard to argue there’s a more commonplace example of AI than the digital assistants that have nearly become household names – Apple’s Siri and Amazon’s Alexa. Even a decade ago many people would have scoffed at the notion that you could make spoken queries to a digital device and have it provide a to-the-minute accurate reply.

The convenience and practicality of AI has been a hit, and what’s noteworthy about it is the way that folks of all ages have taken to it. After all, it doesn’t even require the slightest bit of digital know-how to address Siri or Alexa and rattle off a question. Indeed, both tech giants have done a great job building the technology for their digital assistants. With regard to Siri in particular, however, it appears that Apple is teaming up with a company that’s made a name for themselves developing chatbots for enterprise clients.

Why? To make Siri an even better digital assistant, and even more so the beacon of AI made accessible to everyday people.

Here at 4GoodHosting, like most Canadian web hosting providers we have the same level of profound interest in major developments in the computing, web hosting, and digital worlds that many of our customers do. This zeal for ‘what’s next’ is very much a part of what makes us tick, and this coming-soon improvement to Siri makes the cut as something worth discussing in our blog here today.

Proven Partnership

The aim is to make it so that Siri gets much better at analyzing and understanding real-world conversations, and at developing AI models capable of handling their context and complexity. In order to do that, Apple has chosen to work with a developer they have a track record of success with. That’s Nuance, an established major player in conversation-based user interfaces. They collaborated with Apple on the original Siri, so this is round 2.

As mentioned, Nuance’s present business is focused on developing chatbots for enterprise clients, and so they’re ideally set up to hit the ground running with Project Pathfinder.

Project Pathfinder

The focus of Project Pathfinder came from Apple’s belief that machine learning and AI can automate the creation of dialog models by learning from logs of actual, natural human conversations.

Pathfinder is able to mine huge collections of conversational transcripts between agents and customers before building dialog models from them and using those models to inform two-way conversations between virtual assistants and consumers. Conversation designers are then more able to develop smarter chatbots. Anomalies in the conversation flow are tracked, and problems in the script can then be identified and addressed.

Conversation Building

Voice assistants like Siri and Alexa have inner workings that make it so that your speech is interacting with reference models. The models then try to find a solution to the intent of your question, and accurate replies depend on conversation designers doing two things: 1) learning from subject matter experts, and 2) learning from a LOT of trial and error related to query behavior.
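As a toy sketch of what ‘finding a solution to the intent of your question’ means in practice, here’s a keyword-overlap matcher in Python. Everything here – the intent names and keyword sets – is an illustrative assumption, not Siri’s or Alexa’s actual pipeline:

```python
# Toy intent resolver: pick the intent whose keyword set best overlaps
# the words in the spoken utterance, or fall back when nothing matches.
INTENT_KEYWORDS = {
    "weather": {"weather", "forecast", "rain", "temperature"},
    "navigation": {"directions", "route", "navigate", "traffic"},
}

def resolve_intent(utterance):
    """Return the best-matching intent name, or 'fallback' on no overlap."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"
```

Real conversation models are statistical rather than keyword-based, but the shape of the problem – mapping free-form speech onto a finite set of intents, with a fallback for everything else – is the same one designers refine through all that trial and error.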

As far as Apple’s concerned, giving the nod to Nuance and their conversation designers was the best way to go.

Pathfinder empowers them to build on their existing knowledge base with deep insights gathered from real conversational interactions that have taken place inside call centers. More to the point, however, the software doesn’t only learn what people are discussing, but it also makes determinations on how human agents guide users through the transactions.

Adding more intelligence to voice assistants/chatbots is made possible with this information, and so Siri is primed to build on her IQ in the same way. It certainly sounds promising!

Self-Learning Conversation Analytics

All you need to do is spend a short period of time with Siri or Alexa and you’ll quickly find that they definitely do have limitations. That’s a reflection of the fact that they are built for the mass market, and so they must handle much more diverse requests than chatbots built primarily for business. This means they come with a lack of focus, and it’s more difficult to design AI that can respond sensibly to spoken queries on thousands of different topics. Then you have follow-up queries too.

Simply put, the queries posed to virtual assistants are open-ended human questions 95+% of the time, and as such they’re less focused and less predictable. So how do you build AI that’s more capable of handling the kind of complex enquiries that characterize human/machine interactions in the real world?

The answer to that is to start with call center chatbots, and that’s what the Pathfinder Project is doing. It will accelerate development of spoken word interfaces for more narrow vertical intents – like navigation, weather information, or call center conversation – and by doing so it should also speed up the development of more complex conversational models.

It will make these machines capable of handling more complex conversations. It will, however, take some time to come to fruition (projected for summer 2019). Assuming it’s successful, it will show how conversational analytics, data analysis and AI have the ability to empower next-generation voice interfaces. And with this we’ll also be able to have much more sophisticated human/computer interactions with our virtual assistants.

Seeing the power of AI unlocked with understood context and intent of conversation – rather than mostly just asking Siri or Alexa to turn the lights off – promises to be a very welcome advance in AI for all of us.


DNS Flag Day This Past Friday: What You Need to Know About Your Domain

We’re a few days late getting to this, but we’ve chosen to make DNS Flag Day our topic this week as the ramifications of what’s to come of it will be of ongoing significance for pretty much anyone who has interests in digital marketing and the World Wide Web as a whole. Those that do will very likely be familiar with DNS and what the abbreviation stands for, but for any who don’t, DNS stands for Domain Name System.

DNS has been an integral part of the information superhighway’s infrastructure for nearly as long as the Internet itself has been in existence. So what’s its significance? Well, in the Internet’s early days there wasn’t a perceived need for the levels of security that we know are very much required these days. There was much more in the way of trust and less in the way of pressing concerns. There weren’t a whole lot of people using it, and as such the importance of DNS as a core service didn’t receive much focus and wasn’t developed with much urgency.

Any Canadian web hosting provider will be on the front lines of any developments regarding web security measures, and here at 4GoodHosting we’re no exception. Offering customers the best in products and services that make their website less vulnerable is always going to be a priority. Creating informed customers is something we believe in too, and that’s why we’re choosing to get you in the know regarding DNS Flag Day.

What Exactly is this ‘Flag Day’?

The long and short of it is that this past Friday, February 1, 2019, was the official DNS Flag Day. So, for the last 3 days, some organisations may now have a non-functioning domain. Likely not many of them, but many will see their domains now unable to support the latest security features – making them an easier target for network attackers.

How and why? Well, a little bit of background info is needed. These days DNS has widespread complexity, which is ever more necessary because cyber criminals are launching ever more complex and disruptive distributed denial of service (DDoS) attacks aimed at a domain’s DNS. They’ve been having more success, and when they do it works out that no functioning DNS = no website.

Developers have done their part to counter these threats quite admirably, most notably with many workarounds put in place to guarantee that DNS can continue to function as part of a rapidly growing internet.

The situation as it’s become over recent years is one where a combination of protocol and product evolution have made it so that DNS is being pushed and pulled in all sorts of different directions. This naturally means complications, and technology implementers typically have to weigh these ever-growing numbers of changes against the associated risks.

Cutting to the chase a bit again, the workarounds have ended up allowing legacy behaviours and slowing down DNS performance for everyone.

To address these problems, as of last Friday, vendors of DNS software – as well as large public DNS providers – have removed certain DNS workarounds that many people have been consciously or unconsciously relying on to protect their domains.

Flag’s Up

The reason this move had to be made is because broken implementations and protocol violations have resulted in delayed response times, far too much complexity and difficulty with upgrading to new features. DNS Flag Day has now put an end to the mass backing of many workarounds.

The change will affect sites with software that doesn’t follow published standards. For starters, domain timeouts will now be identified as a sign of a network or server problem. Moving forward, DNS servers that do not respond to extension mechanisms for DNS (EDNS) queries will be regarded as inactive servers, and queries to them will go unanswered.

Test Your Domain

If you’re the type to be proactive about these things then here’s what you can do. You can test your domain, and your DNS servers, with the EDNS compliance tester. You’ll receive a detailed technical report indicating whether your test fails, partially fails, or succeeds.

Failures in these tests are caused by broken DNS software or broken firewall configuration, which can be remediated by upgrading DNS software to the latest stable version and re-testing. If the tests still fail, organisations will need to look further into their firewall configuration.
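At the wire level, what the compliance tester checks is simply whether your server answers queries that carry an EDNS0 OPT pseudo-record. As a rough sketch (Python standard library only; the transaction ID and payload size are arbitrary illustrative choices), such a query can be assembled by hand:

```python
import struct

def encode_name(name):
    """DNS wire format for a name: length-prefixed labels ending in a zero byte."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def build_edns_query(domain, qtype=1, udp_payload=4096):
    """Build a raw DNS query (default qtype=1, an A record) with an EDNS0 OPT record."""
    # Header: ID, flags (recursion desired), 1 question, 0 answers,
    # 0 authority records, 1 additional record (the OPT pseudo-record)
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    question = encode_name(domain) + struct.pack(">HH", qtype, 1)  # class IN
    # EDNS0 OPT: root name, TYPE=41, CLASS field carries the UDP payload size,
    # TTL field carries extended flags (zero here), zero-length RDATA
    opt = b"\x00" + struct.pack(">HHIH", 41, udp_payload, 0, 0)
    return header + question + opt
```

A standards-compliant server receiving this packet responds with its own OPT record; a server that times out or returns a malformed answer is the kind of non-compliant setup that Flag Day stops accommodating.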

In addition to the initial testing, it’s recommended that businesses that rely on their online presence (which really is every one of them these days) use the next three months to make sure their domain meets what’s required of it now. Organizations with multiple domains clustered on a single network in a shared server arrangement may well find there’s an increased chance of being caught up in a DDoS attack on another domain sitting near theirs.

Also, if you’re using a third-party DNS provider, most attacks on the network won’t be aimed at you, but you’re still at risk due to being on shared hosting. VPS hosting does eliminate this risk, and VPS web hosting Canada is already a better choice for sites that need a little more ‘elbow room’ when it comes to bandwidth and such. If VPS is something that interests you, 4GoodHosting has some of the best prices on VPS hosting packages and we’ll be happy to set you up. Just ask!

DNS Amplification and DNS Flood Risks

We’re now going to see more weak domains spanning the internet than ever before, and this makes it so that there is even more opportunity for cyber criminals to exploit vulnerable DNS servers through any number of different DDoS attacks.

DNS amplification is one of them, and it involves attackers sending small look-up queries with a spoofed source IP – that of the target. The target is then overloaded with DNS responses far larger than the original queries, more than it’s able to handle. The result is that legitimate DNS queries are blocked and the organization’s network is hopelessly backed up.
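The danger is easy to quantify: the ‘amplification’ is just the ratio of response size to query size. A quick sketch, with illustrative byte counts (not measurements from any particular attack):

```python
def amplification_factor(query_bytes, response_bytes):
    """Bytes of attack traffic the target receives per byte the attacker sends."""
    return response_bytes / query_bytes

# Illustrative numbers: a small query of roughly 60 bytes can elicit a
# multi-kilobyte response from a poorly configured open resolver.
```

At a 50x factor, an attacker with modest upstream bandwidth can direct a vastly larger flood at the victim, which is why open, non-compliant resolvers are such attractive tools for these attacks.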

Another one is the DNS flood, and this involves waves of queries being aimed at the DNS servers hosting specific websites. These tie up server-side assets like memory and CPU with a barrage of UDP requests generated by scripts running on compromised botnet machines.

Layer 7 (application layer) attacks will almost certainly be on the rise now too, including those targeting DNS services with HTTP and HTTPS requests. These attacks are built to target applications with requests that look like legitimate ones, which can make them particularly difficult to detect.

What’s Next

Cyber-attacks will continue, and continue to evolve. Organizations will continue to spend time, money and resources on security. As regards DNS, it’s now possible to corrupt and take advantage of what was once a fail-safe means of web security. The measures taken on DNS Flag Day have been put in place to address this problem, and it’s important that you now confirm your domain meets the new requirements. Again, use the tester mentioned above to check yours.

There’s going to be a bit of a rough patch for some, but this is a positive step in the right direction. DNS is an essential part of the wider internet infrastructure. Entering or leaving a network is going to be less of a simple process now, but it’s the way it has to be.

Global Environmental Sustainability with Data Centers

Last week we talked about key trends for software development expected for 2019, and today we’ll discuss another trend for the coming year that’s a bit more of a given. That being that datacenters will have even more demands placed on their capacities as we continue to become more of a digital working world all the time.

Indeed, datacenters have grown to be key partners for enterprises, rather than being just an external service utilized for storing data and business operation models. Even the smallest of issues in datacenter operations can impact business.

While datacenters are certainly the lifeblood of every business, they also have global impacts, particularly as relates to energy consumption. Somewhere in the vicinity of 3% of total electricity consumption worldwide is attributable to datacenters, and to put that in perspective, that’s more than the entire power consumption of the UK.

Datacenters also account for 2% of global greenhouse gas emissions, and 2% of electronic waste (aka e-waste). Many people aren’t aware of the extent to which our increasingly digital world impacts the natural one so directly, but it really does.

Like any good Canadian web hosting provider who provides the service for thousands of customers, we have extensive datacenter requirements ourselves. Most will make efforts to ensure their datacenters operate as energy-efficiently as possible, and that goes along with the primary aim – making sure those data centers are rock-solid reliable AND as secure as possible.

Let’s take a look today at what’s being done around the globe to promote environmental sustainability with data centers.

Lack of Environmental Policies

Super Micro Computer recently put out a report entitled ‘Data Centers and the Environment’, and it stated that 43% of organizations don’t have an environmental policy, and another 50% have no plans to develop any such policy anytime soon. The reasons why? High costs (29%) and a lack of resources or understanding (27%), while another 14% don’t make environmental issues a priority.

The aim of the report was to help datacenter managers better understand the environmental impact of datacenters, provide quantitative comparisons of other companies, and then in time help them reduce this impact.

Key Findings

28% of businesses take environmental issues into consideration when choosing datacenter technology

Priorities that came before it for most companies surveyed were security, performance, and connectivity. However, 9% of companies considered ‘green’ technology to be the foremost priority. When it comes to actual datacenter design, though, the number of companies who put a priority on energy efficiency jumps by 50 points to 59%.

The Average PUE for a Datacenter is 1.89

Power Usage Effectiveness (PUE) is the ratio of total energy consumed by a datacenter to the energy delivered to its IT equipment. The report found the average datacenter PUE is approximately 1.89, but many (over 2/3) of enterprise datacenters come in with a PUE over 2.03.

Further, it seems some 58% of companies are unaware of their datacenter’s PUE. Only a meagre 6% come in at the ideal range between 1.0 and 1.19.
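As a quick sketch of the arithmetic: a facility drawing 1,890 kWh in total while delivering 1,000 kWh to its IT load scores exactly that 1.89 average.

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT equipment energy.
    1.0 is the theoretical ideal, where every watt goes to IT gear."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh
```

Everything above 1.0 represents overhead – cooling, power conversion, lighting – which is why design choices like free-air cooling move the needle on this number.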

24.6 Degrees C is the Average Datacenter Temperature

It’s common for companies to run datacenters at higher temperatures to reduce strain on HVAC systems and increase savings on energy consumption and related costs. The report found 43% of the datacenters have temperatures ranging between 21 degrees C and 24 degrees C.

The primary reasons indicated for not running datacenters at higher temperatures are concerns over reliability and performance. Hopefully these operators will soon learn that recent advancements in server technology have optimized thermal designs, and newer datacenter designs make use of free-air cooling. With them, they can run datacenters at ambient temperatures up to 40 degrees C with no decrease in reliability or performance. It also helps improve PUE and reduce costs.

Another trend in data center technology is immersion cooling, where server hardware is cooled by being entirely immersed in a non-conductive liquid. We can expect to see more of this type of datacenter technology rolled out this year too.

3/4 of Datacenters Have System Refreshes Within 5 Years

Datacenters and their energy consumption can be optimized with regular system updates and the addition of modern technologies that consume less power. The report found that approximately 45% of datacenter operators refresh their systems every 3 years, and 28% of them do it every four to five years. It also seems that the larger the company, the more likely they are to do these refreshes.

8% Increase in Datacenter E-Waste Expected Each Year

It’s inevitable that electronic waste (e-waste) is created when datacenters dispose of server, storage, and networking equipment. It’s a bit of a staggering statistic when you learn that around 20 to 50 million metric tons of e-waste is disposed of every year around the world, and the main reason it’s so problematic is that e-waste deposits heavy metals and other hazardous waste into landfills. If left unchecked, and we continue to produce it as we have, e-waste disposal will increase by 8% each year.
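That 8% figure compounds quickly. A quick sketch of the projection (the starting tonnage is purely illustrative):

```python
def projected_ewaste(current_tons, annual_growth=0.08, years=5):
    """Compound an annual growth rate over a number of years."""
    return current_tons * (1 + annual_growth) ** years

# At 8% annual growth, the e-waste stream roughly doubles in about nine years.
```

In other words, an 8% annual increase is not a slow drift: left unchecked it means twice today’s disposal volume within a decade.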

Some companies partner with recycling companies to dispose of e-waste, and some repurpose their hardware in any one of a number of different ways. The report found that some 12% of companies don’t have a recycling or repurposing program in place, typically because of high costs, difficulty finding partners / providers in their area, and a lack of proper planning.

On a more positive note, many companies are adopting policies to address the environmental issues that stem from their datacenter operations. Around 58% of companies already have an environmental policy in place or are developing one.

We can all agree that datacenters are an invaluable resource and absolutely essential for the digital connectivity of our modern world. However, they are ‘power pigs’ as the expression goes, and it’s unavoidable that they are given the sheer volume of activity that goes on within them every day. We’ve seen how they’ve become marginally more energy efficient, and in this year to come we will hopefully see more energy efficiency technology applied to them.

Key Trends in Software Development Expected for 2019

Here we are into the first week of 2019 and as expected we’ve got a whole lot on the horizon this year in the way of software development. We live in a world that’s more and more digital all the time, and the demands put on the software development industry are pretty much non-stop in response to this ongoing shift. Oftentimes it’s all about more efficient ‘straight lining’ of tasks as well as creating more of a can-do environment for people who need applications and the like to work smarter.

Here at 4GoodHosting, a part of what makes us a reputable Canadian web hosting provider is the way we stay abreast of developments. Not only in the web hosting industry, but also in the ones that have a direct relevance for clients of ours in the way they’re connected to computing and computing technology.

Today we’re going to discuss the key trends in software development that are expected for this coming year.

Continuing to Come a Long Way

Look back 10 years and you’ll surely agree the changes in the types of applications and websites that have been built – as well as how they’ve been built – are really quite something. The web of 2008 is almost unrecognizable. Today it is very much an app and API economy. Only 10 or so years ago JavaScript frameworks were the newest and best thing around, but now building for browsers exclusively is very much a thing of the past.

In 2019 we’re going to see priorities put on progressive web apps, artificial intelligence, and native app development remain. As adoption increases and new tools emerge, we can expect to see more radical shifts in the ways we work in the digital world. There’s going to be less in the way of ‘cutting edge’ and more in the way of refinements on technology that reflect developers now having a better understanding of how technologies can be applied.

The biggest thing for web developers now is that they need to expand upon the stack as applications become increasingly lightweight (in large part due to libraries and frameworks like Vue and React), and data grows to be more intensive, which can be attributed to the range of services upon which applications and websites depend.

Reinventing Modern JavaScript Web Development

One of the things that’s being seen is how topics that previously weren’t included under the umbrella of web development – microservices and native app development most notably – are now very much part of the need-to-know landscape.

The way many aspects of development have been simplified has forced developers to evaluate how these aspects fit together more closely. With all the layers of abstraction in modern development, the way things interact and work alongside each other becomes even more important. Having a level of wherewithal regarding this working relationship is very beneficial for any developer.

Those who’ve adapted to the new realities well will now agree that it’s no longer a case of writing the requisite code to make something run on the specific part of the application being worked on. Rather, it’s about understanding how the various pieces fit together from the backend to the front.

In 2019, developers will need to dive deeper and become inside-out familiar with their software systems. Being explicitly comfortable with backends will be an increasingly necessary starting point. Diving into the cloud and understanding that dynamic is also highly advisable. It will be wise to start playing with microservices. Rethinking and revisiting languages you thought you knew is a good idea too.

Be Familiar With Infrastructure to Tackle Challenges of API Development

Some will be surprised to hear it, but as the stack shrinks and the responsibilities of web developers shift we can expect that having an understanding of the architectural components within the software being built will be wholly essential.

That reality is put in place by DevOps, and essentially it has made developers responsible for how their code runs once it hits production. As a result, the requisite skills and toolchain for the modern developer is also expanding.

RESTful API Design Patterns and Best Practices

You can make your way into software architecture through a number of different avenues, but exploring API design is likely the best of them. Hands on RESTful API Design gives you a practical way into the topic.

REST is the industry standard for API design, and the diverse range of tools and approaches is making client management a potentially complex but interesting area. GraphQL, a query language developed by Facebook, has emerged as a serious challenger to REST, while Redux and Relay – a pair of libraries for managing data in React applications – have both seen a significant amount of interest over the last year as key tools for working with APIs.
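The design patterns in question largely come down to mapping HTTP verbs onto resource operations with appropriate status codes. A minimal, framework-free sketch of those semantics in Python – the resource name and in-memory storage are illustrative, not a production design:

```python
class UserStore:
    """In-memory 'users' resource demonstrating RESTful verb-to-operation mapping."""

    def __init__(self):
        self._users = {}
        self._next_id = 1

    def create(self, data):            # POST /users -> 201 Created
        uid, self._next_id = self._next_id, self._next_id + 1
        self._users[uid] = data
        return 201, {"id": uid, **data}

    def get(self, uid):                # GET /users/{id} -> 200 OK or 404 Not Found
        if uid not in self._users:
            return 404, {"error": "not found"}
        return 200, {"id": uid, **self._users[uid]}

    def delete(self, uid):             # DELETE /users/{id} -> 204 No Content or 404
        if self._users.pop(uid, None) is None:
            return 404, {"error": "not found"}
        return 204, None
```

A real service would sit behind a router and serialize to JSON, but the verb/status discipline shown here is the core of what ‘RESTful design’ means in practice.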

Microservices for Infrastructure Responsibility

Microservices are becoming the dominant architectural mode, and that’s the reason we’re seeing such an array of tools capable of managing APIs. Expect a whole lot more of them to be introduced this year, and be proactive in finding which ones work best for you. While you may not need to implement microservices now, if you want to be building software in 5 years time then you really should become explicitly familiar with the principles behind microservices and the tools that can assist you when using them.

We can expect to see containers being one of the central technologies driving microservices. You could run microservices in a virtual machine, but as they’re harder to scale than containers you likely wouldn’t see the benefits you’ll expect from a microservices architecture. As a result, really getting to know core container technologies should also be a real consideration.

The obvious place to start is with Docker. Developers need to understand it to varying degrees, but even those who don’t think they’ll be using it immediately will agree that the real-world foundation in containers it provides will be valuable knowledge to have at some point.
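As a taste of what that real-world foundation looks like, here is a minimal, hypothetical Dockerfile packaging a small Python service – the base image tag and file names are illustrative choices:

```dockerfile
# Start from a slim official Python base image
FROM python:3.7-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy in the application code and declare how to run the container
COPY . .
CMD ["python", "app.py"]
```

Even a sketch this small illustrates the core container ideas: a reproducible environment declared as code, and layered builds that make rebuilds cheap.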

Kubernetes warrants mention here as well, as it is the go-to tool that allows you to scale and orchestrate containers. It offers control over how you scale application services in a way that would have been unimaginable a decade ago.

A great way for anyone to learn how Docker and Kubernetes come together as part of a fully integrated approach to development is with Hands on Microservices with Node.js.

Continued Embracing of the Cloud

It appears the general trend is towards full stack, and for this reason developers simply can’t afford to ignore cloud computing. The levels of abstraction it offers, and the various services and integrations that come with the leading cloud services make it so that many elements of the development process are much easier.

Issues surrounding scale, hardware, setup and maintenance nearly disappear entirely when you use cloud. Yes, cloud platforms bring their own set of challenges, but they also allow you to focus on more pressing issues and problems.

More importantly, however, they open up new opportunities. First and foremost of them: going serverless becomes a possibility. Doing so allows you to scale incredibly quickly by running everything on your cloud provider.

There are other advantages too, like when you use cloud to incorporate advanced features like artificial intelligence into your applications. AWS has a whole suite of machine learning tools; Amazon Lex helps you build conversational interfaces, and Amazon Polly turns text into speech. Azure Cognitive Services has a nice array of features for vision, speech, language, and search.
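As a hedged sketch of what calling one of these services looks like – here Amazon Polly via boto3, assuming AWS credentials are already configured; the helper function name is our own, not part of any SDK:

```python
def polly_params(text, voice="Joanna", audio_format="mp3"):
    """Build the keyword arguments that Polly's synthesize_speech call expects."""
    return {"Text": text, "VoiceId": voice, "OutputFormat": audio_format}

# With credentials configured, the actual call would look like:
#   import boto3
#   response = boto3.client("polly").synthesize_speech(**polly_params("Hello world"))
#   audio = response["AudioStream"].read()
```

The point is how little glue code stands between an application and a capability like text-to-speech once you’re on a cloud platform: a client, a handful of parameters, and a response stream.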

As a developer, it’s going to be increasingly important to see the Cloud as a way of expanding on the complexity of applications and processes while keeping them agile. Features and optimizations that previously might have been found to be sluggish or impossible can and should be developed as necessary and then incorporated. Leveraging AWS and Azure (among others) is going to be something that many developers will do with success in the coming year.

Back to Basics with New Languages & Fresh Approaches

All of this ostensible complexity in contemporary software development may lead some to think that languages don’t matter as much as they once did. It’s important to know that’s definitely not the case. Building up a deeper understanding of how languages work, what they offer, and where they come up short can make you a much more accomplished developer. Doing what it takes to be prepared is really good advice for what’s an ever-more unpredictable digital world this year and in the years to follow.

We can expect to see a trend where developers go back to a language they know and explore a new paradigm within it, or they learn a new language from scratch.

Never Time to Be Complacent

We’ll reiterate what the experts we read are saying: that in just a matter of years much of what is ‘emerging’ today will be old hat. It’s helpful to take a look at the set of skills many full stack developer job postings are requiring. You’ll see that the different demands are so diverse that adaptability should be a real priority for a developer who wants to remain upwardly mobile within his or her profession. Without doubt it will be immensely valuable both for your immediate projects and future career prospects.

Top-5 Strategic Technology Trends Expected for 2019

Here we are on the final day of the year, and most will agree that 2018 has seen IT technology expand in leaps and bounds exactly as it was expected to. In truth, it seems every year brings us a whole whack of new technology trends cementing themselves in the world of IT, web, and computing development. Not surprisingly, the same is forecast for 2019.

Here at 4GoodHosting, a significant part of what makes us one of the many good Canadian web hosting providers is that we enjoy keeping abreast of these developments and then aligning our resources and services with them when it’s beneficial for our customers to do so.

Worldwide IT spending for 2019 is projected to be in the vicinity of $3.8 trillion. That will be a 3.2% increase from the roughly $3.7 trillion spent this year. That’s a LOT of money going into the research and development shaping the digital world that’s so integral to the professional and personal lives of so many of us.

So for the last day of 2018, let’s have a look at the top 5 strategic technology trends we can expect to become the norm over the course of the year that starts tomorrow.

  1. Autonomous Things

We’ve all heard the rumblings that we’re on the cusp of the start of the robot age. It seems that may be true. Autonomous things like robots, drones and autonomous vehicles use AI to automate functions that were previously performed by humans. This type of automation goes beyond that provided by rigid programming models, and these automated things use AI to deliver advanced behaviors tailored by interacting more naturally with their surroundings and with people – when necessary.

The proliferation of autonomous things will constitute a real shift from stand-alone intelligent things to collections of them that will collaborate very intelligently. Multiple devices will work together, and without human input if it’s not required – or not conducive to more cost-effective production or maintenance.

The last part of that is key, as the way autonomous things can reduce production costs by removing the employee cost from the production chain wherever possible is going to have huge ramifications for unskilled labour. As the saying goes – you can’t stop progress.

  2. Augmented Analytics

Augmented analytics can be defined as a specific area of augmented intelligence, and most relevant to what we’re talking about here is the way we’ll see it start to use machine learning (ML) to transform how analytics content is developed, shared, and consumed. The forecast seems to be that augmented analytics capabilities will quickly become part of mainstream adoption and affix themselves as a key feature of data preparation, data management, process mining, modern analytics, data science platforms and business process management.

We can also expect to see automated insights from augmented analytics being embedded in enterprise applications. Look for HR, finance, marketing, customer service, sales, and asset management departments to be optimizing decisions and actions of all employees within their context. These insights from analytics will no longer be utilized by analysts and data scientists exclusively.

The way augmented analytics will automate data preparation, insight generation and insight visualization, and reduce the reliance on professional data scientists, promises to be a huge paradigm shift too. It’s expected that through 2020 the number of citizen data scientists will grow five times faster than the number of ‘industry-expert’ data scientists, with these citizen data scientists filling the data science and machine learning talent gap created by the shortage and high cost of traditional data scientists.

  3. AI-Driven Development

We should also expect the market to shift from the old model, where professional data scientists partnered with application developers to create most AI-enhanced solutions, to a newer one where professional developers can operate on their own using predefined models delivered as a service. Developers now have an ecosystem of AI algorithms and models, along with development tools tailored to integrating AI capabilities into workable solutions that weren’t reachable before.

Applying AI to the development process itself opens up another opportunity for professional application development: automating various data science, application development and testing functions. 2019 is forecast to be the start of a three-year window in which at least 40% of new application development projects will have AI co-developers working within the development team.

  4. Digital Twins

Much as the name suggests, a digital twin is a digital representation of a real-world entity or system, and we can expect them to become increasingly common over the coming year. So much so, in fact, that by 2020 it is estimated there will be more than 20 billion connected sensors and endpoints serving digital twins across millions of different digital tasks.

These digital twins will be deployed simply at first, but we can expect them to evolve over time, gaining ever-greater abilities to collect and visualize the right data, apply the right analytics and rules, and respond effectively to business objectives.

Digital twins of organizations will help drive efficiencies in business processes, and create more flexible, dynamic and responsive processes that can potentially react to changing conditions automatically. Look for this trend to really start picking up steam in 2019.
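The digital-twin idea can be made concrete with a minimal sketch: a software object that mirrors the last-known state of a physical asset and applies rules to it. The asset name, sensor fields and threshold below are invented for illustration; real twins sit on IoT platforms with far richer models.

```python
# Minimal digital-twin sketch: an object that mirrors the last-known state
# of a physical asset and applies simple rules to that mirrored state.
class DigitalTwin:
    def __init__(self, asset_id, overheat_threshold=90.0):
        self.asset_id = asset_id
        self.overheat_threshold = overheat_threshold  # hypothetical rule
        self.state = {}

    def ingest(self, sensor_readings):
        """Update the mirrored state from a dict of sensor readings."""
        self.state.update(sensor_readings)

    def alerts(self):
        """Apply rules to the mirrored state and return any alerts."""
        found = []
        temp = self.state.get("temperature_c")
        if temp is not None and temp > self.overheat_threshold:
            found.append(f"{self.asset_id}: overheating ({temp} C)")
        return found

pump = DigitalTwin("pump-7")
pump.ingest({"temperature_c": 95.2, "rpm": 1450})
print(pump.alerts())   # ['pump-7: overheating (95.2 C)']
```

The interesting part is what the sketch omits: production twins feed these mirrored states into analytics and visualization layers, which is where the business value described above comes from.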

  5. Immersive Experience

The last trend we’ll touch on today is the one most people can relate to on an everyday level. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are revolutionizing the way people interact with the digital world, as well as how they regard it overall. It is from this combined shift in perception and interaction models that future immersive user experiences will be shaped.

2019 should see thinking move beyond individual devices and fragmented user interface (UI) technologies toward a multichannel and multimodal experience. The relevance of it all will be in how that experience connects people with the digital world across the hundreds of edge devices surrounding them – traditional computing devices, wearables, automobiles, environmental sensors and consumer appliances will all increasingly join the ‘smart’ device crowd as we move forward.

In the bigger picture, this multi-experience environment will create an ambient experience where the spaces that surround us create a ‘digital entirety’ rather than the sum of individual devices working together. In a sense it will be like the environment itself is the digital processor.

We’ll discuss more of what’s forecast to be in store for web hosting and computing in 2019 in the following weeks, but for now we’d like to say Happy New Year, and that we continue to appreciate your choosing us as your web hosting provider. Here’s to a positive and productive coming year for all of you.



Why 64-Bit is Leaving 32-Bit in the Dust with Modern Computing

Having to choose between 32-bit and 64-bit options when downloading an app or installing a game is pretty common, and many PCs will have a sticker on them that reads 64-bit processor – you’ll be hard pressed to find one that reads 32-bit. It’s easy to conclude, as with most things, that more is better, but why is that exactly? Unless you’re a genuinely computer-savvy individual, you may not know the real significance of the difference between the two.

There is some meat to that though, and here at 4GoodHosting, as a top Canadian web hosting provider, we try to keep our thumb on the pulse of the web hosting and computing world. Having a greater understanding of what exactly is ‘under the hood’ of your desktop or notebook – and what’s advantageous about it, or not – is helpful. So let’s have a look at the important difference between 32-bit and 64-bit computing today.

Why Bits Matter

First and foremost, it’s about capability. As you might expect, a 64-bit processor is more capable than a 32-bit processor, and primarily because it can handle more data at once. A greater number of computational values can be taken on by a 64-bit processor and this includes memory addresses. This means it’s able to access over four billion times the physical memory of a 32-bit processor. With the ever-greater memory demands of modern desktop and notebook computers, that’s a big deal.

The key difference shows up in memory limits. 32-bit processors can handle only a limited amount of RAM (in Windows, 4GB or less), while 64-bit processors can take on much more. The ability to use it, however, depends on your operating system being able to take advantage of this greater access to memory. Run a 64-bit version of Windows 10 or later on a PC and you won’t need to worry about those limits.
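The 4GB figure isn’t arbitrary – it falls straight out of address width, since an n-bit address can distinguish 2^n byte locations. A quick arithmetic check:

```python
# An n-bit memory address can distinguish 2**n distinct byte locations,
# which is exactly where the 32-bit 4GB ceiling comes from.
def max_addressable_bytes(bits):
    return 2 ** bits

GIB = 1024 ** 3
print(max_addressable_bytes(32) // GIB)   # 4 -> the 4 GiB 32-bit ceiling
print(max_addressable_bytes(64) // max_addressable_bytes(32))  # 4294967296
```

That last number – roughly 4.3 billion – is the “over four billion times the physical memory” mentioned above.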

The proliferation of 64-bit processors and larger capacities of RAM have led both Microsoft and Apple to upgrade their operating systems to take full advantage of the new technology. OS X Snow Leopard, which arrived in 2009, was Apple’s first fully 64-bit operating system, and the iPhone 5s was the first smartphone with a 64-bit chip, the Apple A7.

Basic versions of the Microsoft Windows OS had software limits on the amount of RAM available to applications. Even in the Ultimate and Professional editions of the operating system, 4GB is the maximum usable memory a 32-bit version can address. Before you conclude that going 64-bit is the route to nearly unlimited processing capability, however, understand that any real jump in power comes from software designed to operate within that architecture.

Designed to Make Use of Memory

These days, the recommendation is that you shouldn’t have less than 8GB of RAM to make the best use of applications and video games designed for 64-bit architecture. This is especially useful for programs that can store a lot of information for immediate access, and ones that regularly open multiple large files at the same time.

Another plus is that most software is backwards compatible, allowing you to run 32-bit applications in a 64-bit environment without performance issues or extra work on your part. There are exceptions, the most notable being virus protection software and drivers: hardware typically requires the version matching your operating system be installed if it’s going to function properly.

Same, But Different

There’s likely no better example of this difference than one found right within your file system. If you’re a Windows user, you’ve likely noticed that you have two Program Files folders; the first is labeled Program Files, while the other is labeled Program Files (x86).

Applications installed on a Windows system share resources called DLL files, and these are structured differently depending on whether they’re used by 64-bit or 32-bit applications. Should a 32-bit application reach out for a DLL and discover that it’s a 64-bit version, it’ll respond quite simply in one way – by refusing to run.

32-bit (x86) architecture has been in use for a good long time now, and there are still plenty of applications that run on it. How they run on some platforms is changing, however. Modern 64-bit systems can run both 32-bit and 64-bit software, and the reason is those two separate Program Files directories. 32-bit applications are shuffled off to the appropriate x86 folder, and Windows then responds by serving up the right DLL – the 32-bit version in this case. Applications in the regular Program Files directory, meanwhile, are served the 64-bit versions.
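If you’re curious which side of the divide your own software sits on, most languages let you ask. In Python, for instance, the size of a pointer reveals the bitness of the running interpreter:

```python
# Check whether the running Python build is 32- or 64-bit: a pointer ("P")
# occupies 4 bytes on 32-bit builds and 8 bytes on 64-bit builds.
import platform
import struct

bits = struct.calcsize("P") * 8
print(bits)                        # 32 or 64, depending on the build
print(platform.architecture()[0])  # e.g. '64bit'
```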

Naturally, we can expect 32-bit computing architecture to go the way of the Dodo bird before long, but it’s interesting to note that the superiority of 64-bit amounts to much more than a simple doubling of bits.


Top 5 Programming Languages for Taking On Big Data

In today’s computing world, ‘big data’ – data sets too large or complex for traditional data-processing software – is increasingly common, and the ability to work with it is an increasingly expected skill for IT professionals. One of the most important decisions these individuals have to make is choosing a programming language for big data manipulation and analysis. More is now required than simply understanding big data and framing the architecture to handle it; choosing the right language means you’re able to execute effectively, and that’s very valuable.

As a proven reliable Canadian web hosting provider, here at 4GoodHosting we are naturally attuned to developments in the digital world. Although we didn’t know what it would come to be called, we foresaw the rise of big data – but we didn’t entirely foresee just how much influence it would have on all of us who occupy some niche in information technology.

So with big data becoming even more of a buzz term every week, we thought we’d put together a blog about what seems to be the consensus on the top 5 programming languages for working with Big Data.

Best languages for big data

All of these 5 programming languages make the list because they’re both popular and deemed to be effective.


  1. Scala

Scala blends object-oriented and functional programming paradigms very nicely, and is fast and robust. It’s a popular language choice for many IT professionals needing to work with big data. Another testament to its functionality is that both Apache Spark and Apache Kafka have been built on top of Scala.

Scala runs on the JVM, meaning that code written in Scala can be easily incorporated within a Java-based Big Data ecosystem. A primary factor differentiating Scala from Java is that Scala is far less verbose: what would take hundreds of lines of confusing-looking Java code can often be done in 15 or so lines of Scala. One drawback attached to Scala, though, is its steep learning curve, especially compared to languages like Go and Python, and in some cases this difficulty puts off beginners looking to use it.

Advantages of Scala for Big Data:

  • Fast and robust
  • Suitable for working with Big Data tools like Apache Spark for distributed Big Data processing
  • JVM compliant, can be used in a Java-based ecosystem


  2. Python

Python was earmarked as one of the fastest-growing programming languages in 2018, and it benefits from the way its general-purpose nature allows it to be used across a broad spectrum of use cases. Big Data programming is one of the primary ones.

Many libraries for data analysis and manipulation are being used within Big Data frameworks to clean and manipulate large chunks of data – pandas, NumPy and SciPy among them, all of which are Python-based. In addition, the most popular machine learning and deep learning frameworks, like scikit-learn and TensorFlow, are written in Python too, and are being applied within the Big Data ecosystem much more often.

One negative for Python, however, is speed: its relative slowness is a big reason it’s not an established Big Data programming language yet. While it is indisputably easy to use, Big Data professionals have found systems built with languages such as Java or Scala to be faster and more robust.

Python makes up for this by going above and beyond with other qualities. It is primarily a scripting language, so interactive coding and development of analytical solutions for Big Data is made easy as a result. Python also has the ability to integrate effortlessly with the existing Big Data frameworks – Apache Hadoop and Apache Spark most notably. This allows you to perform predictive analytics at scale without any problem.
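That scripting convenience is easy to show. Below is a small, dependency-free sketch of the kind of cleaning pass Python makes trivial; in a real Big Data pipeline, pandas or Spark would do the same work at scale. The column names and values are invented for the example.

```python
# Dependency-free sketch of a typical cleaning pass: drop malformed or
# missing measurements, then summarize. pandas/Spark scale this pattern up.
import csv
import io
import statistics

raw = io.StringIO(
    "user,latency_ms\n"
    "a,120\n"
    "b,\n"       # missing value: will be dropped
    "c,95\n"
    "d,oops\n"   # malformed value: will be dropped
    "e,130\n"
)

def clean(rows):
    for row in csv.DictReader(rows):
        try:
            yield float(row["latency_ms"])
        except (ValueError, TypeError):
            continue  # skip missing/malformed measurements

latencies = list(clean(raw))
print(latencies)                    # [120.0, 95.0, 130.0]
print(statistics.mean(latencies))   # 115.0
```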

Advantages of Python for big data:

  • General-purpose
  • Rich libraries for data analysis and machine learning
  • Ease of use
  • Supports iterative development
  • Rich integration with Big Data tools
  • Interactive computing through Jupyter notebooks


  3. R

Those of you who put a lot of emphasis on statistics will love R. It’s referred to as the ‘language of statistics’, and is used to build data models that can be implemented for effective and accurate data analysis.

The large repository of R packages (CRAN, the Comprehensive R Archive Network) sets you up with pretty much every tool you’d need to accomplish any task in Big Data processing. From analysis to data visualization, R makes it all doable, and it can be integrated seamlessly with Apache Hadoop, Apache Spark and most other popular frameworks used to process and analyze Big Data.

The easiest flaw to find with R as a Big Data programming language is that it’s not much of a general-purpose language. Code written in R is not production-deployable and generally has to be translated into another programming language like Python or Java. For building statistical models for Big Data analytics, however, R is hard to beat.

Advantages of R for big data:

  • Ideally designed for data science
  • Support for Hadoop and Spark
  • Strong statistical modelling and visualization capabilities
  • Support for Jupyter notebooks


  4. Java

Java is the proverbial ‘old reliable’ of Big Data programming languages. Many of the traditional Big Data frameworks, like Apache Hadoop and the collection of tools within its ecosystem, are written in Java and still used in many enterprises today. Java is also the most stable and production-ready language of the four we’ve covered so far.

Java’s primary advantage is its large ecosystem of tools and libraries for interoperability, monitoring and much more, the bulk of which have already been proven trustworthy.

Java’s verbosity is its primary drawback: having to write hundreds of lines of code for a task that would take only 15-20 lines in Python or Scala is a big minus for many developers, though the lambda expressions introduced in Java 8 counter this somewhat. Another consideration is that, unlike newer languages such as Python, Java does not support interactive, iterative development. Future releases of Java are expected to address this, however.

Java’s history, and the continued reliance on traditional Big Data tools and frameworks, means it’s unlikely to be displaced from any list of preferred Big Data languages soon.

Advantages of Java for big data:

  • Array of traditional Big Data tools and frameworks written in Java
  • Stable and production-ready
  • Large ecosystem of tried & tested tools and libraries


  5. Go

Last but not least is Go, one of the programming languages that’s gained a lot of ground recently. Designed by a group of Google engineers who had become frustrated with C++, Go is worthy of consideration simply because it powers many tools used in Big Data infrastructure, including Kubernetes and Docker.

Go is fast and easy to learn, and it is fairly easy to develop – and deploy – applications with this language. More relevant here, though, is that as businesses look to build data analysis systems that operate at scale, Go-based systems are a great fit for integrating machine learning and undertaking parallel processing of data. That other languages can be interfaced with Go-based systems relatively easily is a big plus too.

Advantages of Go for big data:

  • Fast and easy to use
  • Many tools used in Big Data infrastructure are Go-based
  • Efficient distributed computing

A few other languages deserve honourable mentions here too – Julia, SAS and MATLAB being the most notable. Our five, though, beat them on some combination of speed, efficiency, ease of use, documentation, and community support.

Which Language is Best for You?

This really depends on the use case you’ll be developing for. If your focus is hardcore data analysis involving a lot of statistical computing, R would likely be your best choice. If your aim is to develop streaming applications, Scala is your guy. If you’ll be using machine learning to leverage Big Data and develop predictive models, Python is probably best. And if you’re building Big Data solutions with traditionally-available tools, you shouldn’t stray from the old faithful – Java.

Combining the power of two languages to get a more efficient and powerful solution might be an option too. For example, you can train your machine learning model in Python and then deploy it with Spark in distributed mode. Which combination works best will depend on how efficiently your solution functions and, more importantly, how fast and accurate it is.
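The train-here, deploy-there pattern can be sketched with nothing but the standard library. The “model” below is a deliberately trivial threshold rule; a real pipeline would train with scikit-learn and broadcast the serialized model to Spark workers, but the serialize/ship/score shape is the same.

```python
# Sketch of "train in Python, deploy elsewhere": train a (trivial) model,
# serialize it, then load and score on the "deployment" side.
import pickle
import statistics

# "Train" a threshold model: flag values above the training mean.
training_data = [12.0, 15.0, 11.0, 40.0, 14.0]
model = {"threshold": statistics.mean(training_data)}   # 18.4

blob = pickle.dumps(model)   # what you would ship to the cluster

# Deployment side: load the model and score new records.
deployed = pickle.loads(blob)
scores = [x > deployed["threshold"] for x in [10.0, 25.0]]
print(scores)   # [False, True]
```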


Determining a Domain Name’s Worth

All of us have heard the stories of people who smartly purchased the rights to domain names they foresaw being in demand, then sold them for a tidy profit some time later. Then there was the well-publicized story of a former Google employee who owned Google.com for a whole minute and was handsomely rewarded by the Internet giant for giving it back in 2015. That same year Google became a subsidiary of Alphabet, and the company wisely nipped any similar problem in the bud by acquiring the related domain shortly thereafter.

Here at 4GoodHosting, we register many new domain names for clients every month as a Canadian web hosting provider who offers the service free with our web hosting packages. If you’ve identified the perfect domain for your website, you can request it through us, and provided it’s available we can secure it for you. For those of you who have ever wondered about the value of your domain name, you might be surprised to learn that you can come to an approximate valuation of it with a few online tools.

Even if your domain name is the most obscure one imaginable and would almost certainly never be in demand, this is quite interesting to learn more about.

Domain Hoarding?

The first thing to understand here is that there are hundreds of thousands of domain names that have been registered but have no website attached to them. Nearly all of them were acquired by individuals who see the possibility of selling them in the future. There are some very notable examples of this paying off, like when the Expedia Group paid $11 million for a single domain, or early registrants of prescient names receiving millions for them.

If your domain name describes the nature of your business, or uses a term that would apply equally to similar businesses or ventures elsewhere, then there may be resale value in it. In some instances there will be individuals willing to pay to assume ownership of it. Most of the time they’ll reach out to the owner via their web hosting provider contacting yours, and in rarer instances the domain owner will be aware of growing interest in the domain and put it ‘up for sale.’

What Makes a Domain Name Valuable?

For the most part a domain name is only worth as much as someone is willing to pay for it. For some domains, however, there are certain attributes that might make it have greater value:

  • Length – Shorter domain names tend to be easy to remember and require less effort to type them into a browser. Generally speaking, shorter domains tend to be worth more than longer ones.
  • Number of words – One-word domain names are always the most valuable, but combining 2 words to make business names (LinkedIn, Facebook) is a trend and has led to 2-word domain names being worth more too. Combining 3 words is almost unheard of and not recommended, so this type of domain would be by and large useless.
  • Accurate spelling – It’s true that some big brands will buy up domain names that are similarly spelled to their primary domain name. A popular domain name that’s correctly spelled will have more resale value if it is ever made available.
  • Domain name age and activity – Domains that have been live and accessible for a long time come with built-in SEO attributes. This gives them significant value, with whoever buys the name not having to work as hard to get favourable search engine results from it.
  • TLD – The top-level domain (TLD), the suffix of your domain name, is extremely important to its value. With .com domains being the most common and popular, they are the most valuable; acquiring one will cost more than the .org or .net version of the same domain name. Niche TLDs like .pizza will typically have little to no resale value.
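The attributes above could be folded into a toy scoring heuristic. To be clear, the weights below are entirely invented for illustration – real appraisal services rely on comparable-sale data, traffic and search metrics, not a formula like this.

```python
# Toy heuristic folding the attributes above into a rough score.
# The weights are invented; real appraisal services use market data.
def domain_score(name):
    label, _, tld = name.rpartition(".")
    score = 100
    score -= 2 * len(label)        # shorter names tend to be worth more
    if tld == "com":
        score += 50                # .com carries the most value
    elif tld in ("net", "org"):
        score += 20
    words = label.count("-") + 1   # crude word count via hyphens
    if words >= 3:
        score -= 30                # 3+ words: little resale value
    return max(score, 0)

print(domain_score("cars.com"))               # 142
print(domain_score("best-cheap-cars.pizza"))  # 40
```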

Finding Out If a Domain Has Value

There are a handful of domain name appraisal services online, and most won’t cost you anything to use them. Do keep in mind that the values these services place on domains are only approximations, so don’t take any valuation provided by them to be a 100% reliable estimation of what a domain name is worth.

Free Valuator is the best among them in our opinion. You can get a value estimation for a domain name in a matter of seconds and they can also introduce you to a professional domain name value assessor if you are considering making your domain name available. Estibot is another one, and it gets a mention here because it uses a different approach to determining how much a website name is worth. It actually uses mathematical models to calculate the value of a domain name.

What’s Next?

After you’ve checked the value of your domain name, you have two options. If it’s valuable, you can go ahead and make it available for sale; putting it on a domain auction site, like the one at GoDaddy, is a popular choice. Alternately, you might want to contact a professional domain name broker, who will have the knowledge and connections to get you the biggest return for your domain name. This is the best course of action if you think a big brand might want it.

If, on the other hand, you think the value is bound to be greater in the future, you could sit tight and wait to see if that happens. If this is what you choose to do, then taking steps to improve the value of your domain name – like adding content to your site and using other approaches to boost its SEO value – is a smart move.

Have a domain name that’s estimated to be much more valuable than you thought? We’d like to hear about it here.

New Blockchain Development Kit Arrives from Microsoft

Blockchain isn’t exactly a household name in the digital commerce world – yet – but for those of us on the inside track it’s already well established as the next big thing insofar as grand-scale transactional computing is concerned. For those who aren’t familiar with it, we’ll explain briefly here: blockchain is a shared, distributed ledger technology in which each transaction is digitally signed to ensure its authenticity and integrity. From a ‘what does that mean for me’ perspective, it’s a new and very powerful means of upping security for digital transactions as well as ensuring pinpoint accuracy.
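The tamper-evident part of that definition can be illustrated with a minimal hash chain: each block stores the hash of the previous block, so altering any past transaction breaks every later link. This is only the skeleton of the idea – real blockchains add digital signatures, consensus and distribution across many nodes.

```python
# Minimal tamper-evident ledger: each block records the hash of the
# previous block, so editing history invalidates the whole chain.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, transaction):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"tx": transaction, "prev_hash": prev})

def is_valid(chain):
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append(ledger, {"from": "alice", "to": "bob", "amount": 5})
append(ledger, {"from": "bob", "to": "carol", "amount": 2})
print(is_valid(ledger))   # True

ledger[0]["tx"]["amount"] = 500   # tamper with history...
print(is_valid(ledger))   # ...and the chain no longer verifies: False
```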

Right, now that we’ve got the basic explanation out of the way we’re going to come at this from an angle that’s designed for those of you already very much in the know regarding blockchain. Here at 4GoodHosting, we’re like any leading Canadian web hosting provider in that a good many of our customers have ecommerce websites where secure transactions are an absolute priority. As such, blockchain can’t arrive in full soon enough and that’s why recent news from Microsoft is very promising.

Microsoft is about to offer a new serverless blockchain development kit powered by its intelligent cloud platform – Azure. As of now it’s being called the ‘Azure Blockchain Development Kit’ and the aim with it is to facilitate seamless integration of blockchain with the best of Microsoft and other third-party SaaS offerings. The Principal Program Manager at Microsoft states that it will enable users to build key management, off-chain identity and data monitoring and messaging APIs into reference architectures that can be used to quickly build blockchain applications.

It is expected to have 3 major capabilities:

  • Integrating data and systems
  • Connecting interfaces
  • Deployment of smart contracts and blockchain networks

It should enable organizations and individuals to connect to blockchain through user interfaces. The development kit will come ready with voice and SMS interfaces, Internet of Things device integration, support for mobile clients on Android and iOS, virtual assistants, and bots. Voice and SMS interfaces promise to be particularly useful for developers building tracking and supply chain solutions.

In addition, it will be compatible with other ledger technologies like Ethereum and Bitcoin too.

Concurrently with the new Azure Blockchain Development Kit, Microsoft is also announcing the release of a set of Flow Connectors and Logic Apps for blockchain. The Ethereum Blockchain connector will allow users to call contract actions, deploy contracts, trigger other Logic Apps, and read contract state. This is important because end-to-end blockchain solutions require integration with data, software, and media that live ‘off chain’, as this state is referred to.

This new product from Microsoft is a next step in its quest to simplify blockchain and make it quicker, more accessible and more affordable for anyone with an idea of what they might do with it. Based on serverless architecture, the expectation is that it will further reduce costs, making it accessible for every blockchain enthusiast, ISV and enterprise.

The Kit is being built atop Microsoft’s investments in blockchain technology and connects to Azure’s compute, storage, data and integration services, which are already proven reliable in the workspace. Over recent years, Microsoft has been working on extending the use of Blockchain and related distributed ledger technologies. The idea is that new digital identities will eventually come together to promote greater personal security, privacy and control.

It should be mentioned that other major players (Google most notably) also recently launched similar blockchain development kits. What will remain to be seen is what developers will think of it and how practical it is for them.

Microsoft has a white paper on how to deploy any decentralized application using the Azure Blockchain Development Kit. You can download it here, and overall the development of Blockchain is definitely something worth keeping tabs on as it continues to change the landscape of the ecommerce world.


Testing – And Improving – Page Speed for More Responsive Sites

In all the recent hubbub about HTTPS, GDPR regulation and the like, there’s been some degree of neglect for the importance of website loading speeds. Most people behind a website won’t need to be made aware of what bounce rates are, or that people tend to be just as impatient when viewing a website as they are with nearly everything else in their lives. Page speed has been a part of the Google algorithm for the better part of 10 years now.

Here at 4GoodHosting, the nature of our business and the fact we’re a Canadian web hosting provider with our thumb on the pulse of the web hosting industry means we really grasp the importance of issues like these when it comes to website performance. We’re 10 months removed from Google starting to educate us all about how page speed is important for the user experience. The focus has of course shifted to mobile search in a big way, and that’s quite natural given the way mobile has become the predominant search method.

At the start of 2018 Google announced its ‘speed update’, saying that it would only affect the small percentage of sites offering a painfully slow user experience. Most site owners have gotten on board sufficiently over the last year, but for those who have yet to, let’s spend today discussing how to test and improve website page speed.

How To Test Your Site

There are several online services you can use to gauge how quick your site is, but Google’s two are really all you need to consider. First up is PageSpeed Insights, which provides a reasonably accurate overview of how your site is performing and suggests things you can do to improve it. What we’ve learned from it is that render blocking (a slow resource that stops the whole page from loading) is the culprit most of the time. The issue isn’t easy to remedy, but you have to do it.

If mobile is your primary focus, Google’s mobile site testing tool is the one for you. It compares your site to other mobile sites and delivers a percentage score. Keep in mind that for both of these tools the numbers are estimates; while likely fairly close to accurate, you shouldn’t take them as definitive findings.

This leads to the next part of our discussion here – tips you can implement to improve your page loading speeds.

How To Improve Your Page Load Times

There’s much you can do to speed up your site. Sometimes you’ll be addressing platform specific problems, while in other instances they will be more general issues. Some of these changes you can implement yourself, but for others you may need to bring in someone more web savvy than yourself.

  1. Better Hosting

Inexpensive shared hosting means your site is on a server filled with many other domains like yours, which can lead to a slower site due to a lack of available resources on the server. The simple fix is to move to better hosting. A dedicated server or VPS (VPS hosting in Canada) is an option, but for many smaller sites and interests it’s going to be an expensive and unnecessary solution. It should be something to consider, however, if shared hosting is behind your website’s slow page loading times.

  2. Optimize Your Images

Plain and simple, compressing your images and reducing their size is the easiest and arguably the most effective way to improve page load times. Optimizing an image can be done in an offline editor or with a dedicated online compression service, which in our opinion can beat the Adobe compression tool for smaller image sizes.

  3. Cache Your Site For Speed Gains

Caching your site can speed it up enormously. When you cache a site, a snapshot of each page is taken and kept handy, so it can be delivered to the visitor much more quickly than it would be normally. This can be done in numerous ways; WordPress users can use the W3 Total Cache plugin, whose large number of options is something you should familiarize yourself with.
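The snapshot idea behind page caching can be sketched in a few lines: render once, then serve the stored copy until it expires. Real caching plugins add disk, object and CDN layers on top of this, but the core pattern is the same.

```python
# The core of page caching: render once, serve the stored snapshot until
# its time-to-live expires. Real plugins add disk/object/CDN layers.
import time

class PageCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (rendered_html, timestamp)

    def get(self, url, render):
        entry = self.store.get(url)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                  # cache hit: skip rendering
        html = render(url)                   # cache miss: do the slow work
        self.store[url] = (html, time.time())
        return html

renders = []
def slow_render(url):
    renders.append(url)                      # stands in for an expensive render
    return f"<html>{url}</html>"

cache = PageCache(ttl_seconds=60)
cache.get("/home", slow_render)
cache.get("/home", slow_render)              # second call served from cache
print(len(renders))   # 1: the page was only rendered once
```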

  4. Use A Content Delivery Network

Content delivery networks assume an extremely valuable role in the internet’s infrastructure. A CDN delivers a webpage or any file to a user by accessing it from the closest geographic location available. The benefits of doing this are that it is far more efficient, conserves bandwidth, protects the network, and also improving the user experience by providing the asset quicker.

CDN’s are fairly commonplace now, with estimates suggesting that 40% of all sites are using one. The best ones will be able to offer speed gains and protection from DDOs attacks.
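The core routing idea – send each request to the nearest edge server – can be sketched in a few lines. The edge locations below are made-up examples, and a real CDN also weighs server load and link health, not just distance:

```python
import math

# Made-up example edge locations (latitude, longitude); a real CDN
# has hundreds of points of presence.
EDGES = {
    "toronto":   (43.65, -79.38),
    "frankfurt": (50.11, 8.68),
    "singapore": (1.35, 103.82),
}

def nearest_edge(user_lat, user_lon):
    """Pick the edge server closest to the visitor. Euclidean distance
    on raw lat/lon is a rough stand-in for great-circle distance, but
    it is enough to illustrate the routing decision."""
    def distance(edge_name):
        lat, lon = EDGES[edge_name]
        return math.hypot(lat - user_lat, lon - user_lon)
    return min(EDGES, key=distance)
```

A visitor in Montreal would be served from the Toronto edge, one in Paris from Frankfurt – the asset travels a fraction of the distance.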

  5. Minimize The Number Of HTTP Requests

An onslaught of HTTP requests – requests for information from your server – can overwhelm a website. When someone visits your site, their browser requests the various files needed to load the page. Most of these requests are sequential, so an increase in the number of external files means more requests, and that means a slower load time for that user.

One tip here is to combine all your CSS files into a single file, and this can be done with JavaScript files too. Consolidating as many files as possible to reduce the number of HTTP requests is highly recommended.
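A small build step can do the combining for you. This is a hedged stdlib sketch – the function name and the alphabetical join order are our own assumptions, so if your styles depend on load order, list the files explicitly instead:

```python
import glob
import os

def bundle_css(css_dir, out_path):
    """Concatenate every .css file in css_dir into one bundle so the
    browser makes a single request instead of one per stylesheet.
    Files are joined in sorted (alphabetical) order."""
    parts = []
    for path in sorted(glob.glob(os.path.join(css_dir, "*.css"))):
        with open(path, encoding="utf-8") as f:
            # Label each section with its source file for debugging.
            parts.append(f"/* {os.path.basename(path)} */\n" + f.read())
    bundled = "\n".join(parts)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(bundled)
    return bundled
```

Point your pages at the single bundled file and the browser makes one CSS request instead of a dozen.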

  6. Disable Hotlinking

Hotlinking is when other sites leech your image content: visitors to another site receive an image loaded from your server, which means your monthly bandwidth is being stolen. The fix is quick, easy, and effective – edit your .htaccess file.
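A common way to do that edit uses Apache’s mod_rewrite. The fragment below follows the standard hotlink-protection pattern; `yoursite.com` is a placeholder for your own domain, and the extension list is just an example:

```apache
# Block image hotlinking (requires mod_rewrite).
RewriteEngine On
# Let through requests with no Referer (direct visits, some proxies)...
RewriteCond %{HTTP_REFERER} !^$
# ...and requests coming from your own pages.
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?yoursite\.com/ [NC]
# Everything else asking for an image gets a 403 Forbidden.
RewriteRule \.(jpe?g|png|gif|webp)$ - [F,NC]
```

Other sites’ pages then get a broken image instead of your bandwidth.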

  7. Serve Your Pages Via AMP

Google has put a lot of time and effort into improving the web for mobile, and has pushed websites to improve the user experience where it’s diminished by a slow connection. One of their recent innovations is AMP – Accelerated Mobile Pages. These pages load faster and serve ads faster as well, which benefits users dealing with a slower internet connection. AMP pages use AMP HTML, a special JavaScript library, and a cache to serve pages.

AMP pages have been well received by larger news sites, which have a specific need to be able to serve pages more quickly.

  8. Use An External Commenting System

Having an engaged user base is very desirable, but how commenting is set up on the site can be an issue here. It can be a real problem for on-page SEO, and it can make the page load much slower. A popular fix is to ‘lazy load’ the comments, so that the page doesn’t serve this user-generated content to Google’s web crawlers and instead shows it only to real visitors.

Another fix for commenting problems is to use an external commenting service; Disqus is a good choice here.