6 Best Practices for Quickly Scaling Apps to Meet Demand

It’s been said that you can never predict what the future holds, and it follows that you can be equally uncertain about what the future will demand of you. There’s likely no better sphere for this reality to be put on display than the digital one, and no corner of that world where it’s more true than with applications.

Some apps make quite a splash upon introduction, while others sink like a stone. The majority fall somewhere in between, but for those that make a splash it’s not uncommon to see demand for the app exceed even its developer’s expectations. That’s the rosy part of it all, with the developer basking in their success and relishing the demand.

When the demand is more about what the app’s users – particularly if they’re paying ones – expect of it, however, that’s when the receptiveness is often mixed with some degree of ‘how am I going to accommodate these demands?’

Now here at 4GoodHosting we’re well established as a quality Canadian web hosting provider, but we’re the furthest thing from web app developers, and that’s probably a good thing considering our expertise is in what we do and a developer’s expertise is in what they do. That’s as it should be, but one thing we do know is that everyone is all ears when it comes to learning how to be better at what they do.

Which leads us to today’s topic – the best ways to scale apps rapidly when a developer simply doesn’t have the time they’d like to accommodate demand for expanded capabilities right now.

We’ll preface here by saying we’ve taken this entirely from SMEs (subject matter experts, if you’re not familiar with the acronym) who are on top of everything related to the world of app development, but we can say it checks out as legitimately good information.

Pandemic Spiking Demand

The COVID-19 pandemic continues, and many companies in e-commerce, logistics, online learning, food delivery, online business collaboration, and other sectors are seeing big-time spikes in demand for their products and services. Many of these companies are seeing evolving usage patterns, with shelter-in-place and lockdown orders creating surges in business and specifically in demand for their products.

These surges have pushed many an application to its limits, often resulting in frustrating outages and delays for customers. So how do you best and most effectively accommodate application loads?

What’s needed is the quickest and most cost-effective way to increase the performance and scalability of applications, offering a satisfactory customer experience without assuming excessive costs in doing so.

Here are 6 of the best ways to do that:

Tip 1: Understand the full challenge

Addressing only part of the problem is almost certainly not going to be sufficient to remedy these new shortcomings. Be sure to consider all of the following.

  • Technical issues – Application performance under load (and how end users experience it) is determined by the interplay between latency and concurrency. Latency is the time required for a specific operation, or more simply how long it takes for a website to respond to a user request.
  • Concurrency – The number of simultaneous requests a system can handle is its concurrency. When concurrency doesn’t scale, a significant increase in demand can cause an increase in latency because the system is unable to respond to all requests as quickly as they are received. The usual outcome is a poor customer experience, as response times increase exponentially and reflect badly on your app. So while ensuring low latency for a single request may be essential, it may not solve the challenge created by surging concurrency on its own, and you need to be aware of this and make the right moves to counter it.

It’s imperative that you find a way to scale the number of concurrent users while maintaining the required response time. It’s equally true that applications must be able to scale seamlessly across hybrid environments, often ones that span multiple cloud providers and on-premises servers.
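If you want to see this interplay for yourself, here’s a minimal sketch in Java that fires batches of simultaneous requests at an endpoint and reports the worst-case latency at each level of concurrency. The URL is a placeholder and the measurement is deliberately crude, so treat it as an illustration of the concept rather than a proper load-testing tool.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrencySketch {
    public static void main(String[] args) throws Exception {
        String url = "https://example.com/api/health"; // placeholder endpoint
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();

        // Try increasing levels of concurrency and watch per-request latency climb
        for (int concurrency : new int[] {1, 10, 50}) {
            ExecutorService pool = Executors.newFixedThreadPool(concurrency);
            List<Future<Long>> latencies = new ArrayList<>();
            for (int i = 0; i < concurrency; i++) {
                latencies.add(pool.submit((Callable<Long>) () -> {
                    long start = System.nanoTime();
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                    return (System.nanoTime() - start) / 1_000_000; // per-request latency in ms
                }));
            }
            long worst = 0;
            for (Future<Long> latency : latencies) worst = Math.max(worst, latency.get());
            pool.shutdown();
            System.out.println(concurrency + " concurrent requests, worst latency: " + worst + " ms");
        }
    }
}
```

If the endpoint’s concurrency doesn’t scale, the worst-case number grows quickly as the batch size goes up, which is exactly the pattern described above.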

  • Timing – Full-scale strategies, like rebuilding an application from scratch, take years to implement and aren’t helpful for addressing immediate needs. The solution you adopt should enable you to begin scaling in weeks or months.
  • Cost – Budget restrictions are a reality for nearly every team dealing with these issues, so a strategy that minimizes upfront investment and limits increased operational costs is immeasurably beneficial, and it’s something you need to have in place before you get into the nitty gritty of what your expanded scaling is going to involve.

Tip 2: Planning both short and long term

So even as you’re smack dab in the middle of addressing the challenge of increasing concurrency while keeping latency in check, it’s never a good idea to rush into a short-term fix that leads to a dead end because of the haste with which it was incorporated. If a complete redesign of the application isn’t planned or feasible, you should adopt a strategy that will enable the existing infrastructure to scale to whatever extent is needed.

Tip 3: Choose the right technology

The consensus is that the most cost-effective way to rapidly scale up system concurrency while maintaining or even improving latency is with open-source in-memory computing solutions. Apache Ignite, for example, is a distributed in-memory computing solution deployed on a cluster of commodity servers. It consolidates the CPUs and RAM of the cluster and distributes data and compute to the individual nodes. Whether deployed on-premises, in a public or private cloud, or in a hybrid environment, Ignite can run as an in-memory data grid (IMDG) inserted between the existing application and data layers, requiring no major modifications to either. Ignite also supports ANSI-99 SQL and ACID transactions.

When an Apache Ignite in-memory data grid is in place, relevant data from the database is cached in the RAM of the cluster. It is then available for processing free of the delays caused by normal reads and writes to a disk-based data store. The Ignite IMDG uses a MapReduce approach and runs application code on the cluster nodes to execute massively parallel processing (MPP) across the cluster with minimal data movement across the network. Between in-memory data caching, sending compute to the cluster nodes, and MPP, concurrency increases dramatically and latency drops, with up to a 1,000-times increase in application performance compared to applications built on a disk-based database.
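To give a feel for what ‘no major modifications’ looks like in practice, here’s a minimal sketch using Ignite’s Java API. The cache name and key/value types are illustrative, and a real deployment would supply cluster configuration rather than rely on defaults.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class IgniteGridSketch {
    public static void main(String[] args) {
        // Start (or join) an Ignite node; with no config it uses defaults and discovers local peers
        try (Ignite ignite = Ignition.start()) {
            // A distributed key-value cache spread across the cluster's combined RAM
            IgniteCache<Integer, String> orders = ignite.getOrCreateCache("orders");

            // Reads and writes hit cluster memory instead of a disk-based data store
            orders.put(1001, "order-1001");
            System.out.println(orders.get(1001));

            // Send the computation to the data rather than moving data across the network
            ignite.compute().broadcast(() -> System.out.println("running where the data lives"));
        }
    }
}
```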

The distributed architecture of Ignite makes it possible to increase the compute power and RAM of the cluster simply by adding new nodes. Ignite automatically detects the additional nodes and redistributes data across all nodes in the cluster. This means optimal use of the combined CPU and RAM, and massive scalability to support rapid growth.

We only have so much space to work with here, but a digital integration hub (DIH) and hybrid transactional/analytical processing (HTAP) get honourable mentions as other really smart choices for scaling up apps. Look into them too.

Tip 4: Open Source Stacks – Consider Them

You need to identify which other proven open-source solutions make the grade for allowing you to create a cost-effective, rapidly scalable infrastructure, and here are 3 of the best:

Apache Kafka or Apache Flink for building real-time data pipelines for delivering data from streaming sources, such as stock quotes or IoT devices, into the Apache Ignite in-memory data grid.
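As a taste of what the producing side of such a pipeline looks like, here’s a minimal sketch using the standard Kafka Java client. The broker address, topic name, and record contents are all placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class QuoteProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Each record is one event from a streaming source (a stock quote, an IoT reading, etc.)
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("quotes", "ACME", "52.17"));
        }
        // A consumer on the other side of the topic would write these events into the Ignite data grid
    }
}
```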

Kubernetes for automating the deployment and management of applications previously containerized in Docker or other container solutions. Putting applications in containers and automating their management is becoming the norm for successfully building real-time, end-to-end business processes in our new distributed, hybrid, multi-cloud world.

Apache Spark for taking large amounts of data and processing and analyzing it efficiently. Spark can take advantage of the Ignite in-memory computing platform to more effectively train machine learning models using the huge amounts of data being ingested via a Kafka or Flink streaming pipeline.
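For illustration, here’s a minimal sketch of Spark’s Java API reading and aggregating a batch of ingested events. The file path and column name are placeholders, and a real job would run on a cluster rather than in local mode.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkBatchSketch {
    public static void main(String[] args) {
        // Local session for illustration; in production this would point at a cluster
        SparkSession spark = SparkSession.builder()
                .appName("ingested-events")
                .master("local[*]")
                .getOrCreate();

        // events.json stands in for data landed by a Kafka or Flink pipeline
        Dataset<Row> events = spark.read().json("events.json");

        // A simple aggregation; feature engineering and model training build on steps like this
        events.groupBy("deviceId").count().show();

        spark.stop();
    }
}
```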

Tip 5: Build, Deploy, and Maintain Correctly

The need to deploy these solutions in an accelerated timeframe is clear, and very serious consequences for delays are usually a standard scenario too. For both reasons it is necessary to make a realistic assessment of the in-house resources available for the project. If you and your team are lacking in either regard, don’t hesitate to consult with 3rd-party experts. Support for all these open source solutions is easily obtained on a contract basis, making it possible to gain the required expertise without the cost and time required to expand your in-house team.

Tip 6: Keep Learning More

There are plenty of online resources available to help you get up to speed on the potential of these technologies and garner strategies that fit your organization and what is being demanded of you. Start by clarifying whether your goal is to ensure an optimal customer experience in the face of surging business activity, or to start planning for growth in a (hopefully) coming economic recovery. Then determine whether either aim makes an open source infrastructure stack powered by in-memory computing your cost-effective path to combining unprecedented speed with scalability that isn’t limited by constraints and can be rolled out without taxing you and your people too much.

COVID Lockdowns Putting Strain on Broadband Infrastructure Around the Globe

Safe to say there won’t be anyone who’s even slightly enamoured with all the different fallouts from the global pandemic, and if your discontent is particularly strong then you had best buckle down, as projections of the 2nd wave arriving imminently are looking to be pretty darn accurate (at least here in Vancouver, where the general disregard for protocols is pretty much absolute in public). One wrinkle in all of this – albeit a pretty big wrinkle – is that we’re leaning on the World Wide Web more heavily than ever before, it seems.

This was especially true in the early spring when the stay-at-home messaging was still being well received, and people were either working online or keeping themselves entertained indoors. Since then the nature of demand has shifted, but we’re not sufficiently in the know to say exactly how it’s all worked. The long and short of it is that collectively we’re putting demand strains on broadband infrastructure like never before, and in a lot of ways it’s buckling under the weight.

Here at 4GoodHosting we’re like any quality Canadian web hosting provider in that this is all readily apparent to us. We know from extensive 2nd-hand experience how much people get up in arms over the struggles that come with a lack of bandwidth, and the nature of what we do (and what we know accordingly) makes us all too aware of how big a problem this has the potential to become. Particularly with the imminently ubiquitous nature of 5G network use around the globe.

All this said, let’s use today’s blog to have a more detailed look at this ‘constriction’ and the significance of it.

Only So Much Width to the Tube

Not the most natural of analogies for this phenomenon, but bear with us. A map was recently created in Australia, and while we’re not able to show it due to copyright restrictions, it’s quite telling. It’s been referred to as a ‘global internet pressure’ map, and what it does is show the extent to which the coronavirus pandemic is putting constrictions on internet services around the world.

Now as you might guess, high definition (HD) video streaming and online gaming are the most bandwidth-intense activities, and they are among the leading contributors to the congestion. No matter how you might wish it were otherwise, more and more people either working from home or lounging at home means much more in the way of big bandwidth appetites.

So here’s where we get our tube analogy from. The workings of this aren’t much different, functionally, from a very large group of overweight children trying to make their way through a crowded subway tunnel. The streaming video or video upload during teleconferencing is made up of packets of information that can be far from small depending on what’s contained within them. When too many of these packets are trying to make their way down copper and fiber-optic cables across vast distances, it’s inevitable that some aren’t going to arrive when they’re expected to.

Internet Use Through Lockdowns

Researchers have been looking at how each nation’s internet was performing from the time people started to stay at home and use it for both work and home-based entertainment through until now. Also tracked were changes in internet latency that emerged between March 12 and 13, which coincided with several countries — including France, Spain and Italy — beginning enforcement of government-imposed lockdowns aimed at stopping the spread of the coronavirus.

A point was made to differentiate between the first days of the lockdown period and the baseline period in early February, and then to find a median starting point for legitimate internet pressure, where marked latency or speed issues started to affect millions of internet users across certain regions. They then looked at those as a collective whole, but that information is more subjective and left to readers who have a look at the map.

The long and short of it is this – current Internet bandwidth infrastructure is sufficient only at the very best of times, and even without a global pandemic we’re very likely nearing the end of the realistic and practical working life of the existing infrastructure as it is. Without major investments in upgrades all the ‘progress’ we’ve prided ourselves in being able to offer one another is about to hit some serious snags.

3 – 7% – Much Bigger Numbers in Reality

The values for increased usage may seem relatively small – like the 3 to 7 percent that is fairly standard for many of the regions indicated – but it’s actually quite a jump, far from normal, and a difference indicating that many users are quite likely experiencing bandwidth congestion.

The highest levels of pressure on internet networks have been seen in countries like Italy, Spain, Sweden, Iran and Malaysia. That’s not to suggest residents in other countries aren’t experiencing the same difficulties, it’s just that they’re not on the leaderboard yet.

Now, yes there’s been all sorts of jokes about fully grown men spending long stretches of days playing online games. As funny and somewhat pathetically accurate as the truth of that might be, it’s not just men playing a whole lot of online games and eating up plenty of bandwidth while they slay dragons or whatever it is they do.

However, it turns out that entertainment streaming is a whole lot more gluttonously consumptive when it comes to available bandwidth. Verizon reporting a 75 percent increase in gaming traffic during peak hours is among the many stats and observed behaviours that bear out just how much overall usage has grown.

The More To It

It might then seem a legit default conclusion that gaming is the primary source of the increase in internet use. However, that’s not entirely true. The overall bandwidth used by the medium pales in comparison to that of others; a study comparing how much bandwidth gaming consumed versus online video streaming services found that gamers consumed an average of only 300 megabytes per hour.

In comparison, HD content streamers consumed 3,000 megabytes per hour, and that jumped up to 7,000 megabytes per hour for 4K video. To put that in perspective, 3,000 megabytes per hour works out to roughly 6.7 megabits per second of sustained throughput, per viewer. While it’s true streaming companies are trying to limit bandwidth use, there’s really only so much that can be done in that regard, and who’s going to give up Netflix n’ Chill, right?

There are some helpful efforts being made though. A number of video streaming companies are now implementing measures to decrease their bandwidth use. Streaming giant Netflix recently stated that they would work to reduce traffic on networks by around 25%.

Baby steps, but progress needs to start somewhere if collectively we’re going to have the infrastructure in place to handle our ever-growing insatiable thirst for Internet-based whatever-it-is we can’t go without at any given time.


Steering Clear of New-Found Risks Related to Adware

There are a lot of people who’ve decried the way the Internet has gone from being a genuine open-source information and content resource to, by and large, a massive, seething marketing tool. That’s a fair enough complaint, but if you’re making it you had better be an average joe and not someone with business (or income) interests in it serving that purpose. Truth is it was going to happen one way or another anyways, and so we’re all well advised to get used to it.

The explosion of advertising that has come with that and assaulted your screen at every opportunity is something you need to tolerate, but with it has come a concurrent explosion of adware that can do all sorts of nefarious things to your web-browsing devices without you being aware of it. These web security threats tend not to rest on their laurels, and these days we’re seeing the threats related to adware morphing into different and sneakier forms.

Now of course here at 4GoodHosting, being a premier Canadian web hosting provider puts us fairly front and centre for watching all of this transpire, the same as it would anyone working in the industry. All of this does beg the question of what exactly the interests are of those who put time and effort into building and deploying this adware, but that’s a whole different discussion.

Anyways, back to the topic at hand – there are both emerging and expanding problems related to adware these days, and keeping yourself insulated from them requires more than it used to. So let’s have a look at all of that today and hopefully put you more in the know about what you need to do to stay safe from them.

Adware – What is It Exactly?

Adware is lurking-in-the-background software that works to display ads on computers and mobile devices. At times it’s referred to as an adware virus or a potentially unwanted program (PUP), and nearly all the time it’s installed without the user okaying anything of the sort. Adware is quite the troublemaker – it interferes with browsing experiences, displays excessive amounts of unwelcome pop-ups, banners, and text link ads, and even sometimes auto-plays video commercials that have absolutely no business being wherever it is you are on the Web.

And to what aim, you ask? Well, the goal of any adware is income generation for its creator by displaying all those excessive ads. With that basic understanding in place, we can now look at the different types of adware. There are two main types, differentiated by their ‘penetration’ method:

With Type 1, the adware infiltrates the device by means of freeware, while Type 2 breaks in via exposure to infected websites. The reason this sketchy behaviour occurs is that developers want to fund the development and distribution of free programs by monetizing them, adding ‘additional’ programs to the installation files. The type of adware that comes hidden in free software usually isn’t the malicious type, but it sure can be annoying.

Not the Same as Spyware

Adware should not be confused with spyware. Spyware is a separate program, though it too gets downloaded without the user’s knowledge. Spyware tracks a user’s browsing actions on the Internet to display targeted advertisements, and with this comes the collection of all sorts of information about the users exposed to it.

Infected-website adware is often associated with web browser hijacking, where users visit an infected website loaded with malicious scripts that promote unauthorized adware installation. Once a user’s device is infected through those sites, they are actively shown ads on an ongoing basis. They might think this is just the ‘way it is’, but in reality the ads are being shown as a result of the adware activity installed on the device.

Adware Red Flags

It’s good to be in the know about the signs that may indicate your web browsing device has been infected with adware. Here are some of the more common ones:

  • A web browser’s home page has been changed without the user’s permission
  • Advertisements appear where they ordinarily would not
  • Websites start redirecting you to unanticipated pages
  • Web page layouts display differently each time you visit
  • Web browsers are inexplicably slow and may also malfunction
  • Unwanted programs are installed automatically
  • New plugins, toolbars, or extensions appear without user consent
  • PC resource consumption (CPU usage, for example) is unsteady and jumps without any reasonable explanation

The Extent of Risk from Adware

It is true that most adware pieces are more of an annoyance than a legit danger. Annoying activities include text ads, banners, and pop-ups that appear inside the browser window while users dig for information. You may also have random pages or bookmarks open unexpectedly or see strange malfunctions occurring with the device.

But there are also more serious and legitimately threatening issues when adware collects the user’s data. This usually involves the developer trying to sell the user’s ad profile along with browsing history, including IP address, performed searches, and visited websites. As you can imagine, no good is going to come of that.

Preventing Adware from Infecting Devices

The best and most direct way to prevent adware is to exercise caution when visiting websites that look suspicious. Be wary of downloading free software, and download all programs only from trusted sources. While downloading freeware, the installation wizard may display small pre-checked checkboxes indicating you’re agreeing to the installation of additional ‘bundled’ software – uncheck anything you don’t recognize.

Another good general suggestion is to not click any ads, alerts, or notifications when browsing the Internet. The old ‘Your PC is infected with serious viruses, and an antivirus scan is strongly recommended’ is a classic ploy here. A lot of people fall victim to this cunning deception and then install adware without any idea that’s what they’ve done.

Also ensure that your operating system and other software are regularly updated. Non-updated software is vulnerable to many types of hacker attacks, with malware exploiting its security holes.

Certain security settings available on Windows, Apple, and other devices can be enabled to protect users from inadvertently downloading adware. Configuring a web browser to block all pop-ups is a good start. It is also particularly important to carefully check each file that gets downloaded from the Internet, and the best (read: not free) antiviruses will also provide real-time web protection.

Removing Adware

Unless you’re a webmaster wizard or anything else of the sort, the recommendation here is going to be to use special antimalware solutions like Malwarebytes to get rid of Adware that’s taken up residence inside your device. And then be much more wary of these threats in the future and be smarter about what you can do to avoid exposure to them.

Getting to Know the Next Big aaS – Container As a Service

If you’re a layperson like most and haven’t heard of Kubernetes, or container orchestration systems in the bigger picture of things, then your unawareness is excusable. If you’re a web developer and you haven’t heard of either, however, you almost certainly have been living under a rock as the expression goes. These days the digital world is doing everything on the grandest of scales, and with that size of operations comes the need to be able to consolidate and store data and everything else that makes that world tick as much as it needs to tick.

Now even the most ordinary of you will have heard of SaaS – software as a service – and it’s likely a good many of you are taking advantage of it at this very time. One of the most common instances is how, for example, you pay a monthly fee to enjoy the Microsoft Office suite on your desktop or notebook rather than having forked over big bucks all at once for a box containing a disk and a need for you to install the software. The SaaS train continues to pick up speed, and truth be told that’s a good thing – no one needs excess packaging OR having to spend time installing anything if there’s a legitimate alternative to it.

With SaaS, there is – and here at 4GoodHosting we’re not unlike any other good Canadian web hosting provider in that we ourselves are benefitting from all these ‘aaS’ developments too. We’re almost certainly just seeing the tip of the iceberg when it comes to how much of what we previously ‘bought’ hard copies of is now available as a service provided through the web.

CaaS – containers-as-a-service – is just the latest and shiniest offering in this regard. So what’s it all about, and who is it going to be relevant for? Let’s get into that here today.

Brief Background

Global enterprises are more and more eager to make use of containers, with 65 percent of organizations stating they use Docker containers, and 58 percent using the Kubernetes orchestration system in some way or another according to a recent report.

However, as appealing and mega functional as they are, the primary challenge for new converts is a lack of resources and insufficient expertise in using containers to build and maintain applications. This is the primary reason why containers-as-a-service (CaaS) offerings are proving to be very welcome conveniences as soon as they’re made available.

Containers-as-a-Service Defined

With CaaS, a cloud vendor provides a hosted container orchestration engine — typically based on the super-popular Kubernetes open source project, which originated at Google. The appeal is in the ability to deploy and run containers, manage clusters, automate scaling and failure management, and allow easier maintenance of the common infrastructure layer. Governance and security are included in this.

The entirety of networking, load balancing, monitoring, logging, authentication, security, autoscaling, and all continuous integration and delivery (CI/CD) functions are handled by the CaaS platform, making it an excellent task consolidator and handler.

CaaS allows organizations to take the benefits of their cloud infrastructure and best leverage them, while helping to avoid the vendor lock-in common with platform-as-a-service (PaaS). The containers are very portable across various environments, and this makes them even more versatile and multi-functional.

For most it will be helpful to know the difference between a CaaS and running on classic infrastructure-as-a-service (IaaS). In large part it comes down to whether the organization has the resources and skills to implement and manage a specific container orchestration layer itself, or perhaps leaving that to a cloud provider would be a better choice. That will often depend on whether your container environment must span multiple clouds and/or on-prem environments. CaaS platforms that can be deployed either on-prem or in the cloud are offered by a number of vendors these days.

To summarize, the choice is between managing things at the infrastructure level and setting up the orchestrator yourself, or using a container platform that handles the underlying infrastructure and puts in place a pre-installed orchestrator that is ready for you to deploy and scale your containers.
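To make ‘deploy and scale your containers’ concrete, here’s a minimal sketch using the Fabric8 Kubernetes Java client as one illustration of driving an orchestrator programmatically. The namespace and deployment name are placeholders, and the client is assumed to pick up credentials from your local kubeconfig.

```java
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class CaasScaleSketch {
    public static void main(String[] args) {
        // Connects using whatever cluster credentials the local kubeconfig provides
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // Scale a hypothetical "webapp" deployment to five replicas; the platform
            // handles scheduling, networking, load balancing, and failure recovery
            client.apps().deployments()
                  .inNamespace("default")
                  .withName("webapp")
                  .scale(5);
        }
    }
}
```

On a CaaS platform the orchestrator behind that call is already installed and maintained for you; on plain IaaS, standing it up and keeping it healthy is your job.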

CaaS Benefits

Running containers on CaaS is very much like running your virtual machines on IaaS. Speed of deployment and ease of use are the primary benefits, along with the simplicity of the pay-as-you-go cloud model and the ability to avoid the vendor lock-in we mentioned previously.

Leaving your container infrastructure to a cloud vendor means you can get up and running without investing in your own hardware and with no need to build or run your own container orchestration system(s). In addition, by containerizing applications you’re able to migrate applications into different environments or vendor ecosystems more easily, giving greater flexibility and scalability options.

Cost efficiencies are definitely a part of the appeal too, as containers are better equipped to scale horizontally as demand dictates, and they make it so that organizations pay only for the cloud resources they use. Containers are also nowhere near as heavy as VMs, meaning they’re less resource intensive, which usually means better speeds and general operating cost reductions.

Another benefit comes with consistency of instrumentation and logging, as isolating individual services in containers can allow for more effective log aggregation and centralized monitoring through the popular sidecar deployment model.

After a long string of pluses, we do have to share one minus. Migrating traditional apps to containers is a hurdle for some who are interested in making the switch. It’s common to have to break down monolithic applications into microservices when migrating to containers, and for larger, older organizations that can sometimes be too drastic a change to be expected of them all at once.

Google’s Revamped Gmail Looking To Be More Competitive with Microsoft Teams

Real-time messaging, video chatting, and just-like-that file sharing have become everyday norms in the digital world now, and it’s hard to imagine anyone who doesn’t take advantage of what these technological advances have made available. Venturing into this space isn’t without risk for venture capitalists-slash-software developers these days, as Zoom’s recent fall from grace over the ‘zoom bombing’ incidents has made clear. The demand for these conveniences – especially in the workplace – means that all the big players need to be onboard to at least some extent.

Now if there’s one thing we know about Google, it’s that they’re not hesitant to throw their weight around, and they’ll be onboard with anything to whichever extent they’re inclined. Which likely explains why they’re making strategic moves to counter their #1 rival at the top of the digital kingdom when it comes to these kinds of flash communication and sharing tools. Microsoft’s Teams has made meteoric gains over the last little while (in large part at the expense of Zoom, if we’re to call it like it is), so it makes sense that Google is going to get their elbows up a bit.

Here at 4GoodHosting, we can relate to the growing ubiquitous nature of these apps and digital conveniences and like any other quality Canadian web hosting provider we’re the same types of enthusiasts for them that the rest of you all are. Time during the workday is an invaluable resource, and these ones make it so that many of us are working much more effectively and on-point with co-workers. Productivity is a great thing.

So let’s now get into what we’re talking about here today, how Gmail is being revamped to expand its already-significant sphere of influence over people in the world of digital communications.

Big Hub Getting Bigger

So here we are, two weeks on from Google having unveiled a revamped Gmail that will serve as an even bigger hub for collaboration, providing quick access to video, chat and shared files directly from the email client. Not only is this being done in direct competition with Microsoft Teams, but also to solidify its position against newcomers that might fancy a slice of the pie.

Google began by integrating its various G Suite apps with Gmail, and that involved the Meet video and Chat team messaging applications being slowly but deliberately integrated into the email client. Word in the industry is that Google has had this play in mind for a long time and sees Gmail as the most logical home base for this strategic collaboration.

A new Gmail app unveiled at the company’s Cloud Next event on Wednesday underscored Google’s intention to connect its various tools even more tightly.

The most noteworthy of what we can expect here is an updated mobile app with quick access to Mail, Chat, Rooms and Meet functionality via four buttons on the bottom menu bar. In these ‘rooms’ users will be able to jump straight into a group chat, for example.

Easier Leaps

Google is also aiming to make it much simpler to switch between apps in the browser-based version of Gmail, like jumping from a text chat to a video call or flipping a conversation from an email into a chat room without missing a beat. The belief is that the primary appeal for users will be reduced distractions. It also allows participants to hold their conversations in the most appropriate channel more naturally, whether that’s real-time chat, face-to-face video, or email for asynchronous messaging.

This is also being lauded as having the potential to boost productivity in a big way, and find us a department manager anywhere on earth who won’t like the sound of that. The revamped Gmail and its functionalities will include ‘side-by-side’ document editing that lets team members work together on a document within Gmail.

You’ll also have access to Google Docs and Sheets from within a single app, and that functionality is probably fairly intentional with the way it mirrors Microsoft’s focus with Teams and having it act as a portal to its Office apps.

On the Menu

Here’s what else is going to be unveiled with the new Gmail:

  • Expanded Gmail search that can cover Chat conversations quickly and effectively, with the idea of making it easier to locate information on a specific project, regardless of where that information is located or where the discussion of it got its start.
  • Do-not-disturb and out-of-office warnings able to be set up across the various apps, along with suggestions and nudges that can aid with prioritizing information.
  • New features like the freedom to jump into a video meeting from a shared document, with picture-in-picture video that lets you still reference the document directly while you’re meeting to discuss it.

If you’d like some visuals to go along with your enticement package, the new Gmail is currently available as a preview, and the word on the street is that Google will be expanding access to G Suite customers nearly imminently here.

Search on Steroids

Most people can relate to the search function in their email client coming up really short, especially when lengthy manual digging eventually turns up the email and information you were looking for all along. So the good news is that this revamp also includes an expanded Gmail search covering Chat conversations, making it easier to locate information on any specific project and get right down to utilizing it without delay.

By fixing Gmail as the focal point of its collaboration strategy, Google is definitely playing to its existing strengths, especially as it competes with other collaboration vendors offering a suite of apps, many of whom have already made their initial inroads in this regard.

What Google has going for it to counter any such leads is the fact that they are the de-facto number one Internet giant, and there’s no reason to think Gmail won’t become the preferred central hub for productivity within G Suite in very short order once G Suite apps are strengthened in their ability to work together with massively increased functionality.

No need to necessarily be watching for this, as it’s likely going to be impossible to miss if you spend any portion of your workday on a computer or mobile device, even if you’re not paying an ounce of attention. As the expression goes, ‘you can’t stop progress.’

3 Tips for Applying Agile to Data Science and Data Ops

It’s plain for all to see that nearly everything is becoming increasingly data driven these days, and the explosive emergence of the IoT has fuelled a lot of that. Every effort made to harness data and either implement it or make decisions based on it is in the interests of competitive advantages, and for as long as we live in a capitalist society where only certain birds get worms that’s going to be the driving force behind much of what goes on in the digital world.

Visualizations, analytics, and the ‘biggie’ – machine learning – are among the aspects of big data that are demanding more attention and bigger budgetary investment allowances than ever before. Machine learning in particular is kind of like an unexplored continent, as if it were 1620 rather than 2020. Most of you who’ll be reading this blog won’t need us to go into the hows and whys of that, so we’ll just continue with where we’re going with all of this in today’s blog.

Here at 4GoodHosting, it probably goes without saying that we’re very front and centre as far as the audience for all these developments is concerned. While anything regarding big data isn’t immediately relevant for us, it certainly is in a roundabout way, and that’s very likely true for any good Canadian web hosting provider. The changes have been revolutionary and continue to be so, so let’s get to today’s topic.

While we are not shot callers or developers, we know that some of you are and as such here are 3 solid tips for applying agile to data science and data ops.

All About Agile Methodologies

Nowadays you’ll be hard pressed to find even one organization that isn’t trying to become more data-driven. The aim of course is to leverage data visualizations, analytics, and machine learning for advantages over competitors. Providing actionable insights through analytics requires a strong data ops program, and the same goes for a proactive data governance program to address data quality, privacy, policies, and security.

Delivery of data ops, analytics, and governance are the 3 components whose realities should be shaping aligned stakeholder priorities. Being able to implement multiple technologies and amass the right people with the right skills at the right time will become an as-expected aspect of any group working towards this.

Further, agile methodologies can form the working process to help multidisciplinary teams prioritize, plan, and successfully deliver incremental business value. The benefits of having these methodologies in place can also extend to capturing and processing feedback from customers, stakeholders, and end-users. This volunteered data usually has great value for promoting data visualization improvements, machine learning model recalibrations, data quality increases, and data governance compliance.

We’ll conclude this preface to the 3 tips by saying agile data science teams should be multidisciplinary, meaning a collection of data ops engineers, data modelers, database developers, data governance specialists, data scientists, citizen data scientists, data stewards, statisticians, and machine learning experts should be the norm – whatever that takes on your end. Of course you’ll be determining the actual makeup based on the scope of work and the complexity of data and analytics required.

Right then, on to our 3 tips for applying agile to data science and data ops:

  1. Developing and Upgrading Analytics, Dashboards, and Data Visualizations

Data science teams are nowadays best utilized when they’re conceiving dashboards to help end-users answer questions.

But the key here is taking a very deep look at agile user stories, and each should be examined through 3 different lenses:

  • Who are the end-users?
  • What problem do they want addressed?
  • What makes the problem important?

Answers to these questions can then be the basis for writing agile user stories that deliver analytics, dashboards, or data visualizations. You may also want to make efforts to determine who intends to be using the dashboard and what answers they will be looking for. This process is made easier when stakeholders and end-users provide hypotheses indicating how they intend to take results and make them actionable.

  2. Develop / Upgrade Machine Learning Models

Segmenting and tagging data, extracting features, and making sure data sets are run through selectively and strategically chosen algorithms and configurations need to be integral parts of the process of developing analytical and machine learning models. Also increasingly common is agile data science teams logging agile user stories for prepping data for use in model development.

From there, separate stories for each experiment are logged and then cross-referenced for patterns across them or additional insights determined from seeing them side by side.

The transparency helps teams review the results from experiments, decide on successive priorities, and discuss whether current approaches are still to be seen as conducive to beneficial results. You need to take a very hard look in regard to the last part of that, and be willing to move in entirely different directions if need be. Being fixed in your ways here or partial to any approach has the ability to sabotage your interests in a big way.

  3. Discovering, Integrating, and Cleansing Data Sources

Ideally, agile data science teams will be seeking out new data sources to integrate into and enrich their strategic data warehouses and data lakes. Consider data siloed in SaaS tools used by marketing departments for reaching prospects or communicating with customers as an excellent example. Other data sources might provide additional perspectives around supply chains, customer demographics, or environmental contexts that impact purchasing decisions.

Another smart choice is agile backlogs with story cards to research new data sources, validate sample data sets, and integrate prioritized ones into primary data repositories. Further considerations may be automating the data integration, implementing data validation and quality rules, and linking data with master data sources.

Lastly, data science teams should also capture and prioritize data debt. To date, many data entry forms and tools haven’t had sufficient data validation, and integrated data sources haven’t had cleansing rules or exception handling. Refer to this as keeping a clean house if you will, but it is a good idea even if it’s never going to take priority.

Between all of this you should be able to improve data quality and deliver tools for leveraging analytics in decision making, products, and services.

A Reminder on Webhosting and Its Relation to SEO

We realize it’s not the first time we’ve gone over the subject, but it has been a while since we took the opportunity to point out how much of a factor your web hosting is in your website’s search engine rankings. While it’s true that there are a good many other factors that are more relevant in that equation, anyone who’s new to the digital world with their website should be aware that going with the most inexpensive option for web hosting may negatively affect the visibility of your newfound site.

Now we will add quickly before going on further here that we are not the only good Canadian web hosting provider, and there are a number of others who can offer you equally reliable and competitively priced web hosting. That said, there are a number of advantages we do provide for our customers that should give us something of an edge but we’ll leave that for another discussion. What we’re going to share with you here today regarding the relationship between web hosting and SEO is going to apply no matter which Canadian web hosting provider you choose.

The Very Real Connection

SEO involves a lot more than just keyword optimization and link building. There’s a long list of things webmasters can do to promote major jumps in where the site ranks in SERPs (search engine result pages). What you get as a package, and at the same price point, from one web hosting provider may well not carry the same benefits in this regard as another’s.

So what do you do? Well, you start by being in the know about how all this stuff works, so let’s get to it. The first thing to do is establish your objectives – namely, what you’re hoping to gain from all the effort you’ve put into taking yourself online.

Defining Objectives First

For most people, the reason they’ve built a website and taken it online is to either increase online sales, increase customer interaction with the business (online or otherwise), or simply increase traffic to the site itself. No matter what your main priority is, one of the primary understandings anyone should have is that page-load speeds play a big part in how your website is evaluated by search engines like Google.

Now if you’re thinking it’s as simple as faster is better, you’re at least partially correct. While it’s absolutely true that your website should load quickly, page load speed is only one small part of the equation. Any number of providers can promise you quality page speeds, especially when you’re purchasing a more expensive web hosting package. And quite often those promises are legit.

Make sure they are, because quite often your experience with page load speeds on say, your desktop, may be very different than what another person visiting on a mobile device might experience. Try it and see, and have your friends or family do the same and report. Do they see what they wanted? Did the right stuff load quickly? Your website’s visitors should see your site’s core content quickly. Some of the ancillary content can take longer to load, and if so that’s okay.

Indicator Number One

What this refers to is First Meaningful Paint, and it’s a measurement (albeit a subjective one) of how well your site keeps visitors happy and retains them. What this means is that while your actual page-load process may be three seconds long, visitors may see all of your meaningful content in just a little more than a second.

It’s nearly always true that some elements that take longer to load are not essential to the immediate visitor experience. Facebook pixel loads are a really good example.

Where all of this goes next is in preventing those visitors from becoming part of your bounce-rate stats. Bounce rate is the percentage of visitors to your site who leave within a certain (short) period of time after entering it. And yes, page load speeds are far and away the primary cause of that.

Should Google see that users are making their way into a page and then coming back out within a certain amount of time, that becomes a signal that the website didn’t deliver in the way the visitor was expecting it would. Having a slow website or irrelevant content is going to be problematic, and while web hosting may have nothing to do with the second part of that it definitely can have much to do with the first part of it.

Uptime – Related to the Right Host

Another majorly important aspect of providing a premium user experience is uptime. Any time Google or a user requests access to your site and it times out, or the server is unable to return a result, your SEO is going to take a hit. Ensuring 100 percent uptime – or as close to it as is possible – is integrally important for providing an experience the average visitor will deem acceptable.

There are also a pair of load-time factors that Google uses to measure your site. Not surprisingly, both of them can be affected by your web host. The first of these is DNS lookup. When it takes longer for your host to complete DNS lookup, it takes a correspondingly longer time for your host to begin loading your page.

Long lookup times aren’t conducive to high SERP rankings, and the same goes for factor number 2 – delayed page load times. Find yourself with a host that uses a slow server and you’ll be ideally situated for a SERP ranking slide. The general guideline here is that anything longer than 100 milliseconds to load the first byte is the beginning of unacceptable territory.

The time it takes the server to answer a browser’s request should ideally be no more than 50 milliseconds, and most hosts with quality servers will be answering even more speedily than that.
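If you want to spot-check those numbers yourself, here’s a minimal sketch in Java. The hostname is a placeholder, and the figures are rough (a single sample with cold caches, and the TTFB timing includes connection setup), so treat it as a starting point rather than a benchmarking tool.

```java
import java.net.HttpURLConnection;
import java.net.InetAddress;
import java.net.URL;

public class LoadTimeSketch {
    public static void main(String[] args) throws Exception {
        String host = "www.example.com"; // placeholder hostname

        // Rough DNS lookup time: how long the resolver takes to return an address
        long t0 = System.nanoTime();
        InetAddress address = InetAddress.getByName(host);
        long dnsMillis = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("DNS lookup: " + dnsMillis + " ms (" + address.getHostAddress() + ")");

        // Rough time to first byte: open the connection and wait for the first byte back
        HttpURLConnection conn = (HttpURLConnection) new URL("https://" + host + "/").openConnection();
        long t1 = System.nanoTime();
        conn.getInputStream().read();
        long ttfbMillis = (System.nanoTime() - t1) / 1_000_000;
        System.out.println("Time to first byte: " + ttfbMillis + " ms");
        conn.disconnect();
    }
}
```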

Solid SEO Strategy Choices

Here are four approaches you can use to improve your site’s SEO:

  1. Have Clean Code

Even the most solid of web hosts won’t be able to remedy the damage done by a website with poorly written code slowing down load times and making the user experience unsatisfactory. Code should be kept light and clean, and if you don’t know what that means then you’re clearly not the one writing it. Extra CSS, JavaScript, and files that aren’t necessary for site loading purposes don’t belong in your code. Another good idea is to make sure your code is W3C compliant by using a markup validation service.

  2. Keep Your Site Secure

Site hacking is more of a problem these days than it has ever been before, and having hackers maliciously adding links to a site without permission or anyone even being aware of them is a real potential problem now. If Google sees a website with these irrelevant links they’ll proceed to penalize the site and decrease the page’s rankings for it. You’ll have to work to proactively keep these bad links away or choose a hosting provider that can help you keep them at bay.

  3. Measure Site Load Times and Time to First Byte

There are a few free tools, like Tools.pingdom.com among others, where you can determine how long your site really takes to load and communicate with browsers. Even testing from different regions is possible. GTmetrix and YSlow may be better choices if you’re using Google Chrome. Do some digging on this; there’s plenty of good information to be found with a simple search.

  4. Take a Look at Managed Hosting

One of the biggest overall benefits that comes with managed hosting is making the site owner’s experience that much easier. It addresses a lot of the issues website owners commonly have, and managed hosting means you are paying someone else to worry about the SEO-critical aspects of your site so you can focus on other things – ideally creating great content.

This can also mean you’re more ready for anything unforeseen, like traffic spikes or hacker-related activities. Managed web hosting can be worth the increase in price, and especially given how important website performance is in relation to SEO.

Take Advantage of Available SEO Tools

We’re among the many reputable web host providers in Canada that also offer tools that can fast-track SEO optimization of your website. They’ll start by scanning the content on your website and then comparing the information gathered against the SEO influencing aspects of your website before giving it a score. You’ll then have strategies suggested to help you increase your ranking on the popular search engines.

Some of the better and further-reaching ones will also analyze the structure of your website and whether or not it’s presented in a form that can be understood by the popular search engines. You might also have tools that’ll check whether important characteristics of your posts, such as titles and meta descriptions, can be read clearly by search engines.

3 Cloud Realizations Coming Out of COVID-19

We’re coming up on 4 months into this current topsy-turvy world of ours that is the global COVID pandemic. While absolutely no one is pleased that this has transpired the way that it has, there’s going to be more than a few who’ll say it’s best to just roll with the punches and do what’s needed to get through it. Any time you have a chance to have the mettle of something tested in the climate of challenges and adversity there is the possibility for learning, and when that’s about learning about the application of what you ‘have’ there’s value in that.

We imagine we are much the same as any other good Canadian web hosting provider here at 4GoodHosting in that we can’t help but take an interest in every single turn in the world of digital connectivity and the realm of e-commerce. Not so much because we work in it of sorts, but more because the nature of what we do gives us a front row seat to all of this. Both in what it has the potential to do or become for the people who make up our clientele, and in how it has the potential to affect the directions we’ll be taking in the future.

The meteoric rise to prominence of cloud computing has been one such topic. One of the things that people like us and industry experts have taken notice of is how the new and challenging realities of COVID have brought us all to new understandings about our utilization of the Cloud. 3 of them in particular are ‘hard lessons’ worthy of some discussion, so that’s what we’re going to do here today.

Cloudops – More Important Than First Realized

For most enterprises, cloud operations have continued to be by and large an afterthought, and that’s been especially true after deployment. While IT organizations have given it some attention, the reality is that the limited adoption of cloudops best practices and the technology behind them is most attributable to limited budgets and a general lack of understanding. With this pandemic those shortcomings have had a spotlight shone on them in a big way.

Much of this is attributable to the increased use of public cloud providers and cloud systems being accessed by an increasingly numerous and industry-crossing remote workforce. This has put increasing focus on the need for operational tools and talent. Cloudops were in place, but it seems their self-correcting capacities aren’t up to scratch for dealing with scaling on such an instantly-bigger level.

We’re continuing to see so many enterprises lacking the tools to automate self-correcting processes, and then there’s the often concurrent issue of a lack of available talent to set up the systems properly. Whether that shortage is temporary or not remains to be seen once all of this is over, because it’s quite likely that the expansion of cloud utilization is outstripping the supply of individuals qualified to be setting up the different platforms properly.

Urgent Need for Solid Enterprise API Strategies

The way data integration has gone from a nice-to-have to a necessity in record time is one of the more front and centre aspects of the Cloud shift. Then there’s the similar need for enterprises to share services that bind behaviour to data. Leveraging well-secured and governed APIs is the solution to both those challenges, but it remains a challenge itself, and it’s one of those areas where more thought would have gone in if the need hadn’t become so pressing with the Cloud.

While it’s true that some systems have APIs (ones provided by SaaS vendors in particular), the majority of cloud-based custom enterprise applications have little to nothing in the way of APIs providing access to system data and services. It’s for this reason that integrations need to occur using one-off processes that won’t scale as the business needs to change because of the problems created by this current global pandemic.

Remote Workers & Cloud Security Not An Easy Pairing

Even before all of this befell the world, cloud security teams were already working with remote employees, and enterprises quickly became aware that an employee’s home network is not the company’s network.

Look no further than VPNs, virtual private clouds, encryption, and legal compliances for vulnerabilities around cloud security, and much of that as a result of a completely remote workforce. Security teams working with cloud infrastructure were overwhelmed with the speed with which all this was required, and it shouldn’t come as a surprise then that in many cases what was built in response to the demand didn’t cut the mustard.

It’s been reported that the risk of a breach increased from 0.0001 percent for most enterprises to 0.2 percent within a few weeks of the new digital and working realities that came with the pandemic, and that has to be a red flag for cloud computing security experts moving forward with all of this.

New Safari Browser with iOS 14 Introducing Biometric Authentication for Logins

The expression ‘the future is now’ has been bandied around for decades with regard to new innovations, perhaps so much so that it really doesn’t carry the same weight of meaning anymore. But every once in a while we do see genuine examples of futuristic technology being realized and becoming available to everyday people. Being able to gain access to resources online through your face or fingertips definitely meets the criteria for being one of them.

Here at 4GoodHosting, we’re just the same as any other reliable Canadian web hosting provider in that we fill only a basic role in the big picture of the ever-expanding digital world. But what that role does provide us is an even more engaging view of all of these advances, and something of a platform to share the futuristic news with the likes of you all.

So here it is – with Safari on iOS 14, macOS Big Sur, and iPadOS 14, you’ll now have the ability to log in to websites using Apple’s Face ID and Touch ID biometric authentication. All of this is being made possible by a technology called FIDO (Fast IDentity Online) that’s speeding us toward a future where typing in a password is by and large an obsolete approach.

The term that’s being used for this is ‘biometric authentication’, and Apple made the announcement on the Wednesday of last week at its annual developers conference. While biometric means of access aren’t entirely new, Apple is stating that the appeal of this new wrinkle is that it’s faster and offers more solid security.

Big Leap for Web Authentication

All of this is a major boost for the Web Authentication browser technology (often shortened to WebAuthn) as it’s been constructed by FIDO consortium allies. Apple joins Mozilla Firefox, Google Chrome, and Microsoft Edge among the allies here, and this is the same engine that’s been behind Windows Hello facial recognition and Android fingerprint authentication.

Now of course Apple’s clout in the smartphone market means that any such development on their side of the fence creates a much bigger splash in the pool.

And that splash just may be what’s needed to tip the entire online security sphere onto its side, and that’s going to be a good thing if so. Passwords just don’t cut it anymore. Unfortunately, nowadays hackers can often use a single password obtained through a data breach to break into many other websites as well.

Plus there’s the fact that good passwords are hard to come up with, and often even harder to remember. For older people, typing them into phone screens isn’t easy, and then there’s the way password managers are complex and often have cross-device compatibility issues.

FIDO technology has the potential to be a far-reaching fix for all of this, eliminating the need for punching in password characters every time you want ‘in’. It looks like it will be able to standardize how apps and websites utilize hardware security keys and biometric authentication, and in that sense it may well be the last piece (or one of the last pieces) of the puzzle.

The key part of that will be bolstering passwords with two-factor authentication systems that are more secure than SMS codes, which simply don’t have the solidity they need. There’s quite a bit to it, but to boil it down, what it’s likely to do is enable two-factor authentication with no need for passwords at all.

You start with a registered device – a phone, another internet access device, or a security key. Then the biometric check completes the second factor by scanning your face or fingerprint.
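For the technically inclined, here’s a minimal sketch of that one-time registration step using the WebAuthn browser API in TypeScript. In practice the challenge and user details come from the site’s server; the values below are placeholders:

```typescript
// Registration: ask the platform authenticator (Face ID / Touch ID)
// to create a new public-key credential for this site.
async function registerDevice(): Promise<void> {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { id: "example.com", name: "Example Site" },       // the site you're registering with
      user: {
        id: new TextEncoder().encode("user-1234"), // opaque server-side user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform", // built-in biometrics rather than a USB key
        userVerification: "required",        // insist on the face / fingerprint scan
      },
    },
  });
  // The public key goes to the server; the private key never
  // leaves the device's secure hardware.
  console.log("registered credential", credential?.id);
}
```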

Apple will let you log in to websites with Face ID or Touch ID, and that’s a big step towards being able to discard flawed password technology once and for all. To move to FIDO login, you’ll have to jump through a hoop once to register your device, like a Mac or iPhone.
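Once that one-time registration is done, each later login becomes a single biometric prompt. A rough sketch of the login step, again with placeholders where the server would supply real values:

```typescript
// Login: the authenticator signs a fresh server challenge, and the
// Face ID / Touch ID prompt stands in for typing a password.
async function loginWithBiometrics(credentialId: Uint8Array): Promise<void> {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rpId: "example.com",          // the credential only works for this site
      userVerification: "required", // the biometric scan itself
      allowCredentials: [{ type: "public-key", id: credentialId }],
    },
  });
  // The signed assertion is sent back to the server for verification.
  console.log("login assertion", assertion?.id);
}
```

Note the rpId field there: the credential is cryptographically bound to the genuine site, which is exactly the anti-phishing property we get to next.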

No Good for Phishers

One big benefit sure to be readily embraced with FIDO is that it blocks phishing efforts pretty much entirely. Login credentials are locked to the real version of a website, which means hackers lose the primary means by which they gain access – stealing passwords. There’s nothing there to steal anymore!

There is a general consensus that we shouldn’t be getting ready to dump all our passwords, at least not yet. Obviously, if you lose your iPhone or iPad, all of this new technology is rendered immediately unusable and you’ll need to have some other means of access.

This is really where developers need to look next, but it’s definitely a huge leap for people who are iPhone or iPad devotees.

Defending Your Site Against Spamdexing

Last week we talked about ways you can make sure your website is optimally indexed, and in keeping with that theme we’ll talk about another aspect of being proactive in ensuring your website is optimally ‘positioned’, as it were, when it comes to being indexed by search engines.

Nearly all of you will know spam as a reference to unsolicited, unwelcome communications on the web rather than as the jellied meat concoction that most of you most certainly don’t have in your kitchen cupboards. Truth is, however, it’s a lot easier to avoid that kind of spam than it is to avoid the email kind and the like.

The fact that spam – of this sort at least – is so universally unwelcome is the reason that search engines make the effort they do to ensure those of us surfing the web for whatever reason are exposed to it as little as possible. To that end they’ve developed algorithms that evaluate whether or not a website is oriented to serve ‘spamming’ purposes.

Here at 4GoodHosting, we’re like any other good Canadian web hosting provider in that we know maximizing visibility is going to be a priority for anyone who’s having a website hosted for e-commerce purposes. There’s a lot that goes into that, but making sure your site is indexed as it should be, without anything marginalizing it, is a big part of what’s important.

So today we’re going to talk about what you can do to see to it your site isn’t ‘spamdexed’ without you even being aware of it.

What’s Spamdexing?

Spamdexing is defined – loosely, considering it’s an industry-lingo slang term for the most part – as an attempt to manipulate search engine rankings and generate traffic that is later used to fuel a scam designed by people who have less-than-legit intentions.

It’s accomplished when these threat actors gain access to a normal, healthy website before injecting malicious keywords and links into it.

It’s defined a little differently when it’s occurring in the sphere of digital marketing and online advertising. There, spamdexing is also called SEO spam, and it’s one of the most common hacks used to increase search engine ranking. It’s estimated that nearly half of all sites that end up being hacked were broken into and ‘reformatted’ for SEO reorientation purposes.

These hacks typically take aim at websites in order to manipulate the success of a site’s SEO campaign and boost its ranking in Google, Bing, or other search engines.

SEO Spam

SEO spam is when an individual attempts to manipulate search engine rankings and generate traffic – but traffic for their own interests, and safe to say not the same ones you’d have.

Then, as mentioned, what happens is that an otherwise normal website is injected with keywords and links intended to lure traffic to various scams. This practice tricks unsuspecting users who believe they are visiting a real website to place orders, but they end up getting scammed.

Types of SEO Spam

Search engine spam can be executed through:

  • Spammy links
  • Spammy keywords
  • Spammy posts & pages

Negative Impacts

So the long and short of all this is that by gaining access to a legit website and injecting links and keywords, the hackers create a working path to their scam-oriented websites. They’re piggybacking off that site’s credibility to boost their own rankings with search engines.

So the question then becomes: what can you do to stay safe from SEO spam? Spamdexing is going to be an ongoing threat, so it’s going to be helpful to know what you can do to counter it effectively. Here’s the list of best practices:

  1. Run updates – Be certain to keep plugins and other website applications updated with the latest security patches. Overlooking updates may leave your entire site open to spamdexing or SEO spam.
  2. Maintain strong passwords – Using strong and unpredictable passwords is important, especially for protecting access to sensitive areas of your site.
  3. Conduct regular scans – Scanning websites on a regular basis goes a long way toward identifying and understanding security issues (a simple automated sketch follows this list). The problem is that it’s fairly common for owners not to know they’ve been hacked until they’re being penalized for having been identified as a source of SEO spam. When that happens, the damage is done as far as your credibility with search engines is concerned.
  4. Utilize a firewall – A web application firewall (WAF) is a proven-effective solution for preventing a search engine spam infection. It defends websites from unknown threats and can also improve the efficiency with which the website operates.
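On point 3, here’s a minimal sketch of what a regular automated scan could look like, assuming Node 18+ with its built-in fetch. The URLs and spam patterns are hypothetical examples you’d tailor to your own site:

```typescript
// Patterns that commonly show up in injected SEO spam: shady keyword
// phrases and links hidden from human visitors with inline CSS.
const SPAM_PATTERNS: RegExp[] = [
  /payday loans?/i,
  /replica (watches|handbags)/i,
  /<a[^>]+style="[^"]*display:\s*none/i,
];

// Fetch a page and return the patterns it matches, if any.
async function scanPage(url: string): Promise<string[]> {
  const html = await (await fetch(url)).text();
  return SPAM_PATTERNS.filter((p) => p.test(html)).map((p) => p.source);
}

// Run against a handful of your own pages (placeholders here),
// ideally on a schedule such as a daily cron job.
async function main(): Promise<void> {
  for (const url of ["https://example.com/", "https://example.com/blog/"]) {
    const hits = await scanPage(url);
    console.log(hits.length ? `${url} FLAGGED: ${hits.join(", ")}` : `${url} looks clean`);
  }
}

main().catch(console.error);
```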