Chromium Manifest V3 Updates May Disable Ad Blockers

Reading Time: 4 minutes

It’s likely that a good many of you are among the thousands upon thousands of people who have an ad blocker installed in your web browser of choice. Some people use one simply to avoid the nuisance of watching ad after ad, and it’s people like these who have led some sites to insist that you ‘whitelist’ them before you can proceed onto the site. That’s perfectly understandable, as those paying advertisers are how the website generates income for the individual or business behind it.

For others, however, a great deal of the working day is spent researching and referencing online, and having to sit through ads before reaching the content we need gets in the way of doing our work. For us, an ad blocker is much more a tool of necessity than of convenience. Still, we get caught up in more than a few sites that insist on being whitelisted too. For me, my ad blocker is a godsend, and I don’t whitelist any website or disable my ad blocker for any of them.

Here at 4GoodHosting, part of what makes us a good Canadian web hosting provider is the insight we’ve built up into what really matters to our customers. The bulk of them are people who use the Information Superhighway as a production resource rather than web ‘surfers’ for whom it’s more of an entertainment one. That’s why today’s news is sure to be very relevant for most of our customers.

Weakened WebRequest APIs

Some of you may not know how your ad blocker works, and that’s perfectly normal; as long as it does its job, you don’t really need to. Chromium is the open-source browser project that Google’s Chrome (and a growing number of other browsers) is built on, so changes made there soon reach what has become most people’s web browser of choice.

However, Chromium developers shared in the last few weeks that among the changes planned for Manifest V3 is one that will restrict the blocking version of the webRequest API. The alternative they’re introducing is called the declarativeNetRequest API.

After becoming aware of it, many ad blocker developers expressed their belief that the introduction of the declarativeNetRequest API will mean many already existing ad blockers won’t be ‘blocking’ much of anything anymore.

One industry expert stated on the subject, “If this limited declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two existing and popular content blockers like uBO and uMatrix will cease to be functional.”

What is the Manifest V3 Version?

It’s basically a mechanism through which specific capabilities can be restricted to certain classes of extensions. These restrictions are indicated in the form of either a minimum or maximum manifest version.

Why the Update?

Currently, the webRequest API allows extensions to intercept requests and then modify, redirect, or block them. The basic flow of handling a request with this API is as follows (a minimal sketch of the blocking form appears just below):

  • Chromium receives the request
  • Chromium queries the extension
  • Chromium receives the result and acts on it
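For illustration, here is a minimal sketch of that blocking form as extensions use it today under Manifest V2. It is not taken from any particular ad blocker; the filter, the ad host, and the chrome typings (e.g. from the @types/chrome package) are assumptions.

```typescript
// background.ts: a minimal sketch of a blocking webRequest listener (Manifest V2 style).
// Assumes the extension's manifest declares the "webRequest", "webRequestBlocking",
// and host permissions; "ads.example.com" is a purely hypothetical ad host.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Chromium pauses the request and asks the extension what to do with it...
    const shouldBlock = details.url.includes("ads.example.com");
    // ...and the returned answer decides whether the request is cancelled or continues.
    return { cancel: shouldBlock };
  },
  { urls: ["<all_urls>"] },   // which requests to intercept
  ["blocking"]                // the blocking form that Manifest V3 plans to restrict
);
```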

However, in Manifest V3 the blocking form of this API will be limited quite significantly. The non-blocking form, which permits extensions to observe network requests but not modify, redirect, or block them, will not be discouraged. In addition, the exact limitations to be placed on the webRequest API have yet to be determined.

Manifest V3 is set to make the declarativeNetRequest API the primary content-blocking API in extensions. This API allows an extension to tell Chromium ahead of time what to do with a given request, instead of Chromium forwarding each request to the extension, which lets Chromium handle requests synchronously. Google insists this API is overall a better performer and provides better privacy guarantees to users – the latter of which is of course very important these days.
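By way of comparison, here is a rough sketch of what the same block looks like in the declarative model, where the extension hands Chromium a rule up front instead of being consulted per request. The rule values are hypothetical and the shape follows the proposed declarativeNetRequest API, so treat it as an outline rather than final syntax.

```typescript
// rules.ts: a sketch of a single declarative blocking rule in the proposed API's shape.
// In a shipped extension, rules like this would typically live in a static JSON file
// referenced from the manifest's declarative_net_request section; values are hypothetical.
const blockAdsRule = {
  id: 1,                                  // unique rule id
  priority: 1,
  action: { type: "block" },              // what Chromium should do with matching requests
  condition: {
    urlFilter: "||ads.example.com",       // hypothetical ad host, filter-list style syntax
    resourceTypes: ["script", "image"],   // only block scripts and images from that host
  },
};

export default [blockAdsRule];            // the full rule set an extension would declare
```

With a ruleset like this, Chromium applies the block itself and never needs to call back into the extension, which is where the performance and privacy claims come from.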

Consensus Among Ad Blocker Developers and Maintainers?

When informed about this coming update, many developers were concerned that the change would end up completely disabling all ad blockers. The concern was that the proposed declarativeNetRequest API would make it impossible to develop new and functional filtering engine designs, because the declarativeNetRequest API is no more than the implementation of one specific filtering engine, and some ad blocker developers have commented that it’s very limited in its scope.

It’s also believed that with the declarativeNetRequest API, developers will be unable to implement other features, such as blocking media elements that are larger than a set size and disabling JavaScript execution through the injection of CSP directives, among others.

Others are making comparisons to Safari’s content-blocking API, which essentially puts a limit on the number of admissible rules. Apple introduced that API fairly recently, and the belief is that it’s part of why Google is now going in the same direction. Many seem to think that extensions written against that API are usable, but still fall well short of the full power of uBlock Origin. The hope is that this API won’t be the last word on content blocking in the foreseeable future.

Google Chrome Solution for ‘History Manipulation’ On Its Way

Reading Time: 3 minutes

No one needs to be convinced that there’s a massive number of shady websites out there designed to ensnare you for any number of no-good purposes. Usually you’re rerouted to them after taking a seemingly harmless action, and then often you’re unable to use the back button to get yourself out of the site once you’ve unwillingly landed on it. Nobody wants to be on these spammy or malicious pages, and you’re stressing out with every second longer that you’re there.

The well-being of web surfers who also happen to be customers or friends here at 4GoodHosting is important to us, and being proactive in sharing all our wisdom about anything and everything related to the web is part of what makes us one of the best Canadian web hosting providers.

It’s that aim that has us sharing this news with you here today – that Google understands the unpleasantness that comes with being locked into a website like this, and has plans to remedy it fairly soon.

The first time something like this occurs you’ll almost certainly click the back button repeatedly before realizing it has no effect. Eventually you’ll come to realize that you’ve got no recourse other than to close the browser, and most often you’ll quit Chrome altogether ASAP and then launch it again for fear of inheriting a virus or something of the sort from the nefarious site.

How History Manipulation Works, and what Google is Doing About It

You’ll be pleased to hear the Chrome browser will soon be armed with specific protection measures to prevent this from happening. The way the ‘back’ button gets broken here is something the Chrome team calls ‘history manipulation’. What it involves is the malicious site stacking dummy pages onto your browsing history, and these immediately bounce you forward to the destination page you were trying to get away from (a rough sketch of the trick follows below).
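To make the trick concrete, here is a rough sketch, not any specific site’s code, of how a page can pad your history with dummy entries and shove you forward again whenever you press back:

```typescript
// A rough sketch of 'history manipulation' as a malicious page might perform it.
// Each pushState() call adds a dummy entry to the tab's history without loading anything.
for (let i = 0; i < 50; i++) {
  history.pushState({ dummy: i }, "", `#trap-${i}`);
}

// Pressing back pops one dummy entry and fires 'popstate'; the page immediately
// pushes the user forward again, so the back button appears to do nothing at all.
window.addEventListener("popstate", () => {
  history.pushState({ dummy: "again" }, "", "#trap-again");
});
```

Chrome’s planned countermeasure is essentially to spot entries like these that the user never actually interacted with and skip over them when the back button is pressed.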

Fortunately, Chrome developers aren’t letting this slide. There are upcoming changes to Chromium’s code which will facilitate the detection of these dummy history entries and then flag sites that use them.

The aim is to allow Chrome to ignore these false history entries entirely, so that you’re not buried in a site you had no intention of landing on and the back button functions just as you expect it to.

This development is still in its formative stages, and we should be aware that these countermeasures aren’t even in the pre-release test versions of Chrome yet. However, industry insiders report that testing should begin within the next few weeks or so, and all signs point towards the new feature being part of the full release version of the web browser.

In addition, because this is a change to the Chromium engine, it may eventually benefit other browsers based on it. Most notable of these is Microsoft Edge, meaning the frustrations of a paralyzed back button should become a thing of the past for either popular web browser. So far there’s no industry talk of Apple doing the same for Safari, but one can imagine they’ll be on top of this in much the same way.

Merry Christmas from 4GoodHosting

Given it’s the 24th of December here we of course would like to take this opportunity to wish a Merry Christmas to one and all. We hope you are enjoying the holidays with your family and this last week of 2018 is an especially good one. We can reflect on 2018, and look forward to an even more prosperous year in 2019.

Happy Holidays and best wishes, from all of us to all of you!

The Surprising Ways We Can Learn About Cybersecurity from Public Wi-Fi

Reading Time: 6 minutes

A discussion of cybersecurity isn’t exactly a popular topic of conversation for most people, but those same people would likely gush at length if asked about how fond of public wi-fi connections they are! That’s a reflection of our modern world it would seem; we’re all about digital connectivity, but the potential for that connectivity to go sour on us is less of a focus of our attention. That is until it actually does go sour on you, of course, at which point you’ll be wondering why more couldn’t have been done to keep your personal information secure.

Here at 4GoodHosting, cybersecurity is a big priority for us, the same way it should be for any of the best Canadian web hosting providers. We wouldn’t have it any other way, and we work to keep abreast of all the developments in the world of cybersecurity, particularly as it pertains to cloud computing these days. We recently read a very interesting article about how the ways we (meaning society as a collective whole) use public wi-fi can highlight some of the needs and realities of web security, and we thought it would be helpful to share it and expand on it for you in our blog this week.

Public Wi-Fi and Its Perils

Free, public Wi-Fi is a real blessing for us when mobile data is unavailable, or scarce, as is often the case! Yet few people can really articulate exactly what the risks of using public wi-fi are and how we can protect ourselves.

Let’s start with this: when you join a public hotspot without protection and begin to access the internet, the packets of data moving from your device to the router are public and thus open to interception by anyone. Yes, SSL/TLS technology exists, but all that’s required for a cybercriminal to snoop on your connection is some relatively simple Linux software that he or she can find online without much fuss.

Let’s take a look at some of the attacks that you may be subjected to due to using a public wi-fi network on your mobile device:

Data monitoring

Wi-Fi adapters are usually set to ‘managed’ mode, where the adapter acts as a standalone client connecting to a single router for internet access, and the interface ignores all data packets except those explicitly addressed to it. However, some adapters can be configured into other modes. In ‘monitor’ mode, the adapter captures all wireless traffic on a given channel, no matter who the source or intended recipient is. In monitor mode the adapter can also capture data packets without being connected to a router, meaning it can sniff and snoop on every piece of data it can get its hands on.

It should be noted that not all commercial wi-fi adapters are capable of this; it’s cheaper for manufacturers to produce models that handle ‘managed’ mode exclusively. Still, should someone get their hands on one and pair it with some simple Linux software, they’ll then be able to see which URLs you are loading plus the data you’re providing to any website not using HTTPS – names, addresses, financial accounts and so on. That’s obviously going to be a problem for you.

Fake Hotspots

Snaring unencrypted data packets out of the air is definitely a risk of public wi-fi, but it’s certainly not the only one. When connecting to an unprotected router, you are giving your trust to the supplier of that connection. Usually this trust is fine; your local Tim Hortons probably takes no interest in your private data. However, being careless when connecting to public routers means that cybercriminals can easily set up a fake network designed to lure you in.

Once this illegitimate hotspot has been created, all of the data flowing through it can be captured, analysed, and manipulated. One of the most common choices here is to redirect your traffic to an imitation of a popular website. This clone site serves one purpose: to capture your personal information and card details in the same way a phishing scam would.

ARP Spoofing

The reality unfortunately is that cybercriminals don’t even need a fake hotspot to mess with your traffic.
Every device on a Wi-Fi or Ethernet network has a unique MAC address, an identifying code used to ensure data packets make their way to the correct destination. Routers and all other devices discover this information using the Address Resolution Protocol (ARP).

Take this example: your smartphone sends out a request asking which device on the network is associated with a certain IP address. The requested device then provides its MAC address, ensuring the data packets are directed to the right physical destination. The problem is that ARP replies can be impersonated, or ‘spoofed’. Your smartphone might send a request for the address of the public wi-fi router, and a different device will answer with a false address.

Provided the signal of the false device is stronger than the legitimate one, your smartphone will be fooled. Again, this can be done with simple Linux software.

Once the spoofing has taken place, all of your data will be sent to the false router, which can subsequently manipulate the traffic however it likes.
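As a practical illustration, one rough way to spot this kind of spoofing from your own machine is to check whether several IP addresses in your ARP table suddenly claim the same MAC address, which is a classic sign the gateway is being impersonated. The Node/TypeScript sketch below assumes the system’s `arp -a` command is available; it is a heuristic, not a substitute for proper network monitoring.

```typescript
import { execSync } from "child_process";

// Parse `arp -a` output and flag MAC addresses that appear for more than one IP,
// a common symptom of ARP spoofing (e.g. a fake device answering for the gateway).
const output = execSync("arp -a", { encoding: "utf8" });

const macToIps = new Map<string, string[]>();
for (const line of output.split("\n")) {
  const ip = line.match(/\d{1,3}(?:\.\d{1,3}){3}/)?.[0];
  const mac = line.match(/(?:[0-9a-fA-F]{1,2}[:-]){5}[0-9a-fA-F]{1,2}/)?.[0]?.toLowerCase();
  if (ip && mac) {
    macToIps.set(mac, [...(macToIps.get(mac) ?? []), ip]);
  }
}

for (const [mac, ips] of macToIps) {
  if (ips.length > 1) {
    console.warn(`Possible ARP spoofing: ${mac} is claiming ${ips.join(", ")}`);
  }
}
```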

MitM – ‘Man-in-the-Middle’ Attacks

A man-in-the-middle attack (MITM) is a reference to any malicious action where the attacker secretly relays communication between two parties, or alters it for whatever malevolent reason. On an unprotected connection, a cybercriminal can modify key parts of the network traffic, redirect this traffic elsewhere, or fill an existing packet with whatever content they wish.

Examples of this could be displaying a fake login form or website, changing links, text, pictures, or more. Unfortunately, this isn’t difficult to do; an attacker within reception range of an unencrypted wi-fi point is able to insert themselves all too easily much of the time.

Best Practices for Securing your Public Wi-Fi Connection

The ongoing frequency of these attacks definitely serves to highlight the importance of basic cybersecurity best practices. Following the ones below will counteract most public wi-fi threats effectively.

  1. Have Firewalls in Place

An effective firewall will monitor and block any suspicious traffic flowing between your device and a router. Yes, you should always have a firewall in place and your virus definitions updated as a means of protecting your device from threats you have yet to come across.

While it’s true that properly configured firewalls can effectively block some attacks, they’re not a 100% reliable defender, and you’re definitely not exempt from danger just because of them. They primarily help protect against malicious traffic rather than malicious programs, and one of the most frequent instances where they don’t protect you is when you’re unknowingly running malware. Firewalls should always be paired with other protective measures, with antivirus software being the best of them.

  2. Software Updates

Software and system updates are also biggies, and should be installed as soon as you can do so. Staying up to date with the latest security patches is a very proven way to have yourself defended against existing and easily-exploited system vulnerabilities.

  3. Use a VPN

Whether you’re a regular user of public Wi-Fi or not, a VPN is an essential security tool that you can put to work for you. VPNs serve you here by generating an encrypted tunnel that all of your traffic travels through, ensuring your data is secure regardless of the nature of the network you’re on. If you have reason to be concerned about your security online, a VPN is arguably the best safeguard against the risks posed by open networks.

That said, free VPNs are not recommended, because many of them have been known to monitor and sell users’ data to third parties. You should choose a service provider with a strong reputation and a strict no-logging policy.

  4. Use Common Sense

You shouldn’t fret too much over hopping onto public Wi-Fi without a VPN, as the majority of attacks can be avoided by adhering to a few tried-and-true safe computing practices. First, avoid making purchases or visiting sensitive websites like your online banking portal. In addition, it’s best to stay away from any website that doesn’t use HTTPS. The popular browser extension HTTPS Everywhere can help you here (a tiny sketch of the idea follows below). Make use of it!
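As a small illustration of what that extension does conceptually, upgrading plain-HTTP links before you follow them is a simple idea; the helper below is hypothetical and not HTTPS Everywhere’s actual code.

```typescript
// A conceptual sketch of the HTTPS-first idea: rewrite plain-HTTP URLs to HTTPS
// before navigating or fetching, so credentials and form data are not sent in the clear.
function upgradeToHttps(rawUrl: string): string {
  const url = new URL(rawUrl);
  if (url.protocol === "http:") {
    url.protocol = "https:";
  }
  return url.toString();
}

console.log(upgradeToHttps("http://example.com/login")); // "https://example.com/login"
```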

The majority of modern browsers also now have in-built security features that are able to identify threats and notify you if they encounter a malicious website. Heed these warnings.

Go ahead and make good use of public Wi-Fi and all the email checking, web browsing, and social media socializing goodness it offers, but just be sure you’re not putting yourself at risk while doing so.

Site Isolation from Google Promises to Repel More Malware Attacks

Reading Time: 2 minutes


Security in the digital business world is a real challenge these days, and the world wide web is becoming as full of nefarious characters as the town of Machine, the ‘End of the Line’ as it were in Dead Man, that cool monochrome Western with Johnny Depp from the ‘90s. A few months back we detailed the big, bad Spectre vulnerability that had come onto the scene and posed major threats to the security of data for any type of website handling sensitive personal information.

It continues to be a ‘thing’, and in response Google recently enabled a new security feature in Chrome that protects users from malicious attacks like Spectre. It’s called Site Isolation, and it’s available with Chrome 67 on Windows, Mac, Linux, and Chrome OS. Here at 4GoodHosting, we’re a Canadian web hosting provider that puts an emphasis on this for obvious reasons, always seeking to stay on top of our clients’ web hosting needs as effectively as possible.

Google’s experimentation with Site Isolation has been going on since Chrome 63, and they’ve patched a lot of issues before enabling it by default for all Chrome users on desktop.

Chrome’s multi-process architecture allows different tabs to use different renderer processes. Site Isolation works by limiting each renderer process to documents from a single site. Chrome then relies on the operating system to mitigate attacks between processes, and thus between sites.

Google has stated that in Chrome 67, Site Isolation has been enabled for 99% of users on Windows, Mac, Linux, and Chrome OS, according to a recent post on their company blog, stating further that ‘even if a Spectre attack were to occur in a malicious web page, data from other websites would generally not be loaded into the same process, and so there would be much less data available to the attacker. This significantly reduces the threat posed by Spectre.’

Additional known issues in Chrome for Android have been identified and are being worked on. Site Isolation for Chrome for Android should be ready with Chrome 68.

Need for Speed

A quick mention as well for the Speed Update to Google Search on mobile. With this new feature, page speed will be a ranking factor for mobile searches. Of course, page speed has already been factoring into search engine rankings for some time now, but it was primarily based on desktop searches.

All of this is based on the unsurprising finding that people want answers to their searches as fast as possible, and page loading speed is an issue. Keeping that in mind, Google’s new feature for mobile users will only affect pages that are painfully slow, and that has to be considered a good thing. Average pages should remain unaffected by and large.

We’re always happy to discuss in more detail how our web hosting service comes with the best in security and protective measures for your website when it’s hosted with us, and we also offer very competitively priced SSL certificates for Canadian websites that go a long way in securing your site reliably. Talk to us on the phone or email our support team.

The Appeal of Hybrid Cloud Hosting

Reading Time: 3 minutes

Most of you will need no introduction to the functionality and application of cloud computing, but those of you who aren’t loaded with insight into the ins and outs of web hosting may be less familiar with cloud hosting and what makes it significantly different from standard web hosting. Fewer still will likely know of hybrid hosting and the way it’s made significant inroads into the hosting market, with very specific appeals for certain web users with business and / or management interests.

Here at 4GoodHosting, we’ve done well establishing ourselves as a quality Canadian web hosting provider, and a part of what’s allowed us to do that is by having our thumb on the pulse of our industry and sharing those developments with our customers in language they can understand. Hybrid hosting may well be a good fit for you, and as such we’re happy to share what we know regarding it.

If we had to give a brief overview of it, we’d say that hybrid hosting is meant for site owners that want the highest level of data security along with the economic benefits of the public cloud. Privacy continues to be of a primary importance, but the mix of public and private cloud environments and the specific security, storage, and / or computing capacities that come along with the pairing are very appealing.

What Exactly is the Hybrid Cloud?

This combination of private and public cloud services communicates via encrypted technology that allows for data and / or app portability, and it consists of three individual parts: a public cloud, a private cloud, and a cloud service and management platform.

Both the public and private clouds are independent elements, allowing you to store and protect your data in your private cloud while employing all of the advanced computing resources of the public cloud. To summarize, it’s a very beneficial arrangement where your data is especially secure but you’re still able to bring in all the advanced functionality and streamlining of processes that come with cloud computing.

If you have no concerns regarding the security of your data, you are a) lucky, and b) likely to be quite fine with a standard cloud hosting arrangement.

If that’s not you, read on…

The Benefits of Hybrid Clouds

One of the big pluses for hybrid cloud hosting is being able to keep your private data private in an on-prem, easily accessible private infrastructure, which means you don’t need to push all your information through the public Internet, yet you’re still able to utilize the economical resources of the public cloud.

Further, hybrid hosting allows you to leverage the flexibility of the cloud, taking advantage of computing resources only as needed, and – most relevantly – also without offloading ALL your data to a 3rd-party datacenter. You’re still in possession of an infrastructure to support your work and development on site, but when that workload exceeds the capacity of your private cloud, you’re still in good hands via the failover safety net that the public cloud provides.

Utilizing a hybrid cloud can be especially appealing for small and medium-sized business offices, with an ability to keep company systems like CRMS, scheduling tools, and messaging portals plus fax machines, security cameras, and other security / safety fixtures like smoke or carbon monoxide detectors connected and working together as needed without the same risk of web-connection hardware failure or security compromise.

The Drawbacks of Hybrid Clouds

The opposite side of the hybrid cloud pros and cons is that maintaining and managing such a massive, complex, and expensive infrastructure can be a demanding task. Assembling your hybrid cloud can also cost a pretty penny, so it should only be considered if it promises to be REALLY beneficial for you. Keep in mind as well that hybrid hosting is less than ideal in instances where data transport on both ends is sensitive to latency, which of course makes offloading to the cloud impractical for the most part.

Good Fits for Hybrid Clouds

It tends to be a more suitable fit for businesses that have an emphasis on security, or others with extensive and unique physical data needs. Here’s a list of a few sectors, industries, and markets that have been eagerly embracing the hybrid cloud model:

  • Finance sector – the appeal for them is in the decreased on-site physical storage needs and lowered latency
  • Healthcare industry – often to overcome regulatory hurdles put in place by compliance agencies
  • Law firms – protecting against data loss and security breaches
  • Retail market – for handling compute-heavy analytics data tasks

We’re fortunate that these types of technologies continue to evolve as they have, especially considering the ever-growing predominance of web-based business and communication infrastructures in our lives and the data storage demands and security breach risks that go along with them.

Seven Steps to a Reliably Secure Server

Reading Time: 5 minutes

In a follow-up to last week’s blog post, where we talked about how experts expect an increase in DDoS attacks this year, it makes sense for us to provide some tips this week on the best way to secure a server. Here at 4GoodHosting, in addition to being a good Canadian web hosting provider, we also take an interest in the well-being of clients of ours who are in business online. Obviously, the prospect of any external threat taking them offline for an extended period of time endangers the livelihood of their business, and as such we hope these discussions will prove valuable.

Every day we’re presented with new reports of hacks and data breaches causing very unwelcome disruptions for businesses and users alike. Web servers tend to be vulnerable to security threats and need to be protected from intrusions, hacking attempts, viruses, and other malicious attacks, and there’s no substitute for a secure server for a business that operates online and engages in network transactions.

Servers tend to be the target because they are often all too penetrable for hackers, and add to that the fact they’re known to contain valuable information. As a result, taking proper measures to ensure you have a secure server is as vital as securing the website, the web application, and of course the network around it.

Your first decisions to evaluate are the server, OS, and web server software you’ll choose to collectively function as the server you hope will be secure, and then the kinds of services that run on it. No matter which particular web server software and operating system you choose to run, you must take certain measures to increase your server security. For starters, you’ll need to review and configure every aspect of your server in order to secure it.

It’s best to maintain a multi-faceted approach that offers in-depth security, because each security measure implemented stacks an additional layer of defence. The following is a list we’ve assembled from many different discussions with web development and security experts that, individually and collectively, will help strengthen your web server security and guard against cyberattacks, stopping them essentially before they even have the chance to get ‘inside’ and wreak havoc.

Let’s begin:

  1. Automated Security Updates

Unfortunately, most vulnerabilities come with a zero-day status, and before you know it a public vulnerability can be used to create a malicious automated exploit. Your best defence is to keep your eye on the ball at all times when it comes to receiving security updates and putting them into place. Of course your eye isn’t available 24/7, so you can and should be applying automatic security updates and security patches as soon as they are available through the system’s package manager. If automated updates aren’t available, you need to find a better system – pronto.

  2. Review Server Status and Server Security

Being able to quickly review the status of your server and check whether there are any problems originating from CPU, RAM, disk usage, running processes, and other metrics will often help pinpoint server security issues much faster (a small example script follows below). Ubiquitous command-line tools can also review the server status. Each of your network service logs, database logs (Microsoft SQL Server, MySQL, Oracle), and site access logs present on a web server are best stored in a segregated area and checked regularly. Be on the lookout for strange log entries. Should your server be compromised, having a reliable alerting and server monitoring system standing guard will prevent the problem from snowballing and allow you to take strategic reactive measures.
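As a small example of the “quick review” idea, a script like the hypothetical one below (Node/TypeScript, using only the built-in os module plus a `df` call) can run from cron and surface load, memory, and disk pressure before they become outages. The thresholds are illustrative only.

```typescript
import * as os from "os";
import { execSync } from "child_process";

// A quick, cron-friendly health snapshot: load average, memory pressure, and disk usage.
// Tune the thresholds to your own server's normal baseline.
const [load1] = os.loadavg();                                   // 1-minute load average
const memUsedPct = 100 * (1 - os.freemem() / os.totalmem());
const rootDisk = execSync("df -h /", { encoding: "utf8" }).trim().split("\n").pop();

console.log(`load(1m)=${load1.toFixed(2)} cpus=${os.cpus().length}`);
console.log(`memory used: ${memUsedPct.toFixed(1)}%`);
console.log(`root filesystem: ${rootDisk}`);

if (load1 > os.cpus().length || memUsedPct > 90) {
  console.warn("WARNING: server under unusual load; check running processes and logs");
}
```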

  3. Perimeter Security With Firewalls

Seeing to it that you have a secure server involves the installation of security applications like border routers and firewalls proven effective for filtering known threats, automated attacks, malicious traffic, DDoS floods, bogon IPs, and any untrusted networks. A local firewall will be able to actively monitor for attacks like port scans and SSH password guessing and effectively neutralize their threat to the server (a fail2ban-style sketch of that idea follows below). Further, a web application firewall helps filter incoming web page requests that are made for the explicit purpose of breaking or compromising a website.
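To give a concrete flavour of the SSH password-guessing detection mentioned above, here is a minimal fail2ban-style sketch. The log path and threshold are assumptions (a standard sshd log on Debian/Ubuntu lives at /var/log/auth.log), and in practice you would let fail2ban or the firewall itself do this job.

```typescript
import { readFileSync } from "fs";

// Count failed SSH logins per source IP from a standard sshd log and flag heavy offenders.
// Path and threshold are assumptions; fail2ban or firewall rate limiting does this properly.
const AUTH_LOG = "/var/log/auth.log";
const THRESHOLD = 10;

const failures = new Map<string, number>();
for (const line of readFileSync(AUTH_LOG, "utf8").split("\n")) {
  const match = line.match(/Failed password .* from (\d{1,3}(?:\.\d{1,3}){3})/);
  if (match) {
    failures.set(match[1], (failures.get(match[1]) ?? 0) + 1);
  }
}

for (const [ip, count] of failures) {
  if (count >= THRESHOLD) {
    console.warn(`${ip}: ${count} failed SSH logins, consider blocking it at the firewall`);
  }
}
```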

  4. Use Scanners and Security Tools

Fortunately, we have many security tools (URLScan, ModSecurity) typically provided with web server software to aid administrators in securing their web server installations. Yes, configuring these tools can be a laborious and time-consuming process, particularly with custom web applications, but the benefit is that they add an extra layer of security and give you serious reassurance.

Scanners can help automate the process of running advanced security checks against open ports and network services to ensure your server and web applications are secure (a bare-bones sketch of the port-probing first step follows below). They most commonly check for SQL injection, web server configuration problems, cross-site scripting, and other security vulnerabilities. You can even get scanners that automatically audit shopping carts, forms, dynamic web content, and other web applications and then provide detailed reports on the vulnerabilities they detect. These are highly recommended.
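At their simplest, such scanners begin by probing which ports actually answer before running deeper checks. A bare-bones sketch of that first step is below; the host and port list are placeholders, and you should only ever scan hosts you own or administer.

```typescript
import { Socket } from "net";

// Probe a handful of common ports to see which services are reachable from outside.
// This is only the first step a real scanner performs before vulnerability checks.
function probePort(host: string, port: number, timeoutMs = 2000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = new Socket();
    const finish = (open: boolean) => { socket.destroy(); resolve(open); };
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => finish(true));
    socket.once("timeout", () => finish(false));
    socket.once("error", () => finish(false));
    socket.connect(port, host);
  });
}

(async () => {
  const host = "example.com";                       // placeholder: use your own server
  for (const port of [21, 22, 25, 80, 443, 3306]) {
    const open = await probePort(host, port);
    console.log(`${host}:${port} ${open ? "open" : "closed/filtered"}`);
  }
})();
```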

  5. Remove Unnecessary Services

Typical default operating system installations and network configurations (Remote Registry Service, Print Server service, RAS) will not be secure, and the more services running on an operating system, the more ports are left vulnerable to abuse. It’s therefore advisable to disable all unnecessary services so they can’t be started. As an added bonus, you’ll be boosting your server performance by freeing up hardware resources.

  6. Manage Web Application Content

The entirety of your web application or website files and scripts should be stored on a separate drive, away from the operating system, logs, and any other system files. That way, even if hackers gain access to the web root directory, they’ll have absolutely zero success using any operating system command to take control of your web server.

  7. Permissions and Privileges

File and network service permissions are imperative for a secure server, as they help limit any potential damage that may stem from a compromised account. Malicious users can compromise the web server engine and use the account to carry out malevolent tasks, most often executing files that corrupt your data or encrypt it to suit their own ends. Ideally, file system permissions should be granular. Review your file system permissions on a VERY regular basis to prevent users and services from engaging in unintended actions (a small audit sketch follows below). In addition, consider disabling direct ‘root’ login over SSH and disabling any default account shells that you do not normally use. Make sure to use the least-privilege principle to run each specific network service, and also be sure to restrict what each user or service can do.
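As one small, concrete example of reviewing permissions regularly, the hypothetical script below walks a web root and flags anything world-writable, which is almost never what you want on a production server. The path is a placeholder.

```typescript
import { readdirSync, lstatSync } from "fs";
import { join } from "path";

// Recursively flag world-writable files and directories under a web root.
// Uses lstat so symlinks themselves are checked rather than followed.
function findWorldWritable(dir: string): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const fullPath = join(dir, entry.name);
    if ((lstatSync(fullPath).mode & 0o002) !== 0) {   // the "other" write bit is set
      hits.push(fullPath);
    }
    if (entry.isDirectory()) {
      hits.push(...findWorldWritable(fullPath));
    }
  }
  return hits;
}

for (const path of findWorldWritable("/var/www/html")) {   // placeholder web root
  console.warn(`world-writable: ${path}`);
}
```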

Securing web servers can make it so that corporate data and resources are safe from intrusion or misuse. We’ve clearly established here that it is about people and processes as much as it is about any one security ‘product.’ By incorporating the majority (or ideally all) measures mentioned in this post, you can begin to create a secure server infrastructure that’s supremely effective in supporting web applications and other web services.

IT Security Insiders: Expect an Escalation in DDoS Attacks for Duration of 2017

Reading Time: 4 minutes

The long and short of it is that Internet security will always be a forefront topic in this industry. That’s a reflection of both the never-ending importance of keeping data secure given the predominance of e-commerce in the world today and the fact that cyber hackers will never slow in their efforts to get ‘in’ and do harm in the interest of making ill-gotten financial gains for themselves.

So with the understanding that the issue of security / attacks / preventative measures is never going to be moving to the back burner, let’s move forward to discuss what the consensus among web security experts is – namely, that DDoS Attacks are likely to occur at an even higher rate than previously for the remainder of 2017.

Here at 4GoodHosting, in addition to being one of the best web hosting providers in Canada, we’re very active in keeping on top of trends in the web-based business and design worlds, as they tend to have great relevance to our customers. As such, we think this particular piece of news is worthy of some discussion.

Let’s have at it – why can we expect to see more DDoS attacks this year?

Data ‘Nappers and Ransom Demands

As stated, IT security professionals predict that DDoS attacks will be more numerous and more pronounced in the year ahead, and many have started preparing for attacks that could cause outages worldwide in worst-case scenarios.

One such scenario could be – brace yourselves – a worldwide Internet outage. Before you become overly concerned, however, it would seem that the vast majority of security teams are already taking steps to stay ahead of these threats, with ‘business continuity’ measures increasingly in place to allow continued operation should any worst-case scenario come to fruition.

Further, these same insiders say that the next DDoS attack will be financially motivated. While there are continued discussions about attackers taking aim at nation states, security professionals conversely believe that criminal extortionists are the most likely group to successfully undertake a large-scale DDoS attack against one or more specific organizations.

As an example of this, look no further than the recent developments regarding Apple and their being threatened with widespread wiping of devices by an organization calling itself the ‘Turkish Crime Family’ if the computing mega-company doesn’t cough up $75,000 in cryptocurrency or $100,000 worth of iTunes gift cards.

A recent survey of select e-commerce businesses found that 46% of them expect to be targeted by a DDoS attack over the next 12 months. Should that attack come with a ransom demand like the one above, it may be particularly troublesome for any management group, given the fact that nearly ALL of them will not have the deep pockets that Apple has.

Further, the same study found that a concerning number of security professionals believe their leadership teams would struggle to come up with any other solution than to give in to any ransom demands. As such, having effective protection against ransomware and other dark software threats is as important as it’s ever been.

Undercover Attacks

We need to mention as well that these same security professionals are also worried about smaller, low-volume DDoS attacks that last 30 minutes or less. These have come to be classified as ‘Trojan horse’ DDoS attacks, and the problem is that they typically will not be mitigated by most legacy DDoS mitigation solutions. One common ploy used by hackers is to employ such an attack as a distraction mechanism that diverts the guards and opens up the gates for a separate, larger DDoS attack.

Citing the same survey yet again, fewer than 30% of IT security teams have enough visibility worked into their networks to mitigate attacks that do not exceed 30 minutes in length. Further, there is the possibility of hidden effects of these attacks on their networks, like undetected data theft.

Undetected data theft is almost certainly more of a problem than many are aware – and particularly with the fast-approaching GDPR deadline which will make it so that organizations could be fined up to 4% of global turnover in the event of a major data breach deemed to be ‘sensitive’ by any number of set criteria.

Turning Tide against ISPs

Many expect regulatory pressure to be applied against ISPs that are perceived to be insufficient in protecting their customers against DDoS threats. Of course, there is the question as to whether an ISP is to blame for not mitigating a DDoS attack when it occurs, but it seems the consensus is that it is, more often than not – which suggests that most would not place the responsibility on their own security teams.

The trend seems to be to blame upstream providers for not being more proactive when it comes to DDoS defense. Many believe the best approach to countering these increasing attacks is to have ISPs that are equipped to defend against DDoS attacks, by both protecting their own networks and offering more comprehensive solutions to their customers via paid-for, managed services that are proven to be effective.

We are definitely sympathetic to anyone who has concerns regarding the possibility of these attacks and how they could lead to serious losses should they be able to wreak havoc and essentially remove the site from the web for extended periods of time. With the news alluded to earlier that there could even be a worldwide Internet outage before long via the new depth and complexity of DDoS attacks, however, it would seem that anyone with an interest in being online for whatever purpose should be concerned as well.

Understanding the New ‘Perimeter’ Against Cyber Attacks

Reading Time: 4 minutes


If you yourself haven’t been the victim of a cyber attack, you very likely know someone else who has, and in fact the numbers suggest that upwards of 90% of organizations experienced at least SOME level of an IT security breach in the past year. Further, it’s believed that one in 6 organizations have had significant security breaches during the same period.

Here at 4GoodHosting, we’ve established ourselves as a top Canadian web hosting provider but we’re always keen to explore industry trends – positive and negative – that impact what matters to our customers. And our array of customers covers pretty much any type of interest one could have in operating on the World Wide Web.

Cyberattacks have pretty much become a part of everyday life. While that’s not to suggest these types of incidents are ‘inevitable’, there is only so much any one individual or IT team can do to guard against them. Yes, there are standard PROACTIVE web security protocols to follow, but we won’t look at those here, given that they’re quite commonly understood by those of you who have them as part of your job description and responsibility within the organization.

Rather, let’s take a look at being REACTIVE in response to a cyber attack here, and in particular with tips on how to disinfect a data centre and beef it up against further transgressions.

Anti-Virus and Firewalls – Insufficient

It would seem that the overwhelming trend with cloud data security revolves around the use of firewalls, believing them to be a sufficiently effective perimeter. Oftentimes, however, exceptions are made to allow cloud applications to run, and in doing so the door is opened for intrusions to occur.

So much for firewalls securing the enterprise.

Similarly, anti-virus software can no longer keep pace with the immense volume of new viruses and variants being created in cyberspace nearly every day. A reputable cybersecurity firm recently announced the discovery of a new Permanent Denial-of-Service (PDoS) botnet named BrickerBot, which serves to render the victim’s hardware entirely useless.

A PDoS attack – or ‘phlashing’ as it’s also referred to – can damage a system so extensively that full replacement or reinstallation of hardware is required, and unfortunately these attacks are becoming more prevalent. It is true that there are plenty of useful tools out there, such as Malwarebytes, that should be used to detect and cleanse the data centre of any detected or suspected infections.

Making Use of Whitelisting And Intrusion Detection

Whitelisting is a good way to strengthen your defensive lines and isolate rogue programs that have successfully infiltrated your data center. Also known as application control, whitelisting involves a short list of the applications and processes that have been authorized to run. This strategy limits use by means of a “deny-by-default” approach so that only approved files or applications are able to be installed. Dynamic application whitelisting strengthens security defenses and helps with preventing malicious software and other unapproved programs from running.
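To make the deny-by-default idea concrete, here is a minimal sketch of allowlisting by file hash. The hash value and path are placeholders, and real application-control products do considerably more than this, but the core decision is the same.

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Deny-by-default: only binaries whose SHA-256 digest appears in the approved set may run.
// The entry below is a placeholder; a real allowlist is built from known-good installs.
const APPROVED_HASHES = new Set<string>([
  "replace-with-a-known-good-sha256-digest",
]);

function isApproved(executablePath: string): boolean {
  const digest = createHash("sha256").update(readFileSync(executablePath)).digest("hex");
  return APPROVED_HASHES.has(digest);
}

const candidate = "/usr/local/bin/some-tool";       // placeholder path
if (!isApproved(candidate)) {
  console.warn(`${candidate} is not on the allowlist, refusing to run it`);
}
```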

Modern networking tools should also be integrated as part of your security arsenal, and if they are configured correctly they can highlight abnormal patterns that may be a cause for concern. As an example, intrusion detection can be set up to trigger when any host uploads a significant load of data several times over the course of a day (a toy version of that check follows below). The idea is to flag abnormal user behaviour and help with containing existing threats.
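A toy version of that trigger is sketched below, assuming you already collect per-host upload byte counts from your network tools; the records and thresholds are invented purely for illustration.

```typescript
// A toy anomaly check: flag hosts whose upload volume spikes repeatedly in one day.
// The records below are invented; in practice they would come from flow logs or a proxy.
interface UploadRecord {
  host: string;
  bytesUploaded: number;
}

const DAILY_RECORDS: UploadRecord[] = [
  { host: "10.0.0.12", bytesUploaded: 900_000_000 },
  { host: "10.0.0.12", bytesUploaded: 750_000_000 },
  { host: "10.0.0.12", bytesUploaded: 820_000_000 },
  { host: "10.0.0.45", bytesUploaded: 4_000_000 },
];

const SPIKE_BYTES = 500_000_000;   // what counts as a "significant" upload
const SPIKE_LIMIT = 2;             // how many spikes per day before we alert

const spikesPerHost = new Map<string, number>();
for (const record of DAILY_RECORDS) {
  if (record.bytesUploaded > SPIKE_BYTES) {
    spikesPerHost.set(record.host, (spikesPerHost.get(record.host) ?? 0) + 1);
  }
}

for (const [host, spikes] of spikesPerHost) {
  if (spikes > SPIKE_LIMIT) {
    console.warn(`${host} uploaded a large volume ${spikes} times today, investigate`);
  }
}
```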

Security Analytics

What’s the best way to augment current security practices? Experts in this area increasingly advocate real-time analytics used in tandem with specific methodologies that focus on likely attack vectors. This approach revolves around seeing the web as a hostile environment filled with predators. In the same way behavioural analytics are used in protecting against cyber terrorists, we need to take an in-depth look at patterns to better detect internal security threats.

However, perhaps the most important thing to realize is that technology alone will never solve the problem. Perfect your email filters, and the transgressors will move to mobile networks. Improve those, and they’ll jump to social media accounts. The solution must address the source and initial points of entry, with training and education implemented so that the people in a position to respond and ‘nip it in the bud’ are explicitly aware of these attacks just as they begin.

End-user internet security awareness training is the answer, but we are only in the formative stages of making it accessible to users across all the different types of organizations. Much of it comes down to teaching users not to do inadvisable things like clicking on suspect URLs in emails, or opening attachments that let in the bad actors.

Putting all staff through the requisite training may be expensive and time-consuming / productivity-draining, but we may soon be at the point where it’s no longer an option NOT to have these types of educational programs. The new reality is that what we previously referred to as ‘the perimeter’ no longer really exists, or if it does it’s by and large ineffective in preventing the entirety of cyber attacks. The ‘perimeter’ is now every single individual on their own, and accordingly the risks are even greater, with the weakest link in the chain essentially determining the strength of your entire system defences.

Amnesty International Report on Instant Messaging Services and Privacy

Reading Time: 4 minutes


Skype and Snapchat, among other companies, have failed to adopt basic privacy protections, as recently stated in Amnesty International’s special “Message Privacy Ranking” report. The report compares 11 popular instant messaging services.

Companies were ranked based on their recognition of online threats to human rights, default deployment of end-to-end encryption, user disclosure, government disclosure, and publishing of the technical details of their encryption.

“If you think instant messaging services are private, you are in for a big surprise. The reality is that our communications are under constant threat from cybercriminals and spying by state authorities. Young people, the most prolific sharers of personal details and photos over apps like Snapchat, are especially at risk,” Sherif Elsayed-Ali, Head of Amnesty International’s Technology and Human Rights Team said in a statement.

Snapchat only scored 26 points in the report (out of 100), and BlackBerry was rated even worse at 20 points. Skype has weak encryption, scoring only 40.

The middle group in the rankings included Google, which scored a 53 for its Allo, Duo, & Hangouts apps, Line and Viber, with 47 each, and Kakao Talk, which scored a 40.

The report also stated “that due to the abysmal state of privacy protections there was no winner.”

On a side note, protecting privacy rights is also part of the motivation behind the Let’s Encrypt project, which we use to supply free SSL certificates.

Amnesty International has petitioned messaging services to apply “end-to-end encryption” (as a default feature) to protect: activists, journalists, opposition politicians, and common law-abiding citizens world-wide. It also urges companies to openly publish and advertise the details about their privacy-related practices & policies.

As for the most popular instant messaging app, WhatsApp, Facebook has thrown everybody a new surprise twist.

WhatsApp is updating its privacy policy. Facebook wants your data, and end-to-end encryption is going to be shut off soon.
WhatsApp, now owned by Facebook, stirred up some uproar this week after announcing that it’s changing its terms of service and privacy policy to *allow* data to be shared with Facebook. It means that for the first time WhatsApp will give permission for accounts to be connected to Facebook. This is after pledging, in 2014, that it wouldn’t do so – a pledge it has now backtracked on.

WhatsApp now says that it will give the social networking site more data about its users – allowing Facebook to suggest phone contacts as “friends”.

“By coordinating more with Facebook, we’ll be able to do things like track basic metrics about how often people use our services and better fight spam on WhatsApp,” Whatsapp has written.

“By connecting your phone number with Facebook’s systems, Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them. … For example, you might see an ad from a company you already work with, rather than one from someone you’ve never heard of.”

Many aren’t pleased with the move, especially since WhatsApp previously promised not to change its privacy settings.
If you want to carry on using WhatsApp, you can’t opt out of the Facebook connection feature, as the update of the terms and privacy policy is compulsory. “This allows us to do things like improve our app’s performance and better coordinate,” says WhatsApp.

The app’s end-to-end encryption will also be stopped, even though the company implemented it earlier this year and claimed it made conversations more secure.

The popular messaging service’s recent change in privacy policy to start sharing users’ phone numbers with Facebook – the first policy change since WhatsApp was acquired by Facebook in 2014 – has attracted regulatory scrutiny in Europe.

The Italian antitrust watchdog on Friday also announced a separate probe into whether WhatsApp obliged users to agree to sharing personal data with Facebook.

The European Union’s 28 data protection authorities said in a statement they had requested WhatsApp stop sharing users’ data with Facebook until the “appropriate legal protections could be assured” to avoid falling foul of EU data protection law.

WhatsApp’s new privacy policy involves the sharing of information with Facebook for purposes that were not included in the terms of service when users signed up, raising questions about the validity of users’ consent, according to the Article 29 Working Party (WP29), the body through which the European authorities have responded.

The WP29 group also urges WhatsApp to stop passing user data to Facebook while it investigates the legality of the arrangement.
Subsequently a spokeswoman for WhatsApp said the company was working with data protection authorities to address their questions.

Facebook has had run-ins with European privacy watchdogs in the past over its processing of users’ data. However, the fines that regulators can levy are paltry in comparison to the revenues of the big U.S. tech companies concerned.

The European regulators will discuss the Yahoo and WhatsApp cases in November.

“The Article 29 Working Party (WP29) has serious concerns regarding the manner in which the information relating to the updated Terms of Service and Privacy Policy was provided to users and consequently about the validity of the users’ consent,” it writes.

“WP29 also questions the effectiveness of control mechanisms offered to users to exercise their rights and the effects that the data sharing will have on people that are not a user of any other service within the Facebook family of companies.”

We haven’t heard of any discussion within Canada as of yet.

Thank you for reading the 4GoodHosting blog. We would love to hear from you.

Google & Facebook will be Building a Big Trans-Pacific Fiber-Optic Cable

Reading Time: 3 minutes

Map published by Facebook

Google and Facebook are partnering to pay for the laying of what will be one of the highest-capacity undersea data cables, piping data in the form of light all the way across the Pacific and bridging Los Angeles and Hong Kong.

This project is the second such partnership Facebook has joined, and yet another example of recent big business in the submarine fiber-optic cable industry, which has traditionally been dominated by a group of private and government carriers.

Companies like Facebook, Google, Microsoft, and Amazon operate huge-scale data centers that deliver various internet services to people worldwide. These internet big boys are quickly reaching a point where their global bandwidth needs are so high that it makes more sense for them to fund cable construction projects directly rather than to purchase capacity from established carriers.

Earlier this year, in May 2016, Facebook announced that it had teamed up with Microsoft on a high-capacity cable across the Atlantic called “MAREA”. This cable will link internet backbone hubs in Virginia Beach, Virginia, and Bilbao, Spain. Telefonica will be administering this future transatlantic data line.

Europe and the Asia-Pacific region are both important markets for the internet services giants, and these cables will boost bandwidth levels between the companies’ data centers in Asia, Europe, and the US.

The submerged fibre line is named the “Pacific Light Cable Network” (PLCN), after Pacific Light Data Communications, Inc., the third partner in the project.

Both the MAREA and Pacific Light cables will be built by TE SubCom, one of the biggest names in the submarine fibre-optic cable industry.

The 120 Tbps (terabits per second) PLCN system will provide greater diversity in transpacific cable routes, as Facebook recently noted. “Most Pacific subsea cables go from the United States to Japan, and this new direct route will give us more diversity and resiliency in the Pacific,” Facebook’s article states.


One difference PLCN and MAREA have from traditional transoceanic cable systems is that they will be interoperable with different networking equipment, rather than being designed to function with specific or proprietary landing-station technologies. Companies will be able to choose whatever optical equipment best fits their needs, and as better technology becomes available, they’ll be able to change or upgrade that equipment. In other words, equipment refreshes can occur as optical technology improves.

When equipment can be replaced by better technology at a quicker pace, costs should go down and bandwidth rates should increase more quickly.

Another cable, “FASTER”, backed by Google and several Asian telecommunications and IT services companies, became operational in early 2016. Yet another big submarine cable project is the “New Cross Pacific Cable System”, backed by Microsoft and several Asian telecoms. NCP is expected to light up in 2017.

Also earlier this year, Amazon Web Services made its first direct investment in a submarine cable, helping make the planned “Hawaiki” submarine cable project between the US, Australia, and New Zealand possible. Both of the cables just mentioned are set to land in Oregon.

High-speed optical cable is bringing the world together faster than ever before. At the speed of light, approximately 186,000 miles per second, data can circle the whole world more than 7 times a second.

Due to factors such as this, 4GoodHosting.com intends to continue serving websites all over the world, and reaching a larger, global market of new customers who wish to have their website hosted from Canada (a most liberal, low-key & relaxed country).