Google Chrome Solution for ‘History Manipulation’ On Its Way

Reading Time: 3 minutes

No one needs to be convinced that there's a massive number of shady websites out there designed to ensnare you for any number of no-good purposes. Usually you're rerouted to them after taking a seemingly harmless action, and then you're often unable to back yourself out of the site once you've unwillingly landed on it. Nobody wants to be on these spammy or malicious pages, and you're stressing out every second longer that you're there.

The well-being of web surfers who also happen to be customers or friends here at 4GoodHosting is important to us, and being proactive in sharing all our wisdom about anything and everything related to the web is part of what makes us one of the best Canadian web hosting providers.

It’s that aim that has us sharing this news with you here today – Google understands the unpleasantness that comes with being locked into a website, and has plans to remedy it fairly soon.

The first time something like this occurs you’ll almost certainly click the back button repeatedly before realizing it’s got no function. Eventually you’ll come to realize that you’ve got no recourse other than to close the browser, and more often than not you’ll quit Chrome altogether ASAP and then launch it again for fear of inheriting a virus or something of the sort from the nefarious site.

How History Manipulation Works, and What Google Is Doing About It

You’ll be pleased to hear the Chrome browser will soon be armed with specific protection measures to prevent this from happening. The way the ‘back’ button is broken here is called ‘history manipulation’ by the Chrome team. What it involves is the malicious site stacking dummy pages onto your browsing history, and these work to fast-forward you back to the unintended destination page you were trying to get away from.
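To make the mechanism concrete, here is a toy Python model of a browser history stack – a conceptual sketch of the idea described above, not Chromium's actual code. The `user_initiated` flag and the skip-back logic are our own illustrative assumptions.

```python
class HistoryEntry:
    def __init__(self, url, user_initiated):
        self.url = url
        self.user_initiated = user_initiated  # created by a real user action?

def effective_back_target(history):
    """Walk backwards past dummy entries the user never asked for."""
    for entry in reversed(history[:-1]):
        if entry.user_initiated:
            return entry.url
    return None

history = [
    HistoryEntry("https://example.com/search", user_initiated=True),
    # The shady page you were rerouted to, plus the dummy entries it
    # stacked on via history.pushState() to trap the back button:
    HistoryEntry("https://shady.example/landing", user_initiated=False),
    HistoryEntry("https://shady.example/dummy", user_initiated=False),
    HistoryEntry("https://shady.example/current", user_initiated=False),
]

# Ignoring the dummy entries, 'back' returns you to where you started:
print(effective_back_target(history))  # https://example.com/search
```

The heart of the fix is simply flagging which history entries the user actually navigated to, and skipping the rest.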

Fortunately, Chrome developers aren’t letting this slide. There are upcoming changes to Chromium’s code which will facilitate the detection of these dummy history entries and then flag sites that use them.

The aim is for Chrome to ignore these false history entries entirely, so that you’re not buried in a site you had no intention of landing on and the back button functions just as you expect it to.

This development is still in its formative stages, and we should be aware that these countermeasures aren’t even in the pre-release test versions of Chrome yet. However, industry insiders report that testing should begin within the next few weeks or so, and all signs point towards the new feature being part of the full release version of the web browser.

In addition, because this is a change to the Chromium engine, it may eventually benefit other browsers based on it. Most notable of these is Microsoft Edge, meaning the frustrations of a paralyzed back button will be a thing of the past for both popular web browsers. So far there’s no industry talk of Apple doing the same for Safari, but one can imagine they’ll be on top of this in much the same way.

Merry Christmas from 4GoodHosting

Given it’s the 24th of December here we of course would like to take this opportunity to wish a Merry Christmas to one and all. We hope you are enjoying the holidays with your family and this last week of 2018 is an especially good one. We can reflect on 2018, and look forward to an even more prosperous year in 2019.

Happy Holidays and best wishes, from all of us to all of you!

The Surprising Ways We Can Learn About Cybersecurity from Public Wi-Fi

Reading Time: 6 minutes

A discussion of cybersecurity isn’t exactly a popular topic of conversation for most people, but those same people would likely gush at length if asked about how fond of public wi-fi connections they are! That’s a reflection of our modern world it would seem; we’re all about digital connectivity, but the potential for that connectivity to go sour on us is less of a focus of our attention. That is until it actually does go sour on you, of course, at which point you’ll be wondering why more couldn’t have been done to keep your personal information secure.

Here at 4GoodHosting, cybersecurity is a big priority for us the same way it should be for any of the best Canadian web hosting providers. We wouldn’t have it any other way, and we work to keep abreast of all the developments in the world of cybersecurity, particularly these days as it pertains to cloud computing. We recently read a very interesting article about how our preferences for the ways we (meaning society as a collective whole) use public wi-fi can highlight some of the nature and needs of web security, and we thought it would be helpful to share it and expand on it for you with our blog this week.

Public Wi-Fi and Its Perils

Free, public Wi-Fi is a real blessing for us when mobile data is unavailable, or scarce, as is often the case! Few people can really articulate exactly what the risks of using public wi-fi are and how we can protect ourselves.

Let’s start with this: when you join a public hotspot without protection and begin to access the internet, the packets of data moving from your device to the router are public and thus open to interception by anyone. Yes, SSL/TLS technology exists, but all that’s required for a cybercriminal to snoop on your connection is some relatively simple Linux software that he or she can find online without much fuss.
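As a rough illustration (with a made-up hostname and credentials), consider what the bytes of an unencrypted login request actually look like on the wire:

```python
def build_http_post(host, path, form_data):
    """Assemble the raw bytes of a plain-HTTP (unencrypted) request."""
    body = "&".join(f"{k}={v}" for k, v in form_data.items())
    request = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        f"\r\n{body}"
    )
    return request.encode("ascii")  # exactly what crosses the air, readable

packet = build_http_post("login.example.com", "/signin",
                         {"user": "alice", "password": "hunter2"})

# Anyone capturing traffic on the hotspot can read the credentials:
print(b"password=hunter2" in packet)  # True
```

With HTTPS, those same bytes would be encrypted before leaving your device, which is exactly why TLS matters on shared networks.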

Let’s take a look at some of the attacks that you may be subjected to due to using a public wi-fi network on your mobile device:

Data monitoring

Wi-Fi adapters are usually set to ‘managed’ mode, where the adapter acts as a standalone client connecting to a single router for Internet access. In this mode the interface ignores all data packets except those explicitly addressed to it. However, some adapters can be configured into other modes. In ‘monitor’ mode, an adapter captures all wireless traffic on a given channel, no matter who the source or intended recipient is. In monitor mode the adapter is also able to capture data packets without being connected to a router; it can sniff and snoop on every piece of data it can get its hands on.
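The difference between the two modes can be sketched in a few lines of Python – a toy simulation of the filtering behaviour, not real driver code:

```python
def capture(frames, adapter_mac, mode):
    """Return the frames a wi-fi adapter would keep in each mode."""
    if mode == "managed":
        # Only frames explicitly addressed to this adapter survive.
        return [f for f in frames if f["dst"] == adapter_mac]
    if mode == "monitor":
        # Everything audible on the channel is captured, connected or not.
        return list(frames)
    raise ValueError(f"unknown mode: {mode}")

frames = [
    {"src": "aa:aa:aa:aa:aa:aa", "dst": "bb:bb:bb:bb:bb:bb", "payload": "GET /news"},
    {"src": "cc:cc:cc:cc:cc:cc", "dst": "dd:dd:dd:dd:dd:dd", "payload": "name=jane"},
]

print(len(capture(frames, "bb:bb:bb:bb:bb:bb", "managed")))  # 1
print(len(capture(frames, "bb:bb:bb:bb:bb:bb", "monitor")))  # 2
```

In managed mode the adapter keeps only its own traffic; in monitor mode it keeps everything, including other users' frames.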

It should be noted that not all commercial wi-fi adapters are capable of this, as it’s cheaper for manufacturers to produce models that handle ‘managed’ mode exclusively. Still, should someone get their hands on one and pair it with some simple Linux software, they’ll then be able to see which URLs you are loading plus the data you’re providing to any website not using HTTPS – names, addresses, financial accounts, etc. That’s obviously going to be a problem for you.

Fake Hotspots

Snaring unencrypted data packets out of the air is definitely a risk of public wi-fi, but it’s certainly not the only one. When connecting to an unprotected router, you are giving your trust to the supplier of that connection. Usually this trust is well placed; your local Tim Horton’s probably takes no interest in your private data. However, being careless when connecting to public routers means that cybercriminals can easily set up a fake network designed to lure you in.

Once this illegitimate hotspot has been created, all of the data flowing through it can then be captured, analysed, and manipulated. One of the most common choices here is to redirect your traffic to an imitation of a popular website. This clone site will serve one purpose; to capture your personal information and card details in the same way a phishing scam would.

ARP Spoofing

The reality unfortunately is that cybercriminals don’t even need a fake hotspot to mess with your traffic.
Every device on a Wi-Fi or Ethernet network has a unique MAC address, an identifying code used to ensure data packets make their way to the correct destination. Routers and all other devices discover this information using the Address Resolution Protocol (ARP).

Take this example: your smartphone sends out a request asking which device on the network is associated with a certain IP address. The requested device then provides its MAC address, ensuring the data packets are physically directed to what’s determined to be the correct location. The problem is that ARP replies can be impersonated, or ‘spoofed’. Your smartphone might send a request for the address of the public wi-fi router, and a different device will answer with a false address.

Provided the signal of the false device is stronger than the legitimate one’s, your smartphone will be fooled. Again, this can be done with simple Linux software.

Once the spoofing has taken place, all of your data will be sent to the false router, which can subsequently manipulate the traffic however it likes.
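A toy simulation of the poisoning step might look like this (all addresses are invented for the example):

```python
arp_table = {}  # ip -> mac, as the victim's device caches it

def handle_arp_reply(ip, mac):
    # Classic ARP caches accept unsolicited replies – the flaw being exploited.
    arp_table[ip] = mac

GATEWAY_IP = "192.168.1.1"
handle_arp_reply(GATEWAY_IP, "aa:aa:aa:aa:aa:aa")  # the real router answers
handle_arp_reply(GATEWAY_IP, "ee:ee:ee:ee:ee:ee")  # the attacker answers last

# The cache now points the gateway IP at the attacker's MAC:
print(arp_table[GATEWAY_IP])  # ee:ee:ee:ee:ee:ee
```

Because the cache simply keeps the most recent answer, every packet your device sends to the "gateway" is now physically delivered to the attacker's hardware first.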

MitM – ‘Man-in-the-Middle’ Attacks

A man-in-the-middle attack (MITM) is a reference to any malicious action where the attacker secretly relays communication between two parties, or alters it for whatever malevolent reason. On an unprotected connection, a cybercriminal can modify key parts of the network traffic, redirect this traffic elsewhere, or fill an existing packet with whatever content they wish.

Examples of this could be displaying a fake login form or website, changing links, text, pictures, or more. Unfortunately, this isn’t difficult to do; an attacker within reception range of an unencrypted wi-fi point is able to insert themselves all too easily much of the time.
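Conceptually, the attacker's relay sits between you and the site and rewrites content in flight – for instance, swapping a legitimate link for a phishing one (both domains invented here):

```python
def mitm_relay(response_html):
    """The man-in-the-middle rewrites the page before forwarding it on."""
    return response_html.replace(
        "https://bank.example.com/login",
        "https://bank-example.evil.test/login",
    )

original = '<a href="https://bank.example.com/login">Sign in</a>'
tampered = mitm_relay(original)
print(tampered)  # the link now points at the attacker's clone site
```

To the victim the page looks normal; only the destination of the link has quietly changed, which is why end-to-end encryption and certificate checks matter.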

Best Practices for Securing your Public Wi-Fi Connection

The ongoing frequency of these attacks definitely serves to highlight the importance of basic cybersecurity best practices. Following these will counteract most public wi-fi threats effectively.

  1. Have Firewalls in Place

An effective firewall will monitor and block any suspicious traffic flowing between your device and a router. Yes, you should always have a firewall in place and your virus definitions updated as a means of protecting your device from threats you have yet to come across.

While it’s true that properly configured firewalls can effectively block some attacks, they’re not a 100% reliable defender, and you’re definitely not exempt from danger just because of them. They primarily help protect against malicious traffic, not malicious programs, and one of the most frequent instances where they don’t protect you is when you are unknowingly running malware. Firewalls should always be paired with other protective measures, with antivirus software being the best of them.
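At its core, a firewall is an ordered rule list applied to each packet; here is a minimal sketch of that idea (the rules and addresses are purely illustrative):

```python
import ipaddress

# Ordered rule list: first match wins, unmatched traffic is dropped.
RULES = [
    {"port": 22, "src": "203.0.113.0/24", "action": "allow"},  # SSH, admin subnet
    {"port": 22, "src": None, "action": "drop"},               # SSH, anywhere else
    {"port": 443, "src": None, "action": "allow"},             # HTTPS for everyone
]

def decide(packet, rules, default="drop"):
    for rule in rules:
        if rule["port"] != packet["port"]:
            continue
        if rule["src"] is not None:
            if ipaddress.ip_address(packet["src"]) not in ipaddress.ip_network(rule["src"]):
                continue
        return rule["action"]
    return default  # anything unmatched is treated as suspicious

print(decide({"port": 22, "src": "203.0.113.5"}, RULES))    # allow
print(decide({"port": 22, "src": "198.51.100.7"}, RULES))   # drop
print(decide({"port": 8080, "src": "198.51.100.7"}, RULES)) # drop
```

Real firewalls track connection state and much more, but the default-deny, first-match-wins structure above is the essential shape of the protection.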

  2. Software Updates

Software and system updates are also biggies, and should be installed as soon as you can do so. Staying up to date with the latest security patches is a very proven way to have yourself defended against existing and easily-exploited system vulnerabilities.

  3. Use a VPN

Whether you’re a regular user of public Wi-Fi or not, a VPN is an essential security tool that you can put to work for you. VPNs serve you here by generating an encrypted tunnel that all of your traffic travels through, ensuring your data is secure regardless of the nature of the network you’re on. If you have reason to be concerned about your security online, a VPN is arguably the best safeguard against the risks posed by open networks.

That said, free VPNs are not recommended, because many of them have been known to monitor and sell users’ data to third parties. You should choose a service provider with a strong reputation and a strict no-logging policy.

  4. Use Common Sense

You shouldn’t fret too much over hopping onto public Wi-Fi without a VPN, as the majority of attacks can be avoided by adhering to a few tried-and-true safe computing practices. First, avoid making purchases or visiting sensitive websites like your online banking portal. In addition, it’s best to stay away from any website that doesn’t use HTTPS. The popular browser extension HTTPS Everywhere can help you here. Make use of it!
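The HTTPS rule is simple enough to express in code – a tiny guard that refuses to submit sensitive data to a plain-HTTP URL:

```python
from urllib.parse import urlparse

def safe_to_submit(url):
    """Only allow sensitive form submissions over HTTPS."""
    return urlparse(url).scheme == "https"

print(safe_to_submit("https://shop.example.com/checkout"))  # True
print(safe_to_submit("http://shop.example.com/checkout"))   # False
```

Browsers apply essentially this check when they flag a login form on an insecure page.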

The majority of modern browsers also now have in-built security features that are able to identify threats and notify you if they encounter a malicious website. Heed these warnings.

Go ahead and make good use of public Wi-Fi and all the email checking, web browsing, and social media socializing goodness it offers, but just be sure that you’re not putting yourself at risk while doing so.

Linux or Windows

Reading Time: 4 minutes

The vast majority of websites are hosted on either Linux or Windows OS servers, and the market share is now shifting towards Linux according to a recent report from W3tech. Consumer surveys indicated that Unix servers make up some 66% of all web servers while Windows accounts for just over 33%. For most this isn’t going to be something they’ll give any consideration to, and it’s true that websites with standard HTML pages will be served equally well with either OS.

These days, greater numbers of websites have been ‘revamped’ since their inception and now feature dynamic design elements that enhance the UX for viewers. If you are planning to design or redesign your website to be much more engaging – working with forms and executing web applications – both systems will serve your needs.

Linux and Windows are pretty much neck and neck when it comes to functionality. Each works with a number of frameworks and front-end programming languages, and each has impressive features when it comes to hosting. Linux and Windows handle data in the same way too, and both sport easy, convenient and fast FTP tools to serve a wide range of file management functions.

Nine times out of 10 you’ll be at your best with either, and at 4GoodHosting our Linux and Windows web hosting specs make us one of the best Canadian web hosting providers with data centers in both Eastern and Western Canada.

Our standard web hosting is via ultra-fast, dual-parallel processing Hexa Core Xeon Linux-based web servers with the latest server software installations, and our Windows hosting includes full support for the entire spectrum of frameworks and languages: ASP.NET, Frontpage, Choice of MySQL, MSSQL 2000 or 2005 DB, ATLAS, Silverlight, ASP 3.0, PHP4 & PHP5, and Plesk.

Let’s have a look at the differences between them.

Price

The most significant difference between Linux and Windows web hosting is the core operating system on which the server(s) and user interface run. Linux uses some form of the Linux kernel, and these are usually free. There are some paid distributions, Red Hat being a good one, which come with a number of special features aimed at better server performance. With Windows you’ll have a licensing fee, because Microsoft develops and owns its OS, and hardware upgrades can be a necessity too. We like Linux because over its lifespan, a Linux server generally costs significantly less than a similar Windows-based one.

Software Support

Before choosing an OS, you’ll also have to consider the script languages and database applications that are required to host the website on it. If your website needs Windows-based scripts or database applications to display correctly, then a Windows web hosting platform is probably best for you. Sites developed with Microsoft ASP.NET, ASP Classic, MSSQL, MS Access, or SharePoint technologies will also head over to the Windows side.

Conversely, if your website requires Linux-based script or database software, then a Linux-based web hosting platform is going to be your best choice. Plus, anyone planning to use Apache modules, NGINX or development tools like Perl, PHP, or Python with a MySQL database will enjoy the large support structure for these formats found with Linux.

Control Panel And Dev Tools

Another consideration with these two web hosting options is that Linux offers control panels like cPanel or WHM, while Windows uses Plesk. There are fundamental differences between them. cPanel has a simple user-friendly interface, and users can download applications such as WordPress, phpBB, Drupal, Joomla, and more with super simple one-click installs. Creating and managing MySQL databases and configuring PHP is easy, and cPanel automatically updates software packages too. Plus, we like our Linux-hosted websites for people new to the web. cPanel makes it easy for even people with no coding knowledge to create websites, blogs, and wiki pages. You can get tasks done faster without having to learn the details of every package installed.

Plesk is very versatile in that it can help you run the Windows version of the Linux, Apache, MySQL, and PHP stack. Plesk also supports Docker, Git, and other advanced security extensions. Windows servers have many unique and practical tools available as well, such as the Microsoft Web Platform Installer (Web PI) for speedier installation of the IIS (Internet Information System web server), MSSQL, and ASP.NET stack.

Because it’s been in the field longer, there are loads of open-source Linux applications available online. Windows hosting has fewer apps to choose from, but you have the security of knowing they all come from vetted, licensed providers, which can speed up database deployment.

Performance And Security

A reputable Canadian web host can be expected to secure your website within its data centres, but online attacks on Windows servers over the last few years suggest they may be more of a red flag here than Linux servers. That’s not to say that Linux – or any OS that has been or ever will be developed – won’t have any security issues. Solid security is a product of good passwords, applying necessary patches promptly, and sound server administration.

Further, Linux servers are pretty much universally considered superior to Windows for stability and reliability. They rarely need to be rebooted, and configuration changes rarely require a restart. Running multiple database and file servers on Windows can make it unstable. Another small difference is that Linux file names are case-sensitive and Windows file names are not.

Penguin for the Win

Your choice of server should be dictated by the features & database application needed for the proper functioning of your hosting or website development project. Those of you working on your own external-facing site and looking for a combination of flexibility and stability will be set up perfectly with Linux and cPanel. Those working in a complex IT environment with existing databases and legacy applications running on Windows servers will be best served being hosted on a Windows OS server.

Site Isolation from Google Promises to Repel More Malware Attacks

Reading Time: 2 minutes


Security in the digital business world is a real challenge these days, and the world wide web is becoming as full of nefarious characters as the town of Machine – the ‘End of the Line’, as it were – in Dead Man, the cool monochrome Western from the ‘90s with Johnny Depp. A few months back we detailed the big bad Spectre vulnerability that had come onto the scene and posed major threats regarding the insecurity of data for any type of website handling sensitive personal information.

It continues to be a ‘thing’, and in response Google recently enabled a new security feature in Chrome that protects users from malicious attacks like Spectre. It’s called Site Isolation, and it’s available with Chrome 67 on Windows, Mac, Linux, and Chrome OS. Here at 4GoodHosting, we’re a Canadian web hosting provider that puts an emphasis on this for obvious reasons, always seeking to stay on top of our clients’ web hosting needs as effectively as possible.

Google’s experimentation with Site Isolation has been going on since Chrome 63, and they’ve patched a lot of issues before enabling it by default for all Chrome users on desktop.

Chrome’s multi-process architecture allows different tabs to employ different renderer processes. Site Isolation functions by limiting each renderer process to documents from a single site. Chrome then relies on the operating system’s process boundaries to mitigate attacks between processes, and thus between sites.
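Here is a rough sketch of the idea (simplified to group by hostname, which is not exactly how Chromium defines a ‘site’):

```python
from urllib.parse import urlparse

class Browser:
    """Each 'site' gets its own renderer process; documents never mix."""
    def __init__(self):
        self.processes = {}  # site -> documents hosted by that process

    def open_document(self, url):
        site = urlparse(url).hostname  # simplification of Chromium's site rules
        self.processes.setdefault(site, []).append(url)

browser = Browser()
browser.open_document("https://mail.example.com/inbox")
browser.open_document("https://evil.example.net/attack")
browser.open_document("https://mail.example.com/contacts")

# Two sites, two processes; the attacker's process holds none of the
# mail site's documents, so a Spectre-style read finds little of value.
print(len(browser.processes))  # 2
print(browser.processes["evil.example.net"])  # ['https://evil.example.net/attack']
```

The security win comes from the operating system's process boundary: memory in one renderer process simply isn't addressable from another.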

Google has stated that in Chrome 67, Site Isolation has been enabled for 99% of users on Windows, Mac, Linux, and Chrome OS, according to a recent post on their company blog, stating further that ‘even if a Spectre attack were to occur in a malicious web page, data from other websites would generally not be loaded into the same process, and so there would be much less data available to the attacker. This significantly reduces the threat posed by Spectre.’

Additional known issues in Chrome for Android have been identified and are being worked on. Site Isolation for Chrome for Android should be ready with Chrome 68.

Need for Speed

Quick mention as well to Speed Update for Google Search on mobile. With this new feature the speed of pages will be a ranking factor for mobile searches. Of course, page speed has already been factoring into search engine rankings for some time now, but it was primarily based on desktop searches.

All of this is based on the unsurprising finding that people want answers to their searches as fast as possible, and page loading speed is an issue. Keeping that in mind, Google’s new feature for mobile users will only affect pages that are painfully slow, and that has to be considered a good thing. Average pages should remain unaffected by and large.

We’re always happy to discuss in more detail how our web hosting service comes with the best in security and protective measures for your website when it’s hosted with us, and we also offer very competitively priced SSL certificates for Canadian websites that go a long way in securing your site reliably. Talk to us on the phone or email our support team.

Seven Steps to a Reliably Secure Server

Reading Time: 5 minutes

In a follow-up to last week’s blog post, where we talked about how experts expect an increase in DDoS attacks this year, it makes sense for us this week to provide some tips on the best way to secure a server. Here at 4GoodHosting, in addition to being a good Canadian web hosting provider, we also take an interest in the well-being of those clients of ours who are in business online. Obviously, the premise of any external threat taking them offline for an extended period of time will endanger the livelihood of their business, and as such we hope these discussions will prove valuable.

Every day we’re presented with new reports of hacks and data breaches causing very unwelcome disruptions for businesses and users alike. Web servers tend to be vulnerable to security threats and need to be protected from intrusions, hacking attempts, viruses and other malicious attacks, but there’s no replacing a secure server with its role for a business that operates online and engages in network transactions.

They tend to be the target because they are all too often penetrable for hackers, and they’re known to contain valuable information. As a result, taking proper measures to ensure you have a secure server is as vital as securing the website, web application, and of course the network around it.

Your first decisions to evaluate are the server, OS, and web server software you’ll choose to collectively function as the server you hope will be secure, and then the kind of services that run on it. No matter which particular web server software and operating system you choose to run, you must take certain measures to increase your server security. For starters, you’ll need to review and configure every aspect of your server in order to secure it.

It’s best to maintain a multi-faceted approach that offers in-depth security, because each security measure implemented stacks an additional layer of defence. The following is a list we’ve assembled from many different discussions with web development and security experts that, individually and collectively, will help strengthen your web server security and guard against cyberattacks, stopping them essentially before they even have the chance to get ‘inside’ and wreak havoc.

Let’s begin:

  1. Automated Security Updates

Unfortunately, most vulnerabilities come with a zero-day status. Before you know it, a public vulnerability can be utilized to create a malicious automated exploit. Your best defence is to always keep your eye on the ball when it comes to receiving security updates and putting them into place. Of course, your eye isn’t available 24/7, so you can and should be applying automatic security updates and security patches as soon as they are available through the system’s package manager. If automated updates aren’t available, you need to find a better system – pronto.

  2. Review Server Status and Server Security

Being able to quickly review the status of your server and check whether there are any problems originating from CPU, RAM, disk usage, running processes, and other metrics will often help pinpoint server security issues much faster. Ubiquitous command line tools can also review the server status. Your network service logs, database logs (Microsoft SQL Server, MySQL, Oracle), and site access logs are best stored in a segregated area and checked with regularity. Be on the lookout for strange log entries. Should your server be compromised, having a reliable alerting and server monitoring system standing guard will prevent the problem from snowballing and allow you to take strategic reactive measures.
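As a small example of the kind of automated log review this implies, a script can flag addresses with repeated failed logins (the log lines and threshold here are invented for illustration):

```python
import re
from collections import Counter

def failed_login_sources(log_lines, threshold=3):
    """Return addresses with at least `threshold` failed login attempts."""
    counts = Counter()
    for line in log_lines:
        m = re.search(r"Failed password .* from (\S+)", line)
        if m:
            counts[m.group(1)] += 1
    return {ip for ip, n in counts.items() if n >= threshold}

log = [
    "sshd: Failed password for root from 198.51.100.7 port 52211",
    "sshd: Failed password for admin from 198.51.100.7 port 52212",
    "sshd: Failed password for root from 198.51.100.7 port 52213",
    "sshd: Accepted password for deploy from 203.0.113.5 port 40022",
]

print(failed_login_sources(log))  # {'198.51.100.7'}
```

Tools like fail2ban do essentially this, then feed the flagged addresses to the firewall automatically.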

  3. Perimeter Security With Firewalls

Seeing to it that you have a secure server involves the installation of security applications like border routers and firewalls ready and proven effective for filtering known threats, automated attacks, malicious traffic, DDoS floods, and bogon IPs, plus any untrusted networks. A local firewall will be able to actively monitor for attacks like port scans and SSH password guessing and effectively neutralize their threat. Further, a web application firewall helps to filter incoming web page requests made for the explicit purpose of breaking or compromising a website.

  4. Use Scanners and Security Tools

Fortunately, we’ve got many security tools (URL scan, mod security) typically provided with web server software to aid administrators in securing their web server installations. Yes, configuring these tools can be a laborious process and time consuming as well – particularly with custom web applications – but the benefit is that they add an extra layer of security and give you serious reassurances.

Scanners can help automate the process of running advanced security checks against the open ports and network services to ensure your server and web applications are secure. It most commonly will check for SQL injection, web server configuration problems, cross site scripting, and other security vulnerabilities. You can even get scanners that can automatically audit shopping carts, forms, dynamic web content and other web applications and then provide detailed reports regarding their detection of existing vulnerabilities. These are highly recommended.
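Here is a deliberately naive sketch of one check such scanners perform – probing whether user input reaches a SQL string unescaped. Real scanners are far more thorough; the payloads and the vulnerable function are illustrative only.

```python
# Classic probe payloads a scanner might try against an input handler.
PAYLOADS = ["' OR '1'='1", "'; DROP TABLE users; --"]

def vulnerable_query_builder(user_input):
    # The unsafe pattern being tested for: string-concatenated SQL.
    return "SELECT * FROM users WHERE name = '" + user_input + "'"

def looks_injectable(build_query):
    for payload in PAYLOADS:
        if payload in build_query(payload):
            # Input reached the SQL text unescaped: flag it.
            return True
    return False

print(looks_injectable(vulnerable_query_builder))  # True
```

The fix the scanner is nudging you toward is parameterized queries, where user input can never be interpreted as SQL syntax.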

  5. Remove Unnecessary Services

Typical default operating system installations and network configurations (Remote Registry Services, Print Server Service, RAS) will not be secure. The more services running on an operating system, the more ports left vulnerable to abuse. It’s therefore advisable to switch off and disable all unnecessary services. As an added bonus, you’ll be boosting your server performance by freeing up hardware resources.

  6. Manage Web Application Content

The entirety of your web application or website files and scripts should be stored on a separate drive, away from the operating system, logs and any other system files. By doing so it creates a situation where even if hackers gain access to the web root directory, they’ll have absolutely zero success using any operating system command to take control of your web server.

  7. Permissions and Privileges

File and network service permissions are imperative for having a secure server, as they help limit any potential damage from a compromised account. Malicious users can compromise the web server engine and use the account to carry out malevolent tasks, most often executing specific files that corrupt your data or encrypt it to their specifications. Ideally, file system permissions should be granular. Review your file system permissions on a VERY regular basis to prevent users and services from engaging in unintended actions. In addition, consider disabling the “root” account’s ability to log in over SSH, and disable any default account shells that you do not normally access. Make sure to use the least-privilege principle when running each specific network service, and also be sure to restrict what each user or service can do.
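Granular permissions are easy to verify programmatically; this sketch locks a file down to its owner and confirms no group/other bits remain (POSIX permission semantics assumed):

```python
import os
import stat
import tempfile

# Create a stand-in for a sensitive config file.
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o600)  # owner read/write; nothing for group or other

mode = stat.S_IMODE(os.stat(path).st_mode)
group_other = mode & (stat.S_IRWXG | stat.S_IRWXO)
print(oct(mode))         # 0o600
print(group_other == 0)  # True: no group/other access remains

os.remove(path)
```

A periodic job running a check like this across credential and config files is a cheap way to catch permission drift before an attacker does.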

Securing web servers can make it so that corporate data and resources are safe from intrusion or misuse. We’ve clearly established here that it is about people and processes as much as it is about any one security ‘product.’ By incorporating the majority (or ideally all) measures mentioned in this post, you can begin to create a secure server infrastructure that’s supremely effective in supporting web applications and other web services.

IT Security Insiders: Expect an Escalation in DDoS Attacks for Duration of 2017

Reading Time: 4 minutes

The long and short of it is that Internet security will always be a forefront topic in this industry. That’s a reflection of both the never-ending importance of keeping data secure given the predominance of e-commerce in the world today and the fact that cyber hackers will never slow in their efforts to get ‘in’ and do harm in the interest of making ill-gotten financial gains for themselves.

So with the understanding that the issue of security / attacks / preventative measures is never going to be moving to the back burner, let’s move forward to discuss what the consensus among web security experts is – namely, that DDoS Attacks are likely to occur at an even higher rate than previously for the remainder of 2017.

Here at 4GoodHosting, in addition to being one of the best web hosting providers in Canada, we’re very active in keeping on top of trends in the web-based business and design worlds, as they tend to have great relevance to our customers. As such, we think this particular piece of news is worthy of some discussion.

Let’s have at it – why can we expect to see more DDoS attacks this year?

Data ‘Nappers and Ransom Demands

As stated, IT security professionals predict that DDoS attacks will be more numerous and more pronounced in the year ahead, and many have started preparing for attacks that could cause outages worldwide in worst-case scenarios.

One such scenario could be – brace yourselves – a worldwide Internet outage. Before you become overly concerned, however, it would seem that the vast majority of security teams are already taking steps to stay ahead of these threats, with ‘business continuity’ measures increasingly in place to allow continued operation should any worst-case scenario come to fruition.

Further, these same insiders say that the next DDoS attack will be financially motivated. While there are continued discussions about attackers taking aim at nation states, security professionals conversely believe that criminal extortionists are the most likely group to successfully undertake a large-scale DDoS attack against one or more specific organizations.

As an example of this, look no further than the recent developments regarding Apple and their being threatened with widespread wiping of devices by an organization calling itself the ‘Turkish Crime Family’ if the computing mega-company doesn’t cough up $75,000 in cryptocurrency or $100,000 worth of iTunes gift cards.

A recent survey of select e-commerce businesses found that 46% of them expect to be targeted by a DDoS attack over the next 12 months. Should that attack come with a ransom demand like the one above, it may be particularly troublesome for any management group, given that nearly ALL of them will not have the deep pockets that Apple has.

Further, the same study found that a concerning number of security professionals believe their leadership teams would struggle to come up with any other solution than to give in to any ransom demands. As such, having effective protection against ransomware and other dark software threats is as important as it’s ever been.

Undercover Attacks

We need to mention as well that these same security professionals are also worried about the smaller, low-volume DDoS attacks that last 30 minutes or less. These have come to be classified as ‘Trojan Horse’ DDoS attacks, and the problem is that they typically will not be mitigated by most legacy DDoS mitigation solutions. One common ploy used by hackers is to employ such an attack as a distraction mechanism, diverting the guards’ attention and opening up the gates for a separate, larger DDoS attack.

Citing the same survey yet again, fewer than 30% of IT security teams have enough visibility worked into their networks to mitigate attacks that do not exceed 30 minutes in length. Further, there is the possibility of hidden effects of these attacks on their networks, like undetected data theft.

Undetected data theft is almost certainly more of a problem than many are aware – and particularly with the fast-approaching GDPR deadline which will make it so that organizations could be fined up to 4% of global turnover in the event of a major data breach deemed to be ‘sensitive’ by any number of set criteria.

Turning Tide against ISPs

Many expect regulatory pressure to be applied against ISPs that are perceived to be insufficient in protecting their customers against DDoS threats. Of course, there is the question as to whether an ISP is to blame for not mitigating a DDoS attack when it occurs, but again the consensus seems to be that it is, more often than not. This seems to suggest that the majority would not hold their own security teams responsible.

The trend seems to be to blame upstream providers for not being more proactive when it comes to DDoS defense. Many believe the best approach to countering these increasing attacks is to have ISPs that are equipped to defend against DDoS attacks, by both protecting their own networks and offering more comprehensive solutions to their customers via paid-for, managed services that are proven to be effective.

We are definitely sympathetic to anyone who has concerns regarding the possibility of these attacks and how they could lead to serious losses, wreaking havoc and essentially removing a site from the web for extended periods of time. Given the news alluded to earlier that the new depth and complexity of DDoS attacks could even produce a worldwide Internet outage before long, however, it would seem that anyone with an interest in being online, for whatever purpose, should be concerned as well.

Understanding the New ‘Perimeter’ Against Cyber Attacks

Reading Time: 4 minutes


If you yourself haven’t been the victim of a cyber attack, you very likely know someone else who has; in fact, the numbers suggest that upwards of 90% of organizations experienced at least SOME level of an IT security breach in the past year. Further, it’s believed that one in six organizations have had significant security breaches during the same period.

Here at 4GoodHosting, we’ve established ourselves as a top Canadian web hosting provider but we’re always keen to explore industry trends – positive and negative – that impact what matters to our customers. And our array of customers covers pretty much any type of interest one could have in operating on the World Wide Web.

Cyberattacks have pretty much become a part of everyday life. Without suggesting that these types of incidents are ‘inevitable’, there is only so much any one individual or IT team can do to guard against them. Yes, there are standard PROACTIVE web security protocols to follow, but we won’t look at those here, given that they are quite commonly understood by those of you who have them as part of your job responsibilities within the organization.

Rather, let’s take a look at being REACTIVE in response to a cyber attack here, and in particular with tips on how to disinfect a data centre and beef it up against further transgressions.

Anti-Virus and Firewalls – Insufficient

It would seem that the overwhelming trend with cloud data security revolves around the utilization of firewalls, believing them to be a sufficiently effective perimeter. Oftentimes, however, exceptions are made to allow cloud applications to run, and in doing so the door is opened for intrusions to occur.

So much for firewalls securing the enterprise.

Similarly, anti-virus software can no longer keep pace with the immense volume of viruses and their variants being created in cyberspace nearly every day. A reputable cybersecurity firm recently announced the discovery of a new Permanent Denial-of-Service (PDoS) botnet named BrickerBot, which serves to render the victim’s hardware entirely useless.

A PDoS attack – or ‘phlashing’ as it’s also referred to – can damage a system so extensively that full replacement or reinstallation of hardware is required, and unfortunately these attacks are becoming more prevalent. That said, there are plenty of useful tools out there, such as Malwarebytes, that should be used to detect and cleanse the data centre of any detected or suspected infections.

Making Use of Whitelisting And Intrusion Detection

Whitelisting is a good way to strengthen your defensive lines and isolate rogue programs that have successfully infiltrated your data center. Also known as application control, whitelisting involves a short list of the applications and processes that have been authorized to run. This strategy limits use by means of a “deny-by-default” approach so that only approved files or applications are able to be installed. Dynamic application whitelisting strengthens security defenses and helps with preventing malicious software and other unapproved programs from running.
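The deny-by-default idea described above can be sketched in a few lines of Python. This is a minimal illustration rather than a real application-control product; the application names and binary contents below are hypothetical.

```python
import hashlib

# Hypothetical whitelist: approved executable name -> expected SHA-256 digest.
APPROVED = {}

def approve(name, content):
    """Record an application's hash as authorized to run."""
    APPROVED[name] = hashlib.sha256(content).hexdigest()

def may_run(name, content):
    """Deny by default: run only if the name is known AND the
    binary's hash still matches what was approved."""
    expected = APPROVED.get(name)
    return expected is not None and hashlib.sha256(content).hexdigest() == expected
```

Note that hashing the binary, rather than trusting its name alone, is what catches a rogue program that has overwritten an approved executable.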

Modern networking tools should also be integrated as part of your security arsenal, and if they are configured correctly they can highlight abnormal patterns that may be a cause for concern. As an example, intrusion detection can be set up to be triggered when any host uploads a significant load of data several times over the course of a day. The idea is to eliminate abnormal user behaviour and help with containing existing threats.
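A trigger of the kind described – flagging any host that uploads a large volume of data several times in one day – might be sketched like this; the thresholds are hypothetical and would need tuning against your own traffic baseline.

```python
from collections import defaultdict

# Hypothetical thresholds -- tune against your own baseline traffic.
UPLOAD_LIMIT_BYTES = 500 * 1024 * 1024  # flag uploads larger than 500 MB...
DAILY_EVENT_LIMIT = 3                   # ...occurring more than 3 times a day

def flag_suspicious_hosts(daily_events):
    """daily_events: iterable of (host, bytes_uploaded) pairs for one day.
    Returns the hosts that triggered the large-upload rule too often."""
    big_uploads = defaultdict(int)
    for host, size in daily_events:
        if size > UPLOAD_LIMIT_BYTES:
            big_uploads[host] += 1
    return {host for host, count in big_uploads.items() if count > DAILY_EVENT_LIMIT}
```

A real intrusion-detection system would of course correlate many more signals, but the principle is the same: define normal, then alert on deviations from it.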

Security Analytics

What’s the best way to augment current security practices? Experts in this are increasingly advocating real-time analytics used in tandem with specific methodologies that focus on likely attack vectors. This approach revolves around seeing the web as a hostile environment filled with predators. In the same way behavioural analytics are used in protecting against cyber terrorists, we need to take an in-depth look at patterns to better detect internal security threats.

However, perhaps the most important thing to realize is that technology alone will never solve the problem. Perfect your email filters and the transgressors will move to mobile networks. Shore those up and they’ll jump to social media accounts. The solution must address the source and initial points of entry, with training and education implemented so that the people in a position to respond and ‘nip it in the bud’ are explicitly aware of these attacks just as they first begin.

End-user Internet security awareness training is the answer, but we are only in the formative stages of making it accessible to users of all types. Much of it comes down to teaching users not to do inadvisable things like clicking on suspect URLs in emails, or opening attachments that let in the bad actors.

Putting all staff through requisite training may be expensive, time consuming, and productivity draining, but we may soon be at the point where it’s no longer an option to NOT have these types of educational programs. The new reality is that what we previously referred to as ‘the perimeter’ no longer really exists, or if it does, it’s by and large ineffective in preventing the entirety of cyber attacks. The ‘perimeter’ is now every single individual on their own, and accordingly the risks are even greater, with the weakest link in the chain essentially determining the strength of your entire system defences.

Defining DNS…. And What’s Exactly In It For Hackers?

Reading Time: 3 minutes

DNS isn’t exactly a buzzword in discussions among web hosting providers or those in the web hosting industry, but it’s darn close to it. DNS is an acronym for Domain Name System, and what DNS does is see to it that after entering a website URL into your browser you end up in the right spot – among the millions upon millions of them – on the World Wide Web.


When you enter this URL, your browser starts trying to figure out where that website is by pinging a series of servers. These could be resolving name servers, authoritative name servers, or domain registrars, among others. But those servers themselves – often located all around the world – are only fulfilling an individual part in the overall process.

The process itself is a verification of identities by means of converting URLs into identifiable IP addresses, by which the networks communicate with each other and your browser confirms that it’s taking you down the right path. In a world with literally billions of paths, that’s a more impressive feat than you might think, especially when you consider it’s done in mere seconds and with impressive consistency.
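You can watch this URL-to-IP conversion happen directly with Python’s standard socket module, which hands the lookup to the operating system’s resolver – the same chain of servers described above:

```python
import socket

def resolve(hostname):
    """Ask the OS resolver -- which performs the DNS lookup chain
    described above -- for the host's IP addresses."""
    infos = socket.getaddrinfo(hostname, None)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # the address itself is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

# Example (actual addresses vary by network and over time):
# resolve("example.com")
```

The handful of milliseconds this call takes is the entire verification process the paragraph above describes, caching included.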

It’s quite common to hear of DNS in conjunction with DDoS, which is another strange acronym, and one that’s nearly always paired with the term ‘attack’. What DDoS is, and how it’s related so explicitly to DNS much of the time, is as follows:

A DDoS attack is a common hack in which multiple compromised computers are used to attack a single system by overloading it with server requests. In a DDoS attack, hackers will often use infected computers to create a flood of traffic originating from many different sources – potentially thousands or even hundreds of thousands. By using all of the infected computers, a hacker can effectively circumvent any blocks that might be put on a single IP address. It also makes it harder to distinguish a legitimate request from one coming from an attacker.
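To see why distributing the flood defeats per-IP blocking, consider a naive defence that blocks any single address exceeding a request threshold; the threshold here is a hypothetical illustration.

```python
from collections import Counter

REQUESTS_PER_IP_LIMIT = 100  # hypothetical per-source threshold

def blocked_ips(request_log):
    """Naive defence: block any single IP that exceeds the limit.
    request_log is a list of source-IP strings, one per request."""
    counts = Counter(request_log)
    return {ip for ip, n in counts.items() if n > REQUESTS_PER_IP_LIMIT}
```

A single noisy source is caught immediately, but a botnet spreading the same flood across thousands of addresses keeps every individual source under the threshold, so the filter blocks nothing – which is exactly the circumvention described above.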

The DNS is compromised in the sense that browsers essentially can’t figure out where to go to find the information to load on the screen. This type of attack typically involves hackers creating a little army of private computers infected with malicious software, known as a botnet. The people whose computers are participating in the attack often don’t realize they’ve been compromised and are now part of the growing problem.

Why Go To So Much Trouble?

With all of this now understood, it begs the question – What’s in it for hackers to do this?


It’s believed that the initial appeal of hacking is in proving that you can penetrate something / somewhere that’s purported to be impenetrable, and where someone with a skill set similar to yours has gone to significant effort to make it that way. It’s very much a geeks’ chest thumping competition – my virtual handiwork is better than yours!

As hackers become established and the ‘novelty’ of hacking wears off however, these individuals often find new inspiration for their malicious craft. The more time they spend doing it, the sooner they realize that a certain level of skills can introduce them to opportunities for making money with hacking. Among other scenarios, this can be either by stealing credit card details and using them to buy virtual goods, or by getting paid to create malware that others will pay for. And that happens much more often than you might think.

Their creations may silently take over a computer, or subvert a web browser so it goes to a particular site for which they get paid, or lace a website with commercial spam. As the opportunities in the digital world increase, hacking opportunities increase right along with them, and that’s the way it will continue to be.

Here at 4GoodHosting, we are constantly reevaluating the security measures we have in place to defend our clients’ websites from DDoS attacks, as well as keeping on top of industry trends and products that help us keep hackers and their nefarious handiwork away from you and your website. It’s a priority for sure.

DNS – “What Is It and How Does It Work?”

Reading Time: 3 minutes


DNS is an acronym that stands for “domain name system”. Without domain names, we wouldn’t have a practical way of getting to a particular website; we’d be left typing numerical addresses like 192.0.2.51 instead, or worse, the new IPv6 addresses, which are much longer – something like 2001:0db8:85a3::8a2e:0370:7334.

In this article we will give an overview of how DNS functions. The most important thing our web hosting customers can learn from this is the process of how DNS records are changed, or updated, to point at your web host’s “name servers”. These name servers are themselves identified by domain names, which also rely on the DNS system to be converted into your webserver’s numerical IP address.

“Name Servers”

Name servers enable people to use your domain name to reach your webserver, which in turn directs your site visitors to your specific web hosting account and files, rather than requiring a complex IP address. DNS also makes economical “shared hosting” possible, since a server’s IP address can be reused for dozens of different websites.

Your name servers are the most important detail of your domain record; their purpose is to direct a visitor’s web browser to the place – that is, the web server on a rack in a data room someplace – where your site is being hosted.

Modifying your domain name server(s) enables you to change your web host without having to transfer your domain to another registrar.

Name servers can also be referred to as DNS servers, and the two synonymous terms can create some confusion.

DNS Records

DNS refers to the layer of the internet stack – similar in some ways to a database application – that contains the domain names, name servers, IP addresses, and personal or company registration information for every public site on the Internet.
DNS records contain various types of data, syntax, and commands for how a webserver should respond to lookup requests.

Some of the most common syntax items defined:

· “A”-record. The actual webserver IP address that is associated with the domain.

· “CNAME”-record. An alias record, most often used to point sub-domains (such as www) at your main domain.

· “MX”-record. This refers to the specific mail servers that handle email for your domain, such as when using Gmail with your domain for email.

· “NS”-record. The nameservers that are currently set for your domain.

· “SOA”-record. Information about your domain, like when your domain was last updated and relevant contact information.

· “TXT”-record. Text records holding any additional information about the domain.

As you can see, there are numerous components to your DNS records, but most of this information can’t and shouldn’t be altered. The main component of your DNS records that will be of concern to you, if you ever have to make a change, is your name servers.
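To make the record types above concrete, here is a simplified in-memory model of a zone’s records. The values are hypothetical placeholders (using the IP range reserved for documentation), not a real zone-file format:

```python
# Hypothetical zone data illustrating the record types described above.
ZONE = {
    "example.com": {
        "A":     ["192.0.2.10"],                       # webserver IP address
        "CNAME": [("www", "example.com")],             # sub-domain alias
        "MX":    [(10, "mail.example.com")],           # (priority, mail server)
        "NS":    ["ns1.4GoodHosting.com", "ns2.4GoodHosting.com"],
        "SOA":   ["admin@example.com 2017040101"],     # contact + update serial
        "TXT":   ["v=spf1 mx -all"],                   # extra text data
    }
}

def lookup(domain, record_type):
    """Return the records of one type for a domain, or an empty list."""
    return ZONE.get(domain, {}).get(record_type, [])
```

Changing your web host amounts to editing only the “NS” entries; the rest is typically managed for you.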

Changing Name Servers



Registrars are responsible for allowing you to edit your DNS record name servers. By default, a newly registered domain is usually assigned the name servers of the same host or registrar. Domain transfers usually carry over the same name server information from the previous registrar.

However, if your domain is not registered and also hosted in the same place, then you’ll follow the general steps below to update the name servers.

4GoodHosting customers can follow these instructions to change their name servers:

1. Locate Domain Management

Every registrar has domain management tools which allow you to edit your name servers. This ability will usually be found in the domain management area of your client account/portal.

2. Find Your Name Servers

Under each of your individual domain(s) you’ll be able to change your name servers. Your name servers will look something like this:
ns1.4GoodHosting.com
ns2.4GoodHosting.com
You will need to change both the “primary” and “secondary” name servers. The second server exists for the rare event that the first one crashes, or some other condition prevents it from responding.

3. Setting New Name Servers

Simply change your existing name servers to the new name servers and click “Update” or “Save”. These changes don’t take place immediately across the entire internet. Domain name server updates usually take anywhere from 4-24 hours to ‘propagate’ throughout the global DNS internet system.
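One way to check whether the change has reached your own resolver is a quick lookup comparing the domain’s resolved address against your new host’s IP; the domain and address below are hypothetical.

```python
import socket

def has_propagated(domain, expected_ip):
    """Return True once this machine's resolver reports the new
    host's address for the domain."""
    try:
        return socket.gethostbyname(domain) == expected_ip
    except socket.gaierror:
        return False  # domain not resolvable from here yet

# Example: has_propagated("yourdomain.com", "192.0.2.10")
```

Because propagation is gradual, a False result during the 4-24 hour window simply means your local resolver hasn’t picked up the update yet.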

With this information, you are able to understand some key functionality of how the internet logically works. If you have any remaining questions concerning your domains or domain records, please contact us at support @ 4goodhosting.com for a rapid response to your inquiries.

Please contact us at one of the very best Canadian Web Hosting Companies, 4GoodHosting.

Amnesty International Report on Instant Messaging Services and Privacy

Reading Time: 4 minutes

Skype and Snapchat, among other companies, have failed to adopt basic privacy protections, as recently stated in Amnesty International’s “Message Privacy Ranking” report. The report compares 11 popular instant messaging services.

Companies were ranked based on their recognition of online threats to human rights, default deployment of end-to-end encryption, user disclosure, government disclosure, and publishing of the technical details of their encryption.

“If you think instant messaging services are private, you are in for a big surprise. The reality is that our communications are under constant threat from cybercriminals and spying by state authorities. Young people, the most prolific sharers of personal details and photos over apps like Snapchat, are especially at risk,” Sherif Elsayed-Ali, Head of Amnesty International’s Technology and Human Rights Team said in a statement.

Snapchat scored only 26 points in the report (out of 100), and BlackBerry was rated even worse at 20 points. Skype has weak encryption, scoring only 40.

The middle group in the rankings included Google, which scored a 53 for its Allo, Duo, & Hangouts apps, Line and Viber, with 47 each, and Kakao Talk, which scored a 40.

The report also stated that, due to the abysmal state of privacy protections, there was no overall winner.

On a side note, protecting privacy rights is also part of the motivation behind the Let’s Encrypt project, which exists to supply free SSL certificates.

Amnesty International has petitioned messaging services to apply “end-to-end encryption” (as a default feature) to protect: activists, journalists, opposition politicians, and common law-abiding citizens world-wide. It also urges companies to openly publish and advertise the details about their privacy-related practices & policies.

Regarding the most popular instant messaging app of them all – WhatsApp – Facebook has thrown everybody a new surprise twist.

WhatsApp is updating its privacy policy; Facebook wants your data, and end-to-end encryption is reportedly going to be shut off soon.
WhatsApp, now owned by Facebook, caused some uproar this week after announcing that it’s changing its terms of privacy to *allow* data to be shared with Facebook. It means that for the first time WhatsApp will permit accounts to be connected to Facebook. This is after pledging, in 2014, that it wouldn’t do so – it has now backtracked.

WhatsApp now says that it will give the social networking site more data about its users – allowing Facebook to suggest phone contacts as “friends”.

“By coordinating more with Facebook, we’ll be able to do things like track basic metrics about how often people use our services and better fight spam on WhatsApp,” WhatsApp has written.

“By connecting your phone number with Facebook’s systems, Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them. … For example, you might see an ad from a company you already work with, rather than one from someone you’ve never heard of.”

Many aren’t pleased with the move, especially since WhatsApp previously promised not to change its privacy settings.
If you want to carry on using WhatsApp, you can’t opt out of the Facebook connection feature, as the update of the terms and privacy policy is compulsory. “This allows us to do things like improve our app’s performance and better coordinate,” says WhatsApp.

The app’s end-to-end encryption will also reportedly be stopped, despite the company having implemented it earlier this year and claiming it made conversations more secure.

The popular messaging service’s recent change in privacy policy to start sharing users’ phone numbers with Facebook—the first policy change since WhatsApp was acquired by Facebook in 2014 – has attracted regulatory scrutiny in Europe.

The Italian antitrust watchdog on Friday also announced a separate probe into whether WhatsApp obliged users to agree to sharing personal data with Facebook.

The European Union’s 28 data protection authorities said in a statement they had requested WhatsApp stop sharing users’ data with Facebook until the “appropriate legal protections could be assured” to avoid falling foul of EU data protection law.

WhatsApp’s new privacy policy involves the sharing of information with Facebook for purposes that were not included in the terms of service when users signed up, raising questions about the validity of users’ consent, according to the group of European authorities known as the Article 29 Working Party (WP29).

The WP29 group also urges WhatsApp to stop passing user data to Facebook while it investigates the legality of the arrangement.
Subsequently a spokeswoman for WhatsApp said the company was working with data protection authorities to address their questions.

Facebook has had run-ins with European privacy watchdogs in the past over its processing of users’ data. However, the fines that regulators can levy are paltry in comparison to the revenues of the big U.S. tech companies concerned.

The European regulators will discuss the Yahoo and WhatsApp cases in November.

“The Article 29 Working Party (WP29) has serious concerns regarding the manner in which the information relating to the updated Terms of Service and Privacy Policy was provided to users and consequently about the validity of the users’ consent,” it writes.

“WP29 also questions the effectiveness of control mechanisms offered to users to exercise their rights and the effects that the data sharing will have on people that are not a user of any other service within the Facebook family of companies.”

We haven’t heard of any similar discussion within Canada as of yet.

Thank you for reading the 4GoodHosting blog. We would love to hear from you.