‘Spintronic’ Ultra-Fast Computer Chips for Fail-Safe Data Retention

Reading Time: 3 minutes

Many of you who work at a desktop in an office will probably have been in a situation where the lights flicker, and your co-workers then quickly urge everyone in the vicinity to ‘save’. It would be nice to know in advance that a power outage is coming, and sometimes you do get little warnings like these. But if you don’t, and you haven’t saved your work any time recently, well, that’s going to be a problem.

Or at least it’s a problem for now, as a new type of computer chip has been developed that has the potential to save you from the frustration of having to do work over, or anything of the sort. This is something most of us can relate to if we spend the majority of the day working in front of a screen, and of course here at 4GoodHosting we’re no different that way from most Canadian web hosting providers.

Truth is, new technological advances in computing are coming fast and furious these days, and that’s a darn good thing. Spintronic devices are appealing alternatives to conventional computer chips, providing digital information storage that is extremely energy efficient and relatively easy to manufacture on a large scale.

The only current drawback for these devices, which depend on magnetic memory, is that they’re impeded by relatively slow speeds in comparison to conventional electronic chips.

Superior Speeds

Magnetization switching is the term used for writing information into magnetic memory, and that’s what happens when data is saved to computer chips. Researchers have now developed a new technique for it with switching speeds nearly 100 times faster than state-of-the-art spintronic devices. This creates the potential for ultrafast magnetic memory in computer chips, so fast in fact that data would be retained even if there was no power.

This is achieved using very short, 6-picosecond electrical pulses to switch the magnetization of a thin film in a magnetic device. The key is that this is done with great energy efficiency. A picosecond is one-trillionth of a second.

With conventional computer chips, the 0s and 1s of binary data are stored as either the ‘on’ or ‘off’ states of individual silicon transistors. Not so with magnetic memory; there the same information can be stored as opposite polarities of magnetization, typically described as the ‘up’ or ‘down’ states.

We can reference the cloud in all of this too, as magnetic memory is the basis of hard drive storage, the technology used to store the vast amounts of data in the cloud.

Suitably Stable

Magnetic memory prevents data loss when the power supply is cut because it is ‘non-volatile’: even if no electricity is being provided, the information is still retained. Given the often catastrophic nature of power failures in data centres – or at least ones without regular daily backups like we have here at 4GoodHosting with ALL our data centres in Canada – the potential for this technology in parts of the world where monetary constraints prevent more extensive data centre protocols is huge.

What this will do is allow local on-chip data to be retained when the power is off, as well as enabling that information to be accessed far faster than pulling it from a remote disk drive. The advance may have other applications too, particularly integration with electronics and controlling multiple conventional electronic circuits on the same chip.

More on How

A key component of spintronics is the spin-orbit torque device: a small area of magnetic film deposited on top of a metallic wire. Directing a current through the wire produces a flow of electrons with a magnetic moment, and the result is a magnetic torque applied to the magnetic bit that switches its polarity.

The devices developed so far require current pulses of at least a nanosecond to switch a bit’s polarity, and that needed to be improved upon. Which is where the focus is now: getting down to the picosecond speeds that will make the grade for what’s going to be asked of these spintronics. One thing researchers have learned so far is that ultrafast heating promotes better magnetization reversal.

In this study, the researchers launched the 6-picosecond-wide electrical current pulses along a transmission line into a cobalt-based magnetic bit. The magnetization of the cobalt bit was then demonstrated to be reliably switched by the spin-orbit torque mechanism.

Preliminary energy usage estimates are looking good, as the energy needed for these ‘ultrafast’ spin-orbit torque devices is significantly smaller than for conventional spintronic devices operating on much longer time frames for polarization reversal.

This is definitely a technology worth keeping an eye on for anyone with extensive on-site data storage needs, and it’s something we’ll definitely be keeping track of too.


More on 5G in Canada

Reading Time: 4 minutes

There’s been a whole lot of hubbub about the iPhone 12 and other new smartphone offerings from the big manufacturers, but there’s no other factor that’s playing into all the hype quite like 5G network connectivity. If you’re the type to be reading this then you’re also probably one who’s had plenty of news and info thrust upon you regarding the coming of 5G, and in truth there is a lot to be enthused about.

You can look at that from the individual perspective, but if you ask us the real promise here is in the collective benefit and what this means for the IoT and things like advanced remote healthcare and the like. It’s going to be fairly revolutionary and that’s likely not an exaggeration. Here at 4GoodHosting we’re like any other Canadian web hosting provider in that we know it’s likely going to have big time ramifications for how we do things in the long term too, and as you’ve likely seen it’s a topic we seemingly can’t get enough of.

There’s been a lot made of how 5G infrastructure is going to go in Canada, and how the widespread consensus is that Huawei can’t be allowed to be a part of it. But let’s leave any and all references to national security out of this and instead focus on the current status of 5G in Canada.

For starters, there’s no ‘ultra wideband’ in Canada. What’s happened here is that carriers in Canada have installed 5G on channels previously intended for 4G. Rogers’ channels are very similar to T-Mobile’s ‘nationwide’ and mid-band 5G down south, but as of now Bell and Telus are using a 5G channel very similar to a popular 4G band.

The long and short of it is that Bell, Telus, and Rogers all view 5G as an add-on to 4G for now, so even if you’re using a 5G-enabled smartphone you’re probably still going to be connected over 4G. The good news is that the 5G experience should get broader and somewhat better over the next year as the carriers extend coverage beyond their current limited areas.

Think more like 2022 – in mid-2021 the government will make available a batch of fresh, clear airwaves around 3.5GHz for 5G via auction.

Speed Interests

Independent research into current 5G network speeds in Canada indicates that the signal is strong, both 4G and 5G speeds are great, and 5G speeds are in fact higher than 4G in the way consumers would expect them to be. Some insist, though, that there’s not a whole lot of difference in responsiveness, at least not yet. This is likely because the phones tested were working on a combination of 4G and 5G, and as such still not enjoying the full benefit of 5G’s lower latency.

The same findings indicated there’s not a marked difference in upload speeds either, but again this may also be a case of not yet.

Panel Factors

Depending on where you are in the country, the next part of this discussion of 5G in Canada may or may not be of relevance. If you’re in a major urban area, however, you will likely already have panels broadcasting a 5G signal to the surrounding neighbourhood.

Again, the surveys seem to indicate that these don’t make a noticeable difference in download speeds when comparing operation on the 4G network to the available 5G. And while we don’t have any interest in favouring one network provider over another, according to the sources Rogers did show noticeably faster upload speeds on 5G than 4G.

Consistency will of course be the key to acknowledging the superiority of 5G network connectivity, and that’s still something that’s being figured out. As of now, it seems that the only way you can get that type of network speed consistency on any 5G network in Canada is if you’re using around 10MHz less spectrum.

Unlimited Plan? 5G a Must

We know that Canadian carriers have only recently expanded their most popular plans beyond a few gigabytes per person. As customers sign up for bigger plans, the carriers need more capacity. Tests showed that Canadian carriers are holding their own on speed, even as subscribers to the new unlimited plans use up to 2x the data they did before. The added efficiency of 5G could be a key component of how the carriers will handle the added traffic.

This is why any carrier is going to prefer you buy a 5G phone as soon as possible. With ever greater numbers of people consuming ever larger amounts of data on a 5G network, it’s best done on the clearest, most efficient channels if performance and speed are to be maintained.

It’s quite possible that existing 5G channels simply aren’t being used enough yet, and that’s more of a factor in any ‘underwhelming’ perspective on 5G at this time.

If that’s actually the case, can we expect to see better speed and responsiveness from 5G in Canada once more and more of us have 5G phones? There’s plenty of reason to hope that’s not it, because that will inevitably lead to all sorts of questions about the propriety of what’s being orchestrated by major network providers and tech giants.

Ways to Optimize CPU Usage & Performance

Reading Time: 5 minutes

Many people will immediately attribute a noticeable slowdown in their computer to an aging and failing hard drive, but that’s not always the case. Even desktops and notebooks that aren’t particularly old can start to have performance shortcomings, and it’s not like most of us have the means to simply replace our devices every time this occurs. Adding to the issue is the fact that modern life demands more and more from those devices, which means our personal and work computers are often overloaded in ways.


This trend isn’t lost on anyone, we imagine, and here at 4GoodHosting one of the things we’re very aware of as a Canadian web hosting provider is that anyone who has a website is going to be wary of a high bounce rate. If you don’t know what that is, it’s the rate at which visitors to your site leave within a certain time period of their arrival. Would-be visitors who perceive some sort of problem with your site opening and displaying may put that on you, when in fact it has much more to do with the shortcomings of their device of choice.


The good news is that there are ways to optimize CPU usage and performance, and a good many of them aren’t going to ask much in the way of your time, effort, or expense. So let’s have a look at those with our blog post for this week.


All About Throughput


For most applications, performance is centered around throughput: how much work the server can process in a certain timeframe.

Nowadays most high-end servers on the market are designed with throughput in mind.


Finding just one server that’s ideally optimized for running all types of workloads out-of-the-box hasn’t happened yet, however. So getting the most out of your server’s available resources is best done by weighing the CPU’s physical limitations against your performance requirements.


Understanding that all CPU optimization efforts are essentially software-based is a good place to start. So rather than upgrading hardware, it’s better to focus on your workload or other software running on the machine. Better server performance is often about tradeoffs – optimizing one aspect of the server at the expense of another.


Clarify Performance Objectives


We can start with this: when you increase the size of the cache, overall processing speed increases along with it, but that results in higher memory consumption. So while this is the approach most people would take by default, it’s not the fix most of you will hope it to be.

Understanding and being responsive to the workloads running on the server has to be part of the analysis and response too.


Since each workload is unique and consumes resources differently, there are no fixed rules for defining hardware requirements or performance targets. SQL database workloads, for example, are heavy on the CPU while 3D rendering workloads gobble up high amounts of memory.


So in advance of deploying or configuring your server infrastructure, you should start by assessing your requirements, and that’s best done through what’s called ‘capacity planning.’ Doing so gives you insight into your current resource usage and – done right – guides you to developing optimization strategies for existing and future performance expectations.


As this relates to common types of workloads like database processing, app or web hosting, here’s what you should be looking at and evaluating for CPU performance objectives:


  • How many users do you expect to be using your application?
  • What is the size of an average single request?
  • How many requests do you expect under average load and during spikes in demand?
  • What is the desired SLA? (defined over measured time periods)
  • Can you establish a target CPU usage?
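Those questions can be rolled into a rough capacity estimate. Here’s a minimal sketch; the numbers and the 80% utilization target are illustrative assumptions, not fixed rules:

```python
import math

def required_cores(peak_rps, cpu_seconds_per_request, target_utilization=0.8):
    """Cores needed to keep peak load at or below the target CPU utilization."""
    busy_cores = peak_rps * cpu_seconds_per_request  # cores kept busy at peak
    return math.ceil(busy_cores / target_utilization)

# e.g. 200 requests/sec, each costing roughly 10 ms of CPU time
cores_needed = required_cores(peak_rps=200, cpu_seconds_per_request=0.01)
```

Plugging in your own measured per-request CPU cost and anticipated peak request rate gives a defensible starting point for sizing before you deploy.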


You should also determine which server components your application relies on the most, understanding that:


  • Performance/throughput is CPU based
  • Memory consumption is RAM based
  • Latency is network based
  • I/O operations are disk based


Optimizing for performance is also very much related to the number of CPU cores, and this is likely something a layperson will be able to understand more readily. Long story short, a multi-core processor is going to be a lot more usable for multi-threaded jobs. Without one you may find yourself dealing with high CPU wait times, which is going to be pretty much intolerable for anyone and everyone.
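As a small sketch of putting those cores to work, here’s a worker pool sized from the machine’s core count. (One caveat for this illustration: in CPython, truly CPU-bound work should use a process pool rather than threads because of the GIL; the sizing idea is the same.)

```python
import os
from concurrent.futures import ThreadPoolExecutor

def run_jobs(jobs, workers=None):
    """Run independent jobs concurrently on a pool sized to the core count."""
    workers = workers or (os.cpu_count() or 1)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves the order of the submitted jobs
        return list(pool.map(lambda job: job(), jobs))

results = run_jobs([lambda: 1 + 1, lambda: 2 * 3, lambda: sum(range(10))])
```

On a single-core machine the pool degrades gracefully to one worker, which is exactly the high-wait-time scenario described above.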


Storage configurations are a factor too, and certain ones will have negative impacts on performance. Having the right disk type and properly configured storage resources decreases latency. If you are running latency-sensitive workloads such as database processing, this becomes something you’ll really want to pay attention to. Such workloads need to be optimized to utilize memory resources efficiently.


Here are some connections you can make regarding all of this:


  • CPU-intensive workloads: machine learning, high-traffic servers, video encoding, algorithmic calculations
  • Memory-hungry workloads: SQL databases, CGI rendering, real-time big data processing
  • Latency-sensitive workloads: video streaming, search engines, and mobile text message processing


Keep Tabs on Resource Usage


Here’s a fairly universal rule that will apply to pretty much everyone: when usage exceeds 80% of the CPU’s physical resources, applications begin to load more slowly, or the server may no longer respond to requests.


So what sorts of actions take CPU usage up to those levels? A common scenario: if a database server is using 100% of its CPU capacity, that might be the result of the application running too many processor-intensive tasks, such as sorting queries. To decrease the load, you would have to optimize your SQL queries to utilize available resources more efficiently.
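To illustrate that kind of query tuning, here’s a minimal SQLite sketch (the table and index names are invented for the example): adding an index lets the engine search the index instead of scanning the whole table for every filtered query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [("alice", 10.0), ("bob", 25.5), ("alice", 7.25)])

# Without this index, filtering on `customer` forces a full-table scan.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# EXPLAIN QUERY PLAN shows whether the engine will use the index
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("alice",)
).fetchall()
uses_index = any("idx_orders_customer" in row[-1] for row in plan)

alice_rows = conn.execute(
    "SELECT * FROM orders WHERE customer = ?", ("alice",)
).fetchall()
```

The same EXPLAIN-style inspection exists in most database engines, and it’s the quickest way to confirm a hot query is not the one pinning your CPU.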

High CPU usage could also be attributed to poorly coded applications, outdated software, or security vulnerabilities. The only way to precisely determine the root cause of performance issues or bottlenecks is to use analytics and monitoring tools.
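As a starting point for that kind of monitoring, here’s a minimal standard-library sketch that flags the 80% guideline mentioned above (note that `os.getloadavg()` is not available on Windows):

```python
import os

def cpu_over_threshold(load_1min=None, cores=None, threshold=0.8):
    """Return True when the 1-minute load average per core exceeds the threshold."""
    if load_1min is None:
        load_1min = os.getloadavg()[0]  # Unix-only; raises OSError on Windows
    if cores is None:
        cores = os.cpu_count() or 1
    return (load_1min / cores) > threshold
```

A real deployment would feed this into an alerting tool rather than polling by hand, but the per-core normalization is the part people most often get wrong.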


Generally speaking, CPU utilization should never be at full capacity. When an application is consuming 100% of the total processing power under average load, the CPU will be unable to handle sudden demand spikes, and that means latency or service unavailability.


Stay Under 80


Having latency-sensitive applications utilizing no more than 80% of the CPU’s power is a general good-practice guideline. Applications that are less sensitive to latency may allow CPU utilization up to 90%. Be judicious about how you determine that though, and don’t be forgiving.


To mitigate high CPU usage issues, common solutions include optimizing your application’s code, limiting the number of requests, optimizing the kernel or operating system processes, or offloading the workload to additional servers.


One last mention on this topic is programming languages. The programming language running your application also factors into CPU usage. Languages that run close to the metal, such as C or C++, provide more control over hardware operations than interpreted languages such as Python or PHP. Writing applications in C or C++ is advantageous this way, particularly if you need granular control over how your application consumes CPU and memory resources.

Maximizing Your Use of Google Cloud Free Tier

Reading Time: 5 minutes

Some people will say you can never be too frugal or thrifty, and in honesty even if you can afford to spend more there’s really no good reason you should if there’s an alternative. This is something a lot of people can relate to. If you’re the webmaster or similarly titled individual at the helm of a company or organization, needing to find savings in your web operations, then the Google Cloud Free Tier offers just that, provided you’re taking advantage of what cloud storage offers these days.

The appeal of cloud storage of course needs no explanation. Physical storage is just that – physical – and that means space demands, and as we become more digital in our lives and businesses every day it’s increasingly impractical to house data in physical form. Cloud storage has been a godsend in this regard, and we’re very fortunate to have it now.

Here at 4GoodHosting, we’re like every other Canadian web hosting provider in that our data centers are usually working at near capacity, and we also take every opportunity to utilize cloud storage for internal storage needs related to our business and day-to-day operations. Google has had its Free Tier for Google Cloud for a while now, and while it’s great for individuals or smaller-scale operations there’s just not enough capacity for the likes of us.

For the average individual or smaller business, however, it’s a great resource to have at your disposal. Successful businesses grow, and you wouldn’t want it any other way. But that may mean stretching your storage needs beyond the Google Cloud Free Tier maximum.

But it also may not, particularly if you find ways to maximize your use of the Google Cloud Free Tier. Here’s how you can do that.

Store Only What’s Needed

Free offerings like Firestore and Cloud Storage are flexible tools that let you nest away key-value documents and objects respectively. Google Cloud’s always-free tier allows no-cost storage of your first 1GB (Firestore) and 10GB (Cloud Storage). For apps that keep more details, however, it’s common to go through those free gigabytes fairly quickly. So quit saving information unless you absolutely need it. This means no obsessive collection of data just in case you need it for debugging later. Skip extra timestamps, and be judicious about which large caches full of data you really need to keep in this resource.
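A minimal sketch of that discipline (the field names here are invented for illustration): whitelist only the fields your app actually reads back, and drop the rest before writing to storage.

```python
# Hypothetical record shape: keep only what the app reads back later.
KEEP_FIELDS = {"user_id", "status", "updated_at"}

def strip_record(record, keep=KEEP_FIELDS):
    """Drop debug payloads, extra timestamps, and other just-in-case data."""
    return {key: value for key, value in record.items() if key in keep}

slim = strip_record({
    "user_id": 42,
    "status": "active",
    "updated_at": "2021-03-01T12:00:00Z",
    "debug_trace": "large blob never read back",  # dropped before storage
    "raw_headers": {"user-agent": "example"},      # dropped before storage
})
```

Running every write through a filter like this makes the “store only what’s needed” policy mechanical instead of a matter of developer discipline.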

Make Use of Compression

There’s no shortage of code snippets for adding a layer of compression to your clients. Rather than storing fat blocks of JSON, the client code can run the data through an algorithm like LZW or Gzip before sending it over the wire to your server for unpacking and storage. That means faster responses, fewer bandwidth issues, and significantly less demand on your free monthly data storage quota. Do be aware though that small data packets can actually get bigger once the compression’s overhead is factored in.
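Here’s a minimal sketch of that client-side layer using Python’s standard library (Gzip here; the same idea applies to LZW or other codecs):

```python
import gzip
import json

def pack(payload):
    """Serialize to JSON and gzip it before sending over the wire."""
    return gzip.compress(json.dumps(payload).encode("utf-8"))

def unpack(blob):
    """Reverse of pack(): gunzip and parse back into Python objects."""
    return json.loads(gzip.decompress(blob).decode("utf-8"))

# Repetitive JSON compresses dramatically
payload = {"events": [{"type": "click", "page": "/home"}] * 500}
raw_size = len(json.dumps(payload).encode("utf-8"))
packed_size = len(pack(payload))  # far smaller than raw_size
```

And the caveat from above holds: for a tiny payload, the Gzip header and checksum can make the packed blob larger than the original, so it’s worth checking sizes before compressing everything indiscriminately.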

Serverless is the Way to Go

When it comes to intermittent compute services that are billed per request, Google is already fairly accommodating. Cloud Run will boot up and run a stateless container that answers two million requests each month, and at no cost to you. Cloud Functions will fire up your function in response to another two million requests. That averages out to more than 100,000 different operations per day. So writing your code to the serverless model can often be a good choice.

Make Use of the App Engine

Spinning up a web application using Google’s App Engine is an excellent way to do it without fussing over all of the details of how it will be deployed or scaled. Almost everything is automated, so increases in load will automatically result in new deployments. 28 ‘instance hours’ are included each day with App Engine, allowing your app to run free for 24 hours per day and free to scale up for four hours if there’s a surge in demand.

Consolidate Service Calls

You can also find flexibility for adding extras if you’re careful. The limits on serverless invocations are on the number of individual requests, not on their complexity. Packing more action and more results into each exchange by bundling all of the data operations into one bigger packet is entirely possible here. Just keep in mind that Google counts the memory used and the compute time, and that your functions can’t exceed 400k GB-seconds of memory and 200k GHz-seconds of compute time.
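A minimal sketch of that bundling (the batch size and operation format are invented for illustration): group individual data operations so each serverless invocation carries many of them.

```python
def make_batches(ops, batch_size=50):
    """Group individual data operations so each request carries many at once."""
    return [ops[i:i + batch_size] for i in range(0, len(ops), batch_size)]

# 120 small operations become 3 requests instead of 120
batches = make_batches([{"op": "write", "key": i} for i in range(120)])
```

Each batch would then be sent as one request body to your function, trading a slightly larger payload (and a bit more memory and compute per invocation) for far fewer invocations against the free quota.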

Go with Local Storage

A number of suitable places to store information can be found in the modern web API. You can go with the usually-fine cookie, which is limited to 4 kilobytes. For a document-based key-value system, the Web Storage API can cache at least 5 megabytes of data, and some browsers can retain up to 10 megabytes. IndexedDB offers more, with database cursors and indices that have the capacity to process mounds of data, often with no fixed storage limit.

The more data you store locally on your user’s machine, the less valuable server-side storage you need. This can also mean faster responses and much less bandwidth devoted to ferrying endless copies of the data back to your server. This is a fairly basic fix to implement, and it’s going to be doable for the majority of users.

Uncover Hidden Bargains

Dig enough and you’ll find Google has a helpful page that puts all of their free products in one place, but dig a little further and you’ll find plenty of free services that aren’t even listed. Take Google Maps, for instance: it offers $200 of free monthly usage, and Google Docs and a few of the other APIs are always free as well.

Use G Suite

Many G Suite products like Docs, Sheets, and Drive are billed separately; users receive them free with their Gmail account, or the business pays for them as a suite. Rather than creating an app with built-in reporting, just write the data to a spreadsheet and share the sheet. Build a web app and you’ll need to burn your compute and data quotas to handle the interactive requests. Create a Google Doc for your report and you’re delegating most of the work to Google’s machines.

Ignore Gimmicks

Unfortunately, superfluous features of modern web applications abound in a big way. Banks don’t need to include stock quotes with their applications, but they might add them. Local times or temperatures probably don’t need to be there. Embedding the latest tweets or Instagram photos probably isn’t needed either. Doing away with all of these extras means a lot fewer calls to your server machines, and those calls consume your free limits.

Be Wary of New Options

There are some new and fancy tools for building A.I. services for your stack that are well suited for experimenting. The AutoML Video service allows you to train your machine learning model on video feeds for 40 hours each month for free. The service for tabular data will grind through your rows and rows of information on a node free for six hours. This is all fine and dandy, but be careful, as automating the process so that every user could trigger a big machine learning job comes with real risks.