Compatibility of Web 3 and Net Zero Transition in Question

Reading Time: 4 minutes

One thing is for sure: there are an awful lot of adults on the planet right now, and with that comes a whole lot of resource consumption. The same overconsumption is occurring in the digital world too, and it is a struggle to stay ahead of roaring demand while still providing consumers with what they want AND not being too wasteful or taxing on the natural environment while doing so. The Internet is a primary means by which those adults conduct much of their personal and professional lives, and a large part of the challenge with the transition to Web 3 is accommodating that demand while still being environmentally accountable.

Up until recently a lot of people didn’t know that their streaming, surfing, and other online activities were contributing to global warming. Admittedly not contributing as much as other factors, but it all adds up, and we are at a very pivotal time with regards to all of this. We won’t go on further about that, but what we do know is that Web3 and its applications, platforms, and assets have the potential to get us past some of the biggest challenges to global sustainability. But how compatible is Web3 really with being a contributor to a net zero future, as it’s being touted?

This is something that any good Canadian web hosting provider will be keeping an eye on, and those of us here at 4GoodHosting are no exception, as much of what we do plays the underlying role in making websites available and facilitating all that happens after that. There are new projects that exemplify how Web 3 is on the right track with this, but there are also concerns about whether it will all line up once the transition is complete.

There’s no backtracking to be done on this, so let’s use our entry this week to talk about Web 3 and Global Sustainability in Computing.

New Systems Needed

The decentralized autonomous organizations, tokens, and blockchain networks of Web3 have immeasurable potential to be the foundations of a sustainable world, but there is an even more critical view of it based on the IPCC’s latest report advising that transformative new systems are urgently needed to adhere to the Paris Agreement and keep global temperature rise below the mandated 1.5°C. But are disruptive digital technologies pushing climate change along faster, in the same way so much else in our lives is?

Web3 is often referred to as the ‘read-write-own’ web because it puts control over governance and operations into the hands of its global users and promotes different avenues for value creation, as seen most notably in NFTs. People who see the advantage in it are quick to talk about ‘integrity’ when it comes to blockchain technology, and it is true that the blockchains acting as the foundation for Web3’s digital assets offer unique advantages for overcoming challenges to meaningful progress on sustainability.

All good, but there are studies showing that green assets like voluntary carbon credits aren’t making the difference that was foreseen for them, as more than 90% of credits fail to meet basic offset criteria. So there’s more demand for real proof of green value, and tokenization is the answer here as it offers the ability to trace the origins and provenance of an asset through its metadata. Long story short, it is better in at least one regard because it is super transparent.

Automated Asset Generation

Another big selling point for Web3 is the way it will allow metadata to be integrated with business systems and device intelligence for automated asset generation. A carbon credit can pull data from energy intelligence software, be verified and certified through an ecosystem of authorities, and eventually stand separately as a fully tradeable and traceable digital asset.
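As a rough sketch of the idea – with all field names, class names, and authority names hypothetical, not any real registry’s schema – a tokenized carbon credit might carry its source reading and certification trail in metadata, with a fingerprint anyone can recompute to verify provenance:

```python
from dataclasses import dataclass, field
import hashlib
import json

@dataclass
class CarbonCreditToken:
    """Hypothetical tokenized carbon credit with traceable provenance metadata."""
    project_id: str
    tonnes_co2: float
    source_reading: dict              # e.g. raw data from energy intelligence software
    certifications: list = field(default_factory=list)

    def certify(self, authority: str) -> None:
        # Each authority in the verification ecosystem appends its attestation
        self.certifications.append(authority)

    def fingerprint(self) -> str:
        # A stable hash over the metadata lets anyone trace the asset's origin
        payload = json.dumps({
            "project_id": self.project_id,
            "tonnes_co2": self.tonnes_co2,
            "source_reading": self.source_reading,
            "certifications": self.certifications,
        }, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

token = CarbonCreditToken("wind-farm-042", 12.5, {"kwh_generated": 38000})
token.certify("RegistryA")
token.certify("AuditorB")
print(token.fingerprint())  # provenance fingerprint anyone can recompute
```

The point of the sketch is the transparency argument made above: because the fingerprint is derived deterministically from the metadata, any party can recompute it and detect tampering with the asset’s stated origins.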

Diverse types of ESG assets beyond carbon offsets can benefit from this asset verification process, and much has been made of how it will be ideal for certifications determining the legitimacy of low-emissions products like food or fuel, units of renewable power, and investments in sustainable projects like green energy or conservation.

Compatible platforms for the kind of borderless collaboration and international marketplaces required for a successful net zero transition are also going to be needed, and it is not clear whether these developments are where they need to be given the imminent widespread adoption of Web3. This shift is going to be huge, and whether there will be the degree of collaboration needed remains to be seen.

One thing we do know, though, is that tokenization definitely has the potential to be an ideal way to encode leading global standards, and to hold producers more reliably to account on whether they are doing their part for the collective good when it comes to meeting emission standards in the near future.

A decentralized future is what we should be aiming for here, where most people play an active daily role in the energy transition, rather than its current domination by industry and government. Web3 could do for the energy transition what it is predicted to do for the internet – bring greater numbers of contributors into the battle for a better future, starting with our information systems as we know them here and now.

Smartphone Use to Improve Memory?

Reading Time: < 1 minute

Everyone knows the standard understanding that too much screen time is bad for us, and that includes the small screens on our phones. Blue light is a problem, especially when it is stimulating you in the evening when you could be winding down and getting ready for sleep. A whole lot of people will admit that they spend too much time on their device, but it turns out that may not be entirely a bad thing. It may still be taking away from your productivity, and some people are definitely hooked on playing games on their phone, but it seems there may be an upside to interacting with handheld devices.

There is research indicating that by entering and keeping information in your phone, you can actually improve your memory for information that is not stored in your phone – meaning the information you have to recall with brainpower alone and not with your handheld digital assistant. Research published in the Journal of Experimental Psychology suggests that much of it is in the way the devices ‘take a load off’ us by reducing how much we have to remember, freeing up memory to recall more of the less important stuff.

Whether overuse of smartphones or other digital technologies could promote cognitive decline in other contexts remains to be seen, but apparently your memory is not going to become worse from using your smartphone. It might actually get better. This is something that will be of interest to any good Canadian web hosting provider like us here at 4GoodHosting, given that mobile web browsing is much more dominant than desktop or notebook these days, and being focused on having a good mobile website for your business is a necessity.

Let’s use this entry to talk about smartphone use and memory, and how your memory may actually improve if you’re very active with your phone in keeping important information.

CIRCLES

In the past, neuroscientists have expressed concerns that the overuse of technology could result in the breakdown of cognitive abilities and cause what they termed ‘digital dementia’. The contrary results we talked about earlier were based on a memory task that 158 study participants between the ages of 18 and 71 performed on a touchscreen digital tablet or computer.

They were shown up to 12 numbered circles on the screen, and then needed to drag some of these to the left and some to the right. Their success was based on the number of circles that they remembered to drag to the correct side. One side was designated ‘high value’, and dragging a circle to this side was worth 10 times as many points as remembering to drag a circle over to the other ‘low value’ side.

Participants performed this task 16 times. The key stipulation, though, was that they had to use their own memory on half of the trials but were allowed to set reminders on the digital device for the other half.

Most of the participants used the digital devices to store the details of the high-value circles and their memory for those circles went up 18%. Their memory for low-value circles was also improved by 27%, even for participants who had never set reminders for low-value circles.
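The incentive structure of the task can be sketched in a few lines – the point values below mirror the 10x rule described above, while the circle counts are illustrative, not the study’s data:

```python
# High-value side is worth 10x the points of the low-value side,
# so offloading high-value circles to a reminder pays off most.
HIGH_VALUE_POINTS = 10
LOW_VALUE_POINTS = 1

def score_trial(remembered_high: int, remembered_low: int) -> int:
    """Total points for one trial, given how many circles of each type
    were remembered and dragged to the correct side."""
    return remembered_high * HIGH_VALUE_POINTS + remembered_low * LOW_VALUE_POINTS

# Memory alone: participants naturally prioritize the high-value circles
print(score_trial(remembered_high=4, remembered_low=2))   # 42

# With reminders storing the high-value circles, own memory is freed
# up for the low-value ones, lifting both counts
print(score_trial(remembered_high=6, remembered_low=4))   # 64
```

The asymmetric scoring is what makes the finding interesting: offloading the high-value items to the device is the rational move, and the study suggests doing so also improves recall of the items that were never offloaded.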

EXTERNAL MEMORY

Using reminders also came at a cost to many participants’ memory: they would forget the information shortly after setting reminders for it, inherently trusting the device to remember it for them. This is what all of us do with contacts, numbers, and much more. When the reminders were taken away, the participants remembered the low-value circles better than the high-value ones.

When people were allowed to use an external memory, their smartphone or other external device helped them remember the information they had saved into it. The research also showed that the device improved people’s memory for unsaved information as well, and the belief is that when people had to remember by themselves, they used their memory capacity for the most important information. But when they could use the device, they saved high-importance information into it and then used their own memory for less important information.

All interesting stuff and it may make you think about freeing up space every time you enter something into your calendar or put a new contact into your phone.

Use of Malicious Browser Extensions Becoming More Commonplace

Reading Time: 3 minutes

It goes back a long way, but some who have been in and around the digital space long enough can tell you of a time when browser extensions weren’t even a topic of discussion, simply because they didn’t exist. Why didn’t they exist the way they do today? The simple answer is that web browsers didn’t need modifications back then, because the information on the web was presented in a more standardized way. The fact that information available via the web is accommodated much more thoroughly now is a good thing, and browser extensions are a large part of why that’s the way it is.

Some may not have the web-world wherewithal to know what a browser extension is in the first place, and if so that’s perfectly alright. They are small software modules installed to customize your browser’s appearance or function, and they can serve any number of purposes, from email encryption, ad-blocking, one-click password storage, and spell checking to dark mode and calendar add-ons. Browser extensions are also fairly essential for online shopping, something pretty much everyone does these days at least to some extent.

All is well to a point, but where that point ends is when browser extensions become harmful rather than beneficial. And that’s the reality nowadays, and it is something that is of interest to us here at 4GoodHosting in the same way it would be for any quality Canadian web hosting provider. That’s because it’s very much in line with who we are and what we do – namely hosting websites and ones that may have browser extension compatibility formatted into them for any number of perfectly good reasons.

What makes all of this blog-worthy today is that malicious browser extensions are becoming more common, and it’s increasingly common for bad-actor developers to be distributing them. Let’s look at why that is this week.

It Starts With 3rd-Party Developers

The reason that bad-actor browser extensions are so common is that while there are many extensions out there, only some of them are made by the developers of the primary browsers themselves. Most extensions are made by 3rd-party developers, and as you’d guess many are of no renown whatsoever, and as such can make and distribute whatever they please without needing to account for the offering. In fact, malicious browser extensions are becoming so widespread that millions of users have them installed, and many may not even be aware of having done so.

A report analyzing telemetry data from an endpoint protection solution found that over the last 2.5 years (January 2020 – June 2022), more than 4.3 million unique users were attacked by adware hiding in browser extensions. That works out to some 70% of all affected users encountering a threat of this type, and if that kind of widespread occurrence is transposed onto the larger scale of everyone using the internet, it makes clear the large extent of the problem.

The same report further suggests that implemented preventative measures put in place over recent years are responsible for more than 6 million users avoiding downloading malware, adware, and riskware that had been disguised as harmless browser extensions.

Adware and Malware

These extensions serve to target users with adware and other forms of malware, and do so repeatedly, and again most people equipped with them have no idea what’s going on. The most common type of malicious browser extension is adware – unwanted software that pushes affiliate links rather than improving the user experience in any way. It functions by monitoring user behavior through browser history before redirecting users to affiliate pages and earning a commission for the extension’s makers.

The biggest culprit? WebSearch, detected by antivirus programs as not-a-virus:HEUR:AdWare.Script.WebSearch.gen. It has been inadvertently downloaded a staggering nearly 900,000 times.

This extension is promoted on the basis that it is designed to improve the working experience for those who need to do tasks like converting between .doc and .pdf files, but what it does in reality is change the browser’s start page and redirect resources to earn extra money through affiliate links. It also changes the browser’s default search engine to MyWay to capture user queries, then collects and analyzes them so that certain affiliate links can be served to the user in search engine results pages.

Malware is a huge problem too that can result from carrying malicious browser extensions, with the worst ones geared to steal login credentials and other sensitive information like payment data. To protect your devices from malicious browser plugins you should ensure you only download them from a source that is proven trustworthy, and checking reviews and ratings is a recommended way to do that.
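Another simple habit is glancing at what an extension actually asks for before installing it. As a rough illustration – the extension name is made up and the list of “risky” permissions is an illustrative selection, not an official classification – a short script could flag broad permission requests in a Chrome-style manifest.json:

```python
import json

# Permissions often considered high-risk when requested by an extension,
# since they grant broad visibility into browsing (illustrative, not exhaustive)
RISKY_PERMISSIONS = {"tabs", "history", "webRequest", "cookies", "<all_urls>"}

def flag_risky_permissions(manifest_text: str) -> list:
    """Return any high-risk permissions requested in a Chrome-style manifest.json."""
    manifest = json.loads(manifest_text)
    requested = set(manifest.get("permissions", [])) \
        | set(manifest.get("host_permissions", []))
    return sorted(requested & RISKY_PERMISSIONS)

sample = json.dumps({
    "name": "ExampleExt",                     # hypothetical extension
    "permissions": ["storage", "tabs"],
    "host_permissions": ["<all_urls>"],
})
print(flag_risky_permissions(sample))  # ['<all_urls>', 'tabs']
```

An extension that legitimately needs `<all_urls>` or `tabs` will usually explain why in its listing; one that requests them without an obvious reason deserves extra scrutiny.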

DNA Storage the Solution for Data Storage Capacity Demands

Reading Time: 3 minutes

There’s no getting around the fact that as so much of everyday life becomes more digital for all of us, there is an ever-increasing need for data storage. And try as they might, those who make more storage available simply can’t keep pace with the growing demand. Innovation is always the requirement in a situation like this, and it appears that is exactly what’s happened. Everybody knows what DNA is, and how it is the genetic sequence that allows us to become who we are right from the moment we’re conceived.

So most people would struggle to connect the dots as far as how DNA could be implemented as a means of better data storage, but it appears that’s exactly what’s happening. And it is definitely meeting a need, as for many in the space the option of doing away with old data to make space for new data isn’t something they are willing or able to do, given the nature of what they’re making or offering. Estimates are that in just 7+ years from now there will be a 7.8 million petabyte deficit between the supply of data storage available and the demand expected for it.

Here at 4GoodHosting we are like any other reliable Canadian web hosting service provider in that we find this stuff very interesting, as data storage is very much front and center for us too, and we can absolutely see how integral it is for some organizations and their operations. We wouldn’t have made the connection between genome sequencing, strands of DNA, and data storage on our own, but we have now, and that’s what we’ll share with you in this entry.

Microsoft Leading the Way

The key in all of this is going to be reading and writing data faster, and Microsoft has been leading the way, developing a new method for writing synthetic DNA with a chip that is 1,000 times faster, so that the higher write throughput works out to lower costs – which in turn will allow companies to afford the additional data storage they’re going to need anyway.

The data dilemma is this. As the volume of data produced by internet activity, digital devices, and IoT sensors continues to expand rapidly, the problem becomes a question of where to put it all. It’s true that hard disk drives (HDDs) and solid state drives (SSDs) do well at holding and supplying the quantities of data that servers and client devices need to function. But neither is really practical for storing vast quantities of information for long periods of time.

For archival storage, Linear Tape-Open (LTO) magnetic tape is best, boasting the lowest cost per capacity of any technology. The current generation of tape, LTO-9, has a native capacity of 18TB, and at about $8.30 per terabyte (roughly $150 per cartridge) it is affordable. Investing in it is key, especially as the alternative – as mentioned – is to delete older data. That’s just not a realistic option, especially for any organization working in AI, where products are typically informed by large and very exhaustive pools of data.

The only big drawback to LTO tape is that data can only be accessed serially. This makes it hard to locate specific files, and often creates the need to migrate to fresh tape to avoid data loss. Not something that has to be done regularly, but it does need to be done and it can be time consuming.

DNA – As Storage

The four molecular building blocks of DNA – adenine (A), guanine (G), cytosine (C), and thymine (T) – can be utilized as an extremely dense and durable form of data storage. The approach converts binary 1s and 0s into the four-lettered genetic alphabet, with just one gram of DNA being capable of storing 215 PB (220,000 TB) of data. Experts say this will work out to ultra-high density, acceptable cost, and much more sustainable operations.
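The core encoding idea can be sketched in a few lines: with four bases, each base can represent two bits, so one byte maps to four bases. The particular bit-to-base mapping below is an illustrative choice, not a standard (real systems also add error correction and avoid problematic base runs):

```python
# Map each 2-bit pair to one of DNA's four bases, and back
BITS_TO_BASE = {"00": "A", "01": "G", "10": "C", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Convert bytes to a DNA strand: 8 bits per byte -> 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Convert a DNA strand back to the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand)                    # GCCAGCCG
assert decode(strand) == b"hi"   # round-trips losslessly
```

Since each base carries 2 bits, the density claim above follows from how many molecules fit in a gram: the information is packed at the molecular scale rather than onto magnetic or flash media.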

As of now, though, the technology remains unusable at scale because of the time it takes to write data to DNA, among other challenges. Future datacenters will need everything the SSD, HDD, and tape industries are capable of, with DNA, optical, and perhaps other enterprise storage technologies put in place to supplement them.

Who knows – eventually this may become commonplace in the web hosting industry here in Canada, and that’s not a far-fetched idea considering how data storage needs for providers like us have also been increasing rapidly over recent years, a trend that is likely to continue.

AI and Machine Learning being Trained at Double Speed

Reading Time: 3 minutes

It seems to be no coincidence that the emergence of artificial intelligence and machine learning has occurred at exactly the time humankind needs it most, as the entire world struggles to keep pace with a global population that continues to explode, bringing a whole whack of major challenges along with it – especially when it comes to continuing to provide everyone with what they’ve come to expect in the way of essential services and the like. The need is pressing, but fortunately there are very talented engineers and equally dedicated directors above them applying themselves to the best of their ability.

While in the simplest sense this won’t have anything directly to do with providing web hosting, our industry is already in the process of being touched by this trend too, as there are definitely applications for better and more efficient data management and data streamlining made possible by machine learning. One of those secondary effects could be better proactivity in managing data overloads caused by unexpected external influences. The major heatwave in the UK 2 weeks ago caused some data centers to have to shut down.

Machine learning may provide systems with the means of getting ahead of the curve in dealing with that, so a complete shutdown isn’t needed. What would occur isn’t exactly what is known as load shedding, but the process would be similar: being able to foresee what’s coming and knowing where best to make temporary cuts so that the cumulative effect of it all isn’t so catastrophic. As a Canadian web hosting provider, those of us here at 4GoodHosting can see all sorts of promise in this.

2x Speed

There is now a set of benchmarks – MLPerf – for machine-learning systems, and the latest results show they can be trained nearly 2x as quickly as they could last year. The bulk of these training speed gains are thanks to software and systems innovations, but new processors from Graphcore and Intel subsidiary Habana Labs, among others, are contributing nicely too.

Previously there was no getting around the fact that it took neural networks a REALLY long time to train. This is what drove companies like Google to develop machine-learning accelerator chips in house. But the new MLPerf data shows that training standard neural networks has gotten a lot less taxing in very little time. Neural networks can now be trained far faster than you would expect, and this really is a beautiful thing when you understand the big-picture relevance of it.

It is prompting machine-learning experts to dream big, especially as the size of new neural networks continues to outpace computing power. MLPerf is based on 8 benchmark tests:

  • image recognition
  • medical-imaging segmentation
  • two versions of object detection
  • speech recognition
  • natural-language processing
  • recommendation
  • a form of gameplay called reinforcement learning

As of now, systems built using Nvidia A100 GPUs have been dominating the results, and Nvidia’s new GPU architecture, Hopper, was designed with architectural features aimed at speeding training even further.

As for Google’s offering, TPU v4 features impressive improvements in computations per watt over its predecessor, now able to compute 1.1 billion billion operations per second. At that scale, the system needed just over 10 seconds for the image-recognition and natural-language-processing training runs.

Another notable trend is the IPU approach where both chips in a 3D stack do computing. The belief here is that we could see machine-learning supercomputers capable of handling neural networks 1,000 times or more as large as today’s biggest language models.

Better Sequencing Lengths

This advance goes beyond the networks themselves, as there is also a need to increase the length of the sequence of data the network can take in, to promote reliable accuracy. In simpler terms, this relates to how many words a natural-language processor can be aware of at any one time, or how large an image a machine vision system can view. As of now those don’t really scale up well, but the aim is to double the sequence size and then quadruple the scale of the network’s attention layer. The focus remains on building an algorithm that gives the training process an awareness of this time penalty and a way to reduce it.
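The time penalty mentioned here comes largely from self-attention, whose work grows with the square of the sequence length. A rough back-of-the-envelope sketch (constant factors and the feed-forward layers are deliberately omitted) shows why doubling the sequence roughly quadruples the attention cost:

```python
def attention_ops(seq_len: int, d_model: int) -> int:
    """Rough count of multiply-adds in one self-attention layer:
    computing the n x n score matrix (~n^2 * d) plus applying it to the
    values (~n^2 * d). Constant factors and projections omitted."""
    return 2 * seq_len * seq_len * d_model

base = attention_ops(seq_len=1024, d_model=512)
doubled = attention_ops(seq_len=2048, d_model=512)
print(doubled // base)  # 4 -- doubling sequence length quadruples attention work
```

This quadratic growth is exactly why scaling up how much context a model can see is harder than scaling up the model itself, and why an algorithm that is aware of the penalty has to find somewhere to cut that cost.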